Triggers

When creating a flow, the first node is always a trigger node that starts the flow. The trigger node provides data to every downstream node in the flow. You can use this data in filters and other nodes.

What should start your flow

There are four types of triggers: Custom schedule, Manual trigger, Webhook trigger, and DoiT Cloud Intelligence event trigger.

One flow can have only one trigger node. Schedule, webhook, and event triggers run only after the flow is published; a manual trigger lets you run the flow on demand from the editor once it is published.

Custom schedule

A Custom schedule is useful when you need a flow to run at specific times or recurring intervals. For example, checking the utilization rates of Google Cloud Compute Engine instances at 9:00 AM every day.

  • Configuration options: Time zone, Start date, Time, Frequency.

  • Frequency defines how often a flow is triggered. Supported values: Daily, Weekly, Monthly, Custom, and Run once.

    • When choosing Daily, you can trigger flows to run at specific times throughout the day. The minimum interval between runs is one hour, allowing you to schedule up to 24 executions per 24-hour period.

    • Choose Custom if you need the flow to run at recurring intervals. Supported values: Hour, Day, Week, or Month. For example, every two hours or every two weeks.

    • Choose Run once to schedule a single, non-repeating execution that can be run at a custom time. This is useful when performing one-time actions, such as updating the OS kernel on a legacy server or applying a patch during a maintenance window.

Below is an example custom schedule.


Manual trigger

This type of trigger is used when a flow should run only on demand. It guarantees human oversight and offers flexibility, letting you control flow execution as needed.

To start a flow with a manual trigger, select Run in the top bar of the CloudFlow editor after the flow has been published.


Webhook trigger

You can use a webhook trigger to start a flow using data from an API. Consequently, any external system capable of making API calls can trigger a flow. For example, you can use the Run flow action in Zapier to trigger a flow. See also DoiT Integrations.

You must provide a sample of your JSON so that we can detect the structure of your data. Once we have detected the structure, the flow automatically creates the fields required for subsequent actions. With this, your data becomes part of the flow and can be used in any activity nodes, just like other data.
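Structure detection can be pictured as flattening the sample JSON into dot-notation field paths. The sketch below only illustrates the idea, not DoiT's actual implementation; the sample payload and its field names are invented for the example:

```python
import json

def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested JSON into dot-notation field paths mapped to value types."""
    fields = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            fields.update(flatten(value, path))
        else:
            fields[path] = type(value).__name__
    return fields

sample = json.loads('{"severity": "Critical", "details": {"resourceArn": "arn:aws:ec2:..."}}')
print(flatten(sample))
# {'severity': 'str', 'details.resourceArn': 'str'}
```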

Webhook trigger configuration

  • Webhook URL: The URL to give to the external service that will send data to trigger this flow.

  • Sample JSON: Provide a sample of your JSON so that we can detect the structure of your data. Select Detected structure to ensure that we have identified the fields correctly. If you think it is incorrect, amend your JSON sample and paste it again.

  • Generate API key: Create a DoiT API key from the trigger node so you do not have to open your Profile in DoiT Cloud Intelligence. If you already have an API key, the panel shows a link to manage your API key on the Profile page. This key is used only for DoiT APIs.
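As an illustration, any system that can make an HTTP POST can start the flow. Below is a minimal Python sketch using only the standard library; the URL and the Bearer authorization header are assumptions for the example (check the trigger panel for the actual webhook URL and the expected authentication):

```python
import json
import urllib.request

WEBHOOK_URL = "https://example.com/webhooks/my-flow"  # hypothetical; copy the real URL from the trigger node
API_KEY = "YOUR_DOIT_API_KEY"  # generated from the trigger node or your Profile page

# Payload matching the Sample JSON you provided when configuring the trigger.
payload = {"severity": "Critical", "details": {"resourceArn": "arn:aws:ec2:eu-west-1:111122223333:instance/i-abc"}}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # assumption: bearer-token auth
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```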

DoiT Cloud Intelligence event trigger

You can use a DoiT Cloud Intelligence event trigger to start a flow using an event generated by DoiT Cloud Intelligence. There are four categories of events:

  • DoiT Cloud Intelligence events: Platform events emitted by DoiT Cloud Intelligence that you can use to start flows.

  • Alerts: Fires when a cost or usage alert condition is satisfied or later resolves for the same period (and optional breakdown). Applies when you use cost and usage alerts.

  • AWS CloudTrail: Events from AWS CloudTrail, such as an EC2 instance being started or terminated, or an RDS instance created or modified. Applies to AWS only.

  • Cost anomaly detection: Fires when cost anomalies are created or updated. Available for any cloud where anomaly detection is configured (billing and/or real-time).

DoiT Cloud Intelligence events

DoiT Cloud Intelligence can emit platform events that trigger flows. When you select a DoiT Cloud Intelligence event, Event payload details shows the fields you can reference in your flow.

Alerts

You can trigger a flow when any of the following alert events occur. These use the same evaluation as alert notifications (for example, email and Zapier) when a condition is met or clears.

  • DoiT Alert Condition Satisfied: Fires when an alert's condition is met, for example, a cost threshold is reached for the evaluated time period. For alerts evaluated per dimension (such as per service), an event can be emitted for each dimension value that triggers. The payload includes the value, the evaluated period, and alert details (name, threshold, metric, and so on). When an alert checks items one by one (for example, each service or each project) rather than a single total, breakdown identifies the specific item that triggered the alert (for example, Compute Engine) and breakdownLabel identifies the item category (for example, Service).

  • DoiT Alert Condition Resolved: Fires when an alert is resolved, for example, cost drops below the threshold for the same period. The event aligns with DoiT Alert Condition Satisfied (same alert, period, and optional breakdown), so you can pair flows, for example, closing a Jira ticket or stopping follow-up actions you started when the condition was satisfied.

Cost anomaly detection

You can trigger a flow when any of the following cost anomaly events occur:

  • DoiT Cost Anomaly Acknowledged Changed: When an anomaly is acknowledged or the acknowledgement is edited.
  • DoiT Cost Anomaly Cost Changed: When the cost of an active anomaly changes.
  • DoiT Cost Anomaly Created: When a new cost anomaly is detected.
  • DoiT Cost Anomaly Severity Changed: When the severity of an anomaly changes.
  • DoiT Cost Anomaly Status Changed: When status changes (for example, Active to Inactive).
  • DoiT Cost Anomaly Top SKUs Changed: When the top contributing SKUs for an anomaly change.
Note

You must have real-time cost anomaly detection configured to trigger flows for cost anomalies.

Event trigger configuration

  • Select an event: From the list, select a DoiT Cloud Intelligence event for which you want to trigger a flow.

  • Event payload details: Displays the fields available within the selected event. These are the fields that you can reference in your flow.

  • (Optional) Event payload filter: Add conditions on payload fields so the flow runs only when an incoming event matches the conditions. You can use a filter to start a flow only for relevant cases, for example, a specific cost anomaly severity, a particular AWS resource ARN, or other values in the event data.

Event payload filter

You can filter which events trigger your flow by adding conditions based on the event payload. When one or more filter conditions are defined, the flow runs only when the incoming event matches the conditions. This lets you react to specific events, for example, triggering only when a cost anomaly has a particular severity or when an AWS CloudTrail event targets a specific resource.

When you add multiple conditions, they all must be true for the flow to continue. CloudFlow uses AND logic, meaning the flow won't move forward unless every single requirement is met.

Info

To check a single field for multiple values at once, use operators like In or Not In. This allows you to create a list of approved (or blocked) items. See Filter operators.

For example:

  • Your flow listens for DoiT Cost Anomaly Created, but you only want it to run when the anomaly is Critical, not for every new anomaly. You can add the following condition:

    • Field: severity

    • Operator: Equal to (==)

    • Value: Critical


  • For an AWS CloudTrail event, you might filter on a field such as details.resourceArn with contains and a substring of your production ARN (or use Equal to (==) with the full ARN) so the flow runs only when the API activity targets that resource.

  • To narrow AWS CloudTrail activity to a specific cloud footprint, add multiple conditions so every one must match. For example, run the flow only when the event is in your production AWS account and a chosen Region:

    • Field: userIdentity.accountId; Operator: Equal to (==); Value: your 12-digit AWS account ID (for example, 111122223333).

    • Field: awsRegion; Operator: Equal to (==); Value: the Region code you care about (for example, eu-west-1).

    Use the names shown in Event payload details for your selected event; nested fields often appear with dot notation (for example, userIdentity.accountId).
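The AND semantics and dot-notation lookup described above can be sketched as plain logic. This is an illustration of how such a filter behaves, not CloudFlow's implementation; the event payload and condition values are invented for the example:

```python
def get_field(payload: dict, path: str):
    """Resolve a dot-notation path like 'userIdentity.accountId'."""
    value = payload
    for part in path.split("."):
        value = value.get(part) if isinstance(value, dict) else None
    return value

# A few of the operators described above, expressed as predicates.
OPERATORS = {
    "==": lambda a, b: a == b,
    "contains": lambda a, b: isinstance(a, str) and b in a,
    "in": lambda a, b: a in b,
}

def matches(payload: dict, conditions: list) -> bool:
    """All conditions must hold (AND logic) for the flow to run."""
    return all(OPERATORS[op](get_field(payload, field), value)
               for field, op, value in conditions)

event = {"userIdentity": {"accountId": "111122223333"}, "awsRegion": "eu-west-1"}
conditions = [("userIdentity.accountId", "==", "111122223333"),
              ("awsRegion", "in", ["eu-west-1", "eu-west-2"])]
print(matches(event, conditions))  # True
```

Note how the In operator checks one field against a list of approved values, as the tip above suggests.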

To add a filter condition:

  1. In Event payload filter, select + Add condition.

  2. Configure the condition:

    • Field: Either select the field to display a list of available fields or manually type a field, for example, severity or details.resourceArn. The fields shown depend on the event you selected.

    • Operator: Choose a comparison operator. The Filter operators available depend on the field's data type.

    • Value: Enter the value to compare against. For timestamp fields, a date-time picker is shown. The value input is hidden for is null and is not null operators.


  3. Select Save.

    To add multiple criteria that must all be true (AND), select + Add condition for each condition you want to include. You can edit or delete a condition at any time.

Automate event responses

When your flow is started by a DoiT Cloud Intelligence event trigger (DoiT Cloud Intelligence events, alerts, AWS CloudTrail, or cost anomaly detection), you can automate responses—for example, post to Slack, create tickets, or run remediation steps. Reference fields from Event payload details in your downstream nodes:

  • Add a Notification node to send event details to your team. Use payload fields (for example, alert name and value for alert events, anomaly ID and cost for cost anomalies, or resource ARN for AWS CloudTrail events) in the message.

  • Use a Branch node to run different steps depending on fields in that event's payload, for example, notify only when a cost anomaly's severity is high, when an alert's metric value is above a number you choose, or when a breakdown or other dimension field matches a specific project or service.

Trigger node results

This section lists the fields available in the output of a trigger node that you can reference in downstream nodes. For Webhook and DoiT Cloud Intelligence event triggers, the node output also includes event-specific payload fields (for example, alert name and value, or cost anomaly ID and cost). Those fields are shown in Event payload details when you configure the trigger and can be referenced like any other trigger output.

Note

When configuring a node, you typically choose one upstream node whose output to reference. The Schedule trigger node is an exception: any node in the flow can reference its output, in addition to the chosen upstream node.

The trigger result includes date and time in both legacy and ISO 8601 formats. Use the iso8601 object when an API requires ISO 8601 timestamps. The existing currentDate and startTime fields remain for backward compatibility.
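If a downstream API needs ISO 8601 and you only have the legacy millisecond timestamp, the conversion is simple integer arithmetic. A sketch in Python, using an example timestamp value:

```python
from datetime import datetime, timezone

start_time_millis = 1770644776279  # startTime / startTimeMillis from the trigger output

# Split into whole seconds and the millisecond remainder to avoid float rounding.
seconds, millis = divmod(start_time_millis, 1000)
dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
iso = dt.strftime("%Y-%m-%dT%H:%M:%S") + f".{millis:03d}Z"
print(iso)  # 2026-02-09T13:46:16.279Z
```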

  • currentDate: Date of the run in YYYY-MM-DD format. Example: 2026-02-09
  • currentDay: Day of month (1–31). Example: 9
  • currentMonth: Month (1–12). Example: 2
  • currentYear: Four-digit year. Example: 2026
  • customerId: DoiT customer or organization identifier. Example: ABCDeFhijKLm1nopQrStUVwx
  • ownerEmail: Email of the flow owner. Example: [email protected]
  • startTime: Unix timestamp in milliseconds. Example: 1770644776279
  • startTimeMillis: Unix timestamp in milliseconds. Example: 1770644776279
  • startTimeSeconds: Unix timestamp in seconds. Example: 1770644776
  • userId: DoiT user identifier. Example: aBBCDe1FG2hIJkL34MNO
  • iso8601: Object with ISO 8601 date and time strings: currentDate (YYYY-MM-DDT00:00:00.000Z) and startTime (YYYY-MM-DDTHH:mm:ss.sssZ). Example: currentDate: "2026-02-09T00:00:00.000Z", startTime: "2026-02-09T15:21:01.760Z"
  • variables: Global and local flow variables. Example: globalVariables: {}, localVariables: {}
  • billingScopes: Lists of cloud project and account identifiers for the customer. Each key contains an array of objects with id and name: google-cloud (GCP project IDs), amazon-web-services (AWS account IDs), and microsoft-azure (Azure subscription IDs). Use these in downstream nodes to iterate over or filter by cloud scope. Example: google-cloud: [{id: "my-project", name: "My GCP Project"}], amazon-web-services: [...], microsoft-azure: [...]
  • billingScopesRowCount: Total number of cloud scopes across all three providers (GCP + AWS + Azure). Example: 3
Tip

Use billingScopes when you need to run actions per GCP project, AWS account, or Azure subscription—for example, in loops or filters that reference Variables from the trigger node.
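A sketch of iterating over billingScopes in a downstream step, assuming the trigger output shape shown above (the scope values here are invented for the example):

```python
trigger_output = {
    "billingScopes": {
        "google-cloud": [{"id": "my-project", "name": "My GCP Project"}],
        "amazon-web-services": [{"id": "111122223333", "name": "Prod"}],
        "microsoft-azure": [],
    }
}

# Flatten to (provider, scope id) pairs, e.g. to run a per-account action in a loop.
scopes = [(provider, scope["id"])
          for provider, items in trigger_output["billingScopes"].items()
          for scope in items]
print(scopes)
# [('google-cloud', 'my-project'), ('amazon-web-services', '111122223333')]
```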

Event trigger example

You can automate responses from a trigger, for example, post to Slack, create tickets, or run remediation steps.

For example, a flow triggered by a DoiT Cost Anomaly Created event can use a Branch node to filter for anomalies with a Critical severity. A Notification node then sends a message containing the anomaly ID and cost. Similarly, a flow triggered by DoiT Alert Condition Satisfied can notify a channel with the alert name, period, and value. These values come from the trigger node's output and can be mapped into any downstream node.
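The branch-then-notify pattern can be expressed as plain logic over the event payload. The field names below (anomalyId, severity, cost) are assumed for illustration; use the exact names shown in Event payload details for your selected event:

```python
def handle_anomaly(payload: dict):
    """Branch node logic: notify only for Critical anomalies, otherwise do nothing."""
    if payload.get("severity") != "Critical":
        return None  # branch not taken; no notification
    # Notification node logic: compose the message from payload fields.
    return (f"Critical cost anomaly {payload['anomalyId']}: "
            f"${payload['cost']:.2f} unexpected spend")

msg = handle_anomaly({"anomalyId": "anom-123", "severity": "Critical", "cost": 482.5})
print(msg)  # Critical cost anomaly anom-123: $482.50 unexpected spend
```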