
Triggers

When creating a flow, the first node is always a trigger node that starts the flow. The trigger node provides data to every downstream node in the flow. You can use this data in filters and other nodes.

What should start your flow

There are four types of triggers: Custom schedule, Manual trigger, Webhook trigger, and DoiT Cloud Intelligence event trigger.

One flow can have only one trigger node. Schedule, webhook, and event triggers run only after the flow is published; a manual trigger lets you run the flow on demand from the editor once it is published.

Custom schedule

A Custom schedule is useful when you need a flow to run at specific times or recurring intervals. For example, checking the utilization rates of Google Cloud Compute Engine instances at 9:00 AM every day.

  • Configuration options: Time zone, Start date, Time, Frequency.

  • Frequency defines how often a flow is triggered. Supported values: Daily, Weekly, Monthly, Custom, and Run once.

    • When choosing Daily, you can trigger flows to run at specific times throughout the day. The minimum interval between runs is one hour, allowing you to schedule up to 24 executions per 24-hour period.

    • Choose Custom if you need the flow to run at recurring intervals. Supported values: Hour, Day, Week, or Month. For example, every two hours or every two weeks.

    • Choose Run once to schedule a single, non-repeating execution that can be run at a custom time. This is useful when performing one-time actions, such as updating the OS kernel on a legacy server or applying a patch during a maintenance window.
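To make the Custom frequency concrete, here is a minimal sketch (not CloudFlow's scheduler, just an illustration) of how an "every N hours" rule expands into concrete run times:

```python
from datetime import datetime, timedelta

def next_runs(start: datetime, every_hours: int, count: int) -> list[datetime]:
    """Expand an 'every N hours' custom schedule into its next run times."""
    return [start + timedelta(hours=every_hours * i) for i in range(count)]

# A custom schedule: every 2 hours, starting at 9:00 AM on 2026-02-09
runs = next_runs(datetime(2026, 2, 9, 9, 0), every_hours=2, count=3)
# runs -> 09:00, 11:00, 13:00 on 2026-02-09
```

The same expansion applies to Day, Week, or Month intervals with the corresponding `timedelta` step.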

Below is an example custom schedule.


Manual trigger

This type of trigger is used when a flow should run only on demand. It guarantees human oversight and offers flexibility, letting you control exactly when the flow executes.

To start a flow with a manual trigger, select Run in the top bar of the CloudFlow editor after the flow has been published.


Webhook trigger

You can use a webhook trigger to start a flow using data from an API. Consequently, any external system capable of making API calls can trigger a flow. For example, you can use the Run flow action in Zapier to trigger a flow. See also DoiT Integrations.

You must provide a sample of your JSON so that we can detect the structure of your data. Once we have detected the structure, the flow automatically creates the fields required for subsequent actions. Your data then becomes part of the flow and can be used in any activity node, just like other data.
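To illustrate what "detecting the structure" means, here is a hypothetical sketch (not DoiT's actual implementation) that walks a sample JSON payload and records each field path with its JSON type:

```python
import json

def detect_structure(sample_json: str) -> dict[str, str]:
    """Map each field path in a sample JSON payload to its JSON type."""
    def walk(value, path, fields):
        if isinstance(value, dict):
            for key, child in value.items():
                walk(child, f"{path}.{key}" if path else key, fields)
        elif isinstance(value, list):
            if value:  # use the first element as the representative item type
                walk(value[0], f"{path}[]", fields)
        else:
            kind = {bool: "boolean", int: "number", float: "number",
                    str: "string", type(None): "null"}[type(value)]
            fields[path] = kind

    fields: dict[str, str] = {}
    walk(json.loads(sample_json), "", fields)
    return fields

sample = '{"anomaly": {"id": "abc123", "cost": 42.5}, "skus": ["sku-1"]}'
print(detect_structure(sample))
# {'anomaly.id': 'string', 'anomaly.cost': 'number', 'skus[]': 'string'}
```

Each detected path becomes a field you can reference in downstream nodes, which is why the sample should cover every field your real payloads will contain.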

Webhook trigger configuration

  • Webhook URL: Give this URL to the service that will send data to trigger this flow.

  • Sample JSON: Provide a sample of your JSON so that we can detect the structure of your data. Select Detected structure to ensure that we have identified the fields correctly. If you think it is incorrect, amend your JSON sample and paste it again.

  • Generate API key: Create a DoiT API key from the trigger node so you do not have to open your Profile in DoiT Cloud Intelligence. If you already have an API key, the panel shows a link to manage your API key on the Profile page. This key is used only for DoiT APIs.
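As a sketch of how an external system might call the webhook, here is a minimal Python example. The URL, payload fields, and the `Authorization` header scheme are placeholders; copy the real webhook URL from the trigger node and check the trigger panel for the exact authentication details.

```python
import json
import urllib.request

# Placeholder values: copy the real webhook URL from the trigger node and
# use the DoiT API key generated there. The header name is illustrative.
WEBHOOK_URL = "https://example.com/cloudflow/webhook/placeholder"
API_KEY = "YOUR_DOIT_API_KEY"

payload = {"anomaly": {"id": "abc123", "cost": 42.5}}
request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually send the request
```

The payload must match the structure of the sample JSON you provided, otherwise downstream nodes that reference the detected fields will not find their data.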

DoiT Cloud Intelligence event trigger

You can use a DoiT Cloud Intelligence event trigger to start a flow using an event generated by DoiT Cloud Intelligence. There are three categories of events:

  • DoiT Cloud Intelligence events: Platform events generated by DoiT Cloud Intelligence that you can use to start flows.

  • AWS CloudTrail (AWS only): Events from AWS CloudTrail, such as an EC2 instance being started or terminated, or an RDS instance being created or modified.

  • Cost anomaly detection (any cloud with billing and/or real-time anomaly detection configured): Fires when cost anomalies are created or updated.

DoiT Cloud Intelligence events

DoiT Cloud Intelligence can emit platform events that trigger flows. When you select a DoiT Cloud Intelligence event, Event payload details shows the fields you can reference in your flow.

Cost anomaly detection

You can trigger a flow when any of the following cost anomaly events occur:

  • DoiT Cost Anomaly Acknowledged Changed: When an anomaly is acknowledged or the acknowledgement is edited.
  • DoiT Cost Anomaly Cost Changed: When the cost of an active anomaly changes.
  • DoiT Cost Anomaly Created: When a new cost anomaly is detected.
  • DoiT Cost Anomaly Severity Changed: When the severity of an anomaly changes.
  • DoiT Cost Anomaly Status Changed: When status changes (for example, Active to Inactive).
  • DoiT Cost Anomaly Top SKUs Changed: When the top contributing SKUs for an anomaly change.
Note

You must have real-time cost anomaly detection configured to trigger flows for cost anomalies.

Event trigger configuration

  • Select an event: From the list, select a DoiT Cloud Intelligence event for which you want to trigger a flow.

  • Event payload details: Displays the fields available in the selected event's payload. These are the fields you can reference in your flow.

Automate event responses

When your flow is started by a DoiT Cloud Intelligence event trigger (DoiT Cloud Intelligence events, AWS CloudTrail, or cost anomaly detection), you can automate responses—for example, post to Slack, create tickets, or run remediation steps. Reference fields from Event payload details in your downstream nodes:

  • Add a Notification node to send event details to your team. Use payload fields (for example, anomaly ID and cost for cost anomalies, or resource ARN for AWS CloudTrail events) in the message.

  • Use a Branch node to run different steps depending on the payload—for example, notify only when a cost anomaly's severity is high, or run an action only for certain AWS event types.

Trigger node results

This section lists the fields available in the output of a trigger node that you can reference in downstream nodes. For Webhook and DoiT Cloud Intelligence event triggers, the node output also includes event-specific payload fields (for example, anomaly ID or cost for cost anomaly events). Those fields are shown in Event payload details when you configure the trigger and can be referenced like any other trigger output.

Note

When configuring a node, you typically choose one upstream node whose output to reference. The Schedule trigger node is an exception: any node in the flow can reference its output, alongside the output of that node's chosen upstream node.

The trigger result includes date and time in both legacy and ISO 8601 formats. Use the iso8601 object when an API requires ISO 8601 timestamps. The existing currentDate and startTime fields remain for backward compatibility.
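If you need the ISO 8601 form outside the flow but only have the legacy millisecond fields, the conversion is straightforward; here is a Python sketch (the function name is illustrative):

```python
from datetime import datetime, timezone

def millis_to_iso8601(ms: int) -> str:
    """Convert a Unix timestamp in milliseconds to an ISO 8601 UTC string."""
    seconds, ms_part = divmod(ms, 1000)  # integer split avoids float rounding
    dt = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S") + f".{ms_part:03d}Z"

print(millis_to_iso8601(1770644776279))  # 2026-02-09T13:46:16.279Z
```

Inside a flow itself, prefer referencing the iso8601 object directly instead of converting.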

  • currentDate: Date of the run in YYYY-MM-DD format. Example: 2026-02-09
  • currentDay: Day of month (1–31). Example: 9
  • currentMonth: Month (1–12). Example: 2
  • currentYear: Four-digit year. Example: 2026
  • customerId: DoiT customer or organization identifier. Example: ABCDeFhijKLm1nopQrStUVwx
  • ownerEmail: Email of the flow owner. Example: [email protected]
  • startTime: Unix timestamp in milliseconds. Example: 1770644776279
  • startTimeMillis: Unix timestamp in milliseconds. Example: 1770644776279
  • startTimeSeconds: Unix timestamp in seconds. Example: 1770644776
  • userId: DoiT user identifier. Example: aBBCDe1FG2hIJkL34MNO
  • iso8601: Object with ISO 8601 date and time strings: currentDate (YYYY-MM-DDT00:00:00.000Z) and startTime (YYYY-MM-DDTHH:mm:ss.sssZ). Example: currentDate: "2026-02-09T00:00:00.000Z", startTime: "2026-02-09T15:21:01.760Z"
  • variables: Global and local flow variables. Example: globalVariables: {}, localVariables: {}
  • billingScopes: Lists of cloud project and account identifiers for the customer. Each key contains an array of objects with id and name: google-cloud (GCP project IDs), amazon-web-services (AWS account IDs), and microsoft-azure (Azure subscription IDs). Use these in downstream nodes to iterate over or filter by cloud scope. Example: google-cloud: [{id: "my-project", name: "My GCP Project"}], amazon-web-services: [...], microsoft-azure: [...]
  • billingScopesRowCount: Total number of cloud scopes across all three providers (GCP + AWS + Azure). Example: 3
Tip

Use billingScopes when you need to run actions per GCP project, AWS account, or Azure subscription—for example, in loops or filters that reference Variables from the trigger node.
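As a sketch of how a downstream step might fan out over billingScopes, here is an illustrative Python snippet (the field names follow the trigger output above; the flattening logic is an assumption, not CloudFlow's implementation):

```python
# Example trigger output, shaped like the billingScopes field described above
trigger_output = {
    "billingScopes": {
        "google-cloud": [{"id": "my-project", "name": "My GCP Project"}],
        "amazon-web-services": [{"id": "123456789012", "name": "Prod"}],
        "microsoft-azure": [],
    }
}

# Flatten to (provider, id) pairs, as a loop node might iterate over them
scopes = [
    (provider, scope["id"])
    for provider, items in trigger_output["billingScopes"].items()
    for scope in items
]
row_count = len(scopes)  # corresponds to billingScopesRowCount
print(scopes, row_count)
```

A filter that references only one provider would simply index `trigger_output["billingScopes"]["google-cloud"]` instead of flattening.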

Event trigger example


For example, a flow triggered by a DoiT Cost Anomaly Created event can use a Branch node to filter for anomalies with a Critical severity. A Notification node then sends a message containing the anomaly ID and cost. These values come from the trigger node's output and can be mapped into any downstream node.