CLI node

A CLI node lets you run custom CLI commands as a step in your flow and pass structured output to the next nodes. The script runs in a secure, sandboxed environment. You must attach an AWS or GCP connection; the script runs with that connection's credentials, and the gcloud or aws CLI is pre-installed.

The CLI node is useful for simplifying your flows. For example, you can run the aws ec2 attach-volume command in a single CLI node. With regular AWS nodes, you would need to chain three nodes: DescribeInstances (validation), DescribeVolumes (state check), and AttachVolume.

Before you begin

Create an AWS or GCP connection before adding a CLI node. For an overview of connections, see Connections.

If you want to reuse the same connection across multiple action and CLI nodes, create a connection variable.

Configure the CLI node

Selecting a CLI node opens a side panel with a Parameters tab.

CLI node Parameters tab showing connection, provider, script button, and output referencing

  • Cloud connection provider: Select Google Cloud (GCP) or Amazon Web Services (AWS). A connection is required; the script runs with that connection's credentials.

    You can select a direct connection or a connection variable. Connection variables let you manage the connection in one place and reuse it across nodes. The Connection dropdown shows both connections and connection variables.

    (AWS only) Account and Region: When the connection is Amazon Web Services (AWS), select the account and optionally the region. If not set, Region defaults to all regions.

    CLI node with AWS connection showing Account and Region fields

    When you select a connection variable in a CLI node, CloudFlow resolves that variable to its underlying connection ID before running your script. Updating the connection variable changes which connection the CLI node uses without editing the node itself.

  • Select Add script or Edit script to open the script editor. The script runs in a sandbox. When you have selected a connection, gcloud or aws is available and authenticated for the chosen provider. Print a single JSON value to standard output (stdout): downstream nodes can then reference individual fields from that output, use it in conditions, or pass it to the next step. Producing valid JSON is what makes the CLI node's result usable in the rest of your flow.

    CLI script editor with $nodes example and completion

    You can reference output from any preceding node in the flow using $nodes["<Node name>"]. Add an optional path to reach nested values, for example $nodes["Manually start"][0].results[0].currentDate.

    # Reference previous node output in your script with $nodes["<node name>"]
    echo $nodes["Manually start"][0].results[0].currentDate
    Tip

    The script editor supports completion for $nodes. Use it to insert node names and paths.

  • Referencing the output: Defines how other nodes in your flow can reference the CLI node's output. For how to reference values from this node in later nodes (e.g. via the + button), see Parameter types.

    • Basic referencing: The output is referenced as a single field. Use this for simple return values.

    • Advanced referencing: Define a JSON schema so that specific fields in your output can be individually referenced by downstream nodes. See Output schema for how to define the schema.
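As a minimal sketch of a script suited to advanced referencing (the field names bucketCount and region are illustrative, not part of any predefined schema), print a JSON object whose keys match the schema you define, so each key becomes individually referenceable downstream:

```shell
# Print a JSON object whose keys (illustrative here) match your
# output schema, so downstream nodes can reference each field
python3 - <<'EOF'
import json
print(json.dumps({"bucketCount": 3, "region": "us-east-1"}))
EOF
```

With basic referencing, the same output would instead be exposed to downstream nodes as a single field.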

Execution limits

The script runs in a sandboxed environment with the following limits:

  • Maximum execution time: 10 seconds. If the script exceeds this, the node fails with a timeout error.

  • Rate limit: The sandbox may enforce rate limits. If exceeded, the node fails with a rate-limit error.

Examples

The following script lists Google Cloud Storage buckets when a GCP connection is configured. The command outputs JSON directly, so downstream nodes can reference the array or its fields (for example with advanced referencing and a schema).

gcloud storage buckets list --format=json --limit=5

With an AWS connection and account selected, you can list S3 buckets instead:

aws s3api list-buckets --output json
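If downstream nodes only need the bucket names, you can trim the response before printing it. This is a sketch that pipes the output through python3, which the filtering example below also assumes is available in the sandbox:

```shell
# Reduce list-buckets output to a JSON array of bucket names
# (assumes python3 is available in the sandbox)
aws s3api list-buckets --output json | python3 -c "
import json, sys
data = json.load(sys.stdin)
print(json.dumps([b['Name'] for b in data.get('Buckets', [])]))
"
```

The AWS CLI's built-in JMESPath filter achieves the same result without a pipe: aws s3api list-buckets --output json --query 'Buckets[].Name'.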

To reference and filter output from an upstream node (for example, an AWS Describe instances node), use $nodes and then process the JSON. The following script reads the first result from an upstream node named Describe instances, filters EC2 instances to those in running state, and outputs a JSON array for downstream nodes. Replace the node name with your own.

# Filter running EC2 instances from upstream Describe instances node
echo '$nodes["Describe instances"][0].results[0]' | python3 -c "
import json, sys
data = json.load(sys.stdin)
instances = [
    {\"InstanceId\": i[\"InstanceId\"], \"State\": i[\"State\"][\"Name\"]}
    for r in data.get(\"Reservations\", [])
    for i in r.get(\"Instances\", [])
    if i.get(\"State\", {}).get(\"Name\") == \"running\"
]
print(json.dumps(instances))
"
Note

The node name in $nodes["Describe instances"] must match the exact name of the upstream node in your flow (including spaces and capitalization).

See also