Application Workflow Jobs
The following topics describe job types for application workflow platforms and services:
Job:Airflow
Airflow enables you to monitor and manage DAG workflows in Control-M. You can monitor DAG executions in the Airflow tab in the Monitoring domain. You can also view the specific details of each task, open the DAG in the Airflow web server user interface, and view XCom variables from the Airflow tab.
To deploy and run this job, ensure that you have completed the following:
- Installed Control-M for Airflow, as described in Control-M for Airflow Installation.
- Created a connection profile, as described in ConnectionProfile:Airflow.
The following example shows how to define an Airflow job:
"AirflowJob":
{
"Type": "Job:Airflow",
"Host": "AgentHost",
"ConnectionProfile": "AIRFLOW_CONNECTION_PROFILE",
"DagId": "example_bash_operator",
"ConfigurationJson": "\{\"key1\":1, \"key2\":2, \"key3\":\"value3\"\}",
"OutputDetails": "FAILED_TASKS"
}
The following table describes the Airflow job parameters.
| Parameter | Description |
| --- | --- |
| ConnectionProfile | Defines the ConnectionProfile:Airflow name that connects Control-M to Airflow. |
| DagId | Defines the unique identifier of a DAG. |
| ConfigurationJson | (Optional) Defines a JSON object that describes additional configuration parameters (key:value pairs). |
| OutputDetails | Determines whether to include Airflow DAG task logs in the Control-M job output. Default: FAILED_TASKS |
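Each example in this topic is a job fragment. In the Control-M Automation API JSON format, a job is nested in a folder inside a complete definitions file, which you then run or deploy with the ctm run or ctm deploy CLI commands. The following is a minimal sketch; the folder name and Control-M/Server name are hypothetical:

```json
{
    "AirflowDemoFolder": {
        "Type": "Folder",
        "ControlmServer": "IN01",
        "AirflowJob": {
            "Type": "Job:Airflow",
            "Host": "AgentHost",
            "ConnectionProfile": "AIRFLOW_CONNECTION_PROFILE",
            "DagId": "example_bash_operator",
            "OutputDetails": "FAILED_TASKS"
        }
    }
}
```

For example, if you save this file as airflow.json, ctm run airflow.json runs the job once, and ctm deploy airflow.json writes the folder to the Control-M/Server.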
Job:Apache Airflow
Apache Airflow enables you to create, schedule, and monitor complex data processing and analytics pipelines. It provides an environment to define, manage, and execute workflows as Directed Acyclic Graphs (DAGs) to control task dependencies and execution order.
To deploy and run an Apache Airflow job, ensure that you have installed the Apache Airflow plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for Apache Airflow.
The following example shows how to run an Apache Airflow job:
"Apache Airflow_Job_1":
{
"Type": "Job:Apache Airflow",
"ConnectionProfile": "AIRFLOW",
"Action": "Run DAG",
"DAG Name": " Example_DAG",
"DAG Run ID": " RunID-1",
"Parameters": "{"variable" : "Value"}",
"Status Polling Frequency":"60",
"Failure Tolerance":"3"
}
The following table describes the Apache Airflow job parameters.
| Parameter | Description |
| --- | --- |
| Connection Profile | Defines the ConnectionProfile:Apache Airflow name that connects Control-M to Apache Airflow. |
| Action | Determines whether to run a new DAG or rerun a DAG. Valid values: Run DAG, Rerun DAG |
| DAG Name | Defines the logical name of the Directed Acyclic Graph (DAG). |
| DAG Run ID | (Optional) Defines the specific DAG run (execution) ID in Airflow, which tracks and manages individual workflow executions. If you do not provide a DAG Run ID, the system generates a random Run ID. |
| Parameters | Defines the parameters for the Apache Airflow job, in JSON format, which enable you to control how the job executes. Use backslashes to escape quotes, as in the example above. If you are not adding parameters, type {}. |
| Only Failed Tasks | Determines whether a rerun executes only the failed tasks in the DAG or all of its tasks, as shown in the sketch after this table. Valid values: checked, unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 60 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 2 |
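The Run DAG example above does not use the rerun-related attributes from this table. The following sketch shows a plausible rerun job that retries only the failed tasks of an earlier execution; the Run ID value and the checked setting are illustrative assumptions, not values taken from this guide:

```json
"Apache Airflow_Job_2":
{
    "Type": "Job:Apache Airflow",
    "ConnectionProfile": "AIRFLOW",
    "Action": "Rerun DAG",
    "DAG Name": "Example_DAG",
    "DAG Run ID": "RunID-1",
    "Only Failed Tasks": "checked",
    "Parameters": "{}",
    "Status Polling Frequency": "60",
    "Failure Tolerance": "3"
}
```

Presumably the DAG Run ID here identifies the existing run to repeat rather than naming a new one.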
Job:Apache NiFi
Apache NiFi is an open-source tool that automates data flow across systems in real time.
To deploy and run an Apache NiFi job, ensure that you have installed the Apache NiFi plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for Apache NiFi.
The following example shows how to run an Apache NiFi job:
"Apache NiFi_Job_1":
{
"Type": "Job:Apache NiFi",
"ConnectionProfile": "NFI",
"Processor Group ID": "3b315648-c11b-1ff4-c672-770c0ba49da3",
"Processor ID": "3b316c50-c11b-1ff4-99f2-690aa6f35952v",
"Action": "Run Processor",
"Disconnected Node Ack": "unchecked",
"Status Polling Frequency":"5",
"Failure Tolerance":"0"
}
The following table describes the Apache NiFi job parameters.
| Parameter | Description |
| --- | --- |
| ConnectionProfile | Defines the ConnectionProfile:Apache NiFi name that connects Control-M to Apache NiFi. |
| Processor Group ID | Defines the ID of a specific processor group. |
| Processor ID | Defines the ID of a specific processor. |
| Action | Determines the action to perform on Apache NiFi, such as Run Processor. |
| Disconnected Node Ack | Determines whether to disconnect the node to allow mutable requests to proceed. Valid values: checked, unchecked. Default: unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 5 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 0 |
Job:Astronomer
Astronomer is a workload automation service based on Apache Airflow that enables you to create, schedule, and manage your workflows.
To deploy and run an Astronomer job, ensure that you have installed the Astronomer plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for Astronomer.
The following example shows how to run an Astronomer job:
"Astronomer_Job_2":
{
"Type": "Job:Astronomer",
"ConnectionProfile": "ASTRONOMER",
"Action": "Run DAG",
"DAG Name": "Example_dag_basic",
"DAG Run ID": "",
"Parameters":
{
"Variable1":"Value1",
"Variable2":"Value2",
}
"Status Polling Frequency": "60",
"Failure Tolerance": "3"
}
The following table describes the Astronomer job parameters.
| Parameter | Description |
| --- | --- |
| Connection Profile | Defines the ConnectionProfile:Astronomer name that connects Control-M to Astronomer. |
| Action | Determines whether to run a new DAG or rerun a DAG. Valid values: Run DAG, Rerun DAG |
| DAG Name | Defines the logical name of the Directed Acyclic Graph (DAG). The DAG Name is defined in the Airflow interface. |
| DAG Run ID | (Optional) Defines the specific DAG run (execution) ID in Airflow. |
| Parameters | Defines the JSON-based body parameters to pass when the DAG executes, in the following format: {"Variable1": "Value1", "Variable2": "Value2"} |
| Only Failed Tasks | Determines whether a rerun executes only the failed tasks in the DAG or all of its tasks. Valid values: checked, unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 60 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 3 |
Job:AWS MWAA
AWS Managed Workflows for Apache Airflow (MWAA) is an orchestration service built on Apache Airflow, designed to create, schedule, and monitor data pipelines and workflows.
To deploy and run an AWS MWAA job, ensure that you have installed the AWS MWAA plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for Amazon MWAA.
The following example shows how to define an AWS MWAA job:
"AWS MWAA_Job_2":
{
"Type": "Job:AWS MWAA",
"ConnectionProfile": "MWAA",
"Action": "Run DAG",
"AWS MWAA Environment Name": "MyAirflowEnvironment",
"DAG Name": "example_dag_basic",
"DAG Run ID": "",
"Parameters": "{}",
"Status Polling Frequency":"60",
"Failure Tolerance":"3"
}
The following table describes the AWS MWAA job parameters.
| Parameter | Description |
| --- | --- |
| Connection Profile | Defines the ConnectionProfile:AWS MWAA name that connects Control-M to AWS MWAA. |
| Action | Determines whether to run a new DAG or rerun a DAG. Valid values: Run DAG, Rerun DAG |
| AWS MWAA Environment Name | Defines the logical name of the MWAA environment. |
| DAG Name | Defines the logical name of the Directed Acyclic Graph (DAG). |
| DAG Run ID | (Optional) Defines the unique identifier for a specific DAG run in an orchestration system, which helps track and manage individual workflow executions. If you do not provide a DAG Run ID, the system generates a random Run ID. |
| Parameters | Defines the parameters for the Amazon MWAA job, in JSON format, which enable you to control how the job executes. Use backslashes to escape quotes, as shown in the sketch after this table. If you are not adding parameters, type {}. |
| Only Failed Tasks | Determines whether a rerun executes only the failed tasks in the DAG or all of its tasks. Valid values: checked, unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 60 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 3 |
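The backslash-escaping rule for Parameters is easiest to see with a populated value. The following sketch repeats the example above with a non-empty Parameters string; the keys (retries, owner) are illustrative assumptions, not required MWAA names:

```json
"AWS MWAA_Job_3":
{
    "Type": "Job:AWS MWAA",
    "ConnectionProfile": "MWAA",
    "Action": "Run DAG",
    "AWS MWAA Environment Name": "MyAirflowEnvironment",
    "DAG Name": "example_dag_basic",
    "DAG Run ID": "",
    "Parameters": "{\"retries\": \"2\", \"owner\": \"controlm\"}",
    "Status Polling Frequency": "60",
    "Failure Tolerance": "3"
}
```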
Job:AWS Step Functions
AWS Step Functions enables you to create visual workflows that can integrate other AWS services.
To deploy and run an AWS Step Functions job, ensure that you have installed the AWS Step Functions plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for AWS Step Functions.
The following example shows how to define an AWS Step Functions job:
"AWS Step Functions_Job_2":
{
"Type": "Job:AWS Step Functions",
"ConnectionProfile": "STEPFUNCTIONSCCP",
"Execution Name": "Step Functions Exec",
"State Machine ARN": "arn:aws:states:us-east-1:155535555553:stateMachine:MyStateMachine",
"Parameters": "{\\\"parameter1\\\":\\\"value1\\\"}",
"Show Execution Logs": "checked",
"Status Polling Frequency":"10",
"Failure Tolerance":"2"
}
The following table describes the AWS Step Functions job parameters.
| Parameter | Description |
| --- | --- |
| ConnectionProfile | Defines the ConnectionProfile:AWS Step Functions name that connects Control-M to AWS Step Functions. |
| Execution Name | Defines the name of the Step Functions execution. An execution runs a state machine, which is a workflow. |
| State Machine ARN | Determines the Step Functions state machine to use. A state machine is a workflow, and an Amazon Resource Name (ARN) is a standardized AWS resource address. |
| Parameters | Defines the parameters for the Step Functions job, in JSON format, which enable you to control how the job runs (see the note after this table). If you are not adding parameters, type {}. |
| Show Execution Logs | Determines whether to add the job log to the output. Valid values: checked, unchecked. Default: unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 20 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 2 |
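Because Parameters is a JSON string inside the job definition, its inner quotes are escaped with backslashes. Presumably the state machine execution receives the unescaped document as its input; for the example above, that input would be:

```json
{"parameter1": "value1"}
```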
Job:AWS:StepFunction (Deprecated)
This job type is deprecated. Use the job type Job:AWS Step Functions. For migration information, see Control-M for AWS Plug-in Migration Tool.
AWS Step Function enables you to create visual workflows that can integrate other AWS services.
To deploy and run this type of AWS job, ensure that you have completed the following:
- Installed the Application Pack, which includes the Control-M for AWS plug-in.
- Created a connection profile, as described in ConnectionProfile:AWS (Deprecated).
The following example shows how to define an AWS Step Function job:
"AwsStepFunctionJob":
{
"Type": "Job:AWS:StepFunction",
"ConnectionProfile": "AWS_CONNECTION",
"StateMachine": "StateMachine1",
"ExecutionName": "Execution1",
"Input": ""{\"myVar\" :\"value1\" \\n\"myOtherVar\" : \"value2\"}" ",
"AppendLog": true
}
The following table describes the AWS Step Function job parameters.
| Parameter | Description |
| --- | --- |
| StateMachine | Defines the state machine to use. |
| ExecutionName | Defines a name for the execution. |
| Input | Defines the Step Function input, in JSON format. Escape all special characters. |
| AppendLog | Determines whether to append the log to the output. Valid values: true, false. Default: true |
Job:Azure Logic Apps
Azure Logic Apps enables you to design and automate cloud-based workflows and integrations.
To deploy and run an Azure Logic Apps job, ensure that you have installed the Azure Logic Apps plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for Azure Logic Apps.
The following example shows how to define an Azure Logic Apps job:
"Azure Logic Apps Job":
{
"Type": "Job:Azure Logic Apps",
"ConnectionProfile": "AZURE_LOGIC_APPS",
"Workflow": "tb-logic",
"Parameters": "{\"bodyinfo\":\"hello from CM\",\"param2\":\"value2\"}",
"Get Logs": "unchecked",
"Status Polling Frequency": "20",
"Failure Tolerance": "2"
}
The following table describes the Azure Logic Apps job parameters.
| Parameter | Description |
| --- | --- |
| ConnectionProfile | Defines the ConnectionProfile:Azure Logic Apps name that connects Control-M to Azure. |
| Workflow | Determines which of the Consumption logic app workflows executes from your predefined set of workflows. This job does not execute Standard logic app workflows. |
| Parameters | Defines parameters, in JSON format, that enable you to control the presentation of data. Use backslashes to escape quotes, as in the example above. If you are not adding parameters, type {} (see the sketch after this table). |
| Get Logs | Determines whether to display the job output when the job ends. Valid values: checked, unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 20 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 2 |
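When the workflow takes no input, the Parameters value reduces to an empty JSON object. The following minimal sketch reuses the workflow from the example above and also turns on Get Logs; it is illustrative, not taken from this guide:

```json
"Azure Logic Apps Job_2":
{
    "Type": "Job:Azure Logic Apps",
    "ConnectionProfile": "AZURE_LOGIC_APPS",
    "Workflow": "tb-logic",
    "Parameters": "{}",
    "Get Logs": "checked",
    "Status Polling Frequency": "20",
    "Failure Tolerance": "2"
}
```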
Job:Azure:LogicApps
Azure Logic Apps enables you to design and automate cloud-based workflows and integrations.
To deploy and run this type of Azure job, ensure that you have completed the following:
- Installed the Application Pack, which includes the Control-M for Azure plug-in.
- Created a connection profile, as described in ConnectionProfile:Azure.
BMC recommends that you use the newer job type, Job:Azure Logic Apps.
The following example shows how to define an Azure Logic Apps job:
"AzureLogicAppJob":
{
"Type": "Job:Azure:LogicApps",
"ConnectionProfile": "AZURE_CONNECTION",
"LogicAppName": "MyLogicApp",
"RequestBody": "{\\n \"name\": \"BMC\"\\n}",
"AppendLog": false
}
The following table describes the Azure Logic Apps job parameters.
| Parameter | Description |
| --- | --- |
| LogicAppName | Defines the name of the Azure Logic App. |
| RequestBody | (Optional) Defines the JSON for the expected payload. |
| AppendLog | (Optional) Determines whether to append the log to the output. Valid values: true, false. Default: true |
Job:GCP Composer
Google Cloud (GCP) Composer is a managed workflow orchestration service built on Apache Airflow that enables you to automate workflow tasks.
To deploy and run a GCP Composer job, ensure that you have installed the GCP Composer plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for GCP Composer.
The following example shows how to define a GCP Composer job:
"GCP Composer_Job_2":
{
"Type": "Job:GCP Composer",
"ConnectionProfile": "GCPCOMPOSER",
"Action": "Run DAG",
"Dag Name": "Example_dag_basic",
"DAG Run ID": "",
"Parameters":
{
"Variable1":"Value1",
"Variable2":"Value2"
}
"Status Polling Frequency": "60",
"Failure Tolerance": "3"
}
The following table describes the GCP Composer job parameters.
| Parameter | Description |
| --- | --- |
| ConnectionProfile | Defines the ConnectionProfile:GCP Composer name that connects Control-M to GCP Composer. |
| Action | Determines whether to run a new DAG or rerun a DAG. Valid values: Run DAG, Rerun DAG |
| DAG Name | Defines the logical name of the Directed Acyclic Graph (DAG). The DAG Name is defined in the GCP interface. |
| DAG Run ID | (Optional) Defines the specific DAG run (execution) ID in GCP Composer. |
| Parameters | Defines the JSON-based body parameters to pass when the DAG executes, in the following format: {"Variable1": "Value1", "Variable2": "Value2"}. For no parameters, type {}. |
| Only Failed Tasks | Determines whether a rerun executes only the failed tasks in the DAG or all of its tasks. Valid values: checked, unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 60 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 2 |
Job:GCP Workflows
GCP Workflows enables you to design and automate cloud-based workflows and integrations.
To deploy and run a GCP Workflows job, ensure that you have installed the GCP Workflows plug-in with the provision image command or the provision agent::update command.
For more information about this plug-in, see Control-M for GCP Workflows.
The following example shows how to define a GCP Workflows job:
"GCP Workflows_Job":
{
"Type": "Job:GCP Workflows",
"ConnectionProfile": "GCPWF",
"Project ID": "12345id",
"Location": "us-central1",
"Workflow Name": "workflow-1",
"Parameters JSON Input": "argument" : {\"var1\":\"value1\",\"var2\":\"value2\"},
"ExecutionLabel" : "{"labelName":"name"}",
"Show Workflow Results": "checked",
"Status Polling Frequency": "20",
"Failure Tolerance": "3"
}
The following table describes the GCP Workflows job parameters.
| Parameter | Description |
| --- | --- |
| Connection Profile | Defines the ConnectionProfile:GCP Workflows name that connects Control-M to GCP Workflows. |
| Project ID | Defines the identifier of the GCP project where the job runs. A project is a set of configuration settings that define the resources that the jobs use and how they interact with GCP. |
| Location | Determines the region where the job runs, such as us-central1. |
| Workflow Name | Determines the predefined GCP Workflow that executes. |
| Parameters JSON Input | Defines the JSON-based body parameters that are passed to the function, in the following format: {"argument": {"var1": "value1", "var2": "value2"}} |
| Execution Label | Defines a job execution label, which enables you to group similar executions in the GCP Workflows log, in the following format: {"labelName": "name"} |
| Show Workflow Results | Determines whether the GCP Workflow results appear in the job output. Valid values: checked, unchecked. Default: unchecked |
| Status Polling Frequency | Determines the number of seconds to wait before checking the job status. Default: 20 |
| Failure Tolerance | Determines the number of times to check the job status before the job ends Not OK. Default: 3 |