Application Workflow Jobs

The following topics describe job types for application workflow platforms and services:

Job:Airflow

Airflow enables you to monitor and manage DAG workflows in Control-M. You can monitor DAG executions in the Airflow tab in the Monitoring domain. You can also view the specific details of each task, open the DAG in the Airflow web server user interface, and view XCom variables from the Airflow tab.

To deploy and run this job, ensure that you have completed the following:

  • Installed the Airflow plug-in with the provision image command or the provision agent::update command.

  • Created a connection profile, as described in ConnectionProfile:Airflow.

The following example shows how to define an Airflow job:

"AirflowJob":
{
   "Type": "Job:Airflow",
   "Host": "AgentHost",
   "ConnectionProfile": "AIRFLOW_CONNECTION_PROFILE",
   "DagId": "example_bash_operator",
   "ConfigurationJson": "{\"key1\":1, \"key2\":2, \"key3\":\"value3\"}",
   "OutputDetails": "FAILED_TASKS"
}

The following table describes the Airflow job parameters.

Parameter

Description

ConnectionProfile

Defines the ConnectionProfile:Airflow name that connects Control-M to Airflow.

DagId

Defines the unique identifier of a DAG.

ConfigurationJson

(Optional) Defines the JSON object, which describes additional configuration parameters (key:value pairs).

OutputDetails

Determines whether to include Airflow DAG task logs in the Control-M job output, as follows:

  • NO_TASKS: No task logs.

  • FAILED_TASKS: Include failing task logs.

  • ALL_TASKS: Include all task logs.

Default: FAILED_TASKS
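The ConfigurationJson value is a JSON object passed as an escaped string, so it is easiest to build programmatically. A minimal sketch in Python, reusing the values from the example above (the file layout and use of json.dumps are illustrative, not part of any Control-M tooling):

```python
import json

# Inner configuration object that the DAG receives as key:value pairs.
config = {"key1": 1, "key2": 2, "key3": "value3"}

# json.dumps(config) produces the escaped-string form that
# ConfigurationJson expects when the job file is written as JSON.
job = {
    "AirflowJob": {
        "Type": "Job:Airflow",
        "Host": "AgentHost",
        "ConnectionProfile": "AIRFLOW_CONNECTION_PROFILE",
        "DagId": "example_bash_operator",
        "ConfigurationJson": json.dumps(config),
        "OutputDetails": "FAILED_TASKS",
    }
}

print(json.dumps(job, indent=2))
```

Parsing the ConfigurationJson string back with json.loads recovers the original key:value pairs.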

Job:Apache NiFi

Apache NiFi is an open-source tool that automates data flow across systems in real time.

To deploy and run an Apache NiFi job, ensure that you have installed the Apache NiFi plug-in with the provision image command or the provision agent::update command.

The following example shows how to run an Apache NiFi job:

"Apache NiFi_Job_1":
{
   "Type": "Job:Apache NiFi",
   "ConnectionProfile": "NFI",
   "Processor Group ID": "3b315648-c11b-1ff4-c672-770c0ba49da3",
   "Processor ID": "3b316c50-c11b-1ff4-99f2-690aa6f35952v",
   "Action": "Run Processor",
   "Disconnected Node Ack": "unchecked",
   "Status Polling Frequency":"5",
   "Failure Tolerance":"0"
}

The following table describes the Apache NiFi job parameters.

Attribute

Description

ConnectionProfile

Defines the ConnectionProfile:Apache NiFi name that connects Control-M to Apache NiFi.

Processor Group ID

Defines the ID number of a specific processor group.

Processor ID

Defines the ID number of a specific processor.

Action

Determines one of the following actions to perform on Apache NiFi:

  • Run Processor

  • Stop Processor

  • Update Processor

  • Run Processor Once

Disconnected Node Ack

Determines whether mutable requests can proceed when a node is disconnected from the cluster.

Values:

  • checked

  • unchecked

Default: unchecked

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 5

Failure Tolerance

Determines the number of times the job tries to run before ending Not OK.

Default: 0
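Status Polling Frequency and Failure Tolerance recur in most of the job types below and follow the same pattern: poll the service every N seconds, and retry a failed run up to the tolerance count before the job ends Not OK. A rough illustration of that loop, where get_status is a stand-in callback and not part of any Control-M API:

```python
import time

def poll_until_done(get_status, polling_frequency=5, failure_tolerance=0):
    """Poll get_status() every polling_frequency seconds.

    Retries a FAILED run up to failure_tolerance times before
    giving up; returns the final status string.
    """
    attempts = 0
    while True:
        status = get_status()
        if status == "SUCCEEDED":
            return status
        if status == "FAILED":
            if attempts >= failure_tolerance:
                return status  # job ends Not OK
            attempts += 1
        time.sleep(polling_frequency)

# Example with a canned sequence of statuses: one failure is
# tolerated, so the run ends SUCCEEDED.
statuses = iter(["RUNNING", "FAILED", "RUNNING", "SUCCEEDED"])
result = poll_until_done(lambda: next(statuses),
                         polling_frequency=0, failure_tolerance=1)
```

With failure_tolerance=0 (the Apache NiFi default), the first FAILED status ends the job immediately.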

Job:Astronomer

Astronomer is a workload automation service based on Apache Airflow that enables you to create, schedule, and manage your workflows.

To deploy and run an Astronomer job, ensure that you have installed the Astronomer plug-in with the provision image command or the provision agent::update command.

The following example shows how to run an Astronomer job:

"Astronomer_Job_2":
{
   "Type": "Job:Astronomer",
   "ConnectionProfile": "ASTRONOMER",
   "DAG Name": "Example_dag_basic",
   "DAG Run ID": "",
   "Parameters":
   {
      "Variable1":"Value1",
      "Variable2":"Value2"
   },
   "Status Polling Frequency": "60",
   "Failure Tolerance": "3"
}

The following table describes the Astronomer job parameters.

Parameter

Description

Connection Profile

Defines the ConnectionProfile:Astronomer name that connects Control-M to Astronomer.

DAG Name

Defines the logical name of the Directed Acyclic Graph (DAG), as defined in the Airflow interface.

DAG Run ID

(Optional) Defines the specific DAG run (execution) ID in Airflow.

Parameters

Defines the JSON-based body parameters to pass when the DAG executes, in the following format:

"Variable1":"Value1","Variable2":"Value2"

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 60

Failure Tolerance

Determines the number of times the job tries to run before ending Not OK.

Default: 3

Job:AWS Step Functions

AWS Step Functions enables you to create visual workflows that can integrate other AWS services.

To deploy and run an AWS Step Functions job, ensure that you have installed the AWS Step Functions plug-in with the provision image command or the provision agent::update command.

The following example shows how to define an AWS Step Functions job:

"AWS Step Functions_Job_2":
{
   "Type": "Job:AWS Step Functions",
   "ConnectionProfile": "STEPFUNCTIONSCCP",
   "Execution Name": "Step Functions Exec",
   "State Machine ARN": "arn:aws:states:us-east-1:155535555553:stateMachine:MyStateMachine",
   "Parameters": "{\\\"parameter1\\\":\\\"value1\\\"}",
   "Show Execution Logs": "checked",
   "Status Polling Frequency":"10",
   "Failure Tolerance":"2"
}

The following table describes the AWS Step Functions job parameters.

Parameter

Description

ConnectionProfile

Defines the ConnectionProfile:AWS Step Functions name that connects Control-M to AWS Step Functions.

Execution Name

Defines the name of the Step Function execution. An execution runs a state machine, which is a workflow.

State Machine ARN

Determines the Step Function state machine to use.

A state machine is a workflow, and an Amazon Resource Name (ARN) is a standardized AWS resource address.

Parameters

Defines the parameters for the Step Function job, in JSON format, which enables you to control how the job runs.

If you are not adding parameters, type {}.

Show Execution Logs

Determines whether to add the job log to the output.

Values:

  • checked

  • unchecked

Default: unchecked

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 20

Failure Tolerance

Determines the number of times to check the job status before ending Not OK.

Default: 2
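The escaped quotes in the Parameters value above come from embedding one JSON document inside another as a string; each time the string passes through a JSON serializer, every quote gains another backslash. A short Python sketch of how the escaping accumulates (parameter names are illustrative):

```python
import json

params = {"parameter1": "value1"}

# First serialization: the inner JSON document as a plain string.
inner = json.dumps(params)

# Second serialization: how that string appears once the job
# definition itself is written out as JSON -- every quote is
# now preceded by a backslash.
as_field_literal = json.dumps(inner)

# An empty parameter set serializes to the two-character string {},
# the documented value when no parameters are passed.
empty = json.dumps({})
```

Each further level of nesting doubles the backslashes again, which is how forms like \\\" arise in deployed job files.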

Job:AWS:StepFunction

AWS Step Function enables you to create visual workflows that can integrate other AWS services.

To deploy and run this type of AWS job, ensure that you have completed the following:

  • Installed the Application Pack, which includes the Control-M for AWS plug-in.

  • Created a connection profile, as described in ConnectionProfile:AWS.

BMC recommends that you use the newer job type, Job:AWS Step Functions.

The following example shows how to define an AWS Step Function job:

"AwsStepFunctionJob":
{
   "Type": "Job:AWS:StepFunction",
   "ConnectionProfile": "AWS_CONNECTION",
   "StateMachine": "StateMachine1",
   "ExecutionName": "Execution1",
   "Input": "{\"myVar\": \"value1\", \"myOtherVar\": \"value2\"}",
   "AppendLog": true
}

The following table describes the AWS Step Function job parameters.

Parameter

Description

StateMachine

Defines the State Machine to use.

ExecutionName

Defines a name for the execution.

Input

Defines the Step Function input in JSON format.

Escape all special characters.

AppendLog

Determines whether to append the log to the output.

Values:

  • true

  • false

Default: true

Job:Azure Logic Apps

Azure Logic Apps enables you to design and automate cloud-based workflows and integrations.

To deploy and run an Azure Logic Apps job, ensure that you have installed the Azure Logic Apps plug-in with the provision image command or the provision agent::update command.

The following example shows how to define an Azure Logic Apps job:

"Azure Logic Apps Job":
{
   "Type": "Job:Azure Logic Apps",
   "ConnectionProfile": "AZURE_LOGIC_APPS",
   "Workflow": "tb-logic",
   "Parameters": "{\"bodyinfo\":\"hello from CM\",\"param2\":\"value2\"}",
   "Get Logs": "unchecked",
   "Status Polling Frequency": "20",
   "Failure Tolerance": "2"
}

The following table describes the Azure Logic Apps job parameters.

Parameter

Description

ConnectionProfile

Defines the ConnectionProfile:Azure Logic Apps name that connects Control-M to Azure.

Workflow

Determines which of the Consumption logic app workflows to run from your predefined set of workflows.

This job does not run Standard logic app workflows.

Parameters

Defines parameters that enable you to control the presentation of data.

Rules:

  • Characters: 2–4,000

  • Format: JSON

  • If you are not adding parameters, type {}.

Get Logs

Determines whether to display the job output when the job ends.

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 20

Failure Tolerance

Determines the number of times to check the job status before ending NOT OK.

Default: 2
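The rules for the Parameters field above (2–4,000 characters, valid JSON, {} when empty) can be checked before the job is deployed. A hedged pre-check sketch, not part of any Control-M tooling:

```python
import json

def validate_parameters(text: str) -> bool:
    """Return True if text satisfies the documented Parameters rules:
    2-4,000 characters long and parseable as JSON ({} when empty)."""
    if not 2 <= len(text) <= 4000:
        return False
    try:
        json.loads(text)
    except ValueError:
        return False
    return True
```

For example, validate_parameters("{}") and the Parameters string from the job example above both pass, while a truncated document such as "{" does not.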

Job:Azure:LogicApps

Azure Logic Apps enables you to design and automate cloud-based workflows and integrations.

To deploy and run this type of Azure job, ensure that you have completed the following:

  • Installed the Application Pack, which includes the Control-M for Azure plug-in.

  • Created a connection profile, as described in ConnectionProfile:Azure.

BMC recommends that you use the newer job type, Job:Azure Logic Apps.

The following example shows how to define an Azure Logic Apps job:

"AzureLogicAppJob":
{
   "Type": "Job:Azure:LogicApps",
   "ConnectionProfile": "AZURE_CONNECTION",
   "LogicAppName": "MyLogicApp",
   "RequestBody": "{\\n  \"name\": \"BMC\"\\n}",
   "AppendLog": false
}

The following table describes the Azure Logic Apps job parameters.

Parameter

Description

LogicAppName

Defines the name of the Azure Logic App.

RequestBody

(Optional) Defines the JSON for the expected payload.

AppendLog

(Optional) Determines whether to append the log to the output.

Values:

  • true

  • false

Default: true

Job:GCP Composer

Google Cloud (GCP) Composer is a managed workflow orchestration service built on Apache Airflow that enables you to automate workflow tasks.

To deploy and run a GCP Composer job, ensure that you have installed the GCP Composer plug-in with the provision image command or the provision agent::update command.

The following example shows how to define a GCP Composer job:

"GCP Composer_Job_2":
{
   "Type": "Job:GCP Composer",
   "ConnectionProfile": "GCPCOMPOSER",
   "DAG Name": "Example_dag_basic",
   "DAG Run ID": "",
   "Parameters":
   {
      "Variable1":"Value1",
      "Variable2":"Value2"
   },
   "Status Polling Frequency": "60",
   "Failure Tolerance": "3"
}

The following table describes GCP Composer job attributes.

Attribute

Description

ConnectionProfile

Defines the ConnectionProfile:GCP Composer name that connects Control-M to GCP Composer.

DAG Name

Defines the DAG logical name as defined in the GCP interface.

DAG Run ID

(Optional) Defines the specific DAG run (execution) ID in GCP Composer.

Parameters

Defines the JSON-based body parameters to pass when the DAG executes, in the following format:

"Variable1":"Value1","Variable2":"Value2"

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 60

Failure Tolerance

Determines the number of times to check the job status before ending Not OK.

Default: 2

Job:GCP Workflows

GCP Workflows enables you to design and automate cloud-based workflows and integrations.

To deploy and run a GCP Workflows job, ensure that you have installed the GCP Workflows plug-in with the provision image command or the provision agent::update command.

The following example shows how to define a GCP Workflows job:

"GCP Workflows_Job":
{
   "Type": "Job:GCP Workflows",
   "ConnectionProfile": "GCPWF",
   "Project ID": "12345id",
   "Location": "us-central1",
   "Workflow Name": "workflow-1",
   "Parameters JSON Input": "{\"var1\":\"value1\",\"var2\":\"value2\"}",
   "Execution Label": "{\"labelName\":\"name\"}",
   "Show Workflow Results": "checked",
   "Status Polling Frequency": "20",
   "Failure Tolerance": "3"
}

The following table describes the GCP Workflows job parameters.

Parameter

Description

Connection Profile

Defines the ConnectionProfile:GCP Workflows name that connects Control-M to GCP Workflows.

Project ID

Defines the GCP project ID where the batch job executes.

A project is a set of configuration settings that define the resources your GCP Workflows jobs use and how they interact with GCP.

Location

Defines the region where the GCP Workflow job executes.

For example: us-central1

Workflow Name

Determines the predefined GCP Workflow that executes.

Parameters JSON Input

Defines the JSON-based body parameters that are passed to the function, in the following format:

{\"var1\":\"value1\",\"var2\":\"value2\"}

Execution Label

Defines a job execution label, which enables you to group similar executions in the GCP Workflows log.

For example: {"labelName": "name"}

Show Workflow Results

Determines whether the GCP Workflow results appear in the job output.

Values:

  • checked

  • unchecked

Default: unchecked

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 20

Failure Tolerance

Determines the number of times to check the job status before ending Not OK.

Default: 3
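Putting the attributes above together, the two JSON-valued fields (Parameters JSON Input and Execution Label) can be generated the same way as the other escaped strings in this topic. A sketch with the illustrative values from the table, assembled in Python:

```python
import json

# Inner JSON documents for the two string-encoded fields.
arguments = {"var1": "value1", "var2": "value2"}
label = {"labelName": "name"}

# json.dumps turns each inner document into the escaped-string
# form the job definition expects.
job = {
    "GCP Workflows_Job": {
        "Type": "Job:GCP Workflows",
        "ConnectionProfile": "GCPWF",
        "Project ID": "12345id",
        "Location": "us-central1",
        "Workflow Name": "workflow-1",
        "Parameters JSON Input": json.dumps(arguments),
        "Execution Label": json.dumps(label),
        "Show Workflow Results": "checked",
        "Status Polling Frequency": "20",
        "Failure Tolerance": "3",
    }
}
```

Round-tripping either field through json.loads recovers the original key:value pairs, which is a quick sanity check before deploying the definition.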