Data Processing Jobs

The following topics describe the attributes of jobs that work with data processing platforms and services:

AWS Data Pipeline Job

AWS Data Pipeline is a cloud-based ETL service that enables you to automate the transfer, processing, and storage of your data.

The following table describes the AWS Data Pipeline job attributes.

Attribute

Action

Description

Connection Profile

N/A

Determines the authorization credentials that are used to connect Control-M to AWS Data Pipeline.

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

Action

N/A

Determines one of the following AWS Data Pipeline actions:

  • Trigger Pipeline: Runs an existing AWS Data Pipeline.

  • Create Pipeline: Creates a new AWS Data Pipeline.

Pipeline Name

Create Pipeline

Defines the name of the new AWS Data Pipeline.

Pipeline Unique ID

Create Pipeline

Defines the unique ID (idempotency key) that guarantees the pipeline is created only once. After successful execution, this ID cannot be used again.

Valid characters: Any alphanumeric characters.

Parameters

Create Pipeline

Defines the parameter objects, which define the variables for your AWS Data Pipeline, in JSON format.

For more information about the available parameter objects, see the descriptions of the PutPipelineDefinition and GetPipelineDefinition actions in the AWS Data Pipeline API Reference.

Copy
"parameterObjects": [ {
   "attributes": [
      {
         "key":"description",
         "stringValue":"S3outputfolder"
      }
    ],
   "id": "myS3OutputLoc"
}
],
"parameterValues": [ {
   "id":"myShellCmd",
   "stringValue":"grep -rc \"GET\" ${IN_DIR}/* > ${OUT_DIR}/out.txt"
}
],
"pipelineObjects": [ {
   "fields": [
      {
         "key":"input",
         "refValue":"S3InputLocation"
      },
      {
         "key":"stage",
         "stringValue":"true"
      }
   ],
   "id":"ShellCommandActivityObj",
   "name":"ShellCommandActivityObj"
}
]

Trigger Created Pipeline

Create Pipeline

Determines whether to run, or trigger, the newly created AWS Data Pipeline.

Pipeline ID

Trigger Pipeline

Determines which pipeline to run, or trigger.

Status Polling Frequency

All actions

Determines the number of seconds to wait before checking the status of the Data Pipeline job.

Default: 20

Failure Tolerance

All actions

Determines the number of times the job tries to run before ending Not OK.

Default: 2

AWS EMR Job

AWS EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data.

The following table describes AWS EMR job attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to AWS EMR.

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

Cluster ID

Defines the ID of the AWS EMR cluster to connect to the Notebook.

In the EMR API, this field is called the Execution Engine ID.

Notebook ID

Determines the ID of the Notebook that executes the script.

In the EMR API, this field is called the Editor ID.

Relative Path

Defines the full path and name of the script file in the Notebook.

Notebook Execution Name

Defines the job execution name.

Service Role

Defines the service role to connect to the Notebook.

Use Advanced JSON Format

Enables you to provide Notebook execution information through JSON code.

The JSON Body attribute then replaces the values of the Cluster ID, Notebook ID, Relative Path, Notebook Execution Name, and Service Role attributes.

JSON Body

Defines Notebook execution settings in JSON format. For a description of the syntax of this JSON, see the description of StartNotebookExecution in the Amazon EMR API Reference.

Copy
{
   "EditorId": "e-DJJ0HFJKU71I9DWX8GJAOH734",
   "RelativePath": "ShowWaitingAndRunningClustersTest2.ipynb",
   "NotebookExecutionName": "Tests",
   "ExecutionEngine": {
      "Id": "j-AR2G6DPQSGUB"
   },
   "ServiceRole": "EMR_Notebooks_DefaultRole"
}

Azure Databricks Job

Azure Databricks is a cloud-based data analytics platform that enables you to process large workloads of data.

The following table describes the Azure Databricks job type attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Azure Databricks.

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

  • Variable Name: %%AZURE-ACCOUNT

Databricks Job ID

Determines the ID of the Azure Databricks job that is created in a Databricks workspace.

Parameters

Defines task parameters to override when the job runs, according to the Databricks convention. Your list of parameters must begin with the name of the parameter type.

Copy
"notebook_params":{"param1":"val1", "param2":"val2"}
"jar_params": ["param1", "param2"]

For more information about the parameter types, review the properties of RunParameters in the OpenAPI specification provided in the Azure Databricks documentation.

When there are no parameters, specify the following value:

Copy
"params": {}

Idempotency Token

(Optional) Defines a token to use to rerun job runs that timed out in Databricks.

Values:

  • Control-M-Idem_%%ORDERID: With this token, upon rerun, Control-M invokes the monitoring of the existing job run in Databricks. Default.

  • Any other value: Replaces the Control-M idempotency token. When you rerun a job using a different token, Databricks creates a new job run with a new unique run ID.

Status Polling Frequency

(Optional) Determines the number of seconds to wait before checking the status of the job.

Default: 30

Azure HDInsight Job

Azure HDInsight enables you to run an Apache Spark batch job for big data analytics.

The following table describes the Azure HDInsight job attributes:

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Azure HDInsight.

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

Parameters

Determines which parameters are passed to the Apache Spark Application during job execution, in JSON format (name:value pairs).

This JSON must include the file and className elements.
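
For illustration, the following is a minimal sketch of such a JSON body, assuming a Spark application packaged as a JAR (the storage path, class name, and args values are hypothetical):

Copy
{
    "file": "wasbs://container@account.blob.core.windows.net/jars/spark-app.jar",
    "className": "com.example.SparkApp",
    "args": ["--input", "/data/in", "--output", "/data/out"]
}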

Status Polling Interval

Determines the number of seconds to wait before checking the status of the Apache Spark batch job.

Default: 10 seconds

Bring job logs to output

Determines whether logs from Apache Spark appear in the job output.

Azure Synapse Job

Azure Synapse Analytics enables you to perform data integration and big data analytics.

The following table describes the Azure Synapse job attributes:

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Azure Synapse.

Pipeline Name

Defines the name of a pipeline that you defined in your Azure Synapse workspace.

Parameters

Defines pipeline parameters to override when the job runs, defined in JSON format as pairs of name and value, as follows:

Copy
 {"param1":"value1", "param2":"value2"}

For no parameters, specify {}.

Status Polling Interval

(Optional) Defines the number of seconds to wait before checking the status of the job.

Default: 20 seconds

Databricks Job

The Databricks job enables you to integrate jobs created in the Databricks environment with your existing Control-M workflows. The following table describes Databricks job type attributes:

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Databricks.

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

Databricks Job ID

Determines the ID of the Databricks job that is created in a Databricks workspace.

Parameters

Defines task parameters to override when the job runs, according to the Databricks convention. Your list of parameters must begin with the name of the parameter type.

Copy
"notebook_params":{"param1":"val1", "param2":"val2"}
"jar_params": ["param1", "param2"]

For more information about the parameter types, review the properties of RunParameters in the OpenAPI specification provided through the Azure Databricks documentation.

When there are no parameters, specify the following value:

Copy
"params": {}

Idempotency Token

(Optional) Defines a token to use to rerun job runs that timed out in Databricks.

Values:

  • Control-M-Idem_%%ORDERID: With this token, upon rerun, Control-M invokes the monitoring of the existing job run in Databricks. Default.

  • Any other value: Replaces the Control-M idempotency token. When you rerun a job using a different token, Databricks creates a new job run with a new unique run ID.

Status Polling Frequency

(Optional) Determines the number of seconds to wait before checking the status of the job.

Default: 30

GCP Dataflow Job

The following table describes parameters for a Google Cloud Platform (GCP) Dataflow job, which performs cloud-based data processing for batch and real-time data streaming applications.

Parameter

Description

Connection profile

Determines the authorization credentials that are used to connect Control-M to GCP Dataflow.

Project ID

Defines the project ID for your Google Cloud project.

Location

Defines the Google Compute Engine region where the job is created.

Template Type

Defines one of the following types of GCP Dataflow templates:

  • Classic Template: Developers run the pipeline and create a template. The Apache Beam SDK stages files in Cloud Storage, creates a template file (similar to a job request), and saves the template file in Cloud Storage.

  • Flex Template: Developers package the pipeline into a Docker image and then use the Google Cloud CLI to build and save the Flex Template spec file in Cloud Storage.

Template Location (gs://)

Defines the path for temporary files. This must be a valid Google Cloud Storage URL that begins with gs://.

The pipeline option tempLocation is used as the default value, if it has been set.

Parameters (JSON Format)

Defines input parameters to be passed on to job execution, in JSON format (name:value pairs).

This JSON must include the jobName and parameters elements, as in the following example:

Copy
{
    "jobName": "wordcount",
    "parameters": {
        "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
        "output": "gs://controlmbucket/counts"
    }
}

Verification Poll Interval (in seconds)

(Optional) Defines the number of seconds to wait before checking the status of the job.

Default: 10

Log Level

Determines one of the following levels of detail to retrieve from the GCP logs in the case of job failure:

  • TRACE

  • DEBUG

  • INFO

  • WARN

  • ERROR

GCP Dataproc Job

The following table describes parameters for a Google Cloud Platform (GCP) Dataproc job, which performs cloud-based big data processing and machine learning.

Parameter

Description

Connection profile

Determines the authorization credentials that are used to connect Control-M to GCP Dataproc.

Project ID

Defines the project ID for your Google Cloud project.

Account Region

Defines the Google Compute Engine region where the job is created.

Dataproc task type

Defines one of the following Dataproc task types to execute:

  • Workflow Template: Reusable workflow configuration that defines a graph of jobs with information on where to run those jobs.

  • Job: A single Dataproc job.

Workflow Template

(For a Workflow Template task type) Defines the ID of a Workflow Template.

Parameters (JSON Format)

(For a Job task type) Defines input parameters to be passed on to job execution, in JSON format.

You retrieve this JSON content from the GCP Dataproc UI, using the EQUIVALENT REST option in job settings.
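
For illustration, the following is a minimal sketch of the kind of JSON that the EQUIVALENT REST option produces for a Spark job (the project, cluster, job ID, and jar values are hypothetical):

Copy
{
    "reference": {
        "projectId": "my-project",
        "jobId": "wordcount-123"
    },
    "placement": {
        "clusterName": "my-cluster"
    },
    "sparkJob": {
        "mainClass": "org.apache.spark.examples.SparkPi",
        "jarFileUris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        "args": ["1000"]
    }
}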

Verification Poll Interval (in seconds)

(Optional) Defines the number of seconds to wait before checking the status of the job.

Default: 20

Tolerance

Defines the number of call retries during the status check phase.

Default: 2 times

Hadoop Job

The Hadoop job connects to the Hadoop framework, which enables the distributed processing of large data sets across clusters of commodity servers. From Control-M, you can expand your enterprise business workflows to include tasks that run in your Big Data Hadoop cluster, using the various Hadoop-supported tools, including Pig, Hive, HDFS File Watcher, Map Reduce, and Sqoop.

The following table describes the Hadoop job attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Hadoop.

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

Variable Name: %%HDP-ACCOUNT

Execution Type

Determines the execution type for Hadoop job execution, such as DistCp, Distributed Shell, HDFS Commands, HDFS File Watcher, Hive, Impala, Java-Map-Reduce, Oozie, Pig, Spark, Sqoop, Streaming, or Tajo, as described in the attribute tables below.

Variable Name: %%HDP-EXEC_TYPE

Pre Commands

Defines the Pre commands performed before job execution (not for HDFS Commands jobs and Oozie Extractor jobs), and the argument for each command.

Fail the job if the command fails

Determines whether the entire job fails if any of the Pre commands fail (not for HDFS Commands jobs and Oozie Extractor jobs).

Post Commands

Defines the Post commands performed after job execution (not for HDFS Commands jobs and Oozie Extractor jobs), and the argument for each command.

Fail the job if the command fails

Determines whether the entire job fails if any of the Post commands fail (not for HDFS Commands jobs and Oozie Extractor jobs).

DistCp Job Attributes

The following table describes the DistCp job attributes.

Attribute

Description

Target Path

Defines the absolute destination path.

Variable Name: %%HDP-DISTCP_TARGET_PATH

Source Path

Defines the source paths.

Variable Name: %%HDP-DISTCP_SOURCE_PATH-Nxxx_ARG

Command Line Options

Defines the sets of attributes and values that are added to the command line.

Variable Names:

  • Name: %%HDP-DISTCP_OPTION-Nxxx-NAME

  • Value: %%HDP-DISTCP_OPTION-Nxxx-VAL
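
For example (a hypothetical entry), the standard DistCp option -m, which limits the number of simultaneous copies, could be entered with a Name of -m and a Value of 10.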

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output. The output is a tab in the job properties pane in the Monitoring domain that shows the output of a job, which indicates whether the job ended OK.

Distributed Shell Job Attributes

The following table describes the Distributed Shell job attributes.

Attribute

Description

Shell Type

Determines what the Distributed Shell job runs, as follows:

  • Command: Runs a shell command entry as defined by Command.

  • Script File: Runs a script file as defined by Command, Script Full Path, and Shell Script Arguments.

Variable Name: %%HDP-SHELL_TYPE

Command

Defines the shell command entry to run for the job execution.

Variable Name: %%HDP-SHELL_COMMAND

Script Full Path

Defines the full path to the script file that is executed. The script file is located in HDFS.

Variable Name: %%HDP-SHELL_SCRIPT_FULL_PATH

Shell Script Arguments

Defines the shell script arguments.

Variable Name: %%HDP-SHELL-Nxxx-ARG

More Options

Opens more attributes.

Files/Archives

Defines the full path to the file or archive to upload as a dependency to the HDFS working directory.

Variable Names:

  • Type: %%HDP-SHELL_FILE_DEP-Nxxx-TYPE

  • Path: %%HDP-SHELL_FILE_DEP-Nxxx-PATH

Options

Defines additional options (Name and Value) to set when executing the job.

Variable Names:

  • Name: %%HDP-SHELL_OPTION-Nxxx-NAME

  • Value: %%HDP-SHELL_OPTION-Nxxx-VAL

Environment Variables

Defines the environment variables for the shell script/command.

Variable Name: %%HDP-SHELL_ENV_VARIABLE-Nxxx-ARG

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

HDFS Commands Job Attributes

The following table describes the HDFS Commands job attributes.

Attribute

Description

Command

Defines the command that is performed during job execution.

Variable Name: %%HDP-HDFS_CMD_ACTION-Nxxx-CMD

Arguments

Defines the argument used by the command.

Variable Name: %%HDP-HDFS_CMD_ACTION-Nxxx-ARG

HDFS File Watcher Job Attributes

The following table describes the HDFS File Watcher job attributes.

Attribute

Description

File name full path

Defines the full path of the file being watched.

Variable Name: %%HDP-HDFS_FILE_PATH

Min detected size

Determines the minimum file size in bytes to meet the criteria and finish the job as OK. If the file arrives, but the size is not met, the job continues to watch the file.

Variable Name: %%HDP-MIN_DETECTED_SIZE

Max time to wait

Determines the maximum number of minutes to wait for the file to meet the watching criteria. If the criteria are not met (the file did not arrive, or the minimum size was not reached), the job fails after this maximum number of minutes.

Variable Name: %%HDP-MAX_WAIT_TIME

File Name Variable

Defines the variable name that is used in succeeding jobs.

Variable Name: %%HDP-FW_DETECTED_FILE_NAME_VAR

Impala Job Attributes

The following table describes the Impala job attributes.

Attribute

Description

Source

Determines the source type to run the queries, as follows:

  • Query File: Runs a query file as defined by Query File Full Path.

  • Open Query: Runs an open query command as defined by Query.

Variable Name: %%HDP-IMPALA_QUERY_SOURCE

Query File Full Path

Defines the location of the file used to run the queries.

Variable Name: %%HDP-IMPALA_QUERY_FILE_PATH

Query

Defines the query command used to run the queries.

Variable Name: %%HDP-IMPALA_OPEN_QUERY

Command Line Options

Defines the sets of attributes and values that are added to the command line.

Variable Name: %%HDP-IMPALA_CMD_OPTION-Nxxx-ARG

Hive Job Attributes

The following table describes the Hive job attributes.

Attribute

Description

Full path to Hive script

Defines the full path to the Hive script on the Hadoop host.

Variable Name: %%HDP-HIVE_SCRIPT_NAME

Script Parameters

Defines the list of parameters for the script.

Variable Names:

  • Name: %%HDP-HIVE_SCRIPT_PARAM-Nxxx-NAME

  • Value: %%HDP-HIVE_SCRIPT_PARAM-Nxxx-VAL

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

Java-Map-Reduce Job Attributes

The following table describes the Java Map-Reduce job attributes.

Attribute

Description

Full path to Jar

Defines the full path to the jar containing the Map Reduce Java program on the Hadoop host.

Variable Name: %%HDP-JAVA_JAR_NAME

Main Class

Defines the class that is included in the jar containing a main function and the map reduce implementation.

Variable Name: %%HDP-JAVA_MAIN_CLASS

Arguments

Defines the argument used by the command.

Variable Name: %%HDP-JAVA_Nxxx_ARG

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

Oozie Job Attributes

The following table describes the Oozie job attributes.

Attribute

Description

Job Properties File

Defines the job properties file path.

Variable Name: %%HDP-OOZIE_JOB_PROPERTIES_FILE

Job Properties (Add/Overwrite)

Defines the Oozie job properties.

A set of properties comprises the following:

  • Key: Defines a key name associated with each property.

    Variable Name: %%HDP-OOZIE_PROPERTY-Nxxx-KEY

  • Value: Defines a value associated with each property.

    Variable Name: %%HDP-OOZIE_PROPERTY-Nxxx-VAL

You can add new properties or override property values defined in the Job Properties File.

Rerun from point of failure

Determines whether to rerun an Oozie job from the point of its failure.

Pig Job Attributes

The following table describes the Pig job attributes.

Attribute

Description

Full Path to Pig Program

Defines the full path to the Pig program on the Hadoop host.

Variable Name: %%HDP-PIG_PROG_NAME

Pig Program Parameters

Defines the list of program parameters.

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

Properties

Defines a list of properties (Name and Value) that are set when the job executes.

These properties override the Hadoop defaults.

Archives

Defines the location of the Hadoop archives.

Files

Defines the location of the Hadoop files.

Spark Job Attributes

The following table describes the Spark job attributes.

Attribute

Description

Program Type

Determines the Spark program type, as follows:

  • Python Script: As defined by Full Path to Script.

  • Java / Scala Application: As defined by Application Jar File and Full Path to Script.

Variable Name: %%HDP-SPARK_PROG_TYPE

Full Path to Script

Defines the full path to the Python script to execute.

Variable Name: %%HDP-SPARK_FULL_PATH_TO_PYTHON_SCRIPT

Application Jar File

Defines the path to the jar that includes your application and all its dependencies.

Variable Name: %%HDP-SPARK_APP_JAR_FULL_PATH

Main Class to Run

Defines the main class of the application.

Variable Name: %%HDP-SPARK_MAIN_CLASS_TO_RUN

Application Arguments

Defines the arguments that are added at the end of the Spark command line, either after the main class (for a Java / Scala application) or after the script (for a Python script).

Variable Name: %%HDP-SPARK_Nxxx_ARG

Command Line Options

Defines the sets of attributes and values that are added to the command line.

Variable Names:

  • Name: %%HDP-SPARK_OPTION-Nxxx-NAME

  • Value: %%HDP-SPARK_OPTION-Nxxx-VAL

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

Sqoop Job Attributes

The following table describes the Sqoop job attributes.

Attribute

Description

Command Editor

Defines any valid Sqoop command necessary for job execution. Sqoop can be used for job execution only if it is defined in the Sqoop connection attributes.

Variable Name: %%HDP-SQOOP_COMMAND

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

Properties

Defines a list of properties (Name and Value) that are set when the job executes.

These properties override the Hadoop defaults.

Archives

Defines the location of the Hadoop archives.

Files

Defines the location of the Hadoop files.

Streaming Job Attributes

The following table describes the Streaming job attributes.

Attribute

Description

Input Path

Defines the input file for the Mapper step.

Variable Name: %%HDP-INPUT_PATH

Output Path

Defines the HDFS output path for the Reducer step.

Variable Name: %%HDP-OUTPUT_PATH

Mapper Command

Defines the command that runs as a mapper.

Variable Name: %%HDP-MAPPER_COMMAND

Reducer Command

Defines the command that runs as a reducer.

Variable Name: %%HDP-REDUCER_COMMAND

Streaming Options

Defines the sets of attributes (Name and Value) that are added to the end of the Streaming command line.

Variable Names:

  • Name: %%HDP-STREAMING_PARAM-Nxxx-NAME

  • Value: %%HDP-STREAMING_PARAM-Nxxx-VAL

Generic Options

Defines the sets of attributes (Name and Value) that are added to the Streaming command line.

Variable Names:

  • Name: %%HDP-GENERIC_PARAM-Nxxx-NAME

  • Value: %%HDP-GENERIC_PARAM-Nxxx-VAL

Append Yarn aggregated logs to output

Determines whether to add Yarn aggregated logs to the job output.

Tajo Job Attributes

The following table describes the Tajo job attributes.

Attribute

Description

Command Source

Determines the source of the Tajo command, as follows:

  • Input File: Runs the Tajo command from an input file as defined by the Full File Path.

    Variable Name: %%HDP-TAJO_INPUT_FILE

  • Open Query: Runs an open query as the Tajo command, as defined by Open Query.

    Variable Name: %%HDP-TAJO_OPEN_QUERY

Full File Path

Defines the file path of the input file that runs the Tajo command.

Open Query

Defines the query.

Variable Name: %%HDP-TAJO_OPEN_QUERY

Snowflake Job

Snowflake is a cloud computing platform that you can use for data storage, processing, and analysis.

The following table describes the Snowflake job type attributes.

Attribute

Action

Description

Connection Profile

N/A

Determines one of the following types of authorization credentials, which are used to connect Control-M to Snowflake:

  • Snowflake

  • Snowflake IdP

Rules:

  • Characters: 1−30

  • Case sensitive: Yes

  • Invalid characters: Spaces

Database

N/A

Determines the database that the job uses.

Schema

N/A

Determines the schema that the job uses.

A schema is an organizational model that describes the layout and definition of fields and tables, and their relationships to each other, in a database.

Action

N/A

Determines one of the following Snowflake actions to perform:

  • SQL Statement: Runs any number of Snowflake-supported SQL statements, such as queries, calling or creating procedures, database maintenance tasks, and creating and editing tables.

  • Copy from Query: Copies a queried database and schema into an existing or new file in cloud storage.

  • Copy from Table: Copies from an existing table.

  • Create Table and Query: Creates a table, populated by a query, in the specified database and schema.

  • Create Snowpipe: Creates a Snowpipe and saves it to a file in cloud storage.

  • Start or Pause Snowpipe: Starts or pauses an existing Snowpipe.

  • Stored Procedure: Calls an existing procedure and its arguments.

  • Snowpipe Load Status: Monitors the status of a Snowpipe for a set period of time.

Snowflake SQL Statement

SQL Statement

Determines one or more Snowflake-supported SQL commands.

Rule: Must be written in a single line, with strings separated by one space only.
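
For example, a hypothetical entry with two statements must be entered as one line: CREATE TABLE t1 (c1 INT); INSERT INTO t1 VALUES (1);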

Statement Timeout

All Actions

Determines the maximum number of seconds to run the job in Snowflake.

Show More Options

All Actions

Determines whether the following job-defining attributes are displayed:

  • Parameters

  • Role

  • Bindings

  • Warehouse

Parameters

All Actions

Defines Snowflake-provided parameters that let you control how data is presented.

Copy
{
  "param1":"value1",
  "param2":"value2"
}

Role

All Actions

Determines the Snowflake role used for this Snowflake job.

A role is an entity that can be assigned privileges on secure objects. You can be assigned one or more roles from a limited selection.

Bindings

All Actions

Defines the values to bind to the variables used in the Snowflake job, in JSON format.

For more information on bindings, see the Snowflake documentation.

The following JSON script defines two binding variables:

Copy
"1": { 
      "type": "FIXED"
      "value": "123" 
    } 
"2": { 
      "type": "TEXT"
      "value": "String" 
    }

Warehouse

All Actions

Determines the warehouse used in the Snowflake job.

A warehouse is a cluster of virtual machines that processes a Snowflake job.

Show Output

All Actions

Determines whether to show a full JSON response in the log output.

Status Polling Frequency

All Actions

Determines the number of seconds to wait before checking the status of the job.

Default: 20

Query to Location

Copy from Query

Defines the cloud storage location.

Query Input

Copy from Query

Defines the query used for copying the data.

Storage Integration

  • Copy from Query

  • Copy from Table

Defines the storage integration object.

Overwrite

  • Copy from Query

  • Copy from Table

Determines whether to overwrite an existing file in the cloud storage, as follows:

  • Yes

  • No

File Format

  • Copy from Query

  • Copy from Table

  • Create Snowpipe

Determines one of the following file formats for the saved file:

  • JSON

  • CSV

Copy Destination

Copy from Table

Defines where the JSON or CSV file is saved.

You can save to Amazon Web Services, Google Cloud Platform, or Microsoft Azure.

For example: s3://<bucket name>/

From Table

Copy from Table

Defines the name of the copied table.

Create Table Name

Create Table and Query

Defines the name of the new or existing table where the data is queried.

Query

Create Table and Query

Defines the query used for the copied data.

Snowpipe Name

  • Create Snowpipe

  • Start or Pause Snowpipe

  • Snowpipe Load Status

Defines the name of the Snowpipe.

A Snowpipe loads data from files when they are ready, or staged.

Copy into Table

Create Snowpipe

Defines the table that the data is copied into.

Copy Data from Stage

Create Snowpipe

Defines the stage from where the data is copied.

Start or Pause Snowpipe

Start or Pause Snowpipe

Determines whether to start or pause the Snowpipe, as follows:

  • Start Snowpipe

  • Pause Snowpipe

Stored Procedure Name

Stored Procedure

Defines the name of the stored procedure.

Procedure Argument

Stored Procedure

Defines the value of the argument in the stored procedure.

Table Name

Snowpipe Load Status

Defines the table that is monitored when loaded by the Snowpipe.

Stage Location

Snowpipe Load Status

Defines the cloud storage location.

A stage is a pointer that indicates where data is stored, or staged.

For example: s3://CloudStorageLocation/

Days Back

Snowpipe Load Status

Determines the number of days to monitor the Snowpipe load status.

Status File Cloud Location Path

Snowpipe Load Status

Defines the cloud storage location where a CSV file log is created.

The CSV file log details the load status for each Snowpipe.

Storage Integration

Snowpipe Load Status

Defines the Snowflake configuration for the cloud storage location that is defined in the previous attribute, Status File Cloud Location Path.

For example: S3_INT