Machine Learning Jobs

Machine learning platforms and services enable you to create, train, deploy, and manage machine learning models on premises, in the cloud, and on edge devices.

The following topics describe job attributes that work with Machine Learning platforms and services:

Amazon SageMaker Job

Amazon SageMaker enables you to create, train, and deploy machine learning models on premises, in the cloud, and on edge devices.

To create an Amazon SageMaker job, see Creating a Job. For more information about this plug-in, see Control-M for Amazon SageMaker.

The following table describes the Amazon SageMaker job type attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Amazon SageMaker, as described in Machine Learning Connection Profiles.

Rules:

  • Characters: 1−30

  • Case Sensitive: Yes

  • Invalid Characters: Blank spaces.

Pipeline Name

Determines the name of the preexisting AWS SageMaker pipeline used in this job.

Idempotency Token

(Optional) Defines the unique ID (idempotency token) that guarantees the job runs only once. After it runs successfully, this ID cannot be reused.

To rerun the job with a new token, replace the default value with a unique ID that has not been used before, such as the RUN_ID, which you can retrieve from the job output.

Default: Token_Control-M_for_SageMaker%%ORDERID.

Add Parameters

Determines whether to add or change default parameters in the execution of the pipeline.

Parameters

Defines the parameters, in JSON format, to add or change, according to the Amazon SageMaker convention.

The list of parameters must begin with the name of the parameter type.

{ "Name": "string", "Value": "string" }

Retry Pipeline Execution

Determines whether to retry the execution of a pipeline, which you might want to do if a previous execution fails or stops.

Pipeline Execution ARN

Defines the Amazon Resource Name (ARN) of the pipeline, which is required to retry the execution of the pipeline.

An ARN is a standardized AWS resource address, for example:

arn:aws:sagemaker:us-east-1:122343283363:pipeline/test-demo-123-p-ixxyfil39d9o/execution/4tl5r9q0ywpw

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 30

Failure Tolerance

Determines the number of times to check the job status before the job ends Not OK.

Default: 2
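
For reference, the following Python sketch shows the Amazon SageMaker API calls that these attributes correspond to. It is an illustration only, not how Control-M invokes the service, and all angle-bracket values are placeholders.

# Illustrative sketch only (not the plug-in's implementation): the Amazon SageMaker API
# calls that the attributes above correspond to. All angle-bracket values are placeholders.
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# Pipeline Name, Idempotency Token, and Parameters: start a single pipeline execution.
# The idempotency token (ClientRequestToken) guarantees the execution starts only once.
response = sagemaker.start_pipeline_execution(
    PipelineName="<pipeline-name>",
    ClientRequestToken="<unique-token>",  # for example, a value based on the job's RUN_ID
    PipelineParameters=[
        {"Name": "<parameter-name>", "Value": "<parameter-value>"},
    ],
)

# The returned ARN is the value required by the Pipeline Execution ARN attribute.
execution_arn = response["PipelineExecutionArn"]

# Retry Pipeline Execution: retry a failed or stopped execution by its ARN,
# with a new idempotency token.
sagemaker.retry_pipeline_execution(
    PipelineExecutionArn=execution_arn,
    ClientRequestToken="<new-unique-token>",
)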

Azure Machine Learning Job

Azure Machine Learning enables you to build, train, deploy, and manage machine learning models on premises, in the cloud, and on edge devices.

To create an Azure Machine Learning job, see Creating a Job. For more information about this plug-in, see Control-M for Azure Machine Learning.

The following table describes the Azure Machine Learning job type attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to Azure Machine Learning, as described in Azure Machine Learning Connection Profile Parameters.

Rules:

  • Characters: 1−30

  • Case Sensitive: Yes

  • Invalid Characters: Blank spaces.

Workspace Name

Determines the name of the Azure Machine Learning workspace for the job.

Resource Group Name

Determines the Azure resource group that is associated with a specific Azure Machine Learning workspace.

A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.

Action

Determines one of the following Azure Machine Learning actions to perform:

  • Trigger Endpoint Pipeline: Starts a pipeline.

  • Compute Management: Stops, starts, restarts, or deletes a host or a cluster.

Pipeline Endpoint ID

Determines the pipeline endpoint ID, which points to a published pipeline in Azure Machine Learning.

Parameters

Defines additional parameters for the pipeline, in JSON format.

{
    "ExperimentName": "test",
    "DisplayName":"test1123"
}

For no parameters, type {}.

Compute Name

Defines the name of the compute resource (host or cluster) that the compute action manages.

Compute Action

Determines one of the following compute actions to perform:

  • Start: Starts a host or a cluster.

  • Stop: Stops a host or a cluster.

  • Restart: Restarts a host or a cluster.

  • Delete: Deletes a host or a cluster.

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 15

Failure Tolerance

Determines the number of times to check the job status before the job ends Not OK.

Default: 2
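
As an illustration of what the two actions correspond to in Azure's own API, the following Python sketch uses the Azure Machine Learning v1 Python SDK. It is not the plug-in's implementation, and all angle-bracket values are placeholders.

# Illustrative sketch only (Azure ML v1 Python SDK); not the plug-in's implementation.
# All angle-bracket values are placeholders.
from azureml.core import Workspace
from azureml.core.compute import ComputeInstance
from azureml.pipeline.core import PipelineEndpoint

workspace = Workspace.get(
    name="<workspace-name>",             # Workspace Name attribute
    resource_group="<resource-group>",   # Resource Group Name attribute
    subscription_id="<subscription-id>",
)

# Action: Trigger Endpoint Pipeline -- submit a run against a published pipeline endpoint.
# The job's Parameters JSON (for example, ExperimentName) maps onto this call.
endpoint = PipelineEndpoint.get(workspace=workspace, id="<pipeline-endpoint-id>")
run = endpoint.submit(experiment_name="test")

# Action: Compute Management -- start, stop, restart, or delete a compute instance by name.
compute = ComputeInstance(workspace=workspace, name="<compute-name>")
compute.start(wait_for_completion=True)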

GCP Vertex AI Job

GCP Vertex AI enables you to build generative AI applications, and train and deploy machine learning models.

To create a GCP Vertex AI job, see Creating a Job. For more information about this plug-in, see Control-M for GCP Vertex AI.

The following table describes the GCP Vertex AI job type attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to GCP Vertex AI, as described in GCP Vertex AI Connection Profile Parameters.

Rules:

  • Characters: 1−30

  • Case Sensitive: Yes

  • Invalid Characters: Blank spaces.

Action

Determines one of the following GCP Vertex AI actions to perform:

  • Run a Pipeline

  • Start Notebook Instance

  • Stop Notebook Instance

Project Name

Defines the name of the predefined Google Cloud project with configured APIs, authentication information, billing details, and job resources.

Instance Name

Defines the name of the notebook instance that the Start Notebook Instance or Stop Notebook Instance action manages.

Pipeline Run Name

Defines a unique name for a single execution of the pipeline.

Service Account

Defines the Google Cloud service account, in the following format:

service_account_name@project_id.iam.gserviceaccount.com

Pipeline Specification

Defines the specifications of the pipeline that you run, which is usually a JSON file.

GCS Output Directory

Defines the path to a Google Cloud Storage (GCS) bucket that serves as the root output directory of the pipeline.

Add Parameters

Determines whether to include Pipeline Runtime Parameters.

Pipeline Runtime Parameters

Defines the parameters of the pipeline that you run, in JSON format.

Zone

Defines the location of the notebook resources.

Default: us-central1-a

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 60
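
To illustrate what the Run a Pipeline action amounts to, the following sketch uses the Vertex AI Python SDK. It is not the plug-in's implementation; the project, bucket, file, and parameter names are placeholders.

# Illustrative sketch only (Vertex AI Python SDK); not the plug-in's implementation.
# Project, bucket, file, and parameter names are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="<project-name>", location="us-central1")

pipeline_job = aiplatform.PipelineJob(
    display_name="<pipeline-run-name>",          # Pipeline Run Name attribute
    template_path="pipeline_spec.json",          # Pipeline Specification (JSON file)
    pipeline_root="gs://<bucket>/<output-dir>",  # GCS Output Directory attribute
    parameter_values={"<parameter-name>": "<value>"},  # Pipeline Runtime Parameters attribute
)
pipeline_job.submit(
    service_account="service_account_name@project_id.iam.gserviceaccount.com",  # Service Account
)

# The Start/Stop Notebook Instance actions correspond to the start and stop operations of
# the Notebooks API (google-cloud-notebooks client library), addressed by project, zone,
# and instance name.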

OCI Data Science Job

OCI Data Science is an Oracle Cloud Infrastructure (OCI) platform that enables you to build, train, deploy, and manage machine learning (ML) models with Python and open source tools.

To create an OCI Data Science job, see Creating a Job. For more information about this plug-in, see Control-M for OCI Data Science.

The following table describes OCI Data Science job attributes.

Attribute

Description

Connection Profile

Determines the authorization credentials that are used to connect Control-M to OCI Data Science Services, as described in OCI Data Science Connection Profile Parameters.

Rules:

  • Characters: 1−30

  • Case Sensitive: Yes

  • Invalid Characters: Blank spaces.

Variable Name: %%INF-ACCOUNT

Actions

Determines one of the following OCI Data Science actions:

  • Start Job Run: Runs the OCI Data Science job.

  • Start Pipeline Run: Activates the OCI Data Science pipeline.

  • Create Model Deployment: Creates a Model Deployment.

  • Create Notebook Session: Creates a Notebook Session.

  • Delete Model Deployment: Deletes a Model Deployment.

  • Delete Notebook Session: Deletes a Notebook Session.

Parameters

Defines the parameters for the following actions, in JSON format, for example:

  • Start Job Run

    {
       "projectId":"ocid1.datascienceproject.oc1.phx.amaaaghfyu65876kjh345j35uagkr6es4j5txq2ehjqq4ioyqygq",
       "compartmentId":"ocid1.compartment.oc1..aaaaaaaahjo7g63l5dhmgepb7xjk34kj5k3j5dybd4wywxuz5aziyqpkvq",
       "jobId":"ocid1.datasciencejob.oc1.phx.amaaaaaatdg3y3qakj345kj34ed6v5kfuovupwaqeao3js2mcmcrk673w3fysq",
       "definedTags":{},
       "displayName":"test234",
       "freeformTags":{},
       "jobConfigurationOverrideDetails":{"jobType":"DEFAULT"},
       "jobLogConfigurationOverrideDetails":{"enableAutoLogCreation":false,"enableLogging":false}
    }
  • Start Pipeline Run

    {
       "pipelineId": "ocid1.datasciencepipeline.oc1.phx.amaaaaaatdg3y3qaqevmdwhe7kjhk345j345h6xiinyyj6rdkvj2l2lkzdvmwoq",
       "compartmentId": "ocid1.compartment.oc1..aaaaaaaahjo7g63l5dhmgepb7ljh3453uby4rdybd4wywxuz5aziyqpkvq"
    }
  • Create Model Deployment

    {
       "displayName": "Control_M",
       "projectId": "ocid1.datascienceproject.oc1.phx.amaaaahi34h5kj43h562ticili6ovduagkr6es4j5txq2ehjqq4ioyqygq",
       "compartmentId": "ocid1.compartment.oc1..aaaaaaaahjo7g63l5dhmgepkjl5kj34lk5j34ikuby4rdybd4wywxuz5aziyqpkvq",
       "modelDeploymentConfigurationDetails":
       {
          "deploymentType": "SINGLE_MODEL",
          "modelConfigurationDetails":
          {
             "modelId": "ocid1.datasciencemodel.oc1.phx.amaaaaaatdg3y3qabio6hdklkj34l534kyfbllazy7n462yqidp3tjwqc7ga",
             "instanceConfiguration":
             {
                "instanceShapeName": "VM.Standard.E4.Flex",
                "modelDeploymentInstanceShapeConfigDetails":
                {
                   "ocpus": 1.0,
                   "memoryInGBs": 16.0,
                   "cpuBaseline": null
                },
                "subnetId": null
             },
             "scalingPolicy":
             {
                "policyType": "FIXED_SIZE",
                "instanceCount": 1
             },
             "bandwidthMbps": 10,
             "maximumBandwidthMbps": null
          }
       }
    }
  • Create Notebook Session

    {
       "displayName": "Demo",
       "projectId": "ocid1.datascienceproject.oc1.phx.amaaaaaathfgdh6rdejfghjpw62ticili6ovduagkr6es4j5txq2ehjqq4ioyqygq",
       "compartmentId": "ocid1.compartment.oc1..aaaaaaaahjo7g63l5dhmgepb7xfszhpgik6456fhfghywxuz5aziyqpkvq",
       "notebookSessionConfigDetails":
       {
          "shape": "VM.Standard.E4.Flex",
          "blockStorageSizeInGBs": 100,
          "subnetId": null,
          "privateEndpointId": null,
          "notebookSessionShapeConfigDetails"
          {
             "ocpus": 4.0,
             "memoryInGBs": 64.0
          }
       }
    }

    For more information about the action parameters, see the Oracle API documentation.

Model Deployment ID

(Delete Model Deployment) Determines the OCID of the model deployment to delete.

Notebook Session ID

(Delete Notebook Session) Determines the OCID of the notebook session to delete.

Status Polling Frequency

Determines the number of seconds to wait before checking the job status.

Default: 60

Failure Tolerance

Determines the number of times to check the job status before the job ends Not OK.

Default: 2
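
The following Python sketch, using the OCI Python SDK, illustrates the service calls behind two of these actions. It is not the plug-in's implementation, and all OCIDs shown are truncated placeholders.

# Illustrative sketch only (OCI Python SDK); not the plug-in's implementation.
# All OCIDs below are truncated placeholders.
import oci

config = oci.config.from_file()  # reads the default ~/.oci/config profile
data_science = oci.data_science.DataScienceClient(config)

# Start Job Run: the Parameters JSON above maps onto CreateJobRunDetails.
job_run_details = oci.data_science.models.CreateJobRunDetails(
    project_id="ocid1.datascienceproject.oc1...",
    compartment_id="ocid1.compartment.oc1...",
    job_id="ocid1.datasciencejob.oc1...",
    display_name="test234",
)
job_run = data_science.create_job_run(create_job_run_details=job_run_details).data

# Delete Model Deployment / Delete Notebook Session: the OCID attributes are passed directly.
data_science.delete_model_deployment(model_deployment_id="ocid1.datasciencemodeldeployment.oc1...")
data_science.delete_notebook_session(notebook_session_id="ocid1.datasciencenotebooksession.oc1...")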