Control-M
Overview
Operator Command Quick Reference
The following is a list of some of the more common Control‑M Monitor operator commands explained in this chapter.
In these commands, CONTROLM represents the name of the Control-M monitor.
The CTMPLEX Monitor column indicates on which monitor the operator command can be run in CTMPLEX mode. Possible values are:
- GSM - On the Global Sysplex Manager (GSM) system only.
- Both - On both the GSM and LSM systems.
- None - Not relevant to CTMPLEX. For example, CMEM has its own monitor and does not use the Control-M monitor.
Table 85 Operator Command Quick Reference

| Category and Task | Command | CTMPLEX Monitor |
|---|---|---|
| General Operations | | |
| Activating the Control-M Monitor | S CONTROLM | Both |
| Shutting Down the Control-M Monitor | P CONTROLM | Both |
| Modifying the Control-M Sleeping Interval | F CONTROLM,INTERVAL=xx | GSM |
| Displaying Control-M Installation Parameters | F CONTROLM,SHOWPARM | Both |
| Dynamically Refreshing Control-M Parameters | F CONTROLM,NEWPARM | |
| Displaying a List of Allocated Datasets | F CONTROLM,LISTDD | Both |
| Displaying Storage Maps for the Control-M Monitor | F CONTROLM,LISTVDET / F CONTROLM,LISTVSUM | Both |
| Dynamically Reloading User Exits | F CONTROLM,RELOAD=userExit | GSM |
| Stopping Control-M monitors - Using STOPALL to shut down the Control-M monitor | F CONTROLM,STOPALL | GSM |
| Writing accumulated performance data on demand | F CONTROLM,PERFDATA=NOW | Both |
| Modifying the performance data accumulation interval | F CONTROLM,PERFDATA=nnnn | Both |
| Quiesce Time Management | | |
| Scheduling Quiesce Time | F CONTROLM,QUIESTIME=hhmm | GSM |
| Stopping submission of any job | F CONTROLM,QUIESTIME=NOW | GSM |
| Cancelling requests for Quiesce Time | F CONTROLM,QUIESTIME=OFF | GSM |
| Displaying the current status of QUIESTIME | F CONTROLM,QUIESTIME=DISPLAY | GSM |
| Quiesced Resource Management | | |
| Scheduling quiesced quantitative resources - Activating and Deactivating Quiesced Quantitative Resources | F CONTROLM,QUIESQRES=resource-name,hhmm | GSM |
| Stopping the use of a quantitative resource | F CONTROLM,QUIESQRES=resource-name,NOW | GSM |
| Cancelling previous requests for quiesced resources | F CONTROLM,QUIESQRES=resource-name,OFF | GSM |
| Displaying the current status of QUIESQRES | F CONTROLM,QUIESQRES=resource-name,D | GSM |
| Destination Tables | | |
| Loading a New Dynamic Destination Table (IOADEST) | F CONTROLM,NEWDEST=member | GSM |
| Refreshing the Mail Destination Table (MAILDEST) | F CONTROLM,NEWMAILDST | GSM |
| Loading and Refreshing the SNMP Destination Table (SNMPDEST) | F CONTROLM,NEWSNMPDST | GSM |
| Reloading the WLMSCTBL table | F CONTROLM,NEWLMSCTBL | GSM |
| Deadline Scheduling and Job Network Dependencies | | |
| Refreshing DUE OUT times - Refreshing Deadline Scheduling and Job Network Dependencies | F CONTROLM,DEADLINE | GSM |
| Shifting the DUE OUT time forward - Shifting DUE OUT Times for Control-M Jobs | F CONTROLM,SDOUT=+nnn | GSM |
| Shifting the DUE OUT time backward - Shifting DUE OUT Times for Control-M Jobs | F CONTROLM,SDOUT=-nnn | GSM |
| Refreshing PRIORITY values - Refreshing Deadline Scheduling and Job Network Dependencies | F CONTROLM,PROP | GSM |
| Refreshing both the DEADLINE (DUE OUT) times and the PRIORITY values - Refreshing Deadline Scheduling and Job Network Dependencies | F CONTROLM,REFSCHED | GSM |
| Refreshing the list of dependent jobs in the Job Dependency Network File - Refreshing Deadline Scheduling and Job Network Dependencies | F CONTROLM,NET | GSM |
| Simultaneously refreshing the DEADLINE (DUE OUT) times, the PRIORITY values, and the list of dependent jobs in the Job Dependency Network (NET) - Refreshing Deadline Scheduling and Job Network Dependencies | F CONTROLM,REFALL | GSM |
| Security | | |
| Refreshing the Control-M Security Cache | F CONTROLM,NEWSECDEF | GSM |
| Automatic Tape Adjustment Facility | | |
| Refreshing the UNITDEF Table (Unit Definition Table) | F CONTROLM,NEWUNITDEF | GSM |
| Trace Facility | | |
| Using the Control-M Internal Trace facility - Problem Determination using the Internal Trace Facility | F CONTROLM,TRACE=level | GSM |
| Supporting Interfaces | | |
| Switching from SAPI to PSO Support | F CONTROLM,SAPI=NO | GSM |
| Switching from PSO to SAPI Support | F CONTROLM,SAPI=YES | GSM |
| Identity Level (IDL) facility | F CONTROLM,IDL=<IDLModifyCommand> | Both |
| AutoEdit Variables and the Cache | | |
| Reloading AutoEdit definitions to cache - Loading %%GLOBAL Members to Cache | F CONTROLM,AECACHE=RELOAD | GSM |
| Reloading AutoEdit definitions using a new cache list member - Loading %%GLOBAL Members to Cache | F CONTROLM,AECACHE=RELOAD(membername) | GSM |
| Stopping cache until AECACHE=RELOAD - Loading %%GLOBAL Members to Cache | F CONTROLM,AECACHE=STOP | GSM |
| Newday Operations | | |
| Modifying the number of intervals to wait for Newday | F CONTROLM,NEWDAYWT=<number of intervals> | GSM |
| Special Newday Parameters | F CONTROLM,NEWDAY=expression | GSM |
| Optimization Commands | | |
| Load-Index Optimization Commands | F CONTROLM,LOAD-INDEX <commands> | Both |
| Displaying the status of active Workload Policies | F CONTROLM,WLLIST | GSM |
| Refreshing Workload Policies | F CONTROLM,WLREFRESH | GSM |
| CMEM Facility | | |
| Manually Activating the CMEM Monitor - Activating the CMEM Facility | S CTMCMEM | None |
| Shutting Down the CMEM Facility - Deactivating the CMEM Facility | F CTMCMEM,STOP | None |
| Replacing an Active CMEM Monitor | S CTMCMEM | None |
| Replacing the Active CMEM Executor Modules | F CTMCMEM,RELOAD=module | None |
| Manual Loading of Rules - Manual Loading of Rules using Operator Commands | F CTMCMEM,C=library(table) | None |
| Replacing All CMEM Rule Tables in All CPUs | F CONTROLM,NEWCONLIST | None |
| | F CTMCMEM,D=library(table) | None |
| | F CTMCMEM,DISPLAY[=DETAIL] | None |
| | F CTMCMEM,LOG=mode | None |
| | F CTMCMEM,INTERVAL=nn | None |
| | F CTMCMEM,NEWSECDEF | None |
| | F CTMCMEM,TRACE=nn | None |
| | F CTMCMEM,USAGESTATS | None |
| Journaling | | |
| Enabling journaling | F CONTROLM,JOURNAL=ENABLE | GSM |
| Disabling journaling | F CONTROLM,JOURNAL=DISABLE | GSM |
| AJF Space Reuse Facility | | |
| Activate deletion for space reuse of jobs copied to History AJF - History file processing for AJF space reuse | F CONTROLM,HISTALOC=ENABLE | GSM |
| Deactivate deletion for space reuse of jobs copied to History AJF - History file processing for AJF space reuse | F CONTROLM,HISTALOC=DISABLE | GSM |
| CTMPLEX Facility | | |
| Start the Control-M monitor on any system. The monitor becomes either a GSM or an LSM monitor, depending on whether other Control-M monitors of the same CTMPLEX are already active. - Controlling CTMPLEX | S CONTROLM | GSM |
| Stop any LSM (when issued on the system where an LSM runs) or stop the entire CTMPLEX (when issued on the system where the GSM runs). - Controlling CTMPLEX | P CONTROLM | GSM |
| Activate or deactivate Work Balancing mode, overriding the value of parameter BALANCEM. - Controlling CTMPLEX | F CONTROLM,BALANCE=YES\|NO | GSM |
| Stop all LSM monitors. The GSM monitor continues working in regular (not CTMPLEX) mode. - Controlling CTMPLEX | F CONTROLM,STOPPLEX | GSM |
| Stop the GSM. - Controlling CTMPLEX | F CONTROLM,STOPGSM | GSM |
| Display information about all monitors (GSM and LSMs) of the CTMPLEX. - Controlling CTMPLEX | F CONTROLM,LISTPLEX | GSM |
| Resume CTMPLEX processing after environmental errors related to the CTMPLEX Coupling Facility structure occur. - Controlling CTMPLEX | F CONTROLM,STARTPLEX | GSM |
| Display information about the GSM from an LSM. - Controlling CTMPLEX | F CONTROLM,WHOGLOBAL | Both |
| Issue a diagnostic dump (to one of the SYS1.DUMPxx datasets) to obtain the contents of the CTMPLEX Coupling Facility structure. - Controlling CTMPLEX | F CONTROLM,DUMPPLEX | GSM |
Basic Operations
This section discusses the following basic operations:
Special Newday Parameters
The Newday procedure is normally executed once daily at the time specified by the DAYTIME parameter in CTMPARM. Under certain circumstances (such as disaster recovery), you might need to execute Newday at a different time or to skip a Newday run. The NEWDAY command options described in this section enable you to accomplish such non-standard tasks.
WARNING: The Control-M for z/OS User Guide describes the deprecated RETRO parameter. (If a job did not run as scheduled, a value of RETRO=Y results in the job automatically running at a later time.) Even though RETRO=Y still works as designed, BMC recommends that you remove RETRO expressions from all job scheduling definitions. Doing so also enables you to take advantage of all of the options described in the table below. If you have not removed RETRO=Y expressions from job scheduling definitions, do NOT include date parameters in the special Newday commands that you run.
Special Newday commands have the following syntax:
F CONTROLM,NEWDAY=expression
where expression is one of the options described in the table below.
Table 86 Newday Special Parameters
| Parameter | Description |
|---|---|
| SKIP | Skip the next Newday process. Although Newday does not run at the time indicated by the DAYTIME parameter in CTMPARM, the AJF is updated as if Newday ran. Under certain circumstances, the Control-M monitor initiates Newday processing immediately upon startup. To skip Newday processing at startup, start the Control-M monitor with the command S CONTROLM,NEWDAY=SKIP. The SKIP option is useful in disaster recovery scenarios when you need to avoid initiating the Newday procedure upon startup. See Example 1 - Continue execution at recovery site. When Newday is bypassed at startup by NEWDAY=SKIP, the next upcoming Newday runs at its normal time, unless another NEWDAY=SKIP is issued. |
| hhmm\|NOW | NOW: run Newday immediately. hhmm: run Newday at the next occurrence of hhmm. If hhmm is earlier than the current time, the command runs Newday the following day at hhmm. The next day's regularly scheduled Newday procedure is also executed. The command F CONTROLM,NEWDAY=hhmm does not change the value of DAYTIME in CTMPARM. |
| hhmm[,date] | Run Newday at time hhmm (or NOW). If date is not specified, the current ODATE is used. Otherwise, date determines the ODATE. Use the date format that matches the DATETYP parameter in IOAPARM: yymmdd, ddmmyy, or mmddyy. This option is useful for rescheduling a workload after the computer has been down for one or more days (due to a holiday, hardware failure, and so on), using the original scheduling date for each Newday iteration. See Example 2 - System down for three days. |
| hhmm,RERUN | Rerun Newday with the current ODATE at time hhmm (or NOW). |
| hhmm,ORDERONLY[,date] | Rerun the Newday process, except for the compress phase, at the time specified (hhmm) or now, if no time is specified. If date is not specified, the current ODATE is used. Otherwise, date determines the ODATE. This option is useful when the job ordering phase of the Newday procedure terminated prematurely without ordering its full complement of jobs. See Example 4 - Newday processing abended during job ordering. To use this option, ensure that Enhanced Daily Checkpointing is implemented. For more information, see Date Control Records and Enhanced Daily Checkpointing. |
| hhmm,FORMATONLY | Compress the AJF at time hhmm (or NOW). The Control-M monitor enters suspend mode during this AJF compression and resumes execution at its conclusion. There is no need to shut down the Control-M monitor (which is required when you use the CTMCAJF utility COMPRESS command). |
Examples
This section describes several scenarios that call for running the Newday procedure with special parameters.
Example 1 - Continue execution at recovery site
The system date at a disaster recovery site differs from the date at the production site in a way that starting Control-M monitor at the recovery site would trigger Newday processing for the wrong day. Enter the following command to start Control-M without running Newday:
S CONTROLM,NEWDAY=SKIP
Example 2 - System down for three days
The system was down for three days. After starting the Control-M monitor as described in "Example 1 - Continue execution at recovery site," you probably need to run Newday for each of the three days in succession. If so, enter the following command three times in the Control-M monitor, specifying the appropriate ODATE value for date each time (and waiting for job processing to conclude between repetitions):
F CONTROLM,NEWDAY=NOW,date
Example 3 - System error requires restart of Newday processing
Due to an error in JES, all of the jobs that Newday submitted ended with JCL errors. After resolving the JES issue and clearing the submitted jobs from the AJF, enter the following command to rerun Newday:
F CONTROLM,NEWDAY=NOW,RERUN
Example 4 - Newday processing abended during job ordering
If Newday processing abended during job ordering, and ODATE has not changed, enter the following command to restart job ordering:
F CONTROLM,NEWDAY=NOW,ORDERONLY
Activating the Control-M Monitor
The Control‑M monitor usually operates 24 hours a day as a started task (STC). Usually the monitor is automatically activated as part of the IPL process. To activate the monitor manually, use the operator command
S CONTROLM
If the monitor is successfully activated, the following message is displayed on the operator console:
CTM100I CONTROL‑M MONITOR STARTED
When Control-M operates in standalone mode and the Control-M monitor is already active, any attempt to activate an additional Control-M monitor with the same IOA components in the same computer environment causes the new (that is, additional) monitor to shut down immediately, and an appropriate message is issued.
It is possible to activate more than one Control‑M monitor in the same computer environment (for example, PROD and TEST version) by defining a different IOA environment (and a different QNAME) for each monitor. For more information see the Control‑M chapter of the INCONTROL for z/OS Installation Guide: Installing.
Under CTMPLEX configuration, more than one Control‑M can be active under an IOA environment. For more information about Control‑M in CTMPLEX configuration see CTMPLEX: Control-M for the Sysplex.
You can issue Control-M operator commands that are executed immediately when you start the Control-M monitor. You do this by specifying the operator command, in parentheses, as the fourth positional parameter of the START command.
Activate the Control-M monitor in QUIESCE mode by issuing the following command:
S CONTROLM,,,(QUIESTIME=NOW)
No jobs will be submitted by Control-M until you issue the QUIESTIME=OFF command.
Activate the Control-M monitor in QUIESCE mode and simultaneously instruct Control-M to skip the next Newday process (see Special Newday Parameters) by issuing the following command:
S CONTROLM,,,(QUIESTIME=NOW),NEWDAY=SKIP
No jobs will be submitted by Control-M until you issue the QUIESTIME=OFF command.
Shutting Down the Control-M Monitor
To shut down the Control‑M monitor, use the P CONTROLM operator command.
After a few seconds (up to a minute), the Control‑M monitor shuts down and the following messages are displayed on the operator console:
CTM107I SHUT DOWN UPON REQUEST FROM OPERATOR
CTM120I Control‑M MONITOR SHUTTING DOWN
In case of emergency, you can cancel the Control‑M monitor. However, you should avoid doing this unless absolutely necessary, because cancelling the monitor may corrupt the database in the Active Jobs file, Conditions file, and Log file. There are times when cancelling the Control-M monitor is unavoidable (for example, when there are severe problems in JES). However, in such cases, BMC recommends that the user first try to QUIESCE Control-M, if possible. In this way, you can minimize the activity taking place within Control-M before the cancellation, and thereby minimize the potential for corruption.
When canceling the monitor, as in the case where the Control-M monitor is hung, a system (SVC) dump of the Control-M Monitor address space should be taken. To do this:
- Enter the MVS console command 'DUMP'.
- Specify the JOBNAME or ASID of the monitor.
- Specify parameter SDATA=(CSA,GRSQ,SUM,RGN,TRT).
The SVC dump should be taken before trying to stop/cancel the Monitor.
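For illustration, a console sequence might look as follows, assuming the monitor runs as a started task named CONTROLM (an illustrative job name) and nn is the reply number assigned by the system:
DUMP COMM=(CONTROL-M MONITOR HUNG)
R nn,JOBNAME=(CONTROLM),SDATA=(CSA,GRSQ,SUM,RGN,TRT),END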
When you shut down the Control‑M monitor, all other Control‑M facilities (for example, CMEM), IOA Online monitors, and Online facility sessions can remain active.
Modifying the Control-M Sleeping Interval
Periodically, at a predefined interval, Control-M "wakes up" and checks what it has to do. This interval is set using a Control-M installation parameter and can be changed by the INCONTROL administrator. In addition, the sleep interval can be altered using the following operator command:
F CONTROLM,INTERVAL=ss[.th]
In this command:
- ss is the interval in seconds
- th is the interval in hundredths of seconds
The interval should be modified by automatic commands invoked by the Control‑M monitor itself according to set conditions and time ranges, and not manually by the operator.
At most sites, the interval should be longer during the day (when fewer batch production jobs are executing) and shorter during the night. The minimum sleep interval is 0.1 seconds.
When the modification is received by Control‑M, the following message is displayed on the operator console from which the modify command was issued:
CTM123I Control‑M INTERVAL IS SET TO ss.th SECONDS
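For example, to set the sleep interval to 1.5 seconds (an illustrative value), enter:
F CONTROLM,INTERVAL=01.50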
Displaying Control-M Installation Parameters
Control‑M installation parameters contain general information about your system.
To display the values of some of the more important parameters, issue the following operator command:
F CONTROLM,SHOWPARM
Dynamically Refreshing Control-M Parameters
The CTMPARM installation parameters table can be refreshed dynamically, that is, without stopping and restarting the Control‑M Monitor, using the following operator command:
F CONTROLM,NEWPARM
After the command has been executed, the Control‑M Monitor uses the new installation parameters from CTMPARM.
If Control‑M/Restart is installed, NEWPARM also refreshes CTRPARM, and the Monitor then starts to use the new CTRPARM parameters.
Almost all Control-M installation parameters, and all Control-M/Restart installation parameters, can be dynamically refreshed in this way. For those Control-M parameters that cannot, the Control-M Monitor continues to use the original values. These Control-M parameters are:
- AJFSIZE
- ARMELMNT
- AUTOTAPE
- CTMPLEX
- ENHNJE
- JRNL
- MVBO
- NONSWAPM
- NEWDAYIM
- OPTMODE
To replace these values that cannot be refreshed dynamically, do the following:
- Stop the Control-M Monitor.
- Replace the values in the CTMPARM member.
- Restart the Monitor.
Displaying a List of Allocated Datasets
To display the currently allocated datasets, enter the command F CONTROLM,LISTDD.
The currently allocated datasets are passed to your console and to the JOBLOG of the Control‑M Monitor.
Displaying Storage Maps for the Control-M Monitor
A pair of MODIFY commands provides information about storage allocations. These reports are written to the file referenced by the DAPRENV DD statement. You can choose between a detailed report or a summary report.
To display a detailed storage map by TCB storage key and subpool, specify the following operator command:
F CONTROLM,LISTVDET
Every allocated block is listed by TCB and subpool number. The following information is displayed on the operator console from which the modify command was issued:
- TCB
- SUBPOOL: Subpool number
- FROMADDRESS: Address from which the storage block is allocated
- LENGTH: Size of the storage block (both above and below the line)
To display a summary storage map, specify the following operator command:
F CONTROLM,LISTVSUM
Totals for all allocated blocks are listed.
Dynamically Reloading User Exits
Control-M user exits can be dynamically reloaded, without recycling the Control-M Monitor, by using the following operator command:
F CONTROLM,RELOAD=userExit
where userExit is CTMX001, CTMX002, CTMX003, CTMX004, or CTMX015.
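For example, to reload user exit CTMX002 (assuming it has not been link-edited into a load module, as described below), enter:
F CONTROLM,RELOAD=CTMX002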
For security purposes, some customers choose to link-edit user exit CTMX001 into load module CTMJOB or user exit CTMX002 into load module CTMSUB, or both. The RELOAD operator command will only reload these user exits if they have not been link-edited into load modules CTMJOB and CTMSUB, respectively. Before reloading these user exits, a check is made that will cause the RELOAD command to abort if these exits are link-edited directly into the aforementioned load modules.
The RELOAD command fully supports a CTMPLEX environment. All local (LSM) monitors will automatically reload the relevant user exits.
Due to the efficient way Control-M subtasks operate, the actual RELOAD of the user exits and the resulting messages, CTMR0AI and CTMR09E, may not occur until a job is ordered or a DO FORCEJOB is executed by Control-M.
Dynamically Refreshing CTMPLEX Parameters
The System Entries parameters of the CTMPLEX parameters member can be dynamically refreshed using the following operator command:
F CONTROLM,NEWPLEX
Only the System Entries parameters of the CTMPLEX parameters member can be refreshed in this way. The General parameters are not processed by this command.
The General parameters of the CTMPLEX parameters member can only be refreshed by one of the following methods:
- using the STOPPLEX and STARTPLEX commands
- stopping and then restarting the Control-M Monitor
Using STOPALL to shut down the Control-M monitor
The STOPALL command can also be used to shut down Control-M. In a non-CTMPLEX environment, this command works in the same way as the P CONTROLM operator command. In a CTMPLEX environment, this command stops all Control-M monitors (both the Global and all Locals).
To stop one or more Control-M monitors, enter the following operator command:
F CONTROLM,STOPALL
Setting a Planned Shutdown Time (Quiesce Time)
Setting the Control‑M monitor planned shutdown time (QUIESTIME) stops the submission of jobs that, according to their average execution time, cannot finish before the specified QUIESTIME. Setting a QUIESTIME only affects submission processing and not other Control‑M functions, such as post-processing.
QUIESTIME is not applied to jobs for which an ELIGIBLE FOR RUN message has already been issued in the log. Such jobs are submitted as scheduled.
QUIESTIME is set by the operator command
F CONTROLM,QUIESTIME=xxxx
In this command, xxxx is one of the values described in the following table:
Table 87 QUIESTIME Values
| Value | Description |
|---|---|
| hhmm | Where hh is the hour (based on a 24-hour clock) and mm is the minute. The planned shutdown time before which, based on their execution time, jobs must end. If any jobs cannot end by that time, QUIESTIME stops their submission. A QUIESTIME command using this value supersedes any previous shutdown time setting. |
| NOW | Immediately stops the submission of all jobs. |
| OFF | Cancels any QUIESTIME requests that are currently active. |
| D | Displays the current status of QUIESTIME, in the form of messages CTML19I and RNL19I. Message CTML19I appears in the IOA Log, and message RNL19I appears in the System Log. In these messages, yyyy is hhmm, NOW, or OFF. |
By default, QUIESTIME affects both tables and jobs. However, if the IGNQTMGR parameter in the CTMPARM member is set to Y, QUIESTIME only affects jobs.
Recycling of the Control-M monitor cancels the previously defined QUIESTIME. The QUIESTIME can be defined when the Control-M monitor is activated with the START command. For more details, see Activating the Control-M Monitor.
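For example, to stop the submission of any job that, based on its average execution time, cannot finish by 17:30 (an illustrative time), and later cancel the request, enter:
F CONTROLM,QUIESTIME=1730
F CONTROLM,QUIESTIME=OFF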
Activating and Deactivating Quiesced Quantitative Resources
When a job is ordered by Control-M, the job ordering process checks for any quantitative resources that have been deactivated or are to be deactivated at a later time. If the job requires such a quantitative resource, and if the time that the job is expected to complete is later than the time at which the quantitative resource is deactivated, then the quantitative resource is not assigned to the job, and the job will not run.
The QUIESQRES command enables users to activate and deactivate quantitative resources, and to display the status of those resources.
To display or change the status of a specific resource, enter the following command:
F CONTROLM,QUIESQRES=resource-name,DISPLAY|NOW|OFF|hhmm
where:
- resource-name is the quantitative resource
- DISPLAY displays the activation status of the quantitative resource
- NOW immediately deactivates the quantitative resource
- OFF immediately reactivates the quantitative resource
- hhmm deactivates the quantitative resource at the specified time
The current status of all quiesced quantitative resources can be displayed by using an asterisk (*) as the value of the resource-name variable, as shown in the following example:
F CONTROLM,QUIESQRES=*,D
All quiesced quantitative resources can be immediately reactivated by using an asterisk (*) as the value of the resource-name variable, as shown in the following example:
F CONTROLM,QUIESQRES=*,OFF
You can use an asterisk as the value of the resource-name variable only with the DISPLAY and OFF subparameters.
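For example, to schedule deactivation of a quantitative resource named TAPE (a hypothetical resource name) at 22:00, and later reactivate it, enter:
F CONTROLM,QUIESQRES=TAPE,2200
F CONTROLM,QUIESQRES=TAPE,OFF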
Shout / Mail Facility Destination Table Operations
The IOA Shout and Mail facilities allow the user to specify messages to be sent to various destinations, defined by the following tables:
- Dynamic Destination Table (IOADEST)
  Destinations in a production environment are not necessarily fixed. For example, the TSO logon ID of the shift manager is different in every shift. The Dynamic Destination table enables the user to specify a group name destination and which final destinations it represents.
- Mail Destination Table (MAILDEST)
  Mail destinations consist of names, addresses, and groups to whom Control-M can send e-mail messages.
- SNMP Destination Table (SNMPDEST)
  SNMP destinations consist of host names, IP addresses, nicknames, group names, and port numbers where Control-M can send SNMP traps (messages).
For instructions on how to manage these tables, see Shout / Mail Facility Destination Table Administration.
Loading a New Dynamic Destination Table (IOADEST)
When the Control‑M monitor is started, the Dynamic Destination table, IOADEST, is loaded. To replace the Dynamic Destination table, IOADEST, with a new table, use the following operator command:
F CONTROLM,NEWDEST=member
where member is the name of the member with the new Dynamic Destination table.
After a few seconds, a message describing the result of the operation is displayed on the operator console from which the modify command was issued.
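For example, if the new Dynamic Destination table resides in a member named DESTNEW (a hypothetical member name), enter:
F CONTROLM,NEWDEST=DESTNEW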
Loading a New Mail Destination Table
The Mail Destination table contains a list of names, addresses, and groups to whom e–mail messages can be sent.
When the Control-M monitor is started, the Mail Destination table is loaded. Message CTM280I - MAILDEST TABLE WAS LOADED is generated when the Mail Destination table is reloaded successfully.
Refreshing the Mail Destination Table (MAILDEST)
When a name, address or group is added or changed, the Mail Destination table must be reloaded by using the following command:
F CONTROLM,NEWMAILDST
A new Mail Destination table replaces the existing one, and the following message is displayed on the operator console from which the modify command was issued when the monitor resumes job processing:
CTM280I MAILDEST TABLE WAS RELOADED.
If the table is not found, the following message is displayed:
CTM281W MAILDEST TABLE WAS NOT FOUND IN ANY LIBRARY REFERENCED BY DD STATEMENT DAPARM. UNABLE TO SEND SHOUT
If an error occurs while loading or reloading the table, the following message is displayed:
CTM288E ERROR IN PREPARING SHOUT TO MAIL, RC=rc
Loading and Refreshing the SNMP Destination Table (SNMPDEST)
When the Control-M monitor is started, the SNMPDEST SNMP Destination table is loaded. The table contains host names, IP addresses, nicknames, group names, and port numbers where Control-M DO SHOUT and SHOUT WHEN can send SNMP traps (messages). When any address or name is added, changed, or deleted, the table should be reloaded with the new one by using the following command:
F CONTROLM,NEWSNMPDST
After a few seconds, a message describing the result of the operation is displayed on the operator console.
Refreshing Deadline Scheduling and Job Network Dependencies
A group of operator commands enable you to refresh the calculation of job dependency information and runtime scheduling criteria that impact job flows. The following operator commands are available:
- To refresh the DUE OUT times of jobs and set optimal deadlines for jobs in flows, issue the following operator command:
  F CONTROLM,DEADLINE
- To refresh the PRIORITY values of jobs, so that jobs in a flow have consistent priorities and no job has a lower priority than any of its successor jobs, issue the following operator command:
  F CONTROLM,PROP
- To refresh both the DEADLINE (DUE OUT) times and the PRIORITY values, issue the following operator command:
  F CONTROLM,REFSCHED
- To refresh the list of dependent jobs in the Job Dependency Network File, issue the following operator command:
  F CONTROLM,NET
- To simultaneously refresh all three types of job dependency information (as done by the DEADLINE, PROP, and NET commands), issue the following operator command:
  F CONTROLM,REFALL
For more information about the adjustment of job deadlines and propagation of priorities, see "Automatic Job Flow Adjustment" in the Introduction chapter of the Control-M for z/OS User Guide.
Shifting DUE OUT Times for Control-M Jobs
If SHOUT WHEN LATE * is specified in a Control-M job scheduling definition, a message is issued if the job does not finish executing by the specified DUE OUT time. A large number of such messages may be issued if Control-M is brought up after it (or OS/390 or z/OS) was down for a significant amount of time.
These messages can be avoided by shifting the DUE OUT time forward an appropriate amount of time (for example, if Control‑M was down for two hours, shift the DUE OUT time 120 minutes forward).
To shift the DUE OUT time forward or backward, issue the command
F CONTROLM,SDOUT={+|‑}nnn
where:
- + and - indicate whether to shift the DUE OUT time forward (later) or backward (earlier), respectively.
- nnn is the number of minutes to shift the DUE OUT time. From 1 to 999 minutes can be specified.
Jobs with a HELD status are not shifted by the SDOUT operator command.
The SDOUT operator command only works if the REFRESH DEADLINE IOA online command or the DEADLINE operator command (see Refreshing Deadline Scheduling and Job Network Dependencies) was previously issued.
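For example, if Control-M was down for two hours, shift the DUE OUT times 120 minutes forward by entering:
F CONTROLM,SDOUT=+120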
Modifying the Number of Intervals to Wait for Newday
After Control-M monitor issues the message CTM113I Control-M MONITOR <monitor name> NEWDAY PROCESSING STARTED, it waits 30 Control-M sleep intervals for the NewDay started task to start executing. If the NewDay procedure does not start to execute, a CTML03W NEW DAY PROCEDURE NOT DETECTED message is issued, followed by CTML06W REPLY 'R' FOR RESUME OR 'E' FOR END.
The number of intervals to wait is set in the CTMPARM parameter NEWDAY#W, which has a default value of 30. For example, if the Control-M sleep interval is 3 seconds, the monitor waits 90 seconds for the Newday started task to start executing.
The number of intervals can be modified by using the following operator command:
F CONTROLM,NEWDAYWT=<number of intervals>
The number of intervals must be a number containing 1 to 4 digits.
When the modification is received by Control-M, the following message is displayed on the operator console where the modify command was entered:
CTM109I THE NUMBER OF INTERVALS TO WAIT FOR THE CONTROL-M DAILY IS SET TO <number of intervals>
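For example, to have the monitor wait 60 intervals (an illustrative value) for the Newday started task, enter:
F CONTROLM,NEWDAYWT=60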
Refreshing the Control-M Security Cache
Control‑M security modules use a security block to identify each user for which an authority check is performed. The first time a user’s security authorization is checked, Control‑M creates a security block for that user. The security block can then optionally be saved for the next time the user’s security authorization is checked.
Security blocks saved for subsequent checks are kept in the Control‑M security cache.
The Control‑M security cache holds security blocks for the last 30 users to have their security authorization checked.
Changes made to a user's security authorization (since the last time that user's security block was created) are not automatically reflected in the user's security block in the Control-M security cache. However, if a user's security authorization has been changed and there is no security block in the Control-M security cache for that user, the changes take effect the next time that user's security authorization is checked.
To immediately include new user authorization information in the Control‑M security cache, refresh the security cache using the following operator command:
F CONTROLM,NEWSECDEF
This command refreshes all user authorization information in the Control‑M security cache.
Issuing Operator Commands using a Job or Started Task
The IOAOPR utility can be used to issue operator commands (MVS, JES2, JES3, VTAM commands, and so on). It can be activated as a job step or as a started task, and allows full control over when to issue a command and what to do afterwards. It is also possible to send the command to any computer (because Control-M can schedule a started task in any computer).
For a description of the IOAOPR utility, see the INCONTROL for z/OS Utilities Guide.
Switching from SAPI to PSO Support
SAPI (the IBM SYSOUT Application Programming Interface) is the default SYSOUT processing interface for Control-M when Control-M is operating under z/OS version 1.1 and later. However, Control-M continues to maintain support for PSO.
If you encounter a problem associated with job post-processing (for example, jobs not properly identified, unpredictable errors), you can switch from SAPI support to PSO support.
- To switch from SAPI support to PSO support, issue the following operator command:
  F CONTROLM,SAPI=NO
- To switch back from PSO support to SAPI support, issue the following operator command:
  F CONTROLM,SAPI=YES
For more information about post-processing, see the introduction chapter of the Control-M for z/OS User Guide.
Loading %%GLOBAL Members to Cache
%%GLOBAL members can be placed in cache memory from where they can be accessed as needed. If the members are placed in cache, the JCL accesses the contents from the cache, instead of accessing the members themselves.
This can be very advantageous if many jobs access %%GLOBAL members, because each access of the member increases I/O and processing overhead. Only those %%GLOBAL members that are specifically requested are loaded to cache.
Requests are generally made by listing the desired %%GLOBAL members in a special cache list member in the DAGLOBAL library. This cache list member (default name: CACHLST) is pointed to by parameter AECACHL in member CTMPARM in the IOA PARM library.
Use the following format to list members in the cache list member:
%%GLOBAL memname
where memname is the name of a %%GLOBAL member in the library pointed to by DD statement DAGLOBAL.
The cache list member can optionally contain the following control statement as its first non-comment statement:
%%RESOLVE ALLCACHE
This control statement affects AutoEdit processing only if an AutoEdit variable has not been resolved by searching the %%GLOBAL members identified in the job. The statement instructs Control‑M to continue the variable resolution process by checking all members loaded into cache. Members in cache are searched in the same sequence they are listed in the cache list member.
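A minimal sketch of a cache list member, assuming two hypothetical %%GLOBAL members named GLOBAL1 and GLOBAL2, might look like this:
%%RESOLVE ALLCACHE
%%GLOBAL GLOBAL1
%%GLOBAL GLOBAL2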
%%GLOBAL members are loaded to cache:
- In the Control-M monitor's address space, at the time of Control-M startup.
- In the online address space, when the user performs AutoEdit simulations (options 2.%, 3.%, or 6.M2) or enters a JCL edit session (2.J or 3.J). At the end of the option processing, the AutoEdit cache is deleted.
- In the batch AutoEdit simulation job.
The following commands can be used between Control-M startups, and affect only the Control-M monitor's cache processing.
To reload %%GLOBAL members to cache, specify the reload command in either of the following formats:
F CONTROLM,AECACHE=RELOAD
F CONTROLM,AECACHE=RELOAD(membername)
Each of these formats deletes the current %%GLOBAL members from cache, and then (re)loads to cache the %%GLOBAL members listed in the cache list member.
If the command is specified without a member name, the name of the cache list member that was last loaded is used. This format is especially useful if there are changes to the list of %%GLOBAL members in the cache list member and/or changes to the contents of the currently loaded %%GLOBAL members.
If the command is specified with a member name, the member name must identify a cache list member in DAGLOBAL (other than the currently active cache list member).
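For example, to reload the cache from a hypothetical cache list member named CACHLST2 in the DAGLOBAL library, enter:
F CONTROLM,AECACHE=RELOAD(CACHLST2)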
To stop using AutoEdit cache, issue the following command:
F CONTROLM,AECACHE=STOP
Accumulating performance data
Various components of Control-M collect performance-related data. The accumulated data is written to SMF records. The records are written to SMF once every Newday, periodically, or in response to an operator command (PERFDATA). In addition to being written to SMF records, the performance data is also written to the file defined by ddname DATRACE. The writing of the SMF records is accompanied by corresponding messages in the IOA trace file. For more information on the collection of performance data, see Identity Level (IDL) facility.
The SMF records containing the performance data can be extracted and processed by the Control-M CTMRSMF utility. For more information on the CTMRSMF utility, see the INCONTROL for z/OS Utilities Guide.
Writing accumulated performance data on demand
To immediately write the accumulated performance data to an SMF record, use the following command:
F CONTROLM,PERFDATA=NOW
where the NOW option requests that the accumulated performance data be written immediately and a new period for accumulating performance data be started.
Modifying the performance data accumulation interval
You can temporarily change the interval of time (expressed in minutes) between writes of an SMF record containing the accumulated performance data. This temporary change is reset to the default when the Control‑M monitor is restarted. The default is specified by the PFMINT parameter in the CTMPARM member. You can change the interval using the following command:
F CONTROLM,PERFDATA=nnnn
where nnnn is the number of minutes between writes of an SMF record containing the accumulated performance data. Use a number from 1 to 1440.
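For example, to write an SMF record with the accumulated performance data every 60 minutes (an illustrative value) until the monitor is restarted, enter:
F CONTROLM,PERFDATA=60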
The Job/Step Completion Status Facility (JSCSF)
The Job/Step Completion Status Facility (JSCSF) enables users to define common rules for analyzing job results that determine whether jobs ended OK or NOTOK. The rules are defined in the PGMST member located in the CTM.PARM library. In addition to the Step and Procedure Step Names, the Program Names can also be defined under the Job Step criteria in the PGMST member.
The JSCSF rules are logically added to the 'DO OK' / 'DO NOTOK' actions of the ON PGMST statements specified in the Control-M Job Scheduling Definitions. The general rules in the JSCSF might override the more specific rules defined in the ON PGMST statements or might be overridden by them, depending on the JSCSF definitions in the PGMST member.
Example of the PGMST member:
/********************************************************************
/* *
/* JOB/STEP COMPLETION STATUS FACILITY DEFINITION MEMBER *
/* *
/*------------------------------------------------------------------*
/*MEMBER JOB STEP PROCSTEP PROGRAM OK/ OVER CODES *
/*NAME NAME NAME NAME NAME NOTOK JDEF VALUES *
/*-------------------------------------------------------------------*
TEST* TEST12 * * * OK N C0000-C0100
ABC* * * * PROGRAM1 OK Y C0008,C0020
* PROD* * * IDCAMS OK N <=C0004
* * * * ASMA90 NOTOK N >C0004
++ SCHLIB=MYLIB1, TABLE=TABLE1
++ APPL=APPLICATION1
* * * IOATEST IOATEST OK N C*,U*,S*,NS0C?
CRITICAL * * * * NOTOK Y >=C0002
ANYCODE * * * * OK N C*
* MYCODES * * MANYCODE OK Y C0022,C0033,
C0044,C0055,
C0066,C0077,
C0088,C0099,
C0100-C0200
* * CONTROLR * CTRCTR NOTOK Y NC0000
++ SCHLIB=MYLIB2, TABLE=TABLE2
* * * * \IDCAMS NOTOK N C0004
* * * * ASMA90 *ASME N >C0004
* * * * ASMA90 *ASMW N C0004
Notes:
- The lines starting with '/*' (slash and asterisk) are comment lines.
- The data must be defined in columns 1 - 72 of the member.
- Valid CODE values are: Cxxxx = Completion Code, Uxxxx = User ABEND, Sxxx = System ABEND, Nyxxxx = NOT some code (where y can be C, U, or S).
- The following wild characters are supported: '*' matches any string; '?' matches any character. The wild characters are not supported in the STATUS (OK/NOTOK) position. The codes used in defining ranges or greater-than/less-than relationships cannot contain wild characters ('*', '?').
- The codes and code ranges defined in PGMST are processed with 'OR' relationships. The only exception is the Nyxxxx definition, which must match the event (AND relationship). This is similar to the logic of ON PGMST in the Control-M Job Scheduling Definition.
- Unlike ON PGMST statements, a Nyxxxx definition can be the only code in the record, and its matching is enough for a match of the condition (in ON PGMST statements, a Step Code must also match any other Code defined without a 'NOT' relationship). See the example in the Program Name CTRCTR record.
- Continuation records can be defined (see the MANYCODE line in the example above). Only CODE VALUES can be defined in a continuation record.
- Code ranges and greater-than/less-than relationships can be defined for Completion Codes (Cxxxx) and User Abends (Uxxxx), but not for System Abends (Sxxx), because the number suffixes of System Abends do not indicate priority (for example, S0C4 is not "more than" S0C1).
- The first entry in the table that matches all the job step criteria (Member Name, Job Name, Step Name, Procstep Name, Program Name) is processed. If the CODES match as well, the corresponding rule is implemented. If the CODE does not match (but all the criteria mentioned before do match), the rule is not implemented and no more PGMST records are checked for the step. Therefore, ensure that the more specific entries are listed first and the more global entries are listed last.
  The selection criteria described above can be altered by coding the special control statement '++CONTINUE_SEARCH++' anywhere in the PGMST member. In such a case, the CODES are considered part of the selection criteria, and a rule entry is not chosen for action until all the criteria, including CODES, are matched.
- If any of the PGMST member definitions matches all the job step criteria for some step of the job (Member Name, Job Name, Step Name, Procstep Name, Program Name), the Job Return Code (JOBRC) extracted by Control-M for z/OS (and, if defined, the corresponding ON PGMST +JOBRC statement) is ignored in evaluating whether the job ended OK or NOTOK.
- The OVER JDEF value defines which definition takes precedence in the case of a conflict. A value of N for OVER JDEF indicates that the local ON PGMST definitions in the Job Scheduling Definition take precedence over the common (global) rules defined in the corresponding record of this member. A value of Y for OVER JDEF indicates that the common (global) rules defined in the corresponding record of this member take precedence over the local ON PGMST definitions in the Job Scheduling Definition.
- To define exclusion criteria, you can add the ^ (Hex 5F) or \ (Hex E0) character as the first character in any of the following criteria: MEMBER NAME, JOB NAME, STEP NAME, PROCSTEP NAME, or PROGRAM NAME. The criterion is matched when the actual item is different from the one that you defined. In the example above, Completion Code C0004 is defined as NOTOK for all programs except IDCAMS (third line from the end).
- In the STATUS (OK/NOTOK) position, as an alternative to the OK and NOTOK values, you can define a special (logical) Code value. Such a special Code value must begin with an asterisk character (*), followed by 1 to 4 characters, where the first character after the asterisk is an alphabetic character. This special Code value can be used in the CODES of ON PGMST statements in the same way as Cxxxx, Sxxx, or Uxxxx Codes. In the example above, the last two lines define a Logical Code for when program ASMA90 ends with errors (RC>4), and another Logical Code for when program ASMA90 ends with warnings (RC=4).
- The PGMST member can be divided into sections. A section starts with one or more '++' records called section records, which set filter criteria for the PGMST records that follow, until the beginning of the next section (that is, until the next appearance of a section record). If section records, or some filters in these records, are missing, the corresponding filters or limitations do not exist. Fields in the section records support masking.
  The following filters/keywords can be defined in section records:
  - SCHLIB=Name of Schedule Library
  - TABLE=Name of Schedule Table
  - APPL=Job Application name
- The NEWPARM operator Modify command for the Control-M Monitor can dynamically reload the PGMST member (in addition to reloading the CTMPARM, CTRPARM, and TIMEZONE definitions).
Load-Index Optimization Commands
LOAD-INDEX modify commands enable you to query and modify the defined Load-Indexes.
For more information about Load-Indexes, see Using Load Indexes in workload optimization in the Control-M for z/OS User Guide.
You can perform the following actions:
Table 87a Load-Index modify commands
| Command | Action |
|---|---|
| LIST | Lists details of defined Load-Indexes. |
| SET | Sets a load level value for an external Load-Index. |
| OVERRIDE | Overrides the load level of a Load-Index with a new value. |
| RELEASE | Releases an override on a Load-Index. |
Listing details of Load-Indexes
The LIST (or LI) command obtains a list of defined Load-Indexes.
Use the following operator command:
F CONTROLM,LOAD-INDEX LIST,parameters
The following parameters can be used in this command:
Table 87b Parameters of the LOAD-INDEX LIST command
| Parameter | Description |
|---|---|
| NAME | (Optional) Limit the list of Load-Indexes by name. Masking is supported. The default is NAME=* (that is, the list includes all defined Load-Indexes). |
| TYPE | (Optional) Limit the list of Load-Indexes by type. Valid values: UTIL, 4HRA, EXT, MVA |
| ISOVERRIDDEN (or ISOVER) | (Optional) Limit the list of Load-Indexes by whether or not their load levels are currently overridden. Valid values: Y (Yes) or N (No) |
| LEVEL (or LVL) | (Optional) Limit the list of Load-Indexes by current load level, including Load-Indexes of the specified level only. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| FROMLEVEL (or FROMLVL) | (Optional) Limit the list of Load-Indexes by minimum load level, including Load-Indexes of the specified level and all higher levels. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| DETAILS | (Optional) Show full details for each Load-Index in the list. |
For example, the command LOAD-INDEX LIST NAME=* returns the following list of Load-Indexes. This list contains only the most basic details for each Load-Index: Name, Type, Level, and Override status.
WLIC10I NAME=INDEX1 TYPE=4HRA LEVEL=MEDIUM OVERRIDDEN=Y
WLIC10I NAME=INDEX2 TYPE=EXT LEVEL=HIGH OVERRIDDEN=N
WLIC10I NAME=INDEX3 TYPE=UTIL LEVEL=IDLE OVERRIDDEN=N
For a detailed list, the command LOAD-INDEX LIST NAME=*,DETAILS returns the following output. This list contains additional details for each Load-Index: Description, Set At (timestamp), Set By, and Note.
WLIC12I NAME=INDEX1 TYPE=4HRA DESC=FIRST LOAD-INDEX
WLIC13I LEVEL=MEDIUM OVERRIDDEN=Y SET AT=2019/10/02 11:47:00 BY=M37 NOTE=
WLIC14I -------------------------------------------------------------------
WLIC12I NAME=INDEX2 TYPE=EXT DESC=EXTERNAL LOAD-INDEX
WLIC13I LEVEL=HIGH OVERRIDDEN=N SET AT=2019/10/02 09:47:33 BY=M37 NOTE=
WLIC14I -------------------------------------------------------------------
WLIC12I NAME=INDEX3 TYPE=UTIL DESC=
WLIC13I LEVEL=IDLE OVERRIDDEN=N SET AT=2019/10/02 09:47:13 BY=M37 NOTE=
WLIC14I -------------------------------------------------------------------
Setting the level for an external Load-Index
The SET command sets a load level value for an external Load-Index (type EXT).
Use the following operator command:
F CONTROLM,LOAD-INDEX SET,parameters
The following parameters can be used in this command:
Table 87c Parameters of the LOAD-INDEX SET command
| Parameter | Description |
|---|---|
| NAME | Name of the Load-Index to set. Masking is supported. |
| NEWLEVEL (or NEWLVL) | The new level to set for the Load-Index. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| ISOVERRIDDEN (or ISOVER) | (Optional) Perform the action depending on whether or not the load level of the Load-Index is currently overridden. Valid values: Y (Yes) or N (No) |
| LEVEL (or LVL) | (Optional) Perform the action only if the current load level is as specified. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| FROMLEVEL (or FROMLVL) | (Optional) Perform the action only if the current load level is as specified or higher. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| NOTE | (Optional) Add a note to the action. |
Command:
LOAD-INDEX SET NAME=INDEX2 NEWLEVEL=V-HIGH
Response indicating a successful action:
WLIC15I INDEX2 SET TO V-HIGH ENDED SUCCESSFULLY
WLIC00I ENDED OK, 0001 LOAD-INDEXES MODIFIED
Command:
LOAD-INDEX SET NAME=INDEX1 NEWLEVEL=LOW
Response warning that the specified Load-Index is not external:
WLIC1AW INDEX1 NOT EXTERNAL, SKIPPED
WLIC01W ENDED WITH WARNING, 0000 LOAD-INDEXES MODIFIED
Overriding the level of a Load-Index
The OVERRIDE (or OVER) command overrides the load level of a Load-Index with a new value.
Use the following operator command:
F CONTROLM,LOAD-INDEX OVERRIDE,parameters
The following parameters can be used in this command:
Table 87d Parameters of the LOAD-INDEX OVERRIDE command
| Parameter | Description |
|---|---|
| NAME | Name of the Load-Index to override. Masking is supported. |
| NEWLEVEL (or NEWLVL) | The new level to which the Load-Index is overridden. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| TYPE | (Optional) Perform the action only if the Load-Index is of the specified type. Valid values: UTIL, 4HRA, EXT, MVA |
| ISOVERRIDDEN (or ISOVER) | (Optional) Perform the action depending on whether or not the load level of the Load-Index is currently overridden. Valid values: Y (Yes) or N (No) |
| LEVEL (or LVL) | (Optional) Perform the action only if the current load level is as specified. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| FROMLEVEL (or FROMLVL) | (Optional) Perform the action only if the current load level is as specified or higher. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| NOTE | (Optional) Add a note to the action. |
Command:
LOAD-INDEX OVERRIDE NAME=INDEX1 NEWLEVEL=HIGH
Response indicating a successful action:
WLIC15I INDEX1 OVERRIDE TO HIGH ENDED SUCCESSFULLY
WLIC00I ENDED OK, 0001 LOAD-INDEXES MODIFIED
Releasing a Load-Index override
The RELEASE (or REL) command releases an override on a Load-Index, restoring its load level to the most recently measured or most recently set value.
Use the following operator command:
F CONTROLM,LOAD-INDEX RELEASE,parameters
The following parameters can be used in this command:
Table 87e Parameters of the LOAD-INDEX RELEASE command
| Parameter | Description |
|---|---|
| NAME | Name of the Load-Index to release. Masking is supported. |
| TYPE | (Optional) Perform the action only if the Load-Index is of the specified type. Valid values: UTIL, 4HRA, EXT, MVA |
| ISOVERRIDDEN (or ISOVER) | (Optional) Perform the action depending on whether or not the load level of the Load-Index is currently overridden. Valid values: Y (Yes) or N (No) |
| LEVEL (or LVL) | (Optional) Perform the action only if the current load level is as specified. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| FROMLEVEL (or FROMLVL) | (Optional) Perform the action only if the current load level is as specified or higher. Valid values: CRITICAL, V-HIGH, HIGH, MEDIUM, LOW, IDLE |
| NOTE | (Optional) Add a note to the action. |
Command:
LOAD-INDEX RELEASE NAME=INDEX2
Response indicating a successful action:
WLIC15I INDEX2 RELEASE TO LOW ENDED SUCCESSFULLY
WLIC00I ENDED OK, 0001 LOAD-INDEXES MODIFIED
Displaying the status of active Workload Policies
Two commands are available for generating reports that list active Workload Policies, along with basic details regarding the rules defined in each Workload Policy and their effect on the running of jobs.
WLLIST
The WLLIST command generates a report with details of active Workload Policies in a tabular format. The report demonstrates the impact of rules in the Workload Policies on the execution of jobs.
Use the following operator command to generate the report:
F CONTROLM,WLLIST
The resulting report has the following structure:
- Each line in the report is identified by a message ID, for which you can find more information in the Messages Manual.
- Information for each active Workload Policy can span several lines. In the first line of each block of lines (marked by message ID CTMWLTI), the Workload Policy is identified by its name in the WORKLOAD NAME column, along with a prefix that indicates the type of Workload Policy: either L- (local, defined in Control-M for z/OS) or G- (global, defined in Control-M/EM). In addition, this first line contains information about the total utilization of jobs associated with the Workload Policy in the UTIL column.
- If the Workload Policy contains JOB rules, those rules appear first in the block of lines, one line for each JOB-type rule.
- If the Workload Policy contains RESOURCE (RES) rules, those rules appear next, one line for each RES-type rule.
- If the Workload Policy contains SEPARATE EXECUTION rules, this information appears last for the Workload Policy, and it may span several lines. One line displays SEPARATE EXECUTION in the RULE TYPE column and provides the total number of jobs that are waiting in the Workload Policy due to this rule. The following lines provide information about all Workload Policies that were specified in the rule and are blocking jobs from running, one line for each Workload Policy.
- Even if there are several SEPARATE EXECUTION rules in the Workload Policy, information is provided only for the first SEPARATE EXECUTION rule to be analyzed.

The following table describes the information displayed in the various columns for each rule within each Workload Policy:
Table 87f Columns of information in WLLIST output
| Column | Description |
|---|---|
| RULE TYPE | Type of rule: JOB, RES, SEPARATE EXECUTION, or SEP WKLD. |
| NAME OF RESOURCE/SEP WKLD | For a rule of type RES, the name of the resource that is limited. For a rule of type SEP WKLD, the name of a Workload Policy specified in the rule, one Workload Policy on each line. When jobs of these specified Workload Policies are running, jobs of the current Workload Policy are blocked from running. |
| LIMIT | The defined maximum number of allowed concurrent jobs or maximum allowed number of the specified resource. If the rule is currently not being applied due to scheduling settings (that is, the current time is outside of the time period defined for the rule) or due to defined Load-Index levels, N/A is displayed. |
| UTIL | The number of running jobs that meet the rule criteria, for any rule of type JOB or RES. Note that this number may sometimes be higher than the limit; for example, jobs may have started running before the Workload Policy became active or the rule took effect. A total UTIL value is also provided in the first line of each Workload Policy block (message ID CTMWLTI). |
| WAITING JOBS | The number of jobs that are currently waiting to be run, that is, jobs that are blocked due to the limit defined in the rule. |
CTMWLMI WLLIST - ACTIVE WORKLOAD POLICIES REPORT
CTMWL4I WORKLOAD NAME RULE NAME OF LIMIT UTIL WAITING
CTMWLDI TYPE RESOURCE/SEP WKLD JOBS
CTMWLEI ------------------------------------------- ----- ----- -------
CTMWLTI G-ABC123 00003
CTMWL5I JOB 00003 00003 0000002
CTMWLEI ------------------------------------------- ----- ----- -------
CTMWLTI L-WKLD1 00000
CTMWL5I RES RESOURCE_A N/A
CTMWL5I SEPARATE EXECUTION: 0000002
CTMWL5I SEP WKLD L-HI_PRIO_1 00003
CTMWL5I SEP WKLD L-HI_PRIO_2 00000
CTMWL5I SEP WKLD HI_PRIO_3 N/A
CTMWLEI ------------------------------------------- ----- ----- -------
WLSTAT
The WLSTAT command generates a report with details of active Workload Policies in a textual format. The report demonstrates the impact of rules in the Workload Policies on the execution of jobs.
Use the following operator command to generate the report:
F CONTROLM,WLSTAT
The resulting report has the following structure:
- Each line in the report is identified by a message ID, for which you can find more information in the Messages Manual.
- Information for each active Workload Policy can span several lines. In the first line of each block of lines, marked by message ID CTMWLSI, the Workload Policy is identified by its name, along with an indication of its type (LOCAL or GLOBAL). In addition, this first line contains information about the total utilization of jobs associated with the Workload Policy.
- If the Workload Policy contains JOB rules, those rules appear first in the block of lines, one line for each JOB-type rule, marked by message ID CTMWLOI. Details demonstrate the effect of the job limit on the execution of jobs.
- If the Workload Policy contains RESOURCE rules, those rules appear next, one line for each RESOURCE-type rule, marked by message ID CTMWLPI. Details demonstrate the effect of the resource limit on the execution of jobs.
- If the Workload Policy contains SEPARATE EXECUTION rules, this information appears last for the Workload Policy, and it may span several lines. One line, marked by message ID CTMWLQI, begins with SEPARATE EXECUTION and provides the total number of jobs that are waiting in the Workload Policy due to this rule. The following lines, marked by message ID CTMWLRI, provide information about all Workload Policies that were specified in the rule and are blocking jobs from running, one line for each Workload Policy.
- Even if there are several SEPARATE EXECUTION rules in the Workload Policy, information is provided only for the first SEPARATE EXECUTION rule to be analyzed.
- Any rule that is not currently applied displays N/A in its details. Similarly, under a SEPARATE EXECUTION rule, any Workload Policy that is not currently active displays N/A.
CTMWLMI WLSTAT - ACTIVE WORKLOAD POLICIES REPORT
CTMWLSI GLOBAL WORKLOAD ABC123 , UTILIZATION 00003
CTMWLNI -------------------------------------------------------
CTMWLOI JOB LIMIT 00003 EXECUTING 00003 JOBS, WAITING 0000002 JOBS
CTMWLZI
CTMWLSI LOCAL WORKLOAD WKLD1 , UTILIZATION 00000
CTMWLNI -------------------------------------------------------
CTMWLPI RESOURCE RESOURCE_A LIMIT N/A
CTMWLQI SEPARATE EXECUTION - CURRENTLY WAITING FOR 0000002 JOBS
CTMWLRI LOCAL WORKLOAD HI_PRIO_1 WITH 00003 JOBS EXECUTING
CTMWLRI LOCAL WORKLOAD HI_PRIO_2 WITH 00000 JOBS EXECUTING
CTMWLZI
Refreshing Workload Policies
The WLREFRESH command enables you to recalculate and rebuild all Workload Policies and refresh all their definitions — including filters, rules, and associated calendars.
Use the following operator command to perform the refresh:
F CONTROLM,WLREFRESH
Below is a sample response to this command:
CTML18I COMMAND RECEIVED: WLREFRESH
CTMWL1I CONTROL-M STARTED REBUILDING WORKLOAD DATA
CTMWLBI 0019 LOCAL WORKLOAD POLICIES LOADED
CTMWL2I CONTROL-M FINISHED REBUILDING WORKLOAD DATA. 000000
Basic Administrative Functions
This section discusses the following administrative issues:
Time Zone Support
Overview
Many Control‑M users have production environments spread around the world, and need to schedule jobs based on the time in a time zone other than that on their local system. Because businesses are often situated in locations very remote from each other, the work day on a particular date may span as much as 48 hours in real time.
The Time Zone feature of Control‑M enables you to ensure that a job runs during the time span you require, even though the limits of that time span may be set within another time zone. By this means, you can schedule and link dependencies between jobs that must run on a specific date in one time zone with jobs that run on the same business day in another time zone, which may be very far away.
If you set the TIME ZONE parameter of the job appropriately, Control‑M calculates the corresponding times automatically, and the job runs only during the hours you require.
In order to ensure backward compatibility, jobs that do not use the Time Zone feature continue to run as they always did prior to version 6.1.00. The existing concept of a working day is not affected.
As of version 6.1.00, ODATE has an enhanced definition, in which ODATE has either a VALUE or RUN attribute, which is of particular importance in relation to time zone jobs. For more information, see the discussion of date definition concepts in the introductory chapter of the Control‑M for z/OS User Guide.
Pre-Ordering Jobs
As a result of differences between time zones, the working day on a specific Control‑M logical date can be a period of up to 48 hours, because the actual length of time between the beginning of the day on a date in the furthest East time zone and the end of that day in the furthest West time zone can reach almost 48 hours. A job in one time zone may be dependent on the outcome of another job in a different time zone. The ODATE of each job appears to the users in two different time zones to be identical, but in the absence of some adjustment to take account of the different time zones, one of the jobs may in fact run on what appears at one site to be a different work day than the work day at the site where the other job runs.
Because of this, it is necessary to pre-order jobs, in order to ensure that they run at the time the user wants.
In the case of a time zone job, the logical date is shifted to the actual date defined in the TIME ZONE parameter of the job, so that the logical date begins at the New Day time in the distant time zone and ends at the next New Day time in that same time zone.
The New Day procedure is executed at the New Day time at the site where Control‑M is running. The New Day procedure orders all pre-ordered jobs for all time zones. However, for the Time Zone feature to operate, the Active Jobs file must already contain any job that may start during the next 24 hours. The New Day procedure therefore orders all jobs with Time Zone parameter settings for the next working day. This ensures that those time zone jobs will be in the Active Jobs file, ready to be made eligible when the new ODATE arrives. Jobs without Time Zone parameter settings are ordered for the current ODATE as usual.
All jobs that are pre-ordered have the ODATE attribute RUN, because in all Time Zone jobs Control‑M automatically treats ODATE as a RUN attribute rather than a VALUE attribute. This ensures that they do not run on the wrong date.
Time Zone jobs are pre-ordered according to the following rules:
-
If a SMART Table Entity contains a Time Zone parameter setting, all jobs in the SMART Table will be pre-ordered for ODATE+1, even if they do not contain Time Zone parameter settings.
-
If the SMART Table Entity does not contain a Time Zone parameter setting, no job in it will be pre-ordered for ODATE+1, even if one of the individual jobs in it contains Time Zone parameter settings.
-
If a Time Zone job is not in a SMART Table, it will be pre-ordered for ODATE+1.
The activation of the pre-ordering feature is controlled by the GDFORWRD parameter in the CTMPARM member. The default value for GDFORWRD is Y. When GDFORWRD is set to N, pre-ordering does not occur, and all jobs are ordered for ODATE, even if they are Time Zone jobs.
A user who wants to change the ODATE attribute to RUN can do so, as follows:
-
When a job is ordered from the Job List Screen (Screen 2), the confirmation window contains the parameter WAIT FOR ODATE. The default setting for this parameter is N, but if the user changes this to Y, the ODATE of the job has the attribute RUN.
-
When a job is ordered using the CTMJOB utility, the ODATEOPT parameter can be changed to RUN. This also changes the ODATE attribute to RUN in the New Day procedure.
The CLOCKnn Member
In order for the Time Zone feature to work properly, you must check the information in the CLOCKnn member of the SYS1.PARMLIB library, where nn is either the number specified in the IEASYS member in SYS1.PARMLIB, or 00.
You must verify the information in the following statement:
TIMEZONE x.hh.mm.ss
where
-
x is either W (West of the Greenwich Meridian, that is, -GMT) or E (East, that is, +GMT)
GMT (Greenwich Mean Time) is also known as UTC (Coordinated Universal Time).
-
hh are system time hours
-
mm are system time minutes; valid values are either 00 or 30
-
ss are system time seconds
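For example, at a site whose local time is five hours behind (west of) GMT, the statement would typically read:
TIMEZONE W.05.00.00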
For full information on the TIMEZONE statement, see the IBM manual MVS Initialization and Tuning Reference.
Defining a Job for a Specific Time Zone
The TIME ZONE parameter appears in the Job Scheduling Definition screen (Screen 2) and the Active Environment Zoom screen (Screen 3.Z). The parameter is set using one of the 3-character codes in the TIMEZONE member in the IOA PARM library. A sample TIMEZONE member is provided, but you can edit this to suit your local site requirements. For example, you can use "EST" or "NYC" instead of "G-5" for US Eastern Standard Time.
You can also add a time zone to the predefined list. For more information, see Adding and Modifying Time Zone Definitions.
WARNING: If you modify the 3-character name of a time zone in the TIMEZONE member, but fail to modify every job scheduling definition that uses that time zone in the same way, job scheduling definitions that specify that time zone become invalid. The same happens if you delete a time zone from the TIMEZONE member.
When defining Time Zone jobs, you must take into account the following special considerations:
-
If you define a new Time Zone job, you must save it at least 48 hours before the first execution date. This ensures that the job is ordered automatically by the New Day procedure or the User Daily procedure, and is ordered on the date you want.
-
If a new Time Zone job must run on the day when you define it, order it manually, by one of the following means:
-
using the CTMJOB utility
-
online, using the Job Scheduling Definition screen (Screen 2)
-
-
In addition to the Time Zone facility, you can also order a job for execution on a future date. For more information on this facility, see the description of the ODATEOPT parameter in the discussion of the CTMJOB utility in the INCONTROL for z/OS Utilities Guide.
-
The New Day procedure orders a Time Zone job if the scheduling date of the job occurs within the next 48 hours. However, the User Daily procedure only orders jobs with scheduling criteria for the current working date. BMC therefore recommends that you arrange the jobs for each time zone in a separate table. For more information, see the following section.
Recommended Method for Ordering Time Zone Jobs
Prior to version 6.1.00, the Active Jobs file contained only jobs that were ordered for the current working day. When the end of the working day arrived, the New Day procedure removed from the Active Jobs file all jobs with that ODATE, provided that the setting of the MAXWAIT parameter of specific jobs did not prevent such removal. Jobs so removed ceased to be eligible for submission.
As of version 6.1.00, the New Day procedure does not remove any Time Zone job from the Active Jobs file until the end of the ODATE at the Time Zone of the job, when the job is no longer eligible for submission.
With the introduction of the Time Zone feature, jobs may be pre-ordered before the ODATE specified in them, and may remain in the Active Jobs file after that ODATE.
As a result
-
jobs may stay in the Active Jobs file for more than 24 hours
-
the Active Jobs file may contain jobs that are to run on different ODATEs
-
the Active Jobs file may consequently be much larger
-
processing may consequently be slowed
This problem can be avoided by doing the following:
-
Create a separate table for each time zone that you use, and put the jobs for each time zone in the appropriate table.
-
Define a User Daily job with an order statement for each table created in step 1, as follows:
-
Set an AutoEdit value in one of the following ways:
-
Set the value of ODATE to %%DT. When the User Daily job runs, this value is replaced by an appropriate date. The date depends on the setting of the GDFORWRD parameter in member CTMPARM of the IOA PARM library.
-
If GDFORWRD is set to Y, %%DT contains the date of the next day.
-
If GDFORWRD is set to N, %%DT contains the current Control-M work date.
-
-
Set the ODATEOPT parameter to RUN. The ODATE value is then used to determine the working date on which the jobs run. Note that ODATEOPT can be abbreviated to ODOPT.
-
An example order statement:
-
ORDER DD=DALIB,MEMBER=TIMEZONE,ODATE=%%DT,ODOPT=RUN
-
The TIMEZONE member in the above example is the name of one of the tables created in step 1.
-
For more details on the ORDER statement, refer to the CTMJOB utility in the INCONTROL for z/OS Utilities Guide.
-
-
-
Modify the User Daily table, using the following parameters:
-
Set the time zone to the appropriate value.
-
Set the time for the User Daily job so that it runs just after the beginning of the working day in that time zone.
-
If you follow this procedure, jobs are ordered only when necessary, resulting in a smaller Active Jobs file and faster processing.
Adding and Modifying Time Zone Definitions
The time zone definitions used by Control‑M are kept in the TIMEZONE member in the IOA PARM library. Control-M also supports definitions for daylight saving time zones.
If your Control-M for z/OS is registered to Helix Control-M, time zone definitions are synchronized with Helix Control-M and the TIMEZONE member is periodically overwritten by the definitions in Helix Control-M. In this case, do not modify time zone definitions on the mainframe.
Standard Time Zone Definitions
You can add a new standard time zone definition, or modify an existing definition, using the following syntax:
xxx = GMT+hh.mm | GMT-hh.mm
In the preceding syntax statement
-
xxx is a 3-character time zone code to be used as a value for the TIME ZONE parameter in job scheduling definitions
-
hh is the difference in hours between the relevant time zone and Greenwich Mean Time (GMT), expressed as a 2-figure number
Use a leading zero if necessary.
-
mm is the additional difference in minutes between the relevant time zone and Greenwich Mean Time (GMT), expressed as a 2-figure number
To create a new time zone definition, NYC, for New York, where the time is five hours earlier than Greenwich Mean Time (GMT), use the following syntax:
NYC = GMT-05.00
If you modify the 3-character name of a time zone in the TIMEZONE member, but fail to modify every job scheduling definition that uses that time zone in the same way, job scheduling definitions that specify that time zone become invalid. The same happens if you delete a time zone from the TIMEZONE member.
To activate changes in any time zone definition, do the following:
-
Use the NEWPARM command to refresh the time zone member used by the ControlM monitor. For information on the procedure for using the NEWPARM command, see Dynamically Refreshing Control-M Parameters.
-
Log off TSO, and log on again.
Daylight Saving Time Zone Definitions
You can include daylight saving time definitions when defining a time zone. To do so, define a time zone with the following statement:
{LOCAL | xxx} = [GMT+hh.mm | GMT-hh.mm] FROM date hh.mm
TO date hh.mm [GMT+hh.mm | GMT-hh.mm]
In the preceding syntax statement
-
LOCAL is a special time zone definition that specifies the parameter as relative to the local computer where Control-M is operating
A LOCAL definition is needed only when specifying a daylight savings time range for the local time zone.
-
xxx, hh and mm are the time zone code, hours, and minutes as described in Standard Time Zone Definitions
-
In a FROM or TO clause, date is the date (in DD.MM or MM.DD format, depending on the installation date format DATETYP) when the clock time is changed
-
In a FROM or TO clause, hh and mm are the time in hours and minutes when the clock time is changed, each expressed as a 2-figure number
In all daylight saving time zone definitions, the first time period relates to the winter zone and the second time period relates to the summer zone. A zone cannot span over the end of the calendar year (for example, you cannot define a zone that starts in November and ends in February).
The FROM keyword defines the beginning of the daylight saving time period, and the TO keyword defines the end of the daylight saving time period. The first GMT clause defines the standard (non-daylight saving time) difference between the local time and GMT, while the second GMT clause (the one after the TO clause) defines the time difference during the daylight saving period (the dates between the FROM and TO clauses).
You can define a time zone without a daylight saving time zone definition. However, when you use the FROM keyword, you must then enter a full daylight saving time definition, including the TO keyword as well as the FROM keyword.
To create a new daylight saving time zone definition, JST, for Japan, where the time is nine hours later than Greenwich Mean Time (GMT), and daylight saving time begins on March 1st at 1:59 and ends on October 24th at 2:00, use the following syntax:
JST = GMT+09.00 FROM 01.03 01.59 TO 24.10 02.00 GMT+10.00
To create a new daylight savings time zone definition for the same time zone if Control-M is operating in that time zone, use the following syntax:
LOCAL = GMT+09.00 FROM 01.03 01.59 TO 24.10 02.00 GMT+10.00
Daylight Saving Time Considerations
In the IBM manual MVS Setting up a Sysplex, IBM recommends that you do not reset your time-of-day clock to switch to, or from, daylight saving time. Instead, IBM recommends that you set the time-of-day clock to Greenwich Mean Time (GMT), and use the CLOCKnn member in the PARMLIB library to adjust this time setting as appropriate for the local time at your site.
The following sections discuss adjusting the time setting forward or backward by one hour, using the CLOCKnn member, to take account of daylight saving time. All examples assume 02:00 a.m. as the time of change.
Advancing the Clock Forward
The following examples assume that the clock is moved ahead at 2:00 a.m. (that is, 2:00 a.m. becomes 3:00 a.m.):
New Day Procedure
No special action should be taken after the clock is advanced.
-
If the New Day procedure is scheduled to begin before 2:00 a.m., it starts working before the clock is advanced and continues normally (even if the clock is advanced while the New Day procedure is in process).
-
If the New Day procedure is scheduled to begin at exactly 2:00 a.m., the same considerations apply. It is possible that the New Day procedure starts execution before the clock is manually changed. Otherwise, changing the clock initiates New Day processing.
-
If the New Day procedure is scheduled to begin between 2:00 a.m. and 3:00 a.m., once the computer clock is advanced, the monitor starts the normal New Day processing.
-
If the New Day procedure is scheduled to begin after 3:00 a.m., no action is required. The monitor starts the standard New Day procedure.
Time-Dependent Shouts
-
Shout messages scheduled before 2:00 a.m. do not require any action.
-
Shout messages scheduled between 2:00 a.m. and 3:00 a.m. are issued as soon as the clock is advanced, even though there may not actually be a delay in production; the time frame for production is simply smaller.
-
The above also applies to jobs that have shout messages scheduled at a later time (for example, 6:00 a.m.). These jobs may be considered late because of the tighter production time frame.
Time-Dependent Schedules (FROM-UNTIL)
Jobs whose scheduled time overlaps the time gap created by the clock shift may need manual intervention. For example, it is possible that a job with a FROM value of 2:15 a.m. and an UNTIL value of 2:45 a.m. will not be submitted at all. Adjust these jobs manually.
Cyclic Jobs
The next run of a cyclic job with an interval of more than one hour occurs one hour sooner than scheduled. Cyclic jobs with an interval of less than one hour run immediately.
IOA Log File
The IOA Log file does not contain entries with timestamps between 2:00 a.m. and 3:00 a.m. Any KSL scripts and programs that rely on log entry time must be checked for possible discrepancies due to advancing the clock.
Control-M Reports
Certain Control-M reports (such as CTMRNSC) that depend on the IOA Log file to report on job elapsed times may show incorrect elapsed times for jobs that either started or ended (or both) in the one hour period during which the clock was moved forward.
QUIESTIME
When the clock is moved forward, some jobs that are selected in accordance with QUIESTIME may finish later than QUIESTIME.
Moving the Clock Backward
The following examples assume that the clock is moved back at 2:00 a.m. (that is, 2:00 a.m. becomes 1:00 a.m.):
New Day Procedure
-
If the New Day procedure starts before 1:00 a.m., do not take any special action. The New Day procedure runs only once.
-
If the New Day procedure starts at exactly 1:00 a.m., do not adjust the CLOCKnn member at 1:00 a.m., because this would trigger a second New Day process, which requires manual intervention. It is advisable to wait a few minutes (until 2:05 a.m., for example) and then adjust the CLOCKnn member.
-
If the New Day procedure is scheduled to begin between 1:00 a.m. and 2:00 a.m., do one of the following:
-
wait at least one full hour after the daily run begins, and then adjust the CLOCKnn member (the New Day procedure will already have ended)
or
-
update the CLOCKnn member before New Day processing begins.
For example, if the New Day procedure is scheduled to begin at 1:45 a.m., adjust the CLOCKnn member at about 1:40 a.m. If this is not done by 1:40 a.m., wait until about 2:50 a.m. and then adjust the CLOCKnn member.
-
-
If the New Day procedure is scheduled to begin after 2:00 a.m., do not take any special action.
Time-Dependent Shouts
Shout messages scheduled between 1:00 a.m. and 2:00 a.m. may be issued twice.
Time-Dependent Schedules (FROM-UNTIL)
Do not take any special action for jobs with FROM‑UNTIL schedules. Jobs scheduled to start between 1:00 a.m. and 2:00 a.m. start at the first occurrence of that hour (provided that other conditions, such as input conditions and resources, are met). However, they can be restarted after the CLOCKnn member has been adjusted.
Cyclic Jobs
The next run of a cyclic job occurs one hour later than scheduled.
IOA Log File
The IOA Log file may contain entries with times earlier than previous entries, due to the time shift.
Control-M Reports
Certain Control-M reports (such as CTMRNSC) that depend on the IOA Log file to report on job elapsed times may show incorrect elapsed times for jobs that either started or ended (or both) in the one hour period during which the clock was moved backward. Some reports (such as CTMROGR) may totally omit reporting on such jobs if their apparent job end time precedes the job start time due to the clock movement.
QUIESTIME
When the CLOCKnn member is adjusted to "move the clock back", there may be jobs that are not selected for execution because of the specified QUIESTIME, even though they could finish before the QUIESTIME (because an hour was added by adjusting the CLOCKnn member).
Time Zone Support
If you are using the Control‑M Time Zone feature, the following matters are of particular importance:
Daylight Saving Time at Your Site
For information about how to switch to or from daylight saving time at the site where Control‑M is running, see Daylight Saving Time Considerations.
In order to ensure that the Control‑M Time Zone feature works as it should, you must follow the IBM recommendation. Use the TIMEZONE statement in the CLOCKnn member in the PARMLIB library to adjust the time setting at your site.
Daylight Saving Time in the Time Zone of a Job
If the time when a job must run depends on the local time in a Time Zone other than the one at your local site, you must modify the definition of that Time Zone when daylight saving time begins or ends there.
Assume a job must not run before the New York Stock Exchange closes. The Time Zone in this job is defined as NYC. When daylight saving time begins or ends in New York, the entry NYC in the TIMEZONE member must be modified, by adding or subtracting an hour as appropriate.
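For example, based on the NYC definition shown earlier, NYC = GMT-05.00 would typically be changed to NYC = GMT-04.00 when daylight saving time begins in New York, and changed back to NYC = GMT-05.00 when it ends.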
For more information on how to modify the definition of a Time Zone, see Adding and Modifying Time Zone Definitions.
Shout / Mail Facility Destination Table Administration
The IOA Shout (SHOUT WHEN, DO SHOUT) and Mail (DO MAIL) facilities allow the user to specify messages / e-mails to be sent to various destinations, defined by the following tables:
-
ControlM Dynamic Destination Table (IOADEST)
Destinations in a production environment are not necessarily fixed. For example, the TSO logon ID of the shift manager is different in every shift. The Dynamic Destination table enables the user to specify a group name destination and which final destinations it represents. For more information about setting up Dynamic Destination tables, see both IOA Concepts and Components, and IOA Administration.
Although the Control-M Shout facility supports use of the MAILDEST table, BMC recommends that e-mails be sent using the DO MAIL facility and not the Shout facility (because of inherent limitations when using the Shout facility to send e-mail). See the Control-M for z/OS User Guide for information on the DO MAIL parameter.
-
ControlM Mail Destination Table (MAILDEST)
Mail destinations consist of names, addresses, and groups to whom Control-M can send e-mail messages. The following section describes how to set up the Mail Destination table.
-
IOA SNMP Destination Table (SNMPDEST)
SNMP destinations consist of host names, IP addresses, nicknames, group names, and port numbers to whom Control-M can send SNMP traps (messages). For information about setting up the table, see IOA Administration.
Setting up the Mail Destination Table (MAILDEST)
The Mail Destination table (MAILDEST) contains a list of names, addresses and groups to whom e–mail messages can be sent. The Mail Destination table is loaded during the initialization of the Control‑M monitor. It can also be loaded using operator command NEWMAILDST. For more information about loading the Mail Destination table, see Loading a New Dynamic Destination Table (IOADEST).
When modifications are made to the Mail Destination table, it must be refreshed. For more information, see Refreshing the Mail Destination Table (MAILDEST).
The options in the following table are available for specifying addresses using the Control-M DO SHOUT, SHOUT WHEN, and DO MAIL parameters (and also within the Mail Destination table itself).
Table 88 Options for Specifying Addresses
Option | Description
---|---
Using Full Mail Addresses | Complete addresses are specified (for example, [email protected]). You may want to use this option for specifying recipients that do not receive mail from you on a regular basis.
Using the Default Suffix | The name of the recipient is specified (for example, GEORGE_SMITH), and the default company suffix is appended to the end of the recipient name to create a complete mail address. The company suffix is stored in the Mail Destination table. You may want to use this option for internal company mail, because the company suffix is the same for all internal recipients.
Using Nicknames | A short name for the recipient is specified (for example, GEORGE) whose complete name is defined in the Mail Destination table. You may want to use this option for specifying recipients to whom you send mail frequently, but who do not belong to the company.
Distribution lists can also be set up in the Mail Destination table.
Mail Destination Table Syntax
The following table describes the proper syntax for the Mail Destination Table.
Table 89 Mail Destination Table Sections
Parameter | Description
---|---
Nicknames Section | This section sets up nicknames for recipient names and their corresponding addresses. These nicknames are used as shortcuts when defining mail messages. Any number of nicknames or recipients can be defined. This feature will not function unless you specify both the NICK and the ADDR parameters.
NICK | Defines a short name for the recipient (for example, GEORGE). Any value specified in the NICK parameter can be used as a recipient in a mail message.
ADDR | Full email address of the recipient (for example, [email protected]).
Groups Section | This section facilitates the creation of groups of addresses, for use as distribution lists. Any number of groups or distribution lists can be defined. The addresses for a group or distribution list can be specified with any of the following: full addresses (TOADDR and CCADDR), recipient names to which the default mail suffix is appended (TOMAIL and CCMAIL), or nicknames (TONICK and CCNICK).
GROUP | Name of the group or distribution list.
TOADDR | Defines a full email address of the recipient.
CCADDR | Defines a full email address of the person copied on the email.
TOMAIL | Defines the name of the email recipient, to which the default mail suffix is appended, as defined by the DFLTSFFX parameter in the MAIL section of the IOAPARM member. For example: GEORGE_SMITH.
CCMAIL | Defines the name of the person copied on the email, to which the company mail suffix is appended, as defined by the DFLTSFFX parameter in the MAIL section of the IOAPARM member. For example: MARY_JONES.
TONICK | Defines the name of the recipient, as specified in the Nicknames section by a nickname. For example: GEORGE.
CCNICK | Defines the name of the person copied on the email, as specified in the Nicknames section by a nickname. For example: MARY.
Creating the Mail Destination Table
A sample Mail Destination table is provided in member MAILDEMO of the IOA PARM library. Copy the member to MAILDEST and modify the copy, leaving the original member MAILDEMO intact.
The DO MAIL statements, shown in the following example, are valid based on the contents of the sample mail destination table shown below:
Figure 19 DO MAIL Example
ON PGMST ANYSTEP PROCST CODES OK
DO MAIL
TO EVERYBODY
CC BMC_STAFF
EXTERNAL_RECIPIE
SUBJ JOB FINISHED O.K
TEXT Continue processing
Figure 20 Sample Mail Destination Table
*------------------------------------------------------------*
* DEFINITION OF ALL 'NICKNAME' ADDRESSES *
*------------------------------------------------------------*
NICK=GEORGE
[email protected]
NICK=MARY
[email protected]
NICK=MARTA
[email protected]
*------------------------------------------------------------*
* DEFINITION OF ALL 'NICKNAME' GROUPS *
*------------------------------------------------------------*
GROUP=EVERYBODY
[email protected]
[email protected]
TOMAIL=ROBERT
TOMAIL=LESLIE
TONICK=GEORGE
TONICK=MARY
CCMAIL=DAVID
CCNICK=MARTA
GROUP=BMC_STAFF
TOMAIL=ROBERT
TOMAIL=LESLIE
CCMAIL=DAVID
GROUP=EXTERNAL_RECIPIE
[email protected]
[email protected]
TONICK=GEORGE
TONICK=MARY
CCNICK=MARTA
**************************** Bottom of Data **********************
Adjusting Resources
Adjusting Resource Acquisition in the Scheduling Algorithm
Control‑M enables the user to modify the Control‑M scheduling algorithm using Control‑M User Exit CTMX004. The user can assign weight (importance) to quantitative resources, such as tapes, CPU, and so on. This exit is loaded when the Control‑M monitor is started. To replace the current exit with a new one, with a new set of weights, use the following operator command:
F CONTROLM,RELOAD=CTMX004
Using the Automatic Tape Adjustment Facility
The Automatic Tape Resource Adjustment facility optimizes usage of tape or cartridge drives during production batch processing. This facility makes modifications automatically (as opposed to prior versions, in which the user had to manually modify the job definition as necessary). This facility enables Control‑M tape drive resources to be automatically assigned (overriding tape drive resource allocations specified in the RESOURCE parameter of the job scheduling definition).
The Automatic Tape Resource Adjustment facility can make the modifications automatically because it tracks usage statistics for each tape resource as it is used.
To implement the Automatic Tape Adjustment facility, perform the following steps:
-
Set the AUTOTAPE parameter to Y (Yes) during ControlM customization in the INCONTROL Installation and Customization Engine (ICE).
-
Modify the UNITDEF member in the ControlM PARM library to specify the device numbers of all drives that the facility must control. The format of the definitions is
devicetype=from-to,DESC=description
or, for multiple unit address ranges,
devicetype=(from-to,from2-to2,...),DESC=description
The following table describes the parameters in this format:
Table 90 UNITDEF Parameters
Parameter
Description and Values
devicetype
Type of device being defined. A device type is the set of all drives of the same type. Each tape drive type must be named. The name can be a maximum of 20 characters long, and must not contain embedded blanks. A maximum of 12 tape drive types can be defined.
For example, all 3420 tape drives can be named TAPE, and all 3490 cartridges can be called CART.
WARNING: The order of the device types in the UNITDEF member must not be changed. Old, obsolete device types should not be deleted, and new device types should only be added at the end of the member.
from-to
Unit address ranges for each device. The unit address ranges are specified as a series of pairs, the first of which is the starting address and the second of which is the ending address of the device. All addresses must be specified in 4 digits (for example, 0460 and not 460).
If one tape drive type consists of more than one unit address range, additional ranges can be specified, separated by commas and enclosed in parentheses.
description
Descriptive text for the device being defined.
************************************************************************
TAPE=0460-046F,DESC=UNITS FOR EXTERNAL TAPES
CARTRIDGE=(0480-0483,0440-0445,0300-031F,0552-0553,0554-0555,
0556-0557),DESC=3490 RANGE
************************************************************************
-
Shut down and restart the ControlM monitor.
-
Exit the IOA online environment and reenter the IOA online environment.
For more information about turning on the Automatic Tape Adjustment facility, see the chapter on customizing INCONTROL products in the INCONTROL for z/OS Installation Guide: Customizing.
Refreshing the UNITDEF Table
To refresh the Unit Definition (UNITDEF) table, issue the following operator command:
F CONTROLM,NEWUNITDEF
Quiescing Quantitative Resources
Quantitative resources can be assigned to jobs at any time, as long as they are available. The QUIESQRES command enables users to activate and deactivate quantitative resources for a defined time, and to display the status of those resources. For further information see Activating and Deactivating Quiesced Quantitative Resources.
Expanding Control-M Files
The following Control‑M files can be expanded using the Installation and Customization Engine (ICE):
-
Resources file (RES)
-
Jobs Dependency Network file (GRF)
-
Statistics file (STAT)
-
Dual Active Jobs file
-
History Jobs file (HST)
-
Journaling file
-
Journaling conditions file
-
Active Jobs file (AJF) (For the procedure, see the Control-M file customization section in the INCONTROL for z/OS Installation Guide: Customizing.)
Perform the following steps to expand Control‑M files:
-
Close all monitors and IOA activities. For example, to shut down the ControlM monitor, issue operator command P CONTROLM.
-
Rename the old file (that you want to expand).
-
Using ICE, select Customization. Enter CTM in the Product field, select Product Customization, and then select major step 2, "Customize CONTROL-M Dataset Parameters." Perform the minor steps in order for each file you want to expand.
-
Perform minor step 1, "Customization Instructions," which provides an overview of the process.
-
Perform minor step 2, "ControlM Dataset Parameters," which lets you specify values for ICE to use to calculate the appropriate file size. During this step, specify a question mark (?) in any parameter field for help.
Modify only the parameters relevant for the files you want to expand. The following parameters can be changed using this step:
Table 91 Parameters for Expanding Various Control-M Files
Parameter | Description | Relevant Files
---|---|---
AJFSIZE | Number of records in the Active Jobs file. Located: Major Step 1 "CTMPARM Post-Installation"; Minor Step 2 "CKP Parameters". | Active Jobs file, Dual Active Jobs file, and AJF for Journaling
AJFTYPE | Type of AJF file (BASIC, LARGE, or EXTENDED) | Active Jobs file, Dual Active Jobs file, and AJF for Journaling
CNDREC# | Number of records in the Journaling Conditions file. This is the same parameter that controls the number of records in the IOA Conditions file. | Conditions file for Journaling
RESBQNT# | Maximum number of different resources defined | Resources file
RESQNT# | Number of records for Quantitative resources | Resources file
RESCNTL# | Number of records for Control resources | Resources file
HSTSIZE | Number of records in the History AJF | History Jobs file
HSTTYPE | Type of HST file (BASIC, LARGE, or EXTENDED) | History Jobs file
JNLPSIZ | Primary space (cylinders) for the Journaling file | Journaling file
JNLSSIZ | Secondary space (cylinders) for the Journaling file | Journaling file
GRFSIZE | Space (cylinders) for the GRF file | Jobs Dependency Network file
STTPSIZ | Primary space (cylinders) for the Statistics file | Statistics file
STTSSIZ | Secondary space (cylinders) for the Statistics file | Statistics file
-
Perform minor step 3, "Save Parameters into Product Libraries," to save the parameter values that you specified in minor step 2.
-
Minor steps 4 through 10 are jobs that perform the expansion. Perform only those steps relevant to the files you want to expand.
Table 92 Jobs for Expanding Various Control-M Files
Job
Files
Job Description
FORMCKP in the Control‑M INSTALL library
Active Jobs file and Dual Active Jobs file
This job allocates and formats a new Active Jobs file (AJF) with the new size. If the journaling feature is being utilized, you must also expand the AJF for Journaling file.
FORMGRF in the Control‑M INSTALL library
Jobs Dependency Network file
This job allocates and formats a new Jobs Dependency Network (GRF) file with the new size.
FORMHST in the Control‑M INSTALL library
History Jobs file
This job allocates and formats a new History Jobs file (HST) with the new size.
FORMJAJF in the Control‑M INSTALL library
AJF for Journaling file
This job allocates and formats a new AJF for Journaling file with the new size.
FORMJCND in the Control‑M INSTALL library
Conditions file for Journaling
This job allocates and formats a new Conditions file for Journaling with the new size.
FORMJRES in the Control-M INSTALL library
Resources file for Journaling
This job allocates and formats a new Resource file for Journaling with the new size.
FORMJRNL in the Control‑M INSTALL library
Journaling file
This job allocates and formats a new journaling file with the new size.
FORMRES in the Control‑M INSTALL library
Resources file
This job allocates and formats a new Resources file (RES) with the new size.
FORMSTT in the Control‑M INSTALL library
Statistics file
This job allocates and formats a new Statistics (STAT) file with the new size.
If it is anticipated that the Statistics file will grow to be very large (over 4 GB), then the STATFILE should be defined with Extended Addressability capability.
Extended format data sets are system-managed and the mechanism for requesting extended format is through the SMS data class DSNTYPE=EXT parameter and subparameters R (required) or P (preferred). The storage administrator can specify R to ensure the data set is extended. The storage administrator must also set the extended addressability attribute to Y to request extended addressability. See the IBM DFSMS: Using Data Sets manual for further details.
After allocation, check the LISTCAT output for the Extended Addressability indicator, EXT-ADDR, in the Attributes group to ensure that extended addressability is in effect.
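As an illustration only, an IDCAMS step along the following lines could be used to display the catalog attributes and check the EXT-ADDR indicator (the dataset name CTM.PROD.STAT is hypothetical; substitute the name of your Statistics file):
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* CTM.PROD.STAT IS A HYPOTHETICAL STATISTICS FILE NAME */
  LISTCAT ENTRIES(CTM.PROD.STAT) ALL
/*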
-
Copy the old files into the new ones according to the instructions below:
Table 93 Copy Methods for Expanding Various Control-M Files
Files
Copy Method
Active Jobs file, Dual Active Jobs file, and AJF for Journaling file
Copy using utility CTMCAJF. For information about the CTMCAJF utility, see the INCONTROL for z/OS Utilities Guide.
Resources file
Copy using utility CTMCRES. For information about the CTMCRES utility, see the INCONTROL for z/OS Utilities Guide.
History Jobs file
Copy using utility CTMHCOP. For information about the CTMHCOP utility, see the INCONTROL for z/OS Utilities Guide.
Journaling file
Copy the old journaling file into the new file using a standard IBM copying utility.
Conditions file for Journaling
Copy using the IOACCND utility. For information about the IOACCND utility, see the INCONTROL for z/OS Utilities Guide.
Resources file for Journaling
Copy using the IOACRES utility. For information about the IOACRES utility, see the INCONTROL for z/OS Utilities Guide.
Statistics file
Copy the old STAT file into the new file using IDCAMS REPRO.
-
Start the monitors by issuing the following operator command:
S CONTROLM
EAV Support
All Control-M databases and files can reside on Extended Address Volumes (EAV).
For the AJF, History, and Journal files, this is controlled by choosing the AJFTYPE and HSTTYPE parameters as Extended or Large in the ICE installation panels.
For the Statistics file, see the information in Table 92 in Expanding Control-M Files.
For the Control-R CDAM files, this is controlled by the EAVUSE#R installation parameter.
For further details, see the INCONTROL for z/OS Installation Guide: Installing.
The IOA Conditions file and the IOA Log file must not reside on Extended Address Volumes.
Expanding the IOA Manual Conditions File (NRS)
To increase the size of the IOA Manual Conditions file, see Expanding Various IOA Files.
Active Jobs File Space Reuse Facility
The Active Jobs File (AJF) Space Reuse Facility is used (in parallel with Control‑M functionality) to dynamically delete finished scheduled jobs from the Active Jobs File, and reuse the space for new jobs. The AJF Space Reuse Facility is controlled by the REUSTIME and REUSAPPL Control‑M installation parameters.
REUSTIME sets the retention period of finished scheduled jobs in the AJF before they are deleted. REUSAPPL specifies the prefix of the APPL parameter for the scheduled jobs that are to be handled by AJF Space Reuse Facility.
For further information see the references to the REUSTIME and REUSAPPL parameters in the INCONTROL for z/OS Installation Guide.
For AJF Space Reuse functionality and for keeping information about free and occupied AJF records, Control-M uses new index records (called MIF Index Records) in the AJF file. These index records are created or rebuilt if the AJF Space Reuse Facility is activated (that is, if REUSTIME is not zero) during AJF format or AJF compress (either by the CTMCAJF utility or by the Control‑M New Day Procedure). As a result, if you dynamically activate the AJF Space Reuse Facility (by specifying a valid value other than zero for the REUSTIME parameter and by stopping and restarting the monitor), the facility is activated, but only after the next AJF compress or New Day Processing.
To dynamically inactivate the facility, set REUSTIME to zero, stop the Control-M monitor, and compress the AJF. After changing the REUSAPPL parameter, you must stop and restart the Control-M monitor to apply the new value.
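For example, assuming REUSTIME has already been set to zero, the sequence might be to stop the monitor with
P CONTROLM
then run the CTMCAJF utility to compress the AJF (see the INCONTROL for z/OS Utilities Guide for the exact syntax), and then restart the monitor with
S CONTROLM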
The retention period begins the moment a job finishes, and does not depend on Control-M monitor activity or the moment that a job received ENDED status in Control-M.
The AJF Space Reuse facility deletes finished scheduled jobs that match the following criteria:
-
The jobs must finish with OK, Forced OK, or Deleted status.
-
Scheduled jobs belonging to a SMART Table are processed by the AJF Space Reuse Facility only after the corresponding SMART Table has finished with OK status.
The AJF Space Reuse facility does not delete finished scheduled jobs that match the following criteria:
-
The jobs are in Held status.
-
Jobs with a MAXWAIT value of 99 (unless they are in Deleted status).
-
Jobs containing a Time Zone specification.
History file processing for AJF space reuse
By default, when the Control-M monitor is started, History file processing for AJF space reuse is enabled. As a result, when a job matches the relevant criteria for space reuse and the job contains a History retention period, the job is copied to the History AJF before its records are designated for space reuse. If History file allocation processing is disabled, a job containing a retention period does not subsequently become a candidate for space reuse. The job is excluded from space reuse until History file processing is enabled.
History file activity is controlled by issuing modify commands to the Control-M monitor. The following modify commands are available:
F CONTROLM,HISTALOC=DISABLE
This command deallocates the History file from the Control-M monitor. As a result, space reuse continues processing without considering for deletion jobs having a History retention period.
The DISABLE command can be used when the History file size must be increased.
F CONTROLM,HISTALOC=ENABLE
This command allocates the History file to the Control-M monitor. As a result, space reuse will consider for deletion jobs having a History retention period.
Expanding the CMEM file configuration
To add new entries to the CMEM file configuration, see the topic about adding, deleting, and/or changing an SMFID in the CPUs List in the INCONTROL for z/OS Installation Guide.
SYSDATA processing
For information on the definition, use, and management of Control-M SYSDATA, see the following:
-
Control-M for z/OS User Guide > "Introduction to Control-M" > "Control-M Concepts"
-
INCONTROL for z/OS Installation Guide: Installing > "Installing Control-M" > "Installation considerations"
Accumulating Job Execution Statistics
Control‑M allows accumulation of job execution information using its Statistics file. The accumulated information can then easily be viewed using option S in the Active Environment screen or the JOBSTAT command in the Scheduling Definition screen.
Control‑M manages statistical information for the most recent job runs, up to a maximum of 200. In a multi‑CPU environment, Control‑M keeps this information for each CPU (SMF ID) in which the job executes.
The Statistics file is updated by Control‑M utility CTMJSA. For more information about this utility, see the INCONTROL for z/OS Utilities Guide.
BMC recommends that you include this utility in the New Day procedure. Execute it before executing the User Dailies; this ensures that production jobs always use the most up-to-date information.
In addition to viewing statistical information online, a number of optional facilities can be employed. These optional facilities can significantly enhance production flow and management. These facilities all rely on the information accumulated in the Statistics file.
After proper management of the Statistics file is implemented, information in the file can be effectively used for the following reports and facilities:
-
simulation and forecasting
-
dataset – job cross-reference (Control-M CTMRJDS utility)
-
automatic tape adjustment
-
deadline scheduling
-
ControlM/Enterprise Manager live simulation
-
shout processing, which depends on job elapsed time (EXECTIME)
-
QUIESCE facility (planned shutdown)
Elapsed Time Calculation
Control‑M calculates the elapsed time of a job to be used for
-
IOA Log Message SPY281I
-
job statistics calculations
The elapsed time of a job is the amount of time between the start of the job and the end of the job. The elapsed time of a SMART Table is calculated in a similar way. The elapsed time of a SMART Table is the amount of time between the start of the first job of the SMART Table and the end of the last job of the SMART Table.
The calculation of the elapsed time of a job is based on IBM time-related messages. The following table shows the principal IBM time-related messages that are generated when most jobs run.
Table 94 IBM Time-Related Messages Generated on Running Jobs
Message | Explanation
---|---
IEF403I | This message displays the time that the processing of the job began, after any resource contention problem had been resolved. The message appears in the first part of the job output stream.
IEF404I | This message displays the time that the processing of the job ended. The message appears in the first part of the job output stream.
IEF375I | This message displays the time that the job was first initiated into the system, which may have occurred before any resource contention problem was resolved. The message appears in the third part of the job output stream.
IEF376I | This message displays the time that the processing of the job ended. The message appears in the third part of the job output stream.
IOA Log Message SPY281I
The data required for the elapsed time component of IOA Log Message SPY281I is calculated as follows:
Elapsed time = [IBM Message IEF376I] - [IBM Message IEF375I]
If there was any delay caused by resource contention before or during the execution of the job, Control‑M does not subtract the delay time from the elapsed time of the job. This maintains consistency with IBM practice, in treating the job initiation time as the primary job start time.
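For example, if the time shown in IBM message IEF375I is 1:15 a.m. and the time shown in IBM message IEF376I is 1:45 a.m., the elapsed time reported in message SPY281I is 30 minutes.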
The elapsed time of a job is displayed in a SPY281I message even if the job ended in one of the following ways:
-
The job abended.
-
The job ended due to a JCL error (if IBM Messages IEF375I and IEF376I are present in the job output).
-
The job ended with a condition code greater than zero.
The CPU (SMF ID) in the SPY281I message is extracted from the IBM message $HASP373. For users of the Thruput Manager product, the Initiator and Jobclass fields in the SPY281I message might appear as blanks.
Connecting Control-M for z/OS to Control-M/Enterprise Manager
BMC recommends that, if possible, steps 1 to 4, described below, be performed before installation so that step 5 can be performed as part of the Express installation.
To connect Control-M for z/OS to Control-M/Enterprise Manager do the following:
-
Define all Control-M/EM user IDs as Control-M for z/OS user IDs, with the correct authorizations and file access rights. If a user ID name is longer than 8 characters, or needs to be transformed, implement exit CTWX001 either using a table (such as sample CTWX001A) or by modifying the site's own exit.
-
Designate three ports for communication with Control-M. The three port numbers must include a pair of consecutive numbers together with a third additional number. None of these ports can be used by other applications.
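For example (the numbers here are purely illustrative), ports 2370 and 2371 could serve as the consecutive pair, with 2380 as the additional third port.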
-
Decide on a value for the STATACT parameter that is suitable for your organization. If a value of NONE is selected, Control-M/EM will only have display capabilities. If a value of MSG is selected, the Console Operation software will have to be configured to perform the required activities (Up / Down / Parameters / Commands). If a value of CMD is selected, Control-M will perform the actions itself, but it requires the relevant authorizations.
-
Ensure that all 3270 terminal end users who can update job scheduling definitions can also update the job scheduling definitions through Control-M/EM, and that table/folder usage is coordinated to avoid having to resolve conflicts manually.
-
Build the IOAGATEC/CTMCAS and IOAGATEM/CTMAS procedures, and tailor the ECAPARMC and ECAPARMM members with the selected numbers, by doing one of the following:
-
In Express installation, specify the installed Control-M/EM and Control-M Configuration Manager and port numbers in the Short Parameters Data Entry screen.
-
If you have already completed the installation phase, then enter the INCONTROL Customization Engine (ICE) on the mainframe and do the following:
-
Select Customization.
-
Select the Environment.
-
Specify Product IOA.
-
Select Product Customization.
-
Select major step 14 – ‘Install IOAGATE’.
-
Select minor step 2 – ‘Configure IOAGATE parameters’.
-
Select sub-options 0 through 4, specifying ECAPARMM (for IOAGATEM) in option 0.
-
Select sub-options 0 through 4, specifying ECAPARMC (for IOAGATEC) in option 0.
-
Return to the Install IOAGATE menu, and create IOAGATEM and IOAGATEC procedures as described in minor step 3 ‘Set Up IOAGATE Procedure(s)’.
-
-
-
Set STATACT using ICE, if either the CMD or MSG option was selected.
-
Enter the INCONTROL Customization Engine (ICE) on the mainframe and do the following:
-
Select Customization.
-
Select the Environment.
-
Change the Product to CTM.
-
Select Product Customization.
-
Select major step 3 - 'Specify additional CONTROL-M Parameters'.
-
Select minor step 2 – 'Customize CONTROL-M/CM App. Server parm.'.
-
Change the value of STATACT to the desired value (CMD or MSG).
-
-
-
Set the IOAGATEC/IOAGATEM procedures to be active at all times.
-
Ensure that either CMEM or Control-O is active at all times.
-
Using ICE, coordinate the values used for the Control-M/EM History retention with the Control-M for z/OS History Jobs file retention and ensure that the IOALOG size is large enough to hold the required number of days or generations.
Enter the INCONTROL Customization Engine (ICE) on the mainframe and do the following:
-
Select Customization.
-
Select the Environment.
-
Specify Product IOA.
-
Select Product Customization.
-
Select the following options:
-
Major step 2 - 'Customize IOA Dataset Parameters'.
-
Minor step 2 - 'IOA Log File Space Calculation'.
-
Minor step 9 - 'Format IOA LOG File'.
-
Minor step 17 - 'Format the IOALOG index file'.
-
Minor step 18 - 'Reload the IOALOG index file'.
-
-
-
Activate the IOALOG Index facility to eliminate delays when accessing the LOG from Control-M/EM.
-
Configure DVIPA if there is a need for an IOAGATE on multiple LPARs to be connected to Control-M/EM, where only one is active.
-
Populate SYNCLIBS, as described below.
Types of job scheduling definition synchronization
Synchronization of job definitions, Calendars, and Rule Based Calendars between the Control-M/Enterprise Manager Server and the Control-M/Server(s) occurs automatically, depending on the configuration settings of the data center definition on the Control-M/Enterprise Manager. There are four synchronization modes, as described in the following table:
Mode
Description
No Synchronization
No synchronization takes place between Control-M/EM and Control-M/Server. To synchronize manually, you can download the Control-M/Server data to Control-M/EM, upload the Control-M/EM data to Control-M/Servers, or create a regular calendar.
Update Control-M/Server definition during Check-in
Synchronizes only Control-M/EM Workspace and Calendar changes with Control-M/Server during Check-in. Other Control-M/EM definition changes are not synchronized with Control-M/Server. Control-M/Server changes are not synchronized with Control-M/EM.
Update Control-M/Server only
Synchronizes Control-M/EM changes with Control-M/Server. Control-M/Server changes are not synchronized with Control-M/EM.
Update Control-M/Server and Control-M/EM
Synchronizes all Control-M/EM and Control-M/Server changes with each other, for full synchronization. Also known as two-way synchronization.
Implementation of synchronization
-
For mainframe components
-
CTMSYNC Userid
The Control-M/Enterprise Manager uses a special user to request automatic synchronization between the mainframe and Control-M/EM. By default, the user ID defined is CTMSYNC. This user must have dataset access authorization to all the libraries specified in the SYNCLIBS parameter member (see below), and to the calendar libraries pointed to by DD names DACAL and DARBC. In addition, in Extended Definition mode, CTMSYNC must have authorization to the $$ECSVWF facility. Note that CTMSYNC is the default user name, which can be customized within the Control-M/EM system parameters. If it is changed, the new user name must be authorized as described.
-
SYNCLIBS parameter member
The purpose of SYNCLIBS is to define a list of table libraries that are automatically synchronized between Control-M and Control-M/Enterprise Manager. Whenever a table in one of these libraries is updated on the mainframe, Control-M/EM will automatically request downloading that table/folder to the Control-M/EM database so that it is synchronized with Control-M. To support such synchronization of table libraries, two-way synchronization must be enabled.
-
In each row in the SYNCLIBS member, specify the name of a library in columns 1-72. Do not include any additional text (such as comments).
-
It is recommended that all scheduling libraries be added to this member. This can be performed by doing one of the following:
-
Directly edit the (SYNCLIBS) dataset.
-
Enter the INCONTROL Customization Engine (ICE) on the mainframe and do the following:
-
Select Customization.
-
Select the Environment.
-
Change the Product to CTM.
-
Select Product Customization.
-
Select major step 3 - 'Specify additional CONTROL-M Parameters'.
-
Select minor step 3 – 'Update Member SYNCLIBS'.
-
Add Scheduling libraries.
-
Exit ICE.
If you remove a library from the list in the SYNCLIBS member, the library's tables are not automatically deleted in Control-M/EM. If you do not want the library's tables to appear in Control-M/EM, you can delete them manually through the Planning Domain in Control-M/EM. This also deletes the tables from the table library (due to two-way synchronization). If you need the tables to remain in the library, back them up before deleting them in Control-M/EM, and restore them to the library after the deletion.
-
-
-
-
For distributed systems components
-
From the Control-M Configuration Manager, double click the Control-M Server definition and change the Synchronization mode to ‘Update Control-M/Server and Control-M/EM’.
-
For further information, refer to the Control-M Administrator Guide.
-
-
-
Define the new Control-M for z/OS in Control-M Configuration Manager. It can be defined as managed (using the Discovery process) or non-managed, where all the details must be supplied during the definition. If the non-managed option is used, the IOAGATEC/CTMCAS started tasks are not needed, but many capabilities for managing Control-M for z/OS from Control-M Configuration Manager will not be available.
-
If the language you are using for Control-M for z/OS is other than English, you must define the EBCDIC code page for the appropriate language by setting the EBCDIC_cp Gateway parameter using Control-M Configuration Manager (CCM).
The available values are documented in the Control-M Help in the “System parameters for Control-M/EM components” topic in the “Western European language configuration” sub-folder under the “Language and Customization” folder.
Ordering and Submitting Jobs and Started Tasks
Job Ordering using New Day Processing
Overview
The Control‑M monitor is usually activated as a started task and remains active 24 hours a day. At a set time each day (defined using installation parameters), New Day processing is performed by the Control‑M monitor.
New Day processing consists of both automatic cleanup from the previous day’s job ordering and automatic ordering of jobs for the New Day.
The main components related to New Day processing are
-
tables and job scheduling definitions
-
New Day procedure and User Daily job
-
Date Control records
-
Active and History Jobs files
-
IOA Conditions file
-
Journaling
New Day processing is completely automated through the use of the New Day procedure and User Daily jobs. The main purpose of the New Day procedure and User Daily jobs is to call programs that
-
change the Control-M logical working date
-
perform cleanup of previous days' jobs and compress the AJF in the process. If a History Jobs file was defined during Control-M installation, the deleted jobs may optionally be copied to the History AJF
-
perform IOA Conditions file cleanup to delete conditions whose ODAT is the same as the upcoming Control-M working date
-
scan tables to select jobs for scheduling
-
schedule the selected jobs (place copies of the selected job scheduling definitions as job orders in the Active Jobs file)
-
perform History Jobs file cleanup based on the retention criteria specified in the jobs’ scheduling definition
-
delete archived SYSOUT datasets that are no longer referenced by jobs in the AJF or History Jobs file
-
back up the previous day's Journal file and initialize the current day's Journal files.
The New Day procedure and each User Daily job must each have its own Date Control record. A Date Control record is a member in the Control‑M PARM library in which relevant date information is placed during New Day processing. This date information is used to manage job orders.
Selection of jobs is based on the Date Control record, the current date and the Basic Scheduling parameters of the jobs in the tables. Any time the User Daily job is run, the current working date is placed in the Date Control record. The Basic Scheduling parameters of each job in the table are checked against this date to determine if the job must be placed in the Active Jobs file.
The following figure shows New Day Processing:
Figure 21 New Day Processing
New Day processing generally works as follows:
-
The New Day procedure is performed each day at a predefined time. The New Day procedure:
Schedules User Daily jobs
Schedules maintenance jobs. These jobs call programs that perform cleanup after the previous day’s processing.
-
If a cyclic job is executing at the time the New Day procedure is run, the New Day procedure changes the job to a noncyclic job and handles the job accordingly.
-
If a job that was not submitted on its original scheduling date contains a > specification in its TIME UNTIL field, when the New Day procedure is next run, the procedure deletes the > and TIME FROM specification from the job order, making the job immediately eligible for execution.
-
If History Jobs file processing is enabled, jobs deleted from the Active Jobs file during cleanup can be placed in the History Jobs file.
-
User Daily jobs (scheduled by the New Day procedure) select and schedule all other jobs for that day.
-
The following figure shows the New Day Procedure and User Daily Jobs:
-
Figure 22 New Day Procedure and User Daily Jobs
Sample New Day Processing
Control‑M is supplied with samples of several of the above-mentioned components.
To effectively implement New Day processing at your site, you must first understand how the sample components operate. Once the operation of the sample components is understood, you can then customize New Day processing based on site requirements. Sample New Day processing components are described in the following section.
Sample Components Provided With Control-M
At time of installation, each site is provided with the components shown in the table.
Table 95 Supplied Sample Control-M Components
Component |
Description |
---|---|
New Day Procedure |
A single New Day procedure is provided. Its default name is CTMTDAY (the name can be changed). This procedure must have wide authorization for accessing tables and jobs. |
User Daily Jobs |
The following sample User Daily jobs are provided: DAILYSYS and DAILYPRD.
These sample User Daily jobs are defined in table MAINDAY in the SCHEDULE Library. These jobs activate User Daily procedure CTMDAILY, which is responsible for ordering the production jobs. It is generally advisable to use these sample User Daily jobs to create separate User Daily jobs according to department (or other functional entity), and according to authorization. For more information, see Job Ordering using New Day Processing. |
Maintenance Jobs |
The following maintenance jobs are provided: IOACLCND and IOALDNRS.
These maintenance jobs are defined in table MAINDAY in the SCHEDULE library. For a description of utilities IOALDNRS and IOACLCND, see the INCONTROL for z/OS Utilities Guide. |
Table: MAINDAY |
A table called MAINDAY is provided in the SCHEDULE library. This table contains User Daily jobs DAILYSYS and DAILYPRD and maintenance jobs IOACLCND and IOALDNRS. |
Date Control Records |
The following Date Control records (members) are supplied in the Control‑M PARM library: DATEREC (used by the New Day procedure) and DATERECU (used by User Daily jobs).
|
Called Programs |
The New Day procedure and User Daily jobs call programs that perform various steps of New Day processing (checking the Date Control record, selecting job orders, and so on). For a description of these programs, see Programs Called During New Day Processing. |
How the Sample Components Perform New Day Processing
New Day processing performed with the sample components works as follows:
-
During New Day processing, the New Day procedure accesses its Date Control record, scans table MAINDAY and selects and loads the maintenance jobs and User Daily jobs to the Active Jobs file.
The following figure shows Sample Components of the New Day Procedure:
Figure 23 Sample Components of the New Day Procedure
-
The User Daily and maintenance jobs placed in the Active Jobs file are submitted by Control-M according to their runtime scheduling criteria. When a User Daily job is executed, it accesses its own Date Control record, scans the tables defined to it, selects jobs and places the selected job orders in the Active Jobs file. (User Daily jobs can also schedule maintenance jobs as required.)
The following figure shows Jobs Placed in the Active Jobs File:
Figure 24 Jobs Placed in the Active Jobs File
Because the User Daily is a job, its use is not restricted to New Day processing. Although User Daily jobs are normally executed immediately after they are ordered by the New Day procedure, they can be executed at any time defined in their Runtime Scheduling parameters. Furthermore, they can be ordered at any time by any of the methods described in the selected implementation issues chapter in the Control‑M for z/OS User Guide.
Date Control Records and Enhanced Daily Checkpointing
A Date Control record must be defined for each User Daily job. This record is usually defined by the INCONTROL administrator.
The Date Control record for User Daily jobs consists of six fields. At different stages in New Day processing (before or after the execution of specific called programs that perform New Day processing), the current original scheduling date is placed in one of these date fields.
This enables Control‑M to manage the process of job ordering. Furthermore, if New Day processing or a User Daily job is interrupted at any point, the values in these fields can indicate which called program was in use when the interruption occurred.
Enhanced Daily Checkpointing
The Enhanced Daily Checkpoint record is the second record in the Date Control member. It contains fields that store information about the last ordered job: JCL member name, internal sequence number, order ID, and the SMART Table to which the job belongs. (For a description of the format of this record, see Create Date Control Records.) If an interruption occurs during job ordering, the Enhanced Daily Checkpointing record enables precise identification of where job ordering stopped. During recovery, job ordering continues from that point.
BMC cautions against deleting an Enhanced Daily Checkpoint record. If you need to rerun a User Daily and the Checkpoint record has been deleted (or was never present), all jobs in the table are considered for scheduling, including jobs already scheduled by the interrupted run.
If the job belongs to a SMART Table, the recovery procedure reorders the entire SMART Table. The original SMART Table Entity remains in the Active Jobs file, together with the jobs that were ordered prior to abnormal termination. However, the status of the original SMART Table Entity is set to HELD WAIT SCHEDULE to prevent the jobs in that SMART Table from being submitted. Changing the status of the original SMART Table Entity using the Online facility is blocked.
The same Date Control record cannot be shared by the Newday procedure, different User Dailies, or any jobs that invoke the CTMJOB utility. Failure to allocate a unique Date Control record for each task that requires one may lead to unpredictable job ordering results.
Before the ordering process starts, the program checks if the checkpoint fields in Record 2 are blank.
If the checkpoint fields are blank, the User Daily job continues normal processing. Before each job is ordered, the fields in Record 2 are updated (overwritten) with information identifying the current job being ordered. Only upon successful completion of the User Daily job is the information in the checkpoint fields erased.
If the checkpoint fields are not blank, the recovery procedure described in Recovery Procedure Using Enhanced Checkpointing is activated.
As part of "Continue on Cards Error" processing, if parameter CNTERCRD in member CTMPARM is set to yes, Control‑M does not stop the ordering process. Control‑M continues even if errors exist. Checkpointing in this case is only relevant for abends or premature termination.
Create Date Control Records
Date Control records are members in the Control‑M PARM library. A different Date Control record must be defined for each User Daily job. It is usually defined only once for each job and from then on it is usually updated by the User Daily job.
The length of the User Daily Date Control record is 80 characters. The format of the dates in the record is mmddyy, ddmmyy or yymmdd, depending on the site standard.
The table shows the format of the Date Control record and indicates when the User Daily adds the original scheduling date values to the fields in the record.
Table 96 Date Control Record Format
Column |
Value Added |
Description |
---|---|---|
01–06 |
date1 |
User Daily adds the ODATE before the User Daily procedure is begun. |
18–23 |
date2 |
User Daily adds the ODATE before the job ordering process begins for jobs being scheduled based on DATES, DAYS, and/or DCAL parameters. |
25–30 |
date3 |
User Daily adds the ODATE after the job ordering process ends for jobs being scheduled based on DATES, DAYS, and/or DCAL parameters. |
43–48 |
date4 |
User Daily adds the ODATE before the job ordering process begins for jobs being scheduled based on WDAYS and/or WCAL parameters. |
50–55 |
date5 |
User Daily adds the ODATE after the job ordering process ends for jobs being scheduled based on WDAYS and/or WCAL parameters. |
60–65 |
Blank (date7) |
In the User Daily Date Control records, these columns are blank. In the New Day procedure Date Control record, these columns are the last formatting date, date7, of the Active Jobs file (used by program CTMFRM). This field prevents formatting from being carried out twice on the same day. When this date is in a record, program CTMCHK recognizes the record as a New Day procedure Date Control record. If there are any problems concerning the date, the program presents the operator with a series of prompts. Misuse of this field by the user frequently leads to the display of error message CTM916W. For more information, see the INCONTROL for z/OS Messages Manual. |
67–72 |
date6 |
User Daily adds the ODATE upon completion of all processing. |
A second Date Control record is defined for each User Daily job to implement Enhanced Daily Checkpointing. The column formats of this record are described in the table.
Table 97 Format of the Second Date Control Record (for Enhanced Daily Checkpointing)
Column |
Constant or Value Added |
Description |
---|---|---|
01–04 |
JOB= |
Constant. |
05–12 |
blank |
In this area, Control‑M stores the MEMNAME value of the last ordered job. |
13–23 |
,SERIAL_NO= |
Constant (note the comma before the "S"). |
24–28 |
blank |
In this area, Control‑M stores its internal sequence number of the last ordered job. |
29–37 |
,ORDERID= |
Constant (note the comma before the "O"). |
38–42 |
blank |
In this area, Control‑M stores the order ID of the last ordered job. |
43–49 |
,GROUP= |
Constant (note the comma before the "G"). |
50–69 |
blank |
In this area, Control‑M stores the group name of the last ordered job. |
When creating this record, the user must
-
specify the indicated constants (for example, JOB=) in the appropriate columns
-
leave blank the columns indicated as blank. These columns are filled in by the User Daily during processing
When a SMART Table is ordered, the values in the second Date Control record will be those of the SMART Table Entity, even if a failure occurs in one of the SMART Table’s jobs.
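For illustration, when the Enhanced Daily Checkpointing record is initially created according to the column layout in Table 97, it contains only the constants, with the variable areas left blank:
JOB=        ,SERIAL_NO=     ,ORDERID=     ,GROUP=
After Control‑M has ordered a job, the record might look as follows (the member name, sequence number, order ID, and group shown here are hypothetical values used only to show the layout):
JOB=PAYJOB01,SERIAL_NO=00012,ORDERID=0001K,GROUP=PAYROLL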
Recovery Procedure Using Enhanced Checkpointing
If the checkpoint fields in the second Date Control record are not blank when job ordering starts, the following recovery procedure is activated:
The program passes over the jobs in the input tables, counting the jobs and comparing the count to the value in the SERIAL_NO field, until the count and serial number match. The matching job is selected.
The program then compares the values in the JOB and GROUP fields to the values belonging to the selected job. If the fields do not match, error message CTMD67S is issued and processing terminates.
-
If the fields match, the program checks the Active Jobs file for a job with an order ID matching the order ID recorded in Record 2. If the match is found, an additional check is performed to verify that the job’s MEMNAME and GROUP values match the checkpoint JOB and GROUP values.
-
If the Active Jobs file already contains the job, the job is not ordered again and the program switches to normal processing starting with the next job.
-
If the Active Jobs file does not contain the job, the job is ordered. The program then switches to normal processing.
If input tables are modified prior to rerunning User Daily jobs (or the New Day procedure), the checkpointed job and internal sequence number might not match. In this case, rerun of the User Daily jobs is terminated and manual intervention is required.
If a problem is encountered in Newday processing related to job ordering, the Newday procedure can be rerun using the ORDERONLY parameter as follows:
S CTMTDAY,NEWDAY=ORDERONLY
The job can be run while the Control-M monitor is active.
Implementing New Day Processing
As indicated above, sample User Daily jobs DAILYSYS and DAILYPRD are supplied with Control‑M in table MAINDAY.
In theory, it is not necessary to use User Daily jobs. It is possible (but not recommended) to place all job scheduling definitions in one or more tables and have them scheduled by the New Day procedure.
It is also possible (and also not recommended) to maintain only the two sample User Daily jobs provided with Control‑M and to order all user jobs through the User Daily DAILYPRD.
The recommended method to automate the production environment using New Day Processing is by
-
defining a different table for each set of related jobs
-
defining a different User Daily job for each department, application or comparable entity
The following table describes the advantages that such an implementation provides.
Table 98 Advantages of Recommended Method of Automating the Production by Means of New Day Processing
Advantage |
Description |
---|---|
Improved performance |
Many User Daily jobs running in parallel can order the full day’s production jobs in significantly less time than can one User Daily that orders all jobs individually. |
Ease of administration |
The INCONTROL administrator can make each department responsible for its own User Daily jobs and tables and for controlling its own job ordering. |
Increased security |
While maintaining exclusive authorization for the New Day procedure, the INCONTROL administrator can limit each department’s authorization to its own User Daily jobs. |
Minimization of problems |
Problems encountered in one User Daily do not necessarily affect the job ordering of other User Daily jobs. |
Differences Between the New Day Procedure and User Daily Jobs
The New Day procedure uses the program list in member PROGDAYM. User Daily jobs use the program list in member PROGUSR.
The New Day procedure uses Date Control record DATEREC (which contains the last Active Jobs file format date in columns 60 through 65). User Daily jobs use Date Control record DATERECU (which contains blanks in columns 60 through 65). Using the wrong Date Control record causes message CTM916W to be generated.
User Daily jobs can be run manually (that is, not initiated by the Control‑M monitor.) However, the Control‑M monitor must initiate the New Day procedure. If an attempt is made to run the New Day procedure manually, problems may be caused by failure of the Control‑M monitor to free the Active Jobs file for use by the New Day procedure.
Implementation Tasks
Perform the following tasks when implementing New Day processing:
-
Decide which User Daily jobs are needed (and for which tables)
-
Customize the New Day procedure.
-
Use the sample JCL to create JCL for each User Daily job
-
Create User Daily job scheduling definitions and customize table MAINDAY
-
Create Date Control and Enhanced Daily Checkpointing records
-
Date Control records cannot be contained in a PDSE-type library.
-
The New Day procedure and its accompanying Date Control record are defined at time of installation. They require no further implementation.
-
-
Ensure subsequent runs of utility IOALDNRS if necessary
Decide Which User Daily Jobs Are Needed (and for Which Tables)
A job scheduling definition is defined for each job and each job scheduling definition is placed within a table. Usually, related job scheduling definitions are grouped into their own table.
Based on the tables defined at your site and the jobs they contain, decide what User Daily jobs you require, and which tables each User Daily job must scan.
Customize the New Day Procedure
The New Day procedure normally performs a cleanup of the AJF, the History AJF, and the IOA Conditions file automatically. The criteria by which jobs and conditions are deleted from the AJF, the History AJF, and the IOA Conditions file are illustrated in the CTMFRM program, described in Programs Called During New Day Processing. The user may change the default actions of CTMFRM by coding SELECT and IGNORE statements in the DAFRMIN DD statements, in both the main step and the CLRHIST step of the CTMTDAY procedure. The DD statements DAFRMIN reference members IGNORE and IGNORHST in the main and CLRHIST steps respectively. For further information, see SELECT and IGNORE Statements. Using these SELECT and IGNORE statements, the user can cause jobs and conditions that normally would be deleted to be retained, and vice versa.
SELECT and IGNORE Statements
SELECT and IGNORE statements identify jobs or conditions that must or must not be deleted.
One or more parameters can be specified in any SELECT or IGNORE statement (in any order). For a description of parameters GROUP, JOBNAME, MEMBER, STATUS, FROM, and TO, see the CTMCAJF utility in the INCONTROL for z/OS Utilities Guide.
A job specified for deletion using a SELECT statement is deleted unconditionally even if the job is currently executing.
Conditions that are not date‑related can be defined with a date reference of STAT, which eliminates the need for including SELECT or IGNORE statements in procedure CTMTDAY.
To suppress the erasure of the next day’s conditions by the New Day procedure, specify the definition
IGNORE COND *
When suppressing the function, remember to delete conditions (using utility IOACLCND). If this is not done, jobs in the next year’s schedule may be triggered because of today’s conditions.
IGNORE JOBNAME OPER*
IGNORE JOBNAME PROD* STATUS ENDNOTOK
SELECT GROUP TEST
In this example, no jobs whose names begin with prefix OPER are deleted. Also no jobs whose names begin with prefix PROD that ended NOTOK are deleted. Of the remaining jobs, those belonging to group TEST are deleted. In addition, the default action is also taken. All jobs that ended OK and all jobs whose MAXWAIT interval is exceeded are also deleted even though they are not part of group TEST.
IGNORE STATUS ACTIVE
SELECT JOB OPER*
In this example, jobs whose names begin with prefix OPER are deleted if they are in WAITSCHED, ENDOK or ENDNOTOK status (that is, jobs whose status is ACTIVE are not deleted). In addition, the default action is also taken. All jobs that ended OK and all jobs whose MAXWAIT interval is exceeded are also deleted even though they do not begin with prefix OPER.
AutoEdit variables and functions are supported in the SELECT and IGNORE statements. For more information, see the CTMCAJF utility in the INCONTROL for z/OS Utilities Guide.
Use the Sample JCL to Create JCL for Each User Daily Job
Create the JCL for each User Daily job by selecting one of the alternative methods of identifying tables (below) and customizing the JCL accordingly.
Table 99 Methods for Identifying Tables
Method |
Description |
---|---|
Method 1 |
This method requires that the user specify the name of a table and library directly in the JCL. |
Method 2 |
This method requires that the user specify a parm_library and member containing ORDER requests that identify the scheduling libraries, tables, and jobs to schedule, or in-stream ORDER requests following the //DAJOB DD * statement. Method 2 provides the following advantages over Method 1: changes can be made to the member in the parm_library without changing the JCL of the User Daily job; individual jobs can be specified; and an entire library can be ordered with a single ORDER statement in the format ORDER DSN=AAA.BBB.CCC,MEMBER=*. When using Method 2, specify at least one ORDER statement and, optionally, SELECT or IGNORE statements. The Date Control record is referenced by DD statement DACHK. For the syntax, parameter descriptions, and functionality of the ORDER, SELECT, and IGNORE statements, see the CTMJOB utility in the INCONTROL for z/OS Utilities Guide. A sample member is shown after this table. |
Create User Daily Job Scheduling Definitions and Customize Table MAINDAY
The supplied sample User Daily jobs, DAILYPRD and DAILYSYS, scan the tables referenced by DD statement DAJOB. However, different authorization is granted to each of these User Daily jobs.
Use these sample User Daily jobs to create a User Daily job for each department in table MAINDAY. Assign the authorizations accordingly. Each User Daily job must scan a different set of tables than the other User Daily jobs.
It is common in many sites for the INCONTROL administrator to create a customized User Daily job for each department and then turn the table over to the control of the department. The department can then modify the table (and job scheduling definitions) as necessary.
Although User Daily jobs can execute immediately after the jobs have been placed in the Active Jobs file, a site may choose to delay execution of a User Daily. To delay the submission of a User Daily, define the User Daily’s runtime scheduling criteria accordingly.
If groups of User Daily jobs are executed at different times, rerun IOALDNRS after running each group of User Daily jobs.
Add additional maintenance jobs to table MAINDAY as necessary.
Ensure Subsequent Runs of Utility IOALDNRS if Necessary
If all User Daily jobs are scheduled to run in parallel, utility IOALDNRS only needs to run once, after the User Daily jobs have finished execution. However, if User Daily jobs are executed at various times during the day, utility IOALDNRS must be run after each group of User Daily jobs is executed. This can be ensured by having each group of User Daily jobs set the appropriate prerequisite conditions to ensure the execution of IOALDNRS.
Programs Called During New Day Processing
The most important programs in New Day processing are CTMILZ and CTMILU.
-
The New Day procedure executes program CTMILZ.
-
Each User Daily calls procedure CTMDAILY, which executes program CTMILU.
Programs CTMILZ and CTMILU both execute other programs that implement New Day processing. The programs called by CTMILZ and CTMILU are listed in the table below. Both CTMILZ and CTMILU read the member referenced by DD statement DAPROG and activate the programs listed in the member.
The following table describes the format for each record in the program list:
Table 100 Column Format for Program List Records
Column |
Description |
---|---|
01–08 |
Program name |
10–11 |
Maximum return code allowable in the preceding program If a higher return code is encountered in the preceding program, the current program is not executed. |
13–72 |
Program arguments |
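For illustration, a User Daily program list member in the format described above might contain the following entries (the maximum return codes and the CONFIRM argument are shown only as an example of the column layout; use the entries supplied in member PROGUSR as the basis for your own list):
CTMCHK   04 CONFIRM
CTMJOB   04
CTMPDA   04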
The following table shows the programs called by program CTMILZ (the New Day procedure) and by program CTMILU (User Daily jobs).
Table 101 Programs Called by New Day Procedure and User Daily Jobs
Program |
Purpose |
---|---|
CTMCHK |
(called by CTMILZ and by CTMILU) Checks the current date and its relation to the Date Control record (described in the topic Use of the Date Control Record by User Daily Jobs. When called by CTMILZ, the program always prompts the operator to verify that Control‑M is activated on the correct date. When called by CTMILU, the program prompts the operator to verify that Control‑M is activated on the correct date only if the value CONFIRM is specified as the program argument (anywhere within columns 13 through 72). |
CTMFRM |
(called by CTMILZ) Reformats the Control‑M Active Jobs file, the Control-M History Jobs file, and the IOA Conditions file: Control‑M Active Jobs File By default (that is, if no SELECT or IGNORE statements are specified), jobs that ended OK and jobs whose MAXWAIT interval has been exceeded are erased from the Active Jobs file, and the file is compressed.
Control-M History Jobs File Compresses the Control‑M History Jobs file (if activated) by removing jobs whose retention criteria (RETENTION – # OF DAYS or GENERATIONS TO KEEP) have been exceeded. IOA Conditions File This program erases all prerequisite conditions whose date is the same as the new Control‑M working date (that is, this program erases all prerequisite conditions of the coming execution date).
At start of execution, this program creates a backup copy of the Active Jobs file (BKP file) for recovery purposes. |
CTMJOB |
(called by CTMILZ and by CTMILU) Places job orders in the Active Jobs file according to the date in the Date Control record and the data in the tables supplied. |
CTMPDA |
(called by CTMILZ and by CTMILU) Marks the end of the Daily run. |
If History Jobs file processing is enabled, program CTMFRM is run again by program CTMILZ, this time against the History Jobs file, as shown in the above table.
If Control-M/Restart is installed or the History feature is activated, steps DELARCH and CLRHIST are run after the conclusion of program CTMILZ, as shown in the following table:
Table 102 Additional Steps Executed by New Day Procedure if Control-M/Restart Is Installed or the History feature is activated
Program |
Purpose |
---|---|
CTMDAS |
Deletes archived SYSDATA (CDAM files) of jobs that were deleted from the Active Jobs file by program CTMFRM according to the following logic:
|
CTMHSC |
Deletes expired jobs from the History Jobs file. |
The following table shows the additional step that is run to copy the Control-M Journaling file:
Table 103 Additional Step Executed by New Day Procedure if the Control-M Journaling feature Is Activated
Program |
Purpose |
---|---|
IKJEFT01 |
Copies the Control-M Journaling file to a backup file (via CLIST CTMCJNL). |
Use of the Date Control Record by User Daily Jobs
The workflow of User Daily jobs is dependent on the Date Control record. The main steps of a User Daily job are
-
checking the last running date of the User Daily job (using internal program CTMCHK)
The first date in the Date Control record (columns 1 through 6) is compared to the current working date (at the time of the run).
-
If they match, the User Daily job has already run today. An appropriate message is issued and the condition code is set to 0008.
-
If the current working date is earlier than the first date of the Date Control record, a User Daily job run has been attempted before its time. The User Daily job stops executing and notifies the user accordingly.
-
If the current working date is later than the first date of the Date Control record (the normal situation), the first date of the Date Control record (columns 1 through 6) is updated to the current working date. This date is then used as the current scheduling date.
If the User Daily job did not run for more than one day, a warning message is issued, and the User Daily job tries to schedule the jobs for all of the days that have passed since the last scheduling date (according to the production parameters). In such a case, you may want to run the User Daily job from a previous day, as described in Running a User Daily from a Previous Working Day.
However, if the program list record for program CTMCHK contains the program argument CONFIRM, the User Daily issues a series of WTOR messages. For information about operator responses to these messages, see New Day Procedure Flow.
-
placing job orders in the Active Jobs file according to the current scheduling date and the last running date (using utility CTMJOB)
There are two methods for placing job orders in the Active Jobs file using utility CTMJOB. For a description of both methods, see Use the Sample JCL to Create JCL for Each User Daily Job.
For each job, the program checks whether the job must be scheduled on one or all of the days that have passed since the last original scheduling date (date3 or date5) until the working date in the record (date1). If the job must be scheduled, a job order is placed in the Active Jobs file.
When the program finishes processing the user tables, the finish indicator dates (date3 and date5) are updated to the working date (date1) calculated by program CTMCHK.
Before program CTMJOB starts operating, it compares date2 with date3 (and date4 with date5). If they do not match, a previous run of program CTMJOB of the same User Daily job has probably abended. The user is notified and the program terminates. To correct the error, adjust the date values in the user Date Control record (using a standard editor).
When manually modifying the Date Control record, make sure that jobs are not scheduled to run twice on the same day.
-
indicating that the User Daily job has ended (using program CTMPDA)
Program CTMPDA updates the finish indicator date (date6) by setting it to the value of the running date (date1). This indicates that the User Daily job finished successfully.
-
rerunning the User Daily job after a failure
For further information, see Date Control Records and Enhanced Daily Checkpointing.
New Day Procedure Flow
Once a day, at a time set by the INCONTROL administrator, the Control‑M monitor begins New Day processing by going into a suspended state and issuing the following messages (the first is a highlighted, unrollable message):
CTM113I Control‑M MONITOR monitor NEW DAY PROCESSING STARTED
CTML00I Control‑M MONITOR monitor PROCESSING SUSPENDED
CTML07W Control‑M MONITOR monitor WAITING FOR NEWDAY PROCEDURE
Shortly after that last message is issued, started task CTMTDAY (the New Day procedure) is automatically activated.
-
If CTMTDAY finishes executing without any problems, the following messages are issued, and the suspended Control‑M monitor resumes normal processing:
CTML01I Control‑M MONITOR monitor PROCESSING RESUMED
CTML02I Control‑M MONITOR monitor NEW DAY PROCESSING COMPLETE
-
If a problem occurs during the formatting step (CTMFRM) of CTMTDAY processing, the Control-M monitor prompts the operator for an appropriate response using the following messages:
CTML05W NEW DAY PROCESSING ERROR DETECTED
CTML06W REPLY "R" FOR RESUME OR "E" FOR END
The operator should try to correct the problem and rerun the CTMTDAY procedure as described below. Once the CTMTDAY procedure runs successfully, the operator should reply R to message CTML06W, which enables the
Control-M monitor to resume normal processing. Terminating execution of the Control-M monitor (option E) should only be requested if the problem cannot be corrected.
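For example, if the outstanding reply number of message CTML06W is 15, the operator can resume the monitor from the console as follows (the reply number varies from system to system):
R 15,R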
Procedure CTMTDAY can be rerun—while the Control-M monitor is suspended for Newday processing—in one of the modes described in the following table.
Table 104 CTMTDAY modes with Control-M monitor suspended
Command |
Description |
---|---|
S CTMTDAY |
AJF formatting and job ordering are performed as in normal Newday processing. The formatting step includes deletion of the current Control-M working day conditions from the IOA Conditions file. |
S CTMTDAY,NEWDAY=ORDERONLY |
Performs job ordering only and not AJF formatting. If date is not specified, the current ODATE is used. Otherwise, date determines the ODATE. |
|
Performs AJF formatting only (equivalent to the COMPRESS command of utility CTMCAJF). Does not delete conditions for the current working day. |
If the CTMTDAY problem is related to a Table problem, setting parameter CNTERCRD to Y in member CTMPARM can avoid such CTMTDAY failures. If this parameter is set to Y, the ordering process bypasses scheduling errors within a job, and skips to the next job. If the error is in a SMART Table Entity or in a job belonging to a SMART Table, processing skips the entire table and continues with the next job or table. If the CNTERCRD parameter is set to N, it may be necessary to rerun the job ordering process of the CTMTDAY procedure, as follows:
S CTMTDAY,NEWDAY=ORDERONLY
-
During New Day processing, CTMTDAY checks the system date and time against what it expects to find in the Control-M control files. If they do not match, the operator is prompted with the following messages:
CTM426W CTMTDAY "DAILY" DID NOT RUN FOR nnnnnn DAYS
CTM43CI CONTENTS OF DATE CONTROL RECORD:
CTM437I date-1 date-2 date-3 date-4 date-5 date-6 date-7
CTM439W REPLY 'C' TO CONTINUE, 'U' TO UPDATE DATEREC TO
CURRENT DAY EXECUTION, OR 'E' TO END
Respond using one of the following options:
-
C—all conditions in the IOA conditions file whose date corresponds to the intermittent days will be deleted. If RETRO is enabled in the scheduling definition, jobs for all intermittent days will also be ordered.
-
U—updates the Date Control Record to the current system date and continues execution. (Only jobs scheduled for the current working day will be ordered.)
-
If the computer has not been working for a few days (for example, a hardware failure or holiday), enter one of the following:
-
If the computer was IPLed with the wrong date, enter E, check and correct the date on the computer, and then restart procedure CTMTDAY.
-
If the date on the computer is correct and was working the previous day, contact the INCONTROL administrator to check the cause of the problem.
-
If the Control‑M monitor has been down for more than 28 days, the previous working date (the current working date minus 1) must be manually specified as date values 1 through 6.
Running a User Daily from a Previous Working Day
At times, you might need to run or rerun a User Daily from a previous working day. To semi-automate this process, perform the following steps:
-
Define a member PROGNCHK in the CTM PARM library. This member must contain at least the following 2 lines:
CTMJOB 04
CTMPDA 04
These lines may be copied from the second and third lines of member PROGUSR in the same library.
-
Copy member DATERECU in the CTM PARM library to a new member DATERECR. In this new member, specify the required scheduling date for the particular run as the first date on the statement, and the remaining dates as an earlier date.
For example, in the following member, the scheduling date is October 12, 2020:
121020 111020 111020 111020 111020 111020
-
Run the CTMDAILY User Daily procedure with the DATEREC and PROGLST parameters as set below:
//PRD EXEC CTMDAILY,DATEREC=DATERECR,PROGLST=PROGNCHK
-
When ordering the User Daily job via Control-M, specify the required scheduling date as the ODATE of the job.
This ensures that job order messages JOB528I in the User Daily output contain the desired ODATE.
Managing User Daily Jobs from Control-M/EM
The content of User Daily jobs executing in an MVS datacenter can also be managed from the Control-M/Enterprise Manager, which runs on a Windows platform. Special User Daily jobs must be defined for this purpose. They provide a means by which the following functions can be performed from Control-M/EM:
-
Add a table residing in a specific library to an existing user daily job.
-
Change the location of a table by moving it from one user daily job to another.
-
Delete a table from a user daily job.
For more information, see User Daily in the documentation for Control-M for Distributed Systems.
By default, the EMUSDLY parameter, in the CTMPARM member, is set to Y. This enables you to use the CTMEMUDL procedure, which runs the CTMUDR program, to process the Special User Daily Jobs.
The User Daily data that is uploaded from Control-M/EM is stored in one of the following locations:
-
Versions earlier than 9.0.21.100: In IOA Global variables
Usage of this storage location requires global variable database structures to be active, as well as Control-M Event Manager or Control-O.
It also limits the efficiency of using Control-M/EM dailies. Therefore, limit the use of Control-M/EM dailies to no more than a few ad-hoc folders. For most of your folders, use the standard mainframe dailies, controlled by CTMJOB.
-
Version 9.0.21.100 or later: In member UDLINDEX
UDLINDEX is located in CTMPARM (by default), or in some other non-default member that you specify in DD card DAUDR.
To use this storage location, you must first activate it and migrate existing User Daily data into it (for example, if you have upgraded from an earlier product version). You activate the UDLINDEX member using a one-time CTMEMUDL procedure, as described below.
In addition to the Special User Daily jobs, which are processed using the CTMEMUDL procedure, those tables that are specified from Control‑M/EM to run automatically (and are therefore marked as SYSTEM tables) are also processed as part of the New Day procedure. It is the responsibility of the Control-M Administrator to schedule the execution of the other User Daily jobs. For more information, see the INCONTROL for z/OS Installation Guide.
To activate the storage of User Daily data in member UDLINDEX (in product version 9.0.21.100 or later), run the following CTMEMUDL procedure:
EXEC CTMEMUDL,EMDAILY='$$CONVERT'
Use the following JCL template to routinely run the CTMEMUDL procedure:
EXEC CTMEMUDL,EMDAILY=userdailyname
Before issuing the command, replace userdailyname with a 10-character name referring to a set of tables to be ordered in the AJF.
The CTMEMUDL procedure generates ORDER statements for all tables that belong to the Control‑M/EM User Daily job, userdailyname. The ORDER statements are subsequently processed by the CTMJOB utility, which places the jobs in the AJF.
To view the contents of a specific Control-M/EM User Daily job, use the LIST option, as shown in the following command:
EXEC CTMEMUDL,EMDAILY='userdailyname,LIST'
To view the contents of all Control‑M/EM User Daily jobs, use the '*,LIST' option, as shown in the following command:
EXEC CTMEMUDL,EMDAILY='*,LIST'
Job Ordering and Submission Considerations
Library Compression
If a job is ordered or submitted while certain libraries are being compressed, the member may not be found or the wrong job may be submitted. To avoid this problem, compress a library only when Control‑M is down, or no jobs contained in or referencing the library are being submitted or ordered. The following libraries are relevant to this issue:
-
the JCL library
-
Control-M job scheduling libraries
-
IOA calendar libraries
JCL Parameter MSGLEVEL
Output of a Control‑M job is written to the Control‑M SYSDATA only if a MSGLEVEL of (1,1) is specified. If Optional Wish WM0735 is applied at your site and no MSGLEVEL, or a MSGLEVEL other than (1,1) is specified, Control‑M automatically changes the MSGLEVEL to (1,1).
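For example, a job card that satisfies this requirement might look as follows (the job name and accounting information are placeholders):
//MYJOB   JOB (ACCT),'PROD JOB',CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1)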
Comment Lines Added During Job Submission
Control‑M adds the following comment lines to the JCL output of each job that is executed using Control‑M:
//*-- SUBMITTED BY Control-M (FROM lib) ODATE=odate
//*-- SCHEDULE schedlib(sched-table)
//*-- SCHEDULED DUE TO RBC: rbc-name
//*-- JCL jcllib(jclmembr)
//*-- Control-M JOB IDENTIFICATION: ORDER ID=order-id RUN NO.=run-number
where
-
lib is either MEMLIB or OVERLIB
-
odate is the Control-M order date
-
schedlib is the scheduling library from which the job was ordered
-
sched-table is the table from which the job was ordered
-
rbc-name is either blank or (for jobs in SMART Tables) the schedule RBC that caused the job to be ordered
-
jcllib is the JCL library from which the job was submitted
-
jclmembr is the JCL member from which the job was submitted
-
order-id is the Control-M order id assigned to the job
-
run-number is the number of times the job has run or rerun
If Control-M is upgraded from a non-supported version, the values for the scheduling library, table, JCL library, and JCL member may appear as UNKNOWN for jobs ordered before the upgrade.
The value in the SCHED comment line is also indicated as UNKNOWN when you perform an AutoEdit simulation using the JCL library mode.
Volume Defragmentation and Compaction
If a job is ordered or submitted while DASD volumes containing IOA or Control‑M libraries are being defragmented or compacted by DASD management products, the library may be in use, not found or not cataloged—causing the job not to be submitted. To avoid this problem, defragment or compact volumes containing IOA or Control‑M libraries only when Control‑M is down, or no jobs contained in or referencing these libraries are being ordered/submitted.
The following libraries are relevant to this issue:
-
The JCL library
-
Control-M job scheduling libraries
-
IOA calendar libraries
At sites where running the above type of DASD housekeeping while Control‑M is active is unavoidable, carefully set the following parameters (defined in member CTMPARM of the IOA PARM library) to alleviate the problem:
-
INUSE#RT
-
INUSE#WI
Job Order Interface—Defining Job Lists for Each User
When an end user orders jobs using the End User Job Order Interface utility, the list of jobs that end user can order is displayed. For more information, see the online facilities chapter in the Control‑M for z/OS User Guide.
When using the End User Job Order interface, a user is permitted to order jobs in tables determined by the INCONTROL administrator. Multiple users can utilize the same table. The INCONTROL administrator must ensure that the tables do not contain jobs with duplicate jobnames.
To identify which table each user can utilize, the INCONTROL administrator defines a special control member. This control member lists users and the available table for each user.
The control member must be defined in a PDS with LRECL=80 and RECFM=F or FB. The default location is the @@USRTBL member in the Control‑M SCHEDULE library, but these values are parameters of the CTMJBINT CLIST and can be modified according to site requirements. This control member may contain multiple lines for each user ID or mask, and is maintained by the INCONTROL administrator.
The following table shows how each line is formatted:
Table 105 Format of Lines in the Control Member
Columns |
Description |
---|---|
Cols 1–8 |
TSO user ID or mask. See TSO User ID Masking. |
Col 9 |
Blank. |
Cols 10–17 |
Table name in the scheduling table library. |
Cols 18–19 |
Blank. |
Cols 20–63 |
Name of the scheduling table library (required only if different from the library where the control member is located). If this entry is non‑blank, it must contain a fully qualified dataset name including the high-level qualifier and must not be enclosed in quotes. |
Col 64 |
Blank. |
Cols 65-72 |
Jobname prefix. If this field is blank, the user can order any job in the table. |
Col 73 |
Indicates whether the jobname in column 65 is to be treated as a full jobname or a generic jobname prefix. An X in column 73 prevents the jobname from being treated as a generic prefix name. |
The @@USRTBL member must not contain TSO line numbers in columns 73-80.
Any line containing an asterisk in the first column is treated as a comment and is not processed.
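For illustration, the control member might contain lines such as the following (the user IDs, table names, and library name are hypothetical; the columns must match the layout in Table 105):
USER01   PAYROLL
ABC*     TESTTBL   PROD.TEST.SCHEDULE
* LINES BEGINNING WITH AN ASTERISK ARE COMMENTS
In this example, user USER01 can order any job in table PAYROLL in the default scheduling library, and any user whose ID begins with ABC can order jobs in table TESTTBL in library PROD.TEST.SCHEDULE.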
TSO User ID Masking
An asterisk (*) specified as the final non‑blank character represents any number of characters (including no characters). For example, if columns 1 through 8 on the control card contain the value ABC*
-
user IDs ABC and ABCDEF result in a match
-
user IDs AB and XABC do not result in a match
Security Considerations
All security parameters must be backed up in such a way that they can be installed in the backup computer as a whole, not as a special "patch" installation.
Pay special attention to the following points:
-
The correct implementation of the security authorizations needed by the Control-M monitor (that is, defining the Control-M monitor and its special authorization to the security package used in the backup computer).
-
All security parameters and definitions must be backed up and copied to the backup computer.
-
Third-party vendor exits relating to Control-M (for example, the RACF exit for R1.7 and R1.8; see the RACF Security Guide) must be copied, installed, and checked in the backup computer, thus enabling a quick and correct implementation if the need arises.
-
Control-M security exits, if used, must be checked and passed as part of the disaster recovery plan.
Executing the CTMJBINT utility as a REXX EXEC
When executed as a REXX EXEC, the CTMJBINT utility accepts additional options. These allow additional arguments to be passed for enhanced processing.
TSO CTMJBINT [arg1] [arg2] [arg3]
Parameter |
Definition |
---|---|
arg1 |
Specifies whether a debug trace of the REXX should be produced. Such traces need to be produced only when requested by BMC. Otherwise, this parameter should be specified as X=X. |
arg2 |
Specifies whether the Job Scheduling Definition should be forced or ordered by specifying YES or NO. |
arg3 |
Specifies an alternate control member that identifies the table the user should utilize. |
If the default value of the parameter is satisfactory, you do not need to add additional arguments. However, if arg3 is specified, you must also specify arg1 and arg2. Similarly, if arg2 is specified, arg1 is required.
Order the job scheduling definitions selected from the table specified by the @ALTUSR control member:
TSO CTMJBINT X=X NO @ALTUSR
Force the job scheduling definitions selected from the table specified by the default control member:
TSO CTMJBINT X=X YES
Produce a debug trace when requested by BMC:
TSO CTMJBINT DEBUG ("DEBUG")
Activation of Started Tasks
Control‑M can activate started tasks as well as jobs. For a description of the JES2/JES3 definitions that are required to support started tasks, see the Control‑M chapter in the INCONTROL for z/OS Installation Guide: Installing.
When working in a multi‑CPU environment, Control‑M can also activate started tasks in CPUs other than the one in which the Control‑M monitor is active.
Under JES2, the Control‑M monitor activates started tasks in other CPUs by using command $Mm, where m is the appropriate system ID. This system ID is defined in the JES2 initialization parameters in one of the following ways:
MASDEF SID(n)=cccc
Sn SID=cccc (under older versions)
For more details, see the IBM manual JES2 Initialization and Tuning Reference.
JES2 fails a $Mm command if m is the ID of the system ID in which the Control‑M monitor itself is working. Therefore, when Control‑M is ordered to activate a started task in a specific system, it determines whether a $Mm command or a regular MVS START command must be issued. To ensure that this check is performed correctly, all the CPUs in your computer complex must be defined. For specific definition information see Step 6.3 – Specify IOA CPUs, in the Customized installation section of the INCONTROL for z/OS Installation Guide: Installing.
Under JES3, the Control‑M monitor activates started tasks in other CPUs by issuing a "*T cccc" JES3 command, where "*" is the JES3 command prefix and cccc is the required system ID. This system ID is defined in the JES3 initialization deck (INISHDECK) as follows:
MAINPROC,NAME=cccc,SYSTEM=JES3,...
For MVS, the ROUTE command (RO cccc) is issued.
Managing the Control-M Application Server (CTMAS)
The Control-M Application Server (CTMAS) communicates with the Control‑M/Enterprise Manager, a software product that runs on a UNIX or Windows platform and provides centralized control of the job scheduling environment for the enterprise. The purpose of the Control-M Application Server is to interface between the Control-M/EM and the Control-M environment on the z/OS platform.
Functions of the Control-M Application Server
The primary functions of the Control-M Application Server are:
-
to synchronize the data in the Control-M active environment on the z/OS platform with that on the Control-M/EM server.
-
to process user requests that are received from the Control-M/EM environment and act on the z/OS data center. Such requests include uploading tables and calendars, ordering jobs, and monitoring job execution.
-
to process system requests that are received from the Control-M/EM environment and act on the z/OS data center. Such requests include receiving and sending global conditions.
Activating the Control-M Application Server
The Control-M Application Server is activated by starting the IOAGATE started task, which in turn starts the corresponding CTMAS task. To do so, issue the following operator command:
S IOAGATE
Deactivating the Control-M Application Server
The Control-M Application Server is deactivated by stopping the IOAGATE started task, which also stops the corresponding CTMAS task. To do so, issue the following operator command:
P IOAGATE
CTMAS Operator Commands
To stop communication between CTMAS and Control-M/EM, issue the following command:
F CTMAS.CTMAS001,STOPLINK
To establish communication between CTMAS and Control-M/EM, issue the following command:
F CTMAS.CTMAS001,STARTLINK
To enable or disable trace entries pertaining to CTMAS, issue the following command with the appropriate trace parameters:
F CTMAS.CTMAS001,TRACE=(…)
For details about usage and parameters, see Internal Trace facility.
To print a summary of storage memory allocations to the file referenced by the DAPRENV DD statement, issue the following command:
F CTMAS.CTMAS001,LISTVSUM
To print a detailed map of storage memory allocations to the file referenced by the DAPRENV DD statement, issue the following command:
F CTMAS.CTMAS001,LISTVDET
The Download Process
The download process consists of transferring a new image of the Control-M repository to the Control-M/EM server. Data transferred consists of the following files:
-
the Control-M Active Jobs File
-
the Control-M Resource files
-
the IOA Conditions file
Download always takes place following New Day processing by the Control-M monitor. Download also occurs whenever communication with the Control‑M/EM gateway is reestablished.
Message CTWH06I in the CTMAS job log signals the completion of the download process. It indicates the confirmation by Control-M/EM that the download was successful.
Download Job Filtering
Sometimes it is necessary to manually prevent a specific job from being downloaded to Control-M/EM, because the job definition causes problems on the Control‑M/EM database or because it caused CTMAS to abend during the previous download. In the latter case, the CTMPARM parameter DWNLDERR can be set to value EMX (the default value) in order to automatically exclude the job from the next download when CTMAS is restarted.
Alternatively, the LOG value can be specified for the DWNLDERR parameter, in which case a message is written to the IOA log indicating which job was being processed at the time of the abend, but the job is not excluded from the next download.
In order to manually prevent a specific job from being downloaded to Control‑M/EM, the EMDOWNLD service of the CTMAPI utility can be used to perform such an action. EMDOWNLD provides the following functions:
-
EXCLUDE - Exclude the specified job from being downloaded to Control-M/EM
-
ACCEPT - No longer exclude the specified job from being downloaded to Control-M/EM
-
EXCLUDE LIST - List all jobs currently excluded from download to Control-M/EM
-
ACCEPT ALL - Include all currently excluded jobs in the next download to Control-M/EM
For information on the format of commands using the CTMAPI utility, see the Control-M for z/OS User Guide.
Prevent download of job: Orderid 000DB
//S1 EXEC PGM=CTMAPI,PARM='EMDOWNLD EXCLUDE OID=000DB'
Allow download of job: Member name BR14
//S1 EXEC PGM=CTMAPI,PARM='EMDOWNLD ACCEPT MEMBER=BR14'
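The EXCLUDE LIST and ACCEPT ALL functions follow the same pattern. The following sketches assume that the function names are passed in the PARM field exactly as listed above; verify the exact syntax against the CTMAPI documentation in the Control-M for z/OS User Guide:
//S1 EXEC PGM=CTMAPI,PARM='EMDOWNLD EXCLUDE LIST'
//S1 EXEC PGM=CTMAPI,PARM='EMDOWNLD ACCEPT ALL'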
Managing the API Gateway
The Control-M API Gateway Application Servers (CTMAASx) are controlled by the Control-M API Gateway and are responsible for sending API REST requests to the relevant services.
These Application Servers are started and stopped on demand by the API Gateway according to the load of user requests directed from the EM. For example, when only sporadic Job Log requests need to be served, two Application Servers are started. However, if the load changes and more frequent requests are encountered (for example, requests initiated by the Control-M Workload Archive), up to four servers are started.
Activating the Control-M API Gateway
The Control-M API Gateway is normally activated by CTMCAS, and the Control-M API Gateway then activates the Application Servers (CTMAASx). To manually activate the API Gateway, you can issue the following operator command:
S CTMAPIG
Deactivating the Control-M API Gateway
The Control-M API Gateway Application Servers are deactivated by stopping the Control-M API Gateway started task. To do so, issue the following operator command:
P CTMAPIG
CTMAASx Operator Commands
To enable or disable trace entries pertaining to CTMAASx (the Control-M API Gateway Application Servers, where x is the Application Server number), issue the following command with the appropriate trace parameters:
F CTMAASx,TRACE=(…)
For details about usage and parameters, see Internal Trace facility.
To print a summary of storage memory allocations to the file referenced by the DAPRENV DD statement, issue the following command:
F CTMAASx,LISTVSUM
To print a detailed map of storage memory allocations to the file referenced by the DAPRENV DD statement, issue the following command:
F CTMAASx,LISTVDET
CTMAPIG Operator Command
To display the current status of the active and idle mailboxes, which are used by the Control-M API Gateway (CTMAPIG) to communicate with its Application Servers, issue the following command:
F CTMAPIG,APPL=HEALTH
Controlling the API Gateway Trace
To control the API Gateway trace, adjust the logging levels in member CTMAPARM in the ilprefa.PARM library, as directed by BMC Support. For example:
--logging.level.root=INFO \
--logging.level.com.bmc.ctmsServices.ZosApiGtw=DEBUG \
--logging.level.com.bmc.ctmsServices.ZosApiGtwConnectors=TRACE
Managing the CMEM Facility
The Control‑M Event manager (CMEM) handles events occurring outside the control of the Control‑M monitor. CMEM consists of a monitor that uses the IOA subsystem to perform predefined actions in response to system events (for example, arrival of a specified job on the job spool).
CMEM and Control-O
If Control‑O is installed, the Control‑O monitor assumes control of the CMEM facility and performs CMEM functions using its own monitor and subsystem facilities, rendering this description of CMEM irrelevant. Control‑O and the IOA subsystem use the same subsystem name. For information about managing the CMEM facility when Control‑O is installed, see Control-O.
Before starting Control‑O, CMEM must be shut down.
When Control‑O is shut down, the CMEM facility is also shut down. To restart Control‑O CMEM support after Control‑O has been shut down, issue the following operator command (do this only in an emergency situation):
S CONTROLO,TYPE=CTOCMEM
Activating the CMEM Facility
It is recommended that CMEM be active in every computer in the data center (not just in the computer where the Control‑M monitor is working). However, it is possible that in your data center CMEM does not operate in all the computers. (This option is controlled by the Control‑M installation parameters.)
The CMEM monitor must operate 24 hours a day. The usual way to ensure this is to automatically initialize the CMEM monitor during the IPL process. For more information, see the Control‑M chapter in the INCONTROL for z/OS Installation Guide: Installing. To activate the CMEM Subsystem manually, use the operator command
S CTMCMEM
The same operator command can be used to activate the CMEM monitor manually.
Deactivating the CMEM Facility
Under normal circumstances, the CMEM monitor is not shut down. However, CMEM shutdown may be necessary for the following reasons:
-
to resolve a problem that cannot otherwise be resolved. In this case, the monitor must be restored immediately to minimize the impact of the shutdown on the work environment.
-
to clean up (erase) all loaded CMEM tables from memory, or to stop all CMEM functionality (for example, for a system shutdown)
To stop and immediately restart the CMEM facility, replace the active CMEM monitor by starting a new CMEM monitor. For more information, see Replacing an Active CMEM Monitor.
When the monitor replacement method is not applicable, and a complete shutdown is required, issue one of the following operator commands:
F CTMCMEM,STOP
P CTMCMEM
CMEM shuts down after a few minutes.
CMEM rules are never triggered for dataset events and step termination events caused by jobs that start when CMEM is down.
Replacing an Active CMEM Monitor
If a CMEM monitor is currently active, and a new CMEM monitor is started (using operator command S CTMCMEM), the current CMEM monitor passes execution control to the new CMEM monitor and then shuts down. It is not necessary to reload the rule tables. They are passed from the current monitor to the new one. Therefore, to stop and immediately restart the CMEM monitor with minimum interference to ongoing work, issue the following operator command:
S CTMCMEM
Replacing the Active CMEM Executor Modules
When the active CMEM monitor is replaced, most CMEM modules are automatically reloaded. If maintenance is supplied for the CMEM executor modules or their messages, a reload command can be used to replace the modules without stopping CMEM.
The following modules can be refreshed:
-
CTOWTO, a CMEM executor module
-
CTOAIDT, a CMEM executor module
-
messages that are used by the above modules
To replace module CTOWTO, use the operator command
F CTMCMEM,RELOAD=CTOWTO
To replace the messages used by CTOWTO, use the operator command
F CTMCMEM,RELOAD=MESSAGES
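Assuming the same RELOAD syntax applies to the other executor module listed above, CTOAIDT can be refreshed in the same way:
F CTMCMEM,RELOAD=CTOAIDT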
Replacing the Active UNIX for z/OS (OpenEdition) Interface Module (CTOAODT)
When the active CMEM monitor is replaced, most CMEM modules are automatically reloaded. However, the CTOAODT module must be separately reloaded if maintenance is supplied for the UNIX for z/OS (OpenEdition) interface module.
CTOAODT is shared among different IOA environments that are active in the system. Therefore, to replace the module, the current CTOAODT copy must be deactivated in all IOA environments on the system before a new copy can be loaded.
Deactivating the Current Copy of CTOAODT
To deactivate the current CTOAODT copy in all IOA environments on the system, do the following:
-
Stop UNIX for z/OS (OpenEdition) support by issuing the following operator command for the CMEM Monitor of every IOA environment on the system:
F monitor,STOPOE
-
Alternatively, stop the appropriate monitor.
Wait for the following message to appear:
CTO792I OPENEDITION INTERFACE MODULE REMOVED
-
After the CTO792I message has been displayed, UNIX for z/OS (OpenEdition) support has been stopped, and the new copy of CTOAODT can be loaded.
Loading the New Copy of CTOAODT
The procedure for loading the new CTOAODT copy in all IOA environments on the system is shown in the following steps:
-
Load the new module with the following operator command for the CMEM Monitor of the environment in which the PTF was applied:
F CMEM,STARTOE
-
Verify that the following message appears:
CTO781I OPENEDITION INTERFACE MODULE SUCCESSFULLY LOADED
-
Restore the OpenEdition support in the rest of the IOA environments where it was previously stopped, by running the following operator command for the CMEM Monitor of each one of them:
F monitor,STARTOE
-
Alternatively, restart the appropriate monitor if it was stopped.
Automatic Restart Management (ARM) and CMEM
The CMEM monitor should not be defined for Automatic Restart Management (ARM) because
-
CMEM has its own recovery process
-
CMEM is active on each system, so there is no need to move it to another system when the original system becomes inactive.
Loading Rules
CMEM loads rules into the ECSA. Rules are loaded during CMEM startup under the following conditions:
-
it is the first time CMEM is started up
-
the operator issues the CMEM modify command C
-
a user forces the CMEM rules table from the Tables list of the Control-M Event Manager Rule Definition screen (Screen C)
During the load process, the monitor performs logical checks to verify the correctness of each rule.
If an error is found, the rule is rejected and an error message is printed in the IOA Log and in the CMEM monitor SYSPRINT.
Automatic Loading of Rules
When the CMEM facility is started (and is not replacing an active CMEM monitor), it loads the CMEM rule tables specified in the CMEM list. The CMEM list is a member of a partitioned dataset (PDS) that contains the names of the tables to be ordered. A default CMEM list is located in member IOACMEML in the IOA PARM library (referenced by DD statement DACTMLST). The default list can be overridden by specifying the ORDER parameter in command S CTMCMEM, which references a different CMEM list.
Each line in the CMEM list has the following format:
* library table
where
-
* must be included as a constant
-
library is the rule library name
-
table is the rule table name (or mask).
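For illustration only, assuming the sample rule library name used later in this chapter, a CMEM list member might contain lines such as the following (the library and table names at your site will differ):
* CTM.PROD.RULES DATASET
* CTM.PROD.RULES PROD*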
Manual Loading of Rules using the CMEM Online Facility
The CMEM list specified during startup contains a list of rule tables to be activated by CMEM when it is started.
To load additional tables, or to replace a currently active table with a new (updated) copy of the rules in the table using the CMEM facility, enter the CMEM Online facility (=C) and use the FORCE option in the Table List screen.
Manual Loading of Rules using Operator Commands
Rules are normally loaded automatically, as discussed in "Automatic Loading of Rules." However, manual intervention is possible.
The CMEM list specified during startup contains a list of rule tables to be activated by CMEM when it is started.
To load additional tables, or to replace a currently active table with a new (updated) copy of the rules in the table, issue the following operator command:
F CTMCMEM,C=library(table)
where
-
C loads a CMEM rule. Each rule is loaded by the CMEM monitor and is activated.
-
library is the rule library name
-
table is the rule table name (or mask).
F CTMCMEM,C=CTM.PROD.RULES(DATASET)
Loads table DATASET from CTM.PROD.RULES
F CTMCMEM,C=CTM.PROD.RULES(*)
Loads all tables from CTM.PROD.RULES
F CTMCMEM,C=CTM.PROD.RULES(PROD*)
Loads tables whose name starts with PROD from CTM.PROD.RULES
Replacing All CMEM Rule Tables in One CPU
To replace all loaded CMEM tables with those in the CMEM list (referenced by DD statement DACTMLST), use the following operator command:
F CTMCMEM,C=ALL[,REBUILD]
If the REBUILD option is specified, CMEM rule tables not listed in the CMEM list are deleted.
If the REBUILD option is not specified, previously loaded CMEM rule tables are replaced by a new copy of the rule table, and unchanged tables are left intact.
Replacing All CMEM Rules Tables in All CPUs
All CMEM rules in all the CPUs where the CMEM monitor is active can be reloaded at the same time. The reload process is performed in the same way as the automatic loading is performed during startup of the CMEM monitor. All active rules are deleted, and all rule tables specified in the CMEM list referenced by DD statement DACTMLST are loaded.
To replace all rules in all the CPUs issue the following command:
F CONTROLM,NEWCONLIST
Specifying this command is the same as specifying F CTMCMEM,C=ALL,REBUILD in all CPUs.
Control‑M informs the CMEM monitor running in each CPU about this command request.
Rule tables that were manually loaded and/or are not in the CMEM list are deleted during execution of this operator command.
Deleting (Deactivating) an Active Rule Table
An active CMEM rule table can be manually deactivated using the following operator command:
F CTMCMEM,D=library(table)
where
-
D deactivates a CMEM rule table. Each rule is deactivated by the CMEM monitor
-
library is the rule library name
-
table is the rule table name (or mask)
F CTMCMEM,D=CTM.PROD.RULES(PRODTAB1)
Displaying Active Rules
A list of the active rules in the CMEM facility (up to a maximum of 1000 rules) can be displayed on the operator console. To display the list, enter the following operator command
F CTMCMEM,DISPLAY[=DETAIL]
The optional DETAIL parameter enables you to generate a detailed list, with more extensive information than a regular list.
A regular list of CMEM rules includes the following information:
Table 106 Information in a Regular List of Rules
Field |
Description |
---|---|
RULE |
Rule name (that is, the name in the first ON statement of the rule definition). |
TYPE |
Rule type. Valid types:
|
STATUS |
Rule status. Valid statuses are:
|
PRTY |
Internal CMEM rule scanning priority |
TABLE |
Name of the table (or member) that contains the rule |
LIBRARY |
Name of the library that contains the rule member |
The following example shows the format of a detailed list of CMEM rules with one ON DSNEVENT rule:
CTO12SI RULE LIST DISPLAY FOR LPAR MVS3:
NAME TYPE STATUS OWNER LAST-ORDERED ACTIVE# LAST-TRIGGERED
TABLE SEQNO LIBRARY PRIORITY
IN ADDITIONAL-FILTERS ON#
------------------------------------------------------------
FTP* D ACTIVE K81 20230301 19:45 000001 20230302 13:10
K81ONDSN 00002 IOAA.DEV#R3.CTO.OPR.RULES ( )
N FTP* INCOMING.DATA.SET C 01
A detailed list of CMEM rules includes the following information:
Table 106a Information in a Detailed List of Rules
Field |
Description |
---|---|
NAME |
Rule name (that is, the name in the first ON statement of the rule definition). |
TYPE |
Rule type. Valid types:
|
STATUS |
Rule status. Valid statuses are:
|
OWNER |
Name of rule owner |
LAST-ORDERED |
Date and time when the rule was last ordered |
ACTIVE# |
Number of times that the rule was triggered (activated) since Order time |
LAST-TRIGGERED |
Date and time when the rule was last triggered (that is, activated) |
TABLE |
Name of the table (or member) that contains the rule |
SEQNO |
Serial number of the rule in the table |
LIBRARY |
Name of the library that contains the rule member |
PRIORITY |
Internal CMEM rule scanning priority |
IN |
Whether the rule has prerequisite conditions for activation.
Default: N |
ADDITIONAL-FILTERS |
Additional optional fields that are specific to each type of rule. For more details, see the next table. |
ON# |
Number of ON statements defined in the rule |
Table 106b Additional Filters in a Detailed List of Rules
Rule Type |
Additional Fields |
---|---|
D - ON DSNEVENT |
|
R - ON JOBARRIVAL |
Job name or mask |
X - ON JOBEND |
Job name or mask |
Z - ON STEP |
|
V - ON MAINVIEW ALARM |
|
Controlling CMEM Rule Operation Mode
The mode of operation (the trace mode) for a CMEM rule is determined by parameter MODE in its rule definition. Sometimes it is useful to override the operation mode of all active rules and verify that events and actions are recorded in a particular way. For example
-
Ensure a trace of all rules (that is, all events and actions are recorded) to facilitate analysis of the interaction between rules.
-
Record (trace) only the triggering of every rule.
Global trace operations are requested using operator commands, as follows:
-
Activate a complete trace by issuing the following command:
CopyF CTMCMEM,LOG=ALL
-
All rules are fully traced as if they were defined with mode LOG. This operator command must only be used temporarily for specific tests, because extended use of LOG mode can adversely affect CMEM performance.
Trace rule triggering only by issuing the following command:
CopyF CTMCMEM,LOG=TRIGGER
-
Only rule triggering is traced for all rules. However, rules defined with mode LOG are fully traced.
Restore the default operation mode (as defined in the rule definition) for each rule by issuing the following command:
CopyF CTMCMEM,LOG=DEFAULT
Modifying the CMEM Sleeping Interval
CMEM "wakes up" every few seconds. This time interval is defined using the Control‑M installation parameters and can be changed by the INCONTROL administrator. In addition, the interval can be modified with the operator command
F CTMCMEM,INTERVAL=nn
where nn represents the interval in seconds.
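For example, assuming the interval should be set to 30 seconds:
F CTMCMEM,INTERVAL=30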
When the modification is accepted by CMEM, the following message is displayed on the operator console:
CTO123I CMEM INTERVAL IS SET TO nn SECONDS
Refreshing the CMEM Security Cache
CMEM security modules use a security block to identify each user for which an authority check is performed. The first time a user’s security authorization is checked, CMEM creates a security block for that user. The security block can then optionally be saved for the next time the user’s security authorization is checked. Security blocks saved for subsequent checks are kept in the CMEM security cache.
The CMEM security cache holds security blocks for the last 30 users whose security authorization was checked.
Changes made to a user’s security authorization (since the last time that the user’s security block was created) are not automatically included in the user’s security block in the CMEM security cache. However, if a user’s security authorization has been changed and there is no security block in the CMEM security cache for that user, the changes take effect the next time the user’s security authorization is checked.
To immediately include new user authorization information in the CMEM security cache, refresh the security cache using the following operator command:
F CTMCMEM,NEWSECDEF
This command refreshes all user authorization information in the CMEM security cache.
When the modification is accepted, the following message is displayed on the operator console:
CTO251I RUNTIME SECURITY REFRESH ENDED OK
Private REGION Requirements of the CMEM Monitor
CMEM monitor procedure CTMCMEM is supplied with a default region size of 5 MB. The region size can optionally be increased to a maximum of 2 GB.
Calculating Region Size
Include the following items in your calculation of the amount of virtual storage needed by the CMEM monitor:
-
block size of the IOA Conditions file (fixed at 32,760)
The storage chunks allocated for this requirement are above the 16 MB line.
-
CMEM monitor working buffers require approximately 6500K of virtual storage. The storage chunks allocated for this requirement are mostly above the 16 MB line.
-
CMEM monitor software requires approximately 2000 K of virtual storage, depending on the environment and the functions used. The storage chunks allocated for this requirement are both above and below the 16 MB line.
-
site defined work areas and programs (for example, user exits)
These items usually require a small amount of virtual storage. Therefore, it is usually not necessary to calculate the requirements of site-defined components precisely. However, it is important that you allow some extra storage space for these components. The storage chunks allocated for this requirement are both above and below the 16 MB line.
You should specify a larger than necessary region size to ensure sufficient storage space for CMEM and related MVS activities.
A site has the following:
-
IOA Conditions file block size of 32760
-
32 slots per block (CNDREC#)
-
site-defined components requiring approximately 0.20 MB of virtual storage
Calculate virtual storage for the CMEM monitor as follows:
Table 107 CMEM Monitor Virtual Storage (Below the 16 MB Line)
Component |
Size |
Comments |
---|---|---|
CMEM software |
1.00 MB |
|
CMEM working buffers |
1.00 MB |
|
Site-defined components |
0.20 MB |
|
Extra space for MVS activities |
0.20 MB |
|
Total |
2.40 MB |
Table 108 CMEM Monitor Virtual Storage (Above the 16 MB Line)
Component |
Size |
Comments |
---|---|---|
IOA Conditions file |
34.00 MB |
(32,760 * 32 days * 32 slots per record) + 64K |
CMEM software |
1.00 MB |
|
CMEM working buffers |
5.50 MB |
|
Site-defined components |
0.20 MB |
|
Extra space for MVS activities |
0.20 MB |
|
Total |
40.90 MB |
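As a rough illustration based on the example totals above (2.40 MB below the line and 40.90 MB above the line), a region size somewhat larger than the calculated total, for example REGION=48M on the EXEC statement of the CMEM monitor procedure, would leave headroom for MVS activities. The 48M value is only an assumption for this example; derive the actual value from your site's figures.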
Troubleshooting
MVS allocates the region size specified for the CMEM monitor unless a local exit (for example, IEALIMIT, IEFUSI, or another MVS or JES exit) is used to limit the region size of jobs and/or started tasks at the site.
Depending on the value of the REGION parameter on the EXEC statement, some MVS versions calculate the amount of the region allocated above the line differently. In case of doubt, see the description of the REGION parameter of the EXEC statement in the JCL Reference Guide for your operating system level.
Message IEF374I in the third SYSOUT of the CMEM monitor indicates the amount of virtual storage used by the CMEM monitor. Compare the information in this message with the existing region size definition.
If sufficient virtual storage is not available for the CMEM monitor, use on-site performance tools to determine if the specified region size was rejected by MVS (for example, using a local exit).
-
If MVS accepted the specified region, recalculate the CMEM monitor’s virtual storage requirements, as shown above, and modify the region size in the EXEC statement of the CMEM monitor procedure accordingly.
-
If an MVS procedure rejected the specified region size, consult your system administrator.
Storage Allocation
At startup, the CMEM monitor allocates working storage. CMEM can allocate most virtual storage above the 16 MB line. MVS, which considers the specified job, the amount of requested storage, MVS exits, and so on, determines whether storage is allocated above or below the 16 MB line.
Structure of the IOA Conditions File
For information about the structure and space requirements of the IOA Conditions file, see the section that discusses the structure of the IOA Conditions File in the INCONTROL for z/OS Installation Guide: Installing.
CMEM Usage of the Common Service Area (CSA and ECSA)
CMEM receives control for processing events under the various tasks in the system, that is, CMEM acts as part of the address space that issued the corresponding message, command, or other event. For that reason, some of the code and data that are in use by CMEM reside in common storage, accessible from all address spaces, as outlined below. Most of this common storage is above the 16MB line, in the ECSA, but a small portion is allocated by CMEM below the 16MB line, in the CSA, due to MVS services requirements.
Use the information in the following tables to calculate CMEM ECSA and CSA storage requirements:
Table 109 CMEM’s Usage of ECSA Storage Above the 16 MB Line
Item |
Size |
Comments |
---|---|---|
Subsystem executor |
250 K |
|
Work Buffers |
480 K |
The CMEM monitor allocates 20 work buffers of 24K each, in internal control blocks called WSCs. |
Rules |
50 K |
This amount assumes 500 rules and an average of 100 bytes per rule. |
XES Preallocated Buffers |
3000 K |
Preallocated buffers for XES operations. |
Total |
3780 K |
|
Table 110 CMEM’s Usage of CSA Storage Below the 16 MB Line
Item |
Size |
---|---|
SWT and other system control blocks |
5.0 K |
Dataset triggering executor |
50.0 K |
UNIX for z/OS interface (USS) |
4.0 K |
Total |
59.0 K |
CMEM—Control-M Communication
The Control‑M installation chapter of the INCONTROL for z/OS Installation Guide: Installing describes the installation and implementation of the two methods used by the Control‑M Event Manager (CMEM) to communicate with Control‑M. These methods are
-
subsystem-to-monitor (S2M) communication files
-
MVS System Logger Sysplex interface
For a description of the advantages of the MVS System Logger Sysplex interface, see the Control‑M chapter of the INCONTROL for z/OS Installation Guide: Installing.
The following topics discuss the coupling facility, the coupling facility resource manager, and the MVS System Logger Sysplex interface.
Coupling Facility and Coupling Facility Resource Management
A coupling facility is a shareable storage medium (not a shared storage device) that facilitates high-speed access to shared data across applications and subsystems running on the same or different MVS systems.
A coupling facility can be shared by the systems in one Sysplex only. It enables data to be shared by all users in a Sysplex while ensuring data integrity and consistency. To share data, systems in the Sysplex must be connected to the coupling facility using coupling facility channels and must have access to the coupling facility resource management (CFRM) couple dataset.
Storage in a coupling facility is divided into distinct objects called structures. Structures are used by authorized programs to implement data sharing and high-speed serialization. Structure types are cache, list and lock, each providing a specific function to the application. MVS System Logger is a set of standard services that allows an application to write to, browse in, and delete from a coupling facility structure or linear dataset.
A coupling facility is managed using the coupling facility resource management (CFRM) policy. The CFRM policy allows a user to specify how a coupling facility and its resources are to be used at the site. In a CFRM policy, a user supplies information about each coupling facility and each coupling facility structure at the site. For information on planning a CFRM policy, see the IBM manual MVS Setting Up a Sysplex.
Perform the following steps to set up a CFRM policy:
-
Format a CFRM couple dataset by using the IXCL1DSU format utility program. For more information, see the IBM manual MVS Setting Up a Sysplex.
-
Define one or more CFRM administrative policies by using the IXCMIAPU administrative data utility. For more information, see the IBM manual MVS Setting Up a Sysplex.
-
Make one of the defined CFRM policies the active administrative policy for the Sysplex. Start it by using the operator command SETXCF START,POLICY,TYPE=CFRM. For more information, see the IBM manual MVS Setting Up a Sysplex.
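For example, a sketch of the activation command, where CFRMPOL1 is a hypothetical policy name defined in the previous step:
SETXCF START,POLICY,TYPE=CFRM,POLNAME=CFRMPOL1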
MVS System Logger Sysplex Interface
MVS System Logger is a robust set of standard MVS services that allows an application to write to, browse in, and delete from a coupling facility structure or linear dataset. This set of MVS services has been chosen to implement Control‑M Event Manager (CMEM)—Control‑M communications and to replace the subsystem-to-monitor communication files. The write, browse and delete functions of the MVS System Logger are tailor-made for CMEM ‘writing’ to the coupling facility and Control‑M ‘reading’ from the coupling facility.
Perform the following steps to install and implement the MVS System Logger Sysplex interface:
-
Follow the instructions to set up a CFRM policy (summarized above).
-
Specify the CMEM Sysplex configuration parameters (CMMPLEX). For details, see the Control-M chapter of the INCONTROL for z/OS Installation Guide: Installing.
For a discussion of the advantages and other implementation-related details of the MVS System Logger Sysplex interface, see the Control-M chapter of the INCONTROL for z/OS Installation Guide: Installing.
Problem Determination
If the CMEM facility is not functioning correctly, you can try the following methods to determine what the problem is:
CMEM Internal Trace
CMEM is supplied with the following internal trace facilities:
-
the ability to print an internal trace
-
the ability to print the contents of the CMEM internal data areas
Under normal circumstances, the debugging facilities are dormant. However, if required (that is, if BMC Customer Support has requested trace information), the trace facilities can be activated as follows:
Perform either step 1 or 2 below.
-
Start a new CMEM monitor with the following operator command:
S CTMCMEM,TRACE=nn
The current CMEM monitor passes control to the new CMEM monitor and shuts down.
-
Issue the following operator command:
F CTMCMEM,TRACE=level
The required tracing level is supplied by BMC Customer Support. It can be any value from 000 to 255. (000 specifies no trace.)
Table 111 Trace Levels for the CMEM Internal Trace Facility
Field |
Description and Options |
---|---|
level |
Trace levels to be activated or deactivated. The CMEM Internal Trace facility has 128 levels (that is, from 1 through 128). Any number of these levels can be on at a given time. Valid values: x - Trace level to turn on TRACE=3 turns on trace level 3. ‑x - Trace level to turn off TRACE=‑3 turns off trace level 3. (x:y) - Range of trace levels to turn on, where x is the first level in the range and y is the last level in the range. TRACE=(1:10) turns on trace levels 1 through 10. (‑x:‑y) - Range of trace levels to turn off, where x is the first level in the range and y is the last level in the range. TRACE=(‑1:‑10) turns off trace levels 1 through 10. (x,y,z,...) - Multiple trace levels to turn on. TRACE=(3,5,29) turns on trace levels 3, 5 and 29. (‑x,‑y,‑z,...) - Multiple trace levels to turn off. TRACE=(‑3,‑5,‑29) turns off trace levels 3, 5 and 29. SHOW - Shows the current status of all trace levels. |
Avoid activating CMEM with the TRACE parameter on a regular basis, because if a JES problem occurs, CMEM may get hung up waiting for JES.
-
The trace information is printed to DD statements DATRACE and DADUMP of the CMEM procedure. If you are running a trace on the Subsystem Interface (SSI), start the General Trace Facility (GTF).
-
When you have finished your problem determination procedures, either start a new CMEM monitor or turn off the trace, using one of the following operator commands:
S CTMCMEM
F CTMCMEM,TRACE=00
Print CMEM Internal Data Areas
To print CMEM internal data areas, issue the following operator command:
F CTMCMEM,SNAP[=name1,name2 ...,namen]
where name1, name2,... namen are the names of the CMEM internal data areas.
When no name is specified, all data areas are printed. Your BMC Customer Support can provide the list of data area names. Which data areas are printed depends on the problem encountered:
Table 112 Valid Data Area Names
ALL |
ALO |
CAS |
CONLIST |
CONS |
CONSOLE |
DLY |
EXO |
LINK |
MAIN |
MCT |
MTO |
MTOINX |
MTOLNK |
MTOMIX |
MTOMPT |
MTOPLB |
MTOPND |
MTOPNX |
MTOSRV |
MTOSRVA |
MTOWSC |
MVS |
OMT |
OPR |
PARM |
PND |
RFR |
RQCALO |
RQCDLY |
RQCEXO |
RQCFREE |
RQCMTO |
RQCRFR |
RQCSLO |
RQCSRV |
RQCSTO |
RQC |
RQH |
RULES |
SEC |
SLO |
SRV |
SSCT |
SSVT |
STO |
SWT |
UCM |
VARS |
WISHES |
WSC |
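For example, the following sketch requests a snap of two specific data areas from the table above (the data areas actually needed for a given problem should come from BMC Customer Support):
F CTMCMEM,SNAP=RULES,MCT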
When the snap is completed, the following message is displayed on the console:
CME150I SNAP COMMAND WAS PERFORMED SNAPID=xxxx
where xxxx is the snap identifying number that is displayed at the lower right of the screen after the snap is completed.
Displaying Internal Resource Utilization Statistics
To obtain statistical information on internal resource utilization, issue the following operator command:
F CTMCMEM,USAGESTATS[=type]
In this command, type designates the type of a specific internal resource.
Valid values for type in this command are RQC, PND, and WSC. When ALL is specified as a resource type, or when the parameter is omitted, information regarding all the above resource types is displayed.
The following is a typical sequence of messages displayed when this command is issued:
CTO356I USAGESTATS
CTO15EI RQC USAGE: CURRENTLY 1%, HIGHEST 1% (000001 AND 000019 OUT OF 010000)
CTO15EI PND USAGE: CURRENTLY 0%, HIGHEST 0% (000000 AND 000000 OUT OF 000011)
CTO15EI WSC USAGE: CURRENTLY 0%, HIGHEST 10% (000000 AND 000002 OUT OF 000020)
CTO357I COMMAND ENDED SUCCESSFULLY
For more information about these messages, see the INCONTROL for z/OS Messages Manual.
CMEM users can tune the PND and WSC by adjusting the values of the WAITPR# and WSCs# parameters in the CMMPARM member. However, the RQC cannot be tuned. Look for any PTFs that correct problems handling RQC.
CMEM Diagnostic Tests
This section describes basic tests for locating installation problems in the Control‑M CMEM facility.
The CMEM facility requires the proper setup and functioning of the following major components:
-
the ControlM monitor (started task CONTROLM)
-
one or more CMEM monitors (started tasks CTMCMEM). One CMEM monitor is normally required per CPU
-
the Monitor-to-Subsystem file (M2S). This file passes requests from the ControlM monitor to the CMEM monitors
-
Subsystem-to-Monitor (S2M) communication. Communication is established either through S2M files or through the Sysplex Logger function
-
This communication passes requests from the CMEM monitors to the Control-M monitor. Using the file method, one Subsystem-to-Monitor (S2M) file is required for each CPU. The files are required only if the Sysplex Logger is not in use.
Perform the tests only after the CMEM has been fully installed. Corrections to installation parameters can be made either manually in the corresponding members, or by using ICE.
-
Before testing, check that
-
parameters in member IOACPRM describe all CPUs in the complex, in addition to the names of the communication files or Logger structure
-
each communication file is uniquely named
-
either the communication files between the monitor and the subsystems have been allocated and formatted, or the Logger structure was allocated before the first CMEM or Control-M monitor start
This attribute is only available for Control-O monitor starts.
-
an appropriate CMEM rule has been created, and either manually or automatically ordered by the CMEM monitor
The most basic diagnostic test is to define a CMEM rule table so that when a certain job enters the system (that is, it is displayed on the reader), a condition is added for the same ODAT. For this basic test, it is recommended that you define a specific job name (do not use generic names with asterisks).
An additional test is to define a CMEM rule table so that when a certain job enters the system (that is, it is displayed on the JES internal reader), a schedule table is ordered. For this test, the scheduling definition should contain one simple job definition.
-
the subsystem has been defined in SYS1.PARMLIB in all CPUs where CMEM must work (or SSALLOC=Y has been specified in member IOAPARM)
-
if DSNEVENT or step events are to be monitored, check that JOBNAMES monitoring is turned on.
-
all provided fixes from BMC (with regard to CMEM functions) have been applied
You can use the following command to check the monitoring facility status that SETCON sets:
D OPDATA,MONITOR
-
if DSNEVENT or step events are to be monitored, the MSGLEVEL parameter of all jobs, started tasks, or TSUs to be monitored contains the value 1
For details about installation requirements to activate CMEM, see
-
the INCONTROL for z/OS Installation Guide: Installing
-
the INCONTROL for z/OS Security Guide
-
the JCL and AutoEdit facility chapter in the Control-M for z/OS User Guide
For information on changing CMEM-related parameters, see the INCONTROL for z/OS Installation Guide: Installing.
-
Stop the CMEM monitors (if active) in all CPUs, using the following operator command:
F CTMCMEM,STOP
-
If member IOACPRM was corrected, stop and restart the ControlM monitor.
-
Before restarting the ControlM monitor, remember to refresh any program fetch product (PDSMAN, PMO, QFETCH, and so on). If the IOA LOAD library was added to the linklist, refresh LLA also.
-
When the ControlM monitor comes up, it must issue the message CTM440I monitor ready to receive CMEM requests. If this message was not issued, search for error messages within
-
the IOA Log
-
the job log of ControlM monitor SYSOUT
-
the MVS syslog
-
-
If no error message is displayed, IOACPRM does not request CMEM processing to be performed. This means that one of the following was not specified:
-
parameter CPUS
-
parameter CTM2SBS in conjunction with parameter Use System Logger set to N (No)
-
-
If an error message is displayed, it means that ControlM encountered an error while processing CMEM-related parameters. Locate the problem, correct it and restart the ControlM monitor.
-
-
Start CMEM in all CPUs where it must run by issuing the following operator command in each CPU:
S CTMCMEM
If CTMCMEM successfully initialized, the following messages appear in the CTMCMEM job log:
CTM227I IOA subsystem "I600" initialization of ControlM functions completed.
CTO147I CTMCMEM - initialization complete. Type=CTMCMEM,SUB=JES2, RUN#=0001-
If the above messages do not appear in the job log, search for error messages within the CTMCMEM job log and the MVS system log (SYSLOG) in general.
-
While searching, note that all related messages start with a prefix of CTM, CME, or IOA. An error message number usually contains the suffix E (Error) or S (Severe error).
-
Locate the problem, correct it and restart CTMCMEM. If the problem correction includes changes to member IOAPARM, CTMPARM, or IOACPRM, return to step 1 above.
-
-
Submit a test job to evaluate whether CMEM as a whole functions properly. You should perform the basic test described in Step 1 above.
-
The job must be submitted from TSO or ROSCOE in one of the CPUs in which the CMEM monitor is active, and must have the exact name as defined in the CMEM rule table.
-
After message HASP100 (for JES2) or IAT6101/IAT6100 (depending on the JES3 version) is displayed, wait a few seconds and check if the action defined in the CMEM rules table was performed for
-
a request to order a schedule table, check the IOA Log and the Active Environment screen (Screen 3)
-
a request to add or delete a condition, check the IOA Log and the IOA Conditions or Resources Display (Screen 4)
-
The action is actually performed by the ControlM monitor, so to test the CMEM functions properly the ControlM monitor must be up.
IOA Exit 7 and Control-M Exit 1 receive control before the condition is added or deleted or the table is ordered (respectively). If you use a localized version of these exits, make sure that the exits perform the localized corrections.
-
-
Repeat this step for all CPUs in which the CMEM monitor is active. If the requested action was not performed by Control-M, skip to step 8 below.
-
-
Change the definitions in the CMEM Rule table, or add new events. Define events to test all event types: JOBARRIVAL, JOBEND, DSNEVENT and STEP. For information on changing the CMEM Rule table, see the online facilities chapter of the ControlM for z/OS User Guide.
-
Issue the operator command F CONTROLM,NEWCONLIST to cause CMEM to reload the updated CMEM Rule tables in all CPUs.
-
This command must be issued only in the CPU where the ControlM monitor is active. It must be issued each time that the CMEM rule tables are modified, and can also be issued to test if the ControlM monitor and the CMEM monitors communicate with each other.
-
The command is directed to the ControlM monitor. After several seconds, the monitor must issue the message
CTM101I NEWCONLIST COMMAND ACCEPTED ...
-
After several more seconds, the CMEM monitors must issue the message
CTO240I NEWCONLIST COMMAND RECEIVED. THE CMEM TABLES WERE RELOADED
-
If the CMEM monitor encounters a problem while performing the NEWCONLIST request, an error message is issued to the job log instead of message CTO240I.
-
If the CMEM monitor does not issue any message at all, the communication files between the Control-M monitor and the CMEM monitors, or the Sysplex Logger, were not set up correctly. Locate the error and correct it. If the correction involves changes in IOACPRM or a reformat of the communication files, repeat the test from step 1 above.
-
-
Submit jobs to test all the event types as defined in step 5 above.
-
These jobs must be submitted from TSO or ROSCOE in one of the CPUs where the CMEM monitor is active. They must run in the same CPU, and must have the exact names as defined in the CMEM Rule table.
-
Check that the actions defined in the CMEM Rule table are performed (the condition was added or deleted, a schedule table was ordered, and so on).
-
Repeat this step in all CPUs where a CMEM monitor is active.
-
After all jobs ended execution, wait a few seconds and check if the action defined in the CMEM rules table has been performed for
-
a request to order a schedule table, check the IOA Log and the Active Environment screen (Screen 3)
-
a request to add or delete a condition, check the IOA Log and the IOA Conditions/Resources screen (Screen 4)
-
a request to stop the job, check the job log of the executed job for messages CTMC41E and CTMC42E
-
-
If the action for a DSNEVENT or step event is not performed, verify that
-
JOBNAMES monitoring is turned on and message IEF403I is issued in the job log of the tested jobs.
In product version 9.0.18.100 or later, JOBNAMES monitoring is turned on automatically at CMEM or Control-O startup.
You can then use the following command to check the monitoring facility status that SETCON sets:
D OPDATA,MONITOR
-
the MSGLEVEL of the tested jobs is set to (x,1); that is, the JESYSMSG sysout file (the third file listed in the sysout) is created with all the deallocation messages.
-
-
no error messages appear in the job log of the executed job.
-
-
If one of these situations cannot be verified, locate the problem, correct it and repeat this step.
-
If these situations can be verified, or if actions for JOBARRIVAL/JOBEND were not performed, continue to the next step.
-
If CMEM does not work properly, and the reason for the error was not located while performing the steps mentioned so far, produce and save the following documentation:
-
Create a dump of the subsystem communication files (Monitor-to-Subsystem file, and all Subsystem-to-Monitor files). The dump can be created using utility IDCAMS, with the following statements:
PRINT IFILE(ddname1) DUMP (subsys-to-monitor file)
PRINT IFILE(ddname2) DUMP (monitor-to-subsys file)
A sample JCL member can be found in member LISTFILE of the IOA JCL library.
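The following is a minimal JCL sketch of such an IDCAMS job. The DD names and dataset names are placeholders only; member LISTFILE in the IOA JCL library remains the authoritative sample.
//PRTCOMM  JOB ...
//* DDS2M and DDM2S point at the Subsystem-to-Monitor and
//* Monitor-to-Subsystem communication files (names are placeholders)
//PRINT    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DDS2M    DD DISP=SHR,DSN=your.s2m.comm.file
//DDM2S    DD DISP=SHR,DSN=your.m2s.comm.file
//SYSIN    DD *
  PRINT IFILE(DDS2M) DUMP
  PRINT IFILE(DDM2S) DUMP
/*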
-
Save the part of the MVS syslog that contains the entire test.
-
Print the rule table.
-
Save the IOA log of the entire test. If the IOA log is printed using KSL or Screen 5, use the SHOW command and specify Y in all CM and CO+CMEM options before printing the log.
-
Print members CTMPARM, IOAPARM and IOACPRM in the IOA PARM library.
After saving this documentation, contact your BMC Customer Support with an exact description of the problem.
-
Managing the CMEM Facility – System Logger Recovery
If the Control-M Event Manager (CMEM) communicates with Control-M through the MVS System Logger Sysplex interface, one or more of several software and hardware components can fail or may require periodic maintenance. Some of these possibilities are described below.
Unplanned Outages
The MVS System Logger is a comprehensive facility for handling communication across the Sysplex. The System Logger is meant to be treated as a black box, providing automatic recovery support if the system, Sysplex, or Coupling Facility structure fails. The z/OS MVS Assembler Services Guide discusses in detail the various components that can fail. Among the failures discussed are:
-
MVS system failure
-
system logger address space failure
-
coupling facility (CF) structure failure
-
log stream or staging data set filling up
-
damaged log stream
-
DASD I/O error
The system logger and MVS initiate automatic recovery from many or most of these failures. It is recommended that users read that discussion before using the system logger for communication between CMEM and Control-M.
Depending on the particular failure, the interface between CMEM and Control-M will either:
-
retry the request
-
rebuild the system logger environment (re-connect) and retry the request
-
disable the CMEM facility
For example, the following errors cause the interface to reconnect to the system logger address space:
-
severe XES error (error code 802)
-
stream token not valid (error code 806)
-
stream token not valid–expired (error code 82D)
-
rebuild in progress (error code 861)
-
no connectivity (error code 864)
-
staging data set being formatted (error code 868)
-
system logger address space initializing (error code 891)
Planned Outages
In most customer sites, if the coupling facility (CF) must be brought down for maintenance, a CF structure must be moved. If other CF-related planned outages must occur, the system (including all production jobs and system address spaces) is brought down. If the customer site does not want to bring system activity to a halt, we recommend temporarily switching the interface between CMEM and Control-M (on both the CMEM and the Control-M side) to use the communication files. Assuming the IOACPRM PARM member is updated with the system IDs, system names, and the communication file names, this involves a simple change from SYSTLOGR=Y to SYSTLOGR=N and recycling CMEM and Control-M.
If CF system maintenance is attempted without switching over to the communication files, CMEM will not be able to write to the system logger and CMEM events will be lost. CMEM does not queue, save, and retry CMEM events in this case. Also, Control-M will not be able to read from the system logger and eventually the CMEM facility will be disabled.
The interface between CMEM and Control-M assumes a healthy and stable system logger environment. If this is not the case, the customer site should use the communication files instead of the system logger interface.
Considerations and Notes
For detailed explanations about what to do when a CMEM‑related parameter is changed, see the INCONTROL for z/OS Installation Guide: Installing and INCONTROL for z/OS Installation Guide: Upgrading.
-
The CMEM-related parameter member is member IOACPRM in the IOA PARM library.
-
The CMEM subsystem is triggered by the following messages:
-
IEF403I - Job started.
-
IEF125I - TSO user logged on.
-
$HASP100 (under JES2), IAT6101/IAT6100 (under JES3) - Job on reader.
-
$HASP395, $HASP001 (under JES2), IAT6108, IEF404I, IEF450I, IEF453I (under JES3) - Job ended
-
-
These messages must be issued for all jobs. However, if these messages must not be displayed on the console, they can be suppressed using member MPFLSTnn in the SYS1.PARMLIB library.
Supporting Interfaces
General Considerations for Third Party Product Interfaces
To prevent insufficient region abends, file integrity problems, and false AJF-full conditions:
-
exclude IOA and Control-M files from any third party buffering or caching products such as DLF, HIPER-CACHE, Ultimizer, Startpool, Batch Optimizer (MVBO), and so on.
-
exclude the ControlM Active Jobs file from any third party blocksize optimization products like CA-Optimizer, and so on.
-
exclude ControlM files and the IOA Conditions file from disk volumes under DFSMS control on which the partial release attribute has been defined.
-
exclude the Control-M Statistics file from the VSAMTUNE VSAM performance tool.
CDAM Files
Control‑M attempts to use the unused space on Compressed Data Access Method (CDAM) files. Therefore, the unused space on CDAM files must not be released by any product that releases unused space (for example, products that perform disk defragmentation).
If CDAM files are allocated on SMS volumes, these volumes must be defined with PARTIAL RELEASE=NO.
GRF File Considerations
Sometimes the GRF file is not used to its full capacity. The DASD management software installed at your site must therefore be instructed not to release any unused space from the GRF file.
HSM Migrated Files
JCL libraries, which the Control-M Monitor requires for job submission, may be migrated.
When the Control-M Monitor detects such a situation, it attempts to recall the libraries asynchronously, and temporarily bypasses processing the job. The monitor later retries processing the job, governed by the INUSE#RT and INUSE#WI parameters, where INUSE#RT is the number of retries that are attempted and INUSE#WI is the interval between retries. For details about these parameters, see the chapter about customizing INCONTROL products in the INCONTROL for z/OS Installation Guide: Customizing.
Control-M Monitor and JES
Cases of JES Malfunction
The Control‑M monitor uses JES services to receive information about the status of the jobs running in the system. If Control‑M detects a critical error in JES operation, it shuts itself down. This precaution prevents incorrect submission of jobs due to a JES malfunction. In this case, one of the following highlighted, unrollable messages is displayed on the operator console:
CTM168S Control‑M SHUTTING DOWN ‑ COMMUNICATION TO "JES" NOT AVAILABLE
CTM256S Control‑M SHUTTING DOWN ‑ COMMUNICATION TO "JES" NOT AVAILABLE
At certain times when the JES subsystem is shut down, especially when doing a hot start, Control-M does not detect that JES was shut down. To avoid incorrect submission or post-processing of jobs, deactivate Control-M prior to shutting down JES and bring it back up after JES is brought back up.
Special Considerations
Control‑M uses JES services to read the job’s output. This is how Control‑M analyzes how the job finished executing. It is important to remember the following limitations:
-
Jobs submitted by ControlM can be canceled by the operator. It is important, however, not to purge their outputs. Therefore JES commands $PJnnn, $CJnnn,P and similar commands must not be used.
-
Job output for jobs submitted by ControlM must not be released for printing except by ControlM. Therefore, do not activate MVS JES2 command $TO and similar commands on the job’s output. Ensure that output management products (such as CA-View, CA-DISPATCH) do not remove a job’s output from the spool until ControlM has first analyzed the output.
-
If JES operator command $HJ is issued for the job, the job must be released from held status before ControlM can read the job’s output. Otherwise, the job status is changed to EXECUTING (SYSOUT IN HOLD STATUS).
If the Control‑M monitor cannot read a job’s SYSOUT, the following message is displayed on the operator console:
CTM262Wn UNSUCCESSFUL ATTEMPTS TO READ JOB DATA BY SUBSYSTEM REQUEST. RETRY CONTINUES
Message CTM262W does not necessarily indicate a serious problem.
Examples
-
When a job is not run due to a JCL error, only two SYSOUT datasets exist for the job. Therefore, ControlM cannot read the expected third SYSOUT dataset, and the above message is displayed.
-
When JES is very busy, a period of up to a minute (in extreme cases) may pass between the time the job has finished executing and the time JES enables ControlM to read its SYSOUT (in other words, JES is "stuck" in the output processing stage).
-
By default, CTM262W is displayed after every 5 unsuccessful attempts by the Control-M monitor to read the job SYSOUT. If after 20 attempts the Control-M monitor still cannot read the SYSOUT, the following message is displayed:
CTMD50S READING JOB DATA BY SUBSYSTEM REQUEST FAILED AFTER n ATTEMPTS.
LAST RC rc FILE filename jobname/jobid
These two default values can be changed using installation defaults.
On the other hand, message CTM262W can indicate serious problems with the job’s sysout. The following problems can cause this message to be displayed:
-
When a job’s output is released for print (that is, the job’s output is no longer held), the job’s output must be printed or purged.
-
In a multicomputer environment, the following chain of events can occur:
-
ControlM monitor submits the job from computer A.
-
Computer A crashes (or is shut down).
-
ControlM monitor is activated on computer B and the job executes in computer B. When the job finishes executing, ControlM cannot read the job’s output, and message CTM262W is displayed.
-
This is caused by the job waiting to be "handled" by the JES of computer A.
-
This problem can be overcome by assigning the job to computer B using JES command $TJnnn,S=sysid. ControlM then reads the output, and the message is removed from the operator console.
Message CTM262W Summary
Whenever message CTM262W is displayed, wait one or two minutes. If the message continues to be displayed every few seconds for the same job, perform the following steps:
To stop the message from being displayed on the operator console while you are checking the problem, hold the job order in the Control‑M Active Environment screen (Screen 3). Release it when you have resolved the problem.
-
Issue JES2 commands $DJnnn and $LJnnn, and scan the results.
-
Check if the job’s output is in held class (the job waits for print). If it is, the ControlM monitor cannot analyze the output, so you must analyze it manually. Print or purge the output of the job. Make sure that the job order in ControlM is not HELD. Wait about a minute until the status of the job changes to DISAPPEARED. Manually add or delete prerequisite conditions according to the result of the run using the IOA Conditions/Resources screen (Screen4).
-
If the job is "waiting for output processing," check if the job (not the output) is held (by a previously issued $HJ command). If the job is held, release it using JES2 command $AJnn.
-
If the job is "waiting for output processing" by a system ID that is currently not active, try to resolve the problem using JES command $TJnn.
Stopping CMEM Before JES Is Stopped
Before shutting down the JES subsystem, deactivate CMEM using the command
P CTMCMEM
If this is not done, warning messages (about OPEN DEBs) are issued. Under some conditions, these messages may be followed by an SC03 abend in JES. This does not cause any harm, since JES has finished all of its processing at this time.
After JES is brought up, restart CMEM using the command
S CTMCMEM
Controlling z/OS UNIX System Services (USS/OpenEdition) Support
z/OS has introduced major changes and enhancements to the UNIX for z/OS (OpenEdition) environment to make it part of the MVS core. Consequently, certain applications, such as IBM FTP, were converted to use USS (z/OS UNIX System Services). As a result, IBM FTP stopped issuing allocation and deallocation messages to the JESYSMSG spool dataset.
CMEM provides a special interface to support dataset-triggering events originating from UNIX.
The Unix for z/OS interface is shared by all CMEM installations in the LPAR and is version-independent. The first CMEM subsystem to initialize loads the Unix for z/OS interface module to common storage. This interface is later used by all other CMEM subsystems. Upon startup, every CMEM subsystem registers itself with the Unix for z/OS interface. This registration mechanism enables the Unix for z/OS interface to recognize all the available CMEM subsystems and to call them when a new process is created by a fork/spawn request and associated with a BPXAS initiator (for example, when a new ftp session has started). The CMEM monitors are called one by one in the order that they registered with the interface. The first CMEM subsystem to have a matching rule for this address space will monitor it for z/OS dataset-triggering events.
When a CMEM subsystem shuts down, it removes itself from the Unix for z/OS interface. The last CMEM subsystem to shut down removes the Unix for z/OS interface from common storage.
The following sequences of messages indicate that the Unix for z/OS interface was successfully installed:
-
For the first CMEM subsystem to initialize
CME820I INITIALIZATION OF OPENEDITION SUPPORT STARTED
CME821I OPENEDITION INTERFACE MODULE SUCCESSFULLY LOADED
CME822I SUBSYSTEM REGISTERED WITH OPENEDITION INTERFACE
CME823I INITIALIZATION OF OPENEDITION SUPPORT ENDED SUCCESSFULLY
-
For any subsequent CMEM subsystem
CME820I INITIALIZATION OF OPENEDITION SUPPORT STARTED
CME822I SUBSYSTEM REGISTERED WITH OPENEDITION INTERFACE
CME823I INITIALIZATION OF OPENEDITION SUPPORT ENDED SUCCESSFULLY
The following sequences of messages indicate that the Unix for z/OS interface was successfully deactivated:
-
For any CMEM subsystem except from the last subsystem to shut down
CME830I DEACTIVATION OF OPENEDITION SUPPORT STARTED
CME831I SUBSYSTEM REMOVED FROM OPENEDITION INTERFACE
CME833I DEACTIVATION OF OPENEDITION SUPPORT ENDED SUCCESSFULLY
-
For the last CMEM subsystem to shut down
CME830I DEACTIVATION OF OPENEDITION SUPPORT STARTED
CME831I SUBSYSTEM REMOVED FROM OPENEDITION INTERFACE
CME832I OPENEDITION INTERFACE MODULE REMOVED
CME833I DEACTIVATION OF OPENEDITION SUPPORT ENDED SUCCESSFULLY
-
CMEM enables the operator to start and stop the Unix for z/OS interface using the Modify operator command. Usually, there is no need to intervene with the default processing performed by CMEM. The following operator commands are available:
F CONTROLO,STARTOE
F CONTROLO,STOPOE[,FORCE]
-
The STARTOE command instructs a CMEM subsystem to restart the Unix for z/OS interface. This includes initializing the interface (if no other subsystem has initialized it) and/or registering the current subsystem with the Unix for z/OS interface. If the STARTOE command is issued for a subsystem that is already registered with the Unix for z/OS interface, the following message is generated:
CME828I SUBSYSTEM ALREADY REGISTERED WITH OPENEDITION INTERFACE
-
The STOPOE command instructs a CMEM subsystem to deactivate the Unix for z/OS interface. This includes removing the current subsystem from the Unix for z/OS interface and removing the Unix for z/OS interface from common storage if no other subsystem is using it. If the STOPOE command is issued for a subsystem that is not registered with the Unix for z/OS interface, the following message is generated:
MTO796W SUBSYSTEM NOT REMOVED FROM OPENEDITION INTERFACE: SUBSYSTEM NOT FOUND
-
If the STOPOE command is issued when the Unix for z/OS interface is not installed, the following message is issued:
MTO795W OPENEDITION INTERFACE MODULE NOT INSTALLED
-
The STOPOE,FORCE command instructs CMEM to remove the Unix for z/OS interface from common storage even if one or more CMEM subsystems are still registered with it.
CMEM also provides Started Procedure CTOOEDSC. This procedure can be started from the console using the START command. Procedure CTOOEDSC acts like a STOPOE,FORCE command and removes the Unix for z/OS interface regardless of any registered subsystems. The STOPOE,FORCE command and procedure CTOOEDSC must be used only in case of emergency.
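For example, to run the procedure from the console (emergency use only, as noted above):
S CTOOEDSC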
CONNECT DIRECT Support (Formerly NDM Support)
Control‑M supports CONNECT DIRECT software, which creates dataset events (that is, the appearance of a dataset) on the system. CONNECT DIRECT support enables dataset events to automatically trigger Control‑M operations (adding and deleting prerequisite conditions, and/or triggering jobs) to handle these events.
The Control‑M user creates and modifies dataset event definitions by using online Event Definition and Event List screens.
CONNECT DIRECT support consists of the following phases:
-
Implementation and Customization
-
Create a Rules library with the following attributes:
LRECL=132,BLKSIZE=13200,DSORG=PO,RECFM=FB
-
Add the @@IDCNTL member to the ControlM PARM library. This member must contain a single line (in the format shown in the table below) for each user who is going to use this facility.
-
Table 113 Format of Line in the @@IDCNTL Member
Columns |
Description |
---|---|
01–08 |
User ID |
09–52 |
Name of the Rules library |
53–60 |
Member name (user ID recommended) |
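For illustration only, a single @@IDCNTL line might look as follows, where USER001 is a hypothetical user ID, the rule library name (the sample name used earlier in this chapter) starts in column 9, and the member name starts in column 53. Verify the column alignment against Table 113 when editing the member.
USER001 CTM.PROD.RULES                              USER001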
When a table or an event list table was in use during the execution of an IOADDC request, and no successfully triggered event was processed, Control-M may try to execute the request again, depending on the values set for the FORCE#RT and FORCE#WI installation parameters. For more information on the FORCE#RT and FORCE#WI installation parameters, see the customization chapter of the INCONTROL for z/OS Installation Guide: Customizing.
When wish WM3683 is set to APPLY=YES (see member IOADFLT in the IOA IOAENV library), all IOADDC/IOADDR requests verify that the requestor's USERID (ACEEUSER, taken from the USER= parameter of the JOB statement or from the user who submitted the job) matches the USERID specified in columns 1-8 of each @@IDCNTL record before checking the associated rule library/member for the corresponding dataset.
If the wish is set to NO, then the userid is not verified and the first rule library/member in which the dataset appears is used.
The setting of WM3683 does not affect IOA basic or extended security considerations but is simply used to determine which @@IDCNTL record (rule library) is to be used.
The following sample CONNECT DIRECT script calls the IOADDR dataset driver to set a different prerequisite condition upon successful or unsuccessful completion of a file transfer:
IOADDSTR PROCESS PNODE=PRIMNODE
SNODE=SECDNODE
STEP01 COPY FROM (PNODE DSN=INDSN DISP=SHR) -
TO (SNODE DSN=OUTDSN DISP=SHR)
STEP02 IF (STEP01 = 0) THEN
RUN TASK (PGM=IOADDR, -
PARM=('OUTDSN.COPY.GOOD')) SNODE
ELSE
RUN TASK (PGM=IOADDR, -
PARM=('OUTDSN.COPY.FAILED')) SNODE
EIF
-
Dataset Event Definition (REXX Procedure IOACDDR)
Dataset event definitions are created and modified by the Control-M user using the online Event Definition and Event List screens. The user must define at least one Event list. An Event list is composed of dataset names and, for each dataset name, the operations that the dataset event must trigger. Event lists are defined or modified in the Dataset Event Definition facility that is activated using the IOACDDR REXX procedure. For more information, see "REXX Procedure IOACDDR: Dataset Event Definition."
-
Automatic Operation Triggering Upon Dataset Appearance (Module IOADDR)
Once Event lists are defined, they can be used to trigger operations that are based on dataset events. The main module involved is module IOADDR.
Whenever a dataset event occurs, the IOADDR module must be invoked, and the name of the dataset must be passed to it as a standard parameter. The IOADDR module looks for the passed dataset name in the Event lists. If it finds the dataset name in a list, it initiates the corresponding action.
The IOADDR module can be called from any calling environment (job, TSO, CONNECT DIRECT, and so on). Before the module can be called, certain files must be allocated to the module.
In most cases, the calling environment allocates the required files and calls the IOADDR module directly.
If the calling environment cannot allocate the files, it cannot directly call the IOADDR module. Instead, replace calls to IOADDR with calls to the IOADDC module. In this case, the process is as follows:
The calling environment calls the IOADDC module and passes the dataset name as an argument. The IOADDC module places the dataset name in a System Logger log block that is read by Control-M.
If the IOADDC module cannot build the System Logger environment or write a System Logger log block (for example, the address space running IOADDC is not authorized), the module issues error messages to help the user troubleshoot the problem.
Control-M calls the IOADDR module and passes the dataset name as an argument. The IOADDR module checks the Dataset or Event table and triggers the corresponding event. For information on setting up the System Logger, see "CMEM—Control-M Communication" and the Control-M chapter in the INCONTROL for z/OS Installation Guide: Installing.
If the IOADDC module is executed and the System Logger interface was not enabled by the user (parameter SYSTLOGR in the IOACPRM IOA PARM member is set to 'N'), then instead of passing the dataset name argument to Control-M using the System Logger, IOADDC directly calls IOADDR to trigger the corresponding event.
To enable a single CONNECT DIRECT-caller to communicate with multiple IOA installations simultaneously, see CONNECT DIRECT Cross Installation Support.
EXEC IOADDC,PARM=('dataset-name')
The environment that calls the IOADDR/IOADDC module must have the DAPARM DD statement allocated to it. For further information see Customizing the IOA Online Environment.
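As a rough sketch only, a batch step that triggers a dataset event through IOADDC could look like the following. The job name, step names, library names, and dataset name are hypothetical; the sketch assumes a direct program call (your site may instead invoke an installation-supplied procedure), and whether a STEPLIB is required depends on your installation:
//TRIGGER  JOB ...
//CALLDDC  EXEC PGM=IOADDC,PARM=('PROD.FILE.RECEIVED')
//STEPLIB  DD DISP=SHR,DSN=ioa.load.library
//DAPARM   DD DISP=SHR,DSN=ioa.parm.library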
REXX Procedure IOACDDR: Dataset Event Definition
Event Lists are defined or modified in the Dataset Event Definition facility screens that are activated using REXX procedure IOACDDR.
The interface to the utility consists of two screens, as described in the following table:
Table 114 Screens in the Dataset Event Definition Facility
| Screen | Description |
|---|---|
| Event List | Lists all dataset events defined by the user. This screen is displayed upon entry to the utility. |
| Event Definition | Used to define or modify specific events. When modifying an existing event, only a section of the Event Definition screen, called the Event Modification screen, is displayed. |
When the utility is accessed for the first time (when no events are yet defined for the user), the Event List screen is not displayed. Instead, the Event Definition screen is displayed directly. After one or more events are defined, the Event List screen is displayed upon entry to the utility.
Your user ID is automatically displayed at the top of both screens because Control‑M checks security authorization before implementing the request.
Only one user at a time can edit a particular Event list. Other users can access that Event list in browse mode only.
The following table describes the types of operations that can be specified for events in the Event list:
Table 115 Event List Operations
| Operation | Description |
|---|---|
| JOB | A job can be ordered (or forced). |
| COND | A prerequisite condition can be added or deleted. |
Event List Screen
The Event List screen, as shown below, lists dataset events that are already defined.
EVENT LIST - M21.NDM.TAB(M21) ------------------------------- ROW 1 TO 2 OF 2
COMMAND ===> SCROLL ===> CSR
S - Select, I - Insert, D - Delete
- List Of Dsnames --------------------------- TYPE -------------------------
- M21.LIB* COND
- M21.LIB* JOB
***************************** BOTTOM OF DATA ********************************
For each defined event, the screen displays the name of the dataset and the type of operation that the event must trigger.
Only one operation can be specified for each occurrence of a dataset name in the list. However, the same dataset name can be specified many times, thereby allowing multiple operations to be specified for the same dataset event.
The following table describes the options in the Event Modification screen. Specify one of these options to the left of a dataset name, and press Enter.
Table 116 Options in the Event Modification Screen
| Option | Description |
|---|---|
| S (Select) | Display the selected event in the Event Modification screen. The event can then be modified, if desired. |
| I (Insert) | Add a new event below the selected event. The Event Definition screen is displayed with no entry. |
| D (Delete) | Delete the selected entry. A confirmation window is displayed (default: No). |
Event Definition Screen
The following code shows the Event Definition screen:
---------------------- K15 EVENT DEFINITION SCREEN ------------------------
COMMAND ===> SCROLL ===> CSR
DSNAME ===>
--------------------------- 'JOB' TYPE PARAMETERS --------------------------
SCHED. LIB. ===>
TABLE NAME ===>
JOB NAME ===>
ODATE ===> (Date/Odat) OR ===> (MM DD)
FORCED SCHED. ===> (Yes/No)
-------------------------- 'COND' TYPE PARAMETERS --------------------------
FUNCTION ===> (Add/Delete)
CONDITION NAME ===>
CONDITION DATE ===> (Date/Wdate/STAT) OR ===> (MM DD)
The screen is divided into three sections of parameters, as described in the following table:
Table 117 Event Definition Screen Sections
| Section | Description |
|---|---|
| Dataset Name | This section contains one parameter for the name of the dataset event. |
| 'JOB' TYPE PARAMETERS | This section lists parameters that are relevant only if the event must trigger the scheduling of a job. |
| 'COND' TYPE PARAMETERS | This section lists parameters that are relevant only if the event must trigger the addition or deletion of a prerequisite condition. |
To define an event and corresponding operation, fill in the DSNAME and either the JOB or the COND type parameters (only one type can be used in each definition), and press Enter.
If you selected an existing event in the Event List screen, only the screen section relating to that event (JOB or COND) is displayed and the screen is called the Event Modification screen. Modify the event as desired and press Enter.
CONNECT DIRECT Cross Installation Support
CONNECT DIRECT Cross Installation support enables a single CONNECT DIRECT address space to communicate with multiple IOA installations, including different releases of IOA, simultaneously. This support also provides a complete and seamless upgrade path from one IOA release to another with respect to the CONNECT DIRECT-IOADDC interface (see the explanation of the IOADDC routine below).
Components
The support is based on two components:
-
IOADDI - A short-running job (the procedure found in the IOA PROCLIB library) you execute prior to any CONNECT DIRECT-IOADDC interface request. This job 'registers' the IOA installation by saving installation information in a persistent system-wide control block. Among other fields, this control block contains relevant fields from the IOACPRM and CMMPLEX IOA PARM members referenced by the DAPARM DD statement.
This job must be run:
-
with different JCL statements (in the DAPARM DD statement) for every IOA installation on every system that runs CONNECT DIRECT and IOADDC.
-
only once.
-
prior to the first CONNECT DIRECT-IOADDC request.
-
whenever there is any change to the IOACPRM and CMMPLEX IOA PARM members.
-
IOADDC - A routine linking a CONNECT DIRECT address space to an IOA installation. The routine receives an input trigger and the IOA installation QNAME. The QNAME is set by the user and taken from the QNAME parameter in the IOAPARM IOA PARM member. Depending on the settings in the IOACPRM and CMMPLEX IOA PARM members referenced by the DAPARM DD statement of the relevant run of the IOADDI job, IOADDC determines whether the input trigger is passed to the IOA installation (the IOADDR module) directly or via the MVS system logger interface.
This routine accesses the control block built by IOADDI. If the user-specified QNAME is not found in the system-wide control block or the control block is not found, the request is aborted with a clear error message. If no QNAME is present in the request, module IOADDC will function as it did previously, that is, IOADDC sends the input trigger to the IOA installation (IOADDR module) based on the relevant fields from the IOACPRM and CMMPLEX IOA PARM members referenced by the DAPARM DD statement allocated to the IOADDC caller.
Tracing
To trace the IOADDI and IOADDC process, set the trace level to 72 by adding the following to the IOADDI job and to the CONNECT DIRECT address space that calls IOADDC:
//DATRCIN DD *
TRACE=72
/*
All trace messages will appear in the system log, so BMC recommends that you perform the trace infrequently or when you encounter problems.
Workload management service class support
Control-M can automatically assign jobs to workload management (WLM) service classes. The WLM SRVCLASS table, WLMSCTBL, located in the CTM PARM library, can be created by the user and will be used by Control-M as the driver of workload management service class support. If present, the table is automatically loaded at Control-M initialization. When a job is submitted by Control-M, the job is assigned the 'Job-Init' service class. If the job is submitted after DUE-IN time, the job is assigned the 'AftDueIn' service class. If the job is submitted or is running after DUE-OUT time, the job is reset to the 'AfDueOut' service class. Workload management service class support is composed of the following:
-
The WLMSCSAM sample table, which resides in the CTM PARM library. It contains usage notes, a complete description of the processing involved, and the table layout. This sample table can be used as a template when the actual WLM SRVCLASS table, WLMSCTBL, is created.
-
The NEWLMSCTBL operator command, which enables the user to have Control-M dynamically reload the table while Control-M is running.
-
The CTMX020 user exit, in which the user can make a last-moment change to the service class Control-M is about to assign to the job.
WLMSCTBL table
The user can create the WLMSCTBL table, which is the WLM SRVCLASS table in the CTM PARM library. A sample table, WLMSCSAM, is provided in the CTM PARM library, for this purpose.
The WLMSCTBL table is the main driver of workload management service class support. If present, it is loaded at Control-M initialization time and may be reloaded by a user request, by issuing the following operator command:
F CONTROLM,NEWLMSCTBL
After a successful load or reload, a positive informational message is displayed. If the table does not exist, no error or warning message is displayed. If the table exists but a syntax error is detected, clear error messages will be displayed describing the error.
WLMSCTBL table security authorization
In order for you to control or monitor access to the WLMSCTBL table separately from other Control-M libraries, a separate DAPARMM key was created in the IOADSN member of the IOA IOAENV library. This allows the user's security authorization facility to focus on this particular table alone.
Table layout
The WLMSCTBL table contains several fields, some of them optional, laid out in fixed columns. The following table outlines the uses for each column:
Table 118 WLMSCTBL table
| Column Number | Definition | Comment |
|---|---|---|
| 1 | comment indicator (*) | |
| 2 | job name mask | up to 8 characters |
| 11 | application name mask | up to 20 characters |
| 32 | from time | in HHMM format |
| 37 | to time | in HHMM format |
| 42 | Job-Init service class | up to 8 characters |
| 51 | AftDueIn service class | up to 8 characters |
| 60 | AfDueOut service class | up to 8 characters |
The following code shows an example of some of the entries in the WLMSCSAM sample table:
*0 1 3 3 4 5 6
*2 1 2 7 2 1 0
*! ! ! ! ! ! !
*! ! ! ! ! ! !
*V V V V V V V
K15JOBA TESTAPPLICATION 2200 0200 STANDARD QUICK QUICKER
N50* 0000 2359 QUICK QUICKER QUICKEST
EMERGENCYAPPL 0000 2359 QUICKEST QUICKEST QUICKEST
STAMJOB 1700 2000 STANDARD
STAMJOB 2200 0200 QUICK
* PROD* 0000 2359 STANDARD QUICKER QUICKEST
Processing and usage notes
When a job starts on time, if the job or application name (or both) appears in the table and the current time is within the time-from and time-to range, the 'Job-Init' service class is assigned to the job (if column 42 is not set to blank).
When a job starts after its DUE-IN time, if the job or application name (or both) appears in the table and the current time is within the time-from and time-to range, the 'AftDueIn' service class is assigned to the job (if column 51 is not set to blank).
When a job is executing after its DUE-OUT time, if the job or application name (or both) appears in the table and the current time is within the time-from and time-to range, the job is reset to the 'AfDueOut' service class (if column 60 is not set to blank).
If the job name mask (but not the application name mask) is present, a match is attempted on the job name mask only. If the application name mask (but not the job name mask) is present, a match is attempted on the application name mask only. If both the job name and application name masks are present, a match is attempted on both.
In addition to searching for a job name or application name match, the current time must be within the time-from to time-to range in order to be considered a 'matched entry'. In other words, there may be several entries for the same job name or application name (or both) with different time-from to time-to ranges.
The first job name or application name mask match whose time-from to time-to range includes the current time stops the search. At this first matching occurrence, Control-M does not continue looking through the table for additional matches. Based on this rule, more specific entries should be placed at the top of the table and less specific, general entries at the bottom.
This processing is only done for non-NJE jobs. For NJE jobs, since the WLM environment may be completely different on the remote system, service class setting is not performed.
If Control-M cannot tell whether the job is running on the current system or on another system in the same SPOOL, it uses the ROUTE and E (RESET) operator commands. (The ROUTE command addresses all systems in the sysplex.) An informational message is sent to the IOA LOG to indicate the service class setting.
If Control-M can determine that the job is running on the current system, Control-M issues the IWMRESET WLM macro. If successful, a message is sent to IOA LOG to indicate the service class setting. If WLM returns with an error response, Control-M sends an error message to IOA LOG describing the service class and the error return and reason code.
Job (not STC) names are unique within this SPOOL, so using the ASID parameter on the E operator command is not necessary.
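For reference, the MVS RESET (E) command used for this purpose takes the following general form; CTMJOB01 and QUICK are placeholder values, and the ROUTE form applies when the job may be running on another system in the sysplex:
E CTMJOB01,SRVCLASS=QUICK
ROUTE *ALL,E CTMJOB01,SRVCLASS=QUICK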
CTMX020 exit processing
After Control-M determines which service class to assign to the job, but immediately before the service class is about to be set or reset (even if Control-M will not set any service class), Control-M calls user exit 20. The user exit is passed the following information:
-
8-character function code (JOBINIT, AFTDUEIN, or AFDUEOUT)
-
pointer to job name
-
pointer to MCT
-
pointer to MIT
-
pointer to start of the internal WLMSCTBL table (if the table exists)
-
service class to be set or blanks
-
pointer to the matched internal WLMSCTBL table entry (if the table exists and if a table entry actually matched the current job)
The user exit may change the service class. On return to Control-M, the service class is checked: if it is non-blank, Control-M issues the appropriate E operator command or WLM macro to set or reset the service class; if it is blank, no service class is set or reset.
Control-M VM Support
Most medium to large computer centers maintain a complex production environment based on multiple operating systems and platforms. A typical, large computer center can employ MVS/ESA, VM, VAX/VMS, AS/400, Unix machines, PCs, and so on.
This section details how Control‑M and IOA can easily be implemented in order to automate control of VM operations through standard MVS and VM operating system functions and specific Control‑M features.
One of the most popular combinations at these computer centers is the coupling of MVS and VM. These computer centers require integrated production control capabilities for both operating systems. One aspect of integrated production control is the capability to automate processes in the VM environment. For example, VM commands or EXECs must automatically be executed under VM at certain times, or according to events that occur in either MVS or VM. Usually, these VM commands or EXECs must be executed in certain sequences, and the results must be checked to ensure that the commands or sequences have completed successfully. Another aspect of integrated production control is the synchronization of processes in and between MVS and VM. For example, an MVS-based application may require an input file to be received from VM before the application can proceed.
There is no single answer or solution to every problem. Much depends on the hardware and software configurations implemented at your site. Some of the solutions described in this section may not be appropriate for your site. Therefore, for some problems, more than one solution or approach has been presented. Each site can determine which solution is most suitable for its environment.
VM Configurations
To automate processes in the VM environment and synchronize MVS and VM processes, the VM configuration must facilitate appropriate communication with MVS. Three popular configurations exist for running MVS and VM operating systems at the data center:
-
MVS running under VM
-
MVS and VM running on separate computers
-
MVS and VM running under LPAR
The configuration implemented at your site determines which techniques are applicable. The following topics describe these configurations.
MVS Running Under VM
The VM system runs the Control Program (CP) together with a number of Conversational Monitor System (CMS) virtual machines. In addition, an MVS virtual machine is activated that operates Control‑M.
When MVS is running under VM, the following options are available for transferring data between the MVS and VM operating systems:
-
A VM CP command can be issued in the MVS machine, using the DIAGNOSE machine command. This allows the MVS machine to issue commands to be processed by VM.
-
If an RSCS machine is operated under VM, then RJE or NJE connections can be established between VM and MVS. This allows jobs and sysouts to be transferred between MVS and VM.
-
If a VM/VTAM machine is operated under VM, SNA connections can be established between MVS and VM. This allows a VM terminal user to invoke the IOA Online interface.
-
If a disk or minidisk is shared between MVS and VM, a PS or PO file created under MVS can be read from VM CMS.
-
If a card reader or punch is defined in MVS, files can be passed between MVS and VM.
The following figure shows MVS Running Under VM:
Figure 28 MVS Running Under VM
MVS and VM Running on Separate Computers
In this configuration, the VM and MVS systems run on separate computers. However, some level of communication exists between the two computers.
The options available for transferring data between the MVS and VM operating systems running on separate computers include the following:
-
If an RSCS machine is operated under VM, then RJE or NJE connections can be established between VM and MVS. This allows jobs and sysouts to be transferred between MVS and VM.
-
If a VM/VTAM machine is operated under VM, SNA connections can be established between MVS and VM. This allows a VM terminal user to invoke the IOA Online interface.
-
If a disk is shared between MVS and VM, a PS or PO file created under MVS can be read under VM.
MVS and VM Running Under LPAR
The LPAR (logical partitioning) feature in an IBM mainframe allows the installation to (optionally) divide the mainframe into partitions and run multiple operating systems in parallel.
For our purposes, each partition can be regarded as a standalone mainframe computer. Therefore, when partitioning is used, we can regard the processor complex as a type of multi-CPU configuration.
The following figure shows MVS and VM Running Under LPAR:
Figure 29 MVS and VM Running Under LPAR
If, as in the diagram above, one partition runs MVS and the other partition runs VM, the previous discussion of VM and MVS running on separate machines is also applicable.
The above discussion of the LPAR (PR/SM) feature also applies to users of MDF (Multi-Domain Facility) from AMDAHL, MLPF (Multiple Logical Processor Facility) from HDS (Hitachi Data Systems), and any other supported CPU with hardware partitioning capabilities.
Invoking the IOA Online Facility From a VM Terminal
If appropriate interactive communication connections are set up between the VM and MVS operating systems, a VM terminal user can log onto an MVS VTAM application that supports the IOA Online facility (for example, TSO, CICS, IMS/DC, IOA VTAM Monitor, IDMS/DC, ROSCOE and COM-PLETE).
All examples in this document assume the use of an IOA VTAM monitor. However, each site must determine which of the previously mentioned MVS VTAM applications is most suitable.
Once the VM user has entered the IOA Online facility, all tracking and control options of Control‑M and IOA are available to the user. For example, the user can add or delete prerequisite conditions, define a new job schedule, order a job, view job run results, hold a job, and so on.
Several methods exist for setting up interactive communication connections, depending on the software and hardware configurations used at each site:
-
dialing into the MVS machine (MVS under VM only)
-
using VM/VTAM
-
using the IOA Logical Terminal Emulator
-
using a session handling product
Dialing Into the MVS Machine (MVS Under VM Only)
A VM terminal user can dial directly into the MVS machine (at a predefined address), receive the MVS VTAM logon screen, and then log onto the IOA VTAM monitor running under MVS.
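For example, assuming the MVS virtual machine is defined with the hypothetical user ID MVSPROD, the VM terminal user could enter the CP DIAL command:
DIAL MVSPROD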
Using VM/VTAM
If VM/VTAM is employed under VM, a VM terminal user can dial into the VM/VTAM machine, and then establish (from the VM/VTAM screen) a cross-domain session with the IOA VTAM monitor running under MVS.
Using IOA’s Logical Terminal Emulator
The IOA Logical Terminal Emulator can be employed in conjunction with VM/VTAM. This facility allows the user to establish a VTAM session without leaving CMS. A session can be established in this way with the IOA VTAM monitor running under MVS. At the end of the session, the user remains at the VM/CMS machine.
In addition, the user’s sign-on procedure "can be taught" this option and, on future invocations, can automatically repeat the option. For additional information on this subject, see the IOA chapter in the INCONTROL for z/OS Installation Guide: Installing.
Using a Session Handling Product
There are several session handling products available for the VM environment, such as VM/Pass-through (PVM), Tubes and Vterm.
If such a product is employed under VM, a VM terminal user can use that product to log on the MVS system, and then log on to the IOA VTAM monitor running under MVS.
File Transfer From MVS to VM
General
This chapter demonstrates several techniques for sending a SYSOUT or file to VM. Some of these techniques utilize Control‑M functions. Others utilize standard functions of MVS/JES, VM, and/or other products.
All of the following JCL examples assume that
-
an NJE connection exists between the MVS and VM machines
-
the VM node ID is VMPROD
-
the sysout or report is to be routed to a VM machine (user) named USER1
-
the sysout class is A
Sysouts or files that are sent to the VM user using NJE/RSCS are placed in the VM user’s Reader queue. The VM user periodically checks the user’s Reader queue using the RL (RDRLIST) command. If a file from MVS is found, the user can, for example, browse the file (using the PEEK command) or move it to the A minidisk (using the RECEIVE command).
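As an illustration, assuming the file arrives as reader spool file 123 and is to be kept as REPORT LISTING on the A minidisk (all hypothetical values), the sequence of CMS commands might be:
RDRLIST
PEEK 123
RECEIVE 123 REPORT LISTING A
RDRLIST lists the reader files, PEEK browses spool file 123 without receiving it, and RECEIVE copies the file to the A minidisk.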
Routing the Production Job’s Report to VM using JCL
A specific report created by a Control‑M production job can be routed using NJE services to a VM machine (user), simply by defining the destination of the report in the job’s JCL. The following examples demonstrate how to implement this using standard MVS and JES statements:
-
Route the report to a certain VM user using parameter DEST=(node,user) in the appropriate DD statement:
//REPORT DD SYSOUT=A,DEST=(VMPROD,USER1)
-
Route the report to a specified VM user using an MVS OUTPUT statement and the report's DD statement referencing that MVS OUTPUT statement:
//JOB1 JOB ...
//OUTREP OUTPUT DEST=VMPROD.USER1
//STEP1 EXEC PGM=...
//REPORT DD SYSOUT=A,OUTPUT=*.OUTREP
Route printed sysout files of the job to a specified VM user using a JES2 /*ROUTE PRINT statement:
//JOB1 JOB ...
/*ROUTE PRINT VMPROD.USER1
//STEP1 EXEC PGM=...
//REPORT DD SYSOUT=A
Punched sysout files of the job can be sent to VM using the JES2 /*ROUTE PUNCH statement, as shown in the sketch following this example. A punched SYSOUT file consists of 80-character records. Most sites define JES output CLASS B as a punch class.
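The following sketch shows the punched-output variation; the DD name PUNCHOUT is arbitrary, and class B is used only because it is a common punch class:
//JOB1 JOB ...
/*ROUTE PUNCH VMPROD.USER1
//STEP1 EXEC PGM=...
//PUNCHOUT DD SYSOUT=B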
The following figure shows Routing the Production Job’s Report to VM using JCL:
Figure 30 Routing the Production Job’s Report to VM using JCL
Routing Production Job SYSOUT to VM using JCL
When a Control‑M production job has finished executing, the Control‑M monitor requires that the job’s first three SYSOUT files (SYSDATA) reside in the MVS spool in held mode, in order to analyze how the job has completed. Once the SYSDATA has been analyzed, it can be purged, released for printing, and so on.
Sometimes, the SYSDATA of a Control‑M production job may require routing to a VM user. This can be accomplished by specifying two MVS output statements in the job’s JCL.
These two output statements cause the creation of two copies of the SYSDATA. One copy is assigned standard attributes and can be analyzed by Control‑M. The other copy is directed to the VM machine named USER1.
//jobname JOB ...
//COPY1 OUTPUT JESDS=ALL,CLASS=*
//COPY2 OUTPUT JESDS=ALL,CLASS=A,DEST=VMPROD.USER1
.....
Routing Production Job SYSOUT to VM using Control-M SYSOUT Functions
Control‑M SYSOUT functions can route the production job's SYSOUT (or parts of it) to a VM node. Only the VM node name can be specified (no user ID can be assigned within the destination name). However, a JES2 destination ID (defined in JES2 as nodeid.userid) can be specified. As a result, the output is routed to a specific VM user operating in the VM node.
Control‑M can then be used to route selected outputs of a production job (for example, the whole SYSOUT, one or more reports, messages, files) to a specific VM machine (user).
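For example, a JES2 initialization statement such as the following (VMUSER1 is an arbitrary, illustrative name) would define a symbolic destination that resolves to a specific VM user; that destination ID can then be specified in the Control-M SYSOUT function:
DESTID(VMUSER1) DEST=VMPROD.USER1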
Sending a File to VM in the Form of a Sysout
Control‑M can be used to trigger file transfer to a VM machine. Perhaps the easiest way to perform the file transfer is for Control‑M to schedule a job to run under MVS and produce a SYSOUT file with the appropriate destination.
In the following example, the data to be sent are 80-byte records. In order to print larger data records, use DD statement SYSUT1 to reference a sequential input file that contains these larger data records.
The following figure shows sending a file to VM as a Sysout:
Figure 31 Sending a File to VM as a Sysout
//jobname JOB ...
//PRINT EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
//SYSUT2 DD SYSOUT=A,DEST=(VMPROD,USER1)
//SYSUT1 DD *
data to be sent
data to be sent
data to be sent
/*
//
File Transfer Product Information
For information about this topic, see File Transfer Products.
Utilize a Shared Disk Between MVS and VM
If a disk or minidisk is shared between MVS and VM, a PS or PO file created under MVS can be read from VM CMS.
Sometimes there is no need to transfer the PO/PS file to VM. Perhaps just a notification is needed to inform the user that the file created under MVS is now available. For a description of various user notification options, see IOA—An Integrated Solution.
For a description of the various VM CMS commands that process and query MVS datasets such as MOVEFILE or STATE, see the relevant IBM manual.
MVS catalog services cannot be accessed in a standard way from VM. Therefore, to read the file, the VM CMS user must know on which disk the file resides.
Triggering an Event in Control-M by a VM User
The following topics describe two techniques for triggering an event in Control‑M. Triggered events can cause Control‑M to run jobs in the z/OS environment, stop jobs from being submitted, order VM-generated jobs into Control‑M, and so on.
Submitting a Job to MVS to Execute a Control-M Utility
A VM user can communicate with Control‑M by submitting a job that invokes an appropriate utility.
For example
-
A VM user can submit a job to MVS that invok