Control-M/Server Cluster Configuration
You can configure Control-M/Server in a cluster environment on UNIX and Windows:
- Control-M/Server Windows Cluster Configuration: The Control-M/Server Windows installation is cluster aware. All components defined as part of the installation are defined as cluster resources and are managed by the cluster manager.
- Control-M/Server UNIX Cluster Configuration: The Control-M/Server UNIX installation is not cluster aware. You need to configure the components using scripts.
Control-M/Server Windows Cluster Configuration
The Control-M/Server installation on Windows automatically recognizes the cluster environment and prompts you to choose a Cluster group installation (cluster aware) or a Local installation (non-cluster aware). To install, see Control-M/Server Installation.
Automatic installation and automatic upgrade of Control-M/Server are not supported for Microsoft Windows cluster environments.
The Cluster Group installation defines the following components as resources in the cluster manager, in online status:
- Control-M/Server
- Control-M PostgreSQL (if installed with PostgreSQL)
- Control-M/Server Configuration Agent
The Control-M/Server components are automatically defined as resources in the cluster manager. If a component shuts down, you must start it again from the cluster manager.
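For example, on a failover cluster managed through the FailoverClusters PowerShell module, a stopped component can be brought back online with commands such as the following (a sketch; the group and resource names are examples, so use the names that appear in your cluster manager):
# List the resources in the Control-M/Server cluster group (group name is an example)
Get-ClusterGroup -Name "Control-M Server" | Get-ClusterResource
# Bring a stopped component back online (resource name is an example)
Start-ClusterResource -Name "Control-M/Server Configuration Agent"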
Configuring Control-M/Server as a Cluster Resource on Windows
This procedure describes how to configure Control-M/Server in a cluster-aware environment on Windows.
Begin
- Install Control-M/Server on the primary host, as described in Installing Control-M/Server on Windows.
The Control-M/Server, Control-M/Server Configuration Agent, and PostgreSQL for Server900 (if installed) cluster resource components are installed and online.
- From the cluster manager on the primary node, run the Move Group command to move the Control-M/Server cluster group to the secondary node.
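For example, with the FailoverClusters PowerShell module, the move can be performed as follows (a sketch; the group and node names are examples):
# Move the Control-M/Server cluster group to the secondary node
Move-ClusterGroup -Name "Control-M Server" -Node "NODE2"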
- Verify that the Disk, IP address, and Network Name resources are online on the secondary node.
- Do one of the following:
  - Verify that the DVD drive is available to both nodes and run Setup_files\3RD\setup_ctm.bat.
  - From the DVD, copy the Setup_files\3RD\setup_ctm.bat file to the secondary node and run setup_ctm.bat.
- Verify that the secondary node is online in the Windows Cluster Administrator window.
- Restart all the cluster nodes.
You have now completed the configuration of Control-M/Server with clusters on Windows.
Control-M/Server UNIX Cluster Configuration
The Control-M/Server UNIX installation is not cluster aware. Therefore, you need to configure the UNIX cluster manager to manage Control-M/Server components.
Before you can configure Control-M/Server in a cluster environment on UNIX, you must do the following:
- Create two user accounts, as described in Configuring a User Account on UNIX.
- Set the BMC_HOST_INSTALL environment variable to the virtual cluster hostname designated for the Control-M/Server resource group prior to installation, as shown in the example after this list. For information on setting variables, see Setting Environment Variables in UNIX.
- Install Control-M/Server, as described in Installing Control-M/Server on UNIX.
- To configure Control-M/Server as a cluster resource, see Configuring Control-M/Server as a Cluster Resource on UNIX.
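For example, if the virtual cluster hostname designated for the Control-M/Server resource group is ctm-virtual (an assumed name), the variable can be set in a Bourne-compatible shell before running the installation:
# Point the installation at the virtual cluster hostname (example name)
BMC_HOST_INSTALL=ctm-virtual
export BMC_HOST_INSTALL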
Configuring Control-M/Server as a Cluster Resource on UNIX
This procedure describes how to configure Control-M/Server as a cluster resource on UNIX, which enables the cluster manager to start, check, and stop Control-M/Server processes.
Begin
- From the CCM, set the Control-M/Server component to Ignore, as described in Ignoring a component.
- On the active host, create the following scripts in a directory that is not on the shared disk:
  - start_all: startdb && start_ca && start_ctm && start_ag -u `whoami` -p ALL -s
  - show_all: show_ca && shctm && shagent
  - shut_all: shut_ca && shut_ctm && shut_ag -u `whoami` -p ALL && shutdb
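A minimal sketch of these scripts as separate executable files, assuming /opt/ctm_cluster as the (example) local directory and the Control-M/Server utilities available in the Control-M/Server user's environment:
#!/bin/sh
# /opt/ctm_cluster/start_all - start the database, the Configuration Agent,
# the Control-M/Server processes, and the agents; stop at the first failure.
startdb && start_ca && start_ctm && start_ag -u `whoami` -p ALL -s

#!/bin/sh
# /opt/ctm_cluster/show_all - intended to exit with 0 only while the Configuration
# Agent, the Control-M/Server processes, and the agents all report as up.
show_ca && shctm && shagent

#!/bin/sh
# /opt/ctm_cluster/shut_all - stop the components in reverse order,
# shutting down the database last.
shut_ca && shut_ctm && shut_ag -u `whoami` -p ALL && shutdb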
- Copy the scripts to the other host and allow access to the Control-M/Server user.
- On the active host, create the following scripts in a directory that is not on the shared disk:
  - ctms_start_all: sudo -u <ctmuser> -i start_all
  - ctms_show_all:
    sudo -u <ctmuser> -i show_all
    ret=$?
    if [ $ret -eq 0 ]
    then
      exit 0
    fi
    exit 100
  - ctms_shut_all: sudo -u <ctmuser> -i shut_all
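Before handing these wrappers to the cluster manager, it can help to run them manually as the cluster user (typically root) on the active host and confirm the exit codes the probe will report; the directory and file names below are examples:
# Start everything, then check the probe (expect exit code 0 while components are up)
/opt/ctm_cluster/ctms_start_all.sh
/opt/ctm_cluster/ctms_show_all.sh; echo "probe exit code: $?"
# Shut everything down, then check the probe again (expect exit code 100)
/opt/ctm_cluster/ctms_shut_all.sh
/opt/ctm_cluster/ctms_show_all.sh; echo "probe exit code: $?"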
- Copy the scripts to the other host and allow access to the cluster user.
- Verify that the resource group and the logical hostname resource are already defined on the cluster.
- From the cluster manager, run the following command to create a Control-M/Server resource, register it with the required resource group, and allocate it to a virtual host:
Contact your system administrator for the exact commands required on your platform and version of the cluster software.
Oracle Solaris Cluster example:
sudo /usr/cluster/bin/clrs create -g <resource_group> -t SUNW.gds:6 \
  -p Scalable=false \
  -p Start_timeout=120 -p Stop_timeout=300 -p Probe_timeout=20 \
  -p Start_command="/<host_private_directory>/ctms_start_all.sh" \
  -p Stop_command="/<host_private_directory>/ctms_shut_all.sh" \
  -p Probe_command="/<host_private_directory>/ctms_show_all.sh" \
  -p Child_mon_level=-1 -p Port_list="2369/tcp" \
  -p Resource_dependencies=<logical_hostname> \
  -p Failover_enabled=TRUE -p Stop_signal=9 \
  <Control-M/Server_resource>
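After the resource is created, its state can be checked before restarting the nodes, for example (a sketch, assuming Oracle Solaris Cluster; the resource name is the one used in the create command):
# Show the status of the new Control-M/Server resource
sudo /usr/cluster/bin/clrs status <Control-M/Server_resource>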
- Restart all the cluster nodes.