These are exciting times, marked by a transformative change in the way modern applications are rolled out. The transition to Cloud and related technologies is adding considerable value to the process. If you are utilizing solutions like SiteMinder SSO or CA Access Gateway, having access to real-time metrics is invaluable. In the following article, we’ll explore the inherent features of the CA SSO container form factor that facilitate immediate metrics generation, compatible with platforms like Grafana.
Our lab cluster is an on-premises Red Hat OpenShift Kubernetes cluster running the CA SSO container solution, available as part of the Broadcom Validate Beta Program. The deployment of the SSO elements, such as the policy servers and Access Gateway, is handled through a Helm package provided by Broadcom. Within our existing OpenShift environment, a Prometheus metrics server is configured to gather time-series data. By default, the tracking of user workload metrics isn't activated in OpenShift and must be enabled manually by setting 'enableUserWorkload' to 'true'; create or modify the existing cluster monitoring ConfigMap to activate this setting.
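A minimal sketch of that ConfigMap, using the standard OpenShift monitoring names (merge the enableUserWorkload key into any existing config.yaml data rather than overwriting it):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true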
Grafana is also deployed for visualization and connected to the Prometheus data source to create metrics visuals. The Grafana data source can be created using the YAML provided below. Note that creating the Grafana data source requires the Prometheus URL as well as an authorization token to access stored metrics; this token can be extracted from the cluster using the commands below.
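As a sketch, assuming the default OpenShift monitoring service account and Thanos querier service names (adjust for your cluster), the token may be pulled with the oc CLI:

# Newer oc releases (4.11+):
oc create token prometheus-k8s -n openshift-monitoring
# Older oc releases:
oc sa get-token prometheus-k8s -n openshift-monitoring

And an illustrative Grafana provisioning data source, with the token injected as an Authorization header:

apiVersion: 1
datasources:
  - name: OpenShift-Prometheus
    type: prometheus
    access: proxy
    url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
    jsonData:
      httpHeaderName1: Authorization
      tlsSkipVerify: true
    secureJsonData:
      httpHeaderValue1: Bearer <token-from-above>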
Also ensure that a role binding exists to allow the service account (prometheus-k8s) in the openshift-monitoring namespace access to the role which allows monitoring of resources in the target (smdev) namespace.
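An illustrative example of such a role binding, using the built-in 'view' ClusterRole (your site may prefer a narrower custom role):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s-view
  namespace: smdev
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: openshift-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view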
Once the CA SSO helm chart is installed with metrics enabled, we must also ensure that the namespace in which CA SSO is deployed has the openshift.io/cluster-monitoring label set to true.
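For example, with the oc CLI (the namespace name is from our lab; substitute your own):

oc label namespace smdev openshift.io/cluster-monitoring=true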
We are all set now and should see the metrics getting populated using the OpenShift console (Observe -> Metrics menu item) as well as available for Grafana’s consumption.
In the era of next-generation application delivery, integrated monitoring and observability features now come standard, offering considerable advantages, particularly for operations and management teams seeking clear insights into usage and solution value. This heightened value is especially notable in deployments via container platforms. If you’re on the path to modernization and are looking to speed up your initiatives, feel free to reach out. We’re committed to your success and are keen to partner with you.
The recent DNS challenges for a large organization that impacted their worldwide customers bring to mind a project we completed this year, a global password reset redundancy solution.
We worked with a client who desired to manage unplanned WAN outages to their five (5) data centers for three (3) independent MS Active Directory domains, with integration to various on-prem applications/endpoints. The business requirement was self-service password sync, where the users' password change process is initiated/managed by the two (2) different MS Active Directory Password Policies.
Without the WAN outage requirement, any IAM/IAG solution may manage this request within a single data center. A reverse password sync agent process is enabled on all writable MS Active Directory domain controllers (DC). All the world-wide MS ADS domain controllers would communicate to the single data center to validate and resend this password change to all of the users’ managed endpoint/application accounts, e.g. SAP, Mainframe (ACF2/RACF/TSS), AS/400, Unix, SaaS, Database, LDAP, Certs, etc.
With the WAN outage requirement, however, a queue or components must be deployed/enabled at each global data center, so that password changes can sync locally to avoid work stoppage and be async-queued to avoid out-of-sync passwords for the endpoints/applications that may reside in other data centers.
We were able to work with the client to determine that their current IAM/IAG solution had the means to meet this requirement, but we wished to confirm there were no issues with WAN latency and the async process. The WAN latency was measured at less than 300 msec between remote data centers on opposite sides of the globe; the measured latency reflects the global distance and any intermediate devices that the network traffic passes through.
To review the solution's ability to meet the latency requirements, we introduced a test environment to emulate the global latency for deployment use-cases, change-password use-cases, and standard CrUD use-cases. VMware Workstation includes a feature that allows emulation of degraded network traffic, which proved a very useful planning/validation tool to lower rollback risk during production deployment.
VMWare Workstation Network Adapter Advance Settings for WAN latency emulation
The solution used for the Global Password Reset solution was Symantec Identity Suite Virtual Appliance r14.3cp2. This solution has many tiers, where select components may be globally deployed and others may not.
We avoided any changes to the J2EE tier (Wildfly) or the database for our architecture, as these components are not supported for WAN latency by the vendor. Note: We have worked with other clients that have deployments at two (2) remote data centers within 1000 km, and they have reported minimal challenges for these tiers.
We focused our efforts on the Provisioning Tier and Connector Tier. The Provisioning Tier consists of the Provisioning Server and Provisioning Directory.
The Provisioning Server has no shared knowledge with other Provisioning Servers. The Provisioning Directory (Symantec Directory) is where the provisioning data may be set up in a multi-write peer model. Symantec Directory is a proper X.500 directory with high redundancy and is designed to manage WAN latency between remote data centers and recovery after an outage. See example provided below.
The Connector Tier consists of the Java Connector Server and C++ Connector Server, which may be deployed on MS Windows as an independent component. There is no shared knowledge between Connector Servers, which works in our favor.
Requirement:
Three (3) independent MS Active Directory domains in five (5) remote data centers need to allow self-service password change and allow local password sync during a WAN outage. Password changes are driven by MS ADS Password Policies (every N days). The IME Password Policy for the IAG/IAM solution is not enabled, IME authentication is redirected to an ADS domain, and the IMPS IM Callback feature is disabled.
Below is an image that outlines the topology for five (5) global data centers in AMER, EMEA, and APAC.
The flow diagram below captures the password change use-case (self-service or delegated), the expected data flow to the user’s managed endpoints/applications, and the eventual peer sync of the MS Active Directory domain local to the user.
Observation(s):
The standalone solution of Symantec IAG/IAM has no expected challenges with configurations, but the Virtual Appliance offers pre-canned configurations that may impact a WAN deployment.
During this project, we identified three (3) challenges using the virtual appliance.
Two (2) items needed the assistance of the Broadcom Support and Engineering teams, who worked with us to address deployment configuration challenges with the "check_cluster_clock_sync -v" process, which incorrectly accumulated time delays between servers instead of resetting the value to zero between tests.
Why is this important? The "check_cluster_clock_sync" alias is used during auto-deployment of vApp nodes; if the time reported between servers is > 15 seconds, replication may fail. This time-check issue was addressed with a hotfix, and after the hotfix was deployed, all clock differences were resolved.
The second challenge was a deployment issue with the IMPS component's embedded "registry files/folders". The prior embedded copy process used standard "scp"; with WAN latency, an scp copy operation may take more than 30 seconds, and our testing with the Virtual Appliance showed that a simple copy of multiple small files took over two (2) minutes. After review with CA Support/Engineering, they provided an updated copy process using "rsync" that speeds up copy performance by >100x. Before this update, the provisioning tier deployment would fail and a partial rollback would occur.
The last challenge we identified was using the Symantec Directory's embedded features to manage WAN latency via multi-write HUB groups. The Virtual Appliance cannot automatically manage this feature when it is enabled in the knowledge files of the provisioning data DSAs; Symantec Directory will fail to start after auto-deployment.
Fortunately, on the Virtual appliance, we have full access to the ‘dsa’ service ID and can modify these knowledge files before/after deployment. Suppose we wish to roll back or add a new Provisioning Server Virtual Appliance. In that case, we must disable the multi-write HUB group configuration temporarily, e.g. comment out the configuration parameter and re-init the DATA DSAs.
Six (6) Steps for Global Password Reset Solution Deployment
We were able to refine our list of steps for deployment using pre-built knowledge files, deploying the vApp nodes as blank slates with the base components of Provisioning Server (PS) and Provisioning Directory, plus a remote MS Windows server for the Connector Server (JCS/CCS).
Step 1: Update the Symantec Directory DATA DSA knowledge configuration files to use the multiple-group HUB model. Note that the multi-write group configuration is enabled within the DATA DSAs' *.dxc files. One Directory server in each data center will be defined as a "HUB".
To assist with this configuration effort, we leveraged a series of bash shell scripts that could be pasted into multiple putty/ssh sessions on each vApp, using a "sed" command to replace the "HUB" placeholder string.
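A hypothetical one-liner of that approach; the placeholder string, the knowledge-file location, and the exact multi-write group token vary by site and Directory release:

# Swap the "HUB" placeholder for the real hub token in each DATA DSA knowledge file
sed -i 's/HUB/<hub-group-token>/g' $DXHOME/config/knowledge/*.dxc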
After the HUB model is enabled (stop/start the DATA DSAs), confirm that the delayed WAN latency poses no challenge to the Symantec Directory sync processes. By monitoring the Symantec Directory logs during replication, we can see the sync operations captured with delays > 1 msec between data centers AMER1 and APAC1.
Step 2: Update IMPS configurations to avoid delays with Global Password Reset solution.
Note for this architecture, we do not use external IME Password Policies. We ensure that each AD endpoint has the checkbox enabled for “Password synchronization agent is installed” & each Global User (GU) has “Enable Password Synchronization Agent” checkbox enabled to prevent data looping. To ensure this GU attribute is always enabled, we updated an attribute under “Create Users Default Attributes”.
Step 3a: Update the Connector Tier (CCS Component)
Ensure that the MS Windows Environmental variables for the CCS connector are defined for Failover (ADS_FAILOVER) and Retry (ADS_RETRY).
Step 3b: Update the CCS DNS knowledge file of ADS DCs hostnames.
Important note: Avoid using the "Refresh DC List" feature within the IMPS GUI for the ADS endpoint. If this feature is used, a "merge" is processed between the local CCS DNS file contents and what is defined within the IMPS GUI refresh process; if we wish to manage the redirection to local MS ADS Domain Controllers, we need to control this behavior. If the refresh does occur, we can clean the extra entries out of the Symantec Directory. The only negative aspect is that a local password change may attempt to communicate with one of the remote MS ADS Domain Controllers outside the local data center; during a WAN outage, a user would notice a delay during the password change event while the CCS connector timed out connections until it reached the local MS ADS DC.
Step 3c: CCS ADS Failover
If using SSL over TCP 636, confirm the ADS Domain Root Certificate is deployed to the MS Windows server where the CCS service runs. If using SASL over TCP 389 (if available), no additional effort is required.
If using SSL over TCP 636, use the MS tool certlm.msc to export the public root CA Certificate for this ADS Domain. Export to base64 format for import to the MS Windows host (if not already part of the ADS Domain) with the same MS tool certlm.msc.
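If a command-line alternative to certlm.msc is preferred, certutil can import the exported base64 certificate into the local machine Root store (the file name is illustrative; run from an elevated prompt):

certutil -addstore -f Root ads_domain_root_ca.cer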
Step 4a: Update the Connector Tier for the JCS component.
Add the stabilization parameter “maxWait” to the JCS/CCS configuration file. Recommend 10-30 seconds.
Step 4b: Update JCS registration to the IMPS Tier
You may use the Virtual Appliance Console, but it incurs a delay when pulling the list if any JCS connector is down at the time of the check/submission. If we use the Connector Xpress UI, we can accomplish the same process much faster, with additional flexibility for routing rules to the exact MS ADS endpoints in the local data center.
Step 4c: Observe the IMPS routing to JCS via etatrans log during any transaction.
If any JCS service is unavailable (TCP 20411), then the routing rules process will report a value of 999.00, instead of a low value of 0.00-1.00.
Step 5: Update the Remote Password Change Agent (DLL) on MS ADS Domain Controllers (writable)
Step 6a: Validation of Self-Service Password Change to selected MS ADS Domain Controller.
Using various MS Active Directory processes, we can emulate a delegated or self-service password change early during the configuration cycle, to confirm deployment is correct. The below example uses MS Powershell to select a writable MS ADS Domain Controller to update a user’s password. We can then monitor the logs at all tiers for completion of this password change event.
A view of the password change event from the Reverse Password Sync Agent log file on the exact MS Domain Controller.
Step 6b: Validation of password change event via CCS ADS Log.
Step 6c: Validation of password change event via IMPS etatrans log
Note: The below screenshot showcases an alias/function to assist with monitoring the etatrans logs on the Virtual Appliance.
The below screenshot showcases using ldapsearch to check the before/after timestamps of a password change event within the MS Active Directory domain.
We hope these notes are of some value to your business and projects.
Appendix
Using the MS Windows Server for CCS Server
Get current status of AD account on select DC server before Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with current password.)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password05" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Change AD account's password via Powershell:
PowerShell Example:
Set-ADAccountPassword -Identity "idmpwtest" -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "Password06" -Force) -Server dc2016.exchange.lab
Get current status of AD account on select DC server after Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with NEW password)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password06" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Using the Provisioning Server for password change event
Get current status of AD account on select DC server before Password Change:
LDAPSearch Example: (From IMPS server - as user with current password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password05 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
Change AD account's password via ldapmodify & base64 conversion process:
LDAPModify Example:
BASE64PWD=`echo -n '"Password06"' | iconv -f utf8 -t utf16le | base64 -w 0`
ADSHOST='192.168.242.154'
ADSUSERDN='CN=Administrator,CN=Users,DC=exchange2012,DC=lab'
ADSPWD='Password01!'
ldapmodify -v -a -H ldaps://$ADSHOST:636 -D "$ADSUSERDN" -w "$ADSPWD" << EOF
dn: CN=idmpwtest,OU=People,dc=exchange2012,dc=lab
changetype: modify
replace: unicodePwd
unicodePwd::$BASE64PWD
EOF
Get current status of AD account on select DC server after Password Change:
LDAPSearch Example: (From IMPS server - with user's account and new password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password06 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
On Linux OS, there are two (2) device drivers that provide entropy "noise" for components that require encryption: the /dev/random and /dev/urandom device drivers. /dev/random is a "blocking" device driver: when the "noise" is low, any component that relies on this driver will stall until enough entropy is returned. Entropy is measured on a range of 0-4096, where a value over 1000 is excellent; any value in the double or single digits will delay the OS and its solutions. The root cause of these delays is not evident during troubleshooting, and typically there are no warning or error messages related to entropy.
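The current pool size can be checked at any time from the proc filesystem:

# Available entropy (0-4096); values above 1000 are healthy
cat /proc/sys/kernel/random/entropy_avail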
The Symantec Identity Suite solution, when deployed on Linux OS is typically deployed with the JVM switch -Djava.security.egd=file:/dev/./urandom for any component that uses Java (Oracle or AdoptOpenJDK), e.g. Wildfly (IM/IG/IP) and IAMCS (JCS). This JVM variable is sufficient for most use-cases to manage the encryption/hash needs of the solution.
However, for any component that does not provide a mechanism to use the alternative /dev/urandom driver, the Linux OS vendors offer tools such as the "rng-tools" package. We can review which OS RNGD service is available using package tools, e.g.
dnf list installed | grep -i rng
If the Symantec Identity Suite or other solutions are deployed as standalone components, then we may adjust the Linux OS as needed, with no restrictions on adding the RNGD daemon as we wish. One favorite is the HAVEGED daemon over the default OS RNGD.
See our prior notes on the value and testing of entropy on Linux OS (standalone deployments).
The challenge for Virtual Appliances is that we are limited to the functionality the Symantec Product Team provides. The RNGD service was available on the vApp r14.3, but was disabled due to an OS issue that drove 100% utilization on CentOS 6.4. The service is still installed, but the actual binary is non-executable.
A new Virtual Appliance patch would be required to re-enable RNGD on vApp r14.3cp2. We have access via sudo to /sbin/chkconfig and /sbin/service to re-enable this service, but as the binary is not executable, we cannot progress any further. The alias still exists in the documentation, but the OS alias was removed in the cp2 update.
However, since vApp r14.4 was released, we can focus on this Virtual Appliance, which runs CentOS 8 Stream. The RNGD service here is disabled (masked) but can be re-enabled for our use with the sudo command. There is no documented method for RNGD on vApp r14.4 at this time, but the steps below show an approved way using the 'config' userID and sudo commands.
Confirm that the “rng-tools” package is installed and that the RNGD binary is executable. We can also see that the RNGD service is “masked”. Masked services are prevented from starting manually or automatically as an extra safety measure when we wish for tighter control over our systems.
If we test OS entropy on this vApp r14.4 server without RNGD, we can monitor how a simple BASH shell script that emulates password generation impacts the "entropy" of /dev/random. The below script will reduce the entropy to low numbers, which will then impact the OS itself and any components that reference /dev/random. We can observe with "lsof /dev/random" that the java programs still reference /dev/random, even though most activity goes to /dev/urandom.
Using the time command in the BASH shell script, we can see that the response is rapid for the first 20+ iterations, but as soon as the entropy is depleted, each execution is delayed 10-30x.
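A minimal emulation of that test script, assuming a kernel where /dev/random still blocks (pre-5.6, e.g. CentOS 8 Stream):

# Drain the blocking /dev/random pool and watch entropy fall;
# expect fast iterations at first, then multi-second stalls.
for i in $(seq 1 50); do
  time head -c 16 /dev/random > /dev/null
  cat /proc/sys/kernel/random/entropy_avail
done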
Now let's see what the RNGD service will do for us when it is enabled. Follow the steps below to unmask, enable, and start the RNGD service as the 'config' userID; we have sudo access to the CentOS 8 Stream command /sbin/systemctl.
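The steps reduce to the standard systemd commands (the service name comes from the rng-tools package):

sudo /sbin/systemctl unmask rngd
sudo /sbin/systemctl enable rngd
sudo /sbin/systemctl start rngd
sudo /sbin/systemctl status rngd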
After the RNGD service is enabled, test again with the same prior BASH shell script but bump the loops to 1000 or higher. Note using the time command we can see that each loop finishes within a fraction of a second.
Aim to keep the solution footprint small and right-sized to solve the business' needs. Do not accept the default performance, but also avoid over-purchasing to scale to your expected growth.
Use the JVM switch wherever there is a java process, e.g. BLC or home-grown ETL (extract-transform-load) processes.
-Djava.security.egd=file:/dev/./urandom
If you suspect a dependency on /dev/random may impact the OS or other processes, then enable the OS RNGD and perform your testing. Monitor with the top command to ensure the RNGD service is providing value and not impacting the solution.
“DSA is attempting to start after a long outage, perform a recovery procedure before starting”
Challenge: The IMPD (Identity Manager Provisioning Directory) DATA DSAs have been offline for a while, e.g. 7+ days (> 1 week). To protect the data, the Symantec/CA Directory solution will refuse to allow the DATA DSAs to start without manual intervention, preventing the possibility of production data (live DATA DSAs) being synced with older data (offline DATA DSAs).
If we were concerned, we would follow best practices: remove the offline DATA DSAs' *.db & *.dp files, replace the *.db files with current copies of the live DATA DSAs' *.db files, generate temporary time files of *.dx, and allow the *.dp time files to rebuild themselves upon startup of the offline DATA DSAs.
However, if we are NOT concerned, or the environment is non-production, we can avoid the multiple shells and commands needed to resync by using a combination of bash shell commands. The proposal below outlines the process on the Symantec/CA Identity Suite virtual appliance, where both the IMPD and IMPS (Identity Manager Provisioning Server) components reside on the same servers.
Proposal: Use a single Linux host to send remote commands as a single user ID; sudo to the ‘dsa’ and ‘imps’ service IDs, and issue commands to address the restart process.
Pre-Work: For the Identity Suite vApp, recommend that .ssh keys be used to avoid using a password for the ‘config’ user IDs on all vApp nodes.
If using .SSH keys, do not forget to use this shortcut to cache the local session: eval `ssh-agent` && ssh-add
Steps: Issue the following bash commands with the correct IPs or hostnames.
If possible, wrap the remote commands in a for-loop. The below example uses the local ‘config’ user ID, to ssh to remote servers, then issues a local su to the ‘dsa’ service ID. The ‘dsa’ commands may need to be wrapped as shown below to allow multiple commands to be executed together. We have a quick hostname check, stop all IMPD DATA DSAs, find the time-stamp file that is preventing the startup of the IMPD DATA DSAs and remove it, restart all IMPD DATA DSA, and then move on to the next server with the for-loop. The ‘imps’ commands are similar with a quick hostname check, status check, stop and start process, another status check, then move on to the next server in the for-loop.
for i in {136..141}; do ssh -t config@192.168.242.$i "su - dsa -c \"hostname;dxserver stop all;pwd;find ./data/ -type f \( -name '*.dp' \) -delete ;dxserver start all \" "; done
for i in {136..141}; do ssh -t config@192.168.242.$i "su - imps -c \"hostname;imps status;imps stop;imps start;imps status \" "; done
View of for-loop commands output:
Additional: Process to assist with decision to sync or not sync.
Check if the number of total entries in each individual IMPD DATA DSA matches its peers (multi-write groups). Goal: avoid any deltas > 1% between peers. The IMPD "main", "co", and "inc" DATA DSAs should be 100% in sync. We may see some minor flux in the "notify" DATA DSA, as this is temporary data used by the IMPS server to store data to be sent to the IME via the IME Call Back process.
If there are any deltas, then we may export the IMPD DATA DSAs to LDIF files and then use the Symantec/CA Directory ldifdelta process to isolate and triage the deltas.
su - dsa OR [ sudo -iu dsa ]
export HISTIGNORE=' *' {USE THIS LINE TO FORCE HISTORY TO IGNORE ANY COMMANDS WITH A LEADING SPACE CHARACTER}
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd {USE SPACE CHARACTER IN FRONT TO AVOID HISTORY USAGE}
# NOTIFY BRANCH (TCP 20404)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD NOTIFY DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# INC BRANCH (TCP 20398)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD INC DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# CO BRANCH (TCP 20396)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD CO DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# MAIN BRANCH (TCP 20394)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD MAIN DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
NOTIFY DSA is temporary data and will have deltas. This DSA is used for the IME CALL BACK process.
A very common challenge we see is the modification of the CA/Symantec Connector Server service(s) startup order for the embedded C++ (CCS) connector. On MS Windows, this CCS connector service is set by default to "Manual" startup.
Since the solution documentation is not clear on why this is configured as manual, we see site administrators either change this service from "Manual" to "Automatic" or start the CCS service manually themselves after a restart.
However, either of these practices will prevent the JCS Service from managing the CCS Service's cache upon startup. The JCS will NOT be able to manage the CCS service for a number of minutes until it can resolve this conflict. Unfortunately, when this occurs, traffic to any CCS-managed endpoints is placed in a long timeout within the JCS Service: the IMPS (Provisioning Server) will think it successfully handed off the task to the JCS/CCS tier, but the task will stay in a holding pattern until the JCS memory is overwhelmed, the CCS Service restarts/crashes, or the task times out.
TL;DR – Please do not start the CCS Service manually. Only stop/start the JCS Service, wait a full minute and you should see the CCS Service start up. If the CCS Service does NOT start, investigate why.
JCS Service’s management of the CCS Service:
To understand how the JCS Service manages the CCS Service (via localhost TCP 20402), we can review two (2) files and use MS Sysinternals Process Explorer to view the JCS Service starting the CCS Service via the command “net start im_ccs”. The JCS Service will now have access to update the CCS service’s cache with information for a managed endpoint, e.g. Active Directory.
The two (2) JCS Service configuration files for CCS Service are:
C:\Program Files (x86)\CA\Identity Manager\Connector Server\jcs\conf\server_osgi_ccs.xml [File contains startup properties of how the JCS will manage timeouts to the CCS Service & connections pools]
C:\Program Files (x86)\CA\Identity Manager\Connector Server\jcs\conf\override\server_ccs.properties [File contains the bind credentials and the service port to communicate to on localhost:20402. The password hash will be PBES or AES format depending if FIPS is enabled.]
And finally, a view of the startup of the CCS Service via the JCS Service using MS Sysinternals Process Explorer (https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer). We can see that a child process is started from the JCS Service that calls the MS Windows "net.exe" command and executes "net start im_ccs".
Keeping the JCS Service and CCS Service as-is for startup processes will help avoid confusion for the provisioning tier of the CA/Symantec solution. Please only stop/start the JCS Service. If the CCS Service does not stop after 2 minutes, kill it. But never start the CCS by itself.
A view of the data path from IMPS (IM Provisioning Server) to Active Directory (manage endpoint) via the Connector tier.
Performance Improvements
While we may not adjust the startup from manual to automatic, we can enhance the default configurations for performance and timeout improvements. The JCS Service starts up with a default of 1 GB RAM. The JCS Service is 64-bit (it uses 64-bit Java), so memory can be increased accordingly; after testing with large data sets, we recommend increasing the JCS JVM maximum memory from 1 GB to 4 GB. We can confirm with MS Sysinternals Process Explorer that, after startup, the JCS will use over 1 GB of RAM.
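Since the JCS service wrapper is Apache Procrun (as the registry view below shows), the heap ceiling lives under the im_jcs service key. A hypothetical example; the WOW6432Node path applies when the 32-bit wrapper is installed, and the JvmMx value is in MB:

reg add "HKLM\SOFTWARE\WOW6432Node\Apache Software Foundation\Procrun 2.0\im_jcs\Parameters\Java" /v JvmMx /t REG_DWORD /d 4096 /f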
Other improvements include updating the Java runtime that supports the JCS Service. CA/Symantec now recommends AdoptOpenJDK. The documentation explains how this may be updated in place; alternatively, as we prefer, reinstall and allow the installer to update the path statements for AdoptOpenJDK.
The below image shows, in the MS Windows Registry for the JCS Service (Procrun 2.0/im_jcs), the key-value pairs that are updated for AdoptOpenJDK (https://adoptopenjdk.net/). If managing Active Directory, please also review your OS environment variables to control the behavior from the CCS Service to Active Directory.
After you restart the JCS Service, open the JCS Administration Console via http://localhost:20080/main or https://localhost:20443/main, right-click the "Local Connector Server" icon, and it should display that AdoptOpenJDK is now in use. Only major release 8 is supported; avoid later releases (11, 15) until support is confirmed.
Stability Improvements
The default JCS Service configuration file has knowledge of the connection pool and timeouts, but appears to be missing the "maxWait" token. If we are willing to wait 5-10 minutes for the JCS Service to reset its knowledge of the CCS service, we can leave the default. However, for a large environment, we have found that lowering the wait times greatly reduces transaction delays when there is a stoppage. We identified two (2) configuration parameters that assist with the long-term stability of the solution: adding a "maxWait" of 60 seconds (60000 milliseconds) to the JCS configuration file for the CCS service, and updating the default IM Provisioning Server domain configuration parameter "Connections/Refresh Time" to 90 seconds.
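An illustrative fragment of the change in server_osgi_ccs.xml; the surrounding bean and neighboring pool tokens vary by release, and only the maxWait value is the recommendation here:

<!-- Connection pool for the JCS -> CCS (localhost:20402) channel -->
<property name="maxWait" value="60000"/> <!-- wait at most 60s for a pooled connection -->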
Troubleshooting and Logging
To assist with RCA efforts, we have the following recommendations. Enable verbose logging for both the JCS Service and the managed endpoint to isolate issues. You may also need to increase logging for the API Gateway or docker logs.
Below is the example to enable verbose logging.
To monitor the JCS logs, there are several tools that will assist, but we find that the latest releases of Notepad++ allow for “tailing” the active JCS logs.
Example of verbose logs for Active Directory via the CCS’s ADS and JCS logs.
Important logging note: Enable the new IM r14.3cp2 feature to auto-rotate your CCS ADS log. Avoid stopping/starting the CCS Service yourself, as that may interrupt the JCS behavior toward the CCS Service (an error communicating to localhost:20402 will display in the JCS logs). New file(s): Connector Server\ccs\data\ADS\<Endpoint_Name>.logconfig
Example of setting MS Windows OS Environmental variables with “setx” and description of the value of each variable for Active Directory/MS Exchange
1. [High Value. Will force AGENTLESS connection to Exchange 2010 & up]
setx ADS_AGENTLESS_MODE 1 /m
2. [High Value. Default value = 2, Kerberos authentication for the Exchange PowerShell API]
setx ADS_AGENTLESS_AUTHMETHOD 2 /m
3. [High Value. Default value = 3. Increase to 100 and ALSO have the Exchange admin create a new quota for the service account used to create mailboxes (the default Exchange PowerShell quota is 18): New-ThrottlingPolicy MaxPowershell -PowerShellMaxConcurrency 100 AND Set-Mailbox ServiceAccountID -ThrottlingPolicy MaxPowershell]
setx ADS_AGENTLESS_MAXCONN 100 /m
4. [Monitor. Default value = 1, error level ONLY; increase to level 3 for debugging PowerShell logging to MS Exchange]
setx ADS_AGENTLESS_LOGLEVEL 1 /m
5. [Medium Value. The CCS service will wait 10 minutes for a single account; relevant with the Exchange PowerShell mailbox quota of 18 and BLC with 100's of users.]
setx ADS_CONFIRM_MAILBOX 600 /m
6. [Low Value. Mask the AD Failover List in the IM Prov Manager UI]
setx ADS_DISABLE_DCSTATUS 1 /m
7. [Low Value. Mask viewing the default AD Primary Group in the IM Prov Manager UI]
setx ADS_DISABLE_PRIMARYGROUPNAME 1 /m
8. [High Value. Send the DC hostname to the Exchange server to query first, instead of Exchange relying on its current pool]
setx ADS_E2K_SEND_DC 1 /m
9. [High Value. Requires that the service account can view all alternative DCs. May limit failover DCs via the properties file.]
setx ADS_FAILOVER 1 /m
10. [Medium Value. Performance gain if Terminal Services attributes are NOT being managed, e.g. changed in Account Templates or PX rules.]
setx ADS_WTS_TIMEOUT -1 /m
11. [Set ADS_OPERATION_TIMEOUT to -1 to disable the client-side timeout functionality in the environment variable; otherwise 60]
setx ADS_OPERATION_TIMEOUT 60 /m
12. [The failover retry interval is the time the Active Directory connector waits before re-checking a stopped server. The default retry interval is 15 minutes.]
setx ADS_RETRY 15 /m
13. [Allows groups in unmanaged domains to be part of synchronization. The value is two digits, "xy". x defines whether the synchronization operation searches the global catalog: 0 (default) queries the local catalog only and does not consider universal groups in unmanaged domains (y then has no effect); 1 queries the global catalog so groups in unmanaged domains are considered. y defines which domains the synchronization operation considers: 0 considers groups in both managed and unmanaged domains; 1 considers groups in managed domains only.]
setx ADS_MANAGE_GROUPS 01 /m
14. [Monitor. Seems valuable only for debugging. Has a performance hit, but may assist with CCS debugging to Active Directory.]
setx ADS_FORCELOG 1 /m
15. [Low Value. The IMPS service can page with lower limits. Impact if this value is greater than the AD default page limit size.]
setx ADS_SIZELIMIT 50000 /m
Reinstalling the JCS Service from the Virtual Appliance
If you are using the CA/Symantec Identity Suite virtual appliance, consider re-installing the remote JCS Services after patching the solutions on the virtual appliance. This avoids any confusion about which patches are deployed on the remote JCS servers, since any patches on the virtual appliance will be incorporated into the new installer. We prefer to use the JCS only on the MS Windows OS, as it can service both JCS-type and CCS-type managed endpoints together; we also have full access to adjust the behavior of these services on MS Windows, rather than the limited access provided by the virtual appliance for the JCS service.
Hopefully some of these notes will help you avoid challenges with the connector tier and, if you do encounter them, help you isolate the issues.
Advanced Review: Review how the CCS Service receives IMPS data via the JCS tier.
The below example loads the DLL for the CCS Service (pass-through), then sends the information to bind to the ADS endpoint, then executes two (2) modify operations. This process emulates the IMPS behavior with the JCS and CCS. The bind information for the ADS endpoint is stored in the CA Provisioning User Store and is queried/decrypted by the IMPS to send to the JCS as needed. Only after this information is stored in the CCS service will the solution be able to explore or manage the ADS endpoint accounts.
If unable to re-install, please delete the CA install/registry tracking file under C:\Windows folder, C:\Windows\vpd.properties , then reboot before attempting a re-install of the JCS/CCS component.
ECS Services
These five (5) ECS services are typically not actively used and may be changed to manual startup for minor CPU relief. The ECS features are retained for supporting libraries.
Restore processes may be done with snapshots-in-time for both databases and directories. We wished to provide clarity on the restoration steps after a snapshot-in-time is utilized for a directory. The methodology outlined below has the following goals: a) allow sites to prepare before they need the restoration steps, b) provide a training module to exercise samples included in a vendor solution.
In this scenario, we focused on the CA/Broadcom/Symantec Directory solution. The CA Directory provides several tools to automate online backup snapshots, but these processes stop at copies of the binary data files.
Additionally, we desired to walk through the provided DAR (Disaster and Recovery) scenarios, determine what needed to be updated to reflect newer features, and establish how we may validate that we accomplished a full restoration.
Finally, we wished to assist with the decision-tree model, where we triage and determine whether a full restore is required or whether we may select partial restoration via extracts and imports of selected data.
Cluster Out-of-Sync Scenario
Awareness
The first indicator that a userstore (CA Directory DATA DSA) is out-of-sync will be the CA Directory logs themselves, e.g. alarm or trace logs.
Another indication will be inconsistent query results for a user object that returns different results when using a front-end router to the DATA DSAs.
After awareness of the issue, the team will exercise a triage process to determine the extent of the out-of-sync data. For a quick check, one may execute LDAP queries directly against the TCP port of each DATA DSA on each host and examine the results directly, or even just the total number of entries, e.g. dxTotalEntryCount.
The returned count value will help determine if the number of entries for each DATA DSA on the peer MW hosts is out-of-sync for ADD or DEL operations. The challenge/gap with this method is that it will not show any delta due to modify operations on the user objects themselves, e.g. an address field change.
Example of LDAP queries (dxsearch/ldapsearch) to CA Directory DATA DSA for the CA Identity Management solution (4 DATA DSA and 1 ROUTER DSA)
su - dsa OR [ sudo -iu dsa ]
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
# NOTIFY BRANCH (TCP 20404)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount
dn: dc=notify,dc=etadb
# INC BRANCH (TCP 20398)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# CO BRANCH (TCP 20396)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# MAIN BRANCH (TCP 20394)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# ALL BRANCHES - Router Port (TCP 20391)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount
A better process to identify the delta(s) is to automate the daily backup process to build out LDIF files for each peer MW DATA DSA, and then perform a delta process between the LDIF files. We will walk through this more involved step later in this blog entry.
Recovery Processes
The below link has examples from CA/Broadcom/Symantec with recovery notes of CA Directory DATA DSA that are out-of-sync due to extended downtime or outage window.
The below image, pulled from the document (page 9), shows CA Directory r12.x using the latest recovery process of "multiwrite-DISP" (MW-DISP) mode.
This MW-DISP recovery process is the default for the CA Identity Management DATA DSAs created by the install wizard tools for the IMPD DATA DSAs.
The modified version we have started for CA Directory r14.x adds some clarity to the <dsaname>.dx files and notes which steps may be adjusted to support the split data structure of the four (4) IMPD DATA DSAs.
The same time-flow diagram was used. Extra notes were added for clarity and, where possible, examples of the commands that will be used to assist with direct automation of each step (or may be pasted into an SSH session window as the dsa service ID).
Step 1, implicit in the identification/triage process, is to determine what userstore data is out-of-sync and how large a delta we have. If the DSA service has been shut down (either deliberately or via a startup issue) for more than a few days, the CA Directory process will check the date stamp in the <dsaname>.dp file and the transaction in the <dsaname>.tx file; if the date gap is too large, CA Directory will refuse to start the DATA DSA and issue a warning message.
Step 2, we will leverage the dxdisp <dsaname> command to generate a new time-stamp file <dsaname>.dx, that will be used to prevent unnecessary sync operations with any data older than the date stamp in this file.
This command should be issued for every DATA DSA on the same host. This is especially true for split DATA DSAs, e.g. IMPD (CA Identity Manager's Provisioning Directories). In our example below, to assist with this step, we use a combination of commands with a while-loop to issue the dxdisp command.
This command can be executed regardless of whether the DSA is running or shut down. If a <dsaname>.dx file already exists, any additional execution of dxdisp will add updated time-stamps to this file.
Note: The <dsaname>.dx file will be removed upon restart of the DATA DSA.
STEP 2: ISSUE DXDISP COMMAND [ Create time-stamp file for re-sync use ] ON ALL IMPD SERVERS.
su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxdisp "$LINE" ;done ; echo ; find $DXHOME -name "*.dx" -exec ls -larth {} \;
Step 3 will then ask for an updated online backup to be executed.
In earlier releases of CA Directory, this required a telnet/ssh connection to the dxconsole of each DATA DSA, or using the DSA configuration files to contain a dump dxgrid-db; command that would be executed with the dxserver init all command.
In newer releases of CA Directory, we can leverage the dxserver onlinebackup <dsaname> process.
This step can be a challenge when dumping all DATA DSAs at the same time using manual procedures.
Fortunately, we can automate this with a single bash shell process; as an enhancement, we can also generate the LDIF extracts of each DATA DSA for later delta compare operations.
Note: The DATA DSA must be running (started) for the onlinebackup process to function correctly. If unsure, issue a dxserver status or dxserver start all first.
Retain the LDIF files from the “BAD” DATA DSA Servers for analysis.
STEP 3a-3c: ON ALL IMPD DATA DSA SERVERS - ISSUE ONLINE BACKUP PROCESS
su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -w -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep `date '+%Y-%m-%d'`
Step 4a walks through the possible copy operations from the "GOOD" to the "BAD" DATA DSA host for the <dsaname>.zdb files. The IMPD DATA DSAs require that three (3) of the four (4) zdb files are copied, to ensure no impact to referential integrity between the DATA DSAs.
The preferred model to copy data from one remote host to another is via the compressed rsync process over SSH, as this is a rapid process for the CA Directory db / zdb files.
Below are the code blocks that demonstrate examples how to copy data from one DSA server to another DSA server.
# RSYNC METHOD
sudo -iu dsa
time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
# SCP METHOD
sudo -iu dsa
scp REMOTE_ID@$HOST:./data/<folder_impd_data_dsa_name>/*.zdb /tmp/dsa_data
/usr/bin/mv /tmp/dsa_data/<incorrect_dsaname>.zdb $DXHOME/data/<folder_impd_data_dsa_name>/<correct_dsaname>.db
Step 4b walks through the final steps before restarting the "BAD" DATA DSA.
The ONLY files that should be in the data folders are <dsaname>.db (binary data file) and <dsaname>.dx (ASCII time-stamp file). Ensure that the copied <prior-hostname-dsaname>.zdb file has been renamed to the correct hostname and extension for <dsaname>.db.
Remove the prior <dsaname>.dp (ASCII time-stamp file) { the DATA DSA will auto replace this file with the *.dx file contents } and the <dsaname>.tx (binary data transaction file).
Step 5a: Start the DATA DSA with the command
dxserver start all
If there is any issue with a DATA or ROUTER DSA not starting, then issue the same command with the debug switch (-d)
dxserver -d start <dsaname>
Use the output from the above debug process to address any a) syntax challenges, or b) older PID/LCK files ($DXHOME/pid)
Step 5b Finally, use dxsearch/ldapsearch to query a unit-test of authentication with the primary service ID. Use other unit/use-case tests as needed to confirm data is now synced.
bash
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s base -b 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' '(objectClass=*)' | perl -p00e 's/\r?\n //g'
LDIF Recovery Processes
The steps above are for recovery via a 100% replacement method, where the assumption is that the “bad” DSA server does NOT have any data worth keeping or wish to be reviewed.
We wish to clarify a process/methodology for the case where the "peer" multi-write DSAs are out-of-sync, but we are not sure which is truly the "good DSA" to select, or where we wish to merge data from multiple DSAs before we declare one to be the "good DSA" (with regard to the completeness of data).
Using CA Directory commands, we can join them together to automate snapshots and exports to LDIF files. These LDIF files can then be compared against their peer MW DATA DSA exports, or even against themselves at different snapshot export times. As long as we have the LDIF exports, we can recover from any DAR scenario.
Example of using CA Directory dxserver and dxdumpdb commands (STEP 3) with the ldifdelta and dxmodify commands.
The output from ldifdelta may be imported to any remote peer MW DATA DSA server via dxmodify to that hostname, to force a sync for the few objects that may be out-of-sync, e.g. password hashes or other attributes.
The below images demonstrate a delta that exists between two (2) time snapshots. The CA Directory tool, ldifdelta, can identify and extract the modified entry to the user object.
The following examples will show how to re-import this delta using dxmodify command to the DATA DSA with no other modifications required to the input LDIF file.
In the testing example below, before any update to an object, let’s capture a snapshot-in-time and the LDIF files for each DATA DSA.
Let's make an update to a user object using any tool we wish, or a command-line process like ldapmodify.
Next, let's capture a new snapshot-in-time after the update, so we will be able to utilize the ldifdelta tool.
We can use the ldifdelta tool to create the delta LDIF input file. After we review this file, and accept the changes, we can then submit this LDIF file to the remote peer MW DATA DSA that are out-of-sync.
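A hypothetical sketch of that flow; the ldifdelta/dxmodify switches vary by Directory release, so verify against your dxtools documentation (the host, port, and file names are illustrative):

# 1) Generate the delta between two snapshots of the same DATA DSA
ldifdelta -x -S impd-main snapshot_0800.ldif snapshot_0900.ldif > delta.ldif
# 2) Review delta.ldif, then apply it to the out-of-sync peer MW DATA DSA
dxmodify -c -h remote-host -p 20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f delta.ldif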
Hope this has value to you and any challenges you may have with your environment.
The CA Directory solution provides a mechanism to automate daily on-line backups, via one simple parameter:
dump dxgrid-db period 0 86400;
Where the first number is the offset from GMT/UTC (in seconds) and the second number is how often to run the backup (in seconds), e.g. Once a day = 86400 sec = 24 hr x 60 min/hr x 60 sec/min
Two Gaps/Challenge(s):
History: The automated backup process will overwrite the existing offline file(s) (*.zdb) for the Data DSA, so any ability to perform an RCA against older data is lost. What was the data like 10 days ago? With the current process, only the CA Directory or IM logs would be of assistance.
Size: The automated backup will create an offline file (*.zdb) footprint of the same size as the data (*.db) file. If your Data DSA (*.db) is 10 GB, then your offline (*.zdb) will be 10 GB. The Identity Provisioning user store has four (4) Data DSAs, which multiplies this number: four (4) db files + four (4) offline zdb files at 10 GB each require a minimum of 80 GB of free disk space. If we attempt to retain a history of these files for fourteen (14) days, this would be four (4) db + fourteen (14) zdb = eighteen (18) x 10 GB = 180 GB of disk space required.
Resolutions:
Leverage the CA Directory tool (dxdumpdb) to convert from the binary data (*.db/*.zdb) to LDIF and the OS crontab for the ‘dsa’ account to automate a post ‘online backup’ export and conversion process.
Step 1: Validate the ‘dsa’ user ID has access to crontab (to avoid using root for this effort). cat /etc/cron.allow
If access is missing, append the ‘dsa’ user ID to this file.
Step 2: Validate that the online backup process has been scheduled for your Data DSAs. Use a find command to identify the offline files (*.zdb) and note their size.
Step 3: Identify the online backup process start time, as defined in the Data DSA settings DXC file or perhaps DXI file. Convert this GMT offset time to the local time on the CA Directory server. (See references to assist)
Step 4: Use crontab -e as the 'dsa' user ID to create a new entry (use crontab -l to view existing entries). Use the dxdumpdb -z switch with the DSA_NAME to create the exported LDIF file, and redirect the output to gzip to bypass any need for temporary files. Note: crontab has limited variable expansion, and any % characters must be escaped.
Example of the crontab for ‘dsa’ to run 30 minutes after (at 2 am CST) the online backup process is scheduled (at 1:30 am CST).
# Goal: Export and compress the daily DSA offline backup to ldif.gz at 2 AM every day
# - Ensure this crontab runs AFTER the daily automated backup (zdb) of the CA Directory Data DSAs
# - Review these two (2) tokens for DATA DSAs: ($DXHOME/config/settings/impd.dxc or ./impd_backup.dxc)
# a) Location: set dxgrid-backup-location = "/opt/CA/Directory/dxserver/backup/";
# b) Online Backup Period: dump dxgrid-db period 0 86400;
#
# Note1: The 'N' start time of the 'dump dxgrid-db period N M' is the offset in seconds from midnight of UTC
# For 24 hr clock, 0130 (AM) CST calculate the following in UTC/GMT => 0130 CST + 6 hours = 0730 UTC
# Due to the six (6) hour difference between CST and UTC TZ: 7.5 * 3600 = 27000 seconds
# Example(s):
# dump dxgrid-db period 19800 86400; [Once a day at 2330 CST]
# dump dxgrid-db period 27000 86400; [Once a day at 0130 CST]
#
# Note2: Alternatively, may force an online backup using this line:
# dump dxgrid-db;
# & issuing this command: dxserver init all
#
#####################################################################
# 1 2 3 4 5 6
# min hr d-o-m month d-o-w command(s)
#####################################################################
#####
##### Testing Backup Every Five (5) Minutes ####
#*/5 * * * * . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-main" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
#####
##### Backup daily at 2 AM CST - 30 minutes after the online backup at 1:30 AM CST #####
#####
0 2 * * * . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-main" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * * . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-co" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-co" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * * . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-inc" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-inc" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * * . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-notify" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-notify" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
Example of the above lines placed in a bash shell script instead of being called directly via crontab. Note: in a script we are able to use variables, and there is no need to escape the date % characters.
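A sketch of that script form, assuming the 'dsa' profile sets DXHOME and the PATH (DSA branch names per our lab):

#!/bin/bash
# Same exports as the crontab entries; variables are fine here and
# the date format string needs no % escaping outside of crontab.
. $HOME/.profile
OUT=/tmp
TS=$(/bin/date --utc +%Y%m%d%H%M%S.0Z)
for BRANCH in impd-main impd-co impd-inc impd-notify; do
  DSA=$(dxserver status | grep "$BRANCH" | awk '{print $1}')
  dxdumpdb -z "$DSA" | gzip -9 > "$OUT/$(hostname)_${DSA}_${TS}.ldif.gz"
done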
Monitor with tail -f /var/log/cron (or syslog depending on your OS version), when the crontab is executed for your ‘dsa’ account
View the output folder for the newly created gzip LDIF files. The files may be extracted back to LDIF format via gzip -d file.ldif.gz. Compare these file sizes with the original (*.zdb) files of 2 GB.
Recommendation(s):
Implement a similar process and retain this data for fourteen (14) days, to assist with any RCA or similar analysis that may be needed for historical data. Avoid copying the (*.db or *.zdb) files for backup, unless using this process to force a clean sync between peer MW Data DSAs.
The Data DSAs may be reloaded (dxloaddb) from these LDIF snapshots; the LDIF files do not have the same file-size impact as the binary db files, and, as LDIF files, they may be quickly searched for prior data using standard tools such as grep "text string" filename.ldif.
This process will assist in site preparation for a DAR (disaster and recovery) scenario. Protect your data.
While assisting a site with their upgrade process from CA API Gateway 9.2 (docker) to the latest CA API Gateway 9.4 image, we needed to clarify the steps. In this blog entry, we have captured our validation processes for the documented and undocumented features of the API Gateway docker deployment (https://hub.docker.com/r/caapim/gateway/), with pedantic, verbose steps to assist with training of staff resources, and enhanced the external checks for a DAR (disaster and recovery) scenario using the docker and docker-compose tools.
Please use this lab to jump start your knowledge of the tools: ‘docker’, ‘docker-compose’ and the API Gateway. We have added many checks and the use of bash shell to view the contents of the API Gateway containers. If you have additional notes/tips, please leave a comment.
To lower business risk during this exercise, we made the following decisions:
1) Avoid use of default naming conventions, to prevent accidental deletion of the supporting MySQL database for CA API Gateway. The default ‘docker-compose.yml’ was renamed as appropriate for each API Gateway version.
2) Instead of using different folders to host configuration files, we defined project names as part of the startup process for docker-compose.
3) Any docker container updates would reference the BASH shell directly instead of a soft-link, to avoid different behaviors between the API GW container and the MySQL container.
Challenges:
Challenge #1: Both the API Gateway 9.2 and 9.4 docker containers have defects with regard to the standardized 'docker stop/start containerID' process: API Gateway 9.2 would not restart cleanly, and the API Gateway 9.4 container would not update its embedded health check status, e.g. as reported by docker ps -a or docker inspect containerID.
Resolution #1: Both challenges were addressed in the enclosed testing scripts: docker-compose is used exclusively to manage the API Gateway 9.2 container, and an internal file is touched in the API Gateway 9.4 container to refresh its health check.
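A hedged sketch of both workarounds; the internal file path for the 9.4 health check is hypothetical, shown only to illustrate the pattern:
# API GW 9.2: manage the container lifecycle only via docker-compose (not docker stop/start)
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml restart
# API GW 9.4: touch an internal file so the embedded health check re-evaluates
# (hypothetical path; substitute the file identified in your own image)
docker exec -u root ssg94 /bin/bash -c "touch /opt/SecureSpan/Gateway/node/default/var/health"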
Challenge #2: The docker parameters between API Gateway 9.2 and API Gateway 9.4 had changed.
Resolution #2: Identify the missing parameters with 'docker logs containerID' and a review of the embedded deployment script 'entrypoint.sh'.
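A minimal sketch of the inspection commands, assuming the container name ssg94 from this lab; the entrypoint.sh location may vary by image:
# Review the container log for missing or changed environment parameters
docker logs ssg94 2>&1 | grep -i -e "error" -e "missing" -e "required"
# Locate and review the embedded deployment script
docker exec -it ssg94 /bin/bash -c "find / -name entrypoint.sh -exec cat {} \; 2>/dev/null"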
Infrastructure: Seven (7) files were used for this lab on CentOS 7.x (/opt/docker/api)
ssg_license.xml (required from the Broadcom/CA Sales Team – ask for a 90-day trial license if a current one is not available)
docker-compose-ssg94.yml (the primary install configuration file for API GW 9.4)
docker-compose-ssg92.yml (the primary install configuration file for API GW 9.2)
docker-compose-ssg94-join-db.yml (the restart configuration file – use as needed)
docker-compose-ssg92-join-db.yml (the restart configuration file – use as needed)
01_create_both_ssg92_and_ssg94_docker_deployments.sh (The installation of ‘docker’ and ‘docker-compose’ with the deployment of API GW 9.2 [with MySQL 5.5] and API GW 9.4 [with MySQL 5.7] ; with some additional updates)
02_backup_and_migrate_mysql_ssg_data_from_ssg92_to_ssg94_db.sh (The export/import process from API GW 9.2 to API GW 9.4 and some additional checks)
#!/bin/bash
##################################################################
#
# Script to validate upgrade process from CA API GW 9.2 to 9.4 with docker
# - Avoid using default of 'docker-compose.yml'
# - Define different project names for API GW 9.2 and 9.4 to avoid conflict
# - Explicitly use bash shell /bin/bash instead of soft-link
#
# 1. Use docker with docker-compose to download & start
# CA API GW 9.4 (with MySQL 5.7) &
# CA API GW 9.2 (with MySQL 5.5)
#
# 2. Configure CA API GW 9.4 with TCP 8443/9443
# CA API GW 9.2 with TCP 8444/9444 (redirect to 8443/9443)
#
# 3. Configure MySQL 5.7 to be externally exposed on TCP 3306
# MySQL 5.5 to be externally exposed on TCP 3307
# - Adjust 'grant' token on MySQL configuration file for root account
#
# 4. Validate authentication credentials to the above web services with curl
#
#
# 5. Add network modules via yum to API GW 9.4 container
# - To assist with troubleshooting / debug exercises
#
# 6. Enable system to use API GW GUI to perform final validation
# - There appears to be an issue using browsers to access the API GW UI on TCP 8443/8444
#
#
# Alan Baugher, ANA, 10/19
#
##################################################################
echo ""
echo ""
echo "################################"
echo "Install docker and docker-compose via yum if missing"
echo "Watch for message: Nothing to do "
echo ""
echo "yum -y install docker docker-compose "
yum -y install docker docker-compose
echo "################################"
echo ""
echo "################################"
echo "Shut down any prior docker container running for API GW 9.2 and 9.4"
cd /opt/docker/api
pwd
echo "Issue this command if script fails: docker stop \$(docker ps -a -q) && docker rm \$(docker ps -a -q) "
echo "################################"
echo ""
echo "################################"
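# Compress and base64-encode the GW trial license into an environment variable for the deployments below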
export SSG_LICENSE_ENV=$(cat ./ssg_license.xml | gzip | base64 --wrap=0)
echo "Execute 'docker-compose down' to ensure no prior data or containers for API GW 9.4"
docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml down
echo "################################"
echo "Execute 'docker-compose down' to ensure no prior data or containers for API GW 9.2"
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml down
echo "################################"
echo ""
echo "################################"
echo "Execute 'docker ps -a' to validate no running docker containers for API GW 9.2 nor 9.4"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "################################"
echo ""
echo "################################"
echo "Change folder to execute docker-compose script for API GW 9.4 with MySql 5.7 with TCP 8443/9443"
echo "Execute 'docker-compose up -d' to start docker containers for API GW 9.4 with MySql 5.7 with TCP 8443/9443"
docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml up -d
echo "################################"
echo "Change folder to execute docker-compose script for API GW 9.2 with MySql 5.5 with TCP 8444/9444"
echo "Execute 'docker-compose up -d' to start docker containers for API GW 9.2 with MySql 5.5 with TCP 8444/9444"
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml up -d
echo "################################"
echo ""
echo "################################"
echo "Backup current API GW 9.4 running container for future analysis"
echo "docker export ssg94 > ssg94.export.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar "
docker export ssg94 > ssg94.export.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar
echo "################################"
echo ""
echo "################################"
echo "Update API GW 9.4 running container with additional supporting tools with yum"
echo "docker exec -it -u root -e TERM=xterm ssg94 /bin/sh -c \"yum install -y -q net-tools iproute unzip vi --nogpgcheck\" "
docker exec -it -u root -e TERM=xterm ssg94 /bin/sh -c "yum install -y -q net-tools iproute unzip vi --nogpgcheck "
echo "Export API GW 9.4 running container after supporting tools are added"
echo "docker export ssg94 > ssg94.export.tools.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar "
docker export ssg94 > ssg94.export.tools.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar
echo "################################"
echo ""
echo "################################"
echo "Validate network ports are exposed for API GW Manager UI "
netstat -anpeW | grep -e docker -e "Local" | grep -e "tcp" -e "Local"
echo "################################"
echo ""
echo "################################"
echo "Sleep 70 seconds for both API GW to be ready"
echo "################################"
sleep 70
echo ""
echo ""
echo "################################"
echo "Extra: Open TCP 3306 for mysql remote access "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "echo -e '\0041includedir /etc/mysql/conf.d/\n\0041includedir /etc/mysql/mysql.conf.d/\n[mysqld]\nskip-grant-tables' > /etc/mysql/mysql.cnf && cat /etc/mysql/mysql.cnf "
#docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "/etc/init.d/mysql restart"
#docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "/etc/init.d/mysql status && echo -n"
echo "################################"
docker restart ssg94_mysql57
echo ""
echo "################################"
echo "Execute 'docker ps -a' to validate running docker containers for API GW 9.2 and 9.4 with their correct ports"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "################################"
echo ""
echo "################################"
echo "Test authentication with the SSG backup URL for API 9.2 TCP 8444 - should see six (6) lines"
echo "curl -s --insecure -u pmadmin:7layer https://$(hostname -s):8444/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "######### ############"
curl -s --insecure -u pmadmin:7layer https://$(hostname -s):8444/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""
echo "################################"
echo "Test authentication with the SSG backup URL for API 9.4 TCP 8443 - should see six (6) lines"
echo "curl -s --insecure -u pmadmin:7layer https://$(hostname -s):8443/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "######### ############"
curl -s --insecure -u pmadmin:7layer https://$(hostname -s):8443/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""
echo "################################"
echo "Next Steps:"
echo " Open the API GW UI for 9.2 and create a new entry in the lower left panel"
echo ""
echo "Example: "
echo " Right click on hostname entry and select 'Publish RESTful Service Proxy with WADL' "
echo " Select Manual Entry, then click Next"
echo " Enter data for two (2) fields:"
echo " Service Name: Alan "
echo " Resource Base URL: http://www.anapartner.com/alan "
echo " Then select Finish Button "
echo "################################"
echo ""
View of the API Gateway via the MS Windows API GW UI for both API GW 9.2 (using the 9.3 UI) and API GW 9.4 (using the 9.4 UI). The API GW policies will be migrated from API GW 9.2 to API GW 9.4 via an export/import of the MySQL ssg database. After import, the API GW 9.4 docker image will 'auto' upgrade the ssg database to the 9.4 version.
An interesting view of the API GW 9.4 MySQL database 'ssg' after import and a restart (which 'auto' upgrades the ssg database version): note the multiple Gateway "nodes" that appear after each 'docker restart containerID'.
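For reference, a minimal sketch of that export/import step; the 9.2 MySQL container name (ssg92_mysql55) and the root password placeholder are assumptions, as only ssg94_mysql57 is named in this lab:
# Export the ssg schema from the API GW 9.2 MySQL 5.5 container (container name assumed)
docker exec ssg92_mysql55 /bin/bash -c "mysqldump -uroot -p'<password>' ssg" > ssg92_ssg.sql
# Import into the API GW 9.4 MySQL 5.7 container, then restart the GW to auto-upgrade the schema
docker exec -i ssg94_mysql57 /bin/bash -c "mysql -uroot -p'<password>' ssg" < ssg92_ssg.sql
docker restart ssg94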
The following methodology was used to isolate performance challenges as the number of cluster nodes sharing a common database increases: the JGroups/JTS/JMS communication and the database pool values for each "instance" in the Wildfly/JBoss configuration file.
Note: The individual node names are generated with a port offset of 100-800 for each of the eight (8) nodes; any hard-coded values are updated as well (via addition or multiplication).
To ensure the hornetq and JGroups names are correctly defined for the chain cluster, a case statement updates each node's standalone-full-ha.xml configuration file accordingly when the number of nodes is changed (this is offered as a variable at the top of the script).
The example below also shows how to leverage the CA APM / Wily agent for each J2EE/Wildfly node.
#!/bin/bash
###############################################################################################
#
# Goal: Create a N node J2EE Cluster using Wildfly 8.x.x for CA Identity Manager on a single host
# Use for sandbox testing and validation of performance I/O parameters
#
# Notes: Tested for 2-8 nodes and with the CA APM (Wily) agent enabled for each node
#
#
# Author: A. Baugher, ANA, 8/2019
#
#
###############################################################################################
#set -vx
tz=`/bin/date --utc +%Y%m%d%H%M%S.3%N.0Z`
MAX=5
counter=1
JBOSS_HOME=/opt/CA/wildfly-idm
echo "###### STEP 00: Stop all prior work with cluster testing ######" > /dev/null 2>&1
kill -9 `ps -ef | grep java | grep -v grep | grep UseString | awk '{print $2}'`
echo "###### STEP 01: Copy the current IME (Wildfly) folder to a new folder & with new port offset ######"
echo "Create this many cluster nodes: $MAX"
echo "Current TimeStamp: $tz"
echo ""
while [ $counter -le $MAX ]
do
c=$counter
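# n is the zero-padded node suffix (01, 02, ...): add 100 to counter, then strip the leading "1"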
n=$((100+counter)); n=${n#1}
o=$((100*counter))
nettyo=$((5456+o))
jgrpo=$((7600+o))
cli=$((9990+o))
echo "Current counter is: $counter and the jboss number is: $n with a port offset of: $o"
echo ""
if [ -d $JBOSS_HOME$n ]; then
echo "Prior directory exists for $JBOSS_HOME$n"
kill -9 `ps -ef | grep "wildfly-idm$n" | grep -v grep | awk '{print $2}'` > /dev/null 2>&1
echo "Remove any running processes then sleep 5 seconds before removing directory: $JBOSS_HOME$n "
sleep 5
rm -rf /opt/CA/wildfly-idm$n
fi
cp -r -p /opt/CA/wildfly-idm /opt/CA/wildfly-idm$n
cd $JBOSS_HOME$n/standalone
echo "Current Folder is: `pwd`"
ls -rt
echo "Remove data tmp log folders for new node"
rm -rf data tmp log
ls -rt
echo ""
echo ""
echo "Update standalone-full-ha.xml for hardcoded port 5456 with offset $o"
cd $JBOSS_HOME$n/standalone/configuration
echo "Current Folder is: `pwd`"
cp -r -p ca-standalone-full-ha.xml ca-standalone-full-ha.xml.$tz
sed -i "s|5456|$nettyo|g" ca-standalone-full-ha.xml
echo "Updated Jgroup netty connector port: $nettyo"
grep $nettyo ca-standalone-full-ha.xml
echo ""
echo ""
echo "Update standalone.conf (wildfly.conf) & jboss-cli.xml for port offset by $o"
cd $JBOSS_HOME$n/bin
echo "Current Folder is: `pwd`"
ls -lart standalone.conf
ls -lart jboss-cli.xml
cp -r -p ./init.d/wildfly.conf ./init.d/wildfly.conf.$tz
cp -r -p jboss-cli.xml jboss-cli.xml.$tz
sed -i "s|/opt/CA/wildfly-idm|/opt/CA/wildfly-idm$n|g" ./init.d/wildfly.conf
sed -i "s|9990|$cli|g" jboss-cli.xml
unlink standalone.conf
ln -s $JBOSS_HOME$n/bin/init.d/wildfly.conf standalone.conf
echo "JAVA_OPTS=\"\$JAVA_OPTS -Djboss.socket.binding.port-offset=$o\"" >> standalone.conf
ls -lart standalone.conf
ls -lart jboss-cli.xml
grep "port-offset" standalone.conf
grep "$cli" jboss-cli.xml
echo ""
echo ""
echo "Update standalone.sh for node name & tcp group port"
cd $JBOSS_HOME$n/bin
pwd
cp -r -p standalone.sh standalone.sh.$tz
ls -larth standalone.sh
sed -i "s|iamnode1|iamnode$n|g" standalone.sh
case "$MAX" in
1) echo "Creating JGroups for one node with port offset of $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\]|g" $JBOSS_HOME$n/bin/standalone.sh
;;
2) echo "Creating JGroups for two nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
###################
;;
3) echo "Creating JGroups for three nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node3_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node3_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 3 ]
then
sed -i '684s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node3_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node3_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
###################
;;
4) echo "Creating JGroups for four nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node4_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node4_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 3 ]
then
sed -i '684s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 4 ]
then
sed -i '684s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node4_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node4_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
###########################
;;
5) echo "Creating JGroups for five nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node5_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node5_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 3 ]
then
sed -i '684s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 4 ]
then
sed -i '684s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 5 ]
then
sed -i '684s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node5_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node5_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
###########################
;;
6) echo "Creating JGroups for six nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node6_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node6_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 3 ]
then
sed -i '684s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 4 ]
then
sed -i '684s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 5 ]
then
sed -i '684s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node6|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 6 ]
then
sed -i '684s|node1|node6|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node6_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node6_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>4000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>4000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
###########################
;;
7) echo "Creating JGroups for seven nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\],caim-srv-01\[8300\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node7_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node7_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 3 ]
then
sed -i '684s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 4 ]
then
sed -i '684s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 5 ]
then
sed -i '684s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node6|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 6 ]
then
sed -i '684s|node1|node6|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node7|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 7 ]
then
sed -i '684s|node1|node7|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node7_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node7_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
###########################
sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3300</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3300</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
;;
8) echo "Creating JGroups for eight nodes with port offset of 100 - $o"
sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\],caim-srv-01\[8300\],caim-srv-01\[8400\]|g" $JBOSS_HOME$n/bin/standalone.sh
###################
if [ $counter -eq 1 ]
then
sed -i '684s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node8_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node8_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 2 ]
then
sed -i '684s|node1|node2|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 3 ]
then
sed -i '684s|node1|node3|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 4 ]
then
sed -i '684s|node1|node4|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 5 ]
then
sed -i '684s|node1|node5|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node6|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 6 ]
then
sed -i '684s|node1|node6|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node7|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 7 ]
then
sed -i '684s|node1|node7|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node7_live_to_node8_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node7_live_to_node8_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node8|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node6_live_to_node7_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
elif [ $counter -eq 8 ]
then
sed -i '684s|node1|node8|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '738s|node1_live_to_node1_backup|node8_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '753s|node1_live_to_node1_backup|node8_live_to_node1_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '902s|node1|node1|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '943s|node1_live_to_node1_backup|node7_live_to_node8_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '953s|node1_live_to_node1_backup|node7_live_to_node8_backup|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
fi
###########################
sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
;;
esac
ls -lart $JBOSS_HOME$n/bin/standalone.sh
grep caim-srv $JBOSS_HOME$n/bin/standalone.sh
echo ""
echo "For Node: $n"
echo ""
grep node $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
echo ""
echo ""
echo ""
echo ""
echo ""
echo "Update CA APM / Wily Information / Agent for this instance"
cp -r -p /opt/CA/VirtualAppliance/custom/apm/wily_im $JBOSS_HOME$n/standalone/wily_im
chown -R wildfly:wildfly $JBOSS_HOME$n/standalone/wily_im
echo "JAVA_OPTS=\"\$JAVA_OPTS -Dcom.wily.introscope.agent.jmx.enable=true -Dcom.wily.introscope.agent.agentManager.url.1=localhost:5001 -Djboss.modules.system.pkgs=com.wily,com.wily.*,org.jboss.byteman,org.jboss.logmanager -Xbootclasspath/p:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/logging/main/jboss-logging-3.1.4.GA.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.1.0.Final.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/as/logging/main/wildfly-logging-8.2.0.Final.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.2.Final.jar\"" >> standalone.conf
echo "JAVA_OPTS=\"\$JAVA_OPTS -Dcom.wily.introscope.agent.agentName=iamnode$n -Dcom.wily.introscope.agentProfile=$JBOSS_HOME$n/standalone/wily_im/core/config/IntroscopeAgent.profile -javaagent:$JBOSS_HOME$n/standalone/wily_im/Agent.jar \"" >> standalone.conf
echo ""
echo ""
counter=$(( $counter + 00001 ))
done
counter=1
while [ $counter -le $MAX ]
do
echo "Reset ownership permissions for $JBOSS_HOME$n to wildfly userID"
chown -R wildfly:wildfly $JBOSS_HOME$n
echo "Start up node: $n of $MAX Wildfly cluster"
n=$((100+counter)); n=${n#1}
if [ "$(whoami)" != "wildfly" ]; then
echo "Run this process under the wildfly userid to avoid permissions issue with root"
su - wildfly -c "$JBOSS_HOME$n/bin/standalone.sh &"
chown -R wildfly:wildfly $JBOSS_HOME$n
else
$JBOSS_HOME$n/bin/standalone.sh &
fi
counter=$(( $counter + 00001 ))
done
If you plan on starting your J2EE services manually and wish to keep them running after you log out, a common method is nohup ./command.sh &.
The challenge with this approach is that it creates its own output file, nohup.out, in the folder where the command was executed.
Additionally, this nohup.out duplicates the server.log output of the J2EE service, creating a second I/O operation for every log write.
To avoid this second I/O operation, redirect nohup's output to /dev/null, or determine whether the J2EE service can be enabled as an RC/init.d or systemd service, as sketched below.
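If you opt for the systemd route, a minimal unit sketch for the first node (paths and the wildfly userID assumed from this lab) removes the need for nohup entirely:
# /etc/systemd/system/wildfly-idm01.service  (sketch; adjust paths and user)
[Unit]
Description=CA Identity Manager Wildfly node 01
After=network.target

[Service]
User=wildfly
ExecStart=/opt/CA/wildfly-idm01/bin/standalone.sh
ExecStop=/opt/CA/wildfly-idm01/bin/jboss-cli.sh --connect --command=":shutdown"
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable it with: systemctl daemon-reload && systemctl enable --now wildfly-idm01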
Below is an example of updating the wildfly user's .profile to provide "aliases" via bash shell functions, to start and stop the wildfly service while avoiding the creation of the nohup.out file.
echo "Enable alias (or function) to start and stop wildfly"
#Example of function - Use this to avoid double I/O for nohup process (nohup.out file)
function start_im01 () {
echo "Starting IM 01 node with nohup process"
cd /opt/CA/wildfly-idm01/bin/
pwd
nohup ./standalone.sh >/dev/null 2>&1 &
sleep 1
/bin/ps -ef | grep wildfly-idm01 | grep -v grep
}
export -f start_im01
function stop_im01 () {
echo "Stopping IM 01 node"
echo "This may take 30-120 seconds"
cd /opt/CA/wildfly-idm01/bin/
pwd
./jboss-cli.sh --connect --command=":shutdown"
sleep 5
/bin/kill -9 `/bin/ps -ef | grep wildfly-idm01 | grep -v grep | awk '{print $2}'` >/dev/null 2>&1
}
export -f stop_im01
You may now start and stop your J2EE Wildfly service with the new "aliases" start_im01 and stop_im01.
Note that stop_im01 first attempts to stop the Wildfly service cleanly via the JBoss/Wildfly management console port; if that fails, it searches for and kills the associated java process. If you did "kill" a service and then have startup issues, we suggest removing the $JBOSS_HOME/standalone/tmp and /data folders before restart, as sketched below.
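A minimal sketch of that cleanup for the first node, using the paths from the functions above:
# After a forced kill, clear transient state before restarting the node
rm -rf /opt/CA/wildfly-idm01/standalone/tmp /opt/CA/wildfly-idm01/standalone/data
start_im01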