Disaster Recovery Scenarios for Directories

Restore processes may be performed with snapshots-in-time for both databases and directories. We wished to provide clarity on the restoration steps after a snapshot-in-time is used for a directory. The methodology outlined below has the following goals: a) allow sites to prepare before they need the restoration steps, and b) provide a training module to exercise the samples included in a vendor solution.

In this scenario, we focused on the CA/Broadcom/Symantec Directory solution. CA Directory provides several tools to automate online backup snapshots, but these processes stop at copies of the binary data files.

Additionally, we desired to walk through the provided DAR (Disaster and Recovery) scenarios, determine what needed to be updated to reflect newer features, and establish how we may validate that we accomplished a full restoration.

Finally, we wished to assist with the decision tree model, where we need to triage and determine whether a full restore is required, or whether we may select partial restoration via extracts and imports of selected data.

Cluster Out-of-Sync Scenario

Awareness

The first indicator that a userstore (CA Directory DATA DSA) is out-of-sync will be the CA Directory logs themselves, e.g. alarm or trace logs.

Another indication will be inconsistent query results for a user object, i.e. the same query returning different results when issued through a front-end router to the DATA DSAs.

After awareness of the issue, the team will exercise a triage process to determine the extent of the out-of-sync data. For a quick check, one may execute LDAP queries directly to the TCP port of each DATA DSA on each host and examine the results directly, or even just the total number of entries, e.g. dxTotalEntryCount.

The returned count value will help determine whether the number of entries for each DATA DSA on the peer MW hosts is out-of-sync for ADD or DEL operations. The challenge/gap with this method is that it will not show any delta due to modify operations on the user objects themselves, e.g. an address field change.

Example of LDAP queries (dxsearch/ldapsearch) to the CA Directory DATA DSAs for the CA Identity Management solution (4 DATA DSAs and 1 ROUTER DSA):

su - dsa    OR [ sudo -iu dsa ]
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

# NOTIFY BRANCH (TCP 20404) 
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount
dn: dc=notify,dc=etadb

# INC BRANCH (TCP 20398)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# CO BRANCH (TCP 20396)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# MAIN BRANCH (TCP 20394)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# ALL BRANCHES - Router Port (TCP 20391)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount


A better process to identify the delta(s) is to automate the daily backup process to build out LDIF files for each peer MW DATA DSA, and then perform a delta process between the LDIF files. We will walk through this more involved step later in this blog entry.
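
As a preview, here is a minimal sketch of that delta compare, assuming the LDIF export of the same branch from each peer MW host has already been copied to one location (the DSA name and file names are illustrative):

ldifdelta -x -S ca-prov-srv-01-impd-main /tmp/hostA_impd-main.ldif /tmp/hostB_impd-main.ldif > /tmp/delta_impd-main.ldif
wc -l /tmp/delta_impd-main.ldif     # quick sense of how large the delta is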

Recovery Processes

The below link has examples from CA/Broadcom/Symantec with recovery notes for CA Directory DATA DSAs that are out-of-sync due to extended downtime or an outage window.

The below image, pulled from the document (page 9), shows CA Directory r12.x using the latest recovery process of “multiwrite-DISP” (MW-DISP) mode.

This MW-DISP recovery process is the default for the CA Identity Management DATA DSAs; the install wizard tools enable it when they create the IMPD DATA DSAs.

https://knowledge.broadcom.com/external/article?articleId=54088

The above document is dated and still mentions additional file structures that have been retired, e.g. oc/zoc and at/zat.

An enhancement request has been submitted covering both of these items:

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=c71a304b-a689-4894-ac1c-786c9a2b2d0d

The modified version we have started for CA Directory r14.x adds some clarity to the <dsaname>.dx files and notes which steps may be adjusted to support the split data structure of the four (4) IMPD DATA DSAs.

The same flow diagram was used. Extra notes were added for clarity and, where possible, examples of the commands that will be used to assist with direct automation of each step (or that may be pasted into an SSH session window as the dsa service ID).

Step 1, implicit in the identification/triage process, is to determine what userstore data is out-of-sync and how large a delta we have. If the DSA service has been shut down (either deliberately or via a startup issue) for more than a few days, the CA Directory process will check the date stamp in the <dsaname>.dp file and the transactions in the <dsaname>.tx file; if the gap is too large, CA Directory will refuse to start the DATA DSA and issue a warning message.
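
A quick sketch of this triage check on each host (paths follow the default $DXHOME layout; log file names vary by site configuration):

su - dsa    OR [ sudo -iu dsa ]
find $DXHOME/data \( -name "*.dp" -o -name "*.tx" \) -exec ls -larth {} \;
ls -larth $DXHOME/logs/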

Step 2, we will leverage the dxdisp <dsaname> command to generate a new time-stamp file <dsaname>.dx, that will be used to prevent unnecessary sync operations with any data older than the date stamp in this file. 

This command should be issued for every DATA DSA on the same host. This is especially true for split DATA DSAs, e.g. IMPD (CA Identity Manager’s Provisioning Directories). In our example below, to assist with this step, we use a combination of commands with a while-loop to issue the dxdisp command.

This command can be executed regardless of whether the DSA is running or shut down. If a <dsaname>.dx file already exists, any additional execution of dxdisp will add updated time-stamps to this file.

Note: The <dsaname>.dx file will be removed upon restart of the DATA DSA.

STEP 2: ISSUE DXDISP COMMAND [ Create time-stamp file for re-sync use ] ON ALL IMPD SERVERS.

su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxdisp "$LINE" ;done ; echo ; find $DXHOME -name "*.dx" -exec ls -larth {} \;


Step 3 will then ask for an updated online backup to be executed. 

In earlier releases of CA Directory, this required a telnet/ssh connection to the dxconsole of each DATA DSA, or placing a dump dxgrid-db; command in the DSA configuration files to be executed via the dxserver init all command.

In newer releases of CA Directory, we can leverage the dxserver onlinebackup <dsaname> process. 

Dumping all DATA DSAs at the same time can be a challenge using manual procedures.

Fortunately, we can automate this with a single bash shell process; and as an enhancement, we can also generate the LDIF extracts of each DATA DSA for later delta compare operations.

Note: The DATA DSA must be running (started) for the onlinebackup process to function correctly. If unsure, issue dxserver status or dxserver start all beforehand.

Retain the LDIF files from the “BAD” DATA DSA Servers for analysis.

STEP 3a-3c: ON ALL IMPD DATA DSA SERVERS - ISSUE ONLINE BACKUP PROCESS
su - dsa OR [ sudo -iu dsa ]
bash

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -w -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`


Step 4a walks through the possible copy operations from the “GOOD” to the “BAD” DATA DSA host for the <dsaname>.zdb files. The IMPD DATA DSAs will require that three (3) of the four (4) zdb files are copied, to ensure no impact to referential integrity between the DATA DSAs.

The preferred model to copy data from one remote host to another is via the compressed rsync process over SSH, as this is a rapid process for the CA Directory db / zdb files.

https://anapartner.com/2020/05/03/wan-latency-rsync-versus-scp/

Below are code blocks that demonstrate examples of how to copy data from one DSA server to another.

# RSYNC METHOD
sudo -iu dsa

time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data

# SCP METHOD   
sudo -iu dsa

scp   REMOTE_ID@$HOST:./data/<folder_impd_data_dsa_name>/*.zdb   /tmp/dsa_data
/usr/bin/mv  /tmp/dsa_data/<incorrect_dsaname>.zdb   $DXHOME/data/<folder_impd_data_dsa_name>/<correct_dsaname>.db


Step 4b walks through the final steps before restarting the “BAD” DATA DSA.

The ONLY files that should be in the data folders are <dsaname>.db (binary data file) and <dsaname>.dx (ASCII time-stamp file). Ensure that the copied <prior-hostname-dsaname>.zdb file has been renamed to the correct hostname & extension for <dsaname>.db

Remove the prior <dsaname>.dp file (ASCII time-stamp file) { the DATA DSA will auto-replace this file with the *.dx file contents } and the <dsaname>.tx file (binary transaction file).
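
A minimal sketch of these Step 4b file operations on the “BAD” host; the folder and DSA names are placeholders, as in the copy examples above:

su - dsa    OR [ sudo -iu dsa ]
cd $DXHOME/data/<folder_impd_data_dsa_name>
/usr/bin/mv <prior-hostname-dsaname>.zdb <correct_dsaname>.db
rm -f <correct_dsaname>.dp <correct_dsaname>.tx
ls -larth     # expect only <dsaname>.db and <dsaname>.dx to remain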

Step 5a Start the DATA DSA with the command:

dxserver start all

If there is any issue with a DATA or ROUTER DSA not starting, then issue the same command with the debug switch (-d):

dxserver -d start <dsaname>

Use the output from the above debug process to address any a) syntax challenges, or b) older PID/LCK files ($DXHOME/pid)
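
A short sketch to review (and, only if confirmed stale, remove) old PID files before a restart; the PID file name shown is illustrative:

ls -larth $DXHOME/pid
ps -ef | grep dxserver     # confirm the DSA process is not running before removing its PID file
# rm -f $DXHOME/pid/<dsaname>.pid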

Step 5b Finally, use dxsearch/ldapsearch to run a unit test of authentication with the primary service ID. Use other unit/use-case tests as needed to confirm that the data is now synced.

bash
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s base -b 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' '(objectClass=*)' | perl -p00e 's/\r?\n //g'


LDIF Recovery Processes

The steps above are for recovery via a 100% replacement method, where the assumption is that the “bad” DSA server does NOT have any data worth keeping or reviewing.

We also wish to clarify a process/methodology for when the “peer” multi-write DSAs may be out-of-sync but we are not sure which one is truly the “good DSA” to select, or when we wish to merge data from multiple DSAs before we declare one to be the “good DSA” (with regard to the completeness of data).

We can chain CA Directory commands together to automate snapshots and exports to LDIF files. These LDIF files can then be compared against the exports of their peer MW DATA DSAs, or even against themselves at different snapshot export times. As long as we have the LDIF exports, we can recover from any DAR scenario.

Example of using CA Directory dxserver and dxdumpdb commands (STEP 3) with the ldifdelta and dxmodify commands.

The output from ldifdelta may be imported via dxmodify to any remote peer MW DATA DSA server (addressed by hostname), to force a sync of the few objects that may be out-of-sync, e.g. password hashes.

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`

ldifdelta -x -S ca-prov-srv-01-impd-co  /tmp/20200819_122820_1597858100_ca-prov-srv-01-impd-co.ldif   /tmp/20200819_123108_1597858268_ca-prov-srv-01-impd-co.ldif  |  perl -p00e 's/\r?\n //g'  >   /tmp/delta_file_ca-prov-srv-01-impd-co.ldif   ; cat /tmp/delta_file_ca-prov-srv-01-impd-co.ldif

echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
dxmodify -v -c -h`hostname` -p 20391  -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f /tmp/delta_file_ca-prov-srv-01-impd-co.ldif


The below images demonstrate a delta that exists between two (2) time snapshots. The CA Directory tool ldifdelta can identify and extract the modified entry for the user object.

The following examples show how to re-import this delta using the dxmodify command to the DATA DSA, with no other modifications required to the input LDIF file.

In the testing example below, before any update to an object, let’s capture a snapshot-in-time and the LDIF files for each DATA DSA.

Let’s make an update to a user object using any tool we wish, or a command line process such as ldapmodify.

Next, let’s capture a new snapshot-in-time after the update, so we will be able to utilize the ldifdelta tool.

We can use the ldifdelta tool to create the delta LDIF input file. After we review this file and accept the changes, we can then submit this LDIF file to the remote peer MW DATA DSAs that are out-of-sync.

We hope this has value for you and for any challenges you may have in your environment.

Be safe and automate your backups for CA Directory Data DSAs to LDIF

The CA Directory solution provides a mechanism to automate daily on-line backups, via one simple parameter:

dump dxgrid-db period 0 86400;

The first number is the offset from GMT/UTC (in seconds) and the second number is how often to run the backup (in seconds), e.g. once a day = 86400 sec = 24 hr x 60 min/hr x 60 sec/min.
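
A small worked example of these two values, assuming a site that wants the backup to start at 01:30 local time in a UTC-6 time zone (as used in the crontab notes later):

# 01:30 local (UTC-6)  =>  07:30 UTC  =>  (7 x 3600) + (30 x 60) = 27000 seconds
echo $(( (7*3600) + (30*60) ))     # 27000
# dump dxgrid-db period 27000 86400;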

Two Gaps/Challenges:

History: The automated backup process will overwrite the existing offline file(s) (*.zdb) for the Data DSA. Any requirement or need to perform an RCA is lost due to this fact. What was the data like 10 days ago? With the current process, only the CA Directory or IM logs would be of assistance.

Size: The automated backup will create an offline file (*.zdb) footprint of the same size as the data (*.db) file. If your Data DSA (*.db) is 10 GB, then your offline (*.zdb) will be 10 GB. The Identity Provisioning User Store has four (4) Data DSAs, which multiplies this number, e.g. four (4) db files + four (4) offline zdb files at 10 GB each will require a minimum of 80 GB of free disk space. If we attempt to retain a history of these files for fourteen (14) days, this would be four (4) db + fourteen (14) zdb = eighteen (18) x 10 GB = 180 GB of disk space required.

Resolutions:

Leverage the CA Directory tool dxdumpdb to convert the binary data (*.db/*.zdb) to LDIF, and use the OS crontab for the ‘dsa’ account to automate a post ‘online backup’ export and conversion process.

Step 1: Validate that the ‘dsa’ user ID has access to crontab (to avoid using root for this effort): cat /etc/cron.allow

If access is missing, append the ‘dsa’ user ID to this file.
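
A one-line sketch (run as root) to append the ‘dsa’ user ID only if it is missing:

grep -q '^dsa$' /etc/cron.allow || echo 'dsa' >> /etc/cron.allow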

Step 2: Validate that the online backup process has been scheduled for your Data DSAs. Use a find command to identify the offline files (*.zdb). Note the size of the offline Data DSA files (*.zdb).
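
For example, reusing the same find pattern shown in the earlier recovery steps:

su - dsa    OR [ sudo -iu dsa ]
find $DXHOME -name "*.zdb" -exec ls -larth {} \;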

Step 3: Identify the online backup process start time, as defined in the Data DSA settings DXC file (or perhaps DXI file). Convert this GMT offset time to the local time on the CA Directory server. (See references to assist.)
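
A quick way to locate the configured period, assuming the default settings location referenced in the crontab notes below:

grep -r -i "dump dxgrid-db" $DXHOME/config/settings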

Step 4: Use crontab -e as the ‘dsa’ user ID to create a new entry (you may use crontab -l to view any existing entries). Use the dxdumpdb -z switch with the DSA_NAME to create the exported LDIF file. Redirect this output to gzip to automatically bypass any need for temporary files. Note: crontab has limited variable expansion, and any % characters must be escaped.

Example of the crontab for ‘dsa’ that runs at 2 AM CST, 30 minutes after the online backup process is scheduled (at 1:30 AM CST).

# Goal:  Export and compress the daily DSA offline backup to ldif.gz at 2 AM every day
# - Ensure this crontab runs AFTER the daily automated backup (zdb) of the CA Directory Data DSAs
# - Review these two (2) tokens for DATA DSAs:  ($DXHOME/config/settings/impd.dxc  or ./impd_backup.dxc)
#   a)   Location:  set dxgrid-backup-location = "/opt/CA/Directory/dxserver/backup/";
#   b)   Online Backup Period:   dump dxgrid-db period 0 86400;
#
# Note1: The 'N' start time of the 'dump dxgrid-db period N M' is the offset in seconds from midnight of UTC
#   For 24 hr clock, 0130 (AM) CST calculate the following in UTC/GMT =>  0130 CST + 6 hours = 0730 UTC
#   Due to the six (6) hour difference between CST and UTC TZ:  7.5 * 3600 = 27000 seconds
# Example(s):
#   dump dxgrid-db period 19800 86400;   [Once a day at 2330 CST]
#   dump dxgrid-db period 27000 86400;   [Once a day at 0130 CST]
#
# Note2:  Alternatively, may force an online backup using this line:
#               dump dxgrid-db;
#        & issuing this command:  dxserver init all
#
#####################################################################
#        1      2         3       4       5        6
#       min     hr      d-o-m   month   d-o-w   command(s)
#####################################################################
#####
#####  Testing Backup Every Five (5) Minutes ####
#*/5 * * * *  . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-main" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
#####
#####  Backup daily at 2 AM CST  -  30 minutes after the online backup at 1:30 AM CST #####
#####
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-main"   | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main"   | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-co"     | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-co"     | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-inc"    | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-inc"    | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-notify" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-notify" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz

Example of the above lines placed in a bash shell script instead of being called directly via crontab. Note: variables may be used, and there is no need to escape the date % characters.

# DSA=main &&   dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=co &&     dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=inc &&    dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=notify && dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
#

Example of the output:

Monitor with tail -f /var/log/cron (or syslog, depending on your OS version) when the crontab is executed for your ‘dsa’ account.

View the output folder for the newly created gzip LDIF files. The files may be extracted back to LDIF format via gzip -d file.ldif.gz. Compare these file sizes with the original (*.zdb) files of 2 GB.

Recommendation(s):

Implement a similar process and retain this data for fourteen (14) days, to assist with any RCA or similar analysis that may be needed for historical data. Avoid copying the (*.db or *.zdb) files for backup, unless using this process to force a clean sync between peer MW Data DSAs.

The Data DSAs may be reloaded (dxloaddb) from these LDIF snapshots; the LDIF files do not have the same file-size impact as the binary db files, and as LDIF files they may be quickly searched for prior data using standard tools such as grep “text string” filename.ldif.

This process will assist in site preparation for a DAR (disaster and recovery) scenario. Protect your data.

References:

dxdumpdb

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/administrating/tools-to-manage-ca-directory/dxtools/dxdumpdb-tool-export-data-from-a-datastore-to-an-ldif-file.html

dump dxgrid-db

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/reference/commands-reference/dump-dxgrid-db-command-take-a-consistent-snapshot-copy-of-a-datastore.html

If you wish to learn more or need assistance, contact us.

API Gateway and Docker Lab

While assisting a site with their upgrade process from CA API Gateway 9.2 (docker) to the latest CA API Gateway 9.4 image, we needed to clarify the steps. In this blog entry, we have captured our validation process for the documented and undocumented features of the API Gateway docker deployment ( https://hub.docker.com/r/caapim/gateway/ ), with pedantic, verbose steps to assist with training of staff resources; and we have enhanced the external checks for a DAR (disaster and recovery) scenario using the docker & docker-compose tools.

Please use this lab to jump-start your knowledge of the tools ‘docker’, ‘docker-compose’, and the API Gateway. We have added many checks and use of the bash shell to view the contents of the API Gateway containers. If you have additional notes/tips, please leave a comment.

To lower business risk during this exercise, we made the following decisions:

1) Avoid use of default naming conventions, to prevent accidental deletion of the supporting MySQL database for CA API Gateway. The default ‘docker-compose.yml’ was renamed as appropriate for each API Gateway version.

2) Instead of using different folders to host configuration files, we defined project names as part of the startup process for docker-compose (see the sketch after this list).

3) Any docker container updates would reference the BASH shell directly instead of a soft-link, to avoid different behaviors between the API GW container and the MySQL container.
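
A minimal sketch of decisions 1 and 2, using the compose files and project names from this lab:

docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml up -d
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml up -d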

Challenges:

Challenge #1: Both the API Gateway 9.2 and 9.4 docker containers have defects with regard to using the standardized ‘docker stop/start containerID’ process. API Gateway 9.2 would not restart cleanly, and the API Gateway 9.4 container would not update the embedded health check process, e.g. docker ps -a OR docker inspect containerID.

Resolution #1: Both challenges were addressed in the enclosed testing scripts: docker-compose is used exclusively for the API Gateway 9.2 container, and an internal file is touched in the API Gateway 9.4 container.

Challenge #2: The docker parameters between API Gateway 9.2 and API Gateway 9.4 had changed.

Resolution #2: Identify the missing parameters with ‘docker logs containerID’ and a review of the embedded deployment script ‘entrypoint.sh’.
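
A short sketch of that review; the entrypoint path shown is the one referenced for the 9.2 image in the script comments below and may differ per image version:

docker logs ssg92 2>&1 | tail -n 50
docker logs ssg94 2>&1 | tail -n 50
docker exec -it ssg92 /bin/bash -c "cat /opt/docker/entrypoint.sh"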

Infrastructure: Seven (7) files were used for this lab on CentOS 7.x (/opt/docker/api)

  1. ssg_license.xml (required from Broadcom/CA Sales Team – ask for 90 day trial if a current one is not available)
  2. docker-compose-ssg94.yml (the primary install configuration file for API GW 9.4)
  3. docker-compose-ssg92.yml (the primary install configuration file for API GW 9.2)
  4. docker-compose-ssg94-join-db.yml (the restart configuration file – use as needed)
  5. docker-compose-ssg92-join-db.yml (the restart configuration file – use as needed)
  6. 01_create_both_ssg92_and_ssg94_docker_deployments.sh (The installation of ‘docker’ and ‘docker-compose’ with the deployment of API GW 9.2 [with MySQL 5.5] and API GW 9.4 [with MySQL 5.7] ; with some additional updates)
  7. 02_backup_and_migrate_mysql_ssg_data_from_ssg92_to_ssg94_db.sh (The export/import process from API GW 9.2 to API GW 9.4 and some additional checks)

Example of the seven (7) lab files’ contents:

  1. ssg_license.xml ( a view of the header only )
<?xml version="1.0" encoding="UTF-8"?>
<license Id="5774266080443298199" xmlns="http://l7tech.com/license">
    <description>LIC-PRODUCTION</description>
    <licenseAttributes/>
    <valid>2018-12-10T19:32:31.000Z</valid>
    <expires>2019-12-11T19:32:31.000Z</expires>
    <host name=""/>
    <ip address=""/>
    <product name="Layer 7 SecureSpan Suite">
        <version major="9" minor=""/>
        <featureset name="set:Profile:EnterpriseGateway"/>
    </product>

2. docker-compose-ssg94.yml

version: "2.2"
services:
    ssg94:
      container_name: ssg94
      image: caapim/gateway:latest
      mem_limit: 4g
      volumes:
         - /opt/docker/api/ssg_license.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
      expose:
      - "8777"
      - "2142"
      ports:
        - "8443:8443"
        - "9443:9443"
      environment:
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql57"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql57:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false -Dcom.l7tech.server.disableFileLogsinks=false "
      links:
        - mysql57
    mysql57:
      container_name: ssg94_mysql57
      image: mysql:5.7
      restart: always
      mem_limit: 2g
      ports:
       - "3306:3306"
      environment:
         - MYSQL_ROOT_PASSWORD=7layer
         - MYSQL_USER=gateway
         - MYSQL_PASSWORD=7layer
         - MYSQL_DATABASE=ssg

3. docker-compose-ssg92.yml

version: "2.2"
services:
    ssg92:
      container_name: ssg92
      image: caapim/gateway:9.2.00-9087_CR10
      mem_limit: 4g
      expose:
      - "8778"
      - "2143"
      ports:
        - "8444:8443"
        - "9444:9443"
      environment:
        SKIP_CONFIG_SERVER_CHECK: "true"
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql55"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "root"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql55:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_ADMIN_USER: "pmadmin"
        SSG_ADMIN_PASS: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=true -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false "
        SSG_LICENSE: "$SSG_LICENSE_ENV"
      links:
        - mysql55
    mysql55:
      container_name: ssg92_mysql55
      image: mysql:5.5
      restart: always
      mem_limit: 2g
      ports:
      - "3307:3306"
      environment:
        - MYSQL_ROOT_PASSWORD=7layer

4. docker-compose-ssg94-join-db.yml

version: "2.2"
services:
    ssg94:
      container_name: ssg94
      image: caapim/gateway:latest
      mem_limit: 4g
      volumes:
         - /opt/docker/api/ssg_license.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
      expose:
      - "8777"
      - "2142"
      ports:
        - "8443:8443"
        - "9443:9443"
      environment:
        ACCEPT_LICENSE: "true"
        #SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_COMMAND: "join"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql57"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql57:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false -Dcom.l7tech.server.disableFileLogsinks=false "
      links:
        - mysql57
    mysql57:
      container_name: ssg94_mysql57
      image: mysql:5.7
      restart: always
      mem_limit: 2g
      ports:
       - "3306:3306"
      environment:
         - MYSQL_ROOT_PASSWORD=7layer
         - MYSQL_USER=gateway
         - MYSQL_PASSWORD=7layer
         - MYSQL_DATABASE=ssg

5. docker-compose-ssg92-join-db.yml

version: "2.2"
services:
    ssg92:
      container_name: ssg92
      image: caapim/gateway:9.2.00-9087_CR10
      mem_limit: 4g
      expose:
      - "8778"
      - "2143"
      ports:
        - "8444:8443"
        - "9444:9443"
      environment:
        SKIP_CONFIG_SERVER_CHECK: "true"
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "join"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql55"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "root"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql55:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_ADMIN_USER: "pmadmin"
        SSG_ADMIN_PASS: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=true -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false "
        SSG_LICENSE: "$SSG_LICENSE_ENV"
      links:
        - mysql55
    mysql55:
      container_name: ssg92_mysql55
      image: mysql:5.5
      restart: always
      mem_limit: 2g
      ports:
      - "3307:3306"
      environment:
        - MYSQL_ROOT_PASSWORD=7layer

6. 01_create_both_ssg92_and_ssg94_docker_deployments.sh

#!/bin/bash
##################################################################
#
# Script to validate upgrade process from CA API GW 9.2 to 9.4 with docker
#  - Avoid using default of 'docker-compose.yml'
#  - Define different project names for API GW 9.2 and 9.4 to avoid conflict
#  - Explicitly use bash shell  /bin/bash  instead of soft-link
#
# 1. Use docker with docker-compose to download & start
#      CA API GW 9.4 (with MySQL 5.7) &
#      CA API GW 9.2 (with MySQL 5.5)
#
# 2. Configure CA API GW 9.4 with TCP 8443/9443
#              CA API GW 9.2 with TCP 8444/9444 (redirect to 8443/9443)
#
# 3. Configure MySQL 5.7 to be externally exposed on TCP 3306
#              MySQL 5.5 to be externally exposed on TCP 3307
#  - Adjust 'grant' token on MySQL configuration file for root account
#
# 4. Validate authentication credentials to the above web services with curl
#
#
# 5. Add network modules via yum to API GW 9.4 container
#   - To assist with troubleshooting / debug exercises
#
# 6. Enable system to use API GW GUI to perform final validation
#   - Appears to be an issue using browsers to access the API GW UI TCP 8443/8444
#
#
# Alan Baugher, ANA, 10/19
#
##################################################################


echo ""
echo ""
echo "################################"
echo "Install docker and docker-compose via yum if missing"
echo "Watch for message:  Nothing to do "
echo ""
echo "yum -y install docker docker-compose "
yum -y install docker docker-compose
echo "################################"
echo ""


echo "################################"
echo "Shut down any prior docker container running for API GW 9.2 and 9.4"
cd /opt/docker/api
pwd
echo "Issue this command if script fails:  docker stop \$(docker ps -a -q)  && docker rm \$(docker ps -a -q)   "
echo "################################"
echo ""


echo "################################"
export SSG_LICENSE_ENV=$(cat ./ssg_license.xml | gzip | base64 --wrap=0)
echo "Execute  'docker-compose down'  to ensure no prior data or containers for API GW 9.4"
docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml down
echo "################################"
echo "Execute  'docker-compose down'  to ensure no prior data or containers for API GW 9.2"
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml down
echo "################################"
echo ""


echo "################################"
echo "Execute  'docker ps -a'   to validate no running docker containers for API GW 9.2 nor 9.4"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "################################"
echo ""


echo "################################"
echo "Change folder to execute docker-compose script for API GW 9.4 with MySql 5.7 with TCP 8443/9443"
echo "Execute  'docker-compose up -d'  to start docker containers for API GW 9.4 with MySql 5.7 with TCP 8443/9443"
docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml up -d
echo "################################"
echo "Change folder to execute docker-compose script for API GW 9.2 with MySql 5.5 with TCP 8444/9444"
echo "Execute  'docker-compose up -d'  to start docker containers for API GW 9.2 with MySql 5.5 with TCP 8444/9444"
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml up -d
echo "################################"
echo ""


echo "################################"
echo "Backup current API GW 9.4 running container for future analysis"
echo "docker export ssg94 > ssg94.export.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar "
docker export ssg94 > ssg94.export.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar
echo "################################"
echo ""


echo "################################"
echo "Update API GW 9.4 running container with additional supporting tools with yum"
echo "docker exec -it -u root -e TERM=xterm ssg94 /bin/sh -c \"yum install -y -q net-tools iproute unzip vi --nogpgcheck\" "
docker exec -it -u root -e TERM=xterm ssg94 /bin/sh -c "yum install -y -q net-tools iproute unzip vi --nogpgcheck "
echo "Export API GW 9.4 running container after supporting tools are added"
echo "docker export ssg94 > ssg94.export.tools.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar "
docker export ssg94 > ssg94.export.tools.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar
echo "################################"
echo ""


echo "################################"
echo "Validate network ports are exposed for API GW Manager UI "
netstat -anpeW | grep -e docker -e "Local" | grep -e "tcp" -e "Local"
echo "################################"
echo ""

echo "################################"
echo "Sleep 70 seconds for both API GW to be ready"
echo "################################"
sleep 70
echo ""


echo ""
echo "################################"
echo "Extra:  Open TCP 3306 for mysql remote access "
docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "echo -e '\0041includedir /etc/mysql/conf.d/\n\0041includedir /etc/mysql/mysql.conf.d/\n[mysqld]\nskip-grant-tables' > /etc/mysql/mysql.cnf && cat /etc/mysql/mysql.cnf "
#docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "/etc/init.d/mysql restart"
#docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "/etc/init.d/mysql status && echo -n"
echo "################################"
docker restart ssg94_mysql57
echo ""



echo "################################"
echo "Execute  'docker ps -a'   to validate running docker containers for API GW 9.2 and 9.4 with their correct ports"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.2 TCP 8444 - should see six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.4 TCP 8443 - should see six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""


echo "################################"
echo "Next Steps:"
echo "       Open the API GW UI for 9.2 and create a new entry in the lower left panel"
echo ""
echo "Example: "
echo "       Right click on hostname entry and select 'Publish RESTful Service Proxy with WADL' "
echo "       Select Manual Entry, then click Next"
echo "       Enter data for two (2) fields:"
echo "                  Service Name:  Alan "
echo "                  Resource Base URL:  http://www.anapartner.com/alan "
echo "       Then select Finish Button "
echo "################################"
echo ""

7. 02_backup_and_migrate_mysql_ssg_data_from_ssg92_to_ssg94_db.sh

#!/bin/bash
#######################################################################
#
# Script to validate upgrade process from CA API 9.2 to 9.4 with docker
#  - Avoid using default of 'docker-compose.yml'
#  - Define different project names for API GW 9.2 and 9.4 to avoid conflict
#  - Explicitly use bash shell  /bin/bash  instead of soft-link /bin/sh
#
# 1. Stop docker containers for CA API GW 9.2 & 9.4 (leave mysql containers running)
#    - To prevent any updates to mysql db during migration
#
# 2. Use mysqldump command to export CA API 9.2 MySQL 5.5 ssg database with stored procedures (aka routines)
#   - Review excluding the audit tables to avoid carrying over excessive data
#
# 3. Use mysql command to import sql file to CA API 9.4 MySQL 5.7 ssg database
#   - Review if dropping / recreate the ssg database will avoid any install issues
#   - Keep an eye on the table cluster_info {as this has the Gateway1 definition with the host IP address}
#
# 4. Restart CA API GW 9.2 & 9.4 containers
#
#    - Challenge 1: CA API GW 9.2 docker image has issue with docker stop/start process
#    the reference /root/entrypoint.sh will loop with creation of a license folder
#    - Addressed with custom docker-compose file to recreate image to join existing MySQL 5.5 container
#
#    - Challenge 2: CA API GW 9.4 docker image has issue with docker stop/start process
#    The new healthcheck.sh process calls a base.sh script that compares the date-time stamps of two files;
#    the date stamp for one file is not updated correctly upon the docker start process.
#    - Addressed with custom docker bash script to "touch" the primary file to allow date stamp to be updated.  Validate with: docker logs ssg94
#      WARNING 1      com.l7tech.server.boot.GatewayBoot: Unable to touch /opt/SecureSpan/Gateway/node/default/var/started:
#                  /opt/SecureSpan/Gateway/node/default/var/started (Permission denied)
#
#    - Challenge 3: CA API GW 9.4 docker image appears to have similar issue for hazelcast startup
#    The container may hold for 300 seconds due to hazelcast configuration not completing correctly
#     SEVERE  1      com.hazelcast.instance.Node: [172.17.0.3]:8777 [gateway] [3.10.2] Could not join cluster in 300000 ms. Shutting down now!
#     Unable to start the server: Error starting server : Lifecycle error: Could not initialize Hazelcast cluster
#     WARNING 107    com.hazelcast.cluster.impl.TcpIpJoiner: [172.17.0.3]:8777 [gateway] [3.10.2] Error during join check!
#    - Addressed with different project names to avoid conflict between the API GW 9.2 broadcast and API GW 9.4
#
#    - Challenge 4: CA API GW 9.2 appears to be stuck in a while loop for /opt/docker/entrypoint.sh
#      apim-provisioning: INFO: waiting for the shutdown file at "/userdata/shutdown" to be created
#    - Addressed:  Does not seem to have impact with current testing.  Ignore.  Validate with:  docker logs ssg92
#
# 5. Important Note: Ensure that the SSG_CLUSTER_HOST and SSG_CLUSTER_PASSWORD values for CA API GW 9.4 docker-compose file
#    match those set in the configured MySQL database.
#    After the CA API GW 9.4 container connects to the existing Gateway database, the Container Gateway will automatically upgrade
#    the ssg database if the ssg database version is lower than the version of the Container Gateway.
#    - Ensure the jdbc hostname
#
#    - Ref:  https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/9-4/other-gateway-form-factors/using-the-container-gateway/getting-started-with-the-container-gateway/connect-the-container-gateway-to-an-existing-mysql-database.html
#
#
# Alan Baugher, ANA, 10/19
#
#######################################################################

echo ""
echo "####################################"
echo "Early check: Address a file permission issue with the API GW 9.4 container"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c 'chmod 666 /opt/SecureSpan/Gateway/node/default/var/started' "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "chmod 666 /opt/SecureSpan/Gateway/node/default/var/started"
echo "May validate issue with:  docker logs ssg94 "
echo "####################################"


echo ""
echo "####################################"
echo "Temporarily shutdown the API GW containers for 9.2 and 9.4 to avoid any updates to the mysql db during export & import"
echo "docker stop ssg92 ssg94 "
docker stop ssg92 ssg94
echo "####################################"
echo ""


echo "####################################"
echo "Validate the API GW containers are down and the MySQL db containers are up and working"
echo "Pause for 5 seconds:  sleep 5"
sleep 5
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.2 MySQL 5.5 ssg db with stored procedures (aka routines)"
echo "time docker exec -i `docker ps -a | grep mysql:5.5 | awk '{print $1}'` mysqldump -u root --password=7layer ssg  --routines >  ssg92.backup_with_routines.sql  2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.5 | awk '{print $1}'` mysqldump -u root --password=7layer ssg  --routines >  ssg92.backup_with_routines.sql  2> /dev/null
echo "View the size of the MySQL 5.5 ssg db for API GW 9.2"
ls -lart | grep ssg92.backup_with_routines.sql
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.4 MySQL 5.7 ssg db with stored procedures (aka routines) as a 'before' reference file"
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.before.backup_with_routines.sql  2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.before.backup_with_routines.sql  2> /dev/null
echo "View the size of the MySQL 5.7 ssg db for API GW 9.4 as the 'before' reference file"
ls -lart | grep ssg94.before.backup_with_routines.sql
echo "####################################"
echo ""


echo "####################################"
echo "Import the MySQL 5.5 ssg db with stored procedures (aka routines) into MySQL 5.7 for API GW 9.4"
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysql -u root --password=7layer ssg    < ssg92.backup_with_routines.sql 2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysql -u root --password=7layer ssg    < ssg92.backup_with_routines.sql 2> /dev/null
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.4 MySQL 5.7 ssg db with stored procedures (aka routines) as an 'after' import reference file"
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.after.backup_with_routines.sql 2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.after.backup_with_routines.sql 2> /dev/null
echo "View the size of the MySQL 5.7 ssg db for API GW 9.4 as the 'after' reference file"
ls -lart | grep ssg94.after.backup_with_routines.sql
echo "####################################"
echo ""


echo "####################################"
echo "Restart the API GW containers for 9.2 and 9.4 "
# Note: Restart of the ssg94 container will 'auto' upgrade the ssg database to 9.4 tags
echo "docker restart ssg94 "
docker restart ssg94
#docker rm ssg94
#docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94-join-db.yml up -d
echo "####################################"
# Note:  API GW 9.2 docker image was not designed for stop/start correctly; rm then redeploy
export SSG_LICENSE_ENV=$(cat ssg_license.xml | gzip | base64 --wrap=0)
echo "Remove the API GW 9.2 container via:  docker rm ssg92"
docker rm ssg92
echo "Redeploy the API GW 9.2 container "
echo "docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92-join-db.yml up -d "
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92-join-db.yml up -d
echo "####################################"
echo ""



echo "####################################"
echo "Validate the API GW containers are up and the mysql db containers are working"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.4 MySQL 5.7 ssg db after import & after the 'auto' upgrade as an 'after' auto upgrade reference file"
docker stop ssg94
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.auto.after.backup_with_routines.sql 2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.auto.after.backup_with_routines.sql 2> /dev/null
echo "View all the exported MySQL files to compare process flow"
ls -lart ssg*.sql
docker start ssg94
echo "View the auto upgrade from version 9.2 to version 9.4 with a delta compare of the exported sql files"
echo "diff ssg94.after.backup_with_routines.sql  ssg94.before.backup_with_routines.sql  | grep -i \"INSERT INTO .ssg_version.\" "
diff ssg94.after.backup_with_routines.sql  ssg94.before.backup_with_routines.sql  | grep -i "INSERT INTO .ssg_version."
echo "####################################"
echo ""


echo "####################################"
echo "Execute  'docker ps -a'   to validate running docker containers for API GW 9.4 and 9.2"
echo "docker ps --format \"table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}\" "
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "####################################"
echo "Show current API GW 9.4 MySQL 5.7 databases"
echo "Validate that 'ssg' database exists "
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  -e \"show databases;\" "
docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  -e "show databases;"
echo "####################################"
echo ""


echo "####################################"
echo "Review for any delta of the MySQL ssg database after import"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.5 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e \"show tables;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.5 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e "show tables;" > ssg92.tables.txt
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e \"show tables;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e "show tables;" > ssg94.tables.txt
echo "Observe any delta from the below command"
echo "diff ssg92.tables.txt ssg94.tables.txt"
diff ssg92.tables.txt ssg94.tables.txt
echo "####################################"
echo ""


echo "####################################"
echo "Show current API GW 9.4 admin user in the MySQL 5.7 ssg database"
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e \"SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;\" "
docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"
echo "####################################"
echo ""


echo "####################################"
echo "Show current API GW 9.4 admin user in the intermediate configuration file on the API GW 9.4 container"
echo "docker exec -it -u root -e TERM=xterm ssg94 /bin/bash -c \"grep -i -e l7.login -e l7.password /opt/SecureSpan/Gateway/node/default/etc/bootstrap/bundle/001_update_admin_user.xml.req.bundle\" "
docker exec -it -u root -e TERM=xterm ssg94 /bin/bash -c "grep -i -e l7.login -e l7.password /opt/SecureSpan/Gateway/node/default/etc/bootstrap/bundle/001_update_admin_user.xml.req.bundle"
echo "####################################"
echo ""


echo "####################################"
echo "Show all 'new' files created or linked in API GW 9.4 container with mtime of 1 day. Excluding lock (LCK) files"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"find /opt -type f -mtime -1 -ls | grep -i -v -e '.LCK'\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "find /opt -type f -mtime -1 -ls | grep -i -v -e '.LCK'"
echo "####################################"
echo ""


echo "####################################"
echo " View the license.xml file that was copied to the API GW 9.4 container bootstrap folder before being copied to the MySQL 5.7 ssg db table "
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"ls -lart  /opt/SecureSpan/Gateway/node/default/etc/bootstrap/license \" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "ls -lart  /opt/SecureSpan/Gateway/node/default/etc/bootstrap/license "
echo "####################################"
echo ""


echo "####################################"
echo "View logon count for the API GW 9.4 admin user via MySQL 5.7 ssg db"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=gateway --password=7layer ssg -e \"select hex(goid), version, hex(provider_goid), login, fail_count, last_attempted, last_activity, state from logon_info;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=gateway --password=7layer ssg -e "select hex(goid), version, hex(provider_goid), login, fail_count, last_attempted, last_activity, state from logon_info;"
echo "####################################"
echo ""


echo "####################################"
echo "View the API GW 9.4 MySQL 5.7 mysql.user table"
### docker logs `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  2>&1 | grep  "GENERATED ROOT PASSWORD"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e \"SELECT User,account_locked,password_expired,password_last_changed,authentication_string FROM mysql.user;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e "SELECT User,account_locked,password_expired,password_last_changed,authentication_string FROM mysql.user;"
echo "####################################"
echo ""


echo "####################################"
echo "To remove any locked account (including pmadmin SSG Admin User ID) from the MySQL 5.7 ssg logon_info table  {or any account}"
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e \"delete from logon_info where login ='pmadmin';\" "
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e \"truncate logon_info;\"  "
echo "####################################"
echo ""


echo "####################################"
echo "To change root password for MySQL 5.7 mysql.user db"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=OLD_PASSWORD -e  \"SET PASSWORD FOR 'root'@'localhost' = PASSWORD('7layer');\" "
echo "####################################"
echo ""


echo "####################################"
echo "Sleep 30 seconds to address restart health check time-stamp issue with API GW 9.4"
sleep 30
echo "####################################"
echo ""


echo "####################################"
echo "Address API GW 9.4 container health check upon stop/start or restart gap.  (base.sh script)"
echo "docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"date +%s -r /opt/SecureSpan/Gateway/node/default/var/started  && date +%s -r /opt/SecureSpan/Gateway/node/default/var/preboot\" "
docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "date +%s -r /opt/SecureSpan/Gateway/node/default/var/started  && date +%s -r /opt/SecureSpan/Gateway/node/default/var/preboot"
echo "Touch to update date-time stamp for one file: /opt/SecureSpan/Gateway/node/default/var/started"
echo "docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"touch /opt/SecureSpan/Gateway/node/default/var/started\" "
docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "touch /opt/SecureSpan/Gateway/node/default/var/started"
docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "date +%s -r /opt/SecureSpan/Gateway/node/default/var/started  && date +%s -r /opt/SecureSpan/Gateway/node/default/var/preboot"
echo "####################################"
echo ""


echo "####################################"
echo "Sleep 30 seconds to allow health check status to update for API GW 9.4"
echo "May also monitor health and overall status with:   docker inspect ssg94 "
sleep 30
echo "####################################"
echo ""


echo "####################################"
echo "Execute  'docker ps -a'   to validate running docker containers for API GW 9.4 and 9.2"
echo "docker ps --format \"table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}\" "
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.2 TCP 8444 - should see minimal of six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.4 TCP 8443 - should see minimal of six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""

View of the API Gateway via the MS Windows API GW UI for both API GW 9.2 (using the 9.3 UI) and API GW 9.4 (using the 9.4 UI). The API GW policies will be migrated from API GW 9.2 to API GW 9.4 via an export/import of the MySQL ssg database. After the import, the API GW 9.4 docker image will ‘auto’ upgrade the ssg database to the 9.4 version.
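
A minimal, hedged sketch of that export/import flow is shown below. The mysql:5.5 grep pattern used to locate the API GW 9.2 database container, the /tmp dump path, and the gateway/7layer credentials are assumptions to adjust for your environment; the mysql:5.7 and caapim/gateway:latest patterns match the commands shown above.

# Export the ssg db from the 9.2 MySQL container and import it into the 9.4 MySQL container
SRC_DB=`docker ps -a | grep mysql:5.5 | awk '{print $1}'`
DST_DB=`docker ps -a | grep mysql:5.7 | awk '{print $1}'`
docker exec -i $SRC_DB mysqldump --user=gateway --password=7layer ssg > /tmp/ssg_92.sql
docker exec -i $DST_DB mysql     --user=gateway --password=7layer ssg < /tmp/ssg_92.sql
# Restart the API GW 9.4 container so it 'auto' upgrades the imported ssg schema
docker restart `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'`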

An interesting view of the API GW 9.4 MySQL database ‘ssg’ after the import and a restart (which will ‘auto’ upgrade the ssg database version). Note the multiple Gateway “nodes” that appear after each ‘docker restart containerID’.
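
To view those registered nodes directly, a hedged query such as the one below can be used; the cluster_info table name is an assumption about the ssg schema, so confirm it first (e.g. with "show tables").

# List the Gateway node registrations recorded in the ssg database (table name assumed)
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=gateway --password=7layer ssg -e "show tables like 'cluster_info'; select * from cluster_info\G"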

Build an eight (8) node Wildfly cluster on a single server

The following methodology was used to isolate performance challenges as the number of cluster nodes increases: contention on a common database, the JGroups/JTS/JMS communication, and the database pool values for each “instance” in the wildfly/JBoss configuration file.

Note: The individual node names are generated with a port offset of 100-800 for each of the eight (8) nodes; any hard-coded values are updated as well (via addition or multiplication).

To ensure the HornetQ and JGroups names are correctly defined for the chain cluster, a case statement updates each node’s standalone-full-ha.xml configuration file accordingly when the number of nodes changes (the node count is offered as a variable at the top of the script).

The example below also shows how to leverage the CA APM / Wily agent for each J2EE/Wildfly node.

#!/bin/bash
###############################################################################################
#
#  Goal:  Create an N-node J2EE Cluster using Wildfly 8.x.x for CA Identity Manager on a single host
#         Use for sandbox testing and validation of performance I/O parameters
#
#  Notes:  Tested for 2-8 nodes and with the CA APM (Wily) agent enabled for each node
#
#
#  Author:  A. Baugher, ANA, 8/2019
#
#
###############################################################################################
#set -vx
tz=`/bin/date --utc +%Y%m%d%H%M%S.3%N.0Z`
MAX=5
counter=1
JBOSS_HOME=/opt/CA/wildfly-idm


echo "######  STEP 00:  Stop all prior work with cluster testing ######"  > /dev/null 2>&1
kill -9 `ps -ef | grep java | grep -v grep | grep UseString | awk '{print $2}'`

echo "######  STEP 01:  Copy the current IME (Wildfly) folder to a new folder & with new port offset ######"
echo "Create this many cluster nodes:  $MAX"
echo "Current TimeStamp:  $tz"
echo ""
while [ $counter -le $MAX ]
do
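  # Derived per-node values: n = zero-padded node suffix (01..0$MAX), o = socket binding
  # port offset (100 per node), nettyo = HornetQ netty connector port (5456 + offset),
  # jgrpo = JGroups TCP port (7600 + offset), cli = management CLI port (9990 + offset)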
  c=$counter
  n=$((100+counter)); n=${n#1}
  o=$((100*counter))
  nettyo=$((5456+o))
  jgrpo=$((7600+o))
  cli=$((9990+o))

 echo "Current counter is: $counter and the jboss number is:  $n  with a port offset of: $o"
 echo ""
 if [ -d $JBOSS_HOME$n ]; then
   echo "Prior directory exists for $JBOSS_HOME$n"
   kill -9 `ps -ef | grep "wildfly-idm$n" | grep -v grep | awk '{print $2}'` >   /dev/null 2>&1
   echo "Remove any running processes then sleep 5 seconds before removing directory: $JBOSS_HOME$n "
   sleep 5
   rm -rf /opt/CA/wildfly-idm$n
 fi

 cp -r -p /opt/CA/wildfly-idm /opt/CA/wildfly-idm$n
 cd $JBOSS_HOME$n/standalone
 echo "Current Folder is: `pwd`"
 ls -rt
 echo "Remove data tmp log folders for new node"
 rm -rf data tmp log
 ls -rt
 echo ""
 echo ""


 echo "Update standalone-full-ha.xml for hardcoded port 5456 with offset $o"
 cd $JBOSS_HOME$n/standalone/configuration
 echo "Current Folder is: `pwd`"
 cp -r -p ca-standalone-full-ha.xml ca-standalone-full-ha.xml.$tz
 sed -i "s|5456|$nettyo|g"  ca-standalone-full-ha.xml
 echo "Updated Jgroup netty connector port:  $nettyo"
 grep  $nettyo ca-standalone-full-ha.xml
 echo ""
 echo ""

 echo "Update standalone.conf (wildfly.conf) & jboss-cli.xml for port offset by $o"
 cd $JBOSS_HOME$n/bin
 echo "Current Folder is: `pwd`"
 ls -lart standalone.conf
 ls -lart jboss-cli.xml
 cp -r -p ./init.d/wildfly.conf ./init.d/wildfly.conf.$tz
 cp -r -p jboss-cli.xml jboss-cli.xml.$tz
 sed -i "s|/opt/CA/wildfly-idm|/opt/CA/wildfly-idm$n|g" ./init.d/wildfly.conf
 sed -i "s|9990|$cli|g" jboss-cli.xml
 unlink standalone.conf
 ln -s $JBOSS_HOME$n/bin/init.d/wildfly.conf standalone.conf
 echo "JAVA_OPTS=\"\$JAVA_OPTS -Djboss.socket.binding.port-offset=$o\""  >> standalone.conf
 ls -lart standalone.conf
 ls -lart jboss-cli.xml
 grep "port-offset" standalone.conf
 grep "$cli" jboss-cli.xml
 echo ""
 echo ""



 echo "Update standalone.sh for node name & tcp group port"
cd $JBOSS_HOME$n/bin
pwd
cp -r -p standalone.sh   standalone.sh.$tz
ls -larth standalone.sh
sed -i "s|iamnode1|iamnode$n|g"  standalone.sh


case "$MAX" in

1)  echo "Creating JGroups for one node with port offset of $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ;;
2)  echo "Creating JGroups for two nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###################
    ;;
3)  echo "Creating JGroups for three nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###################
    ;;
4)  echo "Creating JGroups for four nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    ;;
5)  echo "Creating JGroups for five nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    ;;
6)  echo "Creating JGroups for six nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 6 ]
        then
    sed -i '684s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>4000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>4000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml

    ###########################
    ;;
7)  echo "Creating JGroups for seven nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\],caim-srv-01\[8300\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 6 ]
        then
    sed -i '684s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 7 ]
        then
    sed -i '684s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3300</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3300</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    ;;
8)  echo "Creating JGroups for eight nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\],caim-srv-01\[8300\],caim-srv-01\[8400\]|g"  $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 6 ]
        then
    sed -i '684s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 7 ]
        then
    sed -i '684s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node8|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 8 ]
        then
    sed -i '684s|node1|node8|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    ;;
esac

ls -lart $JBOSS_HOME$n/bin/standalone.sh
grep caim-srv $JBOSS_HOME$n/bin/standalone.sh
echo ""
echo "For Node: $n"
echo ""
grep node $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
echo ""
echo ""
echo ""


echo ""
echo ""
echo "Update CA APM / Wily Information / Agent for this instance"
cp -r -p /opt/CA/VirtualAppliance/custom/apm/wily_im $JBOSS_HOME$n/standalone/wily_im
chown -R wildfly:wildfly $JBOSS_HOME$n/standalone/wily_im
echo "JAVA_OPTS=\"\$JAVA_OPTS -Dcom.wily.introscope.agent.jmx.enable=true -Dcom.wily.introscope.agent.agentManager.url.1=localhost:5001 -Djboss.modules.system.pkgs=com.wily,com.wily.*,org.jboss.byteman,org.jboss.logmanager -Xbootclasspath/p:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/logging/main/jboss-logging-3.1.4.GA.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.1.0.Final.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/as/logging/main/wildfly-logging-8.2.0.Final.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.2.Final.jar\""  >> standalone.conf
echo "JAVA_OPTS=\"\$JAVA_OPTS -Dcom.wily.introscope.agent.agentName=iamnode$n  -Dcom.wily.introscope.agentProfile=$JBOSS_HOME$n/standalone/wily_im/core/config/IntroscopeAgent.profile -javaagent:$JBOSS_HOME$n/standalone/wily_im/Agent.jar    \""  >> standalone.conf
echo ""
echo ""

 counter=$(( $counter + 00001 ))
done






counter=1
while [ $counter -le $MAX ]
do
  echo "Reset ownership permissions for $JBOSS_HOME$n to wildfly userID"
  chown -R wildfly:wildfly $JBOSS_HOME$n
  echo "Start up node: $n of $MAX Wildfly cluster"
  n=$((100+counter)); n=${n#1}


  if [ "$(whoami)" != "wildfly" ]; then
       echo "Run this process under the wildfly userid to avoid permissions issue with root"
       su - wildfly -c "$JBOSS_HOME$n/bin/standalone.sh &"
       chown -R wildfly:wildfly $JBOSS_HOME$n
  else
  $JBOSS_HOME$n/bin/standalone.sh &
  fi

  counter=$(( $counter + 00001 ))
done
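
A quick hedged check after the startup loop completes: the snippet below confirms that each node's management CLI port (9990 plus its 100-per-node offset, as computed above) is listening. The ss utility is assumed to be available on the host; substitute netstat -ltn if it is not.

# Assumes MAX is still set from the build script above (or set it here)
counter=1
while [ $counter -le $MAX ]
do
  cli=$((9990 + 100*counter))
  ss -ltn 2>/dev/null | grep -q ":$cli " && echo "Node $counter management port $cli is listening" || echo "Node $counter management port $cli is NOT listening yet"
  counter=$(( counter + 1 ))
done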


Reduce log duplication: Avoid nohup.out

If you plan on starting your J2EE services manually, and wish to keep them running after you log out, a common method is to use nohup ./command.sh &.

The challenge with the above process is that it creates its own output file, nohup.out, in the folder where the command was executed.

Additionally, writing nohup.out is a second I/O operation that duplicates the content already captured in the J2EE service’s server.log file.

To avoid this second I/O operation, either redirect the nohup output to /dev/null or determine whether the J2EE service can be enabled as an RC/init.d or systemd service.
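
If the systemd route is chosen, a minimal hedged sketch follows; the unit name wildfly-idm01.service, the paths, and the ExecStop command are assumptions based on the single-node /opt/CA/wildfly-idm01 layout used below, so adjust them for your own node folders.

# Create a hypothetical systemd unit for the first Wildfly node (unit name and paths are assumptions)
sudo tee /etc/systemd/system/wildfly-idm01.service >/dev/null <<'EOF'
[Unit]
Description=Wildfly (CA Identity Manager node 01)
After=network.target

[Service]
User=wildfly
ExecStart=/opt/CA/wildfly-idm01/bin/standalone.sh
ExecStop=/opt/CA/wildfly-idm01/bin/jboss-cli.sh --connect --command=":shutdown"

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable wildfly-idm01
sudo systemctl start wildfly-idm01
# stdout/stderr now go to the journal (journalctl -u wildfly-idm01), so no nohup.out is created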

The example below updates the wildfly user’s .profile with “aliases” (implemented as bash shell functions) to start and stop the wildfly service while avoiding creation of the nohup.out file.

echo "Enable alias (or function)  to start and stop wildfly"

#Example of function - Use this to avoid double I/O for nohup process (nohup.out file)
function start_im01 () {
     echo "Starting IM 01 node with nohup process"
     cd /opt/CA/wildfly-idm01/bin/
     pwd
     nohup ./standalone.sh  >/dev/null 2>&1 &
     sleep 1
     /bin/ps -ef | grep wildfly-idm01 | grep -v grep
}
export -f start_im01

function stop_im01 () {
     echo "Stopping IM 01 node"
     echo "This may take 30-120 seconds"
     cd /opt/CA/wildfly-idm01/bin/
     pwd
     ./jboss-cli.sh --connect  --command=":shutdown"
     sleep 5
     /bin/kill -9 `/bin/ps -ef | grep wildfly-idm01 | grep -v grep | awk '{print $2}'` >/dev/null 2>&1
}
export -f stop_im01

You may now start and stop your J2EE Wildfly service with the new “aliases” start_im01 and stop_im01.

You may note that stop_im01 attempts to cleanly stop the Wildfly service via the JBoss/Wildfly management port; if that fails, it searches for and kills the associated java process. If you did “kill” a service and have startup issues, we suggest removing the $JBOSS_HOME/standalone/tmp and $JBOSS_HOME/standalone/data folders before restart.
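
For example, for the first node (a hedged one-liner; adjust the wildfly-idm01 path for the other node folders):

# Clear transient journal/deployment state for node 01 before restarting after a hard kill
rm -rf /opt/CA/wildfly-idm01/standalone/tmp /opt/CA/wildfly-idm01/standalone/data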