WAN Latency: Rsync versus SCP

We were curious which methods we could use to copy large files between sites with WAN-type latency, while restricting ourselves to the processes available on the CA Identity Suite virtual appliance / Symantec IGA solution.

Leveraging VMware Workstation’s ability to introduce network latency between images allowed us to validate a global password reset solution.

If we experience deployment challenges with native copy operations, we need to ensure we have alternatives to address any out-of-sync data.

The embedded CA Directory maintains the data tier in separate binary files, using a software router to join the data tier into a virtual directory. This allows for scalability and growth to accommodate the largest of sites.

We focused on the provisioning directory (IMPD) as our likely candidate for re-syncing.

Test Conditions:

  1. To ensure the data was copied securely, we kept the requirement for SSH sessions between two (2) different nodes of a cluster.
  2. We introduced latency with the VMware Workstation NIC on one of the nodes.
  3. The four (4) IMPD Data DSAs were resized to 2500 MB each (a size similar to what we have seen in production at many sites).
  4. We removed the data and folder structure from the receiving node, to avoid giving any checksum/restart process an unfair advantage.
  5. If a process allowed for exclusions, we took advantage of that feature.
  6. The feature/process/commands must be available on the vApp to the ‘config’ or ‘dsa’ userIDs.
  7. The reference host/node being pulled from has its CA Directory Data DSAs offline (dxserver stop all), to prevent ongoing changes to the files during the copy operation.

Observations:

SCP without compression: Unable to exclude other files (*.tx, *.dp, UserStore). This process took over 12 minutes to copy 10,250 MB of data.

SCP with compression: Unable to exclude other files (*.tx, *.dp, UserStore). This process still took over 12 minutes to copy 10,250 MB of data.

Rsync without compression: This process can exclude files/folders, has a built-in checksum feature (allowing a restart of a file if the connection is broken), and works over SSH as well. If the target folder is not deleted beforehand, this process gives artificially high-speed results. It was able to exclude the UserStore DSA files and the transaction files (*.dp & *.tx) that do not need to be copied for use on a remote server, so only 10,000 MB (4 x 2500 MB) was copied instead of the full 10,250 MB.

Rsync with compression: This process can exclude files/folders, has a built-in checksum feature (allowing a restart of a file if the connection is broken), and works over SSH as well. This process was the winner, with extremely strong performance over the other processes.

Total Time: 1 min 10 seconds for 10,000 MB of data over a WAN latency of 70 ms (140 ms R/T)
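For reference, the winning transfer reduces to a single command; the host IP and paths below are the lab values from the walkthrough and would change per site. The command is echoed for review rather than executed, since it requires a reachable remote node:

```shell
# Lab values from this walkthrough; adjust host and paths per site.
# The command is echoed for review; remove the echo to execute it.
SRC='dsa@192.168.242.135:./data/'
DST="${DXHOME:-/opt/CA/Directory/dxserver}/data"
echo rsync --progress -e 'ssh -ax' -avz \
  --exclude 'User*' --exclude '*.dp' --exclude '*.tx' \
  "$SRC" "$DST"
```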

Now that we have found our winner, we need a few post steps before using the copied files. To maintain uniqueness between peer members of the multi-write (MW) group, CA Directory requires a unique name for each data folder and data file. On the CA Identity Suite / Symantec IGA Virtual Appliance, a pseudo nomenclature with two (2) digits is used.

The next step is to rename the folders and files. Since the vApp is locked down against installing other tools that might offer rename operations, we utilized the find and mv commands with a bash regular-expression substitution to assist with these two (2) steps.
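The rename logic can be exercised safely on a throwaway copy first. This sketch uses a temporary directory with illustrative DSA names and the same bash substitution pattern as the walkthrough below, with the variables quoted and the match anchored on "-01-" so stray digits elsewhere in the path cannot match:

```shell
# Demonstrate the folder/file rename on a disposable copy (illustrative names).
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/ca-prov-srv-01-impd-co"
touch "$DATADIR/ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db"
# Rename directories first, then the *.db files, swapping node suffix 01 -> 03
find "$DATADIR" -mindepth 1 -type d -exec bash -c 'mv "$0" "${0/-01-/-03-}"' {} \;
find "$DATADIR" -depth -name '*.db' -exec bash -c 'mv "$0" "${0/-01-/-03-}"' {} \;
ls "$DATADIR"
```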

Complete Process Summarized with Validation

The below process was written for the default shell of the ‘dsa’ userID, ‘csh’. If the shell is changed to ‘bash’, update accordingly.

The below process also utilized an SSH RSA private/public key pair that was previously generated for the ‘dsa’ userID. If you are using the vApp, log in as the ‘config’ userID, then su - dsa to complete the necessary steps. You may need to add a copy operation between the dsa & config userIDs.

Summary of using rsync with find/mv to rename copied IMPD *.db files/folders
[dsa@pwdha03 ~/data]$ dxserver status
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ dxserver stop all > & /dev/null
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$ eval `ssh-agent` && ssh-add
Agent pid 5395
Enter passphrase for /opt/CA/Directory/dxserver/.ssh/id_rsa:
Identity added: /opt/CA/Directory/dxserver/.ssh/id_rsa (/opt/CA/Directory/dxserver/.ssh/id_rsa)
[dsa@pwdha03 ~/data]$ rm -rf *
[dsa@pwdha03 ~/data]$ du -hs
4.0K    .
[dsa@pwdha03 ~/data]$ time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
FIPS mode initialized
receiving incremental file list
./
ca-prov-srv-01-impd-co/
ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db
  2500000000 100%  143.33MB/s    0:00:16 (xfer#1, to-check=3/9)
ca-prov-srv-01-impd-inc/
ca-prov-srv-01-impd-inc/ca-prov-srv-01-impd-inc.db
  2500000000 100%  153.50MB/s    0:00:15 (xfer#2, to-check=2/9)
ca-prov-srv-01-impd-main/
ca-prov-srv-01-impd-main/ca-prov-srv-01-impd-main.db
  2500000000 100%  132.17MB/s    0:00:18 (xfer#3, to-check=1/9)
ca-prov-srv-01-impd-notify/
ca-prov-srv-01-impd-notify/ca-prov-srv-01-impd-notify.db
  2500000000 100%  130.91MB/s    0:00:18 (xfer#4, to-check=0/9)

sent 137 bytes  received 9810722 bytes  139161.12 bytes/sec
total size is 10000000000  speedup is 1019.28
27.237u 5.696s 1:09.43 47.4%    0+0k 128+19531264io 2pf+0w
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-01-impd-co  ca-prov-srv-01-impd-inc  ca-prov-srv-01-impd-main  ca-prov-srv-01-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-03-impd-co  ca-prov-srv-03-impd-inc  ca-prov-srv-03-impd-main  ca-prov-srv-03-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ dxserver start all
Starting all dxservers
ca-prov-srv-03-impd-main starting
..
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify starting
..
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co starting
..
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc starting
..
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router starting
..
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$


Note: An enhancement request has been opened to allow the ‘dsa’ userID to use remote SSH processes, to address any challenges if the IMPD Data DSAs need to be copied or retained for backup processes.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

Example for vApp Patches:

Note: There is no major difference in speed if the files being copied are already compressed. The below image shows that the initial copy runs at the rate of the network with latency. The value gained from using rsync is still the checksum feature, which allows an automatic restart from where it left off.

vApp Patch process refined to a few lines (to three nodes of a cluster deployment)

# PATCHES
# On Local vApp [as config userID]
mkdir -p patches  && cd patches
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-VA-140300-0002.tar.gpg
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-IMV-140300-0001.tgz.gpg
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg           [Patch VA prior to any solution patch]
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
cd ..
# Push from one host to another via scp
IP=192.168.242.136;scp -r patches  config@$IP:
IP=192.168.242.137;scp -r patches  config@$IP:
# Push from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.136;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
IP=192.168.242.137;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
# Pull from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.135;rsync --progress -e 'ssh -ax' -avz config@$IP:./patches $HOME

# Verify the files were patched
IP=192.168.242.136;ssh -tt config@$IP "ls -lart patches"
IP=192.168.242.137;ssh -tt config@$IP "ls -lart patches"

# On Remote vApp Node #2
IP=192.168.242.136;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

# On Remote vApp Node #3
IP=192.168.242.137;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

View of rotating the SSH RSA key for CONFIG User ID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP `/bin/date -u +%s`
IP=192.168.242.137;ssh $IP `/bin/date -u +%s`
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Avoid locking a userID in a Virtual Appliance

The below post describes enabling the .ssh private/public key process for the provided service IDs, to avoid dependency on a password that may be forgotten, and how to leverage these service IDs to address potential CA Directory data-sync challenges that may occur when there is WAN network latency between remote cluster nodes.

Background:

The CA/Broadcom/Symantec Identity Suite (IGA) solution provides a software virtual appliance. This software appliance is available on Amazon AWS as a pre-built AMI image that allows for rapid deployment.

The software appliance is also offered as an OVA file for VMware ESXi/Workstation deployment.

Challenge:

If the primary service ID is locked or its password is allowed to expire, then the administrator will likely have only two (2) options:

1) Request assistance from the Vendor (for a supported process to reset the service ID – likely with a 2nd service ID “recoverip”)

2) Boot from an ISO image (if allowed) to mount the vApp as a data drive and update the primary service ID.

Proposal:

Add a standardized SSH RSA private/public key to the primary service ID if it does not exist. If it exists, validate that you are able to authenticate and copy files between cluster nodes with the existing .ssh files. Rotate these files per internal security policies, e.g. once per year.

The focus for this entry is on the CA ‘config’ and ‘ec2-user’ service IDs.

An enhancement request has been added to have the ‘dsa’ userID included in the file ‘/etc/ssh/ssh_allowed_users’, to allow the same .ssh RSA process to address challenges during deployments where the CA Directory Data DSAs did not fully copy from one node to another.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

AWS vApp: ‘ec2-user’

The primary service ID for remote SSH access on Amazon AWS, ‘ec2-user’, is already deployed with an .ssh RSA private/public key. This is a requirement for AWS deployments, and the appliance is enabled to use this process.

This feature allows access via the private key from a remote SSH session using Putty/MobaXterm or similar tools. Another feature may be leveraged by updating the ‘ec2-user’ .ssh folder to allow other nodes to be accessed with this service ID, to assist with the deployment of patch files.

As an example, enabling the .ssh service between multiple cluster nodes reduces scp operations from remote workstations. Previously, if there were five (5) vApp nodes, patching them would require uploading the patch directly to each of the five (5) nodes. With the .ssh service enabled between all nodes for the ‘ec2-user’ service ID, we only need to upload patches to one (1) node, then use an scp process to push the patch file(s) from that node to the other cluster nodes.

On-Prem vApp: ‘config’

We wish to emulate this process for on-prem vApp servers to reduce I/O for any files to be uploaded and/or shared.

This process has strong value when CA Directory *.db files are out-of-sync, or during initial deployment when there may be network issues and/or WAN latency.

Below is an example to create and/or rotate the private/public SSH RSA files for the ‘config’ service ID.


Below is an example to push the newly created SSH RSA files to the remote host(s) of the vApp cluster. After this step, we can now use scp processes to assist with remediation efforts within scripts without a password stored as clear text.
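A minimal sketch of that push step, using the same tar-over-scp approach as the rotation commands shown earlier; the IPs are the lab values from this walkthrough, and the commands are echoed for review since they require reachable remote nodes:

```shell
# Lab IPs from this walkthrough; run as the 'config' userID.
# Commands are echoed for review; remove the echo prefixes to execute.
KEYTAR=/tmp/$(date -u +%s).config.ssh-keys.tar   # produced by the rotation step
for IP in 192.168.242.136 192.168.242.137; do
  echo scp "$KEYTAR" "config@$IP:"
  echo ssh -tt "config@$IP" 'tar -xvf *.config.ssh-keys.tar'
done
```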

Copy the RSA folder to your workstation, to add to your Putty/MobaXterm or similar SSH tool, to allow remote authentication using the public key.

If you have any issues, use the embedded verbose logging within the ssh client tool (-vv) to identify the root issue.

ssh -vv userid@remote_hostname

Example:

config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > eval `ssh-agent` && ssh-add
Agent pid 5717
Enter passphrase for /home/config/.ssh/id_rsa:
Identity added: /home/config/.ssh/id_rsa (/home/config/.ssh/id_rsa)
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ >
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > ssh -vv config@192.168.242.128
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.242.128 [192.168.242.128] port 22.
debug1: Connection established.
debug1: identity file /home/config/.ssh/identity type -1
debug1: identity file /home/config/.ssh/identity-cert type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/config/.ssh/id_rsa type 1
debug1: identity file /home/config/.ssh/id_rsa-cert type -1
debug1: identity file /home/config/.ssh/id_dsa type -1
debug1: identity file /home/config/.ssh/id_dsa-cert type -1
debug1: identity file /home/config/.ssh/id_ecdsa type -1
debug1: identity file /home/config/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-sha1
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug2: mac_setup: found hmac-sha1
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 141/320
debug2: bits set: 1027/2048
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '192.168.242.128' is known and matches the RSA host key.
debug1: Found key in /home/config/.ssh/known_hosts:2
debug2: bits set: 991/2048
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/config/.ssh/id_rsa (0x5648110d2a00)
debug2: key: /home/config/.ssh/identity ((nil))
debug2: key: /home/config/.ssh/id_dsa ((nil))
debug2: key: /home/config/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Offering public key: /home/config/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug2: input_userauth_pk_ok: SHA1 fp 39:06:95:0d:13:4b:9a:29:0b:28:b6:bd:3d:b0:03:e8:3c:ad:50:6f
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Thu Apr 30 20:21:48 2020 from 192.168.242.146

CA Identity Suite Virtual Appliance version 14.3.0 - SANDBOX mode
FIPS enabled:                   true
Server IP addresses:            192.168.242.128
Enabled services:
Identity Portal               192.168.242.128 [OK] WildFly (Portal) is running (pid 10570), port 8081
                                              [OK] Identity Portal Admin UI is available
                                              [OK] Identity Portal User Console is available
                                              [OK] Java heap size used by Identity Portal: 810MB/1512MB (53%)
Oracle Database Express 11g   192.168.242.128 [OK] Oracle Express Edition started
Identity Governance           192.168.242.128 [OK] WildFly (IG) is running (pid 8050), port 8082
                                              [OK] IG is running
                                              [OK] Java heap size used by Identity Governance: 807MB/1512MB (53%)
Identity Manager              192.168.242.128 [OK] WildFly (IDM) is running (pid 5550), port 8080
                                              [OK] IDM environment is started
                                              [OK] idm-userstore-router-caim-srv-01 started
                                              [OK] Java heap size used by Identity Manager: 1649MB/4096MB (40%)
Provisioning Server           192.168.242.128 [OK] im_ps is running
                                              [OK] co file usage: 1MB/250MB (0%)
                                              [OK] inc file usage: 1MB/250MB (0%)
                                              [OK] main file usage: 9MB/250MB (3%)
                                              [OK] notify file usage: 1MB/250MB (0%)
                                              [OK] All DSAs are started
Connector Server              192.168.242.128 [OK] jcs is running
User Store                    192.168.242.128 [OK] STATS: number of objects in cache: 5
                                              [OK] file usage: 1MB/200MB (0%)
                                              [OK] UserStore_userstore-01 started
Central Log Server            192.168.242.128 [OK] rsyslogd (pid  1670) is running...
=== LAST UPDATED: Fri May  1 12:15:05 CDT 2020 ====
*** [WARN] Volume / has 13% Free space (6.2G out of 47G)
config@cluster01 VAPP-14.3.0 (192.168.242.128):~ >


Advanced Oracle JDBC Logging

One of the challenges for a J2EE application is to understand the I/O operations to the underlying database.

The queries/stored procedures/prepared statements all have value to the J2EE applications but during RCA (root-cause-analysis) process, it can be challenging to identify where GAPs or improvements may be made. Improvements may be from the vendor of the J2EE applications (via a support ticket/enhancement) or custom client efforts for business rule SQL processes or embedded JAR with JDBC logic.

To assist with these RCA efforts, we examined four (4) areas that have value when examining an application using an Oracle 12c/18c+ database.

  1. J2EE logging using embedded features for datasources. This includes the JDBC spy process for Wildfly/JBoss, and the ability to dynamically change logging levels or on/off features with the jboss-cli.sh process.
  2. Intercept JDBC processes – An intermediate JDBC jar that adds additional logging of any JDBC process, e.g. a Wireshark-type process. [As this requires additional changes, we describe this process last.]
  3. Diagnostics JDBC jars – Oracle provides “Diagnosability in JDBC” jars, named ojdbcX_g.jar, that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.
  4. AWR (Automatic Workload Repository) reports – Using either the command line (SQL) or the Oracle SQL Developer GUI.

J2EE Logging (JDBC Spy)

An example of the above process using the Wildfly/JBoss jboss-cli.sh process is provided below as a CLI script. The high value of this process is that it takes advantage of OOTB existing features that do NOT require any additional files to be downloaded or installed.

Implementing this process is done via a flat file with keywords for the command-line JBoss management console, which is taken as input by the jboss-cli.sh process. (If you are not running under the context of the Wildfly/JBoss user, you will need to use the add-user.sh process.)

The below script focuses on the active databases for the Symantec/Broadcom/CA Identity Management solution’s ObjectStore (OS) and TaskPersistence (TP) databases. JDBC debug entries will be written to server.log.

# Name:  Capture JDBC queries/statements for TP and OS with IM business rules
# Filename: im_jdbc_spy_for_tp_and_os.cli
# /apps/CA/wildfly-idm/bin/jboss-cli.sh --connect  --file=im_jdbc_spy_for_tp_and_os.cli
# /opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01!  --file=im_jdbc_spy_for_tp_and_os.cli
#
connect
echo "Take a snapshot backup of IM configuration file before any changes"
:take-snapshot
# Query values before setting them
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo "Enable JDBC Spy for TP and OS DB"
echo
# Always use batch with run-batch - will auto rollback on any errors for updates
batch
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:write-attribute(name=spy,value=true)
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:write-attribute(name=spy,value=true)
/subsystem=jca/cached-connection-manager=cached-connection-manager/:write-attribute(name=error,value=true)
run-batch
#/subsystem=logging/logger=jboss.jdbc.spy/:read-resource(recursive=false)
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo ""

echo "Check if logger is enabled already; if not then set"
# Use log updates in separate batch process to prevent rollback of entire process if already set
echo
batch
/subsystem=logging/logger=jboss.jdbc.spy/:add(level=TRACE)
run-batch

#
#
# Restart the IM service (reload does not work 100%)
# stop_im start_im restart_im

A view of executing the above script. Monitor the server.log for JDBC updates.

If you have access to install the X11 drivers on the OS, you may also view the new updates to the JBoss/Wildfly datasources.

Diagnostics JDBC Jars

Oracle provides “Diagnosability in JDBC” jars, named ojdbcX_g.jar, that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

Ref: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjdbc/JDBC-diagnosability.html#GUID-4925EAAE-580E-4E29-9B9A-84143C01A6DC

This diagnostic feature has been available for quite some time and operates outside of the JBoss/Wildfly tier, focusing only on the Oracle debug jar and Java, which are managed through two (2) JVM switches.

A challenge exists for the CA Identity Manager solution on Wildfly 8.x: the default logging process uses the newer JBoss logging modules, which appears to interfere with the direct use of the properties file for the Oracle debugging process. The solution appears to ignore the JVM switch “-Djava.util.logging.config.file=OracleLog.properties”.

To address the above challenge without extensively changing the CA IM solution’s logging modules, we integrated into the existing process.

This process requires a new jar to be deployed, older jars to be renamed, and two (2) reference XML files to be updated for Wildfly/JBoss.
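For a generic Wildfly install (outside the IM-specific workaround just described, where the logging-config switch may be ignored), the two JVM switches are typically appended to the launch configuration. This sketch writes them to a temporary stand-in file; the standalone.conf location and the OracleLog.properties filename are assumptions to verify against your deployment:

```shell
# Assumed paths; verify against your deployment before applying.
CONF=$(mktemp)   # stand-in for $JBOSS_HOME/bin/standalone.conf in this sketch
cat << 'EOF' >> "$CONF"
# Enable Oracle JDBC diagnosability (requires the ojdbcX_g.jar debug driver)
JAVA_OPTS="$JAVA_OPTS -Doracle.jdbc.Trace=true"
JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.config.file=OracleLog.properties"
EOF
grep Doracle "$CONF"
```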

AWR Reports

This is a common method that DBAs will use to assist application owners to identify I/O or other challenges that may be impacting performance. The ability to execute the AWR Report may be delegated to a non-system service ID (schema IDs).

This process may be executed at the SQL*Plus prompt, but for application owners it is best executed from the Oracle SQL Developer GUI. The process outlined in the one (1) slide diagram below showcases how to execute an AWR report and how to generate one for our requirements.

This process assumes the administrator has access to a DB schema ID that has access to the AWR (View/DBA) process. If not, we have included the minimal access required.
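The minimal access we mean is typically EXECUTE on DBMS_WORKLOAD_REPOSITORY plus the SELECT_CATALOG_ROLE (for the DBA_HIST views); the schema name below is hypothetical. This sketch only writes the grant script to a file for a DBA to review and run via SQL*Plus:

```shell
# Hypothetical schema ID 'app_schema'; a DBA would run the generated script
# as a privileged user. This sketch only writes and displays the SQL.
SQLFILE=$(mktemp)
cat << 'EOF' > "$SQLFILE"
-- Minimal access for a non-system schema to generate AWR reports (verify per DB version)
GRANT EXECUTE ON SYS.DBMS_WORKLOAD_REPOSITORY TO app_schema;
GRANT SELECT_CATALOG_ROLE TO app_schema;
EOF
cat "$SQLFILE"
```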

Intercept JDBC processes

One of the more interesting processes is a 3rd-party “man-in-the-middle” approach that captures all JDBC traffic. We can think of this as a Wireshark-type process.

We have left this process for last, as it does require additional modification to the JBoss/Wildfly environment. If the solution is deployed standalone, the changes are straightforward. If the solution is on the Virtual Appliance, we will need to review alternative methods or request access to add additional modules to the appliance.

For this process, we chose the p6spy process.

A quick view of the JDBC calls being sent from the IM solution to the Oracle Database, with values populated. We can also capture the returned data, but this would be a very large amount of data.

A quick view of the “spy.log” output for the CA IM TP database during startup.


#!/bin/bash
##################################################
# Name: p6spy.sh
#
# Goal:  Deploy the p6spy jar to assist with RCA for IM solution
# Ref:  https://github.com/p6spy/p6spy
#
# A. Baugher, ANA, 04/2020
##################################################
JBOSS_HOME=/opt/CA/wildfly-idm
USER=wildfly
GROUP=wildfly

mkdir -p /tmp/p6spy
rm -rf /tmp/p6spy/*.jar
cd /tmp/p6spy
time curl -L -O https://repo1.maven.org/maven2/p6spy/p6spy/3.9.0/p6spy-3.9.0.jar
ls -lart *.jar
md5sum p6spy-3.9.0.jar
rm -rf $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
mkdir -p $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cp -r -p p6spy-3.9.0.jar $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
ls -lart $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cd /opt/CA/wildfly-idm/modules/system/layers/base/com/p6spy/main
cat << EOF >> module.xml
<module xmlns="urn:jboss:module:1.0" name="com.p6spy">
    <resources>
        <resource-root path="p6spy-3.9.0.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
        <module name="com.ca.iam.jdbc.oracle"/>
    </dependencies>
</module>
EOF
chown -R $USER:$GROUP $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 755  $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 664  $JBOSS_HOME/modules/system/layers/base/com/p6spy/main/*
ls -lart
cat module.xml
# Update spy.properties file
curl -L -O https://raw.githubusercontent.com/p6spy/p6spy/master/src/main/assembly/individualFiles/spy.properties
rm -rf  $JBOSS_HOME/standalone/tmp/spy.properties
cp -r -p spy.properties $JBOSS_HOME/standalone/tmp/spy.properties
chown -R $USER:$GROUP $JBOSS_HOME/standalone/tmp/spy.properties
chmod -R 666 $JBOSS_HOME/standalone/tmp/spy.properties
ls -lart $JBOSS_HOME/standalone/tmp/spy.properties
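A minimal sketch of the relevant spy.properties settings (the property names come from the p6spy documentation; the values shown are illustrative assumptions, not the full sample file):

```properties
# Real driver that p6spy wraps and delegates to
driverlist=oracle.jdbc.OracleDriver
# Write intercepted JDBC statements to a flat file
appender=com.p6spy.engine.spy.appender.FileLogger
logfile=spy.log
# Reduce noise: drop informational categories and binary payloads
excludecategories=info
excludebinary=true
```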


A view with “results” enabled; results may be viewed as binary or text, controlled by the excludecategories=info and excludebinary=true settings in spy.properties.
Enable these configurations in the CA IM standalone-full-ha.xml file. Focus only on the CA IM TP database (where most activity resides):
 <drivers>
                    <driver name="p6spy" module="com.p6spy">
                       <driver-class>com.p6spy.engine.spy.P6SpyDriver</driver-class>
                    </driver>
                    <driver name="ojdbc" module="com.ca.iam.jdbc.oracle">
                        <driver-class>oracle.jdbc.OracleDriver</driver-class>
                        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
                    </driver>
 </drivers>

<!-- ##############################  BEFORE P6SPY ##########################
                <datasource jta="false" jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="iam_im-imtaskpersistencedb-ds" enabled="true" use-java-context="true" spy="true">
                    <connection-url>jdbc:oracle:thin:@//database_srv:1521/xe</connection-url>
                    <driver>ojdbc</driver>
   ##############################  BEFORE P6SPY ##########################
-->
                <!-- <datasource jndi-name="java:/jdbc/p6spy" enabled="true" use-java-context="true" pool-name="p6spyPool"> -->

                <datasource jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="p6spyPool" enabled="true" use-java-context="true">
                        <connection-url>jdbc:p6spy:oracle:thin:@//database_srv:1521/xe</connection-url>
                        <driver>p6spy</driver>


Error messages may occur during setup or misconfiguration.

References:

https://p6spy.readthedocs.io/en/latest/index.html
https://p6spy.readthedocs.io/en/latest/configandusage.html#common-property-file-settings   [doc on JDBC intercept settings]
https://github.com/p6spy/p6spy/blob/master/src/main/assembly/individualFiles/spy.properties  [sample spy.properties file for JDBC intercepts]
https://github.com/p6spy/p6spy   [git]
https://mvnrepository.com/artifact/p6spy/p6spy    [jar]

Extra(s)

A view into ojdbc8_g.jar reveals the two (2) system properties that are typically all that is required to use this logging functionality:

oracle.jdbc.Trace and java.util.logging.config.file
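A hedged sketch of how these could be passed to the JVM (the property names come from the Oracle JDBC documentation; the logging-config path is a hypothetical placeholder):

```shell
# Hypothetical sketch: enable Oracle JDBC driver tracing.
# Assumes the debug jar (ojdbc8_g.jar) is on the classpath in place of ojdbc8.jar,
# and that a java.util.logging configuration file exists at the path shown.
JAVA_OPTS="$JAVA_OPTS -Doracle.jdbc.Trace=true"
JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.config.file=/opt/CA/OracleLog.properties"
```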

Be safe and automate your backups for CA Directory Data DSAs to LDIF

The CA Directory solution provides a mechanism to automate daily on-line backups, via one simple parameter:

dump dxgrid-db period 0 86400;

The first number is the offset from GMT/UTC (in seconds) and the second number is how often to run the backup (in seconds); e.g., once a day = 86400 sec = 24 hr x 60 min/hr x 60 sec/min.
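The offset arithmetic above can be sketched as follows; a minimal example assuming a fixed UTC offset (e.g., CST = UTC-6) with no daylight-saving adjustment:

```python
def dxgrid_offset(local_hour, local_minute, utc_offset_hours):
    """Seconds after midnight UTC for a desired local backup start time,
    i.e. the first argument of 'dump dxgrid-db period N M'."""
    utc_hour = (local_hour - utc_offset_hours) % 24
    return utc_hour * 3600 + local_minute * 60

# 01:30 CST (UTC-6) -> 07:30 UTC -> 27000 seconds
print(dxgrid_offset(1, 30, -6))   # 27000
```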

Two Gaps/Challenges:

History: The automated backup process will overwrite the existing offline file(s) (*.zdb) for the Data DSA, so any ability to perform an RCA on older data is lost. What was the data like 10 days ago? With the current process, only the CA Directory or IM logs would be of assistance.

Size: The automated backup will create an offline file (*.zdb) footprint of the same size as the data (*.db) file. If your Data DSA (*.db) is 10 GB, then your offline file (*.zdb) will be 10 GB. The Identity Provisioning user store has four (4) Data DSAs, which multiplies this number; e.g., four (4) db files + four (4) offline zdb files at 10 GB each require a minimum of 80 GB of free disk space. If we attempt to retain a history of these files for fourteen (14) days, this would be four (4) db + fourteen (14) zdb = eighteen (18) x 10 GB = 180 GB of disk space required.

Resolutions:

Leverage the CA Directory tool (dxdumpdb) to convert the binary data (*.db/*.zdb) to LDIF, and use the OS crontab for the ‘dsa’ account to automate a post-‘online backup’ export and conversion process.

Step 1: Validate that the ‘dsa’ user ID has access to crontab (to avoid using root for this effort): cat /etc/cron.allow

If access is missing, append the ‘dsa’ user ID to this file.

Step 2: Validate that the online backup process has been scheduled for your Data DSAs. Use a find command to identify the offline files (*.zdb), and note their size.

Step 3: Identify the online backup start time, as defined in the Data DSA settings DXC file (or possibly a DXI file). Convert this GMT offset to the local time on the CA Directory server. (See the references for assistance.)

Step 4: Use crontab -e as the ‘dsa’ user ID to create a new entry (use crontab -l to view existing entries). Use the dxdumpdb -z switch with the DSA_NAME to create the exported LDIF file, and redirect this output to gzip to avoid any need for temporary files. Note: crontab has limited variable expansion, and any % characters must be escaped.

Example of the crontab for ‘dsa’ that runs at 2 AM CST, 30 minutes after the online backup process is scheduled (at 1:30 AM CST).

# Goal:  Export and compress the daily DSA offline backup to ldif.gz at 2 AM every day
# - Ensure this crontab runs AFTER the daily automated backup (zdb) of the CA Directory Data DSAs
# - Review these two (2) tokens for DATA DSAs:  ($DXHOME/config/settings/impd.dxc  or ./impd_backup.dxc)
#   a)   Location:  set dxgrid-backup-location = "/opt/CA/Directory/dxserver/backup/";
#   b)   Online Backup Period:   dump dxgrid-db period 0 86400;
#
# Note1: The 'N' start time of the 'dump dxgrid-db period N M' is the offset in seconds from midnight of UTC
#   For 24 hr clock, 0130 (AM) CST calculate the following in UTC/GMT =>  0130 CST + 6 hours = 0730 UTC
#   Due to the six (6) hour difference between CST and UTC TZ:  7.5 * 3600 = 27000 seconds
# Example(s):
#   dump dxgrid-db period 19800 86400;   [Once a day at 2330 CST]
#   dump dxgrid-db period 27000 86400;   [Once a day at 0130 CST]
#
# Note2:  Alternatively, may force an online backup using this line:
#               dump dxgrid-db;
#        & issuing this command:  dxserver init all
#
#####################################################################
#        1      2         3       4       5        6
#       min     hr      d-o-m   month   d-o-w   command(s)
#####################################################################
#####
#####  Testing Backup Every Five (5) Minutes ####
#*/5 * * * *  . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-main" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
#####
#####  Backup daily at 2 AM CST  -  30 minutes after the online backup at 1:30 AM CST #####
#####
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-main"   | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main"   | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-co"     | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-co"     | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-inc"    | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-inc"    | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-notify" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-notify" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz

The lines above may instead be placed in a bash shell script and called from crontab. Note: in a script we are able to use variables, and there is no need to escape the date % characters.

# DSA=main &&   dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=co &&     dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=inc &&    dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=notify && dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
#

When the crontab executes for the ‘dsa’ account, monitor with tail -f /var/log/cron (or syslog, depending on your OS version).

View the output folder for the newly created gzipped LDIF files. The files may be extracted back to LDIF format via gzip -d file.ldif.gz. Compare these file sizes with the original (*.zdb) files of 2 GB.

Recommendation(s):

Implement a similar process and retain this data for fourteen (14) days to assist with any RCA or similar analysis of historical data. Avoid copying the (*.db or *.zdb) files for backup, unless using this process to force a clean sync between peer MW Data DSAs.

The Data DSAs may be reloaded (dxloaddb) from these LDIF snapshots; the LDIF files do not have the same file-size impact as the binary db files; and as LDIF files, they may be quickly searched for prior data using standard tools such as grep “text string” filename.ldif.
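For example, a self-contained sketch of the restore-and-search step (the sample entry and file path are hypothetical; real snapshot names follow the hostname_DSA_timestamp.ldif.gz pattern from the crontab above):

```shell
# Create a tiny sample LDIF entry (hypothetical data) to stand in for a real snapshot
cat > /tmp/sample.ldif << 'EOF'
dn: eTGlobalUserName=firstname.lastname,eTGlobalUserContainerName=Global Users
eTGlobalUserName: firstname.lastname
EOF
gzip -9 -f /tmp/sample.ldif        # compress, as the crontab examples do
gzip -d -f /tmp/sample.ldif.gz     # extract back to LDIF format
grep -c "eTGlobalUserName: firstname.lastname" /tmp/sample.ldif
```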

This process will assist in site preparation for a disaster recovery (DR) scenario. Protect your data.

References:

dxdumpdb

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/administrating/tools-to-manage-ca-directory/dxtools/dxdumpdb-tool-export-data-from-a-datastore-to-an-ldif-file.html

dump dxgrid-db

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/reference/commands-reference/dump-dxgrid-db-command-take-a-consistent-snapshot-copy-of-a-datastore.html

If you wish to learn more or need assistance, contact us.

Build an eight (8) node Wildfly cluster on a single server

The following methodology was used to isolate performance challenges related to an increased number of cluster nodes against a common database, the JGroups/JTS/JMS communication, and the database pool values for each “instance” in the Wildfly/JBoss configuration file.

Note: The individual node names are generated with a port offset of 100-800 for each of the eight (8) nodes; any hard-coded values are updated as well (via addition or multiplication).
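The per-node port arithmetic can be summarized with a small sketch (base ports taken from the script that follows; the function name is illustrative):

```python
def node_ports(counter):
    """Derived ports for cluster node 'counter' (1..8), using a 100-per-node offset."""
    o = 100 * counter
    return {
        "suffix": f"{counter:02d}",   # folder suffix, e.g. /opt/CA/wildfly-idm01
        "offset": o,                  # jboss.socket.binding.port-offset
        "netty": 5456 + o,            # hard-coded HornetQ netty connector port
        "jgroups": 7600 + o,          # JGroups TCP port
        "cli": 9990 + o,              # management (jboss-cli) port
    }

print(node_ports(3))
```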

To ensure the HornetQ and JGroups names are correctly defined for the chain cluster, a case statement updates each node’s standalone-full-ha.xml configuration file accordingly if the number of nodes is changed (offered as a variable at the top of the script).

The example below also shows how to leverage the CA APM (Wily) agent for each J2EE/Wildfly node.

#!/bin/bash
###############################################################################################
#
#  Goal:  Create a N node J2EE Cluster using Wildfly 8.x.x for CA Identity Manager on a single host
#         Use for sandbox testing and validation of performance I/O parameters
#
#  Notes:  Tested for 2-8 nodes and with the CA APM (Wily) agent enabled for each node
#
#
#  Author:  A. Baugher, ANA, 8/2019
#
#
###############################################################################################
#set -vx
tz=`/bin/date --utc +%Y%m%d%H%M%S.3%N.0Z`
MAX=5
counter=1
JBOSS_HOME=/opt/CA/wildfly-idm


echo "######  STEP 00:  Stop all prior work with cluster testing ######"  > /dev/null 2>&1
kill -9 `ps -ef | grep java | grep -v grep | grep UseString | awk '{print $2}'`

echo "######  STEP 01:  Copy the current IME (Wildfly) folder to a new folder & with new port offset ######"
echo "Create this many cluster nodes:  $MAX"
echo "Current TimeStamp:  $tz"
echo ""
while [ $counter -le $MAX ]
do
  c=$counter
  n=$((100+counter)); n=${n#1}
  o=$((100*counter))
  nettyo=$((5456+o))
  jgrpo=$((7600+o))
  cli=$((9990+o))

 echo "Current counter is: $counter and the jboss number is:  $n  with a port offset of: $o"
 echo ""
 if [ -d $JBOSS_HOME$n ]; then
   echo "Prior directory exists for $JBOSS_HOME$n"
   kill -9 `ps -ef | grep "wildfly-idm$n" | grep -v grep | awk '{print $2}'` >   /dev/null 2>&1
   echo "Remove any running processes then sleep 5 seconds before removing directory: $JBOSS_HOME$n "
   sleep 5
   rm -rf /opt/CA/wildfly-idm$n
 fi

 cp -r -p /opt/CA/wildfly-idm /opt/CA/wildfly-idm$n
 cd $JBOSS_HOME$n/standalone
 echo "Current Folder is: `pwd`"
 ls -rt
 echo "Remove data tmp log folders for new node"
 rm -rf data tmp log
 ls -rt
 echo ""
 echo ""


 echo "Update standalone-full-ha.xml for hardcoded port 5456 with offset $o"
 cd $JBOSS_HOME$n/standalone/configuration
 echo "Current Folder is: `pwd`"
 cp -r -p ca-standalone-full-ha.xml ca-standalone-full-ha.xml.$tz
 sed -i "s|5456|$nettyo|g"  ca-standalone-full-ha.xml
 echo "Updated Jgroup netty connector port:  $nettyo"
 grep  $nettyo ca-standalone-full-ha.xml
 echo ""
 echo ""

 echo "Update standalone.conf (wildfly.conf) & jboss-cli.xml for port offset by $o"
 cd $JBOSS_HOME$n/bin
 echo "Current Folder is: `pwd`"
 ls -lart standalone.conf
 ls -lart jboss-cli.xml
cp -r -p ./init.d/wildfly.conf ./init.d/wildfly.conf.$tz
 cp -r -p jboss-cli.xml jboss-cli.xml.$tz
 sed -i "s|/opt/CA/wildfly-idm|/opt/CA/wildfly-idm$n|g" ./init.d/wildfly.conf
 sed -i "s|9990|$cli|g" jboss-cli.xml
 unlink standalone.conf
 ln -s $JBOSS_HOME$n/bin/init.d/wildfly.conf standalone.conf
 echo "JAVA_OPTS=\"\$JAVA_OPTS -Djboss.socket.binding.port-offset=$o\""  >> standalone.conf
 ls -lart standalone.conf
 ls -lart jboss-cli.xml
 grep "port-offset" standalone.conf
 grep "$cli" jboss-cli.xml
 echo ""
 echo ""



 echo "Update standalone.sh for node name & tcp group port"
cd $JBOSS_HOME$n/bin
pwd
cp -r -p standalone.sh   standalone.sh.$tz
ls -larth standalone.sh
sed -i "s|iamnode1|iamnode$n|g"  standalone.sh


case "$MAX" in

1)  echo "Creating JGroups for one node with port offset of $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ;;
2)  echo "Creating JGroups for two nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###################
    ;;
3)  echo "Creating JGroups for three nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###################
    ;;
4)  echo "Creating JGroups for four nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    ;;
5)  echo "Creating JGroups for five nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    ;;
6)  echo "Creating JGroups for six nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 6 ]
        then
    sed -i '684s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node6_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>4000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>4000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml

    ###########################
    ;;
7)  echo "Creating JGroups for seven nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\],caim-srv-01\[8300\]|g" $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 6 ]
        then
    sed -i '684s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 7 ]
        then
    sed -i '684s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node7_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3300</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3300</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    ;;
8)  echo "Creating JGroups for eight nodes with port offset of 100 - $o"
    sed -i "s|caim-srv-01\[7600\]|caim-srv-01\[7700\],caim-srv-01\[7800\],caim-srv-01\[7900\],caim-srv-01\[8000\],caim-srv-01\[8100\],caim-srv-01\[8200\],caim-srv-01\[8300\],caim-srv-01\[8400\]|g"  $JBOSS_HOME$n/bin/standalone.sh
    ###################
    if [ $counter -eq 1 ]
       then
    sed -i '684s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 2 ]
        then
    sed -i '684s|node1|node2|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node1_live_to_node2_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 3 ]
        then
    sed -i '684s|node1|node3|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node2_live_to_node3_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 4 ]
        then
    sed -i '684s|node1|node4|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node3_live_to_node4_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 5 ]
        then
    sed -i '684s|node1|node5|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node4_live_to_node5_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 6 ]
        then
    sed -i '684s|node1|node6|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node5_live_to_node6_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 7 ]
        then
    sed -i '684s|node1|node7|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node8|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node6_live_to_node7_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    elif [ $counter -eq 8 ]
        then
    sed -i '684s|node1|node8|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '738s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '753s|node1_live_to_node1_backup|node8_live_to_node1_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '902s|node1|node1|'                                             $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '943s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '953s|node1_live_to_node1_backup|node7_live_to_node8_backup|'   $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    fi
    ###########################
    sed -i '682s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    sed -i '901s|<journal-max-io>5000</journal-max-io>|<journal-max-io>3000</journal-max-io>|' $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
    ;;
esac

ls -lart $JBOSS_HOME$n/bin/standalone.sh
grep caim-srv $JBOSS_HOME$n/bin/standalone.sh
echo ""
echo "For Node: $n"
echo ""
grep node $JBOSS_HOME$n/standalone/configuration/ca-standalone-full-ha.xml
echo ""
echo ""
echo ""


echo ""
echo ""
echo "Update CA APM / Wily Information / Agent for this instance"
cp -r -p /opt/CA/VirtualAppliance/custom/apm/wily_im $JBOSS_HOME$n/standalone/wily_im
chown -R wildfly:wildfly $JBOSS_HOME$n/standalone/wily_im
echo "JAVA_OPTS=\"\$JAVA_OPTS -Dcom.wily.introscope.agent.jmx.enable=true -Dcom.wily.introscope.agent.agentManager.url.1=localhost:5001 -Djboss.modules.system.pkgs=com.wily,com.wily.*,org.jboss.byteman,org.jboss.logmanager -Xbootclasspath/p:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/logging/main/jboss-logging-3.1.4.GA.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.1.0.Final.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/as/logging/main/wildfly-logging-8.2.0.Final.jar:$JBOSS_HOME$n/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.2.Final.jar\""  >> $JBOSS_HOME$n/bin/standalone.conf
echo "JAVA_OPTS=\"\$JAVA_OPTS -Dcom.wily.introscope.agent.agentName=iamnode$n  -Dcom.wily.introscope.agentProfile=$JBOSS_HOME$n/standalone/wily_im/core/config/IntroscopeAgent.profile -javaagent:$JBOSS_HOME$n/standalone/wily_im/Agent.jar    \""  >> $JBOSS_HOME$n/bin/standalone.conf
echo ""
echo ""

 counter=$((counter + 1))
done






counter=1
while [ $counter -le $MAX ]
do
  # Derive the zero-padded node suffix (01, 02, ...) BEFORE using it;
  # 100 + counter gives e.g. 101, and stripping the leading "1" leaves "01".
  n=$((100+counter)); n=${n#1}

  echo "Reset ownership permissions for $JBOSS_HOME$n to wildfly userID"
  chown -R wildfly:wildfly $JBOSS_HOME$n
  echo "Start up node: $n of $MAX Wildfly cluster"

  if [ "$(whoami)" != "wildfly" ]; then
       echo "Run this process under the wildfly userID to avoid permissions issues with root"
       su - wildfly -c "$JBOSS_HOME$n/bin/standalone.sh &"
  else
       $JBOSS_HOME$n/bin/standalone.sh &
  fi

  counter=$((counter + 1))
done
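After the startup loop completes, a quick sanity check can confirm that each node actually came up. The sketch below factors the zero-padding idiom used in the loops above (`n=$((100+counter)); n=${n#1}`) into a small helper; `MAX` and the `wildfly-idm` naming are the same assumptions used elsewhere in this script.

```shell
#!/bin/bash
# Sketch: confirm each Wildfly node process is running after startup.

# Replicates the zero-padding idiom used above:
# 100 + counter gives e.g. 101; stripping the leading "1" leaves "01".
node_suffix () {
  local n=$((100 + $1))
  echo "${n#1}"
}

MAX=${MAX:-2}
counter=1
while [ $counter -le $MAX ]
do
  n=$(node_suffix $counter)
  if /bin/ps -ef | grep "wildfly-idm$n" | grep -v grep >/dev/null
  then
    echo "Node $n appears to be running"
  else
    echo "Node $n is NOT running - check standalone/log/server.log for that instance"
  fi
  counter=$((counter + 1))
done
```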


Reduce log duplication: Avoid nohup.out

If you plan to start your J2EE services manually and want them to keep running after you log out, a common method is nohup ./command.sh &.

The challenge with this approach is that nohup creates its own output file, nohup.out, in the folder where the command was executed.

Additionally, nohup.out duplicates the contents of the J2EE service's server.log file, adding a second I/O operation for the same log data.

To avoid this second I/O operation, redirect nohup's output to /dev/null, or determine whether this J2EE service can be enabled as an RC/init.d or systemd service.

The example below updates the wildfly user's .profile with bash shell functions that act as "aliases" to start and stop the Wildfly service while avoiding the creation of the nohup.out file.

echo "Enable alias (or function)  to start and stop wildfly"

#Example of function - Use this to avoid double I/O for nohup process (nohup.out file)
function start_im01 () {
     echo "Starting IM 01 node with nohup process"
     cd /opt/CA/wildfly-idm01/bin/
     pwd
     # Redirect output to /dev/null to avoid the duplicate nohup.out file
     nohup ./standalone.sh  >/dev/null 2>&1 &
     sleep 1
     /bin/ps -ef | grep wildfly-idm01 | grep -v grep
}
export -f start_im01

function stop_im01 () {
     echo "Stopping IM 01 node"
     echo "This may take 30-120 seconds"
     cd /opt/CA/wildfly-idm01/bin/
     pwd
     # Attempt a clean shutdown via the management CLI first
     ./jboss-cli.sh --connect  --command=":shutdown"
     sleep 5
     # Force-kill any java process still running for this instance
     /bin/kill -9 $(/bin/ps -ef | grep wildfly-idm01 | grep -v grep | awk '{print $2}') >/dev/null 2>&1
}
export -f stop_im01
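As noted above, the other way to avoid nohup.out entirely is to run Wildfly as a systemd service. A minimal sketch of a unit file follows, reusing the /opt/CA/wildfly-idm01 path and wildfly userID from the functions above; the unit name and option values are illustrative assumptions, not part of the vApp, so validate them against your environment before enabling.

```ini
# /etc/systemd/system/wildfly-idm01.service  (illustrative name)
[Unit]
Description=Wildfly IM 01 node
After=network.target

[Service]
User=wildfly
Group=wildfly
ExecStart=/opt/CA/wildfly-idm01/bin/standalone.sh
ExecStop=/opt/CA/wildfly-idm01/bin/jboss-cli.sh --connect --command=":shutdown"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Once installed (systemctl daemon-reload, then systemctl enable --now wildfly-idm01), journald captures the console output, so neither nohup nor the output redirect is needed.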

You may now start and stop your J2EE Wildfly service with the new "aliases" start_im01 and stop_im01.

Note that stop_im01 first attempts to stop the Wildfly service cleanly via the JBoss/Wildfly management console port; if that fails, it searches for and kills the associated java process. If a service was force-killed and then has startup issues, we suggest removing the $JBOSS_HOME/standalone/tmp and $JBOSS_HOME/standalone/data folders before restarting.
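That cleanup step can be sketched as a small helper. It assumes $JBOSS_HOME points at the instance that was force-killed (the /opt/CA/wildfly-idm01 default below is illustrative); run it only while the service is stopped, since both folders are rebuilt on the next startup.

```shell
#!/bin/bash
# Sketch: clear Wildfly's cached deployment state after a forced kill.
# Both folders are recreated automatically on the next startup.
JBOSS_HOME=${JBOSS_HOME:-/opt/CA/wildfly-idm01}   # assumed instance path
rm -rf "$JBOSS_HOME/standalone/tmp" "$JBOSS_HOME/standalone/data"
echo "Removed tmp and data folders under $JBOSS_HOME/standalone"
```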