WAN Latency: Rsync versus SCP

We were curious which methods we could use to manage large files that must be copied between sites over WAN-type latency, while restricting ourselves to processes available on the CA Identity Suite virtual appliance / Symantec IGA solution.

Leveraging VMware Workstation’s ability to introduce network latency between images allows us to validate a global password reset solution under WAN-like conditions.

If we experience deployment challenges with native copy operations, we need to ensure we have alternatives to address any out-of-sync data.

The embedded CA Directory maintains the data tier in separate binary files, using a software router to join the data tier into a virtual directory. This allows for scalability and growth to accommodate the largest of sites.

We focused on the provisioning directory (IMPD) as our likely candidate for re-syncing.

Test Conditions:

  1. To ensure the data was being securely copied, we kept the requirement for SSH sessions between two (2) different nodes of a cluster.
  2. We introduced latency on one of the nodes with the VMware Workstation NIC advanced settings (a command-line alternative is sketched after this list).
  3. The four (4) IMPD Data DSAs were resized to 2500 MB each (a size similar to what we have seen in production at many sites).
  4. We removed the data and the folder structure from the receiving node, so that checksum-restart features could not gain an unfair advantage.
  5. If a process allowed for exclusions, we took advantage of this feature.
  6. The feature/process/commands must be available on the vApp to the ‘config’ or ‘dsa’ userIDs.
  7. The reference host/node being pulled from had its CA Directory Data DSAs offline (dxserver stop all) to prevent ongoing changes to the files during the copy operation.
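The latency itself was injected with the VMware Workstation NIC advanced settings. As a hedged alternative when a plain Linux host is used instead (this requires root and the iproute2 tc utility, so it falls outside the vApp restriction in condition 6), the netem queueing discipline can emulate the same delay:

tc qdisc add dev eth0 root netem delay 70ms      # add ~70 ms one-way delay on the test NIC
tc qdisc show dev eth0                           # confirm the netem qdisc is active
tc qdisc del dev eth0 root netem                 # remove the delay when testing is complete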

Observations:

SCP without Compression: Unable to exclude other files (*.tx,*.dp, UserStore) – This process took over 12 minutes to copy 10,250 MB of data

SCP with Compression: Unable to exclude other files (*.tx,*.dp, UserStore) – This process still took over 12 minutes to copy 10,250 MB of data

Rsync without compression: This process can exclude files/folders, has built-in checksum features (to allow a restart of a file if the connection is broken), and works over SSH as well. If the destination folder had not been deleted beforehand, this process would give artificially high-speed results. It was able to exclude the UserStore DSA files and the transaction files (*.dp & *.tx) that are not required on the remote server, so only 10,000 MB (4 x 2500 MB) was copied instead of an extra 250 MB.

Rsync with compression: This process can exclude files/folders, has built-in checksum features (to allow a restart of a file if the connection is broken), and works over SSH as well. This process was the clear winner, with dramatically better performance than the other processes.

Total Time: 1 min 10 seconds for 10,000 MB of data over a WAN latency of 70 ms (140 ms R/T)

Now that we have found our winner, we need a few post-copy steps before the copied files can be used. CA Directory, to maintain uniqueness between peer members of the multi-write (MW) group, requires a unique name for the data folder and the data file. On the CA Identity Suite / Symantec IGA Virtual Appliance, a pseudo nomenclature with two (2) digits is used.

The next step is to rename the folder and the files. Since the vApp is locked down against installing other tools that might be used for rename operations, we utilized the find and mv commands with a regular-expression substitution to assist with these two (2) steps.

Complete Process Summarized with Validation

The below process was written for ‘csh’, the default shell of the ‘dsa’ userID. If the shell is changed to ‘bash’, update accordingly.

The below process also utilizes an SSH RSA private/public key pair that was previously generated for the ‘dsa’ userID. If you are using the vApp, log in with the ‘config’ userID and su - dsa to complete the necessary steps. You may need to add a copy operation between the dsa & config userIDs.
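A hedged sketch of that copy operation between the two service IDs (the staging path and file name are illustrative only; the dsa home directory is assumed to be $DXHOME, /opt/CA/Directory/dxserver on the vApp):

# As the 'config' userID: stage the file where the 'dsa' userID can read it.
cp ~/transfer.tar /tmp/transfer.tar
chmod 644 /tmp/transfer.tar
# Switch to the 'dsa' userID and pick up the staged file.
su - dsa
cp /tmp/transfer.tar $DXHOME/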

Summary of using rsync with find/mv to rename copied IMPD *.db files/folders
[dsa@pwdha03 ~/data]$ dxserver status
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ dxserver stop all > & /dev/null
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$ eval `ssh-agent` && ssh-add
Agent pid 5395
Enter passphrase for /opt/CA/Directory/dxserver/.ssh/id_rsa:
Identity added: /opt/CA/Directory/dxserver/.ssh/id_rsa (/opt/CA/Directory/dxserver/.ssh/id_rsa)
[dsa@pwdha03 ~/data]$ rm -rf *
[dsa@pwdha03 ~/data]$ du -hs
4.0K    .
[dsa@pwdha03 ~/data]$ time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
FIPS mode initialized
receiving incremental file list
./
ca-prov-srv-01-impd-co/
ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db
  2500000000 100%  143.33MB/s    0:00:16 (xfer#1, to-check=3/9)
ca-prov-srv-01-impd-inc/
ca-prov-srv-01-impd-inc/ca-prov-srv-01-impd-inc.db
  2500000000 100%  153.50MB/s    0:00:15 (xfer#2, to-check=2/9)
ca-prov-srv-01-impd-main/
ca-prov-srv-01-impd-main/ca-prov-srv-01-impd-main.db
  2500000000 100%  132.17MB/s    0:00:18 (xfer#3, to-check=1/9)
ca-prov-srv-01-impd-notify/
ca-prov-srv-01-impd-notify/ca-prov-srv-01-impd-notify.db
  2500000000 100%  130.91MB/s    0:00:18 (xfer#4, to-check=0/9)

sent 137 bytes  received 9810722 bytes  139161.12 bytes/sec
total size is 10000000000  speedup is 1019.28
27.237u 5.696s 1:09.43 47.4%    0+0k 128+19531264io 2pf+0w
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-01-impd-co  ca-prov-srv-01-impd-inc  ca-prov-srv-01-impd-main  ca-prov-srv-01-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-03-impd-co  ca-prov-srv-03-impd-inc  ca-prov-srv-03-impd-main  ca-prov-srv-03-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ dxserver start all
Starting all dxservers
ca-prov-srv-03-impd-main starting
..
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify starting
..
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co starting
..
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc starting
..
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router starting
..
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$

Note: An enhancement request has been opened to allow the ‘dsa’ userID to use remote SSH processes, to address any challenges when the IMPD Data DSAs need to be copied or retained for backup purposes.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

Example for vApp Patches:

Note: There is no major difference in speed if the files being copied are already compressed. The below image shows that the initial copy runs at the rate of the network with latency. The value gained from using rsync is still the checksum feature, which allows a transfer to restart where it left off.
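As a hedged illustration of that restart behavior (same hosts and paths as the patch example below), adding --partial keeps any partially transferred file, so re-running the same command resumes where it left off instead of starting over:

IP=192.168.242.136;rsync --progress --partial -e 'ssh -ax' -avz $HOME/patches config@$IP:
# If the transfer is interrupted, simply re-run the same command; rsync's delta/checksum
# logic transfers only the missing portion of each file.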

vApp Patch process refined to a few lines (to three nodes of a cluster deployment)

# PATCHES
# On Local vApp [as config userID]
mkdir -p patches  && cd patches
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-VA-140300-0002.tar.gpg
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-IMV-140300-0001.tgz.gpg
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg           [Patch VA prior to any solution patch]
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
cd ..
# Push from one host to another via scp
IP=192.168.242.136;scp -r patches  config@$IP:
IP=192.168.242.137;scp -r patches  config@$IP:
# Push from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.136;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
IP=192.168.242.137;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
# Pull from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.135;rsync --progress -e 'ssh -ax' -avz config@$IP:./patches $HOME

# Verify the patch files were copied to the remote nodes
IP=192.168.242.136;ssh -tt config@$IP "ls -lart patches"
IP=192.168.242.137;ssh -tt config@$IP "ls -lart patches"

# On Remote vApp Node #2
IP=192.168.242.136;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

# On Remote vApp Node #3
IP=192.168.242.137;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

View of rotating the SSH RSA key for CONFIG User ID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP /bin/date -u +%s           [validate passwordless ssh by running a remote command]
IP=192.168.242.137;ssh $IP /bin/date -u +%s
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Avoid locking a userID in a Virtual Appliance

The below post describes enabling the .ssh private/public key process for the provided service IDs, to avoid dependency on a password that may be forgotten, and how to leverage these service IDs to address CA Directory data-sync challenges that may occur when there is WAN latency between remote cluster nodes.

Background:

The CA/Broadcom/Symantec Identity Suite (IGA) solution provides for a software virtual appliance. This software appliance is available on Amazon AWS as a pre-built AMI image that allows for rapid deployment.

The software appliance is also offered as an OVA file for VMware ESXi/Workstation deployment.

Challenge:

If the primary service ID is locked, or its password is allowed to expire, the administrator will likely have only two (2) options:

1) Request assistance from the Vendor (for a supported process to reset the service ID – likely with a 2nd service ID “recoverip”)

2) Boot from an ISO image (if allowed) to mount the vApp as a data drive and update the primary service ID.

Proposal:

Add a standardized SSH RSA private/public key to the primary service ID if one does not exist. If one exists, validate that you are able to authenticate and copy files between cluster nodes with the existing .ssh files. Rotate these files per internal security policies, e.g. once per year.

The focus for this entry is on the CA ‘config’ and ‘ec2-user’ service IDs.

An enhancement request has been added to have the ‘dsa’ userID included in the file ‘/etc/ssh/ssh_allowed_users’, to allow the same .ssh RSA process to address challenges during deployments where the CA Directory Data DSAs did not fully copy from one node to another.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

AWS vApp: ‘ec2-user’

The primary service ID for remote SSH access on Amazon AWS is ‘ec2-user’, and it is already deployed with a .ssh RSA private/public key. This is a requirement for AWS deployments, so this service ID is already enabled for the process.

This feature allows access via the private key from a remote SSH session using Putty/MobaXterm or similar tools. The ‘ec2-user’ .ssh folder may also be updated to expose this service ID to the other nodes, to assist with the deployment of patch files.

As an example, enabling ssh between multiple cluster nodes reduces scp transfers from remote workstations. Previously, patching five (5) vApp nodes required uploading the patch directly to each of the five (5) nodes. With ssh enabled between all nodes for the ‘ec2-user’ service ID, we only need to upload patches to one (1) node, then use scp to push the patch file(s) from that node to the other cluster nodes.

On-Prem vApp: ‘config’

We wish to emulate this process for on-prem vApp servers to reduce I/O for any files to be uploaded and/or shared.

This process has strong value when CA Directory *.db files are out-of-sync, or during initial deployments where there may be network issues and/or WAN latency.

Below is an example to create and/or rotate the private/public SSH RSA files for the ‘config’ service ID.


Below is an example to push the newly created SSH RSA files to the remote host(s) of the vApp cluster. After this step, we can now use scp processes to assist with remediation efforts within scripts without a password stored as clear text.

Copy the RSA folder to your workstation, to add to your Putty/MobaXterm or similar SSH tool, to allow remote authentication using the public key.

If you have any issues, use the embedded verbose logging within the ssh client tool (-vv) to identify the root issue.

ssh -vv userid@remote_hostname

Example:

config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > eval `ssh-agent` && ssh-add
Agent pid 5717
Enter passphrase for /home/config/.ssh/id_rsa:
Identity added: /home/config/.ssh/id_rsa (/home/config/.ssh/id_rsa)
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ >
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > ssh -vv config@192.168.242.128
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.242.128 [192.168.242.128] port 22.
debug1: Connection established.
debug1: identity file /home/config/.ssh/identity type -1
debug1: identity file /home/config/.ssh/identity-cert type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/config/.ssh/id_rsa type 1
debug1: identity file /home/config/.ssh/id_rsa-cert type -1
debug1: identity file /home/config/.ssh/id_dsa type -1
debug1: identity file /home/config/.ssh/id_dsa-cert type -1
debug1: identity file /home/config/.ssh/id_ecdsa type -1
debug1: identity file /home/config/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-sha1
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug2: mac_setup: found hmac-sha1
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 141/320
debug2: bits set: 1027/2048
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '192.168.242.128' is known and matches the RSA host key.
debug1: Found key in /home/config/.ssh/known_hosts:2
debug2: bits set: 991/2048
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/config/.ssh/id_rsa (0x5648110d2a00)
debug2: key: /home/config/.ssh/identity ((nil))
debug2: key: /home/config/.ssh/id_dsa ((nil))
debug2: key: /home/config/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Offering public key: /home/config/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug2: input_userauth_pk_ok: SHA1 fp 39:06:95:0d:13:4b:9a:29:0b:28:b6:bd:3d:b0:03:e8:3c:ad:50:6f
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Thu Apr 30 20:21:48 2020 from 192.168.242.146

CA Identity Suite Virtual Appliance version 14.3.0 - SANDBOX mode
FIPS enabled:                   true
Server IP addresses:            192.168.242.128
Enabled services:
Identity Portal               192.168.242.128 [OK] WildFly (Portal) is running (pid 10570), port 8081
                                              [OK] Identity Portal Admin UI is available
                                              [OK] Identity Portal User Console is available
                                              [OK] Java heap size used by Identity Portal: 810MB/1512MB (53%)
Oracle Database Express 11g   192.168.242.128 [OK] Oracle Express Edition started
Identity Governance           192.168.242.128 [OK] WildFly (IG) is running (pid 8050), port 8082
                                              [OK] IG is running
                                              [OK] Java heap size used by Identity Governance: 807MB/1512MB (53%)
Identity Manager              192.168.242.128 [OK] WildFly (IDM) is running (pid 5550), port 8080
                                              [OK] IDM environment is started
                                              [OK] idm-userstore-router-caim-srv-01 started
                                              [OK] Java heap size used by Identity Manager: 1649MB/4096MB (40%)
Provisioning Server           192.168.242.128 [OK] im_ps is running
                                              [OK] co file usage: 1MB/250MB (0%)
                                              [OK] inc file usage: 1MB/250MB (0%)
                                              [OK] main file usage: 9MB/250MB (3%)
                                              [OK] notify file usage: 1MB/250MB (0%)
                                              [OK] All DSAs are started
Connector Server              192.168.242.128 [OK] jcs is running
User Store                    192.168.242.128 [OK] STATS: number of objects in cache: 5
                                              [OK] file usage: 1MB/200MB (0%)
                                              [OK] UserStore_userstore-01 started
Central Log Server            192.168.242.128 [OK] rsyslogd (pid  1670) is running...
=== LAST UPDATED: Fri May  1 12:15:05 CDT 2020 ====
*** [WARN] Volume / has 13% Free space (6.2G out of 47G)
config@cluster01 VAPP-14.3.0 (192.168.242.128):~ >

A view into rotating the SSH RSA keys for the CONFIG UserID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP /bin/date -u +%s           [validate passwordless ssh by running a remote command]
IP=192.168.242.137;ssh $IP /bin/date -u +%s
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Advanced Oracle JDBC Logging

One of the challenges for a J2EE application is to understand the I/O operations to the underlying database.

The queries/stored procedures/prepared statements all have value to the J2EE applications, but during the RCA (root-cause analysis) process it can be challenging to identify where gaps or improvements may be made. Improvements may come from the vendor of the J2EE application (via a support ticket/enhancement) or from custom client efforts for business-rule SQL processes or embedded JARs with JDBC logic.

To assist with these RCA efforts, we examined four (4) areas that have value when examining an application using an Oracle 12c/18c+ database.

  1. J2EE logging using embedded features for datasources. This includes the JDBC spy process for Wildfly/JBoss, and the ability to dynamically change logging levels or toggle features on/off with the jboss-cli.sh process.

  2. Intercept JDBC processes – an intermediate JDBC jar that adds additional logging of any JDBC process, e.g. think of a Wireshark-type process. [As this requires additional changes, we describe this process last.]

  3. Diagnostics JDBC Jars – Oracle provides “Diagnosability in JDBC” jars, with the format ojdbcX_g.jar, that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

  4. AWR (Automatic Workload Repository) Reports – using either the command line (SQL) or the Oracle SQL Developer GUI.

J2EE Logging (JDBC Spy)

An example of the above process using the Wildfly/JBoss jboss-cli.sh process is provided below in a CLI script. The high value of this process is that it takes advantage of OOTB existing features that do NOT require any additional files to be downloaded or installed.

This process is implemented via a flat file of keywords/commands for the JBoss management console, which is passed as input to the jboss-cli.sh process. (If you are not running under the context of the Wildfly/JBoss user, you will need to use the add-user.sh process to create management credentials.)

The below script focuses on the active databases for the Symantec/Broadcom/CA Identity Management solution’s ObjectStore (OS) and TaskPersistence (TP) databases. JDBC debug entries will be written to server.log.

# Name:  Capture JDBC queries/statements for TP and OS with IM business rules
# Filename: im_jdbc_spy_for_tp_and_os.cli
# /apps/CA/wildfly-idm/bin/jboss-cli.sh --connect  --file=im_jdbc_spy_for_tp_and_os.cli
# /opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01!  --file=im_jdbc_spy_for_tp_and_os.cli
#
connect
echo "Take a snapshot backup of IM configuration file before any changes"
:take-snapshot
# Query values before setting them
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo "Enable JDBC Spy for TP and OS DB"
echo
# Always use batch with run-batch - will auto rollback on any errors for updates
batch
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:write-attribute(name=spy,value=true)
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:write-attribute(name=spy,value=true)
/subsystem=jca/cached-connection-manager=cached-connection-manager/:write-attribute(name=error,value=true)
run-batch
#/subsystem=logging/logger=jboss.jdbc.spy/:read-resource(recursive=false)
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo ""

echo "Check if logger is enabled already; if not then set"
# Use log updates in separate batch process to prevent rollback of entire process if already set
echo
batch
/subsystem=logging/logger=jboss.jdbc.spy/:add(level=TRACE)
run-batch

#
#
# Restart the IM service (reload does not work 100%)
# stop_im start_im restart_im
A view of executing the above script. Monitor the server.log for JDBC updates.
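To confirm the spy entries are flowing, a quick check of the Wildfly server.log for the jboss.jdbc.spy logger category (log path assumed from the vApp layout used elsewhere in this post):

grep "jboss.jdbc.spy" /opt/CA/wildfly-idm/standalone/log/server.log | tail -20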

If you have access to install the X11 drivers for the OS, you may also view the new updates to the JBoss/Wildfly datasources.

Diagnostics JDBC Jars

Oracle provides “Diagnosability in JDBC” jars, with the format ojdbcX_g.jar, that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

Ref: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjdbc/JDBC-diagnosability.html#GUID-4925EAAE-580E-4E29-9B9A-84143C01A6DC

This diagnostic feature has been available for quite some time and operates outside of the JBOSS/Wildfly tier, by focusing only on the Oracle Debug Jar and Java, which are managed through two (2) JVM switches.

A challenge exists for the CA Identity Manager solution on Wildfly 8.x: the default logging process uses the newer JBoss logging modules. This appears to interfere with the direct use of the properties file for the Oracle debugging process; the solution appears to ignore the JVM switch “-Djava.util.logging.config.file=OracleLog.properties”.

To address the above challenge, without extensively changing the CA IM solution logging modules, we integrated into the existing process.

This process will require a new jar deployed, older jars renamed, and two (2) reference XML files updated for Wildfly/Jboss.
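For reference, a hedged sketch of where the two (2) JVM switches would normally be placed for a standalone Wildfly/JBoss deployment (the paths, the swap to the ojdbc8_g.jar, and the minimal OracleLog.properties content are assumptions; as noted above, the IM Wildfly 8.x logging modules may ignore the properties-file switch unless the log-module integration is performed):

# Append the Oracle JDBC diagnosability switches to the Wildfly JVM options.
cat << 'EOF' >> /opt/CA/wildfly-idm/bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -Doracle.jdbc.Trace=true"
JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.config.file=/opt/CA/wildfly-idm/OracleLog.properties"
EOF

# A minimal java.util.logging properties file for the oracle.jdbc logger namespace.
cat << 'EOF' > /opt/CA/wildfly-idm/OracleLog.properties
handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.pattern=/tmp/oracle_jdbc_trace.log
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
.level=SEVERE
oracle.jdbc.level=FINE
EOF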

AWR Reports

This is a common method that DBAs will use to assist application owners to identify I/O or other challenges that may be impacting performance. The ability to execute the AWR Report may be delegated to a non-system service ID (schema IDs).

This process may be executed at the SQL*Plus prompt, but for application owners it is easiest from the Oracle SQL Developer GUI. The process outlined in the one (1) slide diagram below showcases how to generate an AWR report for our requirements.

This process assumes the administrator has access to a DB schema ID that has access to the AWR (View/DBA) process. If not, we have included the minimal access required.
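For completeness, a hedged SQL*Plus sketch of the same AWR flow from the command line (the schema ID, password, and EZConnect string are placeholders; the schema is assumed to have EXECUTE on DBMS_WORKLOAD_REPOSITORY and the AWR/DBA views per your DBA's policy, and the awrrpt.sql script is run from a host that has the Oracle database home available):

# Take a manual AWR snapshot, reproduce the workload, then take a second snapshot.
sqlplus app_schema/Password01@//database_srv:1521/xe << 'EOF'
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
EOF
# ... run the workload of interest, repeat the snapshot, then generate the report.
# awrrpt.sql ships with the database home and prompts for the report format and the two snapshot IDs.
sqlplus app_schema/Password01@//database_srv:1521/xe @?/rdbms/admin/awrrpt.sql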

Intercept JDBC processes

One of the more interesting processes is a 3rd party “man-in-the-middle” approach that will capture all JDBC traffic. We can think of this as a Wireshark-type process.

We have left this process to last, as it does require additional modification to the JBoss/Wildfly environment. If the solution is deployed standalone, then the changes are straightforward. If the solution is on the Virtual Appliance, we will need to review alternative methods or request access to add additional modules to the appliance.

For this process, we chose the p6spy process.

A quick view to see the JDBC calls being sent from the IM solution to the Oracle Database with values populated. We can also capture the return, but this would be a very large amount of data.

A quick view of the “spy.log” output for the CA IM TP database during startup.


#!/bin/bash
##################################################
# Name: p6spy.sh
#
# Goal:  Deploy the p6spy jar to assist with RCA for IM solution
# Ref:  https://github.com/p6spy/p6spy
#
# A. Baugher, ANA, 04/2020
##################################################
JBOSS_HOME=/opt/CA/wildfly-idm
USER=wildfly
GROUP=wildfly

mkdir -p /tmp/p6spy
rm -rf /tmp/p6spy/*.jar
cd /tmp/p6spy
time curl -L -O https://repo1.maven.org/maven2/p6spy/p6spy/3.9.0/p6spy-3.9.0.jar
ls -lart *.jar
md5sum p6spy-3.9.0.jar
rm -rf $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
mkdir -p $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cp -r -p p6spy-3.9.0.jar $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
ls -lart $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cd /opt/CA/wildfly-idm/modules/system/layers/base/com/p6spy/main
cat << EOF >> module.xml
<module xmlns="urn:jboss:module:1.0" name="com.p6spy">
    <resources>
        <resource-root path="p6spy-3.9.0.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
        <module name="com.ca.iam.jdbc.oracle"/>
    </dependencies>
</module>
EOF
chown -R $USER:$GROUP $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 755  $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 664  $JBOSS_HOME/modules/system/layers/base/com/p6spy/main/*
ls -lart
cat module.xml
# Update spy.properties file
curl -L -O https://raw.githubusercontent.com/p6spy/p6spy/master/src/main/assembly/individualFiles/spy.properties
rm -rf  $JBOSS_HOME/standalone/tmp/spy.properties
cp -r -p spy.properties $JBOSS_HOME/standalone/tmp/spy.properties
chown -R $USER:$GROUP $JBOSS_HOME/standalone/tmp/spy.properties
chmod -R 666 $JBOSS_HOME/standalone/tmp/spy.properties
ls -lart $JBOSS_HOME/standalone/tmp/spy.properties
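
The downloaded spy.properties defaults are verbose; a hedged tweak, run in the same shell as the script above so JBOSS_HOME is set (excludecategories, excludebinary, and logfile are standard p6spy settings, and these later keys override the defaults when the file is loaded):

cat << EOF >> $JBOSS_HOME/standalone/tmp/spy.properties
excludecategories=info
excludebinary=true
logfile=$JBOSS_HOME/standalone/log/spy.log
EOF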

A view with “results” enabled. May view with binary or text results. (excludecategories=info and excludebinary=true)
Enable these configurations in the CA IM standalone-full-ha.xml file – Focus only on the CA IM TP database (where most activity resides)
 <drivers>
     <driver name="p6spy" module="com.p6spy">
         <driver-class>com.p6spy.engine.spy.P6SpyDriver</driver-class>
     </driver>
     <driver name="ojdbc" module="com.ca.iam.jdbc.oracle">
         <driver-class>oracle.jdbc.OracleDriver</driver-class>
         <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
     </driver>
 </drivers>

 <!-- ##############################  BEFORE P6SPY ##########################
 <datasource jta="false" jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="iam_im-imtaskpersistencedb-ds" enabled="true" use-java-context="true" spy="true">
     <connection-url>jdbc:oracle:thin:@//database_srv:1521/xe</connection-url>
     <driver>ojdbc</driver>
 ##############################  BEFORE P6SPY ########################## -->

 <!-- <datasource jndi-name="java:/jdbc/p6spy" enabled="true" use-java-context="true" pool-name="p6spyPool"> -->
 <datasource jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="p6spyPool" enabled="true" use-java-context="true">
     <connection-url>jdbc:p6spy:oracle:thin:@//database_srv:1521/xe</connection-url>
     <driver>p6spy</driver>
     <!-- remaining pool/security settings from the original datasource are unchanged -->
 </datasource>

These error messages may occur during setup or mis-configuration.

References:

https://p6spy.readthedocs.io/en/latest/index.html
https://p6spy.readthedocs.io/en/latest/configandusage.html#common-property-file-settings   [doc on JDBC intercept settings]
https://github.com/p6spy/p6spy/blob/master/src/main/assembly/individualFiles/spy.properties  [sample spy.properties file for JDBC intercepts]
https://github.com/p6spy/p6spy   [git]
https://mvnrepository.com/artifact/p6spy/p6spy    [jar]

Extra(s)

A view into the ojdbc8_g.jar for the two (2) system properties, which are typically all that is required to use this logging functionality.

oracle.jdbc.Trace and java.util.logging.config.file

COVID-19 and Privacy Preserving Contact Tracing


“Contact Tracing makes it possible to combat the spread of the COVID-19 virus by alerting participants of possible exposure to someone who they have recently been in contact with, and who has subsequently been positively diagnosed as having the virus.” – Apple

The conventional deterrent to the adoption of contact tracing is the lack of privacy controls. Privacy is vital, and in the times we are living in today, having a framework that allows for privacy-focused contact tracing to limit the spread of novel viruses is extremely important. Apple and Google have collaboratively put a structure in place for enabling contact tracing while preserving privacy.

Below is a simplified explanation of how contact tracing would work using a mobile device, as described in the specification.

  • Use of Bluetooth LE (Low Energy) for proximity detection (no use of location-based services, which is essential for preserving privacy)
  • Generation of daily tracing keys using a one-way hash function
  • Generation of rolling proximity identifiers that change every ~15 minutes and are based on the daily tracing key (a sketch of this derivation follows this list)
  • Advertising one’s own proximity identifiers and discovering foreign proximity identifiers
  • The user decides when to contribute to contact tracing
  • If diagnosed with COVID-19, the user consents to upload a subset of daily tracing keys
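As a purely illustrative sketch of the key/identifier relationship described above (this is NOT the exact Apple/Google cryptography; the HMAC-SHA256 derivation, key size, and truncation are assumptions for demonstration only):

# Generate a random 32-byte daily tracing key (hex).
DAILY_KEY=$(openssl rand -hex 32)
# Derive a rolling proximity identifier for the current ~15-minute interval.
INTERVAL=$(( $(date -u +%s) / 900 ))
RPI=$(echo -n "$INTERVAL" | openssl dgst -sha256 -mac HMAC -macopt hexkey:$DAILY_KEY -r | cut -c1-32)
echo "Interval $INTERVAL advertises rolling identifier $RPI"
# Only the daily key needs to be shared after a positive diagnosis; every interval
# identifier can then be recomputed and matched locally by other devices.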

To detect if one may have come in contact with a COVID-19 positive individual, they would:

  • Download COVID-19 positive daily tracing keys
  • Contact tracing app computes time-based proximity identifiers from the downloaded daily tracing keys on the local device
  • Checks if this local device has previously recorded any of these identifiers

The above mechanism of determining if one has come in contact with a COVID-19 positive person preserves privacy by:

  • Ensuring that the contact tracing keys cannot be reverse-engineered into computing identifying information of the originating device.
  • Not associating GPS or other location services with the keys
  • Performing verification of being in contact with another COVID-19 positive person on a local mobile device.

The specification is preliminary, and there is a strong attempt to ensure the privacy of individuals is protected within this framework. COVID-19 positive information uploaded onto central servers also does NOT contain any personally identifiable information. The detection itself is a decentralized process, as it is computed locally on an individual’s device. Central entities are not in control of either detecting or informing people in this entire process.

All of this works very well if everyone does their civic duty and reports when they are positive.

For broad adoption of contact tracing, there are opportunities for further improvement. The specification should describe the server(s) responsible for collecting uploaded information. A (mobile) device generates the daily tracing keys, and although the keys themselves have no mechanism to be associated with that device by an external process, uploading these keys from the same mobile device opens up the possibility of linking information: the IP address of a device that uploads anything to a server is always known to that server. Applications that use this privacy framework additionally need to consider connectivity-related exposure while uploading information to central servers. The specification should have a section with considerations on protecting against privacy exposure that results from connectivity.

Another area of improvement could be controls around potential abuse of this contact tracing mechanism. As per the specification, a person who has tested positive for COVID-19 opts in and uploads their daily tracing key information to central servers. The intent is for others who have been in proximity with them to know and take actions to self-quarantine to limit the spread. Pranksters or other entities may falsely claim to be COVID-19 positive to intentionally cause general disruption. If such actions happen at a large scale, it will severely impact the reliability, credibility, and adoption of contact tracing.

Preserving Privacy is not an easy problem to solve, and larger mindshare is needed to solve these challenges.

[Detailed specification at Apple’s website]

This town is big enough for us all: Expanding the CA Provisioning Tier Schema to 900+ Custom Fields

Based on recent requests, we wished to revisit this “hidden” gem to expand the CA Identity Suite provisioning schema to meet unique business requirements: enabling 100s of SaaS and on-prem applications/endpoints with custom business logic against users’ endpoint account attributes.

Since the early days of the CA Identity Suite solution (eAdmin r8.1sp2), there has been a Provisioning SDK that provides an approved process to extend the CA Identity Manager IMPD (provisioning directory) schema from the default of 99 user custom fields to 900 additional user custom fields. For comparison, the default 99 user custom fields are typically used alongside the standard 40-50 default user profile fields, e.g. givenName (First Name), sn (Last Name), userID, telephone #, etc., to meet most business use-cases.

Unfortunately, this extended schema process is not well known.

The only known documentation is an embedded readme.txt within a compressed package. Occasionally there will be support tickets or community notes that request this feature as an “enhancement”.

This package is included in the Provisioning SDK download; for IM r14.3, the file name is:

Component Name: CA Identity Manager r14.3 Legacy components
File: GEN500000000002780.zip ~ 200 MB

Background:

The CA Identity Suite (Identity Manager) provisioning tier does NOT attempt to be a meta-directory, but acts as a virtual directory to the 1000s of managed endpoints/userstores/applications. As long as the “explore” operation is successful, there will be a “pointer” object that references the correct location of the endpoint accounts. When a “correlation” operation occurs, this endpoint account “pointer” object is attached (via inclusion referential objects) to the associated global user ID.

By using this “virtual directory” architecture, it is possible for IM business rules or 3rd party tools to directly view the managed endpoints’ “real data” and not a “stored” representation of this data.

However, some clients do wish to “collect” the native data, and store this within the IMPD provisioning store, as SNAPSHOT data, to monitor for non-approved / OOB (out-of-band) access.   If some fields are dedicated to select endpoints, the default of 99 custom fields may quickly run out.

Tackling Case-insensitivity Requirement:

Adjust the IMPD schema for case-insensitivity; this allows for case-insensitive correlation rules and, if the new fields are exposed to the IME, case-insensitive comparisons for business rules (PX).

Challenge:

The above Provisioning SDK process will build the extended eTCustomField100-999 and eTCustomFieldName100-999 attributes with case=sensitive. Interestingly, we did not identify a requirement for case sensitivity with the default custom fields, but it appears this was a decision made when the SDK was created. Note the OOTB etrust_admin.schema file (for the IMPS data): this OOTB schema displays a mix of case sensitivity for the default eTCustomField00-99 and eTCustomFieldName00-99 attributes.

Proposal:

To address this new requirement, there are three (3) possible deployments to enable this extended schema. We will review the pros/cons of each possible deployment choice.

Supporting Note 1:

  • eTCustomFieldXXX is the attribute that will contain a value.
  • eTCustomFieldNameXXX is the attribute that will contain a business name for this custom field.

Supporting Note 2:

The CA IM provisioning tier was/is developed with early x86 MS VC++ code. We attempted to use a later release of the MS Visual Studio VC solution for this process, but it failed to generate the output files.

Phase 1 Steps: Enhance the IM Provisioning Tier with 900 new custom fields with case = insensitive.

  1. Download & install MS Visual Studio VC 2010 Express, to have access to the ‘nmake’ executable.
  2. Update the OS PATH variable to reference this MS VC 2010 bin folder.
  3. Execute the nmake binary, to ensure it is working fine:
    • where nmake & nmake /?
  4. Download & install the CA IM Provisioning SDK on the same server/workstation as the ‘nmake’ binary.
    • IM r14.3 GEN50000000002780.zip 200 MB
  5. Open a command-line window, then change folder to the Provisioning SDK’s COSX samples folder:

cd “C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\samples\COSX”

  6. Execute the gencosx.bat batch file to generate the additional schema for N attributes:

gencosx.bat 900 {Max allowed value is 900, which will generate attributes 100-999}

The output text file: cosxparse.pty

**** The above steps only need to be executed ONCE on a workstation. After the output text file is generated, we should only need & retain this file for future updates. ****

################################################################

  7. Use Notepad++ to search and replace a string in the generated file, cosxparse.pty:

“case=sensitive” to “case=insensitive”

{We may be selective and replace only a few attributes instead of all 900 additional attributes. A scripted alternative to this replace is sketched after these steps.}

  8. Execute the following commands to generate the binary file:
  • Use the batch file to set the environment values for the nmake program:
    “C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat“
  • Execute ‘nmake’:
    nmake

The new output file (binary) will be:
C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\data\cosxparse.ptt

  9. Before overwriting existing files, back up the three (3) prior files from the IMPS/CCS data folders & the IMPD schema folder:
    etrust_cosx.schema
    etrust_cosx.dxc
    cosxparse.ptt
  10. Copy the new file, cosxparse.ptt, to the IMPS server data folder.
  11. Stop the IMPS service: su - imps, then imps stop
  12. Execute the following command: schemagen -n COSX
  13. This process will create two (2) new output files:
    • etrust_cosx.dxc
    • etrust_cosx.schema
  14. Validate that the two (2) newly generated files have case-insensitivity set.
  15. Copy etrust_cosx.dxc to all CA Directory schema folders, including the DX routers (on the IMPS servers).
    • Validate this file is referenced in the IMPD group knowledge schema file: etrust_admin.dxg
  16. Copy etrust_cosx.schema & cosxparse.ptt to all CA IMPS servers, the CCS servers’ data folders, & the CA IMPS GUI data folder.
    • Validate the file, etrust_cosx.schema, is referenced in the IMPS configuration file: etrust_admin.conf
  17. Restart the CA Directory and IMPS/CCS services:
  • dxserver stop all / dxserver start all
  • imps stop / imps start
  • net stop im_jcs / net start im_jcs {this will also restart the im_ccs service}
  18. With the IMPS GUI:
  • Assign a ‘business name’ to the newly created eTCustomField100+ under
    SYSTEM/GLOBAL PROPERTIES/CUSTOM USER FIELDS
    {If you do not see these newly created fields, then the IMPS GUI data folder was not updated per step 16.}
  • Validate that E&C correlation rules will now work for these extended fields with case-insensitivity:
    SYSTEM/DOMAIN CONFIGURATIONS/EXPLORE AND CORRELATE/CORRELATION ATTRIBUTE/
  • Validate the custom fields are viewable for each Global User.
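As referenced in step 7 above, a hedged scripted alternative to the Notepad++ search/replace when the generated file is staged on a Linux host (assumes GNU sed is available):

cp -p cosxparse.pty cosxparse.pty.bak                      # keep a backup of the generated file
sed -i 's/case=sensitive/case=insensitive/g' cosxparse.pty # flip all generated attributes to case-insensitive
grep -c "case=insensitive" cosxparse.pty                   # confirm the number of attributes changed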

We may now STOP HERE if we do NOT need to expose these new custom fields to the IME.

Pro: Able to use custom fields for account templates and correlation rules.

Con: Not exposed to IME for 1:1 mapping nor exposed for PX Business Rules.

#############################################################

Phase 2 Steps – Advanced Configuration – Add custom fields to the IME to allow for 1:1 mapping and use of PX Business Rules.

  1. Update the JIAM (Java LDAP to IMPS API) reference file, jiam.jar, to allow the IME to manage these extended fields for PX business rules (a command-line sketch follows this list).
    • Use 7zip https://www.7-zip.org/ to extract files from jiam.jar; update the file CommonObjects.xml; then replace this file in the jar file.
    • Location of the reference file: ./wildfly-idm/standalone/deployments/iam_im.ear/library/jiam.jar
    • Location of the property file to update: \com\ca\iam\model\impl\datamodel\CommonObjects.xml
  2. Update the sections after eTCustomField99 with the below data, with case insensitivity set:

<property name="eTCustomField100">
  <doc>Custom Field #100</doc>
  <value default="false">
    <setValue>
      <baseType default="false">
        <strValue></strValue>
      </baseType>
    </setValue>
  </value>
  <metadata name="jiam.syncToAccounts">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="pt.modifyPrivilege">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="pt.ownerPrivilege">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="isMultiValued">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="beanPropertyName">
    <value> <strValue>customField100</strValue> </value>
  </metadata>
  <metadata name="pt.minimumAbbreviation">
    <value> <intValue>10</intValue> </value>
  </metadata>
  <metadata name="pt.internalName">
    <value> <strValue>CustomField100</strValue> </value>
  </metadata>
  <metadata name="pt.editType">
    <value> <strValue>string</strValue> </value>
  </metadata>
  <metadata name="pt.editFlag">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="pt.caseSensitivity">
    <value> <strValue>insensitive</strValue> </value>
  </metadata>
  <metadata name="pt.asciiOnly">
    <value> <boolValue>false</boolValue> </value>
  </metadata>
  <metadata name="pt.dataLocation">
    <value> <strValue>db</strValue> </value>
  </metadata>
</property>

  3. Update the CA IMPS directory.xml as needed for some or all 900 fields:
<ImsManagedObjectAttr physicalname="eTCustomField100" description="Custom Field 100" displayname="Custom Field 100" valuetype="String" multivalued="true" wellknown="%CUSTOM_FIELD_100%" maxlength="0"/>
  4. Update the IME’s IMCD to IMPS 1:1 mappings:
    • identityEnv_environment_settings.xml
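For step 1 above, a hedged command-line sketch of updating jiam.jar in place on the vApp (assumes the zip/unzip utilities are available and a backup is kept; paths are taken from the locations listed above):

cd /opt/CA/wildfly-idm/standalone/deployments/iam_im.ear/library
cp -p jiam.jar jiam.jar.bak                                          # keep a backup of the original jar
unzip -o jiam.jar com/ca/iam/model/impl/datamodel/CommonObjects.xml  # pull the property file out of the jar
vi com/ca/iam/model/impl/datamodel/CommonObjects.xml                 # add the eTCustomField100+ property blocks
zip -f jiam.jar com/ca/iam/model/impl/datamodel/CommonObjects.xml    # freshen the entry inside the jar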

We may now stop here. The next advanced configuration is only required if we wish to manage the various endpoint Mapping tabs with the IM UI instead of the IMPS GUI. We consider the following Phase 3 steps to be low value for the effort, as this configuration is typically set once and done in the IMPS GUI.

Pro: Able to use custom fields for account templates and correlation rules. Also able to map these fields 1:1 in the IME, so IMCD attributes can be mapped to the IMPS extended custom attributes. These IMPS extended custom attributes will now be exposed for PX Business Rules.

Con: Not exposed to IM UI to update Endpoint’s Mapping TAB for ADS and DYN endpoints.

Phase 3 Steps – IME Advanced

If planning to expose these new custom fields in both the endpoint Mapping Attribute screen & endpoint account templates via the IME, follow these additional steps:

  1. Replace commonobjects.xml in ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\lib\roledefgen.jar by following the steps given below:
    • rename roledefgen.jar to roledefgen.zip
    • open roledefgen.zip
    • open com\ca\iam\roledefgen\commonobjects.xml and replace the contents with the attached/provided commonobjects.xml file
    • save the zip
    • rename the zip back to jar
  2. roledefgen.jar will now contain the commonobjects.xml file with the extended custom attributes.
  3. Execute RoleDefGenerator.bat to generate jars for all the required Java/DYN endpoints:
    • ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\bin> RoleDefGenerator.bat -d -h -u “”
  4. Open the generated endpoint jars one by one and modify them by following the steps below:
    • rename the original .jar to .zip
    • open framework.xml and increase the version “version=” (2nd line)
    • rename the generated .jar to .zip
    • open and copy the contents of -RoleDef.xml
    • paste the copied content (from the previous bullet) into the file -RoleDef.xml in the original .zip (from the first bullet)
    • save the original .zip and rename it back to .jar
    • replace the saved .jar in ..\wildfly-8.2.0.Final\standalone\deployments\iam_im.ear\user_console.war\WEB-INF\lib
  5. Restart IM and test the custom attributes in the IM web UI.

Post Update Note:

  • Validate if we may need to rebuild the IMPD DSAs (4) for existing users that may have already had these extended attributes but with case=sensitive set previously.
    • This step is not required if this is the first time the extended attributes have been deployed or if the case=sensitive has not been changed.
    • Process: Export the IMPD LDIFs, rebuild the IMPD DSA and then re-import LDIFs.

Home Assistant: Docker Lab

Intro

I was interested to see an intersection between Docker, VMware, and an application (Home Assistant) that users may wish to run on their laptops and/or workstations.

The Home Assistant application seemed especially valuable to business travelers/road warriors that would like a simple and flexible dashboard to keep an eye out for activity at home.

I have put together the following steps to be completed in thirty (30) minutes or less using community and/or non-commercial licenses.

This lab will cover the following solutions/applications: VMware Player (free personal license), Home Assistant (open-source home automation platform), Docker (application automation on a prebuilt OS), Ring Door Bell (ring.com), and Fast.com (monitoring of download speeds).

Please review and see if this lab may have value for your project team(s), to increase their awareness of Docker while still having value for home use.

Ring Door Bell (ring.com) & Fast.com

The above dashboard image is the goal of this lab: to take advantage of the community tools for home automation and enable your Ring.com credentials to allow viewing/monitoring while on the road or at home. Additionally, we have added the Fast.com configuration to allow for bandwidth monitoring of download speed using Netflix’s sponsored site.

Step 1: Create a single folder for download(s) and installation

This avoids clutter from VMware configuration and data files, which would otherwise land in two (2) different folders if the defaults are used.

Step 2a:  Download the Home Assistant VMDK bootable disk image

We wish to pre-download this bootable image so it is ready to be consumed by VMware Player. (Note: If you already have VMware Workstation, you may use it instead of VMware Player.)

See the link below in the next step.

Step 2b:  Download the Home Assistant VMDK bootable disk image

The pre-built vmdk compressed file may be accessed under “Getting Started” and “Software Requirements”

https://www.home-assistant.io/hassio/installation/

Select the “VMDK (VMware Workstation)” link to download this file.

Step 2c:  Copy and Extract the VMDK from the compressed gz file

We suggest a copy be made of the vmdk file, as future steps will modify it. The file is compressed with gzip; you may use 7zip (https://www.7-zip.org/) or another 3rd party tool to extract it. The MS Windows built-in zip tool will likely not extract this file.

Step 3a:  Download a free, personal license copy of VMware Player

If you already have VMware Workstation, you may skip this series of steps, or you may install the VMware Player package alongside your existing VMware Workstation installation.

https://www.vmware.com/products/workstation-player.html

Step 3b:  Install VMware Player, and designate it for personal use, aka “non-commercial use”, when asked for a license key.

During installation, when asked for a license, select “non-commercial use” for personal use on your home laptop/workstation.

Step 4a:  Start VMware Player, and select “Create a New Virtual Machine”

Now we are ready to create our first Virtual Machine on our laptop/workstation. We will use a default boot-strap configuration to build the initial settings, then modify them for the Home Assistant pre-built bootable disk image.

Step 4b:  Select the following configurations to jump start VMDK

Choose a generic Linux Operating System and Version configuration. I selected “Other Linux 5.x or later kernel 64-bit”. Next, select the folder where the Home Assistant vmdk file was extracted. Rename your VM as you wish. I kept it as “homeassistant”.

Step 4c:  Allow discovery of the VMDK for Home Assistant

VMware player will recognize that a pre-existing vmdk file exists in this folder, and will warn you of this fact. Click Continue to accept this warning message.

On the next screen, select “Store virtual disk as a single file” to avoid the clutter of temporary files.

Step 4d:  Create the new Virtual Machine

We are now ready to complete the new Virtual Machine with default configurations.

Note: When this step is complete, please do NOT start/play the VM yet, as that would apply default OS configuration settings, which we do not require.

Step 5a:  Edit the new Virtual Machine Settings

Now we are ready to adjust the default configurations to enable the use of the pre-built Home Assistant VMDK bootable disk file.

Reminder: Do NOT start/play the image yet.

Select the “Edit Virtual Machine Settings”

Step 5b:  Edit the new Virtual Machine Settings

Remove the four (4) default configuration items

[ 1. Hard Drive (SCSI), 2. CD/DVD (IDE), 3. Sound Card, 4. Printer ]

Adjust the memory to 1 GB (1024 MB)

Do NOT click OK yet.

Step 5c:  Add correct Hard Drive Type (IDE) for bootable VMDK

Select “Add” button, to re-add a “Hard Drive” with Type = IDE. Select “Use an existing virtual disk”. This “existing virtual disk” will be the Home Assistant VMDK file.

Select Next button.

Step 5d: Select the “hassos_ova-2.xx.vmdk” file for the bootable existing disk

Select the Home Assistant VMDK file that was extracted. Ensure that you do NOT select the temporary file that was created earlier with the name “homeassistant.vmdk”.

Select Finish button.

Step 5e: Allow vmdk disk to be imported

You may convert the VMDK or allow it to remain in its prior format. We have tested both selections and have not observed any impact with either.

After import, observe that the Hard Drive now has IDE as the connection configuration.

We will now expand this Hard Drive from its default maximum size of 6 GB in the next step.


Step 5f: Expand VMDK from 6 GB (default) to 32 GB for max disk size

Select “Hard Drive”, then in the right sub-panel, select “Expand disk capacity”

Update the value from 6.0 to 32.0 for maximum disk size in GB.

Click OK and observe the update on both panel windows for the hard drive.

Click OK to close edit windows. Reminder: Do NOT start/play the image yet.

Step 6a: Convert “BIOS” (default) to “EFI” type for new Virtual Machine

This is the last step before we start the image. The Home Assistant bootable VMDK disk was designed and configured for the EFI boot-loader, instead of the older legacy “BIOS” boot-loader.

If you have VMware Workstation or an ESXi server, you may have access to a GUI entry to adjust this virtual firmware boot-loader configuration.

However, VMware Player does not expose this setting in the GUI. To address this challenge, we will use the VMware-documented method of directly updating the configuration file of our new Virtual Machine for one (1) setting. https://communities.vmware.com/docs/DOC-28494

Navigate to the folder where the VMDK was extracted. You will now see several other files, including the primary configuration file for our new Virtual Machine. Its name will be “homeassistant.vmx”. The “*.vmx” filename extension/suffix denotes the file that contains the hardware configuration for booting the VMware VM image.

Step 6b: Edit configuration file for new Virtual Machine

Use either MS Windows notepad.exe or Notepad++ or similar tool to edit the configuration file.

If the VM image has not been started, we will NOT find a key:value pair containing the string “firmware”. Note: If the VM image was started before we add our entry, then startup issues will occur. (If this happens, please restart the lab from Step 4a.)

Append the following string to the bottom of the file & save the file.

firmware = "efi"

Step 7a: Start the new Virtual Machine

We are now ready to start our image and begin to use the Home Assistant application. Select our new Virtual Machine & click “Play virtual machine”.

Observe the screen for “boot-loader” information related to EFI. This confirms that we configured the VMDK hard drive image to load correctly, and we should see no unexpected issues.

Step 7b: Click within the Virtual Machine window to activate it, then press <enter>

The VM will boot fairly quickly, and you may notice the text will appear to stop.

Click within the VM window with your mouse, then press the <ENTER> key to see the login prompt.

Enter the login userID: root

Note: If you wish to re-focus your mouse/keyboard outside of VMware Player, press the <CTRL> and <ALT> keys together to redirect focus. Click back into the VMware Player window at any time to enter new text.

Step 7c: Discover IP address of homeassistant docker application

Now we get to play with some basic shell and docker commands to get our IP address and validate a port.

At the hassio > prompt, enter the text: login

This will give us a root shell account. To find the current dynamic IP address that the VMware Player installation created for us, issue the following command:

ip addr | grep dynamic

To view the three (3) docker containers, issue the following command:

docker ps

This will display the status of each container. After about one (1) minute of uptime, we can use the Home Assistant application.

To validate the actual TCP Port used (8123), issue the following docker command:

docker exec -it -u root -e term=xterm homeassistant /bin/bash -c "netstat -anp | grep tcp | grep LISTEN"

We will use the IP address and TCP port (8123) within a browser window (IE/Chrome/Firefox/Opera/etc.) on the laptop/workstation to access the Home Assistant application.

Step 8a: Login to Home Assistant Application with a Browser

http://ip_address_here:8123

When we first start the Home Assistant Application, it will ask for a primary account to be created. Use either your name or admin or any value.

If you plan to eventually expose this application to the internet from your home system, we recommend a complex password, and perhaps storing it in a password safe like LastPass https://www.lastpass.com/ or locally in a KeePass https://keepass.info/ file.

Step 8b: Use detect to re-assign default location to your area

Adjust the defaults to your location if you wish. Use the “detect” feature to reset the values, then click Next. You may use the mouse to refine the location on the embedded map.

Step 8c: Home Assistant Landing Page

Click Finish to skip the question about early integration.

Now we are at the Landing Page for Home Assistant. Congratulations on the setup of Home Assistant.

We now will configure two (2) items that have value to home users.

Step 9a: Enable the Home Assistant Configuration Tool

Before we add on new features, we need to make it easy to adjust the Home Assistant configuration file.

Select the MENU item (the three lines in the upper left of the window, next to the HOME string).

You will see a side panel of selection items. Select “Hass.io”.

Step 9b: Select Add-On Store & Configurator Tool

Select the “ADD-ON STORE” displayed at the top of the window. Scroll down until you see the item “Configurator” under the section “Official add-ons”.

Select the item “Configurator”

Step 9c: Install and Start the Configurator Tool

Select “Install” and “Start” of the “Configurator” Tool

Step 9d: Open the Web UI to use the Configurator Tool

Select the “Open Web UI” link. You may wish to save this URL in your favorites, or remember how to re-access it for additional updates.

After the landing page for the “Configurator” tool has loaded, select the FOLDER ICON in the upper left of the window. This will allow you to access the various configuration files.


Step 9e: Select primary Home Assistant configuration file (configuration.yaml)

Now select configuration.yaml from the left panel. The default configuration file will load with minimal information.

This is where we will make most of the updates to enable our home integrations: Ring Doorbell and Fast.com (download monitor).

Step 10a: Add fast.com & Ring Door Bell Add-On (with sensors/camera)

We are now ready to add in as many integrations as we wish.

There are hundreds of prebuilt integrations that can be reviewed on the Home Assistant site.

For Ring Doorbell (ring.com) and Fast.com, we have already identified the configurations we need, and these can be pasted to the primary configuration file. We have also enclosed the references for each configuration.

# Download speed test for home use
# Ref: https://www.home-assistant.io/integrations/fastdotcom/

fastdotcom:
  scan_interval:
    minutes: 30

# Ring Doorbell
# Ref: https://www.home-assistant.io/integrations/ring/
# Ref: http://automation.moebius.site/2019/01/hassio-home-assistant-installing-a-ring-doorbell-and-simple-automations/
# Ref: https://www.ivobeerens.nl/2019/01/15/install-home-assistant-hass-io-in-vmware-workstation/

sensor:
  - platform: ring

ring:
  username: !secret ring_username
  password: !secret ring_password

camera:
  - platform: ring

binary_sensor:
  - platform: ring

Step 10b: Save configuration.yaml file  & confirm no syntax errors

Click Save, and validate that you see a GREEN checkmark (this indicates the syntax check of the configuration files passed for spacing and formatting).

After saving, click the FOLDER ICON in the upper left.

We will now add the Ring.com credentials to the secrets.yaml file.

Step 10c: Select “secrets.yaml” to host the Ring.com credentials

From the side panel, select the “secrets.yaml” configuration file to add the Ring.com credentials.

Step 10d: Enter Ring.com credentials & save this file

Enter Ring.com credentials in the following format.

# Enter your ring.com credentials here to keep them separate 
# from the default configuration file.

ring_username:  email_address_used_for_ring.com_here@email.com
ring_password:  password_used_for_ring.com_here

Step 11a: Restart the Home Assistant Application

Configurations are done. Restart the Home Assistant Application to apply the configurations for Ring.com and Fast.com.

Select “Configuration” from the left panel menu, then scroll down in right panel to select “Server Controls”

Step 11b: Restart Home Assistant Application

Select “Restart” and accept the warning message with OK. The connection will drop for 30-60 seconds, then the browser may reload to the prior screen (if you saved your credentials in the browser's password manager when prompted). If not, re-authenticate with your Home Assistant credentials.

Step 11c: Extra – Monitor for Error Messages in Notification Logs

This section is ONLY needed if you see an error message in the Notification Logs, e.g. missing data in the secrets.yaml and/or incorrect credentials for Ring.com.

Step 12: Done – Site 1 & Site 2

Below is an example for one (1) site with just one (1) Ring doorbell device, integrated with Fast.com.

Example with many devices integrated with Ring.com

We hope this lab was of value, and that others take advantage of this prebuilt appliance with Docker and VMware. Please share with others to allow them to gain awareness of Docker processes.

Extra of interest: AWS and Ring.com Mp4 Videos

There are additional configurations that will allow auto-downloading of the mp4 videos from the AWS hosted site for Ring.com. Note the Video_URL for camera.front_door.

A view of the many pre-built integrations for Home Assistant

https://www.home-assistant.io/integrations/

Additional Docker Commands for the Home Assistant Application

docker ps               [List all containers & running status; should see a minimum of three (3) running containers]
docker images           [List all images]
docker logs homeassistant   2>&1 | more
docker logs hassos_supervisor  2>&1 | more
docker logs hassio_dns
docker exec -it -u root -e term=xterm homeassistant /bin/bash   [shell]
docker exec -it -u root -e term=xterm homeassistant /bin/bash -c 'netstat -anp | grep tcp | grep LISTEN'  [validate network port TCP 8123]

Extra Step – Disable the annoying backspace keyboard beep within a VMware image for VMWare Player

VMware Player configuration item:

Add this line in C:\ProgramData\VMware\VMware Player\config.ini
mks.noBeep = "TRUE"

Enclosing a PDF of the lab for offline review

To Syslog or Not

Docker Image(s) provide fantastic value, as this platform-as-a-service methodology gets us all out of the painful “install-business”.  We may focus effort on the business value that a solution provides.

However, the associated docker containers may provide some challenges.

Business Risks:

For the CA API Gateway solution, we have two (2) business risks to address:

  1. The docker container of the API Gateway solution may be ephemeral & replaced with newer releases.
    • Example: Any API Gateway application logs that reside on the docker container, as a file, may be lost when the container version is updated or redeployed.
  2. The docker container of the associated MySQL Database may have growth concerns with the default OOTB API Gateway Audit Event process.
    • Example: The MySQL ibdata1 file may continue to grow and be constrained by the available disk space. To reduce the MySQL database size and remove the low-value audit data, it will be necessary to declare an outage window to export the data (without audit data), resize the MySQL database, and re-import the data (without audit data).

Resolution(s):

To address the above risks, we may leverage the syslog feature set provided by the API Gateway:

  • The API Gateway documentation does allow for syslog configuration for the primary Audit Event and the application logs.

This blog entry reviews the use of syslog: how to create individual syslog files for each API Gateway application, how to avoid common mis-configurations, and how to validate the process.

Before we start this process, to provide justification, let's review how large the API Gateway Audit Events are in the MySQL Database, and confirm that API Gateway logs are no longer retained on the container (as of the r9.4 release).

Validation of API Gateway Database Growth & Application Logs:

Pre-Step 00: Review the current MySQL ibdata1 file size (via docker command)

docker exec -it -u root -e TERM=XTERM ssg94_mysql57 /bin/bash -c "find /var -type f -mtime -1 -ls | head -5"
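If you prefer to check the ibdata1 file size directly, here is a minimal sketch, assuming the MySQL default data directory of /var/lib/mysql and the same ssg94_mysql57 container name used above:

# list the size of the InnoDB system tablespace file inside the MySQL container
docker exec -it -u root -e TERM=xterm ssg94_mysql57 /bin/bash -c "ls -lh /var/lib/mysql/ibdata1"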

Pre-Step 01: Review the audit tables sizes in the ‘ssg’ MySQL database with the below query:

docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=gateway --password=7layer ssg -e "SELECT TABLE_NAME, table_rows, data_length, index_length, round(((data_length + index_length) / 1024 / 1024),2) 'Size in MB' FROM information_schema.TABLES WHERE table_schema = 'ssg' ORDER BY (data_length + index_length) DESC; "

Pre-Step 02: Validate that no API Gateway application log files reside on the container and that the primary ssg log file is redirected to /dev/null (r9.4)

docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c 'ls -larth /opt/SecureSpan/Gateway/node/default/var/logs'

Enabling Syslog for API Gateway Applications

Step 1: Enable & update the remote rsyslogd service on the docker host (or a remote host) [/etc/rsyslog.conf]. A configuration sketch follows the list below.

  • Enable/Allow UDP 514
    • Un-comment two (2) lines
  • Define unique syslog facilities for each API GW application & the primary Audit Event
  • Exclude duplicate logging to /var/messages
    • Add a semi-colon with the facility.none on this line
    • ;local3.none;local4.none;local5.none;local6.none
  • Restart the updated rsyslog.service
    • systemctl restart rsyslog.service
  • Validate UDP 514 is available.
    • netstat -an | grep :514
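A minimal /etc/rsyslog.conf sketch of the items above, assuming the legacy directive syntax (e.g. RHEL/CentOS 7) and the facility-to-application mapping used in the validation steps below; the log file names are our own choices, not defaults:

# Enable/Allow UDP 514 (un-comment these two lines)
$ModLoad imudp
$UDPServerRun 514

# One unique facility and log file per API Gateway application, plus the Audit Event sink
local3.*     /var/log/servicenow.log
local4.*     /var/log/google-gcp.log
local5.*     /var/log/openstack.log
local6.*     /var/log/ssg-audit-events.log

# Exclude the new facilities from /var/log/messages to avoid duplicate logging
*.info;mail.none;authpriv.none;cron.none;local3.none;local4.none;local5.none;local6.none    /var/log/messages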

Step 2: Validate syslog is functioning correctly with the chosen facilities (syslog naming convention) using the OS logger command (a quick check of the target log files follows the list below):

  • logger -s -p local3.warn Testing for ServiceNow SysLog
  • logger -s -p local4.warn Testing for Google GCP SysLog
  • logger -s -p local5.warn Testing for OpenStack SysLog
  • logger -s -p local6.warn Testing for SSG Audit Event SysLog
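With the facility-to-file mapping sketched in Step 1, each logger test line should appear at the end of its target file:

tail -1 /var/log/servicenow.log
tail -1 /var/log/google-gcp.log
tail -1 /var/log/openstack.log
tail -1 /var/log/ssg-audit-events.log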

Step 3: Enable syslog for each API Application with a unique log file associated to a unique facility number.

a. Review the associated API Gateway Applications in the API Gateway Policy Manager and their associated URI strings, e.g. /gcp/*, /servicenow/*, etc.

b. Open the API Gateway Policy Manager / Tasks / Logging and Auditing / Manage Log/Audit Sinks Window

c. Create new Log Sinks for every API Gateway Application. Configure these for ‘syslog’, with the correct facility number, and ensure the FILTER is set correctly with the Category & Services URI string match (otherwise you may not see any data). Suggest Category=Traffic Log (as your first iteration). Reference for facility number (0-23) to match syslog naming conventions: https://tools.ietf.org/html/rfc5424 [page 10]
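For reference, the local-use syslog facilities used in this entry map to the following numeric values (local0 starts at 16 per RFC 5424):

local3 = facility 19 (ServiceNow)
local4 = facility 20 (Google GCP)
local5 = facility 21 (OpenStack)
local6 = facility 22 (SSG Audit Events)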

Step 4: Validate the API Gateway Application Syslog with remote web service call via curl to the various application URI strings.

curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/gcp/apple
cat /var/log/google-gcp.log | tail -1

curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/servicenow/pear
cat /var/log/servicenow.log | tail -1

curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/openstack/banana
cat /var/log/openstack.log | tail -1

Enabling Syslog for API Gateway Audit Event

The current documentation for enabling syslog for the API Gateway Audit Event process is clear, but administrators may face a challenge if the FILTER is not set correctly. To address this common mis-configuration, the following steps walk through the configuration and clarify where to avoid this challenge.

Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/9-4/security-configuration-in-policy-manager/tasks-menu-security-options/manage-log-audit-sinks/how-to-audit-to-a-remote-syslog.html

Step 1AE: Review the prior OOTB configuration / view. The API Gateway Audit Events are stored in the MySQL database, and the built-in view tool allows queries to be performed. However, unless “File/Delete Old Audit Events” is executed, growth will continue to impact the MySQL db.

Step 2AE: Configure the API Gateway Audit Events to not use the MySQL database. The below process will create an “[Internal Audit Sink Policy]” that we will use for the redirection.

Step 3AE: Important Step: Disable “all” assertions in the newly created “[Internal Audit Sink Policy]”.

  • Double-click on the lower-left panel item “[Internal Audit Sink Policy]”.
  • The upper-right panel will display nine (9) line items.
  • Use the mouse to select all lines, then click the RED button in the middle column to disable all assertions (lines).
  • In the upper-left panel, search for the string “continue”.
  • The object “Continue Processing” will display.
  • Use the mouse to drag-n-drop this item to the upper-right panel.
  • Click the “Save and Activate” selection on the upper-right panel, as shown in the image below.

Step 4AE: Now create the new API Gateway Log Sink for Audit Events. Ensure the Filters are broad, covering the following Categories: Audits and Gateway Log. Avoid any additional filters. Ensure the facility number matches the syslog naming convention.

Step 5AE: Confirm that audit events are being sent to the syslog defined.

Step 6AE: Confirm that API Gateway configuration for syslog is retained after “destroying” the docker container, and rebuilding it to connect to the existing MySQL database.

a. Destroy and rebuild API Gateway container (r9.4)

docker stop ssg94
docker rm ssg94
docker-compose -p ssg94 -f ./docker-compose-ssg94-join-db.yml up -d
cat /var/log/ssg-audit-events.log | tail -5

b. Wait one minute and try again

cat /var/log/ssg-audit-events.log | tail -5

Additional Notes/Recommendations:

The primary API Gateway startup ‘ssg’ logs, which are accessible via ‘docker logs containerID’, may be converted to syslog, but this process will impact the JSON formatting that ‘docker logs containerID’ uses.

A review of the OOTB syslog for /var/log/messages shows that these docker messages are also already forwarded here. Recommend skipping this unnecessary configuration step.

cat /var/log/messages | grep -i l7tech  | tail -2
docker logs ssg94 2>&1 | tail -2

Reference Table from rfc5424 ( https://tools.ietf.org/html/rfc5424 ). Facilities #16-22 are open and not predefined for other applications.

View of API Gateway “Manage Log Sinks” Window

The “View Logs” window for “Log Sinks” may display, but no data will be returned, as there are no local files. See the prior screenshot, where ssg_0_0.log is redirected to /dev/null.

If the above screen does not load correctly (r9.4), then in the docker-compose file, add in an additional JVM switch:

EXTRA_JAVA_ARGS: "         -Dcom.l7tech.server.disableFileLogsinks=false "

If you wish to know more or need assistance, please contact us.

Be safe and automate your backups for CA Directory Data DSAs to LDIF

The CA Directory solution provides a mechanism to automate daily on-line backups, via one simple parameter:

dump dxgrid-db period 0 86400;

Where the first number is the offset from GMT/UTC (in seconds) and the second number is how often to run the backup (in seconds), e.g. once a day = 86400 sec = 24 hr x 60 min/hr x 60 sec/min.
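For example, to run the online backup once a day at 0130 local time in a US Central (CST = UTC-6) site, the offset is (1.5 + 6) hours x 3600 seconds/hour = 27000 seconds, so the setting becomes:

dump dxgrid-db period 27000 86400;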

Two Gaps/Challenge(s):

History: The automated backup process will overwrite the existing offline file(s) (*.zdb) for the Data DSA. Any requirement or need to perform an RCA is lost due to this fact. What was the data like 10 days ago? With the current-state process, only the CA Directory or IM logs would be of assistance.

Size: The automated backup will create an offline file (*.zdb) footprint of the same size as the data (*.db) file. If your Data DSA (*.db) is 10 GB, then your offline file (*.zdb) will be 10 GB. The Identity Provisioning User store has four (4) Data DSAs, which multiplies this number, e.g. four (4) db files + four (4) offline zdb files at 10 GB each will require a minimum of 80 GB of free disk space. If we attempt to retain a history of these files for fourteen (14) days, this would be four (4) db + fourteen (14) zdb = eighteen (18) files x 10 GB = 180 GB of disk space required.

Resolutions:

Leverage the CA Directory tool (dxdumpdb) to convert the binary data (*.db/*.zdb) to LDIF, and use the OS crontab for the ‘dsa’ account to automate a post-‘online backup’ export and conversion process.

Step 1: Validate that the ‘dsa’ user ID has access to crontab (to avoid using root for this effort): cat /etc/cron.allow

If access is missing, append the ‘dsa’ user ID to this file.
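A minimal sketch, run as root, assuming the standard /etc/cron.allow location:

# grant the 'dsa' account access to crontab
echo "dsa" >> /etc/cron.allow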

Step 2: Validate that the online backup process has been scheduled for your Data DSAs. Use a find command (sketched below) to identify the offline files (*.zdb). Note the size of the offline Data DSA files (*.zdb).
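A minimal find sketch, run as the ‘dsa’ account (so that $DXHOME is set); the backup folder path is the one referenced in the crontab comments below and may differ on your system:

# offline (*.zdb) files under the DSA home and the configured backup location
find $DXHOME -type f -name "*.zdb" -ls
find /opt/CA/Directory/dxserver/backup/ -type f -name "*.zdb" -ls 2>/dev/null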

Step 3: Identify the online backup process start time, as defined in the Data DSA settings DXC file (or perhaps the DXI file). Convert this GMT offset time to the local time on the CA Directory server. (See the references to assist.)

Step 4: Use crontab -e as the ‘dsa’ user ID to create a new entry (you may use crontab -l to view any existing entries). Use the dxdumpdb -z switch with the DSA_NAME to create the exported LDIF file. Pipe this output to gzip to automatically bypass any need for temporary files. Note: Crontab has limited variable expansion, and any % characters must be escaped.

Example of the crontab for ‘dsa’, scheduled to run at 2:00 AM CST, thirty (30) minutes after the online backup process runs at 1:30 AM CST.

# Goal:  Export and compress the daily DSA offline backup to ldif.gz at 2 AM every day
# - Ensure this crontab runs AFTER the daily automated backup (zdb) of the CA Directory Data DSAs
# - Review these two (2) tokens for DATA DSAs:  ($DXHOME/config/settings/impd.dxc  or ./impd_backup.dxc)
#   a)   Location:  set dxgrid-backup-location = "/opt/CA/Directory/dxserver/backup/";
#   b)   Online Backup Period:   dump dxgrid-db period 0 86400;
#
# Note1: The 'N' start time of the 'dump dxgrid-db period N M' is the offset in seconds from midnight of UTC
#   For 24 hr clock, 0130 (AM) CST calculate the following in UTC/GMT =>  0130 CST + 6 hours = 0730 UTC
#   Due to the six (6) hour difference between CST and UTC TZ:  7.5 * 3600 = 27000 seconds
# Example(s):
#   dump dxgrid-db period 19800 86400;   [Once a day at 2330 CST]
#   dump dxgrid-db period 27000 86400;   [Once a day at 0130 CST]
#
# Note2:  Alternatively, may force an online backup using this line:
#               dump dxgrid-db;
#        & issuing this command:  dxserver init all
#
#####################################################################
#        1      2         3       4       5        6
#       min     hr      d-o-m   month   d-o-w   command(s)
#####################################################################
#####
#####  Testing Backup Every Five (5) Minutes ####
#*/5 * * * *  . $HOME/.profile && dxdumpdb -z `dxserver status | grep "impd-main" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
#####
#####  Backup daily at 2 AM CST  -  30 minutes after the online backup at 1:30 AM CST #####
#####
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-main"   | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-main"   | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-co"     | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-co"     | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-inc"    | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-inc"    | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz
0 2 * * *    . $HOME/.profile &&  dxdumpdb -z `dxserver status | grep "impd-notify" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-notify" | awk '{print $1}'`_`/bin/date --utc +\%Y\%m\%d\%H\%M\%S.0Z`.ldif.gz

Example of the above lines placed in a bash shell script, instead of being called directly via crontab. Note: Within a shell script, variables may be used and there is no need to escape the date % characters.

# DSA=main &&   dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=co &&     dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=inc &&    dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
# DSA=notify && dxdumpdb -z `dxserver status | grep "impd-$DSA" | awk '{print $1}'` | gzip -9 > /tmp/`hostname`_`dxserver status | grep "impd-$DSA" | awk '{print $1}'`_`/bin/date --utc +%Y%m%d%H%M%S.0Z`.ldif.gz
#

Example of the output:

Monitor with tail -f /var/log/cron (or syslog, depending on your OS version) to confirm when the crontab is executed for your ‘dsa’ account.

View the output folder for the newly created gzip'd LDIF files. The files may be extracted back to LDIF format via gzip -d file.ldif.gz. Compare these file sizes with the original (*.zdb) files of 2 GB.

Recommendation(s):

Implement a similar process and retain this data for fourteen (14) days, to assist with any RCA or similar analysis that may be needed for historical data. Avoid copying the (*.db or *.zdb) files for backup, unless using this process to force a clean sync between peer MW Data DSAs.

The Data DSAs may be reloaded (dxloaddb) from these LDIF snapshots; the LDIF files do not have the same file-size impact as the binary db files; and as LDIF files, they may be quickly searched for prior data using standard tools such as grep “text string” filename.ldif.
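Since the exports are gzip-compressed, they may also be searched in place without extracting them first; a minimal sketch (the search string and file pattern are hypothetical, following the crontab naming above):

zgrep "uid=jsmith" /tmp/*impd-main*.ldif.gz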

This process will assist in site preparation for a disaster-and-recovery (DR) scenario. Protect your data.

References:

dxdumpdb

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/administrating/tools-to-manage-ca-directory/dxtools/dxdumpdb-tool-export-data-from-a-datastore-to-an-ldif-file.html

dump dxgrid-db

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/reference/commands-reference/dump-dxgrid-db-command-take-a-consistent-snapshot-copy-of-a-datastore.html

If you wish to learn more or need assistance, contact us.

CA Identity Manager Task and Event State Codes

A reference for the two-byte Task and Event state codes for CA Identity Manager is listed below. A state code of 128, for an event or task, indicates that the state machine has completed all processing associated with the respective task or event.

Example of the events associated with a sample Create User task, and their respective final states.

Knowledge of event state transitions and the associated codes helps when troubleshooting failed tasks or tasks stuck in progress. As a good practice, when testing a use case, keep a record of the expected execution outcome for the use case (at the event level).

Contact us if you have questions or are experiencing problems with your IAM deployment. We will get you on the path to success and help you realize real business value from your IAM program.