Multi-Write HUB Model with democorp

A useful CA Directory feature for WAN latency challenges is the HUB model. This model allows data to sync to distant peer multi-write DATA DSAs without impacting the external application that is updating its own local router/DATA DSAs.

To assist with understanding this HUB model, we have leveraged the CA Directory samples of democorp & router to build out an architecture with six (6) DATA DSAs and two (2) router DSAs, to emulate two (2) data centers across the world. These samples are included with every CA Directory deployment under $DXHOME/samples/democorp & $DXHOME/samples/router.

This lab emulates two (2) of the three (3) data centers that are displayed within the CA documentation.

Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/ca-directory-concepts/directory-replication/multiwrite-mw-groups-hubs/topology-sample-and-disaster-recovery.html

This lab may be replicated with near-real-world WAN latency using the network latency feature of VMware Workstation.

https://www.vmware.com/products/workstation-pro.html

Below is a bash shell script used to create the lab environment on a single host OS, using the CA Directory samples of "democorp" and "router". These samples are copied and updated via sed commands so that each DSA is unique in its TCP ports and naming convention. The examples below use the nomenclature democorpX (A-F) and routerYY (AA or BB).

These DATA DSAs use the same suffix and are all referenced in the group knowledge file. The HUB model configuration changes the behavior of MW-DISP replication between data centers: MW-DISP replication is still used for all local sync between DATA DSAs in the same data center, and is ONLY used between data centers by the DATA DSAs designated as "HUBs" (aka multi-write-group-hub).
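For orientation, the HUB designation itself is purely a knowledge-file setting. A minimal sketch of the relevant lines, reconstructed from the sed edits in the script below (Steps 2a and 4e); it is illustrative rather than a complete .dxc file:

# democorpA.dxc - HUB member of data center AA (illustrative fragment)
    multi-write-group = hub_group_AA
    dsa-flags     = multi-write, no-service-while-recovering, load-share, multi-write-group-hub
# democorpB.dxc / democorpC.dxc - non-HUB peers in AA carry the same two lines minus multi-write-group-hub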

To test the value of the HUB model with WAN latency, we suggest executing the same lab on two (2) hosts, where one host has VMware network latency set to 150/150 milliseconds. Update the IP addresses/hostnames within $DXHOME/config/knowledge/*.dxc on both host OSes to reflect the correct data center host for each DSA.
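A hypothetical example of that hostname update, assuming the copied knowledge files still carry this lab host's name in their address clauses; the remote hostname below is a placeholder:

# On the data center AA host, point the BB DSAs at the remote (latency-enabled) host
for DSA in democorpD democorpE democorpF routerBB; do
  sed -i "s|$(hostname)|dc-bb-host.example.com|g" $DXHOME/config/knowledge/$DSA.dxc
done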

The diagram below outlines the deltas between the various democorp DATA DSAs needed to use the HUB model.

The changes below show the deltas within the *.dxc/*.dxg files in the knowledge folder for the democorp MW HUB model.

The image below captures the only deltas in the *.dxi (startup) files for the democorp MW HUB model, located within the servers folder. Note: if the CA Directory management tool is deployed and used for democorp, all configurations will be in a single *.dxi file.
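Reconstructed from the script's sed edits (Steps 2b, 4d, and 4e), the server-folder deltas amount to: the MW recovery flag flipped from false to true in each democorp*.dxi, each .dxi pointing at its data center's group knowledge file (groupAA.dxg or groupBB.dxg), and the router .dxi files gaining a write-precedence line inserted immediately after that source statement, for example in routerAA.dxi:

set  write-precedence = democorpA ,democorpB, democorpC;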

#!/bin/bash
##############################################
#
# Name: democorp_mw_hub_lab.sh
#
# Multi-Write HUB lab using CA Directory and the samples of
# democorp and router under DXHOME/samples
# A. Baugher, 04/2020 - ANA Technology Partner
#
# Assumptions:
#   CA Directory is deployed & dxprofile is enabled for dsa user
#   Execute script as dsa user
#
# Step 0.  Clean-Up prior deployment
#
# Step 1.  Auto deploy both democorp and router samples with: setup.sh -q
#
# Step 2.  Make common changes in democorp prior to copying
#
# Step 3.  Create six (6) copies of democorp and two (2) copies of router
#
# Step 4.  Update the six (6) copies of democorp for:
#     - name
#     - ports
#     - multi-write-group  (HUB group)
#     - DSA flags for MW & HUB-DSA
#     - Group knowledge file reference
#
#        Update the two (2) copies of router for:
#    - name
#    - ports
#    - Group knowledge file reference
#    - set write-precedence  (for HUB-DSA)
#
# Step 5. Start all DSAs
#
# Step 6. Test with dxsearch query
#
# Step 7. Execute the dxsoak command with the service account & time command
#
# Step 8. Update democorpA to force a single delta between peer members of AA and BB
#
# Step 9.  Create LDAP Export
#
# Step 10.  Create LDAP Delta & Compare the various democorp DSA to validate sync operations
#
#
##############################################
#set -xv
echo ..
echo "#############################################################"
echo "Step 0.  Clean up prior deployment of democorp and router"
echo "#############################################################"
dxserver stop all
sleep 5
kill -9 `ps -ef | grep dsa | grep democorp | grep -v grep | grep -v "democorp_mw_hub_lab" | awk '{print $2}'` >   /dev/null 2>&1
kill -9 `ps -ef | grep dsa | grep router   | grep -v grep | awk '{print $2}'` >   /dev/null 2>&1
sleep 5
rm -rf $DXHOME/data/democorp*.*
rm -rf $DXHOME/config/knowledge/democorp*.*
rm -rf $DXHOME/config/knowledge/router*.*
rm -rf $DXHOME/config/servers/democorp*.*
rm -rf $DXHOME/config/servers/router*.*
rm -rf $DXHOME/logs/democorp*.*
rm -rf $DXHOME/logs/router*.*
rm -rf $DXHOME/backup/delta*.*  > /dev/null 2>&1
rm -rf $DXHOME/backup/*.ldif > /dev/null 2>&1


echo ..
echo "#############################################################"
echo "Step 1a. Deploy clean version of democorp and router"
echo "#############################################################"
cd  $DXHOME/samples/democorp
$DXHOME/samples/democorp/setup.sh -q  > /dev/null 2>&1
cd $DXHOME/samples/router
$DXHOME/samples/router/setup.sh -q    > /dev/null 2>&1

cd
echo ..
echo "#############################################################"
echo "Step 1b. Create service ID in democorp for later use"
echo "#############################################################"
cat << EOF > $DXHOME/diradmin.ldif
version: 1
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: top
cn: diradmin
sn: diradmin
givenName: diradmin
userPassword: Password01
EOF

dxmodify -a -c -h `hostname` -p 19389 -f $DXHOME/diradmin.ldif

echo ..
echo "#############################################################"
echo "Step 1c.  Stop all running democorp & router DSAs"
echo "#############################################################"
dxserver stop all
sleep 10

echo ..
echo "#############################################################"
echo "Step 2a. Make common changes in pre-existing files before other modification"
echo "Update dsa-flags in democorp.dxc to allow Multi-Write with a HUB"
echo "#############################################################"
sed -i 's|ssl-auth|ssl-auth\n    multi-write-group = hub_group_AA\n     dsa-flags     =|g' $DXHOME/config/knowledge/democorp.dxc
sed -i 's|dsa-flags     =|dsa-flags     = multi-write, no-service-while-recovering, load-share|g' $DXHOME/config/knowledge/democorp.dxc

echo ..
echo "#############################################################"
echo "Step 2b. Update MW recovery in democorp.dxi file"
echo "#############################################################"
sed -i 's|recovery = false;|recovery = true;|g' $DXHOME/config/servers/democorp.dxi

echo ..
echo "#############################################################"
echo "Step 3a. Create six (6) copies of democorp and two (2) routers"
echo "Copy democorp data folder contents"
echo "#############################################################"
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpA.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpA.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpB.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpB.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpC.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpC.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpD.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpD.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpE.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpE.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpF.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpF.tx  > /dev/null 2>&1

echo ..
echo "#############################################################"
echo "Step 3b. Copy autostart folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpA
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpB
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpC
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpD
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpE
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpF
cp -r -p $DXHOME/config/autostart/router    $DXHOME/config/autostart/routerAA
cp -r -p $DXHOME/config/autostart/router    $DXHOME/config/autostart/routerBB

echo ..
echo "#############################################################"
echo "Step 3c. Copy knowledge folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpA.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpB.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpC.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpD.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpE.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpF.dxc
cp -r -p $DXHOME/config/knowledge/router.dxc   $DXHOME/config/knowledge/routerAA.dxc
cp -r -p $DXHOME/config/knowledge/router.dxc   $DXHOME/config/knowledge/routerBB.dxc
cp -r -p $DXHOME/config/knowledge/sample.dxg   $DXHOME/config/knowledge/groupAA.dxg
cp -r -p $DXHOME/config/knowledge/sample.dxg   $DXHOME/config/knowledge/groupBB.dxg

echo ..
echo "#############################################################"
echo "Step 3d. Copy server folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpA.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpB.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpC.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpD.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpE.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpF.dxi
cp -r -p $DXHOME/config/servers/router.dxi     $DXHOME/config/servers/routerAA.dxi
cp -r -p $DXHOME/config/servers/router.dxi     $DXHOME/config/servers/routerBB.dxi

echo ..
echo "#############################################################"
echo "Step 4a.  Update names & ports in democorp knowledge files"
echo "#############################################################"
sed -i 's|19389|29389|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|19390|29390|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPA =|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPA>|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|19389|29489|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|19390|29490|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPB =|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPB>|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|19389|29589|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|19390|29590|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPC =|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPC>|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|19389|29689|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|19390|29690|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPD =|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPD>|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|19389|29789|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|19390|29790|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPE =|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPE>|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|19389|29889|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|19390|29890|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPF =|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPF>|g' $DXHOME/config/knowledge/democorpF.dxc

echo ..
echo "#############################################################"
echo "Step 4b. Update knowledge files for router ports"
echo "#############################################################"
sed -i 's|19289|39289|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|19290|39290|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|dsa ROUTER =|dsa ROUTERAA =|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|19289|39389|g' $DXHOME/config/knowledge/routerBB.dxc
sed -i 's|19290|39390|g' $DXHOME/config/knowledge/routerBB.dxc
sed -i 's|dsa ROUTER =|dsa ROUTERBB =|g' $DXHOME/config/knowledge/routerBB.dxc

echo ..
echo "#############################################################"
echo "Step 4c. Update group knowledge file for three (3)MW Group HUB Peers "
echo "#############################################################"
sed -i 's|"router.dxc";|"routerAA.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|"democorp.dxc";|"democorpA.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|"democorpA.dxc";|"democorpA.dxc";\nsource "democorpB.dxc";\nsource "democorpC.dxc";\nsource "routerBB.dxc";\nsource "democorpD.dxc";\nsource "democorpE.dxc";\nsource "democorpF.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|source "unspsc.dxc";|#source "unspsc.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg

cp -r -p $DXHOME/config/knowledge/groupAA.dxg $DXHOME/config/knowledge/groupBB.dxg

#sed -i 's|"router.dxc";|"routerBB.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|"democorp.dxc";|"democorpD.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|"democorpD.dxc";|"democorpD.dxc";\nsource "democorpE.dxc";\nsource "democorpF.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|source "unspsc.dxc";|#source "unspsc.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg

echo ..
echo "#############################################################"
echo "Step 4d.  Update Server folder contents"
echo "#############################################################"
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpA.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpB.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpC.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpD.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpE.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpF.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/routerAA.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/routerBB.dxi


echo ..
echo "#############################################################"
echo "Step 4e.  Update HUB Configurations in DSA knowledge and DSA routers"
echo "#############################################################"
sed -i 's|load-share|load-share, multi-write-group-hub|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|load-share|load-share, multi-write-group-hub|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|/knowledge/groupAA.dxg";|/knowledge/groupAA.dxg";\nset  write-precedence = democorpA ,democorpB, democorpC;\n|g' $DXHOME/config/servers/routerAA.dxi
sed -i 's|/knowledge/groupBB.dxg";|/knowledge/groupBB.dxg";\nset  write-precedence = democorpD ,democorpE, democorpF;\n|g' $DXHOME/config/servers/routerBB.dxi

echo ..
echo "#############################################################"
echo "Step 4f.  Remove samples of router & democorp from starting "
echo "#############################################################"
rm -rf $DXHOME/config/servers/democorp.dxi
rm -rf $DXHOME/config/servers/router.dxi
rm -rf $DXHOME/config/autostart/democorp
rm -rf $DXHOME/config/autostart/router

echo ..
echo "#############################################################"
echo "Step 5. Start all DSAs"
echo "#############################################################"
dxcertgen certs > /dev/null 2>&1
dxserver start all

dxserver status

#exit

echo ..
echo "#############################################################"
echo "Step 6. Test all DSAs with dxsearch query"
echo "#############################################################"
# Comment out if too verbose
# Data DSAs
#dxsearch -h `hostname` -p 29389 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29489 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29589 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29689 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29789 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29889 -c -x -b o=DEMOCORP,c=AU
# Router DSAs
#dxsearch -h `hostname` -p 39289 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 39389 -c -x -b o=DEMOCORP,c=AU

# Data DSAs
#dxsearch -h `hostname` -p 29389 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29489 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29589 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29689 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29789 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29889 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
# Router DSAs
#dxsearch -h `hostname` -p 39289 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 39389 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01

echo ..
echo "#############################################################"
echo "Step 7. Execute the dxsoak command with the service account & time command"
echo "allow to run for over 5 sec to monitor changes for Multi-Write"
echo "may allow for longer times (1 hour) to get better performance metrics"
echo "#############################################################"
cd $DXHOME/samples/dxsoak
echo "Update democorpA (TCP 29389) to confirm MW to from democorpA (hub_group_AA) to democorpD (hub_group_BB)"
# Create a delete file first; then re-add entries
grep dn: democorp.eldf | grep ,ou=Services > democorp-del.eldf
sed -i 's|,c=AU|,c=AU\nchangetype: del\n|g' democorp-del.eldf

echo ..
echo "#############################################################"
echo "# Delete all DN entries with ou=Services: `wc -l democorp-del.eldf` on democorpA (TCP 29389)"
time ./dxsoak -c -t 2 -q 10 -l 5 -h `hostname`:29389 -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -w Password01 -f democorp-del.eldf

echo ..
echo "#############################################################"
echo "# Re-Add all DN entries with ou=Services: `wc -l democorp.eldf` on democorpD (TCP 29689)"
time ./dxsoak -c -t 2 -q 10 -l 5 -h `hostname`:29689 -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -w Password01 -f democorp.eldf


echo ..
echo "#############################################################"
echo "Step 8a. Update democorpA to force a single delta between peer members of AA and BB"
echo "#############################################################"
cd
cat << EOF > $DXHOME/diradmin_sn.ldif
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
changetype: modify
replace: sn
sn: diradmin_AA_new_update
EOF

echo "#############################################################"
echo "# Query democorpA (TCP 29389) for sn value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29389 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for sn value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp

echo "#############################################################"
echo "# Make update on democorpA"
echo "#############################################################"
dxmodify -a -c -h `hostname` -p 29389 -f $DXHOME/diradmin_sn.ldif

echo "#############################################################"
echo "# Query democorpA (TCP 29389) for sn value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29389 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for sn value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp

#exit
echo ..
echo "#############################################################"
echo "Step 8b. Update democorpF to force a reverse single delta between peer members of AA and BB"
echo "#############################################################"
cd
cat << EOF > $DXHOME/diradmin_givenName.ldif
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
changetype: modify
replace: givenName
givenName: diradmin_BB_new_update
EOF


echo "#############################################################"
echo "# Query democorpC (TCP 29589) for givenName value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29589 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for givenName value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp

echo "#############################################################"
echo "# Update democorpF to show replication via democorpD (HUB) to democorpA (HUB) "
echo "#############################################################"
dxmodify -a -c -h `hostname` -p 29889 -f $DXHOME/diradmin_givenName.ldif

echo "#############################################################"
echo "# Query democorpC (TCP 29589) for givenName value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29589 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for givenName value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp




echo ..
echo "###########################################################"
echo "Step 9b. Update CA Directory DSA to allow online backup ###"
echo "###########################################################"
echo " - Configure CA Directory to provide an data dump (zdb file) while DSA are online"
cp -r -p $DXHOME/config/settings/default.dxc.org $DXHOME/config/settings/default.dxc  > /dev/null 2>&1
cp -r -p $DXHOME/config/settings/default.dxc $DXHOME/config/settings/default.dxc.org  > /dev/null 2>&1
# Edit the DSA settings file to add in one line.  dump dxgrid-db;
chmod 744 $DXHOME/config/settings/default.dxc
echo "dump dxgrid-db;" >> $DXHOME/config/settings/default.dxc



echo ..
echo "######################################################################################"
echo "Step 9c. Re-init all DSA to data dump the CA DSAs for democorp & router "
echo "######################################################################################"
echo " - This make take 5-30 seconds to complete "
dxserver init all    > /dev/null 2>&1
# View for zdb or zd? (in-progress) files
sleep 10



echo ..
echo "#################################################################"
echo "Step 9d. Export DSA backup/offline zdb data files to LDIF file ###"
echo "#################################################################"
echo " - Export will happen after the backup/offline zdb files are fully created"
echo " - This make take 5-60 seconds  to complete "
echo ..
echo "#################################################################"
echo "Step 9e. Set WHILE loop for DemocorpF DSA ###"
echo "#################################################################"
until [ -f $DXHOME/data/democorpF.zdb ]
do
     echo " - Waiting till CA Directory has completed online data dump of DemocorpF DSA"
     sleep 5
done
sleep 5
echo ..
echo "#################################################################"
echo "Step 9f. Execute dxdumbdb for Democorp DSA - FULL ###"
echo "#################################################################"
mkdir $DXHOME/backup  > /dev/null 2>&1
cd $DXHOME/backup
dxdumpdb -z -f $DXHOME/backup/democorpA.ldif democorpA   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpB.ldif democorpB   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpC.ldif democorpC   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpD.ldif democorpD   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpE.ldif democorpE   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpF.ldif democorpF   > /dev/null 2>&1
sleep 5

echo ..
echo "#################################################################"
echo "Step 10a. Perform LDIF DELTA compare between democorpA and democorpB within same HUB MW group"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
#ldifdelta -x -S DSANAME  OLDFILE NEWFILE DELTAFILE
ldifdelta -x -S democorpA $DXHOME/backup/democorpA.ldif  $DXHOME/backup/democorpB.ldif $DXHOME/backup/delta-between-A-and-B.ldif
echo "#################################################################"
echo "Step 10b. Perform LDIF DELTA compare between democorpD and democorpE within same HUB MW group"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
ldifdelta -x -S democorpD $DXHOME/backup/democorpD.ldif  $DXHOME/backup/democorpE.ldif $DXHOME/backup/delta-between-D-and-E.ldif
echo "#################################################################"
echo "Step 10c. Perform LDIF DELTA compare between democorpC and democorpF across different HUB MW groups"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
ldifdelta -x -S democorpC $DXHOME/backup/democorpC.ldif  $DXHOME/backup/democorpF.ldif $DXHOME/backup/delta-between-C-and-F.ldif

echo .
echo .




Ref: This HUB Model lab was built off a prior lab for MW Sync with air-gap requirements.

https://community.broadcom.com/enterprisesoftware/communities/community-home/digestviewer/viewthread?MessageKey=62ccc41d-7c37-4728-ad1e-c82e7a8acc38&CommunityKey=f9d65308-ca9b-48b7-915c-7e9cb8fc3295&tab=digestviewer

Load Balancing Provisioning Tier

Prior releases of CA Identity Manager / Identity Suite have a bottleneck at the provisioning tier.

The top tier of the solution stack, the Identity Manager Environment (IME/J2EE application), may communicate with multiple Provisioning Servers (IMPS), but this configuration only has value for fail-over high availability.

This default deployment creates a "many-to-one" challenge: multiple IMEs experience a bottleneck communicating with a single IMPS server for provisioning.

If this IMPS server is busy, transactions for one or more IMEs are paused or may time out. Unfortunately, the IME (J2EE) error messages or delays do not make it clear that this is a provisioning bottleneck. Clients may attempt to resolve this by increasing the number of IME and IMPS servers, but will still be impacted by the provisioning bottleneck.

Two (2) prior methods used to overcome this bottleneck challenge were:


a) Pseudo hostname entries for the Provisioning Tier on the J2EE servers, rotating the order of the pseudo hostnames in each local J2EE host file so that their IP addresses resolve to different IMPS servers. This methodology gives us a 1:1 configuration, where one (1) IME is locked to one (1) IMPS (by the pseudo hostname/IP address). This method is not perfect, but it ensures that all IMPS servers are utilized when the number of IMPS servers equals the number of IME (J2EE) servers. Noteworthy, this method is used by the CA Identity Suite virtual appliance, where the pseudo hostnames are ca-prov-srv-01, ca-prov-srv-02, ca-prov-srv-03, etc. (see image above; an /etc/hosts sketch follows the connection example below)

<Connection
  host="ca-prov-srv-primary" port="20390"
  failover="ca-prov-srv-01:20390,ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390“
/>
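A hypothetical /etc/hosts sketch of method (a); the IP addresses below are illustrative, and each IME (J2EE) host rotates which pseudo name its "primary" resolves to:

# /etc/hosts on IME host #1
192.168.242.135   ca-prov-srv-primary   ca-prov-srv-01
192.168.242.136   ca-prov-srv-02
# /etc/hosts on IME host #2 (rotated)
192.168.242.136   ca-prov-srv-primary   ca-prov-srv-02
192.168.242.135   ca-prov-srv-01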

b) A router/load balancer placed in front of the IMPS servers (TCP 20389/20390) that provides "stickiness", so that when a round-robin model is used, the same IMPS server handles all requests from the IME that submitted a transaction. This avoids concerns/challenges of possible "RACE" conditions, where a modify operation may occur before the create operation.


The "RACE" challenge is a concern with both of the methods above, but the risk is low and can be managed with additional business rules that include pre-conditional checks, e.g., confirming the account exists before any modifications.

Ref: RACE https://en.wikipedia.org/wiki/Race_condition

Example of one type of RACE condition that may be seen.

Ref: PX Rule Engine: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/Cumulative-Patches/Latest-Cumulative-Patch-14_3-CP2.html

New CP2 Load Balancing Feature – No more bottleneck.

Identity Manager can now use round-robin load balancing support, without any restrictions on either type of provisioning operations or existing runtime limitations. This load balancing approach distributes client requests across a group of Provisioning servers.

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/release-features-and-enhancement/Identity-Manager-14_3-CP2.html#concept.dita_b51ab03e-6e77-49be-8235-e50ee477247a_LoadBalancing

This feature is managed in the IME tier, and will also address any RACE conditions/concerns.


No configuration changes are required on the IMPS tier. After updating to CP2, we can use the IME Management Console to export the directory.xml for the IMPS servers and update the <Connection> XML tag. This configuration may also be deployed to the virtual appliances.

<Connection
  host="ca-prov-srv-primary" port="20390"
  loadbalance="ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
  failover="ca-prov-srv-01:20390,ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
/>

View of CP2 to download.

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/Cumulative-Patches/Latest-Cumulative-Patch-14_3-CP2.html

Before applying this patch, we recommend collecting your metrics for feed operations that include multiple create and modify operations against a minimum of 1,000 IDs. Monitor the IMPS etatrans logs as well as the JCS/CCS logs. After the patch, run the same feed operations to determine the value of the provisioning load-balance feature and which provisioning delays have been addressed. You may wish to increase the number of JCS/CCS servers (MS Windows) to speed up provisioning to Active Directory and other endpoints.
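One hedged way to get a repeatable before/after number is to reuse the dxsoak utility from the lab above against the provisioning tier; the hostname, port, bind DN, and input file below are placeholders for your environment:

cd $DXHOME/samples/dxsoak
# Baseline run (repeat the identical run after CP2 and compare the elapsed time)
time ./dxsoak -c -t 2 -q 10 -l 60 -h imps-hostname:20389 -D "<imps-service-account-dn>" -w Password01 -f feed_1000_users.eldf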

Disaster Recovery Scenarios for Directories

Restore processes may be done with snapshots-in-time for both databases and directories. We wished to provide clarity on the restoration steps after a snapshot-in-time is utilized for a directory. The methodology outlined below has the following goals: a) allow sites to prepare before they need the restoration steps, and b) provide a training module to exercise samples included in a vendor solution.

In this scenario, we focused on the CA/Broadcom/Symantec Directory solution. CA Directory provides several tools to automate online backup snapshots, but these processes stop at copies of the binary data files.

Additionally, we desired to walk through the provided DAR (Disaster And Recovery) scenarios, determine what needed to be updated to reflect newer features, and show how we may validate that we did accomplish a full restoration.

Finally, we wished to assist with the decision tree model, where we need to triage and determine whether a full restore is required, or whether we may select a partial restoration via extracts and imports of selected data.

Cluster Out-of-Sync Scenario

Awareness

The first indicator that a userstore (CA Directory DATA DSA) is out-of-sync will be the CA Directory logs themselves, e.g. alarm or trace logs.

Another indication will be inconsistent query results for a user object, where different results are returned when using a front-end router to the DATA DSAs.

After awareness of the issue, the team will exercise a triage process to determine the extent of the out-of-sync data. For a quick check, one may execute LDAP queries directly to the TCP port of each DATA DSA on each host, and examine the results directly or even just the total number of entries, e.g. dxTotalEntryCount.

The returned count value will help determine whether the number of entries for each DATA DSA on the peer MW hosts is out-of-sync due to ADD or DEL operations. The challenge/gap with this method is that it will not show any delta caused by modify operations on the user objects themselves, e.g. a changed address field.

Example of LDAP queries (dxsearch/ldapsearch) to CA Directory DATA DSA for the CA Identity Management solution (4 DATA DSA and 1 ROUTER DSA)

su - dsa    OR [ sudo -iu dsa ]
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

# NOTIFY BRANCH (TCP 20404) 
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount
dn: dc=notify,dc=etadb

# INC BRANCH (TCP 20398)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# CO BRANCH (TCP 20396)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# MAIN BRANCH (TCP 20394)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# ALL BRANCHES - Router Port (TCP 20391)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount


A better process to identify the delta(s) is to automate the daily backup process, building out LDIF files for each peer MW DATA DSA and then performing a delta process between the LDIF files. We will walk through this more involved step later in this blog entry; a brief cron sketch follows below.
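A minimal sketch of that scheduling, assuming a cron entry under the dsa user; the script path is illustrative, and the script body would simply wrap the Step 3a-3c one-liner shown later in this entry:

# crontab -e  (as the dsa user) - nightly online backup + LDIF export at 02:00
0 2 * * * /home/dsa/daily_dsa_ldif_export.sh >> /tmp/daily_dsa_ldif_export.log 2>&1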

Recovery Processes

The link below has examples from CA/Broadcom/Symantec with recovery notes for CA Directory DATA DSAs that are out-of-sync due to extended downtime or an outage window.

The image below, pulled from the document (page 9), shows CA Directory r12.x using the latest recovery process of "multiwrite-DISP" (MW-DISP) mode.

This MW-DISP recovery process is the default for the CA Identity Management DATA DSAs, configured by the install wizard tools when they create the IMPD DATA DSAs.

https://knowledge.broadcom.com/external/article?articleId=54088

The above document is dated, and still mentions additional file structures that have been retired, e.g. oc/zoc, at/zat.

An enhancement request has been submitted to address both of these items:

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=c71a304b-a689-4894-ac1c-786c9a2b2d0d

The modified version we have started for CA Directory r14.x adds some clarity on the <dsaname>.dx files, and on which steps may be adjusted to support the split data structure of the four (4) IMPD DATA DSAs.

The same time-flow diagram was used. Extra notes were added for clarity and, where possible, examples of commands that assist with direct automation of each step (or that may be pasted into an SSH session window as the dsa service ID).

Step 1, implicit in the identification/triage process, is to determine what userstore data is out-of-sync and how large a delta we have. If the DSA service has been shut down (either deliberately or via a startup issue) for more than a few days, the CA Directory process will check the date stamp in the <dsaname>.dp file and the transaction time in the <dsaname>.tx file; if the gap between the dates is too large, CA Directory will refuse to start the DATA DSA and will issue a warning message.
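A quick triage sketch for Step 1: list the on-disk time-stamp and transaction files the DSA evaluates at startup, and compare their dates across the peer hosts (adjust the path if your DSA data files live in a sub-folder):

ls -larth $DXHOME/data/*.dp $DXHOME/data/*.tx 2>/dev/null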

In Step 2, we leverage the dxdisp <dsaname> command to generate a new time-stamp file, <dsaname>.dx, which will be used to prevent unnecessary sync operations for any data older than the date stamp in this file.

This command should be issued for every DATA DSA on the same host; this is especially true for split DATA DSAs, e.g. the IMPD (CA Identity Manager's Provisioning Directories). In our example below, to assist with this step, we use a combination of commands with a while-loop to issue the dxdisp command.

This command can be executed regardless of whether the DSA is running or shut down. If a <dsaname>.dx file already exists, any additional execution of dxdisp will add updated time-stamps to this file.

Note: The <dsaname>.dx file will be removed upon restart of the DATA DSA.

STEP 2: ISSUE DXDISP COMMAND [ Create time-stamp file for re-sync use ] ON ALL IMPD SERVERS.

su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxdisp "$LINE" ;done ; echo ; find $DXHOME -name "*.dx" -exec ls -larth {} \;


Step 3 will then ask for an updated online backup to be executed. 

In earlier releases of CA Directory, this required a telnet/ssh connection to the dxconsole of each DATA DSA, or the use of the DSA configuration files to contain a dump dxgrid-db; command that would then be executed with the dxserver init all command.

In newer releases of CA Directory, we can leverage the dxserver onlinebackup <dsaname> process. 

Dumping all DATA DSAs at the same time can be a challenge using manual procedures.

Fortunately, we can automate this with a single bash shell process; and as an enhancement, we can also generate the LDIF extracts of each DATA DSA for later delta compare operations.

Note: The DATA DSA must be running (started) for the onlinebackup process to function correctly. If unsure, issue a dxserver status or dxserver start all command beforehand.

Retain the LDIF files from the “BAD” DATA DSA Servers for analysis.

STEP 3a-3c: ON ALL IMPD DATA DSA SERVERS - ISSUE ONLINE BACKUP PROCESS
su - dsa OR [ sudo -iu dsa ]
bash

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -w -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`


Step 4a walks through the possible copy operations of the <dsaname>.zdb files from the "GOOD" host to the "BAD" DATA DSA host. The IMPD DATA DSAs will require that three (3) of the four (4) zdb files are copied, to ensure no impact to referential integrity between the DATA DSAs.

The preferred model to copy data from one remote host to another is via the compressed rsync process over SSH, as this is a rapid process for the CA Directory db / zdb files.

https://anapartner.com/2020/05/03/wan-latency-rsync-versus-scp/

Below are code blocks that demonstrate how to copy data from one DSA server to another DSA server.

# RSYNC METHOD
sudo -iu dsa

time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data

# SCP METHOD   
sudo -iu dsa

scp   REMOTE_ID@$HOST:./data/<folder_impd_data_dsa_name>/*.zdb   /tmp/dsa_data
/usr/bin/mv  /tmp/dsa_data/<incorrect_dsaname>.zdb   $DXHOME/data/<folder_impd_data_dsa_name>/<correct_dsaname>.db


Step 4b walks through the final steps before restarting the "BAD" DATA DSA.

The ONLY files that should be in the data folders are <dsaname>.db (binary data file) and <dsaname>.dx (ASCII time-stamp file). Ensure that the copied <prior-hostname-dsaname>.zdb file has been renamed to the correct hostname and extension, <dsaname>.db.

Remove the prior <dsaname>.dp (ASCII time-stamp file) and the <dsaname>.tx (binary data transaction file).
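A short sketch of that Step 4b cleanup on the "BAD" host; the DSA name is illustrative, and the paths assume the DSA's data files sit directly under $DXHOME/data (adjust if your IMPD DSAs use a sub-folder, as in the scp example above):

DSANAME=ca-prov-srv-01-impd-main                           # placeholder DSA name
mv $DXHOME/data/$DSANAME.zdb $DXHOME/data/$DSANAME.db      # rename the copied snapshot to the live .db name
rm -f $DXHOME/data/$DSANAME.dp $DXHOME/data/$DSANAME.tx    # remove the stale time-stamp and transaction files
ls -larth $DXHOME/data/$DSANAME.*                          # expect only the .db (plus the .dx from Step 2)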

Step 5a Start up the DATA DSA with the command:

dxserver start all

If there is any issue with a DATA or ROUTER DSA not starting, then issue the same command with the debug switch (-d)

dxserver -d start <dsaname>

Use the output from the above debug process to address any a) syntax challenges, or b) older PID/LCK files ($DXHOME/pid)

Step 5b Finally, use dxsearch/ldapsearch to run a unit test of authentication with the primary service ID. Use other unit/use-case tests as needed to confirm the data is now synced.

bash
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s base -b 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' '(objectClass=*)' | perl -p00e 's/\r?\n //g'


LDIF Recovery Processes

The steps above are for recovery via a 100% replacement method, where the assumption is that the "bad" DSA server does NOT have any data worth keeping or reviewing.

We also wish to clarify a process/methodology for the case where the "peer" multi-write DSAs may be out-of-sync, but we are not sure which is truly the "good DSA" to select, or where we may wish to merge data from multiple DSAs before we declare one to be the "good DSA" (with regard to the completeness of data).

By joining CA Directory commands together, we can automate snapshots and exports to LDIF files. These LDIF files can then be compared against their peer MW DATA DSA exports, or even against themselves at different snapshot export times. As long as we have the LDIF exports, we can recover from any DAR scenario.

Example of using CA Directory dxserver and dxdumpdb commands (STEP 3) with the ldifdelta and dxmodify commands.

The output from ldifdelta may be imported into any remote peer MW DATA DSA server via dxmodify against that hostname, to force a sync for the few objects that may be out-of-sync, e.g. password hashes or other attributes.

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`

ldifdelta -x -S ca-prov-srv-01-impd-co  /tmp/20200819_122820_1597858100_ca-prov-srv-01-impd-co.ldif   /tmp/20200819_123108_1597858268_ca-prov-srv-01-impd-co.ldif  |  perl -p00e 's/\r?\n //g'  >   /tmp/delta_file_ca-prov-srv-01-impd-co.ldif   ; cat /tmp/delta_file_ca-prov-srv-01-impd-co.ldif

echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
dxmodify -v -c -h`hostname` -p 20391  -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f /tmp/delta_file_ca-prov-srv-01-impd-co.ldif


The images below demonstrate a delta that exists between two (2) time snapshots. The CA Directory tool ldifdelta can identify and extract the modified entry for the user object.

The following examples show how to re-import this delta using the dxmodify command against the DATA DSA, with no other modifications required to the input LDIF file.

In the testing example below, before any update to an object, let’s capture a snapshot-in-time and the LDIF files for each DATA DSA.

Let's make an update to a user object using any tool we wish, or a command-line process like ldapmodify.

Next, let's capture a new snapshot-in-time after the update, so we will be able to utilize the ldifdelta tool.

We can use the ldifdelta tool to create the delta LDIF input file. After we review this file and accept the changes, we can then submit this LDIF file to the remote peer MW DATA DSAs that are out-of-sync.

We hope this has value for you and for any challenges you may have in your environment.

Upgrade CA API Gateway via docker “in-place”

CA API Gateway (ssg) is used to manage SaaS endpoints/applications for the CA/Symantec Identity Suite solution. One of the challenges with appliances and Docker containers is that the underlying 3rd-party libraries may become dated and require updates.

Most vendors will not allow post-deployment or direct updates to their containers' libraries, as this has an impact on the support model. So we must rely on the support process and push vendors to release additional updates to stay ahead of any security concerns.


The CA API Gateway (ssg), when deployed on Docker, has a streamlined process for updating in place, as long as you have backed up the MySQL database when the Docker images are updated.

We wanted to capture the process to upgrade from CA API Gateway 9.4 (ssg94) to Gateway 10.0 (ssg10). Fortunately, the MySQL 8.0 database has the same structure, tables, and routines as the MySQL 5.7 database for CA API Gateway 9.4.

The challenge we have is that the documented upgrade process is difficult to implement on the same host OS, and it misses an opportunity to manage the move from the 9.4 license file to the 10.0 license file during the re-import of the MySQL database.


The diagram below, from the CA API Gateway 10.0 upgrade documentation, can be adjusted to streamline the upgrade process.

Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/upgrade-the-gateway/upgrade-an-appliance-gateway/manual-expedited-appliance-upgrade.html

The documented process above outlines dropping the MySQL database ssg completely and then creating a new, clean db. We can slightly adjust this documented process to avoid the unnecessary step of re-importing the license file after the restart of the gateway container. We also wish to add additional validation steps to show what is changing.

Proposal for modifications:

  1. Create a clean CA API Gateway 10.0 (ssg10) Docker deployment on the same host OS. You may use docker-compose with the REST service enabled and with different TCP listening ports, so that the two (2) Docker containers can run simultaneously during the testing cycle. After testing, you may keep the default TCP listen ports of 8443 & 9443.
  2. Allow the CA API Gateway 10.0 container to start cleanly with the MySQL 8.0 DB and with the correct license file for version 10. Then export the MySQL table that contains the updated license (license_document).
  3. Import the prior MySQL backup file into the new CA API Gateway deployment. Then, before startup, import the ssg10 license MySQL file as well. This will replace the ssg94 license information.
  4. Restart the CA API Gateway container, monitor the logs for any errors, and ensure the new license file is used.
  5. If the REST API was enabled (via the docker-compose file and touching a file named "restman"), then use curl to validate that all REST services are available and that all prior API Gateway policy services are displayed.

A visual example of this process using the prior diagram.

Note: The official documentation uses sed to replace the string "NO_AUTO_CREATE_USER", but the documentation shows two examples, one with a comma and one without. We have included the one with the comma; we did not see this line in the MySQL SQL export, so it was deemed low value, but it is still included in our process.

Example of upgrade process and validation of using REST

Note the two (2) running CA API Gateway containers, 9.4 (with MySQL 5.7) and 10.0 (with MySQL 8.0), with different TCP listen ports; and the validation of REST services for ssg10.

Below are the above steps called out with additional validation steps, and the use of the "time" command to monitor the export of the files.

# Pre-Step 1:  On Test System:  Prepare SSG10 docker compose yml file and correct license.xml & confirm startup.
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}
docker ps -a
docker logs ssg10 -f --tail 100
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"


# Step 2:  On PROD HOST OS: Stop SSG94 and export the current MySQL 5.7 database with routines (aka stored procedures) & remove unwanted lines
docker stop ssg94
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.before.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.sql
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.updated.for.mysql8.sql
sed -i "s/NO_AUTO_CREATE_USER,//g"   			ssg94.backup.updated.for.mysql8.sql
sed -i "/Using a password on the command/d" 	ssg94.backup.updated.for.mysql8.sql


# Step 3: On PROD HOST OS: Deploy SSG10 with docker compose yml file & correct license xml file & export db table license_document
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}  
docker ps -a
docker logs ssg10 -f --tail 100     
docker stop ssg10
time docker exec -tt  mysql-ssg10  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines license_document  > ssg10.license.export.sql
sed -i "/Using a password on the command/d" 	ssg10.license.export.sql

# Step 4: On PROD HOST OS: Drop the SSG10 MySQL 8.0 ssg database and rebuilt with imports of SQL files.
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer drop ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer create ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg94.backup.updated.for.mysql8.sql
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg10.license.export.sql
docker exec -it mysql-ssg10  mysql --user=root --password=7layer ssg  -e "SELECT * FROM license_document;" | grep -A 12 -e "<license "

# Step 5: On PROD HOST OS:  Start SSG10 and validate no errors 
docker start ssg10       {Wait 90-120 seconds} 
docker ps -a

# Step 6:  Validate license    
docker logs ssg10 -f --tail 100  
docker logs ssg10 -f 2>&1  | grep -i license

# Step 7:  Validate REST services enabled and we can see all services
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
# Example to validate ServiceNow REST service to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://localhost:9443/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Example validate ServiceNow REST service via LB to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://192.168.242.135/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Direct REST service to ServiceNow to validate development instance is available.
curl --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json"  'https://dev101846.service-now.com/api/now/table/sys_user?sysparm_query=user_name=testalan13095'

# Step 8:  Certs required for the IM JCS tier to avoid typical cert issues (a keytool sketch follows below).
a. Ensure the CA API Gateway public root CA cert or self-signed cert is imported into each JCS keystore.
b. If using a load balancer, e.g. httpd, ensure its public root CA cert or self-signed cert is also imported into each JCS keystore.
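A hypothetical keytool import for step 8a; the certificate file, keystore path, alias, and store password below are placeholders that depend on your JCS install:

keytool -importcert -trustcacerts -alias apigw-root-ca -file /tmp/apigw_root_ca.cer \
  -keystore /path/to/jcs/conf/ssl.keystore -storepass changeit
# Restart the JCS service afterwards so the updated trust store is read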


Docker commands collected to assist with RCA efforts for operations teams

# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"

# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"


#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If MySQL root password is random, find via logs  (use redirect to switch from JSON to text to use grep)


#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the logs to see
#  docker logs ssg10 -f --tail 100

# Commands to install additional packages for vul scans (ps from procps) & update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux

#  Update password process
# Generate a SHA-512 (crypt) password hash (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'

# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "

# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"

# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"

# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"

# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/emailListeners?sort=host&order=desc
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl

Change the pmadmin password at the docker command line

Process flows collected for the CA API Gateway docker deployment

Example of the docker-compose yml file for CA API Gateway with REST web services and license xml file.

We attempt to keep useful notes/hints in the yml file for future reference. The example below redirects the CA API Gateway's standard ports of 8443 and 9443 to TCP 18443 and 19443, and MySQL from 3306 to 23306, for testing protocols in non-Production environments.

# docker-compose-ssg10-0-mysql8-0_with_rest_and_external_mysql_volume.yml
# Startup:  docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml up  -d
# Stop:     docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml down
#
#
# Ensure Host OS Network allows IPv4 forwarding:   sysctl -a | grep ipv4.ip_forward
# Validate docker network has access with curl:  curl -vk --tlsv1.2  https://www.service-now.com
# Note:  Do NOT use TABS in this file
# Monitor startup of containers with:  docker logs ssg10 -f --tail 100   AND   docker logs mysql-ssg10 -f  --tail 100
# https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/using-the-container-gateway/getting-started-with-the-container-gateway/run-the-container-gateway-on-docker-engine/sample-docker-compose-deployment-file.html
version: "2.2"
services:
   ssg10:
     container_name: ssg10
     # Ref: https://hub.docker.com/r/caapim/gateway/tags
     #image: caapim/gateway:latest
     image: caapim/gateway:10.0.00_20200428
     mem_limit: 10048m
     volumes:
        # Ensure ssg_license.xml is a valid SSG license file for 9.4 or 10.0
        - ./ssg_license_10.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
        # https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/apis-and-toolkits/rest-management-api.html
        # Touch the file restman to auto-start rest webservices
        # Validate REST API with curl
        # curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
        # curl --insecure --user pmadmin:7layer  https://localhost:18443/restman/1.0/rest.wadl
        - ./restman:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/services/restman
     ports:
       - "58443:8443"
       - "59443:9443"
     environment:
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql-ssg10"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql-ssg10:3306/ssg?useSSL=false"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false"
        SSG_INTERNAL_SERVICES: "restman wsman"
     links:
        - mysql-ssg10
   mysql-ssg10:
     container_name: mysql-ssg10
     # Ref https://hub.docker.com/_/mysql?tab=tags
     image: mysql:8.0.20
     #image: mysql:latest
     # SSG 10.0 requires MySQL 8.x per documentation
     #https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/using-mysql-8_0-with-gateway-10.html
     mem_limit: 1048m
     restart: always
     ports:
        - "23306:3306"
     environment:
        - MYSQL_ROOT_PASSWORD=7layer
        #- MYSQL_RANDOM_ROOT_PASSWORD=yes
        - MYSQL_USER=gateway
        - MYSQL_PASSWORD=7layer
        - MYSQL_DATABASE=ssg
     command:
       - "--character-set-server=utf8mb3"
       - "--log-bin-trust-function-creators=1"
       - "--default-authentication-plugin=mysql_native_password"
       - "--innodb_log_buffer_size=32M"
       - "--innodb_log_file_size=80M"
       - "--max_allowed_packet=8M"
#     volumes:
#       - mysql_db8:/var/lib/mysql
# Persist SSG MySQL DB Data
# Validate after shutdown with:  docker volume ls  &  docker volume inspect ssg_mysql_db
# Note:  Important - Random Root Password will not work for persist MySQL - Password must be known for 1st time
#   volumes:
#     mysql_db8:
#
# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"
# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If the MySQL root password is random, find it via the logs (use the 2>&1 redirect so the output can be piped to grep)
#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the logs to see
#  docker logs ssg10 -f --tail 100
# Commands to install additional packages for vul scans (ps from procps) & update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux
#  Update password process
# Generate a SHA-512 (crypt) password hash (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'
# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "
# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"
# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"
# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"
# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/emailListeners?sort=host&order=desc
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl

Clean-Up Orphans and Refine Correlation Rules

Correlation rules may be very simple. A unique ID on an IAM solution should match the unique ID (or combinations of attributes) to form a one-to-one (1:1) relationship to the identity on a managed endpoint/application.

Most sites that have had the opportunity have started using GUID/UUID values for the correlation ID on the IAM solution and, if the endpoint/application allows it, storing the same GUID/UUID in an open field that is typically not the login ID field.

The example below uses a GUID/UUID format as the primary identifier between the IAM solution and an Active Directory domain endpoint/application.
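
A minimal command-line sketch of the idea follows. The AD attribute (extensionAttribute1), DNs, and bind account are placeholders; the IAM solution would store the same GUID/UUID as its correlation ID.

# Mint a GUID/UUID on the IAM side to act as the correlation ID
CORRID=$(uuidgen)
echo "Correlation ID: $CORRID"
# Stamp the same value into an open AD attribute (attribute, DNs, and bind account are placeholders)
ldapmodify -H ldaps://ad.example.com:636 -D "CN=svc_iam,OU=Service,DC=example,DC=com" -W <<EOF
dn: CN=Test User,OU=Users,DC=example,DC=com
changetype: modify
replace: extensionAttribute1
extensionAttribute1: $CORRID
EOF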

We may also have many different correlation rules, or primary/secondary correlations, for every application/endpoint. Until the rules are correct, there is a likelihood of incorrect or default correlations.

An incorrect correlation may be removed manually, and the correct entries re-attached. However, this does not address future correlation runs if the rules are not updated.

Example of removing a correlation from the orphan ID “[default user]”


Example of removing an incorrect correlation manually within the IAM solution

To assist with refinement of correlation rules, a feedback process/script may have value.

The below script demonstrates a feedback process that uses the OS ldapsearch/ldapdelete utilities against the CA Identity Manager Provisioning Tier (TCP 20389/20390) to clean up the orphan IDs under “[default user]”.

The script will query all “inclusions” where an endpoint account has been incorrectly associated with the Global User “[default user]” and return a count of these records. It will capture the dn values of these inclusion records, then feed them to the OpenLDAP ldapdelete utility to have them removed. Since we are using the IMPS service (TCP 20389/20390), the solution still maintains referential integrity during the clean-up process.

After the deletions are complete, we will re-initialize a new E&C (explore & correlate) process using any new correlation rules that may have been added. This is the opportunity for an administrator to adjust their own correlation rules and then re-execute the script. If the correlation rules do not match, the prior correlations will return to “[default user]”.

#!/bin/bash
#####################################################################################################################
#
# Name: Clean Up [default user]
#
# Goal:  Script to clean up [default_user] correlations to allow for better orphan or rogue account identification
#  - Ensure that IMPS Service TCP 20389/20390 is used to maintain referential integrity of the inclusions entries
#    during delete operations.
#
# Ref:  CA IM r14.x solution & OS ldapsearch/ldapdelete
#
# A. Baugher, ANA, 04/2020
#
#####################################################################################################################
# set -xv
DATETZ=$(date -d "1970-01-01 00:00:00 `date +'%s'` seconds"  +'%Y-%m-%dT%H:%M:%S.%3NZ')
IMPSHOST=`hostname`
IMPSPORT=20390
IMPSUSERDN='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
# Use pwd file to avoid clear text passwords in script
# echo -n CLEAR_TEXT_PASSWORD > .imps.pwd
IMPSPWD=`cat .imps.pwd`
#####################################################################################################################
BASE_DN='eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta'
SUP_DN_ENTRY='eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im'
FILTER="(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=$SUP_DN_ENTRY))"
SEARCH=sub
ATTRIBUTES='dn eTInclusionID'
EXCLUDE="  -e ^$ "
#SIZE=" -z 10"
SIZE=" -z 0"
FILENAME=default_user_guid.txt
rm -rf $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID > tmp_file
echo "LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D '$IMPSUSERDN' -y ./.imps.pwd -b '$BASE_DN' -s $SEARCH '$FILTER' $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F': ' '{print \$2}' | grep eTInclusionID "
uniq -i tmp_file > $FILENAME
echo "#################################################################################################"
echo "# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : "`cat $FILENAME | wc -l`
rm -rf tmp_file
echo "#################################################################################################"



echo ""
echo "####################################################################################################################"
echo "#### Remove `cat $FILENAME | wc -l` EA (endpoint accounts) that are correlated to the Global User [default user] "
echo "####################################################################################################################"
LDAPTLS_REQCERT=never ldapdelete -v -c -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -f $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"


echo ""
echo "#################################################################################################"
echo "#### Re-explore & correlate to update Global User [default user] orphan bucket."
echo "#################################################################################################"
echo ""
IMPSADSBASEDN="eTADSDirectoryName=dc2016.exchange.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers

IMPSADSBASEDN="eTADSDirectoryName=dc2012.exchange2012.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers


echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
echo ""

Example of the output of the script (with thousands of lines removed for clarity). It includes E&C to two (2) ADS endpoints, where more than 2,000 identities default-correlate to the orphan Global User “[default user]”.

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2184
#################################################################################################
LDAPTLS_REQCERT=never ldapsearch  -z 0 -LLL -H ldaps://vapp0001:20390 -D 'eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -y ./.imps.pwd -b 'eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta' -s sub '(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im))' dn eTInclusionID | perl -p00e 's/\r?\n //g' | grep -v   -e ^$  | awk -F': ' '{print $2}' | grep eTInclusionID
#################################################################################################
# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : 2184
#################################################################################################

####################################################################################################################
#### Remove 2184 EA (endpoint accounts) that are correlated to the Global User [default user]
####################################################################################################################
ldap_initialize( ldaps://vapp0001:20390/??base )
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@67d6bf2c-1104-1039-96c4-ef7605d11763,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname6 mi. lastname6' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@65e02962-00bd-1039-830f-ae134a0f7638,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname0002 lastname0002' and Global User '[default user]' deleted successfully

[Deleted > 5000 similar rows ]

deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@ce05d098-1b32-1039-85ec-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'ffffff' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@75a62f60-1b32-1039-85ea-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'eeeee' and Global User '[default user]' deleted successfully

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
0
#################################################################################################

#################################################################################################
#### Re-explore & correlate to update Global User [default user] orphan bucket.
#################################################################################################

Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 672, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' correlation successful: (accounts correlated: 0, defaulted: 566, unchanged: 6, failures: 0)
Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 1871, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' correlation successful: (accounts correlated: 0, defaulted: 1619, unchanged: 153, failures: 0)

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2185
#################################################################################################



Modify the above script for your own applications/endpoints and refine your correlation rules (or add additional ones as needed).

If application/endpoint identities are non-managed service IDs, a process that may assist is shown below: create a new Global User (in a similar format to [default user]), and then drag-and-drop the endpoint/application service ID accounts to the new Global User, e.g. [endpoint A service ID].

The final goal is a “clean” orphan process that can alert us to any rogue accounts created OOB (out-of-band) of the expected top-down IAM flow from an approved SOT (source-of-truth), e.g. SAP HR/Workday or a home-grown DB used with ETL processes. By removing the “noise” of incorrectly correlated accounts, we can focus on identifying the true “orphans”.
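
As a sketch of that alerting goal, the same ldapsearch used in the clean-up script above can be scheduled to report whenever the [default user] bucket is no longer empty. The variables are those defined in the script; the mail recipient is a placeholder.

# Cron-able sketch: count entries still correlated to [default user] and alert if any remain
COUNT=$(LDAPTLS_REQCERT=never ldapsearch -z 0 -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -y ./.imps.pwd -b "$BASE_DN" -s sub "$FILTER" 1.1 | grep -c "^dn:")
if [ "$COUNT" -gt 0 ]; then
   echo "WARNING: $COUNT endpoint accounts are correlated to [default user]" | mail -s "Orphan correlation alert" iam-admins@example.com
fi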

WAN Latency: Rsync versus SCP

We were curious about which methods we could use to manage large files that must be copied between sites with WAN-type latency, while restricting ourselves to processes available on the CA Identity Suite virtual appliance / Symantec IGA solution.

Leveraging VMware Workstation’s ability to introduce network latency between images allows for validation of a global password reset solution.

If we experience deployment challenges with native copy operations, we need to ensure we have alternatives to address any out-of-sync data.

The embedded CA Directory maintains the data tier in separate binary files, using a software router to join the data tier into a virtual directory. This allows for scalability and growth to accommodate the largest of sites.

We focused on the provisioning directory (IMPD) as our likely candidate for re-syncing.

Test Conditions:

  1. To ensure the data was being securely copied, we kept the requirement for SSH sessions between two (2) different nodes of a cluster.
  2. We introduced latency with the VMware Workstation NIC for one of the nodes.
  3. The four (4) IMPD Data DSAs were resized to 2500 MB each (a similar size we have seen in production at many sites).
  4. We removed data and the folder structure from the receiving node to avoid any checksum restart processes from gaining an unfair advantage.
  5. If the process allowed for exclusions, we took advantage of this feature.
  6. The feature/process/commands must be available on the vApp to the ‘config’ or ‘dsa’ userIDs.
  7. The reference host/node being pulled from has its CA Directory Data DSAs offline (dxserver stop all) to prevent ongoing changes to the files during the copy operation.

Observations:

SCP without Compression: Unable to exclude other files (*.tx,*.dp, UserStore) – This process took over 12 minutes to copy 10,250 MB of data

SCP with Compression: Unable to exclude other files (*.tx,*.dp, UserStore) – This process still took over 12 minutes to copy 10,250 MB of data

Rsync without compression: This process can exclude files/folders, has built-in checksum features (allowing a file transfer to restart if the connection is broken), and works over SSH as well. If the destination folder is not deleted beforehand, this process gives artificially high-speed results. This process was able to exclude the UserStore DSA files and the transaction files (*.dp & *.tx) that are not required on a remote server, so only 10,000 MB (4 x 2,500 MB) was copied, avoiding an extra 250 MB.

Rsync with compression: This process can exclude files/folders, has built-in checksum features, and works over SSH as well. This process was the winner, with dramatically better performance than the other processes.

Total Time: 1 min 10 seconds for 10,000 MB of data over a WAN latency of 70 ms (140 ms R/T)
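
For reference, a sketch of the commands compared above, using the same source host, path, and exclusions that appear in the transcript below; adjust the IP and destination for your nodes.

# SCP without / with compression - no way to exclude the UserStore or *.dp/*.tx files
time scp -rp  dsa@192.168.242.135:./data $DXHOME/
time scp -rpC dsa@192.168.242.135:./data $DXHOME/
# Rsync over SSH with exclusions; add -z for compression (the winning combination)
time rsync --progress -e 'ssh -ax' -av  --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data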

Now that we have found our winner, we need a few post-copy steps before using the copied files. To maintain uniqueness between peer members of the multi-write (MW) group, CA Directory uses a unique name for each data folder and data file. On the CA Identity Suite / Symantec IGA Virtual Appliance, this nomenclature uses a two (2) digit node suffix.

The next step is to rename the folders and the files. Since the vApp is locked down against installing other tools that might assist with rename operations, we utilized the find and mv commands with a pattern-substitution process for these two (2) steps, as shown in the transcript below.

Complete Process Summarized with Validation

The below process was written for the default shell of the ‘dsa’ userID, ‘csh’. If the shell is changed to ‘bash’, update accordingly.

The below process also utilizes an SSH RSA private/public key that was previously generated for the ‘dsa’ userID. If you are using the vApp, log in as the ‘config’ userID and ‘su - dsa’ to complete the necessary steps. You may need to add a copy operation between the dsa & config userIDs.

Summary of using rsync with find/mv to rename copied IMPD *.db files/folders
[dsa@pwdha03 ~/data]$ dxserver status
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ dxserver stop all > & /dev/null
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$ eval `ssh-agent` && ssh-add
Agent pid 5395
Enter passphrase for /opt/CA/Directory/dxserver/.ssh/id_rsa:
Identity added: /opt/CA/Directory/dxserver/.ssh/id_rsa (/opt/CA/Directory/dxserver/.ssh/id_rsa)
[dsa@pwdha03 ~/data]$ rm -rf *
[dsa@pwdha03 ~/data]$ du -hs
4.0K    .
[dsa@pwdha03 ~/data]$ time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
FIPS mode initialized
receiving incremental file list
./
ca-prov-srv-01-impd-co/
ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db
  2500000000 100%  143.33MB/s    0:00:16 (xfer#1, to-check=3/9)
ca-prov-srv-01-impd-inc/
ca-prov-srv-01-impd-inc/ca-prov-srv-01-impd-inc.db
  2500000000 100%  153.50MB/s    0:00:15 (xfer#2, to-check=2/9)
ca-prov-srv-01-impd-main/
ca-prov-srv-01-impd-main/ca-prov-srv-01-impd-main.db
  2500000000 100%  132.17MB/s    0:00:18 (xfer#3, to-check=1/9)
ca-prov-srv-01-impd-notify/
ca-prov-srv-01-impd-notify/ca-prov-srv-01-impd-notify.db
  2500000000 100%  130.91MB/s    0:00:18 (xfer#4, to-check=0/9)

sent 137 bytes  received 9810722 bytes  139161.12 bytes/sec
total size is 10000000000  speedup is 1019.28
27.237u 5.696s 1:09.43 47.4%    0+0k 128+19531264io 2pf+0w
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-01-impd-co  ca-prov-srv-01-impd-inc  ca-prov-srv-01-impd-main  ca-prov-srv-01-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-03-impd-co  ca-prov-srv-03-impd-inc  ca-prov-srv-03-impd-main  ca-prov-srv-03-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ dxserver start all
Starting all dxservers
ca-prov-srv-03-impd-main starting
..
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify starting
..
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co starting
..
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc starting
..
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router starting
..
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$


Note: An enhancement request has been opened to allow the ‘dsa’ userID to use remote SSH processes, to address any challenges if the IMPD Data DSAs need to be copied or retained for backup purposes.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

Example for vApp Patches:

Note: There is no major difference in speed if the files being copied are already compressed. The below image shows that the initial copy runs at the rate of the network with latency. The value gained from using rsync is still the checksum feature that allows a transfer to restart where it left off.

vApp Patch process refined to a few lines (for three nodes of a cluster deployment)

# PATCHES
# On Local vApp [as config userID]
mkdir -p patches  && cd patches
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-VA-140300-0002.tar.gpg
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-IMV-140300-0001.tgz.gpg
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg           [Patch VA prior to any solution patch]
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
cd ..
# Push from one host to another via scp
IP=192.168.242.136;scp -r patches  config@$IP:
IP=192.168.242.137;scp -r patches  config@$IP:
# Push from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.136;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
IP=192.168.242.137;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
# Pull from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.135;rsync --progress -e 'ssh -ax' -avz config@$IP:./patches $HOME

# View the files were patched
IP=192.168.242.136;ssh -tt config@$IP "ls -lart patches"
IP=192.168.242.137;ssh -tt config@$IP "ls -lart patches"

# On Remote vApp Node #2
IP=192.168.242.136;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

# On Remote vApp Node #3
IP=192.168.242.137;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

View of rotating the SSH RSA key for CONFIG User ID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP `/bin/date -u +%s`
IP=192.168.242.137;ssh $IP `/bin/date -u +%s`
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Avoid locking a userID in a Virtual Appliance

The below post describes enabling the .ssh private/public key process for the provided service IDs, to avoid dependency on a password that may be forgotten, and how to leverage these service IDs to address potential CA Directory data sync challenges that may occur when there is WAN network latency between remote cluster nodes.

Background:

The CA/Broadcom/Symantec Identity Suite (IGA) solution provides for a software virtual appliance. This software appliance is available on Amazon AWS as a pre-built AMI image that allows for rapid deployment.

The software appliance is also offered as an OVA file for Vmware ESXi/Workstation deployment.

Challenge:

If the primary service ID is locked or its password is allowed to expire, then the administrator will likely have only two (2) options:

1) Request assistance from the Vendor (for a supported process to reset the service ID – likely with a 2nd service ID “recoverip”)

2) Boot from an ISO image (if allowed) to mount the vApp as a data drive and update the primary service ID.

Proposal:

Add a standardized SSH RSA private/public key to the primary service ID, if one does not exist. If one exists, validate that you are able to authenticate and copy files between cluster nodes with the existing .ssh files. Rotate these files per internal security policies, e.g. once per year.

The focus for this entry is on the CA ‘config’ and ‘ec2-user’ service IDs.

An enhancement request has been added to have the ‘dsa’ userID included in the file ‘/etc/ssh/ssh_allowed_users’, allowing the same .ssh RSA process to address challenges during deployments where the CA Directory Data DSAs did not fully copy from one node to another.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

AWS vApp: ‘ec2-user’

The primary service ID for remote SSH access on Amazon AWS is ‘ec2-user’, which is already deployed with an .ssh RSA private/public key. This is a requirement for AWS deployments, so this process is already enabled.

This feature allows access via the private key from a remote SSH session using Putty/MobaXterm or similar tools. We may also update the ‘ec2-user’ .ssh folder so that the other cluster nodes trust this service ID, to assist with the deployment of patch files.

As an example, enabling .ssh access between multiple cluster nodes reduces scp uploads from remote workstations. Previously, if there were five (5) vApp nodes, patching them would require uploading the patch directly to each of the five (5) nodes. With .ssh access enabled between all nodes for the ‘ec2-user’ service ID, we only need to upload patches to one (1) node, then use scp to push the patch file(s) from that node to the other cluster nodes.
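
A short sketch of that fan-out for the ‘ec2-user’ service ID (node hostnames are placeholders; this mirrors the ‘config’ userID patch example shown earlier):

# Upload the patch bundle to node 1 only, then push it to the remaining nodes as 'ec2-user'
scp -rp patches ec2-user@node2.example.com:
scp -rp patches ec2-user@node3.example.com:
# Or pull with rsync over SSH to gain the restartable checksum behavior
rsync --progress -e 'ssh -ax' -avz ec2-user@node1.example.com:./patches $HOME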

On-Prem vApp: ‘config’

We wish to emulate this process for on-prem vApp servers to reduce I/O for any files to be uploaded and/or shared.

This process has strong value when CA Directory *.db files are out-of-sync, or during initial deployments where there may be network issues and/or WAN latency.

Below is an example to create and/or rotate the private/public SSH RSA files for the ‘config’ service ID.


Below is an example to push the newly created SSH RSA files to the remote host(s) of the vApp cluster. After this step, we can now use scp processes to assist with remediation efforts within scripts without a password stored as clear text.

Copy the RSA folder to your workstation, to add to your Putty/MobaXterm or similar SSH tool, to allow remote authentication using the public key.
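
If the workstation is Linux or macOS with the putty-tools package installed, the OpenSSH private key may be converted to PuTTY’s .ppk format from the command line (a sketch; on Windows, use the PuTTYgen GUI to load and save the key instead):

# Convert the OpenSSH private key to PuTTY .ppk format (prompts for the key passphrase)
puttygen id_rsa -o id_rsa.ppk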

If you have any issues, use the embedded verbose logging within the ssh client tool (-vv) to identify the root issue.

ssh -vv userid@remote_hostname

Example:

config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > eval `ssh-agent` && ssh-add
Agent pid 5717
Enter passphrase for /home/config/.ssh/id_rsa:
Identity added: /home/config/.ssh/id_rsa (/home/config/.ssh/id_rsa)
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ >
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > ssh -vv config@192.168.242.128
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.242.128 [192.168.242.128] port 22.
debug1: Connection established.
debug1: identity file /home/config/.ssh/identity type -1
debug1: identity file /home/config/.ssh/identity-cert type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/config/.ssh/id_rsa type 1
debug1: identity file /home/config/.ssh/id_rsa-cert type -1
debug1: identity file /home/config/.ssh/id_dsa type -1
debug1: identity file /home/config/.ssh/id_dsa-cert type -1
debug1: identity file /home/config/.ssh/id_ecdsa type -1
debug1: identity file /home/config/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-sha1
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug2: mac_setup: found hmac-sha1
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 141/320
debug2: bits set: 1027/2048
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '192.168.242.128' is known and matches the RSA host key.
debug1: Found key in /home/config/.ssh/known_hosts:2
debug2: bits set: 991/2048
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/config/.ssh/id_rsa (0x5648110d2a00)
debug2: key: /home/config/.ssh/identity ((nil))
debug2: key: /home/config/.ssh/id_dsa ((nil))
debug2: key: /home/config/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Offering public key: /home/config/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug2: input_userauth_pk_ok: SHA1 fp 39:06:95:0d:13:4b:9a:29:0b:28:b6:bd:3d:b0:03:e8:3c:ad:50:6f
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Thu Apr 30 20:21:48 2020 from 192.168.242.146

CA Identity Suite Virtual Appliance version 14.3.0 - SANDBOX mode
FIPS enabled:                   true
Server IP addresses:            192.168.242.128
Enabled services:
Identity Portal               192.168.242.128 [OK] WildFly (Portal) is running (pid 10570), port 8081
                                              [OK] Identity Portal Admin UI is available
                                              [OK] Identity Portal User Console is available
                                              [OK] Java heap size used by Identity Portal: 810MB/1512MB (53%)
Oracle Database Express 11g   192.168.242.128 [OK] Oracle Express Edition started
Identity Governance           192.168.242.128 [OK] WildFly (IG) is running (pid 8050), port 8082
                                              [OK] IG is running
                                              [OK] Java heap size used by Identity Governance: 807MB/1512MB (53%)
Identity Manager              192.168.242.128 [OK] WildFly (IDM) is running (pid 5550), port 8080
                                              [OK] IDM environment is started
                                              [OK] idm-userstore-router-caim-srv-01 started
                                              [OK] Java heap size used by Identity Manager: 1649MB/4096MB (40%)
Provisioning Server           192.168.242.128 [OK] im_ps is running
                                              [OK] co file usage: 1MB/250MB (0%)
                                              [OK] inc file usage: 1MB/250MB (0%)
                                              [OK] main file usage: 9MB/250MB (3%)
                                              [OK] notify file usage: 1MB/250MB (0%)
                                              [OK] All DSAs are started
Connector Server              192.168.242.128 [OK] jcs is running
User Store                    192.168.242.128 [OK] STATS: number of objects in cache: 5
                                              [OK] file usage: 1MB/200MB (0%)
                                              [OK] UserStore_userstore-01 started
Central Log Server            192.168.242.128 [OK] rsyslogd (pid  1670) is running...
=== LAST UPDATED: Fri May  1 12:15:05 CDT 2020 ====
*** [WARN] Volume / has 13% Free space (6.2G out of 47G)
config@cluster01 VAPP-14.3.0 (192.168.242.128):~ >

A view into rotating the SSH RSA keys for the CONFIG UserID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP `/bin/date -u +%s`
IP=192.168.242.137;ssh $IP `/bin/date -u +%s`
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Advanced Oracle JDBC Logging

One of the challenges for a J2EE application is to understand the I/O operations to the underlying database.

The queries/stored procedures/prepared statements all have value to the J2EE application, but during an RCA (root-cause analysis) process it can be challenging to identify where gaps or improvements may be made. Improvements may come from the vendor of the J2EE application (via a support ticket/enhancement) or from custom client efforts for business-rule SQL processes or embedded JARs with JDBC logic.

To assist with these RCA efforts, we examined four (4) areas that have value when examining an application using an Oracle 12c/18c+ database.

  1. J2EE logging using embedded features for datasources. This includes the JDBC spy process for Wildfly/JBOSS, and the ability to dynamically change logging levels or on/off features with the jboss-cli.sh process.

  2. Intercept JDBC processes – An intermediate JDBC jar that adds additional logging of any JDBC process, e.g. think of a Wireshark-type process. [As this requires additional changes, this process is described last.]

  3. Diagnostics JDBC Jars – Oracle provides “Diagnosability in JDBC” jars in the format ojdbcX_g.jar that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

  4. AWR (Automatic Workload Repository) Reports – Using either the command line (SQL) or the Oracle SQL Developer GUI.

J2EE Logging (JDBC Spy)

An example of the above process using the Wildfly/JBOSS jboss-cli.sh process is provided below in a CLI script. The high value of this process is that it takes advantage of OOTB existing features that do NOT require any additional files to be downloaded or installed.

This process is implemented via a flat file of commands for the JBOSS management console, which is passed as input to the jboss-cli.sh process. (If you are not running under the context of the Wildfly/JBoss user, you will need to use the add-user.sh process and supply credentials.)

The below script focuses on the active databases for the Symantec/Broadcom/CA Identity Management solution’s ObjectStore (OS) and TaskPersistence (TP) databases. JDBC debug entries will be written to server.log.

# Name:  Capture JDBC queries/statements for TP and OS with IM business rules
# Filename: im_jdbc_spy_for_tp_and_os.cli
# /apps/CA/wildfly-idm/bin/jboss-cli.sh --connect  --file=im_jdbc_spy_for_tp_and_os.cli
# /opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01!  --file=im_jdbc_spy_for_tp_and_os.cli
#
connect
echo "Take a snapshot backup of IM configuration file before any changes"
:take-snapshot
# Query values before setting them
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo "Enable JDBC Spy for TP and OS DB"
echo
# Always use batch with run-batch - will auto rollback on any errors for updates
batch
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:write-attribute(name=spy,value=true)
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:write-attribute(name=spy,value=true)
/subsystem=jca/cached-connection-manager=cached-connection-manager/:write-attribute(name=error,value=true)
run-batch
#/subsystem=logging/logger=jboss.jdbc.spy/:read-resource(recursive=false)
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo ""

echo "Check if logger is enabled already; if not then set"
# Use log updates in separate batch process to prevent rollback of entire process if already set
echo
batch
/subsystem=logging/logger=jboss.jdbc.spy/:add(level=TRACE)
run-batch

#
#
# Restart the IM service (reload does not work 100%)
# stop_im start_im restart_im

A view of executing the above script. Monitor the server.log for JDBC updates.
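
A quick way to watch the spy entries from the command line, assuming the default standalone log location used elsewhere in this post:

# Watch JDBC spy output as it is written to server.log (logger category jboss.jdbc.spy)
tail -f /opt/CA/wildfly-idm/standalone/log/server.log | grep "jboss.jdbc.spy"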

If you have access to install the X11 drivers for the OS, you may also view the new updates to the JBoss/Wildfly datasources.

Diagnostics JDBC Jars

Oracle provides “Diagnosability in JDBC” jars in the format ojdbcX_g.jar that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

Ref: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjdbc/JDBC-diagnosability.html#GUID-4925EAAE-580E-4E29-9B9A-84143C01A6DC

This diagnostic feature has been available for quite some time and operates outside of the JBOSS/Wildfly tier, by focusing only on the Oracle Debug Jar and Java, which are managed through two (2) JVM switches.

A challenge exists for the CA Identity Manager solution on Wildfly 8.x: the default logging process uses the newer JBOSS logging modules. This appears to interfere with direct use of the properties file for the Oracle debugging process; the solution appears to ignore the JVM switch “-Djava.util.logging.config.file=OracleLog.properties”.

To address the above challenge, without extensively changing the CA IM solution logging modules, we integrated into the existing process.

This process will require a new jar deployed, older jars renamed, and two (2) reference XML files updated for Wildfly/Jboss.
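
A minimal sketch of the two pieces involved is shown below, assuming the ojdbcX_g.jar has replaced the standard driver jar on the module path; the file locations are placeholders.

# 1. Create a basic java.util.logging properties file for the Oracle diagnostic driver
cat << EOF > /opt/CA/wildfly-idm/OracleLog.properties
.level=SEVERE
oracle.jdbc.level=FINE
handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.level=FINE
java.util.logging.FileHandler.pattern=/tmp/oracle_jdbc_trace.log
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
EOF
# 2. Add the two JVM switches to the Wildfly/JBoss startup options (e.g. JAVA_OPTS in standalone.conf)
JAVA_OPTS="$JAVA_OPTS -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=/opt/CA/wildfly-idm/OracleLog.properties"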

AWR Reports

This is a common method that DBAs will use to assist application owners to identify I/O or other challenges that may be impacting performance. The ability to execute the AWR Report may be delegated to a non-system service ID (schema IDs).

This process may be executed at the SQL*Plus prompt, but for application owners it is best executed from the Oracle SQL Developer GUI. The one (1) slide diagram below showcases how to execute an AWR report and how to generate one for our requirements.

This process assumes the administrator has access to a DB schema ID that has access to the AWR (View/DBA) process. If not, the minimal access required is included in the sketch below.
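
A sketch of both pieces follows, assuming a schema ID named IAMUSER and that the site is licensed for the Oracle Diagnostics Pack (required for AWR); verify the exact grants with your DBA.

# Minimal delegation commonly used so a non-DBA schema can run AWR reports (run as a DBA)
#   SQL> GRANT SELECT ANY DICTIONARY TO IAMUSER;
#   SQL> GRANT EXECUTE ON DBMS_WORKLOAD_REPOSITORY TO IAMUSER;
# Generate an AWR report interactively from SQL*Plus (prompts for report type and snapshot range)
sqlplus IAMUSER@ORCL @?/rdbms/admin/awrrpt.sql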

Intercept JDBC processes

One of the more interesting processes is a 3rd-party “man-in-the-middle” approach that captures all JDBC traffic. We can think of this as a Wireshark-type process for JDBC.

We have left this process for last, as it does require additional modification to the JBoss/Wildfly environment. If the solution is deployed standalone, the changes are straightforward. If the solution is on the Virtual Appliance, we will need to review alternative methods or request access to add additional modules to the appliance.

For this process, we chose p6spy.

A quick view of the JDBC calls being sent from the IM solution to the Oracle Database, with values populated. We can also capture the returned data, but this would be a very large amount of data.

A quick view of the “spy.log” output for the CA IM TP database during startup.


#!/bin/bash
##################################################
# Name: p6spy.sh
#
# Goal:  Deploy the p6spy jar to assist with RCA for IM solution
# Ref:  https://github.com/p6spy/p6spy
#
# A. Baugher, ANA, 04/2020
##################################################
JBOSS_HOME=/opt/CA/wildfly-idm
USER=wildfly
GROUP=wildfly

# Stage a temp folder and download the p6spy jar from Maven Central
mkdir -p /tmp/p6spy
rm -rf /tmp/p6spy/*.jar
cd /tmp/p6spy
time curl -L -O https://repo1.maven.org/maven2/p6spy/p6spy/3.9.0/p6spy-3.9.0.jar
ls -lart *.jar
md5sum p6spy-3.9.0.jar
# Recreate the p6spy JBoss module folder and copy in the jar
rm -rf $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
mkdir -p $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cp -r -p p6spy-3.9.0.jar $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
ls -lart $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cd /opt/CA/wildfly-idm/modules/system/layers/base/com/p6spy/main
# Create module.xml so JBoss/Wildfly can load the p6spy jar as a module
cat << EOF > module.xml
<module xmlns="urn:jboss:module:1.0" name="com.p6spy">
    <resources>
        <resource-root path="p6spy-3.9.0.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
<module name="com.ca.iam.jdbc.oracle"/>
    </dependencies>
</module>
EOF
chown -R $USER:$GROUP $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 755  $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 664  $JBOSS_HOME/modules/system/layers/base/com/p6spy/main/*
ls -lart
cat module.xml
# Update spy.properties file
curl -L -O https://raw.githubusercontent.com/p6spy/p6spy/master/src/main/assembly/individualFiles/spy.properties
rm -rf  $JBOSS_HOME/standalone/tmp/spy.properties
cp -r -p spy.properties $JBOSS_HOME/standalone/tmp/spy.properties
chown -R $USER:$GROUP $JBOSS_HOME/standalone/tmp/spy.properties
chmod -R 666 $JBOSS_HOME/standalone/tmp/spy.properties
ls -lart $JBOSS_HOME/standalone/tmp/spy.properties


A view with “results” enabled. Results may be viewed as binary or text (excludecategories=info and excludebinary=true).
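
For reference, a hedged sketch of the spy.properties entries used for this view; only excludecategories and excludebinary come from the notes above, while the appender and logfile keys are standard p6spy settings shown here as assumptions (verify against the p6spy documentation):

#!/bin/bash
# Hedged sketch: append the key settings to the spy.properties staged by p6spy.sh above.
cat << 'EOF' >> /opt/CA/wildfly-idm/standalone/tmp/spy.properties
# write SQL statements (with bound values) to a flat file
appender=com.p6spy.engine.spy.appender.FileLogger
logfile=spy.log
# skip the very chatty "info" category and any binary payloads
excludecategories=info
excludebinary=true
EOF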
Enable these configurations in the CA IM standalone-full-ha.xml file. Focus only on the CA IM TP database (where most activity resides):
<drivers>
    <driver name="p6spy" module="com.p6spy">
        <driver-class>com.p6spy.engine.spy.P6SpyDriver</driver-class>
    </driver>
    <driver name="ojdbc" module="com.ca.iam.jdbc.oracle">
        <driver-class>oracle.jdbc.OracleDriver</driver-class>
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
    </driver>
</drivers>

<!-- ##############################  BEFORE P6SPY ##########################
<datasource jta="false" jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="iam_im-imtaskpersistencedb-ds" enabled="true" use-java-context="true" spy="true">
    <connection-url>jdbc:oracle:thin:@//database_srv:1521/xe</connection-url>
    <driver>ojdbc</driver>
##############################  BEFORE P6SPY ##########################
-->

<!-- <datasource jndi-name="java:/jdbc/p6spy" enabled="true" use-java-context="true" pool-name="p6spyPool"> -->

<datasource jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="p6spyPool" enabled="true" use-java-context="true">
    <connection-url>jdbc:p6spy:oracle:thin:@//database_srv:1521/xe</connection-url>
    <driver>p6spy</driver>


These error messages may occur during setup or misconfiguration.

References:

https://p6spy.readthedocs.io/en/latest/index.html
https://p6spy.readthedocs.io/en/latest/configandusage.html#common-property-file-settings   [doc on JDBC intercept settings]
https://github.com/p6spy/p6spy/blob/master/src/main/assembly/individualFiles/spy.properties   [sample spy.properties file for JDBC intercepts]
https://github.com/p6spy/p6spy   [git]
https://mvnrepository.com/artifact/p6spy/p6spy   [jar]

Extra(s)

A view into the ojdbc8_g.jar for the two (2) system properties, which are typically all that is required to use this logging functionality:

oracle.jdbc.Trace and java.util.logging.config.file

COVID-19 and Privacy Preserving Contact Tracing


“Contact Tracing makes it possible to combat the spread of the COVID-19 virus by alerting participants of possible exposure to someone who they have recently been in contact with, and who has subsequently been positively diagnosed as having the virus.” – Apple

The conventional deterrent to the adoption of contact tracing is the lack of privacy controls. Apple tends to be involved when privacy is vital, and in the times we are living in today, having a framework that allows for privacy-focused contact tracing to limit the spread of novel viruses is extremely important. Apple and Google have collaboratively put a structure in place for enabling contact tracing while preserving privacy.

Below is a simplified explanation of how contact tracing would work using a mobile device, as described in the specification.

  • Use of Bluetooth LE (Low Energy) for proximity detection (no use of location-based services, which is essential for preserving privacy)
  • Generation of daily tracing keys using a one-way hash function
  • Generation of rolling proximity identifiers that change every ~15 minutes and are based on the daily tracing key
  • Advertising one's own proximity identifiers and discovering foreign proximity identifiers
  • The user decides when to contribute to contact tracing
  • If diagnosed with COVID-19, the user consents to upload a subset of daily tracing keys

To detect if one may have come in contact with a COVID-19 positive individual, they would:

  • Download COVID-19 positive daily tracing keys
  • Contact tracing app computes time-based proximity identifiers from the downloaded daily tracing keys on the local device
  • Checks if this local device has previously recorded any of these identifiers

The above mechanism of determining if one has come in contact with a COVID-19 positive person preserves privacy by:

  • Ensuring that the contact tracing keys cannot be reverse-engineered to compute identifying information about the originating device.
  • Not associating GPS or other location services with the keys
  • Performing verification of being in contact with another COVID-19 positive person on a local mobile device.

The specification is preliminary, and there is a strong attempt to ensure the privacy of individuals is protected within this framework. COVID-19 positive information uploaded to central servers also does NOT contain any personally identifiable information. The detection itself is a decentralized process, as it gets computed locally on an individual’s device. Central entities are not in control of either detecting or informing people in this entire process.

All of this works very well if everyone does their civic duty and reports when they are positive.

For broad adoption of contact tracing, there are opportunities for further improvement. The specification should discuss the server(s) responsible for collecting uploaded information. A (mobile) device generates daily tracing keys, and although the keys themselves have no mechanism to be associated with that device by an external process, uploading these keys from the same mobile device opens up the possibility of linking information: the IP address of a device that uploads anything to a server is always known to that server. Applications that use this privacy framework additionally need to consider connectivity-related exposure while uploading information to central servers. The specification should include a section on protections and controls against privacy exposure resulting from connectivity.

Another area of improvement could be controls around potential abuse of this contact tracing mechanism. As per the specification, a person who has tested positive for COVID-19 opts in and uploads their daily tracing key information to central servers. The intent is for others who have been in proximity with them to know and take action to self-quarantine to limit the spread. Pranksters or other entities may falsely claim to be COVID-19 positive to intentionally cause general disruption. If such actions happen at a large scale, they will severely impact the reliability, credibility, and adoption of contact tracing.

Preserving Privacy is not an easy problem to solve, and larger mindshare is needed to solve these challenges.

[Detailed specification at Apple’s website]

This town is big enough for us all: Expanding the CA Provisioning Tier Schema to 900+ Custom Fields

Based on recent requests, we wished to revisit this “hidden” gem to expand the CA Identity Suite Provisioning schema to meet unique business requirements, enabling 100’s of SaaS and on-prem applications/endpoints to apply custom business logic to users’ endpoint account attributes.

Since the early days of the CA Identity Suite solution (eAdmin r8.1sp2), there has been a Provisioning SDK that provides an approved process to extend the CA Identity Manager’s IMPD (provisioning directory) schema from the default of 99 user custom fields by up to 900 additional user custom fields.   For comparison, the default 99 user custom fields are typically used alongside the standard 40-50 default user profile fields, e.g. givenName (First Name), sn (Last Name), userID, telephone #, etc., to meet most business use-cases.

Unfortunately, this extended schema process is not well known.

The only known documentation is an embedded readme.txt within a compressed package. Occasionally there will be support tickets or community notes that request this feature as an “enhancement”.

This package is included in the Provisioning SDK download; for IM r14.3, the file name is:

Component Name: CA Identity Manager r14.3 Legacy components
File: GEN500000000002780.zip ~ 200 MB

Background:

The CA Identity Suite (Identity Manager) Provisioning Tier does NOT attempt to be a meta-directory; it acts as a virtual directory to the 1000’s of managed endpoints/userstores/applications.   As long as the “explore” operation was successful, there will be a “pointer” object that references the correct location of the endpoint accounts.  When a “correlation” operation occurs, this endpoint account “pointer” object is attached (via inclusion referential objects) to the associated global user ID.

By using this “virtual directory” architecture, it is possible for IM business rules or 3rd-party tools to directly view the 1000’s of managed endpoints’ “real data” and not a “stored” representation of this data.

However, some clients do wish to “collect” the native data and store it within the IMPD provisioning store as SNAPSHOT data, to monitor for non-approved / OOB (out-of-band) access.   If some fields are dedicated to select endpoints, the default of 99 custom fields may quickly run out.

Tackling Case-insensitivity Requirement:

Adjust the IMPD schema for case-insensitivity; this would allow for case-insensitive correlation rules and, if the new fields are exposed to the IME, case-insensitive comparisons for business rules (PX).

Challenge:

The above Provisioning SDK process will build the extended eTCustomField100-999 and eTCustomFieldName100-999 attributes with case=sensitive. Interestingly, we did not identify a requirement for case sensitivity with the default custom fields, but it does appear this was a decision made when the SDK was created. Note the OOTB etrust_admin.schema file (for the IMPS data): this OOTB schema displays a mix of case sensitivity for the default eTCustomField00-99 and eTCustomFieldName00-99 attributes.

Proposal:

To address this new requirement, there are three (3) possible deployments to enable this extended schema. We will review the pros/cons of each possible deployment choice.

Supporting Note 1:

  • eTCustomFieldXXX is the attribute that will contain a value.
  • eTCustomFieldNameXXX is the attribute that will contain a business name for this custom field.

Supporting Note 2:

The CA IM Provisioning Tier was/is developed with early x86 MS VC++ code. We attempted to use a later release of the MS Visual Studio VC solution for this process, but it failed to generate the output files.

Phase 1 Steps: Enhance the IM Provisioning Tier with 900 new custom fields with case = insensitive.

  1. Download & install MS Visual Studio VC 2010 Express, to have access to the ‘nmake’ executable.
  2. Update OS PATH variables to reference this MS VC 2010 bin folder.
  3. Execute the nmake binary, to ensure it is working fine.
    • where nmake & nmake /?
  4. Download & install the CA IM Provisioning SDK on the same server/workstation as the ‘nmake’ binary.
    • IM r14.3 GEN50000000002780.zip 200 MB
  5. Open a command line window, and then change folder to the Provisioning SDK’s COSX Samples folder:

cd "C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\samples\COSX"

  6. Execute the gencosx.bat batch file to generate the additional schema for N attributes:

gencosx.bat 900     {Max allowed value is 900, which will generate attributes 100-999}

The output text file: cosxparse.pty

**** The above steps only need to be executed ONCE on a workstation. After the output text file is generated, we should only need & retain this file for future updates. ****

################################################################

  7. Use Notepad++ to search and replace a string in the following file, cosxparse.pty:

“case=sensitive” to “case=insensitive”

{We may be selective and only replace a few attributes instead of all additional 900 attributes.}

  8. Execute the following commands to generate the binary file.
    • Run the Visual Studio batch file to set the environment variables for the nmake program:
      "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
    • Execute ‘nmake’:
      nmake

The new output file (binary) will be:
C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\data\cosxparse.ptt

  9. Before overwriting existing files, back up the three (3) prior files from the IMPS/CCS data folders & the IMPD schema folder:
    etrust_cosx.schema
    etrust_cosx.dxc
    cosxparse.ptt
  10. Copy the file, cosxparse.ptt, to the IMPS server data folder.
  11. Stop the IMPS service: su - imps & imps stop
  12. Execute the following command: schemagen -n COSX
  13. This process will create two (2) new output files:
    • etrust_cosx.dxc
    • etrust_cosx.schema
  14. Validate the two (2) newly generated files have case-insensitivity set.
  15. Copy etrust_cosx.dxc to all CA Directory schema folders, including DX routers (on IMPS servers).
    • Validate this file is referenced in the IMPD group knowledge schema file: etrust_admin.dxg
  16. Copy etrust_cosx.schema & cosxparse.ptt to all CA IMPS Servers, the CCS Servers’ data folders, & the CA IMPS GUI data folder.
    • Validate the file, etrust_cosx.schema, is referenced in the IMPS configuration file: etrust_admin.conf
  17. Restart CA Directory and IMPS/CCS Services.
    • dxserver stop all / dxserver start all
    • imps stop / imps start
    • net stop im_jcs / net start im_jcs {this will also restart the im_ccs service}
  18. With the IMPS GUI:
    • Assign a ‘business name’ to the newly created eTCustomField100+ under
      SYSTEM/GLOBAL PROPERTIES/CUSTOM USER FIELDS
      {If you do not see these newly created fields, then the IMPS GUI data folder was not updated per step 16.}
    • Validate that E&C Correlation Rules will now work for these extended fields with case-insensitivity.
      SYSTEM/DOMAIN CONFIGURATIONS/EXPLORE AND CORRELATE/CORRELATION ATTRIBUTE/
    • Validate the custom fields are viewable for each Global User.

We may now STOP HERE if we do NOT need to expose these new custom fields to the IME.

Pro: Able to use custom fields for account templates and correlation rules.

Con: Not exposed to IME for 1:1 mapping nor exposed for PX Business Rules.
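
For convenience, below is a minimal sketch consolidating the Linux-side commands from the Phase 1 steps above (steps 9-17). The folder locations and service accounts are assumptions based on a default IMPS / CA Directory install; adjust them for your environment:

#!/bin/bash
# Hedged sketch: apply the regenerated COSX schema on a Linux IMPS + CA Directory host.
# Assumptions: paths below are illustrative; run the per-service commands as the
# appropriate service accounts (imps, dsa).
IMPS_DATA=/opt/CA/IdentityManager/ProvisioningServer/data     # assumption
DXHOME=${DXHOME:-/opt/CA/Directory/dxserver}                  # assumption

# Steps 9-10: back up the prior file, then stage the new cosxparse.ptt
cp -p $IMPS_DATA/cosxparse.ptt $IMPS_DATA/cosxparse.ptt.bak
cp -p cosxparse.ptt $IMPS_DATA/

# Steps 11-12: stop IMPS and regenerate the schema files
su - imps -c "imps stop"
schemagen -n COSX

# Steps 15-16: distribute the generated files (repeat on every IMPD / IMPS / CCS host)
cp -p $IMPS_DATA/etrust_cosx.dxc $DXHOME/config/schema/
# (copy etrust_cosx.schema & cosxparse.ptt to the other IMPS/CCS data folders as well)

# Step 17: restart CA Directory and IMPS
su - dsa  -c "dxserver stop all && dxserver start all"
su - imps -c "imps start"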

#############################################################

Phase 2 Steps – Advanced Configuration – Add custom fields to the IME to allow for 1:1 mapping and use of PX Business Rules.

  1. Update the JIAM (Java LDAP to IMPS API) reference file, jiam.jar, to allow the IME to manage these extended fields for PX business rules.
    • Use 7zip https://www.7-zip.org/ to extract files from jiam.jar; update the file CommonObjects.xml; then replace this file in the jar file (see the sketch after this list).
    • Location of reference file: ./wildfly-idm/standalone/deployments/iam_im.ear/library/jiam.jar
    • Location of the property file to update: \com\ca\iam\model\impl\datamodel\CommonObjects.xml
  2. Update the sections after eTCustomField99 with the data below, with case set to insensitive:

<property name="eTCustomField100">
  <doc>Custom Field #100</doc>
  <value default="false"> <setValue> <baseType default="false"> <strValue></strValue> </baseType> </setValue> </value>
  <metadata name="jiam.syncToAccounts"> <value> <boolValue>true</boolValue> </value> </metadata>
  <metadata name="pt.modifyPrivilege"> <value> <boolValue>true</boolValue> </value> </metadata>
  <metadata name="pt.ownerPrivilege"> <value> <boolValue>true</boolValue> </value> </metadata>
  <metadata name="isMultiValued"> <value> <boolValue>true</boolValue> </value> </metadata>
  <metadata name="beanPropertyName"> <value> <strValue>customField100</strValue> </value> </metadata>
  <metadata name="pt.minimumAbbreviation"> <value> <intValue>10</intValue> </value> </metadata>
  <metadata name="pt.internalName"> <value> <strValue>CustomField100</strValue> </value> </metadata>
  <metadata name="pt.editType"> <value> <strValue>string</strValue> </value> </metadata>
  <metadata name="pt.editFlag"> <value> <boolValue>true</boolValue> </value> </metadata>
  <metadata name="pt.caseSensitivity"> <value> <strValue>insensitive</strValue> </value> </metadata>
  <metadata name="pt.asciiOnly"> <value> <boolValue>false</boolValue> </value> </metadata>
  <metadata name="pt.dataLocation"> <value> <strValue>db</strValue> </value> </metadata>
</property>

  3. Update the CA IMPS directory.xml as needed for some or all 900 fields:
<ImsManagedObjectAttr physicalname="eTCustomField100" description="Custom Field 100" displayname="Custom Field 100" valuetype="String" multivalued="true" wellknown="%CUSTOM_FIELD_100%" maxlength="0"/>
  4. Update the IME’s IMCD-to-IMPS 1:1 mappings:
    • identityEnv_environment_settings.xml
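
As referenced in step 1 above, a hedged sketch of updating CommonObjects.xml inside jiam.jar using standard Linux zip tools instead of 7zip (the jar path and the use of unzip/zip are assumptions; always back up the original jar first):

#!/bin/bash
# Hedged sketch: replace CommonObjects.xml inside jiam.jar with an edited copy.
JIAM=/opt/CA/wildfly-idm/standalone/deployments/iam_im.ear/library/jiam.jar   # assumption
cp -p $JIAM $JIAM.bak

# Extract the current copy for editing (entry path uses forward slashes inside the jar)
unzip -o $JIAM com/ca/iam/model/impl/datamodel/CommonObjects.xml -d /tmp/jiam

# ... edit /tmp/jiam/com/ca/iam/model/impl/datamodel/CommonObjects.xml per step 2 above ...

# Push the updated file back into the jar; zip replaces the existing entry by name
cd /tmp/jiam
zip $JIAM com/ca/iam/model/impl/datamodel/CommonObjects.xml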

We may now stop here. The next advanced configuration is only required if we wish to manage the various Endpoint Mapping Tabs with the IM UI instead of the IMPS GUI. We would consider the next Phase 3 Steps to be low value for the effort, as this configuration is typically set once and done in the IMPS GUI.

Pro: Able to use custom fields for account templates and correlation rules. Also able to map these fields 1:1 in the IME, so IMCD attributes can be mapped to the IMPS extended custom attributes. These IMPS extended custom attributes will now be exposed for PX Business Rules.

Con: Not exposed to the IM UI to update the Endpoint Mapping Tab for ADS and DYN endpoints.

Phase 3 Steps – IME Advanced

If planning on exposing these new custom fields in both the Endpoint Mapping Attribute screen & Endpoint Account Templates via the IME, follow these additional steps:

  1. Replace commonobjects.xml in ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\lib\roledefgen.jar by following the steps given below:
    • rename roledefgen.jar as roledefgen.zip
    • Open roledefgen.zip
    • open com\ca\iam\roledefgen\commonobjects.xml and replace the contents with the attached/provided commonobjects.xml file
    • save the zip
    • rename the zip to jar
  2. Now roledefgen.jar will contain the commonobjects.xml file with extended custom attributes
  3. Execute the below RoleDefGenerator.bat to generate jars for all the required Java/DYN endpoints:
    • ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\bin> RoleDefGenerator.bat -d -h -u “”
  4. Open the generated endpoint jars one by one and modify them by following the steps below:
    • rename the original .jar to .zip
    • open framework.xml and increase the version “version=” (2nd line)
    • rename the generated .jar to .zip
    • open and copy the contents of -RoleDef.xml
    • paste the copied content into the -RoleDef.xml file in the original .zip (from the first bullet above)
    • save the original .zip and rename it back to .jar
    • replace the saved .jar in ..\wildfly-8.2.0.Final\standalone\deployments\iam_im.ear\user_console.war\WEB-INF\lib
  5. Restart IM and test the custom attributes in the IM web UI.

Post Update Note:

  • Validate whether the IMPD DSAs (4) need to be rebuilt for existing users that may already have values in these extended attributes stored with the previous case=sensitive setting.
    • This step is not required if this is the first time the extended attributes have been deployed, or if the case sensitivity has not been changed.
    • Process: Export the IMPD LDIFs, rebuild the IMPD DSAs, and then re-import the LDIFs.
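
A hedged sketch of that export / rebuild / re-import cycle for one IMPD DSA, using the CA Directory dxdumpdb and dxloaddb utilities (the DSA name below is illustrative, and the exact flags should be confirmed against your CA Directory release before use):

#!/bin/bash
# Hedged sketch: rebuild one IMPD DSA after the schema case-sensitivity change.
# Assumptions: run as the dsa user; "impd-main" is an illustrative DSA name only.
DSA=impd-main

dxserver stop $DSA
dxdumpdb -f /tmp/$DSA.ldif $DSA     # export the current data to LDIF (verify flags for your release)
dxloaddb $DSA /tmp/$DSA.ldif        # reload the datastore from the LDIF (verify flags for your release)
dxserver start $DSA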