Multi-Write HUB Model with democorp

A useful CA Directory feature for WAN latency challenges is the HUB model. This model allows data to sync to distant peer multi-write DATA DSAs without impacting the external application that is updating its own local Router/DATA DSAs.

To assist with understanding this HUB model, we have leveraged the CA Directory samples of democorp & router to build out an architecture with six (6) DATA DSAs and two (2) router DSAs, to emulate two (2) data centers across the world. These samples are included with every CA Directory deployment under $DXHOME/samples/democorp & $DXHOME/samples/router.

This lab emulates two (2) of the three (3) data centers that are displayed within the CA documentation.

Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/ca-directory-concepts/directory-replication/multiwrite-mw-groups-hubs/topology-sample-and-disaster-recovery.html

This lab may be replicated with near-real-world WAN latency using the VMware Workstation network-latency feature.

https://www.vmware.com/products/workstation-pro.html

Below is a bash shell script used to create the lab environment on a single host OS with the CA Directory samples "democorp" and "router". The samples are copied and then updated via sed commands to ensure each DSA is unique in its TCP ports and naming convention. The examples below use the nomenclature democorpX (A-F) and routerYY (AA or BB).

These DATA DSAs use the same suffix and are all referenced in the group knowledge file. The HUB model configuration changes the behavior of MW-DISP replication between data centers: MW-DISP replication is still used for all local sync between DATA DSAs in the same data center, and is ONLY used between data centers by the DATA DSAs that are designated as "HUBs" (aka multi-write-group-hub).
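
For reference, below is a sketch of the net knowledge-file and startup-file deltas that designate the HUB DSAs (derived from the sed edits in the script later in this post; the surrounding lines come from the stock democorp/router samples):

# democorpA.dxc - HUB DATA DSA for data center AA (democorpD.dxc is the same for hub_group_BB)
    multi-write-group = hub_group_AA
    dsa-flags     = multi-write, no-service-while-recovering, load-share, multi-write-group-hub

# democorpB.dxc / democorpC.dxc - non-HUB peer DATA DSAs in data center AA
    multi-write-group = hub_group_AA
    dsa-flags     = multi-write, no-service-while-recovering, load-share

# routerAA.dxi - startup file for the local router DSA in data center AA
set write-precedence = democorpA, democorpB, democorpC;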

To test the value of the HUB model with WAN latency, we suggest that the same lab be executed on two (2) hosts, where one host has the VMware network latency set to 150/150 milliseconds. Update the IP addresses/hostnames within the $DXHOME/config/knowledge/*.dxc files on both host OSes to reflect the correct hostnames for each data center's DSAs.
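
For example, after copying the lab to the second host, a sed one-liner similar to the following may be used to repoint the knowledge files (the data-center hostnames below are hypothetical placeholders; repoint the AA DSAs at the AA host and the BB DSAs at the BB host):

# Hypothetical hostnames - adjust for the host OS that represents each data center
sed -i "s|$(hostname)|dc-aa-host.example.com|g" $DXHOME/config/knowledge/democorp[ABC].dxc $DXHOME/config/knowledge/routerAA.dxc
sed -i "s|$(hostname)|dc-bb-host.example.com|g" $DXHOME/config/knowledge/democorp[DEF].dxc $DXHOME/config/knowledge/routerBB.dxc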

The diagram below outlines the deltas between the various democorp DATA DSAs needed to use the HUB model.

The changes below show the deltas within the *.dxc/*.dxg files in the knowledge folder for the democorp MW HUB model.

The image below captures the only deltas in the *.dxi (startup) files for the democorp MW HUB model, located within the server folder. Note: if the CA Directory management tool is deployed and used for democorp, all configurations will be consolidated into a single *.dxi file.

#!/bin/bash
##############################################
#
# Name: democorp_mw_hub_lab.sh
#
# Multi-Write HUB lab using CA Directory and the samples of
# democorp and router under DXHOME/samples
# A. Baugher, 04/2020 - ANA Technology Partner
#
# Assumptions:
#   CA Directory is deployed & dxprofile is enabled for dsa user
#   Execute script as dsa user
#
# Step 0.  Clean-Up prior deployment
#
# Step 1.  Auto deploy both democorp and router samples with: setup.sh -q
#
# Step 2.  Make common changes in democorp prior to copying
#
# Step 3.  Create six (6) copies of democorp and two (2) copies of router
#
# Step 4.  Update the six (6) copies of democorp for:
#     - name
#     - ports
#     - multi-write-group  (HUB group)
#     - DSA flags for MW & HUB-DSA
#     - Group knowledge file reference
#
#        Update the two (2) copies of router for:
#    - name
#    - ports
#    - Group knowledge file reference
#    - set write-precedence  (for HUB-DSA)
#
# Step 5. Start all DSAs
#
# Step 6. Test with dxsearch query
#
# Step 7. Execute the dxsoak command with the service account & time command
#
# Step 8. Update democorpA to force a single delta between peer members of AA and BB
#
# Step 9.  Create LDAP Export
#
# Step 10.  Create LDAP Delta & Compare the various democorp DSA to validate sync operations
#
#
##############################################
#set -xv
echo ..
echo "#############################################################"
echo "Step 0.  Clean up prior deployment of democorp and router"
echo "#############################################################"
dxserver stop all
sleep 5
kill -9 `ps -ef | grep dsa | grep democorp | grep -v grep | grep -v "democorp_mw_hub_lab" | awk '{print $2}'` >   /dev/null 2>&1
kill -9 `ps -ef | grep dsa | grep router   | grep -v grep | awk '{print $2}'` >   /dev/null 2>&1
sleep 5
rm -rf $DXHOME/data/democorp*.*
rm -rf $DXHOME/config/knowledge/democorp*.*
rm -rf $DXHOME/config/knowledge/router*.*
rm -rf $DXHOME/config/servers/democorp*.*
rm -rf $DXHOME/config/servers/router*.*
rm -rf $DXHOME/logs/democorp*.*
rm -rf $DXHOME/logs/router*.*
rm -rf $DXHOME/backup/delta*.*  > /dev/null 2>&1
rm -rf $DXHOME/backup/*.ldif > /dev/null 2>&1


echo ..
echo "#############################################################"
echo "Step 1a. Deploy clean version of democorp and router"
echo "#############################################################"
cd  $DXHOME/samples/democorp
$DXHOME/samples/democorp/setup.sh -q  > /dev/null 2>&1
cd $DXHOME/samples/router
$DXHOME/samples/router/setup.sh -q    > /dev/null 2>&1

cd
echo ..
echo "#############################################################"
echo "Step 1b. Create service ID in democorp for later use"
echo "#############################################################"
cat << EOF > $DXHOME/diradmin.ldif
version: 1
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: top
cn: diradmin
sn: diradmin
givenName: diradmin
userPassword: Password01
EOF

dxmodify -a -c -h `hostname` -p 19389 -f $DXHOME/diradmin.ldif

echo ..
echo "#############################################################"
echo "Step 1c.  Stop all running democorp & router DSAs"
echo "#############################################################"
dxserver stop all
sleep 10

echo ..
echo "#############################################################"
echo "Step 2a. Make common changes in pre-existing files before other modification"
echo "Update dsa-flags in democorp.dxc to allow Multi-Write with a HUB"
echo "#############################################################"
sed -i 's|ssl-auth|ssl-auth\n    multi-write-group = hub_group_AA\n     dsa-flags     =|g' $DXHOME/config/knowledge/democorp.dxc
sed -i 's|dsa-flags     =|dsa-flags     = multi-write, no-service-while-recovering, load-share|g' $DXHOME/config/knowledge/democorp.dxc

echo ..
echo "#############################################################"
echo "Step 2b. Update MW recovery in democorp.dxi file"
echo "#############################################################"
sed -i 's|recovery = false;|recovery = true;|g' $DXHOME/config/servers/democorp.dxi

echo ..
echo "#############################################################"
echo "Step 3a. Create six (6) copies of democorp and two (2) routers"
echo "Copy democorp data folder contents"
echo "#############################################################"
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpA.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpA.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpB.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpB.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpC.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpC.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpD.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpD.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpE.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpE.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpF.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpF.tx  > /dev/null 2>&1

echo ..
echo "#############################################################"
echo "Step 3b. Copy autostart folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpA
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpB
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpC
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpD
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpE
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpF
cp -r -p $DXHOME/config/autostart/router    $DXHOME/config/autostart/routerAA
cp -r -p $DXHOME/config/autostart/router    $DXHOME/config/autostart/routerBB

echo ..
echo "#############################################################"
echo "Step 3c. Copy knowledge folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpA.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpB.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpC.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpD.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpE.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpF.dxc
cp -r -p $DXHOME/config/knowledge/router.dxc   $DXHOME/config/knowledge/routerAA.dxc
cp -r -p $DXHOME/config/knowledge/router.dxc   $DXHOME/config/knowledge/routerBB.dxc
cp -r -p $DXHOME/config/knowledge/sample.dxg   $DXHOME/config/knowledge/groupAA.dxg
cp -r -p $DXHOME/config/knowledge/sample.dxg   $DXHOME/config/knowledge/groupBB.dxg

echo ..
echo "#############################################################"
echo "Step 3d. Copy server folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpA.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpB.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpC.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpD.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpE.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpF.dxi
cp -r -p $DXHOME/config/servers/router.dxi     $DXHOME/config/servers/routerAA.dxi
cp -r -p $DXHOME/config/servers/router.dxi     $DXHOME/config/servers/routerBB.dxi

echo ..
echo "#############################################################"
echo "Step 4a.  Update names & ports in democorp knowledge files"
echo "#############################################################"
sed -i 's|19389|29389|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|19390|29390|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPA =|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPA>|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|19389|29489|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|19390|29490|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPB =|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPB>|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|19389|29589|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|19390|29590|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPC =|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPC>|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|19389|29689|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|19390|29690|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPD =|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPD>|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|19389|29789|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|19390|29790|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPE =|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPE>|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|19389|29889|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|19390|29890|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPF =|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPF>|g' $DXHOME/config/knowledge/democorpF.dxc

echo ..
echo "#############################################################"
echo "Step 4b. Update knowledge files for router ports"
echo "#############################################################"
sed -i 's|19289|39289|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|19290|39290|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|dsa ROUTER =|dsa ROUTERAA =|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|19289|39389|g' $DXHOME/config/knowledge/routerBB.dxc
sed -i 's|19290|39390|g' $DXHOME/config/knowledge/routerBB.dxc
sed -i 's|dsa ROUTER =|dsa ROUTERBB =|g' $DXHOME/config/knowledge/routerBB.dxc

echo ..
echo "#############################################################"
echo "Step 4c. Update group knowledge file for three (3)MW Group HUB Peers "
echo "#############################################################"
sed -i 's|"router.dxc";|"routerAA.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|"democorp.dxc";|"democorpA.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|"democorpA.dxc";|"democorpA.dxc";\nsource "democorpB.dxc";\nsource "democorpC.dxc";\nsource "routerBB.dxc";\nsource "democorpD.dxc";\nsource "democorpE.dxc";\nsource "democorpF.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|source "unspsc.dxc";|#source "unspsc.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg

cp -r -p $DXHOME/config/knowledge/groupAA.dxg $DXHOME/config/knowledge/groupBB.dxg

#sed -i 's|"router.dxc";|"routerBB.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|"democorp.dxc";|"democorpD.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|"democorpD.dxc";|"democorpD.dxc";\nsource "democorpE.dxc";\nsource "democorpF.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|source "unspsc.dxc";|#source "unspsc.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg

echo ..
echo "#############################################################"
echo "Step 4d.  Update Server folder contents"
echo "#############################################################"
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpA.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpB.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpC.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpD.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpE.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpF.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/routerAA.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/routerBB.dxi


echo ..
echo "#############################################################"
echo "Step 4e.  Update HUB Configurations in DSA knowledge and DSA routers"
echo "#############################################################"
sed -i 's|load-share|load-share, multi-write-group-hub|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|load-share|load-share, multi-write-group-hub|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|/knowledge/groupAA.dxg";|/knowledge/groupAA.dxg";\nset  write-precedence = democorpA ,democorpB, democorpC;\n|g' $DXHOME/config/servers/routerAA.dxi
sed -i 's|/knowledge/groupBB.dxg";|/knowledge/groupBB.dxg";\nset  write-precedence = democorpD ,democorpE, democorpF;\n|g' $DXHOME/config/servers/routerBB.dxi

echo ..
echo "#############################################################"
echo "Step 4f.  Remove samples of router & democorp from starting "
echo "#############################################################"
rm -rf $DXHOME/config/servers/democorp.dxi
rm -rf $DXHOME/config/servers/router.dxi
rm -rf $DXHOME/config/autostart/democorp
rm -rf $DXHOME/config/autostart/router

echo ..
echo "#############################################################"
echo "Step 5. Start all DSAs"
echo "#############################################################"
dxcertgen certs > /dev/null 2>&1
dxserver start all

dxserver status

#exit

echo ..
echo "#############################################################"
echo "Step 6. Test all DSAs with dxsearch query"
echo "#############################################################"
# Comment out if too verbose
# Data DSAs
#dxsearch -h `hostname` -p 29389 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29489 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29589 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29689 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29789 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29889 -c -x -b o=DEMOCORP,c=AU
# Router DSAs
#dxsearch -h `hostname` -p 39289 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 39389 -c -x -b o=DEMOCORP,c=AU

# Data DSAs
#dxsearch -h `hostname` -p 29389 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29489 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29589 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29689 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29789 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29889 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
# Router DSAs
#dxsearch -h `hostname` -p 39289 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 39389 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01

echo ..
echo "#############################################################"
echo "Step 7. Execute the dxsoak command with the service account & time command"
echo "allow to run for over 5 sec to monitor changes for Multi-Write"
echo "may allow for longer times (1 hour) to get better performance metrics"
echo "#############################################################"
cd $DXHOME/samples/dxsoak
echo "Update democorpA (TCP 29389) to confirm MW to from democorpA (hub_group_AA) to democorpD (hub_group_BB)"
# Create a delete file first; then re-add entries
grep dn: democorp.eldf | grep ,ou=Services > democorp-del.eldf
sed -i 's|,c=AU|,c=AU\nchangetype: del\n|g' democorp-del.eldf

echo ..
echo "#############################################################"
echo "# Delete all DN entries with ou=Services: `wc -l democorp-del.eldf` on democorpA (TCP 29389)"
time ./dxsoak -c -t 2 -q 10 -l 5 -h `hostname`:29389 -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -w Password01 -f democorp-del.eldf

echo ..
echo "#############################################################"
echo "# Re-Add all DN entries with ou=Services: `wc -l democorp.eldf` on democorpD (TCP 29689)"
time ./dxsoak -c -t 2 -q 10 -l 5 -h `hostname`:29689 -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -w Password01 -f democorp.eldf


echo ..
echo "#############################################################"
echo "Step 8a. Update democorpA to force a single delta between peer members of AA and BB"
echo "#############################################################"
cd
cat << EOF > $DXHOME/diradmin_sn.ldif
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
changetype: modify
replace: sn
sn: diradmin_AA_new_update
EOF

echo "#############################################################"
echo "# Query democorpA (TCP 29389) for sn value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29389 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for sn value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp

echo "#############################################################"
echo "# Make update on democorpA"
echo "#############################################################"
dxmodify -a -c -h `hostname` -p 29389 -f $DXHOME/diradmin_sn.ldif

echo "#############################################################"
echo "# Query democorpA (TCP 29389) for sn value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29389 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for sn value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp

#exit
echo ..
echo "#############################################################"
echo "Step 8b. Update democorpF to force a reverse single delta between peer members of AA and BB"
echo "#############################################################"
cd
cat << EOF > $DXHOME/diradmin_givenName.ldif
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
changetype: modify
replace: givenName
givenName: diradmin_BB_new_update
EOF


echo "#############################################################"
echo "# Query democorpC (TCP 29589) for givenName value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29589 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for givenName value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp

echo "#############################################################"
echo "# Update democorpF to show replication via democorpD (HUB) to democorpA (HUB) "
echo "#############################################################"
dxmodify -a -c -h `hostname` -p 29889 -f $DXHOME/diradmin_givenName.ldif

echo "#############################################################"
echo "# Query democorpC (TCP 29589) for givenName value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29589 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for givenName value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp




echo ..
echo "###########################################################"
echo "Step 9b. Update CA Directory DSA to allow online backup ###"
echo "###########################################################"
echo " - Configure CA Directory to provide an data dump (zdb file) while DSA are online"
cp -r -p $DXHOME/config/settings/default.dxc.org $DXHOME/config/settings/default.dxc  > /dev/null 2>&1
cp -r -p $DXHOME/config/settings/default.dxc $DXHOME/config/settings/default.dxc.org  > /dev/null 2>&1
# Edit the DSA settings file to add in one line.  dump dxgrid-db;
chmod 744 $DXHOME/config/settings/default.dxc
echo "dump dxgrid-db;" >> $DXHOME/config/settings/default.dxc



echo ..
echo "######################################################################################"
echo "Step 9c. Re-init all DSA to data dump the CA DSAs for democorp & router "
echo "######################################################################################"
echo " - This make take 5-30 seconds to complete "
dxserver init all    > /dev/null 2>&1
# View for zdb or zd? (in-progress) files
sleep 10



echo ..
echo "#################################################################"
echo "Step 9d. Export DSA backup/offline zdb data files to LDIF file ###"
echo "#################################################################"
echo " - Export will happen after the backup/offline zdb files are fully created"
echo " - This make take 5-60 seconds  to complete "
echo ..
echo "#################################################################"
echo "Step 9e. Set WHILE loop for DemocorpF DSA ###"
echo "#################################################################"
until [ -f $DXHOME/data/democorpF.zdb ]
do
     echo " - Waiting till CA Directory has completed online data dump of DemocorpF DSA"
     sleep 5
done
sleep 5
echo ..
echo "#################################################################"
echo "Step 9f. Execute dxdumbdb for Democorp DSA - FULL ###"
echo "#################################################################"
mkdir $DXHOME/backup  > /dev/null 2>&1
cd $DXHOME/backup
dxdumpdb -z -f $DXHOME/backup/democorpA.ldif democorpA   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpB.ldif democorpB   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpC.ldif democorpC   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpD.ldif democorpD   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpE.ldif democorpE   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpF.ldif democorpF   > /dev/null 2>&1
sleep 5

echo ..
echo "#################################################################"
echo "Step 10a. Perform LDIF DELTA compare between democorpA and democorpB within same HUB MW group"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
#ldifdelta -x -S DSANAME  OLDFILE NEWFILE DELTAFILE
ldifdelta -x -S democorpA $DXHOME/backup/democorpA.ldif  $DXHOME/backup/democorpB.ldif $DXHOME/backup/delta-between-A-and-B.ldif
echo "#################################################################"
echo "Step 10b. Perform LDIF DELTA compare between democorpD and democorpE within same HUB MW group"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
ldifdelta -x -S democorpD $DXHOME/backup/democorpD.ldif  $DXHOME/backup/democorpE.ldif $DXHOME/backup/delta-between-D-and-E.ldif
echo "#################################################################"
echo "Step 10c. Perform LDIF DELTA compare between democorpC and democorpF across different HUB MW groups"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
ldifdelta -x -S democorpC $DXHOME/backup/democorpC.ldif  $DXHOME/backup/democorpF.ldif $DXHOME/backup/delta-between-C-and-F.ldif

echo .
echo .




Ref: This HUB Model lab was built from a prior lab for MW Sync with air-gap requirements.

https://community.broadcom.com/enterprisesoftware/communities/community-home/digestviewer/viewthread?MessageKey=62ccc41d-7c37-4728-ad1e-c82e7a8acc38&CommunityKey=f9d65308-ca9b-48b7-915c-7e9cb8fc3295&tab=digestviewer

Load Balancing Provisioning Tier

Prior releases of CA Identity Manager / Identity Suite have a bottleneck in the provisioning tier.

The top tier of the solution stack, Identity Manager Environment (IME/J2EE Application), may communicate to multiple Provisioning Servers (IMPS), but this configuration only has value for fail-over high availability.

This default deployment creates a "many-to-one" challenge: multiple IMEs experience a bottleneck in provisioning communication to a single IMPS server.

If this IMPS server is busy, then transactions for one or more IMEs are paused or may time out. Unfortunately, the IME (J2EE) error messages or delays do not make it clear that this is a provisioning bottleneck. Clients may attempt to resolve this challenge by increasing the number of IME and IMPS servers, but they will still be impacted by the provisioning bottleneck.

Two (2) prior methods used to overcome this bottleneck challenge were:


a) Pseudo hostname entries on the J2EE servers for the Provisioning Tier, then rotate the order of the pseudo hostnames in each local J2EE host file so their IP addresses resolve to different IMPS servers (see the host-file sketch below). This methodology gives us a 1:1 configuration where one (1) IME is now locked to one (1) IMPS (by the pseudo hostname/IP address). This method is not perfect, but it ensures that all IMPS servers will be utilized if the number of IMPS servers equals the number of IME (J2EE) servers. Noteworthy, this method is used by the CA Identity Suite virtual appliance, where the pseudo hostnames are ca-prov-srv-01, ca-prov-srv-02, ca-prov-srv-03, etc. (see image above)

<Connection
  host="ca-prov-srv-primary" port="20390"
  failover="ca-prov-srv-01:20390,ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390“
/>
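
A sketch of the pseudo-hostname rotation described in (a), using hypothetical IP addresses in each J2EE server's local host file:

# /etc/hosts on IME (J2EE) host #1 - IPs are placeholders for the real IMPS servers
192.168.242.10   ca-prov-srv-01
192.168.242.11   ca-prov-srv-02

# /etc/hosts on IME (J2EE) host #2 - same pseudo hostnames, IPs rotated so this IME binds to a different IMPS
192.168.242.11   ca-prov-srv-01
192.168.242.10   ca-prov-srv-02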

b) A router/load balancer placed in front of the IMPS (TCP 20389/20390) that provides "stickiness", ensuring that when a round-robin model is used, the same IMPS server is used for the IME that submitted a transaction. This avoids concerns about possible "RACE" conditions, where a modify operation may occur before the create operation.


The "RACE" challenge is a concern with both of the methods above, but the risk is low and can be managed with additional business rules that include pre-conditional checks, e.g., confirming the account exists before any modifications.

Ref: RACE https://en.wikipedia.org/wiki/Race_condition

Example of one type of RACE condition that may be seen.

Ref: PX Rule Engine: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/Cumulative-Patches/Latest-Cumulative-Patch-14_3-CP2.html

New CP2 Load Balancing Feature – No more bottleneck.

Identity Manager can now use round-robin load balancing support, without any restrictions on either type of provisioning operations or existing runtime limitations. This load balancing approach distributes client requests across a group of Provisioning servers.

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/release-features-and-enhancement/Identity-Manager-14_3-CP2.html#concept.dita_b51ab03e-6e77-49be-8235-e50ee477247a_LoadBalancing

This feature is managed in the IME tier, and will also address any RACE conditions/concerns.


No configuration changes are required on the IMPS tier. After updating to CP2, we can use the IME Management Console to export the directory.xml for the IMPS servers and update the XML tag <Connection>. This configuration may also be deployed to the virtual appliances.

<Connection
  host="ca-prov-srv-primary" port="20390"
  loadbalance="ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
  failover="ca-prov-srv-01:20390,ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
/>

View of CP2 to download.

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/Cumulative-Patches/Latest-Cumulative-Patch-14_3-CP2.html

Before applying this patch, we recommend collecting your metrics for feed operations that include multiple create and modify operations against a minimum of 1000 IDs. Monitor the IMPS etatrans logs as well as the JCS/CCS logs. After the patch, run the same feed operations to determine the value of the provisioning load-balance feature, and which provisioning delays have been addressed. You may wish to increase the number of JCS/CCS servers (MS Windows) to speed up provisioning to Active Directory and other endpoints.
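
As a rough sketch of collecting these before/after metrics (the log path below is an assumption and varies by install; adjust IMPS_HOME to your Provisioning Server installation):

# IMPS_HOME is a placeholder for your Provisioning Server install path
IMPS_HOME=/opt/CA/IdentityManager/ProvisioningServer
# Capture a line count of the etatrans transaction logs before the feed, then again after,
# and note the feed window start/end times to compare throughput before and after CP2
wc -l $IMPS_HOME/logs/etatrans*.log
date                                   # note the start time
# ... run the feed of 1000+ create/modify operations from the IME ...
date                                   # note the end time
wc -l $IMPS_HOME/logs/etatrans*.log    # compare against the pre-feed count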

Disaster Recovery Scenarios for Directories

Restore processes may be done with snapshots-in-time for both databases and directories. We wish to provide clarity on the restoration steps after a snapshot-in-time is utilized for a directory. The methodology outlined below has the following goals: a) allow sites to prepare before they need the restoration steps, and b) provide a training module to exercise the samples included in the vendor solution.

In this scenario, we focused on the CA/Broadcom/Symantec Directory solution. The CA Directory provides several tools to automate online backup snapshots, but these processes stop at copies of the binary data files.

Additionally, we wanted to walk through the provided DAR (Disaster And Recovery) scenarios, determine what needed to be updated to reflect newer features, and establish how we may validate that we accomplished a full restoration.

Finally, we want to assist with the decision-tree model used to triage and determine whether a full restore is required, or whether a partial restoration via extracts and imports of selected data will suffice.

Cluster Out-of-Sync Scenario

Awareness

The first indicator that a userstore (CA Directory DATA DSA) is out-of-sync will be the CA Directory logs themselves, e.g. alarm or trace logs.

Another indication will be inconsistent query results for a user object when querying through a front-end router to the DATA DSAs.

After awareness of the issue, the team will exercise a triage process to determine the extent of the out-of-sync data. For a quick check, one may execute LDAP queries directly to the TCP port of each DATA DSA on each host, and examine the results directly or even just the total number of entries, e.g. dxTotalEntryCount.

The returned count value will help determine if the number of entries for each DATA DSA on the peer MW hosts is out-of-sync due to ADD or DEL operations. The challenge/gap with this method is that it will not show any delta due to modify operations on the user objects themselves, e.g. an address field change.

Example of LDAP queries (dxsearch/ldapsearch) to the CA Directory DATA DSAs for the CA Identity Management solution (4 DATA DSAs and 1 ROUTER DSA)

su - dsa    OR [ sudo -iu dsa ]
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

# NOTIFY BRANCH (TCP 20404) 
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount
dn: dc=notify,dc=etadb

# INC BRANCH (TCP 20398)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# CO BRANCH (TCP 20396)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# MAIN BRANCH (TCP 20394)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# ALL BRANCHES - Router Port (TCP 20391)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount


A better process to identify the delta(s) is to automate the daily backup process to build out LDIF files for each peer MW DATA DSA and then perform a delta process between the LDIF files. We will walk through this more involved step later in this blog entry.
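
A sketch of automating that daily export via cron for the dsa user (the script path below is hypothetical; it would wrap the onlinebackup/dxdumpdb one-liner shown in Step 3 later in this post):

# Hypothetical crontab entry for the dsa user: 01:00 daily online backup + LDIF export
0 1 * * *  /bin/bash -lc '$DXHOME/scripts/daily_ldif_export.sh' >> /tmp/daily_ldif_export.log 2>&1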

Recovery Processes

The below link has examples from CA/Broadcom/Symantec with recovery notes for CA Directory DATA DSAs that are out-of-sync due to an extended downtime or outage window.

The below image, pulled from the document (page 9), shows CA Directory r12.x using the latest recovery process of "multiwrite-DISP" (MW-DISP) mode.

This MW-DISP recovery process is the default for the CA Identity Management DATA DSAs created by the install wizard tools when they create the IMPD DATA DSAs.

https://knowledge.broadcom.com/external/article?articleId=54088

The above document is dated, and still mentions additional file structures that have been retired, e.g. oc/zoc, at/zat.

An enhancement request has been submitted to address both of these items:

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=c71a304b-a689-4894-ac1c-786c9a2b2d0d

The modified version we have started for CA Directory r14.x adds some clarity around the <dsaname>.dx files, and notes which steps may be adjusted to support the split data structure of the four (4) IMPD DATA DSAs.

The same time-flow diagram was used. Extra notes were added for clarity and, where possible, examples of the commands that will be used to assist with direct automation of each step (or that may be pasted into an SSH session window as the dsa service ID).

Step 1, implicit in the identification/triage process, is to determine what userstore data is out-of-sync and how large a delta we have. If the DSA service has been shut down (either deliberately or via a startup issue) and the shutdown delay is more than a few days, the CA Directory process will check the date stamp in the <dsaname>.dp file and the transactions in the <dsaname>.tx file; if the gap is too large, CA Directory will refuse to start the DATA DSA and will issue a warning message.

Step 2, we will leverage the dxdisp <dsaname> command to generate a new time-stamp file, <dsaname>.dx, that will be used to prevent unnecessary sync operations for any data older than the date stamp in this file.

This command should be issued for every DATA DSA on the same host. This is especially true for split DATA DSAs, e.g. IMPD (CA Identity Manager's Provisioning Directories). In our example below, to assist with this step, we use a while-loop to issue the dxdisp command for each DSA.

This command can be executed regardless of whether the DSA is running or shut down. If an existing <dsaname>.dx file exists, any additional execution of dxdisp will add updated time-stamps to this file.

Note: The <dsaname>.dx file will be removed upon restart of the DATA DSA.

STEP 2: ISSUE DXDISP COMMAND [ Create time-stamp file for re-sync use ] ON ALL IMPD SERVERS.

su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxdisp "$LINE" ;done ; echo ; find $DXHOME -name "*.dx" -exec ls -larth {} \;


Step 3 will then ask for an updated online backup to be executed. 

In earlier releases of CA Directory, this required a telnet/ssh connection to the dxconsole of each DATA DSA, or adding a dump dxgrid-db; command to the DSA configuration files that would then be executed with the dxserver init all command.

In newer releases of CA Directory, we can leverage the dxserver onlinebackup <dsaname> process. 

Dumping all DATA DSAs at the same time can be a challenge with manual procedures.

Fortunately, we can automate this with a single bash shell process; and as an enhancement, we can also generate the LDIF extracts of each DATA DSA for later delta compare operations.

Note: The DATA DSA must be running (started) for the onlinebackup process to function correctly. If unsure, issue a dxserver status or dxserver start all beforehand.

Retain the LDIF files from the “BAD” DATA DSA Servers for analysis.

STEP 3a-3c: ON ALL IMPD DATA DSA SERVERS - ISSUE ONLINE BACKUP PROCESS
su - dsa OR [ sudo -iu dsa ]
bash

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -w -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`


Step 4a walks through the possible copy operations from the "GOOD" to the "BAD" DATA DSA host for the <dsaname>.zdb files. The IMPD DATA DSAs require that three (3) of the four (4) zdb files be copied, to ensure no impact to referential integrity between the DATA DSAs.

The preferred model to copy data from one remote host to another is the compressed rsync process over SSH, as this is a rapid process for the CA Directory db/zdb files.

https://anapartner.com/2020/05/03/wan-latency-rsync-versus-scp/

Below are code blocks that demonstrate how to copy data from one DSA server to another.

# RSYNC METHOD
sudo -iu dsa

time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data

# SCP METHOD   
sudo -iu dsa

scp   REMOTE_ID@$HOST:./data/<folder_impd_data_dsa_name>/*.zdb   /tmp/dsa_data
/usr/bin/mv  /tmp/dsa_data/<incorrect_dsaname>.zdb   $DXHOME/data/<folder_impd_data_dsa_name>/<correct_dsaname>.db


Step 4b walks through the final steps before restarting the "BAD" DATA DSA.

The ONLY files that should be in the data folders are <dsaname>.db (binary data file) and <dsaname>.dx (ASCII time-stamp file). Ensure that the copied <prior-hostname-dsaname>.zdb file has been renamed to the correct hostname and extension, <dsaname>.db.

Remove the prior <dsaname>.dp file (ASCII time-stamp file; the DATA DSA will auto-replace it using the *.dx file contents) and the <dsaname>.tx file (binary transaction file), as sketched below.
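
A sketch of this clean-up on the "BAD" host, using the same placeholder names as the copy examples above:

sudo -iu dsa
cd $DXHOME/data/<folder_impd_data_dsa_name>
# Rename the copied snapshot to the binary data file name the local DSA expects
mv <correct_dsaname>.zdb  <correct_dsaname>.db
# Remove the stale time-stamp and transaction files; the .dx file created by dxdisp stays in place
rm -f <correct_dsaname>.dp  <correct_dsaname>.tx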

Step 5a Start up the DATA DSAs with the command:

dxserver start all

If there is any issue with a DATA or ROUTER DSA not starting, then issue the same command with the debug switch (-d)

dxserver -d start <dsaname>

Use the output from the above debug process to address any a) syntax challenges, or b) stale PID/LCK files ($DXHOME/pid), as shown below.
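
For (b), a quick sketch for checking stale PID files (the per-DSA file naming is an assumption; only remove a PID file after confirming its DSA process is not actually running):

# List leftover PID files and confirm no matching DSA process exists before removing
ls -al $DXHOME/pid
ps -ef | grep dsa | grep <dsaname> | grep -v grep
rm -f $DXHOME/pid/<dsaname>.pid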

Step 5b Finally, use dxsearch/ldapsearch to run a unit test of authentication with the primary service ID. Use other unit/use-case tests as needed to confirm the data is now synced.

bash
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s base -b 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' '(objectClass=*)' | perl -p00e 's/\r?\n //g'


LDIF Recovery Processes

The steps above are for recovery via a 100% replacement method, where the assumption is that the "bad" DSA server does NOT have any data worth keeping or worth reviewing.

We also wish to clarify a process/methodology for when the "peer" multi-write DSAs are out-of-sync, but we are not sure which one is truly the "good DSA" to select, or when we wish to merge data from multiple DSAs before we declare one to be the "good DSA" (with regard to the completeness of data).

Using CA Directory commands, we can chain them together to automate snapshots and exports to LDIF files. These LDIF files can then be compared against their peer MW DATA DSA exports, or even against themselves at different snapshot times. As long as we have the LDIF exports, we can recover from any DAR scenario.

Example of using CA Directory dxserver and dxdumpdb commands (STEP 3) with the ldifdelta and dxmodify commands.

The output from ldifdelta may be imported to any remote peer MW DATA DSA server via dxmodify to that hostname, to force a sync for the few objects that may be out-of-sync, e.g. password hashes or other attributes.

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`

ldifdelta -x -S ca-prov-srv-01-impd-co  /tmp/20200819_122820_1597858100_ca-prov-srv-01-impd-co.ldif   /tmp/20200819_123108_1597858268_ca-prov-srv-01-impd-co.ldif  |  perl -p00e 's/\r?\n //g'  >   /tmp/delta_file_ca-prov-srv-01-impd-co.ldif   ; cat /tmp/delta_file_ca-prov-srv-01-impd-co.ldif

echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
dxmodify -v -c -h`hostname` -p 20391  -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f /tmp/delta_file_ca-prov-srv-01-impd-co.ldif


The below images demonstrate a delta that exists between two (2) time snapshots. The CA Directory tool, ldifdelta, can identify and extract the modified entry for the user object.

The following examples show how to re-import this delta using the dxmodify command to the DATA DSA, with no other modifications required to the input LDIF file.

In the testing example below, before any update to an object, let’s capture a snapshot-in-time and the LDIF files for each DATA DSA.

Let's make an update to a user object using any tool we wish, or a command-line process like ldapmodify.

Next, let's capture a new snapshot-in-time after the update, so we will be able to utilize the ldifdelta tool.

We can use the ldifdelta tool to create the delta LDIF input file. After we review this file and accept the changes, we can then submit this LDIF file to the remote peer MW DATA DSAs that are out-of-sync.

We hope this has value for you and for any challenges you may have within your environment.

Upgrade CA API Gateway via docker “in-place”

CA API Gateway (ssg) is used to manage SaaS endpoints/applications for the CA/Symantec Identity Suite solution. One of the challenges of appliances and Docker containers is that the underlying 3rd-party libraries may get dated and require updates.

Most vendors will not allow post-updates or direct updates to their containers' libraries, as this has an impact on the support model. So we must rely on the support process and push vendors to release additional updates to stay ahead of any security concerns.


The CA API Gateway (ssg), when deployed on Docker, has a streamlined process for updating in place, as long as you have backed up the MySQL database when the Docker images are updated.

We wanted to capture the process to upgrade from CA API Gateway 9.4 (ssg94) to Gateway 10.0 (ssg10). Fortunately, the MySQL 8.0 database has the same structure, tables, and routines as the MySQL 5.7 database for CA API Gateway 9.4.

The challenge is that the documented upgrade process is difficult to implement on the same host OS, and it misses an opportunity to manage the license change from 9.4 to 10.0 during the re-import of the MySQL database.


The below diagram, from the CA API Gateway 10.0 upgrade documentation, can be adjusted to streamline the upgrade process.

Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/upgrade-the-gateway/upgrade-an-appliance-gateway/manual-expedited-appliance-upgrade.html

The documented process above outlines dropping the MySQL database ssg completely and then creating a new clean db. We can slightly adjust this process to avoid the unnecessary step of re-importing the license file after the restart of the gateway container. We also wish to add additional validation steps to show what is changing.

Proposal for modifications:

  1. Create a clean CA API Gateway 10.0 (ssg10) Docker deployment on the same host OS. You may use docker-compose with the REST service enabled and use different TCP listening ports to allow two (2) Docker containers to run simultaneously during the testing cycle. After testing, you may keep the default TCP listen ports of 8443 & 9443.
  2. Allow the CA API Gateway 10.0 container to start cleanly with the MySQL 8.0 DB and with the correct license file for version 10. Then export the MySQL database table that contains the updated license (license_document).
  3. Import the prior backup MySQL file to the new CA API Gateway deployment. Then before startup, import the ssg10 license mysql file as well. This will replace the ssg94 license information.
  4. Restart the CA API Gateway container, monitor the logs for any errors, and ensure the new license file is used.
  5. If the REST API was enabled (via the docker-compose file and touching a file named "restman"), then use curl to validate that all REST services are available and that all prior API Gateway policy services are displayed.

A visual example of this process using the prior diagram.

Note: The official documentation uses sed to replace the string "NO_AUTO_CREATE_USER"; the documentation shows two examples, one with a comma and one without. We have included the one with the comma; we did not see this string in the MySQL SQL export, so it was deemed low value, but it is still included in our process.

Example of upgrade process and validation of using REST

Note the two (2) running CA API Gateway containers, 9.4 (with MySQL 5.7) and 10.0 (with MySQL 8.0), with different TCP listen services; and the validation of REST services for ssg10.

Below are the above steps called out with additional validation steps, and the use of the time command to monitor the duration of the exports.

# Pre-Step 1:  On Test System:  Prepare SSG10 docker compose yml file and correct license.xml & confirm startup.
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}
docker ps -a
docker logs ssg10 -f --tail 100
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"


# Step 2:  On PROD HOST OS: Stop SSG94 and export the current MySQL 5.7 database with routines (aka stored procedures) & remove unwanted lines
docker stop ssg94
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.before.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.sql
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.updated.for.mysql8.sql
sed -i "s/NO_AUTO_CREATE_USER,//g"   			ssg94.backup.updated.for.mysql8.sql
sed -i "/Using a password on the command/d" 	ssg94.backup.updated.for.mysql8.sql


# Step 3: On PROD HOST OS: Deploy SSG10 with docker compose yml file & correct license xml file & export db table license_document
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}  
docker ps -a
docker logs ssg10 -f --tail 100     
docker stop ssg10
time docker exec -tt  mysql-ssg10  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines license_document  > ssg10.license.export.sql
sed -i "/Using a password on the command/d" 	ssg10.license.export.sql

# Step 4: On PROD HOST OS: Drop the SSG10 MySQL 8.0 ssg database and rebuilt with imports of SQL files.
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer drop ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer create ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg94.backup.updated.for.mysql8.sql
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg10.license.export.sql
docker exec -it mysql-ssg10  mysql --user=root --password=7layer ssg  -e "SELECT * FROM license_document;" | grep -A 12 -e "<license "

# Step 5: On PROD HOST OS:  Start SSG10 and validate no errors 
docker start ssg10       {Wait 90-120 seconds} 
docker ps -a

# Step 6:  Validate license    
docker logs ssg10 -f --tail 100  
docker logs ssg10 -f 2>&1  | grep -i license

# Step 7:  Validate REST services enabled and we can see all services
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
# Example to validate ServiceNow REST service to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://localhost:9443/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Example validate ServiceNow REST service via LB to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://192.168.242.135/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Direct REST service to ServiceNow to validate development instance is available.
curl --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json"  'https://dev101846.service-now.com/api/now/table/sys_user?sysparm_query=user_name=testalan13095'

# Step 8:  Certs required for IM JCS Tier to avoid typical cert issues.
a. Ensure the CA API Gateway public root CA cert or self-signed cert is imported to each JCS keystore
b. If using a LoadBalancer, e.g. httpd, ensure this public root CA cert or self-signed cert is imported to each JCS keystore.
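
A sketch of step (a) using keytool (the keystore path, alias, certificate file, and password below are placeholders; use the actual values from your JCS installation):

# Placeholder paths/alias/password - adjust for your JCS install
JCS_KEYSTORE=/opt/CA/IdentityManager/ConnectorServer/jcs/conf/ssl.keystore
keytool -importcert -trustcacerts -alias caapigw-root \
  -file /tmp/caapigw_root_ca.cer -keystore "$JCS_KEYSTORE" -storepass <keystore_password>
# Restart the JCS service afterwards so the new trust entry is picked up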


Docker commands collected to assist with RCA efforts for Operations teams

# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"

# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"


#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If the MySQL root password is random, find it via the logs (redirect stderr to stdout so the output can be piped to grep)


#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the log output shown
#  docker logs ssg10 -f --tail 100

# Commands to install additional packages for vulnerability scans (ps from procps) & to update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux

#  Update password process
# Generate a SHA-512 crypt password hash (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'
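#   Another option, assuming the installed OpenSSL is 1.1.1 or later (-6 produces a SHA-512 crypt hash):
#   openssl passwd -6 7layer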

# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "

# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"

# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"

# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"

# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/emailListeners?sort=host&order=desc
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl

Change the pmadmin password at the docker command line

Process flows collected for the CA API Gateway docker deployment

Example of the docker-compose yml file for CA API Gateway with REST web services and license xml file.

We attempt to keep useful notes/hints in the yml file for future reference. The example below redirects ports to TCP 18443 and 19443 from the standard CA API Gateway ports of 8443 and 9443, and MySQL from 3306 to 23306, for testing protocols in non-production environments.

# docker-compose-ssg10-0-mysql8-0_with_rest_and_external_mysql_volume.yml
# Startup:  docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml up  -d
# Stop:     docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml down
#
#
# Ensure Host OS Network allows IPv4 forwarding:   sysctl -a | grep ipv4.ip_forward
# Validate docker network has access with curl:  curl -vk --tlsv1.2  https://www.service-now.com
# Note:  Do NOT use TABS in this file
# Monitor startup of containers with:  docker logs ssg10 -f --tail 100   AND   docker logs mysql-ssg10 -f  --tail 100
# https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/using-the-container-gateway/getting-started-with-the-container-gateway/run-the-container-gateway-on-docker-engine/sample-docker-compose-deployment-file.html
version: "2.2"
services:
   ssg10:
     container_name: ssg10
     # Ref: https://hub.docker.com/r/caapim/gateway/tags
     #image: caapim/gateway:latest
     image: caapim/gateway:10.0.00_20200428
     mem_limit: 10048m
     volumes:
        # Ensure ssg_license.xml is a valid SSG license file for 9.4 or 10.0
        - ./ssg_license_10.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
        # https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/apis-and-toolkits/rest-management-api.html
        # Touch the file restman to auto-start rest webservices
        # Validate REST API with curl
        # curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
        # curl --insecure --user pmadmin:7layer  https://localhost:18443/restman/1.0/rest.wadl
        - ./restman:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/services/restman
     ports:
       - "58443:8443"
       - "59443:9443"
     environment:
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql-ssg"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql-ssg10:3306/ssg?useSSL=false"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false"
        SSG_INTERNAL_SERVICES: "restman wsman"
     links:
        - mysql-ssg10
   mysql-ssg10:
     container_name: mysql-ssg10
     # Ref https://hub.docker.com/_/mysql?tab=tags
     image: mysql:8.0.20
     #image: mysql:latest
     # SSG 10.0 requires MySQL 8.x per documentation
     #https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/using-mysql-8_0-with-gateway-10.html
     mem_limit: 1048m
     restart: always
     ports:
        - "23306:3306"
     environment:
        - MYSQL_ROOT_PASSWORD=7layer
        #- MYSQL_RANDOM_ROOT_PASSWORD=yes
        - MYSQL_USER=gateway
        - MYSQL_PASSWORD=7layer
        - MYSQL_DATABASE=ssg
     command:
       - "--character-set-server=utf8mb3"
       - "--log-bin-trust-function-creators=1"
       - "--default-authentication-plugin=mysql_native_password"
       - "--innodb_log_buffer_size=32M"
       - "--innodb_log_file_size=80M"
       - "--max_allowed_packet=8M"
#     volumes:
#       - mysql_db8:/var/lib/mysql
# Persist SSG MySQL DB Data
# Validate after shutdown with:  docker volume ls  &  docker volume inspect ssg_mysql_db
# Note:  Important - A random root password will not work with persisted MySQL data - the password must be known the first time
#   volumes:
#     mysql_db8:
#
# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"
# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If the MySQL root password is random, find it via the logs (redirect stderr to stdout so the output can be piped to grep)
#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the log output shown
#  docker logs ssg10 -f --tail 100
# Commands to install additional packages for vulnerability scans (ps from procps) & to update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux
#  Update password process
# Generate a SHA-512 crypt password hash (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'
# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "
# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"
# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"
# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"
# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/emailListeners?sort=host&order=desc
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl

Clean-Up Orphans and Refine Correlation Rules

Correlation rules may be very simple: a unique ID on the IAM solution should match a unique ID (or combination of attributes) on a managed endpoint/application to form a one-to-one (1:1) relationship with that identity.

Most sites that have had the opportunity have started using GUID/UUID values for the correlation ID on the IAM solution and, if the endpoint/application allows it, storing the same GUID/UUID in an open field that is typically not the login ID field.

The example below uses a GUID/UUID format as the primary identifier for both the IAM solution and the endpoint/application of an Active Directory domain.
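A minimal command-line sketch of this pattern (the custom field, AD extension attribute, test GUID, and AD bind DN below are illustrative; substitute the fields used in your own IAM solution and AD schema). Both queries should return exactly one entry for a given GUID/UUID value:

CORRID=3f2b9c1e-8d44-4b7a-9a0e-6c1d2f3a4b5c    # hypothetical GUID/UUID stamped on both sides
# IAM Provisioning Tier global user holding the GUID in a custom field (illustrative field)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://$(hostname):20390 -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -y ./.imps.pwd -b "eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" "(eTCustomField05=$CORRID)" eTGlobalUserName
# Active Directory account holding the same GUID in an open/extension attribute (illustrative bind DN and attribute)
ldapsearch -LLL -H ldaps://dc2016.exchange.lab:636 -D "CN=svc_ldap_bind,OU=Service Accounts,DC=exchange,DC=lab" -W -b "DC=exchange,DC=lab" "(extensionAttribute10=$CORRID)" sAMAccountName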

We may also have many different correlation rules, or primary/secondary correlations, for every application/endpoint. Until the correlation rules are correct, there is a likelihood of incorrect or default correlations.

If we wish to remove an incorrect correlation, this may be done manually by removing or re-attaching the correct entries. However, this does not address future correlation processes if the rules are not updated.

Example of removing a correlation from the orphan ID “[default user]”


Example of removing an incorrect correlation manually within the IAM solution

To assist with refinement of correlation rules, a feedback process/script may have value.

The below script demonstrates a feedback process that uses the OS ldapsearch/ldapdelete binaries against the CA Identity Manager Provisioning Tier (TCP 20389/20390) to clean up the orphan IDs under “[default user]”.

The script will query all “inclusions” where an endpoint account has been incorrectly associated with the Global User “[default user]” and return a count of these records. The process will capture the dn values of these inclusion records, and then feed them to the OpenLDAP ldapdelete binary to have them removed. Since we are using the IMPS service (TCP 20389/20390), the solution still maintains referential integrity during the clean-up process.

After the deletions are complete, we will re-initialize a new E&C (explore & correlate) process using any new correlation rules that may have been added. This is the opportunity for an administrator to adjust their own correlation rules and then re-execute the script. If the correlation rules do not match, the prior correlations will return to “[default user]”.

#!/bin/bash
#####################################################################################################################
#
# Name: Clean Up [default user]
#
# Goal:  Script to clean up [default user] correlations to allow for better orphan or rogue account identification
#  - Ensure that IMPS Service TCP 20389/20390 is used to maintain referential integrity of the inclusions entries
#    during delete operations.
#
# Ref:  CA IM r14.x solution & OS ldapsearch/ldapdelete
#
# A. Baugher, ANA, 04/2020
#
#####################################################################################################################
# set -xv
DATETZ=$(date -d "1970-01-01 00:00:00 `date +'%s'` seconds"  +'%Y-%m-%dT%H:%M:%S.%3NZ')
IMPSHOST=`hostname`
IMPSPORT=20390
IMPSUSERDN='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
# Use pwd file to avoid clear text passwords in script
# echo -n CLEAR_TEXT_PASSWORD > .imps.pwd
IMPSPWD=`cat .imps.pwd`
#####################################################################################################################
BASE_DN='eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta'
SUP_DN_ENTRY='eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im'
FILTER="(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=$SUP_DN_ENTRY))"
SEARCH=sub
ATTRIBUTES='dn eTInclusionID'
EXCLUDE="  -e ^$ "
#SIZE=" -z 10"
SIZE=" -z 0"
FILENAME=default_user_guid.txt
rm -rf $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID > tmp_file
echo "LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D '$IMPSUSERDN' -y ./.imps.pwd -b '$BASE_DN' -s $SEARCH '$FILTER' $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F': ' '{print \$2}' | grep eTInclusionID "
uniq -i tmp_file > $FILENAME
echo "#################################################################################################"
echo "# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : "`cat $FILENAME | wc -l`
rm -rf tmp_file
echo "#################################################################################################"



echo ""
echo "####################################################################################################################"
echo "#### Remove `cat $FILENAME | wc -l` EA (endpoint accounts) that are correlated to the Global User [default user] "
echo "####################################################################################################################"
LDAPTLS_REQCERT=never ldapdelete -v -c -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -f $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"


echo ""
echo "#################################################################################################"
echo "#### Re-explore & correlate to update Global User [default user] orphan bucket."
echo "#################################################################################################"
echo ""
IMPSADSBASEDN="eTADSDirectoryName=dc2016.exchange.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers

IMPSADSBASEDN="eTADSDirectoryName=dc2012.exchange2012.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers


echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
echo ""

Example of the output of the script (with 1000’s of lines removed for clarity). It includes E&C to two (2) ADS endpoints, where > 2000 identities default correlate to the orphan Global User “[default user]”.

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2184
#################################################################################################
LDAPTLS_REQCERT=never ldapsearch  -z 0 -LLL -H ldaps://vapp0001:20390 -D 'eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -y ./.imps.pwd -b 'eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta' -s sub '(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im))' dn eTInclusionID | perl -p00e 's/\r?\n //g' | grep -v   -e ^$  | awk -F': ' '{print $2}' | grep eTInclusionID
#################################################################################################
# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : 2184
#################################################################################################

####################################################################################################################
#### Remove 2184 EA (endpoint accounts) that are correlated to the Global User [default user]
####################################################################################################################
ldap_initialize( ldaps://vapp0001:20390/??base )
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@67d6bf2c-1104-1039-96c4-ef7605d11763,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname6 mi. lastname6' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@65e02962-00bd-1039-830f-ae134a0f7638,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname0002 lastname0002' and Global User '[default user]' deleted successfully

[Deleted > 5000 similar rows ]

deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@ce05d098-1b32-1039-85ec-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'ffffff' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@75a62f60-1b32-1039-85ea-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'eeeee' and Global User '[default user]' deleted successfully

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
0
#################################################################################################

#################################################################################################
#### Re-explore & correlate to update Global User [default user] orphan bucket.
#################################################################################################

Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 672, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' correlation successful: (accounts correlated: 0, defaulted: 566, unchanged: 6, failures: 0)
Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 1871, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' correlation successful: (accounts correlated: 0, defaulted: 1619, unchanged: 153, failures: 0)

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2185
#################################################################################################



Modify the above script for your own applications/endpoints and refine your correlation rules (or add additional ones as needed).

If application/endpoint identities are non-managed service IDs, a process that may assist is shown below: create a new Global User (in a similar format to [default user]), and then drag-n-drop the endpoint/application service ID accounts to the new Global User, e.g. [endpoint A service ID].
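If preferred, the placeholder Global User may also be created via the IMPS LDAP service rather than the GUI. A minimal ldapadd sketch follows; the objectClass and attribute names are assumptions based on the provisioning schema used elsewhere in this document, and additional mandatory attributes may be required in your release, so verify before use:

cat > service_id_owner.ldif <<'EOF'
dn: eTGlobalUserName=[endpoint A service ID],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta
objectClass: eTGlobalUserObject
eTGlobalUserName: [endpoint A service ID]
EOF
LDAPTLS_REQCERT=never ldapadd -H ldaps://$(hostname):20390 -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -y ./.imps.pwd -f service_id_owner.ldif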

The final goal is a “clean” orphan process that will be able to alert us to any rogue accounts created OOB (out-of-band) of the expected top-down IAM flow from an approved SOT (source-of-truth) solution, e.g. SAP HR/Workday or a home-grown DB used with ETL processes. By removing the “noise” of incorrectly correlated accounts, we can now focus on identifying the true “orphans”.

COVID-19 and Privacy Preserving Contact Tracing


“Contact Tracing makes it possible to combat the spread of the COVID-19 virus by alerting participants of possible exposure to someone who they have recently been in contact with, and who has subsequently been positively diagnosed as having the virus.” – Apple

The conventional deterrent to the adoption of contact tracing is the lack of privacy controls. When privacy is vital, Apple has to be involved, and in the times we are living today, having a framework that allows for privacy-focused contact tracing to limit the spread of novel viruses is extremely important. Apple and Google have collaboratively put a structure in place for enabling contact tracing while preserving privacy.

Below is a simplified explanation of how contact tracing would work using a mobile device, as described in the specification.

  • Use of Bluetooth LE (Low Energy) for proximity detection (no use of location-based services, which is essential for preserving privacy)
  • Generation of daily tracing keys using a one-way hash function
  • Generation of rolling proximity identifiers that change every ~15 minutes and are based on the daily tracing key
  • Advertise one's own proximity identifiers and discover foreign proximity identifiers
  • The user decides when to contribute to contact tracing
  • If diagnosed with COVID-19, the user consents to upload a subset of daily tracing keys
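The one-way derivation of the daily tracing key and rolling proximity identifier can be pictured with a short shell sketch. This is a simplification for illustration only; the labels, HMAC construction, and window arithmetic are assumptions and are NOT the exact derivation defined in the Apple/Google specification:

# Illustration only - simplified one-way derivation of a daily key and a rolling proximity identifier
TRACING_KEY=$(openssl rand -hex 32)                          # long-term secret; never leaves the device
DAY_NUMBER=$(( $(date +%s) / 86400 ))                        # current day index
DAILY_KEY=$(printf 'CT-DTK%s' "$DAY_NUMBER" | openssl dgst -sha256 -hmac "$TRACING_KEY" -r | cut -d' ' -f1)
WINDOW=$(( ( $(date +%s) % 86400 ) / 900 ))                  # ~15-minute window index within the day
RPI=$(printf 'CT-RPI%s' "$WINDOW" | openssl dgst -sha256 -hmac "$DAILY_KEY" -r | cut -d' ' -f1)
echo "Daily tracing key            : $DAILY_KEY"
echo "Rolling proximity identifier : $RPI"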

To detect whether they may have come in contact with a COVID-19 positive individual, a user's device would:

  • Download COVID-19 positive daily tracing keys
  • Contact tracing app computes time-based proximity identifiers from the downloaded daily tracing keys on the local device
  • Checks if this local device has previously recorded any of these identifiers

The above mechanism of determining if one has come in contact with a COVID-19 positive person preserves privacy by:

  • Ensuring that the contact tracing keys cannot be reverse-engineered to compute identifying information about the originating device
  • Not associating GPS or other location services with the keys
  • Performing the verification of contact with a COVID-19 positive person locally on the mobile device

The specification is preliminary, and there is a strong attempt to ensure the privacy of individuals is protected within this framework. COVID-19 positive information uploaded onto central servers also does NOT contain any personally identifiable information. The detection itself is a decentralized process, as it is computed locally on an individual’s device. Central entities are not in control of either detecting or informing people in this entire process.

All of this works very well if everyone does their civic duty and reports when they are positive.

For broad adoption of contact tracing, though, there are opportunities for further improvement. The specification should address the server(s) responsible for collecting uploaded information. A (mobile) device generates daily tracing keys, and although the keys themselves have no mechanism to be associated with that device by an external process, uploading these keys from the same mobile device opens up the possibility of linking information: the IP address of a device that uploads anything to a server is always known to that server. Applications that use the above privacy framework additionally need to consider connectivity-related exposure while uploading information to central servers. The specification should have a section with considerations on protection and controls for privacy exposure as a result of connectivity.

Another area of improvement could be controls around potential abuse of this contact tracing mechanism. As per the specification, a person who has tested positive for COVID-19 opts in and uploads their daily tracing key information to central servers. The intent is for others who have been in proximity with them to know and take action to self-quarantine to limit the spread. Pranksters or other entities may falsely claim to be COVID-19 positive to cause general, unnecessary, or intentional disruption. If such actions happen at a large scale, they will severely impact the reliability, credibility, and adoption of contact tracing.

Preserving privacy is not an easy problem, and a larger mindshare is needed to solve these challenges.

[Detailed specification at Apple’s website]

This town is big enough for us all: Expanding the CA Provisioning Tier Schema to 900+ Custom Fields

Based on recent requests, we wished to revisit this “hidden” gem that expands the CA Identity Suite provisioning schema to meet unique business requirements, enabling 100’s of SaaS and on-prem applications/endpoints to apply custom business logic to users’ endpoint account attributes.

Since the early days of the CA Identity Suite solution (eAdmin r8.1sp2), there has been a Provisioning SDK that provides an approved process to extend the CA Identity Manager IMPD (provisioning directory) schema from the default 99 user custom fields by 900 additional user custom fields. For comparison, the default 99 user custom fields, together with the standard 40-50 default user profile fields, e.g. givenName (First Name), sn (Last Name), userID, telephone #, etc., typically meet most business use-cases.

Unfortunately, this extended schema process is not well known.

The only known documentation is an embedded readme.txt within a compressed package. Occasionally there will be support tickets or community notes that request this feature as an “enhancement”.

This package is included in the Provisioning SDK download; for IM r14.3, the file name is:

Component Name: CA Identity Manager r14.3 Legacy components
File: GEN500000000002780.zip ~ 200 MB

Background:

The CA Identity Suite (Identity Manager) Provisioning Tier does NOT attempt to be a meta-directory, but acts as a virtual directory to the 1000’s of managed endpoints/userstores/applications. As long as the “explore” operation was successful, there will be a “pointer” object that references the correct location of the endpoint accounts. When a “correlation” operation occurs, this endpoint account “pointer” object is attached (via inclusion referential objects) to the associated global user ID.

By using this “virtual directory” architecture, it is possible for IM business rules or 3rd party tools to directly view the 1000’s of managed endpoints “real data” and not a “stored” representation of this data.

However, some clients do wish to “collect” the native data and store it within the IMPD provisioning store as SNAPSHOT data, to monitor for non-approved / OOB (out-of-band) access. If some fields are dedicated to select endpoints, the default 99 custom fields may quickly run out.

Tackling Case-insensitivity Requirement:

Adjusting the IMPD schema for case-insensitivity allows for case-insensitive correlation rules and, if the new fields are exposed to the IME, case-insensitive comparisons for business rules (PX).

Challenge:

The above Provisioning SDK process will build the extended eTCustomField100-999 and eTCustomFieldName100-999 attributes with case=sensitive. Interestingly, we did not identify a requirement for case sensitivity with the default custom fields, but it does appear this was a decision made when the SDK was created. Note the observation from the OOTB etrust_admin.schema file (for the IMPS data): this OOTB schema displays a mix of case sensitivity for the default custom fields eTCustomField00-99 and eTCustomFieldName00-99.

Proposal:

To address this new requirement, there are three (3) possible deployments to enable this extended schema. We will review the pros/cons of each possible deployment choice.

Supporting Note 1:

  • eTCustomFieldXXX is the attribute that will contain a value.
  • eTCustomFieldNameXXX is the attribute that will contain a business name for this custom field.

Supporting Note 2:

The CA IM Provisioning Tier was/is developed with early x86 MS VC++ code. We attempted to use a later release of the MS Visual Studio VC solution for this process, but it failed to generate the output files.

Phase 1 Steps: Enhance the IM Provisioning Tier with 900 new custom fields with case = insensitive.

  1. Download & install MS Visual Studio VC 2010 Express, to have access to the ‘nmake’ executable.
  2. Update the OS PATH variable to reference this MS VC 2010 bin folder.
  3. Execute the nmake binary, to ensure it is working fine:
    • where nmake & nmake /?
  4. Download & install the CA IM Provisioning SDK on the same server/workstation as the ‘nmake’ binary.
    • IM r14.3 GEN50000000002780.zip ~ 200 MB
  5. Open a command line window; and then change folder to the Provisioning SDK’s COSX Samples folder.

cd “C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\samples\COSX”

  6. Execute the gencosx.bat batch file to generate the additional schema for N attributes.

gencosx.bat 900 { Max allowed value is 900; which will generate 100-999 attributes}

The output text file: cosxparse.pty

**** The above steps only need to be executed ONCE on a workstation. After the output text file is generated, we should only need & retain this file for future updates. ****

################################################################

  7. Use Notepad++ to search and replace a string in the following file, cosxparse.pty:

“case=sensitive” to “case=insensitive”

{We may be selective and only replace a few attributes instead of all additional 900 attributes.}
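If the file is staged on a Linux host, the same replacement may be scripted; a short sketch (back up the file first):

cp cosxparse.pty cosxparse.pty.bak
sed -i 's/case=sensitive/case=insensitive/g' cosxparse.pty
grep -c "case=insensitive" cosxparse.pty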

  8. Execute the following commands to generate the binary file.
    • Use the batch file to set environment variables for the nmake program:
      “C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat”
    • Execute ‘nmake’:
      nmake

The new output file (binary) will be:
C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\data\ cosxparse.ptt

  9. Before overwriting existing files, back up the three (3) prior files from the IMPS/CCS data folders & the IMPD schema folder:
    etrust_cosx.schema
    etrust_cosx.dxc
    cosxparse.ptt
  10. Copy the file, cosxparse.ptt, to the IMPS server data folder.
  11. Stop the IMPS service: su - imps & imps stop
  12. Execute the following command: schemagen -n COSX
  13. This process will create two (2) new output files:
    • etrust_cosx.dxc
    • etrust_cosx.schema
  14. Validate the two (2) newly generated files have case-insensitivity set.
  15. Copy etrust_cosx.dxc to all CA Directory schema folders, including DX routers (on IMPS servers).
    • Validate this file is referenced in the IMPD group knowledge schema file: etrust_admin.dxg
  16. Copy etrust_cosx.schema & cosxparse.ptt to all CA IMPS Servers, the CCS Servers’ data folders, & the CA IMPS GUI data folder.
    • Validate the file, etrust_cosx.schema, is referenced in the IMPS configuration file: etrust_admin.conf
  17. Restart CA Directory and the IMPS/CCS services.
    • dxserver stop all / dxserver start all
    • imps stop / imps start
    • net stop im_jcs / net start im_jcs   {this will also restart the im_ccs service}
  18. With the IMPS GUI:
    • Assign a ‘business name’ to the newly created eTCustomField100+ under
      SYSTEM/GLOBAL PROPERTIES/CUSTOM USER FIELDS
      {If you do not see these newly created fields, then the IMPS GUI data folder was not updated per step 16.}
    • Validate that E&C Correlation Rules will now work for these extended fields with case-insensitivity.
      SYSTEM/DOMAIN CONFIGURATIONS/EXPLORE AND CORRELATE/CORRELATION ATTRIBUTE/
    • Validate the custom fields are viewable for each Global User.
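A minimal command-line sketch to confirm the extended schema is in place after the restarts; the test global user name and the Provisioning Server data path are illustrative, and the bind DN and password-file convention follow the clean-up script earlier in this document:

# Confirm the new schema files are referenced (adjust paths for your install)
grep -i cosx $DXHOME/config/schema/etrust_admin.dxg
grep -i etrust_cosx /path/to/ProvisioningServer/data/etrust_admin.conf
# Read back an extended custom field for a test global user (name illustrative)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://$(hostname):20390 -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -y ./.imps.pwd -b "eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -s sub "(eTGlobalUserName=testuser001)" eTCustomField100 eTCustomFieldName100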

We may now STOP HERE if we do NOT need to expose these new custom fields to the IME.

Pro: Able to use custom fields for account templates and correlation rules.

Con: Not exposed to the IME for 1:1 mapping nor for PX Business Rules.

#############################################################

Phase 2 Steps – Advanced Configuration – Add custom fields to the IME to allow for 1:1 mapping and use of PX Business Rules.

  1. Update the JIAM (Java LDAP to IMPS API) reference file, jiam.jar, to allow the IME to manage these extended fields for PX business rules (a command-line sketch of this jar update follows this list).
    • Use 7zip https://www.7-zip.org/ to extract files from jiam.jar; update the file CommonObjects.xml; then replace this file in the jar file.
    • Location of the reference file: ./wildfly-idm/standalone/deployments/iam_im.ear/library/jiam.jar
    • Location of the property file to update: \com\ca\iam\model\impl\datamodel\CommonObjects.xml
  2. Update the sections after eTCustomField99 with the below data, set for case-insensitivity.

<property name="eTCustomField100"> <doc>Custom Field #100</doc> <value default="false"> <setValue> <baseType default="false"> <strValue></strValue> </baseType> </setValue> </value> <metadata name="jiam.syncToAccounts"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="pt.modifyPrivilege"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="pt.ownerPrivilege"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="isMultiValued"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="beanPropertyName"> <value> <strValue>customField100</strValue> </value> </metadata> <metadata name="pt.minimumAbbreviation"> <value> <intValue>10</intValue> </value> </metadata> <metadata name="pt.internalName"> <value> <strValue>CustomField100</strValue> </value> </metadata> <metadata name="pt.editType"> <value> <strValue>string</strValue> </value> </metadata> <metadata name="pt.editFlag"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="pt.caseSensitivity"> <value> <strValue>insensitive</strValue> </value> </metadata> <metadata name="pt.asciiOnly"> <value> <boolValue>false</boolValue> </value> </metadata> <metadata name="pt.dataLocation"> <value> <strValue>db</strValue> </value> </metadata> </property>

  3. Update the CA IMPS directory.xml as needed for some or all 900 fields.
<ImsManagedObjectAttr physicalname="eTCustomField100" description="Custom Field 100" displayname="Custom Field 100" valuetype="String" multivalued="true" wellknown="%CUSTOM_FIELD_100%" maxlength="0"/>
  4. Update the IME’s IMCD to IMPS 1:1 mappings.
    • identityEnv_environment_settings.xml
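A minimal command-line sketch of the jiam.jar update referenced in step 1; the JDK ‘jar’ tool is assumed to be available, the path is the one noted above, and the on-disk directory layout must match the path inside the jar:

cd ./wildfly-idm/standalone/deployments/iam_im.ear/library
cp jiam.jar jiam.jar.bak
# after editing com/ca/iam/model/impl/datamodel/CommonObjects.xml locally under this folder:
jar uf jiam.jar com/ca/iam/model/impl/datamodel/CommonObjects.xml
jar tf jiam.jar | grep -i CommonObjects.xml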

We may now stop here. The next advanced configuration is only required if we wish to manage the various Endpoint Mapping tabs with the IM UI instead of the IMPS GUI. We would consider the next Phase 3 Steps to be low value for the effort, as this configuration is typically set once and done in the IMPS GUI.

Pro: Able to use custom fields for account templates and correlation rules. Also able to map these fields 1:1 in the IME, so IMCD attributes can be mapped to the IMPS extended custom attributes. These IMPS extended custom attributes will now be exposed for PX Business Rules.

Con: Not exposed to the IM UI to update the Endpoint Mapping tab for ADS and DYN endpoints.

Phase 3 Steps – IME Advanced

If planning on exposing these new custom fields in both the Endpoint Mapping Attribute screen & Endpoint Account Templates via the IME, follow these additional steps:

  1. Replace commonobjects.xml in ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\lib\roledefgen.jar by following the steps given below:
    • Rename roledefgen.jar to roledefgen.zip
    • Open roledefgen.zip
    • Open com\ca\iam\roledefgen\commonobjects.xml and replace the contents with the attached/provided commonobjects.xml file
    • Save the zip
    • Rename the zip back to jar
  2. roledefgen.jar will now contain the commonobjects.xml file with the extended custom attributes.
  3. Execute the below RoleDefGenerator.bat to generate jars for all the required Java/DYN endpoints:
    • ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\bin> RoleDefGenerator.bat -d -h -u “”
  4. Open the generated endpoint jars one by one and modify them by following the steps below:
    • Rename the original (deployed) .jar to .zip
    • Open framework.xml and increase the version “version=” (2nd line)
    • Rename the newly generated .jar to .zip
    • Open and copy the contents of its -RoleDef.xml
    • Paste the copied contents into the -RoleDef.xml in the original .zip (from the first sub-step)
    • Save the original .zip and rename it back to .jar
    • Replace the saved .jar in ..\wildfly-8.2.0.Final\standalone\deployments\iam_im.ear\user_console.war\WEB-INF\lib
  5. Restart IM and test the custom attributes in the IM web UI (the rename/repack steps may also be scripted from the command line; see the sketch below).
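A minimal sketch of the rename/repack steps using the JDK ‘jar’ tool; the file and directory names are illustrative, and the path inside the jar must match the on-disk layout:

cp roledefgen.jar roledefgen.jar.bak
# after staging the updated commonobjects.xml under com/ca/iam/roledefgen/ in the current folder:
jar uf roledefgen.jar com/ca/iam/roledefgen/commonobjects.xml
# inspect a generated endpoint jar before and after editing (name illustrative)
jar tf your_endpoint_roledef.jar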

Post Update Note:

  • Validate whether the four (4) IMPD DSAs need to be rebuilt for existing users that may already have had these extended attributes with case=sensitive set previously.
    • This step is not required if this is the first time the extended attributes have been deployed, or if case=sensitive has not been changed.
    • Process: Export the IMPD LDIFs, rebuild the IMPD DSAs, and then re-import the LDIFs.