Parallel provisioning for Active Directory and MS Exchange mailboxes – Improve Birthright/DayOne Access

One of the challenges that IAM/IAG solutions may face is single-threaded processing for select endpoints. For the CA/Symantec Identity Management solution, before IM r14.3 CP2, we lived with a single-threaded connector to the managed MS Active Directory endpoints.

To address this challenge, we deployed multiple connector servers and allowed the IM Provisioning Server (IMPS) to use its built-in round-robin load balancing to send separate transactions to different connector servers servicing the same Active Directory endpoints.

The IME may be running as fast as it can with its clustered deployment, but as soon as a task touches MS Active Directory, a bottleneck forms at the CCS service. We then see the IME JMS queue report that it is stuck, and the IME View Submitted Tasks screen report “In Progress” for all tasks. If the CCS service is restarted, all of those IME tasks are then reported as “Failed.”

This was the bottleneck in the solution for sites that rely on MS Active Directory for Birthright/DayOne Access.

We can now avoid this bottleneck. [*** (5/24/2021) – There is an enhancement to CP2 to address im_ccs.exe crashes during peak loads discovered using this testing process. ]

Via the newly delivered enhancement (https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7154e15b-085d-469e-bff0-ac588ff6bd5b), we now have full parallel provisioning to MS Active Directory from a single connector server (JCS/CCS).

The new attribute that regulates this behavior is eTADSMaxConnectionsInPool. After CP2 is deployed, this attribute is applied to every existing ADS endpoint managed by the IM Provisioning Server. Note: the default value is 10, but after much testing we recommend matching the IMPS->JCS and JCS->CCS pool values and setting all of them to 200.

During testing within the IME using Bulk Tasks or the IM BLC, we can see the CCS->ADS traffic reach 20-30 connections if allowed. You may set this attribute to a value of 200 via JXplorer and/or an ldapmodify/dxmodify script:

echo "############### SET ADS MAX CONNECTIONS IN POOL SIZE ##################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
NAMESPACE=exchange2016
LDAPTLS_REQCERT=never dxmodify -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D "$IMPS_USER" -w "$IMPS_PWD" << EOF
dn: eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
eTADSMaxConnectionsInPool: 200
EOF
LDAPTLS_REQCERT=never dxsearch -LLL -H ldap://$IMPS_HOST:$IMPS_PORT -x -D "$IMPS_USER" -w "$IMPS_PWD" -b "eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" -s base eTADSMaxConnectionsInPool | perl -p00e 's/\r?\n //g'   # perl unwraps LDIF folded lines for readability

To confirm the number of open connections is greater than one (1), we can issue a Bulk IM Task or use a performance tool like CA Directory dxsoak.

In this example, we will showcase using CA Directory dxsoak to execute 100 parallel threads to create 100 ADS accounts with MS Exchange mailboxes. We also include the script below for others to review and use.

Performance Lab:

Pre-Steps:

  1. Leverage the CA Directory samples’ dxsoak binary (performance testing). You may use CA Directory on an existing IM Provisioning Server (Linux OS), or you may deploy CA Directory (MS Windows version) to the JCS/CCS connector server. Examples are provided for both OSes.
  2. Create LDIF files for the IM Provisioning Server and/or IM connector tier. These files are needed to ‘push’ the solution to failure. Using the IME Bulk Task and/or etautil scripts against the IM provisioning tier will not provide the transaction speed we need to break the CCS service, if that is possible.
  3. Within IM Provisioning Manager, enable the ADS endpoint TXT logs on the Logging tab (all checkboxes).
  4. Monitor the IMPS etatrans* logs, the JCS ADS logs, the CCS ADS logs, and the number of CCS->ADS (LDAP/S – TCP 389/636) threads. [Suggest using MS Sysinternals Process Explorer: select im_ccs.exe, then the TCP/IP tab.] A log-watch sketch follows this list.
  5. Monitor the MS ADS domain via MS ADUC (AD Users & Computers UI) and the MS Exchange mailboxes (mailbox UI via browser).
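For item 4, a minimal log-watch loop can provide a rough transactions-per-second gauge on the IMPS host while the tests run. This is a sketch only; the log directory below is an assumption and should be adjusted to your IMPS install:

#!/bin/bash
# Rough ops/sec gauge: count new lines per second in the newest IMPS etatrans log
# NOTE: LOGDIR is an assumed location; point it at your IMPS log folder
LOGDIR=/opt/CA/IdentityManager/ProvisioningServer/logs
LOG=`ls -t $LOGDIR/etatrans*.log | head -1`
echo "Watching: $LOG"
while true
do
  c1=`wc -l < "$LOG"`
  sleep 1
  c2=`wc -l < "$LOG"`
  echo "`date +%T`  ~$((c2 - c1)) log lines/sec"
done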

Execution:

6. Perform a UNIT TEST with dxmodify/ldapmodify to confirm the LDIF file input is correct with the correct suffix.

time dxmodify -H ldap://192.168.242.135:20389 -c -x -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -w Password01 -f ads_user_with_exchange_dc_eta.ldif

7. Perform the PERFORMANCE TEST with dxsoak binary with the same LDIF file & correct suffix. Rate observed = 23 K ids/hr

./dxsoak -c -l 60 -t 100 -h 192.168.242.135:20389 -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -w Password01 -f ads_user_with_exchange_dc_eta.ldif

Observations:

8. IMPS etatrans*.log – Count the number of operations per second. Note any race conditions and/or data collisions, e.g., ADS accounts deleted prior to add across the 100 threads, or the same ADS account creation attempted in multiple threads.

9. IM CCS ADS <endpoint>.log – Will only contain useful data if TXT logs were enabled on the ADS endpoint Logging tab.

10. Finally, validate directly in the MS Active Directory domain, with ADUC or a similar tool, that the accounts & MS Exchange mailboxes are being created/deleted.
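If you prefer a CLI validation from the Linux host, an ldapsearch against the ADS domain can count the generated accounts. This is a sketch under assumptions: the domain controller host and bind account below are placeholders, while the OU, domain, and sAMAccountName prefix match the lab script later in this article.

# Count the aaatestuser accounts in the lab OU (DC host & bind DN are placeholders; -W prompts for the password)
ldapsearch -LLL -H ldap://dc001.exchange.lab -x \
  -D 'administrator@exchange.lab' -W \
  -b 'OU=People,DC=exchange,DC=lab' \
  '(sAMAccountName=aaatestuser*)' sAMAccountName | grep -c '^sAMAccountName'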

11. Count the number of threads from im_ccs.exe to ADS – Suggest using the MS Sysinternals Process Explorer tool and/or PowerShell to count the number of connections.

MS PowerShell script to count the number of LDAP (TCP 389) connections from im_ccs.exe. [Note: TCP 389 is used more if the ADS endpoint is set up to use SASL authentication; TCP 636 is used more if the ADS endpoint is using the older TLS authentication.]

# Sample the established TCP 389 connections owned by im_ccs.exe, once per second for five (5) samples
$i=1
Do {
    cls
    (Get-NetTCPConnection -State Established -OwningProcess (Get-Process -Name im_ccs).Id -RemotePort 389).Count
    Start-Sleep -s 1
    $i++
}
while ($i -le 5)

Direct Performance Testing to JCS/CCS Service

While this testing has limited value on its own, it can assist with troubleshooting. We can use the prior LDIF files with a slightly different suffix, dc=etasa (instead of dc=eta), to use dxsoak to push the connector tier to failure. This step helped provide memory dumps to the CA/Symantec engineering teams to isolate challenges within the parallel processing. The CCS service is only exposed via localhost; if you wish to test the CCS service remotely, update the MS Registry key for the CCS service to use the external IP address of the JCS/CCS server. Rate observed = 25 K ids/hr.

Script to generate 100 ADS Accounts with MS Exchange Mailbox Creation

You may wish to review this script and adjust it for your ADS / MS Exchange domains for testing. You can also create a simple LDIF file with password resets or ADS group membership adds. Just remember that the IMPS service (TCP 20389/20390) uses the suffix dc=eta, while the IM JCS (TCP 20410/20411) & CCS (TCP 20402/20403) services use the suffix dc=etasa. Additionally, if using CA Directory dxsoak, use only the non-TLS ports, as this binary is not equipped to use TLS certificates.
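Before running the script below, a quick port probe can confirm which tier is listening where (a sketch; the IPs are the lab addresses used throughout this article):

nc -zv 192.168.242.135 20389     # IMPS (suffix dc=eta)
nc -zv 192.168.242.80  20410     # JCS  (suffix dc=etasa)
nc -zv 192.168.242.80  20402     # CCS  (suffix dc=etasa; localhost-only unless the registry is updated)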

#!/bin/bash
#######################################################################################################################
# Name:  Generate ADS Feed Files for IM Solution Provisioning/Connector Tiers
#
# Goal:  Validate the new parallel processes from the IM Connector Tier to Active Directory with MS Exchange
#
#
# Generate ADS User LDIF file(s) for use with unit (dxmodify) and performance testing (dxsoak) to:
#  - {Note: dxsoak will only work with non-TLS ports}
#
# IM JCS (20410)  "dc=etasa"    {Ensure MS Windows Firewall allows this port to be exposed}
# IM CCS (20402)  "dc=etasa"    {This port is localhost only, may open to network traffic via registry update}
# IMPS (20389)    "dc=eta"
#
#
# Monitor:  
#
# The IMPS etatrans*.log  {exclude searches}
# The JCS daily log
# The JCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
# The CCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
#
# Execute per the examples provided during run of this file
#
#
# ANA 05/2021
#######################################################################################################################

# Unique Variables for an ADS Domain
NAMESPACE=exchange2016
ADSDOMAIN=exchange.lab
DCDOMAIN="DC=exchange,DC=lab"
OU=People

#######################################################################################################################


MAX=100
start=00001
counter=$start
echo "###############################################################"
echo "###############################################################"
START=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo `/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`" = Current OS UTC time stamp"
echo "###############################################################"
FILE1=ads_user_with_exchange_dc_etasa.ldif
FILE2=ads_user_with_exchange_dc_eta.ldif
echo "" > $FILE1
while [ $counter -le $MAX ]
do
    n=$((10000+counter)); n=${n#1}
    tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
    echo "Counter with leading zeros = $n   at time:  $tz"


cat << EOF >> $FILE1
dn:  eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: add
objectClass:  eTADSAccount
eTADSobjectClass:  user
eTADSAccountName:  firstname$n aaalastname$n
eTADSgivenName:  firstname$n
eTADSsn:  aaalastname$n
eTADSdisplayName:  firstname$n aaalastname$n
eTADSuserPrincipalName:  aaatestuser$n@$ADSDOMAIN
eTADSsAMAccountName:  aaatestuser$n
eTPassword:  Password01
eTADSpwdLastSet:  -1
eTSuspended:  0
eTADSuserAccountControl:  0000000512
eTADSDescription:  description $tz
eTADSphysicalDeliveryOfficeName:  office
eTADStelephoneNumber:  111-222-3333
eTADSmail:  aaatestuser$n@$ADSDOMAIN
eTADSwwwHomePage:  web.page.lab
eTADSotherTelephone:  111-222-3333
eTADSurl:  other.web.page.lab
eTADSstreetAddress:  street address line01
eTADSpostOfficeBox:  pobox 111
eTADSl:  city
eTADSst:  state
eTADSpostalCode:  11111
eTADSco:  UNITED STATES
eTADSc:  US
eTADScountryCode:  840
eTADSscriptPath:  loginscript.cmd
eTADSprofilePath:  \profile\path\here
eTADShomePhone:  111-222-3333
eTADSpager:  111-222-3333
eTADSmobile:  111-222-3333
eTADSfacsimileTelephoneNumber:  111-222-3333
eTADSipPhone:  111-222-3333
eTADSinfo:  Notes Here
eTADSotherHomePhone:  111-222-3333
eTADSotherPager:  111-222-3333
eTADSotherMobile:  111-222-3333
eTADSotherFacsimileTelephoneNumber:  111-222-3333
eTADSotherIpPhone:  111-222-3333
eTADStitle:  title
eTADSdepartment:  department
eTADScompany:  company
eTADSmanager:  CN=manager_fn manager_ln,OU=$OU,$DCDOMAIN
eTADSmemberOf:  CN=Backup Operators,CN=Builtin,$DCDOMAIN
eTADSlyncSIPAddressOption: 0000000000
eTADSdisplayNamePrintable: aaatestuser$n
eTADSmailNickname: aaatestuser$n
eTADShomeMDB: (Automatic Mailbox Distribution)
eTADShomeMTA: CN=DC001,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,$DCDOMAIN
eTAccountStatus: A
eTADSmsExchRecipientTypeDetails: 0000000001
eTADSmDBUseDefaults: TRUE
eTADSinitials: A
eTADSaccountExpires: 9223372036854775807

EOF
 counter=$(( $counter + 00001 ))
done


#  Create the delete ADS Process
start=00001
counter=$start
while [ $counter -le $MAX ]
do
    n=$((10000+counter)); n=${n#1}
    tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
    echo "Counter with leading zeros = $n   at time:  $tz"


cat << EOF >> $FILE1
dn:  eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: delete

EOF
 counter=$(( $counter + 00001 ))
done

echo ""
echo "################################### ADS USER OBJECT STATS ################################################################"
echo "Number of add objects: `grep "changetype: add" $FILE1 | wc -l`"
echo "Number of delete objects: `grep "changetype: delete" $FILE1 | wc -l`"
rm -rf $FILE2
cp -r -p $FILE1 $FILE2
sed -i 's|,dc=im,dc=etasa|,dc=im,dc=eta|g' $FILE2
ls -lart $FILE1
ls -lart $FILE2

echo ""
echo "################################### SET ADS MAX CONNECTIONS IN POOL SIZE ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
LDAPTLS_REQCERT=never dxmodify  -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D "$IMPS_USER" -w "$IMPS_PWD"  << EOF
dn: eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
eTADSMaxConnectionsInPool: 200
EOF
LDAPTLS_REQCERT=never dxsearch -LLL  -H ldap://$IMPS_HOST:$IMPS_PORT -x -D "$IMPS_USER" -w "$IMPS_PWD" -b "eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" -s base eTADSMaxConnectionsInPool | perl -p00e 's/\r?\n //g'

echo ""
echo "################################### CCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20402
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the CCS Service to test single thread with dxmodify or ldapmodify"
echo "dxmodify  -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the CCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "

echo ""
echo "################################### JCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20410
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the JCS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify  -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the JCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "


echo ""
echo "################################### IMPS UNIT & PERF TEST ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
echo "Execute this command to the IMPS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify  -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "
echo "Execute this command to the IMPS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $IMPS_HOST:$IMPS_PORT -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "




Address the new bottleneck of MS Exchange / O365 Provisioning.

After parallel provisioning was introduced with the new im_ccs.exe service, you may notice that the number of transactions is still being throttled during performance testing.

Out of the box, the MS Exchange global throttling policy has the parameter PowerShellMaxConcurrency set to a default of 18 connections. Any provisioning that uses MS PowerShell for MS Exchange and/or MS O365 will be impacted by this default parameter.

To address this bottleneck, we can create a new throttling policy and assign it only to the service ID that will be managing identities, to avoid a global change.

Example:

New-ThrottlingPolicy MaxPowershell -PowerShellMaxConcurrency 100
Set-Mailbox “User Name” -ThrottlingPolicy MaxPowershell

After this change has been made, restart the IM JCS/CCS services and retest with your performance tools. Review the CCS ADS log for the number of creations in 60 seconds; you will be pleasantly surprised at the rate. The logs are the strong confirmation we are looking for.

Performance test (947 ADS accounts w/ Exchange mailboxes in 60 seconds, 08:59:54 to 09:00:53) => a rate of ~15 ids/second (or 54 K ids/hr) with the updated MaxPowershell = 100 throttling policy.

The last bottleneck appears to be CPU availability for the MS Exchange supporting service w3wp.exe, the MS IIS worker process, which appears to manage the MS PowerShell connections per its startup string:

" c:\windows\system32\inetsrv\w3wp.exe -ap "MSExchangePowerShellAppPool" -v "v4.0" -c "C:\Program Files\Microsoft\Exchange Server\V15\bin\GenericAppPoolConfigWithGCServerEnabledFalse.config" -a \.\pipe\iisipme304c50e-6b42-4b26-83a4-229ee037be5d -h "C:\inetpub\temp\apppools\MSExchangePowerShellAppPool\MSExchangePowerShellAppPool.config" -w "" -m 0"

WAN Latency: Rsync versus SCP

We were curious about which methods we could use to manage large files that must be copied between sites with WAN-type latency, while restricting ourselves to processes available on the CA Identity Suite virtual appliance / Symantec IGA solution.

Leveraging VMware Workstation’s ability to introduce network latency between images allows for validation of a global password reset solution.
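As a reference point, if VMware Workstation is not available, similar WAN-type latency can be introduced on a Linux node with tc netem. This was not the method used in our lab, and eth0 is an assumed interface name:

tc qdisc add dev eth0 root netem delay 70ms     # add 70 ms one-way delay
tc qdisc del dev eth0 root netem                # remove the delay after testing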

If we experience deployment challenges with native copy operations, we need to ensure we have alternatives to address any out-of-sync data.

The embedded CA Directory maintains the data tier in separate binary files, using a software router to join the data tier into a virtual directory. This allows for scalability and growth to accommodate the largest of sites.

We focused on the provisioning directory (IMPD) as our likely candidate for re-syncing.

Test Conditions:

  1. To ensure the data was being securely copied, we kept the requirement for SSH sessions between two (2) different nodes of a cluster.
  2. We introduced latency with the VMware Workstation NIC for one of the nodes.
  3. The four (4) IMPD data DSAs were resized to 2500 MB each (a size similar to what we have seen in production at many sites).
  4. We removed the data and folder structure from the receiving node, to avoid any checksum-restart process gaining an unfair advantage.
  5. If the process allowed for exclusions, we took advantage of this feature.
  6. The feature/process/commands must be available on the vApp to the ‘config’ or ‘dsa’ userIDs.
  7. The reference host/node being pulled from has its CA Directory data DSAs offline (dxserver stop all), to prevent ongoing changes to the files during the copy operation.

Observations:

SCP without compression: Unable to exclude other files (*.tx, *.dp, UserStore). This process took over 12 minutes to copy 10,250 MB of data.

SCP with compression: Unable to exclude other files (*.tx, *.dp, UserStore). This process still took over 12 minutes to copy 10,250 MB of data.
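For reference, the scp invocations were of this form (a sketch consistent with the rsync command in the session transcript below; -C enables compression):

# SCP without compression (no exclude capability; copies all 10,250 MB)
time scp -rp  "dsa@192.168.242.135:./data/*" $DXHOME/data/
# SCP with compression (-C)
time scp -rpC "dsa@192.168.242.135:./data/*" $DXHOME/data/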

Rsync without compression: This process can exclude files/folders, has built-in checksum features (allowing a restart of a file if the connection is broken), and works over SSH as well. If the target folder was not deleted beforehand, this process would give artificially high-speed results. It was able to exclude the UserStore DSA files and the transaction files (*.dp & *.tx) that do not need to be copied for use on a remote server, so only 10,000 MB (4 x 2500 MB) was copied, avoiding an extra 250 MB.

Rsync with compression: This process can exclude files/folders, has built-in checksum features (allowing a restart of a file if the connection is broken), and works over SSH as well. This process was the winner, with remarkable performance over the other processes (the exact command appears in the session transcript below).

Total Time: 1 min 10 seconds for 10,000 MB of data over a WAN latency of 70 ms (140 ms R/T)

Now that we have found our winner, we need a few post steps before using the copied files. CA Directory, to maintain uniqueness between peer members of the multi-write (MW) group, requires a unique name for each data folder and data file. On the CA Identity Suite / Symantec IGA virtual appliance, this naming convention embeds the node number as two (2) digits.

The next step is to rename the folders and the files. Since the vApp is locked down against installing other tools that might assist with rename operations, we utilized the find and mv commands with a shell pattern-substitution process to perform these two (2) steps.
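A minimal sketch of those two rename steps, assuming node-01 folders/files were copied onto node 03 (the full session transcript below shows them in context):

# Rename the copied node-01 data folders to node-03, then the *.db files within them
find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv "$0" "${0/01/03}"' {} \;
find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv "$0" "${0/01/03}"' {} \;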

Complete Process Summarized with Validation

The below process was written for the default shell of the ‘dsa’ userID, csh. If the shell is changed to ‘bash’, update accordingly.

The below process also utilizes an SSH RSA private/public key pair that was previously generated for the ‘dsa’ userID. If you are using the vApp, log in as the ‘config’ userID and then ‘su - dsa’ to complete the necessary steps. You may need to add a copy operation between the dsa & config userIDs.

Summary of using rsync with find/mv to rename copied IMPD *.db files/folders
[dsa@pwdha03 ~/data]$ dxserver status
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ dxserver stop all > & /dev/null
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$ eval `ssh-agent` && ssh-add
Agent pid 5395
Enter passphrase for /opt/CA/Directory/dxserver/.ssh/id_rsa:
Identity added: /opt/CA/Directory/dxserver/.ssh/id_rsa (/opt/CA/Directory/dxserver/.ssh/id_rsa)
[dsa@pwdha03 ~/data]$ rm -rf *
[dsa@pwdha03 ~/data]$ du -hs
4.0K    .
[dsa@pwdha03 ~/data]$ time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
FIPS mode initialized
receiving incremental file list
./
ca-prov-srv-01-impd-co/
ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db
  2500000000 100%  143.33MB/s    0:00:16 (xfer#1, to-check=3/9)
ca-prov-srv-01-impd-inc/
ca-prov-srv-01-impd-inc/ca-prov-srv-01-impd-inc.db
  2500000000 100%  153.50MB/s    0:00:15 (xfer#2, to-check=2/9)
ca-prov-srv-01-impd-main/
ca-prov-srv-01-impd-main/ca-prov-srv-01-impd-main.db
  2500000000 100%  132.17MB/s    0:00:18 (xfer#3, to-check=1/9)
ca-prov-srv-01-impd-notify/
ca-prov-srv-01-impd-notify/ca-prov-srv-01-impd-notify.db
  2500000000 100%  130.91MB/s    0:00:18 (xfer#4, to-check=0/9)

sent 137 bytes  received 9810722 bytes  139161.12 bytes/sec
total size is 10000000000  speedup is 1019.28
27.237u 5.696s 1:09.43 47.4%    0+0k 128+19531264io 2pf+0w
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-01-impd-co  ca-prov-srv-01-impd-inc  ca-prov-srv-01-impd-main  ca-prov-srv-01-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-03-impd-co  ca-prov-srv-03-impd-inc  ca-prov-srv-03-impd-main  ca-prov-srv-03-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ dxserver start all
Starting all dxservers
ca-prov-srv-03-impd-main starting
..
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify starting
..
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co starting
..
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc starting
..
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router starting
..
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$


Note: An enhancement request has been opened to allow the ‘dsa’ userID to use remote SSH processes, to address any challenges if the IMPD data DSAs need to be copied or retained for backup processes.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

Example for vApp Patches:

Note: There is no major difference in speed if the files being copied are already compressed; the initial copy proceeds at the rate of the network with latency. The value gained from using rsync is still the checksum feature, which allows an auto-restart from where it left off.

vApp patch process refined to a few lines (for three nodes of a cluster deployment)

# PATCHES
# On Local vApp [as config userID]
mkdir -p patches  && cd patches
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-VA-140300-0002.tar.gpg
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-IMV-140300-0001.tgz.gpg
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg           [Patch VA prior to any solution patch]
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
cd ..
# Push from one host to another via scp
IP=192.168.242.136;scp -r patches  config@$IP:
IP=192.168.242.137;scp -r patches  config@$IP:
# Push from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.136;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
IP=192.168.242.137;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
# Pull from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.135;rsync --progress -e 'ssh -ax' -avz config@$IP:./patches $HOME

# View the files were patched
IP=192.168.242.136;ssh -tt config@$IP "ls -lart patches"
IP=192.168.242.137;ssh -tt config@$IP "ls -lart patches"

# On Remote vApp Node #2
IP=192.168.242.136;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

# On Remote vApp Node #3
IP=192.168.242.137;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

View of rotating the SSH RSA key for CONFIG User ID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP "/bin/date -u +%s"         [Run a remote date command to confirm passwordless login]
IP=192.168.242.137;ssh $IP "/bin/date -u +%s"
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Avoid Data Quality Issues during Testing (TDM)

Why do we see data quality challenges in lower environments (Test, Dev, QA) that we do not see in production environments?

If the project team was asked to set up lower environments for a new solution, it may be that a TDM (test data management) methodology is not a formal corporate process.

TDM may be simply described as capturing non-PII (non-sensitive) production data and copying a full or limited set of that data to the non-production environments. This non-PII data may be copied 1:1 or masked during the process.

A TDM process for a new environment may be a challenge if there is no current production environment, or if the current production environment is from a prior solution or an M&A (merger/acquisition).

While there are formal paid tools/solutions for TDM, a project team may wish to leverage CLI (command-line) and/or scripts to create this sub-set of non-PII production data for the lower environments.

This process may be as simple as deciding to export the full DIT (directory information tree) of an LDAP store with all its current group names, but replacing the userID/full name/sensitive data with “dummy/masked” data. This exported data would then be loaded as near-production data, allowing full use-case and negative use-case testing in the lower environments.
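A minimal masking pass might look like the sketch below, which rewrites identifying attributes in an exported LDIF with sequential dummy values. The attribute names (uid, cn, sn, givenName, mail) are assumptions; adjust the list to your directory schema:

#!/bin/bash
# Mask identifying attributes in export.ldif, numbering entries by their dn
awk 'BEGIN { i = 0 }
     /^dn: /                         { i++ }
     /^(uid|cn|sn|givenName|mail): / { split($0, a, ":"); print a[1] ": masked" i; next }
     { print }' export.ldif > masked.ldif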

The goal? Avoid show-stopper or high-severity issues, due to missed data quality concerns, during a go-live or business release cycle. This is very important when we have a small maintenance window to add new functionality.

Let us help with the knowledge transfer and the building of representative environments. We see this challenge often for IAM solutions that manage thousands of endpoints, where even the basic Active Directory representation is missing the same DIT structure and group objects as the project’s AD domains, especially for M&A business projects.

Writing Successful Test Plans

One of the challenges we see is that project team members dislike writing.

Documentation that is highly visible to business owners/team leads, e.g., business/technical requirements, design, or project management documents, will not be greatly impacted, due to the maturity of the senior resources producing it.

However, one area does seem to suffer, and it impacts project timelines & future go-live estimates: test plan documentation, which may range from very simplistic to detailed.

Projects suffer timeline challenges when test plans & test scripts are too simplistic.

The business QA resources assigned to execute the test plan/test scripts can NOT be assumed to have in-depth background knowledge of the solution. If the initial conditions and final output are not clearly called out (including how to reset them), we have seen project timelines drawn out as teams are pushed into a seemingly never-ending cycle of QA testing.

To ensure your project is successful, demand that the test scripts for the test plans be written out as if they were to be executed by your great-grandparents. This includes which hyperlinks to use, which browser to use, which initial conditions to reset (and with which tool), which steps to follow, how to record the final answer, where to capture the results, and which screenshots to capture, where, and how.

The above methodology ensures that we do not have a “black box” of a solution, e.g. something-goes-in and we-hope-that-something-good-comes-out.

With the above process, the QA team lead can then scale out their team as needed.

When expected input/output information is captured, automated testing can be introduced with enhanced reporting and validation. This becomes exponentially valuable for IAM solutions that manage hundreds of endpoints, from legacy [AS/400, HP NonStop NSK, Mainframe (ACF2/TSS/RACF/TSO)] to SaaS cloud solutions.

So don’t contemplate; spend the time and reap the value. Make your grandparents proud!

Transparency through Automated Testing

One of the challenges that businesses have for projects is an awareness of the true status of tasks.

Project methodology continues to advance with concepts such as Agile project management, which works well for larger projects. One of the value statements from Agile is asking project resources when they can complete a task. The answer provides a view into the resource’s skill set and confidence to meet the task goal: if the resource is junior or has limited skill in the task, the effort quoted to the team will be high. Even with this Agile process, it remains very easy for resources, while they frantically research, to inadvertently drain the project bucket of effort, e.g., a 4-hour task that turns into a week-long duration.

Another area that has great success with enforcing transparency is automated testing. Automated testing may be used for unit, integration, use-case, performance, and scale testing. However, for project transparency, to lower business risk and project cost overruns, we would state that the value of automated testing comes from use-case & regression testing.

After technical and business requirements are complete, ensure that the project schedule or WBS (work breakdown structure) has a defined milestone to migrate ALL manual use-case testing to automation. The effort to convert from manual use-case testing to automated testing will be considered by a few to have little value. However, when the final parts of a project must meet a go-live over a weekend, or add a new business release with adjusted business logic, which would you trust to reach your goals 100%?

Below are two (2) common scenarios:

  1. Solution upgrade go-live over a weekend. You may be allocated 48 hours to back up solution data & all platforms, perform a data snapshot, migrate data, integrate with newer solution components (possibly new agents), combine with production data, and validate all use-cases for all business logic, while allowing time for rollback if, during triage of issues, the business team determines that show-stopper issues cannot be addressed in that period. If you fail, you may be allowed one more attempt on another of your weekends, with the full team of 2-20 people.
  2. Solution business release cycle, over a weekend or a business day. You have the option to deploy new business logic to your solution. You can lower business risk by deploying during a business day, but this will require additional use-case and regression testing. If you have no automation, you will leverage a QA team of 2-10 people to exercise the use-cases, and sometimes the negative use-cases.

Math: Assume your solution has twenty (20) use-cases & sub-use-cases, where each use-case may have twenty (20) test scripts. Assume your excellent QA/business/technical resources have adequately captured the initial conditions (which must be reset every time) for each test script, and that they check for data quality challenges as well. Assume each test script takes about ten (10) minutes to execute, with your QA team resource (not the same skill set) following it exactly and recording success/failure. Perhaps you have trained them to use QA tools to screen-capture any failure messages and to assign a technical project team resource to address them.

20 use-cases x 20 scripts/use-case x 10 min/script = 4,000 minutes for one QA resource. We have 1,440 minutes in a day, so 4,000/1,440 = 2.78 days, or 66.7 hours. If we add ten (10) QA business resources, while we have lowered the QA cycle from 66.7 hours to 6.7 hours, we will be required to “freeze” any additional updates during this QA cycle, and we will likely impact our maintenance window for remediation of “found” issues in either scenario above.
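A quick script to reproduce that math (the numbers are the assumptions stated above):

awk 'BEGIN {
  total = 20 * 20 * 10                                        # 4000 minutes
  printf "One QA resource:  %.1f hours (%.2f days)\n", total/60, total/1440
  printf "Ten QA resources: %.1f hours\n", total/600
}'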

Be aware of the “smoke” testing follies. This type of testing still leaves issues “burning.”

Enforce transparency for project owners, project managers, and team members.

Ensure that the effort to build the automated testing is kept for future regression when the new business logic phase is implemented. Prove to yourselves that prior business logic will NOT be impacted.

Many tools can be leveraged for automation, e.g., open-source JMeter (used by many customers), Selenium, SoapUI, or paid tools (Broadcom/CA Technologies BlazeMeter).

Let us help.

We firmly believe in, encourage, and perform knowledge transfer to our customers to help them succeed, and to ensure that the introduction of automated testing lowers the TCO of any solution. We can train your staff very quickly to leverage JMeter from their desktops/servers to automate any written test plans for solutions. These JMeter processes can then be shared with all project team members.