These are exciting times, marked by a transformative change in the way modern applications are rolled out. The transition to Cloud and related technologies is adding considerable value to the process. If you are utilizing solutions like SiteMinder SSO or CA Access Gateway, having access to real-time metrics is invaluable. In the following article, we’ll explore the inherent features of the CA SSO container form factor that facilitate immediate metrics generation, compatible with platforms like Grafana.
Our lab cluster is an on-premises Red Hat OpenShift Kubernetes cluster running the CA SSO container solution, available as part of the Broadcom Validate Beta Program. The deployment of the SSO elements, such as policy servers and access gateway, is facilitated through a Helm chart provided by Broadcom. Within our existing OpenShift environment, a Prometheus metrics server is configured to gather time-series data. By default, the tracking of user workload metrics isn’t activated in OpenShift and must be manually enabled: make sure the ‘enableUserWorkload’ setting is toggled to ‘true’ by creating or modifying the cluster monitoring ConfigMap, as shown below.
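A minimal sketch for recent OpenShift 4.x releases (the ConfigMap name and namespace are the cluster-monitoring defaults; verify them against your cluster version):

oc apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF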
Grafana is also deployed for visualization and is connected to the Prometheus data source to create metrics dashboards. The Grafana data source can be created using the YAML provided below. Note that creating the Grafana data source requires the Prometheus URL as well as an authorization token to access stored metrics. This token can be extracted from the cluster using the below commands.
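Illustrative commands for OpenShift 4.x; the thanos-querier route and prometheus-k8s service account are the cluster-monitoring defaults, so verify the names in your cluster:

oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}'
oc sa get-token prometheus-k8s -n openshift-monitoring
# On OpenShift 4.11+, the equivalent is: oc create token prometheus-k8s -n openshift-monitoring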
Also ensure that a role binding exists allowing the prometheus-k8s service account in the openshift-monitoring namespace access to a role that permits monitoring of resources in the target (smdev) namespace, for example:
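A hedged sketch using the built-in ‘view’ cluster role, which grants read access in many clusters; your security team may prefer a narrower custom role:

oc adm policy add-role-to-user view system:serviceaccount:openshift-monitoring:prometheus-k8s -n smdev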
Once the CA SSO Helm chart is installed with metrics enabled, we must also ensure that the namespace in which CA SSO is deployed has the openshift.io/cluster-monitoring label set to true.
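For example, assuming the CA SSO namespace is smdev:

oc label namespace smdev openshift.io/cluster-monitoring=true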
We are all set now and should see the metrics getting populated using the OpenShift console (Observe -> Metrics menu item) as well as available for Grafana’s consumption.
In the era of next-generation application delivery, integrated monitoring and observability features now come standard, offering considerable advantages, particularly for operations and management teams seeking clear insights into usage and solution value. This heightened value is especially notable in deployments via container platforms. If you’re on the path to modernization and are looking to speed up your initiatives, feel free to reach out. We’re committed to your success and are keen to partner with you.
The recent DNS challenges for a large organization that impacted their worldwide customers bring to mind a project we completed this year, a global password reset redundancy solution.
We worked with a client who desired to manage unplanned WAN outages to their five (5) data centers for three (3) independent MS Active Directory domains with integration to various on-prem applications/endpoints. The business requirement was for self-service password sync, where the users’ password change process is initiated/managed by the two (2) different MS Active Directory Password Policies.
Without the WAN outage requirement, any IAM/IAG solution could manage this request within a single data center. A reverse password sync agent process is enabled on all writable MS Active Directory domain controllers (DCs). All the worldwide MS ADS domain controllers would communicate with the single data center to validate and resend this password change to all of the users’ managed endpoint/application accounts, e.g. SAP, Mainframe (ACF2/RACF/TSS), AS/400, Unix, SaaS, Database, LDAP, Certs, etc.
With the WAN outage requirement, however, a queue or components must be deployed/enabled at each global data center, so that password changes are allowed to sync locally to avoid work stoppage, and are async-queued to avoid out-of-sync passwords on the endpoints/applications that may reside in other data centers.
We were able to work with the client to determine that their current IAM/IAG solution had the means to meet this requirement, but we wished to confirm there were no issues with WAN latency and the async process. The WAN latency was measured at less than 300 msec between remote data centers on opposite sides of the globe; this measurement reflects the global distance and any intermediate devices the network traffic passes through.
To review the solution’s ability to handle this latency, we introduced a test environment to emulate the global latency for deployment use-cases, change-password use-cases, and standard CRUD use-cases. VMware Workstation has a feature that allows emulation of degraded network traffic. This was a very useful planning/validation tool to lower rollback risk during the production deployment.
VMware Workstation Network Adapter Advanced Settings for WAN latency emulation
The solution used for the Global Password Reset effort was Symantec Identity Suite Virtual Appliance r14.3cp2. This solution has many tiers, where select components may be globally deployed and others may not.
We avoided any changes to the J2EE tier (WildFly) or the database for our architecture, as these components are not supported for WAN latency by the vendor. Note: We have worked with other clients with deployments across two (2) remote data centers within 1000 km that have reported minimal challenges for these tiers.
We focused our efforts on the Provisioning Tier and Connector Tier. The Provisioning Tier consists of the Provisioning Server and Provisioning Directory.
The Provisioning Server has no shared knowledge with other Provisioning Servers. The Provisioning Directory (Symantec Directory) is where the provisioning data may be set up in a multi-write peer model. Symantec Directory is a proper X.500 directory with high redundancy and is designed to manage WAN latency between remote data centers and to recover after an outage. See the example provided below.
The Connector Tier consists of the Java Connector Server and C++ Connector Server, which may be deployed on MS Windows as an independent component. There is no shared knowledge between Connector Servers, which works in our favor.
Requirement:
Three (3) independent MS Active Directory domains in five (5) remote data centers need to allow self-service password change and local password sync during a WAN outage. Password changes are driven by MS ADS Password Policies (every N days). The IME Password Policy for the IAG/IAM solution is not enabled, IME authentication is redirected to an ADS domain, and the IMPS IM Callback feature is disabled.
Below is an image that outlines the topology for five (5) global data centers in AMER, EMEA, and APAC.
The flow diagram below captures the password change use-case (self-service or delegated), the expected data flow to the user’s managed endpoints/applications, and the eventual peer sync of the MS Active Directory domain local to the user.
Observation(s):
The standalone solution of Symantec IAG/IAM has no expected challenges with configurations, but the Virtual Appliance offers pre-canned configurations that may impact a WAN deployment.
During this project, we identified three (3) challenges using the virtual appliance.
Two (2) items needed the assistance of the Broadcom Support and Engineering teams, who worked with us to address deployment configuration challenges with the “check_cluster_clock_sync -v” process, which incorrectly incremented time delays between servers instead of resetting the value to zero between server-to-server tests.
Why is this important? The “check_cluster_clock_sync” alias is used during auto-deployment of vApp nodes; if the time reported between servers is > 15 seconds, then replication may fail. This time-check issue was addressed with a hotfix, and after the hotfix was deployed, all clock differences were resolved.
The second challenge was a deployment issue with the IMPS component’s embedded “registry files/folders”. The prior embedded copy process used standard “scp”. With WAN latency, the scp copy operation may take more than 30 seconds; our testing with the Virtual Appliance showed that a simple copy of multiple small files took over two (2) minutes. After reviewing with CA support/engineering, they provided an updated copy process using “rsync” that speeds up copy performance by >100x. Before this update, the provisioning tier deployment would fail and a partial rollback would occur.
The last challenge we identified was using Symantec Directory’s embedded features to manage WAN latency via multi-write HUB groups. The Virtual Appliance cannot automatically manage this feature when it is enabled in the knowledge files of the provisioning data DSAs; Symantec Directory will fail to start after auto-deployment.
Fortunately, on the Virtual Appliance, we have full access to the ‘dsa’ service ID and can modify these knowledge files before/after deployment. Suppose we wish to roll back or add a new Provisioning Server Virtual Appliance; in that case, we must temporarily disable the multi-write HUB group configuration, e.g. comment out the configuration parameter and re-initialize the DATA DSAs.
Six (6) Steps for Global Password Reset Solution Deployment
We were able to refine our list of deployment steps using pre-built knowledge files and deployment of the vApp nodes as blank slates with the base components of Provisioning Server (PS) and Provisioning Directory (PD), with a remote MS Windows server for the Connector Server (JCS/CCS).
Step 1: Update the Symantec Directory DATA DSAs’ knowledge configuration files to use the multi-write group HUB model. Note that multi-write group configuration is enabled within the DATA DSAs’ *.dxc files. One directory server in each data center will be defined as a “HUB”.
To assist this configuration effort, we leveraged a series of bash shell scripts that could be pasted into multiple putty/ssh sessions on each vApp, using a “sed” command to replace a “HUB” placeholder string, as in the sketch below.
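A hypothetical fragment of that approach, assuming the knowledge files were staged with a placeholder token (the file-name pattern and placeholder are illustrative, and the hub flag name should be verified against your CA Directory release):

cd $DXHOME/config/knowledge
for f in impd-*.dxc; do
  sed -i 's|#HUB-PLACEHOLDER|dsa-flags = multi-write-group-hub|g' "$f"
done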
After the HUB model is enabled (stop/start the DATA DSAs), confirm that the delayed WAN latency poses no challenge to the Symantec Directory sync processes. By monitoring the Symantec Directory logs during replication, we can see that sync operations with WAN latency are captured with delays of > 1 msec between data centers AMER1 and APAC1.
Step 2: Update IMPS configurations to avoid delays with Global Password Reset solution.
Note that for this architecture, we do not use external IME Password Policies. We ensure that each ADS endpoint has the “Password synchronization agent is installed” checkbox enabled, and each Global User (GU) has the “Enable Password Synchronization Agent” checkbox enabled, to prevent data looping. To ensure this GU attribute is always enabled, we updated an attribute under “Create Users Default Attributes”.
Step 3a: Update the Connector Tier (CCS Component)
Ensure that the MS Windows Environmental variables for the CCS connector are defined for Failover (ADS_FAILOVER) and Retry (ADS_RETRY).
Step 3b: Update the CCS DNS knowledge file with the ADS DC hostnames.
Important Note: Avoid using the “Refresh DC List” feature within the IMPS GUI for the ADS endpoint. If this feature is used, a “merge” is processed between the local CCS DNS file contents and what is defined within the IMPS GUI refresh process. If we wish to manage the redirection to local MS ADS Domain Controllers, we need to control this behavior. If the refresh does occur, we can clean the extra entries out of the Symantec Directory. The only negative aspect is that a local password change may then attempt to communicate with one of the remote MS ADS Domain Controllers outside the local data center; during a WAN outage, a user would notice a delay during the password change event while the CCS connector timed out the connection until it reached the local MS ADS DC.
Step 3c: CCS ADS Failover
If using SSL over TCP 636, confirm the ADS Domain Root Certificate is deployed to the MS Windows server where the CCS service is installed. If using SASL over TCP 389 (if available), no additional effort is required.
If using SSL over TCP 636, use the MS tool certlm.msc to export the public root CA certificate for the ADS domain. Export to base64 format, then import it on the MS Windows host (if not already part of the ADS domain) with the same certlm.msc tool.
Step 4a: Update the Connector Tier for the JCS component.
Add the stabilization parameter “maxWait” to the JCS/CCS configuration file; we recommend 10-30 seconds.
Step 4b: Update JCS registration to the IMPS Tier
You may use the Virtual Appliance Console, but it incurs a delay when pulling the list of JCS connectors if any are down at the time of the check/submission. If we use the Connector Xpress UI, we can accomplish the same process much faster, with additional flexibility for routing rules to the exact MS ADS endpoints in the local data center.
Step 4c: Observe the IMPS routing to JCS via etatrans log during any transaction.
If any JCS service is unavailable (TCP 20411), the routing rules process will report a value of 999.00 instead of a low value of 0.00-1.00.
Step 5: Update the Remote Password Change Agent (DLL) on MS ADS Domain Controllers (writable)
Step 6a: Validation of Self-Service Password Change to selected MS ADS Domain Controller.
Using various MS Active Directory processes, we can emulate a delegated or self-service password change early during the configuration cycle to confirm the deployment is correct. The below example uses MS PowerShell to select a writable MS ADS Domain Controller to update a user’s password. We can then monitor the logs at all tiers for completion of this password change event.
A view of the password change event from the Reverse Password Sync Agent log file on the exact MS Domain Controller.
Step 6b: Validation of password change event via CCS ADS Log.
Step 6c: Validation of password change event via IMPS etatrans log
Note: The below screenshot showcases an alias/function to assist with monitoring the etatrans logs on the Virtual Appliance.
The below screenshot showcases using ldapsearch to check the before/after timestamps of a password change event within the MS Active Directory domain.
We hope these notes are of some value to your business and projects.
Appendix
Using the MS Windows Server for CCS Server
Get current status of AD account on select DC server before Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with current password.)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password05" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Change AD account's password via PowerShell:
PowerShell Example:
Set-ADAccountPassword -Identity "idmpwtest" -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "Password06" -Force) -Server dc2016.exchange.lab
Get current status of AD account on select DC server after Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with NEW password)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password06" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Using the Provisioning Server for password change event
Get current status of AD account on select DC server before Password Change:
LDAPSearch Example: (From IMPS server - as user with current password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password05 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
Change AD account's password via ldapmodify & base64 conversion process:
LDAPModify Example:
BASE64PWD=`echo -n '"Password06"' | iconv -f utf8 -t utf16le | base64 -w 0`
ADSHOST='192.168.242.154'
ADSUSERDN='CN=Administrator,CN=Users,DC=exchange2012,DC=lab'
ADSPWD='Password01!'
ldapmodify -v -a -H ldaps://$ADSHOST:636 -D "$ADSUSERDN" -w "$ADSPWD" << EOF
dn: CN=idmpwtest,OU=People,dc=exchange2012,dc=lab
changetype: modify
replace: unicodePwd
unicodePwd::$BASE64PWD
EOF
Get current status of AD account on select DC server after Password Change:
LDAPSearch Example: (From IMPS server - with user's account and new password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password06 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
On Linux OS, there are two (2) device drivers that provide entropy “noise” for components that require encryption: the /dev/random and /dev/urandom device drivers. /dev/random is a “blocking” device driver: when the “noise” is low, any component that relies on this driver will be “stalled” until enough entropy is returned. We can measure the entropy on a range of 0-4096, where a value over 1000 is excellent; any value in the double or single digits will impact the performance of the OS and solutions with delays. The root cause of these delays is not evident during troubleshooting, and typically there are no warning or error messages related to entropy. The current pool can be checked as shown below.
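A quick check using the kernel’s entropy counter (range 0-4096); sustained double-digit values indicate the starvation condition described above:

cat /proc/sys/kernel/random/entropy_avail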
The Symantec Identity Suite solution, when on Linux OS, is typically deployed with the JVM switch -Djava.security.egd=file:/dev/./urandom for any component that uses Java (Oracle JDK or AdoptOpenJDK), e.g. WildFly (IM/IG/IP) and IAMCS (JCS). This JVM variable is sufficient for most use-cases to manage the encryption/hash needs of the solution.
However, for any component that does not provide a mechanism to use the alternative /dev/urandom driver, the Linux OS vendors offer tools such as the “rng-tools” package. We can check whether an OS RNGD service is available using package tools, e.g.
dnf list installed | grep -i rng
If the Symantec Identity Suite or other solutions are deployed as standalone components, we may adjust the Linux OS as needed, with no restrictions on adding an RNGD daemon; one favorite is the HAVEGED daemon over the default OS RNGD.
See prior notes on value and testing for Entropy on Linux OS (standalone deployments):
The challenge for Virtual Appliances is that we are limited to the functionality the Symantec product team provides. The RNGD service was available on vApp r14.3, but was disabled due to an OS issue that caused 100% utilization on CentOS 6.4. The service is still installed, but the actual binary is non-executable.
A new Virtual Appliance patch would be required to re-enable RNGD on vApp r14.3cp2. We have sudo access to /sbin/chkconfig and /sbin/service to re-enable this service, but as the binary is not executable, we cannot progress any further. We can see the alias still exists in the documentation, but the OS alias was removed in the cp2 update.
However, since vApp r14.4 was released, we can focus on this Virtual Appliance, which runs CentOS 8 Stream. The RNGD service here is disabled (masked) but can be re-enabled for our use with the sudo command. There is no documented method for RNGD on vApp r14.4 at this time, but the steps below show an approved way using the ‘config’ userID and sudo commands.
Confirm that the “rng-tools” package is installed and that the RNGD binary is executable. We can also see that the RNGD service is “masked”: masked services are prevented from starting manually or automatically, an extra safety measure used when tighter control over the system is desired.
If we test OS entropy for this vApp r14.4 server without RNGD, we can monitor how a simple bash shell script that emulates password generation impacts the “entropy” of /dev/random. The below script will reduce the entropy to low numbers. This process will impact the OS itself and any components that reference /dev/random; we can observe with “lsof /dev/random” that the Java programs still reference /dev/random, even though most activity goes to /dev/urandom.
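A minimal sketch of such a script (the loop count and byte count are illustrative, not the exact script used): each read drains /dev/random, and the time command exposes the stall once the pool is depleted.

for i in $(seq 1 100); do
  time head -c 16 /dev/random > /dev/null
  cat /proc/sys/kernel/random/entropy_avail
done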
Using the time command in the bash shell script, we can see that the response is rapid for the first 20+ iterations, but as soon as the entropy is depleted, each execution is delayed by 10-30x.
Now let’s see what the RNGD service will do for us when enabled. Follow the steps below to unmask, enable, and start the RNGD service as the ‘config’ userID; we have sudo access to the CentOS 8 Stream command /sbin/systemctl.
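The steps, run as the ‘config’ userID (the service name rngd is the rng-tools default on CentOS 8 Stream; verify it on your appliance):

sudo /sbin/systemctl unmask rngd
sudo /sbin/systemctl enable rngd
sudo /sbin/systemctl start rngd
sudo /sbin/systemctl status rngd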
After the RNGD service is enabled, test again with the same bash shell script, but bump the loop count to 1000 or higher. Using the time command, we can see that each loop now finishes within a fraction of a second.
Aim to keep the solution footprint small and right-sized to solve the business’ needs. Do not accept default performance; avoid over-purchasing to scale to your expected growth.
Use the JVM switch wherever there is a Java process, e.g. BLC or home-grown ETL (extract-transform-load) processes:
-Djava.security.egd=file:/dev/./urandom
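An illustrative placement (the file path and JAVA_OPTS variable are standard WildFly conventions, not vendor-specific guidance; adjust for home-grown processes):

# e.g. appended to WildFly's bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"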
If you suspect a dependency on /dev/random may impact the OS or other processes, enable the OS RNGD and perform your testing. Monitor with the top command to ensure the RNGD service is providing value and not impacting the solution.
One business risk to manage when promoting new business logic to production is planning the rollback process that restores prior-state data, especially for an application/endpoint that is critical to the business and as important to users as their login credentials and access.
In this entry, we showcase how to use CA Directory to snapshot an endpoint on a scheduled basis (daily/hourly) and have the process prepare a rollback delta file for users’ entitlements.
Understanding how queries may be directed to an endpoint/application either directly or via the CA Identity Manager provisioning tier, we can speed up this process considerably for sites that have millions of identities in an endpoint.
#!/bin/bash
##############################################################################
#
# POC to demonstrate process to snapshot endpoint data on a daily basis
# and to allow a format for roll back
#
# 1. Review ADS with dxsearch/dxmodify
# 2. Create ADS representative Router DSA with CA Directory
# 3. Create ldif delta of snapshot data
# 4. Convert 'replace' to 'add' to ensure Roll back process is a 'merge'
# and NOT an 'overwrite' of entitlements
#
#
#
# A. Baugher, ANA, 11/2019
#
##############################################################################
########## Secure password for script ########
FILE=/tmp/.ads.hash.pwd
#rm -rf $FILE $FILE.salt
[[ -f $FILE ]]
echo "Check if $FILE exists: $?"
[[ -s $FILE ]]
echo "Check if $FILE is populated: $?"
if [[ ! -s $FILE && ! -s $FILE.salt ]]
then
# File did not have any data
# Run script once with pwd then replace with junk data in script
SALT=$RANDOM$RANDOM$RANDOM
# NOTE: 'PWD' is the shell's working-directory builtin; the debug echoes below
# will print the current directory on later runs. Use a distinct variable name
# (e.g. ADSPW) in production.
PWD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ENCPWD=$(echo $PWD | openssl enc -aes-256-cbc -a -salt -pass pass:$SALT)
echo $ENCPWD > $FILE
echo $SALT > $FILE.salt
chmod 600 $FILE $FILE.salt
fi
if [[ -s $FILE && -s $FILE.salt ]]
then
ENCPWD=`cat $FILE`
SALT=`cat $FILE.salt`
echo "$PWD and $SALT for $ENCPWD"
MYPWD=$(echo "$ENCPWD" | openssl enc -aes-256-cbc -a -d -salt -pass pass:$SALT)
echo "$PWD and $SALT for $MYPWD"
else
echo "Missing password encrypted data and salt"
exit 1
fi
#exit
echo ""
echo "##############################################################################"
echo "Step 0 # Remove prior ads schema files"
echo "##############################################################################"
ADS_SCHEMA=ads_schema
ADS_SUFFIX="dc=exchange,dc=lab"
RANDOM_PORT=50389
rm -rf $DXHOME/config/knowledge/$ADS_SCHEMA.dxc
rm -rf $DXHOME/config/servers/$ADS_SCHEMA.dxi
rm -rf $DXHOME/config/schema/$ADS_SCHEMA.dxc
echo ""
echo "##############################################################################"
echo "Step 1 # Create new router DSA"
echo "##############################################################################"
echo "dxnewdsa -t router $ADS_SCHEMA $RANDOM_PORT $ADS_SUFFIX"
dxnewdsa -t router $ADS_SCHEMA $RANDOM_PORT $ADS_SUFFIX
echo""
echo "##############################################################################"
echo "Step 2 # Create temporary LDIF file of ADS schema"
echo "##############################################################################"
cd $DXHOME/config/schema
ADS_BIND_DN="CN=Administrator,CN=Users,DC=exchange,DC=lab"
ADS_BIND_PWD=$MYPWD
ADS_PASSFILE=/tmp/.ads.pwd
echo -n $MYPWD > $ADS_PASSFILE
chmod 600 $ADS_PASSFILE
ADS_SERVER=dc2016.exchange.lab
ADS_PORT=389
echo "dxschemaldif -v -D $ADS_BIND_DN -w ADS_BIND_PASSWORD_HERE $ADS_SERVER:$ADS_PORT > $ADS_SCHEMA.ldif"
dxschemaldif -v -D $ADS_BIND_DN -w $ADS_BIND_PWD $ADS_SERVER:$ADS_PORT > $ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 3 # Replace unknown SYNTAX with closely related SYNTAX known by CA Directory r12.6.5"
echo "##############################################################################"
echo "sed -i 's|1.2.840.113556.1.4.1221|1.3.6.1.4.1.1466.115.121.1.26|g' $ADS_SCHEMA.ldif"
sed -i 's|1.2.840.113556.1.4.1221|1.3.6.1.4.1.1466.115.121.1.26|g' $ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 4 - # Create CA Directory Schema DXC File from LDIF Schema File"
echo "##############################################################################"
echo "ldif2dxc -f $ADS_SCHEMA.ldif -b bad.ldif -x default.dxg -v $ADS_SCHEMA.dxc"
ldif2dxc -f $ADS_SCHEMA.ldif -b bad.ldif -x default.dxg -v $ADS_SCHEMA.dxc
echo ""
echo "##############################################################################"
echo "Step 5 - # Update router DSA schema reference"
echo "##############################################################################"
echo "sed -i \"s|source \"../schema/default.dxg\";|source \"../schema/default.dxg\";\nsource \"../schema/$ADS_SCHEMA.dxc\"; |g\" $DXHOME/config /servers/$ADS_SCHEMA.dxi"
sed -i "s|source \"../schema/default.dxg\";|source \"../schema/default.dxg\";\nsource \"../schema/$ADS_SCHEMA.dxc\"; |g" $DXHOME/config/servers /$ADS_SCHEMA.dxi
echo ""
echo "##############################################################################"
echo "Step 6 - # Query ADS endpoint for snapshot 1 "
echo "##############################################################################"
echo "dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX '(objectClass=User)' memberOf > snapshot_1_ $ADS_SCHEMA.ldif "
echo "ldifsort snapshot_1_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif "
dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX "(objectClass=User)" memberOf | perl -p00e 's/\r?\ n //g' > snapshot_1_$ADS_SCHEMA.ldif
ldifsort snapshot_1_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 7 - # Query ADS endpoint for snapshot 2"
echo "##############################################################################"
echo "dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX '(objectClass=User)' memberOf > snapshot_2_ $ADS_SCHEMA.ldif "
echo "ldifsort snapshot_2_$ADS_SCHEMA.ldif snapshot_2_sorted_$ADS_SCHEMA.ldif "
dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX "(objectClass=User)" memberOf | perl -p00e 's/\r?\ n //g' > snapshot_2_$ADS_SCHEMA.ldif
ldifsort snapshot_2_$ADS_SCHEMA.ldif snapshot_2_sorted_$ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 8 - # Find the delta for any removed objects"
echo "##############################################################################"
echo "ldifdelta -x -S $ADS_SCHEMA snapshot_2_sorted_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif"
ldifdelta -x -S $ADS_SCHEMA snapshot_2_sorted_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 9a: Convert from User ldapmodify syntax of 'overwrite' of 'replace' "
echo "##############################################################################"
ldifdelta -S $ADS_SCHEMA snapshot_2_sorted_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif user_mod_syntax_input.ldif >/dev/null 2>&1
cat user_mod_syntax_input.ldif | perl -p00e 's/\r?\n //g' > user_mod_syntax.ldif
cat user_mod_syntax.ldif
echo "##############################################################################"
echo "Step 9b: Convert to ADS Group ldapmodify syntax with a 'merge' of 'add' for the group objects"
echo "##############################################################################"
perl /opt/CA/Directory/dxserver/samples/dxsoak/convert.pl user_mod_syntax.ldif > group_mod_syntax_input.ldif
cat group_mod_syntax_input.ldif | perl -p00e 's/\r?\n //g' > group_mod_syntax.ldif
cat group_mod_syntax.ldif
echo "##############################################################################"
Example of output from above script:
[dsa@vapp0001]$ ./active_directory_user_delta_via_ca_dir_tools-lab.sh
Check if /tmp/.ads.hash.pwd exists: 0
Check if /tmp/.ads.hash.pwd is populated: 0
/opt/CA/Directory/dxserver/samples/dxsoak and 31936904511291 for U2FsdGVkX195Ti6A8GdFTG6Kmrf6xDcOhrd2aPWVezc=
/opt/CA/Directory/dxserver/samples/dxsoak and 31936904511291 for CAdemo123
20200427150345,505.0Z = Current OS UTC time stamp
##############################################################################
Step 0 # Remove prior ads schema files
##############################################################################
20200427150345,509.0Z = Current OS UTC time stamp
##############################################################################
Step 1 # Create new router DSA
##############################################################################
dxnewdsa -t router ads_schema 50389 dc=exchange,dc=lab
Writing the knowledge file...
knowledge file written
Writing the initialization file...
Initialization file written
Starting the DSA 'ads_schema'...
ads_schema starting
ads_schema started
20200427150345,513.0Z = Current OS UTC time stamp
##############################################################################
Step 2 # Create temporary LDIF file of ADS schema
##############################################################################
dxschemaldif -v -D CN=Administrator,CN=Users,DC=exchange,DC=lab -w ADS_BIND_PASSWORD_HERE dc2016.exchange.lab:389 > ads_schema.ldif
>> Issuing LDAP v3 synchronous bind to 'dc2016.exchange.lab:389'...
>> Fetching root DSE 'subschemaSubentry' attribute...
>> Downloading schema from 'CN=Aggregate,CN=Schema,CN=Configuration,DC=exchange,DC=lab'...
>> Received (4527) values
>> Done.
20200427150345,539.0Z = Current OS UTC time stamp
##############################################################################
Step 3 # Replace unknown SYNTAX with closely related SYNTAX known by CA Directory r12.6.5
##############################################################################
sed -i 's|1.2.840.113556.1.4.1221|1.3.6.1.4.1.1466.115.121.1.26|g' ads_schema.ldif
20200427150345,560.0Z = Current OS UTC time stamp
##############################################################################
Step 4 - # Create CA Directory Schema DXC File from LDIF Schema File
##############################################################################
ldif2dxc -f ads_schema.ldif -b bad.ldif -x default.dxg -v ads_schema.dxc
>> Opening input file 'ads_schema.ldif' ...
>> Opening existing dxserver schema file '/opt/CA/Directory/dxserver/config/schema/default.dxg' ...
>> Opening bad file 'bad.ldif' ...
>> Opening output file '/opt/CA/Directory/dxserver/config/schema/ads_schema.dxc' ...
>> Processing dxserver schema group file '/opt/CA/Directory/dxserver/config/schema/default.dxg'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/x500.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/cosine.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/umich.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/inetop.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/dxserver.dxc'...
>> Loaded (248) existing dxserver schema entries
>> Loading LDIF records...
>> Loading LDIF record number (1)...
>> Skipping attr: 'objectClass'
>> Skipping attr: 'objectClass'
>> Processing loaded LDIF records...
>> Moving objectClasses to end of list...
>> Sorting attrs/objectClasses so parents precede their children...
>> Processing attributeTypes...
>> Defaulting 'directoryString' syntax without any (required) matching rules to 'caseIgnoreString'...
[Remove repeating lines x 1000]
>> Processing objectClasses...
>> Skipping existing schema entry 'top' with oid '2.5.6.0'...
>> Skipping existing schema entry 'locality' with oid '2.5.6.3'...
>> Skipping existing schema entry 'device' with oid '2.5.6.14'...
>> Skipping existing schema entry 'certificationAuthority' with oid '2.5.6.16'...
>> Skipping existing schema entry 'groupOfNames' with oid '2.5.6.9'...
>> Skipping existing schema entry 'organizationalRole' with oid '2.5.6.8'...
>> Skipping existing schema entry 'organizationalUnit' with oid '2.5.6.5'...
>> Skipping existing schema entry 'domain' with oid '1.2.840.113556.1.5.66'...
>> Skipping existing schema entry 'rFC822LocalPart' with oid '0.9.2342.19200300.100.4.14'...
>> Skipping existing schema entry 'applicationProcess' with oid '2.5.6.11'...
>> Skipping existing schema entry 'document' with oid '0.9.2342.19200300.100.4.6'...
>> Skipping existing schema entry 'room' with oid '0.9.2342.19200300.100.4.7'...
>> Skipping existing schema entry 'domainRelatedObject' with oid '0.9.2342.19200300.100.4.17'...
>> Skipping existing schema entry 'country' with oid '2.5.6.2'...
>> Skipping existing schema entry 'friendlyCountry' with oid '0.9.2342.19200300.100.4.18'...
>> Skipping existing schema entry 'groupOfUniqueNames' with oid '2.5.6.17'...
>> Skipping existing schema entry 'organization' with oid '2.5.6.4'...
>> Skipping existing schema entry 'simpleSecurityObject' with oid '0.9.2342.19200300.100.4.19'...
>> Skipping existing schema entry 'person' with oid '2.5.6.6'...
>> Skipping existing schema entry 'organizationalPerson' with oid '2.5.6.7'...
>> Skipping existing schema entry 'inetOrgPerson' with oid '2.16.840.1.113730.3.2.2'...
>> Skipping existing schema entry 'residentialPerson' with oid '2.5.6.10'...
>> Skipping existing schema entry 'applicationEntity' with oid '2.5.6.12'...
>> Skipping existing schema entry 'dSA' with oid '2.5.6.13'...
>> Skipping existing schema entry 'cRLDistributionPoint' with oid '2.5.6.19'...
>> Skipping existing schema entry 'documentSeries' with oid '0.9.2342.19200300.100.4.9'...
>> Skipping existing schema entry 'account' with oid '0.9.2342.19200300.100.4.5'...
>> Converting LDIF records to DXserver schema format...
>> Converted (4398) of (4525) schema records
20200427150345,894.0Z = Current OS UTC time stamp
##############################################################################
Step 5 - # Update router DSA schema reference
##############################################################################
sed -i "s|source "../schema/default.dxg";|source "../schema/default.dxg";\nsource "../schema/ads_schema.dxc"; |g" /opt/CA/Directory/dxserver/config/servers/ads_schema.dxi
20200427150345,897.0Z = Current OS UTC time stamp
##############################################################################
step 6 - # Update an ADS account with memberOf for testing with initial conditions
##############################################################################
dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd << EOF >/dev/null 2>&1
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
adding new entry CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
20200427150345,909.0Z = Current OS UTC time stamp
##############################################################################
Step 7 - # Query ADS endpoint for snapshot 1
##############################################################################
dxsearch -LLL -h dc2016.exchange.lab -p 389 -x -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -b dc=exchange,dc=lab '(&(objectClass=User)(memberOf=*))' memberOf | perl -p00e 's/\r?\n //g' > snapshot_1_ads_schema.ldif
ldifsort snapshot_1_ads_schema.ldif snapshot_1_sorted_ads_schema.ldif
creating buckets
creating sort cluster 1 of size 200
sorting 0 records
creating sort cluster 2 of size 200
sorting 200 records
creating sort cluster 3 of size 200
sorting 400 records
3 buckets created
sorting 588 records
588 records sorted, 0 bad records
20200427150345,940.0Z = Current OS UTC time stamp
##############################################################################
Step 8 - # Update an ADS account with memberOf for testing after snapshot
##############################################################################
dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd << EOF
Ignore the error msg: DSID-031A1254, problem 5003 (WILL_NOT_PERFORM)
This error will occur if a non-existent value is removed from the group's member attribute
##############################################################################
ldap_initialize( ldap://dc2016.exchange.lab:389 )
delete member:
CN=Test User 001,CN=Users,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
delete member:
CN=eeeee,CN=Users,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
delete member:
CN=Test User 001,CN=Users,DC=exchange,DC=lab
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
ldap_modify: Server is unwilling to perform (53)
additional info: 00000561: SvcErr: DSID-031A1254, problem 5003 (WILL_NOT_PERFORM), data 0
delete member:
CN=alantest,CN=Users,DC=exchange,DC=lab
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
delete member:
CN=eeeee,CN=Users,DC=exchange,DC=lab
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
ldap_modify: Server is unwilling to perform (53)
additional info: 00000561: SvcErr: DSID-031A1254, problem 5003 (WILL_NOT_PERFORM), data 0
add member:
CN=alantest,CN=Users,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
deleting entry "CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab"
delete complete
20200427150345,954.0Z = Current OS UTC time stamp
##############################################################################
Step 9 - # Query ADS endpoint for snapshot 2
##############################################################################
dxsearch -LLL -h dc2016.exchange.lab -p 389 -x -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -b dc=exchange,dc=lab '(&(objectClass=User)(memberOf=*))' memberOf | perl -p00e 's/\r?\n //g' > snapshot_2_ads_schema.ldif
ldifsort snapshot_2_ads_schema.ldif snapshot_2_sorted_ads_schema.ldif
creating buckets
creating sort cluster 1 of size 200
sorting 0 records
creating sort cluster 2 of size 200
sorting 200 records
creating sort cluster 3 of size 200
sorting 400 records
3 buckets created
sorting 587 records
587 records sorted, 0 bad records
20200427150345,985.0Z = Current OS UTC time stamp
##############################################################################
Step 10 - # Find the delta for any removed objects
##############################################################################
ldifdelta -x -S ads_schema snapshot_2_sorted_ads_schema.ldif snapshot_1_sorted_ads_schema.ldif
dn: CN=eeeee,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alantest,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
-
dn: CN=Test User 001,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
changetype: add
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
ldifdelta summary:
587 entries in old file
588 entries in new file
Produced:
1 add entry records
0 delete entry records
3 modify entry records
20200427150346,070.0Z = Current OS UTC time stamp
##############################################################################
Step 11a: Convert from User ldapmodify syntax of 'overwrite' of 'replace'
##############################################################################
dn: CN=eeeee,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alantest,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
-
dn: CN=Test User 001,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
changetype: add
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
20200427150346,163.0Z = Current OS UTC time stamp
##############################################################################
Step 11b: Convert to ADS Group ldapmodify syntax with a 'merge' of 'add' for the group objects
##############################################################################
dn: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=eeeee,CN=Users,DC=exchange,DC=lab
member: CN=Test User 001,CN=Users,DC=exchange,DC=lab
dn: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=alantest,CN=Users,DC=exchange,DC=lab
# Ignoring Users: [CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab <-> CN=Account Operators,CN=Builtin,DC=exchange,DC=lab] Reason: User NOT present in the latest Snapshot! Cannot add to group.
20200427150346,172.0Z = Current OS UTC time stamp
##############################################################################
Step 11c: Query ADS Group member(s) before Roll back process
##############################################################################
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
20200427150346,185.0Z = Current OS UTC time stamp
##############################################################################
Step 12: Roll back change to ADS User membershipOf to ADS
##############################################################################
Ignore the false positive warning message of: (ENTRY_EXISTS) - This is the 'merge' process
##############################################################################
dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -f group_mod_syntax.ldif
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modifying entry CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
modifying entry CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
20200427150346,194.0Z = Current OS UTC time stamp
##############################################################################
Step 13: Query ADS Group member after Roll back process
##############################################################################
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=eeeee,CN=Users,DC=exchange,DC=lab
member: CN=Test User 001,CN=Users,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
“DSA is attempting to start after a long outage, perform a recovery procedure before starting”
Challenge: The IMPD (Identity Manager Provisioning Directory) DATA DSAs have been offline for a while, e.g. 7+ days, and the Symantec/CA Directory solution will, to protect the data, refuse to allow the DATA DSAs to start without manual intervention. This prevents the possibility of production data (live DATA DSAs) being synced from older data (offline DATA DSAs).
If we were concerned, we would follow best practice: remove the offline DATA DSAs’ *.db and *.dp files, replace the *.db files with current copies of the live DATA DSAs’ *.db files, generate temporary *.dx time files, and allow the *.dp time files to rebuild themselves upon startup of the offline DATA DSAs.
However, if we are NOT concerned, or the environment is non-production, we can avoid the multiple shells and commands needed to resync by using a combination of bash shell commands. The proposal below uses the Symantec/CA Identity Suite virtual appliance, where both the IMPD and IMPS (Identity Manager Provisioning Server) components reside on the same servers.
Proposal: Use a single Linux host to send remote commands as a single user ID, sudo to the ‘dsa’ and ‘imps’ service IDs, and issue commands to address the restart process.
Pre-Work: For the Identity Suite vApp, we recommend .ssh keys to avoid typing a password for the ‘config’ user ID on every vApp node.
If using .ssh keys, do not forget this shortcut to cache the local session: eval `ssh-agent` && ssh-add
Steps: Issue the following bash commands with the correct IPs or hostnames.
If possible, wrap the remote commands in a for-loop. The below example uses the local ‘config’ user ID to ssh to remote servers, then issues a local su to the ‘dsa’ service ID. The ‘dsa’ commands may need to be wrapped as shown to allow multiple commands to execute together: a quick hostname check, stop all IMPD DATA DSAs, find and remove the time-stamp file that is preventing startup of the IMPD DATA DSAs, restart all IMPD DATA DSAs, and then move on to the next server in the for-loop. The ‘imps’ commands are similar: a quick hostname check, status check, stop and start, another status check, then on to the next server in the for-loop.
for i in {136..141}; do ssh -t config@192.168.242.$i "su - dsa -c \"hostname;dxserver stop all;pwd;find ./data/ -type f \( -name '*.dp' \) -delete ;dxserver start all \" "; done
for i in {136..141}; do ssh -t config@192.168.242.$i "su - imps -c \"hostname;imps status;imps stop;imps start;imps status \" "; done
View of for-loop commands output:
Additional: a process to assist with the decision to sync or not sync.
Check whether the number of total entries in each individual IMPD DATA DSA matches its peers (multi-write groups). Goal: avoid any deltas > 1% between peers. The IMPD “main”, “co”, and “inc” DATA DSAs should be 100% in sync. We may see some minor flux in the “notify” DATA DSA, as this is temporary data used by the IMPS server to store data to be sent to the IME via the IME Call Back process.
If there are any deltas, then we may export the IMPD DATA DSAs to LDIF files and then use the Symantec/CA Directory ldifdelta process to isolate and triage the deltas.
su - dsa OR [ sudo -iu dsa ]
export HISTIGNORE=' *' {USE THIS LINE TO FORCE HISTORY TO IGNORE ANY COMMANDS WITH A LEADING SPACE CHARACTER}
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd {USE SPACE CHARACTER IN FRONT TO AVOID HISTORY USAGE}
# NOTIFY BRANCH (TCP 20404)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD NOTIFY DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# INC BRANCH (TCP 20398)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD INC DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# CO BRANCH (TCP 20396)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD CO DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# MAIN BRANCH (TCP 20394)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD MAIN DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
The NOTIFY DSA holds temporary data and will have deltas; this DSA is used for the IME CALL BACK process.
One of the challenges that IAM/IAG solutions may have is single-threaded processing for select endpoints. For the CA/Symantec Identity Management solution, before IM r14.3cp2, we lived with a single-threaded connector to manage MS Active Directory endpoints.
To address this challenge, we deployed multiple connector servers. We allowed the IM Provisioning Server (IMPS) to use a built-in round-robin approach of load-balancing separate transactions to different connector servers, which would service the same Active Directory endpoints.
The IME may be running as fast as it can with its clustered deployment, but as soon as a task touches MS Active Directory, there is a bottleneck with the CCS service: we begin to see the IME JMS queue reporting that it is stuck, and the IME View Submitted Tasks reporting “In Progress” for all tasks. If the CCS service is restarted, all IME tasks are then reported as “Failed.”
This is/was the bottleneck for the solution for sites that have MS Active Directory for Birthright/DayOne Access.
We can now avoid this bottleneck. [*** (5/24/2021) – There is an enhancement to CP2 to address im_ccs.exe crashes during peak loads discovered using this testing process. ]
We now have full parallel provisioning to MS Active Directory from a single connector server (JCS/CCS).
The new attribute that regulates this behavior is eTADSMaxConnectionsInPool. It will be applied to every existing ADS endpoint currently managed by the IM Provisioning Server after CP2 is deployed. Note: the default value is 10, but after much testing we recommend matching the IMPS->JCS and JCS->CCS pool values, i.e. 200.
During testing within the IME using Bulk Tasks or the IM BLC, we can see that the CCS->ADS traffic will reach 20-30 connections if allowed. You may set this attribute to a value of 200 via JXplorer and/or an ldapmodify/dxmodify script, e.g.:
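A hedged dxmodify sketch (the bind DN, endpoint name exchange2016, and password below are illustrative lab values for the dc=eta suffix; substitute your own):

dxmodify -c -H ldap://localhost:20389 -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -w Password01 << EOF
dn: eTADSDirectoryName=exchange2016,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSMaxConnectionsInPool
eTADSMaxConnectionsInPool: 200
EOF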
To confirm the number of open connections is greater than one (1), we can issue a Bulk IM Task or use a performance tool like CA Directory dxsoak.
In this example, we will showcase using CA Directory dxsoak to execute 100 parallel threads to create 100 ADS accounts with MS Exchange mailboxes. We will also enclose this script for download for others to review and use.
Performance Lab:
Pre-Steps:
Leverage CA Directory samples’ dxsoak binary (performance testing). You may wish to use CA Directory on an existing IM Provisioning Server (Linux OS) or you may deploy CA Directory (MS Windows version) to the JCS/CCS connector. Examples are provided for both OSes.
Create LDIF files for the IM Provisioning Server and/or IM connector tier. These files are needed to ‘push’ the solution to failure. The use of IME bulk tasks and/or etautil scripts against the IM provisioning tier will not provide the transaction speed we need to break the CCS service, if that is possible.
Within the IM Provisioning Manager, enable the ADS Endpoint TXT logs on the Logging TAB for all checkboxes.
Monitor the IMPS etatrans* logs, monitor the JCS ADS logs, monitor the CCS ADS logs, monitor the number of CCS-> ADS (LDAP/S – TCP 389/636) threads. [Suggest using MS Sysinternals Process Explorer and select im_ccs.exe & then TCP/IP TAB]
Monitor the MS ADS Domain via MS ADUC (AD Users & Computers UI) and MS Exchange Mailbox (Mailbox UI via Browser)
Execution:
6. Perform a UNIT TEST with dxmodify/ldapmodify to confirm the LDIF file input is correct with the correct suffix.
8. IMPS etatrans*.log – Count the number of operations per second. Note any race conditions and/or data collisions, e.g. ADS accounts deleted prior to an add across 100 threads, or the same ADS account creation attempted in multiple threads.
9. IM CCS ADS <endpoint>.log – Will only have useful data if the ADS Endpoint Logging TAB has been checked for TXT logs.
10. Finally, validate directly in MS Active Domain with the ADUC or similar tool & MS Exchange mailboxes being created/deleted.
11. Count the number of threads from im_ccs.exe to ADS – Suggest using MS Sysinternals Process Explorer tool and/or Powershell to count the number of connections.
MS PowerShell script to count the number of LDAP (TCP 389) connections from im_ccs.exe. [Note: TCP 389 is used more if the ADS endpoint is set up to use SASL authentication; TCP 636 is used more if the ADS endpoint is using the older TLS authentication]
$i = 1
Do {
    cls
    # Count established LDAP (TCP 389) connections owned by the im_ccs.exe process
    (Get-NetTCPConnection -State Established -OwningProcess (Get-Process -Name im_ccs).Id -RemotePort 389).Count
    Start-Sleep -s 1
    $i++
}
while ($i -le 5)
Direct Performance Testing to JCS/CCS Service
While this testing has limited value, it can offer satisfaction and assist in troubleshooting challenges. We can use the prior LDIF files with a slightly different suffix, dc=etasa (instead of dc=eta), to use dxsoak to push the connector tier to failure. This step helped provide memory dumps to the CA/Symantec engineering teams to isolate challenges within the parallel processing. The CCS service is only exposed via localhost; if you wish to test the CCS service remotely, update the MS Registry key for the CCS service to use the external IP address of the JCS/CCS server. Rate observed = 25K ids/hr.
Script to generate 100 ADS Accounts with MS Exchange Mailbox Creation
You may wish to review this script and adjust it for your ADS / MS Exchange domains for testing. You can also create a simple LDIF file with password resets or ADS group membership adds. Just remember that the IMPS Service (TCP 20389/20390) uses the suffix dc=eta, and the IM JCS (TCP 20410/20411) and CCS (TCP 20402/20403) Services use the suffix dc=etasa. Additionally, if using CA Directory dxsoak, only use the non-TLS ports, as this binary is not equipped to use TLS certificates.
#!/bin/bash
#######################################################################################################################
# Name: Generate ADS Feed Files for IM Solution Provisioning/Connector Tiers
#
# Goal: Validate the new parallel processes from the IM Connector Tier to Active Directory with MS Exchange
#
#
# Generate ADS User LDIF file(s) for use with unit (dxmodify) and performance testing (dxsoak) to:
# - {Note: dxsoak will only work with non-TLS ports}
#
# IM JCS (20410) "dc=etasa" {Ensure MS Windows Firewall allows this port to be exposed}
# IM CCS (20402) "dc=etasa" {This port is localhost only, may open to network traffic via registry update}
# IMPS (20389) "dc=eta"
#
#
# Monitor:
#
# The IMPS etatrans*.log {exclude searches}
# The JCS daily log
# The JCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
# The CCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
#
# Execute per the examples provided during run of this file
#
#
# ANA 05/2021
#######################################################################################################################
# Unique Variables for an ADS Domain
NAMESPACE=exchange2016
ADSDOMAIN=exchange.lab
DCDOMAIN="DC=exchange,DC=lab"
OU=People
#######################################################################################################################
MAX=100
start=00001
counter=$start
echo "###############################################################"
echo "###############################################################"
START=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo `/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`" = Current OS UTC time stamp"
echo "###############################################################"
FILE1=ads_user_with_exchange_dc_etasa.ldif
FILE2=ads_user_with_exchange_dc_eta.ldif
echo "" > $FILE1
while [ $counter -le $MAX ]
do
n=$((10000+counter)); n=${n#1}
tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo "Counter with leading zeros = $n at time: $tz"
cat << EOF >> $FILE1
dn: eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: add
objectClass: eTADSAccount
eTADSobjectClass: user
eTADSAccountName: firstname$n aaalastname$n
eTADSgivenName: firstname$n
eTADSsn: aaalastname$n
eTADSdisplayName: firstname$n aaalastname$n
eTADSuserPrincipalName: aaatestuser$n@$ADSDOMAIN
eTADSsAMAccountName: aaatestuser$n
eTPassword: Password01
eTADSpwdLastSet: -1
eTSuspended: 0
eTADSuserAccountControl: 0000000512
eTADSDescription: description $tz
eTADSphysicalDeliveryOfficeName: office
eTADStelephoneNumber: 111-222-3333
eTADSmail: aaatestuser$n@$ADSDOMAIN
eTADSwwwHomePage: web.page.lab
eTADSotherTelephone: 111-222-3333
eTADSurl: other.web.page.lab
eTADSstreetAddress: street address line01
eTADSpostOfficeBox: pobox 111
eTADSl: city
eTADSst: state
eTADSpostalCode: 11111
eTADSco: UNITED STATES
eTADSc: US
eTADScountryCode: 840
eTADSscriptPath: loginscript.cmd
eTADSprofilePath: \profile\path\here
eTADShomePhone: 111-222-3333
eTADSpager: 111-222-3333
eTADSmobile: 111-222-3333
eTADSfacsimileTelephoneNumber: 111-222-3333
eTADSipPhone: 111-222-3333
eTADSinfo: Notes Here
eTADSotherHomePhone: 111-222-3333
eTADSotherPager: 111-222-3333
eTADSotherMobile: 111-222-3333
eTADSotherFacsimileTelephoneNumber: 111-222-3333
eTADSotherIpPhone: 111-222-3333
eTADStitle: title
eTADSdepartment: department
eTADScompany: company
eTADSmanager: CN=manager_fn manager_ln,OU=$OU,$DCDOMAIN
eTADSmemberOf: CN=Backup Operators,CN=Builtin,$DCDOMAIN
eTADSlyncSIPAddressOption: 0000000000
eTADSdisplayNamePrintable: aaatestuser$n
eTADSmailNickname: aaatestuser$n
eTADShomeMDB: (Automatic Mailbox Distribution)
eTADShomeMTA: CN=DC001,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,$DCDOMAIN
eTAccountStatus: A
eTADSmsExchRecipientTypeDetails: 0000000001
eTADSmDBUseDefaults: TRUE
eTADSinitials: A
eTADSaccountExpires: 9223372036854775807
EOF
counter=$(( $counter + 00001 ))
done
# Create the delete ADS Process
start=00001
counter=$start
while [ $counter -le $MAX ]
do
n=$((10000+counter)); n=${n#1}
tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo "Counter with leading zeros = $n at time: $tz"
cat << EOF >> $FILE1
dn: eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: delete
EOF
counter=$(( $counter + 00001 ))
done
echo ""
echo "################################### ADS USER OBJECT STATS ################################################################"
echo "Number of add objects: `grep "changetype: add" $FILE1 | wc -l`"
echo "Number of delete objects: `grep "changetype: delete" $FILE1 | wc -l`"
rm -rf $FILE2
cp -r -p $FILE1 $FILE2
sed -i 's|,dc=im,dc=etasa|,dc=im,dc=eta|g' $FILE2
ls -lart $FILE1
ls -lart $FILE2
echo ""
echo "################################### SET ADS MAX CONNECTIONS IN POOL SIZE ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
LDAPTLS_REQCERT=never dxmodify -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D "$IMPS_USER" -w "$IMPS_PWD" << EOF
dn: eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSMaxConnectionsInPool
eTADSMaxConnectionsInPool: 200
EOF
LDAPTLS_REQCERT=never dxsearch -LLL -H ldap://$IMPS_HOST:$IMPS_PORT -x -D "$IMPS_USER" -w "$IMPS_PWD" -b "eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" -s base eTADSMaxConnectionsInPool | perl -p00e 's/\r?\n //g'
echo ""
echo "################################### CCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20402
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the CCS Service to test single thread with dxmodify or ldapmodify"
echo "dxmodify -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the CCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo ""
echo "################################### JCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20410
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the JCS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the JCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo ""
echo "################################### IMPS UNIT & PERF TEST ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
echo "Execute this command to the IMPS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "
echo "Execute this command to the IMPS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $IMPS_HOST:$IMPS_PORT -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "
Address the new bottleneck of MS Exchange / O365 Provisioning.
After parallel provisioning has been introduced with the new im_ccs.exe service, you may notice that the number of transactions is still being throttled during performance testing.
The out-of-the-box MS Exchange Global Throttling Policy has the parameter PowerShellMaxConcurrency set to a default of 18 connections. Any provisioning that uses MS PowerShell for MS Exchange and/or MS O365 will be impacted by this default parameter.
To address this bottleneck, we can create a new Throttling Policy (e.g., via the Exchange cmdlets New-ThrottlingPolicy and Set-ThrottlingPolicyAssociation) and assign it only to the service ID that manages identities, to avoid a global change.
After this change has been made, restart the IM JCS/CCS Services and retest with your performance tools. Review the CCS ADS log for the number of creations in 60 seconds, and you will be pleasantly surprised at the rate. The logs are the strong confirmation we are looking for.
Performance test (947 ADS accounts w/ Exchange mailboxes in 60 seconds, 08:59:54 to 09:00:53) => rate of 15 ids/second (or 54 K ids/hr) with the updated PowerShellMaxConcurrency = 100 throttling policy.
The last bottleneck appears to be CPU availability for the MS Exchange supporting service w3wp.exe, the MS IIS service, which appears to manage the MS PowerShell connections per its startup string.
The Symantec (CA/Broadcom) Directory solution provides a mechanism for routing LDAPv3 traffic to other solutions. This routing mechanism allows Symantec Directory to act as a virtual directory service for other directories, e.g., MS Active Directory, SunOne, Novell eDirectory, etc.
The Symantec Identity Suite solution uses the LDAP protocol for its mid-tier and connector-tier components. The Provisioning Server is exposed on TCP 20389/20390, the JCS (Java Connector Server) is exposed on TCP 20410/20411, and the CCS (C++ Connector Server) is exposed on TCP 20402/20403.
We wished to isolate provisioning data challenges within the Symantec Identity Management solution that was not fully viewable using the existing debugging logs & features of the provisioning tier & connector tiers. Using Symantec Directory, we can leverage the routing mechanism to build a MITM (man-in-the-middle) methodology to track all LDAP traffic through the Symantec Identity Manager connector tier.
We focused on the final leg of provisioning and created a process to track the JCS -> CCS LDAP traffic. We wanted to understand what and how the data was being sent from the JCS to the CCS to isolate issues to the CCS service and MS Active Directory. Using the trace level of Symantec Directory, we can capture all LDAP traffic, including binds/queries/add/modify actions.
The below steps showcase how to use Symantec Directory as an approved MITM process for troubleshooting exercises. We found this process more valuable than deploying Wireshark on the JCS/CCS Server and decoding the encrypted traffic for LDAP.
Background:
Symantec Directory documentation on routing. Please note the concept / feature of “set transparent-routing = true;” to avoid schema challenges when routing to other directory/ldap solutions.
The Symantec Identity Management connector tier may be deployed on MS Windows or Linux OS. If the CCS service is being used, then MS Windows OS is required for this MS Visual C++ component/service. As we are focused on the CCS service, we will introduce the Symantec Directory solution on the same MS Windows OS.
NOTE: We will keep the MITM process contained on a single host, and will not redirect the network traffic beyond the host.
Step 1: Deploy the latest Symantec Directory solution on MS Windows OS. This deployment is a blank slate for the next steps to follow.
Step 2: Copy the folders of schema, limits, and ssld from an existing Symantec Directory deployment of the Symantec Identity Manager solution. Using the existing schema files, references, and certificates will allow us to avoid any challenges during startup of the Router DSA due to the pre-defined provisioning/connector tier configurations. Please note when copying from a Linux OS version of Symantec Directory, we will need to update the path from Linux format to MS Windows format in the SSLD impd.dxc file for “cert-dir” and “ca-file” parameters.
Step 3: Create a new Router DSA DXI configuration file. This is the primary configuration file for a Symantec Directory DSA; it references the schema, knowledge, limits, and certificates for the DSA. Note the "transparent-routing" parameter to avoid schema challenges with other solutions, and the trace level used to trace the LDAP traffic in the Symantec Directory Router DSA trace log.
# DXserver/config/servers/admin_router_ccs_30402.dxi
# logging and tracing
close summary-log;
close trace-log;
source "../logging/default.dxc";
# schema
clear schema;
source "../schema/impd.dxg";
# access controls
clear access;
# source "../access/";
# ssld
source "../ssld/impd.dxc";
# knowledge
clear dsas;
source "../knowledge/admin_router_ccs_group.dxg";
# operational settings
source "../settings/default.dxc";
# service limits
source "../limits/impd.dxc";
# database - none - transparent router
set transparent-routing=TRUE;
# tunnel through eAdmin server error code and messages
set route-non-compliant-ldap-error-codes = true;
set trace=ldap,time,stats;
#set trace=dsa,time;
Step 4: Create the three (3) knowledge files. The "group" knowledge file redirects to the other two (2) knowledge files: one for the router DSA and one for the redirect DSA to the CCS service.
# DXserver/config/knowledge/admin_router_ccs_group.dxg
# The admin_router_ccs_30402.dxc PORT 30402
# will be used for the IAMCS (JCS) CCS port override configuration file
# server_ccs.properties via proxyConnectionConfig.proxyServerPort=30402
source "admin_router_ccs_30402.dxc";
source "admin_ccs_server_01.dxc";
# DXserver/config/knowledge/admin_ccs_server_01.dxc
# This file is sourced by admin_router_ccs_group.dxg.
set dsa admin_ccs_server_01 =
{
prefix = <dc etasa>
dsa-name = <dc etasa><cn admin_ccs_server_01>
dsa-password = "secret"
address = ipv4 localhost port 20402
auth-levels = clear-password
dsp-idle-time = 100000
dsa-flags = load-share
trust-flags = allow-check-password, no-server-credentials, trust-conveyed-originator
link-flags = dsp-ldap
#link-flags = dsp-ldap, ssl-encryption
# Note: ssl will require update to /etc/hosts with: <IP_Address> eta_server
};
Step 5: Update the JCS configuration file that contains the TCP port that we will be redirecting to. In this example, we will declare TCP 30402 to be the new port.
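A sketch of that override, using the property named in the knowledge-file comments above (the file location varies by JCS install, so treat the path as an assumption):
# server_ccs.properties (JCS override so the JCS connects to the Router DSA instead of the CCS directly)
proxyConnectionConfig.proxyServerPort=30402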
Overview of all files updated and their relationship to each other.
Validation
Start up the solution in the following order, and ensure that the new Symantec Directory Router DSA starts with no issues. If there are any syntax issues, isolate them with the command: dxserver -d start DSA_NAME.
Start the Router DSA first, then restart the im_jcs (JCS) service. The im_ccs (CCS) service will be auto-started by the JCS service. Wait one (1) minute, then check that TCP ports 20402 (CCS) and 30402 (Router DSA) are both in the LISTEN state. If we do not see both of these ports, stop and restart these services.
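As a quick alternative check on the MS Windows host, a netstat one-liner can confirm both listeners:
netstat -ano | findstr "LISTENING" | findstr ":20402 :30402"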
You may use MS Sysinternals Process Explorer to monitor both services and, via the TCP/IP tab, view which ports each is using.
A view of the im_ccs.exe and dxserver.exe services and which TCP ports they are listening on.
Use a 3rd party LDAP client tool, such as Jxplorer, to authenticate to both the CCS and the Router DSA ports with the embedded service ID of “cn=root,dc=etasa”. We should see exactly the SAME data.
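The same comparison can be scripted; a minimal sketch with dxsearch (the cn=root,dc=etasa service ID and Password01 are the lab values used in the script above):
# Query the CCS service directly (TCP 20402), then through the Router DSA MITM (TCP 30402);
# both should return identical entries when transparent routing is working.
dxsearch -LLL -H ldap://localhost:20402 -x -D "cn=root,dc=etasa" -w Password01 -b "dc=etasa" -s base
dxsearch -LLL -H ldap://localhost:30402 -x -D "cn=root,dc=etasa" -w Password01 -b "dc=etasa" -s base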
Use the IME or IMPS to perform a query to MS Active Directory (or any other endpoint that uses the CCS connector tier). We should now see the “cache” on the CCS service being populated with the endpoint information and the base DN structure. We can now track all LDAP traffic through the Router DSA MITM process.
View of trace logs
We can monitor when the JCS first binds to the CCS service.
We can monitor when the IMPS via the JCS queries if the CCS is aware of the ADS endpoint.
Finally, we can view when the IMPS service decrypts its stored information on the Active Directory endpoint and pushes this information to the CCS cache, to allow communication to MS Active Directory. Using Notepad++, we can tail the trace log.
Please note, this is a secure LDAP/S tunnel from the IMPS -> JCS -> CCS -> MS ADS.
We can now view how this data is pushed via this secure tunnel with the MITM process.
We can now monitor all traffic and assist with troubleshooting any CCS/MS-ADS challenges.
This same MITM methodology/process may also be used for the IMPS (TCP 20389/20390) and JCS (TCP 20410/20411) services. We have used this process to capture the IME (JIAM) LDAP traffic to the IMPS Service, to isolate multiple queries for Child Provisioning Roles; the product team has used these findings to enhance the solution and lower IME startup durations in the latest releases.
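As a sketch, the same knowledge-file pattern can be repointed at the IMPS service; the DSA name below is illustrative, and the prefix switches to the IMPS suffix dc=eta:
# DXserver/config/knowledge/admin_imps_server_01.dxc (illustrative)
set dsa admin_imps_server_01 =
{
prefix = <dc eta>
dsa-name = <dc eta><cn admin_imps_server_01>
dsa-password = "secret"
address = ipv4 localhost port 20389
auth-levels = clear-password
dsp-idle-time = 100000
trust-flags = allow-check-password, no-server-credentials, trust-conveyed-originator
link-flags = dsp-ldap
};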
Binds/queries/adds/modifications all work with this approach, but we do see an issue with an OID for the IMPS ADS endpoint “explore process” on the ADS OU object. We are reviewing how to address this last challenge, which reports “critical extension is unavailable” for an LDAP control property of the OU object. The OIDs captured appear to be related to SunOne/iPlanet.
The Symantec (CA) Identity Suite includes the Symantec (CA) Directory. This component is installed under the ‘dsa’ service ID. On the virtual appliance, this ‘dsa’ service ID does not have a password defined, and therefore no login is allowed.
As an enhancement, we would like to add an SSH private key to allow authentication to the ‘dsa’ service ID from other virtual appliances and from desktops with various tools, e.g. Putty, MobaXterm, WinSCP, etc. This enhancement will allow for a streamlined process to address out-of-sync Directory DATA DSAs with scp/rsync copies, without intermediate file shares or the use of other service IDs.
Challenge:
The virtual appliance of Symantec (CA) Identity Suite r14.3 is built on CentOS 6.4. The OpenSSH services on this OS apparently do not use a private key format that can be used by desktop tools or the PuTTYgen (keygen conversion) tool. However, the private key may be used between vApp servers if using the FQDN (fully qualified domain name). We noted during testing that localhost is not allowed, because localhost is not defined in the SSHD “AllowUsers” property.
On newer virtual appliances vApp r14.4 with CentOS 8 Stream, this challenge does not exist, and we can use the OpenSSH private key, id_rsa, with the desktop tools as-is.
To assist with this challenge and streamline this process, we have the following three (3) options:
Option 1: On newer OS versions, use the OpenSSH process
Generate the OpenSSH private/public key pair; the final command below will help validate that this private key may be used for server-to-server communication. After creating the private key, ~/.ssh/id_rsa, cat this file out and save it for use with the desktop tools.
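A minimal sketch of that flow (the peer hostname vapp002.lab is an assumption for illustration; run as the ‘dsa’ service ID):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""    # generate the private/public key pair
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys ; chmod 600 ~/.ssh/authorized_keys
ssh-copy-id dsa@vapp002.lab    # appends to (does not overwrite) the peer's authorized_keys
ssh dsa@vapp002.lab 'hostname ; dxserver status'    # final validation of server-to-server use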
Option 2: On any OS, use the PuTTYgen tool
On any OS (new or old), just use the PuTTYgen tool to generate the private key, and update the key comment/passphrase. After the private key is created, copy the TEXT labeled “Public Key for pasting into OpenSSH authorized_keys file” and do exactly that; you may then use the associated private key, id_rsa.ppk, with the desktop tools for the ‘dsa’ service ID.
Option 3: Combination of processes/tools
Important: .ssh/authorized_keys is updated and not overwritten.
One of the challenges that IAM/IAG teams may have every few months is delivery or access for internal/external auditors to validate access within the IAM/IAG system and their managed endpoints.
Usually, auditors may directly access the hundreds of systems/endpoints/applications and randomly select a few, or export the entire directory structure to review access. This effort takes time and possibly hundreds of entitlements to grant temporary/expiring access to view. Auditors also prefer Excel or CSV files to review, rather than fixed documents (PDF), to allow them to filter and isolate what interests them.
One process that may have value for your team is a tool with export functionality to CSV/XLS and the ability to query the hundreds-to-thousands of systems from a single entry point.
A tool that we have found valuable over the years is SoftTerra LDAP Browser.
The multiple benefits from this tool for IAM/IAG are:
It is a read-only tool, so no mistakes can be made by granting too much access.
It has the ability to save queries that are popular and can be copied from other tools.
It has the ability to export the queries to CSV/XLS formats (plus others).
It can be used to pull reports from an IAM/IAG solution via its directory ports.
It can be used to pull reports from the managed applications (on-prem or SaaS) via the IAM provisioning directory ports.
The tool is free from SoftTerra; it is a limited version of their LDAP Administrator tool.
Example of the SoftTerra LDAP Browser tool used to query Active Directory, LDAP user stores, and Provisioning User Store & managed endpoints/applications.
A view to export Service Now (SNOW) accounts via the CA/Symantec Identity Manager Provisioning Server/Service (TCP 20390) via the LDAP/S protocol.
Why? The provisioning server may be viewed as a virtual directory/pass-through directory to the managed endpoints via its connector tier.
The below image shows SoftTerra LDAP Browser used to connect to the Provisioning Server (TCP 20390), then navigate to a Service Now (SNOW) managed endpoint to query all accounts and their respective profiles & entitlements. This same report/extract process may be done for mainframe/AS400 and client-server applications, e.g. Active Directory, Unix, databases, etc.
Enhance this process with defense-in-depth
We will not use the primary default administration account of the provisioning tier, “etaadmin”, since this account has full access to change data.
Within the IAM/IAG solution, create an auditor account.
In the example below, we create a new Global User with the name “auditor”, a description, a password, and a local “read-only admin profile” with an expiration date. This will allow the auditors to use the account as they wish (or you may grant this “read-only admin profile” directly to their existing Global User ID). The account may still follow the same password reset expiration processes. If the account is marked as “restricted” in the CA/Symantec IM solution, then this account is limited in how it may be changed, to avoid any unexpected sync challenges to managed endpoints (if it was correlated to other accounts).
After the new Global User is created (or existing ID is added to the Admin Profile “ReadAdministrator”), update SoftTerra Credentials for the Provisioning Service. Below the new DN with “auditor” is shown in the credentials for login ID, e.g. “eTGlobalUserName=auditor,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta”
Now, the auditors may run as many reports as they would like, and export to spreadsheets or PDF files using a read-only account with a read-only tool.
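The same read-only ID works for command-line extracts as well; a minimal sketch with dxsearch (the lab host IP and Password01 are the sample values used earlier in this post, and the base DN below simply walks the Global Users container; narrow the base DN to a managed endpoint container for an endpoint report):
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.135:20390 -D 'eTGlobalUserName=auditor,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -w Password01 -b 'eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -s sub '(objectClass=*)' dn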
Honorable mentions for other query tools.
Jxplorer is a useful & free Java-based tool for reports, but it is a full edit tool & only exports to LDIF format. http://jxplorer.org/
Apache Directory Studio is another very useful & free Java-based tool for reports. This is a full edit tool, and it has the ability to export to many different formats. Since this tool does NOT require an MS Windows installer, it is typically our 2nd choice when the desktop prevents installations. Extract and use the current Java on the MS Windows OS, or download AdoptOpenJDK and extract it for use with Apache Directory Studio. https://directory.apache.org/studio/ & https://adoptopenjdk.net/
SoftTerra LDAP Administrator is a paid, full edit tool with the same look-and-feel as the SoftTerra LDAP Browser tool. It is typically used by administrators of various LDAP solutions. We recommend this tool for larger sites or if you would like a fast, responsive tool on the MS Windows OS. https://www.ldapadministrator.com/
If you have other recommendations, please leave a response.
Bonus Feature – SoftTerra AD Authentication
Both SoftTerra tools allow binding with your existing authentication (on your desktop/laptop) into Active Directory, so there is no need to create an additional user ID for the auditors or yourself.
Perhaps the O365 or Outlook contacts process is not robust or is too slow, or perhaps you wish you had a more detailed view of your internal Active Directory, e.g. a manager’s direct reports. You can use this feature to view the non-privacy attributes of all accounts in your domain with a read-only tool.
Step 01: Open a command-line prompt on your desktop/workstation after you have authenticated to your Active Directory domain & type set | findstr LOGONSERVER
Step 02: Install SoftTerra LDAP Browser Tool & Create a new profile
Step 03: Type the name of the Active Directory LOGONSERVER (aka Domain Controller) into the following fields & ensure “Use Secure Connection (SSL)” is selected (to avoid query issues).
Step 04: Click Next until you see “User Authentication Information” then select the radio button for “Currently logged on user (Active Directory)”, then click Finish button.
Step 05: After the profile is built, now click on the profile and watch it expand into a tree display of Active Directory. Select the branch that you believe has the list of users you would like to view, then select an individual user account, to see the values populated.
Step 06: If you wish to export this data to a spreadsheet (CSV/XLS), right click on the left object and select export option.
Step 07: You will be presented with a series of export formats and the file name it will write to.
Step 08: Advanced search and export process. Select the branch that holds all the users you wish to view and export. Note: If the branch has 10,000 objects, this process may take minutes to complete, depending on the query.
Step 09: The following search window will appear to help you create, save, and export your queries. Note that as you start to type in the field name, a list of the fields will appear.
Step 10: Ensure the FILTER is properly formed (use Google to assist) and that the attributes you wish to view or export are defined, then click Search. If you are satisfied with your search, use “Save Results” to export to a spreadsheet (CSV/XLS) or another format.
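For reference, a filter of the style we typically use for AD account reviews (illustrative; the bitwise matching rule excludes disabled accounts):
(&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))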
Restore processes may be done with snapshots-in-time for both databases and directories. We wished to provide clarity of the restoration steps after a snapshot-in-time is utilized for a directory. The methodology outlined below has the following goals: a) allow sites to prepare before they need the restoration steps, b) provide a training module to exercise samples included in a vendor solution.
In this scenario, we focused on the CA/Broadcom/Symantec Directory solution. The CA Directory provides several tools to automate online backup snapshots, but these processes stop at copies of the binary data files.
Additionally, we desired to walk through the provided DAR (Disaster and Recovery) scenarios, determine what needed to be updated to reflect newer features, and validate that we did accomplish a full restoration.
Finally, we wished to assist with the decision-tree model, where we triage and determine whether a full restore is required or whether we may select partial restoration via extracts and imports of selected data.
Cluster Out-of-Sync Scenario
Awareness
The first indicator that a userstore (CA Directory DATA DSA) is out-of-sync will be the CA Directory logs themselves, e.g. alarm or trace logs.
Another indication will be inconsistent query results for a user object that returns different results when using a front-end router to the DATA DSAs.
After awareness of the issue, the team will exercise a triage process to determine the extent of the out-of-sync data. For a quick check, one may execute LDAP queries directly to the TCP port of each DATA DSA on each host and examine the results directly, or even just the total number of entries, e.g. dxTotalEntryCount.
The returned count value will help determine if the number of entries for each DATA DSA on the peer MW hosts is out-of-sync for ADD or DEL operations. The challenge/gap with this method is that it will not show any delta due to modify operations on the user objects themselves, e.g. an address field change.
Example LDAP queries (dxsearch/ldapsearch) to the CA Directory DATA DSAs for the CA Identity Management solution (four (4) DATA DSAs and one (1) ROUTER DSA):
su - dsa OR [ sudo -iu dsa ]
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
# NOTIFY BRANCH (TCP 20404)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# Example output (base entry): dn: dc=notify,dc=etadb
# INC BRANCH (TCP 20398)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# CO BRANCH (TCP 20396)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# MAIN BRANCH (TCP 20394)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount
# ALL BRANCHES - Router Port (TCP 20391)
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount
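To compare all peer MW hosts in one pass, the router-port query above can be wrapped in a small loop (the hostnames are illustrative):
# Compare dxTotalEntryCount across peer MW hosts (run as the dsa service ID with .impd.pwd in place)
for H in vapp001.lab vapp002.lab ; do
echo "== $H =="
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://$H:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount
done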
A better process to identify the delta(s) is to automate the daily backup process to build out LDIF files for each peer MW DATA DSA, and then perform a delta process between the LDIF files. We will walk through this more involved step later in this blog entry.
Recovery Processes
The below link has examples from CA/Broadcom/Symantec with recovery notes for CA Directory DATA DSAs that are out-of-sync due to extended downtime or an outage window.
The below image pulled from the document (page 9.) shows CA Directory r12.x using the latest recovery processes of “multiwrite-DISP” (MW-DISP) mode.
This MW-DISP recovery process is the default for the CA Identity Management DATA DSAs via the install wizard tools, when they create the IMPD DATA DSAs.
The modified version we have started for CA Directory r14.x adds some clarity to the <dsaname>.dx files and notes which steps may be adjusted to support the split data structure for the four (4) IMPD DATA DSAs.
The same time flow diagram was used. Extra notes were added for clarity, and if possible, examples of commands that will be used to assist with direct automation of each step (or maybe pasted in an SSH session window, as the dsa service ID).
Step 1, implicit in the identification/triage process, is to determine what userstore data is out-of-sync and how large a delta we have. If the DSA service has been shut down (either deliberately or via a startup issue) for more than a few days, the CA Directory process will check the date stamp in the <dsaname>.dp file and the transaction in the <dsaname>.tx file; if the date gap is too large, CA Directory will refuse to start the DATA DSA and issue a warning message.
Step 2, we will leverage the dxdisp <dsaname> command to generate a new time-stamp file <dsaname>.dx, that will be used to prevent unnecessary sync operations with any data older than the date stamp in this file.
This command should be issued for every DATA DSA on the same host. This is especially true for split DATA DSAs, e.g. IMPD (CA Identity Manager’s Provisioning Directories). In our example below, to assist with this step, we use a combination of commands with a while-loop to issue the dxdisp command.
This command can be executed regardless of whether the DSA is running or shut down. If an existing <dsaname>.dx file exists, any additional execution of dxdisp will add updated time-stamps to this file.
Note: The <dsaname>.dx file will be removed upon restart of the DATA DSA.
STEP 2: ISSUE DXDISP COMMAND [ Create time-stamp file for re-sync use ] ON ALL IMPD SERVERS.
su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxdisp "$LINE" ;done ; echo ; find $DXHOME -name "*.dx" -exec ls -larth {} \;
Step 3 will then ask for an updated online backup to be executed.
In earlier releases of CA Directory, this required a telnet/ssh connection to the dxconsole of each DATA DSA, or using the DSA configuration files to contain a dump dxgrid-db; command that would be executed with the dxserver init all command.
In newer releases of CA Directory, we can leverage the dxserver onlinebackup <dsaname> process.
Dumping all DATA DSAs at the same time can be a challenge with manual procedures.
Fortunately, we can automate this with a single bash shell process; and as an enhancement, we can also generate the LDIF extracts of each DATA DSA for later delta compare operations.
Note: The DATA DSA must be running (started) for the onlinebackup process to function correctly. If unsure, issue a dxserver status or dxserver start all prior.
Retain the LDIF files from the “BAD” DATA DSA Servers for analysis.
STEP 3a-3c: ON ALL IMPD DATA DSA SERVERS - ISSUE ONLINE BACKUP PROCESS
su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -w -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep `date '+%Y-%m-%d'`
Step 4a walks through the possible copy operations from the “GOOD” to the “BAD” DATA DSA host for the <dsaname>.zdb files. The IMPD DATA DSAs will require that three (3) of the four (4) zdb files are copied, to ensure no impact to referential integrity between the DATA DSAs.
The preferred model to copy data from one remote host to another is via the compressed rsync process over SSH, as this is a rapid process for the CA Directory db / zdb files.
Below are code blocks that demonstrate how to copy data from one DSA server to another.
# RSYNC METHOD
sudo -iu dsa
time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
# SCP METHOD
sudo -iu dsa
scp REMOTE_ID@$HOST:./data/<folder_impd_data_dsa_name>/*.zdb /tmp/dsa_data
/usr/bin/mv /tmp/dsa_data/<incorrect_dsaname>.zdb $DXHOME/data/<folder_impd_data_dsa_name>/<correct_dsaname>.db
Step 4b walks through the final steps before restarting the “BAD” DATA DSA.
The ONLY files that should be in the data folders are <dsaname>.db (binary data file) and <dsaname>.dx (ASCII time-stamp file). Ensure that the copied <prior-hostname-dsaname>.zdb file has been renamed to the correct hostname & extension for <dsaname>.db.
Remove the prior <dsaname>.dp (ASCII time-stamp file) { the DATA DSA will auto-replace this file with the *.dx file contents } and the <dsaname>.tx (binary data transaction file).
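A sketch of this cleanup on the “BAD” host, following the placeholder convention of the scp example above (confirm the file names before removing anything):
sudo -iu dsa
cd $DXHOME/data/<folder_impd_data_dsa_name>
/usr/bin/mv <prior-hostname-dsaname>.zdb <correct_dsaname>.db   # rename the copied snapshot to the live db file name
/usr/bin/rm -f <correct_dsaname>.dp <correct_dsaname>.tx        # remove the stale time-stamp and transaction files
ls -larth    # only <dsaname>.db and <dsaname>.dx should remain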
Step 5a: Start up the DATA DSA with the command:
dxserver start all
If there is any issue with a DATA or ROUTER DSA not starting, then issue the same command with the debug switch (-d):
dxserver -d start <dsaname>
Use the output from the above debug process to address any a) syntax challenges, or b) older PID/LCK files ($DXHOME/pid)
Step 5b: Finally, use dxsearch/ldapsearch to run a unit test of authentication with the primary service ID. Use other unit/use-case tests as needed to confirm that the data is now synced.
bash
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s base -b 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' '(objectClass=*)' | perl -p00e 's/\r?\n //g'
LDIF Recovery Processes
The steps above are for recovery via a 100% replacement method, where the assumption is that the “bad” DSA server does NOT have any data worth keeping or reviewing.
We wish to clarify a process/methodology for when the “peer” multi-write DSAs may be out-of-sync, but we are not sure which is truly the “good DSA” to select, or perhaps we wish to merge data from multiple DSAs before we declare one to be the “good DSA” (with regard to the completeness of data).
Using CA Directory commands, we can join them together to automate snapshots and exports to LDIF files. These LDIF files can then be compared against their peer MW DATA DSA exports, or even against themselves at different snapshot export times. As long as we have the LDIF exports, we can recover from any DAR scenario.
Example of using CA Directory dxserver and dxdumpdb commands (STEP 3) with the ldifdelta and dxmodify commands.
The output from ldifdelta may be imported to any remote peer MW DATA DSA server via dxmodify to that hostname, to force a sync for the few objects that may be out-of-sync, e.g. password hashes or other attributes.
The below images demonstrate a delta that exists between two (2) time snapshots. The CA Directory tool, ldifdelta, can identify and extract the modified entry to the user object.
The following examples will show how to re-import this delta using dxmodify command to the DATA DSA with no other modifications required to the input LDIF file.
In the testing example below, before any update to an object, let’s capture a snapshot-in-time and the LDIF files for each DATA DSA.
Let’s make an update to a user object using any tool we wish, or a command-line process like ldapmodify.
Next, let’s capture a new snapshot-in-time after the update, so we will be able to utilize the ldifdelta tool.
We can use the ldifdelta tool to create the delta LDIF input file. After we review this file and accept the changes, we can then submit this LDIF file to the remote peer MW DATA DSAs that are out-of-sync.
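A consolidated sketch of this flow for a single DSA (the DSA name impd-main and the peer hostname are illustrative; verify the ldifdelta argument order on your release by running the command with no arguments):
# Snapshot #1, make the change, snapshot #2, extract the delta, review, then push it to the out-of-sync peer
dxserver onlinebackup impd-main ; sleep 10 ; dxdumpdb -w -z -f /tmp/impd-main_t1.ldif impd-main
# ... update the user object here via ldapmodify or any tool ...
dxserver onlinebackup impd-main ; sleep 10 ; dxdumpdb -w -z -f /tmp/impd-main_t2.ldif impd-main
ldifdelta -x -S impd-main /tmp/impd-main_t1.ldif /tmp/impd-main_t2.ldif > /tmp/impd-main_delta.ldif
# After review, apply the delta LDIF to the remote peer MW DATA DSA (bind values as in the queries above)
LDAPTLS_REQCERT=never dxmodify -c -H ldaps://peer-host:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f /tmp/impd-main_delta.ldif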
We hope this has value for you and for any challenges you may have within your environment.