Authenticate to the vApp ‘dsa’ user ID via SSH private key

The Symantec (CA) Identity Suite includes the Symantec (CA) Directory. This component is installed under the ‘dsa’ service ID. On the virtual appliance, this ‘dsa’ service ID does not have a password defined, and therefore no login is allowed.

As an enhancement, we would like to add an SSH private key to allow authentication to the ‘dsa’ service ID from other virtual appliances and from desktop tools, e.g., Putty, MobaXterm, WinSCP, etc. This enhancement allows a streamlined process to address out-of-sync Directory DATA DSAs with scp/rsync copies, without intermediate file shares or the use of other service IDs.

Challenge:

The virtual appliance of Symantec (CA) Identity Suite r14.3 is built on CentOS 6.4. The OpenSSH services on this OS do not appear to use a private key format that can be consumed by desktop tools or PuttyGen (the key-conversion tool). However, the private key may be used between vApp servers when using the FQDN (fully qualified domain name). We noted during testing that localhost is not allowed, because localhost is not defined in the SSHD “AllowUsers” configuration.

On newer virtual appliances vApp r14.4 with CentOS 8 Stream, this challenge does not exist, and we can use the OpenSSH private key, id_rsa, with the desktop tools as-is.

To assist with this challenge and streamline this process, we have the following three (3) options:

Option 1: On newer OS, use OpenSSH process

After creating the private key, .ssh/id_rsa, cat this file out and copy the contents to Notepad; save it for use with the desktop tools.

Generate the OpenSSH private/public key pair. The final command will help validate that this private key may be used for server-to-server communication.

echo y | ssh-keygen -t rsa -b 4096 -N Password02 -C "$USER@$HOSTNAME" -f .ssh/id_rsa
ls -lart .ssh
cat .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
ssh -v -i .ssh/id_rsa $USER@`hostname`

Option 2: Skip the OpenSSH process, use PuttyGen

On any OS (new/old), just use the PuttyGen tool to generate the private key. Update the key comment/passphrase. After the private key is created, copy the text under “Public Key for pasting into OpenSSH authorized_keys file” into .ssh/authorized_keys, just as it says. You may then use the associated private key, id_rsa.ppk, with the desktop tools for the ‘dsa’ service ID.

Option 3: Combination of processes/tools

Important: .ssh/authorized_keys is updated and not overwritten.
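As a minimal sketch of Option 3: reuse the OpenSSH key pair from Option 1, append (never overwrite) the public key, and convert the private key to .ppk with the puttygen CLI. The putty-tools package providing the puttygen CLI on a Linux host is an assumption here.

# Hedged sketch of Option 3; assumes putty-tools is installed for the puttygen CLI
cat .ssh/id_rsa.pub >> .ssh/authorized_keys   # '>>' appends; a single '>' would overwrite
chmod 600 .ssh/authorized_keys
puttygen .ssh/id_rsa -o .ssh/id_rsa.ppk       # produces the .ppk for Putty/MobaXterm/WinSCP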

JCS versus CCS Connector Tier Challenges

A very common challenge we see is the modification of the CA/Symantec Connector Server service(s) startup order for the embedded C++ (CCS) connector. On the MS Windows OS, this CCS connector service is set to “Manual” startup by default.

Since the solution documentation is not clear on why this is configured as manual, we will see site administrators either change this service from “Manual” to “Automatic” or start the CCS service manually themselves after a restart.

However, either of these processes will impact the ability of the JCS Service to manage the CCS Service’s cache upon startup. The JCS will NOT be able to manage the CCS service for a number of minutes until it can resolve this challenge. Unfortunately, when this occurs, the traffic to any CCS-managed endpoints will be placed in a long timeout within the JCS Service. The IMPS (Provisioning Service) will think that it successfully handed off the task to the JCS/CCS tier, but the task will stay in a holding pattern until the memory of the JCS is overwhelmed, the CCS Service restarts/crashes, or the task times out.

TL;DR – Please do not start the CCS Service manually. Only stop/start the JCS Service, wait a full minute, and you should see the CCS Service start up. If the CCS Service does NOT start, investigate why.
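As a quick sketch of that restart sequence (service names im_jcs/im_ccs, as referenced later in this article), from an elevated PowerShell prompt:

# Hedged sketch: restart only the JCS; the CCS should follow on its own
Restart-Service im_jcs
Start-Sleep -Seconds 60
Get-Service im_ccs    # expect Status=Running; if not, investigate before starting it by hand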

JCS Service’s management of the CCS Service:

To understand how the JCS Service manages the CCS Service (via localhost TCP 20402), we can review two (2) files and use MS Sysinternals Process Explorer to view the JCS Service starting the CCS Service via the command “net start im_ccs”. The JCS Service will then have access to update the CCS Service’s cache with information for a managed endpoint, e.g., Active Directory.
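A lightweight way to observe this relationship without Process Explorer is to query both services (same service names as above) and compare their start modes, states, and process IDs:

# Hedged sketch: show start mode and PID of the JCS and CCS services
Get-CimInstance Win32_Service -Filter "Name='im_jcs' OR Name='im_ccs'" | Select-Object Name, StartMode, State, ProcessId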

The two (2) JCS Service configuration files for CCS Service are:

  • C:\Program Files (x86)\CA\Identity Manager\Connector Server\jcs\conf\server_osgi_ccs.xml [Contains the startup properties for how the JCS manages timeouts to the CCS Service & connection pools]
  • C:\Program Files (x86)\CA\Identity Manager\Connector Server\jcs\conf\override\server_ccs.properties [Contains the bind credentials and the service port used to communicate on localhost:20402. The password hash will be in PBES or AES format, depending on whether FIPS is enabled.]

Finally, here is a view of the startup of the CCS Service via the JCS Service, using MS Sysinternals Process Explorer (https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer). We can see that a child process is started from the JCS Service that calls the MS Windows “net.exe” command and executes “net start im_ccs”.

Keeping the JCS Service and CCS Service startup processes as-is will help avoid confusion in the provisioning tier of the CA/Symantec solution. Please only stop/start the JCS Service. If the CCS Service does not stop after 2 minutes, kill it. But never start the CCS by itself.
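A hedged sketch of that stop/kill sequence in PowerShell; the PID lookup avoids guessing the CCS binary name:

# Try a graceful stop first; if the CCS hangs past ~2 minutes, kill its process by PID
Stop-Service im_ccs -ErrorAction SilentlyContinue
$ccsPid = (Get-CimInstance Win32_Service -Filter "Name='im_ccs'").ProcessId
if ($ccsPid) { Stop-Process -Id $ccsPid -Force }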

A view of the data path from the IMPS (IM Provisioning Server) to Active Directory (managed endpoint) via the connector tier.

Performance Improvements

While we may not adjust the startup type from manual to automatic, we can enhance the default configurations for performance and timeout improvements. The JCS Service starts up with a default of 1 GB of RAM. The JCS Service is 64-bit (it uses 64-bit Java), so its memory can be increased accordingly. After testing with large data sets, we recommend increasing the JCS JVM max memory from 1 GB to 4 GB. With MS Sysinternals Process Explorer, we can confirm that the JCS will use over 1 GB of RAM after startup.
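A sketch of that memory increase, assuming the JCS is registered via Apache Procrun as im_jcs (consistent with the registry path mentioned below) and that the installer created the standard Procrun Parameters\Java values; verify the exact hive (with or without WOW6432Node) in regedit before applying:

# Hedged sketch: raise the JCS max heap (JvmMx, in MB) via the Procrun registry values
$key = 'HKLM:\SOFTWARE\WOW6432Node\Apache Software Foundation\Procrun 2.0\im_jcs\Parameters\Java'
Set-ItemProperty -Path $key -Name JvmMx -Value 4096   # 4096 MB = 4 GB
Restart-Service im_jcs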

Other improvements include updating the Java that supports the JCS Service. CA/Symantec now recommends using AdoptOpenJDK. The documentation explains how this may be updated in place; alternatively, we prefer to reinstall and allow the installer to update the path statements for AdoptOpenJDK.

The image below shows, in the MS Windows Registry for the JCS Service (Procrun 2.0/im_jcs), the key-value pairs that are updated for AdoptOpenJDK (https://adoptopenjdk.net/). If managing Active Directory, please review your OS environment variables to control the behavior of the CCS Service toward Active Directory.

After you restart the JCS Service, open the JCS Administration Console via http://localhost:20080/main or https://localhost:20443/main, right-click on the “Local Connector Server” icon, and it should display that AdoptOpenJDK is now in use. Only major release 8 is supported; avoid trying later releases (11, 15) until support is confirmed.

Stability Improvements

The default JCS Service configuration file has knowledge of the connection pool and timeouts, but appears to be missing a defined “maxWait” token. If we are willing to wait 5-10 minutes for the JCS Service to reset its knowledge of the CCS Service, we can leave the default. However, for a large environment, we have found that lowering the wait times will greatly reduce the delays in transactions when there is a stoppage. We have identified two (2) configuration parameters that will assist with the long-term stability of the solution: adding a “maxWait” of 60 seconds (60000 milliseconds) to the JCS configuration file for the CCS Service, and updating the default IM Provisioning Server domain configuration parameter “Connections/Refresh Time” to 90 seconds.
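As a hypothetical sketch of the first change (only the maxWait token and its 60000 ms value come from the text above; the parent bean and exact property syntax in server_osgi_ccs.xml must be confirmed against your own file):

<!-- Hypothetical pool entry for server_osgi_ccs.xml; 60000 ms = 60 seconds -->
<property name="maxWait" value="60000"/>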

Troubleshooting and Logging

To assist with RCA efforts, we have the following recommendations: enable verbose logging for both the JCS Service and the managed endpoint to isolate issues. You may also need to increase logging for the API Gateway or the docker logs.

Below is an example of enabling verbose logging.

To monitor the JCS logs, several tools will assist, but we find that the latest releases of Notepad++ allow for “tailing” the active JCS logs.

Example of verbose logs for Active Directory via the CCS’s ADS and JCS logs.

Important logging note: Enable the new IM r14.3 CP2 feature to auto-rotate your CCS ADS log. Avoid stopping/starting the CCS Service yourself, as that may interrupt the JCS behavior toward the CCS Service (an error communicating to localhost:20402 will display in the JCS logs). New file(s): Connector Server\ccs\data\ADS\<Endpoint_Name>.logconfig

Ref: https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/identity-management-and-governance-connectors/1-0/connectors/microsoft-connectors/microsoft-active-directory-exchange-and-skpye-for-business(lync)/active-directory-connector-capabilities/Active-Directory-Connector-Log-Rotation.html

CCS OS Environmental Variables

Spread throughout the documentation for the CA/Symantec IAM/IAG connector tier is the use of MS Windows OS environment variables for the CCS Service. The majority are used to manage behavior toward Active Directory and/or MS Exchange. Please search the documentation for the latest updates. These may be set in the MS Windows OS via the System Environment Variables panel or via the command line with “setx”. https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/identity-management-and-governance-connectors/1-0/search.html?q=Environment%20Variable&page=1

Below are examples of setting MS Windows OS environment variables with “setx”, with a description of the value of each variable for Active Directory/MS Exchange. A quick verification check follows the list.

1. [High Value. Forces an AGENTLESS connection to Exchange 2010 and up]
   setx ADS_AGENTLESS_MODE 1 /m

2. [High Value. Default value = 2, Kerberos authentication for the Exchange PowerShell API]
   setx ADS_AGENTLESS_AUTHMETHOD 2 /m

3. [High Value. Default value = 3. Increase to 100 and ALSO have the Exchange admin create a new quota for the service account used to create mailboxes; the default Exchange PowerShell quota is 18. New-ThrottlingPolicy MaxPowershell -PowerShellMaxConcurrency 100 AND Set-Mailbox ServiceAccountID -ThrottlingPolicy MaxPowershell]
   setx ADS_AGENTLESS_MAXCONN 100 /m

4. [Monitor. Default value = 1, error level ONLY; increase to level 3 for debugging of PowerShell logging to MS Exchange]
   setx ADS_AGENTLESS_LOGLEVEL 1 /m

5. [Medium Value. The CCS service will wait 10 minutes for a single account; useful with the Exchange PowerShell mailbox quota of 18 and BLCs with hundreds of users]
   setx ADS_CONFIRM_MAILBOX 600 /m

6. [Low Value. Masks the AD failover list in the IM Prov Manager UI]
   setx ADS_DISABLE_DCSTATUS 1 /m

7. [Low Value. Masks viewing of the default AD primary group in the IM Prov Manager UI]
   setx ADS_DISABLE_PRIMARYGROUPNAME 1 /m

8. [High Value. Sends the DC hostname to the Exchange server to query first, instead of Exchange relying on its current pool]
   setx ADS_E2K_SEND_DC 1 /m

9. [High Value. Requires that the service account can view all alternative DCs. The failover DCs may be limited via a properties file]
   setx ADS_FAILOVER 1 /m

10. [Medium Value. Performance gain if Terminal Services attributes are NOT being managed, e.g., changed in Account Templates or PX rules]
    setx ADS_WTS_TIMEOUT -1 /m

11. [Set ADS_OPERATION_TIMEOUT to -1 to disable the client-side timeout functionality; otherwise 60]
    setx ADS_OPERATION_TIMEOUT 60 /m

12. [The failover retry interval is the time that the Active Directory connector waits before re-checking a stopped server. The default retry interval is 15 minutes]
    setx ADS_RETRY 15 /m

13. [Allows groups in unmanaged domains to be part of synchronization. The value is two digits, xy. x defines whether the synchronization operation searches the global catalog: 0 (default) queries the local catalog only and does not consider universal groups in unmanaged domains (y then has no effect); 1 queries the global catalog so that groups in unmanaged domains are considered. y defines which domains the synchronization operation considers: 0 considers groups in both managed and unmanaged domains; 1 considers groups in managed domains only]
    setx ADS_MANAGE_GROUPS 01 /m

14. [Monitor. Seems valuable only for debugging. Has a performance hit, but may assist CCS debugging to Active Directory]
    setx ADS_FORCELOG 1 /m

15. [Low Value. The IMPS service can page with lower limits. There is an impact if this value is greater than the AD default page-size limit]
    setx ADS_SIZELIMIT 50000 /m
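Variables set with setx /m only appear in newly started processes; to check the machine scope without opening a new console, the following PowerShell one-liner (no assumptions beyond the ADS_ prefix) may help:

# List machine-scope ADS_* variables
[Environment]::GetEnvironmentVariables('Machine').GetEnumerator() | Where-Object { $_.Key -like 'ADS_*' }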

Reinstalling the JCS Service from the Virtual Appliance

If you are using the CA/Symantec Identity Suite virtual appliance, consider re-installing the remote JCS Services after patching the solutions on the virtual appliance. This will avoid any confusion about which patches are deployed on the remote JCS servers, as any patches on the virtual appliance will be incorporated into the new installer. We prefer to use the JCS only on the MS Windows OS, as it can service both JCS-type and CCS-type managed endpoints together. We also have full access to adjust the behavior of these services on the MS Windows OS, rather than the limited access provided by the virtual appliance for the JCS service.

Hopefully some of these notes will help you avoid challenges with the connector tier and, if you do encounter them, help you isolate the issues.

Advanced Review: Review how the CCS Service receives IMPS data via the JCS tier.

The below example will load the DLL for the CCS Service (pass-through), then send the information to bind to the ADS endpoint, executing two (2) operations via dxmodify. This process emulates the IMPS behavior with the JCS and CCS. The bind information for the ADS endpoint is stored in the CA Provisioning User Store and is queried/decrypted by the IMPS to send to the JCS as needed. Only after this information is stored in the CCS service will the solution be able to explore or manage the ADS endpoint accounts.

su - dsa
export HISTIGNORE=' *'
echo -n Password01 > .imps.pwd; chmod 600 .imps.pwd
HOST=192.168.242.154;LDAPTLS_REQCERT=never dxmodify -c -H ldaps://$HOST:20411 -D "cn=root,dc=etasa" -y .imps.pwd << EOF
dn: eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: add
objectClass: eTADSNamespace
eTAgentPluginDLL: W2KNamespace.dll
eTNamespaceName: ActiveDirectory

dn: eTADSDirectoryName=dc2012.exchange2012.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: add
eTADSobjectCategory: CN=Domain-DNS,CN=Schema,CN=Configuration,DC=exchange2012,DC=lab
eTADSdomainFunctionality: 6
eTADSUseSSL: 1
eTLogWindowsEventSeverity: FE
eTAccountResumable: 1
eTADSnetBIOS: EXCHANGE2012
eTLogStdoutSeverity: FE
eTLog: 1
eTLogUnicenterSeverity: FE
eTADSlockoutDuration: -18000000000
objectclass: eTADSDirectory
eTLogETSeverity: FE
eTADSmsExchSystemObjectsObjectVersion: 1
eTADSsettings: 2
eTADSconfig: ExpirePwd=0: HomeDirInheritPermission=0
eTLogDestination: F
eTADSUserContainer: CN=BuiltIn;CN=Users
eTADSbackupDirs: 000;Default-First-Site-Name.Sites.Configuration.exchange2012.lab;dc2012.exchange2012.lab;0
eTADSuseFailover: 0
eTLogAuditSeverity: FE
eTADS-DefaultContext: exchange2012.lab
eTADSforestFunctionality: 6
eTADSAuthDN: Administrator
eTADSlyncMaxConnection: 5
eTADSAuthPWD: Password01!
eTLogFileSeverity: FIESW
eTADSprimaryServer: dc2012.exchange2012.lab
eTADScontainers: CN=Domain-DNS,CN=Schema,CN=Configuration,DC=exchange2012,DC=lab;exchange2012.lab;dc2012.exchange2012.lab
eTADSTimeBoundMembershipsEnabled: 0
eTADSKeepCamCaftFiles: 0
eTADSdomainControllerFunctionality: 6
eTADSexchange: 0
eTADSmsExchSchemaVersion: 1
eTADSCamCaftTimeout: 0000001800
eTADSPortNum: 636
eTADSDCDomain: DC=exchange2012,DC=lab
eTADSServerName: dc2012.exchange2012.lab
eTADSDirectoryName: dc2012.exchange2012.lab

EOF
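To verify the endpoint object landed, a follow-up query may be run over the same bind (a sketch reusing the host variable and password file from above):

LDAPTLS_REQCERT=never dxsearch -H ldaps://$HOST:20411 -D "cn=root,dc=etasa" -y .imps.pwd -b "eTNamespaceName=ActiveDirectory,dc=im,dc=etasa" -s one "(objectClass=eTADSDirectory)" eTADSDirectoryName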

MS Windows Firewall Rules for JCS Service

NOTE: Ensure the MS Windows OS firewall ports 20411 and 20443 are open on the IAMCS server.

Powershell Example:

Get-NetFirewallRule -Name jcs
New-NetFirewallRule -Name '#### IAMCS JCS TCP 20411 & 20443 #####' -DisplayName '##### IAMCS JCS TCP 20411 & 20443 #####' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 20411,20443

Windows Command Line Example:

netsh advfirewall firewall add rule name="##### IAMCS JCS TCP 20411 & 20443 #####" dir=in action=allow protocol=TCP localport="20411,20443"

Install MS .NET Framework 3.5 (required for the CCS Service & ECS – the Enterprise Common Services library framework)

DISM /Online /Enable-Feature /All /FeatureName:NetFx3

Re-install or Uninstall issues

If you are unable to re-install, delete the CA install/registry tracking file under the C:\Windows folder, C:\Windows\vpd.properties, then reboot before attempting a re-install of the JCS/CCS component.

ECS Services

These five (5) ECS Services are typically not actively used and may be changed to “Manual” startup for minor CPU relief. The ECS features are retained for their supporting libraries.
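If your site chooses to do this, a hedged PowerShell sketch follows; the actual ECS service names vary by release, so the name below is a placeholder to replace for each of the five (5) services:

# Placeholder service name; repeat per ECS service after confirming the names in services.msc
Set-Service -Name '<ECS-service-name>' -StartupType Manual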

Be kind to your auditors – Streamline Adhoc Reports

One of the challenges that IAM/IAG teams face every few months is providing internal/external auditors with the reports or access they need to validate access within the IAM/IAG system and its managed endpoints.

Usually, auditors may directly access the hundreds of systems/endpoints/applications and randomly select a few, or export the entire directory structure to review access. This effort takes time, and possibly hundreds of entitlements to grant temporary/expiring access to view. Auditors also prefer Excel or CSV files to review, rather than fixed documents (PDF), so they can filter and isolate what interests them.

One process that may have value for your team is to use a tool with export functionality to CSV/XLS and the ability to query the hundreds-to-thousands of systems from a single entry point.

A tool that we have found valuable over the years is SoftTerra LDAP Browser.

https://www.ldapadministrator.com/softerra-ldap-browser.htm

The multiple benefits from this tool for IAM/IAG are:

  1. It is a read-only tool, so no mistakes can be made by granting too much access.
  2. It has the ability to save queries that are popular, and queries can be copied in from other tools.
  3. It has the ability to export the queries to CSV/XLS formats (plus others).
  4. It can be used to pull reports from an IAM/IAG solution via its directory ports.
  5. It can be used to pull reports from the managed applications (on-prem or SaaS) via the IAM provisioning directory ports.
  6. The tool is free from SoftTerra; it is a limited version of their LDAP Administrator tool.

Example of the SoftTerra LDAP Browser tool used to query Active Directory, LDAP user stores, and the Provisioning User Store & managed endpoints/applications.

A view to export Service Now (SNOW) accounts via the CA/Symantec Identity Manager Provisioning Server/Service (TCP 20390) via the LDAP/S protocol.

Why? The provisioning server may be viewed as a virtual/pass-through directory to the managed endpoints via its connector tier.

The below image shows SoftTerra LDAP Browser used to connect to the Provisioning Server (TCP 20390), then navigate to a Service Now (SNOW) managed endpoint to query all accounts and their respective profiles & entitlements. This same report/extract process may be done for mainframe/AS400 and client-server applications, e.g., Active Directory, Unix, databases, etc.

Enhance this process with defense-in-depth

We will not use the primary default administration account of the provisioning tier, “etaadmin”, since this account has full access to change data.

Within the IAM/IAG solution, create an auditor account.

In the example below, we create a new Global User with the name “auditor”, a description, a password, and a local “read-only admin profile” with an expiration date. This will allow the auditors to use the account as they wish (or you may grant this “read-only admin profile” directly to their existing Global User IDs). The account may still follow the same password reset/expiration processes. If the account is marked as “restricted” in the CA/Symantec IM solution, then this account is limited in how it may be changed, to avoid any unexpected sync challenges to managed endpoints (if it was correlated to other accounts).

After the new Global User is created (or an existing ID is added to the Admin Profile “ReadAdministrator”), update the SoftTerra credentials for the Provisioning Service. Below, the new DN with “auditor” is shown in the credentials for the login ID, e.g., “eTGlobalUserName=auditor,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta”

Now, the auditors may run as many reports as they would like, and export to spreadsheets or PDF files using a read-only account with a read-only tool.

Honorable mentions for other query tools.

JXplorer is a useful & free Java-based tool for reports, but it is a full edit tool & only exports to LDIF format. http://jxplorer.org/

Apache Directory Studio is another very useful & free Java-based tool for reports. This is a full edit tool, and it has the ability to export to many different formats. Since this tool does NOT need an MS Windows installer, if the desktop prevents installation, this is typically our 2nd choice to use. Extract and use the current Java on the MS Windows OS, or download AdoptOpenJDK and extract it to use with Apache Directory Studio. https://directory.apache.org/studio/ & https://adoptopenjdk.net/

SoftTerra LDAP Administrator is a paid and full edit tool. It has the same look-n-feel of the SoftTerra LDAP Browser tool. It is typically used by administrators of various LDAP solutions. We recommend this tool for your larger sites or if you would like a fast responsive tool on MS Windows OS. https://www.ldapadministrator.com/

If you have other recommendations, please leave a response.

Bonus Feature – SoftTerra AD Authentication

Both of the SoftTerra tools allow binding using your existing authentication (on your desktop/laptop) into Active Directory. There is no need to create an additional user ID for the auditors or yourself.

Perhaps the O365 or Outlook contacts process is not robust or is too slow, or perhaps you wish you had a more detailed view of your internal Active Directory, e.g., to view a manager’s direct reports. You can use this feature to view the non-privacy attributes of all accounts in your domain with a read-only tool.

Step 01: Open a command-line prompt on your desktop/workstation after you have authenticated to your Active Directory domain, and type: set | findstr LOGONSERVER

Step 02: Install SoftTerra LDAP Browser Tool & Create a new profile

Step 03: Type the name of the Active Directory LOGONSERVER (aka Domain Controller) into the following fields & ensure “Use Secure Connection (SSL)” is selected (to avoid query issues).

Step 04: Click Next until you see “User Authentication Information” then select the radio button for “Currently logged on user (Active Directory)”, then click Finish button.

Step 05: After the profile is built, click on the profile and watch it expand into a tree display of Active Directory. Select the branch that you believe has the list of users you would like to view, then select an individual user account to see the values populated.

Step 06: If you wish to export this data to a spreadsheet (CSV/XLS), right click on the left object and select export option.

Step 07: You will have a series of options for the export format & the file name it will write to.

Step 08: Advanced search and export process. Select the branch that holds all the users you wish to view and export. Note: If the branch has 10,000 objects, this process may take minutes to complete, depending on the query.

Step 09: The following search window will appear to help you create, save, and export your queries. Note that if you start to type in the field name, a list of the fields will appear.

Step 10: Ensure the FILTER is properly formed (use Google to assist) and that the attributes you wish to view or export are defined, then click Search. If you are satisfied with your search, use “Save Results” to export to a spreadsheet (CSV/XLS) or another format.

Multi-Write HUB Model with democorp

A useful feature of CA Directory for WAN latency challenges is the HUB model. This model allows data sync to occur to distant peer multi-write DATA DSAs, but does NOT impact the external application that is updating its own local Router/DATA DSAs.

To assist with understanding this HUB model, we have leveraged the CA Directory samples of democorp & router to build out an architecture with six (6) DATA DSAs and two (2) router DSAs, to emulate two (2) data centers across the world. These samples are included with every CA Directory deployment under $DXHOME/samples/democorp & $DXHOME/samples/router.

This lab emulates two (2) of the three (3) data centers that are displayed within the CA documentation.

Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/directory/14-1/ca-directory-concepts/directory-replication/multiwrite-mw-groups-hubs/topology-sample-and-disaster-recovery.html

This lab may be replicated with near-real-world WAN latency using the VMware Workstation network-latency feature.

https://www.vmware.com/products/workstation-pro.html

Below is a bash shell script used to create the lab environment on a single host OS, with the CA Directory samples of “democorp” and “router”. These samples were copied and updated via sed commands to ensure the DSAs are unique in TCP ports and naming convention. The examples below use the nomenclature democorpX for the DATA DSAs (A-F) and routerYY for the router DSAs (AA or BB).

These DATA DSAs use the same suffix and are all referenced in the group knowledge file. The HUB model configuration changes the behavior of MW-DISP replication between data centers: MW-DISP replication will still be used for all local sync between DATA DSAs in the same data center, and will ONLY be used between data centers for the DATA DSAs that are designated as “HUB”s (aka multi-write-group-hub).

To test the value of the HUB model with WAN latency, we suggest that the same lab be executed on two (2) hosts, where one host has VMware network latency set to 150/150 milliseconds. Update the IP addresses within $DXHOME/config/knowledge/*.dxc on both host OSs to reflect the correct hostnames for each data center's DSAs.

The diagram below outlines the deltas between the various democorp DATA DSAs using the HUB model.

The changes below show the deltas within the *.dxc/*.dxg files in the knowledge folder for the democorp MW HUB model.

The image below captures the only deltas in the *.dxi (startup) files for the democorp MW HUB model, located within the servers folder. Note: if the CA Directory management tool is deployed and used for democorp, all configurations will be in a single *.dxi file.

#!/bin/bash
##############################################
#
# Name: democorp_mw_hub_lab.sh
#
# Multi-Write HUB lab using CA Directory and the samples of
# democorp and router under DXHOME/samples
# A. Baugher, 04/2020 - ANA Technology Partner
#
# Assumptions:
#   CA Directory is deployed & dxprofile is enabled for dsa user
#   Execute script as dsa user
#
# Step 0.  Clean-Up prior deployment
#
# Step 1.  Auto deploy both democorp and router samples with: setup.sh -q
#
# Step 2.  Make common changes in democorp prior to copying
#
# Step 3.  Create six (6) copies of democorp and two (2) copies of router
#
# Step 4.  Update the six (6) copies of democorp for:
#     - name
#     - ports
#     - multi-write-group  (HUB group)
#     - DSA flags for MW & HUB-DSA
#     - Group knowledge file reference
#
#        Update the two (2) copies of router for:
#    - name
#    - ports
#    - Group knowledge file reference
#    - set write-precedence  (for HUB-DSA)
#
# Step 5. Start all DSAs
#
# Step 6. Test with dxsearch query
#
# Step 7. Execute the dxsoak command with the service account & time command
#
# Step 8. Update democorpA to force a single delta between peer members of AA and BB
#
# Step 9.  Create LDAP Export
#
# Step 10.  Create LDAP Delta & Compare the various democorp DSA to validate sync operations
#
#
##############################################
#set -xv
echo ..
echo "#############################################################"
echo "Step 0.  Clean up prior deployment of democorp and router"
echo "#############################################################"
dxserver stop all
sleep 5
kill -9 `ps -ef | grep dsa | grep democorp | grep -v grep | grep -v "democorp_mw_hub_lab" | awk '{print $2}'` >   /dev/null 2>&1
kill -9 `ps -ef | grep dsa | grep router   | grep -v grep | awk '{print $2}'` >   /dev/null 2>&1
sleep 5
rm -rf $DXHOME/data/democorp*.*
rm -rf $DXHOME/config/knowledge/democorp*.*
rm -rf $DXHOME/config/knowledge/router*.*
rm -rf $DXHOME/config/servers/democorp*.*
rm -rf $DXHOME/config/servers/router*.*
rm -rf $DXHOME/logs/democorp*.*
rm -rf $DXHOME/logs/router*.*
rm -rf $DXHOME/backup/delta*.*  > /dev/null 2>&1
rm -rf $DXHOME/backup/*.ldif > /dev/null 2>&1


echo ..
echo "#############################################################"
echo "Step 1a. Deploy clean version of democorp and router"
echo "#############################################################"
cd  $DXHOME/samples/democorp
$DXHOME/samples/democorp/setup.sh -q  > /dev/null 2>&1
cd $DXHOME/samples/router
$DXHOME/samples/router/setup.sh -q    > /dev/null 2>&1

cd
echo ..
echo "#############################################################"
echo "Step 1b. Create service ID in democorp for later use"
echo "#############################################################"
cat << EOF > $DXHOME/diradmin.ldif
version: 1
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: top
cn: diradmin
sn: diradmin
givenName: diradmin
userPassword: Password01
EOF

dxmodify -a -c -h `hostname` -p 19389 -f $DXHOME/diradmin.ldif

echo ..
echo "#############################################################"
echo "Step 1c.  Stop all running democorp & router DSAs"
echo "#############################################################"
dxserver stop all
sleep 10

echo ..
echo "#############################################################"
echo "Step 2a. Make common changes in pre-existing files before other modification"
echo "Update dsa-flags in democorp.dxc to allow Multi-Write with a HUB"
echo "#############################################################"
sed -i 's|ssl-auth|ssl-auth\n    multi-write-group = hub_group_AA\n     dsa-flags     =|g' $DXHOME/config/knowledge/democorp.dxc
sed -i 's|dsa-flags     =|dsa-flags     = multi-write, no-service-while-recovering, load-share|g' $DXHOME/config/knowledge/democorp.dxc

echo ..
echo "#############################################################"
echo "Step 2b. Update MW recovery in democorp.dxi file"
echo "#############################################################"
sed -i 's|recovery = false;|recovery = true;|g' $DXHOME/config/servers/democorp.dxi

echo ..
echo "#############################################################"
echo "Step 3a. Create six (6) copies of democorp and two (2) routers"
echo "Copy democorp data folder contents"
echo "#############################################################"
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpA.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpA.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpB.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpB.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpC.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpC.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpD.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpD.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpE.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpE.tx  > /dev/null 2>&1
cp -r -p $DXHOME/data/democorp.db $DXHOME/data/democorpF.db
cp -r -p $DXHOME/data/democorp.tx $DXHOME/data/democorpF.tx  > /dev/null 2>&1

echo ..
echo "#############################################################"
echo "Step 3b. Copy autostart folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpA
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpB
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpC
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpD
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpE
cp -r -p $DXHOME/config/autostart/democorp  $DXHOME/config/autostart/democorpF
cp -r -p $DXHOME/config/autostart/router    $DXHOME/config/autostart/routerAA
cp -r -p $DXHOME/config/autostart/router    $DXHOME/config/autostart/routerBB

echo ..
echo "#############################################################"
echo "Step 3c. Copy knowledge folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpA.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpB.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpC.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpD.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpE.dxc
cp -r -p $DXHOME/config/knowledge/democorp.dxc $DXHOME/config/knowledge/democorpF.dxc
cp -r -p $DXHOME/config/knowledge/router.dxc   $DXHOME/config/knowledge/routerAA.dxc
cp -r -p $DXHOME/config/knowledge/router.dxc   $DXHOME/config/knowledge/routerBB.dxc
cp -r -p $DXHOME/config/knowledge/sample.dxg   $DXHOME/config/knowledge/groupAA.dxg
cp -r -p $DXHOME/config/knowledge/sample.dxg   $DXHOME/config/knowledge/groupBB.dxg

echo ..
echo "#############################################################"
echo "Step 3d. Copy server folder contents"
echo "#############################################################"
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpA.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpB.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpC.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpD.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpE.dxi
cp -r -p $DXHOME/config/servers/democorp.dxi   $DXHOME/config/servers/democorpF.dxi
cp -r -p $DXHOME/config/servers/router.dxi     $DXHOME/config/servers/routerAA.dxi
cp -r -p $DXHOME/config/servers/router.dxi     $DXHOME/config/servers/routerBB.dxi

echo ..
echo "#############################################################"
echo "Step 4a.  Update names & ports in democorp knowledge files"
echo "#############################################################"
sed -i 's|19389|29389|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|19390|29390|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPA =|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPA>|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|19389|29489|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|19390|29490|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPB =|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPB>|g' $DXHOME/config/knowledge/democorpB.dxc
sed -i 's|19389|29589|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|19390|29590|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPC =|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPC>|g' $DXHOME/config/knowledge/democorpC.dxc
sed -i 's|19389|29689|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|19390|29690|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPD =|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPD>|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|19389|29789|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|19390|29790|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPE =|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPE>|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|19389|29889|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|19390|29890|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|dsa DEMOCORP =|dsa DEMOCORPF =|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|<c AU><o DEMOCORP><cn DXserver>|<c AU><o DEMOCORP><cn DEMOCORPF>|g' $DXHOME/config/knowledge/democorpF.dxc

echo ..
echo "#############################################################"
echo "Step 4b. Update knowledge files for router ports"
echo "#############################################################"
sed -i 's|19289|39289|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|19290|39290|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|dsa ROUTER =|dsa ROUTERAA =|g' $DXHOME/config/knowledge/routerAA.dxc
sed -i 's|19289|39389|g' $DXHOME/config/knowledge/routerBB.dxc
sed -i 's|19290|39390|g' $DXHOME/config/knowledge/routerBB.dxc
sed -i 's|dsa ROUTER =|dsa ROUTERBB =|g' $DXHOME/config/knowledge/routerBB.dxc

echo ..
echo "#############################################################"
echo "Step 4c. Update group knowledge file for three (3)MW Group HUB Peers "
echo "#############################################################"
sed -i 's|"router.dxc";|"routerAA.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|"democorp.dxc";|"democorpA.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|"democorpA.dxc";|"democorpA.dxc";\nsource "democorpB.dxc";\nsource "democorpC.dxc";\nsource "routerBB.dxc";\nsource "democorpD.dxc";\nsource "democorpE.dxc";\nsource "democorpF.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg
sed -i 's|source "unspsc.dxc";|#source "unspsc.dxc";|g' $DXHOME/config/knowledge/groupAA.dxg

cp -r -p $DXHOME/config/knowledge/groupAA.dxg $DXHOME/config/knowledge/groupBB.dxg

#sed -i 's|"router.dxc";|"routerBB.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|"democorp.dxc";|"democorpD.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|"democorpD.dxc";|"democorpD.dxc";\nsource "democorpE.dxc";\nsource "democorpF.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg
#sed -i 's|source "unspsc.dxc";|#source "unspsc.dxc";|g' $DXHOME/config/knowledge/groupBB.dxg

echo ..
echo "#############################################################"
echo "Step 4d.  Update Server folder contents"
echo "#############################################################"
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpA.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpB.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/democorpC.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpD.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpE.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/democorpF.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupAA.dxg";|g' $DXHOME/config/servers/routerAA.dxi
sed -i 's|/knowledge/sample.dxg";|/knowledge/groupBB.dxg";|g' $DXHOME/config/servers/routerBB.dxi


echo ..
echo "#############################################################"
echo "Step 4e.  Update HUB Configurations in DSA knowledge and DSA routers"
echo "#############################################################"
sed -i 's|load-share|load-share, multi-write-group-hub|g' $DXHOME/config/knowledge/democorpA.dxc
sed -i 's|load-share|load-share, multi-write-group-hub|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpD.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpE.dxc
sed -i 's|multi-write-group = hub_group_AA|multi-write-group = hub_group_BB|g' $DXHOME/config/knowledge/democorpF.dxc
sed -i 's|/knowledge/groupAA.dxg";|/knowledge/groupAA.dxg";\nset  write-precedence = democorpA ,democorpB, democorpC;\n|g' $DXHOME/config/servers/routerAA.dxi
sed -i 's|/knowledge/groupBB.dxg";|/knowledge/groupBB.dxg";\nset  write-precedence = democorpD ,democorpE, democorpF;\n|g' $DXHOME/config/servers/routerBB.dxi

echo ..
echo "#############################################################"
echo "Step 4f.  Remove samples of router & democorp from starting "
echo "#############################################################"
rm -rf $DXHOME/config/servers/democorp.dxi
rm -rf $DXHOME/config/servers/router.dxi
rm -rf $DXHOME/config/autostart/democorp
rm -rf $DXHOME/config/autostart/router

echo ..
echo "#############################################################"
echo "Step 5. Start all DSAs"
echo "#############################################################"
dxcertgen certs > /dev/null 2>&1
dxserver start all

dxserver status

#exit

echo ..
echo "#############################################################"
echo "Step 6. Test all DSAs with dxsearch query"
echo "#############################################################"
# Comment out if too verbose
# Data DSAs
#dxsearch -h `hostname` -p 29389 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29489 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29589 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29689 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29789 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 29889 -c -x -b o=DEMOCORP,c=AU
# Router DSAs
#dxsearch -h `hostname` -p 39289 -c -x -b o=DEMOCORP,c=AU
#dxsearch -h `hostname` -p 39389 -c -x -b o=DEMOCORP,c=AU

# Data DSAs
#dxsearch -h `hostname` -p 29389 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29489 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29589 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29689 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29789 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 29889 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
# Router DSAs
#dxsearch -h `hostname` -p 39289 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01
#dxsearch -h `hostname` -p 39389 -c -x -b o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01

echo ..
echo "#############################################################"
echo "Step 7. Execute the dxsoak command with the service account & time command"
echo "allow to run for over 5 sec to monitor changes for Multi-Write"
echo "may allow for longer times (1 hour) to get better performance metrics"
echo "#############################################################"
cd $DXHOME/samples/dxsoak
echo "Update democorpA (TCP 29389) to confirm MW to from democorpA (hub_group_AA) to democorpD (hub_group_BB)"
# Create a delete file first; then re-add entries
grep dn: democorp.eldf | grep ,ou=Services > democorp-del.eldf
sed -i 's|,c=AU|,c=AU\nchangetype: del\n|g' democorp-del.eldf

echo ..
echo "#############################################################"
echo "# Delete all DN entries with ou=Services: `wc -l democorp-del.eldf` on democorpA (TCP 29389)"
time ./dxsoak -c -t 2 -q 10 -l 5 -h `hostname`:29389 -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -w Password01 -f democorp-del.eldf

echo ..
echo "#############################################################"
echo "# Re-Add all DN entries with ou=Services: `wc -l democorp.eldf` on democorpD (TCP 29689)"
time ./dxsoak -c -t 2 -q 10 -l 5 -h `hostname`:29689 -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -w Password01 -f democorp.eldf


echo ..
echo "#############################################################"
echo "Step 8a. Update democorpA to force a single delta between peer members of AA and BB"
echo "#############################################################"
cd
cat << EOF > $DXHOME/diradmin_sn.ldif
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
changetype: modify
replace: sn
sn: diradmin_AA_new_update
EOF

echo "#############################################################"
echo "# Query democorpA (TCP 29389) for sn value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29389 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for sn value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp

echo "#############################################################"
echo "# Make update on democorpA"
echo "#############################################################"
dxmodify -a -c -h `hostname` -p 29389 -f $DXHOME/diradmin_sn.ldif

echo "#############################################################"
echo "# Query democorpA (TCP 29389) for sn value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29389 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for sn value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 sn createTimestamp modifyTimestamp

#exit
echo ..
echo "#############################################################"
echo "Step 8b. Update democorpF to force a reverse single delta between peer members of AA and BB"
echo "#############################################################"
cd
cat << EOF > $DXHOME/diradmin_givenName.ldif
dn: cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU
changetype: modify
replace: givenName
givenName: diradmin_BB_new_update
EOF


echo "#############################################################"
echo "# Query democorpC (TCP 29589) for givenName value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29589 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for givenName value before change"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp

echo "#############################################################"
echo "# Update democorpF to show replication via democorpD (HUB) to democorpA (HUB) "
echo "#############################################################"
dxmodify -a -c -h `hostname` -p 29889 -f $DXHOME/diradmin_givenName.ldif

echo "#############################################################"
echo "# Query democorpC (TCP 29589) for givenName value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29589 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp
echo "#############################################################"
echo "# Query democorpF (TCP 29889) for givenName value after change"
echo " - May catch a fractional delta in replication"
echo "#############################################################"
dxsearch -LLL -h `hostname` -p 29889 -c -x -b cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU -D cn=diradmin,ou=Networks,ou=Support,o=DEMOCORP,c=AU  -w Password01 givenName createTimestamp modifyTimestamp




echo ..
echo "###########################################################"
echo "Step 9b. Update CA Directory DSA to allow online backup ###"
echo "###########################################################"
echo " - Configure CA Directory to provide an data dump (zdb file) while DSA are online"
cp -r -p $DXHOME/config/settings/default.dxc.org $DXHOME/config/settings/default.dxc  > /dev/null 2>&1
cp -r -p $DXHOME/config/settings/default.dxc $DXHOME/config/settings/default.dxc.org  > /dev/null 2>&1
# Edit the DSA settings file to add in one line.  dump dxgrid-db;
chmod 744 $DXHOME/config/settings/default.dxc
echo "dump dxgrid-db;" >> $DXHOME/config/settings/default.dxc



echo ..
echo "######################################################################################"
echo "Step 9c. Re-init all DSA to data dump the CA DSAs for democorp & router "
echo "######################################################################################"
echo " - This make take 5-30 seconds to complete "
dxserver init all    > /dev/null 2>&1
# View for zdb or zd? (in-progress) files
sleep 10



echo ..
echo "#################################################################"
echo "Step 9d. Export DSA backup/offline zdb data files to LDIF file ###"
echo "#################################################################"
echo " - Export will happen after the backup/offline zdb files are fully created"
echo " - This make take 5-60 seconds  to complete "
echo ..
echo "#################################################################"
echo "Step 9e. Set WHILE loop for DemocorpF DSA ###"
echo "#################################################################"
until [ -f $DXHOME/data/democorpF.zdb ]
do
     echo " - Waiting till CA Directory has completed online data dump of DemocorpF DSA"
     sleep 5
done
sleep 5
echo ..
echo "#################################################################"
echo "Step 9f. Execute dxdumbdb for Democorp DSA - FULL ###"
echo "#################################################################"
mkdir $DXHOME/backup  > /dev/null 2>&1
cd $DXHOME/backup
dxdumpdb -z -f $DXHOME/backup/democorpA.ldif democorpA   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpB.ldif democorpB   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpC.ldif democorpC   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpD.ldif democorpD   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpE.ldif democorpE   > /dev/null 2>&1
dxdumpdb -z -f $DXHOME/backup/democorpF.ldif democorpF   > /dev/null 2>&1
sleep 5

echo ..
echo "#################################################################"
echo "Step 10a. Perform LDIF DELTA compare between democorpA and democorpB within same HUB MW group"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
#ldifdelta -x -S DSANAME  OLDFILE NEWFILE DELTAFILE
ldifdelta -x -S democorpA $DXHOME/backup/democorpA.ldif  $DXHOME/backup/democorpB.ldif $DXHOME/backup/delta-between-A-and-B.ldif
echo "#################################################################"
echo "Step 10b. Perform LDIF DELTA compare between democorpD and democorpE within same HUB MW group"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
ldifdelta -x -S democorpD $DXHOME/backup/democorpD.ldif  $DXHOME/backup/democorpE.ldif $DXHOME/backup/delta-between-D-and-E.ldif
echo "#################################################################"
echo "Step 10c. Perform LDIF DELTA compare between democorpC and democorpF across different HUB MW groups"
echo "Look for any delta in the metrics > 0"
echo "#################################################################"
ldifdelta -x -S democorpC $DXHOME/backup/democorpC.ldif  $DXHOME/backup/democorpF.ldif $DXHOME/backup/delta-between-C-and-F.ldif

echo .
echo .




Ref: This HUB Model lab was built off a prior lab for MW Sync with air-gap requirements.

https://community.broadcom.com/enterprisesoftware/communities/community-home/digestviewer/viewthread?MessageKey=62ccc41d-7c37-4728-ad1e-c82e7a8acc38&CommunityKey=f9d65308-ca9b-48b7-915c-7e9cb8fc3295&tab=digestviewer

Load Balancing Provisioning Tier

The prior releases of CA Identity Manager / Identity Suite have a bottleneck with the provisioning tier.

The top tier of the solution stack, Identity Manager Environment (IME/J2EE Application), may communicate to multiple Provisioning Servers (IMPS), but this configuration only has value for fail-over high availability.

This default deployment means we have a “many-to-one” challenge: multiple IMEs experiencing a bottleneck when communicating with a single IMPS server.

If this IMPS server is busy, then transactions from one or more IMEs are paused or may time out. Unfortunately, the IME (J2EE) error messages or delays do not make it clear that this is a provisioning bottleneck. Clients may attempt to resolve this challenge by increasing the number of IME and IMPS servers, but they will still be impacted by the provisioning bottleneck.

Two (2) prior methods used to overcome this bottleneck challenge were:


a) Pseudo hostname entries on the J2EE servers for the provisioning tier; then rotate the order of the pseudo hostnames in the local J2EE host file so that their IP addresses access other IMPS servers. This methodology gives us a 1:1 configuration where one (1) IME is now locked to one (1) IMPS (by the pseudo hostname/IP address). This method is not perfect, but it ensures that all IMPS servers will be utilized if the number of IMPS servers equals the number of IME (J2EE) servers. Noteworthy: this method is used by the CA Identity Suite virtual appliance, where the pseudo hostnames are ca-prov-srv-01, ca-prov-srv-02, ca-prov-srv-03, etc. (see image above)

<Connection
  host="ca-prov-srv-primary" port="20390"
  failover="ca-prov-srv-01:20390,ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
/>
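A sketch of the hosts-file rotation described in (a); the IP addresses are placeholders. Each IME host orders the same pseudo hostnames differently, so each IME resolves its primary alias to a different IMPS:

# /etc/hosts on IME host 1 (placeholder IPs)
10.0.0.11  ca-prov-srv-01
10.0.0.12  ca-prov-srv-02
# /etc/hosts on IME host 2 - rotated so the first alias resolves to the other IMPS
10.0.0.12  ca-prov-srv-01
10.0.0.11  ca-prov-srv-02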

b) A router placed in front of the IMPS servers (TCP 20389/20390) that provides “stickiness”, to ensure that when the round-robin model is used, the same IMPS server handles all traffic from the IME that submitted a transaction. This avoids any concerns/challenges of possible “RACE” conditions, where a modify operation may occur before the create operation.


The “RACE” challenge is a concern for both of the methods above, but this risk is low and can be managed with additional business rules that include pre-conditional checks, e.g., does the account exist before any modifications.

Ref: RACE https://en.wikipedia.org/wiki/Race_condition

Example of one type of RACE condition that may be seen.

Ref: PX Rule Engine: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/Cumulative-Patches/Latest-Cumulative-Patch-14_3-CP2.html

New CP2 Load Balance Feature – No more bottleneck.

Identity Manager can now use round-robin load balancing, without restrictions on the type of provisioning operations or the prior runtime limitations. This load-balancing approach distributes client requests across a group of Provisioning Servers.

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/release-features-and-enhancement/Identity-Manager-14_3-CP2.html#concept.dita_b51ab03e-6e77-49be-8235-e50ee477247a_LoadBalancing

This feature is managed in the IME tier, and will also address any RACE conditions/concerns.


No configuration changes are required on the IMPS tier. After updating to CP2, we can use the IME Management Console to export the directory.xml for the IMPS servers and update the <Connection> XML tag. This configuration may also be deployed to the virtual appliances.

<Connection
  host="ca-prov-srv-primary" port="20390"
  loadbalance="ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
  failover="ca-prov-srv-01:20390,ca-prov-srv-02:20390,ca-prov-srv-03:20390,ca-prov-srv-04:20390"
/>

View of CP2 to download.

https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-identity-and-access-management/identity-manager/14-3/Release-Notes/Cumulative-Patches/Latest-Cumulative-Patch-14_3-CP2.html

Before applying this patch, we recommend collecting your metrics for feed operations that include multiple create and modify operations to a minimum of 1000 IDs. Monitor the IMPS etatrans logs as well as the JCS/CCS logs. After the patch, run the same feed operations to determine the value of the provisioning load-balance feature and confirm which provisioning delays have been addressed. You may wish to increase the number of JCS/CCS servers (MS Windows) to speed up provisioning to Active Directory and other endpoints.

Disaster Recovery Scenarios for Directories

Restore processes may be done with snapshots-in-time for both databases and directories. We wished to provide clarity on the restoration steps after a snapshot-in-time is utilized for a directory. The methodology outlined below has the following goals: a) allow sites to prepare before they need the restoration steps, b) provide a training module to exercise samples included in a vendor solution.

In this scenario, we focused on the CA/Broadcom/Symantec Directory solution. The CA Directory provides several tools to automate online backup snapshots, but these processes stop at copies of the binary data files.

Additionally, we desired to walk through the provided DAR (Disaster and Recovery) scenarios, determine what needed to be updated to reflect newer features, and establish how we may validate that a full restoration was accomplished.

Finally, we wished to assist with the decision tree model, where we triage and determine whether a full restore is required or whether we may select a partial restoration via extracts and imports of selected data.

Cluster Out-of-Sync Scenario

Awareness

The first indicator that a userstore (CA Directory DATA DSA) is out-of-sync will be the CA Directory logs themselves, e.g. alarm or trace logs.

Another indication will be inconsistent query results for a user object that returns different results when using a front-end router to the DATA DSAs.

After awareness of the issue, the team will exercise a triage process to determine the extent of the out-of-sync data. For a quick check, one may execute LDAP queries directly against the TCP port of each DATA DSA on each host, and examine the results directly or even just the total number of entries, e.g. dxTotalEntryCount.

The returned count value will help determine if the number of entries for each DATA DSA on the peer MW hosts is out-of-sync due to ADD or DEL operations. The challenge/gap with this method is that it will not show any delta due to modify operations on the user objects themselves, e.g. an address field change.

Example of LDAP queries (dxsearch/ldapsearch) to CA Directory DATA DSA for the CA Identity Management solution (4 DATA DSA and 1 ROUTER DSA)

su - dsa    OR [ sudo -iu dsa ]
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

# NOTIFY BRANCH (TCP 20404) 
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount
dn: dc=notify,dc=etadb

# INC BRANCH (TCP 20398)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# CO BRANCH (TCP 20396)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# MAIN BRANCH (TCP 20394)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount

# ALL BRANCHES - Router Port (TCP 20391)
LDAPTLS_REQCERT=never  dxsearch -LLL -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=etadb' '(objectClass=*)' dxTotalEntryCount


A better process to identify the delta(s) is to automate the daily backup process to build out LDIF files for each peer MW DATA DSA, and then perform a delta process between the LDIF files. We will walk through this more involved step later in this blog entry.

Recovery Processes

The below link has examples from CA/Broadcom/Symantec with recovery notes of CA Directory DATA DSA that are out-of-sync due to extended downtime or outage window.

The below image, pulled from the document (page 9), shows CA Directory r12.x using the latest recovery process of “multiwrite-DISP” (MW-DISP) mode.

This MW-DISP recovery process is the default for the CA Identity Management DATA DSAs; the install wizard tools configure it when they create the IMPD DATA DSAs.

https://knowledge.broadcom.com/external/article?articleId=54088

The above document is dated and still mentions additional file structures that have since been retired, e.g. oc/zoc and at/zat.

An enhancement request has been submitted to address both of these items:

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=c71a304b-a689-4894-ac1c-786c9a2b2d0d

The modified version we have started for CA Directory r14.x adds some clarity to the <dsaname>.dx files, and notes which steps may be adjusted to support the split data structure of the four (4) IMPD DATA DSAs.

The same time-flow diagram was used. Extra notes were added for clarity and, where possible, examples of the commands that will be used to assist with direct automation of each step (or that may be pasted into an SSH session window as the dsa service ID).

Step 1, implicit in the identification/triage process, is to determine which userstore data is out-of-sync and how large a delta we have. If the DSA service has been shut down (either deliberately or via a startup issue) for more than a few days, the CA Directory process will check the date stamp in the <dsaname>.dp file and the transaction date in the <dsaname>.tx file; if the gap between these dates is too large, CA Directory will refuse to start the DATA DSA and will issue a warning message.

Step 2, we will leverage the dxdisp <dsaname> command to generate a new time-stamp file <dsaname>.dx, that will be used to prevent unnecessary sync operations with any data older than the date stamp in this file. 

This command should be issued for every DATA DSA on the same host. This is especially true for split DATA DSAs, e.g. the IMPD (CA Identity Manager’s Provisioning Directories). In our example below, to assist with this step, we use a while-loop to issue the dxdisp command for each DATA DSA.

This command can be executed regardless of whether the DSA is running or shut down. If a <dsaname>.dx file already exists, any additional execution of dxdisp will append updated time-stamps to this file.

Note: The <dsaname>.dx file will be removed upon restart of the DATA DSA.

STEP 2: ISSUE DXDISP COMMAND [ Create time-stamp file for re-sync use ] ON ALL IMPD SERVERS.

su - dsa OR [ sudo -iu dsa ]
bash
dxserver status | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxdisp "$LINE" ;done ; echo ; find $DXHOME -name "*.dx" -exec ls -larth {} \;


Step 3 will then ask for an updated online backup to be executed. 

In earlier releases of CA Directory, this required a telnet/ssh connection to the dxconsole of each DATA DSA, or updating the DSA configuration files to contain a dump dxgrid-db; command that would be executed by the dxserver init all command.

In newer releases of CA Directory, we can leverage the dxserver onlinebackup <dsaname> process. 

Using manual procedures, it can be a challenge to dump all DATA DSAs at the same time.

Fortunately, we can automate this with a single bash shell process; and as an enhancement, we can also generate the LDIF extract of each DATA DSA for later delta-compare operations.

Note: The DATA DSA must be running (started) for the onlinebackup process to function correctly. If unsure, issue dxserver status or dxserver start all first.

Retain the LDIF files from the “BAD” DATA DSA Servers for analysis.

STEP 3a-3c: ON ALL IMPD DATA DSA SERVERS - ISSUE ONLINE BACKUP PROCESS
su - dsa OR [ sudo -iu dsa ]
bash

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -w -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`


Step 4a walks through the possible copy operations of the <dsaname>.zdb files from the “GOOD” host to the “BAD” DATA DSA host. The IMPD DATA DSAs will require that three (3) of the four (4) zdb files are copied, to ensure no impact to referential integrity between the DATA DSAs.

The preferred model to copy data from one remote host to another is via the compressed rsync process over SSH, as this is a rapid process for the CA Directory db / zdb files.

https://anapartner.com/2020/05/03/wan-latency-rsync-versus-scp/

Below are code blocks that demonstrate how to copy data from one DSA server to another.

# RSYNC METHOD
sudo -iu dsa

time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data

# SCP METHOD   
sudo -iu dsa

scp   REMOTE_ID@$HOST:./data/<folder_impd_data_dsa_name>/*.zdb   /tmp/dsa_data
/usr/bin/mv  /tmp/dsa_data/<incorrect_dsaname>.zdb   $DXHOME/data/<folder_impd_data_dsa_name>/<correct_dsaname>.db


Step 4b walks through the final steps before restarting the “BAD” DATA DSA.

The ONLY files that should be in the data folders are <dsaname>.db (binary data file) and <dsaname>.dx (ASCII time-stamp file). Ensure that the copied <prior-hostname-dsaname>.zdb file has been renamed to the correct hostname and extension, <dsaname>.db.

Remove the prior <dsaname>.dp file (ASCII time-stamp file; the DATA DSA will auto-replace this file from the *.dx file contents) and the <dsaname>.tx file (binary transaction file), as sketched below.
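A minimal sketch of this cleanup, assuming the standard $DXHOME/data layout; review the matched files before removing anything:

# List, then remove, the stale .dp and .tx files under the data folders
find $DXHOME/data -name "*.dp" -o -name "*.tx"
find $DXHOME/data \( -name "*.dp" -o -name "*.tx" \) -exec rm -v {} \;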

Step 5a Start up the DATA DSA with the command:

dxserver start all

If there is any issue with a DATA or ROUTER DSA not starting, then issue the same command with the debug switch (-d)

dxserver -d start <dsaname>

Use the output from the above debug process to address any a) syntax challenges, or b) stale PID/LCK files ($DXHOME/pid), as sketched below.
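A small sketch, assuming the default $DXHOME/pid location; only remove a PID file after confirming the matching DSA process is no longer running:

# Review stale PID/LCK files left over from a prior crash
ls -lart $DXHOME/pid
# rm -v $DXHOME/pid/<dsaname>.pid     # uncomment only after confirming the process is gone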

Step 5b Finally, use dxsearch/ldapsearch to run a unit test of authentication with the primary service ID. Use other unit/use-case tests as needed to confirm the data is now synced.

bash
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd

LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://`hostname`:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s base -b 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' '(objectClass=*)' | perl -p00e 's/\r?\n //g'


LDIF Recovery Processes

The steps above are for recovery via a 100% replacement method, where the assumption is that the “bad” DSA server does NOT have any data worth keeping or reviewing.

We also wish to clarify a process/methodology for when the “peer” multi-write DSAs are out-of-sync but we are not sure which is truly the “good DSA” to select, or when we wish to merge data from multiple DSAs before we declare one to be the “good DSA” (with regard to the completeness of data).

Using CA Directory commands, we can join them together to automate snapshots and exports to LDIF files. These LDIF files can then be compared against their peer MW DATA DSA exports, or even against themselves at different snapshot export times. As long as we have the LDIF exports, we can recover from any DAR scenario.

Example of using CA Directory dxserver and dxdumpdb commands (STEP 3) with the ldifdelta and dxmodify commands.

The output from ldifdelta may be imported, via dxmodify to that hostname, to any remote peer MW DATA DSA server, to force a sync for the few objects that may be out-of-sync, e.g. password hashes or other attributes.

dxserver status | grep started | grep -v router | awk '{print $1}' | while IFS='' read -r LINE || [ -n "$LINE" ] ; do dxserver onlinebackup "$LINE" ; sleep 10; dxdumpdb -z -f /tmp/`date '+%Y%m%d_%H%M%S_%s'`_$LINE.ldif $LINE ;done ; echo ; find $DXHOME -name "*.zdb" -exec ls -larth {} \; ; echo ; ls -larth --time-style=full-iso /tmp/*.ldif | grep  `date '+%Y-%m-%d'`

ldifdelta -x -S ca-prov-srv-01-impd-co  /tmp/20200819_122820_1597858100_ca-prov-srv-01-impd-co.ldif   /tmp/20200819_123108_1597858268_ca-prov-srv-01-impd-co.ldif  |  perl -p00e 's/\r?\n //g'  >   /tmp/delta_file_ca-prov-srv-01-impd-co.ldif   ; cat /tmp/delta_file_ca-prov-srv-01-impd-co.ldif

echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd
dxmodify -v -c -h`hostname` -p 20391  -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f /tmp/delta_file_ca-prov-srv-01-impd-co.ldif


The below images demonstrate a delta that exists between two (2) time snapshots. The CA Directory tool ldifdelta can identify and extract the modified entry for the user object.

The following examples show how to re-import this delta using the dxmodify command against the DATA DSA, with no other modifications required to the input LDIF file.

In the testing example below, before any update to an object, let’s capture a snapshot-in-time and the LDIF files for each DATA DSA.

Let’s make an update to a user object using any tool we wish, or a command-line process like ldapmodify, as sketched below.
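A minimal sketch of such an update, assuming the router port (TCP 20391) and the .impd.pwd file from the earlier examples; the target DN and the telephoneNumber attribute are hypothetical placeholders for this lab:

# Hypothetical user update via ldapmodify (adjust the DN to a real user object in your environment)
cat << 'EOF' > /tmp/update_user.ldif
dn: eTGlobalUserName=testuser0001,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=etadb
changetype: modify
replace: telephoneNumber
telephoneNumber: 555-0100
EOF
LDAPTLS_REQCERT=never ldapmodify -H ldaps://`hostname`:20391 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -f /tmp/update_user.ldif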

Next, let’s capture a new snapshot-in-time after the update, so we will be able to utilize the ldifdelta tool.

We can use the ldifdelta tool to create the delta LDIF input file. After we review this file, and accept the changes, we can then submit this LDIF file to the remote peer MW DATA DSA that are out-of-sync.

We hope this has value for you and for any challenges you may have in your environment.

Upgrade CA API Gateway via docker “in-place”

CA API Gateway (ssg) is used to manage SaaS endpoints/applications for the CA/Symantec Identity Suite solution. One of the challenges with appliances and Docker containers is that the underlying 3rd-party libraries may become dated and require updates.

Most vendors will not allow post-updates or direct updates to their container libraries, as this has an impact on the support model. So we must rely on the support process and push vendors to release additional updates to stay ahead of any security concerns.


The CA API Gateway (ssg), when deployed on Docker, has a streamlined process for updating in place, as long as you have backed up the MySQL database before the docker images are updated.

We wanted to capture the process to upgrade from CA API Gateway 9.4 (ssg94) to Gateway 10.0 (ssg10). Fortunately, the MySQL 8.0 database has the same structure, tables, and routines as the MySQL 5.7 database for CA API Gateway 9.4.
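A quick, hedged way to confirm the structures match in your own deployment is to diff the table lists of both databases; the container names mysql-ssg and mysql-ssg10 follow the naming used in the steps below:

# Compare table lists between the ssg94 (MySQL 5.7) and ssg10 (MySQL 8.0) databases
docker exec mysql-ssg    mysql --user=root --password=7layer ssg -e "show tables;" > ssg94.tables.txt
docker exec mysql-ssg10  mysql --user=root --password=7layer ssg -e "show tables;" > ssg10.tables.txt
diff ssg94.tables.txt ssg10.tables.txt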

The challenge we have is that the documented upgrade process is difficult to implement on the same host OS; and it misses an opportunity to carry the license file from 9.4 forward to 10.0 during the re-import of the MySQL database.


The below diagram, from the CA API Gateway 10.0 upgrade process, can be adjusted to streamline the upgrade.

[Image: expedited_scenario_1]
Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/upgrade-the-gateway/upgrade-an-appliance-gateway/manual-expedited-appliance-upgrade.html

The above-documented process outlines dropping the MySQL ssg database completely and then creating a new clean db. We can slightly adjust this documented process to avoid the unnecessary step of re-importing the license file after the restart of the gateway container. We also wish to add additional validation steps to show what is changing.

Proposal for modifications:

  1. Create a clean CA API Gateway 10.0 (ssg10) Docker deployment on the same host OS. You may use docker-compose with the REST service enabled, and use different TCP listening ports to allow two (2) docker containers to run simultaneously during the testing cycle. After testing, you may keep the default TCP listen ports of 8443 & 9443.
  2. Allow the CA API Gateway 10.0 container to start cleanly with the MySQL 8.0 DB and with the correct license file for version 10. Then export the MySQL database table that contains the updated license data (license_document).
  3. Import the prior backup MySQL file into the new CA API Gateway deployment. Then, before startup, import the ssg10 license MySQL file as well. This will replace the ssg94 license information.
  4. Restart the CA API Gateway container, monitor the logs for any errors, and ensure the new license file is used.
  5. If the REST API was enabled (via the docker-compose file & touching a file named “restman”), then use CURL to validate that all REST services are available and that all prior API Gateway Policy Services are displayed.

A visual example of this process using the prior diagram.

Note: The official documentation uses sed to replace the string “NO_AUTO_CREATE_USER”, but it shows two examples: one with a comma and one without. We have included the one with the comma. We did not see this line in the MySQL sql export, so it was deemed low value, but it is still included in our process.

Example of upgrade process and validation of using REST

Note the two (2) running CA API Gateway containers, 9.4 (with MySQL 5.7) and 10.0 (with MySQL 8.0), with different TCP listen services, and the validation of REST services for ssg10.

Below are the above steps called out with additional validation steps, and the use of the time command to monitor the duration of the exports.

# Pre-Step 1:  On Test System:  Prepare SSG10 docker compose yml file and correct license.xml & confirm startup.
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}
docker ps -a
docker logs ssg10 -f --tail 100
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"


# Step 2:  On PROD HOST OS: Stop SSG94 and export the current MySQL 5.7 database with routines (aka stored procedures) & remove unwanted lines
docker stop ssg94
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.before.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.sql
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.updated.for.mysql8.sql
sed -i "s/NO_AUTO_CREATE_USER,//g"   			ssg94.backup.updated.for.mysql8.sql
sed -i "/Using a password on the command/d" 	ssg94.backup.updated.for.mysql8.sql


# Step 3: On PROD HOST OS: Deploy SSG10 with docker compose yml file & correct license xml file & export db table license_document
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}  
docker ps -a
docker logs ssg10 -f --tail 100     
docker stop ssg10
time docker exec -tt  mysql-ssg10  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines license_document  > ssg10.license.export.sql
sed -i "/Using a password on the command/d" 	ssg10.license.export.sql

# Step 4: On PROD HOST OS: Drop the SSG10 MySQL 8.0 ssg database and rebuilt with imports of SQL files.
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer drop ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer create ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg94.backup.updated.for.mysql8.sql
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg10.license.export.sql
docker exec -it mysql-ssg10  mysql --user=root --password=7layer ssg  -e "SELECT * FROM license_document;" | grep -A 12 -e "<license "

# Step 5: On PROD HOST OS:  Start SSG10 and validate no errors 
docker start ssg10       {Wait 90-120 seconds} 
docker ps -a

# Step 6:  Validate license    
docker logs ssg10 -f --tail 100  
docker logs ssg10 -f 2>&1  | grep -i license

# Step 7:  Validate REST services enabled and we can see all services
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
# Example to validate ServiceNow REST service to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://localhost:9443/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Example validate ServiceNow REST service via LB to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://192.168.242.135/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Direct REST service to ServiceNow to validate development instance is available.
curl --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json"  'https://dev101846.service-now.com/api/now/table/sys_user?sysparm_query=user_name=testalan13095'

# Step 8:  Certs required for IM JCS Tier to avoid typical cert issues.
a. Ensure the CA API Gateway public root CA cert or self-signed cert is imported to each JCS keystore
b. If using a LoadBalancer, e.g. httpd, ensure this public root CA cert or self-signed cert is imported to each JCS keystore.


Docker commands collected to assist with RCA efforts for Operations Teams

# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"

# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"


#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If MySQL root password is random, find via logs  (use redirect to switch from JSON to text to use grep)
#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the logs to see
#  docker logs ssg10 -f --tail 100

# Commands to install additional packages for vul scans (ps from procps) & update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux

#  Update password process
# Generate SHA-512 Password (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'

# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "

# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"

# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"

# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"

# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/emailListeners?sort=host&order=desc
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl

Change the pmadmin password at the docker command line

Process flows collected for the CA API Gateway docker deployment

Example of the docker-compose yml file for CA API Gateway with REST web services and license xml file.

We attempt to keep useful notes/hints included in the yml file for future reference. The example below redirects ports to TCP 18443 and 19443 from the standard ports of 8443 and 9443 for the CA API Gateway, and MySQL from 3306 to 23306, for testing in non-Production environments.

# docker-compose-ssg10-0-mysql8-0_with_rest_and_external_mysql_volume.yml
# Startup:  docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml up  -d
# Stop:     docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml down
#
#
# Ensure Host OS Network allows IPv4 forwarding:   sysctl -a | grep ipv4.ip_forward
# Validate docker network has access with curl:  curl -vk --tlsv1.2  https://www.service-now.com
# Note:  Do NOT use TABS in this file
# Monitor startup of containers with:  docker logs ssg10 -f --tail 100   AND   docker logs mysql-ssg10 -f  --tail 100
# https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/using-the-container-gateway/getting-started-with-the-container-gateway/run-the-container-gateway-on-docker-engine/sample-docker-compose-deployment-file.html
version: "2.2"
services:
   ssg10:
     container_name: ssg10
     # Ref: https://hub.docker.com/r/caapim/gateway/tags
     #image: caapim/gateway:latest
     image: caapim/gateway:10.0.00_20200428
     mem_limit: 10048m
     volumes:
        # Ensure ssg_license.xml is a valid SSG license file for 9.4 or 10.0
        - ./ssg_license_10.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
        # https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/apis-and-toolkits/rest-management-api.html
        # Touch the file restman to auto-start rest webservices
        # Validate REST API with curl
        # curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
        # curl --insecure --user pmadmin:7layer  https://localhost:18443/restman/1.0/rest.wadl
        - ./restman:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/services/restman
     ports:
       - "58443:8443"
       - "59443:9443"
     environment:
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql-ssg"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql-ssg10:3306/ssg?useSSL=false"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false"
        SSG_INTERNAL_SERVICES: "restman wsman"
     links:
        - mysql-ssg10
   mysql-ssg10:
     container_name: mysql-ssg10
     # Ref https://hub.docker.com/_/mysql?tab=tags
     image: mysql:8.0.20
     #image: mysql:latest
     # SSG 10.0 requires MySQL 8.x per documentation
     #https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/using-mysql-8_0-with-gateway-10.html
     mem_limit: 1048m
     restart: always
     ports:
        - "23306:3306"
     environment:
        - MYSQL_ROOT_PASSWORD=7layer
        #- MYSQL_RANDOM_ROOT_PASSWORD=yes
        - MYSQL_USER=gateway
        - MYSQL_PASSWORD=7layer
        - MYSQL_DATABASE=ssg
     command:
       - "--character-set-server=utf8mb3"
       - "--log-bin-trust-function-creators=1"
       - "--default-authentication-plugin=mysql_native_password"
       - "--innodb_log_buffer_size=32M"
       - "--innodb_log_file_size=80M"
       - "--max_allowed_packet=8M"
#     volumes:
#       - mysql_db8:/var/lib/mysql
# Persist SSG MySQL DB Data
# Validate after shutdown with:  docker volume ls  &  docker volume inspect ssg_mysql_db
# Note:  Important - Random Root Password will not work for persist MySQL - Password must be known for 1st time
#   volumes:
#     mysql_db8:
#
# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"
# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If MySQL root password is random, find via logs  (use redirect to switch from JSON to text to use grep)
#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the logs to see
#  docker logs ssg10 -f --tail 100
# Commands to install additional packages for vul scans (ps from procps) & update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux
#  Update password process
# Generate SHA-512 Password (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'
# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "
# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"
# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"
# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"
# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/emailListeners?sort=host&order=desc
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl

Clean-Up Orphans and Refine Correlation Rules

Correlation rules may be very simple: a unique ID in the IAM solution should match a unique ID (or combination of attributes) to form a one-to-one (1:1) relationship with the identity on a managed endpoint/application.

Most sites that have had the opportunity have started using GUID/UUID values for the correlation ID in the IAM solution and, if the endpoint/application allows it, the same GUID/UUID in an open field, which is likely not the same as the login ID field.

Example below using a GUID/UUID format as the primary identifier with the IAM solution and the endpoint/application of an Active Directory domain.

We may also have many different correlation rules, or primary/secondary correlations, for every application/endpoint. Until the correlation rules are correct, we have the likelihood of incorrect or default correlations.

If we wish to remove an incorrect correlation, this may be done manually by removing or re-attaching the correct entries. However, this would not address future correlation processes if the rules are not updated.

Example of removing a correlation from the orphan ID “[default user]”


Example of removing an incorrect correlation manually within the IAM solution

To assist with refinement of correlation rules, a feedback process/script may have value.

The below script demonstrates a feedback process using OS ldapsearch/ldapdelete against the CA Identity Manager Provisioning Tier (TCP 20389/20390) to clean up the orphan IDs under “[default user]”.

The script will query all “inclusions” where an endpoint account has been incorrectly associated with the Global User “[default user]” and return a count of these records. The process captures the dn values of these inclusion records and then feeds them to the OpenLDAP ldapdelete process to have them removed. Since we are using the IMPS service (TCP 20389/20390), we still allow the solution to maintain referential integrity during the clean-up process.

After the deletions are complete, we will re-initialize a new E&C (explore & correlate) process using any new correlation rules that may have been added. This is the opportunity for an administrator to adjust their own correlation rules and then re-execute the script. If the correlation rules do not match, the prior correlations will return to “[default user]”.

#!/bin/bash
#####################################################################################################################
#
# Name: Clean Up [default user]
#
# Goal:  Script to clean up [default_user] correlations to allow for better orphan or rogue account identification
#  - Ensure that IMPS Service TCP 20389/20390 is used to maintain referential integrity of the inclusions entries
#    during delete operations.
#
# Ref:  CA IM r14.x solution & OS ldapsearch/ldapdelete
#
# A. Baugher, ANA, 04/2020
#
#####################################################################################################################
# set -xv
DATETZ=$(date -d "1970-01-01 00:00:00 `date +'%s'` seconds"  +'%Y-%m-%dT%H:%M:%S.%3NZ')
IMPSHOST=`hostname`
IMPSPORT=20390
IMPSUSERDN='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
# Use pwd file to avoid clear text passwords in script
# echo -n CLEAR_TEXT_PASSWORD > .imps.pwd
IMPSPWD=`cat .imps.pwd`
#####################################################################################################################
BASE_DN='eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta'
SUP_DN_ENTRY='eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im'
FILTER="(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=$SUP_DN_ENTRY))"
SEARCH=sub
ATTRIBUTES='dn eTInclusionID'
EXCLUDE="  -e ^$ "
#SIZE=" -z 10"
SIZE=" -z 0"
FILENAME=default_user_guid.txt
rm -rf $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID > tmp_file
echo "LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D '$IMPSUSERDN' -y ./.imps.pwd -b '$BASE_DN' -s $SEARCH '$FILTER' $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F': ' '{print \$2}' | grep eTInclusionID "
uniq -i tmp_file > $FILENAME
echo "#################################################################################################"
echo "# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : "`cat $FILENAME | wc -l`
rm -rf tmp_file
echo "#################################################################################################"



echo ""
echo "####################################################################################################################"
echo "#### Remove `cat $FILENAME | wc -l` EA (endpoint accounts) that are correlated to the Global User [default user] "
echo "####################################################################################################################"
LDAPTLS_REQCERT=never ldapdelete -v -c -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -f $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"


echo ""
echo "#################################################################################################"
echo "#### Re-explore & correlate to update Global User [default user] orphan bucket."
echo "#################################################################################################"
echo ""
IMPSADSBASEDN="eTADSDirectoryName=dc2016.exchange.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers

IMPSADSBASEDN="eTADSDirectoryName=dc2012.exchange2012.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers


echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
echo ""

Example of the output of the script (with thousands of lines removed for clarity). It includes E&C against two (2) ADS endpoints, where > 2000 identities will default correlate to the orphan Global User “[default user]”.

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2184
#################################################################################################
LDAPTLS_REQCERT=never ldapsearch  -z 0 -LLL -H ldaps://vapp0001:20390 -D 'eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -y ./.imps.pwd -b 'eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta' -s sub '(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im))' dn eTInclusionID | perl -p00e 's/\r?\n //g' | grep -v   -e ^$  | awk -F': ' '{print $2}' | grep eTInclusionID
#################################################################################################
# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : 2184
#################################################################################################

####################################################################################################################
#### Remove 2184 EA (endpoint accounts) that are correlated to the Global User [default user]
####################################################################################################################
ldap_initialize( ldaps://vapp0001:20390/??base )
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@67d6bf2c-1104-1039-96c4-ef7605d11763,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname6 mi. lastname6' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@65e02962-00bd-1039-830f-ae134a0f7638,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname0002 lastname0002' and Global User '[default user]' deleted successfully

[Deleted > 5000 similar rows ]

deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@ce05d098-1b32-1039-85ec-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'ffffff' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@75a62f60-1b32-1039-85ea-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'eeeee' and Global User '[default user]' deleted successfully

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
0
#################################################################################################

#################################################################################################
#### Re-explore & correlate to update Global User [default user] orphan bucket.
#################################################################################################

Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 672, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' correlation successful: (accounts correlated: 0, defaulted: 566, unchanged: 6, failures: 0)
Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 1871, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' correlation successful: (accounts correlated: 0, defaulted: 1619, unchanged: 153, failures: 0)

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2185
#################################################################################################



Modify the above script for your own applications/endpoints and refine your correlation rules (or add additional ones as needed); one possible refactor is sketched below.
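As one hypothetical tweak, the hard-coded endpoint base DNs near the end of the script could be collapsed into a single loop, so new endpoints are added in one place; this fragment reuses only variables already defined in the script:

# Explore, then correlate, each endpoint in turn
for IMPSADSBASEDN in \
  "eTADSDirectoryName=dc2016.exchange.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" \
  "eTADSDirectoryName=dc2012.exchange2012.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
do
   LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
   LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers
done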

If application/endpoint identities are non-managed service IDs, a process that may assist is shown below: create a new Global User (similar format to [default user]), and then drag-n-drop the endpoint/application service ID accounts to the new Global User [endpoint A service ID].

The final goal is a “clean” orphan process that will be able to alert us to any rogue accounts created OOB (out-of-band) of the expected top-down IAM flow from an approved SOT (source-of-truth) solution, e.g. SAP HR/Workday or a home-grown DB used with ETL processes. By removing the “noise” of incorrectly correlated accounts, we can now focus on identifying the true “orphans”.

WAN Latency: Rsync versus SCP

We were curious about which methods we could use to manage large files that must be copied between sites with WAN-type latency, while restricting ourselves to processes available on the CA Identity Suite virtual appliance / Symantec IGA solution.

Leveraging VMware Workstation’s ability to introduce network latency between images, allows for a validation of a global password reset solution.

If we experience deployment challenges with native copy operations, we need to ensure we have alternatives to address any out-of-sync data.

The embedded CA Directory maintains the data tier in separate binary files, using a software router to join the data tier into a virtual directory. This allows for scalability and growth to accommodate the largest of sites.

We focused on the provisioning directory (IMPD) as our likely candidate for re-syncing.

Test Conditions:

  1. To ensure the data was being securely copied, we kept the requirement for SSH sessions between two (2) different nodes of a cluster.
  2. We introduced latency using the VMware Workstation NIC settings for one of the nodes (a similar delay can be simulated on a Linux host, as sketched after this list).
  3. The four (4) IMPD Data DSAs were resized to 2500 MB each (a similar size we have seen in production at many sites).
  4. We removed the data and the folder structure from the receiving node to avoid any checksum-restart processes gaining an unfair advantage.
  5. If the process allowed for exclusions, we took advantage of this feature.
  6. The feature/process/commands must be available on the vApp to the ‘config’ or ‘dsa’ user IDs.
  7. The reference host/node being pulled from has its CA Directory Data DSAs offline (dxserver stop all) to prevent ongoing changes to the files during the copy operation.
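For labs without VMware Workstation, a hedged alternative is the Linux tc/netem queueing discipline; eth0 is an assumed interface name, and root access is required, so this would run on a separate test host rather than the locked-down vApp:

# Add ~70 ms one-way delay (140 ms round-trip between two such hosts), then remove it after testing
sudo tc qdisc add dev eth0 root netem delay 70ms
# ... run the scp/rsync copy tests ...
sudo tc qdisc del dev eth0 root netem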

Observations:

SCP without Compression: Unable to exclude other files (*.tx,*.dp, UserStore) – This process took over 12 minutes to copy 10,250 MB of data

SCP with Compression: Unable to exclude other files (*.tx,*.dp, UserStore) – This process still took over 12 minutes to copy 10,250 MB of data

Rsync without compression: This process can exclude files/folders, has built-in checksum features (to allow a restart of a file if the connection is broken), and works over SSH as well. If the folder was not deleted prior, this process would give artificially high-speed results. This process was able to exclude the UserStore DSA files and the transaction files (*.dp & *.tx) that are not required on a remote server; only 10,000 MB (4 x 2500 MB) was copied, avoiding an extra 250 MB.

Rsync with compression: This process can exclude files/folders, has built-in checksum features (to allow a restart of a file if the connection is broken), and works over SSH as well. This process was the winner, with extremely impressive performance over the other processes.

Total Time: 1 min 10 seconds for 10,000 MB of data over a WAN latency of 70 ms (140 ms R/T)

Now that we have found our winner, we need a few post-steps before using the copied files. CA Directory, to maintain uniqueness between peer members of the multi-write (MW) group, uses a unique name for the data folder and the data file. On the CA Identity Suite / Symantec IGA virtual appliance, a pseudo nomenclature with two (2) digits is used.

The next step is to rename the folder and the files. Since the vApp is locked down against installing other tools that may be available for rename operations, we utilized the find and mv commands with a regular-expression substitution to assist with these two (2) steps.

Complete Process Summarized with Validation

The below process was written for the default shell of the ‘dsa’ user ID, ‘csh’. If the shell is changed to ‘bash’, update accordingly.

The below process also utilizes an SSH RSA private/public key that was previously generated for the ‘dsa’ user ID. If you are using the vApp, log in as the config user ID and su - dsa to complete the necessary steps. You may need to add a copy operation between the dsa & config user IDs, as sketched below.
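A minimal sketch of such a copy, staging through /tmp since the two local user IDs cannot read each other’s home directories; the file name is a hypothetical placeholder:

# As the config user: stage the file, then pull it in as the dsa user
cp $HOME/patches/somefile.tar.gpg /tmp/
sudo -iu dsa bash -c 'cp /tmp/somefile.tar.gpg $DXHOME/ && ls -lart $DXHOME/somefile.tar.gpg'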

Summary of using rsync with find/mv to rename copied IMPD *.db files/folders
[dsa@pwdha03 ~/data]$ dxserver status
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ dxserver stop all > & /dev/null
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$ eval `ssh-agent` && ssh-add
Agent pid 5395
Enter passphrase for /opt/CA/Directory/dxserver/.ssh/id_rsa:
Identity added: /opt/CA/Directory/dxserver/.ssh/id_rsa (/opt/CA/Directory/dxserver/.ssh/id_rsa)
[dsa@pwdha03 ~/data]$ rm -rf *
[dsa@pwdha03 ~/data]$ du -hs
4.0K    .
[dsa@pwdha03 ~/data]$ time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
FIPS mode initialized
receiving incremental file list
./
ca-prov-srv-01-impd-co/
ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db
  2500000000 100%  143.33MB/s    0:00:16 (xfer#1, to-check=3/9)
ca-prov-srv-01-impd-inc/
ca-prov-srv-01-impd-inc/ca-prov-srv-01-impd-inc.db
  2500000000 100%  153.50MB/s    0:00:15 (xfer#2, to-check=2/9)
ca-prov-srv-01-impd-main/
ca-prov-srv-01-impd-main/ca-prov-srv-01-impd-main.db
  2500000000 100%  132.17MB/s    0:00:18 (xfer#3, to-check=1/9)
ca-prov-srv-01-impd-notify/
ca-prov-srv-01-impd-notify/ca-prov-srv-01-impd-notify.db
  2500000000 100%  130.91MB/s    0:00:18 (xfer#4, to-check=0/9)

sent 137 bytes  received 9810722 bytes  139161.12 bytes/sec
total size is 10000000000  speedup is 1019.28
27.237u 5.696s 1:09.43 47.4%    0+0k 128+19531264io 2pf+0w
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-01-impd-co  ca-prov-srv-01-impd-inc  ca-prov-srv-01-impd-main  ca-prov-srv-01-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-03-impd-co  ca-prov-srv-03-impd-inc  ca-prov-srv-03-impd-main  ca-prov-srv-03-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ dxserver start all
Starting all dxservers
ca-prov-srv-03-impd-main starting
..
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify starting
..
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co starting
..
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc starting
..
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router starting
..
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$


Note: An enhancement request has been opened to ask that the ‘dsa’ userID be allowed to use remote SSH processes, to address any challenges when the Data IMPD DSAs need to be copied or retained for backup processes.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

Example for vApp Patches:

Note: There is no major difference in speed if the files being copied are already compressed. Testing showed that the initial copy of compressed patch files proceeds at the rate of the network and its latency. The value gained from using rsync is still the checksum feature that allows a broken transfer to restart where it left off.
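If a WAN session is likely to drop mid-transfer, rsync's --partial flag may be added so that a partially transferred file is kept rather than deleted, letting the next run resume instead of starting the file over. A minimal sketch, reusing one of the hypothetical node IPs from the listing below:

# Keep partial files so an interrupted transfer can resume where it left off
IP=192.168.242.136;rsync --progress --partial -e 'ssh -ax' -avz $HOME/patches config@$IP: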

vApp Patch process refined to a few lines (for three nodes of a cluster deployment)

# PATCHES
# On Local vApp [as config userID]
mkdir -p patches  && cd patches
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-VA-140300-0002.tar.gpg
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-IMV-140300-0001.tgz.gpg
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg           [Patch VA prior to any solution patch]
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
cd ..
# Push from one host to another via scp
IP=192.168.242.136;scp -r patches  config@$IP:
IP=192.168.242.137;scp -r patches  config@$IP:
# Push from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.136;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
IP=192.168.242.137;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
# Pull from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.135;rsync --progress -e 'ssh -ax' -avz config@$IP:./patches $HOME

# View that the files were patched
IP=192.168.242.136;ssh -tt config@$IP "ls -lart patches"
IP=192.168.242.137;ssh -tt config@$IP "ls -lart patches"

# On Remote vApp Node #2
IP=192.168.242.136;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

# On Remote vApp Node #3
IP=192.168.242.137;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

View of rotating the SSH RSA key for CONFIG User ID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP `/bin/date -u +%s`
IP=192.168.242.137;ssh $IP `/bin/date -u +%s`
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP              [Use -vv to troubleshoot ssh process]

Avoid locking a userID in a Virtual Appliance

The below post describes enabling the .ssh private key/public key process for the provided service IDs, to avoid dependency on a password that may be forgotten, and also how to leverage the service IDs to address potential CA Directory data sync challenges that may occur when there is WAN network latency between remote cluster nodes.

Background:

The CA/Broadcom/Symantec Identity Suite (IGA) solution provides for a software virtual appliance. This software appliance is available on Amazon AWS as a pre-built AMI image that allows for rapid deployment.

The software appliance is also offered as an OVA file for VMware ESXi/Workstation deployment.

Challenge:

If the primary service ID is locked or its password is allowed to expire, then the administrator will likely have only two (2) options:

1) Request assistance from the Vendor (for a supported process to reset the service ID – likely with a 2nd service ID “recoverip”)

2) Boot from an ISO image (if allowed) to mount the vApp as a data drive and update the primary service ID.

Proposal:

Add a standardized SSH RSA private/public key to the primary service ID, if it does not exist. If it exists, validate that you are able to authenticate and copy files between cluster nodes with the existing .ssh files. Rotate these files per internal security policies, e.g. once per year.
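A minimal sketch of how such a rotation policy could be checked, assuming a once-per-year policy and the default key path; the 365-day threshold is only an illustration:

# Print the id_rsa private key if it was last modified more than 365 days ago
find $HOME/.ssh -name id_rsa -mtime +365 -print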

The focus for this entry is on the CA ‘config’ and ‘ec2-user’ service IDs.

An enhancement request has been added to have the ‘dsa’ userID included in the file ‘/etc/ssh/ssh_allowed_users’, to allow the same .ssh RSA process to address challenges during deployments where a CA Directory Data DSA did not fully copy from one node to another.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

AWS vApp: ‘ec2-user’

The primary service ID for remote SSH access on Amazon AWS deployments is ‘ec2-user’, and it is already deployed with an SSH RSA private/public key. This is a requirement for AWS deployments, so this service ID is enabled for this process out of the box.

This feature allows access via the private key from a remote SSH session using Putty/MobaXterm or similar tools. Additionally, the ‘ec2-user’ .ssh folder may be updated to expose this service ID to the other nodes, to assist with the deployment of patch files.

As an example, enabling the .ssh service between multiple cluster nodes reduces the scp uploads required from remote workstations. Previously, if there were five (5) vApp nodes, patching them required uploading the patch directly to each of the five (5) nodes. With the .ssh service enabled between all nodes for the ‘ec2-user’ service ID, we only need to upload patches to one (1) node, then use an scp process to push the patch file(s) from that node to the other cluster nodes, as sketched below.
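A minimal sketch of that push step, assuming hypothetical private IP addresses for the other cluster nodes:

# From the node where the patches were uploaded, push them to the remaining nodes
for IP in 10.0.0.12 10.0.0.13 10.0.0.14 10.0.0.15; do
  scp -r $HOME/patches ec2-user@$IP:
done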

On-Prem vApp: ‘config’

We wish to emulate this process for on-prem vApp servers to reduce I/O for any files to be uploaded and/or shared.

This process has strong value when CA Directory *.db files are out-of-sync, or during initial deployments where there may be network issues and/or WAN latency.

Below is an example to create and/or rotate the private/public SSH RSA files for the ‘config’ service ID (the full command listing appears under “A view into rotating the SSH RSA keys for the CONFIG UserID” below).

Below is an example to push the newly created SSH RSA files to the remote host(s) of the vApp cluster. After this step, we can use scp processes within scripts to assist with remediation efforts, without a password stored in clear text.

Copy the RSA key files to your workstation, and add the private key to your Putty/MobaXterm or similar SSH tool, to allow remote authentication via public-key authentication.

If you have any issues, use the embedded verbose logging within the ssh client tool (-vv) to identify the root issue.

ssh -vv userid@remote_hostname

Example:

config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > eval `ssh-agent` && ssh-add
Agent pid 5717
Enter passphrase for /home/config/.ssh/id_rsa:
Identity added: /home/config/.ssh/id_rsa (/home/config/.ssh/id_rsa)
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ >
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > ssh -vv config@192.168.242.128
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.242.128 [192.168.242.128] port 22.
debug1: Connection established.
debug1: identity file /home/config/.ssh/identity type -1
debug1: identity file /home/config/.ssh/identity-cert type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/config/.ssh/id_rsa type 1
debug1: identity file /home/config/.ssh/id_rsa-cert type -1
debug1: identity file /home/config/.ssh/id_dsa type -1
debug1: identity file /home/config/.ssh/id_dsa-cert type -1
debug1: identity file /home/config/.ssh/id_ecdsa type -1
debug1: identity file /home/config/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-sha1
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug2: mac_setup: found hmac-sha1
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 141/320
debug2: bits set: 1027/2048
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '192.168.242.128' is known and matches the RSA host key.
debug1: Found key in /home/config/.ssh/known_hosts:2
debug2: bits set: 991/2048
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/config/.ssh/id_rsa (0x5648110d2a00)
debug2: key: /home/config/.ssh/identity ((nil))
debug2: key: /home/config/.ssh/id_dsa ((nil))
debug2: key: /home/config/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Offering public key: /home/config/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug2: input_userauth_pk_ok: SHA1 fp 39:06:95:0d:13:4b:9a:29:0b:28:b6:bd:3d:b0:03:e8:3c:ad:50:6f
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Thu Apr 30 20:21:48 2020 from 192.168.242.146

CA Identity Suite Virtual Appliance version 14.3.0 - SANDBOX mode
FIPS enabled:                   true
Server IP addresses:            192.168.242.128
Enabled services:
Identity Portal               192.168.242.128 [OK] WildFly (Portal) is running (pid 10570), port 8081
                                              [OK] Identity Portal Admin UI is available
                                              [OK] Identity Portal User Console is available
                                              [OK] Java heap size used by Identity Portal: 810MB/1512MB (53%)
Oracle Database Express 11g   192.168.242.128 [OK] Oracle Express Edition started
Identity Governance           192.168.242.128 [OK] WildFly (IG) is running (pid 8050), port 8082
                                              [OK] IG is running
                                              [OK] Java heap size used by Identity Governance: 807MB/1512MB (53%)
Identity Manager              192.168.242.128 [OK] WildFly (IDM) is running (pid 5550), port 8080
                                              [OK] IDM environment is started
                                              [OK] idm-userstore-router-caim-srv-01 started
                                              [OK] Java heap size used by Identity Manager: 1649MB/4096MB (40%)
Provisioning Server           192.168.242.128 [OK] im_ps is running
                                              [OK] co file usage: 1MB/250MB (0%)
                                              [OK] inc file usage: 1MB/250MB (0%)
                                              [OK] main file usage: 9MB/250MB (3%)
                                              [OK] notify file usage: 1MB/250MB (0%)
                                              [OK] All DSAs are started
Connector Server              192.168.242.128 [OK] jcs is running
User Store                    192.168.242.128 [OK] STATS: number of objects in cache: 5
                                              [OK] file usage: 1MB/200MB (0%)
                                              [OK] UserStore_userstore-01 started
Central Log Server            192.168.242.128 [OK] rsyslogd (pid  1670) is running...
=== LAST UPDATED: Fri May  1 12:15:05 CDT 2020 ====
*** [WARN] Volume / has 13% Free space (6.2G out of 47G)
config@cluster01 VAPP-14.3.0 (192.168.242.128):~ >

A view into rotating the SSH RSA keys for the CONFIG UserID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP `/bin/date -u +%s`
IP=192.168.242.137;ssh $IP `/bin/date -u +%s`
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP              [Use -vv to troubleshoot ssh process]