Avoid the noise – IMPS etatrans alias/function “tap”

Monitoring use-cases within solutions that rely on various logs can be onerous when there is “noise,” i.e., low-value events, in the logs. For provisioning use-cases, we prefer to focus on the “CrUD” (create/update/delete) actions.

The CA/Symantec Identity Manager solution has a mid-tier component, the IM Provisioning Server, that captures quite a bit of information useful for monitoring success/failure. The default log level of the primary log file, etatrans*.log, is log level = 7. This level captures all searches and informational activity within the Provisioning Server’s service and its transactions to the connector tier.

We can reduce some of the “noise” of searches/information and focus on the “CrUD” actions of add/mod/del by lowering the log level to level = 3.

This also helps reduce disk-space usage and the rollover rate of the etatrans*.log file during bulk or feed tasks.

Challenges:

However, even with log level = 3, we still have some “noise” in the etatrans*.log.

An additional “pain point”: the etatrans*.log file is renamed upon every restart of the IMPS service and during rollover at a size of 1 MB.

Resolution:

To assist with “finding” the current file, and to remove the noise, we have created the following “function/alias” for the IMPS user ID.

  1. Log into the IMPS service ID: sudo su - imps {Ensure you use the “dash” character so that the .profile is sourced when you switch IDs}
  2. Edit the .profile file: vi .profile
  3. The current file will only have one line, that sources the primary IMPS environmental information: . /etc/.profile_imps

  4. Add the following body after the IMPS environmental profile line:

function tap () {
  # Change to the IMPS logs folder and locate the most recent etatrans log file
  cd $ETAHOME/logs
  a=$(ls -rt $ETAHOME/logs | grep etatrans | tail -1)
  pwd
  echo "Tail current log file with exclusions: "$a
  # Follow the current log (tail -F survives rollover) and exclude non-CrUD noise
  tail -F $a | grep -v -e ":LDAP" -e ":Config" -e "AUDITCONFIG" -e ":EtaServer" -e ":Bind " -e ":Unbind " -e ":Search "
}
export -f tap

This new “function/alias” will cd to the correct logs folder, tail the current etatrans*.log file, and exclude the noise of non-CrUD activity. Using the new “tap” alias on all provisioning servers allows us to isolate any challenges during use-case validation.

  5. Exit out of the IMPS user ID account; then sudo back into this account and test the “tap” alias.

  6. While using the “tap” alias, exercise use-cases within the IM Provisioning Manager (GUI) and the IM User Console (browser); monitor the “Add/Mod/Delete” actions. You will also be able to see the “child” updates to endpoints and the updates to the IMPS notification queue (IME Callback).
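For example, once the alias is active, it may be combined with additional filters to watch a single identity during use-case validation (the user ID below is a hypothetical example):

tap | grep -i "aaatestuser001"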

ADS Endpoint Configuration Challenges and Hints

  1. Ensure the hostname entry is an FQDN or alias. It cannot be an IP address if MS Exchange is to be managed through this connector, due to a conflict between Kerberos authentication and IP addresses. If the object was created with an IP address, it may be changed via Jxplorer for two (2) attributes: eTADSprimaryServer and eTADSServerName, as sketched below.
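A minimal dxmodify sketch of that change; the host, port, credentials, and endpoint name below reuse the lab values from later examples in this note and should be adjusted for your environment:

LDAPTLS_REQCERT=never dxmodify -H ldap://192.168.242.135:20389 -x -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -w Password01 << EOF
dn: eTADSDirectoryName=exchange2016,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSprimaryServer
eTADSprimaryServer: dc2016.exchange.lab
-
replace: eTADSServerName
eTADSServerName: dc2016.exchange.lab
EOF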

  2. General information on the ADS Endpoint Logging tab and where this information is stored. Only two (2) of the Destination options have value with the current deployment, e.g., Text File & System Log (MS Windows Event Viewer) for Active Directory (ADS). “Text File” will output data to two (2) files: jcs\logs\ADS\<endpoint-name>.log and ccs\logs\ADS\<endpoint-name>.log

  3. Use the MS Event Viewer on the ADS Domain Controller, or use the MS Event Viewer to remotely view the transactions on the remote ADS DC. Start with event codes 627, 628, 4723, 4724, and 4738; other useful codes may be added. Ref: https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/appendix-l--events-to-monitor

  4. Additionally, the User ID may be in one of three (3) formats: UPN (serviceid@exchange.lab), NT (domain\serviceid), or LDAP DN (cn=serviceid,ou=people,dc=exchange,dc=lab). We recommend the UPN or NT format to allow the embedded API features for MS Exchange PowerShell management to function correctly. If the ID is changed, a password update must be done as well, since the User ID is part of the seed for the service ID’s encrypted password stored in CA Directory on the ADS endpoint object.

  5. SASL versus TLS authentication checkboxes. We can test the ADS authentication availability using the ldapsearch binary. For the ports used by Active Directory for authentication by client tools, see https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/config-firewall-for-ad-domains-and-trusts

Note: SASL is encrypted traffic. If wireshark is used to intercept the traffic, the service ID may be seen during initial authentication, but NOT the password nor the payload data.

Notes on SASL validation for Active Directory. {Pro: no need to worry about TLS certificate rotation on client connections – all TLS is managed by the server}

:: Search the ADS / LDAP store for what is offered for SASL (use -x for a simple connection)
ldapsearch -x -h dc2016.exchange.lab -p 389 -b "" -LLL -s base supportedSASLMechanisms

EXAMPLE OUTPUT

[root@oracle ~]# ldapsearch -x -h dc2016.exchange.lab -p 389 -b "" -LLL -s base supportedSASLMechanisms
dn:
supportedSASLMechanisms: GSSAPI
supportedSASLMechanisms: GSS-SPNEGO
supportedSASLMechanisms: EXTERNAL
supportedSASLMechanisms: DIGEST-MD5

:: On Linux OS, execute rpm -qa to search for SASL installed modules/libraries.
rpm -qa | grep cyrus

EXAMPLE OUTPUT

[root@oracle ~]# rpm -qa | grep cyrus
cyrus-sasl-gssapi-2.1.26-23.el7.x86_64
cyrus-sasl-lib-2.1.26-23.el7.x86_64
cyrus-sasl-md5-2.1.26-23.el7.x86_64

:: On Linux OS, install missing SASL libraries & ldapsearch (ldap-client)
yum -y install cyrus-sasl-md5 cyrus-sasl-gssapi openldap-clients

TESTING DIFFERING AUTHENTICATION MECHANISMS (you may remove the -d9 debug switch to view cleaner results)

TLS

LDAPTLS_REQCERT=never ldapsearch -d9 -LLL -H ldaps://dc2016.exchange.lab:636 -w CAdemo123 -D "CN=Administrator,CN=Users,DC=exchange,DC=lab" -b "CN=Administrator,CN=Users,DC=exchange,DC=lab" -s base userAccountControl

Start TLS

LDAPTLS_REQCERT=never ldapsearch -d9 -Z -LLL -H ldap://dc2016.exchange.lab:389 -w CAdemo123 -D "CN=Administrator,CN=Users,DC=exchange,DC=lab" -b "CN=Administrator,CN=Users,DC=exchange,DC=lab" -s base userAccountControl

Digest-MD5

ldapsearch -d9 -LLL -H ldap://dc2016.exchange.lab -w CAdemo123 -Y DIGEST-MD5 -U Administrator -b "CN=Administrator,CN=Users,DC=exchange,DC=lab" -s base userAccountControl

Kerberos (GSS)

ldapsearch -d9 -LLL -H ldap://dc2016.exchange.lab -w CAdemo123 -Y GSSAPI -U Administrator -b "CN=Administrator,CN=Users,DC=exchange,DC=lab" -s base userAccountControl

  6. TCP/UDP ports required for Active Directory endpoint management per CA documentation: https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/identity-manager/14-4/reference/default-ports-for-ca-identity-manager-and-associated-components.html

SASL appears to connect on TCP 636 briefly, then use TCP 389 extensively. Other ports are 80 (Service), 135 (lsass.exe for home folders), and 6405 (lsass.exe). If Kerberos authentication is defined for the service ID, then other ports will be used, e.g., 3268/3269 (Global Catalog). TCP 4104/4105 are for the legacy CAM/CAFT agents (typically not used anymore).

Recommendation: Add these TCP Ports to any Firewall between the IM JCS/CCS Server and the Active Directory Domain Controllers to improve performance and avoid time-out delays.
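A quick reachability sketch for these TCP ports, run from a bash-capable host (e.g., the IMPS server) toward a Domain Controller; the DC hostname below is the lab value and should be adjusted:

DC=dc2016.exchange.lab
for port in 80 135 389 636 3268 3269 6405; do
  # /dev/tcp is a bash built-in path; timeout bounds filtered (silently dropped) ports
  (timeout 3 bash -c "echo > /dev/tcp/$DC/$port") 2>/dev/null \
    && echo "TCP $port open" || echo "TCP $port closed/filtered"
done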

MS Active Directory References on SASL.

https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/989e0748-0953-455d-9d37-d08dfbf3998b

https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/a98c1f56-8246-4212-8c4e-d92da1a9563b

Parallel provisioning for Active Directory and MS Exchange mailboxes – Improve Birthright/DayOne Access

One of the challenges that IAM/IAG solutions may have is single-threaded processing for select endpoints. For the CA/Symantec Identity Management solution, before IM r14.3 CP2, we lived with a single-threaded connector to manage MS Active Directory endpoints.

To address this challenge, we deployed multiple connector servers. We allowed the IM Provisioning Server (IMPS) to use a built-in round-robin approach of load-balancing separate transactions to different connector servers, which would service the same Active Directory endpoints.

The IME may be running as fast as it can with its clustered deployment, but as soon as a task involves MS Active Directory, a bottleneck forms at the CCS Service. We begin to see the IME JMS queue reporting that it is stuck, and the IME View Submitted Tasks page reporting “In Progress” for all tasks. If the CCS service is restarted, all IME tasks are then reported as “Failed.”

This was the bottleneck for the solution at sites that use MS Active Directory for Birthright/DayOne Access.

We can now avoid this bottleneck. [*** (5/24/2021) – There is an enhancement to CP2 to address im_ccs.exe crashes during peak loads discovered using this testing process. ]

Via the newly delivered enhancement (https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7154e15b-085d-469e-bff0-ac588ff6bd5b), we now have full parallel provisioning to MS Active Directory from a single connector server (JCS/CCS).

The new attribute that regulates this behavior is eTADSMaxConnectionsInPool. This attribute will be applied to every existing ADS endpoint that is currently managed by the IM Provisioning Server after CP2 is deployed. Note: the default value is 10, but after much testing we recommend matching the IMPS->JCS and JCS->CCS pool values at 200.

During testing within the IME using Bulk Tasks or the IM BLC, we can see that the CCS-> ADS traffic will reach 20-30 connections if allowed. You may set this attribute to a value of 200 via Jxplorer and/or an ldapmodify/dxmodify script.

echo "############### SET ADS MAX CONNECTIONS IN POOL SIZE ##################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
NAMESPACE=exchange2016
LDAPTLS_REQCERT=never dxmodify -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D "$IMPS_USER" -w "$IMPS_PWD" << EOF
dn: eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSMaxConnectionsInPool
eTADSMaxConnectionsInPool: 200
EOF
LDAPTLS_REQCERT=never dxsearch -LLL -H ldap://$IMPS_HOST:$IMPS_PORT -x -D "$IMPS_USER" -w "$IMPS_PWD" -b "eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" -s base eTADSMaxConnectionsInPool | perl -p00e 's/\r?\n //g'

To confirm the number of open connections is greater than one (1), we can issue a Bulk IM Task or use a performance tool like CA Directory dxsoak.

In this example, we will showcase using CA Directory dxsoak to execute 100 parallel threads to create 100 ADS accounts with MS Exchange mailboxes. We will also enclose this script for download for others to review and use.

Performance Lab:

Pre-Steps:

  1. Leverage CA Directory samples’ dxsoak binary (performance testing). You may wish to use CA Directory on an existing IM Provisioning Server (Linux OS) or you may deploy CA Directory (MS Windows version) to the JCS/CCS connector. Examples are provided for both OSes.
  2. Create LDIF files for the IM Provisioning Server and/or IM Connector Tier. These files are needed to ‘push’ the solution to failure. The use of the IME Bulk Task and/or etautil scripts to the IM Provisioning Tier will not provide the transaction speed we need to break the CCS service, if that is possible.
  3. Within the IM Provisioning Manager enable the ADS Endpoint TXT Logs on the Logging TAB, for all checkboxes.
  4. Monitor the IMPS etatrans* logs, monitor the JCS ADS logs, monitor the CCS ADS logs, monitor the number of CCS-> ADS (LDAP/S – TCP 389/636) threads. [Suggest using MS Sysinternals Process Explorer and select im_ccs.exe & then TCP/IP TAB]
  5. Monitor the MS ADS Domain via MS ADUC (AD Users & Computers UI) and MS Exchange Mailbox (Mailbox UI via Browser)

Execution:

6. Perform a UNIT TEST with dxmodify/ldapmodify to confirm the LDIF file input is correct with the correct suffix.

time dxmodify -H ldap://192.168.242.135:20389 -c -x -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -w Password01 -f ads_user_with_exchange_dc_eta.ldif

7. Perform the PERFORMANCE TEST with dxsoak binary with the same LDIF file & correct suffix. Rate observed = 23 K ids/hr

./dxsoak -c -l 60 -t 100 -h 192.168.242.135:20389 -D "eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta" -w Password01 -f ads_user_with_exchange_dc_eta.ldif

Observations:

8. IMPS etatrans*.log – Count the number of operations per second. Note any race conditions and/or data collisions, e.g., ADS accounts deleted prior to being added across the 100 threads, or the same ADS account creation attempted in different threads. A rough tally sketch follows.
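A rough tally sketch for the current etatrans*.log, assuming the CrUD transaction lines carry tokens such as ":Add", ":Mod", and ":Del" (mirroring the ":Bind"/":Search" tokens excluded by the “tap” alias earlier; adjust the tokens to what your log level emits):

cd $ETAHOME/logs
a=$(ls -rt . | grep etatrans | tail -1)
# Count each assumed CrUD token in the most recent transaction log
for op in ":Add" ":Mod" ":Del"; do
  printf "%-6s %6d\n" "$op" "$(grep -c -- "$op" "$a")"
done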

9. IM CCS ADS <endpoint>.log – Will only have useful data if the ADS Endpoint Logging tab has been checked for TXT logs.

10. Finally, validate directly in the MS Active Directory domain with ADUC or a similar tool that the accounts & MS Exchange mailboxes are being created/deleted.

11. Count the number of threads from im_ccs.exe to ADS – Suggest using the MS Sysinternals Process Explorer tool and/or PowerShell to count the number of connections.

MS PowerShell script to count the number of LDAP (TCP 389) connections from im_ccs.exe. [Note: TCP 389 is used more if the ADS Endpoint is set up to use SASL authentication; TCP 636 is used more if the ADS Endpoint is using the older TLS authentication.]

# Sample the count of established LDAP (TCP 389) connections owned by im_ccs.exe,
# once per second, for five (5) samples
$i=1
Do {
    cls
    (Get-NetTCPConnection -State Established -OwningProcess (Get-Process -name im_ccs).id -RemotePort 389).count
    Start-Sleep -s 1
    $i++
} while ($i -le 5)

Direct Performance Testing to JCS/CCS Service

While this testing has limited value, it can offer satisfaction and assist with troubleshooting any challenges. We can use the prior LDIF files with a slightly different suffix, dc=etasa (instead of dc=eta), to use dxsoak to push the connector tier to failure. This step helped provide memory dumps back to the CA/Symantec engineering teams to help isolate challenges within the parallel processing. The CCS Service is only exposed via localhost; if you wish to test the CCS Service remotely, update the MS Registry key for the CCS service to use the external IP address of the JCS/CCS Server. Rate observed = 25 K ids/hr

Script to generate 100 ADS Accounts with MS Exchange Mailbox Creation

You may wish to review this script and adjust it for your ADS / MS Exchange domains for testing. You can also create a simple LDIF file with password resets or ADS group membership adds. Just remember that the IMPS Service (TCP 20389/20390) uses the suffix dc=eta, and the IM JCS/CCS Services (TCP 20410/20411 & TCP 20402/20403) use the suffix dc=etasa. Additionally, if using CA Directory dxsoak, only use the non-TLS ports, as this binary is not equipped to use TLS certs.

#!/bin/bash
#######################################################################################################################
# Name:  Generate ADS Feed Files for IM Solution Provisioning/Connector Tiers
#
# Goal:  Validate the new parallel processes from the IM Connector Tier to Active Directory with MS Exchange
#
#
# Generate ADS User LDIF file(s) for use with unit (dxmodify) and performance testing (dxsoak) to:
#  - {Note: dxsoak will only work with non-TLS ports}
#
# IM JCS (20410)  "dc=etasa"    {Ensure MS Windows Firewall allows this port to be exposed}
# IM CCS (20402)  "dc=etasa"    {This port is localhost only, may open to network traffic via registry update}
# IMPS (20389)    "dc=eta"
#
#
# Monitor:  
#
# The IMPS etatrans*.log  {exclude searches}
# The JCS daily log
# The JCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
# The CCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
#
# Execute per the examples provided during run of this file
#
#
# ANA 05/2021
#######################################################################################################################

# Unique Variables for an ADS Domain
NAMESPACE=exchange2016
ADSDOMAIN=exchange.lab
DCDOMAIN="DC=exchange,DC=lab"
OU=People

#######################################################################################################################


MAX=100
start=00001
counter=$start
echo "###############################################################"
echo "###############################################################"
START=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo `/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`" = Current OS UTC time stamp"
echo "###############################################################"
FILE1=ads_user_with_exchange_dc_etasa.ldif
FILE2=ads_user_with_exchange_dc_eta.ldif
echo "" > $FILE1
while [ $counter -le $MAX ]
do
    n=$((10000+counter)); n=${n#1}
    tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
    echo "Counter with leading zeros = $n   at time:  $tz"


cat << EOF >> $FILE1
dn:  eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: add
objectClass:  eTADSAccount
eTADSobjectClass:  user
eTADSAccountName:  firstname$n aaalastname$n
eTADSgivenName:  firstname$n
eTADSsn:  aaalastname$n
eTADSdisplayName:  firstname$n aaalastname$n
eTADSuserPrincipalName:  aaatestuser$n@$ADSDOMAIN
eTADSsAMAccountName:  aaatestuser$n
eTPassword:  Password01
eTADSpwdLastSet:  -1
eTSuspended:  0
eTADSuserAccountControl:  0000000512
eTADSDescription:  description $tz
eTADSphysicalDeliveryOfficeName:  office
eTADStelephoneNumber:  111-222-3333
eTADSmail:  aaatestuser$n@$ADSDOMAIN
eTADSwwwHomePage:  web.page.lab
eTADSotherTelephone:  111-222-3333
eTADSurl:  other.web.page.lab
eTADSstreetAddress:  street address line01
eTADSpostOfficeBox:  pobox 111
eTADSl:  city
eTADSst:  state
eTADSpostalCode:  11111
eTADSco:  UNITED STATES
eTADSc:  US
eTADScountryCode:  840
eTADSscriptPath:  loginscript.cmd
eTADSprofilePath:  \profile\path\here
eTADShomePhone:  111-222-3333
eTADSpager:  111-222-3333
eTADSmobile:  111-222-3333
eTADSfacsimileTelephoneNumber:  111-222-3333
eTADSipPhone:  111-222-3333
eTADSinfo:  Notes Here
eTADSotherHomePhone:  111-222-3333
eTADSotherPager:  111-222-3333
eTADSotherMobile:  111-222-3333
eTADSotherFacsimileTelephoneNumber:  111-222-3333
eTADSotherIpPhone:  111-222-3333
eTADStitle:  title
eTADSdepartment:  department
eTADScompany:  company
eTADSmanager:  CN=manager_fn manager_ln,OU=$OU,$DCDOMAIN
eTADSmemberOf:  CN=Backup Operators,CN=Builtin,$DCDOMAIN
eTADSlyncSIPAddressOption: 0000000000
eTADSdisplayNamePrintable: aaatestuser$n
eTADSmailNickname: aaatestuser$n
eTADShomeMDB: (Automatic Mailbox Distribution)
eTADShomeMTA: CN=DC001,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,$DCDOMAIN
eTAccountStatus: A
eTADSmsExchRecipientTypeDetails: 0000000001
eTADSmDBUseDefaults: TRUE
eTADSinitials: A
eTADSaccountExpires: 9223372036854775807

EOF
 counter=$(( $counter + 00001 ))
done


#  Create the delete ADS Process
start=00001
counter=$start
while [ $counter -le $MAX ]
do
    n=$((10000+counter)); n=${n#1}
    tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
    echo "Counter with leading zeros = $n   at time:  $tz"


cat << EOF >> $FILE1
dn:  eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: delete

EOF
 counter=$(( $counter + 00001 ))
done

echo ""
echo "################################### ADS USER OBJECT STATS ################################################################"
echo "Number of add objects: `grep "changetype: add" $FILE1 | wc -l`"
echo "Number of delete objects: `grep "changetype: delete" $FILE1 | wc -l`"
rm -rf $FILE2
cp -r -p $FILE1 $FILE2
sed -i 's|,dc=im,dc=etasa|,dc=im,dc=eta|g' $FILE2
ls -lart $FILE1
ls -lart $FILE2

echo ""
echo "################################### SET ADS MAX CONNECTIONS IN POOL SIZE ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
LDAPTLS_REQCERT=never dxmodify  -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D "$IMPS_USER" -w "$IMPS_PWD"  << EOF
dn: eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSMaxConnectionsInPool
eTADSMaxConnectionsInPool: 200
EOF
LDAPTLS_REQCERT=never dxsearch -LLL  -H ldap://$IMPS_HOST:$IMPS_PORT -x -D "$IMPS_USER" -w "$IMPS_PWD" -b "eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" -s base eTADSMaxConnectionsInPool | perl -p00e 's/\r?\n //g'

echo ""
echo "################################### CCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20402
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the CCS Service to test single thread with dxmodify or ldapmodify"
echo "dxmodify  -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the CCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "

echo ""
echo "################################### JCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20410
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the JCS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify  -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the JCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "


echo ""
echo "################################### IMPS UNIT & PERF TEST ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
echo "Execute this command to the IMPS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify  -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "
echo "Execute this command to the IMPS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $IMPS_HOST:$IMPS_PORT -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "




Address the new bottleneck of MS Exchange / O365 Provisioning

After parallel provisioning has been introduced with the new im_ccs.exe service, you may notice that the number of transactions is still being throttled during performance testing.

Out of the box, the MS Exchange global throttling policy has the parameter PowerShellMaxConcurrency set to a default of 18 connections. Any provisioning that uses MS PowerShell for MS Exchange and/or MS O365 will be impacted by this default parameter.

To address this bottleneck, we can create a new throttling policy and assign it only to the service ID that will be managing identities, to avoid a global change.

Example (create the new policy, then assign it to the managing service ID):

New-ThrottlingPolicy MaxPowershell -PowerShellMaxConcurrency 100
Set-Mailbox "User Name" -ThrottlingPolicy MaxPowershell

After this change has been made, restart the IM JCS/CCS services and retest with your performance tools. Review the CCS ADS log for the number of creations in 60 seconds, and you will be pleasantly surprised at the rate. The logs are the strong confirmation we are looking for.

Performance test (947 ADS accounts w/ Exchange mailboxes in 60 seconds, 08:59:54 to 09:00:53) => a rate of 15 ids/second (or 54 K ids/hr) with the updated MaxPowershell = 100 throttling policy.

The last bottleneck appears to be CPU availability for the MS Exchange supporting service w3wp.exe (the MS IIS worker process), which appears to manage the MS PowerShell connections per its startup string:

" c:\windows\system32\inetsrv\w3wp.exe -ap "MSExchangePowerShellAppPool" -v "v4.0" -c "C:\Program Files\Microsoft\Exchange Server\V15\bin\GenericAppPoolConfigWithGCServerEnabledFalse.config" -a \.\pipe\iisipme304c50e-6b42-4b26-83a4-229ee037be5d -h "C:\inetpub\temp\apppools\MSExchangePowerShellAppPool\MSExchangePowerShellAppPool.config" -w "" -m 0"

LDAP MITM Methodology to isolate data challenges

The Symantec (CA/Broadcom) Directory solution provides a mechanism for routing LDAPv3 traffic to other solutions. This routing mechanism allows Symantec Directory to act as a virtual directory service for other directories, e.g., MS Active Directory, SunOne, Novell eDirectory, etc.


The Symantec Identity Suite solution uses the LDAP protocol for its mid-tier and connector-tier components. The Provisioning Server is exposed on TCP 20389/20390, the JCS (Java Connector Server) is exposed on TCP 20410/20411, and the CCS (C++ Connector Server) is exposed on TCP 20402/20403.


We wished to isolate provisioning data challenges within the Symantec Identity Management solution that were not fully viewable using the existing debugging logs & features of the provisioning and connector tiers. Using Symantec Directory, we can leverage the routing mechanism to build a MITM (man-in-the-middle) methodology to track all LDAP traffic through the Symantec Identity Manager connector tier.


We focused on the final leg of provisioning and created a process to track the JCS -> CCS LDAP traffic. We wanted to understand what data was being sent from the JCS to the CCS, and how, so we could isolate issues between the CCS service and MS Active Directory. Using the trace level of Symantec Directory, we can capture all LDAP traffic, including bind/query/add/modify actions.

The below steps showcase how to use Symantec Directory as an approved MITM process for troubleshooting exercises. We found this process more valuable than deploying Wireshark on the JCS/CCS Server and decoding the encrypted traffic for LDAP.

Background:

Symantec Directory documentation on routing. Please note the concept/feature of “set transparent-routing = true;” to avoid schema challenges when routing to other directory/LDAP solutions.

https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/directory/14-1/ca-directory-concepts/directory-distribution-and-routing.html

MITM Methodology for JCS->CCS Service:

The Symantec Identity Management connector tier may be deployed on MS Windows or Linux OS. If the CCS service is being used, then MS Windows OS is required for this MS Visual C++ component/service. As we are focused on the CCS service, we will introduce the Symantec Directory solution on the same MS Windows OS.

NOTE: We will keep the MITM process contained on a single host, and will not redirect the network traffic beyond the host.

Step 1: Deploy the latest Symantec Directory solution on MS Windows OS. This deployment is a blank slate for the next steps to follow.

Step 2: Copy the folders of schema, limits, and ssld from an existing Symantec Directory deployment of the Symantec Identity Manager solution. Using the existing schema files, references, and certificates will allow us to avoid any challenges during startup of the Router DSA due to the pre-defined provisioning/connector tier configurations. Please note that when copying from a Linux OS version of Symantec Directory, we will need to update the paths from Linux format to MS Windows format in the SSLD impd.dxc file for the “cert-dir” and “ca-file” parameters.

# DXserver/config/ssld/impd.dxc

set ssl = {
cert-dir = "C:\Program Files\CA\Directory\dxserver\config\ssld\personalities"
ca-file = "C:\Program Files\CA\Directory\dxserver\config\ssld\impd_trusted.pem"
cipher = "HIGH:!SSLv2:!EXP:!aNULL:!eNULL"
#protocol = tlsv12
fips = false
};

Step 3: Create a new Router DSA DXI configuration file. This is the primary configuration file for a Symantec Directory DSA. It references the schema, knowledge, limits, and certificates for the DSA. Note the parameters for “transparent-routing” to avoid schema challenges with other solutions, and the trace level used to trace the LDAP traffic in the Symantec Directory Router DSA trace log.

# DXserver/config/servers/admin_router_ccs_30402.dxi

# logging and tracing 
close summary-log; 
close trace-log; 
source "../logging/default.dxc"; 
 
# schema 
clear schema; 
source "../schema/impd.dxg";
 
# access controls 
clear access; 
# source "../access/"; 
 
# ssld
source "../ssld/impd.dxc";

# knowledge 
clear dsas; 
source "../knowledge/admin_router_ccs_group.dxg"; 
 
# operational settings 
source "../settings/default.dxc"; 
 
# service limits 
source "../limits/impd.dxc"; 

# database  - none - transparent router
set transparent-routing=TRUE;

# tunnel through eAdmin server error code and  messages
set route-non-compliant-ldap-error-codes = true;

set trace=ldap,time,stats;
#set trace=dsa,time;

Step 4: Create the three (3) knowledge files. The “group” knowledge file sources the other two (2) knowledge files: one for the Router DSA itself and one for the redirect DSA that points to the CCS service.

# DXserver/config/knowledge/admin_router_ccs_group.dxg 
# The admin_router_ccs_30402.dxc PORT 30402 
# will be used for the IAMCS (JCS) CCS port override configuration file
# server_ccs.properties via proxyConnectionConfig.proxyServerPort=30402

source "admin_router_ccs_30402.dxc";
source "admin_ccs_server_01.dxc";
# DXserver/config/knowledge/admin_router_ccs_30402.dxc 
# This file is sourced by admin_router_ccs_group.dxg.
 
set dsa admin_router_ccs_30402 =  
{ 
    prefix        = <> 
    dsa-name      = <dc etasa><cn admin_router_ccs_30402> 
    dsa-password  = "secret"
    address       = ipv4 localhost port 30402
    snmp-port     = 22500
    console-port  = 22501
    auth-levels   = clear-password
    dsp-idle-time = 100000 
    trust-flags = allow-check-password, trust-conveyed-originator
    link-flags    = ssl-encryption-remote
};
# DXserver/config/knowledge/admin_ccs_server_01.dxc
# This file is sourced by admin_router_ccs_group.dxg.

set dsa admin_ccs_server_01 =  
{ 
     prefix        = <dc etasa> 
     dsa-name      = <dc etasa><cn admin_ccs_server_01> 
     dsa-password  = "secret"
     address       = ipv4 localhost port 20402
     auth-levels   = clear-password
     dsp-idle-time = 100000
     dsa-flags     = load-share
     trust-flags   = allow-check-password, no-server-credentials, trust-conveyed-originator
     link-flags    = dsp-ldap
     #link-flags    = dsp-ldap, ssl-encryption
     # Note:  ssl will require update to /etc/hosts with:  <IP_Address>  eta_server

};

Step 5: Update the JCS configuration file that contains the TCP port that we will be redirecting to. In this example, we will declare TCP 30402 to be the new port.

#C:\Program Files (x86)\CA\Identity Manager\Connector Server\jcs\conf\override\server_ccs.properties

ccsWindowsController.ccsScriptPath=C:\\Program Files (x86)\\CA\\Identity Manager\\Connector Server\\ccs\\bin
proxyCCSManager.enabled=true
proxyCCSManager.startupWait=30
proxyConnectionConfig.proxyServerHostname=localhost
#proxyConnectionConfig.proxyServerPort=20402
proxyConnectionConfig.proxyServerPort=30402
proxyConnectionConfig.proxyServerUser=cn=root,dc=etasa
proxyConnectionConfig.proxyServerPassword={AES}pbj27RvWGakDKCr+DhRH4Q==
proxyConnectionConfig.proxyServerUseSsl=false
proxyCCSManager.controller.ref=ccsWindowsController

Overview of all files updated and their relationship to each other.

Validation

Start up the solution in the following order, and ensure that the new Symantec Directory Router DSA starts with no issues. If there are any syntax issues, isolate them with the command: dxserver -d start DSA_NAME.
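For example, to debug the Router DSA defined above:

dxserver -d start admin_router_ccs_30402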

Start the Router DSA first, then restart the im_jcs (JCS) service. The im_ccs (CCS) service will be auto-started by the JCS service. Wait one (1) minute, then check that both TCP ports 20402 (CCS) and 30402 (Router DSA) are in the LISTEN state. If we do not see both of these ports, stop and restart these services.

You may use MS Sysinternals Process Explorer to monitor both services; its TCP/IP tab shows which ports are being used.

A view of the im_ccs.exe and dxserver.exe services and which TCP ports they are listening on.

Use a 3rd-party LDAP client tool, such as Jxplorer, to authenticate to both the CCS and the Router DSA ports with the embedded service ID of “cn=root,dc=etasa”. We should see exactly the SAME data through both ports.
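A scripted spot-check of the same comparison, using the dxsearch binary bundled with the new Symantec Directory install on this host (the bind password below is the lab value used elsewhere in this note):

dxsearch -LLL -H ldap://localhost:20402 -x -D "cn=root,dc=etasa" -w Password01 -b "dc=etasa" -s base "(objectClass=*)"
dxsearch -LLL -H ldap://localhost:30402 -x -D "cn=root,dc=etasa" -w Password01 -b "dc=etasa" -s base "(objectClass=*)"

Both commands should return identical results, since the Router DSA transparently forwards to the CCS.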

Use the IME or IMPS to perform a query to MS Active Directory (or any other endpoint that uses the CCS connector tier). We should now see the “cache” on the CCS service populated with the endpoint information and the base DN structure. We can now track all LDAP traffic through the Router DSA MITM process.

View of trace logs

We can monitor when the JCS first binds to the CCS service.

We can monitor when the IMPS via the JCS queries if the CCS is aware of the ADS endpoint.

Finally, we can view when the IMPS service decrypts its stored information on the Active Directory endpoint and pushes this information to the CCS cache, to allow communication to MS Active Directory. Using Notepad++, we can tail the trace log.

Please note, this is a secure LDAP/S tunnel from the IMPS -> JCS -> CCS -> MS ADS.

We can now view how this data is pushed via this secure tunnel with the MITM process.

> [88] 
> [88] <-- #1 LDAP MESSAGE messageID 5
> [88] AddRequest
> [88]  entry: eTADSDirectoryName=ads2016,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
> [88]  attributes:
> [88]   type: eTADSobjectCategory
> [88]   value: CN=Domain-DNS,CN=Schema,CN=Configuration,DC=exchange,DC=lab
> [88]   type: eTADSdomainFunctionality
> [88]   value: 7
> [88]   type: eTADSUseSSL
> [88]   value: 3
> [88]   type: eTADSexchangeGroups
> [88]   value: CN=Mailbox Database 0840997559,CN=Databases,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=exchange,DC=lab
> [88]   value: CN=im,CN=Databases,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=exchange,DC=lab
> [88]   type: eTLogWindowsEventSeverity
> [88]   value: FE
> [88]   type: eTAccountResumable
> [88]   value: 1
> [88]   type: eTADSnetBIOS
> [88]   value: EXCHANGE
> [88]   type: eTLogStdoutSeverity
> [88]   value: FE
> [88]   type: eTLog
> [88]   value: 0
> [88]   type: eTLogUnicenterSeverity
> [88]   value: FE
> [88]   type: eTADSlockoutDuration
> [88]   value: -18000000000
> [88]   type: objectClass
> [88]   value: eTADSDirectory
> [88]   type: eTLogETSeverity
> [88]   value: FE
> [88]   type: eTADSmsExchSystemObjectsObjectVersion
> [88]   value: 13240
> [88]   type: eTADSsettings
> [88]   value: 3
> [88]   type: eTADSconfig
> [88]   value: ExpirePwd=0
> [88]   value: HomeDirInheritPermission=0
> [88]   type: eTLogDestination
> [88]   value: F
> [88]   type: eTADSUserContainer
> [88]   value: CN=BuiltIn;CN=Users
> [88]   type: eTADSbackupDirs
> [88]   value: 000;DEFAULT;192.168.242.156;0
> [88]   value: 001;DEFAULT;dc2016.exchange.lab;0
> [88]   value: 002;site1;server1.domain.com;0
> [88]   value: 003;site1;server2.domain.com;0
> [88]   value: 004;site2;server3.domain.com;0
> [88]   value: 005;site2;server4.domain.com;0
> [88]   type: eTADSuseFailover
> [88]   value: 1
> [88]   type: eTLogAuditSeverity
> [88]   value: FE
> [88]   type: eTADS-DefaultContext
> [88]   value: exchange.lab
> [88]   type: eTADSforestFunctionality
> [88]   value: 7
> [88]   type: eTADSAuthDN
> [88]   value: Administrator
> [88]   type: eTADSlyncMaxConnection
> [88]   value: 5
> [88]   type: eTADShomeMTA
> [88]   value: CN=Microsoft MTA,CN=EXCHANGE2016,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=exchange,DC=lab
> [88]   type: eTADSAuthPWD
> [88]   value: CAdemo123
> [88]   type: eTADSexchangelegacyDN
> [88]   value: /o=ExchangeLab/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=EXCHANGE2016/cn=Microsoft Private MDB
> [88]   type: eTLogFileSeverity
> [88]   value: F
> [88]   type: eTADSprimaryServer
> [88]   value: dc2016.exchange.lab
> [88]   type: eTADScontainers
> [88]   value: CN=Builtin,DC=exchange,DC=lab
> [88]   value: CN=Computers,DC=exchange,DC=lab
> [88]   value: OU=Domain Controllers,DC=exchange,DC=lab
> [88]   value: OU=Explore,DC=exchange,DC=lab
> [88]   value: CN=ForeignSecurityPrincipals,DC=exchange,DC=lab
> [88]   value: CN=Keys,DC=exchange,DC=lab
> [88]   value: CN=Managed Service Accounts,DC=exchange,DC=lab
> [88]   value: OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
> [88]   value: OU=o365,DC=exchange,DC=lab
> [88]   value: OU=People,DC=exchange,DC=lab
> [88]   value: CN=Program Data,DC=exchange,DC=lab
> [88]   value: CN=Users,DC=exchange,DC=lab
> [88]   value: DC=ForestDnsZones,DC=exchange,DC=lab
> [88]   value: DC=DomainDnsZones,DC=exchange,DC=lab
> [88]   type: eTADSTimeBoundMembershipsEnabled
> [88]   value: 0
> [88]   type: eTADSexchange
> [88]   value: 1
> [88]   type: eTADSdomainControllerFunctionality
> [88]   value: 7
> [88]   type: eTADSexchangeStores
> [88]   value: CN=EXCHANGE2016,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=exchange,DC=lab
> [88]   value: CN=Mailbox,CN=Transport Configuration,CN=EXCHANGE2016,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=exchange,DC=lab
> [88]   value: CN=Frontend,CN=Transport Configuration,CN=EXCHANGE2016,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=ExchangeLab,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=exchange,DC=lab
> [88]   type: eTADSKeepCamCaftFiles
> [88]   value: 0
> [88]   type: eTADSmsExchSchemaVersion
> [88]   value: 15333
> [88]   type: eTADSCamCaftTimeout
> [88]   value: 0000001800
> [88]   type: eTADSMaxConnectionsInPool
> [88]   value: 0000000101
> [88]   type: eTADSPortNum
> [88]   value: 389
> [88]   type: eTADSDCDomain
> [88]   value: DC=exchange,DC=lab
> [88]   type: eTADSServerName
> [88]   value: 192.168.242.156
> [88]   type: eTADSDirectoryName
> [88]   value: ads2016
> [88]   type: eTAccountDeletable
> [88]   value: 1
> [88] controls:
> [88]   controlType: 2.16.840.1.113730.3.4.2
> [88]   non-critical

We can now monitor all traffic and assist with troubleshooting any CCS/MS-ADS challenges.

This same MITM methodology/process may also be used for the IMPS (TCP 20389/20390) and the JCS (TCP 20410/20411) services. We have used this process to capture the IME (JIAM) LDAP traffic to the IMPS service, to isolate multiple queries for child Provisioning Roles; this capture has been used by the product team to enhance the solution and lower the startup duration of the IME in the latest releases.
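As a sketch, the same redirect pattern may be pointed at the JCS instead of the CCS; the knowledge file below is hypothetical and simply mirrors admin_ccs_server_01.dxc with the JCS port:

# DXserver/config/knowledge/admin_jcs_server_01.dxc (hypothetical - mirrors the CCS redirect file)

set dsa admin_jcs_server_01 =
{
     prefix        = <dc etasa>
     dsa-name      = <dc etasa><cn admin_jcs_server_01>
     dsa-password  = "secret"
     address       = ipv4 localhost port 20410
     auth-levels   = clear-password
     dsp-idle-time = 100000
     dsa-flags     = load-share
     trust-flags   = allow-check-password, no-server-credentials, trust-conveyed-originator
     link-flags    = dsp-ldap
};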

Binds/queries/adds/modifications all work with this approach, but we do see an issue with an OID during the IMPS ADS endpoint “explore process” on the ADS OU object. We are reviewing how to address this last challenge, which states “critical extension is unavailable” for an LDAP control property of the OU object. The OIDs captured appear to be related to SunOne/iPlanet.

WAN Latency: Rsync versus SCP

We were curious about what methods we can use to manage large files that must be copied between sites with WAN-type latency, while restricting ourselves to the processes available on the CA Identity Suite virtual appliance / Symantec IGA solution.

Leveraging VMware Workstation’s ability to introduce network latency between images allows for validation of a global password reset solution.

If we experience deployment challenges with native copy operations, we need to ensure we have alternatives to address any out-of-sync data.

The embedded CA Directory maintains the data tier in separate binary files, using a software router to join the data tier into a virtual directory. This allows for scalability and growth to accommodate the largest of sites.

We focused on the provisioning directory (IMPD) as our likely candidate for re-syncing.

Test Conditions:

  1. To ensure the data was being securely copied, we kept the requirement for SSH sessions between two (2) different nodes of a cluster.
  2. We introduced latency with the VMware Workstation NIC for one of the nodes.
  3. The four (4) IMPD Data DSAs were resized to 2500 MB each (a similar size we have seen in production at many sites).
  4. We removed the data and the folder structure from the receiving node to avoid any checksum-restart processes from gaining an unfair advantage.
  5. If the process allowed for exclusions, we did take advantage of this feature.
  6. The feature/process/commands must be available on the vApp to the ‘config’ or ‘dsa’ userIDs.
  7. The reference host/node being pulled from has the CA Directory Data DSAs offline (dxserver stop all) to prevent ongoing changes to the files during the copy operation.

Observations:

SCP without compression: Unable to exclude other files (*.tx, *.dp, UserStore) – this process took over 12 minutes to copy 10,250 MB of data.

SCP with compression: Unable to exclude other files (*.tx, *.dp, UserStore) – this process still took over 12 minutes to copy 10,250 MB of data.

Rsync without compression: This process can exclude files/folders, has built-in checksum features (to allow a restart of a file if the connection is broken), and works over SSH as well. If the folder were not deleted beforehand, this process would give artificially high-speed results. This process was able to exclude the UserStore DSA files and the transaction files (*.dp & *.tx) that are not required to be copied for use on a remote server, so only 10,000 MB (4 x 2500 MB) was copied instead of an extra 250 MB.

Rsync with compression: This process can exclude files/folders, has built-in checksum features (to allow a restart of a file if the connection is broken), and works over SSH as well. This process was the winner, with amazing performance over the other processes.

Total Time: 1 min 10 seconds for 10,000 MB of data over a WAN latency of 70 ms (140 ms R/T)

Now that we have found our winner, we need to do a few post-steps to use the copied files. CA Directory, to maintain uniqueness between peer members of the multi-write (MW) group, uses a unique name for each data folder and data file. On the CA Identity Suite / Symantec IGA virtual appliance, a pseudo-nomenclature with two (2) digits is used.

The next step is to rename the folders and the files. Since the vApp is locked down against installing other tools that may be available for rename operations, we utilized the find and mv commands with a regular-expression process to assist with these two (2) steps, as extracted below.
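The two (2) rename operations, pulled from the session capture below and annotated (source node 01 renamed to node 03 in this example):

# Rename the four (4) IMPD data folders from the source node number (01) to this node (03)
find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \;
# Rename the *.db files within the renamed folders the same way
find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \;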

Complete Process Summarized with Validation

The below process was written for the default shell (‘csh’) of the ‘dsa’ userID. If the shell is changed to ‘bash’, update accordingly.

The below process also utilized an SSH RSA private/public key that was previously generated for the ‘dsa’ user ID. If you are using the vApp, log in with the config userID, then su - dsa to complete the necessary steps. You may need to add a copy operation between the dsa & config userIDs.

Summary of using rsync with find/mv to rename copied IMPD *.db files/folders
[dsa@pwdha03 ~/data]$ dxserver status
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ dxserver stop all > & /dev/null
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$ eval `ssh-agent` && ssh-add
Agent pid 5395
Enter passphrase for /opt/CA/Directory/dxserver/.ssh/id_rsa:
Identity added: /opt/CA/Directory/dxserver/.ssh/id_rsa (/opt/CA/Directory/dxserver/.ssh/id_rsa)
[dsa@pwdha03 ~/data]$ rm -rf *
[dsa@pwdha03 ~/data]$ du -hs
4.0K    .
[dsa@pwdha03 ~/data]$ time rsync --progress -e 'ssh -ax' -avz --exclude "User*" --exclude "*.dp" --exclude "*.tx" dsa@192.168.242.135:./data/ $DXHOME/data
FIPS mode initialized
receiving incremental file list
./
ca-prov-srv-01-impd-co/
ca-prov-srv-01-impd-co/ca-prov-srv-01-impd-co.db
  2500000000 100%  143.33MB/s    0:00:16 (xfer#1, to-check=3/9)
ca-prov-srv-01-impd-inc/
ca-prov-srv-01-impd-inc/ca-prov-srv-01-impd-inc.db
  2500000000 100%  153.50MB/s    0:00:15 (xfer#2, to-check=2/9)
ca-prov-srv-01-impd-main/
ca-prov-srv-01-impd-main/ca-prov-srv-01-impd-main.db
  2500000000 100%  132.17MB/s    0:00:18 (xfer#3, to-check=1/9)
ca-prov-srv-01-impd-notify/
ca-prov-srv-01-impd-notify/ca-prov-srv-01-impd-notify.db
  2500000000 100%  130.91MB/s    0:00:18 (xfer#4, to-check=0/9)

sent 137 bytes  received 9810722 bytes  139161.12 bytes/sec
total size is 10000000000  speedup is 1019.28
27.237u 5.696s 1:09.43 47.4%    0+0k 128+19531264io 2pf+0w
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-01-impd-co  ca-prov-srv-01-impd-inc  ca-prov-srv-01-impd-main  ca-prov-srv-01-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data/ -mindepth 1 -type d -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ ls
ca-prov-srv-03-impd-co  ca-prov-srv-03-impd-inc  ca-prov-srv-03-impd-main  ca-prov-srv-03-impd-notify
[dsa@pwdha03 ~/data]$ find $DXHOME/data -depth -name '*.db' -exec bash -c 'mv  $0 ${0/01/03}' {} \; > & /dev/null
[dsa@pwdha03 ~/data]$ dxserver start all
Starting all dxservers
ca-prov-srv-03-impd-main starting
..
ca-prov-srv-03-impd-main started
ca-prov-srv-03-impd-notify starting
..
ca-prov-srv-03-impd-notify started
ca-prov-srv-03-impd-co starting
..
ca-prov-srv-03-impd-co started
ca-prov-srv-03-impd-inc starting
..
ca-prov-srv-03-impd-inc started
ca-prov-srv-03-imps-router starting
..
ca-prov-srv-03-imps-router started
[dsa@pwdha03 ~/data]$ du -hs
9.4G    .
[dsa@pwdha03 ~/data]$


Note: An enhancement request has been opened to allow the ‘dsa’ userID to use remote SSH processes, to address any challenges if the IMPD Data DSAs need to be copied or retained for backup processes.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

Example for vApp Patches:

Note: There is no major difference in speed if the files being copied are already compressed. The below image shows that the initial copy runs at the rate of the network w/ latency. The value gained from using rsync is still the checksum feature that allows an auto-restart where it left off.

vApp Patch process refined to a few lines (to three nodes of a cluster deployment)

# PATCHES
# On Local vApp [as config userID]
mkdir -p patches  && cd patches
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-VA-140300-0002.tar.gpg
curl -L -O ftp://ftp.ca.com/pub/CAIdentitySuiteVA/cumulative-patches/14.3.0/CP-IMV-140300-0001.tgz.gpg
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg           [Patch VA prior to any solution patch]
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
cd ..
# Push from one host to another via scp
IP=192.168.242.136;scp -r patches  config@$IP:
IP=192.168.242.137;scp -r patches  config@$IP:
# Push from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.136;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
IP=192.168.242.137;rsync --progress -e 'ssh -ax' -avz $HOME/patches config@$IP:
# Pull from one host to another via rsync over ssh          [Minor gain for compressed files]
IP=192.168.242.135;rsync --progress -e 'ssh -ax' -avz config@$IP:./patches $HOME

# View the files were patched
IP=192.168.242.136;ssh -tt config@$IP "ls -lart patches"
IP=192.168.242.137;ssh -tt config@$IP "ls -lart patches"

# On Remote vApp Node #2
IP=192.168.242.136;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

# On Remote vApp Node #3
IP=192.168.242.137;ssh $IP
cd patches
screen    [will open a new bash shell ]
patch_vapp CP-VA-140300-0002.tar.gpg
patch_vapp CP-IMV-140300-0001.tgz.gpg
exit          [exit screen]
exit          [exit to original host]

View of rotating the SSH RSA key for CONFIG User ID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP `/bin/date -u +%s`
IP=192.168.242.137;ssh $IP `/bin/date -u +%s`
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP 				[Use -vv to troubleshoot ssh process]

Avoid locking a userID in a Virtual Appliance

The below post describes enabling the .ssh private/public key process for the provided service IDs, to avoid dependency on a password that may be forgotten, and how to leverage the service IDs to address potential CA Directory data sync challenges that may occur when there is WAN network latency between remote cluster nodes.

Background:

The CA/Broadcom/Symantec Identity Suite (IGA) solution provides for a software virtual appliance. This software appliance is available on Amazon AWS as a pre-built AMI image that allows for rapid deployment.

The software appliance is also offered as an OVA file for VMware ESXi/Workstation deployment.

Challenge:

If the primary service ID is locked or its password is allowed to expire, then the administrator will likely have only two (2) options:

1) Request assistance from the Vendor (for a supported process to reset the service ID – likely with a 2nd service ID “recoverip”)

2) Boot from an ISO image (if allowed) to mount the vApp as a data drive and update the primary service ID.

Proposal:

Add a standardized SSH RSA private/public key to the primary service ID, if one does not exist. If it exists, validate that you are able to authenticate and copy files between cluster nodes with the existing .ssh files. Rotate these files per internal security policies, e.g., once per year.

The focus for this entry is on the CA ‘config’ and ‘ec2-user’ service IDs.

An enhancement request has been added to have the ‘dsa’ userID included in the file ‘/etc/ssh/ssh_allowed_users’, to allow the same .ssh RSA process to address challenges during deployments where the CA Directory Data DSA did not fully copy from one node to another node.

https://community.broadcom.com/participate/ideation-home/viewidea?IdeationKey=7c795c51-d028-4db8-adb1-c9df2dc48bff

AWS vApp: ‘ec2-user’

The primary service ID for remote SSH access on Amazon AWS, ‘ec2-user’, is already deployed with an .ssh RSA private/public key. This is a requirement for AWS deployments and has been enabled to use this process.

This feature allows access via the private key from a remote SSH session using Putty/MobaXterm or similar tools. Another feature may be leveraged by updating the ‘ec2-user’ .ssh folder to allow other nodes to connect with this service ID, to assist with the deployment of patch files.

As an example, enabling the .ssh process between multiple cluster nodes reduces scp operations from remote workstations. Previously, if there were five (5) vApp nodes, patching them would require uploading the patch directly to each of the five (5) nodes. With the .ssh process enabled between all nodes for the ‘ec2-user’ service ID, we only need to upload patches to one (1) node, then use an scp process to push these patch file(s) from one node to the other cluster nodes.

On-Prem vApp: ‘config’

We wish to emulate this process for on-prem vApp servers to reduce I/O for any files to be uploaded and/or shared.

This process has strong value when CA Directory *.db files are out-of-sync or during initial deployment, there may be network issues and/or WAN latency.

Below is an example to create and/or rotate the private/public SSH RSA files for the ‘config’ service ID.

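A condensed recap of the creation/rotation commands shown in the “View of rotating the SSH RSA key for CONFIG User ID” section above:

ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys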

Below is an example to push the newly created SSH RSA files to the remote host(s) of the vApp cluster. After this step, we can now use scp processes to assist with remediation efforts within scripts without a password stored as clear text.
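A condensed recap of the push commands from the same section, tarring the .ssh folder and extracting it on a remote node:

tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"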

Copy the RSA folder to your workstation, to add to your Putty/MobaXterm or similar SSH tool, to allow remote authentication using the public key.

If you have any issues, use the embedded verbose logging within the ssh client tool (-vv) to identify the root issue.

ssh -vv userid@remote_hostname

Example:

config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > eval `ssh-agent` && ssh-add
Agent pid 5717
Enter passphrase for /home/config/.ssh/id_rsa:
Identity added: /home/config/.ssh/id_rsa (/home/config/.ssh/id_rsa)
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ >
config@vapp0001 VAPP-14.1.0 (192.168.242.146):~ > ssh -vv config@192.168.242.128
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.242.128 [192.168.242.128] port 22.
debug1: Connection established.
debug1: identity file /home/config/.ssh/identity type -1
debug1: identity file /home/config/.ssh/identity-cert type -1
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /home/config/.ssh/id_rsa type 1
debug1: identity file /home/config/.ssh/id_rsa-cert type -1
debug1: identity file /home/config/.ssh/id_dsa type -1
debug1: identity file /home/config/.ssh/id_dsa-cert type -1
debug1: identity file /home/config/.ssh/id_ecdsa type -1
debug1: identity file /home/config/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: hmac-sha1,hmac-sha2-256,hmac-sha2-512
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-sha1
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug2: mac_setup: found hmac-sha1
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 141/320
debug2: bits set: 1027/2048
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '192.168.242.128' is known and matches the RSA host key.
debug1: Found key in /home/config/.ssh/known_hosts:2
debug2: bits set: 991/2048
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/config/.ssh/id_rsa (0x5648110d2a00)
debug2: key: /home/config/.ssh/identity ((nil))
debug2: key: /home/config/.ssh/id_dsa ((nil))
debug2: key: /home/config/.ssh/id_ecdsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug1: Unspecified GSS failure.  Minor code may provide more information
Improper format of Kerberos configuration file

debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Offering public key: /home/config/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug2: input_userauth_pk_ok: SHA1 fp 39:06:95:0d:13:4b:9a:29:0b:28:b6:bd:3d:b0:03:e8:3c:ad:50:6f
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Thu Apr 30 20:21:48 2020 from 192.168.242.146

CA Identity Suite Virtual Appliance version 14.3.0 - SANDBOX mode
FIPS enabled:                   true
Server IP addresses:            192.168.242.128
Enabled services:
Identity Portal               192.168.242.128 [OK] WildFly (Portal) is running (pid 10570), port 8081
                                              [OK] Identity Portal Admin UI is available
                                              [OK] Identity Portal User Console is available
                                              [OK] Java heap size used by Identity Portal: 810MB/1512MB (53%)
Oracle Database Express 11g   192.168.242.128 [OK] Oracle Express Edition started
Identity Governance           192.168.242.128 [OK] WildFly (IG) is running (pid 8050), port 8082
                                              [OK] IG is running
                                              [OK] Java heap size used by Identity Governance: 807MB/1512MB (53%)
Identity Manager              192.168.242.128 [OK] WildFly (IDM) is running (pid 5550), port 8080
                                              [OK] IDM environment is started
                                              [OK] idm-userstore-router-caim-srv-01 started
                                              [OK] Java heap size used by Identity Manager: 1649MB/4096MB (40%)
Provisioning Server           192.168.242.128 [OK] im_ps is running
                                              [OK] co file usage: 1MB/250MB (0%)
                                              [OK] inc file usage: 1MB/250MB (0%)
                                              [OK] main file usage: 9MB/250MB (3%)
                                              [OK] notify file usage: 1MB/250MB (0%)
                                              [OK] All DSAs are started
Connector Server              192.168.242.128 [OK] jcs is running
User Store                    192.168.242.128 [OK] STATS: number of objects in cache: 5
                                              [OK] file usage: 1MB/200MB (0%)
                                              [OK] UserStore_userstore-01 started
Central Log Server            192.168.242.128 [OK] rsyslogd (pid  1670) is running...
=== LAST UPDATED: Fri May  1 12:15:05 CDT 2020 ====
*** [WARN] Volume / has 13% Free space (6.2G out of 47G)
config@cluster01 VAPP-14.3.0 (192.168.242.128):~ >

A view into rotating the SSH RSA keys for the CONFIG UserID

# CONFIG - On local vApp host
ls -lart .ssh     [view any prior files]
echo y | ssh-keygen -b 4096 -N Password01 -C $USER -f $HOME/.ssh/id_rsa
IP=192.168.242.135;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.136;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
IP=192.168.242.137;ssh-keyscan -p 22 $IP >> .ssh/known_hosts
cp -r -p .ssh/id_rsa.pub .ssh/authorized_keys
rm -rf /tmp/*.$USER.ssh-keys.tar
tar -cvf /tmp/`/bin/date -u +%s`.$USER.ssh-keys.tar .ssh
ls -lart /tmp/*.$USER.ssh-keys.tar
eval `ssh-agent` && ssh-add           [Enter Password for SSH RSA Private Key]
IP=192.168.242.136;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
IP=192.168.242.137;scp `ls /tmp/*.$USER.ssh-keys.tar`  config@$IP:
USER=config;ssh -tt $USER@192.168.242.136 "tar -xvf *.$USER.ssh-keys.tar"
USER=config;ssh -tt $USER@192.168.242.137 "tar -xvf *.$USER.ssh-keys.tar"
IP=192.168.242.136;ssh $IP "/bin/date -u +%s"     [Verify passwordless login; date runs on the remote host]
IP=192.168.242.137;ssh $IP "/bin/date -u +%s"     [Verify passwordless login; date runs on the remote host]
IP=192.168.242.136;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
IP=192.168.242.137;ssh -vv $IP              [Use -vv to troubleshoot ssh process]
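The per-host steps above can be consolidated into a simple loop (a sketch, assuming the same host list and that the unlocked key is already loaded in ssh-agent):

USER=config
for IP in 192.168.242.136 192.168.242.137; do
   scp /tmp/*.$USER.ssh-keys.tar $USER@$IP:                  # push the key bundle
   ssh -tt $USER@$IP "tar -xvf *.$USER.ssh-keys.tar"         # unpack into the remote home folder
   ssh $USER@$IP "/bin/date -u +%s"                          # verify passwordless login works
done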

Advanced Oracle JDBC Logging

One of the challenges for a J2EE application is to understand the I/O operations to the underlying database.

The queries, stored procedures, and prepared statements all have value to the J2EE application, but during the RCA (root-cause analysis) process it can be challenging to identify where gaps or improvements may be made. Improvements may come from the vendor of the J2EE application (via a support ticket/enhancement request) or from custom client efforts for business-rule SQL processes or embedded JARs with JDBC logic.

To assist with these RCA efforts, we examined four (4) areas of value when analyzing an application using an Oracle 12c/18c+ database.

  1. J2EE logging using embedded features for datasources. This includes the JDBC spy process for Wildfly/JBoss, and the ability to dynamically change logging levels or on/off features with the jboss-cli.sh process.

  2. Intercept JDBC processes – an intermediate JDBC jar that adds additional logging of any JDBC process, e.g. a Wireshark-type process. [As this requires additional changes, we describe this process last.]

  3. Diagnostics JDBC Jars – Oracle provides “Diagnosability in JDBC” jars with the format ojdbcX_g.jar that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

  4. AWR (Automatic Workload Repository) Reports – using either the command line (SQLplus) or the Oracle SQL Developer GUI.

J2EE Logging (JDBC Spy)

An example of the above process using the Wildfly/JBoss jboss-cli.sh process is provided below in a CLI script. The high value of this process is that it takes advantage of OOTB existing features that do NOT require any additional files to be downloaded or installed.

This process is implemented via a flat file of commands for the JBoss management CLI, which is passed as input to the jboss-cli.sh process. (If you are not running under the context of the Wildfly/JBoss user, you will need to use the add-user.sh process to create management credentials.)

The script below focuses on the active databases for the Symantec/Broadcom/CA Identity Management solution’s ObjectStore (OS) and TaskPersistence (TP) databases. JDBC debug entries will be written to the server.log.

# Name:  Capture JDBC queries/statements for TP and OS with IM business rules
# Filename: im_jdbc_spy_for_tp_and_os.cli
# /apps/CA/wildfly-idm/bin/jboss-cli.sh --connect  --file=im_jdbc_spy_for_tp_and_os.cli
# /opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01!  --file=im_jdbc_spy_for_tp_and_os.cli
#
connect
echo "Take a snapshot backup of IM configuration file before any changes"
:take-snapshot
# Query values before setting them
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo "Enable JDBC Spy for TP and OS DB"
echo
# Always use batch with run-batch - will auto rollback on any errors for updates
batch
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:write-attribute(name=spy,value=true)
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:write-attribute(name=spy,value=true)
/subsystem=jca/cached-connection-manager=cached-connection-manager/:write-attribute(name=error,value=true)
run-batch
#/subsystem=logging/logger=jboss.jdbc.spy/:read-resource(recursive=false)
echo "Is JDBC Spy enabled for TP DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:read-attribute(name=spy)
echo "Is JDBC Spy enabled for OS DB : result = ?"
echo
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:read-attribute(name=spy)
echo ""

echo "Check if logger is enabled already; if not then set"
# Use log updates in separate batch process to prevent rollback of entire process if already set
echo
batch
/subsystem=logging/logger=jboss.jdbc.spy/:add(level=TRACE)
run-batch

#
#
# Restart the IM service (reload does not work 100%)
# stop_im start_im restart_im

A view of executing the above script. Monitor the server.log for JDBC updates.

If you have access to install the X11 drivers for the OS, you may also view the new updates to the JBoss/Wildfly datasources.
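To revert when the RCA effort is complete, a companion CLI file may be used (a sketch mirroring the attributes set above; restart the IM service afterwards as noted):

# Name:  Disable JDBC spy for TP and OS and remove the TRACE logger
# Filename: im_jdbc_spy_off.cli
connect
batch
/subsystem=datasources/data-source=iam_im-imtaskpersistencedb-ds/:write-attribute(name=spy,value=false)
/subsystem=datasources/data-source=iam_im-imobjectstoredb-ds/:write-attribute(name=spy,value=false)
run-batch
batch
/subsystem=logging/logger=jboss.jdbc.spy/:remove
run-batch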

Diagnostics JDBC Jars

Oracle provides “Diagnosability in JDBC” jars with the format ojdbcX_g.jar that are enabled with the JVM switch “-Doracle.jdbc.Trace=true”.

Ref: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjdbc/JDBC-diagnosability.html#GUID-4925EAAE-580E-4E29-9B9A-84143C01A6DC

This diagnostic feature has been available for quite some time and operates outside of the JBOSS/Wildfly tier, by focusing only on the Oracle Debug Jar and Java, which are managed through two (2) JVM switches.
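As a sketch, for a standalone Java process the two (2) switches are typically passed as JVM options (the jar names and main class below are placeholders; note the Wildfly 8.x caveat in the next paragraph):

java -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=OracleLog.properties -cp ojdbc8_g.jar:app.jar com.example.MainClass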

A challenge exists for the CA Identity Manager solution on Wildfly 8.x: the default logging process uses the newer JBoss logging modules. This appears to interfere with the direct use of the properties file for the Oracle debugging process; the solution appears to ignore the JVM switch “-Djava.util.logging.config.file=OracleLog.properties”.

To address the above challenge, without extensively changing the CA IM solution logging modules, we integrated into the existing process.

This process will require a new jar deployed, older jars renamed, and two (2) reference XML files updated for Wildfly/Jboss.

AWR Reports

This is a common method that DBAs will use to assist application owners to identify I/O or other challenges that may be impacting performance. The ability to execute the AWR Report may be delegated to a non-system service ID (schema IDs).

This process may be executed at the SQLplus prompt, but for application owners it is best executed from the Oracle SQL Developer GUI. The process outlined in the one (1) slide diagram below showcases how to execute an AWR report and how to generate one for our requirements.

This process assumes the administrator has access to a DB schema ID that has access to the AWR (View/DBA) process. If not, we have included the minimal access required.
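If the schema ID lacks this access, the commonly cited minimal grants are along the following lines (a sketch to review with your DBA; the schema name app_schema_id is hypothetical):

sqlplus / as sysdba << 'EOF'
GRANT SELECT_CATALOG_ROLE TO app_schema_id;
GRANT EXECUTE ON DBMS_WORKLOAD_REPOSITORY TO app_schema_id;
EOF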

Intercept JDBC processes

One of the more interesting processes is a 3rd party “man-in-the-middle” approach that will capture all JDBC traffic. We can think of this as a Wireshark-type process.

We have left this process to last, as it does require additional modification to the JBoss/Wildfly environment. If the solution is deployed standalone, the changes are straightforward. If the solution is on the Virtual Appliance, we will need to review alternative methods or request access to add additional modules to the appliance.

For this process, we chose the p6spy process.

A quick view of the JDBC calls being sent from the IM solution to the Oracle database, with values populated. We can also capture the returned data, but this would be a very large amount of data.

A quick view of the “spy.log” output for the CA IM TP database during startup.


#!/bin/bash
##################################################
# Name: p6spy.sh
#
# Goal:  Deploy the p6spy jar to assist with RCA for IM solution
# Ref:  https://github.com/p6spy/p6spy
#
# A. Baugher, ANA, 04/2020
##################################################
JBOSS_HOME=/opt/CA/wildfly-idm
USER=wildfly
GROUP=wildfly

mkdir -p /tmp/p6spy
rm -rf /tmp/p6spy/*.jar
cd /tmp/p6spy
time curl -L -O https://repo1.maven.org/maven2/p6spy/p6spy/3.9.0/p6spy-3.9.0.jar
ls -lart *.jar
md5sum p6spy-3.9.0.jar
rm -rf $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
mkdir -p $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cp -r -p p6spy-3.9.0.jar $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
ls -lart $JBOSS_HOME/modules/system/layers/base/com/p6spy/main
cd /opt/CA/wildfly-idm/modules/system/layers/base/com/p6spy/main
cat << EOF >> module.xml
<module xmlns="urn:jboss:module:1.0" name="com.p6spy">
    <resources>
        <resource-root path="p6spy-3.9.0.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
        <module name="com.ca.iam.jdbc.oracle"/>
    </dependencies>
</module>
EOF
chown -R $USER:$GROUP $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 755  $JBOSS_HOME/modules/system/layers/base/com/p6spy
chmod -R 664  $JBOSS_HOME/modules/system/layers/base/com/p6spy/main/*
ls -lart
cat module.xml
# Update spy.properties file
curl -L -O https://raw.githubusercontent.com/p6spy/p6spy/master/src/main/assembly/individualFiles/spy.properties
rm -rf  $JBOSS_HOME/standalone/tmp/spy.properties
cp -r -p spy.properties $JBOSS_HOME/standalone/tmp/spy.properties
chown -R $USER:$GROUP $JBOSS_HOME/standalone/tmp/spy.properties
chmod -R 666 $JBOSS_HOME/standalone/tmp/spy.properties
ls -lart $JBOSS_HOME/standalone/tmp/spy.properties


A view with “results” enabled. Results may be viewed in binary or text form (excludecategories=info and excludebinary=true).
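A sketch of setting those two values in the spy.properties file deployed by the script above (assuming the keys appear, possibly commented out, in the sample file):

JBOSS_HOME=/opt/CA/wildfly-idm
sed -i -e 's/^#\?excludecategories=.*/excludecategories=info/' \
       -e 's/^#\?excludebinary=.*/excludebinary=true/' $JBOSS_HOME/standalone/tmp/spy.properties
grep -e "^excludecategories" -e "^excludebinary" $JBOSS_HOME/standalone/tmp/spy.properties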
Enable these configurations in the CA IM standalone-full-ha.xml file; focus only on the CA IM TP database (where most activity resides):
<drivers>
    <driver name="p6spy" module="com.p6spy">
        <driver-class>com.p6spy.engine.spy.P6SpyDriver</driver-class>
    </driver>
    <driver name="ojdbc" module="com.ca.iam.jdbc.oracle">
        <driver-class>oracle.jdbc.OracleDriver</driver-class>
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
    </driver>
</drivers>

<!-- ##############################  BEFORE P6SPY ##########################
<datasource jta="false" jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="iam_im-imtaskpersistencedb-ds" enabled="true" use-java-context="true" spy="true">
    <connection-url>jdbc:oracle:thin:@//database_srv:1521/xe</connection-url>
    <driver>ojdbc</driver>
     ##############################  BEFORE P6SPY ##########################
-->

<!-- <datasource jndi-name="java:/jdbc/p6spy" enabled="true" use-java-context="true" pool-name="p6spyPool"> -->

<datasource jndi-name="java:/iam/im/jdbc/jdbc/idm" pool-name="p6spyPool" enabled="true" use-java-context="true">
    <connection-url>jdbc:p6spy:oracle:thin:@//database_srv:1521/xe</connection-url>
    <driver>p6spy</driver>
    <!-- remaining datasource elements (pool, security, validation) are unchanged -->
</datasource>

Error messages may occur during setup or misconfiguration; review the server.log and spy.log for module or driver loading errors.

References:

https://p6spy.readthedocs.io/en/latest/index.html
https://p6spy.readthedocs.io/en/latest/configandusage.html#common-property-file-settings   [doc on JDBC intercept settings]
https://github.com/p6spy/p6spy/blob/master/src/main/assembly/individualFiles/spy.properties   [sample spy.properties file for JDBC intercepts]
https://github.com/p6spy/p6spy   [git]
https://mvnrepository.com/artifact/p6spy/p6spy   [jar]

Extra(s)

A view into the ojdbc8_g.jar for the two (2) system properties that are typically all that is required to use this logging functionality.

oracle.jdbc.Trace and java.util.logging.config.file
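A minimal OracleLog.properties sketch, based on the property names in the Oracle diagnosability reference above (the file location and log pattern are assumptions):

cat << 'EOF' > /opt/CA/wildfly-idm/OracleLog.properties
.level=SEVERE
oracle.jdbc.level=FINEST
oracle.jdbc.handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.level=FINEST
java.util.logging.FileHandler.pattern=/tmp/oracle_jdbc_trace.log
java.util.logging.FileHandler.count=2
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
EOF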

This town is big enough for us all: Expanding the CA Provisioning Tier Schema to 900+ Custom Fields

Based on recent requests, we wished to revisit this “hidden” gem: expanding the CA Identity Suite Provisioning schema to meet unique business requirements, enabling 100’s of SaaS and on-prem applications/endpoints to apply custom business logic to users’ endpoint account attributes.

Since the early days of the CA Identity Suite solution (eAdmin r8.1sp2), there has been a provisioning SDK that provides an approved process to extend the CA Identity Manager IMPD (provisioning directory) schema from the default of 99 user custom fields by an additional 900 user custom fields. For comparison, the default 99 user custom fields, together with the standard 40-50 default user profile fields, e.g. givenName (First Name), sn (Last Name), userID, telephone #, etc., typically meet most business use-cases.

Unfortunately, this extended schema process is not well known.

The only known documentation is an embedded readme.txt within a compressed package. Occasionally there will be support tickets or community notes that request this feature as an “enhancement”.

This package is included in the Provisioning SDK download; for IM r14.3, the file name is:

Component Name: CA Identity Manager r14.3 Legacy components
File: GEN500000000002780.zip ~ 200 MB

Background:

The CA Identity Suite (Identity Manager) Provisioning Tier does NOT attempt to be a meta-directory; it acts as a virtual directory to the 1000’s of managed endpoints/userstores/applications. As long as the “explore” operation was successful, there will be a “pointer” object that references the correct location of the endpoint accounts. When a “correlation” operation occurs, this endpoint account “pointer” object is attached (via inclusion referential objects) to the associated global user ID.

By using this “virtual directory” architecture, it is possible for IM business rules or 3rd party tools to directly view the “real data” of the 1000’s of managed endpoints, and not a “stored” representation of this data.

However, some clients do wish to “collect” the native data and store it within the IMPD provisioning store as SNAPSHOT data, to monitor for non-approved / OOB (out-of-band) access. If some fields are dedicated to select endpoints, the default of 99 custom fields may quickly run out.

Tackling Case-insensitivity Requirement:

Adjusting the IMPD schema for case-insensitivity allows for case-insensitive correlation rules and, if the new fields are exposed to the IME, case-insensitive comparisons for business rules (PX).

Challenge:

The above Provisioning SDK process will build the extended eTCustomField100-999 and eTCustomFieldName100-999 attributes with case=sensitive. Interestingly, we did not identify a requirement for case sensitivity with the default custom fields, but it does appear this was a decision made when the SDK was created. Note that the OOTB etrust_admin.schema file (for the IMPS data) displays a mix of case sensitivity for the default custom fields eTCustomField00-99 and eTCustomFieldName00-99.

Proposal:

To address this new requirement, there are three (3) possible deployments to enable this extended schema. We will review the pros/cons of each deployment choice.

Supporting Note 1:

  • eTCustomFieldXXX is the attribute that will contain a value.
  • eTCustomFieldNameXXX is the attribute that will contain a business name for this custom field.

Supporting Note 2:

The CA IM Provisioning Tier was/is developed with early x86 MS VC++ code. We attempted to use a later release of the MS Visual Studio VC solution for this process, but it failed to generate the output files.

Phase 1 Steps: Enhance the IM Provisioning Tier with 900 new custom fields with case = insensitive.

  1. Download & install MS Visual Studio VC 2010 Express, to have access to the ‘nmake’ executable.
  2. Update the OS PATH variable to reference the MS VC 2010 bin folder.
  3. Execute the nmake binary, to ensure it is working fine:
    • where nmake & nmake /?
  4. Download & install the CA IM Provisioning SDK on the same server/workstation as the ‘nmake’ binary.
    • IM r14.3 GEN50000000002780.zip 200 MB
  5. Open a command-line window; then change folder to the Provisioning SDK’s COSX Samples folder:

cd "C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\samples\COSX"

  6. Execute the gencosx.bat batch file to generate the additional schema for N attributes:

gencosx.bat 900    {Max allowed value is 900, which will generate attributes 100-999}

The output text file: cosxparse.pty

**** The above steps only need to be executed ONCE on a workstation. After the output text file is generated, we only need to retain this file for future updates. ****

################################################################

  7. Use Notepad++ to search and replace a string in the file cosxparse.pty, changing

“case=sensitive” to “case=insensitive”

{We may be selective and only replace a few attributes instead of all 900 additional attributes.} A scripted alternative is shown below.
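If the file is available on a host with sed (e.g., Linux or Git Bash), the same replacement can be scripted; a minimal sketch:

sed -i 's/case=sensitive/case=insensitive/g' cosxparse.pty
grep -c "case=insensitive" cosxparse.pty     {confirm the replacement count}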

  8. Execute the following commands to generate the binary file:
    • Use the batch file to set environment values for the nmake program:
      "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
    • Execute ‘nmake’:
      nmake

The new output file (binary) will be:
C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\data\cosxparse.ptt

  9. Before overwriting existing files, back up the three (3) prior files from the IMPS/CCS data folders & the IMPD schema folder:
    etrust_cosx.schema
    etrust_cosx.dxc
    cosxparse.ptt
  10. Copy the file, cosxparse.ptt, to the IMPS server data folder.
  11. Stop the IMPS service: su - imps, then imps stop
  12. Execute the following command: schemagen -n COSX
  13. This process will create two (2) new output files:
    • etrust_cosx.dxc
    • etrust_cosx.schema
  14. Validate that the two (2) newly generated files have case-insensitivity set.
  15. Copy etrust_cosx.dxc to all CA Directory schema folders, including DX routers (on IMPS servers).
    • Validate this file is referenced in the IMPD group knowledge schema file: etrust_admin.dxg
  16. Copy etrust_cosx.schema & cosxparse.ptt to all CA IMPS servers, the CCS servers’ data folders, & the CA IMPS GUI data folder.
    • Validate the file, etrust_cosx.schema, is referenced in the IMPS configuration file: etrust_admin.conf
  17. Restart the CA Directory and IMPS/CCS services:
    • dxserver stop all / dxserver start all
    • imps stop / imps start
    • net stop im_jcs / net start im_jcs   {this will also restart the im_ccs service}
  18. With the IMPS GUI:
    • Assign a ‘business name’ to the newly created eTCustomField100+ under
      SYSTEM/GLOBAL PROPERTIES/CUSTOM USER FIELDS
      {If you do not see these newly created fields, then the IMPS GUI data folder was not updated per step 16.}
    • Validate that E&C Correlation Rules will now work for these extended fields with case-insensitivity.
      SYSTEM/DOMAIN CONFIGURATIONS/EXPLORE AND CORRELATE/CORRELATION ATTRIBUTE/
    • Validate the custom fields are viewable for each Global User.

We may now STOP HERE if we do NOT need to expose these new custom fields to the IME.

Pro: Able to use custom fields for account templates and correlation rules.

Con: Not exposed to the IME for 1:1 mapping nor for PX Business Rules.

#############################################################

Phase 2 Steps – Advanced Configuration – Add custom fields to the IME to allow for 1:1 mapping and use of PX Business Rules.

  1. Update the JIAM (Java LDAP to IMPS API) reference file, jiam.jar, to allow the IME to manage these extended fields for PX business rules.
    • Use 7zip https://www.7-zip.org/ to extract files from jiam.jar; update the file CommonObjects.xml; then replace this file in the jar.
    • Location of reference file: ./wildfly-idm/standalone/deployments/iam_im.ear/library/jiam.jar
    • Location of property file to update: \com\ca\iam\model\impl\datamodel\CommonObjects.xml
  2. Update the sections after eTCustomField99 with the below data, with case insensitivity set:

<property name="eTCustomField100">
  <doc>Custom Field #100</doc>
  <value default="false">
    <setValue>
      <baseType default="false">
        <strValue></strValue>
      </baseType>
    </setValue>
  </value>
  <metadata name="jiam.syncToAccounts">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="pt.modifyPrivilege">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="pt.ownerPrivilege">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="isMultiValued">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="beanPropertyName">
    <value> <strValue>customField100</strValue> </value>
  </metadata>
  <metadata name="pt.minimumAbbreviation">
    <value> <intValue>10</intValue> </value>
  </metadata>
  <metadata name="pt.internalName">
    <value> <strValue>CustomField100</strValue> </value>
  </metadata>
  <metadata name="pt.editType">
    <value> <strValue>string</strValue> </value>
  </metadata>
  <metadata name="pt.editFlag">
    <value> <boolValue>true</boolValue> </value>
  </metadata>
  <metadata name="pt.caseSensitivity">
    <value> <strValue>insensitive</strValue> </value>
  </metadata>
  <metadata name="pt.asciiOnly">
    <value> <boolValue>false</boolValue> </value>
  </metadata>
  <metadata name="pt.dataLocation">
    <value> <strValue>db</strValue> </value>
  </metadata>
</property>

  3. Update the CA IMPS directory.xml as needed for some or all 900 fields:

<ImsManagedObjectAttr physicalname="eTCustomField100" description="Custom Field 100" displayname="Custom Field 100" valuetype="String" multivalued="true" wellknown="%CUSTOM_FIELD_100%" maxlength="0"/>

  4. Update the IME’s IMCD-to-IMPS 1:1 mappings:
    • identityEnv_environment_settings.xml

We may now stop here. The next advanced configuration is only required if we wish to manage the various Endpoint Mapping tabs from the IM UI instead of the IMPS GUI. We consider the next Phase 3 steps to be low value for the effort, as this configuration is typically set once and done in the IMPS GUI.

Pro: Able to use custom fields for account templates and correlation rules. Also able to map these fields 1:1 in the IME, for IMCD attributes to be mapped to the IMPS extended custom attributes. These IMPS extended custom attributes will now be exposed for PX Business Rules.

Con: Not exposed to the IM UI to update an Endpoint’s Mapping tab for ADS and DYN endpoints.

Phase 3 Steps – IME Advanced

If planning on exposing these new custom fields in both the Endpoint’s Mapping Attribute screen & Endpoint Account Templates via the IME, follow these additional steps:

  1. Replace commonobjects.xml in ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\lib\roledefgen.jar by following the steps below (a scripted alternative is shown after this list):
    • rename roledefgen.jar to roledefgen.zip
    • open roledefgen.zip
    • open com\ca\iam\roledefgen\commonobjects.xml and replace the contents with the attached/provided commonobjects.xml file
    • save the zip
    • rename the zip back to jar
  2. roledefgen.jar will now contain the commonobjects.xml file with the extended custom attributes.
  3. Execute the below RoleDefGenerator.bat to generate jars for all the required Java/DYN endpoints:
    • ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\bin> RoleDefGenerator.bat -d -h -u “”
  4. Open the generated endpoint jars one by one and modify them by following the steps below:
    • rename the original .jar to .zip
    • open framework.xml and increase the version “version=” (2nd line)
    • rename the generated .jar to .zip
    • open and copy the contents of -RoleDef.xml
    • paste the copied content into the file -RoleDef.xml in the original .zip
    • save the original .zip and rename it back to .jar
    • replace the saved .jar in ..\wildfly-8.2.0.Final\standalone\deployments\iam_im.ear\user_console.war\WEB-INF\lib
  5. Restart IM and test the custom attributes in the IM web UI.
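As a scripted alternative to the manual rename/7zip steps in item 1 above, the JDK jar tool can update the file in place (a sketch, assuming a JDK is on the PATH and that the replacement commonobjects.xml is staged under a matching relative folder; the staging path shown is hypothetical):

mkdir -p com/ca/iam/roledefgen
cp /path/to/provided/commonobjects.xml com/ca/iam/roledefgen/commonobjects.xml
jar uf roledefgen.jar com/ca/iam/roledefgen/commonobjects.xml     {update the jar entry in place}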

Post Update Note:

  • Validate whether the IMPD DSAs (4) need to be rebuilt for existing users that may already have these extended attributes, but with case=sensitive set previously.
    • This step is not required if this is the first time the extended attributes have been deployed, or if the case=sensitive setting has not been changed.
    • Process: Export the IMPD LDIFs, rebuild the IMPD DSAs, and then re-import the LDIFs.

CA Identity Manager Task and Event State Codes

A reference for the two-byte Task and Event State Codes for CA Identity Manager is listed below. A state code of 128, for an event or task, indicates that the state machine has completed all processing associated with the respective task or event.

An example of the events associated with a Create User Task, and their respective final states:

Knowledge of event state transitions and the associated codes helps build a better understanding when troubleshooting failed tasks or tasks stuck in progress. As a good practice, when testing a use-case, keep a record of the expected execution outcome for the use-case (at an event level).

Contact us if you have questions or are experiencing problems with your IAM deployment. We will get you on the path to success and help you realize real business value from your IAM program.

API Gateway and Docker Lab

While assisting a site with their upgrade process from CA API Gateway 9.2 (docker) to the latest CA API Gateway 9.4 image, we needed to clarify the steps. In this blog entry, we have captured our validation processes for the documented and undocumented features of the API Gateway docker deployment ( https://hub.docker.com/r/caapim/gateway/ ), with pedantic, verbose steps to assist with training of staff resources; and we enhanced the external checks for a DAR (disaster-and-recovery) scenario using the docker & docker-compose tools.

Please use this lab to jump start your knowledge of the tools: ‘docker’, ‘docker-compose’ and the API Gateway. We have added many checks and the use of bash shell to view the contents of the API Gateway containers. If you have additional notes/tips, please leave a comment.

To lower business risk during this exercise, we made the following decisions:

1) Avoid use of default naming conventions, to prevent accidental deletion of the supporting MySQL database for CA API Gateway. The default ‘docker-compose.yml’ was renamed as appropriate for each API Gateway version.

2) Instead of using different folders to host configuration files, we defined project names as part of the startup process for docker-compose.

3) Any docker container updates would reference the BASH shell directly instead of a soft-link, to avoid different behaviors between the API GW container and the MySQL container.

Challenges:

Challenge #1: Both the API Gateway 9.2 and 9.4 docker containers have defects with regard to using the standardized ‘docker stop/start containerID‘ process. API Gateway 9.2 would not restart cleanly, and the API Gateway 9.4 container would not update the embedded health-check process, e.g. docker ps -a OR docker inspect containerID.

Resolution #1: Both challenges were addressed in the enclosed testing scripts: docker-compose is used exclusively for the API Gateway 9.2 container, and an internal file is touched in the API Gateway 9.4 container.

Challenge #2: The docker parameters between API Gateway 9.2 and API Gateway 9.4 had changed.

Resolution #2: Identify the missing parameters with ‘docker logs containerID’ and a review of the embedded deployment script ‘entrypoint.sh’.
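For example, the parameter review can be scripted (the grep terms are illustrative):

docker logs ssg92 2>&1 | grep -i -e "error" -e "missing" -e "usage"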

Infrastructure: Seven (7) files were used for this lab on CentOS 7.x (/opt/docker/api)

  1. ssg_license.xml (required from Broadcom/CA Sales Team – ask for 90 day trial if a current one is not available)
  2. docker-compose-ssg94.yml (the primary install configuration file for API GW 9.4)
  3. docker-compose-ssg92.yml (the primary install configuration file for API GW 9.2)
  4. docker-compose-ssg94-join-db.yml (the restart configuration file – use as needed)
  5. docker-compose-ssg92-join-db.yml (the restart configuration file – use as needed)
  6. 01_create_both_ssg92_and_ssg94_docker_deployments.sh (the installation of ‘docker’ and ‘docker-compose’ with the deployment of API GW 9.2 [with MySQL 5.5] and API GW 9.4 [with MySQL 5.7], with some additional updates)
  7. 02_backup_and_migrate_mysql_ssg_data_from_ssg92_to_ssg94_db.sh (the export/import process from API GW 9.2 to API GW 9.4 and some additional checks)

Example of the seven (7) lab files’ contents:

  1. ssg_license.xml ( a view of the header only )
<?xml version="1.0" encoding="UTF-8"?>
<license Id="5774266080443298199" xmlns="http://l7tech.com/license">
    <description>LIC-PRODUCTION</description>
    <licenseAttributes/>
    <valid>2018-12-10T19:32:31.000Z</valid>
    <expires>2019-12-11T19:32:31.000Z</expires>
    <host name=""/>
    <ip address=""/>
    <product name="Layer 7 SecureSpan Suite">
        <version major="9" minor=""/>
        <featureset name="set:Profile:EnterpriseGateway"/>
    </product>

2. docker-compose-ssg94.yml

version: "2.2"
services:
    ssg94:
      container_name: ssg94
      image: caapim/gateway:latest
      mem_limit: 4g
      volumes:
         - /opt/docker/api/ssg_license.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
      expose:
      - "8777"
      - "2142"
      ports:
        - "8443:8443"
        - "9443:9443"
      environment:
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql57"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql57:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false -Dcom.l7tech.server.disableFileLogsinks=false "
      links:
        - mysql57
    mysql57:
      container_name: ssg94_mysql57
      image: mysql:5.7
      restart: always
      mem_limit: 2g
      ports:
       - "3306:3306"
      environment:
         - MYSQL_ROOT_PASSWORD=7layer
         - MYSQL_USER=gateway
         - MYSQL_PASSWORD=7layer
         - MYSQL_DATABASE=ssg

3. docker-compose-ssg92.yml

version: "2.2"
services:
    ssg92:
      container_name: ssg92
      image: caapim/gateway:9.2.00-9087_CR10
      mem_limit: 4g
      expose:
      - "8778"
      - "2143"
      ports:
        - "8444:8443"
        - "9444:9443"
      environment:
        SKIP_CONFIG_SERVER_CHECK: "true"
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql55"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "root"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql55:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_ADMIN_USER: "pmadmin"
        SSG_ADMIN_PASS: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=true -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false "
        SSG_LICENSE: "$SSG_LICENSE_ENV"
      links:
        - mysql55
    mysql55:
      container_name: ssg92_mysql55
      image: mysql:5.5
      restart: always
      mem_limit: 2g
      ports:
      - "3307:3306"
      environment:
        - MYSQL_ROOT_PASSWORD=7layer

4. docker-compose-ssg94-join-db.yml

version: "2.2"
services:
    ssg94:
      container_name: ssg94
      image: caapim/gateway:latest
      mem_limit: 4g
      volumes:
         - /opt/docker/api/ssg_license.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
      expose:
      - "8777"
      - "2142"
      ports:
        - "8443:8443"
        - "9443:9443"
      environment:
        ACCEPT_LICENSE: "true"
        #SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_COMMAND: "join"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql57"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql57:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false -Dcom.l7tech.server.disableFileLogsinks=false "
      links:
        - mysql57
    mysql57:
      container_name: ssg94_mysql57
      image: mysql:5.7
      restart: always
      mem_limit: 2g
      ports:
       - "3306:3306"
      environment:
         - MYSQL_ROOT_PASSWORD=7layer
         - MYSQL_USER=gateway
         - MYSQL_PASSWORD=7layer
         - MYSQL_DATABASE=ssg

5. docker-compose-ssg92-join-db.yml

version: "2.2"
services:
    ssg92:
      container_name: ssg92
      image: caapim/gateway:9.2.00-9087_CR10
      mem_limit: 4g
      expose:
      - "8778"
      - "2143"
      ports:
        - "8444:8443"
        - "9444:9443"
      environment:
        SKIP_CONFIG_SERVER_CHECK: "true"
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "join"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql55"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "root"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql55:3306/ssg?useSSL=false"
        SSG_DATABASE_WAIT_TIMEOUT: "120"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        SSG_ADMIN_USER: "pmadmin"
        SSG_ADMIN_PASS: "7layer"
        SSG_INTERNAL_SERVICES: "restman wsman"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=true -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false  -Dcom.l7tech.service.metrics.enabled=false "
        SSG_LICENSE: "$SSG_LICENSE_ENV"
      links:
        - mysql55
    mysql55:
      container_name: ssg92_mysql55
      image: mysql:5.5
      restart: always
      mem_limit: 2g
      ports:
      - "3307:3306"
      environment:
        - MYSQL_ROOT_PASSWORD=7layer

6. 01_create_both_ssg92_and_ssg94_docker_deployments.sh

#!/bin/bash
##################################################################
#
# Script to validate upgrade process from CA API GW 9.2 to 9.4 with docker
#  - Avoid using default of 'docker-compose.yml'
#  - Define different project names for API GW 9.2 and 9.4 to avoid conflict
#  - Explicitly use bash shell  /bin/bash  instead of soft-link
#
# 1. Use docker with docker-compose to download & start
#      CA API GW 9.4 (with MySQL 5.7) &
#      CA API GW 9.2 (with MySQL 5.5)
#
# 2. Configure CA API GW 9.4 with TCP 8443/9443
#              CA API GW 9.2 with TCP 8444/9444 (redirect to 8443/9443)
#
# 3. Configure MySQL 5.7 to be externally exposed on TCP 3306
#              MySQL 5.5 to be externally exposed on TCP 3307
#  - Adjust 'grant' token on MySQL configuration file for root account
#
# 4. Validate authentication credentials to the above web services with curl
#
#
# 5. Add network modules via yum to API GW 9.4 container
#   - To assist with troubleshooting / debug exercises
#
# 6. Enable system to use API GW GUI to perform final validation
#   - There appears to be an issue using browsers to access the API GW UI on TCP 8443/8444
#
#
# Alan Baugher, ANA, 10/19
#
##################################################################


echo ""
echo ""
echo "################################"
echo "Install docker and docker-compose via yum if missing"
echo "Watch for message:  Nothing to do "
echo ""
echo "yum -y install docker docker-compose "
yum -y install docker docker-compose
echo "################################"
echo ""


echo "################################"
echo "Shut down any prior docker container running for API GW 9.2 and 9.4"
cd /opt/docker/api
pwd
echo "Issue this command if script fails:  docker stop \$(docker ps -a -q)  && docker rm \$(docker ps -a -q)   "
echo "################################"
echo ""


echo "################################"
export SSG_LICENSE_ENV=$(cat ./ssg_license.xml | gzip | base64 --wrap=0)
echo "Execute  'docker-compose down'  to ensure no prior data or containers for API GW 9.4"
docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml down
echo "################################"
echo "Execute  'docker-compose down'  to ensure no prior data or containers for API GW 9.2"
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml down
echo "################################"
echo ""


echo "################################"
echo "Execute  'docker ps -a'   to validate no running docker containers for API GW 9.2 nor 9.4"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "################################"
echo ""


echo "################################"
echo "Change folder to execute docker-compose script for API GW 9.4 with MySql 5.7 with TCP 8443/9443"
echo "Execute  'docker-compose up -d'  to start docker containers for API GW 9.4 with MySql 5.7 with TCP 8443/9443"
docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94.yml up -d
echo "################################"
echo "Change folder to execute docker-compose script for API GW 9.2 with MySql 5.5 with TCP 8444/9444"
echo "Execute  'docker-compose up -d'  to start docker containers for API GW 9.2 with MySql 5.5 with TCP 8444/9444"
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92.yml up -d
echo "################################"
echo ""


echo "################################"
echo "Backup current API GW 9.4 running container for future analysis"
echo "docker export ssg94 > ssg94.export.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar "
docker export ssg94 > ssg94.export.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar
echo "################################"
echo ""


echo "################################"
echo "Update API GW 9.4 running container with additional supporting tools with yum"
echo "docker exec -it -u root -e TERM=xterm ssg94 /bin/sh -c \"yum install -y -q net-tools iproute unzip vi --nogpgcheck\" "
docker exec -it -u root -e TERM=xterm ssg94 /bin/sh -c "yum install -y -q net-tools iproute unzip vi --nogpgcheck "
echo "Export API GW 9.4 running container after supporting tools are added"
echo "docker export ssg94 > ssg94.export.tools.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar "
docker export ssg94 > ssg94.export.tools.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.tar
echo "################################"
echo ""


echo "################################"
echo "Validate network ports are exposed for API GW Manager UI "
netstat -anpeW | grep -e docker -e "Local" | grep -e "tcp" -e "Local"
echo "################################"
echo ""

echo "################################"
echo "Sleep 70 seconds for both API GW to be ready"
echo "################################"
sleep 70
echo ""


echo ""
echo "################################"
echo "Extra:  Open TCP 3306 for mysql remote access "
docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "echo -e '\0041includedir /etc/mysql/conf.d/\n\0041includedir /etc/mysql/mysql.conf.d/\n[mysqld]\nskip-grant-tables' > /etc/mysql/mysql.cnf && cat /etc/mysql/mysql.cnf "
#docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "/etc/init.d/mysql restart"
#docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /bin/bash -c "/etc/init.d/mysql status && echo -n"
echo "################################"
docker restart ssg94_mysql57
echo ""



echo "################################"
echo "Execute  'docker ps -a'   to validate running docker containers for API GW 9.2 and 9.4 with their correct ports"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.2 TCP 8444 - should see six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.4 TCP 8443 - should see six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""


echo "################################"
echo "Next Steps:"
echo "       Open the API GW UI for 9.2 and create a new entry in the lower left panel"
echo ""
echo "Example: "
echo "       Right click on hostname entry and select 'Publish RESTful Service Proxy with WADL' "
echo "       Select Manual Entry, then click Next"
echo "       Enter data for two (2) fields:"
echo "                  Service Name:  Alan "
echo "                  Resource Base URL:  http://www.anapartner.com/alan "
echo "       Then select Finish Button "
echo "################################"
echo ""

7. 02_backup_and_migrate_mysql_ssg_data_from_ssg92_to_ssg94_db.sh

#!/bin/bash
#######################################################################
#
# Script to validate upgrade process from CA API 9.2 to 9.4 with docker
#  - Avoid using default of 'docker-compose.yml'
#  - Define different project names for API GW 9.2 and 9.4 to avoid conflict
#  - Explicitly use bash shell  /bin/bash  instead of soft-link /bin/sh
#
# 1. Stop docker containers for CA API GW 9.2 & 9.4 (leave mysql containers running)
#    - To prevent any updates to mysql db during migration
#
# 2. Use mysqldump command to export CA API 9.2 MySQL 5.5 ssg database with stored procedures (aka routines)
#   - Review excluding the audit tables to avoid carrying over excessive data
#
# 3. Use mysql command to import sql file to CA API 9.4 MySQL 5.7 ssg database
#   - Review if dropping / recreate the ssg database will avoid any install issues
#   - Keep an eye on the table cluster_info {as this has the Gateway1 definition with the host IP address}
#
# 4. Restart CA API GW 9.2 & 9.4 containers
#
#    - Challenge 1: CA API GW 9.2 docker image has issue with docker stop/start process
#    the reference /root/entrypoint.sh will loop with creation of a license folder
#    - Addressed with custom docker-compose file to recreate image to join existing MySQL 5.5 container
#
#    - Challenge 2: CA API GW 9.4 docker image has issue with docker stop/start process
#    The new healthcheck.sh process calls a base.sh script that compares the date-time stamps of two files;
#    the datestamp of one file is not updated correctly upon the docker start process.
#    - Addressed with a custom docker bash script to "touch" the primary file so its date stamp is updated.  Validate with: docker logs ssg94
#      WARNING 1      com.l7tech.server.boot.GatewayBoot: Unable to touch /opt/SecureSpan/Gateway/node/default/var/started:
#                  /opt/SecureSpan/Gateway/node/default/var/started (Permission denied)
#
#    - Challenge 3: CA API GW 9.4 docker image appears to have similar issue for hazelcast startup
#    The container may hold for 300 seconds due to hazelcast configuration not completing correctly
#     SEVERE  1      com.hazelcast.instance.Node: [172.17.0.3]:8777 [gateway] [3.10.2] Could not join cluster in 300000 ms. Shutting down now!
#     Unable to start the server: Error starting server : Lifecycle error: Could not initialize Hazelcast cluster
#     WARNING 107    com.hazelcast.cluster.impl.TcpIpJoiner: [172.17.0.3]:8777 [gateway] [3.10.2] Error during join check!
#    - Addressed with different project names to avoid conflict between the API GW 9.2 broadcast and API GW 9.4
#
#    - Challenge 4: CA API GW 9.2 appears to be stuck in a while loop for /opt/docker/entrypoint.sh
#      apim-provisioning: INFO: waiting for the shutdown file at "/userdata/shutdown" to be created
#    - Addressed:  Does not seem to have impact with current testing.  Ignore.  Validate with:  docker logs ssg92
#
# 5. Important Note: Ensure that the SSG_CLUSTER_HOST and SSG_CLUSTER_PASSWORD values for the CA API GW 9.4 docker-compose file
#    match those set in the configured MySQL database.
#    After the CA API GW 9.4 container connects to the existing Gateway database, the Container Gateway will automatically upgrade
#    the ssg database if the ssg database version is lower than the version of the Container Gateway.
#    - Ensure the JDBC hostname in SSG_DATABASE_JDBC_URL matches as well
#
#    - Ref:  https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/9-4/other-gateway-form-factors/using-the-container-gateway/getting-started-with-the-container-gateway/connect-the-container-gateway-to-an-existing-mysql-database.html
#
#
# Alan Baugher, ANA, 10/19
#
#######################################################################

echo ""
echo "####################################"
echo "Early check: Address a file permission issue with the API GW 9.4 container"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c 'chmod 666 /opt/SecureSpan/Gateway/node/default/var/started' "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "chmod 666 /opt/SecureSpan/Gateway/node/default/var/started"
echo "May validate issue with:  docker logs ssg94 "
echo "####################################"


echo ""
echo "####################################"
echo "Temporarily shutdown the API GW containers for 9.2 and 9.4 to avoid any updates to the mysql db during export & import"
echo "docker stop ssg92 ssg94 "
docker stop ssg92 ssg94
echo "####################################"
echo ""


echo "####################################"
echo "Validate API GW container are down and the MySQL db containers are up and working"
echo "Pause for 5 seconds:  sleep 5"
sleep 5
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.2 MySQL 5.5 ssg db with stored procedures (aka routines)"
echo "time docker exec -i `docker ps -a | grep mysql:5.5 | awk '{print $1}'` mysqldump -u root --password=7layer ssg  --routines >  ssg92.backup_with_routines.sql  2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.5 | awk '{print $1}'` mysqldump -u root --password=7layer ssg  --routines >  ssg92.backup_with_routines.sql  2> /dev/null
echo "View the size of the MySQL 5.5. ssg db for API GW 9.2"
ls -lart | grep ssg92.backup_with_routines.sql
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.4 MySQL 5.7 ssg db with stored procedures (aka routines) as a 'before' reference file"
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.before.backup_with_routines.sql  2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.before.backup_with_routines.sql  2> /dev/null
echo "View the size of the MySQL 5.7. ssg db for API GW 9.4 as the 'before' reference file"
ls -lart | grep ssg94.before.backup_with_routines.sql
echo "####################################"
echo ""


echo "####################################"
echo "Import the MySQL 5.5 ssg db with stored procedures (aka routines) into MySQL 5.7 for API GW 9.4"
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysql -u root --password=7layer ssg    < ssg92.backup_with_routines.sql 2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysql -u root --password=7layer ssg    < ssg92.backup_with_routines.sql 2> /dev/null
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.4 MySQL 5.7 ssg db wht stored procedures (aka routines) as a 'after' import reference file"
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.after.backup_with_routines.sql 2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.after.backup_with_routines.sql 2> /dev/null
echo "View the size of the MySQL 5.7. ssg db for API GW 9.4 as the 'after' reference file"
ls -lart | grep ssg94.after.backup_with_routines.sql
echo "####################################"
echo ""


echo "####################################"
echo "Restart the API GW containers for 9.2 and 9.4 "
# Note: Restart of the ssg94 container will 'auto' upgrade the ssg database to 9.4 tags
echo "docker restart ssg94 "
docker restart ssg94
#docker rm ssg94
#docker-compose -p ssg94 -f /opt/docker/api/docker-compose-ssg94-join-db.yml up -d
echo "####################################"
# Note:  the API GW 9.2 docker image does not handle stop/start correctly; rm then redeploy
export SSG_LICENSE_ENV=$(cat ssg_license.xml | gzip | base64 --wrap=0)
echo "Remove the API GW 9.2 container via:  docker rm ssg92"
docker rm ssg92
echo "Redeploy the API GW 9.2 container "
echo "docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92-join-db.yml up -d "
docker-compose -p ssg92 -f /opt/docker/api/docker-compose-ssg92-join-db.yml up -d
echo "####################################"
echo ""



echo "####################################"
echo "Validate API GW container are up and the mysql db containers are working"
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "####################################"
echo "Export the API GW 9.4 MySQL 5.7 ssg db after import & after the 'auto' upgrade as an 'after' auto upgrade reference file"
docker stop ssg94
echo "time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.auto.after.backup_with_routines.sql 2> /dev/null "
time docker exec -i `docker ps -a | grep mysql:5.7 | awk '{print $1}'` /usr/bin/mysqldump -u root --password=7layer ssg  --routines >  ssg94.auto.after.backup_with_routines.sql 2> /dev/null
echo "View all the exported MySQL files to compare process flow"
ls -lart ssg*.sql
docker start ssg94
echo "View the auto upgrade from version 9.2 to version 9.4 with a delta compare of the exported sql files"
echo "diff ssg94.after.backup_with_routines.sql  ssg94.before.backup_with_routines.sql  | grep -i \"INSERT INTO .ssg_version.\" "
diff ssg94.after.backup_with_routines.sql  ssg94.before.backup_with_routines.sql  | grep -i "INSERT INTO .ssg_version."
echo "####################################"
echo ""


echo "####################################"
echo "Execute  'docker ps -a'   to validate running docker containers for API GW 9.4 and 9.2"
echo "docker ps --format \"table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}\" "
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "####################################"
echo "Show current API GW 9.4 MySQL 5.7 databases"
echo "Validate that 'ssg' database exists "
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  -e \"show databases;\" "
docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  -e "show databases;"
echo "####################################"
echo ""


echo "####################################"
echo "Review for any delta of the MySQL ssg database after import"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.5 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e \"show tables;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.5 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e "show tables;" > ssg92.tables.txt
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e \"show tables;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e "show tables;" > ssg94.tables.txt
echo "Observer for any delta from the below command"
echo "diff ssg92.tables.txt ssg94.tables.txt"
diff ssg92.tables.txt ssg94.tables.txt
echo "####################################"
echo ""


echo "####################################"
echo "Show current API GW 9.4 admin user in the MySQL 5.7 ssg database"
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e \"SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;\" "
docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"
echo "####################################"
echo ""


echo "####################################"
echo "Show current API GW 9.4 admin user in the intermediate configuration file on the AIP GW 9.4 container"
echo "docker exec -it -u root -e TERM=xterm ssg94 /bin/bash -c \"grep -i -e l7.login -e l7.password /opt/SecureSpan/Gateway/node/default/etc/bootstrap/bundle/001_update_admin_user.xml.req.bundle\" "
docker exec -it -u root -e TERM=xterm ssg94 /bin/bash -c "grep -i -e l7.login -e l7.password /opt/SecureSpan/Gateway/node/default/etc/bootstrap/bundle/001_update_admin_user.xml.req.bundle"
echo "####################################"
echo ""


echo "####################################"
echo "Show all 'new' files created or linked in API GW 9.4 container with mtime of 1 day. Excluding lock (LCK) files"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"find /opt -type f -mtime -1 -ls | grep -i -v -e '.LCK'\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "find /opt -type f -mtime -1 -ls | grep -i -v -e '.LCK'"
echo "####################################"
echo ""


echo "####################################"
echo " View the license.xml file that was copied to the API GW 9.4 container bootstrap folder before copied to the MySQL 5.7 ssg db table "
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"ls -lart  /opt/SecureSpan/Gateway/node/default/etc/bootstrap/license \" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "ls -lart  /opt/SecureSpan/Gateway/node/default/etc/bootstrap/license "
echo "####################################"
echo ""


echo "####################################"
echo "View logon count for the API GW 9.4 admin user via MySQL 5.7 ssg db"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=gateway --password=7layer ssg -e \"select hex(goid), version, hex(provider_goid), login, fail_count, last_attempted, last_activity, state from logon_info;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=gateway --password=7layer ssg -e "select hex(goid), version, hex(provider_goid), login, fail_count, last_attempted, last_activity, state from logon_info;"
echo "####################################"
echo ""


echo "####################################"
echo "View the API GW 9.4 MySQL 5.7 mysql.user table"
### docker logs `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  2>&1 | grep  "GENERATED ROOT PASSWORD"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e \"SELECT User,account_locked,password_expired,password_last_changed,authentication_string FROM mysql.user;\" "
docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer  ssg -e "SELECT User,account_locked,password_expired,password_last_changed,authentication_string FROM mysql.user;"
echo "####################################"
echo ""


echo "####################################"
echo "To remove any locked account (including pmadmin SSG Admin User ID) from the MySQL 5.7 ssg logon_info table  {or any account}"
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e \"delete from logon_info where login ='pmadmin';\" "
echo "docker exec -it -u root -e TERM=xterm  `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e \"truncate logon_info;\"  "
echo "####################################"
echo ""


echo "####################################"
echo "To change root password for MySQL 5.7 mysql.user db"
echo "docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=OLD_PASSWORD -e  \"SET PASSWORD FOR 'root'@'localhost' = PASSWORD('7layer');\" "
echo "####################################"
echo ""


echo "####################################"
echo "Sleep 30 seconds to address restart health check time-stamp issue with API GW 9.4"
sleep 30
echo "####################################"
echo ""


echo "####################################"
echo "Address API GW 9.4 container health check upon stop/start or restart gap.  (base.sh script)"
echo "docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"date +%s -r /opt/SecureSpan/Gateway/node/default/var/started  && date +%s -r /opt/SecureSpan/Gateway/node/default/var/preboot\" "
docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "date +%s -r /opt/SecureSpan/Gateway/node/default/var/started  && date +%s -r /opt/SecureSpan/Gateway/node/default/var/preboot"
echo "Touch to update date-time stamp for one file: /opt/SecureSpan/Gateway/node/default/var/started"
echo "docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c \"touch /opt/SecureSpan/Gateway/node/default/var/started\" "
docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "touch /opt/SecureSpan/Gateway/node/default/var/started"
docker exec -it -u root -e TERM=XTERM `docker ps -a | grep caapim/gateway:latest | awk '{print $1}'` /bin/bash -c "date +%s -r /opt/SecureSpan/Gateway/node/default/var/started  && date +%s -r /opt/SecureSpan/Gateway/node/default/var/preboot"
echo "####################################"
echo ""


echo "####################################"
echo "Sleep 30 seconds to allow health check status to update for API GW 9.4"
echo "May also monitor health and overall status with:   docker inspect ssg94 "
sleep 30
echo "####################################"
echo ""


echo "####################################"
echo "Execute  'docker ps -a'   to validate running docker containers for API GW 9.4 and 9.2"
echo "docker ps --format \"table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}\" "
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.RunningFor}}\t{{.Status}}\t{{.Ports}}"
echo "####################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.2 TCP 8444 - should see minimal of six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8444/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""


echo "################################"
echo "Test authentication with the SSG backup URL for API 9.4 TCP 8443 - should see minimal of six (6) lines"
echo "curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e 'title' -e 'Gateway node' -e 'input' -e 'form action' "
echo "#########           ############"
curl -s --insecure  -u pmadmin:7layer  https://$(hostname -s):8443/ssg/backup | grep -e "title" -e "Gateway node" -e "input" -e "form action"
echo "################################"
echo ""

View of the API Gateway via the MS Windows API GW UI for both API GW 9.2 (using the 9.3 UI) and API GW 9.4 (using the 9.4 UI). The API GW policies are migrated from API GW 9.2 to API GW 9.4 via the export/import of the MySQL ssg database; after import, the API GW 9.4 docker image will ‘auto’ upgrade the ssg database to the 9.4 version.

Interesting view of the API GW 9.4 MySQL database ‘ssg’ after import and a restart (which will ‘auto’ upgrade the ssg database version). Note the multiple Gateway “nodes” that appear after each ‘docker restart containerID’.
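
To enumerate these node entries directly, the ssg schema may be queried from the MySQL 5.7 container. The snippet below is a sketch and assumes the Layer7 ssg schema stores the per-node rows in its cluster_info table:

docker exec -it -u root -e TERM=xterm `docker ps -a | grep mysql:5.7 | awk '{print $1}'`  mysql --user=root --password=7layer ssg -e "select * from cluster_info;"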