Upgrade CA API Gateway via docker “in-place”

CA API Gateway (ssg) is used to manage SaaS endpoints/applications for the CA/Symantec Identity Suite solution. One challenge of appliances and Docker containers is that the underlying 3rd-party libraries may become dated and require updates.

Most vendors will not allow post-release or direct updates to the libraries inside their containers, as this impacts the support model. So we must rely on the support process and push vendors to release additional updates to stay ahead of any security concerns.


When deployed on Docker, the CA API Gateway (ssg) has a streamlined process for updating in place, as long as you have backed up the MySQL database when the Docker images are updated.

We wanted to capture the process to upgrade from CA API Gateway 9.4 (ssg94) to Gateway 10.0 (ssg10). Fortunately, the MySQL 8.0 database for Gateway 10.0 has the same structure, tables, and routines as the MySQL 5.7 database for CA API Gateway 9.4.

The challenge is that the documented upgrade process is difficult to implement on the same host OS, and it misses an opportunity to swap the 9.4 license for the 10.0 license during the re-import of the MySQL database.


The below diagram, from the CA API Gateway 10.0 upgrade documentation, can be adjusted to streamline the upgrade process.

[Diagram: manual expedited appliance upgrade, scenario 1]
Ref: https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/upgrade-the-gateway/upgrade-an-appliance-gateway/manual-expedited-appliance-upgrade.html

The documented process above outlines dropping the MySQL database ssg completely and then creating a new, clean db. We can slightly adjust this process to avoid the unnecessary step of re-importing the license file after the restart of the gateway container. We also wish to add validation steps to show what is changing.

Proposal for modifications:

  1. Create a clean CA API Gateway 10.0 (ssg10) Docker deployment on the same host OS. Use docker-compose with the REST service enabled and different TCP listening ports, so that the two (2) Docker containers can run simultaneously during the testing cycle. After testing, you may revert to the default TCP listen ports of 8443 & 9443.
  2. Allow the CA API Gateway 10.0 container to start cleanly with its MySQL 8.0 DB and the correct license file for version 10.0. Then export the MySQL database table that contains the updated license (license_document).
  3. Import the prior MySQL backup file into the new CA API Gateway deployment. Then, before startup, import the ssg10 license MySQL file as well; this replaces the ssg94 license information.
  4. Restart the CA API Gateway container, monitor the logs for any errors, and ensure the new license file is used.
  5. If the REST API was enabled (via the docker-compose file & touching a file named "restman"), use curl to validate that all REST services are available and that all prior API Gateway policy services are listed.

A visual example of this process using the prior diagram.

Note: The official documentation uses sed to replace the string "NO_AUTO_CREATE_USER", but shows two examples: one with a trailing comma and one without. We include the variant with the comma. We did not see this string in our MySQL export, so the step was deemed low value, but it is still included in our process.
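For reference, the two documented sed variants (run whichever matches your export; both are harmless if the string is absent):

sed -i "s/NO_AUTO_CREATE_USER,//g"   ssg94.backup.updated.for.mysql8.sql
sed -i "s/NO_AUTO_CREATE_USER//g"    ssg94.backup.updated.for.mysql8.sql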

Example of the upgrade process and validation using REST

Note the two (2) running CA API Gateway containers, 9.4 (with MySQL 5.7) and 10.0 (with MySQL 8.0), with different TCP listen services; and the validation of REST services for ssg10.

Below are the above steps called out with additional validation steps, and the use of the "time" command to measure the duration of the exports.

# Pre-Step 1:  On Test System:  Prepare SSG10 docker compose yml file and correct license.xml & confirm startup.
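# Our addition (assumes the compose file mounts ./restman, per the yml later in this post):
# create the restman bootstrap touch file before first startup so the REST API auto-enables.
touch ./restman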
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}
docker ps -a
docker logs ssg10 -f --tail 100
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"


# Step 2:  On PROD HOST OS: Stop SSG94 and export the current MySQL 5.7 database with routines (aka stored procedures) & remove unwanted lines
docker stop ssg94
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.before.`/bin/date --utc +%Y%m%d%H%M%S.0Z`.sql
time docker exec -tt mysql-ssg  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines > ssg94.backup.updated.for.mysql8.sql
sed -i "s/NO_AUTO_CREATE_USER,//g"   			ssg94.backup.updated.for.mysql8.sql
sed -i "/Using a password on the command/d" 	ssg94.backup.updated.for.mysql8.sql
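# Our addition: optional validation that the sed edits took effect (both counts should be 0)
grep -c "NO_AUTO_CREATE_USER"             ssg94.backup.updated.for.mysql8.sql
grep -c "Using a password on the command" ssg94.backup.updated.for.mysql8.sql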


# Step 3: On PROD HOST OS: Deploy SSG10 with docker compose yml file & correct license xml file & export db table license_document
time docker-compose -p ssg10 -f ./docker-compose-ssg10-0.yml up -d      {Wait 90-120 seconds}  
docker ps -a
docker logs ssg10 -f --tail 100     
docker stop ssg10
time docker exec -tt  mysql-ssg10  /usr/bin/mysqldump -h 127.0.0.1 -u root --password=7layer  ssg --routines license_document  > ssg10.license.export.sql
sed -i "/Using a password on the command/d" 	ssg10.license.export.sql

# Step 4: On PROD HOST OS: Drop the SSG10 MySQL 8.0 ssg database and rebuild it with imports of the SQL files.
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer drop ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -it -u root -e term=xterm mysql-ssg10 /usr/bin/mysqladmin --user=root --password=7layer create ssg
docker exec -it mysql-ssg10   mysql --user=root --password=7layer -e "show databases;"
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg94.backup.updated.for.mysql8.sql
time docker exec -i  mysql-ssg10  /usr/bin/mysql -u root --password=7layer ssg    <  ssg10.license.export.sql
docker exec -it mysql-ssg10  mysql --user=root --password=7layer ssg  -e "SELECT * FROM license_document;" | grep -A 12 -e "<license "

# Step 5: On PROD HOST OS:  Start SSG10 and validate no errors 
docker start ssg10       {Wait 90-120 seconds} 
docker ps -a

# Step 6:  Validate license    
docker logs ssg10 -f --tail 100  
docker logs ssg10 -f 2>&1  | grep -i license

# Step 7:  Validate REST services enabled and we can see all services
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
# Example to validate ServiceNow REST service to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://localhost:9443/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Example validate ServiceNow REST service via LB to CA APIGW
curl --insecure --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json" --header "url: https://dev101846.service-now.com"  "https://192.168.242.135/ServiceNow/v1/Users?filter=userName+eq+%22ztestalan10340%22&attributes=userName"
# Direct REST service to ServiceNow to validate development instance is available.
curl --user  admin:gwALPtteR5R1  --compressed --header "Accept: application/json"  'https://dev101846.service-now.com/api/now/table/sys_user?sysparm_query=user_name=testalan13095'

# Step 8:  Certs required for the IM JCS Tier to avoid typical cert issues.
a. Ensure the CA API Gateway public root CA cert or self-signed cert is imported to each JCS keystore.
b. If using a load balancer, e.g. httpd, ensure its public root CA cert or self-signed cert is imported to each JCS keystore as well.
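A minimal keytool sketch of these imports, assuming a hypothetical keystore path, alias, certificate file, and storepass (adjust all of these for your JCS install):

# Hypothetical keystore path/alias/storepass; restart the im_jcs service after the import
keytool -importcert -noprompt -alias apigw-root-ca -file ./apigw_root_ca.cer \
  -keystore /opt/CA/IdentityManager/ProvisioningServer/jcs/conf/ssl.keystore -storepass changeit
keytool -list -keystore /opt/CA/IdentityManager/ProvisioningServer/jcs/conf/ssl.keystore \
  -storepass changeit | grep -i apigw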


Docker commands collected to assist with RCA efforts for Operations teams

# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"

# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"


#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If the MySQL root password is random, find it via the logs (redirect stderr to stdout so grep sees all output)


#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the number of log lines shown
#  docker logs ssg10 -f --tail 100

# Commands to install additional packages for vulnerability scans (ps from procps) & to update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux

#  Update password process
# Generate a SHA-512 crypt password hash (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'

# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "

# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"

# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"

# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"

# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  "https://localhost:9443/restman/1.0/emailListeners?sort=host&order=desc"
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:9443/restman/1.0/rest.wadl

Change the pmadmin password at the docker command line

Process flows collected for the CA API Gateway docker deployment

Example of the docker-compose yml file for CA API Gateway with REST web services and license xml file.

We attempt to keep useful notes/hints in the yml file for future reference. The example below redirects ports to TCP 18443 and 19443 from the standard CA API Gateway ports of 8443 and 9443, and MySQL from 3306 to 23306, for testing in non-Production environments.

# docker-compose-ssg10-0-mysql8-0_with_rest_and_external_mysql_volume.yml
# Startup:  docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml up  -d
# Stop:     docker-compose -p ssg -f ./docker-compose-ssg10-0-mysql8.yml down
#
#
# Ensure Host OS Network allows IPv4 forwarding:   sysctl -a | grep ipv4.ip_forward
# Validate docker network has access with curl:  curl -vk --tlsv1.2  https://www.service-now.com
# Note:  Do NOT use TABS in this file
# Monitor startup of containers with:  docker logs ssg10 -f --tail 100   AND   docker logs mysql-ssg10 -f  --tail 100
# https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/using-the-container-gateway/getting-started-with-the-container-gateway/run-the-container-gateway-on-docker-engine/sample-docker-compose-deployment-file.html
version: "2.2"
services:
   ssg10:
     container_name: ssg10
     # Ref: https://hub.docker.com/r/caapim/gateway/tags
     #image: caapim/gateway:latest
     image: caapim/gateway:10.0.00_20200428
     mem_limit: 10048m
     volumes:
        # Ensure ssg_license.xml is a valid SSG license file for 9.4 or 10.0
        - ./ssg_license_10.xml:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/license/license.xml
        # https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/apis-and-toolkits/rest-management-api.html
        # Touch the file restman to auto-start rest webservices
        # Validate REST API with curl
        # curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
        # curl --insecure --user pmadmin:7layer  https://localhost:18443/restman/1.0/rest.wadl
        - ./restman:/opt/SecureSpan/Gateway/node/default/etc/bootstrap/services/restman
     ports:
       - "58443:8443"
       - "59443:9443"
     environment:
        ACCEPT_LICENSE: "true"
        SSG_CLUSTER_COMMAND: "create"
        SSG_CLUSTER_HOST: "localhost"
        SSG_CLUSTER_PASSWORD: "7layer"
        SSG_DATABASE_TYPE: "mysql"
        SSG_DATABASE_HOST: "mysql-ssg10"
        SSG_DATABASE_PORT: "3306"
        SSG_DATABASE_NAME: "ssg"
        SSG_DATABASE_USER: "gateway"
        SSG_DATABASE_PASSWORD: "7layer"
        SSG_DATABASE_JDBC_URL: "jdbc:mysql://mysql-ssg10:3306/ssg?useSSL=false"
        SSG_DATABASE_ADMIN_USER: "root"
        SSG_DATABASE_ADMIN_PASS: "7layer"
        SSG_ADMIN_USERNAME: "pmadmin"
        SSG_ADMIN_PASSWORD: "7layer"
        EXTRA_JAVA_ARGS: "-Dcom.l7tech.bootstrap.env.license.enable=false -Dcom.l7tech.bootstrap.autoTrustSslKey=trustAnchor,TrustedFor.SSL,TrustedFor.SAML_ISSUER -Dcom.l7tech.server.transport.jms.topicMasterOnly=false"
        SSG_INTERNAL_SERVICES: "restman wsman"
     links:
        - mysql-ssg10
   mysql-ssg10:
     container_name: mysql-ssg10
     # Ref https://hub.docker.com/_/mysql?tab=tags
     image: mysql:8.0.20
     #image: mysql:latest
     # SSG 10.0 requires MySQL 8.x per documentation
     #https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/ca-enterprise-software/layer7-api-management/api-gateway/10-0/install-configure-upgrade/using-mysql-8_0-with-gateway-10.html
     mem_limit: 1048m
     restart: always
     ports:
        - "23306:3306"
     environment:
        - MYSQL_ROOT_PASSWORD=7layer
        #- MYSQL_RANDOM_ROOT_PASSWORD=yes
        - MYSQL_USER=gateway
        - MYSQL_PASSWORD=7layer
        - MYSQL_DATABASE=ssg
     command:
       - "--character-set-server=utf8mb3"
       - "--log-bin-trust-function-creators=1"
       - "--default-authentication-plugin=mysql_native_password"
       - "--innodb_log_buffer_size=32M"
       - "--innodb_log_file_size=80M"
       - "--max_allowed_packet=8M"
#     volumes:
#       - mysql_db8:/var/lib/mysql
# Persist SSG MySQL DB Data
# Validate after shutdown with:  docker volume ls  &  docker volume inspect ssg_mysql_db
# Note:  Important - a random root password will not work with persisted MySQL data - the password must be known at first startup
#   volumes:
#     mysql_db8:
#
# Extra commands to assist RCA efforts or OPS teams
#
# Validate routing is enabled within the CAAPIGW (ssg) container
#   docker exec -it ssg  bash -c "curl -L www.google.com"
#   docker exec -it -u root -e term=xterm ssg /bin/bash -c "curl -vk --tlsv1.2  https://www.service-now.com"
# Interactive Session with mysql>  prompt
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "show databases;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT User,Password,authentication_string FROM mysql.user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "SELECT name,login,password,enabled,expiration,password_expiry FROM internal_user;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "truncate logon_info;"
#  docker exec -it mysql-ssg   mysql --user=root --password=7layer -e "delete from logon_info where login ='ssgadmin';"
# If the MySQL root password is random, find it via the logs (redirect stderr to stdout so grep sees all output)
#  docker logs mysql-ssg 2>&1 | grep -i "Generated root password"
#  docker logs mysql-ssg -f       {Used to tail the logs}
#  Limit the number of log lines shown
#  docker logs ssg10 -f --tail 100
# Commands to install additional packages for vulnerability scans (ps from procps) & to update passwords (mkpasswd from whois)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "apt-get update -y && apt-get upgrade -y && apt-get install -y procps && apt-get install -y whois"
#   docker exec -it  mysql-ssg ps aux
#  Update password process
# Generate a SHA-512 crypt password hash (use one of the below methods)
#   docker exec -it -u root -e term=xterm   mysql-ssg   /bin/bash -c "mkpasswd -m sha-512 7layer"
#   python -c 'import crypt; print(crypt.crypt("7layer", crypt.mksalt(crypt.METHOD_SHA512)))'
#   perl -le 'print crypt "7layer", "\$6\$customSalt\$"'
# Update password via command line (escape any $ characters)
#  docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE internal_user SET password='\$6\$SzW/q9xVM9\$Ed/LjCDVpIYNTq94CsqO2stR0h4KniPOl/7iQDv1SEXNu9ftv//6hohlJxNeizmac/V9cEb6WmJfdHQCFwpoc0' WHERE name='pmadmin'; "
# View user and password hash in DB
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from internal_user \G;"
# View if account is active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "select * from logon_info \G;"
# Reset if account is NOT active
#   docker exec -it -u root -e term=xterm  mysql-ssg mysql  --user=root --password=7layer ssg -e "UPDATE logon_info set state='ACTIVE', fail_count=0 where login='pmadmin';"
# REST WEB SERVICES
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/home.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/services
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/gateway-management.xsd
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/folders?name=My%20Service
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/restDoc.html
#  curl --insecure --user pmadmin:7layer  "https://localhost:19443/restman/1.0/emailListeners?sort=host&order=desc"
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/authentication.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/passwords/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/policies/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/doc/migration.html
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/ssgconnectors?enabled=true
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/clusterProperties/template
#  curl --insecure --user pmadmin:7layer  https://localhost:19443/restman/1.0/rest.wadl

Clean-Up Orphans and Refine Correlation Rules

Correlation rules may be very simple: a unique ID in the IAM solution should match a unique ID (or combination of attributes) on the managed endpoint/application to form a one-to-one (1:1) relationship with that identity.

Most sites that have had the opportunity have started using GUID/UUID values for the correlation ID in their IAM solutions and, if the endpoint/application allows it, the same GUID/UUID in an open field, which is usually not the same as the login ID field.

The example below uses a GUID/UUID format as the primary identifier between the IAM solution and an Active Directory domain endpoint/application.

We may also have many different correlation rules, or primary/secondary correlations, for each application/endpoint. Until the correlation rules are correct, there is a likelihood of incorrect or default correlations.

If we wish to remove an incorrect correlation, this may be done manually by detaching the entry and re-attaching the correct one. However, this does not address future correlation runs if the rules are not updated.

Example of removing a correlation from the orphan ID “[default user]”


Example of removing an incorrect correlation manually within the IAM solution

To assist with the refinement of correlation rules, a feedback process/script has value.

The below script demonstrates a feedback process using the OS ldapsearch/ldapdelete utilities against the CA Identity Manager Provisioning Tier (TCP 20389/20390) to clean up the orphan IDs under "[default user]".

The script queries all "inclusions" where an endpoint account has been incorrectly associated with the Global User "[default user]" and returns a count of these records. It captures the dn values of these inclusion records and then feeds them to the OpenLDAP ldapdelete process to have them removed. Since we are using the IMPS service (TCP 20389/20390), the solution still maintains referential integrity during the clean-up process.

After the deletions are complete, we re-initialize a new E&C (explore & correlate) process using any new correlation rules that may have been added. This is the opportunity for administrators to adjust their own correlation rules and then re-execute the script. If the correlation rules do not match, the prior correlations will return to "[default user]".

#!/bin/bash
#####################################################################################################################
#
# Name: Clean Up [default user]
#
# Goal:  Script to clean up [default user] correlations to allow for better orphan or rogue account identification
#  - Ensure that IMPS Service TCP 20389/20390 is used to maintain referential integrity of the inclusions entries
#    during delete operations.
#
# Ref:  CA IM r14.x solution & OS ldapsearch/ldapdelete
#
# A. Baugher, ANA, 04/2020
#
#####################################################################################################################
# set -xv
DATETZ=$(date -d "1970-01-01 00:00:00 `date +'%s'` seconds"  +'%Y-%m-%dT%H:%M:%S.%3NZ')
IMPSHOST=`hostname`
IMPSPORT=20390
IMPSUSERDN='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
# Use pwd file to avoid clear text passwords in script
# echo -n CLEAR_TEXT_PASSWORD > .imps.pwd
IMPSPWD=`cat .imps.pwd`
#####################################################################################################################
BASE_DN='eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta'
SUP_DN_ENTRY='eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im'
FILTER="(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=$SUP_DN_ENTRY))"
SEARCH=sub
ATTRIBUTES='dn eTInclusionID'
EXCLUDE="  -e ^$ "
#SIZE=" -z 10"
SIZE=" -z 0"
FILENAME=default_user_guid.txt
rm -rf $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID > tmp_file
echo "LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D '$IMPSUSERDN' -y ./.imps.pwd -b '$BASE_DN' -s $SEARCH '$FILTER' $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F': ' '{print \$2}' | grep eTInclusionID "
sort -f tmp_file | uniq -i > $FILENAME   # sort first so uniq can remove non-adjacent duplicates
echo "#################################################################################################"
echo "# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : "`cat $FILENAME | wc -l`
rm -rf tmp_file
echo "#################################################################################################"



echo ""
echo "####################################################################################################################"
echo "#### Remove `cat $FILENAME | wc -l` EA (endpoint accounts) that are correlated to the Global User [default user] "
echo "####################################################################################################################"
LDAPTLS_REQCERT=never ldapdelete -v -c -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -f $FILENAME
echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"


echo ""
echo "#################################################################################################"
echo "#### Re-explore & correlate to update Global User [default user] orphan bucket."
echo "#################################################################################################"
echo ""
IMPSADSBASEDN="eTADSDirectoryName=dc2016.exchange.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers

IMPSADSBASEDN="eTADSDirectoryName=dc2012.exchange2012.lab,eTNamespaceName=ActiveDirectory,dc=im,dc=eta"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreUpdateEtrust
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b "$IMPSADSBASEDN" -s sub "(objectClass=*)" eTExploreCorrelateUsers


echo ""
echo "#################################################################################################"
echo "#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########"
echo "#################################################################################################"
LDAPTLS_REQCERT=never ldapsearch $SIZE -LLL -H ldaps://$IMPSHOST:$IMPSPORT -D "$IMPSUSERDN" -w $IMPSPWD -b $BASE_DN -s $SEARCH "$FILTER" $ATTRIBUTES | perl -p00e 's/\r?\n //g' | grep -v $EXCLUDE | awk -F": " '{print $2}'  | grep eTInclusionID | wc -l
echo "#################################################################################################"
echo ""

Example of the output of the script (with thousands of lines removed for clarity). It includes E&C against two (2) ADS endpoints, where > 2000 identities default-correlate to the orphan Global User "[default user]".

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2184
#################################################################################################
LDAPTLS_REQCERT=never ldapsearch  -z 0 -LLL -H ldaps://vapp0001:20390 -D 'eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -y ./.imps.pwd -b 'eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta' -s sub '(&(objectClass=eTInclusionObject)(eTSuperiorClassEntry=eTGlobalUserName=[default user],eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im))' dn eTInclusionID | perl -p00e 's/\r?\n //g' | grep -v   -e ^$  | awk -F': ' '{print $2}' | grep eTInclusionID
#################################################################################################
# of unique Endpoint Accounts that are Correlated to [default user] matching query filter : 2184
#################################################################################################

####################################################################################################################
#### Remove 2184 EA (endpoint accounts) that are correlated to the Global User [default user]
####################################################################################################################
ldap_initialize( ldaps://vapp0001:20390/??base )
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@67d6bf2c-1104-1039-96c4-ef7605d11763,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname6 mi. lastname6' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@65e02962-00bd-1039-830f-ae134a0f7638,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'firstname0002 lastname0002' and Global User '[default user]' deleted successfully

[Deleted > 5000 similar rows ]

deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@ce05d098-1b32-1039-85ec-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'ffffff' and Global User '[default user]' deleted successfully
deleting entry "eTInclusionID=df104a69-e746-49df-9a61-51e8c20038d0@75a62f60-1b32-1039-85ea-b0629a56714f,eTSubordinateClass=eTADSAccount,eTSuperiorClass=eTGlobalUser,eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=eta"
Delete Result: Success (0)
Additional info: :ETA_S_0035<MGU>, Inclusion between Active Dir. Account 'eeeee' and Global User '[default user]' deleted successfully

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
0
#################################################################################################

#################################################################################################
#### Re-explore & correlate to update Global User [default user] orphan bucket.
#################################################################################################

Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 672, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2016.exchange.lab' correlation successful: (accounts correlated: 0, defaulted: 566, unchanged: 6, failures: 0)
Additional information: :ETA_S_0023<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' exploration successful: (objects added: 0, deleted: 0, updated: 0, unchanged: 1871, failures: 0)
Additional information: :ETA_S_0017<EDI>, Active Directory Endpoint 'dc2012.exchange2012.lab' correlation successful: (accounts correlated: 0, defaulted: 1619, unchanged: 153, failures: 0)

#################################################################################################
#### How many EA (endpoint accounts) are correlated to the Global User [default user] ###########
#################################################################################################
2185
#################################################################################################



Modify the above script for your own applications/endpoints and refine your correlation rules (or add additional ones as needed).

If an application/endpoint identity is a non-managed service ID, a process that may assist is to create a new Global User (in a similar format to [default user]) and then drag-n-drop the endpoint/application service ID accounts to this new Global User, e.g. [endpoint A service ID].

The final goal is a "clean" orphan process that can alert us to any rogue accounts created OOB (out-of-band) of the expected top-down IAM flow from an approved SOT (source-of-truth) solution, e.g. SAP HR/Workday or a home-grown DB used with ETL processes. By removing the "noise" of incorrectly correlated accounts, we can now focus on identifying the true "orphans".

COVID-19 and Privacy Preserving Contact Tracing


“Contact Tracing makes it possible to combat the spread of the COVID-19 virus by alerting participants of possible exposure to someone who they have recently been in contact with, and who has subsequently been positively diagnosed as having the virus.” – Apple

The conventional deterrent to the adoption of contact tracing is the lack of privacy controls. Privacy is an area where Apple has to be involved, and in the times we are living in today, having a framework that allows for privacy-focused contact tracing to limit the spread of novel viruses is extremely important. Apple and Google have collaboratively put a structure in place for enabling contact tracing while preserving privacy.

Below is a simplified explanation of how contact tracing would work using a mobile device, as described in the specification (a conceptual sketch follows the list).

  • Use of Bluetooth LE (Low Energy) for proximity detection (no use of location-based services, which is essential for preserving privacy)
  • Generation of daily tracing keys using a one-way hash function
  • Generation of rolling proximity identifiers that change every ~15 minutes and are derived from the daily tracing key
  • Advertisement of the device's own proximity identifiers and discovery of foreign proximity identifiers
  • The user decides when to contribute to contact tracing
  • If diagnosed with COVID-19, the user consents to upload a subset of daily tracing keys
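A conceptual shell sketch of the two derivations above, using openssl (illustrative only: the published specification derives the daily tracing key with HKDF-SHA256 and the rolling proximity identifier with a truncated HMAC-SHA256; this sketch substitutes plain HMAC-SHA256 for both steps, just to show the one-way, chained structure):

# Illustrative only; not the published algorithm (see the note above)
TK=$(openssl rand -hex 32)                      # secret tracing key; never leaves the device
DAY=$(( $(date +%s) / 86400 ))                  # day number since the Unix epoch
DTK=$(printf 'CT-DTK%s' "$DAY" | openssl dgst -sha256 -hmac "$TK" -r | cut -c1-32)
TIN=$(( ( $(date +%s) % 86400 ) / 900 ))        # 15-minute interval number within the day
RPI=$(printf 'CT-RPI%s' "$TIN" | openssl dgst -sha256 -hmac "$DTK" -r | cut -c1-32)
echo "daily tracing key:            $DTK"
echo "rolling proximity identifier: $RPI"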

To detect possible contact with a COVID-19 positive individual, the app would:

  • Download the daily tracing keys of COVID-19 positive users
  • Compute the time-based proximity identifiers from the downloaded daily tracing keys on the local device
  • Check whether the local device has previously recorded any of these identifiers

The above mechanism of determining whether one has come in contact with a COVID-19 positive person preserves privacy by:

  • Ensuring that the tracing keys cannot be reverse-engineered to compute identifying information about the originating device
  • Not associating GPS or other location services with the keys
  • Performing the verification of contact with a COVID-19 positive person on the local mobile device

The specification is preliminary, and there is a strong attempt to ensure the privacy of individuals is protected within this framework. The COVID-19 positive information uploaded to central servers also does NOT contain any personally identifiable information. Detection itself is decentralized, as it is computed locally on an individual's device. Central entities control neither the detection nor the informing of people in this entire process.

All of this works very well if everyone does their civic duty and reports when they test positive.

For broad adoption of contact tracing, there are opportunities for further improvement. The specification should describe the server(s) responsible for collecting uploaded information. A (mobile) device generates the daily tracing keys, and although the keys themselves have no mechanism to be associated with that device by an external process, uploading these keys from the same mobile device opens up the possibility of linking information: the IP address of a device that uploads anything to a server is always known to that server. Applications that use this privacy framework additionally need to consider connectivity-related exposure while uploading information to central servers. The specification should have a section with considerations on protections and controls against privacy exposure resulting from connectivity.

Another area of improvement could be controls around potential abuse of this contact tracing mechanism. As per the specification, a person who has tested positive for COVID-19 opts in and uploads their daily tracing key information to central servers. The intent is for others who have been in proximity with them to know, and to take action to self-quarantine to limit the spread. Pranksters or other entities may cause general disruption by falsely claiming to be COVID-19 positive. If such actions happen at a large scale, they will severely impact the reliability, credibility, and adoption of contact tracing.

Preserving privacy is not an easy problem to solve, and a larger mindshare is needed to address these challenges.

[Detailed specification at Apple’s website]

This town is big enough for us all: Expanding the CA Provisioning Tier Schema to 900+ Custom Fields

Based on recent requests, we wished to revisit this "hidden" gem: expanding the CA Identity Suite Provisioning schema to meet unique business requirements, enabling 100's of SaaS and on-prem applications/endpoints to carry custom business logic on users' endpoint account attributes.

Since the early days of the CA Identity Suite solution (eAdmin r8.1sp2), there has been a Provisioning SDK that provides an approved process to extend the CA Identity Manager IMPD (provisioning directory) schema beyond the default of 99 custom user fields with 900 additional custom user fields. For comparison, the default 99 custom user fields are typically used alongside the 40-50 standard user profile fields, e.g. givenName (First Name), sn (Last Name), userID, telephone #, etc., to meet most business use-cases.

Unfortunately, this extended schema process is not well known.

The only known documentation is an embedded readme.txt within a compressed package. Occasionally, support tickets or community notes request this feature as an "enhancement".

This package is included in the Provisioning SDK download; for IM r14.3, the file name is:

Component Name: CA Identity Manager r14.3 Legacy components
File: GEN500000000002780.zip ~ 200 MB

Background:

The CA Identity Suite (Identity Manager) Provisioning Tier does NOT attempt to be a meta-directory; it acts as a virtual directory to the 1000's of managed endpoints/userstores/applications. As long as the "explore" operation is successful, a "pointer" object will reference the correct location of the endpoint accounts. When a "correlation" operation occurs, this endpoint account "pointer" object is attached (via inclusion referential objects) to the associated global user ID.

By using this "virtual directory" architecture, IM business rules and 3rd-party tools can directly view the "real data" on the 1000's of managed endpoints rather than a "stored" representation of that data.

However, some clients do wish to "collect" the native data and store it within the IMPD provisioning store as SNAPSHOT data, to monitor for non-approved / OOB (out-of-band) access. If some fields are dedicated to select endpoints, the default 99 custom fields may quickly run out.

Tackling the Case-insensitivity Requirement:

Adjusting the IMPD schema for case-insensitivity allows for case-insensitive correlation rules and, if the new fields are exposed to the IME, case-insensitive comparisons in business rules (PX).

Challenge:

The above Provisioning SDK process will build the extended eTCustomField100-999 and eTCustomFieldName100-999 attributes with case=sensitive. Interestingly, we did not identify a requirement for case sensitivity with the default custom fields, but this appears to have been a decision made when the SDK was created. Note the OOTB etrust_admin.schema file (for the IMPS data): this OOTB schema displays a mix of case sensitivity for the default eTCustomField00-99 and eTCustomFieldName00-99 attributes.

Proposal:

To address this new requirement, note there are three (3) possible deployment levels for enabling this extended schema. We will review the pros/cons of each deployment choice.

Supporting Note 1:

  • eTCustomFieldXXX is the attribute that will contain a value.
  • eTCustomFieldNameXXX is the attribute that will contain a business name for this custom field.

Supporting Note 2:

The CA IM Provisioning Tier was/is developed with early x86 MS VC++ code. We attempted to use a later release of the MS Visual Studio VC solution for this process, but it failed to generate the output files.

Phase 1 Steps: Enhance the IM Provisioning Tier with 900 new custom fields with case = insensitive.

  1. Download & install MS Visual Studio VC 2010 Express, to have access to the ‘nmake’ executable.
  1. Update OS PATH variables to reference this MS VC 2010 bin folder
  1. Execute the nmake binary, to ensure it is working fine
    • where make & make /?
  1. Download & install CA IM Provisioning SDK on the same server/ workstation as ‘nmake’ binary.
    • IM r14.3 GEN50000000002780.zip 200 MB
  1. Open a command line window; and then change folder to the Provisioning SDK’s COSX Samples folder

cd "C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\samples\COSX"

  6. Execute the gencosx.bat batch file to generate the additional schema for N attributes:

gencosx.bat 900      {Max allowed value is 900, which will generate attributes 100-999}

The output text file: cosxparse.pty

**** The above steps only need to be executed ONCE on a workstation. After the output text file is generated, we only need to retain this file for future updates. ****

################################################################

  7. Use Notepad++ to search and replace a string in the following file, cosxparse.pty (a sed equivalent follows below):

“case=sensitive” to “case=insensitive”

{We may be selective and replace only a few attributes instead of all 900 additional attributes.}
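A command-line equivalent of the Notepad++ replacement, assuming sed is available on the workstation (e.g. via Git Bash or WSL):

sed -i "s/case=sensitive/case=insensitive/g" cosxparse.pty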

  8. Execute the following commands to generate the binary file.
  • Use the batch file to set environment values for the nmake program:
    "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
  • Execute 'nmake':
    nmake

The new output file (binary) will be:
C:\Program Files (x86)\CA\Identity Manager\Provisioning SDK\admin\data\cosxparse.ptt

  9. Before overwriting existing files, back up the three (3) prior files from the IMPS/CCS data folders & the IMPD schema folder:
    etrust_cosx.schema
    etrust_cosx.dxc
    cosxparse.ptt
  10. Copy the file cosxparse.ptt to the IMPS server data folder.
  11. Stop the IMPS service: su - imps & imps stop
  12. Execute the following command: schemagen -n COSX
  13. This process will create two (2) new output files:
    • etrust_cosx.dxc
    • etrust_cosx.schema
  14. Validate that the two (2) newly generated files have case-insensitivity set.
  15. Copy etrust_cosx.dxc to all CA Directory schema folders, including the DX routers (on the IMPS servers).
    • Validate this file is referenced in the IMPD group knowledge schema file: etrust_admin.dxg
  16. Copy etrust_cosx.schema & cosxparse.ptt to all CA IMPS servers, the CCS servers' data folders, & the CA IMPS GUI data folder.
    • Validate the file etrust_cosx.schema is referenced in the IMPS configuration file: etrust_admin.conf
  17. Restart the CA Directory and IMPS/CCS services:
  • dxserver stop all / dxserver start all
  • imps stop / imps start
  • net stop im_jcs / net start im_jcs {this will also restart the im_ccs service}
  18. Within the IMPS GUI:
  • Assign a 'business name' to the newly created eTCustomField100+ fields under
    SYSTEM/GLOBAL PROPERTIES/CUSTOM USER FIELDS
    {If you do not see these newly created fields, then the IMPS GUI data folder was not updated per step 16.}
  • Validate that E&C correlation rules now work for these extended fields with case-insensitivity:
    SYSTEM/DOMAIN CONFIGURATIONS/EXPLORE AND CORRELATE/CORRELATION ATTRIBUTE/
  • Validate the custom fields are viewable for each Global User.

We may now STOP HERE if we do NOT need to expose these new custom fields to the IME.

Pro: Able to use the custom fields for account templates and correlation rules.

Con: Not exposed to the IME for 1:1 mapping, nor exposed for PX business rules.

#############################################################

Phase 2 Steps – Advanced Configuration – Add the custom fields to the IME to allow for 1:1 mapping and use in PX business rules.

  1. Update the JIAM (Java LDAP-to-IMPS API) reference file, jiam.jar, to allow the IME to manage these extended fields for PX business rules.
    • Use 7zip https://www.7-zip.org/ to extract files from jiam.jar; update the file CommonObjects.xml; then replace this file in the jar file.
    • Location of the reference file: ./wildfly-idm/standalone/deployments/iam_im.ear/library/jiam.jar
    • Location of the property file to update: \com\ca\iam\model\impl\datamodel\CommonObjects.xml
  2. Update the sections after eTCustomField99 with the below data, set case-insensitive (a generator sketch follows the XML block).

<property name="eTCustomField100"> <doc>Custom Field #100</doc> <value default="false"> <setValue> <baseType default="false"> <strValue></strValue> </baseType> </setValue> </value> <metadata name="jiam.syncToAccounts"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="pt.modifyPrivilege"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="pt.ownerPrivilege"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="isMultiValued"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="beanPropertyName"> <value> <strValue>customField100</strValue> </value> </metadata> <metadata name="pt.minimumAbbreviation"> <value> <intValue>10</intValue> </value> </metadata> <metadata name="pt.internalName"> <value> <strValue>CustomField100</strValue> </value> </metadata> <metadata name="pt.editType"> <value> <strValue>string</strValue> </value> </metadata> <metadata name="pt.editFlag"> <value> <boolValue>true</boolValue> </value> </metadata> <metadata name="pt.caseSensitivity"> <value> <strValue>insensitive</strValue> </value> </metadata> <metadata name="pt.asciiOnly"> <value> <boolValue>false</boolValue> </value> </metadata> <metadata name="pt.dataLocation"> <value> <strValue>db</strValue> </value> </metadata> </property>

  3. Update the CA IMPS directory.xml as needed for some or all 900 fields.
<ImsManagedObjectAttr physicalname="eTCustomField100" description="Custom Field 100" displayname="Custom Field 100" valuetype="String" multivalued="true" wellknown="%CUSTOM_FIELD_100%" maxlength="0"/>
  4. Update the IME's IMCD-to-IMPS 1:1 mappings:
    • identityEnv_environment_settings.xml

We may now stop here. The next advanced configuration is only required if we wish to manage the various endpoint Mapping tabs within the IM UI instead of the IMPS GUI. We consider the next Phase 3 steps to be low value for the effort, as this configuration is typically set once and done in the IMPS GUI.

Pro: Able to use the custom fields for account templates and correlation rules. Also able to map these fields 1:1 in the IME, so IMCD attributes can be mapped to the IMPS extended custom attributes. These IMPS extended custom attributes are now exposed for PX business rules.

Con: Not exposed to the IM UI for updating an endpoint's Mapping tab for ADS and DYN endpoints.

Phase 3 Steps – IME Advanced

If planning to expose these new custom fields in both the endpoint Mapping Attribute screen & endpoint account templates via the IME, follow these additional steps:

  1. Replace commonobjects.xml in ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\lib\roledefgen.jar by following the steps below (see the scripted sketch after this list):
    • Rename roledefgen.jar to roledefgen.zip
    • Open roledefgen.zip
    • Open com\ca\iam\roledefgen\commonobjects.xml and replace its contents with the provided commonobjects.xml file
    • Save the zip
    • Rename the zip back to jar
  2. roledefgen.jar will now contain the commonobjects.xml file with the extended custom attributes
  3. Execute the below RoleDefGenerator.bat to generate jars for all the required Java/DYN endpoints:
    • ..\Identity Manager\IAM Suite\Identity Manager\tools\RoleDefinitionGenerator\bin> RoleDefGenerator.bat -d -h -u ""
  4. Open the generated endpoint jars one by one and modify them by following the steps below:
    • Rename the original .jar to .zip
    • Open framework.xml and increase the version "version=" (2nd line)
    • Rename the generated .jar to .zip
    • Open and copy the contents of -RoleDef.xml
    • Paste the copied content into the file -RoleDef.xml in the original .zip (from the first sub-step)
    • Save the original .zip and rename it back to .jar
    • Replace the saved .jar in ..\wildfly-8.2.0.Final\standalone\deployments\iam_im.ear\user_console.war\WEB-INF\lib
  5. Restart IM and test the custom attributes in the IM web UI
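A minimal shell sketch of the jar repack in step 1, assuming a host with unzip/zip (or a JDK) available; the path of the replacement commonobjects.xml is hypothetical:

# Hypothetical scripted repack (a jar is a zip archive); adjust paths for your install
cp roledefgen.jar roledefgen.jar.bak
mkdir -p /tmp/roledefgen && unzip -q roledefgen.jar -d /tmp/roledefgen
cp ./commonobjects.xml /tmp/roledefgen/com/ca/iam/roledefgen/commonobjects.xml
(cd /tmp/roledefgen && zip -qr - .) > roledefgen.jar
# Alternative with a JDK: update the single entry in place
# jar uf roledefgen.jar -C /tmp/roledefgen com/ca/iam/roledefgen/commonobjects.xml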

Post Update Note:

  • Validate whether the four (4) IMPD DSAs need to be rebuilt for existing users that may already have these extended attributes with case=sensitive set previously.
    • This step is not required if this is the first time the extended attributes have been deployed, or if the case sensitivity has not been changed.
    • Process: Export the IMPD LDIFs, rebuild the IMPD DSAs, and then re-import the LDIFs.