The Symantec (CA/Broadcom) Identity Portal is widely used for managing IAM workflows with customizable forms, tasks, and business logic. The tool allows its business logic to be exported from the management console.
However, a major challenge arises when migrating or analyzing environments (Dev → Test → Prod) using these exported Portal files. Although configuration migration tools are available, reviewing and verifying changes is difficult: Portal exports are delivered as a single compressed JSON one-liner, making it hard to identify meaningful changes (“deltas”) without a large manual effort.
Challenge 1: Single-Line JSON Exports from Identity Portal
The example above has over 88K characters on a single line. Try searching that string for the object you wish to change or update.
Identity Portal’s export format is a flat, one-line JSON string, even when the export contains hundreds of forms, layout structures, and JavaScript blocks.
Migration/Analysis Risks
Impossible to visually scan or diff exports.
Nested structures like layout, formProps, and handlers are escaped strings, sometimes double-encoded.
Hidden differences can result in subtle bugs between versions or environments.
A Solution
We created a series of PowerShell scripts that leverage AI to select the best key-value pairs to sort on, producing output that is human-readable and searchable and that reduces the complexity and effort of the migration process. We can now isolate minor delta changes that would otherwise remain hidden until a use-case exercised them later in the migration effort, requiring additional rework. The scripts:
Convert the one-liner export into pretty-formatted, human-readable JSON (see the sketch after the lists below).
Detect and decode deeply embedded or escaped JSON strings, especially within layout or formProps.
Extract each form’s business logic and layout separately.
These outputs allow us to:
Open and analyze the data in Notepad++, with clean indentation and structure.
Use WinMerge or Beyond Compare to easily spot deltas between environments or versioned exports.
Track historical changes over time by comparing daily/weekly snapshots.
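Our production scripts are PowerShell, but the first step above can be sketched in a language-neutral way. A minimal example, assuming jq is installed; the export filenames (PortalExport_dev.json, PortalExport_test.json) are hypothetical:
# Pretty-print each one-line export, then diff the results (or open both in WinMerge)
jq . PortalExport_dev.json > PortalExport_dev.pretty.json
jq . PortalExport_test.json > PortalExport_test.pretty.json
diff PortalExport_dev.pretty.json PortalExport_test.pretty.json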
Challenge 2: Embedded JavaScript Inside Portal Forms
Identity Portal forms often include JavaScript logic directly embedded in the form definition (onLoad, onChange, onSubmit).
Migration Risks
JS logic is not separated from the data model or UI.
Inconsistent formatting or legacy syntax can cause scripts to silently fail.
Broken logic might not surface until after production deployment.
Suggested Solutions
Use PowerShell to extract JS blocks per form and store them as external .js.txt files (see the sketch after this list).
Identify reused code patterns that should be modularized.
Create regression test cases for logic-heavy forms.
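A minimal sketch of the extraction idea, assuming jq and a pretty-printed export; the forms[] array and the name/onLoad/onChange/onSubmit field names are assumptions about the export schema, so adjust them to your actual structure:
# Write one external .js.txt file per form/event pair
for evt in onLoad onChange onSubmit; do
  jq -r ".forms[]? | select(.${evt} != null) | .name" export.pretty.json | while read -r form; do
    jq -r --arg f "$form" ".forms[] | select(.name == \$f) | .${evt}" export.pretty.json > "${form}.${evt}.js.txt"
  done
done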
Challenge 3: Form Layouts with Escaped JSON Structures
The layout field in each form is often a stringified JSON object, sometimes double or triple-escaped.
Migration Risks
Malformed layout strings crash the form UI.
Even minor layout changes (like label order) are hard to detect.
Suggested Solutions
Extract and pretty-print each layout block to .layout.json files (see the sketch after this list).
Please note: while the output is pretty-printed, it is not strictly valid JSON due to the remaining escape sequences. Use these exported files as searchable research artifacts to help isolate deltas to correct during the migration effort.
Use WinMerge or Notepad++ for visual diffs.
Validate control-to-field binding consistency.
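As an illustration of the decoding idea, jq's fromjson unwraps one level of string escaping; apply it a second time for double-escaped values. The forms[]/layout field names are assumptions about the export schema:
# Decode each form's stringified layout (add a second fromjson for double-escaped strings)
jq '.forms[]? | .layout | fromjson' export.pretty.json > forms.layout.json
jq '.forms[]? | .layout | fromjson | fromjson' export.pretty.json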
Using our understanding of the Identity Portal format for the ‘layout’ property, we were able to identify AI-assisted methods to manage the double- or triple-escaped characters that were troublesome to export consistently. Our service engagements now incorporate greater use of AI and associated APIs to support migration efforts and process modernization, with the goal of minimizing business risk for our clients and our organization.
Challenge 4: Java Plugins with Multiple Classes
Many Portal instances rely on custom Java plugins with dozens of classes, Spring beans, and services.
Migration Risks
Portal API changes break plugins.
Lack of modularity or documentation for the custom plugins.
Missing source code for compiled custom plugins.
Difficult to test or rebuild.
Suggested Solutions
In the absence of custom source code, decompile plugins with jd-gui to reverse engineer them.
Recommendations for Future-Proofing
Store layouts and handlers in Git.
Modularize plugin code.
Version control form definitions.
Automate validation tests in CI or staging.
Conclusion
Migrating Identity Portal environments requires more than copy-pasting exports. In the absence of proper implementation documentation around customizations, it may require reverse engineering, decoding, and differencing of deeply nested structures.
By extracting clean, readable artifacts and comparing across environments, teams will gain visibility, traceability, and confidence in their migration efforts.
Review our GitHub collection of the scripts mentioned above. Please reach out if you would like assistance with your migration processes and challenges. We can now progress toward automating the migration of business logic from one environment to the next.
Metrics are essential for monitoring and optimizing the health of your solutions. The simplicity of using SaaS-based Application Performance Monitoring (APM)/Operational Intelligence/Analytics tools makes them indispensable for gaining actionable insights.
By leveraging metrics, you can not only ensure the performance and reliability of your systems but also build compelling ROI use cases. We can leverage these SaaS platforms to incorporate ROI queries that surface data not exposed in other dashboards.
In this guide, we’ll demonstrate the power of metrics by deploying a Broadcom DX O2 agent to the Symantec IGA Virtual Appliance in under 10 minutes, providing immediate value and visibility into your business operations. This straightforward process integrates seamlessly into your existing infrastructure, enhancing the observability and security of a hardened appliance.
This walk-through will showcase how metrics can enhance the observability and security of a hardened appliance.
After you log in to your DX OI/O2 instance, navigate to the settings/agents section. When you select an agent, your custom authentication token is embedded in the downloaded package. We plan to use the javaagent offered for Wildfly (aka JBoss). Select this agent.
When the screen displays, expand “Command Line Download”. We will use the wget command to download this agent directly to a Virtual Appliance that has internet access. Otherwise, download the agent to your workstation and then transfer it to the Virtual Appliance running Wildfly.
Step03: Login to the IGA Virtual Appliance with ssh.
Create a local media folder, then download the DX O2 agent. After the download succeeds, extract the agent into the folder the IGA Virtual Appliance reserves for “java profilers”. Since the files are owned by the ‘config’ user and the ‘wildfly’ user needs access to the log folders, chmod 777 both log folders to avoid startup issues for the Wildfly applications. You may leave the rest of the file/folder permissions as-is.
mkdir -p ~/media/dxoi_agent ; cd ~/media/dxoi_agent/
wget --content-disposition "https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/XXXXXXXXXXXXXXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=ZZZZZZZZZZZZZZZZZZZZZZ"
ls -lart
tar -xf JBoss_jboss_20241117_v1.tar -C /opt/CA/VirtualAppliance/custom/profiler/
cd /opt/CA/VirtualAppliance/custom/profiler/
ls -lart
cd wily/
ls -lart
# We could update permissions for all files/folder to 777, but we only need the following to be changed.
# Update permission for folders that 'wildfly' will write out to.
chmod 777 logs/ ./releases/24.10/logs/
chmod 777 ./releases/24.10/core/config/hotdeploy
chmod 777 ./releases/24.10/extensions
mkdir -p ./releases/24.10/core/config/metrics
chmod 777 ./releases/24.10/core/config/metrics
After updating the permissions on the logs, hotdeploy, extensions, and metrics folders, run the shell script ./agent-location.sh. This script outputs the JVM arguments that we will use with the Wildfly instances for IdentityManager, IdentityPortal, and IdentityGovernance.
We will now edit the jvm-args.conf files for both IdentityManager and IdentityPortal with the string output above. We will prepend “-javaagent:” and, to avoid a Java Log Module loading-order error, place the entire string at the very end of the JAVA_OPTS variable. We can use the exact same string and path, as the service name of each instance is automatically determined by the javaagent.
Below is a view of what the IM and IP jvm-args.conf files should look like. Please ensure the full string is at the very end.
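A hedged sketch of the expected tail of the JAVA_OPTS line; the wily paths and the 24.10 release number mirror the agent-location.sh output shown for the JCS agent later in this post, so substitute your own script output:
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/Agent.jar -Dcom.wily.introscope.agentProfile=/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=/opt/CA/VirtualAppliance/custom/profiler/wily -Dintroscope.agent.bootstrap.release.version=24.10 -Dintroscope.agent.bootstrap.version.loaded=24.10"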
Now stop and start both IdentityManager and IdentityPortal. We recommend using a second ssh session to monitor the wildfly-console.log for each, as it will immediately show any issues with the javaagent, such as permission problems.
Step04: We are Done. View the DX O2 UI and review the new incoming data.
We recommend walking through all the pre-built dashboards to monitor and alert on your solution. Of interest: IM shows up as the hostname of the Virtual Appliance (“vapp1453”) and IP shows up as the internal pseudo name “IPnode1”. Note: these values can be overridden in the profile file.
A view of metrics by each agent. You must click on each sub-category to see what is being offered.
A very interesting view within the memory space of the IdentityManager application
Other views to review
Very interesting is the ability to add ROI metrics to dashboards, where we can monitor the number of events being utilized, e.g. external customer access or internal password changes. The available APIs provide maximum flexibility to input any ROI metrics we wish directly.
Reach out and we will work with you to get the most value out of your solution.
Additional Notes
JVM Order Management
On the IGA virtual appliance, the order of JVM switches for “LogManager” is predetermined. If the new javaagent is not placed at the very end of JAVA_OPTS, we may see the generic warn/error messages below. We spent quite a bit of time being misled by these generic messages. We did not need to add extra JVM switches to manage the JVM ordering. If you do have challenges, review the current documentation for the JBoss agent.
WARNING: Failed to load the specified log manager class org.jboss.logmanager.LogManager
ERROR: WFLYCTL0013: Operation ("parallel-extension-add") failed - address: ([])
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: WFLYLOG0078: The logging subsystem requires the log manager to be org.jboss.logmanager.LogManager. The subsystem has not be initialized and cannot be used. To use JBoss Log Manager you must add the system property "java.util.logging.manager" and set it to "org.jboss.logmanager.LogManager"
FATAL: WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting. See previous messages for details.
Bonus Round: Deploy the Java Agent for the JCS component
While the javaagent binaries for Wildfly and Java are the same, the support modules differ slightly. We may be able to combine them, but to avoid any possible conflicts, we separated the extraction folders. Add this javaagent string to the JCS JVM custom configuration file: jvm_options.conf
~/media/dxoi_agent > ls -lart
-rw-r--r-- 1 config config 30803968 Nov 16 21:18 JBoss_jboss_20241117_v1.tar
~/media/dxoi_agent > wget --content-disposition "https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/7XXXXXXXXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=ZZZZZZZZZZZ"
~/media/dxoi_agent > ls -lart
-rw-r--r-- 1 config config 30803968 Nov 16 21:18 JBoss_jboss_20241117_v1.tar
-rw-r--r-- 1 config config 31645184 Nov 16 23:51 Java_other_20241117_v1.tar
~/media/dxoi_agent > tar -xf Java_other_20241117_v1.tar
~/media/dxoi_agent > ls -lart
-rw-r--r-- 1 config config 30803968 Nov 16 21:18 JBoss_jboss_20241117_v1.tar
-rw-r--r-- 1 config config 31645184 Nov 16 23:51 Java_other_20241117_v1.tar
drwxr-xr-x 4 config config 123 Nov 16 23:51 wily
~/media/dxoi_agent > mv wily/ /opt/CA/VirtualAppliance/custom/profiler/wily-jcs
~/media/dxoi_agent > cd /opt/CA/VirtualAppliance/custom/profiler/wily-jcs
/opt/CA/VirtualAppliance/custom/profiler/wily-jcs > ls -lart
drwxr-xr-x 2 config config 6 Nov 16 23:41 logs
-rw-r--r-- 1 config config 5 Nov 16 23:41 agent.release
-rwxr-xr-x 1 config config 1371 Nov 16 23:41 agent-location.sh
-rwxr-xr-x 1 config config 1138 Nov 16 23:41 agent-location.bat
-rw-r--r-- 1 config config 45258 Nov 16 23:41 Agent.jar
drwxr-xr-x 3 config config 19 Nov 16 23:51 releases
# We could update permissions for all files/folder to 777, but we only need the following to be changed.
# Update permission for folders that 'wildfly' will write out to.
chmod 777 logs/ ./releases/24.10/logs/
chmod 777 ./releases/24.10/core/config/hotdeploy
chmod 777 ./releases/24.10/extensions
mkdir -p ./releases/24.10/core/config/metrics
chmod 777 ./releases/24.10/core/config/metrics
/opt/CA/VirtualAppliance/custom/profiler/wily-jcs > ./agent-location.sh
/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/Agent.jar -Dcom.wily.introscope.agentProfile=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs -Dintroscope.agent.bootstrap.release.version=24.10 -Dintroscope.agent.bootstrap.version.loaded=24.10
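The matching jvm_options.conf entry for the JCS is the same string with the -javaagent: prefix, per the prepend step described earlier:
-javaagent:/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/Agent.jar -Dcom.wily.introscope.agentProfile=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs -Dintroscope.agent.bootstrap.release.version=24.10 -Dintroscope.agent.bootstrap.version.loaded=24.10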
Below is a view of the JCS agent in the DX O2 UI. We see it sits by itself under “Java”. Also note a challenge with two (2) Wildfly (JBoss) instances using the same profile with the default “agentName=JBoss Agent”. These Wildfly instances were automatically named upon startup, but after a while the static name in the profile took precedence. See more information below.
Challenge with default naming convention
When two (2) or more applications use the same profile, DX O2 may attempt to join them together in the metrics UI. To avoid this, let's make two (2) copies of the IntroscopeAgent.profile and add our own “agentName” to each. Do NOT forget to comment out the default “introscope.agent.agentName=JBoss Agent”. We added “com.wily.introscope.agent.agentName” as well, since it is called out in the online documentation.
Observations: The IP deployment honors the new value immediately. The IM deployment claims in the DX O2 agent logs that “Unable to automatically determine the Agent Name because: The Application Server naming mechanism is not yet available.” It defaults to the hostname of the Virtual Appliance for a few minutes, then appears to reset to the correct agentName later.
/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/core/config > head IntroscopeAgent.ip.profile
###############################################################################
# Add name for IP application
###############################################################################
com.wily.introscope.agent.agentName=IP
introscope.agent.agentName=IP
/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/core/config > head IntroscopeAgent.im.profile
###############################################################################
# Add name for IM application
###############################################################################
com.wily.introscope.agent.agentName=IM
introscope.agent.agentName=IM
Per the online documentation, we have other renaming options as well:
# Using JVM -D environmental switches, we can set one of these two
# -D JVM switches
# -DagentName=IM
# -Dcom.wily.introscope.agent.agentName=IM
# Within the IntroscopeAgent.profile configuration file, we have these options.
# Allow Introscope to pickup a JVM environmental value from a pre-existing -D variable
introscope.agent.agentNameSystemPropertyKey=jboss.node.name
# Static name for the agent
introscope.agent.agentName=IP
# Allow the Introscope agent to append an integer to the name, used for clusters, e.g. JBoss Agent-1, JBoss Agent-2
introscope.agent.clonedAgent=true
The IP and JCS deployments had no issues using any of the above; the IM application only responded well to the clonedAgent example. This is likely due to how the LogManager modules are ordered in startup files that the ‘config’ service ID does not have write access to modify.
Log Folder Cleanup
The ‘config’ service ID owns this folder, and even though there are files owned by ‘wildfly’, the ‘config’ user can delete these files.
ROI Metrics
Enable the API section of your DX O2 instance to create your ROI Metrics Input
On large project teams, multiple members may often use the same hosts simultaneously. Alternatively, you might prefer to maintain multiple SSH sessions open on the same host—one for monitoring logs and another for executing commands. While a Linux host using the Bash shell records command-line history, the default settings can pose challenges. Specifically, they may result in the loss of prior history when multiple sessions access the same host.
To address this, you can make some enhancements to your configuration. On the Symantec IGA Virtual Appliance, we typically add these improvements to the .bashrc files of the config, dsa, and imps service IDs. These adjustments ensure the preservation of command history for all work performed. Naturally, it is also important to clean up or remove any sensitive data, such as passwords, from the history.
Below, we explore an optimized .bashrc configuration that focuses on improving command history management. Key features include appending history across sessions, adding timestamps to commands, ignoring specific commands, and safeguarding sensitive inputs.
Optimized .bashrc Configuration
Here’s the full configuration we’ll be exploring:
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
Detailed Explanation of the Configuration
shopt -s histappend
Ensures that new commands from the current session are appended to your history file instead of overwriting it. This prevents accidental history loss across sessions.
export HISTTIMEFORMAT='%F %T '
Adds a timestamp to each command in your history, formatted as YYYY-MM-DD HH:MM:SS.
export HISTSIZE=10000
Limits the number of commands retained in memory during the current session to 10,000.
export HISTFILESIZE=100000
Configures the maximum number of commands saved in the history file to 100,000.
export HISTIGNORE='ls:history'
Excludes frequently used or less important commands like ls and history from being saved, reducing clutter.
export HISTCONTROL=ignorespace
Prevents commands that start with a space from being saved to history. This is particularly useful for sensitive commands like those containing passwords or API keys. When we copy-n-paste from Notepad++ or similar, remember to put a space character in front of the command.
export PROMPT_COMMAND='history -a; history -c; history -r'
Keeps history synchronized across multiple shell sessions: history -a appends new commands to the history file, history -c clears the in-memory history for the current session, and history -r reloads history from the history file.
Symantec IGA Virtual Appliance Service IDs
Below we review the .profile (or .bash_profile) and .bashrc file(s) for each service ID.
We can see that the default .bash_profile for the ‘config’ service ID already has a redirect reference to .bashrc.
config@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
config@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export PATH
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
if [ -d ~/.bashrc.d ]; then
for rc in ~/.bashrc.d/*; do
if [ -f "$rc" ]; then
. "$rc"
fi
done
fi
unset rc
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
A view of the ‘dsa’ service ID files with some modifications. The default .profile contains only one line, which sources /opt/CA/Directory/dxserver/install/.dxprofile. To assist with history monitoring, instead of other direct updates, we add a .bashrc reference to this file.
[dsa@vapp1453 ~]$ cat .profile
. /opt/CA/Directory/dxserver/install/.dxprofile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Below is a view of the new .bashrc file to be sourced by the DSA .profile file.
[dsa@vapp1453 ~]$ cat .bashrc
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
A view of the ‘imps’ service ID files with some modifications. The default .profile contains only one line, which sources /etc/.profile_imps. To assist with history monitoring, instead of other direct updates, we add a .bashrc reference to this file.
imps@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .profile
# Source IM Provisioning Profile script
. /etc/.profile_imps
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Below is a view of the new .bashrc file to be sourced by the IMPS .profile file.
imps@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .bashrc
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
Delete Sensitive Information from History
If sensitive information has already been recorded in your history, you should clean it up. While you could wipe the entire history, a better approach is to retain as much as possible and remove only the sensitive entries.
The Challenge of Deleting Sensitive History
When deleting specific entries from Bash history, there’s a complication: line numbers change dynamically. The Bash history is a sequential list, so removing an entry causes all subsequent commands to shift up, altering their line numbers.
To address this, the cleanup process should iterate backward through the history. Starting with the last match ensures that earlier line numbers remain unaffected by changes further down the list.
Cleanup Script
Save the following script as history_cleanup.sh and source it in your current shell (e.g. . ./history_cleanup.sh) so the history builtins operate on your live session; when prompted, enter the pattern matching the sensitive commands you want to delete:
#!/bin/bash
##################################################################
# Name: history_cleanup.sh
# Goal: Provide a means to clean up prior bash history of any
# sensitive data by a known pattern, e.g. password or token
#
# ANA 11/2024
##################################################################
# Prompt the user to enter the pattern to search for
read -p "Enter the pattern to search for in history: " PATTERN
# Validate input
if [ -z "$PATTERN" ]; then
echo "No pattern entered. Exiting."
exit 1
fi
# Find matching history entries and delete them in reverse (numeric) order.
# Process substitution keeps the while loop in the current shell, so each
# history -d actually modifies the live history list before history -w runs.
while read -r line; do
# Extract the history line number (first column in the output)
LINE_NUMBER=$(echo "$line" | awk '{print $1}')
# Delete the history entry by its line number
history -d "$LINE_NUMBER"
done < <(history | grep "$PATTERN" | sort -rn)
# Save the updated history to the .bash_history file
history -w
echo "History cleanup complete. Entries matching '$PATTERN' have been removed."
Final Thoughts
Applying this .bashrc configuration across all service IDs offers several advantages. It streamlines workflows, secures sensitive inputs, and ensures a more organized command history. These enhancements are particularly valuable for developers, administrators, or anyone operating in multi-terminal environments.
Key Benefits:
History Persistence: Ensures commands are appended to the history file without overwriting existing entries, preserving a complete record of activity.
Enhanced Auditability: Adds timestamps to history, making it easier to track when specific commands were executed.
Reduced Noise: Excludes less critical commands, such as ls, to keep the history clean and focused on meaningful actions.
Improved Privacy: Commands starting with a space are omitted from the history, protecting sensitive inputs like passwords or API keys.
Real-Time Synchronization: Maintains consistent history across multiple terminal sessions, enabling seamless transitions and collaboration.
By adopting these configurations, you can enhance productivity, improve security, and achieve better management of command history in your environment.
On a typical Linux host, rolling back a configuration in WildFly can be as simple as copying a backup of the configuration XML file back into place. However, working within the constraints of a secured virtual appliance (vApp) presents a unique challenge: the primary service ID often lacks write access to critical files under the WildFly deployment.
When faced with this limitation, administrators may feel stuck. What options do we have? Thankfully, WildFly’s jboss-cli.sh process provides a lifeline for configuration management, allowing us to take snapshots and reload configurations efficiently. See the bottom of this blog if you need to create a user for jboss-cli.sh usage.
Why Snapshots are necessary for your sanity
WildFly snapshots capture the server’s current configuration, creating a safety net for experimentation and troubleshooting. They allow you to test changes, debug issues, or introduce new features with confidence, knowing you can quickly restore the server to a previous state.
In this guide, we’ll explore a step-by-step process to test and restore configurations using WildFly snapshots on the Symantec IGA Virtual Appliance.
Step-by-Step: Testing and Restoring Configurations
Step 1: Stamp and Backup the Current Configuration
First, you may optionally add a unique custom attribute to the current `standalone.xml` (ca-standalone-full-ha.xml) configuration if you don’t already have a delta to compare. This custom attribute acts as a marker, helping track configuration changes. After updating the configuration, take a snapshot (sketched below).
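A sketch of the snapshot step, reusing the jboss-cli connection string shown later in this guide; the `:take-snapshot` operation writes a timestamped copy of the active configuration into the snapshot directory:
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command=":take-snapshot"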
Step 2: Simulate a Change
Simulate a change by updating the custom attribute, then validate the update with a read query to confirm the change was applied. To be safe, we will remove the attribute and re-add it with a new, different string (a sketch follows).
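A minimal sketch of the marker idea, using a hypothetical system property named migration-marker (any unused name works):
# Add the marker, then validate it with a read query
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:add(value=before-change)"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:read-resource"
# Remove and re-add with a different string to create the delta
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:remove"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:add(value=after-change)"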
Step 3: List Available Snapshots
List all available snapshots to identify the correct rollback point. You can use the `:list-snapshots` command to query snapshots and verify the files in the snapshot directory.
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --timeout=90000 --command=":list-snapshots"
ls -l /opt/CA/wildfly-idm/standalone/configuration/standalone_xml_history/snapshot/
Step 4: Reload from Snapshot
Once you’ve identified the appropriate snapshot, use the `reload` command to roll back the configuration. Monitor the process to ensure it completes successfully, then verify the configuration.
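A sketch only, assuming a hypothetical snapshot filename taken from the `:list-snapshots` output; the CLI `reload` command accepts a --server-config argument:
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="reload --server-config=20241117-093000000ca-standalone-full-ha.xml"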
Adding a WildFly Admin User for Snapshot Management
Before you can execute commands through WildFly’s `jboss-cli.sh`, you’ll need to ensure you have a properly configured admin user. If an admin user does not already exist, you can create one with the following command:
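A sketch, assuming the standard WildFly add-user.sh utility in the same bin folder used for jboss-cli.sh:
/opt/CA/wildfly-idm/bin/add-user.sh -m -u jboss-admin -p 'Password01!' -g SuperUser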
- `-m`: Indicates the user is for management purposes.
- `-u jboss-admin`: Specifies the username (`jboss-admin` in this case).
- `-p Password01!`: Sets the password for the user.
- `-g SuperUser`: Assigns the user to the `SuperUser` group, granting the necessary permissions for snapshot and configuration management.
You can have as many jboss-cli.sh service IDs as you need.
Please note, this Wildfly management service ID is not the same as the Wildfly application service ID that is needed for /iam/im/logging_v2.jsp access, which requires the -a switch and the IAMAdmin group.
If your logging_v2.jsp page is not displaying correctly, there is a simple update to resolve this challenge. Add the below string to your /opt/CA/VirtualAppliance/custom/IdentityManager/jvm-args.conf file.
Our goal is to move away from using self-signed certificates, often supplied by various software vendors and Kubernetes deployments, in favor of a unified approach with a single wildcard certificate covering multiple subdomains. This strategy allows for streamlined updates of a single certificate, which can then be distributed across our servers and projects as needed.
We have identified LetsEncrypt as a viable source for these wildcard certificates. Although LetsEncrypt offers a Docker/Podman image example, we have discovered an alternative that integrates with Google Domains. This method automates the entire process, including validation of the DNS TXT record and generation of wildcard certificates. It’s important to note that Google Domains operates independently from Google Cloud DNS, and its API is limited to updating TXT records only.
In this blog post, we will concentrate on how to validate the certificates provided by LetsEncrypt using OpenSSL. We will also demonstrate how to replace self-signed certificates with these new ones on a virtual appliance.
View the pubkey of the LetsEncrypt cert.pem and privkey.pem file to confirm they match
We need to download this root CA cert for solutions, appliances, and on-prem Kubernetes Clusters that do NOT have these root CA certs in their existing keystores.
Validate the cert.pem file with the public root CA cert and the provided chain.pem file
This will return an “OK” response if valid. The order of the CA certs is important; if reversed, this process will fail. Validation still fails if we only have chain.pem or fullchain.pem (see image below) without the correct public root CA cert from LetsEncrypt. Note: this public root CA cert is typically bundled with updated modern browsers. Note 2: while fullchain.pem does include a CA cert with CN = ISRG Root X1, it does NOT appear to be the correct one based on the error reported, so we downloaded the correct CA cert with the same CN of ISRG Root X1 (see the following images below).
Combine cert.pem with the public root CA cert and chain.pem for a complete chain cert in the CORRECT order.
Important Note: cert.pem MUST be first in this list, otherwise validation will fail. Please note that there are two (2) root CA certs with the same CN, which may cause some confusion when validating the chain.
Validate certs with openssl server process and two (2) terminal ssh sessions/windows
1st terminal session – run an openssl server (via openssl s_server) on port 9443 (or any open port). The -www switch sends a status message back to the client when it connects, including information about the ciphers used and various session parameters. The output is in HTML format, so this option is normally used with a web browser. (See the sketch after this list.)
2nd terminal session – run openssl s_client and curl with the combined chain cert to validate. Replace the FQDN with your LetsEncrypt domain in the wildcard cert; the example FQDN below is training.anapartner.net. You may also use a browser to access the openssl s_server web server via the FQDN.
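A sketch of both sessions, using the example port 9443 and the example FQDN training.anapartner.net:
# 1st terminal: serve the combined chain cert; -www returns an HTML status page to connecting clients
openssl s_server -accept 9443 -www -cert combined_chain_with_cert.pem -key privkey.pem
# 2nd terminal: validate the chain, then test with curl (forcing the FQDN to resolve locally)
openssl s_client -connect training.anapartner.net:9443 -CAfile combined_chain_with_cert.pem
curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:9443:127.0.0.1 https://training.anapartner.net:9443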
Example of using the official Certbot image with podman (see the sketch below). We recommend using multiple -d switches with *.subdomain1.domain.com to allow a single cert to be used across many of your projects.
A variation of the podman process with the Google Domains API TXT integration. We use variables to allow reuse of this code across various testing domains. This docker image temporarily creates the Google Domains TXT records via a REST API, which are needed for Certbot DNS validation, then removes the TXT records. No manual interaction is required. We run this process via a bash shell script on demand or via scheduled events.
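A sketch of the podman invocation; the image path, volume mount, and domains are examples, and the flags mirror the certbot command shown later in this post:
podman run -it --rm -v ~/letsencrypt:/etc/letsencrypt docker.io/certbot/certbot certonly \
--manual --preferred-challenges dns --register-unsafely-without-email \
-d "*.subdomain1.domain.com" -d "*.subdomain2.domain.com"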
Replace Identity Suite vApp Apache Certificate with LetsEncrypt
We found tech notes and an online document, but wanted to provide a cleaner step-by-step process to update the Symantec IGA Virtual Appliance certificates for the embedded Apache HTTPD service under the path /opt/CA/VirtualAppliance/custom/apache-ssl-certificates.
# Collect the generated LetsEncrypt certs via certbot, save them, scp to the vApp host, and then extract them
tar -xvf letsencrypt-20231125.tar
# View the certs
ls -lart
# Validate LetsEncrypt cert via pubkey match between private key and cert
openssl x509 -noout -pubkey -in cert.pem
openssl pkey -pubout -in privkey.pem
# Download the latest LetsEncrypt public root CA cert
curl -sOL https://letsencrypt.org/certs/isrgrootx1.pem
# Validate a full chain with root with cert (Note: order is important on cat process)
openssl verify -CAfile <(cat isrgrootx1.pem chain.pem) cert.pem
# Create a full chain with root cert and LetsEncrypt chain in the correct ORDER
cat cert.pem isrgrootx1.pem chain.pem > combined_chain_with_cert.pem
# Move prior Apache HTTPD cert files
mv localhost.key localhost.key.original
mv localhost.crt localhost.crt.original
# Link the new LetsEncrypt files to same names of localhost.XXX
ln -s privkey.pem localhost.key
ln -s combined_chain_with_cert.pem localhost.crt
# Restart apache (httpd)
sudo systemctl restart httpd
# Test with curl with the FQDN name in the CN or SANs of the cert, e.g. training.anapartner.net
curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:443:127.0.0.1 https://training.anapartner.net:443
# Test with browser with the FQDN name
Example of integration and information reported by browser
View of the certificate as shown with a CN (subject) = training.anapartner.net
A view of the SANS wildcard certs that match the FQDN used in the browser URL bar of iga.k8s-training-student01.anapartner.net
Examples of error messages from the Apache HTTPD service's log files if the certs are not in the correct order or do not validate correctly. One message is a warning only; the other is a fatal error stating that the cert and private key do not match. Use the pubkey check process above to confirm the cert/key match.
[ssl:warn] [pid 2206:tid 140533536823616] AH01909: CA_IMAG_VAPP:443:0 server certificate does NOT include an ID which matches the server name
[ssl:emerg] [pid 652562:tid 140508002732352] AH02565: Certificate and private key CA_IMAG_VAPP:443:0 from /etc/pki/tls/certs/localhost.crt and /etc/pki/tls/private/localhost.key do not match
TLS Secrets update in Kubernetes Cluster
If your Kubernetes Cluster is on-prem and does not have internet access to validate the root CA cert, you may decide to use combined_chain_with_cert.pem when building your Kubernetes Secrets. Note that with Kubernetes Secrets you must delete and then re-add the Secret, as there is no in-place update process for Secrets.
CERTFOLDER=~/labs/letsencrypt
CERTFILE=${CERTFOLDER}/combined_chain_with_cert.pem
#CERTFILE=${CERTFOLDER}/fullchain.pem
KEYFILE=${CERTFOLDER}/privkey.pem
INGRESS_TLS_SECRET=anapartner-dev-tls
NAMESPACE=monitoring
NS=${NAMESPACE}
kubectl -n ${NS} get secret ${INGRESS_TLS_SECRET} > /dev/null 2>&1
if [ $? != 0 ]; then
echo ""
echo "### Installing TLS Certificate for Ingress"
kubectl -n ${NS} create secret tls ${INGRESS_TLS_SECRET} \
--cert=${CERTFILE} \
--key=${KEYFILE}
fi
Ingress Rule via yaml:
tls:
- hosts:
- grafana.${INGRESS_APPS_DOMAIN}
secretName: ${INGRESS_TLS_SECRET}
helm upgrade flag:
--set grafana.ingress.tlsSecret=${INGRESS_TLS_SECRET}
The below section from the online documentation mentions the purpose of a certificate used by Symantec Directory. It mentions using either dxcertgen or openssl; we can now use LetsEncrypt certs as well.
One statement that caught our eye as not quite accurate was that a certificate used on one DSA could not be used for another DSA. If we compare the DSAs provided by CA Directory for the provisioning server's data tier (Identity Suite/Identity Manager), there is no difference between them, including the subject name. Because the subject (CN) has the same name for all five (5) DSAs (data/router), if a Java JNDI call is made for an LDAP call to the DSAs, LDAP hostname validation must be disabled (see below).
We must use a key type of RSA for any cert with Symantec PAM. The process to update the certificates is fairly straightforward. Access the PAM UI configuration and select the menu: Configuration / Security / Certificates. Join the cert.pem and privkey.pem files together, in this order, with cat or Notepad.
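A one-line sketch of the join, using the combined filename that appears in the confirmation message later in this post:
cat cert.pem privkey.pem > cert_with_key_only_for_pam_app.crt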
Challenge/Resolution: Please edit the joined file and add the string “RSA” to the header/footer of the private key provided by LetsEncrypt. Per Broadcom tech note 126692, PAM's source code expects RSA-based private keys to start with a “-----BEGIN RSA PRIVATE KEY-----” header and have a matching footer. See the example below.
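Example of the expected header/footer after the edit (the base64 key data itself is unchanged):
-----BEGIN RSA PRIVATE KEY-----
(existing base64 key data, unchanged)
-----END RSA PRIVATE KEY-----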
Select “Certificate with Private key” with X509 (as the other option), then click the “Choose File” button to select the combined cert/private-key PEM file. We are not required to provide a destination filename or passphrase for the LetsEncrypt certs. Click the Upload button.
We should receive a confirmation message of “Confirmation: PAM-CM-0349: subject=CN = training.anapartner.net has been verified.”
The error message “PAM-CM-0201: Verification Error Can not open private key file” will occur if the RSA or ECDSA key's default header/footer does not contain the required string for PAM to parse. If we attempt to use an ECDSA key type, we receive a similar PAM-CM-0201 error message even after updating the header/footer, so please regenerate the LetsEncrypt certs with keytype=RSA.
Next, after the certificate and private key have been loaded into Symantec PAM, please use the “Set” menu option to assign this certificate as primary. We will click the Verify button first, to confirm the certificate is functioning correctly.
We should receive a confirmation message for the file: “Confirmation: PAM-CM-0346: cert_with_key_only_for_pam_app.crt has been verified”.
Finally, we will click “Accept” button, and allow the PAM appliance to restart. Click “Yes” when asked to restart the appliance.
View the updated PAM UI with the LetsEncrypt Certs.
ERROR Messages
If you have received any of the below error messages from any Java process, e.g. J2EE servers (JBoss/Wildfly), you have pushed beyond the solution vendor's ability to manage the newer features in LetsEncrypt certs. You will need to regenerate the certs with a key type of RSA instead of the default elliptic-curve type.
UNKNOWN-CIPHER-SUITE
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
SSLHandshakeException: no cipher suites in common
Ignore unavailable extension
Ignore unknown or unsupported extension
Use the below processes to help you identify the root cause of your issue.
Create an ‘RSA’ type JKS and P12 Keystore using LetsEncrypt certs.
The below example is a two (2) step process that first creates a p12 keystore from the cert.pem and privkey.pem files. A second command then converts the p12 keystore to the older JKS keystore format. You may use these in any Java process, e.g. a J2EE and/or Tomcat platform.
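A sketch of the two commands; the keystore alias and passwords are examples:
# Step 1: create a PKCS12 (p12) keystore from the LetsEncrypt cert and private key
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -name le-rsa -out keystore.p12 -passout pass:changeit
# Step 2: convert the p12 keystore to the older JKS keystore format
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore keystore.jks -deststoretype JKS -deststorepass changeit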
While preparing to enable a feature within the Identity Suite Virtual Appliance for TLS encryption for the Provisioning Tier to send notification events, we noticed some challenges that we wish to clarify.
The Identity Suite Virtual Appliance has four (4) web services that use pre-built self-signed certificates when first deployed. Documentation is provided to change these certificates/key using aliases or soft-links.
One of the challenges we discovered is that the Provisioning Tier may be using an older version of libcurl and OpenSSL, with constraints that need to be managed. These libraries are used during the web submission to the IME ETACALLBACK web service. We will review the processes to capture these error messages and how to address them.
We will introduce the use of Let’s Encrypt wildcard certificates into the four (4) web services and the Provisioning Server’s ETACALLBACK use of a valid public root certificate.
The Apache HTTPD service is used for both a forward proxy (TCP 443) to the three (3) Wildfly Services and service for the vApp Management Console (TCP 10443). The Apache HTTPD service SSL certs use the path /etc/pki/tls/certs/localhost.crt for a self-signed certificate. A soft-link is used to redirect this to a location that the ‘config’ service ID has access to modify. The same is true for the private key.
A view of the Apache HTTPD SSL self-signed certificate and key.
The three (3) Wildfly services are deployed for the Identity Manager, Identity Governance, and Identity Portal components. The configuration for TLS security is defined within the primary Wildfly configuration file, standalone.xml. The current configuration is already set up with the paths to the PKCS12 keystore files:
A view of the three (3) Wildfly PKCS12 keystore files and view of the self-signed cert/key with the pseudo hostname of the vApp host.
Provisioning Server process for TLS enablement for IME ETACALLBACK process.
Step 1. Ensure that the Provisioning Server is enabled to send data/notification events to the IME.
Step 2. Within the IME Management Console there is a baseURL parameter. This string is sent down to the Provisioning Server upon restart of the IME and appended to a list. This list is viewable and manageable within the Provisioning Manager UI under [System/Identity Manager Setup]. The URL string will be appended with ETACALLBACK/?env=identityEnv. Within the Provisioning Server we can manage which URLs have priority in the list; this is a failover list, not load-balancing. We have the opportunity to introduce an F5 or similar load-balancer URL, but we should enable TLS security first.
Step 3. Add the public root CA cert or CA chain certs to the following location: [System/Domain Configuration/Identity Manager Server/Trusted CA Bundle]. This PEM file may be placed in the Provisioning Server bin folder with no path, or referenced with a fully qualified path. Note: the Provisioning Server uses a version of openssl/libcurl that reports errors which can be managed with wildcard certificates. We show the common errors in this blog entry.
Let's Encrypt offers a free service to build wildcard certificates. We are fond of using their DNS method to request a wildcard certificate.
sudo certbot certonly --manual --preferred-challenges dns -d *.aks.iam.anapartner.dev --register-unsafely-without-email
Let’s Encrypt will provide four (4) files to be used. [certN.pem, privkeyN.pem, chainN.pem, fullchainN.pem]
cert1.pem [The primary server side wildcard cert]
privkey1.pem [The primary server side private key associated with the wildcard cert]
chain1.pem [The intermediate chain certs that are needed to validate the cert1 cert]
fullchain1.pem [cert1.pem and chain1.pem together, in the correct order]
NOTE: fullchain1.pem is the file you would typically use as the cert for a solution, so the solution will also have the intermediate CA chain certs for validation.
Important Note: One of the root public certs was cross-signed by another root public cert that expired. Most solutions are able to manage this challenge, but the provisioning service ETACALLBACK has a challenge with an expired certificate, but there are replacements for this expired certificate that we will walk through. Ref: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
Create a new CA chain PEM file for LE (Let's Encrypt) validation to use with the Provisioning Server.
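A sketch of the bundle build, reusing the download and concatenation steps from earlier in this post:
# Build an LE CA chain PEM for the Provisioning Server's Trusted CA Bundle
curl -sOL https://letsencrypt.org/certs/isrgrootx1.pem
cat isrgrootx1.pem chain1.pem > le_ca_bundle.pem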
Validate with browsers and view the HTTPS lock symbol to view the certificate
Test with an update to a Provisioning Global User’s attribute [Note: No need to sync to accounts]. Ensure that the Identity Manager Setup Log Level = DEBUG to monitor this submission with the Provisioning Server etanotifyXXXXXXX.log.
A view of the submission for updating the Global User’s Description via IMPS (IM Provisioning Server) etanotifyXXXXXXX.log. The configuration will be loaded for using the URLs defined. Then we can monitor for the submission of the update.
Finally, a view using the IME VST (View Submitted Tasks) for the ETACALLBACK process using the task Provisioning Modify User.
Common TLS errors seen with the Provisioning Server ETACALLBACK
Ensure that the configuration is enabled for the debug log level, so we may view these errors and correct them.
[rc=77] will occur if the PEM file does not exist or is not in the correct path.
[rc=51] will occur if the URL defined does not match the exact server-side certificate (a good reason to use a wildcard certificate, or to adjust your URL FQDN to match the cert subject (CN=XXXX) value).
[rc=60] will occur if the remote web service is using a self-signed certificate, or if any certificate in the chain or the public root CA cert has expired.
Other Error messages (curl)
If you see an error message with Apache HTTPD (TCP 443) with curl about “curl: (60) Peer certificate cannot be authenticated with known CA certificates”, please ignore this, as the vApp does not have the “ca-bundle.crt” configuration enabled. See RedHat note: https://access.redhat.com/solutions/523823
The virtual appliance and standalone deployment of Symantec (CA) Identity Suite allow for redirecting authentication for the J2EE tier application through Symantec SSO or directly to an Active Directory domain, instead of the existing userstore for the solution.
Challenge:
The standalone deployment of Symantec (CA) Identity Suite on MS Windows OS allowed for the mid-tier component to utilize PAM modules to redirect to AD authentication for the Global User.
However, this PAM feature does not exist for Provisioning Servers on the virtual appliance.
To be clear, there are no expectations that this feature will be introduced in the future roadmap for the solution, as the primary UI will be the web browser.
Review:
Symantec (CA) Identity Suite architecture for virtual appliance versus standalone deployment architecture.
The standalone deployment architecture has both MS Windows and Linux components of all tiers.
The vApp deployment architecture has primary Linux components and few MS Windows components.
The vApp MS Windows components do not include the IMPS (Provisioning Server).
Proposal:
To address this requirement of enabling AD authentication to the vApp Provisioning Server, we will introduce the concept of a “jump server”.
The “jump server” will utilize the standalone deployment of Symantec Identity Provisioning Server on an MS Windows OS. This “jump server” will be deployed as an “alternative server” integrated into the existing vApp Provisioning Directory deployment.
We will select deployment configuration ONLY of the Provisioning Server itself. We do not require the embedded CCS Service.
We will integrate this “jump server” deployment with the existing Symantec Identity solution.
Ensure the imps_datakey encryption seed file is in sync between all components, vApp and standalone.
To avoid impacting the existing vApp deployment, we will NOT integrate the “jump server” deployment to the IME. The IME’s Directory XML for the Provisioning Directory will not be updated.
Important Note: The Symantec/CA Directory solution is required as a pre-step.
Summary of deployment steps:
Select an MS Windows OS workstation (clean or with JCS/CCS services) that may be part of the MS AD domain.
Option 1: [RECOMMENDED & PREFERRED] If using a clean OS, install MS .NET Framework 3.5.1 for the provisioning component.
Open cmd as administrator to deploy: DISM /Online /Enable-Feature /All /FeatureName:NetFx3
Option 2: [MED-HIGH RISK] If using “side-deployment” on an existing JCS/CCS server (MS Win OS), we will need to make modifications to this server.
Will need to rename the file C:\Windows\vpd.properties to avoid conflict with the JCS/CCS component naming convention in this “registry” file (see the screenshot below).
Will require a post-install execution of the IMPS pwdmgr tool to address an MS Registry path conflict between the CCS and IMPS components.
Ensure all CA Directory hostnames are in DNS or in the MS Windows local hosts file (C:\Windows\System32\drivers\etc\hosts), otherwise this “jump server” deployment will fail when it tries to validate all possible directory nodes' hostnames and build the respective Directory knowledge files.
Create a reference file for the new IMPS router dxc file on at least one of the existing vApp Identity Suite Directory Servers, otherwise this “jump server” deployment will fail due to trust issues when testing connections to other directory nodes' hostnames.
Deploy Symantec/CA Directory (if not already done) with default configurations. Otherwise, you will see this error message:
Deploy IMPS on MS Windows – only IMPS (no CCS) with the Alternative Server Selection configuration, and update to the latest CP patches. Note (for “side-deployment” only): if the vpd.properties file was not renamed, a name collision will occur due to this registry file when using the JCS/CCS server to side-deploy. It is low risk to change this file, as it is only used to prevent deployment of lower release versions of components over previously installed higher release versions of the same component. If there is a concern, all components can be reinstalled as needed. Do not forget to install the latest CP patches to ensure this “jump server” is at the same binary level as the vApp solution.
Review additional notes during deployment of the “jump server”. Note (for “side-deployment” only): on the page that asks for the Identity Suite Directory connection information, you will see the solution attempt to load env variables that do not exist. Override these values and enter the Directory hostname, Port 20394, and the default bind DN credentials for the Directory userID: eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=im,dc=etadb
Deploy the IMPM Manager GUI if needed.
Post-deployment – update the IMPM Manager GUI preferences to ONLY connect to the new IMPS server on MS Windows. Use the “Enable Failover” checkbox and place the IP/hostname first in the list. Hint: remove the other IMPS servers from this list, or add an extra digit to the IMPS server entries you wish to save but prevent auto-connectivity to. Confirm you are able to authenticate directly through the solution using prior credentials for your service ID (etaadmin or imadmin). This validates connectivity to the existing vApp Identity Suite solution.
On the “jump server”, under the Provisioning Server\pam\ADS folder, copy etapam.dll to the IMPS \bin folder. Then copy the etapam_id.conf configuration file to the \pam parent folder. Update the parameters in this file: set the enable= parameter to yes, and set domain= to either the MS AD domain or the FQDN hostname of the AD Domain Controller (DC). If we use the FQDN hostname of the DC, the “jump server” does NOT have to be made a member of the MS AD domain. Save the file and restart the “CA Identity Manager – Provisioning Server” service. A minimal sketch follows.
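A minimal sketch of the two parameters discussed above in etapam_id.conf; the DC hostname is an example, and the remaining parameters are left at their defaults:
# etapam_id.conf – enable PAM AD authentication for the "jump server"
enable=yes
# Either the MS AD domain name, or the FQDN of a Domain Controller (avoids AD domain membership)
domain=dc01.example.local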
Validate PAM functionality in the IMPS etatrans log. We will see two (2) entries: “PAM: Initialization started” (same for all use-cases) and either “PAM: Not enabled” or “PAM: No PAM managed endpoint”. We want “PAM: No PAM managed endpoint” – the managed-endpoint piece is an extra feature we could enable, but do not require for the “jump server” scenario.
Validate PAM functionality with MS Sysinternals. Ensure that we copied the etapam.dll to the bin folder and that the configuration file is being read.
Test authentication using an IMPM Manager login as an IMPS Manager Global User that has the same userID format as the AD sAMAccountName. Negative use-case testing: create a new AD domain user that does NOT exist as a Global User and attempt to authenticate. Test with etaadmin or another Global User that does NOT have a matching AD sAMAccountName entry. Review the IMPS etatrans logs on the “jump server”.
Update the IMPS encrypted seed file imps_datakey as needed.
Note: The MS Win version of IMPS encrypted seed file may be different than the vApp seed.
If this step is skipped, there will be no obvious error message, except that a bind will fail for communication to the JCS/CCS services.
After this file is updated, we will need to re-install the IMPS service to ensure that all prior encrypted passwords are replaced using the new seed file. Basically, we need to install the MS Win version of the IMPS Server twice: a standard install, then change the seed file value, then re-install with “update all components” and updated passwords.
CCS Service conflict with “side-loading” IMPS Service {“side-loading” methodology}
The “side loading” process of deploying the “jump server” IMPS Provisioning Server on the JCS/CCS server will impact starting of the CCS service. The installation updates the MS Registry with extra branches and updates shared attribute values between the CCS service and IMPS service, e.g. ETAHOME.
This challenge is a strong reason why we may choose the “clean” installation methodology, to avoid this conflict and possible support challenge.
To address this concern, update the new registry values that store the embedded reversible encrypted password for the CCS service. Use the password reset tool “pwdmgr” and reset the “Connector Server” for both the “eta” and “im” domains to the prior stored password. If the imps_datakey file is not in sync between all provisioning servers (and the CCS service), we will see failed-bind connection error messages in the logs.
We will now be able to stop/start the JCS service, and see the embedded CCS service stop and start as well.
Example of challenge and error messages if imps_datakey is not updated and in sync.
Use the following command, csfconfig.exe, under the newly deployed IMPS bin folder to view the JCS connectors defined in the solution stack.
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>csfconfig.exe auth=etaadmin show
EtaSSL.initialize: CRYPTO_library_init: 1
EtaSSL.initialize: SSL_library_init: 1
Enter your authentication password:
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>echo Password01 > c:\imps.pwd
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>csfconfig.exe auth=etaadmin add name=pamjcs host=192.168.242.143 pass=c:\imps.pwd br-add=@ debug=yes port=20411
EtaSSL.initialize: CRYPTO_library_init: 1
EtaSSL.initialize: SSL_library_init: 1
Enter your authentication password:
Created CS object with name = pamjcs
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>csfconfig.exe auth=etaadmin remove name=pamjcs
EtaSSL.initialize: CRYPTO_library_init: 1
EtaSSL.initialize: SSL_library_init: 1
Enter your authentication password:
We will see both of these error statuses when the imps_datakey file is out of sync with the others. Please ensure the Linux and MS Win versions are in sync.
You may view the file imps_datakey being referenced with the pwdmgr tool:
If you wish to monitor which (embedded) accounts are updated by the IMPS pwdmgr tool: su – imps, then execute the two commands in a different SSH shell to monitor the pwdmgr.log that was enabled.
Enablement of extra functionality (bypass the no-sync option on Global User password update)
You may wish to keep the Global User and AD passwords in sync. If they are not, then two passwords will work for the Global User account: the newer PAM AD authentication credentials, and the older Global User password. The etapam.dll module's code path appears to check PAM AD first and, if that fails, then checks the Global User eTPassword field as well.
Enable the AD endpoint in the etapam_id.conf file. The type and domain will be as shown, e.g. Active Directory and im (for the vApp). The endpoint-name is free-form – whatever you named your AD endpoint in the IMPS GUI.
Monitor the startup of the PAM module within the IMPS etatrans*.log
Perform a use-case test changing a Global User account without correlation to an AD endpoint; then retest with a Global User that is correlated to an AD endpoint. Do both tests with the NO SYNC operation.
If the Global User is already correlated to an AD endpoint account, we will see a “Child Modify” operation against the correlated AD endpoint account’s password within the IMPS etatrans*.log.
One “gotcha”: there appears to be a check against the AD password policy. If the new password does not fit the AD password policy, the following error message will appear: “ETA_E_0007 <MGU>, Global user XXXXXXX modification failed: PAM account password updated failed: Account password must match global user password.”
“DSA is attempting to start after a long outage, perform a recovery procedure before starting”
Challenge: The IMPD (Identity Manager Provisioning Directory) DATA DSAs have been offline for a while, e.g. 7+ days (> 1 week). To protect the data, the Symantec/CA Directory solution will refuse to allow the DATA DSAs to start without manual intervention, preventing the possibility of production data (live DATA DSAs) being synced with older data (offline DATA DSAs).
If we were concerned, we would follow best practices: remove the offline DATA DSAs’ *.db and *.dp files, replace the *.db files with current copies of the live DATA DSAs’ *.db files, generate temporary *.dx time files, and allow the *.dp time files to rebuild themselves upon startup of the offline DATA DSAs.
However, if we are NOT concerned, or the environment is non-production, we can avoid the multiple shells and multiple commands needed to resync by using a combination of bash shell commands. The proposal below uses the Symantec/CA Identity Suite virtual appliance, where both the IMPD and IMPS (Identity Manager Provisioning Server) components reside on the same servers.
Proposal: Use a single Linux host to send remote commands as a single user ID; sudo to the ‘dsa’ and ‘imps’ service IDs, and issue commands to address the restart process.
Pre-Work: For the Identity Suite vApp, we recommend using .ssh keys to avoid password prompts for the ‘config’ user ID on all vApp nodes.
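A minimal one-time key-distribution sketch (assuming ed25519 keys and the same node range used in the loops below):
ssh-keygen -t ed25519                                              # run once on the admin host; accept the defaults
for i in {136..141}; do ssh-copy-id config@192.168.242.$i; done    # push the public key to each vApp node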
If using .ssh keys, do not forget this shortcut to cache the key for the local session: eval `ssh-agent` && ssh-add
Steps: Issue the following bash commands with the correct IPs or hostnames.
If possible, wrap the remote commands in a for-loop. The example below uses the local ‘config’ user ID to ssh to the remote servers, then issues a local su to the ‘dsa’ service ID. The ‘dsa’ commands may need to be wrapped as shown to allow multiple commands to be executed together: a quick hostname check, stop all IMPD DATA DSAs, find and remove the time-stamp file that is preventing the startup of the IMPD DATA DSAs, restart all IMPD DATA DSAs, and then move on to the next server in the for-loop. The ‘imps’ commands are similar: a quick hostname check, a status check, the stop and start process, another status check, then on to the next server in the for-loop.
for i in {136..141}; do ssh -t config@192.168.242.$i "su - dsa -c \"hostname;dxserver stop all;pwd;find ./data/ -type f \( -name '*.dp' \) -delete ;dxserver start all \" "; done
for i in {136..141}; do ssh -t config@192.168.242.$i "su - imps -c \"hostname;imps status;imps stop;imps start;imps status \" "; done
View of for-loop commands output:
Additional: A process to assist with the decision to sync or not sync.
Check whether the total number of entries in each individual IMPD DATA DSA matches its peers (multi-write groups). Goal: avoid any deltas > 1% between peers. The IMPD “main”, “co”, and “inc” DATA DSAs should be 100% in sync. We may see some minor flux in the “notify” DATA DSA, as this is temporary data used by the IMPS server to store data to be sent to the IME via the IME Call Back Process.
If there are any deltas, we may export the IMPD DATA DSAs to LDIF files and then use the Symantec/CA Directory ldifdelta process to isolate and triage the deltas (see the export-and-compare sketch after the queries below).
su - dsa OR [ sudo -iu dsa ]
export HISTIGNORE=' *' {USE THIS LINE TO FORCE HISTORY TO IGNORE ANY COMMANDS WITH A LEADING SPACE CHARACTER}
echo -n Password01 > .impd.pwd ; chmod 600 .impd.pwd {USE SPACE CHARACTER IN FRONT TO AVOID HISTORY USAGE}
# NOTIFY BRANCH (TCP 20404)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD NOTIFY DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20404 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=notify,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# INC BRANCH (TCP 20398)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD INC DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20398 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTInclusionContainerName=Inclusions,eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# CO BRANCH (TCP 20396)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD CO DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20396 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'eTNamespaceName=CommonObjects,dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
# MAIN BRANCH (TCP 20394)
for i in {135..140}; do echo "########## 192.168.242.$i IMPD MAIN DATA DSA ##########";LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.$i:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' dxTotalEntryCount | perl -p00e 's/\r?\n //g' ; done
The NOTIFY DSA holds temporary data and will have deltas. This DSA is used for the IME Call Back process.
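If the counts above show deltas, here is a minimal export-and-compare sketch for the MAIN branch of two peers, reusing the .impd.pwd file created earlier. The exact ldifdelta arguments vary by CA Directory release, so check the product documentation; a sorted diff is shown as a quick first pass:
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.135:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' > main_135.ldif
LDAPTLS_REQCERT=never dxsearch -LLL -H ldaps://192.168.242.136:20394 -D 'eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=etadb' -y .impd.pwd -s sub -b 'dc=im,dc=etadb' '(objectClass=*)' > main_136.ldif
diff <(sort main_135.ldif) <(sort main_136.ldif) | head -40       # isolate the delta entries for triage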
Ensure the hostname entry is an FQDN or alias. It cannot be an IP address if MS Exchange is to be managed through this connector, due to a conflict between Kerberos authentication and IP addresses. If the object was created with an IP address, it may be changed via Jxplorer for two (2) attributes: eTADSprimaryServer and eTADSServerName.
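A hedged sketch of the same change via dxmodify, reusing the IMPS bind credentials and the endpoint DN pattern from the script later in this post (the hostname and endpoint name are examples; substitute your own):
LDAPTLS_REQCERT=never dxmodify -H ldap://192.168.242.135:20389 -c -x -D 'eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta' -w Password01 << EOF
dn: eTADSDirectoryName=exchange2016,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSprimaryServer
eTADSprimaryServer: dc2016.exchange.lab
-
replace: eTADSServerName
eTADSServerName: dc2016.exchange.lab
EOF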
2. General information on the ADS Endpoint Logging Tab and where this information is stored. Only two (2) of the Destination options have value with the current deployment, e.g. Text File and System Log (MS Windows Event Viewer) for Active Directory (ADS). The “Text File” destination will output data to two (2) files: jcs\logs\ADS\<endpoint-name>.log and ccs\logs\ADS\<endpoint-name>.log
4. Additionally, the User ID may be in one of three (3) formats: UPN (serviceid@exchange.lab), NT (domain\serviceid), or LDAP DN (cn=serviceid,ou=people,dc=exchange,dc=lab). We recommend UPN or NT format to allow the embedded API features for MS Exchange PowerShell management to function correctly. If the ID is to be changed, a password update must be done as well, since the User ID is part of the seed for the encrypted password of the service ID stored in CA Directory on the ADS endpoint object.
Note: SASL traffic is encrypted. If Wireshark is used to intercept the traffic, the service ID may be seen during initial authentication, but NOT the password nor the payload data.
Notes on SASL validation for Active Directory. {Pro: No need to worry about TLS certificates rotation on client connections – all TLS is managed by the server}
:: Search the ADS/LDAP store for what is offered for SASL (use -x for a simple connection): ldapsearch -x -h dc2016.exchange.lab -p 389 -b "" -LLL -s base supportedSASLMechanisms
SASL appears to connect on TCP 636 briefly, then use TCP 389 extensively. Other ports are TCP 80 (Service), 135 (lsass.exe for home folders), and 6405 (lsass.exe). If Kerberos authentication is defined for the service ID, additional ports will be used, e.g. 3268/3269. TCP 4104/4105 are for the legacy CAM/CAFT agents (typically no longer used).
Recommendation: Add these TCP Ports to any Firewall between the IM JCS/CCS Server and the Active Directory Domain Controllers to improve performance and avoid time-out delays.
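A quick reachability sketch using the bash /dev/tcp probe, run from any Linux host on the same network segment as the JCS/CCS Server (the domain controller hostname is an example; the port list comes from the ports noted above):
for p in 80 135 389 636 3268 3269 6405; do (timeout 2 bash -c "</dev/tcp/dc2016.exchange.lab/$p") 2>/dev/null && echo "TCP $p open" || echo "TCP $p blocked/filtered"; done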
One of the challenges that IAM/IAG solutions may have is single-threaded processing for select endpoints. For the CA/Symantec Identity Management solution, before IM r14.3 CP2, we lived with a single-threaded connector to manage MS Active Directory endpoints.
To address this challenge, we deployed multiple connector servers. We allowed the IM Provisioning Server (IMPS) to use a built-in round-robin approach of load-balancing separate transactions to different connector servers, which would service the same Active Directory endpoints.
The IME may be running as fast as it can with its clustered deployment, but as soon as a task touches MS Active Directory, there is a bottleneck at the CCS Service. We begin to see the IME JMS queue reporting that it is stuck, and the IME View Submitted Tasks screen reporting “In Progress” for all tasks. If the CCS service is restarted, all IME tasks are then reported as “Failed.”
This is/was the bottleneck for the solution for sites that have MS Active Directory for Birthright/DayOne Access.
We can now avoid this bottleneck. [*** (5/24/2021) – There is an enhancement to CP2 to address im_ccs.exe crashes during peak loads discovered using this testing process. ]
We now have full parallel provisioning to MS Active Directory from a single connector server (JCS/CCS).
The new attribute that regulates this behavior is eTADSMaxConnectionsInPool. This attribute will be applied to every existing ADS endpoint currently managed by the IM Provisioning Server after CP2 is deployed. Note: the default value is 10, but after much testing we recommend matching the IMPS->JCS and JCS->CCS pool sizes, e.g. 200.
During testing within the IME using Bulk Tasks or the IM BLC, we can see that the CCS->ADS traffic will reach 20-30 connections if allowed. You may set this attribute to a value of 200 via Jxplorer and/or an ldapmodify/dxmodify script (the generator script below includes an example).
To confirm the number of open connections is greater than one (1), we can issue a Bulk IM Task or use a performance tool like CA Directory dxsoak.
In this example, we will showcase using CA Directory dxsoak to execute 100 parallel threads that create 100 ADS accounts with MS Exchange mailboxes. We also enclose this script for download for others to review and use.
Performance Lab:
Pre-Steps:
Leverage the dxsoak binary from the CA Directory samples (performance testing). You may use CA Directory on an existing IM Provisioning Server (Linux OS), or deploy the MS Windows version of CA Directory to the JCS/CCS connector server. Examples are provided for both OSes.
Create LDIF files for the IM Provisioning Server and/or IM Connector Tier. These files are needed to ‘push’ the solution to failure. The IME Bulk Task and/or etautil scripts to the IM Provisioning Tier will not provide the transaction speed we need to break the CCS service, if that is possible.
Within the IM Provisioning Manager, enable the ADS Endpoint TXT logs on the Logging TAB for all checkboxes.
Monitor the IMPS etatrans* logs, monitor the JCS ADS logs, monitor the CCS ADS logs, monitor the number of CCS-> ADS (LDAP/S – TCP 389/636) threads. [Suggest using MS Sysinternals Process Explorer and select im_ccs.exe & then TCP/IP TAB]
Monitor the MS ADS Domain via MS ADUC (AD Users & Computers UI) and MS Exchange Mailbox (Mailbox UI via Browser)
Execution:
6. Perform a UNIT TEST with dxmodify/ldapmodify to confirm the LDIF file input is correct with the correct suffix.
8. IMPS etatrans*.log – Count the number of operations per second. Note any RACE conditions and/or data collisions, e.g. ADS accounts deleted prior to add across 100 threads, or ADS account creation attempted multiple times in different threads.
9. IM CCS ADS <endpoint>.log – Will only have useful data if the ADS Endpoint Logging TAB has been checked for TXT logs.
10. Finally, validate directly in the MS Active Directory domain with ADUC or a similar tool, and confirm the MS Exchange mailboxes are being created/deleted.
11. Count the number of threads from im_ccs.exe to ADS – Suggest using MS Sysinternals Process Explorer tool and/or Powershell to count the number of connections.
MS PowerShell script to count the number of LDAP (TCP 389) connections from im_ccs.exe. [Note: TCP 389 is used more if the ADS endpoint is set up to use SASL authentication; TCP 636 is used more if the ADS endpoint is using the older TLS authentication.]
# Sample the number of established LDAP (TCP 389) connections owned by im_ccs.exe,
# once per second for five samples
$i = 1
do {
    Clear-Host
    (Get-NetTCPConnection -State Established -OwningProcess (Get-Process -Name im_ccs).Id -RemotePort 389).Count
    Start-Sleep -Seconds 1
    $i++
} while ($i -le 5)
Direct Performance Testing to JCS/CCS Service
While this testing has limited value, it can offer satisfaction and help troubleshoot challenges. We can use the prior LDIF files with a slightly different suffix, dc=etasa (instead of dc=eta), to let dxsoak push the connector tier to failure. This step helped provide memory dumps back to the CA/Symantec engineering teams to isolate challenges within the parallel processing. The CCS Service is only exposed via localhost; if you wish to test the CCS Service remotely, update the MS Registry key for the CCS service to use the external IP address of the JCS/CCS Server. Rate observed: ~25K ids/hr.
Script to generate 100 ADS Accounts with MS Exchange Mailbox Creation
You may wish to review this script and adjust it for your ADS / MS Exchange domains for testing. You can also create a simple LDIF file with password resets or ADS group membership adds. Just remember that the IMPS Service (TCP 20389/20390) uses the suffix dc=eta, and the IM JCS (TCP 20410/20411) and CCS (TCP 20402/20403) services use the suffix dc=etasa. Additionally, if using CA Directory dxsoak, use only the non-TLS ports, as this binary is not equipped to use TLS certificates.
#!/bin/bash
#######################################################################################################################
# Name: Generate ADS Feed Files for IM Solution Provisioning/Connector Tiers
#
# Goal: Validate the new parallel processes from the IM Connector Tier to Active Directory with MS Exchange
#
#
# Generate ADS User LDIF file(s) for use with unit (dxmodify) and performance testing (dxsoak) to:
# - {Note: dxsoak will only work with non-TLS ports}
#
# IM JCS (20410) "dc=etasa" {Ensure MS Windows Firewall allows this port to be exposed}
# IM CCS (20402) "dc=etasa" {This port is localhost only, may open to network traffic via registry update}
# IMPS (20389) "dc=eta"
#
#
# Monitor:
#
# The IMPS etatrans*.log {exclude searches}
# The JCS daily log
# The JCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
# The CCS ADS log {Enable the ADS Endpoint TXT logging for all checkboxes}
#
# Execute per the examples provided during run of this file
#
#
# ANA 05/2021
#######################################################################################################################
# Unique Variables for an ADS Domain
NAMESPACE=exchange2016
ADSDOMAIN=exchange.lab
DCDOMAIN="DC=exchange,DC=lab"
OU=People
#######################################################################################################################
MAX=100
start=00001
counter=$start
echo "###############################################################"
echo "###############################################################"
START=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo `/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`" = Current OS UTC time stamp"
echo "###############################################################"
FILE1=ads_user_with_exchange_dc_etasa.ldif
FILE2=ads_user_with_exchange_dc_eta.ldif
echo "" > $FILE1
while [ $counter -le $MAX ]
do
n=$((10000+counter)); n=${n#1}
tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo "Counter with leading zeros = $n at time: $tz"
cat << EOF >> $FILE1
dn: eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: add
objectClass: eTADSAccount
eTADSobjectClass: user
eTADSAccountName: firstname$n aaalastname$n
eTADSgivenName: firstname$n
eTADSsn: aaalastname$n
eTADSdisplayName: firstname$n aaalastname$n
eTADSuserPrincipalName: aaatestuser$n@$ADSDOMAIN
eTADSsAMAccountName: aaatestuser$n
eTPassword: Password01
eTADSpwdLastSet: -1
eTSuspended: 0
eTADSuserAccountControl: 0000000512
eTADSDescription: description $tz
eTADSphysicalDeliveryOfficeName: office
eTADStelephoneNumber: 111-222-3333
eTADSmail: aaatestuser$n@$ADSDOMAIN
eTADSwwwHomePage: web.page.lab
eTADSotherTelephone: 111-222-3333
eTADSurl: other.web.page.lab
eTADSstreetAddress: street address line01
eTADSpostOfficeBox: pobox 111
eTADSl: city
eTADSst: state
eTADSpostalCode: 11111
eTADSco: UNITED STATES
eTADSc: US
eTADScountryCode: 840
eTADSscriptPath: loginscript.cmd
eTADSprofilePath: \profile\path\here
eTADShomePhone: 111-222-3333
eTADSpager: 111-222-3333
eTADSmobile: 111-222-3333
eTADSfacsimileTelephoneNumber: 111-222-3333
eTADSipPhone: 111-222-3333
eTADSinfo: Notes Here
eTADSotherHomePhone: 111-222-3333
eTADSotherPager: 111-222-3333
eTADSotherMobile: 111-222-3333
eTADSotherFacsimileTelephoneNumber: 111-222-3333
eTADSotherIpPhone: 111-222-3333
eTADStitle: title
eTADSdepartment: department
eTADScompany: company
eTADSmanager: CN=manager_fn manager_ln,OU=$OU,$DCDOMAIN
eTADSmemberOf: CN=Backup Operators,CN=Builtin,$DCDOMAIN
eTADSlyncSIPAddressOption: 0000000000
eTADSdisplayNamePrintable: aaatestuser$n
eTADSmailNickname: aaatestuser$n
eTADShomeMDB: (Automatic Mailbox Distribution)
eTADShomeMTA: CN=DC001,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,$DCDOMAIN
eTAccountStatus: A
eTADSmsExchRecipientTypeDetails: 0000000001
eTADSmDBUseDefaults: TRUE
eTADSinitials: A
eTADSaccountExpires: 9223372036854775807
EOF
counter=$(( $counter + 00001 ))
done
# Create the delete ADS Process
start=00001
counter=$start
while [ $counter -le $MAX ]
do
n=$((10000+counter)); n=${n#1}
tz=`/bin/date --utc +%Y%m%d%H%M%S,%3N.0Z`
echo "Counter with leading zeros = $n at time: $tz"
cat << EOF >> $FILE1
dn: eTADSAccountName=firstname$n aaalastname$n,eTADSOrgUnitName=$OU,eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=etasa
changetype: delete
EOF
counter=$(( $counter + 00001 ))
done
echo ""
echo "################################### ADS USER OBJECT STATS ################################################################"
echo "Number of add objects: `grep "changetype: add" $FILE1 | wc -l`"
echo "Number of delete objects: `grep "changetype: delete" $FILE1 | wc -l`"
rm -rf $FILE2
cp -r -p $FILE1 $FILE2
sed -i 's|,dc=im,dc=etasa|,dc=im,dc=eta|g' $FILE2
ls -lart $FILE1
ls -lart $FILE2
echo ""
echo "################################### SET ADS MAX CONNECTIONS IN POOL SIZE ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
LDAPTLS_REQCERT=never dxmodify -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D "$IMPS_USER" -w "$IMPS_PWD" << EOF
dn: eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta
changetype: modify
replace: eTADSMaxConnectionsInPool
eTADSMaxConnectionsInPool: 200
EOF
LDAPTLS_REQCERT=never dxsearch -LLL -H ldap://$IMPS_HOST:$IMPS_PORT -x -D "$IMPS_USER" -w "$IMPS_PWD" -b "eTADSDirectoryName=$NAMESPACE,eTNamespaceName=ActiveDirectory,dc=im,dc=eta" -s base eTADSMaxConnectionsInPool | perl -p00e 's/\r?\n //g'
echo ""
echo "################################### CCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20402
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the CCS Service to test single thread with dxmodify or ldapmodify"
echo "dxmodify -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the CCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo ""
echo "################################### JCS UNIT & PERF TEST ################################################################"
CCS_HOST=192.168.242.80
CCS_PORT=20410
CCS_USER="cn=root,dc=etasa"
CCS_PWD="Password01"
echo "Execute this command to the JCS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify -H ldap://$CCS_HOST:$CCS_PORT -c -x -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo "Execute this command to the JCS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $CCS_HOST:$CCS_PORT -D $CCS_USER -w $CCS_PWD -f $FILE1 "
echo ""
echo "################################### IMPS UNIT & PERF TEST ################################################################"
IMPS_HOST=192.168.242.135
IMPS_PORT=20389
IMPS_USER='eTGlobalUserName=etaadmin,eTGlobalUserContainerName=Global Users,eTNamespaceName=CommonObjects,dc=im,dc=eta'
IMPS_PWD="Password01"
echo "Execute this command to the IMPS Service to test single thread with dxmodify or ldapmodify "
echo "dxmodify -H ldap://$IMPS_HOST:$IMPS_PORT -c -x -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "
echo "Execute this command to the IMPS Service to test 100 threads with dxsoak "
echo "./dxsoak -c -l 60 -t 100 -h $IMPS_HOST:$IMPS_PORT -D \"$IMPS_USER\" -w $IMPS_PWD -f $FILE2 "
Address the new bottleneck of MS Exchange / O365 Provisioning.
After parallel provisioning has been introduced with the new im_ccs.exe service, you may notice that the number of transactions is still being throttled during performance testing.
The out-of-the-box MS Exchange Global Throttling Policy has the parameter PowerShellMaxConcurrency set to a default of 18 connections. Any provisioning that uses MS PowerShell for MS Exchange and/or MS O365 will be impacted by this default parameter.
To address this bottleneck, we can create a new Throttling Policy (e.g. via the MS Exchange cmdlets New-ThrottlingPolicy and Set-ThrottlingPolicyAssociation) and assign it only to the service ID that will be managing identities, avoiding a global change.
After this change has been made, restart the IM JCS/CCS Services and retest with your performance tools. Review the CCS ADS log for the number of creations in 60 seconds; you will be pleasantly surprised at the rate. The logs provide the strong confirmation we are looking for.
Performance test: 947 ADS accounts with Exchange mailboxes in 60 seconds (08:59:54 to 09:00:53) => a rate of ~15 ids/second (or ~54K ids/hr) with the throttling policy updated to PowerShellMaxConcurrency = 100.
The last bottleneck appears to be CPU availability for the MS Exchange supporting services, w3wp.exe (the MS IIS service), which appears to manage the MS PowerShell connections per its startup string of