The Symantec (CA/Broadcom) Identity Portal is widely used for managing IAM workflows with customizable forms, tasks, and business logic. The tool allows its business logic to be exported from the management console.
However, migrating or analyzing environments (Dev → Test → Prod) with these exported Portal files is a major challenge. Although configuration migration tools are available, reviewing and verifying changes is difficult: Portal exports are delivered as a single compressed JSON one-liner, making it hard to identify meaningful changes (“deltas”) without a large manual effort.
Challenge 1: Single-Line JSON Exports from Identity Portal
The example above has over 88K characters in a single line. Try searching that string to find the object you wish to change or update.
Identity Portal’s export format is a flat, one-line JSON string, even when the export contains hundreds of forms, layout structures, and JavaScript blocks.
Migration/Analysis Risks
Impossible to visually scan or diff exports.
Nested structures like layout, formProps, and handlers are escaped strings, sometimes double-encoded.
Hidden differences can result in subtle bugs between versions or environments.
A Solution
We created a series of PowerShell scripts that leverage AI to select the best key-value pairs to sort on, producing output that is human-readable and searchable and reducing the complexity and effort of migrations. We can now isolate minor deltas that would otherwise have remained hidden until a use case exercised them later in the migration effort, at which point fixing them would require additional rework. The scripts (a bash sketch of the first step follows the list below):
Convert the one-liner export into pretty-formatted, human-readable JSON.
Detect and decode deeply embedded or escaped JSON strings, especially within layout or formProps.
Extract each form’s business logic and layout separately.
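A minimal bash sketch of the pretty-print step using jq (our production scripts are PowerShell; the file name and JSON paths here are illustrative):
# Pretty-print the one-line export so it can be read and diffed
jq '.' portal-export.json > portal-export.pretty.json
# Sanity check: count the top-level forms (adjust the path to your export's actual structure)
jq '.forms | length' portal-export.pretty.json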
These outputs allow us to:
Open and analyze the data in Notepad++, with clean indentation and structure.
Use WinMerge or Beyond Compare to easily spot deltas between environments or versioned exports.
Track historical changes over time by comparing daily/weekly snapshots.
Challenge 2: Embedded JavaScript Inside Portal Forms
Identity Portal forms often include JavaScript logic directly embedded in the form definition (onLoad, onChange, onSubmit).
Migration Risks
JS logic is not separated from the data model or UI.
Inconsistent formatting or legacy syntax can cause scripts to silently fail.
Broken logic might not surface until after production deployment.
Suggested Solutions
Use PowerShell to extract JS blocks per form and store them as external .js.txt files (a jq sketch of the idea follows this list).
Identify reused code patterns that should be modularized.
Create regression test cases for logic-heavy forms.
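A hedged bash sketch of that extraction using jq; the .forms[].name and .handlers paths are assumptions, so adjust them to your export's actual structure:
# Write each form's handler logic to <form>.js.txt for side-by-side diffing
jq -r '.forms[].name' portal-export.json | while read -r form; do
  jq -r --arg f "$form" '.forms[] | select(.name == $f) | .handlers' portal-export.json > "${form}.js.txt"
done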
Challenge 3: Form Layouts with Escaped JSON Structures
The layout field in each form is often a stringified JSON object, sometimes double or triple-escaped.
Migration Risks
Malformed layout strings crash the form UI.
Even minor layout changes (like label order) are hard to detect.
Suggested Solutions
Extract and pretty-print each layout block to .layout.json files.
Please note: while the output is pretty-printed, it is not strictly valid JSON because of the escape sequences. Use these exported files as searchable research artifacts to help isolate deltas to be corrected during the migration effort.
Use WinMerge or Notepad++ for visual diffs.
Validate control-to-field binding consistency.
Using our understanding of the Identity Portal format for the ‘layout’ property, we were able to identify AI-assisted methods to manage the double- or triple-escaped characters that were troublesome to export consistently. Our service engagements now incorporate greater use of AI and associated APIs to support migration efforts and process modernization, with the goal of minimizing business risk for our clients and our organization.
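In jq terms, each fromjson call peels one level of string escaping; a short sketch for a double-escaped layout string (the JSON path is an assumption):
# Unwrap one, then two, levels of string-encoded JSON
jq -r '.forms[0].layout | fromjson' portal-export.json
jq -r '.forms[0].layout | fromjson | fromjson' portal-export.json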
Challenge 4: Java Plugins with Multiple Classes
Many Portal instances rely on custom Java plugins with dozens of classes, Spring beans, and services.
Migration Risks
Portal API changes break plugins.
Lack of modularity or documentation for the custom plugins.
Missing source code for compiled custom plugins.
Difficult to test or rebuild.
Suggested Solutions
In the absence of custom source code, decompile the plugins with jd-gui to reverse engineer them.
Recommendations for Future-Proofing
Store layouts and handlers in Git.
Modularize plugin code.
Version control form definitions.
Automate validation tests in CI or staging.
Conclusion
Migrating Identity Portal environments requires more than copy-pasting exports. In the absence of proper implementation documentation around customizations, it may require reverse engineering, decoding, and diffing of deeply nested structures.
By extracting clean, readable artifacts and comparing across environments, teams will gain visibility, traceability, and confidence in their migration efforts.
Review our GitHub collection of the scripts mentioned above. Please reach out if you would like assistance with your migration processes and challenges. We can now progress toward automating the promotion of business logic from one environment to the next.
Metrics are essential for monitoring and optimizing the health of your solutions. The simplicity of using SaaS-based Application Performance Monitoring (APM)/Operational Intelligence/Analytics tools makes them indispensable for gaining actionable insights.
By leveraging metrics, you can not only ensure the performance and reliability of your systems but also build compelling ROI use cases. We can leverage these SaaS platforms with ROI queries to surface data that is not exposed in other dashboards.
In this guide, we’ll demonstrate the power of metrics by deploying a Broadcom DX O2 agent to the Symantec IGA Virtual Appliance in under 10 minutes, providing immediate value and visibility into your business operations. This straightforward process integrates seamlessly into your existing infrastructure. The walk-through will showcase how metrics can enhance the observability and security of a hardened appliance.
After you log in to your DX OI/O2 instance, navigate to the settings/agents section. When you select an agent, your custom authentication token is embedded in the downloaded package. We plan to use the javaagent offered for Wildfly (aka JBoss). Select this agent.
When the screen displays, expand the “Command Line Download”. We will use the wget command to download this agent directly to a Virtual Appliance that has internet access. Otherwise, download the agent to your workstation, then file-transfer it to the Virtual Appliance running Wildfly.
Step03: Log in to the IGA Virtual Appliance with ssh.
Create a local media folder, then download the DX O2 agent. After the download succeeds, extract the agent into the known folder used on the IGA Virtual Appliance for “java profilers”. Since the files are owned by the ‘config’ user, and the ‘wildfly’ user needs access to the log folders, chmod 777 both log folders to avoid any startup issues for the Wildfly applications. You may leave the rest of the file/folder permissions as is.
mkdir -p ~/media/dxoi_agent ; cd ~/media/dxoi_agent/
wget --content-disposition "https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/XXXXXXXXXXXXXXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=ZZZZZZZZZZZZZZZZZZZZZZ"
ls -lart
tar -xf JBoss_jboss_20241117_v1.tar -C /opt/CA/VirtualAppliance/custom/profiler/
cd /opt/CA/VirtualAppliance/custom/profiler/
ls -lart
cd wily/
ls -lart
# We could update permissions for all files/folder to 777, but we only need the following to be changed.
# Update permission for folders that 'wildfly' will write out to.
chmod 777 logs/ ./releases/24.10/logs/
chmod 777 ./releases/24.10/core/config/hotdeploy
chmod 777 ./releases/24.10/extensions
mkdir -p ./releases/24.10/core/config/metrics
chmod 777 ./releases/24.10/core/config/metrics
After updating the permissions on the logs, hotdeploy, extensions, and metrics folders, run the shell script “./agent-location.sh”. This script outputs the JVM arguments that we will use with the Wildfly instances for IdentityManager, IdentityPortal, and IdentityGovernance.
We will now edit the jvm-args.conf files for both IdentityManager and IdentityPortal with the string above. We prepend “-javaagent:”, and, to avoid a Java LogManager loading-order error, we place the entire string at the very end of the JAVA_OPTS variable. We can use the same exact string and path, because the javaagent determines each instance’s service name automatically.
Below is a view of what the IM and IP jvm-args.conf files should look like. Please ensure the full string is at the very end.
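If you do not have a screenshot handy, the line has this general shape (an illustrative sketch; the exact Agent.jar path and -D switches come from your own agent-location.sh output):
# /opt/CA/VirtualAppliance/custom/IdentityManager/jvm-args.conf (illustrative)
JAVA_OPTS="$JAVA_OPTS <existing switches> -javaagent:/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/Agent.jar -Dcom.wily.introscope.agentProfile=/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=/opt/CA/VirtualAppliance/custom/profiler/wily -Dintroscope.agent.bootstrap.release.version=24.10 -Dintroscope.agent.bootstrap.version.loaded=24.10"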
Now stop and start both IdentityManager and IdentityPortal. We recommend using a second ssh session to monitor each wildfly-console.log, as it will immediately show any issues with the javaagent due to permissions or otherwise.
Step04: We are done. View the DX O2 UI and review the new incoming data.
We recommend walking through all the pre-built dashboards to choose what to monitor and alert on for your solution. Of interest: IM shows up as the hostname of the Virtual Appliance (“vapp1453”) and IP shows up as the internal pseudo-name “IPnode1”. Note that these values can be overridden in the profile file.
A view of metrics by each agent. Click each sub-category to see what is offered.
A very interesting view within the memory space of the IdentityManager application
Other views to review
Adding ROI metrics to dashboards is especially interesting: we can monitor the number of events being exercised, e.g., external customer access or internal password changes. The available APIs provide maximum flexibility to input any ROI metrics we wish directly.
Reach out and we will work with you to get the most value out of your solution.
Additional Notes
JVM Order Management
On the IGA virtual appliance, the order of JVM switches for “LogManager” is predetermined. If the new javaagent is not placed at the very end of JAVA_OPTS, we may see the generic warn/error messages below. We spent quite a bit of time being misled by them. We did not need to add extra JVM switches to manage the JVM order. If you do have challenges, review the current documentation for the JBoss agent.
WARNING: Failed to load the specified log manager class org.jboss.logmanager.LogManager
ERROR: WFLYCTL0013: Operation ("parallel-extension-add") failed - address: ([])
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: WFLYLOG0078: The logging subsystem requires the log manager to be org.jboss.logmanager.LogManager. The subsystem has not be initialized and cannot be used. To use JBoss Log Manager you must add the system property "java.util.logging.manager" and set it to "org.jboss.logmanager.LogManager"
FATAL: WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting. See previous messages for details.
Bonus Round: Deploy the Java Agent for the JCS component
While the javaagent binaries for Wildfly and Java are the same, the support modules are slightly different. We might be able to combine them, but to avoid any possible concerns, we separated the extraction folders. Add this javaagent string to the JCS JVM custom configuration file, jvm_options.conf (see the sketch after the transcript below).
~/media/dxoi_agent > ls -lart
-rw-r--r-- 1 config config 30803968 Nov 16 21:18 JBoss_jboss_20241117_v1.tar
~/media/dxoi_agent > wget --content-disposition "https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/7XXXXXXXXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=ZZZZZZZZZZZ"
~/media/dxoi_agent > ls -lart
-rw-r--r-- 1 config config 30803968 Nov 16 21:18 JBoss_jboss_20241117_v1.tar
-rw-r--r-- 1 config config 31645184 Nov 16 23:51 Java_other_20241117_v1.tar
~/media/dxoi_agent > tar -xf Java_other_20241117_v1.tar
~/media/dxoi_agent > ls -lart
-rw-r--r-- 1 config config 30803968 Nov 16 21:18 JBoss_jboss_20241117_v1.tar
-rw-r--r-- 1 config config 31645184 Nov 16 23:51 Java_other_20241117_v1.tar
drwxr-xr-x 4 config config 123 Nov 16 23:51 wily
~/media/dxoi_agent > mv wily/ /opt/CA/VirtualAppliance/custom/profiler/wily-jcs
~/media/dxoi_agent > cd /opt/CA/VirtualAppliance/custom/profiler/wily-jcs
/opt/CA/VirtualAppliance/custom/profiler/wily-jcs > ls -lart
drwxr-xr-x 2 config config 6 Nov 16 23:41 logs
-rw-r--r-- 1 config config 5 Nov 16 23:41 agent.release
-rwxr-xr-x 1 config config 1371 Nov 16 23:41 agent-location.sh
-rwxr-xr-x 1 config config 1138 Nov 16 23:41 agent-location.bat
-rw-r--r-- 1 config config 45258 Nov 16 23:41 Agent.jar
drwxr-xr-x 3 config config 19 Nov 16 23:51 releases
# We could update permissions for all files/folder to 777, but we only need the following to be changed.
# Update permission for folders that 'wildfly' will write out to.
chmod 777 logs/ ./releases/24.10/logs/
chmod 777 ./releases/24.10/core/config/hotdeploy
chmod 777 ./releases/24.10/extensions
mkdir -p ./releases/24.10/core/config/metrics
chmod 777 ./releases/24.10/core/config/metrics
/opt/CA/VirtualAppliance/custom/profiler/wily-jcs > ./agent-location.sh
/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/Agent.jar -Dcom.wily.introscope.agentProfile=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs -Dintroscope.agent.bootstrap.release.version=24.10 -Dintroscope.agent.bootstrap.version.loaded=24.10
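Prefix that output with -javaagent: and place it in the JCS jvm_options.conf; a sketch built from the agent-location.sh output above:
# jvm_options.conf entry for the JCS (illustrative; use your own agent-location.sh output)
-javaagent:/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/Agent.jar -Dcom.wily.introscope.agentProfile=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs/releases/24.10/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=/opt/CA/VirtualAppliance/custom/profiler/wily-jcs -Dintroscope.agent.bootstrap.release.version=24.10 -Dintroscope.agent.bootstrap.version.loaded=24.10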
Below is a view of the JCS agent in the DX O2 UI. We see it sits by itself under “Java”. Also note a challenge with two (2) Wildfly (JBoss) instances using the same profile with the default “agentName=JBoss Agent”. These Wildfly instances were automatically named on startup, but after a while the static name in the profile took precedence. See more information below.
Challenge with default naming convention
When two (2) or more applications use the same profile, DX O2 may attempt to join them together in the metrics UI. To avoid this, let’s make two (2) copies of the Introscope profile and add our own “agentName” to each. Do NOT forget to comment out the default “introscope.agent.agentName=JBoss Agent”. We added “com.wily.introscope.agent.agentName” as well, since it is called out in the online documentation.
Observations: the IP deployment honors the new value immediately. The IM deployment claims in the DX O2 agent logs that it is “Unable to automatically determine the Agent Name because: The Application Server naming mechanism is not yet available.” It defaults to the hostname of the Virtual Appliance for a few minutes, then appears to reset to the correct agentName later.
/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/core/config > head IntroscopeAgent.ip.profile
###############################################################################
# Add name for IP application
###############################################################################
com.wily.introscope.agent.agentName=IP
introscope.agent.agentName=IP
/opt/CA/VirtualAppliance/custom/profiler/wily/releases/24.10/core/config > head IntroscopeAgent.im.profile
###############################################################################
# Add name for IM application
###############################################################################
com.wily.introscope.agent.agentName=IM
introscope.agent.agentName=IM
Per the online documentation, we have other renaming options as well:
# Using JVM -D environmental switches, we can set one of these two
# -D JVM switches
# -DagentName=IM
# -Dcom.wily.introscope.agent.agentName=IM
# Within the IntroscopeAgent.profile configuration file, we have these options.
# Allow Introscope to pick up a JVM environment value from a pre-existing -D variable
introscope.agent.agentNameSystemPropertyKey=jboss.node.name
# Static name for the agent
introscope.agent.agentName=IP
# Allow the Introscope agent to append an integer to the name, used for clusters, e.g. JBoss Agent-1, JBoss Agent-2
introscope.agent.clonedAgent=true
The deployments for IP and JCS had no issues using any of the above; the IM application only responded well to the clonedAgent example. This is likely due to how the LogManager modules are ordered in the startup files, which the ‘config’ service ID does not have write access to modify.
Log Folder Cleanup
The ‘config’ service ID owns this folder, and even though there are files owned by ‘wildfly’, the ‘config’ user can delete these files.
ROI Metrics
Enable the API section of your DX O2 instance to create your ROI Metrics Input
On large project teams, multiple members may often use the same hosts simultaneously. Alternatively, you might prefer to maintain multiple SSH sessions open on the same host—one for monitoring logs and another for executing commands. While a Linux host using the Bash shell records command-line history, the default settings can pose challenges. Specifically, they may result in the loss of prior history when multiple sessions access the same host.
To address this, you can make some enhancements to your configuration. On the Symantec IGA Virtual Appliance, we typically add these improvements to the .bashrc files of the config, dsa, and imps service IDs. These adjustments ensure the preservation of command history for all work performed. Naturally, it is also important to clean up or remove any sensitive data, such as passwords, from the history.
Below, we explore an optimized .bashrc configuration that focuses on improving command history management. Key features include appending history across sessions, adding timestamps to commands, ignoring specific commands, and safeguarding sensitive inputs.
Optimized .bashrc Configuration
Here’s the full configuration we’ll be exploring:
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
Detailed Explanation of the Configuration
shopt -s histappend
Ensures that new commands from the current session are appended to your history file instead of overwriting it. This prevents accidental history loss across sessions.
export HISTTIMEFORMAT='%F %T '
Adds a timestamp to each command in your history, formatted as YYYY-MM-DD HH:MM:SS.
export HISTSIZE=10000
Limits the number of commands retained in memory during the current session to 10,000.
export HISTFILESIZE=100000
Configures the maximum number of commands saved in the history file to 100,000.
export HISTIGNORE='ls:history'
Excludes frequently used or less important commands like ls and history from being saved, reducing clutter.
export HISTCONTROL=ignorespace
Prevents commands that start with a space from being saved to history. This is particularly useful for sensitive commands like those containing passwords or API keys. When we copy-n-paste from Notepad++ or similar, remember to put a space character in front of the command.
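A quick illustration (the ldapsearch command is just an example):
 ldapsearch -D "cn=admin" -w 'S3cret!' -b "dc=example,dc=com"   # leading space: not saved to history
history | tail -3                                               # the command above will not appear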
export PROMPT_COMMAND='history -a; history -c; history -r'
Keeps history synchronized across multiple shell sessions: history -a appends new commands to the history file, history -c clears the in-memory history for the current session, and history -r reloads history from the history file.
Symantec IGA Virtual Appliance Service IDs
Each service ID is configured with a .profile or .bash_profile and a .bashrc file. We can see that the default .bash_profile for the ‘config’ service already sources .bashrc:
config@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
config@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export PATH
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
if [ -d ~/.bashrc.d ]; then
for rc in ~/.bashrc.d/*; do
if [ -f "$rc" ]; then
. "$rc"
fi
done
fi
unset rc
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
A view of the ‘dsa’ service ID files with some modifications. The default .profile has only one line, which sources /opt/CA/Directory/dxserver/install/.dxprofile. To preserve history, instead of other direct updates, we add a .bashrc reference to this file.
[dsa@vapp1453 ~]$ cat .profile
. /opt/CA/Directory/dxserver/install/.dxprofile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Below is a view of the new .bashrc file to be sourced by the DSA .profile file.
[dsa@vapp1453 ~]$ cat .bashrc
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
A view of the ‘imps’ service ID files with some modifications. The default .profile has only one line, which sources /etc/.profile_imps. To preserve history, instead of other direct updates, we add a .bashrc reference to this file.
imps@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .profile
# Source IM Provisioning Profile script
. /etc/.profile_imps
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Below is a view of the new .bashrc file to be sourced by the IMPS .profile file.
imps@vapp1453 VAPP-14.5.0 (192.168.2.45):~ > cat .bashrc
# Added to improve history of all commands
shopt -s histappend
export HISTTIMEFORMAT='%F %T '
export HISTSIZE=10000
export HISTFILESIZE=100000
export HISTIGNORE='ls:history'
export HISTCONTROL=ignorespace
export PROMPT_COMMAND='history -a; history -c; history -r'
Delete Sensitive Information from History
If sensitive information has already been recorded in your history, you should clean it up. While you could wipe the entire history, a better approach is to retain as much as possible and remove only the sensitive entries.
The Challenge of Deleting Sensitive History
When deleting specific entries from Bash history, there’s a complication: line numbers change dynamically. The Bash history is a sequential list, so removing an entry causes all subsequent commands to shift up, altering their line numbers.
To address this, the cleanup process should iterate backward through the history. Starting with the last match ensures that earlier line numbers remain unaffected by changes further down the list.
Cleanup Script
Save the following script as history_cleanup.sh; it prompts for the pattern matching the sensitive commands you want to delete. Run it with `source history_cleanup.sh` so that its `history -d` calls affect the current shell’s history:
#!/bin/bash
##################################################################
# Name: history_cleanup.sh
# Goal: Provide a means to clean up prior bash history of any
# sensitive data by a known pattern, e.g. password or token
#
# ANA 11/2024
##################################################################
# Prompt the user to enter the pattern to search for
read -p "Enter the pattern to search for in history: " PATTERN
# Validate input
if [ -z "$PATTERN" ]; then
echo "No pattern entered. Exiting."
exit 1
fi
# Find matching history entries and delete them in reverse numeric order.
# Process substitution keeps the loop in the current shell, so that
# 'history -d' modifies the live history list (a plain pipeline would run
# the loop in a subshell and the deletions would be lost).
while read -r line; do
# Extract the history line number (first column in the output)
LINE_NUMBER=$(echo "$line" | awk '{print $1}')
# Delete the history entry by its line number
history -d "$LINE_NUMBER"
done < <(history | grep "$PATTERN" | sort -rn)
# Save the updated history to the .bash_history file
history -w
echo "History cleanup complete. Entries matching '$PATTERN' have been removed."
Final Thoughts
Applying this .bashrc configuration across all service IDs offers several advantages. It streamlines workflows, secures sensitive inputs, and ensures a more organized command history. These enhancements are particularly valuable for developers, administrators, or anyone operating in multi-terminal environments.
Key Benefits:
History Persistence: Ensures commands are appended to the history file without overwriting existing entries, preserving a complete record of activity.
Enhanced Auditability: Adds timestamps to history, making it easier to track when specific commands were executed.
Reduced Noise: Excludes less critical commands, such as ls, to keep the history clean and focused on meaningful actions.
Improved Privacy: Commands starting with a space are omitted from the history, protecting sensitive inputs like passwords or API keys.
Real-Time Synchronization: Maintains consistent history across multiple terminal sessions, enabling seamless transitions and collaboration.
By adopting these configurations, you can enhance productivity, improve security, and achieve better management of command history in your environment.
On a typical Linux host, rolling back a configuration in WildFly can be as simple as copying a backup of the configuration XML file back into place. However, working within the constraints of a secured virtual appliance (vApp) presents a unique challenge: the primary service ID often lacks write access to critical files under the WildFly deployment.
When faced with this limitation, administrators may feel stuck. What options do we have? Thankfully, WildFly’s jboss-cli.sh process provides a lifeline for configuration management, allowing us to take snapshots and reload configurations efficiently. See the bottom of this blog if you need to create a user for jboss-cli.sh usage.
Why Snapshots are necessary for your sanity
WildFly snapshots capture the server’s current configuration, creating a safety net for experimentation and troubleshooting. They allow you to test changes, debug issues, or introduce new features with confidence, knowing you can quickly restore the server to a previous state.
In this guide, we’ll explore a step-by-step process to test and restore configurations using WildFly snapshots on the Symantec IGA Virtual Appliance.
Step-by-Step: Testing and Restoring Configurations
Step 1: Stamp and Backup the Current Configuration
First, you may optionally add a unique custom attribute to the current `standalone.xml` (ca-standalone-full-ha.xml) configuration if you don’t already have a delta to compare. This custom attribute acts as a marker, helping track configuration changes. After updating the configuration, take a snapshot, as sketched below.
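A minimal sketch, assuming a throwaway system property is acceptable as the marker (“migration-marker” is a hypothetical name; `:take-snapshot` is the standalone CLI snapshot operation):
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:add(value=before-change)"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command=":take-snapshot"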
Step 2: Simulate a Change
Simulate a change by updating the custom attribute, then validate the update with a read query to confirm the change was applied. To be safe, we remove the attribute and re-add it with a new, different string (see the sketch below).
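Continuing the hypothetical marker from Step 1:
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:remove"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:add(value=after-change)"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:read-resource"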
Step 3: List Snapshots
List all available snapshots to identify the correct rollback point. You can use the `:list-snapshots` command to query snapshots and verify the files in the snapshot directory.
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --timeout=90000 --command=":list-snapshots"
ls -l /opt/CA/wildfly-idm/standalone/configuration/standalone_xml_history/snapshot/
Step 4: Reload from Snapshot
Once you’ve identified the appropriate snapshot, use the `reload` command to roll back the configuration. Monitor the process to ensure it completes successfully, then verify the configuration (a sketch follows).
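A sketch of the rollback, passing a snapshot file reported by `:list-snapshots` (the timestamped name below is illustrative; confirm the expected path form against your WildFly version’s documentation):
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --timeout=90000 --command="reload --server-config=standalone_xml_history/snapshot/20241117-103000000ca-standalone-full-ha.xml"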
Adding a WildFly Admin User for Snapshot Management
Before you can execute commands through WildFly’s `jboss-cli.sh`, you’ll need a properly configured admin user. If an admin user does not already exist, you can create one with a command like the sketch below:
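A likely form of the command, using WildFly’s add-user.sh and the switches described below (the path is assumed from the jboss-cli.sh examples above):
/opt/CA/wildfly-idm/bin/add-user.sh -m -u jboss-admin -p Password01! -g SuperUser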
- **`-m`**: Indicates the user is for management purposes.
- **`-u jboss-admin`**: Specifies the username (`jboss-admin` in this case).
- **`-p Password01!`**: Sets the password for the user.
- **`-g SuperUser`**: Assigns the user to the `SuperUser` group, granting the permissions needed for snapshot and configuration management.
You can have as many jboss-cli.sh service IDs as you need.
Please note: this Wildfly management service ID is not the same as the Wildfly application service ID needed for /iam/im/logging_v2.jsp access, which requires the -a switch and the IAMAdmin group (a sketch follows).
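A sketch of the application-realm variant per that note (the username here is illustrative):
/opt/CA/wildfly-idm/bin/add-user.sh -a -u logging-admin -p Password01! -g IAMAdmin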
If your logging_v2.jsp page is not displaying correctly, there is a simple update to resolve this challenge. Add the below string to your /opt/CA/VirtualAppliance/custom/IdentityManager/jvm-args.conf file.
Our goal is to move away from using self-signed certificates, often supplied by various software vendors and Kubernetes deployments, in favor of a unified approach with a single wildcard certificate covering multiple subdomains. This strategy allows for streamlined updates of a single certificate, which can then be distributed across our servers and projects as needed.
We have identified LetsEncrypt as a viable source for these wildcard certificates. Although LetsEncrypt offers a Docker/Podman image example, we have discovered an alternative that integrates with Google Domains. This method automates the entire process, including validation of the DNS TXT record and generation of wildcard certificates. It’s important to note that Google Domains operates independently from Google Cloud DNS, and its API is limited to updating TXT records only.
In this blog post, we will concentrate on how to validate the certificates provided by LetsEncrypt using OpenSSL. We will also demonstrate how to replace self-signed certificates with these new ones on a virtual appliance.
View the pubkey of the LetsEncrypt cert.pem and privkey.pem file to confirm they match
We need to download this root CA cert for solutions, appliances, and on-prem Kubernetes Clusters that do NOT have these root CA certs in their existing keystores.
Validate the cert.pem file with the public root CA cert and the provided chain.pem file
This will return an “OK” response if valid. The CA cert order is important; if reversed, this process will fail. Validation still fails if we have only chain.pem or fullchain.pem (see image below) without the correct public root CA cert from LetsEncrypt. Note: this public root CA cert is typically bundled with updated modern browsers. Note 2: while fullchain.pem does contain a CA cert with CN = ISRG Root X1, it does NOT appear to be the correct one based on the error reported, so we downloaded the correct CA cert with the same CN of ISRG Root X1 (see the following images below).
Combine cert.pem with the public root CA cert and chain.pem for a complete chain cert in the CORRECT order.
Important note: cert.pem MUST be first in this list or validation will fail. Also note that there are two (2) root CA certs with the same CN, which may cause some confusion when validating the chain.
Validate certs with openssl server process and two (2) terminal ssh sessions/windows
1st terminal session: run an openssl server (via openssl s_server) on port 9443 (or any open port). The -www switch sends a status message back to the client when it connects, including information about the ciphers used and various session parameters. The output is in HTML format, so this option is normally used with a web browser.
2nd terminal session: run openssl s_client and curl with the combined chain cert to validate. Replace the FQDN with your LetsEncrypt domain in the wildcard cert; the example FQDN below is training.anapartner.net. You may also use a browser to access the openssl s_server web server with the FQDN.
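A sketch of the two sessions (port and FQDN per the example above; the --resolve switch mirrors the curl test later in this post):
# 1st terminal: serve the combined chain on port 9443; -www returns an HTML status page
openssl s_server -accept 9443 -cert combined_chain_with_cert.pem -key privkey.pem -www
# 2nd terminal: validate from the client side with the LetsEncrypt root CA cert
openssl s_client -connect training.anapartner.net:9443 -CAfile isrgrootx1.pem
curl -v --cacert isrgrootx1.pem --resolve training.anapartner.net:9443:127.0.0.1 https://training.anapartner.net:9443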
Example of using the official Certbot image with podman (a sketch follows). We recommend using multiple -d switches with *.subdomain1.domain.com to allow a single cert to be used for many of your projects. Reference of this deployment.
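A minimal sketch of a manual-DNS run (the domains and host folder are illustrative; Certbot prompts for the DNS TXT records interactively):
podman run -it --rm -v "$HOME/letsencrypt:/etc/letsencrypt:Z" docker.io/certbot/certbot certonly --manual --preferred-challenges dns -d '*.subdomain1.domain.com' -d '*.subdomain2.domain.com'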
A version of podman with the Google Domains API TXT integration. We use variables so this code can be reused for various testing domains. This docker image temporarily creates the Google Domains TXT records via a REST API, which are needed for Certbot DNS validation, then removes the TXT records. No manual interaction is required. We run this process with a bash shell script on demand or via scheduled events.
Replace Identity Suite vApp Apache Certificate with LetsEncrypt
We have seen tech notes and an online document, but wanted to provide a cleaner step-by-step process to update the Symantec IGA Virtual Appliance certificates for the embedded Apache HTTPD service under the path /opt/CA/VirtualAppliance/custom/apache-ssl-certificates.
# Collect the generated LetsEncrypt certs via certbot, save them, scp to the vApp host, and then extract them
tar -xvf letsencrypt-20231125.tar
# View the certs
ls -lart
# Validate LetsEncrypt cert via pubkey match between private key and cert
openssl x509 -noout -pubkey -in cert.pem
openssl pkey -pubout -in privkey.pem
# Download the latest LetsEncrypt public root CA cert
curl -sOL https://letsencrypt.org/certs/isrgrootx1.pem
# Validate a full chain with root with cert (Note: order is important on cat process)
openssl verify -CAfile <(cat isrgrootx1.pem chain.pem) cert.pem
# Create a full chain with root cert and LetsEncrypt chain in the correct ORDER
cat cert.pem isrgrootx1.pem chain.pem > combined_chain_with_cert.pem
# Move prior Apache HTTPD cert files
mv localhost.key localhost.key.original
mv localhost.crt localhost.crt.original
# Link the new LetsEncrypt files to same names of localhost.XXX
ln -s privkey.pem localhost.key
ln -s combined_chain_with_cert.pem localhost.crt
# Restart apache (httpd)
sudo systemctl restart httpd
# Test with curl with the FQDN name in the CN or SANs of the cert, e.g. training.anapartner.net
curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:443:127.0.0.1 https://training.anapartner.net:443
# Test with browser with the FQDN name
Example of integration and information reported by browser
View of the certificate as shown with a CN (subject) = training.anapartner.net
A view of the SANs wildcard certs that match the FQDN used in the browser URL bar of iga.k8s-training-student01.anapartner.net
Examples of error messages from the Apache HTTPD service’s log files if the certs are not in the correct order or do not validate correctly. One message is a warning only; the other is a fatal error stating that the cert and private key do not match. Use the pubkey check process above to confirm the cert/key match.
[ssl:warn] [pid 2206:tid 140533536823616] AH01909: CA_IMAG_VAPP:443:0 server certificate does NOT include an ID which matches the server name
[ssl:emerg] [pid 652562:tid 140508002732352] AH02565: Certificate and private key CA_IMAG_VAPP:443:0 from /etc/pki/tls/certs/localhost.crt and /etc/pki/tls/private/localhost.key do not match
TLS Secrets update in Kubernetes Cluster
If your Kubernetes Cluster is on-prem and does not have internet access to validate the root CA cert, you may decide to use combined_chain_with_cert.pem when building your Kubernetes Secrets. Note that you must delete and then re-add a Secret, as there is no in-place update process for Secrets (see the note after the script below).
CERTFOLDER=~/labs/letsencrypt
CERTFILE=${CERTFOLDER}/combined_chain_with_cert.pem
#CERTFILE=${CERTFOLDER}/fullchain.pem
KEYFILE=${CERTFOLDER}/privkey.pem
INGRESS_TLS_SECRET=anapartner-dev-tls
NAMESPACE=monitoring
NS=${NAMESPACE}
kubectl -n ${NS} get secret ${INGRESS_TLS_SECRET} > /dev/null 2>&1
if [ $? != 0 ]; then
echo ""
echo "### Installing TLS Certificate for Ingress"
kubectl -n ${NS} create secret tls ${INGRESS_TLS_SECRET} \
--cert=${CERTFILE} \
--key=${KEYFILE}
fi
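To rotate the certificate later, delete the Secret first and then re-run the block above:
kubectl -n ${NS} delete secret ${INGRESS_TLS_SECRET}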
Ingress Rule via yaml:
tls:
- hosts:
- grafana.${INGRESS_APPS_DOMAIN}
secretName: ${INGRESS_TLS_SECRET}
helm update:
--set grafana.ingress.tlsSecret=${INGRESS_TLS_SECRET}
The below section from the online documentation mentions the purpose of a certificate to be used by Symantec Directory. It mentions using either DXcertgen or openssl. We can now add LetsEncrypt certs to that list as well.
One statement caught our eye as not quite accurate: that a certificate used on one DSA could not be used for another DSA. If we compare the DSAs provided by CA Directory for the provisioning server data tier (Identity Suite/Identity Manager), there is no difference between them, including the subject name. Because the subject (CN) has the same name for all five (5) DSAs (data/router), if a Java JNDI call is made for an LDAP call to the DSAs, LDAP hostname validation must be disabled (see below).
We must use a key type of RSA for any cert with Symantec PAM. The process to update the certificates is fairly straightforward. Access the PAM UI configuration and select the menu: Configuration / Security / Certificates. Join the cert.pem and privkey.pem files together, in that order, with cat or Notepad.
Challenge/Resolution: edit the joined file and add the string “RSA” to the header/footer of the private key provided by LetsEncrypt. Per Broadcom tech note 126692, “PAM’s source code is expecting that RSA based Private Keys start with ‘—–BEGIN RSA PRIVATE KEY—–’ header and have a matching footer.” See the example below.
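As an alternative sketch to hand-editing the header/footer, openssl can convert the key to the traditional RSA PEM format, which carries the “BEGIN RSA PRIVATE KEY” header natively (the output file name is taken from the PAM confirmation message below):
# Convert the LetsEncrypt PKCS#8 key to traditional RSA PEM
openssl rsa -in privkey.pem -out privkey_rsa.pem
# Join cert first, then key, per the order noted above
cat cert.pem privkey_rsa.pem > cert_with_key_only_for_pam_app.crt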
Select “Certificate with Private key” (with X509 as the other option), then click the “Choose File” button to select the combined cert/private-key pem file. We are not required to supply a destination filename or passphrase for the LetsEncrypt certs. Click the Upload button.
We should receive a confirmation message of “Confirmation: PAM-CM-0349: subject=CN = training.anapartner.net has been verified.”
The error message “PAM-CM-0201: Verification Error Can not open private key file” will occur if using a key type of RSA or ECDSA and the default header/footer does not contain the required string for PAM to parse. If we attempt to use the ECDSA key type, we receive a similar PAM-CM-0201 error even after updating the header/footer, so please regenerate the LetsEncrypt certs with keytype=RSA.
Next, after the certificate and private key have been loaded into Symantec PAM, use the “Set” menu option to assign this certificate as primary. We click the Verify button first to confirm the certificate is functioning correctly.
We should receive a confirmation message for the file: “Confirmation: PAM-CM-0346: cert_with_key_only_for_pam_app.crt has been verified”.
Finally, we click the “Accept” button and allow the PAM appliance to restart. Click “Yes” when asked to restart the appliance.
View the updated PAM UI with the LetsEncrypt Certs.
ERROR Messages
If you have received any of the below error messages from a Java process, e.g. J2EE servers (JBoss/Wildfly), you have pushed beyond the solution vendor’s ability to manage newer features in LetsEncrypt certs. You will need to regenerate the certs with a type of RSA instead of the default elliptic-curve certs.
UNKNOWN-CIPHER-SUITE
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
SSLHandshakeException: no cipher suites in common
Ignore unavailable extension
Ignore unknown or unsupported extension
Use the below processes to help you identify the root cause of your issue.
Create an ‘RSA’ type JKS and P12 Keystore using LetsEncrypt certs.
The below example is a two (2)-step process: first create a p12 keystore from the cert.pem and privkey.pem files, then convert the p12 keystore to the older JKS keystore format with a second command. You may use these in any Java process, e.g. a J2EE and/or Tomcat platform. A sketch of both commands follows.
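A sketch of both commands (the alias and passwords are illustrative):
# Step 1: build a P12 keystore from the LetsEncrypt cert, key, and chain
openssl pkcs12 -export -in cert.pem -inkey privkey.pem -certfile chain.pem -name letsencrypt -out keystore.p12 -passout pass:changeit
# Step 2: convert the P12 keystore to the older JKS format
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore keystore.jks -deststoretype JKS -deststorepass changeit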
Organizations may rely on software appliances to meet the demands of secure, mission-critical applications. To ensure the optimal operation of these appliances, Application Performance Monitoring/Management (APM) tools have emerged as invaluable assets. In this comprehensive blog post, we’ll explore how the synergy of APM host monitoring via the SysEdge module, APM Java, and APM JBoss(Wildfly/J2EE) can significantly enhance the performance and security of applications running on software appliances like the Symantec Identity Suite Virtual Appliance (on-prem and AWS instances).
1. Value of APM Java and APM JBoss(Wildfly) Monitoring
APM tools feature specialized modules for Java-based applications and JBoss(Wildfly/J2EE) application servers, addressing the unique challenges posed by these technologies:
Optimizing Java-Based Applications: APM Java monitoring delves deep into Java applications, tracing bottlenecks, optimizing code, and ensuring the efficient use of Java Virtual Machine (JVM) resources.
JBoss Application Server Expertise: APM JBoss monitoring tracks the performance and stability of JBoss deployments, providing insights into JBoss-specific metrics critical for the smooth operation of JBoss-based applications.
2. The Role of APM Host Monitoring with SysEdge Module
The SysEdge module, integrated into APM tools, plays a pivotal role in enhancing the performance, security, and overall management of software appliances. This module focuses on host-based metrics, offering insights into the appliance’s performance and health.
Resource Utilization: SysEdge monitors essential resources, such as CPU, memory, disk space, and network usage, ensuring efficient resource allocation and preventing performance bottlenecks.
Hardware Health: It provides insights into the hardware/virtual components, crucial for maintaining the reliability of the appliance.
Comprehensive Diagnostics: The detailed host-based metrics allow for more accurate and rapid issue diagnostics, helping administrators identify and address problems efficiently.
3. Benefits of APM Tools for Software Appliances
The integration of host-based metrics through SysEdge, along with APM Java and APM JBoss monitoring, offers a multitude of benefits:
Holistic Insights: APM tools provide a complete picture of the appliance’s performance, helping administrators make informed decisions by combining application-specific data with host-based metrics.
Proactive Issue Resolution: Administrators can proactively identify and address issues that may impact both application performance and the host system, reducing downtime and increasing reliability.
Streamlined Management: These tools enable remote management of the appliance, even in challenging environments, allowing fine-tuning, patch application, and addressing security concerns.
4. Secure Deployment with Non-Root User ID and DevOps Automation
The utilization of non-root user IDs and DevOps automation can significantly enhance both security and operational efficiency in the deployment and management of applications on software appliances. Traditional application deployments often involved elevated privileges, exposing them to security vulnerabilities. Deploying applications with non-root user IDs offers several advantages:
Reduced Attack Surface: Non-root users have limited permissions, reducing the potential attack surface and making it more difficult for malicious actors to compromise the system.
Enhanced Security: By limiting application permissions, non-root deployments minimize the risk of security breaches and unauthorized access.
Compliance: Using non-root user IDs aligns with security best practices and compliance requirements, ensuring your organization meets regulatory standards.
Isolation: Non-root deployments prevent applications from interfering with critical system components, reducing the risk of conflicts and crashes.
5. Example of integration/deployment of APM tools (Java/JBoss/SysEdge) on the Symantec Identity Suite Virtual Appliance with non-root Id (config/ec2-user)
The Symantec Identity Suite Virtual Appliance is a hardened software appliance that only allows authentication for one (1) of two (2) non-root IDs (config or ec2-user). The Symantec Identity Suite does allow APM-type tools to be deployed via extraction under the path /opt/CA/VirtualAppliance/custom/profiler.
We want to walk through how to enable the DX APM SaaS Infra Agent with HostMonitoring (SysEdge) on an Amazon Linux 2 host as a non-root user ID, as well as how to integrate the embedded Java agent with the CA Identity Suite’s three (3) JBoss/Wildfly instances for IM/IG/IP and the CA Identity Suite JCS Connector Server.
Additionally, we wanted to ensure that any external configuration access was disabled, as we only wanted to allow a “push” configuration/model of data from the vApp to the APM SaaS Collection APIs. We did not wish to allow any modification of the APM agent’s configuration on the vApp that was not defined during initial deployment.
Four (4) parameters were modified from default installation:
1. Ensure non-root id is used for sysedge
echo "privilege_separation_user ${NON_ROOT_USER_ID}" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
2. Ensure only local host can pull or send data to the sysedge agent
echo "bind_address 127.0.0.1" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
3. Mask low value entries - Switch to debug loglevel as needed to address configuration challenges
echo "sysedge_loglevel fatal" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
4. Disable remote management via APM Command Center (ACC)
sed -i "s|introscope.agent.acc.enable=true|introscope.agent.acc.enable=false|g" ${APM_INFRA_HOME_FOLDER}/apmia/core/config/IntroscopeAgent.profile
Leveraging the information provided by the Broadcom DX Application Performance Management and Symantec Identity Suite documentation, we were able to clarify the dependencies on the JBoss logging log4j jar(s) and LogManager objects with AdoptOpenJDK 8. Neither document had the exact configuration that we found viable during testing.
We had two (2) challenges deploying the HostMonitoring (sysedge) agent on the AWS Identity Suite vApp instance that we were able to identify and address. No issues were found on the on-prem edition of the Identity Suite vApp.
Challenge(s):
File ownership of the sysedge PID and log files by ‘root’ instead of the non-root user, due to the default systemd startup script for the sysedge module/binary.
A null entry within the default OS file /etc/redhat-release prevented the ‘sysedge’ binary from starting.
The bash shell script below uses the APM SaaS binary download feature, which packages the sysedge module/binary component together with the primary APM Infra agent and embeds the unique token for your own APM SaaS infrastructure (or APM Enterprise infra). The APM SaaS download process provides great download automation via wget. The Symantec Identity Suite allows the non-root IDs to start/stop systemd processes, which we leverage. Alternatively, we may use crontab for the non-root IDs to start/stop the two (2) services, apmia and sysedge, as documented for DX APM agents.
#!/bin/bash
####################################################################
#
# Install the APM SaaS Infra Agent with HostMonitoring module (sysedge) as non-root id
# - Update variables of NON_ROOT_USER_ID and INFRA_DOWNLOAD_URL and APM_INFRA_HOME_FOLDER
# - This script works for a host that allows minimal sudo access to systemctl
# - Alternative startup process is crontab for the non-root-id
#
# Goal: Replica process for: ./APMIACtrl.sh install user=non-root-id
# and ./APMIACtrl.sh console_start
#
# Methodology: Isolate delta between folders using diff with sub-folder detection
# diff -iry --suppress-common-lines apmia/ apmia.original/
#
# Important Note: Identified RCA for sysedge binary having memory fault SEGV
# /etc/redhat-release MUST be populated (avoid null value)
#
#
# Crontab notes from online APM agent docs: (if needed)
# @reboot /home/user/apmia/APMIACtrl.sh console_start > /home/user/logs/cron.log 2>&1
# */5 * * * * /home/user/apmia/APMIACtrl.sh console_start > /home/user/logs/cron.log 2>&1
#
# Modify default APM Infra Agent parameters with these changes
# 1. Ensure non-root id is used for sysedge
# echo "privilege_separation_user ${NON_ROOT_USER_ID}" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
# 2. Ensure only local host can pull or send data to the sysedge agent
# echo "bind_address 127.0.0.1" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
# 3. Mask low value entries - Switch to debug loglevel as needed to address configuration challenges
# echo "sysedge_loglevel fatal" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
# 4. Disable acc integration (remote management) via APM Command Center (ACC)
# sed -i "s|introscope.agent.acc.enable=true|introscope.agent.acc.enable=false|g" ${APM_INFRA_HOME_FOLDER}/apmia/core/config/IntroscopeAgent.profile
#
#
#
# Ref. https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/dx-apm-agents/SaaS/infrastructure-agent/install-and-deploy-infrastructure-agent/install-the-infrastructure-agent-on-ca-digital-experience-insights.html
# https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/dx-apm-agents/SaaS/infrastructure-agent/Supportability-Matrix-for-Infrastructure-Agent.html
# https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/dx-apm-agents/SaaS/SystemEDGE-based-Monitoring.html
#
# ANA 10/2023
#
####################################################################
#NON_ROOT_USER_ID=config
NON_ROOT_USER_ID=ec2-user
echo ""
echo ""
echo "################################################################################################"
echo "# Ensure the download url has the APM Infra Agent with HostMonitoring check / enabled - This will be packaged together "
echo "################################################################################################"
INFRA_DOWNLOAD_URL="https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/XXX_SITE_ID_STRING_XXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=XXXXXXXXXXXXXXXXXXX_LONG_TOKEN_HERE__XXXXXXXXXXX"
APM_INFRA_HOME_FOLDER=/opt/CA/VirtualAppliance/custom/profiler/apm_infra
mkdir -p ${APM_INFRA_HOME_FOLDER}
cd ${APM_INFRA_HOME_FOLDER}
pwd
ls -lart
echo "wget --no-check-certificate --content-disposition ${INFRA_DOWNLOAD_URL} -O Infrastructure_Agent_apmia.tar"
wget --no-check-certificate --content-disposition ${INFRA_DOWNLOAD_URL} -O Infrastructure_Agent_apmia.tar
APM_INFRA_FILE_NAME=$(ls -lart Infrastructure_Agent_apmia* |tail -1 | awk '{print $9}')
echo "tar -xvf ${APM_INFRA_FILE_NAME} "
#tar -xvf ${APM_INFRA_FILE_NAME}
tar -xf ${APM_INFRA_FILE_NAME}
echo ""
echo ""
echo "################################################################################################"
echo "Update APM Infra Agent startup file to use non-root user ID of ${NON_ROOT_USER_ID} "
echo "################################################################################################"
sed -i "s|#RUN_AS_USER=|RUN_AS_USER=${NON_ROOT_USER_ID}|g" ${APM_INFRA_HOME_FOLDER}/apmia/bin/APMIAgent.sh
echo ""
echo ""
echo "################################################################################################"
echo "Validate update of NON_ROOT_USER_ID"
echo "################################################################################################"
grep -C 2 -i "RUN_AS_USER=${NON_ROOT_USER_ID}" ${APM_INFRA_HOME_FOLDER}/apmia/bin/APMIAgent.sh
echo ""
echo ""
echo "################################################################################################"
echo "Extract SystemEdge component for APM Infra Host Monitoring"
echo "################################################################################################"
export AGENTHOME=${APM_INFRA_HOME_FOLDER}/apmia
SYSEDGE_FILE_NAME=$(ls ${AGENTHOME}/casystemedge*)
echo ${SYSEDGE_FILE_NAME}
cd ${AGENTHOME}
#tar -xvf ${SYSEDGE_FILE_NAME}
tar -xf ${SYSEDGE_FILE_NAME}
echo ""
echo ""
echo "################################################################################################"
echo "Deploy and install SystemEdge component for APM Infra Host Monitoring with non-root user ID"
echo "################################################################################################"
kill $(pidof sysedge) &>/dev/null
rm -rf ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE &>/dev/null
cd ${APM_INFRA_HOME_FOLDER}/apmia/CA_SystemEDGE_Core
./ca-setup.sh install
/bin/ps -ef | grep -i sysedge | grep -v grep
echo ""
echo ""
echo "################################################################################################"
echo "Update and restart the SystemEdge component for APM Infra Host Monitoring with non-root user ID"
echo "################################################################################################"
# Ensure non-root id is used for sysedge
echo "privilege_separation_user ${NON_ROOT_USER_ID}" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
# Ensure only local host can pull or send data to the sysedge agent
echo "bind_address 127.0.0.1" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
# Mask low value entries - Switch to debug loglevel as needed to address configuration challenges
echo "sysedge_loglevel fatal" >> ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf
cp -r -p ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/sysedge.cf ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.cf
${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/bin/sysedgectl stop
${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/bin/sysedgectl start
/bin/ps -ef | grep -i sysedge | grep -v grep
echo ""
echo ""
echo "################################################################################################"
echo "Check updates to sysedge configuration file sysedge.cf "
echo "################################################################################################"
#tail -5 ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.cf
grep -Hin -v -e "^$" -e "^#" -e "^template" ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.cf
echo ""
echo ""
echo "################################################################################################"
echo "Update the APM Infra main agent and disable the acc component "
echo "################################################################################################"
sed -i "s|introscope.agent.acc.enable=true|introscope.agent.acc.enable=false|g" ${APM_INFRA_HOME_FOLDER}/apmia/core/config/IntroscopeAgent.profile
grep "introscope.agent.acc.enable" ${APM_INFRA_HOME_FOLDER}/apmia/core/config/IntroscopeAgent.profile
echo ""
echo ""
echo "################################################################################################"
echo "Create systemd startup process on vApp due to sudo systemctl process allowed on vApp for APM Infra main agent"
echo "################################################################################################"
cat << EOF > ${APM_INFRA_HOME_FOLDER}/apmia/apmia.service
# /etc/systemd/system/apmia.service
[Unit]
Description=APM Infrastructure Agent
After=syslog.target
[Service]
Type=forking
ExecStart="${APM_INFRA_HOME_FOLDER}/apmia/bin/./APMIAgent.sh" start sysd
ExecStop="${APM_INFRA_HOME_FOLDER}/apmia/bin/./APMIAgent.sh" stop sysd
User=${NON_ROOT_USER_ID}
KillMode=control-group
Environment=SYSTEMD_KILLMODE_WARNING=true
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl stop apmia.service &>/dev/null
sudo systemctl disable apmia.service &>/dev/null
sudo systemctl enable ${APM_INFRA_HOME_FOLDER}/apmia/apmia.service
echo "################################################################################################"
sudo systemctl cat apmia.service
echo "################################################################################################"
sudo systemctl daemon-reload
sudo systemctl start apmia.service
sudo systemctl status apmia.service -a -l --no-pager
echo ""
echo ""
echo "################################################################################################"
echo "Create systemd startup process on vApp due to sudo systemctl process allowed on vApp for Sysedge agent"
echo "################################################################################################"
# Stop sysedge via manual process to use the systemd process
${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/bin/sysedgectl stop
# Manage these two (2) systemd errors with PIDs and Paths
# Refusing to accept PID outside of service control group, acquired through unsafe symlink chain
# /opt/CA/VirtualAppliance/custom/profiler/apm_infra/apmia/SystemEDGE/config/port1691/sysedge.service:8] Not an absolute path
#
cat << EOF > ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.service
# /etc/systemd/system/sysedge.service
[Unit]
Description=sysedge
After=syslog.target
[Service]
Type=forking
WorkingDirectory=${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691
#Environment=VAR_HERE_ENV_ENV=production PATH=PATH_HERE_IF_NEEDED
ExecStart="${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/bin/CA-SystemEDGE" start sysd
ExecStop="${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/bin/CA-SystemEDGE" stop sysd
PIDFile=${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.pid
User=${NON_ROOT_USER_ID}
KillMode=none
[Install]
WantedBy=multi-user.target
EOF
echo ""
echo ""
echo "################################################################################################"
cat ${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.service
echo "################################################################################################"
echo ""
echo ""
sudo systemctl stop sysedge.service &>/dev/null
sudo systemctl disable sysedge.service &>/dev/null
sudo systemctl enable "${APM_INFRA_HOME_FOLDER}/apmia/SystemEDGE/config/port1691/sysedge.service"
echo "################################################################################################"
sudo systemctl cat sysedge.service
echo "################################################################################################"
sudo systemctl daemon-reload
sudo systemctl start sysedge.service
sudo systemctl status sysedge.service -a -l --no-pager
echo ""
echo ""
echo "################################################################################################"
echo "Ensure all files are owned by the non-root id of ${NON_ROOT_USER_ID} for APM Infra Agent"
echo "Check for any error messages "
echo "################################################################################################"
sudo systemctl stop apmia.service
sudo systemctl stop sysedge.service
echo "chown -R ${NON_ROOT_USER_ID}:${NON_ROOT_USER_ID} ${APM_INFRA_HOME_FOLDER}/apmia "
echo "################################################################################################"
chown -R ${NON_ROOT_USER_ID}:${NON_ROOT_USER_ID} ${APM_INFRA_HOME_FOLDER}/apmia
echo ""
echo ""
echo ""
echo ""
echo "################################################################################################"
echo "View running services: APM Infra will have a parent process of wrapper and two (2) java child processes"
echo "The APM Infra HostMonitoring module will have one process name sysedge "
echo "################################################################################################"
sudo systemctl start apmia.service
sudo systemctl start sysedge.service
ps -ef | grep apmia | grep -v grep
echo ""
echo ""
echo "################################################################################################"
echo "Use these these commands to view / monitor / start & stop APM Infra Agent"
echo "################################################################################################"
echo "sudo systemctl status apmia.service -a -l --no-pager"
echo "sudo systemctl stop apmia.service"
echo "sudo systemctl start apmia.service"
echo "sudo systemctl cat apmia.service"
echo "journalctl -u apmia.service -f"
echo ""
echo "sudo systemctl status sysedge.service -a -l --no-pager"
echo "sudo systemctl stop sysedge.service"
echo "sudo systemctl start sysedge.service"
echo "sudo systemctl cat sysedge.service"
echo "journalctl -u sysedge.service -f"
echo "journalctl -u sysedge.service -xe -f"
echo ""
echo ""
6. Example of JVM arguments for integration of the APM SaaS Java Agent with the Identity Suite JCS Connector Server
The APM SaaS or APM Enterprise UI solution provides an agent download page that allows selection of the agent to be deployed.
Instructions are provided; note the "wget" auto-download link that may be leveraged for automation (DevOps) processes. This link includes both the site ID and the download token ID.
If you haven't generated a new credentials token after downloading the Java/JBoss agents, click "Show Agent Details" to harvest the three (3) key/value pairs.
The bash shell script below downloads the APM Java Agent package. There is no embedded credential token in this download; the credentials are provided separately via the APM SaaS UI (as shown above). While we could place these three (3) parameters within the APM agent configuration file, IntroscopeAgent.profile, we decided to clarify the use of JVM switches to override any values, allowing us to automate this deployment independently of any new APM agent updates.
#!/bin/bash
######################################################################################
#
# Automate deployment of the APM SaaS Java Agent with credentials & urls
# to the single Identity Manager JCS Connector Server instance
#
# Use variables for the JVM parameters. Adjust if needed.
# We may override the default naming convention to clarify which instance is
# being monitored within the APM SaaS Dashboard, to avoid confusion with any
# JBoss agent instance from IM/IG/IP
#
# ANA 10/2023
#
######################################################################################
echo ""
echo ""
echo "################################################################################################"
echo "# Ensure the download url has the APM Java Agent "
echo "################################################################################################"
APM_CREDENTIAL_TOKEN='XXXXXXX_LONG_TOKEN_HERE_FROM_APM_SAAS_UI__XXXXXXX'
APM_URL='apmgw.dxi-na1.saas.broadcom.com'
APM_SAAS_AGENT_URL="https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/XXX_SITE_ID_STRING_XXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=XXXX_DOWNLOAD_TOKEN_PROVIDED_FROM_APM_SAAS_UI_WHEN_SELECTED___XXXXXXXXXXXX"
APM_AGENT_HOME_FOLDER=/opt/CA/VirtualAppliance/custom/profiler/apm_java
APM_AGENT_FILE_NAME="APM_SaaS_Java_Agent.tar"
mkdir -p ${APM_AGENT_HOME_FOLDER}
cd ${APM_AGENT_HOME_FOLDER}
pwd
ls -lart
echo "wget --no-check-certificate --content-disposition ${APM_SAAS_AGENT_URL} -O ${APM_AGENT_FILE_NAME}"
wget --no-check-certificate --content-disposition ${APM_SAAS_AGENT_URL} -O ${APM_AGENT_FILE_NAME}
ls -lart
echo "tar -xvf ${APM_AGENT_FILE_NAME} "
#tar -xvf ${APM_AGENT_FILE_NAME}
tar -xf ${APM_AGENT_FILE_NAME}
ls -lart
#
#
tz=`/bin/date --utc +%Y%m%d%H%M%S`
APM_AGENT_NAME=IM_JCS_NODE
JVM_BACKUP_LOCATION=/opt/CA/VirtualAppliance/custom/profiler/
JVM_FILE=/opt/CA/IdentityManager/ConnectorServer/data/jvm_options.conf
if [ -f ${JVM_FILE} ];then
cp -r -p ${JVM_FILE} ${JVM_BACKUP_LOCATION}/${tz}_jvm_options.conf
echo "-server -Xms1g -Xmx2g -Djava.awt.headless=true -Dcom.sun.net.ssl.enableECC=true -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Djava.net.preferIPv4Stack=true -Djava.security.egd=file:/dev/./urandom -javaagent:${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/Agent.jar -Dcom.wily.introscope.agentProfile=${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/core/config/IntroscopeAgent.profile -Dintroscope.agent.bootstrap.home=${APM_AGENT_HOME_FOLDER}/wily -Dintroscope.agent.bootstrap.release.version=2023.9 -Dintroscope.agent.bootstrap.version.loaded=2023.9 -Dcom.wily.introscope.agentManager.url.1=wss://${APM_URL} -Dcom.wily.introscope.agentManager.url.2=https://${APM_URL} -Dcom.wily.introscope.agentManager.credential=\"${APM_CREDENTIAL_TOKEN}\" -Dcom.wily.introscope.agent.agentName=${APM_AGENT_NAME} -XX:+PrintFlagsFinal -DLog4jContextSelector=org.apache.logging.log4j.core.selector.BasicContextSelector" > ${JCS_JVM_FILE}
echo "Start / Stop JCS"
echo "sudo systemctl stop im_jcs "
sudo systemctl stop im_jcs
echo "sudo systemctl start im_jcs "
sudo systemctl start im_jcs
fi
7. Example of JVM arguments for integration of APM SaaS JBoss Agent with Identity Suite IM/IG/IP JBoss/Wildfly instances
The bash shell script below covers the three (3) JBoss (Wildfly) instances on the vApp. Fortunately for us, the Broadcom/Symantec Product/Engineering team kept the same log4j jar and class file versions across all three (3) instances, so we were able to leverage variables for each. Adjust the JVM parameters as needed. Please note that, again, we did not embed any key/value pairs into the APM configuration file, as we wanted to automate this deployment independently of any new APM agent updates.
#!/bin/bash
#########################################################################
#
# Automate deployment of the APM SaaS JBOSS Agent with credentials & urls
# to the three JBoss/Wildfly instances
# - IdentityManager
# - IdentityGovernance
# - IdentityPortal
#
# Use variables for the JVM parameters to allow a similar configuration for
# all three (3) instances. Adjust if needed. Since all three (3) JBoss instances are on the
# same host, we override the default naming convention to clarify which instance is
# being monitored within the APM SaaS Dashboard
#
# ANA 10/2023
#
#########################################################################
echo ""
echo ""
echo "################################################################################################"
echo "# Ensure the download url has the APM JBoss Agent "
echo "################################################################################################"
APM_SAAS_AGENT_URL="https://apmgw.dxi-na1.saas.broadcom.com/acc/apm/acc/downloadpackage/XXX_SITE_ID_STRING_XXXX?format=archive&layout=bootstrap_preferred&packageDownloadSecurityToken=XXXX__DOWNLOAD_TOKEN_HERE"
APM_CREDENTIALS='XXXXX_LONG_CREDENTIAL_TOKEN_HERE__XXXXXX'
APM_URL='apmgw.dxi-na1.saas.broadcom.com'
APM_AGENT_HOME_FOLDER=/opt/CA/VirtualAppliance/custom/profiler/apm_jboss
APM_AGENT_FILE_NAME="APM_SaaS_JBOSS_Agent.tar"
mkdir -p ${APM_AGENT_HOME_FOLDER}
cd ${APM_AGENT_HOME_FOLDER}
pwd
ls -lart
echo "wget --no-check-certificate --content-disposition ${APM_SAAS_AGENT_URL} -O ${APM_AGENT_FILE_NAME}"
wget --no-check-certificate --content-disposition ${APM_SAAS_AGENT_URL} -O ${APM_AGENT_FILE_NAME}
echo "tar -xvf ${APM_AGENT_FILE_NAME} "
#tar -xvf ${APM_AGENT_FILE_NAME}
tar -xf ${APM_AGENT_FILE_NAME}
ls -lart
tz=`/bin/date --utc +%Y%m%d%H%M%S`
JBOSS_INSTANCE=IdentityManager
FILE_BACKUP_LOCATION=/opt/CA/VirtualAppliance/custom/${JBOSS_INSTANCE}
FILE_JVM_FILE=${FILE_BACKUP_LOCATION}/jvm-args.conf
if [ -f ${FILE_JVM_FILE} ]; then
echo "cp -r -p ${FILE_JVM_FILE} ${FILE_BACKUP_LOCATION}/${tz}_jvm-args.conf "
cp -r -p ${FILE_JVM_FILE} ${FILE_BACKUP_LOCATION}/${tz}_jvm-args.conf
echo "JAVA_OPTS=-Xms512m -Xmx2048m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UseCompressedOops -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -javaagent:${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/Agent.jar -Dcom.wily.introscope.agentProfile=${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/core/config/IntroscopeAgent.profile -Djboss.modules.system.pkgs=org.jboss.logmanager,org.jboss.byteman,com.wily,com.wily.* -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Xbootclasspath/p:\${JBOSS_HOME}/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.5.Final.jar:\${JBOSS_HOME}/modules/system/layers/base/org/wildfly/common/main/wildfly-common-1.4.0.Final.jar:\${JBOSS_HOME}/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.1.6.Final.jar -Dcom.wily.org.apache.commons.logging.Log=com.wily.org.apache.commons.logging.impl.NoOpLog -Dintroscope.agent.bootstrap.home=${APM_AGENT_HOME_FOLDER}/wily -Dintroscope.agent.bootstrap.release.version=2023.9 -Dintroscope.agent.bootstrap.version.loaded=2023.9 -Dcom.wily.introscope.agentManager.url.1=wss://${APM_URL} -Dcom.wily.introscope.agentManager.url.2=https://${APM_URL} -Dcom.wily.introscope.agentManager.credential=\"${APM_CREDENTIALS}\" -Dcom.wily.introscope.agent.agentName=${JBOSS_INSTANCE} -XX:+PrintFlagsFinal -DLog4jContextSelector=org.apache.logging.log4j.core.selector.BasicContextSelector" > ${FILE_JVM_FILE}
echo "sudo systemctl stop wildfly-idm "
sudo systemctl stop wildfly-idm
echo "sudo systemctl start wildfly-idm "
sudo systemctl start wildfly-idm
fi
JBOSS_INSTANCE=IdentityGovernance
FILE_BACKUP_LOCATION=/opt/CA/VirtualAppliance/custom/${JBOSS_INSTANCE}
FILE_JVM_FILE=${FILE_BACKUP_LOCATION}/jvm-args.conf
if [ -f ${FILE_JVM_FILE} ]; then
echo "cp -r -p ${FILE_JVM_FILE} ${FILE_BACKUP_LOCATION}/${tz}_jvm-args.conf "
cp -r -p ${FILE_JVM_FILE} ${FILE_BACKUP_LOCATION}/${tz}_jvm-args.conf
echo "JAVA_OPTS=-Xms512m -Xmx2048m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UseCompressedOops -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -javaagent:${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/Agent.jar -Dcom.wily.introscope.agentProfile=${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/core/config/IntroscopeAgent.profile -Djboss.modules.system.pkgs=org.jboss.logmanager,org.jboss.byteman,com.wily,com.wily.* -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Xbootclasspath/p:\${JBOSS_HOME}/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.5.Final.jar:\${JBOSS_HOME}/modules/system/layers/base/org/wildfly/common/main/wildfly-common-1.4.0.Final.jar:\${JBOSS_HOME}/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.1.6.Final.jar -Dcom.wily.org.apache.commons.logging.Log=com.wily.org.apache.commons.logging.impl.NoOpLog -Dintroscope.agent.bootstrap.home=${APM_AGENT_HOME_FOLDER}/wily -Dintroscope.agent.bootstrap.release.version=2023.9 -Dintroscope.agent.bootstrap.version.loaded=2023.9 -Dcom.wily.introscope.agentManager.url.1=wss://${APM_URL} -Dcom.wily.introscope.agentManager.url.2=https://${APM_URL} -Dcom.wily.introscope.agentManager.credential=\"${APM_CREDENTIALS}\" -Dcom.wily.introscope.agent.agentName=${JBOSS_INSTANCE} -XX:+PrintFlagsFinal -DLog4jContextSelector=org.apache.logging.log4j.core.selector.BasicContextSelector" > ${FILE_JVM_FILE}
echo "sudo systemctl stop wildfly-ig "
sudo systemctl stop wildfly-ig
echo "sudo systemctl start wildfly-ig "
sudo systemctl start wildfly-ig
fi
JBOSS_INSTANCE=IdentityPortal
FILE_BACKUP_LOCATION=/opt/CA/VirtualAppliance/custom/${JBOSS_INSTANCE}
FILE_JVM_FILE=${FILE_BACKUP_LOCATION}/jvm-args.conf
if [ -f ${FILE_JVM_FILE} ]; then
echo "cp -r -p ${FILE_JVM_FILE} ${FILE_BACKUP_LOCATION}/${tz}_jvm-args.conf "
cp -r -p ${FILE_JVM_FILE} ${FILE_BACKUP_LOCATION}/${tz}_jvm-args.conf
echo "JAVA_OPTS=-Xms512m -Xmx2048m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+UseCompressedOops -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -javaagent:${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/Agent.jar -Dcom.wily.introscope.agentProfile=${APM_AGENT_HOME_FOLDER}/wily/releases/2023.9/core/config/IntroscopeAgent.profile -Djboss.modules.system.pkgs=org.jboss.logmanager,org.jboss.byteman,com.wily,com.wily.* -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Xbootclasspath/p:\${JBOSS_HOME}/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.5.Final.jar:\${JBOSS_HOME}/modules/system/layers/base/org/wildfly/common/main/wildfly-common-1.4.0.Final.jar:\${JBOSS_HOME}/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.1.6.Final.jar -Dcom.wily.org.apache.commons.logging.Log=com.wily.org.apache.commons.logging.impl.NoOpLog -Dintroscope.agent.bootstrap.home=${APM_AGENT_HOME_FOLDER}/wily -Dintroscope.agent.bootstrap.release.version=2023.9 -Dintroscope.agent.bootstrap.version.loaded=2023.9 -Dcom.wily.introscope.agentManager.url.1=wss://${APM_URL} -Dcom.wily.introscope.agentManager.url.2=https://${APM_URL} -Dcom.wily.introscope.agentManager.credential=\"${APM_CREDENTIALS}\" -Dcom.wily.introscope.agent.agentName=${JBOSS_INSTANCE} -XX:+PrintFlagsFinal -DLog4jContextSelector=org.apache.logging.log4j.core.selector.BasicContextSelector" > ${FILE_JVM_FILE}
echo "sudo systemctl stop wildfly-portal "
sudo systemctl stop wildfly-portal
echo "sudo systemctl start wildfly-portal "
sudo systemctl start wildfly-portal
fi
8. Conclusion of APM tools integration on the Symantec Identity Suite Virtual Appliance with non-root Id (config/ec2-user)
We have been successful using APM tools and home-grown Java monitoring processes to improve the performance of the Symantec Identity Suite solution for peak use-cases. These processes should allow you to peek into the 'black box' of Java/JBoss processes to understand where bottlenecks may exist.
9. View sysedge.cf configuration for proposed change to isolate behavior to single host
View of the configuration file for sysedge.cf via cat sysedge.cf | grep -i -v -e "^$" -e "^#" -e "^template"
Token / Parameter | Value | Commentary
version | 6.0 |
sysedge_loglevel | fatal | Proposed change: Switch from the default log level of "info" to "fatal" to avoid bloat in the sysedge.log file, due to the embedded function in the sysedge binary that copies the configuration file to the /etc folder for the non-root ID. Switch to debug if there are any configuration challenges.
bind_address | 127.0.0.1 | Proposed change: Prevent external updates to the SNMP trap of SysEdge (allow only localhost) – sudo systemctl start sysedge.service. Set during installation. May be set via scripts for manual deployment.
default_port | 1691 | Set during installation. May be set via scripts for manual deployment.
no_proc_monitor | |
no_procgroup_monitor | |
privilege_separation_user | config OR ec2-user | Proposed change: Add this new required parameter to enforce non-root access for a local host account. Confirm ownership when stopping/starting the service via sysedgectl or sudo systemctl start sysedge.service. Confirm file ownership for PID & log files.
10. Prior notes / examples of integration/deployment of APM tools (Java/JBoss) on the Symantec Identity Suite Virtual Appliance with non-root Id (config/ec2-user)
We have been involved with the use of APM tools for quite some time and have contributed to the inclusion of these features into various solutions, including software appliances like the Symantec Identity Suite with non-root access.
Businesses have always sought ways to improve efficiency, scalability, and resilience. Cloud architecture and Kubernetes have revolutionized how organizations build, deploy, and manage applications. At its heart, Kubernetes is an open-source container orchestration platform, but its business value far surpasses simple orchestration. Among its many features, the capabilities of auto-scaling and self-healing stand out. Both capabilities ensure seamless user experiences and facilitate reduced operational costs and improved reliability.
VIP AuthHub brings next-generation authentication solutions on a cloud-native platform. One does not have to be in a managed cloud environment like those provided by Amazon, Microsoft, or Google to harness the power of Self-healing and auto-scaling for applications. Our on-prem cloud environment is built using an OpenShift Kubernetes implementation, where we showcase the auto-scaling and self-healing capabilities of the deployed VIP AuthHub solution. If you are in a hosted cloud environment, reaping the benefits of a cloud-managed Kubernetes environment is just the same.
In this demo we discuss self-healing and horizontal pod scaling (HPA), the building blocks used for autoscaling in a Kubernetes environment.
Scalable Authentication: Auth Hub dynamically auto-scales its capacity to handle unexpected peak loads, preserving the user experience.
Uninterrupted Service: Thanks to Kubernetes’ self-healing features, the VIP Auth Hub remains operational even if specific components face issues.
Implementing cloud architecture with Kubernetes optimizes resource utilization in response to real-time demands while minimizing service disruptions. Many, if not most, medium and large organizations have their own data centers. Investing in an on-premises Kubernetes deployment empowers businesses to tap into the transformative features of auto-scaling and self-healing.
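As a concrete illustration of the HPA building block discussed above, the sketch below creates an autoscaler against a deployment with kubectl; the deployment name vip-authhub and the thresholds are hypothetical placeholders, not values from our engagement.
# A minimal HPA sketch; "vip-authhub" is a hypothetical Deployment name in the current namespace
kubectl autoscale deployment vip-authhub --min=2 --max=10 --cpu-percent=75
# Watch the autoscaler react as load rises and falls
kubectl get hpa vip-authhub --watch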
In the intricate world of cybersecurity and identity management, evolving threats and vulnerabilities demand our undivided attention. When considering upgrading your Symantec Identity Suite Virtual Appliance, understanding the nuanced technological landscape, including the perks of Jitterentropy and the challenges associated with Java’s Bouncy Castle entropy, can make a world of difference.
The Technological Need:
Robust Randomness with Jitterentropy: Relying on the natural timing jitter of CPUs, Jitterentropy has emerged as a game-changing hardware random number generator (RNG). The latest renditions of the Symantec Identity Suite Virtual Appliance leverage this RNG, ensuring unparalleled randomness, making decoding by potential threats a herculean task.
Operational Efficiency: Upgrades tuned with contemporary features promise optimized performance. Coupled with Jitterentropy, the RNG processes are turbocharged, promising minimal downtime and an elevated user experience.
Challenges with Bouncy Castle Entropy in Java: Bouncy Castle, despite its vast utility in cryptographic operations in Java, has had its share of entropy-related issues. Some known problems include:
Predictability: Certain RNG implementations in Bouncy Castle have been found to be somewhat predictable, which could compromise security.
Seed Reuse: There have been instances where seeds were reused, which again poses security concerns.
Slow Entropy Accumulation: At times, the entropy collection is slower than expected, leading to potential operational delays. With security solutions the lack of entropy impacts scale and usability.
Business Justification for Rapid Response:
With the business landscape in perpetual flux, the right tech decisions can spell the difference between stagnation and growth:
Enhanced Security: Incorporating Linux OS with Jitterentropy is synonymous with state-of-the-art security. Such forward-thinking measures drastically curtail potential security breaches.
Cost Savings: Forward-looking upgrades, especially those that incorporate cutting-edge features like Jitterentropy, offer tangible long-term financial advantages. Fewer breaches, reduced system errors, and saved manual efforts contribute positively to the bottom line.
Staying Competitive: In an era of rapid technological advancements, integrating elements like Jitterentropy ensures you’re leading from the front.
Compliance and Regulatory Adherence: With cybersecurity standards constantly on the move, staying updated is non-negotiable. Evade potential legal issues and hefty fines by staying on top of these norms.
Customer Trust: By showcasing a commitment to data safety through advanced systems (and by addressing known entropy issues like those in Bouncy Castle), businesses can strengthen customer trust and foster long-term loyalty.
Validating Jitterentropy Integration in the Linux Kernel: A Comprehensive Guide
As the world of Linux continues to evolve, one exciting development is the incorporation of jitterentropy into the kernel. This robust hardware random number generator (RNG) enhances the quality of randomness, making our systems even more secure. If you’re keen on understanding, implementing, or validating this feature in your Linux setup, this guide is tailored just for you.
What is Jitterentropy?
Jitterentropy is an RNG based on the natural timing jitter that occurs in CPUs. In the realm of cybersecurity, RNGs are of paramount importance; they generate the random numbers pivotal for cryptographic operations. The less predictable these numbers are, the tougher it becomes for malicious actors to crack them.
Why is Jitterentropy Essential?
For systems relying on cryptographic functions, such as encryption, the RNG’s caliber can’t be overstated. Jitterentropy guarantees first-rate randomness, upping your system’s security game. https://www.chronox.de/jent.html
How to Validate Jitterentropy Integration:
Identify Your Kernel Version: Kick things off by determining your kernel version using the uname -r or uname -a command.
Is Jitterentropy Part of Your Kernel Configuration?: Deploy this simple grep command to figure out if jitterentropy is enabled in your kernel:
grep -HRin jitter /boot/config*
An output showing CONFIG_CRYPTO_JITTERENTROPY=y confirms that jitterentropy is enabled. The “y” here indicates that the feature is in-built in the kernel.
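As an additional check, kernels that register the jitterentropy driver also expose it via the standard /proc/crypto interface; a quick verification, assuming the driver is compiled in as shown above:
grep -B1 -A2 -i jitterentropy /proc/crypto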
Time-Driven Testing for Jitterentropy: By simulating multiple pulls from the entropy source, you can gauge how efficient jitterentropy is:
time for i in {1..1000}; do time dd if=/dev/random bs=1 count=16 2>/dev/null | base64; done
This command performs two functions:
It times each of the 1000 pulls from /dev/random, allowing you to measure the average time taken, basically emulating 1000 rapid password changes of 16 characters.
It provides an overall timing for 1000 pulls, letting you know the total duration for the entire operation. If your system remains responsive and completes the pulls swiftly, it's a strong indication that your entropy source is in prime working condition, which implies that any solution on the appliance has adequate entropy to service users and processes at scale.
The next command adds a counter so you can see that all 1000 iterations have passed. Note: if there is no entropy pump, this process will NOT succeed; the Linux OS entropy pool will be rapidly depleted and any solution on the host will be delayed. Ensure there is an entropy pump to keep the performance you need.
counter=1;MAX=1000;time while [ $counter -le $MAX ]; do echo "########## $counter ##########" ; time dd if=/dev/random bs=16 count=1 2> /dev/null | base64; counter=$(( $counter + 1 )); done;
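To observe whether the entropy pool is being replenished while either loop runs, watch the kernel's entropy counter from a second ssh session (standard Linux facilities, no extra tooling assumed):
# View the kernel's available entropy estimate, refreshed every second
watch -n 1 cat /proc/sys/kernel/random/entropy_avail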
Wrapping Up:
The integration of Jitterentropy in the Linux kernel underscores the open-source community’s relentless dedication to fortifying security. By understanding, testing, and leveraging it, you ensure that your system is bolstered against potential threats, always staying a step ahead in the cybersecurity arena. Keep exploring, stay updated, and most importantly, remain secure!
Review upgrade your Symantec Identity Suite to improve your performance for users and scale to millions of transactions.
For non-appliances or older Linux OS (Kernel release < 5.6):
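A minimal sketch of deploying a userspace entropy pump on such hosts, assuming a RHEL/CentOS-family OS where the haveged package is available from your configured repositories (rng-tools/rngd is a comparable alternative):
# Install and enable the haveged entropy daemon (adjust the package manager for your distro)
sudo yum install -y haveged
sudo systemctl enable --now haveged
# Confirm the pool is now being replenished
cat /proc/sys/kernel/random/entropy_avail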
While preparing to enable a feature within the Identity Suite Virtual Appliance for TLS encryption for the Provisioning Tier to send notification events, we noticed some challenges that we wish to clarify.
The Identity Suite Virtual Appliance has four (4) web services that use pre-built self-signed certificates when first deployed. Documentation is provided to change these certificates/key using aliases or soft-links.
One of the challenges we discovered is the Provisioning Tier may be using an older version of libcurl & OpenSSL that have constraints that need to be managed. These libraries are used during the web submission to the IME ETACALLBACK webservice. We will review the processes to capture these error messages and how to address them.
We will introduce the use of Let’s Encrypt wildcard certificates into the four (4) web services and the Provisioning Server’s ETACALLBACK use of a valid public root certificate.
The Apache HTTPD service is used for both a forward proxy (TCP 443) to the three (3) Wildfly Services and service for the vApp Management Console (TCP 10443). The Apache HTTPD service SSL certs use the path /etc/pki/tls/certs/localhost.crt for a self-signed certificate. A soft-link is used to redirect this to a location that the ‘config’ service ID has access to modify. The same is true for the private key.
A view of the Apache HTTPD SSL self-signed certificate and key.
The three (3) Wildfly services are deployed for the Identity Manager, Identity Governance and Identity Portal components. The configuration for TLS security is defined within the primary Wildfly configuration file of standalone.xml. The current configuration is already setup with the paths to PKCS12 keystore files of:
A view of the three (3) Wildfly PKCS12 keystore files and view of the self-signed cert/key with the pseudo hostname of the vApp host.
Provisioning Server process for TLS enablement for IME ETACALLBACK process.
Step 1. Ensure that the Provisioning Server is enabled to send data/notification events to the IME.
Step 2. Within the IME Management Console, there is a baseURL parameter. This string is sent down to the Provisioning Server upon restart of the IME, and appended to a list. This list is viewable and manageable within the Provisioning Manager UI under [System/Identity Manager Setup]. The URL string will be appended with the string ETACALLBACK/?env=identityEnv. Within this Provisioning Server, we can manage which URLs have priority in the list. This list is a failover list and not load-balancing. We have the opportunity to introduce an F5 or similar load balancer URL, but we should enable TLS security prior.
Step 3. Add the public root CA cert or CA chain certs to the following location: [System/Domain Configuration/Identity Manager Server/Trusted CA Bundle]. This PEM file may be placed in the Provisioning Server bin folder with no path, or may use a fully qualified path to the PEM file. Note: The Provisioning Server is using a version of openssl/libcurl that will report errors that can be managed with wildcard certificates. We will show the common errors in this blog entry.
Let's Encrypt offers a free service to issue wildcard certificates. We are fond of using their DNS challenge method to request a wildcard certificate.
sudo certbot certonly --manual --preferred-challenges dns -d *.aks.iam.anapartner.dev --register-unsafely-without-email
Let’s Encrypt will provide four (4) files to be used. [certN.pem, privkeyN.pem, chainN.pem, fullchainN.pem]
cert1.pem [The primary server side wildcard cert]
privkey1.pem [The primary server side private key associated with the wildcard cert]
chain1.pem [The intermediate chain certs that are needed to validate the cert1 cert]
fullchain1.pem [cert1.pem and chain1.pem concatenated together in the correct order]
NOTE: fullchain1.pem is the file you would typically use as the cert for a solution, so the solution also carries the intermediate CA chain certs for validation.
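Assuming certbot's default /etc/letsencrypt layout, these numbered files land in the archive folder, with stable symlinks under live (paths shown for the wildcard domain requested above):
sudo ls -l /etc/letsencrypt/archive/aks.iam.anapartner.dev/
sudo ls -l /etc/letsencrypt/live/aks.iam.anapartner.dev/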
Important Note: One of the root public certs was cross-signed by another root public cert that has since expired. Most solutions are able to manage this challenge, but the provisioning service ETACALLBACK struggles with an expired certificate; fortunately, there are replacements for this expired certificate, which we walk through below. Ref: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
Create a new CA chain PEM file for LE (Let's Encrypt) validation to use with the Provisioning Server.
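A minimal sketch of building and validating that replacement chain, assuming the self-signed ISRG Root X1 certificate published by Let's Encrypt; the output filename le-ca-chain.pem is our own convention:
# Download the non-cross-signed ISRG Root X1 and append it to the issued intermediate chain
curl -sO https://letsencrypt.org/certs/isrgrootx1.pem
cat chain1.pem isrgrootx1.pem > le-ca-chain.pem
# Confirm the server cert validates against the new chain with no expired links
openssl verify -CAfile le-ca-chain.pem cert1.pem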
Validate with browsers and view the HTTPS lock symbol to view the certificate
Test with an update to a Provisioning Global User’s attribute [Note: No need to sync to accounts]. Ensure that the Identity Manager Setup Log Level = DEBUG to monitor this submission with the Provisioning Server etanotifyXXXXXXX.log.
A view of the submission for updating the Global User's Description via the IMPS (IM Provisioning Server) etanotifyXXXXXXX.log. The configuration will be loaded using the URLs defined; then we can monitor for the submission of the update.
Finally, a view using the IME VST (View Submitted Tasks) for the ETACALLBACK process using the task Provisioning Modify User.
Common TLS errors seen with the Provisioning Server ETACALLBACK
Ensure that the configuration is enabled for debug log level, so we may view these errors and correct them:
[rc=77] occurs if the PEM file does not exist or is not in the correct path.
[rc=51] occurs if the URL defined does not match the exact server-side certificate (a good reason to use a wildcard certificate, or to adjust your URL FQDN to match the cert subject CN=XXXX value).
[rc=60] occurs if the remote web service is using a self-signed certificate, or if any certificate in the chain (including the public root CA cert) has expired.
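Since ETACALLBACK validates TLS via libcurl, you can emulate its behavior with the curl CLI, whose exit codes map to the same rc values listed above; the IME hostname below is a hypothetical placeholder:
curl -sv --cacert le-ca-chain.pem "https://ime.example.com/iam/im/ETACALLBACK/?env=identityEnv" -o /dev/null
echo "curl rc=$?"
# rc=0 indicates clean validation; rc=51/60/77 map to the errors described above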
Other Error messages (curl)
If you see an error message with Apache HTTPD (TCP 443) with curl about “curl: (60) Peer certificate cannot be authenticated with known CA certificates”, please ignore this, as the vApp does not have the “ca-bundle.crt” configuration enabled. See RedHat note: https://access.redhat.com/solutions/523823
The recent DNS challenges for a large organization that impacted their worldwide customers bring to mind a project we completed this year, a global password reset redundancy solution.
We worked with a client who desired to manage unplanned WAN outages to their five (5) data centers for three (3) independent MS Active Directory Domains with integration to various on-prem applications/endpoints. The business requirement was for self-service password sync, where the users' password change process is initiated/managed by the two (2) different MS Active Directory Password Policies.
Without the WAN outage requirement, any IAM/IAG solution may manage this request within a single data center. A reverse password sync agent process is enabled on all writable MS Active Directory domain controllers (DC). All the world-wide MS ADS domain controllers would communicate to the single data center to validate and resend this password change to all of the users’ managed endpoint/application accounts, e.g. SAP, Mainframe (ACF2/RACF/TSS), AS/400, Unix, SaaS, Database, LDAP, Certs, etc.
With the WAN outage requirement, however, a queue or components must be deployed/enabled at each global data center, so that password changes are allowed to sync locally to avoid work-stoppage and async-queued to avoid out-of-sync password to the other endpoint/applications that may be in other data centers.
We were able to work with the client to determine that their current IAM/IAG solution would have the means to meet this requirement, but we wished to confirm no issues with WAN latency and the async process. The WAN latency was measured at less than 300 msec between remote data centers that were opposite globally. The WAN latency measured is the global distance and any intermediate devices that the network traffic may pass through.
To review the solution’s ability to meet the latency issues, we introduced a test environment to emulate the global latency for deployment use-cases, change password use-cases, and standard CrUD use-cases. There is a feature within VMWare Workstation, that allows emulation of degraded network traffic. This process was a very useful planning/validation tool to lower rollback risk during production deployment.
VMWare Workstation Network Adapter Advance Settings for WAN latency emulation
The solution used for the Global Password Reset solution was the Symantec Identity Suite Virtual Appliance r14.3cp2. This solution has many tiers, where select components may be globally deployed and others may not.
We avoided any changes to the J2EE tier (Wildfly) or Database for our architecture, as these components are not supported for WAN latency by the Vendor. Note: We have worked with other clients that have deployments at two (2) remote data centers within 1000 km, which have reported minimal challenges for these tiers.
We focused our efforts on the Provisioning Tier and Connector Tier. The Provisioning Tier consists of the Provisioning Server and Provisioning Directory.
The Provisioning Server has no shared knowledge with other Provisioning Servers. The Provisioning Directory (Symantec Directory) is where the provisioning data may be set up in a multi-write peer model. Symantec Directory is a proper X.500 directory with high redundancy and is designed to manage WAN latency between remote data centers and recovery after an outage. See example provided below.
The Connector Tier consists of the Java Connector Server and C++ Connector Server, which may be deployed on MS Windows as an independent component. There is no shared knowledge between Connector Servers, which works in our favor.
Requirement:
Three (3) independent MS Active Directory domains in five (5) remote data centers need to allow self-service password change and local password sync during a WAN outage. Password changes are driven by MS ADS Password Policies (every N days). The IME Password Policy for the IAG/IAM solution is not enabled, IME authentication is redirected to an ADS domain, and the IMPS IM Callback Feature is disabled.
Below is an image that outlines the topology for five (5) global data centers in AMER, EMEA, and APAC.
The flow diagram below captures the password change use-case (self-service or delegated), the expected data flow to the user’s managed endpoints/applications, and the eventual peer sync of the MS Active Directory domain local to the user.
Observation(s):
The standalone solution of Symantec IAG/IAM has no expected challenges with configurations, but the Virtual Appliance offers pre-canned configurations that may impact a WAN deployment.
During this project, we identified three (3) challenges using the virtual appliance.
Two (2) items needed the assistance of the Broadcom Support and Engineering teams. They were able to work with us to address deployment configuration challenges with the "check_cluster_clock_sync -v" process, which incorrectly increments time delays between servers instead of resetting the value to zero between tests.
Why is this important? The "check_cluster_clock_sync" alias is used during auto-deployment of vApp nodes. If the time reported between servers is > 15 seconds, then replication may fail. This time-check issue was addressed with a hotfix; after the hotfix was deployed, all clock differences were resolved.
The second challenge was a deployment challenge of the IMPS component for its embedded "registry files/folders". The prior embedded copy process was observed to be using standard "scp". With WAN latency, the scp copy operation may take more than 30 seconds. Our testing with the Virtual Appliance showed that a simple copy would take over two (2) minutes for multiple small files. After reviewing with CA support/engineering, they provided an updated copy process using "rsync" that speeds up copy performance by >100x. Before this update, the impact was that provisioning tier deployment would fail and a partial rollback would occur.
The last challenge we identified was using the Symantec Directory’s embedded features to manage WAN latency via multi-write HUB groups. The Virtual Appliance cannot automatically manage this feature when enabled in the knowledge files of the provisioning data DSAs. Symantec Directory will fail to start after auto-deployment.
Fortunately, on the Virtual appliance, we have full access to the ‘dsa’ service ID and can modify these knowledge files before/after deployment. Suppose we wish to roll back or add a new Provisioning Server Virtual Appliance. In that case, we must disable the multi-write HUB group configuration temporarily, e.g. comment out the configuration parameter and re-init the DATA DSAs.
Six (6) Steps for Global Password Reset Solution Deployment
We were able to refine our list of steps for deployment using pre-built knowledge files, deploying the vApp nodes as blank slates with the base components of Provisioning Server (PS) and Provisioning Directory, plus a remote MS Windows server for the Connector Server (JCS/CCS).
Step 1: Update the Symantec Directory DATA DSA's knowledge configuration files to use the multiple group HUB model. Note that the multi-write group configuration is enabled within the DATA DSA's *.dxc files. One Directory server in each data center will be defined as a "HUB".
To assist this configuration effort, we leveraged a series of bash shell scripts that could be pasted into multiple putty/ssh sessions on each vApp to replace the "HUB" placeholder string with a "sed" command, as sketched below.
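A minimal sketch of that approach, assuming the DATA DSA knowledge files carry a "HUB" placeholder token; the knowledge path (relative to $DXHOME) and the DSA name are hypothetical and must match your environment:
# Run as the 'dsa' service ID on each vApp node
KNOWLEDGE_DIR=$DXHOME/config/knowledge   # hypothetical location of the *.dxc knowledge files
HUB_DSA_NAME=CA-PROV-DATA-AMER1          # hypothetical DSA name for this data center's HUB
sed -i "s|HUB|${HUB_DSA_NAME}|g" ${KNOWLEDGE_DIR}/*.dxc
grep -Hn "${HUB_DSA_NAME}" ${KNOWLEDGE_DIR}/*.dxc   # confirm the replacement took effect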
After the HUB model is enabled (stop/start the DATA DSAs), confirm that WAN latency poses no challenge to the Symantec Directory sync processes. By monitoring the Symantec Directory logs during replication, we can see sync operations with WAN latency captured as delays > 1 msec between data centers AMER1 and APAC1.
Step 2: Update IMPS configurations to avoid delays with Global Password Reset solution.
Note for this architecture, we do not use external IME Password Policies. We ensure that each AD endpoint has the checkbox enabled for “Password synchronization agent is installed” & each Global User (GU) has “Enable Password Synchronization Agent” checkbox enabled to prevent data looping. To ensure this GU attribute is always enabled, we updated an attribute under “Create Users Default Attributes”.
Step 3a: Update the Connector Tier (CCS Component)
Ensure that the MS Windows Environmental variables for the CCS connector are defined for Failover (ADS_FAILOVER) and Retry (ADS_RETRY).
Step 3b: Update the CCS DNS knowledge file of ADS DCs hostnames.
Important Note: Avoid using the refresh feature "Refresh DC List" within the IMPS GUI for the ADS Endpoint. If this feature is used, a "merge" will be processed from the local CCS DNS file contents and what is defined within the IMPS GUI refresh process. If we wish to manage the redirection to local MS ADS Domain Controllers, we need to control this behavior. If the refresh does occur, we can clean the extra entries out of the Symantec Directory. The only negative aspect is that a local password change may attempt to communicate with one of the remote MS ADS Domain Controllers outside the local data center. During a WAN outage, a user would notice a delay during the password change event while the CCS connector timed out the connection until it connected to a local MS ADS DC.
Step 3c: CCS ADS Failover
If using SSL over TCP 636 confirm the ADS Domain Root Certificate is deployed to the MS Windows Server where the CCS service is deployed. If using SASL over TCP 389 (if available), then no additional effort is required.
If using SSL over TCP 636, use the MS tool certlm.msc to export the public root CA Certificate for this ADS Domain. Export to base64 format for import to the MS Windows host (if not already part of the ADS Domain) with the same MS tool certlm.msc.
Step 4a: Update the Connector Tier for the JCS component.
Add the stabilization parameter “maxWait” to the JCS/CCS configuration file. Recommend 10-30 seconds.
Step 4b: Update JCS registration to the IMPS Tier
You may use the Virtual Appliance Console, but this has a delay when pulling the list of any JCS connector that may be down at the time of the check/submission. If we use the Connector Xpress UI, we can accomplish the same process much faster, with additional flexibility for routing rules to the exact MS ADS Endpoints in the local data center.
Step 4c: Observe the IMPS routing to JCS via etatrans log during any transaction.
If any JCS service is unavailable (TCP 20411), then the routing rules process will report a value of 999.00, instead of a low value of 0.00-1.00.
Step 5: Update the Remote Password Change Agent (DLL) on MS ADS Domain Controllers (writable)
Step 6a: Validation of Self-Service Password Change to selected MS ADS Domain Controller.
Using various MS Active Directory processes, we can emulate a delegated or self-service password change early during the configuration cycle, to confirm deployment is correct. The below example uses MS Powershell to select a writable MS ADS Domain Controller to update a user’s password. We can then monitor the logs at all tiers for completion of this password change event.
A view of the password change event from the Reverse Password Sync Agent log file on the exact MS Domain Controller.
Step 6b: Validation of password change event via CCS ADS Log.
Step 6c: Validation of password change event via IMPS etatrans log
Note: The below screenshot showcases an alias/function to assist with monitoring the etatrans logs on the Virtual Appliance.
The next screenshot showcases using ldapsearch to check the before/after timestamps of the password change event within the MS Active Directory Domain.
We hope these notes are of some value to your business and projects.
Appendix
Using the MS Windows Server for CCS Server
Get current status of AD account on select DC server before Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with current password.)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password05" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Change AD account's password via Powershell:
PowerShell Example:
Set-ADAccountPassword -Identity "idmpwtest" -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "Password06" -Force) -Server dc2016.exchange.lab
Get current status of AD account on select DC server after Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with NEW password)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password06" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Using the Provisioning Server for password change event
Get current status of AD account on select DC server before Password Change:
LDAPSearch Example: (From IMPS server - as user with current password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password05 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
Change AD account's password via ldapmodify & base64 conversion process:
LDAPModify Example:
BASE64PWD=`echo -n '"Password06"' | iconv -f utf8 -t utf16le | base64 -w 0`
ADSHOST='192.168.242.154'
ADSUSERDN='CN=Administrator,CN=Users,DC=exchange2012,DC=lab'
ADSPWD='Password01!'
ldapmodify -v -a -H ldaps://$ADSHOST:636 -D "$ADSUSERDN" -w "$ADSPWD" << EOF
dn: CN=idmpwtest,OU=People,dc=exchange2012,dc=lab
changetype: modify
replace: unicodePwd
unicodePwd::$BASE64PWD
EOF
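To sanity-check the base64 conversion before submitting the ldapmodify above, you can round-trip the encoded value with the same tools:
echo -n "$BASE64PWD" | base64 -d | iconv -f utf16le -t utf8
# Expected output: "Password06" (including the surrounding double quotes required by unicodePwd)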
Get current status of AD account on select DC server after Password Change:
LDAPSearch Example: (From IMPS server - with user's account and new password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password06 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged