The Symantec (CA/Broadcom) Identity Portal is widely used for managing IAM workflows with customizable forms, tasks, and business logic. This tool allows its business logic to be exported from the management console.
However, a major challenge exists when migrating or analyzing environments such as Dev → Test → Prod, and this effort can be difficult when working with the exported Portal files. Although configuration migration tools are available, reviewing and verifying changes can be hard. Portal exports are delivered as a single compressed JSON one-liner, making it hard to identify meaningful changes ("deltas") without a large manual effort.
Challenge 1: Single-Line JSON Exports from Identity Portal
The example above has over 88K characters in a single line. Try searching within that string to find the object you wish to change or update.
Identity Portal's export format is a flat, one-line JSON string, even if the export contains hundreds of forms, layout structures, and JavaScript blocks.
Migration/Analysis Risks
Impossible to visually scan or diff exports.
Nested structures like layout, formProps, and handlers are escaped strings, sometimes double-encoded.
Hidden differences can result in subtle bugs between versions or environments.
A Solution
We created a series of PowerShell scripts that leverage AI to select the best key-value pairs to sort on, producing output that is human-readable and searchable and that reduces the complexity and effort of the migration process. We can now isolate minor delta changes that would otherwise remain hidden until a use case was exercised later in the migration effort, requiring additional rework. (A minimal command-line sketch follows the lists below.) The scripts:
Convert the one-liner export into pretty-formatted, human-readable JSON.
Detect and decode deeply embedded or escaped JSON strings, especially within layout or formProps.
Extract each form's business logic and layout separately.
These outputs allow us to:
Open and analyze the data in Notepad++, with clean indentation and structure.
Use WinMerge or Beyond Compare to easily spot deltas between environments or versioned exports.
Track historical changes over time by comparing daily/weekly snapshots.
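As a minimal illustration of the pretty-print and diff workflow (shown here with jq on a Linux/WSL host rather than the PowerShell scripts themselves; the export file names are placeholders):
# Pretty-print each one-line export with sorted keys for a stable, diff-friendly layout
jq -S . dev-portal-export.json  > dev-portal-export.pretty.json
jq -S . test-portal-export.json > test-portal-export.pretty.json
# Spot the deltas between environments
diff -u dev-portal-export.pretty.json test-portal-export.pretty.json   # or open both files in WinMerge / Beyond Compare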
Challenge 2: Embedded JavaScript Inside Portal Forms
Identity Portal forms often include JavaScript logic directly embedded in the form definition (onLoad, onChange, onSubmit).
Migration Risks
JS logic is not separated from the data model or UI.
Inconsistent formatting or legacy syntax can cause scripts to silently fail.
Broken logic might not surface until after production deployment.
Suggested Solutions
Use PowerShell to extract the JS blocks per form and store them as external .js.txt files (see the sketch after this list).
Identify reused code patterns that should be modularized.
Create regression test cases for logic-heavy forms.
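Building on the first suggestion above (extracting JS blocks per form), a hedged jq sketch is shown below; the property names (forms, name, handlers) are assumptions about the export structure and must be adjusted to match your Portal version:
# For each form, write its handler/business-logic block to an external .js.txt file (property names are assumptions)
for form in $(jq -r '.forms[].name' portal-export.pretty.json); do
  jq -r --arg f "$form" '.forms[] | select(.name == $f) | .handlers' portal-export.pretty.json > "${form}.js.txt"
done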
Challenge 3: Form Layouts with Escaped JSON Structures
The layout field in each form is often a stringified JSON object, sometimes double or triple-escaped.
Migration Risks
Malformed layout strings crash the form UI.
Even minor layout changes (like label order) are hard to detect.
Suggested Solutions
Extract and pretty-print each layout block to .layout.json files (a jq-based decoding sketch follows this list).
Please note: while the output is pretty-printed, it is not quite valid JSON because of the remaining escape sequences. Use these exported files as searchable research material to help isolate deltas to be corrected during the migration effort.
Use WinMerge or Notepad++ for visual diffs.
Validate control-to-field binding consistency.
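For the escaped layout strings mentioned above, jq's fromjson filter peels off one level of string-encoding per pass; a hedged sketch (the layout property name and the nesting depth are assumptions about your export) looks like this:
# One fromjson per escaping level: double-escaped layouts need two passes, triple-escaped need three
jq -r '.layout | fromjson'            form-export.json > form.layout.json
jq -r '.layout | fromjson | fromjson' form-export.json > form.layout.decoded.json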
Using our understanding of the Identity Portal format for the 'layout' property, we were able to identify methods, with the help of AI, to manage the double- or triple-escaped characters that were troublesome to export consistently. Our service engagements now incorporate greater use of AI and associated APIs to support migration efforts and process modernization, with the goal of minimizing business risk for our clients and our organization.
Challenge 4: Java Plugins with Multiple Classes
Many Portal instances rely on custom Java plugins with dozens of classes, Spring beans, and services.
Migration Risks
Portal API changes break plugins.
Lack of modularity or documentation for the custom plugins.
Missing source code for compiled custom plugins.
Difficult to test or rebuild.
Suggested Solutions
In the absence of custom source code, decompile the plugins using jd-gui (Java decompilation for plugin reverse engineering).
Recommendations for Future-Proofing
Store layouts and handlers in Git.
Modularize plugin code.
Version control form definitions.
Automate validation tests in CI or staging.
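For the last item, even a small syntax gate adds value; a minimal hedged sketch (the directory layout is an assumption):
# Fail the pipeline if any exported artifact is not parseable JSON
for f in exports/*.json forms/*.layout.json; do
  jq empty "$f" || { echo "Invalid JSON: $f"; exit 1; }
done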
Conclusion
Migrating Identity Portal environments requires more than copy-pasting exports. In the absence of proper implementation documentation around customizations, it may require reverse engineering, decoding, and diffing of deeply nested structures.
By extracting clean, readable artifacts and comparing across environments, teams will gain visibility, traceability, and confidence in their migration efforts.
Review our GitHub collection of the scripts mentioned above. Please reach out if you would like assistance with your migration processes and challenges. We can now progress toward automating the business logic migration from one environment to the next.
Custom User Data Directories and Advanced DNS/TLS Management
In the world of web development, testing, or any activity requiring precise browser session handling, juggling multiple configurations can quickly become overwhelming. Fortunately, modern browsers like Google Chrome offer powerful features that, when combined with a bit of command-line magic, can make your life significantly easier. Let's dive into the usefulness of having multiple incognito windows using different user-data-dirs and managing TLS certificates with dedicated DNS mapping.
Why Multiple Incognito Windows?
Incognito mode is a great way to open a clean browser session without carrying over cookies, cache, or other data from your primary browsing experience. However, opening multiple incognito windows in a standard configuration doesn't isolate sessions; they share the same incognito context. This is where the --user-data-dir flag comes in.
By specifying a unique --user-data-dir for each incognito session, you're effectively sandboxing your browser profiles. This is particularly useful for:
Testing Multi-User Applications: Simulate multiple users interacting with your web application without needing separate devices or browsers.
It is very useful to have a regular user accessing an application while simultaneously using an admin ID with the same application on the same MS Windows host.
Isolating Session Data: Prevent session contamination when testing login states, cookies, or caching behavior.
High value – we can effectively have multiple sessions to the SAME application. This is a very important benefit.
Debugging Environments: Configure distinct profiles for staging, production, or development environments.
This is especially useful when working with IP addresses that are not in the production DNS and we do not have access to the MS Windows hosts file to create aliases for the IP addresses during testing.
See the examples below for three (3) commands with different user data directories that ensure these browser sessions do NOT share a session or session cookies. We are no longer limited to a single incognito session!
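A hedged example of the three (3) shortcut targets (adjust the chrome.exe path, the C:\Temp folders, and the application URL to your environment; app.example.com is a placeholder):
"C:\Program Files\Google\Chrome\Application\chrome.exe" --incognito --user-data-dir=C:\Temp\chrome-user1 https://app.example.com
"C:\Program Files\Google\Chrome\Application\chrome.exe" --incognito --user-data-dir=C:\Temp\chrome-user2 https://app.example.com
"C:\Program Files\Google\Chrome\Application\chrome.exe" --incognito --user-data-dir=C:\Temp\chrome-admin https://app.example.com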
Each session launched with a unique --user-data-dir path acts as a fully independent browser instance, complete with its own storage and settings. Choose a folder like C:\Temp where the Chrome browser can create the data folder by itself and is not impacted by restrictive folder permissions.
While we could run these command lines at will, it is more beneficial to store each string in an MS Windows shortcut; we can easily make as many incognito sessions as we wish, each attached to a unique MS Windows shortcut to the chrome.exe browser binary.
Managing TLS Certificates and DNS Mapping
When dealing with local development environments, you often need to work with IP addresses and TLS certificates. If you have access to your local MS Windows hosts file, you can directly edit it to add the IP address with the hostname/FQDN that matches the CN (subject) or SANs of the TLS certificate.
However, if you do NOT have access to the MS Windows hosts file, you are usually stuck using the IP address as part of the URL and dealing with TLS certificate warning messages. Chrome, by default, is strict about TLS and DNS, which can lead to frustrating "Your connection is not private" warnings. However, with a few additional flags, you can streamline this process.
DNS Mapping with Host Resolver Rules
The --host-resolver-rules flag allows you to map specific hostnames to IP addresses directly from the command line, bypassing the system's DNS configuration. This is incredibly useful for testing domains that don't have publicly resolvable DNS records or for redirecting traffic to a specific server in a development environment.
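A hedged example (the hostname pattern and IP address follow the description in the next sentence; portal.iam-dev.us is a placeholder hostname):
"C:\Program Files\Google\Chrome\Application\chrome.exe" --user-data-dir=C:\Temp\chrome-dev --host-resolver-rules="MAP *.iam-dev.us 192.168.2.102" https://portal.iam-dev.us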
This maps any subdomain of iam-dev.us to the IP address 192.168.2.102 without needing to modify your system's hosts file. The additional benefit is that we can map the hostname to match the CN (subject) of the TLS certificate, so we do NOT see any TLS certificate errors.
We now have NO issue viewing a certificate that matches the hostname we provided in the shortcut to the Chrome browser.
Below is a live example testing with the Symantec Identity Portal (IGA) solution, where we have two (2) separate incognito sessions with different --user-data-dir values to ensure we have isolated the sessions. We have also used --host-resolver-rules with an IP address that is not in our DNS; this IP address is mapped in the Chrome shortcut.
Note: A warning message will show when we use the --host-resolver-rules flag, to ensure we are not being "hijacked" by someone else. Since we are making this change ourselves, you may click the "X" and close this warning message.
While the above works for an environment with a proper TLS certificate, we need an alternative for environments using self-signed certificates, where Chrome's strict security measures can get in the way. Using flags like --ignore-certificate-errors (only in safe, controlled environments!) or configuring your system to trust these certificates can help.
Combined with the --user-data-dir option, you can even preload trusted certificates into specific profiles for a seamless workflow.
Running an isolated session with its own user data directory.
Redirecting specific DNS queries to a development server.
Accessing a self-signed certificate-protected URL without unnecessary interruptions.
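Putting the three items above together, a single hedged shortcut target might look like the following (paths, hostnames, and the IP address are placeholders, and --ignore-certificate-errors should only be used in controlled test environments):
"C:\Program Files\Google\Chrome\Application\chrome.exe" --incognito --user-data-dir=C:\Temp\chrome-dev --host-resolver-rules="MAP *.iam-dev.us 192.168.2.102" --ignore-certificate-errors https://portal.iam-dev.us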
Conclusion
By leveraging Chrome's --user-data-dir, --host-resolver-rules, and other advanced flags, you can tailor your browser environment to handle complex workflows with ease. Whether you're a developer, tester, or IT specialist, these tools offer a robust way to manage multiple configurations, ensuring efficient and error-free operations.
So next time you're troubleshooting or testing, remember: the right Chrome flags can be game-changing. Experiment with these options and streamline your workflow today!
Bonus: View of active switches
Use chrome://version to view the active command line switches in use.
On a typical Linux host, rolling back a configuration in WildFly can be as simple as copying a backup of the configuration XML file back into place. However, working within the constraints of a secured virtual appliance (vApp) presents a unique challenge: the primary service ID often lacks write access to critical files under the WildFly deployment.
When faced with this limitation, administrators may feel stuck. What options do we have? Thankfully, WildFly's jboss-cli.sh process provides a lifeline for configuration management, allowing us to take snapshots and reload configurations efficiently. See the bottom of this blog if you need to create a user for jboss-cli.sh usage.
Why Snapshots are necessary for your sanity
WildFly snapshots capture the server’s current configuration, creating a safety net for experimentation and troubleshooting. They allow you to test changes, debug issues, or introduce new features with confidence, knowing you can quickly restore the server to a previous state.
In this guide, we'll explore a step-by-step process to test and restore configurations using WildFly snapshots on the Symantec IGA Virtual Appliance.
Step-by-Step: Testing and Restoring Configurations
Step 1: Stamp and Backup the Current Configuration
First, you may optionally add a unique custom attribute to the current `standalone.xml` (ca-standalone-full-ha.xml) configuration if you don't already have a delta to compare. This new custom attribute acts as a marker, helping track configuration changes. After updating the configuration, take a snapshot.
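A hedged sketch of this step with jboss-cli.sh (the system-property name "migration-marker" is a hypothetical example; the path and credentials match the examples later in this post):
# Add a marker attribute to the running configuration (hypothetical property name)
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:add(value=before-change)"
# Take a snapshot of the current configuration
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command=":take-snapshot"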
Step 2: Simulate a Configuration Change
Simulate a change by updating the custom attribute. Validate the update with a read query to confirm the changes are applied. To be safe, we will remove the attribute and re-add it with a new string that is different.
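For example (same hypothetical marker property as above):
# Remove and re-add the marker with a different value, then read it back to confirm
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:remove"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:add(value=after-change)"
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="/system-property=migration-marker:read-attribute(name=value)"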
Step 3: List Available Snapshots
List all available snapshots to identify the correct rollback point. You can use the `:list-snapshots` command to query snapshots and verify files in the snapshot directory.
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --timeout=90000 --command=":list-snapshots"
ls -l /opt/CA/wildfly-idm/standalone/configuration/standalone_xml_history/snapshot/
Step 4: Reload from Snapshot
Once you've identified the appropriate snapshot, use the `reload` command to roll back the configuration. Monitor the process to ensure it completes successfully, then verify the configuration.
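A hedged example of the rollback (the snapshot file name is a placeholder taken from the :list-snapshots output; verify the reload syntax against your WildFly version):
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --user=jboss-admin --password=Password01! --command="reload --server-config=20231206-123456789ca-standalone-full-ha.xml"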
Adding a WildFly Admin User for Snapshot Management
Before you can execute commands through WildFly’s `jboss-cli.sh`, you’ll need to ensure you have a properly configured admin user. If an admin user does not already exist, you can create one with the following command:
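For example, from the WildFly bin directory on the vApp (path and credentials match the earlier examples):
/opt/CA/wildfly-idm/bin/add-user.sh -m -u jboss-admin -p Password01! -g SuperUser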
- **`-m`**: Indicates the user is for management purposes.
- **`-u jboss-admin`**: Specifies the username (`jboss-admin` in this case).
- **`-p Password01!`**: Sets the password for the user.
- **`-g SuperUser`**: Assigns the user to the `SuperUser` group, granting necessary permissions for snapshot and configuration management.
You can have as many jboss-cli.sh service IDs as you need.
Please note, this WildFly management service ID is not the same as the WildFly application service ID that is needed for /iam/im/logging_v2.jsp access, which requires the -a switch and the group IAMAdmin.
If your logging_v2.jsp page is not displaying correctly, there is a simple update to resolve this challenge. Add the below string to your /opt/CA/VirtualAppliance/custom/IdentityManager/jvm-args.conf file.
In the 1950s, an innovation revolutionized the transport industry by dramatically reducing the cost of shipping and making global trade more efficient: a large standardized shipping box that could be easily transported on trucks, trains, and ships. This same concept has been applied to software.
A “container” is a lightweight, standalone, executable package of software that includes everything needed to run a piece of software: code, runtime, system tools, system libraries, and settings.
This concept gets us all out of the "install" game and into using a solution quicker. Containers provide a low-risk environment to experiment and learn new skills. Whether it's web development, app creation, or even dabbling in AI and machine learning, containers offer a sandbox-like environment to test and refine your skills without affecting your main operating system.
If you wish to excel at understanding containers, we recommend you 'sandbox' and 'play' with them often. The same concepts that you learn with basic container deployment will greatly assist in growing your knowledge of Kubernetes, Red Hat OpenShift, Broadcom (VMware Tanzu), and other cloud providers' Kubernetes platforms (Google GKE, Amazon EKS, Microsoft AKS).
There is a plethora of containers and solutions to choose from for your education. We have set up several examples of using select containers with bash shell scripts that showcase different features using the podman binary. The examples include Jasper Report Server, Nessus, PostgreSQL (with pgAdmin), Splunk, BusyBox, qBittorrent, Kiwix, and TLS notes.
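As a small taste of the approach used in those scripts, a minimal podman sketch (the image tag and password are placeholders) that starts a PostgreSQL container and tails its logs:
podman run -d --name postgres -e POSTGRES_PASSWORD=Password01 -p 5432:5432 docker.io/library/postgres:16
podman ps
podman logs -f postgres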
We hope these examples will showcase the value of containers and help deepen your understanding of them. Please reach out if you wish to learn more or would like us to assist with your Kubernetes needs.
When dealing with legacy, inherited, vendor, or custom bash shell scripts, especially those involving multiple functions and sourcing other child scripts, it’s important to track the flow of files being read and updated. This is particularly relevant if these scripts are invoked by systemd.
Challenge: Capturing the entire data flow in complex scripts can be laborious.
Solution: We’ve explored various methods, including the use of default file descriptors and redirection, as outlined in the bash manual and other resources. Among these, a notable feature is available for bash versions greater than 4.1, which significantly aids in debugging.
Bash Version Check: To utilize these debugging features, first confirm your bash version. You can do this using one of the following three methods:
echo "${BASH_VERSION}"
bash --version
Keyboard method in a shell: Ctrl + x then Ctrl + v
Leveraging BASH Variables for Enhanced Script Debugging
Key Variables: In the realm of Bash scripting, two significant variables stand out for effective debugging:
{FD}: This variable plays a crucial role in handling file descriptors within your scripts.
{BASH_XTRACEFD}: Specifically designed for tracing purposes, it aids in redirecting the output of set -x.
Finding More Information: For a comprehensive understanding of these variables, refer to the Bash man page. You can find detailed explanations and usage guidelines under the REDIRECTION section, second paragraph. This section provides insights into how these variables interact with the file system and how they can be optimally used in your scripts.
Optimizing Debugging in Bash Scripts Using Custom File Descriptors
Defining the {FD} Variable:
Purpose: {FD} is a user-defined variable that holds the number of a free file descriptor. This approach avoids reliance on the default file descriptors 1 (standard output) and 2 (standard error).
Advantage: By not using the defaults, we ensure that the normal functions of the scripts, especially those dependent on standard output/error, are not inadvertently altered.
Relation to System Limits: The value of {FD} correlates with the system’s limit on open file descriptors, which can be checked using ulimit -n.
Integrating {BASH_XTRACEFD}:
Function: This pre-defined variable links the integer provided by {FD} with the set -x tracing process in Bash.
Benefit: This linkage allows for more effective and targeted debugging.
Combining Variables for Enhanced Debugging:
By tying {FD}, {BASH_XTRACEFD}, and the internal tracing process together, we create a robust method for debugging and tracing in Bash scripts.
Allocate a File Descriptor:
Run the following line [ exec {FD}>/tmp/file ; echo ${FD} ] multiple times to allocate a file descriptor integer.
If not within a Bash shell script, remember to close the open file descriptor afterward.
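A quick interactive illustration of the allocation and close steps described above:
exec {FD}>/tmp/file ; echo ${FD}   # bash assigns the next free descriptor (often 10 or higher) to FD
exec {FD}>&-                       # close the descriptor when finished (interactive shells only)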
When observing the file descriptors in a Bash environment, especially through the path /proc/self/fd, certain descriptors are typically reserved.
Reserved Descriptors:
0: Standard Input (stdin)
1: Standard Output (stdout)
2: Standard Error (stderr)
3: In this specific context, descriptor 3 is the handle opened by the ls command itself to read the /proc/self/fd directory (its symlink target resolves through the PID of that ls process).
Key Takeaway:
This insight underscores the importance of understanding how different file descriptors are utilized and reserved within the Bash environment, particularly when developing or debugging scripts. By knowing which descriptors are reserved or in use, you can avoid conflicts or unintended behaviors in your script execution.
ls -lart /proc/self/fd/
Advanced Data Flow Tracking in Bash Scripts with Enhanced Debugging Techniques
Combining Debugging Processes:
By integrating the previously discussed file descriptor management and tracing methods with the addition of a date-time stamp in milliseconds, we create a powerful tool for capturing the entire data flow in Bash scripts.
Incorporating Date-Time Stamps:
This involves appending a precise timestamp (in milliseconds) along with the name of the Bash shell script file in the debug output. This enhancement allows for more detailed and time-specific tracking of script execution.
Utility Across Parent Processes:
Such an approach is especially beneficial for scripts that are commonly sourced in different contexts, like those used in start and stop processes. It ensures comprehensive data capture, irrespective of the parent process that initiated the script.
Ownership Management with chown:
To further refine the process, a chown command has been added. This is particularly useful when the Bash shell scripts are not owned by the executing user ID.
The chown process is configured to change the ownership of these scripts from ‘root’ to a specified non-root user ID. This step is crucial for scenarios where script execution under a non-root user is necessary for security or organizational policies.
Summary:
By synthesizing file descriptor management, tracing with set -x, timestamping, and ownership adjustment, we establish a comprehensive debugging and tracking mechanism. This method offers a detailed insight into script execution and is invaluable for complex environments where multiple scripts interact and where permission and ownership nuances are critical.
tz=`/bin/date --utc +%Y%m%d%H%M%S3%N.0Z`
SCRIPT_FILENAME=$(basename $(readlink -f $0))
# Use a free FD (file descriptor) to capture the debug stream caused by "set -x":
exec {FD}>/tmp/${tz}-${SCRIPT_FILENAME}-debug.log
BASH_XTRACEFD=${FD}
chown ec2-user:ec2-user /tmp/${tz}-*-debug.log
# turn on the debug stream:
set -x
Now, execute the script or allow systemd processes to run.
Tracking Script Execution with Timestamped Output Files
Output File Generation:
The debugging process described earlier will generate multiple output files. These files are recommended to be stored in the /tmp directory.
Naming Convention of Output Files:
Each output file follows a specific naming convention that incorporates the timestamp (in milliseconds) and the script’s name. This format aids in accurately tracking the execution flow and timing of the scripts.
Sorting and Reviewing Output Files:
Using ls -al:
To review these output files effectively, employ the command ls -al. This command lists files with detailed information, including their creation or modification times.
Sorting by Timestamp:
Sort the output files by their date-stamp names. This sorting ensures that the files are displayed in the correct chronological order, reflecting the sequence of script execution.
Practical Application:
This method of tracking via timestamped output files is particularly useful in complex environments where understanding the sequence and timing of script executions is crucial. By sorting the files, administrators and developers can gain insights into the script’s behavior and interactions over time.
ls -al /tmp/*.log
start_process
-rw-r--r-- 1 ec2-user ec2-user 7721 Dec 6 11:35 /tmp/202312061635393500776926.0Z-start-jboss_start_pre.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 76931 Dec 6 11:35 /tmp/202312061635393557011038.0Z-start-jboss_start_pre.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 12226 Dec 6 11:35 /tmp/202312061635403268056910.0Z-increase_jboss_access.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 8483 Dec 6 11:35 /tmp/202312061635403367517901.0Z-set_jboss_params.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 83961 Dec 6 11:35 /tmp/202312061635403585892921.0Z-start-jboss_start_pre.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 53134 Dec 6 11:35 /tmp/202312061635423009460436.0Z-start-jboss_start_pre.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 19460 Dec 6 11:35 /tmp/202312061635423472267227.0Z-reset_jboss_deployment_status-debug.log
-rw-r--r-- 1 ec2-user ec2-user 7763 Dec 6 11:35 /tmp/202312061635423606880351.0Z-generate_jboss_tls_certificate-debug.log
-rw-r--r-- 1 ec2-user ec2-user 9986 Dec 6 11:35 /tmp/202312061635423663787276.0Z-generate_jboss_tls_certificate-debug.log
-rw-r--r-- 1 ec2-user ec2-user 12624 Dec 6 11:36 /tmp/202312061635423756198600.0Z-start-jboss_start_post.sh-debug.log
stop_process
-rw-r--r-- 1 ec2-user ec2-user 7513 Dec 6 11:36 /tmp/202312061636233089744060.0Z-start-jboss_stop.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 11188 Dec 6 11:36 /tmp/202312061636233147040616.0Z-set-jboss-config.sh-debug.log
-rw-r--r-- 1 ec2-user ec2-user 8277 Dec 6 11:36 /tmp/202312061636233223278052.0Z-set-support-config.sh-debug.log
Dive into Debugging: Unraveling the Data with Grep and File Exploration
Exploring the Output Files:
Now that you have your timestamped output files neatly organized in the /tmp directory, it’s time to delve into the heart of debugging.
Using Grep for Efficient Analysis:
The Power of grep: Harness the capabilities of grep to sift through these files efficiently. grep is a powerful tool for searching text and can help you quickly locate the specific information you’re interested in within the files.
Isolating Relevant Data: Whether you’re looking for error messages, specific variable values, or particular function calls, grep can help you zero in on the exact lines you need.
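For example, a few hedged grep patterns against the debug logs produced earlier (adjust the patterns to what you are hunting for):
grep -ni "error\|fail\|not found" /tmp/*-debug.log         # surface obvious failures across all runs
grep -l "set_jboss_params" /tmp/*-debug.log | sort          # which runs sourced or touched a given child script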
Opening and Reviewing Files:
Manual Inspection: For a more detailed analysis, open these files in your preferred text editor. This allows you to review the contents in detail and understand the broader context of the debug information.
Sequential Understanding: Remember, each file’s name contains a timestamp, helping you piece together the sequence of events in your script’s execution.
Enjoy the Process:
This is where the true fun begins: piecing together the puzzle. Enjoy the satisfaction of unraveling the complexities of your Bash scripts, gaining insights, and resolving issues more effectively than ever before!
Note: The picture below is provided to encourage and guide users in the final phase of their debugging journey, combining technical instructions with a touch of motivation.
Remember to disable this debug feature, otherwise you may fill up disk space on your server if these same scripts or functions are called periodically.
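To turn the feature off, a minimal sketch that mirrors the setup lines above:
set +x          # stop the debug/trace stream
exec {FD}>&-    # release the file descriptor allocated earlier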
Our goal is to move away from using self-signed certificates, often supplied by various software vendors and Kubernetes deployments, in favor of a unified approach with a single wildcard certificate covering multiple subdomains. This strategy allows for streamlined updates of a single certificate, which can then be distributed across our servers and projects as needed.
We have identified LetsEncrypt as a viable source for these wildcard certificates. Although LetsEncrypt offers a Docker/Podman image example, we have discovered an alternative that integrates with Google Domains. This method automates the entire process, including validation of the DNS TXT record and generation of wildcard certificates. It’s important to note that Google Domains operates independently from Google Cloud DNS, and its API is limited to updating TXT records only.
In this blog post, we will concentrate on how to validate the certificates provided by LetsEncrypt using OpenSSL. We will also demonstrate how to replace self-signed certificates with these new ones on a virtual appliance.
View the pubkey of the LetsEncrypt cert.pem and privkey.pem file to confirm they match
We need to download this root CA cert for solutions, appliances, and on-prem Kubernetes Clusters that do NOT have these root CA certs in their existing keystores.
Validate the cert.pem file with the public root CA cert and the provided chain.pem file
This will return an "OK" response if valid. The order of the CA certs is important; if reversed, this process will fail. Validation still fails if we only have chain.pem or fullchain.pem (see image below) without the correct public root CA cert from LetsEncrypt. Note: This public root CA cert is typically provided in updated modern browsers. Note 2: While fullchain.pem does contain a CA cert with CN = ISRG Root X1, it does NOT appear to be the correct one based on the error reported, so we have downloaded the correct CA cert with the same CN of ISRG Root X1 (see the following images below).
Combine cert.pem with the public root CA cert and chain.pem for a complete chain cert in the CORRECT order.
Important Note: cert.pem MUST be first in this list, otherwise validation will fail. Please note that there are two (2) root CA certs with the same CN, which may cause some confusion when validating the chain.
Validate certs with openssl server process and two (2) terminal ssh sessions/windows
1st terminal session – run an openssl server (via openssl s_server) on port 9443 (any open port). The -www switch will send a status message back to the client when it connects. This includes information about the ciphers used and various session parameters. The output is in HTML format so this option will normally be used with a web browser.
2nd terminal session – run openssl s_client and curl with the combined chain cert to validate. Replace the FQDN with your LetsEncrypt domain in the wildcard cert. The example below uses the FQDN training.anapartner.net. You may also use a browser to access the openssl s_server web server with an FQDN.
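A hedged sketch of the two terminal sessions (the port, file names, and FQDN follow the examples in this post; adjust to your certificate set):
# 1st terminal: serve the LetsEncrypt cert, key, and chain on port 9443
openssl s_server -accept 9443 -cert cert.pem -key privkey.pem -cert_chain chain.pem -www
# 2nd terminal: validate with s_client and curl against the combined chain cert
openssl s_client -connect 127.0.0.1:9443 -servername training.anapartner.net -CAfile combined_chain_with_cert.pem
curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:9443:127.0.0.1 https://training.anapartner.net:9443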
Example of using the official Certbot image with podman. We recommend using multiple -d switches with *.subdomain1.domain.com to allow a single cert be used for many of your projects. Reference of this deployment
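A hedged example of that Certbot-with-podman invocation (the volume path and domains are placeholders; the manual DNS challenge mirrors the certbot example later in this post):
podman run -it --rm -v ./letsencrypt:/etc/letsencrypt:Z docker.io/certbot/certbot certonly \
  --manual --preferred-challenges dns \
  -d "*.subdomain1.domain.com" -d "*.subdomain2.domain.com" \
  --register-unsafely-without-email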
A version of podman with the Google Domains API TXT integration. We use variables to reuse this code for various testing domains. This docker image will temporarily create the Google Domains TXT records via a REST API, which are needed for Certbot DNS validation, and then the process will remove the TXT records. No manual interaction is required. We use this process with a bash shell script run as needed or via scheduled events.
Replace Identity Suite vApp Apache Certificate with LetsEncrypt
We see tech notes and an online document but wanted to provide a cleaner step by step process to update the Symantec IGA Virtual Appliance certificates for the embedded Apache HTTPD service under the path /opt/CA/VirtualAppliance/custom/apache-ssl-certificates
# Collect the generated LetsEncrypt certs via certbot, save them, scp to the vApp host, and then extract them
tar -xvf letsencrypt-20231125.tar
# View the certs
ls -lart
# Validate LetsEncrypt cert via pubkey match between private key and cert
openssl x509 -noout -pubkey -in cert.pem
openssl pkey -pubout -in privkey.pem
# Download the latest LetsEncrypt public root CA cert
curl -sOL https://letsencrypt.org/certs/isrgrootx1.pem
# Validate a full chain with root with cert (Note: order is important on cat process)
openssl verify -CAfile <(cat isrgrootx1.pem chain.pem) cert.pem
# Create a full chain with root cert and LetsEncrypt chain in the correct ORDER
cat cert.pem isrgrootx1.pem chain.pem > combined_chain_with_cert.pem
# Move prior Apache HTTPD cert files
mv localhost.key localhost.key.original
mv localhost.crt localhost.crt.original
# Link the new LetsEncrypt files to same names of localhost.XXX
ln -s privkey.pem localhost.key
ln -s combined_chain_with_cert.pem localhost.crt
# Restart apache (httpd)
sudo systemctl restart httpd
# Test with curl with the FQDN name in the CN or SANs of the cert, e.g. training.anapartner.net
curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:443:127.0.0.1 https://training.anapartner.net:443
# Test with browser with the FQDN name
Example of integration and information reported by browser
View of the certificate as shown with a CN (subject) = training.anapartner.net
A view of the SANs wildcard certs that match the FQDN used in the browser URL bar of iga.k8s-training-student01.anapartner.net
Example of error messages from the Apache HTTPD service's log files if the certs are not in the correct order or do not validate correctly. One message is a warning only; the other is a fatal error stating that the certificate and private key do not match. Use the pubkey check process to confirm the cert/key match.
[ssl:warn] [pid 2206:tid 140533536823616] AH01909: CA_IMAG_VAPP:443:0 server certificate does NOT include an ID which matches the server name
[ssl:emerg] [pid 652562:tid 140508002732352] AH02565: Certificate and private key CA_IMAG_VAPP:443:0 from /etc/pki/tls/certs/localhost.crt and /etc/pki/tls/private/localhost.key do not match
TLS Secrets update in Kubernetes Cluster
If your Kubernetes Cluster is on-prem and does not have access to the internet to validate the root CA cert, you may decide to use the combined_chain_with_cert.pem file when building your Kubernetes Secrets. With Kubernetes Secrets, you must delete then re-add the Secret, as there is no current update process for Secrets.
CERTFOLDER=~/labs/letsencrypt
CERTFILE=${CERTFOLDER}/combined_chain_with_cert.pem
#CERTFILE=${CERTFOLDER}/fullchain.pem
KEYFILE=${CERTFOLDER}/privkey.pem
INGRESS_TLS_SECRET=anapartner-dev-tls
NAMESPACE=monitoring
NS=${NAMESPACE}
kubectl -n ${NS} get secret ${INGRESS_TLS_SECRET} 2>&1 > /dev/null
if [ $? != 0 ]; then
echo ""
echo "### Installing TLS Certificate for Ingress"
kubectl -n ${NS} create secret tls ${INGRESS_TLS_SECRET} \
--cert=${CERTFILE} \
--key=${KEYFILE}
fi
Ingress Rule via yaml:
tls:
- hosts:
- grafana.${INGRESS_APPS_DOMAIN}
secretName: ${INGRESS_TLS_SECRET}
helm update:
--set grafana.ingress.tlsSecret=${INGRESS_TLS_SECRET}
The below section from the online documentation mentions the purpose of a certificate to be used by Symantec Directory. It mentions using either DXcertgen or openssl. We can now add LetsEncrypt certs as well.
One statement that caught our eye as not quite accurate was that a certificate used on one DSA could not be used for another DSA. If we compare the DSAs provided by CA Directory for the provisioning server's data tier (Identity Suite/Identity Manager), there is no difference between them, including the subject name. Because the subject (CN) has the same name for all five (5) DSAs (data/router), if a Java JNDI call is made for an LDAP call to the DSAs, the LDAP hostname validation must be disabled. (see below)
We must use a key type of RSA for any cert with Symantec PAM. The process to update the certificates is fairly straightforward. Access the PAM UI configuration and select the menu: Configuration / Security / Certificates. Join the cert.pem and the privkey.pem files together, in this order, with cat or notepad.
Challenge/Resolution: Please edit the joined file and add the string "RSA" to the header/footer of the private key provided by LetsEncrypt. Per Broadcom tech note 126692, "PAM's source code is expecting that RSA based Private Keys start with the "-----BEGIN RSA PRIVATE KEY-----" header and have a matching footer." See the example below.
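A hedged sketch of preparing that combined file (the openssl rsa step rewrites the key in the traditional format with the RSA header/footer, which avoids hand-editing; the -traditional flag applies to OpenSSL 3.x, while 1.1.x emits that header by default; file names are placeholders):
# Rewrite the private key with the "BEGIN RSA PRIVATE KEY" header/footer that PAM expects
openssl rsa -in privkey.pem -out privkey-rsa.pem -traditional
# Join the certificate and the RSA-formatted key, in this order, for upload to PAM
cat cert.pem privkey-rsa.pem > cert_with_key_only_for_pam_app.crt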
Select "Certificate with Private key" with X509 (as the other option), then click the "Choose File" button to select the combined cert/private-key PEM file. We are not required to provide a destination filename or passphrase for the LetsEncrypt certs. Click the Upload button.
We should receive a confirmation message of “Confirmation: PAM-CM-0349: subject=CN = training.anapartner.net has been verified.”
The error message "PAM-CM-0201: Verification Error Can not open private key file" will occur if using a key type of RSA or ECDSA and the default header/footer does not contain the required string for PAM to parse. If we attempt to use the ECDSA key type, we receive a similar PAM-CM-0201 error message even after updating the header/footer, so please regenerate the LetsEncrypt certs with keytype=RSA.
Next, after the certificate and private key have been loaded into Symantec PAM, please use the "Set" menu option to assign this certificate as primary. We will click the Verify button first to confirm the certificate is functioning correctly.
We should receive a confirmation message for the file: "Confirmation: PAM-CM-0346: cert_with_key_only_for_pam_app.crt has been verified".
Finally, we will click the "Accept" button and allow the PAM appliance to restart. Click "Yes" when asked to restart the appliance.
View the updated PAM UI with the LetsEncrypt Certs.
ERROR Messages
If you have received any of the below error messages during use of any Java process, e.g., J2EE servers (JBoss/WildFly), you have pushed beyond the solution vendor's ability to manage the newer features provided in LetsEncrypt certs. You will need to regenerate them with the key type RSA instead of the default elliptic-curve certs.
UNKNOWN-CIPHER-SUITE
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
SSLHandshakeException: no cipher suites in common
Ignore unavailable extension
Ignore unknown or unsupported extension
Use the below processes to help you identify the root cause of your issue.
Create an ‘RSA’ type JKS and P12 Keystore using LetsEncrypt certs.
The below example is a two (2) step process that will first create a p12 keystore with the cert.pem and privkey.pem files. Then, a second command will convert the p12 keystore to the older JKS keystore format. You may use these in any Java process, e.g., a J2EE and/or Tomcat platform.
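A hedged sketch of those two steps (the alias, keystore names, and passwords are placeholders):
# Step 1: build a PKCS12 keystore from the LetsEncrypt cert, key, and chain
openssl pkcs12 -export -in cert.pem -inkey privkey.pem -certfile chain.pem -name le-wildcard -out keystore.p12 -passout pass:changeit
# Step 2: convert the PKCS12 keystore to the older JKS format for legacy Java/J2EE platforms
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass changeit \
        -destkeystore keystore.jks -deststoretype JKS -deststorepass changeit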
These are exciting times, marked by a transformative change in the way modern applications are rolled out. The transition to Cloud and related technologies is adding considerable value to the process. If you are utilizing solutions like SiteMinder SSO or CA Access Gateway, having access to real-time metrics is invaluable. In the following article, we’ll explore the inherent features of the CA SSO container form factor that facilitate immediate metrics generation, compatible with platforms like Grafana.
Our Lab cluster is an On-Premise RedHat OpenShift Kubernetes Cluster which has the CA SSO Container solution, available as part of the Broadcom Validate Beta Program. The deployment of different SSO elements like policy servers and access gateway is facilitated through a Helm package provided by Broadcom. Within our existing OpenShift environment, a Prometheus metrics server is configured to gather time-series data. By default, the tracking of user workload metrics isn’t activated in OpenShift and must be manually enabled. To do so, make sure the ‘enableUserWorkload‘ setting is toggled to ‘true‘. You can either create or modify the existing configmap to ensure this setting is activated.
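A hedged sketch of that configmap (this mirrors the documented cluster-monitoring-config object in the openshift-monitoring namespace; verify against your OpenShift release):
oc -n openshift-monitoring apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF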
Grafana is also deployed for visuals and connected to the Prometheus data source to create metric visuals. The Grafana data source can be created using the YAML provided below. Note that creation of the Grafana data source will require the Prometheus URL as well as an authorization token to access stored metrics. This token can be extracted from the cluster using the below commands.
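A hedged example of pulling the pieces needed by the Grafana data source (oc create token requires a recent oc client; older releases used oc sa get-token instead):
# Bearer token for the monitoring service account
oc -n openshift-monitoring create token prometheus-k8s
# Host name of the Thanos querier, used as the Prometheus URL in the data source
oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}'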
Also ensure that a role binding exists to allow the service account (prometheus-k8s) in the openshift-monitoring namespace access to the role which allows monitoring of resources in the target (smdev) namespace.
Once the CA SSO helm chart is installed with metrics enabled, we must also ensure that the namespace in which CA SSO gets deployed has openshift.io/cluster-monitoring label set as true.
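For example, with the smdev namespace used above:
oc label namespace smdev openshift.io/cluster-monitoring=true --overwrite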
We are all set now and should see the metrics getting populated using the OpenShift console (Observe -> Metrics menu item) as well as available for Grafana’s consumption.
In the era of next-generation application delivery, integrated monitoring and observability features now come standard, offering considerable advantages, particularly for operations and management teams seeking clear insights into usage and solution value. This heightened value is especially notable in deployments via container platforms. If you’re on the path to modernization and are looking to speed up your initiatives, feel free to reach out. We’re committed to your success and are keen to partner with you.
While preparing to enable a feature within the Identity Suite Virtual Appliance for TLS encryption for the Provisioning Tier to send notification events, we noticed some challenges that we wish to clarify.
The Identity Suite Virtual Appliance has four (4) web services that use pre-built self-signed certificates when first deployed. Documentation is provided to change these certificates/key using aliases or soft-links.
One of the challenges we discovered is the Provisioning Tier may be using an older version of libcurl & OpenSSL that have constraints that need to be managed. These libraries are used during the web submission to the IME ETACALLBACK webservice. We will review the processes to capture these error messages and how to address them.
We will introduce the use of Let’s Encrypt wildcard certificates into the four (4) web services and the Provisioning Server’s ETACALLBACK use of a valid public root certificate.
The Apache HTTPD service is used for both a forward proxy (TCP 443) to the three (3) Wildfly Services and service for the vApp Management Console (TCP 10443). The Apache HTTPD service SSL certs use the path /etc/pki/tls/certs/localhost.crt for a self-signed certificate. A soft-link is used to redirect this to a location that the ‘config’ service ID has access to modify. The same is true for the private key.
A view of the Apache HTTPD SSL self-signed certificate and key.
The three (3) Wildfly services are deployed for the Identity Manager, Identity Governance and Identity Portal components. The configuration for TLS security is defined within the primary Wildfly configuration file of standalone.xml. The current configuration is already set up with the paths to PKCS12 keystore files of:
A view of the three (3) Wildfly PKCS12 keystore files and view of the self-signed cert/key with the pseudo hostname of the vApp host.
Provisioning Server process for TLS enablement for IME ETACALLBACK process.
Step 1. Ensure that the Provisioning Server is enabled to send data/notification events to the IME.
Step 2. Within the IME Management Console, there is a baseURL parameter. This string is sent down to the Provisioning Server upon restart of the IME, and appended to a list. This list is viewable and manageable within the Provisioning Manager UI under [System/Identity Manager Setup]. The URL string will be appended with the string ETACALLBACK/?env=identityEnv. Within this Provisioning Server, we can manage which URLs have priority in the list. This list is a failover list and not load-balancing. We have the opportunity to introduce an F5 or similar load balancer URL, but we should enable TLS security prior.
Step 3. Add the public root CA cert or CA chain certs to the following location: [System/Domain Configuration/Identity Manager Server/Trusted CA Bundle]. This PEM file may be placed in the Provisioning Server bin folder with no path, or a fully qualified path to the PEM file may be used. Note: The Provisioning Server is using a version of openssl/libcurl that will report errors that can be managed with wildcard certificates. We will show the common errors in this blog entry.
Let's Encrypt offers a free service to build wildcard certificates. We are fond of using their DNS method to request a wildcard certificate.
sudo certbot certonly --manual --preferred-challenges dns -d *.aks.iam.anapartner.dev --register-unsafely-without-email
Let’s Encrypt will provide four (4) files to be used. [certN.pem, privkeyN.pem, chainN.pem, fullchainN.pem]
cert1.pem [The primary server side wildcard cert]
privkey1.pem [The primary server side private key associated with the wildcard cert]
chain1.pem [The intermediate chain certs that are needed to validate the cert1 cert]
fullchain1.pem [the two files cert1.pem and chain1.pem joined together in the correct order]
NOTE: fullchain1.pem is the file you would typically use as the cert for a solution, so the solution will also have the intermediate CA chain certs for validation.
Important Note: One of the root public certs was cross-signed by another root public cert that expired. Most solutions are able to manage this challenge, but the provisioning service ETACALLBACK has a challenge with an expired certificate, but there are replacements for this expired certificate that we will walk through. Ref: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
Create new CA chain PEM files for LE (Let's Encrypt) validation to use with the Provisioning Server.
Validate with browsers and view the HTTPS lock symbol to view the certificate
Test with an update to a Provisioning Global User’s attribute [Note: No need to sync to accounts]. Ensure that the Identity Manager Setup Log Level = DEBUG to monitor this submission with the Provisioning Server etanotifyXXXXXXX.log.
A view of the submission for updating the Global User’s Description via IMPS (IM Provisioning Server) etanotifyXXXXXXX.log. The configuration will be loaded for using the URLs defined. Then we can monitor for the submission of the update.
Finally, a view using the IME VST (View Submitted Tasks) for the ETACALLBACK process using the task Provisioning Modify User.
Common TLS errors seen with the Provisioning Server ETACALLBACK
Ensure that the configuration is enabled for the debug log level, so we may view these errors and correct them.
[rc=77] will occur if the PEM file does not exist or is not in the correct path.
[rc=51] will occur if the URL defined does not match the exact server-side certificate (this is a good reason to use a wildcard certificate or to adjust your URL FQDN to match the cert subject (CN=XXXX) value).
[rc=60] will occur if the remote web service is using a self-signed certificate, or if any certificate in the chain, including the public root CA cert, has expired.
Other Error messages (curl)
If you see an error message with Apache HTTPD (TCP 443) and curl about "curl: (60) Peer certificate cannot be authenticated with known CA certificates", please ignore it, as the vApp does not have the "ca-bundle.crt" configuration enabled. See the RedHat note: https://access.redhat.com/solutions/523823
RedHat OpenShift is one of the container orchestration platforms that provides an enterprise-grade solution for deploying, running, and managing applications on public, on-premise, or hybrid cloud environments.
This blog entry outlines the high-level architecture of a LAB OpenShift on-prem cloud environment built on VMware Workstation infrastructure.
Red Hat OpenShift and the customized ISO image with Red Hat Core OS provide a straightforward process to build your lab and can help lower the training cost. You may watch the end-to-end process in the video below or follow this blog entry to understand the overall process.
Requirements:
Red Hat Developer Account w/ Red Hat Developer Subscription for Individuals
Local DNS to resolve a minimum of three (3) addresses for OpenShift. (api.[domain], api-int.[domain], *.apps.[domain])
DHCP Server (may use VMware Workstation NAT’s DHCP)
Storage (recommend using NFS for on-prem deployment/lab) for OpenShift logging/monitoring & any db/dir data to be retained.
SSH Terminal Program w/ SSH Key.
Browser(s)
Front Loader/Load Balancer (HAProxy)
VMware Workstation Pro 16.x
Specs: (We used more than the minimum recommended by OpenShift to prepare for other applications)
Three (3) Control Planes Nodes @ 8 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64 bit” Guest OS Type
Four (4) Worker Nodes @ 4 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type
Post-Efforts: Apply these to provide additional value. [Included as examples]
Add entropy service (haveged) to all nodes/pods to increase security & performance.
Let's Encrypt wildcard certs for *.[DOMAIN] and *.apps.[DOMAIN] to avoid self-signed certs for external UIs. Avoid using "thisisunsafe" within the Chrome browser to access the local OpenShift console.
Update OpenShift Ingress to be aware of more than two (2) worker nodes.
Update OpenShift to use NFS as default storage.
Below is a view of our footprint to deploy the OpenShift 4.x environment on a local data center hosted by VMware Workstation.
Red Hat OpenShift provides three (3) options to deploy: Cloud, Datacenter, and Local. Local is similar to minikube for your laptop/workstation with a few pods. The Red Hat OpenShift license for Cloud requires deployment on other vendors' sites for the nodes (cpu/ram/disk) and load balancers. If you deploy OpenShift on AWS or GCP, plan a budget of $500/mo per resource for the assets.
After reviewing the open-source OKD solution and the various OpenShift deployment methods, we selected the “DataCenter” option within OpenShift. Two (2) points made this decision easy.
Red Hat OpenShift offers a sixty (60) day eval license.
This license can be restarted for another sixty (60) days if you delete/archive the last cluster.
Red Hat OpenShift provides a customized ISO image with Red Hat Core OS, ignition yaml files, and an embedded SSH Public Key, that does a lot of the heavy lifting for setting up the cluster.
The below screen showcases the process that Red Hat uses to build a bootstrap ISO image using Red Hat Core OS, Ignition yaml files (to determine node type of control plane/worker node), and the embedded SSH Key. This process provides a lot of value to building a cluster and streamlines the effort.
DNS Requirement
The minimal DNS entries required for OpenShift are three (3) addresses.
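As one hedged example, a dnsmasq configuration fragment covering the three records (the domain and load-balancer IP are placeholders based on the lab values used elsewhere in this post):
# /etc/dnsmasq.d/openshift.conf
host-record=api.okd.anapartner.dev,192.168.2.101
host-record=api-int.okd.anapartner.dev,192.168.2.101
address=/apps.okd.anapartner.dev/192.168.2.101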
Update haproxy.cfg as needed for IP addresses and ports. To avoid deploying HAProxy twice, we use the "bind" command to join two (2) HAProxy configuration files together and prevent conflicts on the port 80/443 redirect for both OpenShift and another application deployed on OpenShift.
# Global settings
# Set $IP_RANGE as an OS ENV or Global variable before running HAPROXY
# Important: If using VMworkstation NAT ensure this range is correctly defined to
# avoid error message with x509 error on port 22623 upon startup on control planes
#
# Ensure 3XXXX PORT is defined correct from the ingress
# - We have predefined these ports to 32080 and 32443 for helm deployment of ingress
# oc -n ingress get svc
#
#---------------------------------------------------------------------
global
setenv IP_RANGE 192.168.243
setenv HA_BIND_IP1 192.168.2.101
setenv HA_BIND_IP2 192.168.2.111
maxconn 20000
log /dev/log local0 info
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option redispatch
option forwardfor except 127.0.0.0/8
retries 3
maxconn 20000
timeout http-request 10000ms
timeout http-keep-alive 10000ms
timeout check 10000ms
timeout connect 40000ms
timeout client 300000ms
timeout server 300000ms
timeout queue 50000ms
# Enable HAProxy stats
# Important Note: Patch OpenShift Ingress to allow internal RHEL CoreOS haproxy to run on additional worker nodes
# oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 7}}' --type=merge
#
listen stats
bind :9000
stats uri /
stats refresh 10000ms
# Kube API Server
frontend k8s_api_frontend
bind :6443
default_backend k8s_api_backend
mode tcp
option tcplog
backend k8s_api_backend
mode tcp
balance source
server ocp-cp-1_6443 "$IP_RANGE".128:6443 check
server ocp-cp-2_6443 "$IP_RANGE".129:6443 check
server ocp-cp-3_6443 "$IP_RANGE".130:6443 check
# OCP Machine Config Server
frontend ocp_machine_config_server_frontend
mode tcp
bind :22623
default_backend ocp_machine_config_server_backend
option tcplog
backend ocp_machine_config_server_backend
mode tcp
balance source
server ocp-cp-1_22623 "$IP_RANGE".128:22623 check
server ocp-cp-2_22623 "$IP_RANGE".129:22623 check
server ocp-cp-3_22623 "$IP_RANGE".130:22623 check
# OCP Machine Config Server #2
frontend ocp_machine_config_server_frontend2
mode tcp
bind :22624
default_backend ocp_machine_config_server_backend2
option tcplog
backend ocp_machine_config_server_backend2
mode tcp
balance source
server ocp-cp-1_22624 "$IP_RANGE".128:22624 check
server ocp-cp-2_22624 "$IP_RANGE".129:22624 check
server ocp-cp-3_22624 "$IP_RANGE".130:22624 check
# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend ocp_http_ingress_frontend
bind "$HA_BIND_IP1":80
default_backend ocp_http_ingress_backend
mode tcp
option tcplog
backend ocp_http_ingress_backend
balance source
mode tcp
server ocp-w-1_80 "$IP_RANGE".131:80 check
server ocp-w-2_80 "$IP_RANGE".132:80 check
server ocp-w-3_80 "$IP_RANGE".133:80 check
server ocp-w-4_80 "$IP_RANGE".134:80 check
server ocp-w-5_80 "$IP_RANGE".135:80 check
server ocp-w-6_80 "$IP_RANGE".136:80 check
server ocp-w-7_80 "$IP_RANGE".137:80 check
frontend ocp_https_ingress_frontend
bind "$HA_BIND_IP1":443
default_backend ocp_https_ingress_backend
mode tcp
option tcplog
backend ocp_https_ingress_backend
mode tcp
balance source
server ocp-w-1_443 "$IP_RANGE".131:443 check
server ocp-w-2_443 "$IP_RANGE".132:443 check
server ocp-w-3_443 "$IP_RANGE".133:443 check
server ocp-w-4_443 "$IP_RANGE".134:443 check
server ocp-w-5_443 "$IP_RANGE".135:443 check
server ocp-w-6_443 "$IP_RANGE".136:443 check
server ocp-w-7_443 "$IP_RANGE".137:443 check
######################################################################################
# VIPAUTHHUB Ingress
frontend vip_http_ingress_frontend
bind "$HA_BIND_IP2":80
mode tcp
option forwardfor
option http-server-close
default_backend vip_http_ingress_backend
backend vip_http_ingress_backend
mode tcp
balance roundrobin
server vip-w-1_32080 "$IP_RANGE".131:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32080 "$IP_RANGE".132:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32080 "$IP_RANGE".133:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32080 "$IP_RANGE".134:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32080 "$IP_RANGE".135:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32080 "$IP_RANGE".136:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32080 "$IP_RANGE".137:32080 check fall 3 rise 2 send-proxy-v2
frontend vip_https_ingress_frontend
bind "$HA_BIND_IP2":443
# mgmt-sspfqdn
acl is_mgmt_ssp hdr_end(host) -i mgmt-ssp.okd.anapartner.dev
use_backend vip_ingress-nodes_mgmt-nodeport if is_mgmt_ssp
mode tcp
#option forwardfor
option http-server-close
default_backend vip_https_ingress_backend
backend vip_https_ingress_backend
mode tcp
balance roundrobin
server vip-w-1_32443 "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32443 "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32443 "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32443 "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32443 "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32443 "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32443 "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2
backend vip_ingress-nodes_mgmt-nodeport
mode tcp
balance roundrobin
server vip-w-1_32443 "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32443 "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32443 "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32443 "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32443 "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32443 "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32443 "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2
######################################################################################
Use the following commands to add 2nd IP address to one NIC on the main VMware Workstation Host, where NIC = eno1 and 2nd IP address = 192.168.2.111
nmcli dev show eno1
sudo nmcli dev mod eno1 +ipv4.address 192.168.2.111/24
VMware Workstation Hosts / Nodes
When building the VMware hosts, ensure that you use Guest Type “Red Hat Enterprise Linux 8 x64” to match the embedded Red Hat Core OS provided in an ISO image. Otherwise, DHCP services may not work correctly, and when the VMware host boots, it may not receive an IP address.
The VMware hosts for Control Plane Nodes are recommended to be 8 vCPU, 16 GB RAM, and 100 GB HDD. The VMware hosts for Worker Nodes are recommended to be 4 vCPU, 16 GB RAM, and 100 GB HDD. OpenShift requires a minimum of three (3) Control Plane Nodes and two (2) Worker Nodes. Please check with any solution you may deploy and adjust the parameters as needed. We will deploy four (4) Worker Nodes for the Symantec VIP Auth Hub solution and horizontally scale the solution with more worker nodes for Symantec API Manager and SiteMinder.
Before starting any of these images, create a local snapshot as a “before” state. This will allow you to redeploy with minimal impact if there is any issue.
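If you prefer the command line over the VMware Workstation UI, a snapshot can also be taken with vmrun; the .vmx path and snapshot name below are placeholders.
#########################
vmrun -T ws snapshot /path/to/ocp-w-1/ocp-w-1.vmx before-deployment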
Before starting the deployment, you may wish to create a new NAT VMware Network, to avoid impacting any existing VMware images on the same address range. We will be adjusting the dhcpd.conf and dhcpd.leases files for this network.
To avoid an issue with reverse DNS lookups within Pods and containers, remove a default value from dhcpd.conf: stop the VMware network, remove or comment out the line “option domain-name localdomain;”, clear any dhcpd.leases entries, then restart the VMware network.
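The commands below are a minimal sketch of those steps on a Linux VMware Workstation host; the vmnet8 NAT network and the /etc/vmware/vmnet8 paths are assumptions and may differ in your installation.
#########################
# Assumes the NAT network is vmnet8 and default Linux Workstation paths; adjust as needed.
sudo vmware-networks --stop
# Comment out the "option domain-name localdomain;" line (leave domain-name-servers untouched)
sudo sed -i '/option domain-name .*localdomain/s/^/#/' /etc/vmware/vmnet8/dhcpd/dhcpd.conf
# Clear stale lease entries
sudo truncate -s 0 /etc/vmware/vmnet8/dhcpd/dhcpd.leases
sudo vmware-networks --start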
OpenShift / Kubernetes / Helm Command Line Binaries
Download these two (2) client packages to obtain the three (3) binaries for interfacing with the OpenShift/Kubernetes API Server.
Download the OpenShift binaries for remote management (on the main host).
#########################
sudo su -
mkdir -p /tmp/openshift && cd /tmp/openshift
curl -skOL https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64.tar.gz ; tar -zxvf helm-linux-amd64.tar.gz
curl -skOL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz ; tar -zxvf openshift-client-linux.tar.gz
mv -f oc /usr/bin/oc
mv -f kubectl /usr/bin/kubectl
mv -f helm-linux-amd64 /usr/local/bin/helm
oc version
helm version
kubectl version
Start an OpenShift Cluster Deployment
OpenID Configuration with OpenShift
Post-deployment step: After you have deployed the OpenShift cluster, you will be asked to create an identity provider (IDP) to authenticate additional accounts. Below is an example with OpenShift and MS Azure; the image below showcases the parameters and values to be shared between the two solutions.
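For reference, the commands below are a minimal sketch of wiring an Azure AD (Entra ID) application registration into OpenShift as an OpenID Connect IDP. The IDP name, tenant ID, client ID, and client secret are placeholders and must be replaced with your own values; claim mappings may also differ for your tenant.
#########################
# Placeholders: <azure-tenant-id>, <azure-client-id>, <azure-client-secret>
oc create secret generic openid-client-secret --from-literal=clientSecret='<azure-client-secret>' -n openshift-config
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: azuread
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <azure-client-id>
      clientSecret:
        name: openid-client-secret
      claims:
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/<azure-tenant-id>/v2.0
EOF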
Entropy DaemonSet for OpenShift Nodes/Pods
We can validate the entropy on an OpenShift node or Pod via use of /dev/random. We prefer to emulate 1,000 password changes to showcase how rapidly the 4K entropy pool is depleted when a security process accesses it. Below is an example of the single-line bash code.
Validate Entropy in Openshift Nodes [Before/After use of Haveged Deployment]
#########################
(counter=1;MAX=1001;time while [ $counter -le $MAX ]; do echo "";echo "########## $counter ##########" ; echo "Entropy = `cat /proc/sys/kernel/random/entropy_avail` out of 4096"; echo "" ; time dd if=/dev/random bs=8 count=1 2>/dev/null | base64; counter=$(( $counter + 1 )); done;)
If the number of OpenShift Worker Nodes is greater than two (2), patch the OpenShift Ingress Controller to scale its replica count up to the number of worker nodes.
WORKERS=`oc get nodes | grep worker | wc -l`
echo ""
echo "######################################################################"
echo "# of Worker replicas in OpenShift Ingress Prior to update"
echo "oc get -n openshift-ingress-operator ingresscontroller -o yaml | grep -i replicas:"
#echo "######################################################################"
echo ""
oc patch -n openshift-ingress-operator ingresscontroller/default --patch "{\"spec\":{\"replicas\": ${WORKERS}}}" --type=merge
LetsEncrypt Certs for OpenShift Ingress and API Server
The default certificates deployed with OpenShift are self-signed. This is not an issue until you attempt to access the local OpenShift console with a browser and are blocked from the UI by newer security enforcement in the browsers. To avoid this challenge, we recommend switching the certs to LetsEncrypt. There are many examples of how to rotate the certs; we followed the documentation at the link below. https://docs.openshift.com/container-platform/4.12/security/certificates/replacing-default-ingress-certificate.html
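The script below references several environment variables (CHAINFILE, KEYFILE, DOMAIN, LE_API, DATE). The values shown here are only an illustrative sketch assuming certbot-style paths; adjust them to your own certificate locations and cluster base domain.
#########################
# Illustrative values only; adjust to your LetsEncrypt output and cluster base domain.
DOMAIN=okd.anapartner.dev
LE_API=api.${DOMAIN}
CHAINFILE=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem
KEYFILE=/etc/letsencrypt/live/${DOMAIN}/privkey.pem
DATE=$(date +%Y%m%d%H%M%S)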
echo "Installing ConfigMap for the Default Ingress Controllers"
oc delete configmap letsencrypt-fullchain-ca -n openshift-config &>/dev/null
oc create configmap letsencrypt-fullchain-ca \
--from-file=ca-bundle.crt=${CHAINFILE} \
-n openshift-config
oc patch proxy/cluster \
--type=merge \
--patch='{"spec":{"trustedCA":{"name":"letsencrypt-fullchain-ca"}}}'
echo "Installing Certificates for the Default Ingress Controllers"
oc delete secret letsencrypt-certs -n openshift-ingress &>/dev/null
oc create secret tls letsencrypt-certs \
--cert=${CHAINFILE} \
--key=${KEYFILE} \
-n openshift-ingress
echo "Backup prior version of ingresscontroller"
oc get ingresscontroller default -n openshift-ingress-operator -o yaml > /tmp/ingresscontroller.$DATE.yaml
oc patch ingresscontroller.operator default -n openshift-ingress-operator --type=merge --patch='{"spec": { "defaultCertificate": { "name": "letsencrypt-certs" }}}'
echo "Installing Certificates for the API Endpoint"
oc delete secret letsencrypt-certs -n openshift-config &>/dev/null
oc create secret tls letsencrypt-certs \
--cert=${CHAINFILE} \
--key=${KEYFILE} \
-n openshift-config
echo "Backup prior version of apiserver"
oc get apiserver cluster -o yaml > /tmp/apiserver_cluster.$DATE.yaml
oc patch apiserver cluster --type merge --patch="{\"spec\": {\"servingCerts\": {\"namedCertificates\": [ { \"names\": [ \"$LE_API\" ], \"servingCertificate\": {\"name\": \"letsencrypt-certs\" }}]}}}"
echo "#####################################################################################"
echo "true | openssl s_client -connect api.${DOMAIN}:443 --showcerts --servername api.${DOMAIN}"
echo ""
echo "It may take 5-10 minutes for the OpenShift Ingress/API Pods to cycle with the new certs"
echo "You may monitor with: watch -n 2 'oc get pod -A | grep -i -v -e running -e complete' "
echo ""
echo "Per Openshift documentation use the below command to monitor the state of the API server"
echo "ensure PROGRESSING column states False as the status before continuing with deployment"
echo ""
echo "oc get clusteroperators kube-apiserver "
Please reach out if you wish to learn more or have ANA assist with Kubernetes / OpenShift opportunities.
Typically, we use various tools to view JMS queue-related metrics for trends and stale/stuck activity. During issues with the J2EE JMS queue, though, it is helpful to be able to view and trace individual transactions to assist with a resolution. With the proper logging levels enabled, Wildfly/JBoss logs show detailed information containing the JMS IDs associated with each transaction. The JMS transactions we see in the logs are already ‘in-flight’ and are being processed by a message handler.
On the Symantec Identity Suite Virtual Appliance, the Wildfly & HornetQ processes run under the ‘wildfly’ service ID. The Wildfly journals are located in the Wildfly data folder and stored in a format that is efficient for processing. To perform analysis on the data within these journals, though, we noticed a challenge with read permissions for the HornetQ files even when the Wildfly/Java process is not actively running.
To avoid this issue on the Virtual Appliance, copy the HornetQ files to a temporary folder; remember to copy the entire folder, including sub-folders (a sketch is shown below).
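A minimal sketch of the copy, assuming the default vApp Wildfly data path of /opt/CA/wildfly-idm/standalone/data (the actual path may differ on your appliance release):
#########################
# Copy the live-hornetq folder (and all sub-folders) to /tmp and open up read access for analysis
sudo cp -Rp /opt/CA/wildfly-idm/standalone/data/live-hornetq /tmp/live-hornetq
sudo chmod -R a+rX /tmp/live-hornetq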
Once the live-hornetq folder is available in a tmp location, execute the process below to print the journal contents.
Print HornetQ Journal and Bindings
To export the HornetQ journal files to XML, the Java class “org.hornetq.core.journal.impl.ExportJournal” requires the journal sub-folder, the journal file prefix (“hornetq-data”), the file extension (hq), the journal file size, and the output file for the export (export.dat). The prefix and file extension (hq) are unique to the Identity Suite vApp. A hedged example invocation is shown below.
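The invocation below is a sketch only: the jar/module locations, journal sub-folder name, and journal file size are assumptions and must be verified against your vApp release (the file size should match the journal-file-size configured in the Wildfly standalone*.xml; 10485760 is the HornetQ default).
#########################
# Assumed module paths; add additional module jars to the classpath if class-not-found errors occur.
HQ_JARS=/opt/CA/wildfly-idm/modules/system/layers/base/org/hornetq/main
LOG_JARS=/opt/CA/wildfly-idm/modules/system/layers/base/org/jboss/logging/main
cd /tmp/live-hornetq
java -cp "$HQ_JARS/*:$LOG_JARS/*" org.hornetq.core.journal.impl.ExportJournal \
  /tmp/live-hornetq/journal hornetq-data hq 10485760 /tmp/live-hornetq/export.dat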
The body/rows of the JMS export are partially base64-encoded. You may parse through this information as you wish.
Use this information to trace through transactions in the JMS queue; a simple search sketch is shown below.
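As one illustrative approach (not part of the original process), the loop below decodes base64-looking tokens in export.dat and searches them for a string of interest, such as a JMS message ID captured from the Wildfly logs. SEARCH_STRING is a placeholder.
#########################
SEARCH_STRING="<jms-message-id-from-logs>"
grep -o '[A-Za-z0-9+/=]\{40,\}' /tmp/live-hornetq/export.dat | while read -r b64; do
  echo "$b64" | base64 -d 2>/dev/null | strings | grep -i -- "$SEARCH_STRING"
done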
For cleanup within the Symantec Identity Suite vApp, there are a few options. The first is deleting the JMS queue journals before starting the Wildfly service; this can be accomplished using the built-in alias ‘deleteIDMJMSqueue’.
alias deleteIDMJMSqueue='sudo /opt/CA/VirtualAppliance/scripts/.firstrun/deleteIDMJMSqueue.sh'
Another option is to remove a selected JMS entry from the queue using the /opt/CA/wildfly-idm/bin/jboss-cli.sh process. If this is scripted as CLI input, escape the colons in the transaction GUID (see the example after the commands below).
/subsystem=transactions/log-store=log-store/:probe()
ls /subsystem=transactions/log-store=log-store/transactions
/subsystem=transactions/log-store=log-store/transactions=0:ffffa409cc8a:1c01b1ff:5c7e95ac:eb:delete()
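For reference, a sketch of running the same delete non-interactively with jboss-cli.sh; the default local management controller is assumed, and note the escaped colons in the GUID.
#########################
# Same transaction ID as above, with the colons escaped for CLI scripting
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect \
  --command='/subsystem=transactions/log-store=log-store/transactions=0\:ffffa409cc8a\:1c01b1ff\:5c7e95ac\:eb:delete()'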
View a description of the JMS processing from the Broadcom Engineering/Support teams (see the video below).
This write-up provides the tools required for a deeper analysis. Debugging JMS issues may test one’s patience; stay the course, stay persistent, and have fun!
References: (Delete JMS queue and remove a single entry)