Modernizing Identity Portal Migrations with AI: Navigating Embedded Scripts and Plugin Frameworks

Introduction

The Symantec (CA/Broadcom) Identity Portal is widely used for managing IAM workflows with customizable forms, tasks, and business logic. The tool allows this business logic to be exported from the management console.

However, a major challenge arises when migrating or analyzing environments (Dev → Test → Prod) using these exported Portal files. Although configuration migration tools are available, reviewing and verifying changes is difficult: Portal exports are delivered as a single compressed JSON one-liner, making it hard to identify meaningful changes ("deltas") without a large manual effort.


Challenge 1: Single-Line JSON Exports from Identity Portal

The example above contains over 88,000 characters on a single line. Try searching that string for the object you wish to change or update.

Identity Portal’s export format is a flat, one-line JSON string, even when the export contains hundreds of forms, layout structures, and JavaScript blocks.

Migration/Analysis Risks

  • Impossible to visually scan or diff exports.
  • Nested structures like layout, formProps, and handlers are escaped strings, sometimes double-encoded.
  • Hidden differences can result in subtle bugs between versions or environments.

A Solution

We created a series of PowerShell scripts that use AI to select the best key-value pairs to sort on, producing output that is human-readable and searchable, and reducing the complexity and effort of the migration process. We can now isolate minor delta changes that would otherwise remain hidden until a use case exercised them later in the migration effort, when fixing them would cost additional work. The scripts:

  • Convert the one-liner export into pretty-formatted, human-readable JSON.
  • Detect and decode deeply embedded or escaped JSON strings, especially within layout or formProps.
  • Extract each form’s business logic and layout separately.

These outputs allow us to:

  • Open and analyze the data in Notepad++, with clean indentation and structure.
  • Use WinMerge or Beyond Compare to easily spot deltas between environments or versioned exports.
  • Track historical changes over time by comparing daily/weekly snapshots.
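Our scripts are written in PowerShell; as a minimal sketch of the first step, the same re-indenting can be done with python3's built-in json.tool. The export content and file names below are illustrative, not a real Portal export:

```shell
# Hypothetical one-line export for illustration; a real Portal export is far larger.
printf '%s' '{"forms":[{"name":"Onboarding","version":2}],"tasks":[{"id":"t1"}]}' > export.json

# Re-indent (and sort keys for stable output) so WinMerge/Beyond Compare
# can work line by line instead of on one giant string.
python3 -m json.tool --sort-keys export.json > export.pretty.json
cat export.pretty.json
```

Sorting keys keeps unrelated reordering out of the diff, which is what makes small deltas stand out between snapshots.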

Challenge 2: Embedded JavaScript Inside Portal Forms

Identity Portal forms often include JavaScript logic directly embedded in the form definition (onLoad, onChange, onSubmit).

Migration Risks

  • JS logic is not separated from the data model or UI.
  • Inconsistent formatting or legacy syntax can cause scripts to silently fail.
  • Broken logic might not surface until after production deployment.

Suggested Solutions

  • Use PowerShell to extract JS blocks per form and store them as external .js.txt files.
  • Identify reused code patterns that should be modularized.
  • Create regression test cases for logic-heavy forms.
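As a sketch of the extraction step, the loop below writes each handler to its own .js.txt file. The handlers field name and form structure are assumptions for illustration; the real Portal schema may differ:

```shell
# Hypothetical export with embedded handlers; field names are illustrative.
printf '%s' '{"forms":[{"name":"Onboarding","handlers":{"onLoad":"form.init();","onSubmit":"return validate(form);"}}]}' > export.json

# Write each handler to <form>.<event>.js.txt so scripts can be reviewed and diffed on their own.
python3 - <<'EOF'
import json

data = json.load(open("export.json"))
for form in data.get("forms", []):
    for event, script in form.get("handlers", {}).items():
        path = f"{form['name']}.{event}.js.txt"
        with open(path, "w") as fh:
            fh.write(script)
        print("wrote", path)
EOF
```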

Challenge 3: Form Layouts with Escaped JSON Structures

The layout field in each form is often a stringified JSON object, sometimes double or triple-escaped.

ANA provides in-depth analysis of the Symantec Identity Portal business logic and embedded JavaScript and Java plugins to assist with migration.

Migration Risks

  • Malformed layout strings crash the form UI.
  • Even minor layout changes (like label order) are hard to detect.

Suggested Solutions

  • Extract and pretty-print each layout block to .layout.json files.
    • Please note: while the output is pretty-printed, it is not strictly valid JSON due to the remaining escape sequences. Use these exported files as searchable research material to help isolate deltas to be corrected during the migration effort.
  • Use WinMerge or Notepad++ for visual diffs.
  • Validate control-to-field binding consistency.
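A minimal sketch of the decode step, assuming a layout value that is an escaped JSON string. The sample export below is illustrative; real exports may be double- or triple-escaped, which the loop handles by parsing until a real object appears:

```shell
# Hypothetical export where layout is a stringified JSON object.
printf '%s' '{"forms":[{"name":"Onboarding","layout":"{\"rows\":[{\"label\":\"First Name\"}]}"}]}' > export.json

# Each json.loads() strips one level of escaping; repeat until a dict/list appears.
python3 - <<'EOF'
import json

layout = json.load(open("export.json"))["forms"][0]["layout"]
while isinstance(layout, str):          # handles double- or triple-escaped values
    layout = json.loads(layout)
with open("Onboarding.layout.json", "w") as fh:
    json.dump(layout, fh, indent=2)
print(json.dumps(layout, indent=2))
EOF
```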

Using our understanding of the Identity Portal format for the ‘layout’ property, we were able to identify methods, with AI assistance, for handling the double- or triple-escaped characters that were troublesome to export consistently. Our service engagements now incorporate greater use of AI and associated APIs to support migration efforts and process modernization, with the goal of minimizing business risk for our clients and our organization.


Challenge 4: Java Plugins with Multiple Classes

Many Portal instances rely on custom Java plugins with dozens of classes, Spring beans, and services.

Migration Risks

  • Portal API changes break plugins.
  • Lack of modularity or documentation for the custom plugins.
  • Missing source code for compiled custom plugins.
  • Difficult to test or rebuild.

Suggested Solutions

  • In the absence of custom source code, decompile plugins using jd-gui.
  • Rebuild with Maven/Gradle in modern IDEs.
  • Isolate logic into reusable service layers.

Testing and Validation

  • Pretty JSON confirms field mapping.
  • Layouts render in Dev, Test, and Prod.
  • Plugins respond with valid output.
  • JS handlers trigger as expected.

Tools and Techniques

  • PowerShell: Prettify JSON, extract layouts/handlers.
  • Notepad++: Review JSON and scripts.
  • WinMerge / Beyond Compare: Diff exports and track changes.
  • jd-gui: Java decompilation for plugin reverse engineering.

Recommendations for Future-Proofing

  • Store layouts and handlers in Git.
  • Modularize plugin code.
  • Version control form definitions.
  • Automate validation tests in CI or staging.

Conclusion

Migrating Identity Portal environments requires more than copy-pasting exports. In the absence of proper implementation documentation around customizations, it may require reverse engineering, decoding, and diffing of deeply nested structures.

By extracting clean, readable artifacts and comparing across environments, teams will gain visibility, traceability, and confidence in their migration efforts.

Review our GitHub collection of the scripts mentioned above, and please reach out if you would like assistance with your migration processes and challenges. We can now progress toward automating the promotion of business logic from one environment to the next.

https://github.com/anapartner-com/identity_portal

Streamlining with LetsEncrypt Wildcard Certificates and Automated Validation

Our goal is to move away from using self-signed certificates, often supplied by various software vendors and Kubernetes deployments, in favor of a unified approach with a single wildcard certificate covering multiple subdomains. This strategy allows for streamlined updates of a single certificate, which can then be distributed across our servers and projects as needed.

We have identified LetsEncrypt as a viable source for these wildcard certificates. Although LetsEncrypt offers a Docker/Podman image example, we have discovered an alternative that integrates with Google Domains. This method automates the entire process, including validation of the DNS TXT record and generation of wildcard certificates. It’s important to note that Google Domains operates independently from Google Cloud DNS, and its API is limited to updating TXT records only.

In this blog post, we will concentrate on how to validate the certificates provided by LetsEncrypt using OpenSSL. We will also demonstrate how to replace self-signed certificates with these new ones on a virtual appliance.

View the pubkey of the LetsEncrypt cert.pem and privkey.pem files to confirm they match

Note: LetsEncrypt private keys are now key-type = ECDSA by default. You may still request RSA key-type.

openssl x509 -noout -pubkey -in cert.pem

openssl pkey -pubout -in privkey.pem

When key-type = rsa, we can use either openssl validation method (rsa with md5 or pkey with pubout)

openssl x509 -noout -modulus -in cert.pem | openssl md5

openssl rsa  -noout -modulus -in privkey.pem | openssl md5
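The match check can be exercised end to end. The sketch below generates a throwaway self-signed pair so it is self-contained; in practice, point the same two commands at the LetsEncrypt cert.pem and privkey.pem (the pub_from_*.pem filenames are our own):

```shell
# Throwaway self-signed pair for illustration; substitute your LetsEncrypt files.
openssl req -x509 -newkey rsa:2048 -nodes -keyout privkey.pem -out cert.pem \
  -subj "/CN=demo.example.test" -days 1 2>/dev/null

# The two public-key dumps are byte-identical when the key matches the cert.
openssl x509 -noout -pubkey -in cert.pem > pub_from_cert.pem
openssl pkey -pubout -in privkey.pem     > pub_from_key.pem
cmp -s pub_from_cert.pem pub_from_key.pem && echo "MATCH" || echo "MISMATCH"
```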

Validate DNS & Dates of the cert, chain, fullchain pem files

openssl x509 -text -noout -in cert.pem | grep -e DNS -e 'Not Before:' -e 'Not After :' -e 'Subject: C'

while openssl x509 -noout -text | grep -e Certificate: -e Issuer: -e DNS -e 'Not Before:' -e 'Not After :' -e 'Subject: C' ; do :; done < chain.pem 2>/dev/null

while openssl x509 -noout -text | grep -e Certificate: -e Issuer: -e DNS -e 'Not Before:' -e 'Not After :' -e 'Subject: C' ; do :; done < fullchain.pem 2>/dev/null

Download the current trusted public root CA cert from LetsEncrypt

We need to download this root CA cert for solutions, appliances, and on-prem Kubernetes Clusters that do NOT have these root CA certs in their existing keystores.

curl -sOL https://letsencrypt.org/certs/isrgrootx1.pem

Validate the cert.pem file with the public root CA cert and the provided chain.pem file

This will return an “OK” response if valid. The order of the CA certs is important; if reversed, this process will fail. Validation also fails if we only have chain.pem or fullchain.pem (see image below) without the correct public root CA cert from LetsEncrypt. Note: this public root CA cert is typically bundled with updated modern browsers. Note 2: while fullchain.pem does contain a CA cert with CN = ISRG Root X1, it does not appear to be the correct one based on the error reported, so we downloaded the correct CA cert with the same CN of ISRG Root X1 (see following images below).

openssl verify -CAfile <(cat isrgrootx1.pem chain.pem) cert.pem

Combine cert.pem with the public root CA cert and chain.pem for a complete chain cert in the CORRECT order.

Important Note: cert.pem MUST be first in this list, otherwise validation will fail. Also note that there are two (2) root CA certs with the same CN, which may cause some confusion when validating the chain.

cat cert.pem isrgrootx1.pem chain.pem > combined_chain_with_cert.pem

Validate certs with openssl server process and two (2) terminal ssh sessions/windows

1st terminal session – run an openssl server (via openssl s_server) on port 9443 (or any open port). The -www switch sends a status page back to the client when it connects, including the ciphers used and various session parameters. The output is HTML, so this option is normally used with a web browser.

openssl s_server -key privkey.pem -cert combined_chain_with_cert.pem -accept 9443 -www

2nd terminal session – run openssl s_client and curl with the combined chain cert to validate. Replace the FQDN with your LetsEncrypt domain from the wildcard cert; in the example below the FQDN is training.anapartner.net. You may also use a browser to access the openssl s_server web page via the FQDN.

true | openssl s_client -connect localhost:9443 -CAfile combined_chain_with_cert.pem

curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:9443:127.0.0.1 https://training.anapartner.net:9443

CERTBOT deployment via PODMAN

Example of using the official Certbot image with podman. We recommend using multiple -d switches with *.subdomain1.domain.com patterns to allow a single cert to be used across many of your projects. Reference for this deployment

podman run --rm -it \
  -v "$(pwd)/etc-output:/etc/letsencrypt:z" \
  -v "$(pwd)/var-output:/var/lib/letsencrypt:z" \
  certbot/certbot certonly --manual -d 'wildcard.domain.com' \
  -d '*.subdomain1.domain.com' \
  -d '*.subdomain2.domain.com' \
  -d '*.subdomain3.domain.com' \
  -d '*.subdomain4.domain.com'

A variation using podman with the Google Domains API TXT integration. We use variables so this code can be reused across various testing domains. This container image temporarily creates the Google Domains TXT records, via a REST API, that are needed for Certbot DNS validation, then removes the TXT records afterward. No manual interaction is required. We run this process from a bash shell script, on demand or via scheduled events.

podman run --rm \
  -v ${LETSENCRYPT}/var/lib/letsencrypt:/var/lib/letsencrypt:z \
  -v ${LETSENCRYPT}/etc/letsencrypt:/etc/letsencrypt:z \
  -v ${LETSENCRYPT}/var/log/letsencrypt:/var/log/letsencrypt:z \
  --cap-drop=all \
  ghcr.io/aaomidi/certbot-dns-google-domains:latest \
  certbot certonly  \
  --authenticator 'dns-google-domains' \
  --dns-google-domains-credentials /var/lib/letsencrypt/dns_google_domains_credentials_${domain}.ini \
  --non-interactive --agree-tos -m dns@${domain} \
  --server 'https://acme-v02.api.letsencrypt.org/directory' \
  --dns-google-domains-zone "${domain}" \
  -d "${CN_OF_TLS_DOMAIN}" \
  -d "*.$subdomain01.$domain" \
  -d "*.$subdomain02.$domain" \
  -d "*.$subdomain03.$domain" \
  -d "*.$subdomain04.$domain" \
  -d "*.$subdomain05.$domain" \
  -d "*.$subdomain06.$domain" \
  -d "*.$subdomain07.$domain" \
  -d "*.$subdomain08.$domain" \
  -d "*.$subdomain09.$domain" \
  -d "*.$subdomain10.$domain" \
  -d "*.$subdomain11.$domain" \
  -d "*.$subdomain12.$domain"

Replace Identity Suite vApp Apache Certificate with LetsEncrypt

We have seen tech notes and an online document, but wanted to provide a cleaner step-by-step process to update the Symantec IGA Virtual Appliance certificates for the embedded Apache HTTPD service, under the path /opt/CA/VirtualAppliance/custom/apache-ssl-certificates

# Collect the generated LetsEncrypt certs via certbot, save them, scp to the vApp host, and then extract them

tar -xvf letsencrypt-20231125.tar


# View the certs

ls -lart


# Validate LetsEncrypt cert via pubkey match between private key and cert

openssl x509 -noout -pubkey -in cert.pem
openssl pkey -pubout -in privkey.pem


# Download the latest LetsEncrypt public root CA cert

curl -sOL https://letsencrypt.org/certs/isrgrootx1.pem


# Validate a full chain with root with cert (Note: order is important on cat process)

openssl verify -CAfile <(cat isrgrootx1.pem chain.pem) cert.pem


# Create a full chain with root cert and LetsEncrypt chain in the correct ORDER

cat cert.pem isrgrootx1.pem chain.pem > combined_chain_with_cert.pem


# Move prior Apache HTTPD cert files

mv localhost.key localhost.key.original
mv localhost.crt localhost.crt.original


# Link the new LetsEncrypt files to same names of localhost.XXX

ln -s privkey.pem localhost.key
ln -s combined_chain_with_cert.pem localhost.crt


# Restart apache (httpd)

sudo systemctl restart httpd


# Test with curl with the FQDN name in the CN or SANs of the cert, e.g. training.anapartner.net

curl -v --cacert combined_chain_with_cert.pem --resolve training.anapartner.net:443:127.0.0.1 https://training.anapartner.net:443


# Test with browser with the FQDN name

Example of integration and information reported by browser

View of the certificate as shown with a CN (subject) = training.anapartner.net

A view of the SANS wildcard certs that match the FQDN used in the browser URL bar of iga.k8s-training-student01.anapartner.net

Example error messages from the Apache HTTPD service’s log files when the certs are not in the correct order or do not validate. One message is a warning only; the other is a fatal error stating that the cert and private key do not match. Use the pubkey check process above to confirm the cert/key match.

[ssl:warn] [pid 2206:tid 140533536823616] AH01909: CA_IMAG_VAPP:443:0 server certificate does NOT include an ID which matches the server name
[ssl:emerg] [pid 652562:tid 140508002732352] AH02565: Certificate and private key CA_IMAG_VAPP:443:0 from /etc/pki/tls/certs/localhost.crt and /etc/pki/tls/private/localhost.key do not match

TLS Secrets update in Kubernetes Cluster

If your Kubernetes cluster is on-prem and does not have internet access to validate the root CA cert, you may decide to use combined_chain_with_cert.pem when building your Kubernetes Secrets. Note that a Secret must be deleted and then re-created; there is no in-place update process for Secrets.

CERTFOLDER=~/labs/letsencrypt
CERTFILE=${CERTFOLDER}/combined_chain_with_cert.pem
#CERTFILE=${CERTFOLDER}/fullchain.pem
KEYFILE=${CERTFOLDER}/privkey.pem
INGRESS_TLS_SECRET=anapartner-dev-tls
NAMESPACE=monitoring


NS=${NAMESPACE}
kubectl -n ${NS} get secret ${INGRESS_TLS_SECRET} > /dev/null 2>&1
if [ $? != 0 ]; then
  echo ""
  echo "### Installing TLS Certificate for Ingress"
  kubectl -n ${NS} create secret tls ${INGRESS_TLS_SECRET} \
    --cert=${CERTFILE} \
    --key=${KEYFILE}
fi



Ingress Rule via yaml:
  tls:
  - hosts:
    - grafana.${INGRESS_APPS_DOMAIN}
    secretName: ${INGRESS_TLS_SECRET}


helm update:
--set grafana.ingress.tlsSecret=${INGRESS_TLS_SECRET} 

Symantec Directory use of LetsEncrypt certs

Symantec (CA) Directory requires certificates to have a purpose of “SSL client: Yes” and “SSL server: Yes” and to be in X.509 PEM format. Fortunately for us, we can validate that this is true and leverage our LetsEncrypt certs for the DSAs. If we compare the output of dxcertgen (the binary included with CA Directory to generate certificates) and LetsEncrypt, we see they differ only in two (2) purposes that do not impact our use.

openssl x509 -noout -purpose -in cert.pem
openssl x509 -noout -ext KeyUsage -in cert.pem
openssl x509 -noout -ext extendedKeyUsage -in cert.pem

The section of the online documentation below mentions the purpose of a certificate to be used by Symantec Directory. It mentions using either DXcertgen or openssl; we can now add LetsEncrypt certs to that list.

One statement that caught our eye as not quite accurate was that a certificate used on one DSA could not be used for another DSA. If we compare the DSAs provided by CA Directory for the provisioning server’s data tier (Identity Suite/Identity Manager), there is no difference between them, including the subject name. Because the subject (CN) is the same for all five (5) DSAs (data/router), if a Java JNDI LDAP call is made to the DSAs, LDAP hostname validation must be disabled (see below).

-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true

Symantec PAM use of LetsEncrypt Certs

We must use a key type of RSA for any cert used with Symantec PAM. The process to update the certificates is fairly straightforward: access the PAM UI configuration and select the menu Configuration / Security / Certificates. Join the cert.pem and privkey.pem files together, in that order, with cat or Notepad.

Challenge/Resolution: edit the joined file and add the string “RSA” to the header/footer of the private key provided by LetsEncrypt. Per Broadcom tech note 126692: “PAM’s source code is expecting that RSA based Private Keys start with the ‘—–BEGIN RSA PRIVATE KEY—–’ header and have a matching footer.” See the example below.
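Hand-editing the header/footer works for PAM’s parser per the tech note. An alternative worth considering (our suggestion, not from the tech note) is to let openssl rewrite the key into the traditional PKCS#1 encoding, which carries the RSA header natively. Sketch with a throwaway key:

```shell
# Throwaway RSA key for illustration; use the real privkey.pem in practice.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out privkey.pem 2>/dev/null
head -1 privkey.pem          # modern tools emit PKCS#8: "-----BEGIN PRIVATE KEY-----"

# Rewrite into traditional PKCS#1 form, which carries the RSA header PAM expects.
openssl pkey -in privkey.pem -traditional -out privkey_rsa.pem
head -1 privkey_rsa.pem      # "-----BEGIN RSA PRIVATE KEY-----"
```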

Select “Certificate with Private key” with X509 (the other option), then click the “Choose File” button to select the combined cert/private-key pem file. No destination filename or passphrase is required for the LetsEncrypt certs. Click the Upload button.

We should receive a confirmation message of “Confirmation: PAM-CM-0349: subject=CN = training.anapartner.net has been verified.”

The error message “PAM-CM-0201: Verification Error Can not open private key file” will occur, for either the RSA or ECDSA key type, when the default header/footer does not contain the string PAM expects. If we attempt to use the ECDSA key type, we receive a similar PAM-CM-0201 error even after updating the header/footer, so please regenerate the LetsEncrypt certs with keytype=RSA.

Next, after the certificate and private key have been loaded into Symantec PAM, use the “Set” menu option to assign this certificate as primary. Click the Verify button first to confirm the certificate is functioning correctly.

We should receive a confirmation message for the file: “Confirmation: PAM-CM-0346: cert_with_key_only_for_pam_app.crt has been verified”.

Finally, click the “Accept” button and allow the PAM appliance to restart. Click “Yes” when asked to restart the appliance.

View the updated PAM UI with the LetsEncrypt Certs.

ERROR Messages

If you have received any of the below error messages from a Java process, e.g. J2EE servers (JBoss/WildFly), you have pushed beyond the solution vendor’s ability to handle the newer features in LetsEncrypt certs. You will need to regenerate the certs with a key type of RSA instead of the default elliptic-curve (ECDSA) type.

UNKNOWN-CIPHER-SUITE
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
SSLHandshakeException: no cipher suites in common
Ignore unavailable extension
Ignore unknown or unsupported extension

Use the below processes to help you identify the root cause of your issue.

JAVA_OPTS="$JAVA_OPTS -Djavax.net.debug=ssl:handshake:verbose "

curl -kv https://bastion-host.sso-training-student01.anapartner.net:8443/iam/siteminder/adminui

true | openssl s_client -cipher HIGH -connect bastion-host.sso-training-student01.anapartner.net:8443

Example of using podman to generate wildcard certs with the RSA key type.

# RSA keytype is needed for SM Admin UI and PAM App - Recommend keep this as default

export KEYTYPE=rsa

podman run --rm --name letsencrypt \
-v ${LETSENCRYPT}/var/lib/letsencrypt:/var/lib/letsencrypt:z \
-v ${LETSENCRYPT}/etc/letsencrypt:/etc/letsencrypt:z \
-v ${LETSENCRYPT}/var/log/letsencrypt:/var/log/letsencrypt:z \
--cap-drop=all \
ghcr.io/aaomidi/certbot-dns-google-domains:latest \
certbot certonly \
--authenticator 'dns-google-domains' \
--dns-google-domains-credentials /var/lib/letsencrypt/dns_google_domains_credentials_${TLS_DOMAIN}.ini \
--non-interactive --agree-tos -m dns@${TLS_DOMAIN} \
--server 'https://acme-v02.api.letsencrypt.org/directory' \
--key-type ${KEYTYPE} \
--dns-google-domains-zone "${TLS_DOMAIN}" \
-d "${CN_OF_TLS_DOMAIN}" \
-d "*.${GCP_TRAINING_NAME}01.${TLS_DOMAIN}"

Java Keystores

Create an ‘RSA’ type JKS and P12 Keystore using LetsEncrypt certs.

The example below is a two (2) step process: the first command creates a p12 keystore from the cert.pem and privkey.pem files, and a second command then converts the p12 keystore to the older JKS keystore format. You may use these in any Java process, e.g., a J2EE and/or Tomcat platform.


openssl pkcs12 -export -inkey privkey.pem -in cert.pem -name tomcat -out keyStore.p12 -password pass:changeit



keytool -v -importkeystore -srckeystore keyStore.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore keyStore.jks -deststoretype JKS -deststorepass changeit -noprompt