VMware Workstation and Vyos Software Router: Expedite on-prem Kubernetes and OpenShift Labs

With the rapid evolution of technology and increasing complexity of software solutions, using tools like VMware Workstation for learning and testing has become necessary. Deploying intricate systems like Kubernetes and OpenShift on VMware Workstation provides an opportunity for in-depth understanding and experience before implementing these solutions on a larger, organization-wide scale.

VMware Workstation, coupled with the powerful container orchestration capabilities of Kubernetes and OpenShift, offers an unparalleled platform for crafting next-generation applications and solutions while lowering costs. It’s a potent combination that can significantly boost your organization’s operational efficiency, application delivery speed, and overall software development lifecycle.

In the realm of advanced solution deployments, the right tools can make all the difference. With VMware Workstation, you’re not just getting a virtualization tool; you’re acquiring a platform that helps you delve deeper into modern software architectures and innovations. Harness its potential and equip yourself with the knowledge and experience needed to stay ahead of the curve.

Networking is one of the critical aspects of VMware Workstation that makes it such a versatile tool. VMware Workstation offers three types of networking options to suit different needs and scenarios. Let’s explore each of these in detail.

1. Bridged Networking

Bridged Networking is the simplest and most straightforward networking mode. When you configure a VM to use bridged networking, the VM is connected directly to the existing network that your host computer is connected to. In essence, it will be as though the VM is another physical device on your network.

With bridged networking, your VM can have its unique identity on the network, such as its IP address, making it an entirely independent entity from the host. This is particularly useful when you need the VM to interact directly with other devices on the network, or when it needs to be accessible from other computers.

2. Network Address Translation (NAT)

The NAT mode allows your VMs to share the IP address of the host machine. Essentially, all the network traffic from the VMs is routed through the host machine. This implies that the VMs can access the external network and the internet, but they cannot be directly reached from the external network since they are ‘hidden’ behind the host.

NAT is highly beneficial when you want to isolate your VMs from your network while still providing them with network access. For instance, this can be handy when testing untrusted applications or experimenting with potentially unstable software that could disrupt your network.

3. Host-Only Networking

The Host-Only networking mode creates a private network shared only between the VMs and the host machine. This means that your VMs can communicate with each other and the host machine but cannot access the external network or the internet.

Host-Only networking is particularly useful when you want to create a secure, isolated environment for your VMs, away from the vulnerabilities of the external network. This is ideal when working with sensitive data or creating a controlled environment for testing network applications.

Each of these three VMware Workstation networking modes has advantages and suitable use-cases. The choice between them depends on your specific needs, whether creating an isolated testing environment or mimicking a complex, interconnected network for a comprehensive deployment simulation.
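
For reference, VMware Workstation records the networking mode of each virtual NIC in the VM’s .vmx file. Below is a minimal sketch, assuming a first adapter (ethernet0) and an example custom segment of vmnet2; only one connectionType line would apply per adapter:

ethernet0.connectionType = "bridged"
ethernet0.connectionType = "nat"
ethernet0.connectionType = "hostonly"
ethernet0.connectionType = "custom"
ethernet0.vnet = "vmnet2"

The "custom" value, together with the ethernet0.vnet entry, is how an adapter is pinned to a specific host-only segment (vmnet2 in this example; on a Linux host the value may appear as "/dev/vmnet2"). We will rely on this later for the lab networks.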

Expanding Host-Only for use with OpenShift/Kubernetes Labs

As discussed earlier, VMware Workstation offers three (3) network modes: Bridged, NAT, and Host-Only. The challenge with bridged mode is that it shares your office or home network and requests an IP address to be assigned there. This may not be acceptable in your office, or you may wish to keep your main home network free of VMware hosts. NAT is typically the most commonly selected network mode for VMware guest OS images, as it does not impact the office/home network. The limitation with NAT is that it only allows outbound traffic from the guest OS, via the VMware host; there are no routing rules to allow traffic from outside to reach the guest OS images. The last network mode is Host-Only. Host-Only is designed to be an isolated network segment between the VMware guest OS and the VMware host OS. There is no outbound or inbound traffic beyond the host. This network mode is typically not used when access to the internet is required.

Introducing the Vyos Software Router for VMware (OVA)

We wanted a more flexible solution than these three (3) modes. We wanted to standardize a network segment for our OpenShift/Kubernetes training/development that did not require a change between locations (like bridged) or force our internal resources to reset their bridged network to match.

After a review, we selected VMware Host-Only, which has the basics of what we needed; we were only missing routing rules for inbound and outbound traffic. We looked around and found a software solution, already made, that we could immediately leverage with minimal configuration changes to the VMware guest OS images. The Vyos software router is already provided in OVA format for immediate use.

We downloaded and imported the OVA into VMware workstation.

Since we planned to have multiple host network segments to manage large data volumes for OpenShift/Kubernetes, we bumped up the VMware guest OS specs from 1 vCPU / 4 GB RAM to 2 vCPU / 8 GB RAM, and adjusted the extra network adapters to be Host-Only or Custom (Host-Only) networks.

After we adjusted the guest OS specs, we snapshotted this VMware guest OS image to allow rollback if we wanted to change a feature later. We started up the image and logged in with the default credentials: vyos/vyos.

After login via the VMware guest OS console, we immediately updated the Vyos configuration to allow us to SSH into the guest OS and perform our work in a better UI.

Below is an example of the bootstrap configuration to enable remote access via SSH and update the eth0 NIC to a bridged IP address that we can access. We standardized a rule that all network routing would use the IP xxx.yyy.zzz.254.

conf
set service ssh port '22'
set interfaces ethernet eth0 address '192.168.2.254/24'
commit
save

We then switched to our favorite SSH terminal tool, MobaXterm (or PuTTY), to validate that we could access the Vyos software router remotely.
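
A quick check from the VMware host (or any device on the bridged network) confirms remote access; the address below assumes the bootstrap IP set above:

ssh vyos@192.168.2.254
show interfaces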

We are now ready to add a configuration that allows a default route, inbound routes, and outbound routes for our four (4) network NICs.

The below lines may be pasted into the SSH session. ‘conf’ (configure) opens the Vyos configuration shell so that we can paste in all of the lines. We will define static IP addresses for all four (4) NICs, a static route to our external network router, outbound rules, and inbound rules. Please ensure that the IP addresses for the four (4) NICs match what you have defined.

conf
set service ssh port '22'

set interfaces ethernet eth0 address '192.168.2.254/24'
set interfaces ethernet eth0 description 'BRIDGED NETWORK'

set interfaces ethernet eth1 address '10.10.10.254/24'
set interfaces ethernet eth1 description 'VMWARE HOST NETWORK vmnet1'

set interfaces ethernet eth2 address '10.0.0.254/24'
set interfaces ethernet eth2 description 'VMWARE HOST NETWORK vmnet2 - BAREMETAL OPENSHIFT'

set interfaces ethernet eth3 address '192.168.242.254/24'
set interfaces ethernet eth3 description 'VMWARE HOST NETWORK vmnet3'

delete protocols static route 0.0.0.0/0
set protocols static route 0.0.0.0/0 next-hop 192.168.2.1

delete nat

set nat source rule 20 description "Allow Outbound Traffic from VMware Host network from eth1"
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.10.10.0/24'
set nat source rule 20 translation address masquerade

set nat source rule 30 description "Allow Outbound Traffic from VMware Host network from eth2"
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '10.0.0.0/24'
set nat source rule 30 translation address masquerade

set nat source rule 40 description "Allow Outbound Traffic from VMware Host network from eth3"
set nat source rule 40 outbound-interface 'eth0'
set nat source rule 40 source address '192.168.242.0/24'
set nat source rule 40 translation address masquerade

set nat source rule 60 description "Allow Inbound Traffic from Bridged to VMware host network eth1"
set nat source rule 60 outbound-interface 'eth1'
set nat source rule 60 source address '192.168.2.0/24'
set nat source rule 60 translation address masquerade

set nat source rule 61 description "Allow Inbound Traffic from Bridged to VMware Host network eth2"
set nat source rule 61 outbound-interface 'eth2'
set nat source rule 61 source address '192.168.2.0/24'
set nat source rule 61 translation address masquerade

set nat source rule 62 description "Allow Inbound Traffic from Bridged to Vmware Host network eth3"
set nat source rule 62 outbound-interface 'eth3'
set nat source rule 62 source address '192.168.2.0/24'
set nat source rule 62 translation address masquerade

commit
save
exit
show interface
show ip route 0.0.0.0

Please double-check that the IP addresses match your VMware Host-Only networks.

Validation

We will validate inbound and outbound traffic using ping on the Vyos software router. When this passes, we will move on to routing configuration for external devices.
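
A minimal validation sketch from the Vyos operational shell, assuming the addresses used above (the .1 addresses are the VMware host’s vmnet interfaces, and 192.168.2.1 is the external network router):

ping 192.168.2.1
ping 10.10.10.1
ping 10.0.0.1
ping 192.168.242.1
ping 8.8.8.8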

After basic validation, please snapshot your Vyos guest OS.

In the final step, we will add routing configuration on MS Windows OS and Linux OS to reach all four (4) networks from any external device and any VMware image on one of the four (4) networks.

# Ref: https://docs.vyos.io/en/equuleus/configuration/system/default-route.html
#      https://docs.vyos.io/en/equuleus/quick-start.html
#      https://bertvv.github.io/cheat-sheets/VyOS.html

#Step 000:  Increase Vyos Router specs from 1 vCPU 4 GB RAM to 2 vCPU 8 GB RAM when adding more than two interfaces in VMware Workstation


#Step 00: Review VMware Host vmnet addresses, use to build your rules.

ip a | grep vmnet

16: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.10.10.1/24 brd 10.10.10.255 scope global vmnet1
17: vmnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.0.0.1/24 brd 10.0.0.255 scope global vmnet2
18: vmnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 192.168.242.1/24 brd 192.168.242.255 scope global vmnet3
19: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 192.168.243.1/24 brd 192.168.243.255 scope global vmnet8
20: vmnet255: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.255.0.1/24 brd 10.255.0.255 scope global vmnet255



# Step 0:  Bootstrap the first interface (via the VMware console of the running Vyos image - after login with vyos / vyos)

conf
set service ssh port '22'
set interfaces ethernet eth0 address '192.168.2.254/24'
commit
save
exit
show interface


# Step 1: Vyos configuration - after login with vyos / vyos with an SSH putty session tool to allow copy-n-paste of the below rows

conf
set service ssh port '22'

set interfaces ethernet eth0 address '192.168.2.254/24'
set interfaces ethernet eth0 description 'BRIDGED NETWORK'

set interfaces ethernet eth1 address '10.10.10.254/24'
set interfaces ethernet eth1 description 'VMWARE HOST NETWORK vmnet1'

set interfaces ethernet eth2 address '10.0.0.254/24'
set interfaces ethernet eth2 description 'VMWARE HOST NETWORK vmnet2 - BAREMETAL OPENSHIFT'

set interfaces ethernet eth3 address '192.168.242.254/24'
set interfaces ethernet eth3 description 'VMWARE HOST NETWORK vmnet3'

delete protocols static route 0.0.0.0/0
set protocols static route 0.0.0.0/0 next-hop 192.168.2.1

delete nat

set nat source rule 20 description "Allow Outbound Traffic from VMware Host network from eth1"
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.10.10.0/24'
set nat source rule 20 translation address masquerade

set nat source rule 30 description "Allow Outbound Traffic from VMware Host network from eth2"
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '10.0.0.0/24'
set nat source rule 30 translation address masquerade

set nat source rule 40 description "Allow Outbound Traffic from VMware Host network from eth3"
set nat source rule 40 outbound-interface 'eth0'
set nat source rule 40 source address '192.168.242.0/24'
set nat source rule 40 translation address masquerade

set nat source rule 60 description "Allow Inbound Traffic from Bridged to VMware host network eth1"
set nat source rule 60 outbound-interface 'eth1'
set nat source rule 60 source address '192.168.2.0/24'
set nat source rule 60 translation address masquerade

set nat source rule 61 description "Allow Inbound Traffic from Bridged to VMware Host network eth2"
set nat source rule 61 outbound-interface 'eth2'
set nat source rule 61 source address '192.168.2.0/24'
set nat source rule 61 translation address masquerade

set nat source rule 62 description "Allow Inbound Traffic from Bridged to Vmware Host network eth3"
set nat source rule 62 outbound-interface 'eth3'
set nat source rule 62 source address '192.168.2.0/24'
set nat source rule 62 translation address masquerade

commit
save
exit
show interface
show ip route 0.0.0.0 


# Step 2:  Update external lab network devices (laptop on 192.168.2.x) to use the Vyos Router for these new routes

# MS Win OS examples:
route add -p 10.10.10.0 mask 255.255.255.0 192.168.2.254
route add -p 10.0.0.0 mask 255.255.255.0 192.168.2.254
route add -p 192.168.242.0 mask 255.255.255.0 192.168.2.254

ping 10.10.10.254
ping 10.0.0.254
ping 192.168.242.254

# Linux OS examples:
sudo route add -net  10.0.0.0/24 gw 192.168.2.254
sudo route add -net  10.10.10.0/24 gw 192.168.2.254
sudo route add -net  192.168.242.0/24 gw 192.168.2.254
route -n
netstat -rn   (dnf -y install net-tools)

ping 10.10.10.254
ping 10.0.0.254
ping 192.168.242.254
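
# Optional: on Linux hosts without net-tools, the equivalent iproute2 commands may be used
# (assumes the same subnets and Vyos gateway as above)
sudo ip route add 10.10.10.0/24 via 192.168.2.254
sudo ip route add 10.0.0.0/24 via 192.168.2.254
sudo ip route add 192.168.242.0/24 via 192.168.2.254
ip route show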

# Step 3:  Optional:  Add static routes on your network router (if missed on a device) to redirect these subnets to the Vyos bridged interface.


# Step 4:  Update the VMware DHCP configuration files to use the new Vyos Router for any VMware images with DHCP, then reboot the images.
option routers  10.10.10.254;     [VMware Workstation on Linux OS: /etc/vmware/vmnet1/dhcp/dhcpd.conf ]
option routers  10.0.0.254;       [VMware Workstation on Linux OS: /etc/vmware/vmnet2/dhcp/dhcpd.conf ]
option routers  192.168.242.254;  [VMware Workstation on Linux OS: /etc/vmware/vmnet3/dhcp/dhcpd.conf ]

# Note:  MS Win OS:  The VMware DHCP configurations are combined in one file:  C:\ProgramData\VMware\vmnetdhcp.conf
# 
# Restart images, view routes, then do an outbound submission as a test.
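
# Example guest-side check (assumption: a Linux guest using dhclient for DHCP):
# renew the lease and confirm the default gateway now points at the Vyos address for that vmnet
sudo dhclient -r && sudo dhclient
ip route | grep default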


ping 8.8.8.8
ping www.google.com


# Step 5:  For OpenShift, ensure that your install-config.yaml or agent-config.yaml is defined with the correct gateway router for Vyos.



# Step 6:  Exercise your VMware host images and then monitor within Vyos via:
show nat source translations
show nat source statistics
monitor traffic interface any filter 'host 10.0.0.99'      [embedded tcpdump]

Overview of the Vyos Software Router with VMware Workstation and three (3) host-only networks plus the bridged network

We now have a methodology to use the more than 250 possible VMware host-only network segments for our networking labs with OpenShift and Kubernetes that require outbound and/or inbound internet access. We can standardize on a unique host-only network segment to share with team members and clients for training/education/development. With the embedded tcpdump feature in the Vyos software router image, we can quickly address and isolate network routing configuration challenges.

Hopefully, this will allow you to continue to expand your knowledge and awareness of new architectures with your dedicated lab environment.

Secure Application Introspection

Locate “the good, the bad, and the ugly” data with a transparent proxy.

Have you been frustrated with an enterprise/cloud solution’s API implementation or documentation, where a single case-sensitive data field entry delays progress? Does the solution have undocumented features for older client tools? Do you wish to know what your mobile apps or laptop send to the internet?

Utilizing a proxy can help with all the above, and if the process is quick and straightforward, so much the better.

Setting up a proxy typically involves quite a bit of effort and several steps. You may need to modify a client host or mobile phone to redirect web traffic with the OS environment variables HTTP_PROXY and HTTPS_PROXY, or adjust the underlying OS network/iptables. Previously, we typically set up the open-source JMeter proxy with the OS environment variables to capture secure traffic data. This process works well for most applications. Additionally, the Firefox browser allows manual proxy configuration without dependence on the OS environment settings if we wish to capture the user experience and any data challenges.

The example below shows modifying the Firefox browser to use a “manual proxy configuration” instead of the system/auto configurations.

To ensure accurate capture of web traffic submissions, a more thorough method is needed, as the above process may fail if client tools or mobile apps do not honor the OS environment variables.

We found an excellent combination: the open-source tool MITMproxy running under podman (or docker), using its embedded WireGuard VPN feature.

The process in six (6) steps:

  1. Deployment of the WireGuard VPN client on the client host (MS Win/Linux/Mobile)
  2. Deployment of MITMproxy using podman (or docker) with WireGuard mode/configuration
  3. Edit the wireguard.conf file to have the correct public IP address and import this file to the WireGuard VPN client and establish the VPN connection.
  4. Copy the mitmproxy-ca-cert.cer to the client component Java or OS keystore (if needed) as a trusted CA cert.
  5. Open the MITMproxy Web UI or monitor the command line dashboard
  6. Execute your test on the client host and view the results in the MITMproxy Web UI for both request and response.

MITMproxy UI with WireGuard mode enabled.

The WireGuard client configuration will be provided in three (3) places: the MITMproxy logs (podman logs mitmproxy), the text file wireguard.conf (if podman/docker volumes are enabled), and the MITMproxy UI. The QR code is enabled for mobile phone use, but since the public IP address shown in this view is not correct, you will need to manually edit this configuration on your mobile phone for those use-cases to set the correct endpoint IP address.

MITMproxy UI with standard proxy configuration mode.

Bash Script:

The script below deploys MITMproxy with podman on a Linux OS in two (2) configurations: WireGuard mode for any client applications that do not honor HTTP_PROXY/HTTPS_PROXY, and standard proxy mode. This bash script uses a shared volume so both containers use the SAME certs, avoiding managing different certs upon restart of the containers.

#!/bin/bash
######################################################################################
#
#  Deploy MITMproxy with two (2) configurations:
#
#     MITMProxy with WireGuard mode enabled (UDP 51820) and Web UI (TCP 8081)
#     MITMProxy with standard proxy enabled (TCP 9080) and Web UI (TCP 9081)
#
#  Notes:  Use podman exec to check path and env variables
#    - Binaries:  dnf -y install podman 
#    - Use shared folder to avoid having two (2) different configuration files for both copies
#    - Do not forget the :z for -v volumes to avoid permissions issues
#    - Do not forget quotes around env -e variables
#    - Use --rm as needed
#    - Use this switch as needed, but do not leave it on:   --log-level debug \
#
#   Basic:  podman run -it -v /tmp/mitmproxy/:/home/mitmproxy/.mitmproxy:z -p 8080:8080 mitmproxy/mitmproxy
#   Logs:   podman logs mitmproxy-wireguard
#   Shell:  podman exec -it -u root mitmproxy bash
#
#  Options Ref.  https://docs.mitmproxy.org/stable/concepts-options/
#   - added stream_large_bodies=10m to lower impact to mitmproxy due
#       to possible large json/xml payloads 
#
#  ANA 07/2023
#
######################################################################################

MITMPROXY_HOMEPATH=/tmp/mitmproxy
echo ""
echo "You may delete the shared folder of ${MITMPROXY_HOMEPATH}"
echo "to remove prior configuration of mitmproxy certs & wireguard.conf files"
echo ""
#sudo rm -rf ${MITMPROXY_HOMEPATH}

mkdir -p ${MITMPROXY_HOMEPATH}
chmod -R 777 ${MITMPROXY_HOMEPATH}
ls -hlrt ${MITMPROXY_HOMEPATH}

echo ""
echo " Starting mitmproxy-wireguard proxy "
podman rm mitmproxy-wireguard -f  &>/dev/null
podman run -d -it --name mitmproxy-wireguard \
   -p 51820:51820/udp -p 8081:8081 \
   -l mitmproxy \
   -v ${MITMPROXY_HOMEPATH}:/home/mitmproxy/.mitmproxy:z  \
    docker.io/mitmproxy/mitmproxy \
    mitmweb --mode wireguard --ssl-insecure  --web-host 0.0.0.0 --web-port 8081 --set stream_large_bodies=10m


echo ""
echo " Starting mitmproxy-default proxy "
podman rm mitmproxy-default -f  &>/dev/null
podman run -d -it --name mitmproxy-default \
    -p 9080:9080 -p 9081:9081 \
    -l mitmproxy  \
    -v ${MITMPROXY_HOMEPATH}:/home/mitmproxy/.mitmproxy:z  \
     docker.io/mitmproxy/mitmproxy \
     mitmweb --set listen_port=9080 --web-host 0.0.0.0 --web-port 9081

echo ""
echo ""
echo "###############################################################################"
echo ""
echo " Running Podman Containers for MITMproxy"
sleep 5
podman ps -a --no-trunc | grep -i mitmproxy
echo ""
echo "###############################################################################"
podman logs  mitmproxy-default
echo ""
echo " Monitor the mitmproxy-default UI @ http://$(curl -s ifconfig.me):9081 "
echo "###############################################################################"
podman logs  mitmproxy-wireguard
echo ""
echo " Monitor the mitmproxy-wireguard UI @ http://$(curl -s ifconfig.me):8081 "
echo "###############################################################################"
echo ""
echo "Please update the mitmproxy wireguard client configuration endpoint address to:  $(curl -s ifconfig.me)"
echo ""
echo "###############################################################################"
echo ""

MITMproxy CERTS:

Add the mitmproxy-ca-cert to the trusted root certs folder of your client host OS keystore (MS Win: certlm.msc), and/or if there is a Java keystore for the client tool, add the mitmproxy-ca-cert.cer as a trusted cert, e.g.:

keytool -import -trustcacerts -file mitm-ca-proxy.cer -alias mitmproxy -keystore capam.keystore
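
On a RHEL/CentOS-style Linux client host, the same trust may be established at the OS level; a minimal sketch, assuming the cert was copied from the shared /tmp/mitmproxy volume created by the script above:

sudo cp /tmp/mitmproxy/mitmproxy-ca-cert.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust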

WireGuard client configuration:

To ensure that only selected web traffic is monitored through the WireGuard VPN to MITMproxy, make changes to the wireguard.conf file before importing it. Specifically, update the AllowedIPs address field to include a single IP address. Additionally, modify the Endpoint field to direct traffic to the public IP address of the MITMproxy host on UDP port 51820. If deploying MITMproxy on AWS or other cloud hosts, confirm that the firewall/security groups permit TCP 8080, 8081, 9080, 9081, and UDP 51820. Once you have activated the WireGuard client, test your processes on the host and monitor the MITMproxy UI for updates.
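
For illustration only, an edited wireguard.conf typically ends up looking like the sketch below; the keys and interface addresses come from the file generated by your own MITMproxy container, 192.0.2.25 stands in for the single destination you wish to capture, and 203.0.113.10 stands in for the public IP address of the MITMproxy host:

[Interface]
PrivateKey = <client private key from the generated wireguard.conf>
Address = 10.0.0.1/32
DNS = 10.0.0.53

[Peer]
PublicKey = <server public key from the generated wireguard.conf>
AllowedIPs = 192.0.2.25/32
Endpoint = 203.0.113.10:51820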

An example of data captured between two (2) CLI tools is shown below. These CLI tools did not honor the OS environment variables HTTP_PROXY & HTTPS_PROXY. Using the MITMproxy-with-WireGuard process, we can now confirm the differing submission behavior that was masked by the CLI tools. This process was useful to confirm that MS PowerShell was removing special characters from a password string, e.g. ! (exclamation mark).

Example of script deploying two (2) MITMproxy containers

What changed? Business Logic XML Delta Process

As your solution grows, it may be challenging to identify when, and what, new business logic team members have added. We can parse through change-control documentation, but that may be a long and frustrating process. One of the challenges is that objects created within the Symantec (Broadcom) Identity Suite solution may not have date stamps within the database tables. Again, we could parse through logs and the Task Persistence and Archive Task Persistence databases.

Please stop this behavior.

Let’s introduce a new streamlined process to help your administrative and business IAG teams.

Below is a process that uses existing tools & samples within the IAG solution.

Goals: Automate a daily backup and create an XML delta of the new business objects that were created the prior day. The tools used will be the included Import Export utility with additional Linux commands. We focused this process on the Symantec (Broadcom) Identity Suite Virtual Appliance with the built-in userID of ‘config’.

To start, we will copy the solution’s included Import/Export sample to the ‘config’ userID home folder, along with the minimal library files required to execute this process. We will then modify the script to perform a delta compare every day it is executed by the ‘config’ crontab.

# Step 1 - Create a copy of the Import / Export Utility under the config home folder

mkdir -p /home/config/backup/export
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/ImportExportUtility/ /home/config/backup/export/


# Step 2 - Copy the three (3) extra JAR files required by the Import / Export Utility

mkdir -p /home/config/backup/export/lib
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/lib/idmutils.jar /home/config/backup/export/lib/
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/lib/log4j.jar /home/config/backup/export/lib/
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/lib/bc-fips.jar /home/config/backup/export/lib/

# Step 3 - Backup the current shell script and properties file before our changes.

cd /home/config/backup/export/ImportExportUtility/
cp -r -p config.properties config.properties.org
cp -r -p ImportExportUtil.sh ImportExportUtil.sh.org

We will now update config.properties with your own hostname and credentials.

Below are the contents of a working file, with additional commentary to assist with the replacement of the PBES password encryption. Please note that mode=export, and that we have selected resourceType=RoleDefinition. Over 99% of the business objects will reside within this single XML file when it is exported. We set localPath=. (the current path) to allow the automated scripts to rename files for use with an XML diff tool. You may wish to update the export path to a network path.

## provide IM server base url with port number 
## Use netstat -apn | grep 8080 to confirm IP address
baseUrl=http://192.168.2.220:8080
## Login credential (in case Management console is protected), use JSAFE algorithm of PasswordTool to encrypt your plain text password
## Example: /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/PasswordTool > ./pwdtools.sh -JSAFE -p Password01
userName=admin
password={PBES}:B8+4u/F3aiZ9sXus6HyDNA==
## provide mode import/export
mode=export
## provide resource type ALL/Directory/Environment/RoleDefinition
#resourceType=Directory
resourceType=RoleDefinition
#resourceType=Environment
#resourceType=All
## provide comma separated list of Directories to import/export, in case of import it should be xml file name
directories=ProvStore,UserStore,AuthenticationDirectory
## provide Environment name for Environment/Role Definition import/export, in case of environment import it should be zip file name
environment=identityEnv
## In case of Role Definition import please provide xml file name
roleDefFileName=env-RoleDefinitions
## provide local path to save/get the resources, in case of export directory structure will be created
localPath=.
## provide request time out in minutes
timeout=600
## restart Environment after import: yes/no, For restart to work environment name should be provided
restartEnv=no

We will now update the shell script ImportExportUtil.sh.

We have renamed this shell script to ExportRoles.sh to clearly call out what we wish this process to focus on and how we will call it via a crontab entry. We have enhanced JAVA_OPTS to speed up exports, depending on the number of business objects; e.g., an IME with 40K provisioning roles may take over 60 minutes to export. We then created a process that generates an XML diff between the prior export and the latest export. We date-time stamp the exports to allow review of past changes.

#!/bin/sh
#
#  This batch file sets up the environment and runs the IM Import and Export Utility
#

if [ -z "$JAVA_HOME" ] ; then
  echo "---------------------------------------------------------------------"
  echo "ERROR: Cannot find JAVA_HOME"
  echo "Please specify JAVA_HOME variable in this script file."
  echo "---------------------------------------------------------------------"
  exit
fi

export JAVA_HOME

MYCLASSPATH=.:./importExportUtility.jar:../lib/bc-fips.jar:../lib/idmutils.jar:../lib/log4j.jar

export MYCLASSPATH

##############
### ANA, 04/22
# Update JAVA_OPTS for speed as 40K Prov Roles may take 60-90 minutes to export
tz=`/bin/date --utc +%Y%m%d%H%M%S,3%N.0Z`
echo ""
echo "Starting at : $tz"
echo ""
#JAVA_OPTS="-Xms256m -Xmx512m $JAVA_OPTS"

JAVA_OPTS="-Xms256m -Xmx4g -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true $JAVA_OPTS"


##############
### ANA, 04/22
# Rename prior backup to be used for diff operation prior to exporting a new file
dos2unix identityEnv-RoleDefinitions.xml                                           >/dev/null 2>&1
cp -r -p identityEnv-RoleDefinitions.xml  identityEnv-RoleDefinitions_prior.xml    >/dev/null 2>&1

export JAVA_OPTS
$JAVA_HOME/bin/java $JAVA_OPTS -cp $MYCLASSPATH com.ca.identitymanager.importexportutility.client.ImportExportClient


##############
### ANA, 04/22
# Setup config crontab to execute this task every day at 1:11 AM
#  11 1 * * *   /home/config/scripts/create_pr_and_import_them/ExportRoles/ExportRoles.sh  >/dev/null 2>&1
# Use https://crontab.guru/ to define the correct scheduler
# Rename vApp (all) files for future review and delta compares

dos2unix identityEnv-RoleDefinitions.xml                                           >/dev/null 2>&1
cp -r -p identityEnv-RoleDefinitions.xml    identityEnv-RoleDefinitions_$tz.xml    >/dev/null 2>&1


##############
### ANA, 04/22
# Perform Diff operation between prior and new exports of IME Roles&Tasks.xml files
# Create an XML diff between two prior files.
#diff <(xmllint --c14n identityEnv-RoleDefinitions_prior.xml) <(xmllint --c14n identityEnv-RoleDefinitions.xml)

xmllint --c14n identityEnv-RoleDefinitions_prior.xml                                       > identityEnv-RoleDefinitions_prior_xmllint.xml
xmllint --c14n identityEnv-RoleDefinitions.xml                                             > identityEnv-RoleDefinitions_xmllint.xml
diff identityEnv-RoleDefinitions_prior_xmllint.xml identityEnv-RoleDefinitions_xmllint.xml > identityEnv-RoleDefinitions_DIFF_xmllint.xml
cp -r -p identityEnv-RoleDefinitions_DIFF_xmllint.xml identityEnv-RoleDefinitions_DIFF_xmllint_$tz.xml    >/dev/null 2>&1
echo ""
echo "There are `wc -l identityEnv-RoleDefinitions_DIFF_xmllint.xml` rows different between the two files"
echo ""
echo "There are `grep -i   '<ImsRole' identityEnv-RoleDefinitions_DIFF_xmllint.xml | wc -l ` Roles delta between the two files"
echo ""
echo "Please review if these deltas are correct:  cat identityEnv-RoleDefinitions_DIFF_xmllint.xml | more "
echo ""
tz=`/bin/date --utc +%Y%m%d%H%M%S,3%N.0Z`
echo "Done at : $tz "
echo  ""

Examples of this process executed. First, let’s generate about 5 provisioning roles to be loaded into the IME and then import them.

config@pwdha001 VAPP-14.3.0 (192.168.2.220):~/scripts/create_pr_and_import_them > ./create_5k_pr_xml_file_for_ime_mgmt_import.sh

real    0m0.028s
user    0m0.006s
sys     0m0.013s

Number of Provisioning Roles Created in XML File: 5
-rw-rw-r-- 1 config config 6.2K Apr 23 14:01 new_provisioning_roles.xml

Now we will import this XML file into the IME (see sample below for a Provisioning Role addition)

<?xml version="1.0" encoding="UTF-8"?>
<ims:ImsTemplate xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://imsenvironmentobjects/xsd imsconfig://schema/ImsEnvironmentObjects.xsd" xmlns:ims="http://imsenvironmentobjects/xsd" xmlns:imsrule="http://imsmemberrule/xsd" xmlns:imsscope="http://imsscoperule/xsd" xmlns:imschange="http://imschangeaction/xsd">

        <!--   ******************** Create 10K Provisioning Roles ********************   -->

        <ImsRole name="prov-role_160823241129774" roletype="PROVISIONING" assignable="true" adminassignable="true" enabled="true" allowduplicatecustom="false" description="DESC_upto_128_Characters" custom01="CF01_upto_1000_Characters" custom02="CF02" custom03="CF03" custom04="CF04" custom05="CF05" custom06="CF06" custom07="CF07" custom08="CF08" custom09="CF09" custom10="CF10">

                <AdminPolicy assignable="true" adminassignable="true">
                        <imsrule:MemberRule><RoleMember><AdminRole name="User Manager"/></RoleMember></imsrule:MemberRule>
                        <imsscope:ScopeRule object="USER" purpose="*"><All/></imsscope:ScopeRule>
                </AdminPolicy>
                <AdminPolicy assignable="true" adminassignable="true">
                        <imsrule:MemberRule><All/></imsrule:MemberRule>
                        <imsscope:ScopeRule object="USER" purpose="*"><All/></imsscope:ScopeRule>
                </AdminPolicy>

                <OwnerPolicy>
                        <imsrule:MemberRule><RoleMember><AdminRole name="System Manager"/></RoleMember></imsrule:MemberRule>
                </OwnerPolicy>

                <Attribute name="comments">2022-04-23T20:01:47.000Z : COMMENTS_upto_128_Characters</Attribute>
                <Attribute name="department">DEPT_upto_100_Characters</Attribute>
     </ImsRole>

</ims:ImsTemplate>

Output from an import with the ImportExportUtil shell script

-----------------------------------------------------------
-------------------Starting a new Import-------------------
-----------------------------------------------------------
Importing Role Definition to Environment 'identityEnv'...
#############  Import Output  #############
Warning: Updating the IdentityMinder environment "identityEnv"
  Deploying role definitions...
    Importing Roles...

*********
0 error(s), 0 warning(s)
Role Definition Imported Successfully!!!

Now we will run our new ExportRoles.sh script, where it will export the IME and do an XML delta compare operation between a prior export and the latest export.

config@pwdha001 VAPP-14.3.0 (192.168.2.220):~/backup/export/ImportExportUtility > time ./ExportRoles.sh

Starting at : 20220423190707,3582824826.0Z

-----------------------------------------------------------
-------------------Starting a new Export-------------------
-----------------------------------------------------------
Exporting Role Definition from Environment 'identityEnv'...
 disposition attachment; filename=identityEnv-RoleDefinitions.xml;
Exported Filename: identityEnv-RoleDefinitions.xml
Role Definition exported Successfully!!!

There are 76 identityEnv-RoleDefinitions_DIFF_xmllint.xml rows different between the two files

There are 5 Roles delta between the two files

Please review if these deltas are correct:  cat identityEnv-RoleDefinitions_DIFF_xmllint.xml | more

Done at : 20220423190713,3241475139.0Z


real    0m5.663s
user    0m2.509s
sys     0m0.394s

Here is a view of the delta file that is generated. We now KNOW what was added since the last export. We have a date range as well.

>       <ImsRole adminassignable="true" allowduplicatecustom="false" assignable="true" custom01="CF01_upto_1000_Characters" custom02="CF02" custom03="CF03" custom04="CF04" custom05="CF05" custom06="CF06" custom07="CF07" custom08="CF08" custom09="CF09" custom10="CF10" description="DESC_upto_128_Characters" enabled="true" name="prov-role_160823241129774" roletype="PROVISIONING">
>               <AdminPolicy adminassignable="true" assignable="true">
>                       <imsrule:MemberRule><RoleMember><AdminRole name="User Manager"></AdminRole></RoleMember></imsrule:MemberRule>
>                       <imsscope:ScopeRule object="USER" purpose="*"><All></All></imsscope:ScopeRule>
>               </AdminPolicy>
>               <AdminPolicy adminassignable="true" assignable="true">
>                       <imsrule:MemberRule><All></All></imsrule:MemberRule>
>                       <imsscope:ScopeRule object="USER" purpose="*"><All></All></imsscope:ScopeRule>
>               </AdminPolicy>
>               <OwnerPolicy>
>                       <imsrule:MemberRule><RoleMember><AdminRole name="System Manager"></AdminRole></RoleMember></imsrule:MemberRule>
>               </OwnerPolicy>
>               <Attribute name="comments">2022-04-23T20:01:47.000Z : COMMENTS_upto_128_Characters</Attribute>
>               <Attribute name="department">DEPT_upto_100_Characters</Attribute>
>       </ImsRole>

A view of the files generated in the export process. NOTE: The first time this is executed, there will be an error due to the missing prior file. Make a change in the IME, and then run this process a 2nd time to see the delta.

For a larger IME, the export may take 60-90 minutes. The below example shows a full “ALL” export, which may be configured in the config.properties file. This IME was updated with over 40K Provisioning Roles to help isolate a Java memory leak.

Counting the roles (Admin, Access, Provisioning) within the IME (Oracle DB) table of IM_ROLE

When we tested loading 5K Provisioning Roles, we noted it would take about 30 minutes. If the input file was over 10 MB, then the Import process would fail with a generic java error.

Let’s Encrypt DNS Challenge

We are fond of the Let’s Encrypt DNS challenge process over the alternative challenge types. The Let’s Encrypt DNS challenge, using certbot, allows businesses to scale replacement of certs for hosts that are not exposed directly to the internet. The certbot tool has switches that allow custom scripts to be run, which allows for a lot of flexibility.

Installation of certbot

Review the website https://certbot.eff.org/; there are sections per OS on how to use “snap” to install this tool. Example for CentOS 7: https://certbot.eff.org/instructions?ws=other&os=centosrhel7

sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

We had been using certbot with manual steps every 90 days for our development DNS domains but wanted to automate these steps. Unfortunately, we noticed a challenge: Google Domains does not have an API available to update DNS records. After research, we did find that Google Cloud does have the APIs available.

We had several options:

a) Move the DNS domains from Google Domains to Google Cloud,

b) redirect CNAME records from Google Domains to Google Cloud,

c) move to another domain registrar that has APIs available,

d) redirect CNAME records from Google Domains to another domain registrar.

Since we had another domain registrar offering APIs, we decided to choose option d. This entry will review our steps and how we leverage certbot and the two (2) DNS domain registrars.

Step 1: Google Domains – Create _acme-challenge CNAME records.

Step 2: 2nd domain registrar – GoDaddy – Create _acme-challenge TXT records.

Step 3: Enable the API key and API secret on the 2nd domain registrar:

https://developer.godaddy.com/keys?hbi_code=1

Step 4: Validate via curl the ability to update the TXT records through the 2nd domain registrar’s API.

curl -s -X PUT \
"https://api.godaddy.com/v1/domains/anapartner.in/records/TXT/_acme-challenge.aks.iam.anapartner.dev" \
-H  "accept: application/json" -H  "Content-Type: application/json" \
-H  "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"TESTING THIS STRING FIELD\", \"name\": \"_acme-challenge.aks.iam.anapartner.dev\", \"ttl\": 600 }]"
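
After the curl call returns, the TXT record can be confirmed from any host before wiring it into certbot; note the record lives under the 2nd registrar’s zone (anapartner.in), so the lookup uses the full name and may lag by the record’s TTL:

nslookup -type=txt _acme-challenge.aks.iam.anapartner.dev.anapartner.in 8.8.8.8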

Finally, we will create a script that will be executed by crontab every 85 days. Please note that the scripts to be called by certbot are created as HERE DOCS to allow portability within a single script.
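
Cron cannot express “every 85 days” directly, so one approach is to run the script on a coarser schedule and let the script (with certbot’s --force-renewal) handle the reissue. An illustrative crontab entry, assuming the script below is saved under a hypothetical path of /root/scripts/renew-le-certs.sh:

30 2 1 */2 * /root/scripts/renew-le-certs.sh >> /var/log/renew-le-certs.log 2>&1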

#!/bin/bash
###############################################################################
#
#  Update Google DNS via round-about way through 2nd DNS Register DNS API (anapartner.in)
#
#
#  Pre-work:
#     1. Use existing or purchase a domain from 2nd DNS register
#
#     2. Create Google Domain CNAME records for each of the wildcard domain to a remote DNS TXT Record
#     _acme-challenge.gke.iam.anapartner.org CNAME _acme-challenge.gke.iam.anapartner.org.anapartner.in
#     _acme-challenge.aks.iam.anapartner.org CNAME _acme-challenge.aks.iam.anapartner.org.anapartner.in
#     _acme-challenge.eks.iam.anapartner.org CNAME _acme-challenge.eks.iam.anapartner.org.anapartner.in
#
#     3. Create 2nd DNS Register Domain TXT records for each of the object to be updated
#     _acme-challenge.gke.iam.anapartner.org.anapartner.in
#     _acme-challenge.aks.iam.anapartner.org.anapartner.in
#     _acme-challenge.eks.iam.anapartner.org.anapartner.in
#
#     4. Enable the 2nd DNS Register API for Production Access (Developer) & store the KEY & SECRET for use
#        https://developer.godaddy.com/keys?hbi_code=1
#
#     5. Install certbot     dnf -y install certbot
#         Note:  certbot will use two (2) variables of:  CERTBOT_DOMAIN (after the -d switch)
#         and  CERTBOT_VALIDATION (the text string to be used for TXT records)
#
#  ANA 07/2022
#
###############################################################################

GODADDY_API_KEY="XXXXXXXXXXXXXy"
GODADDY_API_SECRET="XXXXXXXXXXXXXXX"
DOMAIN="anapartner.in"


echo ""
echo "Create wildcard domain list"
echo "This may be any TXT record for a remote domain FQDN that is mapped in the anapartner.in"
echo "#####################################################################"

#cat << 'EOF' > wildcard-domains.txt
#*.gke.iam.anapartner.in
#*.aks.iam.anapartner.in
#*.eks.iam.anapartner.in
#EOF

cat << 'EOF' > wildcard-domains.txt
*.gke.iam.anapartner.dev
*.aks.iam.anapartner.dev
*.eks.iam.anapartner.dev
EOF

WILDCARD_DOMAIN=anapartner.dev

echo ""
echo "Create godaddy.sh script to update TXT records"
echo "#####################################################################"
cat <<  EOF > godaddy.sh
#!/bin/bash
if [[ "\$CERTBOT_DOMAIN" =~ .*anapartner.in* ]];then
    echo "If domain contains anapartner.in, we need to remove the last part to avoid duplicates during registration"
    CERTBOT_DOMAIN="\${CERTBOT_DOMAIN/.anapartner.in/}"
    echo \$CERTBOT_DOMAIN
fi

DNS_REC_NAME="_acme-challenge.\$CERTBOT_DOMAIN"


curl -s -X PUT \
"https://api.godaddy.com/v1/domains/${DOMAIN}/records/TXT/\${DNS_REC_NAME}" \
-H  "accept: application/json" -H  "Content-Type: application/json" \
-H  "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"\$CERTBOT_VALIDATION\", \"name\": \"\${DNS_REC_NAME}\", \"ttl\": 600 }]"

sleep 30
EOF

chmod 555 godaddy.sh

echo ""
echo "Create godaddy-clean.sh script to wipe TXT records - as needed"
echo "#####################################################################"
cat << EOF > godaddy-clean.sh
#!/bin/bash

if [[ "\$CERTBOT_DOMAIN" =~ .*anapartner.in* ]];then
    echo "If domain contains anapartner.in, we need to remove the last part to avoid duplicates during registration"
    CERTBOT_DOMAIN="\${CERTBOT_DOMAIN/.anapartner.in/}"
    echo \$CERTBOT_DOMAIN
fi

DNS_REC_NAME="_acme-challenge.\$CERTBOT_DOMAIN"

curl -s -X PUT \
"https://api.godaddy.com/v1/domains/${DOMAIN}/records/TXT/\${DNS_REC_NAME}" \
-H  "accept: application/json" -H  "Content-Type: application/json" \
-H  "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"clean\", \"name\": \"\${DNS_REC_NAME}\", \"ttl\": 600 }]"

EOF
chmod 555 godaddy-clean.sh


echo ""
echo "Start Loop to use Let's Encrypt's certbot tool"
echo "#####################################################################"
while read -r domain;
do

echo "#####################################################################"
echo "$domain"
echo ""
certbot -d $domain --agree-tos --register-unsafely-without-email --manual \
--preferred-challenges dns --manual-auth-hook ./godaddy.sh \
--manual-cleanup-hook ./godaddy-clean.sh --manual-public-ip-logging-ok \
--force-renewal certonly

echo ""

done < wildcard-domains.txt
# Add logic to handle the certs/keys when they are issued.

echo ""
echo "#####################################################################"
ls -lart /etc/letsencrypt/archive/*

#rm -rf godaddy.sh godaddy-clean.sh &>/dev/null
echo ""

echo ""
echo "After validation, the TXT records will be marked with the 'clean' string "
echo "#####################################################################"
echo "nslookup  -type=txt _acme-challenge.eks.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
echo "nslookup  -type=txt _acme-challenge.aks.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
echo "nslookup  -type=txt _acme-challenge.gke.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"

View of the script being executed

Files generated by Let’s Encrypt certbot [certN.pem, privkeyN.pem, chainN.pem, and fullchainN.pem]

Adding wildcard certificates to Virtual Appliance

While preparing to enable a feature within the Identity Suite Virtual Appliance for TLS encryption for the Provisioning Tier to send notification events, we noticed some challenges that we wish to clarify.

The Identity Suite Virtual Appliance has four (4) web services that use pre-built self-signed certificates when first deployed. Documentation is provided to change these certificates/keys using aliases or soft-links.

One of the challenges we discovered is the Provisioning Tier may be using an older version of libcurl & OpenSSL that have constraints that need to be managed. These libraries are used during the web submission to the IME ETACALLBACK webservice. We will review the processes to capture these error messages and how to address them.

We will introduce the use of Let’s Encrypt wildcard certificates into the four (4) web services and the Provisioning Server’s ETACALLBACK use of a valid public root certificate.

The Apache HTTPD service is used both as a reverse proxy (TCP 443) to the three (3) Wildfly services and to serve the vApp Management Console (TCP 10443). The Apache HTTPD service’s SSL cert uses the path /etc/pki/tls/certs/localhost.crt for a self-signed certificate. A soft-link is used to redirect this to a location that the ‘config’ service ID has access to modify. The same is true for the private key.

/etc/pki/tls/certs/localhost.crt -> /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.crt

/etc/pki/tls/private/localhost.key -> /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.key
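
The soft-links can be confirmed from the vApp ‘config’ shell before swapping in new files (the private key path may require elevated permissions):

ls -l /etc/pki/tls/certs/localhost.crt /etc/pki/tls/private/localhost.key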

A view of the Apache HTTPD SSL self-signed certificate and key.

The three (3) Wildfly services are deployed for the Identity Manager, Identity Governance, and Identity Portal components. The configuration for TLS security is defined within the primary Wildfly configuration file, standalone.xml. The current configuration is already set up with the paths to the PKCS12 keystore files:

/opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caim-srv

/opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caig-srv

/opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caip-srv

A view of the three (3) Wildfly PKCS12 keystore files and view of the self-signed cert/key with the pseudo hostname of the vApp host.

Provisioning Server process for TLS enablement for IME ETACALLBACK process.

Step 1. Ensure that the Provisioning Server is enabled to send data/notification events to the IME.

Step 2. Within the IME Management Console, there is a baseURL parameter. This string is sent down to the Provisioning Server upon restart of the IME and appended to a list. This list is viewable and manageable within the Provisioning Manager UI under [System/Identity Manager Setup]. The URL string will be appended with the string ETACALLBACK/?env=identityEnv. Within the Provisioning Server, we can manage which URLs have priority in the list. This list is a failover list, not a load-balancing one. We have the opportunity to introduce an F5 or similar load-balancer URL, but we should enable TLS security first.

Step 3. Add the public root CA cert or CA chain certs to the following location: [System/Domain Configuration/Identity Manager Server/Trusted CA Bundle]. This PEM file may be placed in the Provisioning Server bin folder with no path, or a fully qualified path to the PEM file may be used. Note: The Provisioning Server uses a version of openssl/libcurl that will report errors that can be managed with wildcard certificates. We will show the common errors in this blog entry.

Let’s Encrypt (https://letsencrypt.org/) Certificates

Let’s Encrypt offers a free service to issue wildcard certificates. We are fond of using their DNS method to request a wildcard certificate.

sudo certbot certonly --manual  --preferred-challenges dns -d *.aks.iam.anapartner.dev --register-unsafely-without-email

Let’s Encrypt will provide four (4) files to be used. [certN.pem, privkeyN.pem, chainN.pem, fullchainN.pem]

cert1.pem   [The primary server side wildcard cert]

privkey1.pem   [The primary server side private key associated with the wildcard cert]

chain1.pem   [The intermediate chain certs that are needed to validate the cert1 cert]

fullchain1.pem    [cert1.pem and chain1.pem concatenated together, in that order]

NOTE:  fullchain1.pem is the file you would typically use as the cert for a solution, so the solution will also have the intermediate CA chain certs for validation.

Important Note: One of the root public certs was cross-signed by another root public cert that has since expired. Most solutions are able to manage this challenge, but the provisioning service ETACALLBACK has an issue with an expired certificate; there are replacements for this expired certificate that we will walk through below. Ref: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/

Create a new CA chain PEM files for LE (Let’s Encrypt) validation to use with the Provisioning Server.

CERT=lets-encrypt-r3.pem;curl -s -O -L https://letsencrypt.org/certs/$CERT ; openssl x509 -text -noout -in $CERT | grep -i -e issue -e not -e subject ; ls -lart $CERT

CERT=isrgrootx1.pem;curl -s -O -L https://letsencrypt.org/certs/$CERT ; openssl x509 -text -noout -in $CERT | grep -i -e issue -e not -e subject ; ls -lart $CERT

CERT=isrg-root-x2.pem;curl -s -O -L https://letsencrypt.org/certs/$CERT ; openssl x509 -text -noout -in $CERT | grep -i -e issue -e not -e subject ; ls -lart $CERT

cat lets-encrypt-r3.pem isrgrootx1.pem isrg-root-x2.pem > combine-chain-letsencrypt.pem
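
An optional sanity check of the combined chain file may be done with openssl before handing it to the Provisioning Server; a sketch, assuming the issued wildcard cert (cert1.pem from Let’s Encrypt) is in the same folder:

openssl crl2pkcs7 -nocrl -certfile combine-chain-letsencrypt.pem | openssl pkcs7 -print_certs -noout
openssl verify -CAfile combine-chain-letsencrypt.pem cert1.pem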

Replacing the certificates for the vApp Apache, Wildfly (3), and Provisioning Server (ETACALLBACK)

Apache HTTPD Service (TCP 443/10443) (May need to reboot vApp)

cp -r -p  /home/config/aks.iam.anapartner.dev/fullchain2.pem /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.crt

cp -r -p  /home/config/aks.iam.anapartner.dev/privkey2.pem  /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.key

Wildfly Services (TCP 8443/8444/8445) for IM, IG, and IP (restart services after update)

View of the Wildfly (Java) services for IM, IG, and IP (restart services after update)
openssl pkcs12 -export -inkey /home/config/aks.iam.anapartner.dev/privkey2.pem -in /home/config/aks.iam.anapartner.dev/fullchain2.pem -out /opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caim-srv -password pass:changeit
restart_im

openssl pkcs12 -export -inkey /home/config/aks.iam.anapartner.dev/privkey2.pem -in /home/config/aks.iam.anapartner.dev/fullchain2.pem -out /opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caig-srv -password pass:changeit
restart_ig

openssl pkcs12 -export -inkey /home/config/aks.iam.anapartner.dev/privkey2.pem -in /home/config/aks.iam.anapartner.dev/fullchain2.pem -out /opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caip-srv -password pass:changeit
restart_ip

Provisioning Server ETACALLBACK public certificate location (restart imps service) [Place in bin folder]

su - imps
cp -r -p /home/config/aks.iam.anapartner.dev/combine-chain-letsencrypt.pem /opt/CA/IdentityManager/ProvisioningServer/bin/
imps stop; imps start

Validation of updated services.

Use openssl s_client to validate certificates being used. Examples below for TCP 443 and 8443

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:443 -CAfile combine-chain-letsencrypt.pem  | grep "Verify return code"

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:8443 -CAfile combine-chain-letsencrypt.pem  | grep "Verify return code"

To view all certs in the chain, use the below openssl s_client command with -showcerts switch:

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:443 -CAfile combine-chain-letsencrypt.pem  -showcerts

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:8443 -CAfile combine-chain-letsencrypt.pem  -showcerts

Validate with browsers and view the HTTPS lock symbol to view the certificate

Test with an update to a Provisioning Global User’s attribute [Note: No need to sync to accounts]. Ensure that the Identity Manager Setup Log Level = DEBUG to monitor this submission with the Provisioning Server etanotifyXXXXXXX.log.

A view of the submission for updating the Global User’s Description via IMPS (IM Provisioning Server) etanotifyXXXXXXX.log. The configuration will be loaded for using the URLs defined. Then we can monitor for the submission of the update.

Finally, a view using the IME VST (View Submitted Tasks) for the ETACALLBACK process using the task Provisioning Modify User.

Common TLS errors seen with the Provisioning Server ETACALLBACK

Ensure that the configuration is enabled for the debug log level, so we may view these errors to correct them. [rc=77] will occur if the PEM file does not exist or is not in the correct path. [rc=51] will occur if the URL defined does not match the exact server-side certificate (this is a good reason to use a wildcard certificate, or to adjust your URL FQDN to match the cert subject (CN=XXXX) value). [rc=60] will occur if the remote web service is using a self-signed certificate, or if there is an expired certificate anywhere within the certificate chain or the public root CA cert.
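
To spot the expired-certificate condition behind [rc=60], the validity dates can be reviewed quickly with openssl; an example sketch using the hostname from the validation section above and the individual Let’s Encrypt chain files downloaded earlier:

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:443 2>/dev/null | openssl x509 -noout -subject -dates
for c in lets-encrypt-r3.pem isrgrootx1.pem isrg-root-x2.pem; do openssl x509 -in $c -noout -subject -enddate; done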

Other Error messages (curl)

If you see an error message with Apache HTTPD (TCP 443) with curl about “curl: (60) Peer certificate cannot be authenticated with known CA certificates”, please ignore this, as the vApp does not have the “ca-bundle.crt” configuration enabled. See RedHat note: https://access.redhat.com/solutions/523823

References

https://knowledge.broadcom.com/external/article?articleId=54198
https://community.broadcom.com/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=849ea21f-cc5a-4eac-9988-465a75165cf1
https://curl.se/libcurl/c/libcurl-env.html
https://knowledge.broadcom.com/external/article/204213/how-to-setup-inbound-notifications-to-us.html
https://knowledge.broadcom.com/external/article/213480/how-to-replace-the-vapp-wildfly-ssl-cert.html
https://www.stephenwagner.com/2021/09/30/sophos-dst-root-ca-x3-expiration-problems-fix/