Are copy-n-paste operations impacting your Identity & Governance solutions?

Microsoft Office Suite’s Autocorrect: How Character Replacements Impact Identity and Governance Solutions => Garbage-In-Garbage-Out (GIGO)

When thinking about identity and governance solutions, many of us consider factors such as password security, multi-factor authentication, or access control. Rarely do we contemplate the subtle implications of character replacements in our word processing software. However, Microsoft Office Suite’s Autocorrect feature, while intended to enhance the user experience, has introduced concerns around the copy-paste process, especially with characters like the dash and quotes. Let’s delve into the nuances of this issue and its potential impacts for two (2) of the most common replacements.

A Common Scenario:

Automated emails from ticket systems are forwarded to administrators or users, who may then copy-n-paste strings from the email (or MS Word document) into an identity/governance solution, both for efficiency and to avoid mistyping characters when moving from one solution to another. These fields could be used for provisioning access by a business role name or kicking off a governance campaign search.

Dash vs. Emdash: What’s the Big Deal?

Microsoft Word (and other programs within the Office Suite) has a habit of automatically converting the standard dash (-) to an em dash (—) when it assumes the user is attempting to create a longer break in the sentence. On the surface, this appears to be a simple formatting choice. Yet, when you copy content containing these characters and paste it into identity or governance platforms, unexpected issues may arise. This em dash behavior appears to follow British style formatting, per this reference: https://www.sussex.ac.uk/informatics/punctuation/hyphenanddash/dash

Identity systems often depend on exact character matching for elements like usernames, role names, domain names, or system strings. For instance, if a user is instructed to input “domain-name.com” but inadvertently pastes “domain—name.com” (with an em dash), the system will not recognize the latter as a valid entry. This leads to failed authentication attempts, locked accounts, and potential security concerns as users and admins scramble to correct the discrepancies. Worst case, the identity/governance solution uses UTF-8 or a newer character set and accepts the special characters, but the underlying IG/IM database still uses an older ASCII format that does not recognize them. If this occurs, a data cleanup operation by the IM/IG/DBA teams is typically needed.

The Smart Quotes Dilemma

Similarly, Microsoft’s Autocorrect feature replaces standard straight quotes (") with smart quotes (“ ”) for a more visually appealing look in documents. While they may enhance the aesthetic feel of a document, smart quotes can wreak havoc in systems expecting the simpler ASCII version.

Code or a script that depends on exact string matching will fail if smart quotes are used instead of standard quotes. This can lead to malfunctioning applications, scripts, or integrations when developers or administrators copy and paste content from Office documents directly into configuration files or codebases.
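
A minimal bash sketch of the failure mode (the strings are illustrative):

expected='domain-name.com'    # typed with an ASCII hyphen (0x2D)
pasted='domain—name.com'      # pasted from Word with an em dash (U+2014)
if [ "$expected" = "$pasted" ]; then echo "match"; else echo "no match"; fi
# prints: no match, because the two strings differ at the byte level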

Governance Solutions and Data Integrity

In governance solutions, consistency and data integrity are of the utmost importance. Consider a scenario where policy documents or terms of use agreements are drafted in Word. Any auto-replaced characters might be unintentionally added to official records or database entries. When such documents are parsed or processed by automated systems, unexpected behaviors might occur due to these seemingly innocuous character changes.

Recommendations and Best Practices:

  1. Awareness: Ensure that your team is aware of these auto-corrections. Training sessions or instructional guides can be used to inform users about these pitfalls.
  2. Disable Autocorrect: If you frequently copy and paste between the Office Suite and other platforms, consider disabling the autocorrect features for these two (2) common replacements (dash/quotes). See the screenshots below for how to disable these two (2) features in MS Outlook, MS Word, and MS PowerPoint. Fortunately, we do not have to modify MS Excel. For global updates, companies may wish to leverage their patch process to update the Microsoft registry settings that control these autocorrect behaviors for all users.
  3. Post-Copy Verification: After pasting content, always double-check critical characters to ensure they have not been auto-replaced. It may be necessary to incorporate policy verification rules to prevent entry of these two (2) common replacement characters, e.g. PX Policy UI data verification rules. A detection and cleanup sketch is shown after this list.
  4. Use Plain Text Editors: When dealing with sensitive or system-related information, use plain text editors like Notepad, Notepad++ or VSCode to avoid any auto-formatting.
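
Below is a minimal bash sketch for the post-copy verification noted in item 3 (the file name pasted-input.txt is an example). GNU grep with PCRE support (-P) flags the offending characters, and glibc iconv transliteration can normalize them back to ASCII:

# Flag smart quotes and en/em dashes that Autocorrect may have introduced
grep --color=auto -P -n '[\x{2013}\x{2014}\x{2018}\x{2019}\x{201C}\x{201D}]' pasted-input.txt

# Normalize them to their ASCII equivalents (em dash to -, curly quotes to straight quotes)
iconv -f UTF-8 -t ASCII//TRANSLIT pasted-input.txt > cleaned-input.txt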

Location of auto-correction of dash (-) to em dash (—) & quotes in MS Outlook

Location of auto-correction of dash (-) to em dash (—) & quotes in MS Word

Location of auto-correction of dash (-) to em dash (—) & quotes in MS PowerPoint

Fortunately, we do NOT have this issue in MS Excel for the two (2) characters we are reviewing in this blog.

An impact of copy-n-paste:

For example, if you are using an Oracle database and you see inverted question mark characters (¿) in your data sets, this is a strong indicator that the database is auto-replacing special characters it does not recognize. The below example showcases how users/administrators using copy-n-paste operations to create new IM/IG objects would later find those objects are not returned by searches, as the stored names no longer match what was entered the first time.

If the database has a default (non-Unicode) character map, this effort will not be simple, as the DBAs must make a major change that will require an outage window. The DBAs may also need to be involved in the data cleanup or replacement exercise to adjust the malformed entries.
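
A hedged sqlplus sketch to scope the cleanup effort; the table and column names (im_role, name) and the connect string are examples, so substitute your solution's schema:

# Count and list rows where Oracle stored the replacement character (¿ is U+00BF)
sqlplus -s "user/password@ORCL" <<'EOF'
SELECT COUNT(*) FROM im_role WHERE INSTR(name, UNISTR('\00BF')) > 0;
SELECT name, DUMP(name, 1016) FROM im_role WHERE INSTR(name, UNISTR('\00BF')) > 0;
EOF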

Conclusion

The Microsoft Office Suite’s Autocorrect feature demonstrates how even well-intentioned, user-friendly functionalities can introduce unforeseen challenges. For those operating in the realm of identity and governance, an awareness of these issues is essential. It’s a testament to the intricate nature of modern software environments, where even the simplest character can have significant implications. Confirm your identity access / governance solutions have a matching character set between the solution stack and the underlying database.

Leveling Up: The Imperative of Upgrading Your Symantec Identity Suite Virtual Appliance to 14.5 (CentOS Stream 9) for Robust Randomness, Enhanced Jitterentropy, and Bouncy Castle Entropy Insights

In the intricate world of cybersecurity and identity management, evolving threats and vulnerabilities demand our undivided attention. When considering upgrading your Symantec Identity Suite Virtual Appliance, understanding the nuanced technological landscape, including the perks of Jitterentropy and the challenges associated with Java’s Bouncy Castle entropy, can make a world of difference.

The Technological Need:

  1. Robust Randomness with Jitterentropy: Relying on the natural timing jitter of CPUs, Jitterentropy has emerged as a game-changing hardware random number generator (RNG). The latest renditions of the Symantec Identity Suite Virtual Appliance leverage this RNG, ensuring unparalleled randomness, making decoding by potential threats a herculean task.
  2. Operational Efficiency: Upgrades tuned with contemporary features promise optimized performance. Coupled with Jitterentropy, the RNG processes are turbocharged, promising minimal downtime and an elevated user experience.
  3. Challenges with Bouncy Castle Entropy in Java: Bouncy Castle, despite its vast utility in cryptographic operations in Java, has had its share of entropy-related issues. Some known problems include:
  • Predictability: Certain RNG implementations in Bouncy Castle have been found to be somewhat predictable, which could compromise security.
  • Seed Reuse: There have been instances where seeds were reused, which again poses security concerns.
  • Slow Entropy Accumulation: At times, the entropy collection is slower than expected, leading to potential operational delays. For security solutions, a lack of entropy impacts both scale and usability.

Business Justification for Rapid Response:

With the business landscape in perpetual flux, the right tech decisions can spell the difference between stagnation and growth:

  1. Enhanced Security: Incorporating Linux OS with Jitterentropy is synonymous with state-of-the-art security. Such forward-thinking measures drastically curtail potential security breaches.
  2. Cost Savings: Forward-looking upgrades, especially those that incorporate cutting-edge features like Jitterentropy, offer tangible long-term financial advantages. Fewer breaches, reduced system errors, and saved manual efforts contribute positively to the bottom line.
  3. Staying Competitive: In an era of rapid technological advancements, integrating elements like Jitterentropy ensures you’re leading from the front.
  4. Compliance and Regulatory Adherence: With cybersecurity standards constantly on the move, staying updated is non-negotiable. Evade potential legal issues and hefty fines by staying on top of these norms.
  5. Customer Trust: By showcasing a commitment to data safety through advanced systems (and by addressing known entropy issues like those in Bouncy Castle), businesses can strengthen customer trust and foster long-term loyalty.

Validating Jitterentropy Integration in the Linux Kernel: A Comprehensive Guide

As the world of Linux continues to evolve, one exciting development is the incorporation of jitterentropy into the kernel. This robust hardware random number generator (RNG) enhances the quality of randomness, making our systems even more secure. If you’re keen on understanding, implementing, or validating this feature in your Linux setup, this guide is tailored just for you.

What is Jitterentropy?

Jitterentropy is an RNG based on the natural timing jitter that occurs in CPUs. In the realm of cybersecurity, RNGs are of paramount importance; they generate the random numbers pivotal for cryptographic operations. The less predictable these numbers are, the tougher it becomes for malicious actors to crack them.

Why is Jitterentropy Essential?

For systems relying on cryptographic functions, such as encryption, the RNG’s caliber can’t be overstated. Jitterentropy guarantees first-rate randomness, upping your system’s security game. https://www.chronox.de/jent.html

How to Validate Jitterentropy Integration:

  1. Identify Your Kernel Version:
    Kick things off by determining your kernel version using the uname -r or uname -a command.
   uname -r

This will provide insights into your system’s hostname, kernel version, build date, and architecture (uname -a shows the full set; uname -r just the kernel release). You can determine if your Linux kernel is greater than 5.6, when jitterentropy functionality was added directly to the kernel. https://github.com/torvalds/linux/commit/3f2dc2798b81531fd93a3b9b7c39da47ec689e55

  2. Is Jitterentropy Part of Your Kernel Configuration?:
    Deploy this simple grep command to figure out if jitterentropy is enabled in your kernel:
   grep -HRin jitter /boot/config*

An output showing CONFIG_CRYPTO_JITTERENTROPY=y confirms that jitterentropy is enabled. The “y” here indicates that the feature is in-built in the kernel.

  3. Time-Driven Testing for Jitterentropy:
    By simulating multiple pulls from the entropy source, you can gauge how efficient jitterentropy is:
   time for i in {1..1000}; do time dd if=/dev/random bs=1 count=16 2>/dev/null | base64; done

This command performs two functions:

  • It times each of the 1000 pulls from /dev/random, allowing you to measure the average time taken, basically emulating 1000 rapid password changes of 16 characters.
  • It provides an overall timing for the 1000 pulls, letting you know the total duration of the entire operation. If your system remains responsive and completes the pulls swiftly, it’s a strong indication that your entropy source is in prime working condition, which implies that any solution on the appliance has adequate entropy to service users and processes at scale.

Below is another command that adds a counter to show that 1000 iterations have passed. Note: if there is no entropy pump, this process will NOT succeed. The Linux OS entropy will be rapidly depleted and any solution on the host will be delayed. Ensure there is an entropy pump to keep the performance you need.

counter=1;MAX=1000;time while [ $counter -le $MAX ]; do echo "##########  $counter ##########" ; time dd if=/dev/random bs=16 count=1 2> /dev/null | base64; counter=$(( $counter + 1 )); done;

Wrapping Up:

The integration of Jitterentropy in the Linux kernel underscores the open-source community’s relentless dedication to fortifying security. By understanding, testing, and leveraging it, you ensure that your system is bolstered against potential threats, always staying a step ahead in the cybersecurity arena. Keep exploring, stay updated, and most importantly, remain secure!

Review upgrading your Symantec Identity Suite to improve performance for your users and scale to millions of transactions.

For non-appliances or older Linux OS (Kernel release < 5.6):

Review adding the haveged or jitterentropy packages to your Linux OS to avoid delays to any business processes. See our prior blog discussing entropy and how adding an entropy pump to your Linux OSes has value: https://anapartner.com/2021/06/25/the-hidden-cost-of-entropy-to-your-business/
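
A minimal sketch for an RHEL/CentOS-family host, assuming the EPEL repository provides the package:

sudo dnf -y install epel-release
sudo dnf -y install haveged
sudo systemctl enable --now haveged

# Confirm the entropy pool stays healthy under load
cat /proc/sys/kernel/random/entropy_avail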

SiteMinder / CA SSO Metrics

These are exciting times, marked by a transformative change in the way modern applications are rolled out. The transition to Cloud and related technologies is adding considerable value to the process. If you are utilizing solutions like SiteMinder SSO or CA Access Gateway, having access to real-time metrics is invaluable. In the following article, we’ll explore the inherent features of the CA SSO container form factor that facilitate immediate metrics generation, compatible with platforms like Grafana.

Our lab cluster is an on-premise Red Hat OpenShift Kubernetes cluster running the CA SSO container solution, available as part of the Broadcom Validate Beta Program. The deployment of the different SSO elements, like policy servers and access gateway, is facilitated through a Helm package provided by Broadcom. Within our existing OpenShift environment, a Prometheus metrics server is configured to gather time-series data. By default, the tracking of user workload metrics isn’t activated in OpenShift and must be manually enabled. To do so, make sure the ‘enableUserWorkload‘ setting is toggled to ‘true‘. You can either create or modify the existing configmap to ensure this setting is activated, as sketched below.
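
A minimal sketch of that configmap (cluster-monitoring-config in the openshift-monitoring namespace is the configmap documented by OpenShift for this setting):

cat << 'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF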

Grafana is also deployed for visuals and connected to the Prometheus data source to create metrics visuals. The Grafana data source can be created using YAML. Note that creation of the Grafana datasource will require the Prometheus URL as well as an authorization token to access stored metrics. This token can be extracted from the cluster using commands like those below.
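
A hedged sketch of extracting the token; the service account name assumes the default openshift-monitoring deployment:

# Older clusters
oc sa get-token prometheus-k8s -n openshift-monitoring

# Newer clusters (oc 4.11+)
oc create token prometheus-k8s -n openshift-monitoring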

Also ensure that a role binding exists to allow the service account (prometheus-k8s) in the openshift-monitoring namespace access to a role that permits monitoring of resources in the target (smdev) namespace.
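
A hedged example of granting such access; the built-in view cluster role is one way to allow read/monitoring access, and smdev is our lab namespace:

oc adm policy add-role-to-user view \
   system:serviceaccount:openshift-monitoring:prometheus-k8s -n smdev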

Once the CA SSO Helm chart is installed with metrics enabled, we must also ensure that the namespace in which CA SSO gets deployed has the openshift.io/cluster-monitoring label set to true.
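
A one-line sketch, assuming the chart was deployed into the smdev namespace:

oc label namespace smdev openshift.io/cluster-monitoring=true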

We are all set now and should see the metrics getting populated using the OpenShift console (Observe -> Metrics menu item) as well as available for Grafana’s consumption.

In the era of next-generation application delivery, integrated monitoring and observability features now come standard, offering considerable advantages, particularly for operations and management teams seeking clear insights into usage and solution value. This heightened value is especially notable in deployments via container platforms. If you’re on the path to modernization and are looking to speed up your initiatives, feel free to reach out. We’re committed to your success and are keen to partner with you.

VMware Workstation and Vyos Software Router: Expedite on-prem Kubernetes and OpenShift Labs

With the rapid evolution of technology and increasing complexity of software solutions, using tools like VMware Workstation for learning and testing has become necessary. Deploying intricate systems like Kubernetes and OpenShift on VMware Workstation provides an opportunity for in-depth understanding and experience before implementing these solutions on a larger, organization-wide scale.

VMware Workstation, coupled with the powerful container orchestration capabilities of Kubernetes and OpenShift, offers an unparalleled platform for crafting next-generation applications and solutions and lowering costs. It’s a potent combination that can significantly boost your organization’s operational efficiency, application delivery speed, and overall software development lifecycle.

In the realm of advanced solution deployments, the right tools can make all the difference. With VMware Workstation, you’re not just getting a virtualization tool; you’re acquiring a platform that helps you delve deeper into modern software architectures and innovations. Harness its potential and equip yourself with the knowledge and experience needed to stay ahead of the curve.

Networking is one of the critical aspects that makes VMware Workstation such a versatile tool. VMware Workstation offers three types of networking options to suit different needs and scenarios. Let’s explore each of these in detail.

1. Bridged Networking

Bridged Networking is the simplest and most straightforward networking mode. When you configure a VM to use bridged networking, the VM is connected directly to the existing network that your host computer is connected to. In essence, it will be as though the VM is another physical device on your network.

With bridged networking, your VM can have its unique identity on the network, such as its IP address, making it an entirely independent entity from the host. This is particularly useful when you need the VM to interact directly with other devices on the network, or when it needs to be accessible from other computers.

2. Network Address Translation (NAT)

The NAT mode allows your VMs to share the IP address of the host machine. Essentially, all the network traffic from the VMs is routed through the host machine. This implies that the VMs can access the external network and the internet, but they cannot be directly reached from the external network since they are ‘hidden’ behind the host.

NAT is highly beneficial when you want to isolate your VMs from your network while still providing them with network access. For instance, this can be handy when testing untrusted applications or experimenting with potentially unstable software that could disrupt your network.

3. Host-Only Networking

The Host-Only networking mode creates a private network shared only between the VMs and the host machine. This means that your VMs can communicate with each other and the host machine but cannot access the external network or the internet.

Host-Only networking is particularly useful when you want to create a secure, isolated environment for your VMs, away from the vulnerabilities of the external network. This is ideal when working with sensitive data or creating a controlled environment for testing network applications.

Each of these three VMware Workstation networking modes has advantages and suitable use-cases. The choice between them depends on your specific needs, whether creating an isolated testing environment or mimicking a complex, interconnected network for a comprehensive deployment simulation.

Expanding Host-Only for use with OpenShift/Kubernetes Labs

As discussed earlier, VMware Workstation offers three (3) network modes: Bridged, NAT, and Host-Only. The bridged mode has a challenge: it will share your office or home network and request an IP address to be assigned. This may not be acceptable in your office, or you may wish to keep your main home network free from VMware hosts. NAT is typically the most selected network mode for a VMware guest OS, as it will not impact the office/home network. The limitation with NAT is that it only allows outward-bound traffic from the guest OS, via the VMware host. There are no routing rules to allow traffic from outside to access the guest OS images. The last network mode is Host-Only. Host-Only is designed to be an isolated network segment between the VMware guest OS and the VMware host OS. There is no outward or inward-bound traffic. This network mode is typically not used when access to the internet is required.

Introducing: Vyos Software Router for VMware (OVA)

We wanted a more flexible solution than these three (3) modes. We wanted to standardize a network segment for our OpenShift/Kubernetes training/development that did not require a change between locations (like bridged) or force our internal resources to reset their bridged network to match.

After a review, we selected VMware Host-Only, which has the basics of what we needed. We were only missing routing rules for inbound and outbound traffic. We looked around and found a ready-made software solution that we could immediately leverage with minimal configuration changes to VMware client OS/images. The Vyos software router was already provided in an OVA format for immediate use.

We downloaded and imported the OVA into VMware workstation.

Since we planned to have multiple host network segments to manage large data for OpenShift/Kubernetes, we bumped up the VMware guest OS specs from 1 vCPU 4 GB RAM to 2 vCPU 8 GB RAM, and adjusted the extra network adapters to be Host-only or Custom (Host-Only) networks.

After we adjusted the guest OS specs, we snapshotted this VMware guest OS image to allow rollback if we wanted to change a feature later. We started up the image and logged in with the default credentials: vyos/vyos

After login via the VMware Guest OS console, we immediately updated Vyos configuration to allow us to ssh into the Guest OS and perform our work in a better UI.

Below is an example of the bootstrap configuration to enable remote access via ssh, and update eth0 NIC to a bridged IP address that we can access. We standardized a rule that all network routing would use IP xxx.yyy.zzz.254.

conf
set service ssh port '22'
set interfaces ethernet eth0 address '192.168.2.254/24'
commit
save

We then switched to our favorite SSH terminal tool of MobaXterm (or PuTTY) to validate we could access the Vyos software router remotely.

We are now ready to add a configuration that allows a default route, inbound routes, and outbound routes for our four (4) network NICs.

The below lines may be pasted into the SSH session. ‘conf’ (config) will open the Vyos configuration shell so that all lines can be pasted in. We will define static IP addresses for all four (4) NICs, a static route to our external network router, outbound rules, and inbound rules. Please ensure that the IP addresses for the four (4) NICs match what you have defined.

conf
set service ssh port '22'

set interfaces ethernet eth0 address '192.168.2.254/24'
set interfaces ethernet eth0 description 'BRIDGED NETWORK'

set interfaces ethernet eth1 address '10.10.10.254/24'
set interfaces ethernet eth1 description 'VMWARE HOST NETWORK vmnet1'

set interfaces ethernet eth2 address '10.0.0.254/24'
set interfaces ethernet eth2 description 'VMWARE HOST NETWORK vmnet2 - BAREMETAL OPENSHIFT'

set interfaces ethernet eth3 address '192.168.242.254/24'
set interfaces ethernet eth3 description 'VMWARE HOST NETWORK vmnet3'

delete protocols static route 0.0.0.0/0
set protocols static route 0.0.0.0/0 next-hop 192.168.2.1

delete nat

set nat source rule 20 description "Allow Outbound Traffic from VMware Host network from eth1"
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.10.10.0/24'
set nat source rule 20 translation address masquerade

set nat source rule 30 description "Allow Outbound Traffic from VMware Host network from eth2"
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '10.0.0.0/24'
set nat source rule 30 translation address masquerade

set nat source rule 40 description "Allow Outbound Traffic from VMware Host network from eth3"
set nat source rule 40 outbound-interface 'eth0'
set nat source rule 40 source address '192.168.242.0/24'
set nat source rule 40 translation address masquerade

set nat source rule 60 description "Allow Inbound Traffic from Bridged to VMware host network eth1"
set nat source rule 60 outbound-interface 'eth1'
set nat source rule 60 source address '192.168.2.0/24'
set nat source rule 60 translation address masquerade

set nat source rule 61 description "Allow Inbound Traffic from Bridged to VMware Host network eth2"
set nat source rule 61 outbound-interface 'eth2'
set nat source rule 61 source address '192.168.2.0/24'
set nat source rule 61 translation address masquerade

set nat source rule 62 description "Allow Inbound Traffic from Bridged to Vmware Host network eth3"
set nat source rule 62 outbound-interface 'eth3'
set nat source rule 62 source address '192.168.2.0/24'
set nat source rule 62 translation address masquerade

commit
save
exit
show interface
show ip route 0.0.0.0

Please double check the IP addresses match your VMware Host-only networks.

Validation

We will validate inbound and outbound traffic using ping on the Vyos software router. When this passes, we will move on to routing configuration for external devices.

After basic validation, please snapshot your Vyos guest OS.

In the final step, we will add routing configuration on MS Windows OS and Linux OS to reach all four (4) networks from any external device and any VMware image on one of the four (4) networks. The consolidated reference script below walks through all of the steps, including these routes.

# Ref: https://docs.vyos.io/en/equuleus/configuration/system/default-route.html
#      https://docs.vyos.io/en/equuleus/quick-start.html
#      https://bertvv.github.io/cheat-sheets/VyOS.html

#Step 000:  Increase Vyos Router specs from 1 vCPU 4 GB RAM to 2 vCPU 8 GB RAM when adding more than two interfaces in VMware Workstation


#Step 00: Review VMware Host vmnet addresses, use to build your rules.

ip a | grep vmnet

16: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.10.10.1/24 brd 10.10.10.255 scope global vmnet1
17: vmnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.0.0.1/24 brd 10.0.0.255 scope global vmnet2
18: vmnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 192.168.242.1/24 brd 192.168.242.255 scope global vmnet3
19: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 192.168.243.1/24 brd 192.168.243.255 scope global vmnet8
20: vmnet255: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.255.0.1/24 brd 10.255.0.255 scope global vmnet255



# Step 0:  Boot strap first interface (via vmware console of vyos running image -  after login with vyos / vyos)

conf
set service ssh port '22'
set interfaces ethernet eth0 address '192.168.2.254/24'
commit
save
exit
show interface


# Step 1: Vyos configuration - after login with vyos / vyos with an SSH putty session tool to allow copy-n-paste of the below rows

conf
set service ssh port '22'

set interfaces ethernet eth0 address '192.168.2.254/24'
set interfaces ethernet eth0 description 'BRIDGED NETWORK'

set interfaces ethernet eth1 address '10.10.10.254/24'
set interfaces ethernet eth1 description 'VMWARE HOST NETWORK vmnet1'

set interfaces ethernet eth2 address '10.0.0.254/24'
set interfaces ethernet eth2 description 'VMWARE HOST NETWORK vmnet2 - BAREMETAL OPENSHIFT'

set interfaces ethernet eth3 address '192.168.242.254/24'
set interfaces ethernet eth3 description 'VMWARE HOST NETWORK vmnet3'

delete protocols static route 0.0.0.0/0
set protocols static route 0.0.0.0/0 next-hop 192.168.2.1

delete nat

set nat source rule 20 description "Allow Outbound Traffic from VMware Host network from eth1"
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.10.10.0/24'
set nat source rule 20 translation address masquerade

set nat source rule 30 description "Allow Outbound Traffic from VMware Host network from eth2"
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '10.0.0.0/24'
set nat source rule 30 translation address masquerade

set nat source rule 40 description "Allow Outbound Traffic from VMware Host network from eth3"
set nat source rule 40 outbound-interface 'eth0'
set nat source rule 40 source address '192.168.242.0/24'
set nat source rule 40 translation address masquerade

set nat source rule 60 description "Allow Inbound Traffic from Bridged to VMware host network eth1"
set nat source rule 60 outbound-interface 'eth1'
set nat source rule 60 source address '192.168.2.0/24'
set nat source rule 60 translation address masquerade

set nat source rule 61 description "Allow Inbound Traffic from Bridged to VMware Host network eth2"
set nat source rule 61 outbound-interface 'eth2'
set nat source rule 61 source address '192.168.2.0/24'
set nat source rule 61 translation address masquerade

set nat source rule 62 description "Allow Inbound Traffic from Bridged to Vmware Host network eth3"
set nat source rule 62 outbound-interface 'eth3'
set nat source rule 62 source address '192.168.2.0/24'
set nat source rule 62 translation address masquerade

commit
save
exit
show interface
show ip route 0.0.0.0 


# Step 2:  Update external lab network devices (laptop on 192.168.2.x) to use the Vyos Router for these new routes

# MS Win OS examples:
route add -p 10.10.10.0 mask 255.255.255.0 192.168.2.254
route add -p 10.0.0.0 mask 255.255.255.0 192.168.2.254
route add -p 192.168.242.0 mask 255.255.255.0 192.168.2.254

ping 10.10.10.254
ping 10.0.0.254
ping 192.168.242.254

# Linux OS examples:
sudo route add -net  10.0.0.0/24 gw 192.168.2.254
sudo route add -net  10.10.10.0/24 gw 192.168.2.254
sudo route add -net  192.168.242.0/24 gw 192.168.2.254
route -n
netstat -rn   (dnf -y install net-tools)

ping 10.10.10.254
ping 10.0.0.254
ping 192.168.242.254

# Step 3:  Optional:  Add static routes on network router if missed on a device, to redirect to the vyos bridged interface.


# Step 4:  Update the VMware DHCP configuration file to use the new Vyos Router for any Vmware images with DHCP, then reboot images.
option routers  10.10.10.254;     [VMware Workstation on Linux OS: /etc/vmware/vmnet1/dhcp/dhcpd.conf ]
option routers  10.0.0.254;       [VMware Workstation on Linux OS: /etc/vmware/vmnet2/dhcp/dhcpd.conf ]
option routers  192.168.242.254;  [VMware Workstation on Linux OS: /etc/vmware/vmnet3/dhcp/dhcpd.conf ]

# Note:  MS Win OS:  The VMware DHCP configurations are combined in one file:  C:\ProgramData\VMware\vmnetdhcp.conf
# 
# Restart images, view routes, then do an outbound submission as a test.


ping 8.8.8.8
ping www.google.com


# Step 5:  For Openshift, ensure that your install-config.yaml or agent-config.yaml is defined with the correct gateway router for Vyos.



# Step 6:  Exercise your VMware host images and then monitor within Vyos via:
show nat source translations
show nat source statistics
monitor traffic interface any filter 'host 10.0.0.99'      [embedded tcpdump]

Overview of Vyos Software Router with VMware Workstation and three (3) host-only networks with bridged network

We now have the methodology to use over 250 possible VMware host-only network segments for our networking labs with OpenShift and Kubernetes that require internet outbound and/or inbound access. We can standardize a unique host-only network segment to share with team members and clients for training/education/development. With the embedded tcpdump feature in the Vyos software router image, we can quickly address and isolate network routing configuration challenges.

Hopefully, this will allow you to continue to expand your knowledge and awareness of new architectures with your dedicated lab environment.

Secure Application Introspection

Locate “the good, the bad, and the ugly” data with a transparent proxy.

Have you been frustrated with an enterprise/cloud solution’s API implementation or documentation, where a single case-sensitive data field entry delays progress? Does the solution have undocumented features for older client tools? Do you wish to know what your mobile apps or laptop sends to the internet?

Utilizing a proxy can help with all the above, and if the process is quick and straightforward, so much the better.

Typically, setting up a proxy takes quite a bit of effort and several steps. You may need to modify a client host/mobile phone to redirect web traffic with the OS environment variables HTTP_PROXY and HTTPS_PROXY, or adjust the underlying OS network/iptables. Previously, we typically set up the open-source JMeter proxy with the OS environment variables to capture secure traffic data. This process works well for most applications. Additionally, the Firefox browser allows manual proxy configuration without depending on the OS environment settings, if we wish to capture the user experience and any data challenges.

The example below shows modifying a Firefox browser to use a “manual proxy configuration” instead of system/auto configurations.

To ensure accurate capture of web traffic submissions, a more thorough method is needed, as the above process may fail if client tools or mobile apps cannot detect OS environmental variables.

We have found a perfect combination within the open-source tool of MITMproxy with podman and the embedded VPN feature of WireGuard.

The process in six (6) steps:

  1. Deployment of the WireGuard VPN client on the client host (MS Win/Linux/Mobile)
  2. Deployment of MITMproxy using podman (or docker) with WireGuard mode/configuration
  3. Edit the wireguard.conf file to have the correct public IP address, import this file into the WireGuard VPN client, and establish the VPN connection.
  4. Copy the mitmproxy-ca-cert.cer to the client component Java or OS keystore (if needed) as a trusted CA cert.
  5. Open the MITMproxy Web UI or monitor the command line dashboard
  6. Execute your test on the client host and view the results in the MITMproxy Web UI for both request and response.

MITMproxy UI with WireGuard mode enabled.

The WireGuard client configuration will be provided in three (3) places: the MITMproxy logs (podman logs mitmproxy-wireguard), the text file wireguard.conf (if podman/docker volumes are enabled), and the MITMproxy UI. The QR code is enabled for mobile phone use, but since the public IP address provided is not correct in this view, you will need to manually edit this configuration on your mobile phone for those use-cases to have the correct endpoint IP address.

MITMproxy UI with standard proxy configuration mode.

Bash Script:

A script to deploy MITMproxy with podman on a Linux OS with two (2) configurations: WireGuard mode, for any client applications that do not honor HTTP_PROXY/HTTPS_PROXY, and standard proxy mode. This bash script uses a shared volume so both containers use the SAME certs, to avoid managing different certs upon restart of the containers.

#!/bin/bash
######################################################################################
#
#  Deploy MITMproxy with two (2) configurations:
#
#     MITMProxy with WireGuard mode enabled (UDP 51820) and Web UI (TCP 8081)
#     MITMProxy with standard proxy enabled (TCP 9080) and Web UI (TCP 9081)
#
#  Notes:  Use podman exec to check path and env variables
#    - Binaries:  dnf -y install podman 
#    - Use shared folder to avoid having two (2) different configuration files for both copies
#    - Do not forget the :z for -v volumes to avoid permissions issues
#    - Do not forget quotes around env -e variables
#    - Use --rm as needed
#    - Use this switch as needed, but do not leave it on:   --log-level debug \
#
#   Basic:  podman run -it -v /tmp/mitmproxy/:/home/mitmproxy/.mitmproxy:z -p 8080:8080 mitmproxy/mitmproxy
#   Logs:   podman logs mitmproxy-wireguard
#   Shell:  podman exec -it -u root mitmproxy-wireguard bash
#
#  Options Ref.  https://docs.mitmproxy.org/stable/concepts-options/
#   - added stream_large_bodies=10m to lower impact to mitmproxy due
#       to possible large json/xml payloads 
#
#  ANA 07/2023
#
######################################################################################

MITMPROXY_HOMEPATH=/tmp/mitmproxy
echo ""
echo "You may delete the shared folder of ${MITMPROXY_HOMEPATH}"
echo "to remove prior configuration of mitmproxy certs & wireguard.conf files"
echo ""
#sudo rm -rf ${MITMPROXY_HOMEPATH}

mkdir -p ${MITMPROXY_HOMEPATH}
chmod -R 777 ${MITMPROXY_HOMEPATH}
ls -hlrt ${MITMPROXY_HOMEPATH}

echo ""
echo " Starting mitmproxy-wireguard proxy "
podman rm mitmproxy-wireguard -f  &>/dev/null
podman run -d -it --name mitmproxy-wireguard \
   -p 51820:51820/udp -p 8081:8081 \
   -l mitmproxy \
   -v ${MITMPROXY_HOMEPATH}:/home/mitmproxy/.mitmproxy:z  \
    docker.io/mitmproxy/mitmproxy \
    mitmweb --mode wireguard --ssl-insecure  --web-host 0.0.0.0 --web-port 8081 --set stream_large_bodies=10m


echo ""
echo " Starting mitmproxy-default proxy "
podman rm mitmproxy-default -f  &>/dev/null
podman run -d -it --name mitmproxy-default \
    -p 9080:9080 -p 9081:9081 \
    -l mitmproxy  \
    -v ${MITMPROXY_HOMEPATH}:/home/mitmproxy/.mitmproxy:z  \
     docker.io/mitmproxy/mitmproxy \
     mitmweb --set listen_port=9080 --web-host 0.0.0.0 --web-port 9081

echo ""
echo ""
echo "###############################################################################"
echo ""
echo " Running Podman Containers for MITMproxy"
sleep 5
podman ps -a --no-trunc | grep -i mitmproxy
echo ""
echo "###############################################################################"
podman logs  mitmproxy-default
echo ""
echo " Monitor the mitmproxy-default UI @ http://$(curl -s ifconfig.me):9081 "
echo "###############################################################################"
podman logs  mitmproxy-wireguard
echo ""
echo " Monitor the mitmproxy-wireguard UI @ http://$(curl -s ifconfig.me):8081 "
echo "###############################################################################"
echo ""
echo "Please update the mitmproxy wireguard client configuration endpoint address to:  $(curl -s ifconfig.me)"
echo ""
echo "###############################################################################"
echo ""

MITMproxy CERTS:

Add the mitmproxy-ca-cert to the trusted root certs folder on your client host OS keystore (MS Win: certlm.msc) and/or, if there is a Java keystore for the client tool, add the mitmproxy-ca-cert.cer as a trusted cert:

keytool -import -trustcacerts -file mitmproxy-ca-cert.cer -alias mitmproxy -keystore capam.keystore

WireGuard client configuration:

To ensure that only selected web traffic is monitored through the WireGuard VPN to MITMproxy, make changes to the wireguard.conf file before importing it. Specifically, update the AllowedIPs address field to include a single IP address. Additionally, modify the Endpoint field to direct traffic to the public IP address of the MITMproxy host on UDP port 51820. If deploying MITMproxy on AWS or other cloud hosts, confirm that the firewall/security groups permit TCP 8080, 8081, 9080, 9081, and UDP 51820. Once you have activated the WireGuard client, test your processes on the host and monitor the MITMproxy UI for updates.
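
A hedged sketch of an edited wireguard.conf; the keys and addresses are placeholders, since MITMproxy generates the real values in its output:

[Interface]
PrivateKey = <client-private-key-from-generated-config>
Address = 10.0.0.1/32
DNS = 10.0.0.53

[Peer]
PublicKey = <mitmproxy-public-key-from-generated-config>
AllowedIPs = 192.168.2.220/32
Endpoint = <public-ip-of-mitmproxy-host>:51820

Here AllowedIPs is restricted to the single host we wish to observe, and Endpoint carries the public IP address of the MITMproxy host on UDP 51820.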

An example of data captured between two (2) CLI tools. These CLI tools did not honor the OS environmental variables of HTTP_PROXY & HTTPS_PROXY. Using the MITMproxy with WireGuard process, we can now confirm the delta submission behavior that was masked by the CLI tools. This process was useful to confirm that MS PowerShell was removing special characters from a password string, e.g. ! (exclamation mark).

Example of script deploying two (2) MITMproxy containers

What changed? Business Logic XML Delta Process

When your solution grows, it may be challenging to identify when and what business logic team members added. We can parse through change control documentation, but that may be a long and frustrating process. One of the challenges is that objects created within the Symantec (Broadcom) Identity Suite solution may not have date stamps on the object within the database tables. Again, we could parse through logs and the Task Persistence and Archive Task Persistence databases.

Please stop this painful manual behavior.

Let’s introduce a new streamlined process to help your administrative and business IAG teams.

Below is a process to use existing tools & samples within the existing IAG solution.

Goals: Automate a daily backup and create an XML delta of new business objects that were created the prior days. The tools used will be the included Import Export utility with additional Linux commands. We focused this process on the Symantec (Broadcom) Identity Suite Virtual Appliance with the built-in userID of ‘config’.

To start, we will copy the solution’s included Import/Export sample to the ‘config’ userID home folder, along with the minimal library files required to execute this process. We will then modify the script to perform a delta compare every day it is executed by the ‘config’ crontab.

# Step 1 - Create a copy of the Import / Export Utility under the config home folder

mkdir -p /home/config/backup/export
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/ImportExportUtility/ /home/config/backup/export/


# Step 2 - Copy the three (3) extra JAR files required by the Import / Export Utility

mkdir -p /home/config/backup/export/lib
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/lib/idmutils.jar /home/config/backup/export/lib/
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/lib/log4j.jar /home/config/backup/export/lib/
cp -r -p /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/lib/bc-fips.jar /home/config/backup/export/lib/

# Step 3 - Backup the current shell script and properties file before our changes.

cd /home/config/backup/export/ImportExportUtility/
cp -r -p config.properties config.properties.org
cp -r -p ImportExportUtil.sh ImportExportUtil.sh.org

We will now update the config.properties with your own hostname and credentials.

Below are the contents of a working file, with additional commentary to assist with replacement of the PBES-encrypted password. Please note that mode=export, and that we have selected resourceType=RoleDefinition. Over 99% of the business objects will reside within this single XML file when it is exported. We set localPath=. (the current path) to allow the automated scripts to rename files for use with an XML diff tool. You may wish to update the export path to a network path.

## provide IM server base url with port number 
## Use netstat -apn | grep 8080 to confirm IP address
baseUrl=http://192.168.2.220:8080
## Login credential (in case Management console is protected), use JSAFE algorithm of PasswordTool to encrypt your plain text password
## Example: /opt/CA/IdentityManager/IAM_Suite/IdentityManager/tools/PasswordTool > ./pwdtools.sh -JSAFE -p Password01
userName=admin
password={PBES}:B8+4u/F3aiZ9sXus6HyDNA==
## provide mode import/export
mode=export
## provide resource type ALL/Directory/Environment/RoleDefinition
#resourceType=Directory
resourceType=RoleDefinition
#resourceType=Environment
#resourceType=All
## provide comma separated list of Directories to import/export, in case of import it should be xml file name
directories=ProvStore,UserStore,AuthenticationDirectory
## provide Environment name for Environment/Role Definition import/export, in case of environment import it should be zip file name
environment=identityEnv
## In case of Role Definition import please provide xml file name
roleDefFileName=env-RoleDefinitions
## provide local path to save/get the resources, in case of export directory structure will be created
localPath=.
## provide request time out in minutes
timeout=600
## restart Environment after import: yes/no, For restart to work environment name should be provided
restartEnv=no

We will now update the shell script ImportExportUtil.sh.

We have renamed this shell script to ExportRoles.sh to clearly call out what we wish this process to focus on, and it is what we will call via a crontab entry. We have enhanced JAVA_OPTS to speed up exports, as the time depends on the number of business objects; e.g., an IME with 40K provisioning roles may take over 60 minutes to export. We have then created a process that will generate an XML diff between the prior export and the latest export. We date-time stamp the exports to allow review of past changes.

#!/bin/sh
#
#  This batch file sets up the environment and runs the IM Import and Export Utility
#

if [ -z "$JAVA_HOME" ] ; then
  echo "---------------------------------------------------------------------"
  echo "ERROR: Cannot find JAVA_HOME"
  echo "Please specify JAVA_HOME variable in this script file."
  echo "---------------------------------------------------------------------"
  exit
fi

export JAVA_HOME

MYCLASSPATH=.:./importExportUtility.jar:../lib/bc-fips.jar:../lib/idmutils.jar:../lib/log4j.jar

export MYCLASSPATH

##############
### ANA, 04/22
# Update JAVA_OPTS for speed as 40K Prov Roles may take 60-90 minutes to export
tz=`/bin/date --utc +%Y%m%d%H%M%S,3%N.0Z`
echo ""
echo "Starting at : $tz"
echo ""
#JAVA_OPTS="-Xms256m -Xmx512m $JAVA_OPTS"

JAVA_OPTS="-Xms256m -Xmx4g -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true $JAVA_OPTS"


##############
### ANA, 04/22
# Rename prior backup to be used for diff operation prior to exporting a new file
dos2unix identityEnv-RoleDefinitions.xml                                           >/dev/null 2>&1
cp -r -p identityEnv-RoleDefinitions.xml  identityEnv-RoleDefinitions_prior.xml    >/dev/null 2>&1

export JAVA_OPTS
$JAVA_HOME/bin/java $JAVA_OPTS -cp $MYCLASSPATH com.ca.identitymanager.importexportutility.client.ImportExportClient


##############
### ANA, 04/22
# Setup config crontab to execute this task every day at 1:11 AM
#  11 1 * * *   /home/config/scripts/create_pr_and_import_them/ExportRoles/ExportRoles.sh  >/dev/null 2>&1
# Use https://crontab.guru/ to define the correct scheduler
# Rename vApp (all) files for future review and delta compares

dos2unix identityEnv-RoleDefinitions.xml                                           >/dev/null 2>&1
cp -r -p identityEnv-RoleDefinitions.xml    identityEnv-RoleDefinitions_$tz.xml    >/dev/null 2>&1


##############
### ANA, 04/22
# Perform Diff operation between prior and new exports of IME Roles&Tasks.xml files
# Create an XML diff between two prior files.
#diff <(xmllint --c14n identityEnv-RoleDefinitions_prior.xml) <(xmllint --c14n identityEnv-RoleDefinitions.xml)

xmllint --c14n identityEnv-RoleDefinitions_prior.xml                                       > identityEnv-RoleDefinitions_prior_xmllint.xml
xmllint --c14n identityEnv-RoleDefinitions.xml                                             > identityEnv-RoleDefinitions_xmllint.xml
diff identityEnv-RoleDefinitions_prior_xmllint.xml identityEnv-RoleDefinitions_xmllint.xml > identityEnv-RoleDefinitions_DIFF_xmllint.xml
cp -r -p identityEnv-RoleDefinitions_DIFF_xmllint.xml identityEnv-RoleDefinitions_DIFF_xmllint_$tz.xml    >/dev/null 2>&1
echo ""
echo "There are `wc -l identityEnv-RoleDefinitions_DIFF_xmllint.xml` rows different between the two files"
echo ""
echo "There are `grep -i   '<ImsRole' identityEnv-RoleDefinitions_DIFF_xmllint.xml | wc -l ` Roles delta between the two files"
echo ""
echo "Please review if these deltas are correct:  cat identityEnv-RoleDefinitions_DIFF_xmllint.xml | more "
echo ""
tz=`/bin/date --utc +%Y%m%d%H%M%S,3%N.0Z`
echo "Done at : $tz "
echo  ""

Examples of this process executed. First, let’s generate five (5) provisioning roles to be loaded into the IME and then import them.

config@pwdha001 VAPP-14.3.0 (192.168.2.220):~/scripts/create_pr_and_import_them > ./create_5k_pr_xml_file_for_ime_mgmt_import.sh

real    0m0.028s
user    0m0.006s
sys     0m0.013s

Number of Provisioning Roles Created in XML File: 5
-rw-rw-r-- 1 config config 6.2K Apr 23 14:01 new_provisioning_roles.xml

Now we will import this XML file into the IME (see sample below for a Provisioning Role addition)

<?xml version="1.0" encoding="UTF-8"?>
<ims:ImsTemplate xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://imsenvironmentobjects/xsd imsconfig://schema/ImsEnvironmentObjects.xsd" xmlns:ims="http://imsenvironmentobjects/xsd" xmlns:imsrule="http://imsmemberrule/xsd" xmlns:imsscope="http://imsscoperule/xsd" xmlns:imschange="http://imschangeaction/xsd">

        <!--   ******************** Create 10K Provisioning Roles ********************   -->

        <ImsRole name="prov-role_160823241129774" roletype="PROVISIONING" assignable="true" adminassignable="true" enabled="true" allowduplicatecustom="false" description="DESC_upto_128_Characters" custom01="CF01_upto_1000_Characters" custom02="CF02" custom03="CF03" custom04="CF04" custom05="CF05" custom06="CF06" custom07="CF07" custom08="CF08" custom09="CF09" custom10="CF10">

                <AdminPolicy assignable="true" adminassignable="true">
                        <imsrule:MemberRule><RoleMember><AdminRole name="User Manager"/></RoleMember></imsrule:MemberRule>
                        <imsscope:ScopeRule object="USER" purpose="*"><All/></imsscope:ScopeRule>
                </AdminPolicy>
                <AdminPolicy assignable="true" adminassignable="true">
                        <imsrule:MemberRule><All/></imsrule:MemberRule>
                        <imsscope:ScopeRule object="USER" purpose="*"><All/></imsscope:ScopeRule>
                </AdminPolicy>

                <OwnerPolicy>
                        <imsrule:MemberRule><RoleMember><AdminRole name="System Manager"/></RoleMember></imsrule:MemberRule>
                </OwnerPolicy>

                <Attribute name="comments">2022-04-23T20:01:47.000Z : COMMENTS_upto_128_Characters</Attribute>
                <Attribute name="department">DEPT_upto_100_Characters</Attribute>
     </ImsRole>

</ims:ImsTemplate>

Output from an import with the ImportExportUtil shell script

-----------------------------------------------------------
-------------------Starting a new Import-------------------
-----------------------------------------------------------
Importing Role Definition to Environment 'identityEnv'...
#############  Import Output  #############
Warning: Updating the IdentityMinder environment "identityEnv"
  Deploying role definitions...
    Importing Roles...

*********
0 error(s), 0 warning(s)
Role Definition Imported Successfully!!!

Now we will run our new ExportRoles.sh script, where it will export the IME and do an XML delta compare operation between a prior export and the latest export.

config@pwdha001 VAPP-14.3.0 (192.168.2.220):~/backup/export/ImportExportUtility > time ./ExportRoles.sh

Starting at : 20220423190707,3582824826.0Z

-----------------------------------------------------------
-------------------Starting a new Export-------------------
-----------------------------------------------------------
Exporting Role Definition from Environment 'identityEnv'...
 disposition attachment; filename=identityEnv-RoleDefinitions.xml;
Exported Filename: identityEnv-RoleDefinitions.xml
Role Definition exported Successfully!!!

There are 76 identityEnv-RoleDefinitions_DIFF_xmllint.xml rows different between the two files

There are 5 Roles delta between the two files

Please review if these deltas are correct:  cat identityEnv-RoleDefinitions_DIFF_xmllint.xml | more

Done at : 20220423190713,3241475139.0Z


real    0m5.663s
user    0m2.509s
sys     0m0.394s

Here is a view of the delta file that is generated. We now KNOW what was added since the last export. We have a date range as well.

>       <ImsRole adminassignable="true" allowduplicatecustom="false" assignable="true" custom01="CF01_upto_1000_Characters" custom02="CF02" custom03="CF03" custom04="CF04" custom05="CF05" custom06="CF06" custom07="CF07" custom08="CF08" custom09="CF09" custom10="CF10" description="DESC_upto_128_Characters" enabled="true" name="prov-role_160823241129774" roletype="PROVISIONING">
>               <AdminPolicy adminassignable="true" assignable="true">
>                       <imsrule:MemberRule><RoleMember><AdminRole name="User Manager"></AdminRole></RoleMember></imsrule:MemberRule>
>                       <imsscope:ScopeRule object="USER" purpose="*"><All></All></imsscope:ScopeRule>
>               </AdminPolicy>
>               <AdminPolicy adminassignable="true" assignable="true">
>                       <imsrule:MemberRule><All></All></imsrule:MemberRule>
>                       <imsscope:ScopeRule object="USER" purpose="*"><All></All></imsscope:ScopeRule>
>               </AdminPolicy>
>               <OwnerPolicy>
>                       <imsrule:MemberRule><RoleMember><AdminRole name="System Manager"></AdminRole></RoleMember></imsrule:MemberRule>
>               </OwnerPolicy>
>               <Attribute name="comments">2022-04-23T20:01:47.000Z : COMMENTS_upto_128_Characters</Attribute>
>               <Attribute name="department">DEPT_upto_100_Characters</Attribute>
>       </ImsRole>

A view of the files generated in the export process. NOTE: The first time this is executed, there will be an error due to the missing prior (first) file. Make a change in the IME, then run this process a second time to see the delta.

For a larger IME, the export may take 60-90 minutes. The below example shows a full “ALL” export, which may be configured in the config.properties file. This IME was updated with over 40K provisioning roles to help isolate a Java memory leak.

Counting the roles (Admin, Access, Provisioning) within the IME (Oracle DB) table of IM_ROLE
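
A hedged sqlplus sketch of such a count; the IM_ROLE table name is from the caption above, and the connect string/schema owner are examples:

sqlplus -s "user/password@ORCL" <<'EOF'
SELECT COUNT(*) FROM im_role;
EOF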

When we tested loading 5K provisioning roles, we noted it would take about 30 minutes. If the input file was over 10 MB, the import process would fail with a generic Java error.

Let’s Encrypt DNS Challenge

We are fond of the Let’s Encrypt DNS challenge process over the alternative challenge methods. The Let’s Encrypt DNS challenge using certbot allows businesses to scale their replacement of certs that are not exposed directly to the internet. The certbot tool has switches that allow custom scripts to be run, which allows for a lot of flexibility.
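
The certbot switches we lean on are --manual-auth-hook and --manual-cleanup-hook. A hedged sketch of a wildcard request using the hook scripts built later in this entry (the domain is an example):

certbot certonly --manual --preferred-challenges dns \
  --manual-auth-hook ./godaddy.sh \
  --manual-cleanup-hook ./godaddy-clean.sh \
  -d '*.aks.iam.anapartner.dev'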

Installation of certbot

Review the website https://certbot.eff.org/; there are sections per OS on how to use “snap” to install this tool. Example with CentOS 7: https://certbot.eff.org/instructions?ws=other&os=centosrhel7

sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

We have used certbot with manual steps every 90 days with our development DNS domains but wanted to automate these steps. Unfortunately, we noticed a challenge: Google Domains does not have an API available to update DNS records. After research, we did find that Google Cloud does have the APIs available.

We had several options:

a) Move the DNS domains from Google Domains to Google Cloud,

b) redirect CNAME records from Google Domains to Google Cloud,

c) move to another Domain Register that has APIs available,

d) redirect CNAME records from Google Domain to another Domain Register.

Since we had another domain registrar offering APIs, we decided to choose option (d). This entry will review our steps and how we leverage certbot and the two (2) DNS domain registrars.

Step 1: Google Domains – Create _acme-challenge CNAME records.

Step 2: 2nd Domain Register – GoDaddy – Create _acme-challenge TXT records.

Step 3: Enable the API Key on the 2nd Domain Register and the API Password Code

https://developer.godaddy.com/keys?hbi_code=1

Step 4: Validate via CURL the ability to update the TXT records through the 2nd Domain Register’s API.

curl -s -X PUT \
"https://api.godaddy.com/v1/domains/anapartner.in/records/TXT/_acme-challenge.aks.iam.anapartner.dev" \
-H  "accept: application/json" -H  "Content-Type: application/json" \
-H  "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"TESTING THIS STRING FIELD", \"name\": \"_acme-challenge.aks.iam.anapartner.dev", \"ttl\": 600 }]"

Finally, we will create a script that will be executed by crontab every 85 days; an example crontab entry is shown at the end of this section. Please note that the scripts called by certbot are created as HERE DOCS to allow portability within a single script.

#!/bin/bash
###############################################################################
#
#  Update Google DNS via a round-about way through the 2nd DNS Registrar's DNS API (anapartner.in)
#
#
#  Pre-work:
#     1. Use an existing domain or purchase one from the 2nd DNS Registrar
#
#     2. Create Google Domains CNAME records for each wildcard domain pointing to a remote DNS TXT record
#     _acme-challenge.gke.iam.anapartner.org CNAME _acme-challenge.gke.iam.anapartner.org.anapartner.in
#     _acme-challenge.aks.iam.anapartner.org CNAME _acme-challenge.aks.iam.anapartner.org.anapartner.in
#     _acme-challenge.eks.iam.anapartner.org CNAME _acme-challenge.eks.iam.anapartner.org.anapartner.in
#
#     3. Create 2nd DNS Registrar Domain TXT records for each of the objects to be updated
#     _acme-challenge.gke.iam.anapartner.org.baugher.net
#     _acme-challenge.aks.iam.anapartner.org.baugher.net
#     _acme-challenge.eks.iam.anapartner.org.baugher.net
#
#     4. Enable the 2nd DNS Registrar API for Production Access (Developer) & store the KEY & SECRET for use
#        https://developer.godaddy.com/keys?hbi_code=1
#
#     5. Install certbot     dnf -y install certbot
#         Note:  certbot will use two (2) variables of:  CERTBOT_DOMAIN (after the -d switch)
#         and  CERTBOT_VALIDATION (the text string to be used for TXT records)
#
#  ANA 07/2022
#
###############################################################################

GODADDY_API_KEY="XXXXXXXXXXXXXy"
GODADDY_API_SECRET="XXXXXXXXXXXXXXX"
DOMAIN="anapartner.in"


echo ""
echo "Create wildcard domain list"
echo "This may be any TXT record for a remote domain FQDN that is mapped in the anapartner.in"
echo "#####################################################################"

#cat << 'EOF' > wildcard-domains.txt
#*.gke.iam.anapartner.in
#*.aks.iam.anapartner.in
#*.eks.iam.anapartner.in
#EOF

cat << 'EOF' > wildcard-domains.txt
*.gke.iam.anapartner.dev
*.aks.iam.anapartner.dev
*.eks.iam.anapartner.dev
EOF

WILDCARD_DOMAIN=anapartner.dev

echo ""
echo "Create godaddy.sh script to update TXT records"
echo "#####################################################################"
cat <<  EOF > godaddy.sh
#!/bin/bash
if [[ "\$CERTBOT_DOMAIN" =~ .*anapartner.in* ]];then
    echo "If domain contains anapartner.in, we need to remove the last part to avoid duplicates during registration"
    CERTBOT_DOMAIN="\${CERTBOT_DOMAIN/".anapartner.in"//}"
    echo \$CERTBOT_DOMAIN
fi

DNS_REC_NAME="_acme-challenge.\$CERTBOT_DOMAIN"


curl -s -X PUT \
"https://api.godaddy.com/v1/domains/${DOMAIN}/records/TXT/\${DNS_REC_NAME}" \
-H  "accept: application/json" -H  "Content-Type: application/json" \
-H  "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"\$CERTBOT_VALIDATION\", \"name\": \"\${DNS_REC_NAME}\", \"ttl\": 600 }]"

sleep 30
EOF

chmod 555 godaddy.sh

echo ""
echo "Create godaddy-clean.sh script to wipe TXT records - as needed"
echo "#####################################################################"
cat << EOF > godaddy-clean.sh
#!/bin/bash

if [[ "\$CERTBOT_DOMAIN" =~ .*anapartner.in* ]];then
    echo "If domain contains anapartner.in, we need to remove the last part to avoid duplicates during registration"
    CERTBOT_DOMAIN="\${CERTBOT_DOMAIN/".anapartner.in"//}"
    echo \$CERTBOT_DOMAIN
fi

DNS_REC_NAME="_acme-challenge.\$CERTBOT_DOMAIN"

curl -s -X PUT \
"https://api.godaddy.com/v1/domains/${DOMAIN}/records/TXT/\${DNS_REC_NAME}" \
-H  "accept: application/json" -H  "Content-Type: application/json" \
-H  "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"clean\", \"name\": \"\${DNS_REC_NAME}\", \"ttl\": 600 }]"

EOF
chmod 555 godaddy-clean.sh


echo ""
echo "Start Loop to use Let's Encrypt's certbot tool"
echo "#####################################################################"
while read -r domain;
do

echo "#####################################################################"
echo "$domain"
echo ""
certbot -d $domain --agree-tos --register-unsafely-without-email --manual \
--preferred-challenges dns --manual-auth-hook ./godaddy.sh \
--manual-cleanup-hook ./godaddy-clean.sh --manual-public-ip-logging-ok \
--force-renewal certonly

echo ""

done < wildcard-domains.txt
# Add logic to handle the certs/keys when they are issued.

echo ""
echo "#####################################################################"
ls -lart /etc/letsencrypt/archive/*

#rm -rf godaddy.sh godaddy-clean.sh &>/dev/null
echo ""

echo ""
echo "After validation, the TXT records will be marked with the 'clean' string "
echo "#####################################################################"
echo "nslookup  -type=txt _acme-challenge.eks.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
echo "nslookup  -type=txt _acme-challenge.aks.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
echo "nslookup  -type=txt _acme-challenge.gke.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"

View of the script being executed

Files generated by Let’s Encrypt certbot [certN.pem, privkeyN.pem, chainN.pem, and fullchainN.pem]
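Since cron cannot express “every 85 days” directly, one hedged approach is to run the renewal script on the 1st of every other month, which keeps each renewal inside the 90-day certificate validity window. The script path and log file below are hypothetical placeholders:

# Append a renewal entry to the current user's crontab (path and log are placeholders)
( crontab -l 2>/dev/null; echo '0 2 1 */2 * /opt/scripts/renew-le-certs.sh >> /var/log/renew-le-certs.log 2>&1' ) | crontab -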

Adding wildcard certificates to Virtual Appliance

While preparing to enable TLS encryption for the Provisioning Tier to send notification events within the Identity Suite Virtual Appliance, we noticed some challenges that we wish to clarify.

The Identity Suite Virtual Appliance has four (4) web services that use pre-built self-signed certificates when first deployed. Documentation is provided to change these certificates/keys using aliases or soft-links.

One of the challenges we discovered is that the Provisioning Tier may be using an older version of libcurl & OpenSSL that has constraints that need to be managed. These libraries are used during the web submission to the IME ETACALLBACK webservice. We will review the processes to capture these error messages and how to address them.

We will introduce the use of Let’s Encrypt wildcard certificates into the four (4) web services and the Provisioning Server’s ETACALLBACK use of a valid public root certificate.

The Apache HTTPD service is used both as a reverse proxy (TCP 443) to the three (3) Wildfly Services and as the service for the vApp Management Console (TCP 10443). The Apache HTTPD service SSL certs use the path /etc/pki/tls/certs/localhost.crt for a self-signed certificate. A soft-link redirects this to a location that the ‘config’ service ID has access to modify. The same is true for the private key.

/etc/pki/tls/certs/localhost.crt -> /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.crt

/etc/pki/tls/private/localhost.key -> /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.key

A view of the Apache HTTPD SSL self-signed certificate and key.

The three (3) Wildfly services are deployed for the Identity Manager, Identity Governance, and Identity Portal components. The configuration for TLS security is defined within the primary Wildfly configuration file of standalone.xml. The current configuration is already set up with the paths to PKCS12 keystore files of:

/opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caim-srv

/opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caig-srv

/opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caip-srv

A view of the three (3) Wildfly PKCS12 keystore files and view of the self-signed cert/key with the pseudo hostname of the vApp host.

Provisioning Server process for TLS enablement of the IME ETACALLBACK process.

Step 1. Ensure that the Provisioning Server is enabled to send data/notification events to the IME.

Step 2. Within the IME Management Console, there is a baseURL parameter. This string is sent down to the Provisioning Server upon restart of the IME and appended to a list. This list is viewable and manageable within the Provisioning Manager UI under [System/Identity Manager Setup]. The URL string will be appended with the string ETACALLBACK/?env=identityEnv. Within the Provisioning Server, we can manage which URLs have priority in the list. This list is a failover list, not a load-balancing list. We have the opportunity to introduce an F5 or similar load-balancer URL, but we should enable TLS security first.

Step 3. Add the public root CA cert or CA chain certs to the following location: [System/Domain Configuration/Identity Manager Server/Trusted CA Bundle]. This PEM file may be placed in the Provisioning Server bin folder with no path, or a fully qualified path to the PEM file may be used. Note: The Provisioning Server is using a version of openssl/libcurl that will report errors that can be managed with wildcard certificates. We will show the common errors in this blog entry.

Let’s Encrypt https://letsencrypt.org/ Certificates

Let’s Encrypt offers a free service to build wildcard certificates. We are fond of using their DNS method to request a wildcard certificate.

sudo certbot certonly --manual  --preferred-challenges dns -d *.aks.iam.anapartner.dev --register-unsafely-without-email

Let’s Encrypt will provide four (4) files to be used. [certN.pem, privkeyN.pem, chainN.pem, fullchainN.pem]

cert1.pem   [The primary server side wildcard cert]

privkey1.pem   [The primary server side private key associated with the wildcard cert]

chain1.pem   [The intermediate chain certs that are needed to validate the cert1 cert]

fullchain1.pem    [cert1.pem and chain1.pem concatenated together in the correct order]

NOTE: fullchain1.pem is the file you would typically use as the cert for a solution, so the solution will also have the intermediate CA chain certs for validation.
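Equivalently, fullchain1.pem can be rebuilt by concatenating the two files in order, which is a quick way to confirm what the file contains:

cat cert1.pem chain1.pem > fullchain1.pem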

Important Note: One of the root public certs was cross-signed by another root public cert that has expired. Most solutions are able to manage this challenge, but the provisioning service ETACALLBACK struggles with an expired certificate in the chain. There are replacements for this expired certificate that we will walk through. Ref: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/

Create a new CA chain PEM file for LE (Let’s Encrypt) validation to use with the Provisioning Server.

CERT=lets-encrypt-r3.pem;curl -s -O -L https://letsencrypt.org/certs/$CERT ; openssl x509 -text -noout -in $CERT | grep -i -e issue -e not -e subject ; ls -lart $CERT

CERT=isrgrootx1.pem;curl -s -O -L https://letsencrypt.org/certs/$CERT ; openssl x509 -text -noout -in $CERT | grep -i -e issue -e not -e subject ; ls -lart $CERT

CERT=isrg-root-x2.pem;curl -s -O -L https://letsencrypt.org/certs/$CERT ; openssl x509 -text -noout -in $CERT | grep -i -e issue -e not -e subject ; ls -lart $CERT

cat lets-encrypt-r3.pem isrgrootx1.pem isrg-root-x2.pem > combine-chain-letsencrypt.pem
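As a quick sanity check, the combined chain can be used to verify a previously issued leaf certificate (cert1.pem from the Let’s Encrypt output above). Since openssl verify treats every certificate in -CAfile as trusted, the intermediate in the bundle is sufficient for the leaf to validate:

openssl verify -CAfile combine-chain-letsencrypt.pem cert1.pem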

Replacing the certificates for the vApp Apache, Wildfly (3), and Provisioning Server (ETACALLBACK)

Apache HTTPD Service (TCP 443/10443) (May need to reboot vApp)

cp -r -p  /home/config/aks.iam.anapartner.dev/fullchain2.pem /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.crt

cp -r -p  /home/config/aks.iam.anapartner.dev/privkey2.pem  /opt/CA/VirtualAppliance/custom/apache-ssl-certificates/localhost.key

Wildfly Services (TCP 8443/8444/8445) for IM, IG, and IP (restart services after update)

View of the Wildfly (Java) services for IM, IG, and IP (restart services after update)
openssl pkcs12 -export -inkey /home/config/aks.iam.anapartner.dev/privkey2.pem -in /home/config/aks.iam.anapartner.dev/fullchain2.pem -out /opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caim-srv -password pass:changeit
restart_im

openssl pkcs12 -export -inkey /home/config/aks.iam.anapartner.dev/privkey2.pem -in /home/config/aks.iam.anapartner.dev/fullchain2.pem -out /opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caig-srv -password pass:changeit
restart_ig

openssl pkcs12 -export -inkey /home/config/aks.iam.anapartner.dev/privkey2.pem -in /home/config/aks.iam.anapartner.dev/fullchain2.pem -out /opt/CA/VirtualAppliance/custom/wildfly-ssl-certificates/caip-srv -password pass:changeit
restart_ip

Provisioning Server ETACALLBACK public certificate location (restart imps service) [Place in bin folder]

su - imps
cp -r -p /home/config/aks.iam.anapartner.dev/combine-chain-letsencrypt.pem /opt/CA/IdentityManager/ProvisioningServer/bin/
imps stop; imps start

Validation of updated services.

Use openssl s_client to validate certificates being used. Examples below for TCP 443 and 8443

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:443 -CAfile combine-chain-letsencrypt.pem  | grep "Verify return code"

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:8443 -CAfile combine-chain-letsencrypt.pem  | grep "Verify return code"

To view all certs in the chain, use the below openssl s_client command with -showcerts switch:

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:443 -CAfile combine-chain-letsencrypt.pem  -showcerts

true | openssl s_client -connect vapp143.aks.iam.anapartner.dev:8443 -CAfile combine-chain-letsencrypt.pem  -showcerts

Validate with browsers and use the HTTPS lock symbol to view the certificate

Test with an update to a Provisioning Global User’s attribute [Note: No need to sync to accounts]. Ensure that the Identity Manager Setup Log Level = DEBUG to monitor this submission with the Provisioning Server etanotifyXXXXXXX.log.

A view of the submission for updating the Global User’s Description via IMPS (IM Provisioning Server) etanotifyXXXXXXX.log. The configuration will be loaded for using the URLs defined. Then we can monitor for the submission of the update.

Finally, a view using the IME VST (View Submitted Tasks) for the ETACALLBACK process using the task Provisioning Modify User.

Common TLS errors seen with the Provisioning Server ETACALLBACK

Ensure that the configuration is enabled for debug log level, so we may view these errors and correct them. [rc=77] will occur if the PEM file does not exist or is not in the correct path. [rc=51] will occur if the URL defined does not match the exact server-side certificate (this is a good reason to use a wildcard certificate, or to adjust your URL FQDN to match the cert subject CN=XXXX value). [rc=60] will occur if the remote web service is using a self-signed certificate, or if any certificate in the chain, including the public root CA cert, has expired.
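These same libcurl return codes can be reproduced from the command line to aid diagnosis. A hedged sketch, reusing the vApp host from the validation examples above (with older libcurl versions, exit code 77 indicates a bad/missing CA file, 51 a hostname/certificate mismatch, and 60 an untrusted or expired certificate chain):

curl -v --cacert /opt/CA/IdentityManager/ProvisioningServer/bin/combine-chain-letsencrypt.pem \
  "https://vapp143.aks.iam.anapartner.dev:8443/"
echo "curl exit code: $?"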

Other Error messages (curl)

If you see an error message with Apache HTTPD (TCP 443) with curl about “curl: (60) Peer certificate cannot be authenticated with known CA certificates”, please ignore this, as the vApp does not have the “ca-bundle.crt” configuration enabled. See RedHat note: https://access.redhat.com/solutions/523823

References

https://knowledge.broadcom.com/external/article?articleId=54198
https://community.broadcom.com/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=849ea21f-cc5a-4eac-9988-465a75165cf1
https://curl.se/libcurl/c/libcurl-env.html
https://knowledge.broadcom.com/external/article/204213/how-to-setup-inbound-notifications-to-us.html
https://knowledge.broadcom.com/external/article/213480/how-to-replace-the-vapp-wildfly-ssl-cert.html
https://www.stephenwagner.com/2021/09/30/sophos-dst-root-ca-x3-expiration-problems-fix/

OpenShift on VMware Workstation

RedHat OpenShift is one of the container orchestration platforms that provides an enterprise-grade solution for deploying, running, and managing applications on public, on-premise, or hybrid cloud environments.

This blog entry outlines the high-level architecture of a LAB OpenShift on-prem cloud environment built on VMware Workstation infrastructure.

Red Hat OpenShift and the customized ISO image with Red Hat Core OS provide a straightforward process to build your lab and can help lower the training cost. You may watch the end-to-end process in the video below or follow this blog entry to understand the overall process.

Requirements:

  • Red Hat Developer Account w/ Red Hat Developer Subscription for Individuals
  • Local DNS to resolve a minimum of three (3) addresses for OpenShift. (api.[domain], api-int.[domain], *.apps.[domain])
  • DHCP Server (may use VMware Workstation NAT’s DHCP)
  • Storage (recommend using NFS for on-prem deployment/lab) for OpenShift logging/monitoring & any db/dir data to be retained.
  • SSH Terminal Program w/ SSH Key.
  • Browser(s)
  • Front Loader/Load Balancer (HAProxy)
  • VMware Workstation Pro 16.x
  • Specs: (We used more than the minimum recommended by OpenShift to prepare for other applications)
    • Three (3) Control Plane Nodes @ 8 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type
    • Four (4) Worker Nodes @ 4 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type

Post-deployment efforts: apply these to provide additional value. [Included as examples]

  • Add entropy service (haveged) to all nodes/pods to increase security & performance.
  • Let’s Encrypt wildcard certs for *.[DOMAIN] and *.apps.[DOMAIN] to avoid self-signed certs for external UIs. Avoid using “thisisunsafe” within the Chrome browser to access the local OpenShift console.
  • Update OpenShift Ingress to be aware of more than two (2) worker nodes.
  • Update OpenShift to use NFS as default storage.

Below is a view of our footprint to deploy the OpenShift 4.x environment on a local data center hosted by VMware Workstation.

Red Hat OpenShift provides three (3) options to deploy: Cloud, Datacenter, and Local. Local is similar to minikube for your laptop/workstation with a few pods. The Red Hat OpenShift license for Cloud requires deployment of the nodes (cpu/ram/disk) and load balancers on other vendors’ sites. If you deploy OpenShift on AWS or GCP, plan a budget of $500/mo per resource for the assets.

After reviewing the open-source OKD solution and the various OpenShift deployment methods, we selected the “DataCenter” option within OpenShift. Two (2) points made this decision easy.

  • Red Hat OpenShift offers a sixty (60) day eval license.
    • This license can be restarted for another sixty (60) days if you delete/archive the last cluster.
  • Red Hat OpenShift provides a customized ISO image with Red Hat Core OS, ignition yaml files, and an embedded SSH Public Key, that does a lot of the heavy lifting for setting up the cluster.

The below screen showcases the process that Red Hat uses to build a bootstrap ISO image using Red Hat Core OS, Ignition yaml files (to determine node type of control plane/worker node), and the embedded SSH Key. This process provides a lot of value to building a cluster and streamlines the effort.

DNS Requirement

The minimum DNS entries required for OpenShift are three (3) addresses:

api.[domain]

api-int.[domain]

*.apps.[domain]

https://docs.openshift.com/container-platform/4.11/installing/installing_platform_agnostic/installing-platform-agnostic.html
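A minimal BIND-style sketch of these records, assuming the cluster domain okd.anapartner.dev used elsewhere in this entry and the HAProxy front load balancer at 192.168.2.101 (both values are environment-specific placeholders):

api.okd.anapartner.dev.      IN A 192.168.2.101
api-int.okd.anapartner.dev.  IN A 192.168.2.101
*.apps.okd.anapartner.dev.   IN A 192.168.2.101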

Front Load Balancer (HAProxy)

Update haproxy.cfg as needed for IP addresses / ports. To avoid deploying HAProxy twice, we join two (2) HAProxy configurations into a single file and use the “bind” directive with separate IP addresses, which prevents conflicts on the port 80/443 redirects for both OpenShift and another application deployed on OpenShift.

# Global settings
# Set $IP_RANGE as an OS ENV or Global variable before running HAPROXY
#   Important: If using VMworkstation NAT ensure this range is correctly defined to
#   avoid error message with x509 error on port 22623 upon startup on control planes
#
#   Ensure the 3XXXX PORT is defined correctly from the ingress
#    - We have predefined these ports to 32080 and 32443 for helm deployment of ingress
#    oc -n ingress get svc
#
#---------------------------------------------------------------------
global
    setenv IP_RANGE 192.168.243
    setenv HA_BIND_IP1 192.168.2.101
    setenv HA_BIND_IP2 192.168.2.111
    maxconn     20000
    log         /dev/log local0 info
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    log                     global
    mode                    http
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    option forwardfor       except 127.0.0.0/8
    retries                 3
    maxconn                 20000
    timeout http-request    10000ms
    timeout http-keep-alive 10000ms
    timeout check           10000ms
    timeout connect         40000ms
    timeout client          300000ms
    timeout server          300000ms
    timeout queue           50000ms

# Enable HAProxy stats
# Important Note:  Patch OpenShift Ingress to allow internal RHEL CoreOS haproxy to run on additional worker nodes
#  oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 7}}' --type=merge
#
listen stats
    bind :9000
    stats uri /
    stats refresh 10000ms

# Kube API Server
frontend k8s_api_frontend
    bind :6443
    default_backend k8s_api_backend
    mode tcp
    option tcplog

backend k8s_api_backend
    mode tcp
    balance source
    server      ocp-cp-1_6443        "$IP_RANGE".128:6443 check
    server      ocp-cp-2_6443        "$IP_RANGE".129:6443 check
    server      ocp-cp-3_6443        "$IP_RANGE".130:6443 check

# OCP Machine Config Server
frontend ocp_machine_config_server_frontend
    mode tcp
    bind :22623
    default_backend ocp_machine_config_server_backend
    option tcplog

backend ocp_machine_config_server_backend
    mode tcp
    balance source
    server      ocp-cp-1_22623        "$IP_RANGE".128:22623 check
    server      ocp-cp-2_22623        "$IP_RANGE".129:22623 check
    server      ocp-cp-3_22623        "$IP_RANGE".130:22623 check

# OCP Machine Config Server #2
frontend ocp_machine_config_server_frontend2
    mode tcp
    bind :22624
    default_backend ocp_machine_config_server_backend2
    option tcplog

backend ocp_machine_config_server_backend2
    mode tcp
    balance source
    server      ocp-cp-1_22624        "$IP_RANGE".128:22624 check
    server      ocp-cp-2_22624        "$IP_RANGE".129:22624 check
    server      ocp-cp-3_22624        "$IP_RANGE".130:22624 check


# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend ocp_http_ingress_frontend
    bind "$HA_BIND_IP1":80
    default_backend ocp_http_ingress_backend
    mode tcp
    option tcplog

backend ocp_http_ingress_backend
    balance source
    mode tcp
    server      ocp-w-1_80     "$IP_RANGE".131:80 check
    server      ocp-w-2_80     "$IP_RANGE".132:80 check
    server      ocp-w-3_80     "$IP_RANGE".133:80 check
    server      ocp-w-4_80     "$IP_RANGE".134:80 check
    server      ocp-w-5_80     "$IP_RANGE".135:80 check
    server      ocp-w-6_80     "$IP_RANGE".136:80 check
    server      ocp-w-7_80     "$IP_RANGE".137:80 check

frontend ocp_https_ingress_frontend
    bind "$HA_BIND_IP1":443
    default_backend ocp_https_ingress_backend
    mode tcp
    option tcplog

backend ocp_https_ingress_backend
    mode tcp
    balance source
    server      ocp-w-1_443     "$IP_RANGE".131:443 check
    server      ocp-w-2_443     "$IP_RANGE".132:443 check
    server      ocp-w-3_443     "$IP_RANGE".133:443 check
    server      ocp-w-4_443     "$IP_RANGE".134:443 check
    server      ocp-w-5_443     "$IP_RANGE".135:443 check
    server      ocp-w-6_443     "$IP_RANGE".136:443 check
    server      ocp-w-7_443     "$IP_RANGE".137:443 check

######################################################################################

# VIPAUTHHUB Ingress
frontend vip_http_ingress_frontend
    bind "$HA_BIND_IP2":80
    mode tcp
    option forwardfor
    option http-server-close
    default_backend vip_http_ingress_backend

backend vip_http_ingress_backend
    mode tcp
    balance roundrobin
    server      vip-w-1_32080     "$IP_RANGE".131:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-2_32080     "$IP_RANGE".132:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-3_32080     "$IP_RANGE".133:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-4_32080     "$IP_RANGE".134:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-5_32080     "$IP_RANGE".135:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-6_32080     "$IP_RANGE".136:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-7_32080     "$IP_RANGE".137:32080 check fall 3 rise 2 send-proxy-v2

frontend vip_https_ingress_frontend
    bind "$HA_BIND_IP2":443
    # mgmt-sspfqdn
    acl is_mgmt_ssp hdr_end(host) -i mgmt-ssp.okd.anapartner.dev
    use_backend vip_ingress-nodes_mgmt-nodeport if is_mgmt_ssp
    mode tcp
    #option forwardfor
    option http-server-close
    default_backend vip_https_ingress_backend

backend vip_https_ingress_backend
    mode tcp
    balance roundrobin
    server      vip-w-1_32443     "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-2_32443     "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-3_32443     "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-4_32443     "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-5_32443     "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-6_32443     "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-7_32443     "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2

backend vip_ingress-nodes_mgmt-nodeport
    mode tcp
    balance roundrobin
    server      vip-w-1_32443     "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-2_32443     "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-3_32443     "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-4_32443     "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-5_32443     "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-6_32443     "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-7_32443     "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2

######################################################################################

Use the following commands to add a 2nd IP address to one NIC on the main VMware Workstation host, where NIC = eno1 and the 2nd IP address = 192.168.2.111:

nmcli dev show eno1
sudo nmcli dev mod eno1 +ipv4.address 192.168.2.111/24
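To apply the change and confirm the second address is active (note that a reapply may briefly interrupt connectivity on the NIC):

sudo nmcli dev reapply eno1
ip -4 addr show eno1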

VMware Workstation Hosts / Nodes

When building the VMware hosts, ensure that you use Guest Type “Red Hat Enterprise Linux 8 x64” to match the embedded Red Hat Core OS provided in the ISO image. Otherwise, DHCP services may not work correctly, and when the VMware host boots, it may not receive an IP address.

The VMware hosts for Control Plane Nodes are recommended to be 8 vCPU, 16 GB RAM, and 100 GB HDD. The VMware hosts for Worker Nodes are recommended to be 4 vCPU, 16 GB RAM, and 100 GB HDD.
OpenShift requires a minimum of three (3) Control Plane Nodes and two (2) Worker Nodes. Please check with any solution you may deploy and adjust the parameters as needed. We will deploy four (4) Worker Nodes for the Symantec VIP Auth Hub solution and horizontally scale the solution with more worker nodes for Symantec API Manager and SiteMinder.

Before starting any of these images, create a local snapshot as a “before” state. This will allow you to redeploy with minimal impact if there is any issue.

Before starting the deployment, you may wish to create a new NAT VMware Network, to avoid impacting any existing VMware images on the same address range. We will be adjusting the dhcpd.conf and dhcpd.leases files for this network.

To avoid an issue with reverse DNS lookup within pods and containers, remove a default value from dhcpd.conf: stop the VMware network, remove or comment out the line “option domain-name localdomain;”, remove any dhcpd.leases information, then restart the VMware network.
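A hedged sketch of the edit, assuming dhcpd.conf sits alongside the leases file used below; run it between the stop and start steps:

sudo sed -i '/option domain-name .*localdomain/ s/^/#/' /etc/vmware/vmnet8/dhcpd/dhcpd.conf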

ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
sudo /usr/bin/vmware-networks --stop ; echo ""
sudo cp /dev/null /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
cat      /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
sudo /usr/bin/vmware-networks --start ; echo ""
ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
cat      /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""

OpenShift / Kubernetes / Helm Command Line Binaries

Download these two (2) client packages to have three (3) binaries for interfacing with OpenShift/Kubernetes API Server.

Download Openshift Binaries for remote management (on main host)
#########################
sudo su -
mkdir -p /tmp/openshift; cd /tmp/openshift
curl -skOL https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64.tar.gz ; tar -zxvf helm-linux-amd64.tar.gz
curl -skOL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz ; tar -zxvf openshift-client-linux.tar.gz
mv -f oc /usr/bin/oc
mv -f kubectl /usr/bin/kubectl
mv -f helm-linux-amd64 /usr/local/bin/helm
oc version
helm version
kubectl version

Start an OpenShift Cluster Deployment

OpenID Configuration with OpenShift

Post-deployment step: After you have deployed the OpenShift cluster, you will be asked to create an IDP to authenticate other accounts. Below is an example with OpenShift and MS Azure; the image showcases the parameters and values to be shared between the two solutions.

Entropy DaemonSet for OpenShift Nodes/Pods

We can validate the entropy on OpenShift nodes or pods via /dev/random. We prefer to emulate 1000 password changes to showcase how rapidly the 4K entropy pool is depleted when a security process accesses it. Example of the single-line bash code:

Validate Entropy in Openshift Nodes [Before/After use of Haveged Deployment]
#########################
(counter=1;MAX=1001;time while [ $counter -le $MAX ]; do echo "";echo "##########  $counter ##########" ; echo "Entropy = `cat /proc/sys/kernel/random/entropy_avail`  out of 4096"; echo "" ; time dd if=/dev/random bs=8 count=1 2>/dev/null | base64; counter=$(( $counter + 1 )); done;)

To deploy an entropy daemonset, we can leverage what is documented by Broadcom/Symantec in their VIP Auth Hub documentation. https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/vip-authentication-hub/2022-Oct/operating/troubleshooting/checking-entropy-level.html#concept.dita_d3303fde-e786-4fd4-b0b6-e3a28fd60a82

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  labels:
    run: haveged
  name: haveged
spec:
  selector:
    matchLabels:
      run: haveged
  template:
    metadata:
      labels:
        run: haveged
    spec:
      containers:
      - image: hortonworks/haveged:1.1.0
        name: haveged
        securityContext:
          privileged: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
EOF

Patch OpenShift Workers

If the number of OpenShift Workers is greater than two (2), then you will need to patch the OpenShift Ingress controller to scale up to the number of worker nodes.

WORKERS=`oc get nodes | grep worker | wc -l`

echo ""
echo "######################################################################"
echo "# of Worker replicas in OpenShift Ingress Prior to update"
echo "oc get -n openshift-ingress-operator ingresscontroller -o yaml | grep -i replicas:"
#echo "######################################################################"
echo ""
oc patch -n openshift-ingress-operator ingresscontroller/default --patch "{\"spec\":{\"replicas\": ${WORKERS}}}" --type=merge

Let’s Encrypt Certs for OpenShift Ingress and API Server

The default certs with OpenShift are self-signed. This is not an issue until you attempt to access the local OpenShift console with a browser and are stopped from accessing the UI by newer security enforcement in the browsers. To avoid this challenge, we recommend switching the certs to Let’s Encrypt. There are many examples of how to rotate the certs; we used the link below. https://docs.openshift.com/container-platform/4.12/security/certificates/replacing-default-ingress-certificate.html
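The commands below assume a handful of shell variables are already set. A minimal sketch with hypothetical values (adjust the domain and cert paths to your environment):

# Hypothetical values consumed by the cert-replacement commands below
DOMAIN=okd.anapartner.dev
LE_API="api.${DOMAIN}"
CHAINFILE=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem
KEYFILE=/etc/letsencrypt/live/${DOMAIN}/privkey.pem
DATE=$(date +%Y%m%d-%H%M)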

echo "Installing ConfigMap for the Default Ingress Controllers"
oc delete configmap letsencrypt-fullchain-ca -n  openshift-config &>/dev/null
oc create configmap letsencrypt-fullchain-ca \
     --from-file=ca-bundle.crt=${CHAINFILE} \
     -n openshift-config

oc patch proxy/cluster \
     --type=merge \
     --patch='{"spec":{"trustedCA":{"name":"letsencrypt-fullchain-ca"}}}'

echo "Installing Certificates for the Default Ingress Controllers"
oc delete secret letsencrypt-certs -n openshift-ingress &>/dev/null
oc create secret tls letsencrypt-certs \
  --cert=${CHAINFILE} \
  --key=${KEYFILE} \
  -n openshift-ingress


echo "Backup prior version of ingresscontroller"
oc get ingresscontroller default -n openshift-ingress-operator -o yaml > /tmp/ingresscontroller.$DATE.yaml
oc patch ingresscontroller.operator default -n openshift-ingress-operator --type=merge --patch='{"spec": { "defaultCertificate": { "name": "letsencrypt-certs" }}}'


echo "Installing Certificates for the API Endpoint"
oc delete secret letsencrypt-certs  -n openshift-config  &>/dev/null
oc create secret tls letsencrypt-certs \
  --cert=${CHAINFILE} \
  --key=${KEYFILE} \
  -n openshift-config

echo "Backup prior version of apiserver"
oc get apiserver cluster -o yaml > /tmp/apiserver_cluster.$DATE.yaml
oc patch apiserver cluster --type merge --patch="{\"spec\": {\"servingCerts\": {\"namedCertificates\": [ { \"names\": [  \"$LE_API\"  ], \"servingCertificate\": {\"name\": \"letsencrypt-certs\" }}]}}}"

echo "#####################################################################################"
echo "true | openssl s_client -connect api.${DOMAIN}:443 --showcerts --servername api.${DOMAIN}"
echo ""


echo "It may take 5-10 minutes for the OpenShift Ingress/API Pods to cycle with the new certs"
echo "You may monitor with:  watch -n 2 'oc get pod -A | grep -i -v -e running -e complete'  "
echo ""
echo "Per Openshift documentation use the below command to monitor the state of the API server"
echo "ensure PROGRESSING column states False as the status before continuing with deployment"
echo ""
echo "oc get clusteroperators kube-apiserver "

Please reach out if you wish to learn more or have ANA assist with Kubernetes / OpenShift opportunities.

View the JMS HornetQ Queue

Typically, we may use various tools to view JMS queue metrics for trends and stale/stuck activity. During issues with the J2EE JMS queue, though, it is helpful to be able to view and trace transactions to assist with a resolution. With proper logging levels enabled, Wildfly/JBoss logs show detailed information containing the JMS IDs associated with each transaction. The JMS transactions we see in the logs are already ‘in-flight’ and are being processed by a message handler.
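As a starting point for tracing, the JMS IDs can be pulled from the Wildfly server log. A hedged sketch, assuming the default vApp log location and that the message IDs appear in the DEBUG output:

grep -i "jmsmessageid" /opt/CA/wildfly-idm/standalone/log/server.log | tail -20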

On the Symantec Identity Suite Virtual Appliance, the Wildfly & HornetQ processes run under the ‘wildfly’ service ID. The Wildfly journals are located in the Wildfly data folder and stored in a format that is efficient for processing. To perform analysis on the data within these journals, though, we noticed a challenge with read permissions on the HornetQ files, even when the Wildfly/Java process is not actively running.

To avoid this issue on the Virtual Appliance, copy the HornetQ files to a temporary folder. Remember to copy the entire folder, including sub-folders.

mkdir -p /tmp/hornetq; cd /tmp/hornetq

cp -r -p /opt/CA/wildfly-idm/standalone/data/live-hornetq ./

java -cp "/opt/CA/wildfly-idm/modules/system/layers/base/io/netty/main/*:/opt/CA/wildfly-idm/modules/system/layers/base/org/hornetq/main/*:/opt/CA/wildfly-idm/modules/system/layers/base/org/jboss/logging/main/*" org.hornetq.tools.Main print-data /tmp/hornetq/live-hornetq/bindings  /tmp/hornetq/live-hornetq/journal

Once the live-hornetq folder is available in a temporary location, execute the print-data process shown above to print the journal content.

Print HornetQ Journal and Bindings

To export the HornetQ journal files to XML, the Java module “org.hornetq.core.journal.impl.ExportJournal” requires the journal sub-folder, the file prefix (“hornetq-data”), the file extension (hq), the file size, and where to export the XML file (export.dat). The prefix and file extension (hq) are unique to the Identity Suite vApp.

mkdir -p /tmp/hornetq; cd /tmp/hornetq

cp -r -p /opt/CA/wildfly-idm/standalone/data/live-hornetq ./

java -cp "/opt/CA/wildfly-idm/modules/system/layers/base/io/netty/main/*:/opt/CA/wildfly-idm/modules/system/layers/base/org/hornetq/main/*:/opt/CA/wildfly-idm/modules/system/layers/base/org/jboss/logging/main/*" org.hornetq.core.journal.impl.ExportJournal  /tmp/hornetq/live-hornetq/journal hornetq-data hq  25485760  /tmp/hornetq/export.dat
Export HornetQ Journal

The body/rows of the JMS export are partially base64-encoded. You may parse through this information as you wish.

Use this information to trace through transactions in the JMS queue.
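Since the export body is XML with base64 payloads, one way to peek inside a payload is to copy a base64 chunk from export.dat into a file and decode it (the file name is hypothetical):

base64 -d chunk.b64 | strings | head -20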

For cleanup within the Symantec Identity Suite vApp, there are a few options. The first is deleting the JMS queue journals before starting the Wildfly service. This can be accomplished using the built-in alias ‘deleteIDMJMSqueue’.

alias deleteIDMJMSqueue='sudo /opt/CA/VirtualAppliance/scripts/.firstrun/deleteIDMJMSqueue.sh'

Another option is to remove a selected JMS entry from the queue using the /opt/CA/wildfly-idm/bin/jboss-cli.sh process. If the delete is scripted with an input file, escape the colons in the GUID (see the sketch after the commands below).

/subsystem=transactions/log-store=log-store/:probe()

ls /subsystem=transactions/log-store=log-store/transactions

/subsystem=transactions/log-store=log-store/transactions=0:ffffa409cc8a:1c01b1ff:5c7e95ac:eb:delete() 
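For example, a minimal sketch of the input-script approach, with the colons in the GUID escaped (the script file name is hypothetical):

cat << 'EOF' > /tmp/remove-jms-entry.cli
/subsystem=transactions/log-store=log-store/transactions=0\:ffffa409cc8a\:1c01b1ff\:5c7e95ac\:eb:delete()
EOF
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect --file=/tmp/remove-jms-entry.cli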

View a description of the JMS processing from the Broadcom Engineering/Support teams (see the video below).

This write-up provides the tools required for a deeper analysis. Debugging issues with JMS may test one’s patience; stay the course, stay persistent, and have fun!

References: (Delete JMS queue and remove a single entry)

https://knowledge.broadcom.com/external/article/233003/inprogress-task-issues-a-clients-guide.html

https://knowledge.broadcom.com/external/article/129101/arjuna016037-could-not-find-new-xaresour.html