SiteMinder / CA SSO Metrics

These are exciting times, marked by a transformative change in the way modern applications are rolled out. The transition to Cloud and related technologies is adding considerable value to the process. If you are utilizing solutions like SiteMinder SSO or CA Access Gateway, having access to real-time metrics is invaluable. In this article, we’ll explore the built-in features of the CA SSO container form factor that enable out-of-the-box metrics generation for consumption by platforms like Grafana.

Our lab cluster is an on-premises Red Hat OpenShift Kubernetes cluster running the CA SSO container solution, available as part of the Broadcom Validate Beta Program. The deployment of the different SSO elements, such as policy servers and Access Gateway, is handled through a Helm package provided by Broadcom. Within our existing OpenShift environment, a Prometheus metrics server is configured to gather time-series data. By default, monitoring of user workload metrics isn’t activated in OpenShift and must be manually enabled: make sure the ‘enableUserWorkload’ setting is set to ‘true’. You can either create or modify the existing ConfigMap to ensure this setting is activated.
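
As a minimal sketch of that step (assuming the standard cluster-monitoring-config ConfigMap in the openshift-monitoring namespace, per the OpenShift monitoring documentation), the setting can be applied as follows:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF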

Grafana is also deployed for visualization and connected to Prometheus as a data source. The Grafana data source can be created using the YAML provided below. Note that creating the Grafana data source requires the Prometheus URL as well as an authorization token to access the stored metrics; this token can be extracted from the cluster using the commands below.
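
The exact objects depend on how Grafana was installed; the sketch below assumes the Grafana Operator (v4) GrafanaDataSource CRD and uses placeholders for the Thanos Querier route and the token. The Prometheus URL and token can be pulled with:

oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}'
oc create token prometheus-k8s -n openshift-monitoring      [on older clusters: oc sa get-token prometheus-k8s -n openshift-monitoring]

cat <<EOF | oc apply -f -
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus-openshift
  namespace: smdev
spec:
  name: prometheus-openshift.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: https://<thanos-querier-route>
      isDefault: true
      jsonData:
        httpHeaderName1: Authorization
        tlsSkipVerify: true
      secureJsonData:
        httpHeaderValue1: "Bearer <token>"
EOF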

Also, ensure that a role binding exists that grants the prometheus-k8s service account in the openshift-monitoring namespace a role allowing it to monitor resources in the target (smdev) namespace.
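
A sketch of such a Role and RoleBinding (resource names are illustrative):

cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: smdev-monitoring
  namespace: smdev
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: smdev-monitoring
  namespace: smdev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: smdev-monitoring
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: openshift-monitoring
EOF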

Once the CA SSO Helm chart is installed with metrics enabled, we must also ensure that the namespace in which CA SSO is deployed has the openshift.io/cluster-monitoring label set to true.
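
Assuming CA SSO was deployed into the smdev namespace used in this lab, the label can be applied with:

oc label namespace smdev openshift.io/cluster-monitoring=true --overwrite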

We are all set now and should see the metrics populated in the OpenShift console (Observe -> Metrics menu item) as well as being available for Grafana’s consumption.
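
As a quick sanity check from either console, a generic query such as the one below should return the scrape targets in the CA SSO namespace (the CA SSO-specific metric names will depend on the ServiceMonitors shipped with the Helm chart):

up{namespace="smdev"}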

In the era of next-generation application delivery, integrated monitoring and observability features now come standard, offering considerable advantages, particularly for operations and management teams seeking clear insights into usage and solution value. This heightened value is especially notable in deployments via container platforms. If you’re on the path to modernization and are looking to speed up your initiatives, feel free to reach out. We’re committed to your success and are keen to partner with you.

VMware Workstation and Vyos Software Router: Expedite on-prem Kubernetes and OpenShift Labs

With the rapid evolution of technology and increasing complexity of software solutions, using tools like VMware Workstation for learning and testing has become necessary. Deploying intricate systems like Kubernetes and OpenShift on VMware Workstation provides an opportunity for in-depth understanding and experience before implementing these solutions on a larger, organization-wide scale.

VMware Workstation, coupled with the powerful container orchestration capabilities of Kubernetes and OpenShift, offers an unparalleled platform for crafting next-generation applications and solutions and lowering costs. It’s a potent combination that can significantly boost your organization’s operational efficiency, application delivery speed, and overall software development lifecycle.

In the realm of advanced solution deployments, the right tools can make all the difference. With VMware Workstation, you’re not just getting a virtualization tool; you’re acquiring a platform that helps you delve deeper into modern software architectures and innovations. Harness its potential and equip yourself with the knowledge and experience needed to stay ahead of the curve.

Networking is one of the critical aspects of VMware Workstation that make it such a versatile tool. VMware Workstation offers three types of networking options to suit different needs and scenarios. Let’s explore each of these in detail.

1. Bridged Networking

Bridged Networking is the simplest and most straightforward networking mode. When you configure a VM to use bridged networking, the VM is connected directly to the existing network that your host computer is connected to. In essence, it will be as though the VM is another physical device on your network.

With bridged networking, your VM can have its unique identity on the network, such as its IP address, making it an entirely independent entity from the host. This is particularly useful when you need the VM to interact directly with other devices on the network, or when it needs to be accessible from other computers.

2. Network Address Translation (NAT)

The NAT mode allows your VMs to share the IP address of the host machine. Essentially, all the network traffic from the VMs is routed through the host machine. This implies that the VMs can access the external network and the internet, but they cannot be directly reached from the external network since they are ‘hidden’ behind the host.

NAT is highly beneficial when you want to isolate your VMs from your network while still providing them with network access. For instance, this can be handy when testing untrusted applications or experimenting with potentially unstable software that could disrupt your network.

3. Host-Only Networking

The Host-Only networking mode creates a private network shared only between the VMs and the host machine. This means that your VMs can communicate with each other and the host machine but cannot access the external network or the internet.

Host-Only networking is particularly useful when you want to create a secure, isolated environment for your VMs, away from the vulnerabilities of the external network. This is ideal when working with sensitive data or creating a controlled environment for testing network applications.

Each of these three VMware Workstation networking modes has advantages and suitable use cases. The choice between them depends on your specific needs, whether that is creating an isolated testing environment or mimicking a complex, interconnected network for a comprehensive deployment simulation.
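
For reference, the mode a VM uses is ultimately just a pair of settings in its .vmx file; a sketch of the relevant keys is shown below (values are illustrative, and the vnet path shown is the Linux form; Windows uses names such as "VMnet2"):

ethernet0.connectionType = "bridged"      [or "nat", "hostonly", "custom"]
ethernet0.vnet = "/dev/vmnet2"            [used with "custom" to select a specific segment]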

Expanding Host-Only for use with OpenShift/Kubernetes Labs

As discussed earlier, VMware Workstation offers three (3) network modes: Bridged, NAT, and Host-Only. The challenge with bridged mode is that it shares your office or home network and requests an IP address to be assigned there. This may not be acceptable in your office, or you may wish to keep your main home network free of VMware hosts. NAT is typically the most commonly selected network mode for a VMware guest OS, as it does not impact the office/home network. The limitation of NAT is that it only allows outbound traffic from the guest OS via the VMware host; there are no routing rules to allow traffic from outside to reach the guest OS images. The last network mode is Host-Only. Host-Only is designed to be an isolated network segment between the VMware guest OS and the VMware host OS, with no outbound or inbound traffic to external networks. This network mode is typically not used when access to the internet is required.

Introduce: Vyos Software Router for VMware (OVA)

We wanted a more flexible solution than these three (3) modes. We wanted to standardize a network segment for our OpenShift/Kubernetes training/development that did not require a change between locations (like bridged) or force our internal resources to reset their bridged network to match.

After a review, we selected VMware Host-Only, which has the basics of what we needed; we were only missing routing rules for inbound and outbound traffic. We looked around and found a ready-made software solution that we could immediately leverage with minimal configuration changes to the VMware client OS/images. The Vyos software router is already provided in OVA format for immediate use.

We downloaded and imported the OVA into VMware workstation.

Since we planned to have multiple host network segments to manage large data sets for OpenShift/Kubernetes, we bumped up the VMware guest OS specs from 1 vCPU / 4 GB RAM to 2 vCPU / 8 GB RAM and adjusted the extra network adapters to be Host-Only or Custom (Host-Only) networks.

After we adjusted the Guest OS specs, we snapshotted this VMware Guest OS image to allow rollback if we wanted to change a feature later. We started up the image and logged in with the default credentials: vyos/vyos.

After logging in via the VMware Guest OS console, we immediately updated the Vyos configuration to allow SSH into the Guest OS so we could perform our work in a better UI.

Below is an example of the bootstrap configuration to enable remote access via SSH and to update the eth0 NIC to a bridged IP address that we can access. We standardized on a rule that all network routing would use IP xxx.yyy.zzz.254.

conf
set service ssh port '22'
set interfaces ethernet eth0 address '192.168.2.254/24'
commit
save

We then switched to our favorite SSH terminal tool, MobaXterm (or PuTTY), to validate that we could access the Vyos software router remotely.

We are now ready to add a configuration that allows a default route, inbound routes, and outbound routes for our four (4) network NICs.

The lines below may be pasted into the SSH session. ‘conf’ (config) will open the Vyos configuration shell so that all of the lines can be pasted in. We will define static IP addresses for all four (4) NICs, a static route to our external network router, outbound rules, and inbound rules. Please ensure that the IP addresses for the four (4) NICs match what you have defined.

conf
set service ssh port '22'

set interfaces ethernet eth0 address '192.168.2.254/24'
set interfaces ethernet eth0 description 'BRIDGED NETWORK'

set interfaces ethernet eth1 address '10.10.10.254/24'
set interfaces ethernet eth1 description 'VMWARE HOST NETWORK vmnet1'

set interfaces ethernet eth2 address '10.0.0.254/24'
set interfaces ethernet eth2 description 'VMWARE HOST NETWORK vmnet2 - BAREMETAL OPENSHIFT'

set interfaces ethernet eth3 address '192.168.242.254/24'
set interfaces ethernet eth3 description 'VMWARE HOST NETWORK vmnet3'

delete protocols static route 0.0.0.0/0
set protocols static route 0.0.0.0/0 next-hop 192.168.2.1

delete nat

set nat source rule 20 description "Allow Outbound Traffic from VMware Host network from eth1"
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.10.10.0/24'
set nat source rule 20 translation address masquerade

set nat source rule 30 description "Allow Outbound Traffic from VMware Host network from eth2"
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '10.0.0.0/24'
set nat source rule 30 translation address masquerade

set nat source rule 40 description "Allow Outbound Traffic from VMware Host network from eth3"
set nat source rule 40 outbound-interface 'eth0'
set nat source rule 40 source address '192.168.242.0/24'
set nat source rule 40 translation address masquerade

set nat source rule 60 description "Allow Inbound Traffic from Bridged to VMware host network eth1"
set nat source rule 60 outbound-interface 'eth1'
set nat source rule 60 source address '192.168.2.0/24'
set nat source rule 60 translation address masquerade

set nat source rule 61 description "Allow Inbound Traffic from Bridged to VMware Host network eth2"
set nat source rule 61 outbound-interface 'eth2'
set nat source rule 61 source address '192.168.2.0/24'
set nat source rule 61 translation address masquerade

set nat source rule 62 description "Allow Inbound Traffic from Bridged to Vmware Host network eth3"
set nat source rule 62 outbound-interface 'eth3'
set nat source rule 62 source address '192.168.2.0/24'
set nat source rule 62 translation address masquerade

commit
save
exit
show interface
show ip route 0.0.0.0

Please double-check that the IP addresses match your VMware Host-Only networks.

Validation

We will validate inbound and outbound traffic using ping on the Vyos software router. When this passes, we will move on to routing configuration for external devices.
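
From the Vyos console or SSH session, a quick set of pings exercises each leg (addresses assume the sample configuration above):

ping 192.168.2.1          [outbound: upstream router on the bridged network]
ping 8.8.8.8              [outbound: internet via the default route]
ping 10.10.10.1           [eth1: VMware host on vmnet1]
ping 10.0.0.1             [eth2: VMware host on vmnet2]
ping 192.168.242.1        [eth3: VMware host on vmnet3]

For the inbound direction, ping the router's .254 addresses (for example 10.10.10.254) from the VMware host or a bridged device once the routes from Step 2 below are in place.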

After basic validation, please snapshot your Vyos Guest OS.

In the final step, we will add routing configuration on MS Windows OS and Linux OS to reach all four (4) networks from any external device and any VMware image on one of the four (4) networks.

# Ref: https://docs.vyos.io/en/equuleus/configuration/system/default-route.html
#      https://docs.vyos.io/en/equuleus/quick-start.html
#      https://bertvv.github.io/cheat-sheets/VyOS.html

#Step 000:  Increase Vyos Router specs from 1 vCPU 4 GB RAM to 2 vCPU 8 GB RAM when adding more than two interfaces in VMware Workstation


#Step 00: Review VMware Host vmnet addresses, use to build your rules.

ip a | grep vmnet

16: vmnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.10.10.1/24 brd 10.10.10.255 scope global vmnet1
17: vmnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.0.0.1/24 brd 10.0.0.255 scope global vmnet2
18: vmnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 192.168.242.1/24 brd 192.168.242.255 scope global vmnet3
19: vmnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 192.168.243.1/24 brd 192.168.243.255 scope global vmnet8
20: vmnet255: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    inet 10.255.0.1/24 brd 10.255.0.255 scope global vmnet255



# Step 0:  Boot strap first interface (via vmware console of vyos running image -  after login with vyos / vyos)

conf
set service ssh port '22'
set interfaces ethernet eth0 address '192.168.2.254/24'
commit
save
exit
show interface


# Step 1: Vyos configuration - after login with vyos / vyos with an SSH putty session tool to allow copy-n-paste of the below rows

conf
set service ssh port '22'

set interfaces ethernet eth0 address '192.168.2.254/24'
set interfaces ethernet eth0 description 'BRIDGED NETWORK'

set interfaces ethernet eth1 address '10.10.10.254/24'
set interfaces ethernet eth1 description 'VMWARE HOST NETWORK vmnet1'

set interfaces ethernet eth2 address '10.0.0.254/24'
set interfaces ethernet eth2 description 'VMWARE HOST NETWORK vmnet2 - BAREMETAL OPENSHIFT'

set interfaces ethernet eth3 address '192.168.242.254/24'
set interfaces ethernet eth3 description 'VMWARE HOST NETWORK vmnet3'

delete protocols static route 0.0.0.0/0
set protocols static route 0.0.0.0/0 next-hop 192.168.2.1

delete nat

set nat source rule 20 description "Allow Outbound Traffic from VMware Host network from eth1"
set nat source rule 20 outbound-interface 'eth0'
set nat source rule 20 source address '10.10.10.0/24'
set nat source rule 20 translation address masquerade

set nat source rule 30 description "Allow Outbound Traffic from VMware Host network from eth2"
set nat source rule 30 outbound-interface 'eth0'
set nat source rule 30 source address '10.0.0.0/24'
set nat source rule 30 translation address masquerade

set nat source rule 40 description "Allow Outbound Traffic from VMware Host network from eth3"
set nat source rule 40 outbound-interface 'eth0'
set nat source rule 40 source address '192.168.242.0/24'
set nat source rule 40 translation address masquerade

set nat source rule 60 description "Allow Inbound Traffic from Bridged to VMware host network eth1"
set nat source rule 60 outbound-interface 'eth1'
set nat source rule 60 source address '192.168.2.0/24'
set nat source rule 60 translation address masquerade

set nat source rule 61 description "Allow Inbound Traffic from Bridged to VMware Host network eth2"
set nat source rule 61 outbound-interface 'eth2'
set nat source rule 61 source address '192.168.2.0/24'
set nat source rule 61 translation address masquerade

set nat source rule 62 description "Allow Inbound Traffic from Bridged to Vmware Host network eth3"
set nat source rule 62 outbound-interface 'eth3'
set nat source rule 62 source address '192.168.2.0/24'
set nat source rule 62 translation address masquerade

commit
save
exit
show interface
show ip route 0.0.0.0 


# Step 2:  Update external lab network devices (laptop on 192.168.2.x) to use the Vyos Router for these new routes

# MS Win OS examples:
route add -p 10.10.10.0 mask 255.255.255.0 192.168.2.254
route add -p 10.0.0.0 mask 255.255.255.0 192.168.2.254
route add -p 192.168.242.0 mask 255.255.255.0 192.168.2.254

ping 10.10.10.254
ping 10.0.0.254
ping 192.168.242.254

# Linux OS examples:
sudo route add -net  10.0.0.0/24 gw 192.168.2.254
sudo route add -net  10.10.10.0/24 gw 192.168.2.254
sudo route add -net  192.168.242.0/24 gw 192.168.2.254
route -n
netstat -rn   (dnf -y install net-tools)

ping 10.10.10.254
ping 10.0.0.254
ping 192.168.242.254

# Step 3:  Optional:  Add static routes on network router if missed on a device, to redirect to the vyos bridged interface.


# Step 4:  Update the VMware DHCP configuration file to use the new Vyos Router for any VMware images with DHCP, then reboot the images.
option routers  10.10.10.254;     [VMware Workstation on Linux OS: /etc/vmware/vmnet1/dhcpd/dhcpd.conf ]
option routers  10.0.0.254;       [VMware Workstation on Linux OS: /etc/vmware/vmnet2/dhcpd/dhcpd.conf ]
option routers  192.168.242.254;  [VMware Workstation on Linux OS: /etc/vmware/vmnet3/dhcpd/dhcpd.conf ]

# Note:  MS Win OS:  The VMware DHCP configurations are combined in one file:  C:\ProgramData\VMware\vmnetdhcp.conf
# 
# Restart images, view routes, then do an outbound submission as a test.


ping 8.8.8.8
ping www.google.com


# Step 5:  For OpenShift, ensure that your install-config.yaml or agent-config.yaml is defined with the correct gateway router for Vyos.



# Step 6:  Exercise your VMware host images and then monitor within Vyos via:
show nat source translations
show nat source statistics
monitor traffic interface any filter 'host 10.0.0.99'      [embedded tcpdump]

Overview of the Vyos Software Router with VMware Workstation and three (3) host-only networks plus a bridged network

We now have a methodology to use more than 250 possible VMware host-only network segments for our networking labs with OpenShift and Kubernetes that require outbound and/or inbound internet access. We can standardize on a unique host-only network segment to share with team members and clients for training/education/development. With the embedded tcpdump feature in the Vyos software router image, we can quickly isolate and address network routing configuration challenges.

Hopefully, this will allow you to continue to expand your knowledge and awareness of new architectures with your dedicated lab environment.

OpenShift on VMware Workstation

Red Hat OpenShift is a container orchestration platform that provides an enterprise-grade solution for deploying, running, and managing applications in public, on-premises, or hybrid cloud environments.

This blog entry outlines the high-level architecture of a LAB OpenShift on-prem cloud environment built on VMware Workstation infrastructure.

Red Hat OpenShift and the customized ISO image with Red Hat Core OS provide a straightforward process to build your lab and can help lower the training cost. You may watch the end-to-end process in the video below or follow this blog entry to understand the overall process.

Requirements:

  • Red Hat Developer Account w/ Red Hat Developer Subscription for Individuals
  • Local DNS to resolve a minimum of three (3) addresses for OpenShift. (api.[domain], api-int.[domain], *.apps.[domain])
  • DHCP Server (may use VMware Workstation NAT’s DHCP)
  • Storage (recommend using NFS for on-prem deployment/lab) for OpenShift logging/monitoring & any db/dir data to be retained.
  • SSH Terminal Program w/ SSH Key.
  • Browser(s)
  • Front Loader/Load Balancer (HAProxy)
  • VMware Workstation Pro 16.x
  • Specs: (We used more than the minimum recommended by OpenShift to prepare for other applications)
    • Three (3) Control Plane Nodes @ 8 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type
    • Four (4) Worker Nodes @ 4 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type

Post-Efforts: Apply these to provide additional value. [Included as examples]

  • Add entropy service (haveged) to all nodes/pods to increase security & performance.
  • Let’s Encrypt wildcard certs for *.[DOMAIN] and *.apps.[DOMAIN] to avoid self-signed certs for external UIs and to avoid having to use “thisisunsafe” within the Chrome browser to access the local OpenShift console.
  • Update OpenShift Ingress to be aware of more than two (2) worker nodes.
  • Update OpenShift to use NFS as default storage.

Below is a view of our footprint to deploy the OpenShift 4.x environment on a local data center hosted by VMware Workstation.

Red Hat OpenShift provides three (3) deployment options: Cloud, Datacenter, and Local. Local is similar to minikube for your laptop/workstation with a few pods. The Red Hat OpenShift Cloud option requires deploying the nodes (CPU/RAM/disk) and load balancers on other vendors’ sites. If you deploy OpenShift on AWS or GCP, plan a budget of roughly $500/mo per resource for the assets.

After reviewing the open-source OKD solution and the various OpenShift deployment methods, we selected the “DataCenter” option within OpenShift. Two (2) points made this decision easy.

  • Red Hat OpenShift offers a sixty (60) day eval license.
    • This license can be restarted for another sixty (60) days if you delete/archive the last cluster.
  • Red Hat OpenShift provides a customized ISO image with Red Hat Core OS, Ignition yaml files, and an embedded SSH public key that does a lot of the heavy lifting for setting up the cluster.

The below screen showcases the process that Red Hat uses to build a bootstrap ISO image using Red Hat Core OS, Ignition yaml files (to determine node type of control plane/worker node), and the embedded SSH Key. This process provides a lot of value to building a cluster and streamlines the effort.

DNS Requirement

The minimum DNS entries required for OpenShift are three (3) addresses:

api.[domain]

api-int.[domain]

*.apps.[domain]

https://docs.openshift.com/container-platform/4.11/installing/installing_platform_agnostic/installing-platform-agnostic.html
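
As one way to satisfy this requirement, the sketch below assumes dnsmasq on a lab DNS host and a hypothetical domain okd.example.lab, with the API names and the wildcard pointing at the HAProxy front end on 192.168.2.101:

cat <<EOF | sudo tee /etc/dnsmasq.d/okd-lab.conf
host-record=api.okd.example.lab,192.168.2.101
host-record=api-int.okd.example.lab,192.168.2.101
address=/apps.okd.example.lab/192.168.2.101
EOF
sudo systemctl restart dnsmasq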

Front Load Balancer (HAProxy)

Update haproxy.cfg as needed for IP addresses/ports. To avoid deploying HAProxy twice, we merged two (2) HAProxy configuration files into one and use the “bind” directive with separate host IP addresses to prevent a conflict on the port 80/443 redirect between OpenShift and another application deployed on OpenShift.

# Global settings
# Set $IP_RANGE as an OS ENV or Global variable before running HAPROXY
#   Important: If using VMworkstation NAT ensure this range is correctly defined to
#   avoid error message with x509 error on port 22623 upon startup on control planes
#
#   Ensure the 3XXXX PORT is defined correctly from the ingress
#    - We have predefined these ports to 32080 and 32443 for helm deployment of ingress
#    oc -n ingress get svc
#
#---------------------------------------------------------------------
global
    setenv IP_RANGE 192.168.243
    setenv HA_BIND_IP1 192.168.2.101
    setenv HA_BIND_IP2 192.168.2.111
    maxconn     20000
    log         /dev/log local0 info
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    log                     global
    mode                    http
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    option forwardfor       except 127.0.0.0/8
    retries                 3
    maxconn                 20000
    timeout http-request    10000ms
    timeout http-keep-alive 10000ms
    timeout check           10000ms
    timeout connect         40000ms
    timeout client          300000ms
    timeout server          300000ms
    timeout queue           50000ms

# Enable HAProxy stats
# Important Note:  Patch OpenShift Ingress to allow internal RHEL CoreOS haproxy to run on additional worker nodes
#  oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 7}}' --type=merge
#
listen stats
    bind :9000
    stats uri /
    stats refresh 10000ms

# Kube API Server
frontend k8s_api_frontend
    bind :6443
    default_backend k8s_api_backend
    mode tcp
    option tcplog

backend k8s_api_backend
    mode tcp
    balance source
    server      ocp-cp-1_6443        "$IP_RANGE".128:6443 check
    server      ocp-cp-2_6443        "$IP_RANGE".129:6443 check
    server      ocp-cp-3_6443        "$IP_RANGE".130:6443 check

# OCP Machine Config Server
frontend ocp_machine_config_server_frontend
    mode tcp
    bind :22623
    default_backend ocp_machine_config_server_backend
    option tcplog

backend ocp_machine_config_server_backend
    mode tcp
    balance source
    server      ocp-cp-1_22623        "$IP_RANGE".128:22623 check
    server      ocp-cp-2_22623        "$IP_RANGE".129:22623 check
    server      ocp-cp-3_22623        "$IP_RANGE".130:22623 check

# OCP Machine Config Server #2
frontend ocp_machine_config_server_frontend2
    mode tcp
    bind :22624
    default_backend ocp_machine_config_server_backend2
    option tcplog

backend ocp_machine_config_server_backend2
    mode tcp
    balance source
    server      ocp-cp-1_22624        "$IP_RANGE".128:22624 check
    server      ocp-cp-2_22624        "$IP_RANGE".129:22624 check
    server      ocp-cp-3_22624        "$IP_RANGE".130:22624 check


# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend ocp_http_ingress_frontend
    bind "$HA_BIND_IP1":80
    default_backend ocp_http_ingress_backend
    mode tcp
    option tcplog

backend ocp_http_ingress_backend
    balance source
    mode tcp
    server      ocp-w-1_80     "$IP_RANGE".131:80 check
    server      ocp-w-2_80     "$IP_RANGE".132:80 check
    server      ocp-w-3_80     "$IP_RANGE".133:80 check
    server      ocp-w-4_80     "$IP_RANGE".134:80 check
    server      ocp-w-5_80     "$IP_RANGE".135:80 check
    server      ocp-w-6_80     "$IP_RANGE".136:80 check
    server      ocp-w-7_80     "$IP_RANGE".137:80 check

frontend ocp_https_ingress_frontend
    bind "$HA_BIND_IP1":443
    default_backend ocp_https_ingress_backend
    mode tcp
    option tcplog

backend ocp_https_ingress_backend
    mode tcp
    balance source
    server      ocp-w-1_443     "$IP_RANGE".131:443 check
    server      ocp-w-2_443     "$IP_RANGE".132:443 check
    server      ocp-w-3_443     "$IP_RANGE".133:443 check
    server      ocp-w-4_443     "$IP_RANGE".134:443 check
    server      ocp-w-5_443     "$IP_RANGE".135:443 check
    server      ocp-w-6_443     "$IP_RANGE".136:443 check
    server      ocp-w-7_443     "$IP_RANGE".137:443 check

######################################################################################

# VIPAUTHHUB Ingress
frontend vip_http_ingress_frontend
    bind "$HA_BIND_IP2":80
    mode tcp
    option forwardfor
    option http-server-close
    default_backend vip_http_ingress_backend

backend vip_http_ingress_backend
    mode tcp
    balance roundrobin
    server      vip-w-1_32080     "$IP_RANGE".131:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-2_32080     "$IP_RANGE".132:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-3_32080     "$IP_RANGE".133:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-4_32080     "$IP_RANGE".134:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-5_32080     "$IP_RANGE".135:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-6_32080     "$IP_RANGE".136:32080 check fall 3 rise 2 send-proxy-v2
    server      vip-w-7_32080     "$IP_RANGE".137:32080 check fall 3 rise 2 send-proxy-v2

frontend vip_https_ingress_frontend
    bind "$HA_BIND_IP2":443
    # mgmt-sspfqdn
    acl is_mgmt_ssp hdr_end(host) -i mgmt-ssp.okd.anapartner.dev
    use_backend vip_ingress-nodes_mgmt-nodeport if is_mgmt_ssp
    mode tcp
    #option forwardfor
    option http-server-close
    default_backend vip_https_ingress_backend

backend vip_https_ingress_backend
    mode tcp
    balance roundrobin
    server      vip-w-1_32443     "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-2_32443     "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-3_32443     "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-4_32443     "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-5_32443     "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-6_32443     "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-7_32443     "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2

backend vip_ingress-nodes_mgmt-nodeport
    mode tcp
    balance roundrobin
    server      vip-w-1_32443     "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-2_32443     "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-3_32443     "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-4_32443     "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-5_32443     "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-6_32443     "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
    server      vip-w-7_32443     "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2

######################################################################################

Use the following commands to add a 2nd IP address to one NIC on the main VMware Workstation host, where NIC = eno1 and the 2nd IP address = 192.168.2.111

nmcli dev show eno1
sudo nmcli dev mod eno1 +ipv4.address 192.168.2.111/24

VMware Workstation Hosts / Nodes

When building the VMware hosts, ensure that you use Guest Type “Red Hat Enterprise Linux 8 x64” to match the embedded Red Hat Core OS provided in an ISO image. Otherwise, DHCP services may not work correctly, and when the VMware host boots, it may not receive an IP address.

The VMware hosts for Control Plane Nodes are recommended to be 8 vCPU, 16 GB RAM, and 100 GB HDD. The VMware hosts for Worker Nodes are recommended to be 4 vCPU, 16 GB RAM, and 100 GB HDD.
OpenShift requires a minimum of three (3) Control Plane Nodes and two (2) Worker Nodes. Please check with any solution you may deploy and adjust the parameters as needed. We will deploy four (4) Worker Nodes for the Symantec VIP Auth Hub solution and horizontally scale the solution with more worker nodes for Symantec API Manager and SiteMinder.

Before starting any of these images, create a local snapshot as a “before” state. This will allow you to redeploy with minimal impact if there is any issue.

Before starting the deployment, you may wish to create a new NAT VMware Network, to avoid impacting any existing VMware images on the same address range. We will be adjusting the dhcpd.conf and dhcpd.leases files for this network.

To avoid an issue with reverse DNS lookups within pods and containers, remove a default value from dhcpd.conf: stop the VMware network, remove or comment out the line “option domain-name localdomain;”, clear any dhcpd.leases information, then restart the VMware network (a sketch of the dhcpd.conf edit itself follows the commands below).

ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
sudo /usr/bin/vmware-networks --stop ; echo ""
sudo cp /dev/null /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
cat      /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
sudo /usr/bin/vmware-networks --start ; echo ""
ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
cat      /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
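
For the dhcpd.conf edit itself, a sketch of commenting out the domain-name option is shown below (the stock VMware file quotes the value, so verify the exact line in your file before running):

sudo sed -i 's/option domain-name "localdomain";/# &/' /etc/vmware/vmnet8/dhcpd/dhcpd.conf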

OpenShift / Kubernetes / Helm Command Line Binaries

Download these two (2) client packages to have three (3) binaries for interfacing with OpenShift/Kubernetes API Server.

Download Openshift Binaries for remote management (on main host)
#########################
sudo su -
mkdir -p /tmp/openshift && cd /tmp/openshift
curl -skOL https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64.tar.gz ; tar -zxvf helm-linux-amd64.tar.gz
curl -skOL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz ; tar -zxvf openshift-client-linux.tar.gz
mv -f oc /usr/bin/oc
mv -f kubectl /usr/bin/kubectl
mv -f helm-linux-amd64 /usr/local/bin/helm
oc version
helm version
kubectl version

Start an OpenShift Cluster Deployment

OpenID Configuration with OpenShift

Post-deployment step: After you have deployed the OpenShift cluster, you will be asked to create an IdP to authenticate other accounts. Below is an example with OpenShift and MS Azure. The image below showcases the parameters and values to be shared between the two solutions.
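
The console wizard ultimately produces an OAuth resource; below is a minimal sketch for an Azure AD OpenID identity provider (the client ID, tenant ID, and secret name are placeholders, and the client secret must first be stored as a secret in the openshift-config namespace):

cat <<EOF | oc apply -f -
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: azuread
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: <azure-application-client-id>
        clientSecret:
          name: openid-client-secret
        issuer: https://login.microsoftonline.com/<tenant-id>/v2.0
        claims:
          preferredUsername:
            - preferred_username
          name:
            - name
          email:
            - email
EOF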

Entropy DaemonSet for OpenShift Nodes/Pods

We can validate the entropy on an OpenShift node or pod via the use of /dev/random. We prefer to emulate 1,000 password changes to showcase how rapidly the 4K entropy pool is depleted when a security process accesses it. Below is an example of the single-line bash code.

Validate Entropy in Openshift Nodes [Before/After use of Haveged Deployment]
#########################
(counter=1;MAX=1001;time while [ $counter -le $MAX ]; do echo "";echo "##########  $counter ##########" ; echo "Entropy = `cat /proc/sys/kernel/random/entropy_avail`  out of 4096"; echo "" ; time dd if=/dev/random bs=8 count=1 2>/dev/null | base64; counter=$(( $counter + 1 )); done;)

To deploy an entropy daemonset, we can leverage what is documented by Broadcom/Symantec in their VIP Auth Hub documentation. https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/vip-authentication-hub/2022-Oct/operating/troubleshooting/checking-entropy-level.html#concept.dita_d3303fde-e786-4fd4-b0b6-e3a28fd60a82

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  labels:
    run: haveged
  name: haveged
spec:
  selector:
    matchLabels:
      run: haveged
  template:
    metadata:
      labels:
        run: haveged
    spec:
      containers:
      - image: hortonworks/haveged:1.1.0
        name: haveged
        securityContext:
          privileged: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
EOF

Patch OpenShift Workers

If the number of OpenShift Workers is greater than two (2), then you will need to patch the OpenShift Ingress controller to scale up to the number of worker nodes.

WORKERS=`oc get nodes | grep worker | wc -l`

echo ""
echo "######################################################################"
echo "# of Worker replicas in OpenShift Ingress Prior to update"
echo "oc get -n openshift-ingress-operator ingresscontroller -o yaml | grep -i replicas:"
#echo "######################################################################"
echo ""
oc patch -n openshift-ingress-operator ingresscontroller/default --patch "{\"spec\":{\"replicas\": ${WORKERS}}}" --type=merge

LetsEncrypt Certs for OpenShift Ingress and API Server

The certs deployed with OpenShift are self-signed. This is not an issue until you attempt to access the local OpenShift console with a browser and are stopped from accessing the UI by newer security enforcement in the browsers. To avoid this challenge, we recommend switching the certs to Let's Encrypt. There are many examples of how to rotate the certs; we used the link below. https://docs.openshift.com/container-platform/4.12/security/certificates/replacing-default-ingress-certificate.html
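
The commands below assume a few shell variables are already set; plausible values are sketched here (the domain and cert paths are placeholders for your environment):

DOMAIN=okd.example.lab                                    # placeholder base domain
LE_API=api.${DOMAIN}
DATE=$(date +%Y%m%d-%H%M%S)
CHAINFILE=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem   # placeholder path to the Let's Encrypt chain
KEYFILE=/etc/letsencrypt/live/${DOMAIN}/privkey.pem       # placeholder path to the Let's Encrypt private key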

echo "Installing ConfigMap for the Default Ingress Controllers"
oc delete configmap letsencrypt-fullchain-ca -n  openshift-config &>/dev/null
oc create configmap letsencrypt-fullchain-ca \
     --from-file=ca-bundle.crt=${CHAINFILE} \
     -n openshift-config

oc patch proxy/cluster \
     --type=merge \
     --patch='{"spec":{"trustedCA":{"name":"letsencrypt-fullchain-ca"}}}'

echo "Installing Certificates for the Default Ingress Controllers"
oc delete secret letsencrypt-certs -n openshift-ingress &>/dev/null
oc create secret tls letsencrypt-certs \
  --cert=${CHAINFILE} \
  --key=${KEYFILE} \
  -n openshift-ingress


echo "Backup prior version of ingresscontroller"
oc get ingresscontroller default -n openshift-ingress-operator -o yaml > /tmp/ingresscontroller.$DATE.yaml
oc patch ingresscontroller.operator default -n openshift-ingress-operator --type=merge --patch='{"spec": { "defaultCertificate": { "name": "letsencrypt-certs" }}}'


echo "Installing Certificates for the API Endpoint"
oc delete secret letsencrypt-certs  -n openshift-config  &>/dev/null
oc create secret tls letsencrypt-certs \
  --cert=${CHAINFILE} \
  --key=${KEYFILE} \
  -n openshift-config

echo "Backup prior version of apiserver"
oc get apiserver cluster -o yaml > /tmp/apiserver_cluster.$DATE.yaml
oc patch apiserver cluster --type merge --patch="{\"spec\": {\"servingCerts\": {\"namedCertificates\": [ { \"names\": [  \"$LE_API\"  ], \"servingCertificate\": {\"name\": \"letsencrypt-certs\" }}]}}}"

echo "#####################################################################################"
echo "true | openssl s_client -connect api.${DOMAIN}:443 --showcerts --servername api.${DOMAIN}"
echo ""


echo "It may take 5-10 minutes for the OpenShift Ingress/API Pods to cycle with the new certs"
echo "You may monitor with:  watch -n 2 'oc get pod -A | grep -i -v -e running -e complete'  "
echo ""
echo "Per Openshift documentation use the below command to monitor the state of the API server"
echo "ensure PROGRESSING column states False as the status before continuing with deployment"
echo ""
echo "oc get clusteroperators kube-apiserver "

Please reach out if you wish to learn more or have ANA assist with Kubernetes / OpenShift opportunities.