Red Hat OpenShift is a container orchestration platform that provides an enterprise-grade solution for deploying, running, and managing applications in public, on-premises, or hybrid cloud environments.
This blog entry outlines the high-level architecture of a lab OpenShift on-premises cloud environment built on VMware Workstation infrastructure.
Red Hat OpenShift and the customized ISO image with Red Hat CoreOS provide a straightforward process to build your lab and can help lower training costs. You may watch the end-to-end process in the video below or follow this blog entry to understand the overall process.
Requirements:
- Red Hat Developer Account w/ Red Hat Developer Subscription for Individuals
- Local DNS to resolve a minimum of three (3) addresses for OpenShift. (api.[domain], api-int.[domain], *.apps.[domain])
- DHCP Server (may use VMware Workstation NAT’s DHCP)
- Storage (we recommend NFS for an on-prem deployment/lab) for OpenShift logging/monitoring and any database/directory data to be retained.
- SSH Terminal Program w/ SSH Key.
- Browser(s)
- Front Loader/Load Balancer (HAProxy)
- VMware Workstation Pro 16.x
- Specs: (We used more than the minimum recommended by OpenShift to prepare for other applications)
- Three (3) Control Plane Nodes @ 8 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type
- Four (4) Worker Nodes @ 4 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type
Post-Efforts: Apply these to provide additional value. [Included as examples]
- Add entropy service (haveged) to all nodes/pods to increase security & performance.
- Let's Encrypt wildcard certs for *.[DOMAIN] and *.apps.[DOMAIN] to avoid self-signed certs for external UIs and the need to type “thisisunsafe” in the Chrome browser to access the local OpenShift console.
- Update OpenShift Ingress to be aware of more than two (2) worker nodes.
- Update OpenShift to use NFS as the default storage class (see the sketch after this list).
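As an illustration of the last item, if you install an NFS provisioner such as the community nfs-subdir-external-provisioner (which typically creates a StorageClass named nfs-client; both the provisioner choice and the class name are assumptions here), a minimal sketch to mark it as the cluster default is:
# Hypothetical example: mark an NFS-backed StorageClass (assumed name: nfs-client) as the cluster default
oc patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
oc get storageclass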
Below is a view of our footprint to deploy the OpenShift 4.x environment in a local data center hosted by VMware Workstation.

Red Hat OpenShift provides three (3) deployment options: Cloud, Datacenter, and Local. Local is similar to minikube, running a few pods on your laptop/workstation. The Cloud option requires deploying the nodes (CPU/RAM/disk) and load balancers on another vendor's infrastructure; if you deploy OpenShift on AWS or GCP, plan a budget of roughly $500/mo per resource for the assets.

After reviewing the open-source OKD solution and the various OpenShift deployment methods, we selected the “Datacenter” option within OpenShift. Two (2) points made this decision easy.
- Red Hat OpenShift offers a sixty (60) day eval license.
- This license can be restarted for another sixty (60) days if you delete/archive the last cluster.
- Red Hat OpenShift provides a customized ISO image with Red Hat CoreOS, Ignition YAML files, and an embedded SSH public key that does a lot of the heavy lifting for setting up the cluster.

The screen below showcases the process that Red Hat uses to build a bootstrap ISO image from Red Hat CoreOS, Ignition YAML files (which determine whether a node is a control plane or worker node), and the embedded SSH key. This process provides a lot of value and streamlines the effort of building a cluster.

DNS Requirement
The minimum DNS entries required for OpenShift are three (3) addresses:
api.[domain]
api-int.[domain]
*.apps.[domain]
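As an illustration, a minimal dnsmasq configuration for these records might look like the following. The domain okd.anapartner.dev and the load balancer IP 192.168.2.101 are taken from this lab's examples and are assumptions for your environment; note that in dnsmasq an address=/apps.[domain]/ entry also matches *.apps.[domain].
# /etc/dnsmasq.d/openshift-lab.conf (hypothetical example; all three records point at the HAProxy load balancer)
address=/api.okd.anapartner.dev/192.168.2.101
address=/api-int.okd.anapartner.dev/192.168.2.101
address=/apps.okd.anapartner.dev/192.168.2.101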

Front Load Balancer (HAProxy)
Update haproxy.cfg as needed for your IP addresses and ports. To avoid deploying HAProxy twice, we merged two (2) HAProxy configurations into a single file and use the “bind” directive with separate host IP addresses, preventing a conflict on the port 80/443 frontends between OpenShift and another application deployed on OpenShift.
# Global settings
# Set $IP_RANGE as an OS ENV or Global variable before running HAPROXY
# Important: If using VMware Workstation NAT, ensure this range is correctly defined to
# avoid an x509 error on port 22623 when the control planes start up
#
# Ensure the 3XXXX ports are defined correctly to match the ingress NodePorts
# - We have predefined these ports to 32080 and 32443 for helm deployment of ingress
# oc -n ingress get svc
#
#---------------------------------------------------------------------
global
setenv IP_RANGE 192.168.243
setenv HA_BIND_IP1 192.168.2.101
setenv HA_BIND_IP2 192.168.2.111
maxconn 20000
log /dev/log local0 info
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option redispatch
option forwardfor except 127.0.0.0/8
retries 3
maxconn 20000
timeout http-request 10000ms
timeout http-keep-alive 10000ms
timeout check 10000ms
timeout connect 40000ms
timeout client 300000ms
timeout server 300000ms
timeout queue 50000ms
# Enable HAProxy stats
# Important Note: Patch OpenShift Ingress to allow internal RHEL CoreOS haproxy to run on additional worker nodes
# oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 7}}' --type=merge
#
listen stats
bind :9000
stats uri /
stats refresh 10000ms
# Kube API Server
frontend k8s_api_frontend
bind :6443
default_backend k8s_api_backend
mode tcp
option tcplog
backend k8s_api_backend
mode tcp
balance source
server ocp-cp-1_6443 "$IP_RANGE".128:6443 check
server ocp-cp-2_6443 "$IP_RANGE".129:6443 check
server ocp-cp-3_6443 "$IP_RANGE".130:6443 check
# OCP Machine Config Server
frontend ocp_machine_config_server_frontend
mode tcp
bind :22623
default_backend ocp_machine_config_server_backend
option tcplog
backend ocp_machine_config_server_backend
mode tcp
balance source
server ocp-cp-1_22623 "$IP_RANGE".128:22623 check
server ocp-cp-2_22623 "$IP_RANGE".129:22623 check
server ocp-cp-3_22623 "$IP_RANGE".130:22623 check
# OCP Machine Config Server #2
frontend ocp_machine_config_server_frontend2
mode tcp
bind :22624
default_backend ocp_machine_config_server_backend2
option tcplog
backend ocp_machine_config_server_backend2
mode tcp
balance source
server ocp-cp-1_22624 "$IP_RANGE".128:22624 check
server ocp-cp-2_22624 "$IP_RANGE".129:22624 check
server ocp-cp-3_22624 "$IP_RANGE".130:22624 check
# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend ocp_http_ingress_frontend
bind "$HA_BIND_IP1":80
default_backend ocp_http_ingress_backend
mode tcp
option tcplog
backend ocp_http_ingress_backend
balance source
mode tcp
server ocp-w-1_80 "$IP_RANGE".131:80 check
server ocp-w-2_80 "$IP_RANGE".132:80 check
server ocp-w-3_80 "$IP_RANGE".133:80 check
server ocp-w-4_80 "$IP_RANGE".134:80 check
server ocp-w-5_80 "$IP_RANGE".135:80 check
server ocp-w-6_80 "$IP_RANGE".136:80 check
server ocp-w-7_80 "$IP_RANGE".137:80 check
frontend ocp_https_ingress_frontend
bind "$HA_BIND_IP1":443
default_backend ocp_https_ingress_backend
mode tcp
option tcplog
backend ocp_https_ingress_backend
mode tcp
balance source
server ocp-w-1_443 "$IP_RANGE".131:443 check
server ocp-w-2_443 "$IP_RANGE".132:443 check
server ocp-w-3_443 "$IP_RANGE".133:443 check
server ocp-w-4_443 "$IP_RANGE".134:443 check
server ocp-w-5_443 "$IP_RANGE".135:443 check
server ocp-w-6_443 "$IP_RANGE".136:443 check
server ocp-w-7_443 "$IP_RANGE".137:443 check
######################################################################################
# VIPAUTHHUB Ingress
frontend vip_http_ingress_frontend
bind "$HA_BIND_IP2":80
mode tcp
option forwardfor
option http-server-close
default_backend vip_http_ingress_backend
backend vip_http_ingress_backend
mode tcp
balance roundrobin
server vip-w-1_32080 "$IP_RANGE".131:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32080 "$IP_RANGE".132:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32080 "$IP_RANGE".133:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32080 "$IP_RANGE".134:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32080 "$IP_RANGE".135:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32080 "$IP_RANGE".136:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32080 "$IP_RANGE".137:32080 check fall 3 rise 2 send-proxy-v2
frontend vip_https_ingress_frontend
bind "$HA_BIND_IP2":443
# mgmt-sspfqdn
acl is_mgmt_ssp hdr_end(host) -i mgmt-ssp.okd.anapartner.dev
use_backend vip_ingress-nodes_mgmt-nodeport if is_mgmt_ssp
mode tcp
#option forwardfor
option http-server-close
default_backend vip_https_ingress_backend
backend vip_https_ingress_backend
mode tcp
balance roundrobin
server vip-w-1_32443 "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32443 "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32443 "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32443 "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32443 "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32443 "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32443 "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2
backend vip_ingress-nodes_mgmt-nodeport
mode tcp
balance roundrobin
server vip-w-1_32443 "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32443 "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32443 "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32443 "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32443 "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32443 "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32443 "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2
######################################################################################
Use the following commands to add a second IP address to a NIC on the main VMware Workstation host, where the NIC is eno1 and the second IP address is 192.168.2.111:
nmcli dev show eno1
sudo nmcli dev mod eno1 +ipv4.address 192.168.2.111/24
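To confirm the second address is active, a quick check (assuming the same NIC name) is:
# show all IPv4 addresses currently assigned to eno1
ip -4 addr show dev eno1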
VMware Workstation Hosts / Nodes
When building the VMware hosts, ensure that you use the Guest OS Type “Red Hat Enterprise Linux 8 x64” to match the embedded Red Hat CoreOS provided in the ISO image. Otherwise, DHCP services may not work correctly, and when the VMware host boots, it may not receive an IP address.
The VMware hosts for Control Plane Nodes are recommended to be 8 vCPU, 16 GB RAM, and 100 GB HDD. The VMware hosts for Worker Nodes are recommended to be 4 vCPU, 16 GB RAM, and 100 GB HDD.
OpenShift requires a minimum of three (3) Control Plane Nodes and two (2) Worker Nodes. Please check the requirements of any solution you plan to deploy and adjust these parameters as needed. We will deploy four (4) Worker Nodes for the Symantec VIP Auth Hub solution and horizontally scale with more worker nodes for Symantec API Manager and SiteMinder.

Before starting any of these images, create a local snapshot as a “before” state. This will allow you to redeploy with minimal impact if there is any issue.
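For example, the snapshot can be scripted with the vmrun utility that ships with VMware Workstation; the .vmx path and snapshot name below are placeholders.
# Hypothetical example: snapshot one node's VM before its first boot, then list its snapshots
vmrun -T ws snapshot "/vmware/ocp-cp-1/ocp-cp-1.vmx" "before-openshift-deploy"
vmrun -T ws listSnapshots "/vmware/ocp-cp-1/ocp-cp-1.vmx"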
Before starting the deployment, you may wish to create a new NAT VMware Network, to avoid impacting any existing VMware images on the same address range. We will be adjusting the dhcpd.conf and dhcpd.leases files for this network.
To avoid an issue with reverse DNS lookups within pods and containers, remove a default value from dhcpd.conf: stop the VMware network, remove or comment out the line “option domain-name localdomain;”, clear any dhcpd.leases information, then restart the VMware network.

ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
sudo /usr/bin/vmware-networks --stop ; echo ""
sudo cp /dev/null /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
cat /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
sudo /usr/bin/vmware-networks --start ; echo ""
ls -lart /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
cat /etc/vmware/vmnet8/dhcpd/dhcpd.leases ; echo ""
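The dhcpd.conf edit mentioned above can also be scripted; a minimal sketch, run while the VMware network is stopped and assuming the NAT network is vmnet8 with its config file next to the leases file, is:
# Comment out the default domain-name option (a backup copy is written to dhcpd.conf.bak)
sudo sed -i.bak 's/^ *option domain-name .*localdomain.*/# &/' /etc/vmware/vmnet8/dhcpd/dhcpd.conf
grep -n "domain-name" /etc/vmware/vmnet8/dhcpd/dhcpd.conf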
OpenShift / Kubernetes / Helm Command Line Binaries
Download these two (2) client packages to obtain three (3) binaries for interfacing with the OpenShift/Kubernetes API server.

Download Openshift Binaries for remote management (on main host)
#########################
sudo su -
mkdir -p /tmp/openshift && cd /tmp/openshift
curl -skOL https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64.tar.gz ; tar -zxvf helm-linux-amd64.tar.gz
curl -skOL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz ; tar -zxvf openshift-client-linux.tar.gz
mv -f oc /usr/bin/oc
mv -f kubectl /usr/bin/kubectl
mv -f helm-linux-amd64 /usr/local/bin/helm
oc version
helm version
kubectl version --client
Start an OpenShift Cluster Deployment

OpenID Configuration with OpenShift
Post-deployment step: after you have deployed the OpenShift cluster, you will be asked to create an identity provider (IDP) to authenticate additional accounts. Below is an example with OpenShift and MS Azure AD. The image below showcases the parameters and values to be shared between the two solutions.
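For reference, a minimal sketch of the OpenShift OAuth resource for an OpenID Connect IDP backed by Azure AD is shown below. The IDP name, secret name, client ID, and tenant ID are placeholders, and the client secret must first be stored as a secret in the openshift-config namespace.
# Hypothetical example: store the Azure app registration client secret, then configure the OpenID IDP
oc create secret generic openid-client-secret --from-literal=clientSecret='<azure-app-client-secret>' -n openshift-config
cat <<EOF | oc apply -f -
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: azuread
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <azure-app-client-id>
      clientSecret:
        name: openid-client-secret
      claims:
        preferredUsername:
        - preferred_username
        email:
        - email
        name:
        - name
      issuer: https://login.microsoftonline.com/<azure-tenant-id>/v2.0
EOF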

Entropy DaemonSet for OpenShift Nodes/Pods
We can validate the entropy on an OpenShift node or pod via /dev/random. We prefer to emulate 1,000 password changes to showcase how rapidly the 4K entropy pool is depleted when a security process accesses it. Below is an example as a single line of bash code.
Validate Entropy in Openshift Nodes [Before/After use of Haveged Deployment]
#########################
(counter=1;MAX=1001;time while [ $counter -le $MAX ]; do echo "";echo "########## $counter ##########" ; echo "Entropy = `cat /proc/sys/kernel/random/entropy_avail` out of 4096"; echo "" ; time dd if=/dev/random bs=8 count=1 2>/dev/null | base64; counter=$(( $counter + 1 )); done;)
To deploy an entropy daemonset, we can leverage what is documented by Broadcom/Symantec in their VIP Auth Hub documentation. https://techdocs.broadcom.com/us/en/symantec-security-software/identity-security/vip-authentication-hub/2022-Oct/operating/troubleshooting/checking-entropy-level.html#concept.dita_d3303fde-e786-4fd4-b0b6-e3a28fd60a82
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  labels:
    run: haveged
  name: haveged
spec:
  selector:
    matchLabels:
      run: haveged
  template:
    metadata:
      labels:
        run: haveged
    spec:
      containers:
      - image: hortonworks/haveged:1.1.0
        name: haveged
        securityContext:
          privileged: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
EOF
Patch OpenShift Workers
If the number of OpenShift Workers is greater than two (2), then you will need to patch the OpenShift Ingress controller to scale up to the number of worker nodes.
WORKERS=`oc get nodes | grep worker | wc -l`
echo ""
echo "######################################################################"
echo "# of Worker replicas in OpenShift Ingress prior to update:"
oc get -n openshift-ingress-operator ingresscontroller -o yaml | grep -i replicas:
echo "######################################################################"
echo ""
oc patch -n openshift-ingress-operator ingresscontroller/default --patch "{\"spec\":{\"replicas\": ${WORKERS}}}" --type=merge
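To confirm the patch was applied, assuming the default IngressController, you can check the replica count and the resulting router pods:
# replica count of the default IngressController, then the router pods spread across worker nodes
oc get -n openshift-ingress-operator ingresscontroller/default -o jsonpath='{.spec.replicas}{"\n"}'
oc get pods -n openshift-ingress -o wide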
Let's Encrypt Certs for OpenShift Ingress and API Server
The default certs that ship with OpenShift are self-signed. This is not an issue until you attempt to access the local OpenShift console with a browser and are blocked by newer security enforcement in the browsers. To avoid this challenge, we recommend switching the certs to Let's Encrypt. There are many examples of how to rotate the certs; we used the link below. https://docs.openshift.com/container-platform/4.12/security/certificates/replacing-default-ingress-certificate.html
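The snippet below references a few shell variables. As an illustration, they could be set as follows; the certbot paths and the domain are assumptions based on this lab's examples, so adjust them for your environment.
DOMAIN=okd.anapartner.dev                                # hypothetical lab domain
LE_API=api.${DOMAIN}
CHAINFILE=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem  # assumed certbot output path
KEYFILE=/etc/letsencrypt/live/${DOMAIN}/privkey.pem      # assumed certbot output path
DATE=$(date +%Y%m%d-%H%M%S)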
echo "Installing ConfigMap for the Default Ingress Controllers"
oc delete configmap letsencrypt-fullchain-ca -n openshift-config &>/dev/null
oc create configmap letsencrypt-fullchain-ca \
--from-file=ca-bundle.crt=${CHAINFILE} \
-n openshift-config
oc patch proxy/cluster \
--type=merge \
--patch='{"spec":{"trustedCA":{"name":"letsencrypt-fullchain-ca"}}}'
echo "Installing Certificates for the Default Ingress Controllers"
oc delete secret letsencrypt-certs -n openshift-ingress &>/dev/null
oc create secret tls letsencrypt-certs \
--cert=${CHAINFILE} \
--key=${KEYFILE} \
-n openshift-ingress
echo "Backup prior version of ingresscontroller"
oc get ingresscontroller default -n openshift-ingress-operator -o yaml > /tmp/ingresscontroller.$DATE.yaml
oc patch ingresscontroller.operator default -n openshift-ingress-operator --type=merge --patch='{"spec": { "defaultCertificate": { "name": "letsencrypt-certs" }}}'
echo "Installing Certificates for the API Endpoint"
oc delete secret letsencrypt-certs -n openshift-config &>/dev/null
oc create secret tls letsencrypt-certs \
--cert=${CHAINFILE} \
--key=${KEYFILE} \
-n openshift-config
echo "Backup prior version of apiserver"
oc get apiserver cluster -o yaml > /tmp/apiserver_cluster.$DATE.yaml
oc patch apiserver cluster --type merge --patch="{\"spec\": {\"servingCerts\": {\"namedCertificates\": [ { \"names\": [ \"$LE_API\" ], \"servingCertificate\": {\"name\": \"letsencrypt-certs\" }}]}}}"
echo "#####################################################################################"
echo "true | openssl s_client -connect api.${DOMAIN}:443 --showcerts --servername api.${DOMAIN}"
echo ""
echo "It may take 5-10 minutes for the OpenShift Ingress/API Pods to cycle with the new certs"
echo "You may monitor with: watch -n 2 'oc get pod -A | grep -i -v -e running -e complete' "
echo ""
echo "Per Openshift documentation use the below command to monitor the state of the API server"
echo "ensure PROGRESSING column states False as the status before continuing with deployment"
echo ""
echo "oc get clusteroperators kube-apiserver "
Please reach out if you wish to learn more or have ANA assist with Kubernetes / OpenShift opportunities.