We are fond of the Let’s Encrypt DNS challenge process over the alternative validation methods. The DNS challenge with certbot allows businesses to scale replacement of certs that are not exposed directly to the internet. The certbot tool has switches that allow custom scripts to be run, which provides a great deal of flexibility.
We had been using certbot with manual steps every 90 days for our development DNS domains but wanted to automate these steps. Unfortunately, we discovered that Google Domains does not offer an API to update DNS records. After research, we did find that Google Cloud DNS has the APIs available.
We had several options:
a) Move the DNS domains from Google Domains to Google Cloud,
b) redirect CNAME records from Google Domains to Google Cloud,
c) move to another Domain Registrar that has APIs available,
d) redirect CNAME records from Google Domains to another Domain Registrar.
Since we had another Domain Registrar offering APIs, we decided on option d. This entry reviews our steps and how we leverage certbot and the two (2) DNS Domain Registrars.
Step 1: Google Domains – Create _acme-challenge CNAME records.
Finally, we will create a script to be executed by crontab every 85 days. Please note that the scripts called by certbot are created as HERE DOCS to allow portability within a single script.
#!/bin/bash
###############################################################################
#
# Update Google DNS via a round-about way through the 2nd DNS Registrar's DNS API (anapartner.in)
#
#
# Pre-work:
# 1. Use existing or purchase a domain from the 2nd DNS registrar
#
# 2. Create Google Domains CNAME records for each wildcard domain, pointing to a remote DNS TXT record
# _acme-challenge.gke.iam.anapartner.org CNAME _acme-challenge.gke.iam.anapartner.org.anapartner.in
# _acme-challenge.aks.iam.anapartner.org CNAME _acme-challenge.aks.iam.anapartner.org.anapartner.in
# _acme-challenge.eks.iam.anapartner.org CNAME _acme-challenge.eks.iam.anapartner.org.anapartner.in
#
# 3. Create 2nd DNS Registrar TXT records for each object to be updated
# _acme-challenge.gke.iam.anapartner.org.anapartner.in
# _acme-challenge.aks.iam.anapartner.org.anapartner.in
# _acme-challenge.eks.iam.anapartner.org.anapartner.in
#
# 4. Enable the 2nd DNS Registrar API for Production Access (Developer) & store the KEY & SECRET for use
# https://developer.godaddy.com/keys?hbi_code=1
#
# 5. Install certbot: dnf -y install certbot
# Note: certbot will use two (2) variables of: CERTBOT_DOMAIN (after the -d switch)
# and CERTBOT_VALIDATION (the text string to be used for TXT records)
#
# ANA 07/2022
#
###############################################################################
GODADDY_API_KEY="XXXXXXXXXXXXXy"
GODADDY_API_SECRET="XXXXXXXXXXXXXXX"
DOMAIN="anapartner.in"
echo ""
echo "Create wildcard domain list"
echo "This may be any TXT record for a remote domain FQDN that is mapped in the anapartner.in"
echo "#####################################################################"
#cat << 'EOF' > wildcard-domains.txt
#*.gke.iam.anapartner.in
#*.aks.iam.anapartner.in
#*.eks.iam.anapartner.in
#EOF
cat << 'EOF' > wildcard-domains.txt
*.gke.iam.anapartner.dev
*.aks.iam.anapartner.dev
*.eks.iam.anapartner.dev
EOF
WILDCARD_DOMAIN=anapartner.dev
echo ""
echo "Create godaddy.sh script to update TXT records"
echo "#####################################################################"
cat << EOF > godaddy.sh
#!/bin/bash
if [[ "\$CERTBOT_DOMAIN" =~ .*anapartner.in* ]];then
echo "If domain contains anapartner.in, we need to remove the last part to avoid duplicates during registration"
CERTBOT_DOMAIN="\${CERTBOT_DOMAIN/".anapartner.in"//}"
echo \$CERTBOT_DOMAIN
fi
DNS_REC_NAME="_acme-challenge.\$CERTBOT_DOMAIN"
curl -s -X PUT \
"https://api.godaddy.com/v1/domains/${DOMAIN}/records/TXT/\${DNS_REC_NAME}" \
-H "accept: application/json" -H "Content-Type: application/json" \
-H "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"\$CERTBOT_VALIDATION\", \"name\": \"\${DNS_REC_NAME}\", \"ttl\": 600 }]"
sleep 30
EOF
chmod 555 godaddy.sh
echo ""
echo "Create godaddy-clean.sh script to wipe TXT records - as needed"
echo "#####################################################################"
cat << EOF > godaddy-clean.sh
#!/bin/bash
if [[ "\$CERTBOT_DOMAIN" =~ .*anapartner.in* ]];then
echo "If domain contains anapartner.in, we need to remove the last part to avoid duplicates during registration"
CERTBOT_DOMAIN="\${CERTBOT_DOMAIN/".anapartner.in"//}"
echo \$CERTBOT_DOMAIN
fi
DNS_REC_NAME="_acme-challenge.\$CERTBOT_DOMAIN"
curl -s -X PUT \
"https://api.godaddy.com/v1/domains/${DOMAIN}/records/TXT/\${DNS_REC_NAME}" \
-H "accept: application/json" -H "Content-Type: application/json" \
-H "Authorization: sso-key ${GODADDY_API_KEY}:${GODADDY_API_SECRET}" \
-d "[{ \"data\": \"clean\", \"name\": \"\${DNS_REC_NAME}\", \"ttl\": 600 }]"
EOF
chmod 555 godaddy-clean.sh
echo ""
echo "Start Loop to use Let's Encrypt's certbot tool"
echo "#####################################################################"
while read -r domain;
do
echo "#####################################################################"
echo "$domain"
echo ""
certbot -d $domain --agree-tos --register-unsafely-without-email --manual \
--preferred-challenges dns --manual-auth-hook ./godaddy.sh \
--manual-cleanup-hook ./godaddy-clean.sh --manual-public-ip-logging-ok \
--force-renewal certonly
echo ""
done < wildcard-domains.txt
# Add logic to handle the certs/keys when they are issued.
echo ""
echo "#####################################################################"
ls -lart /etc/letsencrypt/archive/*
#rm -rf godaddy.sh godaddy-clean.sh &>/dev/null
echo ""
echo ""
echo "After validation, the TXT records will be marked with the 'clean' string "
echo "#####################################################################"
echo "nslookup -type=txt _acme-challenge.eks.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
echo "nslookup -type=txt _acme-challenge.aks.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
echo "nslookup -type=txt _acme-challenge.gke.iam.$WILDCARD_DOMAIN 8.8.8.8 | head -6"
View of the script being executed
Files generated by Let’s Encrypt certbot [certN.pem, privkeyN.pem, chainN.pem, and fullchainN.pem]
While preparing to enable TLS encryption for the Provisioning Tier to send notification events within the Identity Suite Virtual Appliance, we noticed some challenges that we wish to clarify.
The Identity Suite Virtual Appliance has four (4) web services that use pre-built self-signed certificates when first deployed. Documentation is provided to change these certificates/keys using aliases or soft-links.
One of the challenges we discovered is the Provisioning Tier may be using an older version of libcurl & OpenSSL that have constraints that need to be managed. These libraries are used during the web submission to the IME ETACALLBACK webservice. We will review the processes to capture these error messages and how to address them.
We will introduce the use of Let’s Encrypt wildcard certificates into the four (4) web services and the Provisioning Server’s ETACALLBACK use of a valid public root certificate.
The Apache HTTPD service is used both as a reverse proxy (TCP 443) to the three (3) Wildfly services and as the service for the vApp Management Console (TCP 10443). The Apache HTTPD service SSL certs use the path /etc/pki/tls/certs/localhost.crt for a self-signed certificate. A soft-link is used to redirect this to a location that the ‘config’ service ID has access to modify. The same is true for the private key. A sketch of replacing these files follows the caption below.
A view of the Apache HTTPD SSL self-signed certificate and key.
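As a sketch of that soft-link redirection with the Let’s Encrypt files (the target folder below is an example, not the vApp’s documented path; follow the vendor documentation for the actual soft-link targets and restart rights):
ls -l /etc/pki/tls/certs/localhost.crt /etc/pki/tls/private/localhost.key
### Replace the soft-link targets with the Let's Encrypt files as the 'config' service ID
cp fullchain1.pem /home/config/certs/localhost.crt
cp privkey1.pem /home/config/certs/localhost.key
sudo systemctl restart httpd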
The three (3) Wildfly services are deployed for the Identity Manager, Identity Governance, and Identity Portal components. The configuration for TLS security is defined within the primary Wildfly configuration file, standalone.xml. The current configuration is already set up with the paths to PKCS12 keystore files:
A view of the three (3) Wildfly PKCS12 keystore files and view of the self-signed cert/key with the pseudo hostname of the vApp host.
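If you need to rebuild one of these PKCS12 keystores from the Let’s Encrypt files, a minimal openssl sketch follows (the friendly name, output file, and password are examples; match the keystore path and password defined in standalone.xml):
openssl pkcs12 -export -in fullchain1.pem -inkey privkey1.pem \
 -name wildfly-server -out caim-srv.p12 -passout pass:changeit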
Provisioning Server process for TLS enablement for IME ETACALLBACK process.
Step 1. Ensure that the Provisioning Server is enabled to send data/notification events to the IME.
Step 2. Within the IME Management Console, there is a baseURL parameter. This string is sent down to the Provisioning Server upon restart of the IME and appended to a list. This list is viewable and manageable within the Provisioning Manager UI under [System/Identity Manager Setup]. The URL string will be appended with the string ETACALLBACK/?env=identityEnv. Within the Provisioning Server, we can manage which URLs have priority in the list. This is a failover list, not a load-balancing list. We have the opportunity to introduce an F5 or similar load balancer URL, but we should enable TLS security prior to doing so.
Step 3. Add the public root CA cert or CA chain certs to the following location: [System/Domain Configuration/Identity Manager Server/Trusted CA Bundle]. This PEM file may be placed in the Provisioning Server bin folder with no path, or may use a fully qualified path to the PEM file. Note: The Provisioning Server is using a version of openssl/libcurl that will report errors that can be managed with wildcard certificates. We will show the common errors in this blog entry.
Let’s Encrypt offers a free service to build wildcard certificates. We are fond of using their DNS validation method to request a wildcard certificate.
sudo certbot certonly --manual --preferred-challenges dns -d *.aks.iam.anapartner.dev --register-unsafely-without-email
Let’s Encrypt will provide four (4) files to be used. [certN.pem, privkeyN.pem, chainN.pem, fullchainN.pem]
cert1.pem [The primary server side wildcard cert]
privkey1.pem [The primary server side private key associated with the wildcard cert]
chain1.pem [The intermediate chain certs that are needed to validate the cert1 cert]
fullchain1.pem [The two files cert1.pem and chain1.pem together, in that order.]
NOTE: fullchain1.pem is the file you would typically use as the cert for a solution, so the solution will also have the intermediate CA chain certs for validation.
Important Note: One of the root public certs was cross-signed by another root public cert that has expired. Most solutions are able to manage this challenge, but the Provisioning Server’s ETACALLBACK has difficulty with an expired certificate in the chain. There are replacements for this expired certificate that we will walk through. Ref: https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
Create new CA chain PEM files for LE (Let’s Encrypt) validation to use with the Provisioning Server.
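A minimal sketch of building that chain, assuming the current Let’s Encrypt intermediate is R3 and the self-signed ISRG Root X1 replaces the expired DST Root CA X3 (verify the current file names at https://letsencrypt.org/certificates/):
curl -skO https://letsencrypt.org/certs/isrgrootx1.pem
curl -skO https://letsencrypt.org/certs/lets-encrypt-r3.pem
cat lets-encrypt-r3.pem isrgrootx1.pem > le-chain-for-etacallback.pem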
Validate with browsers and view the HTTPS lock symbol to view the certificate
Test with an update to a Provisioning Global User’s attribute [Note: No need to sync to accounts]. Ensure that the Identity Manager Setup Log Level = DEBUG to monitor this submission with the Provisioning Server etanotifyXXXXXXX.log.
A view of the submission for updating the Global User’s Description via IMPS (IM Provisioning Server) etanotifyXXXXXXX.log. The configuration will be loaded for using the URLs defined. Then we can monitor for the submission of the update.
Finally, a view using the IME VST (View Submitted Tasks) for the ETACALLBACK process using the task Provisioning Modify User.
Common TLS errors seen with the Provisioning Server ETACALLBACK
Ensure that the configuration is enabled for debug log level, so we may view these errors and correct them. [rc=77] will occur if the PEM file does not exist or is not in the correct path. [rc=51] will occur if the URL defined does not match the server-side certificate exactly (a good reason to use a wildcard certificate, or to adjust your URL FQDN to match the cert subject CN=XXXX value). [rc=60] will occur if the remote web service is using a self-signed certificate, or if any certificate in the chain, including the public root CA cert, has expired.
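Since ETACALLBACK uses libcurl underneath, these return codes can be reproduced outside the Provisioning Server with a curl sketch against the IME URL (the hostname below is an example):
curl -sv --cacert ./le-chain-for-etacallback.pem "https://ime.aks.iam.anapartner.dev/iam/im/ETACALLBACK/?env=identityEnv"
echo "curl return code: $?"  ### 51/60/77 map to the [rc=NN] errors above; 0 = TLS validation succeeded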
Other Error messages (curl)
If you see an error message with Apache HTTPD (TCP 443) with curl about “curl: (60) Peer certificate cannot be authenticated with known CA certificates”, please ignore this, as the vApp does not have the “ca-bundle.crt” configuration enabled. See RedHat note: https://access.redhat.com/solutions/523823
RedHat OpenShift is one of the container orchestration platforms that provides an enterprise-grade solution for deploying, running, and managing applications on public, on-premise, or hybrid cloud environments.
This blog entry outlines the high-level architecture of a LAB OpenShift on-prem cloud environment built on VMware Workstation infrastructure.
Red Hat OpenShift and the customized ISO image with Red Hat Core OS provide a straightforward process to build your lab and can help lower the training cost. You may watch the end-to-end process in the video below or follow this blog entry to understand the overall process.
Requirements:
Red Hat Developer Account w/ Red Hat Developer Subscription for Individuals
Local DNS to resolve a minimum of three (3) addresses for OpenShift. (api.[domain], api-int.[domain], *.apps.[domain])
DHCP Server (may use VMware Workstation NAT’s DHCP)
Storage (recommend using NFS for on-prem deployment/lab) for OpenShift logging/monitoring & any db/dir data to be retained.
SSH Terminal Program w/ SSH Key.
Browser(s)
Front Loader/Load Balancer (HAProxy)
VMware Workstation Pro 16.x
Specs: (We used more than the minimum recommended by OpenShift to prepare for other applications)
Three (3) Control Plane Nodes @ 8 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64 bit” Guest OS Type
Four (4) Worker Nodes @ 4 vCPU/16 GB RAM/100 GB HDD with “Red Hat Enterprise Linux 8 x64” Guest OS Type
Post-Efforts: Apply these to provide additional value. [Included as examples]
Add entropy service (haveged) to all nodes/pods to increase security & performance.
Let’s Encrypt wildcard certs for *.[DOMAIN] and *.apps.[DOMAIN] to avoid self-signed certs for external UIs. Avoid using “thisisunsafe” within the Chrome browser to access the local OpenShift console.
Update OpenShift Ingress to be aware of more than two (2) worker nodes.
Update OpenShift to use NFS as default storage.
Below is a view of our footprint to deploy the OpenShift 4.x environment on a local data center hosted by VMware Workstation.
Red Hat OpenShift provides three (3) deployment options: Cloud, Datacenter, and Local. Local is similar to minikube for your laptop/workstation with a few pods. The Red Hat OpenShift Cloud license requires deployment on other vendors’ sites for the nodes (cpu/ram/disk) and load balancers. If you deploy OpenShift on AWS or GCP, plan a budget of $500/mo per resource for the assets.
After reviewing the open-source OKD solution and the various OpenShift deployment methods, we selected the “DataCenter” option within OpenShift. Two (2) points made this decision easy.
Red Hat OpenShift offers a sixty (60) day eval license.
This license can be restarted for another sixty (60) days if you delete/archive the last cluster.
Red Hat OpenShift provides a customized ISO image with Red Hat Core OS, ignition yaml files, and an embedded SSH Public Key, which does a lot of the heavy lifting for setting up the cluster.
The below screen showcases the process that Red Hat uses to build a bootstrap ISO image using Red Hat Core OS, Ignition yaml files (to determine node type of control plane/worker node), and the embedded SSH Key. This process provides a lot of value to building a cluster and streamlines the effort.
DNS Requirement
The minimum DNS entries required for OpenShift are three (3) addresses: api.[domain], api-int.[domain], and *.apps.[domain].
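For example, with a lab domain of okd.anapartner.dev and HAProxy listening on 192.168.2.101 (both values are from our lab; substitute your own):
api.okd.anapartner.dev.     IN A 192.168.2.101
api-int.okd.anapartner.dev. IN A 192.168.2.101
*.apps.okd.anapartner.dev.  IN A 192.168.2.101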
Update haproxy.cfg as needed for IP addresses/ports. To avoid deploying HAProxy twice, we use the “bind” command to join two (2) HAProxy configuration files together, preventing conflict on the port 80/443 redirects for both OpenShift and another application deployed on OpenShift.
# Global settings
# Set $IP_RANGE as an OS ENV or Global variable before running HAPROXY
# Important: If using VMware Workstation NAT, ensure this range is correctly defined to
# avoid error message with x509 error on port 22623 upon startup on control planes
#
# Ensure the 3XXXX PORT is defined correctly from the ingress
# - We have predefined these ports to 32080 and 32443 for helm deployment of ingress
# oc -n ingress get svc
#
#---------------------------------------------------------------------
global
setenv IP_RANGE 192.168.243
setenv HA_BIND_IP1 192.168.2.101
setenv HA_BIND_IP2 192.168.2.111
maxconn 20000
log /dev/log local0 info
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option redispatch
option forwardfor except 127.0.0.0/8
retries 3
maxconn 20000
timeout http-request 10000ms
timeout http-keep-alive 10000ms
timeout check 10000ms
timeout connect 40000ms
timeout client 300000ms
timeout server 300000ms
timeout queue 50000ms
# Enable HAProxy stats
# Important Note: Patch OpenShift Ingress to allow internal RHEL CoreOS haproxy to run on additional worker nodes
# oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 7}}' --type=merge
#
listen stats
bind :9000
stats uri /
stats refresh 10000ms
# Kube API Server
frontend k8s_api_frontend
bind :6443
default_backend k8s_api_backend
mode tcp
option tcplog
backend k8s_api_backend
mode tcp
balance source
server ocp-cp-1_6443 "$IP_RANGE".128:6443 check
server ocp-cp-2_6443 "$IP_RANGE".129:6443 check
server ocp-cp-3_6443 "$IP_RANGE".130:6443 check
# OCP Machine Config Server
frontend ocp_machine_config_server_frontend
mode tcp
bind :22623
default_backend ocp_machine_config_server_backend
option tcplog
backend ocp_machine_config_server_backend
mode tcp
balance source
server ocp-cp-1_22623 "$IP_RANGE".128:22623 check
server ocp-cp-2_22623 "$IP_RANGE".129:22623 check
server ocp-cp-3_22623 "$IP_RANGE".130:22623 check
# OCP Machine Config Server #2
frontend ocp_machine_config_server_frontend2
mode tcp
bind :22624
default_backend ocp_machine_config_server_backend2
option tcplog
backend ocp_machine_config_server_backend2
mode tcp
balance source
server ocp-cp-1_22624 "$IP_RANGE".128:22624 check
server ocp-cp-2_22624 "$IP_RANGE".129:22624 check
server ocp-cp-3_22624 "$IP_RANGE".130:22624 check
# OCP Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend ocp_http_ingress_frontend
bind "$HA_BIND_IP1":80
default_backend ocp_http_ingress_backend
mode tcp
option tcplog
backend ocp_http_ingress_backend
balance source
mode tcp
server ocp-w-1_80 "$IP_RANGE".131:80 check
server ocp-w-2_80 "$IP_RANGE".132:80 check
server ocp-w-3_80 "$IP_RANGE".133:80 check
server ocp-w-4_80 "$IP_RANGE".134:80 check
server ocp-w-5_80 "$IP_RANGE".135:80 check
server ocp-w-6_80 "$IP_RANGE".136:80 check
server ocp-w-7_80 "$IP_RANGE".137:80 check
frontend ocp_https_ingress_frontend
bind "$HA_BIND_IP1":443
default_backend ocp_https_ingress_backend
mode tcp
option tcplog
backend ocp_https_ingress_backend
mode tcp
balance source
server ocp-w-1_443 "$IP_RANGE".131:443 check
server ocp-w-2_443 "$IP_RANGE".132:443 check
server ocp-w-3_443 "$IP_RANGE".133:443 check
server ocp-w-4_443 "$IP_RANGE".134:443 check
server ocp-w-5_443 "$IP_RANGE".135:443 check
server ocp-w-6_443 "$IP_RANGE".136:443 check
server ocp-w-7_443 "$IP_RANGE".137:443 check
######################################################################################
# VIPAUTHHUB Ingress
frontend vip_http_ingress_frontend
bind "$HA_BIND_IP2":80
mode tcp
option forwardfor
option http-server-close
default_backend vip_http_ingress_backend
backend vip_http_ingress_backend
mode tcp
balance roundrobin
server vip-w-1_32080 "$IP_RANGE".131:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32080 "$IP_RANGE".132:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32080 "$IP_RANGE".133:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32080 "$IP_RANGE".134:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32080 "$IP_RANGE".135:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32080 "$IP_RANGE".136:32080 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32080 "$IP_RANGE".137:32080 check fall 3 rise 2 send-proxy-v2
frontend vip_https_ingress_frontend
bind "$HA_BIND_IP2":443
# mgmt-sspfqdn
acl is_mgmt_ssp hdr_end(host) -i mgmt-ssp.okd.anapartner.dev
use_backend vip_ingress-nodes_mgmt-nodeport if is_mgmt_ssp
mode tcp
#option forwardfor
option http-server-close
default_backend vip_https_ingress_backend
backend vip_https_ingress_backend
mode tcp
balance roundrobin
server vip-w-1_32443 "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32443 "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32443 "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32443 "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32443 "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32443 "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32443 "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2
backend vip_ingress-nodes_mgmt-nodeport
mode tcp
balance roundrobin
server vip-w-1_32443 "$IP_RANGE".131:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-2_32443 "$IP_RANGE".132:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-3_32443 "$IP_RANGE".133:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-4_32443 "$IP_RANGE".134:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-5_32443 "$IP_RANGE".135:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-6_32443 "$IP_RANGE".136:32443 check fall 3 rise 2 send-proxy-v2
server vip-w-7_32443 "$IP_RANGE".137:32443 check fall 3 rise 2 send-proxy-v2
######################################################################################
Use the following commands to add a 2nd IP address to one NIC on the main VMware Workstation host, where NIC = eno1 and the 2nd IP address = 192.168.2.111.
nmcli dev show eno1
sudo nmcli dev mod eno1 +ipv4.addresses 192.168.2.111/24
VMware Workstation Hosts / Nodes
When building the VMware hosts, ensure that you use Guest Type “Red Hat Enterprise Linux 8 x64” to match the embedded Red Hat Core OS provided in the ISO image. Otherwise, DHCP services may not work correctly, and when the VMware host boots, it may not receive an IP address.
The VMware hosts for Control Plane Nodes are recommended to be 8 vCPU, 16 GB RAM, and 100 GB HDD. The VMware hosts for Worker Nodes are recommended to be 4 vCPU, 16 GB RAM, and 100 GB HDD. OpenShift requires a minimum of three (3) Control Plane Nodes and two (2) Worker Nodes. Please check with any solution you may deploy and adjust the parameters as needed. We will deploy four (4) Worker Nodes for the Symantec VIP Auth Hub solution and horizontally scale the solution with more worker nodes for Symantec API Manager and SiteMinder.
Before starting any of these images, create a local snapshot as a “before” state. This will allow you to redeploy with minimal impact if there is any issue.
Before starting the deployment, you may wish to create a new NAT VMware network to avoid impacting any existing VMware images on the same address range. We will be adjusting the dhcpd.conf and dhcpd.leases files for this network.
To avoid an issue with reverse DNS lookup within pods and containers, remove a default value from dhcpd.conf: stop the VMware network, remove or comment out the line “option domain-name localdomain;”, remove any dhcpd.leases information, then restart the VMware network. A sketch of these steps is shown below.
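A sketch of those dhcpd.conf steps on a Linux VMware Workstation host, assuming the new NAT network is vmnet8 (adjust the vmnet number to match your setup):
sudo vmware-networks --stop
sudo sed -i 's/^option domain-name localdomain;/#option domain-name localdomain;/' /etc/vmware/vmnet8/dhcp/dhcpd.conf
sudo truncate -s 0 /etc/vmware/vmnet8/dhcp/dhcpd.leases
sudo vmware-networks --start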
OpenShift / Kubernetes / Helm Command Line Binaries
Download these two (2) client packages to have three (3) binaries for interfacing with OpenShift/Kubernetes API Server.
Download Openshift Binaries for remote management (on main host)
#########################
sudo su -
mkdir -p /tmp/openshift; cd /tmp/openshift
curl -skOL https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64.tar.gz ; tar -zxvf helm-linux-amd64.tar.gz
curl -skOL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz ; tar -zxvf openshift-client-linux.tar.gz
mv -f oc /usr/bin/oc
mv -f kubectl /usr/bin/kubectl
mv -f helm-linux-amd64 /usr/local/bin/helm
oc version
helm version
kubectl version
Start an OpenShift Cluster Deployment
OpenID Configuration with OpenShift
Post-deployment step: After you have deployed the OpenShift cluster, you will be asked to create an IDP to authenticate other accounts. Below is an example with OpenShift and MS Azure. The image below showcases the parameters and values to be shared between the two solutions.
Entropy DaemonSet for OpenShift Nodes/Pods
We can validate the entropy on OpenShift nodes or pods via use of /dev/random. We prefer to emulate 1,000 password changes to showcase how rapidly the entropy pool of 4K bits is depleted when a security process accesses it. An example as a single line of bash:
Validate Entropy in Openshift Nodes [Before/After use of Haveged Deployment]
#########################
(counter=1;MAX=1001;time while [ $counter -le $MAX ]; do echo "";echo "########## $counter ##########" ; echo "Entropy = `cat /proc/sys/kernel/random/entropy_avail` out of 4096"; echo "" ; time dd if=/dev/random bs=8 count=1 2>/dev/null | base64; counter=$(( $counter + 1 )); done;)
If the number of OpenShift Workers is greater than two (2), then you will need to patch the OpenShift Ingress controller to scale up to the number of worker nodes.
WORKERS=`oc get nodes | grep worker | wc -l`
echo ""
echo "######################################################################"
echo "# of Worker replicas in OpenShift Ingress Prior to update"
echo "oc get -n openshift-ingress-operator ingresscontroller -o yaml | grep -i replicas:"
#echo "######################################################################"
echo ""
oc patch -n openshift-ingress-operator ingresscontroller/default --patch "{\"spec\":{\"replicas\": ${WORKERS}}}" --type=merge
Let’s Encrypt Certs for OpenShift Ingress and API Server
The certs deployed with OpenShift are self-signed. This is not an issue until you attempt to access the local OpenShift console with a browser and are stopped from accessing the UI by newer security enforcement in the browsers. To avoid this challenge, we recommend switching the certs to Let’s Encrypt. There are many examples of how to rotate the certs; we used the link below. https://docs.openshift.com/container-platform/4.12/security/certificates/replacing-default-ingress-certificate.html
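The snippet below references a few variables; a sketch of how we set them (values reflect our lab domain and the standard certbot output paths):
DOMAIN=okd.anapartner.dev
LE_API=api.${DOMAIN}
CHAINFILE=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem
KEYFILE=/etc/letsencrypt/live/${DOMAIN}/privkey.pem
DATE=$(date +%Y%m%d-%H%M)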
echo "Installing ConfigMap for the Default Ingress Controllers"
oc delete configmap letsencrypt-fullchain-ca -n openshift-config &>/dev/null
oc create configmap letsencrypt-fullchain-ca \
--from-file=ca-bundle.crt=${CHAINFILE} \
-n openshift-config
oc patch proxy/cluster \
--type=merge \
--patch='{"spec":{"trustedCA":{"name":"letsencrypt-fullchain-ca"}}}'
echo "Installing Certificates for the Default Ingress Controllers"
oc delete secret letsencrypt-certs -n openshift-ingress &>/dev/null
oc create secret tls letsencrypt-certs \
--cert=${CHAINFILE} \
--key=${KEYFILE} \
-n openshift-ingress
echo "Backup prior version of ingresscontroller"
oc get ingresscontroller default -n openshift-ingress-operator -o yaml > /tmp/ingresscontroller.$DATE.yaml
oc patch ingresscontroller.operator default -n openshift-ingress-operator --type=merge --patch='{"spec": { "defaultCertificate": { "name": "letsencrypt-certs" }}}'
echo "Installing Certificates for the API Endpoint"
oc delete secret letsencrypt-certs -n openshift-config &>/dev/null
oc create secret tls letsencrypt-certs \
--cert=${CHAINFILE} \
--key=${KEYFILE} \
-n openshift-config
echo "Backup prior version of apiserver"
oc get apiserver cluster -o yaml > /tmp/apiserver_cluster.$DATE.yaml
oc patch apiserver cluster --type merge --patch="{\"spec\": {\"servingCerts\": {\"namedCertificates\": [ { \"names\": [ \"$LE_API\" ], \"servingCertificate\": {\"name\": \"letsencrypt-certs\" }}]}}}"
echo "#####################################################################################"
echo "true | openssl s_client -connect api.${DOMAIN}:443 --showcerts --servername api.${DOMAIN}"
echo ""
echo "It may take 5-10 minutes for the OpenShift Ingress/API Pods to cycle with the new certs"
echo "You may monitor with: watch -n 2 'oc get pod -A | grep -i -v -e running -e complete' "
echo ""
echo "Per Openshift documentation use the below command to monitor the state of the API server"
echo "ensure PROGRESSING column states False as the status before continuing with deployment"
echo ""
echo "oc get clusteroperators kube-apiserver "
Please reach out if you wish to learn more or have ANA assist with Kubernetes / OpenShift opportunities.
Typically, we may use various tools to view JMS queue related metrics for trends and stale/stuck activity. During issues with the J2EE JMS queue, though, it is helpful to be able to view and trace transactions to assist with a resolution. With the proper logging levels enabled, Wildfly/JBoss logs show detailed information containing the JMS IDs associated with each transaction. The JMS transactions we see in the logs are already ‘in-flight’ and being processed by a message handler.
On the Symantec Identity Suite Virtual Appliance, the Wildfly & HornetQ processes run under the ‘wildfly’ service ID. The Wildfly journals are located in the Wildfly data folder and stored in a format that is efficient for processing. To perform analysis on the data within these journals, though, we noticed a challenge with read permissions on the HornetQ files, even when the Wildfly/Java process is not actively running.
To avoid this issue on the Virtual Appliance, copy the HornetQ files to a temporary folder. Remember to copy the entire folder, including sub-folders.
Once the live-hornetq folder is available in a tmp location, execute the below process for printing Journal content.
Print HornetQ Journal and Bindings
To export the HornetQ journal files to XML, the Java module “org.hornetq.core.journal.impl.ExportJournal” requires the journal sub-folder, the file prefix of “hornetq-data”, the file extension (hq), the file size, and where to export the XML file (export.dat). The prefix and file extension (hq) are unique to the Identity Suite vApp. A sketch of this invocation is shown below.
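A sketch of the export invocation, assuming the journal was copied to /tmp/live-hornetq and using the HornetQ jars shipped with the vApp’s Wildfly (the classpath and the 10 MB file size are examples; match the file size shown by ls on the hq files):
java -cp "/opt/CA/wildfly-idm/modules/system/layers/base/org/hornetq/main/*" \
 org.hornetq.core.journal.impl.ExportJournal \
 /tmp/live-hornetq/journal hornetq-data hq 10485760 export.dat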
The body/rows of the JMS export are partially base64-encoded. You may parse through this information as you wish.
Use this information to trace transactions through the JMS queue.
For cleanup within the Symantec Identity Suite vApp, there are a few options. The first is deleting the JMS queue journals before starting the Wildfly service. This can be accomplished using the built-in alias ‘deleteIDMJMSqueue’.
alias deleteIDMJMSqueue='sudo /opt/CA/VirtualAppliance/scripts/.firstrun/deleteIDMJMSqueue.sh'
Another option is to remove a select JMS entry from the queue using the /opt/CA/wildfly-idm/bin/jboss-cli.sh process. If invoked with an input script, escape the colons in the GUID, as shown in the scripted example after the interactive commands below.
/subsystem=transactions/log-store=log-store/:probe()
ls /subsystem=transactions/log-store=log-store/transactions
/subsystem=transactions/log-store=log-store/transactions=0:ffffa409cc8a:1c01b1ff:5c7e95ac:eb:delete()
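For a scripted (non-interactive) run of that same delete, pass the operation with --command and escape the colons in the GUID (the GUID below is the example from above):
/opt/CA/wildfly-idm/bin/jboss-cli.sh --connect \
 --command="/subsystem=transactions/log-store=log-store/transactions=0\:ffffa409cc8a\:1c01b1ff\:5c7e95ac\:eb:delete()"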
View a description of the JMS processing from the Broadcom Engineering/Support teams (see the video below).
This write-up provides the tools required for a deeper analysis. Debugging issues with JMS may test one’s patience; stay the course, stay persistent, and have fun!
References: (Delete JMS queue and remove a single entry)
Kubernetes was designed for the deployment of applications to cloud architecture with containers. Another way of thinking about Kubernetes: it gets us “out of the install-binaries business” and focuses our efforts on the business value of a solution. We have documented our process of how we train our resources and partners. This process will help your team to excel and gain confidence with cloud technologies.
One of the business challenges of Kubernetes in the cloud architecture is the ongoing cost ($300-$600/month per resource) during the learning or development process. To lower this ongoing cost per resource, we focused on a method to use on-prem Kubernetes deployments.
We have found examples online of using minikube and Oracle VirtualBox to assist with keeping costs low while using an on-prem deployment but did not find many examples of using VMware Workstation to our satisfaction. Our goal was to utilize a solution that we are very familiar with and that has the supporting capability of rollback via snapshots.
We have used VMware Workstation for many years while working on service projects. We cannot overstate its usefulness in offering a “playground” and development environment independent of a client’s environment. The snapshot feature allows for negative use-case testing or “what-if” scenarios to destroy or impact solutions being tested, with minimal impact.
In this entry, we will discuss the use of VMware Workstation and CentOS (or Ubuntu) as the primary Kubernetes nodes. Both CentOS and Ubuntu are used by the cloud providers as their Kubernetes nodes, so this on-prem process will translate well.
Some of our team members run the Kubernetes environment from their laptop, a collection of individual servers, or a larger server that may scale to the number of vCPU/RAM required for the Kubernetes solution.
Decision 1: Choose an OS to be used.
Either CentOS or Ubuntu is acceptable to use for on-prem. When we checked the OSes used by the cloud providers, we noted they used one of these two (2) for their Linux nodes. We decided on CentOS 7, as iptables is used for routing within Kubernetes, and iptables is the default in CentOS 7. You may find that other OSes work fine as well.
Decision 2: Build a reference image
Identify all expected binaries to be used within this image. This reference image will be cloned for the Kubernetes control plane node (1) and the worker nodes (3-4). We will also use this image to build a supporting node (non-Kubernetes) for SiteMinder integration and a docker repository for the Kubernetes docker images, for a total of six (6) nodes.
Decision 3: DNS and Certificates
Recommendation: Please do not attempt to deploy a Kubernetes solution on-prem without having purchased a DNS domain/site and wildcard certificates tied to the DNS domain.
Without these two (2) supporting components, it is a challenge to have a working Kubernetes solution that reflects what you will experience in a cloud deployment.
For example, we purchased a domain for $12/year and then created several “A” records to host the IP addresses we may use to redirect to cloud or on-prem. Using sub-domain “A” records, we can have as many cloud addresses as we wish.
DNS "A" Records Example:
aks.iam.anapartner.net (MS Azure),
eks.iam.anapartner.net (Amazon),
gke.iam.anapartner.net (Google).
DNS "CNAME" Records Example:
alertmanager.aks.iam.anapartner.net,
grafana.aks.iam.anapartner.net,
jaeger.aks.iam.anapartner.net,
kibana.aks.iam.anapartner.net,
mgmt-ssp.aks.iam.anapartner.net,
sm.aks.iam.anapartner.net,
ssp.aks.iam.anapartner.net.
Example of using the Synology DNS Server for the Kubernetes cluster’s applications, with “A” and “CNAME” records.
Finally, we prefer to use wildcard certificates for these domains to avoid challenges within our Kubernetes deployment. There are several services out there offering free certificates.
We chose Let’s Encrypt (https://letsencrypt.org/). While Let’s Encrypt has automated processes to renew their certs, we chose their DNS validation process with a Certbot solution. We can renew these certificates every 90 days for on-prem usage. The DNS validation process requires a unique string generated by the Let’s Encrypt process to be populated in a DNS “TXT” record, e.g. _acme-challenge.aks.iam.anapartner.net. See the example at the bottom of this blog entry on this process.
Decision 4: Supporting Components: Storage, Load-Balancing, DNS Resolution (Local)
The last decision required for on-prem deployment is where to place persistent storage for your Kubernetes cluster. We chose to use an NFS share.
We first tested using the control-plane node, then decided to move the NFS share to a Synology NAS solution. Similarly for the DNS resolution option: at first we used a DNS service on the control-plane node and then moved to the Synology NAS solution.
For load balancing, Kubernetes has the service types NodePort and LoadBalancer. The LoadBalancer service, if not deployed in the cloud, will default to NodePort behavior. To introduce load balancing for on-prem, we introduced the HAProxy service on the control-plane node, along with a Kubernetes NodePort service, to meet this goal. A sketch of such a NodePort service is shown below.
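A minimal sketch of such a NodePort service for LDAP, pinning the node port to TCP 31888 so HAProxy can target a fixed port on every worker node (the service name, selector, and target port are examples):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ldap-nodeport
spec:
  type: NodePort
  selector:
    app: symantec-directory
  ports:
  - port: 389
    targetPort: 1389
    nodePort: 31888
EOF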
After the decisions have been made, we can now walk through the steps to set up a VMware environment for Kubernetes.
Reference Image
Step 1: Download the OS DVD ISO image for deployment on VMware Workstation (CentOS 7 / Ubuntu).
Determine specs for the future solution to be deployed on Kubernetes. Some solutions have pods that may require minimal memory/disk space. For the solution we decided on deploying, we confirmed that we need 16 GB RAM and 4 vCPU minimum. We had confirmed these specs by previously deploying the solution in a cloud environment.
Without these memory/cpu specs, the solution that we chose would pause the deployment of Kubernetes pods to the nodes. You may or may not see error messages in the deployment of pods stating that the nodes did not have enough resources for all or some of the pods.
For disk size, we selected 100 GB to future-proof the solution during testing. For networking, please select BRIDGED mode to allow the VMware images to have minimal network issues when routing within your local network. Please avoid double-NAT’ing the deployment to reduce your headaches.
Step 2: Install useful base packages and disable any UI tools. Please install an Entropy Daemon to avoid delays due to certificates usage of /dev/random and low entropy.
### UI Update for CentOS7 was stopping yum deployment - not required for our solution to be tested (e.g. VIP Auth Hub)
# su to root to run the below commands. We will add sudo access later.
su -
systemctl disable packagekit; systemctl stop packagekit; systemctl status packagekit
### Installed base useful packages.
yum -y install dnf epel-release yum-utils nfs-utils
### Install useful 2nd tools.
yum -y install openldap-clients jq python3-pip tree
pip3 install yq
yum -y upgrade
### Install Entropy process (epel repo)
dnf -y install haveged
systemctl enable haveged --now
Step 3: Install docker and update the docker configuration for use with Kubernetes. Update the path & storage-driver for the docker images for initial deployment.
### Install Docker repo & docker package
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf -y install docker-ce
docker version
systemctl enable docker --now
docker version
### Update docker image info after deployment and restart service
cat << EOF > /etc/docker/daemon.json
{
"debug": false,
"data-root": "/home/docker-images",
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
### Restart docker to load updated image info.
systemctl restart docker; systemctl status docker; docker version
Step 4: Deploy the three (3) primary Kubernetes binaries (kubeadm, kubelet, kubectl) & the HELM binary. A sketch of the package install follows the OS preparation below.
Ensure you select a Kubernetes version that matches what solution you wish to deploy and work with. This can be a gotcha if the Kubernetes binaries update during a dnf / yum upgrade process and your solution has not been vetted for the newer release of Kubernetes. See the reference link below on how to upgrade Kubernetes binaries.
### Stop FirewallD - May add ports later for security
systemctl stop firewalld;systemctl disable firewalld; iptables -F
### Update OS Parameters for kubernetes
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
modprobe br_netfilter
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
### Note: IP forwarding is enabled by default.
sysctl -a | grep -i forward
### Note: Update /etc/fstab to comment out swap line with # character
### Warning: kubeadm init will fail if swap is left on the cp or any worker node.
swapoff -a
sed -i 's|UUID\=\(.*\)-\(.*\)-\(.*\)-\(.*\)-\(.*\) swap|#UUID\=\1-\2-\3-\4-\5 swap|g' /etc/fstab
cat /etc/fstab
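The package install itself is short; below is a sketch for CentOS 7, assuming the upstream Kubernetes yum repo that was current at the time and pinning the release your solution was vetted against (the 1.21 pin and gpgcheck=0 are lab shortcuts, not recommendations):
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
dnf -y install kubeadm-1.21.* kubelet-1.21.* kubectl-1.21.*
systemctl enable kubelet --now
### HELM single-binary install
curl -s https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash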
Step 6: Create SSH key for root or other service IDs to allow remote script updates from CP to Worker Nodes
### Create SSH key for root to allow remote script updates from CP to Worker Nodes - use a blank/null PASSWORD.
su -
rm -rf ~/.ssh; echo y | ssh-keygen -b 4096 -C $USER -N "" -f ~/.ssh/id_rsa
### Copy the public rsa key to authorized keys to avoid password between cp/worker nodes for remote ssh commands.
cp -r -p ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys;chmod 600 ~/.ssh/authorized_keys;ls -lart .ssh
### Test for remote connection with no password:
ssh -i ~/.ssh/id_rsa root@localhost
### Copy the id_rsa key to your host system for ease of testing.
### Add your local non-root user to sudo wheel group. Change vip to your user ID.
LOCALUSER=vip
gpasswd -a $LOCALUSER wheel
### Update sudoers file to allow wheel group with no-password
sed -i 's|# %wheel|%wheel|g' /etc/sudoers
### View update wheel group.
grep "%wheel" /etc/sudoers
# Example of return query.
# %wheel ALL=(ALL) ALL
# %wheel ALL=(ALL) NOPASSWD: ALL
Step 7: Stop or adjust the OS network manager, shutdown the reference image, and create a Vmware Snapshot
### Adjust or Disable the OS NetworkManager (to avoid overwriting /etc/resolv.conf)
### Important when using an internal DNS server.
systemctl disable NetworkManager;systemctl stop NetworkManager
### reboot CentOS7 Image and validate no issues upon reboot.
reboot
### Shutdown image and manually create snapshot called "base"
Vmware Workstation Cloning
Step 8: Now that we have a reference image, we can now make clone images for the control-plane (1), the worker nodes (4), and the supporting node (1). This is a fairly quick process.
export BASE=/home/me/vmware/kub
export REF=/home/me/vmware/kub/CentOS7/CentOS7.vmx
VM=cp;mkdir -p $BASE/$VM; time vmrun -T ws clone $REF $BASE/$VM/$VM.vmx -cloneName=$VM -snapshot=base full
VM=worker01;mkdir -p $BASE/$VM; time vmrun -T ws clone $REF $BASE/$VM/$VM.vmx -cloneName=$VM -snapshot=base full
VM=worker02;mkdir -p $BASE/$VM; time vmrun -T ws clone $REF $BASE/$VM/$VM.vmx -cloneName=$VM -snapshot=base full
VM=worker03;mkdir -p $BASE/$VM; time vmrun -T ws clone $REF $BASE/$VM/$VM.vmx -cloneName=$VM -snapshot=base full
VM=worker04;mkdir -p $BASE/$VM; time vmrun -T ws clone $REF $BASE/$VM/$VM.vmx -cloneName=$VM -snapshot=base full
VM=sm;mkdir -p $BASE/$VM; time vmrun -T ws clone $REF $BASE/$VM/$VM.vmx -cloneName=$VM -snapshot=base full
Step 9: Start the cloned images and remotely assign new hostname/IP addresses to each image. A sketch of this remote assignment is shown below.
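A sketch for one clone, assuming DHCP handed out a temporary address of 192.168.2.150 and the NIC is ens33 (hostname, addresses, and NIC name are all examples):
ssh -i ~/.ssh/id_rsa root@192.168.2.150 \
 "hostnamectl set-hostname worker01.aks.iam.anapartner.net; \
  sed -i 's/^BOOTPROTO=.*/BOOTPROTO=static/' /etc/sysconfig/network-scripts/ifcfg-ens33; \
  echo -e 'IPADDR=192.168.2.61\nPREFIX=24\nGATEWAY=192.168.2.1\nDNS1=192.168.2.60' >> /etc/sysconfig/network-scripts/ifcfg-ens33; \
  reboot"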
Step 12: Copy the root .ssh public cert to your main host, rename it to a useful name, and then test your newly deployed clone images for DNS resolution using ssh. Please confirm this step is successful prior to continuing with the configuration of the control plane and worker nodes.
Step 13a: Copy files to the CP node from the VMware Workstation host and configure the CP node for dedicated CP usage. We recommend using two terminals/sessions to speed up the process. Install HAProxy for load balancing, copy the Let’s Encrypt wildcard certificates, and copy the Kubernetes solution you will be deploying (scripts/yaml).
### Open Terminal 1 to CP host.
### Add bash completion to have better use of TAB to view parameters.
CP=192.168.2.60
ssh -tt -i ~/vip_kub_root_id_rsa root@$CP
dnf -y install bash-completion
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
echo "alias k=kubectl | complete -F __start_kubectl k" >>~/.bashrc
### Install HAProxy and replace the haproxy.cfg file.
dnf -y install haproxy
systemctl enable haproxy --now
netstat -anp | grep -i -e haproxy
### Open Terminal 2 to host and push files to CP node.
### Copy HAProxy configuration, certs, and scripts
scp -i ~/vip_kub_root_id_rsa haproxy.cfg root@$CP:/etc/haproxy/haproxy.cfg
scp -i ~/vip_kub_root_id_rsa cloud-certs-aks-eks-gke_exp-202X-01-12.tar root@$CP:
scp -i ~/vip_kub_root_id_rsa 202X-11-03_vip_auth_hub_working_centos7_v2.tar root@$CP:
### On Terminal 1 - on CP host - Restart to use new haproxy configuration file.
systemctl restart haproxy
netstat -anp | grep -i -e haproxy
### Extract CERTS to root home folder
tar -xvf cloud-certs-aks-eks-gke_exp-202X-01-12.tar
### Extract Working Scripts
tar -xvf 202X-11-03_vip_auth_hub_working_centos7_v2.tar
### Update env variables for unique environment within step00 file.
vi step00_kubernetes_env.sh
### Add the env variables to the .bashrc file
echo ". ./step00_kubernetes_env.sh" >> ~/.bashrc
Step 13b: Example of the /etc/haproxy/haproxy.cfg configuration for Kubernetes load-balancing functionality for on-prem worker nodes. HAProxy is deployed on the control plane (CP) node. The example configuration file will route TCP 80/443/389 to one (1) of the four (4) worker nodes. If a Kubernetes NodePort service is enabled for the TCP 389 (31888) port, then this load balancer will function correctly and route LDAP traffic as well.
[root@cp ~]# cat /etc/haproxy/haproxy.cfg
global
user haproxy
group haproxy
chroot /var/lib/haproxy
log /dev/log local0
log /dev/log local1 notice
defaults
mode http
log global
retries 2
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 10m
timeout server 10m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend ingress
bind *:80
option tcplog
mode http
option forwardfor
option http-server-close
default_backend kubernetes-ingress-nodes
backend kubernetes-ingress-nodes
mode http
balance roundrobin
server k8s-ingress-0 worker01.aks.iam.anapartner.net:80 check fall 3 rise 2 send-proxy-v2
server k8s-ingress-1 worker02.aks.iam.anapartner.net:80 check fall 3 rise 2 send-proxy-v2
server k8s-ingress-2 worker03.aks.iam.anapartner.net:80 check fall 3 rise 2 send-proxy-v2
server k8s-ingress-3 worker04.aks.iam.anapartner.net:80 check fall 3 rise 2 send-proxy-v2
frontend ingress-https
bind *:443
option tcplog
mode tcp
option forwardfor
option http-server-close
default_backend kubernetes-ingress-nodes-https
backend kubernetes-ingress-nodes-https
mode tcp
balance roundrobin
server k8s-ingress-0 worker01.aks.iam.anapartner.net:443 check fall 3 rise 2 send-proxy-v2
server k8s-ingress-1 worker02.aks.iam.anapartner.net:443 check fall 3 rise 2 send-proxy-v2
server k8s-ingress-2 worker03.aks.iam.anapartner.net:443 check fall 3 rise 2 send-proxy-v2
server k8s-ingress-3 worker04.aks.iam.anapartner.net:443 check fall 3 rise 2 send-proxy-v2
frontend ldap
bind *:389
option tcplog
mode tcp
default_backend kubernetes-nodes-ldap
backend kubernetes-nodes-ldap
mode tcp
balance roundrobin
server k8s-ldap-0 worker01.aks.iam.anapartner.net:31888 check fall 3 rise 2
server k8s-ldap-1 worker02.aks.iam.anapartner.net:31888 check fall 3 rise 2
server k8s-ldap-2 worker03.aks.iam.anapartner.net:31888 check fall 3 rise 2
server k8s-ldap-3 worker04.aks.iam.anapartner.net:31888 check fall 3 rise 2
Deploy Solution on Kubernetes
Step 14: Validate that DNS and storage are ready before deploying any solution, or if you wish to have a base Kubernetes environment to use with the control-plane and four (4) worker nodes.
### Step: Setup NFS Share either on-prem remote server or Synology NFS
### Use version 4.x checkbox for Synology.
### Example of lines on remote Linux Host with NFS share.
yum -y install nfs-utils
systemctl enable --now nfs-server rpcbind
mkdir -p /export/nfsshare ; chown nobody /export/nfsshare ; chmod -R 777 /export/nfsshare
echo "/export/nfsshare *(rw,sync,no_root_squash,insecure)" >> /etc/exports
exportfs -rav; exportfs -v
firewall-cmd --add-service=nfs --permanent
firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent
firewall-cmd --reload
#### Setup DNS entries (A, NS, and CNAME) for the following fourteen (14) records (may be on-prem DNS or Synology DNS)
ns.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.60)
aks.iam.anapartner.net NS ns.aks.iam.anapartner.net
cp.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.60)
worker01.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.61)
worker02.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.62)
worker03.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.63)
worker04.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.64)
sm.aks.iam.anapartner.net A IP_ADDRESS (192.168.2.65)
kibana CNAME cp.aks.iam.anapartner.net
grafana CNAME cp.aks.iam.anapartner.net
jaeger CNAME cp.aks.iam.anapartner.net
alertmanager CNAME cp.aks.iam.anapartner.net
ssp CNAME cp.aks.iam.anapartner.net
ssp-mgmt CNAME cp.aks.iam.anapartner.net
### Pre-Step: Enable DNS resolution for external IP addresses
### Enable forwarding to external h/w router and 8.8.8.8
Step 15: Recommendation: Deploy your solution in steps using Kubernetes yaml or Helm charts to assist with debugging any deployment issues. Do not forget to use kubectl logs and kubectl describe to isolate startup or cert issues.
### Run scripts one-by-one. They will have a watch command in each that will
### provide feedback on the startup processes.
### Total startup from scratch to final with VIP Sample App is about 15-20 minutes.
### Note: Step04 has different chart variables for on-prem for Symantec Directory.
### Note: ./step00_kubernetes_env.sh is called by each script.
./step01_kubernetes_cluster_init_with_worker_nodes.sh
./step02_kubernetes_cluster_with_ingress_and_other_charts.sh
./step03_kubernetes_cluster_with_vip_auth_hub_charts.sh
./step04_kubernetes_cluster_with_vip_auth_hub_sample_app.sh
Docker Registry for On-Prem
There are two (2) types of docker registries we have found useful.
a. The standard mirror method will capture all docker images from the “docker.io” site into a local mirror. When Kubernetes or Helm deployments are used, the docker configuration file can be adjusted to check the local mirror without updating Kubernetes yaml files or Helm charts.
b. The second method is a full capture of all images after they have been deployed once, pushed via the docker push process into a local registry. The challenge of the second method is that the Kubernetes yaml files and/or Helm charts do have to be updated to use this local registry.
Either method will help lower the bandwidth cost of re-downloading the same docker images, provided you use a docker prune method to keep your worker nodes’ disk usage “clean”. If the docker prune process is not used, you may notice that the worker nodes run out of disk space due to temporary docker images/containers that did not clean up properly. A sketch of the prune we use is shown below.
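A sketch of the prune we run on each worker node (the -a flag also removes unused images, so the next deployment will re-pull them through the local mirror):
docker system prune -a -f
### Optional: schedule it weekly via cron
echo "0 3 * * 0 /usr/bin/docker system prune -a -f" >> /var/spool/cron/root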
#!/bin/bash
#################################################################################
# Create a local docker mirror registry for docker-io
# and local docker non-mirror registry for all other images
# to minimize download impact
# during restart of the kubernetes solution
#
# All registry images will be placed on the NFS share
# mount -v -t nfs 192.168.2.30:/volume1/nfs /mnt &>/dev/null
#
# Certs will be provided by Let's Encrypt every 90 days
#
# For docker-io mirror registry, all clients must have the following line in
# /etc/docker/daemon.json {Note: Use commas as needed}
#
# "registry-mirrors":
# [
# "https://sm.aks.iam.anapartner.net:444"
# ],
#
#
#
# ANA 11/2021
#
#################################################################################
# To remove all containers - to allow restart of process
docker rm -f `docker ps -a | grep -v -e CONTAINER | awk '{print $1}'` ; docker image rm `docker image ls | grep -v -e REPOSITORY | grep -e minutes -e hour -e days -e '2 weeks'| awk '{print $3}'` &>/dev/null
#################################################################################
# Update HOST name for local server for docker image
HOST=sm.aks.iam.anapartner.net
NFS_SERVER=192.168.2.30
NFS_SHARE=/volume1/nfs
#################################################################################
function start_registry {
local_port=$1
remote_registry_name=$2
if [ "$3" == "" ]; then
remote_registry_url=$remote_registry_name
else
remote_registry_url=$3
fi
echo -e "$local_port $remote_registry_name $remote_registry_url"
mount -v -t nfs $NFS_SERVER:$NFS_SHARE /mnt &>/dev/null
mkdir -p /mnt/registry/${remote_registry_name} &>/dev/null
docker run -d --name registry-${remote_registry_name}-mirror \
-p $local_port:443 \
--restart=always \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_PROXY_REMOTEURL="https://${remote_registry_url}/" \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/fullchain.pem \
-e REGISTRY_HTTP_TLS_KEY=/certs/privkey.pem \
-e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true \
-v /mnt/registry/certs:/certs \
-v /mnt/registry/${remote_registry_name}:/var/lib/registry \
registry:latest
sleep 1
echo "#################################################################################"
curl -s -X GET https://$HOST:$local_port/v2/_catalog | jq
echo "#################################################################################"
}
#################################################################################
# start_registry <local_port> <remote_registry_name> <remote_registry_url>
#################################################################################
start_registry 444 docker-io registry-1.docker.io
#################################################################################
# Non-Proxy configuration to allow 'docker tag & docker push' for all other images
#################################################################################
remote_registry_name=all
local_port=455
mkdir -p /mnt/registry/${remote_registry_name} &>/dev/null
docker run -d --name registry-${remote_registry_name}-mirror \
-p $local_port:443 \
--restart=always \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/fullchain.pem \
-e REGISTRY_HTTP_TLS_KEY=/certs/privkey.pem \
-e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true \
-v /mnt/registry/certs:/certs \
-v /mnt/registry/${remote_registry_name}:/var/lib/registry \
registry:latest
sleep 1
echo "#################################################################################"
curl -s -X GET https://$HOST:$local_port/v2/_catalog | jq
echo "#################################################################################"
docker ps -a
echo "#################################################################################"
echo "##### To tail the log of the docker-io container - useful for monitoring helm deployments #####"
echo "docker logs `docker ps -a --no-trunc | grep -v NAMES | grep 'docker-io' | awk '{print $1}'` -f "
echo "#################################################################################"
echo "##### To tail the log of the ALL container - useful for monitoring helm deployments #####"
echo "docker logs `docker ps -a --no-trunc | grep -v NAMES | grep 'all' | awk '{print $1}'` -f "
echo "#################################################################################"
echo "##### Location of Registry Files on NFS share #####"
echo "ls -lart /mnt/registry/docker-io/docker/registry/v2/repositories"
echo "ls -lart /mnt/registry/all/docker/registry/v2/repositories"
echo "#################################################################################"
Example of the /etc/docker/daemon.json configuration file to use a local mirror for docker.io: see the “registry-mirrors” parameter and the sketch below. Unfortunately, we were unable to use this process for the other docker registries.
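A sketch of the merged /etc/docker/daemon.json on each node, combining the earlier settings with the mirror entry (the hostname/port match the registry script above):
cat << EOF > /etc/docker/daemon.json
{
  "debug": false,
  "data-root": "/home/docker-images",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://sm.aks.iam.anapartner.net:444"]
}
EOF
systemctl restart docker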
Use Let’s Encrypt Certbot and manual DNS validation to create our 90-day wildcard certificates. Manual DNS validation allows us to avoid setting up a public-facing component for our internal labs.
# Step 1: Install SNAP service for Certbot usage on your host OS
cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
sudo yum install -y snapd
Updating Subscription Management repositories.
Package snapd-2.49-2.el8.x86_64 is already installed.
systemctl enable --now snapd.socket
### Wait 1 min
snap install core; sudo snap refresh core
# Step 2: Remove prior certbot (if installed by yum/dnf)
yum remove -y certbot
# Step 3: Install new "classic" Certbot
sudo snap install --classic certbot
certbot 1.17.0 from Certbot Project (certbot-eff✓) installed
sudo ln -s /snap/bin/certbot /usr/bin/certbot
# Step 4: Issue certbot command with wildcard cert & update your DNS TXT record with the string provided.
sudo certbot certonly --manual --preferred-challenges dns -d '*.aks.iam.anapartner.org' --register-unsafely-without-email
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for *.aks.iam.anapartner.org
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name:
_acme-challenge.aks.iam.anapartner.org.
with the following value:
u2cXXXXXXXXXXXXXXXXXXXXc
Before continuing, verify the TXT record has been deployed. Depending on the DNS
provider, this may take some time, from a few seconds to multiple minutes. You can
check if it has finished deploying with the aid of online tools, such as the Google
Admin Toolbox: https://toolbox.googleapps.com/apps/dig/#TXT/_acme-challenge.aks.iam.anapartner.org.
Look for one or more bolded line(s) below the line ';ANSWER'. It should show the
value(s) you've just added.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Step 5: In a 2nd terminal, validate that the DNS TXT record has been updated and can be seen by a standard DNS query. Keep this 2nd console window open to test the DNS record before pressing <ENTER> on the verification request.
# Example:
nslookup -type=txt _acme-challenge.aks.iam.anapartner.org
Non-authoritative answer:
_acme-challenge.aks.iam.anapartner.org text = "u2cXXXXXXXXXXXXXXXXXXXXc"
# Step 6: Press <ENTER> after you have validated the TXT record.
Press Enter to Continue
Waiting for verification...
Cleaning up challenges
Subscribe to the EFF mailing list (email: nala@baugher.us).
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/aks.iam.anapartner.org/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/aks.iam.anapartner.org/privkey.pem
# Step 7: View certs of fullchain.pem & privkey.pem
cat /etc/letsencrypt/live/aks.iam.anapartner.org/fullchain.pem
-----BEGIN CERTIFICATE-----
<REMOVED>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<REMOVED>
-----END CERTIFICATE-----
cat /etc/letsencrypt/live/aks.iam.anapartner.org/privkey.pem
-----BEGIN PRIVATE KEY-----
<REMOVED>
-----END PRIVATE KEY-----
# Step 8: Use the two files for your kubernetes solution
# Step 9: Ensure the DNS search domain in /etc/resolv.conf on the host OS, control-plane, and worker nodes is set correctly to aks.iam.anapartner.org to allow the certs to be resolved correctly.
# Step 10: Ensure the Synology NAS DNS service is configured with all aliases
# Step 11: Optional: Validate certs with openssl
# Show the kubernetes self-signed cert
true | openssl s_client -connect kibana.aks.iam.anapartner.org:443 2>/dev/null | openssl x509 -inform pem -noout -text
# Show the new wildcard cert for same hostname & port
curl -vvI https://kibana.aks.iam.anapartner.org/app/home#/ 2>&1 | awk 'BEGIN { cert=0 } /^\* SSL connection/ { cert=1 } /^\*/ { if (cert) print }'
nmap -p 443 --script ssl-cert kibana.aks.iam.anapartner.org
Kubernetes Side Note: Let's Encrypt certs do NOT show up within the Kubernetes cluster certs check process.
kubeadm certs check-expiration
View of the DNS TXT records to be updated with your DNS service provider. The Let's Encrypt Certbot will need to be able to query these records before it will issue your wildcard certificates. Create the _acme-challenge hostname entry as a TXT type, and paste in the string provided by the Let's Encrypt Certbot process. Wait 5 minutes or test the TXT record with nslookup, then upon positive validation, continue the Let's Encrypt Certbot process.
View your kubernetes cluster / nodes for any constraints
After your cluster is created and you have worker nodes joined to the cluster, you may wish to monitor for any constraints of your on-prem deployment. The kubectl command with the action verb describe or top is very useful for this goal.
kubectl describe nodes worker01
kubectl top node
kubectl top pod
Kubernetes Training (Formal)
If you are new to Kubernetes, we recommend the following class. You may need to dedicate 4-8 weeks to complete the course and then take the CKA exam via the Linux Foundation.
Most mobile Authenticator Apps will allow you to back up the Authenticator registration to an account.
Alternatively, if you have a spare phone (with or without a SIM chip), you may wish to deploy your Authenticator Apps to a 2nd phone, iPad, or Android tablet to grant yourself additional freedom from being forced to use a single device for authentication.
Important Note: If the website allows it, you can register your QR code multiple times to different Authenticator Apps on the SAME or DIFFERENT phone. If you have already registered to a site, you may re-register the QR code on both devices to ensure they both have the same “seed” for your login ID.
You may then use your iPad/Android tablet without needing your primary phone near you while authenticating to your secure applications/websites.
Below is an example of using the following Authenticator Apps that registered the same QR code, e.g. LastPass Authenticator (Red Shield Icon), Google Authenticator (Grey G), Microsoft Authenticator (Blue Lock Icon), and Okta Verify Authenticator (Blue “O” CheckMark Icon).
We ran a test to confirm that these Authenticator Apps are all time-based with your unique registration QR code. As you can see from the below screenshot, any time-based authenticator app will return the same code within the same 60-second cycle before they rotate.
Please note that other authenticators do not base the return value ONLY on time but also on other variables. Example: SecureID Token (Cloud Icon), Symantec VIP Access (Yellow Circle with Checkmark Icon), Okta Mobile (Blue Icon), and IRS2Go – Authenticator & App (IRS Logo Icon).
As we see more accounts get compromised, we strongly recommend using one or more authenticator applications with your mobile phone. Please note that all of these authenticator applications are offered free by the vendors.
Every website you access with an account usually has a “two-factor” authentication security setting that you may enable. You can enroll your mobile phone with the provided QR (quick response) code.
Examples of QR codes that you may scan with your cell phone camera are shown below. Modern cell phones will auto-transcribe these pictures into text for a website URL, text, or registration code. The below three QR codes are all text-based messages that you may practice on with your cell phone. The more characters, the smaller the blocks will be in the QR code.
Hopefully, this entry has value to you for account recovery, or for managing access for/with a partner, spouse, dependents, or parents' accounts.
An additional benefit: if the primary phone is lost or damaged, you will still have access to your accounts without being forced to go through recovery methods on each account, e.g. disable the Authenticator App, prove your identity, access your account, re-apply the Authenticator App.
The only negative to this process is that you must remember to register the 2nd device at the same time as the primary phone for any new websites, or whenever you wish to update your account on an existing website/application.
Example for Facebook TFA (Two-Factor-Authentication) Configuration:
Select Security and Login / Two-Factor Authentication under Facebook Settings. You will need to re-authenticate with your password to ensure that you are the correct person to change these settings.
Next, select the “Authenticator App” Manage button to add an Authenticator App. Have both your primary phone and your 2nd device open within one of the Authenticator Apps. Scan the QR code with both devices. Do NOT click the Continue button until you have scanned with both devices. This QR code is the “seed” for your authentication app. If you have any issues, you can re-scan a new code to retry.
After you click continue, most application/websites will ask you to input the code from your phone/device into the website, to prove that it was recorded correctly. If you look at both devices, you should see the same code being repeated on both every 60 seconds when they rotate.
LastPass Example:
If you are a fan of LastPass, the online password management tool, you can enable the three (3) popular Authenticator apps as well. The Google Authenticator App selection may also be used with Okta Verify Authenticator App.
The recent DNS challenges for a large organization that impacted their worldwide customers bring to mind a project we completed this year, a global password reset redundancy solution.
We worked with a client who desired to manage unplanned WAN outages to their five (5) data centers for three (3) independent MS Active Directory Domains with integration to various on-prem applications/endpoints. The business requirement was for self-service password sync, where the users’ password change process is initiated/managed by the two (2) different MS Active Directory Password Policies.
Without the WAN outage requirement, any IAM/IAG solution may manage this request within a single data center. A reverse password sync agent process is enabled on all writable MS Active Directory domain controllers (DC). All the world-wide MS ADS domain controllers would communicate to the single data center to validate and resend this password change to all of the users’ managed endpoint/application accounts, e.g. SAP, Mainframe (ACF2/RACF/TSS), AS/400, Unix, SaaS, Database, LDAP, Certs, etc.
With the WAN outage requirement, however, a queue or components must be deployed/enabled at each global data center, so that password changes can sync locally to avoid work stoppage, and are async-queued to avoid out-of-sync passwords on the endpoints/applications that may reside in other data centers.
We worked with the client to determine that their current IAM/IAG solution had the means to meet this requirement, but we wished to confirm there were no issues with WAN latency and the async process. The WAN latency was measured at less than 300 msec between remote data centers on opposite sides of the globe. The measured WAN latency reflects the global distance and any intermediate devices that the network traffic passes through.
To review the solution’s ability to handle the latency, we introduced a test environment to emulate the global latency for deployment use-cases, change password use-cases, and standard CrUD use-cases. There is a feature within VMWare Workstation that allows emulation of degraded network traffic. This process was a very useful planning/validation tool to lower rollback risk during production deployment.
VMWare Workstation Network Adapter Advance Settings for WAN latency emulation
The solution used for the Global Password Rest solution was Symantec Identity Suite Virtual Appliance r14.3cp2. This solution has many tiers, where select components may be globally deployed and others may not.
We avoided any changes to the J2EE tier (Wildfly) or Database for our architecture, as these components are not supported for WAN latency by the Vendor. Note: We have worked with other clients that have deployments at two (2) remote data centers within 1000 km, which have reported minimal challenges for these tiers.
We focused our efforts on the Provisioning Tier and Connector Tier. The Provisioning Tier consists of the Provisioning Server and Provisioning Directory.
The Provisioning Server has no shared knowledge with other Provisioning Servers. The Provisioning Directory (Symantec Directory) is where the provisioning data may be set up in a multi-write peer model. Symantec Directory is a proper X.500 directory with high redundancy and is designed to manage WAN latency between remote data centers and recovery after an outage. See example provided below.
The Connector Tier consists of the Java Connector Server and C++ Connector Server, which may be deployed on MS Windows as an independent component. There is no shared knowledge between Connector Servers, which works in our favor.
Requirement:
Three (3) independent MS Active Directory domains in five (5) remote data centers need to allow self-service password changes & local password sync during a WAN outage. Password changes are driven by MS ADS Password Policies (every N days). The IME Password Policy for the IAG/IAM solution is not enabled, IME authentication is redirected to an ADS domain, and the IMPS IM Callback Feature is disabled.
Below is an image that outlines the topology for five (5) global data centers in AMER, EMEA, and APAC.
The flow diagram below captures the password change use-case (self-service or delegated), the expected data flow to the user’s managed endpoints/applications, and the eventual peer sync of the MS Active Directory domain local to the user.
Observation(s):
The standalone solution of Symantec IAG/IAM has no expected challenges with configurations, but the Virtual Appliance offers pre-canned configurations that may impact a WAN deployment.
During this project, we identified three (3) challenges using the virtual appliance.
Two (2) items needed the assistance of the Broadcom Support and Engineering teams. They worked with us to address a deployment configuration challenge with the “check_cluster_clock_sync -v” process, which incorrectly accumulated time delays between servers instead of resetting the value to zero between each server test.
Why is this important? The “check_cluster_clock_sync” alias is used during auto-deployment of vApp nodes. If the time reported between servers is > 15 seconds, replication may fail. This time-check issue was addressed with a hotfix; after the hotfix was deployed, all clock differences were resolved.
The second challenge was a deployment challenge of the IMPS component for its embedded “registry files/folders”. The prior embedded copy process was observed to be using standard “scp”. With WAN latency, the scp copy operation may take more than 30 seconds; our testing with the Virtual Appliance showed that a simple copy would take over two (2) minutes for multiple small files. After reviewing with CA support/engineering, they provided an updated copy process using “rsync” that speeds up copy performance by >100x. Before this update, the provisioning tier deployment would fail and a partial rollback would occur.
The last challenge we identified was using the Symantec Directory’s embedded features to manage WAN latency via multi-write HUB groups. The Virtual Appliance cannot automatically manage this feature when enabled in the knowledge files of the provisioning data DSAs. Symantec Directory will fail to start after auto-deployment.
Fortunately, on the Virtual appliance, we have full access to the ‘dsa’ service ID and can modify these knowledge files before/after deployment. Suppose we wish to roll back or add a new Provisioning Server Virtual Appliance. In that case, we must disable the multi-write HUB group configuration temporarily, e.g. comment out the configuration parameter and re-init the DATA DSAs.
Six (6) Steps for Global Password Reset Solution Deployment
We were able to refine our list of steps for deployment using pre-built knowledge files and deployment of the vApp nodes as blank slates with the base components of Provisioning Server (PS) and Provisioning Directory, with a remote MS Windows server for the Connector Server (JCS/CCS).
Step 1: Update the Symantec Directory DATA DSA’s knowledge configuration files to use the multiple group HUB model. Note that the multi-write group configuration is enabled within the DATA DSA’s *.dxc files. One Directory server in each data center will be defined as a “HUB”.
To assist this configuration effort, we leveraged a series of bash shell scripts that could be pasted into multiple putty/ssh sessions on each vApp to replace the “HUB” string with a “sed” command; see the sketch after this step.
After the HUB model is enabled (stop/start the DATA DSAs), confirm that the WAN latency poses no challenge to the Symantec Directory sync processes. By monitoring the Symantec Directory logs during replication, we can see sync operations completing over the WAN latency (delays > 1 msec) between data centers AMER1 and APAC1.
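A minimal sketch of that sed approach, assuming the pre-built knowledge files carry a literal HUB placeholder token (the token, file glob, and DSA name below are illustrative only):
#!/bin/bash
# Replace the HUB placeholder in each DATA DSA knowledge file with this
# data center's HUB DSA name, then restart the DSAs to apply the change.
HUB_DSA="dsa_amer1"    # illustrative HUB DSA name for this data center
for f in "$DXHOME"/config/knowledge/*.dxc; do
  sed -i "s|HUB|${HUB_DSA}|g" "$f"
done
dxserver stop all
dxserver start all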
Step 2: Update IMPS configurations to avoid delays with Global Password Reset solution.
Note for this architecture, we do not use external IME Password Policies. We ensure that each AD endpoint has the checkbox enabled for “Password synchronization agent is installed” & each Global User (GU) has “Enable Password Synchronization Agent” checkbox enabled to prevent data looping. To ensure this GU attribute is always enabled, we updated an attribute under “Create Users Default Attributes”.
Step 3a: Update the Connector Tier (CCS Component)
Ensure that the MS Windows Environmental variables for the CCS connector are defined for Failover (ADS_FAILOVER) and Retry (ADS_RETRY).
Step 3b: Update the CCS DNS knowledge file of ADS DCs hostnames.
Important Note: Avoid using the refresh feature “Refresh DC List” within the IMPS GUI for the ADS Endpoint. If this feature is used, a “merge” will be processed between the local CCS DNS file contents and what is defined within the IMPS GUI refresh process. If we wish to manage the redirection to local MS ADS Domain Controllers, we need to control this behavior. If the refresh does occur, we can clean the extra entries out of the Symantec Directory. The only negative aspect is that a local password change may attempt to communicate with one of the remote MS ADS Domain Controllers that is not within the local data center. During a WAN outage, a user would notice a delay during the password change event while the CCS connector timed out the connection until it connected to the local MS ADS DC.
Step 3c: CCS ADS Failover
If using SSL over TCP 636 confirm the ADS Domain Root Certificate is deployed to the MS Windows Server where the CCS service is deployed. If using SASL over TCP 389 (if available), then no additional effort is required.
If using SSL over TCP 636, use the MS tool certlm.msc to export the public root CA Certificate for this ADS Domain. Export to base64 format for import to the MS Windows host (if not already part of the ADS Domain) with the same MS tool certlm.msc.
Step 4a: Update the Connector Tier for the JCS component.
Add the stabilization parameter “maxWait” to the JCS/CCS configuration file. We recommend a value of 10-30 seconds.
Step 4b: Update JCS registration to the IMPS Tier
You may use the Virtual Appliance Console, but it has a delay when pulling the list of any JCS connector that may be down at the time of the check/submission. If we use the Connector Xpress UI, we can accomplish the same process much faster, with additional flexibility for routing rules to the exact MS ADS Endpoints in the local data center.
Step 4c: Observe the IMPS routing to JCS via etatrans log during any transaction.
If any JCS service is unavailable (TCP 20411), the routing rules process will report a value of 999.00 instead of a low value of 0.00-1.00.
Step 5: Update the Remote Password Change Agent (DLL) on MS ADS Domain Controllers (writable)
Step 6a: Validation of Self-Service Password Change to selected MS ADS Domain Controller.
Using various MS Active Directory processes, we can emulate a delegated or self-service password change early during the configuration cycle, to confirm deployment is correct. The below example uses MS Powershell to select a writable MS ADS Domain Controller to update a user’s password. We can then monitor the logs at all tiers for completion of this password change event.
A view of the password change event from the Reverse Password Sync Agent log file on the exact MS Domain Controller.
Step 6b: Validation of password change event via CCS ADS Log.
Step 6c: Validation of password change event via IMPS etatrans log
Note: The below screenshot showcases an alias/function to assist with monitoring the etatrans logs on the Virtual Appliance.
The below screenshot showcases using ldapsearch to check the before/after timestamps of a password change event within the MS Active Directory Domain.
We hope these notes are of some value to your business and projects.
Appendix
Using the MS Windows Server for CCS Server
Get current status of AD account on select DC server before Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with current password.)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password05" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Change AD account's password via Powershell:
PowerShell Example:
Set-ADAccountPassword -Identity "idmpwtest" -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "Password06" -Force) -Server dc2016.exchange.lab
Get current status of AD account on select DC server after Password Change:
PowerShell Example:
get-aduser -Server dc2012.exchange2020.lab "idmpwtest" -properties passwordlastset, passwordneverexpires | ft name, passwordlastset
LdapSearch Example: (using ldapsearch.exe from CCS bin folder - as the user with NEW password)
C:\> & "C:\Program Files (x86)\CA\Identity Manager\Connector Server\ccs\bin\ldapsearch.exe" -LLL -h dc2012.exchange2012.lab -p 389 -D "cn=idmpwtest,cn=Users,DC=exchange2012,DC=lab" -w "Password06" -b "CN=idmpwtest,CN=Users,DC=exchange2012,DC=lab" -s base pwdLastSet
Using the Provisioning Server for password change event
Get current status of AD account on select DC server before Password Change:
LDAPSearch Example: (From IMPS server - as user with current password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password05 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
Change AD account's password via ldapmodify & base64 conversion process:
LDAPModify Example:
BASE64PWD=`echo -n '"Password06"' | iconv -f utf8 -t utf16le | base64 -w 0`
ADSHOST='192.168.242.154'
ADSUSERDN='CN=Administrator,CN=Users,DC=exchange2012,DC=lab'
ADSPWD='Password01!'
ldapmodify -v -a -H ldaps://$ADSHOST:636 -D "$ADSUSERDN" -w "$ADSPWD" << EOF
dn: CN=idmpwtest,OU=People,dc=exchange2012,dc=lab
changetype: modify
replace: unicodePwd
unicodePwd::$BASE64PWD
EOF
Get current status of AD account on select DC server after Password Change:
LDAPSearch Example: (From IMPS server - with user's account and new password)
LDAPTLS_REQCERT=never ldapsearch -LLL -H ldaps://192.168.242.154:636 -D 'CN=idmpwtest,OU=People,dc=exchange2012,dc=lab' -w Password06 -b "CN=idmpwtest,OU=People,dc=exchange2012,dc=lab" -s sub dn pwdLastSet whenChanged
The virtual appliance and standalone deployment of Symantec (CA) Identity Suite allow for redirecting authentication for the J2EE tier application through Symantec SSO or directly to an Active Directory domain, instead of the existing userstore for the solution.
Challenge:
The standalone deployment of Symantec (CA) Identity Suite on MS Windows OS allowed for the mid-tier component to utilize PAM modules to redirect to AD authentication for the Global User.
However, this PAM feature does not exist for Provisioning Servers on the virtual appliance.
To be clear, there are no expectations that this feature will be introduced in the future roadmap for the solution, as the primary UI will be the web browser.
Review:
Symantec (CA) Identity Suite architecture for virtual appliance versus standalone deployment architecture.
The standalone deployment architecture has both MS Windows and Linux components of all tiers.
The vApp deployment architecture has primarily Linux components and a few MS Windows components.
The vApp MS Windows components do not include the IMPS (Provisioning Server).
Proposal:
To address this requirement of enabling AD authentication to the vApp Provisioning Server, we will introduce the concept of a “jump server”.
The “jump server” will utilize the standalone deployment of Symantec Identity Provisioning Server on an MS Windows OS. This “jump server” will be deployed as an “alternative server” integrated into the existing vApp Provisioning Directory deployment.
We will select deployment configuration ONLY of the Provisioning Server itself. We do not require the embedded CCS Service.
We will integrate this “jump server” deployment with the existing Symantec Identity solution.
Ensure the imps_datakey encryption seed file is in sync between all components, vApp and standalone.
To avoid impacting the existing vApp deployment, we will NOT integrate the “jump server” deployment to the IME. The IME’s Directory XML for the Provisioning Directory will not be updated.
Important Note: The Symantec/CA Directory solution is required as a pre-step.
Summary of deployment steps:
Select an MS Windows OS workstation (clean or with JCS/CCS Services) that may be part of the MS AD Domain
Option 1: [RECOMMENDED & PREFERRED] If using a clean OS, install MS .NetFramework 3.5.1 for the provisioning component.
Open cmd as administrator to deploy: DISM /Online /Enable-Feature /All /FeatureName:NetFx3
Option 2: [MED-HIGH RISK] If using “side-deployment” on an existing JCS/CCS server (MS Win OS), we will need to make modifications to this server.
We will need to rename the file C:\Windows\vpd.properties to avoid a conflict with the JCS/CCS component naming convention in this “registry” file (see the below screenshot).
We will also require a post-install execution of the IMPS pwdmgr tool to address an MS Registry path conflict between the CCS and IMPS components.
Ensure all CA Directory hostnames are in DNS or in the MS Windows local hosts file (C:\Windows\System32\drivers\etc\hosts); otherwise, this “jump server” deployment will fail when it tries to validate all possible directory nodes’ hostnames and build the respective Directory knowledge files.
Create a reference file for the new IMPS router dxc file on at least one of the existing vApp Identity Suite Directory Servers; otherwise, this “jump server” deployment will fail due to a trust issue when testing connections to the other directory nodes’ hostnames.
Deploy Symantec/CA Directory (if not already done) – default configurations. Otherwise, you will see this error message
Deploy IMPS on MS Windows – only IMPS (no CCS) with the Alternative Server Selection configuration, and update to the latest CP patches. Note (for “side-deployment” only): if the vpd.properties file was not renamed, a name collision will occur due to this registry file when using the JCS/CCS server to side-deploy. It is low-risk to change this file, as it is only used to prevent deploying a lower release version of a component over a previously installed higher release version of the same component. If there is a concern, all components can be reinstalled as needed. Do not forget to install the latest CP patches to ensure this “jump server” is at the same binary level as the vApp solution.
Review of additional notes during deployment of the “jump server”. Note (for “side-deployment” only): on the page that asks for the Identity Suite Directory connection information, you will see the solution attempt to load environment variables that do not exist. Override these values and enter the Directory hostname, port 20394, and the default bind DN credentials for a Directory userID: eTDSAContainerName=DSAs,eTNamespaceName=CommonObjects,dc=im,dc=etadb
Deploy IMPM Manager GUI if needed.
Post-Deployment – Update the IMPM Manager GUI preference to connect ONLY to the new IMPS server on MS Windows. Use the “Enable Failover” checkbox and place its IP/hostname first in the list. Hint: Remove the other IMPS servers from this list, or add an extra digit to the entries you wish to keep but prevent auto-connectivity to. Confirm you are able to authenticate directly through the solution using prior credentials for your service ID (etaadmin or imadmin); this validates connectivity to the existing vApp Identity Suite solution.
On the “jump server”, under the Provisioning Server\pam\ADS folder, copy etapam.dll to the IMPS \bin folder. Then copy the etapam_id.conf configuration file to the \pam parent folder and update the parameters in this file: set the enable= parameter to yes, and set domain= to either the MS AD Domain or the FQDN hostname of an ADS Domain Controller (DC). If we use the FQDN hostname of the DC, the “jump server” does NOT have to be made a member of the MS AD Domain. Save the file and restart the “CA Identity Manager – Provisioning Server” service.
Validate in the IMPS etatrans log that PAM functionality is enabled. We will see two (2) entries: “PAM: Initialization started” (same for all use-cases) and either “PAM: Not enabled” or “PAM: No PAM managed endpoint”. We want “PAM: No PAM managed endpoint” – the managed-endpoint option is an extra feature we could enable, but it is not required for the “jump server” scenario.
Validate PAM functionality with MS Sysinternals. Ensure that we copied the etapam.dll to the bin folder and that the configuration file is being read.
Test authentication using an IMPM Manager login as an IMPS Manager Global User that has the same userID format as the AD sAMAccountName. Negative use-case testing: create a new AD domain user that does NOT exist as a Global User and attempt to authenticate. Also test with etaadmin or another Global User that does NOT have a matching AD sAMAccountName entry. Review the IMPS etatrans logs on the “jump server”.
Update the IMPS encrypted seed file imps_datakey as needed.
Note: The MS Win version of IMPS encrypted seed file may be different than the vApp seed.
If this step is skipped, there will be no obvious error message, except that a bind will fail for communication to the JCS/CCS services.
After this file is updated, we will need to re-install the IMPS service to ensure that all prior encrypted passwords are replaced with new passwords using the new seed file. Basically, we need to install the MS Win version of the IMPS Server twice: a standard install, then change the seed file value, then re-install with “update all components” and updated passwords.
CCS Service conflict with “side-loading” the IMPS Service (“side-loading” methodology)
The “side-loading” process of deploying the “jump server” IMPS Provisioning Server on the JCS/CCS Server will impact the starting of the CCS service. The installation will update the MS Registry with extra branches and update shared attribute values between the CCS service and IMPS service, e.g. ETAHOME.
This challenge is a strong reason why we may choose the “clean” installation methodology, to avoid this conflict and possible support challenge.
To address this concern, update the new registry values that store the embedded reversible encrypted password for the CCS Service. Use the password reset tool “pwdmgr” and reset the “Connector Server” for both the “eta” & “im” domains to the prior stored password. If the imps_datakey file is not in sync between all provisioning servers (& the CCS service), we will see failed-bind connection error messages in the logs.
We will now be able to stop/start the JCS service, and see the embedded CCS service stop and start as well.
Example of challenge and error messages if imps_datakey is not updated and in sync.
Use the following command, csfconfig.exe, under the newly deployed IMPS bin folder to view the JCS connectors defined to the solution stack.
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>csfconfig.exe auth=etaadmin show
EtaSSL.initialize: CRYPTO_library_init: 1
EtaSSL.initialize: SSL_library_init: 1
Enter your authentication password:
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>echo Password01 > c:\imps.pwd
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>csfconfig.exe auth=etaadmin add name=pamjcs host=192.168.242.143 pass=c:\imps.pwd br-add=@ debug=yes port=20411
EtaSSL.initialize: CRYPTO_library_init: 1
EtaSSL.initialize: SSL_library_init: 1
Enter your authentication password:
Created CS object with name = pamjcs
C:\Program Files (x86)\CA\Identity Manager\Provisioning Server\bin>csfconfig.exe auth=etaadmin remove name=pamjcs
EtaSSL.initialize: CRYPTO_library_init: 1
EtaSSL.initialize: SSL_library_init: 1
Enter your authentication password:
We will see both error statuses when the imps_datakey file is out-of-sync with the others. Please ensure the Linux & MS Win versions are in sync.
You may view the file imps_datakey being referenced with the pwdmgr tool:
If you wish to monitor which (embedded) accounts are updated with the IMPS pwdmgr tool: su - imps, then execute the two commands in a different SSH shell to monitor the pwdmgr.log that was enabled, as sketched below.
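A minimal sketch of that flow (the log path is an assumption; confirm the actual location on your appliance):
# SSH session 1: run the password reset tool as the imps service ID
su - imps
pwdmgr
# SSH session 2: follow the pwdmgr log (path illustrative)
tail -f ~imps/pwdmgr.log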
Enablement of extra functionality (bypass the no-sync option on Global User password update)
You may wish to keep the Global User and AD password in sync. If they are not, then you will have two passwords that will work for the Global User account. The newer PAM AD authentication credentials, and the older Global User password. The etapam.dll module data path appears to check for PAM AD first, and if it fails, then it will check the Global User eTPassword field as well.
Enable the AD endpoint in the etapam_id.conf file. The type and domain will be as shown, e.g. Active Directory and im (for the vApp). The endpoint-name will be free-form and whatever you may have named your AD endpoint in the IMPS GUI.
Monitor the startup of the PAM module within the IMPS etatrans*.log
Perform a use-case test of changing a Global User account without correlation to an AD endpoint, and then retest with a Global User that is correlated to an AD endpoint. Do both tests with the NO SYNC operation.
If the Global User is already correlated to an AD endpoint account, then we will see a “Child Modify” operation to the correlated AD endpoint account’s Password within the IMPS etatrans*.log.
One “gotcha”: there appears to be a check against the AD password policy. If the new password does not fit the AD password policy, the following error message will appear: “ETA_E_0007 <MGU>, Global user XXXXXXX modification failed: PAM account password updated failed: Account password must match global user password.”
On Linux OS, there are two (2) device drivers that provide entropy “noise” for components that require encryption: the /dev/random and /dev/urandom device drivers. /dev/random is a “blocking” device driver: when the “noise” is low, any component that relies on this driver will be “stalled” until enough entropy is returned. We can measure the entropy in a range of 0-4096, where a value over 1000 is excellent. Any value in the double or single digits will impact the performance of the OS and solutions with delays. The root cause of these delays is not evident during troubleshooting, and typically there are no warning or error messages related to entropy.
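The Linux kernel exposes the current pool value directly, so we can check it at any time:
# Current entropy (range 0-4096; over ~1000 is healthy)
cat /proc/sys/kernel/random/entropy_avail
# Watch it live while exercising the system
watch -n 1 cat /proc/sys/kernel/random/entropy_avail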
The Symantec Identity Suite solution, when deployed on Linux OS is typically deployed with the JVM switch -Djava.security.egd=file:/dev/./urandom for any component that uses Java (Oracle or AdoptOpenJDK), e.g. Wildfly (IM/IG/IP) and IAMCS (JCS). This JVM variable is sufficient for most use-cases to manage the encryption/hash needs of the solution.
However, for any component that does not provide a mechanism to use the alternative of /dev/urandom driver, the Linux OS vendors offer tools such as the “rng-tools” package. We can review what OS RNGD service is available using package tools, e.g.
dnf list installed | grep -i rng
If the Symantec Identity Suite or other solutions are deployed as standalone components, then we may adjust the Linux OS as needed, with no restrictions on adding an RNGD daemon. One favorite is the HAVEGED daemon over the default OS RNGD.
See prior notes on value and testing for Entropy on Linux OS (standalone deployments):
The challenge for Virtual Appliances is that we are limited to the functionality the Symantec Product Team provides for us to leverage. The RNGD service was available on vApp r14.3, but was disabled due to OS challenges with 100% utilization on CentOS 6.4. The service is still installed, but the actual binary is non-executable.
A new Virtual Appliance patch would be required to re-enable RNGD on vApp r14.3cp2. We have sudo access to /sbin/chkconfig and /sbin/service to re-enable this service, but as the binary is not executable, we cannot progress any further. The alias still exists in the documentation, but the OS alias was removed in the cp2 update.
However, since vApp r14.4 was released, we can focus on this Virtual Appliance, which is running CentOS 8 Stream. The RNGD service here is disabled (masked) but can be re-enabled for our use with the sudo command. There is no currently documented method for RNGD on vApp r14.4, but the steps below show an approved way using the ‘config’ userID and sudo commands.
Confirm that the “rng-tools” package is installed and that the RNGD binary is executable. We can also see that the RNGD service is “masked”. Masked services are prevented from starting manually or automatically as an extra safety measure when we wish for tighter control over our systems.
If we test OS entropy on this vApp r14.4 server without RNGD, we can observe how a simple BASH shell script that emulates password generation impacts the “entropy” of /dev/random. The below script reduces the entropy to low numbers, which impacts the OS itself and any components that reference /dev/random. We can observe with “lsof /dev/random” that the java programs still reference /dev/random, even though most activity goes to /dev/urandom.
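A rough sketch of that script (our lab version was similar, not identical):
#!/bin/bash
# Emulate password generation by reading the blocking /dev/random device.
# Each pass prints the time taken and the entropy remaining in the pool.
for i in $(seq 1 50); do
  time head -c 16 /dev/random | base64
  echo "entropy_avail: $(cat /proc/sys/kernel/random/entropy_avail)"
done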
Using the time command in the BASH shell script, we can see that the response is rapid for the first 20+ iterations, but as soon as the entropy is depleted, each execution is delayed by 10-30x.
Now let’s see what the RNGD service will do for us when it is enabled. Follow the steps below to unmask, enable, and start the RNGD service as the ‘config’ userID. We have sudo access to the CentOS 8 Stream command /sbin/systemctl.
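The unmask/enable/start sequence, relying on the sudo access granted to /sbin/systemctl:
sudo /sbin/systemctl unmask rngd
sudo /sbin/systemctl enable rngd
sudo /sbin/systemctl start rngd
sudo /sbin/systemctl status rngd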
After the RNGD service is enabled, test again with the same prior BASH shell script, but bump the loops to 1000 or higher. Using the time command, we can see that each loop now finishes within a fraction of a second.
Aim to keep the solution footprint small and right-sized to solve the business’ needs. Do not simply accept default performance settings, and avoid over-purchasing to scale to your expected growth.
Use the JVM switch wherever there is a java process, e.g. BLC or home-grown ETL (extract-transform-load) processes.
-Djava.security.egd=file:/dev/./urandom
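For example, on a standalone Wildfly tier, this switch may be appended to JAVA_OPTS within bin/standalone.conf (the exact path may vary by install):
# bin/standalone.conf (appended at the end)
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"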
If you suspect a dependency may impact the OS or other processes via /dev/random, then enable the OS RNGD and perform your testing. Monitor with the top command to ensure the RNGD service is providing value and not impacting the solution.
One business risk to manage, when new business logic is promoted to production environments, is planning a rollback process where prior-state data is restored. This is especially important for an application/endpoint that is critical to the business and as important to users as their login credentials and access.
In this entry, we showcase how to use CA Directory to snapshot an endpoint on a scheduled basis (daily/hourly) and have the process prepare a rollback delta file for users’ entitlements.
By understanding how queries may be directed to an endpoint/application, either natively or via the CA Identity Manager provisioning tier, we can speed up this process dramatically for sites that have millions of identities in an endpoint.
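A minimal cron sketch for the daily cadence (script path, schedule, and log file are illustrative):
# crontab entry for the dsa service ID: snapshot nightly at 01:00
0 1 * * * /opt/scripts/active_directory_user_delta_via_ca_dir_tools-lab.sh >> /tmp/ads_snapshot.log 2>&1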
#!/bin/bash
##############################################################################
#
# POC to demonstrate the process to snapshot endpoint data on a daily basis
# and to allow a format for roll back
#
# 1. Review ADS with dxsearch/dxmodify
# 2. Create ADS representative Router DSA with CA Directory
# 3. Create ldif delta of snapshot data
# 4. Convert 'replace' to 'add' to ensure Roll back process is a 'merge'
# and NOT an 'overwrite' of entitlements
#
#
#
# A. Baugher, ANA, 11/2019
#
##############################################################################
########## Secure password for script ########
FILE=/tmp/.ads.hash.pwd
#rm -rf $FILE $FILE.salt
[[ -f $FILE ]]
echo "Check if $FILE exists: $?"
[[ -s $FILE ]]
echo "Check if $FILE is populated: $?"
if [[ ! -s $FILE && ! -s $FILE.salt ]]
then
# File did not have any data
# Run script once with pwd then replace with junk data in script
SALT=$RANDOM$RANDOM$RANDOM
# Note: name this variable something other than PWD; PWD is the shell's
# built-in current-directory variable (hence the path in the sample output below).
CLEARPWD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ENCPWD=$(echo $CLEARPWD | openssl enc -aes-256-cbc -a -salt -pass pass:$SALT)
echo $ENCPWD > $FILE
echo $SALT > $FILE.salt
chmod 600 $FILE $FILE.salt
fi
if [[ -s $FILE && -s $FILE.salt ]]
then
ENCPWD=`cat $FILE`
SALT=`cat $FILE.salt`
echo "$PWD and $SALT for $ENCPWD"
MYPWD=$(echo "$ENCPWD" | openssl enc -aes-256-cbc -a -d -salt -pass pass:$SALT)
echo "$PWD and $SALT for $MYPWD"
else
echo "Missing password encrypted data and salt"
exit 1
fi
#exit
echo ""
echo "##############################################################################"
echo "Step 0 # Remove prior ads schema files"
echo "##############################################################################"
ADS_SCHEMA=ads_schema
ADS_SUFFIX="dc=exchange,dc=lab"
RANDOM_PORT=50389
rm -rf $DXHOME/config/knowledge/$ADS_SCHEMA.dxc
rm -rf $DXHOME/config/servers/$ADS_SCHEMA.dxi
rm -rf $DXHOME/config/schema/$ADS_SCHEMA.dxc
echo ""
echo "##############################################################################"
echo "Step 1 # Create new router DSA"
echo "##############################################################################"
echo "dxnewdsa -t router $ADS_SCHEMA $RANDOM_PORT $ADS_SUFFIX"
dxnewdsa -t router $ADS_SCHEMA $RANDOM_PORT $ADS_SUFFIX
echo""
echo "##############################################################################"
echo "Step 2 # Create temporary LDIF file of ADS schema"
echo "##############################################################################"
cd $DXHOME/config/schema
ADS_BIND_DN="CN=Administrator,CN=Users,DC=exchange,DC=lab"
ADS_BIND_PWD=$MYPWD
ADS_PASSFILE=/tmp/.ads.pwd
echo -n $MYPWD > $ADS_PASSFILE
chmod 600 $ADS_PASSFILE
ADS_SERVER=dc2016.exchange.lab
ADS_PORT=389
echo "dxschemaldif -v -D $ADS_BIND_DN -w ADS_BIND_PASSWORD_HERE $ADS_SERVER:$ADS_PORT > $ADS_SCHEMA.ldif"
dxschemaldif -v -D $ADS_BIND_DN -w $ADS_BIND_PWD $ADS_SERVER:$ADS_PORT > $ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 3 # Replace unknown SYNTAX with closely related SYNTAX known by CA Directory r12.6.5"
echo "##############################################################################"
echo "sed -i 's|1.2.840.113556.1.4.1221|1.3.6.1.4.1.1466.115.121.1.26|g' $ADS_SCHEMA.ldif"
sed -i 's|1.2.840.113556.1.4.1221|1.3.6.1.4.1.1466.115.121.1.26|g' $ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 4 - # Create CA Directory Schema DXC File from LDIF Schema File"
echo "##############################################################################"
echo "ldif2dxc -f $ADS_SCHEMA.ldif -b bad.ldif -x default.dxg -v $ADS_SCHEMA.dxc"
ldif2dxc -f $ADS_SCHEMA.ldif -b bad.ldif -x default.dxg -v $ADS_SCHEMA.dxc
echo ""
echo "##############################################################################"
echo "Step 5 - # Update router DSA schema reference"
echo "##############################################################################"
echo "sed -i \"s|source \"../schema/default.dxg\";|source \"../schema/default.dxg\";\nsource \"../schema/$ADS_SCHEMA.dxc\"; |g\" $DXHOME/config /servers/$ADS_SCHEMA.dxi"
sed -i "s|source \"../schema/default.dxg\";|source \"../schema/default.dxg\";\nsource \"../schema/$ADS_SCHEMA.dxc\"; |g" $DXHOME/config/servers /$ADS_SCHEMA.dxi
echo ""
echo "##############################################################################"
echo "Step 6 - # Query ADS endpoint for snapshot 1 "
echo "##############################################################################"
echo "dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX '(objectClass=User)' memberOf > snapshot_1_ $ADS_SCHEMA.ldif "
echo "ldifsort snapshot_1_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif "
dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX "(objectClass=User)" memberOf | perl -p00e 's/\r?\n //g' > snapshot_1_$ADS_SCHEMA.ldif
ldifsort snapshot_1_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 7 - # Query ADS endpoint for snapshot 2"
echo "##############################################################################"
echo "dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX '(objectClass=User)' memberOf > snapshot_2_ $ADS_SCHEMA.ldif "
echo "ldifsort snapshot_2_$ADS_SCHEMA.ldif snapshot_2_sorted_$ADS_SCHEMA.ldif "
dxsearch -LLL -h $ADS_SERVER -p $ADS_PORT -x -D $ADS_BIND_DN -y $ADS_PASSFILE -b $ADS_SUFFIX "(objectClass=User)" memberOf | perl -p00e 's/\r?\n //g' > snapshot_2_$ADS_SCHEMA.ldif
ldifsort snapshot_2_$ADS_SCHEMA.ldif snapshot_2_sorted_$ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 8 - # Find the delta for any removed objects"
echo "##############################################################################"
echo "ldifdelta -x -S $ADS_SCHEMA snapshot_2_sorted_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif"
ldifdelta -x -S $ADS_SCHEMA snapshot_2_sorted_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif
echo ""
echo "##############################################################################"
echo "Step 9a: Convert from User ldapmodify syntax of 'overwrite' of 'replace' "
echo "##############################################################################"
ldifdelta -S $ADS_SCHEMA snapshot_2_sorted_$ADS_SCHEMA.ldif snapshot_1_sorted_$ADS_SCHEMA.ldif user_mod_syntax_input.ldif >/dev/null 2>&1
cat user_mod_syntax_input.ldif | perl -p00e 's/\r?\n //g' > user_mod_syntax.ldif
cat user_mod_syntax.ldif
echo "##############################################################################"
echo "Step 9b: Convert to ADS Group ldapmodify syntax with a 'merge' of 'add' for the group objects"
echo "##############################################################################"
perl /opt/CA/Directory/dxserver/samples/dxsoak/convert.pl user_mod_syntax.ldif > group_mod_syntax_input.ldif
cat group_mod_syntax_input.ldif | perl -p00e 's/\r?\n //g' > group_mod_syntax.ldif
cat group_mod_syntax.ldif
echo "##############################################################################"
Example of output from above script:
[dsa@vapp0001]$ ./active_directory_user_delta_via_ca_dir_tools-lab.sh
Check if /tmp/.ads.hash.pwd exists: 0
Check if /tmp/.ads.hash.pwd is populated: 0
/opt/CA/Directory/dxserver/samples/dxsoak and 31936904511291 for U2FsdGVkX195Ti6A8GdFTG6Kmrf6xDcOhrd2aPWVezc=
/opt/CA/Directory/dxserver/samples/dxsoak and 31936904511291 for CAdemo123
20200427150345,505.0Z = Current OS UTC time stamp
##############################################################################
Step 0 # Remove prior ads schema files
##############################################################################
20200427150345,509.0Z = Current OS UTC time stamp
##############################################################################
Step 1 # Create new router DSA
##############################################################################
dxnewdsa -t router ads_schema 50389 dc=exchange,dc=lab
Writing the knowledge file...
knowledge file written
Writing the initialization file...
Initialization file written
Starting the DSA 'ads_schema'...
ads_schema starting
ads_schema started
20200427150345,513.0Z = Current OS UTC time stamp
##############################################################################
Step 2 # Create temporary LDIF file of ADS schema
##############################################################################
dxschemaldif -v -D CN=Administrator,CN=Users,DC=exchange,DC=lab -w ADS_BIND_PASSWORD_HERE dc2016.exchange.lab:389 > ads_schema.ldif
>> Issuing LDAP v3 synchronous bind to 'dc2016.exchange.lab:389'...
>> Fetching root DSE 'subschemaSubentry' attribute...
>> Downloading schema from 'CN=Aggregate,CN=Schema,CN=Configuration,DC=exchange,DC=lab'...
>> Received (4527) values
>> Done.
20200427150345,539.0Z = Current OS UTC time stamp
##############################################################################
Step 3 # Replace unknown SYNTAX with closely related SYNTAX known by CA Directory r12.6.5
##############################################################################
sed -i 's|1.2.840.113556.1.4.1221|1.3.6.1.4.1.1466.115.121.1.26|g' ads_schema.ldif
20200427150345,560.0Z = Current OS UTC time stamp
##############################################################################
Step 4 - # Create CA Directory Schema DXC File from LDIF Schema File
##############################################################################
ldif2dxc -f ads_schema.ldif -b bad.ldif -x default.dxg -v ads_schema.dxc
>> Opening input file 'ads_schema.ldif' ...
>> Opening existing dxserver schema file '/opt/CA/Directory/dxserver/config/schema/default.dxg' ...
>> Opening bad file 'bad.ldif' ...
>> Opening output file '/opt/CA/Directory/dxserver/config/schema/ads_schema.dxc' ...
>> Processing dxserver schema group file '/opt/CA/Directory/dxserver/config/schema/default.dxg'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/x500.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/cosine.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/umich.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/inetop.dxc'...
>> Processing dxserver schema config file '/opt/CA/Directory/dxserver/config/schema/dxserver.dxc'...
>> Loaded (248) existing dxserver schema entries
>> Loading LDIF records...
>> Loading LDIF record number (1)...
>> Skipping attr: 'objectClass'
>> Skipping attr: 'objectClass'
>> Processing loaded LDIF records...
>> Moving objectClasses to end of list...
>> Sorting attrs/objectClasses so parents precede their children...
>> Processing attributeTypes...
>> Defaulting 'directoryString' syntax without any (required) matching rules to 'caseIgnoreString'...
[Remove repeating lines x 1000]
>> Processing objectClasses...
>> Skipping existing schema entry 'top' with oid '2.5.6.0'...
>> Skipping existing schema entry 'locality' with oid '2.5.6.3'...
>> Skipping existing schema entry 'device' with oid '2.5.6.14'...
>> Skipping existing schema entry 'certificationAuthority' with oid '2.5.6.16'...
>> Skipping existing schema entry 'groupOfNames' with oid '2.5.6.9'...
>> Skipping existing schema entry 'organizationalRole' with oid '2.5.6.8'...
>> Skipping existing schema entry 'organizationalUnit' with oid '2.5.6.5'...
>> Skipping existing schema entry 'domain' with oid '1.2.840.113556.1.5.66'...
>> Skipping existing schema entry 'rFC822LocalPart' with oid '0.9.2342.19200300.100.4.14'...
>> Skipping existing schema entry 'applicationProcess' with oid '2.5.6.11'...
>> Skipping existing schema entry 'document' with oid '0.9.2342.19200300.100.4.6'...
>> Skipping existing schema entry 'room' with oid '0.9.2342.19200300.100.4.7'...
>> Skipping existing schema entry 'domainRelatedObject' with oid '0.9.2342.19200300.100.4.17'...
>> Skipping existing schema entry 'country' with oid '2.5.6.2'...
>> Skipping existing schema entry 'friendlyCountry' with oid '0.9.2342.19200300.100.4.18'...
>> Skipping existing schema entry 'groupOfUniqueNames' with oid '2.5.6.17'...
>> Skipping existing schema entry 'organization' with oid '2.5.6.4'...
>> Skipping existing schema entry 'simpleSecurityObject' with oid '0.9.2342.19200300.100.4.19'...
>> Skipping existing schema entry 'person' with oid '2.5.6.6'...
>> Skipping existing schema entry 'organizationalPerson' with oid '2.5.6.7'...
>> Skipping existing schema entry 'inetOrgPerson' with oid '2.16.840.1.113730.3.2.2'...
>> Skipping existing schema entry 'residentialPerson' with oid '2.5.6.10'...
>> Skipping existing schema entry 'applicationEntity' with oid '2.5.6.12'...
>> Skipping existing schema entry 'dSA' with oid '2.5.6.13'...
>> Skipping existing schema entry 'cRLDistributionPoint' with oid '2.5.6.19'...
>> Skipping existing schema entry 'documentSeries' with oid '0.9.2342.19200300.100.4.9'...
>> Skipping existing schema entry 'account' with oid '0.9.2342.19200300.100.4.5'...
>> Converting LDIF records to DXserver schema format...
>> Converted (4398) of (4525) schema records
20200427150345,894.0Z = Current OS UTC time stamp
##############################################################################
Step 5 - # Update router DSA schema reference
##############################################################################
sed -i "s|source "../schema/default.dxg";|source "../schema/default.dxg";\nsource "../schema/ads_schema.dxc"; |g" /opt/CA/Directory/dxserver/config/servers/ads_schema.dxi
20200427150345,897.0Z = Current OS UTC time stamp
##############################################################################
step 6 - # Update an ADS account with memberOf for testing with initial conditions
##############################################################################
dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd << EOF >/dev/null 2>&1
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
adding new entry CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
20200427150345,909.0Z = Current OS UTC time stamp
##############################################################################
Step 7 - # Query ADS endpoint for snapshot 1
##############################################################################
dxsearch -LLL -h dc2016.exchange.lab -p 389 -x -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -b dc=exchange,dc=lab '(&(objectClass=User)(memberOf=*))' memberOf | perl -p00e 's/\r?\n //g' > snapshot_1_ads_schema.ldif
ldifsort snapshot_1_ads_schema.ldif snapshot_1_sorted_ads_schema.ldif
creating buckets
creating sort cluster 1 of size 200
sorting 0 records
creating sort cluster 2 of size 200
sorting 200 records
creating sort cluster 3 of size 200
sorting 400 records
3 buckets created
sorting 588 records
588 records sorted, 0 bad records
20200427150345,940.0Z = Current OS UTC time stamp
##############################################################################
Step 8 - # Update an ADS account with memberOf for testing after snapshot
##############################################################################
dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd << EOF
Ignore the error msg: DSID-031A1254, problem 5003 (WILL_NOT_PERFORM)
This error will occur if a non-existent value is removed from the group's member attribute
##############################################################################
ldap_initialize( ldap://dc2016.exchange.lab:389 )
delete member:
CN=Test User 001,CN=Users,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
delete member:
CN=eeeee,CN=Users,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
delete member:
CN=Test User 001,CN=Users,DC=exchange,DC=lab
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
ldap_modify: Server is unwilling to perform (53)
additional info: 00000561: SvcErr: DSID-031A1254, problem 5003 (WILL_NOT_PERFORM), data 0
delete member:
CN=alantest,CN=Users,DC=exchange,DC=lab
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
delete member:
CN=eeeee,CN=Users,DC=exchange,DC=lab
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
ldap_modify: Server is unwilling to perform (53)
additional info: 00000561: SvcErr: DSID-031A1254, problem 5003 (WILL_NOT_PERFORM), data 0
add member:
CN=alantest,CN=Users,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modify complete
deleting entry "CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab"
delete complete
20200427150345,954.0Z = Current OS UTC time stamp
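As in Step 6, the here-document body is not shown; judging by the output it removes individual member values and then deletes the test entry. A hedged sketch (DNs taken from the output above, structure assumed):

# Hypothetical heredoc body - remove one member value, then delete the test entry
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
delete: member
member: CN=Test User 001,CN=Users,DC=exchange,DC=lab

dn: CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
changetype: delete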
##############################################################################
Step 9 - # Query ADS endpoint for snapshot 2
##############################################################################
dxsearch -LLL -h dc2016.exchange.lab -p 389 -x -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -b dc=exchange,dc=lab '(&(objectClass=User)(memberOf=*))' memberOf | perl -p00e 's/\r?\n //g' > snapshot_2_ads_schema.ldif
ldifsort snapshot_2_ads_schema.ldif snapshot_2_sorted_ads_schema.ldif
creating buckets
creating sort cluster 1 of size 200
sorting 0 records
creating sort cluster 2 of size 200
sorting 200 records
creating sort cluster 3 of size 200
sorting 400 records
3 buckets created
sorting 587 records
587 records sorted, 0 bad records
20200427150345,985.0Z = Current OS UTC time stamp
##############################################################################
Step 10 - # Find the delta for any removed objects
##############################################################################
ldifdelta -x -S ads_schema snapshot_2_sorted_ads_schema.ldif snapshot_1_sorted_ads_schema.ldif
dn: CN=eeeee,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alantest,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
-
dn: CN=Test User 001,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
changetype: add
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
ldifdelta summary:
587 entries in old file
588 entries in new file
Produced:
1 add entry records
0 delete entry records
3 modify entry records
20200427150346,070.0Z = Current OS UTC time stamp
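Note the argument order: ldifdelta treats the first file as 'old' and the second as 'new' (the summary confirms this - 587 entries in the old file matches snapshot 2, 588 in the new file matches snapshot 1), so passing the snapshots in this order produces exactly the changes needed to roll back to snapshot 1. Capturing the output for Step 11 might look like this (the file name user_mod_syntax.ldif is our illustrative choice):

ldifdelta -x -S ads_schema snapshot_2_sorted_ads_schema.ldif snapshot_1_sorted_ads_schema.ldif > user_mod_syntax.ldif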
##############################################################################
Step 11a: Convert from the user ldapmodify syntax, which 'overwrites' via 'replace'
##############################################################################
dn: CN=eeeee,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alantest,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
memberOf: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
-
dn: CN=Test User 001,CN=Users,DC=exchange,DC=lab
changetype: modify
replace: memberOf
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
-
dn: CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab
changetype: add
memberOf: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
20200427150346,163.0Z = Current OS UTC time stamp
##############################################################################
Step 11b: Convert to the ADS group ldapmodify syntax, which 'merges' via 'add', for the group objects
##############################################################################
dn: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=eeeee,CN=Users,DC=exchange,DC=lab
member: CN=Test User 001,CN=Users,DC=exchange,DC=lab
dn: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
changetype: modify
add: member
member: CN=alantest,CN=Users,DC=exchange,DC=lab
# Ignoring Users: [CN=alan-del-scenario,OU=o365,DC=exchange,DC=lab <-> CN=Account Operators,CN=Builtin,DC=exchange,DC=lab] Reason: User NOT present in the latest Snapshot! Cannot add to group.
20200427150346,172.0Z = Current OS UTC time stamp
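The transcript does not show how the Step 11a records were pivoted into the Step 11b form. One possible sketch (ours, not the author's converter) groups each (user, group) pair by group DN and emits 'add: member' records; it assumes the unfolded user-centric delta was saved as user_mod_syntax.ldif, and it does not implement the 'user NOT present in the latest Snapshot' check reported above:

awk '
  /^dn: /       { user = substr($0, 5) }                        # remember the user DN
  /^memberOf: / { grp = substr($0, 11)                          # collect one (group,user) pair
                  members[grp] = members[grp] "member: " user "\n" }
  END { for (g in members)
          printf "dn: %s\nchangetype: modify\nadd: member\n%s\n", g, members[g] }
' user_mod_syntax.ldif > group_mod_syntax.ldif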
##############################################################################
Step 11c: Query ADS group member(s) before the roll-back process
##############################################################################
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
20200427150346,185.0Z = Current OS UTC time stamp
##############################################################################
Step 12: Roll back the changes to the ADS users' memberOf attribute
##############################################################################
Ignore the false-positive warning message (ENTRY_EXISTS) - this is expected during the 'merge' process
##############################################################################
dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -f group_mod_syntax.ldif
modifying entry CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
modifying entry CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
modifying entry CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
modifying entry CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
ldap_modify: Already exists (68)
additional info: 00000562: UpdErr: DSID-031A11E2, problem 6005 (ENTRY_EXISTS), data 0
20200427150346,194.0Z = Current OS UTC time stamp
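Because the -c switch lets dxmodify continue past the expected ENTRY_EXISTS responses, an optional refinement (our suggestion) is to filter that noise so only unexpected errors surface:

dxmodify -c -H ldap://dc2016.exchange.lab:389 -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -f group_mod_syntax.ldif 2>&1 | grep -v -e 'Already exists (68)' -e 'ENTRY_EXISTS'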
##############################################################################
Step 13: Query ADS group member(s) after the roll-back process
##############################################################################
dn: CN=Account Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=eeeee,CN=Users,DC=exchange,DC=lab
member: CN=Test User 001,CN=Users,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Access Control Assistance Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Backup Operators,CN=Builtin,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
dn: CN=Help Desk,OU=Microsoft Exchange Security Groups,DC=exchange,DC=lab
member: CN=alantest,CN=Users,DC=exchange,DC=lab
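The group queries in Steps 11c and 13 can be reproduced with a base-scoped dxsearch per group; a sketch, mirroring the dxsearch calls above (the -s base scope is our assumption):

dxsearch -LLL -h dc2016.exchange.lab -p 389 -x -D CN=Administrator,CN=Users,DC=exchange,DC=lab -y /tmp/.ads.pwd -b 'CN=Account Operators,CN=Builtin,DC=exchange,DC=lab' -s base '(objectClass=*)' member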