Configuring a vRO 8.x Cluster

In this blog, I'll share the procedure to create a vRealize Orchestrator cluster on version 8.2.


The recommended number of nodes for a clustered vRealize Orchestrator environment is three. In this lab, I am using the following three appliances:



vropri.prem.com
vrosec.prem.com
vroteri.prem.com

Once the appliances are deployed, ensure that all pods are in a Running state before moving on to the next step to configure clustering.
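A quick way to verify this on each appliance is shown below (a minimal check using the same namespace referenced later in this post; it simply confirms every pod reports Running):

# confirm that every vRO pod reports a Running status
kubectl -n prelude get pods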






Procedure


In my case, there isn't a working load balancer, so I am using a workaround for the moment: I've created a CNAME record that points to the primary node.


Ideally, you should configure a load balancer as described in the Orchestrator Load-Balancing Guide.


Now all the nodes are in a ready state, and I've taken a snapshot of all three nodes before making any changes. These snapshots were taken without memory while the appliances were powered on.



Configure the primary node.


Log in to the vRealize Orchestrator Appliance of the primary node over SSH as root.
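For example (the FQDN below is the primary node from the node list above):

ssh root@vropri.prem.com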

To configure the cluster load balancer server, run the following command:

vracli load-balancer set load_balancer_FQDN
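For example, with the CNAME workaround described earlier, the command would look like this (vro-lb.prem.com is a hypothetical FQDN; substitute your own load balancer or CNAME address):

# hypothetical CNAME that currently points to the primary node
vracli load-balancer set vro-lb.prem.com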

Click Change and set the host address of the connected load balancer server.


Configure the authentication provider. See Configuring a Standalone vRealize Orchestrator Server.



Join the secondary nodes to the primary node.

Log in to the vRealize Orchestrator Appliance of the secondary node over SSH as root.


To join the secondary node to the primary node, run the following command:

vracli cluster join primary_node_hostname_or_IP

Enter the root password of the primary node when prompted. Repeat the procedure for the other secondary nodes.
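In this lab, that means running the same command on vrosec.prem.com and vroteri.prem.com, pointing at the primary node from the node list above:

vracli cluster join vropri.prem.com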

The following events occur when you execute the cluster join command.


The snippet below is from one of the secondary nodes, vroteri.prem.com:


root@vroteri [ ~ ]#  vracli cluster join vropri.prem.com
2020-11-24 14:13:19,085 [INFO] Resetting the current node ..
2020-11-24 14:18:09,306 [INFO] Getting join bundle from remote endpoint ..
Password:
2020-11-24 14:22:30,362 [INFO] Parsing join bundle
2020-11-24 14:22:30,390 [INFO] Deleting data from previous use on this node ..
2020-11-24 14:22:31,006 [INFO] Creating missing data directories on this node ..
2020-11-24 14:22:31,082 [INFO] Allowing other nodes to access this node ..
2020-11-24 14:22:31,101 [INFO] Updating hosts file for remote endpoint ..
2020-11-24 14:22:31,114 [INFO] Executing cluster join ..
I1124 14:22:31.221791   30566 join.go:357] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery
I1124 14:22:31.224053   30566 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I1124 14:22:31.224103   30566 join.go:375] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I1124 14:22:31.224243   30566 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
I1124 14:22:31.224550   30566 interface.go:400] Looking for default routes with IPv4 addresses
I1124 14:22:31.224592   30566 interface.go:405] Default route transits interface "eth0"
I1124 14:22:31.224842   30566 interface.go:208] Interface eth0 is up
I1124 14:22:31.224968   30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20].
I1124 14:22:31.225031   30566 interface.go:223] Checking addr  10.109.44.140/20.
I1124 14:22:31.225068   30566 interface.go:230] IP found 10.109.44.140
I1124 14:22:31.225117   30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0".
I1124 14:22:31.225151   30566 interface.go:411] Found active IP 10.109.44.140
[preflight] Running pre-flight checks
I1124 14:22:31.225361   30566 preflight.go:90] [preflight] Running general checks
I1124 14:22:31.225501   30566 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1124 14:22:31.225580   30566 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I1124 14:22:31.225626   30566 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1124 14:22:31.225673   30566 checks.go:102] validating the container runtime
I1124 14:22:31.342648   30566 checks.go:128] validating if the service is enabled and active
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1124 14:22:31.532103   30566 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1124 14:22:31.532217   30566 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1124 14:22:31.532283   30566 checks.go:649] validating whether swap is enabled or not
I1124 14:22:31.532392   30566 checks.go:376] validating the presence of executable conntrack
I1124 14:22:31.532532   30566 checks.go:376] validating the presence of executable ip
I1124 14:22:31.532583   30566 checks.go:376] validating the presence of executable iptables
I1124 14:22:31.532650   30566 checks.go:376] validating the presence of executable mount
I1124 14:22:31.532687   30566 checks.go:376] validating the presence of executable nsenter
I1124 14:22:31.532741   30566 checks.go:376] validating the presence of executable ebtables
I1124 14:22:31.532847   30566 checks.go:376] validating the presence of executable ethtool
I1124 14:22:31.532959   30566 checks.go:376] validating the presence of executable socat
I1124 14:22:31.533016   30566 checks.go:376] validating the presence of executable tc
I1124 14:22:31.533088   30566 checks.go:376] validating the presence of executable touch
I1124 14:22:31.533353   30566 checks.go:520] running all checks
I1124 14:22:31.631717   30566 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I1124 14:22:31.632132   30566 checks.go:618] validating kubelet version
I1124 14:22:31.724374   30566 checks.go:128] validating if the service is enabled and active
I1124 14:22:31.737672   30566 checks.go:201] validating availability of port 10250
I1124 14:22:31.738591   30566 checks.go:432] validating if the connectivity type is via proxy or direct
I1124 14:22:31.738713   30566 join.go:455] [preflight] Fetching init configuration
I1124 14:22:31.738727   30566 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1124 14:22:31.798981   30566 interface.go:400] Looking for default routes with IPv4 addresses
I1124 14:22:31.799061   30566 interface.go:405] Default route transits interface "eth0"
I1124 14:22:31.799375   30566 interface.go:208] Interface eth0 is up
I1124 14:22:31.799490   30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20].
I1124 14:22:31.799542   30566 interface.go:223] Checking addr  10.109.44.140/20.
I1124 14:22:31.799591   30566 interface.go:230] IP found 10.109.44.140
I1124 14:22:31.799656   30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0".
I1124 14:22:31.799699   30566 interface.go:411] Found active IP 10.109.44.140
I1124 14:22:31.799798   30566 preflight.go:101] [preflight] Running configuration dependant checks
[preflight] Running pre-flight checks before initializing the new control plane instance
I1124 14:22:31.801206   30566 checks.go:577] validating Kubernetes and kubeadm version
I1124 14:22:31.801271   30566 checks.go:166] validating if the firewall is enabled and active
I1124 14:22:31.813148   30566 checks.go:201] validating availability of port 6443
I1124 14:22:31.813292   30566 checks.go:201] validating availability of port 10259
I1124 14:22:31.813358   30566 checks.go:201] validating availability of port 10257
I1124 14:22:31.813430   30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1124 14:22:31.813486   30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1124 14:22:31.813516   30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1124 14:22:31.813541   30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1124 14:22:31.813566   30566 checks.go:432] validating if the connectivity type is via proxy or direct
I1124 14:22:31.813609   30566 checks.go:471] validating http connectivity to first IP address in the CIDR
I1124 14:22:31.813653   30566 checks.go:471] validating http connectivity to first IP address in the CIDR
I1124 14:22:31.813695   30566 checks.go:201] validating availability of port 2379
I1124 14:22:31.813833   30566 checks.go:201] validating availability of port 2380
I1124 14:22:31.813917   30566 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1124 14:22:31.882093   30566 checks.go:838] image exists: vmware/kube-apiserver:v1.18.5-vmware.1
I1124 14:22:31.945357   30566 checks.go:838] image exists: vmware/kube-controller-manager:v1.18.5-vmware.1
I1124 14:22:32.007510   30566 checks.go:838] image exists: vmware/kube-scheduler:v1.18.5-vmware.1
I1124 14:22:32.071842   30566 checks.go:838] image exists: vmware/kube-proxy:v1.18.5-vmware.1
I1124 14:22:32.139079   30566 checks.go:838] image exists: vmware/pause:3.2
I1124 14:22:32.204019   30566 checks.go:838] image exists: vmware/etcd:3.3.6.670
I1124 14:22:32.269153   30566 checks.go:838] image exists: vmware/coredns:1.2.6.11743048
I1124 14:22:32.269226   30566 controlplaneprepare.go:211] [download-certs] Skipping certs download
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1124 14:22:32.269257   30566 certs.go:38] creating PKI assets
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [vroteri.prem.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vra-k8s.local] and IPs [10.244.4.1 10.109.44.140]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I1124 14:22:34.339423   30566 certs.go:69] creating new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1124 14:22:36.170921   30566 manifests.go:91] [control-plane] getting StaticPodSpecs
W1124 14:22:36.171453   30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1124 14:22:36.171988   30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1124 14:22:36.172011   30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1124 14:22:36.172020   30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1124 14:22:36.194181   30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1124 14:22:36.194238   30566 manifests.go:91] [control-plane] getting StaticPodSpecs
W1124 14:22:36.194357   30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1124 14:22:36.194806   30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1124 14:22:36.194836   30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1124 14:22:36.194866   30566 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1124 14:22:36.194875   30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1124 14:22:36.194883   30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1124 14:22:36.196052   30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1124 14:22:36.196092   30566 manifests.go:91] [control-plane] getting StaticPodSpecs
W1124 14:22:36.196176   30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1124 14:22:36.196505   30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1124 14:22:36.197169   30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I1124 14:22:36.214097   30566 local.go:78] [etcd] Checking etcd cluster health
I1124 14:22:36.214140   30566 local.go:81] creating etcd client that connects to etcd pods
I1124 14:22:36.214188   30566 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I1124 14:22:36.242843   30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379
I1124 14:22:36.334793   30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379
I1124 14:22:36.334878   30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379
I1124 14:22:36.406046   30566 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1124 14:22:36.412385   30566 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "vroteri.prem.com" and status "Ready"
I1124 14:22:36.425067   30566 kubelet.go:159] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1124 14:22:41.776119   30566 cert_rotation.go:137] Starting client certificate rotation controller
I1124 14:22:41.781038   30566 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node
I1124 14:22:41.781103   30566 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "vroteri.prem.com" as an annotation
I1124 14:22:50.317665   30566 local.go:130] creating etcd client that connects to etcd pods
I1124 14:22:50.317743   30566 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I1124 14:22:50.348895   30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379
I1124 14:22:50.399038   30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379
I1124 14:22:50.399144   30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379
I1124 14:22:50.399168   30566 local.go:139] Adding etcd member: https://10.109.44.140:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I1124 14:22:50.480013   30566 local.go:145] Updated etcd member list: [{vroteri.prem.com https://10.109.44.140:2380} {vropri.prem.com https://10.109.44.138:2380} {vrosec.prem.com https://10.109.44.139:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I1124 14:22:50.482224   30566 etcd.go:509] [etcd] attempting to see if all cluster endpoints ([https://10.109.44.138:2379 https://10.109.44.139:2379 https://10.109.44.140:2379]) are available 1/8
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.


2020-11-24 14:22:52,499 [INFO] Enabling the current node to run workloads ..
node/vroteri.prem.com untainted

2020-11-24 14:22:52,945 [INFO] Enabling flannel on the current node ..
2020-11-24 14:22:53,175 [INFO] Updating hosts file for local endpoint ..
2020-11-24 14:22:53,192 [INFO] Sleeping for 120 seconds to allow infrastructure services to start.
2020-11-24 14:24:53,289 [INFO] Updating proxy-exclude settings to include this node FQDN (vroteri.prem.com) and IPv4 address (10.109.44.140)

I don't have a custom certificate in this environment, so I'll skip that step.

Once we execute the following command:

kubectl -n prelude get nodes

you should see all three nodes in a Ready state. Now let's run the script that starts all the services:

/opt/scripts/deploy.sh

At this point, monitor /var/log/deploy.log.
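For example, from the same SSH session on the primary node:

# follow the deployment log as the services come up
tail -f /var/log/deploy.log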


Monitor the pod deployments using:

watch -n 5 "kubectl -n prelude get pods -o wide"

Finally, all the pods are up on all the underlying nodes.



Now, browsing to the vRO load balancer FQDN, you will be presented with the landing page of this vRO instance.




Clicking on Control Center, you can configure the environment accordingly.

