
- VMware Aria Automation 8.18 to VCF Automation 9.0 Upgrade MindMap
Presenting a mind map of the VMware Aria Automation 8.18 to VCF Automation 9.0 upgrade.
- VCF Automation 9.0 Installation | Deep-Dive |
In today's fast-paced IT landscape, automation has become a critical component for streamlining operations and enhancing efficiency. One of the most powerful products available for the private cloud is VMware Cloud Foundation (VCF), which provides a cohesive infrastructure solution. This blog delves into the intricacies of VCF Automation installation, providing a comprehensive guide for IT professionals looking to simplify their deployment processes. We will explore the prerequisites, installation steps, and best practices to ensure a smooth and successful implementation of VCF Automation, enabling organizations to harness the full potential of their cloud environments.
Deployment Types
VCF Automation comes with three sizing profiles: Small, Medium, and Large. The table below describes:
- the number of nodes deployed for a specific deployment type
- the number of IPs needed for a specific deployment type
Let's now get into the deployment flow and see which parameters we need for a successful deployment. Log in to the VCF Operations UI and select Fleet Management -- Lifecycle -- VCF Management -- Overview, then click ADD on the Automation tile. This launches the Automation installation wizard, which offers three options:
- New Install: deploys a new VCF Automation component.
- Import: provides the ability to re-import Automation 9.0 after it has been removed as a management component from the VCF Operations fleet management appliance (for example, for troubleshooting) following its deployment.
- Import from legacy Fleet Management: lets customers import their existing VMware Aria Automation 8.18.x instances into the VCF Operations fleet management appliance so that they can be upgraded to VCF Automation 9.0.
The initial/first VCF Automation instance you deploy or import is classified as an INTEGRATED instance. Any subsequent VCF Automation instances added to VCF Operations fleet management, whether through deployment or the import-and-upgrade method, are classified as NON-INTEGRATED. Since we will be covering New Install in this blog, select that option and move forward by clicking NEXT. We shall select the MEDIUM deployment type.
In the next step, select the certificate to be used for the deployment:
- If you have a pre-created certificate, select it.
- If you don't have a certificate, click the "+" sign to generate one. This will be a VCF Operations fleet management Locker CA-based certificate.
- If you have a third-party certificate authorized by your organization, choose Import Certificate and import it.
Unlike VMware Aria Automation 8.x, where you need 1 VMware Aria Automation Load Balancer FQDN and 3 VMware Aria Automation node FQDNs, when deploying VCF Automation 9.0 you need just 1 VCF Automation FQDN. This VCF Automation FQDN is the only input needed while generating the certificate.
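A quick optional sanity check of my own (not part of the wizard, and the file name below is just an example): if you import a third-party certificate, you can confirm that its Subject Alternative Name covers the single VCF Automation FQDN with standard openssl tooling before uploading it.
# Inspect the SAN entries of the certificate you plan to import (example file name)
openssl x509 -in vcf-automation.pem -noout -text | grep -A1 "Subject Alternative Name"
# Expect the VCF Automation FQDN (the only FQDN required in 9.0) to be listed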
Select the certificate and click NEXT for further inputs on the Infrastructure tab:
- Select vCenter Server: the management domain where VCF Automation will be deployed. If the vCenter Server where you would like to deploy VCF Automation is not listed, check whether it has been added as a deployment target under VCF Operations - Administration - Integrations - Accounts - vCenter or VMware Cloud Foundation, or under Fleet Management - Lifecycle - VCF Management - Settings - Deployment Target.
- Select Cluster: where you would like to host your nodes.
- Select Folder: placeholder for placing the VCF Automation nodes.
- Select Resource Pool: placeholder for placing the VCF Automation nodes.
- Select Network: the network the VCF Automation nodes will be connected to.
- Select Datastore: the datastore the VCF Automation nodes will be deployed to.
Once done with the Infrastructure tab, proceed to the Network tab:
- Domain Name: enter the domain name of the organization.
- Domain Search Path: enter the domain search path of the organization.
- DNS Servers: ADD NEW SERVER adds a new DNS server which you can then select; EDIT SERVER SELECTION lets you pick the DNS server to use for this deployment.
- Time Sync Mode: Use NTP Server (ADD NEW SERVER adds a new NTP server which you can then select; EDIT SERVER SELECTION lets you pick the NTP server to use for this deployment) or Use Host Time (leverage the NTP configuration of the ESXi host where the appliance is deployed).
- IPv4 Details: enter the default IPv4 gateway and the IPv4 netmask used for the deployment.
Click NEXT to enter the component properties.
Component Properties
- FQDN: enter the VCF Automation FQDN.
- Certificate: pre-populated, as we selected it on the initial screen.
- Component Password: create a 15-character password. If the password has not been created yet, create it using "ADD PASSWORD" in the top right corner of the screen, then select it.
Cluster Virtual IP
- FQDN: enter the VCF Automation FQDN. Yes, you have entered this before, but you need to enter it again.
- Controller Type:
  - Internal Load Balancer: when using the internal load balancer, the VCF Automation FQDN should point to the Primary VIP.
  - Others: choose this if you want to leverage an external load balancer such as F5, NetScaler, NSX, etc. When this option is used, the VCF Automation FQDN should point to the VIRTUAL SERVER IP of the load balancer, and the Primary VIP and additional VIPs collected as inputs in the subsequent steps should be part of its POOL.
Components
- Node Prefix: specify a unique prefix for the VCF Automation nodes. Ensure the prefix is unique within the VCF instance fleet to avoid conflicts and enable accurate VM backup identification. This is used as the prefix for the VCF Automation nodes we deploy; a suffix is auto-generated during deployment, and this behavior cannot be changed.
- Primary VIP: the primary virtual IP address of VCF Automation, used for accessing the services. As described above, if using the internal load balancer the VCF Automation FQDN should point to this Primary VIP; if using an external load balancer, this Primary VIP should be part of the pool on the load balancer.
- Internal Cluster CIDR: the IP address range used for internal network communication within the cluster. Choose a range that does not conflict with any existing networks.
Note: once a cluster CIDR is selected and the component is deployed, it cannot be changed.
- Additional VIPs: you can add up to 2 additional VIPs for VCF Automation. This is not mandatory for greenfield installs. Click ADD ADDITIONAL VIP POOL to add IP addresses one after another.
- Cluster Node IP Pool: a node IP pool is a range of IP addresses allocated for the nodes being deployed to host VCF Automation, from which they receive their IP addresses. For Medium and Large deployment types a minimum of 4 IPs is needed; for the Small deployment type a minimum of 2 IPs is needed. It accepts a CIDR-based format, individual IP addresses, or a range.
Click NEXT to run a PRECHECK (an optional DNS sanity check is sketched at the end of this section), and proceed only when the PRECHECK is successful. Once prechecks are successful, review the summary and submit the deployment request. As stated on the summary page, parallel deployments of VCF Automation and Identity Broker are not supported; deploy them one after another. Once submitted, the deployment procedure begins. A deep dive into the deployment process will be blogged soon.
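The optional DNS check mentioned above is my own addition, not part of the product precheck; the host name and IP below are placeholders. When using the internal load balancer, the VCF Automation FQDN should resolve to the Primary VIP, and reverse lookup should return the FQDN.
# Optional pre-deployment DNS sanity check (example name and IP, not from the post)
nslookup vcfauto.example.com      # should return the Primary VIP when using the internal load balancer
nslookup 192.168.10.50            # reverse lookup should return the VCF Automation FQDN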
- Depot & Binary Management in VCF Operations fleet management appliance or OPS LCM 9.0
Background
We are accustomed to downloading binaries and copying them into VMware Aria Suite Lifecycle, followed by binary mapping, which lets us use those binaries for installing, upgrading, or patching the VMware Aria products in version 8.x. In VCF 9.0, things have changed: VMware Aria Suite Lifecycle is no longer included. Its responsibilities now fall under the VCF Operations fleet management appliance, also known as OPS LCM. OPS LCM allows us to set up a depot for downloading the relevant binaries, and depot configuration is a prerequisite before downloading any binaries, so let's understand the process thoroughly.
Extending Storage for Binaries
The /data partition is the largest in the VCF Operations fleet management appliance. This is where customers upload bundles in the Offline Depot → Local (dark site) use case. If storage is running low, you can extend the /data partition (a quick way to check current usage is sketched just after this section):
- Browse to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings.
- Click EXTEND.
- Enter the vCenter Server FQDN.
- Select the vCenter Server password.
- Enter the desired size to which you want to extend the partition.
- Click EXTEND to increase the size. This adds additional storage to the /data partition.
Depot Configuration
Starting with VCF 9.0, VCF Operations fleet management introduces support for depot configuration. A depot serves as a source for downloading installation, upgrade, and patch binaries. There are two types of depot configurations:
- Online: connects directly to the online VCF depot.
- Offline: for air-gapped and dark site environments, with two methods: Web Server (connects to an offline web server hosting the OBTU bundle) and Local (requires copying the tar bundle downloaded via OBTU to the /data partition of the VCF Operations fleet management appliance).
Only one depot connection can be ACTIVE at a time. If a connection is already ACTIVE, the option to switch the depot to ONLINE or OFFLINE is unavailable until the current depot connection is disconnected.
Configure Online Depot
To enable the Online Depot, generate a download token from support.broadcom.com and use it during the setup process. This ensures entitlement to download the required binaries.
Fetching the download token:
- Log in to support.broadcom.com using your credentials. After a successful login, select VMware Cloud Foundation.
- On the bottom right side of the screen, select "Generate Download Token" under "Quick Links".
- Pick the correct site and click "Generate".
Now that we have the token, let's set up the ONLINE depot:
- Log in to the VCF Operations 9.0 console.
- Browse to VCF Operations → Fleet Management → Lifecycle → VCF Management → Depot Configuration.
- Click Online Depot → Configure.
- Click the "+" on the Download Token field to add the token to the locker: Password Alias (an alias to identify the token), Password (paste the token generated from the Broadcom Support Portal), Confirm Password (re-enter the token), Password Description (enter a description), User Name (not needed in this scenario; can be left blank). Click ADD so the token is added to the locker.
- Click "Select Download Token" and select the token we just added to the locker.
- Accept the certificate and click "OK". The Online Depot connection is now established.
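Related to the storage note above, a small optional check of my own (not from the product documentation): before uploading large bundles you can confirm how much space is left on /data from an SSH session on the fleet management appliance.
# Check free space on the /data partition before copying large bundles
df -h /data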
Offline Depot Configuration
Offline depots simplify artifact distribution for "dark site" or "air-gapped" customers, reducing the steps needed to manage multiple VCF instances. To set up the depot correctly, follow the procedure outlined below. Leverage the VCF Download Tool and set up an offline depot structure where the binaries can be downloaded. This is not new for VCF customers; it was called the Offline Bundle Transfer Utility (OBTU) in the VCF 5.x days. To set up the offline depot, a dedicated virtual machine must be prepared; refer to the VCF Download Tool documentation for more information.
Note: both offline depot methods expect only a bundle from OBTU. This is a tar bundle with metadata included. Individual binary mapping, as it used to happen in VMware Aria Suite Lifecycle 8.x, no longer works.
Once the VCF Download Tool is installed and configured, we can download the binaries using the following commands.
Explanation of the command:
- vcf-download-tool.bat: invokes the VCF Download Tool batch script (for Windows environments).
- binaries download: the action to download binary files.
- -d /Users/Arun/local-depot-config/: the target local directory where binaries and metadata will be stored.
- --depot-download-token-file downloadtoken.txt: the file containing the secure token required to authenticate and authorize the download from the Broadcom depot.
- --vcf-version="9.0.0.0": specifies the exact VCF version to download (in this case, VCF 9.0.0.0).
- --lifecycle-managed-by=VRSLCM: indicates that the binaries are intended to be managed by the VCF Operations fleet management appliance (OPS LCM or VRSLCM).
- --type=INSTALL: specifies the type of binaries (in this case, for a fresh install rather than an upgrade or patch).
Download all VCF management component binaries of type "INSTALL":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=INSTALL
Download all VCF management component binaries of type "UPGRADE":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=UPGRADE
Download a specific VCF management component binary of type "INSTALL":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=INSTALL --component=VRLI
Download a specific VCF management component binary of type "UPGRADE":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=UPGRADE --component=VRNI
When the command execution finishes, it creates a bundle in the specified location. This is the bundle to use in the offline depot use cases.
Web Server Method
The offline depot web server method is catered to air-gapped use cases. Keep the web server URL handy, then navigate to VCF Operations → Fleet Management → Lifecycle → VCF Management → Depot Configuration → Offline Depot → Web Server and set it to reference the offline depot. The offline depot with a web server-based connection is now set up. The username and password provided here are for web server authentication to download the binaries.
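Purely as an illustration (my own lab shortcut, not a recommendation from the product documentation): for a quick test of the web server method you could temporarily serve the directory produced by the VCF Download Tool with Python's built-in HTTP server and point the Offline Depot → Web Server configuration at that URL. A production dark site would use a hardened web server with proper authentication instead.
# Serve the downloaded depot directory over HTTP on port 8080 (lab/test only, no authentication)
python3 -m http.server 8080 --directory /Users/Arun/local-depot-config/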
Local Method
The Local method is used in dark site infrastructures: secure, isolated network environments with no direct internet access, typically designed for sensitive or high-security operations. To support these environments, the Offline Depot → Local option can be used to map bundles and binaries, enabling upgrades, installations, and patching of components. Transfer the bundle we recently downloaded with the VCF Download Tool into the /data partition of the VCF Operations fleet management appliance, as this is the largest partition and is specifically designed to store the binaries (a transfer sketch follows at the end of this post). If there is a need to extend the partition size, the steps to extend it have been shared above. Once the bundle has been copied over, as shown in the screenshot below, point the configuration to the location where it was copied and click OK to configure the Offline Depot with the local path.
We have now covered how to configure these depots. Regardless of the specific depot configuration, once it is set up correctly the VCF Operations fleet management appliance reads the metadata and populates the corresponding binaries under Binary Management for download. Select the required binary and click "DOWNLOAD" so that the binary is staged into the VCF Operations fleet management content repository and can be used during install, patching, or upgrade. The methods you used in VMware Aria Suite Lifecycle no longer apply in VCF 9.0, so stop copying ISO, PAK, OVA, and .patch files directly and start learning the new methods.
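The transfer sketch referenced above, for what it's worth (my own example; the appliance FQDN and bundle file name are placeholders):
# Copy the OBTU tar bundle to the /data partition of the fleet management appliance
# (appliance FQDN and bundle file name below are examples)
scp vcf-install-bundle.tar root@opslcm.example.com:/data/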
- VMware Aria Suite Lifecycle 8.18 Patch 2 Released
VMware Aria Suite Lifecycle 8.18 Patch 2 is live. It is only needed if you are upgrading to VCF 9.0: the patch provides the capability to upgrade VMware Aria Operations 8.18 to 9.0 and to install the new VCF Operations fleet management appliance 9.0. VMware Aria Suite Lifecycle 8.18 Patch 2 does not contain any other features or bug fixes, so if you have no immediate plans to upgrade to 9.0, there is no need to install it. Release Notes: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/release-notes/vmware-aria-suite-lifecycle-818-patch-2-release-notes.html
- VMware-Aria-Operations-8.18-HF6 Released
VMware Aria Operations 8.18 HF 6 is now released. Click here for Release Notes, click here for Solution Details, and click here to download the VMware Aria Suite Lifecycle wrapped patches which correspond to HF 6.
- VMware-Aria-Operations-8.18-HF6: VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.0.
- vrlcm-vrops-8.18.0_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.0.
- vrlcm-vrops-8.18.1_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.1.
- vrlcm-vrops-8.18.2_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.2.
- vrlcm-vrops-8.18.3_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.3.
- New VMware Aria Operations 8.18.x patch released
Payload
VMware Aria Operations 8.18 Hot Fix 5 is a public hot fix that addresses the following issues:
- Product managed agent installation is failing
- CP status became unhealthy after removing nodes from the cluster
- "Network|Total Transmitted Packets Dropped" metric for VMs is missing
- JS error in the creation flow of Payload Templates for the webhook plugin
- "The session is not authenticated" issue while calling SPBM APIs
- [Diagnostics MP] Add new VMSA rules: VMware ESXi CVE-2025-22224, CVE-2025-22225, CVE-2025-22226, to VMSA-2024-0004
- [Diagnostics MP] Update to VMSA rules: VMware vCenter Server CVE-2024-38812, CVE-2024-38813, to VMSA-2024-0019
- [Diagnostics MP] Add new VMSA rules: VMware Aria Automation CVE-2025-22215, to VMSA-2025-0001
If on version / download link / file name:
- VMware Aria Operations 8.18.0: https://support.broadcom.com/web/ecx/solutiondetails?patchId=5818, vrlcm-vrops-8.18.0-HF5.patch
- VMware Aria Operations 8.18.1: https://support.broadcom.com/web/ecx/solutiondetails?patchId=5827, vrlcm-vrops-8.18.1-HF5.patch
- VMware Aria Operations 8.18.2: https://support.broadcom.com/web/ecx/solutiondetails?patchId=5828, vrlcm-vrops-8.18.2-HF5.patch
- VMware Aria Operations 8.18.3: https://support.broadcom.com/web/ecx/solutiondetails?patchId=5829, vrlcm-vrops-8.18.3-HF5.patch
Implementation Details
NOTE: BEFORE installing the patch, ensure a snapshot of the cluster is taken. As my environment was on VMware Aria Operations 8.18.0, I downloaded vrlcm-vrops-8.18.0-HF5.patch using https://support.broadcom.com/web/ecx/solutiondetails?patchId=5818. Now let's check how to get this installed. First, download the patch (HF) and copy it into the VMware Aria Suite Lifecycle /data partition. Go to Lifecycle Operations -- Binary Mapping -- Patch Binary -- Add, enter the base location where the patch is present, click DISCOVER, select the discovered patch, and click ADD. Here's the version info and the build number before I started the patch install. Now go to the Day-2 operation called Patch, select the patch to be installed, review, and submit it. Click INSTALL for the patch install flow to begin. After some time, the patching is complete. Before the patch the build number was 24025145; after applying HF5 the build number is 8.18.3.24663033. Here's the environment section showing that it's on the 8.18.3 patch, with the build number clearly shown in the properties. No matter which version you're coming from, when the patch is applied it goes to the 8.18.3 patch build, which is the latest one. Screenshot from the VMware Aria Operations UI itself. This concludes the patching blog.
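As a footnote to the copy step above (my own sketch; the Suite Lifecycle FQDN is a placeholder, the file name is the one used in this walkthrough):
# Copy the hot fix into the /data partition of VMware Aria Suite Lifecycle before binary mapping
scp vrlcm-vrops-8.18.0-HF5.patch root@vrslcm.example.com:/data/
# Then use /data as the base location when clicking DISCOVER under Binary Mapping -- Patch Binary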
- Configuring a vRO 8.x Cluster
In this blog, I shall share the procedure to create a vRealize Orchestrator cluster on version 8.2. The recommended number of nodes for a clustered vRealize Orchestrator environment is three; in this lab they are vropri.prem.com, vrosec.prem.com, and vroteri.prem.com. Once the appliances are deployed, ensure that all pods are in a RUNNING state before moving on to the next step of configuring clustering.
Procedure
In my case there isn't any working load balancer, so I am using a workaround for now: I've created a CNAME which points to the primary node. Ideally, you're expected to configure the load balancer as written in the Orchestrator Load-Balancing Guide. All the nodes are now in a ready state, and I've taken a snapshot of all three nodes before making any changes; these snapshots are taken without memory while the appliances are powered on.
Configure the primary node:
- Log in to the vRealize Orchestrator Appliance of the primary node over SSH as root.
- To configure the cluster load balancer server, run the vracli load-balancer set load_balancer_FQDN command, setting the host address of the connected load balancer server.
- Configure the authentication provider. See Configuring a Standalone vRealize Orchestrator Server.
Join the secondary nodes to the primary node:
- Log in to the vRealize Orchestrator Appliance of the secondary node over SSH as root.
- To join the secondary node to the primary node, run the vracli cluster join primary_node_hostname_or_IP command and enter the root password of the primary node.
- Repeat the procedure for the other secondary nodes.
The following are the events which happen when you execute the cluster join command; the snippet below is from one of the nodes, vroteri.prem.com. root@vroteri [ ~ ]# vracli cluster join vropri.prem.com 2020-11-24 14:13:19,085 [INFO] Resetting the current node .. 2020-11-24 14:18:09,306 [INFO] Getting join bundle from remote endpoint .. Password: 2020-11-24 14:22:30,362 [INFO] Parsing join bundle 2020-11-24 14:22:30,390 [INFO] Deleting data from previous use on this node .. 2020-11-24 14:22:31,006 [INFO] Creating missing data directories on this node .. 2020-11-24 14:22:31,082 [INFO] Allowing other nodes to access this node .. 2020-11-24 14:22:31,101 [INFO] Updating hosts file for remote endpoint .. 2020-11-24 14:22:31,114 [INFO] Executing cluster join .. I1124 14:22:31.221791 30566 join.go:357] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery I1124 14:22:31.224053 30566 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName I1124 14:22:31.224103 30566 join.go:375] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress I1124 14:22:31.224243 30566 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock I1124 14:22:31.224550 30566 interface.go:400] Looking for default routes with IPv4 addresses I1124 14:22:31.224592 30566 interface.go:405] Default route transits interface "eth0" I1124 14:22:31.224842 30566 interface.go:208] Interface eth0 is up I1124 14:22:31.224968 30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20]. I1124 14:22:31.225031 30566 interface.go:223] Checking addr 10.109.44.140/20. I1124 14:22:31.225068 30566 interface.go:230] IP found 10.109.44.140 I1124 14:22:31.225117 30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0". 
I1124 14:22:31.225151 30566 interface.go:411] Found active IP 10.109.44.140 [preflight] Running pre-flight checks I1124 14:22:31.225361 30566 preflight.go:90] [preflight] Running general checks I1124 14:22:31.225501 30566 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests I1124 14:22:31.225580 30566 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf I1124 14:22:31.225626 30566 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf I1124 14:22:31.225673 30566 checks.go:102] validating the container runtime I1124 14:22:31.342648 30566 checks.go:128] validating if the service is enabled and active [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ I1124 14:22:31.532103 30566 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I1124 14:22:31.532217 30566 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward I1124 14:22:31.532283 30566 checks.go:649] validating whether swap is enabled or not I1124 14:22:31.532392 30566 checks.go:376] validating the presence of executable conntrack I1124 14:22:31.532532 30566 checks.go:376] validating the presence of executable ip I1124 14:22:31.532583 30566 checks.go:376] validating the presence of executable iptables I1124 14:22:31.532650 30566 checks.go:376] validating the presence of executable mount I1124 14:22:31.532687 30566 checks.go:376] validating the presence of executable nsenter I1124 14:22:31.532741 30566 checks.go:376] validating the presence of executable ebtables I1124 14:22:31.532847 30566 checks.go:376] validating the presence of executable ethtool I1124 14:22:31.532959 30566 checks.go:376] validating the presence of executable socat I1124 14:22:31.533016 30566 checks.go:376] validating the presence of executable tc I1124 14:22:31.533088 30566 checks.go:376] validating the presence of executable touch I1124 14:22:31.533353 30566 checks.go:520] running all checks I1124 14:22:31.631717 30566 checks.go:406] checking whether the given node name is reachable using net.LookupHost I1124 14:22:31.632132 30566 checks.go:618] validating kubelet version I1124 14:22:31.724374 30566 checks.go:128] validating if the service is enabled and active I1124 14:22:31.737672 30566 checks.go:201] validating availability of port 10250 I1124 14:22:31.738591 30566 checks.go:432] validating if the connectivity type is via proxy or direct I1124 14:22:31.738713 30566 join.go:455] [preflight] Fetching init configuration I1124 14:22:31.738727 30566 join.go:493] [preflight] Retrieving KubeConfig objects [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' I1124 14:22:31.798981 30566 interface.go:400] Looking for default routes with IPv4 addresses I1124 14:22:31.799061 30566 interface.go:405] Default route transits interface "eth0" I1124 14:22:31.799375 30566 interface.go:208] Interface eth0 is up I1124 14:22:31.799490 30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20]. I1124 14:22:31.799542 30566 interface.go:223] Checking addr 10.109.44.140/20. I1124 14:22:31.799591 30566 interface.go:230] IP found 10.109.44.140 I1124 14:22:31.799656 30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0". 
I1124 14:22:31.799699 30566 interface.go:411] Found active IP 10.109.44.140 I1124 14:22:31.799798 30566 preflight.go:101] [preflight] Running configuration dependant checks [preflight] Running pre-flight checks before initializing the new control plane instance I1124 14:22:31.801206 30566 checks.go:577] validating Kubernetes and kubeadm version I1124 14:22:31.801271 30566 checks.go:166] validating if the firewall is enabled and active I1124 14:22:31.813148 30566 checks.go:201] validating availability of port 6443 I1124 14:22:31.813292 30566 checks.go:201] validating availability of port 10259 I1124 14:22:31.813358 30566 checks.go:201] validating availability of port 10257 I1124 14:22:31.813430 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I1124 14:22:31.813486 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I1124 14:22:31.813516 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I1124 14:22:31.813541 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I1124 14:22:31.813566 30566 checks.go:432] validating if the connectivity type is via proxy or direct I1124 14:22:31.813609 30566 checks.go:471] validating http connectivity to first IP address in the CIDR I1124 14:22:31.813653 30566 checks.go:471] validating http connectivity to first IP address in the CIDR I1124 14:22:31.813695 30566 checks.go:201] validating availability of port 2379 I1124 14:22:31.813833 30566 checks.go:201] validating availability of port 2380 I1124 14:22:31.813917 30566 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I1124 14:22:31.882093 30566 checks.go:838] image exists: vmware/kube-apiserver:v1.18.5-vmware.1 I1124 14:22:31.945357 30566 checks.go:838] image exists: vmware/kube-controller-manager:v1.18.5-vmware.1 I1124 14:22:32.007510 30566 checks.go:838] image exists: vmware/kube-scheduler:v1.18.5-vmware.1 I1124 14:22:32.071842 30566 checks.go:838] image exists: vmware/kube-proxy:v1.18.5-vmware.1 I1124 14:22:32.139079 30566 checks.go:838] image exists: vmware/pause:3.2 I1124 14:22:32.204019 30566 checks.go:838] image exists: vmware/etcd:3.3.6.670 I1124 14:22:32.269153 30566 checks.go:838] image exists: vmware/coredns:1.2.6.11743048 I1124 14:22:32.269226 30566 controlplaneprepare.go:211] [download-certs] Skipping certs download [certs] Using certificateDir folder "/etc/kubernetes/pki" I1124 14:22:32.269257 30566 certs.go:38] creating PKI assets [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for 
DNS names [vroteri.prem.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vra-k8s.local] and IPs [10.244.4.1 10.109.44.140] [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" I1124 14:22:34.339423 30566 certs.go:69] creating new public/private key files for signing service account users [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" I1124 14:22:36.170921 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.171453 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.171988 30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver" I1124 14:22:36.172011 30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver" I1124 14:22:36.172020 30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver" I1124 14:22:36.194181 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml" [control-plane] Creating static Pod manifest for "kube-controller-manager" I1124 14:22:36.194238 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.194357 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.194806 30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager" I1124 14:22:36.194836 30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager" I1124 14:22:36.194866 30566 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager" I1124 14:22:36.194875 30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager" I1124 14:22:36.194883 30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager" I1124 14:22:36.196052 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [control-plane] Creating static Pod manifest for "kube-scheduler" I1124 14:22:36.196092 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.196176 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.196505 30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" I1124 14:22:36.197169 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" [check-etcd] Checking that the etcd cluster is healthy I1124 14:22:36.214097 30566 local.go:78] [etcd] Checking etcd cluster health I1124 14:22:36.214140 30566 local.go:81] creating etcd client that connects to etcd pods I1124 14:22:36.214188 30566 etcd.go:178] retrieving etcd endpoints from 
"kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods I1124 14:22:36.242843 30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.334793 30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.334878 30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.406046 30566 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf I1124 14:22:36.412385 30566 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "vroteri.prem.com" and status "Ready" I1124 14:22:36.425067 30566 kubelet.go:159] [kubelet-start] Stopping the kubelet [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... I1124 14:22:41.776119 30566 cert_rotation.go:137] Starting client certificate rotation controller I1124 14:22:41.781038 30566 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node I1124 14:22:41.781103 30566 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "vroteri.prem.com" as an annotation I1124 14:22:50.317665 30566 local.go:130] creating etcd client that connects to etcd pods I1124 14:22:50.317743 30566 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods I1124 14:22:50.348895 30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399038 30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399144 30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399168 30566 local.go:139] Adding etcd member: https://10.109.44.140:2380 [etcd] Announced new etcd member joining to the existing etcd cluster I1124 14:22:50.480013 30566 local.go:145] Updated etcd member list: [{vroteri.prem.com https://10.109.44.140:2380} {vropri.prem.com https://10.109.44.138:2380} {vrosec.prem.com https://10.109.44.139:2380}] [etcd] Creating static Pod manifest for "etcd" [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s I1124 14:22:50.482224 30566 etcd.go:509] [etcd] attempting to see if all cluster endpoints ([https://10.109.44.138:2379 https://10.109.44.139:2379 https://10.109.44.140:2379]) are available 1/8 [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. 
* Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster. 2020-11-24 14:22:52,499 [INFO] Enabling the current node to run workloads .. node/vroteri.prem.com untainted 2020-11-24 14:22:52,945 [INFO] Enabling flannel on the current node .. 2020-11-24 14:22:53,175 [INFO] Updating hosts file for local endpoint .. 2020-11-24 14:22:53,192 [INFO] Sleeping for 120 seconds to allow infrastructure services to start. 2020-11-24 14:24:53,289 [INFO] Updating proxy-exclude settings to include this node FQDN (vroteri.prem.com) and IPv4 address (10.109.44.140)
I don't have a custom certificate in this environment, so I'd skip that step. Once we execute kubectl -n prelude get nodes, you will see all three nodes in a Ready state. Now let's run the script which starts all the services: /opt/scripts/deploy.sh. At this point, monitor /var/log/deploy.log, and monitor pod deployments using watch -n 5 "kubectl -n prelude get pods -o wide". Finally, all the pods are up on all the underlying nodes. Heading to the vRO load balancer FQDN, you will be presented with the landing page of this vRO; clicking on the Control Center lets you configure the environment accordingly.
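For reference, here is a condensed sketch of the commands used above. The load balancer FQDN is a placeholder (in this lab a CNAME to the primary node); treat this as a summary, not a replacement for the official procedure.
# On the primary node (vropri.prem.com): point vRO at the load balancer FQDN
vracli load-balancer set vrolb.prem.com        # vrolb.prem.com is a hypothetical LB/CNAME FQDN
# On each secondary node: join it to the primary (prompts for the primary's root password)
vracli cluster join vropri.prem.com
# Verify all three nodes are Ready, then start services and watch the pods come up
kubectl -n prelude get nodes
/opt/scripts/deploy.sh
watch -n 5 "kubectl -n prelude get pods -o wide"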
- VMware Cloud Foundation 5.2.1 | What's New & Product Support Packs
VMware Cloud Foundation 5.2.1 has just been released. Component, what's new, and relevant links:
- SDDC Manager 5.2.1: Reduced Downtime Upgrade (RDU) support for vCenter; NSX in-place upgrades for clusters that use vSphere Lifecycle Manager baselines; support for vSphere Lifecycle Manager baseline and vSphere Lifecycle Manager image-based clusters in the same workload domain; support for the "License Now" option for vSAN add-on licenses based on capacity per tebibyte (TiB); set up VMware Private AI Foundation infrastructure from the vSphere Client; manage all SDDC certificates and passwords from a single UI. Release Notes VCF 5.2.1, Build Number 24307856.
- vCenter Server 8.0u3c: resolves an issue with the vSphere Client becoming unresponsive when a session stays idle for more than 50 minutes; built on top of VC 8.0u3b, which delivered critical security fixes resolving CVE-2024-38812 and CVE-2024-38813, plus bug fixes. Release Notes VC 8.0U3c, Build Number 24305161.
- ESXi 8.0u3b: bug and security fixes; DPU/SmartNIC: VMware vSphere Distributed Services Engine support with NVIDIA BlueField-3 DPUs; design improvements for cloud-init guestInfo variables. Release Notes ESXi 8.0U3b, Build Number 24280767.
- NSX 4.2.1: enhancements that help provide high availability for networking, including TEP Groups, monitoring for dual-DPU deployments, and improved alarms on the NSX Edge. As part of the multi-tenancy feature set, NSX Projects are now exposed as folders in vCenter. NSX now supports VRF configuration at the Global Manager level, giving customers a single place to configure stretched T0 gateways. NSX in-place upgrades can now be invoked on non-vLCM clusters in a VCF environment, enabling faster upgrades of NSX-enabled hypervisors without having to migrate workloads. Release Notes NSX 4.2.1, Build Number 24304122.
- VMware Aria Suite Lifecycle 8.18 with Product Support Pack 3, which adds support for VMware Aria Automation 8.18.1, VMware Aria Orchestrator 8.18.1, and VMware Aria Operations for Networks 6.14.0. Release Notes VMware Aria Suite Lifecycle 8.18 Product Support Pack, Build Number 24286744.
- VMware Aria Automation 8.18.1: NSX network and security group automation with discovered resources from NSX Projects and VPCs; onboarding of vSphere namespaces for use in the Cloud Consumption Interface; support for 64 disks on a Paravirtual SCSI (PVSCSI) controller; the DCGM Exporter is now included by default in the Deep Learning VM. Release Notes vRA 8.18.1, Build Number 24280767.
- VMware Aria Operations 8.18.1: resolves important security and functionality issues.
Release Notes Aria Ops 8.18.1, Build Number 24267784.
- VMware Aria Operations for Networks 6.14: plan blueprints can be created for workloads based on various scoping criteria, and those workloads can be migrated through VMware HCX; NSX Assessment Dashboard improvements; flow-based application discovery with CSV upload. Release Notes Aria Ops 6.14, Build Number 1725688792.
A set of Product Support Packs has been released so that VMware Aria Suite Lifecycle can upgrade to and support the latest products, such as VMware Aria Automation 8.18.1, VMware Aria Orchestrator 8.18.1, and VMware Aria Operations for Networks 6.14.0. Product Support Pack, download link, and release notes:
- vRealize Suite Lifecycle Manager 8.10 Product Support Pack 20: 8.10 PSPACK 20, 8.10 PSPACK RN
- VMware Aria Suite Lifecycle 8.12.0 PSPACK 12: 8.12 PSPACK 12, 8.12 PSPACK RN
- VMware Aria Suite Lifecycle 8.14.0 PSPACK 10: 8.14 PSPACK 10, 8.14 PSPACK RN
- VMware Aria Suite Lifecycle 8.16.0 PSPACK 6: 8.16.0 PSPACK 6, 8.16 PSPACK RN
- VMware Aria Suite Lifecycle 8.18.0 PSPACK 3: 8.18 PSPACK 3, 8.18 PSPACK RN
- VMware Aria Suite Lifecycle 8.18.0 PSPACK 4 is out
VMware Aria Suite Lifecycle 8.18.0 Product Support Pack 4 adds support for: VMware Aria Operations for Logs 8.18.1 and VMware Aria Operations 8.18.2. Release Notes: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/release-notes/Chunk708728041.html#Chunk708728041 Download Link: https://support.broadcom.com/web/ecx/solutiondetails?patchId=5630
- VMware Aria Suite Lifecycle Documentation finds a New Home
The VMware Aria Suite Lifecycle documentation has a new home! Previously hosted on VMware Docs, it has now been relocated to Broadcom's TechDocs platform. Here is a collection of links that might be helpful:
- Installation and Upgrade Guide: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/vmware-aria-suite-lifecycle-installation-upgrade-and-management-8-18.html
- Security Hardening Guide: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/vrealize-suite-lifecycle-manager-security-hardening-guide-8-18.html
- Release Notes (new releases, patches, and product support packs): https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/release-notes.html
- API Programming Guide: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/vmware-aria-suite-lifecycle-programming-guide-8-18.html
- Documentation Legal Notice: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/documentation-legal-notice-english-public.html
- Authenticating via AD users and executing vRSLCM API's -- Detailed Procedure
This blog is available in PDF format too; download the PDF attached below to consume it.
Demo
Pre-Requisites
- vIDM load balancer URL (if clustered) or vIDM FQDN (single node)
- vIDM local account and its password
- AD user name, AD password, and domain
Procedure
Phase 1
As a first step, fetch the session token. This can be done using the API below.
Method: POST
URL: {{idmurl}}/SAAS/API/1.0/REST/auth/system/login
Payload: { "username": "{{idmlocalusername}}", "password": "{{idmlocalpassword}}", "issueToken": "true" }
Response: false eyJ0eXAiOiJKV1Q****9XsskFqilcg
Note: the sessionToken in the above response has been trimmed. Copy this session token into a variable called vIDMSessionToken in Postman.
Phase 2
As the next step, we will create an OAuth2 client by calling an API. This definition enables a service or its users to authenticate to VMware Identity Manager using the OAuth2 protocol. In short, the client is created by an admin as a trusted entity, and APIs can then use client:secret to get a token and authenticate.
Method: POST
URL: {{idmurl}}/SAAS/jersey/manager/api/oauth2clients ({{idmurl}} is a variable for the vIDM FQDN)
Payload:
{ "clientId":"admintesttwo", "secret":"Vk13YXJlMTIzIQ==", "scope":"user admin", "authGrantTypes":"password", "tokenType":"Bearer", "tokenLength":23, "accessTokenTTL":36000, "refreshTokenTTL":432000 }
clientId is the name given to the client being created; it can be any name. The secret is the base64-encoded password you would like to assign to this client.
Response:
{ "clientId": "admintesttwo", "secret": "Vk13YXJlMTIzIQ==", "scope": "user admin", "authGrantTypes": "password", "redirectUri": null, "tokenType": "Bearer", "tokenLength": 32, "accessTokenTTL": 36000, "refreshTokenTTL": 432000, "refreshTokenIdleTTL": null, "rememberAs": null, "resourceUuid": null, "displayUserGrant": true, "internalSystemClient": false, "activationToken": null, "strData": null, "inheritanceAllowed": false, "returnFailureResponse": false, "_links": { "self": { "href": "/SAAS/jersey/manager/api/oauth2clients/admintesttwo" } } }
If this API call is successful, a 201 Created response is returned.
Headers:
- Content-Type: application/vnd.vmware.horizon.manager.oauth2client+json
- Accept: application/vnd.vmware.horizon.manager.oauth2client+json
If we log in to vIDM and, under the Administration Console, click Catalog and then Settings, browsing to Remote App Access shows the client ID; clicking on it provides more details about the OAuth2 client created.
Phase 3
Once the client ID is created, we need to fetch the token for AD authentication.
Method: POST
URL: {{idmurl}}/SAAS/auth/oauthtoken?grant_type=password
Body (form data): {{username}}, {{password}}, {{domain}}, where {{username}} is the AD username, {{password}} is that user's password, and {{domain}} is the domain the user belongs to.
Authorization: Basic {{clientid}}:{{secret}} (the client ID and the base64-encoded secret we created in the previous step)
Content-Type: multipart/form-data
Use this call to fetch the access token; I'd copy the access token into a variable again and call it adusertoken. Now let's execute a Get Environment API call to fetch details. These are vRSLCM's APIs.
Method: GET
URL: {{lcmurl}}/lcm/lcops/api/v2/environments/{{geenvid}}
Authorization: Bearer Token {{adusertoken}} ({{adusertoken}} is the token captured above)
Response (truncated):
{ "vmid": "90b3269b-9338-4cab-9b3c-f744a2a1e13b", "transactionId": null, "tenant": "default", "environmentName": "globalenvironment", "environmentDescription": "", "environmentId": "globalenvironment", "state": null, "status": "COMPLETED", "environmentData": * * * "{\"environmentId\":\"globalenvironment\",\"environmentName\":\"globalenvironment\",\"environmentDescription\":null,\"environmentHealth\":null,\"logHistory\":\"[ {\\n \\\"logGeneratedTime\\\" : 1657682435109,\\n \ "dataCenterName": null }
This is how one may generate an access token using an AD user account and then use it to call the APIs.
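To tie the three phases together, here is a minimal curl sketch of the same flow (my own illustration; the vIDM and vRSLCM FQDNs, user names, and passwords are placeholders, and the phase 2 Authorization scheme is an assumption since the post only lists the Content-Type/Accept headers).
# Phase 1: fetch a vIDM session token with a local account (-k skips TLS verification, lab use only)
curl -k -X POST "https://vidm.example.com/SAAS/API/1.0/REST/auth/system/login" \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"LocalAccountPassword","issueToken":"true"}'
# Phase 2: create the OAuth2 client; Authorization scheme assumed, adjust to how your Postman collection passes the session token
curl -k -X POST "https://vidm.example.com/SAAS/jersey/manager/api/oauth2clients" \
  -H "Authorization: HZN <vIDMSessionToken>" \
  -H "Content-Type: application/vnd.vmware.horizon.manager.oauth2client+json" \
  -H "Accept: application/vnd.vmware.horizon.manager.oauth2client+json" \
  -d '{"clientId":"admintesttwo","secret":"Vk13YXJlMTIzIQ==","scope":"user admin","authGrantTypes":"password","tokenType":"Bearer","tokenLength":32,"accessTokenTTL":36000,"refreshTokenTTL":432000}'
# Phase 3: exchange AD credentials for an access token; client id and secret exactly as registered above (curl base64-encodes them for Basic auth)
curl -k -X POST "https://vidm.example.com/SAAS/auth/oauthtoken?grant_type=password" \
  -u "admintesttwo:Vk13YXJlMTIzIQ==" \
  -F "username=aduser" -F "password=ADUserPassword" -F "domain=prem.com"
# Use the returned access token against the vRSLCM API; 'globalenvironment' is the environment id from the sample response above
curl -k "https://vrslcm.example.com/lcm/lcops/api/v2/environments/globalenvironment" \
  -H "Authorization: Bearer <adusertoken>"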