
- Handling Postgres Startup and Cert Regeneration After a VCF 9.0.1 Fleet Management Reboot
Overview
Applying the VCF Operations fleet management appliance 9.0.1.0 patch on top of version 9.0.0.0 is straightforward. It does not require a reboot of the appliance; it only triggers a service restart. However, when this VCF Operations fleet management appliance was rebooted for an unrelated reason, we observed two issues: the Postgres service fails to start, and the certificate of VCF Operations fleet management is regenerated. If the appliance is never restarted, you will not encounter this issue at all.
Postgres exception from vmware_vrlcm.log
Sep 18 15:33:07 <> postgres[12859]: pg_ctl: could not open PID file "/var/vmware/vpostgres/current/pgdata/postmaster.pid": Permission denied
Sep 18 15:33:07 <> systemd[1]: vpostgres.service: Control process exited, code=exited, status=1/FAILURE
Certificate issue
Browsing to /opt/vmware/vlcm/cert/ lists a newly generated certificate and key; the existing ones are backed up in the format server.key.<timestamp> and server.crt.<timestamp>.
Remediation
Log in to VCF Operations fleet management via SSH and run systemctl status vpostgres. If the service is down, fix the permissions with chmod 700 /var/vmware/vpostgres/current/pgdata/. Then navigate to the /opt/vmware/vlcm/cert directory. The key and certificate files requiring change will have a timestamp in their names (e.g., server.crt.250930102056). Run the following commands to move the timestamped files into place, replacing the filenames with the ones in your directory: mv server.key.250930102056 server.key and mv server.crt.250930102056 server.crt. Restart the NGINX service with systemctl restart nginx, restart the VCF Operations fleet management appliance service with systemctl restart vrlcm-server.service, and check the status of the service with systemctl status vrlcm-server.service. Once the service startup is complete, you should be good to go.
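A consolidated sketch of the remediation above, run from the fleet management appliance shell as root; the timestamp in the filenames is an example and must be replaced with the one listed in your directory:

# Check whether the Postgres service is down
systemctl status vpostgres

# Fix the permissions on the Postgres data directory
chmod 700 /var/vmware/vpostgres/current/pgdata/

# Restore the backed-up certificate and key (replace the timestamp with yours)
cd /opt/vmware/vlcm/cert
mv server.key.250930102056 server.key
mv server.crt.250930102056 server.crt

# Restart NGINX and the fleet management service, then verify
systemctl restart nginx
systemctl restart vrlcm-server.service
systemctl status vrlcm-server.service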
- VCF 9.0 to VCF 9.0.1 Management Components Upgrade & Patching Runbook
Environment/Setup
VMware Cloud Foundation 9.0.0.0 with all of the management components deployed.
Background
This document outlines the steps a customer must follow to upgrade the management components from VCF 9.0 to VCF 9.0.1, eventually followed by the core components. It assists in implementing the newly released maintenance update on top of the VCF 9.0 GA release.
Depot & Binary Management
Online Depot: If the customer is using an online depot, they will see a message stating that a new version is available for VCF Operations fleet management to which they can upgrade.
Offline Depot - Local: The customer can download the new VCF 9.0.1 bundles using VCF-DT and then upload the tar into the VCF Operations fleet management appliance's /data path. Once done, they can click on Depot Configuration → EDIT DEPOT SETTINGS to refresh the depot connection, which then detects the new upgrade/patch binaries.
Offline Depot - WebServer: The customer can download the new VCF 9.0.1 bundles using VCF-DT and place them on a repository through which the tar bundle can be exposed to the VCF Operations fleet management appliance. Once done, they can click on Depot Configuration → EDIT DEPOT SETTINGS to refresh the depot connection, which then detects the new upgrade/patch binaries.
In this example, I am leveraging the Offline Depot → WebServer mechanism.
Patch VCF Operations fleet management appliance
Log in to VCF Operations → Fleet Management → Lifecycle → VCF Management. A banner stating that a new version of Fleet Management is available is shown. We need to upgrade the Fleet Management appliance before we can upgrade any other component to its 9.0.1 version. The same banner is also available under the Components pane.
Note: VCF Operations fleet management appliance version 9.0.1 is treated as a patch and not an upgrade. Hence, the binary for the fleet management appliance is made available under VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Patch Binaries.
Download the VCF Operations fleet management appliance patch binary from VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Patch Binaries. Select "fleet management" under Patch Binaries and click on "Download". This generates a task, which can be monitored under the Tasks pane. As we can see in the screenshot below, the download of the patch binary is now complete.
Go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → System Patches. Click on "Create Snapshot", which opens a pane asking for the vCenter hostname and vCenter credential. Entering or selecting this information and clicking on "SUBMIT" creates a snapshot of the VCF Operations fleet management appliance. This is a mandatory step and cannot be skipped, so that we have an appropriate rollback or revert option in case of a failure.
After taking the snapshot, click on "New Patch"; a pane opens showing the patch we just downloaded. Select the patch and click on "NEXT". Under the "Review and Install" pane, review the information about the patch. There is also a release-note link which can be clicked and reviewed. Once done, click on "INSTALL". The moment you click on INSTALL, you are redirected to the Tasks pane where you can see the installation task for the patch run and complete. While the patch is being installed and services are being restarted in the background, you will see this "Zero Page" as a placeholder.
At this point, one can monitor the following logs to see what's really happening: /var/log/vrlcm/vmware_vrlcm.log, /var/log/vrlcm/patchcli.log, /var/log/vrlcm/bootstrap.log, /data/script.log. Log in to the VCF Operations fleet management appliance's shell to watch the logs listed above.
Once the services are up, the VCF Operations → Fleet Management → Lifecycle → VCF Management page should be back in a functional state, and the UI is refreshed automatically. It takes around 5 minutes for it to come back. We can browse to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → System Details to validate the version. You have now successfully patched VCF Operations fleet management 9.0 to version 9.0.1.
Downloading Component Binaries
We already have the depot configured, whether it's offline or online. Because we updated the VCF Operations fleet management appliance to 9.0.1, the rest of the component binaries are now made available for download as well. Important to note: VCF Automation 9.0.1 and VCF Identity Broker 9.0.1 will be available under "Patch Binaries"; VCF Operations 9.0.1, VCF Operations for Logs 9.0.1 and VCF Operations for Networks 9.0.1 will be available under "Upgrade Binaries".
Delete the binaries you no longer need to make room for the new ones being downloaded. If there's enough room under /data, there's no need to delete anything. We can check the available size of /data, where the binaries are downloaded, under VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings.
Select the components and click on Download so that the binaries can be downloaded and mapped. As stated above, for VCF Operations for Networks, VCF Operations and VCF Operations for Logs, go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Upgrade Binaries to download them. For VCF Automation and VCF Identity Broker, go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Patch Binaries to download them. Download all of the necessary binaries.
Plan Upgrade
Under VCF Operations → Fleet Management → Lifecycle → VCF Management → Components, click on "Plan Upgrade". The VCF version remains 9.0, as 9.0.1 is a maintenance release. Click on the "Target Version" for each of the components and select version 9.0.1.0. The moment a target version is selected, the target build number is auto-populated. Once done, click on "CREATE PLAN". The moment a plan is created, the respective actions that need to be implemented on the components are enabled under the "Components" pane. As stated above, for Operations for Logs, Operations and Operations for Networks it is an "Upgrade"; for Automation and Identity Broker it is a "Patch".
Upgrade VCF Operations 9.0 to 9.0.1
Click on "Upgrade" on the Component pane or on the Overview pane. A pane opens with information that is important to read. It has "Trigger Inventory Sync", which should be executed as a best practice before executing an "Upgrade". Clicking on "Trigger Inventory Sync" opens another pane asking you to submit it. Once we click on "Submit", it generates a task where progress can be tracked. Once completed, go back to the Component or Overview pane and click on "Upgrade".
Since "Trigger Inventory Sync" task was already complete, go ahead and click on "Proceed" to launch the upgrade request Since the binary is alredy available, the repository url is already populated Run APUAT by clicking on "Run Assessment", It takes few minutes for the assessment to complete. So don't panic. Review and acknowlegde assessment and click on next Under Snapshot pane , the option to "Take Component Snapshot" is by default checked. Ensure this is not unchecked as part of the upgrade , it will take a snapshot and then upgrade the component. There's an option to "Retain Component Snapshot taken" which will keep the snapshot taken before the upgrade. Click on "NEXT" to move forward Under the "Precheck" pane, click on "RUN PRECHECK" so that the checks begin Now that the Prechecks are now successful, go ahead and click on NEXT Click on SUBMIT to start the upgrade Upgrade VCF Operations for Logs from 9.0 to 9.0.1 Click on the "Upgrade" on the Component pane or on the Overview Pane A pane opens up with the information which is important to read. It has "Trigger Inventory Sync" which has to be executed as a best practice before executing an "Upgrade" Clicking on "Trigger Inventory Sync" opens up another pane which asks if you want to submit it Once we click on "Submit" , it will generate a task where progress can be tracked Once completed , go back to Component or Overview pane and click on "Upgrade". Since "Trigger Inventory Sync" task was already complete, go ahead and click on "Proceed" to launch the upgrade request Since the binary is alredy available, the repository url is already populated Under Snapshot pane , the option to "Take Component Snapshot" is by default checked. Ensure this is not unchecked as part of the upgrade , it will take a snapshot and then upgrade the component. There's an option to "Retain Component Snapshot taken" which will keep the snapshot taken before the upgrade. Click on "NEXT" to move forward Under the "Precheck" pane, click on "RUN PRECHECK" so that the checks begin Now that the Prechecks are now successful, go ahead and click on NEXT Click on SUBMIT to start the upgrade Upgrade VCF Operations for Networks 9.0 to 9.0.1 Click on the "Upgrade" on the Component pane or on the Overview Pane A pane opens up with the information which is important to read. It has "Trigger Inventory Sync" which has to be executed as a best practice before executing an "Upgrade" Clicking on "Trigger Inventory Sync" opens up another pane which asks if you want to submit it Once we click on "Submit" , it will generate a task where progress can be tracked Once completed , go back to Component or Overview pane and click on "Upgrade". Since "Trigger Inventory Sync" task was already complete, go ahead and click on "Proceed" to launch the upgrade request Since the binary is already available, the repository url is already populated Under Snapshot pane , the option to "Take Component Snapshot" is by default checked. Ensure this is not unchecked as part of the upgrade , it will take a snapshot and then upgrade the component. There's an option to "Retain Component Snapshot taken" which will keep the snapshot taken before the upgrade. 
Click on "NEXT" to move forward Under the "Precheck" pane, click on "RUN PRECHECK" so that the checks begin Now that the Prechecks are now successful, go ahead and click on NEXT Click on SUBMIT to start the upgrade Patch VCF Automation 9.0 to 9.0.1 Click on the "Apply Patch" on the Component pane or on the Overview Pane A pane appears displaying important information that must be reviewed. It is critical to verify that SFTP is properly configured and backups are functioning before starting the patch installation. These prerequisites are essential because VCF Automation nodes do not support snapshots. During the patch process, the workflow automatically takes a backup to provide a recovery point in case a failure occurs. How to verify SFTP is working Go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → SFTP Settings → SFTP status , there's no exception being reported. Optional: It's fine to take an ad-hoc backup from VCF Operations → Fleet Management → Lifecycle → VCF Management → Components → Automation → Backup & Restore (Day-N Operation) → Backup , just to be on safe side. Select the Patch and also acknowledge that you have verified the SFTP configuration is working Click on Next to go to "Review and Install" pane for the patch and then click on INSTALL to begin the process. Patch VCF Identity Broker 9.0 to 9.0.1 Click on the "Apply Patch" on the Component pane or on the Overview Pane A pane appears displaying important information that must be reviewed. It is critical to verify that SFTP is properly configured and backups are functioning before starting the patch installation. These prerequisites are essential because VCF Automation nodes do not support snapshots. During the patch process, the workflow automatically takes a backup to provide a recovery point in case a failure occurs. How to verify SFTP is working Go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → SFTP Settings → SFTP status , there's no exception being reported. Optional: It's fine to take an ad-hoc backup from VCF Operations → Fleet Management → Lifecycle → VCF Management → Components → Automation → Backup & Restore (Day-N Operation) → Backup , just to be on safe side. Select the Patch and also acknowledge that you have verified the SFTP configuration is working Click on Next to go to "Review and Install" pane for the patch and then click on INSTALL to begin the process.
- Journey to VCF 9 having vSphere 8.x and VMware Aria Operations 8.x
Introduction
Introducing a mind map for a customer topology with vSphere and VMware Aria Operations, illustrating their path to VMware Cloud Foundation 9. This blog details the journey step by step, guiding them through component deployments and upgrades, ultimately completing their VMware Cloud Foundation 9 journey.
Customer Topology
vSphere 8.x with several vCenter Servers, one of which hosts VMware Aria Operations 8.x. VMware Aria Operations 8.x. There is no NSX deployed.
Mindmap
Upgrading VMware Aria Operations 8.x to VCF Operations 9.0
Obtain the software upgrade PAK file. Snapshot the VMware Aria Operations 8.x cluster: it is mandatory to create a snapshot of each node in a cluster before you update a VMware Aria Operations cluster. Once the update is complete, you must delete the snapshots to avoid performance degradation.
Log into the VMware Aria Operations administrator interface at https:///admin. Click Take Offline under the cluster status. When all nodes are offline, open the vSphere Client, right-click a VMware Aria Operations virtual machine, click Snapshot and then Take Snapshot. Give the snapshot a meaningful name such as "Pre-Update". Uncheck the Snapshot the Virtual Machine Memory check box and the Quiesce Guest File System (Needs VMware Tools installed) check box, then click OK. Repeat these steps for each node in the cluster.
Log into the primary node's VMware Aria Operations administrator interface at https://primary-node-FQDN-or-IP-address/admin. Click Software Update in the left pane, then click Install a Software Update in the main pane. Follow the steps in the wizard to locate and install your PAK file; this updates the OS on the virtual appliance and restarts each virtual machine. Read the End User License Agreement and Update Information, and click Next. Click Install to complete the installation of the software update.
Log back into the primary node administrator interface. The main Cluster Status page appears and the cluster goes online automatically. The status page also displays the Bring Online button, but do not click it. Clear the browser caches and, if the browser page does not refresh automatically, refresh the page. The cluster status changes to Going Online; when it changes to Online, the upgrade is complete. Click Software Update to check that the update is done; a message indicating that the update completed successfully appears in the main pane. When you update VMware Aria Operations to a newer version, all nodes are upgraded by default. If you are using cloud proxies, the cloud proxy upgrades start after the VMware Aria Operations upgrade completes successfully.
Upgrading vCenter Server Appliance
Reference Link: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/9-0/vcenter-upgrade/upgrading-and-updating-the-vcenter-server-appliance.html
Upgrading ESX hosts
Reference Link: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/9-0/esx-upgrade/overview-of-the-esxi-host-upgrade-process.html
Import an existing vCenter Server as a Workload Domain
Reference Link: https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vcf-9-0-and-later/9-0/building-your-private-cloud-infrastructure/working-with-workload-domains/import-an-existing-vcenter-to-create-a-workload-domain.html
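The per-node snapshot step above can also be scripted from the command line; this is a hedged sketch using govc (the govmomi vSphere CLI), assuming govc is installed, GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD point at the vCenter hosting the nodes, and the VM names below are hypothetical placeholders for your Aria Operations nodes:

# Take a "Pre-Update" snapshot of each Aria Operations node VM,
# without memory state and without quiescing (matching the wizard options above)
for vm in aria-ops-node-1 aria-ops-node-2 aria-ops-node-3; do
  govc snapshot.create -vm "$vm" -m=false -q=false "Pre-Update"
done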
- Retrieving password using locker API in VCF 9.0 for Management Components
Retrieving Password from Locker
Step 1: Generate API Token
To generate an API token, you can use either the VCF Operations fleet management appliance (logging into the shell as the root user) or any Base64 encoding tool. Encode your credentials in the following format: echo 'admin@local:youradminatlocalpassword' | base64. Copy the resulting Base64-encoded string; this will be used for authorization.
Step 2: Authenticate via Swagger UI
Open the API documentation in your browser: https:///api/swagger-ui/index.html. Navigate to VCF Operations → Developer Central → Fleet Management API → API Documentation. In the Swagger UI, locate the API Token section. When prompted for authorization, enter the value in the format "Basic <Base64-encoded string>", replacing <Base64-encoded string> with the string you copied in Step 1. Click Authorize to authenticate and begin executing API requests.
Step 3: Retrieve Passwords from Locker
First, retrieve all passwords from the locker so that we can take the VMID from the response and then retrieve a specific password: GET https://vcf-operations-fleetmanagement-appliance-fqdn/lcm/locker/api/passwords. This API returns a paginated list of passwords:
[ { "alias": "Default Password for vCenters", "createdOn": 1605791587373, "lastUpdatedOn": 1605791587373, "password": "PASSWORD****", "passwordDescription": "This password is being used for all my vCenters", "principal": "string", "referenced": true, "tenant": "string", "transactionId": "string", "userName": "administrator@vsphere.local", "vmid": "6c9fca27-678d-4e79-9a0f-5f690735e67c" } ]
Now retrieve the actual password by using the root password of the VCF Operations fleet management appliance. Take the VMID of the password from the previous response and call POST https://vcf-operations-fleetmanagement-appliance-fqdn/lcm/locker/api/passwords/view/{vmid}. The body should be {"rootPassword":"V*********!"}. The response of this call returns the password needed.
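A minimal curl sketch of the same flow; the appliance FQDN, credentials, and vmid below are placeholders taken from the example response above:

# Build the Basic auth token from the admin@local credentials (-n avoids encoding a trailing newline)
TOKEN=$(echo -n 'admin@local:youradminatlocalpassword' | base64)

# List all locker passwords (paginated) and note the vmid of interest (-k skips certificate validation)
curl -k -H "Authorization: Basic $TOKEN" \
  https://vcf-operations-fleetmanagement-appliance-fqdn/lcm/locker/api/passwords

# Reveal a specific password by vmid, supplying the appliance root password in the body
curl -k -X POST -H "Authorization: Basic $TOKEN" -H "Content-Type: application/json" \
  -d '{"rootPassword":"V*********!"}' \
  https://vcf-operations-fleetmanagement-appliance-fqdn/lcm/locker/api/passwords/view/6c9fca27-678d-4e79-9a0f-5f690735e67c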
- VMware Aria Automation 8.18 to VCF Automation 9.0 Upgrade MindMap
Presenting a mind map of the VMware Aria Automation 8.18 to VCF Automation 9.0 upgrade.
- VCF Automation 9.0 Installation | Deep-Dive |
In today's fast-paced IT landscape, automation has become a critical component for streamlining operations and enhancing efficiency. One of the most powerful products available for the private cloud is VMware Cloud Foundation (VCF), which provides a cohesive infrastructure solution. This blog delves into the intricacies of VCF Automation installation, providing a comprehensive guide for IT professionals looking to simplify their deployment processes. We will explore the prerequisites, installation steps, and best practices to ensure a smooth and successful implementation of VCF Automation, enabling organizations to harness the full potential of their cloud environments.
Deployment Types
VCF Automation comes with three sizing profiles: Small, Medium and Large. The table below describes the number of nodes deployed and the number of IPs needed for each deployment type.
Let's now get into the deployment flow and see which parameters we need for a successful deployment. Log in to the VCF Operations UI, select Fleet Management → Lifecycle → VCF Management → Overview and click on ADD on the Automation tile. This launches the Automation installation wizard. There are three options here:
New Install: deploys a new VCF Automation component.
Import: gives the ability to import Automation 9.0 when it has been removed as a management component from the VCF Operations fleet management appliance for troubleshooting purposes after its deployment.
Import from legacy Fleet Management: provides the ability for a customer to import their existing VMware Aria Automation 8.18.x instances into the VCF Operations fleet management appliance so that they can be upgraded to VCF Automation 9.0.
The initial/first VCF Automation instance you deploy or import is classified as an INTEGRATED instance. Any subsequent VCF Automation instances added to VCF Operations fleet management, whether through deployment or the import & upgrade methods, are classified as NON-INTEGRATED.
Since we are covering New Install in this blog, select this option and move forward by clicking on NEXT. We shall select the MEDIUM deployment type.
In the next step, select the certificate to be used for the deployment. If you have a pre-created certificate, select it. If you don't have a certificate, click on the "+" sign to generate one; this will be a VCF Operations fleet management Locker CA based certificate. If you have a third-party certificate authorized by your organization, choose Import Certificate and import it.
Unlike VMware Aria Automation 8.x, where you need 1 VMware Aria Automation load balancer FQDN and 3 VMware Aria Automation node FQDNs, deploying VCF Automation 9.0 requires just 1 VCF Automation FQDN. This VCF Automation FQDN is the only input needed while generating the certificate.
Select the certificate and click on NEXT for further inputs on the Infrastructure tab.
Select vCenter Server: the management domain where VCF Automation will be deployed. If the vCenter Server where you would like to deploy VCF Automation is not listed, check whether it has been added as one of the deployment targets under VCF Operations → Administration → Integrations → Accounts → vCenter or VMware Cloud Foundation, or under Fleet Management → Lifecycle → VCF Management → Settings → Deployment Target.
Select Cluster: the cluster where you would like to host your nodes.
Select Folder: placeholder for placing the VCF Automation nodes.
Select Resource Pool: placeholder for placing the VCF Automation nodes.
Select Network: the network your VCF Automation nodes will be connected to.
Select Datastore: the datastore your VCF Automation nodes will be deployed to.
Once done with the Infrastructure tab, proceed to the Network tab.
Domain Name: enter the domain name of the organization.
Domain Search Path: enter the domain search path of the organization.
DNS Servers: ADD NEW SERVER adds a new DNS server which you then select; EDIT SERVER SELECTION lets you select the DNS server you would like to use for this deployment.
Time Sync Mode: Use NTP Server (ADD NEW SERVER adds a new NTP server which you then select; EDIT SERVER SELECTION lets you select the NTP server you would like to use for this deployment) or Use Host Time (leverages the NTP configuration of the ESXi host where the appliance is deployed).
IPv4 Details: Default IPv4 Gateway — enter the default gateway for the deployment; IPv4 Netmask — enter the netmask used for the deployment.
Click on NEXT to enter the component properties.
Component Properties
FQDN: enter the VCF Automation FQDN.
Certificate: as we selected this on the initial screen, it is pre-populated.
Component Password: create a 15-character-long password. If the password has not been created yet, create it using "ADD PASSWORD" in the top right corner of the screen, then select it.
Cluster Virtual IP FQDN: enter the VCF Automation FQDN. Yes, you have entered this before, but you need to enter it again.
Controller Type: Internal Load Balancer — when using the internal load balancer, the VCF Automation FQDN should point to the Primary VIP. Others — this option is for customers who want to leverage an external load balancer such as F5, NetScaler, NSX-T, etc.; when it is used, the VCF Automation FQDN should point to the VIRTUAL SERVER IP of the load balancer, and the Primary VIP and additional VIPs collected as inputs in the subsequent steps should be part of its POOL.
Components Node Prefix: specify a unique prefix for the VCF Automation nodes. Ensure the prefix is unique within the VCF instance fleet to avoid conflicts and enable accurate VM backup identification. This prefix is applied to the VCF Automation nodes we deploy, and a suffix is auto-generated during deployment; this behavior cannot be changed.
Primary VIP: the primary virtual IP address of VCF Automation used for accessing the services. As described above, if using the internal load balancer, the VCF Automation FQDN should point to this Primary VIP; if using an external load balancer, this Primary VIP should be part of the pool on the load balancer.
Internal Cluster CIDR: the IP address range used for internal network communication within the cluster. Choose a range that does not conflict with any existing networks.
Note: once a cluster CIDR is selected and the component is deployed, it cannot be changed.
Additional VIPs: you can add up to 2 additional VIPs for VCF Automation. This is not mandatory for greenfield installs. Click on ADD ADDITIONAL VIP POOL to add IP addresses one after another.
Cluster Node IP Pool: a node IP pool is a range of IP addresses allocated for the nodes being deployed to host VCF Automation, from which they receive their IP addresses. For Medium and Large deployment types a minimum of 4 IPs is needed; for the Small deployment type a minimum of 2 IPs is needed. It accepts a CIDR-based format, individual IP addresses, or a range.
Click on NEXT to run a PRECHECK, and proceed only when the PRECHECK is successful. Once the prechecks are successful, review the summary and submit the deployment request. As stated on the summary page, parallel deployment of VCF Automation and Identity Broker is not supported; deploy them one after another. Once submitted, the deployment procedure begins. A deep dive into the deployment process will be blogged soon.
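Since the wizard expects the VCF Automation FQDN to resolve to the Primary VIP (internal load balancer) or to the external load balancer's virtual server IP, it is worth confirming DNS before running the precheck. A minimal sketch, with a hypothetical FQDN and address:

# Forward lookup: the FQDN should return the Primary VIP (or the external LB virtual server IP)
nslookup vcf-automation.example.com

# Optional reverse lookup of the Primary VIP (hypothetical address)
nslookup 192.0.2.10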
- Depot & Binary Management in VCF Operations fleet management appliance or OPS LCM 9.0
Background
We are accustomed to downloading the binaries and copying them into VMware Aria Suite Lifecycle, followed by binary mapping, allowing us to use these binaries for installing, upgrading, or patching the VMware Aria products in version 8.x. In VCF 9.0, this has changed: VMware Aria Suite Lifecycle is no longer included. Instead, its responsibilities now fall under the VCF Operations fleet management appliance, also known as OPS LCM. OPS LCM allows us to set up a depot for downloading the relevant binaries. Depot configuration is a prerequisite before downloading the binaries, so let's understand the process thoroughly.
Extending Storage for Binaries
The /data partition is the largest in the VCF Operations fleet management appliance. This is where customers upload bundles in the Offline Depot → Local (dark site) use case. If storage is running low, you can extend the /data partition: browse to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings, click on EXTEND, enter the vCenter Server FQDN, select the vCenter Server password, enter the desired size to which you want to extend the partition, and click EXTEND. This adds additional storage to the /data partition.
Depot Configuration
Starting with VCF 9.0, VCF Operations fleet management introduces support for depot configuration. A depot serves as a source for downloading installation, upgrade, and patch binaries. There are two types of depot configuration: Online, which connects directly to the online VCF depot; and Offline, which comes in two forms — Web Server, which connects to an offline web server hosting the OBTU bundle and is suitable for air-gapped environments, and Local, which requires copying the tar bundle downloaded via OBTU to the /data partition of the VCF Operations fleet management appliance and is ideal for dark-site environments. Only one depot connection can be ACTIVE at a time. If a connection is already ACTIVE, the option to switch the depot to ONLINE or OFFLINE is unavailable until the current depot connection is disconnected.
Configure Online Depot
To enable the Online Depot, generate a download token from support.broadcom.com and use it during the setup process. This ensures entitlement to download the required binaries.
Fetching the Download Token
Log in to support.broadcom.com using your credentials. After a successful login, select VMware Cloud Foundation. In the bottom right of the screen, select "Generate Download Token" under "Quick Links". Pick the correct site and click "Generate".
Now that we have the token, let's set up the ONLINE depot. Log in to the VCF Operations 9.0 console and browse to VCF Operations → Fleet Management → Lifecycle → VCF Management → Depot Configuration. Click on Online Depot → Configure, then click on the "+" on the Download Token field to add the token to the locker: Password Alias — enter an alias to identify the token; Password — enter/paste the token generated from the Broadcom Support Portal; Confirm Password — re-enter the generated token; Password Description — enter a description; User Name — not needed in this scenario and can be left blank. Click on ADD so that the token is added to the locker. Once done, click on "Select Download Token" and select the token we just added. Accept the certificate and then click on "OK". The Online Depot connection is now established.
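In addition to the Settings page mentioned above, free space on /data can be checked quickly from the appliance shell before uploading bundles; a trivial sketch, assuming SSH access as root:

# Check free space on the /data partition before uploading bundles
df -h /data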
Offline Depot Configuration
Offline depots simplify artifact distribution for "dark site" or "air-gapped" customers, reducing the steps needed to manage multiple VCF instances. To set up the depot correctly, customers should follow the procedure outlined below. Leverage the VCF Download Tool and set up an offline depot structure where the binaries can be downloaded. This is not new for VCF customers; it was called the Offline Bundle Transfer Utility (OBTU) in the VCF 5.x days. To set up the offline depot, a dedicated virtual machine (VM) must be prepared; refer to the VCF Download Tool documentation for more information.
Note: both offline depot methods expect only a bundle from OBTU. This is a tar bundle with metadata included. Individual binary mapping, as it used to happen in VMware Aria Suite Lifecycle 8.x, no longer works.
Once we have the VCF Download Tool installed and configured, we can leverage the following commands to download the binaries.
Explanation of the command:
/vcf-download-tool.bat: invokes the VCF Download Tool batch script (for Windows environments).
binaries download: action to download binary files.
-d /Users/Arun/local-depot-config/: target local directory where binaries and metadata will be stored.
--depot-download-token-file downloadtoken.txt: file containing the secure token required to authenticate and authorize the download from the Broadcom depot.
--vcf-version="9.0.0.0": specifies the exact VCF version to download (in this case, VCF 9.0.0.0).
--lifecycle-managed-by=VRSLCM: indicates that the binaries are intended to be managed by the VCF Operations fleet management appliance (OPS LCM or VRSLCM).
--type=INSTALL: specifies the type of binaries — in this case, for a fresh install (not upgrade or patch).
Downloads all VCF Management Component binaries of type "INSTALL":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=INSTALL
Downloads all VCF Management Component binaries of type "UPGRADE":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=UPGRADE
Downloads a specific VCF Management Component binary of type "INSTALL":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=INSTALL --component=VRLI
Downloads a specific VCF Management Component binary of type "UPGRADE":
/vcf-download-tool.bat binaries download -d /Users/Arun/local-depot-config/ --depot-download-token-file downloadtoken.txt --vcf-version="9.0.0.0" --lifecycle-managed-by=VRSLCM --type=UPGRADE --component=VRNI
When the command execution finishes, it creates a bundle in the specified location. This is the bundle to use for the offline depot use cases.
Web Server Method
The Offline Depot Web Server method is designed to address air-gapped use cases. Keep the web server URL handy, then navigate to VCF Operations → Fleet Management → Lifecycle → VCF Management → Depot Configuration → Offline Depot → WebServer and set it to reference the offline depot. The Offline Depot with a Web Server-based connection is now set up. The username and password provided here are for web server authentication to download the binaries.
Local Method
The Local method is used in dark-site infrastructures: secure, isolated network environments with no direct internet access, typically designed for sensitive or high-security operations. To support these environments, the Offline Depot → Local option can be used to map bundles and binaries, enabling upgrades, installations, and patching of components. Transfer the bundle we recently downloaded with the VCF Download Tool into the /data partition of the VCF Operations fleet management appliance, as this is the largest partition and is specifically designed to store the binaries. If the partition needs to be extended, the steps have been shared above. Once the bundle has been copied over, as shown in the screenshot below, point the depot configuration to the location where it was copied and click on OK to configure the Offline Depot to use the local path.
We have now covered how to configure these depots. Regardless of the specific depot configuration, once it is set up correctly, the VCF Operations fleet management appliance reads the metadata and populates the corresponding binaries under Binary Management for download. Select the required binary and click on "DOWNLOAD" so that the binary is staged into the VCF Operations fleet management content repository and can be used during install, patching or upgrade. The methods you used in VMware Aria Suite Lifecycle no longer apply to VCF 9.0; stop copying ISO, PAK, OVA, and .patch files directly and start learning the new required methods.
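A minimal sketch of staging the OBTU bundle for the Local method; the bundle filename and appliance FQDN below are hypothetical placeholders:

# Copy the tar bundle produced by the VCF Download Tool to the /data partition
scp /Users/Arun/local-depot-config/vcf-bundle.tar root@vcf-opslcm.example.com:/data/

# Confirm the bundle landed and that /data still has room to spare
ssh root@vcf-opslcm.example.com "ls -lh /data/ && df -h /data"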
- VMware Aria Suite Lifecycle 8.18 Patch 2 Released
VMware Aria Suite Lifecycle 8.18 Patch 2 is live. It is only needed if you're upgrading to VCF 9.0. This patch provides the capability to upgrade VMware Aria Operations 8.18 to 9.0 and to install the new VCF Operations fleet management appliance 9.0. VMware Aria Suite Lifecycle 8.18 Patch 2 does not contain any other features or bug fixes. If you have no immediate plans to upgrade to 9.0, there is no need to install this patch.
Release Notes: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/release-notes/vmware-aria-suite-lifecycle-818-patch-2-release-notes.html
- VMware-Aria-Operations-8.18-HF6 Released
VMware Aria Operations 8.18 HF 6 is now released. Click here for Release Notes. Click here for Solution Details. Click here to download the VMware Aria Suite Lifecycle wrapped patches which correspond to HF 6.
VMware-Aria-Operations-8.18-HF6: VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.0.
vrlcm-vrops-8.18.0_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.0.
vrlcm-vrops-8.18.1_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.1.
vrlcm-vrops-8.18.2_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.2.
vrlcm-vrops-8.18.3_HF6: VMware Aria Suite Lifecycle wrapped VMware Aria Operations 8.18 HF 6, which can be applied on top of VMware Aria Operations 8.18.3.
- New VMware Aria Operations 8.18.x patch released
Payload
VMware Aria Operations 8.18 Hot Fix 5 is a public hot fix that addresses the following issues:
Product managed agent installation is failing
CP status became unhealthy after removing nodes from the cluster
"Network|Total Transmitted Packets Dropped" metric for VMs is missing
JS error in the creation flow of Payload Templates for the webhook plugin
"The session is not authenticated" issue while calling SPBM APIs
[Diagnostics MP] Add new VMSA rules: VMware ESXi CVE-2025-22224, CVE-2025-22225, CVE-2025-22226, to VMSA-2024-0004
[Diagnostics MP] Update to VMSA rules: VMware vCenter Server CVE-2024-38812, CVE-2024-38813, to VMSA-2024-0019
[Diagnostics MP] Add new VMSA rules: VMware Aria Automation CVE-2025-22215, to VMSA-2025-0001
If on VMware Aria Operations 8.18.0, use https://support.broadcom.com/web/ecx/solutiondetails?patchId=5818 (file name vrlcm-vrops-8.18.0-HF5.patch). If on 8.18.1, use https://support.broadcom.com/web/ecx/solutiondetails?patchId=5827 (vrlcm-vrops-8.18.1-HF5.patch). If on 8.18.2, use https://support.broadcom.com/web/ecx/solutiondetails?patchId=5828 (vrlcm-vrops-8.18.2-HF5.patch). If on 8.18.3, use https://support.broadcom.com/web/ecx/solutiondetails?patchId=5829 (vrlcm-vrops-8.18.3-HF5.patch).
Implementation Details
NOTE: before installing the patch, ensure a snapshot of the cluster is taken. As my environment was on VMware Aria Operations 8.18.0, I downloaded vrlcm-vrops-8.18.0-HF5.patch from https://support.broadcom.com/web/ecx/solutiondetails?patchId=5818. Now let's check how to get this installed.
First, download the patch or HF and copy it into the VMware Aria Suite Lifecycle /data partition. Go to Lifecycle Operations → Binary Mapping → Patch Binary → Add. Enter the base location where the patch is present and then click on DISCOVER. Select the discovered patch and click on ADD. Here's the version info and the build number before I started the patch install. Now let's go to the Day-2 operation called Patch. Select the patch to be installed, review and submit, and click on INSTALL for the patch install flow to begin. After some time, the patching is complete. Before the patch the build number was 24025145; after applying HF5, the build number is 8.18.3.24663033. Here's the Environment section, where it shows that it's on the 8.18.3 patch; the build number is clearly shown in the properties. No matter which version you're coming from, when the patch is applied it goes to the 8.18.3 patch build, which is the latest one. Screenshot from the VMware Aria Operations UI itself. This concludes the patching blog.
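A small sketch of the copy step above, assuming the Suite Lifecycle appliance FQDN below is a placeholder:

# Copy the downloaded hot fix into the VMware Aria Suite Lifecycle /data partition
scp vrlcm-vrops-8.18.0-HF5.patch root@aria-lcm.example.com:/data/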
- Configuring a vRO 8.x Cluster
In this blog, I shall share the procedure to create a vRealize Orchestrator cluster on version 8.2 The recommended number of nodes that can be used to create a clustered vRealize Orchestrator environment is three. vropri.prem.com vrosec.prem.com vroteri.prem.com Once deployed we have to ensure that all pods are in a RUNNING state so that we can move on to the next step to configure clustering Procedure In my case, there isn't any working load balancer so I am using a workaround at this moment. I've created a cname which is pointing to the primary node Ideally, you're expected to configure as written in this documentation Orchestrator Load-Balancing Guide Now that we have all the nodes in a ready state and I've taken a snapshot of all three nodes before making any changes. These snapshots are taken without memory while the appliances are powered on Configure the primary node. Log in to the vRealize Orchestrator Appliance of the primary node over SSH as root . To configure the cluster load balancer server, run the vracli load-balancer set load_balancer_FQDN command Click Change and set the host address of the connected load balancer server. Configure the authentication provider. See Configuring a Standalone vRealize Orchestrator Server. Join secondary nodes to the primary node Log in to the vRealize Orchestrator Appliance of the secondary node over SSH as root . To join the secondary node to the primary node, run the vracli cluster join primary_node_hostname_or_IP Enter the root password of the primary node. Repeat the procedure for other secondary nodes. Following are the events which happen when you execute the cluster join command Below Snippet is from one of the nodes vroteri.prem.com root@vroteri [ ~ ]# vracli cluster join vropri.prem.com 2020-11-24 14:13:19,085 [INFO] Resetting the current node .. 2020-11-24 14:18:09,306 [INFO] Getting join bundle from remote endpoint .. Password: 2020-11-24 14:22:30,362 [INFO] Parsing join bundle 2020-11-24 14:22:30,390 [INFO] Deleting data from previous use on this node .. 2020-11-24 14:22:31,006 [INFO] Creating missing data directories on this node .. 2020-11-24 14:22:31,082 [INFO] Allowing other nodes to access this node .. 2020-11-24 14:22:31,101 [INFO] Updating hosts file for remote endpoint .. 2020-11-24 14:22:31,114 [INFO] Executing cluster join .. I1124 14:22:31.221791 30566 join.go:357] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery I1124 14:22:31.224053 30566 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName I1124 14:22:31.224103 30566 join.go:375] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress I1124 14:22:31.224243 30566 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock I1124 14:22:31.224550 30566 interface.go:400] Looking for default routes with IPv4 addresses I1124 14:22:31.224592 30566 interface.go:405] Default route transits interface "eth0" I1124 14:22:31.224842 30566 interface.go:208] Interface eth0 is up I1124 14:22:31.224968 30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20]. I1124 14:22:31.225031 30566 interface.go:223] Checking addr 10.109.44.140/20. I1124 14:22:31.225068 30566 interface.go:230] IP found 10.109.44.140 I1124 14:22:31.225117 30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0". 
I1124 14:22:31.225151 30566 interface.go:411] Found active IP 10.109.44.140 [preflight] Running pre-flight checks I1124 14:22:31.225361 30566 preflight.go:90] [preflight] Running general checks I1124 14:22:31.225501 30566 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests I1124 14:22:31.225580 30566 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf I1124 14:22:31.225626 30566 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf I1124 14:22:31.225673 30566 checks.go:102] validating the container runtime I1124 14:22:31.342648 30566 checks.go:128] validating if the service is enabled and active [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ I1124 14:22:31.532103 30566 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I1124 14:22:31.532217 30566 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward I1124 14:22:31.532283 30566 checks.go:649] validating whether swap is enabled or not I1124 14:22:31.532392 30566 checks.go:376] validating the presence of executable conntrack I1124 14:22:31.532532 30566 checks.go:376] validating the presence of executable ip I1124 14:22:31.532583 30566 checks.go:376] validating the presence of executable iptables I1124 14:22:31.532650 30566 checks.go:376] validating the presence of executable mount I1124 14:22:31.532687 30566 checks.go:376] validating the presence of executable nsenter I1124 14:22:31.532741 30566 checks.go:376] validating the presence of executable ebtables I1124 14:22:31.532847 30566 checks.go:376] validating the presence of executable ethtool I1124 14:22:31.532959 30566 checks.go:376] validating the presence of executable socat I1124 14:22:31.533016 30566 checks.go:376] validating the presence of executable tc I1124 14:22:31.533088 30566 checks.go:376] validating the presence of executable touch I1124 14:22:31.533353 30566 checks.go:520] running all checks I1124 14:22:31.631717 30566 checks.go:406] checking whether the given node name is reachable using net.LookupHost I1124 14:22:31.632132 30566 checks.go:618] validating kubelet version I1124 14:22:31.724374 30566 checks.go:128] validating if the service is enabled and active I1124 14:22:31.737672 30566 checks.go:201] validating availability of port 10250 I1124 14:22:31.738591 30566 checks.go:432] validating if the connectivity type is via proxy or direct I1124 14:22:31.738713 30566 join.go:455] [preflight] Fetching init configuration I1124 14:22:31.738727 30566 join.go:493] [preflight] Retrieving KubeConfig objects [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' I1124 14:22:31.798981 30566 interface.go:400] Looking for default routes with IPv4 addresses I1124 14:22:31.799061 30566 interface.go:405] Default route transits interface "eth0" I1124 14:22:31.799375 30566 interface.go:208] Interface eth0 is up I1124 14:22:31.799490 30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20]. I1124 14:22:31.799542 30566 interface.go:223] Checking addr 10.109.44.140/20. I1124 14:22:31.799591 30566 interface.go:230] IP found 10.109.44.140 I1124 14:22:31.799656 30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0". 
I1124 14:22:31.799699 30566 interface.go:411] Found active IP 10.109.44.140 I1124 14:22:31.799798 30566 preflight.go:101] [preflight] Running configuration dependant checks [preflight] Running pre-flight checks before initializing the new control plane instance I1124 14:22:31.801206 30566 checks.go:577] validating Kubernetes and kubeadm version I1124 14:22:31.801271 30566 checks.go:166] validating if the firewall is enabled and active I1124 14:22:31.813148 30566 checks.go:201] validating availability of port 6443 I1124 14:22:31.813292 30566 checks.go:201] validating availability of port 10259 I1124 14:22:31.813358 30566 checks.go:201] validating availability of port 10257 I1124 14:22:31.813430 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I1124 14:22:31.813486 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I1124 14:22:31.813516 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I1124 14:22:31.813541 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I1124 14:22:31.813566 30566 checks.go:432] validating if the connectivity type is via proxy or direct I1124 14:22:31.813609 30566 checks.go:471] validating http connectivity to first IP address in the CIDR I1124 14:22:31.813653 30566 checks.go:471] validating http connectivity to first IP address in the CIDR I1124 14:22:31.813695 30566 checks.go:201] validating availability of port 2379 I1124 14:22:31.813833 30566 checks.go:201] validating availability of port 2380 I1124 14:22:31.813917 30566 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I1124 14:22:31.882093 30566 checks.go:838] image exists: vmware/kube-apiserver:v1.18.5-vmware.1 I1124 14:22:31.945357 30566 checks.go:838] image exists: vmware/kube-controller-manager:v1.18.5-vmware.1 I1124 14:22:32.007510 30566 checks.go:838] image exists: vmware/kube-scheduler:v1.18.5-vmware.1 I1124 14:22:32.071842 30566 checks.go:838] image exists: vmware/kube-proxy:v1.18.5-vmware.1 I1124 14:22:32.139079 30566 checks.go:838] image exists: vmware/pause:3.2 I1124 14:22:32.204019 30566 checks.go:838] image exists: vmware/etcd:3.3.6.670 I1124 14:22:32.269153 30566 checks.go:838] image exists: vmware/coredns:1.2.6.11743048 I1124 14:22:32.269226 30566 controlplaneprepare.go:211] [download-certs] Skipping certs download [certs] Using certificateDir folder "/etc/kubernetes/pki" I1124 14:22:32.269257 30566 certs.go:38] creating PKI assets [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for 
DNS names [vroteri.prem.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vra-k8s.local] and IPs [10.244.4.1 10.109.44.140] [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" I1124 14:22:34.339423 30566 certs.go:69] creating new public/private key files for signing service account users [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" I1124 14:22:36.170921 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.171453 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.171988 30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver" I1124 14:22:36.172011 30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver" I1124 14:22:36.172020 30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver" I1124 14:22:36.194181 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml" [control-plane] Creating static Pod manifest for "kube-controller-manager" I1124 14:22:36.194238 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.194357 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.194806 30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager" I1124 14:22:36.194836 30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager" I1124 14:22:36.194866 30566 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager" I1124 14:22:36.194875 30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager" I1124 14:22:36.194883 30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager" I1124 14:22:36.196052 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [control-plane] Creating static Pod manifest for "kube-scheduler" I1124 14:22:36.196092 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.196176 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.196505 30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" I1124 14:22:36.197169 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" [check-etcd] Checking that the etcd cluster is healthy I1124 14:22:36.214097 30566 local.go:78] [etcd] Checking etcd cluster health I1124 14:22:36.214140 30566 local.go:81] creating etcd client that connects to etcd pods I1124 14:22:36.214188 30566 etcd.go:178] retrieving etcd endpoints from 
"kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods I1124 14:22:36.242843 30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.334793 30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.334878 30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.406046 30566 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf I1124 14:22:36.412385 30566 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "vroteri.prem.com" and status "Ready" I1124 14:22:36.425067 30566 kubelet.go:159] [kubelet-start] Stopping the kubelet [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... I1124 14:22:41.776119 30566 cert_rotation.go:137] Starting client certificate rotation controller I1124 14:22:41.781038 30566 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node I1124 14:22:41.781103 30566 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "vroteri.prem.com" as an annotation I1124 14:22:50.317665 30566 local.go:130] creating etcd client that connects to etcd pods I1124 14:22:50.317743 30566 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods I1124 14:22:50.348895 30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399038 30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399144 30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399168 30566 local.go:139] Adding etcd member: https://10.109.44.140:2380 [etcd] Announced new etcd member joining to the existing etcd cluster I1124 14:22:50.480013 30566 local.go:145] Updated etcd member list: [{vroteri.prem.com https://10.109.44.140:2380} {vropri.prem.com https://10.109.44.138:2380} {vrosec.prem.com https://10.109.44.139:2380}] [etcd] Creating static Pod manifest for "etcd" [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s I1124 14:22:50.482224 30566 etcd.go:509] [etcd] attempting to see if all cluster endpoints ([https://10.109.44.138:2379 https://10.109.44.139:2379 https://10.109.44.140:2379]) are available 1/8 [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. 
* Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster. 2020-11-24 14:22:52,499 [INFO] Enabling the current node to run workloads .. node/vroteri.prem.com untainted 2020-11-24 14:22:52,945 [INFO] Enabling flannel on the current node .. 2020-11-24 14:22:53,175 [INFO] Updating hosts file for local endpoint .. 2020-11-24 14:22:53,192 [INFO] Sleeping for 120 seconds to allow infrastructure services to start. 2020-11-24 14:24:53,289 [INFO] Updating proxy-exclude settings to include this node FQDN (vroteri.prem.com) and IPv4 address (10.109.44.140)
I don't have a custom certificate in this environment, so I'll skip that step. As you can see, once we execute kubectl -n prelude get nodes, all three nodes are in a Ready state. Now let's run the script which starts all the services: /opt/scripts/deploy.sh. At this point, monitor /var/log/deploy.log. Monitor pod deployments using watch -n 5 "kubectl -n prelude get pods -o wide". Finally, all the pods are up on all the underlying nodes. Heading to the vRO load balancer FQDN, you will be presented with the landing page of this vRO. Clicking on Control Center, you can configure the environment accordingly.
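A compact recap of the verification and service-start commands used above, run from any one of the vRO nodes as root:

# Confirm all three control-plane nodes have joined and are Ready
kubectl -n prelude get nodes

# Start the vRO services and follow the deployment log
/opt/scripts/deploy.sh
tail -f /var/log/deploy.log

# Watch the prelude pods come up across the nodes
watch -n 5 "kubectl -n prelude get pods -o wide"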







