
  • vRealize Log Insight Upgrade from 8.8.2 to 8.10

    Pre-Requisites

    Ensure vRSLCM 8.10 PSPACK 1 is installed. Once that is done, download or upload the product binary, which enables you to upgrade vRLI to 8.10. vRSLCM 8.10 PSPACK 1 also allows you to upgrade vRNI to 6.8; we will discuss that in the next blog. Click here for the vRSLCM 8.10 PSPACK 1 Release Notes.

    The environment consists of vRLI 8.8.2, which we will upgrade to 8.10.

    Submitting the upgrade request:
      • Trigger an Inventory Sync
      • Check the binary (pak file)
      • Take a snapshot
      • Pre-checks run in the background. Once the pre-checks complete, the following have been performed: TLS check, SSH check, valid IP/FQDN check, cluster setup check for vRLI, and a disk space check on the root filesystem
      • Submit the request

    It's a single-node vRealize Log Insight environment, hence there are 8 stages. Let's discuss each one of them in detail.
      • Stage 1: Shutting Down Guest OS
      • Stage 2: Creating Node Snapshot
      • Stage 3: Power On VM
      • Stage 4: vrlihealthcheck
      • Stage 5: Create snapshot inventory
      • Stage 6: Start vRealize Log Insight Generic Task
      • Stage 7: deleteNodeSnapshot
      • Stage 8: productupgradeinventoryupdate

    Let's look at Stage 6 in detail and what really happens in the background.

    Reference: vmware_vrlcm.log

    ### Generic Task is Initiated ###
    2022-10-28 13:28:37.695 INFO [pool-3-thread-42] c.v.v.l.p.v.StartGenericVRLIInstallTask - -- Starting :: Start vRLI Generic Task
    2022-10-28 13:28:37.695 INFO [pool-3-thread-42] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnGenericVrliTaskInitialized

    ### Upgrade Task is triggered, pak file is identified ###
    2022-10-28 13:28:38.265 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Starting :: vRLI upgrade task
    2022-10-28 13:28:38.266 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Checking if vRLI instance is running
    2022-10-28 13:28:38.349 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- The vRLI instance https://10.109.44.140 service is running
    2022-10-28 13:28:39.234 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Return message for vrli:
    {"releaseName":"GA","version":"8.8.2-20056468"}
    2022-10-28 13:28:39.238 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Return status code for vrli: 200
    2022-10-28 13:28:39.240 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Current vRLI version is: 8.8.2-20056468
    2022-10-28 13:28:39.241 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Current vRLI version without build is: 8.8.2
    2022-10-28 13:28:39.241 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- vRLI upgrade spec properties {product=vrli, productId=vrli, masterNodeIP=10.109.44.140, takeSnapshot=true, snapshotWithMemory=false, vCenterHost=vc.cap.org, snapshotNamePrefix=VRLCM_AUTOGENERATED_2735afc8-a188-4779-a58f-50cd3997aa00, version=8.10.0, vrliAdminPassword=KXKXKXKX, isRetainSnapshot=false, snapshotPrefix=VRLCM_AUTOGENERATED_2735afc8-a188-4779-a58f-50cd3997aa00, environmentId=cf8ac4ce-a7a7-4958-8401-50efdf4f1489, environmentName=Production, vcUsername=vrsvc@cap.org, snapshotWithShutdown=true, tenantId=, vrliHostName=li.cap.org, repositoryType=lcmrepository, vrliUpgradePakUrl=http://lcm.cap.org/repo/productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak, quiesceSnapshot=false, isVcfUser=false, isVcfEnabledEnv=false, vcPassword=KXKXKXKX, rootPassword=KXKXKXKX
    2022-10-28 13:28:39.243 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- vRLI upgrade version from user input 8.10.0
    2022-10-28 13:28:39.245 INFO [pool-3-thread-13] c.v.v.l.c.s.ContentLeaseServiceImpl - -- Inside create content lease.
    2022-10-28 13:28:39.246 INFO [pool-3-thread-13] c.v.v.l.c.s.ContentLeaseServiceImpl - -- Created lease for the folder with id :: 564ae968-e032-4c87-89a6-2c1525f11ebd.
    2022-10-28 13:28:39.261 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- /data/vm-config/symlinkdir/564ae968-e032-4c87-89a6-2c1525f11ebd
    2022-10-28 13:28:39.262 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Started Downloading Content Repo
    2022-10-28 13:28:39.263 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl
    2022-10-28 13:28:39.263 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /productBinariesRepo
    2022-10-28 13:28:39.264 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak
    2022-10-28 13:28:39.264 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- URL :: /productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak
    2022-10-28 13:28:39.264 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak
    2022-10-28 13:28:39.272 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='86af9699-d28e-42b5-ac2b-ff2c32f5fab8', version=8.1.0.0} -> repoName='productBinariesRepo', contentState='PUBLISHED', url='/productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak'}
    2022-10-28 13:28:39.350 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Completed Downloading Content Repo and starting InputStream
    2022-10-28 13:28:39.351 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- /data/vm-config/symlinkdir/564ae968-e032-4c87-89a6-2c1525f11ebd/VMware-vRealize-Log-Insight-8.10.0-20623770.pak

    ### pak file is uploaded to vRLI ###
    2022-10-28 13:29:32.013 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Proceeding with manual copy of upgrade pak using ssh.
    2022-10-28 13:29:32.141 INFO [pool-3-thread-13] c.v.v.l.u.SshUtils - -- Uploading file --> ssh://root@10.109.44.140/tmp
    2022-10-28 13:29:50.961 INFO [pool-3-thread-13] c.v.v.l.u.SshUtils - -- Uploaded file sucessfully
    2022-10-28 13:29:52.652 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Return status code for vrli: 200
    2022-10-28 13:29:52.654 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- pak file upgrade version: 8.10.0-20623770
    2022-10-28 13:29:52.655 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- vRLI upgrade version from pak file: 8.10.0-20623770

    ### Upgrade is now triggered ###
    2022-10-28 13:29:53.108 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Pak was loaded successfully. Eula pending. Triggerring upgrade
    2022-10-28 13:29:53.109 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- pak file successfully uploaded for version: 8.10.0-20623770. Triggerring the upgrade now
    2022-10-28 13:29:53.421 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Upgrade triggered. Status is: Upgrading
    2022-10-28 13:29:53.421 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Upgrade status: Upgrading. Waiting for upgrade to finish
    2022-10-28 13:29:53.421 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- vRLI is not up. sleeping for 3 min

    Reference: /storage/var/loginsight/upgrade.log (at this point, switch over to the vRLI logs)

    ### Checks are performed on disk space and certificates ###
    2022-10-28 13:29:51,927 loginsight-upgrade INFO Certificate verified: VMware-vRealize-Log-Insight.cert: C = US, ST = California, L = Palo Alto, O = "VMware, Inc." error 18 at 0 depth lookup:self signed certificate OK
    2022-10-28 13:29:51,951 loginsight-upgrade INFO Signature of the manifest validated: Verified OK
    2022-10-28 13:29:52,559 loginsight-upgrade INFO Current version is 8.8.2-20056468 and upgrade version is 8.10.0-20623770. Version Check successful!
    2022-10-28 13:29:52,560 loginsight-upgrade INFO Available Disk Space at /tmp: 3440889856
    2022-10-28 13:29:52,560 loginsight-upgrade INFO Disk Space Check successful!
    2022-10-28 13:29:52,560 loginsight-upgrade INFO Available Disk Space at /storage/core: 16030388224
    2022-10-28 13:29:52,560 loginsight-upgrade INFO Disk Space Check successful!
    2022-10-28 13:29:52,560 loginsight-upgrade INFO Available Disk Space at /storage/var: 11989639168
    2022-10-28 13:29:52,560 loginsight-upgrade INFO Disk Space Check successful!
    2022-10-28 13:29:52,561 loginsight-upgrade INFO Loading eula license successful!
    2022-10-28 13:29:52,561 loginsight-upgrade INFO Done!
    2022-10-28 13:29:53,564 loginsight-upgrade INFO Certificate verified: VMware-vRealize-Log-Insight.cert: C = US, ST = California, L = Palo Alto, O = "VMware, Inc." error 18 at 0 depth lookup:self signed certificate OK

    ### Version Check is done ###
    2022-10-28 13:29:53,579 loginsight-upgrade INFO Signature of the manifest validated: Verified OK
    2022-10-28 13:29:53,805 loginsight-upgrade INFO Current version is 8.8.2-20056468 and upgrade version is 8.10.0-20623770. Version Check successful!
    2022-10-28 13:29:53,806 loginsight-upgrade INFO Available Disk Space at /tmp: 3440914432
    2022-10-28 13:29:53,806 loginsight-upgrade INFO Disk Space Check successful!
    2022-10-28 13:29:53,806 loginsight-upgrade INFO Available Disk Space at /storage/core: 16030560256
    2022-10-28 13:29:53,806 loginsight-upgrade INFO Disk Space Check successful!
    2022-10-28 13:29:53,807 loginsight-upgrade INFO Available Disk Space at /storage/var: 11989626880
    2022-10-28 13:29:53,808 loginsight-upgrade INFO Disk Space Check successful!
    2022-10-28 13:30:53,506 loginsight-upgrade INFO Checksum validation successful!
    2022-10-28 13:30:53,512 loginsight-upgrade INFO Attempting to upgrade to version 8.10.0-20623770

    ### upgrade-driver script is triggered ###
    2022-10-28 13:30:53,925 upgrade-driver INFO Starting 'upgrade-driver' script ...
    2022-10-28 13:30:53,927 upgrade-driver INFO Start processing the manifest file ...
    2022-10-28 13:30:53,927 upgrade-driver INFO Log Insight TO_VERSION is manifest file is 8.10.0-20623770
    2022-10-28 13:30:53,927 upgrade-driver INFO Parsed version is 8.10.0-20623770
    2022-10-28 13:30:53,927 upgrade-driver INFO Creating file /storage/core/upgrade-version to store upgrade version.
    2022-10-28 13:30:53,943 upgrade-driver INFO The file /storage/core/upgrade-version is created successfully.
    2022-10-28 13:31:07,318 upgrade-driver INFO Cassandra snapshot run time: 0:00:13.369487
    2022-10-28 13:31:07,714 upgrade-driver INFO Start processing key list ...
    2022-10-28 13:31:07,714 upgrade-driver INFO Start processing rpm list ...
    2022-10-28 13:31:07,714 upgrade-driver INFO Rpm by name upgrade-image-8.10.0-20623770.rpm
    2022-10-28 13:32:34,318 upgrade-driver INFO INFO: Running /storage/core/upgrade/kexec-li - Resize|Partition|Boot ...
    Starting to run kexec-li script ...
    Reading and saving /etc/ssh/sshd_config
    Reading and saving old ssh keys if key based Authentication is enabled
    cp: cannot stat '/root/.ssh//id_rsa': No such file or directory
    cp: cannot stat '/root/.ssh//id_rsa.pub': No such file or directory
    cp: cannot stat '/root/.ssh//known_hosts': No such file or directory
    Reading and saving /etc/hosts
    Reading and saving ssh host keys
    Reading and saving /var/lib/loginsight-agent/liagent.ini
    Reading and saving hostname
    Reading and saving old cassandra keystore
    Failed copying /usr/lib/loginsight/application/lib/apache-cassandra-*/conf/keystore*
    Reading and saving old default keystore
    Reading and saving old default truststore
    Reading and saving old tomcat configs
    chmod ing /storage/core/upgrade/vmdk-extracted-root/usr/lib/loginsight/application/etc/3rd_config/keystore*
    chmod ing /storage/core/upgrade/vmdk-extracted-root/usr/lib/loginsight/application/etc/truststore*
    Reading and saving old loginsight.conf
    Reading and saving old password in /etc/shadow
    Root password info root P
    07/29/2022 0 365 7 -1
    Root password change date is 07/29/2022
    Root password is set. Password reset will not be required on first login.
    Reading and saving /etc/fstab
    Reading and saving cacerts
    Copying java.security to java.security.old
    Reading and saving network configs
    Reading and saving resolv.conf
    Lazy partition is sda5
    sda partition count is 5
    /storage/core/upgrade/kexec-li script run took 74 seconds
    Partition sda5 , which is lazy partition, will be formatted and will become root partition
    Photon to Photon upgrade flow will be called, where base OS was Photon ...
    Starting to run photon2photon script ...
    Root partition copy took 84 seconds
    clean up upgrade-image.rpm
    Removing lock file
    /storage/core/upgrade/photon2photon-base-photon.sh script run took 87 seconds

    ### Appliance reboots ###
    Rebooting...

    ### After the reboot, the journalctl logs contain the following statements indicating that the upgrade was successful ###
    Oct 28 13:36:17 li.cap.org systemd[1]: Started Mark VMware vRealize Log Insight upgrade successful.
    -- Subject: Unit loginsight_mark_upgrade_successful.service has finished start-up
    -- Defined-By: systemd
    -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
    --
    -- Unit loginsight_mark_upgrade_successful.service has finished starting up.
    --
    -- The start-up result is RESULT.
    Oct 28 13:36:17 li.cap.org systemd[1]: Started Cleanup after successful upgrade of VMware vRealize Log Insight.
    -- Subject: Unit loginsight_cleanup_after_upgrade.service has finished start-up
    -- Defined-By: systemd
    -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
    --
    -- Unit loginsight_cleanup_after_upgrade.service has finished starting up.
    --
    -- The start-up result is RESULT.

    ### Cleanup done ###
    Oct 28 13:36:27 li.cap.org cleanup_after_upgrade.sh[2043]: Removed /etc/systemd/system/graphical.target.wants/loginsight_cleanup_after_upgrade.service.
    Oct 28 13:36:27 li.cap.org cleanup_after_upgrade.sh[2043]: Removed /etc/systemd/system/multi-user.target.wants/loginsight_cleanup_after_upgrade.service.

    The upgrade is now complete.

    Important logs to refer to:
      • vRSLCM Appliance: /var/log/vrlcm/vmware_vrlcm.log
      • vRLI Appliance: /storage/var/loginsight/upgrade.log
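Given the log excerpts above, a quick post-upgrade sanity check can be scripted. Below is a minimal sketch (the sample lines are taken from this post; the helper names and regular expression are our own, not part of any VMware tooling):

```python
import re
import shutil
import tempfile

# Sketch: sanity-check an upgrade from log lines like the ones above.
def parse_versions(line: str):
    """Pull (current, upgrade) versions out of the 'Version Check' log line."""
    m = re.search(r"Current version is (\S+) and upgrade version is (\S+)\.", line)
    return m.groups() if m else None

def numeric(version: str):
    """'8.10.0-20623770' -> (8, 10, 0): strip the build number and compare
    numerically. A plain string compare would wrongly rank '8.10' below '8.8'."""
    return tuple(int(part) for part in version.split("-")[0].split("."))

version_line = ("2022-10-28 13:29:52,559 loginsight-upgrade INFO Current version is "
                "8.8.2-20056468 and upgrade version is 8.10.0-20623770. Version Check successful!")
current, target = parse_versions(version_line)
assert numeric(target) > numeric(current)  # it really is a version bump

# The journal marker that confirms success after the reboot:
journal = ["Oct 28 13:36:17 li.cap.org systemd[1]: Started Mark VMware vRealize "
           "Log Insight upgrade successful."]
assert any("upgrade successful" in line for line in journal)

# And the same kind of free-space probe the installer logs for /tmp:
tmp = tempfile.gettempdir()  # /tmp on the appliance
print(f"Available Disk Space at {tmp}: {shutil.disk_usage(tmp).free}")
```

The numeric tuple comparison is the important detail: the installer strips the build suffix precisely because "8.10.0" must sort after "8.8.2", which lexicographic comparison would get wrong.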

  • Upgrading VMware Aria Suite Lifecycle | 8.12 to 8.14 | Demo |

    Pre-Requisites
      • Take snapshots
      • Review the release notes

    Video

    Important Logs

    Pre-Upgrade & Upgrade Phase:
      • /var/log/vmware/capengine/cap-non-lvm-update/workflow.log
      • /var/log/vmware/capengine/cap-non-lvm-update/installer-<>.log

    Post-Update Phase:
      • /var/log/bootstrap/postupdate.log
      • /data/script.log
      • /var/log/vrlcm/vmware_vrlcm.log
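When an upgrade misbehaves, it helps to sweep all of the listed logs for errors in one pass rather than tailing them individually. A minimal sketch (the paths come from the list above; the helper is illustrative and skips files that don't exist, since not every phase writes every log):

```python
from pathlib import Path

# Sketch: sweep upgrade-related logs for ERROR lines. Illustrative helper;
# missing files are skipped because a phase may not have run yet.
LOGS = [
    "/var/log/bootstrap/postupdate.log",
    "/data/script.log",
    "/var/log/vrlcm/vmware_vrlcm.log",
]

def find_errors(paths) -> dict:
    """Map each existing log file to the ERROR lines it contains."""
    hits = {}
    for p in map(Path, paths):
        if not p.exists():
            continue
        lines = [ln for ln in p.read_text(errors="replace").splitlines() if "ERROR" in ln]
        if lines:
            hits[str(p)] = lines
    return hits

# Demo with a throwaway file instead of the real appliance paths:
import tempfile, os
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("INFO all good\nERROR something failed\n")
print(find_errors([f.name]))  # the ERROR line shows up, keyed by file path
os.unlink(f.name)
```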

  • VMware Aria Suite Lifecycle Product Support Packs and Patches

    Product Support Packs and Patches are cumulative. The table below describes their release dates and hierarchy. For example, if I install VMware Aria Suite Lifecycle 8.14 PSPACK 7, there is no need to install VMware Aria Suite Lifecycle 8.14 Patch 1 separately, as its changes are rolled up into the latest PSPACK or Patch on that version. Hope it's clear.

    VMware Aria Suite Lifecycle 8.18.0
      • VMware Aria Suite Lifecycle 8.18.0 GA (23 June 2024): Provides support for VMware Aria Automation 8.18.0, VMware Aria Operations 8.18.0, VMware Aria Orchestrator

    VMware Aria Suite Lifecycle 8.16.0
      • VMware Aria Suite Lifecycle 8.16.0 PSPACK 1 (21 March 2024): Provides support for VMware Aria Automation 8.16.2, VMware Aria Orchestrator 8.16.2, VMware Aria Automation Config 8.16.2
      • VMware Aria Suite Lifecycle 8.16.0 GA (29 Feb 2024): Provides support for VMware Aria Automation 8.16.1, VMware Aria Automation Orchestrator 8.16.1, VMware Aria Automation Config 8.16.1, VMware Aria Operations 8.16.1, VMware Aria Operations for Logs 8.16.0, VMware Aria Operations for Networks 6.12.1

    VMware Aria Suite Lifecycle 8.14.0
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 8 (5 Mar 2024): Provides support for customers who have VCF-aware VMware Aria Suite Lifecycle 8.14.0 to upgrade to version 8.16.0
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 7 (23 Feb 2024): Provides support for VMware Aria Operations 8.16.0
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 6 (7 Feb 2024): Provides support for VMware Aria Operations for Networks 6.12.0
      • VMware Aria Suite Lifecycle 8.14 PSPACK 5.0 (23 Jan 2024): Provides support for VMware Aria Automation Config 8.16.0
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 4 (16 Jan 2024): Provides support for VMware Aria Automation 8.16.0 and VMware Aria Automation Orchestrator 8.16.0
      • VMware Aria Suite Lifecycle 8.14.0 Patch 1 (12 Dec 2023): Bug fixes:
          • Removal of SHA1 hashes from VMware Aria Suite Lifecycle
          • Resolution for deploying VMware Aria Operations 8.12.1 Cloud Proxy
          • Addressing offset calculation issues with perpetual license units in 'CPU(s)' or 'PLU(S)'
          • Improvements to VMware Aria Suite Lifecycle upgrade functionality, including enabling upgrade checks and upgrades over an HTTP proxy for air-gapped instances
          • Addition of support for VMware Identity Manager Windows connector mapping under product binaries
          • Rectification of the public API '/lcm/lcops/api/v2/settings/proxies' to facilitate the addition of a proxy without authentication
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 3 (30 Nov 2023): Provides support for VMware Aria Automation 8.14.1, VMware Aria Automation Orchestrator 8.14.1 and VMware Aria Automation Config 8.14.1
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 2 (24 Nov 2023): Provides support for VMware Aria Operations 8.14.1 and VMware Aria Operations for Logs 8.14.1
      • VMware Aria Suite Lifecycle 8.14.0 PSPACK 1 (08 Nov 2023): Adds support for VMware Aria Operations 8.10.2, 8.10.1, 8.10.0; VMware Aria Automation 8.12.2, 8.12.1, 8.12.0, 8.11.2, 8.11.1, 8.11.0; VMware Aria Operations for Logs 8.10.2, 8.8.2
      • VMware Aria Suite Lifecycle 8.14.0 GA (19 Oct 2023): Provides support for VMware Aria Automation 8.14.0, VMware Aria Automation Orchestrator 8.14.0, VMware Aria Automation Config 8.14.0, VMware Aria Operations 8.14.0, VMware Aria Operations for Logs 8.14.0, VMware Aria Operations for Networks 6.11
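Because PSPacks and Patches on a version line are cumulative, the question "do I still need this patch?" reduces to a release-date comparison. A toy model of that rule, using dates from the 8.14.0 table above (the dictionary and function are illustrative, not a VMware tool):

```python
from datetime import date

# Toy model of the cumulative rule described above: a PSPACK/Patch rolls up
# everything released for the same version line on or before its own date.
# Release dates are taken from the 8.14.0 table in this post.
releases = {
    "8.14.0 GA":       date(2023, 10, 19),
    "8.14.0 PSPACK 1": date(2023, 11, 8),
    "8.14.0 Patch 1":  date(2023, 12, 12),
    "8.14.0 PSPACK 4": date(2024, 1, 16),
    "8.14.0 PSPACK 7": date(2024, 2, 23),
}

def is_rolled_up(installed: str, candidate: str) -> bool:
    """True when `candidate` is already included in `installed`."""
    return releases[candidate] <= releases[installed]

# With PSPACK 7 installed, Patch 1 need not be applied separately:
print(is_rolled_up("8.14.0 PSPACK 7", "8.14.0 Patch 1"))   # True
# But Patch 1 alone does not include the later PSPACK 4:
print(is_rolled_up("8.14.0 Patch 1", "8.14.0 PSPACK 4"))   # False
```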

  • Upgrade Paths vSphere to VCF 9.0.x | How do we get there from here? |

    Navigating Your Path to VCF 9: A Unified Journey for Every Environment

    The vision for a truly unified private cloud is finally here with VMware Cloud Foundation (VCF) 9. But if you are looking at your current data center footprint, you might be asking yourself a very practical question: How do we get there from here? Every organization's infrastructure is unique. Your starting point might be a traditional, rock-solid vSphere environment that you've relied on for years. Perhaps you are already on the VCF journey, running an earlier iteration of VMware Cloud Foundation. Or maybe you have a complex, hybrid environment—vSphere augmented with standalone management components like the Aria Suite (formerly vRealize). The good news is that the path to a modernized, integrated cloud platform doesn't have to be a guessing game. In this post, we are going to demystify the journey to VCF 9. I will break down the exact upgrade paths and migration strategies based on your current deployment architecture. Whether you are starting from bare-bones compute virtualization or an existing, fully managed SDDC stack, we'll map out your specific route to unlocking the full potential, simplified operations, and centralized management of VCF 9. Let's dive into your blueprint for the future.

    vSphere 8 to VCF 9.0.x

    Starting State
      • vCenter 8.x ✅
      • ESX ✅

    Step-1: Upgrade to vSphere 9.0.x

    Upgrading from vCenter 8.0 Update 2/3 to vCenter 9.0 is a two-step process. The first step involves deploying a new vCenter instance alongside the current vCenter appliance. This new instance will initially be deployed using a temporary IP address. After the new appliance has been deployed, the configuration from the current vCenter instance is copied to the new appliance, after which a brief outage is taken to "switch" to the upgraded appliance.
    Note: the time required to upgrade the vCenter instance to version 9 will vary based on the size of the appliance, the size of the vCenter inventory, and the type of storage being used.

    Log in to the vSphere client as administrator@vsphere.local. In the vCenter Details pane, verify the vCenter is now running version 9.0.

    With the vCenter upgraded to version 9, the next step is to upgrade the ESX hosts to version 9. To upgrade the hosts to ESX 9.0, we first need to create a new image with the new version.

    Import the 9.0 software depot in preparation for creating a new image:
      • Click ACTIONS, then Import Updates
      • Click BROWSE, select VMware-ESXi-9.0.0.0.xxxxx-depot.zip, and click Open
      • Click Software Depot

    With the ESX 9.0.0.0 software depot successfully imported, create a new vLCM image using this software depot:
      • Click Image Library, then CREATE IMAGE
      • In the Image Name field, enter the vLCM image name esx-9-xxxxx
      • From the Select Version dropdown, select 9.0.0.0.xxxxx
      • Click VALIDATE. A blue banner reports that the image is validated.
      • Scroll down and click Save

    Assign the ESX 9.0 image to the vSphere cluster:
      • Click the vSphere menu icon in the top right (near the words vSphere Client), then click Inventory
      • To assign the new vLCM image to the cluster, click EDIT
      • From the ESXi Version dropdown, select 9.0.0.0.xxxxxx
      • Click SAVE

    Upon assigning the new image, a compliance check is automatically performed. Here you see that all 4 hosts are now flagged as non-compliant. To bring the hosts back into compliance, remediate the cluster:
      • Scroll down and click REMEDIATE (ALL)
      • Review the Remediation Impact summary. It shows that all hosts will be upgraded to ESX version 9 as part of the remediation. As noted, the remediation order is determined at run time and the hosts are updated one at a time.
      • Click START REMEDIATION
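The rolling remediation this kicks off processes hosts one at a time so the cluster never goes fully dark. A toy model of that loop (hypothetical data structures; this is not the vLCM API):

```python
# Toy model of vLCM rolling remediation: one host at a time, skipping hosts
# that are already compliant with the target image. Hypothetical structures,
# not the actual vLCM API.
def remediate_cluster(hosts: list, image_version: str) -> list:
    """Upgrade non-compliant hosts in order; return the names remediated."""
    remediated = []
    for host in hosts:  # remediation order is decided at run time
        if host["version"] == image_version:
            continue  # compliance check passes, nothing to do
        # vMotion workloads off, then enter maintenance mode
        host["maintenance"] = True
        # install the new image and reboot
        host["version"] = image_version
        # exit maintenance mode and re-check compliance
        host["maintenance"] = False
        assert host["version"] == image_version
        remediated.append(host["name"])
    return remediated

cluster = [{"name": f"esx-{i}", "version": "8.0.0", "maintenance": False}
           for i in range(1, 5)]
print(remediate_cluster(cluster, "9.0.0.0"))  # ['esx-1', 'esx-2', 'esx-3', 'esx-4']
```

A second run over the same cluster returns an empty list, mirroring a compliance check where all hosts already match the assigned image.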
    Here you see the upgrade begins with the host esx-xxx.arun.com. vMotion is used to migrate running VMs off the host, after which it is placed into maintenance mode. Once in maintenance mode, the new image is installed on the host, and after the image has been installed the host is rebooted. After the host reboots, it is taken out of maintenance mode, and a compliance check is performed to validate that the upgrade was successful. With the first host done, vLCM then proceeds to repeat the steps for the remaining hosts in the cluster. After a while, you see all hosts have been upgraded, as indicated by the Image Compliance status showing "All hosts in this cluster are compliant".

    Click Summary, then click esx-xxx.arun.com. The version number shown in the Host Details pane confirms the host is now running ESX version 9.0.

    Note: remember to update the vSAN disk version and vSphere Distributed Switch versions for your environment.

    Step-2: Deploy VCF Installer

    With the vSphere upgrade complete, you are ready to deploy the VCF Installer:
      • Click on the cluster, then Actions, then Deploy OVF Template…
      • Click Local file, then UPLOAD FILES
      • Select VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova and click Open

    Note: The VCF Installer is a service that runs on the SDDC Manager appliance. As such, to deploy the VCF Installer you deploy an instance of the SDDC Manager OVA. By default, each new deployment of the SDDC Manager runs in "installer mode".

    Click NEXT, then confirm the virtual machine name and location.

    Note: The appliance will start in "installer mode" and then get switched to "SDDC Manager mode". Hence, name the appliance based on the end state. Click NEXT.

    Once the VCF Installer has been deployed, power it on: click <> and then the Power On icon. To access the VCF Installer, use a web browser to connect to its Fully Qualified Domain Name (FQDN).
    Open a new browser tab and enter the FQDN in the URL field, e.g. https://vcfinstaller-name.arun.com. Log in as the user "admin@local" using the password assigned during the OVA deployment.

    Step-3: Download VCF 9.0.x binaries

    The VCF Installer supports both an online depot and an offline depot.

    Use the online depot when you have internet access (either direct or through a proxy). With the online depot you simply provide your Broadcom software token to authenticate, after which you are able to download the binaries.

    The offline depot is for sites that do not have direct access to the internet. To use the offline depot, set up a local web server and use the VCF Download Tool to authenticate and download the VCF 9.0 binaries to that web server. Then point the VCF Installer at the web server and use it as an offline depot.

    In this example, we will configure an online depot:
      • Click CONFIGURE under "Connect to the online depot"
      • In the Download Token field, enter your software download token
      • Click AUTHENTICATE

    Note: if you are using a proxy server to access the internet, toggle the "enable proxy server" switch. In this example, we are not using a proxy server.

    Once successfully authenticated, the VCF 9.0 binaries are made available for download. Scroll down, select all binaries, and click DOWNLOAD. The binaries are downloaded.

    Step-4: Deploy VCF Fleet

    With the binaries downloaded, you are ready to converge your existing vSphere environment into a new VCF Fleet. To deploy a new VCF Fleet, use the deployment wizard: click DEPLOYMENT WIZARD, then VMware Cloud Foundation.

    Begin by indicating whether you will deploy a new VCF Fleet or add a new VCF Instance to an existing VCF Fleet. When choosing to "Deploy a new VCF fleet", a new instance of VCF Operations and VCF Automation will be deployed in conjunction with deploying a new VCF management domain.
    If you already have a VCF Fleet deployed, with a corresponding instance of VCF Operations and VCF Automation, use the "Deploy a VCF instance in an existing VCF fleet" option to add a new VCF management domain and configure it against the existing VCF Operations and VCF Automation.

    When deploying a new VCF fleet, you can choose whether to deploy a new instance of vCenter and VCF Operations, or to converge an existing instance that is already running in your environment. When converging existing components, the VCF Installer configures the existing components as part of the VCF fleet deployment. Otherwise, new instances are deployed and configured.

    In this scenario, because we are upgrading from existing infrastructure where we have just a vCenter and no VMware Aria products, check the box to converge the VMware vCenter: click the check box next to VMware vCenter, then click NEXT.

    Next, provide details for the new VCF instance that will be deployed with the VCF Fleet. This includes a name for the VCF instance, a name for the management domain, and the deployment model.

    Choose the deployment model: Simple or High Availability. With the Simple deployment model, the NSX Manager, VCF Operations, and VCF Automation are each configured as single-node instances. With the High Availability model, these components are deployed as three-node instances, providing a higher degree of availability as well as additional capacity for improved scalability. Click NEXT.

    Because we don't have VCF Operations, a new VCF Operations instance is deployed: one Analytics node and a VCF Operations Collector (Cloud Proxy), along with a VCF Operations fleet management appliance.
    The following inputs are needed here:
      • VCF Operations load balancer FQDN
      • VCF Operations Master (Analytics) FQDN
      • VCF Operations Collector FQDN
      • VCF Operations fleet management FQDN
      • Password (asked for only if the autogenerate option was not selected earlier)

    Provide the information needed to deploy a new VCF Automation instance with the new VCF fleet. The following inputs are needed here:
      • VCF Automation FQDN
      • Node Name Prefix
      • Internal Cluster CIDR
      • Password (asked for only if the autogenerate option was not selected earlier)
      • Node IP-1 and IP-2

    Note: If you have an existing VMware Aria Automation instance that you want to use with the new VCF Fleet, click the check box next to "I want to connect a VCF Automation Instance later". In this case you proceed with the VCF Fleet deployment and, after the deployment completes, use the Fleet Management feature in VCF Operations to import the existing VMware Aria Automation instance and upgrade it to VCF Automation. Click NEXT.

    You then provide the FQDN along with the SSO login credentials for the vCenter instance that will be converged into the VCF management domain. Click NEXT. Here again, check the boxes to accept the vCenter certificate and SSH thumbprint, then click CONFIRM.

    Next, provide the details for the NSX instance that will be deployed in the VCF management domain. Specify the appliance size along with the cluster FQDN and appliance FQDN (remember, since you chose the Simple deployment model, only one appliance will be deployed).

    NSX virtual networking uses overlay networks. Here you can toggle the switch to indicate whether you want to configure the NSX overlay networks on the management network as part of the NSX deployment, or skip the overlay network setup during the deployment and enable it later. In larger environments it is typically recommended to move the overlay networking onto separate networks/VLANs.
    The NSX overlay networking can easily be (re)configured after the deployment using the NSX Manager. Provide the passwords to use for the admin, root, and audit accounts, then click NEXT.

    As previously mentioned, the VCF Installer is a service that runs on the SDDC Manager appliance. You will recall that when we deployed the VCF Installer, we downloaded and deployed the SDDC Manager OVA template. There will be a prompt for the FQDN and administrator password that will be used to convert the appliance to the SDDC Manager. Click NEXT.

    With the required information gathered, you are presented with a summary of the VCF deployment. Scroll down, review the VCF deployment information, and click NEXT.

    The configuration is then validated. Depending on your configuration, the validation will typically take between 5 and 10 minutes to complete. Read and acknowledge the warnings. One of the warnings can be that vSphere Standard Switches (VSS) were detected on the ESX hosts; this warning can be ignored.

    With the validation successful and the warnings acknowledged, we are ready to proceed with the VCF Fleet deployment. Click DEPLOY.

    The deployment now starts and completes. As stated, in this scenario we had an existing vCenter, which was converged into the VCF management domain. NSX was deployed and configured, the VCF Installer appliance was reconfigured to become the SDDC Manager, the new VCF management domain was added to VCF Fleet Management and VCF Operations, and a new instance of VCF Automation was deployed.

    Note: depending on your topology and whether you are deploying new components or converging existing ones, a VCF Fleet deployment can take between 90 minutes and 4 hours. We will pick up after the deployment has completed.

    The VCF Installer provides a detailed audit trail of the tasks that were performed as part of the fleet deployment. Here you see the detailed list of tasks that were performed.
    You can use this view to monitor the deployment while it is in progress. If any problems are encountered, you use this view to troubleshoot and diagnose the issue. If the issue can be resolved, you can resume the workflow, and it will pick up from the most recent tasks.

    With the VCF Fleet deployment complete, log in to VCF Operations to view details about the private cloud: click OPEN VCF OPERATIONS UI, then LOG IN to log in to VCF Operations as the admin user.

    On the VCF Operations home page, observe that the VCF management domain has automatically been registered as an account with VCF Operations. Also note the new Fleet Management section of the VCF Operations UI. Navigate to the Inventory to view details about the management domain: click Fleet Management, then Inventory.

    We see a summary of the VCF management domain. To see additional details, click the vCenter instance.

    Switch to the vSphere Client to observe the changes that have been made to the vCenter inventory as part of the convergence to a VCF management domain: click the Login browser tab, then LOG IN to log in as the administrator@vsphere.local user.

    Here we see the vCenter inventory. Note that the upgrade to VCF 9.0 was largely non-disruptive to the running workloads. One notable change is the creation of a new resource pool in the inventory. Once you expand the resource pool, you see that several of the infrastructure components that make up the VCF management domain have been moved into it. These include the vCenter Server, the SDDC Manager, the VCF Operations Collector, and VCF Automation.

    Recap
      • You upgraded from vSphere 8.0 Update 2 to vSphere 9.0. This involved upgrading both the vCenter and the ESX hosts.
      • You deployed the VCF Installer.
      • You configured the online depot and downloaded the VCF 9.0 binaries to the VCF Installer.
You used the VCF Installer to converge the vSphere environment.
You deployed VCF Operations, the VCF Operations Collector (Cloud Proxy), and the VCF Operations fleet management appliance.
You deployed VCF Automation.
Step-5: License your VCF
Following the upgrade, the next step is to use VCF Operations to register the VCF Operations with the Business Services Console in order to download a new license file and license the new VCF instance.
vSphere 8 to VCF 9.0.x with VMware Aria Operations (No VMware Aria Suite Lifecycle)
Source Available
vCenter 8.x ✅
ESX ✅
VMware Aria Operations 8.x ✅
vSphere 8 to VCF 9.0.x with VMware Aria Operations and VMware Aria Operations for Logs (No VMware Aria Suite Lifecycle)
Source Available
vCenter 8.x ✅
ESX ✅
VMware Aria Operations 8.x ✅
VMware Aria Operations for Logs 8.x ✅
Step-1: Upgrade to VCF Operations 9.0.x
Obtain the software upgrade PAK file.
Snapshot the VMware Aria Operations 8.x cluster. It is mandatory to create a snapshot of each node in a cluster before you update a VMware Aria Operations cluster. Once the update is complete, you must delete the snapshots to avoid performance degradation. For more information about snapshots, see the vSphere Virtual Machine Administration documentation.
Log in to the VMware Aria Operations administrator interface at https://node-FQDN-or-IP-address/admin.
Click Take Offline under the cluster status.
When all nodes are offline, open the vSphere Client.
Right-click a VMware Aria Operations virtual machine.
Click Snapshot and then click Take Snapshot.
Name the snapshot. Use a meaningful name such as "Pre-Update."
Uncheck the Snapshot the Virtual Machine Memory check box.
Uncheck the Quiesce Guest File System (Needs VMware Tools installed) check box.
Click OK.
Repeat these steps for each node in the cluster.
Log in to the primary node VMware Aria Operations administrator interface of your cluster at https://primary-node-FQDN-or-IP-address/admin.
Click Software Update in the left pane.
Click Install a Software Update in the main pane.
Follow the steps in the wizard to locate and install your PAK file. This updates the OS on the virtual appliance and restarts each virtual machine.
Read the End User License Agreement and Update Information, and click Next.
Click Install to complete the installation of the software update.
Log back in to the primary node administrator interface. The main Cluster Status page appears and the cluster goes online automatically. The status page also displays the Bring Online button, but do not click it.
Clear the browser caches and, if the browser page does not refresh automatically, refresh the page. The cluster status changes to Going Online. When the cluster status changes to Online, the upgrade is complete.
Click Software Update to check that the update is done. A message indicating that the update completed successfully appears in the main pane.
When you update VMware Aria Operations to the latest version, all nodes are upgraded by default. If you are using cloud proxies, the cloud proxy upgrades start after the VMware Aria Operations upgrade has completed successfully.
Step-2: Upgrade to vSphere 9.0.x
Upgrading from vCenter 8.0 Update 2/3 to vCenter 9.0 is a two-step process. The first step involves deploying a new vCenter instance alongside the current vCenter appliance. This new instance is initially deployed using a temporary IP address. After the new appliance has been deployed, the configuration from the current vCenter instance is copied to the new appliance, after which a brief outage is taken to "switch" to the upgraded appliance.
Note: the time required to upgrade the vCenter instance to version 9 will vary based on the size of the appliance, the size of the vCenter inventory, and the type of storage being used.
Log in to the vSphere Client as administrator@vsphere.local.
In the vCenter Details pane, verify the vCenter is now running version 9.0.
With the vCenter upgraded to version 9, the next step is to upgrade the ESX hosts to version 9.
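The two-stage vCenter upgrade described above can be sketched as a toy model that makes the outage window explicit. Everything here (the `Appliance` class, the stage helpers, the names and IPs) is an illustrative stand-in, not a VMware API:

```python
# Toy model of the migration-style vCenter upgrade: deploy the new appliance
# on a temporary IP, copy the configuration, then briefly switch identity.
# All names, IPs, and helpers are illustrative stand-ins, not VMware APIs.
from dataclasses import dataclass, field

@dataclass
class Appliance:
    fqdn: str
    ip: str
    version: str
    config: dict = field(default_factory=dict)
    serving: bool = False

def stage1_deploy(temp_ip: str) -> Appliance:
    """Stage 1: deploy the new 9.0 appliance side by side on a temporary IP."""
    return Appliance(fqdn="", ip=temp_ip, version="9.0")

def stage2_switchover(old: Appliance, new: Appliance) -> None:
    """Stage 2: copy the configuration, then take a brief outage while the
    new appliance assumes the old appliance's network identity."""
    new.config = dict(old.config)        # copied while the old vCenter is still live
    old.serving = False                  # the brief outage starts here
    new.fqdn, new.ip = old.fqdn, old.ip  # identity moves to the upgraded appliance
    new.serving = True                   # outage ends

old = Appliance("vc01.example.com", "10.0.0.10", "8.0U3",
                {"sso_domain": "vsphere.local"}, serving=True)
new = stage1_deploy("10.0.0.99")
stage2_switchover(old, new)
print(new.fqdn, new.version, new.serving)  # vc01.example.com 9.0 True
```

The point of the model: nothing about the original appliance changes until stage 2, which is why the upgrade is safe to prepare during business hours and only the switchover needs a maintenance window.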
To upgrade the hosts to ESX 9.0, we first need to create a new image with the new version.
Import the 9.0 software depot in preparation for creating a new image.
Click ACTIONS
Click Import Updates
Click BROWSE
Click to select VMware-ESXi-9.0.0.0.xxxxx-depot.zip
Click Open
Click Software Depot
With the ESX 9.0.0.0 software depot successfully imported, create a new vLCM image using this software depot.
Click Image Library
Click CREATE IMAGE
Click the Image Name field to enter the vLCM image name esx-9-xxxxx
Click the Select Version dropdown
Click to select 9.0.0.0.xxxxx
Click VALIDATE. Here you see a blue banner reporting that the image is validated.
Click the scroll bar to scroll down
Click Save
Assign an ESX 9.0 image to the vSphere cluster.
Click the vSphere menu icon in the top right (near the words vSphere Client)
Click Inventory
To assign the new vLCM image to the cluster, edit the image.
Click EDIT
Click the ESXi Version dropdown
Click to select 9.0.0.0.xxxxxx
Click SAVE
Upon assigning the new image, a compliance check is automatically performed. Here you see that all 4 hosts are now flagged as non-compliant. To bring the hosts back into compliance, remediate the cluster.
Click the scroll bar to scroll down
Click REMEDIATE (ALL)
Review the Remediation Impact summary. It shows that all hosts will be upgraded to ESX version 9 as part of the remediation. As noted, the remediation order is determined at run time and the hosts are updated one at a time.
Click START REMEDIATION. Here you see the upgrade begins with the host esx-xxx.arun.com.
vMotion is used to migrate running VMs off the host, after which it is placed into maintenance mode. Once in maintenance mode, the new image is installed on the host. After the image has been installed, the host is rebooted. After the host reboots, it is taken out of maintenance mode. A compliance check is performed to validate the upgrade was successful.
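The per-host remediation sequence just described can be sketched as a simple loop. The steps here are simulated stand-ins (string events), not real vLCM or vSphere API calls; the sketch exists to make the "one host at a time" property concrete:

```python
# Simulated sketch of vLCM rolling remediation: each host runs the full
# evacuate -> maintenance -> install -> reboot -> compliance-check sequence
# before the next host starts, so only one host is ever out of service.
STEPS = ["vmotion-evacuate", "enter-maintenance", "install-image",
         "reboot", "exit-maintenance", "compliance-check"]

def remediate_cluster(hosts):
    events = []
    for host in hosts:            # remediation order is decided at run time
        for step in STEPS:
            events.append((host, step))
    return events

events = remediate_cluster(["esx-01", "esx-02", "esx-03", "esx-04"])

# Invariant: never two hosts in maintenance mode at the same time.
in_maintenance = set()
for host, step in events:
    if step == "enter-maintenance":
        in_maintenance.add(host)
        assert len(in_maintenance) == 1
    elif step == "exit-maintenance":
        in_maintenance.remove(host)

print(len(events))  # 4 hosts x 6 steps = 24
```

This is why remediating a 4-host cluster takes roughly four times as long as a single host: the serialization is deliberate, trading speed for continuous cluster capacity.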
With the first host done, vLCM then proceeds to repeat the steps for the remaining hosts in the cluster. After a while you see all hosts have been upgraded, as indicated by the Image Compliance status showing "All hosts in this cluster are compliant".
Click Summary
Click esx-xxx.arun.com
The version number shown in the Host Details pane confirms the host is now running ESX version 9.0.
Note: remember to update the vSAN disk format version and vSphere Distributed Switch versions for your environment.
Step-3: Deploy VCF Installer
With the vSphere upgrade complete, you are ready to deploy the VCF Installer.
Click on the cluster
Click Actions
Click Deploy OVF Template…
Click Local file
Click UPLOAD FILES
Click to select VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova
Click Open
Note: The VCF Installer is a service that runs on the SDDC Manager appliance. As such, to deploy the VCF Installer you deploy an instance of the SDDC Manager OVA. By default, each new deployment of the SDDC Manager will run in "installer mode".
Click NEXT
Confirm the virtual machine name and location.
Note: The appliance will start in "installer mode" and then get switched to "SDDC Manager mode". Hence you name the appliance based on the end state.
Click NEXT
Once the VCF Installer has been deployed, power it on.
Click the VCF Installer virtual machine
Click the Power On icon
To access the VCF Installer, use a web browser to connect to its Fully Qualified Domain Name (FQDN).
Click + to open a new browser tab
Click in the URL field to enter the FQDN https://vcfinstaller-name.arun.com
Log in as the user "admin@local" using the password assigned during the OVA deployment.
Step-4: Download VCF 9.0.x binaries
The VCF Installer supports both an online depot and an offline depot. Use the online depot when you have internet access (either direct or through a proxy). With the online depot you simply provide your Broadcom software token to authenticate, after which you are able to download the binaries.
The offline depot is for sites that do not have direct access to the internet. To use the offline depot, set up a local web server and use the VCF Download Tool to authenticate and download the VCF 9.0 binaries to the web server. Then point the VCF Installer to the web server and use it as an offline depot.
In this example, we will configure an online depot.
Click CONFIGURE under Connect to the online depot
Click the Download Token field to enter your software download token
Click AUTHENTICATE
Note: if you are using a proxy server to access the internet, you would toggle the enable proxy server switch. In this example, we are not using a proxy server.
Once successfully authenticated, the VCF 9.0 binaries are made available for download.
Click the scroll bar to scroll down
Click to select all binaries
Click DOWNLOAD
The binaries are downloaded.
Step-5: Deploy VCF Fleet
With the binaries downloaded, you are ready to converge your existing vSphere environment into a new VCF Fleet. To deploy a new VCF Fleet you use the deployment wizard.
Click DEPLOYMENT WIZARD
Click VMware Cloud Foundation
Begin by indicating if you will deploy a new VCF Fleet or add a new VCF instance to an existing VCF Fleet. When choosing to "Deploy a new VCF fleet", a new instance of VCF Operations and VCF Automation will be deployed in conjunction with deploying a new VCF management domain. If you already have a VCF Fleet deployed, with a corresponding instance of VCF Operations and VCF Automation, use the "Deploy a VCF instance in an existing VCF fleet" option to add a new VCF management domain and configure it with the existing VCF Operations and VCF Automation.
When deploying a new VCF fleet you can choose whether to deploy a new instance of vCenter and VCF Operations or to converge an existing instance that is running in your environment. When converging existing components, the VCF Installer will configure the existing components as part of the VCF fleet deployment.
Otherwise, new instances will be deployed and configured. In this scenario, because you are upgrading from existing infrastructure, you check the boxes to converge both the VCF Operations and the VMware vCenter.
Click the check box next to VCF Operations
Click the check box next to VMware vCenter
Click NEXT
Next, provide details for the new VCF instance that will be deployed with the VCF Fleet. This includes providing a name for the instance, a name for the management domain, and specifying the deployment model. With the simple deployment model, the NSX Manager, VCF Operations, and VCF Automation will each be configured as single-node instances. If you chose the High Availability model, then these components would be deployed as three-node instances, providing a higher degree of availability as well as additional capacity for improved scalability.
Click NEXT.
Because you are converging an existing VCF Operations instance, you next provide the FQDN of the existing VCF Operations instance along with the admin password. The installer uses this information to connect to the VCF Operations instance.
Click CONNECT.
Click the checkboxes to confirm the certificate and SSH thumbprint.
Click CONFIRM
You see the green banner notifying you that the VCF Installer was able to successfully detect the VCF Operations instance. As part of the VCF Operations connectivity validation, the installer detects whether an existing fleet management appliance is registered with VCF Operations. The FQDN has been auto-populated in the UI. You are now asked to provide the corresponding administrator password.
Also, with VCF 9.0 you will need to deploy an instance of the VCF Operations Collector appliance. At this stage you need to provide the VCF Operations Collector FQDN along with the administrator password to assign to the collector.
Click NEXT
Confirm the thumbprint for the VCF Fleet Management appliance.
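When confirming certificate and SSH thumbprints, it is worth verifying the value out of band rather than clicking through blindly. A minimal sketch of computing a SHA-256 fingerprint in the usual colon-separated form; the input bytes below are placeholders, and in practice you would feed it the DER bytes of the real certificate or host key:

```python
import hashlib

def sha256_fingerprint(raw: bytes) -> str:
    """Return a SHA-256 digest formatted as colon-separated hex byte pairs,
    the form in which thumbprints are typically displayed in the wizard."""
    digest = hashlib.sha256(raw).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder input: substitute the DER bytes of the appliance certificate.
print(sha256_fingerprint(b"placeholder-certificate-bytes"))
```

If the fingerprint you compute from the certificate you fetched yourself matches the one the wizard displays, you have confirmed you are talking to the appliance you think you are.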
VCF Operations fleet management FQDN
Password (asked for only if the autogenerate option was not selected earlier)
Provide the information needed to deploy a new VCF Automation instance with the new VCF fleet. The following inputs are needed here:
VCF Automation FQDN
Node Name Prefix
Internal Cluster CIDR
Password (asked for only if the autogenerate option was not selected earlier)
Node IP-1 and IP-2 (additional IPs are needed if we select the HA deployment model)
Note: If you have an existing VMware Aria Automation instance that you want to use with the new VCF Fleet, you would click the check box next to "I want to connect a VCF Automation Instance later". In this case you would proceed with the VCF Fleet deployment and, after the deployment completes, use the Fleet Management feature in VCF Operations to import the existing VMware Aria Automation instance and upgrade it to VCF Automation.
Click NEXT.
You then provide the FQDN along with the SSO login credentials for the vCenter instance that will be converged to the VCF management domain.
Click NEXT
Here again you check the boxes to accept the vCenter certificate and SSH thumbprint.
Click CONFIRM
Next, you provide the details for the NSX instance that will be deployed in the VCF management domain. Here you specify the appliance size along with the cluster FQDN and appliance FQDN (remember, since you chose the simple deployment model, only one appliance will be deployed).
NSX virtual networking uses overlay networks. Here you can toggle the switch to indicate whether you want to configure the NSX overlay networks on the management network as part of the NSX deployment, or skip the overlay network setup during the deployment and enable it later. In larger environments it is typically recommended to reconfigure the overlay networking onto separate networks/VLANs. The NSX overlay networking can easily be (re)configured after the deployment using the NSX manager.
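Many of the wizard inputs above are FQDNs and networks, and a quick pre-flight sanity check can catch typos before the wizard's own validation runs. This sketch uses only the Python standard library; the example names and CIDRs are illustrative, not values from this environment:

```python
import ipaddress
import socket

def resolves(fqdn: str) -> bool:
    """True if the FQDN resolves in DNS (forward lookup only)."""
    try:
        socket.gethostbyname(fqdn)
        return True
    except socket.gaierror:
        return False

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """True if two CIDRs overlap, e.g. to check that the internal cluster
    CIDR does not collide with the management network."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Example check: a candidate internal cluster CIDR vs. a management subnet.
print(overlaps("198.18.0.0/15", "10.109.44.0/24"))  # False -> no collision

# Example check: every FQDN the wizard asks for should resolve (placeholder).
for name in ["vcfinstaller-name.arun.com"]:
    print(name, resolves(name))
```

Running a loop like this over every FQDN and network field before clicking NEXT is cheaper than waiting for the 5-to-10-minute validation to fail on a typo.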
Provide the passwords to use for the admin, root, and audit accounts. Click NEXT.
As previously mentioned, the VCF Installer is a service that runs on the SDDC Manager appliance. You will recall that when we deployed the VCF Installer we downloaded and deployed the SDDC Manager OVA template. You are now prompted for the FQDN and administrator password that will be used to convert the appliance to the SDDC Manager. Click NEXT.
With the required information gathered, you are presented with a summary of the VCF deployment. Click the scroll bar to scroll down and review the VCF deployment information. Click NEXT.
The configuration is then validated. Depending on your environment, the validation typically takes between 5 and 10 minutes to complete.
Once validation completes, read and acknowledge the warnings. One of the warnings may be that vSphere Standard Switches (VSS) were detected on the ESX hosts; this warning can be ignored.
With the validation successful and the warnings acknowledged, we are ready to proceed with the VCF Fleet deployment. Click DEPLOY.
The deployment now starts and completes. As stated, in this scenario we had an existing vCenter that was converged into the VCF management domain. NSX was deployed and configured, the VCF Installer appliance was reconfigured to become the SDDC Manager, the new VCF management domain was added to VCF Fleet Management and VCF Operations, and a new instance of VCF Automation was deployed.
Note: depending on your topology and whether you are deploying new components or converging existing ones, a VCF Fleet deployment can take between 90 minutes and 4 hours. Here you will pick up after the deployment has completed.
The VCF Installer provides a detailed audit trail of the tasks that were performed as part of the fleet deployment. Here you see the detailed list of tasks that were performed. You can use this view to monitor the deployment while it is in progress.
If any problems are encountered, you use this view to troubleshoot and diagnose the issue. If the issue can be resolved, you can resume the workflow and it will pick up from the most recent task.
With the VCF Fleet deployment complete, you will now log in to VCF Operations to view details about the private cloud.
Click OPEN VCF OPERATIONS UI
Click LOG IN to log in to VCF Operations as the admin user
On the VCF Operations home page, observe that the VCF management domain has automatically been registered as an account with VCF Operations. Also note the new Fleet Management section of the VCF Operations UI.
Navigate to the Inventory to view details about the management domain.
Click Fleet Management
Click Inventory
We see a summary of the VCF management domain. To see additional details, click the vCenter instance.
Switch to the vSphere Client to observe the changes that have been made to the vCenter inventory as part of the convergence to a VCF management domain.
Click the Login browser tab
Click LOG IN to log in as the administrator@vsphere.local user
Here we see the vCenter inventory. Note that the upgrade to VCF 9.0 was largely non-disruptive to the running workloads. One notable change is the creation of a new resource pool in the inventory. Once you expand the resource pool, you see that several of the infrastructure components that make up the VCF management domain have been moved to this resource pool. These include the vCenter Server, the SDDC Manager, the VCF Operations Collector, and the VCF Automation.
Recap
You upgraded from vSphere 8.0 Update 2 to vSphere 9.0. This involved upgrading both the vCenter and ESX hosts.
You deployed the VCF Installer.
You configured the online depot and downloaded the VCF 9.0 binaries to the VCF Installer.
You used the VCF Installer to converge the vSphere environment.
You deployed VCF Operations, the VCF Operations Collector (Cloud Proxy), and the VCF Operations fleet management appliance.
You deployed VCF Automation.
Step-6: License your VCF
Step-7: Deploy VCF Operations for Logs
Download Required Binaries
Verify that the Depot Configuration is complete (Offline Depot). Navigate to the Binary Management pane and download the required installation binary. The necessary binary is named "operations-logs." Check the box next to the binary name and click "DOWNLOAD." A download request will be triggered. Once the process is complete, proceed to the next step to install the component.
From the Overview pane, we can now click on "ADD" under "operations-logs" to initiate the installation process. The Deployment pane opens, prompting you to make a selection:
Installation Type
New Install – Choose this option to perform a fresh installation of operations-logs.
Import – Select this option to import an existing operations-logs 9.0 that was deployed previously. This option is not applicable at the moment since we are currently performing a new installation.
Version, which must be 9.0.0.0
Deployment Type, which can be standard or a cluster
Click Next to proceed with certificate selection. Make sure the FQDN of each of the Logs cluster nodes is included in the Subject Alternative Names section.
Click Next to enter the "Infrastructure" pane
Select vCenter Server (only management domains are listed for deployment)
Select Cluster
Select Folder
Select Resource Pool
Select Network
Select Datastore
Select Disk Mode
Use Content Library (Optional)
Click Next to enter the "Network" pane
Enter Domain Name
Enter Domain Search Path
Select DNS Servers; if one does not exist, then add and select it
Select Time Sync Mode
Enter IPv4 details
Netmask
Default Gateway
Click Next to enter the "Components" pane
Enter Component Properties
Node Size
Select node size Small, Medium or Large based on the requirement.
Medium is recommended.
FIPS Compliance
Turn it ON or OFF; the default option is OFF for VCF 9.0.
Certificate
Pre-selected from the previous step.
Set Affinity & Anti-Affinity rules if needed.
Configure Cluster VIP
This is the internal load balancer IP for operations-logs. If Cluster VIP is set to NO, then the Cluster VIP section will not be presented.
Upgrade VM Compatibility
Always Use English
Admin Email
Time Sync Mode, which is pre-selected
Component Password, which will be used for the "root" and "admin" local accounts.
If a Cluster VIP is present, then enter the FQDN of the load balancer and the IP address pointing to that FQDN.
Enter Component/Node properties
VM Name
FQDN
IP Address
Click Next to enter the "Prechecks" pane.
Run the prechecks and ensure everything completes successfully. If there are any exceptions, fix them and re-run the prechecks; only then click "NEXT" to proceed to the "SUMMARY" pane.
Review and submit the deployment request. Wait for the deployment request to complete and then start using operations-logs.
Now that operations-logs is deployed, go ahead and migrate the logs data from the old VMware Aria Operations for Logs 8.x cluster to the new VCF Operations for Logs 9.x. You can do this under VCF Operations - Administration - Control Panel - Log Data Transfer.
For more information, see the official documentation: https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vcf-9-0-and-later/9-0/infrastructure-operations/log-analysis/field-management.html
vSphere 8 to VCF 9.0.x with VMware Aria Operations, VMware Aria Automation, VMware Aria Operations for Logs & VMware Aria Operations for Networks
Source Available
vCenter 8.x ✅
ESX ✅
VMware Aria Operations 8.x ✅
VMware Aria Operations for Logs 8.x ✅
VMware Aria Automation 8.x ✅
VMware Aria Suite Lifecycle ✅ (has no upgrade path to VCF 9.x)
VMware Identity Manager ✅ (has no upgrade path to VCF 9.x)
VMware Aria Operations for Networks ✅
Step-1: Upgrade to VCF Operations 9.0.x
Obtain the software upgrade PAK file.
Snapshot the VMware Aria Operations 8.x cluster. It is mandatory to create a snapshot of each node in a cluster before you update a VMware Aria Operations cluster. Once the update is complete, you must delete the snapshots to avoid performance degradation. For more information about snapshots, see the vSphere Virtual Machine Administration documentation.
Log in to the VMware Aria Operations administrator interface at https://node-FQDN-or-IP-address/admin.
Click Take Offline under the cluster status.
When all nodes are offline, open the vSphere Client.
Right-click a VMware Aria Operations virtual machine.
Click Snapshot and then click Take Snapshot.
Name the snapshot. Use a meaningful name such as "Pre-Update."
Uncheck the Snapshot the Virtual Machine Memory check box.
Uncheck the Quiesce Guest File System (Needs VMware Tools installed) check box.
Click OK.
Repeat these steps for each node in the cluster.
Log in to the primary node VMware Aria Operations administrator interface of your cluster at https://primary-node-FQDN-or-IP-address/admin.
Click Software Update in the left pane.
Click Install a Software Update in the main pane.
Follow the steps in the wizard to locate and install your PAK file. This updates the OS on the virtual appliance and restarts each virtual machine.
Read the End User License Agreement and Update Information, and click Next.
Click Install to complete the installation of the software update.
Log back in to the primary node administrator interface. The main Cluster Status page appears and the cluster goes online automatically. The status page also displays the Bring Online button, but do not click it.
Clear the browser caches and, if the browser page does not refresh automatically, refresh the page. The cluster status changes to Going Online. When the cluster status changes to Online, the upgrade is complete.
Click Software Update to check that the update is done. A message indicating that the update completed successfully appears in the main pane.
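The snapshot discipline above (every node snapshotted with memory and quiesce disabled before the update starts, snapshots deleted afterwards) can be encoded as a simple pre-flight gate. This is an illustrative model only, not a vSphere or pyVmomi call:

```python
# Illustrative gate: refuse to start the update unless every cluster node has
# a "Pre-Update" snapshot taken with memory and quiesce disabled.
def ready_for_update(nodes, snapshots):
    return all(
        snapshots.get(n, {}).get("memory") is False
        and snapshots.get(n, {}).get("quiesce") is False
        for n in nodes
    )

nodes = ["ops-primary", "ops-data-1"]
snapshots = {}
assert not ready_for_update(nodes, snapshots)   # no snapshots yet: do not start

for n in nodes:                                 # snapshot each node in turn
    snapshots[n] = {"name": "Pre-Update", "memory": False, "quiesce": False}
assert ready_for_update(nodes, snapshots)       # safe to start the update

snapshots.clear()   # delete the snapshots once the cluster is back online
print("snapshots remaining:", len(snapshots))   # snapshots remaining: 0
```

Encoding the rule this way makes both halves of the requirement explicit: a missing snapshot blocks the update, and a lingering snapshot after the update is the performance problem the documentation warns about.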
When you update VMware Aria Operations to the latest version, all nodes are upgraded by default. If you are using cloud proxies, the cloud proxy upgrades start after the VMware Aria Operations upgrade has completed successfully.
Step-2: Upgrade to vSphere 9.0.x
Upgrading from vCenter 8.0 Update 2/3 to vCenter 9.0 is a two-step process. The first step involves deploying a new vCenter instance alongside the current vCenter appliance. This new instance is initially deployed using a temporary IP address. After the new appliance has been deployed, the configuration from the current vCenter instance is copied to the new appliance, after which a brief outage is taken to "switch" to the upgraded appliance.
Note: the time required to upgrade the vCenter instance to version 9 will vary based on the size of the appliance, the size of the vCenter inventory, and the type of storage being used.
Log in to the vSphere Client as administrator@vsphere.local.
In the vCenter Details pane, verify the vCenter is now running version 9.0.
With the vCenter upgraded to version 9, the next step is to upgrade the ESX hosts to version 9.
To upgrade the hosts to ESX 9.0, we first need to create a new image with the new version.
Import the 9.0 software depot in preparation for creating a new image.
Click ACTIONS
Click Import Updates
Click BROWSE
Click to select VMware-ESXi-9.0.0.0.xxxxx-depot.zip
Click Open
Click Software Depot
With the ESX 9.0.0.0 software depot successfully imported, create a new vLCM image using this software depot.
Click Image Library
Click CREATE IMAGE
Click the Image Name field to enter the vLCM image name esx-9-xxxxx
Click the Select Version dropdown
Click to select 9.0.0.0.xxxxx
Click VALIDATE. Here you see a blue banner reporting that the image is validated.
Click the scroll bar to scroll down
Click Save
Assign an ESX 9.0 image to the vSphere cluster.
Click the vSphere menu icon in the top right (near the words vSphere Client)
Click Inventory
To assign the new vLCM image to the cluster, edit the image.
Click EDIT
Click the ESXi Version dropdown
Click to select 9.0.0.0.xxxxxx
Click SAVE
Upon assigning the new image, a compliance check is automatically performed. Here you see that all 4 hosts are now flagged as non-compliant. To bring the hosts back into compliance, remediate the cluster.
Click the scroll bar to scroll down
Click REMEDIATE (ALL)
Review the Remediation Impact summary. It shows that all hosts will be upgraded to ESX version 9 as part of the remediation. As noted, the remediation order is determined at run time and the hosts are updated one at a time.
Click START REMEDIATION. Here you see the upgrade begins with the host esx-xxx.arun.com.
vMotion is used to migrate running VMs off the host, after which it is placed into maintenance mode. Once in maintenance mode, the new image is installed on the host. After the image has been installed, the host is rebooted. After the host reboots, it is taken out of maintenance mode. A compliance check is performed to validate the upgrade was successful.
With the first host done, vLCM then proceeds to repeat the steps for the remaining hosts in the cluster. After a while you see all hosts have been upgraded, as indicated by the Image Compliance status showing "All hosts in this cluster are compliant".
Click Summary
Click esx-xxx.arun.com
The version number shown in the Host Details pane confirms the host is now running ESX version 9.0.
Note: remember to update the vSAN disk format version and vSphere Distributed Switch versions for your environment.
Step-3: Deploy VCF Installer
With the vSphere upgrade complete, you are ready to deploy the VCF Installer.
Click on the cluster
Click Actions
Click Deploy OVF Template…
Click Local file
Click UPLOAD FILES
Click to select VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova
Click Open
Note: The VCF Installer is a service that runs on the SDDC Manager appliance. As such, to deploy the VCF Installer you deploy an instance of the SDDC Manager OVA. By default, each new deployment of the SDDC Manager will run in "installer mode".
Click NEXT
Confirm the virtual machine name and location.
Note: The appliance will start in "installer mode" and then get switched to "SDDC Manager mode". Hence you name the appliance based on the end state.
Click NEXT
Once the VCF Installer has been deployed, power it on.
Click the VCF Installer virtual machine
Click the Power On icon
To access the VCF Installer, use a web browser to connect to its Fully Qualified Domain Name (FQDN).
Click + to open a new browser tab
Click in the URL field to enter the FQDN https://vcfinstaller-name.arun.com
Log in as the user "admin@local" using the password assigned during the OVA deployment.
Step-4: Download VCF 9.0.x binaries
The VCF Installer supports both an online depot and an offline depot. Use the online depot when you have internet access (either direct or through a proxy). With the online depot you simply provide your Broadcom software token to authenticate, after which you are able to download the binaries.
The offline depot is for sites that do not have direct access to the internet. To use the offline depot, set up a local web server and use the VCF Download Tool to authenticate and download the VCF 9.0 binaries to the web server. Then point the VCF Installer to the web server and use it as an offline depot.
In this example, we will configure an online depot.
Click CONFIGURE under Connect to the online depot
Click the Download Token field to enter your software download token
Click AUTHENTICATE
Note: if you are using a proxy server to access the internet, you would toggle the enable proxy server switch.
In this example, we are not using a proxy server.
Once successfully authenticated, the VCF 9.0 binaries are made available for download.
Click the scroll bar to scroll down
Click to select all binaries
Click DOWNLOAD
The binaries are downloaded.
Step-5: Deploy VCF Fleet
With the binaries downloaded, you are ready to converge your existing vSphere environment into a new VCF Fleet. To deploy a new VCF Fleet you use the deployment wizard.
Click DEPLOYMENT WIZARD
Click VMware Cloud Foundation
Begin by indicating if you will deploy a new VCF Fleet or add a new VCF instance to an existing VCF Fleet. When choosing to "Deploy a new VCF fleet", a new instance of VCF Operations and VCF Automation will be deployed in conjunction with deploying a new VCF management domain. If you already have a VCF Fleet deployed, with a corresponding instance of VCF Operations and VCF Automation, use the "Deploy a VCF instance in an existing VCF fleet" option to add a new VCF management domain and configure it with the existing VCF Operations and VCF Automation.
When deploying a new VCF fleet you can choose whether to deploy a new instance of vCenter and VCF Operations or to converge an existing instance that is running in your environment. When converging existing components, the VCF Installer will configure the existing components as part of the VCF fleet deployment. Otherwise, new instances will be deployed and configured.
In this scenario, because you are upgrading from existing infrastructure, you check the boxes to converge both the VCF Operations and the VMware vCenter.
Click the check box next to VCF Operations
Click the check box next to VMware vCenter
Click NEXT
Next, provide details for the new VCF instance that will be deployed with the VCF Fleet. This includes providing a name for the instance, a name for the management domain, and specifying the deployment model.
With the simple deployment model, the NSX Manager, VCF Operations, and VCF Automation will each be configured as single-node instances. If you chose the High Availability model, then these components would be deployed as three-node instances, providing a higher degree of availability as well as additional capacity for improved scalability.
Click NEXT.
Because you are converging an existing VCF Operations instance, you next provide the FQDN of the existing VCF Operations instance along with the admin password. The installer uses this information to connect to the VCF Operations instance.
Click CONNECT.
Click the checkboxes to confirm the certificate and SSH thumbprint.
Click CONFIRM
You see the green banner notifying you that the VCF Installer was able to successfully detect the VCF Operations instance. As part of the VCF Operations connectivity validation, the installer detects whether an existing fleet management appliance is registered with VCF Operations. The FQDN has been auto-populated in the UI. You are now asked to provide the corresponding administrator password.
Also, with VCF 9.0 you will need to deploy an instance of the VCF Operations Collector appliance. At this stage you need to provide the VCF Operations Collector FQDN along with the administrator password to assign to the collector.
Click NEXT
Confirm the thumbprint for the VCF Fleet Management appliance.
VCF Operations fleet management FQDN
Password (asked for only if the autogenerate option was not selected earlier)
Provide the information needed to deploy a new VCF Automation instance with the new VCF fleet.
The following inputs are needed here: VCF Automation FQDN Node Name Prefix Internal Cluster CIDR Password (asked only if the autogenerate option was not selected earlier) Node IP-1 and IP-2 (additional IPs are needed if you select the HA deployment model) Note: If you have an existing VMware Aria Automation instance that you want to use with the new VCF Fleet, you would click the check box next to “I want to connect a VCF Automation Instance later”. In this case you would proceed with the VCF Fleet deployment and, after the deployment completes, use the Fleet Management feature in VCF Operations to import the existing VMware Aria Automation instance and upgrade it to VCF Automation. Click NEXT. You then provide the FQDN along with the SSO login credentials for the vCenter instance that will be converged to the VCF management domain. Click NEXT. Here again you check the boxes to accept the vCenter certificate and SSH thumbprint. Click CONFIRM. Next, you provide the details for the NSX instance that will be deployed in the VCF management domain. Here you specify the appliance size along with the cluster FQDN and appliance FQDN (remember, since you chose the simple deployment model only one appliance will be deployed). NSX virtual networking uses overlay networks. Here you can toggle the switch to indicate whether you want to optionally configure the NSX overlay networks on the management network as part of the NSX deployment, or skip the overlay network setup during the deployment and enable it later. In larger environments it is typically recommended to move the overlay networking onto separate networks/VLANs. The NSX overlay networking can easily be (re)configured after the deployment using the NSX Manager. Provide the passwords to use for the admin, root, and audit accounts. Click NEXT. As previously mentioned, the VCF Installer is a service that runs on the SDDC Manager appliance.
You will recall that when we deployed the VCF Installer we downloaded and deployed the SDDC Manager OVA template. You are prompted for the FQDN and administrator password that will be used to convert the appliance to the SDDC Manager. Click NEXT. With the required information gathered, you are presented with a summary of the VCF deployment. Click the scroll bar to scroll down and review the VCF deployment information. Click NEXT. The configuration is then validated. Depending on your configuration, the validation will typically take between 5 and 10 minutes to complete. Once the validations have run, read and acknowledge the warnings. One of the warnings may be that vSphere Standard Switches (VSS) were detected on the ESX hosts. This warning can be ignored. With the successful validation and having acknowledged the warnings, we are ready to proceed with the VCF Fleet deployment. Click DEPLOY. The deployment now starts and completes. As stated, in this scenario we had an existing vCenter which was converged into the VCF management domain. NSX was deployed and configured, the VCF Installer appliance was reconfigured to become the SDDC Manager, the new VCF management domain was added to VCF Fleet Management and VCF Operations, and a new instance of VCF Automation was deployed. Note: depending on your topology and whether you are deploying new components or converging existing components, a VCF Fleet deployment can take between 90 minutes and 4 hours. Here you pick up after the deployment has completed. The VCF Installer provides a detailed audit trail of the tasks that were performed as part of the fleet deployment. Here you see the detailed list of tasks that were performed. You can use this view to monitor the deployment while it is in progress. If any problems are encountered, you use this view to troubleshoot and diagnose the issue. If the issue can be resolved, you can resume the workflow, where it will pick up with the most recent tasks.
With the VCF Fleet deployment complete, you now log in to VCF Operations to view details about the private cloud. Click OPEN VCF OPERATIONS UI. Click LOG IN to log in to VCF Operations as the admin user. On the VCF Operations home page, observe that the VCF management domain has automatically been registered as an account with VCF Operations. Also note the new Fleet Management section of the VCF Operations UI. Navigate to the Inventory to view details about the management domain. Click Fleet Management. Click Inventory. We see a summary of the VCF management domain. To see additional details, click the vCenter instance. Switch to the vSphere Client to observe the changes that have been made to the vCenter inventory as part of the convergence to a VCF management domain. Click the Login browser tab. Click LOG IN to log in as the administrator@vsphere.local user. Here we see the vCenter inventory. Note that the upgrade to VCF 9.0 was largely non-disruptive to the running workloads. One notable change is the creation of a new resource pool in the inventory. Once you expand the resource pool, you see that several of the infrastructure components that make up the VCF management domain have been moved to this resource pool. These include the vCenter Server, the SDDC Manager, the VCF Operations Collector, and VCF Automation. Recap You upgraded from vSphere 8.0 Update 2 to vSphere 9.0. This involved upgrading both the vCenter and ESX hosts. You deployed the VCF Installer. You configured the online depot and downloaded the VCF 9.0 binaries to the VCF Installer.
You used the VCF Installer to converge the vSphere environment. You deployed VCF Operations, the VCF Operations Collector (Cloud Proxy), and the VCF Operations fleet management appliance. You deployed VCF Automation. Step-6: License your VCF Step-7: Deploy VCF Operations for Logs Step-8: Import and Upgrade VMware Aria Automation 8.x to VCF Automation 9.0.x The whole process of import and upgrade is explained on this blog: https://www.arunnukula.com/post/seamless-upgrades-from-vmware-aria-automation-8-18-x-to-vcf-automation-9-0-x Step-9: Import and Upgrade VMware Aria Operations for Networks 6.x to VCF Operations for Networks 9.0.x Import Prior to upgrading VMware Aria Operations for Networks 6.x to VCF Operations for Networks 9.0.x, it must first be imported into VCF Operations 9.0.x to start the upgrade process. Within VCF Operations, navigate to Fleet Management → Lifecycle → Overview → operations-networks. Select ADD. Select “Import from legacy Aria Suite Lifecycle”, click on NEXT. In the Lifecycle Configuration pane: Enter the VMware Aria Suite Lifecycle FQDN which is managing the VMware Aria Operations for Networks 6.x you would like to upgrade admin@local would be the username Enter the admin@local password Enter the root password of VMware Aria Suite Lifecycle We now discover VMware Aria Operations for Networks 6.x deployments from the VMware Aria Suite Lifecycle. Choose the VMware Aria Operations for Networks 6.x you would like to import. Click on NEXT to review and submit so that the import of VMware Aria Operations for Networks 6.x into VCF Operations begins. Once this process is completed, you should now be able to see VMware Aria Operations for Networks in VCF Operations 9.0.x as a managed component. After the import is finished, VMware Aria Operations for Networks will be unregistered from the VMware Aria Suite Lifecycle where it was previously registered. No Day-N actions will be available unless you upgrade it to VCF Operations for Networks. The primary reason for importing should be to promptly upgrade to the new version.
If you are uncertain, only proceed with the import when you have a clear plan. Download Binary Depot Configuration is mandatory before downloading any binaries, whether they are install, upgrade, or patch binaries. Navigate to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Upgrade Binary. Select operations-networks, click on DOWNLOAD. Wait until the binary download completes. Upgrade The upgrade process automatically completes fields with values discovered and imported from VMware Aria Operations for Networks, so you can accept the default values to perform the upgrade. From the Overview tab, click Upgrade on the VCF Operations for Networks tile. To ensure that the VCF Operations fleet management appliance and the imported Aria Operations for Networks environment are in sync, the upgrade triggers an inventory sync. Click Trigger Inventory Sync, then click Submit. When all stages of the inventory sync are complete, it is safe to start the upgrade. Return to the Overview tab, click Upgrade, then click Proceed. The upgrade moves through the following stages. Product Version and Repository URL are populated by default. Snapshot: Takes a product snapshot (default). Precheck: To validate various properties of VCF Operations for Networks, click Run Precheck. Upgrade Summary: Shows the details of the upgrade. To start the upgrade, click Submit. The Tasks tab shows the status of the upgrade.

  • Seamless Upgrades: From VMware Aria Automation 8.18.x to VCF Automation 9.0.x

VMware Aria Automation 8.x deployment topology VMware Aria Automation can be set up using either a simple deployment model or a clustered deployment model. Simple Deployment This model consists of a single VMware Aria Automation node. No load balancer is used in this setup. The fully qualified domain name (FQDN) of VMware Aria Automation points to the IP address of the node. Clustered Deployment A clustered setup consists of three VMware Aria Automation nodes situated behind an external load balancer. The load balancer FQDN for VMware Aria Automation points to the load balancer's virtual IP address. Each of the three VMware Aria Automation nodes in the cluster has its own unique IP address and FQDN assigned. Things to know before you upgrade Ensure the VCF Operations fleet management appliance's hostname is correct. It must be an FQDN. Go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → Summary; the hostname field there must be an FQDN (example: fleetmgmt.arun.com). Also SSH to the fleet management appliance and ensure that hostname -f returns an FQDN, not a shortname. VMware Aria Suite Lifecycle is responsible for managing the lifecycle of VMware Aria Automation 8.x. To upgrade to VCF Automation 9.0.x, you must import VMware Aria Automation 8.x into VCF Operations 9.0.x, which is supported by the fleet management appliance. It is advisable to replace the VMware Identity Manager certificate before importing VMware Aria Automation 8.x into VCF Operations 9.0.x. If the certificate is valid for several more years, it is acceptable. However, if it is set to expire soon, it should be replaced. Utilizing VMware Aria Suite Lifecycle's replace certificate Day-N action not only updates the certificate on VMware Identity Manager but also propagates the new certificate to the integrated products.
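The fleet management hostname requirement above can be sanity-checked before you start. Below is a minimal sketch; the is_fqdn helper is a hypothetical name (not a VMware tool), and the idea is simply to test whatever hostname -f returns on the appliance against a basic FQDN pattern:

```python
import re

# Hypothetical helper: a name qualifies as an FQDN here if it has at least two
# dot-separated DNS labels, each starting and ending with an alphanumeric character.
FQDN_RE = re.compile(
    r"^(?=.{4,253}$)([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+"
    r"[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$"
)

def is_fqdn(name: str) -> bool:
    """True if the name looks like a fully qualified domain name."""
    return bool(FQDN_RE.match(name))

# On the fleet management appliance you would feed in the output of:
#   hostname -f
print(is_fqdn("fleetmgmt.arun.com"))  # the FQDN example from the text above
print(is_fqdn("fleetmgmt"))           # a shortname, which must be fixed first
```

If the check fails for a shortname, correct the hostname on the appliance before attempting the import.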
For instance, if VMware Aria Automation, VMware Aria Operations, and other management products are integrated with VMware Identity Manager, they will receive the new certificate, re-establishing trust among them. This is considered best practice, though not mandatory. Initiate an inventory synchronization for VMware Aria Automation on VMware Aria Suite Lifecycle prior to starting the import process into VCF Operations 9.0.x. This guarantees that the most recent infrastructure data is consistently available and up-to-date. To upgrade VMware Aria Automation to VCF Automation 9.0.x, the minimum supported version is 8.18.1. This upgrade brings architectural changes that may impact current configurations, especially those related to cloud accounts and integrations. To facilitate a seamless and successful upgrade, it is crucial to examine the pre-check requirements and remediation steps detailed in KB 389563. Download VCF Automation upgrade binary Depot Configuration is mandatory before downloading any binaries, whether they are install, upgrade, or patch binaries. Navigate to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Upgrade Binary. Select automation, click on DOWNLOAD. Wait until the binary download completes. Importing VMware Aria Automation 8.18.1 into VCF Operations 9.0.x Prior to upgrading VMware Aria Automation 8.18.x to VCF Automation 9.0.x, it must first be imported into VCF Operations 9.0.x to start the upgrade process. Within VCF Operations, navigate to Fleet Management → Lifecycle → Overview → Automation. Select ADD. Select “Import from legacy Aria Suite Lifecycle”, click on NEXT. In the Lifecycle Configuration pane: Enter the VMware Aria Suite Lifecycle FQDN which is managing the VMware Aria Automation you would like to upgrade admin@local would be the username Enter the admin@local password Enter the root password of VMware Aria Suite Lifecycle We now discover VMware Aria Automation deployments from the VMware Aria Suite Lifecycle.
Choose the VMware Aria Automation you would like to import. Click on NEXT to review and submit so that the import of VMware Aria Automation into VCF Operations begins. Once this process is completed, you should now be able to see VMware Aria Automation in VCF Operations 9.0.x as a managed component. After the import is finished, VMware Aria Automation will be unregistered from the VMware Aria Suite Lifecycle where it was previously registered. No Day-N actions will be available unless you upgrade it to VCF Automation. The primary reason for importing should be to promptly upgrade to the new version. If you are uncertain, only proceed with the import when you have a clear plan. Plan Upgrade Under VCF Operations → Fleet Management → Lifecycle → VCF Management → Components, click on "Plan Upgrade". The VCF version would be 9.0 itself; that's fine. Click on the "Target Version" for VCF Automation and select the version you intend to go to. The "Current Version" here would be 8.18.1, as you just imported it. Keep that in mind. The moment the target version is selected, the Target Build number is auto-populated. Once done, click on "CREATE PLAN". The upgrade plan is now set. It's time to kick into action. The "Upgrade" Under VCF Operations → Fleet Management → Lifecycle → VCF Management → Components, click on "Upgrade" next to VCF Automation. A modal pops up with a few messages. It's very important to read them. If you haven't read KB 389563 yet, make sure to do so now to verify that your environment does not include any unsupported endpoints in VCF 9.0. If any endpoints fall into that category, follow the necessary remediation steps. If an upgrade to VCF Automation 9.0 is not possible due to specific endpoints, those deployments will remain on VMware Aria Automation 8.x. If you have already imported and then realized that your upgrade cannot continue:
Delete Automation from VCF Operations and import VMware Aria Automation back into the VMware Aria Suite Lifecycle where it was managed before. If you're good with the endpoints and have no blockers, then perform an inventory sync and click on PROCEED. Clicking on PROCEED leads you to a pane where you're presented with the upgrade inputs. The entire infrastructure information is pre-populated; this should be correct since you just performed an inventory sync. Click on NEXT for the Network pane. If the DNS objects were already created before the import under VCF Management → Settings → Network Settings, they are auto-populated; otherwise, go ahead and create them by clicking on ADD DNS SERVER and ADD NTP SERVER. In this pane, the Fully Qualified Domain Name (FQDN) is pre-populated along with certificates and other relevant data. Below is an explanation of each property: FQDN: Represents the VCF Automation FQDN. Cluster FQDN: Represents the VCF Automation FQDN (for clustered VMware Aria Automation, enter the VMware Aria Automation load balancer FQDN). Controller Type: There are two types here. Internal leverages an internal load balancer; during upgrade, if the source is a simple VMware Aria Automation, it defaults to the internal load balancer. Others leverages an external load balancer; during upgrade, if the source is a clustered VMware Aria Automation, it defaults to "Others". There is no need for anyone to change this value. Primary VIP: For a simple VMware Aria Automation deployment, the IP address of the lone node is populated. For a clustered VMware Aria Automation deployment, the IP address of the first VMware Aria Automation node is designated as the Primary VIP. Additional VIP: For a simple VMware Aria Automation deployment, it would be blank. For a clustered VMware Aria Automation deployment, the IP addresses of the second and third VMware Aria Automation nodes are populated automatically.
Cluster CIDR: A CIDR block that the customer must choose, ensuring it does not conflict with their existing network infrastructure. Node Prefix: A user-defined prefix for naming newly deployed VCF Automation virtual machines. The suffix is automatically generated. Cluster Node IP Pool: Simple: A set of two IP addresses used for deploying new virtual machines that will host VCF Automation components. These IPs are needed from the customer and should be in the same range as the Primary VIP. Clustered: A set of four IP addresses used for deploying new virtual machines that will host VCF Automation components. These IPs are needed from the customer and should be in the same range as the Primary VIP. Once this set of information is entered, click on NEXT and execute the prechecks. Only if the prechecks are successful, proceed to review and submit the upgrade request. Reusing VMware Aria Automation 8.x IPs in VCF Automation 9.0.x: understanding the mechanics behind it In a simple VMware Aria Automation deployment model, there is a single VMware Aria Automation 8.x node. This node has a fully qualified domain name (FQDN) that resolves to its IP address. During the upgrade process, this IP address is switched to serve as the Primary VIP, while the FQDN and Cluster FQDN fields continue to reference the VMware Aria Automation's FQDN. This setup exclusively utilizes an "Internal Load Balancer". No change in DNS records is needed. In a clustered VMware Aria Automation deployment model, three nodes are set up behind a load balancer. The VMware Aria Automation load balancer FQDN will point to the virtual server IP of the load balancer, which will remain unchanged. In a clustered VMware Aria Automation 8.x setup, the first node's IP address becomes the Primary VIP, while the IP addresses of the second and third nodes become Additional VIPs. As previously mentioned, the Cluster Node IP Pool requires four new IP addresses.
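Since the Cluster CIDR must not collide with your existing networks, and the Cluster Node IP Pool should sit in the same range as the Primary VIP, both constraints can be pre-validated before submitting the upgrade. A small sketch using Python's ipaddress module; the network values below are illustrative, not from a real environment, and the /24 prefix is an assumption you should replace with your actual subnet mask:

```python
import ipaddress

def cidr_conflicts(candidate: str, existing: list[str]) -> bool:
    """True if the candidate Cluster CIDR overlaps any existing network."""
    cand = ipaddress.ip_network(candidate)
    return any(cand.overlaps(ipaddress.ip_network(net)) for net in existing)

def pool_matches_vip(pool: list[str], primary_vip: str, prefix: int = 24) -> bool:
    """True if every Cluster Node IP Pool address sits in the Primary VIP's subnet.

    The /24 prefix is an assumption for this check; use your actual mask.
    """
    subnet = ipaddress.ip_network(f"{primary_vip}/{prefix}", strict=False)
    return all(ipaddress.ip_address(ip) in subnet for ip in pool)

# Illustrative values only:
existing_networks = ["10.80.0.0/16", "192.168.10.0/24"]
print(cidr_conflicts("10.244.0.0/22", existing_networks))  # candidate is clear
print(cidr_conflicts("10.80.43.0/24", existing_networks))  # collides with 10.80.0.0/16
print(pool_matches_vip(["10.80.43.201", "10.80.43.202"], "10.80.43.173"))
```

Running a check like this against your IP plan catches CIDR collisions before the precheck stage does.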
These addresses are used for deploying the nodes that will host VCF Automation. I hope this is clear; it's crucial to grasp this concept. All these inputs are automatically populated. However, as a customer, it's essential to understand how it functions. Behind the Scenes: Upgrade Workflow Stage 1: Overall FIPS status collection. The FIPS status is collected from the source, which is VMware Aria Automation 8.x. Logs to Monitor at this stage: /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T09:37:07.788Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.v.p.t.GetOverallFipsStatusForVcfTask] -- Starting :: Get FIPS status for VCF task. 2025-02-06T09:37:08.366Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.v.p.t.GetOverallFipsStatusForVcfTask] -- Collected VCF FIPS status successfully. 2025-02-06T09:37:08.367Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnOverallFipsStatusForVcfCollectionSuccess Stage 2: Overall CEIP status collection. The CEIP status is collected from the source, which is VMware Aria Automation 8.x. Logs to Monitor at this stage: /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T09:37:09.992Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.p.t.GetOverallCeipStatusForVcfTask] -- Starting :: Get CEIP status for VCF task. 2025-02-06T09:37:10.411Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.p.t.GetOverallCeipStatusForVcfTask] -- Collected VCF CEIP status successfully. 2025-02-06T09:37:10.411Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnOverallCeipStatusForVcfCollectionSuccess Stage 3: Creates snapshots of the VMware Aria Automation 8.x nodes.
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T09:37:11.158Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => createNodeSnapshot and the priority is => 2 2025-02-06T09:37:11.680Z INFO vrlcm[171714] [pool-3-thread-28] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnNodeVmidCollectionSuccess 2025-02-06T09:37:12.209Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.v.d.h.TaskHelper] -- Task : CreateSnapshot_Task is still not completed 2025-02-06T09:37:27.258Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.v.v.u.SnapshotUtil] -- Snapshot create status : true 2025-02-06T09:37:27.259Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.c.t.VaSnapshotFromVmidTask] -- Snapshot Result: {   "snapshotName" : "LCM-vRA-VA-Snapshot - Thu Feb 06 09:37:06 UTC 2025",   "snapshotDescription" : "vRSLCM Snapshot: Thu Feb 06 09:37:06 UTC 2025",   "snapshotId" : "snapshot-28930",   "nodeIp" : "10.80.43.173",   "snapshotSuccess" : true } 2025-02-06T09:37:27.260Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVaSnapshotCompletion 2025-02-06T09:37:27.385Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnVaSnapshotCompletion Stage 4: The backup of databases and the necessary configuration of VMware Aria Automation 8.x cluster is taken and stored on a disk. 
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T09:37:27.958Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vravadatabasebackup and the priority is => 3 2025-02-06T09:37:27.981Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Injected OnStart Edge for the Machine ID :: vravadatabasebackup 2025-02-06T09:37:28.642Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.t.VmspDatabaseBackupTask] -- Creating folder: /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c 2025-02-06T09:37:28.643Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.t.VmspDatabaseBackupTask] -- Creating yaml file 2025-02-06T09:37:28.643Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.u.ShellExecutor] -- Executing shell command: touch /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml 2025-02-06T09:37:28.643Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.u.ProcessUtil] -- Execute touch 2025-02-06T09:37:28.645Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.u.ShellExecutor] -- Result: []. 
2025-02-06T09:37:28.658Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp pkg push --hooks-only 10.80.43.173:30000 /data/vm-config/vmrepo/productBinariesRepo/5f/5f5b1799-5e9a-4008-9f25-643b12fef77b/5f5b1799-5e9a-4008-9f25-643b12fef77b -l upgrade=true --wait 2025-02-06T09:37:28.660Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started 2025-02-06T09:37:39.299Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg push --hooks-only 10.80.43.173:30000 /data/vm-config/vmrepo/productBinariesRepo/5f/5f5b1799-5e9a-4008-9f25-643b12fef77b/5f5b1799-5e9a-4008-9f25-643b12fef77b -l upgrade=true --wait exit code: 0 output: running /data/vmsp-pkg3370493006/files/scripts/prePush.sh (VRA_NODE=ops-43-173.ibn.arun.com,VRA_USERNAME=root,VCFA_UPGRADE_VALUES_FILE=KXKXKXKX * * 2025-02-06T09:37:39.694Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.t.VmspDatabaseBackupTask] -- VRA database backup successful 2025-02-06T09:37:39.694Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnDatabaseBackupTaskCompletion Stage 5: The VMware Aria Automation 8.x cluster is shut down. Remember, at this stage we have taken a backup of the entire VMware Aria Automation 8.x databases and stored it on a disk, which will be needed later during the upgrade procedure.
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T09:37:40.559Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vravagracefulshutdown and the priority is => 4 2025-02-06T09:37:42.458Z INFO vrlcm[171714] [pool-3-thread-36] [c.v.v.l.p.c.v.t.VraVaCheckPowerOnStatusTask] -- Power state of the VMware Aria Automation host - ops-43-173.ibn.arun.com is ON 2025-02-06T09:37:42.458Z INFO vrlcm[171714] [pool-3-thread-36] [c.v.v.l.p.c.v.t.VraVaCheckPowerOnStatusTask] -- Completed power state check of all the Automation hosts. All node(s) found in ON state. 2025-02-06T09:37:42.458Z INFO vrlcm[171714] [pool-3-thread-36] [c.v.v.l.p.c.v.t.VraVaCheckPowerOnStatusTask] -- Injecting success event for power state check of VMware Aria Automation nodes 2025-02-06T09:37:43.013Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.p.c.v.t.VraVaStopServicesTask] -- Starting :: VMware Aria Automation VA Stop Services Task 2025-02-06T09:37:43.014Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.p.c.v.t.VraVaStopServicesTask] -- Stopping services on VMware Aria Automation VA : ops-43-173.ibn.arun.com 2025-02-06T09:37:43.014Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.d.v.h.VraPreludeInstallHelper] -- PRELUDE ENDPOINT HOST :: ops-43-173.ibn.arun.com 2025-02-06T09:37:43.014Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.d.v.h.VraPreludeInstallHelper] -- COMMAND :: /opt/scripts/svc-stop.sh 2025-02-06T09:45:51.947Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.d.v.v.g.VsphereGuestOSOperation] -- shutdown guest OS successful 2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.c.v.t.u.VraVaShutdownVmFromVmidTask] -- vmid found from vcenter : vm-19164 for hostname ops-43-173.ibn.arun.com 2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.c.v.t.u.VraVaShutdownVmFromVmidTask] -- Completed shutdown of all the prelude hosts 
2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.c.v.t.u.VraVaShutdownVmFromVmidTask] -- Injecting success event 2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVraShutdownVMCompletion Stage 6: A new set of nodes or a new cluster is deployed. As previously mentioned, we requested a cluster node IP pool, from which IP addresses are retrieved and utilized for node deployment. For a straightforward deployment, one node is deployed, while for a cluster, three nodes are deployed. The nodes are deployed on the same vCenter where your existing VMware Aria Automation 8.x is. Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log /var/log/vrlcm/vmsp_bootstrap-<>.log Log Snippets 2025-02-06T09:45:53.158Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => installvmsp and the priority is => 5 2025-02-06T09:45:55.613Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Executing StartVMSPDeployTask 2025-02-06T09:45:55.615Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- VMSP Bootstrap logs are streamed to /var/log/vrlcm/vmsp_bootstrap_8f86a9cc-62ab-4c01-80fb-c399c0e648eb.log 2025-02-06T09:45:55.616Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp passwd YXYXYXYX 2025-02-06T09:45:55.617Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started 2025-02-06T09:45:56.379Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp passwd YXYXYXYX exit code: 0 output: $6$rounds=300000$9zNGkhOg4.gwdDTz$elMVsPLH7k/fZ6btZAyg/LkrSVe.MgZqwKCV1I932absmicgyYi0ZrUxwgHaShA5DWzftcg0FNyycjtsXPv0C. error: WARNING! Using --password YXYXYXYX CLI is insecure. Use --password-stdin. 
2025-02-06T09:45:56.379Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp passwd YXYXYXYX Output: $6$rounds=300000$9zNGkhOg4.gwdDTz$elMVsPLH7k/fZ6btZAyg/LkrSVe.MgZqwKCV1I932absmicgyYi0ZrUxwgHaShA5DWzftcg0FNyycjtsXPv0C. 2025-02-06T09:45:56.380Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp passwd YXYXYXYX Error: WARNING! Using --password YXYXYXYX CLI is insecure. Use --password-stdin. 2025-02-06T10:26:51.152Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPUtil] -- VMSP Provisioning process exited with error code 0 2025-02-06T10:26:51.152Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- VMSP Cluster provisioned successfully !! 2025-02-06T10:26:51.153Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPUtil] -- Found kubeconfig YXYXYXYX -> vcf-mgmt-4d35c714f2.kubeconfig 2025-02-06T10:26:51.156Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- kubeconfig YXYXYXYX : /data/vmsp/vcf-mgmt-4d35c714f2.kubeconfig 2025-02-06T10:26:51.175Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.l.c.CredentialController] -- Password YXYXYXYX with name a60476a9-60b2-4059-a3f6-fbd4c87c37b6 created successfully. 2025-02-06T10:26:51.177Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.l.s.CredentialOperationService] -- Added internal only password YXYXYXYX alias vmsp_kubeconfi_ops-43-173.ibn.arun.com_c1f4858e-7ba4-4e58-a330-76eeee2dcf00 2025-02-06T10:26:51.177Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Kubeconfig YXYXYXYX Reference locker:password:KXKXKXKX Stage 7: After the new nodes are deployed , the certificate is applied on the new nodes / cluster. 
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log vcf automation cluster logs Log Snippets // Install certificate on services platform // 2025-02-06T10:26:53.224Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.VmspCertificateInitTask] -- Starting :: Install Certificate Init Task.... 2025-02-06T10:26:53.224Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.VmspCertificateInitTask] -- certificate :: {}locker:certificate:9d0cd8c5-6038-4c4c-8119-0e04061fc1a5:*. ibn.arun.com 2025-02-06T10:26:53.828Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.t.VmspInstallCertificateTask] -- Starting :: Install Certificate Task.... 2025-02-06T10:26:53.831Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.t.VmspInstallCertificateTask] -- kubeconfig YXYXYXYX reference is locker:password:KXKXKXKX 2025-02-06T10:26:53.831Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String)) 2025-02-06T10:26:53.834Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Kubeconfig YXYXYXYX file created at /data/vmsp/.kubeconfig_283216c7-b2c0-491d-a1b4-3d67052b019b 2025-02-06T10:26:54.084Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Installing certificate command being executed :: /usr/local/bin/kubectl --kubeconfig=KXKXKXKX create secret YXYXYXYX --cert=/data/vmsp/ops-43-173.crt --key=/data/vmsp/ops-43-173.key --namespace=istio-ingress --save-config --dry-run=client -o yaml | /usr/local/bin/kubectl --kubeconfig=KXKXKXKX apply -f - 2025-02-06T10:26:54.087Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Process exited with code: 0 2025-02-06T10:26:54.087Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.u.ShellExecutor] -- Executing shell command: /data/vmsp/ops-43-173.sh 2025-02-06T10:26:54.088Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.u.ProcessUtil] -- Execute 
/data/vmsp/ops-43-173.sh 2025-02-06T10:26:54.268Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.u.ShellExecutor] -- Result: [secret/vmsp-tls YXYXYXYX 2025-02-06T10:26:54.268Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Install command response :: secret/vmsp-tls YXYXYXYX 2025-02-06T10:26:54.392Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnVmspInstallCertificateTaskCompletion Stage 8: This is where the actual magic happens. The disk where VMware Aria Automation 8.x's data is stored is mounted on the new nodes and the data is restored as part of the upgrade process; once the upgrade completes, the disk is unmounted. Logs to Monitor at this stage: /var/log/vrlcm/vmware_vrlcm.log /var/log/vrlcm/vmsp_bootstrap-<>.log vcf automation cluster logs Log Snippets 2025-02-06T10:26:54.957Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vrava9xupgrade and the priority is => 7 2025-02-06T10:26:55.016Z INFO vrlcm[171714] [pool-3-thread-75] [c.v.v.l.p.c.v.t.StartVraVaGenericTask] -- Starting :: Start VMware Aria Automation VA Generic Task 2025-02-06T10:26:55.609Z INFO vrlcm[171714] [pool-3-thread-76] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Starting :: fetch VMSP kubeconfig YXYXYXYX 2025-02-06T10:26:55.612Z INFO vrlcm[171714] [pool-3-thread-76] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Successfully fetched kubeconfig YXYXYXYX cluster 2025-02-06T10:26:55.612Z INFO vrlcm[171714] [pool-3-thread-76] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnFetchVmspKubeconfigSuccess 2025-02-06T10:26:56.383Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- Certificate not trusted 2025-02-06T10:26:56.383Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- NDC is not yet implemented for this, trusting certificate by default 2025-02-06T10:26:56.384Z INFO vrlcm[171714] [pool-3-thread-77]
[c.v.v.l.u.CustomTrustStoreManager] -- Storing certificate in the trust store 2025-02-06T10:26:56.508Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- Successfully trusted certificate 2025-02-06T10:26:56.553Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.RestHelper] -- RestHelper execute methode connection.getResponseCode : 200 2025-02-06T10:26:56.554Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response status for API call : 200 Response data : {"statusCode":200,"running":true,"statusURI":"/webhooks/vmsp-platform/vcenter-disk-mount/mount","completedAt":null} 2025-02-06T10:26:56.557Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response with post call : { "statusCode" : 200, "responseMessage" : "OK", "outputData" : "{\"statusCode\":200,\"running\":true,\"statusURI\":\"/webhooks/vmsp-platform/vcenter-disk-mount/mount\",\"completedAt\":null}", "token" : null, "contentLength" : 115, "allHeaders" : null } 2025-02-06T10:26:56.560Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- httpGetCall url : /webhooks/vmsp-platform/vcenter-disk-mount/mount 2025-02-06T10:26:56.560Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX 2025-02-06T10:26:56.561Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started 2025-02-06T10:26:56.628Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX exit code: 0 output: 
ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqZFBSemRuYzBvek5WZHBWMmhKY2s0MlNVWkNRbWRQVlZOeFpsaEhPWEpqVkVKbFNXZFpkR3BsVUZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUoyYlhOd0xYQnNZWFJtYjNKdElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTjViblJvWlhScFl5MWphR1ZqYTJWeUxXdHljQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp6ZVc1MGFHVjBhV010WTJobFkydGxjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJpWXpNek1qbGhMVEF4TmpVdE5Ea3pNaTA1WkdJMExURTBZV00wTmpJM09XTmpaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwMmJYTndMWEJzWVhSbWIzSnRPbk41Ym5Sb1pYUnBZeTFqYUdWamEyVnlJbjAuYkJ4ZHNYOXVHQ2o1dkdadzBIdzc0ZHlXTFZlc2lka0JONno1dm9mMktjNGZSYTIzem1NLVFQdDdBWVhXLWQ3alowVG9UM2FDQXBUUU1zamRqNlpoR1JyVXNzQWZvNXcyX3BvejZZS0h4TWJUYWFzSERhWE9uYU94TFB1VDVGaFd1cFRUUi1KX252SE5Qdlo4MFpTVG04VUtyblVCeWF0emRlV0hkVU0wZ1d3cVFiem9YZzBtb24xalN4S0lCUkJMdjNEMzNYUzNVck9hSXRJa2x4RkRUWndnbkxJVWZKNjhjblpDeVU2NFpmVXF1T1JsajNYY2VxLWswdUwwRWRrMzMyeldYSTgwc3NUV0d0MFlaWHpnbjFLYjNHNnZoSWh1Z19ZQzVON0w1ampINDJxaURuZ1JuTDZ5WnF6SXZaTHJOYTluX2I0R1J1LV9EbmNpaVRMNzVR error: 2025-02-06T10:26:56.629Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX Output: 
ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqZFBSemRuYzBvek5WZHBWMmhKY2s0MlNVWkNRbWRQVlZOeFpsaEhPWEpqVkVKbFNXZFpkR3BsVUZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUoyYlhOd0xYQnNZWFJtYjNKdElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTjViblJvWlhScFl5MWphR1ZqYTJWeUxXdHljQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp6ZVc1MGFHVjBhV010WTJobFkydGxjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJpWXpNek1qbGhMVEF4TmpVdE5Ea3pNaTA1WkdJMExURTBZV00wTmpJM09XTmpaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwMmJYTndMWEJzWVhSbWIzSnRPbk41Ym5Sb1pYUnBZeTFqYUdWamEyVnlJbjAuYkJ4ZHNYOXVHQ2o1dkdadzBIdzc0ZHlXTFZlc2lka0JONno1dm9mMktjNGZSYTIzem1NLVFQdDdBWVhXLWQ3alowVG9UM2FDQXBUUU1zamRqNlpoR1JyVXNzQWZvNXcyX3BvejZZS0h4TWJUYWFzSERhWE9uYU94TFB1VDVGaFd1cFRUUi1KX252SE5Qdlo4MFpTVG04VUtyblVCeWF0emRlV0hkVU0wZ1d3cVFiem9YZzBtb24xalN4S0lCUkJMdjNEMzNYUzNVck9hSXRJa2x4RkRUWndnbkxJVWZKNjhjblpDeVU2NFpmVXF1T1JsajNYY2VxLWswdUwwRWRrMzMyeldYSTgwc3NUV0d0MFlaWHpnbjFLYjNHNnZoSWh1Z19ZQzVON0w1ampINDJxaURuZ1JuTDZ5WnF6SXZaTHJOYTluX2I0R1J1LV9EbmNpaVRMNzVR 2025-02-06T10:26:56.630Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX Error: 2025-02-06T10:26:56.630Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Triggering API call:GET https://10.80.43.173:30005/webhooks/vmsp-platform/vcenter-disk-mount/mount 2025-02-06T10:26:56.701Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- Certificate chain trusted 2025-02-06T10:26:56.706Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.RestHelper] -- RestHelper execute methode connection.getResponseCode : 200 2025-02-06T10:26:56.706Z INFO vrlcm[171714] 
[pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response status for API call : 200 Response data : {"statusCode":200,"running":true,"statusURI":"/webhooks/vmsp-platform/vcenter-disk-mount/mount","completedAt":null}
2025-02-06T10:26:56.707Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Sleeping for 10 seconds, waiting for webhook to complete
* *
2025-02-06T10:27:27.170Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Webhook completed :: {TARGET_DISK_NAME=disk-1000-10, TARGET_DISK_UUID=6000C29c-db3b-0c5a-0c46-1949d94fc213}
2025-02-06T10:27:27.172Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.t.VmspMountDiskTask] -- targetDiskName: disk-1000-10
2025-02-06T10:27:27.172Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspMountDiskSuccess
* *
2025-02-06T10:27:27.410Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- Starting :: vmsp based product deployment task.
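As the entries above show, the disk mount is asynchronous: LCM fires the mount webhook and then polls its status URI every 10 seconds until the response stops reporting `running`. A minimal sketch of that polling pattern in Python — the response shape is inferred from the log, and `fetch_status` is a stand-in for the HTTP GET against the status URI:

```python
import time

def wait_for_webhook(fetch_status, poll_seconds=10, max_polls=180):
    """Poll a webhook status endpoint until it stops reporting 'running'.

    fetch_status is any callable returning a dict shaped like the
    responses in the log, e.g.
    {"statusCode": 200, "running": True, "completedAt": None}.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if not status.get("running"):
            return status  # finished; caller inspects completedAt/output
        time.sleep(poll_seconds)
    raise TimeoutError("webhook did not complete in time")
```

Against a real endpoint, `fetch_status` would wrap a GET to the statusURI shown in the log, with the service-account token as a bearer credential.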
2025-02-06T10:27:27.412Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- vcfaYamlfolderName: /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c 2025-02-06T10:27:27.413Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPUtil] -- Yaml folder found for VCFA deployment 2025-02-06T10:27:27.413Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPUtil] -- yamlFileName:/tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml 2025-02-06T10:27:27.414Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPUtil] -- Yaml file found for VCFA deployment 2025-02-06T10:27:27.414Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.ShellExecutor] -- Executing shell command: cat /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml 2025-02-06T10:27:27.414Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.ProcessUtil] -- Execute cat 2025-02-06T10:27:27.417Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.ShellExecutor] -- Result: [deploy: dataMigration: true vcfa: fipsMode: false]. 
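The `cat` result above is flattened into one line by the log formatting; laid out normally, the vcfa-upgrade-values.yaml file read at this point is simply:

```yaml
deploy:
  dataMigration: true
vcfa:
  fipsMode: false
```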
2025-02-06T10:27:27.418Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp pkg push registry.vmsp-platform.svc.cluster.local:5000 https://ops-43-171.ibn.arun.com/repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar --deploy -n prelude --set vcfa.fqdn=ops-43-173.ibn.arun.com --set deploy.dataMigration=true --wait --timeout=120m --remote --set deployment.size=small --kubeconfig=KXKXKXKX -f /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml 2025-02-06T10:27:27.419Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started 2025-02-06T10:27:28.323Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.l.MaskingPrintStream] -- * SYSOUT/SYSERR CAPTURED: -- KKKKK I AM RETRIEVING 2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.c.FileContentDatabase] -- URL :: /repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar 2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.c.FileContentDatabase] -- Query URL :: /productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar 2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.c.FileContentDatabase] -- Decoded query url ::/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar 2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.l.MaskingPrintStream] -- * SYSOUT/SYSERR CAPTURED: -- KKKKK I AM RETRIEVING 2025-02-06T10:27:28.325Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.c.FileContentDatabase] -- URL :: /repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar 2025-02-06T10:27:28.325Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.c.FileContentDatabase] -- Query URL :: /productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar 2025-02-06T10:27:28.325Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.c.FileContentDatabase] -- Decoded query url ::/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar 2025-02-06T10:58:52.663Z INFO vrlcm[171714] 
[pool-3-thread-78] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg push registry.vmsp-platform.svc.cluster.local:5000 https://ops-43-171.ibn.arun.com/repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar --deploy -n prelude --set vcfa.fqdn=ops-43-173.ibn.arun.com --set deploy.dataMigration=true --wait --timeout=120m --remote --set deployment.size=small --kubeconfig=KXKXKXKX -f /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml exit code: 0 output: error:
2025/02/06 10:27:28 bundle.vmsp.vmware.com/v1alpha1, Kind=Bundle/prelude/vra: created
2025/02/06 10:58:52 Deployment succeeded
2025/02/06 10:58:52 bundle.vmsp.vmware.com/v1alpha1, Kind=Bundle/prelude/vra cleaned up
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- Deleting /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.f.FileUtil] -- Directory is deleted : /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- VMSP package push successful
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnPackagePushTaskCompletion
2025-02-06T10:58:53.210Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.t.VmspUnmountDiskTask] -- Starting :: vmsp unmount disk task.
2025-02-06T10:58:53.211Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.u.VMSPUtil] -- Disk payload: { "govcUrl" : "ops-43-17.ibn.arun.com", "govcInsecure" : "1", "diskLabel" : "Hard disk 2", "sourcevmName" : "dnd-anushan-vrava-primary", "sshUser" : "vmware-system-user", "sshPass" : "JXJXJXJX", "mountDir" : "/vra-db", "targetDiskName" : "disk-1000-10" }
2025-02-06T10:58:53.211Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- httpPostCall url : /webhooks/vmsp-platform/vcenter-disk-mount/unmount
2025-02-06T10:59:03.974Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response status for API call : 200 Response data : {"statusCode":200,"output":"Warning: Permanently added '10.80.43.106' (ED25519) to the list of known hosts.\nWelcome to Photon 5.0 (\\m) - Kernel \\r (\\l)\n# 10.80.43.106:22 SSH-2.0-OpenSSH_9.3\nWelcome to Photon 5.0 (\\m) - Kernel \\r (\\l)\nWelcome to Photon 5.0 (\\m) - Kernel \\r (\\l)\n+ exec\nDetaching disk disk-1000-10 from dnd-anushan-vcfa-mgmt-zzv8s\nnode/
2025-02-06T10:59:05.825Z INFO vrlcm[171714] [pool-3-thread-21] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Executing VMSP Provisioning Generic Task

// Fetch Node Details Task //

2025-02-06T10:59:06.419Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.v.p.t.FetchVMSPNodeDetailsTask] -- Executing FetchVMSPNodeDetailsTask
2025-02-06T10:59:06.422Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.v.p.t.FetchVMSPNodeDetailsTask] -- hostname in product properties: null
2025-02-06T10:59:06.422Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:06.511Z INFO vrlcm[171714] [Thread-55240] [c.v.v.l.v.p.u.VMSPUtil] -- Cmd Output -> NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
2025-02-06T10:59:06.511Z INFO vrlcm[171714]
[Thread-55240] [c.v.v.l.v.p.u.VMSPUtil] -- Cmd Output -> dnd-anushan-vcfa-mgmt-zzv8s Ready control-plane 62m v1.32.0+vmware.1-fips 10.80.43.106 10.80.43.106 VMware Photon OS/Linux 6.1.126-3.ph5 containerd://1.7.24+vmware.1-fips
2025-02-06T10:59:06.515Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.v.p.t.FetchVMSPNodeDetailsTask] -- Node Details : dnd-anushan-vcfa-mgmt-zzv8s, 10.80.43.106
2025-02-06T10:59:07.057Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- Executing command on the host: 10.80.43.173 , as user: vmware-system-user
2025-02-06T10:59:07.057Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- ------------------------------------------------------
2025-02-06T10:59:07.058Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- Command: sudo kubectl --kubeconfig YXYXYXYX get pd -n vmsp-platform vmsp-platform -oyaml
2025-02-06T10:59:07.058Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- ------------------------------------------------------
2025-02-06T10:59:07.681Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- exit-status: 0
2025-02-06T10:59:07.753Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: FetchNodeDetailsCompletion

Stage 9: A password was collected as part of the upgrade inputs. It is used for vmware-system-user, a break-glass account used to log in to the VCF Automation cluster, and
admin, an administrator account for the provider organization in VCF Automation.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
vcf automation cluster logs

Log Snippets

2025-02-06T10:59:09.433Z INFO vrlcm[171714] [pool-3-thread-24] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Executing VMSP Provisioning Generic Task
2025-02-06T10:59:09.433Z INFO vrlcm[171714] [pool-3-thread-24] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnGenericVMSPEvent
2025-02-06T10:59:10.011Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- Starting :: vmsp fetch current admin pwd task.
2025-02-06T10:59:10.014Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- vmid to fetch kubeconfig YXYXYXYX
2025-02-06T10:59:10.014Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- ... com.vmware.vrealize.lcm.locker.controller.CredentialController@16d9c898
2025-02-06T10:59:10.014Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:10.018Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- httpPostCall url : /webhooks/prelude/tenant-manager/resetpassword
2025-02-06T10:59:10.018Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- payload : {"username":"admin"}
2025-02-06T10:59:10.018Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX
2025-02-06T10:59:10.019Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T10:59:10.223Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token}
--kubeconfig=KXKXKXKX 2025-02-06T10:59:10.223Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started 2025-02-06T10:59:10.285Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX exit code: 0 output: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqZFBSemRuYzBvek5WZHBWMmhKY2s0MlNVWkNRbWRQVlZOeFpsaEhPWEpqVkVKbFNXZFpkR3BsVUZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUoyYlhOd0xYQnNZWFJtYjNKdElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTjViblJvWlhScFl5MWphR1ZqYTJWeUxXdHljQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp6ZVc1MGFHVjBhV010WTJobFkydGxjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJpWXpNek1qbGhMVEF4TmpVdE5Ea3pNaTA1WkdJMExURTBZV00wTmpJM09XTmpaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwMmJYTndMWEJzWVhSbWIzSnRPbk41Ym5Sb1pYUnBZeTFqYUdWamEyVnlJbjAuYkJ4ZHNYOXVHQ2o1dkdadzBIdzc0ZHlXTFZlc2lka0JONno1dm9mMktjNGZSYTIzem1NLVFQdDdBWVhXLWQ3alowVG9UM2FDQXBUUU1zamRqNlpoR1JyVXNzQWZvNXcyX3BvejZZS0h4TWJUYWFzSERhWE9uYU94TFB1VDVGaFd1cFRUUi1KX252SE5Qdlo4MFpTVG04VUtyblVCeWF0emRlV0hkVU0wZ1d3cVFiem9YZzBtb24xalN4S0lCUkJMdjNEMzNYUzNVck9hSXRJa2x4RkRUWndnbkxJVWZKNjhjblpDeVU2NFpmVXF1T1JsajNYY2VxLWswdUwwRWRrMzMyeldYSTgwc3NUV0d0MFlaWHpnbjFLYjNHNnZoSWh1Z19ZQzVON0w1ampINDJxaURuZ1JuTDZ5WnF6SXZaTHJOYTluX2I0R1J1LV9EbmNpaVRMNzVR error: * * 2025-02-06T10:59:30.657Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Webhook completed :: {password=KXKXKXKX, status=SUCCESS, username=admin} 2025-02-06T10:59:30.657Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- current status response: 
{"password":"JXJXJXJX","status":"SUCCESS","username":"admin"}
2025-02-06T10:59:30.661Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- current status responseDTO: com.vmware.vrealize.lcm.vmsp.common.dto.VmspFetchAdminPwdResponseDTO@6006a6c8
2025-02-06T10:59:30.661Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspFetchCurrentAdminPwdSuccess
2025-02-06T10:59:31.007Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.p.t.VcfaUpdateAdminPwdTask] -- Starting VCFA update admin pwd task
2025-02-06T10:59:31.022Z WARN vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.d.r.VcfaRestClient] -- VCF Automation RestClient : Request body is null for HTTP method POST
2025-02-06T10:59:31.023Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.d.r.VcfaRestClient] -- Triggering request :: https://ops-43-173.ibn.arun.com/tm/cloudapi/1.0.0/sessions/provider
* *
2025-02-06T10:59:31.676Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnUpdatingAdminPasswordSuccesffuly

Stage 10: In VCF 9.0.x, service accounts are used for inter-component communication and integrations. So, once the upgrade is completed, a service account for VCF Automation is created.
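A side note on these tokens: the long base64 blobs fetched with kubectl in the stages above are Kubernetes service-account JWTs. If you ever need to confirm which account a token belongs to, its claims can be decoded offline. A sketch, assuming the value is a base64-wrapped JWT as in the log output (shown with a synthetic token; treat real ones as secrets):

```python
import base64
import json

def decode_sa_token(b64_secret_token):
    """Decode a Kubernetes secret's .data.token value down to JWT claims.

    Secret data comes back base64-encoded from kubectl's jsonpath; the
    decoded value is a JWT whose middle segment is base64url JSON.
    """
    jwt = base64.b64decode(b64_secret_token).decode()
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

Running this against the blob in the log would show a `sub` claim like `system:serviceaccount:vmsp-platform:<account>`, confirming which service account LCM is using.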
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T10:59:32.757Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vrslcmcreateserviceaccount and the priority is => 10 2025-02-06T10:59:33.394Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.c.t.ServiceRegistryPreCheckTask] -- Start ServiceRegistryPreCheckTask 2025-02-06T10:59:33.396Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.c.t.ServiceRegistryPreCheckTask] -- Service Account Flow : Checking if vmsp exists 2025-02-06T10:59:33.398Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnServiceRegistryPreCheckSuccess 2025-02-06T10:59:34.055Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.d.c.t.LcmCreateServiceAccountTask] -- Executing LcmCreateServiceAccountTask 2025-02-06T10:59:34.060Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.s.u.ServiceAccountNameGenerator] -- Service account name to be created :: svc_vmsp_ops-lcm_392 2025-02-06T10:59:34.066Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.authN.controller.AuthNUserController.createServiceAccount(UserRequestDTO)) 2025-02-06T10:59:34.074Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthznCustomObjectMapper] -- Role DTO : RoleDTO [vmid=a9681b90-1a16-4d7c-88f6-da861a997d70, roleName=LCM Admin, roleDescription=VCF Operations Fleet Management Administrator] 2025-02-06T10:59:34.075Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthznCustomObjectMapper] -- User Entity : User [username=svc_vmsp_ops-lcm_392, password=KXKXKXKX, userType=LCM_LOCAL_USER, displayName=svc_vmsp_ops-lcm_392, providerIdentifier=null, domain=LCM Local, isDisabled=false, userPrincipalName=null, userMetadata=null, toString()=com.vmware.vrealize.lcm.authN.model.User@53c0118d] 2025-02-06T10:59:34.148Z INFO vrlcm[171714] 
[pool-3-thread-30] [c.v.v.l.a.c.AuthznCustomObjectMapper] -- User Role Mapping Entity : UserRoleMapping [uservmid=ee236354-9819-45d1-8a34-630da915e908, rolevmid=a9681b90-1a16-4d7c-88f6-da861a997d70, toString()=com.vmware.vrealize.lcm.authN.model.UserRoleMapping@544b49bc]
2025-02-06T10:59:34.164Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthNUserController] -- Created User with Id : ee236354-9819-45d1-8a34-630da915e908
2025-02-06T10:59:34.168Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.l.c.CredentialController] -- Request to create internal password.
2025-02-06T10:59:34.169Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.l.s.c.PasswordStoreServiceImpl] -- Create new internal password YXYXYXYX alias: svc_vmsp_ops-lcm_392
2025-02-06T10:59:34.172Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.l.c.CredentialController] -- Password YXYXYXYX with name e170f70f-3519-4a72-bb68-9996d959322d created successfully.
2025-02-06T10:59:34.197Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateServiceAccountSuccess

Stage 11: The Service Registry is an index or directory of service accounts that records how the management components integrate with each other. VCF Automation becomes part of the service registry once this stage completes.
Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log

Log Snippets

2025-02-06T10:59:35.159Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => createserviceregistry and the priority is => 11
2025-02-06T10:59:35.190Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Injected OnStart Edge for the Machine ID :: createserviceregistry
2025-02-06T10:59:35.801Z INFO vrlcm[171714] [pool-3-thread-33] [c.v.v.l.d.c.t.ServiceRegistryPreCheckTask] -- Start ServiceRegistryPreCheckTask
2025-02-06T10:59:36.408Z INFO vrlcm[171714] [pool-3-thread-34] [c.v.v.l.d.c.t.CreateServiceRegistryTask] -- Executing CreateServiceRegistryTask
2025-02-06T10:59:36.409Z INFO vrlcm[171714] [pool-3-thread-34] [c.v.v.l.d.c.t.CreateServiceRegistryTask] -- Executing Service Registry creation for product : vrslcm
2025-02-06T10:59:36.488Z INFO vrlcm[171714] [pool-3-thread-34] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateServiceRegistrySuccess

Stage 12: The push-capabilities stage determines which management components are already deployed and which integrations are needed.
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T10:59:37.562Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vmsppushcapabilities and the priority is => 12 2025-02-06T10:59:37.563Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.MachineRegistry] -- GETTING MACHINE FOR THE KEY :: vmsppushcapabilities 2025-02-06T10:59:38.804Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Executing VmspPushCapabilitiesTask 2025-02-06T10:59:38.806Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- VMSP ProductIds : [ "b482c25c-809b-498d-864a-2b70527de903" ] 2025-02-06T10:59:38.806Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Executing Push Capabilities for VMSP : b482c25c-809b-498d-864a-2b70527de903 8f86a9cc-62ab-4c01-80fb-c399c0e648eb 2025-02-06T10:59:38.812Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String)) 2025-02-06T10:59:38.815Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.u.VMSPUtil] -- Kubeconfig YXYXYXYX file created at /data/vmsp/.kubeconfig_e5316135-53e7-412d-be10-5e4971923ee5 2025-02-06T10:59:38.822Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.s.c.CapabilitiesRestController] -- Request to build capabilities for product : VMSP : 8f86a9cc-62ab-4c01-80fb-c399c0e648eb 2025-02-06T10:59:38.823Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.s.s.CapabilitiesService] -- Requested for capabilities object for consumer type :: VMSP : environmentId : 8f86a9cc-62ab-4c01-80fb-c399c0e648eb 2025-02-06T10:59:38.824Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.s.s.CapabilitiesService] -- Building capability for dependent component with type :: VCF_OPS_LCM 2025-02-06T10:59:38.935Z INFO vrlcm[171714] 
[pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Constructed Capabilities : { "capabilities" : [ { "key" : "ops-lcm", "type" : "VCF_OPS_LCM", "name" : "VCF OPS LCM", "nodes" : [ { "name" : "VCF OPS LCM", "addresses" : [ { "type" : "IPv4", "value" : "10.80.43.171" }, { "type" : "Fqdn", "value" : " ops-43-171.ibn.arun.com " } ], "certificates" : [ "-----BEGIN CERTIFICATE-----\nMIIDuDCCAqCgAwIBAgIUcTc/nKdxW7BPQXH45YJb1VbowEUwDQYJKoZIhvcNAQEL\nBQAwVDELMAkGA1UEBhMCVVMxETAPBgNVBAoMCEJyb2FkY29tMQwwCgYDVQQLDANW\nQ0YxJDAiBgNVBAMMG29wcy00My0xNzEuaWJuLmJyb2FkY29tLm5ldDAeFw0yNTAx\nMzAxNTQ3MzZaFw0zMDAxMjkxNTQ3MzZaMFQxCzAJBgNVBAYTAlVTMREwDwYDVQQK\nDAhCcm9hZGNvbTEMMAoGA1UECwwDVkNGMSQwIgYDVQQDDBtvcHMtNDMtMTcxLmli\nbi5icm9hZGNvbS5uZXQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCe\nFehoXFQoeyI7X6JTsz/JoQxM55zUxqRTrtfC1Onc/6p9JuIOrdJxIN45f2zVNOZG\nohCnIOyz9Kr/MA162qSHz64RbSzLiXId0VROXxWaQ0qK0BBwjrO7g6V23YTb7cqk\n84qebEKIPKlYxsY0x71l4q6qStqV7RrYbPmWq8eWD2SJ3ZDAjCSyATV18rirNlqs\nMD21UvGzzB0jAGBO4O8d81ft7ByxIdoOomlBTr/jDfiAsIM6i3lm0YExMr8rWt5M\nIKkpDwgeDMedMC+7HSvvZX3hoESScneE5SD20Bi6l2jp0o96+4V31KiEaKwmTEPC\nIkT6j1U0k+4bn5tyL2zLAgMBAAGjgYEwfzAdBgNVHQ4EFgQUX1HpY4iA4SRqrN1s\nqU9LdheUfGYwHwYDVR0jBBgwFoAUX1HpY4iA4SRqrN1sqU9LdheUfGYwDwYDVR0T\nAQH/BAUwAwEB/zAsBgNVHREEJTAjghtvcHMtNDMtMTcxLmlibi5icm9hZGNvbS5u\nZXSHBApQK6swDQYJKoZIhvcNAQELBQADggEBAFYRvDmkWLDGwKHk4MXA2nFHTCV0\nk10tESZAlqqfotLG0sjdHJa7pjk5ke2RxjMi0Sc7HpsflFsKXxkR6k60L93oJqVb\nH6V1beqa7+gpu3sR2do2PItu2vBT88oBxaECtMpnGMxM0yjZlISTEnq/m8idjgYR\n11AsUOVSmx+NYtzd5r2+Gl3A+HargR/iDUmvvNiWp90aeLtCfuT2qo2z5tY/H1zd\nuS2Lq+oVxV9eVsfCxWxpEVx6xIqd4Hxe28o2adoX0llTYz5Sf8GFsSIaxphY2ccc\njEGAImjr4x9D/XZaNptGcuOpV4Z1aGN2GwUrGJ9y0IpKZoy2j3/Fe+FZcqM=\n-----END CERTIFICATE-----\n" ] } ], "secret" : JXJXJXJX"type" : "Credential", "username" : "svc_vmsp_ops-lcm_392", "password" : "JXJXJXJX" } } ] } 2025-02-06T10:59:49.460Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Updating LCM Certificate 
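The capabilities document above is plain JSON; when troubleshooting, it can be handy to pull out the node addresses for each capability. A small helper, assuming only the shape visible in the log (this is not a documented schema):

```python
def capability_endpoints(capabilities_doc):
    """Map each capability key to the address values of its nodes."""
    endpoints = {}
    for cap in capabilities_doc.get("capabilities", []):
        addresses = [
            addr["value"].strip()  # log values sometimes carry stray spaces
            for node in cap.get("nodes", [])
            for addr in node.get("addresses", [])
        ]
        endpoints[cap["key"]] = addresses
    return endpoints
```

Fed the payload from the log, this would return `{"ops-lcm": ["10.80.43.171", "ops-43-171.ibn.arun.com"]}`.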
2025-02-06T10:59:49.461Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.u.VMSPUtil] -- Command to create LCM Cert Secret YXYXYXYX /usr/local/bin/kubectl --kubeconfig=KXKXKXKX create secret YXYXYXYX ops-mgmt-cert -n vmsp-platform --dry-run=client
2025-02-06T10:59:50.019Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.u.ShellExecutor] -- Result: [secret/ops-mgmt-password-secret YXYXYXYX
2025-02-06T10:59:50.019Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.u.VMSPUtil] -- Install command response :: secret/ops-mgmt-password-secret YXYXYXYX
2025-02-06T10:59:50.020Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Push Capabilities to VMSP successful
2025-02-06T10:59:50.021Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspPushCapabilitySuccess

Stage 13: At this stage, VCF Operations is already present; suppose VCF Operations for Logs is deployed as well. Based on the capabilities listed in the service registry, VCF Automation is integrated with VCF Operations and VCF Operations for Logs using the service account created earlier.
Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log

Log Snippets

2025-02-06T10:59:50.759Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => configurecrossproducts and the priority is => 13
2025-02-06T10:59:50.838Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- environmentId : 8f86a9cc-62ab-4c01-80fb-c399c0e648eb
2025-02-06T10:59:50.838Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- monitorvRAWithvROps : null
2025-02-06T10:59:50.838Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- registervROpswithvRA : null
2025-02-06T10:59:50.839Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- monitorvRNIWithvROps : null
2025-02-06T10:59:50.839Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCrossProductConfigurationCompleted

Stage 14: With the upgrade nearly finished, the VCF Automation 9.0.x cluster is now included in the VCF Operations fleet management appliance's inventory. At this point, when you navigate to the Components pane, you'll find VCF Automation 9.0 available, along with all applicable Day-N actions. Note that during this phase the VMware Aria Automation 8.x cluster is removed from VCF Operations, making way for the new version of VCF Automation.
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T10:59:52.436Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Creating reference for pwd with env id - 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destination name - vra and type - product 2025-02-06T10:59:52.436Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference addition required 2025-02-06T10:59:52.436Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Checking whether locker id 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901 is already referenced 2025-02-06T10:59:52.437Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- locker id is already referenced 2025-02-06T10:59:52.437Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Duplicating pwd with vmid - 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901 2025-02-06T10:59:52.439Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference request entity : BaseDTO{vmid='3659e0da-183d-4837-a911-63d0b28eb0bc', version=8.1.0.0} 2025-02-06T10:59:52.440Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference model entity : com.vmware.vrealize.lcm.locker.model.LockerReference@74d797eb 2025-02-06T10:59:52.442Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference with envID 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destinationName vra and lockerID ff3c03ab-5719-4495-8f58-d76dff253e18 successfully added. 2025-02-06T10:59:52.443Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.c.LockerReferenceController] -- References created successfully. 
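The reference bookkeeping in these entries follows a simple rule: if the locker id is already referenced elsewhere, the password is duplicated under a new vmid before the new reference is added. A loose illustration of that behavior (the internals here are assumed for illustration, not taken from the product code):

```python
import uuid

def add_reference(references, env_id, destination, locker_id,
                  duplicate=lambda lid: str(uuid.uuid4())):
    """Add an (environment, destination) -> locker-id reference.

    If the locker id is already referenced by another entry, the
    credential is duplicated first (here modeled as minting a new vmid)
    and the copy is referenced instead, mirroring what the log shows.
    """
    if locker_id in references.values():
        locker_id = duplicate(locker_id)  # hypothetical duplication step
    references[(env_id, destination)] = locker_id
    return locker_id
```

This explains why the log reports "locker id is already referenced" followed by "Duplicating pwd" before each "successfully added" line.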
2025-02-06T10:59:52.443Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating env inventory for password YXYXYXYX 2025-02-06T10:59:52.443Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String)) 2025-02-06T10:59:52.445Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- ###Printing password YXYXYXYX before updating env inventory: locker:password:KXKXKXKX 2025-02-06T10:59:52.447Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating enhanced request for product vra in environment 8f86a9cc-62ab-4c01-80fb-c399c0e648eb. 2025-02-06T10:59:52.448Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.a.g.s.UserRequestServiceImpl] -- Saving user request 2025-02-06T10:59:52.457Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Creating reference for pwd with env id - 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destination name - vra and node type - vcfa-primary 2025-02-06T10:59:52.457Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference addition required 2025-02-06T10:59:52.457Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Checking whether locker id 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901 is already referenced 2025-02-06T10:59:52.458Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- locker id is already referenced 2025-02-06T10:59:52.458Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Duplicating pwd with vmid - 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901 2025-02-06T10:59:52.460Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference request entity : BaseDTO{vmid='6394188c-f1dd-4f85-a4a5-80d1954da981', version=8.1.0.0} 
2025-02-06T10:59:52.461Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference model entity : com.vmware.vrealize.lcm.locker.model.LockerReference@5ec191cf 2025-02-06T10:59:52.464Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference with envID 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destinationName vra and lockerID f899b9a1-954c-407b-80b4-5722e25a6ae3 successfully added. 2025-02-06T10:59:52.465Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.c.LockerReferenceController] -- References created successfully. 2025-02-06T10:59:52.465Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating env inventory for password YXYXYXYX 2025-02-06T10:59:52.465Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String)) 2025-02-06T10:59:52.468Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- ###Printing password YXYXYXYX before updating env inventory: locker:password:KXKXKXKX 2025-02-06T10:59:52.469Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating enhanced request for product vra and node vcfa-primary in environment 8f86a9cc-62ab-4c01-80fb-c399c0e648eb. 
2025-02-06T10:59:52.470Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.a.g.s.UserRequestServiceImpl] -- Saving user request 2025-02-06T10:59:52.480Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- ###Printing licenseVmIds from Ref : 2025-02-06T10:59:52.481Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Finding references with envId '8f86a9cc-62ab-4c01-80fb-c399c0e648eb', destinationName 'vra', type: 'license' 2025-02-06T10:59:52.483Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- ###Printing licenseVmIds from Ref : 2025-02-06T10:59:52.484Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Finding references with envId '8f86a9cc-62ab-4c01-80fb-c399c0e648eb', destinationName 'vmsp', type: 'license' 2025-02-06T10:59:52.486Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Locker reference creation json : [ ] 2025-02-06T10:59:52.487Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.c.LockerReferenceController] -- References created successfully. 2025-02-06T10:59:52.488Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Status code : 201 , body : [ ] Stage 15: The new VCF Automation component is now registered with the notifications service of fleet management appliance. Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T10:59:53.761Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => notificationschedules and the priority is => 15 2025-02-06T10:59:53.860Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.p.c.n.t.NotificationSchedulesTask] -- Notification Schedule Task for supported Products in vCF mode. 
Product : vra 2025-02-06T10:59:53.873Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.c.n.u.NotificationUtil] -- Spec for nodeName productUpgradeNotification : {"symbolicName":"productUpgradeNotification","displayName":null,"productVersion":null,"priority":0,"dependsOn":[],"components":[{"component":{"symbolicName":"productUpgradeNotification","type":null,"componentVersion":null,"properties":{"environmentId":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","environmentName":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","productName":"vra","currentVersion":"9.0.0.0"}},"priority":0}]} 2025-02-06T10:59:53.874Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.c.n.u.NotificationUtil] -- Spec for nodeName productPatchingNotification : {"symbolicName":"productPatchingNotification","displayName":null,"productVersion":null,"priority":0,"dependsOn":[],"components":[{"component":{"symbolicName":"productPatchingNotification","type":null,"componentVersion":null,"properties":{"environmentId":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","environmentName":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","productName":"vra","currentVersion":"9.0.0.0"}},"priority":0}]} 2025-02-06T10:59:53.875Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnRegisterNotificationSchedulesTaskInitiated Stage 16: Most importantly, VCF Automation does not support snapshots, so customers should configure SFTP as soon as they land in VCF Operations 9.0.x, whether via the install or the upgrade route. If SFTP is configured, that configuration is automatically pushed to the VCF Automation 9.0.x cluster.
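The secret being applied in the snippets below can be pictured, roughly, as an ordinary Kubernetes Secret. This is a hypothetical sketch for orientation only: the secret name appears in the log output, the vmsp-platform namespace is assumed from the Stage 17 command, and the data key is an illustrative placeholder.

```yaml
# Hypothetical reconstruction of the object applied by the generated
# *_sftp_create_secret.sh helper. Only the secret name is confirmed by
# the logs; the namespace and the key name are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: sftp-password-secret
  namespace: vmsp-platform   # assumed from the Stage 17 vmsp command
type: Opaque
stringData:
  password: "<sftp-password>"   # illustrative key; actual keys are not shown in the logs
```

The "missing last-applied-configuration annotation" warning seen in the logs is benign: it just means the secret was originally created imperatively, and kubectl apply patches the annotation in on first use.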
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T10:59:55.558Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vmspsftpupdate and the priority is => 16 2025-02-06T10:59:56.217Z INFO vrlcm[171714] [pool-3-thread-54] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Starting :: fetch VMSP kubeconfig YXYXYXYX 2025-02-06T10:59:56.220Z INFO vrlcm[171714] [pool-3-thread-54] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Successfully fetched kubeconfig YXYXYXYX cluster 2025-02-06T10:59:56.820Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPUtil] -- Writing kubeconfig YXYXYXYX : 436c3cae-4cd7-4ce3-a63f-d411f4aac29f process exited with code: 0 2025-02-06T10:59:56.820Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- kubeconfig YXYXYXYX /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f 2025-02-06T10:59:56.822Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- Process exited with code: 0 2025-02-06T10:59:56.822Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Executing shell command: chmod 777 /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh 2025-02-06T10:59:56.822Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ProcessUtil] -- Execute chmod 2025-02-06T10:59:56.825Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Result: []. 
2025-02-06T10:59:56.825Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh 2025-02-06T10:59:57.112Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh YXYXYXYX code: 0 output: secret/sftp-password-secret YXYXYXYX error: Warning: resource secrets/sftp-password-secret YXYXYXYX the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. 2025-02-06T10:59:57.112Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh YXYXYXYX secret/sftp-password-secret YXYXYXYX 2025-02-06T10:59:57.113Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh YXYXYXYX Warning: resource secrets/sftp-password-secret YXYXYXYX the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. 2025-02-06T10:59:57.114Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- response: {"error":"Warning: resource secrets/sftp-password-secret YXYXYXYX the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. 
The missing annotation will be patched automatically.\n","output":"secret/sftp-password-secret configured\n","exitCode":0} 2025-02-06T10:59:57.114Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Executing shell command: rm -rf /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh 2025-02-06T10:59:57.115Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ProcessUtil] -- Execute rm 2025-02-06T10:59:57.118Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Result: []. 2025-02-06T10:59:57.118Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Executing shell command: rm -rf /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f 2025-02-06T10:59:57.118Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ProcessUtil] -- Execute rm 2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Result: []. 2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- VMSP create secret YXYXYXYX 2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateSecretTaskCompletion 2025-02-06T10:59:57.537Z INFO vrlcm[171714] [pool-3-thread-58] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Triggering API call:POST https://10.80.43.173:30005/webhooks/vmsp-platform/sftp/configure 2025-02-06T11:00:15.483Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.CreateVmspSftpReferencesTask] -- Reference creation for SFTP is successful with response: {"headers":{},"body":{"status":"SUCCESS","statusCode":"CREATED","message":"Success","resourceIdentifier":"81e56e29-f4aa-4bb1-9f5a-eb24a59e469a","errorCode":0,"errors":null},"statusCode":"CREATED","statusCodeValue":201} 2025-02-06T11:00:15.483Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.CreateVmspSftpReferencesTask] -- VMSP SFTP reference update successful 2025-02-06T11:00:15.483Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: 
OnVmspSftpReferenceCreationSuccess 2025-02-06T11:00:16.015Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnVmspSftpReferenceCreationSuccess Stage 17: At this point, after configuring the SFTP on the VCF Automation cluster, the backup schedule is deployed into the cluster. This guarantees that backups occur regularly. A full backup is performed every 24 hours, and an incremental backup is taken every 15 minutes. Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T11:00:16.559Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vmspscheduledbackup and the priority is => 17 2025-02-06T11:00:17.190Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Starting :: fetch VMSP kubeconfig YXYXYXYX 2025-02-06T11:00:17.194Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Successfully fetched kubeconfig YXYXYXYX cluster 2025-02-06T11:00:17.195Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnFetchVmspKubeconfigSuccess 2025-02-06T11:00:17.797Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.t.VmspScheduledBackupTask] -- Starting :: vmsp scheduled backup task. 
2025-02-06T11:00:17.800Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp pkg configure vmsp-platform -n vmsp-platform backups.schedule.hour=2 --wait --kubeconfig=KXKXKXKX 2025-02-06T11:00:17.801Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started 2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg configure vmsp-platform -n vmsp-platform backups.schedule.hour=2 --wait --kubeconfig=KXKXKXKX Output: 2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg configure vmsp-platform -n vmsp-platform backups.schedule.hour=2 --wait --kubeconfig=KXKXKXKX Error: 2025/02/06 11:00:18 releases.vmsp.vmware.com/v1alpha1, Kind=PackageDeployment/vmsp-platform/vmsp-platform: updated 2025/02/06 11:00:21 Deployment succeeded 2025/02/06 11:00:21 /v1, Kind=Secret/vmsp-platform/vmsp.release.vmsp-platform.v4: KXKXKXKX 2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.t.VmspScheduledBackupTask] -- VMSP scheduled backup successful 2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspScheduledBackupSuccess 2025-02-06T11:00:32.677Z INFO vrlcm[171714] [pool-3-thread-81] [c.v.v.l.v.p.t.VmspScheduledBackupRetentionPeriodConfigTask] -- VMSP scheduled backup retention period config successful Stage 18: Now comes the last stage, where the SDDC endpoints from the VCF Operations inventory are pushed into VCF Automation. These are automatically made available in VCF Automation's All Apps organization.
Logs to Monitor at this stage : /var/log/vrlcm/vmware_vrlcm.log Log Snippets 2025-02-06T11:00:33.357Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => triggerinventorysyncwithops and the priority is => 18 2025-02-06T11:00:33.417Z INFO vrlcm[171714] [pool-3-thread-84] [c.v.v.l.d.c.t.TriggerInventorySyncWithOpsTask] -- Trigger inventory sync ops task 2025-02-06T11:00:33.417Z INFO vrlcm[171714] [pool-3-thread-84] [c.v.v.l.d.c.t.TriggerInventorySyncWithOpsTask] -- inventorySyncOpsDtoString: "{\"environmentId\":\"8f86a9cc-62ab-4c01-80fb-c399c0e648eb\",\"productIds\":[\"vra\",\"vmsp\"],\"registrationFlow\":false,\"vropsProperties\":null,\"vropsGettingDeployed\":false}" 2025-02-06T11:00:33.419Z INFO vrlcm[171714] [pool-3-thread-84] [c.v.v.l.l.c.EnvironmentController] -- Request to sync endpoints with Ops : { "environmentId" : "8f86a9cc-62ab-4c01-80fb-c399c0e648eb", "productIds" : [ "vra", "vmsp" ], "registrationFlow" : false, "vropsProperties" : null, "vropsGettingDeployed" : false } -- InventorySyncOpsDto : { "environmentId" : "8f86a9cc-62ab-4c01-80fb-c399c0e648eb", "productIds" : [ "vra", "vmsp" ], "registrationFlow" : false, "vropsProperties" : null, "vropsGettingDeployed" : false } 2025-02-06T11:00:33.764Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.r.c.p.InventorySyncWithOpsPlanner] -- Fetching the list of products which needs to be synced 2025-02-06T11:00:34.031Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.r.c.RequestProcessor] -- Processing request with ID : 79f9b28e-3f66-4207-b3fb-92fa319437c3 with request type INVENTORY_SYNC_WITH_OPS with request state INPROGRESS. 
2025-02-06T11:00:40.291Z INFO vrlcm[171714] [pool-3-thread-87] [c.v.v.l.p.c.v.t.ActivateAdaptersTask] -- oldStatus : {id=CASAdapter, state=ACTIVATED} 2025-02-06T11:00:40.291Z INFO vrlcm[171714] [pool-3-thread-87] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnActivateAdaptersTaskCompleted 2025-02-06T11:00:40.583Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnActivateAdaptersTaskCompleted This concludes the VCF Automation upgrade process. Hope this helps.
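Across all of these stages, the engine announces each state transition with an "Injecting Edge" message in /var/log/vrlcm/vmware_vrlcm.log, so a quick way to follow upgrade progress from the shell is to pull those edge names out of the log. A minimal sketch, shown here against two sample lines copied from this post:

```shell
# Two sample "Injecting Edge" lines as they appear in vmware_vrlcm.log
# (copied from the snippets in this post).
sample=$(mktemp)
cat > "$sample" <<'EOF'
2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateSecretTaskCompletion
2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspScheduledBackupSuccess
EOF
# On a live appliance, replace "$sample" with /var/log/vrlcm/vmware_vrlcm.log
# (or tail -f it) to watch the stages go by.
edges=$(sed -n 's/.*Injecting Edge :: \([A-Za-z0-9]*\).*/\1/p' "$sample")
echo "$edges"
```

The same one-liner works for the vRLI, vRNI and 9.0.1 patching flows described elsewhere on this blog, since they all log through the same state machine.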

  • VCF Operations for Networks 9.0.x | Install Guide |

    Introduction VCF Operations for Networks  brings intelligent automation and oversight to your software-defined networking and security. It empowers you to build an optimized, highly available, and deeply secure infrastructure across any multi-cloud environment. By bridging the gap between virtual and physical networks, it delivers end-to-end visibility, allowing you to accelerate micro-segmentation planning and gain the exact operational insights needed to confidently manage and scale your VCF NSX deployments. Architecture When you install VCF Operations for Networks  (formerly known as VMware Aria Operations for Networks or vRealize Network Insight), the architecture relies on two distinct types of virtual appliances working together. The Platform Node (Platform VM) This is the brain of the operation. The Platform Node is the centralized analytics and management node. What it does:  It processes all the network data, runs the analytics engine, stores the mathematical models of your network, and serves the web-based User Interface (UI) that you interact with. Role in the network:  When you search for a traffic flow, map out application dependencies, or generate micro-segmentation recommendations, the Platform Node is doing the heavy lifting. Scalability:  In small environments, you only need one. For larger enterprise environments or for High Availability (HA), you can deploy a cluster of multiple Platform nodes to share the load and ensure redundancy. The Collector Node (Proxy/Collector VM) Think of this as the data-gatherer. The Collector Node sits closer to your actual data sources (like vCenter, NSX Managers, physical switches, firewalls, and public cloud endpoints). What it does:  It actively ingests and collects data from your infrastructure using various protocols (like APIs, NetFlow, SNMP, SSH, etc.). After gathering the data, it encrypts and forwards it to the Platform Node for processing. 
Role in the network:  It acts as a secure proxy, so the Platform VM doesn't need direct access to every single switch or firewall in your environment. Scalability:  You will typically deploy one or more Collector VMs depending on the size of your network, the geographical distribution of your data centers, and the volume of traffic data being generated. Sizing VMware sizes these deployments into "Bricks" (Medium, Large, Extra Large, etc.) based on the scale of your environment, such as the number of VMs and the volume of network flows you need to monitor. Crucial Deployment Note:  For VCF Operations for Networks to function properly and remain fully supported, VMware strictly requires a 100% reservation  for both CPU and Memory on these nodes. Platform Node Requirements Because the Platform Node runs the analytics engine and database, it is the more resource-heavy of the two.
Brick Size     vCPU (at 2.6 GHz)   Memory (RAM)   Storage (Thin Provisioned)
Medium         8 Cores             32 GB          1 TB
Large          12 Cores            48 GB          1 TB
Extra Large    16 Cores            64 GB          2 TB
Collector Node Requirements The Collector Node is lighter since its primary job is to ingest, encrypt, and forward data rather than analyze it.
Brick Size     vCPU (at 2.6 GHz)   Memory (RAM)   Storage (Thin Provisioned)
Medium         4 Cores             12 GB          200 GB
Large          8 Cores             16 GB          200 GB
Extra Large    8 Cores             24 GB          200 GB
2X Large       16 Cores            48 GB          300 GB
Things to Note, Very Important Reserve an IP for each node, both Platform and Collector. VCF Operations for Networks does not accept an FQDN during installation; it asks only for an IP address. Even though only IP addresses are accepted during install, it is recommended to map each IP to an FQDN. Include all of these IPs and FQDNs in the certificate's Subject Alternative Name. This is very important with future upgrades in mind.
The platform and collector nodes are deployed only on the management domain of a given VCF instance. Installation Log in to VCF Operations and browse to Fleet Management -- Lifecycle -- VCF Management -- Binary Management -- Install Binaries. Download the VCF Operations for Networks install binary. A binary download request is triggered and takes a couple of minutes to complete. Once the binary download is complete, browse to Fleet Management -- Lifecycle -- VCF Management -- Add Component -- operations-networks. A new pane opens presenting the following options: Installation Type New Install: Use this option when you would like to install a vanilla, brand-new operations-networks deployment. Import: Use this option if you already have operations-networks 9.0.x installed in your environment and you would like to import it into VCF Operations (the fleet management appliance's inventory for lifecycle management). This option is used rarely, when the operations-networks component has been soft-deleted or removed from the fleet management appliance's inventory for troubleshooting purposes. Also leverage this option if you have VMware Aria Operations for Networks 6.x that is not managed by VMware Aria Suite Lifecycle but you would like to import it and upgrade it to VCF Operations for Networks 9.0.x. Import from legacy VMware Aria Suite Lifecycle: Use this option if you already have VMware Aria Operations for Networks 6.x and it is being managed by VMware Aria Suite Lifecycle 8.x. Version: Select the version you would like to deploy. Deployment Type: Choose between Standard and Cluster; this defines how many platform and collector nodes are deployed. Click "Next" to choose the Certificate. If you don't have a certificate for this operations-networks deployment, click the "+" sign to open a pane for generating a new certificate. Remember to fill in the Hostname and IP Addresses field with all the hostnames and IP addresses of the platform and collector nodes.
As mentioned earlier, even though operations-networks doesn't use hostnames in its deployment, most customers assign a hostname to an IP address as a best practice to avoid IP conflicts if the IP is not reserved. It's crucial to include hostnames and IP addresses for both platform and collector nodes in the certificate. Click Next to enter infrastructure details on where the nodes are deployed: select the vCenter Server, Cluster, Folder, Resource Pool, Network, Datastore, Disk Mode and, optionally, Use Content Library. Click Next to enter network details: enter the Domain Name and Domain Search Path, select the DNS Server and NTP Server, and enter the IPv4 details (Default IPv4 gateway and IPv4 Netmask). Click Next to enter component details. This is the pane where you enter the platform and collector node deployment details. Platform Node: VM Name (the display name of the platform node), IP Address (the IP address of the platform node), Node Size (the size of the platform node). Collector Node: VM Name (the display name of the collector node), IP Address (the IP address of the collector node), Node Size (the size of the collector node). Click Next to run the prechecks. After the prechecks are successful, proceed by clicking "Next" to review the details, then submit to initiate the deployment process. Once the deployment is complete, go ahead and use VCF Operations for Networks by accessing the platform node's FQDN.
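Since VMware requires a 100% CPU and memory reservation on these nodes, it helps to total up front what a planned deployment will pin down on the cluster. A small sketch using the brick sizes from the tables above; the reserve function and its size keywords are my own naming for illustration, not a VMware tool:

```shell
# Total vCPU/RAM that must be fully reserved for a deployment, using the
# platform and collector brick sizes from the tables in this guide.
# Usage: reserve <platform_brick> <count> <collector_brick> <count>
reserve() {
  awk -v pb="$1" -v pn="$2" -v cb="$3" -v cn="$4" 'BEGIN {
    # platform bricks: "vCPU RAM_GB"     collector bricks: "vCPU RAM_GB"
    p["medium"] = "8 32";  p["large"] = "12 48"; p["xlarge"] = "16 64"
    c["medium"] = "4 12";  c["large"] = "8 16"
    c["xlarge"] = "8 24";  c["2xlarge"] = "16 48"
    split(p[pb], pp, " "); split(c[cb], cc, " ")
    printf "vCPU=%d RAM_GB=%d\n", pp[1]*pn + cc[1]*cn, pp[2]*pn + cc[2]*cn
  }'
}
# Example: a 3-node Large platform cluster fed by two Medium collectors.
out=$(reserve large 3 medium 2)
echo "$out"   # vCPU=44 RAM_GB=168
```

Comparing that total against the cluster's unreserved capacity before deployment avoids admission failures when the appliances power on with full reservations.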

  • Beyond the Biological Clock: Finding Your ‘Mission 100’

    We are living in an era where the dream of reaching 100 is becoming a statistical reality. But as the lifespan expands, a critical question emerges: How do we ensure our healthspan keeps pace? In his transformative book, "Mission 100 Years", renowned psychiatrist Dr. Laxmi Naresh Vadlamani argues that a vibrant century isn't a matter of luck, it’s a mission. Drawing on over three decades of medical experience, Dr. Vadlamani peels back the layers of aging to reveal that the secret to a long life lies at the intersection of mental resilience, purposeful productivity, and deep-seated joy. Whether you are in your thirties planning for the future or a senior looking to reclaim your vitality, this guide offers more than just medical advice; it offers a blueprint for a life well-lived. In today’s post, we’ll explore the key takeaways from Dr. Vadlamani’s work and how you can start your own mission to 100 today. Dr Naresh Vadlamani's Bio One can purchase book here at Amazon. https://amzn.in/d/1fiKqhv

  • How Doctors Solve the Mystery of Your Symptoms

    Why Your Doctor Won’t Give You an Answer Right Away (And Why That’s a Good Thing) We’ve all been there: You go to the doctor with a nagging cough or a strange pain in your side, hoping for a quick name for your problem and a prescription to fix it. Instead, your doctor asks a dozen questions, pokes around, and then says, "It could be X, but we need to run tests to rule out Y and Z." This can feel frustrating, but your doctor is actually using one of the most powerful tools in medicine: Differential Diagnosis. What is a "Differential"? In simple terms, a differential diagnosis is a shortlist of suspects.  Because many different conditions share the exact same symptoms, a doctor rarely knows the cause of an ailment the moment they walk into the room. For example, a simple sore throat could be: • A common cold (viral) • Strep throat (bacterial) • Seasonal allergies • Acid reflux The "Differential" is the process of weighing these possibilities against each other to find the truth. How the Detective Work Happens Think of your doctor as a medical detective. They use a three-step process to narrow down your "shortlist": 1. The Interview:  When they ask, "When did the pain start?" or "Does it feel sharp or dull?", they aren't just making small talk. They are looking for "clues" to cross items off the list. 2. The "Must-Not-Miss" Rule:  Doctors are trained to look for the most dangerous possibilities first. Even if they are 90% sure you have a simple muscle strain, they might run a test to make sure it isn't a blood clot. They prioritize your safety over a quick guess. 3. Testing by Elimination:  This is the part that tests our patience. Blood work, X-rays, or swabs are used to "rule out" the suspects one by one until only the correct diagnosis remains. Why This Matters to You Understanding this process can change how you experience healthcare: • Negative results are still good results:  If a test comes back "clear," it doesn't mean the test was a waste of time. 
It means the doctor has successfully narrowed the search. • You are the lead witness:  The more specific you can be about your symptoms (when they happen, what makes them better, what makes them worse), the faster your doctor can narrow down their list. • It prevents "Anchoring":  Sometimes, we get stuck on a diagnosis we found on Google. Differential diagnosis forces the doctor to keep an open mind, ensuring they don't miss a rare condition just because a common one seems more obvious. The Bottom Line Medicine is rarely a straight line; it’s a process of elimination. The next time your doctor says they want to "run a few possibilities," know that they are being a diligent detective to ensure you get the right treatment for the right problem.

  • Handling Postgres Startup and Cert Regeneration After a VCF 9.0.1 Fleet Management Reboot

Overview Applying the VCF Operations fleet management appliance 9.0.1.0 patch on top of version 9.0.0.0 is straightforward. It does not require a reboot of the appliance, as it just triggers a service restart. However, when this VCF Operations fleet management appliance was rebooted for whatever reason, we saw two issues: the Postgres service fails to start, and the certificate of VCF Operations fleet management is regenerated. If the appliance isn't restarted, you won't encounter these issues at all. Postgres Exception Sep 18 15:33:07 <> postgres[12859]: pg_ctl: could not open PID file "/var/vmware/vpostgres/current/pgdata/postmaster.pid": Permission denied Sep 18 15:33:07 <> systemd[1]: vpostgres.service: Control process exited, code=exited, status=1/FAILURE Certificate Issue Browsing to /opt/vmware/vlcm/cert/ will show a newly generated certificate and key; the existing pair is backed up in the format server.key.<> , server.crt.<> Remediation Log in to VCF Operations fleet management via SSH. Check the service with systemctl status vpostgres If it is down, execute the following command to fix the permissions: chmod 700 /var/vmware/vpostgres/current/pgdata/ Navigate to the /opt/vmware/vlcm/cert directory. The key and certificate files requiring change will have a timestamp in their names (e.g., server.crt.250930102056). Run the following commands to move the timestamped files back into place, replacing the filenames with the ones in your directory: mv server.key.250930102056 server.key mv server.crt.250930102056 server.crt Restart the NGINX service: systemctl restart nginx Restart the VCF Operations fleet management appliance service: systemctl restart vrlcm-server.service Check the status of the service: systemctl status vrlcm-server.service Once the service startup is complete, you should be good to go.
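The certificate restore step can also be scripted so it picks up whatever timestamped filenames are present. A sketch, rehearsed here on a scratch directory; on the appliance you would run the same loop inside /opt/vmware/vlcm/cert (if several backups exist it restores the newest one, so check first that that is what you want):

```shell
# Rehearse the restore on a scratch copy of /opt/vmware/vlcm/cert.
demo=$(mktemp -d)
cd "$demo" || exit 1
touch server.key server.crt                              # regenerated pair
touch server.key.250930102056 server.crt.250930102056    # backed-up originals
for f in server.key server.crt; do
  # Newest timestamped backup for this file, if any exist.
  backup=$(ls -1 "$f".* 2>/dev/null | sort | tail -n 1)
  if [ -n "$backup" ]; then
    mv -- "$backup" "$f"                                 # restore it over the regenerated file
  fi
done
ls -1
```

After the move, only server.key and server.crt remain, matching the state the remediation steps above expect before the nginx and vrlcm-server restarts.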

  • VCF 9.0 to VCF 9.0.1 Management Components Upgrade & Patching Runbook

Environment/Setup VMware Cloud Foundation 9.0.0.0 with all of the management components deployed. Background This document outlines the steps a customer must follow to upgrade the management components from VCF 9.0 to VCF 9.0.1, eventually followed by the core components. It assists in implementing the newly released maintenance update on the VCF 9.0 GA release. Depot & Binary Management Online Depot: If a customer is using an online depot, they would get a message stating there's a new version available for VCF Operations fleet management to which they can upgrade. Offline Depot - Local: The customer can download the new VCF 9.0.1 bundles using VCF-DT and then upload the tar into the VCF Operations fleet management appliance's /data path. Once done, they can click on Depot Configuration → EDIT DEPOT SETTINGS to refresh the depot connection, which then detects the new upgrade/patch binaries. Offline Depot - WebServer: The customer can download the new VCF 9.0.1 bundles using VCF-DT and place them on a repository through which the tar bundle can be exposed to the VCF Operations fleet management appliance. Once done, they can click on Depot Configuration → EDIT DEPOT SETTINGS to refresh the depot connection, which then detects the new upgrade/patch binaries. In this example, I am leveraging the Offline Depot → WebServer mechanism. Patch VCF Operations fleet management appliance Log in to VCF Operations → Fleet Management → Lifecycle → VCF Management. A banner stating that a new version of Fleet Management is available is shown. We need to upgrade the Fleet Management appliance before we can upgrade any other component to its 9.0.1 version. The same banner is made available under the Components pane too. Note: VCF Operations fleet management appliance version 9.0.1 is treated as a patch and not an upgrade.
Hence, the binary for the fleet management appliance is made available under VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Patch Binaries. Download it from there: select "fleet management" under Patch Binaries and click "Download". This generates a task which can be monitored under the Tasks pane. As shown in the screenshot below, the download of the patch binary is now completed.

Go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → System Patches. Click "Create Snapshot", which opens a pane asking for:

1. vCenter Hostname
2. vCenter Credential

Entering or selecting the above information and clicking "SUBMIT" creates a snapshot of the VCF Operations fleet management appliance. This is a mandatory step and cannot be skipped, so that we have an appropriate rollback or revert option in case of a failure.

After taking the snapshot, click "New Patch"; a pane opens showing the patch we just downloaded. Select the patch and click "NEXT". Under the "Review and Install" pane, review the information about the patch. There is a release notes link as well, which can be clicked and reviewed. Once done, click "INSTALL".

The moment you click INSTALL, you are redirected to the Tasks pane, where you can watch the installation task for the patch run to completion. While the patch is being installed and services are being restarted in the background, you will see a "Zero Page" as a placeholder.
At this point, one can log in to the VCF Operations fleet management appliance's shell and monitor the following logs to see what's really happening:

/var/log/vrlcm/vmware_vrlcm.log
/var/log/vrlcm/patchcli.log
/var/log/vrlcm/bootstrap.log
/data/script.log

Once the services are up, the VCF Operations → Fleet Management → Lifecycle → VCF Management page should be back in a functional state and the UI is automatically refreshed. It takes around 5 minutes for it to be back. Browse to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → System Details to validate the version. You have now successfully patched VCF Operations fleet management 9.0 to version 9.0.1.

Downloading Component Binaries

We already have the depot configured, whether offline or online. Because we updated the VCF Operations fleet management appliance to 9.0.1, the rest of the component binaries are now available for download as well.

Important to note:

1. VCF Automation 9.0.1 and VCF Identity Broker 9.0.1 will be available under "Patch Binaries"
2. VCF Operations 9.0.1, VCF Operations for Logs 9.0.1 and VCF Operations for Networks 9.0.1 will be available under "Upgrade Binaries"

Delete the binaries you no longer need to make room for the new ones being downloaded. If there's enough room under /data, there's no need to delete them.
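The patch logs listed earlier in this section can be followed live or scanned for failures after the fact. A minimal sketch, assuming the default appliance log paths from this post (scan_for_errors is a hypothetical helper, not a bundled tool):

```shell
#!/bin/sh
# To follow all four logs at once while the patch runs (on the appliance):
#   tail -F /var/log/vrlcm/vmware_vrlcm.log /var/log/vrlcm/patchcli.log \
#           /var/log/vrlcm/bootstrap.log /data/script.log

# Helper: scan one or more log files for common failure keywords,
# case-insensitively. "|| true" keeps a clean exit when nothing matches.
scan_for_errors() {
    grep -iE 'error|fail|exception' "$@" || true
}

# Usage on the appliance:
#   scan_for_errors /var/log/vrlcm/vmware_vrlcm.log /var/log/vrlcm/patchcli.log
```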
We can check the available size of /data, where the binaries are downloaded, under VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings.

Select the components and click download so that the binaries can be downloaded and mapped. As stated above, for VCF Operations for Networks, VCF Operations and VCF Operations for Logs, go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Upgrade Binaries to download them. For VCF Automation and VCF Identity Broker, go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Patch Binaries to download them. With that, all of the necessary binaries are downloaded.

Plan Upgrade

Under VCF Operations → Fleet Management → Lifecycle → VCF Management → Components, click "Plan Upgrade". The VCF version remains 9.0, as 9.0.1 is a maintenance release. Click the "Target Version" for each component and select version 9.0.1.0. The moment the target version is selected, the Target Build number is auto-populated. Once done, click "CREATE PLAN".

The moment a plan is created, the respective actions to be implemented on the components are enabled under the "Components" pane. As stated above, for Operations for Logs, Operations and Operations for Networks it is an "Upgrade"; for Automation and Identity Broker it is a "Patch".

Upgrade VCF Operations 9.0 to 9.0.1

Click "Upgrade" on the Component pane or the Overview pane. A pane opens with information that is important to read. It has "Trigger Inventory Sync", which should be executed as a best practice before an "Upgrade". Clicking "Trigger Inventory Sync" opens another pane asking whether you want to submit it. Once we click "Submit", it generates a task where progress can be tracked. Once completed, go back to the Component or Overview pane and click "Upgrade".
Since the "Trigger Inventory Sync" task was already complete, click "Proceed" to launch the upgrade request. Since the binary is already available, the repository URL is already populated. Run APUAT by clicking "Run Assessment". It takes a few minutes for the assessment to complete, so don't panic. Review and acknowledge the assessment, then click NEXT. Under the Snapshot pane, the option "Take Component Snapshot" is checked by default. Ensure this is not unchecked: as part of the upgrade, it will take a snapshot and then upgrade the component. There's also an option, "Retain Component Snapshot taken", which keeps the snapshot taken before the upgrade. Click "NEXT" to move forward. Under the "Precheck" pane, click "RUN PRECHECK" so that the checks begin. Once the prechecks are successful, click NEXT, then click SUBMIT to start the upgrade.

Upgrade VCF Operations for Logs from 9.0 to 9.0.1

Click "Upgrade" on the Component pane or the Overview pane. A pane opens with information that is important to read. It has "Trigger Inventory Sync", which should be executed as a best practice before an "Upgrade". Clicking "Trigger Inventory Sync" opens another pane asking whether you want to submit it. Once we click "Submit", it generates a task where progress can be tracked. Once completed, go back to the Component or Overview pane and click "Upgrade". Since the "Trigger Inventory Sync" task was already complete, click "Proceed" to launch the upgrade request. Since the binary is already available, the repository URL is already populated. Under the Snapshot pane, the option "Take Component Snapshot" is checked by default. Ensure this is not unchecked: as part of the upgrade, it will take a snapshot and then upgrade the component. There's also an option, "Retain Component Snapshot taken", which keeps the snapshot taken before the upgrade.
Click "NEXT" to move forward. Under the "Precheck" pane, click "RUN PRECHECK" so that the checks begin. Once the prechecks are successful, click NEXT, then click SUBMIT to start the upgrade.

Upgrade VCF Operations for Networks 9.0 to 9.0.1

Click "Upgrade" on the Component pane or the Overview pane. A pane opens with information that is important to read. It has "Trigger Inventory Sync", which should be executed as a best practice before an "Upgrade". Clicking "Trigger Inventory Sync" opens another pane asking whether you want to submit it. Once we click "Submit", it generates a task where progress can be tracked. Once completed, go back to the Component or Overview pane and click "Upgrade". Since the "Trigger Inventory Sync" task was already complete, click "Proceed" to launch the upgrade request. Since the binary is already available, the repository URL is already populated. Under the Snapshot pane, the option "Take Component Snapshot" is checked by default. Ensure this is not unchecked: as part of the upgrade, it will take a snapshot and then upgrade the component. There's also an option, "Retain Component Snapshot taken", which keeps the snapshot taken before the upgrade. Click "NEXT" to move forward. Under the "Precheck" pane, click "RUN PRECHECK" so that the checks begin. Once the prechecks are successful, click NEXT, then click SUBMIT to start the upgrade.

Patch VCF Automation 9.0 to 9.0.1

Click "Apply Patch" on the Component pane or the Overview pane. A pane appears displaying important information that must be reviewed. It is critical to verify that SFTP is properly configured and backups are functioning before starting the patch installation. These prerequisites are essential because VCF Automation nodes do not support snapshots.
During the patch process, the workflow automatically takes a backup to provide a recovery point in case a failure occurs.

How to verify SFTP is working: go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → SFTP Settings → SFTP status and confirm that no exception is being reported. Optional: take an ad-hoc backup from VCF Operations → Fleet Management → Lifecycle → VCF Management → Components → Automation → Backup & Restore (Day-N Operation) → Backup, just to be on the safe side.

Select the patch and acknowledge that you have verified the SFTP configuration is working. Click NEXT to go to the "Review and Install" pane for the patch, then click INSTALL to begin the process.

Patch VCF Identity Broker 9.0 to 9.0.1

Click "Apply Patch" on the Component pane or the Overview pane. A pane appears displaying important information that must be reviewed. It is critical to verify that SFTP is properly configured and backups are functioning before starting the patch installation. These prerequisites are essential because VCF Automation nodes do not support snapshots. During the patch process, the workflow automatically takes a backup to provide a recovery point in case a failure occurs.

How to verify SFTP is working: go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → SFTP Settings → SFTP status and confirm that no exception is being reported. Optional: take an ad-hoc backup from VCF Operations → Fleet Management → Lifecycle → VCF Management → Components → Automation → Backup & Restore (Day-N Operation) → Backup, just to be on the safe side.

Select the patch and acknowledge that you have verified the SFTP configuration is working. Click NEXT to go to the "Review and Install" pane for the patch, then click INSTALL to begin the process.

  • Journey to VCF 9 having vSphere 8.x and VMware Aria Operations 8.x

Introduction

This post introduces a mind map for a customer topology with vSphere and VMware Aria Operations, illustrating their path to VMware Cloud Foundation 9. It details the journey step by step, guiding them through component deployments and upgrades, ultimately completing their VMware Cloud Foundation 9 journey.

Customer Topology

1. vSphere 8.x with several vCenter Servers, with one vCenter hosting VMware Aria Operations 8.x
2. VMware Aria Operations 8.x
3. No NSX deployed

Mindmap

Upgrading VMware Aria Operations 8.x to VCF Operations 9.0

Obtain the software upgrade PAK file.

Snapshot the VMware Aria Operations 8.x cluster. It is mandatory to create a snapshot of each node in a cluster before you update a VMware Aria Operations cluster. Once the update is complete, you must delete the snapshots to avoid performance degradation.

1. Log in to the VMware Aria Operations administrator interface at https:///admin.
2. Click Take Offline under the cluster status.
3. When all nodes are offline, open the vSphere Client.
4. Right-click a VMware Aria Operations virtual machine.
5. Click Snapshot, then click Take Snapshot.
6. Name the snapshot. Use a meaningful name such as "Pre-Update".
7. Uncheck the Snapshot the Virtual Machine Memory check box.
8. Uncheck the Quiesce Guest File System (Needs VMware Tools installed) check box.
9. Click OK.
10. Repeat these steps for each node in the cluster.

Then install the update:

1. Log in to the primary node's VMware Aria Operations administrator interface at https://primary-node-FQDN-or-IP-address/admin.
2. Click Software Update in the left pane.
3. Click Install a Software Update in the main pane.
4. Follow the steps in the wizard to locate and install your PAK file. This updates the OS on the virtual appliance and restarts each virtual machine.
5. Read the End User License Agreement and Update Information, and click Next.
6. Click Install to complete the installation of the software update.
7. Log back into the primary node administrator interface.
The main Cluster Status page appears and the cluster goes online automatically. The status page also displays the Bring Online button, but do not click it. Clear the browser caches, and if the browser page does not refresh automatically, refresh the page. The cluster status changes to Going Online; when it changes to Online, the upgrade is complete. Click Software Update to verify that the update is done: a message indicating that the update completed successfully appears in the main pane. When you update VMware Aria Operations to the latest version, all nodes are upgraded by default. If you are using cloud proxies, the cloud proxy upgrades start after the VMware Aria Operations upgrade completes successfully.

Upgrading vCenter Server Appliance
Reference: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/9-0/vcenter-upgrade/upgrading-and-updating-the-vcenter-server-appliance.html

Upgrading ESX hosts
Reference: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/9-0/esx-upgrade/overview-of-the-esxi-host-upgrade-process.html

Import an existing vCenter Server as a Workload Domain
Reference: https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vcf-9-0-and-later/9-0/building-your-private-cloud-infrastructure/working-with-workload-domains/import-an-existing-vcenter-to-create-a-workload-domain.html
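The per-node "Pre-Update" snapshot steps earlier in this post can also be scripted with govc (the vSphere CLI from the govmomi project) instead of clicking through the vSphere Client. A sketch under stated assumptions: the connection settings and VM names below are placeholders, and the cluster must already be taken offline as described above:

```shell
#!/bin/sh
# govc connection settings -- placeholders, replace with your vCenter details.
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='replace-me'
export GOVC_INSECURE=1

# For each Aria Operations node VM (names here are hypothetical), take a
# snapshot without memory state and without quiescing, matching the UI steps:
#   for vm in aria-ops-node-1 aria-ops-node-2 aria-ops-node-3; do
#     govc snapshot.create -vm "$vm" -m=false -q=false "Pre-Update"
#   done
#
# After the upgrade completes, remove the snapshots to avoid performance
# degradation:
#   for vm in aria-ops-node-1 aria-ops-node-2 aria-ops-node-3; do
#     govc snapshot.remove -vm "$vm" "Pre-Update"
#   done
```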

  • Retrieving password using locker API in VCF 9.0 for Management Components

Retrieving Password from Locker

Step 1: Generate an API Token

To generate an API token, you can use either the VCF Operations fleet management appliance shell (logged in as root) or any Base64 encoding tool. Encode your credentials in the following format (the -n prevents a trailing newline from being encoded into the token):

echo -n 'admin@local:youradminatlocalpassword' | base64

Copy the resulting Base64-encoded string. This will be used for authorization.

Step 2: Authenticate via Swagger UI

Open the API documentation in your browser at https:///api/swagger-ui/index.html, or navigate to VCF Operations → Developer Central → Fleet Management API → API Documentation. In the Swagger UI, locate the API Token section. When prompted for authorization, enter the following in the input field:

Basic <Base64-string>

replacing <Base64-string> with the string you copied in Step 1. Click Authorize to authenticate and begin executing API requests.

Step 3: Retrieve Passwords from Locker

First, retrieve all passwords from the locker so that we can take the vmid from the response and then retrieve a specific password:

GET https://vcf-operations-fleetmanagement-appliance-fqdn/lcm/locker/api/passwords

The above API returns a paginated list of passwords:

[
  {
    "alias": "Default Password for vCenters",
    "createdOn": 1605791587373,
    "lastUpdatedOn": 1605791587373,
    "password": "PASSWORD****",
    "passwordDescription": "This password is being used for all my vCenters",
    "principal": "string",
    "referenced": true,
    "tenant": "string",
    "transactionId": "string",
    "userName": "administrator@vsphere.local",
    "vmid": "6c9fca27-678d-4e79-9a0f-5f690735e67c"
  }
]

Now retrieve the password by using the root password of the VCF Operations fleet management appliance.
Using the vmid fetched from the previous response, call:

POST https://vcf-operations-fleetmanagement-appliance-fqdn/lcm/locker/api/passwords/view/{vmid}

with the body:

{"rootPassword":"V*********!"}

The response of this call returns the password you need.
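The whole flow above can be sketched end to end with curl. The hostname and credentials below are placeholders to replace with your own, and the curl calls are shown commented out since they require a live appliance:

```shell
#!/bin/sh
# Sketch of the locker API flow described above.
FM_HOST='vcf-operations-fleetmanagement-appliance-fqdn'   # replace
CREDS='admin@local:youradminatlocalpassword'              # replace

# Step 1: build the Basic auth token. printf avoids encoding a trailing
# newline into the token (a plain echo would append one); tr strips any
# line wrapping base64 may add on long input.
TOKEN="$(printf '%s' "$CREDS" | base64 | tr -d '\n')"

# Step 3a: list all locker passwords (paginated) and note the vmid you need:
#   curl -sk -H "Authorization: Basic $TOKEN" \
#     "https://$FM_HOST/lcm/locker/api/passwords"

# Step 3b: reveal a specific password by vmid, supplying the appliance root
# password in the request body:
#   curl -sk -X POST \
#     -H "Authorization: Basic $TOKEN" \
#     -H 'Content-Type: application/json' \
#     -d '{"rootPassword":"<appliance-root-password>"}' \
#     "https://$FM_HOST/lcm/locker/api/passwords/view/<vmid>"
```

The -k flag skips TLS verification, which is only appropriate when the appliance still uses a self-signed certificate; drop it once a trusted certificate is in place.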
