Seamless Upgrades: From VMware Aria Automation 8.18.x to VCF Automation 9.0.x
- Arun Nukula
VMware Aria Automation 8.x deployment topology
VMware Aria Automation can be set up using either a simple deployment model or a clustered deployment model.
Simple Deployment
This model consists of a single VMware Aria Automation node.
No load balancer is used in this setup.
The fully qualified domain name (FQDN) of VMware Aria Automation directs to the IP address of the node.

Clustered Deployment
A clustered setup consists of three VMware Aria Automation nodes situated behind an external load balancer.
The Load Balancer FQDN for VMware Aria Automation directs to the Load Balancer's Virtual IP Address.
Each of the three VMware Aria Automation nodes in the cluster will have its own unique IP Address and FQDN assigned.

Things to know before you upgrade
Ensure the VCF Operations fleet management appliance's hostname is correct; it must be an FQDN. Go to VCF Operations → Fleet Management → Lifecycle → VCF Management → Settings → Summary; the hostname field there must be an FQDN (example: fleetmgmt.arun.com). Also, SSH to the fleet management appliance and ensure that hostname -f returns an FQDN, not a shortname.
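The shortname-versus-FQDN check can be scripted; a minimal sketch (fleetmgmt.arun.com is reused from the example above, the bare shortname is made up):

```shell
# is_fqdn: succeeds when the given hostname contains at least one dot,
# i.e. looks like a fully qualified domain name rather than a shortname.
is_fqdn() {
  case "$1" in
    *.*) return 0 ;;
    *)   return 1 ;;
  esac
}

# On the appliance itself you would check the live value: is_fqdn "$(hostname -f)"
if is_fqdn "fleetmgmt.arun.com"; then echo "fleetmgmt.arun.com: FQDN, good to go"; fi
if ! is_fqdn "fleetmgmt"; then echo "fleetmgmt: shortname, fix before importing"; fi
```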
VMware Aria Suite Lifecycle is responsible for managing the lifecycle of VMware Aria Automation 8.x. To upgrade to VCF Automation 9.0.x, you must first import VMware Aria Automation 8.x into VCF Operations 9.0.x, whose lifecycle capability is provided by the fleet management appliance.
It is advisable to replace the VMware Identity Manager certificate before importing VMware Aria Automation 8.x into VCF Operations 9.0.x. If the certificate is valid for several more years, it is acceptable; if it is set to expire soon, it should be replaced. Using VMware Aria Suite Lifecycle's replace certificate Day-N action not only updates the certificate on VMware Identity Manager but also propagates the new certificate to the integrated products. For instance, if VMware Aria Automation, VMware Aria Operations, and other management products are integrated with VMware Identity Manager, they will receive the new certificate, re-establishing trust among them. This is considered best practice, though not mandatory.
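To judge whether the certificate is close to expiry, openssl can report it directly. The snippet generates a throwaway self-signed certificate purely so the check is runnable anywhere; against a live environment you would point openssl s_client at the vIDM FQDN (the hostname shown is an example):

```shell
# Against a live vIDM (example FQDN), fetch and inspect the served certificate:
#   openssl s_client -connect vidm.arun.com:443 -servername vidm.arun.com </dev/null 2>/dev/null \
#     | openssl x509 -noout -enddate -checkend 7776000    # 7776000 s = 90 days
# Illustration with a throwaway self-signed certificate:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -subj "/CN=vidm.arun.com" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -enddate
if openssl x509 -in /tmp/demo.crt -noout -checkend 7776000 >/dev/null; then
  echo "certificate valid for at least another 90 days"
else
  echo "certificate expires within 90 days - consider replacing it before the import"
fi
```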
Initiate an inventory synchronization for VMware Aria Automation on VMware Aria Suite Lifecycle prior to starting the import process into VCF Operations 9.0.x. This guarantees that the most recent infrastructure data is consistently available and up-to-date.
To upgrade VMware Aria Automation to VCF Automation 9.0.x, the minimum supported version is 8.18.1.
This upgrade brings architectural changes that may impact current configurations, especially those related to cloud accounts and integrations. To facilitate a seamless and successful upgrade, it is crucial to review the pre-check requirements and remediation steps detailed in KB 389563.
Download VCF Automation upgrade binary
Depot configuration is mandatory before downloading any binaries, whether for installs, upgrades, or patches.
Navigate to VCF Operations → Fleet Management → Lifecycle → VCF Management → Binary Management → Upgrade Binary
Select Automation and click DOWNLOAD
Wait until the binary download completes
Importing VMware Aria Automation 8.18.1 into VCF Operations 9.0.x
Before VMware Aria Automation 8.18.x can be upgraded to VCF Automation 9.0.x, it must first be imported into VCF Operations 9.0.x to start the upgrade process
Within VCF Operations, navigate to Fleet Management → Lifecycle → Overview → Automation
Select ADD
Select “Import from legacy Aria Suite Lifecycle”, then click NEXT
In the Lifecycle Configuration pane
Enter the FQDN of the VMware Aria Suite Lifecycle instance that manages the VMware Aria Automation you would like to upgrade
The username would be admin@local
Enter the admin@local password
Enter the root password of VMware Aria Suite Lifecycle
VCF Operations now discovers the VMware Aria Automation deployments from VMware Aria Suite Lifecycle.
Choose the VMware Aria Automation you would like to import
Click NEXT to review and submit so that the import of VMware Aria Automation into VCF Operations begins
Once this process is completed, you should see VMware Aria Automation in VCF Operations 9.0.x as a managed component.
After the import is finished, VMware Aria Automation will be unregistered from the VMware Aria Suite Lifecycle where it was previously registered.
No Day-N actions will be available unless you upgrade it to VCF Automation.
The primary reason for importing should be to promptly upgrade to the new version. If you are uncertain, only proceed with the import when you have a clear plan.
Plan Upgrade
Under VCF Operations → Fleet Management → Lifecycle → VCF Management → Components , Click on "Plan Upgrade"
The VCF version will show as 9.0 itself; that's fine.
Click on the "Target Version" for VCF Automation and select the version you intend to go to.
The "Current Version" here will be 8.18.1, as you just imported it. Keep that in mind.
The moment the target version is selected, the Target Build number is auto-populated. Once done, click "CREATE PLAN".
The upgrade plan is now set. It's time to kick into action.
The "Upgrade"
Under VCF Operations → Fleet Management → Lifecycle → VCF Management → Components , Click on "Upgrade" next to VCF Automation
A modal pops up with a few messages. It is very important to read them.
If you haven't read KB 389563 yet, make sure to do so now to verify that your environment does not include any unsupported endpoints in VCF 9.0. If any endpoints fall into that category, follow the necessary remediation steps. If an upgrade to VCF Automation 9.0 is not possible due to specific endpoints, those deployments will remain on VMware Aria Automation 8.x.
If you have already imported and then realized that your upgrade cannot continue, delete Automation from VCF Operations and import VMware Aria Automation back into the VMware Aria Suite Lifecycle instance where it was managed before

If you're good with the endpoints and have no blockers, perform an inventory sync and click PROCEED
Clicking on PROCEED will lead you to a pane where you're presented with the upgrade inputs.
The entire infrastructure information is pre-populated; this is bound to be correct since you just performed an inventory sync
Click NEXT for the Network pane. If the DNS objects were already created before the import under VCF Management → Settings → Network Settings, they are auto-populated; otherwise, create them by clicking ADD DNS SERVER and ADD NTP SERVER
In this pane, the Fully Qualified Domain Name (FQDN) is pre-populated along with certificates and other relevant data. Below is an explanation of each property:
FQDN: Represents the VCF Automation FQDN.
Cluster FQDN: Represents the VCF Automation FQDN (for clustered VMware Aria Automation , enter the VMware Aria Automation load balancer FQDN)
Controller Type: There are two types here:
Internal
leverages the internal load balancer. During upgrade, if the source is a simple VMware Aria Automation, it defaults to the internal load balancer.
Others
leverages an external load balancer. During upgrade, if the source is a clustered VMware Aria Automation, it defaults to "Others". There is no need to change this value.
Primary VIP:
For Simple VMware Aria Automation deployment , the IP address of the lone node is populated
For Clustered VMware Aria Automation deployment , the IP Address of the first VMware Aria Automation node is designated as Primary VIP.
Additional VIP:
For Simple VMware Aria Automation deployment, it would be blank.
For Clustered VMware Aria Automation deployment , the IP addresses of the second and third VMware Aria Automation nodes are populated automatically.
Cluster CIDR: A CIDR block that the customer must choose, ensuring it does not conflict with their existing network infrastructure.
Node Prefix: A user-defined prefix for naming newly deployed VCF Automation virtual machines. The suffix is automatically generated.
Cluster Node IP Pool:
Simple: A set of two IP addresses used for deploying the new virtual machines that will host VCF Automation components. These IP addresses must be provided by the customer and should be in the same range as the Primary VIP.
Clustered: A set of four IP addresses used for deploying the new virtual machines that will host VCF Automation components. These IP addresses must be provided by the customer and should be in the same range as the Primary VIP.
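Before submitting, it is worth sanity-checking that the pool addresses really sit in the Primary VIP's range. A minimal sketch, assuming a /24 network and made-up addresses (substitute your own; for other prefix lengths compare the network portion accordingly):

```shell
# Check that each Cluster Node IP Pool address shares the Primary VIP's /24.
primary_vip="10.80.43.173"             # example Primary VIP
pool_ips="10.80.43.201 10.80.43.202"   # hypothetical pool addresses

prefix="${primary_vip%.*}"             # first three octets, i.e. the /24 network
for ip in $pool_ips; do
  if [ "${ip%.*}" = "$prefix" ]; then
    echo "$ip: same /24 as Primary VIP"
  else
    echo "$ip: OUTSIDE the Primary VIP /24 - review before submitting"
  fi
done
```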
Once this set of information is entered, click NEXT and execute the prechecks.
Only if the prechecks are successful should you proceed to review and submit the upgrade request.
Reusing VMware Aria Automation 8.x IPs in VCF Automation 9.0.x: understanding the mechanics behind it
In a simple VMware Aria Automation deployment model, there is a single VMware Aria Automation 8.x node.
This node has a fully qualified domain name (FQDN) that resolves to its IP address.
During the upgrade process, this IP address is switched to serve as the Primary VIP, while the FQDN and Cluster FQDN fields continue to reference the VMware Aria Automation's FQDN. This setup exclusively utilizes an "Internal Load Balancer."
There's no change in DNS records needed.

In a clustered VMware Aria Automation deployment model, three nodes are set up behind a load balancer.
The VMware Aria Automation load balancer FQDN will continue to point to the virtual server IP of the load balancer, which remains unchanged.
In a clustered VMware Aria Automation 8.x setup, the first node's IP address becomes the Primary VIP, while the IP addresses of the second and third nodes become Additional VIPs.
As previously mentioned, the Cluster Node IP Pool requires four new IP addresses. These addresses are used for deploying the nodes that will host VCF Automation.
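A small helper can confirm the DNS expectations above, i.e. that the product FQDN still resolves to the address that becomes the Primary VIP (simple) or to the load balancer VIP (clustered). The hostnames you would actually check are your own; localhost is used below only so the example runs anywhere:

```shell
# resolves_to: succeeds when <fqdn> resolves to <expected_ip>.
# getent honors /etc/hosts as well as DNS (Linux-specific).
resolves_to() {
  getent ahostsv4 "$1" | awk '{print $1}' | grep -qx "$2"
}

# Real usage would look like:  resolves_to vra.arun.com 10.80.43.173
if resolves_to "localhost" "127.0.0.1"; then
  echo "localhost resolves as expected"
fi
```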

I hope this is clear. It's crucial to grasp this concept. All these inputs are automatically populated. However, as a customer, it's essential to understand how it functions.
Behind the Scenes : Upgrade Workflow
Stage 1: Overall FIPS status collection. The workflow collects the FIPS status from the source, VMware Aria Automation 8.x.
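Every stage transition is logged as an "Injecting Edge" line, which makes the workflow easy to follow from the fleet management appliance. The snippet below demonstrates the filter on two sample lines copied from this post; on a live system you would tail the real log instead:

```shell
# Two sample lines for illustration; the live file is /var/log/vrlcm/vmware_vrlcm.log
cat > /tmp/vrlcm_sample.log <<'EOF'
2025-02-06T09:37:08.367Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnOverallFipsStatusForVcfCollectionSuccess
2025-02-06T09:37:27.260Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVaSnapshotCompletion
EOF

# Stage-by-stage timeline of a run:
grep -o 'Injecting Edge :: [A-Za-z]*' /tmp/vrlcm_sample.log

# Live monitoring during an upgrade (Ctrl-C to stop):
#   tail -f /var/log/vrlcm/vmware_vrlcm.log | grep --line-buffered 'Injecting Edge'
```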

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T09:37:07.788Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.v.p.t.GetOverallFipsStatusForVcfTask] -- Starting :: Get FIPS status for VCF task.
2025-02-06T09:37:08.366Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.v.p.t.GetOverallFipsStatusForVcfTask] -- Collected VCF FIPS status successfully.
2025-02-06T09:37:08.367Z INFO vrlcm[171714] [pool-3-thread-23] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnOverallFipsStatusForVcfCollectionSuccess
Stage 2: Overall CEIP status collection. The workflow collects the CEIP status from the source, VMware Aria Automation 8.x.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T09:37:09.992Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.p.t.GetOverallCeipStatusForVcfTask] -- Starting :: Get CEIP status for VCF task.
2025-02-06T09:37:10.411Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.p.t.GetOverallCeipStatusForVcfTask] -- Collected VCF CEIP status successfully.
2025-02-06T09:37:10.411Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnOverallCeipStatusForVcfCollectionSuccess
Stage 3: Snapshots of the VMware Aria Automation 8.x nodes are created.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T09:37:11.158Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => createNodeSnapshot and the priority is => 2
2025-02-06T09:37:11.680Z INFO vrlcm[171714] [pool-3-thread-28] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnNodeVmidCollectionSuccess
2025-02-06T09:37:12.209Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.v.d.h.TaskHelper] -- Task : CreateSnapshot_Task is still not completed
2025-02-06T09:37:27.258Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.v.v.u.SnapshotUtil] -- Snapshot create status : true
2025-02-06T09:37:27.259Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.c.t.VaSnapshotFromVmidTask] -- Snapshot Result: {
"snapshotName" : "LCM-vRA-VA-Snapshot - Thu Feb 06 09:37:06 UTC 2025",
"snapshotDescription" : "vRSLCM Snapshot: Thu Feb 06 09:37:06 UTC 2025",
"snapshotId" : "snapshot-28930",
"nodeIp" : "10.80.43.173",
"snapshotSuccess" : true
}
2025-02-06T09:37:27.260Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVaSnapshotCompletion
2025-02-06T09:37:27.385Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnVaSnapshotCompletion
Stage 4: A backup of the databases and the necessary configuration of the VMware Aria Automation 8.x cluster is taken and stored on a disk.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T09:37:27.958Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vravadatabasebackup and the priority is => 3
2025-02-06T09:37:27.981Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Injected OnStart Edge for the Machine ID :: vravadatabasebackup
2025-02-06T09:37:28.642Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.t.VmspDatabaseBackupTask] -- Creating folder: /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c
2025-02-06T09:37:28.643Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.t.VmspDatabaseBackupTask] -- Creating yaml file
2025-02-06T09:37:28.643Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.u.ShellExecutor] -- Executing shell command: touch /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml
2025-02-06T09:37:28.643Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.u.ProcessUtil] -- Execute touch
2025-02-06T09:37:28.645Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.u.ShellExecutor] -- Result: [].
2025-02-06T09:37:28.658Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp pkg push --hooks-only 10.80.43.173:30000 /data/vm-config/vmrepo/productBinariesRepo/5f/5f5b1799-5e9a-4008-9f25-643b12fef77b/5f5b1799-5e9a-4008-9f25-643b12fef77b -l upgrade=true --wait
2025-02-06T09:37:28.660Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T09:37:39.299Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg push --hooks-only 10.80.43.173:30000 /data/vm-config/vmrepo/productBinariesRepo/5f/5f5b1799-5e9a-4008-9f25-643b12fef77b/5f5b1799-5e9a-4008-9f25-643b12fef77b -l upgrade=true --wait exit code: 0 output: running /data/vmsp-pkg3370493006/files/scripts/prePush.sh (VRA_NODE=ops-43-173.ibn.arun.com,VRA_USERNAME=root,VCFA_UPGRADE_VALUES_FILE=KXKXKXKX
*
*
2025-02-06T09:37:39.694Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.v.p.t.VmspDatabaseBackupTask] -- VRA database backup successful
2025-02-06T09:37:39.694Z INFO vrlcm[171714] [pool-3-thread-32] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnDatabaseBackupTaskCompletion
Stage 5: The VMware Aria Automation 8.x cluster is shut down. Remember, at this stage the entire set of VMware Aria Automation 8.x databases has already been backed up to a disk, which will be needed later in the upgrade procedure.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T09:37:40.559Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vravagracefulshutdown and the priority is => 4
2025-02-06T09:37:42.458Z INFO vrlcm[171714] [pool-3-thread-36] [c.v.v.l.p.c.v.t.VraVaCheckPowerOnStatusTask] -- Power state of the VMware Aria Automation host - ops-43-173.ibn.arun.com is ON
2025-02-06T09:37:42.458Z INFO vrlcm[171714] [pool-3-thread-36] [c.v.v.l.p.c.v.t.VraVaCheckPowerOnStatusTask] -- Completed power state check of all the Automation hosts. All node(s) found in ON state.
2025-02-06T09:37:42.458Z INFO vrlcm[171714] [pool-3-thread-36] [c.v.v.l.p.c.v.t.VraVaCheckPowerOnStatusTask] -- Injecting success event for power state check of VMware Aria Automation nodes
2025-02-06T09:37:43.013Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.p.c.v.t.VraVaStopServicesTask] -- Starting :: VMware Aria Automation VA Stop Services Task
2025-02-06T09:37:43.014Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.p.c.v.t.VraVaStopServicesTask] -- Stopping services on VMware Aria Automation VA : ops-43-173.ibn.arun.com
2025-02-06T09:37:43.014Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.d.v.h.VraPreludeInstallHelper] -- PRELUDE ENDPOINT HOST :: ops-43-173.ibn.arun.com
2025-02-06T09:37:43.014Z INFO vrlcm[171714] [pool-3-thread-37] [c.v.v.l.d.v.h.VraPreludeInstallHelper] -- COMMAND :: /opt/scripts/svc-stop.sh
2025-02-06T09:45:51.947Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.d.v.v.g.VsphereGuestOSOperation] -- shutdown guest OS successful
2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.c.v.t.u.VraVaShutdownVmFromVmidTask] -- vmid found from vcenter : vm-19164 for hostname ops-43-173.ibn.arun.com
2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.c.v.t.u.VraVaShutdownVmFromVmidTask] -- Completed shutdown of all the prelude hosts
2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.c.v.t.u.VraVaShutdownVmFromVmidTask] -- Injecting success event
2025-02-06T09:45:51.958Z INFO vrlcm[171714] [pool-3-thread-47] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVraShutdownVMCompletion
Stage 6: A new set of nodes or a new cluster is deployed. As previously mentioned, we requested a cluster node IP pool, from which IP addresses are retrieved and used for node deployment. For a simple deployment, one node is deployed; for a clustered deployment, three nodes are deployed.
The nodes are deployed on the same vCenter where your existing VMware Aria Automation 8.x resides.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
/var/log/vrlcm/vmsp_bootstrap-<<datetimestamp>>.log
Log Snippets
2025-02-06T09:45:53.158Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => installvmsp and the priority is => 5
2025-02-06T09:45:55.613Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Executing StartVMSPDeployTask
2025-02-06T09:45:55.615Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- VMSP Bootstrap logs are streamed to /var/log/vrlcm/vmsp_bootstrap_8f86a9cc-62ab-4c01-80fb-c399c0e648eb.log
2025-02-06T09:45:55.616Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp passwd YXYXYXYX
2025-02-06T09:45:55.617Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T09:45:56.379Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp passwd YXYXYXYX exit code: 0 output: $6$rounds=300000$9zNGkhOg4.gwdDTz$elMVsPLH7k/fZ6btZAyg/LkrSVe.MgZqwKCV1I932absmicgyYi0ZrUxwgHaShA5DWzftcg0FNyycjtsXPv0C.
error: WARNING! Using --password YXYXYXYX CLI is insecure. Use --password-stdin.
2025-02-06T09:45:56.379Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp passwd YXYXYXYX Output: $6$rounds=300000$9zNGkhOg4.gwdDTz$elMVsPLH7k/fZ6btZAyg/LkrSVe.MgZqwKCV1I932absmicgyYi0ZrUxwgHaShA5DWzftcg0FNyycjtsXPv0C.
2025-02-06T09:45:56.380Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp passwd YXYXYXYX Error: WARNING! Using --password YXYXYXYX CLI is insecure. Use --password-stdin.
2025-02-06T10:26:51.152Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPUtil] -- VMSP Provisioning process exited with error code 0
2025-02-06T10:26:51.152Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- VMSP Cluster provisioned successfully !!
2025-02-06T10:26:51.153Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.u.VMSPUtil] -- Found kubeconfig YXYXYXYX -> vcf-mgmt-4d35c714f2.kubeconfig
2025-02-06T10:26:51.156Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- kubeconfig YXYXYXYX : /data/vmsp/vcf-mgmt-4d35c714f2.kubeconfig
2025-02-06T10:26:51.175Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.l.c.CredentialController] -- Password YXYXYXYX with name a60476a9-60b2-4059-a3f6-fbd4c87c37b6 created successfully.
2025-02-06T10:26:51.177Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.l.s.CredentialOperationService] -- Added internal only password YXYXYXYX alias vmsp_kubeconfi_ops-43-173.ibn.arun.com_c1f4858e-7ba4-4e58-a330-76eeee2dcf00
2025-02-06T10:26:51.177Z INFO vrlcm[171714] [pool-3-thread-52] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Kubeconfig YXYXYXYX Reference locker:password:KXKXKXKX
Stage 7: After the new nodes are deployed, the certificate is applied to the new nodes / cluster.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
VCF Automation cluster logs
Log Snippets
// Install certificate on services platform //
2025-02-06T10:26:53.224Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.VmspCertificateInitTask] -- Starting :: Install Certificate Init Task....
2025-02-06T10:26:53.224Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.VmspCertificateInitTask] -- certificate :: {}locker:certificate:9d0cd8c5-6038-4c4c-8119-0e04061fc1a5:*.ibn.arun.com
2025-02-06T10:26:53.828Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.t.VmspInstallCertificateTask] -- Starting :: Install Certificate Task....
2025-02-06T10:26:53.831Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.t.VmspInstallCertificateTask] -- kubeconfig YXYXYXYX reference is locker:password:KXKXKXKX
2025-02-06T10:26:53.831Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:26:53.834Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Kubeconfig YXYXYXYX file created at /data/vmsp/.kubeconfig_283216c7-b2c0-491d-a1b4-3d67052b019b
2025-02-06T10:26:54.084Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Installing certificate command being executed :: /usr/local/bin/kubectl --kubeconfig=KXKXKXKX create secret YXYXYXYX --cert=/data/vmsp/ops-43-173.crt --key=/data/vmsp/ops-43-173.key --namespace=istio-ingress --save-config --dry-run=client -o yaml | /usr/local/bin/kubectl --kubeconfig=KXKXKXKX apply -f -
2025-02-06T10:26:54.087Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Process exited with code: 0
2025-02-06T10:26:54.087Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.u.ShellExecutor] -- Executing shell command: /data/vmsp/ops-43-173.sh
2025-02-06T10:26:54.088Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.u.ProcessUtil] -- Execute /data/vmsp/ops-43-173.sh
2025-02-06T10:26:54.268Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.u.ShellExecutor] -- Result: [secret/vmsp-tls YXYXYXYX
2025-02-06T10:26:54.268Z INFO vrlcm[171714] [pool-3-thread-73] [c.v.v.l.v.p.u.VMSPUtil] -- Install command response :: secret/vmsp-tls YXYXYXYX
2025-02-06T10:26:54.392Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnVmspInstallCertificateTaskCompletion
Stage 8: This is where the actual magic happens. The disk storing VMware Aria Automation 8.x's data is mounted on the new nodes and the data is restored as part of the upgrade; once the upgrade completes, the disk is unmounted.
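During this stage the workflow authenticates to the new cluster with a Kubernetes service-account token, which appears base64-encoded in the logs below. Such a token is a JWT whose payload segment decodes to JSON; here is a small decoding helper, demonstrated on a hand-built token rather than a real secret:

```shell
# jwt_payload: decode the payload (second dot-separated segment) of a JWT.
# Converts base64url to base64 and restores padding before decoding.
jwt_payload() {
  seg="$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')"
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# A real token would come from something like:
#   kubectl get secret <name> -n vmsp-platform -ojsonpath='{.data.token}' | base64 -d
# Demo with a hand-built header.payload.signature token:
demo="$(printf '{"alg":"none"}' | base64 | tr -d '=').$(printf '{"iss":"kubernetes/serviceaccount"}' | base64 | tr -d '=').sig"
jwt_payload "$demo"; echo
```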

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
/var/log/vrlcm/vmsp_bootstrap-<<datetimestamp>>.log
VCF Automation cluster logs
Log Snippets
2025-02-06T10:26:54.957Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vrava9xupgrade and the priority is => 7
2025-02-06T10:26:55.016Z INFO vrlcm[171714] [pool-3-thread-75] [c.v.v.l.p.c.v.t.StartVraVaGenericTask] -- Starting :: Start VMware Aria Automation VA Generic Task
2025-02-06T10:26:55.609Z INFO vrlcm[171714] [pool-3-thread-76] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Starting :: fetch VMSP kubeconfig YXYXYXYX
2025-02-06T10:26:55.612Z INFO vrlcm[171714] [pool-3-thread-76] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Successfully fetched kubeconfig YXYXYXYX cluster
2025-02-06T10:26:55.612Z INFO vrlcm[171714] [pool-3-thread-76] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnFetchVmspKubeconfigSuccess
2025-02-06T10:26:56.383Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- Certificate not trusted
2025-02-06T10:26:56.383Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- NDC is not yet implemented for this, trusting certificate by default
2025-02-06T10:26:56.384Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustStoreManager] -- Storing certificate in the trust store
2025-02-06T10:26:56.508Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- Successfully trusted certificate
2025-02-06T10:26:56.553Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.RestHelper] -- RestHelper execute methode connection.getResponseCode : 200
2025-02-06T10:26:56.554Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response status for API call : 200 Response data : {"statusCode":200,"running":true,"statusURI":"/webhooks/vmsp-platform/vcenter-disk-mount/mount","completedAt":null}
2025-02-06T10:26:56.557Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response with post call : {
"statusCode" : 200,
"responseMessage" : "OK",
"outputData" : "{\"statusCode\":200,\"running\":true,\"statusURI\":\"/webhooks/vmsp-platform/vcenter-disk-mount/mount\",\"completedAt\":null}",
"token" : null,
"contentLength" : 115,
"allHeaders" : null
}
2025-02-06T10:26:56.560Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- httpGetCall url : /webhooks/vmsp-platform/vcenter-disk-mount/mount
2025-02-06T10:26:56.560Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX
2025-02-06T10:26:56.561Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T10:26:56.628Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX exit code: 0 output: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqZFBSemRuYzBvek5WZHBWMmhKY2s0MlNVWkNRbWRQVlZOeFpsaEhPWEpqVkVKbFNXZFpkR3BsVUZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUoyYlhOd0xYQnNZWFJtYjNKdElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTjViblJvWlhScFl5MWphR1ZqYTJWeUxXdHljQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp6ZVc1MGFHVjBhV010WTJobFkydGxjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJpWXpNek1qbGhMVEF4TmpVdE5Ea3pNaTA1WkdJMExURTBZV00wTmpJM09XTmpaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwMmJYTndMWEJzWVhSbWIzSnRPbk41Ym5Sb1pYUnBZeTFqYUdWamEyVnlJbjAuYkJ4ZHNYOXVHQ2o1dkdadzBIdzc0ZHlXTFZlc2lka0JONno1dm9mMktjNGZSYTIzem1NLVFQdDdBWVhXLWQ3alowVG9UM2FDQXBUUU1zamRqNlpoR1JyVXNzQWZvNXcyX3BvejZZS0h4TWJUYWFzSERhWE9uYU94TFB1VDVGaFd1cFRUUi1KX252SE5Qdlo4MFpTVG04VUtyblVCeWF0emRlV0hkVU0wZ1d3cVFiem9YZzBtb24xalN4S0lCUkJMdjNEMzNYUzNVck9hSXRJa2x4RkRUWndnbkxJVWZKNjhjblpDeVU2NFpmVXF1T1JsajNYY2VxLWswdUwwRWRrMzMyeldYSTgwc3NUV0d0MFlaWHpnbjFLYjNHNnZoSWh1Z19ZQzVON0w1ampINDJxaURuZ1JuTDZ5WnF6SXZaTHJOYTluX2I0R1J1LV9EbmNpaVRMNzVR error:
2025-02-06T10:26:56.629Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX Output: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqZFBSemRuYzBvek5WZHBWMmhKY2s0MlNVWkNRbWRQVlZOeFpsaEhPWEpqVkVKbFNXZFpkR3BsVUZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUoyYlhOd0xYQnNZWFJtYjNKdElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTjViblJvWlhScFl5MWphR1ZqYTJWeUxXdHljQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp6ZVc1MGFHVjBhV010WTJobFkydGxjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJpWXpNek1qbGhMVEF4TmpVdE5Ea3pNaTA1WkdJMExURTBZV00wTmpJM09XTmpaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwMmJYTndMWEJzWVhSbWIzSnRPbk41Ym5Sb1pYUnBZeTFqYUdWamEyVnlJbjAuYkJ4ZHNYOXVHQ2o1dkdadzBIdzc0ZHlXTFZlc2lka0JONno1dm9mMktjNGZSYTIzem1NLVFQdDdBWVhXLWQ3alowVG9UM2FDQXBUUU1zamRqNlpoR1JyVXNzQWZvNXcyX3BvejZZS0h4TWJUYWFzSERhWE9uYU94TFB1VDVGaFd1cFRUUi1KX252SE5Qdlo4MFpTVG04VUtyblVCeWF0emRlV0hkVU0wZ1d3cVFiem9YZzBtb24xalN4S0lCUkJMdjNEMzNYUzNVck9hSXRJa2x4RkRUWndnbkxJVWZKNjhjblpDeVU2NFpmVXF1T1JsajNYY2VxLWswdUwwRWRrMzMyeldYSTgwc3NUV0d0MFlaWHpnbjFLYjNHNnZoSWh1Z19ZQzVON0w1ampINDJxaURuZ1JuTDZ5WnF6SXZaTHJOYTluX2I0R1J1LV9EbmNpaVRMNzVR
2025-02-06T10:26:56.630Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX Error:
2025-02-06T10:26:56.630Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Triggering API call:GET https://10.80.43.173:30005/webhooks/vmsp-platform/vcenter-disk-mount/mount
2025-02-06T10:26:56.701Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.CustomTrustManager] -- Certificate chain trusted
2025-02-06T10:26:56.706Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.u.RestHelper] -- RestHelper execute methode connection.getResponseCode : 200
2025-02-06T10:26:56.706Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response status for API call : 200 Response data : {"statusCode":200,"running":true,"statusURI":"/webhooks/vmsp-platform/vcenter-disk-mount/mount","completedAt":null}
2025-02-06T10:26:56.707Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Sleeping for 10 seconds, waiting for webhook to complete
*
*
2025-02-06T10:27:27.170Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Webhook completed :: {TARGET_DISK_NAME=disk-1000-10, TARGET_DISK_UUID=6000C29c-db3b-0c5a-0c46-1949d94fc213}
2025-02-06T10:27:27.172Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.t.VmspMountDiskTask] -- targetDiskName: disk-1000-10
2025-02-06T10:27:27.172Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspMountDiskSuccess
*
*
2025-02-06T10:27:27.410Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- Starting :: vmsp based product deployment task.
2025-02-06T10:27:27.412Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- vcfaYamlfolderName: /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c
2025-02-06T10:27:27.413Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPUtil] -- Yaml folder found for VCFA deployment
2025-02-06T10:27:27.413Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPUtil] -- yamlFileName:/tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml
2025-02-06T10:27:27.414Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPUtil] -- Yaml file found for VCFA deployment
2025-02-06T10:27:27.414Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.ShellExecutor] -- Executing shell command: cat /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml
2025-02-06T10:27:27.414Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.ProcessUtil] -- Execute cat
2025-02-06T10:27:27.417Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.ShellExecutor] -- Result: [deploy:
dataMigration: true
vcfa:
fipsMode: false].
2025-02-06T10:27:27.418Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp pkg push registry.vmsp-platform.svc.cluster.local:5000 https://ops-43-171.ibn.arun.com/repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar --deploy -n prelude --set vcfa.fqdn=ops-43-173.ibn.arun.com --set deploy.dataMigration=true --wait --timeout=120m --remote --set deployment.size=small --kubeconfig=KXKXKXKX -f /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml
2025-02-06T10:27:27.419Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T10:27:28.323Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.l.MaskingPrintStream] -- * SYSOUT/SYSERR CAPTURED: -- KKKKK I AM RETRIEVING
2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.c.FileContentDatabase] -- URL :: /repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar
2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.c.FileContentDatabase] -- Query URL :: /productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar
2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-10] [c.v.v.l.c.c.FileContentDatabase] -- Decoded query url ::/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar
2025-02-06T10:27:28.324Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.l.MaskingPrintStream] -- * SYSOUT/SYSERR CAPTURED: -- KKKKK I AM RETRIEVING
2025-02-06T10:27:28.325Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.c.FileContentDatabase] -- URL :: /repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar
2025-02-06T10:27:28.325Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.c.FileContentDatabase] -- Query URL :: /productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar
2025-02-06T10:27:28.325Z INFO vrlcm[171714] [http-nio-8080-exec-8] [c.v.v.l.c.c.FileContentDatabase] -- Decoded query url ::/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar
2025-02-06T10:58:52.663Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg push registry.vmsp-platform.svc.cluster.local:5000 https://ops-43-171.ibn.arun.com/repo/productBinariesRepo/vra/9.0.0.0/upgrade/vra.tar --deploy -n prelude --set vcfa.fqdn=ops-43-173.ibn.arun.com --set deploy.dataMigration=true --wait --timeout=120m --remote --set deployment.size=small --kubeconfig=KXKXKXKX -f /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c/vcfa-upgrade-values.yaml exit code: 0 output: error: 2025/02/06 10:27:28 bundle.vmsp.vmware.com/v1alpha1, Kind=Bundle/prelude/vra: created
2025/02/06 10:58:52 Deployment succeeded
2025/02/06 10:58:52 bundle.vmsp.vmware.com/v1alpha1, Kind=Bundle/prelude/vra cleaned up
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- Deleting /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.u.f.FileUtil] -- Directory is deleted : /tmp/5d54cf05-f38d-4ed2-9faf-3bddcc58bb4c
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.v.p.t.VmspPkgPushTask] -- VMSP package push successful
2025-02-06T10:58:52.664Z INFO vrlcm[171714] [pool-3-thread-78] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnPackagePushTaskCompletion
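Note how the `vmsp pkg push` command combines the values file (`-f vcfa-upgrade-values.yaml`) with inline `--set` overrides. Assuming Helm-style semantics (an assumption; the exact vmsp precedence rules aren't documented here), each `--set key.path=value` writes into the nested values structure, so the effective values for the deployment can be sketched as:

```python
def set_path(values: dict, dotted_key: str, value):
    """Apply one --set style override, e.g. 'deploy.dataMigration=true'."""
    node = values
    keys = dotted_key.split(".")
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

# Values from the -f file, as printed in the log output above
values = {"deploy": {"dataMigration": True}, "vcfa": {"fipsMode": False}}

# --set flags from the command line above
for flag, val in [("vcfa.fqdn", "ops-43-173.ibn.arun.com"),
                  ("deploy.dataMigration", True),
                  ("deployment.size", "small")]:
    set_path(values, flag, val)

print(values["vcfa"]["fqdn"])
```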
2025-02-06T10:58:53.210Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.t.VmspUnmountDiskTask] -- Starting :: vmsp unmount disk task.
2025-02-06T10:58:53.211Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.u.VMSPUtil] -- Disk payload: {
"govcUrl" : "ops-43-17.ibn.arun.com",
"govcInsecure" : "1",
"diskLabel" : "Hard disk 2",
"sourcevmName" : "dnd-anushan-vrava-primary",
"sshUser" : "vmware-system-user",
"sshPass" : "JXJXJXJX",
"mountDir" : "/vra-db",
"targetDiskName" : "disk-1000-10"
}
2025-02-06T10:58:53.211Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- httpPostCall url : /webhooks/vmsp-platform/vcenter-disk-mount/unmount
2025-02-06T10:59:03.974Z INFO vrlcm[171714] [pool-3-thread-17] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Response status for API call : 200 Response data : {"statusCode":200,"output":"Warning: Permanently added '10.80.43.106' (ED25519) to the list of known hosts.\nWelcome to Photon 5.0 (\\m) - Kernel \\r (\\l)\n# 10.80.43.106:22 SSH-2.0-OpenSSH_9.3\nWelcome to Photon 5.0 (\\m) - Kernel \\r (\\l)\nWelcome to Photon 5.0 (\\m) - Kernel \\r (\\l)\n+ exec\nDetaching disk disk-1000-10 from dnd-anushan-vcfa-mgmt-zzv8s\nnode/
2025-02-06T10:59:05.825Z INFO vrlcm[171714] [pool-3-thread-21] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Executing VMSP Provisioning Generic Task
// Fetch Node Details Task //
2025-02-06T10:59:06.419Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.v.p.t.FetchVMSPNodeDetailsTask] -- Executing FetchVMSPNodeDetailsTask
2025-02-06T10:59:06.422Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.v.p.t.FetchVMSPNodeDetailsTask] -- hostname in product properties: null
2025-02-06T10:59:06.422Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:06.511Z INFO vrlcm[171714] [Thread-55240] [c.v.v.l.v.p.u.VMSPUtil] -- Cmd Output -> NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
2025-02-06T10:59:06.511Z INFO vrlcm[171714] [Thread-55240] [c.v.v.l.v.p.u.VMSPUtil] -- Cmd Output -> dnd-anushan-vcfa-mgmt-zzv8s Ready control-plane 62m v1.32.0+vmware.1-fips 10.80.43.106 10.80.43.106 VMware Photon OS/Linux 6.1.126-3.ph5 containerd://1.7.24+vmware.1-fips
2025-02-06T10:59:06.515Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.v.p.t.FetchVMSPNodeDetailsTask] -- Node Details : dnd-anushan-vcfa-mgmt-zzv8s, 10.80.43.106
2025-02-06T10:59:07.057Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- Executing command on the host: 10.80.43.173 , as user: vmware-system-user
2025-02-06T10:59:07.057Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- ------------------------------------------------------
2025-02-06T10:59:07.058Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- Command: sudo kubectl --kubeconfig YXYXYXYX get pd -n vmsp-platform vmsp-platform -oyaml
2025-02-06T10:59:07.058Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- ------------------------------------------------------
2025-02-06T10:59:07.681Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.u.SshUtils] -- exit-status: 0
2025-02-06T10:59:07.753Z INFO vrlcm[171714] [pool-3-thread-22] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: FetchNodeDetailsCompletion
Stage 9: A password collected during the upgrade inputs is used for two accounts:
vmware-system-user , a break-glass account used to log in to the VCF Automation cluster.
admin , an admin account for the provider organization in VCF Automation.
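Once the admin password reset completes, you could sanity-check the provider account against the tenant-manager endpoint that appears in the logs below (`POST /tm/cloudapi/1.0.0/sessions/provider`). A hedged sketch of building that request — the FQDN is this lab's, and the `user@system` Basic-auth username format is an assumption to adapt for your environment:

```python
import base64

def provider_session_request(fqdn: str, username: str, password: str):
    """Build a provider-session login request (sketch, not an official client)."""
    url = f"https://{fqdn}/tm/cloudapi/1.0.0/sessions/provider"
    # Assumption: Basic auth with a "user@system" style principal
    cred = base64.b64encode(f"{username}@system:{password}".encode()).decode()
    return url, {"Authorization": f"Basic {cred}"}

url, headers = provider_session_request("ops-43-173.ibn.arun.com",
                                        "admin", "example-password")
print(url)
```

Sending it (e.g. with `requests.post(url, headers=headers, verify=...)`) should return a session token when the password reset succeeded.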

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
VCF Automation cluster logs
Log Snippets
2025-02-06T10:59:09.433Z INFO vrlcm[171714] [pool-3-thread-24] [c.v.v.l.v.p.t.BootstrapVMSPTask] -- Executing VMSP Provisioning Generic Task
2025-02-06T10:59:09.433Z INFO vrlcm[171714] [pool-3-thread-24] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnGenericVMSPEvent
2025-02-06T10:59:10.011Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- Starting :: vmsp fetch current admin pwd task.
2025-02-06T10:59:10.014Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- vmid to fetch kubeconfig YXYXYXYX
2025-02-06T10:59:10.014Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- ... com.vmware.vrealize.lcm.locker.controller.CredentialController@16d9c898
2025-02-06T10:59:10.014Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:10.018Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- httpPostCall url : /webhooks/prelude/tenant-manager/resetpassword
2025-02-06T10:59:10.018Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- payload : {"username":"admin"}
2025-02-06T10:59:10.018Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX
2025-02-06T10:59:10.019Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T10:59:10.223Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX
2025-02-06T10:59:10.223Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T10:59:10.285Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/kubectl get secrets YXYXYXYX -n vmsp-platform -ojsonpath={.data.token} --kubeconfig=KXKXKXKX exit code: 0 output: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqZFBSemRuYzBvek5WZHBWMmhKY2s0MlNVWkNRbWRQVlZOeFpsaEhPWEpqVkVKbFNXZFpkR3BsVUZVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUoyYlhOd0xYQnNZWFJtYjNKdElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTjViblJvWlhScFl5MWphR1ZqYTJWeUxXdHljQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUp6ZVc1MGFHVjBhV010WTJobFkydGxjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJakJpWXpNek1qbGhMVEF4TmpVdE5Ea3pNaTA1WkdJMExURTBZV00wTmpJM09XTmpaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwMmJYTndMWEJzWVhSbWIzSnRPbk41Ym5Sb1pYUnBZeTFqYUdWamEyVnlJbjAuYkJ4ZHNYOXVHQ2o1dkdadzBIdzc0ZHlXTFZlc2lka0JONno1dm9mMktjNGZSYTIzem1NLVFQdDdBWVhXLWQ3alowVG9UM2FDQXBUUU1zamRqNlpoR1JyVXNzQWZvNXcyX3BvejZZS0h4TWJUYWFzSERhWE9uYU94TFB1VDVGaFd1cFRUUi1KX252SE5Qdlo4MFpTVG04VUtyblVCeWF0emRlV0hkVU0wZ1d3cVFiem9YZzBtb24xalN4S0lCUkJMdjNEMzNYUzNVck9hSXRJa2x4RkRUWndnbkxJVWZKNjhjblpDeVU2NFpmVXF1T1JsajNYY2VxLWswdUwwRWRrMzMyeldYSTgwc3NUV0d0MFlaWHpnbjFLYjNHNnZoSWh1Z19ZQzVON0w1ampINDJxaURuZ1JuTDZ5WnF6SXZaTHJOYTluX2I0R1J1LV9EbmNpaVRMNzVR error:
*
*
2025-02-06T10:59:30.657Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Webhook completed :: {password=KXKXKXKX, status=SUCCESS, username=admin}
2025-02-06T10:59:30.657Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- current status response: {"password":"JXJXJXJX","status":"SUCCESS","username":"admin"}
2025-02-06T10:59:30.661Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.v.p.t.VmspFetchCurrentAdminPwdTask] -- current status responseDTO: com.vmware.vrealize.lcm.vmsp.common.dto.VmspFetchAdminPwdResponseDTO@6006a6c8
2025-02-06T10:59:30.661Z INFO vrlcm[171714] [pool-3-thread-25] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspFetchCurrentAdminPwdSuccess
2025-02-06T10:59:31.007Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.p.t.VcfaUpdateAdminPwdTask] -- Starting VCFA update admin pwd task
2025-02-06T10:59:31.022Z WARN vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.d.r.VcfaRestClient] -- VCF Automation RestClient : Request body is null for HTTP method POST
2025-02-06T10:59:31.023Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.v.d.r.VcfaRestClient] -- Triggering request :: https://ops-43-173.ibn.arun.com/tm/cloudapi/1.0.0/sessions/provider
*
*
2025-02-06T10:59:31.676Z INFO vrlcm[171714] [pool-3-thread-26] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnUpdatingAdminPasswordSuccesffuly
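The VmspFetchCurrentAdminPwdTask output above includes a base64-encoded Kubernetes service-account token (the long string returned by `kubectl get secrets ... -ojsonpath={.data.token}`). Decoding it twice — once for the secret's base64, once for the JWT payload — reveals which service account and namespace it belongs to. A sketch using a synthetic token (the real one from the logs would decode the same way):

```python
import base64, json

def decode_sa_token(kubectl_output: str) -> dict:
    """Decode the JSONPath output of a service-account token secret.

    The secret data is base64; the decoded value is a JWT whose payload
    (the second dot-separated segment) is base64url-encoded JSON claims.
    """
    jwt = base64.b64decode(kubectl_output).decode()
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Synthetic token for illustration only
claims = {"iss": "kubernetes/serviceaccount",
          "kubernetes.io/serviceaccount/namespace": "vmsp-platform"}
payload_b64 = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=")
fake_secret = base64.b64encode(f"e30.{payload_b64}.sig".encode()).decode()

print(decode_sa_token(fake_secret)["kubernetes.io/serviceaccount/namespace"])
```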
Stage 10: In VCF 9.0.x, service accounts are used for inter-component communication and integrations. So, once the upgrade is completed, a service account for VCF Automation is created.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:32.757Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vrslcmcreateserviceaccount and the priority is => 10
2025-02-06T10:59:33.394Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.c.t.ServiceRegistryPreCheckTask] -- Start ServiceRegistryPreCheckTask
2025-02-06T10:59:33.396Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.d.c.t.ServiceRegistryPreCheckTask] -- Service Account Flow : Checking if vmsp exists
2025-02-06T10:59:33.398Z INFO vrlcm[171714] [pool-3-thread-29] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnServiceRegistryPreCheckSuccess
2025-02-06T10:59:34.055Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.d.c.t.LcmCreateServiceAccountTask] -- Executing LcmCreateServiceAccountTask
2025-02-06T10:59:34.060Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.s.u.ServiceAccountNameGenerator] -- Service account name to be created :: svc_vmsp_ops-lcm_392
2025-02-06T10:59:34.066Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.authN.controller.AuthNUserController.createServiceAccount(UserRequestDTO))
2025-02-06T10:59:34.074Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthznCustomObjectMapper] -- Role DTO : RoleDTO [vmid=a9681b90-1a16-4d7c-88f6-da861a997d70, roleName=LCM Admin, roleDescription=VCF Operations Fleet Management Administrator]
2025-02-06T10:59:34.075Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthznCustomObjectMapper] -- User Entity : User [username=svc_vmsp_ops-lcm_392, password=KXKXKXKX, userType=LCM_LOCAL_USER, displayName=svc_vmsp_ops-lcm_392, providerIdentifier=null, domain=LCM Local, isDisabled=false, userPrincipalName=null, userMetadata=null, toString()=com.vmware.vrealize.lcm.authN.model.User@53c0118d]
2025-02-06T10:59:34.148Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthznCustomObjectMapper] -- User Role Mapping Entity : UserRoleMapping [uservmid=ee236354-9819-45d1-8a34-630da915e908, rolevmid=a9681b90-1a16-4d7c-88f6-da861a997d70, toString()=com.vmware.vrealize.lcm.authN.model.UserRoleMapping@544b49bc]
2025-02-06T10:59:34.164Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.a.c.AuthNUserController] -- Created User with Id : ee236354-9819-45d1-8a34-630da915e908
2025-02-06T10:59:34.168Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.l.c.CredentialController] -- Request to create internal password.
2025-02-06T10:59:34.169Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.l.s.c.PasswordStoreServiceImpl] -- Create new internal password YXYXYXYX alias: svc_vmsp_ops-lcm_392
2025-02-06T10:59:34.172Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.l.c.CredentialController] -- Password YXYXYXYX with name e170f70f-3519-4a72-bb68-9996d959322d created successfully.
2025-02-06T10:59:34.197Z INFO vrlcm[171714] [pool-3-thread-30] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateServiceAccountSuccess
Stage 11: The Service Registry is an index, or directory, of service accounts and of how each management component integrates with the others. VCF Automation becomes part of the service registry once this stage completes.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:35.159Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => createserviceregistry and the priority is => 11
2025-02-06T10:59:35.190Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Injected OnStart Edge for the Machine ID :: createserviceregistry
2025-02-06T10:59:35.801Z INFO vrlcm[171714] [pool-3-thread-33] [c.v.v.l.d.c.t.ServiceRegistryPreCheckTask] -- Start ServiceRegistryPreCheckTask
2025-02-06T10:59:36.408Z INFO vrlcm[171714] [pool-3-thread-34] [c.v.v.l.d.c.t.CreateServiceRegistryTask] -- Executing CreateServiceRegistryTask
2025-02-06T10:59:36.409Z INFO vrlcm[171714] [pool-3-thread-34] [c.v.v.l.d.c.t.CreateServiceRegistryTask] -- Executing Service Registry creation for product : vrslcm
2025-02-06T10:59:36.488Z INFO vrlcm[171714] [pool-3-thread-34] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateServiceRegistrySuccess
Stage 12: The push-capabilities stage determines which management components are already deployed and which integrations are needed.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:37.562Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vmsppushcapabilities and the priority is => 12
2025-02-06T10:59:37.563Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.MachineRegistry] -- GETTING MACHINE FOR THE KEY :: vmsppushcapabilities
2025-02-06T10:59:38.804Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Executing VmspPushCapabilitiesTask
2025-02-06T10:59:38.806Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- VMSP ProductIds : [ "b482c25c-809b-498d-864a-2b70527de903" ]
2025-02-06T10:59:38.806Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Executing Push Capabilities for VMSP : b482c25c-809b-498d-864a-2b70527de903 8f86a9cc-62ab-4c01-80fb-c399c0e648eb
2025-02-06T10:59:38.812Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:38.815Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.u.VMSPUtil] -- Kubeconfig YXYXYXYX file created at /data/vmsp/.kubeconfig_e5316135-53e7-412d-be10-5e4971923ee5
2025-02-06T10:59:38.822Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.s.c.CapabilitiesRestController] -- Request to build capabilities for product : VMSP : 8f86a9cc-62ab-4c01-80fb-c399c0e648eb
2025-02-06T10:59:38.823Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.s.s.CapabilitiesService] -- Requested for capabilities object for consumer type :: VMSP : environmentId : 8f86a9cc-62ab-4c01-80fb-c399c0e648eb
2025-02-06T10:59:38.824Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.s.s.CapabilitiesService] -- Building capability for dependent component with type :: VCF_OPS_LCM
2025-02-06T10:59:38.935Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Constructed Capabilities : {
"capabilities" : [ {
"key" : "ops-lcm",
"type" : "VCF_OPS_LCM",
"name" : "VCF OPS LCM",
"nodes" : [ {
"name" : "VCF OPS LCM",
"addresses" : [ {
"type" : "IPv4",
"value" : "10.80.43.171"
}, {
"type" : "Fqdn",
"value" : "ops-43-171.ibn.arun.com"
} ],
"certificates" : [ "-----BEGIN CERTIFICATE-----\nMIIDuDCCAqCgAwIBAgIUcTc/nKdxW7BPQXH45YJb1VbowEUwDQYJKoZIhvcNAQEL\nBQAwVDELMAkGA1UEBhMCVVMxETAPBgNVBAoMCEJyb2FkY29tMQwwCgYDVQQLDANW\nQ0YxJDAiBgNVBAMMG29wcy00My0xNzEuaWJuLmJyb2FkY29tLm5ldDAeFw0yNTAx\nMzAxNTQ3MzZaFw0zMDAxMjkxNTQ3MzZaMFQxCzAJBgNVBAYTAlVTMREwDwYDVQQK\nDAhCcm9hZGNvbTEMMAoGA1UECwwDVkNGMSQwIgYDVQQDDBtvcHMtNDMtMTcxLmli\nbi5icm9hZGNvbS5uZXQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCe\nFehoXFQoeyI7X6JTsz/JoQxM55zUxqRTrtfC1Onc/6p9JuIOrdJxIN45f2zVNOZG\nohCnIOyz9Kr/MA162qSHz64RbSzLiXId0VROXxWaQ0qK0BBwjrO7g6V23YTb7cqk\n84qebEKIPKlYxsY0x71l4q6qStqV7RrYbPmWq8eWD2SJ3ZDAjCSyATV18rirNlqs\nMD21UvGzzB0jAGBO4O8d81ft7ByxIdoOomlBTr/jDfiAsIM6i3lm0YExMr8rWt5M\nIKkpDwgeDMedMC+7HSvvZX3hoESScneE5SD20Bi6l2jp0o96+4V31KiEaKwmTEPC\nIkT6j1U0k+4bn5tyL2zLAgMBAAGjgYEwfzAdBgNVHQ4EFgQUX1HpY4iA4SRqrN1s\nqU9LdheUfGYwHwYDVR0jBBgwFoAUX1HpY4iA4SRqrN1sqU9LdheUfGYwDwYDVR0T\nAQH/BAUwAwEB/zAsBgNVHREEJTAjghtvcHMtNDMtMTcxLmlibi5icm9hZGNvbS5u\nZXSHBApQK6swDQYJKoZIhvcNAQELBQADggEBAFYRvDmkWLDGwKHk4MXA2nFHTCV0\nk10tESZAlqqfotLG0sjdHJa7pjk5ke2RxjMi0Sc7HpsflFsKXxkR6k60L93oJqVb\nH6V1beqa7+gpu3sR2do2PItu2vBT88oBxaECtMpnGMxM0yjZlISTEnq/m8idjgYR\n11AsUOVSmx+NYtzd5r2+Gl3A+HargR/iDUmvvNiWp90aeLtCfuT2qo2z5tY/H1zd\nuS2Lq+oVxV9eVsfCxWxpEVx6xIqd4Hxe28o2adoX0llTYz5Sf8GFsSIaxphY2ccc\njEGAImjr4x9D/XZaNptGcuOpV4Z1aGN2GwUrGJ9y0IpKZoy2j3/Fe+FZcqM=\n-----END CERTIFICATE-----\n" ]
} ],
"secret" : JXJXJXJX"type" : "Credential",
"username" : "svc_vmsp_ops-lcm_392",
"password" : "JXJXJXJX"
}
} ]
}
2025-02-06T10:59:49.460Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Updating LCM Certificate
2025-02-06T10:59:49.461Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.u.VMSPUtil] -- Command to create LCM Cert Secret YXYXYXYX /usr/local/bin/kubectl --kubeconfig=KXKXKXKX create secret YXYXYXYX ops-mgmt-cert -n vmsp-platform --dry-run=client
2025-02-06T10:59:50.019Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.u.ShellExecutor] -- Result: [secret/ops-mgmt-password-secret YXYXYXYX
2025-02-06T10:59:50.019Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.u.VMSPUtil] -- Install command response :: secret/ops-mgmt-password-secret YXYXYXYX
2025-02-06T10:59:50.020Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.v.p.t.VmspPushCapabilitiesTask] -- Push Capabilities to VMSP successful
2025-02-06T10:59:50.021Z INFO vrlcm[171714] [pool-3-thread-39] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspPushCapabilitySuccess
Stage 13: At this stage, VCF Operations is already present; suppose VCF Operations for Logs is deployed as well. Based on the capabilities listed in the service registry, VCF Automation is integrated with VCF Operations and VCF Operations for Logs using the service account created earlier.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:50.759Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => configurecrossproducts and the priority is => 13
2025-02-06T10:59:50.838Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- environmentId : 8f86a9cc-62ab-4c01-80fb-c399c0e648eb
2025-02-06T10:59:50.838Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- monitorvRAWithvROps : null
2025-02-06T10:59:50.838Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- registervROpswithvRA : null
2025-02-06T10:59:50.839Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.d.c.t.i.InterProductConfigurationTask] -- monitorvRNIWithvROps : null
2025-02-06T10:59:50.839Z INFO vrlcm[171714] [pool-3-thread-42] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCrossProductConfigurationCompleted
Stage 14: With the upgrade nearly finished, the VCF Automation 9.0.x cluster is now included in the VCF Operations fleet management appliance's inventory. At this point, when you navigate to the Components pane, you'll find VCF Automation 9.0 available, along with all applicable Day-N actions.
It's important to note that during this phase, the VMware Aria Automation 8.x cluster is removed from VCF Operations, making way for the new version of VCF Automation.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:52.436Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Creating reference for pwd with env id - 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destination name - vra and type - product
2025-02-06T10:59:52.436Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference addition required
2025-02-06T10:59:52.436Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Checking whether locker id 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901 is already referenced
2025-02-06T10:59:52.437Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- locker id is already referenced
2025-02-06T10:59:52.437Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Duplicating pwd with vmid - 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901
2025-02-06T10:59:52.439Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference request entity : BaseDTO{vmid='3659e0da-183d-4837-a911-63d0b28eb0bc', version=8.1.0.0}
2025-02-06T10:59:52.440Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference model entity : com.vmware.vrealize.lcm.locker.model.LockerReference@74d797eb
2025-02-06T10:59:52.442Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference with envID 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destinationName vra and lockerID ff3c03ab-5719-4495-8f58-d76dff253e18 successfully added.
2025-02-06T10:59:52.443Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.c.LockerReferenceController] -- References created successfully.
2025-02-06T10:59:52.443Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating env inventory for password YXYXYXYX
2025-02-06T10:59:52.443Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:52.445Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- ###Printing password YXYXYXYX before updating env inventory: locker:password:KXKXKXKX
2025-02-06T10:59:52.447Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating enhanced request for product vra in environment 8f86a9cc-62ab-4c01-80fb-c399c0e648eb.
2025-02-06T10:59:52.448Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.a.g.s.UserRequestServiceImpl] -- Saving user request
2025-02-06T10:59:52.457Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Creating reference for pwd with env id - 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destination name - vra and node type - vcfa-primary
2025-02-06T10:59:52.457Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference addition required
2025-02-06T10:59:52.457Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Checking whether locker id 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901 is already referenced
2025-02-06T10:59:52.458Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- locker id is already referenced
2025-02-06T10:59:52.458Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Duplicating pwd with vmid - 99cb0df6-0f1f-4be3-b39f-3b6fbc07e901
2025-02-06T10:59:52.460Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference request entity : BaseDTO{vmid='6394188c-f1dd-4f85-a4a5-80d1954da981', version=8.1.0.0}
2025-02-06T10:59:52.461Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.u.ObjectMapperUtil] -- Reference model entity : com.vmware.vrealize.lcm.locker.model.LockerReference@5ec191cf
2025-02-06T10:59:52.464Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Reference with envID 8f86a9cc-62ab-4c01-80fb-c399c0e648eb and destinationName vra and lockerID f899b9a1-954c-407b-80b4-5722e25a6ae3 successfully added.
2025-02-06T10:59:52.465Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.c.LockerReferenceController] -- References created successfully.
2025-02-06T10:59:52.465Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating env inventory for password YXYXYXYX
2025-02-06T10:59:52.465Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.c.a.InternalOnlyApiAspect] -- Internal Only Check for: execution(ResponseEntity com.vmware.vrealize.lcm.locker.controller.CredentialController.getPassword(String))
2025-02-06T10:59:52.468Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- ###Printing password YXYXYXYX before updating env inventory: locker:password:KXKXKXKX
2025-02-06T10:59:52.469Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CommonInventoryUtil] -- Updating enhanced request for product vra and node vcfa-primary in environment 8f86a9cc-62ab-4c01-80fb-c399c0e648eb.
2025-02-06T10:59:52.470Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.a.g.s.UserRequestServiceImpl] -- Saving user request
2025-02-06T10:59:52.480Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- ###Printing licenseVmIds from Ref :
2025-02-06T10:59:52.481Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Finding references with envId '8f86a9cc-62ab-4c01-80fb-c399c0e648eb', destinationName 'vra', type: 'license'
2025-02-06T10:59:52.483Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- ###Printing licenseVmIds from Ref :
2025-02-06T10:59:52.484Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.s.LockerReferenceServiceImpl] -- Finding references with envId '8f86a9cc-62ab-4c01-80fb-c399c0e648eb', destinationName 'vmsp', type: 'license'
2025-02-06T10:59:52.486Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Locker reference creation json : [ ]
2025-02-06T10:59:52.487Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.l.c.LockerReferenceController] -- References created successfully.
2025-02-06T10:59:52.488Z INFO vrlcm[171714] [pool-3-thread-41] [c.v.v.l.d.c.t.i.CreateEnvironmentInventoryUpdateTask] -- Status code : 201 , body : [ ]
Stage 15: The new VCF Automation component is now registered with the notifications service of the fleet management appliance.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:53.761Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => notificationschedules and the priority is => 15
2025-02-06T10:59:53.860Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.p.c.n.t.NotificationSchedulesTask] -- Notification Schedule Task for supported Products in vCF mode. Product : vra
2025-02-06T10:59:53.873Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.c.n.u.NotificationUtil] -- Spec for nodeName productUpgradeNotification : {"symbolicName":"productUpgradeNotification","displayName":null,"productVersion":null,"priority":0,"dependsOn":[],"components":[{"component":{"symbolicName":"productUpgradeNotification","type":null,"componentVersion":null,"properties":{"environmentId":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","environmentName":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","productName":"vra","currentVersion":"9.0.0.0"}},"priority":0}]}
2025-02-06T10:59:53.874Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.c.n.u.NotificationUtil] -- Spec for nodeName productPatchingNotification : {"symbolicName":"productPatchingNotification","displayName":null,"productVersion":null,"priority":0,"dependsOn":[],"components":[{"component":{"symbolicName":"productPatchingNotification","type":null,"componentVersion":null,"properties":{"environmentId":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","environmentName":"8f86a9cc-62ab-4c01-80fb-c399c0e648eb","productName":"vra","currentVersion":"9.0.0.0"}},"priority":0}]}
2025-02-06T10:59:53.875Z INFO vrlcm[171714] [pool-3-thread-45] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnRegisterNotificationSchedulesTaskInitiated
Stage 16: Most importantly, VCF Automation does not support snapshots, so customers should configure SFTP as soon as they land on VCF Operations 9.0.x, whether via the install or the upgrade route. If SFTP is configured, this configuration is automatically pushed to the VCF Automation 9.0.x cluster.

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log
Log Snippets
2025-02-06T10:59:55.558Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vmspsftpupdate and the priority is => 16
2025-02-06T10:59:56.217Z INFO vrlcm[171714] [pool-3-thread-54] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Starting :: fetch VMSP kubeconfig YXYXYXYX
2025-02-06T10:59:56.220Z INFO vrlcm[171714] [pool-3-thread-54] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Successfully fetched kubeconfig YXYXYXYX cluster
2025-02-06T10:59:56.820Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPUtil] -- Writing kubeconfig YXYXYXYX : 436c3cae-4cd7-4ce3-a63f-d411f4aac29f process exited with code: 0
2025-02-06T10:59:56.820Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- kubeconfig YXYXYXYX /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f
2025-02-06T10:59:56.822Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- Process exited with code: 0
2025-02-06T10:59:56.822Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Executing shell command: chmod 777 /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh
2025-02-06T10:59:56.822Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ProcessUtil] -- Execute chmod
2025-02-06T10:59:56.825Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Result: [].
2025-02-06T10:59:56.825Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh
2025-02-06T10:59:57.112Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh YXYXYXYX code: 0 output: secret/sftp-password-secret YXYXYXYX
error: Warning: resource secrets/sftp-password-secret YXYXYXYX the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
2025-02-06T10:59:57.112Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh YXYXYXYX secret/sftp-password-secret YXYXYXYX
2025-02-06T10:59:57.113Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh YXYXYXYX Warning: resource secrets/sftp-password-secret YXYXYXYX the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
2025-02-06T10:59:57.114Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- response: {"error":"Warning: resource secrets/sftp-password-secret YXYXYXYX the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\n","output":"secret/sftp-password-secret configured\n","exitCode":0}
2025-02-06T10:59:57.114Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Executing shell command: rm -rf /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f_sftp_create_secret.sh
2025-02-06T10:59:57.115Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ProcessUtil] -- Execute rm
2025-02-06T10:59:57.118Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Result: [].
2025-02-06T10:59:57.118Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Executing shell command: rm -rf /data/vmsp/436c3cae-4cd7-4ce3-a63f-d411f4aac29f
2025-02-06T10:59:57.118Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ProcessUtil] -- Execute rm
2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.u.ShellExecutor] -- Result: [].
2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.v.p.t.VmspCreateSftpSecretTask] -- VMSP create secret YXYXYXYX
2025-02-06T10:59:57.120Z INFO vrlcm[171714] [pool-3-thread-55] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnCreateSecretTaskCompletion
2025-02-06T10:59:57.537Z INFO vrlcm[171714] [pool-3-thread-58] [c.v.v.l.v.p.u.VMSPServerRestUtil] -- Triggering API call:POST https://10.80.43.173:30005/webhooks/vmsp-platform/sftp/configure
2025-02-06T11:00:15.483Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.CreateVmspSftpReferencesTask] -- Reference creation for SFTP is successful with response: {"headers":{},"body":{"status":"SUCCESS","statusCode":"CREATED","message":"Success","resourceIdentifier":"81e56e29-f4aa-4bb1-9f5a-eb24a59e469a","errorCode":0,"errors":null},"statusCode":"CREATED","statusCodeValue":201}
2025-02-06T11:00:15.483Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.v.p.t.CreateVmspSftpReferencesTask] -- VMSP SFTP reference update successful
2025-02-06T11:00:15.483Z INFO vrlcm[171714] [pool-3-thread-72] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspSftpReferenceCreationSuccess
2025-02-06T11:00:16.015Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnVmspSftpReferenceCreationSuccess

Stage 17: After SFTP is configured on the VCF Automation cluster, the backup schedule is deployed into the cluster, guaranteeing that backups occur regularly: a full backup every 24 hours and an incremental backup every 15 minutes.
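To reason about when backups will actually fire, here is a small sketch of the stated cadence: a daily full backup at the configured hour (the log snippets below show `backups.schedule.hour=2`) and incrementals on 15-minute boundaries. The scheduler inside the cluster may differ in detail; this only illustrates the cadence described above.

```python
from datetime import datetime, timedelta

FULL_BACKUP_HOUR = 2                      # from "backups.schedule.hour=2" in the logs
INCREMENTAL_INTERVAL = timedelta(minutes=15)

def next_full_backup(now: datetime) -> datetime:
    """Next daily full backup at the configured hour (assumed cluster-local time)."""
    candidate = now.replace(hour=FULL_BACKUP_HOUR, minute=0, second=0, microsecond=0)
    return candidate if candidate > now else candidate + timedelta(days=1)

def next_incremental(now: datetime) -> datetime:
    """Next 15-minute incremental boundary after 'now'."""
    minutes = (now.minute // 15 + 1) * 15
    base = now.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(minutes=minutes)
```

For example, at 11:00 on 2025-02-06 (roughly when these logs were taken), the next incremental would land at 11:15 and the next full backup at 02:00 the following day.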

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log

Log Snippets
2025-02-06T11:00:16.559Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => vmspscheduledbackup and the priority is => 17
2025-02-06T11:00:17.190Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Starting :: fetch VMSP kubeconfig YXYXYXYX
2025-02-06T11:00:17.194Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.v.p.t.FetchVmspKubeconfigTask] -- Successfully fetched kubeconfig YXYXYXYX cluster
2025-02-06T11:00:17.195Z INFO vrlcm[171714] [pool-3-thread-77] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnFetchVmspKubeconfigSuccess
2025-02-06T11:00:17.797Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.t.VmspScheduledBackupTask] -- Starting :: vmsp scheduled backup task.
2025-02-06T11:00:17.800Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Running command: /usr/local/bin/vmsp pkg configure vmsp-platform -n vmsp-platform backups.schedule.hour=2 --wait --kubeconfig=KXKXKXKX
2025-02-06T11:00:17.801Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Process started
2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg configure vmsp-platform -n vmsp-platform backups.schedule.hour=2 --wait --kubeconfig=KXKXKXKX Output:
2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.u.VMSPDay2Util] -- Command: /usr/local/bin/vmsp pkg configure vmsp-platform -n vmsp-platform backups.schedule.hour=2 --wait --kubeconfig=KXKXKXKX Error: 2025/02/06 11:00:18 releases.vmsp.vmware.com/v1alpha1, Kind=PackageDeployment/vmsp-platform/vmsp-platform: updated
2025/02/06 11:00:21 Deployment succeeded
2025/02/06 11:00:21 /v1, Kind=Secret/vmsp-platform/vmsp.release.vmsp-platform.v4: KXKXKXKX
2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.v.p.t.VmspScheduledBackupTask] -- VMSP scheduled backup successful
2025-02-06T11:00:21.248Z INFO vrlcm[171714] [pool-3-thread-79] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnVmspScheduledBackupSuccess
2025-02-06T11:00:32.677Z INFO vrlcm[171714] [pool-3-thread-81] [c.v.v.l.v.p.t.VmspScheduledBackupRetentionPeriodConfigTask] -- VMSP scheduled backup retention period config successful
Stage 18: In the last stage, the VCF Operations inventory, that is, the SDDC endpoints, is pushed into VCF Automation. These endpoints are automatically made available in VCF Automation's All Apps organization.
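For readers correlating this stage in the logs, the sync request body (the `InventorySyncOpsDto` shown in the snippets below) can be reconstructed as follows. The environment ID is whatever the fleet management appliance assigned to your imported environment; this sketch only mirrors the logged payload and is not a documented, supported API contract:

```python
import json

def build_inventory_sync_spec(environment_id: str) -> dict:
    """Mirror of the InventorySyncOpsDto payload seen in vmware_vrlcm.log."""
    return {
        "environmentId": environment_id,
        "productIds": ["vra", "vmsp"],   # products whose endpoints are synced into Ops
        "registrationFlow": False,
        "vropsProperties": None,
        "vropsGettingDeployed": False,
    }

spec = build_inventory_sync_spec("8f86a9cc-62ab-4c01-80fb-c399c0e648eb")
print(json.dumps(spec, indent=2))
```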

Logs to Monitor at this stage :
/var/log/vrlcm/vmware_vrlcm.log

Log Snippets
2025-02-06T11:00:33.357Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.FlowProcessor] -- Processing the Engine Request to create the machine with ID => triggerinventorysyncwithops and the priority is => 18
2025-02-06T11:00:33.417Z INFO vrlcm[171714] [pool-3-thread-84] [c.v.v.l.d.c.t.TriggerInventorySyncWithOpsTask] -- Trigger inventory sync ops task
2025-02-06T11:00:33.417Z INFO vrlcm[171714] [pool-3-thread-84] [c.v.v.l.d.c.t.TriggerInventorySyncWithOpsTask] -- inventorySyncOpsDtoString: "{\"environmentId\":\"8f86a9cc-62ab-4c01-80fb-c399c0e648eb\",\"productIds\":[\"vra\",\"vmsp\"],\"registrationFlow\":false,\"vropsProperties\":null,\"vropsGettingDeployed\":false}"
2025-02-06T11:00:33.419Z INFO vrlcm[171714] [pool-3-thread-84] [c.v.v.l.l.c.EnvironmentController] -- Request to sync endpoints with Ops : {
"environmentId" : "8f86a9cc-62ab-4c01-80fb-c399c0e648eb",
"productIds" : [ "vra", "vmsp" ],
"registrationFlow" : false,
"vropsProperties" : null,
"vropsGettingDeployed" : false
}
-- InventorySyncOpsDto : {
"environmentId" : "8f86a9cc-62ab-4c01-80fb-c399c0e648eb",
"productIds" : [ "vra", "vmsp" ],
"registrationFlow" : false,
"vropsProperties" : null,
"vropsGettingDeployed" : false
}
2025-02-06T11:00:33.764Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.r.c.p.InventorySyncWithOpsPlanner] -- Fetching the list of products which needs to be synced
2025-02-06T11:00:34.031Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.r.c.RequestProcessor] -- Processing request with ID : 79f9b28e-3f66-4207-b3fb-92fa319437c3 with request type INVENTORY_SYNC_WITH_OPS with request state INPROGRESS.
2025-02-06T11:00:40.291Z INFO vrlcm[171714] [pool-3-thread-87] [c.v.v.l.p.c.v.t.ActivateAdaptersTask] -- oldStatus : {id=CASAdapter, state=ACTIVATED}
2025-02-06T11:00:40.291Z INFO vrlcm[171714] [pool-3-thread-87] [c.v.v.l.p.a.s.Task] -- Injecting Edge :: OnActivateAdaptersTaskCompleted
2025-02-06T11:00:40.583Z INFO vrlcm[171714] [scheduling-1] [c.v.v.l.a.c.EventProcessor] -- Responding for Edge :: OnActivateAdaptersTaskCompleted

This concludes the VCF Automation upgrade process. Hope this helps.

