
- Upgrade vRSLCM 8.8.2.3 to 8.10
What's new in vRSLCM 8.10 (Key Features) VCF Enhancements vRealize Suite Lifecycle Manager now provides integration between vRealize Suite products. With vRealize Log Insight, you can now perform log forwarding configuration from other vRealize Suite products to vRealize Log Insight. Similarly, with vRealize Operations Manager, you can now perform management pack configuration of other vRealize products in vRealize Operations Manager. Usability Enhancements You can enable or disable health checks for vRealize Suite products in vRealize Suite Lifecycle Manager. vRealize Automation Enhancements If vRealize Automation upgrade fails, an auto-revert feature is introduced to revert back the appliance to its previous working state. Participation in the Customer Experience Improvement Program for vRealize Automation is now available using the Pendo Customer Experience Program (Pendo CEIP). PSPR Enhancements If you have five failed login attempts in vRealize Suite Lifecycle Manager, your account will be locked with an error message. vRealize Operations Manager Enhancements Remote collectors are now renamed as Cloud Proxy nodes in vRealize Operations Manager version 8.10. Take Snapshot of vRSLCM Under "System Upgrade" , Check Online to identify the 8.10 manifest and then start uprgade Packages would be downloaded Once packages are downloaded, installation will begin Even though the installation completed, after the reboot and during the startup procedure , encountered an exception Reference: postupdate.log ++ date '+%Y-%m-%d %H:%M:%S' + echo '2022-10-11 18:22:48 /etc/bootstrap/postupdate.d/25-start-services starting...' + /etc/bootstrap/postupdate.d/25-start-services 8.8.2.3 8.10.0.6 0 + log '/etc/bootstrap/postupdate.d/25-start-services done, succeeded.' ++ date '+%Y-%m-%d %H:%M:%S' + echo '2022-10-11 18:29:30 /etc/bootstrap/postupdate.d/25-start-services done, succeeded.' + for script in "${bootstrap_dir}"/* + echo + '[' '!' -e /etc/bootstrap/postupdate.d/60-update-upgrade-status ']' + '[' '!' -x /etc/bootstrap/postupdate.d/60-update-upgrade-status ']' + log '/etc/bootstrap/postupdate.d/60-update-upgrade-status starting...' ++ date '+%Y-%m-%d %H:%M:%S' + echo '2022-10-11 18:29:30 /etc/bootstrap/postupdate.d/60-update-upgrade-status starting...' + /etc/bootstrap/postupdate.d/60-update-upgrade-status 8.8.2.3 8.10.0.6 0 + log '/etc/bootstrap/postupdate.d/60-update-upgrade-status done, succeeded.' ++ date '+%Y-%m-%d %H:%M:%S' + echo '2022-10-11 18:29:30 /etc/bootstrap/postupdate.d/60-update-upgrade-status done, succeeded.' Reference: updatecli.log root@lcm [ ~ ]# tail -f /opt/vmware/var/log/vami/updatecli.log 11/10/2022 18:29:31 [INFO] Update status: Running VMware tools reconfiguration 11/10/2022 18:29:31 [INFO] Running /opt/vmware/share/vami/vami_reconfigure_tools vmware-toolbox-cmd is /bin/vmware-toolbox-cmd vmtoolsd wrapper not required on this VM with systemd. 
11/10/2022 18:29:31 [INFO] Update status: Done VMware tools reconfiguration 11/10/2022 18:29:31 [INFO] Update status: Running finalizing installation 11/10/2022 18:29:31 [INFO] Running /opt/vmware/var/lib/vami/update/data/job/4/manifest_update 11/10/2022 18:29:31 [INFO] Update status: Done finalizing installation 11/10/2022 18:29:31 [INFO] Update status: Update completed successfully 11/10/2022 18:29:31 [INFO] Install Finished When i click on download log , it downloads bootstrap.log 2022-10-11 18:25:30,142 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e9-suite-vrealizeflex-c3-201907.dlf 2022-10-11 18:25:30,143 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e11-suite-vrealizeflex-c4-202007.dlf 2022-10-11 18:25:30,143 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e5-suite-vrealize-c2-201807.dlf 2022-10-11 18:25:30,144 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e15-suite-vrealize-c3-202207.dlf 2022-10-11 18:25:30,145 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e1-bt-c1-201903.dlf 2022-10-11 18:25:30,145 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e10-suite-vrealizeflex-c4-202007.dlf 2022-10-11 18:25:30,146 __main__ - INFO:Adding file to Zip dlfRepo/vra/8.10.0/dlf/license-vac-80-e14-suite-vrealizeflex-c3-202109.dlf 2022-10-11 18:25:30,163 __main__ - INFO:zip created successfully 2022-10-11 18:25:30,169 __main__ - ERROR:Exception while getting serviceadmin password Traceback (most recent call last): File "/var/lib/vrlcm/insert.py", line 76, in getPassword response = requests.get("http://localhost:8080/lcm/local/getpassword/serviceadmin@local",headers=headers) File "/usr/lib/python2.7/site-packages/requests/api.py", line 76, in get return request('get', url, params=params, **kwargs) File "/usr/lib/python2.7/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /lcm/local/getpassword/serviceadmin@local (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Reverted and then tried upgrade again. This time ensured no password expired. 
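For context, the ConnectionError in that traceback only means that nothing was listening on localhost:8080 yet when insert.py ran during bootstrap. Below is a minimal sketch of how such a call could be made tolerant of the service still starting up; the helper is hypothetical and not the shipped insert.py, only the URL comes from the traceback above.

import time
import requests

def get_serviceadmin_password(retries=10, delay=30):
    # Poll the local LCM endpoint until the service on port 8080 answers.
    url = "http://localhost:8080/lcm/local/getpassword/serviceadmin@local"
    for attempt in range(1, retries + 1):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response.text
        except requests.exceptions.RequestException as exc:
            print("attempt %d/%d failed: %s" % (attempt, retries, exc))
            time.sleep(delay)
    raise RuntimeError("LCM service on port 8080 never became reachable")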
Just thought this might help but let's see how the uprgade goes this time Before root@lcm [ ~ ]# chage -l root Last password change : Dec 02, 2021 Password expires : Dec 02, 2022 Password inactive : never Account expires : never Minimum number of days between password change : 0 Maximum number of days between password change : 365 Number of days of warning before password expires : 7 root@lcm [ ~ ]# chage -l postgres Last password change : Oct 08, 2021 Password expires : Dec 07, 2021 Password inactive : never Account expires : never Minimum number of days between password change : 0 Maximum number of days between password change : 60 Number of days of warning before password expires : 7 After root@lcm [ ~ ]# chage -m 0 -M 99999 -I -1 -E -1 postgres root@lcm [ ~ ]# chage -l postgres Last password change : Oct 08, 2021 Password expires : never Password inactive : never Account expires : never Minimum number of days between password change : 0 Maximum number of days between password change : 99999 Number of days of warning before password expires : 7 root@lcm [ ~ ]# chage -m 0 -M 99999 -I -1 -E -1 root root@lcm [ ~ ]# chage -l root Last password change : Dec 02, 2021 Password expires : never Password inactive : never Account expires : never Minimum number of days between password change : 0 Maximum number of days between password change : 99999 Number of days of warning before password expires : 7 Started again , installation begins after all packages are downloaded 2022-10-11 19:26:07.310 INFO [http-nio-8080-exec-9] c.v.v.l.u.ShellExecutor - -- Executing shell command: /opt/vmware/bin/vamicli update --progress --detail 2022-10-11 19:26:07.312 INFO [http-nio-8080-exec-9] c.v.v.l.u.ProcessUtil - -- Execute /opt/vmware/bin/vamicli 2022-10-11 19:26:07.425 INFO [http-nio-8080-exec-9] c.v.v.l.u.ShellExecutor - -- Result: [56 packages have been downloaded. Installing...]. Services coming up now Postupdate phase is now complete root@lcm [ /opt/vmware/var/log/vami ]# tail -f /var/log/bootstrap/postupdate.log + [[ -f /var/log/vrlcm/status.txt ]] + /var/lib/vrlcm/populate.sh % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to localhost port 8080 after 0 ms: Connection refused % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to localhost port 8080 after 0 ms: Connection refused % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (7) Failed to connect to localhost port 8080 after 7 ms: Connection refused % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Traceback (most recent call last): File "/usr/lib/python2.7/logging/__init__.py", line 868, in emit msg = self.format(record) File "/usr/lib/python2.7/logging/__init__.py", line 741, in format return fmt.format(record) File "/usr/lib/python2.7/logging/__init__.py", line 465, in format record.message = record.getMessage() File "/usr/lib/python2.7/logging/__init__.py", line 329, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Logged from file insert.py, line 302 + [[ -f /var/lib/vrlcm/SUCCESS ]] + echo 'Creating INPROGRESS to block UI from loading...' 
Creating INPROGRESS to block UI from loading... + rm -rf /var/lib/vrlcm/SUCCESS + touch /var/lib/vrlcm/INPROGRESS + exit 0 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/25-start-services done, succeeded. 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/60-update-upgrade-status starting... 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/60-update-upgrade-status done, succeeded. 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/70-blackstone-upgrade.sh starting... NOTICE: column "pipeline_name" of relation "cms_pipeline_execution" already exists, skipping 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/70-blackstone-upgrade.sh done, succeeded. 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/80-cleanup-patch-history starting... vami upgrade is successful 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/80-cleanup-patch-history done, succeeded. 2022-10-11 19:34:48 /etc/bootstrap/postupdate.d/81-postgres-autovacuum starting... auto vacuum setting for request table ALTER TABLE ALTER TABLE 2022-10-11 19:34:49 /etc/bootstrap/postupdate.d/81-postgres-autovacuum done, succeeded. 2022-10-11 19:34:49 /etc/bootstrap/postupdate.d/90-enable-fips-mode starting... 2022-10-11 19:34:49 /etc/bootstrap/postupdate.d/90-enable-fips-mode done, succeeded. 2022-10-11 19:34:49 /etc/bootstrap/postupdate.d/99-reboot-va starting... Creating SUCCESS before rebooting VA... Rebooting the VA.. vRSLCM appliance is rebooted After reboot , what i notice is root password expiration is now back to Dec 02,2022 where it was before the change root@lcm [ ~ ]# chage -l root Last password change : Dec 02, 2021 Password expires : Dec 02, 2022 Password inactive : never Account expires : never Minimum number of days between password change : 0 Maximum number of days between password change : 365 Number of days of warning before password expires : 7 root@lcm [ ~ ]# chage -l postgres Last password change : Oct 08, 2021 Password expires : never Password inactive : never Account expires : never Minimum number of days between password change : 0 Maximum number of days between password change : 99999 Number of days of warning before password expires : 7 The login page is now up Logged in System details has the new version Version: 8.10.0.6 Build Number: 20594451 Policy : 8.10.0 Details of the PSPACK 8.10.0.0 Finally this concludes the upgrade Regarding the issue i faced during the first attemp it has nothing to do with the root / postgres password for sure. I don't believe anyone would face same issue. Will try to get deep into it and see if i can find the cause
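Even though this first failure turned out to be unrelated to the root / postgres passwords, checking account expiry before an upgrade costs nothing. A small sketch that parses the same chage -l output shown above (run as root on the appliance; the output format is assumed to match the examples in this post):

import subprocess

def password_expires(account):
    # Return the "Password expires" value reported by `chage -l <account>`.
    output = subprocess.check_output(["chage", "-l", account], universal_newlines=True)
    for line in output.splitlines():
        if line.startswith("Password expires"):
            return line.split(":", 1)[1].strip()
    return "unknown"

for account in ("root", "postgres"):
    expiry = password_expires(account)
    print("%s password expires: %s" % (account, expiry))
    if expiry not in ("never", "unknown"):
        print("  consider: chage -m 0 -M 99999 -I -1 -E -1 %s" % account)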
- vRealize Operations Upgrade from 8.6.4 to 8.10
Here's the PDF file of the blog with screenshots if you would like to download and read.
Take a snapshot of the vROps product using vRSLCM.
Then click on Upgrade; that prompts you to trigger an inventory sync first, after which you can proceed.
Once the inventory sync is complete, proceed with the next step.
I'll be using vRSLCM's repository, where my binary was downloaded using a MyVMware account a few minutes back.
Once all the information is verified, click Next.
When we click Next, the APUAT page opens up where we can run the assessment.
You may click View Report to explore it.
Once satisfied with the report and having consented to the changes, we move on to the product snapshot pane.
I'd like to take a product snapshot and also retain the snapshot taken, hence both boxes are checked.
Now click Next for the validations, or prechecks, before the upgrade.
Since the validations are successful, go ahead and initiate the upgrade.
The total upgrade time for a single node is around 1 hour and 21 minutes.
That's it for the upgrade.
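Most of the product day-2 actions in this series (upgrade, snapshot, inventory sync) come back as a vRSLCM request that moves from INPROGRESS to COMPLETED or FAILED. If you prefer to watch that from a terminal rather than the UI, a rough polling sketch is below; the request-status endpoint path and the authentication shown are assumptions, so verify them against the vRSLCM API guide for your version.

import time
import requests

LCM_HOST = "https://lcm.cap.org"               # vRSLCM FQDN used in this lab
REQUEST_ID = "<request-id-from-the-ui>"        # shown on the request details page
HEADERS = {"Authorization": "Bearer <token>"}  # assumption: token obtained from the vRSLCM login API

def wait_for_request(request_id, poll_seconds=120):
    # NOTE: the endpoint path below is an assumption for illustration only.
    url = "%s/lcm/request/api/v2/requests/%s" % (LCM_HOST, request_id)
    while True:
        state = requests.get(url, headers=HEADERS, verify=False).json().get("state")
        print("request %s is %s" % (request_id, state))
        if state in ("COMPLETED", "FAILED"):
            return state
        time.sleep(poll_seconds)

wait_for_request(REQUEST_ID)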
- vRealize LogInsight Upgrade from 8.8.2 to 8.10
Pre-Requisite Ensure vRSLCM 8.10 PSPACK 1 is implemented Once done , download / upload the product binary which will enable you to upgrade vRLI to 8.10 vRSLCM 8.10 PSPACK 1 also allows you to upgrade vRNI to 6.8 , we will discuss that in the next blog Click here for vRSLCM 8.10 PSPACK 1 Release Notes Environment consists of vRLI 8.8.2 Upgrading this version to 8.10 Trigger Inventory Sync Check the binary or the pak file Take Snapshot Pre-checks would run in the background Pre-checks complete TLS Check is performed SSH Check is performed Valid IP/FQDN check is performed Cluster setup check for vRLI Disk space check on the root filesystem Submit the request It's a single node vRealize LogInsight environment , hence there are 8 stages. Let's discuss each one of them in detail Stage:1 Shutting Down Guest OS Stage:2 Creating Node Snapshot Stage:3 Power On VM Stage:4 vrlihealthcheck Stage:5 Create snapshot inventory Stage:6 Start vRealize LogInsight Generic Task Stage:7 deleteNodeSnapshot Stage:8 productupgradeinventoryupdate Looking at Stage 6 in detail and what really happens in the background Reference: vmware_vrlcm.log ### Genetic Task is Initiated ### 2022-10-28 13:28:37.695 INFO [pool-3-thread-42] c.v.v.l.p.v.StartGenericVRLIInstallTask - -- Starting :: Start vRLI Generic Task 2022-10-28 13:28:37.695 INFO [pool-3-thread-42] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnGenericVrliTaskInitialized ### Upgrade Task is triggered , pak file is identified ### 2022-10-28 13:28:38.265 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Starting :: vRLI upgrade task 2022-10-28 13:28:38.266 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Checking if vRLI instance is running 2022-10-28 13:28:38.349 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- The vRLI instance https://10.109.44.140 service is running 2022-10-28 13:28:39.234 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Return message for vrli: {"releaseName":"GA","version":"8.8.2-20056468"} 2022-10-28 13:28:39.238 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Return status code for vrli: 200 2022-10-28 13:28:39.240 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Current vRLI version is: 8.8.2-20056468 2022-10-28 13:28:39.241 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Current vRLI version without build is: 8.8.2 2022-10-28 13:28:39.241 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- vRLI upgrade spec properties {product=vrli, productId=vrli, masterNodeIP=10.109.44.140, takeSnapshot=true, snapshotWithMemory=false, vCenterHost=vc.cap.org, snapshotNamePrefix=VRLCM_AUTOGENERATED_2735afc8-a188-4779-a58f-50cd3997aa00, version=8.10.0, vrliAdminPassword=KXKXKXKX, isRetainSnapshot=false, snapshotPrefix=VRLCM_AUTOGENERATED_2735afc8-a188-4779-a58f-50cd3997aa00, environmentId=cf8ac4ce-a7a7-4958-8401-50efdf4f1489, environmentName=Production, vcUsername=vrsvc@cap.org, snapshotWithShutdown=true, tenantId=, vrliHostName=li.cap.org, repositoryType=lcmrepository, vrliUpgradePakUrl=http://lcm.cap.org/repo/productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak, quiesceSnapshot=false, isVcfUser=false, isVcfEnabledEnv=false, vcPassword=KXKXKXKX, rootPassword=KXKXKXKX 2022-10-28 13:28:39.243 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- vRLI upgrade version from user input 8.10.0 2022-10-28 13:28:39.245 INFO [pool-3-thread-13] c.v.v.l.c.s.ContentLeaseServiceImpl - -- Inside create content lease. 
2022-10-28 13:28:39.246 INFO [pool-3-thread-13] c.v.v.l.c.s.ContentLeaseServiceImpl - -- Created lease for the folder with id :: 564ae968-e032-4c87-89a6-2c1525f11ebd. 2022-10-28 13:28:39.261 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- /data/vm-config/symlinkdir/564ae968-e032-4c87-89a6-2c1525f11ebd 2022-10-28 13:28:39.262 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Started Downloading Content Repo 2022-10-28 13:28:39.263 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-10-28 13:28:39.263 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /productBinariesRepo 2022-10-28 13:28:39.264 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak 2022-10-28 13:28:39.264 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- URL :: /productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak 2022-10-28 13:28:39.264 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak 2022-10-28 13:28:39.272 INFO [pool-3-thread-13] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='86af9699-d28e-42b5-ac2b-ff2c32f5fab8', version=8.1.0.0} -> repoName='productBinariesRepo', contentState='PUBLISHED', url='/productBinariesRepo/vrli/8.10.0/upgrade/VMware-vRealize-Log-Insight-8.10.0-20623770.pak'} 2022-10-28 13:28:39.350 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Completed Downloading Content Repo and starting InputStream 2022-10-28 13:28:39.351 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- /data/vm-config/symlinkdir/564ae968-e032-4c87-89a6-2c1525f11ebd/VMware-vRealize-Log-Insight-8.10.0-20623770.pak ### pak file is uploaded to vRLI ### 2022-10-28 13:29:32.013 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Proceeding with manual copy of upgrade pak using ssh. 2022-10-28 13:29:32.141 INFO [pool-3-thread-13] c.v.v.l.u.SshUtils - -- Uploading file --> ssh://root@10.109.44.140/tmp 2022-10-28 13:29:50.961 INFO [pool-3-thread-13] c.v.v.l.u.SshUtils - -- Uploaded file sucessfully 2022-10-28 13:29:52.652 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Return status code for vrli: 200 2022-10-28 13:29:52.654 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- pak file upgrade version: 8.10.0-20623770 2022-10-28 13:29:52.655 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- vRLI upgrade version from pak file: 8.10.0-20623770 ### Upgrade is now triggered ### 2022-10-28 13:29:53.108 INFO [pool-3-thread-13] c.v.v.l.p.v.VrliUpgradeTask - -- Pak was loaded successfully. Eula pending. Triggerring upgrade 2022-10-28 13:29:53.109 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- pak file successfully uploaded for version: 8.10.0-20623770. Triggerring the upgrade now 2022-10-28 13:29:53.421 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Upgrade triggered. Status is: Upgrading 2022-10-28 13:29:53.421 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- Upgrade status: Upgrading. Waiting for upgrade to finish 2022-10-28 13:29:53.421 INFO [pool-3-thread-13] c.v.v.l.d.v.InstallConfigureVRLI - -- vRLI is not up. 
sleeping for 3 min Reference: /storage/var/loginsight/upgrade.log ( At this point in time switch over to vRLI logs ) ### checks are performed on disk space and certificates ### 2022-10-28 13:29:51,927 loginsight-upgrade INFO Certificate verified: VMware-vRealize-Log-Insight.cert: C = US, ST = California, L = Palo Alto, O = "VMware, Inc." error 18 at 0 depth lookup:self signed certificate OK 2022-10-28 13:29:51,951 loginsight-upgrade INFO Signature of the manifest validated: Verified OK 2022-10-28 13:29:52,559 loginsight-upgrade INFO Current version is 8.8.2-20056468 and upgrade version is 8.10.0-20623770. Version Check successful! 2022-10-28 13:29:52,560 loginsight-upgrade INFO Available Disk Space at /tmp: 3440889856 2022-10-28 13:29:52,560 loginsight-upgrade INFO Disk Space Check successful! 2022-10-28 13:29:52,560 loginsight-upgrade INFO Available Disk Space at /storage/core: 16030388224 2022-10-28 13:29:52,560 loginsight-upgrade INFO Disk Space Check successful! 2022-10-28 13:29:52,560 loginsight-upgrade INFO Available Disk Space at /storage/var: 11989639168 2022-10-28 13:29:52,560 loginsight-upgrade INFO Disk Space Check successful! 2022-10-28 13:29:52,561 loginsight-upgrade INFO Loading eula license successful! 2022-10-28 13:29:52,561 loginsight-upgrade INFO Done! 2022-10-28 13:29:53,564 loginsight-upgrade INFO Certificate verified: VMware-vRealize-Log-Insight.cert: C = US, ST = California, L = Palo Alto, O = "VMware, Inc." error 18 at 0 depth lookup:self signed certificate OK ### Version Check is done ### 2022-10-28 13:29:53,579 loginsight-upgrade INFO Signature of the manifest validated: Verified OK 2022-10-28 13:29:53,805 loginsight-upgrade INFO Current version is 8.8.2-20056468 and upgrade version is 8.10.0-20623770. Version Check successful! 2022-10-28 13:29:53,806 loginsight-upgrade INFO Available Disk Space at /tmp: 3440914432 2022-10-28 13:29:53,806 loginsight-upgrade INFO Disk Space Check successful! 2022-10-28 13:29:53,806 loginsight-upgrade INFO Available Disk Space at /storage/core: 16030560256 2022-10-28 13:29:53,806 loginsight-upgrade INFO Disk Space Check successful! 2022-10-28 13:29:53,807 loginsight-upgrade INFO Available Disk Space at /storage/var: 11989626880 2022-10-28 13:29:53,808 loginsight-upgrade INFO Disk Space Check successful! 2022-10-28 13:30:53,506 loginsight-upgrade INFO Checksum validation successful! 2022-10-28 13:30:53,512 loginsight-upgrade INFO Attempting to upgrade to version 8.10.0-20623770 ### upgrade-driver script is triggered ### 2022-10-28 13:30:53,925 upgrade-driver INFO Starting 'upgrade-driver' script ... 2022-10-28 13:30:53,927 upgrade-driver INFO Start processing the manifest file ... 2022-10-28 13:30:53,927 upgrade-driver INFO Log Insight TO_VERSION is manifest file is 8.10.0-20623770 2022-10-28 13:30:53,927 upgrade-driver INFO Parsed version is 8.10.0-20623770 2022-10-28 13:30:53,927 upgrade-driver INFO Creating file /storage/core/upgrade-version to store upgrade version. 2022-10-28 13:30:53,943 upgrade-driver INFO The file /storage/core/upgrade-version is created successfully. 2022-10-28 13:31:07,318 upgrade-driver INFO Cassandra snapshot run time: 0:00:13.369487 2022-10-28 13:31:07,714 upgrade-driver INFO Start processing key list ... 2022-10-28 13:31:07,714 upgrade-driver INFO Start processing rpm list ... 2022-10-28 13:31:07,714 upgrade-driver INFO Rpm by name upgrade-image-8.10.0-20623770.rpm 2022-10-28 13:32:34,318 upgrade-driver INFO INFO: Running /storage/core/upgrade/kexec-li - Resize|Partition|Boot ... 
Starting to run kexec-li script ... Reading and saving /etc/ssh/sshd_config Reading and saving old ssh keys if key based Authentication is enabled cp: cannot stat '/root/.ssh//id_rsa': No such file or directory cp: cannot stat '/root/.ssh//id_rsa.pub': No such file or directory cp: cannot stat '/root/.ssh//known_hosts': No such file or directory Reading and saving /etc/hosts Reading and saving ssh host keys Reading and saving /var/lib/loginsight-agent/liagent.ini Reading and saving hostname Reading and saving old cassandra keystore Failed copying /usr/lib/loginsight/application/lib/apache-cassandra-*/conf/keystore* Reading and saving old default keystore Reading and saving old default truststore Reading and saving old tomcat configs chmod ing /storage/core/upgrade/vmdk-extracted-root/usr/lib/loginsight/application/etc/3rd_config/keystore* chmod ing /storage/core/upgrade/vmdk-extracted-root/usr/lib/loginsight/application/etc/truststore* Reading and saving old loginsight.conf Reading and saving old password in /etc/shadow Root password info root P 07/29/2022 0 365 7 -1 Root password change date is 07/29/2022 Root password is set. Password reset will not be required on first login. Reading and saving /etc/fstab Reading and saving cacerts Copying java.security to java.security.old Reading and saving network configs Reading and saving resolv.conf Lazy partition is sda5 sda partition count is 5 /storage/core/upgrade/kexec-li script run took 74 seconds Partition sda5 , which is lazy partition, will be formatted and will become root partition Photon to Photon upgrade flow will be called, where base OS was Photon ... Starting to run photon2photon script ... Root partition copy took 84 seconds clean up upgrade-image.rpm Removing lock file /storage/core/upgrade/photon2photon-base-photon.sh script run took 87 seconds ### reboots appliance ### Rebooting... ### After reboot , from journalcrl logs , we can see following statement stating that the upgrade was successful ### Oct 28 13:36:17 li.cap.org systemd[1]: Started Mark VMware vRealize Log Insight upgrade successful. -- Subject: Unit loginsight_mark_upgrade_successful.service has finished start-up -- Defined-By: systemd -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit loginsight_mark_upgrade_successful.service has finished starting up. -- -- The start-up result is RESULT. Oct 28 13:36:17 li.cap.org systemd[1]: Started Cleanup after successful upgrade of VMware vRealize Log Insight. -- Subject: Unit loginsight_cleanup_after_upgrade.service has finished start-up -- Defined-By: systemd -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit loginsight_cleanup_after_upgrade.service has finished starting up. -- -- The start-up result is RESULT. ### cleanup done ### Oct 28 13:36:27 li.cap.org cleanup_after_upgrade.sh[2043]: Removed /etc/systemd/system/graphical.target.wants/loginsight_cleanup_after_upgrade.service. Oct 28 13:36:27 li.cap.org cleanup_after_upgrade.sh[2043]: Removed /etc/systemd/system/multi-user.target.wants/loginsight_cleanup_after_upgrade.service. Upgrade is now complete Important Logs to Refer vRSLCM Appliance: /var/log/vrlcm/vmware_vrlcm.log vRLI Applinace: /storage/var/loginsight/upgrade.log
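The upgrade.log excerpts above show Log Insight running the same disk space check against /tmp, /storage/core and /storage/var before accepting the pak. If you want to eyeball that up front, here is a minimal sketch; the 5 GiB threshold is an assumption for illustration only, not the value Log Insight actually enforces.

import shutil

PATHS = ["/tmp", "/storage/core", "/storage/var"]
MIN_FREE_BYTES = 5 * 1024 ** 3  # assumed threshold, for illustration only

for path in PATHS:
    free = shutil.disk_usage(path).free
    status = "OK" if free >= MIN_FREE_BYTES else "LOW"
    print("%-15s free: %d bytes [%s]" % (path, free, status))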
- | Implementing vRNI 6.6 Patch 2 via vRSLCM 8.x | Deepdive |
Goal
Implement vRNI 6.6 Patch 2 and understand its specifics. This is a standard install of vRNI, not a vRNI cluster.
PDF
Download the attached document if the screenshots are not clear.
Procedure
Deployed a vRNI 6.6 environment and downloaded the vRNI 6.6 P2 patch, which will be applied.
Before going ahead and implementing the patch, I wanted to take a snapshot, so I selected the option to shut down the appliance and then take the snapshot. The precheck is successful.
Now that the snapshot request is complete, we can get into the patch implementation phase.
Ensure an inventory sync is performed before installing the patch; it is a recommended step.
Select the product and click on the three dots, which brings up the day-2 actions pane. Click on Install Patch.
This retrieves the available patches for the product. Remember, I've already downloaded vRNI 6.6 P2 and kept it ready. Select the patch and click on Next.
When we click on Next, there's an option to "INSTALL". Patch installation progresses.
The following tasks are performed during a patch installation:
Stage 1 1. Start 2. vRNI Upload Upgrade Bundle 3. vRNI Upgrade Bundle Status 4. Upgrade Precheck 5. Upgrade Precheck Status 6. VRNI Online Upgrade 7. Check Upgrade Status 8. vRealize Network Insight Watermark Configuration 9. Product Upgrade Patch Schedule Notification 10. Modify Notification Scheduler 11. Final
Stage 2 1. Start 2. Product Patch Inventory Update 3. Final
We can see that the patch implementation completed in approximately 20 minutes. Now, if I browse to the patch history under the product's day-2 actions pane, I can see it implemented.
Let's deep dive a bit to understand what really happens in the background.
1. Request id is generated
2022-11-23 02:20:33.205 INFO [http-nio-8080-exec-3] c.v.v.l.l.u.RequestSubmissionUtil - -- Generic Request Response : { "requestId" : "c1faa281-96e1-4296-b2b0-fdf469a4f24b" }
2. ProductPatchPlanner is initiated
2022-11-23 02:20:33.560 INFO [scheduling-1] c.v.v.l.r.c.p.ProductPatchPlanner - -- Patch url :: https://lcm.cap.org/repo/productPatchRepo/patches/vrni/6.6.0/VMware-vRNI.6.6.0.P2.1651837449.patch.bundle
2022-11-23 02:20:33.560 INFO [scheduling-1] c.v.v.l.r.c.p.ProductPatchPlanner - -- Patch version selected for application : 6.6.0.P2.1651837449
2022-11-23 02:20:33.561 INFO [scheduling-1] c.v.v.l.r.c.p.ProductPatchPlanner - -- Patch Product Spec :: { "vmid" : "ebc901f5-68b8-483b-a3ae-aa55151baab0", "tenant" : "default", "originalRequest" : null, "enhancedRequest" : null, "symbolicName" : null, "acceptEula" : false, "variables" : { }, "products" : [ { "symbolicName" : "upgradevrni", * * * "productId" : "vrni" } }, "priority" : 3 } ] } ] }
3. Request is created and is set to be processed
2022-11-23 02:20:33.674 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- Processing request with ID : c1faa281-96e1-4296-b2b0-fdf469a4f24b with request type PRODUCT_PATCH_INSTALL with request state INPROGRESS
4. Tasks start
#### VRNI Upload Upgrade Bundle task ####
2022-11-23 02:20:34.536 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- Processing the Engine Request to create the machine with ID => upgradevrni and the priority is => 0
2022-11-23 02:20:34.576 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- Injected OnStart Edge for the Machine ID :: upgradevrni
2022-11-23 02:20:34.778 INFO [pool-3-thread-29] c.v.v.l.p.c.v.t.u.VRNIUploadUpgradeBundleTask - -- Starting :: VRNI Upload Upgrade Bundle task.
2022-11-23 02:20:41.430 INFO [pool-3-thread-29] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /patches/vrni/6.6.0/VMware-vRNI.6.6.0.P2.1651837449.patch.bundle 2022-11-23 02:20:41.430 INFO [pool-3-thread-29] c.v.v.l.c.c.ContentDownloadController - -- URL :: /productPatchRepo/patches/vrni/6.6.0/VMware-vRNI.6.6.0.P2.1651837449.patch.bundle 2022-11-23 02:20:41.430 INFO [pool-3-thread-29] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /productPatchRepo/patches/vrni/6.6.0/VMware-vRNI.6.6.0.P2.1651837449.patch.bundle 2022-11-23 02:20:41.433 INFO [pool-3-thread-29] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='886ffa3e-86f4-4427-a7e2-ae5efde0be06', version=8.1.0.0} -> repoName='productPatchRepo', contentState='PUBLISHED', url='/productPatchRepo/patches/vrni/6.6.0/VMware-vRNI.6.6.0.P2.1651837449.patch.bundle'} 2022-11-23 02:21:08.098 INFO [pool-3-thread-29] c.v.v.l.p.c.v.t.u.VRNIUploadUpgradeBundleTask - -- VRNI Upload Upgrade Bundle Rest call status code:200 output message :OK output:{"status":true,"statusCode":{"code":0,"codeStr":"OK"},"message":"Bundle upload in progress","data":null} 2022-11-23 02:21:08.119 INFO [pool-3-thread-29] c.v.v.l.p.c.v.t.u.VRNIUploadUpgradeBundleTask - -- Completed :: VRNI Upload Upgrade Bundle task. Injecting status check of upgrade bundle. 2022-11-23 02:21:08.119 INFO [pool-3-thread-29] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVRNIProcessUpgradeBundleInitiated #### VRNI Upgrade Bundle Status task #### 2022-11-23 02:21:09.034 INFO [pool-3-thread-25] c.v.v.l.p.c.v.t.u.VRNIUpgradeBundleStatusTask - -- Starting :: VRNI Upgrade Bundle Status task. 2022-11-23 02:21:14.478 INFO [pool-3-thread-25] c.v.v.l.p.c.v.t.u.VRNIUpgradeBundleStatusTask - -- VRNI Upgrade Bundle Status Rest call status code:200 output message :OK output:{"status":false,"statusCode":{"code":37,"codeStr":"IN_PROGRESS"},"message":"Bundle upload in progress","data":null} 2022-11-23 02:24:20.024 INFO [pool-3-thread-25] c.v.v.l.p.c.v.t.u.VRNIUpgradeBundleStatusTask - -- VRNI Upgrade Bundle Status Rest call status code:200 output message :OK output:{"status":true,"statusCode":{"code" :0,"codeStr":"OK"},"message":"Update bundle uploaded successfully","data":{"version":{"number":"1651837449","name":"6.6.0","createdTs":1651837449000,"offline":true},"readMe":{"sections":[{"name":"Release Version","b ody":["VMware-vRNI.6.6.0.P2"],"type":"string"},{"name":"KB Article","body":[""],"type":"link"},{"name":"Release Note","body":[""],"type":"link"},{"name":"Release Items","body":["Defect fixes."],"type":"list"},{"name":"Release Date","body":["1651837449"],"type":"string"}]}}} 2022-11-23 02:24:20.038 INFO [pool-3-thread-25] c.v.v.l.p.c.v.t.u.VRNIUpgradeBundleStatusTask - -- Completed :: VRNI Upgrade Bundle Status task. #### VRNI Upgrade Precheck task #### 2022-11-23 02:24:20.039 INFO [pool-3-thread-25] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVRNIUpgradePreCheckInitialized 2022-11-23 02:24:20.419 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vrni.task.upgrade.UpgradePrecheckTask 2022-11-23 02:24:20.424 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vrni.task.upgrade.UpgradePrecheckTask 2022-11-23 02:24:20.458 INFO [pool-3-thread-47] c.v.v.l.p.c.v.t.u.UpgradePrecheckTask - -- Starting :: VRNI Upgrade Precheck task. 
2022-11-23 02:24:25.955 INFO [pool-3-thread-47] c.v.v.l.p.c.v.t.u.UpgradePrecheckTask - -- VRNI Upgrade pre-check Rest call status code:200 output message :OK output:{"status":true,"statusCode":{"code":0,"codeStr":"OK"},"message":"OK","data":"633e4ed1-612d-4eff-9c03-6919a39a45f5"} 2022-11-23 02:24:25.988 INFO [pool-3-thread-47] c.v.v.l.p.c.v.t.u.UpgradePrecheckTask - -- Upgrade pre-check request processed successfully 2022-11-23 02:24:26.000 INFO [pool-3-thread-47] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVRNIUpgradePreCheckStatusInitialized 2022-11-23 02:24:26.327 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Starting :: VRNI Upgrade Precheck Status task. 2022-11-23 02:24:31.697 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- vRNI Upgrade Precheck Status URI :: /api/management/upgrade/pre-checks/633e4ed1-612d-4eff-9c03-6919a39a45f5 2022-11-23 02:24:31.774 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- VRNI Upgrade pre-check Rest call status code:200 output message :OK output:{"status":true,"statusCode":{"code":0,"codeStr":"OK"},"message":"OK","data":{"messages":[{"msg":"Customer table check","type":"INFO","status":"PASS","id":"ActiveCustomersCardinalityCheckTask","title":"Customer Table Check","consentMsg":null},{"msg":"HDFS Journal Nodes sync check","type":"INFO","status":"PASS","id":"JournalNodeSyncCheckTask","title":"HDFS Journal Node","consentMsg":null},{"msg":"HBASE and HDFS health check is in progress..","type":"INFO","status":"IN_PROGRESS","id":"HBaseHdfsHealthCheckTask","title":"HBase and HDFS Health Check","consentMsg":null},{"msg":"Disk check","type":"INFO","status":"PASS","id":"DiskCheckTask","title":"Disk Check","consentMsg":null},{"msg":"Version check","type":"INFO","status":"PASS","id":"VersionCheckTask","title":"Version Check","consentMsg":null},{"msg":"NTP is in sync","type":"INFO","status":"PASS","id":"NtpSyncCheckTask","title":"NTP Sync Check","consentMsg":null},{"msg":"Bundle verification","type":"INFO","status":"PASS","id":"BundleVerificationTask","title":"Bundle Verification","consentMsg":null}],"lastUpdateTs":1669170269132}} 2022-11-23 02:24:31.780 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Customer table check 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- ActiveCustomersCardinalityCheckTask pre-check status : PASS 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 1 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : HDFS Journal Nodes sync check 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- JournalNodeSyncCheckTask pre-check status : PASS 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 2 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : HBASE and HDFS health check is in progress.. 
2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- HBaseHdfsHealthCheckTask pre-check status : IN_PROGRESS 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Disk check 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- DiskCheckTask pre-check status : PASS 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 3 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Version check 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- VersionCheckTask pre-check status : PASS 2022-11-23 02:24:31.781 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 4 2022-11-23 02:24:31.782 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : NTP is in sync 2022-11-23 02:24:31.782 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- NtpSyncCheckTask pre-check status : PASS 2022-11-23 02:24:31.782 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 5 2022-11-23 02:24:31.782 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Bundle verification 2022-11-23 02:24:31.782 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- BundleVerificationTask pre-check status : PASS 2022-11-23 02:24:31.782 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 6 2022-11-23 02:30:37.201 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- vRNI Upgrade Precheck Status URI :: /api/management/upgrade/pre-checks/633e4ed1-612d-4eff-9c03-6919a39a45f5 2022-11-23 02:30:37.250 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- VRNI Upgrade pre-check Rest call status code:200 output message :OK output:{"status":true,"statusCode":{"code":0,"codeStr":"OK"},"message":"OK","data":{"messages":[{"msg":"Customer table check","type":"INFO","status":"PASS","id":"ActiveCustomersCardinalityCheckTask","title":"Customer Table Check","consentMsg":null},{"msg":"HDFS Journal Nodes sync check","type":"INFO","status":"PASS","id":"JournalNodeSyncCheckTask","title":"HDFS Journal Node","consentMsg":null},{"msg":"HBase and HDFS are healthy","type":"INFO","status":"PASS","id":"HBaseHdfsHealthCheckTask","title":"HBase and HDFS Health Check","consentMsg":null},{"msg":"Disk check","type":"INFO","status":"PASS","id":"DiskCheckTask","title":"Disk Check","consentMsg":null},{"msg":"Version check","type":"INFO","status":"PASS","id":"VersionCheckTask","title":"Version Check","consentMsg":null},{"msg":"NTP is in sync","type":"INFO","status":"PASS","id":"NtpSyncCheckTask","title":"NTP Sync Check","consentMsg":null},{"msg":"Bundle verification","type":"INFO","status":"PASS","id":"BundleVerificationTask","title":"Bundle Verification","consentMsg":null}],"lastUpdateTs":1669170272054}} 2022-11-23 02:30:37.255 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Customer table check 2022-11-23 02:30:37.255 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- ActiveCustomersCardinalityCheckTask pre-check status : PASS 2022-11-23 02:30:37.255 INFO [pool-3-thread-36] 
c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 1 2022-11-23 02:30:37.255 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : HDFS Journal Nodes sync check 2022-11-23 02:30:37.255 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- JournalNodeSyncCheckTask pre-check status : PASS 2022-11-23 02:30:37.255 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 2 2022-11-23 02:30:37.256 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : HBase and HDFS are healthy 2022-11-23 02:30:37.256 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- HBaseHdfsHealthCheckTask pre-check status : PASS 2022-11-23 02:30:37.256 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 3 2022-11-23 02:30:37.256 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Disk check 2022-11-23 02:30:37.256 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- DiskCheckTask pre-check status : PASS 2022-11-23 02:30:37.256 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 4 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Version check 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- VersionCheckTask pre-check status : PASS 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 5 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : NTP is in sync 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- NtpSyncCheckTask pre-check status : PASS 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 6 2022-11-23 02:30:37.257 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Upgrade Pre-check message : Bundle verification 2022-11-23 02:30:37.258 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- BundleVerificationTask pre-check status : PASS 2022-11-23 02:30:37.258 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Passed YXYXYXYX count 7 2022-11-23 02:30:37.258 INFO [pool-3-thread-36] c.v.v.l.p.c.v.t.u.UpgradePrecheckStatusTask - -- Pre-check completed successfully. 2022-11-23 02:30:37.258 INFO [pool-3-thread-36] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVRNIUpgradePreCheckStatusSuccess #### Now that the precheck is successful. 
Online VRNI Upgrade is triggered #### 2022-11-23 02:30:37.832 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnVRNIUpgradePreCheckStatusSuccess 2022-11-23 02:30:37.832 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vrni.task.upgrade.UpgradePrecheckStatusTask 2022-11-23 02:30:37.832 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vrni.task.upgrade.VRNIOnlineUpgradeTask 2022-11-23 02:30:37.841 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vrni.task.upgrade.VRNIOnlineUpgradeTask 2022-11-23 02:31:24.667 INFO [pool-3-thread-14] c.v.v.l.p.c.v.t.u.VRNIOnlineUpgradeTask - -- Platform upgrade request processed successfully 2022-11-23 02:31:24.668 INFO [pool-3-thread-14] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVRNISingleClickOfflineUpgradeSuccess 2022-11-23 02:31:25.506 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Starting :: Check Upgrade status. 2022-11-23 02:31:31.238 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Upgrade status check : IN_PROGRESS 2022-11-23 02:31:31.279 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- Executing command --> sudo -H -u ubuntu bash -c 'sudo cli show-version' 2022-11-23 02:31:31.882 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- exit-status: -1 2022-11-23 02:31:31.894 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-11-23 02:31:31.895 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : -1, "outputData" : "6.6.0.1648892975\n", "errorData" : null, "commandTimedOut" : false } 2022-11-23 02:31:31.896 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Upgrade version: 6.6.0 2022-11-23 02:31:31.896 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Expected Patch version : 1651837449 2022-11-23 02:31:31.896 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Current Patch version : 1648892975 #### Approximately 10 minutes later, we do complete the patching process as shown below #### 2022-11-23 02:41:38.105 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Upgrade status check : SUCCESS 2022-11-23 02:41:38.146 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- Executing command --> sudo -H -u ubuntu bash -c 'sudo cli show-version' 2022-11-23 02:41:38.747 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- exit-status: -1 2022-11-23 02:41:38.747 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-11-23 02:41:38.748 INFO [pool-3-thread-45] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : -1, "outputData" : "6.6.0.1651837449\n", "errorData" : null, "commandTimedOut" : false } 2022-11-23 02:41:38.749 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Upgrade version: 6.6.0 2022-11-23 02:41:38.749 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Expected Patch version : 1651837449 2022-11-23 02:41:38.749 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Current Patch version : 1651837449 2022-11-23 02:41:38.749 INFO [pool-3-thread-45] c.v.v.l.p.c.v.t.u.CheckUpgradeStatusTask - -- Upgrade completed successfully 2022-11-23 02:41:38.749 INFO [pool-3-thread-45] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVRNIOfflineUpgradeStatusSuccess #### vRNI VCF Watermark Configuration is 
skipped as it is not a VCF based environment ####
2022-11-23 02:41:39.292 INFO [pool-3-thread-33] c.v.v.l.p.c.v.t.v.VrniUpgradeVcfWatermarkingTask - -- ******** vRNI VCF Watermark Configuration Skipped as its not VCF User
#### ProductPatchInventoryUpdateTask ####
2022-11-23 02:41:42.226 INFO [pool-3-thread-49] c.v.v.l.d.c.t.i.ProductPatchInventoryUpdateTask - -- Executing ProductPatchInventoryUpdateTask...
2022-11-23 02:41:42.227 INFO [pool-3-thread-49] c.v.v.l.d.c.t.i.ProductPatchInventoryUpdateTask - -- environmentId : 0efced57-abfa-4494-875e-337a6037b8e8
2022-11-23 02:41:42.227 INFO [pool-3-thread-49] c.v.v.l.d.c.t.i.ProductPatchInventoryUpdateTask - -- productId : vrni
#### Finally the patch is now marked as complete ####
2022-11-23 02:41:42.255 INFO [pool-3-thread-49] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnProductPatchUpdateCompleted
During the upgrade precheck phase, the following checks are performed. vRSLCM invokes the upgrade precheck API on the vRNI end; vRNI performs the checks and returns whether each one is a success or a failure. Based on the result, vRSLCM either moves forward with the next phase or throws an error in the UI.
1. Customer table check 2. HDFS Journal Nodes sync check 3. HBASE and HDFS health check 4. Disk check 5. Version check 6. NTP Sync check 7. Bundle verification
If there is an error, fix it based on the exception and then retry the patch implementation by clicking the "Retry" button on the request.
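For the curious, the log lines above show that vRSLCM does nothing more exotic than repeatedly calling the vRNI pre-check status URI (/api/management/upgrade/pre-checks/<id>) and counting the PASS results. A rough standalone sketch of the same loop is below; the vRNI host name, the authentication header and the polling interval are assumptions for illustration, only the URI and the response shape are taken from the log.

import time
import requests

VRNI_HOST = "https://vrni.cap.org"                     # placeholder FQDN
PRECHECK_ID = "633e4ed1-612d-4eff-9c03-6919a39a45f5"   # id returned when the pre-check is requested
HEADERS = {"Authorization": "NetworkInsight <token>"}  # assumption: obtain a vRNI auth token first

def wait_for_prechecks(poll_seconds=60):
    url = "%s/api/management/upgrade/pre-checks/%s" % (VRNI_HOST, PRECHECK_ID)
    while True:
        messages = requests.get(url, headers=HEADERS, verify=False).json()["data"]["messages"]
        for check in messages:
            print("%-35s %s" % (check["title"], check["status"]))
        if all(check["status"] == "PASS" for check in messages):
            return messages
        time.sleep(poll_seconds)

wait_for_prechecks()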
- Why do I need Product Support Packs? Let's learn …
Goal
While speaking to customers and peers, there were questions around why one needs Product Support Packs and what their importance is. Before getting into the "why" part, let's try to understand what a "Product Support Pack" is.
Note: Product Support Packs are also referred to as PSPACK or PSPAK. VMware Aria Suite Lifecycle is also known as Suite Lifecycle or VASL, so let's use these acronyms in our blog from here on.
Definition
In VMware Aria Suite Lifecycle, a Product Support Pack is a software package that updates an existing product support policy to incorporate support for new releases of the VMware Aria suite of products. Occasionally, it's also used to deliver fixes for minor bugs.
Example
Let's discuss a real-world example rather than a hypothetical one. We will use VMware Aria Suite Lifecycle 8.12 as the product version for our use-case discussion.
As a customer, when I install VMware Aria Suite Lifecycle 8.12, log into Suite Lifecycle's UI, and click on Lifecycle Operations --- Settings --- Product Support Pack, I'll be presented with:
Policy ID
Supported products along with versions
Option to check for product support packs online and to upload a PSPACK (offline method)
What does this Policy 8.12.0.0 mean? Why am I being shown these product names and versions?
Policy 8.12.0.0 is the out-of-the-box product support that comes when you install or upgrade to a specific version of Suite Lifecycle. The product names and versions listed in that product support policy are the ones you can install (greenfield) or upgrade to (brownfield).
Suite Lifecycle 8.12 was released around 20th March 2023; VMware Aria Automation, Operations and Operations for Logs were released on the same day. Policy 8.12.0.0 therefore contains the versions of these products. This means users/customers can download or map the product binaries and start installing/upgrading their products.
But hold on, VMware Aria Operations for Networks 6.10 was released a week later. So how can I upgrade to or install that specific version when it's not listed in the policy? This is the point where a Product Support Pack comes into the picture.
As documented in the release notes, there are 3 PSPACKs released for Suite Lifecycle 8.12. If we read the product support details section of 8.12 Product Support Pack 1, it was released to support VMware Aria Operations for Networks 6.10, which came a week later. This implies Product Support Packs have a mechanism to update VMware Aria Suite Lifecycle's product checksums for releases that come out after its own release (a small illustration of the checksum idea is sketched at the end of this post). This enables customers to consume the product's latest version for install and upgrade.
One more question: reading through the release notes we see that PSPACK 2 supports a refreshed product binary of VMware Aria Operations for Logs 8.12. Why was this done? Let me answer that. The Suite Lifecycle 8.12 GA version supported Operations for Logs 8.12 out of the box. But due to a certificate issue that occurred in April 2023, the product binaries were replaced, which means the checksums present in the product out of the box would not match those of the new product binaries. Hence, a new product support pack was released to support the refreshed product binaries of VMware Aria Operations for Logs 8.12.
Remember, Product Support Packs are here to help you and they are never an overhead. It hardly takes a few minutes to implement one.
Hope this clarifies ..... cheers
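As promised above, here is a toy illustration of the checksum idea behind a PSPACK: a product binary is accepted only when its hash matches the value the current support policy knows about. The file path, hash algorithm and expected value are all made up for illustration; this is not how Suite Lifecycle is implemented internally.

import hashlib

def file_digest(path, chunk_size=1024 * 1024):
    # Stream the file so large product binaries do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123abcd..."  # illustrative value a support policy / PSPACK would carry
actual = file_digest("/data/example-product-binary.pak")  # illustrative path
print("binary accepted" if actual == expected else "checksum mismatch - a newer PSPACK may be needed")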
- Check for upgrades returned with a value -6 Internal Error. Please make sure sfcbd is running.
A week back, there was a scenario where a check for upgrade was being performed on vRealize Suite Lifecycle Manager 8.10, to upgrade it to VMware Aria Suite Lifecycle 8.12, but this resulted in a failure as shown in the screenshot below.
Exception: Checking for available updates, this process can take a few minutes... Check for updates returned with a value -6 Internal error. Please make sure sfcbd is running.
Why is this seen?
This message is usually seen when the upgrade is being triggered using a CD-ROM (ISO method) and the ISO is not mapped appropriately or not connected properly.
To resolve it, ensure the CD-ROM is connected properly. When an ISO is mapped, a reconfigure task is created and it should complete without an exception. If this task keeps failing, power off the vRSLCM / Suite Lifecycle appliance and then map the ISO. A small check you can run from the appliance before retrying is sketched below.
Hope this helps !!
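The check sketched here is the same idea used later in this series for the extensibility proxy: ask blkid whether the CD-ROM device actually carries an ISO before kicking off the check for upgrade again. The device name /dev/sr0 is an assumption, and this is an illustrative helper, not something vRSLCM ships.

import subprocess

def iso_mapped(device="/dev/sr0"):
    # blkid exits non-zero when the device has no readable media.
    try:
        output = subprocess.check_output(["blkid", device], universal_newlines=True)
    except subprocess.CalledProcessError:
        return False
    return "iso9660" in output

print("upgrade ISO detected" if iso_mapped() else "no ISO detected - reconnect the CD-ROM and retry")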
- VMware Aria Cloud Extensibility Proxy | Deploy and Upgrade | Runbook |
The cloud extensibility proxy is a virtual appliance (VA) used in the configuration of the on-premises extensibility action integrations and VMware Aria Automation Orchestrator 8.x integrations in Automation Assembler.
Download the PDF version of this blog with all of the screenshots of the snippets below.
Fetching the API token and creating a locker object
As a prerequisite, we need to fetch the refresh token and store it in our VASL locker.
Logged into my organization, click on My Account to create a new refresh token. Click on Generate a New Token to get a new one. Enter the token name and the roles you would like to assign; in this case I've assigned everything as it's a demo. That's your token. Copy the token and store it in a safe location: n*************************R
Now come back to VASL and create a locker entry for the refresh token. Enter the refresh token and create a locker object. Remember, this locker object for the refresh token will be used during deployment of the cloud extensibility proxy.
Deployment of Cloud Extensibility Proxy
Log into VASL and then click on VMware Aria Cloud. Click on "Create a Cloud Proxy". Enter details like the environment name and the password to be used during deployment. Select the Cloud Extensibility Proxy tile. Accept the EULA. Enter the infrastructure details. Enter the network details. I did create a DNS entry for the Cloud Extensibility Proxy; we will be using that FQDN and IP during deployment.
In this pane, the proxy name is an identifier for how you want your cloud extensibility proxy to be referred to. Select the product password, which is the one used to log into the appliance. The other one is the refresh token we created before.
Click Next to run a precheck. All prechecks are successful. Verify and submit the deployment request.
The various phases are as below:
1. Validate Environment Data 2. Infra Pre-Validation 3. Cloud Proxy Validations 4. Deploy Cloud Extensibility Proxy Start Start Cloud Proxy Generic Get Organization id for Cloud Proxy Deployment Get Binary for Cloud Proxy Deployment Download Binary for Cloud Proxy Deployment Get OTK for Cloud Proxy Deployment Validate Deployment Inputs Deploy OVF Power On Virtual Machine Check Guest Tools Status Check Hostname/IP status Final 5. Update Environment Details 6.
Schedule Notifications Deployment is now complete Some important log snippets from VASL ***** Binary Location Task ***** 2023-06-23 03:39:14.655 INFO [pool-3-thread-27] c.v.v.l.c.c.t.CloudProxyGetBinaryLocationTask - -- Start cloud proxy get binary location task 2023-06-23 03:39:14.662 INFO [pool-3-thread-27] c.v.v.l.c.c.t.CloudProxyGetBinaryLocationTask - -- Getting binary location for abxcloudproxy 2023-06-23 03:39:14.662 INFO [pool-3-thread-27] c.v.v.l.c.d.r.u.CloudProxyServerRestUtil - -- Request URL : https://api.mgmt.cloud.vmware.com/api/artifact-provider?artifact=cexp-data-collector 2023-06-23 03:39:14.667 INFO [pool-3-thread-27] c.v.v.l.c.d.r.c.CloudProxyRestClient - -- https://api.mgmt.cloud.vmware.com/api/artifact-provider?artifact=cexp-data-collector 2023-06-23 03:39:14.668 INFO [pool-3-thread-27] c.v.v.l.c.d.r.c.CloudProxyRestClient - -- Connecting without Proxy 2023-06-23 03:39:15.674 INFO [pool-3-thread-27] c.v.v.l.c.d.r.c.CloudProxyRestClient - -- API Response Status : 200 Response Message : {"artifact":"cexp-data-collector","providerUrl":"https://vro-appliance-distrib.s3.amazonaws.com/VMware-Extensibility-Appliance-SAAS.ova","latestOvaVersion":"7.6"} 2023-06-23 03:39:15.679 INFO [pool-3-thread-27] c.v.v.l.c.c.t.CloudProxyGetBinaryLocationTask - -- Binary location retrieved: https://vro-appliance-distrib.s3.amazonaws.com/VMware-Extensibility-Appliance-SAAS.ova 2023-06-23 03:39:15.680 INFO [pool-3-thread-27] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnCloudProxyGetBinaryLocationSuccess ***** Binary Download Task ***** 2023-06-23 03:42:35.408 INFO [pool-3-thread-10] c.v.v.l.u.DownloadHelper - -- Finished writing input stream to the file '/data/cloudproxy/abxcloudproxy/VMware-Extensibility-Appliance-SAAS.ova'. 2023-06-23 03:42:35.411 INFO [pool-3-thread-10] c.v.v.l.c.d.r.u.CloudProxyServerRestUtil - -- file download successful 2023-06-23 03:42:40.411 INFO [pool-3-thread-10] c.v.v.l.c.c.t.CloudProxyDownloadBinaryTask - -- Cloud proxy binary downloaded to location /data/cloudproxy/abxcloudproxy/VMware-Extensibility-Appliance-SAAS.ova 2023-06-23 03:42:40.413 INFO [pool-3-thread-10] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnCloudProxyDownloadBinarySuccess ***** Fetching OTK or One Time Key ***** 2023-06-23 03:42:41.097 INFO [pool-3-thread-38] c.v.v.l.c.c.t.CloudProxyGetOneTimeKeyTask - -- Start cloud proxy get OTK YXYXYXYX 2023-06-23 03:42:41.104 INFO [pool-3-thread-38] c.v.v.l.c.d.r.u.FlexCommonUtils - -- cspUrl : https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize 2023-06-23 03:42:41.573 INFO [pool-3-thread-38] c.v.v.l.u.RestHelper - -- Status code : 200 2023-06-23 03:42:41.575 INFO [pool-3-thread-38] c.v.v.l.c.d.r.u.FlexCommonUtils - -- {"statusCode":200,"responseMessage":null,"outputData":"{\"id_token\":\"eyJhbGciOiJSUzI1**********N2TqgHQDhAugCe2cQ\",\"token_type\":\"bearer\",\"expires_in\":1799,\"scope\":\"ALL_PERMISSIONS customer_number openid group_ids group_names\",\"access_token\":\"eyJhbGciOiJSUzI1NiI****YBTSn-FuoKRbUg\",\"refresh_token\":\"n-aeMxqH9M5****mSRYCoWBp4vgllvfsR\"}","token":null,"contentLength":-1,"allHeaders":null} 2023-06-23 03:42:42.834 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnCloudProxyGetOtkSuccess ***** Deploy OVF Task ***** 2023-06-23 03:42:44.164 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.DeployOvfTask - -- Starting :: DeployOvfTask 2023-06-23 03:42:44.414 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- Getting OvfManager from VC ServiceContent 2023-06-23 03:42:44.419 INFO 
[pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- Cluster provided is: Singapore#CAP 2023-06-23 03:42:44.439 INFO [pool-3-thread-7] c.v.v.l.d.v.v.u.CoreUtility - -- Found the cluster in the Datacenter 2023-06-23 03:42:44.448 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- Setting host xx.xx.xx.xx for import spec creation 2023-06-23 03:42:44.449 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- Creating Import result 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.DeployOvfTask - -- {"name":null,"description":null,"vcServerUrl":"https://vc.cap.org/sdk","vcServerUsername":"vrsvc@cap.org","vcServerPassword":"JXJXJXJX"} 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- Importing OVf 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- ########################################################## 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- OvfFileItem 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- chunkSize: null 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- create: false 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- deviceId: /cexproxy/VirtualLsiLogicController0:0 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- path: Prelude_Extensibility_VA-8.12.1.31024-21715470-system.vmdk 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- size: 1783000064 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- ########################################################## 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- ########################################################## 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- OvfFileItem 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- chunkSize: null 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- create: false 2023-06-23 03:42:44.767 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- deviceId: /cexproxy/VirtualLsiLogicController0:1 2023-06-23 03:42:44.768 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- path: Prelude_Extensibility_VA-8.12.1.31024-21715470-data.vmdk 2023-06-23 03:42:44.768 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- size: 17189376 2023-06-23 03:42:44.768 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- ########################################################## 2023-06-23 03:42:44.768 INFO [pool-3-thread-7] c.v.v.l.d.v.d.i.BaseOvfDeploy - -- ########################################################## 2023-06-23 03:42:45.535 INFO [Thread-1642328] c.v.v.l.d.v.d.i.OvfDeployLocal - -- completed percent uploaded------------------------------------------------------------------------------------------>0 2023-06-23 03:45:45.595 INFO [Thread-1642328] c.v.v.l.d.v.d.i.OvfDeployLocal - -- completed percent uploaded------------------------------------------------------------------------------------------>86 2023-06-23 03:46:10.999 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.DeployOvfTask - -- OVF deployment completed successfully. 
Will be proceeding with post deployment process
2023-06-23 03:46:11.000 INFO [Thread-1642328] c.v.v.l.d.v.d.i.OvfDeployLocal - -- ********************** Thread interrupted *******************
2023-06-23 03:46:11.007 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.DeployOvfTask - -- Found VM : cexproxy .proceeding further
2023-06-23 03:46:11.008 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.DeployOvfTask - -- upgrade_vm ?null
2023-06-23 03:46:11.008 INFO [pool-3-thread-7] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnSuccessfulOvfDeployment
***** CEXP deployment completed *****
2023-06-23 03:47:08.356 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- Updating the Environment request status to COMPLETED for environment : CEXP with request ID : 232ad7e7-7e5a-4946-9742-ac38bc51db95 and request type : VALIDATE_AND_CREATE_ENVIRONMENT.
Going back to VASL, I do see my CEXP deployed. If we go to CAS and then click on VMware Aria Automation, we can see our CEXP available for consumption.
List of Pods which run inside Cloud Extensibility Proxy
root@cexproxy [ /services-logs/prelude ]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress ingress-ctl-traefik-6b9fc769fc-4kxs4 1/1 Running 0 15m
kube-system command-executor-p8rgl 1/1 Running 0 18m
kube-system coredns-jpb4w 1/1 Running 0 18m
kube-system health-reporting-app-6q8r6 1/1 Running 0 18m
kube-system kube-apiserver-cexproxy.cap.org 1/1 Running 0 18m
kube-system kube-controller-manager-cexproxy.cap.org 1/1 Running 0 18m
kube-system kube-flannel-ds-tplcv 1/1 Running 0 18m
kube-system kube-node-monitor-mwc56 1/1 Running 0 18m
kube-system kube-proxy-4lkwc 1/1 Running 0 18m
kube-system kube-scheduler-cexproxy.cap.org 1/1 Running 0 18m
kube-system kubelet-rubber-stamp-r47x8 1/1 Running 0 18m
kube-system metrics-server-5psc6 1/1 Running 0 18m
kube-system network-health-monitor-25cgn 1/1 Running 0 18m
kube-system predictable-pod-scheduler-thxd8 1/1 Running 0 18m
kube-system prelude-network-monitor-cron-1687492980-szm8w 0/1 Completed 0 3m48s
kube-system prelude-network-monitor-cron-1687493160-lgtsh 0/1 Completed 0 47s
kube-system state-enforcement-cron-1687492920-k6v4r 0/1 Completed 0 4m48s
kube-system state-enforcement-cron-1687493040-vjvn4 0/1 Completed 0 2m48s
kube-system state-enforcement-cron-1687493160-zvv9z 0/1 Completed 0 47s
kube-system update-etc-hosts-gzlwr 1/1 Running 0 18m
prelude orchestration-ui-app-67bd95c5c9-4f2mm 1/1 Running 0 4m5s
prelude postgres-0 1/1 Running 0 15m
prelude proxy-service-5d7564fbf8-5ktzk 1/1 Running 0 15m
prelude vco-app-57dd775776-lvscf 3/3 Running 0 8m8s
Upgrading Cloud Extensibility Proxy
Before performing an upgrade, I verified the version:
root@cexproxy [ /services-logs/prelude ]# vracli version
Version - 8.12.1.31024 Build 21715470
Description - Aria Automation Extensibility Appliance 05/2023
According to the patch portal, this is the latest one.
Download the iso and upload it to one of the datastores on vCenter where this Cloud Extensibility Proxy resides.
Took a snapshot before making any changes.
Map the downloaded iso to the appliance.
Now that the iso is mapped, let's find out the id of the CD-ROM using the command blkid:
root@cexproxy [ ~ ]# blkid
/dev/sr0: UUID="2023-06-19-02-30-47-00" LABEL="CDROM" TYPE="iso9660"
/dev/sda1: PARTUUID="b0fed661-3a90-4b48-a0db-e3f74915c76f"
/dev/sda2: UUID="6050b84c-a9bb-4387-baef-d43f2b7b9c75" TYPE="ext3" PARTUUID="e907703a-dba2-44ea-b22c-c63a2ccf6a47"
/dev/sda3: UUID="9fbb9c60-bbaf-4491-bd91-7caa2e44c852" TYPE="swap" PARTUUID="a3b288b9-1a4f-433c-9a1d-fd85afabddff"
/dev/sda4: UUID="6b8a710d-390f-483f-9d62-db66a2ce6429" TYPE="ext4" PARTUUID="16886689-6979-4fab-b2b8-ddb1488e11d1"
/dev/sdc: UUID="uIoR7Q-0Hr5-HjWx-6iOq-cZN7-3VLw-ahJAPH" TYPE="LVM2_member"
/dev/sdb: UUID="rPeD9M-lw0t-dIMu-socb-FxDj-Q3J3-EinsF4" TYPE="LVM2_member"
/dev/mapper/data_vg-data: UUID="4aa2662d-c4f0-41fd-b004-de88e244969e" TYPE="ext4"
/dev/sdd: UUID="vqX6KJ-wCi4-Fc4T-A0cP-4lCP-0tmY-0eoa0j" TYPE="LVM2_member"
/dev/mapper/logs_vg-log: UUID="9c1b3c0a-68fb-4348-8ee1-9948449c7c6c" TYPE="ext4"
/dev/mapper/home_vg-home: UUID="7617d195-007f-49c7-99fc-d01751e54693" TYPE="ext4"
Mount the CD-ROM drive, based on the id shown above. In our case it is /dev/sr0:
mount /dev/sr0 /mnt/cdrom
root@cexproxy [ ~ ]# mount /dev/sr0 /mnt/cdrom
mount: /mnt/cdrom: WARNING: device write-protected, mounted read-only.
root@cexproxy [ ~ ]#
Back up your cloud extensibility proxy by taking a virtual machine (VM) snapshot. We've already done that.
To initiate the upgrade, run the vracli upgrade exec -y --repo cdrom:// command:
vracli upgrade exec -y --repo cdrom://
root@cexproxy [ ~ ]# vracli upgrade exec -y --repo cdrom://
Loading update bootstrap. Update bootstrap loaded successfully.
Saving configuration parameters
Running procedures in background...
.......................................
Upgrade procedure started in background. During upgrade, downtime of the services and restarts of the VAs are expected. Use 'vracli upgrade status --follow' to monitor the progress.
...
Preparing for upgrade .....................................
...............................................................
Running health check before upgrade for nodes and pods. Health check before upgrade for nodes and pods passed successfully.
...............................................................
Configuring SSH channels. SSH channels configured successfully.
...............................................................
Checking version of current platform and services. Version check of current platform and services passed successfully.
...............................................................
Running infrastructure health check before upgrade. Infrastructure health check before upgrade passed successfully.
...............................................................
Configuring upgrade repository. Upgrade repository configuration completed successfully.
...............................................................
Saving restore point for artifacts. This might take more than an hour. Restore point for artifacts saved successfully.
...............................................................
Shutting down services. Services shut down successfully.
...............................................................
Saving system configurations. System configuration saved successfully.
...............................................................
Saving restore point for data. This might take more than an hour. Restore point for data saved successfully.
...
Upgrade preparation completed successfully.
...
...
Executing upgrade .........................................
...............................................................
Running additional health check before upgrade for nodes. Health check before upgrade for nodes passed successfully.
...............................................................
Starting upgrade monitor. Upgrade monitor started successfully.
...............................................................
Deactivating cluster of appliance nodes. This might take several minutes.
Cluster deactivation of appliance nodes skipped.
...............................................................
Configuring upgrade. Upgrade is in progress. This might take more than an hour to complete and the system might be rebooted several times.
* * *
When the VAMI upgrade exits, the session will be closed; once the appliance reboots, services deployment will start. After a while, once services are deployed, the upgrade is completed.
It all starts with the CD-ROM mapping and executing the upgrade command. The first thing which occurs is mapping of the repository:
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Downloading manifest ...
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Manifest downloaded successfully.
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Validating manifest ...
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Manifest signature validated successfully.
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Repository product ID matched successfully.
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Manifest validated successfully.
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Searching for update bootstrap RPM ...
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Update bootstrap RPM found.
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Update bootstrap RPM downloaded successfully.
[INFO][2023-06-23 05:39:47][cexproxy.cap.org] Applying the bootstrap system ...
Then the next sequence of events is triggered:
Bootstrap is applied
Version is retrieved
Installables list built successfully
Upgrade plan is created
Node status is verified
Infrastructure health check is performed
Update repository set successfully
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Bootstrap system applied successfully.
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Retrieving product version...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Product version retrieved successfully.
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Searching for bootstrap package for product version 8.12.1.31024 in /var/vmware/prelude/upgrade/bootstrap/patch-template ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Searching for bootstrap package for product version 8.12.1.31024 in /var/vmware/prelude/upgrade/bootstrap/rel830 ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Searching for bootstrap package for product version 8.12.1.31024 in /var/vmware/prelude/upgrade/bootstrap/rel840 ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Searching for bootstrap package for product version 8.12.1.31024 in /var/vmware/prelude/upgrade/bootstrap/rel842 ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Searching for bootstrap package for product version 8.12.1.31024 in /var/vmware/prelude/upgrade/bootstrap/rel881 ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Searching for bootstrap package for product version 8.12.1.31024 in /var/vmware/prelude/upgrade/bootstrap/rel882 ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Selecting installables ...
[INFO][2023-06-23 05:39:50][cexproxy.cap.org] Building installables list ...
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Installables list built successfully.
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Installables selected successfully.
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Creating upgrade plan ...
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Creating upgrade configaration from installables ...
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Aggregating upgrade configuration of all installables ...
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Upgrade configuration of all installables aggregated successfully.
[INFO][2023-06-23 05:39:51][cexproxy.cap.org] Computing version dependent upgrade configuration of installables ... [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Version dependent upgrade configuration of installables computed sucessfully. [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Reducing upgrade configuration of installables ... [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Upgrade configuration of installables reduced successfully. [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Adjusting upgrade configuration of installables ... [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Upgrade configuration of installables adjusted successfully. [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Determining the effective upgrade configuration of installables ... [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Effective upgrade configuration of installables determined successfully. [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Upgrade configuration from installables created successfully. [INFO][2023-06-23 05:39:51][cexproxy.cap.org] Upgrade plan created successfully. [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Retrieving nodes status... [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Nodes status retrieval succeeded. [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Processing nodes data... [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Nodes data processing succeeded. [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Verifying nodes... [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Nodes verification completed successfully. [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Retrieving services status... [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Services status retrieval succeeded. [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Processing services data... [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Services data processing succeeded. [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Verifying services... [INFO][2023-06-23 05:39:53][cexproxy.cap.org] Services verification completed successfully. [INFO][2023-06-23 05:39:54][cexproxy.cap.org] Creating SSH key pairs [INFO][2023-06-23 05:39:55][cexproxy.cap.org] SSH key pairs created successfully. [INFO][2023-06-23 05:39:55][cexproxy.cap.org] Configuring SSH on nodes. [INFO][2023-06-23 05:39:56][cexproxy.cap.org] Service sshd status: active/enabled [INFO][2023-06-23 05:39:56][cexproxy.cap.org] SSH port 22 is open. [INFO][2023-06-23 05:39:56][cexproxy.cap.org] SSH configurated on nodes successfully. [INFO][2023-06-23 05:39:56][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/ssh-noop.sh at host: cexproxy.cap.org [INFO][2023-06-23 05:40:05][cexproxy.cap.org] Remote command succeeded: /opt/scripts/upgrade/ssh-noop.sh at host: cexproxy.cap.org Pseudo-terminal will not be allocated because stdin is not a terminal. Warning: Permanently added 'cexproxy.cap.org,10.109.45.58' (ED25519) to the list of known hosts. Welcome to Aria Automation Extensibility Appliance 05/2023 [INFO][2023-06-23 05:40:05][cexproxy.cap.org] Verifying remote nodes are able to connect to one another and to this node. [INFO][2023-06-23 05:40:06][cexproxy.cap.org] Verification that nodes are able to connect to one another and to this node succeeded. [INFO][2023-06-23 05:40:07][cexproxy.cap.org] Retriving product versions on all nodes. [INFO][2023-06-23 05:40:07][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/vami-save-vers.sh prep at host: cexproxy.cap.org [INFO][2023-06-23 05:40:17][cexproxy.cap.org] Retrieving product version. 
Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023
[INFO][2023-06-23 05:40:18][cexproxy.cap.org] Product version retrieved successfully.
[INFO][2023-06-23 05:40:18][cexproxy.cap.org] Remote command succeeded: /opt/scripts/upgrade/vami-save-vers.sh prep at host: cexproxy.cap.org
[INFO][2023-06-23 05:40:18][cexproxy.cap.org] Product versions successfully retrieved from all nodes.
[INFO][2023-06-23 05:40:18][cexproxy.cap.org] Checking that product versions match across all nodes.
[INFO][2023-06-23 05:40:18][cexproxy.cap.org] Product versions match across all nodes verified.
[INFO][2023-06-23 05:40:19][cexproxy.cap.org] Checking appliance nodes infrastructure health
[INFO][2023-06-23 05:40:19][cexproxy.cap.org] Running remote command: /opt/health/run-once.sh health at host: cexproxy.cap.org
[INFO][2023-06-23 05:40:42][cexproxy.cap.org] Remote command succeeded: /opt/health/run-once.sh health at host: cexproxy.cap.org
Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023
[INFO][2023-06-23 05:40:42][cexproxy.cap.org] Infrastructure health check passed on all appliance nodes.
[INFO][2023-06-23 05:40:43][cexproxy.cap.org] Setting up update repository on all nodes.
[INFO][2023-06-23 05:40:43][cexproxy.cap.org] Running remote command: vamicli update --repo cdrom:// at host: cexproxy.cap.org
[INFO][2023-06-23 05:40:53][cexproxy.cap.org] Remote command succeeded: vamicli update --repo cdrom:// at host: cexproxy.cap.org
Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023
[INFO][2023-06-23 05:40:53][cexproxy.cap.org] Update repository set successfully on all nodes.
[INFO][2023-06-23 05:40:53][cexproxy.cap.org] Verifying the access to update repository on all nodes.
[INFO][2023-06-23 05:40:53][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/vami-config-repo.sh 'local' at host: cexproxy.cap.org
[INFO][2023-06-23 05:41:02][cexproxy.cap.org] Checking FIPS configuration.
Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023
Once the update repository is checked, the next action shifts to VAMI again:
23/06/2023 05:40:53 [INFO] Setting local repository address. url=cdrom://, username=, password=
23/06/2023 05:41:03 [INFO] Checking available updates. jobid=1
23/06/2023 05:41:03 [INFO] Using update repository found on CDROM device: /dev/sr0
23/06/2023 05:41:03 [INFO] Downloading latest manifest. jobId=1, url=file:///tmp/update_agent_cdrom_ugPpOU/update, username=, password=
23/06/2023 05:41:04 [INFO] Signature script output: Verified OK
23/06/2023 05:41:04 [INFO] Signature script output:
23/06/2023 05:41:04 [INFO] Manifest signature verification passed
23/06/2023 05:41:04 [INFO] New manifest. installed-version=8.12.1.31024, downloaded-version=8.12.2.31329
Restore points are now saved, services are stopped, and then preparation for the VAMI upgrade is initiated:
[INFO][2023-06-23 05:41:03][cexproxy.cap.org] Verifying the access to the update repository.
[INFO][2023-06-23 05:41:04][cexproxy.cap.org] Access to the update repository verified.
[INFO][2023-06-23 05:41:04][cexproxy.cap.org] Verifying availability of new product version in the repository.
[INFO][2023-06-23 05:41:04][cexproxy.cap.org] Availability of new version in the repository verified.
[INFO][2023-06-23 05:41:04][cexproxy.cap.org] Remote command succeeded: /opt/scripts/upgrade/vami-config-repo.sh 'local' at host: cexproxy.cap.org [INFO][2023-06-23 05:41:04][cexproxy.cap.org] Update repository is verified successfully on all nodes. [INFO][2023-06-23 05:41:05][cexproxy.cap.org] Saving restore points on all nodes. [INFO][2023-06-23 05:41:05][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/rstp-save.sh 'local' immutable-artifact artifacts-lastworking at host: cexproxy.cap.org [INFO][2023-06-23 05:41:20][cexproxy.cap.org] Saving local restore points. Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023 [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Saving restore point for /opt/charts . [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Restore point for /opt/charts saved successfully. [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Verifying source and destination checksums for restore point /opt/charts [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Source and destination checksums for restore point /opt/charts matched successfully. [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Selecting docker images to save in local restore point. [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Docker images successfully selected to save in local restore point. [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Saving docker images in local restore point. [INFO][2023-06-23 05:41:21][cexproxy.cap.org] Saving docker image coredns_private latest e9cdb7735889 [INFO][2023-06-23 05:41:22][cexproxy.cap.org] Saving docker image db-image14_private 8.12.1.31024 0390266e909c [INFO][2023-06-23 05:41:25][cexproxy.cap.org] Saving docker image flannel_private latest 243c95edf4b0 [INFO][2023-06-23 05:41:26][cexproxy.cap.org] Saving docker image health_private latest 762744863022 [INFO][2023-06-23 05:41:27][cexproxy.cap.org] Saving docker image metrics-server_private latest b5d05b47245b [INFO][2023-06-23 05:41:28][cexproxy.cap.org] Saving docker image network-health-monitor_private latest b6a651793b11 [INFO][2023-06-23 05:41:28][cexproxy.cap.org] Saving docker image nginx-httpd_private latest 06816bc1a534 [INFO][2023-06-23 05:41:29][cexproxy.cap.org] Saving docker image scripting-runtime_private latest c2de02927d4c [INFO][2023-06-23 05:41:32][cexproxy.cap.org] Saving docker image squid-container_private 8.12.1.30661 d7cc0c97b8a8 [INFO][2023-06-23 05:41:34][cexproxy.cap.org] Saving docker image squid-container_private latest d7cc0c97b8a8 [INFO][2023-06-23 05:41:35][cexproxy.cap.org] Saving docker image traefik-ingress-controller_private 8.12.1.31024 0d300bea6394 [INFO][2023-06-23 05:41:36][cexproxy.cap.org] Saving docker image wavefront-proxy_private 8.12.1.31024 5cc0f36861be [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Docker images saved successully in local restore point. [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Saving restore point images catalog [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Restore point images catalog saved successfully. [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Saving restore point for /opt/vmware/prelude/metadata . [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Restore point for /opt/vmware/prelude/metadata saved successfully. 
[INFO][2023-06-23 05:41:45][cexproxy.cap.org] Verifying source and destination checksums for restore point /opt/vmware/prelude/metadata [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Source and destination checksums for restore point /opt/vmware/prelude/metadata matched successfully. [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Local restore points saved successfully. [INFO][2023-06-23 05:41:45][cexproxy.cap.org] Restore points on all nodes saved successfully. [INFO][2023-06-23 05:41:46][cexproxy.cap.org] Shutting down application services [INFO][2023-06-23 05:43:52][cexproxy.cap.org] Application services shut down successfully. [INFO][2023-06-23 05:43:52][cexproxy.cap.org] Shutting down infrastructure services [INFO][2023-06-23 05:46:05][cexproxy.cap.org] Infrastructure services shut down successfully. [INFO][2023-06-23 05:46:06][cexproxy.cap.org] Saving restore points on all nodes. [INFO][2023-06-23 05:46:06][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/rstp-save.sh 'local' sys-config sys-config at host: cexproxy.cap.org [INFO][2023-06-23 05:46:21][cexproxy.cap.org] Saving local restore points. Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023 [INFO][2023-06-23 05:46:21][cexproxy.cap.org] Activating LCC maintenance mode. [INFO][2023-06-23 05:46:22][cexproxy.cap.org] LCC maintenance mode activated successfully. [INFO][2023-06-23 05:46:22][cexproxy.cap.org] Saving local restore point for Kubernetes object prelude-vaconfig. [INFO][2023-06-23 05:46:22][cexproxy.cap.org] Local restore point for Kubernetes object prelude-vaconfig saved successfully. [INFO][2023-06-23 05:46:22][cexproxy.cap.org] Local restore points saved successfully. [INFO][2023-06-23 05:46:22][cexproxy.cap.org] Restore points on all nodes saved successfully. [INFO][2023-06-23 05:46:23][cexproxy.cap.org] Saving restore points on all nodes. [INFO][2023-06-23 05:46:23][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/rstp-save.sh 'local' live-data live-data at host: cexproxy.cap.org [INFO][2023-06-23 05:46:38][cexproxy.cap.org] Saving local restore points. Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023 [INFO][2023-06-23 05:46:38][cexproxy.cap.org] Saving restore point for /data/db/live . [INFO][2023-06-23 05:46:38][cexproxy.cap.org] Restore point for /data/db/live saved successfully. [INFO][2023-06-23 05:46:38][cexproxy.cap.org] Verifying source and destination checksums for restore point /data/db/live [INFO][2023-06-23 05:46:45][cexproxy.cap.org] Source and destination checksums for restore point /data/db/live matched successfully. [INFO][2023-06-23 05:46:45][cexproxy.cap.org] Saving restore point for /data/vco . [INFO][2023-06-23 05:46:46][cexproxy.cap.org] Restore point for /data/vco saved successfully. [INFO][2023-06-23 05:46:46][cexproxy.cap.org] Verifying source and destination checksums for restore point /data/vco [INFO][2023-06-23 05:47:03][cexproxy.cap.org] Source and destination checksums for restore point /data/vco matched successfully. [INFO][2023-06-23 05:47:03][cexproxy.cap.org] Local restore points saved successfully. [INFO][2023-06-23 05:47:03][cexproxy.cap.org] Restore points on all nodes saved successfully. [INFO][2023-06-23 05:47:06][cexproxy.cap.org] Retrieving nodes status... [INFO][2023-06-23 05:47:06][cexproxy.cap.org] Nodes status retrieval succeeded. 
[INFO][2023-06-23 05:47:06][cexproxy.cap.org] Processing nodes data... [INFO][2023-06-23 05:47:06][cexproxy.cap.org] Nodes data processing succeeded. [INFO][2023-06-23 05:47:06][cexproxy.cap.org] Verifying nodes... [INFO][2023-06-23 05:47:06][cexproxy.cap.org] Nodes verification completed successfully. [INFO][2023-06-23 05:47:07][cexproxy.cap.org] Activating local monitors on all nodes. [INFO][2023-06-23 05:47:07][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/mon-activate.sh at host: cexproxy.cap.org [INFO][2023-06-23 05:47:17][cexproxy.cap.org] Activating upgrade monitor on the node Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 05/2023 [INFO][2023-06-23 05:47:17][cexproxy.cap.org] Upgrade monitor activated successfully on the node. [INFO][2023-06-23 05:47:17][cexproxy.cap.org] Remote command succeeded: /opt/scripts/upgrade/mon-activate.sh at host: cexproxy.cap.org [INFO][2023-06-23 05:47:17][cexproxy.cap.org] Local monitors successfully activated on all nodes. As a next step , we need to go ahead and check vami logs again 23/06/2023 05:49:12 [INFO] Installing updates. instanceid=VMware:VMware_8.12.2.31329, jobid=2 23/06/2023 05:49:12 [INFO] Using update repository found on CDROM device: /dev/sr0 23/06/2023 05:49:12 [INFO] Installing update. instanceId=VMware:VMware_8.12.2.31329, jobId=2, url=file:///tmp/update_agent_cdrom_PPEywF/update, username=, password= 23/06/2023 05:49:12 [INFO] Downloading and installing update packages 23/06/2023 05:49:12 [INFO] Signature script output: Verified OK 23/06/2023 05:49:12 [INFO] Signature script output: 23/06/2023 05:49:12 [INFO] Manifest signature verification passed 23/06/2023 05:49:12 [INFO] Creating /opt/vmware/var/lib/vami/update/data/update_progress.json 23/06/2023 05:49:12 [INFO] Downloading the following packages for update version 8.12.2.31329 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: audit x86_64 (none) 2.8.5 25.ph3 /package-pool/audit-2.8.5-25.ph3.x86_64.rpm rpm 452607 2fdf95616439cdd2064887aefe2fe26d1cc0dd6e 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: bootstrap-extensibility noarch (none) 8.12.2.31329 1 /package-pool/bootstrap-extensibility-8.12.2.31329-1.noarch.rpm rpm 361379 be75189afedd82a3dbab10c491dee5b7971ef481 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: crd-tools noarch (none) 8.12.2.31329 1 /package-pool/crd-tools-8.12.2.31329-1.noarch.rpm rpm 39246 8134056bbe1966a486b33bd5e1e09cb3c215159b 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: curl x86_64 (none) 8.1.1 1.ph3 /package-pool/curl-8.1.1-1.ph3.x86_64.rpm rpm 166073 e0c4eb1c445143f439c3e7d9ce69a629645f4881 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: curl-libs x86_64 (none) 8.1.1 1.ph3 /package-pool/curl-libs-8.1.1-1.ph3.x86_64.rpm rpm 340471 6427ba93d5b01517cfeb8d6a8b5b2b08f06fc52d 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: libuv x86_64 (none) 1.34.2 3.ph3 /package-pool/libuv-1.34.2-3.ph3.x86_64.rpm rpm 98392 76e2bc02b3270f9dc5ee79a68efc8393b1f91963 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: linux x86_64 (none) 4.19.283 2.ph3 /package-pool/linux-4.19.283-2.ph3.x86_64.rpm rpm 23920750 2f3e65df8b679f3c73d8d03ce606d5e43861c420 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: linux-hmacgen x86_64 (none) 4.19.283 2.ph3 /package-pool/linux-hmacgen-4.19.283-2.ph3.x86_64.rpm rpm 49845 a3d2880a685c75c6cf9e40dccdc4b417cade0a56 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: nss x86_64 (none) 3.44 10.ph3 /package-pool/nss-3.44-10.ph3.x86_64.rpm 
rpm 938828 6ff75c42717feba40d922a3a9f8ddeabc7698915 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: nss-libs x86_64 (none) 3.44 10.ph3 /package-pool/nss-libs-3.44-10.ph3.x86_64.rpm rpm 930098 b31032c37de21175e820119bbee383d012326f1a 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: open-vm-tools x86_64 (none) 12.2.0 2.ph3 /package-pool/open-vm-tools-12.2.0-2.ph3.x86_64.rpm rpm 1195114 941a315c53f5ac5ffcc9ff8bef061c18303ac629 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: openssl x86_64 (none) 1.0.2zh 1.ph3 /package-pool/openssl-1.0.2zh-1.ph3.x86_64.rpm rpm 2192646 1b442868bbc53bf4410deb7155e13d6ae2291b46 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: openssl-c_rehash x86_64 (none) 1.0.2zh 1.ph3 /package-pool/openssl-c_rehash-1.0.2zh-1.ph3.x86_64.rpm rpm 13878 844d112946530181b5f2fa36dc0b5e56eb21a3ac 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-deploy-base noarch (none) 8.12.2.31329 1 /package-pool/prelude-deploy-base-8.12.2.31329-1.noarch.rpm rpm 47756 3d2480f04422d7b627f85b6fe9930a68d890b6b5 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-deploy-common noarch (none) 8.12.2.31329 1 /package-pool/prelude-deploy-common-8.12.2.31329-1.noarch.rpm rpm 7530916 14b37cdb99f903c4e547ee4f3913af89d587fcae 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-etcd noarch (none) 8.12.2.31329 1 /package-pool/prelude-etcd-8.12.2.31329-1.noarch.rpm rpm 8008981 3bf0fa797c739db3d84924770ed1a9154925cc56 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-flannel noarch (none) 8.12.2.31329 1 /package-pool/prelude-flannel-8.12.2.31329-1.noarch.rpm rpm 3751 ecf07a7dc3d753bb294d27fc9dc48de903baa30b 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-health noarch (none) 8.12.2.31329 1 /package-pool/prelude-health-8.12.2.31329-1.noarch.rpm rpm 14045 ec9b85e2ce01832ed8b4148503b7b59fde1dec86 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-k8s-base noarch (none) 8.12.2.31329 1 /package-pool/prelude-k8s-base-8.12.2.31329-1.noarch.rpm rpm 46022708 53c2cc91d3bc7fe6e7a39e470a0bf3d556cd6ad2 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-k8s-config noarch (none) 8.12.2.31329 1 /package-pool/prelude-k8s-config-8.12.2.31329-1.noarch.rpm rpm 6990 c91ce8c42eea084f80b0fdcae4e455bfb39c483f 23/06/2023 05:49:12 [INFO] package UPDATE VERSION: prelude-k8s-runtime noarch (none) 8.12.2.31329 1 /package-pool/prelude-k8s-runtime-8.12.2.31329-1.noarch.rpm rpm 11974880 9895104faa98e07ee4d0843d01d8b65375881f17 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-00415ba83db6c6520545c018d970a961f49e4678bd4f1f2c7ee52b164fb8e796 noarch (none) 1 1 /package-pool/prelude-layer-00415ba83db6c6520545c018d970a961f49e4678bd4f1f2c7ee52b164fb8e796-1-1.noarch.rpm rpm 5843224 840e78096de57d6e4a95db09a40b10a8df6b7bd1 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-0c56a2b135ef8e6f67d57ab24ab06dd400564839e4fe41e2785d7084c2f3a777 noarch (none) 1 1 /package-pool/prelude-layer-0c56a2b135ef8e6f67d57ab24ab06dd400564839e4fe41e2785d7084c2f3a777-1-1.noarch.rpm rpm 12696780 a96670c157192288e3e1c54d8a3bc2593a2f595e 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-1c1d194ce1da0096def3c9fff6f308fa5e36c4a4ed8b3722ec70adf9680770ec noarch (none) 1 1 /package-pool/prelude-layer-1c1d194ce1da0096def3c9fff6f308fa5e36c4a4ed8b3722ec70adf9680770ec-1-1.noarch.rpm rpm 77327172 3437fbb3b087cc6c5a2a65a1aef8ca59b979f399 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-1fd0ebdf6ea14ee709222ade830f1646b260c810b96e74d167af479fd3ca531d noarch (none) 1 1 
/package-pool/prelude-layer-1fd0ebdf6ea14ee709222ade830f1646b260c810b96e74d167af479fd3ca531d-1-1.noarch.rpm rpm 92048 2569c80f9ff105168d0f964c259e6192dbb1746f 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-2b4e3dcc6c3b62909cb5a005d4d1dbb5ad9d198687a9a7d17724b6810e32239f noarch (none) 1 1 /package-pool/prelude-layer-2b4e3dcc6c3b62909cb5a005d4d1dbb5ad9d198687a9a7d17724b6810e32239f-1-1.noarch.rpm rpm 37640636 114fd8a32e0f70258fcfffed80141b13f33a3d75 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-3e5b28fa5e2d2fe28a2cc09f24bbc0d71f8da23509a910eb926b9476ce0ed8d3 noarch (none) 1 1 /package-pool/prelude-layer-3e5b28fa5e2d2fe28a2cc09f24bbc0d71f8da23509a910eb926b9476ce0ed8d3-1-1.noarch.rpm rpm 21827248 688febc2b0691db5f402ef2e6720b67ce476dc0f 23/06/2023 05:49:12 [INFO] package NEW PACKAGE : prelude-layer-435ddda7eb71d8c7220891fc92a06bbad44c87b6f3c54d14ee1e4f61b2f5b95c noarch (none) 1 1 /package-pool/prelude-layer-435ddda7eb71d8c7220891fc92a06bbad44c87b6f3c54d14ee1e4f61b2f5b95c-1-1.noarch.rpm rpm 3444 cf1639bd88da2fe748f313035c80547a6e0a7cce * ** * * 23/06/2023 05:49:52 [INFO] Reboot required when installing linux package 23/06/2023 05:49:52 [INFO] Created reboot required file 23/06/2023 05:49:52 [INFO] Update 8.12.2.31329 manifest is set to be installed 23/06/2023 05:49:52 [INFO] Using update post-install script 23/06/2023 05:49:52 [INFO] Running updatecli to install updates. command={ mkdir -p /usr/share/update-notifier ; ln -s /opt/vmware/share/vami/vami_notify_reboot_required /usr/share/update-notifier/notify-reboot-required ; /opt/vmware/share/vami/update/updatecli '/opt/vmware/var/lib/vami/update/data/job/2' '8.12.1.31024' '8.12.2.31329' ; mv /opt/vmware/var/lib/vami/update/data/update_progress.json /opt/vmware/var/lib/vami/update/data/.update_progress.json ; } >> /opt/vmware/var/log/vami/updatecli.log 2>&1 & 23/06/2023 05:49:53 [INFO] Installation running in the background 23/06/2023 05:56:40 [INFO] Reloading new configuration. file=/opt/vmware/var/lib/vami/update/provider/provider-deploy.xml 23/06/2023 05:57:47 [INFO] Moving next manifest to installed manifest Now the actual uprgade has trigged and the focus would be on updatecli and upgrade-datetime.log 23/06/2023 05:49:52 [INFO] Starting Install 23/06/2023 05:49:52 [INFO] Update status: Starting Install 23/06/2023 05:49:52 [INFO] Update status: Running pre-install scripts 23/06/2023 05:49:52 [INFO] Running /opt/vmware/var/lib/vami/update/data/job/2/pre_install '8.12.1.31024' '8.12.2.31329' JOBPATH /opt/vmware/var/lib/vami/update/data/job/2 total 168 drwxr-xr-x 2 root root 4096 Jun 23 05:49 . drwxr-xr-x 3 root root 4096 Jun 23 05:49 .. 
-rwx------ 1 root root 125 Jun 23 05:49 manifest_update --w--w-r-- 1 root root 120169 Jun 23 05:49 manifest.xml -rwx------ 1 root root 833 Jun 23 05:49 post_install -rwx------ 1 root root 2759 Jun 23 05:49 pre_install -rwx------ 1 root root 10606 Jun 23 05:49 run_command -rw-r--r-- 1 root root 57 Jun 23 05:49 status -rwx------ 1 root root 10613 Jun 23 05:49 test_command + preupdate=/var/vmware/prelude/upgrade/bootstrap/preupdate.sh ** * dracut: *** Installing kernel module dependencies *** dracut: *** Installing kernel module dependencies done *** dracut: *** Resolving executable dependencies *** dracut: *** Resolving executable dependencies done*** dracut: *** Generating early-microcode cpio image *** dracut: *** Store current command line parameters *** dracut: Stored kernel commandline: dracut: root=UUID=6b8a710d-390f-483f-9d62-db66a2ce6429 rootfstype=ext4 rootflags=rw,relatime dracut: *** Creating image file '/boot/initrd.img-4.19.283-2.ph3' *** dracut: *** Creating initramfs image file '/boot/initrd.img-4.19.283-2.ph3' done *** 23/06/2023 05:50:49 [INFO] Update status: Done package installation 23/06/2023 05:50:49 [INFO] Update status: Running post-install scripts 23/06/2023 05:50:49 [INFO] Running /opt/vmware/var/lib/vami/update/data/job/2/post_install '8.12.1.31024' '8.12.2.31329' 0 2023-06-23 05:50:49Z Main bootstrap postupdate started* * * * 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-00-restore-ovfenv.sh starting... 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-00-restore-ovfenv.sh succeeded ========================= 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-aa-set-fips-cexp.sh starting... 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-aa-set-fips-cexp.sh succeeded ========================= 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-clear-services.sh starting... 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-clear-services.sh succeeded ========================= 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-configure-dns.sh starting... 2023-06-23 05:50:49Z /etc/bootstrap/postupdate.d/00-configure-dns.sh succeeded* * * *** + chmod a+x /etc/bootstrap/everyboot.d/zz-zz-resume-upgrade.sh + export -f vami_reboot_background + echo 'Scheduling a post-upgrade reboot...' Scheduling a post-upgrade reboot... + echo 'Post-upgrade reboot scheduled' Post-upgrade reboot scheduled + nohup bash -c vami_reboot_background + exit 0 2023-06-23 05:56:39Z /etc/bootstrap/postupdate.d/99-10-handover-prelude succeeded ========================= 2023-06-23 05:56:39Z /etc/bootstrap/postupdate.d/README is not executable. 2023-06-23 05:56:39Z Main bootstrap postupdate done 23/06/2023 05:56:39 [INFO] Update status: Done post-install scripts 23/06/2023 05:56:39 [INFO] Update status: Running VMware tools reconfiguration 23/06/2023 05:56:39 [INFO] Running /opt/vmware/share/vami/vami_reconfigure_tools vmware-toolbox-cmd is /bin/vmware-toolbox-cmd vmtoolsd wrapper not required on this VM with systemd. 23/06/2023 05:56:39 [INFO] Update status: Done VMware tools reconfiguration 23/06/2023 05:56:39 [INFO] Update status: Running finalizing installation 23/06/2023 05:56:39 [INFO] Running /opt/vmware/var/lib/vami/update/data/job/2/manifest_update 23/06/2023 05:56:39 [INFO] Update status: Done finalizing installation 23/06/2023 05:56:39 [INFO] Update status: Update completed successfully 23/06/2023 05:56:39 [INFO] Install Finished [INFO][2023-06-23 05:49:11][cexproxy.cap.org] Starting installation of updates. 
[INFO][2023-06-23 05:49:11][cexproxy.cap.org] Update installation started successfully. [INFO][2023-06-23 05:50:10][cexproxy.cap.org] VAMI checking upgrade progress [INFO][2023-06-23 05:50:10][cexproxy.cap.org] VAMI upgrade in progress. [INFO][2023-06-23 05:50:10][cexproxy.cap.org] VAMI upgrade still in progress. [INFO][2023-06-23 05:51:10][cexproxy.cap.org] VAMI checking upgrade progress [INFO][2023-06-23 05:51:10][cexproxy.cap.org] VAMI upgrade in progress. [INFO][2023-06-23 05:51:10][cexproxy.cap.org] VAMI upgrade still in progress. [INFO][2023-06-23 05:52:11][cexproxy.cap.org] VAMI checking upgrade progress [INFO][2023-06-23 05:52:11][cexproxy.cap.org] VAMI upgrade in progress. [INFO][2023-06-23 05:52:11][cexproxy.cap.org] VAMI upgrade still in progress. [INFO][2023-06-23 05:53:10][cexproxy.cap.org] VAMI checking upgrade progress [INFO][2023-06-23 05:53:10][cexproxy.cap.org] VAMI upgrade in progress. * * [INFO][2023-06-23 05:56:11][cexproxy.cap.org] VAMI upgrade still in progress. [INFO][2023-06-23 05:56:39][cexproxy.cap.org] Waiting for VAMI to exit ... [INFO][2023-06-23 05:57:09][cexproxy.cap.org] Verifying VAMI overall upgrade result ...* * * [INFO][2023-06-23 05:59:10][cexproxy.cap.org] VAMI upgrade still in progress. [INFO][2023-06-23 06:00:10][cexproxy.cap.org] VAMI checking upgrade progress [INFO][2023-06-23 06:00:10][cexproxy.cap.org] VAMI upgrade in progress. [INFO][2023-06-23 06:00:10][cexproxy.cap.org] VAMI upgrade still in progress. [INFO][2023-06-23 06:01:10][cexproxy.cap.org] VAMI checking upgrade progress [INFO][2023-06-23 06:01:10][cexproxy.cap.org] VAMI upgrade completed succesfully. [INFO][2023-06-23 06:01:10][cexproxy.cap.org] VAMI upgrade completed successfully. Once the VAMI upgrade is completed , there would be a reboot of the node and then services deployment would trigger [INFO][2023-06-23 06:02:10][cexproxy.cap.org] Saving artifacts metadata [INFO][2023-06-23 06:02:11][cexproxy.cap.org] Artifacts metadata saved. [INFO][2023-06-23 06:03:10][cexproxy.cap.org] Resolving post-upgrade controller. [INFO][2023-06-23 06:03:10][cexproxy.cap.org] This node is elected for post-upgrade controller. [INFO][2023-06-23 06:03:12][cexproxy.cap.org] Resolving post-upgrade nodes quorum... [INFO][2023-06-23 06:03:13][cexproxy.cap.org] Adding nodes to the cluster. [INFO][2023-06-23 06:03:13][cexproxy.cap.org] Cluster nodes added successfully. [INFO][2023-06-23 06:03:15][cexproxy.cap.org] Restoring all restore points. [INFO][2023-06-23 06:03:16][cexproxy.cap.org] Restoring Kubernetes object from local restore point: /data/restorepoint/sys-config/vaconfig [INFO][2023-06-23 06:03:16][cexproxy.cap.org] Restoration of Kubernetes object from local restore point /data/restorepoint/sys-config/vaconfig completed successfully [INFO][2023-06-23 06:03:16][cexproxy.cap.org] Triggering vaconfig schema update... [INFO][2023-06-23 06:03:17][cexproxy.cap.org] Vaconfig schema update completed [INFO][2023-06-23 06:03:16][cexproxy.cap.org] Looking for pending changes for vaconfig 2023-06-23 06:03:16,938 vra_crd.schema_migration.k8s_obj_rev_manager [DEBUG] Listing pending changes... ** * * 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_rev_manager [DEBUG] Looking into change candidate /etc/vmware-prelude/crd-schema-changelogs/vaconfig/8.8.2.100-introduce-fips-mode-property-in-vaconfig.sh... 
2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_schema_change [DEBUG] Computed revision for change /etc/vmware-prelude/crd-schema-changelogs/vaconfig/8.8.2.100-introduce-fips-mode-property-in-vaconfig.sh: 8.8.2.100 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_schema_rev [DEBUG] Compared 8.8.2.100 to 8.12.0.100: -1 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_rev_manager [DEBUG] Looking into change candidate /etc/vmware-prelude/crd-schema-changelogs/vaconfig/8.10.2.220-register_liagent_action.py... 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_schema_change [DEBUG] Computed revision for change /etc/vmware-prelude/crd-schema-changelogs/vaconfig/8.10.2.220-register_liagent_action.py: 8.10.2.220 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_schema_rev [DEBUG] Compared 8.10.2.220 to 8.12.0.100: -1 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_rev_manager [DEBUG] Looking into change candidate /etc/vmware-prelude/crd-schema-changelogs/vaconfig/8.12.0.100-migrate_capabilities_data_model.py... 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_schema_change [DEBUG] Computed revision for change /etc/vmware-prelude/crd-schema-changelogs/vaconfig/8.12.0.100-migrate_capabilities_data_model.py: 8.12.0.100 2023-06-23 06:03:16,978 vra_crd.schema_migration.k8s_obj_schema_rev [DEBUG] Compared 8.12.0.100 to 8.12.0.100: 0 2023-06-23 06:03:16,979 vra_crd.schema_migration.k8s_obj_rev_manager [DEBUG] Computed list of pending changes: [INFO][2023-06-23 06:03:17][cexproxy.cap.org] Vaconfig is up to date [INFO][2023-06-23 06:03:17][cexproxy.cap.org] Deactivating LCC maintenance mode. [INFO][2023-06-23 06:03:17][cexproxy.cap.org] LCC maintenance mode deactivated successfully. [INFO][2023-06-23 06:03:17][cexproxy.cap.org] Restoration from local restore points completed successfully. [INFO][2023-06-23 06:03:18][cexproxy.cap.org] Deployment of infrastructure and application services started. [INFO][2023-06-23 06:04:10][cexproxy.cap.org] Deactivating upgrade monitor on the node [INFO][2023-06-23 06:04:10][cexproxy.cap.org] Upgrade monitor deactivated successfully on the node. [INFO][2023-06-23 06:12:44][cexproxy.cap.org] Infrastructure and application services deployed successfully. [INFO][2023-06-23 06:12:45][cexproxy.cap.org] Retrieving services status... [INFO][2023-06-23 06:12:45][cexproxy.cap.org] Services status retrieval succeeded. [INFO][2023-06-23 06:12:45][cexproxy.cap.org] Processing services data... [INFO][2023-06-23 06:12:45][cexproxy.cap.org] Services data processing succeeded. [INFO][2023-06-23 06:12:45][cexproxy.cap.org] Verifying services... [INFO][2023-06-23 06:12:45][cexproxy.cap.org] Services verification completed successfully. [INFO][2023-06-23 06:12:46][cexproxy.cap.org] Cleaning up restore point on all nodes. [INFO][2023-06-23 06:12:46][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/rstp-clean.sh sys-config at host: cexproxy.cap.org [INFO][2023-06-23 06:13:01][cexproxy.cap.org] Cleaning up restore point /data/restorepoint/sys-config . Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 06/2023 [INFO][2023-06-23 06:13:01][cexproxy.cap.org] Restore point /data/restorepoint/sys-config cleaned up successfully. [INFO][2023-06-23 06:13:01][cexproxy.cap.org] Restore point cleaned up on all nodes. [INFO][2023-06-23 06:13:01][cexproxy.cap.org] Cleaning up restore point on all nodes. 
[INFO][2023-06-23 06:13:01][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/rstp-clean.sh artifacts-lastworking at host: cexproxy.cap.org [INFO][2023-06-23 06:13:15][cexproxy.cap.org] Cleaning up restore point /data/restorepoint/artifacts-lastworking . Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 06/2023 [INFO][2023-06-23 06:13:15][cexproxy.cap.org] Restore point /data/restorepoint/artifacts-lastworking cleaned up successfully. [INFO][2023-06-23 06:13:15][cexproxy.cap.org] Restore point cleaned up on all nodes. [INFO][2023-06-23 06:13:15][cexproxy.cap.org] Cleaning up restore point on all nodes. [INFO][2023-06-23 06:13:15][cexproxy.cap.org] Running remote command: /opt/scripts/upgrade/rstp-clean.sh live-data at host: cexproxy.cap.org [INFO][2023-06-23 06:13:30][cexproxy.cap.org] Cleaning up restore point /data/restorepoint/live-data . Pseudo-terminal will not be allocated because stdin is not a terminal. Welcome to Aria Automation Extensibility Appliance 06/2023 [INFO][2023-06-23 06:13:32][cexproxy.cap.org] Restore point /data/restorepoint/live-data cleaned up successfully. [INFO][2023-06-23 06:13:32][cexproxy.cap.org] Restore point cleaned up on all nodes. [INFO][2023-06-23 06:13:42][cexproxy.cap.org] Reverting SSH configuration on nodes. [INFO][2023-06-23 06:13:43][cexproxy.cap.org] Stopping sshd service.. [INFO][2023-06-23 06:13:44][cexproxy.cap.org] Service sshd stopped. [INFO][2023-06-23 06:13:44][cexproxy.cap.org] Starting sshd service.. [INFO][2023-06-23 06:13:44][cexproxy.cap.org] Service sshd started. [INFO][2023-06-23 06:13:44][cexproxy.cap.org] SSH configuration reverted on nodes successfully. [INFO][2023-06-23 06:14:01][cexproxy.cap.org] Cleaning up upgrade runtime state. [INFO][2023-06-23 06:14:11][cexproxy.cap.org] Archiving upgrade runtime data. [INFO][2023-06-23 06:14:13][cexproxy.cap.org] Upgrade runtime data archived successfully. [INFO][2023-06-23 06:14:13][cexproxy.cap.org] Clearing upgrade runtime directory. [INFO][2023-06-23 06:14:13][cexproxy.cap.org] Upgrade runtime directory cleared successfully. [INFO][2023-06-23 06:14:14][cexproxy.cap.org] Upgrade runtime clean up completed. Check pods status Upgrade Status This concludes the upgrade procedure of cloud extensibility proxy
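To recap, the whole cloud extensibility proxy upgrade boils down to the handful of commands already shown in the output above. The only addition below is the mkdir, included on the assumption that the mount point may not already exist on your appliance.
# create the mount point if it does not already exist (assumption)
mkdir -p /mnt/cdrom
# mount the upgrade ISO that was mapped to the appliance (device id taken from blkid, /dev/sr0 here)
mount /dev/sr0 /mnt/cdrom
# start the upgrade from the CD-ROM repository
vracli upgrade exec -y --repo cdrom://
# follow the progress from another session
vracli upgrade status --follow
# after the reboot, confirm the new version and that all pods are healthy again
vracli version
kubectl get pods -A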
- Fetching Environment Details from VMware Aria Suite Lifecycle using API
Step 1: Get the login token from VMware Aria Suite Lifecycle
The table below describes the data we need. When this API is executed, it generates a cookie which is stored under the cookies section.
Step 2: Execute VMware Aria Suite Lifecycle APIs
Fetch Environment Details using API
As seen above, we have all the details of globalenvironment. globalenvironment is nothing but VMware Identity Manager.
Fetching VMware Aria Suite Lifecycle Version and System Information
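For readers who prefer the command line over a REST client, a minimal curl sketch of the same flow is below. The hostname and credentials are placeholders, and the endpoint paths (/lcm/authzn/api/login for the login/cookie and /lcm/lcops/api/v2/environments for environment details) are assumptions from my lab notes; verify them against the API reference of your Suite Lifecycle version.
# placeholder appliance FQDN and credentials (assumptions; replace with your own)
LCM=https://lcm.example.com
CREDS='admin@local:YourPassword'
# Step 1: call the login API and save the session cookie it returns to a cookie jar
curl -k -s -u "$CREDS" -c /tmp/lcm-cookies.txt -X POST "$LCM/lcm/authzn/api/login"
# Step 2: fetch environment details (globalenvironment is VMware Identity Manager) re-using the cookie;
# many setups also accept Basic auth directly on each call instead of the cookie
curl -k -s -b /tmp/lcm-cookies.txt "$LCM/lcm/lcops/api/v2/environments"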
- Deploying VMware Aria Automation Orchestrator with vSphere Authentication using Suite Lifecycle 8.12
We have been asked internally many times how we can deploy VMware Aria Automation Orchestrator without VMware Aria Automation. There are still many customers who first started with Automation Orchestrator and ended up with Automation Orchestrator as their main automation engine. For them it was really difficult, as deploying Automation alongside Automation Orchestrator was the only way to deploy it via Suite Lifecycle. Now we have this sorted out. This is really a happy moment for us to share with you that you can deploy Automation Orchestrator without Automation using Suite Lifecycle, and can do complete lifecycle management as with any other Suite Lifecycle integrated product.
In this example, I have the following versions being deployed:
VMware Aria Suite Lifecycle 8.12
VMware Aria Orchestrator 8.12
I'll be creating a new environment to install the product "VMware Aria Automation Orchestrator 8.12 with vSphere Authentication".
Agree to the EULA.
Select the appropriate certificate created for this installation.
Select the appropriate infrastructure configuration for this installation.
Input the appropriate network details.
Just before I go to the next screen, I'd like to state that there's an AD group which was created to be used for vSphere authentication during the Automation Orchestrator deployment. This is the admin group.
Now, going ahead with the Suite Lifecycle product creation flow and entering the required information for the product.
Prechecks are in progress and everything is successful.
Review and submit the request.
The request is now submitted and deployment is in progress.
Appliances are deployed on vCenter.
The request is in progress, and after 43 minutes it completes the deployment of the VMware Aria Automation Orchestrator cluster version 8.12.
Exploring environment details
As we can see in the screenshot below, the Authentication Type is set to vSphere, as we deployed a vSphere-authentication-based VMware Aria Automation Orchestrator cluster.
On the load balancer end, you can see the pool is up.
Accessing the cluster with its VIP works. One cannot access the VMware Aria Automation Orchestrator page on a per-node basis; you would get a 404 Not Found exception (a quick check for this is shown after the stage list below).
Behind the Scenes
The different phases of a VMware Aria Automation Orchestrator 8.12 deployment through Suite Lifecycle are documented below.
Stage 1 -- Check Product Pre-requisites
Stage 2 -- Install VMware Aria Automation Orchestrator
node01
a. Start
b. VMware Aria Automation Orchestrator Deployment
c. VMware Aria Automation Orchestrator Binary Validation
d. Validate Deploy Inputs
e. Deploy OVF
f. Power On Virtual Machine
g. Check Guest Tools Status
h. Check Hostname / IP
i. Check first boot completion of VMware Aria Automation Orchestrator
j. Sync password for VMware Aria Orchestrator
k. Health Check for VMware Aria Orchestrator
l. Final
node02
a. Start
b. VMware Aria Automation Orchestrator Deployment
c. VMware Aria Automation Orchestrator Binary Validation
d. Validate Deploy Inputs
e. Deploy OVF
f. Power On Virtual Machine
g. Check Guest Tools Status
h. Check Hostname / IP
i. Check first boot completion of VMware Aria Automation Orchestrator
j. Sync password for VMware Aria Automation Orchestrator
k. Health Check for VMware Aria Automation Orchestrator
l. Final
node03
a. Start
b. VMware Aria Automation Orchestrator Deployment
c. VMware Aria Automation Orchestrator Binary Validation
d. Validate Deploy Inputs
e. Deploy OVF
f. Power On Virtual Machine
g. Check Guest Tools Status
h. Check Hostname / IP
i. Check first boot completion of VMware Aria Automation Orchestrator
j. Sync password for VMware Aria Automation Orchestrator
k. Health Check for VMware Aria Automation Orchestrator
l. Final
Stage 3 -- Configure Telemetry for VMware Aria Automation Orchestrator
a. Start
b. Configure Telemetry for VMware Aria Automation Orchestrator
c. Finish
Stage 4 -- Install Certificate on VMware Aria Automation Orchestrator
a. Start
b. Initiate Certificate on VMware Aria Automation Orchestrator
c. Install Certificate on VMware Aria Automation Orchestrator
d. Health Check for VMware Aria Automation Orchestrator
e. Final
Stage 5 -- Configure Load Balancer
a. Start
b. Start VMware Aria Automation Orchestrator Deployment
c. Configure Load Balancer
d. Final
Stage 6 -- Configuring Secondary VMware Aria Automation Orchestrator
a. Start
b. Start VMware Aria Automation Orchestrator Deployment
c. Start joining VMware Aria Automation Orchestrator secondary node to primary node
d. Final
Stage 7 -- Configuring Secondary VMware Aria Automation Orchestrator
a. Start
b. Start VMware Aria Automation Orchestrator Deployment
c. Start joining VMware Aria Automation Orchestrator secondary node to primary node
d. Final
Stage 8 -- Configuring VMware Aria Automation Orchestrator Cluster
a. Start
b. Start VMware Aria Automation Orchestrator Cluster Deployment
c. Start joining VMware Aria Automation Orchestrator cluster deployment
d. Final
Stage 9 -- vSphere Authentication on VMware Aria Automation Orchestrator Cluster
a. Start
b. Fetch Configuration ID
c. Configure vSphere Authentication on VMware Aria Automation Orchestrator
d. Check if admin group is valid for given vSphere
e. Configure vSphere Admin group on VMware Aria Automation Orchestrator
f. Persist vSphere Authentication on VMware Aria Automation Orchestrator
g. Final
Stage 10 -- VMware Aria Automation Orchestrator SDDC manager watermark configuration
a. Start
b. VMware Aria Automation Orchestrator SDDC manager watermark configuration
c. Final
Stage 11 -- Update Environment Details
a. Start
b. Create Environment Inventory Update
c. Final
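As mentioned above, once the cluster and load balancer are configured, the Orchestrator UI only answers on the VIP, while individual nodes return 404. A quick way to confirm this behaviour with curl is sketched below; the hostnames are placeholders from my lab, and /orchestration-ui is assumed to be the UI path for this release.
# the VIP should answer with the Orchestrator UI (expect a 200 or a redirect)
curl -k -s -o /dev/null -w '%{http_code}\n' https://vro-vip.cap.org/orchestration-ui/
# hitting an individual node directly is expected to return 404 Not Found
curl -k -s -o /dev/null -w '%{http_code}\n' https://vro-node01.cap.org/orchestration-ui/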
- Upgrading Operations for Logs through Suite Lifecycle
Recently VMware announced that there are a few vulnerabilities affecting VMware Aria Operations for Logs, which are documented under a VMSA. As per the VMSA, we need to upgrade VMware Aria Operations for Logs (formerly vRLI) to version 8.12. Remember, this also resolves another certificate issue documented under KB91441.
Procedure
VMware Aria Operations for Logs 8.12 is supported along with VMware Aria Suite Lifecycle 8.12, which means before upgrading VMware Aria Operations for Logs, we should upgrade VMware Aria Suite Lifecycle to 8.12.
I have documented how to upgrade Suite Lifecycle to 8.12 in this blog post. That described the CD-ROM method; here I'll share screenshots from the online method. It's pretty much the same, just the source is different.
After a while the upgrade is done. vRSLCM is now VMware Aria Suite Lifecycle. We will refer to the new VMware Aria Suite Lifecycle as Suite Lifecycle to make it easier.
If we go to the Product Support Pack pane under settings, we can see that the new policy 8.12.0.0, which is the out-of-the-box policy that comes with the upgrade, adds support for:
VMware Aria Operations 8.12
VMware Aria Automation 8.12
VMware Aria Orchestrator 8.12
VMware Aria Automation Config 8.12
VMware Aria Operations for Logs 8.12
Now, let's browse our environment and see what products we have.
Note: Even though the versions of the managed products are still old, for example version 8.10 and below, we still show VMware Aria branding, as it's difficult to maintain both vRealize and VMware Aria branding together. No matter what version of the product is available, we will show only the new "VMware Aria" branding once we upgrade Suite Lifecycle to version 8.12.
Now let's upgrade our existing vRLI environment, which the UI calls Logs.
Let's begin with downloading the binary and wait till it's completed. Once done, let's go to the product and perform an inventory sync.
Inventory sync is now completed. Let's now start the VMware Aria Operations for Logs upgrade to version 8.12. I just performed an inventory sync, so I'll directly proceed with the upgrade.
Ensure the appropriate binary is mapped.
Take and Retain Snapshot.
Validations successful.
Review and submit the request.
It took 24 minutes end to end to complete the upgrade process (remember, this is a single-node VMware Aria Operations for Logs).
Diving into the logs a bit....
Important logs to monitor (a short snippet for following both is included at the end of this section):
/storage/core/loginsight/var/upgrade.log
/var/log/vrlcm/vmware_vrlcm.log
****** Reference: /storage/core/loginsight/var/upgrade.log ******
****** Certificate, Version and Disk Space checks are performed ******
2023-04-23 00:41:51,607 loginsight-pak-upgrade INFO Certificate verified: VMware-vRealize-Log-Insight.cert: C = US, ST = California, L = Palo Alto, O = "VMware, Inc." error 18 at 0 depth lookup:self signed certificate OK
2023-04-23 00:41:51,629 loginsight-pak-upgrade INFO Signature of the manifest validated: Verified OK
2023-04-23 00:41:52,466 loginsight-pak-upgrade INFO Current version is 8.10.2-21145187 and upgrade version is 8.12.0-21618456. Version Check successful!
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Available Disk Space at /tmp: 3402510336
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Disk Space Check successful!
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Available Disk Space at /storage/core: 468438704128
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Disk Space Check successful!
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Available Disk Space at /storage/var: 19037392896
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Disk Space Check successful!
2023-04-23 00:41:52,467 loginsight-pak-upgrade INFO Loading eula license successful!
2023-04-23 00:41:52,468 loginsight-pak-upgrade INFO Done!
2023-04-23 00:41:53,409 loginsight-pak-upgrade INFO Certificate verified: VMware-vRealize-Log-Insight.cert: C = US, ST = California, L = Palo Alto, O = "VMware, Inc." error 18 at 0 depth lookup:self signed certificate OK
PAK file is verified
2023-04-23 00:41:53,422 loginsight-pak-upgrade INFO Signature of the manifest validated: Verified OK
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Current version is 8.10.2-21145187 and upgrade version is 8.12.0-21618456. Version Check successful!
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Available Disk Space at /tmp: 3402534912
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Disk Space Check successful!
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Available Disk Space at /storage/core: 468438573056
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Disk Space Check successful!
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Available Disk Space at /storage/var: 19037380608
2023-04-23 00:41:53,658 loginsight-pak-upgrade INFO Disk Space Check successful!
2023-04-23 00:46:30,378 loginsight-pak-upgrade INFO Checksum validation successful!

****** Upgrade is triggered ******
2023-04-23 00:46:30,381 loginsight-pak-upgrade INFO Attempting to upgrade to version 8.12.0-21618456
2023-04-23 00:46:30,696 upgrade-driver INFO Starting 'upgrade-driver' script ...
2023-04-23 00:46:30,696 upgrade-driver INFO Start processing the manifest file ...
2023-04-23 00:46:30,708 upgrade-driver INFO Log Insight TO_VERSION in manifest file is 8.12.0-21618456
2023-04-23 00:46:30,708 upgrade-driver INFO Parsed version is 8.12.0-21618456
2023-04-23 00:46:30,708 upgrade-driver INFO Creating file /storage/core/upgrade-version to store upgrade version.
2023-04-23 00:46:30,718 upgrade-driver INFO The file /storage/core/upgrade-version is created successfully.
2023-04-23 00:46:30,739 upgrade-driver INFO Start upgrading cassandra sstable schema...
2023-04-23 00:46:33,487 upgrade-driver INFO Cassandra sstable schema upgrading done.
2023-04-23 00:46:39,873 upgrade-driver INFO Cassandra snapshot run time: 0:00:06.386209
2023-04-23 00:46:39,874 upgrade-driver INFO Start processing key list ...
2023-04-23 00:46:39,874 upgrade-driver INFO Start processing rpm list ...
2023-04-23 00:46:39,874 upgrade-driver INFO Rpm by name upgrade-image-8.12.0-21618456.rpm
2023-04-23 00:49:24,515 upgrade-driver INFO INFO: Running /storage/core/upgrade/kexec-li - Resize|Partition|Boot ...
Starting to run kexec-li script ...
Reading and saving /etc/ssh/sshd_config
Reading and saving old ssh keys if key based Authentication is enabled
cp: cannot stat '/root/.ssh//id_rsa': No such file or directory
cp: cannot stat '/root/.ssh//id_rsa.pub': No such file or directory
cp: cannot stat '/root/.ssh//known_hosts': No such file or directory
Reading and saving /etc/hosts
Reading and saving ssh host keys
Reading and saving /var/lib/loginsight-agent/liagent.ini
Reading and saving hostname
Reading and saving old cassandra keystore
Failed copying /usr/lib/loginsight/application/lib/apache-cassandra-*/conf/keystore*
Reading and saving old default keystore
Reading and saving old default truststore
Reading and saving old tomcat configs
chmod ing /storage/core/upgrade/vmdk-extracted-root/usr/lib/loginsight/application/etc/3rd_config/keystore*
chmod ing /storage/core/upgrade/vmdk-extracted-root/usr/lib/loginsight/application/etc/truststore*
Reading and saving old loginsight.conf
Reading and saving old password in /etc/shadow
Root password info root P 04/22/2023 0 365 7 -1
Root password change date is 04/22/2023
Root password is set. Password reset will not be required on first login.
Reading and saving /etc/fstab
Reading and saving cacerts
Copying java.security to java.security.old
Reading and saving network configs
Reading and saving resolv.conf
Checking for certificate renewal
Current certificate fingerprint: 69:3F:9C:F7:52:7B:B6:F8:2C:E3:AF:A1:C1:2A:16:0B:A1:A7:53:92
Cassandra and tomcat certificate fingerprints are different. Updating...
Lazy partition is sda5
sda partition count is 5
/storage/core/upgrade/kexec-li script run took 233 seconds
Partition sda5, which is the lazy partition, will be formatted and will become the root partition
Photon to Photon upgrade flow will be called, where base OS was Photon ...
Starting to run photon2photon script ...
Root partition copy took 194 seconds
clean up upgrade-image.rpm
Removing lock file
/storage/core/upgrade/photon2photon-base-photon.sh script run took 196 seconds
Rebooting...

After the reboot, once the services are back up, the upgrade is complete.
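If you want to follow the upgrade in real time rather than reviewing the logs afterwards, you can tail the two files listed under "Important Logs to Monitor" while the request runs. A small sketch using the paths shown above; the df check simply mirrors the same mount points the upgrade.log disk-space pre-checks validate (/tmp, /storage/core, /storage/var):

# On the VMware Aria Operations for Logs node: follow the PAK upgrade log
tail -f /storage/core/loginsight/var/upgrade.log

# On the Suite Lifecycle appliance: follow the engine log for the same request
tail -f /var/log/vrlcm/vmware_vrlcm.log

# Optional pre-check before submitting the request: confirm free space on the
# mount points the upgrade validates
df -h /tmp /storage/core /storage/var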
- What's CAP in VMware Aria Suite Lifecycle 8.12
CAP is the Common Appliance Platform, an approach to standardize appliance management across all VMware appliances, which significantly reduces overhead and operational inefficiencies. CAP improves the appliance upgrade experience for VMware Aria Suite Lifecycle and is a direct replacement for the VMware Appliance Management Interface, also known as VAMI.

Benefits of CAP
- What's New in vRealize Automation 8.8.1
vRealize Automation 8.8.1 capabilities focus on multi-cloud support, with the ability to enable/disable Log Analytics for Azure VMs, manage resource RBAC permissions with quick-create VM, and a fix for CVE-2022-22965.

The benefits of vRealize Automation 8.8.1 include:
Cloud Guardrails initial functionality: access to the out-of-the-box template library, the ability to combine templates into desired states, and the ability to enforce desired states
Ability to enable/disable Log Analytics for Azure VMs (Day 2) - customers can now enable/disable Azure VM Log Analytics
Manage resource RBAC permission with quick create VM: Quick create VM / Quick create VM with existing network / Quick create VM with new/existing storage
Updated Spring version to 5.3.18 to resolve CVE-2022-22965 - please see https://www.vmware.com/security/advisories/VMSA-2022-0010.html
Support for remote vSphere agents - vRealize Automation on-premises will support management of remote vCenter Server cloud accounts
Updated vRO AWS plugin that supports a newer AWS Java SDK

Documentation and Links
Download links: vRealize Automation 8.8.1, vRealize Orchestrator 8.8.1, vRealize Automation SaltStack Config 8.8.1
Release Notes: vRealize Automation 8.8.1, vRealize Orchestrator 8.8.1
Documentation Links: vRealize Automation 8.8.1, vRealize Orchestrator 8.8.1