
  • Remediation steps to handle rollback of VMware Aria Suite Lifecycle 8.16.0 and other products

    The initial release of VMware Aria Suite Lifecycle 8.16.0 took place on February 20, 2024. However, within a few hours it was rolled back, along with other products within the VMware Aria Suite, due to a new licensing requirement. Which versions were released on February 20th, and which on February 29th?
VMware Aria Automation: 20 Feb build 8.16.1-23319891; 29 Feb build 8.16.1-23373968. If your Automation version is below 8.16.1, please upgrade to the build released on February 29th. If you have already upgraded, your next upgrade will be 8.16.2 or higher.
VMware Aria Automation Orchestrator: 20 Feb build 8.16.1-23319892; 29 Feb build 8.16.1-23319892. If your Automation Orchestrator is not yet upgraded, proceed with upgrading to version 8.16.1; there are no changes in the Orchestrator build. If you have already upgraded, your next upgrade will be 8.16.2 or higher.
VMware Aria Automation Config: 20 Feb build 8.16.1-23266830; 29 Feb build 8.16.1-23266830. If your Automation Config is not yet upgraded, proceed with upgrading to version 8.16.1; there are no changes in the Automation Config build. If you have already upgraded, your next upgrade will be 8.16.2 or higher.
VMware Aria Operations: 20 Feb build 8.16.0-23251571; 29 Feb build 8.16.1-23365475. Perform the upgrade to Operations 8.16.1 using VMware Aria Suite Lifecycle.
VMware Aria Operations for Logs: 20 Feb build 8.16.0-23264422; 29 Feb build 8.16.0-23364779. If you have already upgraded to Operations for Logs 8.16.0, upgrade to the new build outside of Suite Lifecycle and perform an inventory sync, as build-to-build upgrades of products are not supported in Suite Lifecycle. If you have an older version of Operations for Logs, the standard upgrade procedure from Suite Lifecycle applies.
VMware Aria Operations for Networks: 20 Feb version 6.12.1; 29 Feb version 6.12.1. You're all set; no changes are necessary. If you've already upgraded to the build released on February 20th, there's no further action required.
VMware Aria Suite Lifecycle: 20 Feb build 8.16.0-23334789; 29 Feb build 8.16.0-23377566. Upgrade to the new build of 8.16.0; it will be a build-to-build upgrade. If your Suite Lifecycle is already on version 8.16.0, there won't be any version change. For those who haven't upgraded yet, follow the same path as before. Be sure to carefully read the release notes if you are upgrading from older versions.
Let's explore the situation where you completed the upgrade to VMware Aria Suite Lifecycle 8.16.0 on February 20th. What actions would you take with the newly released 8.16.0 version that came out on February 29th? This will be a build-to-build upgrade. As you may observe from the screenshot, the version and build number are 8.16.0.4-23334789. When I browse to the "System Upgrade" page under Lifecycle Operations of VMware Aria Suite Lifecycle, we can see that I recently upgraded my Suite Lifecycle to 8.16.0. Check for upgrade: you will be presented with the following: 8.16.0, Build: 8.16.0.4, Build: 23377566. Remember, the new build number is 23377566. I have already taken a snapshot, so I will go ahead and upgrade now. Then run the pre-checks. The following pre-checks are performed during upgrade:
Mandatory value check
Root password expiration check
Disk space check on the / file system
Request in-progress check
VMware Aria Suite Lifecycle services health check
Once all checks are complete, we can proceed with the upgrade. Once you click Upgrade, let's check what happens in the background to see its progress. As a first step, let's SSH into the VMware Aria Suite Lifecycle appliance.
Reference: /var/log/vmware/capengine/cap-non-lvm-update/workflow.log
**** Identifies that an upgrade is available ****
2024/02/29 13:03:52.642452 update_precheck.go:135: Target appliance 8.16.0.4-23377566 is higher than the source appliance 8.16.0.4-23334789, patch allowed.
**** Updates are staged, RPMs are validated, installation of RPMs ****
2024/02/29 13:04:08.530842 progress.go:11: Updates staged successfully
2024/02/29 13:04:08.531055 task_progress.go:24: Updates staged successfully
2024/02/29 13:04:09.793564 workflow_manager.go:221: Task stage completed
2024/02/29 13:04:09.793705 task_progress.go:24: Starting task ext-pre-install
2024/02/29 13:04:09.875452 progress.go:11: Starting to execute pre install extension script
2024/02/29 13:04:09.875692 command_exec.go:50: DEBUG running command: /bin/bash -c /var/tmp/95ecb5d4-bedf-4d85-81ef-97c9c3e6b1fb/pre-install-script.sh 8.16.0.4 8.16.0.4
2024/02/29 13:04:09.875877 task_progress.go:24: Starting to execute pre install extension script
2024/02/29 13:04:09.928611 pre_update_script_plugin.go:75: Pre install extension script output Installing update from version 8.16.0.4 to version 8.16.0.4 cap-applmgmt-1.4.0-279.x86_64
2024/02/29 13:04:09.928647 progress.go:11: Finished executing pre install extension script
2024/02/29 13:04:09.929051 task_progress.go:24: Finished executing pre install extension script
2024/02/29 13:04:09.965438 workflow_manager.go:221: Task ext-pre-install completed
2024/02/29 13:04:09.965718 task_progress.go:24: Starting task validate-install
2024/02/29 13:04:10.021836 progress.go:11: Running test transaction
2024/02/29 13:04:10.022014 installer.go:35: Rebuilding RPM database
2024/02/29 13:04:10.022108 task_progress.go:24: Running test transaction
2024/02/29 13:04:10.977129 installer.go:174: Checking RPMs validity
2024/02/29 13:04:11.149250 installer.go:181: Finished RPM validation
2024/02/29 13:04:11.149287 progress.go:11: Finished validating RPMs
2024/02/29 13:04:11.149387 task_progress.go:24: Finished validating RPMs
2024/02/29 13:04:11.182645 workflow_manager.go:221: Task validate-install completed
**** Installation starts and completes ****
2024/02/29 13:04:11.182876 task_progress.go:24: Starting task install
2024/02/29 13:04:11.256759 progress.go:11: Installing RPMs
2024/02/29 13:04:11.256975 installer.go:47: Installing RPMs
2024/02/29 13:04:11.257067 task_progress.go:24: Installing RPMs
2024/02/29 13:04:49.304463 installer.go:35: Rebuilding RPM database
2024/02/29 13:04:50.138162 installer.go:60: Finished RPM installation
2024/02/29 13:04:50.172422 progress.go:11: Finished installing RPMs
2024/02/29 13:04:50.172548 task_progress.go:24: Finished installing RPMs
2024/02/29 13:04:50.192370 workflow_manager.go:221: Task install completed
**** Post install is triggered and completed ****
2024/02/29 13:04:50.192587 task_progress.go:24: Starting task ext-post-install
2024/02/29 13:04:50.226859 progress.go:11: Starting to execute post install extension script
2024/02/29 13:04:50.227097 command_exec.go:50: DEBUG running command: /bin/bash -c /var/tmp/95ecb5d4-bedf-4d85-81ef-97c9c3e6b1fb/post-install-script.sh 8.16.0.4 8.16.0.4 0
2024/02/29 13:04:50.227287 task_progress.go:24: Starting to execute post install extension script
2024/02/29 13:11:06.813823 non_lvm_update_post_update_script_plugin.go:84: Post install extension script output Updating vami-sfcb.service - Removing vmtoolsd service dependency already service is having restart policy Finished installing version 8.16.0.4
2024/02/29 13:11:06.813872 progress.go:11: Finished executing post install extension script
2024/02/29 13:11:06.813996 task_progress.go:24: Finished executing post install extension script
2024/02/29 13:11:06.837387 workflow_manager.go:221: Task ext-post-install completed
**** Starts and finishes the final phase of upgrade ****
2024/02/29 13:11:06.841249 task_progress.go:24: Starting task metadata_update
2024/02/29 13:11:06.922190 progress.go:11: Updating appliance metadata
2024/02/29 13:11:06.922534 task_progress.go:24: Updating appliance metadata
2024/02/29 13:11:06.922769 progress.go:11: Metadata update completed
2024/02/29 13:11:06.922885 task_progress.go:24: Metadata update completed
2024/02/29 13:11:06.946944 workflow_manager.go:221: Task metadata_update completed
2024/02/29 13:11:06.947063 task_progress.go:24: Starting task cleanup
2024/02/29 13:11:07.015059 progress.go:11: Removing stage path
2024/02/29 13:11:07.015365 cleanup.go:64: Removing update directory: /storage/95ecb5d4-bedf-4d85-81ef-97c9c3e6b1fb
2024/02/29 13:11:07.015388 task_progress.go:24: Removing stage path
2024/02/29 13:11:07.117053 cleanup.go:72: Successfully removed update location
2024/02/29 13:11:07.159940 workflow_manager.go:221: Task cleanup completed
2024/02/29 13:11:07.159989 workflow_manager.go:183: All tasks finished for workflow
2024/02/29 13:11:07.160000 workflow_manager.go:354: Updating instance status to Completed
Remember, the other logs to monitor during the upgrade phases are:
Pre-Upgrade & Upgrade Phase
/var/log/vmware/capengine/cap-non-lvm-update/workflow.log
/var/log/vmware/capengine/cap-non-lvm-update/installer-<>.log
Post Update Phase
/var/log/bootstrap/postupdate.log
/data/script.log
/var/log/vrlcm/vmware_vrlcm.log
Now that the upgrade is complete, let's go ahead and review the version and build. As you can see, the version is still 8.16.0 but the build number is 23377566, which is the new build released on 29th February 2024. When we started the upgrade, the source was 8.16.0.4 Build 23334789, released on 20th February. After we finished the upgrade, we are on 8.16.0.4 Build 23377566, released on 29th February. Remember, there is no need for any new product support packs to consume the new builds of the other VMware Aria products. The policy of VMware Aria Suite Lifecycle 8.16.0.4 Build 23377566 contains the latest checksums of the products released on 29th February 2024. If you upgrade to the new build of VMware Aria Suite Lifecycle, you're all set. The policy reads as below. Leave comments if there are any further questions; they will be answered.
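The completion markers in workflow.log make it easy to track an in-flight upgrade from an SSH session. Below is a minimal sketch of that idea; the log lines are illustrative samples written to /tmp and modeled on the excerpts above, whereas on a real appliance you would point the greps at /var/log/vmware/capengine/cap-non-lvm-update/workflow.log.

```shell
# Sample log lines modeled on the capengine workflow.log excerpts above.
cat > /tmp/workflow.log <<'EOF'
2024/02/29 13:04:09.793564 workflow_manager.go:221: Task stage completed
2024/02/29 13:04:09.965438 workflow_manager.go:221: Task ext-pre-install completed
2024/02/29 13:11:07.160000 workflow_manager.go:354: Updating instance status to Completed
EOF
# How many workflow tasks have finished so far?
grep -c 'workflow_manager.go:221: Task' /tmp/workflow.log
# Did the whole workflow reach the Completed state?
grep -q 'Updating instance status to Completed' /tmp/workflow.log && echo 'upgrade finished'
```

During a live upgrade, `tail -f` on the same file shows each task starting and completing in order.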

  • Difference between System Upgrade, System Patch, PSPACK and Product Upgrades in Suite Lifecycle

    The video below explains the clear differences between a system upgrade, a system patch, a product support pack, and the product upgrade itself.

  • Could you explain the infrastructure type shown in the Easy Installer vCenter information pane?

    While setting up VMware Aria Automation or VMware Identity Manager through Easy Installer, when you reach the stage of providing vCenter details for deployment, you'll encounter an option called "Infrastructure Type." This choice identifies the location of your vCenter:
- vSphere: if it is within your datacenter or on VMC on AWS
- Hyperscaler: if it resides on a hyperscaler such as AVS, GCVE, or OCVS
When deploying on hyperscalers, it's important to note that the admin account often comes with restrictions. Therefore, it is advisable to select the Infrastructure Type as Hyperscaler during deployment. This ensures that the installer performs its checks with relaxed permissions to verify whether the necessary privileges are available.

  • VMware Aria Suite Lifecycle 8.14 Product Support Pack 6 is released

    VMware Aria Suite Lifecycle 8.14 Product Support Pack 6 has been released, supporting VMware Aria Operations for Networks 6.12. 8.14 PSPACK 6 Release Notes. Remember, Product Support Packs and Patches are cumulative. What do I mean by that? Let me explain:
8.14 GA: 19th Oct 2023
8.14 PSPACK 1: 8th Nov 2023
8.14 PSPACK 2: 24th Nov 2023
8.14 PSPACK 3: 30th Nov 2023
8.14 Patch 1: 12th Dec 2023
8.14 PSPACK 4: 16th Jan 2024
8.14 PSPACK 5: 24th Nov 2023
8.14 PSPACK 6: 8th Nov 2023
Looking at the table, if I install 8.14 PSPACK 6 there is no need for me to also install 8.14 Patch 1, as all of those changes are already included in the latest PSPACK or Patch. Always take a snapshot before applying a product support pack.
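The cumulative rule above boils down to "install only the newest entry." A hedged sketch of that decision with ISO-formatted dates, where lexical sort order equals chronological order; the dates and labels here are illustrative stand-ins, not the official release calendar:

```shell
# With ISO dates, the lexically last entry is the newest cumulative bundle,
# and it is the only one that needs installing.
printf '%s\n' \
  '2023-12-12 8.14 Patch 1' \
  '2024-01-16 8.14 PSPACK 4' \
  '2023-11-30 8.14 PSPACK 3' | sort | tail -n1
```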

  • VMSA-2024-0001 Queries | Remediation through Suite Lifecycle |

    What's this about?
CVE-2023-34063 details a missing access control vulnerability that impacts Aria Automation. VMware's response to this vulnerability is documented in VMSA-2024-0001. Please ensure that you have reviewed VMSA-2024-0001.
How do I remediate?
All versions of Aria Automation 8.11.x, 8.12.x, 8.13.x and 8.14.x are impacted by this vulnerability. Customers running versions of Aria Automation that are past their end of general support date are recommended to upgrade to a supported version and then mitigate this issue. VMware Aria Suite Lifecycle has released 8.14 PSPACK 4 to support VMware Aria Automation 8.16 and Orchestrator 8.16.
Note: If you're on Suite Lifecycle 8.12, apply Patch 2 or 3 before upgrading your environment to Suite Lifecycle 8.14. Patch 3 was released last week. Click on this link to understand why it's so important to do so. Available in both online and offline modes. Here's the Release Notes Link.
There are patches released for previous versions of Automation which are under support, and that's documented in KB Article 96098. The above KB article is your bible; it has all the information needed. Leave a message if there are any further queries. Will try to answer.
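Since the impacted range is every 8.11.x through 8.14.x release, a quick version triage can be scripted. A minimal sketch; the version string is an example input, and the KB pointer in the message simply echoes the article referenced above:

```shell
# Hedged sketch: does an Aria Automation version string fall in the
# 8.11.x-8.14.x range called out by VMSA-2024-0001?
version='8.12.2'   # example input, not read from a real appliance
case "$version" in
  8.11.*|8.12.*|8.13.*|8.14.*) echo 'impacted: see KB 96098 for the patch' ;;
  *) echo 'not in the impacted range' ;;
esac
```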

  • Install VMware Aria Suite Lifecycle 8.12 Patch 2 before upgrading to 8.14 for 7 crucial reasons

    The product team for VMware Aria Suite Lifecycle has issued a new update, known as "VMware Aria Suite Lifecycle 8.12 Patch 2," addressing crucial issues. We advise customers intending to upgrade from 8.12 to 8.14 to first apply Patch 2 to their existing 8.12 installation and then proceed with the upgrade to 8.14 for a smooth, uninterrupted upgrade.
Release Notes: https://docs.vmware.com/en/VMware-Aria-Suite-Lifecycle/8.12/rn/vmware-aria-suite-lifecycle-812-patch-2-release-notes/index.html
Note: It's crucial to take snapshots before upgrading. If a failure occurs, roll back, analyze, and proceed with caution, avoiding repeated upgrade attempts on the failed environment.
What are those 7 reasons?
1. Solution to address the issue associated with the "Check Online" upgrade method of VMware Aria Suite Lifecycle when configured with a proxy. If a proxy is set up in Suite Lifecycle, the "Check Online" upgrade method may fail with an exception message indicating "No Upgrade Available." The resolution is implemented in 8.12 Patch 2, which ensures that the requests originating from Suite Lifecycle to the upgrade repository adhere to the configured proxy settings.
2. Updated checksum for the Windows Connector, enabling the successful download of VMware Identity Manager 3.3.7 binaries. When attempting to map VMware Identity Manager binaries, a checksum exception arises, as depicted below. This issue stems from a recent modification in the Windows connector build. For users with Suite Lifecycle 8.12 planning to download 3.3.7 binaries, it is crucial to apply Patch 2. This ensures that the updated checksum is integrated into Suite Lifecycle, leading to a successful mapping of the binaries.
3. Fixed the issue related to collecting all necessary logs needed for debugging the upgrade of VMware Aria Suite Lifecycle. In the event of an upgrade failure, it is essential to gather all relevant logs for debugging purposes. Installing Suite Lifecycle 8.12 Patch 2 guarantees the comprehensive collection of all logs required for an upgrade. Key logs that demand attention during the upgrade process include:
/var/log/vmware/capengine/cap-non-lvm-update/installer-*
/var/log/vmware/capengine/cap-non-lvm-update/workflow.log
/var/log/vmware/cap_am/appliance_update_list_ms.log
/var/log/bootstrap/postupdate.log
/data/script.log
4. Updated descriptions for the VMware Aria Suite Lifecycle upgrade pre-checks to ensure accuracy. The descriptions for the pre-checks conducted prior to the upgrade were identical; this has been rectified in Patch 2.
5. Improved the upgrade pre-validation report, explicitly displaying the KB link when the root partition check fails. Before initiating an upgrade, one of the pre-checks verifies whether there is sufficient space in the / partition. With Patch 2, if a warning for this pre-check arises during an upgrade attempt, the report will display the relevant KB article outlining steps to address the space issue under the / partition.
6. Resolved the upgrade problem in VMware Aria Suite Lifecycle when FIPS is deactivated. In Suite Lifecycle 8.12 environments where FIPS is disabled, an upgrade may stall during the post-update phase. To preemptively circumvent this issue, a workaround involves creating a /var/log/bootstrap/fips_disable flag before initiating the upgrade. If the issue is already encountered, the following KB article provides steps to resolve it: https://kb.vmware.com/s/article/95231. Patch 2 resolves this problem, eliminating the need for these workarounds.
7. Addressed issues related to Operations for Logs scale-out operations. An improvement has been implemented in the scale-out process to prevent failures when new nodes attempt to join the cluster.

  • VMware Aria Suite Lifecycle 8.14 PSPACK 2 | Installation | Demo |

    VMware Aria Suite Lifecycle 8.14 Product Support Pack 2, or PSPACK2 as we call it, offers support for VMware Aria Operations 8.14.1 and VMware Aria Operations for Logs 8.14.1. Recorded a small demo which explains the logs to monitor and the process involved to implement it.
Logs to monitor:
/var/log/vrlcm/vmware_vrlcm.log
/var/log/vrlcm/bootstrap.log
/var/log/vrlcm/patchcli.log
Once there is a successful PSPACK implementation, you will see the following messages in the logs.
Reference: /var/log/vrlcm/patchcli.log
2023-11-24 13:14:46,865 - __main__ - INFO - Metadata: {"patchInfo":{"name":"VMware Aria Suite Lifecycle, version 8.14.0 Pspack 2","summary":"Cumulative pspack bundle for vRealize Suite Lifecycle Manager","description":"This cumulative pspack bundle provides fixes to issues observed with various VMware Aria Suite Lifecycle components. Refer the associated docUrl for more details.","kbUrl":"https:\/\/docs.vmware.com\/en\/VMware-vRealize-Lifecycle-Manager\/8.14.0\/rn\/vRealize-Lifecycle-Manager-814-Pspack-2.html","eulaFile":"","category":"bugfix","urgency":"critical","releaseType":"pspack","releaseDate":1700808903000,"additionaInfo":{}},"metadataId":"vrlcm-8.14.0-PSPACK2","metadataVersion":"1","patchId":"6813196a-22c8-4425-a527-3e86a4d30502","patchBundleCreationDate":1700808903,"selfPatch":true,"product":{"productId":"vrlcm","productName":"VMware Aria Suite Lifecycle","productVersion":"8.14.0","supportedVersions":["8.14.0"],"productBuild":"10689094","productPatchBuild":"","additionaInfo":{"patchInstructions":"mkdir -p \/data\/tmp-pspack-81402\/10318114; cp -r /tmp/10318114/VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak \/data\/tmp-pspack-81402\/10318114; cd \/data\/tmp-pspack-81402\/10318114; unzip VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak; unzip lcm_PSPACK_artifacts.zip; cp -r \/tmp\/10318114\/lcm_pspack_metadata.json \/data\/tmp-pspack-81402\/10318114; sh pre_pspack_instructions.sh; sh pspack_instructions.sh \/data\/tmp-pspack-81402\/10318114 6813196a-22c8-4425-a527-3e86a4d30502;"},"patchAlreadyApplied":false},"payload":{"productPatchLevel":"PSPACK2","patchPayloadFilename":"VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak","patchPayloadUri":"","patchPayloadSize":395054038,"sha256sum":"041c06b1e11c83b6d8c20ba915866aa2e2fbd7b2eee79cc1df2d284e3ea2dedb","productMinorLevel":null},"patchFileName":"vrlcm-8.14.0-PSPACK2.pspak","patchSize":0,"patchSha256sum":"","patchRunningCounter":2,"patchStatus":"ACTIVE","patchDownloadStatus":null,"extract":false,"patchCounter":"2"}
2023-11-24 13:14:46,879 - __main__ - INFO - Patch File: /tmp/10318114//VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak
2023-11-24 13:14:46,879 - __main__ - INFO - metadata after parsing :
2023-11-24 13:14:46,879 - __main__ - INFO - patch instructions:mkdir -p /data/tmp-pspack-81402/10318114; cp -r /tmp/10318114/VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak /data/tmp-pspack-81402/10318114; cd /data/tmp-pspack-81402/10318114; unzip VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak; unzip lcm_PSPACK_artifacts.zip; cp -r /tmp/10318114/lcm_pspack_metadata.json /data/tmp-pspack-81402/10318114; sh pre_pspack_instructions.sh; sh pspack_instructions.sh /data/tmp-pspack-81402/10318114 6813196a-22c8-4425-a527-3e86a4d30502;
2023-11-24 13:14:46,880 - __main__ - INFO - installing patch ...
Archive: VMware-vLCM-Appliance-8.14.0-PSPACK2.pspak
 extracting: lcm_PSPACK_artifacts.zip
Archive: lcm_PSPACK_artifacts.zip
   creating: os/
  inflating: vmlcm-service-gui-8.14.0-SNAPSHOT.jar
  inflating: vmlcm-service-8.14.0-SNAPSHOT.jar
  inflating: vmware-service-configuration.jar
 extracting: blackstone.zip
  inflating: APUAT-8.5.0.18176777.pak
  inflating: APUAT-for-8.10.x-8.14.1.22799028.pak
  inflating: pre_pspack_instructions.sh
  inflating: pspack_instructions.sh
  inflating: policy.json
  inflating: post_patch_instructions.sh
  inflating: dev-build-upgrade.sh
  inflating: populate.sh
  inflating: patchcli.py
  inflating: patchcliproxy
  inflating: vlcm-support
  inflating: vrlcm-server.service
2023-11-24_13:14:54 Pre-Product Support Pack - vRSLCM
2023-11-24_13:14:54 Cleaning backups from old location...
2023-11-24_13:14:54 Cleaning previous backups...
2023-11-24_13:14:54 Creating new backup file...
2023-11-24_13:15:16 Backup done.
2023-11-24_13:15:16 Copy script to /var/lib/vrlcm
2023-11-24 13:16:40,265 - __main__ - INFO - patch installation process ended.
2023-11-24 13:16:40,265 - __main__ - INFO - patch installation completed.
After the above message there will be a reboot of the appliance. Then the work begins under bootstrap.log and vmware_vrlcm.log. To validate from the UI, one can check
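The patch metadata above carries a "sha256sum" field for the .pspak payload, which is worth verifying before the bundle is applied. A hedged sketch of that check; the file and its checksum here are generated locally for illustration, whereas on a real system the expected value comes from the metadata JSON:

```shell
# Create an illustrative stand-in for a downloaded .pspak bundle.
printf 'example pspack payload' > /tmp/sample.pspak
# In practice, $expected would be the "sha256sum" value from the patch metadata.
expected=$(sha256sum /tmp/sample.pspak | awk '{print $1}')
# sha256sum -c expects "HASH  FILENAME" (two spaces) on stdin and prints "<file>: OK".
echo "$expected  /tmp/sample.pspak" | sha256sum -c -
```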

  • What's VMware Identity Manager Cluster Auto-Recovery in VMware Aria Suite Lifecycle 8.14

    VMware Aria Suite Lifecycle 8.14 introduces an innovative capability known as "VMware Identity Manager Cluster Auto-Recovery".
Why do we need this?
The aim of this 'autorecovery' service is to minimize the need to run the time-consuming 'Remediate' operation from the Suite Lifecycle UI. In greenfield deployments, this feature is automatically activated, while in brownfield deployments it needs to be manually enabled after upgrading to VMware Aria Suite Lifecycle 8.14.
How does it work?
The 'autorecovery' service is deployed as a Linux service and operates on all three nodes within the vIDM cluster. Operations like start/restart of the pgPool service are controlled by the script individually on each node. The handling of the cluster-VIP or delegate IP is done only on nodes with the primary role. Detachment of the VIP is done on standby nodes based on their "standby" role. Operations like "recovery" are synchronized based on the node's status as database "primary" in one case, and as cluster "leader" if all nodes are in standby. Because of this, no duplicate operations are triggered by any of the nodes.
What challenges or issues does this feature tackle?
Cluster-VIP loss on primary
Cluster issues due to network outages
Recovery of "down" cluster node(s)
Avoids the need to initiate "Remediate" from the UI in most cases, which involves node restarts contributing to downtime
Eliminates the need to reboot vIDM nodes because of PostgreSQL cluster problems
Recovery in cases with significant replication delay ('significant' is configurable in bytes, say more than 1000 bytes of lag between primary and secondary)
Recovery in rare cases where all nodes are in a 'standby' state
Prevents discrepancies in /etc/hosts
Is there any downtime during the execution of auto-recovery?
No, there is no downtime when the auto-recovery script is triggered in the backend for any of the reasons.
How do I enable and disable this feature?
Once enabled, users can go to the day-2 operations pane of the globalenvironment or vIDM and then choose to disable it, and vice-versa.
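The configurable replication-lag threshold mentioned above can be pictured as a simple byte comparison. A hedged sketch of the idea only; the limit and the two write positions are made-up example numbers, not values read from a real vIDM cluster or from the autorecovery service itself:

```shell
# Illustrative lag check: compare primary and standby write positions (bytes)
# against a configurable limit, as the autorecovery concept describes.
limit=1000          # example threshold in bytes
primary_pos=52000   # example primary write position
standby_pos=50500   # example standby replay position
lag=$((primary_pos - standby_pos))
if [ "$lag" -gt "$limit" ]; then
  echo "lag ${lag}B exceeds ${limit}B: node is a recovery candidate"
else
  echo "lag ${lag}B within limit"
fi
```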

  • Upgrading VMware Aria Suite Lifecycle | 8.12 to 8.14 | Deepdive |

    Pre-Requisites
Take snapshots
Review the release notes
Note: For VCF-aware VMware Aria Suite Lifecycle environments, wait for the respective PSPACK (Product Support Pack) to be released.
Important Logs
Pre-Upgrade & Upgrade Phase
/var/log/vmware/capengine/cap-non-lvm-update/workflow.log
/var/log/vmware/capengine/cap-non-lvm-update/installer-<>.log
Post Update Phase
/var/log/bootstrap/postupdate.log
/data/script.log
/var/log/vrlcm/vmware_vrlcm.log
Procedure
We may use one of the repository methods to upgrade:
Check Online
URL
CD-ROM
In this example/demo, we will be using the CD-ROM method. Let's now deep-dive and understand the upgrade procedure. Ignore the message which states that Suite Lifecycle is already upgraded; that will be removed in the next version. Click on CD-ROM and then check for upgrades. It now reads the manifest and comes back stating that the upgrade is available. Give your consent and agree that you have taken a snapshot. Validations should go through. What do we check:
Mandatory value check
Root password check
Disk space check on the / filesystem
Requests in-progress check
VMware Aria Suite Lifecycle health check
VMware Identity Manager health check
Now click on Upgrade. The upgrade is now triggered. It will stay at 22% for a while as the RPMs are being verified and staged into the repo.
Reference: /var/log/vmware/capengine/cap-non-lvm-update/workflow.log
Prechecks
2023/10/20 11:09:42.479602 workflow_manager.go:84: Fetching metadata for workflow
2023/10/20 11:09:42.480148 workflow_manager.go:354: Updating instance status to Running
2023/10/20 11:09:42.480601 task_progress.go:24: Starting task update-precheck
* *
2023/10/20 11:09:43.171868 task_progress.go:24: Reading appliance metadata complete.
2023/10/20 11:09:43.185705 workflow_manager.go:221: Task precheck_metadata completed
Staging
2023/10/20 11:09:43.186514 task_progress.go:24: Starting task stage
2023/10/20 11:09:43.250662 stage_plugin.go:124: Using stage directory: /storage/eb595996-31c9-432c-933e-fa354438df65/stage
2023/10/20 11:09:43.267429 progress.go:11: Staging update
* *
2023/10/20 11:10:43.268446 task_progress.go:24: Updates staged successfully
2023/10/20 11:10:43.288182 workflow_manager.go:221: Task stage completed
Post staging, it goes ahead with the pre-installation scripts
2023/10/20 11:10:43.288321 task_progress.go:24: Starting task ext-pre-install
2023/10/20 11:10:44.745400 task_progress.go:24: Finished executing pre install extension script
2023/10/20 11:10:44.757120 workflow_manager.go:221: Task ext-pre-install completed
Validates installation
2023/10/20 11:10:44.757212 task_progress.go:24: Starting task validate-install
2023/10/20 11:10:50.381346 workflow_manager.go:221: Task validate-install completed
Moves to 55% when the RPM installation begins
Starts installing RPMs
2023/10/20 11:10:50.381492 task_progress.go:24: Starting task install
2023/10/20 11:10:50.647395 progress.go:11: Installing RPMs
2023/10/20 11:10:50.658173 installer.go:44: Installing RPMs
2023/10/20 11:10:50.658307 task_progress.go:24: Installing RPMs
Remember, this is the stage where you can check install-<>.log as well.
That's the lengthy log which installs all the RPMs, so I will not describe it in detail.
ufdio: 1 reads, 17154 total bytes in 0.000013 secs
D: ============== /storage/f5e77458-7b7f-49c8-a66f-d0bf3b0cca38/stage/apache-ant-1.10.12-2.ph3.noarch.rpm
D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
D: loading keyring from rpmdb
D: opening db environment /var/lib/rpm cdb:0x401
D: opening db index /var/lib/rpm/Packages 0x400 mode=0x0
* * * *
D: closed db index /var/lib/rpm/Providename
D: closed db index /var/lib/rpm/Requirename
D: closed db index /var/lib/rpm/Group
D: closed db index /var/lib/rpm/Basenames
D: closed db index /var/lib/rpm/Name
D: closed db environment /var/lib/rpm
D: Exit status: 0
Once the installer log states Exit status 0, it means that the upgrade part is now complete. The workflow logs state that it has finished installing the RPMs:
2023/10/20 11:15:27.232130 installer.go:32: Rebuilding RPM database
2023/10/20 11:15:27.875407 installer.go:57: Finished RPM installation
2023/10/20 11:15:27.875461 progress.go:11: Finished installing RPMs
2023/10/20 11:15:27.875762 task_progress.go:24: Finished installing RPMs
2023/10/20 11:15:27.946718 workflow_manager.go:221: Task install completed
After this, the post-installation script starts.
Reference: /var/log/vmware/capengine/cap-non-lvm-update/workflow.log
2023/10/20 13:10:40.708519 task_progress.go:24: Starting task ext-post-install
2023/10/20 13:10:41.324567 progress.go:11: Starting to execute post install extension script
2023/10/20 13:10:41.334470 command_exec.go:49: DEBUG running command: /bin/bash -c /var/tmp/f5e77458-7b7f-49c8-a66f-d0bf3b0cca38/post-install-script.sh 8.12.0.7 8.14.0.4 0
2023/10/20 13:10:41.334522 task_progress.go:24: Starting to execute post install extension script
When it starts executing the post install script, we need to check two different logs, starting with postupdate.log shown below.
Reference: /var/log/bootstrap/postupdate.log
During the postupdate phase it checks:
*** Begins Postupdate ***
*** RPM Checks ***
*** CAP User Creation ***
*** Disable Password Expiration ***
*** Update ulimit ***
*** Set Python ***
*** Postgres Configuration ***
*** Cleanup Inprogress Requests ***
*** Starts Services ***
2023-10-20 11:15:28 /etc/bootstrap/postupdate.d/25-start-services starting...
+ set -e
+ echo 'Reboot not required so starting all service.'
Reboot not required so starting all service.
+ systemctl daemon-reload
+ systemctl restart vpostgres
+ touch /var/log/bootstrap/reboot-required
+ systemctl restart vrlcm-server
+ systemctl restart blackstone-spring
+ cp -f /var/lib/vrlcm/dev-build-upgrade.sh /usr/local/bin/vrlcm-cli
+ chmod 700 /usr/local/bin/vrlcm-cli
+ cp -r /var/lib/vrlcm/nginx.conf /etc/nginx/
+ cp -r /var/lib/vrlcm/ssl.conf /etc/nginx/
+ systemctl reload nginx
+ rm -f /var/lib/vrlcm/SUCCESS
+ rm -rf /tmp/dlfRepo
+ [[ ! -f /etc/triggerLicenseUpdate ]]
+ touch /etc/triggerLicenseUpdate
+ [[ -f /var/log/vrlcm/status.txt ]]
+ rm -rf /var/log/vrlcm/status.txt_backup
+ cp -r /var/log/vrlcm/status.txt /var/log/vrlcm/status.txt_backup
+ rm -rf /var/log/vrlcm/status.txt
+ /var/lib/vrlcm/populate.sh
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
You will see that postupdate waits for some time for the populate.sh script to run. This is the time you may check the following logs.
Reference: /data/script.log
There are a bunch of updates which go through in the database:
checking all service status
checking services are running
checking dependent service status
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* *
delete sucess api count: 176: delete failed api count: 0:
Starting postgres update script for vRSLCM specific tables
ALTER TABLE
ALTER TABLE
Disabling FIPS settings
ALTER TABLE
UPDATE 0
After the populate.sh script is complete, go ahead and check postupdate.log:
+ [[ -f /var/lib/vrlcm/SUCCESS ]]
+ echo 'Creating INPROGRESS to block UI from loading...'
Creating INPROGRESS to block UI from loading...
+ rm -rf /var/lib/vrlcm/SUCCESS
+ touch /var/lib/vrlcm/INPROGRESS
+ exit 0
2023-10-20 11:22:58 /etc/bootstrap/postupdate.d/25-start-services done, succeeded.
*** Service Startups Conclude ***
*** Cleaning-up PSPACK db's starts ***
*** Fill CAP product info ***
*** Updates upgrade status ***
*** blackstone upgrade starts ***
*** cleanup patch history ***
*** Enable fips mode ***
*** Create CAP update settings ***
*** Disable VAMI ***
*** Creates flag to reboot VA ***
*** Postupdate task is now complete ***
Since the postupdate task is now complete, we shall now see the whole upgrade procedure completed by CAPENGINE and documented under workflow.log:
2023/10/20 11:15:28.274925 command_exec.go:49: DEBUG running command: /bin/bash -c /var/tmp/eb595996-31c9-432c-933e-fa354438df65/post-install-script.sh 8.12.0.7 8.14.0.4 0
2023/10/20 11:22:59.237676 non_lvm_update_post_update_script_plugin.go:84: Post install extension script output Updating vami-sfcb.service - Removing vmtoolsd service dependency already service is having restart policy Finished installing version 8.14.0.4
2023/10/20 11:22:59.237734 progress.go:11: Finished executing post install extension script
2023/10/20 11:22:59.238148 task_progress.go:24: Finished executing post install extension script
2023/10/20 11:22:59.249143 workflow_manager.go:221: Task ext-post-install completed
2023/10/20 11:22:59.249404 task_progress.go:24: Starting task metadata_update
2023/10/20 11:22:59.870305 progress.go:11: Updating appliance metadata
2023/10/20 11:22:59.872860 task_progress.go:24: Updating appliance metadata
2023/10/20 11:23:00.044506 progress.go:11: Metadata update completed
2023/10/20 11:23:00.044835 task_progress.go:24: Metadata update completed
2023/10/20 11:23:00.059053 workflow_manager.go:221: Task metadata_update completed
2023/10/20 11:23:00.059324 task_progress.go:24: Starting task cleanup
2023/10/20 11:23:00.289545 progress.go:11: Removing stage path
2023/10/20 11:23:00.306077 task_progress.go:24: Removing stage path
2023/10/20 11:23:00.306810 cleanup.go:64: Removing update directory: /storage/eb595996-31c9-432c-933e-fa354438df65
2023/10/20 11:23:00.912667 cleanup.go:72: Successfully removed update location
2023/10/20 11:23:00.983155 workflow_manager.go:221: Task cleanup completed
2023/10/20 11:23:00.983189 workflow_manager.go:183: All tasks finished for workflow
2023/10/20 11:23:00.983208 workflow_manager.go:354: Updating instance status to Completed
This concludes the upgrade of VMware Aria Suite Lifecycle to version 8.14.
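Because every workflow.log entry starts with a date and time field, the overall upgrade window can be pulled out with awk. A small sketch; the two log lines are illustrative samples written to /tmp and modeled on the excerpts above, not read from a live appliance:

```shell
# First and last milestones of the upgrade, modeled on the workflow.log excerpts.
cat > /tmp/wf.log <<'EOF'
2023/10/20 11:09:42.480601 task_progress.go:24: Starting task update-precheck
2023/10/20 11:23:00.983208 workflow_manager.go:354: Updating instance status to Completed
EOF
# Field $2 of each entry is the timestamp; grab it from the start and end markers.
start=$(awk '/Starting task update-precheck/ {print $2; exit}' /tmp/wf.log)
end=$(awk '/Updating instance status to Completed/ {print $2; exit}' /tmp/wf.log)
echo "upgrade window: $start -> $end"
```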

  • VMware Aria Suite Lifecycle Upgrade to 8.14 | Postupdate phase failure

    This is a rare occurrence and might be seen on appliances that were deployed with FIPS mode disabled. I've tried to explain this in a video recording; hope it helps. VMware KB article to follow: https://kb.vmware.com/s/article/95231. Always ensure a snapshot is taken before the upgrade.

    Proactive measure

    If you're planning an upgrade of Suite Lifecycle 8.12 to 8.14, then:

    1. Create a file called fips_disable: touch /var/log/bootstrap/fips_disable
    2. Change its permissions: chmod 644 /var/log/bootstrap/fips_disable
    3. Go ahead and upgrade; it should be seamless.

    Reactive measure

    If you have already hit the problem, you can come out of it in two ways. KB https://kb.vmware.com/s/article/95231 lists both methods.

    Method 1

    1. Revert the Suite Lifecycle appliance to the pre-upgrade snapshot.
    2. Create the file /var/log/bootstrap/fips_disable: touch /var/log/bootstrap/fips_disable
    3. Set the correct permission on the file: chmod 644 /var/log/bootstrap/fips_disable
    4. Retry the upgrade.

    Method 2

    Follow the instructions laid out under the second part of the Resolution section of the KB: https://kb.vmware.com/s/article/95231
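    The proactive steps above can be collapsed into one small script. The flag path and permissions come from KB 95231; the BOOTSTRAP_DIR variable is my own addition so the sketch can be tried outside a real appliance:

```shell
#!/bin/sh
# Sketch of the proactive measure from KB 95231: create the fips_disable
# flag with the right permissions before starting the 8.12 -> 8.14 upgrade.
# BOOTSTRAP_DIR defaults to the real appliance path; override it to rehearse
# the script somewhere harmless first.
BOOTSTRAP_DIR="${BOOTSTRAP_DIR:-/var/log/bootstrap}"

create_fips_flag() {
  mkdir -p "$BOOTSTRAP_DIR"                 # ensure the directory exists
  touch "$BOOTSTRAP_DIR/fips_disable"       # flag read during the upgrade
  chmod 644 "$BOOTSTRAP_DIR/fips_disable"   # permissions per the KB
}
```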

  • How to overcome / partition space warning during pre-check of Suite Lifecycle Upgrade to 8.14

    There's a pre-check whose threshold was raised slightly to ensure there is enough space available on / before going for an upgrade. In environments that have been running for a while, you might encounter a warning or error telling you there's not enough space. How do I resolve this so that I can upgrade? Follow this KB article to fix the issue: https://kb.vmware.com/s/article/95238. I'll explain it in detail below anyway; I also created a short video which explains the same. Let's begin.

    As a first step, take a snapshot before making any changes. This is mandatory.

    Understand what the current disk space looks like inside the appliance.

    Command: df -h

    Output:
    root@tvasl [ ~ ]# df -h
    Filesystem                                   Size  Used Avail Use% Mounted on
    devtmpfs                                     2.9G     0  2.9G   0% /dev
    tmpfs                                        3.0G   20K  3.0G   1% /dev/shm
    tmpfs                                        3.0G  792K  3.0G   1% /run
    tmpfs                                        3.0G     0  3.0G   0% /sys/fs/cgroup
    /dev/mapper/system-system_0                  9.8G  3.9G  5.4G  42% /
    tmpfs                                        3.0G   18M  2.9G   1% /tmp
    /dev/mapper/storage-storage_0                9.8G   81M  9.2G   1% /storage
    /dev/mapper/vg_lvm_snapshot-lv_lvm_snapshot  7.8G   24K  7.4G   1% /storage/lvm_snapshot
    /dev/mapper/vg_alt_root-lv_alt_root          9.8G   24K  9.3G   1% /storage/alt_root
    /dev/sda3                                    488M   40M  412M   9% /boot
    /dev/mapper/data-data_0                       49G   33G   14G  71% /data
    /dev/sda2                                     10M  2.0M  8.1M  20% /boot/efi

    Here, /dev/mapper/system-system_0 (backed by /dev/sda4) is the / partition. In this example I have ample space, but what if I don't, and which files should I delete to clear space?

    Execute the command below to check which files in the appliance take up space above a certain size.

    To check for files above 100 MB: find / -xdev -type f -size +100M -exec du -sh {} ';' | sort -rh | head -n50
    To check for files above 50 MB: find / -xdev -type f -size +50M -exec du -sh {} ';' | sort -rh | head -n50
    To check for files above 25 MB: find / -xdev -type f -size +25M -exec du -sh {} ';' | sort -rh | head -n50

    This returns a list of files; there might be some .jar and .bkp files, along with a package-pool folder containing a bunch of files. So how do I decide what to delete? The following locations are safe to delete. Let's discuss them in detail.

    The backups present in /opt/vmware/vlcm/postgresbackups/ are created during Product Support Pack installation and are not cleared upon successful installation. There will only be one backup inside this folder at any point in time, and you may delete it without any issues. One note for the future: once you upgrade to VMware Aria Suite Lifecycle 8.14 and apply a Product Support Pack, the database backups will be taken and stored under the /data partition, which ensures there's always space available under /.

    Files under the blackstone backup folder, /opt/vmware/vlcm/blackstone_bkp, are taken during previous Suite Lifecycle upgrades, and the contents of this folder can be safely deleted. Whenever you upgrade, blackstone (the brain behind Suite Lifecycle's Content Management feature) stores its previous version's files in this folder.

    Files under /opt/vmware/var/lib/vami/update/data/package-pool/package-pool/*.* can be deleted as well. These are the RPMs extracted and staged during your previous upgrade; they are no longer needed. Remember, if you're using VMware Aria Suite Lifecycle 8.12 and going to 8.14, the upgrades are powered by the CAP platform. CAP stands for Common Appliance Platform and is a replacement for VAMI, popularly known as the VMware Appliance Management Interface. Deleting files under package-pool will clear up lots of space.

    After all that, re-run the pre-check and you should be fine with the upgrade.
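    The cleanup above can be sketched as a small helper. The three directories are the ones listed in this post; the ROOT_PREFIX and DRY_RUN variables are my own additions (not part of the KB) so the sketch can be rehearsed against a scratch tree before deleting anything for real:

```shell
#!/bin/sh
# Sketch: report and optionally clear the safe-to-delete locations named
# above (per KB 95238). DRY_RUN=1 (the default) only prints sizes;
# ROOT_PREFIX lets the sketch run against a scratch tree instead of /.
ROOT_PREFIX="${ROOT_PREFIX:-}"
DRY_RUN="${DRY_RUN:-1}"

clear_safe_dirs() {
  for d in /opt/vmware/vlcm/postgresbackups \
           /opt/vmware/vlcm/blackstone_bkp \
           /opt/vmware/var/lib/vami/update/data/package-pool/package-pool
  do
    dir="$ROOT_PREFIX$d"
    [ -d "$dir" ] || continue            # skip locations that don't exist
    du -sh "$dir"                        # show how much each one holds
    if [ "$DRY_RUN" = "0" ]; then
      # delete the contents, never the directory itself;
      # ignore the error when the directory is already empty
      rm -rf "${dir:?}"/* 2>/dev/null || true
    fi
  done
}
```

    Running it once with the default DRY_RUN=1 shows what would be reclaimed; setting DRY_RUN=0 performs the deletion.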
