
Behind the Scenes: vRealize Automation Upgrade to 7.5

We monitor /opt/vmware/var/log/vami/updatecli.log and upgrade-iaas.log during an upgrade of vRealize Automation, whatever the version may be.

I was working on an upgrade to vRA 7.5 today and decided to look behind the scenes at what exactly gets logged in the files mentioned above.

So let's start our deep-dive into them.
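Before diving in, it helps to follow both logs live from an SSH session on the appliance while the upgrade runs. A minimal sketch (assuming upgrade-iaas.log sits next to updatecli.log; the post only spells out the full path of the first):

```shell
# Follow both upgrade logs as the upgrade progresses; Ctrl+C to stop.
tail -f /opt/vmware/var/log/vami/updatecli.log \
        /opt/vmware/var/log/vami/upgrade-iaas.log
```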

  1. Upgrade Starts

  2. Starts running pre-install scripts

  3. Pre-Install is set to "IN PROGRESS"

  4. Then it disables database automatic failover (SYNCHRONOUS to ASYNCHRONOUS)

  5. Runs the abort-on-replica script; at this point it checks whether the MASTER version is lower than its REPLICA version

  6. Then copies the upgrade repository to another location so it can be used by post-upgrade scripts. This happens only on the MASTER node

  7. The next script checks whether the hardware resources are sufficient for the vRA version being installed

  8. Checks vRA Replica host availability for source versions >= 7.1; if Replica hosts are not available, it throws an exception asking you to fix them before the upgrade

  9. Checks for any vRO duplicates; if none are found it proceeds to the next step, and if it finds any it deletes them

  10. At this point it upgrades MANAGEMENT AGENTS on all IAAS nodes

  11. After the Management Agents are upgraded, prerequisite checks on the IAAS nodes start. This is where it checks whether we have included the IAAS upgrade or excluded it in order to upgrade it manually. If it finds /tmp/disable-iaas-upgrade, it skips all pre-req steps on the IAAS nodes

  12. Identifies or generates the cluster node ID

  13. Checks whether the vRB service is registered. If REGISTERED, it tells us whether that version of vRB is compatible with the vRA version we are upgrading to. Otherwise, it warns us that there is a compatibility issue, but it does not stop the upgrade

  14. Validates whether there are blueprints in the system that cannot be migrated automatically. If such blueprints are found, the upgrade is blocked

  15. Maps the LB to localhost

  16. Kills all Java processes by executing the vcac-server, vco-server, and vco-configurator stop commands

  17. Cleans any temporary files under /var/lib/vco/*

  18. Formats /dev/sdd and moves the database to it; /opt and /var are moved to the old database partition

  19. Applies a few fixes related to extending the partition

  20. Checks whether /dev/sda is 50 GB and resizes partitions according to the newly available space

  21. Kills the Health Broker service and monitor before upgrading

  22. Stops IaaS services in the order: Agent, DemWorker, DemOrchestrator, ManagerService

  23. Stops vRA services on all the Replica hosts for source versions >= 7.1

  24. Vacuums the database, only from the primary VA

  25. This script dumps the vco database and imports it into vcac; it applies only to older versions

  26. This script fixes the location where packages are downloaded, because in 6.2.x and earlier 7.x versions there is not enough space in the root partition. If there is not enough space, it creates a new location and moves the content into it

  27. Uninstalls the Artifactory RPMs from the appliance

  28. This script dumps the PostgreSQL databases in case of a major upgrade. It checks whether a dump/restore is needed at all and exits if the major versions are the same

  29. Executes a script that works around an issue when upgrading vmStudio from older versions; if the current vmStudio version predates the fix, it forces deletion of the vmware-studio-vami-cimom package

  30. Removes persistent net rules

  31. Artifactory uninstall fix starts

  32. Saves the JRE cacerts file, as some of its certificates are imported by Horizon

  33. Removes any resource bundles

  34. Stops the psql-manager service at the beginning of post-update operations: it might not be able to connect to the database, and so would not see that it is in async mode and would try to perform a reset. It is started again at the end of the post-update

  35. After the PostgreSQL database is processed (still in the pre-update step), this script checks whether there is an already-exported database (which indicates a major upgrade); if so, it ensures PostgreSQL is stopped and deletes the existing data and server directory

  36. Prepares various services to stop

  37. Marks the Pre-Install tasks as complete

  38. Now it starts running installation tests

  39. Starts package installation

  40. Now that package installation is complete, it starts running post-installation scripts

  41. Preserves DB settings and copies /etc/vcac/server.xml to /tmp

  42. Performs RPM status checks

  43. Stops the psql-manager service at the beginning of post-update operations: it might not be able to connect to the database, and so would not see that it is in async mode and would try to perform a reset. It is started again at the end of the post-update.

  44. Creates a file /tmp/vra-upgrade-on.txt

  45. Checks if there are any external databases to be merged

  46. Ensures that all local users will not expire

  47. Ensures the keys are not already present in /etc/sysctl.conf

  48. This script fixes the location where packages are downloaded, because in 6.2.x and earlier 7.x versions there is not enough space in the root partition. The script does not run if /opt is a symlink, which means it has already been moved to /storage/ext

  49. Stops the PostgreSQL server and ensures the data directory is on the correct partition; checks or sets the MASTER in the database

  50. Checks recovery.conf and updates other PostgreSQL config files

  51. This script restores PostgreSQL databases previously exported (in the pre-update step) in case of a major upgrade

  52. Starts sshd after the update if it is enabled

  53. Prepares required services

  54. Initializes users and generates an encrypted password for the XENON administrator, then initializes XENON

  55. Initializes users and generates passwords for vrhb

  56. Performs cleanup of the sandbox directory. In upgrades from versions earlier than 7.5 this is mandatory; from 7.5 onward, the sandbox folder contains only the static UI files that are regenerated from the host

  57. Calls firstboot scripts for postgres clustering

  58. Only if the DB is external and the local database is a replica (in the case of 6.3 with an external LB) will the DB replica state be cleared; otherwise it exits

  59. Patches the rabbitmq scripts for the sed options

  60. Removes persistent net rules

  61. Prepares required vcac services

  62. Updates database and then creates tables used by vcac-config

  63. Removes truststore

  64. Adds vCO system properties and enables vRA mode in Control Center

  65. Reencrypts keystore password

  66. Changes the hzn master keystore password if it has been set to a default one

  67. Applies a fix for an issue with the value none for a property

  68. This script will disable particular PIDs from hardening scripts being invoked after upgrade

  69. Replaces the update URL

  70. Reconfigures vco

  71. Add additional lighttpd configurations directory

  72. Just logs version

  73. Removes old log file that is not used anymore - the same messages are in /var/log/vmware/vcac/vcac-config.log

  74. Executes set guest to export vami variables (they were not exported in the old versions)

  75. Deploys all vRA services to tomcat

  76. Removes the orig file created by the studio build process for /etc/init.d/rc

  77. Updates the Java timezone data

  78. Pins telemetry log collection to run only on the MASTER; disables it on the Replicas

  79. Sets up log symlinks for /storage/log/

  80. Sets core dumps under /storage/core

  81. Fixes sshd config

  82. Edits sysctl.conf and pushes configurations into it

  83. Additional lighttpd configuration goes to the /opt/vmware/etc/lighttpd/conf.d directory - removes old config if there is any

  84. Configures allowed services under /etc/hosts.allow

  85. Makes a symlink for the /etc/issue file; it does not overwrite the original but writes it to /etc/issue.ORIG

  86. Sets and customizes the GRUB timeout

  87. Fixes one of the vami_ovf bugs

  88. Fixes for another bug

  89. Fixes for another bug

  90. Disables screen blanking on tty1

  91. Links vmware-rpctool where vami expects it

  92. Patches vami_set_hostname

  93. Another bug fix

  94. Adds user root to the wheel group; otherwise it cannot log in via SSH because of the hardening scripts

  95. Fixes tcserver startup files

  96. Disables chroot for ntpd

  97. Adds lighttpd headers

  98. Patches VAMI css

  99. Patches vami-deploy.xml

  100. Executes haproxy fix

  101. Enables haproxy

  102. Deletes postgres export directory

  103. Deletes legacy services

  104. Copies openscap branding

  105. Removes multiple tomcat servers if existed

  106. Applies default ciphers for SFCB server

  107. Starts psql manager

  108. Sets a marker to trigger the automated IAAS upgrade after the node's reboot

  109. Checks RabbitMQ node health and starts the upgrade on the replica nodes (VAs). From this point you will see no logging on the MASTER for a long time, until the replicas are upgraded

  110. Deletes /tmp/disable-iaas-upgrade if it was created earlier (in the case of a manual IAAS upgrade), then logs + for script in '"${bootstrap_dir}"/*'

  111. Ciphers updated

  112. Flag set for vami_setnetwork

  113. Applies a workaround for the kernel hanging on an unavailable CD-ROM device

  114. Migrates custom groups if any

  115. Posts IAAS upgrade messages
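The IAAS gate from step 11 keys off a single marker file, and the same file is cleaned up in step 110. A tiny hypothetical sketch of that logic (the real pre-req scripts are not published; only the marker path comes from the logs):

```shell
# Hypothetical helper mirroring step 11: if the marker file exists,
# the IAAS pre-requisite checks are skipped (manual IAAS upgrade planned).
iaas_prereq_mode() {
    if [ -f "$1" ]; then
        echo "skipped"
    else
        echo "enabled"
    fi
}

# Example: create the marker, as you would before a manual IAAS upgrade
touch /tmp/disable-iaas-upgrade
iaas_prereq_mode /tmp/disable-iaas-upgrade   # prints "skipped"

# Step 110 removes the marker again at the end of the VA upgrade
rm /tmp/disable-iaas-upgrade
iaas_prereq_mode /tmp/disable-iaas-upgrade   # prints "enabled"
```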

"After all appliances are upgraded, ssh to the master appliance and go to /usr/lib/vcac/tools/upgrade and execute the "./upgrade" "

"Wait for step *Post-install* to complete and then reboot the master appliance. After it is rebooted, IaaS nodes upgrade will commence automatically.Progress will be displayed on *this page* on the master appliance"

  1. All Appliances are now upgraded

  2. Completes Post Install scripts and finishes reconfiguration

  3. Finalizes Installation

  4. Completes the upgrade successfully on the MASTER and REPLICA appliances

  5. Now that the Virtual Appliance upgrade is complete, we reboot the MASTER node. While the MASTER is starting up, at the point where it brings up the network interfaces and starts the application services, it initiates an automatic reboot of the other REPLICA nodes. Once all services on the MASTER node are REGISTERED, the IAAS upgrade kicks in

  6. The IAAS upgrade starts and upgrades components on my two IAAS nodes

  7. Disables maintenance mode on the second IAAS node

  8. Upgrades the DEMs

  9. Upgrades Proxy Agents

  10. Enables Manager Service Automatic Failover mode

  11. Finally completes the upgrade and restores Postgres Replication mode back to SYNCHRONOUS mode
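Once the logs report replication restored, you can double-check from the master appliance that PostgreSQL is really back in SYNCHRONOUS mode. This is a sketch using the stock pg_stat_replication view, not part of the upgrade itself:

```shell
# On the master VA: each replica should report sync_state = 'sync'
su - postgres -c \
  "psql -c \"SELECT application_name, sync_state FROM pg_stat_replication;\""
```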

That's curtains for your vRA upgrade!

Download the complete updatecli.log here
