
  • vRealize Suite LifeCycle Manager Introduction

    vRealize Suite Lifecycle Manager (vRLCM) is designed to streamline and simplify the deployment and ongoing management of the vRealize product portfolio. vRLCM automates installation, configuration, upgrade, and health management across the vRealize Suite products from a single pane of glass.

    Features

    vRealize Suite Lifecycle Manager can install and manage the following vRealize Suite products: vRealize Automation, vRealize Business for Cloud, vRealize Operations, and vRealize Log Insight.

    Installation: the user is prompted up front for the hostnames/IPs and license keys. vRLCM then deploys and configures the products with no further user interaction.

    Configuration Management and Drift Remediation: once a product is configured to the desired state, vRLCM can capture a baseline configuration. Over time, configuration changes may cause the product configuration to drift from that baseline. vRLCM displays the configuration drift that has occurred and can remediate it, returning the product to the baseline configuration.

    Health and Marketplace: working in conjunction with vRealize Operations, vRLCM can display the health status of the products it manages.

    Upgrade: one-click upgrade of the vRealize Suite products it manages.

    Content Management: capture, test, and release software-defined content such as blueprints, templates, workflows, and so on.

    Benefits

    Simplified installation and configuration that saves time and effort
    Easy alignment with the VMware recommended reference architecture and validated designs
    Minimized ongoing management effort by leveraging automated configuration and drift management with health monitoring
    Simplified upgrades for all supported vRealize Suite products

    System Requirements

    vRLCM runs as a single virtual appliance on VMware's Photon OS; clustering is not possible, so use vSphere HA.
    2 vCPUs if content management is disabled, 4 vCPUs if content management is enabled
    16 GB memory
    135 GB storage

    Supported vRealize Suite products: click here.

    Installation

    Download the OVA from the My VMware portal: VMware-vLCM-Appliance-1.3.0.14-9069107_OVF10.ova
    Deploy it, passing in the necessary parameters such as hostname and network information.
    Once the appliance is powered on and the boot process completes, use a supported browser to connect to your vRealize Suite Lifecycle Manager appliance by IP address or FQDN: https://<appliance IP or FQDN>/vrlcm
    If you are logging in for the first time: username admin@localhost, password vmware.

    That's it. Welcome to the world of vRealize Suite Lifecycle Manager. I will blog more about vRLCM as I configure it in my lab, so stay tuned.
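    As a rough illustration of the deploy step, here is a minimal ovftool sketch. The vCenter path, appliance name, datastore, network, and the --prop key are placeholders I have assumed, not values from this post; running ovftool against the OVA with no deployment target prints the actual OVF properties the appliance expects.

        #!/usr/bin/env bash
        # Hypothetical ovftool deployment of the vRLCM appliance - a sketch only.
        # Everything except the OVA file name is a placeholder; check the OVA's
        # own property names (run "ovftool <ova>" with no target) before use.
        OVA="VMware-vLCM-Appliance-1.3.0.14-9069107_OVF10.ova"
        VCENTER="vcenter.example.com"

        ovftool \
          --acceptAllEulas \
          --powerOn \
          --name="vrlcm01" \
          --datastore="Datastore01" \
          --network="VM Network" \
          --prop:vami.hostname="vrlcm01.example.com" \
          "${OVA}" \
          "vi://administrator%40vsphere.local@${VCENTER}/Datacenter/host/Cluster"

    Once the appliance has booted, browse to https://<appliance IP or FQDN>/vrlcm and log in with the first-time credentials noted above.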

  • Patching vRealize Automation after upgrade to version 7.4

    VMware recently released HF3, as it's called internally, to remediate a few important bugs. Click here to check which bugs are fixed.

    Getting Ready

    Take snapshots.
    Verify that all nodes in your vRealize Automation installation are up and running.
    If your environment uses load balancers for HA, disable traffic to the secondary nodes and disable service monitoring until the patch has been installed or removed and all services are showing REGISTERED. In the same manner, disable traffic for all secondary members.
    Obtain the patch file from the knowledge base article and copy it to the file system from which you will access the VAMI interface of the vRA appliances.
    Note: click here for additional pre-requisites; do read them, do not skip them.

    Implementing the Patch

    Log in to the master node's VAMI page and click vRA Settings -> Patches.
    Note: to enable or disable Patch Management, log in to the vRealize Automation appliance using the console or SSH as root, and enter one of the following commands:
    /opt/vmware/share/htdocs/service/hotfix/scripts/hotfix.sh enable
    /opt/vmware/share/htdocs/service/hotfix/scripts/hotfix.sh disable
    Click Upload Patch, select the file location, and upload. Once the upload is complete you are given the option to install the patch. Do not refresh the page once you start an upload; just wait until it completes.
    Click "INSTALL" to start the installation of the patch.
    Once installed, the status should change to "Success. Install Complete".
    You can check which patches are installed by browsing the "Installed Patches" section.

    Post-Installation Procedures

    Verify that all services are in the "REGISTERED" state on all nodes (the other two nodes are not shown here as I know they are running).
    Re-enable load balancer traffic to the secondary nodes.
    Finish by provisioning a few test workloads and, once done, consolidate the snapshots taken during the pre-requisites.

    #vRealizeAutomation
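    If the Patches tab is missing from the VAMI, the toggle is easy to script; a minimal sketch using the hotfix.sh path quoted above (run as root on each appliance):

        #!/usr/bin/env bash
        # Toggle the vRA Patch Management tab from the appliance console.
        # Usage: ./toggle-patching.sh enable|disable   (defaults to enable)
        set -euo pipefail
        ACTION="${1:-enable}"
        /opt/vmware/share/htdocs/service/hotfix/scripts/hotfix.sh "${ACTION}"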

  • ehcache replication errors in horizon.log

    Are you seeing similar exceptions in horizon.log? The first step in troubleshooting this ehcache problem is to check whether the runtime-config.properties file is properly configured.

    From the first vRA appliance in a 3-node cluster:

    # ehcache configuration properties
    ehcache.replication.rmi.registry.port=40002
    ehcache.replication.rmi.remoteObject.port=40003
    # Overrides the list of ehcache replication peers. FQDNs separated by ":", e.g. server1.example.com:server2.example.com
    ehcache.replication.rmi.servers=node2fqdn:node3fqdn

    From the second vRA appliance in a 3-node cluster:

    # ehcache configuration properties
    ehcache.replication.rmi.registry.port=40002
    ehcache.replication.rmi.remoteObject.port=40003
    # Overrides the list of ehcache replication peers. FQDNs separated by ":", e.g. server1.example.com:server2.example.com
    ehcache.replication.rmi.servers=node1fqdn:node3fqdn

    From the third vRA appliance in a 3-node cluster:

    # ehcache configuration properties
    ehcache.replication.rmi.registry.port=40002
    ehcache.replication.rmi.remoteObject.port=40003
    # Overrides the list of ehcache replication peers. FQDNs separated by ":", e.g. server1.example.com:server2.example.com
    ehcache.replication.rmi.servers=node1fqdn:node2fqdn

    Next, we checked that port connectivity was established and the ports were open:

    node1:~ # curl -v telnet://node1fqdn:40003
    * Rebuilt URL to: telnet://node1fqdn:40003/
    * Trying 10.37.79.15...
    * TCP_NODELAY set
    * Connected to node1fqdn (XX.XX.XX.XX) port 40003 (#0)

    We also ran the elasticsearch health check and its output was promising as well: https://hostname/SAAS/API/1.0/REST/system/health/

    We approached Engineering and were advised to perform the following steps:

    1) Back up the existing file: cp /opt/vmware/horizon/workspace/bin/setenv.sh /opt/vmware/horizon/workspace/bin/setenv_bak.sh
    2) vi /opt/vmware/horizon/workspace/bin/setenv.sh
    3) Import the utils.inc file by adding this line to setenv.sh: . /usr/local/horizon/scripts/utils.inc
    4) Search for JVM_OPTS in setenv.sh and ensure this property is set exactly like this: -Djava.rmi.server.hostname=$(myip)
    5) Repeat the above steps on all appliances.
    6) Restart the vIDM service on all appliances: service horizon-workspace restart

    By default this is how it looks:

    JVM_OPTS="-server -Djdk.tls.ephemeralDHKeySize=1024 -XX:+AggressiveOpts \
    -XX:MaxMetaspaceSize=768m -XX:MetaspaceSize=768m \
    -Xss1m -Xmx3419m -Xms2564m \
    -XX:+UseParallelGC -XX:+UseParallelOldGC \
    -XX:NewRatio=3 -XX:SurvivorRatio=12 \
    -XX:+DisableExplicitGC \
    -XX:+UseBiasedLocking -XX:-LoopUnswitching"

    and we need to change it to:

    JVM_OPTS="-server -Djdk.tls.ephemeralDHKeySize=1024 -Djava.rmi.server.hostname=$(myip) -XX:+AggressiveOpts \
    -XX:MaxMetaspaceSize=768m -XX:MetaspaceSize=768m \
    -Xss1m -Xmx3419m -Xms2564m \
    -XX:+UseParallelGC -XX:+UseParallelOldGC \
    -XX:NewRatio=3 -XX:SurvivorRatio=12 \
    -XX:+DisableExplicitGC \
    -XX:+UseBiasedLocking -XX:-LoopUnswitching"

    After making the change on all available vRA appliances and restarting them, there were no more exceptions in horizon.log. This change sets the correct hostname and IP address in the Java environment so that the application can form the cluster correctly. The fix is already present in vIDM and should be included in vRA 7.4.

    Finally, the root cause is that an IPv6 address in the /etc/hosts file prevents the hostname and IP address from being set correctly for the application.

    #vRealizeAutomation
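    For what it's worth, steps 1-6 can be scripted; a minimal sketch, assuming GNU sed on the appliance and that JVM_OPTS begins with "-server " exactly as shown above. Review the resulting file before restarting the service.

        #!/usr/bin/env bash
        # Sketch: apply the Engineering-suggested setenv.sh change on one appliance
        # (run as root on each vRA appliance). Not an official procedure.
        set -euo pipefail
        SETENV=/opt/vmware/horizon/workspace/bin/setenv.sh

        # 1) Back up the original file
        cp "${SETENV}" /opt/vmware/horizon/workspace/bin/setenv_bak.sh

        # 2/3) Import utils.inc (provides the myip helper) if not already imported
        grep -q 'utils.inc' "${SETENV}" || \
          sed -i '1a . /usr/local/horizon/scripts/utils.inc' "${SETENV}"

        # 4) Add -Djava.rmi.server.hostname=$(myip) to JVM_OPTS if missing
        grep -q 'java.rmi.server.hostname' "${SETENV}" || \
          sed -i 's/^JVM_OPTS="-server /JVM_OPTS="-server -Djava.rmi.server.hostname=$(myip) /' "${SETENV}"

        # Fail loudly if the property still is not there (e.g. JVM_OPTS layout differs)
        grep -n 'java.rmi.server.hostname' "${SETENV}"

        # 6) Restart the vIDM service
        service horizon-workspace restart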

  • Removing stale Puppet entries from Configuration Items

    When provisioning or destroy operations fail or get stuck for whatever reason, there is every chance that we have to clean the entries up manually. In our scenario, these stale entries were present under vRA Items -> Configuration Management.

    Note: before we get into the steps to remove these entries from the database, we must have a full backup of the vRA vPostgres database. Also ensure the virtual machine tagged to this entry is no longer present on the endpoint and no longer managed by vRA.

    Steps to remove these entries from the database

    After the database backup has been taken, take a snapshot of the vRA appliance.
    Connect to the vRA Postgres database. These entries are present in the cat_resource and cat_resource_owners tables.
    Filter the active Puppet entries using the query below:

    select * FROM public.cat_resource WHERE resourcetype_id = 'ConfigManagement.Puppet' AND status = 'ACTIVE';

    Once the above query is executed, you are presented with all active Puppet entries; these should match the entries in the UI.
    Before we delete entries from cat_resource, we need to remove the references from cat_resource_owners. Using the value in the id column from the query above, execute the following query against cat_resource_owners:

    select * FROM public.cat_resource_owners WHERE resource_id = 'XXXXX';

    You should be presented with one result; then delete it:

    DELETE FROM public.cat_resource_owners WHERE resource_id = 'XXXXX';

    Now, for the resource_id used in the previous query, select the binding ID and remove the row from the cat_resource table:

    select * FROM public.cat_resource WHERE binding_id = 'YYYYY';

    then delete this entry from the database:

    DELETE FROM public.cat_resource WHERE binding_id = 'YYYYY';

    Now refresh the vRA portal; the removed Puppet entry will no longer be present.

    #vRealizeAutomation
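    The same clean-up can be wrapped in a single transaction so it either fully applies or rolls back; a sketch, with the IDs as placeholders taken from the SELECT queries above (database backup and appliance snapshot first, as noted):

        #!/usr/bin/env bash
        # Sketch: remove one stale Puppet configuration item inside a transaction.
        # RESOURCE_ID / BINDING_ID are placeholders - fill them in from the SELECTs above.
        set -euo pipefail
        RESOURCE_ID='XXXXX'
        BINDING_ID='YYYYY'

        su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac" <<SQL
        BEGIN;
        DELETE FROM public.cat_resource_owners WHERE resource_id = '${RESOURCE_ID}';
        DELETE FROM public.cat_resource WHERE binding_id = '${BINDING_ID}';
        COMMIT;
        SQL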

  • Removing an existing vRA License

    I came across a situation where, even though the product was licensed, it reported that the license was not found in the database. Re-applying the same license did not work and it was reported as invalid.

    While going through /var/log/vmware/vcac/catalina.log I found JDBC connection errors while trying to access the database entry where the licensed assets were stored. Also, /var/log/vmware/messages clearly stated that the license was missing or not found.

    Follow the steps below to delete an existing license in vRA.

    ** Before attempting these steps, take a snapshot and a valid backup of the vRA database (vPostgres) **

    SSH or putty into the vRealize Automation appliance as root.
    Take a backup of the vRealize Automation database.
    Stop the vRealize Automation service:
    service vcac-server stop
    Change directory:
    cd /tmp
    Run this command to create a copy of the database in /tmp:
    su -m -c "/opt/vmware/vpostgres/current/bin/pg_dumpall -c -f /tmp/vcac.sql" postgres
    Run this command to compress the database dump:
    bzip2 -z /tmp/vcac.sql
    Connect to the vPostgres database:
    su - postgres
    psql vcac
    Verify that the embeddedlicenseentry table is present in the database:
    \dt embeddedlicenseentry
    Review how many rows are present in the table (there should be roughly 36-37 entries):
    SELECT * FROM embeddedlicenseentry;
    Delete all information from the embeddedlicenseentry table:
    DELETE FROM embeddedlicenseentry;
    Verify that the table is now empty:
    SELECT * FROM embeddedlicenseentry;
    Exit from the database:
    \q
    Restart the services on the appliance:
    vcac-vami service-manage stop vco-server vcac-server horizon-workspace elasticsearch
    vcac-vami service-manage start vco-server vcac-server horizon-workspace elasticsearch

    Once the services are back, if you go to the VAMI portal under vRA Settings --> Licensing, you should no longer see your previous license. Re-apply your existing license; it should now be accepted.

    For distributed environments:

    2 vRA appliance scenario: perform the database steps on the MASTER node (you can identify the MASTER from vRA Settings --> Database), then restart services on both nodes.
    3 vRA appliance scenario: change the database to ASYNCHRONOUS mode. Once done, shut down both non-MASTER appliances. Since you are left with only the MASTER node, follow the same steps as above. Once the key is accepted, bring the other nodes online. Once all services are registered, change the database mode back to SYNCHRONOUS.

    Ensure there are proper backups before performing any of these steps.

    #vRealizeAutomation
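    The same single-node sequence as one outline sketch (the commands are the ones listed above; take the snapshot first and verify each step manually in production rather than running this blindly):

        #!/usr/bin/env bash
        # Sketch: clear the embedded license table on a single vRA appliance.
        set -euo pipefail

        # Stop the vRA service before touching the database
        service vcac-server stop
        cd /tmp

        # Dump and compress the vPostgres database as a safety copy
        su -m -c "/opt/vmware/vpostgres/current/bin/pg_dumpall -c -f /tmp/vcac.sql" postgres
        bzip2 -z /tmp/vcac.sql

        # Empty the embeddedlicenseentry table
        su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac -c 'DELETE FROM embeddedlicenseentry;'"

        # Restart the appliance services
        vcac-vami service-manage stop vco-server vcac-server horizon-workspace elasticsearch
        vcac-vami service-manage start vco-server vcac-server horizon-workspace elasticsearch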

  • Upgrade vROps 6.6.1 to 6.7 - Runbook

    Pre-requisites

    Take a snapshot of all virtual appliances that make up your vROps cluster. Per VMware's update instructions, you should uncheck the option to snapshot the virtual appliance memory and uncheck the option to quiesce the guest operating system.
    Download the vRealize Operations Manager virtual appliance update files:
    vRealize_Operations_Manager-VA-OS-6.7.0.8183616.pak
    vRealize_Operations_Manager-VA-6.7.0.8183616.pak

    Procedure

    One of the many advantages of the virtual appliance deployment of vRealize Operations is that it is not only easy to deploy but also very easy to upgrade. Both the operating system and the vRealize Operations application can be upgraded in a few clicks. Here's how to do it, step by step.

    Step 1: Log in to the vROps admin interface at https://<vROps FQDN or IP>/admin
    Step 2: Take the cluster offline.
    Step 3: Install the operating system update. Go to the Software Update panel and click Install a Software Update. Select the OS PAK update file that you downloaded and click Upload. Once the upload completes, staging starts. Accept the EULA, read the release and update information, and click Install.
    Step 4: Monitor the update progress. After some time the OS update will be complete and you can move on to the virtual appliance update.
    Step 5: Install the vRealize Operations Manager virtual appliance update. As in Step 3, go to the Software Update panel and click Install a Software Update. Follow the same series of steps shown above, but provide the virtual appliance update PAK file this time. Click Upload. Once the PAK upload and staging complete, read the release and update information, click Next, and click Install.
    Step 6: Installation completed.
    Step 7: Finally, we have our new version of vRealize Operations, 6.7.
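    Before uploading, it can be worth confirming the PAK files were not corrupted in transit; a small sketch, with the output to be compared manually against the checksums published on the download page:

        #!/usr/bin/env bash
        # Sketch: print checksums of the downloaded PAK files before uploading
        # them through the admin UI; compare against the values on My VMware.
        sha256sum vRealize_Operations_Manager-VA-OS-6.7.0.8183616.pak \
                  vRealize_Operations_Manager-VA-6.7.0.8183616.pak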

  • Manually deleting a directory in vRA 7.x

    There might be situations where you cannot delete a directory from the vRA portal. Recently I was working on an issue where the vRA directory deletion process threw an exception.

    The first thing that comes to mind for a vIDM or Directory Services issue in vRA is to check horizon.log. Checking horizon.log, I saw the following exception:

    2018-03-16 03:11:15,121 ERROR (Thread-132) [vsphere.local;3f2680e5-99e4-457d-a080-e3b692c9fd62;10.2.14.176;] com.vmware.horizon.connector.management.DirectoryConfigOperationsImpl - Directory Delete Exception :d025cfc3-3949-4c4b-b630-1beb4b7170bd
    com.vmware.horizon.common.api.components.exceptions.GroupDeleteException: group.delete.error
    at com.vmware.horizon.components.authorization.GroupServiceImpl.deleteGroupsByUserStore(GroupServiceImpl.java:1004)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.aop.support.AopUtils.invoke

    The vRA UI reports a system exception and asks you to contact the system administrator. We resolved this issue by connecting to the database and manually removing the configuration.

    Pre-Requisites

    Before you do this, ensure you have a valid snapshot and a recent backup of the vRA Postgres database. Click here to learn how to take a backup of the vPostgres vRA database.

    Once the pre-requisites are satisfied, SSH to the vRA appliance and connect to the vRA database:

    su - postgres
    /opt/vmware/vpostgres/current/bin/psql vcac

    Now change the schema to "saas" so that the vIDM tables can be accessed:

    set schema 'saas';

    Find the idOrganization and domain name:

    Select * from "Organizations";
    Select * from "Domain";

    DELETE from "GroupParent" where "groupUuid" in ( select "uuid" from "Group" where "idOrganization" = '<idOrganization>' and "domain" = '<domain>');
    DELETE FROM "UserGroup" where "idGroup" in ( select id from "Group" where "idOrganization" = '<idOrganization>' and "domain" = '<domain>');
    DELETE from "Group" where "idOrganization" = '<idOrganization>' and "domain" = '<domain>';
    update "DirectoryConfig" set "deleteInProgress" = 'f' where "id" = '<directory id>';

    Then delete the directory from the UI.

    Ensure all of these steps are done in a proper manner.

    #vRealizeAutomation
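    A sketch of the same clean-up wrapped in one transaction; ORG_ID, DOMAIN and DIR_ID are placeholders you must look up first with the SELECTs against "Organizations" and "Domain" above (the directory's id comes from the "DirectoryConfig" table):

        #!/usr/bin/env bash
        # Sketch: remove a stuck directory's vIDM records in one transaction.
        # Fill in the three placeholders from your own environment before running.
        set -euo pipefail
        ORG_ID='<idOrganization>'
        DOMAIN='<domain>'
        DIR_ID='<directory id>'

        su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac" <<SQL
        set schema 'saas';
        BEGIN;
        DELETE FROM "GroupParent" WHERE "groupUuid" IN
          (SELECT "uuid" FROM "Group" WHERE "idOrganization" = '${ORG_ID}' AND "domain" = '${DOMAIN}');
        DELETE FROM "UserGroup" WHERE "idGroup" IN
          (SELECT id FROM "Group" WHERE "idOrganization" = '${ORG_ID}' AND "domain" = '${DOMAIN}');
        DELETE FROM "Group" WHERE "idOrganization" = '${ORG_ID}' AND "domain" = '${DOMAIN}';
        UPDATE "DirectoryConfig" SET "deleteInProgress" = 'f' WHERE "id" = '${DIR_ID}';
        COMMIT;
        SQL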

  • vRealize Automation Appliance High Availability

    The vRealize Automation appliance supports active-active high availability for all components except the appliance database. Starting with the 7.3 release, database failover is automatic if three nodes are deployed and synchronous replication is configured between two of them. When the vRealize Automation appliance detects a database failure, it promotes a suitable database server to be the master. You can monitor and manage the appliance database from the Virtual Appliance Management Console on the vRA Settings > Database tab.

    The vRealize Automation appliance provides an environment where services run pre-configured and isolated. If several appliances are joined in a cluster, the following rules apply to all of them:

    On each node there is an equal set of running services.
    All services share a common database.
    vRA, vRO and Horizon are accessed through an external load balancer.
    On one node the database runs in Master mode, while on the remaining nodes the databases are configured in Replica mode.
    When a new node joins the cluster, it accepts the same configuration as the rest of the nodes in the cluster.

    The services running inside the virtual appliance that are managed by the automated HA deployment are:

    vRA core services (vcac-server) - the heart of vRA; starting it starts all services.
    vRO (vco-server and vco-configurator).
    vIDM (horizon-workspace) - provides the authentication framework in the form of VMware Identity Manager, which is embedded in the appliance.
    vPostgres (vpostgres) - the embedded Postgres database that contains the tables for vRA, vRO and vIDM.
    Health (vrhb-service) - reports the functional health of select suite components, namely vRA and vRO.

    Setting these services up in HA mode is part of the automated process that runs when a new vRA appliance is joined to an already running vRA. Once the new appliance joins the cluster, all services are automatically configured in cluster mode and synchronized with the cluster.

    #vRealizeAutomation
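    The VAMI vRA Settings > Database tab is the supported place to check the master/replica roles. As a quick command-line sketch (my own addition, not a step from this post), the standard PostgreSQL replication views can also be queried on the appliance:

        #!/usr/bin/env bash
        # Sketch: inspect appliance database replication from the shell (run as
        # root on the node you believe is the master). pg_is_in_recovery() returns
        # 'f' on the master and 't' on a replica; pg_stat_replication lists the
        # replicas currently streaming from this node.
        su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac -c 'SELECT pg_is_in_recovery();'"
        su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'"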

  • Upgrading vRealize Automation

    Recently I worked on a project to upgrade vRealize Automation from 7.0.1 to 7.3. I would like to share my experience of how the upgrade took place and the various issues encountered while implementing it.

    We had to execute the upgrade in two phases:
    Upgrade from 7.0.1 to 7.2
    Once 7.2 validation is complete, upgrade to 7.3
    This was done in two phases because an in-place upgrade from 7.0.1 to 7.3 is not supported.

    Pre-Requisites

    Ensure the databases have been backed up - both IaaS (contact your DBA) and the vRA Postgres database.
    Take snapshots of all virtual machines (vRA appliances and IaaS machines).
    Ensure the vRA components are in a working and stable condition, and correct all issues before starting the upgrade.
    Check for any space issues on the root partition of the appliances.
    Check the /storage/log subfolder and remove any old archived ZIP files to free up space.
    Ensure you have access to the database; if your access is restricted, ensure a DBA is around to help in case you need to revert after a failure.
    The system must be unavailable for users to submit new requests and for any applications that query vRealize Automation.

    Upgrade from 7.0.1 to 7.2

    Appliance Upgrade

    Create a file called disable-iaas-upgrade under /tmp on all vRA appliances, as we do not want to perform an automated IaaS upgrade:
    /tmp/disable-iaas-upgrade
    Mount the 7.2 vRA ISO image to the appliances (VMware-vR-Appliance-7.2.0.381-4660246-updaterepo.iso).
    In the VAMI portal, ensure the Update Appliance settings point to CD-ROM updates.
    Click "Check Updates"; you will see information about the existing version and the version you are upgrading to. Accept and proceed with the upgrade.
    You may use the following logs to see how the upgrade is progressing:
    /opt/vmware/var/log/vami/vami.log
    /opt/vmware/var/log/vami/updatecli.log
    /var/log/messages
    Once the upgrade is complete, the VAMI asks you to reboot the appliances; once that is done, the vRA appliances are upgraded to 7.2.

    IaaS Upgrade

    Ensure the Java version is Java 8 build 91 or above.
    Download the IaaS installer from the vRA VAMI (https://vrapphostname:5480/installer).
    Execute the IaaS install on all IaaS nodes in this order:
    IaaS Websites, starting with the one where Model Manager Data is installed and then continuing with the other nodes
    Manager Services - ensure the active node is upgraded before the passive node
    DEM Orchestrators and Workers - upgrade one box after another
    Agents - one node after another
    Management Agents on all nodes are upgraded automatically; no manual intervention is needed.
    Check the cluster status:
    rabbitmqctl cluster_status
    Ensure all services are in "REGISTERED" status in the VAMI portal of the vRA nodes.
    Upgrade the vRO appliances if vRO is external.

    Issues encountered during the upgrade to 7.2

    Exception: while running the vRA upgrade from the mounted ISO image, the post-upgrade phase of postgresql on the appliances failed:

    + echo 'Script /etc/bootstrap/postupdate.d/09-90-prepare-psql failed, error status 1'
    + exit 1
    + exit 1
    + trapfunc
    + excode=1
    + test 1 -gt 0
    + vami_update_msg set post-install 'Post-install: failed'
    + test -x /usr/sbin/vami-update-msg
    + /usr/sbin/vami-update-msg set post-install 'Post-install: failed'
    + sleep 1
    + test 1 -gt 0 -o 0 -gt 0
    + vami_update_msg set update-status 'Update failed (code 0-1). Check logs in /opt/vmware/var/log/vami or retry update later.'
    + test -x /usr/sbin/vami-update-msg
    + /usr/sbin/vami-update-msg set update-status 'Update failed (code 0-1). Check logs in /opt/vmware/var/log/vami or retry update later.'
    + exit
    21/10/2017 01:12:41 [ERROR] Failed with exit code 256
    21/10/2017 01:12:41 [INFO] Update status: Running VMware tools reconfiguration
    21/10/2017 01:12:41 [INFO] Running /opt/vmware/share/vami/vami_reconfigure_tools
    vmware-toolbox-cmd is /usr/bin/vmware-toolbox-cmd
    Configuring VAMI VMware tools service wrapper.
    21/10/2017 01:12:41 [INFO] Update status: Done VMware tools reconfiguration
    21/10/2017 01:12:41 [INFO] Update status: Error while running post-install scripts
    21/10/2017 01:12:41 [ERROR] Failure: updatecli exiting abnormally
    21/10/2017 01:12:41 [INFO] Install Finished

    Resolution: ensure clustering is working as expected between the primary and secondary vRA nodes.

    Exception: the Model Manager Data upgrade fails with an error stating

    [21/10/2017 8:09:09 PM] "E:\Program Files (x86)\VMware\vCAC\Server\Model Manager Data\RepoUtil.exe" Model-Uninstall -f DynamicOps.ManagementModel.dll -v
    [21/10/2017 8:09:11 PM] System.InvalidOperationException: File DynamicOps.ManagementModel.dll not found.

    Resolution: when the above failure occurred, we had to revert the IaaS node hosting Model Manager Data and then the SQL server hosting the database. Once done, we had to uninstall and reinstall MSDTC. After these steps we restarted the upgrade, which fixed the upgrade failure on IaaS.

    Upgrade from 7.2 to 7.3

    Appliance Upgrade: same steps as the previous upgrade to 7.2. The only difference is that when you start upgrading the primary vRA node, it automatically upgrades the second one as well.
    IaaS Upgrade: same steps as the previous upgrade to 7.2.

    Issues encountered during the upgrade to 7.3

    Exception: although the vRA upgrade was successful, after rebooting the nodes none of the services apart from the component registry were showing up as registered. Browsing through the logs, we found that horizon.log on the primary vRA appliance was full of postgres errors:

    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [spring/datastore-wireup.xml]: Invocation of init method failed; nested exception is liquibase.exception.MigrationFailedException: Migration failed for change set changelog-0008-2015-H2.xml::1::HHW-58146: Reason: liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE saas."EncryptionKeys" DROP CONSTRAINT encryptionkeys_pkey: ERROR: constraint "encryptionkeys_pkey" of relation "EncryptionKeys" does not exist: Caused By: Error executing SQL ALTER TABLE saas."EncryptionKeys" DROP CONSTRAINT encryptionkeys_pkey: ERROR: constraint "encryptionkeys_pkey" of relation "EncryptionKeys" does not exist: Caused By: ERROR: constraint "encryptionkeys_pkey" of relation "EncryptionKeys" does not exist
    2017-10-22 01:41:43,410 ERROR (localhost-startStop-1) [;;;] org.springframework.web.context.ContextLoader - Context initialization failed
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [spring/datastore-wireup.xml]: Invocation of init method failed; nested exception is liquibase.exception.MigrationFailedException: Migration failed for change set changelog-0008-2015-H2.xml::1::HHW-58146: Reason: liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE saas."EncryptionKeys" DROP CONSTRAINT encryptionkeys_pkey: ERROR: constraint "encryptionkeys_pkey" of relation "EncryptionKeys" does not exist: Caused By: Error executing SQL ALTER TABLE saas."EncryptionKeys" DROP CONSTRAINT encryptionkeys_pkey: ERROR: constraint "encryptionkeys_pkey" of relation "EncryptionKeys" does not exist: Caused By: ERROR: constraint "encryptionkeys_pkey" of relation "EncryptionKeys" does not exist
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1628)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)

    Resolution: encountering this issue does not mean that your upgrade failed. After the upgrade there are new liquibase changelogs which perform database changes, if any, between versions. The failure occurs because the CONSTRAINT was not present. Check the details below.

    Log in to the vRA database, then:
    set schema 'saas';
    \x (to turn expanded display on)
    \d+ "EncryptionKeys"

    The output of these commands shows the table definition, which is missing the primary key constraint. Before you create the constraint, verify whether there are duplicate key ids. If there are, the ALTER command will fail with the following error:

    ALTER TABLE "EncryptionKeys" ADD CONSTRAINT "encryptionkeys_pkey" PRIMARY KEY(id);
    ERROR: could not create unique index "encryptionkeys_pkey"
    DETAIL: Key (id)=(1) is duplicated.

    So now identify the ids present in this table. In my case there were 8 rows, of which the first two had the same id, as shown below:

    id | 1
    uuid | 6e4c6867-377e-4ce8-9f76-102da90951a7
    keyContainer | __GLOBAL_REST_KEY_CONTAINER__
    algorithmId | bbdec4a9-dbaa-49bd-95ed-11fd6cff26f3
    keySize | 1024

    id | 1
    uuid | 6d1d29ae-4bb6-487e-a902-2ddb798f9c3c
    keyContainer | VSPHERE.LOCAL
    algorithmId | 1f932d40-0204-11e2-a21f-0800200c9a66
    keySize | 256

    Observe carefully that the id is the same. Select a number for the id that is not present in the output of the previous command and then change the id of one of the duplicate rows.

    Select the UUID of the row to change (the uuid value will vary from environment to environment):
    select id,uuid from "EncryptionKeys" where uuid = '6e4c6867-377e-4ce8-9f76-102da90951a7';

    Update that row with the new id:
    update saas."EncryptionKeys" set id=8 where uuid='6e4c6867-377e-4ce8-9f76-102da90951a7';

    Verify that the new id is in place:
    select id,uuid from "EncryptionKeys" where uuid = '6e4c6867-377e-4ce8-9f76-102da90951a7';

    Once done, run the ALTER TABLE command to create the constraint on the table:
    ALTER TABLE "EncryptionKeys" ADD CONSTRAINT "encryptionkeys_pkey" PRIMARY KEY(id);

    Verify this using the \d+ "EncryptionKeys" command; you will now see that the primary key is present. Restart the vRA services and all services will be shown as "REGISTERED" on both nodes in a distributed deployment, or on the single node in a simple installation.

    This issue is predominantly seen on version upgrades from 7.0 and 7.0.1 to 7.3.

    Happy Upgrades

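    A condensed sketch of that EncryptionKeys fix, using the example UUID and the new id value from this post; back up the database and take snapshots first, and confirm the duplicate rows with the SELECT before updating anything:

        #!/usr/bin/env bash
        # Sketch: resolve the duplicate EncryptionKeys id and recreate the primary key.
        # The UUID below is the example from this post; replace it (and NEW_ID)
        # with the values found in your own environment.
        set -euo pipefail
        DUP_UUID='6e4c6867-377e-4ce8-9f76-102da90951a7'
        NEW_ID=8

        su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac" <<SQL
        set schema 'saas';
        -- Inspect the rows first; look for a duplicated id value
        SELECT id, uuid FROM "EncryptionKeys" ORDER BY id;
        -- Give the duplicated row a free id, then recreate the primary key
        UPDATE "EncryptionKeys" SET id = ${NEW_ID} WHERE uuid = '${DUP_UUID}';
        ALTER TABLE "EncryptionKeys" ADD CONSTRAINT "encryptionkeys_pkey" PRIMARY KEY (id);
        SQL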