
Search Results


  • Connecting to the RabbitMQ POD and checking its status

    Connect to the POD:
    kubectl -n prelude exec -ti rabbitmq-ha-0 bash
    Check the cluster status from inside the POD:
    rabbitmqctl cluster_status
    Type exit to get out of the POD.
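    The two steps above can be collapsed into a single non-interactive command. A minimal sketch; KUBECTL is a variable I'm introducing (not from the post) so the call can be previewed without cluster access:

```shell
# Run rabbitmqctl inside the pod directly, without opening an interactive shell.
# KUBECTL defaults to kubectl; point it at a stub to preview the command.
KUBECTL="${KUBECTL:-kubectl}"
rmq_status() {
  # $1 selects the pod ordinal (rabbitmq-ha-0/1/2); defaults to 0
  "$KUBECTL" -n prelude exec -ti "rabbitmq-ha-${1:-0}" -- rabbitmqctl cluster_status
}
```

    Usage: rmq_status 0 prints the cluster status as seen from rabbitmq-ha-0.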

  • Restarting RabbitMQ systematically in vRealize Automation 8.x

    The distributed architecture in vRealize Automation 8.x has 3 nodes, and each node is part of the RabbitMQ cluster. It is important that the RabbitMQ pods are restarted sequentially. The order does not matter: kill one pod, wait for it to come up (be up and running), then do the same for the second one, and finally the last one. To restart a pod, just delete it by typing:
    kubectl -n prelude delete pod rabbitmq-ha-0
    Wait until the pod is up and running; you can check by calling:
    kubectl -n prelude get pods -l app=rabbitmq-ha
    The result should be something like:
    NAME            READY   STATUS    RESTARTS   AGE
    rabbitmq-ha-0   1/1     Running   0          1m
    rabbitmq-ha-1   1/1     Running   0          1d
    rabbitmq-ha-2   1/1     Running   0          1d
    In the end, all 3 pods should have been restarted. Verify that the RabbitMQ cluster status is OK by executing the command below (remember, this should be on a single line):
    for n in {0,1,2}; do echo node-$n; kubectl -n prelude exec -ti rabbitmq-ha-$n -- rabbitmqctl cluster_status; done
    Each of the RMQ cluster nodes should see all 3 members. Then restart the EBS pods. The order doesn't matter, but it is important to restart all 3 of them:
    kubectl -n prelude delete pod ebs-app--
    The pod names can be queried by typing:
    kubectl -n prelude get pods -l app=ebs-app
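    The delete/wait/repeat cycle can be scripted with kubectl wait. A sketch, not from the original post; KUBECTL is an overridable variable so the generated commands can be previewed off-cluster:

```shell
# Restart the three RabbitMQ pods one at a time, waiting for each to become
# Ready before touching the next. Point KUBECTL at a stub to preview.
KUBECTL="${KUBECTL:-kubectl}"
restart_rmq_sequentially() {
  for n in 0 1 2; do
    "$KUBECTL" -n prelude delete pod "rabbitmq-ha-$n"
    "$KUBECTL" -n prelude wait --for=condition=Ready "pod/rabbitmq-ha-$n" --timeout=300s
  done
}
```

    kubectl wait blocks until the pod reports Ready (or the timeout expires), which is exactly the "wait for it to come up" step above.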

  • Integrating vRA 8.x with vRLI 8.x

    You can forward logs from vRealize Automation to vRealize Log Insight to take advantage of more robust log analysis and report generation. vRealize Automation is bundled with a fluentd-based logging agent. The agent collects and stores logs so that they can be included in a log bundle and examined later. You can configure the agent to forward a copy of the logs to a vRealize Log Insight server by using the vRealize Log Insight API, which allows other programs to communicate with vRealize Log Insight. In a high availability (HA) environment, logs are tagged with different hostnames, depending on the node that they originated on. The environment tag is configurable by using the --environment ENV option as described below in the Configure or update integration of vRealize Log Insight section; in an HA environment, the environment tag has the same value for all log lines, regardless of the node they originated on.
    Check the existing configuration of vRealize Log Insight:
    $ vracli vrli
    No vRLI integration configured
    Configure the vRA <--> vRLI integration:
    vracli vrli set [options] FQDN_OR_URL
    Arguments:
    FQDN_OR_URL - the FQDN or IP address of the vRealize Log Insight server that is to be used to post logs by using the vRealize Log Insight API. Port 443 and an HTTPS scheme are used by default; if any of these settings must be changed, use a URL instead.
    Options:
    --agent-id SOME_ID - set the ID of the logging agent for this appliance. The default value is 0. Use it to identify the logging agent for logs that are posted to vRealize Log Insight.
    --environment ENV - set an identifier for the current environment. It will be available in vRealize Log Insight as a tag on each log line. The default value is prod.
    --ca-file /path/to/server-ca.crt - specify a file that contains the certificate authority (CA) certificate that was used to sign the vRealize Log Insight server certificate. This forces the logging agent to trust the specified CA and enables it to verify the certificate of the vRealize Log Insight server. The file can contain a whole certificate chain if needed to verify the certificate; in the case of a self-signed certificate, pass the certificate itself.
    --ca-cert CA_CERT - specify the certificate in the same manner as --ca-file, but pass the certificate (chain) inline as a string.
    --insecure - disable SSL verification of the server certificate. This forces the logging agent to accept any SSL certificate when posting logs.
    The command I executed in my lab:
    vracli vrli set https://labvrli.prem.com:9543 -e labvraenv --insecure -id labvra
    Checking the configuration post-execution, I could immediately see logs in my vRLI 8.x environment. Click here for the doco link.
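    The check/set cycle can be wrapped for repeatability. A sketch using the long-form flags documented above; the example URL is a placeholder, and VRACLI is a variable I'm introducing (not part of the product) so the calls can be previewed off-appliance:

```shell
# Configure the vRLI integration, then immediately re-check it.
# VRACLI defaults to vracli; point it at a stub to preview the calls.
VRACLI="${VRACLI:-vracli}"
configure_vrli() {
  # $1 = environment tag, $2 = agent id, $3 = vRLI FQDN or URL
  "$VRACLI" vrli set --environment "$1" --agent-id "$2" --insecure "$3"
  "$VRACLI" vrli    # show the resulting configuration
}
# e.g. configure_vrli labvraenv labvra https://labvrli.prem.com:9543
```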

  • vRealize Automation 8.x and Google Cloud Platform Integration

    vRealize Automation 8.x can integrate with several cloud providers. As a cloud administrator, you can create a Google Cloud Platform (GCP) cloud account for the account regions to which your team will deploy vRealize Automation blueprints. To integrate your GCP environment, select Cloud Assembly > Infrastructure > Connections > Cloud Accounts and create each of the cloud integrations you require.
    GCP Configuration
    For GCP there is some preparation we need to do in the GCP Console, so let's walk through the prerequisites.
    Step-1: Capture Project Info. As a first step, log in to the GCP Console and make a note of the Project ID.
    Step-2: Enable the Compute Engine API. The GCP project you have created for your environment must have the Compute Engine API enabled. Log in to the GCP Console and select your project. From the menu, select APIs & Services > Enable APIs and Services. In the Search for APIs & Services box, type Compute Engine API and select the Compute Engine API. Select Enable to enable the API and wait for this to complete, as it may take up to a minute.
    Step-3: Create a Service Account. After enabling the Compute Engine API, go ahead and create a service account. Once it is created, you can click on the account to see its Unique ID and email information. You can also create keys and use them for authentication.
    Step-4: Assign Roles to the Service Account. Once the account is created, assign the appropriate roles to it. The service account requires the roles listed below to enable full functionality with vRA 8.x:
    Compute Engine > Compute Admin
    Kubernetes Engine > Kubernetes Engine Admin
    Service Accounts > Service Account User
    Pub/Sub > Pub/Sub Admin
    As you can see above, all the roles have been assigned to the user.
    Step-5: Create a Key. The next step, granting users access to this service account, is optional, but it may be relevant based on your organization's policies. Once you have reviewed the page to see if it applies, select Create Key. This creates a JSON file that your browser downloads. Save it somewhere you can retrieve it later; once it is stored securely, select Done. The downloaded file will be used as input when you create the cloud account.
    vRA Configuration
    Step-6: Create a Cloud Account. You create a Google Cloud Platform cloud account by using the credentials. Click IMPORT JSON KEY and point to the JSON file you saved in Step-5. Once saved, the cloud account is created. After the cloud account is created, data collection and image synchronization are performed in the background; once they complete, a message is displayed at the top of the pane.
    Step-7: After the cloud account is created, a cloud zone is created automatically.
    Step-8: Create a Project and Map the Cloud Zone. Now create a new project and map it to a cloud zone. Once the project is created, map users/groups. Then go to the Provisioning tab and map the cloud zone to the project. The final summary looks this way.
    Step-9: Create a Flavor Mapping. You create a flavor mapping for the Google Cloud Platform cloud zone. In the left pane, select Flavor Mappings under Configure. Click +NEW FLAVOR MAPPING. In the Flavor name text box, enter VMW-GCP-Small. In the Account/Region search box, select inner-cinema-280415 / asia-southeast1. In the Value search box, select g1-small. Click CREATE.
    Step-10: Create an Image Mapping. You create a CentOS image for the Google Cloud Platform cloud zone. In the left pane, select Image Mappings under Configure. Click +NEW IMAGE MAPPING. In the Image name text box, enter GCP-CentOS. In the Account/Region search box, select inner-cinema-280415 / asia-southeast1. In the Image search box, type CentOS-8 and select centos-8-v20191002. Click CREATE.
    Step-11: Create a Blueprint. You create a simple blueprint by using the Google Cloud Platform flavor and image. Click the Blueprints tab. Click +NEW. In the Name text box, enter PREMGCP-CentOS; in the Description text box, enter CentOS Blueprint for GCP. In the Project search box, select PREMGCP. Click CREATE. In the left pane under the Cloud Agnostic section, drag the Network component onto the design canvas. In the left pane under the GCP section, drag the Machine component onto the design canvas. Click Cloud_GCP_Machine in the design canvas; a small bubble appears, and a line connects it to Cloud_Network. In the YAML editor, click image: and select PREMGCP-CentOS; click flavor: and select PREMGCP-Small. Perform a TEST to check whether your blueprint configuration is correct; if it is, the test results are shown as above. Click DEPLOY.
    Step-12: Deploy a Machine Using the Blueprint. Select the blueprint from Cloud Assembly and click Deploy. Type the deployment name, blueprint version and description, then Deploy.
    Step-13: Monitor the Deployment in vRA and the Google Cloud Platform Console. The various events after we deploy a VM can be seen below. VM creation events can be seen on the Google Cloud Console as well, and you can see the deployed VM there too.
    Step-14: Details of the VM can be seen below.
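    Steps 3 to 5 can also be done from the gcloud CLI instead of the console. A hedged sketch: the role IDs are my mapping of the console role names above (Compute Admin, Kubernetes Engine Admin, Service Account User, Pub/Sub Admin), the project and account names are examples, and GCLOUD is a variable I'm introducing so the commands can be previewed before running:

```shell
# Create the service account, grant the four roles from Step-4, and download
# a JSON key (Step-5). Point GCLOUD at a stub to preview instead of executing.
GCLOUD="${GCLOUD:-gcloud}"
create_vra_service_account() {
  local project="$1" sa="$2"
  local member="serviceAccount:$sa@$project.iam.gserviceaccount.com"
  "$GCLOUD" iam service-accounts create "$sa" --project "$project"
  for role in roles/compute.admin roles/container.admin \
              roles/iam.serviceAccountUser roles/pubsub.admin; do
    "$GCLOUD" projects add-iam-policy-binding "$project" --member "$member" --role "$role"
  done
  "$GCLOUD" iam service-accounts keys create vra-gcp-key.json \
      --iam-account "$sa@$project.iam.gserviceaccount.com"
}
# e.g. create_vra_service_account inner-cinema-280415 vra8-integration
```

    The generated vra-gcp-key.json is the file you import in Step-6.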

  • Having Multi-Tenancy enabled in vRO causes vRA Upgrade failures during the pre-update phase

    Failures are seen during a vRA upgrade through vRSLCM 2.1 during the preupdate phase of the vRA appliance upgrade. The upgrade was being performed from vRA 7.4 to 7.6; vRSLCM was on version 2.1 Patch 2.
    /var/log/bootstrap/preupdate.log
    Running a check on replicas:
    Executing a script on multiple cluster nodes...
    2020-05-28 05:34:18+00:00 /etc/bootstrap/preupdate.d/00-00-01-va-resources-check done, succeeded.
    2020-05-28 05:34:18+00:00 /etc/bootstrap/preupdate.d/00-00-02-check-replica-availability starting...
    Executing a script on multiple cluster nodes...
    2020-05-28 05:34:26+00:00 /etc/bootstrap/preupdate.d/00-00-02-check-replica-availability done, succeeded.
    2020-05-28 05:34:26+00:00 /etc/bootstrap/preupdate.d/00-00-02-check-vro-duplicates starting...
    <type 'exceptions.TypeError'> Python 2.7.14: /usr/bin/python Thu May 28 05:34:28 2020
    A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.
    /etc/bootstrap/preupdate.d/00-00-02-check-vro-duplicates in ()
        51     for dup in duplicates_check_result:
        52         dup_line = "{} items in table {}: {} {}".format(str(dup.get('?column?')), dup.get('type'),
    =>  53                                                         '' if len(dup.get('categoryid')) == 0 else "ID=" + dup.get('categoryid'),
        54                                                         '' if len(dup.get('name')) == 0 else "NAME=" + dup.get('name'))
        55         msg_html += "<p style='margin-left:20px'>- {}</p>".format(dup_line)
    builtin len = <built-in function len>, dup = {'?column?': 1L, 'type': 'vmo_configelementcategory', 'categoryid': None, 'name': 'wbg_SQL'}, dup.get = <built-in method get of RealDictRow object>
    <type 'exceptions.TypeError'>: object of type 'NoneType' has no len()
        args = ("object of type 'NoneType' has no len()",)
        message = "object of type 'NoneType' has no len()"
    The check fails because categoryid is None for one of the duplicate rows, and calling len() on None raises the TypeError.

  • vRA HF Pre-Check Failure: xenon-service is not running and release-management is UNAVAILABLE

    vRealize Automation PRECHECK might fail with an exception stating that XenonService is not running and the release-management service is UNAVAILABLE. The solution is to start the respective underlying services.
    For the Xenon service:
    xenon-service start
    For the release-management service:
    service tekton-server start
    Once the above commands complete successfully, re-run the precheck and you should be able to proceed with the next step of patching.

  • vRealize Automation Postgres database status is down and inconsistent

    Even though all services are registered on an appliance and vpostgres is actually up and running, you might see an exception in the VAMI stating that the database status is inconsistent and down. One of the reasons for this is that replication from the MASTER to the REPLICAs is not happening properly. When we check the status of Postgres we get a message stating that pgdata is not a database cluster directory and postgresql.auto.conf is missing.
    [replica] vranode2:/storage/db/pgdata # service vpostgres status
    Last login: Fri Jul 24 10:31:31 UTC 2020
    LOG: skipping missing configuration file "/var/vmware/vpostgres/current/pgdata/postgresql.auto.conf"
    pg_ctl: directory "/var/vmware/vpostgres/current/pgdata" is not a database cluster directory
    When we check the MASTER's pgdata structure we see the following files:
    [master] mum01-2-vra01:/storage/db/pgdata # ls -ltrh
    total 192K
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_twophase
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_tblspc
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_snapshots
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_serial
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_replslot
    drwx------ 4 postgres users 4.0K Mar 28  2019 pg_multixact
    drwx------ 4 postgres users 4.0K Mar 28  2019 pg_logical
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_dynshmem
    drwx------ 2 postgres users 4.0K Mar 28  2019 pg_commit_ts
    -rw------- 1 postgres users    4 Mar 28  2019 PG_VERSION
    -rw------- 1 postgres users 1.6K Mar 28  2019 pg_ident.conf
    -rw------- 1 postgres users 1.7K Jun 17 14:32 server.key
    -rw-r--r-- 2 postgres users 4.4K Jun 17 14:32 server.crt
    -rw------- 1 postgres users  22K Jun 17 14:32 postgresql.conf.bak
    -rw------- 1 postgres users 5.0K Jun 17 14:35 pg_hba.conf
    drwx------ 8 postgres users 4.0K Jun 23 07:58 base
    -rw------- 1 root     root   22K Jul 22 13:53 postgresql.conf.bak22072020
    -rw------- 1 postgres users  272 Jul 22 14:02 postgresql.auto.conf
    drwx------ 2 postgres users 4.0K Jul 23 13:00 pg_log
    -rw------- 1 postgres users  22K Jul 24 09:25 postgresql.conf
    -rw------- 1 postgres users   85 Jul 24 09:54 postmaster.pid
    -rw------- 1 postgres users   83 Jul 24 09:54 postmaster.opts
    drwx------ 2 postgres users 4.0K Jul 24 09:54 pg_stat
    drwx------ 2 postgres users 4.0K Jul 24 09:54 pg_notify
    -rw------- 1 postgres users 6.8K Jul 24 09:54 serverlog
    drwx------ 2 postgres users 4.0K Jul 24 10:19 global
    drwx------ 2 postgres users 4.0K Jul 24 10:30 pg_subtrans
    drwx------ 2 postgres users 4.0K Jul 24 10:30 pg_clog
    drwx------ 3 postgres users 4.0K Jul 24 10:31 pg_xlog
    drwx------ 2 postgres users 4.0K Jul 24 10:33 pg_stat_tmp
    [master] mum01-2-vra01:/storage/db/pgdata #
    There is one file that is the odd man out. The file postgresql.conf.bak.<> looks like a file created by an admin trying to back up postgresql.conf. Remember, whenever you take a backup of any particular file, place it in a folder in a separate location rather than in the same one. The moment we removed this manual backup file, the "database inconsistent" message under the VAMI's Cluster tab disappeared and both replica nodes showed status UP.
    Moral of the story: do not place any manual backups under the pgdata folder.
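    To follow the moral of the story, keep config backups out of pgdata entirely. A small sketch (the destination path is just an example):

```shell
# Copy a config file into a separate backup directory with a dated suffix,
# instead of leaving *.bak files inside pgdata where they confuse the status check.
backup_conf() {
  local file="$1" dest="$2"
  mkdir -p "$dest"
  cp "$file" "$dest/$(basename "$file").bak.$(date +%Y%m%d)"
}
# e.g. backup_conf /storage/db/pgdata/postgresql.conf /storage/config-backups
```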

  • vRA Migration fails with Exception: Invalid data format

    I was involved in a vRealize Automation 7.x migration project, and we encountered a failure at the point where the vRA license was being backed up. Let me share some snippets from the migration logs:
    [2020-07-21 02:22:24.523079] [e87799dbb0c348cf8e37f98861c3c135:Pending] Obtain migration package from the source vRealize Automation appliance.
    [2020-07-21 02:22:24.523216] [3ad82c24b47944ce8e9d44309ab719d3:Pending] Back up vRealize Automation license.
    [2020-07-21 02:22:24.523273] [3b568e500a5940f19a4758377dd31185:Pending] Stop vRealize Automation services on cluster node (sevenvra.prem.com).
    [2020-07-21 02:22:24.523333] [2fc1c91db692483c806eef983f64f36b:Pending] Stop vRealize Automation services.
    [2020-07-21 02:22:24.523395] [5d593b9319624e26a0614ebc08ce4888:Pending] Prepare vRealize Automation appliance for database migration.
    [2020-07-21 02:22:24.523457] [cf3c61d6eb464388abaf5114658d6c3b:Pending] Migrate vRealize Automation database.
    [2020-07-21 02:22:24.523511] [6650bbaa755047bfabb70e3abcdcd7c0:Pending] Re-encrypt sensitive vRealize Automation configuration.
    [2020-07-21 02:22:24.523561] [1fd72029646b4f8f8f0f93b87889eb53:Pending] Upgrade migrated vRealize Automation database.
    [2020-07-21 02:22:24.523609] [e5ab28549aa64aec9374296a1bcef976:Pending] Reconfigure vRealize Automation database failover service.
    [2020-07-21 02:22:24.523656] [99542458f82c4feb8da2dc3bce027461:Pending] Reconfigure vRealize Automation messaging service.
    [2020-07-21 02:22:24.523701] [14a8d96417d845d5a951903a237c68c4:Pending] Reconfigure Containers Management service.
    [2020-07-21 02:22:24.523746] [8120bdad9ec04974b9a95d010d7cac18:Pending] Reconfigure vRealize Health Broker service.
    [2020-07-21 02:22:24.523795] [2e4528d38bc849259fe1cf29603b5cae:Pending] Reconfigure default vRealize Automation tenant.
    [2020-07-21 02:22:24.523844] [ec5681ef323d4c3e9d8b3492a009cb76:Pending] Migrate embedded vRealize Orchestrator
    [2020-07-21 02:22:24.523892] [539ed8b0e9dd4534a8a6bc3a20de440b:Pending] Start vRealize Automation services.
    [2020-07-21 02:22:24.523939] [bb8b9eb8abbd490bb00f84eee747e2e1:Pending] Reconfigure cluster node (sevenvra.prem.com).
    [2020-07-21 02:22:24.523984] [96f1c4ce28e14f1a9ae8011407fc2283:Pending] Restart vRealize Automation services.
    [2020-07-21 02:22:24.524030] [d5d7431d4e91461a878f14a33b53a7d5:Pending] Migrate infrastructure node (sevenvra.prem.com).
    [2020-07-21 02:22:24.524075] [acdd2284cabb4a9abfbfd4e2b4106673:Pending] Restart vRealize Automation services.
    [2020-07-21 02:22:24.524120] [b42778bdc8c1462ead6c8b983a7f3ef5:Pending] Restore vRealize Automation license.
    [2020-07-21 02:22:24.524165] [347c7a7a30bc44f0b40c834160c7d013:Pending] Finalize migration.
    [2020-07-21 02:22:24.541154] Sequence initialized
    [2020-07-21 02:22:24.541232] Sequence state changed to [migration.ready]
    [2020-07-21 02:22:24.541287] Sequence execution started
    [2020-07-21 02:22:24.543532] Sequence state changed to [migration.progress]
    [2020-07-21 02:22:24.543614] [e87799dbb0c348cf8e37f98861c3c135:Running] Obtain migration package from the source vRealize Automation appliance.
    [2020-07-21 02:22:24.545893] Invoke script /usr/lib/vcac/tools/migration/sequence/migration/scripts/M00-log-environment
    [2020-07-21 02:22:27.423438] Script invocation completed with code 0
    [2020-07-21 02:22:27.423532] Invoke script /usr/lib/vcac/tools/migration/sequence/migration/scripts/M02-get-migration-package
    [2020-07-21 02:25:24.183529] Script invocation completed with code 0
    [2020-07-21 02:25:24.183719] [e87799dbb0c348cf8e37f98861c3c135:Completed] Obtain migration package from the source vRealize Automation appliance.
    [2020-07-21 02:25:24.528063] [3ad82c24b47944ce8e9d44309ab719d3:Running] Back up vRealize Automation license.
    [2020-07-21 02:25:24.530730] Back up license serial key(s)
    [2020-07-21 02:25:32.820798] Traceback (most recent call last):
      File "/usr/lib/vcac/tools/migration/framework/mcore.py", line 470, in __execute
        task.execute(self.__context)
      File "/usr/lib/vcac/tools/migration/sequence/migration/execute", line 137, in execute
        for li in mutil.invokeConfigurator(['/usr/sbin/vcac-config', 'license-info']):
      File "/usr/lib/vcac/tools/migration/framework/mutil.py", line 61, in invokeConfigurator
        result = parseConfiguratorResult(errors, errorMessage)
      File "/usr/lib/vcac/tools/migration/framework/mutil.py", line 56, in parseConfiguratorResult
        raise ex
    Exception: Invalid data format.
    [2020-07-21 02:25:32.820925] [3ad82c24b47944ce8e9d44309ab719d3:Failed] Back up vRealize Automation license.
    [2020-07-21 02:25:32.825554] Sequence execution finished
    [2020-07-21 02:25:32.827844] Sequence state changed to [migration.failed]
    [2020-07-21 02:25:32.827923] Sequence has a task execution error. Cancel pending tasks
    [2020-07-21 02:25:32.827984] [3b568e500a5940f19a4758377dd31185:Cancelled] Stop vRealize Automation services on cluster node (sevenvra.prem.com).
    [2020-07-21 02:25:32.830210] [2fc1c91db692483c806eef983f64f36b:Cancelled] Stop vRealize Automation services.
    [2020-07-21 02:25:32.832655] [5d593b9319624e26a0614ebc08ce4888:Cancelled] Prepare vRealize Automation appliance for database migration.
    So one can see above that the task of backing up the license failed. Let's understand what happens at this point.
    When the license backup script is triggered, it executes the following command:
    [master] sevenvra:~ # /usr/sbin/vcac-config license-info
    ---BEGIN---
    [{"licenseInfo":{"@type":"SerialKeyLicenseInfo","serialKeys":["YYYYYY-YYYYYY-YYYYYY-YYYYYYY"],"expiration":null,"restrictions":[{"product":{"name":"VMware vRealize Automation Enterprise","editionKey":"vac.enterprise.serverVm","suiteName":null,"family":{"name":"VMware vCloud Automation Center","version":"7.0"},"id":"VMware vCloud Automation Center7.0vac.enterprise.serverVmserverVm"},"costUnitLimits":[{"enforcementType":"hardEnforced","unit":{"id":"serverVm"},"value":25}],"licenseProductCapabilities":[{"version":"7.5.0.0","features":[{"id":"vac"},{"id":"vdc"}],"keyValues":null}]}],"name":"vRA Standalone License"},"id":"urn:vri:com.vmware.license.license:XXXXXX-XXXXXX-XXXXXXX-XXXXXXX-XXXXXXX","assetId":"urn:vri:com.vmware.license.asset:comvmwarevcacstandalone"}]
    ---END---
    The output of this command returns the serial keys and other parameters that are applied to the destination server we are migrating data to. If we read the exception, some sort of corruption is happening when the values are returned by the command. The remediation implemented was to remove the existing license on the source vRealize Automation server, reboot the server, and then re-apply the license. Steps to remove the license on a vRealize Automation 7.x node are documented in my previous blog article; click on this link and read the procedure. Once the license has been removed and re-applied, give the migration another shot and it should work.
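    The "Invalid data format" exception means the parser could not digest what sits between the ---BEGIN---/---END--- markers. A hypothetical helper (not part of the product) for pulling that payload out of a saved output file so it can be inspected in isolation:

```shell
# Print only the payload between the ---BEGIN--- and ---END--- markers of
# `vcac-config license-info` output saved to a file, so it can be eyeballed
# (or piped to python -m json.tool) for the corruption the exception hints at.
extract_license_json() {
  sed -n '/^---BEGIN---$/,/^---END---$/p' "$1" | sed '1d;$d'
}
# e.g. /usr/sbin/vcac-config license-info > /tmp/li.out && extract_license_json /tmp/li.out
```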

  • Enable TLS on Localhost Configuration as part of vRealize Automation Hardening 7.x

    My peers and I were assisting a project where vRealize Automation 7.x was to be deployed and hardened. We found a number of issues/misconfigurations in certain sections of the document that have to be called out. Click here for the hardening guide version 7.6. I will call out the sections where issues were seen after implementing them; not all sections will be discussed here, as most of them are straightforward. The problematic sections are:
    "Enable TLS on Localhost Configuration", Page 22
    "Verify that SSLv3, TLS 1.0, and TLS 1.1 are Disabled", Page 24
    Let's start with the section "Enable TLS on Localhost Configuration".
    Step 1: SSH to the vRA appliance.
    Step 2: Set permissions for the vcac keystore by running the following commands:
    usermod -A vco,coredump,pivotal vco
    chown vcac.pivotal /etc/vcac/vcac.keystore
    chmod 640 /etc/vcac/vcac.keystore
    Execute this as shown in the document; there are no changes to this step.
    Step 3: According to the documentation, you should update the HAProxy configuration. Open the HAProxy configuration located at /etc/haproxy/conf.d and choose the 20-vcac.cfg service. Locate the lines containing the string server local 127.0.0.1... and add the following to the end of such lines: ssl verify none. It states that the change has to be performed under the following sections of the 20-vcac.cfg file:
    backend backend-vrhb
    backend-horizon
    backend-vro
    backend-vra
    backend-artifactory
    backend-vra-health
    But when you take a look at the file, there is no backend-artifactory section in it.
    So that's a mistake. The only backends which are available are:
    backend backend-vrhb
    backend backend-horizon
    backend backend-vra
    backend backend-vra-health
    backend backend-vro
    backend backend-vco-health
    Another important change which is missing in the documentation is that the backend-vro port has to be changed from 8280 to 8281.
    NOTE: TAKE A BACKUP OF THE ORIGINAL FILES BEFORE MAKING CHANGES
    The /etc/haproxy/conf.d/20-vcac.cfg file after the changes:
    backend backend-horizon
        mode http
        balance leastconn
        option http-server-close
        option forwardfor
        option redispatch
        http-response replace-value Set-Cookie JSESSIONID=(.*) JSESSIONID_HZN=\1
        http-response replace-value Set-Cookie XSRF-TOKEN=(.*) XSRF-TOKEN_HZN=\1
        http-request replace-value Cookie (.*?)JSESSIONID_HZN=([^;]+)(.*?) \1JSESSIONID=\2\3
        http-request replace-value Cookie (.*?)XSRF-TOKEN_HZN=([^;]+)(.*?) \1XSRF-TOKEN=\2\3
        cookie JSESSIONID prefix
        timeout check 10s
        server local 127.0.0.1:8443 maxconn 500 ssl verify none
    backend backend-vra
        mode http
        balance leastconn
        option http-server-close
        option forwardfor
        option redispatch
        http-response replace-value Set-Cookie JSESSIONID=(.*) JSESSIONID_VRA=\1
        http-response replace-value Set-Cookie XSRF-TOKEN=(.*) XSRF-TOKEN_VRA=\1
        http-request replace-value Cookie (.*?)JSESSIONID_VRA=([^;]+)(.*?) \1JSESSIONID=\2\3
        http-request replace-value Cookie (.*?)XSRF-TOKEN_VRA=([^;]+)(.*?) \1XSRF-TOKEN=\2\3
        cookie JSESSIONID prefix
        server local 127.0.0.1:8082 maxconn 1500 cookie A check ssl verify none
    backend backend-vra-health
        mode http
        balance leastconn
        option http-server-close
        option log-health-checks
        option httplog
        option forwardfor
        option redispatch
        http-response replace-value Set-Cookie JSESSIONID=(.*) JSESSIONID_VRA=\1
        http-response replace-value Set-Cookie XSRF-TOKEN=(.*) XSRF-TOKEN_VRA=\1
        http-request replace-value Cookie (.*?)JSESSIONID_VRA=([^;]+)(.*?) \1JSESSIONID=\2\3
        http-request replace-value Cookie (.*?)XSRF-TOKEN_VRA=([^;]+)(.*?) \1XSRF-TOKEN=\2\3
        cookie JSESSIONID prefix
        server local 127.0.0.1:8082 cookie A check ssl verify none
    backend backend-vro
        mode http
        balance leastconn
        option http-server-close
        option forwardfor
        option redispatch
        http-response replace-value Set-Cookie JSESSIONID=(.*) JSESSIONID_VRO=\1
        http-response replace-value Set-Cookie XSRF-TOKEN=(.*) XSRF-TOKEN_VRO=\1
        http-request replace-value Cookie (.*?)JSESSIONID_VRO=([^;]+)(.*?) \1JSESSIONID=\2\3
        http-request replace-value Cookie (.*?)XSRF-TOKEN_VRO=([^;]+)(.*?) \1XSRF-TOKEN=\2\3
        cookie JSESSIONID prefix
        option httpchk GET /vcac/services/api/health
        server local 127.0.0.1:8281 cookie A check ssl verify none
        # server node2 REMOTE-IP:443 cookie A check ssl verify none
    backend backend-vco-health
        mode http
        option http-server-close
        option forwardfor
        option redispatch
        http-response replace-value Set-Cookie JSESSIONID=(.*) JSESSIONID_VRO=\1
        http-response replace-value Set-Cookie XSRF-TOKEN=(.*) XSRF-TOKEN_VRO=\1
        http-request replace-value Cookie (.*?)JSESSIONID_VRO=([^;]+)(.*?) \1JSESSIONID=\2\3
        http-request replace-value Cookie (.*?)XSRF-TOKEN_VRO=([^;]+)(.*?) \1XSRF-TOKEN=\2\3
        cookie JSESSIONID prefix
        server local 127.0.0.1:8280 cookie A check
    Step 4: Get the keystorePass password. Locate the property certificate.store.password in the /etc/vcac/security.properties file.
    Example:
    certificate.store.password=s2enc~00k52MwbaLOWSpiLLl9d2Q\=\=
    The guide then asks you to decrypt the value from the security.properties file using the command:
    vcac-config prop-util -d --p VALUE
    The output would be as below:
    [master] sbivra:~ # vcac-config prop-util -d --p s2enc~00k52MwbaLOWSpiLLl9d2Q\=\=
    password
    [master] asbvra:~ #
    So the decrypted password is actually the plain-text string password.
    Step 5: Configure the vRealize Automation service. The document states: open the /etc/vcac/server.xml file and add the attribute below to the Connector tag, replacing certificate.store.password with the certificate store password value found in /etc/vcac/security.properties:
    scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS" keystoreFile="/etc/vcac/vcac.keystore" keyAlias="apache" keystorePass="certificate.store.password"
    But if you follow this as it is, you will end up with:
    scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS" keystoreFile="/etc/vcac/vcac.keystore" keyAlias="apache" keystorePass="s2enc~00k52MwbaLOWSpiLLl9d2Q\=\="
    This is wrong. You have to use the decrypted password, which is nothing but password. The correct attribute is as below.
    Step 6: Here too, you have to use the decrypted password in the attribute, not the encrypted one. The correct attribute is as below.
    content being updated............
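    Steps 4 and 5 can be combined so the decrypted value never has to be copied by hand. A sketch; get_prop is a hypothetical helper I'm introducing, while the decrypt call is the documented vcac-config invocation:

```shell
# Read a key=value property from a file, printing everything after the first
# '=' (the encrypted value itself contains escaped '=' characters).
get_prop() {
  grep "^$2=" "$1" | cut -d= -f2-
}
# On the appliance, decrypt the keystore password in one go:
#   vcac-config prop-util -d --p "$(get_prop /etc/vcac/security.properties certificate.store.password)"
```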

  • No valid endpoints found in the Management Agent

    You may encounter the following exception during vRA 7.x patching. While executing the patchscript.sh file, the following exception is seen:
    __main__ - ERROR : 242 - ('Command execution result:
    Command id: edc117f3-bd3e-4589-8891-a9c889f2262f
    Type: upgrade->management-agent
    Node id: 523C2C66-A308-43AE-8D2A-63FE41C19A9F
    Node host: seveniaas.prem.com
    Result: No valid endpoints found in the Management Agent's configuration.
    Result description: System.InvalidOperationException: No valid endpoints found in the Management Agent's configuration.
        at VMware.IaaS.Management.Commands.Installation.ParameterHelper.GetFirstAvailableEndpointFromContext(IExecutionContext context)
        at VMware.IaaS.Management.Commands.Installation.ParameterHelper.SetContextParameters(IExecutionContext context, InstallParameters installParameters)
        at VMware.IaaS.Management.Commands.Installation.UpgradeManagementAgentCommand.Execute(IExecutionContext context, IList`1 parameters)
    Error: {"1":[{"resultDescr":"System.InvalidOperationException: No valid endpoints found in the Management Agent's configuration.\r\n   at VMware.IaaS.Management.Commands.Installation.ParameterHelper.GetFirstAvailableEndpointFromContext(IExecutionContext context)\r\n   at VMware.IaaS.Management.Commands.Installation.ParameterHelper.SetContextParameters(IExecutionContext context, InstallParameters installParameters)\r\n   at VMware.IaaS.Management.Commands.Installation.UpgradeManagementAgentCommand.Execute(IExecutionContext context, IList`1 parameters)","resultMsg":"No valid endpoints found in the Management Agent's configuration."}]}
    Status: FAILED
    ', 'Error executing command')
    This resolution works only if no patch has been applied in the environment, i.e. the environment is on GA. Here's the resolution:
    1. SSH into the virtual appliance master node and change the "isApplied" value to true by running this command:
    sed -i 's/false/true/g' /usr/lib/vcac/patches/repo/contents/vRA-patch/self-patch.json
    2. Take the vCAC-IaaSManagementAgent-Setup.msi file from /usr/lib/vcac/patches/repo/contents/vRA-patch on the virtual appliance master node and copy it to all the IaaS nodes.
    3. Uninstall the previously installed Management Agent and install the new one by running this vCAC-IaaSManagementAgent-Setup.msi file.
    4. After the Management Agent is installed successfully on all IaaS nodes, verify in the Cluster tab of vRA that the Management Agent version has been updated for all the IaaS nodes.
    5. Run the precheck. Once the precheck is successful, start the installation of the patch.
    6. After the patch is installed successfully, SSH into the virtual appliance master node and run selfpatch again by executing this command:
    sh /usr/lib/vcac/patches/repo/contents/vRA-patch/patchscript.sh
    7. You should no longer see the error "No valid endpoints found in the Management Agent's configuration" in the logs. If the above command completes successfully, the installation is complete.
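    One caution about the sed in step 1: s/false/true/g flips every false in the file, not just isApplied. A narrower sketch, assuming the key really is named isApplied as the step implies:

```shell
# Flip only the "isApplied" flag to true, leaving any other boolean untouched.
set_is_applied() {
  sed -i 's/"isApplied"[[:space:]]*:[[:space:]]*false/"isApplied": true/' "$1"
}
# e.g. set_is_applied /usr/lib/vcac/patches/repo/contents/vRA-patch/self-patch.json
```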

  • Check vRA Services status via API

    Log in to the vRA appliance, then execute the command below:
    curl --insecure -f -s -H "Content-Type: application/json" "https://$HOSTNAME/component-registry/services/status/current?limit=200" | sed "s/}/\n/g" | grep -E -o ".serviceName.*serviceInitializationStatus.[^,]*" | sed "s/\"serviceTypeId.*,//g" | sed -e "s/\"//g" -e "s/:/=/g" -e "s/,/, /" | sed -e "s/serviceName\|serviceInitializationStatus\|=\|,\|null//g" | column -t | sort | cat -n
    The output shows the list of services registered on this appliance.
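    The same extraction can be packaged as a filter over stdin, which is handy for testing it against a saved response. A sketch under an assumption: each service entry in the JSON carries a serviceName and, nested in its status object, a serviceInitializationStatus field, in that order (which is what the grep in the one-liner relies on):

```shell
# Read component-registry JSON on stdin and emit "name<TAB>status" pairs.
# Splits objects on '}', pulls the two fields of interest, then pairs them up.
service_status_filter() {
  sed 's/}/\n/g' \
    | grep -E -o '"serviceName"[^,]*|"serviceInitializationStatus"[^,]*' \
    | sed -e 's/"//g' -e 's/^serviceName://' -e 's/^serviceInitializationStatus://' \
    | paste - -
}
# e.g. curl -ks "https://$HOSTNAME/component-registry/services/status/current?limit=200" | service_status_filter
```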

  • vRealize Automation DataCollection schedules

    Data collection status information is stored in the dbo.DataCollectionStatus table of the IaaS database:
    select * from DataCollectionStatus
    This table contains AgentID, LastCollectedTime, LastCollectedStatus, EntityID, DataCollectionStatusID, FilterSpecID and so on. FilterSpec refers to the type of endpoint we are collecting data from. dbo.DataCollectionStatus has a FilterSpecID, which comes from the dbo.FilterSpec table. The dbo.FilterSpec table has FilterSpecName, FilterSpecGroupID and AgentCapabilityName in it. Let's take only the vSphere endpoint into consideration and filter dbo.FilterSpec with respect to this endpoint. Since we selected vSphere, the AgentCapabilityName will only be vSphereHypervisor:
    select * from dbo.FilterSpec where FilterSpecName = 'vSphere';
    Each FilterSpecGroupID belongs to a certain type of data collection task for a specific endpoint. This information is stored in the dbo.FilterSpecGroup table. Now let's take the FilterSpecGroupIDs from the dbo.FilterSpec table and check what they refer to in the dbo.FilterSpecGroup table:
    select * from FilterSpecGroup where FilterSpecGroupID in (select FilterSpecGroupID from dbo.FilterSpec where FilterSpecName = 'vSphere')
    As one can see in the above screenshot, each FilterSpecGroupID belongs to a FilterSpecGroupName, which is eventually a task under data collection. What, then, is the ScheduleID inside the dbo.FilterSpecGroup table? ScheduleID comes from dbo.CollectionSchedule, where the default collection schedules are defined, each associated with an ID.
    This is the ID that is present in dbo.FilterSpecGroup. So here's the flow:
    dbo.CollectionSchedule --> dbo.FilterSpecGroup --> dbo.FilterSpec --> dbo.DataCollectionStatus
    If you want to find the LastCollectedTime and LastCollectedStatus of data collection for a specific cluster from the database, you can use the query below:
    select LastCollectedTime,LastCollectedStatus,HostName from DataCollectionStatus dc, host h where dc.EntityID=h.hostID and h.HostName='ClusterName' order by h.HostName
    Note: Replace ClusterName with your specific compute resource name.
