
- vRealize Automation 8.2 Enhancements
  - Support for custom internal Pods and Services CIDR. This can be done through:
      - Easy Installer
      - vRA Install Wizard in the LCM UI
      - Update Internal CIDR (Day-2 operation)
  - Upgrade/patch precheck enhancements, including an additional check of the services log partition
  - User consent for log bundle collection on vRA upgrade failure
  - vRA NTP support
  - vRA graceful shutdown and startup
  - vRA 8.x catalog app creation in vIDM
- Comprehensive Guide to implement vRealize Automation 8.x Clustered environment
In my last post, I discussed how to deploy a vIDM cluster. Now let's see what one needs to deploy a vRealize Automation 8.x clustered instance.

Load Balancer Configuration

I'm using NSX-V (version 6.4.7) as my load balancer, configured with:
  - Application Profile
  - Service Monitor
  - Pools
  - Virtual Servers

One more thing to remember: create an application rule and map it to the Workspace ONE virtual server configuration as shown below. Why is this needed? I'll explain later. Once all of these are done, you are pretty much done with the load balancer configuration needed to deploy a distributed instance of vRealize Automation 8.x. Now let's go to the next phase.

Configure Certificate and Generate the vRA 8.x Environment Creation Request

As a first step, enter all the details needed to generate a certificate. Remember, I am using a default certificate generated by vRLCM so that I can replace it with a CA-signed certificate later. Once the certificate is generated, create a new vRA environment. After selecting the vRA 8.1 Cluster option, click Next and accept the EULA. After the EULA, add/apply a license, then enter the certificate, infrastructure, and network information. Perform the precheck; all validations must be successful. I have attached the precheck report here; download and review it if you would like to. Review all the details and then submit the request.

Stages in vRA Distributed Installation

We have 11 stages in total for a distributed vRA deployment:
  - Stage 1: validateenvironmentdata
  - Stage 2: infraprevalidation
  - Stage 3: vravaprevalidation
  - Stage 4: vravainstall
  - Stage 5: vravajoincluster
  - Stage 6: vravajoincluster
  - Stage 7: vravasetlicense
  - Stage 8: vravainstallcertificate
  - Stage 9: vravainitializecluster
  - Stage 10: environmentupdate
  - Stage 11: notificationschedules

During Stage 9 (vravainitializecluster), I did encounter issues: at the point where it was trying to initialize client secrets, it was failing prematurely. This is the reason we created the application rule while configuring the load balancer. Once this was configured, we retried the installation; with the application rule in place, the stage completed successfully.

vRealize Automation 8.x clustered deployment is now complete.

Logs to Monitor During Deployment (a hands-on sketch follows at the end of this post)
  - vRA appliances: /var/log/deploy.log
  - vRLCM appliance: /var/log/vrlcm/vmware_vrlcm.log

Post-Deployment Checks

Ensure the status of all pool members is shown as UP. This is how your LCM environment pane would look.

Explore Your vRA Environment

Access your vRA load balancer and it responds with the login page. In the next few blogs, we shall explore the vRA 8.x configuration and use it with a few advanced blueprints.
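To follow the "Logs to Monitor During Deployment" section hands-on, here is a minimal sketch; the hostnames are placeholders for your own appliance FQDNs, not values from this deployment:

```bash
# Placeholder hostnames; substitute your own vRA and vRLCM FQDNs.
# Tail the deployment log on a vRA appliance (one terminal per node):
ssh root@vra-node1 'tail -f /var/log/deploy.log'

# Tail the vRLCM engine log on the LCM appliance:
ssh root@vrlcm-appliance 'tail -f /var/log/vrlcm/vmware_vrlcm.log'
```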
- vRA services health check failed. For more information, see Services page
Though all services on my vRealize Automation node are showing as "REGISTERED", I would see an exception stating "vRA services health check failed. For more information, see Services page". vRA hardening had been performed, and this issue appeared afterwards.

This is the script inside the appliance responsible for the above exception:

/opt/vmware/share/htdocs/service/cafe/utils.py

This is the code responsible:

```python
import urllib2

def vra_healthcheck():
    url = 'http://127.0.0.1:8082/vcac/services/api/health'
    code = 0
    try:
        request = urllib2.Request(url)
        response = urllib2.urlopen(request)
        code = response.getcode()
    except urllib2.HTTPError as e:
        code = e.code
    except urllib2.URLError as e:
        pass
    return True if code >= 200 and code < 300 else False
```

When this issue is seen, executing a curl to check the health of services returns an exception stating the combination of host and port requires TLS:

```
curl -kv http://127.0.0.1:8082/vcac/services/api/health
* Expire in 0 ms for 6 (transfer 0x2595030)
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x2595030)
* Connected to 127.0.0.1 (127.0.0.1) port 8082 (#0)
> GET /vcac/services/api/health HTTP/1.1
> Host: 127.0.0.1:8082
> User-Agent: curl/7.64.0-DEV
> Accept: */*
>
< HTTP/1.1 400
< Content-Type: text/plain;charset=ISO-8859-1
< Connection: close
<
Bad Request
This combination of host and port requires TLS.
* Closing connection 0
```

To fix this issue, change the code in the above-mentioned script (/opt/vmware/share/htdocs/service/cafe/utils.py) so that the health check uses HTTPS:

```python
def vra_healthcheck():
    # Use HTTPS: after hardening, port 8082 only accepts TLS connections
    url = 'https://127.0.0.1:8082/vcac/services/api/health'
    code = 0
    try:
        request = urllib2.Request(url)
        response = urllib2.urlopen(request)
        code = response.getcode()
    except urllib2.HTTPError as e:
        code = e.code
    except urllib2.URLError as e:
        pass
    return True if code >= 200 and code < 300 else False
```

So you just change the URL scheme from HTTP to HTTPS, and the exception seen in the UI is gone.
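As a quick hedged check before or after editing the script, the same curl against the HTTPS URL should now get an answer instead of the TLS error (-k skips certificate verification, which suits the local self-signed setup; expect a 2xx if the services are healthy):

```bash
# Same health endpoint as the script, now over HTTPS
curl -kv https://127.0.0.1:8082/vcac/services/api/health
```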
- RHEL8 Guest Customization failures due to missing Perl module
Recently, we encountered a problem while trying to customize a RHEL8-based virtual machine on the vCenter 6.7 platform. From the error, we could infer that the Perl module CPAN was missing. The CPAN module can be installed using the command below:

```bash
dnf install cpan -y
```

Once done, Guest Customization works as expected.
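As a quick follow-up check, you can confirm Perl can now load the module; this is a generic sanity check, independent of the guest customization workflow:

```bash
# Verify Perl can load the CPAN module after installation
perl -MCPAN -e 'print "CPAN $CPAN::VERSION\n"'
```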
- Increasing Login Session Timeout value for vRealize Suite Lifecycle Manager
You cannot increase the login session timeout value for vRealize Suite Lifecycle Manager 2.x or 8.x. By default, the session timeout is set to 30 minutes. This is hard-coded in the product and cannot be changed.
- Connecting to Rabbitmq POD and executing its status
Connect to the pod:

```bash
kubectl -n prelude exec -ti rabbitmq-ha-0 bash
```

Check the cluster status from inside the pod:

```bash
rabbitmqctl cluster_status
```

Type exit to get out of the pod.
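If you only need the status, the same check can be run in one shot, without an interactive shell:

```bash
# One-shot status query; no need to exec into the pod and exit
kubectl -n prelude exec rabbitmq-ha-0 -- rabbitmqctl cluster_status
```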
- Restarting Rabbitmq systematically in vRealize Automation 8.x
The distributed architecture in vRealize Automation 8.x has 3 nodes, and each node is part of the RabbitMQ cluster. It is important that the RabbitMQ pods are restarted sequentially. The order does not matter: just kill one pod, wait for it to come back (be up and running), do the same for the second one, and finally the last one.

To restart a pod, just delete it:

```bash
kubectl -n prelude delete pod rabbitmq-ha-0
```

Wait until the pod is up and running; you may check it by calling:

```bash
kubectl -n prelude get pods -l app=rabbitmq-ha
```

The result shall be something like:

```
NAME            READY   STATUS    RESTARTS   AGE
rabbitmq-ha-0   1/1     Running   0          1m
rabbitmq-ha-1   1/1     Running   0          1d
rabbitmq-ha-2   1/1     Running   0          1d
```

In the end, all 3 pods shall be restarted.

Verify the RabbitMQ cluster status is OK by executing the command below. Remember, this should be on a single line:

```bash
for n in {0,1,2}; do echo node-$n; kubectl -n prelude exec -ti rabbitmq-ha-$n -- rabbitmqctl cluster_status; done
```

Each of the RMQ cluster nodes shall see all 3 members. Then restart the EBS pods. The order doesn't matter, but it is important to restart all 3 of them:

```bash
kubectl -n prelude delete pod ebs-app--
```

The pod names can be queried by typing:

```bash
kubectl -n prelude get pods -l app=ebs-app
```
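If you prefer not to poll get pods by hand, kubectl's wait subcommand can block until the recreated pod reports Ready; a minimal sketch for one broker:

```bash
# Delete one broker pod; its StatefulSet recreates it with the same name
kubectl -n prelude delete pod rabbitmq-ha-0

# Block (up to 5 minutes) until the recreated pod is Ready again;
# if this races the recreation and errors out, simply re-run it
kubectl -n prelude wait --for=condition=Ready pod/rabbitmq-ha-0 --timeout=300s
```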
- Integrating vRA 8.x with vRLI 8.x
You can forward logs from vRealize Automation to vRealize Log Insight to take advantage of more robust log analysis and report generation. vRealize Automation is bundled with a fluentd-based logging agent. The agent collects and stores logs so that they can be included in a log bundle and examined later. You can configure the agent to forward a copy of the logs to a vRealize Log Insight server by using the vRealize Log Insight API, which allows other programs to communicate with vRealize Log Insight.

In a high availability (HA) environment, logs are tagged with different hostnames, depending on the node they originated on. The environment tag, by contrast, is configurable using the --environment ENV option described below and has the same value for all log lines, regardless of the node they originated on.

Check the existing configuration of vRealize Log Insight:

```
$ vracli vrli
No vRLI integration configured
```

Configure vRA <--> vRLI Integration

Command:

```
vracli vrli set [options] FQDN_OR_URL
```

Arguments:
  - FQDN_OR_URL - The FQDN or IP address of the vRealize Log Insight server that is to be used to post logs via the vRealize Log Insight API configuration. Port 443 and an HTTPS scheme are used by default. If any of these settings must be changed, you can use a URL instead.

Options:
  - --agent-id SOME_ID - Set the ID of the logging agent for this appliance. The default value is 0. Used to identify the logging agent for logs that are posted to vRealize Log Insight.
  - --environment ENV - Set an identifier for the current environment. It will be available in vRealize Log Insight as a tag on each log event. The default value is prod.
  - --ca-file /path/to/server-ca.crt - Specify a file that contains the certificate authority (CA) certificate used to sign the vRealize Log Insight server certificate. This forces the logging agent to trust the specified CA and enables it to verify the vRealize Log Insight server certificate. The file can contain a whole certificate chain if needed. In the case of a self-signed certificate, pass the certificate itself.
  - --ca-cert CA_CERT - Same as --ca-file, but the certificate (chain) is passed inline as a string.
  - --insecure - Disable SSL verification of the server certificate. This forces the logging agent to accept any SSL certificate when posting logs.

The command I executed in my lab:

```
vracli vrli set https://labvrli.prem.com:9543 -e labvraenv --insecure -id labvra
```

Checking the configuration post-execution confirms the integration, and from that moment I can see logs in my vRLI 8.x environment.
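Before (or after) running vracli vrli set, a hedged reachability check from a vRA node can rule out basic network problems; any HTTP response at all proves the port is open and TLS negotiates. The URL reuses my lab values above:

```bash
# -k mirrors the --insecure flag used in the lab command; we only
# care that the TCP connection and TLS handshake succeed here
curl -kv https://labvrli.prem.com:9543/
```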
- vRealize Automation 8.x and Google Cloud Platform Integration
vRealize Automation 8.x can integrate with several cloud providers. As a cloud administrator, you can create a Google Cloud Platform (GCP) cloud account for account regions to which your team will deploy vRealize Automation blueprints. To integrate your GCP environment, select Cloud Assembly > Infrastructure > Connections > Cloud Accounts and create each of the cloud integrations you require.

GCP Configuration

For GCP, there is some preparation we need to do on the GCP Console, so let's explore the prerequisites.

Step-1: Capture Project Info
As a first step, one has to log in to the GCP console and make a note of the Project ID.

Step-2: Enable Compute Engine API
The GCP project you have created for your environment must have the Compute Engine API enabled. Log in to the GCP Console and select your project. From the menu, select APIs & Services > Enable APIs and Services. In the Search for APIs & Services box, type Compute Engine API, select the Compute Engine API, and select Enable. Wait for this to complete, as it may take up to a minute.

Step-3: Create Service Account
After enabling the Compute Engine API, create a service account. Once it is created, you can click on the account to see its Unique ID and email information. You can also create keys and use them for authentication.

Step-4: Assign Roles to Service Account
Once the account is created, assign the appropriate roles to it. The service account requires the roles listed below to enable full functionality with vRA 8.x:
  - Compute Engine > Compute Admin
  - Kubernetes Engine > Kubernetes Engine Admin
  - Service Accounts > Service Account User
  - Pub/Sub > Pub/Sub Admin
As you can see above, all the roles have been assigned to the user.

Step-5: Create Key
The next step, granting users access to this service account, is optional and may be relevant based on your organization's policies. Once you have reviewed the page to see if it applies, select Create Key. This creates a JSON file that your browser downloads. Save it somewhere you can retrieve later; once it is stored securely, select Done. The downloaded file will be used as input when you create the cloud account.

vRA Configuration

Step-6: Create Cloud Account
You create a Google Cloud Platform cloud account by using these credentials. Click IMPORT JSON KEY and point to the JSON file you saved in Step-5. Once saved, the cloud account is created. After that, data collection and image synchronization are performed in the background; once they complete, a message is displayed at the top of the pane.

Step-7: Cloud Zone
After the cloud account is created, a cloud zone is automatically created.

Step-8: Create Project and Map Cloud Zone
Now create a new project and map it to a cloud zone. Once we create the project, map users/groups, then go to the Provisioning tab and map the cloud zone to the project. The final summary looks this way.

Step-9: Create Flavor Mapping
You create a flavor mapping for the Google Cloud Platform cloud zone.
  1. In the left pane, select Flavor Mappings under Configure.
  2. Click +NEW FLAVOR MAPPING.
  3. In the Flavor name text box, enter VMW-GCP-Small.
  4. In the Account/Region search box, select inner-cinema-280415 / asia-southeast1.
  5. In the Value search box, select g1-small.
  6. Click CREATE.

Step-10: Create an Image Mapping
You create a CentOS image mapping for the Google Cloud Platform cloud zone.
  1. In the left pane, select Image Mappings under Configure.
  2. Click +NEW IMAGE MAPPING.
  3. In the Image name text box, enter GCP-CentOS.
  4. In the Account/Region search box, select inner-cinema-280415 / asia-southeast1.
  5. In the Image search box, type CentOS-8 and select centos-8-v20191002.
  6. Click CREATE.

Step-11: Create Blueprint
You create a simple blueprint by using the Google Cloud Platform flavor and image.
  1. Click the Blueprints tab.
  2. Click +NEW.
  3. In the Name text box, enter PREMGCP-CentOS.
  4. In the Description text box, enter CentOS Blueprint for GCP.
  5. In the Project search box, select PREMGCP.
  6. Click CREATE.
  7. In the left pane, under the Cloud Agnostic section, drag the Network component to the design canvas.
  8. In the left pane, under the GCP section, drag the Machine component to the design canvas.
  9. Click Cloud_GCP_Machine in the design canvas; a small bubble appears, and a line connects it to Cloud_Network.
  10. In the YAML editor, click image: and select PREMGCP-CentOS.
  11. In the YAML editor, click flavor: and select PREMGCP-Small.
  12. Click DEPLOY.
Perform TEST to check whether your blueprint configuration is correct; if it is, the test results are shown as above.

Step-12: Deploy a Machine Using the Blueprint Created
Select the blueprint from Cloud Assembly and click Deploy. Type the deployment name, blueprint version, and description, then deploy.

Step-13: Monitor Deployment in vRA and the Google Cloud Platform Console
The various events after we deploy a VM can be seen below. VM creation events can be seen on the Google Cloud console as well, along with the deployed VM itself (a CLI alternative is sketched at the end of this post).

Step-14: Details of the VM can be seen as below.
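As a command-line complement to Step-13, here is a hedged sketch using the gcloud CLI, assuming it is installed and authenticated against the same project; the project ID and region are the lab values used in the mappings above:

```bash
# List instances in the lab project, limited to the asia-southeast1
# region used above; requires an authenticated gcloud session
gcloud compute instances list \
  --project inner-cinema-280415 \
  --filter="zone ~ asia-southeast1"
```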
- Having Multi-Tenancy enabled in vRO causes vRA Upgrade failures during the pre-update phase
Failures are seen during a vRA upgrade through vRSLCM 2.1, in the preupdate phase of the vRA appliance upgrade. The upgrade was being performed from vRA 7.4 to 7.6, and vRLCM was on version 2.1 Patch 2.

/var/log/bootstrap/preupdate.log

```
Running a check on replicas:
Executing a script on multiple cluster nodes...
2020-05-28 05:34:18+00:00 /etc/bootstrap/preupdate.d/00-00-01-va-resources-check done, succeeded.
2020-05-28 05:34:18+00:00 /etc/bootstrap/preupdate.d/00-00-02-check-replica-availability starting...
Executing a script on multiple cluster nodes...
2020-05-28 05:34:26+00:00 /etc/bootstrap/preupdate.d/00-00-02-check-replica-availability done, succeeded.
2020-05-28 05:34:26+00:00 /etc/bootstrap/preupdate.d/00-00-02-check-vro-duplicates starting...

<type 'exceptions.TypeError'>  Python 2.7.14: /usr/bin/python  Thu May 28 05:34:28 2020

A problem occurred in a Python script. Here is the sequence of function calls leading
up to the error, in the order they occurred.

/etc/bootstrap/preupdate.d/00-00-02-check-vro-duplicates in ()
   51 for dup in duplicates_check_result:
   52     dup_line = "{} items in table {}: {} {}".format(str(dup.get('?column?')), dup.get('type'),
=> 53         '' if len(dup.get('categoryid')) == 0 else "ID=" + dup.get('categoryid'),
   54         '' if len(dup.get('name')) == 0 else "NAME=" + dup.get('name'))
   55     msg_html += "<p style='margin-left:20px'>- {}</p>".format(dup_line)

builtin len = <built-in function len>
dup = {'?column?': 1L, 'type': 'vmo_configelementcategory', 'categoryid': None, 'name': 'wbg_SQL'}
dup.get = <built-in method get of RealDictRow object>

<type 'exceptions.TypeError'>: object of type 'NoneType' has no len()
args = ("object of type 'NoneType' has no len()",)
message = "object of type 'NoneType' has no len()"
```

The duplicate check fails because the categoryid of this vRO configuration element category row is NULL (None), and the script calls len() on it at line 53 without a None guard, so the precheck crashes with a TypeError instead of reporting the duplicate.
- vRA HF Pre-Check Failure: xenon-service is not running and release-management is UNAVAILABLE
The vRealize Automation precheck might fail with an exception stating that XenonService is not running and the release-management service is UNAVAILABLE. The solution is to start the respective underlying services.

For the Xenon service:

```bash
xenon-service start
```

For the release-management service:

```bash
service tekton-server start
```

Once the above commands execute successfully, re-run the precheck and you should be able to proceed with the next step of patching.
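Before re-running the precheck, a quick hedged sanity check that the underlying processes actually came up; the process-name patterns here are an assumption based on the service names, not confirmed identifiers:

```bash
# Grep the process table for the two services started above;
# 'xenon' and 'tekton' as name substrings are assumptions
ps aux | grep -E 'xenon|tekton' | grep -v grep
```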
- vRealize Automation Postgres database status is down and inconsistent
Even though all services are registered on the appliance and the vpostgres service is actually up and running, you might see an exception in the VAMI stating that the database status is inconsistent and down. One of the reasons for this is that replication from the MASTER to the REPLICAs is not happening properly. When we check the status of Postgres on a replica, we get a message stating that pgdata is not a cluster directory and that postgresql.auto.conf is missing:

```
[replica] vranode2:/storage/db/pgdata # service vpostgres status
Last login: Fri Jul 24 10:31:31 UTC 2020
LOG: skipping missing configuration file "/var/vmware/vpostgres/current/pgdata/postgresql.auto.conf"
pg_ctl: directory "/var/vmware/vpostgres/current/pgdata" is not a database cluster directory
```

When we check the MASTER's pgdata structure, we see the following files:

```
[master] mum01-2-vra01:/storage/db/pgdata # ls -ltrh
total 192K
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_twophase
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_tblspc
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_snapshots
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_serial
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_replslot
drwx------ 4 postgres users 4.0K Mar 28  2019 pg_multixact
drwx------ 4 postgres users 4.0K Mar 28  2019 pg_logical
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_dynshmem
drwx------ 2 postgres users 4.0K Mar 28  2019 pg_commit_ts
-rw------- 1 postgres users    4 Mar 28  2019 PG_VERSION
-rw------- 1 postgres users 1.6K Mar 28  2019 pg_ident.conf
-rw------- 1 postgres users 1.7K Jun 17 14:32 server.key
-rw-r--r-- 2 postgres users 4.4K Jun 17 14:32 server.crt
-rw------- 1 postgres users  22K Jun 17 14:32 postgresql.conf.bak
-rw------- 1 postgres users 5.0K Jun 17 14:35 pg_hba.conf
drwx------ 8 postgres users 4.0K Jun 23 07:58 base
-rw------- 1 root     root   22K Jul 22 13:53 postgresql.conf.bak22072020
-rw------- 1 postgres users  272 Jul 22 14:02 postgresql.auto.conf
drwx------ 2 postgres users 4.0K Jul 23 13:00 pg_log
-rw------- 1 postgres users  22K Jul 24 09:25 postgresql.conf
-rw------- 1 postgres users   85 Jul 24 09:54 postmaster.pid
-rw------- 1 postgres users   83 Jul 24 09:54 postmaster.opts
drwx------ 2 postgres users 4.0K Jul 24 09:54 pg_stat
drwx------ 2 postgres users 4.0K Jul 24 09:54 pg_notify
-rw------- 1 postgres users 6.8K Jul 24 09:54 serverlog
drwx------ 2 postgres users 4.0K Jul 24 10:19 global
drwx------ 2 postgres users 4.0K Jul 24 10:30 pg_subtrans
drwx------ 2 postgres users 4.0K Jul 24 10:30 pg_clog
drwx------ 3 postgres users 4.0K Jul 24 10:31 pg_xlog
drwx------ 2 postgres users 4.0K Jul 24 10:33 pg_stat_tmp
[master] mum01-2-vra01:/storage/db/pgdata #
```

There is one file which is the odd man out. The file postgresql.conf.bak.<> (here, postgresql.conf.bak22072020 in the listing above) looks like a file created by an admin trying to back up postgresql.conf. Remember, whenever you take a backup of a particular file, place it inside a folder in a separate location rather than the same one. The moment we removed this manual backup file, the "database inconsistent" message under the VAMI's Cluster tab disappeared and both replica nodes showed a status of UP.

Moral of the story: do not place any manual backups under the pgdata folder.
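To turn the moral into a habit, here is a minimal sketch of the safer backup practice; the destination directory is a hypothetical example, not a vRA convention:

```bash
# Keep config backups OUTSIDE pgdata so PostgreSQL tooling never
# mistakes the copy for live cluster files; /root/config-backups
# is a hypothetical location
mkdir -p /root/config-backups
cp /storage/db/pgdata/postgresql.conf "/root/config-backups/postgresql.conf.$(date +%Y%m%d)"
```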





