
- vRealize Automation 7.4 HF6 released
vRealize Automation 7.4 HF6 has been officially released. Issues resolved in this patch:
- Users cannot see owned items when "Owned By Me" is selected.
- When trying to complete a new request form, the wizard returns to the initial request screen.
- When 20 instances of a Windows blueprint are requested, most instances fail with an "optimistic locking failed" message.
- After a partially successful scale-out operation on a nested blueprint deployment, retrying the scale-out causes the request form to hang, with the error visible at the back end in the vRA server log.
- The lease of an expired deployment cannot be changed.
Read the Cumulative Update for vRA 7.4 Knowledge Base article for more information on prerequisites and installation.
- L1 Terminal Fault a.k.a L1TF
Intel has disclosed details on a new class of CPU speculative-execution vulnerabilities known collectively as "L1 Terminal Fault" (L1TF) that can occur on past and current Intel processors (from at least 2009 to 2018). Like Meltdown, Rogue System Register Read, and "Lazy FP state restore", the L1 Terminal Fault vulnerability can occur when affected Intel microprocessors speculate beyond an unpermitted data access. By continuing the speculation in these cases, the affected Intel microprocessors expose a new side channel for attack. Three CVEs collectively cover this form of vulnerability for Intel CPUs:
- CVE-2018-3646
- CVE-2018-3620
- CVE-2018-3615
Let's discuss these CVEs one at a time.

CVE-2018-3646

Vulnerability Summary

Referred to as L1 Terminal Fault - VMM, this is the one of these Intel microprocessor vulnerabilities that impacts hypervisors. It may allow a malicious VM running on a given CPU core to effectively infer the contents of the hypervisor's or another VM's privileged information residing at the same time in the same core's L1 data cache. Because current Intel processors share the physically addressed L1 data cache across both logical processors of a Hyperthreading (HT) enabled core, indiscriminate simultaneous scheduling of software threads on both logical processors creates the potential for further information leakage. CVE-2018-3646 has two currently known attack vectors:
- Sequential-Context Attack: a malicious VM can potentially infer recently accessed L1 data of a previous context (hypervisor thread or other VM thread) on either logical processor of a processor core.
- Concurrent-Context Attack: a malicious VM can potentially infer recently accessed L1 data of a concurrently executing context (hypervisor thread or other VM thread) on the other logical processor of a hyperthreading-enabled core.

Mitigation Summary

Mitigation of the Sequential-Context Attack vector is achieved by vSphere updates and patches. This mitigation is enabled by default and does not impose a significant performance impact. Mitigation of the Concurrent-Context Attack vector requires enablement of a new feature known as the ESXi Side-Channel-Aware Scheduler. The initial version of this feature will only schedule the hypervisor and VMs on one logical processor of an Intel Hyperthreading-enabled core. This feature may impose a non-trivial performance impact and is not enabled by default.

Mitigation Process

Update Phase

The Sequential-Context attack vector is mitigated by a vSphere update to the product versions listed in VMware Security Advisory VMSA-2018-0020. This mitigation is dependent on Intel microcode updates (provided in separate ESXi patches for most Intel hardware platforms), which are also documented in VMSA-2018-0020.

IMPORTANT NOTE: vCenter Server should be updated prior to applying ESXi patches. Notification messages were added in the aforementioned updates and patches to explain that the ESXi Side-Channel-Aware Scheduler must be enabled to mitigate the Concurrent-Context attack vector of CVE-2018-3646. If ESXi is updated prior to vCenter, you may receive cryptic notification messages relating to this. After vCenter has been updated, the notifications will be shown correctly.

Planning Phase

The Concurrent-Context attack vector is mitigated through enablement of the ESXi Side-Channel-Aware Scheduler, which is included in the updates and patches listed in VMSA-2018-0020. This scheduler is not enabled by default.
Enablement of this scheduler may impose a non-trivial performance impact on applications running in a vSphere environment. The goal of the Planning Phase is to understand whether your current environment has sufficient CPU capacity to enable the scheduler without operational impact. The following list summarizes potential problem areas after enabling the ESXi Side-Channel-Aware Scheduler:
- VMs configured with more vCPUs than the physical cores available on the ESXi host
- VMs configured with custom affinity or NUMA settings
- VMs with a latency-sensitive configuration
- ESXi hosts with average CPU usage greater than 70%
- Hosts with custom CPU resource management options enabled
- HA clusters where a rolling upgrade will increase average CPU usage above 100%

IMPORTANT NOTE: The above list is meant to be a brief overview of potential problem areas related to enablement of the ESXi Side-Channel-Aware Scheduler. The VMware Performance Team has provided an in-depth guide as well as performance data in KB 55767. It is strongly suggested to review this document thoroughly prior to enabling the scheduler. It may be necessary to acquire additional hardware, or rebalance existing workloads, before enabling the ESXi Side-Channel-Aware Scheduler. Organizations can choose not to enable the ESXi Side-Channel-Aware Scheduler after performing a risk assessment and accepting the risk posed by the Concurrent-Context attack vector. This is NOT RECOMMENDED, and VMware cannot make this decision on behalf of an organization.

Scheduler Enablement Phase

After addressing the potential problem areas described in the Planning Phase, the ESXi Side-Channel-Aware Scheduler must be enabled to mitigate the Concurrent-Context attack vector of CVE-2018-3646. The scheduler can be enabled on an individual ESXi host via the advanced configuration option hyperthreadingMitigation.
This can be done by performing the following steps.

Enabling the ESXi Side-Channel-Aware Scheduler using the vSphere Web Client or vSphere Client
1. Connect to the vCenter Server using either the vSphere Web Client or vSphere Client.
2. Select an ESXi host in the inventory.
3. Click the Manage (5.5/6.0) or Configure (6.5/6.7) tab.
4. Click the Settings sub-tab.
5. Under the System heading, click Advanced System Settings.
6. Click in the Filter box and search for VMkernel.Boot.hyperthreadingMitigation.
7. Select the setting by name and click the Edit pencil icon.
8. Change the configuration option to true (default: false).
9. Click OK.
10. Reboot the ESXi host for the configuration change to take effect.

Enabling the ESXi Side-Channel-Aware Scheduler using the ESXi Embedded Host Client
1. Connect to the ESXi host by opening a web browser to https://HOSTNAME.
2. Click the Manage tab.
3. Click the Advanced settings sub-tab.
4. Click in the Filter box and search for VMkernel.Boot.hyperthreadingMitigation.
5. Select the setting by name and click the Edit pencil icon.
6. Change the configuration option to true (default: false).
7. Click Save.
8. Reboot the ESXi host for the configuration change to take effect.

Enabling the ESXi Side-Channel-Aware Scheduler using ESXCLI
1. SSH to an ESXi host, or open a console where the remote ESXCLI is installed. For more information, see http://www.vmware.com/support/developer/vcli/.
2. Check the current runtime value of the HT-aware mitigation setting by running: esxcli system settings kernel list -o hyperthreadingMitigation
3. To enable the HT-aware mitigation, run: esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE
4. Reboot the ESXi host for the configuration change to take effect.
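The ESXCLI steps above can be collected into a small script. This is a sketch, not a VMware tool: it assumes an ESXi shell with esxcli on the PATH, and it defaults to a dry run that only prints each command so the sequence can be reviewed before touching a host.

```shell
#!/bin/sh
# Sketch of the ESXCLI enablement steps above. DRY_RUN=1 (the default)
# only prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

# 1. Check the current value of the scheduler setting
run esxcli system settings kernel list -o hyperthreadingMitigation
# 2. Enable the Side-Channel-Aware Scheduler
run esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE
# 3. A reboot is required for the change to take effect
run reboot
```

Set DRY_RUN=0 only after working through the Planning Phase considerations, since enabling the scheduler can carry a non-trivial performance cost.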
Refer to the following KB articles for product-specific mitigation procedures and/or vulnerability analysis:
- vSphere: KB 55806
- Hosted (Workstation/Fusion): KB 57138
- VMware SaaS offerings: KB 55808

CVE-2018-3620

Referred to as L1 Terminal Fault - OS (operating-system-specific mitigations). VMware has investigated the impact CVE-2018-3620 may have on virtual appliances. Details on this investigation, including a list of unaffected virtual appliances, can be found in KB 55807. Products that ship as an installable Windows or Linux binary are not directly affected, but patches may be required from the vendor of the operating system on which these products are installed. VMware recommends contacting your third-party operating system vendor to determine appropriate actions for mitigation of CVE-2018-3620. This issue may be applicable to customer-controlled environments running in a VMware SaaS offering; review KB 55808.

CVE-2018-3615

Referred to as L1 Terminal Fault - SGX. CVE-2018-3615 does not affect VMware products or services. See KB 54913 for more information.
- Remediation for Spectre Vulnerability
As you may be aware, VMware released Spectre patches for ESXi and vCenter Server on 20th March 2018.

Related Articles
- Security Advisory (updated VMSA-2018-0004.3)
- VMware KB describing Hypervisor-Assisted Guest Mitigation for Branch Target Injection

Suggested Update Sequence

It is mandatory to follow the order below to deploy the fix for Meltdown and Spectre:
1. Deploy the updated version of vCenter Server listed in VMSA-2018-0004.
2. Deploy the ESXi patches listed in VMSA-2018-0004 (even though we have applied an earlier patch to ESXi, this patch needs to be applied as well).
3. Deploy the guest OS patches for CVE-2017-5715. These patches are to be obtained from your OS vendor.

VMware recommends applying the firmware update that includes the CPU microcode over the software patch with microcode. Ensure that VMs are using Hardware Version 9 or higher. For best performance, Hardware Version 11 or higher is recommended. VMware Knowledge Base article 1010675 discusses hardware versions. Follow this sequence to update the hardware version:
1. Update VMware Tools to the latest version available with the patched host.
2. Update the VM hardware version to 9 or above.
3. Shut down the VM using the guest OS console.
4. Wait for the VM to appear powered off in the vCenter Server UI.
5. Power on the VM.

The new versions of vCenter Server set restrictions on ESXi hosts joining an Enhanced vMotion Compatibility cluster; see VMware Knowledge Base article 52085 for details. You will not be able to migrate a VM from a patched host to a non-patched host. Please keep this in mind when preparing for upgrades. #vSphere
- Shared swap vMotion of a fully reserved VM with swap file fails due to failure to extend swap file
I was trying to migrate a virtual machine with around 60 GB of memory to a host that had no virtual machines registered on it, and the migration failed with the exception below.

hostd.log

2019-03-28T05:17:28.943Z info hostd[11B81B70] [Originator@6876 sub=Vcsvc.VMotionDst (2076941261877375970)] ResolveCb: VMX reports needsUnregister = true for migrateType MIGRATE_TYPE_VMOTION
2019-03-28T05:17:28.943Z info hostd[11B81B70] [Originator@6876 sub=Vcsvc.VMotionDst (2076941261877375970)] ResolveCb: Failed with fault: (vim.fault.GenericVmConfigFault) {
--> faultCause = (vmodl.MethodFault) null,
--> faultMessage = (vmodl.LocalizableMessage) [
--> (vmodl.LocalizableMessage) {
--> key = "msg.checkpoint.destination.resume.fail",
--> arg = (vmodl.KeyAnyValue) [
--> (vmodl.KeyAnyValue) {
--> key = "1",
--> value = "msg.vmk.status.VMK_MEM_ADMIT_FAILED"
--> }
--> ],
--> message = "Failed to resume destination VM: Admission check failed for memory resource.
--> "
--> },
--> (vmodl.LocalizableMessage) {
--> key = "vob.vmotion.swap.extend.failed.status",
--> arg = (vmodl.KeyAnyValue) [
--> (vmodl.KeyAnyValue) {
--> key = "1",
--> value = "-1407197683"
--> },
--> (vmodl.KeyAnyValue) {
--> key = "2",
--> value = "2076941261877375970"
--> },
--> (vmodl.KeyAnyValue) {
--> key = "3",
--> value = "536870912"
--> },
--> (vmodl.KeyAnyValue) {
--> key = "4",
--> value = "Admission check failed for memory resource"
--> }
--> ],
--> message = "vMotion migration [ac1fde0d:2076941261877375970] failed to extend swap file to 536870912 KB: Admission check failed for memory resource.
--> "
--> }
--> ],
--> reason = "Failed to resume destination VM: Admission check failed for memory resource.
--> "
--> msg = "Failed to resume destination VM: Admission check failed for memory resource.
--> vMotion migration [ac1fde0d:2076941261877375970] failed to extend swap file to 536870912 KB: Admission check failed for memory resource.
This virtual machine had 64 GB of memory, fully reserved, with memory hot-plug enabled. The failure was related to swap file growth: the swap file could not be expanded. Ideally, when a VM has a 100% memory reservation, its swap file should be 0 KB. In my scenario, however, the swap file was the same size as the memory assigned to the VM, which is not a normal situation. The reason you still see a swap file the same size as the VM's memory is that the memory reservation was made while the VM was powered on. Reserving the memory of a virtual machine while it is powered on is not a best practice. Now, let me explain the exception in detail. This is a bug identified in versions 6.0 and 6.5 (and possibly earlier versions still in use). The following conditions have to be met to encounter this bug:
- The VM is fully reserved after it is powered on
- The VM is configured with more than 56 GB of memory
- The VM has memory hot-plug enabled
- The VM is in a DRS cluster

This behavior is fixed in version 6.7 due to changes made in the code, but not in 6.5 and 6.0. There is no workaround, as we cannot delete the swap file while the VM is powered on. The only way to fix it is to take proper downtime for the virtual machine: shut it down and bring it back up. The swap file should then be reset to 0 KB, and vMotion should work. Here's a small video recording where I reproduced the bug, if you would like to watch it! Happy Learning!
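The sizing rule behind this can be expressed as a tiny helper: a VM's swap file is created at (configured memory minus memory reservation), so a full reservation taken before power-on yields a 0 KB swap file, while a reservation added after power-on leaves the full-size file behind. This is an illustrative sketch; the function name and MB units are mine, not a VMware tool.

```shell
#!/bin/sh
# expected_swap_mb: illustrative helper showing how a VM swap file is sized
# at power-on, i.e. configured memory minus memory reservation (values in MB).
expected_swap_mb() {
  mem_mb=$1
  resv_mb=$2
  echo $(( mem_mb - resv_mb ))
}

# 64 GB VM, fully reserved before power-on: swap file is created at 0 MB
expected_swap_mb 65536 65536
# Same VM powered on with no reservation (reservation added later, as in
# the bug above): the swap file was created at the full 64 GB and remains
expected_swap_mb 65536 0
```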
- Unable to delete a vRO endpoint
Are you trying to delete a vRO endpoint, and is it throwing an exception?

Error: Unable to delete a vCO endpoint of type 'AD'. Reason 'TypeError: Cannot read property "id" from undefined (Workflow:Remove an Active Directory server / Scriptable task (item1)#2)'

In our case it was an Active Directory endpoint that was throwing exceptions. Endpoints created on this pane are vRO endpoints and are stored in a table called public.asd_endpoint. If there is a discrepancy in the id, the UI will not let you delete the endpoint. In that scenario, removing it from the database is the only option. Before removing an entry from the database, you have to make sure you're removing the right one. Check that "Name" and "rootobjectid" are the same in both the DB and the UI, which gives you a clue.

Deletion from the database (ensure a Postgres database backup is taken before you start):
1. Log in to vRA's Postgres database:
su - postgres
/opt/vmware/vpostgres/current/bin/psql vcac
2. Enable extended display:
\x
3. Capture the id from public.asd_endpoint for the endpoint you want to remove:
select id from public.asd_endpoint where name = 'nukescloud';
4. Using the id captured above, execute the delete statement:
delete from public.asd_endpoint where id = '45702e71-3549-410c-95b3-993b77750e49';
5. Once done, refresh the page in the UI and the endpoint will no longer be visible.
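Steps 3 and 4 can be kept together in a small helper script so the statements are reviewed before anything is run against the database. A sketch, reusing the example endpoint name and id from above; the helper functions only build the SQL strings, which you then pipe into psql yourself (after taking a backup).

```shell
#!/bin/sh
# Sketch: build the lookup and delete statements for public.asd_endpoint.
# The endpoint name and id are the examples from the post; substitute yours.

lookup_sql() {
  printf "select id, name, rootobjectid from public.asd_endpoint where name = '%s';" "$1"
}

delete_sql() {
  printf "delete from public.asd_endpoint where id = '%s';" "$1"
}

# Review before running, e.g. via:
#   su - postgres -c "/opt/vmware/vpostgres/current/bin/psql vcac -c \"$(lookup_sql nukescloud)\""
lookup_sql "nukescloud"; echo
delete_sql "45702e71-3549-410c-95b3-993b77750e49"; echo
```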
- Reconfigure actions on a Managed VM when triggered using API / Powershell / Workflow fails
When you trigger a reconfigure request using the API, PowerShell, or a custom workflow on an environment that was recently patched to any of the released vRealize Automation 7.4 patches, the request fails. The exception is as follows:

Error Message: [Error code: 42300 ] - [Error Msg: Infrastructure service provider error: A server error was encountered. Value cannot be null. Parameter name: value]
dynamicops.api.client.ApiClientException
at dynamicops.api.client.ClientResponseHandler.handleResponse(BaseHttpClient.java:316) ~[iaas-api-client-7.4.0-SNAPSHOT.jar:?]
at dynamicops.api.client.BaseHttpClient$1.handleResponse(BaseHttpClient.java:164) ~[iaas-api-client-7.4.0-SNAPSHOT.jar:?]
at org.apache.http.client.fluent.Response.handleResponse(Response.java:90) ~[fluent-hc-4.5.5.jar:4.5.5]
at org.apache.http.client.fluent.Async$ExecRunnable.run(Async.java:81) [fluent-hc-4.5.5.jar:4.5.5]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]

Though this request fails from vRA's perspective on its console, the reconfigure operation is still performed on the VM. You will experience this issue only if there is an encrypted custom property in the blueprint. This bug is being fixed in an upcoming 7.4 patch release.
- Unable to stop Tomcat instance that was started as a foreground process
I was implementing HF 10 today in one of our environments to fix the health broker service, which was broken. Multiple bugs related to the health broker were fixed in HF 10, which prompted me to do so. While installing, it failed at the point where it was unable to stop the Tomcat instance for horizon-workspace. Before we started the HF 10 installation, every prerequisite check had been done. As the exception clearly stated that it was unable to stop the Tomcat instance responsible for horizon-workspace, I went ahead and verified whether the status was actually "RUNNING" or in a different state. I am not sure why the status of these two services was "UNKNOWN", but a quick restart of these services:
service vco-configurator restart
service horizon-workspace restart
did bring them back into the "RUNNING" state. After that, the vRA 7.4 HF10 installation was successful.

Moral of the story: do not just go by the status of VAMI service registrations; quickly cross-check that the underlying application services are in the "RUNNING" state.
- Endpoint with id [XXXXX-XXXXX-XXXXX-XXXXX] is not found in SQL Server on IAAS endpoint
Recently, I was looking at a problem where a user was unable to save an NSX endpoint. When we edit these endpoints and click "Test Connection", it succeeds. But as soon as we click Save, we get the exception below under /var/log/vmware/vcac/catalina.out:

[UTC:2019-02-26 04:12:41,054 Local:2019-02-26 15:12:41,054] vcac: [component="cafe:iaas-proxy" priority="ERROR" thread="tomcat-http--3" tenant="nukescloud" context="Ge5uipgR" parent="Ge5uipgR" token="iVIMcVWX"] com.vmware.vcac.iaas.controller.endpointconfiguration.EndpointController.update:121 - Endpoint update failed: Endpoint with id [XXXXX-XXXXX-XXXXX-XXXXX] is not found in SQL Server on IAAS endpoint.

We definitely knew there was a problem with this endpoint in the SQL database, but where was the question. I created an NSX endpoint in my lab; doing so creates/updates entries in both the vPostgres DB for vRA and the SQL DB for IaaS. As a first step, let's look into vRA's Postgres database.

Log in to the Postgres database:
su - postgres
/opt/vmware/vpostgres/current/bin/psql vcac

Enable expanded display:
vcac=# \x
Expanded display is on.

Then review the contents of this table:
vcac=# select * from public.epconf_endpoint;

The output lists two endpoints: one for vCenter and the other for NSX.
- id displayed in the above table is the NSXEndpointId that SQL Server references in its IaaS database
- type_id is the type of endpoint you create
- name and description are the descriptors you give while creating an endpoint
- extension_data is the data it fetches from the endpoint, such as the certificate thumbprint, username, and password
- created_date and last_updated are self-explanatory

Now let's compare this with the SQL Server table that holds this configuration. The table we are interested in is [dbname].[DynamicOps.VCNSModel].[VCNSEndpoints]. As I stated earlier, Id in vRA's Postgres database should be the same as NSXEndpointId in the IaaS database. In the environment where the failure was observed, NSXEndpointId was set to NULL. Now that we had established the relationship by understanding how this works from a DB perspective, it was easy to fix the problematic environment. All we had to do was execute an update statement to replace the NULL value with the appropriate id captured from the Postgres database. Example:

update [vradistrib].[DynamicOps.VCNSModel].[VCNSEndpoints] set NSXEndpointId = '79fc5423-089b-4b4a-8a7a-b416f68e70bb' where Id = '3';

!! Hope this helps !!
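The two sides of the comparison can be scripted so the statements are built once and reviewed before running. A sketch reusing the example database name and ids from above; the helper names are mine, and you still run the output through psql / SQL Server tooling yourself.

```shell
#!/bin/sh
# Sketch: build the Postgres query and the SQL Server fix-up statement used
# above. Database name 'vradistrib' and the ids are the post's examples.

pg_list_endpoints() {
  printf "select id, name from public.epconf_endpoint;"
}

mssql_fix_nsx_id() {
  # $1 = id captured from Postgres, $2 = row Id in VCNSEndpoints
  printf "update [vradistrib].[DynamicOps.VCNSModel].[VCNSEndpoints] set NSXEndpointId = '%s' where Id = '%s';" "$1" "$2"
}

pg_list_endpoints; echo
mssql_fix_nsx_id "79fc5423-089b-4b4a-8a7a-b416f68e70bb" "3"; echo
```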
- Selecting a Network Profile unavailable while creating a blueprint
A network profile essentially provides your VM with information such as IP address, netmask, gateway, and so on, and vRA then also keeps a record of the IPs used from the pool. I was working on one of these environments where a user was creating a blueprint, and the pane where they had to select a network profile was blank. It could not be a bug, as it was working perfectly in my lab. From the logs (/var/log/vmware/vcac/catalina.out), with debug logging enabled:

[UTC:2019-02-17 23:40:01,192 Local:2019-02-18 12:40:01,192] vcac: [component="cafe:iaas-proxy" priority="DEBUG" thread="tomcat-http--46" tenant="" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.platform.trace.TraceRequestUtil.startTraceRequest:33 - Trace started
[UTC:2019-02-17 23:40:01,308 Local:2019-02-18 12:40:01,308] vcac: [component="cafe:iaas-proxy" priority="DEBUG" thread="tomcat-http--46" tenant="vsphere.local" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.platform.service.rest.config.RestRequestMappingHandlerMapping.getHandlerInternal:317 - Returning handler method [public org.springframework.data.domain.Page com.vmware.vcac.iaas.controller.fabric.NetworkProfilesController.listForTenant(com.vmware.vcac.platform.service.rest.PageAndSortRequest)]
[UTC:2019-02-17 23:40:01,309 Local:2019-02-18 12:40:01,309] vcac: [component="cafe:iaas-proxy" priority="DEBUG" thread="tomcat-http--46" tenant="vsphere.local" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.platform.service.rest.init.RestWebApplicationInitializer$RestServlet.doDispatch:955 - Last-Modified value for [/iaas-proxy-provider/api/network/profiles/tenant] is: -1
[UTC:2019-02-17 23:40:01,309 Local:2019-02-18 12:40:01,309] vcac: [component="cafe:iaas-proxy" priority="INFO" thread="tomcat-http--46" tenant="vsphere.local" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.iaas.controller.fabric.NetworkProfilesController.listForTenant:197 - Looking up network profiles
* * * *
[UTC:2019-02-17 23:40:01,469 Local:2019-02-18 12:40:01,469] vcac: [component="cafe:iaas-proxy" priority="INFO" thread="tomcat-http--46" tenant="vsphere.local" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.iaas.controller.fabric.NetworkProfilesController.listForTenant:203 - Finished looking up network profiles
[UTC:2019-02-17 23:40:01,473 Local:2019-02-18 12:40:01,473] vcac: [component="cafe:iaas-proxy" priority="DEBUG" thread="tomcat-http--46" tenant="vsphere.local" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.platform.service.rest.init.RestWebApplicationInitializer$RestServlet.processRequest:1000 - Successfully completed request
[UTC:2019-02-17 23:40:01,474 Local:2019-02-18 12:40:01,474] vcac: [component="cafe:iaas-proxy" priority="DEBUG" thread="tomcat-http--46" tenant="" context="WgxltG49" parent="" token="WgxltG49"] com.vmware.vcac.platform.trace.TraceRequestUtil.stopTraceRequest:84 - Trace stopped

There were absolutely no errors or exceptions in the logs. Then I realized we were missing something very simple. While creating a reservation in vRA, you select the mapping between the network adapter (vSphere network) and the network profile. If you leave this mapping blank by mistake, you end up in this situation where the network profile pane is blank or does not show up. The moment you make the change and select the network profile in the reservation, the profile is populated on the pane. !! Hope this helps !!
- Implementing vRA 7.4 Patch HF8 on top of HF3 (deep-dive)
There was a requirement for us to implement HF8 on an environment that was running HF3. I created a run-book to explain the procedure and thought of sharing it with everyone, as it may be useful.

Pre-Requisites

For successful patch deployment, perform these prerequisite steps on the target vRealize Automation cluster:
- Ensure the service account running the 'VMware vCloud Automation Center Management Agent' meets the following requirements: the account must be part of the local Administrators group, must have 'Log on as a service' enabled in the Local Security Policies, and must be formatted as domain\username.
- Remove old/obsolete nodes from the Distributed Deployment Information Table. For detailed steps, see the "Remove a Node from the Distributed Deployment Information Table" section of the vRealize Automation documentation.
- Ensure that the Management Agent on all IaaS machines is the latest (7.4) version.
- On the vRA virtual appliance nodes, open the /etc/hosts file and locate the entry for the IPv4 loopback address (127.0.0.1). Ensure that the fully qualified domain name of the node is added immediately after 127.0.0.1 and before localhost. For example: 127.0.0.1 FQDN_HOSTNAME_OF_NODE localhost
- Take snapshots/backups of all nodes in your vRealize Automation installation.
- If your environment uses load balancers for HA, disable traffic to secondary nodes and disable service monitoring until after installing or removing patches and all services show REGISTERED.
- Obtain the files below and copy them to a file system available to the browser you use for the vRealize Automation appliance management interface.

Files needed to install HF8

The following files are needed to install HF8 on a vRealize Automation 7.4 environment:
- vRA-7.4-HF8-patch
- self-patch.zip
- patchscript.sh
All three files are available under KB 56618.

Implementing vRA 7.4 HF8

As a first step, ensure all the prerequisites are met. These are mandatory and cannot be skipped.
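The /etc/hosts prerequisite is easy to get wrong, so here is a small check. It's a sketch under my own naming: check_hosts_line is an illustrative helper, not a VMware tool, and the node names are hypothetical; on a real appliance you would feed it the actual 127.0.0.1 line from /etc/hosts.

```shell
#!/bin/sh
# Illustrative check for the /etc/hosts prerequisite above: the node FQDN
# must appear immediately after 127.0.0.1 and before localhost.
check_hosts_line() {
  line=$1
  fqdn=$2
  case "$line" in
    "127.0.0.1 $fqdn localhost"*) echo "OK" ;;
    *) echo "BAD" ;;
  esac
}

# Example with a hypothetical node name:
check_hosts_line "127.0.0.1 vra01.example.com localhost" "vra01.example.com"  # OK
check_hosts_line "127.0.0.1 localhost vra01.example.com" "vra01.example.com"  # BAD
```

On a real node you might run: check_hosts_line "$(grep '^127\.0\.0\.1' /etc/hosts)" "$(hostname -f)".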
Now, copy self-patch.zip and patchscript.sh to /tmp on the master (primary) vRealize Automation appliance. Once they are copied, give the required permissions to the new file:
chmod +x patchscript.sh
Then run patchscript.sh. After the script execution completes, it prints a message stating "Self-Patch successfully applied".

Note: **Ensure the prerequisite script has run prior to running the procedure below to implement the actual patch!**

1. Log in to the vRealize Automation appliance management interface (https://vrealize-automation-appliance-FQDN:5480) as root. This has to be your primary (master) node if it is a distributed vRA instance.
2. Click vRA Settings > Patches.
3. Under Patch Management, click the option that you need and follow the prompts to install a new patch.

Once you click INSTALL, it starts implementing the patch. First, it creates a local patch repo:

2019-01-18T05:08:04.157975+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.util.PatchUtil.getAllEligiblePatchesAndCreatePatchRepo:48 - Creating local patch repo

Then it identifies that it has to install HF8:

2019-01-18T05:11:31.584608+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchDeployCommand.installPatch:129 - Installing the patch 7342927e-7099-4d8a-bc6b-8ca77c5a876b

It starts applying HF8 after extraction.
It also identifies that a previous patch is installed, which we know is HF3, with patch ID 58ec2da5-823b-440e-b918-fbdf6ff7166f:

2019-01-18T05:11:33.705636+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.util.PatchUtil.getPatchLocation:112 - Patch location: /usr/lib/vcac/patches/repo/contents/vRA-patch/45994cb81454cba76ebe347e9e149e3a2253d74f889b5b667d117e438cbac4/patch-vRA-7.4.10980652.10980652-HF8
2019-01-18T05:11:33.884824+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.getLastAppliedPatch:249 - The last applied patch 58ec2da5-823b-440e-b918-fbdf6ff7166f
2019-01-18T05:11:33.885757+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchDeployCommand.applyPatch:258 - Applying the patch 58ec2da5-823b-440e-b918-fbdf6ff7166f-Reverse

This is when you see HF3 being uninstalled. It initiates HF3 reverse patching:

2019-01-18T05:15:05.134068+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.util.PatchUtil.publishInstallBundlesForDownloading:190 - Created cafe.patch in:
2019-01-18T05:15:05.633471+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.util.PatchUtil.publishInstallBundlesForDownloading:195 - Created iaas.patch in:
2019-01-18T05:15:05.634676+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.run:147 - 1. Initiate patching...
2019-01-18T05:15:05.634676+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.initiatePatching:226 - Starting :: initiate patching
2019-01-18T05:15:05.738027+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.addPatch:75 - Adding patch 58ec2da5-823b-440e-b918-fbdf6ff7166f-Reverse to history::

It then identifies the nodes where the HF3 reverse patch has to be applied:

2019-01-18T05:15:05.975140+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.util.PatchUtil.getPatchLocation:112 - Patch location: /usr/lib/vcac/patches/repo/contents/vRA-patch/9a909a94eb9cb15199c686e2e29d8fc83ea5fe3460426e340476544b211dc/patch-vRA-7.4.8182598.8182598-HF3-Reverse
2019-01-18T05:15:06.133201+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.syncPatchHistory:287 - Queueing command update-patch-history
2019-01-18T05:15:06.190063+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.impl.ClusterNodeServiceImpl.run:332 - Notifying node with hostname [nukesvra01.nukescloud.com] for command process-cmd...
2019-01-18T05:15:06.190344+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.impl.ClusterNodeServiceImpl.run:332 - Notifying node with hostname [nukesvra02.nukescloud.com] for command process-cmd...
2019-01-18T05:15:06.192237+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.impl.ClusterNodeServiceImpl.run:332 - Notifying node with hostname [nukesvra03.nukescloud.com] for command process-cmd...
It starts a thread to stop services on all of the nodes:

2019-01-18T05:15:06.314525+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.platform.rest.client.impl.HttpClientFactory$IdleConnectionEvictor.start:370 - Starting thread Thread[Connection evictor-57136495-4bad-4414-a279-000ae3c34a54,5,main]
2019-01-18T05:15:06.637921+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.impl.ClusterNodeCommunicatorImpl.notifyNode:121 - Notifying an existing cluster node with url: [https://nukesvra01.nukescloud.com:5480/config/process-cmd] for configuration changes.
2019-01-18T05:15:06.646167+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.impl.ClusterNodeCommunicatorImpl.notifyNode:121 - Notifying an existing cluster node with url: [https://nukesvra03.nukescloud.com:5480/config/process-cmd] for configuration changes.
2019-01-18T05:15:06.649133+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.impl.ClusterNodeCommunicatorImpl.notifyNode:121 - Notifying an existing cluster node with url: [https://nukesvra02.nukescloud.com:5480/config/process-cmd] for configuration changes.

It finishes patch initiation:

2019-01-18T05:15:23.913716+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.currentPatch:115 - The current patch is 58ec2da5-823b-440e-b918-fbdf6ff7166f-Reverse
2019-01-18T05:15:23.913716+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.initiatePatching:242 - Finished :: Initiate patching

Then it starts patch discovery:

2019-01-18T05:15:23.913738+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.run:153 - 2. Patch discovery...
2019-01-18T05:15:23.913738+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.discovery:247 - Starting :: component discovery
2019-01-18T05:15:23.924128+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.isAllCommandExecuted:775 - Checking if all commands are executed
2019-01-18T05:15:38.925206+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:788 - Starting:: Command validation
2019-01-18T05:15:38.935453+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.validateCommandStatusForFinishLine:793 - Command status for update-patch-history: COMPLETED
Finishes the HF3 reverse patch installation
2019-01-18T05:58:04.331648+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.finalizePatch:739 - Starting :: Finlaize patch
2019-01-18T05:58:04.331758+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.currentPatch:115 - The current patch is 58ec2da5-823b-440e-b918-fbdf6ff7166f-Reverse
2019-01-18T05:58:04.336476+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.finishPatch:122 - Finishing patch :: 58ec2da5-823b-440e-b918-fbdf6ff7166f-Reverse
Having finished uninstalling HF3, it now starts installing HF8
2019-01-18T05:58:34.296019+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchDeployCommand.applyPatch:258 - Applying the patch 7342927e-7099-4d8a-bc6b-8ca77c5a876b
2019-01-18T06:01:49.098522+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.run:147 - 1. Initiate patching...
2019-01-18T06:01:49.098920+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.initiatePatching:226 - Starting :: initiate patching
2019-01-18T06:01:49.175530+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.addPatch:75 - Adding patch 7342927e-7099-4d8a-bc6b-8ca77c5a876b to history::
2019-01-18T06:01:49.829562+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.util.PatchUtil.getPatchLocation:112 - Patch location: /usr/lib/vcac/patches/repo/contents/vRA-patch/45994cb81454cba76ebe347e9e149e3a2253d74f889b5b667d117e438cbac4/patch-vRA-7.4.10980652.10980652-HF8
2019-01-18T06:02:07.223205+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.currentPatch:115 - The current patch is 7342927e-7099-4d8a-bc6b-8ca77c5a876b
Finalizes HF8 as its installation is complete
2019-01-18T07:19:58.675075+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.commands.cluster.patch.PatchExecutor.finalizePatch:739 - Starting :: Finlaize patch
2019-01-18T07:19:58.675240+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.currentPatch:115 - The current patch is 7342927e-7099-4d8a-bc6b-8ca77c5a876b
2019-01-18T07:19:58.689236+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.finishPatch:122 - Finishing patch :: 7342927e-7099-4d8a-bc6b-8ca77c5a876b
2019-01-18T07:19:58.689439+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.cli.configurator.services.cluster.patch.PatchHistoryRepository.finishPatch:173 - Set last applied patch to 7342927e-7099-4d8a-bc6b-8ca77c5a876b
Since application of the patch is now finished, it starts the services on all of the nodes
2019-01-18T07:20:05.468675+00:00 nukesvra01.nukescloud.com vcac-config: INFO com.vmware.vcac.platform.rest.client.impl.HttpClientFactory$IdleConnectionEvictor.start:370 - Starting thread Thread[Connection evictor-8a21cda8-1edd-4b4e-85d5-82587ad602f6,5,main]
As a final step, enable the secondary nodes on the load balancer and ensure all health checks in the environment pass.
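When reading a long vcac-config log like the one above, it helps to filter out just the PatchExecutor lifecycle milestones (initiate, discovery, finalize). Below is a minimal Python sketch of that filter; the sample lines here are abbreviated from the log above, and on a real appliance you would read the actual vcac-config log file (whose exact path may differ per environment) instead of this string.

```python
import re

# Abbreviated sample lines from the vcac-config log shown above; on a real
# appliance you would read the actual log file instead of this string.
LOG = """\
2019-01-18T05:15:05 ... PatchExecutor.initiatePatching:226 - Starting :: initiate patching
2019-01-18T05:15:06 ... ClusterNodeServiceImpl.run:332 - Notifying node with hostname [nukesvra01.nukescloud.com] for command process-cmd...
2019-01-18T05:15:23 ... PatchExecutor.run:153 - 2. Patch discovery...
2019-01-18T05:58:04 ... PatchExecutor.finalizePatch:739 - Starting :: Finlaize patch
"""

# Keep only the PatchExecutor lifecycle milestones, dropping the
# node-notification chatter in between.
MILESTONE = re.compile(r"PatchExecutor\.(initiatePatching|run|finalizePatch)")

milestones = [line for line in LOG.splitlines() if MILESTONE.search(line)]
for line in milestones:
    print(line)
```

This trims the log down to the three phase markers, which makes it much easier to see how far a patch run progressed before any failure.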
- Docker Containers vs Virtual Machines
Note: Docker containers are not virtual machines
What is a Virtual Machine?
Let's explain each layer from the bottom
Everything starts with INFRASTRUCTURE. This could be a laptop, a dedicated server running in a datacenter, or a server on AWS or Google Cloud
On top of it runs an operating system (Windows, a distribution of Linux, or macOS); in a VM setup this is commonly labelled the Host Operating System
Then comes your Hypervisor. There are two types of hypervisors: Type 1 hypervisors run directly on the system hardware, while Type 2 hypervisors run on a host operating system that provides virtualization services
After the hypervisor comes our Guest OS. For example, if we have to spin up three applications, we need three Guest OS virtual machines controlled by our hypervisor
Each Guest OS carries its own memory, storage, and CPU overhead just to run
On top of these, each Guest OS needs binaries and libraries to support the application
Finally, you install your application. If one wants these applications to be isolated, they have to be installed on separate virtual machines
What is a Docker Container?
Looking at the above image you would see a striking difference. Yes, there is no need to run a massive guest operating system. Let's break it down again from the bottom up
Docker containers also need INFRASTRUCTURE to run. This could be a laptop, a virtual machine running in a datacenter, or a server running on AWS or Google Cloud
Then comes the HOST OPERATING SYSTEM. This can be anything capable of running Docker. All major Linux distributions run Docker, and there are ways to install Docker on Windows and macOS as well
In the next phase, as you can see, the DOCKER DAEMON replaces the HYPERVISOR.
The Docker Daemon is a service that runs in the background on your host operating system and manages everything required to run and interact with Docker containers
Next up we have our binaries and libraries, just like we do on virtual machines. Instead of being run on a guest operating system, they get built into special packages called Docker images. The Docker Daemon then runs those images
The last block in this building is our applications. Each application ends up running in its own container, built from a Docker image and managed independently by the Docker Daemon. Typically each application and its library dependencies get packed into the same Docker image. As shown in the image, the applications are still isolated
Real World Differences between both Technologies
There are far fewer moving parts with Docker: no need for a hypervisor or a virtual machine
The Docker Daemon communicates directly with the host operating system and knows how to distribute resources for running Docker containers. It's also an expert at ensuring each container is isolated from both the host OS and other containers
If you want to start an application running on a virtual machine, you have to wait for the operating system to boot, which can take a minute or two. A Docker container starts in milliseconds
You save on storage, memory, and CPU, as there is no need to run a bulky Guest OS for each application you run
There's also no virtualization needed with Docker since it runs directly on the host OS
Both technologies are good at what they do best: virtual machines are very good at isolating system resources and entire working environments, while Docker's philosophy is to isolate individual applications, not entire systems
!!! Stay Tuned for more on Docker !!!
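To make the "application plus its library dependencies packed into one image" idea concrete, here is a minimal hypothetical Dockerfile for a Python application. The base image, file names, and requirements file are illustrative assumptions, not something from this post:

```dockerfile
# Illustrative only: package a hypothetical Python app and its
# library dependencies into a single Docker image.
FROM python:3.11-slim

WORKDIR /app

# The application's library dependencies are baked into the image...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...together with the application itself.
COPY app.py .

CMD ["python", "app.py"]
```

Building (`docker build -t myapp .`) and running (`docker run myapp`) this image starts the application in moments, with no guest OS to boot, and everything the app needs travels inside the image.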
- RabbitMQ in vRealize Automation
To understand the role of RabbitMQ in vRealize Automation, let's first figure out what RabbitMQ is
RabbitMQ
It's a message broker: it gives applications a common place to send and receive messages, and gives messages a safe place to live until they are delivered
Centralized messaging enables software applications to connect as components of a larger application
Applications can learn what state other applications are in, which enables workload to be distributed across multiple systems for performance and reliability
RabbitMQ Architecture
The basic architecture of a message queue is simple: client applications called producers create messages and deliver them to the broker (the message queue). Other applications, called consumers, connect to the queue and subscribe to the messages to be processed. An application can be a producer, a consumer, or both. Messages placed onto the queue are stored until a consumer retrieves them
RabbitMQ's usage in vRealize Automation
Used to keep clustered appliances in sync
Makes sure only one appliance takes action on a given message, which prevents race conditions
Powers the Event Broker Service (EBS)
All of this is done through a series of queues, one for each action that has to be kept in sync between the appliances
A few example queues:
ebs.com.vmware.csp.iaas.blueprint.service.machine.lifecycle.active__
ebs.com.vmware.csp.iaas.blueprint.service.machine.lifecycle.provision__
vmware.vcac.core.software-service.taskRequestSubmitted
vmware.vcac.core.iaas-proxy-provider.catalogRequestSubmitted
vmware.vcac.core.catalog-service.requestSubmitted
vmware.vcac.core.event-broker-service.publishReplyEvent
RabbitMQ and vRA Clustering Pre-Requisites
Host short names and FQDNs must be resolvable among all the appliances being clustered
This DNS requirement is mandatory because RabbitMQ uses the short name in its node naming convention
Ports 4369, 5672 and 25672
must be open between appliances
4369 is used by RabbitMQ's peer discovery service
5672 is used by AMQP
25672 is used for inter-node and CLI communication (the Erlang distribution server port)
When RabbitMQ is configured as a cluster, unlike many other clustering applications there is no master-slave relationship. The last node to receive a message is considered the "Leading Cluster Node"
The only time this becomes an issue is when all nodes in the vRA cluster have to be stopped: shut down all nodes apart from one; that last node needs to be restarted first, which ensures it has all the latest messages from the queues. Then bring back the other nodes that were stopped
Listing Message Queues
Message queues are used to ensure multiple clustered vRealize Automation appliances are kept in sync, and also to power EBS. From an SSH session, running rabbitmqctl list_queues will show all currently configured queues. Two pieces of data are returned by default: the queue name and the number of messages in the queue
As one can see in the above screenshot:
Queues starting with ebs.com.vmware.xxxxx.xxxxx are used by the Event Broker Service
Queues starting with vmware.vcac.core.xxxxxx.xxxxx are used for other vRealize Automation functions
Configuration Files
RabbitMQ uses two main configuration files to set the required variables; both files are stored under /etc/rabbitmq
/etc/rabbitmq/rabbitmq.config
SSL information
TCP listening ports
Connection timeouts
Heartbeat interval
/etc/rabbitmq/rabbitmq-env.conf
NODENAME=
Note: If we change USE_LONGNAME to true, then RabbitMQ would use the FQDN to name the cluster node
RabbitMQ Server Service
RabbitMQ is controlled by the rabbitmq-server service: service rabbitmq-server <>
Log Locations and their usage
All RabbitMQ logs are stored under /var/log/rabbitmq/*
The main operational log is /var/log/rabbitmq/rabbit@.log
The above log contains messages about:
Startup
Shutdown
Plugin information
Queue sync
RabbitMQ is only a broker; it does not have information on what other systems are doing with the messages.
It will only show content about messages received or processed
Command-Line Options for Troubleshooting
From an SSH session on a vRA appliance, the rabbitmqctl command can be used to control the RabbitMQ system. Some of the options commonly used in troubleshooting:
rabbitmqctl cluster_status gives the definitive RabbitMQ clustering status
The running nodes line in the output of the above command should contain all the nodes that are part of the cluster
rabbitmqctl list_policies lists all currently enforced policies. In vRA only one policy should be returned, ha-all
Re-Join a node to the RabbitMQ cluster
If a node is returning fewer nodes than expected, we can re-join the node to the RabbitMQ cluster through the VAMI
The node being joined to the cluster is reset, which removes all messages and metadata on that node. Since the ha-all policy is set, as discussed above, all messages and metadata are copied to the other nodes; even though the node is reset, once it's back in the cluster the metadata and messages are copied back to it
Reset RabbitMQ
As a last resort we can reset RabbitMQ to a default state by clicking Reset Rabbitmq Cluster in the VAMI. This would:
Clear all messages out of the queues
Destroy all historical data
Should only be used as a last resort
!! Happy Learning !!
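The producer/consumer flow described at the start of this post can be sketched in-process with Python's standard library, using queue.Queue to stand in for the broker. This is a conceptual sketch only: vRA talks to a real RabbitMQ broker over AMQP, not an in-process queue, and the queue names below are borrowed from the examples above purely for flavor.

```python
import queue
import threading

# queue.Queue stands in for a RabbitMQ queue: the producer publishes
# messages, the broker stores them, and the consumer retrieves them.
broker_queue = queue.Queue()

def producer():
    # Publish a couple of lifecycle-style messages to the queue.
    for event in ["machine.lifecycle.active", "machine.lifecycle.provision"]:
        broker_queue.put(event)
    broker_queue.put(None)  # sentinel: no more messages

processed = []

def consumer():
    # Messages live in the queue until the consumer retrieves them.
    while True:
        message = broker_queue.get()
        if message is None:
            break
        processed.append(message)

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()

print(processed)  # the two lifecycle messages, in publish order
```

The key property mirrored here is that the broker decouples the two sides: the producer never knows who consumes, and messages wait safely in the queue until a consumer is ready, which is exactly what lets clustered vRA appliances coordinate through RabbitMQ.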








