
Search Results


  • vRA enhancements with vRSLCM 8.10 | Pendo Integration |

vRSLCM 8.10 brings two major features to the table with respect to vRealize Automation:

1. Enable/disable Pendo in vRA 8.x through vRSLCM
2. Auto-revert when there is an upgrade failure in vRA 8.x

In this blog, we will discuss the Pendo integration and how it works through vRSLCM 8.x.

What's Pendo and why is it used?
Pendo is a product-analytics app built to help software companies develop products that resonate with customers. The tool is used for collecting page clicks and feature clicks, providing in-app guides, and producing various kinds of reports based on tagged pages and features.

When was it introduced in vRealize Automation 8.x?
vRealize Automation participates in VMware's original Customer Experience Improvement Program (CEIP) and also the Pendo Customer Experience Program (Pendo CEIP) for vRealize applications. Pendo CEIP was introduced in vRA version 8.8. You can separately join or leave the original CEIP and the Pendo CEIP; each program collects somewhat different types of customer interaction data, as described below.

Original CEIP
The original CEIP provides VMware with information that enables designers and engineers to improve products and services, fix problems, and advise you on how best to deploy and use VMware products and services. It collects usage and runtime data to help gauge system stability and the consumption levels of different features. This information also helps VMware designers and engineers determine what to build next, based on which use cases and features are or are not being used. You can join this CEIP when you install vRealize Automation with vRealize Lifecycle Manager (LCM). After installation, vRealize Automation administrators and enabled users can also join or leave the program by using vracli ceip command-line options.
Pendo CEIP
Pendo is an integrated third-party tool that collects user activities and provides analytics to vRealize Automation product development. The Pendo CEIP collects workflow data based on your interaction with the user interface. This information helps VMware designers and engineers develop data-driven improvements to the usability of products and services. You can join or leave the Pendo CEIP by using vracli ceip pendo command-line options. Enabled users can also join or leave the Pendo CEIP by using options in their vRealize Automation user interface.

How was Pendo enabled on vRA 8.x until now?

Command-line method
One can join, leave, or verify the Pendo Customer Experience Improvement Program (Pendo CEIP) for vRealize services as follows.

Join
1. Log in to the vRealize Automation appliance command line as root.
2. Run the vracli ceip pendo on command.
3. Restart vRealize Automation services by running the /opt/scripts/deploy.sh command.

Leave
1. Log in to the vRealize Automation appliance command line as root.
2. Run the vracli ceip pendo off command.
3. Restart vRealize Automation services by running the /opt/scripts/deploy.sh command.

Verify
1. Log in to the vRealize Automation appliance command line as root.
2. Run the vracli ceip pendo status command.

UI method
You can join or leave the Pendo CEIP for vRealize services by using the following on-screen interaction sequence in vRealize Automation.
1. From the active vRealize Automation service, click the question mark Help toggle ( ? ) in the upper right area of the screen. Alternatively, if visible, you can click Cookie Usage in the Cookie banner.
2. If you clicked the ? icon, click Cookie Usage in the lower right area of the subsequent Help page.
3. Review the cookie usage and opt-out content on the subsequent page.

What's new with the Pendo integration in vRSLCM 8.10, then? vRSLCM 8.10 now provides an option to toggle the Pendo setting as a day-2 action on the vRealize Automation product.
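The command-line method described earlier can be condensed into a small helper. This is only a sketch: the toggle_pendo function is my own, and it echoes the commands from the text instead of executing them, so it can be dry-run outside a vRA appliance; on a real appliance you would run the vracli and deploy.sh commands themselves as root.

```shell
#!/bin/sh
# Hypothetical wrapper around the vracli steps described above.
# Echoes the commands rather than running them (dry-run sketch).
toggle_pendo() {
  case "$1" in
    on|off)
      echo "vracli ceip pendo $1"
      # Toggling only takes effect after a service restart:
      echo "/opt/scripts/deploy.sh"
      ;;
    status)
      echo "vracli ceip pendo status"
      ;;
    *)
      echo "usage: toggle_pendo on|off|status" >&2
      return 1
      ;;
  esac
}

toggle_pendo on
toggle_pendo status
```

Note that both "on" and "off" print the deploy.sh restart step, matching the documented procedure where either toggle requires a service restart.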
Select the environment, then the product vRealize Automation. Click on the three dots to get the list of day-2 actions available for the product. There is an option called "Toggle Pendo Setting".

When we click on "Toggle Pendo Setting", we get a dialog box stating that when you perform the toggle operation, whether it's "ON" or "OFF", there will be a service restart, hence downtime is expected. In a clustered environment the downtime could increase a bit compared to a single node.

Let's understand what happens when you enable Pendo, and check the flow when you disable it as well.

Enabling Pendo
Click on the three dots and then in the drop-down select "Toggle Pendo Setting". I took a snapshot before changing it as a precautionary measure; not really needed, but it does not harm either. The default option is "OFF", and we will change it to "ON". The moment we click Submit, a request is generated with two stages.

Stage 1: Toggle pendo setting on vRealize Automation Host
1. Start
2. Start toggle pendo on vRealize Automation Host
3. Toggle pendo on vRealize Automation Host initiated
4. Restarting services on vRealize Automation host
5. Check Pendo status on vRealize Automation host
6. Final

Stage 2: Update Environment Details
1. Start
2. Update Environment Details
3. Final

Request State Machine
After it's complete, the property is set to true.

From a logs perspective (reference: vmware_vrlcm.log):

### Request is processed ###
2022-10-13 12:47:06.318 INFO [scheduling-1] c.v.v.l.r.c.p.ToggleVraPendoSettingPlanner - -- Processing toggling of vRA Pendo setting request.
2022-10-13 12:47:06.319 INFO [scheduling-1] c.v.v.l.r.c.p.ToggleVraPendoSettingPlanner - -- Processing request - toggleVraPendoSetting for environment cf8ac4ce-a7a7-4958-8401-50efdf4f1489

### Request is being set to IN PROGRESS ###
2022-10-13 12:47:06.511 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- Processing request with ID : d5ef03d5-7821-4755-bd7a-b1a6db3bb5d9 with request type TOGGLE_VRA_PENDO_SETTING with request state INPROGRESS.

### Task is now started ###
2022-10-13 12:47:08.658 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata.
2022-10-13 12:47:08.658 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata.
2022-10-13 12:47:08.667 INFO [pool-3-thread-26] c.v.v.l.p.c.v.t.VraVaTogglePendoSettingTask - -- Starting :: vRealize Automation toggle Pendo Setting Task
2022-10-13 12:47:08.669 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org

### Pendo status is verified to begin with ###
2022-10-13 12:47:08.669 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: vracli ceip pendo status
2022-10-13 12:47:08.884 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- Executing command --> vracli ceip pendo status
2022-10-13 12:47:10.430 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- exit-status: 0
2022-10-13 12:47:10.431 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- Command executed sucessfully

### Response is that it's disabled ###
2022-10-13 12:47:10.443 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "Pendo is disabled.\n", "errorData" : null, "commandTimedOut" : false }
2022-10-13 12:47:10.443 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0
2022-10-13 12:47:10.443 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream ::
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Pendo is disabled.
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream ::
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- null
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.p.c.v.t.VraVaTogglePendoSettingTask - -- vRA Pendo status output : Pendo is disabled.
2022-10-13 12:47:10.444 INFO [pool-3-thread-26] c.v.v.l.p.c.v.t.VraVaTogglePendoSettingTask - -- Pendo is in disabled state on the vRealize Automation host: vra.cap.org

### Command to enable is sent ###
2022-10-13 12:47:10.445 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org
2022-10-13 12:47:10.445 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: vracli ceip pendo on
2022-10-13 12:47:10.542 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- Executing command --> vracli ceip pendo on
2022-10-13 12:47:12.594 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- exit-status: 0
2022-10-13 12:47:12.594 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- Command executed sucessfully

### After the command is successfully executed, a restart of the services needs to be done ###
2022-10-13 12:47:12.595 INFO [pool-3-thread-26] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "Pendo data collection was successfully turned on. Restart the services for command to take effect.\n", "errorData" : null, "commandTimedOut" : false }
2022-10-13 12:47:12.596 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0
2022-10-13 12:47:12.596 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:12.597 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream ::
2022-10-13 12:47:12.597 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:12.597 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Pendo data collection was successfully turned on. Restart the services for command to take effect.
2022-10-13 12:47:12.597 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:12.598 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream ::
2022-10-13 12:47:12.598 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:12.598 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- null
2022-10-13 12:47:12.598 INFO [pool-3-thread-26] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ====================================================
2022-10-13 12:47:12.598 INFO [pool-3-thread-26] c.v.v.l.p.c.v.t.VraVaTogglePendoSettingTask - -- Enabled Pendo on the vRealize Automation host: vra.cap.org
2022-10-13 12:47:12.598 INFO [pool-3-thread-26] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVraVaTogglePendoSuccess
* * *
2022-10-13 12:47:12.894 INFO [pool-3-thread-24] c.v.v.l.p.c.v.t.VraVaStartServicesTask - -- Starting :: vRA VA Start Services Task
2022-10-13 12:47:12.895 INFO [pool-3-thread-24] c.v.v.l.p.c.v.t.VraVaStartServicesTask - -- isCavaDeployment :false deployOptions: null
2022-10-13 12:47:12.897 INFO [pool-3-thread-24] c.v.v.l.p.c.v.t.VraVaStartServicesTask - -- Running Deploy Script on vRA VA : vra.cap.org
2022-10-13 12:47:12.899 INFO [pool-3-thread-24] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org

### Service restart command is executed and then it's complete ###
2022-10-13 12:47:12.899 INFO [pool-3-thread-24] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: /opt/scripts/deploy.sh
2022-10-13 12:47:13.016 INFO [pool-3-thread-24] c.v.v.l.u.SshUtils - -- Executing command --> /opt/scripts/deploy.sh
* * *
2022-10-13 13:08:40.866 INFO [pool-3-thread-24] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVraVaStartServicesSuccess

Even when you toggle it back to the OFF state, the flow remains the same.

Opt In & Opt Out
When the user logs in and clicks on COOKIE USAGE, they are presented with the following pane, which has a disclaimer and notes on how to opt out. Looking at the previous screenshot, the user called "arun" is sending data. He has an option to "OPT OUT". I just click on "OPT OUT" and then it does not send any data anymore; another click on "OPT IN" starts sending the data again. Again, this is on a per-user basis: Arun might have opted out, but a user called harsha can still stay by opting in. Remember, if Pendo is not enabled, this page would just look as below.

  • Interproduct Integrations during new products deployment & day-2 in vRSLCM 8.10

VMware released vRealize Suite Lifecycle Manager 8.10 on 11th October 2022. This release comes with lots of exciting features. In this video, we will discuss one such feature introduced in this version, which provides integration between vRealize Suite products.

With vRealize Log Insight, you can now perform log-forwarding configuration from other vRealize Suite products to vRealize Log Insight. Similarly, with vRealize Operations Manager, you can now perform management pack configuration of other vRealize products in vRealize Operations Manager. With this info, let's get into the lab and explore how this works.

Procedure
We are creating an environment named "Staging" which consists of the following products:
- vRealize Automation
- vRealize Operations
- vRealize Log Insight
- vRealize SaltStack Config

Accept the EULA. Assign the licenses to be associated with the products being deployed. A certificate has already been created inside the locker, so it's just time to use it now. Select the infrastructure properties.

The next step is to configure product properties. This is the place where you implement the inter-product integrations. Starting with vRealize Automation, we fill in all of the product properties needed for a successful install. We have a new option to check a box which enables vRLI Log Forwarding Configuration and Monitoring with vROps. In the same manner, vRealize Log Insight also has an option to be monitored with vROps; that's just a check box which needs to be addressed.

One difference for vROps is that it needs management packs to be installed. The pak files can be uploaded to the /data partition of vRSLCM. Then click on the box which says "EDIT ADAPTER FILES SELECTION". There are three management packs:
1. Management pack for vRO
2. Management pack for SDDC Health
3. Management pack for VMware Identity Manager Adapter
Mention the path where the pak files are stored; that's /data, which is recommended.
When we click on "DISCOVER", all the available pak files are displayed. Based on the previous selection, the respective pak files are installed and then configured during installation. We also have an option to select which log entities are pushed to vRLI from vROps. Then comes the SaltStack Config product properties page.

Now that we have entered all of the parameters needed, we go ahead and click Next to perform the prechecks. Since all validations are good, we submit the request so that the environment deployment can start. Based on the number of products, the number of stages varies. Finally the installation completed; it took around 1 hour and 52 minutes to deploy all 4 products.

As discussed, we selected vROps and vRA to push logs to vRLI during the deployment itself. That's what we see in the screenshot below, which shows the two sources that are injecting logs. Even with vROps, the adapter configuration is done automatically and the management packs are installed. The Staging environment has just been deployed, so the health information should be available pretty soon. Remember, there are day-2 actions available for Log Forwarding and Management Pack installation on the respective products.

This concludes the blog. Happy Learning!

  • View Password stored in vRSLCM 8.x locker using API

This blog post explains the procedure to review the password of a stored locker object using the API.

Assumptions
{{lcmurl}} is the vRSLCM FQDN, for example https://lcm.domain.example

Authentication: Acquire an LCM auth token (admin@local)
Request Method: POST
Request: {{lcmurl}}/lcm/authzn/api/login
Authorization: Basic Auth (username: admin@local, password: ******)
Response: A cookie is created and the response code is 200.

Fetch Passwords
Request Method: GET
URL: {{lcmurl}}/lcm/locker/api/v2/passwords
Response:
{
  "page": 0,
  "total": 1,
  "passwords": [
    {
      "vmid": "0704babb-5b6c-474f-9d03-c92dffb9ca59",
      "tenant": "default",
      "alias": "0",
      "userName": "",
      "password": "PASSWORD****",
      "passwordDescription": "",
      "createdOn": 1665126022437,
      "lastUpdatedOn": 1665126022437,
      "referenced": false
    }
  ]
}

View Passwords
To view a password, one has to enter {{lcmurl}}, then the vmid of the locker object captured in the previous request, as shown above. Also, the body of the POST request should contain LCM's root password.
Request Method: POST
URL: {{lcmurl}}/lcm/locker/api/passwords/view/{{locker_vmid}}
Authorization: Inherit from Parent
Body:
{
  "rootPassword": "{{lcmrootpassword}}"
}
Response:
{
  "passwordVmid": "0704babb-5b6c-474f-9d03-c92dffb9ca59",
  "password": "VMware123!"
}
The response contains the password vmid along with the password in plain text.
Note: the password displayed here is an example.
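The three calls above can be strung together with curl. This is a dry-run sketch: the run helper echoes each curl command instead of executing it, so nothing here talks to a live vRSLCM; LCM_URL, the credentials, and the vmid are the placeholder/example values from the text.

```shell
#!/bin/sh
# Dry-run sketch of the three locker API calls described above.
LCM_URL="https://lcm.domain.example"
VMID="0704babb-5b6c-474f-9d03-c92dffb9ca59"   # from the list response

run() { echo "+ $*"; }   # echo instead of execute

# 1. Authenticate as admin@local; LCM sets a session cookie.
run curl -k -c cookies.txt -u admin@local:ADMIN_PASSWORD \
    -X POST "$LCM_URL/lcm/authzn/api/login"

# 2. List locker passwords and note the vmid of the entry you need.
run curl -k -b cookies.txt "$LCM_URL/lcm/locker/api/v2/passwords"

# 3. Reveal that password; the body must carry LCM's root password.
run curl -k -b cookies.txt -X POST -H "Content-Type: application/json" \
    -d '{"rootPassword": "LCM_ROOT_PASSWORD"}' \
    "$LCM_URL/lcm/locker/api/passwords/view/$VMID"
```

To execute for real, replace the run helper with direct invocation and substitute your own credentials; -k is only appropriate when LCM still uses an untrusted certificate.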

  • Changing Password programmatically using the API for a managed product in vRSLCM 8.x

In this blog we shall discuss the process needed to change passwords of products managed by vRSLCM, using the locker APIs. In all the API calls below:
{{vidmurl}} is the VMware Identity Manager hostname (e.g. idm.domain.example)
{{lcmurl}} is the vRealize Suite Lifecycle Manager hostname (e.g. https://lcm.domain.example)

Acquire a session token (vIDM)
Request Method: POST
Request: {{vidmurl}}/SAAS/API/1.0/REST/auth/system/login
Headers: Content-Type: application/json, Accept: application/json
Request body:
{
  "username": "configadmin",
  "password": "configadmin_password",
  "issueToken": "true"
}
Response (token redacted):
false eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJqdGkiOiIwN2VlNDQ0My0yYzYzLTRkNmQtODk4ZC1kY2UzZjQzNDZkYWYiLCJwcm4iOiJjb25maWdhZG1pbkBJRE0iLCJkb21haW4iOiJTeXN0ZW0gRG9tY**********3_qehterCBvH60n_ecUx4tweMj6byOorhEcFBfgCgG5LxDUDKH5Da9XaPmBsOF5qcozCz9YWdJciuwGtCGUxdow2zhdwfVGb-uNk71QyUET6fSh1G-JQCn41K_8rJ4tgtRX8ETm--BGLY9fy5g
A cookie is set in this case as well. This session token has been placed under the environment details as a variable.

Acquire an LCM auth token (admin@local)
Request Method: POST
Request: {{lcmurl}}/lcm/authzn/api/login
Authorization: Basic Auth (username: admin@local, password: ******)
Response: A cookie is created and the response code is 200. As one can see, there are two cookies set: one for the IDM-based authentication and the other for LCM local auth.

Fetch Environment Details
We use this API to fetch details of the environment in which the product is present.
Request Method: GET
Request: {{lcmurl}}/lcm/lcops/api/v2/environments?status=COMPLETED
Response (truncated and redacted):
[ {
  "environmentId": "globalenvironment",
  "environmentName": "globalenvironment",
  "environmentDescription": "",
  "environmentHealth": null,
  "logHistory": "[ {\n \"logGeneratedTime\" : 1657682435109,\n \"logLocation\" : \"https://lcm.cap.org/repo/logBundleRepo/environment/globalenvironment/log-globalenvironment-1657682435109.tar.gz\"\n} ]",
  "environmentStatus": "COMPLETED",
  "infrastructure": { "properties": { } },
  "products": [ {
    "id": "vidm",
    "version": "3.3.6",
    "patchHistory": null,
    "snapshotHistory": null,
    "logHistory": null,
    "clusterVIP": null,
    "nodes": [ {
      "type": "vidm-primary",
      "properties": {
        "hostName": "********",
        "cluster": "********",
        "esxHost": "********",
        "memory": "**",
        "diskMode": "***",
        "vCenterHost": "******",
        "storage": "****",
        "network": "*****",
        "capacity": "***",
        "vidmRootPassword": "locker:password:b1ed53c1-c6c2-4422-ba3c-68f39b33a04a:dummyalias",
        "vidmSystemAdminPassword": "locker:password:17e1d72f-2a2d-4105-ba13-ba26b62473ee:installerPassword",
        "enableTelemetry": "false",
        "affinityRules": null,
        "__vMoid": "vm-43"
      }
    }, {
      "type": "vidm-connector",
      "properties": { }
    } ],
    "collectorGroups": null,
    "properties": {
      "vidmAdminPassword": "locker:password:17e1d72f-2a2d-4105-ba13-ba26b62473ee:installerPassword",
      "enableTelemetry": "false",
      "defaultConfigurationPassword": "locker:password:17e1d72f-2a2d-4105-ba13-ba26b62473ee:installerPassword",
      "certificate": "locker:certificate:6d7a83c9-40c6-42f8-9d6b-af75227b3689:idm"
    }
  } ],
  "metaData": { "isCloudProxyEnvironment": "false" }
} ]
You will get a JSON response with all the environment and product data. Look at the screenshot for more information.
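Every credential in the environment response is stored as a locker reference of the form locker:password:<vmid>:<alias>. The pieces you need for the later locker calls can be split out with standard shell tools; a small sketch using the example reference from the response above:

```shell
#!/bin/sh
# Split a locker reference (locker:password:<vmid>:<alias>) into the
# vmid and alias used by the locker APIs later in this post.
ref="locker:password:b1ed53c1-c6c2-4422-ba3c-68f39b33a04a:dummyalias"

vmid=$(echo "$ref" | cut -d: -f3)    # third colon-separated field
alias=$(echo "$ref" | cut -d: -f4)   # fourth field

echo "vmid:  $vmid"    # -> b1ed53c1-c6c2-4422-ba3c-68f39b33a04a
echo "alias: $alias"   # -> dummyalias
```

The same split works for locker:certificate references; only the second field differs.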
Based on the environment and the product whose account password you want to change, the relevant data can be acquired from this response.

Get the root password from the product
As an example in this blog, we will choose to change the root password of vIDM. Based on the response we got from the environments API, we collect the current root password reference of vIDM and keep it aside:
"vidmRootPassword": "locker:password:b1ed53c1-c6c2-4422-ba3c-68f39b33a04a:dummyalias"
We can confirm that from the UI too. If you look at the syntax of how it's stored:
"vidmRootPassword": "locker:password:vmid:locker_alias"
"vidmRootPassword": "locker:password:b1ed53c1-c6c2-4422-ba3c-68f39b33a04a:dummyalias"

Get the Password using the VMID
Now let's get the details of the password by using the extracted vmid with the following API.
Request Method: GET
Request: {{lcmurl}}/lcm/locker/api/v2/passwords/details/
Response:
{
  "vmid": "b1ed53c1-c6c2-4422-ba3c-68f39b33a04a",
  "tenant": "default",
  "alias": "dummyalias",
  "userName": "dummy",
  "password": "PASSWORD****",
  "passwordDescription": "dummypassword",
  "createdOn": 1664436058965,
  "lastUpdatedOn": 1664436058965
}

View Password
To view the password, use the URL below.
Request Method: POST
Request: {{lcmurl}}/lcm/locker/api/v2/passwords/details/
Response:
{
  "passwordVmid": "b1ed53c1-c6c2-4422-ba3c-68f39b33a04a",
  "password": "Dummy123!"
}

Create a New Password Object in the Locker
Here's the API to create an object in the locker. It's a POST call.
In the response, you're returned the vmid of the password object which has been created.
Request Method: POST
Request: {{lcmurl}}/lcm/locker/api/v2/passwords
Response:
{
  "vmid": "deab31fa-ea7a-452b-a0ad-a5daa5bb4126",
  "tenant": "default",
  "alias": "vidmroot071022",
  "userName": "root",
  "password": "PASSWORD****",
  "passwordDescription": "vidmroot071022",
  "createdOn": 1665147383168,
  "lastUpdatedOn": 1665147383168
}
We can check the new password in the UI as well.

Update Password
As an example, we shall consider the root password of vIDM to be changed.
Request Method: PUT
Request: {{lcmurl}}/lcm/lcops/api/v2/environments/{{envid}}/products/{{idmprodid}}/nodes/{{nodetype}}
Note: the placeholders in the above request URL should be replaced by the appropriate values. This URL is used to change the root password for a vIDM node.
{{envid}}: "globalenvironment"
{{idmprodid}}: "vidm"
{{nodetype}}: "vidm-primary"
We need to compile the body of the request. Remember, from the previous APIs we collected the current password reference, and we also stored the vmid of the new password object we created to apply as the new root password.
{
  "currentPassword": "locker:password:b1ed53c1-c6c2-4422-ba3c-68f39b33a04a:dummyalias",
  "hostName": "{{nodehostname}}",
  "newPassword": "locker:password:deab31fa-ea7a-452b-a0ad-a5daa5bb4126:vidmroot071022",
  "userNameToUpdate": "root"
}
Remember, {{nodehostname}} is the node for which the password is being changed. If it's a cluster, this has to be executed once per node (thrice for a three-node cluster) to maintain consistency. Once we submit the request, a request id is sent in the response, which can be tracked too. The request id can be polled using the following API.
Request Method: GET
Request: {{lcmurl}}/lcm/request/api/v2/requests/
Response: in the UI you can see that the request to update the password for root is now complete.

In a similar manner, if you want to change the admin password of vIDM, then you have to do the following.
Remember, the API changes; it's not going to be the same.
Request Method: PUT
Request: {{lcmurl}}/lcm/lcops/api/v2/environments/{{envid}}/products/{{idmprodid}}/admin-password
Note: the placeholders in the above request URL should be replaced by the appropriate values. This URL is used to change the admin password for vIDM.
{{envid}}: "globalenvironment"
{{idmprodid}}: "vidm"
Request Body:
{
  "adminPassword": "locker:password:deab31fa-ea7a-452b-a0ad-a5daa5bb4126:vidmroot071022",
  "currentAdminPassword": "locker:password:17e1d72f-2a2d-4105-ba13-ba26b62473ee:installerPassword"
}
I will replace the values in the body with the appropriate ones and then execute the API. If you poll the request, you can see a whole lot of details; if it's a failure, stop polling. You can now see the request created and completed in the UI.

Delete Password
To delete a password, one can use the following API. I'll get the vmid from the URL or from the API as shown before.
Request Method: DELETE
Request: {{lcmurl}}/lcm/locker/api/v2/passwords/
Response:
{
  "vmid": "b1ed53c1-c6c2-4422-ba3c-68f39b33a04a",
  "tenant": "default",
  "alias": "dummyalias",
  "userName": "dummy",
  "password": "Dummy123!",
  "passwordDescription": "dummypassword",
  "createdOn": 1664436058965,
  "lastUpdatedOn": 1664436058965
}
In this manner, if you know the APIs and the appropriate values to substitute, you should be able to programmatically change passwords on any products managed by vRSLCM 8.x.
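Put together, the node-password update call looks like this. It is a dry-run sketch: the run helper echoes the curl command instead of executing it, and the vmids, hostnames, and locker aliases are the example values from this post, not real ones.

```shell
#!/bin/sh
# Dry-run of the vIDM root-password update described above.
LCM_URL="https://lcm.domain.example"
ENV_ID="globalenvironment"
PROD_ID="vidm"
NODE_TYPE="vidm-primary"

run() { echo "+ $*"; }   # echo instead of execute

# PUT the node-password change; currentPassword is the existing locker
# reference, newPassword the reference of the locker object created earlier.
run curl -k -b cookies.txt -X PUT -H "Content-Type: application/json" \
    -d '{
      "currentPassword": "locker:password:b1ed53c1-c6c2-4422-ba3c-68f39b33a04a:dummyalias",
      "hostName": "idm.domain.example",
      "newPassword": "locker:password:deab31fa-ea7a-452b-a0ad-a5daa5bb4126:vidmroot071022",
      "userNameToUpdate": "root"
    }' \
    "$LCM_URL/lcm/lcops/api/v2/environments/$ENV_ID/products/$PROD_ID/nodes/$NODE_TYPE"
```

For a cluster, repeat the call once per node with the matching hostName, as noted above.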

  • Increasing size of /data partition in vRSLCM 8.x

We have a use case to increase the size of the vRSLCM appliance's /data partition. Current settings of the partitions on the appliance:

root@dlcm [ ~ ]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                        2.9G     0  2.9G   0% /dev
tmpfs                           3.0G   20K  3.0G   1% /dev/shm
tmpfs                           3.0G  1.1M  3.0G   1% /run
tmpfs                           3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda4                       8.7G  2.1G  6.2G  26% /
tmpfs                           3.0G  8.4M  2.9G   1% /tmp
/dev/sda2                       119M   28M   85M  25% /boot
/dev/mapper/stroage_vg-storage  9.8G   95M  9.2G   2% /storage
/dev/mapper/data_vg-data        148G   23G  119G  16% /data
tmpfs                           595M     0  595M   0% /run/user/0

From vCenter we can see the disks backing the appliance. pvdisplay output:

root@dlcm [ ~ ]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               stroage_vg
  PV Size               <10.00 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               0
  Allocated PE          2559
  PV UUID               1FzPjT-ESCV-7HMn-OdXG-Sbof-d2HM-ChEzXg

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               data_vg
  PV Size               <150.00 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              38399
  Free PE               0
  Allocated PE          38399
  PV UUID               3xS6lh-0rjq-1HV5-kO9y-Tc5m-AYVR-uWDdpW

  --- Physical volume ---
  PV Name               /dev/sdd
  VG Name               swap_vg
  PV Size               <8.00 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2047
  Free PE               0
  Allocated PE          2047
  PV UUID               XHQmv4-yaGc-tBbE-qg9r-Jyto-HREF-1bPNsy

Here's the /etc/fstab data for the partitions.
Since we are only concerned with /data, we will only work with that entry.

#system mnt-pt type options,nosuid,nodev dump fsck
/dev/sda4 / ext3 defaults 1 1
/dev/sda3 swap swap defaults 0 0
/dev/sda2 /boot ext3 defaults,nosuid,noacl,nodev,noexec 1 2
/dev/cdrom /mnt/cdrom iso9660 ro,noauto,nosuid,nodev 0 0
/dev/data_vg/data /data ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1
/dev/stroage_vg/storage /storage ext3 rw,nosuid,nodev,exec,auto,nouser,async 0 1
/dev/swap_vg/swap1 none swap sw 0 0

I increased the disk size of hard disk 2 from 150 GB to 200 GB. Now that the disk size is increased at the appliance level, reboot the appliance. The moment the appliance is back online and responding over the network, SSH in and execute "df -h":

root@dlcm [ ~ ]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                        2.9G     0  2.9G   0% /dev
tmpfs                           3.0G   20K  3.0G   1% /dev/shm
tmpfs                           3.0G  1.1M  3.0G   1% /run
tmpfs                           3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/sda4                       8.7G  2.1G  6.2G  26% /
tmpfs                           3.0G   72K  3.0G   1% /tmp
/dev/sda2                       119M   28M   85M  25% /boot
/dev/mapper/stroage_vg-storage  9.8G   95M  9.2G   2% /storage
/dev/mapper/data_vg-data        197G   23G  166G  12% /data
tmpfs                           595M     0  595M   0% /run/user/0

As one can see, /data was 150G before and is now 200G. How does this work? The snippets below are taken from journalctl -x. When you reboot after increasing the space, the "everyboot" bootstrap scripts are triggered (they run on every boot):

Sep 16 07:33:56 dlcm.cap.org vaos[959]: This script is executed on all boots, except the first one.
Sep 16 07:33:56 dlcm.cap.org vaos[959]: ++ basename /usr/lib/bootstrap/everyboot
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + NAME=everyboot
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + BOOTSTRAP_DIR=/etc/bootstrap/everyboot.d
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + BOOTSTRAP_LOG=/var/log/bootstrap/everyboot.log
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + tee -a /var/log/bootstrap/everyboot.log
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + set -eu
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + set -x
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + log 'main bootstrap everyboot started'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: ++ date '+%Y-%m-%d %H:%M:%S'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + echo '2022-09-16 07:33:56 main bootstrap everyboot started'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: 2022-09-16 07:33:56 main bootstrap everyboot started
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + for script in "${BOOTSTRAP_DIR}"/*
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + echo
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + '[' '!' -e /etc/bootstrap/everyboot.d/20-autogrow-disk ']'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + '[' '!' -x /etc/bootstrap/everyboot.d/20-autogrow-disk ']'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + log '/etc/bootstrap/everyboot.d/20-autogrow-disk starting'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: ++ date '+%Y-%m-%d %H:%M:%S'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + echo '2022-09-16 07:33:56 /etc/bootstrap/everyboot.d/20-autogrow-disk starting'
Sep 16 07:33:56 dlcm.cap.org vaos[959]: 2022-09-16 07:33:56 /etc/bootstrap/everyboot.d/20-autogrow-disk starting
Sep 16 07:33:56 dlcm.cap.org vaos[959]: + /etc/bootstrap/everyboot.d/20-autogrow-disk
Sep 16 07:33:56 dlcm.cap.org vaos[959]: Fri Sep 16 07:33:56 UTC 2022 Disk Util: INFO: Scanning Hard disk sizes
Sep 16 07:33:56 dlcm.cap.org rsyslogd[712]: imjournal: journal files changed, reloading... [v8.2202.0 try https://www.rsyslog.com/e/0 ]
Sep 16 07:33:56 dlcm.cap.org vaos[959]: Syncing file systems
Sep 16 07:33:56 dlcm.cap.org vaos[959]: which: no multipath in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/vmware/bin)
Sep 16 07:33:56 dlcm.cap.org vaos[959]: Scanning SCSI subsystem for new devices and remove devices that have disappeared
Sep 16 07:33:56 dlcm.cap.org vaos[959]: Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, all LUNs
Sep 16 07:33:57 dlcm.cap.org vaos[959]: [65B blob data]
Sep 16 07:33:57 dlcm.cap.org vaos[959]: OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
Sep 16 07:33:57 dlcm.cap.org vaos[959]: Vendor: NECVMWar Model: VMware IDE CDR00 Rev: 1.00
Sep 16 07:33:57 dlcm.cap.org vaos[959]: Type: CD-ROM ANSI SCSI revision: 05
Sep 16 07:33:58 dlcm.cap.org vaos[959]: [182B blob data]
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Vendor: NECVMWar Model: VMware IDE CDR00 Rev: 1.00
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Type: CD-ROM ANSI SCSI revision: 05
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, all LUNs
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, all LUNs
Sep 16 07:33:58 dlcm.cap.org vaos[959]: [65B blob data]
Sep 16 07:33:58 dlcm.cap.org vaos[959]: OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:58 dlcm.cap.org vaos[959]: [53B blob data]
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:58 dlcm.cap.org vaos[959]: [65B blob data]
Sep 16 07:33:58 dlcm.cap.org vaos[959]: OLD: Host: scsi2 Channel: 00 Id: 01 Lun: 00
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:58 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:59 dlcm.cap.org vaos[959]: [53B blob data]
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:59 dlcm.cap.org vaos[959]: [65B blob data]
Sep 16 07:33:59 dlcm.cap.org vaos[959]: OLD: Host: scsi2 Channel: 00 Id: 02 Lun: 00
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:59 dlcm.cap.org vaos[959]: [53B blob data]
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:59 dlcm.cap.org vaos[959]: [65B blob data]
Sep 16 07:33:59 dlcm.cap.org vaos[959]: OLD: Host: scsi2 Channel: 00 Id: 03 Lun: 00
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:59 dlcm.cap.org vaos[959]: [53B blob data]
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Vendor: VMware Model: Virtual disk Rev: 1.0
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Type: Direct-Access ANSI SCSI revision: 02
Sep 16 07:33:59 dlcm.cap.org vaos[959]: 0 new or changed device(s) found.
Sep 16 07:33:59 dlcm.cap.org vaos[959]: 0 remapped or resized device(s) found.
Sep 16 07:33:59 dlcm.cap.org vaos[959]: 0 device(s) removed.
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Fri Sep 16 07:33:59 UTC 2022 Disk Util: INFO: Resizing PV /dev/sda
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Fri Sep 16 07:33:59 UTC 2022 Disk Util: INFO: Resizing PV /dev/sdb
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Physical volume "/dev/sdb" changed
Sep 16 07:33:59 dlcm.cap.org vaos[959]: 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
Sep 16 07:33:59 dlcm.cap.org vaos[959]: Fri Sep 16 07:33:59 UTC 2022 Disk Util: INFO: Resizing PV /dev/sdc
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Physical volume "/dev/sdc" changed
Sep 16 07:34:00 dlcm.cap.org vaos[959]: 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Fri Sep 16 07:34:00 UTC 2022 Disk Util: INFO: Resizing PV /dev/sdd
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Physical volume "/dev/sdd" changed
Sep 16 07:34:00 dlcm.cap.org vaos[959]: 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Fri Sep 16 07:34:00 UTC 2022 Disk Util: INFO: LV Resizing /dev/stroage_vg/storage
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Size of logical volume stroage_vg/storage unchanged from <10.00 GiB (2559 extents).
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Logical volume stroage_vg/storage successfully resized.
Sep 16 07:34:00 dlcm.cap.org vaos[959]: resize2fs 1.45.5 (07-Jan-2020)
Sep 16 07:34:00 dlcm.cap.org vaos[959]: The filesystem is already 2620416 (4k) blocks long. Nothing to do!
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Fri Sep 16 07:34:00 UTC 2022 Disk Util: INFO: LV Resizing /dev/data_vg/data
Sep 16 07:34:00 dlcm.cap.org vaos[959]: Size of logical volume data_vg/data changed from <150.00 GiB (38399 extents) to <200.00 GiB (51199 extents).
Sep 16 07:34:01 dlcm.cap.org vaos[959]: Logical volume data_vg/data successfully resized.
Sep 16 07:34:01 dlcm.cap.org vaos[959]: resize2fs 1.45.5 (07-Jan-2020) Sep 16 07:34:01 dlcm.cap.org kernel: EXT4-fs (dm-1): resizing filesystem from 39320576 to 52427776 blocks Sep 16 07:34:01 dlcm.cap.org launch-blackstone-spring[1099]: 2022-09-16 07:34:01,289 main ERROR Unable to locate appender "MaskedLogs" for logger config "root" Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: 2022-09-16 07:34:05,233 main ERROR Unable to locate appender "MaskedLogs" for logger config "root" Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: . ____ _ __ _ _ Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: \\/ ___)| |_)| | | | | || (_| | ) ) ) ) Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: ' |____| .__|_| |_|_| |_\__, | / / / / Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: =========|_|==============|___/=/_/_/_/ Sep 16 07:34:05 dlcm.cap.org launch-blackstone-spring[1099]: :: Spring Boot :: (v2.1.16.RELEASE) Sep 16 07:34:06 dlcm.cap.org launch-blackstone-spring[1099]: 2022-09-16 07:34:06.313 INFO dlcm.cap.org --- [ main] c.v.b.BlackstoneExternalApplication [logStarting] : Start> Sep 16 07:34:06 dlcm.cap.org launch-blackstone-spring[1099]: 2022-09-16 07:34:06.368 DEBUG dlcm.cap.org --- [ main] c.v.b.BlackstoneExternalApplication [logStarting] : Runni> Sep 16 07:34:06 dlcm.cap.org launch-blackstone-spring[1099]: 2022-09-16 07:34:06.369 INFO dlcm.cap.org --- [ main] c.v.b.BlackstoneExternalApplication [logStartupProfileInf> Sep 16 07:34:07 dlcm.cap.org systemd-resolved[1403]: Using degraded feature set (UDP+EDNS0) for DNS server 10.109.44.132. 
Sep 16 07:34:11 dlcm.cap.org kernel: EXT4-fs (dm-1): resized to 46792704 blocks Sep 16 07:34:11 dlcm.cap.org vlcm-service[1098]: [2022-09-16 07:34:11.274] [INFO ][main] org.spr.con.sup.PostProcessorRegistrationDelegate$BeanPostProcessorChecker: -- Bean 'configuratio> Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: [2022-09-16 07:34:12.102] [INFO ][main] com.vmw.vre.lcm.com.log.MaskingPrintStream: -- * SYSOUT/SYSERR CAPTURED: -- Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: ___ ___ _ ___ __ __ Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: __ __ | _ \ / __| | | / __| | \/ | Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: \ V / | / \__ \ | |__ | (__ | |\/| | Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: _\_/_ |_|_\ |___/ |____| \___| |_| |_| Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: _|"""""| _|"""""| _|"""""| _|"""""| _|"""""| _|"""""| Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: "`-0-0-' "`-0-0-' "`-0-0-' "`-0-0-' "`-0-0-' "`-0-0-' Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: * -- Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: ___ ___ _ ___ __ __ Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: __ __ | _ \ / __| | | / __| | \/ | Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: \ V / | / \__ \ | |__ | (__ | |\/| | Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: _\_/_ |_|_\ |___/ |____| \___| |_| |_| Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: _|"""""| _|"""""| _|"""""| _|"""""| _|"""""| _|"""""| Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: "`-0-0-' "`-0-0-' "`-0-0-' "`-0-0-' "`-0-0-' "`-0-0-' Sep 16 07:34:12 dlcm.cap.org vlcm-service[1098]: [2022-09-16 07:34:12.156] [INFO ][main] org.spr.boo.SpringApplication: -- The following profiles are active: apiignore Sep 16 07:34:15 dlcm.cap.org launch-blackstone-spring[1099]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". 
Sep 16 07:34:15 dlcm.cap.org launch-blackstone-spring[1099]: SLF4J: Defaulting to no-operation (NOP) logger implementation Sep 16 07:34:15 dlcm.cap.org launch-blackstone-spring[1099]: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. Sep 16 07:34:17 dlcm.cap.org launch-blackstone-spring[1099]: 2022-09-16 07:34:17.856 INFO dlcm.cap.org --- [ main] trationDelegate$BeanPostProcessorChecker [postProcessAfterInit> Sep 16 07:34:19 dlcm.cap.org kernel: EXT4-fs (dm-1): resized filesystem to 52427776 Sep 16 07:34:19 dlcm.cap.org vaos[959]: Filesystem at /dev/mapper/data_vg-data is mounted on /data; on-line resizing required Sep 16 07:34:19 dlcm.cap.org vaos[959]: old_desc_blocks = 10, new_desc_blocks = 13 Sep 16 07:34:19 dlcm.cap.org vaos[959]: The filesystem on /dev/mapper/data_vg-data is now 52427776 (4k) blocks long. Sep 16 07:34:19 dlcm.cap.org vaos[959]: Fri Sep 16 07:34:19 UTC 2022 Disk Util: INFO: LV Resizing /dev/swap_vg/swap1 Sep 16 07:34:19 dlcm.cap.org vaos[959]: fsadm: Filesystem "swap" on device "/dev/mapper/swap_vg-swap1" is not supported by this tool. Sep 16 07:34:19 dlcm.cap.org vaos[959]: Filesystem check failed. 
Sep 16 07:34:20 dlcm.cap.org vaos[959]: + RES=0 Sep 16 07:34:20 dlcm.cap.org vaos[959]: + log '/etc/bootstrap/everyboot.d/20-autogrow-disk done, status: 0' Sep 16 07:34:20 dlcm.cap.org vaos[959]: ++ date '+%Y-%m-%d %H:%M:%S' Sep 16 07:34:20 dlcm.cap.org vaos[959]: + echo '2022-09-16 07:34:20 /etc/bootstrap/everyboot.d/20-autogrow-disk done, status: 0' Sep 16 07:34:20 dlcm.cap.org vaos[959]: 2022-09-16 07:34:20 /etc/bootstrap/everyboot.d/20-autogrow-disk done, status: 0

This is where the script is located. Log snippet from everyboot.log under /var/log/bootstrap/:

+ echo '2022-09-16 07:33:56 /etc/bootstrap/everyboot.d/20-autogrow-disk starting' 2022-09-16 07:33:56 /etc/bootstrap/everyboot.d/20-autogrow-disk starting + /etc/bootstrap/everyboot.d/20-autogrow-disk Fri Sep 16 07:33:56 UTC 2022 Disk Util: INFO: Scanning Hard disk sizes Syncing file systems which: no multipath in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/vmware/bin) Scanning SCSI subsystem for new devices and remove devices that have disappeared Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, all LUNs Scanning for device 0 0 0 0 ...^M Scanning for device 0 0 0 0 ... OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: NECVMWar Model: VMware IDE CDR00 Rev: 1.00 Type: CD-ROM ANSI SCSI revision: 05 .0:0:0:0 sg0 (1) ESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[D ESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[DESC[D ESC[D^MESC[AESC[AESC[AOLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: NECVMWar Model: VMware IDE CDR00 Rev: 1.00 Type: CD-ROM ANSI SCSI revision: 05 Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, all LUNs Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, all LUNs Scanning for device 2 0 0 0 ...^M Scanning for device 2 0 0 0 ... 
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 ^MESC[AESC[AESC[AOLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 Scanning for device 2 0 1 0 ...^M Scanning for device 2 0 1 0 ... OLD: Host: scsi2 Channel: 00 Id: 01 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 ^MESC[AESC[AESC[AOLD: Host: scsi2 Channel: 00 Id: 01 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 Scanning for device 2 0 2 0 ...^M Scanning for device 2 0 2 0 ... OLD: Host: scsi2 Channel: 00 Id: 02 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 ^MESC[AESC[AESC[AOLD: Host: scsi2 Channel: 00 Id: 02 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 Scanning for device 2 0 3 0 ...^M Scanning for device 2 0 3 0 ... OLD: Host: scsi2 Channel: 00 Id: 03 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 ^MESC[AESC[AESC[AOLD: Host: scsi2 Channel: 00 Id: 03 Lun: 00 Vendor: VMware Model: Virtual disk Rev: 1.0 Type: Direct-Access ANSI SCSI revision: 02 0 new or changed device(s) found. 0 remapped or resized device(s) found. 0 device(s) removed. 
Fri Sep 16 07:33:59 UTC 2022 Disk Util: INFO: Resizing PV /dev/sda Fri Sep 16 07:33:59 UTC 2022 Disk Util: INFO: Resizing PV /dev/sdb Physical volume "/dev/sdb" changed 1 physical volume(s) resized or updated / 0 physical volume(s) not resized Fri Sep 16 07:33:59 UTC 2022 Disk Util: INFO: Resizing PV /dev/sdc Physical volume "/dev/sdc" changed 1 physical volume(s) resized or updated / 0 physical volume(s) not resized Fri Sep 16 07:34:00 UTC 2022 Disk Util: INFO: Resizing PV /dev/sdd Physical volume "/dev/sdd" changed 1 physical volume(s) resized or updated / 0 physical volume(s) not resized Fri Sep 16 07:34:00 UTC 2022 Disk Util: INFO: LV Resizing /dev/stroage_vg/storage Size of logical volume stroage_vg/storage unchanged from <10.00 GiB (2559 extents). Logical volume stroage_vg/storage successfully resized. resize2fs 1.45.5 (07-Jan-2020) The filesystem is already 2620416 (4k) blocks long. Nothing to do! Fri Sep 16 07:34:00 UTC 2022 Disk Util: INFO: LV Resizing /dev/data_vg/data Size of logical volume data_vg/data changed from <150.00 GiB (38399 extents) to <200.00 GiB (51199 extents). Logical volume data_vg/data successfully resized. resize2fs 1.45.5 (07-Jan-2020) Filesystem at /dev/mapper/data_vg-data is mounted on /data; on-line resizing required old_desc_blocks = 10, new_desc_blocks = 13
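A quick cross-check of the resize arithmetic in the log above: LVM reports data_vg/data growing from 38399 to 51199 extents, while the kernel reports the ext4 filesystem growing from 39320576 to 52427776 4k blocks. Assuming the default LVM physical-extent size of 4 MiB (an assumption on my part, though the numbers bear it out), a short sketch confirms that both sets of figures describe exactly the same sizes:

```python
# Sanity-check the autogrow numbers from everyboot.log: convert LVM extent
# counts to 4 KiB filesystem blocks. Assumes the default LVM physical-extent
# size of 4 MiB (4 MiB / 4 KiB = 1024 blocks per extent).

EXTENT_SIZE = 4 * 1024 * 1024   # 4 MiB, default LVM PE size (assumption)
BLOCK_SIZE = 4096               # ext4 block size reported by resize2fs

def extents_to_blocks(extents: int) -> int:
    """Number of 4 KiB filesystem blocks that fit in the given extent count."""
    return extents * (EXTENT_SIZE // BLOCK_SIZE)

# data_vg/data: <150 GiB (38399 extents) -> <200 GiB (51199 extents)
print(extents_to_blocks(38399))  # 39320576, the "from" block count in the kernel log
print(extents_to_blocks(51199))  # 52427776, the "to" block count in the kernel log

# stroage_vg/storage stayed at <10.00 GiB (2559 extents), so resize2fs had
# nothing to do: the filesystem was already this many blocks long.
print(extents_to_blocks(2559))   # 2620416
```

Because the extent and block figures line up exactly, the "Nothing to do!" message for the storage LV and the on-line resize of the data LV are consistent with the lvextend output that precedes them.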

  • Re-Trust vRA with vIDM from vRSLCM | Deep-Dive

Overview

At a high level, these are the steps that occur when you perform Re-Trust with Identity Manager from vRSLCM:

Start
Start VMware Identity Manager flow
Check if VMware Identity Manager root certificate is present on vRealize Automation
Check for VMware Identity Manager Instance availability
Check for VMware Identity Manager login token
VMware Identity Manager health check for vRealize Automation
Check for VMware Identity Manager configuration user availability
Configure VMware Identity Manager for vRealize Automation
Initialize vRealize Automation
Update vRealize Automation certificate in vRealize Lifecycle Manager Inventory
Update VMware Identity Manager allowed redirects
Final

Disclaimer: Hostnames used are examples from my lab and do not represent any company or organization.

Deep-Dive

Let's deep-dive into the actions performed when you Re-Trust vRA with Identity Manager from vRSLCM.

1. When a request is submitted in vRSLCM, a request is created in the Request Service

2022-07-28 00:27:37.142 INFO [http-nio-8080-exec-5] c.v.v.l.l.u.EnvironmentValidationHelper - -- Product with ID : vraproductFound : true 2022-07-28 00:27:37.211 INFO [http-nio-8080-exec-5] c.v.v.l.l.u.RequestSubmissionUtil - -- ++++++++++++++++++ Creating request to Request_Service :::>>> { "vmid" : "eb90d4cc-1c58-4a6c-a7f5-8f3dea65486c", "transactionId" : null, "tenant" : "default", "requestName" : "vidmproductretrust", "requestReason" : "VRA in Environment Production - Re-trust Product with Identity Manager", "requestType" : "PRODUCT_VIDM_RETRUST", "requestSource" : "cf8ac4ce-a7a7-4958-8401-50efdf4f1489", "requestSourceType" : "user", "inputMap" : { "environmentId" : "cf8ac4ce-a7a7-4958-8401-50efdf4f1489", "productId" : "vra" }, "outputMap" : { }, "state" : "CREATED", "executionId" : null, "executionPath" : null, "executionStatus" : null, "errorCause" : null, "resultSet" : null, "isCancelEnabled" : null, "lastUpdatedOn" : 1658968057210, "createdBy" : null }

2. 
The request ID is returned in the response

2022-07-28 00:27:37.229 INFO [http-nio-8080-exec-5] c.v.v.l.l.u.RequestSubmissionUtil - -- Generic Request Response : { "requestId" : "eb90d4cc-1c58-4a6c-a7f5-8f3dea65486c" }

3. Identifies whether it is a clustered or a standalone vIDM, and fetches the hostname and root password for vIDM.

2022-07-28 00:27:37.798 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- Number of request to be processed : 1 2022-07-28 00:27:37.855 INFO [scheduling-1] c.v.v.l.r.c.p.ProductReregisterNRetrustPlanner - -- Found product with id vidm 2022-07-28 00:27:37.865 INFO [scheduling-1] c.v.v.l.r.c.p.CreateEnvironmentPlanner - -- Not a clustered vIDM, fetching the hostname from primary node. 2022-07-28 00:27:37.873 INFO [scheduling-1] c.v.v.l.r.c.p.CreateEnvironmentPlanner - -- Not a clustered vIDM, fetching from primary node. 2022-07-28 00:27:37.875 INFO [scheduling-1] c.v.v.l.r.c.p.CreateEnvironmentPlanner - -- Base tenant id: idm 2022-07-28 00:27:37.876 INFO [scheduling-1] c.v.v.l.r.c.p.CreateEnvironmentPlanner - -- Fetching the hostname and root password YXYXYXYX primary node. 2022-07-28 00:27:37.883 INFO [scheduling-1] c.v.v.l.r.c.p.ProductReregisterNRetrustPlanner - -- Found product with id vra and version above 8.0.0 2022-07-28 00:27:37.904 INFO [scheduling-1] c.v.v.l.r.c.p.ProductReregisterNRetrustPlanner - -- lbTermination value passed YXYXYXYX is :: false

4. 
Product Re-register and Re-trust Planner SPEC is logged 2022-07-28 00:27:37.906 INFO [scheduling-1] c.v.v.l.r.c.p.ProductReregisterNRetrustPlanner - -- Product Re-register and Re-trust Planner SPEC :: { "vmid" : "e98ba0cb-7a17-4ed0-ae45-93255220cdf7", "tenant" : "default", "originalRequest" : null, "enhancedRequest" : null, "symbolicName" : null, "acceptEula" : false, "variables" : { }, "products" : [ { "symbolicName" : "vravaretrustvidm", "displayName" : null, "productVersion" : null, "priority" : 0, "dependsOn" : [ ], "components" : [ { "component" : { "symbolicName" : "vravaretrustvidm", "type" : null, "componentVersion" : null, "properties" : { "cafeHostNamePrimary" : "vra.cap.org", "cafeRootPasswordPrimary" : "JXJXJXJX", "vidmPrimaryNodeRootPassword" : "JXJXJXJX", "baseTenantId" : "idm", "uberAdminUserType" : "LOCAL", "version" : "8.8.1", "masterVidmAdminPassword" : "JXJXJXJX", "uberAdmin" : "configadmin", "masterVidmEnabled" : "true", "__version" : "8.8.1", "uberAdminPassword" : "JXJXJXJX", "masterVidmHostName" : "idm.cap.org", "masterVidmAdminUserName" : "admin", "isLBSslTerminated" : "false", "authProviderHostnames" : "idm.cap.org", "vidmPrimaryNodeHostname" : "idm.cap.org" } }, "priority" : 0 } ] } ] } 5. 
Engine request is triggered 2022-07-28 00:27:37.910 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- ENGINE REQUEST :: { "vmid" : "e98ba0cb-7a17-4ed0-ae45-93255220cdf7", "tenant" : "default", "originalRequest" : null, "enhancedRequest" : null, "symbolicName" : null, "acceptEula" : false, "variables" : { }, "products" : [ { "symbolicName" : "vravaretrustvidm", "displayName" : null, "productVersion" : null, "priority" : 0, "dependsOn" : [ ], "components" : [ { "component" : { "symbolicName" : "vravaretrustvidm", "type" : null, "componentVersion" : null, "properties" : { "cafeHostNamePrimary" : "vra.cap.org", "cafeRootPasswordPrimary" : "JXJXJXJX", "vidmPrimaryNodeRootPassword" : "JXJXJXJX", "baseTenantId" : "idm", "uberAdminUserType" : "LOCAL", "version" : "8.8.1", "masterVidmAdminPassword" : "JXJXJXJX", "uberAdmin" : "configadmin", "masterVidmEnabled" : "true", "__version" : "8.8.1", "uberAdminPassword" : "JXJXJXJX", "masterVidmHostName" : "idm.cap.org", "masterVidmAdminUserName" : "admin", "isLBSslTerminated" : "false", "authProviderHostnames" : "idm.cap.org", "vidmPrimaryNodeHostname" : "idm.cap.org" } }, "priority" : 0 } ] } ] } 6. 
The engine request is processed and the suite creation request succeeds

2022-07-28 00:27:37.914 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- GETTING MACHINE FOR THE KEY :: vravaretrustvidm 2022-07-28 00:27:37.920 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- QUERYING CONTENT :: SystemFlowInventory::flows::flow->vravaretrustvidm 2022-07-28 00:27:37.920 INFO [scheduling-1] c.v.v.l.d.i.u.InventorySchemaQueryUtil - -- GETTING ROOT NODE FOR :: SystemFlowInventory 2022-07-28 00:27:37.948 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- URL :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:37.949 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-07-28 00:27:37.958 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /systemflowrepo 2022-07-28 00:27:37.958 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:37.959 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:37.959 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.005 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='vravaretrustvidm', version=8.1.0.0} -> repoName='systemflowrepo', contentState='PUBLISHED', url='/systemflowrepo/system/flow/vravaretrustvidm.vmfx'} 2022-07-28 00:27:38.022 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- ENGINE REQUEST STATUS :: { "vmid" : "14adc2bc-39a5-47f1-b139-144a1eab4936", "transactionId" : null, "tenant" : "default", "message" : "Suite Creation Request is Successful", "identifier" : "bcbf42fe-3a24-4109-8e9e-a573b9e67d5b", "type" : "SUCCESS", "executionPath" : "{\"vmid\":\"e98ba0cb-7a17-4ed0-ae45-93255220cdf7\",\"tenant\":\"default\", * * 
{\"name\":\"OnVidmUpdateAllowRedirectSuccess\",\"source\":\"com.vmware.vrealize.lcm.vidm.core.task.VidmUpdateAllowedRedirectsTask\",\"destination\":\"com.vmware.vrealize.lcm.platform.automata.service.task.FinalTask\",\"type\":\"SM_EVENT\",\"properties\":{},\"uiProperties\":{\"displayText\":\"Updating VMware Identity Manager Allow redirect success\",\"displayKey\":\"vmf::sm::vravaretrustvidm::edge::name::OnVidmUpdateAllowRedirectSuccess::source::c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask\"}}]}}]}" } 7. Request set to IN PROGRESS 2022-07-28 00:27:38.033 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- REQUEST TO BE UPDATED :: { "vmid" : "eb90d4cc-1c58-4a6c-a7f5-8f3dea65486c", "transactionId" : null, "tenant" : "default", "requestName" : "vidmproductretrust", "requestReason" : "VRA in Environment Production - Re-trust Product with Identity Manager", "requestType" : "PRODUCT_VIDM_RETRUST", "requestSource" : "cf8ac4ce-a7a7-4958-8401-50efdf4f1489", "requestSourceType" : "user", "inputMap" : { "environmentId" : "cf8ac4ce-a7a7-4958-8401-50efdf4f1489", "productId" : "vra" }, "outputMap" : { }, "state" : "INPROGRESS", "executionId" : "bcbf42fe-3a24-4109-8e9e-a573b9e67d5b", "executionPath" : "{\"vmid\":\"e98ba0cb-7a17-4ed0-ae45-93255220cdf7\",\"tenant\":\"default\",\"symbolicName\":null,\"acceptEula\":false,\"variables\":{},\"products\":[{\"symbolicName\":\"vravaretrustvidm\",\"symbolicNameTxt\":null,\"productVersion\":null,\"priority\":0,\"dependsOn\":[],\"components\":[{\"component\":{\"symbolicName\":\"vravaretrustvidm\",\"type\":null,\"componentVersion\":null}}],\"vmf\":{ * * {\"name\":\"OnVidmUpdateAllowRedirectSuccess\",\"source\":\"com.vmware.vrealize.lcm.vidm.core.task.VidmUpdateAllowedRedirectsTask\",\"destination\":\"com.vmware.vrealize.lcm.platform.automata.service.task.FinalTask\",\"type\":\"SM_EVENT\",\"properties\":{},\"uiProperties\":{\"displayText\":\"Updating VMware Identity Manager Allow redirect 
success\",\"displayKey\":\"vmf::sm::vravaretrustvidm::edge::name::OnVidmUpdateAllowRedirectSuccess::source::c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask\"}}]}}]}", "executionStatus" : null, "errorCause" : null, "resultSet" : null, "isCancelEnabled" : null, "lastUpdatedOn" : 1658968057228, "createdBy" : "admin@local" } 2022-07-28 00:27:38.038 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- Processing request with ID : eb90d4cc-1c58-4a6c-a7f5-8f3dea65486c with request type PRODUCT_VIDM_RETRUST with request state INPROGRESS.

8. Queuing and saving the request

2022-07-28 00:27:38.322 INFO [scheduling-1] c.v.v.l.a.c.UserRequestProcessor - -- QUEING NEW USER REQUEST :: { "vmid" : "bcbf42fe-3a24-4109-8e9e-a573b9e67d5b", "transactionId" : null, "tenant" : "default", "createdBy" : "root", "lastModifiedBy" : "root", "createdOn" : 1658968058018, "lastUpdatedOn" : 1658968058018, "version" : "8.1.0.0", "vrn" : null, "status" : "CREATED", "originalRequest" : null, "enhancedRequest" : null, "parameter" : "{\"vmid\":\"e98ba0cb-7a17-4ed0-ae45-93255220cdf7\",\"tenant\":\"default\",\"originalRequest\":null,\"enhancedRequest\":null, * * :{\"displayText\":\"Updating VMware Identity Manager Allow redirect success\",\"displayKey\":\"vmf::sm::vravaretrustvidm::edge::name::OnVidmUpdateAllowRedirectSuccess::source::c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask\"}}]}}]}", "information" : "", "lastExexutedEvent" : "", "type" : "SUITE", "lastProcessedPriority" : -1, "userRequestLock" : 0 2022-07-28 00:27:38.335 INFO [scheduling-1] c.v.v.l.a.g.s.UserRequestServiceImpl - -- Saving user request

9. 
Product Specification is logged 2022-07-28 00:27:38.346 INFO [scheduling-1] c.v.v.l.a.c.UserRequestProcessor - -- Product Specification :: { "symbolicName" : "vravaretrustvidm", "displayName" : null, "productVersion" : null, "priority" : 0, "dependsOn" : [ ], "components" : [ { "component" : { "symbolicName" : "vravaretrustvidm", "type" : null, "componentVersion" : null, "properties" : { "cafeHostNamePrimary" : "vra.cap.org", "cafeRootPasswordPrimary" : "JXJXJXJX", "vidmPrimaryNodeRootPassword" : "JXJXJXJX", "baseTenantId" : "idm", "uberAdminUserType" : "LOCAL", "version" : "8.8.1", "masterVidmAdminPassword" : "JXJXJXJX", "uberAdmin" : "configadmin", "masterVidmEnabled" : "true", "__version" : "8.8.1", "uberAdminPassword" : "JXJXJXJX", "masterVidmHostName" : "idm.cap.org", "masterVidmAdminUserName" : "admin", "isLBSslTerminated" : "false", "authProviderHostnames" : "idm.cap.org", "vidmPrimaryNodeHostname" : "idm.cap.org" } }, "priority" : 0 } ] } 2022-07-28 00:27:38.347 INFO [scheduling-1] c.v.v.l.a.c.UserRequestProcessor - -- GETTING SPEC FOR (productSymbolicName) :: vravaretrustvidm 2022-07-28 00:27:38.347 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- GETTING MACHINE FOR THE KEY :: vravaretrustvidm 2022-07-28 00:27:38.347 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- QUERYING CONTENT :: SystemFlowInventory::flows::flow->vravaretrustvidm 2022-07-28 00:27:38.348 INFO [scheduling-1] c.v.v.l.d.i.u.InventorySchemaQueryUtil - -- GETTING ROOT NODE FOR :: SystemFlowInventory 2022-07-28 00:27:38.370 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- URL :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.371 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-07-28 00:27:38.372 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /systemflowrepo 2022-07-28 00:27:38.372 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: 
/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.372 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.372 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.374 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='vravaretrustvidm', version=8.1.0.0} -> repoName='systemflowrepo', contentState='PUBLISHED', url='/systemflowrepo/system/flow/vravaretrustvidm.vmfx'} 2022-07-28 00:27:38.375 INFO [scheduling-1] c.v.v.l.a.c.UserRequestProcessor - -- MACHINE :: {"symbolicName":"vravaretrustvidm","type":"STATEMACHINE","version":"1.0.0","startState":"com.vmware.vrealize.lcm.vidm.core.task.StartVidmGenericTask","finishState":"","errorState":"","properties":{},"uiProperties":{"displayText":"Re-trust VMware Identity Manager on vRealize Automation","displayKey":"vmf::sm::vravaretrustvidm"},"nodes":[{"symbolicName":"com.vmware.vrealize.lcm.vidm.core.task.StartVidmGenericTask","symbolicNameTxt":null,"type":"SIMPLE","task":"com.vmware.vrealize.lcm.vidm.core.task.StartVidmGenericTask","properties":{},"uiProperties":{"displayText":"Start VMware Identity Manager flow","displayKey":"vmf::sm::vravaretrustvidm::node::c.v.v.l.v.c.t.StartVidmGenericTask"}}, * * *01 completion","displayKey":"vmf::sm::vravaretrustvidm::edge::name::OnVravaCertificateInventoryCompletion::source::c.v.v.l.p.c.v.t.VraVaUpdateCertificateInInventoryTask"}},{"name":"OnVidmUpdateAllowRedirectSuccess","source":"com.vmware.vrealize.lcm.vidm.core.task.VidmUpdateAllowedRedirectsTask","destination":"com.vmware.vrealize.lcm.platform.automata.service.task.FinalTask","type":"SM_EVENT","properties":{},"uiProperties":{"displayText":"Updating VMware Identity Manager Allow redirect 
success","displayKey":"vmf::sm::vravaretrustvidm::edge::name::OnVidmUpdateAllowRedirectSuccess::source::c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask"}}]} 2022-07-28 00:27:38.908 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- => bcbf42fe-3a24-4109-8e9e-a573b9e67d5b 2022-07-28 00:27:38.912 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- Processing the Engine Request to create the machine with ID => vravaretrustvidm and the priority is => 0 10. Starts processing the engine request 2022-07-28 00:27:38.908 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- => bcbf42fe-3a24-4109-8e9e-a573b9e67d5b 2022-07-28 00:27:38.912 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- Processing the Engine Request to create the machine with ID => vravaretrustvidm and the priority is => 0 2022-07-28 00:27:38.912 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- GETTING MACHINE FOR THE KEY :: vravaretrustvidm 2022-07-28 00:27:38.912 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- QUERYING CONTENT :: SystemFlowInventory::flows::flow->vravaretrustvidm 2022-07-28 00:27:38.913 INFO [scheduling-1] c.v.v.l.d.i.u.InventorySchemaQueryUtil - -- GETTING ROOT NODE FOR :: SystemFlowInventory 2022-07-28 00:27:38.942 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- URL :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.943 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-07-28 00:27:38.943 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /systemflowrepo 2022-07-28 00:27:38.943 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.944 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:38.944 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 
00:27:38.947 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='vravaretrustvidm', version=8.1.0.0} -> repoName='systemflowrepo', contentState='PUBLISHED', url='/systemflowrepo/system/flow/vravaretrustvidm.vmfx'} 2022-07-28 00:27:38.955 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- Injected OnStart Edge for the Machine ID :: vravaretrustvidm 2022-07-28 00:27:38.998 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- INITIALIZING NEW EVENT :: { "vmid" : "846ee8ea-e561-4906-8e5d-f5960a431dbc", "transactionId" : null, "tenant" : "default", "createdBy" : "root", "lastModifiedBy" : "root", "createdOn" : 1658968058953, "lastUpdatedOn" : 1658968058979, "version" : "8.1.0.0", "vrn" : null, "eventName" : "OnStart", "currentState" : null, "eventArgument" : "{\"productSpec\":{\"name\":\"productSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ProductSpecification\",\"value\":\"{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"displayName\\\":null,\\\"productVersion\\\":null,\\\"priority\\\":0,\\\"dependsOn\\\":[],\\\"components \\\":[{\\\"component\\\":{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"type\\\":null,\\\"componentVersion\\\":null,\\\"properties\\\":{\\\"cafeHostNamePrimary\\\":\\\"vra.cap.org\\\",\\\"cafeRootPasswordPrimary\\\":\\\"JXJXJXJX\\\",\\\"vidmPrimaryNodeRootPassword\\\":\\\"JXJXJXJX\\\",\\\"b aseTenantId\\\":\\\"idm\\\",\\\"uberAdminUserType\\\":\\\"LOCAL\\\",\\\"version\\\":\\\"8.8.1\\\",\\\"masterVidmAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"uberAdmin\\\":\\\"configadmin\\\",\\\"masterVidmEnabled\\\":\\\"true\\\",\\\"__version\\\":\\\"8.8.1\\\",\\\"uberAdminPassword\\\":\\\"JXJXJXJX\ \\",\\\"masterVidmHostName\\\":\\\"idm.cap.org\\\",\\\"masterVidmAdminUserName\\\":\\\"admin\\\",\\\"isLBSslTerminated\\\":\\\"false\\\",\\\"authProviderHostnames\\\":\\\"idm.cap.org\\\",\\\"vidmPrimaryNodeHostname\\\":\\\"idm.cap.org\\\"}},\\\"priority\\\":KXKXKXKX\"}}", "status" : "CREATED", "stateMachineInstance" : 
"ba1d3e0f-6c87-4e29-bca6-c347dd9bda6b", "errorCause" : null, "sequence" : 563559, "eventLock" : 1, "engineNodeId" : "lcm.cap.org" } 11. On vIDM Generic is initialized 2022-07-28 00:27:39.074 INFO [pool-3-thread-24] c.v.v.l.p.a.s.Task - -- KEY PICKER IS :: com.vmware.vrealize.lcm.drivers.commonplugin.task.keypicker.GenericProductSpecKeyPicker 2022-07-28 00:27:39.559 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- INITIALIZING NEW EVENT :: { "vmid" : "e596cea2-9b2d-4454-a5c5-811b52d35d86", "transactionId" : null, "tenant" : "default", "createdBy" : "root", "lastModifiedBy" : "root", "createdOn" : 1658968059076, "lastUpdatedOn" : 1658968059540, "version" : "8.1.0.0", "vrn" : null, "eventName" : "OnVidmGenericInitialized", "currentState" : null, "eventArgument" : "{\"componentSpec\":{\"name\":\"componentSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ComponentDeploymentSpecification\",\"value\":\"{\\\"component\\\":{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"type\\\":null,\\\"componentVersion\\\":null,\\\"properties\\\":{\\\"cafeHostNamePrimary\\\":\\\"vra.cap.org\\\",\\\"cafeRootPasswordPrimary\\\":\\\"JXJXJXJX\\\",\\\"vidmPrimaryNodeRootPassword\\\":\\\"JXJXJXJX\\\",\\\"baseTenantId\\\":\\\"idm\\\",\\\"uberAdminUserType\\\":\\\"LOCAL\\\",\\\"version\\\":\\\"8.8.1\\\",\\\"masterVidmAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"uberAdmin\\\":\\\"configadmin\\\",\\\"masterVidmEnabled\\\":\\\"true\\\",\\\"__version\\\":\\\"8.8.1\\\",\\\"uberAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"masterVidmHostName\\\":\\\"idm.cap.org\\\",\\\"masterVidmAdminUserName\\\":\\\"admin\\\",\\\"isLBSslTerminated\\\":\\\"false\\\",\\\"authProviderHostnames\\\":\\\"idm.cap.org\\\",\\\"vidmPrimaryNodeHostname\\\":\\\"idm.cap.org\\\"}},\\\"priority\\\":KXKXKXKX\"},\"productSpec\":{\"name\":\"productSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ProductSpecification\",\"value\":\"{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"displayName\\\":null,\\\"productVersion\\\":null,\\\"priority\\\"
:0,\\\"dependsOn\\\":[],\\\"components\\\":[{\\\"component\\\":{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"type\\\":null,\\\"componentVersion\\\":null,\\\"properties\\\":{\\\"cafeHostNamePrimary\\\":\\\"vra.cap.org\\\",\\\"cafeRootPasswordPrimary\\\":\\\"JXJXJXJX\\\",\\\"vidmPrimaryNodeRootPassword\\\":\\\"JXJXJXJX\\\",\\\"baseTenantId\\\":\\\"idm\\\",\\\"uberAdminUserType\\\":\\\"LOCAL\\\",\\\"version\\\":\\\"8.8.1\\\",\\\"masterVidmAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"uberAdmin\\\":\\\"configadmin\\\",\\\"masterVidmEnabled\\\":\\\"true\\\",\\\"__version\\\":\\\"8.8.1\\\",\\\"uberAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"masterVidmHostName\\\":\\\"idm.cap.org\\\",\\\"masterVidmAdminUserName\\\":\\\"admin\\\",\\\"isLBSslTerminated\\\":\\\"false\\\",\\\"authProviderHostnames\\\":\\\"idm.cap.org\\\",\\\"vidmPrimaryNodeHostname\\\":\\\"idm.cap.org\\\"}},\\\"priority\\\":KXKXKXKX\"}}", "status" : "CREATED", "stateMachineInstance" : "ba1d3e0f-6c87-4e29-bca6-c347dd9bda6b", "errorCause" : null, "sequence" : 563561, "eventLock" : 1, "engineNodeId" : "lcm.cap.org" } 12. vRA VA check vIDM root certificate task is initiated. On this task "vracli -j vidm " command is executed. 
A response is successfully received 2022-07-28 00:27:39.613 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaCheckVidmRootCertificateTask 2022-07-28 00:27:39.618 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaCheckVidmRootCertificateTask 2022-07-28 00:27:39.657 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: productSpec 2022-07-28 00:27:39.663 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:39.668 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:39.672 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:39.675 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:27:39.677 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:39.679 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:39.682 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:39.684 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:27:39.684 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
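In the entries that follow, `VraVaCheckVidmRootCertificateTask` fetches the vIDM CA chain from `https://idm.cap.org` and compares its thumbprints (the `[D5236B54***9746E24052]` values) with those of the certificate vRA returned. A sketch of that comparison under stated assumptions — the thumbprints look like SHA-1 hex digests, but the exact hash vRSLCM uses is not shown in the log, and both function names are hypothetical:

```python
import base64
import hashlib
import re

def cert_thumbprint(pem: str) -> str:
    """Thumbprint of a PEM certificate: hash of the DER bytes, upper-case hex.
    SHA-1 is an assumption based on the thumbprint length style in the log."""
    body = re.search(
        r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----", pem, re.S
    ).group(1)
    der = base64.b64decode("".join(body.split()))
    return hashlib.sha1(der).hexdigest().upper()

def vra_trusts_vidm(vidm_ca_pems, vra_pems) -> bool:
    """True when every vIDM CA thumbprint already appears among vRA's certs."""
    return {cert_thumbprint(p) for p in vidm_ca_pems} <= \
           {cert_thumbprint(p) for p in vra_pems}
```

When the sets match, the certificate is already trusted; the `OnVraVaVidmRootCertificateNotPresent` edge seen later covers the opposite case.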
2022-07-28 00:27:39.688 INFO [pool-3-thread-39] c.v.v.l.p.c.v.t.VraVaCheckVidmRootCertificateTask - -- Starting VraVaCheckVidmRootCertificate Task vravaretrustvidm 2022-07-28 00:27:39.699 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command to be run : vracli -j vidm 2022-07-28 00:27:39.701 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:27:39.702 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: vracli -j vidm 2022-07-28 00:27:39.815 INFO [pool-3-thread-39] c.v.v.l.u.SshUtils - -- Executing command --> vracli -j vidm 2022-07-28 00:27:41.377 INFO [pool-3-thread-39] c.v.v.l.u.SshUtils - -- exit-status: 0 2022-07-28 00:27:41.382 INFO [pool-3-thread-39] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:27:41.383 INFO [pool-3-thread-39] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "{\"status_code\": 0, \"output\": {\"cert\": \"-----BEGIN CERTIFICATE-----\\nMIIDzjCCAragAwIBAgIGAX2PFLkmMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM\\nKnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G\\nA1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTEyMDYwOTMwMzlaFw0yMzEy\\nMDYwOTMwMzlaMFkxFDASBgNVBAMMC2lkbS5jYXAub3JnMQwwCg * * * FAl4dGC3jhaO9+icDZiKBb5hsR/wntUzF9Nqns9JoABFdfq+FDqmhw+U\\nak9o9GsHpGVghl7Vs2ExneVikFm9bbon2QucASTLXvc6wXD7kkRSqTG/DvoDVvuI\\nSyplnOpNDLcnWyNB+V7djqYE6ybARErFLKk4LGiJxvunVTl3U5L8HsxqQy+Au9gF\\ngAnnMWcisaDxDMeuR5/Ome1RybvvZ27YeAPe5t+y6aICi8m1g/bF3+naHtKPFa50\\n/G9JkfAgPJSsczhG6XoDBz0cTz44EK2QLM6fHptn0m1oCi5pNuvg40KWWHgmJhHO\\ntN/HBMZylOQGHWDwiVWkUOw0\\n-----END CERTIFICATE-----\\n\", \"clients\": {\"ClientID\": \"prelude-UyLAxiyMkl\", \"ClientIDUser\": \"prelude-user-elkli9TRYF\", \"ClientSecret\": \"JXJXJXJX\", \"ClientSecretUser\": \"JXJXJXJX\"}, \"defaultOrgAlias\": \"\", \"defaultOrgName\": \"IDM\", \"isDefaultOrgAliasUpdated\": true, \"sha256\": 
\"b41714bbd62d342281986e0c80533d179de47579ccef7b0037da4c98f23010de\", \"url\": \"https://idm.cap.org\", \"user\": \"configadmin\", \"verify_cert\": false}, \"error\": \"\", \"logs\": \"\"}\n", "errorData" : null, "commandTimedOut" : false } 2022-07-28 00:27:41.386 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0 2022-07-28 00:27:41.390 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:27:41.390 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:27:41.391 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:27:41.392 INFO [pool-3-thread-39] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- {"status_code": 0, "output": {"cert": "-----BEGIN CERTIFICATE-----\nMIIDzjCCAragAwIBAgIGAX2PFLkmMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM\nKnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G\nA1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTEyMDYwOTMwMzlaFw0yMzEy\nMDYwOTMwMzlaMFkxFDASBgNVBAMMC2lkbS5jYXAub3JnMQwwCgYDVQ* * DZiKBb5hsR/wntUzF9Nqns9JoABFdfq+FDqmhw+U\nak9o9GsHpGVghl7Vs2ExneVikFm9bbon2QucASTLXvc6wXD7kkRSqTG/DvoDVvuI\nSyplnOpNDLcnWyNB+V7djqYE6ybARErFLKk4LGiJxvunVTl3U5L8HsxqQy+Au9gF\ngAnnMWcisaDxDMeuR5/Ome1RybvvZ27YeAPe5t+y6aICi8m1g/bF3+naHtKPFa50\n/G9JkfAgPJSsczhG6XoDBz0cTz44EK2QLM6fHptn0m1oCi5pNuvg40KWWHgmJhHO\ntN/HBMZylOQGHWDwiVWkUOw0\n-----END CERTIFICATE-----\n", "clients": {"ClientID": "prelude-UyLAxiyMkl", "ClientIDUser": "prelude-user-elkli9TRYF", "ClientSecret": "JXJXJXJX", "ClientSecretUser": "JXJXJXJX"}, "defaultOrgAlias": "", "defaultOrgName": "IDM", "isDefaultOrgAliasUpdated": true, "sha256": "b41714bbd62d342281986e0c80533d179de47579ccef7b0037da4c98f23010de", "url": "https://idm.cap.org", "user": "configadmin", "verify_cert": false}, "error": "", "logs": ""} * * * 2022-07-28 00:27:41.430 INFO [pool-3-thread-39] 
c.v.v.l.p.c.v.t.VraVaCheckVidmRootCertificateTask - -- vIDM details retrieved from vRA : Output [clients=Clients [ClientIDUser=null, ClientID=null, ClientSecretUser=KXKXKXKX, sha265=null, cert=-----BEGIN CERTIFICATE----- MIIDzjCCAragAwIBAgIGAX2PFLkmMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G * * * /G9JkfAgPJSsczhG6XoDBz0cTz44EK2QLM6fHptn0m1oCi5pNuvg40KWWHgmJhHO tN/HBMZylOQGHWDwiVWkUOw0 -----END CERTIFICATE----- , defaultOrgName=IDM, user=configadmin, url=https://idm.cap.org] 2022-07-28 00:27:41.453 INFO [pool-3-thread-39] c.v.v.l.u.CertificateUtil - -- requestUrl : https://idm.cap.org 2022-07-28 00:27:41.498 INFO [pool-3-thread-39] c.v.v.l.p.c.v.t.VraVaCheckVidmRootCertificateTask - -- vIDM CA certificates thumbprints: [D5236B54***9746E24052] 2022-07-28 00:27:41.501 INFO [pool-3-thread-39] c.v.v.l.p.c.v.t.VraVaCheckVidmRootCertificateTask - -- Thumbprints of vIDM certificates retrieved from vRA: [D5236B54***9746E24052] 13. Starts vIDM Instance availability task 2022-07-28 00:27:42.036 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- GETTING MACHINE FOR THE KEY :: vravaretrustvidm 2022-07-28 00:27:42.037 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- QUERYING CONTENT :: SystemFlowInventory::flows::flow->vravaretrustvidm 2022-07-28 00:27:42.037 INFO [scheduling-1] c.v.v.l.d.i.u.InventorySchemaQueryUtil - -- GETTING ROOT NODE FOR :: SystemFlowInventory 2022-07-28 00:27:42.065 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- URL :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:42.066 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-07-28 00:27:42.066 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /systemflowrepo 2022-07-28 00:27:42.066 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:42.067 INFO [scheduling-1] 
c.v.v.l.c.c.ContentDownloadController - -- URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:42.067 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:27:42.068 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='vravaretrustvidm', version=8.1.0.0} -> repoName='systemflowrepo', contentState='PUBLISHED', url='/systemflowrepo/system/flow/vravaretrustvidm.vmfx'} 2022-07-28 00:27:42.069 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnVraVaVidmRootCertificateNotPresent 2022-07-28 00:27:42.069 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaCheckVidmRootCertificateTask 2022-07-28 00:27:42.070 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmInstanceAvailabilityCheckTask 2022-07-28 00:27:42.075 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.vidm.core.task.VidmInstanceAvailabilityCheckTask 2022-07-28 00:27:42.099 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: productSpec 2022-07-28 00:27:42.100 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.104 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.108 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: vidmPrimaryNodeRootPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.111 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.113 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:27:42.114 INFO [scheduling-1] 
c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.115 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.117 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: vidmPrimaryNodeRootPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.118 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.120 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:27:42.121 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 2022-07-28 00:27:42.128 INFO [pool-3-thread-36] c.v.v.l.v.c.t.VidmInstanceAvailabilityCheckTask - -- Starting vIDM instance availability check task 2022-07-28 00:27:42.409 INFO [pool-3-thread-36] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVidmInstanceAvailabilityCheckSuccess 14. 
VidmLoginTokenCheckTask also succeeds, as a login token was obtained 2022-07-28 00:27:42.642 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmInstanceAvailabilityCheckTask 2022-07-28 00:27:42.642 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmLoginTokenCheckTask 2022-07-28 00:27:42.646 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.vidm.core.task.VidmLoginTokenCheckTask 2022-07-28 00:27:42.668 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Bean :: configurationPropertyService 2022-07-28 00:27:42.670 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:27:42.672 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.676 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.679 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: vidmPrimaryNodeRootPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:27:42.683 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:42.898 INFO [pool-3-thread-45] c.v.v.l.v.c.t.VidmLoginTokenCheckTask - -- vIDM login token obtained. Proceeding to next task 2022-07-28 00:29:42.960 INFO [pool-3-thread-45] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVidmLoginTokenCheckSuccess 15. 
vIDM healthcheck for vRA task is initiated 2022-07-28 00:29:43.232 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnVidmLoginTokenCheckSuccess 2022-07-28 00:29:43.233 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmLoginTokenCheckTask 2022-07-28 00:29:43.233 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmHealthCheck4vRATask 2022-07-28 00:29:43.238 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.vidm.core.task.VidmHealthCheck4vRATask 2022-07-28 00:29:43.275 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Bean :: configurationPropertyService 2022-07-28 00:29:43.276 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:29:43.277 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.283 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.287 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: vidmPrimaryNodeRootPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.290 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.293 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:29:43.294 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
2022-07-28 00:29:43.302 INFO [pool-3-thread-30] c.v.v.l.v.c.t.VidmHealthCheck4vRATask - -- Starting vIDM health check task 2022-07-28 00:29:43.306 INFO [pool-3-thread-30] c.v.v.l.v.c.t.u.VidmInstallTaskUtil - -- vIDM Configuration property healthChkSleepMaxRetries value is obtained from Config Service : 60 2022-07-28 00:29:43.308 INFO [pool-3-thread-30] c.v.v.l.v.c.t.u.VidmInstallTaskUtil - -- vIDM Configuration property healthChkSleepTimeMillis value is obtained from Config Service : 10000 2022-07-28 00:29:43.436 INFO [pool-3-thread-30] c.v.v.l.v.d.r.c.VidmRestClient - -- API Response Status : 200 Response Message : {"clusterInstances":[{"version":"3.3.6.0 Build 19203469","uuid":"34f02008-8429-329b-808f-75724dcc3695","status":"Active","lastUpdated":1658968171385,"hostname":"idm.cap.org","datacenterId":1,"id":2,"ipaddress":"*.**.****.*"}],"_links":{}} 2022-07-28 00:29:43.438 INFO [pool-3-thread-30] c.v.v.l.v.d.r.u.VidmServerRestUtil - -- String response vIDM get all cluster instance : VidmRestClientResponseDTO [statusCode=200, responseMessage={"clusterInstances":[{"version":"3.3.6.0 Build 19203469","uuid":"34f02008-8429-329b-808f-75724dcc3695","status":"Active","lastUpdated":1658968171385,"hostname":"idm.cap.org","datacenterId":1,"id":2,"ipaddress":"*.**.****.*"}],"_links":{}}] 2022-07-28 00:29:43.447 INFO [pool-3-thread-30] c.v.v.l.v.c.t.VidmHealthCheck4vRATask - -- Successfully verified status of the vIDM nodes. 2022-07-28 00:29:43.447 INFO [pool-3-thread-30] c.v.v.l.v.c.t.VidmHealthCheck4vRATask - -- vIDM Health is OK. Proceeding to next task. 
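The health check above is a polling loop driven by two configuration properties (`healthChkSleepMaxRetries` = 60, `healthChkSleepTimeMillis` = 10000), and the REST response shows vIDM is healthy when its `clusterInstances` all report `"status": "Active"`. A minimal sketch of that decision and the retry pattern, assuming the response shape from this log; `fetch` is a hypothetical callable standing in for the REST call:

```python
import time

def vidm_is_healthy(cluster_response: dict) -> bool:
    """Healthy when at least one instance is reported and all are Active."""
    instances = cluster_response.get("clusterInstances", [])
    return bool(instances) and all(i.get("status") == "Active" for i in instances)

def wait_for_vidm(fetch, max_retries=60, sleep_millis=10_000) -> bool:
    """Poll loop mirroring healthChkSleepMaxRetries / healthChkSleepTimeMillis."""
    for _ in range(max_retries):
        if vidm_is_healthy(fetch()):
            return True
        time.sleep(sleep_millis / 1000)
    return False
```

With the defaults above, an unhealthy vIDM would be retried for up to roughly ten minutes before the task gives up.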
2022-07-28 00:29:43.447 INFO [pool-3-thread-30] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVidmHealthCheckTaskCompletion 2022-07-28 00:29:43.448 INFO [pool-3-thread-30] c.v.v.l.p.a.s.Task - -- ======================================== { "componentSpec" : { "object" : { "component" : { "symbolicName" : "vravaretrustvidm", "type" : null, "componentVersion" : null, "properties" : { "cafeHostNamePrimary" : "vra.cap.org", "cafeRootPasswordPrimary" : "JXJXJXJX", "vidmPrimaryNodeRootPassword" : "JXJXJXJX", "baseTenantId" : "idm", "uberAdminUserType" : "LOCAL", "version" : "8.8.1", "masterVidmAdminPassword" : "JXJXJXJX", "uberAdmin" : "configadmin", "masterVidmEnabled" : "true", "__version" : "8.8.1", "uberAdminPassword" : "JXJXJXJX", "masterVidmHostName" : "idm.cap.org", "masterVidmAdminUserName" : "admin", "isLBSslTerminated" : "false", "authProviderHostnames" : "idm.cap.org", "vidmPrimaryNodeHostname" : "idm.cap.org" } }, "priority" : 0 * * * 2022-07-28 00:29:43.450 INFO [pool-3-thread-30] c.v.v.l.p.a.s.Task - -- FIELD NAME :: componentSpec 2022-07-28 00:29:43.450 INFO [pool-3-thread-30] c.v.v.l.p.a.s.Task - -- KEY PICKER IS :: com.vmware.vrealize.lcm.drivers.commonplugin.task.keypicker.GenericComponentSpecKeyPicker 2022-07-28 00:29:43.759 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- INITIALIZING NEW EVENT :: { "vmid" : "e100878f-639e-4689-8584-79e9ebe9170b", "transactionId" : null, "tenant" : "default", "createdBy" : "root", "lastModifiedBy" : "root", "createdOn" : 1658968183452, "lastUpdatedOn" : 1658968183740, "version" : "8.1.0.0", "vrn" : null, "eventName" : "OnVidmHealthCheckTaskCompletion", "currentState" : null, "eventArgument" : 
"{\"componentSpec\":{\"name\":\"componentSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ComponentDeploymentSpecification\",\"value\":\"{\\\"component\\\":{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"type\\\":null,\\\"componentVersion\\\":null,\\\"properties\\\":{\\\"cafeHostNamePrimary\\\":\\\"vra.cap.org\\\",\\\"cafeRootPasswordPrimary\\\":\\\"JXJXJXJX\\\",\\\"vidmPrimaryNodeRootPassword\\\":\\\"JXJXJXJX\\\",\\\"baseTenantId\\\":\\\"idm\\\",\\\"uberAdminUserType\\\":\\\"LOCAL\\\",\\\"version\\\":\\\"8.8.1\\\",\\\"masterVidmAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"uberAdmin\\\":\\\"configadmin\\\",\\\"masterVidmEnabled\\\":\\\"true\\\",\\\"__version\\\":\\\"8.8.1\\\",\\\"uberAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"masterVidmHostName\\\":\\\"idm.cap.org\\\",\\\"masterVidmAdminUserName\\\":\\\"admin\\\",\\\"isLBSslTerminated\\\":\\\"false\\\",\\\"authProviderHostnames\\\":\\\"idm.cap.org\\\",\\\"vidmPrimaryNodeHostname\\\":\\\"idm.cap.org\\\"}},\\\"priority\\\":KXKXKXKX\"}}", "status" : "CREATED", "stateMachineInstance" : "ba1d3e0f-6c87-4e29-bca6-c347dd9bda6b", "errorCause" : null, "sequence" : 563577, "eventLock" : 1, "engineNodeId" : "lcm.cap.org" } 16. 
vIDM Super User availability check is triggered and completed 2022-07-28 00:29:43.766 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- GETTING MACHINE FOR THE KEY :: vravaretrustvidm 2022-07-28 00:29:43.766 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- QUERYING CONTENT :: SystemFlowInventory::flows::flow->vravaretrustvidm 2022-07-28 00:29:43.767 INFO [scheduling-1] c.v.v.l.d.i.u.InventorySchemaQueryUtil - -- GETTING ROOT NODE FOR :: SystemFlowInventory 2022-07-28 00:29:43.797 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- URL :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:29:43.797 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-07-28 00:29:43.798 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /systemflowrepo 2022-07-28 00:29:43.798 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /system/flow/vravaretrustvidm.vmfx 2022-07-28 00:29:43.798 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:29:43.798 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /systemflowrepo/system/flow/vravaretrustvidm.vmfx 2022-07-28 00:29:43.800 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='vravaretrustvidm', version=8.1.0.0} -> repoName='systemflowrepo', contentState='PUBLISHED', url='/systemflowrepo/system/flow/vravaretrustvidm.vmfx'} 2022-07-28 00:29:43.802 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnVidmHealthCheckTaskCompletion 2022-07-28 00:29:43.803 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmHealthCheck4vRATask 2022-07-28 00:29:43.803 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmSuperUserAvailabilityCheckTask 2022-07-28 00:29:43.809 
INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.vidm.core.task.VidmSuperUserAvailabilityCheckTask 2022-07-28 00:29:43.813 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Bean :: configurationPropertyService 2022-07-28 00:29:43.814 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:29:43.815 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.817 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.820 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: vidmPrimaryNodeRootPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.821 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:43.823 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:29:43.823 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
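The `VidmSuperUserAvailabilityCheckTask` invoked here queries the vIDM SCIM Users API and searches the `Resources` array for the configured super user (`configadmin`). A sketch of that check against the response shape shown in the next log excerpt; the function name is illustrative:

```python
def super_user_available(scim_response: dict, username: str) -> bool:
    """Search a SCIM Users response for an active user with the given userName,
    mirroring the 'User configadmin found to be available in vIDM' check."""
    for user in scim_response.get("Resources", []):
        if user.get("userName") == username and user.get("active"):
            return True
    return False

# Reduced sample mirroring the SCIM response in the log
scim = {"totalResults": 1,
        "Resources": [{"active": True, "userName": "configadmin",
                       "roles": [{"display": "Administrator"}]}]}
```

Like the health check, this runs under its own retry properties (`userChkSleepMaxRetries` = 120, `userChkSleepTimeMillis` = 10000), so a freshly created user has time to become visible.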
2022-07-28 00:29:43.827 INFO [pool-3-thread-3] c.v.v.l.v.c.t.VidmSuperUserAvailabilityCheckTask - -- Starting vIDM super user availability task 2022-07-28 00:29:43.829 INFO [pool-3-thread-3] c.v.v.l.v.c.t.u.VidmInstallTaskUtil - -- vIDM Configuration property userChkSleepMaxRetries value is obtained from Config Service : 120 2022-07-28 00:29:43.830 INFO [pool-3-thread-3] c.v.v.l.v.c.t.u.VidmInstallTaskUtil - -- vIDM Configuration property userChkSleepTimeMillis value is obtained from Config Service : 10000 2022-07-28 00:29:43.911 INFO [pool-3-thread-3] c.v.v.l.v.d.r.u.VidmUserGroupMgmtRestUtil - -- VidmUserGroupMgmtRestUtil::getUserByUsernameAndUserType - Request to fetch user in a group 2022-07-28 00:29:43.950 INFO [pool-3-thread-3] c.v.v.l.v.d.r.c.VidmRestClient - -- API Response Status : 200 Response Message : {"totalResults":1,"itemsPerPage":1,"startIndex":1,"schemas":["urn:scim:schemas:core:1.0","urn:scim:schemas:extension:workspace:1.0","urn:scim:schemas:extension:enterprise:1.0","urn:scim:schemas:extension:workspace:mfa:1.0"],"Resources":[{"active":true,"userName":"configadmin","id":"a58bddc6-314b-4220-9ba4-cea4d9c5dbff","meta":{"created":"2021-12-06T10:05:21.025Z","lastModified":"2021-12-06T10:05:21.992Z","location":"https://idm.cap.org/SAAS/jersey/manager/api/scim/Users/a58bddc6-314b-4220-9ba4-cea4d9c5dbff","version":"W/\"1638785121992\""},"name":{"givenName":"configadmin","familyName":"configadmin"},"emails":[{"value":"configadmin@vsphere.local"}],"groups":[{"value":"237386ee-7f61-4d3a-93fa-1569d4bf673a","type":"direct","display":"ALL USERS"}],"roles":[{"value":"84a56b68-f8d5-4b9e-a365-92ef2adb3fb3","display":"User"},{"value":"55048dee-fe1b-404a-936d-3e0b86a7209e","display":"Administrator"}],"urn:scim:schemas:extension:workspace:1.0":{"internalUserType":"LOCAL","userStatus":"1","domain":"System Domain","userStoreUuid":"2e0fefa1-077e-423a-8095-c5f46b0d1827"}}]} 2022-07-28 00:29:43.952 INFO [pool-3-thread-3] c.v.v.l.v.d.r.u.VidmUserGroupMgmtRestUtil - -- 
VidmUserGroupMgmtRestUtil::getUser - Successfully fetched user 2022-07-28 00:29:43.953 INFO [pool-3-thread-3] c.v.v.l.v.d.r.u.VidmUserGroupMgmtRestUtil - -- Response : VidmRestClientResponseDTO [statusCode=200, responseMessage={"totalResults":1,"itemsPerPage":1,"startIndex":1,"schemas":["urn:scim:schemas:core:1.0","urn:scim:schemas:extension:workspace:1.0","urn:scim:schemas:extension:enterprise:1.0","urn:scim:schemas:extension:workspace:mfa:1.0"],"Resources":[{"active":true,"userName":"configadmin","id":"a58bddc6-314b-4220-9ba4-cea4d9c5dbff","meta":{"created":"2021-12-06T10:05:21.025Z","lastModified":"2021-12-06T10:05:21.992Z","location":"https://idm.cap.org/SAAS/jersey/manager/api/scim/Users/a58bddc6-314b-4220-9ba4-cea4d9c5dbff","version":"W/\"1638785121992\""},"name":{"givenName":"configadmin","familyName":"configadmin"},"emails":[{"value":"configadmin@vsphere.local"}],"groups":[{"value":"237386ee-7f61-4d3a-93fa-1569d4bf673a","type":"direct","display":"ALL USERS"}],"roles":[{"value":"84a56b68-f8d5-4b9e-a365-92ef2adb3fb3","display":"User"},{"value":"55048dee-fe1b-404a-936d-3e0b86a7209e","display":"Administrator"}],"urn:scim:schemas:extension:workspace:1.0":{"internalUserType":"LOCAL","userStatus":"1","domain":"System Domain","userStoreUuid":"2e0fefa1-077e-423a-8095-c5f46b0d1827"}}]}] 2022-07-28 00:29:43.960 INFO [pool-3-thread-3] c.v.v.l.v.c.t.VidmSuperUserAvailabilityCheckTask - -- vIDM search user. Number of users returned : 1 2022-07-28 00:29:43.960 INFO [pool-3-thread-3] c.v.v.l.v.c.t.VidmSuperUserAvailabilityCheckTask - -- Searching for configadmin in the response ... 2022-07-28 00:29:43.960 INFO [pool-3-thread-3] c.v.v.l.v.c.t.VidmSuperUserAvailabilityCheckTask - -- User : configadmin 2022-07-28 00:29:43.960 INFO [pool-3-thread-3] c.v.v.l.v.c.t.VidmSuperUserAvailabilityCheckTask - -- User configadmin found to be available in vIDM 2022-07-28 00:29:43.961 INFO [pool-3-thread-3] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVidmSuperUserAvailabilitySuccess 17. 
vRA VA set vIDM task starts 2022-07-28 00:29:44.405 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnVidmSuperUserAvailabilitySuccess 2022-07-28 00:29:44.406 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmSuperUserAvailabilityCheckTask 2022-07-28 00:29:44.406 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaSetVidmTask 2022-07-28 00:29:44.412 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaSetVidmTask 2022-07-28 00:29:44.490 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:29:44.492 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:44.496 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:44.500 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:44.503 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:29:44.504 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
2022-07-28 00:29:44.513 INFO [pool-3-thread-10] c.v.v.l.p.c.v.t.VraVaSetVidmTask - -- Starting :: Set vRA VA VIDM Task 2022-07-28 00:29:44.515 INFO [pool-3-thread-10] c.v.v.l.p.c.v.t.VraVaSetVidmTask - -- Trying to set vIDM with root certificate 2022-07-28 00:29:44.515 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:44.516 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: rm /tmp/adminpassword.txt 2022-07-28 00:29:44.596 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Executing command --> rm /tmp/adminpassword.txt 2022-07-28 00:29:45.145 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 1 2022-07-28 00:29:45.145 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:45.145 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 1, "outputData" : "", "errorData" : "rm: cannot remove '/tmp/adminpassword.txt': YXYXYXYX such file or directory\n", "commandTimedOut" : false } * * * 2022-07-28 00:29:45.147 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:45.147 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: rm /tmp/vidmrootcert.pem 2022-07-28 00:29:45.216 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Executing command --> rm /tmp/vidmrootcert.pem 2022-07-28 00:29:45.764 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 1 2022-07-28 00:29:45.764 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:45.765 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 1, "outputData" : "", "errorData" : "rm: cannot remove '/tmp/vidmrootcert.pem': KXKXKXKX such file or directory\n", "commandTimedOut" : false } 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command 
Status code :: 1 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream :: 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:45.766 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- rm: cannot remove '/tmp/vidmrootcert.pem': KXKXKXKX such file or directory 2022-07-28 00:29:45.767 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:45.767 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:45.767 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: echo 'MXMXMXMX' > /tmp/adminpassword.txt;history YXYXYXYX 1) 2022-07-28 00:29:45.874 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Executing command --> echo 'MXMXMXMX' > /tmp/adminpassword.txt;history YXYXYXYX 1) 2022-07-28 00:29:46.424 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 2 2022-07-28 00:29:46.425 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:46.426 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 2, "outputData" : "", "errorData" : 
"bash: line 0: history: -d: option requires an argument\nhistory: usage: history [-c] [-d offset] [n] or history -anrw [filename] or history -ps arg [arg...]\n", "commandTimedOut" : false } 2022-07-28 00:29:46.426 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 2 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream :: 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:46.427 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- bash: line 0: history: -d: option requires an argument history: usage: history [-c] [-d offset] [n] or history -anrw [filename] or history -ps arg [arg...] 
2022-07-28 00:29:46.428 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:46.428 INFO [pool-3-thread-10] c.v.v.l.u.CertificateUtil - -- requestUrl : https://idm.cap.org 2022-07-28 00:29:46.494 INFO [pool-3-thread-10] c.v.v.l.c.l.MaskingPrintStream - -- * SYSOUT/SYSERR CAPTURED: -- [ [0] Version: 3 SerialNumber: 1638415830509 IssuerDN: CN=vRealize Suite Lifecycle Manager Locker CA,O=VMware,C=IN Start Date: Thu Dec 02 03:30:30 UTC 2021 Final Date: Sun Nov 30 03:30:30 UTC 2031 SubjectDN: CN=vRealize Suite Lifecycle Manager Locker CA,O=VMware,C=IN * * * 2022-07-28 00:29:47.157 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 0 2022-07-28 00:29:47.157 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:47.158 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "", "errorData" : null, "commandTimedOut" : false } 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream :: 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 
==================================================== 2022-07-28 00:29:47.159 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- null 2022-07-28 00:29:47.160 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:47.160 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command to be run : vracli vidm set https://idm.cap.org admin /tmp/adminpassword.txt YXYXYXYX -r /tmp/vidmrootcert.pem 2022-07-28 00:29:47.160 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:47.161 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: vracli vidm set https://idm.cap.org admin /tmp/adminpassword.txt YXYXYXYX -r /tmp/vidmrootcert.pem 2022-07-28 00:29:47.261 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Executing command --> vracli vidm set https://idm.cap.org admin /tmp/adminpassword.txt YXYXYXYX -r /tmp/vidmrootcert.pem 2022-07-28 00:29:50.813 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 0 2022-07-28 00:29:50.889 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:50.890 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "2022-07-28 00:29:48,209 [INFO] Setting vIDM certificate from /tmp/vidmrootcert.pem\n2022-07-28 00:29:48,354 [INFO] Getting information about client: prelude-UyLAxiyMkl\n2022-07-28 00:29:48,476 [INFO] Updating vIDM client prelude-UyLAxiyMkl with grant_types: client_credentials and redirect URLs: None\n2022-07-28 00:29:48,676 [INFO] Getting information about client: prelude-user-elkli9TRYF\n2022-07-28 00:29:48,794 [INFO] Updating vIDM client prelude-user-elkli9TRYF with grant_types: refresh_token authorization_code password YXYXYXYX and redirect URLs: https://vra.cap.org/identity/api/core/authn/csp, 
https://idm.vra.cap.org/identity/api/core/authn/csp\n2022-07-28 00:29:49,027 [INFO] Getting information about tenant: IDM\n2022-07-28 00:29:50,509 [INFO] Restarting Identity service pod(s)\n", "errorData" : null, "commandTimedOut" : false } 2022-07-28 00:29:50.892 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0 2022-07-28 00:29:50.895 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:50.895 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:29:50.895 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:50.896 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 2022-07-28 00:29:48,209 [INFO] Setting vIDM certificate from /tmp/vidmrootcert.pem 2022-07-28 00:29:48,354 [INFO] Getting information about client: prelude-UyLAxiyMkl 2022-07-28 00:29:48,476 [INFO] Updating vIDM client prelude-UyLAxiyMkl with grant_types: client_credentials and redirect URLs: None 2022-07-28 00:29:48,676 [INFO] Getting information about client: prelude-user-elkli9TRYF 2022-07-28 00:29:48,794 [INFO] Updating vIDM client prelude-user-elkli9TRYF with grant_types: refresh_token authorization_code password YXYXYXYX and redirect URLs: https://vra.cap.org/identity/api/core/authn/csp, https://idm.vra.cap.org/identity/api/core/authn/csp 2022-07-28 00:29:49,027 [INFO] Getting information about tenant: IDM 2022-07-28 00:29:50,509 [INFO] Restarting Identity service pod(s) 2022-07-28 00:29:50.896 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:50.896 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream :: 2022-07-28 00:29:50.896 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 
==================================================== 2022-07-28 00:29:50.896 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- null 2022-07-28 00:29:50.896 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:50.897 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Set VIDM Host successful on :: vra.cap.org 2022-07-28 00:29:50.898 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:50.898 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: rm /tmp/adminpassword.txt 2022-07-28 00:29:51.298 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Executing command --> rm /tmp/adminpassword.txt 2022-07-28 00:29:51.844 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 0 2022-07-28 00:29:51.844 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:51.845 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "", "errorData" : null, "commandTimedOut" : false } 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:51.846 INFO [pool-3-thread-10] 
c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream :: 2022-07-28 00:29:51.847 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:51.847 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- null 2022-07-28 00:29:51.847 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:51.847 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:51.847 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: rm /tmp/vidmrootcert.pem 2022-07-28 00:29:52.277 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Executing command --> rm /tmp/vidmrootcert.pem 2022-07-28 00:29:52.828 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- exit-status: 0 2022-07-28 00:29:52.828 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command executed sucessfully 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.u.SshUtils - -- Command execution response: { "exitStatus" : 0, "outputData" : "", "errorData" : null, "commandTimedOut" : false } 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Command Status code :: 0 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Output Stream :: 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] 
c.v.v.l.d.v.h.VraPreludeInstallHelper - -- Error Stream :: 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- null 2022-07-28 00:29:52.829 INFO [pool-3-thread-10] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- ==================================================== 2022-07-28 00:29:52.830 INFO [pool-3-thread-10] c.v.v.l.p.c.v.t.VraVaSetVidmTask - -- Setting VIDM for vRA completed with status : true 2022-07-28 00:29:52.830 INFO [pool-3-thread-10] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnSetVIDMVaCompletion * * * 2022-07-28 00:29:52.834 INFO [pool-3-thread-10] c.v.v.l.p.a.s.Task - -- FIELD NAME :: componentSpec 2022-07-28 00:29:52.834 INFO [pool-3-thread-10] c.v.v.l.p.a.s.Task - -- KEY PICKER IS :: com.vmware.vrealize.lcm.plugin.core.vra80.tasks.keypicker.VravaComponentSpecKeyPicker 2022-07-28 00:29:53.385 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- INITIALIZING NEW EVENT :: { "vmid" : "1cb3e351-4235-4824-8fb5-4cbc9c4a80d2", "transactionId" : null, "tenant" : "default", "createdBy" : "root", "lastModifiedBy" : "root", "createdOn" : 1658968192835, "lastUpdatedOn" : 1658968193334, "version" : "8.1.0.0", "vrn" : null, "eventName" : "OnSetVIDMVaCompletion", "currentState" : null, "eventArgument" : 
"{\"productSpec\":{\"name\":\"productSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ProductSpecification\",\"value\":\"null\"},\"componentSpec\":{\"name\":\"componentSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ComponentDeploymentSpecification\",\"value\":\"{\\\"component\\\":{\\\"symbolicName\\\":\\\"vravaretrustvidm\\\",\\\"type\\\":null,\\\"componentVersion\\\":null,\\\"properties\\\":{\\\"cafeHostNamePrimary\\\":\\\"vra.cap.org\\\",\\\"cafeRootPasswordPrimary\\\":\\\"JXJXJXJX\\\",\\\"vidmPrimaryNodeRootPassword\\\":\\\"JXJXJXJX\\\",\\\"baseTenantId\\\":\\\"idm\\\",\\\"uberAdminUserType\\\":\\\"LOCAL\\\",\\\"version\\\":\\\"8.8.1\\\",\\\"masterVidmAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"uberAdmin\\\":\\\"configadmin\\\",\\\"masterVidmEnabled\\\":\\\"true\\\",\\\"__version\\\":\\\"8.8.1\\\",\\\"uberAdminPassword\\\":\\\"JXJXJXJX\\\",\\\"masterVidmHostName\\\":\\\"idm.cap.org\\\",\\\"masterVidmAdminUserName\\\":\\\"admin\\\",\\\"isLBSslTerminated\\\":\\\"false\\\",\\\"authProviderHostnames\\\":\\\"idm.cap.org\\\",\\\"vidmPrimaryNodeHostname\\\":\\\"idm.cap.org\\\"}},\\\"priority\\\":KXKXKXKX\"}}", "status" : "CREATED", "stateMachineInstance" : "ba1d3e0f-6c87-4e29-bca6-c347dd9bda6b", "errorCause" : null, "sequence" : 563581, "eventLock" : 1, "engineNodeId" : "lcm.cap.org" } * * * 2022-07-28 00:29:53.501 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnSetVIDMVaCompletion 18. Set load balancer task is initiated and completed. 
2022-07-28 00:29:53.501 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaSetVidmTask 2022-07-28 00:29:53.501 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaSetLoadBalancerTask 2022-07-28 00:29:53.603 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaSetLoadBalancerTask 2022-07-28 00:29:53.869 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: productSpec 2022-07-28 00:29:53.871 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:29:53.872 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:53.875 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:53.877 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:53.880 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Bean :: configurationPropertyService 2022-07-28 00:29:53.880 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:29:53.881 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 2022-07-28 00:29:54.005 INFO [pool-3-thread-47] c.v.v.l.p.c.v.t.VraVaSetLoadBalancerTask - -- Starting :: vRA Set LB Task.... 
2022-07-28 00:29:54.130 INFO [pool-3-thread-47] c.v.v.l.p.c.v.t.u.VraVaTaskUtil - -- vRA Configuration property setLoadBalancerCertificate value is obtained from Config Service : true 2022-07-28 00:29:54.130 INFO [pool-3-thread-47] c.v.v.l.p.c.v.t.VraVaSetLoadBalancerTask - -- LB information not provided skipping LB 2022-07-28 00:29:54.130 INFO [pool-3-thread-47] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnLoadBalancerSetCompletion 19. The vRA VA initialize task starts; it brings up the services by running deploy.sh to initialize the pods 2022-07-28 00:29:54.641 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaInitializeTask 2022-07-28 00:29:54.787 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaInitializeTask 2022-07-28 00:29:54.866 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: productSpec 2022-07-28 00:29:54.868 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:29:54.869 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:54.873 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:54.876 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:29:54.881 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:29:54.881 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
2022-07-28 00:29:54.932 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.VraVaInitializeTask - -- Starting :: Initialize vRA VA Task 2022-07-28 00:29:54.960 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.VraVaInitializeTask - -- isCavaDeployment :false deployOptions: null 2022-07-28 00:29:54.967 INFO [pool-3-thread-7] c.v.v.l.p.c.v.t.VraVaInitializeTask - -- Running Deploy Script on vRA VA : vra.cap.org 2022-07-28 00:29:54.967 INFO [pool-3-thread-7] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- PRELUDE ENDPOINT HOST :: vra.cap.org 2022-07-28 00:29:54.989 INFO [pool-3-thread-7] c.v.v.l.d.v.h.VraPreludeInstallHelper - -- COMMAND :: /opt/scripts/deploy.sh 2022-07-28 00:29:55.187 INFO [pool-3-thread-7] c.v.v.l.u.SshUtils - -- Executing command --> /opt/scripts/deploy.sh 20. This is when, on the vRA side, you would see the pods being redeployed ========================= [2022-07-28 00:29:55.456+0000] Waiting for deploy healthcheck ========================= ========================= [2022-07-28 00:29:57.740+0000] Backing up databases from existing pods ========================= ========================= [2022-07-28 00:30:03.806+0000] Waiting for command execution pods ========================= ========================= [2022-07-28 00:30:12.060+0000] Tear down existing deployment ========================= ========================= [2022-07-28 00:34:13.878+0000] Creating kubernetes namespaces ========================= ========================= [2022-07-28 00:34:22.435+0000] Applying ingress certificate ========================= ========================= [2022-07-28 00:34:28.153+0000] Updating etcd configuration to include https_proxy if such exists ========================= ========================= [2022-07-28 00:34:38.749+0000] Deploying infrastructure services ========================= ========================= + set +x [2022-07-28 00:34:47.336+0000] Creating database pods in previous mode if necessary for migration ========================= ========================= [2022-07-28 00:36:41.814+0000] 
Clearing liquibase locks ========================= ========================= [2022-07-28 00:37:06.654+0000] DB upgrade schema ========================= [2022-07-28 00:39:19.814+0000] Populating initial identity-service data ========================= ========================= [2022-07-28 00:39:57.218+0000] Deploying application services ========================= ========================= [2022-07-28 00:52:13.054+0000] Deploying application UI ========================= ========================= [2022-07-28 00:53:47.981+0000] Setting feature UI toggles to Provisioning service ========================= ========================= [2022-07-28 00:54:02.647+0000] Check embedded endpoints ========================= Prelude has been deployed successfully ========================= 21. LCM reports that the vRA VA services have started 2022-07-28 00:54:08.540 INFO [pool-3-thread-7] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnInitilizeVaCompletion 22. Now it starts the vRA VA Update Certificate Inventory task 2022-07-28 00:54:09.056 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnInitilizeVaCompletion 2022-07-28 00:54:09.056 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaInitializeTask 2022-07-28 00:54:09.057 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaUpdateCertificateInInventoryTask 2022-07-28 00:54:09.065 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaUpdateCertificateInInventoryTask 2022-07-28 00:54:09.106 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: productSpec 2022-07-28 00:54:09.107 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:54:09.108 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: 
cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.114 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.117 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.118 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:54:09.120 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 2022-07-28 00:54:09.134 INFO [pool-3-thread-34] c.v.v.l.p.c.v.t.VraVaUpdateCertificateInInventoryTask - -- Starting :: vRA Certificate Inventory Update Task.... 2022-07-28 00:54:09.136 INFO [pool-3-thread-34] c.v.v.l.u.CertificateUtil - -- requestUrl : https://vra.cap.org 2022-07-28 00:54:09.236 INFO [pool-3-thread-34] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVravaCertificateInventoryCompletion 23. Update all allowed redirects task is triggered. 
These are the links available in the products when a user logs into vIDM 2022-07-28 00:54:09.612 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Responding for Edge :: OnVravaCertificateInventoryCompletion 2022-07-28 00:54:09.612 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaUpdateCertificateInInventoryTask 2022-07-28 00:54:09.612 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmUpdateAllowedRedirectsTask 2022-07-28 00:54:09.620 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.vidm.core.task.VidmUpdateAllowedRedirectsTask 2022-07-28 00:54:09.665 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: productSpec 2022-07-28 00:54:09.667 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Locker Object :: componentSpec 2022-07-28 00:54:09.669 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: cafeRootPasswordPrimary<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.674 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: uberAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.677 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: vidmPrimaryNodeRootPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.680 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- KEY_VARIABLE :: masterVidmAdminPassword<=KXKXKXKX KEY :: XXXXXX 2022-07-28 00:54:09.683 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-28 00:54:09.683 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
2022-07-28 00:54:09.697 INFO [pool-3-thread-33] c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask - -- Starting vIDM update allowed redirects 2022-07-28 00:54:09.699 INFO [pool-3-thread-33] c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask - -- vRA hostname : vra.cap.org 2022-07-28 00:54:09.699 INFO [pool-3-thread-33] c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask - -- Redirects supposed to added : https://vra.cap.org* 2022-07-28 00:54:09.904 INFO [pool-3-thread-33] c.v.v.l.v.d.r.c.VidmRestClient - -- API Response Status : 200 Response Message : {"allowedRedirects":["https://lcm.cap.org*","https://vra.cap.org*","https://migvra.cap.org*","https://tvra.cap.org*","https://hvra.cap.org*","https://nusvra.cap.org*","https://testvra.cap.org*"],"_links":{"self":{"href":"/SAAS/jersey/manager/api/authsettings/allowedredirects"}}} 2022-07-28 00:54:09.908 INFO [pool-3-thread-33] c.v.v.l.v.d.r.u.VidmServerRestUtil - -- Response for GET allowed redirects : VidmRestClientResponseDTO [statusCode=200, responseMessage={"allowedRedirects":["https://lcm.cap.org*","https://vra.cap.org*","https://migvra.cap.org*","https://tvra.cap.org*","https://hvra.cap.org*","https://nusvra.cap.org*","https://testvra.cap.org*"],"_links":{"self":{"href":"/SAAS/jersey/manager/api/authsettings/allowedredirects"}}}] 2022-07-28 00:54:09.908 INFO [pool-3-thread-33] c.v.v.l.v.d.r.u.VidmServerRestUtil - -- Response for GET allowed redirects : VidmRestClientResponseDTO [statusCode=200, responseMessage={"allowedRedirects":["https://lcm.cap.org*","https://vra.cap.org*","https://migvra.cap.org*","https://tvra.cap.org*","https://hvra.cap.org*","https://nusvra.cap.org*","https://testvra.cap.org*"],"_links":{"self":{"href":"/SAAS/jersey/manager/api/authsettings/allowedredirects"}}}] 2022-07-28 00:54:09.914 INFO [pool-3-thread-33] c.v.v.l.v.d.r.u.VidmServerRestUtil - -- Allowed redirects on the vIDM :: [https://lcm.cap.org*, https://vra.cap.org*, https://migvra.cap.org*, https://tvra.cap.org*, https://hvra.cap.org*, 
https://nusvra.cap.org*, https://testvra.cap.org*] 2022-07-28 00:54:09.914 INFO [pool-3-thread-33] c.v.v.l.v.d.r.u.VidmServerRestUtil - -- Skipping updating allowed redirect URLs, as all the required URLs already exists on the vIDM :: idm.cap.org 2022-07-28 00:54:09.914 INFO [pool-3-thread-33] c.v.v.l.v.c.t.VidmUpdateAllowedRedirectsTask - -- vIDM update allowed redirects task done 2022-07-28 00:54:09.915 INFO [pool-3-thread-33] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVidmUpdateAllowRedirectSuccess 24. Finally, the FinalTask is invoked and the request completes 2022-07-28 00:54:10.195 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.vidm.core.task.VidmUpdateAllowedRedirectsTask 2022-07-28 00:54:10.195 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.platform.automata.service.task.FinalTask 2022-07-28 00:54:10.203 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.platform.automata.service.task.FinalTask My vIDM and vRA here were standalone, not clustered; I will find some time to test this on a clustered deployment too. Nothing would really change apart from the number of nodes, and the steps would remain the same.
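The set-vIDM step traced above boils down to a handful of shell commands that vRSLCM runs on the vRA appliance over SSH. Here is a minimal sketch of that flow, assuming root access to the vRA node; the password value is a placeholder, and the additional masked argument the log shows between the password file and `-r` is intentionally left out.

```shell
# Stage the vIDM admin password in a temp file (placeholder value);
# vRSLCM also scrubs this command from the shell history right after.
echo 'REPLACE_WITH_VIDM_ADMIN_PASSWORD' > /tmp/adminpassword.txt

# /tmp/vidmrootcert.pem must already contain the vIDM root CA certificate,
# which vRSLCM copies onto the appliance before this step.

# Register vIDM with vRA; -r supplies the vIDM root certificate.
# The log above shows one extra (masked) argument after the password file.
vracli vidm set https://idm.cap.org admin /tmp/adminpassword.txt -r /tmp/vidmrootcert.pem

# Clean up the temporary credential files, just as vRSLCM does afterwards.
rm -f /tmp/adminpassword.txt /tmp/vidmrootcert.pem
```

On success, the command updates the prelude OAuth clients and restarts the identity service pod(s), which matches the exit-status 0 output captured in the log.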

  • Unable to generate vRSLCM log bundle through UI

There was a recent scenario where vRSLCM log bundle generation was not working through the UI. So let's take a minute to see what happens when you generate a log bundle, and then we can inspect what could have gone wrong in the earlier case where no bundle was produced. When I click Generate Log Bundle, it creates a request, a planner spec, and then the engine request. What's important here is that the downloadUrl points to the previously generated log bundle. 2022-07-27 09:25:15.649 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:25:15.660 INFO [http-nio-8080-exec-4] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : loginsightsetting 2022-07-27 09:25:53.795 INFO [http-nio-8080-exec-9] c.v.v.l.l.c.SettingsController - -- Validation result for Setting :logbundledownload result true 2022-07-27 09:25:53.846 INFO [http-nio-8080-exec-9] c.v.v.l.l.c.SettingsController - -- Setting Value before save: "{\n \"downloadUrl\" : \"https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-01-25_01-33-57.44075.zip\"\n}" 2022-07-27 09:25:53.926 INFO [http-nio-8080-exec-9] c.v.v.l.l.u.RequestSubmissionUtil - -- ++++++++++++++++++ Creating request to Request_Service :::>>> { "vmid" : "2d03775b-9f49-4982-879b-26195ed74cac", "transactionId" : null, "tenant" : "default", "requestName" : "lcmsupportbundle", "requestReason" : "Get vRSLCM Support Bundle", "requestType" : "lcmsupportbundle", "requestSource" : null, "requestSourceType" : "user", "inputMap" : { "downloadUrl" : "https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-01-25_01-33-57.44075.zip" }, "outputMap" : { }, "state" : "CREATED", "executionId" : null, "executionPath" : null, "executionStatus" : null, "errorCause" : null, "resultSet" : null, "isCancelEnabled" : null, "lastUpdatedOn" : 1658913953925, "createdBy" : null } 2022-07-27 09:25:54.021 INFO 
[scheduling-1] c.v.v.l.r.c.p.GenericEnvironmentPlanner - -- Generic Planner SPEC :: { "vmid" : "117d667d-f8d6-4602-9d97-874f7b4fd39d", "tenant" : "default", "originalRequest" : null, "enhancedRequest" : null, "symbolicName" : "90e136b4-4d83-4c61-ab81-7378fdf324ae", "acceptEula" : true, "variables" : { }, "products" : [ { "symbolicName" : "lcmsupportbundle", "displayName" : null, "productVersion" : null, "priority" : 0, "dependsOn" : [ ], "components" : [ { "component" : { "symbolicName" : "lcmsupportbundle", "type" : null, "componentVersion" : null, "properties" : { "downloadUrl" : "https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-01-25_01-33-57.44075.zip", "isVcfUser" : "false" } }, "priority" : 0 } ] } ] } 2022-07-27 09:25:54.027 INFO [scheduling-1] c.v.v.l.r.c.RequestProcessor - -- ENGINE REQUEST :: { "vmid" : "117d667d-f8d6-4602-9d97-874f7b4fd39d", "tenant" : "default", "originalRequest" : null, "enhancedRequest" : null, "symbolicName" : "90e136b4-4d83-4c61-ab81-7378fdf324ae", "acceptEula" : true, "variables" : { }, "products" : [ { "symbolicName" : "lcmsupportbundle", "displayName" : null, "productVersion" : null, "priority" : 0, "dependsOn" : [ ], "components" : [ { "component" : { "symbolicName" : "lcmsupportbundle", "type" : null, "componentVersion" : null, "properties" : { "downloadUrl" : "https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-01-25_01-33-57.44075.zip", "isVcfUser" : "false" } }, "priority" : 0 } ] } ] } The actual or current log bundle generation starts 2022-07-27 09:25:55.157 INFO [scheduling-1] c.v.v.l.a.c.FlowProcessor - -- Injected OnStart Edge for the Machine ID :: lcmsupportbundle 2022-07-27 09:25:55.201 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- INITIALIZING NEW EVENT :: { "vmid" : "2036f6ef-a34c-4c66-a414-671a403057f0", "transactionId" : null, "tenant" : "default", "createdBy" : "root", "lastModifiedBy" : "root", "createdOn" : 1658913955155, "lastUpdatedOn" : 1658913955182, 
"version" : "8.1.0.0", "vrn" : null, "eventName" : "OnStart", "currentState" : null, "eventArgument" : "{\"productSpec\":{\"name\":\"productSpec\",\"type\":\"com.vmware.vrealize.lcm.domain.ProductSpecification\",\"value\":\"{\\\"symbolicName\\\":\\\"lcmsupportbundle\\\",\\\"displayName\\\":null,\\\"productVersion\\\":null,\\\"priority\\\":0,\\\"dependsOn\\\":[],\\\"components\\\":[{\\\"component\\\":{\\\"symbolicName\\\":\\\"lcmsupportbundle\\\",\\\"type\\\":null,\\\"componentVersion\\\":null,\\\"properties\\\":{\\\"downloadUrl\\\":\\\"https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-01-25_01-33-57.44075.zip\\\",\\\"isVcfUser\\\":\\\"false\\\"}},\\\"priority\\\":0}]}\"}}", "status" : "CREATED", "stateMachineInstance" : "9b69126f-58e7-47de-afa7-3bf907dd6756", "errorCause" : null, "sequence" : 560741, "eventLock" : 1, "engineNodeId" : "lcm.cap.org" } 2022-07-27 09:25:55.209 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- GETTING MACHINE FOR THE KEY :: lcmsupportbundle 2022-07-27 09:25:55.209 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- QUERYING CONTENT :: SystemFlowInventory::flows::flow->lcmsupportbundle 2022-07-27 09:25:55.210 INFO [scheduling-1] c.v.v.l.d.i.u.InventorySchemaQueryUtil - -- GETTING ROOT NODE FOR :: SystemFlowInventory 2022-07-27 09:25:55.228 INFO [scheduling-1] c.v.v.l.a.c.MachineRegistry - -- URL :: /system/flow/lcmsupportbundle.vmfx 2022-07-27 09:25:55.229 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- INSIDE ContentDownloadControllerImpl 2022-07-27 09:25:55.229 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- REPO_NAME :: /systemflowrepo 2022-07-27 09:25:55.229 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- CONTENT_PATH :: /system/flow/lcmsupportbundle.vmfx 2022-07-27 09:25:55.229 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- URL :: /systemflowrepo/system/flow/lcmsupportbundle.vmfx 2022-07-27 09:25:55.229 INFO [scheduling-1] 
c.v.v.l.c.c.ContentDownloadController - -- Decoded URL :: /systemflowrepo/system/flow/lcmsupportbundle.vmfx 2022-07-27 09:25:55.230 INFO [scheduling-1] c.v.v.l.c.c.ContentDownloadController - -- ContentDTO{BaseDTO{vmid='lcmsupportbundle', version=8.1.0.0} -> repoName='systemflowrepo', contentState='PUBLISHED', url='/systemflowrepo/system/flow/lcmsupportbundle.vmfx'} 2022-07-27 09:25:55.231 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- State to find :: com.vmware.vrealize.lcm.plugin.lcmplugin.core.task.LcmSupportBundleTask 2022-07-27 09:25:55.235 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Invoking Task :: com.vmware.vrealize.lcm.plugin.lcmplugin.core.task.LcmSupportBundleTask 2022-07-27 09:25:55.502 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Bean :: contentRepositoryController 2022-07-27 09:25:55.504 INFO [scheduling-1] c.v.v.l.a.c.EventProcessor - -- Injecting Bean :: settingsController 2022-07-27 09:25:55.504 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Start Instrumenting EventMetadata. 2022-07-27 09:25:55.506 INFO [scheduling-1] c.v.v.l.c.u.EventExecutionTelemetryUtil - -- Stop Instrumenting EventMetadata. 
2022-07-27 09:25:55.511 INFO [pool-3-thread-14] c.v.v.l.p.l.c.t.LcmSupportBundleTask - -- Starting :: LCM VA support Bundle Task 2022-07-27 09:25:55.528 INFO [pool-3-thread-14] c.v.v.l.u.ShellExecutor - -- Executing shell command: /var/lib/vlcm-common/vlcm-support -w /data/lcm-logbundle 2022-07-27 09:25:55.530 INFO [pool-3-thread-14] c.v.v.l.u.ProcessUtil - -- Execute /var/lib/vlcm-common/vlcm-support 2022-07-27 09:25:58.091 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:25:58.099 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:02.098 INFO [http-nio-8080-exec-10] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:02.098 INFO [http-nio-8080-exec-10] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:06.092 INFO [http-nio-8080-exec-2] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:06.143 INFO [http-nio-8080-exec-2] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload * * * * 2022-07-27 09:26:10.097 INFO [http-nio-8080-exec-1] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:10.097 INFO [http-nio-8080-exec-1] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:14.101 INFO [http-nio-8080-exec-4] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:14.102 INFO [http-nio-8080-exec-4] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:18.105 INFO [http-nio-8080-exec-3] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:18.173 INFO [http-nio-8080-exec-3] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:22.096 INFO 
[http-nio-8080-exec-7] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:22.097 INFO [http-nio-8080-exec-7] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:26.104 INFO [http-nio-8080-exec-5] c.v.v.l.l.c.SettingsController - -- queryParams : { } 2022-07-27 09:26:26.104 INFO [http-nio-8080-exec-5] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-27 09:26:28.068 INFO [pool-3-thread-14] c.v.v.l.u.ShellExecutor - -- Result: [Support program for VMware vRealize LCM Appliance - Version 8.2.0 adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/auth.log (stored 0%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/boot.log (deflated 91%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/bootstrap/everyboot.log (deflated 96%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/bootstrap/firstboot.log (stored 0%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/bootstrap/postinstall.log (stored 0%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/bootstrap/postupdate.log (deflated 79%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/bootstrap/preupdate.log (deflated 45%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/cloud-init-output.log (deflated 92%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/cloud-init.log (deflated 90%) zip warning: name not matched: vlcmsupport-2022-07-27_09-25-55.16453/var/log/dracut.log zip error: Nothing to do! (vlcmsupport-2022-07-27_09-25-55.16453.zip) Warning..detected exception condition. 
Reason: 12 adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/installer-kickstart.log (stored 0%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/installer.log (stored 0%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liagent_2022-07-14_23.log (deflated 95%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2021-10-08_00.log (deflated 78%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2021-12-02_01.log (deflated 71%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2021-12-02_02.log (deflated 71%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2021-12-13_03.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-01-06_04.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-01-06_05.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-01-19_06.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-03-23_07.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-03-23_08.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-04-24_09.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-05-06_10.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-06-15_11.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-07-14_12.log (deflated 70%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/loginsight-agent/liupdater_2022-07-14_13.log (deflated 60%) adding: vlcmsupport-2022-07-27_09-25-55.16453/var/log/nginx/access.log zip warning: file size changed while 
zipping vlcmsupport-2022-07-27_09-25-55.16453/var/log/nginx/access.log (deflated 96%) * * * adding: vlcmsupport-2022-07-27_09-25-55.16453/tmp/service-status-all.txt (deflated 79%) adding: vlcmsupport-2022-07-27_09-25-55.16453/tmp/chkconfig-list.txt (deflated 43%) adding: vlcmsupport-2022-07-27_09-25-55.16453/tmp/ovfenv-D.16453.txt (deflated 77%) adding: vlcmsupport-2022-07-27_09-25-55.16453/tmp/ovfenv-xml.16453.txt (deflated 71%) adding: vlcmsupport-2022-07-27_09-25-55.16453/tmp/vlcm_build_details.txt (deflated 34%) File: /data/lcm-logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip]. 2022-07-27 09:26:28.835 INFO [pool-3-thread-14] c.v.v.l.l.u.SettingsHelper - -- Setting name : vcfmodesettings , Package name : com.vmware.vrealize.lcm.lcops.common.dto.settings.VcfConfigurationDTO 2022-07-27 09:26:28.848 INFO [pool-3-thread-14] c.v.v.l.l.u.SettingsHelper - -- Final Data : { "vcfHostname" : null, "username" : null, "apiKey" : null, "version" : null } 2022-07-27 09:26:28.856 INFO [pool-3-thread-14] c.v.v.l.p.l.c.t.LcmSupportBundleTask - -- Result archive is: '/data/lcm-logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip' 2022-07-27 09:26:28.856 INFO [pool-3-thread-14] c.v.v.l.p.l.c.t.LcmSupportBundleTask - -- Support bundle created successfully in path : /data/lcm-logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip 2022-07-27 09:26:28.886 INFO [pool-3-thread-14] c.v.v.l.p.a.s.Task - -- Injecting Edge :: OnVaSupportBundleCompletion 2022-07-27 09:26:28.900 INFO [pool-3-thread-14] c.v.v.l.p.l.c.t.LcmSupportBundleTask - -- old download url : https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-01-25_01-33-57.44075.zip 2022-07-27 09:26:28.912 INFO [pool-3-thread-14] c.v.v.l.c.c.ContentRepositoryController - -- Content delete requested for url /logBundleRepo/vrlcm/logbundlevlcmsupport-2022-01-25_01-33-57.44075.zip. 
2022-07-27 09:26:28.943 INFO [pool-3-thread-14] c.v.v.l.c.c.ContentRepositoryController - -- Content delete is failed with given URL :: /logBundleRepo/vrlcm/logbundlevlcmsupport-2022-01-25_01-33-57.44075.zip 2022-07-27 09:26:29.056 INFO [pool-3-thread-14] c.v.v.l.c.c.ContentRepositoryController - -- Creating content operation. 2022-07-27 09:26:29.088 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- URL :: /logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip 2022-07-27 09:26:29.111 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- URL :: /logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip 2022-07-27 09:26:29.115 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH LENGTH :: 5 2022-07-27 09:26:29.116 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH LENGTH TEST PASSED 2022-07-27 09:26:29.123 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- SEARCHINE FOR :: REPO -> logBundleRepo :: NAME -> __ROOT__ :KXKXKXKX PARENT -> da0c4d6f-d0de-4837-835b-6f0d6e01b15b 2022-07-27 09:26:29.249 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH :: 2022-07-27 09:26:29.250 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH ::logBundleRepo 2022-07-27 09:26:29.250 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH ::vrlcm 2022-07-27 09:26:29.251 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH ::logbundle 2022-07-27 09:26:29.251 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- PATH ::vlcmsupport-2022-07-27_09-25-55.16453.zip 2022-07-27 09:26:29.253 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- ADDING NODE - PATH LENGTH :: 5 2022-07-27 09:26:29.254 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- SEARCHINE FOR :: REPO -> logBundleRepo :: PARENT -> 
2761c604-c889-4c28-834a-23423133cbf1 :: NAME -> vrlcm
2022-07-27 09:26:29.259 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- SEARCHINE FOR :: REPO -> logBundleRepo :: PARENT -> 3172ba88-32f7-4f07-ba92-407abbdb37d3 :: NAME -> logbundle
2022-07-27 09:26:29.262 INFO [pool-3-thread-14] c.v.v.l.c.s.ContentDownloadUrlServiceImpl - -- SEARCHINE FOR :: REPO -> logBundleRepo :: PARENT -> 024b449c-18cc-4dab-8e0b-9e9a47702453 :: NAME -> vlcmsupport-2022-07-27_09-25-55.16453.zip

Finally, the completion of log bundle collection is marked, and the previous download URL is replaced with the most recently generated bundle for download.

2022-07-27 09:26:30.097 INFO [http-nio-8080-exec-9] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-27 09:26:31.852 INFO [pool-3-thread-14] c.v.v.l.p.l.c.t.LcmSupportBundleTask - -- finalUrl to download log bundle : https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip
2022-07-27 09:26:34.091 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload

In the failed scenario, what we see is the message below repeating forever; the request simply does not move forward.

2022-07-21 21:05:35.937 INFO [http-nio-8080-exec-4] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:05:39.932 INFO [http-nio-8080-exec-11] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:05:43.934 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:05:47.940 INFO [http-nio-8080-exec-12] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:05:51.935 INFO [http-nio-8080-exec-10] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data
required for setting : logbundledownload 2022-07-21 21:05:55.943 INFO [http-nio-8080-exec-3] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:05:59.940 INFO [http-nio-8080-exec-8] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:03.929 INFO [http-nio-8080-exec-2] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:07.938 INFO [http-nio-8080-exec-5] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:11.938 INFO [http-nio-8080-exec-9] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:15.932 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:19.938 INFO [http-nio-8080-exec-12] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:23.928 INFO [http-nio-8080-exec-10] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:27.934 INFO [http-nio-8080-exec-1] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:31.930 INFO [http-nio-8080-exec-7] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:35.938 INFO [http-nio-8080-exec-2] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:39.939 INFO [http-nio-8080-exec-5] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 2022-07-21 21:06:43.928 INFO [http-nio-8080-exec-11] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload 
2022-07-21 21:06:47.938 INFO [http-nio-8080-exec-9] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:06:51.928 INFO [http-nio-8080-exec-6] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:06:55.938 INFO [http-nio-8080-exec-12] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload
2022-07-21 21:06:59.931 INFO [http-nio-8080-exec-3] c.v.v.l.l.c.SettingsController - -- Dynamic Setting data required for setting : logbundledownload

As per the data in the database:

Problematic Environment

vrlcm=# select * from vm_lcops_settings where name = 'logbundledownload';
-[ RECORD 1 ]-------+-----------------------------------------------------------
vmid                | b8******e0
createdby           | serviceadmin@local
createdon           | 1638384524109
lastmodifiedby      | serviceadmin@local
lastupdatedon       | 1638384524109
tenant              | default
version             | 8.1.0.0
vrn                 |
additionaldata      |
datatype            | Request
description         | Trigger Log bundle download
detaileddescription | Triggers log collection for all nodes in the environment. Returns request id that can be used to check log collection status
name                | logbundledownload
packagename         |
value               | {
                    |   "requestid" : "28****f0"
                    | }

My Lab

vrlcm=# select * from vm_lcops_settings where name = 'logbundledownload';
-[ RECORD 1 ]-------+-----------------------------------------------------------
vmid                | 9b675941-a97e-4b66-9f4c-4f0391a77017
createdby           | serviceadmin@local
createdon           | 1638415924109
lastmodifiedby      | serviceadmin@local
lastupdatedon       | 1638415924109
tenant              | default
version             | 8.1.0.0
vrn                 |
additionaldata      |
datatype            | Request
description         | Trigger Log bundle download
detaileddescription | Triggers log collection for all nodes in the environment. Returns request id that can be used to check log collection status
name                | logbundledownload
packagename         |
value               | {
                    |   "downloadUrl" : "https://lcm.cap.org/repo/logBundleRepo/vrlcm/logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip"
                    | }

The difference is in the vm_lcops_settings table, where the logbundledownload record holds a wrong value: instead of a downloadUrl, it has a request id set in it.

Remediation Plan

1. Take a snapshot of vRSLCM.
2. Execute the query below to remove the existing value:
   update vm_lcops_settings set value = '' where name = 'logbundledownload';
3. Restart vRSLCM:
   systemctl restart vrslcm-server
4. Log in to the UI.
5. Generate the log bundle.

One can also check from the API whether the value is set correctly:
Method: GET
URL: {{lcmurl}}/lcm/lcops/api/settings/logbundledownload

Hope this helps you understand what happens in the background, and how to remediate such a failure when it is seen.
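The eyeball comparison of the two database records can also be scripted. Below is a minimal sketch (the `diagnose_logbundle_setting` helper is mine, not a vRSLCM API; the payload shapes are taken from the records shown above, and the same value is served by GET {{lcmurl}}/lcm/lcops/api/settings/logbundledownload):

```python
import json

def diagnose_logbundle_setting(raw_value: str) -> str:
    """Classify the 'value' column of the logbundledownload record.

    A healthy record carries a downloadUrl; the problematic
    environment carried a request id instead. (Helper written for
    this post, not part of vRSLCM.)
    """
    if not raw_value.strip():
        return "empty"      # state right after the remediation UPDATE
    data = json.loads(raw_value)
    if "downloadUrl" in data:
        return "healthy"    # UI can offer the bundle for download
    if "requestid" in data:
        return "broken"     # request id where a URL belongs
    return "unknown"

# The two values observed above
problematic = '{ "requestid" : "28****f0" }'
lab = ('{ "downloadUrl" : "https://lcm.cap.org/repo/logBundleRepo'
       '/vrlcm/logbundle/vlcmsupport-2022-07-27_09-25-55.16453.zip" }')

print(diagnose_logbundle_setting(problematic))  # broken
print(diagnose_logbundle_setting(lab))          # healthy
print(diagnose_logbundle_setting(""))           # empty
```

Anything other than "healthy" (or "empty" right after the remediation) suggests the record needs the fix above.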

  • Upgrade Planner Controller APIs

    There are two APIs that the Upgrade Planner Controller provides. Let's explore these APIs and see how we can fetch our desired result.

getVrsProductsVersions
Inputs
Method: GET
URL: {{lcmurl}}/lcm/lcops/api/v2/getTargetVersions
Content Type: application/json

showUpgradePath
Inputs
Method: POST
URL: {{lcmurl}}/lcm/lcops/api/v2/findUpgradePath
Content Type: application/json
upgradePathInputs: [ { "productId": "string", "fromVersion": "string", "toVersion": "string", "required": true } ]

Sample Body
[
  { "productId": "vrslcm", "fromVersion": "8.8.2", "toVersion": "8.8.2", "required": false },
  { "productId": "vra", "fromVersion": "8.8.1", "toVersion": "8.8.2", "required": true },
  { "productId": "vrli", "fromVersion": "8.6.2", "toVersion": "8.8.2", "required": true },
  { "productId": "vrops", "fromVersion": "8.6.2", "toVersion": "8.6.3", "required": true },
  { "productId": "vrni", "fromVersion": "6.6.0", "toVersion": "6.7.0", "required": true },
  { "productId": "vidm", "fromVersion": "3.3.6", "toVersion": "3.3.6", "required": true }
]
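Rather than hand-writing that sample body, it can be generated. A small sketch (the `upgrade_path_entry` helper is mine; the field names come from the upgradePathInputs schema above):

```python
import json

def upgrade_path_entry(product_id: str, from_version: str,
                       to_version: str, required: bool = True) -> dict:
    """Build one element of the upgradePathInputs array expected by
    POST {{lcmurl}}/lcm/lcops/api/v2/findUpgradePath."""
    return {
        "productId": product_id,
        "fromVersion": from_version,
        "toVersion": to_version,
        "required": required,
    }

# Rebuild the sample body shown above
body = [
    upgrade_path_entry("vrslcm", "8.8.2", "8.8.2", required=False),
    upgrade_path_entry("vra", "8.8.1", "8.8.2"),
    upgrade_path_entry("vrli", "8.6.2", "8.8.2"),
    upgrade_path_entry("vrops", "8.6.2", "8.6.3"),
    upgrade_path_entry("vrni", "6.6.0", "6.7.0"),
    upgrade_path_entry("vidm", "3.3.6", "3.3.6"),
]
print(json.dumps(body, indent=2))
```

The printed JSON can be pasted straight into the request body of the findUpgradePath call.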

  • Installing Jenkins

    Use-Case
Documenting the steps needed to install Jenkins on an Ubuntu virtual machine and then integrate it with vRealize Automation to trigger some pipelines.

Procedure
Deploy an Ubuntu virtual machine (either on-prem or in the cloud). Update the apt repositories and install OpenJDK 11:

root@jenkins:~# sudo apt update
Hit:1 http://in.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://in.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:4 http://in.archive.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:5 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [1,793 kB]
Get:6 http://in.archive.ubuntu.com/ubuntu focal-updates/main Translation-en [330 kB]
Get:7 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 c-n-f Metadata [15.2 kB]
Get:8 http://in.archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [976 kB]
Get:9 http://in.archive.ubuntu.com/ubuntu focal-updates/restricted Translation-en [139 kB]
Get:10 http://in.archive.ubuntu.com/ubuntu focal-updates/restricted amd64 c-n-f Metadata [520 B]
Get:11 http://in.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [924 kB]
Get:12 http://in.archive.ubuntu.com/ubuntu focal-updates/universe Translation-en [207 kB]
Get:13 http://in.archive.ubuntu.com/ubuntu focal-updates/universe amd64 c-n-f Metadata [20.7 kB]
Get:14 http://in.archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [24.4 kB]
Get:15 http://in.archive.ubuntu.com/ubuntu focal-updates/multiverse Translation-en [7,336 B]
Get:16 http://in.archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 c-n-f Metadata [596 B]
Get:17 http://in.archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages [68.1 kB]
Get:18 http://in.archive.ubuntu.com/ubuntu focal-backports/main Translation-en [10.9 kB]
Get:19 http://in.archive.ubuntu.com/ubuntu focal-backports/main amd64 c-n-f Metadata [980 B]
Get:20
http://in.archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [26.8 kB] Get:21 http://in.archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [15.9 kB] Get:22 http://in.archive.ubuntu.com/ubuntu focal-backports/universe amd64 c-n-f Metadata [860 B] Get:23 http://in.archive.ubuntu.com/ubuntu focal-security/main amd64 Packages [1,453 kB] Get:24 http://in.archive.ubuntu.com/ubuntu focal-security/main Translation-en [250 kB] Get:25 http://in.archive.ubuntu.com/ubuntu focal-security/main amd64 c-n-f Metadata [10.2 kB] Get:26 http://in.archive.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [914 kB] Get:27 http://in.archive.ubuntu.com/ubuntu focal-security/restricted Translation-en [130 kB] Get:28 http://in.archive.ubuntu.com/ubuntu focal-security/restricted amd64 c-n-f Metadata [520 B] Get:29 http://in.archive.ubuntu.com/ubuntu focal-security/universe amd64 Packages [703 kB] Get:30 http://in.archive.ubuntu.com/ubuntu focal-security/universe Translation-en [125 kB] Get:31 http://in.archive.ubuntu.com/ubuntu focal-security/universe amd64 c-n-f Metadata [14.4 kB] Get:32 http://in.archive.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [22.2 kB] Get:33 http://in.archive.ubuntu.com/ubuntu focal-security/multiverse Translation-en [5,376 B] Get:34 http://in.archive.ubuntu.com/ubuntu focal-security/multiverse amd64 c-n-f Metadata [512 B] Fetched 8,527 kB in 2s (3,810 kB/s) Reading package lists... Done Building dependency tree Reading state information... Done 150 packages can be upgraded. Run 'apt list --upgradable' to see them. root@jenkins:~# sudo apt install openjdk-11-jre Reading package lists... Done Building dependency tree Reading state information... 
Done The following additional packages will be installed: at-spi2-core ca-certificates-java fontconfig-config fonts-dejavu-core fonts-dejavu-extra java-common libatk-bridge2.0-0 libatk-wrapper-java libatk-wrapper-java-jni libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libdrm-amdgpu1 libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libfontconfig1 libfontenc1 libgif7 libgl1 libgl1-mesa-dri libglapi-mesa libglvnd0 libglx-mesa0 libglx0 libgraphite2-3 libharfbuzz0b libice6 libjpeg-turbo8 libjpeg8 liblcms2-2 libllvm12 libpciaccess0 libpcsclite1 libsensors-config libsensors5 libsm6 libvulkan1 libwayland-client0 libx11-xcb1 libxaw7 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcomposite1 libxfixes3 libxft2 libxi6 libxinerama1 libxkbfile1 libxmu6 libxpm4 libxrandr2 libxrender1 libxshmfence1 libxt6 libxtst6 libxv1 libxxf86dga1 libxxf86vm1 mesa-vulkan-drivers openjdk-11-jre-headless x11-common x11-utils Suggested packages: default-jre cups-common liblcms2-utils pcscd lm-sensors libnss-mdns fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei | fonts-wqy-zenhei fonts-indic mesa-utils The following NEW packages will be installed: at-spi2-core ca-certificates-java fontconfig-config fonts-dejavu-core fonts-dejavu-extra java-common libatk-bridge2.0-0 libatk-wrapper-java libatk-wrapper-java-jni libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libdrm-amdgpu1 libdrm-intel1 libdrm-nouveau2 libdrm-radeon1 libfontconfig1 libfontenc1 libgif7 libgl1 libgl1-mesa-dri libglapi-mesa libglvnd0 libglx-mesa0 libglx0 libgraphite2-3 libharfbuzz0b libice6 libjpeg-turbo8 libjpeg8 liblcms2-2 libllvm12 libpciaccess0 libpcsclite1 libsensors-config libsensors5 libsm6 libvulkan1 libwayland-client0 libx11-xcb1 libxaw7 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 libxcb-shape0 
libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcomposite1 libxfixes3 libxft2 libxi6 libxinerama1 libxkbfile1 libxmu6 libxpm4 libxrandr2 libxrender1 libxshmfence1 libxt6 libxtst6 libxv1 libxxf86dga1 libxxf86vm1 mesa-vulkan-drivers openjdk-11-jre openjdk-11-jre-headless x11-common x11-utils 0 upgraded, 75 newly installed, 0 to remove and 150 not upgraded. Need to get 79.3 MB of archives. After this operation, 715 MB of additional disk space will be used. Do you want to continue? [Y/n] Y Get:1 http://in.archive.ubuntu.com/ubuntu focal/main amd64 libatspi2.0-0 amd64 2.36.0-2 [64.2 kB] Get:2 http://in.archive.ubuntu.com/ubuntu focal/main amd64 x11-common all 1:7.7+19ubuntu14 [22.3 kB] Get:3 http://in.archive.ubuntu.com/ubuntu focal/main amd64 libxtst6 amd64 2:1.2.3-1 [12.8 kB] Get:4 http://in.archive.ubuntu.com/ubuntu focal/main amd64 at-spi2-core amd64 2.36.0-2 [48.7 kB] Get:5 http://in.archive.ubuntu.com/ubuntu focal/main amd64 java-common all 0.72 [6,816 B] Get:6 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 libavahi-common-data amd64 0.7-4ubuntu7.1 [21.4 kB] Get:7 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 libavahi-common3 amd64 0.7-4ubuntu7.1 [21.7 kB] Get:8 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 libavahi-client3 amd64 0.7-4ubuntu7.1 [25.5 kB] Get:9 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 libcups2 amd64 2.3.1-9ubuntu1.1 [233 kB] Get:10 http://in.archive.ubuntu.com/ubuntu focal/main amd64 liblcms2-2 amd64 2.9-4 [140 kB] Get:11 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 libjpeg-turbo8 amd64 2.0.3-0ubuntu1.20.04.1 [117 kB] Get:12 http://in.archive.ubuntu.com/ubuntu focal/main amd64 libjpeg8 amd64 8c-2ubuntu8 [2,194 B] Get:13 http://in.archive.ubuntu.com/ubuntu focal/main amd64 fonts-dejavu-core all 2.37-1 [1,041 kB] Get:14 http://in.archive.ubuntu.com/ubuntu focal/main amd64 fontconfig-config all 2.13.1-2ubuntu3 [28.8 kB] Get:15 http://in.archive.ubuntu.com/ubuntu focal/main 
amd64 libfontconfig1 amd64 2.13.1-2ubuntu3 [114 kB] Get:16 http://in.archive.ubuntu.com/ubuntu focal/main amd64 libgraphite2-3 amd64 1.3.13-11build1 [73.5 kB] Get:17 http://in.archive.ubuntu.com/ubuntu focal/main amd64 libharfbuzz0b amd64 2.6.4-1ubuntu4 [391 kB] Get:18 http://in.archive.ubuntu * * Get:74 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 mesa-vulkan-drivers amd64 21.2.6-0ubuntu0.1~20.04.2 [5,788 kB] Get:75 http://in.archive.ubuntu.com/ubuntu focal-updates/main amd64 openjdk-11-jre amd64 11.0.15+10-0ubuntu0.20.04.1 [175 kB] Fetched 79.3 MB in 4s (19.1 MB/s) Extracting templates from packages: 100% Selecting previously unselected package libatspi2.0-0:amd64. (Reading database ... 71625 files and directories currently installed.) Preparing to unpack .../00-libatspi2.0-0_2.36.0-2_amd64.deb ... Unpacking libatspi2.0-0:amd64 (2.36.0-2) ... * * * Unpacking mesa-vulkan-drivers:amd64 (21.2.6-0ubuntu0.1~20.04.2) ... Selecting previously unselected package openjdk-11-jre:amd64. Preparing to unpack .../74-openjdk-11-jre_11.0.15+10-0ubuntu0.20.04.1_amd64.deb ... Unpacking openjdk-11-jre:amd64 (11.0.15+10-0ubuntu0.20.04.1) ... Setting up libgraphite2-3:amd64 (1.3.13-11build1) ... Setting up libxcb-dri3-0:amd64 (1.14-2) ... Setting up liblcms2-2:amd64 (2.9-4) ... Setting up libx11-xcb1:amd64 (2:1.6.9-2ubuntu1.2) ... Setting up libpciaccess0:amd64 (0.16-0ubuntu1) ... Setting up libdrm-nouveau2:amd64 (2.4.107-8ubuntu1~20.04.2) ... Setting up libxcb-xfixes0:amd64 (1.14-2) ... Setting up libxpm4:amd64 (1:3.5.12-1) ... Setting up libxi6:amd64 (2:1.7.10-0ubuntu1) ... Setting up java-common (0.72) ... * * * Setting up libxaw7:amd64 (2:1.0.13-1) ... Setting up x11-utils (7.7+5) ... Setting up libatk-wrapper-java (0.37.1-1) ... Setting up libatk-wrapper-java-jni:amd64 (0.37.1-1) ... Setting up openjdk-11-jre-headless:amd64 (11.0.15+10-0ubuntu0.20.04.1) ... 
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/jjs to provide /usr/bin/jjs (jjs) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up openjdk-11-jre:amd64 (11.0.15+10-0ubuntu0.20.04.1) ... Setting up ca-certificates-java (20190405ubuntu1) ... 
head: cannot open '/etc/ssl/certs/java/cacerts' for reading: No such file or directory
Adding debian:QuoVadis_Root_CA_3_G3.pem
Adding debian:Buypass_Class_2_Root_CA.pem
Adding debian:Amazon_Root_CA_2.pem
Adding debian:Sonera_Class_2_Root_CA.pem
Adding debian:Atos_TrustedRoot_2011.pem
Adding debian:Certigna_Root_CA.pem
* * *
Adding debian:Staat_der_Nederlanden_Root_CA_-_G3.pem
Adding debian:ISRG_Root_X1.pem
Adding debian:OISTE_WISeKey_Global_Root_GB_CA.pem
Adding debian:e-Szigno_Root_CA_2017.pem
Adding debian:UCA_Global_G2_Root.pem
Adding debian:IdenTrust_Commercial_Root_CA_1.pem
Adding debian:GlobalSign_ECC_Root_CA_-_R5.pem
Adding debian:XRamp_Global_CA_Root.pem
Adding debian:IdenTrust_Public_Sector_Root_CA_1.pem
Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem
Adding debian:GlobalSign_Root_CA_-_R3.pem
Adding debian:emSign_Root_CA_-_C1.pem
Adding debian:certSIGN_Root_CA_G2.pem
Adding debian:Amazon_Root_CA_4.pem
done.
Processing triggers for mime-support (3.64ubuntu1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
Processing triggers for systemd (245.4-4ubuntu3.15) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for ca-certificates (20210119~20.04.2) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
done.
root@jenkins:~#

Set Repository
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null

Update Repository again
sudo apt-get update

Install Jenkins
root@jenkins:~# sudo apt-get install jenkins
Reading package lists... Done
Building dependency tree
Reading state information...
Done The following package was automatically installed and is no longer required: libfwupdplugin1 Use 'sudo apt autoremove' to remove it. The following additional packages will be installed: net-tools The following NEW packages will be installed: jenkins net-tools 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 87.9 MB of archives. After this operation, 92.1 MB of additional disk space will be used. Do you want to continue? [Y/n] Y Get:1 http://in.archive.ubuntu.com/ubuntu focal/main amd64 net-tools amd64 1.60+git20180626.aebd88e-1ubuntu1 [196 kB] Get:2 https://pkg.jenkins.io/debian-stable binary/ jenkins 2.346.1 [87.7 MB] Fetched 87.9 MB in 44s (1,978 kB/s) Selecting previously unselected package net-tools. (Reading database ... 109455 files and directories currently installed.) Preparing to unpack .../net-tools_1.60+git20180626.aebd88e-1ubuntu1_amd64.deb ... Unpacking net-tools (1.60+git20180626.aebd88e-1ubuntu1) ... Selecting previously unselected package jenkins. Preparing to unpack .../jenkins_2.346.1_all.deb ... Unpacking jenkins (2.346.1) ... Setting up net-tools (1.60+git20180626.aebd88e-1ubuntu1) ... Setting up jenkins (2.346.1) ... Created symlink /etc/systemd/system/multi-user.target.wants/jenkins.service → /lib/systemd/system/jenkins.service. Processing triggers for man-db (2.9.1-1) ... Processing triggers for systemd (245.4-4ubuntu3.17) ... 
root@jenkins:~#

Jenkins is now successfully installed. To access Jenkins, I will now enter this URL in the browser: http://jenkins:8080. Get the initial admin password as suggested:

cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password and enter it in the unlock field in the browser. Once we click Continue after entering the password, we are presented with a pane where plugins can be installed. Click on "Install suggested plugins" to start the installation. Once all of the plugins are installed, Jenkins prompts you to create a user. After entering the first admin user's information and clicking "Save and Continue", Jenkins presents the URL you will need to log in. Now click on "Save and Finish". Jenkins is now ready. Finally, when we click on "Start using Jenkins", you will be taken to the Jenkins dashboard.
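If you prefer to script the unlock step, the sketch below prints the initial admin password once Jenkins has generated it. This is only a sketch: the function name `jenkins_initial_password` is illustrative (not part of Jenkins), and it assumes the default secret path used by the Debian/Ubuntu package.

```shell
#!/bin/sh
# Sketch: print the Jenkins initial admin password once it exists.
# Assumes the default Debian/Ubuntu package path; the function name and
# the optional path argument are illustrative, not part of Jenkins itself.
jenkins_initial_password() {
    secret="${1:-/var/lib/jenkins/secrets/initialAdminPassword}"
    if [ -r "$secret" ]; then
        # the file contains exactly the one-time unlock password
        cat "$secret"
    else
        echo "initial admin password not found at $secret (is Jenkins fully started?)" >&2
        return 1
    fi
}
```

Run it as root (or via sudo), since the secrets directory is not world-readable.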

  • Scaling Down vRealize Automation 7.x

Use Case

With vRealize Automation 7.x reaching its end of life in September 2022, most customers have either adopted version 8.x or are in transition and will get there eventually. It's not easy to stop an enterprise application and decommission it overnight, but it is possible to scale it down rather than keep it distributed and highly available. Keeping this in mind, I thought I'd pen down a few steps on how to scale down vRA 7.x.

Environment

I built a three-node vRA appliance cluster and two IaaS servers, named as below:

svraone - primary VA
svratwo - secondary VA
svrathree - tertiary VA
siaasone - primary Web, Manager Service, Model Manager Data, proxy agent, DEM Worker and DEM Orchestrator
siaastwo - secondary Web, secondary Manager Service, proxy agent, DEM Worker and DEM Orchestrator

Procedure

Take snapshots of all nodes before performing any of the steps below, and back up the databases too.

The output of listing all nodes in the cluster looks like this in my lab:

Node: NodeHost: svraone.cap.org NodeId: cafe.node.631087009.16410 NodeType: VA Components: Component: Type: vRA Version: 7.6.0.317 Primary: True Component: Type: vRO Version: 7.6.0.12923317 Node: NodeHost: siaastwo.cap.org NodeId: 7DD5F70C-976F-4635-89F8-582986851E98 NodeType: IAAS Components: Component: Type: Website Version: 7.6.0.16195 State: Started Component: Type: ModelManagerWeb Version: 7.6.0.16195 State: Started Component: Type: ManagerService Version: 7.6.0.16195 State: Active Component: Type: ManagementAgent Version: 7.6.0.17541 State: Started Component: Type: DemWorker Version: 7.6.0.16195 State: Started Component: Type: DemOrchestrator Version: 7.6.0.16195 State: Started Component: Type: WAPI Version: 7.6.0.16195 State: Started Component: Type: vSphereAgent Version: 7.6.0.16195 State: Started Node: NodeHost: siaasone.cap.org NodeId: B030EDF7-DB2C-4830-942A-F40D9464AAD9 NodeType: IAAS Components: Component: Type: Database Version: 7.6.0.16195 State: Available Component: Type: 
Website Version: 7.6.0.16195 State: Started Component: Type: ModelManagerData Version: 7.6.0.16195 State: Available Component: Type: ModelManagerWeb Version: 7.6.0.16195 State: Started Component: Type: ManagerService Version: 7.6.0.16195 State: Passive Component: Type: ManagementAgent Version: 7.6.0.17541 State: Started Component: Type: DemOrchestrator Version: 7.6.0.16195 State: Started Component: Type: DemWorker Version: 7.6.0.16195 State: Started Component: Type: WAPI Version: 7.6.0.16195 State: Started Component: Type: vSphereAgent Version: 7.6.0.16195 State: Started Node: NodeHost: svrathree.cap.org NodeId: cafe.node.384204123.10666 NodeType: VA Components: Component: Type: vRA Version: 7.6.0.317 Primary: False Component: Type: vRO Version: 7.6.0.12923317 Node: NodeHost: svratwo.cap.org NodeId: cafe.node.776067309.27389 NodeType: VA Components: Component: Type: vRA Version: 7.6.0.317 Primary: False Component: Type: vRO Version: 7.6.0.12923317

To scale down, I'd like to remove the secondary nodes and leave just the primary in the cluster. I'll begin with the IaaS nodes, starting with siaastwo.cap.org. I'll open the VAMI of the master node and click on the Cluster tab. Because the second IaaS node was powered off, it won't show as connected. The moment I click "Delete" next to the secondary IaaS node, I get the warning shown below: svraone:5480 says Do you really want to delete the node 7DD5F70C-976F-4635-89F8-582986851E98 which was last connected 11 minutes ago? You will need to remove its hostname from an external load balancer! 
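The confirmation dialog identifies nodes only by NodeId. Before confirming, you can map an ID back to its hostname with a small helper. This is a sketch under assumptions: the function name `node_host_for_id` is illustrative, and it assumes the node listing arrives one field per line (NodeHost: and NodeId: on their own lines, as the real console output is; the listings in this post were flattened for the web).

```shell
#!/bin/sh
# Sketch: given a NodeId, print the NodeHost it belongs to.
# Assumes a listing where "NodeHost: <fqdn>" appears before the matching
# "NodeId: <id>", each on its own line. Function name is illustrative.
node_host_for_id() {
    # $1 = NodeId to look up; node listing arrives on stdin
    awk -v id="$1" '
        /NodeHost:/ { host = $2 }                  # remember the most recent host
        $0 ~ ("NodeId: " id) { print host; exit }  # print it when the id matches
    '
}
```

Usage (against a saved listing): `node_host_for_id 7DD5F70C-976F-4635-89F8-582986851E98 < nodes.txt` would print the owning hostname.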
This ID: 7DD5F70C-976F-4635-89F8-582986851E98 belongs to siaastwo.cap.org , see the output below Node: NodeHost: siaastwo.cap.org NodeId: 7DD5F70C-976F-4635-89F8-582986851E98 NodeType: IAAS Components: Component: Type: Website Version: 7.6.0.16195 State: Started Component: Type: ModelManagerWeb Version: 7.6.0.16195 State: Started Component: Type: ManagerService Version: 7.6.0.16195 State: Active Component: Type: ManagementAgent Version: 7.6.0.17541 State: Started Component: Type: DemWorker Version: 7.6.0.16195 State: Started Component: Type: DemOrchestrator Version: 7.6.0.16195 State: Started Component: Type: WAPI Version: 7.6.0.16195 State: Started Component: Type: vSphereAgent Version: 7.6.0.16195 State: Started Now confirm deletion The node is now successfully removed To monitor one can take a look at /var/log/messages 2022-07-05T23:12:47.929846+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Logging event node-removed 2022-07-05T23:12:47.929877+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync 2022-07-05T23:12:47.930565+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9692]: info Resolved vCAC host: svraone.cap.org 2022-07-05T23:12:48.005902+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org svrathree.cap.org svratwo.cap.org' 2022-07-05T23:12:48.039511+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 05-db-sync is 2022-07-05T23:12:48.039556+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq 2022-07-05T23:12:48.125189+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'siaastwo.cap.org', hostname: 'svraone.cap.org' 2022-07-05T23:12:48.130809+00:00 svraone vami 
/opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 10-rabbitmq is 2022-07-05T23:12:48.130832+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy 2022-07-05T23:12:48.233369+00:00 svraone node-removed: Removing 'siaastwo.cap.org' from haproxy config 2022-07-05T23:12:48.265237+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Jul 05, 2022 11:12:48 PM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection 2022-07-05T23:12:48.265459+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896 2022-07-05T23:12:48.308782+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg Reload service haproxy ..done 2022-07-05T23:12:48.308807+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/25-db 2022-07-05T23:12:48.353287+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info [2022-07-05 23:12:48] [root] [INFO] Current node in cluster mode 2022-07-05T23:12:48.353314+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info command exit code: 1 2022-07-05T23:12:48.353322+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info cluster-mode-check [2022-07-05 23:12:48] [root] [INFO] Current node in cluster mode 2022-07-05T23:12:48.354087+00:00 svraone vami 
/opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Executing shell command... 2022-07-05T23:12:48.458204+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'siaastwo.cap.org', hostname: 'svraone.cap.org' 2022-07-05T23:12:48.461776+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 25-db is 2022-07-05T23:12:48.461800+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db 2022-07-05T23:12:48.827039+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org' 2022-07-05T23:12:48.847537+00:00 svraone node-removed: Removing 'siaastwo' from horizon database tables 2022-07-05T23:12:48.852777+00:00 svraone su: (to postgres) root on none 2022-07-05T23:12:50.279007+00:00 svraone su: last message repeated 3 times 2022-07-05T23:12:50.278863+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 30-vidm-db is DELETE 0 DELETE 0 Last login: Tue Jul 5 23:12:48 UTC 2022 DELETE 0 Last login: Tue Jul 5 23:12:49 UTC 2022 DELETE 0 Last login: Tue Jul 5 23:12:49 UTC 2022 2022-07-05T23:12:50.278889+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master 2022-07-05T23:12:50.370747+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org' 2022-07-05T23:12:50.383950+00:00 svraone node-removed: Removing 'rabbit@siaastwo' from rabbitmq cluster 2022-07-05T23:12:50.424780+00:00 svraone su: (to rabbitmq) root on none 2022-07-05T23:12:50.476667+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9692]: info Event request for siaastwo.cap.org timed out 2022-07-05T23:12:51.335973+00:00 svraone vami 
/opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Executing shell command... 2022-07-05T23:12:52.455441+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 40-rabbitmq-master is Removing node rabbit@siaastwo from cluster 2022-07-05T23:12:52.455467+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch 2022-07-05T23:12:52.550899+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True' 2022-07-05T23:12:52.564092+00:00 svraone node-removed: Restarting elasticsearch service 2022-07-05T23:12:52.733672+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 50-elasticsearch is Stopping elasticsearch: process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done. Starting elasticsearch: 2048 2022-07-05T23:12:52.733690+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health 2022-07-05T23:12:52.883943+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org' Executing list nodes command you can now see there are no references to siaastwo.cap.org Node: NodeHost: svraone.cap.org NodeId: cafe.node.631087009.16410 NodeType: VA Components: Component: Type: vRA Version: 7.6.0.317 Primary: True Component: Type: vRO Version: 7.6.0.12923317 Node: NodeHost: siaasone.cap.org NodeId: B030EDF7-DB2C-4830-942A-F40D9464AAD9 NodeType: IAAS Components: Component: Type: Database Version: 7.6.0.16195 State: Available Component: Type: Website Version: 7.6.0.16195 State: Started Component: Type: ModelManagerData Version: 7.6.0.16195 State: Available Component: Type: ModelManagerWeb Version: 7.6.0.16195 State: Started Component: Type: ManagerService Version: 7.6.0.16195 State: Active Component: 
Type: ManagementAgent Version: 7.6.0.17541 State: Started Component: Type: DemOrchestrator Version: 7.6.0.16195 State: Started Component: Type: DemWorker Version: 7.6.0.16195 State: Started Component: Type: WAPI Version: 7.6.0.16195 State: Started Component: Type: vSphereAgent Version: 7.6.0.16195 State: Started Node: NodeHost: svrathree.cap.org NodeId: cafe.node.384204123.10666 NodeType: VA Components: Component: Type: vRA Version: 7.6.0.317 Primary: False Component: Type: vRO Version: 7.6.0.12923317 Node: NodeHost: svratwo.cap.org NodeId: cafe.node.776067309.27389 NodeType: VA Components: Component: Type: vRA Version: 7.6.0.317 Primary: False Component: Type: vRO Version: 7.6.0.12923317

Now, let's move on to removing the second and third appliances from the cluster. Before removing the nodes, I'll remove the connectors coming from those nodes. With the connectors removed, we can proceed with removing the vRA appliances from the cluster. Take one more round of snapshots. Once the snapshot tasks are complete, proceed with the appliance removal. Remember: you cannot and should not remove the master from the cluster. 
Ensure database is in Asynchronous mode Click on delete next to svrathree.cap.org to delete the node or remove it from the cluster While you remove node from cluster , you can check /var/log/messages or /var/log/vmware/vcac/vcac-config.log for more information 2022-07-06T00:11:53.985239+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Processing request PUT /vacluster/event/node-removed 2022-07-06T00:11:54.123523+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39336]: info Resolved vCAC host: svraone.cap.org 2022-07-06T00:11:54.255543+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Logging event node-removed 2022-07-06T00:11:54.255982+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync 2022-07-06T00:11:54.331159+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org svratwo.cap.org' 2022-07-06T00:11:54.339000+00:00 svraone node-removed: Setting database to ASYNC 2022-07-06T00:11:54.898295+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Jul 06, 2022 12:11:54 AM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection 2022-07-06T00:11:54.898572+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896 2022-07-06T00:11:54.961623+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info [2022-07-06 00:11:54] [root] [INFO] Current node in cluster mode 2022-07-06T00:11:54.961643+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info command exit code: 1 2022-07-06T00:11:54.961651+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info 
cluster-mode-check [2022-07-06 00:11:54] [root] [INFO] Current node in cluster mode 2022-07-06T00:11:54.961660+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Executing shell command... 2022-07-06T00:11:56.423082+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 05-db-sync is 2022-07-06T00:11:56.423104+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq 2022-07-06T00:11:56.514913+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'svrathree.cap.org', hostname: 'svraone.cap.org' 2022-07-06T00:11:56.518400+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 10-rabbitmq is 2022-07-06T00:11:56.518424+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy 2022-07-06T00:11:56.619648+00:00 svraone node-removed: Removing 'svrathree.cap.org' from haproxy config 2022-07-06T00:11:56.749906+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg Reload service haproxy ..done 2022-07-06T00:11:56.749930+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/25-db 2022-07-06T00:11:56.894988+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'svrathree.cap.org', hostname: 'svraone.cap.org' 2022-07-06T00:11:56.898755+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info 
Result from 25-db is 2022-07-06T00:11:56.898777+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db 2022-07-06T00:11:56.982014+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Executing shell command... 2022-07-06T00:11:56.986794+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'svrathree.cap.org' 2022-07-06T00:11:56.995099+00:00 svraone node-removed: Removing 'svrathree' from horizon database tables 2022-07-06T00:11:57.000134+00:00 svraone su: (to postgres) root on none 2022-07-06T00:11:57.519218+00:00 svraone su: last message repeated 3 times 2022-07-06T00:11:57.519100+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 30-vidm-db is DELETE 1 DELETE 1 Last login: Wed Jul 6 00:11:56 UTC 2022 DELETE 0 Last login: Wed Jul 6 00:11:57 UTC 2022 DELETE 1 Last login: Wed Jul 6 00:11:57 UTC 2022 2022-07-06T00:11:57.519126+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master 2022-07-06T00:11:57.602251+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'svrathree.cap.org' 2022-07-06T00:11:57.611328+00:00 svraone node-removed: Removing 'rabbit@svrathree' from rabbitmq cluster 2022-07-06T00:11:57.649937+00:00 svraone su: (to rabbitmq) root on none 2022-07-06T00:11:59.704582+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 40-rabbitmq-master is Removing node rabbit@svrathree from cluster 2022-07-06T00:11:59.704608+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch 2022-07-06T00:11:59.785378+00:00 svraone node-removed: 
/etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True' 2022-07-06T00:11:59.790096+00:00 svraone node-removed: Restarting elasticsearch service 2022-07-06T00:11:59.964638+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Executing shell command... 2022-07-06T00:11:59.987551+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 50-elasticsearch is Stopping elasticsearch: process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done. Starting elasticsearch: 2048 2022-07-06T00:11:59.987575+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health 2022-07-06T00:12:00.110181+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'svrathree.cap.org' Rabbitmq cluster status returns only 2 nodes now [master] svraone:~ # rabbitmqctl cluster_status Cluster status of node rabbit@svraone [{nodes,[{disc,[rabbit@svraone,rabbit@svratwo]}]}, {running_nodes,[rabbit@svratwo,rabbit@svraone]}, {cluster_name,<<"rabbit@svraone.cap.org">>}, {partitions,[]}, {alarms,[{rabbit@svratwo,[]},{rabbit@svraone,[]}]}] Do the same with the second node that's svratwo.cap.org 2022-07-06T00:28:23.511761+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Processing request PUT /vacluster/event/node-removed 2022-07-06T00:28:23.584626+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6987]: info Resolved vCAC host: svraone.cap.org 2022-07-06T00:28:23.787863+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Logging event node-removed 2022-07-06T00:28:23.787888+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync 2022-07-06T00:28:23.861100+00:00 svraone node-removed: 
/etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org' 2022-07-06T00:28:23.869175+00:00 svraone node-removed: Setting database to ASYNC 2022-07-06T00:28:23.875065+00:00 svraone vami /opt/vmware/share/htdocs/service/cafe/config.py[7226]: info Processing request PUT /config/nodes/B030EDF7-DB2C-4830-942A-F40D9464AAD9/ping, referer: None 2022-07-06T00:28:23.955013+00:00 svraone vami /opt/vmware/share/htdocs/service/cafe/config.py[7226]: info Legacy authentication token received from ::ffff:AA.BBB.CC.DDD 2022-07-06T00:28:24.365470+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info Jul 06, 2022 12:28:24 AM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection 2022-07-06T00:28:24.365492+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896 2022-07-06T00:28:24.432795+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info [2022-07-06 00:28:24] [root] [INFO] Current node not in cluster mode 2022-07-06T00:28:24.432956+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info command exit code: 0 2022-07-06T00:28:24.433160+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info cluster-mode-check Current node not in cluster mode 2022-07-06T00:28:24.433520+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info Executing shell command... 
2022-07-06T00:28:25.847315+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 05-db-sync is 2022-07-06T00:28:25.847337+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq 2022-07-06T00:28:25.925379+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'svratwo.cap.org', hostname: 'svraone.cap.org' 2022-07-06T00:28:25.939461+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 10-rabbitmq is 2022-07-06T00:28:25.939482+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy 2022-07-06T00:28:26.015719+00:00 svraone node-removed: Removing 'svratwo.cap.org' from haproxy config 2022-07-06T00:28:26.116374+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg Reload service haproxy ..done 2022-07-06T00:28:26.116394+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/25-db 2022-07-06T00:28:26.229282+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'svratwo.cap.org', hostname: 'svraone.cap.org' 2022-07-06T00:28:26.232359+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 25-db is 2022-07-06T00:28:26.232378+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db 2022-07-06T00:28:26.303362+00:00 svraone node-removed: 
/etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'svratwo.cap.org' 2022-07-06T00:28:26.313158+00:00 svraone node-removed: Removing 'svratwo' from horizon database tables 2022-07-06T00:28:26.318183+00:00 svraone su: (to postgres) root on none 2022-07-06T00:28:26.809478+00:00 svraone su: last message repeated 3 times 2022-07-06T00:28:26.809378+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 30-vidm-db is DELETE 1 Last login: Wed Jul 6 00:11:57 UTC 2022 DELETE 1 Last login: Wed Jul 6 00:28:26 UTC 2022 DELETE 0 Last login: Wed Jul 6 00:28:26 UTC 2022 DELETE 1 Last login: Wed Jul 6 00:28:26 UTC 2022 2022-07-06T00:28:26.809402+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master 2022-07-06T00:28:26.904576+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'svratwo.cap.org' 2022-07-06T00:28:26.917409+00:00 svraone node-removed: Removing 'rabbit@svratwo' from rabbitmq cluster 2022-07-06T00:28:26.971570+00:00 svraone su: (to rabbitmq) root on none 2022-07-06T00:28:28.544064+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 40-rabbitmq-master is Removing node rabbit@svratwo from cluster 2022-07-06T00:28:28.544087+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch 2022-07-06T00:28:28.578987+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info Executing shell command... 
2022-07-06T00:28:28.620636+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True' 2022-07-06T00:28:28.625562+00:00 svraone node-removed: Restarting elasticsearch service 2022-07-06T00:28:28.811215+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 50-elasticsearch is Stopping elasticsearch: process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done. Starting elasticsearch: 2048 2022-07-06T00:28:28.811240+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health 2022-07-06T00:28:29.042125+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'svratwo.cap.org'

Here's the final status. Even though svratwo was out of the cluster, it was still showing up in the RabbitMQ cluster status. Perform a "Reset Rabbitmq" to get rid of this stale node. Once that completes, all should be good:

[master] svraone:~ # rabbitmqctl cluster_status Cluster status of node rabbit@svraone [{nodes,[{disc,[rabbit@svraone]}]}, {running_nodes,[rabbit@svraone]}, {cluster_name,<<"rabbit@svraone.cap.org">>}, {partitions,[]}, {alarms,[{rabbit@svraone,[]}]}]

Log in to the vRA portal and verify that everything is functional. Perform a health check on the IaaS nodes and endpoints as well as a generic health check, and confirm that deployments still work if you're still using this 7.x version. This concludes the blog. Before making any changes, take snapshots. Once all verifications are complete, power off the nodes we removed and delete them after a week or so, depending on your environment.
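Checking for stale RabbitMQ entries after a node removal can be scripted. The sketch below (the function name is illustrative, not a RabbitMQ command) simply greps a saved `rabbitmqctl cluster_status` dump for a node you have already removed from the cluster:

```shell
#!/bin/sh
# Sketch: succeed (exit 0) if a removed node still appears in a
# cluster_status dump, meaning it is stale and a "Reset Rabbitmq" is due.
# Capture the dump first with:  rabbitmqctl cluster_status > status.txt
rabbitmq_node_is_stale() {
    # $1 = short hostname (e.g. svratwo); status dump arrives on stdin
    grep -q "rabbit@$1"
}
```

Example: `rabbitmq_node_is_stale svratwo < status.txt && echo "svratwo is stale, reset RabbitMQ"`.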

  • Multi Node SaltStack Installation ( flow chart )

There are two methods to install SaltStack Config:

1. Standard method
2. vRealize Suite Lifecycle Manager method

The second one is pretty straightforward. If we want a multi-node SaltStack deployment, we need to follow the standard method and install the Salt components on multiple nodes. The flow chart above explains the detailed procedure for installing a multi-node SaltStack Config. Note: there are certain design decisions you have to make according to your requirements, but the overall procedure remains the same. If you're unable to view the image properly, download the high-resolution JPG file by downloading and extracting this zip file.

  • Validation of SaltStack endpoint fails when Running Environment "embedded-ABX-onprem" is chosen

Problem Statement

On a greenfield installation of vRA 8.7 integrated with SaltStack Config, we have the option to choose a "Running Environment". The screenshot below shows how a default SaltStack Config deployed through vRSLCM is presented under the Infrastructure tab. When you enter the password for root without selecting any option under "Running Environment", it validates successfully. But the moment you select a "Running Environment", the validation fails with an exception. Before the exception appears, an ABX integration run occurs, which gives more details:

Running in polyglot! [2022-04-06 13:56:22,183] [INFO] - [saltstack-integration] Validating Salt Stack Config Server credentials... [2022-04-06 13:56:22,183] [INFO] - [saltstack-integration] Authenticating to a Salt Stack Config Server with url [https://ss.cap.org//account/login]... [2022-04-06 13:56:22,184] [INFO] - [saltstack-integration] Retrieving credentials from auth credentials link at [/core/auth/credentials/f0c26468-4c1b-4a62-b33a-b04d7c03390e]... [2022-04-06 13:56:22,304] [INFO] - [saltstack-integration] Successfully retrieved credentials from auth credentials link [2022-04-06 13:56:22,304] [INFO] - [saltstack-integration] Retrieving Salt Stack Config Server XSRF token from url [https://ss.cap.org//account/login]... /run/abx-polyglot/function/urllib3/connectionpool.py:1050: InsecureRequestWarning: Unverified HTTPS request is being made to host 'ss.cap.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings InsecureRequestWarning, [2022-04-06 13:56:22,327] [ERROR] - [saltstack-integration] Failed to validate Salt Stack Config Server credentials: Failed to authenticate to a Salt Stack Config Server: Failed to retrieve Salt Stack Config Server XSRF token: 403 Client Error: Forbidden for url: https://ss.cap.org//account/login Finished running action code. 
Exiting python process. Python process exited. Max Memory Used: 22 MB

The reason for the exception:

Failed to validate Salt Stack Config Server credentials: Failed to authenticate to a Salt Stack Config Server: Failed to retrieve Salt Stack Config Server XSRF token: 403 Client Error: Forbidden for url: https://ss.cap.org//account/login

The corresponding exception appears in provisioning-service-app.log:

2022-04-06T16:37:34.118Z WARN provisioning [host='provisioning-service-app-6885766867-kgk4l' thread='reactor-http-epoll-10' user='provisioning-RVgAJFw9LrOYkeUr(arun)' org='c2eae67a-ff6d-4dae-9fd3-6594352a1f8a' trace='dc45aa9b-4b4e-47d3-8176-8321b1a2336a' parent='4a10178e-bf5f-48d0-8928-ae7a84e3aff4' span='d1977a94-1448-45a5-b93a-e449c8a76b60'] c.v.xenon.common.ServiceErrorResponse.create:85 - message: Failed to authenticate, please check your credentials or if the host is reachable, statusCode: 400, serverErrorId: 9c245260-075a-4dc0-bbe2-fb13b0e5d0bd:
Caused by java.lang.RuntimeException: Failed to authenticate, please check your credentials or if the host is reachable
    at com.vmware.xenon.common.SpringHostUtils.responseEntityToOperation(SpringHostUtils.java:952)
    at com.vmware.xenon.common.SpringHostUtils.lambda$sendRequest$4(SpringHostUtils.java:289)
    at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
    at reactor.core.publisher.MonoToCompletableFuture.onNext(MonoToCompletableFuture.java:64)
    at reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber.onNext(FluxOnAssembly.java:539)
    at io.opentracing.contrib.reactor.TracedSubscriber.lambda$onNext$2(TracedSubscriber.java:69)
    at io.opentracing.contrib.reactor.TracedSubscriber.withActiveSpan(TracedSubscriber.java:95)
    at io.opentracing.contrib.reactor.TracedSubscriber.onNext(TracedSubscriber.java:69)
    at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)
    at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
    at io.opentracing.contrib.reactor.TracedSubscriber.lambda$onNext$2(TracedSubscriber.java:69)
    at io.opentracing.contrib.reactor.TracedSubscriber.withActiveSpan(TracedSubscriber.java:95)
    at io.opentracing.contrib.reactor.TracedSubscriber.onNext(TracedSubscriber.java:69)
    at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)
    ...
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)

Remediation

Method 1 (Greenfield scenario)

If the SaltStack Config integration was recently deployed and doesn't have any resources mapped to it, simply delete the integration and recreate it.

Before / After: notice the difference in the hostname when you add the integration back. When vRSLCM adds the SaltStack integration, it uses a URL (https://<>/), which is when you see the problem. Once you remove the integration and add it back with just the FQDN of the SaltStack server rather than the URL, selecting a Running Environment validates fine.
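Why does a URL-style hostname break the login call? Judging from the logs, the integration appends /account/login to whatever is stored as the hostname; when that value is already a full URL with a trailing slash, the result contains a double slash (https://ss.cap.org//account/login), which the server rejects with a 403. A minimal Python sketch of this failure mode (build_login_url is a hypothetical helper for illustration, not the actual vRA code):

```python
def build_login_url(host_name: str) -> str:
    """Join the stored hostname with the SaltStack Config login path.

    Hypothetical approximation of the integration's behavior: if the
    stored value is already a URL it is used as-is, otherwise a scheme
    is prepended.
    """
    if "://" in host_name:
        return host_name + "/account/login"
    return "https://" + host_name + "/account/login"


# Hostname stored as a URL (the broken state) -> double slash, 403:
print(build_login_url("https://ss.cap.org/"))  # https://ss.cap.org//account/login
# Hostname stored as a bare FQDN (the fixed state):
print(build_login_url("ss.cap.org"))           # https://ss.cap.org/account/login
```

This is why both remediation methods boil down to the same thing: make sure the stored hostname is a bare FQDN, not a URL.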
Method 2 (Brownfield scenario)

When resources are already being managed by the SaltStack integration, the integration information is stored inside provisioning-db of the vRealize Automation environment.

To log into the database, use vracli dev psql. Accept the warning that it's a developer command and make sure you know what you are changing. The integration information is stored in the endpoint_state table, which lives inside provisioning-db; connect to it with \c provisioning-db. Below is the output of the table where the integration information is stored:

root@vra [ ~ ]# vracli dev psql
This execution will be recorded!
'psql' is a developer command. Type 'yes' if you want to continue, or 'no' to stop: yes
2022-04-06 14:14:43,439 [INFO] Logging into database template1
psql (10.18)
Type "help" for help.
template1=# \c provisioning-db
You are now connected to database "provisioning-db" as user "postgres".
provisioning-db=# \x
Expanded display is on.
provisioning-db=# select * from endpoint_state where name = 'vssc_idm';
-[ RECORD 1 ]-------------------+---------------------------------------------------------------------------------------------------------------------
document_self_link              | /resources/endpoints/b2b02510-b0d5-46cf-9248-570b3d1bd58d
document_auth_principal_link    | /provisioning/auth/csp/users/cgs-lecvl28lpzqwhozt@provisioning-client.local
document_expiration_time_micros | 0
document_owner                  |
document_update_action          | PATCH
document_update_time_micros     | 1646141393852000
document_version                | 1
id                              | 6c2679af-a23c-4c88-8af0-3380305e3cde
name                            | vssc_idm
c_desc                          |
custom_properties               | {"hostName": "https://ss.cap.org/", "isExternal": "true", "privateKeyId": "root"}
tenant_links                    | ["/tenants/organization/c2eae67a-ff6d-4dae-9fd3-6594352a1f8a", "/tenants/project/1f32c781c7bac475-7f703c5265a63d87"]
group_links                     |
tag_links                       |
org_auth_link                   | /tenants/organization/c2eae67a-ff6d-4dae-9fd3-6594352a1f8a
project_auth_link               |
owner_auth_link                 |
msp_auth_link                   |
creation_time_micros            |
region_id                       |
endpoint_links                  |
compute_host_link               | /resources/compute/c6f3a8ac-c700-41b2-a91d-91a3fdd73765
expanded_tags                   |
document_creation_time_micros   | 1646141393802000
endpoint_type                   | saltstack
auth_credentials_link           | /core/auth/credentials/eeb389af-fcd6-4b06-a0e9-5d178f128eed
compute_link                    | /resources/compute/c6f3a8ac-c700-41b2-a91d-91a3fdd73765
compute_description_link        | /resources/compute-descriptions/8a689d42-24f7-4f6d-b362-e85b6dc6f423
resource_pool_link              | /resources/pools/1f32c781c7bac475-7f703c5265a63d87
parent_link                     |
associated_endpoint_links       |
endpoint_properties             | {"hostName": "https://ss.cap.org/", "privateKeyId": "root"}
maintenance_mode                |
mobility_endpoint_links         |
provisioning-db=#

Look at the custom_properties column; this is how it is out of the box:

custom_properties | {"hostName": "https://ss.cap.org/", "isExternal": "true", "privateKeyId": "root"}

We will add an additional property called dcId, change the hostname from a URL to an FQDN, and keep endpointId blank:

update endpoint_state set custom_properties = '{"dcId": "onprem", "hostName": "ss.cap.org", "endpointId": "", "isExternal": "true", "privateKeyId": "root"}' where name = 'vssc_idm';

Along with it, we have to change endpoint_properties too; it also has to reflect the FQDN rather than the whole URL:

endpoint_properties | {"hostName": "https://ss.cap.org/", "privateKeyId": "root"}

Note: before making changes, take a snapshot of the vRA appliance.

As we are already logged into the database, let's go ahead and make the change.
Execute the queries below and ensure they succeed:

update endpoint_state set custom_properties = '{"dcId": "onprem", "hostName": "ss.cap.org", "endpointId": "", "isExternal": "true", "privateKeyId": "root"}' where name = 'vssc_idm';

update endpoint_state set endpoint_properties = '{"hostName": "ss.cap.org", "privateKeyId": "root"}' where name = 'vssc_idm';

provisioning-db=# update endpoint_state set custom_properties = '{"dcId": "onprem", "hostName": "ss.cap.org", "endpointId": "", "isExternal": "true", "privateKeyId": "root"}' where name = 'vssc_idm';
UPDATE 1
provisioning-db=# update endpoint_state set endpoint_properties = '{"hostName": "ss.cap.org", "privateKeyId": "root"}' where name = 'vssc_idm';
UPDATE 1
provisioning-db=# select * from endpoint_state where name = 'vssc_idm';
-[ RECORD 1 ]-------------------+---------------------------------------------------------------------------------------------------------------------
document_self_link              | /resources/endpoints/b2b02510-b0d5-46cf-9248-570b3d1bd58d
document_auth_principal_link    | /provisioning/auth/csp/users/cgs-lecvl28lpzqwhozt@provisioning-client.local
document_expiration_time_micros | 0
document_owner                  |
document_update_action          | PATCH
document_update_time_micros     | 1646141393852000
document_version                | 1
id                              | 6c2679af-a23c-4c88-8af0-3380305e3cde
name                            | vssc_idm
c_desc                          |
custom_properties               | {"dcId": "onprem", "hostName": "ss.cap.org", "endpointId": "", "isExternal": "true", "privateKeyId": "root"}
tenant_links                    | ["/tenants/organization/c2eae67a-ff6d-4dae-9fd3-6594352a1f8a", "/tenants/project/1f32c781c7bac475-7f703c5265a63d87"]
group_links                     |
tag_links                       |
org_auth_link                   | /tenants/organization/c2eae67a-ff6d-4dae-9fd3-6594352a1f8a
project_auth_link               |
owner_auth_link                 |
msp_auth_link                   |
creation_time_micros            |
region_id                       |
endpoint_links                  |
compute_host_link               | /resources/compute/c6f3a8ac-c700-41b2-a91d-91a3fdd73765
expanded_tags                   |
document_creation_time_micros   | 1646141393802000
endpoint_type                   | saltstack
auth_credentials_link           | /core/auth/credentials/eeb389af-fcd6-4b06-a0e9-5d178f128eed
compute_link                    | /resources/compute/c6f3a8ac-c700-41b2-a91d-91a3fdd73765
compute_description_link        | /resources/compute-descriptions/8a689d42-24f7-4f6d-b362-e85b6dc6f423
resource_pool_link              | /resources/pools/1f32c781c7bac475-7f703c5265a63d87
parent_link                     |
associated_endpoint_links       |
endpoint_properties             | {"hostName": "ss.cap.org", "privateKeyId": "root"}
maintenance_mode                |
mobility_endpoint_links         |

As one can see from the output above, the custom_properties of the SSC integration in vRA changed. Exit the database by executing \q.

Now reboot SaltStack Config, log out of vRA, and log back in. Check that the hostname field now shows the FQDN rather than the URL. If that's the case, authentication succeeds with a "Running Environment" in place:

Running in polyglot!
[2022-04-06 17:13:36,475] [INFO] - [saltstack-integration] Validating Salt Stack Config Server credentials...
[2022-04-06 17:13:36,475] [INFO] - [saltstack-integration] Authenticating to a Salt Stack Config Server with url [https://ss.cap.org/account/login]...
[2022-04-06 17:13:36,475] [INFO] - [saltstack-integration] Retrieving credentials from auth credentials link at [/core/auth/credentials/d7ea970e-cdca-42bc-b53d-ddac713a8666]...
[2022-04-06 17:13:36,519] [INFO] - [saltstack-integration] Successfully retrieved credentials from auth credentials link
[2022-04-06 17:13:36,519] [INFO] - [saltstack-integration] Retrieving Salt Stack Config Server XSRF token from url [https://ss.cap.org/account/login]...
/run/abx-polyglot/function/urllib3/connectionpool.py:1050: InsecureRequestWarning: Unverified HTTPS request is being made to host 'ss.cap.org'. Adding certificate verification is strongly advised.
See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
[2022-04-06 17:13:36,544] [INFO] - [saltstack-integration] Successfully retrieved Salt Stack Config Server XSRF token
/run/abx-polyglot/function/urllib3/connectionpool.py:1050: InsecureRequestWarning: Unverified HTTPS request is being made to host 'ss.cap.org'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
[2022-04-06 17:13:36,633] [INFO] - [saltstack-integration] Successfully authenticated to a Salt Stack Config Server
[2022-04-06 17:13:36,634] [INFO] - [saltstack-integration] Successfully validated Salt Stack Config Server credentials
Finished running action code.
Exiting python process. Python process exited. Max Memory Used: 21 MB

In Short

The issue occurs because the integration stores a URL rather than an FQDN, so the API call that authenticates gets a 403 error. Unless you fix this, you will not be able to successfully validate a Running Environment.

If it's a new environment with no SaltStack resources, delete the integration and re-create it. If it's an existing integration with resources in place, modify the database as shown above:

1. Connect to the postgres database: vracli dev psql
2. Connect to provisioning-db: \c provisioning-db
3. Enable expanded display: \x
4. Update the custom_properties value where the hostname is set to the URL of the SaltStack node rather than an FQDN. Remember to make the change in the endpoint_state table as shown below. The name of the integration might differ if it was changed from the UI, so adjust accordingly.

update endpoint_state set custom_properties = '{"dcId": "onprem", "hostName": "FQDN-SALTSTACKNODE", "endpointId": "", "isExternal": "true", "privateKeyId": "root"}' where name = 'vssc_idm';

5. Update the endpoint_properties column value where the hostname is set to a URL rather than an FQDN.
Almost the same as above:

provisioning-db=# update endpoint_state set endpoint_properties = '{"hostName": "FQDN-SALTSTACKNODE", "privateKeyId": "root"}' where name = 'vssc_idm';

Now add the "Running Environment" and validate again. You should see a successful validation.
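Both UPDATE statements apply the same JSON surgery: strip the scheme and trailing slash from the hostname, and (for custom_properties) add the dcId and endpointId keys. If you want to prepare the new JSON values before pasting them into psql, a small Python sketch of that transformation (fix_custom_properties is a hypothetical helper, not part of vRA):

```python
import json
from urllib.parse import urlparse


def fix_custom_properties(raw: str, dc_id: str = "onprem") -> str:
    """Rewrite the endpoint's custom_properties JSON so hostName is a
    bare FQDN and the dcId/endpointId keys are present."""
    props = json.loads(raw)
    host = props.get("hostName", "")
    if "://" in host:
        host = urlparse(host).netloc  # keep only the FQDN, drop scheme/path
    props["hostName"] = host.rstrip("/")
    props.setdefault("dcId", dc_id)
    props.setdefault("endpointId", "")
    return json.dumps(props)


broken = '{"hostName": "https://ss.cap.org/", "isExternal": "true", "privateKeyId": "root"}'
print(fix_custom_properties(broken))
```

The resulting string can then be dropped into the SET custom_properties = '...' clause of the UPDATE statement.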