


Scaling Down vRealize Automation 7.x


 

Use Case


With vRealize Automation 7.x reaching its end of life in September 2022, most customers have either adopted version 8.x or are in transition and will get there eventually.


It isn't easy to stop an enterprise application and decommission it overnight, but it is possible to scale it down rather than keep it distributed and highly available.


With that in mind, I thought I'd pen down a few steps on how to scale down vRA 7.x.



 

Environment


I built a three-node vRA appliance cluster and two IaaS servers, named as below:



Server      Role
svraone     primary VA
svratwo     secondary VA
svrathree   tertiary VA
siaasone    primary Web, Manager Service, Model Manager Data, proxy agent, DEM Worker and DEM Orchestrator
siaastwo    secondary Web, secondary Manager Service, proxy agent, DEM Worker and DEM Orchestrator



 


Procedure


  • Take snapshots of all nodes before performing any of the steps below, and back up the databases too

  • The output of listing all nodes in the cluster looks like this in my lab
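In my lab I capture that inventory on the primary appliance and summarize it with a small awk filter. A minimal sketch, assuming `vra-command` is on the PATH as on a standard 7.x appliance; the capture file name is my own choice:

```shell
# Dump the full cluster inventory to a file for later comparison.
vra-command list-nodes > /tmp/nodes-before.txt

# One-line-per-node summary: NodeHost followed by NodeType.
awk '/NodeHost:/ {h = $2} /NodeType:/ {print h, $2}' /tmp/nodes-before.txt
```

Keeping the "before" capture around makes it easy to diff the inventory once nodes start coming out of the cluster.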



Node:
  NodeHost: svraone.cap.org
  NodeId: cafe.node.631087009.16410
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: True
    Component:
        Type: vRO
        Version: 7.6.0.12923317
        
        
Node:
  NodeHost: siaastwo.cap.org
  NodeId: 7DD5F70C-976F-4635-89F8-582986851E98
  NodeType: IAAS
  Components:
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Active
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        
        
        
Node:
  NodeHost: siaasone.cap.org
  NodeId: B030EDF7-DB2C-4830-942A-F40D9464AAD9
  NodeType: IAAS
  Components:
    Component:
        Type: Database
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerData
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Passive
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        
        
Node:
  NodeHost: svrathree.cap.org
  NodeId: cafe.node.384204123.10666
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317
        
        
        
Node:
  NodeHost: svratwo.cap.org
  NodeId: cafe.node.776067309.27389
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317


  • To scale down, I'd like to remove the secondary nodes and leave just the primary nodes in the cluster

  • I'll begin the scale-down with the IaaS nodes, starting with siaastwo.cap.org


  • I'll open the VAMI of the master node and then click on the Cluster tab

  • Because the second IaaS node was powered off beforehand, it won't show in a connected state



  • The moment I click "Delete" next to the secondary IaaS node, I get the warning shown below


svraone:5480 says

Do you really want to delete the node 7DD5F70C-976F-4635-89F8-582986851E98 which was last connected 11 minutes ago? You will need to remove its hostname from an external load balancer!


  • This ID, 7DD5F70C-976F-4635-89F8-582986851E98, belongs to siaastwo.cap.org; see the output below

        
Node:
  NodeHost: siaastwo.cap.org
  NodeId: 7DD5F70C-976F-4635-89F8-582986851E98
  NodeType: IAAS
  Components:
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Active
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
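Rather than scanning the full listing by eye, the NodeId-to-host mapping can be confirmed with a short grep. A sketch over an inline sample; the saved-file approach and its name are assumptions, not part of the product:

```shell
# Hypothetical slice of a saved list-nodes dump (two nodes shown).
cat > /tmp/nodes-before.txt <<'EOF'
  NodeHost: siaastwo.cap.org
  NodeId: 7DD5F70C-976F-4635-89F8-582986851E98
  NodeHost: svraone.cap.org
  NodeId: cafe.node.631087009.16410
EOF

# Print the NodeHost line directly above the matching NodeId.
grep -B1 'NodeId: 7DD5F70C-976F-4635-89F8-582986851E98' /tmp/nodes-before.txt
```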
        

  • Now confirm deletion




  • The node is now successfully removed

  • To monitor progress, take a look at /var/log/messages
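A filtered view keeps the noise down. A sketch, guarded so it is a no-op on machines without that log file:

```shell
# Show the most recent node-removed handler lines from the system log.
log=/var/log/messages
if [ -r "$log" ]; then
  grep 'node-removed' "$log" | tail -n 20
fi
```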




2022-07-05T23:12:47.929846+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Logging event node-removed

2022-07-05T23:12:47.929877+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync


2022-07-05T23:12:47.930565+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9692]: info Resolved vCAC host: svraone.cap.org


2022-07-05T23:12:48.005902+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org svrathree.cap.org svratwo.cap.org'


2022-07-05T23:12:48.039511+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 05-db-sync is

2022-07-05T23:12:48.039556+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq

2022-07-05T23:12:48.125189+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'siaastwo.cap.org', hostname: 'svraone.cap.org'

2022-07-05T23:12:48.130809+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 10-rabbitmq is

2022-07-05T23:12:48.130832+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy

2022-07-05T23:12:48.233369+00:00 svraone node-removed: Removing 'siaastwo.cap.org' from haproxy config

2022-07-05T23:12:48.265237+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Jul 05, 2022 11:12:48 PM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection


2022-07-05T23:12:48.265459+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896


2022-07-05T23:12:48.308782+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg
Reload service haproxy ..done


2022-07-05T23:12:48.308807+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/25-db


2022-07-05T23:12:48.353287+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info [2022-07-05 23:12:48] [root] [INFO] Current node in cluster mode

2022-07-05T23:12:48.353314+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info command exit code: 1


2022-07-05T23:12:48.353322+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info cluster-mode-check [2022-07-05 23:12:48] [root] [INFO] Current node in cluster mode

2022-07-05T23:12:48.354087+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Executing shell command...

2022-07-05T23:12:48.458204+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'siaastwo.cap.org', hostname: 'svraone.cap.org'

2022-07-05T23:12:48.461776+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 25-db is

2022-07-05T23:12:48.461800+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db

2022-07-05T23:12:48.827039+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org'

2022-07-05T23:12:48.847537+00:00 svraone node-removed: Removing 'siaastwo' from horizon database tables

2022-07-05T23:12:48.852777+00:00 svraone su: (to postgres) root on none

2022-07-05T23:12:50.279007+00:00 svraone su: last message repeated 3 times

2022-07-05T23:12:50.278863+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 30-vidm-db is DELETE 0
DELETE 0
Last login: Tue Jul  5 23:12:48 UTC 2022
DELETE 0
Last login: Tue Jul  5 23:12:49 UTC 2022
DELETE 0
Last login: Tue Jul  5 23:12:49 UTC 2022

2022-07-05T23:12:50.278889+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master

2022-07-05T23:12:50.370747+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org'

2022-07-05T23:12:50.383950+00:00 svraone node-removed: Removing 'rabbit@siaastwo' from rabbitmq cluster

2022-07-05T23:12:50.424780+00:00 svraone su: (to rabbitmq) root on none

2022-07-05T23:12:50.476667+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9692]: info Event request for siaastwo.cap.org timed out

2022-07-05T23:12:51.335973+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[9693]: info Executing shell command...

2022-07-05T23:12:52.455441+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 40-rabbitmq-master is Removing node rabbit@siaastwo from cluster

2022-07-05T23:12:52.455467+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch

2022-07-05T23:12:52.550899+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True'

2022-07-05T23:12:52.564092+00:00 svraone node-removed: Restarting elasticsearch service

2022-07-05T23:12:52.733672+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Result from 50-elasticsearch is Stopping elasticsearch:  process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done.
Starting elasticsearch: 2048

2022-07-05T23:12:52.733690+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[9919]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health

2022-07-05T23:12:52.883943+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'siaastwo.cap.org'

  • Running the list-nodes command again, you can see there are no longer any references to siaastwo.cap.org
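The absence can also be checked mechanically against a fresh capture. A sketch; the file name is an assumption carried over from my earlier capture:

```shell
# Re-capture the inventory and confirm the removed host is gone.
vra-command list-nodes > /tmp/nodes-after.txt
if ! grep -q 'siaastwo.cap.org' /tmp/nodes-after.txt; then
  echo "no references to siaastwo.cap.org remain"
fi
```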


Node:
  NodeHost: svraone.cap.org
  NodeId: cafe.node.631087009.16410
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: True
    Component:
        Type: vRO
        Version: 7.6.0.12923317
        
        
        
Node:
  NodeHost: siaasone.cap.org
  NodeId: B030EDF7-DB2C-4830-942A-F40D9464AAD9
  NodeType: IAAS
  Components:
    Component:
        Type: Database
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: Website
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ModelManagerData
        Version: 7.6.0.16195
        State: Available
    Component:
        Type: ModelManagerWeb
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: ManagerService
        Version: 7.6.0.16195
        State: Active
    Component:
        Type: ManagementAgent
        Version: 7.6.0.17541
        State: Started
    Component:
        Type: DemOrchestrator
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: DemWorker
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: WAPI
        Version: 7.6.0.16195
        State: Started
    Component:
        Type: vSphereAgent
        Version: 7.6.0.16195
        State: Started
        
        
Node:
  NodeHost: svrathree.cap.org
  NodeId: cafe.node.384204123.10666
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317


Node:
  NodeHost: svratwo.cap.org
  NodeId: cafe.node.776067309.27389
  NodeType: VA
  Components:
    Component:
        Type: vRA
        Version: 7.6.0.317
        Primary: False
    Component:
        Type: vRO
        Version: 7.6.0.12923317

  • Now, let's move on to removing the second and third appliances from the cluster

  • Before I remove the nodes from the cluster, I'll remove the connectors coming from those nodes













  • Now that the connectors are removed, we can move on with removing the vRA appliances from the cluster

  • Take one more round of snapshots



  • Once the snapshot tasks are complete, we'll proceed with appliance removal

  • Remember: you cannot, and should not, remove the master from the cluster



  • Ensure the database is in asynchronous replication mode

  • Click "Delete" next to svrathree.cap.org to remove it from the cluster







  • While removing a node from the cluster, you can check /var/log/messages or /var/log/vmware/vcac/vcac-config.log for more information




2022-07-06T00:11:53.985239+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Processing request PUT /vacluster/event/node-removed

2022-07-06T00:11:54.123523+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39336]: info Resolved vCAC host: svraone.cap.org

2022-07-06T00:11:54.255543+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Logging event node-removed

2022-07-06T00:11:54.255982+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync

2022-07-06T00:11:54.331159+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org svratwo.cap.org'

2022-07-06T00:11:54.339000+00:00 svraone node-removed: Setting database to ASYNC
2022-07-06T00:11:54.898295+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Jul 06, 2022 12:11:54 AM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection

2022-07-06T00:11:54.898572+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896


2022-07-06T00:11:54.961623+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info [2022-07-06 00:11:54] [root] [INFO] Current node in cluster mode

2022-07-06T00:11:54.961643+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info command exit code: 1

2022-07-06T00:11:54.961651+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info cluster-mode-check [2022-07-06 00:11:54] [root] [INFO] Current node in cluster mode


2022-07-06T00:11:54.961660+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Executing shell command...


2022-07-06T00:11:56.423082+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 05-db-sync is

2022-07-06T00:11:56.423104+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq

2022-07-06T00:11:56.514913+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'svrathree.cap.org', hostname: 'svraone.cap.org'

2022-07-06T00:11:56.518400+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 10-rabbitmq is

2022-07-06T00:11:56.518424+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy

2022-07-06T00:11:56.619648+00:00 svraone node-removed: Removing 'svrathree.cap.org' from haproxy config

2022-07-06T00:11:56.749906+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg
Reload service haproxy ..done

2022-07-06T00:11:56.749930+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/25-db

2022-07-06T00:11:56.894988+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'svrathree.cap.org', hostname: 'svraone.cap.org'

2022-07-06T00:11:56.898755+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 25-db is

2022-07-06T00:11:56.898777+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db

2022-07-06T00:11:56.982014+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Executing shell command...
2022-07-06T00:11:56.986794+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'svrathree.cap.org'

2022-07-06T00:11:56.995099+00:00 svraone node-removed: Removing 'svrathree' from horizon database tables

2022-07-06T00:11:57.000134+00:00 svraone su: (to postgres) root on none

2022-07-06T00:11:57.519218+00:00 svraone su: last message repeated 3 times

2022-07-06T00:11:57.519100+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 30-vidm-db is DELETE 1
DELETE 1
Last login: Wed Jul  6 00:11:56 UTC 2022
DELETE 0
Last login: Wed Jul  6 00:11:57 UTC 2022
DELETE 1
Last login: Wed Jul  6 00:11:57 UTC 2022

2022-07-06T00:11:57.519126+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master
2022-07-06T00:11:57.602251+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'svrathree.cap.org'

2022-07-06T00:11:57.611328+00:00 svraone node-removed: Removing 'rabbit@svrathree' from rabbitmq cluster

2022-07-06T00:11:57.649937+00:00 svraone su: (to rabbitmq) root on none
2022-07-06T00:11:59.704582+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 40-rabbitmq-master is Removing node rabbit@svrathree from cluster

2022-07-06T00:11:59.704608+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch

2022-07-06T00:11:59.785378+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True'
2022-07-06T00:11:59.790096+00:00 svraone node-removed: Restarting elasticsearch service

2022-07-06T00:11:59.964638+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[39337]: info Executing shell command...

2022-07-06T00:11:59.987551+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Result from 50-elasticsearch is Stopping elasticsearch:  process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done.
Starting elasticsearch: 2048

2022-07-06T00:11:59.987575+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[39546]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health
2022-07-06T00:12:00.110181+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'svrathree.cap.org'

  • RabbitMQ cluster status now returns only two nodes

[master] svraone:~ # rabbitmqctl cluster_status
Cluster status of node rabbit@svraone
[{nodes,[{disc,[rabbit@svraone,rabbit@svratwo]}]},
 {running_nodes,[rabbit@svratwo,rabbit@svraone]},
 {cluster_name,<<"rabbit@svraone.cap.org">>},
 {partitions,[]},
 {alarms,[{rabbit@svratwo,[]},{rabbit@svraone,[]}]}]

  • Do the same with the second appliance, svratwo.cap.org








2022-07-06T00:28:23.511761+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Processing request PUT /vacluster/event/node-removed
2022-07-06T00:28:23.584626+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6987]: info Resolved vCAC host: svraone.cap.org
2022-07-06T00:28:23.787863+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Logging event node-removed
2022-07-06T00:28:23.787888+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/05-db-sync
2022-07-06T00:28:23.861100+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/05-db-sync: IS_MASTER: 'True', NODES: 'svraone.cap.org'
2022-07-06T00:28:23.869175+00:00 svraone node-removed: Setting database to ASYNC
2022-07-06T00:28:23.875065+00:00 svraone vami /opt/vmware/share/htdocs/service/cafe/config.py[7226]: info Processing request PUT /config/nodes/B030EDF7-DB2C-4830-942A-F40D9464AAD9/ping, referer: None
2022-07-06T00:28:23.955013+00:00 svraone vami /opt/vmware/share/htdocs/service/cafe/config.py[7226]: info Legacy authentication token received from ::ffff:AA.BBB.CC.DDD
2022-07-06T00:28:24.365470+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info Jul 06, 2022 12:28:24 AM org.springframework.jdbc.datasource.SingleConnectionDataSource initConnection
2022-07-06T00:28:24.365492+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info INFO: Established shared JDBC Connection: org.postgresql.jdbc.PgConnection@6ab7a896
2022-07-06T00:28:24.432795+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info [2022-07-06 00:28:24] [root] [INFO] Current node not in cluster mode
2022-07-06T00:28:24.432956+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info command exit code: 0
2022-07-06T00:28:24.433160+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info cluster-mode-check Current node not in cluster mode
2022-07-06T00:28:24.433520+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info Executing shell command...
2022-07-06T00:28:25.847315+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 05-db-sync is
2022-07-06T00:28:25.847337+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/10-rabbitmq
2022-07-06T00:28:25.925379+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/10-rabbitmq: REMOVED_NODE: 'svratwo.cap.org', hostname: 'svraone.cap.org'
2022-07-06T00:28:25.939461+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 10-rabbitmq is
2022-07-06T00:28:25.939482+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/20-haproxy
2022-07-06T00:28:26.015719+00:00 svraone node-removed: Removing 'svratwo.cap.org' from haproxy config
2022-07-06T00:28:26.116374+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 20-haproxy is Loaded HAProxy configuration file: /etc/haproxy/conf.d/30-vro-config.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/20-vcac.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/40-xenon.cfg
Loaded HAProxy configuration file: /etc/haproxy/conf.d/10-psql.cfg
Reload service haproxy ..done
2022-07-06T00:28:26.116394+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/25-db
2022-07-06T00:28:26.229282+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/25-db: REMOVED_NODE: 'svratwo.cap.org', hostname: 'svraone.cap.org'
2022-07-06T00:28:26.232359+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 25-db is
2022-07-06T00:28:26.232378+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/30-vidm-db
2022-07-06T00:28:26.303362+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/30-vidm-db: IS_MASTER: 'True', REMOVED_NODE: 'svratwo.cap.org'
2022-07-06T00:28:26.313158+00:00 svraone node-removed: Removing 'svratwo' from horizon database tables
2022-07-06T00:28:26.318183+00:00 svraone su: (to postgres) root on none
2022-07-06T00:28:26.809478+00:00 svraone su: last message repeated 3 times
2022-07-06T00:28:26.809378+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 30-vidm-db is DELETE 1
Last login: Wed Jul  6 00:11:57 UTC 2022
DELETE 1
Last login: Wed Jul  6 00:28:26 UTC 2022
DELETE 0
Last login: Wed Jul  6 00:28:26 UTC 2022
DELETE 1
Last login: Wed Jul  6 00:28:26 UTC 2022
2022-07-06T00:28:26.809402+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master
2022-07-06T00:28:26.904576+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/40-rabbitmq-master: IS_MASTER: 'True', REMOVED_NODE: 'svratwo.cap.org'
2022-07-06T00:28:26.917409+00:00 svraone node-removed: Removing 'rabbit@svratwo' from rabbitmq cluster
2022-07-06T00:28:26.971570+00:00 svraone su: (to rabbitmq) root on none
2022-07-06T00:28:28.544064+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 40-rabbitmq-master is Removing node rabbit@svratwo from cluster
2022-07-06T00:28:28.544087+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/50-elasticsearch
2022-07-06T00:28:28.578987+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster-config.py[6988]: info Executing shell command...
2022-07-06T00:28:28.620636+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/50-elasticsearch: IS_MASTER: 'True'
2022-07-06T00:28:28.625562+00:00 svraone node-removed: Restarting elasticsearch service
2022-07-06T00:28:28.811215+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Result from 50-elasticsearch is Stopping elasticsearch:  process in pidfile `/opt/vmware/elasticsearch/elasticsearch.pid'done.
Starting elasticsearch: 2048
2022-07-06T00:28:28.811240+00:00 svraone vami /opt/vmware/share/htdocs/service/cluster/cluster.py[7185]: info Executing /etc/vr/cluster-event/node-removed.d/60-vidm-health
2022-07-06T00:28:29.042125+00:00 svraone node-removed: /etc/vr/cluster-event/node-removed.d/60-vidm-health: IS_MASTER: 'True', REMOVED_NODE: 'svratwo.cap.org'


  • Here's the final status


  • Even though svratwo was out of the cluster, it was still showing up in the RabbitMQ cluster status



  • Perform a "Reset Rabbitmq" to get rid of this stale node





  • Once that completes, all should be good




[master] svraone:~ # rabbitmqctl cluster_status
Cluster status of node rabbit@svraone
[{nodes,[{disc,[rabbit@svraone]}]},
 {running_nodes,[rabbit@svraone]},
 {cluster_name,<<"rabbit@svraone.cap.org">>},
 {partitions,[]},
 {alarms,[{rabbit@svraone,[]}]}]

  • Log in to the vRA portal and confirm everything is functional

  • Perform a general health check on the IaaS nodes and endpoints as well

  • Also, check that deployments are working if you're still using this 7.x version



This concludes the blog. Remember: before making any changes, take snapshots.

Once all verifications are complete, go ahead and power off the nodes we removed, and delete them after a week or so, based on your environment.


 


