
Search Results

236 results found for ""

  • Configuring a vRO 8.x Cluster

    In this blog, I will share the procedure for creating a vRealize Orchestrator cluster on version 8.2. The recommended number of nodes for a clustered vRealize Orchestrator environment is three: vropri.prem.com, vrosec.prem.com, and vroteri.prem.com. Once the appliances are deployed, ensure that all pods are in a RUNNING state before moving on to the next step of configuring clustering.

    Procedure: In my case there is no working load balancer, so I am using a workaround for the moment: I have created a CNAME that points to the primary node. Ideally, you are expected to configure a load balancer as described in the Orchestrator Load-Balancing Guide. Now that all the nodes are in a ready state, I have taken a snapshot of all three nodes before making any changes. These snapshots are taken without memory while the appliances are powered on.

    Configure the primary node: log in to the vRealize Orchestrator Appliance of the primary node over SSH as root. To configure the cluster load balancer server, run the vracli load-balancer set load_balancer_FQDN command. Click Change and set the host address of the connected load balancer server. Configure the authentication provider; see Configuring a Standalone vRealize Orchestrator Server.

    Join secondary nodes to the primary node: log in to the vRealize Orchestrator Appliance of the secondary node over SSH as root. To join the secondary node to the primary node, run the vracli cluster join primary_node_hostname_or_IP command and enter the root password of the primary node. Repeat the procedure for the other secondary nodes.

    The following events happen when you execute the cluster join command. The snippet below is from one of the nodes, vroteri.prem.com:

root@vroteri [ ~ ]# vracli cluster join vropri.prem.com 2020-11-24 14:13:19,085 [INFO] Resetting the current node .. 2020-11-24 14:18:09,306 [INFO] Getting join bundle from remote endpoint .. Password: 2020-11-24 14:22:30,362 [INFO] Parsing join bundle 2020-11-24 14:22:30,390 [INFO] Deleting data from previous use on this node .. 2020-11-24 14:22:31,006 [INFO] Creating missing data directories on this node .. 2020-11-24 14:22:31,082 [INFO] Allowing other nodes to access this node .. 2020-11-24 14:22:31,101 [INFO] Updating hosts file for remote endpoint .. 2020-11-24 14:22:31,114 [INFO] Executing cluster join .. I1124 14:22:31.221791 30566 join.go:357] [preflight] found /etc/kubernetes/admin.conf. Use it for skipping discovery I1124 14:22:31.224053 30566 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName I1124 14:22:31.224103 30566 join.go:375] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress I1124 14:22:31.224243 30566 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock I1124 14:22:31.224550 30566 interface.go:400] Looking for default routes with IPv4 addresses I1124 14:22:31.224592 30566 interface.go:405] Default route transits interface "eth0" I1124 14:22:31.224842 30566 interface.go:208] Interface eth0 is up I1124 14:22:31.224968 30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20]. I1124 14:22:31.225031 30566 interface.go:223] Checking addr 10.109.44.140/20. I1124 14:22:31.225068 30566 interface.go:230] IP found 10.109.44.140 I1124 14:22:31.225117 30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0". 
I1124 14:22:31.225151 30566 interface.go:411] Found active IP 10.109.44.140 [preflight] Running pre-flight checks I1124 14:22:31.225361 30566 preflight.go:90] [preflight] Running general checks I1124 14:22:31.225501 30566 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests I1124 14:22:31.225580 30566 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf I1124 14:22:31.225626 30566 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf I1124 14:22:31.225673 30566 checks.go:102] validating the container runtime I1124 14:22:31.342648 30566 checks.go:128] validating if the service is enabled and active [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ I1124 14:22:31.532103 30566 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I1124 14:22:31.532217 30566 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward I1124 14:22:31.532283 30566 checks.go:649] validating whether swap is enabled or not I1124 14:22:31.532392 30566 checks.go:376] validating the presence of executable conntrack I1124 14:22:31.532532 30566 checks.go:376] validating the presence of executable ip I1124 14:22:31.532583 30566 checks.go:376] validating the presence of executable iptables I1124 14:22:31.532650 30566 checks.go:376] validating the presence of executable mount I1124 14:22:31.532687 30566 checks.go:376] validating the presence of executable nsenter I1124 14:22:31.532741 30566 checks.go:376] validating the presence of executable ebtables I1124 14:22:31.532847 30566 checks.go:376] validating the presence of executable ethtool I1124 14:22:31.532959 30566 checks.go:376] validating the presence of executable socat I1124 14:22:31.533016 30566 checks.go:376] validating the presence of executable tc I1124 14:22:31.533088 30566 checks.go:376] validating the presence of executable touch I1124 14:22:31.533353 30566 checks.go:520] running all checks I1124 14:22:31.631717 30566 checks.go:406] checking whether the given node name is reachable using net.LookupHost I1124 14:22:31.632132 30566 checks.go:618] validating kubelet version I1124 14:22:31.724374 30566 checks.go:128] validating if the service is enabled and active I1124 14:22:31.737672 30566 checks.go:201] validating availability of port 10250 I1124 14:22:31.738591 30566 checks.go:432] validating if the connectivity type is via proxy or direct I1124 14:22:31.738713 30566 join.go:455] [preflight] Fetching init configuration I1124 14:22:31.738727 30566 join.go:493] [preflight] Retrieving KubeConfig objects [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' I1124 14:22:31.798981 30566 interface.go:400] Looking for default routes with IPv4 addresses I1124 14:22:31.799061 30566 interface.go:405] Default route transits interface "eth0" I1124 14:22:31.799375 30566 interface.go:208] Interface eth0 is up I1124 14:22:31.799490 30566 interface.go:256] Interface "eth0" has 1 addresses :[10.109.44.140/20]. I1124 14:22:31.799542 30566 interface.go:223] Checking addr 10.109.44.140/20. I1124 14:22:31.799591 30566 interface.go:230] IP found 10.109.44.140 I1124 14:22:31.799656 30566 interface.go:262] Found valid IPv4 address 10.109.44.140 for interface "eth0". 
I1124 14:22:31.799699 30566 interface.go:411] Found active IP 10.109.44.140 I1124 14:22:31.799798 30566 preflight.go:101] [preflight] Running configuration dependant checks [preflight] Running pre-flight checks before initializing the new control plane instance I1124 14:22:31.801206 30566 checks.go:577] validating Kubernetes and kubeadm version I1124 14:22:31.801271 30566 checks.go:166] validating if the firewall is enabled and active I1124 14:22:31.813148 30566 checks.go:201] validating availability of port 6443 I1124 14:22:31.813292 30566 checks.go:201] validating availability of port 10259 I1124 14:22:31.813358 30566 checks.go:201] validating availability of port 10257 I1124 14:22:31.813430 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I1124 14:22:31.813486 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I1124 14:22:31.813516 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I1124 14:22:31.813541 30566 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I1124 14:22:31.813566 30566 checks.go:432] validating if the connectivity type is via proxy or direct I1124 14:22:31.813609 30566 checks.go:471] validating http connectivity to first IP address in the CIDR I1124 14:22:31.813653 30566 checks.go:471] validating http connectivity to first IP address in the CIDR I1124 14:22:31.813695 30566 checks.go:201] validating availability of port 2379 I1124 14:22:31.813833 30566 checks.go:201] validating availability of port 2380 I1124 14:22:31.813917 30566 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I1124 14:22:31.882093 30566 checks.go:838] image exists: vmware/kube-apiserver:v1.18.5-vmware.1 I1124 14:22:31.945357 30566 checks.go:838] image exists: vmware/kube-controller-manager:v1.18.5-vmware.1 I1124 14:22:32.007510 30566 checks.go:838] image exists: vmware/kube-scheduler:v1.18.5-vmware.1 I1124 14:22:32.071842 30566 checks.go:838] image exists: vmware/kube-proxy:v1.18.5-vmware.1 I1124 14:22:32.139079 30566 checks.go:838] image exists: vmware/pause:3.2 I1124 14:22:32.204019 30566 checks.go:838] image exists: vmware/etcd:3.3.6.670 I1124 14:22:32.269153 30566 checks.go:838] image exists: vmware/coredns:1.2.6.11743048 I1124 14:22:32.269226 30566 controlplaneprepare.go:211] [download-certs] Skipping certs download [certs] Using certificateDir folder "/etc/kubernetes/pki" I1124 14:22:32.269257 30566 certs.go:38] creating PKI assets [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [vroteri.prem.com localhost] and IPs [10.109.44.140 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for 
DNS names [vroteri.prem.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vra-k8s.local] and IPs [10.244.4.1 10.109.44.140] [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" I1124 14:22:34.339423 30566 certs.go:69] creating new public/private key files for signing service account users [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf" [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" I1124 14:22:36.170921 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.171453 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.171988 30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver" I1124 14:22:36.172011 30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver" I1124 14:22:36.172020 30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver" I1124 14:22:36.194181 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml" [control-plane] Creating static Pod manifest for "kube-controller-manager" I1124 14:22:36.194238 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.194357 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.194806 30566 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager" I1124 14:22:36.194836 30566 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager" I1124 14:22:36.194866 30566 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager" I1124 14:22:36.194875 30566 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager" I1124 14:22:36.194883 30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager" I1124 14:22:36.196052 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [control-plane] Creating static Pod manifest for "kube-scheduler" I1124 14:22:36.196092 30566 manifests.go:91] [control-plane] getting StaticPodSpecs W1124 14:22:36.196176 30566 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" I1124 14:22:36.196505 30566 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" I1124 14:22:36.197169 30566 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" [check-etcd] Checking that the etcd cluster is healthy I1124 14:22:36.214097 30566 local.go:78] [etcd] Checking etcd cluster health I1124 14:22:36.214140 30566 local.go:81] creating etcd client that connects to etcd pods I1124 14:22:36.214188 30566 etcd.go:178] retrieving etcd endpoints from 
"kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods I1124 14:22:36.242843 30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.334793 30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.334878 30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:36.406046 30566 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf I1124 14:22:36.412385 30566 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "vroteri.prem.com" and status "Ready" I1124 14:22:36.425067 30566 kubelet.go:159] [kubelet-start] Stopping the kubelet [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... I1124 14:22:41.776119 30566 cert_rotation.go:137] Starting client certificate rotation controller I1124 14:22:41.781038 30566 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node I1124 14:22:41.781103 30566 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "vroteri.prem.com" as an annotation I1124 14:22:50.317665 30566 local.go:130] creating etcd client that connects to etcd pods I1124 14:22:50.317743 30566 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods I1124 14:22:50.348895 30566 etcd.go:102] etcd endpoints read from pods: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399038 30566 etcd.go:250] etcd endpoints read from etcd: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399144 30566 etcd.go:120] update etcd endpoints: https://10.109.44.138:2379,https://10.109.44.139:2379 I1124 14:22:50.399168 30566 local.go:139] Adding etcd member: https://10.109.44.140:2380 [etcd] Announced new etcd member joining to the existing etcd cluster I1124 14:22:50.480013 30566 local.go:145] Updated etcd member list: [{vroteri.prem.com https://10.109.44.140:2380} {vropri.prem.com https://10.109.44.138:2380} {vrosec.prem.com https://10.109.44.139:2380}] [etcd] Creating static Pod manifest for "etcd" [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s I1124 14:22:50.482224 30566 etcd.go:509] [etcd] attempting to see if all cluster endpoints ([https://10.109.44.138:2379 https://10.109.44.139:2379 https://10.109.44.140:2379]) are available 1/8 [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node vroteri.prem.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. 
* Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster. 2020-11-24 14:22:52,499 [INFO] Enabling the current node to run workloads .. node/vroteri.prem.com untainted 2020-11-24 14:22:52,945 [INFO] Enabling flannel on the current node .. 2020-11-24 14:22:53,175 [INFO] Updating hosts file for local endpoint .. 2020-11-24 14:22:53,192 [INFO] Sleeping for 120 seconds to allow infrastructure services to start. 2020-11-24 14:24:53,289 [INFO] Updating proxy-exclude settings to include this node FQDN (vroteri.prem.com) and IPv4 address (10.109.44.140)

    I don't have a custom certificate in this environment, so I will skip that step. As you can see once we execute kubectl -n prelude get nodes, all three nodes are in a Ready state. Now let's run the script that starts all the services: /opt/scripts/deploy.sh. At this point, monitor /var/log/deploy.log, and monitor pod deployments using watch -n 5 "kubectl -n prelude get pods -o wide". Finally, all the pods are up on all the underlying nodes. Heading to the vRO load balancer FQDN, you will be presented with the landing page of this vRO; clicking on Control Center lets you configure the environment accordingly. A consolidated command sketch for the whole procedure follows below.
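    For quick reference, here is a minimal shell sketch that condenses the commands used in this procedure. The node names come from this lab, and vro-lb.prem.com is a hypothetical stand-in for the load-balancer (or CNAME) FQDN; replace every name with values from your own environment.

    # On the primary node (vropri.prem.com), over SSH as root:
    vracli load-balancer set vro-lb.prem.com    # hypothetical LB/CNAME FQDN; use your own
    # Then configure the authentication provider from Control Center as described above.

    # On each secondary node (vrosec.prem.com, vroteri.prem.com), over SSH as root:
    vracli cluster join vropri.prem.com         # prompts for the primary node's root password

    # Verify that all three nodes are Ready, then start the services:
    kubectl -n prelude get nodes
    /opt/scripts/deploy.sh

    # Monitor the rollout:
    tail -f /var/log/deploy.log
    watch -n 5 "kubectl -n prelude get pods -o wide"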

  • VMware Cloud Foundation 5.2.1 | What's New & Product Support Packs

    VMware Cloud Foundation 5.2.1 has just been released. The summary below lists each component, its version, what's new, and the relevant links.

    SDDC Manager 5.2.1: Reduced Downtime Upgrade (RDU) support for vCenter; NSX in-place upgrades for clusters that use vSphere Lifecycle Manager baselines; support for vSphere Lifecycle Manager baseline and vSphere Lifecycle Manager image-based clusters in the same workload domain; support for the "License Now" option for vSAN add-on licenses based on capacity per tebibyte (TiB); set up VMware Private AI Foundation infrastructure from the vSphere Client; manage all SDDC certificates and passwords from a single UI. Release Notes VCF 5.2.1, Build Number 24307856.

    vCenter Server 8.0u3c: Resolves an issue with the vSphere Client becoming unresponsive when a session stays idle for more than 50 minutes. Built on top of VC 8.0u3b, which delivered critical security fixes resolving CVE-2024-38812 and CVE-2024-38813, plus bug fixes. Release Notes VC 8.0U3c, Build Number 24305161.

    ESXi 8.0u3b: Bug and security fixes; DPU/SmartNIC: VMware vSphere Distributed Services Engine support with NVIDIA Bluefield-3 DPUs; design improvements for cloud-init guestInfo variables. Release Notes ESXi 8.0U3b, Build Number 24280767.

    NSX 4.2.1: NSX now supports a number of enhancements that help customers provide high availability for networking, including TEP Groups, monitoring for dual DPU deployments, and improved alarms on the NSX Edge. As part of the multi-tenancy feature set, NSX Projects can now be exposed as folders in vCenter. NSX now supports VRF configuration at the Global Manager level, giving customers a single place to configure stretched T0 Gateways. NSX in-place upgrades can now be invoked on non-vLCM clusters in a VCF environment, for faster upgrades of NSX-enabled hypervisors without having to migrate your workloads. Release Notes NSX 4.2.1, Build Number 24304122.

    VMware Aria Suite Lifecycle 8.18 with Product Support Pack 3: Product Support Pack 3 adds support for VMware Aria Automation 8.18.1, VMware Aria Orchestrator 8.18.1, and VMware Aria Operations for Networks 6.14.0. Release Notes VMware Aria Suite Lifecycle 8.18 Product Support Pack, Build Number 24286744.

    VMware Aria Automation 8.18.1: NSX network and security group automation with discovered resources from NSX Projects and VPCs; onboarding of vSphere namespaces to be used in the Cloud Consumption Interface; support for 64 disks on the Paravirtual SCSI (PVSCSI) controller; DCGM Exporter included by default in the Deep Learning VM. Release Notes vRA 8.18.1, Build Number 24280767.

    VMware Aria Operations 8.18.1: Resolves important security and functionality issues. Release Notes Aria Ops 8.18.1, Build Number 24267784.

    VMware Aria Operations for Networks 6.14: Plan blueprints can be created for workloads based on various scoping criteria, and those workloads can be migrated through VMware HCX; NSX Assessment Dashboard improvements; flow-based application discovery with CSV upload. Release Notes Aria Ops 6.14, Build Number 1725688792.

    A number of Product Support Packs have been released so that VMware Aria Suite Lifecycle can be upgraded and can support the latest products such as VMware Aria Automation 8.18.1, VMware Aria Orchestrator 8.18.1, and VMware Aria Operations for Networks 6.14.0. Each entry below has its own download link and release notes: vRealize Suite Lifecycle Manager Product Support Pack 20 (8.10 PSPACK 20, 8.10 PSPACK RN); VMware Aria Suite Lifecycle 8.12.0 PSPACK 12 (8.12 PSPACK 12, 8.12 PSPACK RN); VMware Aria Suite Lifecycle 8.14.0 PSPACK 10 (8.14 PSPACK 10, 8.14 PSPACK RN); VMware Aria Suite Lifecycle 8.16.0 PSPACK 6 (8.16.0 PSPACK 6, 8.16 PSPACK RN); VMware Aria Suite Lifecycle 8.18.0 PSPACK 3 (8.18 PSPACK 3, 8.18 PSPACK RN).

  • VMware Aria Suite Lifecycle 8.18.0 PSPACK 4 is out

    VMware Aria Suite Lifecycle 8.18.0 Product Support Pack 4 adds support for VMware Aria Operations for Logs 8.18.1 and VMware Aria Operations 8.18.2. Release Notes: https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/release-notes/Chunk708728041.html#Chunk708728041 Download Link: https://support.broadcom.com/web/ecx/solutiondetails?patchId=5630

  • VMware Aria Suite Lifecycle Documentation finds a New Home

    The VMware Aria Suite Lifecycle documentation has a new home! Previously hosted on VMware Docs, it has now been relocated to Broadcom's TechDocs platform. Here is a collection of links that might be helpful: Installation and Upgrade Guide https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/vmware-aria-suite-lifecycle-installation-upgrade-and-management-8-18.html | Security Hardening Guide https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/vrealize-suite-lifecycle-manager-security-hardening-guide-8-18.html | Release Notes (New Releases, Patches and Product Support Packs) https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/release-notes.html | API Programming Guide https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/vmware-aria-suite-lifecycle-programming-guide-8-18.html | Documentation Legal Notice https://techdocs.broadcom.com/us/en/vmware-cis/aria/aria-suite-lifecycle/8-18/documentation-legal-notice-english-public.html

  • Authenticating via AD users and executing vRSLCM API's -- Detailed Procedure

    This blog is available in PDF format too; download the PDF attached below to consume it.

    Demo Pre-Requisites: vIDM LB URL (if clustered) or vIDM FQDN (single), vIDM local account, vIDM local account password, AD user name, AD password, and domain.

    Procedure

    Phase-1: As a first step, fetch the session token. This can be done by using the API below. Method: POST. URL: {{idmurl}}/SAAS/API/1.0/REST/auth/system/login. Payload: { "username": "{{idmlocalusername}}", "password": "{{idmlocalpassword}}", "issueToken": "true" }. Response: false eyJ0eXAiOiJKV1Q****9XsskFqilcg (Note: the sessionToken in the above response has been trimmed). Copy this session token into a variable called vIDMSessionToken in Postman.

    Phase-2: As a next step, we will create an oauth2client by running an API. This definition enables a service or its users to authenticate to VMware Identity Manager using the OAuth2 protocol. In short, the admin creates a trusted client, and APIs can then use client:secret to obtain a token and authenticate. Method: POST. URL: {{idmurl}}/SAAS/jersey/manager/api/oauth2clients ({{idmurl}} is a variable for the vIDM FQDN). Payload: { "clientId":"admintesttwo", "secret":"Vk13YXJlMTIzIQ==", "scope":"user admin", "authGrantTypes":"password", "tokenType":"Bearer", "tokenLength":23, "accessTokenTTL":36000, "refreshTokenTTL":432000 }. clientId is the name given to the client that will be created and can be any name; the secret is the base64-encoded password you would like to assign to this client. Response: { "clientId": "admintesttwo", "secret": "Vk13YXJlMTIzIQ==", "scope": "user admin", "authGrantTypes": "password", "redirectUri": null, "tokenType": "Bearer", "tokenLength": 32, "accessTokenTTL": 36000, "refreshTokenTTL": 432000, "refreshTokenIdleTTL": null, "rememberAs": null, "resourceUuid": null, "displayUserGrant": true, "internalSystemClient": false, "activationToken": null, "strData": null, "inheritanceAllowed": false, "returnFailureResponse": false, "_links": { "self": { "href": "/SAAS/jersey/manager/api/oauth2clients/admintesttwo" } } }. If this API call is successful, a 201 Created response is returned. Headers: Content-Type: application/vnd.vmware.horizon.manager.oauth2client+json; Accept: application/vnd.vmware.horizon.manager.oauth2client+json. If we log in to vIDM, under the Administration Console click Catalog and then select Settings; once we browse to Remote App Access, you will be able to see the client id, and clicking on it provides more details about the OAuth2Client created.

    Phase-3: Once the client id is created, we need to fetch the token for AD authentication. Method: POST. URL: {{idmurl}}/SAAS/auth/oauthtoken?grant_type=password. Body (form data): {{username}}, {{password}}, {{domain}}, where {{username}} refers to the AD username, {{password}} refers to the AD user's password, and {{domain}} refers to the domain the user belongs to. Authorization: Basic {{clientid}}:{{secret}} (the clientid and base64-encoded secret created in the previous step; use them to fetch the token). Content-Type: multipart/form-data. I'd copy this access token into a variable again and call it adusertoken.

    Now, let's execute a Get Environment API call to fetch details. These are vRSLCM's APIs. Method: GET. URL: {{lcmurl}}/lcm/lcops/api/v2/environments/{{geenvid}}. Authorization: Bearer Token {{adusertoken}} ({{adusertoken}} is the token captured above). Response: { "vmid": "90b3269b-9338-4cab-9b3c-f744a2a1e13b", "transactionId": null, "tenant": "default", "environmentName": "globalenvironment", "environmentDescription": "", "environmentId": "globalenvironment", "state": null, "status": "COMPLETED", "environmentData": * * * "{\"environmentId\":\"globalenvironment\",\"environmentName\":\"globalenvironment\",\"environmentDescription\":null,\"environmentHealth\":null,\"logHistory\":\"[ {\\n \\\"logGeneratedTime\\\" : 1657682435109,\\n \ "dataCenterName": null } (truncated version of the response). This is how one may generate an access token using an AD user account and then use it to execute the vRSLCM APIs; a curl sketch of the end-to-end flow follows below.
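    For readers who prefer the command line over Postman, here is a minimal curl sketch of the same four calls. It is only an outline under a few assumptions: the shell variables (IDM_FQDN, LCM_FQDN, AD_USER, ENV_ID, and the token variables) are placeholders I have introduced, and passing the Phase-1 session token as an "Authorization: HZN" header is my assumption of how the Phase-2 call is authenticated, since the original post only shows Postman variables.

    # Phase 1: fetch a vIDM session token using the local account
    curl -sk -X POST "https://$IDM_FQDN/SAAS/API/1.0/REST/auth/system/login" \
      -H "Content-Type: application/json" \
      -d "{\"username\":\"$IDM_LOCAL_USER\",\"password\":\"$IDM_LOCAL_PASSWORD\",\"issueToken\":\"true\"}"
    # Save the sessionToken from the response into IDM_SESSION_TOKEN.

    # Phase 2: create the OAuth2 client (session-token header is an assumption, see above)
    curl -sk -X POST "https://$IDM_FQDN/SAAS/jersey/manager/api/oauth2clients" \
      -H "Authorization: HZN $IDM_SESSION_TOKEN" \
      -H "Content-Type: application/vnd.vmware.horizon.manager.oauth2client+json" \
      -H "Accept: application/vnd.vmware.horizon.manager.oauth2client+json" \
      -d '{"clientId":"admintesttwo","secret":"Vk13YXJlMTIzIQ==","scope":"user admin","authGrantTypes":"password","tokenType":"Bearer","tokenLength":32,"accessTokenTTL":36000,"refreshTokenTTL":432000}'

    # Phase 3: exchange AD credentials for an access token (password grant, multipart form data)
    curl -sk -X POST "https://$IDM_FQDN/SAAS/auth/oauthtoken?grant_type=password" \
      -u "admintesttwo:Vk13YXJlMTIzIQ==" \
      -F "username=$AD_USER" -F "password=$AD_PASSWORD" -F "domain=$AD_DOMAIN"
    # Save access_token from the response into AD_USER_TOKEN.

    # Phase 4: call a vRSLCM API with the AD user's bearer token
    curl -sk "https://$LCM_FQDN/lcm/lcops/api/v2/environments/$ENV_ID" \
      -H "Authorization: Bearer $AD_USER_TOKEN"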

  • Understanding Creutzfeldt-Jakob Disease (CJD)

    Understanding Creutzfeldt-Jakob Disease (CJD) Creutzfeldt-Jakob Disease (CJD) is a rare and rapidly progressing neurodegenerative condition that primarily affects the brain. It is classified as a prion disease, which means that it is caused by abnormal proteins (prions) that damage brain tissue, leading to severe neurological symptoms. Although CJD is rare, with approximately one to two cases per million people per year worldwide, its effects can be devastating. This blog aims to provide an overview of CJD, including its symptoms, types, diagnosis, and treatment options. What is CJD? CJD belongs to a family of diseases known as prion diseases, which includes other rare conditions like Kuru, fatal familial insomnia, and Gerstmann-Sträussler-Scheinker syndrome. Prions are misfolded proteins that cause a cascade of abnormal folding in healthy proteins in the brain. This leads to cell death and tissue damage, ultimately causing brain shrinkage and spongiform changes, which are characteristic of the disease. Types of CJD There are four main types of CJD: Sporadic CJD (sCJD): This is the most common form, accounting for 85-90% of cases. It usually occurs randomly in people aged 60-65, with no known risk factors or genetic link. Familial or Inherited CJD (fCJD): Caused by a mutation in the PRNP gene, which encodes the prion protein. Makes up 10-15% of CJD cases. Individuals with a family history of CJD or other prion diseases are at higher risk. Acquired CJD: Results from exposure to infectious prions from contaminated medical equipment, organ transplants, or consumption of infected beef (Bovine Spongiform Encephalopathy or “mad cow disease”). While extremely rare, outbreaks of variant CJD (vCJD) have been linked to BSE. Variant CJD (vCJD): Linked to exposure to BSE in cattle. Typically affects younger individuals, and the disease course is longer than other forms of CJD. Symptoms of CJD CJD is characterized by rapid mental deterioration, leading to severe disability and death within a year in most cases. Common symptoms include: Rapidly progressive dementia Memory loss and confusion Personality changes Hallucinations Muscle stiffness or twitching Difficulty with coordination and balance Speech and vision problems Eventually, patients enter a coma-like state and often succumb to pneumonia or other infections. Diagnosis of CJD Diagnosing CJD is challenging due to its rarity and similarity to other neurological disorders. Typically, a combination of tests is used, including: Electroencephalogram (EEG): Measures brain wave patterns and can show characteristic changes in CJD. MRI Scans: Can reveal brain abnormalities indicative of CJD. Cerebrospinal Fluid (CSF) Tests: Detects certain proteins that are markers for prion disease. Genetic Testing: For individuals with a family history, genetic tests can identify mutations associated with familial CJD. Treatment and Management Currently, there is no cure for CJD. Treatment focuses on alleviating symptoms and providing supportive care. This may include: Pain relief Medications to reduce muscle spasms Psychological support for patients and families Ensuring a comfortable environment for end-of-life care Researchers are actively exploring new treatments, including drugs that target prions and gene therapies, but these are still in experimental stages. Prevention and Risk Mitigation Preventing CJD is challenging due to its sporadic nature. 
However, some steps can be taken to reduce the risk of acquired CJD: Implementing strict controls on the use of medical equipment, particularly in neurosurgery, to avoid cross-contamination. Regulating the food supply to prevent exposure to BSE. Genetic counseling for individuals with a family history of CJD. Living with CJD CJD is a devastating disease, not only for the individual but also for their families. The rapid progression and lack of effective treatment options make it particularly hard to cope with. Support groups and counseling can provide emotional and psychological support during this difficult time. Conclusion Creutzfeldt-Jakob Disease is a rare but fatal neurodegenerative disorder caused by prions. Despite its rarity, understanding the disease, its symptoms, and its progression is crucial for healthcare professionals and researchers. While there is currently no cure, ongoing research offers hope for the development of effective treatments in the future. For those looking for more information or support, organizations such as the CJD Foundation and local prion disease centers can provide valuable resources and guidance.

  • Suite Lifecycle 8.18 PSPACK 2 Released | Supports VMware Aria Operations 8.18.1 |

    VMware Aria Suite Lifecycle 8.18.0 Product Support Pack 2 is now released. This Product Support Pack provides support for installing or upgrading to VMware Aria Operations 8.18.1. Download Link https://support.broadcom.com/web/ecx/solutiondetails?patchId=5527 Release Notes https://docs.vmware.com/en/VMware-Aria-Suite-Lifecycle/8.18/rn/vmware-aria-suite-lifecycle-818-product-support-pack-release-notes/index.html

  • VMware Aria Suite Lifecycle Product Support Pack Screenshots

    I'm often asked which versions are supported with a specific Product Support Pack. That's why I've written this blog, which has screenshots of every Product Support Pack released for the last few versions of VMware Aria Suite Lifecycle. What is a Product Support Pack? In VMware Aria Suite Lifecycle, a Product Support Pack is a software package that updates an existing product support policy to incorporate support for new releases of the VMware Aria suite of products. Here's a link to another blog I have written which explains why you need a Product Support Pack: https://www.arunnukula.com/post/why-do-i-need-product-support-packs-let-s-learn VMware Aria Suite Lifecycle 8.18.0: 8.18 PSPACK 1, 8.18.0 GA. VMware Aria Suite Lifecycle 8.16.0: 8.16.0 GA. VMware Aria Suite Lifecycle 8.14.0: 8.14.0 PSPACK 8, 8.14.0 PSPACK 7, 8.14.0 PSPACK 6, 8.14.0 PSPACK 5, 8.14.0 PSPACK 4, 8.14.0 PSPACK 3, 8.14.0 PSPACK 2, 8.14.0 PSPACK 1, 8.14.0 GA. VMware Aria Suite Lifecycle 8.12.0: 8.12.0 PSPACK 10, 8.12.0 PSPACK 9, 8.12.0 PSPACK 8, 8.12.0 PSPACK 7, 8.12.0 PSPACK 6, 8.12.0 PSPACK 5, 8.12.0 PSPACK 4, 8.12.0 PSPACK 3, 8.12.0 PSPACK 2, 8.12.0 PSPACK 1, 8.12.0 GA.

  • Online Upgrade of VMware Aria Suite Lifecycle is back...........

    This option is only available in a non-VCF-aware VMware Aria Suite Lifecycle. Good news: the online upgrade of VMware Aria Suite Lifecycle was affected by Day-2 changes made in May, but the issue has been resolved and the service is now restored. When customers are in connected mode, they can select "Check Online" followed by "Check for Upgrade" to verify that a new manifest is present and that a new upgrade is identified. Subsequently, they can click "Upgrade" to continue with the workflow. Please note that customers using VMware Aria Suite Lifecycle 8.16.0 Patch 1 or PSPACK 4 are advised to install PSPACK 5 for version 8.16.0 in order to restore this feature in the UI.

  • Upgrading VMware Aria Suite Lifecycle to version 8.18.0 using Online Method

    Log in to VMware Aria Suite Lifecycle, go to Lifecycle Operations and then Settings, and click System Upgrade. Select "Check Online" and then click "Check for Upgrade"; it will report that the new VMware Aria Suite Lifecycle 8.18.0 is available. Ensure a snapshot is taken before triggering an upgrade, then click "Upgrade" to begin the upgrade workflow. Once the pre-checks pass, the upgrade starts. Remember the logs you can monitor (a short sketch follows this excerpt). Pre-Upgrade & Upgrade Phase: /var/log/vmware/capengine/cap-non-lvm-update/workflow.log and /var/log/vmware/capengine/cap-non-lvm-update/installer-<>.log. Post Update Phase: /var/log/bootstrap/postupdate.log, /data/script.log, and /var/log/vrlcm/vmware_vrlcm.log. Services will restart, which means the appliance is in the post-upgrade phase. The upgrade is now complete. Happy upgrades!
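    If you want to follow those logs from an SSH session on the appliance while the upgrade runs, a simple sketch could look like the one below. The paths are taken from this post; the installer log name has a generated suffix, so it is matched with a glob here.

    # Pre-upgrade / upgrade phase (CAP engine logs)
    tail -f /var/log/vmware/capengine/cap-non-lvm-update/workflow.log
    tail -f /var/log/vmware/capengine/cap-non-lvm-update/installer-*.log

    # Post-update phase
    tail -f /var/log/bootstrap/postupdate.log /data/script.log /var/log/vrlcm/vmware_vrlcm.log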

  • VMware Aria Suite 8.18.0 Release | VCF 5.2 | Upgrade Repo and Product Support Pack Links |

    VMware Cloud Foundation 5.2, which comprises VMware Aria Suite 8.18.0, comes with the following new product versions: VMware Aria Suite Lifecycle 8.18.0, VMware Aria Operations 8.18.0, VMware Aria Automation 8.18.0, VMware Aria Operations for Networks 6.13.0, VMware Aria Operations for Logs 8.18.0, and VMware Aria Automation Orchestrator 8.18.0.

    From where can I download the VMware Aria Suite Lifecycle 8.18.0 upgrade package? Click here to open the Broadcom Support Portal; the following screen is what you will see. The VMware Aria Suite Lifecycle 8.18 Easy Installer for Automation and vIDM binary helps you install VMware Aria Suite Lifecycle, VMware Identity Manager, and VMware Aria Automation together. The VMware Aria Suite Lifecycle 8.18 Update Repository Archive is the upgrade package specifically designed for VMware Aria Suite Lifecycle 8.18.0; if you are currently using an older version of Suite Lifecycle, you can use this package to upgrade to version 8.18.0. The VMware Aria Suite Lifecycle 8.18 Easy Installer binary helps you install only VMware Aria Suite Lifecycle; it does not contain the vIDM and Automation binaries. This link will take you to the page where you can download all the binaries mentioned above.

    Product Support Packs have been launched to assist with upgrading VMware Aria Suite Lifecycle to version 8.18.0 if you are in VCF-aware mode. If your VMware Aria Suite Lifecycle is in non-VCF-aware mode, there is no need to apply these Product Support Packs on previous versions of Suite Lifecycle; just download the upgrade repository and then upgrade. Policy screenshot once an upgrade to VMware Aria Suite Lifecycle 8.18.0 is completed. PSPACK information (each with its own Download Link and Release Notes): vRealize Suite Lifecycle Manager 8.8.2 PSPACK 11; vRealize Suite Lifecycle Manager 8.10.0 PSPACK 19; VMware Aria Suite Lifecycle 8.12.0 PSPACK 11; VMware Aria Suite Lifecycle 8.14.0 PSPACK 9; VMware Aria Suite Lifecycle 8.16.0 PSPACK 5.

  • VMware Aria Suite Lifecycle 8.16.0 Product Support Pack 4 Released

    VMware Aria Suite Lifecycle 8.16.0 Product Support Pack 4 is now released. This Product Support Pack provides support for VMware Aria Operations 8.17.2 and VMware Aria Operations for Logs 8.16.1. Click here for the Release Notes. Click here for downloading Product Support Pack 4. After applying VMware Aria Suite Lifecycle 8.16.0 Product Support Pack 4, you will see that the My VMware options are no longer available.
