

Title:
EXPAND AND CONTRACT FRAMEWORK FOR REDUCED DOWNTIME DURING UPGRADE OF AN APPLICATION HOSTED IN A DATA CENTER
Document Type and Number:
WIPO Patent Application WO/2023/059563
Kind Code:
A1
Abstract:
A method of upgrading an application executing in a software-defined data center (SDDC) includes: expanding a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicating the database of the first version to a database of the second version of the application while the services of the second version are inactive; and contracting, in response to activation of the services of the second version and deactivation of the services of the first version, the database of the second version, while the services of the second version are active, to generate a contracted database, the contracted database supporting the services of the second version.

Inventors:
PETROV ROMAN ROMANOV (BG)
DASKALOV GEORGE (BG)
Application Number:
PCT/US2022/045567
Publication Date:
April 13, 2023
Filing Date:
October 03, 2022
Assignee:
VMWARE INC (US)
International Classes:
G06F8/656; G06F8/71; G06F16/21
Foreign References:
US20200159852A12020-05-21
Attorney, Agent or Firm:
MINTZ, Laurie Joan et al. (US)
Claims:
What is claimed is:

1. A method of upgrading an application executing in a software-defined data center (SDDC), comprising: expanding, by a lifecycle management software, a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicating, by the lifecycle management software, the database of the first version of the application to a database of the second version of the application while the services of the second version of the application are inactive; and contracting, by the lifecycle management software in response to activation of the services of the second version of the application and deactivation of the services of the first version of the application, the database of the second version of the application, while the services of the second version of the application are active, to generate a contracted database, the contracted database supporting the services of the second version of the application.

2. The method of claim 1, wherein the first version of the application is hosted by a first virtual computing instance and the second version of the application is hosted by a second virtual computing instance, and wherein the lifecycle management software includes a lifecycle manager installed in each of the first version of the application and the second version of the application.

3. The method of claim 1, wherein the step of expanding comprises: executing structured query language (SQL) code against the database of the first version of the application to make a schema change.

4. The method of claim 3, wherein the schema change comprises at least one of creating a table or adding a column to an existing table.

5. The method of claim 3, wherein the schema change comprises adding a column to an existing table with a default value, and wherein the step of expanding comprises adding the column to the existing table without the default value, setting the default value on the column, and backfilling all rows of the column with the default value.

6. The method of claim 1, wherein the step of expanding comprises executing first structured query language (SQL) code against the database of the first version of the application to make a first portion of a schema change, and wherein the step of contracting comprises executing second SQL code against the database of the second version of the application to make a remaining portion of the schema change.

7. The method of claim 6, wherein the schema change comprises renaming a column in an existing table, wherein the step of expanding comprises adding a new column to an existing table, copying data from an existing column to the new column, and adding database triggers to keep the new column in synchronization with the existing column, and wherein the step of contracting comprises removing the existing column and the database triggers.

8. A non-transitory computer readable medium comprising instructions to be executed in a computing device to cause the computing device to carry out a method of upgrading an application executing in a software-defined data center (SDDC), comprising: expanding, by a lifecycle management software, a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicating, by the lifecycle management software, the database of the first version of the application to a database of the second version of the application while the services of the second version of the application are inactive; and contracting, by the lifecycle management software in response to activation of the services of the second version of the application and deactivation of the services of the first version of the application, the database of the second version of the application, while the services of the second version of the application are active, to generate a contracted database, the contracted database supporting the services of the second version of the application.

9. The non-transitory computer readable medium of claim 8, wherein the first version of the application is hosted by a first virtual computing instance and the second version of the application is hosted by a second virtual computing instance, and wherein the lifecycle management software includes a lifecycle manager installed in each of the first version of the application and the second version of the application.

10. The non-transitory computer readable medium of claim 8, wherein the step of expanding comprises: executing structured query language (SQL) code against the database of the first version of the application to make a schema change.

11. The non-transitory computer readable medium of claim 10, wherein the schema change comprises at least one of creating a table or adding a column to an existing table.

12. The non-transitory computer readable medium of claim 10, wherein the schema change comprises adding a column to an existing table with a default value, and wherein the step of expanding comprises adding the column to the existing table without the default value, setting the default value on the column, and backfilling all rows of the column with the default value.

13. The non-transitory computer readable medium of claim 8, wherein the step of expanding comprises executing first structured query language (SQL) code against the database of the first version of the application to make a first portion of a schema change, and wherein the step of contracting comprises executing second SQL code against the database of the second version of the application to make a remaining portion of the schema change.

14. The non-transitory computer readable medium of claim 13, wherein the schema change comprises renaming a column in an existing table, wherein the step of expanding comprises adding a new column to an existing table, copying data from an existing column to the new column, and adding database triggers to keep the new column in synchronization with the existing column, and wherein the step of contracting comprises removing the existing column and the database triggers.

15. A virtualized computing system, comprising: at least one host having a hardware platform; and a software platform executing on the hardware platform, the software platform including lifecycle management software configured to upgrade an application, the lifecycle management software configured to: expand a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicate the database of the first version of the application to a database of the second version of the application while the services of the second version of the application are inactive; and contract, in response to activation of the services of the second version of the application and deactivation of the services of the first version of the application, the database of the second version of the application, while the services of the second version of the application are active, to generate a contracted database, the contracted database supporting the services of the second version of the application.

16. The virtualized computing system of claim 15, wherein the first version of the application is hosted by a first virtual computing instance and the second version of the application is hosted by a second virtual computing instance, and wherein the lifecycle management software includes a lifecycle manager installed in each of the first version of the application and the second version of the application.

17. The virtualized computing system of claim 15, wherein the expanding comprises: executing structured query language (SQL) code against the database of the first version of the application to make a schema change.

18. The virtualized computing system of claim 17, wherein the schema change comprises at least one of creating a table or adding a column to an existing table.

19. The virtualized computing system of claim 17, wherein the schema change comprises adding a column to an existing table with a default value, and wherein the expanding comprises adding the column to the existing table without the default value, setting the default value on the column, and backfilling all rows of the column with the default value.

20. The virtualized computing system of claim 15, wherein the expanding comprises executing first structured query language (SQL) code against the database of the first version of the application to make a first portion of a schema change, and wherein the contracting comprises executing second SQL code against the database of the second version of the application to make a remaining portion of the schema change.

Description:
EXPAND AND CONTRACT FRAMEWORK FOR REDUCED DOWNTIME DURING UPGRADE OF AN APPLICATION HOSTED IN A DATA CENTER Inventors: Roman Romanov Petrov, George Daskalov CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to United States Provisional Patent Application serial number 63/252,125, filed October 4, 2021, and United States Non-Provisional Patent Application serial number 17/550,388, filed December 14, 2021, which are both incorporated by reference herein. BACKGROUND [0002] In a software-defined data center (SDDC), virtual infrastructure, which includes virtual compute, storage, and networking resources, is provisioned from hardware infrastructure that includes a plurality of host computers, storage devices, and networking devices. The provisioning of the virtual infrastructure is carried out by management software that communicates with virtualization software (e.g., hypervisor) installed in the host computers. [0003] As described in U.S. Patent Application No.17/464,733, filed on September 2, 2021, the entire contents of which are incorporated by reference herein, the desired state of the SDDC, which specifies the configuration of the SDDC (e.g., the number of clusters, the hosts that each cluster would manage, and whether or not certain features, such as distributed resource scheduling, high availability, and workload control plane, are enabled), may be defined in a declarative document, and the SDDC is deployed or upgraded according to the desired state defined in the declarative document. [0004] The declarative approach has simplified the deployment and upgrading of the SDDC configuration, but may still be insufficient by itself to meet the needs of customers who have multiple SDDCs deployed across different geographical regions, and deployed in a hybrid manner, e.g., on-premise, in a public cloud, or as a service. These customers want to ensure that all of their SDDCs are compliant with company policies and are looking for an easier way to monitor their SDDCs for compliance with the company policies and manage the upgrade and remediation of such SDDCs. [0005] Today’s applications executing in an SDDC require always-on access. This mandates upgrade downtime for applications to be reduced from hours to minutes, particularly when such applications require frequent upgrades. In addition, it is desirable to provide a way to revert the application to a stable state should an upgrade fail. Thus, there is a need to reduce downtime during upgrade of an application hosted in an SDDC. 
SUMMARY [0006] A method of upgrading an application executing in a software-defined data center (SDDC) includes: expanding, by a lifecycle management software, a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicating, by the lifecycle management software, the database of the first version of the application to a database of the second version of the application while the services of the second version of the application are inactive; and contracting, by the lifecycle management software in response to activation of the services of the second version of the application and deactivation of the services of the first version of the application, the database of the second version of the application, while the services of the second version of the application are active, to generate a contracted database, the contracted database supporting the services of the second version of the application. [0007] Further embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above method, as well as a computer system configured to carry out the above method. BRIEF DESCRIPTION OF THE DRAWINGS [0008] Fig. 1 depicts a cloud control plane implemented in a public cloud, and a plurality of SDDCs that are managed through the cloud control plane, according to embodiments. [0009] Fig. 2 is a block diagram of an SDDC in which embodiments described herein may be implemented. [0010] Fig. 3 is a block diagram depicting VIM server appliance during an upgrade operation according to embodiments. [0011] Fig.4 is a block diagram depicting lifecycle management software according to an embodiment. [0012] Fig. 5 is a flow diagram depicting a method of upgrading an application according to an embodiment. [0013] Fig. 6 is a flow diagram depicting a method of adding a column with a default value to a database according to an embodiment. [0014] Fig.7 is a flow diagram depicting a method of renaming a column in a database according to an embodiment. DETAILED DESCRIPTION [0015] One or more embodiments employ a cloud control plane for managing the configuration of SDDCs, which may be of different types and which may be deployed across different geographical regions, according to a desired state of the SDDC defined in a declarative document referred to herein as a desired state document. In the embodiment illustrated herein, the desired state document is created in the form of a human readable and editable file, e.g., a JSON (JavaScript Object Notation) file. The cloud control plane is responsible for generating the desired state and specifying configuration operations to be carried out in the SDDCs according to the desired state. Thereafter, configuration agents running locally in the SDDCs establish cloud inbound connections with the cloud control plane to acquire the desired state and the configuration operations to be carried out, and delegate the execution of these configuration operations to services running in a local SDDC control plane. [0016] Fig.1 depicts a cloud control plane 110 implemented in a public cloud 10, and a plurality of SDDCs 20 that are managed through cloud control plane 110. 
In the embodiment illustrated herein, cloud control plane 110 is accessible by multiple tenants through user interface/application programming interface (UI/API) 101 and each of the different tenants manages a group of SDDCs through cloud control plane 110 according to a desired state of the SDDCs that the tenant defines in a desired state document. In the following description, a group of SDDCs of one particular tenant is depicted as SDDCs 20, and to simplify the description, the operation of cloud control plane 110 will be described with respect to management of SDDCs 20. However, it should be understood that the SDDCs of other tenants have the same appliances, software products, and services running therein as SDDCs 20, and are managed through cloud control plane 110 in the same manner as described below for SDDCs 20. [0017] A user interface (UI) or an application programming interface (API) that interacts with cloud control plane 110 is depicted in Figure 1 as UI/API 101. Through UI/API 101, an administrator of SDDCs 20 can issue commands to: (1) get the desired state or the running state of any of SDDCs 20; (2) create the desired state of SDDCs 20, e.g., by specifying a location of the desired state document or specifying the running state of one of SDDCs 20 to be used as the desired state of all SDDCs 20; (3) perform a compliance check of SDDCs 20 against the desired state; and (4) apply the desired state to SDDCs 20. [0018] Cloud control plane 110 represents a group of services running in virtual infrastructure of public cloud 10 that interact with each other to provide a control plane through which the administrator of SDDCs 20 can manage the desired state of SDDCs 20 by issuing commands through UI/API 101. API gateway 111 is also a service running in the virtual infrastructure of public cloud 10 and this service is responsible for routing cloud inbound connections to the proper service in cloud control plane 110, e.g., SDDC configuration interface endpoint service 120, notification service 170, or coordinator 150. [0019] SDDC configuration interface endpoint service 120 is responsible for accepting commands made through UI/API 101 and returning the result to UI/API 101. An operation requested in the commands can be either synchronous or asynchronous. Asynchronous operations are stored in activity service 130, which keeps track of the progress of the operation, and an activity ID, which can be used to poll for the result of the operation, is returned to UI/API 101. If the operation targets multiple SDDCs 20 (e.g., an operation to apply the desired state to SDDCs 20), SDDC configuration interface endpoint service 120 creates an activity which has children activities. SDDC configuration worker service 140 processes these children activities independently and respectively for multiple SDDCs 20, and activity service 130 tracks these children activities according to results returned by SDDC configuration worker service 140. [0020] SDDC configuration worker service 140 polls activity service 130 for new operations and processes them by passing the tasks to be executed to SDDC task dispatcher service 141. SDDC configuration worker service 140 then polls SDDC task dispatcher service 141 for results and notifies activity service 130 of the results. SDDC configuration worker service 140 also polls SDDC event dispatcher service 142 for events posted to SDDC event dispatcher service 142 and handles these events based on the event type.
Examples of events posted to SDDC event dispatcher service 142 include a change event, where “a local change has been made to the desired state of an SDDC” and a drift event, where “the running state of SDDC is out of compliance with its desired state.” [0021] SDDC task dispatcher service 141 dispatches each task passed thereto by SDDC configuration worker service 140, to coordinator 150 and tracks the progress of the task by polling coordinator 150. Coordinator 150 accepts cloud inbound connections, which are routed through API gateway 111, from SDDC configuration agents 220. SDDC configuration agents 220 are responsible for establishing cloud inbound connections with coordinator 150 to acquire tasks dispatched to coordinator 150 for execution in their respective SDDCs 20, and orchestrating the execution of these tasks. Upon completion of the tasks, SDDC configuration agents 220 return results to coordinator 150 through the cloud inbound connections. SDDC configuration agents 220 also notify coordinator 150 of various events through the cloud inbound connections, and coordinator 150 in turn posts these events to SDDC event dispatcher service 142 for handling by SDDC configuration worker service 140. [0022] SDDC profile manager service 160 is responsible for storing the desired state documents in data store 165 (e.g., a virtual disk or a depot accessible using a uniform resource locator (URL)) and, for each of SDDCs 20, tracks the history of the desired state document associated therewith and any changes from its desired state specified in the desired state document, e.g., using a relational database (hereinafter referred to as “desired state tracking database”). When SDDC configuration worker service 140 processes children activities associated with an operation to apply the desired state to SDDCs 20, SDDC configuration worker service 140 calls SDDC profile manager service 160 to store the desired state document in data store 165 and to update the desired state tracking database to record what (e.g., which desired state document) is being applied to where (e.g., to which SDDC) and when (e.g., date and time). Similarly, when SDDC configuration agent 220 of an SDDC notifies coordinator 150 of a change event or a drift event, and coordinator 150 posts the change or drift event to SDDC event dispatcher service 142, SDDC configuration worker service 140 calls SDDC profile manager service 160 to update the desired state tracking database to record what (e.g., which desired state document of which SDDC) has changed and when (e.g., date and time). Thereafter, SDDC profile manager service 160 posts notifications about any changes made to the desired state tracking database to notification service 170, and the administrator can get such notifications through UI/API 101. [0023] An operation requested in the commands made through UI/API 101 may be synchronous, instead of asynchronous. An operation is synchronous if there is a specific time window within which the operation must be completed. Examples of a synchronous operation include an operation to get the desired state of an SDDC or an operation to get SDDCs that are associated with a particular desired state. In the embodiments, to enable such operations to be completed within the specific time window, SDDC configuration interface endpoint service 120 has direct access to data store 165. 
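By way of example only, the desired state tracking database described above could record the “what,” “where,” and “when” in a single relational table. The following PostgreSQL-style sketch is illustrative; the table and column names are assumptions and not part of the described embodiments:

    CREATE TABLE desired_state_history (
        id            bigserial PRIMARY KEY,
        sddc_id       uuid NOT NULL,                       -- where: which SDDC 20
        document_url  text NOT NULL,                       -- what: which desired state document in data store 165
        event_type    text NOT NULL,                       -- e.g., 'apply', 'change', or 'drift'
        recorded_at   timestamptz NOT NULL DEFAULT now()   -- when: date and time
    );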
[0024] As described above, a plurality of SDDCs 20, which may be of different types and which may be deployed across different geographical regions, is managed through cloud control plane 110. In one example, one of SDDCs 20 is deployed in a private data center of the customer and another one of SDDCs 20 is deployed in a public cloud, and all of SDDCs 20 are located in different geographical regions so that they would not be subject to the same natural disasters, such as hurricanes, fires, and earthquakes. [0025] Any of the services described above (and below) may be a microservice that is implemented as a container image executed on the virtual infrastructure of public cloud 10. In one embodiment, each of the services described above is implemented as one or more container images running within a Kubernetes® pod. [0026] In each SDDC 20, regardless of its type and location, a gateway appliance 210 and virtual infrastructure management (VIM) server appliance 230 are provisioned from the virtual resources of SDDC 20. Gateway appliance 210 is able to establish connections with cloud control plane 110 and a local control plane of SDDC 20. In particular, SDDC configuration agent 220 running in gateway appliance 210 communicates with coordinator 150 to retrieve the tasks (e.g., various SDDC configuration actions, such as check compliance against desired state and apply desired state) that were dispatched to coordinator 150 for execution in SDDC 20 and delegates the tasks to SDDC configuration service 225 running in VIM server appliance 230. After the execution of these tasks has completed, SDDC configuration agent 220 sends back the execution result to coordinator 150. [0027] SDDC configuration service 225 is responsible for passing on the tasks delegated by SDDC configuration agent 220 to the local control plane of SDDC 20, which includes: (1) a personality manager 232, which is responsible for applying the desired image of the virtualization software to a cluster of hosts 240 according to the desired state; (2) host profiles manager 233, which is responsible for applying the desired configurations of the cluster of hosts 240 according to the desired state; (3) virtual infrastructure (VI) profiles manager 234, which is responsible for applying the desired configuration of the virtual infrastructure managed by VIM server appliance 230 (e.g., the number of clusters, the hosts that each cluster would manage, etc.) and the desired configuration of various features provided by software products running in VIM server appliance 230 (e.g., distributed resource scheduling, high availability, and workload control plane), according to the desired state; and (4) plug-ins installed in other appliances 250 (e.g., an appliance that hosts the network virtualization software product), which are responsible for applying the desired configurations of the software running in these other appliances 250 and the virtual infrastructure managed by these other appliances 250, according to the desired state. In embodiments, personality manager 232, host profiles manager 233, and VI profiles manager 234 comprise a lifecycle manager (LCM) 235. [0028] The desired state is defined in a desired state document stored locally in data store 226 (e.g., virtual disk) and the desired state document is specified in the tasks retrieved from coordinator 150 (e.g., a task to apply the desired state) and delegated to SDDC configuration service 225. The desired state document may be changed by the administrator of SDDCs 20 through UI/API 201.
If the desired state is changed by the administrator of SDDCs 20 through UI/API 201, a change document that contains the changes to the desired state document stored in data store 226 is created and stored locally in data store 226. Thereafter, SDDC configuration agent 220 sends the change document to coordinator 150 along with a notification of this change event, i.e., that “a local change has been made to the desired state of an SDDC.” [0029] SDDC configuration service 225 performs a compliance check of the running state of SDDC 20 against the desired state, both periodically and in response to a “compliance check” task that SDDC configuration agent 220 retrieves from coordinator 150 and delegates to SDDC configuration service 225. If a drift of the running state of SDDC 20 from the desired state is detected as a result of this compliance check, SDDC configuration service 225 creates a change document that contains the differences in the running state of SDDC 20 from the desired state document stored in data store 226 and stores the change document locally in data store 226. Thereafter, SDDC configuration agent 220 sends the change document to coordinator 150 along with a notification of this drift event, i.e., that “the running state of SDDC is out of compliance with its desired state.” [0030] Fig. 2 is a block diagram of an SDDC 20 in which embodiments described herein may be implemented. SDDC 20 includes a cluster of hosts 240 (“host cluster 218”) that may be constructed on server-grade hardware platforms such as x86 architecture platforms. For purposes of clarity, only one host cluster 218 is shown. However, SDDC 20 can include many such host clusters 218. As shown, a hardware platform 222 of each host 240 includes conventional components of a computing device, such as one or more central processing units (CPUs) 260, system memory (e.g., random access memory (RAM) 262), one or more network interface controllers (NICs) 264, and optionally local storage 263. CPUs 260 are configured to execute instructions, for example, executable instructions that perform one or more operations described herein, which may be stored in RAM 262. NICs 264 enable host 240 to communicate with other devices through a physical network 280. Physical network 280 enables communication between hosts 240 and between other components and hosts 240 (other components discussed further herein). [0031] In the embodiment illustrated in Fig. 2, hosts 240 access shared storage 270 by using NICs 264 to connect to network 280. In another embodiment, each host 240 contains a host bus adapter (HBA) through which input/output operations (IOs) are sent to shared storage 270 over a separate network (e.g., a fibre channel (FC) network). Shared storage 270 includes one or more storage arrays, such as a storage area network (SAN), network attached storage (NAS), or the like. Shared storage 270 may comprise magnetic disks, solid-state disks, flash memory, and the like as well as combinations thereof. In some embodiments, hosts 240 include local storage 263 (e.g., hard disk drives, solid-state drives, etc.). Local storage 263 in each host 240 can be aggregated and provisioned as part of a virtual SAN (vSAN), which is another form of shared storage 270. [0032] A software platform 224 of each host 240 provides a virtualization layer, referred to herein as a hypervisor 228, which directly executes on hardware platform 222.
In an embodiment, there is no intervening software, such as a host operating system (OS), between hypervisor 228 and hardware platform 222. Thus, hypervisor 228 is a Type-1 hypervisor (also known as a “bare-metal” hypervisor). As a result, the virtualization layer in host cluster 218 (collectively hypervisors 228) is a bare-metal virtualization layer executing directly on host hardware platforms. Hypervisor 228 abstracts processor, memory, storage, and network resources of hardware platform 222 to provide a virtual machine execution space within which multiple virtual machines (VM) 236 may be concurrently instantiated and executed. One example of hypervisor 228 that may be configured and used in embodiments described herein is a VMware ESXi™ hypervisor provided as part of the VMware vSphere® solution made commercially available by VMware, Inc. of Palo Alto, CA. Applications and/or appliances 244 execute in VMs 236 and/or containers 238 (discussed below). [0033] Host cluster 218 is configured with a software-defined (SD) network layer 275. SD network layer 275 includes logical network services executing on virtualized infrastructure in host cluster 218. The virtualized infrastructure that supports the logical network services includes hypervisor-based components, such as resource pools, distributed switches, distributed switch port groups and uplinks, etc., as well as VM-based components, such as router control VMs, load balancer VMs, edge service VMs, etc. Logical network services include logical switches and logical routers, as well as logical firewalls, logical virtual private networks (VPNs), logical load balancers, and the like, implemented on top of the virtualized infrastructure. In embodiments, SDDC 20 includes edge transport nodes 278 that provide an interface of host cluster 218 to a wide area network (WAN) (e.g., a corporate network, the public Internet, etc.). [0034] VIM server appliance 230 is a physical or virtual server that manages host cluster 218 and the virtualization layer therein. VIM server appliance 230 installs agent(s) in hypervisor 228 to add a host 240 as a managed entity. VIM server appliance 230 logically groups hosts 240 into host cluster 218 to provide cluster-level functions to hosts 240, such as VM migration between hosts 240 (e.g., for load balancing), distributed power management, dynamic VM placement according to affinity and anti-affinity rules, and high-availability. The number of hosts 240 in host cluster 218 may be one or many. VIM server appliance 230 can manage more than one host cluster 218. [0035] In an embodiment, SDDC 20 further includes a network manager 212. Network manager 212 is a physical or virtual server that orchestrates SD network layer 275. In an embodiment, network manager 212 comprises one or more virtual servers deployed as VMs. Network manager 212 installs additional agents in hypervisor 228 to add a host 240 as a managed entity, referred to as a transport node. In this manner, host cluster 218 can be a cluster of transport nodes. One example of an SD networking platform that can be configured and used in embodiments described herein as network manager 212 and SD network layer 275 is a VMware NSX® platform made commercially available by VMware, Inc. of Palo Alto, CA. [0036] VIM server appliance 230 and network manager 212 comprise a virtual infrastructure (VI) control plane 213 of SDDC 20. VIM server appliance 230 can include various VI services 208.
VI services 208 include various virtualization management services, such as a distributed resource scheduler (DRS), high-availability (HA) service, single sign-on (SSO) service, virtualization management daemon, and the like. An SSO service, for example, can include a security token service, administration server, directory service, identity management service, and the like configured to implement an SSO platform for authenticating users. VI services 208 can further include LCM 235. VI services 208 store and manage state in data store 226. Data store 226 can be any type of database (e.g., PostgreSQL, MySQL, etc.). [0037] In embodiments, SDDC 20 can include a container orchestrator 277. Container orchestrator 277 implements an orchestration control plane, such as Kubernetes®, to deploy and manage applications or services thereof on host cluster 218 using containers 238. In embodiments, hypervisor 228 can support containers 238 executing directly thereon. In other embodiments, containers 238 are deployed in VMs 236 or in specialized VMs referred to as “pod VMs 242.” A pod VM 242 is a VM that includes a kernel and container engine that supports execution of containers, as well as an agent (referred to as a pod VM agent) that cooperates with a controller executing in hypervisor 228 (referred to as a pod VM controller). Container orchestrator 277 can include one or more master servers configured to command and configure pod VM controllers in host cluster 218. Master server(s) can be physical computers attached to network 280 or VMs 236 in host cluster 218. VMs 236 and containers 238 are generally referred to herein as “virtual computing instances.” [0038] Fig. 3 is a block diagram depicting VIM server appliance 230 during an upgrade operation according to embodiments. During an upgrade, VIM server appliance 230 executes and an upgraded VIM server appliance 230U is deployed and running but inactive. That is, VI services 208 are active and executing while VI services 208U are inactive. VI services 208U are upgraded with respect to VI services 208. In addition, a data store 226U in VIM server appliance 230U uses an upgraded schema with respect to the schema of data store 226. In embodiments, LCM 235 in VIM server appliance 230 is upgraded in-place such that each of VIM server appliance 230 and VIM server appliance 230U executes an upgraded LCM 235U. The instances of LCM 235U in VIM server appliances 230 and 230U are collectively referred to as the LCM software. Although the LCM software is shown as executing in the SDDC, in some embodiments, the LCM software can be executing remotely from the VIM server appliances 230, either within the SDDC or external to the SDDC. [0039] The LCM software is configured to perform the upgrade process, which includes the following phases: (1) the expand phase; (2) the switchover phase; and (3) the contract phase. During the expand phase, LCM 235U expands data store 226 to support both the deprecated and upgraded schemas, where the schema of data store 226 is referred to as the “deprecated schema” and the schema of data store 226U is referred to as the “upgraded schema.” VI services 208 continue to be active and execute while data store 226 is expanded and can continue to interface with data store 226, since the deprecated schema is still supported. After data store 226 is expanded, LCM 235U replicates data store 226 to data store 226U. During the switchover phase, LCM 235 deactivates VI services 208 and activates VI services 208U.
During the contract phase, LCM 235U contracts data store 226U to remove deprecated portions of the schema (e.g., after contraction, data store 226U supports only the upgraded schema). VI services 208U are active and execute during the contract phase. [0040] Fig. 4 is a block diagram depicting LCM 235U according to an embodiment. LCM 235U includes expand structured query language (SQL) code 402, contract SQL code 404, and revert SQL code 406. LCM 235U executes expand SQL code 402 to perform the expand phase of data store 226. LCM 235U executes contract SQL code 404 to perform the contract phase of data store 226U. Revert SQL code 406 includes reciprocal SQL code of expand SQL code 402. LCM 235U can execute revert SQL code 406 to perform a revert/rollback of the expansion of data store 226, in case of a failure for example. [0041] Fig. 5 is a flow diagram depicting a method 500 of upgrading an application according to an embodiment. In embodiments, the application is a VIM server appliance. Method 500 can be performed by LCM software executing in a VIM server appliance 230 and an upgraded VIM server appliance 230U. Method 500 begins at step 502, where the LCM software initiates an application upgrade for VIM server appliance 230. At step 504, the LCM software expands the database of the current application to support both deprecated and upgraded schemas (e.g., data store 226 of VIM server appliance 230). The schema of data store 226 defines various elements, such as tables, columns, fields, relationships, indexes, triggers, and the like. The LCM software can perform various operations to expand the database schema, including the following (described further below):

ALTER TABLE ADD COLUMN
ADD COLUMN WITH DEFAULT VALUE
ADD COMMENT
ALTER TABLE RENAME COLUMN
CREATE INDEX
CREATE TABLE
ALTER TABLE ALTER COLUMN SET DATA TYPE

[0042] The LCM software can perform some expand operations only within the expand phase, perform some expand operations in either the expand phase or the contract phase, and perform some expand operations across both the expand and contract phases. At step 506, the LCM software executes expand SQL code against the database of the current application to make one or more schema changes (e.g., expand SQL code 402 against data store 226). Some schema changes are backward compatible and/or nondisruptive schema changes (508). Other schema changes are backward incompatible and/or disruptive schema changes (510). For example, the ALTER TABLE ADD COLUMN operation adds a column to an existing table. The CREATE TABLE operation adds a table to the database. The ADD COLUMN WITH DEFAULT VALUE operation adds a column to an existing table and sets a default value for unpopulated rows. These are considered incremental changes that do not affect the compatibility of the state stored in the database (backward compatible changes). These changes are made only during the expand phase to ensure that the upgraded application operates as expected after switchover. Note that adding a table and adding a column that does not have a default value are also nondisruptive. Adding a column with a default value, however, requires that the table be locked while the default values are propagated for unpopulated rows. Such an operation is disruptive. An alternative approach to adding a column with a default value can be executed during the expand phase without disruption, as described below with respect to Fig. 6.
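By way of a non-limiting illustration, expand SQL code 402 and its reciprocal revert SQL code 406 might resemble the following PostgreSQL-style statements. The table and column names (cluster_settings, vms, boot_order) are hypothetical and are used only to show the additive, backward compatible nature of the expand phase and how the revert code undoes it:

    -- Expand SQL code (illustrative): additive, backward compatible changes only.
    CREATE TABLE cluster_settings (
        cluster_id  uuid PRIMARY KEY,
        drs_enabled boolean
    );
    ALTER TABLE vms ADD COLUMN boot_order integer;

    -- Revert SQL code (illustrative): reciprocal statements, executed only if the upgrade fails.
    ALTER TABLE vms DROP COLUMN boot_order;
    DROP TABLE cluster_settings;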
[0043] Fig. 6 is a flow diagram depicting a method of adding a column with a default value to a database according to an embodiment. Method 600 begins at step 602, where the LCM software adds a column to an existing table without the default value. Adding a column without a default value does not require the table to be locked and is a nondisruptive operation. At step 604, the LCM software updates column rows with the default values lazily without locking the table. At step 606, the LCM software alters the added column to set the default value for future added rows. [0044] Returning to Fig. 5, the ADD COMMENT operation, which adds a comment to a table or column, can be performed in either or both of the expand phase and the contract phase. The ALTER TABLE RENAME COLUMN operation modifies an existing table by renaming a column therein. This is an example of a schema operation that is performed across both the expand and contract phases, as the operation is backward incompatible (i.e., renaming a column would break compatibility with the current application). To maintain compatibility, a portion of the operation is performed in the expand phase and the remaining portion is performed in the contract phase. [0045] Fig. 7 is a flow diagram depicting a method 700 of renaming a column in a database according to an embodiment. Method 700 begins at step 702, where the LCM software adds a new column with the new name during the expand phase. At step 704, the LCM software copies the row data from the old column to the new column. At step 706, the LCM software adds database triggers during the expand phase to keep the old and new columns synchronized. Thus, while the current application adds rows or modifies rows of the existing column, those changes are propagated to the new column, of which the current application is unaware. At step 708, the LCM software removes the old column and the database triggers during the contract phase. This completes the operation of renaming a column in an existing table. [0046] The renaming of a table cannot be fully performed during the expand phase, as doing so would break compatibility. Such an operation, however, can be avoided entirely or achieved by splitting the operation over several statements. For example, the LCM software can create a new table, backfill the new table from the old table, and create database triggers to keep the old and new tables synchronized. The old table and the database triggers are removed during the contract phase. Thus, renaming a table is similar to renaming a column as described above. [0047] The operation ALTER TABLE ALTER COLUMN SET DATA TYPE can be used to change the data type of a column. This operation would break compatibility of the current application and should be avoided. Rather than explicitly changing the data type of a column, this operation can be performed by using the rename column approach in Fig. 7, but setting a different data type on the new column. The data can be transformed to the new data type during the copy operation from the old column to the new column (705).
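As a non-limiting sketch, methods 600 and 700 might be expressed in PostgreSQL-style SQL as shown below. The table names, column names, and trigger function (vms, power_state, hosts, hostname, host_name, sync_host_name) are hypothetical examples and not part of the described embodiments; the trigger syntax assumes PostgreSQL 11 or later:

    -- Method 600: add a column with a default value without locking the table.
    ALTER TABLE vms ADD COLUMN power_state text;                            -- step 602: add the column without a default
    UPDATE vms SET power_state = 'POWERED_OFF' WHERE power_state IS NULL;   -- step 604: lazy backfill (may be batched)
    ALTER TABLE vms ALTER COLUMN power_state SET DEFAULT 'POWERED_OFF';     -- step 606: default applies to future rows

    -- Method 700: rename column hostname to host_name across the expand and contract phases.
    -- Expand phase (steps 702, 704, and 706):
    ALTER TABLE hosts ADD COLUMN host_name text;
    UPDATE hosts SET host_name = hostname;
    CREATE FUNCTION sync_host_name() RETURNS trigger AS $$
    BEGIN
        NEW.host_name := NEW.hostname;   -- keep the new column in sync while the current application writes the old one
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    CREATE TRIGGER host_name_sync BEFORE INSERT OR UPDATE ON hosts
        FOR EACH ROW EXECUTE FUNCTION sync_host_name();
    -- Contract phase (step 708), executed against the upgraded application's database after switchover:
    DROP TRIGGER host_name_sync ON hosts;
    DROP FUNCTION sync_host_name();
    ALTER TABLE hosts DROP COLUMN hostname;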
[0048] Returning to Fig. 5, the operation CREATE INDEX creates an index over a table. The index creation operation can be performed in either the expand phase or the contract phase. To avoid any performance impact, however, index operations can be performed during the contract phase. The time taken to create an index over a table depends on the table size, where a larger row count takes more time. Both create and drop index operations require an exclusive lock on the table, meaning the application cannot access the table during the operation. In embodiments, during the contract phase, the LCM software can execute a create index concurrently operation that allows for parallel operations on the table. However, the operation must wait for all transactions using the table to complete before returning. [0050] Method 500 proceeds from step 504 to step 512. At step 512, the LCM software replicates the database from the current application to the upgraded application. For example, the LCM software can replicate data store 226 (as expanded) to data store 226U. At step 514, the LCM software executes a switchover from the current application to the upgraded application. After switchover, VI services 208 are inactive and VI services 208U are active and using data store 226U. Since the schema of data store 226U has been expanded prior to switchover, data store 226U is compatible with VI services 208U. [0051] At step 516, the LCM software contracts the database of the upgraded application to remove deprecated portions of the schema. In embodiments, the LCM software executes contract SQL code against the database of the upgraded application to make one or more schema changes (518). The schema changes can include decremental changes (520). Additionally, some schema changes can be the remaining portions of backward incompatible changes initiated during the expand phase (e.g., renaming a column) (522). The LCM software can perform various operations to contract the database schema, including the following (described further below):

DROP INDEX
DROP TABLE
ALTER TABLE DROP COLUMN
ALTER TABLE DROP DEFAULT VALUE
ALTER TABLE ADD CONSTRAINT UNIQUE
ALTER TABLE CHECK CONSTRAINT
ALTER TABLE ADD FOREIGN KEY CONSTRAINT
ALTER TABLE DROP CONSTRAINT

[0052] The operations DROP TABLE and ALTER TABLE DROP COLUMN drop a table or column, respectively, and are considered decremental changes. These changes are performed during the contract phase and are nondisruptive. The operation DROP INDEX removes an index from a table. The operation ALTER TABLE DROP DEFAULT VALUE removes a default value from a column. [0053] Database constraints can disturb application operations and require special handling. In embodiments, constraint operations are applied during the contract phase (524), as these operations can break compatibility for the old application. The new application must not violate the old constraints. Further, the old application's data might not be compatible with the newly enforced constraint. In such a case, constraint validation can fail and the LCM software prompts the user to fix the noncompliant data and retry the constraint operation. [0054] Adding a foreign key constraint requires a shared row exclusive lock on both the altered and referenced tables. The operation does not block SELECT queries but does block row modification operations, potentially for a long time. To avoid long-held locks, the operation for adding a foreign key can be split into two parts: (1) add the foreign key without activating it for the already inserted data (e.g., using a standard query for adding a foreign key but adding “NOT VALID” to the query); and (2) validate the constraint for the already inserted data using an ALTER TABLE VALIDATE CONSTRAINT operation. If the validation fails, the LCM software prompts the user to fix the noncompliant data and retry the constraint operation. [0055] The check constraint operation acquires an exclusive access lock and requires a full table scan, which is needed to verify that all the values meet the constraint condition. To avoid long-held locks, the LCM software can split the operation into two parts as in the foreign key approach discussed above. [0056] Adding a unique constraint acquires an exclusive access lock and requires a full table scan, which cannot be avoided and can significantly impact the duration of the lock. To avoid long-held locks, the LCM software can split the operation into two parts: (1) create a unique index concurrently; and (2) add the unique constraint using the new index, which requires an exclusive access lock for a short time (e.g., UNIQUE USING INDEX).
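For illustration only, the contract phase operations and the two-part constraint approaches described above might look like the following PostgreSQL-style statements; the table, column, index, and constraint names (vms, hosts, host_id, and so on) are hypothetical:

    -- Decremental contract changes (illustrative).
    DROP INDEX old_vms_name_idx;
    ALTER TABLE vms DROP COLUMN legacy_flags;
    DROP TABLE retired_settings;
    CREATE INDEX CONCURRENTLY vms_host_id_idx ON vms (host_id);   -- index creation deferred to the contract phase

    -- Foreign key constraint: add without validating existing rows, then validate separately.
    ALTER TABLE vms ADD CONSTRAINT vms_host_fk
        FOREIGN KEY (host_id) REFERENCES hosts (id) NOT VALID;
    ALTER TABLE vms VALIDATE CONSTRAINT vms_host_fk;

    -- Check constraint: the same two-part pattern.
    ALTER TABLE vms ADD CONSTRAINT vms_cpu_positive CHECK (cpu_count > 0) NOT VALID;
    ALTER TABLE vms VALIDATE CONSTRAINT vms_cpu_positive;

    -- Unique constraint: build the index concurrently, then attach it as a constraint.
    CREATE UNIQUE INDEX CONCURRENTLY vms_uuid_idx ON vms (uuid);
    ALTER TABLE vms ADD CONSTRAINT vms_uuid_unique UNIQUE USING INDEX vms_uuid_idx;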
[0057] One or more embodiments of the invention also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for required purposes, or the apparatus may be a general-purpose computer selectively activated or configured by a computer program stored in the computer. Various general-purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. [0058] The embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, etc. [0059] One or more embodiments of the present invention may be implemented as one or more computer programs or as one or more computer program modules embodied in computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology that embodies computer programs in a manner that enables a computer to read the programs. Examples of computer readable media are hard drives, NAS systems, read-only memory (ROM), RAM, compact disks (CDs), digital versatile disks (DVDs), magnetic tapes, and other optical and non-optical data storage devices. A computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion. [0060] Although one or more embodiments of the present invention have been described in some detail for clarity of understanding, certain changes may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation unless explicitly stated in the claims.
[0061] Virtualization systems in accordance with the various embodiments may be implemented as hosted embodiments, non-hosted embodiments, or as embodiments that blur distinctions between the two. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data. [0062] Many variations, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest OS that perform virtualization functions. [0063] Plural instances may be provided for components, operations, or structures described herein as a single instance. Boundaries between components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention. In general, structures and functionalities presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionalities presented as a single component may be implemented as separate components. These and other variations, additions, and improvements may fall within the scope of the appended claims.