

Title:
METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION
Document Type and Number:
WIPO Patent Application WO/2013/005208
Kind Code:
A1
Abstract:
A system for cloud-based data services for multiple enterprises, including a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each hypervisor including a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise, at least one cloud virtual disk that is read from and written to by the at least one virtual server, each cloud virtual disk being associated with an enterprise, and a virtual data services appliance, which provides cloud-based data services, and multiple data services managers, one data services manager per respective enterprise, each of which coordinates the respective virtual data services appliances for those cloud hypervisors that service its corresponding enterprise.

Inventors:
KEDEM ZIV (IL)
LEVONAI GIL (IL)
KUSZPET YAIR (IL)
BURSHAN CHEN (IL)
Application Number:
PCT/IL2012/000271
Publication Date:
January 10, 2013
Filing Date:
July 03, 2012
Assignee:
ZERTO LTD (IL)
KEDEM ZIV (IL)
LEVONAI GIL (IL)
KUSZPET YAIR (IL)
BURSHAN CHEN (IL)
International Classes:
G06F15/16
Foreign References:
US20100198972A1 (2010-08-05)
US20080195624A1 (2008-08-14)
US20040068561A1 (2004-04-08)
US7475207B2 (2009-01-06)
US20110022812A1 (2011-01-27)
US20100017801A1 (2010-01-21)
Attorney, Agent or Firm:
SANFORD T. COLB & CO. et al. (Rehovot, IL)
Claims:
CLAIMS

What is claimed is:

1. A cross-host multi-hypervisor system, comprising:

a plurality of host sites, each site comprising:

at least one hypervisor, each of which comprises:

at least one virtual server;

at least one virtual disk that is read from and written to by said at least one virtual server;

a tapping driver in communication with said at least one virtual server, which intercepts write requests made by any one of said at least one virtual server to any one of said at least one virtual disk; and

a virtual data services appliance, in communication with said tapping driver, which receives the intercepted write requests from said tapping driver, and which provides data services based thereon; and

a data services manager for coordinating the virtual data services appliances at the site; and

a network for communicatively coupling said plurality of sites, wherein said data services managers coordinate data transfer across said plurality of sites via said network.

2. The system of claim 1 wherein at least one of said virtual data services appliances at a first of said plurality of host sites transmits intercepted write requests to at least one of said virtual data services appliances at a second of said plurality of host sites, via said network.

3. The system of claim 2 wherein the at least one of said virtual data services appliances at the second site periodically applies the intercepted write requests to at least one of said virtual disks at the second site.

4. The system of claim 1 wherein said data services managers pair a group of at least one of said virtual servers at one or more first ones of said plurality of host sites with a corresponding group of at least one of said virtual servers at one or more second ones of said plurality of host sites, as source and target virtual protection groups.

5. The system of claim 4 wherein said virtual data services appliance at each hypervisor at the one or more first sites preserves write order fidelity for the write requests intercepted from virtual servers in the source virtual protection group at the hypervisor.

6. The system of claim 5 wherein said virtual data services appliances at the hypervisors at the one or more first sites transmit, via said network, the write requests intercepted from virtual servers in the source virtual protection group, to said virtual data services appliances in the hypervisors at the one or more second sites that include virtual servers in the target virtual protection group.

7. The system of claim 6 wherein said virtual data services appliances in the hypervisors at the one or more second sites that include virtual servers in the target virtual protection group periodically apply the intercepted write requests to at least one of said virtual disks in the hypervisors at the one or more second sites.

8. The system of claim 4 wherein the virtual servers in the source virtual protection group belong to the same hypervisor.

9. The system of claim 4 wherein the virtual servers in the source virtual protection group belong to different hypervisors at the same first site.

10. The system of claim 4 wherein the virtual servers in the source virtual protection group belong to hypervisors at different first sites.

11. The system of claim 4 wherein the virtual servers in the target virtual protection group belong to the same hypervisor.

12. The system of claim 4 wherein the virtual servers in the target virtual protection group belong to different hypervisors at the same second site.

13. The system of claim 4 wherein the virtual servers in the target virtual protection group belong to hypervisors at different second sites.

14. The system of claim 4 wherein the source virtual protection group comprises the same number of virtual servers as the target virtual protection group.

15. The system of claim 4 wherein the source virtual protection group comprises a different number of virtual servers than the target virtual protection group.

16. The system of claim 4 wherein the source virtual protection group spans the same number of hypervisors as does the target virtual protection group.

17. The system of claim 4 wherein the source virtual protection group spans a different number of hypervisors than does the target virtual protection group.

18. The system of claim 4 wherein said data services managers provide data recovery for the virtual servers in the source virtual protection group from the virtual servers in the target virtual protection group.

19. The system of claim 18 wherein said data services managers also provide data recovery for the virtual servers in the target virtual protection group from the virtual servers in the source virtual protection group.

20. The system of claim 1 wherein said data services manager at each site monitors environmental changes, including movement of a virtual server from one hypervisor to another, movement of a virtual disk from one hypervisor to another, and addition of a virtual server to a hypervisor.

21. A system for cloud-based data services for multiple enterprises, comprising:

a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each hypervisor comprising:

a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise;

at least one cloud virtual disk that is read from and written to by said at least one virtual server, each cloud virtual disk being associated with an enterprise; and

a virtual data services appliance, which provides cloud-based data services; and

multiple data services managers, one data services manager per respective enterprise, each of which coordinates the respective virtual data services appliances for those cloud hypervisors that service its corresponding enterprise.


22. The system of claim 21 wherein each enterprise comprises one or more virtual servers, wherein each of said data services managers designates a group of one or more of said cloud virtual servers as being a target virtual protection group for one or more virtual servers of its corresponding enterprise, and wherein the virtual data services appliance of each of said cloud hypervisors that comprises a cloud virtual data server belonging to a target virtual protection group of an enterprise, receives copies of write requests issued from one or more of the virtual servers of the enterprise.

23. The system of claim 22 wherein the virtual data services appliance of each of said cloud hypervisors that comprises a cloud virtual data server belonging to a target virtual protection group of an enterprise, periodically applies the copies of write requests to one or more of said cloud virtual disks in the cloud hypervisor that are associated with the enterprise.

24. The system of claim 22 wherein the target virtual protection group of an enterprise comprises cloud virtual servers from the same one of said cloud hypervisors.

25. The system of claim 22 wherein the target virtual protection group of an enterprise comprises cloud virtual servers from more than one of said cloud hypervisors.

26. The system of claim 22 wherein at least one of said cloud hypervisors comprises at least one cloud virtual server that belongs to a target virtual protection group for a first enterprise, and at least one cloud virtual server that belongs to a virtual protection group for a second enterprise.

27. The system of claim 22 wherein each of said data services managers enables data recovery from the virtual servers of its enterprise to its enterprise target virtual protection group.

28. The system of claim 22 wherein each of said cloud hypervisors further comprises a tapping driver in communication with its said at least one cloud virtual server, wherein said tapping driver intercepts I/O requests made by any one of its said at least one cloud virtual server to any one of its said at least one cloud virtual disk, and wherein its said virtual data services appliance receives the intercepted I/O requests from said tapping driver, and provides its cloud-based data services therefrom.

29. The system of claim 28 wherein each of said data services managers enables data recovery from its enterprise target virtual protection group to the virtual servers of its enterprise.

30. A system for cloud-based data services for multiple enterprises, comprising:

a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each hypervisor comprising:

a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise;

at least one cloud virtual disk that is read from and written to by said at least one virtual server, each cloud virtual disk being associated with an enterprise; and at least one virtual data services appliance, one virtual data services appliance per respective enterprise serviced by the hypervisor, which provides cloud- based data services to its respective enterprise; and

multiple data services managers, one data services manager per respective enterprise, each of which coordinates the virtual data services appliances that service its respective enterprise.

31. The system of claim 30 wherein each enterprise comprises one or more virtual servers, wherein each of said data services managers designates a group of one or more of said cloud virtual servers as being a target virtual protection group for one or more virtual servers of its corresponding enterprise, and wherein each one of the virtual data services appliances for an enterprise, that resides within a cloud hypervisor that comprises a cloud virtual data server belonging to the target virtual protection group of the enterprise, receives copies of write requests issued from one or more of the virtual servers of its enterprise.

32. The system of claim 31 wherein each one of the virtual data services appliances for the enterprise, that resides within a cloud hypervisor that comprises a cloud virtual data server belonging to the target virtual protection group of the enterprise, periodically applies the copies of write requests to one or more of said cloud virtual disks in the cloud hypervisor that are associated with the enterprise.

33. The system of claim 31 wherein the target virtual protection group of an enterprise comprises cloud virtual servers from the same one of said cloud hypervisors.

34. The system of claim 31 wherein the target virtual protection group of an enterprise comprises cloud virtual servers from more than one of said cloud hypervisors.

35. The system of claim 31 wherein at least one of said cloud hypervisors comprises at least one cloud virtual server that belongs to a target virtual protection group for a first enterprise, and at least one cloud virtual server that belongs to a virtual protection group for a second enterprise.

36. The system of claim 31 wherein each of said data services managers enables data recovery from the virtual servers of its enterprise to its enterprise target virtual protection group.

37. The system of claim 31 wherein each of said cloud hypervisors further comprises a tapping driver in communication with said at least one virtual server, which intercepts I/O requests made by any one of its said at least one virtual server to any one of its said at least one virtual disk, and wherein its said at least one virtual data services appliance receives the intercepted I/O requests from said tapping driver, and provides its cloud-based data services therefrom.

38. The system of claim 37 wherein each of said data services managers enables data recovery from its enterprise target virtual protection group to the virtual servers of its enterprise.

39. A system for cloud-based data services for multiple enterprises, comprising:

a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each enterprise being located in a corresponding jurisdiction, and each hypervisor being associated with a corresponding jurisdiction, wherein each hypervisor comprises:

a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise; and

at least one cloud virtual disk that is read from and written to by said at least one virtual server, each cloud virtual disk being associated with an enterprise; and

a rights manager that restricts access of an enterprise to a cloud hypervisor associated with the enterprise if there is a restriction to export data between the jurisdiction associated with the enterprise and the jurisdiction associated with the cloud hypervisor.

40. The system of claim 39 further comprising a user interface for specifying pairs of jurisdictions that are restricted from transferring data therebetween, and wherein said rights manager restricts access based on the pairs of jurisdictions specified via said user interface.

Description:
METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION

FIELD OF THE INVENTION

The present invention relates to virtual server computing environments.

BACKGROUND OF THE INVENTION

Data center virtualization technologies are now well adopted into information technology infrastructures. As more and more applications are deployed in a virtualized infrastructure, there is a growing need for recovery mechanisms to support mission critical application deployment, while providing complete business continuity and disaster recovery.

Virtual servers are logical entities that run as software in a server virtualization infrastructure, referred to as a "hypervisor". Examples of hypervisors are VMWARE® ESX manufactured by VMware, Inc. of Palo Alto, CA, HyperV manufactured by Microsoft Corporation of Redmond, WA, XENSERVER® manufactured by Citrix Systems, Inc. of Fort Lauderdale, FL, Redhat KVM manufactured by Redhat, Inc. of Raleigh, NC, and Oracle VM manufactured by Oracle Corporation of Redwood Shores, CA. A hypervisor provides storage device emulation, referred to as "virtual disks", to virtual servers. A hypervisor implements virtual disks using back-end technologies such as files on a dedicated file system, or raw mapping to physical devices.

As distinct from physical servers that run on hardware, virtual servers run their operating systems within an emulation layer that is provided by a hypervisor. Although virtual servers are software, nevertheless they perform the same tasks as physical servers, including running server applications such as database applications, customer relationship management applications and MICROSOFT EXCHANGE SERVER®. Most applications that run on physical servers are portable to run on virtual servers. As distinct from virtual desktops that run client side applications and service individual users, virtual servers run applications that service a large number of clients. As such, virtual servers depend critically on data services for their availability, security, mobility and compliance requirements. Data services include inter alia continuous data protection, disaster recovery, remote replication, data security, mobility, and data retention and archiving policies.

Conventional replication and disaster recovery systems were not designed to deal with the demands created by the virtualization paradigm. Most conventional replication systems are not implemented at the hypervisor level, with the virtual servers and virtual disks, but instead are implemented at the physical disk level. As such, these conventional systems are not fully virtualization-aware. In turn, the lack of virtualization awareness creates an operational and administrative burden, and a certain degree of inflexibility.

It would thus be of advantage to have data services that are fully virtualization-aware.

SUMMARY OF THE DESCRIPTION

Aspects of the present invention relate to a dedicated virtual data service appliance (VDSA) within a hypervisor that can provide a variety of data services. Data services provided by the VDSA include inter alia replication, monitoring and quality of service. The VDSA is fully application-aware.

In an embodiment of the present invention, a tapping filter driver is installed within the hypervisor kernel. The tapping driver has visibility to I/O requests made by virtual servers running on the hypervisor.

A VDSA runs on each physical hypervisor. The VDSA is a dedicated virtual server that provides data services; however, the VDSA does not necessarily reside in the actual I/O data path. When a data service processes I/O asynchronously, the VDSA receives the data outside the data path.

Whenever a virtual server performs I/O to a virtual disk, the tapping driver identifies the I/O requests to the virtual disk. The tapping driver copies the I/O requests, forwards one copy to the hypervisor's backend, and forwards another copy to the VDSA.
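As a rough illustration of this copy-and-forward step, consider the following C# sketch. Every type name in it (WriteRequest, IIoBackend, IVdsa, TappingDriver) is an assumption introduced for illustration; the patent describes the mechanism only in prose.

// Illustrative sketch only -- these types are assumptions, not identifiers from the patent.
public record WriteRequest(string VirtualDisk, long Offset, byte[] Data);

public interface IIoBackend { void Submit(WriteRequest request); }  // conventional path to the virtual disk
public interface IVdsa { void Receive(WriteRequest request); }      // dedicated data-services virtual server

public class TappingDriver
{
    private readonly IIoBackend _backend;
    private readonly IVdsa _vdsa;

    public TappingDriver(IIoBackend backend, IVdsa vdsa)
    {
        _backend = backend;
        _vdsa = vdsa;
    }

    // Invoked for every write a virtual server makes to a virtual disk.
    public void OnWrite(WriteRequest request)
    {
        _backend.Submit(request);        // one copy continues along the conventional path
        _vdsa.Receive(request with { }); // a second copy goes to the VDSA, outside the data path
    }
}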

Upon receiving an I/O request, the VDSA performs a set of actions to enable various data services. A first action is data analysis, to analyze the data content of the I/O request and to infer information regarding the virtual server's data state. E.g., the VDSA may infer the operating system level and the status of the virtual server. This information is subsequently used for reporting and policy purposes.

A second action, optionally performed by the VDSA, is to store each I/O write request in a dedicated virtual disk for journaling. Since all I/O write requests are journaled on this virtual disk, the virtual disk enables recovery data services for the virtual server, such as restoring the virtual server to an historical image.

A third action, optionally performed by the VDSA, is to send I/O write requests to different VDSAs, residing on hypervisors located at different locations, thus enabling disaster recovery data services.

The hypervisor architecture of the present invention scales to multiple host sites, each of which hosts multiple hypervisors. The scaling flexibly allows for different numbers of hypervisors at different sites, and different numbers of virtual servers and virtual disks within different hypervisors. Each hypervisor includes a VDSA, and each site includes a data services manager to coordinate the VDSAs at the site, and across other sites.

Embodiments of the present invention enable flexibly designating one or more virtual servers within one or more hypervisors at a site as being a virtual protection group, and flexibly designating one or more hypervisors, or alternatively one or more virtual servers within one or more hypervisors at another site as being a replication target for the virtual protection group. Write order fidelity is maintained for virtual protection groups. A site may comprise any number of source and target virtual protection groups. A virtual protection group may have more than one replication target. The number of hypervisors and virtual servers within a virtual protection group and its replication target are not required to be the same.

The hypervisor architecture of the present invention may be used to provide cloud-based hypervisor level data services to multiple enterprises on a shared physical infrastructure, while maintaining control and data path separation between enterprises for security.

The present invention provides bi-directional cloud-based data replication services; i.e., from the enterprise to the cloud, and from the cloud to the enterprise. Moreover, replication targets may be assigned to a pool of resources that do not expose the enterprise infrastructure, thus providing an additional layer of security and privacy between enterprises that share a target physical infrastructure.

The cloud-based data replication services of the present invention support enforcement of data export regulations. As such, data transfer between a source and a destination is automatically restricted if data export regulations restrict data transfer between the corresponding jurisdictions of the source and the destination.

There is thus provided in accordance with an embodiment of the present invention a cross-host multi-hypervisor system, including a plurality of host sites, each site including at least one hypervisor, each of which includes at least one virtual server, at least one virtual disk that is read from and written to by the at least one virtual server, a tapping driver in communication with the at least one virtual server, which intercepts write requests made by any one of the at least one virtual server to any one of the at least one virtual disk, and a virtual data services appliance, in communication with the tapping driver, which receives the intercepted write requests from the tapping driver, and which provides data services based thereon, and a data services manager for coordinating the virtual data services appliances at the site, and a network for communicatively coupling the plurality of sites, wherein the data services managers coordinate data transfer across the plurality of sites via the network.

There is additionally provided in accordance with an embodiment of the present invention a system for cloud-based data services for multiple enterprises, including a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each hypervisor including a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise, at least one cloud virtual disk that is read from and written to by the at least one virtual server, each cloud virtual disk being associated with an enterprise, and a virtual data services appliance, which provides cloud-based data services, and multiple data services managers, one data services manager per respective enterprise, each of which coordinates the respective virtual data services appliances for those cloud hypervisors that service its corresponding enterprise.

There is further provided in accordance with an embodiment of the present invention a system for cloud-based data services for multiple enterprises, including a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each hypervisor including a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise, at least one cloud virtual disk that is read from and written to by the at least one virtual server, each cloud virtual disk being associated with an enterprise, and at least one virtual data services appliance, one virtual data services appliance per respective enterprise serviced by the hypervisor, which provides cloud-based data services to its respective enterprise, and multiple data services managers, one data services manager per respective enterprise, each of which coordinates the virtual data services appliances that service its respective enterprise.

There is yet further provided in accordance with an embodiment of the present invention a system for cloud-based data services for multiple enterprises, including a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each enterprise being located in a corresponding jurisdiction, and each hypervisor being associated with a corresponding jurisdiction, wherein each hypervisor includes a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise, and at least one cloud virtual disk that is read from and written to by the at least one virtual server, each cloud virtual disk being associated with an enterprise, and a rights manager that restricts access of an enterprise to a cloud hypervisor associated with the enterprise if there is a restriction to export data between the jurisdiction associated with the enterprise and the jurisdiction associated with the cloud hypervisor.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:

FIG. 1 is a simplified block diagram of a hypervisor architecture that includes a tapping driver and a virtual data services appliance, in accordance with an embodiment of the present invention;

FIG. 2 is a simplified data flow chart for a virtual data services appliance, in accordance with an embodiment of the present invention;

FIG. 3 is a simplified block diagram of a virtual replication system, in accordance with an embodiment of the present invention;

FIG. 4 is a simplified block diagram of a cross-host multiple hypervisor system that includes data services managers for multiple sites that have multiple hypervisors, in accordance with an embodiment of the present invention;

FIG. 5 is a user interface screenshot of bi-directional replication of virtual protection groups, in accordance with an embodiment of the present invention;

FIG. 6 is a user interface screenshot of assignment of a replication target for a virtual protection group, in accordance with an embodiment of the present invention;

FIG. 7 is an example of an environment for the system of FIG. 4, in accordance with an embodiment of the present invention;

FIG. 8 is a simplified block diagram of a system for multi-tenant and multi-site cloud-based data services, in accordance with an embodiment of the present invention;

FIG. 9 is a simplified block diagram of a first configuration for providing hypervisor level multi-tenant cloud-based data services, in accordance with an embodiment of the present invention;

FIG. 10 is a simplified block diagram of a second configuration for providing hypervisor level multi-tenant cloud-based data services, in accordance with an embodiment of the present invention; and

FIG. 11 is a simplified block diagram of a system for multi-tenant and multi-site cloud-based data services with jurisdictional data separation, in accordance with an embodiment of the present invention.

LIST OF APPENDICES

Appendix I is an application programming interface for virtual replication site controller web services, in accordance with an embodiment of the present invention;

Appendix II is an application programming interface for virtual replication host controller web services, in accordance with an embodiment of the present invention;

Appendix III is an application programming interface for virtual replication protection group controller web services, in accordance with an embodiment of the present invention;

Appendix IV is an application programming interface for virtual replication command tracker web services, in accordance with an embodiment of the present invention; and

Appendix V is an application programming interface for virtual replication log collector web services, in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION

Aspects of the present invention relate to a dedicated virtual data services appliance (VDSA) within a hypervisor, which is used to provide a variety of hypervisor data services. Data services provided by a VDSA include inter alia replication, monitoring and quality of service.

Reference is made to FIG. 1, which is a simplified block diagram of a hypervisor architecture that includes a tapping driver and a VDSA, in accordance with an embodiment of the present invention. Shown in FIG. 1 is a hypervisor 100 with three virtual servers 110, three virtual disks 120, an I/O backend 130 and a physical storage array 140. Hypervisor 100 uses a single physical server, but runs multiple virtual servers 110. Virtual disks 120 are a storage emulation layer that provides storage for virtual servers 110. Virtual disks 120 are implemented by hypervisor 100 via I/O backend 130, which connects to physical disk 140.

Hypervisor 100 also includes a tapping driver 150 installed within the hypervisor kernel. As shown in FIG. 1, tapping driver 150 resides in a software layer between virtual servers 110 and virtual disks 120. As such, tapping driver 150 is able to access I/O requests performed by virtual servers 110 on virtual disks 120. Tapping driver 150 has visibility to I/O requests made by virtual servers 110.

Hypervisor 100 also includes a VDSA 160. In accordance with an embodiment of the present invention, a VDSA 160 runs on a separate virtual server within each physical hypervisor. VDSA 160 is a dedicated virtual server that provides data services via one or more data services engines 170. However, VDSA 160 does not reside in the actual I/O data path between I/O backend 130 and physical disk 140. Instead, VDSA 160 resides in a virtual I/O data path.

Whenever a virtual server 110 performs I/O on a virtual disk 120, tapping driver 150 identifies the I/O requests that the virtual server makes. Tapping driver 150 copies the I/O requests, forwards one copy via the conventional path to I/O backend 130, and forwards another copy to VDSA 160. In turn, VDSA 160 enables the one or more data services engines 170 to provide data services based on these I/O requests.

Reference is made to FIG. 2, which is a simplified data flow chart for a VDSA, in accordance with an embodiment of the present invention. Shown in FIG. 2 are an I/O receiver 210, a hash generator 220, a TCP transmitter 230, a data analyzer and reporter 240, a journal manager 250 and a remote VDSA 260. Remote VDSA 260 resides on different physical hardware, at a possibly different location.

As shown in FIG. 2, I/O receiver 210 receives an intercepted I/O request from tapping driver 150. VDSA 160 makes up to three copies of the received I/O requests, in order to perform a set of actions which enable the one or more data services engines 170 to provide various services.

A first copy is stored in persistent storage, and used to provide continuous data protection. Specifically, VDSA 160 sends the first copy to journal manager 250, for storage in a dedicated virtual disk 270. Since all I/O requests are journaled on virtual disk 270, journal manager 250 provides recovery data services for virtual servers 110, such as restoring virtual servers 110 to an historical image. In order to conserve disk space, hash generator 220 derives a one-way hash from the I/O requests. Use of a hash ensures that only a single copy of any I/O request data is stored on disk.
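By way of illustration, the hash-based deduplication can be sketched as follows. The choice of SHA-256 and the in-memory layout are assumptions; the text states only that a one-way hash is derived from the I/O requests so that any given write data is stored on disk once.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Sketch of journaling with hash-based deduplication; the data structures are illustrative assumptions.
public class DedupJournal
{
    // Journal entries reference write data by hash rather than by value.
    private readonly List<(long Sequence, string Hash)> _entries = new();
    // Each distinct block of write data is stored exactly once, keyed by its hash.
    private readonly Dictionary<string, byte[]> _blocks = new();
    private long _nextSequence;

    public void Append(byte[] writeData)
    {
        string hash = Convert.ToHexString(SHA256.HashData(writeData));
        if (!_blocks.ContainsKey(hash))  // only store the payload if this exact data is unseen
            _blocks[hash] = writeData;
        _entries.Add((_nextSequence++, hash));
    }
}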

An optional second copy is used for disaster recovery. It is sent via TCP transmitter 230 to remote VDSA 260. As such, access to all data is ensured even when the production hardware is not available, thus enabling disaster recovery data services.

An optional third copy is sent to data analyzer and reporter 240, which generates a report with information about the content of the data. Data analyzer and reporter 240 analyzes data content of the I/O requests and infers information regarding the data state of virtual servers 110. E.g., data analyzer and reporter 240 may infer the operating system level and the status of a virtual server 110.

Reference is made to FIG. 3, which is a simplified block diagram of a virtual replication system, in accordance with an embodiment of the present invention. Shown in FIG. 3 is a protected site designated Site A, and a recovery site designated Site B. Site A includes a hypervisor 100A with three virtual servers 110A-1, 110A-2 and 110A-3, and a VDSA 160A. Site A includes two physical disks 140A-1 and 140A-2. Site B includes a hypervisor 100B with a VDSA 160B. Site B includes two physical disks 140B-1 and 140B-2. All or some of virtual servers 110A-1, 110A-2 and 110A-3 may be designated as protected. Once a virtual server is designated as protected, all changes made on the virtual server are replicated at the recovery site.

In accordance with an embodiment of the present invention, every write command from a protected virtual server in hypervisor 100A is intercepted by tapping driver 150 (FIG. 1) and sent asynchronously by VDSA 160A to VDSA 160B for replication, via a wide area network (WAN) 320, while the write command continues to be processed by the protected server.

At Site B, the write command is passed to a journal manager 250 (FIG. 2), for journaling on a Site B virtual disk 270 (FIG. 2). After every few seconds, a checkpoint is written to the Site B journal, and during a recovery one of the checkpoints may be selected for recovering to that point. Additionally, checkpoints may be manually added to the Site B journal by an administrator, along with a description of the checkpoint. E.g., a checkpoint may be added immediately prior to an event taking place that may result in the need to perform a recovery, such as a planned switch over to an emergency generator.
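A minimal sketch of how such checkpoints might be recorded alongside the journal follows; the types and fields are illustrative assumptions rather than structures defined in the patent.

using System;
using System.Collections.Generic;

// Sketch of periodic and manual checkpoints in the recovery-site journal.
public class CheckpointJournal
{
    public record Checkpoint(long JournalPosition, DateTime Time, string Description);

    private readonly List<Checkpoint> _checkpoints = new();
    private long _position; // advances as each write command is journaled

    public void JournalWrite(byte[] writeCommand) => _position++; // journaling itself elided

    // Written automatically after every few seconds.
    public void AddPeriodicCheckpoint() =>
        _checkpoints.Add(new Checkpoint(_position, DateTime.UtcNow, "periodic"));

    // Added manually by an administrator, e.g. before a planned switch over to an emergency generator.
    public void AddManualCheckpoint(string description) =>
        _checkpoints.Add(new Checkpoint(_position, DateTime.UtcNow, description));

    // During a recovery, one of these checkpoints is selected as the point to recover to.
    public IReadOnlyList<Checkpoint> Checkpoints => _checkpoints;
}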

In addition to write commands being written to the Site B journal, mirrors 110B-1, 110B-2 and 110B-3 of the respective protected virtual servers 110A-1, 110A-2 and 110A-3 at Site A are created at Site B. The mirrors at Site B are updated at each checkpoint, so that they are mirrors of the corresponding virtual servers at Site A at the point of the last checkpoint. During a failover, an administrator can specify that he wants to recover the virtual servers using the latest data sent from Site A. Alternatively the administrator can specify an earlier checkpoint, in which case the mirrors on the virtual servers 110B-1, 110B-2 and 110B-3 are rolled back to the earlier checkpoint, and then the virtual servers are recovered to Site B. As such, the administrator can recover the environment to the point before any corruption, such as a crash or a virus, occurred, and ignore the write commands in the journal that were corrupted.

VDSAs 160A and 160B ensure write order fidelity; i.e., data at Site B is maintained in the same sequence as it was written at Site A. Write commands are kept in sequence by assigning a timestamp or a sequence number to each write at Site A. The write commands are sequenced at Site A, then transmitted to Site B asynchronously, then reordered at Site B to the proper time sequence, and then written to the Site B journal.
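The sequencing-and-reordering step can be sketched as follows; the type names are assumptions, and a timestamp could serve in place of the sequence number, as the text notes.

using System.Collections.Generic;

// Sketch of write-order fidelity: writes are stamped with a sequence number at
// Site A, may arrive out of order at Site B, and are released to the journal
// only in their original order.
public record SequencedWrite(long Sequence, byte[] Data);

public class WriteReorderer
{
    private readonly SortedDictionary<long, SequencedWrite> _pending = new();
    private long _nextExpected;

    // Called as writes arrive asynchronously over the WAN; returns the writes
    // (if any) that are now ready to be journaled, in sequence order.
    public List<SequencedWrite> Accept(SequencedWrite write)
    {
        _pending[write.Sequence] = write;
        var releasable = new List<SequencedWrite>();
        while (_pending.TryGetValue(_nextExpected, out var next))
        {
            _pending.Remove(_nextExpected);
            _nextExpected++;
            releasable.Add(next);
        }
        return releasable;
    }
}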

The journal file is cyclic; i.e., after a pre-designated time period, the earliest entries in the journal are overwritten by the newest entries.

It will be appreciated by those skilled in the art that the virtual replication appliance of the present invention operates at the hypervisor level, and thus obviates the need to consider physical disks. In distinction, conventional replication systems operate at the physical disk level. Embodiments of the present invention recover write commands at the application level. Conventional replication systems recover write commands at the SCSI level. As such, conventional replication systems are not fully application-aware, whereas embodiments of the present invention are fully application-aware, and replicate write commands from an application in a consistent manner.
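The cyclic journal file described above can be sketched as a time-bounded queue; the retention mechanism shown here (pruning rather than in-place overwriting) is a simplifying assumption.

using System;
using System.Collections.Generic;

// Sketch of a cyclic journal: entries older than a pre-designated time period
// make room for the newest entries.
public class CyclicJournal
{
    private readonly TimeSpan _retention;
    private readonly Queue<(DateTime Time, byte[] Entry)> _entries = new();

    public CyclicJournal(TimeSpan retention) => _retention = retention;

    public void Append(byte[] entry)
    {
        _entries.Enqueue((DateTime.UtcNow, entry));
        // Drop the earliest entries once they fall outside the retention window.
        while (_entries.Count > 0 && DateTime.UtcNow - _entries.Peek().Time > _retention)
            _entries.Dequeue();
    }
}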

The present invention offers many advantages.

• Hardware Agnostic: Because VDSA 160 manages recovery of virtual servers and virtual disks, it is not tied to specific hardware that is used at the protected site or at the recovery site. The hardware may be from the same vendor, or from different vendors. As long as the storage device supports the iSCSI protocol, any storage device, known today or to be developed in the future, can be used.

• Fully Scalable: Because VDSA 160 resides in the hypervisor level, architectures of the present invention scale to multiple sites having multiple hypervisors, as described hereinbelow with reference to FIG. 4.

• Efficient Asynchronous Replication: Write commands are captured by VDSA 160 before they are written to a physical disk at the protected site. The write commands are sent to the recovery site asynchronously, and thus avoid long distance replication latency. Moreover, only delta changes are sent to the recovery site, and not a whole file or disk, which reduces the network traffic, thereby reducing WAN requirements and improving recovery time objective and recovery point objective.

• Control of Recovery: An administrator controls when a recovery is initiated, and to what point in time it recovers.

• Near-Zero Recovery Point Objective (RPO): VDSA 160 continuously protects data, sending a record of every write command transacted at the protected site to the recovery site. As such, recovery may be performed within a requested RPO.

• Near-Zero Recovery Time Objective (RTO): During recovery the mirrors of the protected virtual servers are recovered at the recovery site from VDSA 160B, and synchronized to a requested checkpoint. In accordance with an embodiment of the present invention, during synchronization and while the virtual servers at the recovery site are not yet fully synchronized, users can nevertheless access the virtual servers at the recovery site. Each user request to a virtual server is analyzed, and a response is returned either from the virtual server directly, or from the journal if the information in the journal is more up-to-date. Such analysis of user requests continues until the recovery site virtual environment is fully synchronized. (A sketch of this journal-aware read path appears after this list.)

• WAN Optimization between Protected and Recovery Sites: In accordance with an embodiment of the present invention, write commands are compressed before being sent from VDSA 160A to VDSA 160B, with throttling used to prioritize network traffic. As such, communication between the protected site and the recovery site is optimized.

• WAN Failover Resilience: In accordance with an embodiment of the present invention, data is cached prior to being transmitted to the recovery site. If WAN 320 goes down, the cached data is saved and, as soon as WAN 320 comes up again, the data is sent to the recovery site and both sites are re-synchronized.

• Single Point of Control: In accordance with an embodiment of the present invention, both the protected and the recovery site are managed from the same client console.

• Secure Multi-Tenant Solution on Shared Hardware: The present invention enables multiple enterprises to use shared hardware at a recovery site for data replication, while maintaining security between the enterprises, as described hereinbelow with reference to FIGS. 9 and 10.
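To make the journal-aware read path described under the Near-Zero RTO advantage above concrete, here is a minimal sketch; the IMirror and IJournal interfaces are assumptions, as the patent does not specify these types.

// Sketch of serving user requests while the recovery site is still synchronizing:
// answer from the journal when it holds information more up-to-date than the mirror.
public interface IMirror { byte[] Read(long offset, int length); }
public interface IJournal { bool TryRead(long offset, int length, out byte[] data); }

public class RecoveryReadPath
{
    private readonly IMirror _mirror;   // partially synchronized virtual server image
    private readonly IJournal _journal; // holds writes not yet applied to the mirror

    public RecoveryReadPath(IMirror mirror, IJournal journal)
    {
        _mirror = mirror;
        _journal = journal;
    }

    public byte[] Read(long offset, int length) =>
        _journal.TryRead(offset, length, out var data)
            ? data                          // the journal is more up-to-date for this range
            : _mirror.Read(offset, length); // otherwise answer from the virtual server directly
}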

As indicated hereinabove, the architecture of FIG. 1 scales to multiple sites having multiple hypervisors. Reference is made to FIG. 4, which is a simplified block diagram of a cross-host multiple hypervisor system 300 that includes data services managers for multiple sites that have multiple hypervisors, in accordance with an embodiment of the present invention. The architecture of FIG. 4 includes three sites, designated Site A, Site B and Site C, the three sites being communicatively coupled via a network 320. Each site includes one or more hypervisors 100. Specifically, Site A includes three hypervisors, 100A/1, 100A/2 and 100A/3, Site B includes two hypervisors, 100B/1 and 100B/2, and Site C includes one hypervisor 100C/1. The sites have respective one or more physical disks 140A, 140B and 140C.

The hypervisors are shown in system 300 with their respective VDSAs 160A/1, 160A/2, ..., and the other components of the hypervisors, such as the virtual servers 110 and virtual disks 120, are not shown for the sake of clarity. An example system with virtual servers 110 is shown in FIG. 7, and described hereinbelow.

The sites include respective data services managers 310A, 310B and 310C that coordinate hypervisors in the sites, and coordinate hypervisors across the sites.

The system of FIG. 4 may be used for data replication, whereby data at one site is replicated at one or more other sites, for protection. The solid communication lines 330 in FIG. 4 are used for in-site traffic, the dashed communication lines 340 are used for replication traffic between sites, and the dotted communication lines 350 are used for control traffic between data services managers.

Data services managers 310A, 310B and 310C are control elements. The data services managers at each site communicate with one another to coordinate state and instructions. The data services managers track the hypervisors in the environment, and track health and status of the VDSAs 160A/1, 160A/2, ....

It will be appreciated by those skilled in the art that the environment shown in FIG. 4 may be re-configured by moving one or more virtual servers 110 from one hypervisor 100 to another, by moving one or more virtual disks 120 from one hypervisor 100 to another, and by adding one or more additional virtual servers 110 to a hypervisor 100.

In accordance with an embodiment of the present invention, the data services managers enable designating groups of specific virtual servers 110, referred to as virtual protection groups, to be protected. For virtual protection groups, write order fidelity is maintained. The data services managers enable designating a replication target for each virtual protection group; i.e., one or more sites, and one or more hypervisors in the one or more sites, at which the virtual protection group is replicated. A virtual protection group may have more than one replication target. The number of hypervisors and virtual servers within a virtual protection group and its replication target are not required to be the same.

Reference is made to FIG. 5, which is a user interface screenshot of bidirectional replication of virtual protection groups, in accordance with an embodiment of the present invention. Shown in FIG. 5 are virtual protection groups 301 ("Exchange"), 302 ("WebApp"), 303 ("Dummy-R1"), 304 ("Windows 2003") and 305 ("Dummies-L"). Arrows 306 indicate direction of replication.

Reference is made to FIG. 6, which is a user interface screenshot of assignment of a replication target for a virtual protection group, in accordance with an embodiment of the present invention. Shown in FIG. 6 is an entry 307 for designating a recovery host, and an entry 308 for designating a recovery datastore for virtual protection group 304 ("Windows 2003") of FIG. 5. Respective source and target datastores, [SAN ZeRTO-30] 309A and [datastore1] 309B, are shown as being paired.

More generally, the recovery host may be assigned to a cluster, instead of to a single hypervisor, and the recovery datastore may be assigned to a pool of resources, instead of to a single datastore. Such assignments are of particular advantage when different enterprises share the same physical infrastructure for target replication, as such assignments mask the virtual infrastructure between the different enterprises.

The data services managers synchronize site topology information. As such, a target site's hypervisors and datastores may be configured from a source site.

Virtual protection groups enable protection of applications that run on multiple virtual servers and disks as a single unit. E.g., an application that runs on virtual servers may require a web server and a database, each of which runs on a different virtual server than the virtual server that runs the application. These virtual servers may be bundled together using a virtual protection group.

Referring back to FIG. 4, data services managers 310A, 310B and 310C monitor changes in the environment, and automatically update virtual protection group settings accordingly. Such changes in the environment include inter alia moving a virtual server 110 from one hypervisor 100 to another, moving a virtual disk 120 from one hypervisor 100 to another, and adding a virtual server 110 to a hypervisor 100.

For each virtual server 110 and its target host, each VDSA 160A/1, 160A/2, ... replicates IOs to its corresponding replication target. The VDSA can replicate all virtual servers to the same hypervisor, or to different hypervisors. Each VDSA maintains write order fidelity for the IOs passing through it, and the data services manager coordinates the writes among the VDSAs.

Since the replication target hypervisor for each virtual server 110 in a virtual protection group may be specified arbitrarily, all virtual servers 110 in the virtual protection group may be replicated at a single hypervisor, or at multiple hypervisors. Moreover, the virtual servers 110 in the source site may migrate across hosts during replication, and the data services manager tracks the migration and accounts for it seamlessly.
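One way to picture the per-server replication-target mapping that this flexibility implies is the sketch below; the class and its members are illustrative assumptions, not structures defined by the patent.

using System.Collections.Generic;

// Sketch of a virtual protection group's target mapping: each protected virtual
// server may be paired with an arbitrarily chosen target hypervisor.
public class VirtualProtectionGroup
{
    public string Name { get; }

    // protected virtual server -> replication target hypervisor (possibly at another site)
    private readonly Dictionary<string, string> _targets = new();

    public VirtualProtectionGroup(string name) => Name = name;

    public void SetReplicationTarget(string virtualServer, string targetHypervisor) =>
        _targets[virtualServer] = targetHypervisor;

    // The mapping survives a source-side migration: when the data services manager
    // detects that a virtual server moved to another host, its target is unchanged.
    public string TargetFor(string virtualServer) => _targets[virtualServer];
}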

Reference is made to FIG. 7, which is an example of an environment for system 300, in accordance with an embodiment of the present invention. As shown in FIG. 7, system 300 includes the following components.

Site A

Hypervisor 100A/1: virtual servers 110A/1-1, 110A/1-2, 110A/1-3.

Hypervisor 100A/2: virtual servers 110A/2-1, 110A/2-2, 110A/2-3.

Hypervisor 100A/3: virtual servers 110A/3-1, 110A/3-2, 110A/3-3.

Site B

Hypervisor 100B/1: virtual servers 110B/1-1, 110B/1-2, 110B/1-3.

Hypervisor 100B/2: virtual servers 110B/2-1, 110B/2-2, 110B/2-3.

Site C

Hypervisor 100C/1: virtual servers 110C/1-1, 110C/1-2, 110C/1-3, 110C/1-4.

As further shown in FIG. 7, system 300 includes the following virtual protection groups. Each virtual protection group is shown with a different hatching, for clarity.

VPG1 (shown with upward-sloping hatching)

• Source at Site A: virtual servers 110A/1-1, 110A/2-1, 110A/3-1

• Replication Target at Site B: virtual servers 110B/1-1, 110B/1-2, 110B/2-1

VPG2 (shown with downward-sloping hatching)

• Source at Site B: virtual servers 110B/1-3, 110B/2-2

• Replication Target at Site A: virtual servers 110A/1-2, 110A/2-2

VPG3 (shown with horizontal hatching)

• Source at Site A: virtual server 110A/3-3

• Replication Target at Site B: virtual server 110B/2-3

• Replication Target at Site C: virtual server 110C/1-4

VPG4 (shown with vertical hatching)

• Source at Site A: virtual servers 110A/1-3, 110A/2-3, 110A/3-2

• Replication Target at Site C: virtual servers 110C/1-1, 110C/1-2, 110C/1-3

As such, it will be appreciated by those skilled in the art that the hypervisor architecture of FIG. 1 scales to multiple host sites, each of which hosts multiple hypervisors. The scaling flexibly allows for different numbers of hypervisors at different sites, and different numbers of virtual servers and virtual disks within different hypervisors.

The scaling flexibility of the present invention also allows extension to cloud-based data services provided by a cloud provider on a shared infrastructure, as explained hereinbelow.

Cloud-based data services enable data center providers to service multiple enterprises at data centers that are remote from the enterprises. Cloud-based data services offer many advantages. Enterprises that use cloud-based data services obviate the need for servers, SAN/NAS, networks, communication lines, installation, configuration and ongoing maintenance of information technology systems, and overhead expenses for electricity, cooling and space. However, conventional cloud-based data services suffer from weak security, due to multiple enterprises sharing the same physical infrastructure, and due to multiple enterprises using the same networks and IPs for their services.

Cloud-based systems of the present invention overcome these weaknesses. Reference is made to FIG. 8, which is a simplified block diagram of a system 500 for multi-tenant and multi-site cloud-based data services, in accordance with an embodiment of the present invention. Shown in FIG. 8 are three enterprises, designated A, B and C, and a remote cloud-based facility 480 with two data centers, designated 1 and 2. Enterprise A uses a NETAPP® data management system, Enterprise B uses an IBM data management system, and Enterprise C uses an EMC data management system. Data Center 1 uses an EMC data management system and services Enterprise A. Data Center 2 uses a NETAPP® data management system and services Enterprises B and C.

System 500 has many advantages over conventional data service systems. Inter alia, system 500 enables protection of heterogenic environments, enables remote control of enterprise sites, enables economies of scale, enables complete workload mobility, enables a complete web services API for seamless integration, and enables integration with other cloud-based management systems.

Reference is made to FIG. 9, which is a simplified block diagram of a first configuration of system 500 for providing hypervisor level multi-tenant cloud-based data services, in accordance with an embodiment of the present invention. The configuration shown in FIG. 9 includes two enterprises; namely, a smaller enterprise A and a larger enterprise B. Enterprise A infrastructure includes a single hypervisor 100A/1 with five virtual servers 100A/1-1, 100A/1-2, 100A/1-3, 100A/1-4 and 100A/1-5 and a VDSA 160A/1, includes two physical disks 120A-1 and 120A-2, and includes a data services manager 310A. Enterprise B infrastructure includes two hypervisors 100B/1 and 100B/2, includes three physical disks 120B-1, 120B-2 and 120B-3, and a data services manager 310B. Hypervisor 100B/1 includes five virtual servers 100B/1-1, 100B/1-2, 100B/1-3, 100B/1-4 and 100B/1-5 and a VDSA 160B/1; and hypervisor 100B/2 includes five virtual servers 100B/2-1, 100B/2-2, 100B/2-3, 100B/2-4 and 100B/2-5 and a VDSA 160B/2.

Cloud-based facility 480 infrastructure includes two hypervisors 400/1 and 400/2, and four physical disks 420-1, 420-2, 420-3 and 420-4. Hypervisor 400/1 includes six virtual servers 410/1-1, 410/1-2, 410/1-3, 410/1-4, 410/1-5 and 410/1-6; and hypervisor 400/2 includes two virtual servers 410/2-1 and 410/2-2. Hypervisor 400/1 services Enterprises A and B, and hypervisor 400/2 services Enterprise B. As such, the infrastructure of cloud-based facility 480 is shared between Enterprises A and B.

The configuration of FIG. 9 allocates one data services manager per enterprise and one VDSA per hypervisor. Specifically, hypervisor 400/1 includes a VDSA 460/1 and hypervisor 400/2 includes a VDSA 460/2. A data services manager 510A services Enterprise A, and a data services manager 510B services Enterprise B.

Reference is made to FIG. 10, which is a simplified block diagram of a second configuration of system 500 for providing hypervisor level multi-tenant cloud-based data services, in accordance with an embodiment of the present invention. The configuration of FIG. 10 allocates one data services manager per enterprise and one VDSA per enterprise per hypervisor. Specifically, hypervisor 400/1 includes a VDSA 460A/1 for Enterprise A and a VDSA 460B/1 for Enterprise B, and hypervisor 400/2 includes a VDSA 460B/2 for Enterprise B.

The two different configurations shown in FIGS. 9 and 10 offer different advantages. The configuration of FIG. 9 minimizes the cloud footprint, by using only one virtual data services appliance per hypervisor, but offers less data path security for enterprises. Specifically, the infrastructure is shared between enterprises and security is enforced via software. The configuration of FIG. 10 maximizes data path security for enterprises, but the cloud footprint is larger, since multiple virtual data services appliances may be required per hypervisor.

As such, it will be appreciated by those skilled in the art that the cloud-based hypervisor level data services systems of the present invention enable multi-tenancy and multi-site services; i.e., multiple enterprises and multiple sites may be serviced by the same physical infrastructure including inter alia the same hypervisors and storage. By providing each enterprise with its own data services manager, as in FIGS. 9 and 10, the systems of the present invention provide path separation between enterprises, thus ensuring that each enterprise is only able to view and access its own virtual servers and storage, and each enterprise can only control its own data replication. The systems of the present invention may be installed behind network address translators (NATs), and do not require a virtual private network (VPN). As such, these systems may obviate VPN setups for enterprises, and obviate use of public IPs.

The systems of the present invention provide bi-directional cloud-based data replication services; i.e., from an enterprise to the cloud, and from the cloud to an enterprise, for the same enterprise or for different enterprises, simultaneously using the same shared infrastructure. Moreover, replication targets may be set as resources that do not expose the enterprise infrastructure, thus providing an additional layer of security and privacy between enterprises.

It will be appreciated by those skilled in the art that systems of the present invention may be used to enforce jurisdictional data export regulations. Specifically, cloud-based facility 480 infrastructure is partitioned according to jurisdictions, and data recovery and failover for an enterprise is limited to one or more specific partitions according to jurisdictional regulations.

Reference is made to FIG. 11, which is a simplified block diagram of a system 600 for multi-tenant and multi-site cloud-based data services with jurisdictional data separation, in accordance with an embodiment of the present invention. Shown in FIG. 11 are three jurisdictional enterprises; namely, a USA Enterprise A, a UK Enterprise B, and an APAC Enterprise C. Also shown in FIG. 11 are four data centers; namely, Data Center 1 and Data Center 2 in the USA, Data Center 3 in the UK and Data Center 4 in APAC.

Export regulations prevent data from being exported from one jurisdiction to another. In order to enforce these regulations, system 600 includes a rights manager 610 that blocks access to a data center by an enterprise if data export regulations restrict data transfer between their respective jurisdictions. Thus rights manager 610 blocks access by Enterprise A to Data Centers 3 and 4, blocks access by Enterprise B to Data Centers 1, 2 and 4, and blocks access by Enterprise C to Data Centers 1, 2 and 3. Enterprises A, B and C may be commonly owned, but access of the data centers by the enterprises is nevertheless blocked, in order to comply with data export regulations.

In accordance with an embodiment of the present invention, when configuring a virtual protection group, an administrator may set a territory/data center restriction. When the administrator subsequently selects a destination resource for data replication for a virtual protection group, system 600 verifies that the resource is located in a geography that does not violate a territory/data center restriction.
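A minimal sketch of such a jurisdiction check follows; the class shape is an assumption, and only the pairwise-restriction behavior comes from the text.

using System.Collections.Generic;

// Sketch of the rights manager's jurisdiction check: access is blocked when the
// administrator-specified jurisdiction pairs restrict data transfer between them.
public class RightsManager
{
    // Unordered pairs of jurisdictions restricted from transferring data,
    // as specified via the user interface.
    private readonly HashSet<(string, string)> _restrictedPairs = new();

    public void RestrictPair(string jurisdictionA, string jurisdictionB)
    {
        _restrictedPairs.Add((jurisdictionA, jurisdictionB));
        _restrictedPairs.Add((jurisdictionB, jurisdictionA));
    }

    public bool MayAccess(string enterpriseJurisdiction, string dataCenterJurisdiction) =>
        !_restrictedPairs.Contains((enterpriseJurisdiction, dataCenterJurisdiction));
}

For the FIG. 11 example, restricting the pairs (USA, UK), (USA, APAC) and (UK, APAC) reproduces the blocking described above for Enterprises A, B and C.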

The present invention may be implemented through an application programming interface (API), exposed as web service operations. Reference is made to Appendices I - V, which define an API for virtual replication web services, in accordance with an embodiment of the present invention. It will thus be appreciated that the present invention provides many advantages, including inter alia:

• heterogeneous hypervisor replication, for different types of source and target hypervisors; e.g., from a VMWare hypervisor to a Xen hypervisor;

• heterogeneous storage replication, for different types of storage systems; e.g., from an EMC storage system to a NetApp storage system;

• bi-directional replication, whereby one enterprise may replicate from the enterprise to a cloud data center, while another enterprise simultaneously replicates from a cloud data center back to the enterprise; and

• security, whereby the cloud infrastructure is not exposed.

In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

APPENDIX I

Site Controller Web Services

These web services include methods and properties for pairing and un-pairing sites, and for managing site details.

Properties

• PeerSiteIdentifier

This property is a globally unique identifier for the peer site of a given site.

• SiteConnectionParameters

This property includes parameters to access a site, including

string Hostname - the host name or IP address of the site; and

int Port - the port for the virtual data services appliances at the site.

• SiteIdentifier

This property is a globally unique identifier for a site.

• SiteManagementDetails

This property includes a name, location and contact information for a site, including

string contactInfo - the information used to contact an administrator in case recovery is required;

string Location - the location of the site; and

string siteName - the name of the site.

• ThisSiteIdentifier

This property indicates the globally unique identifier for the local site.

Methods

• GetPeerConnectionParameters

public SiteConnectionParameters GetPeerConnectionParameters ( )

This method retrieves the IP address and port of the site paired with a given site.

• GetSiteDetails

public SiteDetails GetSiteDetails ( SiteIdentifier siteIdentifier )

This method retrieves site details, including inter alia the IP address and port of a designated server.

• GetSiteManagementDetails

public SiteManagementDetails GetSiteManagementDetails ( SiteIdentifier siteIdentifier )

This method retrieves the name, location and contact information specified by an administrator for a designated site.

• GetSiteMappings

public IEnumerable<SiteMapping> GetSiteMappings ( )

This method retrieves the identifiers for a site and for its paired site.

• GetTcpPort

public int GetTcpPort ( SiteIdentifier siteIdentifier )

This method retrieves the TCP port used to access the virtual data services appliance for a designated site.

• GetUsername

public string GetUsername ( )

This method retrieves the username for a hypervisor.

• GetIP

public string GetIP ( )

This method retrieves the IP address or a hostname for a hypervisor.

• IsPaired

public bool IsPaired( )

This method returns true if the local site is paired with another site. Otherwise, it returns false.

• Pair

public void Pair ( SiteConnectionParameters peerConnectionParameters )

This method pairs a local site with another site.

• Reconfigure

public void Reconfigure ( string hostname , string username , string password )

This method reconfigures hypervisor information.

• SetTcpPort

public int SetTcpPort ( SiteIdentifier siteIdentifier , int port )

This method sets the TCP port used to access the virtual data services appliances at a designated site.

• Unpair

public void Unpair ( )

This method un-pairs a local and remote site.
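For orientation, the following C# fragment sketches how a client might drive the pairing methods above. The SiteControllerClient proxy class, the settable properties on SiteConnectionParameters, and the host name and port values are hypothetical placeholders, assuming the web service operations are exposed through a generated service stub.

using System;

public static class PairingExample
{
    public static void Run(SiteControllerClient siteController)  // hypothetical proxy stub
    {
        if (!siteController.IsPaired())
        {
            // Placeholder connection details for the peer site.
            var peer = new SiteConnectionParameters
            {
                Hostname = "peer.example.com",
                Port = 9081
            };
            siteController.Pair(peer);
        }

        // Read back the peer connection details to confirm the pairing.
        SiteConnectionParameters confirmed = siteController.GetPeerConnectionParameters();
        Console.WriteLine("Paired with {0}:{1}", confirmed.Hostname, confirmed.Port);
    }
}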

APPENDIX II

Host Controller Web Services

These web services include methods and properties to identify hypervisors, and to deploy virtual data services appliances on hypervisors.

Properties

• HostIdentifier

This property identifies a hypervisor, and includes

string internalHostName - a name used to identify the hypervisor; and

ServerIdentifier serverIdentifier - the globally unique identifier for the hypervisor.

Methods

• GetCurrentlyInstallingHosts

public HostIdentifier GetCurrentlyInstallingHosts ( SiteIdentifier siteIdentifier )

This method retrieves a list of hypervisors where a virtual data services appliance is in the process of being installed, at a designated site.

• GetCurrentlyRemovingHosts

public HostIdentifier GetCurrentlyRemovingHosts ( SiteIdentifier siteIdentifier )

This method retrieves a list of hypervisors where a virtual data services appliance is in the process of being un-deployed, at a designated site.

• GetInstalledHosts

public HostIdentifier GetInstalledHosts ( SiteIdentifier siteIdentifier )

This method retrieves a list of hypervisors where a virtual data services appliance is installed, at a designated site.

• GetUninstalledHosts

public HostIdentifier GetUninstalledHosts ( SiteIdentifier siteIdentifier )

This method retrieves a list of hypervisors where a virtual data services appliance is not installed, at a designated site.

• InstallVdsaOnHost

public CommandTaskIdentifier InstallVdsaOnHost ( SiteIdentifier siteIdentifier , HostIdentifier hostIdentifier , DatastoreIdentifier datastoreIdentifier , NetworkIdentifier networkIdentifier , VdsaIpConf vdsaIpConf )

This method deploys a virtual data services appliance on a specified hypervisor at a designated site, in accordance with a specified datastore, a specified type of network, and access details including inter alia an IP address, a subnet mask and a gateway for the VDSA.

• UninstallVdsaFromHost

public void UninstallVdsaFromHost ( SiteIdentifier siteIdentifier , HostIdentifier hostIdentifier )

This method un-deploys a virtual data services appliance from a specified hypervisor, at a designated site.
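As an informal illustration of how these methods compose, the C# sketch below deploys a virtual data services appliance on every hypervisor that lacks one. The HostControllerClient proxy is a hypothetical stub, GetUninstalledHosts is assumed to return an enumerable of host identifiers (as its description suggests), and the datastore, network and IP-configuration arguments are placeholders supplied by the caller.

using System;

public static class VdsaDeploymentExample
{
    public static void DeployAll(
        HostControllerClient hostController,   // hypothetical proxy stub
        SiteIdentifier site,
        DatastoreIdentifier datastore,
        NetworkIdentifier network,
        VdsaIpConf ipConf)
    {
        // Deploy a VDSA on each hypervisor that does not yet have one.
        foreach (HostIdentifier host in hostController.GetUninstalledHosts(site))
        {
            CommandTaskIdentifier task = hostController.InstallVdsaOnHost(
                site, host, datastore, network, ipConf);
            Console.WriteLine("Installing VDSA on {0}; task {1}", host, task);
        }
    }
}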

APPENDIX III

Protection Group Controller Web Services

These web services include methods and properties to manage virtual protection groups.

Properties

• Checkpoint

This property identifies a checkpoint by an unsigned integer.

• FailoverTestInfo

This property includes information returned from a failover test, including

DateTime startTime - the time the test started, represented as a date and time of day;

DateTime EndTime - the time the test ended, represented as a date and time of day;

TestStatus status - a status of the test, including Unknown, FailedBySystem, AbortedByFailoverCommand, AbortedBySystem, FailedByUser, Success and CurrentlyRunning;

string Summary - a summary of the test;

string AttachmentName - a name of an attachment; and

byte[ ] Attachment - a key to an attachment.

• ProtectionGroupIdentifier

This property is a globally unique identifier for a virtual protection group.

• ProtectionGroupManagementSettings

This property defines settings for a protection group, including

ProtectionGroupSettings - properties for a protection group;

vAppManagementSettings - properties for a virtual application, when the virtual protection group protects a virtual application; and

vsManagementSettings - properties for the virtual server.

• ProtectionGroupSettings

This property defines settings for a virtual protection group, including

string Name - a name used to identify the virtual protection group;

unsigned int RpoInSeconds - the maximum number of seconds between checkpoints;

unsigned long LogVolumeMaxSizeInSectors - the maximum log size;

unsigned int LogBacklogInMinutes - the time during which log entries are kept in cache;

bool useCompression - set to true if data should be compressed when transmitted to the peer site;

ProtectionGroupDefaults Defaults - defaults used to identify the recovery site, including the IP address of the hypervisor host, the name of the datastore, and the networks to use for tests and for failovers;

ProtectionGroupPriority Priority - the priority assigned to the virtual protection group; namely, Low, Medium or High;

int RemoteId - the identifier assigned to the peer site; and

int MaxTestIntervalInMinutes - the maximum time, in minutes, between tests.

• ProtectionGroupStatus

This property indicates the status of a virtual protection group, from among Protecting, NeedReverseConfiguration, Promoting, PromotingAndNeedReverseConfiguration, Test, Failover, PromotionCompleteMirrorsNotYetActivated, MissingConfiguration, PromotingAndMissingConfiguration and RemoveInProgress.

• vAppManagementSettings

This property indicates settings for a virtual application, including

vAppIdentifier

string InternalVAppName - a name for the virtual application; and

ServerIdentifier serverIdentifier - a globally unique identifier for the virtual application; and

vAppSettings

string internalName - a name of the virtual application replication destination;

ServerIdentifier serverIdentifier - a globally unique identifier for the virtual application replication destination; and

Type - the type of server hosting the virtual application at the destination; namely, host (0) or cluster (1).

• VSManagementSettings

This property indicates settings for a virtual server, including

VSIdentifier

string internalVSName - a name given to the virtual server; and

ServerIdentifier serverIdentifier - a globally unique identifier for the virtual server;

VSSettings

ReplicationDestination - the destination name, identifier and type (host / cluster), and the datastore name and identifier;

vsDestinationFolders - a failover destination folder and a failover test destination folder, including a name and an identifier;

VNicManagementSettingsCollection - virtual NIC settings; and

VolumeManagementSettings - settings that identify a volume, such as the directory, path and name of the disk, the type of disk, such as a raw drive, and whether or not it is a swap disk.

Methods

• bCheckpointsExist

public bool bCheckpointsExist ( ProtectionGroupIdentifier identifier )

This method returns true if checkpoints exist for a designated virtual protection group.

• ClearSite

public CommandTaskIdentifier ClearSite ( SiteIdentifier site )

This method removes the virtual protection groups defined at a designated site.

• CreateProtectionGroup

public CommandTaskIdentifier CreateProtectionGroup ( ProtectionGroupManagementSettings managementSettings , SiteIdentifier siteIdentifier )

This method creates a virtual protection group at a designated site.

• FailOver

public CommandTaskIdentifier FailOver ( ProtectionGroupIdentifier identifier , CheckpointIdentifier checkpoint , ProtectionGroupManagementSettings postFailoverSettings )

This method performs a failover of the virtual servers in a designated virtual protection group, to a designated checkpoint instance or to the latest checkpoint.

• FailOverTest

public CommandTaskIdentifier FailOverTest ( ProtectionGroupIdentifier identifier , CheckpointIdentifier checkpoint )

This method performs a failover of the virtual servers in a designated virtual protection group, to a designated checkpoint or to the latest checkpoint, without creating reverse replication and without stopping protection of the virtual servers in the designated virtual protection group.

• ForceRemoveProtectionGroup

public CommandTaskIdentifier ForceRemoveProtectionGroup ( ProtectionGroupIdentifier identifier )

This method removes a virtual protection group irrespective of the state of the group. This method is used if the RemoveProtectionGroup method is unable to complete successfully.

• ForceUpdateProtectionGroup

public CommandTaskIdentifier ForceUpdateProtectionGroup ( ProtectionGroupIdentifier identifier , ProtectionGroupManagementSettings managementSettings )

This method updates virtual protection group settings, including removal of virtual servers and disks that should have been removed using the RemoveProtectionGroup method. This method is used if the UpdateProtectionGroup method is unable to complete successfully.

• GetCheckpointList

public List<Checkpoint> GetCheckpointList ( ProtectionGroupIdentifier identifier )

This method retrieves a list of checkpoints for a specified virtual protection group.

• GetFailoverTestInfo

public List<FailoverTestInfo> GetFailoverTestInfo ( ProtectionGroupIdentifier identifier , int numOfTests )

This method retrieves information about failover tests for a specified virtual protection group.

• GetProtectionGroupDefaultReverseSettings

public ProtectionGroupManagementSettings GetProtectionGroupDefaultReverseSettings ( ProtectionGroupIdentifier identifier )

This method retrieves the virtual protection group settings for a specified virtual protection group, for use as default values for reverse replication.

• GetProtectionGroupManagementSettings

public ProtectionGroupManagementSettings GetProtectionGroupManagementSettings ( ProtectionGroupIdentifier identifier )

This method retrieves the settings for a designated virtual protection group.

• GetProtectionGroups

public IEnumerable<ProtectionGroupIdentifier> GetProtectionGroups ( )

This method retrieves a list of virtual protection groups.

• GetProtectionGroupState

public ProtectionGroupState GetProtectionGroupState ( ProtectionGroupIdentifier identifier )

This method retrieves the state of a specified virtual protection group, the state being "protected" or "recovered". If the group is protected, 0 is returned; and if the group is recovered, 1 is returned.

• GetProtectionGroupStatus

public ProtectionGroupStatus GetProtectionGroupStatus ( ProtectionGroupIdentifier identifier )

This method retrieves the status of a specified virtual protection group, the status being inter alia "protecting", "testing" or "promoting".

• InsertTaggedCheckpoint

public CommandTaskIdentifier InsertTaggedCheckpoint ( ProtectionGroupIdentifier identifier , string tag )

This method inserts a named checkpoint for a designated virtual protection group. The method returns immediately, without verifying whether or not the checkpoint was successfully written to the journal in the peer site.

• IsPairConnected

public bool IsPairConnected( )

This method returns true if the connection to the paired site is up.

• Move

public CommandTaskIdentifier Move ( ProtectionGroupIdentifier identifier , ProtectionGroupManagementSettings postMoveSettings )

This method migrates a specified virtual protection group to the peer site.

• ProtectVS

public CommandTaskIdentifier ProtectVS ( ProtectionGroupIdentifier identifier , VSManagementSettings vsManagementSettings )

This method adds a designated virtual server to a virtual protection group, in accordance with designated settings.

• RemoveProtectionGroup

public CommandTaskIdentifier RemoveProtectionGroup ( ProtectionGroupIdentifier identifier )

This method removes a virtual protection group, unless the group is being replicated during a test failover or an actual failover, and unless the group is being migrated to the peer site. If this method does not return a success completion code, the ForceRemoveProtectionGroup method may be used to force removal of the group.

• StopFailOverTest

public CommandTaskIdentifier StopFailOverTest ( ProtectionGroupIdentifier identifier , FailoverTestInfo testInfo )

This method stops a failover test, and removes the test virtual servers from the peer site.

• TrimOldFailoverTest

public void TrimOldFailoverTest ( ProtectionGroupIdentifier identifier , int numOfOldTestsToTrim )

This method discards information about a specified number of old failover tests for a designated virtual protection group, from the oldest test to the most recent test.

• UnprotectVS

public CommandTaskIdentifier UnprotectVS ( ProtectionGroupIdentifier identifier , VSIdentifier vsIdentifier )

This method removes a designated virtual server from a designated virtual protection group.

• UpdateProtectionGroup

public CommandTaskIdentifier UpdateProtectionGroup ( ProtectionGroupIdentifier identifier , ProtectionGroupManagementSettings managementSettings )

This method updates settings of a specified virtual protection group. If the method does not return a success completion code, the ForceUpdateProtectionGroup method can be used to force the update.

• WaitForCheckpoint

public CommandTaskIdentifier WaitForCheckpoint ( ProtectionGroupIdentifier identifier , Checkpoint checkpoint , int waitingIntervalInSeconds , int timeoutInSeconds )

This method waits for a checkpoint to be written to a journal on the peer site, after the checkpoint has been inserted, or times out if the wait takes too long.
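Taken together, these methods support a simple end-to-end protection flow: create a virtual protection group, insert a named checkpoint, and wait until the checkpoint reaches the peer journal. The C# sketch below is illustrative only; the ProtectionGroupControllerClient proxy and the settable settings properties are hypothetical, the group name, RPO value and wait intervals are placeholders, and the group identifier and checkpoint are assumed to be retrievable via GetProtectionGroups and GetCheckpointList.

using System.Linq;

public static class ProtectionGroupExample
{
    public static void Protect(
        ProtectionGroupControllerClient controller,   // hypothetical proxy stub
        SiteIdentifier site)
    {
        var settings = new ProtectionGroupManagementSettings
        {
            ProtectionGroupSettings = new ProtectionGroupSettings
            {
                Name = "Finance-VPG",    // placeholder group name
                RpoInSeconds = 300,      // placeholder RPO target
                useCompression = true    // compress traffic to the peer site
            }
        };

        // Create the group; once the creation task completes, its
        // identifier can be obtained, e.g., from GetProtectionGroups.
        controller.CreateProtectionGroup(settings, site);
        ProtectionGroupIdentifier group = controller.GetProtectionGroups().Last();

        // Mark a consistent point in time, then wait for the checkpoint
        // to be written to the journal at the peer site.
        controller.InsertTaggedCheckpoint(group, "before-upgrade");
        Checkpoint checkpoint = controller.GetCheckpointList(group).Last();
        controller.WaitForCheckpoint(group, checkpoint, 5, 120);
    }
}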

APPENDIX IV

Command Tracker Web Services

These web services include methods and properties to monitor procedures being executed.

Properties

• CommandTaskInfoProperties

This property includes

enum CommandType - valid methods;

enum CompletionCode - the result of running a method, including NotAvailable, Success, Aborted, Failed and HadException;

int OriginalInputParms - input parameters; and

int pgId - the globally unique identifier for a virtual protection group.

Methods

• GetAllTasks

public List<CommandTaskIdentifier> GetAllTasks ( )

This method retrieves a list of all tasks that are currently active.

• GetCompletionCode

public CommandTaskCompletionCode GetCompletionCode ( CommandTaskIdentifier taskId )

This method returns the completion code of a specified task. Completion codes include Success, Aborted, Failed and HadException. If the task is still running, NotAvailable is returned.

• GetCommandTaskInfo

public CommandTaskInfo GetCommandTaskInfo ( CommandTaskIdentifier cmdIdentifier )

This method returns the command type, the completion code, the input parameters, and the virtual protection group identifier of a designated task.

• GetExceptionString

public string GetExceptionString ( CommandTaskIdentifier taskId )

This method returns the string associated with an exception, for a designated task that had an exception. The method GetCompletionCode returns HadException if a task had an exception.

• GetProgress

public int GetProgress ( CommandTaskIdentifier taskId )

This method returns progress as a percentage of a whole task, as an integer, for a specified task.

• GetProtectionGroupCommandTaskId

public CommandTaskIdentifier GetProtectionGroupCommandTaskId ( ProtectionGroupIdentifier pgId )

This method returns the identifier of a task currently being performed on a designated protection group.

• GetProtectionGroupCommandTaskIdLocally

public CommandTaskIdentifier GetProtectionGroupCommandTaskIdLocally ( ProtectionGroupIdentifier pgId )

This method returns the identifier of a task currently being performed on a specified protection group at a local site.

• GetResult

public Object GetResult ( CommandTaskIdentifier taskId )

This method returns the result for a designated task. The returned result may be one of the following:

the identifier of the virtual protection group identified by the task ID;

the settings of the virtual protection group identified by the task ID;

a checkpoint whose insertion is identified by the task ID; or

the identifier of the log collection identified by the task ID.

• GetStatus

public CommandTaskStatus GetStatus ( CommandTaskIdentifier taskId )

This method retrieves the current status of a specified task. The status may be Active, Running, Aborted or Completed.

• Wait

public void Wait ( CommandTaskIdentifier taskId , int IntervalInMillis , int TimeOutInMillis )

This method waits for a specified task to complete, by polling the task at specified time intervals, until a specified time out.
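These monitoring methods lend themselves to a polling loop around any long-running task. The C# sketch below assumes a hypothetical CommandTrackerClient proxy; the two-second polling interval is a placeholder, and GetCompletionCode is polled until it stops returning NotAvailable, per the description above.

using System;
using System.Threading;

public static class TaskPollingExample
{
    public static void WaitFor(
        CommandTrackerClient tracker,      // hypothetical proxy stub
        CommandTaskIdentifier taskId)
    {
        CommandTaskCompletionCode code;
        do
        {
            Thread.Sleep(2000);            // placeholder polling interval
            Console.WriteLine("Progress: {0}%", tracker.GetProgress(taskId));
            code = tracker.GetCompletionCode(taskId);
        }
        while (code == CommandTaskCompletionCode.NotAvailable);

        // Surface the exception text if the task ended with an exception.
        if (code == CommandTaskCompletionCode.HadException)
        {
            Console.WriteLine(tracker.GetExceptionString(taskId));
        }
    }
}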

APPENDIX V

Log Collector Web Services

These web services include methods and properties to retrieve information for troubleshooting.

Properties

• LogCollectionRequest

This property indicates details of a log request, including the level of detail of the information, whether information about a virtual data services appliance and core information should be included, and the start and end times for the information.

Methods

• CollectSystemLogs

public CommandTaskIdentifier CollectSystemLogs ( LogCollectionRequest request )

This method initiates a log request.

• RetrieveLogPackage

public Stream RetrieveLogPackage ( LogCollectionIdentifier id )

This method retrieves results of a log request.
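Finally, the two log collector methods pair naturally into a collect-then-retrieve sequence. The C# sketch below is a minimal illustration; the LogCollectorClient proxy, the settable StartTime and EndTime properties on LogCollectionRequest, and the log collection identifier (obtainable, e.g., via GetResult of Appendix IV) are hypothetical assumptions, and the output file name is a placeholder.

using System;
using System.IO;

public static class LogCollectionExample
{
    public static void Collect(
        LogCollectorClient logCollector,        // hypothetical proxy stub
        LogCollectionIdentifier collectionId)   // e.g., obtained via GetResult
    {
        var request = new LogCollectionRequest
        {
            // Placeholder time window; the detail-level fields are elided.
            StartTime = DateTime.UtcNow.AddHours(-1),
            EndTime = DateTime.UtcNow
        };

        CommandTaskIdentifier task = logCollector.CollectSystemLogs(request);
        // ... wait for the task to complete, e.g., with the Appendix IV services ...

        // Stream the resulting log package to a local file.
        using (Stream package = logCollector.RetrieveLogPackage(collectionId))
        using (FileStream file = File.Create("logs.zip"))
        {
            package.CopyTo(file);
        }
    }
}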