


Title:
MANAGEMENT METHOD AND APPARATUS FOR CONFIGURING OPTIMIZED PATH
Document Type and Number:
WIPO Patent Application WO/2018/004519
Kind Code:
A1
Abstract:
Example implementations described herein are directed to a management computer configured to manage a plurality of integrated systems. The management computer can include a memory configured to store a topology map having a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; and a processor configured to determine, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems, and select a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

Inventors:
TERAYAMA ATSUMI (US)
RYSKO GARRETT (US)
CHONG RANDALL (US)
LANE SOLOMON (US)
Application Number:
PCT/US2016/039615
Publication Date:
January 04, 2018
Filing Date:
June 27, 2016
Assignee:
HITACHI LTD (JP)
International Classes:
H04L12/701; H04L12/28
Foreign References:
US 7275103 B1 (2007-09-25)
US 2005/0169188 A1 (2005-08-04)
US 8595434 B2 (2013-11-26)
US 2004/0243699 A1 (2004-12-02)
US 7499410 B2 (2009-03-03)
Attorney, Agent or Firm:
MEHTA, Mainak H. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A management computer configured to manage a plurality of integrated systems, each of the plurality of integrated systems comprising a server, a network and a storage system, the management computer comprising: a memory configured to store a topology map comprising a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; and a processor, configured to: determine, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and select a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

2. The management computer of claim 1, wherein the first server and the first storage system are determined from a user selection.

3. The management computer of claim 1, wherein the performance metric is a cost function based on the first server and the first storage system, and wherein the processor is configured to select the path from the plurality of paths based on a selection minimizing the cost function.

4. The management computer of claim 1, wherein the performance metric is determined based on population information comprising values associated with a number of elements of the first storage system and the first server.

5. The management computer of claim 1, wherein the performance metric comprises a server performance metric, a network performance metric, and a storage system performance metric, wherein the server performance metric is based on CPU usage and memory usage of the first server, the network performance metric is based on switch port usage of a network connecting the first server and the first storage system, and wherein the storage system performance metric is based on storage pool busy rate of the first storage system.

6. The management computer of claim 1, wherein the processor is configured to create the topology map through a collection of server configuration information, network configuration information and storage system configuration information and through a matching of port configurations to generate paths for the topology map, wherein the network configuration information comprises location information for at least one of a sharing site or a rack, and wherein the processor is configured to determine whether a path from the generated paths for the topology map is an inter-switch link based on the switch configuration information.

7. The management computer of claim 6, wherein the processor is configured to, for the selected path being an inter-switch link, determine whether an alternate path from the plurality of paths having a shorter path than the selected path is available; and for the alternate path being available, determine a second storage system associated with the alternate path as a recommended replacement of the first storage system.

8. The management computer of claim 1, wherein the topology map comprises one or more cross-site paths across multiple physical locations, wherein the processor is configured to divide the topology map into one or more sections based on the one or more cross-site paths.

9. A non-transitory computer readable medium storing instructions for executing a process for managing a plurality of integrated systems, each of the plurality of integrated systems comprising a server, a network and a storage system, the instructions comprising: managing a topology map comprising a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; determining, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and selecting a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

10. The non-transitory computer readable medium of claim 9, wherein the first server and the first storage system are determined from a user selection.

11. The non-transitory computer readable medium of claim 9, wherein the performance metric is a cost function based on the first server and the first storage system, and wherein the selecting the path from the plurality of paths is based on a selection minimizing the cost function.

12. The non-transitory computer readable medium of claim 9, wherein the performance metric is determined based on population information comprising values associated with a number of elements of the first storage system and the first server.

13. The non-transitory computer readable medium of claim 9, wherein the performance metric comprises a server performance metric, a network performance metric, and a storage system performance metric, wherein the server performance metric is based on CPU usage and memory usage of the first server, the network performance metric is based on switch port usage of a network connecting the first server and the first storage system, and wherein the storage system performance metric is based on storage pool busy rate of the first storage system.

14. The non-transitory computer readable medium of claim 9, further comprising creating the topology map through a collection of server configuration information, network configuration information and storage system configuration information and through a matching of port configurations to generate paths for the topology map, wherein the network configuration information comprises location information for at least one of a sharing site or a rack, and determining whether a path from the generated paths for the topology map is an inter-switch link based on the switch configuration information.

15. The non-transitory computer readable medium of claim 14, the instructions further comprising, for the selected path being an inter-switch link, determining whether an alternate path from the plurality of paths having a shorter path than the selected path is available; and for the alternate path being available, determining a second storage system associated with the alternate path as a recommended replacement of the first storage system.

Description:
MANAGEMENT METHOD AND APPARATUS FOR CONFIGURING

OPTIMIZED PATH

BACKGROUND

Field

[0001] The present disclosure is related to storage and data centers, and more specifically, to network and path configurations for storage platforms.

Related Art

[0002] In related art datacenters, there can be thousands of servers, switches, and storage subsystems. Such related art configurations cause complicated network configurations, requiring users to invest effort to provision and maintain the configuration so that it can meet their business demands.

[0003] Specifically, SAN (Storage Area Network) architectures may have dynamic hardware boundaries such as multiple racks, aggregation with core switches, and so on. Given the various paths between a host and a storage subsystem, there are preferred and non-preferred paths caused by architecture boundaries. In general, paths across a boundary (e.g. paths crossing over multiple racks) have narrow bandwidth and increased latency, so the user would rather avoid these non-preferred paths. However, in related art datacenters the number of overall paths is too large to manage and optimize. Further, such paths can be dynamically changed depending on the workload lifecycle. Therefore, optimizing the entire set of paths is not achieved by existing configuration management techniques of the related art.

[0004] In one example of such related art techniques, U.S. Application No. 2003/0005119 A1, herein incorporated by reference in its entirety, involves an automated path creation method for a SAN having a large number of switches, servers and shared storage. The related art technique provides path selection criteria which allow users to apply rules for selecting a preferred path optimization. However, such related art techniques only consider bandwidth to evaluate path congestion, and do not provide any detailed method for considering end-to-end load balancing.

[0005] In another example of a related art technique, U.S. Application No. 2013/0318228 A1, herein incorporated by reference in its entirety, involves a logical path selection method which considers the bandwidth on an existing fabric and topology changes in a multi-path environment. Such related art implementations involve a path searching technique for finding an optimal path while evaluating the minimum path bandwidth. However, such related art techniques are only directed to the virtualization (hypervisor) layer and do not involve pathways for the physical layer, which can include the physical SAN switch configuration and storage subsystem configuration.

SUMMARY

[0006] In implementations involving hyper-scale converged infrastructure, related art techniques may have performance bottlenecks on an aggregation switch, so there is a need for architecture boundary-aware techniques to optimize the path distribution for storage access. Example implementations of the present disclosure are directed to addressing such infrastructure and other related art problems, and can involve the provision of optimized path management with awareness of the hardware boundary.

[0007] Example implementations described herein involve a method and system that optimize the SAN network path configuration between hosts and storage subsystems with boundary awareness. In example implementations of the present disclosure, there is a provision and configuration of an optimal path from a server to a storage subsystem with minimum performance degradation. Example implementations provide a boundary identification method by building a graph which represents the underlying physical topology. Example implementations also include end-to-end consideration for each integrated system (e.g. computer/storage/network) component by evaluating the amount of resources consumed by the integrated system. Example implementations can also involve a method that includes boundaries and provides local path optimization for each topology subset. The example implementations provide a selective method with boundary awareness to configure flexible paths depending on the user demands.

[0008] Aspects of the present disclosure include a management computer configured to manage a plurality of integrated systems, each of the plurality of integrated systems including a server, a network and a storage system. The management computer can include a memory configured to store a topology map having a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; and a processor, configured to determine, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and select a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

[0009] Aspects of the present disclosure can further include a non-transitory computer readable medium storing instructions for executing a process for managing a plurality of integrated systems, each of the plurality of integrated systems having a server, a network and a storage system. The instructions can include managing a topology map having a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; determining, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and selecting a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

[0010] Aspects of the present disclosure can further include a method for managing a plurality of integrated systems, each of the plurality of integrated systems having a server, a network and a storage system. The method can include managing a topology map having a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; determining, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and selecting a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

[0011] Aspects of the present disclosure can further include an apparatus for managing a plurality of integrated systems, each of the plurality of integrated systems having a server, a network and a storage system. The apparatus can include means for managing a topology map having a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems; means for determining, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and means for selecting a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths.

BRIEF DESCRIPTION OF DRAWINGS

[0012] FIG. 1 illustrates a high level diagram in accordance with an example implementation.

[0013] FIG. 2 illustrates a physical system configuration, in accordance with an example implementation.

[0014] FIG. 3 illustrates a logical system configuration, in accordance with an example implementation.

[0015] FIG. 4 illustrates a fabric in accordance with an example implementation.

[0016] FIG. 5 illustrates management programs in the management computer, in accordance with an example implementation.

[0017] FIG. 6 illustrates an example of VM configuration management table to store virtual machine and hypervisor configurations on the management computer, in accordance with an example implementation.

[0018] FIG. 7 illustrates an example of server configuration management table to store physical server configuration on the management computer, in accordance with an example implementation.

[0019] FIG. 8 illustrates an example of switch configuration management table to store switch configurations on the management computer, in accordance with an example implementation.

[0020] FIG. 9 illustrates an example of storage configuration management table to store storage subsystem configuration on the management computer, in accordance with an example implementation.

[0021] FIG. 10 illustrates an example of path management table to store each path configuration controlled by path configuration engine, in accordance with an example implementation.

[0022] FIG. 11 illustrates an example of topology map, in accordance with an example implementation.

[0023] FIG. 12 illustrates an example of a candidate path evaluation table, in accordance with an example implementation.

[0024] FIG. 13 illustrates an example flow chart for providing path provisioning functionality, in accordance with an example implementation.

[0025] FIG. 14 illustrates an example fabric configuration, in accordance with an example implementation.

[0026] FIG. 15 illustrates an example flow chart, in accordance with an example implementation.

[0027] FIG. 16 illustrates a system configuration, in accordance with an example implementation.

[0028] FIG. 17 illustrates an example of topology map, in accordance with an example implementation.

DETAILED DESCRIPTION

[0029] The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term "automatic" may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.

[0030] In a first example implementation, there is a system which allows users to configure an optimized path within a local datacenter.

[0031] FIG. 1 illustrates a high level diagram in accordance with an example implementation. In datacenters as illustrated in FIG. 1, there is a SAN fabric which connects the servers (405) and storage subsystems (100) and is configured to share storage areas with a number of servers according to the desired implementation. The network switches (58a, 58b) are installed to extend the SAN fabric, but there is an upper limit on the number of aggregated servers due to the limited number of switch ports. For example, servers (405) loaded on a rack (400a) are connected directly only to a core switch (58a), giving two paths to reach the storage subsystem (100). One path is directly connected from switch (58a) to the storage subsystem (100), but the other path must traverse an ISL (Inter-switch link) (60) to reach the same storage subsystem (100). In this case, the servers in the racks on the left (400a, 400b) and the racks on the right (400c, 400d) should be distinguished because they share different core switches (58a, 58b), and the ISL (60) will be their boundary.

[0032] Thus, the SAN fabric may not be uniform in general, as there are hardware boundaries and non-preferred paths across the boundaries. The racks, host clusters, aggregation switches, and shared storage subsystems can cause such hardware boundaries, for example. Constructing a closed path inside of a boundary can provide improved optimization for the configurations of the entire system in the datacenter.

[0033] FIG. 2 illustrates a physical system configuration, in accordance with an example implementation. The system as illustrated in FIG. 2 contains one or more physical servers (10) and one or more physical storage subsystems (100) that are connected to each other through the SAN. Network switches can be used for building the fabric between the servers and the storage subsystems to provide connections according to the desired implementation. In an FC (Fibre Channel) SAN (50), an FC switch (55) is used for building the fabric, and each server (10) and each of the storage subsystems (100) has a corresponding interface, such as an HBA (Host Bus Adapter) (15) and a storage port (155). Each HBA (15) port and storage port (155) has a corresponding WWN (World Wide Name) as a globally unique identifier. However, example implementations of the present disclosure are not restricted to FC switch fabrics for the SAN. Other protocols (e.g., internet protocol, InfiniBand, etc.) or other devices may also be used depending on the desired implementation.

[0034] The physical server (10) has an open computer architecture which physically includes a CPU (Central Processing Unit) (11) involving one or more physical processors, memory (12), and network interfaces (15, 16). The operating system (OS) software (13) is loaded on the memory (12) and executed to provide the required logical resources for loaded application software. The application software can be a database, web server, or any other business application depending on the desired implementation and is not particularly restricted to any application software. The virtualization program (14), or hypervisor, is an example of such an application and provides functionality to configure VMs (virtual machines). The physical server (10) is also referred to as the "host", as it can host VMs (17) or any other business application.

[0035] The storage subsystem (100) can include one or more storage controllers (150) and one or more storage drives (101). The storage controller (150) manages the configuration of the storage subsystem (100) and implements features to present the data storage volume to the server (10). In an example implementation, two or more storage controllers (150) can also be coupled to work as a redundant controller.

[0036] The storage subsystem (100) accommodates storage drives (101) such as HDD (Hard Disk Drive), SSD (Solid State Drive), and any other storage medium devices depending on the desired implementation. Each of the storage devices is presented to the physical server (10) as a storage area. The processor (158) on the storage controller (150) processes I/O (Input/Output) requests and responses with the physical server (10). The datagram received at the storage port (155) is stored in the cache (159), and the processor (158) returns the appropriate response for the instruction received from the physical server (10). The cache (159) also preloads a copy of the data stored on the persistent storage drives (101) to reduce response time.

[0037] The system also includes a management computer (500) which manages all system components. The management computer (500) has an open computer architecture which can physically include a CPU (511), memory (512), and a network interface (516). The OS (513) and management program (514) are loaded and executed to implement the functionality of the management computer (500).

[0038] Each of the system components, such as servers (10), are connected to management computer (500) through management network (90). The components also have one or more network interfaces, such as Ethernet interface (16, 56, 156), to be connected to management network (90). Corresponding network switches (95) can be installed depending on the desired implementation. Each component can have dedicated devices for management, such as BMC (Baseboard Management Controller), SVP (Service Processor) and others, depending on the desired implementation.

[0039] The servers can include a front-end interface to be connected to a service network. The front-end interface, such as the Ethernet interface (16), can be shared but logically separated from the management network interface. The applications on the servers use this service network to transfer the datagrams to be utilized.

[0040] As illustrated in FIG. 2, the management computer (500) can be configured to manage a plurality of integrated systems, with each of the integrated systems involving a server (10), a network switch (55) and a storage system (100). In example implementations, the memory (512) can be configured to store a topology map having a plurality of paths, each of the plurality of paths indicative of a connection between a first server from the plurality of integrated systems and a first storage system from the plurality of integrated systems, as illustrated, for example, in FIG. 11. The first storage system and the first server may be selected by a user through a user interface or by any other means according to the desired implementation.

[0041] The processor (511) may be configured to determine, from the topology map, a number of hops and a performance metric for each of the plurality of paths between a first server from the plurality of integrated systems, and a first storage system from the plurality of integrated systems; and select a path from the plurality of paths between the first server and the first storage system based on the determined number of hops and the performance metric of each of the plurality of paths as illustrated in FIGS. 13 and 15. The performance metric can be a cost function based on the first server and the first storage system (e.g. involving storage cost, server cost, network cost and so on associated with a desired formula to produce a total cost as illustrated in FIG. 12), wherein the processor (511) is configured to select the path from the plurality of paths based on a selection minimizing the cost function as described, for example, in FIG. 12.

[0042] The performance metric may also be determined based on population information comprising values associated with a number of elements of the first storage system and the first server. Examples of such values are described, for example, in FIG. 11 and FIG. 12. Such values can be integrated into any formula according to the desired implementation to determine a server cost, a network cost, and/or a storage system cost for the cost function. As an example, the performance metric can include a server performance metric, a network performance metric, and a storage system performance metric. Examples of the server performance metric can include formulas that take the CPU usage and memory usage of the first server into account, the network performance metric can involve formulas based on the switch port usage of a network connecting the first server and the first storage system, and the storage system performance metric can be based on the storage pool busy rate of the first storage system. Other elements can also be utilized for the server performance metric, the network performance metric and the storage system performance metric according to the desired implementation.

[0043] The processor (511) is configured to create the topology map through a collection of server configuration information, network configuration information and storage system configuration information, and through a matching of port configurations to generate paths for the topology map, wherein the network configuration information can include location information for at least one of a sharing site or a rack, and wherein the processor is configured to determine whether a path from the generated paths for the topology map is an inter-switch link based on the switch configuration information, as illustrated, for example, in FIGS. 3, 5 and 10. The processor (511) is also configured to, for the selected path being an inter-switch link, determine whether an alternate path from the plurality of paths having a shorter path than the selected path is available; and for the alternate path being available, determine a second storage system associated with the alternate path as a recommended replacement of the first storage system, as illustrated in FIG. 15.

[0044] FIG. 3 illustrates a logical system configuration, in accordance with an example implementation. The servers can be configured in the form of a host cluster (18) to provide high-availability to the application or the VMs (17) which are running on the cluster (18). The cluster provides functionality for failover and can maintain the execution of application/VM (17) in case of a physical server failure. With the cluster configuration, shared volume (110) and corresponding paths are configured.

[0045] The data storage area which is used by the OS on the physical server (10) or the VMs (17) is defined as a volume (110) in the storage subsystem (100). The storage subsystems (100) configure the raw storage medium as a logical volume (110) so that the OS can recognize it as available persistent storage space to write/read data.

[0046] To provide connectivity from the OS to the volume (110), each component is configured to allow the connection. On each component, there is some access control technique, such as zoning (57) on the FC switch (55) and HSD (Host Storage Domain) (111) on the storage subsystem (100) side. The zoning (57) defines which ports (58) or connected device WWNs are allowed to communicate with each other. The end-to-end connection and related access control configuration are referred to as a "path". In FC SAN, there is a SCSI (Small Computer System Interface) target ID (identifier) and LUN (Logical Unit Number) to identify the volume (110) from the initiator (e.g., from the OS). The HSD (111), in singular or in conjunction with techniques categorized as LUN masking or LUN security, manages the combination of initiator WWN and LUN for each storage port (155) in order to provide a path.

[0047] To provide redundancy for the host I/O, multiple paths also can be configured. Independent components, such as HBA ports (15), FC switches (55) and storage controllers (150) can be chosen to guarantee redundancy. In an enterprise system, there can be two, four or any number of redundant paths that are configured from the initiator physical server (10) to the target storage volume (110) to eliminate SPOF (single point of failure).

[0048] In the storage subsystem (100), the storage controller (150) defines further logical structure to manage storage drives (101) as one or more volumes (110). For example, RAID (redundant array of inexpensive disks) group (113) combines storage drives (101) to provide device-level redundancy and striping, and storage pool (112) provides flexible storage space to allocate to the volume (110).

[0049] The FC switch (55) provides an access control technique known as "zoning" (57). The zoning (57) can define which WWNs or ports can communicate with each other, and allows the OS to reach the appropriate storage port (155). The switch (55) also includes functionality such as name server functionality in order to discover which node is connected to the fabric (51).

[0050] From a performance perspective, each hardware component has performance metrics to evaluate the load on the component. Each hardware component has an interface to export/transfer such performance information. The management computer (500) collects such performance information through the management network (90).

[0051] FIG. 4 illustrates a fabric in accordance with an example implementation. FIG. 4 illustrates a target physical server (10a) having two HBA ports which are connected to each fabric. The example of FIG. 4 depicts only one fabric through chassis switch (57a) and omits another fabric through chassis switch (57b). The following description is directed to one fabric involving a ToR (Top of Rack) switch 1A (58a) and 2A (58b), and chassis switch (57a), but the same method is applicable to the other fabric in accordance with a desired implementation. ToR switch 1A (58a) and 2A (58b) are wired by an ISL (Inter Switch Link) (60) to extend the fabric over the separated racks #1 (400a) and #2 (400b).

[0052] Assuming the user attempts to configure a new volume (110a) in storage subsystem A (100a) for the physical server (10a), there are two reachable routes through ToR switch 1A (58a) in the example of FIG. 4. The first reachable route is directly connected from ToR switch 1A (58a) to storage subsystem A (100a), and the second reachable route is connected through ToR switch 2A (58b), including the ISL (60). In this case, the first route is the shortest and preferred path from a performance perspective. The number of ISLs may be smaller than the number of direct connections to the storage subsystem, and an ISL has relatively narrow bandwidth. An ISL can be logically identified by the number of switch hops in the fabric.

[0053] FIG. 5 illustrates management programs in the management computer, in accordance with an example implementation. In the example of FIG. 5, the functionality of the management computer (500) is to configure the path between the server (or server cluster) and the storage subsystem for presenting a storage volume to the server; however, other equivalent implementations can be utilized depending on the desired implementation. The management computer (500) implements an automated workflow of configuration flows and orchestration for underlying devices such as the servers, storage subsystems, and FC switches.

[0054] The functionality is directed to configuring a path when the user selects a target combination of a physical server (10) and a volume (110). The user can also select the pool (112) to be used if the volume (110) has not been created yet. The management computer (500) provides an interface so that the user can review the current physical server (10) and storage volume (110)/pool (112) inventory, and make a request for the path provision. The path configuration engine (520) works on a received request to configure the actual path in conjunction with any other component managers. Each of the entities on the path can be determined to optimize the entire system throughput and to meet the user demand.

[0055] The path configuration engine (520) implements path management functionality working with path management table (521) and candidate path evaluation table (522). Path configuration engine (520) also communicates with hypervisor manager (523), physical server manager (526), SAN switch manager (529), and storage manager (532) to orchestrate the configuration on each of the system components.

[0056] Each manager (523, 526, 529, 532) communicates with hardware components for the configuration process. For example, physical server manager (526) can collect and change configurations remotely for each registered physical server (10).

[0057] In example implementations, there is a table manager to store configuration information as described in further detail herein. Each manager (523, 526, 529, 532) also has a performance database (Perf. DB) (525, 528, 531, 534) to store and query the performance information. The performance information is gathered from each hardware component, and is used for the monitoring and path configuration features through each component manager.

[0058] The performance metrics stored in the performance database vary depending on the type of each component and the desired implementation. The hypervisor manager (523) and physical server manager (526), for example, gather compute performance information such as CPU usage, consumed memory capacity, network interface usage (bandwidth, sent/received datagram rate), disk usage (throughput IOPS (I/O operations per second), latency), and so on. The SAN switch manager (529) gathers traffic performance information such as port status (online/offline, linked up/down), data transferred (sent/received datagram rate), and so on. The storage manager (532) gathers storage performance information such as processor (158) usage, cache (159) usage (capacity in use, dirty write rate), storage port (155) usage (IOPS, bandwidth, latency), pool (112) and RAID group (113) usage (throughput IOPS, consumed capacity, latency), storage drive (101) busy rate, and so on.

[0059] FIG. 6 illustrates an example of the VM configuration management table (524) to store virtual machine (17) and hypervisor (14) configurations on the management computer (500), in accordance with an example implementation. In the example of FIG. 6, the hypervisor manager (523) is configured to edit each field and record; however, other equivalent implementations can be utilized depending on the desired implementation and the present disclosure is not limited thereto. The table (524) contains the host ID (524b) to identify which physical server (10) hosts the VM (17) identified by VM ID (524a). The host ID (524b) field holds the cluster ID if the connected hypervisor (14) is configured as a host cluster (18). The table (524) also contains the VM (17) resource information indicating resources allocated logically such as vCPU (524c), vRAM (524d), vNIC (524e), vDisk (524f), vHBA (524g), and so on. Such resource allocation can be used for determining the total workload configured on the physical server (10).

[0060] FIG. 7 illustrates an example of server configuration management table (527) to store physical server (10) configuration on the management computer (500), in accordance with an example implementation. In the example of FIG. 7, the physical server manager (526) edits each field and record, however other equivalent implementations can be utilized depending on the desired implementation and the present disclosure is not limited thereto. The table (527) contains host ID (527a) and the location (527b) for each physical server (10). The location (527b) is determined during installation, replacement or expansion of each physical server (10). The table (527) also contains the loaded physical device information such as CPU (527c), RAM (random access memory) (527d), NIC (Network Interface Card, e.g. Ether IF)(527e), disk (527f), HBA (527g), and so on.

[0061] FIG. 8 illustrates an example of switch configuration management table (530) to store switch (55) configurations on the management computer (500), in accordance with an example implementation. In the example of FIG. 8, the SAN switch manager (529) edits each field and record, however other equivalent implementations can be utilized depending on the desired implementation and the present disclosure is not limited thereto. The table (530) contains switch ID (530a), its location (530b), and port ID (530c) on the switch (55). The table also contains detailed port information such as port WWN (530d), port status (530e), connected port/WWN (530f), and so on. Port status (530e) and connected port/WWN (530f) can be updated when the topology is changed by installation, replacement, expansion, or power status change of other connected devices.

[0062] FIG. 9 illustrates an example of the storage configuration management table (533) to store the storage subsystem (100) configuration on the management computer (500), in accordance with an example implementation. In the example of FIG. 9, the storage manager (532) edits each field and record; however, other equivalent implementations can be utilized depending on the desired implementation and the present disclosure is not limited thereto. The table (533) contains the storage subsystem ID (533a) and the volume ID (533c) for each storage subsystem (100). The table (533) also contains the location (533b) of the storage subsystem (100), the connected port ID for each volume (533d), the HSD ID (533e), the pool ID (533f), and so on. The port ID (533d) includes the controller (150) identifier and port WWN. There can be records which have no volume defined, to store unused but available ports on the storage subsystem (100), with the volume ID (533c) shown as "[Not defined]".

[0063] FIG. 10 illustrates an example of the path management table (521) to store each path configuration controlled by the path configuration engine (520), in accordance with an example implementation. The table (521) maintains each entity on the path, identified by path ID (521c), from the host (10) to the volume (110). The table (521) also contains the host ID (521a), volume ID (521b), host port (521d) that represents the HBA port on the physical server (10), storage port (521e), storage pool (521f), storage HSD (521g), and so on. Each record is filled with the identifier of each component as retrieved from each component manager (523, 526, 529, 532). The path configuration engine (520) configures a path based on this table (521) in conjunction with each component manager; however, other equivalent implementations can be utilized depending on the desired implementation and the present disclosure is not limited thereto.

[0064] The path configuration engine (520) provides a path determination algorithm utilizing three phases: a phase for building a topology map, a phase for evaluating the cost per path, and a phase for resolving the best path. The actual topology in the datacenter can be changed dynamically and requests for path provisioning can occur concurrently, so the path configuration engine (520) serializes the provisioning requests and provides an up-to-date optimization of the paths. The path configuration engine (520) uses the path management table (521, FIG. 10) and the candidate path evaluation table (522, FIG. 12) to implement its functionalities.
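
The three phases can be orchestrated sequentially for each provisioning request. The following is a minimal, illustrative sketch in Python and is not taken from the disclosure; the engine object and its method names are hypothetical stand-ins for the functionality described above.

```python
# Hypothetical driver for the three-phase path determination described above.
# The engine interface is an assumed abstraction, not an API defined by the disclosure.
def provision_path(engine, host_id, pool_id):
    """Serialize one provisioning request through the three phases."""
    topology = engine.build_topology_map()                         # phase 1: build the topology map
    candidates = engine.list_candidate_paths(topology, host_id, pool_id)
    for candidate in candidates:                                   # phase 2: evaluate the cost per path
        candidate["total_cost"] = engine.evaluate_cost(topology, candidate)
    best = min(candidates, key=lambda c: c["total_cost"])          # phase 3: resolve the best path
    engine.configure_path(best)                                    # orchestrate the component managers
    return best
```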

[0065] In the building topology map phase, the path configuration engine (520) develops a map that represents the relations between components. FIG. 11 illustrates an example of the topology map (610), in accordance with an example implementation. The engine (520) calls each manager to collect the neighbor WWNs in the fabric for each component. For example, the SAN switch manager (529) can refer to the switch configuration management table (530), verify the port status (530e) on the switch, and collect the connected switch/HBA port information (530f). The path configuration engine (520) can identify the connected WWN (530f) as being a host HBA (15) or a storage subsystem port (155) by matching the HBA field (527g) of the server configuration management table (527) and the port ID field (533d) of the storage configuration management table (533). The path configuration engine (520) gathers and summarizes the neighbor information into the topology map (610). For example, the edge (607) between the physical server (600) and chassis switch A (601a) is identified by referring to the connected port field (530f) of the corresponding FC switch record on the switch configuration management table (530) and matching the HBA field (527g) of the corresponding physical server record on the server configuration management table (527). The edge (608) between ToR switch 1A (602) and storage controller A1 (604a) is likewise identified from the switch configuration management table (530) and the storage configuration management table (533). The path configuration engine (520) can also identify relationships within the same storage subsystem (100), such as storage controller A1 (604a) and storage pool (605). As an example, the storage configuration management table (533) provides information about which volume (533c) is connected to controller A1, port (533d), and pool (533f), including ports which do not have a volume yet.
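
As a concrete illustration of this phase, the following Python sketch builds an adjacency map by matching the connected WWNs reported by the switches against the host HBA and storage port inventories. The data shapes and identifiers are hypothetical simplifications of the tables (527, 530, 533) described above.

```python
from collections import defaultdict

def build_topology_map(server_hba_wwns, storage_port_wwns, switch_links):
    """Build an adjacency map (cf. topology map 610) by matching port WWNs."""
    graph = defaultdict(set)
    for switch_id, wwn in switch_links:            # (switch ID, connected WWN) pairs, cf. field 530f
        peer = server_hba_wwns.get(wwn) or storage_port_wwns.get(wwn)
        if peer:                                   # neighbor is a host HBA (15) or storage port (155)
            graph[switch_id].add(peer)
            graph[peer].add(switch_id)
    return graph

# Toy example with hypothetical identifiers:
servers = {"wwn:hba1": "S101"}
storages = {"wwn:ctl-a1": "CTL-A1"}
links = [("ToR-1A", "wwn:hba1"), ("ToR-1A", "wwn:ctl-a1")]
print({node: sorted(peers) for node, peers in build_topology_map(servers, storages, links).items()})
```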

[0066] Each part of the topology map can be implemented in a more detailed or abstract manner, depending on the desired implementation. For example, FIG. 11 only shows two controllers (604a, 604b) and one pool (605) for storage subsystem A, but in another example implementation there can be detailed structures including the storage port (155), the volume (110), and the RAID group (113) as depicted in other figures of the present disclosure. The detailed map can be utilized if the performance metrics or path costs are calculated for each device. In example implementations, any known graph contraction technique can be used to abstract or to merge sub maps according to the desired implementation.

[0067] Each component manager also gives boundary information, such as the rack, stock keeping unit (SKU), which port is shared with the next component, and so on, as a group. For example, the storage manager (532) can determine that storage controller A1 (604a) and storage controller A2 (604b) are in the same storage subsystem. The storage configuration management table (533) identifies which storage controllers (150) are loaded in the same storage subsystem (100) based on the connected port ID (533d) and the storage subsystem ID (533a). In an example, the map (610) also reveals that physical server S101 (600), chassis switch A (601a), and ToR switch 1A (602) reside within the same rack, because the physical server manager (526) and the SAN switch manager (529) can get the locations from the management tables. The location field (527b) on the server configuration management table (527) and the location field (530b) on the switch configuration management table (530) show that the physical server and switches are accommodated in the same rack "Rack1" in this case. According to the topology map (610), the edge (606) is also identified as an ISL (60) crossing over two racks.
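
A minimal sketch of this boundary check follows, assuming the rack locations from the location fields (527b, 530b) have been collected into a dictionary; the data layout and identifiers are hypothetical.

```python
def is_isl(edge, switch_racks):
    """Flag an edge as an ISL when both endpoints are switches in different racks."""
    a, b = edge
    if a in switch_racks and b in switch_racks:
        return switch_racks[a] != switch_racks[b]
    return False

switch_racks = {"ToR-1A": "Rack1", "ToR-2A": "Rack2"}
print(is_isl(("ToR-1A", "ToR-2A"), switch_racks))   # True: crosses racks, like edge (606)
print(is_isl(("ToR-1A", "CTL-A1"), switch_racks))   # False: one endpoint is not a switch
```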

[0068] The topology map (610) is mathematically represented as a directed graph, and general graph theory is applicable at later phases, but the map can also be represented in other ways according to the desired implementation. In an example, the topology map can be given statically if the system has a fixed configuration. However, the map may be updated for every installation/replacement/expansion incurred, and the connection status should be periodically checked in this case.

[0069] In this phase for building the topology map, the path configuration engine (520) lists all candidate routes in the candidate path evaluation table (522). FIG. 12 illustrates an example of a candidate path evaluation table (522), in accordance with an example implementation. The candidate path evaluation table (522, FIG. 12) stores the path ID (522a) with the path definition (522b) as a record for every candidate route; however, other equivalent implementations may also be utilized and the present disclosure is not limited thereto. The other fields regarding path cost, such as the server cost (522c), network cost (522d), and storage cost (522e), can be empty at this phase.

[0070] The path configuration engine (520) can also reduce the candidate routes by removing records having too many hops (as indicated by the number of bracketed entities in FIG. 12, or by any other desired implementation). The number of acceptable hops can be set by a threshold or by any other method according to the desired implementation. A candidate route across a hardware boundary may have a large number of hops, so this is a reasonable way to reduce the path calculations in the later cost evaluation phase.
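
A hedged sketch of this reduction follows, where a candidate path is represented as a list of entities (cf. the path definition 522b) and the hop-count threshold is an assumed, configurable value rather than one mandated by the disclosure.

```python
def prune_candidates(candidate_paths, max_hops=3):
    """Drop candidate routes whose hop count exceeds the threshold."""
    return [path for path in candidate_paths if len(path) - 1 <= max_hops]

paths = [
    ["S101", "ToR-1A", "CTL-A1", "Pool-A"],             # direct route, 3 hops
    ["S101", "ToR-1A", "ToR-2A", "CTL-B1", "Pool-B"],   # crosses the ISL, 4 hops
]
print(prune_candidates(paths, max_hops=3))               # only the direct route survives
```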

[0071] In the evaluating cost phase, the path configuration engine (520) puts costs on the topology map. The cost indicates how busy the edge (e.g. an arrow between components) is. A busier edge is not generally preferred, so the most preferred path should have the minimum cost in this algorithm. However, how the minimum cost is determined can also include other factors besides the busyness of the edge, depending on the desired implementation, and the present disclosure is not limited thereto. In graph theory terms, these costs are represented as weights for each edge, and the selection is a weight minimization problem.

[0072] The performance metrics provide the path cost as a normalized number or through other valuation means. As mentioned above, each performance metric can involve different units such as IOPS, bytes per second, and latency time. The path configuration engine (520) evaluates these metrics as arbitrary unit numbers or through other normalization schemes. For example, one possible evaluation technique is to use the amount value divided by the mean value or the maximum capacity value. For example, a throughput of 60 MB per second is calculated as 0.6 when normalized by a total bandwidth of 100 MB per second. Other methods (e.g., normalization to a scale) are also possible depending on the desired implementation.
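
The normalization example above can be expressed directly. The function below is a minimal sketch of the amount-divided-by-capacity scheme only, not a formula mandated by the disclosure.

```python
def normalize(value, capacity):
    """Normalize a raw metric by its maximum capacity (or mean value)."""
    return value / capacity

print(normalize(60.0, 100.0))   # 0.6 for 60 MB/s throughput on a 100 MB/s port
```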

[0073] Each of the performance metrics can be considered to be a path cost, and the user can select the higher prioritized metrics by adding more weight to the value. For some metrics, such as latency, the value may become large for a busy component. Such metrics can be calculated as negative values in this algorithm, depending on the desired implementation. In an example implementation as described below, the performance metric can be in the form of a cost function involving one or more costs such as a server cost, a network cost, a storage system cost, and so on. The cost function can be any function utilizing any costs according to the desired implementation, and path selection can be conducted based on a selection minimizing the cost function as shown in the examples provided below.

[0074] The metrics are added to graph edges as weights, or categorized for each type of server/network/storage, for example. The server cost is calculated for each server side edge and stored in the server cost field (522c) on the candidate path evaluation table (522). Server side performance metric values, such as CPU (11) usage and memory (12) usage, are added to all edges that start from the corresponding physical server (e.g., arrows fanning out from physical server (600)) and to the server cost field (522c). A metric for which the edge can be specified, such as HBA port (15) usage, is added to the corresponding edge (e.g., edge (607)) and to the server cost field (522c). The network cost is calculated for each switch and stored in the network cost field (522d). Network side performance metrics, such as switch port (58) usage, are added to the corresponding edges (which start/end with a switch) and to the network cost field (522d). To emphasize the ISL (60) in the topology map, an additional cost can be added to the specified edge (606). This additional cost can be a pre-defined threshold, an automatically calculated value (e.g., the maximum network cost before applying such additional cost), or determined by other methods depending on the desired implementation. The storage cost is calculated for each storage side edge and stored in the storage cost field (522e) as well. Storage side performance metrics, such as the storage pool (112) busy rate, are added to the corresponding edges (e.g., arrows fanning out from/into storage controller A1 (604a)) and to the storage cost field (522e). The sum of each categorized cost is stored in the total cost field (522f).
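
The aggregation above can be sketched as follows. The particular metrics, the ISL penalty, and the equal weighting are assumptions chosen for illustration; the disclosure leaves the exact cost function to the desired implementation.

```python
def path_cost(server_metrics, network_metrics, storage_metrics,
              crosses_isl=False, isl_penalty=1.0):
    """Sum normalized per-category costs (cf. fields 522c-522e) into a total cost (522f)."""
    server_cost = server_metrics["cpu_usage"] + server_metrics["mem_usage"]
    network_cost = network_metrics["port_usage"] + (isl_penalty if crosses_isl else 0.0)
    storage_cost = storage_metrics["pool_busy_rate"]
    return {
        "server_cost": server_cost,
        "network_cost": network_cost,
        "storage_cost": storage_cost,
        "total_cost": server_cost + network_cost + storage_cost,
    }

print(path_cost({"cpu_usage": 0.4, "mem_usage": 0.3},
                {"port_usage": 0.6},
                {"pool_busy_rate": 0.2},
                crosses_isl=True))
```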

[0075] Other than the performance metrics, population information (such as the number of existing paths, the number of running VMs on the host, and so on) can be considered as a path cost. From a server resource perspective, values such as the number of VMs or the total capacity (e.g., vCPU, vRAM, vNIC, etc.) consumed by VMs are examples of population information. From a storage perspective, values such as the number of HSDs defined on storage ports, or the number of volumes carved out of the same storage pool, are examples of population information. The user can also select whether the population information needs to be considered or not, as with the other performance metrics. The population information, as well as the performance metrics mentioned above, is calculated and added to the corresponding edge or to the candidate path evaluation table (522).

[0076] In the resolving the best path phase, the path configuration engine (520) calculates the preferred path. In example implementations, any known graph solver for weighted graph problems can calculate and prioritize the paths in the candidate path evaluation table (522) according to the desired implementation. The simplest way to determine the preferred path is to order the candidate paths by total cost (522f) and select the top result. After the preferred path is uniquely determined, the path configuration engine (520) begins configuring the path in conjunction with each component manager (523, 526, 529, 532).
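
For the simplest strategy named above, ordering by total cost and taking the top entry reduces to a single minimum over the candidate records. The record layout below is a hypothetical stand-in for the candidate path evaluation table (522).

```python
def resolve_best_path(candidate_records):
    """Return the candidate record with the minimum total cost (cf. field 522f)."""
    return min(candidate_records, key=lambda record: record["total_cost"])

candidates = [
    {"path_id": 1, "path": ["S101", "ToR-1A", "CTL-A1"], "total_cost": 1.5},
    {"path_id": 2, "path": ["S101", "ToR-1A", "ToR-2A", "CTL-B1"], "total_cost": 2.6},
]
print(resolve_best_path(candidates)["path_id"])   # 1: the cheaper, ISL-free route
```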

[0077] From a redundancy perspective, the user can choose a few of the higher rated paths, including the most preferred path, in order to configure multiple paths from the same host to the same volume. In this case, redundant paths will be configured which use different components such as the chassis switch, storage controller (150), storage port (155), and so on.

[0078] FIG. 13 illustrates an example flow chart for providing path provisioning functionality, in accordance with an example implementation. At 701, the topology map is updated to reflect the current configuration. All reachable components, such as the FC switches and storage ports, are discovered and the map is built to represent their relations. The flow at 701 can be conducted periodically in advance to reduce the calculation cost in the workflow.

[0079] At 702, a selection of a physical server (10) is processed to provision a volume and to inform the path configuration engine (520). At 703, a selection of a storage pool (112) is processed to create volume for the physical server (10) selected at 702. The combination of the target host and storage pool in the storage subsystem can be determined by this flow.

[0080] At 704, the path configuration engine (520) scans the topology map and determines the FC switch combination which provides the smallest number of hops. This allows the engine to focus on the graphically shortest path and reduces the candidate paths to evaluate. The graphically shortest path may not be the exact preferred path which gives the minimum path cost, but in this example implementation, the ISL can be avoided.
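
One way to perform this scan, assuming the topology map is held as an adjacency dictionary, is a plain breadth-first search, which returns a fewest-hop route; this is an illustrative sketch rather than an algorithm required by the disclosure.

```python
from collections import deque

def shortest_hop_path(graph, source, target):
    """Return a fewest-hop path from source to target, or None if unreachable."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in graph.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

graph = {"S101": ["ToR-1A"], "ToR-1A": ["S101", "ToR-2A", "CTL-A1"],
         "ToR-2A": ["ToR-1A", "CTL-B1"], "CTL-A1": ["ToR-1A"], "CTL-B1": ["ToR-2A"]}
print(shortest_hop_path(graph, "S101", "CTL-A1"))   # ['S101', 'ToR-1A', 'CTL-A1']
```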

[0081] At 705, the path configuration engine (520) updates the candidate path evaluation table (522) to list possible paths through the switch as determined from 704.

[0082] At 706, the path configuration engine (520) selects one candidate path from the evaluation table (522) to calculate the path cost. Each of the performance metrics and/or the population information can be used for the cost calculation. As an example path cost, the engine (520) can consider the number of HSDs for each storage port, which gives a relative rating of how busy the target storage port is. Each path cost is stored by category in the candidate path evaluation table (522), or added to the topology map (610) as a weight. The path configuration engine (520) calculates the entire end-to-end path cost and records the result in the total path cost field (522f).

[0083] At 707, the path configuration engine (520) checks whether any path records remain in the evaluation table (522) that have not been evaluated yet. If any candidate path remains (YES), the flow proceeds back to 706; otherwise, if the evaluation is complete for all path candidates (NO), the flow proceeds to 708.

[0084] At 708, the path configuration engine (520) orders the records in the evaluation table (522) by path cost (522f), from smallest to largest, and picks the top record as the minimum cost path. The path configuration engine (520) then stores the selected candidate record to the path management table (521) and runs the automation workflow to configure the volume and its path from the host. If redundant paths are required, the path configuration engine (520) selects multiple higher rated paths which include a different storage port (155) and/or storage controller (150) by verifying each entity included in the path definition (522b) of the candidate path evaluation table (522).
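
A hedged sketch of the redundant selection at 708 follows: after sorting by total cost, additional paths are accepted only if they use a storage controller not already present in an accepted path. The field names are hypothetical, and the uniqueness check could equally cover the storage port or chassis switch.

```python
def select_paths(candidates, redundant_count=2):
    """Pick the cheapest path plus alternates that use different storage controllers."""
    ordered = sorted(candidates, key=lambda record: record["total_cost"])
    selected, used_controllers = [], set()
    for record in ordered:
        if record["controller"] not in used_controllers:
            selected.append(record)
            used_controllers.add(record["controller"])
        if len(selected) == redundant_count:
            break
    return selected

candidates = [
    {"path_id": 1, "controller": "CTL-A1", "total_cost": 1.5},
    {"path_id": 2, "controller": "CTL-A1", "total_cost": 1.7},
    {"path_id": 3, "controller": "CTL-A2", "total_cost": 2.1},
]
print([record["path_id"] for record in select_paths(candidates)])   # [1, 3]
```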

[0085] The example implementations described above provide a way to optimize the path configuration over a large number of switches and end-components. Specifically, the example implementations can resolve the optimal path when the user gives the target host and volume/pool in the storage subsystem. The management computer accepts the path provisioning requests and executes them exclusively based on the up-to-date SAN topology. The example implementations enable the system to remain reliable and agile in the modern datacenter.

[0086] In another example implementation, a system which allows users to configure an optimized path, including volume allocation, is disclosed.

[0087] In the previous example implementations, the user needs to select the volume or pool as the path destination. Because the user is not expected to know the details of the physical SAN topology, the user may choose a non-preferred storage subsystem as the destination. The example implementations described below consider which storage subsystem can be the optimal destination from a topology and path cost standpoint, and suggest the optimal configuration to the user.

[0088] The example implementations described below have the same system and management computer (500) implementation as the previously described example implementations. The differences in the SAN topology and the flow diagram are described below.

[0089] FIG. 14 illustrates an example fabric configuration, in accordance with an example implementation. In the example of FIG. 14, ToR switch 1A (58a) connects directly only to storage subsystem A (100a), and the physical server (10a) needs to cross the ISL (60) to reach storage subsystem B (100b). Assuming the user selects the physical server (10a) and storage subsystem B (100b) to configure a new storage volume (110b), this is not the most preferred combination because the path must include the ISL (60). In this case, the path configuration engine (520) suggests storage subsystem A (100a) as the destination for creating the new volume (110a) and path.

[0090] FIG. 15 illustrates an example flow chart, in accordance with an example implementation.

[0091] The flow at 711, 712, and 713 is the same as the flow at 701, 702, and 703 (FIG. 13), respectively, and assumes that a non-preferred pool was selected. At 714, the path configuration engine (520) verifies the topology map and detects all routes that may include the ISL (60). The topology map can identify the ISL (60), for example, as a hardware boundary across racks. At 715, the path configuration engine (520) verifies whether there are other alternative routes that provide a shorter path, and searches for the destination storage subsystem if such an alternative route is found. These alternative routes can be found by calculating smaller switch hop counts from the same selected physical server (10) to an arbitrary storage subsystem (100). If such a path is found (YES), the flow proceeds to 716. Otherwise, if no such path is found (NO), the flow proceeds to 718.
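One way the checks at 714 and 715 could work, assuming ISL edges are marked in the topology map, is sketched below: compute the hop count to each candidate subsystem while refusing to traverse ISL edges, and suggest any subsystem that is reachable that way with fewer hops than the originally chosen destination. The function and parameter names are hypothetical.

    from collections import deque

    def suggest_alternative_subsystem(topology, server, subsystems, isl_edges, chosen):
        """Suggest a storage subsystem reachable with fewer hops and no ISL, if any."""
        def hop_count(target, avoid_isl):
            queue, seen = deque([(server, 0)]), {server}
            while queue:
                node, hops = queue.popleft()
                if node == target:
                    return hops
                for neighbor in topology[node]:
                    if neighbor in seen:
                        continue
                    if avoid_isl and frozenset((node, neighbor)) in isl_edges:
                        continue
                    seen.add(neighbor)
                    queue.append((neighbor, hops + 1))
            return None  # unreachable under the given constraint

        chosen_hops = hop_count(chosen, avoid_isl=False)
        best, best_hops = None, chosen_hops
        for subsystem in subsystems:
            hops = hop_count(subsystem, avoid_isl=True)
            if hops is not None and (best_hops is None or hops < best_hops):
                best, best_hops = subsystem, hops
        return best  # None means no better ISL-free destination was found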

[0092] At 716, the path configuration engine (520) asks for a change of the destination storage subsystem (100) to create the new volume along an alternative route. At 717, the flow processes a selection as to whether the suggested alternative is accepted or not. If it is accepted (YES), the flow proceeds to 718. If it is not accepted (NO), the process ends with an error status. In another example implementation, the user can also force the path configuration even if it is a non-preferred route, instead of terminating the process with an error.

[0093] At 719, 720, and 721, the flow proceeds as in the flow at 706, 707, and 708. The example implementation as shown in FIG. 15 provides not only a mechanism for a user override (e.g., manual path configuration if desired), but also an improvement for volume allocation in a configuration spanning multiple storage subsystems. The management computer (500) can help the user decide which volume location is better, without requiring consideration of the underlying physical fabric configuration.

[0094] In the following example implementation, the system allows users to configure an optimized path across multiple sites. In the previous example implementations, the shorter path with the smaller hop count is preferred within a local datacenter. However, with respect to multi-site configurations, there is an exception for the cross-site path. The management computer should be aware of cross-site paths derived from the underlying topology. In the following example implementation, two spatially distant systems, such as separate floors, separate buildings within a datacenter facility, or distant cities where the datacenters are located, are defined as sites.

[0095] FIG. 16 illustrates a system configuration, in accordance with an example implementation. In the following example implementations, the pair of storage subsystems provides an active-active synchronization feature (410) for the volume. The feature provides non-disruptive failover in case of a primary volume (110c) failure, by redirecting host I/O to the secondary volume (110d). The physical server (10a) can see both the volume (110c) in storage subsystem C (100c) and the volume (110d) in storage subsystem D (100d) as a single unique volume, and write/read data goes to both paired storage subsystems (100c, 100d). This configuration requires both the path going to storage subsystem C (100c) and the other path going to storage subsystem D (100d). The path from the physical server (10a) to the volume (110d) in storage subsystem D (100d) must traverse the cross-site link (61) to implement a secondary path. In the prior example implementations, routes which include the cross-site link (61) would be considered high-cost paths and could be avoided as non-preferred paths. Furthermore, the path configuration engine (520) based on the prior example implementations may have to expend resources on path calculations that do not include the cross-site link (61), even if the user wishes to have such links considered. In the present example implementation, the system allows the user to select the cross-site link as a prerequisite and reduces the cost of the path calculation.

[0096] The physical configuration, logical configuration, and management program of this example implementation are almost the same as in the prior example implementations as shown in FIGS. 2, 3, and 5. The management computer (500) also has the capability to manage all hardware components in both sites (site C and site D in the example of FIG. 16). Alternatively, management computers (500) can be configured separately for each site, but in such implementations, the management information should be shared and synchronized to keep consistency across sites.

[0097] In the example implementation of FIG. 16, the flow at 704 from FIG. 13 to determine the switch combination is modified to detect the cross-site link (61).

[0098] FIG. 17 illustrates an example of a topology map (620), in accordance with an example implementation. In the same step (704), the path configuration engine (520) can determine that the rack connected to the destination storage subsystem D (100d) is located in a different site from the site hosting the physical server (10a). Each component manager (526, 529, 532) can identify site locations based on the location field (527b, 530b, 533b) in each configuration management table. Further, the location field (533b) in the storage configuration management table (533) can provide the site information. The activation of the active-active synchronization feature (410) can also be used as a trigger to branch off into this process.

[0099] After the engine (520) succeeds in identifying the cross-site link (61), the engine (520) divides the topology map (620) into two sub maps at the edge which represents the cross-site link (616). One sub map includes the initiator physical server (10) as the start-node (600) and the switch on the boundary as the end-node (615c). The other sub map includes the switch on the boundary as the start-node (615d) and the destination volume (110d) as the end-node (616). The path evaluation and resolution process (705-708) is applied to each sub map, and each minimum-cost path can be determined. By dividing the topology map into sub maps, both the optimization from the physical server (10) to the cross-site link (61) and the optimization from the cross-site link (61) to the remote storage subsystem D (100d) can be achieved. In this algorithm, the path configuration engine (520) does not need to take the local storage subsystem C (100c) into consideration, as the first sub map ends ahead of that subsystem (100c). Such example implementations allow the engine (520) to focus on the cross-site path and drastically reduce the processing cost for path evaluation.

[0100] The example implementations disclosed in FIGS. 16 and 17 allow the user to set conditions for each path selection. Users need not be concerned about the physical fabric configuration even if they want to use a cross-site feature such as the one mentioned above.
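The division of the topology map at the cross-site edge can be illustrated as a graph split: remove the cross-site edge, then take the nodes reachable from each of its endpoints as one sub map. The sketch below assumes the adjacency-set representation used in the earlier fragments and is only illustrative, not the disclosed implementation.

    def split_at_cross_site_link(topology, cross_site_edge):
        """Split a topology map into two sub maps at the cross-site link edge."""
        local_end, remote_end = cross_site_edge

        def sub_map(start):
            seen, stack = {start}, [start]
            while stack:
                node = stack.pop()
                for neighbor in topology[node]:
                    # Never traverse the cross-site link while flooding.
                    if {node, neighbor} == {local_end, remote_end}:
                        continue
                    if neighbor not in seen:
                        seen.add(neighbor)
                        stack.append(neighbor)
            # Keep only the edges internal to this side of the boundary.
            return {node: topology[node] & seen for node in seen}

        return sub_map(local_end), sub_map(remote_end)

Path evaluation (the equivalent of 705-708) can then be run on each returned sub map independently: from the physical server to the boundary switch on the first, and from the remote boundary switch to the destination volume on the second.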

[0101] By using the example implementations described herein, a user who administers a large-scale datacenter can optimize the entire SAN configuration dynamically. End-users running their workloads on the datacenter can be kept at an appropriate service level to meet their business demands.

[0102] Even in future datacenters with hyper-scale architecture, the example implementations described herein can provide location-aware optimization for the storage path. Such an architecture will also be restricted by hardware boundaries such as racks or aggregation switches. The example implementations can suggest a preferred path even if the user does not take the physical configuration into consideration.

[0103] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.

[0104] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," "displaying," or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.

[0105] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.

[0106] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.

[0107] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.

[0108] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the teachings of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.