Title:
SYSTEM, METHOD, AND APPARATUS FOR MANAGING A STORAGE AREA NETWORK
Document Type and Number:
WIPO Patent Application WO/2019/112581
Kind Code:
A1
Abstract:
Provided is a computer-implemented method for managing a storage area network (SAN), including generating a first data structure based on a plurality of hardware components of the SAN, generating a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, determining a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure, and merging the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes. A system for managing a storage area network (SAN) is also provided.

Inventors:
SHEN YI (US)
MOSER GEORGE (US)
PATTANAIK SANGRAM (US)
Application Number:
US2017/065000
Publication Date:
June 13, 2019
Filing Date:
December 07, 2017
Assignee:
VISA INTERNATIONAL SERVICE ASSOCIATION (US)
International Classes:
H04L29/08; G06F3/06; G06F12/00; G06F15/16; G06F15/173
Foreign References:
US20130333000A12013-12-12
US20150355863A12015-12-10
US20030220974A12003-11-27
US7194538B12007-03-20
US8060630B12011-11-15
US20070078988A12007-04-05
US20140052845A12014-02-20
Attorney, Agent or Firm:
PREPELKA, Nathan, J. et al. (The Webb Law Firm, One Gateway Center420 Ft. Duquesne Blvd., Suite 120, Pittsburgh Pennsylvania, 15222, US)
Claims:
THE INVENTION CLAIMED IS

1. A computer-implemented method for managing a storage area network (SAN), comprising:

generating, with at least one processor, a first data structure based on a plurality of hardware components of the SAN, the first data structure comprising a plurality of nodes representing a configuration of the plurality of hardware components of the SAN;

generating, with at least one processor, a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing a configuration of the plurality of software applications and the at least one software service;

determining, with at least one processor, a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and

merging, with at least one processor, the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

2. The computer-implemented method of claim 1, wherein the plurality of nodes of the first data structure comprises: (i) a plurality of host server nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, and (iv) a plurality of logical unit nodes, such that each host server node of the plurality of host server nodes is a parent node to at least one storage array node of the plurality of storage array nodes, each storage array node of the plurality of storage array nodes is a parent node to at least one pool node of the plurality of storage pool nodes, and each pool node of the plurality of storage pool nodes is a parent node to at least one logical unit node of the plurality of logical unit nodes.

3. The computer-implemented method of claim 2, wherein the plurality of nodes of the second data structure comprises: (i) a plurality of service nodes, (ii) a plurality of application nodes, and (iii) a plurality of host nodes, such that each service node of the plurality of service nodes is a parent node of at least one application node of the plurality of application nodes, and each application node of the plurality of application nodes is a parent node to at least one host node of the plurality of host nodes.

4. The computer-implemented method of claim 3, wherein the plurality of common nodes comprises the plurality of host nodes of the second data structure and the plurality of host server nodes of the first data structure.

5. The computer-implemented method of claim 4, wherein determining the plurality of common nodes comprises matching at least one host node of the plurality of host nodes of the second data structure to at least one host server node of the plurality of host server nodes of the first data structure.

6. The computer-implemented method of claim 2, wherein the plurality of nodes of the first data structure further includes a plurality of fabric nodes, wherein each host server node of the plurality of host server nodes of the first data structure is a parent node to at least one fabric node of the plurality of fabric nodes, and wherein each fabric node of the plurality of fabric nodes is a parent node to at least one storage array node of the plurality of storage array nodes.

7. The computer-implemented method of claim 1, further comprising:

receiving at least one allocation request; and

automatically allocating storage capacity utilization based on the at least one allocation request and analyzing the merged graph data structure.

8. The computer-implemented method of claim 1, further comprising:

detecting an impact of at least one software application or software service of the plurality of software applications and the at least one software service;

in response to detecting the impact, analyzing the merged graph data structure to correlate the impact to at least one hardware component of the SAN; and

remediating the impact by automatically initiating a remediation process for the at least one hardware component of the SAN.

9. The computer-implemented method of claim 1, further comprising:

detecting that a host server is decommissioned or is planned to be decommissioned;

in response to detecting that the host server is decommissioned or planned to be decommissioned, analyzing the merged graph data structure to correlate the host server to a plurality of logical storage unit allocations; and

generating a message based on the plurality of logical storage unit allocations.

10. A system for managing a storage area network (SAN), comprising at least one processor programmed or configured to:

generate a first data structure based on an architecture of the SAN, the first data structure comprising a plurality of nodes representing a plurality of hardware components of the SAN;

generate a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing the plurality of software applications and the at least one software service;

determine a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and

merge the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

11. The system of claim 10, wherein the plurality of nodes of the first data structure comprises: (i) a plurality of host server nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, and (iv) a plurality of logical unit nodes, such that each host server node of the plurality of host server nodes is a parent node to at least one storage array node of the plurality of storage array nodes, each storage array node of the plurality of storage array nodes is a parent node to at least one pool node of the plurality of storage pool nodes, and each pool node of the plurality of storage pool nodes is a parent node to at least one logical unit node of the plurality of logical unit nodes.

12. The system of claim 11, wherein the plurality of nodes of the second data structure comprises: (i) a plurality of service nodes, (ii) a plurality of application nodes, and (iii) a plurality of host nodes, such that each service node of the plurality of service nodes is a parent node of at least one application node of the plurality of application nodes, and each application node of the plurality of application nodes is a parent node to at least one host node of the plurality of host nodes.

13. The system of claim 12, wherein the plurality of common nodes comprises the plurality of host nodes of the second data structure and the plurality of host server nodes of the first data structure.

14. The system of claim 13, wherein determining the plurality of common nodes comprises matching at least one host node of the plurality of host nodes of the second data structure to at least one host server node of the plurality of host server nodes of the first data structure.

15. The system of claim 11, wherein the plurality of nodes of the first data structure further includes a plurality of fabric nodes, wherein each host server node of the plurality of host server nodes of the first data structure is a parent node to at least one fabric node of the plurality of fabric nodes, and wherein each fabric node of the plurality of fabric nodes is a parent node to at least one storage array node of the plurality of storage array nodes.

16. The system of claim 10, wherein the at least one processor is further programmed or configured to automatically allocate storage capacity utilization based on analyzing the graph data structure and at least one allocation request.

17. The system of claim 10, wherein the at least one processor is further programmed or configured to:

detect an impact of at least one software application or service of the plurality of software applications and the at least one software service;

in response to detecting the impact, analyze the graph data structure to correlate the impact to at least one hardware component of the SAN; and

remediate the impact by automatically initiating a remediation process for the at least one hardware component of the SAN.

18. The system of claim 10, wherein the at least one processor is further programmed or configured to:

detect that a host server is decommissioned or is planned to be decommissioned;

in response to detecting that the host server is decommissioned or planned to be decommissioned, analyze the graph data structure to correlate the host server to a plurality of logical storage unit allocations; and

generate a message based on the plurality of logical storage unit allocations.

19. A system for managing a storage area network (SAN), comprising:

(a) at least one non-transitory data storage medium comprising a graph data structure including a plurality of nodes, the plurality of nodes comprising: (i) a plurality of host nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, (iv) a plurality of logical unit nodes, (v) a plurality of service nodes, and (vi) a plurality of application nodes, the graph data structure based at least partially on a configuration of hardware components of the SAN and a plurality of software applications and at least one software service that utilize at least one hardware component of the SAN; and

(b) at least one processor in communication with the at least one non-transitory data storage medium, the at least one processor programmed or configured to:

(i) analyze the graph data structure; and

(ii) automatically allocate storage capacity utilization based on analyzing the graph data structure, automatically remediate at least one component of the SAN based on analyzing the graph data structure and in response to detecting an impact, automatically remediate at least one software application or software service based on analyzing the graph data structure and in response to detecting an impact, or any combination thereof.

20. The system of claim 19, wherein the at least one processor is programmed or configured to automatically allocate storage capacity utilization based on analyzing the graph data structure.

21. The system of claim 19, wherein the at least one processor is programmed or configured to automatically remediate at least one component of the SAN based on analyzing the graph data structure and in response to detecting an impact.

22. The system of claim 19, wherein the at least one processor is programmed or configured to automatically remediate at least one software application or software service based on analyzing the graph data structure and in response to detecting an impact.

23. An apparatus for managing a storage area network (SAN), comprising at least one non-transitory computer-readable medium including program instructions that, when executed by at least one processor, cause the at least one processor to:

generate a first data structure based on an architecture of the SAN, the first data structure comprising a plurality of nodes representing a plurality of hardware components of the SAN;

generate a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing the plurality of software applications and the at least one software service;

determine a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and

merge the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

Description:
SYSTEM, METHOD, AND APPARATUS FOR MANAGING A STORAGE AREA

NETWORK

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] This invention relates generally to storage area networks and, in one particular embodiment, to a system, method, and apparatus for managing a storage area network.

2. Technical Considerations

[0002] Storage area networks (SANs) are specialized, high-speed networks of storage devices and switches connected to hosts, providing shared pools of storage devices corresponding to multiple hosts. This allows the hosts to access the storage as though the storage was local and directly connected.

[0003] With the growth of SANs and virtualization, configuring data storage becomes increasingly complicated. As the number of storage devices and hosts increases, so does the complexity of determining specific parameters of the SAN configuration, such as which hosts are connected to which storage devices. For example, it is difficult to analyze the impact that software applications, software services, and individual SAN components may have on the SAN, or the impact that the SAN may have on software applications and/or software services. As an example, to determine which software applications and/or software services may be impacted by a specific storage array experiencing errors, individuals must have knowledge of the domain and perform time-intensive research to determine the configuration. Since SANs can quickly change and expand, the SAN has to be reanalyzed each time.

SUMMARY OF THE INVENTION

[0004] According to a non-limiting embodiment, provided is a computer-implemented method for managing a storage area network (SAN), comprising: generating, with at least one processor, a first data structure based on a plurality of hardware components of the SAN, the first data structure comprising a plurality of nodes representing a configuration of the plurality of hardware components of the SAN; generating, with at least one processor, a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing a configuration of the plurality of software applications and the at least one software service; determining, with at least one processor, a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and merging, with at least one processor, the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.
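As a concrete illustration of the two data structures described above, the sketch below (hypothetical, not taken from the patent; all node identifiers such as `host:web01` are invented) represents each structure as a parent-to-children adjacency map:

```python
# Hypothetical sketch of the two data structures described above,
# represented as parent -> children adjacency maps. All identifiers
# (host:web01, array:A1, ...) are invented for illustration.

# First data structure: the hardware configuration of the SAN
# (host server -> storage array -> storage pool -> logical unit).
hardware_graph = {
    "host:web01": ["array:A1"],
    "array:A1": ["pool:P1"],
    "pool:P1": ["lun:L1", "lun:L2"],
}

# Second data structure: the software configuration
# (service -> application -> host).
software_graph = {
    "service:payments": ["app:checkout"],
    "app:checkout": ["host:web01"],
}

# "host:web01" appears in both structures, so it is a common node:
# merging the two maps at that node yields a single graph in which a
# software service can be traced down to individual logical units.
```

The key observation is that the host tier appears in both structures, which is what makes the merge of the following paragraphs possible.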

[0005] In non-limiting embodiments, the plurality of nodes of the first data structure comprises: (i) a plurality of host server nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, and (iv) a plurality of logical unit nodes, such that each host server node of the plurality of host server nodes is a parent node to at least one storage array node of the plurality of storage array nodes, each storage array node of the plurality of storage array nodes is a parent node to at least one pool node of the plurality of storage pool nodes, and each pool node of the plurality of storage pool nodes is a parent node to at least one logical unit node of the plurality of logical unit nodes. In non-limiting embodiments, the plurality of nodes of the second data structure comprise: (i) a plurality of service nodes, (ii) a plurality of application nodes, and (iii) a plurality of host nodes, such that each service node of the plurality of service nodes is a parent node of at least one application node of the plurality of application nodes, and each application node of the plurality of application nodes is a parent node to at least one host node of the plurality of host nodes.

[0006] In non-limiting embodiments, the plurality of common nodes comprises the plurality of host nodes of the second data structure and the plurality of host server nodes of the first data structure. In non-limiting embodiments, determining the plurality of common nodes comprises matching at least one host node of the plurality of host nodes of the second data structure to at least one host server node of the plurality of host server nodes of the first data structure. In non-limiting embodiments, the plurality of nodes of the first data structure further includes a plurality of fabric nodes, wherein each host server node of the plurality of host server nodes of the first data structure is a parent node to at least one fabric node of the plurality of fabric nodes, and wherein each fabric node of the plurality of fabric nodes is a parent node to at least one storage array node of the plurality of storage array nodes.
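The matching and merging steps described above can be sketched as follows (hypothetical code, not from the patent; the helper names and node identifiers are invented): common nodes fall out of a set intersection over the two structures' node sets, and the merge unifies edge lists under shared identifiers.

```python
def all_nodes(graph):
    """Every node mentioned in an adjacency map (parents and children)."""
    nodes = set(graph)
    for children in graph.values():
        nodes.update(children)
    return nodes

def common_nodes(first, second):
    """Nodes appearing in both data structures, e.g. host servers of the
    first structure that are also host nodes of the second."""
    return all_nodes(first) & all_nodes(second)

def merge_graphs(first, second):
    """Merge two adjacency maps into one; nodes sharing an identifier
    are merged automatically because their edge lists are unified."""
    merged = {}
    for graph in (first, second):
        for parent, children in graph.items():
            merged.setdefault(parent, [])
            for child in children:
                merged.setdefault(child, [])
                if child not in merged[parent]:
                    merged[parent].append(child)
    return merged

# Example: a host node common to both structures becomes the join point.
hardware = {"host:web01": ["array:A1"], "array:A1": ["lun:L1"]}
software = {"app:checkout": ["host:web01"]}
merged = merge_graphs(hardware, software)
# common_nodes(hardware, software) -> {"host:web01"}
# merged["app:checkout"] -> ["host:web01"]
# merged["host:web01"]   -> ["array:A1"]
```

In this sketch, matching is done by exact identifier; a real system would likely normalize hostnames before comparing, but the patent does not specify the matching criterion.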

[0007] In non-limiting embodiments, the method further comprises receiving at least one allocation request; and automatically allocating storage capacity utilization based on the at least one allocation request and analyzing the merged graph data structure. In non-limiting embodiments, the method further comprises detecting an impact of at least one software application or software service of the plurality of software applications and the at least one software service; in response to detecting the impact, analyzing the merged graph data structure to correlate the impact to at least one hardware component of the SAN; and remediating the impact by automatically initiating a remediation process for the at least one hardware component of the SAN. In non-limiting embodiments, the method further comprises detecting that a host server is decommissioned or is planned to be decommissioned; in response to detecting that the host server is decommissioned or planned to be decommissioned, analyzing the merged graph data structure to correlate the host server to a plurality of logical storage unit allocations; and generating a message based on the plurality of logical storage unit allocations.
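The impact-correlation and decommissioning analyses described above reduce, in this hypothetical sketch (the `descendants` helper and all node names are invented), to reachability searches over the merged graph:

```python
# Hypothetical sketch: impact correlation and decommissioning checks
# as reachability searches over a small merged graph. All node
# identifiers are invented for illustration.
merged = {
    "service:payments": ["app:checkout"],
    "app:checkout": ["host:web01"],
    "host:web01": ["array:A1"],
    "array:A1": ["pool:P1"],
    "pool:P1": ["lun:L1", "lun:L2"],
}

def descendants(graph, start):
    """All nodes reachable below `start` (iterative depth-first search)."""
    seen, stack = set(), [start]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Impact correlation: which hardware sits under an impacted application?
impacted_hardware = {n for n in descendants(merged, "app:checkout")
                     if n.split(":", 1)[0] in ("array", "pool", "lun")}

# Decommission check: which logical unit allocations are tied to a host
# server that is being decommissioned?
host_luns = {n for n in descendants(merged, "host:web01")
             if n.startswith("lun:")}
```

Here `impacted_hardware` would drive the remediation process and `host_luns` the generated decommissioning message, under the stated assumption that node identifiers encode their tier.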

[0008] According to another non-limiting embodiment, provided is a system for managing a storage area network (SAN), comprising at least one processor programmed or configured to: generate a first data structure based on an architecture of the SAN, the first data structure comprising a plurality of nodes representing a plurality of hardware components of the SAN; generate a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing the plurality of software applications and the at least one software service; determine a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and merge the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

[0009] In non-limiting embodiments, the plurality of nodes of the first data structure comprises: (i) a plurality of host server nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, and (iv) a plurality of logical unit nodes, such that each host server node of the plurality of host server nodes is a parent node to at least one storage array node of the plurality of storage array nodes, each storage array node of the plurality of storage array nodes is a parent node to at least one pool node of the plurality of storage pool nodes, and each pool node of the plurality of storage pool nodes is a parent node to at least one logical unit node of the plurality of logical unit nodes. In non-limiting embodiments, the plurality of nodes of the second data structure comprise: (i) a plurality of service nodes, (ii) a plurality of application nodes, and (iii) a plurality of host nodes, such that each service node of the plurality of service nodes is a parent node of at least one application node of the plurality of application nodes, and each application node of the plurality of application nodes is a parent node to at least one host node of the plurality of host nodes.

[0010] In non-limiting embodiments, the plurality of common nodes comprises the plurality of host nodes of the second data structure and the plurality of host server nodes of the first data structure. In non-limiting embodiments, determining the plurality of common nodes comprises matching at least one host node of the plurality of host nodes of the second data structure to at least one host server node of the plurality of host server nodes of the first data structure. In non-limiting embodiments, the plurality of nodes of the first data structure further includes a plurality of fabric nodes, wherein each host server node of the plurality of host server nodes of the first data structure is a parent node to at least one fabric node of the plurality of fabric nodes, and wherein each fabric node of the plurality of fabric nodes is a parent node to at least one storage array node of the plurality of storage array nodes.

[0011] In non-limiting embodiments, the at least one processor is further programmed or configured to automatically allocate storage capacity utilization based on analyzing the graph data structure and at least one allocation request. In non limiting embodiments, the at least one processor is further programmed or configured to detect an impact of at least one software application or service of the plurality of software applications and the at least one software service; in response to detecting the impact, analyze the graph data structure to correlate the impact to at least one hardware component of the SAN; and remediate the impact by automatically initiating a remediation process for the at least one hardware component of the SAN. In non limiting embodiments, the at least one processor is further programmed or configured to detect that a host server is decommissioned or is planned to be decommissioned; in response to detecting that the host server is decommissioned or planned to be decommissioned, analyze the graph data structure to correlate the host server to a plurality of logical storage unit allocations; and generate a message based on the plurality of logical storage unit allocations.

[0012] According to a further non-limiting embodiment, provided is a system for managing a storage area network (SAN), comprising: (a) at least one non-transitory data storage medium comprising a graph data structure including a plurality of nodes, the plurality of nodes comprising: (i) a plurality of host nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, (iv) a plurality of logical unit nodes, (v) a plurality of service nodes, and (vi) a plurality of application nodes, the merged graph data structure based at least partially on a configuration of hardware components of a SAN and a plurality of software applications and at least one software service that utilize at least one hardware component of the SAN; and (b) at least one processor in communication with the at least one non-transitory data storage medium, the at least one processor programmed or configured to: (i) analyze the graph data structure; and (ii) automatically allocate storage capacity utilization based on analyzing the graph data structure, automatically remediate at least one component of the SAN based on analyzing the graph data structure and in response to detecting an impact, automatically remediate at least one software application or software service based on analyzing the graph data structure and in response to detecting an impact, or any combination thereof.

[0013] In non-limiting embodiments, the at least one processor is further programmed or configured to automatically allocate storage capacity utilization based on analyzing the graph data structure. In non-limiting embodiments, the at least one processor is further programmed or configured to automatically remediate at least one component of the SAN based on analyzing the graph data structure and in response to detecting an impact. In non-limiting embodiments, the at least one processor is further programmed or configured to automatically remediate at least one software application or software service based on analyzing the graph data structure and in response to detecting an impact.

[0014] Further preferred and non-limiting embodiments or aspects are set forth in the following numbered clauses.

[0015] Clause 1: A computer-implemented method for managing a storage area network (SAN), comprising: generating, with at least one processor, a first data structure based on a plurality of hardware components of the SAN, the first data structure comprising a plurality of nodes representing a configuration of the plurality of hardware components of the SAN; generating, with at least one processor, a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing a configuration of the plurality of software applications and the at least one software service; determining, with at least one processor, a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and merging, with at least one processor, the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

[0016] Clause 2: The computer-implemented method of clause 1, wherein the plurality of nodes of the first data structure comprise: (i) a plurality of host server nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, and (iv) a plurality of logical unit nodes, such that each host server node of the plurality of host server nodes is a parent node to at least one storage array node of the plurality of storage array nodes, each storage array node of the plurality of storage array nodes is a parent node to at least one pool node of the plurality of storage pool nodes, and each pool node of the plurality of storage pool nodes is a parent node to at least one logical unit node of the plurality of logical unit nodes.

[0017] Clause 3: The computer-implemented method of clauses 1 or 2, wherein the plurality of nodes of the second data structure comprise: (i) a plurality of service nodes, (ii) a plurality of application nodes, and (iii) a plurality of host nodes, such that each service node of the plurality of service nodes is a parent node of at least one application node of the plurality of application nodes, and each application node of the plurality of application nodes is a parent node to at least one host node of the plurality of host nodes.

[0018] Clause 4: The computer-implemented method of any of clauses 1-3, wherein the plurality of common nodes comprise the plurality of host nodes of the second data structure and the plurality of host server nodes of the first data structure.

[0019] Clause 5: The computer-implemented method of any of clauses 1-4, wherein determining the plurality of common nodes comprises matching at least one host node of the plurality of host nodes of the second data structure to at least one host server node of the plurality of host server nodes of the first data structure.

[0020] Clause 6: The computer-implemented method of any of clauses 1-5, wherein the plurality of nodes of the first data structure further includes a plurality of fabric nodes, wherein each host server node of the plurality of host server nodes of the first data structure is a parent node to at least one fabric node of the plurality of fabric nodes, and wherein each fabric node of the plurality of fabric nodes is a parent node to at least one storage array node of the plurality of storage array nodes.

[0021] Clause 7: The computer-implemented method of any of clauses 1-6, further comprising: receiving at least one allocation request; and automatically allocating storage capacity utilization based on the at least one allocation request and analyzing the merged graph data structure.

[0022] Clause 8: The computer-implemented method of any of clauses 1-7, further comprising: detecting an impact of at least one software application or software service of the plurality of software applications and the at least one software service; in response to detecting the impact, analyzing the merged graph data structure to correlate the impact to at least one hardware component of the SAN; and remediating the impact by automatically initiating a remediation process for the at least one hardware component of the SAN.

[0023] Clause 9: The computer-implemented method of any of clauses 1-8, further comprising: detecting that a host server is decommissioned or is planned to be decommissioned; in response to detecting that the host server is decommissioned or planned to be decommissioned, analyzing the merged graph data structure to correlate the host server to a plurality of logical storage unit allocations; and generating a message based on the plurality of logical storage unit allocations.

[0024] Clause 10: A system for managing a storage area network (SAN), comprising at least one processor programmed or configured to: generate a first data structure based on an architecture of the SAN, the first data structure comprising a plurality of nodes representing a plurality of hardware components of the SAN; generate a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing the plurality of software applications and the at least one software service; determine a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and merge the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

[0025] Clause 11: The system of clause 10, wherein the plurality of nodes of the first data structure comprise: (i) a plurality of host server nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, and (iv) a plurality of logical unit nodes, such that each host server node of the plurality of host server nodes is a parent node to at least one storage array node of the plurality of storage array nodes, each storage array node of the plurality of storage array nodes is a parent node to at least one pool node of the plurality of storage pool nodes, and each pool node of the plurality of storage pool nodes is a parent node to at least one logical unit node of the plurality of logical unit nodes.

[0026] Clause 12: The system of clauses 10 or 11, wherein the plurality of nodes of the second data structure comprise: (i) a plurality of service nodes, (ii) a plurality of application nodes, and (iii) a plurality of host nodes, such that each service node of the plurality of service nodes is a parent node of at least one application node of the plurality of application nodes, and each application node of the plurality of application nodes is a parent node to at least one host node of the plurality of host nodes.

[0027] Clause 13: The system of any of clauses 10-12, wherein the plurality of common nodes comprises the plurality of host nodes of the second data structure and the plurality of host server nodes of the first data structure.

[0028] Clause 14: The system of any of clauses 10-13, wherein determining the plurality of common nodes comprises matching at least one host node of the plurality of host nodes of the second data structure to at least one host server node of the plurality of host server nodes of the first data structure.

[0029] Clause 15: The system of any of clauses 10-14, wherein the plurality of nodes of the first data structure further includes a plurality of fabric nodes, wherein each host server node of the plurality of host server nodes of the first data structure is a parent node to at least one fabric node of the plurality of fabric nodes, and wherein each fabric node of the plurality of fabric nodes is a parent node to at least one storage array node of the plurality of storage array nodes.

[0030] Clause 16: The system of any of clauses 10-15, wherein the at least one processor is further programmed or configured to automatically allocate storage capacity utilization based on analyzing the graph data structure and at least one allocation request.

[0031] Clause 17: The system of any of clauses 10-16, wherein the at least one processor is further programmed or configured to: detect an impact of at least one software application or service of the plurality of software applications and the at least one software service; in response to detecting the impact, analyze the graph data structure to correlate the impact to at least one hardware component of the SAN; and remediate the impact by automatically initiating a remediation process for the at least one hardware component of the SAN.

[0032] Clause 18: The system of any of clauses 10-17, wherein the at least one processor is further programmed or configured to: detect that a host server is decommissioned or is planned to be decommissioned; in response to detecting that the host server is decommissioned or planned to be decommissioned, analyze the graph data structure to correlate the host server to a plurality of logical storage unit allocations; and generate a message based on the plurality of logical storage unit allocations.

[0033] Clause 19: A system for managing a storage area network (SAN), comprising: (a) at least one non-transitory data storage medium comprising a graph data structure including a plurality of nodes, the plurality of nodes comprising: (i) a plurality of host nodes, (ii) a plurality of storage array nodes, (iii) a plurality of storage pool nodes, (iv) a plurality of logical unit nodes, (v) a plurality of service nodes, and (vi) a plurality of application nodes, the graph data structure based at least partially on a configuration of hardware components of a SAN and a plurality of software applications and at least one software service that utilize at least one hardware component of the SAN; and (b) at least one processor in communication with the at least one non-transitory data storage medium, the at least one processor programmed or configured to: (i) analyze the graph data structure; and (ii) automatically allocate storage capacity utilization based on analyzing the graph data structure, automatically remediate at least one component of the SAN based on analyzing the graph data structure and in response to detecting an impact, automatically remediate at least one software application or software service based on analyzing the graph data structure and in response to detecting an impact, or any combination thereof.

[0034] Clause 20: The system of clause 19, wherein the at least one processor is programmed or configured to automatically allocate storage capacity utilization based on analyzing the graph data structure.

[0035] Clause 21 : The system of clauses 19 or 20, wherein the at least one processor is programmed or configured to automatically remediate at least one component of the SAN based on analyzing the graph data structure and in response to detecting an impact.

[0036] Clause 22: The system of any of clauses 19-21 , wherein the at least one processor is programmed or configured to automatically remediate at least one software application or software service based on analyzing the graph data structure and in response to detecting an impact.

[0037] Clause 23: An apparatus for managing a storage area network (SAN), comprising at least one non-transitory computer-readable medium including program instructions that, when executed by at least one processor, cause the at least one processor to: generate a first data structure based on an architecture of the SAN, the first data structure comprising a plurality of nodes representing a plurality of hardware components of the SAN; generate a second data structure based on a plurality of software applications and at least one software service that utilize at least one component of the plurality of hardware components of the SAN, the second data structure comprising a plurality of nodes representing the plurality of software applications and the at least one software service; determine a plurality of common nodes from the plurality of nodes in the first data structure and the plurality of nodes in the second data structure; and merge the first data structure and the second data structure to generate a merged graph data structure by merging the plurality of common nodes.

[0038] These and other features and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structures and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and the claims, the singular form of “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

[0039] Additional advantages and details of the invention are explained in greater detail below with reference to the exemplary embodiments that are illustrated in the accompanying schematic figures, in which:

[0040] FIG. 1 is a schematic diagram of a system for managing a SAN according to a non-limiting embodiment;

[0041] FIG. 2 is a schematic diagram of a system for managing a SAN according to a non-limiting embodiment;

[0042] FIG. 3 is a diagram of a data structure representing hardware components of a SAN according to a non-limiting embodiment;

[0043] FIG. 4 is a diagram of a data structure representing software components according to a non-limiting embodiment;

[0044] FIG. 5 is a flow diagram of a method for managing a SAN according to a non-limiting embodiment; and

[0045] FIG. 6 is a diagram of a merged graph data structure according to a non-limiting embodiment.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0046] For purposes of the description hereinafter, the terms “end,” “upper,” “lower,” “right,” “left,” “vertical,” “horizontal,” “top,” “bottom,” “lateral,” “longitudinal,” and derivatives thereof shall relate to the invention as it is oriented in the drawing figures. However, it is to be understood that the invention may assume various alternative variations and step sequences, except where expressly specified to the contrary. It is also to be understood that the specific devices and processes illustrated in the attached drawings, and described in the following specification, are simply exemplary embodiments or aspects of the invention. Hence, specific dimensions and other physical characteristics related to the embodiments or aspects disclosed herein are not to be considered as limiting.

[0047] As used herein, the terms “communication” and “communicate” may refer to the reception, receipt, transmission, transfer, provision, and/or the like of information (e.g., data, signals, messages, instructions, commands, and/or the like). For one unit (e.g., a device, a system, a component of a device or system, combinations thereof, and/or the like) to be in communication with another unit means that the one unit is able to directly or indirectly receive information from and/or transmit information to the other unit. This may refer to a direct or indirect connection (e.g., a direct communication connection, an indirect communication connection, and/or the like) that is wired and/or wireless in nature. Additionally, two units may be in communication with each other even though the information transmitted may be modified, processed, relayed, and/or routed between the first and second unit. For example, a first unit may be in communication with a second unit even though the first unit passively receives information and does not actively transmit information to the second unit. As another example, a first unit may be in communication with a second unit if at least one intermediary unit (e.g., a third unit located between the first unit and the second unit) processes information received from the first unit and communicates the processed information to the second unit. In some non-limiting embodiments, a message may refer to a network packet (e.g., a data packet, and/or the like) that includes data. It will be appreciated that numerous other arrangements are possible.

[0048] Reference to “a processor,” as used herein, may refer to a previously-recited processor that is recited as performing a previous step or function, a different processor, and/or a combination of processors. For example, as used in the specification and the claims, a first processor that is recited as performing a first step or function may refer to the same or different processor recited as performing a second step or function.

[0049] As used herein, the term “software” may refer to a software application and/or software service. A software service may include any software functionality that is made available by hardware, an operating system, middleware, and/or the like that can be used and/or accessed by a software application. A software application may include any executable application that can be utilized by a user or other application to carry out one or more functions or purposes.

[0050] Non-limiting embodiments of the present invention are directed to a system, method, and apparatus for managing a storage area network (SAN) to provide enhanced analytical and processing capabilities. Non-limiting embodiments are directed to a new, merged graph data structure and a method for generating the same that allows for increased efficiency in analyzing a SAN. Non-limiting embodiments of the merged graph data structure allow for the efficient and automatic remediation of impacts, such as a failure, outage, performance degradation, connectivity delay, etc., within the SAN or in software utilizing the SAN to create an improved, more efficient, and more reliable SAN infrastructure. Non-limiting embodiments of the merged graph data structure also allow for the efficient and automatic allocation and reallocation of storage capacity utilization, resulting in an improved SAN infrastructure having increased storage capacity. The merged graph data structure may also be utilized to plan for system or device upgrades, conduct risk assessment for planned changes or upgrades to a SAN, and to enhance alert handling procedures to ensure that alerts from high impact SAN components are handled at a higher priority. It will be appreciated that various other advantages are possible.

[0051] Referring now to FIGS. 1 and 2, a system 1000 for managing a SAN is shown according to a non-limiting embodiment. The system 1000 includes a SAN 100. In FIG. 1, the SAN 100 includes a host 102, fabric components 114, 115, SAN switches 104, 106, and a storage array 108. In FIG. 2, the SAN 100 includes hosts 102, 103, fabric components 114, 115, SAN switches 104, 106, and storage arrays 108, 109. The hosts 102, 103 may each include host bus adapters 116, 117, 120, 121 for facilitating communication between the hosts 102, 103 and the fabric components 114, 115. The hosts 102, 103 may include one or more physical host server computers, although it will be appreciated that virtual servers may also be used in non-limiting embodiments.

[0052] With continued reference to FIGS. 1 and 2, the fabric components 114, 115 each include a portion of the SAN 100 that is created when one or more SAN switches 104, 106 are connected. For example, the fabric components 114, 115 may include one or more physical hardware elements of a network environment that connect the hosts 102, 103 and workstations (not shown in FIGS. 1 and 2) to one or more storage arrays 108, 109. The SAN 100 may have multiple interconnected fabric components 114, 115, and multiple fabric components 114, 115 may also be used for redundancy. Each storage array 108, 109 in a SAN 100 may also have storage processors 118, 119, 122, 123 for connecting the storage arrays 108, 109 to the fabric components 114, 115. Although each storage array 108, 109 is shown with two storage processors 118, 119, 122, 123, it will be appreciated that each storage array may have one or more storage processors configured to access the storage arrays 108, 109.

[0053] It will also be appreciated that a SAN 100 may have numerous other components and that the simple arrangements shown in FIGS. 1 and 2 are for ease of illustration and example purposes only. For example, actual implementations of a SAN 100 may involve tens, hundreds, or thousands of hosts, storage arrays, and interconnected SAN switches. The system 1000 also includes a processor 112 and a data storage device 110. The data storage device 110 may have stored thereon one or more data structures, such as a data structure representing hardware components of the SAN 100, a data structure representing software components utilizing the SAN 100, and a merged graph data structure representing both hardware components of the SAN 100 and software components utilizing the SAN 100. The processor 112 may, in some non-limiting embodiments, be a component or part of a component of the SAN 100, such as hosts 102, 103. In other non-limiting embodiments, the processor 112 may be part of a separate computer system.

[0054] With continued reference to FIGS. 1 and 2, a plurality of software applications and software services may utilize one or more hardware components of the SAN 100. As an example, a software application may use a plurality of logical units (e.g., Logical Unit Numbers (LUNs)) of a storage array 108 for loading temporary runtime data, storing output data, and/or the like. That software application may also utilize one or more software services that, in turn, utilize one or more hardware components of the SAN 100 directly or through the software application. A software application may also use one or more hosts 102, 103 to be executed through a network environment. It will be appreciated that software applications and software services may interact with the SAN 100 in any number of ways, such as reading data from the SAN 100, writing data to the SAN 100, deleting data from the SAN 100, moving data within the SAN 100, managing communications within the SAN 100, managing access to the SAN, and/or other like functions.

[0055] Referring now to FIG. 3, a data structure 300 is shown representing a plurality of components of a SAN in a hierarchy. The data structure 300 may be generated based on SAN component data and represent a physical layer of the SAN. The data structure 300 shown in FIG. 3 is a tree data structure such that the nodes are arranged hierarchically, although non-hierarchical data structures may also be used. In non-limiting embodiments, a graph data structure may be used. The root node of the data structure 300 is a host server node 302 representing a host. Below the host server node 302 are fabric nodes 304a-b representing fabric components of the SAN. Below the fabric nodes 304a-b are storage array nodes 306a-c representing storage arrays in a SAN. Below the storage array nodes 306a-c are storage pool nodes 308a-f representing pools of logical units (e.g., represented by LUNs). Below the storage pool nodes 308a-f are LUN nodes 310a-j representing individual LUNs. It will be appreciated that the data structure 300 shown in FIG. 3 is a simple representation of SAN components and that, in many non-limiting implementations, numerous additional SAN components may be part of the data structure 300.
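
As a non-limiting illustration, the hierarchy of FIG. 3 can be sketched as a simple parent-to-children mapping. All node names below are hypothetical, chosen only to mirror the reference numerals of FIG. 3; the patent does not prescribe any particular encoding:

```python
# Hypothetical sketch of the FIG. 3 hardware hierarchy as a parent -> children
# adjacency mapping. Node names are illustrative, not from the specification.
hardware_tree = {
    "host:302": ["fabric:304a", "fabric:304b"],
    "fabric:304a": ["array:306a", "array:306b"],
    "fabric:304b": ["array:306c"],
    "array:306a": ["pool:308a", "pool:308b"],
    "array:306c": ["pool:308e", "pool:308f"],
    "pool:308a": ["lun:310a", "lun:310b"],
}

def descendants(tree, node):
    """Collect every node reachable below the given node in the hierarchy."""
    out = []
    for child in tree.get(node, []):
        out.append(child)
        out.extend(descendants(tree, child))
    return out
```

For example, `descendants(hardware_tree, "host:302")` would walk from the host server node down through fabrics, storage arrays, and pools to the individual LUN nodes.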

[0056] Referring now to FIG. 4, a data structure 400 is shown representing a plurality of software applications and software services utilizing one or more hardware components of a SAN in a hierarchy. A root node 402 may represent a top level domain user, such as a head of a business unit. Below the root node 402 are service nodes 404a-c representing software services in that domain. Below the service nodes 404a-c are application nodes 406a-g representing software applications that utilize the software services represented by the service nodes 404a-c. Below the application nodes 406a-g are host nodes 408a-e representing hosts (e.g., physical or virtual host servers) that utilize the software applications represented by application nodes 406a-g. It will be appreciated that numerous other arrangements are possible and that not all data structures 400 representing software applications and software services will be hierarchical or structured like the data structure 400 shown in FIG. 4.

[0057] Each node of a data structure 300, 400 may be associated with component data that includes one or more parameters related to a hardware component, software application, or software service represented by the node. As an example, a node may be treated as an object in an object-oriented environment such that each node includes one or more attributes. A node may also be represented by any other type of data structure or element. Node data may include, for example, a unique identifier (e.g., device identifier, service identifier, software identifier, user identifier, etc.), a status (e.g., operational, non-operational, etc.), an installation date/time, a service date/time, and/or other like information that may be associated with a SAN hardware component or software application or service.
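
In one non-limiting sketch, such a node could be modeled as an object whose attributes carry the parameters listed above. The field names and defaults here are hypothetical and only illustrate the idea of attaching component data to a node:

```python
from dataclasses import dataclass, field

@dataclass
class SanNode:
    """One node of a SAN data structure; attribute names are illustrative."""
    identifier: str              # unique device/service/software identifier
    kind: str                    # e.g. "host", "fabric", "storage_array", "lun"
    status: str = "operational"  # e.g. "operational" or "non-operational"
    installed: str = ""          # installation date/time
    serviced: str = ""           # last service date/time
    children: list = field(default_factory=list)  # child nodes in the hierarchy
```

A node such as `SanNode("array-100148", "storage_array")` would then carry its unique identifier and status alongside its position in the hierarchy.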

[0058] Referring now to FIG. 5, and with continued reference to FIGS. 1 and 2, a method for managing a SAN is shown according to a non-limiting embodiment. At a first step 502, a first data structure may be generated based on a plurality of SAN hardware components 102-123. In non-limiting embodiments, the processor 1 12 generates the first data structure by retrieving SAN component data from the SAN 100 representing the architecture of the SAN 100. This SAN component data may be retrieved from any number of sources, such as a SAN configuration file, a network bot that maps the SAN architecture, manually entered SAN data, and/or the like. In non limiting embodiments, the SAN component data may be retrieved from configuration settings stored in core SAN switches or other components of the SAN. SAN component data may also be retrieved from a Common Manager Database (CMDB). The SAN component data may not be arranged or formatted in some non-limiting embodiments. The SAN component data may represent point-to-point connections (e.g., which host bus adapters are connected to which storage array) and/or other relationships between components of the SAN. Once the SAN component data is retrieved, the processor 1 12, based on the SAN component data, generates a data structure representing each component of the SAN 100 and its interrelation to other components in the SAN 100. Direct connections between components may be represented by edges in the graph. The data structure may be stored in a data storage device 1 10. In non-limiting embodiments, each hardware component of the SAN 100 has one or more unique identifiers, such as a device identifier, that uniquely identifies the hardware component and allows for a single node in the data structure to represent a single component with many possible connections. A non-limiting example of a first data structure is shown in FIG. 3.

[0059] With continued reference to FIG. 5, at a second step 504, a second data structure may be generated based on a plurality of software applications and/or software services. In non-limiting embodiments, the processor 1 12 generates the second data structure by retrieving service data and application data from the SAN 100 and/or other systems representing software applications and software services utilizing one or more hardware components of the SAN 100. This SAN software data may be retrieved from any number of sources, such as one or more application configuration files, network or usage logs, SAN configuration files, manually entered SAN software data, and/or the like. In non-limiting embodiments, SAN software data may be retrieved from Information Technology Infrastructure Library (ITIL) and Information Technology Disaster Recovery (ITDR) services. SAN software data may also be retrieved from a Common Manager Database (CMDB). The SAN software data may not be arranged or formatted in some non-limiting embodiments. The processor 1 12, based on the SAN software data, generates a data structure representing each software application and software service utilizing a hardware component of the SAN 100 and the interrelation of such software applications and software services of other software applications, software services, and hardware components on the SAN 100. The data structure may be stored in a data storage device 1 10. A non-limiting example of a second data structure is shown in FIG. 4.

[0060] Although the example shown in FIG. 5 and described herein involves two data structures, including a first data structure representing hardware components of a SAN and a second data structure representing software applications and software services, it will be appreciated that numerous data structures may be generated and used in non-limiting embodiments. For example, multiple data structures may represent hardware components of a SAN and software components utilizing a SAN.

[0061] Still referring to FIG. 5, at a next step 506, the processor 112 determines one or more common nodes of the data structures that were generated in steps 502 and 504. Common nodes may be determined by, for example, comparing a plurality of identifiers from both the first data structure and the second data structure to identify one or more matching identifiers. The nodes corresponding to the matching identifiers are identified as common nodes. For example, in non-limiting embodiments and referring to FIGS. 3 and 4, one or more host nodes 302 of the first data structure may be common nodes with one or more host nodes 408a-e of the second data structure. For example, host 302 and host 408b may each be associated with matching identifiers and therefore refer to the same component of the SAN. There may be one or more groupings of common nodes, and each group may include two or more common nodes.
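
The identifier-matching approach described above can be sketched, in a non-limiting illustration, as a set intersection over the unique identifiers carried by each structure's nodes (the dictionary shapes here are hypothetical):

```python
def common_identifiers(first_nodes, second_nodes):
    """Return the identifiers present in both data structures.

    Each argument maps a node's unique identifier to its node data;
    identifiers appearing in both mappings mark the common nodes.
    """
    return set(first_nodes) & set(second_nodes)
```

For example, a host server node in the first structure and a host node in the second structure that share the same device identifier would be reported as one common node.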

[0062] With continued reference to FIG. 5, at step 508, once one or more common nodes are determined, the common nodes are merged together to form a merged graph data structure. The nodes may be merged in various ways. In one example, the common nodes may be merged together to form a merged graph data structure by generating a new graph data structure that includes the first data structure and the second data structure, where the common nodes from both the first data structure and the second data structure are reduced to a single merged node that connects to all of the nodes that the common nodes are connected to in all data structures. As another example, the common nodes may be merged to form a merged graph data structure by linking the common nodes of the first data structure and the second data structure such that the linked data structures can be traversed and logically processed like a single merged data structure, even though the structures are separate. Once the common nodes are merged together, the result is a merged graph data structure that is stored in a database. The merged data structure enables a user to traverse between a physical layer of the SAN and an application layer using one or more node traversal algorithms.
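
A non-limiting sketch of the first merging approach described above follows. Assuming (as an illustration, not a requirement of the specification) that common nodes carry the same unique identifier in both structures, taking the union of the two adjacency maps collapses each pair of common nodes into a single merged node connected to all of its former neighbours:

```python
def merge_graphs(first, second):
    """Union of two adjacency maps (node -> iterable of neighbours).

    Because common nodes carry the same identifier in both structures,
    their edge sets simply combine, reducing each pair of common nodes
    to a single merged node in the resulting graph.
    """
    merged = {}
    for graph in (first, second):
        for node, neighbours in graph.items():
            merged.setdefault(node, set()).update(neighbours)
    return merged
```

The resulting mapping can then be traversed as one structure, spanning both the physical layer and the application layer.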

[0063] FIG. 6 shows a merged graph data structure 600 according to a non-limiting embodiment. This merged graph data structure 600 illustrates hardware SAN components (“Storage Array,” “Pools,” “LUNs,” “Host Servers”), software applications (“Apps”), and software services (“SRU”) under a user domain (“DTL”). In this example, it can be determined that three SRUs are connected to multiple Apps and depend upon Storage Array 100148. In non-limiting embodiments, various visualizations of the merged graph data structure may be generated based on user-provided parameters.

[0064] The term “impact,” as used herein, refers to an effect on one or more components of a SAN or one or more software applications or services utilizing a SAN. An impact may include, for example, one or more hardware failures, software failures, outages, instances of performance degradation, connectivity delays, and/or any other like events or conditions affecting the performance of a system, device, application, or function that, in some instances, may be undesirable.

[0065] In some non-limiting embodiments, the merged graph data structure is used to automatically remediate one or more impacts in the SAN. Referring again to FIG. 5, at step 510, the method may include detecting an impact. The impact may be a failure of a software application or software service, as an example, or may be a failure of a SAN hardware component. In response to detecting such an impact, at step 512, the processor may analyze the merged graph data structure to correlate the impact to at least one component of the SAN. The merged graph data structure may be analyzed with one or more node traversal algorithms to search for the node representing the failed software application or software service. Once the impacted node is identified, the processor may then identify one or more nodes representing SAN components that are adjacent or connected to the impacted node. In some instances where multiple software applications and/or software services fail, the merged graph data structure may be analyzed to identify each related SAN component node and then to determine which of those SAN component nodes are common to the impacts. It will be appreciated that various other techniques may be used to traverse the merged graph data structure to identify one or more nodes representing SAN components that may have given rise or contributed to the impact. In non-limiting embodiments, graph traversal algorithms such as breadth-first search algorithms, depth-first search algorithms, shortest path algorithms, and/or the like may be used.
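
The breadth-first correlation described above can be sketched as follows. This is a non-limiting illustration: the "kind:identifier" node-naming convention and the set of hardware kinds are hypothetical choices made only for the example:

```python
from collections import deque

def correlate_impact(merged, impacted, hardware_kinds=("array", "pool", "lun")):
    """Breadth-first search outward from an impacted node, collecting
    connected hardware nodes (recognized here by an illustrative
    'kind:identifier' naming convention)."""
    seen, queue, hits = {impacted}, deque([impacted]), []
    while queue:
        node = queue.popleft()
        if node.split(":")[0] in hardware_kinds:
            hits.append(node)  # a SAN hardware component reachable from the impact
        for nxt in merged.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits
```

Starting from a failed application node, the search surfaces the storage arrays, pools, and LUNs that the application depends on, which can then be checked as candidate causes of the impact.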

[0066] At step 514 of FIG. 5, after one or more nodes representing SAN components have been correlated to the impact(s), the processor may automatically remediate the impact. In non-limiting embodiments, a failed SAN component may be removed from the SAN and replaced. For example, if it is determined that a storage array contributed to a failure, an available storage array may be automatically commissioned to replace the failed storage array and the failed storage array may be automatically decommissioned or taken offline. In some non-limiting embodiments, the processor may generate one or more alerts identifying the SAN component(s) for remediation, which may be communicated to and acted upon by one or more automated bots, IT managers, third-party IT services, and/or the like.

[0067] In some non-limiting embodiments, the merged graph data structure is used to automatically allocate storage capacity within the SAN. Still referring to FIG. 5, at step 516, the method may include receiving an allocation request. The allocation request may be from a human operator or, in other examples, may be automatically generated in response to a host server being decommissioned, a detected impact of a storage array, and/or the like. The allocation request may specify an amount of storage capacity needed. In response to receiving an allocation request, at step 518, the processor may analyze the merged graph data structure to identify a plurality of LUNs relating to one or more storage arrays that are available for storage and, in some examples, connected to the appropriate SAN components or software applications or services. The merged graph data structure may be analyzed with one or more node traversal algorithms to identify the LUNs and/or storage arrays. At step 520, the processor automatically allocates storage capacity to the LUNs pursuant to the allocation request.
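
A non-limiting sketch of the allocation step follows. The free-capacity figures and the greedy largest-first selection are hypothetical illustration choices; the specification does not prescribe a selection policy:

```python
def find_available_luns(luns, needed_gb):
    """Select LUNs with free capacity until an allocation request is met.

    `luns` maps a LUN identifier to its free capacity in GB (illustrative).
    Returns the chosen LUN identifiers, or an empty list if the request
    cannot be satisfied.
    """
    chosen, total = [], 0
    # Greedy, largest-first selection: one simple policy among many possible.
    for lun, free in sorted(luns.items(), key=lambda kv: -kv[1]):
        if total >= needed_gb:
            break
        chosen.append(lun)
        total += free
    return chosen if total >= needed_gb else []
```

In practice the candidate LUNs would first be narrowed by traversing the merged graph data structure, so that only LUNs connected to the appropriate SAN components or software applications are considered.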

[0068] In non-limiting embodiments, the processor may detect that a host server is being decommissioned or planned to be decommissioned. In such scenarios, it may be desired to reclaim the storage that was provisioned to the host server so that it can be reused quickly, preventing a waste of unused storage and processing resources. For example, the processor may determine that the host server is flagged or included on a list, or receive an alert message identifying the host server. In response to detecting that the host server is decommissioned or planned to be decommissioned, the processor may analyze the merged graph data structure to correlate the host server to a plurality of logical storage unit allocations (e.g., LUNs). The data stored on the identified LUNs may therefore be reallocated or linked to another host server in the SAN. As an example, the processor may generate a message identifying the LUNs associated with the decommissioned or soon-to-be decommissioned host server such that the message may be acted upon automatically by the processor or by a human operator.

[0069] In non-limiting embodiments, the merged graph data structure may be analyzed to generate usage statistics and to plan future usage of SAN components. Based on analyzing the merged graph data structure, the processor may generate aggregated information and reports based on how much storage is being used per software application or per software service, as examples.

[0070] In non-limiting embodiments, the processor may be programmed or configured to generate alerts based on predicted or actual impacts within a SAN. Using the merged graph data structure, such alerts may be generated based on an impact of a particular node. For example, if an alert is generated concerning a particular host in a SAN, the merged graph data structure may be analyzed to determine which hardware components and/or which software applications or services may utilize or depend on the host. Based on the impact that the host may have on the SAN and/or user workflow, the alert may indicate a priority to request expedited remediation. A high impact SAN component may be remediated at a higher priority than other SAN components having less of an impact.
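
One non-limiting way to rank such alerts is to count how many nodes in the merged graph data structure depend on the affected component. The thresholds and the dependent-counting rule below are hypothetical illustration choices:

```python
def alert_priority(merged, component, thresholds=(10, 3)):
    """Rank an alert by how many nodes depend on a component.

    `merged` is an adjacency map (node -> iterable of neighbours); a
    dependent is any node listing the component as a neighbour. The
    thresholds (high if >= 10 dependents, medium if >= 3) are illustrative.
    """
    dependents = [n for n, nbrs in merged.items() if component in nbrs]
    high, med = thresholds
    if len(dependents) >= high:
        return "high"
    if len(dependents) >= med:
        return "medium"
    return "low"
```

An alert for a storage array serving many applications would then be flagged for expedited remediation ahead of alerts for lightly used components.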

[0071] Although the invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.