

Title:
ACCELERATED DATA CENTER TRANSFERS
Document Type and Number:
WIPO Patent Application WO/2021/016274
Kind Code:
A1
Abstract:
A block-oriented lossless decompressor is used to decode encoded data fetched from storage that is subsequently transferred across a network in encoded (compressed) form. In examples described herein, applications executing at network nodes send GET requests, or similar messages, to storage systems, which can return compressed data that is decompressed in an intermediate node (between the storage node and the app), and can return compressed data that is decoded in the same network node in which the requesting application is running.

Inventors:
WEGENER ALBERT W (US)
Application Number:
PCT/US2020/042932
Publication Date:
January 28, 2021
Filing Date:
July 21, 2020
Assignee:
ANACODE LABS INC (US)
International Classes:
G06F12/08; G06F15/16; H03M7/30
Domestic Patent References:
WO2018204768A12018-11-08
WO2016185459A12016-11-24
Foreign References:
US20120089781A12012-04-12
US20190114108A12019-04-18
US20170068458A12017-03-09
US20150324385A12015-11-12
Attorney, Agent or Firm:
HAYNES, Mark A. et al. (US)
Claims:
CLAIMS

1. A method for managing data flow in a network including a plurality of network nodes, comprising:

storing a plurality of compressed files or objects in data storage at a first network node in the network, wherein a compressed file or object in the plurality of compressed files or objects includes a plurality of compressed blocks;

receiving at a storage access manager at a second network node in the network, a message from an application instance at a third network node, the message requesting access to a segment of a selected file or object in uncompressed form, wherein the requested segment is a subset of the selected file or object, not including all of the selected file or object; and

generating at the storage access manager in response to the message, a request to transfer through a network route to the application instance at the third network node, one or more compressed blocks of the compressed file or object that encode the requested segment of the selected file, the one or more compressed blocks not including all of the plurality of compressed blocks, and wherein the network route includes a decompression module executed in the network capable of decompressing the one or more compressed blocks to recover the requested segment in uncompressed form, and of forwarding the requested segment in uncompressed form to the application instance at the third network node.

2. The method of claim 1, including transferring the decompression module to the third network node, in response to the message from the application instance, the decompression module being executed at the third network node.

3. The method of claim 1, wherein the third network node includes the executable decompression module.

4. The method of claim 1, wherein the network route includes a fourth network node between the second node and the third node, and the fourth network node includes the executable decompression module and decompresses the compressed data.

5. The method of claim 1, wherein the network includes an inter-rack network, and a plurality of racks having intra-rack networks, and wherein a rack in the plurality of racks includes a rack switch and one or more other network nodes on the intra-rack network of the given rack, and the third network node is one of the one or more other network nodes on a given rack.

6. The method of claim 1, wherein the network connecting the second and the third nodes is the Internet.

7. The method of claim 1, wherein the segment is identified by an offset from a beginning of the file and a length of the segment in uncompressed form, and the offset includes a byte offset, and the length includes a number of bytes.

8. A computer program product, comprising:

non-transitory memory storing computer program instructions executable at a first network node, the computer program instructions configured to execute a method for managing data flow in a network including a plurality of network nodes, in which a plurality of compressed files or objects are stored in data storage at a first network node in the network, wherein a compressed file or object in the plurality of compressed files or objects includes a plurality of compressed blocks; the method including

receiving at a second network node in the network, a message from an application instance at a third network node, the message requesting access to a segment of a selected file or object in uncompressed form, wherein the requested segment is a subset of the selected file or object, not including all of the selected file or object; and

generating in response to the message, a request to transfer through a network route to the application instance at the third network node, one or more compressed blocks of the compressed file or object that encode the requested segment of the selected file, the one or more compressed blocks not including all of the plurality of compressed blocks, and wherein the network route includes a decompression module executed in the network capable of decompressing the one or more compressed blocks to recover the requested segment in uncompressed form, and of forwarding the requested segment in uncompressed form to the application instance at the third network node.

9. The computer program product of claim 8, including transferring the decompression module to the third network node, in response to the message from the application instance, the decompression module being executed at the third network node.

10. The computer program product of claim 8, wherein the third network node includes the executable decompression module.

11. The computer program product of claim 8, wherein the network route includes a fourth network node between the second node and the third node, and the fourth network node includes the executable decompression module and decompresses the compressed data.

12. The computer program product of claim 8, wherein the network includes an inter-rack network, and a plurality of intra-rack networks on respective racks in a plurality of racks, and wherein the third network node and fourth network node are disposed on a given rack connected to the inter-rack network.

13. The computer program product of claim 8, wherein the network includes an inter-rack network, and a plurality of intra-rack networks on respective racks in a plurality of racks, and wherein the third network node is disposed on a given rack connected to the inter-rack network, the given rack including the rack switch and one or more other network nodes on the intra-rack network of the given rack.

14. The computer program product of claim 8, wherein the plurality of compressed blocks of a compressed file or object includes a first block, and wherein the one or more compressed blocks do not include the first block of the plurality of compressed blocks of the compressed file or object.

15. The computer program product of claim 8, wherein the network connecting the second and the third nodes is the Internet.

16. The computer program product of claim 8, wherein the segment is identified by an offset from a beginning of the requested file or object in uncompressed form and a length, and the offset includes a byte offset, and the length includes a number of bytes.

17. A computing system, comprising:

a plurality of network nodes configured for communications via a network, a first network node in the plurality of network nodes having access to storage resources storing a plurality of compressed files or objects in data storage, wherein a compressed file or object in the plurality of compressed files or objects includes a plurality of compressed blocks;

a storage access manager at a second network node in the plurality of network nodes configured to receive a message from an application instance at a third network node in the plurality of network nodes, the message requesting access to a segment of a selected file or object in uncompressed form, wherein the requested segment is a subset of the selected file or object, not including all of the selected file or object;

the storage access manager configured to send a request to the storage resources at the first network node, in response to the message, to transfer through a network route to the application instance at the third network node, one or more compressed blocks of a compressed file or object in the storage resources that encode the requested segment of the selected file, the one or more compressed blocks not including all of the plurality of compressed blocks; and wherein the network route includes a decompression module executable in the network to decompress the one or more compressed blocks to recover the requested segment in uncompressed form, and to forward the requested segment in uncompressed form to the application instance at the third network node.

18. The computing system of claim 17, wherein the storage access manager is configured to transfer the decompression module to the third network node, in response to the message from the application instance.

19. The computing system of claim 17, wherein the third network node includes the executable decompression module.

20. The computing system of claim 17, wherein some or all of the plurality of network nodes are mounted in racks in a plurality of racks, and the network includes an inter-rack network, and a plurality of intra-rack networks on respective racks in the plurality of racks, and wherein the third network node is one of the one or more other network nodes on a given rack.

21. The computing system of claim 17, wherein the network includes an inter-rack network, and a plurality of racks having intra-rack networks, and wherein a rack in the plurality of racks includes a rack switch and one or more other network nodes on the intra-rack network of the given rack, and the second network node and the third network node are each one of the one or more other network nodes on a given rack.

22. The computing system of claim 17, wherein the plurality of compressed blocks of a compressed file or object includes a starting block, and wherein the one or more compressed blocks do not include the starting block of the plurality of compressed blocks of the compressed file or object.

23. The computing system of claim 17, wherein the network connecting the second and the third nodes is the Internet.

24. The computing system of claim 17, wherein the segment is identified by an offset from a beginning of the file in uncompressed form and a length.

25. The computing system of claim 24, wherein the offset includes a byte offset, and the length includes a number of bytes.

Description:
ACCELERATED DATA CENTER TRANSFERS

PRIORITY APPLICATION

[0001] This application claims the benefit of U.S. Non-Provisional Application No. 16/828,509, filed 24 March 2020, which claims the benefit of U.S. Provisional Patent Application No. 62/877,150, filed 22 July 2019.

BACKGROUND

[0002] Computations by computers are increasingly being performed on rented software and hardware, instead of on purchased software and hardware. The term “private Cloud” infrastructure, or “private data centers,” often refers to purchased hardware and software that operates Information Technology (IT) applications. In contrast, the term “public Cloud” infrastructure or “public data centers” often refers to rented hardware and software that operates IT applications. The term “data center” refers to such IT infrastructure that operates IT applications. In general, public Cloud costs are billed on a monthly basis as an operating expense (OpEx) and are based on usage levels. In contrast, private Cloud costs are billed once as a Capital Expense (CapEx) that is amortized over the life of the data center, based on purchased capacity (such as storage capacity). Current (2019) trends favor public Cloud being less expensive (and thus more popular, growing more quickly, and more widely deployed) than private Cloud.

[0003] A typical data center houses multiple (typically thousands of) instances of IT infrastructure.

[0004] IT infrastructure is typically divided into three types of components:

• servers (which run IT applications using one or more Central Processing Units [CPUs]),

• storage (which hold the input data upon which IT applications operate, and also hold the resulting intermediate and/or output data generated by IT applications), and

• networking equipment (consisting of routers, switches, and networking cables that together connect storage with servers, and also connect servers with other servers).

[0005] As of 2019, storage requirements for Cloud data are increasing at about 35% per year. At this growth rate, Cloud storage represents a significant and growing IT CapEx investment for both private and public Clouds. Despite public Clouds receiving revenue as OpEx from customers, public Cloud providers still spend CapEx for IT infrastructure (storage, networking, and servers).

[0006] Increases in the throughput (bandwidth) of networking equipment components typically occur every few years, such as when 1 Gbps networking equipment was replaced by 10 Gbps networking equipment during the approximate years 2000 - 2010. Such networking speed improvements are attractive because they support faster data transfers between storage and servers, or between servers and other servers. Typically, such networking speed improvements require a “forklift upgrade” of all IT infrastructure networking equipment components, but “forklift upgrades” are expensive because they require the replacement of most or all data center networking equipment.

[0007] Compression algorithms reduce the cost of storage space and can increase the speed of storage transfers by reducing the amount of data transferred across networks, by reducing the size or number (or both) of network data packets. However, compression algorithms have historically been restricted to certain limited use cases, have not performed well on all types of data, and have not supported random access into the compressed stream of data. For this reason, compression algorithms have not been used to accelerate generic network transfers. If a compression method were available that effectively compressed all data types stored in data centers while also supporting random access, such a compression method would improve the speed of network transfers because fewer (compressed) bits could be transferred in place of the original (uncompressed) data that was requested by various IT applications. Further, if the decoding of this compression method were performed in such a way that networking transfers only or mostly carried compressed data, rather than uncompressed data, such transfers would occur at an accelerated speed.

SUMMARY

[0008] This specification describes a system that accelerates network transfers without requiring a “forklift upgrade” of existing data center networking equipment, by using one or more software threads to decode data in the same rack where the application that requested the data is running. Technology described herein uses a block-oriented lossless compressor that encodes data using one or more servers prior to writing the encoded (compressed) data to storage. Technology described herein uses a block-oriented lossless decompressor to decode encoded data fetched from storage that is subsequently transferred across a network in encoded (compressed) form. In examples described herein, applications executing at network nodes send GET requests to storage systems, which can return compressed data that is decompressed in an intermediate node (between the storage node and the app), and can return compressed data that is decoded in the same network node in which the requesting application is running. Decoding can thus be performed using one or more cores within the same server rack, prior to delivering the decoded (decompressed) data to the IT application that requested that data. The present technology can both reduce storage costs and increase effective networking throughput (bandwidth) without requiring a forklift upgrade of data center networking equipment.

[0009] Disclosed is a method for managing data flow in a network including a plurality of network nodes, which can optimize both memory and bandwidth requirements for a data center that includes many network nodes. The disclosed method comprises storing a plurality of compressed files or objects in data storage at a first network node (or many network nodes) in the network, wherein a compressed file or object in the plurality of compressed files or objects includes a plurality of compressed blocks. A storage access manager at a second network node in the network receives a message from an application instance at a third network node, the message requesting access to a segment of a selected file or object in uncompressed form. The requested segment is a subset of the selected file or object, not including all of the selected file or object in many active use cases. The method includes generating at the storage access manager in response to the message, a request to transfer through a network route to the application instance at the third network node, one or more compressed blocks of the compressed file or object that encode the requested segment of the selected file, the one or more compressed blocks not including all of the plurality of compressed blocks. The network route includes a decompression module executed in the network capable of decompressing the one or more compressed blocks to recover the requested segment in uncompressed form, and capable of forwarding the requested segment in uncompressed form to the application instance at the third network node. The decompression module can be an instance of an executable decompression module located in a fourth network node between the second node and the third node on the network route, or in some cases at the third network node.
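For illustration only, the routing decision in this summary can be sketched in a few lines of Python. All names used here (Node, BLOCK_SIZE, choose_decode_node, handle_request) are hypothetical aids to understanding, not part of the disclosure, and the fixed block size is an assumption:

```python
# Illustrative sketch only: a storage access manager mapping a requested
# uncompressed segment onto compressed blocks and choosing a decode node.
# Node, BLOCK_SIZE, and the helper names are assumptions for this sketch.
from dataclasses import dataclass

BLOCK_SIZE = 64 * 1024  # assumed uncompressed bytes encoded per compressed block

@dataclass
class Node:
    name: str
    has_decoder: bool = False

def choose_decode_node(app_node: Node, route: list[Node]) -> Node:
    """Prefer decoding at the requesting (third) node; otherwise pick a
    fourth node on the route that hosts the executable decompression module."""
    if app_node.has_decoder:
        return app_node
    return next(n for n in route if n.has_decoder)

def handle_request(obj_name: str, offset: int, length: int,
                   app_node: Node, route: list[Node]) -> dict:
    """Build a transfer request for only the compressed blocks that encode
    the requested segment -- not all blocks of the selected object."""
    first_blk = offset // BLOCK_SIZE
    last_blk = (offset + length - 1) // BLOCK_SIZE
    return {
        "object": obj_name,
        "compressed_blocks": (first_blk, last_blk),  # inclusive block range
        "decode_at": choose_decode_node(app_node, route).name,
        "deliver_uncompressed_to": app_node.name,
    }

# Example: the app node lacks a local decoder, so a fourth node decodes.
app = Node("server-120a")
print(handle_request("objName", offset=150_000, length=4_096,
                     app_node=app, route=[Node("node-4", has_decoder=True)]))
```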

[0010] An addition to the method can include transferring the decompression module to the third network node, in response to the message from the application instance, the decompression module being executed at the third network node.

[0011] In a disclosed embodiment the network includes an inter-rack network, and a plurality of racks having intra-rack networks, and wherein a rack in the plurality of racks includes a rack switch and one or more other network nodes on the intra-rack network of the given rack, and the third network node is one of the one or more other network nodes on a given rack.

[0012] In a disclosed embodiment the network connecting the second and the third nodes is the Internet.

[0013] In a disclosed embodiment the segment is identified by an offset from a beginning of the file and a length of the segment in uncompressed form, and the offset includes a byte offset, and the length includes a number of bytes.

[0014] A computer program product is described, configured to execute the storage access manager functions described herein.

[0015] A computing system is described comprising a plurality of network nodes configured for communications via a network, a first network node in the plurality of network nodes having access to storage resources storing a plurality of compressed files or objects in data storage, as described above. The system includes a storage access manager at a second network node in the plurality of network nodes configured to receive a message from an application instance at a third network node in the plurality of network nodes, the message requesting access to a segment of a selected file or object in uncompressed form, wherein the requested segment is a subset of the selected file or object, not including all of the selected file or object. Also, the storage access manager is configured to send a request to the storage resources at the first network node, in response to the message, to transfer through a network route to the application instance at the third network node, one or more compressed blocks of a compressed file or object in the storage resources that encode the requested segment of the selected file, the one or more compressed blocks not including all of the plurality of compressed blocks. The network route includes a decompression module executable in the network to decompress the one or more compressed blocks to recover the requested segment in uncompressed form, and to forward the requested segment in uncompressed form to the application instance at the third network node, as discussed above. Other features of the disclosed methods are implemented in the computing system as well.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] Fig. 1 illustrates typical Cloud data center server, storage, and networking components that are typically mounted in data center racks.

[0017] Fig. 2 shows an expanded view of one server, with two sockets having four CPU cores per socket.

[0018] Fig. 3 illustrates two software threads operating in one hyper-threaded core.

[0019] Fig. 4a shows an application’s data being encoded by multiple, block-oriented microservices.

[0020] Fig. 4b illustrates the temporal sequencing of software functions during the encoding of Fig. 4a.

[0021] Fig. 4c illustrates the mapping between components in Fig. 4a and the SW sequence in Fig. 4b.

[0022] Fig. 5a shows an application’s data being decoded by multiple, block-oriented microservices.

[0023] Fig. 5b illustrates the temporal sequencing of software functions during the decoding of Fig. 5a.

[0024] Fig. 5c illustrates the mapping between components in Fig. 5a and the SW sequence in Fig. 5b.

[0025] Fig. 6a shows an application’s data being decoded by one or more threads that execute in the same rack where the application is running.

[0026] Fig. 6b illustrates the temporal sequencing of software functions during the decoding of Fig. 6a.

[0027] Fig. 6c illustrates the mapping between components in Fig. 6a and the SW sequence in Fig. 6b.

[0028] Fig. 7a illustrates decoding performed in the same server as the application requesting the data.

[0029] Fig. 7b illustrates decoding performed in a different server than where the application requesting the data runs.

[0030] Figure 8 illustrates how a user-specified {startByte 1510, NBytes 1520} random access specifier is converted into three random-access parameters {startBlk 1540, endBlk 1550, and NtossStart 1560}.

DETAILED DESCRIPTION

[0031] As used herein, a network node is an active electronic device that is attached to one or more networks, having a data link address, such as a MAC (Media Access Control) address, for each of the one or more networks, and executes applications capable of sending, receiving, or forwarding data on the physical media (e.g., wireless, optical, wired) of the one or more networks. Examples of electronic devices which can be deployed as network nodes include all varieties of computers, rack-mounted multi-core processors, workstations, laptop computers, handheld computers and smart phones. Network nodes can be classified in some networks, such as data center networks, as compute nodes and as storage nodes, depending on the primary functions of applications executed on the nodes. In some networks, network nodes may include individual personal computers, laptops, etc. that are attached to the Internet, where the Internet itself serves as the network attaching these nodes to each other.

[0032] As used herein, the term "Internet" is used to refer to a communications network that connects computer networks and organizational computer facilities around the world, including for example networks utilizing IP addresses in a network layer.

[0033] The term server is used herein, as apparent from context, at times to refer to one or more network nodes configured to execute a server application, and at times to refer to server-side applications themselves, which can be executed using one or more network nodes.

[0034] Figure 1 illustrates data center compute and storage components. Fig. 1 illustrates the three components found in a typical private or public data center: servers 120, networking 140, and storage 160. Storage 160 components can include both hard disk drives (HDDs) and solid state disks (SSDs). In the sense defined in the previous paragraph, servers 120 and storage 160 are typical “nodes” that can communicate with each other using networking 140, which may be the Internet, a dedicated Ethernet network within a data center, etc. Figure 1 illustrates a typical rack-oriented deployment of both compute servers and storage components in data centers. Racks are typically metal enclosures having the following approximate dimensions: 24” wide, 78” tall, and 40” deep. Data center rack dimensions can vary, but their dimensions do not affect the utility or implementation of the present innovation. Most data centers include a Top of Rack (TOR) switch, shown as components 140 in Fig. 1.

[0035] Figure 1 illustrates two types of data center racks 130 and 150, holding server 120 components and storage 160 components, respectively. Racks may contain server 120 components (CPU plus memory plus optional SSDs and/or HDDs) exclusively, storage (SSDs and/or HDDs plus their associated controllers and/or servers) exclusively, or a combination of server 120 and storage 160 components. Many data centers use the top-most rack position to hold a top-of-rack switch 140 (TOR), which provides the ports for networking gear that connects server racks 130 to each other, and to storage racks 150, typically via one or more Ethernet or fiber-optic cables. In Figure 1, elements 120a thru 120z indicate multiple servers 120 in Server Rack 130a. Similarly, Figure 1 illustrates multiple storage components 160a..160z in Data Storage Rack 150a. Figure 1 intends components 120 to represent one or more server components in data center server racks 130, not necessarily 26 components as might be assumed from the letters a..z. Similarly, Figure 1 intends components 160 to represent one or more storage components in data center storage racks 150, not necessarily 26 components as might be assumed from the letters a..z.

[0036] Figure 2 contains an expanded view of the compute components within server 120a. In Fig. 2, the example compute server 120a contains two multi-core compute sockets 110a and 110b. In Fig. 2, each compute socket 110a and 110b contains four cores 100. Core 100a represents the first of four cores in compute socket 110a, and core 100h represents the fourth of four cores in compute socket 110b. For simplicity, Figure 2 omits various electronic and mechanical components that are typically also found within server racks, such as dynamic random-access memory [DRAM] dual in-line memory modules [DIMMs], printed circuit [PC] boards, backplanes, cables, power supplies, cooling channels, fans, etc. Those skilled in the art of data center design will recognize that when element 120 in Figure 2 represents a compute server, it can be used as a core (computational element) to execute any IT application, including block-based encoder software 500 (not shown in Fig. 2), block-based decoder software 600 (not shown in Fig. 2), or any generic IT application 102 (shown in Fig. 2 and subsequent figures). As of 2019, examples of multi-core compute ICs 110 include the Intel Xeon Scalable family of data center CPUs (sockets) and the AMD Epyc family of multi-core data center CPUs (sockets).

[0037] Figure 3 illustrates how a core 100 supports hyper-threading: the ability to run two software threads sequentially using one core. The example shown in Fig. 3, in which core 100h supports hyper-threading, illustrates how two threads are interleaved to support hyper-threading of two software processes. A first software process A consists of Thread T1 102a and Thread T3 102c, while a second software process B consists of Thread T2 102b and Thread T4 102d. Whenever a thread is “stalled” (i.e., is waiting for data), the other hyper-threaded process is swapped in and begins processing. In Fig. 3, time (not shown in Fig. 3) is assumed to run from the top of the figure to the bottom of the figure, so thread T1 102a runs, then thread T2 102b runs, then thread T3 102c runs, then thread T4 102d runs. In the context of the innovation herein described, software processes A and B may include encoder software 500, decoder software 600, or generic IT application 102.

[0038] Figure 4a illustrates an example of the operation of a block-oriented encoder software 500 that uses microservices software, such as Amazon Web Services Lambda, Microsoft Azure Compute functions, or Google Compute Functions. Microservices are a perfect fit for a block-oriented encoder (compressor), since each block of a block-oriented encoder is automatically scheduled and run under the control of microservice manager 700. In Fig. 4a, Ne encoders run in parallel, encoding the data provided by application 102 that runs in Server 120a that is located in server rack 130a. Using networking connection 145a that operates at a rate of (for example) 100 Gbps 147a, application 102 transmits its data to be encoded to servers in cluster 130m, in which Ne encoders are implemented using servers 120c..120q, under the control of microservices manager 700. Encoder software 500 instances 500c..500q operate in servers 120c..120q, respectively, under the control of microservices manager 700, generating encoded data packets. Encoder software 500 typically compresses its input blocks, generating encoded data packets that are then transmitted across network connection 145z that operates at (for example) 100 Gbps 147z, to storage element DS1 160a in storage rack 150. Because the compressed blocks sent across network connection 145z are smaller than the original blocks that were sent across network connection 145a, the total amount of data stored in storage element DS1 160a is smaller than it would have been without the encoding (compression) provided by encoder software 500. Thus, after block-oriented encoding provided by encoder software 500, application data sent by application 102 can be stored as encoded data packets in fewer bits, and at a lower cost, in data storage element 160a. Without loss of generality, it is noted that multiple storage elements 160, rather than one storage element 160a, could store the encoded data packets generated by encoder software 500 that are subsequently stored in storage elements 160 in storage rack 150.

[0039] Figure 4b illustrates the Application Programming Interface (API) calls executed by application 102 in Fig. 4a, starting with the PUT(inData, objName) API call 800. Referring back to Fig. 4a, data flows between physical components: from server 120a through network connection 145a to one or more encode servers 120c..120q, and then from encode servers 120c..120q through network connection 145z to storage component 160a. Fig. 4c lists in tabular form details of the corresponding transfers and/or computations performed in Fig. 4b. PUT(inData, objName) API call 800 is issued to microservices manager 700. API call 800 specifies where the input data (inData) to be encoded is found in application 102’s memory space, and where the data is stored in storage element 160c (objName). PUT API call 800 is implemented through two data transfer operations 810 and one call to encoder software 500. The first XFER API call 810 manages the transfer of uncompressed inData from server 120a running application 102 to encoder software 500c..500q running in encoder cloud 130m on servers 120c to 120q, under the control of microservices manager 700. The encoding process (controlled by microservices manager 700) is then performed by encoder software 500, which takes subset uData from inData and generates a compressed packet cData from each input block of uData. The second XFER API call 810 transfers cData for storage in storage component 160a, associated with objName.
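A minimal, runnable sketch of this PUT path (XFER, encode, XFER) may help fix ideas. Here zlib is only a stand-in for the block-oriented encoder software 500, and an in-memory dict stands in for storage element DS1 160a; both substitutions are assumptions, not the patent's codec or storage system:

```python
# Sketch of the Fig. 4 PUT path (XFER -> encode -> XFER). zlib is a
# stand-in block codec; the dict stands in for storage element DS1 160a.
import zlib

BLOCK_SIZE = 64 * 1024   # assumed uncompressed block size
object_store = {}        # stands in for storage element DS1 160a

def put(in_data: bytes, obj_name: str) -> None:
    packets = []
    # First XFER 810: uncompressed inData reaches the encoder pool;
    # here each fixed-size uData block is encoded in turn.
    for i in range(0, len(in_data), BLOCK_SIZE):
        u_data = in_data[i:i + BLOCK_SIZE]
        packets.append(zlib.compress(u_data))  # encoder 500: one packet per block
    # Second XFER 810: compressed packets are stored under objName.
    object_store[obj_name] = packets

put(b"example payload " * 50_000, "objName")
print(len(object_store["objName"]), "compressed packets stored")
```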

[0040] In the encoding example of Figure 4, we note that fewer bits are stored in storage element 160a than are delivered by application 102 to encoder software 500, because encoder software 500 compresses each input data block into a smaller compressed output packet, or into fewer output packets, which are subsequently stored in storage element 160a. Because compression is performed in encoding cluster 130m (under the control of microservices manager 700), uncompressed data flows between application 102 running on server 120a and encoder software 500 running on encode servers 120c..120q. In contrast, during encoding, compressed data flows from encoder servers 120c..120q to storage element 160a. Thus in Fig. 4, during encoding (PUT API calls), more (uncompressed) data flows across the network connections between app server 120a and encoder servers 120c..120q, while less (compressed) data flows across the network connections from encoder servers 120c..120q to storage element 160a.

[0041] Figure 5a illustrates the operation of a block-oriented decoder software 600 that uses microservices software, such as Amazon Web Services Lambda, Microsoft Azure Compute functions, or Google Compute Functions. Microservices are a perfect fit for a block-oriented decoder (decompressor), since each block of a block-oriented decoder can be automatically scheduled and run under the control of the microservice. In the example shown in Fig. 5a, Nd instances of decoder software 600 run in parallel, decoding the data provided by storage element DS1 160a from storage rack 150a. Using networking connection 145z that operates at a rate of (for example) 100 Gbps 147z, storage element 160a transmits compressed data to cluster 130m, in which Nd decoders are implemented using available software threads, under the control of microservices manager 700. Decoder software 600 operates in one thread of servers 120c..120q, under the control of microservices manager 700, generating decoded data blocks. Decoder software 600 decompresses compressed packets, generating uData decompressed blocks that are then transmitted across network connection 145a that operates at (for example) 100 Gbps 147a, to application 102 running in server 120a of server rack 130a. Because the compressed blocks sent across network connection 145z from storage element 160a to cluster 130m are smaller than the decompressed blocks that are sent across network connection 145a, data moves faster across network connection 145z (which carries compressed packets from storage element 160a to decoder servers 120c..120q) than across network connection 145a, which carries decompressed blocks from decoder servers 120c..120q to app server 120a.

[0042] Figure 5b illustrates the Application Programming Interface (API) calls executed by application 102 in Fig. 5a, starting with the GET(objName, myBuf) API call 900. Referring back to Fig. 5a, data flows from data storage element 160a through network connection 145z to decode servers 120c..120q, and then from decoder servers 120c..120q through network connection 145a to application server 120a running application 102.

[0043] Fig. 5c lists in tabular form the corresponding transfers and/or computations performed in Fig. 5b. GET(objName, myBuf) API call 900 is issued to microservices manager 700, which specifies at the highest level where the compressed data (inData) to be decoded is found in storage element 160c (where the compressed object objName is stored). The GET API call is then implemented through a series of two XFER data transfer operations 810 and a series of calls to decoder software 600. The first XFER API call manages the transfer of objName compressed packets from data storage element 160a to decoder software 600c..600q running in encoder cloud 130m on servers 120c to 120q, under the control of microservices manager 700. The decoding process (controlled by microservices manager 700) is then performed by decoder software 600 running on servers 120c..120q, which takes subset cData and generates the corresponding decompressed block uData from each compressed packet cData. The second XFER API call transfers uData to application 102 running in server component 120a, where the uData blocks received from decode servers 120c..120q are re-assembled in the proper order by software controlled by microservices manager 700.
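This GET path can be sketched under the same stand-in assumptions as the PUT sketch above (zlib for decoder software 600; a thread pool loosely standing in for the Nd parallel decoder instances under microservices manager 700). The ordered reassembly of uData blocks is the essential step:

```python
# Sketch of the Fig. 5 GET path: compressed packets fan out to parallel
# decoder instances, and the uData blocks are re-assembled in order.
import zlib
from concurrent.futures import ThreadPoolExecutor

def get(obj_name: str, object_store: dict) -> bytes:
    packets = object_store[obj_name]                      # first XFER 810
    with ThreadPoolExecutor(max_workers=4) as pool:       # Nd decoders in parallel
        blocks = list(pool.map(zlib.decompress, packets)) # decoder 600 per packet
    return b"".join(blocks)   # ordered re-assembly; second XFER 810 to the app

# Pairs with the PUT sketch above:
# my_buf = get("objName", object_store)
```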

[0044] In the decoding example of Figure 5, because decompression is performed in encoding cluster 130m (under the control of microservices manager 700), compressed data flows from storage element 160a to decoder software 600 running on decode servers 120c..120q. In contrast, during decoding, decompressed data flows from decoder servers 120c..120q to application server 120a. Thus in Fig. 5, during decoding (GET API calls), less (compressed) data flows across the network connections from storage element 160a to decode servers 120c..120q, while more (decompressed) data flows across the network connections from decode servers 120c..120q to application server 120a.

[0045] We now identify a weakness of the decoding method illustrated in Fig. 5. Because decoding is done BEFORE the data reaches server 120a that runs application 102, network connections 145a from decode servers 120c..120q carry uncompressed data to application server 120a, which slows down the GET API call compared to the alternative method that we are about to describe in Figure 6. The inventive method described in Fig. 6 of this innovation preserves the compressed data format as the data flows from storage element 160a (which stores compressed data packets) to application server 120a and then performs “just in time” decoding in the same server 120a where application 102 is running (see Fig. 7a), or in a server that is also located in the same server rack 130 (see Fig. 7b) where server 120a is located.

[0046] Figure 6 illustrates that by running decoder software 600 in server 120a, all networking components 145a..145z transmit encoded (compressed) data. Thus, by transferring compressed data all the way from storage element 160a (where the data is already stored in compressed packets) to application server 120a, network transfers across all networking components 145 carry compressed packets. In contrast, the method described in Figure 5, which performed decompression using decoder software 600 that was implemented using microservices, sent decompressed data, not compressed data, from the decode microservices servers 120c..120q to application server 120a.

[0047] Figure 6a illustrates the operation of block-oriented decoder software 600 that operates in application server 120a, where application 102 also runs. In Fig. 6a, Nd instances of decoder software 600 run in parallel, decoding compressed data packets that were transmitted across networking interface 145. Using networking connection 145 that operates at a rate of (for example) 100 Gbps 147z and 147a, storage element 160a transmits compressed packets to application server 120a. Decoder software 600 operates in one thread in a core of server 120a, under the control of microservices manager 700 or a container scheduler (such as Kubernetes, not shown in Fig. 6), generating decoded data blocks on the same network node as application 102, or in other cases on a different network node on the same rack as server 120a. Decoder software 600 decompresses compressed packets, generating uData decompressed blocks that are then stored in the myBuf DRAM buffer specified by GET call 900 of Fig. 6b. Because the compressed blocks are sent on a network route across network connection 145z from storage element 160a to server 120a (which runs both decoder software 600 and application software 102), data is transferred faster (in compressed format) from storage element 160a to application server 120a running application software 102. In other embodiments, the decoder software instances are located on network nodes between the manager 700 and the application server 120a, and the compressed blocks are sent across network connection 145z from storage element 160a to the fourth node which runs decoder software 600, and then on a shorter segment of the network route in uncompressed form to application server 120a running application software 102. In this embodiment using a fourth node, the uncompressed data is delivered to the application server via a network link, rather than in a memory sharing operation. Nonetheless, this network link can be an intra-rack link or otherwise a short length of the network relative to the network route to the access manager 700 in some cases, or a short length of the network relative to the network route to the storage node including storage element 160a.

[0048] Figure 6b illustrates the Application Programming Interface (API) calls executed by application 102 in Fig. 6a, starting with GET(objName, myBuf) API call 900. Note that this is the SAME API call shown in Fig. 5a. Referring back to Fig. 6a, compressed data packets flow from data storage element 160a through network connection 145 to decode server 120a running decoder software 600. Decoded data is passed from decoder software 600 running in server 120a to application 102, also running in server 120a.

[0049] Fig. 6c lists in tabular form the corresponding transfers and/or computations performed in Fig. 6b. GET(objName, myBuf) API call 900 is issued by application 102 to microservices manager 700, which specifies where the compressed data (inData) to be decoded is found in storage element 160c, where the compressed object objName is stored. The GET API call is then translated into two data transfer operations 810 and one call to decoder software 600. API call 810 manages the transfer of compressed packets corresponding to objName from data storage element 160a to decoder software 600c..600q running on application server 120a, under the control of microservices manager 700. The decoding process (controlled by microservices manager 700) is then performed by decoder software 600 running on application server 120a, which takes compressed packet cData and generates the corresponding decompressed block uData, which is stored in myBuf as requested by the original GET API call.

[0050] As illustrated in Fig. 6, data remains compressed from storage element 160 to application server 120a, where application 102 is running. Application 102 requested the data via the original GET API call.

[0051] It will be understood by those skilled in the art of information technology that the innovation described herein simultaneously reduces storage costs and accelerates data transfers in both private and public data centers. Thus applications using this innovation will operate faster, because the data they request is transmitted from storage to servers in compressed form and thus is transmitted in less time than transmitting uncompressed data would have required, and will also cost less, because the data is stored in compressed form.

[0052] We note that the decoder software 600 operates significantly faster than encoder software 500. This asymmetry in processing rates between encoding (compression) and decoding (decompression) is typically true of all compression algorithms. Because the decoder software operates faster than the encoder software, when the decoder software runs on the server 120 that also runs the application 102 that requested the data via a GET API call to a compressed storage service, it is possible to run multiple decode software threads on server 120, in a manner that matches the “wire speed” of network connection 145. For example, if network connection 145 in Fig. 6 operates at 10 Gbps (which corresponds to 1.25 GB/sec), four software decoding threads will match that “wire speed”, because if each decode thread operates at 300 MB/sec (as shown in Fig. 6c), four threads x 300 MB/sec/thread = 1.2 GB/sec. Using hyper-threading (2 software threads per core) means that server 120a need only allocate 2 cores (4 threads) to maintain a “wire speed” of 10 Gbps.

[0053] Figure 7a illustrates decoder software 600 running in the same server 120a that runs application 102. In this configuration, data transfer link 139 may be simply a shared buffer in Dynamic Random Access Memory (DRAM) within Server 120a. Decoder processing in Figure 7a is in all other respects identical to the processing described in Figure 6.

[0054] Figure 7b illustrates decoder software 600 running in server 120b, a server that differs from the server 120a that runs application 102. In this example, data transfer link 139 may be an Ethernet link, a shared Peripheral Component Interconnect Express (PCIe) link, or other data transfer mechanism that supports data exchange between servers located in the same server rack 130. Decoder processing in Figure 7b is in all other respects identical to the processing described in Figure 6.

[0055] To summarize, Fig. 7a illustrates decoder software 600 running in the same server as application 102. In contrast, Fig. 7b illustrates decoder software 600 running in a different server (server 120b) than application 102, which runs in server 120a.

[0056] Those skilled in the art of data center architecture will note that most data center servers 120 in server racks 130 are connected today (2019) to the TOR switches 140 via 10 Gbps or slower networking links. Thus, the innovation described in this specification, being software-based, can scale as network links between servers and TOR switches get faster. For example, as data center servers 120 are connected someday to TOR switches 140 via 25 Gbps (~3 GB/sec) or 40 Gbps (~5 GB/sec) network connections, server 120a running application software 102 can simply allocate more cores or threads to decoder software 600. To maintain ~3 GB/sec (25 Gbps), decoding will require ~10 threads (~5 hyper-threaded cores), while to maintain ~5 GB/sec (40 Gbps), decoding will require ~16 threads (~8 hyper-threaded cores). Thus, the present innovation is scalable to match future, faster network connections from TOR switch 140 to server 120 that runs decoder software 600.

[0057] Those skilled in the art of data center architecture will also appreciate that encoder software 500 and decoder software 600 could also be implemented in other processors, such as Graphical Processing Units (GPUs) offered by Nvidia and AMD, or Field-Programmable Gate Arrays (FPGAs) offered by Xilinx and Intel (formerly called Altera, then called the Intel Programmable Systems Group), without deviating from the spirit and intent of the present innovation.

[0058] A request for data can comprise a user-specified random access specifier in the form {startByte, NBytes}, where both startByte and NBytes refer to the original, uncompressed elements (typically Bytes) that are requested from the original input to the encoder that generated encoded blocks conforming to the encoded block format of the disclosed technology. An equally useful, equivalent random access specifier may also have the form {startByte, endByte}, where endByte = startByte + NBytes - 1. These two random access specifiers are equivalent and interchangeable. Either one can be used to specify which decoded elements the disclosed technology returns to the user.

[0059] Figure 8 illustrates how a user-specified {startByte 1510, NBytes 1520} random access specifier is converted into three random-access parameters {startBlk 1540, endBlk 1550, and NtossStart 1560}. In order to avoid having to decode all elements of a stream of encoded blocks that precede the desired startByte 1510, a group of indexes 1410 can be used to begin decoding at the encoded block that contains startByte 1510, which is not the first block in the plurality of blocks that encode the selected file or object. A decoder that supports random access need only decode NtotBlocks 1570 in order to return the user-specified NBytes 1520.

[0060] As shown in the example of Fig. 8, startByte 1510 is located in encoded block 210p. In a preferred embodiment of the disclosed technology, block size 215 is equal for all input blocks, so startBlk 1540 is determined by dividing startByte 1510 by block size 215. Similarly, the last block to be decoded by the decoder is endBlk 1550, calculated by dividing endByte 1530 by block size 215. The parameter endByte 1530 is the sum of startByte 1510 and NBytes 1520, minus one. The total number of blocks to be decoded is one plus the difference between endBlk 1550 and startBlk 1540. Since startByte 1510 does not necessarily correspond to the first decoded element of block 210p (startBlk 1540), the variable NtossStart 1560 specifies how many elements (typically Bytes) of the first block will be "tossed" (discarded) prior to returning the NBytes 1520 that the user requested in the {startByte 1510, NBytes 1520} random access specifier.

[0061] The location of decoding can substantially improve data center performance. All or most storage can be in compressed form, in whatever way compressed data gets put into storage. Once all data is in compressed form, the focus becomes where decompression occurs. So the transfer of encoded data can occur across the "long links" network connections from storage (where the data is already stored in compressed form) to the app that requested the data (or a subset of the data). The decompression can be done in software, in the rack that houses the IT application that originally requested the data. That way the decompressed data that was requested by the IT application is transferred across "short links". The "long links" are typically between top-of-rack (TOR) switches, from the TOR switch in the data storage rack to the TOR switch in the server rack where the IT application is running.

[0062] The short links could be in shared memory. The "long links" (typically Ethernet links between top-of-rack switches from storage racks to server racks) carry compressed data, while the "short links" carry uncompressed data.

[0063] Included herein are copies of my prior Patent Applications, including those describing examples of encoding and decoding technologies suitable for use in the configurations described herein.