

Title:
APPARATUS, SYSTEM AND METHOD FOR CACHING DATA
Document Type and Number:
WIPO Patent Application WO/2012/021847
Kind Code:
A2
Abstract:
An apparatus, system, and method are disclosed for caching data. A storage request module 602 detects an input/output ("I/O") request for a storage device 118 cached by solid-state storage media 110 of a cache 102. A direct mapping module 606 references a single mapping structure 1100 to determine that the cache 102 comprises data of the I/O request. The single mapping structure 1100 maps each logical block address of the storage device 118 directly to a logical block address of the cache 102. The single mapping structure 1100 maintains a fully associative relationship between logical block addresses of the storage device 118 and physical storage addresses on the solid-state storage media 110. A cache fulfillment module 604 satisfies the I/O request using the cache 102 in response to the direct mapping module 606 determining that the cache 102 comprises at least one data block of the I/O request.

Inventors:
FLYNN DAVID (US)
Application Number:
PCT/US2011/047659
Publication Date:
February 16, 2012
Filing Date:
August 12, 2011
Assignee:
FUSION IO INC (US)
FLYNN DAVID (US)
International Classes:
G06F12/08; G06F12/02; G06F12/0893; G06F13/14
Foreign References:
US6745292B1 (2004-06-01)
US6334173B1 (2001-12-25)
US20080195801A1 (2008-08-14)
KR20100022811A (2010-03-03)
Attorney, Agent or Firm:
HILTON, Scott et al. (Salt Lake City, Utah, US)
Claims:
CLAIMS

1. A method for caching data, the method comprising:

detecting an input/output ("I/O") request for a storage device cached by solid-state storage media of a cache;

referencing a single mapping structure to determine that the cache comprises data of the I/O request, the single mapping structure mapping each logical block address of the storage device directly to a logical block address of the cache, the single mapping structure comprising a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media; and

satisfying the I/O request using the cache in response to determining that the cache comprises at least one data block of the I/O request.

2. The method of Claim 1, wherein satisfying the I/O request comprises storing the data of the I/O request to the cache sequentially on the solid-state storage media to preserve an ordered sequence of storage operations performed on the solid-state storage media at one or more logical block addresses of the cache.

3. The method of Claim 2, wherein storing the data of the I/O request to the cache sequentially comprises appending the data of the I/O request to an append point of a sequential, log-based, cyclic writing structure of the solid-state storage media, and wherein the single mapping structure is configured such that each logical block address of the storage device maps to a distinct unique entry in the single mapping structure and the unique entry maps to a distinct physical storage address on the solid-state storage media.

4. The method of Claim 1, wherein satisfying the I/O request comprises reading the data of the I/O request from the cache using a physical storage address of the solid-state storage media associated with a logical block address of the I/O request.

5. The method of Claim 1, further comprising storing the data of the I/O request on the solid-state storage media in a format that associates the data with respective logical block addresses of the data.

6. The method of Claim 1, further comprising associating sequence indicators with the data of the I/O request on the solid-state storage media, wherein the sequence indicators record an ordered sequence of storage operations performed on the solid-state storage media.

7. The method of Claim 6, further comprising: storing the data of the I/O request in a format that associates the data with respective logical block addresses on the solid-state storage media;

maintaining an entry in the single mapping structure, the entry comprising an association between logical block addresses of the data of the I/O request and physical storage locations comprising the data of the I/O request on the solid-state storage media; and

reconstructing the entry in the single mapping structure using the logical block addresses of the data and the sequence indicators associated with the data on the solid-state storage media.

8. The method of Claim 1, wherein the I/O request comprises one or more sets of noncontiguous logical block address ranges for the storage device.

9. The method of Claim 1, wherein the solid-state storage media is managed by a solid-state storage controller configured to mask differences in latency for storage operations performed on the solid-state storage media.

10. The method of Claim 1, wherein the single mapping structure comprises a plurality of entries, each entry mapping a variable length range of logical block addresses of the cache to a location on the solid-state storage media of the cache.

11. The method of Claim 1, wherein the presence of an entry for a logical block address of the cache in the single mapping structure indicates that the cache stores data associated with a corresponding logical block address of the storage device at a physical storage address on the solid-state storage media indicated by the entry.

12. The method of Claim 1, further comprising performing one or more actions on the cache in response to detecting one or more predefined commands for the storage device, wherein the one or more predefined commands are selected from the group consisting of a flush command, a pin command, an unpin command, a freeze command, and a thaw command.

13. The method of Claim 1, further comprising dynamically reducing a cache size for the cache in response to an age characteristic for storage media of the cache.

14. The method of Claim 1, wherein the single mapping structure maps each logical block address of the storage device directly to a logical block address of the cache by using an address of the I/O request directly to identify both an entry in the single mapping structure for a logical block address of the cache and to identify a logical block address of the storage device.

15. The method of Claim 1, wherein a cache data block associated with a logical block address of the cache is equal in size to a storage device data block associated with a logical block address of the storage device.

16. An apparatus for caching data, the apparatus comprising:

a storage request module that detects an input/output ("I/O") request for a storage device cached by solid-state storage media of a cache;

a direct mapping module that references a single mapping structure using an address of the I/O request to determine that the cache comprises data of the I/O request, the address of the I/O request directly identifying both an entry in the single mapping structure and a logical block address of the storage device, the single mapping structure maintaining a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media; and

a cache fulfillment module that satisfies the I/O request using the cache in response to determining that the cache comprises at least one data block of the I/O request.

17. The apparatus of Claim 16, wherein the cache fulfillment module satisfies the I/O request by storing the data of the I/O request to the cache sequentially on the solid-state storage media to preserve an ordered sequence of storage operations performed on the solid-state storage media at one or more logical block addresses of the cache.

18. The apparatus of Claim 16, wherein the cache fulfillment module satisfies the I/O request by reading the data of the I/O request from the cache using a physical storage address of the solid-state storage media associated with a logical block address of the I/O request.

19. A system for caching data, the system comprising:

a processor;

a storage controller for a nonvolatile solid-state storage cache, the cache in communication with the processor over one or more communications buses;

a cache controller in communication with the storage controller, the cache controller comprising,

a storage request module that detects an input/output ("I/O") request for a storage device cached by solid-state storage media of a cache;

a direct mapping module that references a single mapping structure using an address of the I/O request to determine that the cache comprises data of the I/O request, the address of the I/O request directly identifying both an entry in the single mapping structure and a logical block address of the storage device, the single mapping structure maintaining a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media; and

a cache fulfillment module that satisfies the I/O request using the cache in response to determining that the cache comprises at least one data block of the I/O request.

20. The system of Claim 19, further comprising a host computer system, the host computer system comprising the processor, wherein the storage controller and the cache controller each comprise a device driver executing on the processor of the host computer system.

Description:
APPARATUS, SYSTEM, AND METHOD FOR CACHING DATA

BACKGROUND OF THE INVENTION

FIELD OF THE INVENTION

This invention relates to caching data and more particularly relates to caching data using solid-state storage media.

DESCRIPTION OF THE RELATED ART

Data storage caches are typically direct mapped, fully associative, or set associative. In direct mapped caches, each storage block of a backing store is directly mapped to a single cache block, but since a cache typically has a smaller capacity than an associated backing store, several storage blocks often share the same cache block, causing cache collisions. Direct mapped caches usually address a cache collision for a cache block by overwriting the cache block with the most recently accessed data.

In fully associative caches, storage blocks typically are not mapped to a specific cache block, but can be cached in any cache block. The processing overhead for locating cached data in a fully associative cache is typically greater than for a direct mapped cache, because a cache map, cache index, cache tags, or another separate cache translation layer is used to locate the cached data.

Set associative caches typically divide cache storage into sets, where each storage block of a backing store is mapped to a set and can be stored in any cache block in the set. Set associative caches typically have more cache collision issues than fully associative caches and more processing overhead for locating cached data than direct mapped caches.
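For illustration only (not part of the application), the following sketch shows how the three conventional placement policies described above select candidate cache lines for a backing-store block address; the line counts and the modulo-based indexing are assumptions chosen for clarity.

    # Illustrative sketch (assumed parameters): candidate cache lines for a
    # backing-store block address under the three conventional policies.

    NUM_LINES = 1024        # assumed total number of cache lines
    NUM_SETS = 256          # assumed number of sets for the set-associative case
    WAYS = NUM_LINES // NUM_SETS

    def direct_mapped_candidates(block_addr):
        # Exactly one possible line; colliding blocks overwrite each other.
        return [block_addr % NUM_LINES]

    def fully_associative_candidates(block_addr):
        # Any line may hold the block; a separate index/tag search locates it.
        return list(range(NUM_LINES))

    def set_associative_candidates(block_addr):
        # Any of the N lines in the block's set may hold it.
        first = (block_addr % NUM_SETS) * WAYS
        return list(range(first, first + WAYS))

    if __name__ == "__main__":
        addr = 123456
        print(direct_mapped_candidates(addr))        # one candidate line
        print(len(fully_associative_candidates(addr)))  # every line is a candidate
        print(set_associative_candidates(addr))      # N candidate lines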

SUMMARY OF THE INVENTION

The present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available data storage caches. Accordingly, the present invention has been developed to provide an apparatus, system, and method for caching data that overcome many or all of the above-discussed shortcomings in the art.

A method of the present invention is presented for caching data. The method in the disclosed embodiments substantially includes the steps necessary to carry out the functions presented below with respect to the operation of the described apparatus and system. In one embodiment, the method includes detecting an input/output ("I/O") request for a storage device cached by solid-state storage media of a cache. The method, in a further embodiment, may include referencing a single mapping structure to determine that the cache comprises data of the I/O request. In certain embodiments, the method includes satisfying the I/O request using the cache in response to determining that the cache comprises at least one data block of the I/O request.

In one embodiment, the single mapping structure maps each logical block address of the storage device directly to a logical block address of the cache. The single mapping structure, in a further embodiment, comprises or maintains a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media.
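As a rough illustration of this summary, the sketch below models the single mapping structure with an in-memory dictionary: the storage device logical block address itself identifies the entry (the direct mapping), while the physical address recorded in the entry may be any location chosen by an append point (the fully associative relationship). The class and its methods are assumptions for exposition, not the claimed implementation.

    # Minimal sketch, assuming a dict stands in for the single mapping structure
    # and physical addresses are integers assigned by an append point.

    class SingleMappingStructure:
        """One structure: storage-device LBA -> cache LBA (identity here)
        and cache LBA -> physical address on the solid-state media."""

        def __init__(self):
            self._entries = {}          # cache LBA -> physical address
            self._append_point = 0      # next free physical location (assumed)

        def cache_lba_for(self, device_lba):
            # Direct mapping: the device LBA itself identifies the entry.
            return device_lba

        def insert(self, device_lba):
            phys = self._append_point   # fully associative: any physical location
            self._append_point += 1
            self._entries[self.cache_lba_for(device_lba)] = phys
            return phys

        def lookup(self, device_lba):
            # Presence of an entry means the cache holds the block.
            return self._entries.get(self.cache_lba_for(device_lba))

    m = SingleMappingStructure()
    m.insert(42)
    assert m.lookup(42) == 0 and m.lookup(43) is None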

Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.

These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:

Figure 1 is a schematic block diagram illustrating one embodiment of a system for caching data in accordance with the present invention;

Figure 2 is a schematic block diagram illustrating one embodiment of a host device in accordance with the present invention;

Figure 3 is a schematic block diagram illustrating one embodiment of a direct cache module in accordance with the present invention;

Figure 4 is a schematic block diagram illustrating another embodiment of a direct cache module in accordance with the present invention;

Figure 5 is a schematic block diagram illustrating one embodiment of a storage controller in accordance with the present invention;

Figure 6 is a schematic block diagram illustrating another embodiment of a storage controller in accordance with the present invention;

Figure 7 is a schematic block diagram illustrating one embodiment of a forward map and a reverse map in accordance with the present invention;

Figure 8 is a schematic block diagram illustrating one embodiment of a mapping structure, a logical address space of a cache, a sequential, log-based, append-only writing structure, and an address space of a storage device in accordance with the present invention;

Figure 9 is a schematic flow chart diagram illustrating one embodiment of a method for caching data in accordance with the present invention; and

Figure 10 is a schematic flow chart diagram illustrating another embodiment of a method for caching data in accordance with the present invention.

DETAILED DESCRIPTION OF THE INVENTION

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

CACHING SYSTEM

Figure 1 depicts one embodiment of a system 100 for caching data in accordance with the present invention. The system 100, in the depicted embodiment, includes a cache 102, a host device 114, a direct cache module 116, and a storage device 118. The cache 102, in the depicted embodiment, includes a solid-state storage controller 104, a write data pipeline 106, a read data pipeline 108, and a solid-state storage media 110. In general, the system 100 caches data for the storage device 118 in the cache 102.

In the depicted embodiment, the system 100 includes a single cache 102. In another embodiment, the system 100 may include two or more caches 102. For example, in various embodiments, the system 100 may mirror cached data between several caches 102, may virtually stripe cached data across multiple caches 102, or otherwise cache data in more than one cache 102. In general, the cache 102 serves as a read and/or a write cache for the storage device 118 and the storage device 118 is a backing store for the cache 102. In the depicted embodiment, the cache 102 is embodied by a non-volatile, solid-state storage device, with a solid-state storage controller 104 and non-volatile, solid-state storage media 110. The non-volatile, solid-state storage media 110 may include flash memory, nano random access memory ("nano RAM" or "NRAM"), magneto-resistive RAM ("MRAM"), phase change RAM ("PRAM"), etc. In further embodiments, the cache 102 may include other types of non-volatile and/or volatile data storage, such as dynamic RAM ("DRAM"), static RAM ("SRAM"), magnetic data storage, optical data storage, and/or other data storage technologies.

The solid-state storage controller 104, in certain embodiments, may mask differences in latency for storage operations performed on the solid-state storage media 110 by grouping erase blocks by access time, wear level, and/or health, by queuing storage operations based on expected completion times, by splitting storage operations, by coordinating storage operation execution in parallel among multiple buses, or the like.

In general, the cache 102 caches data for the storage device 118. The storage device 118, in one embodiment, is a backing store associated with the cache 102 and/or with the direct cache module 116. The storage device 118 may include a hard disk drive, an optical drive with optical media, a magnetic tape drive, or another type of storage device. In one embodiment, the storage device 118 may have a greater data storage capacity than the cache 102. In another embodiment, the storage device 118 may have a higher latency, a lower throughput, or the like, than the cache 102.

The storage device 118 may have a higher latency, a lower throughput, or the like due to properties of the storage device 118 itself, or due to properties of a connection to the storage device 118. For example, in one embodiment, the cache 102 and the storage device 118 may each include non-volatile, solid-state storage media 110 with similar properties, but the storage device 118 may be in communication with the host device 114 over a data network, while the cache 102 may be directly connected to the host device 114, causing the storage device 118 to have a higher latency relative to the host device 114 than the cache 102.

In the depicted embodiment, the cache 102 and the storage device 118 are in communication with the host device 114 through the direct cache module 116. The cache 102 and/or the storage device 118, in one embodiment, may be direct attached storage ("DAS") of the host device 114. DAS, as used herein, is data storage that is connected to a device, either internally or externally, without a storage network in between.

In one embodiment, the cache 102 and/or the storage device 118 are internal to the host device 114 and are connected using a system bus, such as a peripheral component interconnect express ("PCI-e") bus, a Serial Advanced Technology Attachment ("SATA") bus, or the like. In another embodiment, the cache 102 and/or the storage device 118 may be external to the host device 114 and may be connected using a universal serial bus ("USB") connection, an Institute of Electrical and Electronics Engineers ("IEEE") 1394 bus ("FireWire"), an external SATA ("eSATA") connection, or the like. In other embodiments, the cache 102 and/or the storage device 118 may be connected to the host device 114 using a PCI-e bus with an external electrical or optical bus extension or a bus networking solution such as InfiniBand or PCI Express Advanced Switching ("PCIe-AS"), or the like.

In various embodiments, the cache 102 and/or the storage device 118 may be in the form of a dual-inline memory module ("DIMM"), a daughter card, or a micro-module. In another embodiment, the cache 102 and/or the storage device 118 may be elements within a rack-mounted blade. In another embodiment, the cache 102 and/or the storage device 118 may be contained within packages that are integrated directly onto a higher level assembly (e.g., a motherboard, laptop, or graphics processor). In another embodiment, individual components comprising the cache 102 and/or the storage device 118 are integrated directly onto a higher level assembly without intermediate packaging. In the depicted embodiment, the cache 102 includes one or more solid-state storage controllers 104 with a write data pipeline 106 and a read data pipeline 108, and a solid-state storage media 110.

In a further embodiment, instead of being connected directly to the host device 114 as DAS, the cache 102 and/or the storage device 118 may be connected to the host device 114 over a data network. For example, the cache 102 and/or the storage device 118 may include a storage area network ("SAN") storage device, a network attached storage ("NAS") device, a network share, or the like. In one embodiment, the system 100 may include a data network, such as the Internet, a wide area network ("WAN"), a metropolitan area network ("MAN"), a local area network ("LAN"), a token ring, a wireless network, a Fibre Channel network, a SAN, a NAS, ESCON, or the like, or any combination of networks. A data network may also include a network from the IEEE 802 family of network technologies, such as Ethernet, token ring, Wi-Fi, Wi-Max, and the like. A data network may include servers, switches, routers, cabling, radios, and other equipment used to facilitate networking between the host device 114 and the cache 102 and/or the storage device 118.

In one embodiment, at least the cache 102 is connected directly to the host device 114 as a DAS device. In a further embodiment, the cache 102 is directly connected to the host device 114 as a DAS device and the storage device 118 is directly connected to the cache 102. For example, the cache 102 may be connected directly to the host device 114, and the storage device 118 may be connected directly to the cache 102 using a direct, wire-line connection, such as a PCI express bus, a SATA bus, a USB connection, an IEEE 1394 connection, an eSATA connection, a proprietary direct connection, an external electrical or optical bus extension or bus networking solution such as InfiniBand or PCIe-AS, or the like. One of skill in the art, in light of this disclosure, will recognize other arrangements and configurations of the host device 114, the cache 102, and the storage device 118 suitable for use in the system 100.

The system 100 includes the host device 114 in communication with the cache 102 and the storage device 118 through the direct cache module 116. A host device 114 may be a host, a server, a storage controller of a SAN, a workstation, a personal computer, a laptop computer, a handheld computer, a supercomputer, a computer cluster, a network switch, router, or appliance, a database or storage appliance, a data acquisition or data capture system, a diagnostic system, a test system, a robot, a portable electronic device, a wireless device, or the like.

In the depicted embodiment, the host device 114 is in communication with the direct cache module 116. The direct cache module 116, in general, receives or otherwise detects read and write requests from the host device 114 for the storage device 118 and manages the caching of data in the cache 102. In one embodiment, the direct cache module 116 comprises a software application, file system filter driver, or the like.

The direct cache module 116, in various embodiments, may include one or more software drivers on the host device 114, one or more storage controllers, such as the solid-state storage controllers 104 of the cache 102, a combination of one or more software drivers and storage controllers, or the like. In certain embodiments, hardware and/or software of the direct cache module 116 comprises a cache controller that is in communication with the solid-state storage controller 104 to manage operation of the cache 102.

In one embodiment, the storage controller 104 sequentially writes data on the solid-state storage media 110 in a log-structured format; within one or more physical structures of the storage elements, the data is sequentially stored on the solid-state storage media 110. Sequentially writing data involves the storage controller 104 streaming data packets into storage write buffers for storage elements, such as a chip (a package of one or more dies) or a die on a circuit board. When the storage write buffers are full, the data packets are programmed to a designated virtual or logical page ("LP"). Data packets then refill the storage write buffers and, when full, the data packets are written to the next LP. The next virtual page may be in the same bank or another bank. This process continues, LP after LP, typically until a virtual or logical erase block ("LEB") is filled.
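A minimal sketch of this append-only flow is shown below, assuming small illustrative buffer and page sizes; it fills a write buffer, programs it to the logical page at the append point, and advances LP after LP until an LEB is filled.

    # Sketch of the append-only write flow described above (buffer and page
    # sizes are illustrative assumptions, not values from the application).

    PACKETS_PER_PAGE = 4          # assumed logical-page capacity
    PAGES_PER_LEB = 8             # assumed logical-erase-block capacity

    class LogWriter:
        def __init__(self):
            self.buffer = []
            self.current_leb = 0
            self.current_page = 0
            self.log = {}             # (leb, page) -> list of packets

        def write_packet(self, packet):
            self.buffer.append(packet)
            if len(self.buffer) == PACKETS_PER_PAGE:
                self._program_page()

        def _program_page(self):
            # Program the full buffer to the logical page at the append point,
            # then advance to the next page (and next LEB when this one fills).
            self.log[(self.current_leb, self.current_page)] = self.buffer
            self.buffer = []
            self.current_page += 1
            if self.current_page == PAGES_PER_LEB:
                self.current_page = 0
                self.current_leb += 1

    w = LogWriter()
    for i in range(PACKETS_PER_PAGE):
        w.write_packet(("data", i))
    assert (0, 0) in w.log and w.current_page == 1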

In another embodiment, the streaming may continue across LEB boundaries with the process continuing, LEB after LEB. Typically, the storage controller 104 sequentially stores data packets in an LEB by order of processing. In one embodiment, where a write data pipeline 106 is used, the storage controller 104 stores packets in the order that they come out of the write data pipeline 106. This order may be a result of data segments arriving from a requesting device mixed with packets of valid data that are being read from another storage location as valid data is being recovered from another LEB during a recovery operation.

The sequentially stored data, in one embodiment, can serve as a log to reconstruct data indexes and other metadata using information from data packet headers. For example, in one embodiment, the storage controller 104 may reconstruct a storage index by reading headers to determine the data structure to which each packet belongs and sequence information to determine where in the data structure the data or metadata belongs. The storage controller 104, in one embodiment, uses physical address information for each packet and timestamp or sequence information to create a mapping between the physical locations of the packets and the data structure identifier and data segment sequence. Timestamp or sequence information is used by the storage controller 104 to replay the sequence of changes made to the index and thereby reestablish the most recent state.

In one embodiment, erase blocks are time stamped or given a sequence number as packets are written and the timestamp or sequence information of an erase block is used along with information gathered from container headers and packet headers to reconstruct the storage index. In another embodiment, timestamp or sequence information is written to an erase block when the erase block is recovered.
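The following sketch illustrates the reconstruction idea under the assumption that each packet header exposes a logical block address, a sequence number, and a physical address; replaying packets in sequence order reestablishes the most recent mapping for each address.

    # Sketch: rebuilding a logical-to-physical index by replaying packet
    # headers in sequence order (the header field names are assumptions).

    def rebuild_index(packets):
        """packets: iterable of dicts with assumed fields
        'lba', 'sequence', and 'physical_address'."""
        index = {}
        # Replaying in sequence order means later writes to the same LBA
        # overwrite earlier mappings, reestablishing the most recent state.
        for pkt in sorted(packets, key=lambda p: p["sequence"]):
            index[pkt["lba"]] = pkt["physical_address"]
        return index

    example = [
        {"lba": 10, "sequence": 1, "physical_address": 100},
        {"lba": 10, "sequence": 5, "physical_address": 250},  # newer wins
        {"lba": 11, "sequence": 2, "physical_address": 101},
    ]
    assert rebuild_index(example)[10] == 250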

In a read, modify, write operation, data packets associated with the logical structure are located and read in a read operation. Data segments of the modified structure that have been modified are not written to the location from which they are read. Instead, the modified data segments are again converted to data packets and then written to the next available location in the virtual page currently being written. Index entries for the respective data packets are modified to point to the packets that contain the modified data segments. The entry or entries in the index for data packets associated with the same logical structure that have not been modified will include pointers to the original location of the unmodified data packets. Thus, if the original logical structure is maintained, for example to maintain a previous version of the logical structure, the original logical structure will have pointers in the index to all data packets as originally written. The new logical structure will have pointers in the index to some of the original data packets and pointers to the modified data packets in the virtual page that is currently being written.

In a copy operation, the index includes an entry for the original logical structure mapped to a number of packets stored on the solid-state storage media 110. When a copy is made, a new logical structure is created and a new entry is created in the index mapping the new logical structure to the original packets. The new logical structure is also written to the solid-state storage media 110 with its location mapped to the new entry in the index. The new logical structure packets may be used to identify the packets within the original logical structure that are referenced in case changes have been made in the original logical structure that have not been propagated to the copy and the index is lost or corrupted. In another embodiment, the index includes a logical entry for a logical block.
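As a simplified sketch of the read, modify, write flow (the index and argument shapes are assumptions), modified segments are assigned new locations at the append point while unmodified segments keep their original index entries; the actual writing of the converted data packets is elided here.

    # Sketch: only index entries for modified segments are redirected to the
    # append point; unmodified packets keep their original locations.

    def read_modify_write(index, append_point, lba_list, modified):
        """index: lba -> physical address; modified: lba -> new data packet.
        Returns the updated index and the new append point."""
        for lba in lba_list:
            if lba in modified:
                # Modified segment: written at the next location in the log,
                # never back to the location it was read from.
                index[lba] = append_point
                append_point += 1
            # Unmodified segments: index still points at the original packets.
        return index, append_point

    idx = {1: 10, 2: 11}
    idx, ap = read_modify_write(idx, append_point=500, lba_list=[1, 2],
                                modified={2: b"new"})
    assert idx == {1: 10, 2: 500} and ap == 501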

Beneficially, sequentially writing packets facilitates a more even use of the solid-state storage media 110 and allows a solid-state storage device controller to monitor storage hot spots and level usage of the various virtual pages in the solid-state storage media 110. Sequentially writing packets also facilitates a powerful, efficient garbage collection system, which is described in detail below. One of skill in the art will recognize other benefits of sequential storage of data packets.

The system 100 may comprise a log-structured storage system or log-structured array similar to a log-structured file system and the order that data is stored may be used to recreate an index. Typically an index that includes a logical-to-physical mapping is stored in volatile memory. If the index is corrupted or lost, the index may be reconstructed by addressing the solid-state storage media 110 in the order that the data was written. Within a logical erase block ("LEB"), data is typically stored sequentially by filling a first logical page, then a second logical page, etc. until the LEB is filled. The solid-state storage controller 104 then chooses another LEB and the process repeats. By maintaining an order that the LEBs were written to and by knowing that each LEB is written sequentially, the index can be rebuilt by traversing the solid-state storage media 110 in order from beginning to end. In other embodiments, if part of the index is stored in non-volatile memory, such as on the solid-state storage media 110, the solid-state storage controller 104 may only need to replay a portion of the solid-state storage media 110 to rebuild a portion of the index that was not stored in non-volatile memory. One of skill in the art will recognize other benefits of sequential storage of data packets.

In one embodiment, the host device 114 loads one or more device drivers for the cache 102 and/or the storage device 118 and the direct cache module 116 communicates with the one or more device drivers on the host device 114. In another embodiment, the direct cache module 116 may communicate directly with a hardware interface of the cache 102 and/or the storage device 118. In a further embodiment, the direct cache module 116 may be integrated with the cache 102 and/or the storage device 118.

In one embodiment, the cache 102 and/or the storage device 118 have block device interfaces that support block device commands. For example, the cache 102 and/or the storage device 118 may support the standard block device interface, the ATA interface standard, the ATA Packet Interface ("ATAPI") standard, the small computer system interface ("SCSI") standard, and/or the Fibre Channel standard, which are maintained by the InterNational Committee for Information Technology Standards ("INCITS"). The direct cache module 116 may interact with the cache 102 and/or the storage device 118 using block device commands to read, write, and clear (or trim) data.

In one embodiment, the direct cache module 116 serves as a proxy for the storage device 118, receiving read and write requests for the storage device 118 directly from the host device 114. The direct cache module 116 may represent itself to the host device 114 as a storage device having a capacity similar to and/or matching the capacity of the storage device 118. In certain embodiments, the direct cache module 116 and/or the solid-state storage controller 104 dynamically reduce a cache size for the cache 102 in response to an age characteristic for the solid-state storage media 110 of the cache 102. For example, as storage elements of the cache 102 age, the direct cache module 116 and/or the solid-state storage controller 104 may remove the storage elements from operation, thereby reducing the cache size for the cache 102. Examples of age characteristics, in various embodiments, may include a program/erase count, a bit error rate, an uncorrectable bit error rate, or the like that satisfies a predefined age threshold.
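A hedged sketch of this age-based reduction follows; the per-element records and the threshold values are illustrative assumptions, not values taken from the disclosure.

    # Sketch: retiring aged storage elements to shrink the usable cache size
    # (the element records and the threshold values are assumptions).

    AGE_THRESHOLDS = {
        "program_erase_count": 3000,             # assumed threshold
        "uncorrectable_bit_error_rate": 1e-15,   # assumed threshold
    }

    def usable_cache_size(elements):
        """elements: list of dicts with 'capacity' plus age-characteristic keys.
        Elements whose age characteristic satisfies a threshold are removed
        from operation, reducing the cache size."""
        size = 0
        for el in elements:
            too_old = (
                el.get("program_erase_count", 0)
                >= AGE_THRESHOLDS["program_erase_count"]
                or el.get("uncorrectable_bit_error_rate", 0.0)
                >= AGE_THRESHOLDS["uncorrectable_bit_error_rate"]
            )
            if not too_old:
                size += el["capacity"]
        return size

    elements = [
        {"capacity": 32, "program_erase_count": 100},
        {"capacity": 32, "program_erase_count": 5000},   # retired from operation
    ]
    assert usable_cache_size(elements) == 32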

The direct cache module 116, upon receiving a read request or write request from the host device 114, in one embodiment, fulfills the request by caching write data in the cache 102 or by retrieving read data from one of the cache 102 and the storage device 118 and returning the read data to the host device 114.
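The read side of this fulfillment can be sketched as follows, using minimal in-memory stand-ins (a dictionary mapping and a list-backed log) rather than the modules described in this disclosure: a hit is served from the cache media, and a miss is read from the backing store and then cached at the append point.

    # Sketch of the read path: hit served from the cache, miss read from the
    # backing store and then cached. These classes are illustrative stand-ins.

    class CacheMedia:
        def __init__(self):
            self.log = []                       # append-only physical log
        def append(self, data):
            self.log.append(data)
            return len(self.log) - 1            # physical address
        def read(self, phys):
            return self.log[phys]

    def serve_read(mapping, cache_media, backing_store, device_lba):
        phys = mapping.get(device_lba)
        if phys is not None:
            return cache_media.read(phys)       # cache hit
        data = backing_store[device_lba]        # cache miss: go to backing store
        mapping[device_lba] = cache_media.append(data)
        return data

    mapping, media, store = {}, CacheMedia(), {7: b"block-7"}
    assert serve_read(mapping, media, store, 7) == b"block-7"   # miss, now cached
    assert serve_read(mapping, media, store, 7) == b"block-7"   # hit from cache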

Data caches are typically organized into cache lines which divide up the physical capacity of the cache; these cache lines may be divided into several sets. A cache line is typically larger than a block or sector of a backing store associated with a data cache, to provide for prefetching of additional blocks or sectors and to reduce cache misses and increase the cache hit rate. Data caches also typically evict an entire, fixed-size cache line at a time to make room for newly requested data in satisfying a cache miss. Data caches may be direct mapped, fully associative, N-way set associative, or the like.

In a direct mapped cache, each block or sector of a backing store has a one-to-one mapping to a cache line in the direct mapped cache. For example, if a direct mapped cache has T number of cache lines, the backing store associated with the direct mapped cache may be divided into T sections, and the direct mapped cache caches data from a section exclusively in the cache line corresponding to the section. Because a direct mapped cache always caches a block or sector in the same location or cache line, the mapping between a block or sector address and a cache line can be a simple manipulation of an address of the block or sector.

In a fully associative cache, any cache line can store data from any block or sector of a backing store. A fully associative cache typically has lower cache miss rates than a direct mapped cache, but has longer hit times (i.e. it takes longer to locate data in the cache) than a direct mapped cache. To locate data in a fully associative cache, either cache tags of the entire cache can be searched, a separate cache index can be used, or the like.

In an N-way set associative cache, each sector or block of a backing store may be cached in any of a set of N different cache lines. For example, in a 2-way set associative cache, either of two different cache lines may cache data for a sector or block. In an N-way set associative cache, both the cache and the backing store are typically divided into sections or sets, with one or more sets of sectors or blocks of the backing store assigned to a set of N cache lines. To locate data in an N-way set associative cache, a block or sector address is typically mapped to a set of cache lines, and cache tags of the set of cache lines are searched, a separate cache index is searched, or the like to determine which cache line in the set is storing data for the block or sector. An N-way set associative cache typically has miss rates and hit rates between those of a direct mapped cache and those of a fully associative cache.

The cache 102, in one embodiment, has characteristics of both a directly mapped cache and a fully associative cache. A logical address space of the cache 102, in one embodiment, is directly mapped to an address space of the storage device 118 while the physical storage media 110 of the cache 102 is fully associative with regard to the storage device 118. In other words, each block or sector of the storage device 118, in one embodiment, is directly mapped to a single logical address of the cache 102 while any portion of the physical storage media 110 of the cache 102 may store data for any block or sector of the storage device 118. In one embodiment, a logical address is an identifier of a block of data and is distinct from a physical address of the block of data, but may be mapped to the physical address of the block of data. Examples of logical addresses, in various embodiments, include logical block addresses ("LBAs"), logical identifiers, object identifiers, pointers, references, and the like.

Instead of traditional cache lines, in one embodiment, the cache 102 has logical or physical cache data blocks associated with each logical address that are equal in size to a block or sector of the storage device 118. In a further embodiment, the cache 102 caches ranges and/or sets of ranges of blocks or sectors for the storage device 118 at a time, providing dynamic or variable length cache line functionality. A range or set of ranges of blocks or sectors, in a further embodiment, may include a mixture of contiguous and/or noncontiguous blocks. For example, the cache 102, in one embodiment, supports block device requests that include a mixture of contiguous and/or noncontiguous blocks and that may include "holes" or intervening blocks that the cache 102 does not cache or otherwise store.

In one embodiment, one or more groups of addresses of the storage device 118 are directly mapped to corresponding logical addresses of the cache 102. The addresses of the storage device 118 may comprise physical addresses or logical addresses. Directly mapping logical addresses of the storage device 118 to logical addresses of the cache 102, in one embodiment, provides a one-to-one relationship between the logical addresses of the storage device 118 and the logical addresses of the cache 102. Directly mapping logical or physical address space of the storage device 118 to logical addresses of the cache 102, in one embodiment, precludes the use of an extra translation layer in the direct cache module 116, such as the use of cache tags, a cache index, the maintenance of a translation data structure, or the like. In one embodiment, while the logical address space of the cache 102 may be larger than a logical address space of the storage device 118, both logical address spaces include at least logical addresses 0-N. In a further embodiment, at least a portion of the logical address space of the cache 102 represents or appears as the logical address space of the storage device 118 to a client, such as the host device 114.

Alternatively, in certain embodiments where physical blocks or sectors of the storage device 118 are directly accessible using physical addresses, at least a portion of logical addresses in a logical address space of the cache 102 may be mapped to physical addresses of the storage device 118. At least a portion of the logical address space of the cache 102, in one embodiment, may correspond to the physical address space of the storage device 118. At least a subset of the logical addresses of the cache 102, in this embodiment, are directly mapped to corresponding physical addresses of the storage device 118.

In one embodiment, the logical address space of the cache 102 is a sparse address space that is either as large as or is larger than the physical storage capacity of the cache 102. This allows the storage device 118 to have a larger storage capacity than the cache 102, while maintaining a direct mapping between the logical addresses of the cache 102 and logical or physical addresses of the storage device 118. The sparse logical address space may be thinly provisioned, in one embodiment. In a further embodiment, as the direct cache module 116 writes data to the cache 102 using logical addresses, the cache 102 directly maps the logical addresses to distinct physical addresses or locations on the solid-state storage media 110 of the cache 102, such that the physical addresses or locations of data on the solid-state storage media 110 are fully associative with the storage device 118. In one embodiment, the direct cache module 116 and/or the cache 102 use the same single mapping structure to map addresses (either logical or physical) of the storage device 118 to logical addresses of the cache 102 and to map logical addresses of the cache 102 to locations/physical addresses of a block or sector (or range of blocks or sectors) on the physical solid state storage media 110. In one embodiment, using a single mapping structure for both functions eliminates the need for a separate cache map, cache index, cache tags, or the like, decreasing access times of the cache 102.

As the direct cache module 116 clears, trims, replaces, expires, and/or evicts cached data from the cache 102, the physical addresses and associated physical storage media, the solid-state storage media 110 in the depicted embodiment, are freed to store data for other logical addresses. In one embodiment, the solid-state storage controller 104 stores the data at the physical addresses using a log-based, append-only writing structure such that data evicted from the cache 102 or overwritten by a subsequent write request invalidates other data in the log. Consequently, a garbage collection process recovers the physical capacity of the invalid data in the log. One embodiment of the log-based, append-only writing structure is a logically ring-like, cyclic data structure: as new data is appended to the log-based writing structure, previously used physical capacity is reused in a circular, theoretically infinite manner.
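A small sketch of this invalidation behavior, with an illustrative list standing in for the log and a dictionary standing in for the mapping structure, is given below; overwriting or evicting a logical block address marks the older log entry invalid so garbage collection can later reclaim its physical capacity.

    # Sketch of invalidation in an append-only log; all structures here are
    # illustrative assumptions.

    log = []          # list of dicts: {"lba": ..., "data": ..., "valid": bool}
    index = {}        # lba -> position in log

    def append(lba, data):
        if lba in index:
            log[index[lba]]["valid"] = False     # older copy becomes invalid
        index[lba] = len(log)
        log.append({"lba": lba, "data": data, "valid": True})

    def evict(lba):
        if lba in index:
            log[index.pop(lba)]["valid"] = False  # capacity recoverable later

    append(1, "a"); append(1, "a2"); evict(1)
    assert all(not entry["valid"] for entry in log)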

DATA CACHING

Figure 2 depicts one embodiment of a host device 114. The host device 114 may be similar, in certain embodiments, to the host device 114 depicted in Figure 1. The depicted embodiment includes a user application 502 in communication with a storage client 504. The storage client 504 is in communication with a direct cache module 116, which, in one embodiment, is substantially similar to the direct cache module 116 of Figure 1, described above. The direct cache module 116, in the depicted embodiment, is in communication with the cache 102 and the storage device 118.

In one embodiment, the user application 502 is a software application operating on or in conjunction with the storage client 504. The storage client 504 manages file systems, files, data, and the like and utilizes the functions and features of the direct cache module 116, the cache 102, and the storage device 118. Representative examples of storage clients include, but are not limited to, a server, a file system, an operating system, a database management system ("DBMS"), a volume manager, and the like. In the depicted embodiment, the storage client 504 is in communication with the direct cache module 116. In a further embodiment, the storage client 504 may also be in communication with the cache 102 and/or the storage device 118 directly. The storage client 504, in one embodiment, reads data from and writes data to the storage device 118 through the direct cache module 116, which uses the cache 102 to cache read data and write data for the storage device 118. In a further embodiment, the direct cache module 116 caches data in a manner that is substantially transparent to the storage client 504, with the storage client 504 sending read requests and write requests directly to the direct cache module 116.

In one embodiment, the direct cache module 116 has exclusive access to, and/or control over the cache 102 and the storage device 118. The direct cache module 116 may represent itself to the storage client 504 as a storage device. For example, the direct cache module 116 may represent itself as a conventional block storage device. In a particular embodiment, the direct cache module 116 may represent itself to the storage client 504 as a storage device having the same number of logical blocks (0 to N) as the storage device 118. In another embodiment, the direct cache module 116 may represent itself to the storage client 504 as a storage device having more logical blocks (0 to N+X) than the storage device 118, where X = the number of logical blocks addressable by the direct cache module 116 beyond N. In certain embodiments, X = 2^64 - N.

As described above with regard to the direct cache module 116 depicted in the embodiment of Figure 1, in various embodiments, the direct cache module 116 may be embodied by one or more of a storage controller of the cache 102 and/or a storage controller of the storage device 118; a separate hardware controller device that interfaces with the cache 102 and the storage device 118; a device driver/software controller loaded on the host device 114; and the like.

In one embodiment, the host device 114 loads a device driver for the direct cache module 116. In a further embodiment, the host device 114 loads device drivers for the cache 102 and/or the storage device 118. The direct cache module 116 may communicate with the cache 102 and/or the storage device 118 through device drivers loaded on the host device 114, through a storage controller of the cache 102 and/or through a storage controller of the storage device 118, or the like. Hardware and/or software elements of the direct cache module 116 may form a cache controller for the cache 102 and may be in communication with the solid-state storage controller 104, sending commands to the solid-state storage controller 104 to manage operation of the cache 102. In one embodiment, the storage client 504 communicates with the direct cache module 116 through an input/output ("I/O") interface represented by a block I/O emulation layer 506. In certain embodiments, the fact that the direct cache module 116 is providing caching services in front of one or more caches 102, and/or one or more backing stores, such as the storage device 118, may be transparent to the storage client 504. In such an embodiment, the direct cache module 116 may present (i.e. identify itself as) a conventional block device to the storage client 504.

In a further embodiment, the cache 102 and/or the storage device 118 either include a distinct block I/O emulation layer 506 or are conventional block storage devices. Certain conventional block storage devices divide the storage media into volumes or partitions. Each volume or partition may include a plurality of sectors. One or more sectors are organized into a logical block. In certain storage systems, such as those interfacing with the Windows® operating systems, the logical blocks are referred to as clusters. In other storage systems, such as those interfacing with UNIX, Linux, or similar operating systems, the logical blocks are referred to simply as blocks. A logical block or cluster represents a smallest physical amount of storage space on the storage media that is addressable by the storage client 504. A block storage device may associate n logical blocks available for user data storage across the storage media with a logical block address, numbered from 0 to n. In certain block storage devices, the logical block addresses may range from 0 to n per volume or partition. In conventional block storage devices, a logical block address maps directly to a particular logical block. In conventional block storage devices, each logical block maps to a particular set of physical sectors on the storage media.

However, the direct cache module 116, the cache 102 and/or the storage device 118 may not directly or necessarily associate logical block addresses with particular physical blocks. The direct cache module 116, the cache 102, and/or the storage device 118 may emulate a conventional block storage interface to maintain compatibility with block storage clients 504 and with conventional block storage commands and protocols.

When the storage client 504 communicates through the block I/O emulation layer 506, the direct cache module 116 appears to the storage client 504 as a conventional block storage device. In one embodiment, the direct cache module 116 provides the block I/O emulation layer 506 which serves as a block device interface, or API. In this embodiment, the storage client 504 communicates with the direct cache module 116 through this block device interface. In one embodiment, the block I/O emulation layer 506 receives commands and logical block addresses from the storage client 504 in accordance with this block device interface. As a result, the block I/O emulation layer 506 provides the direct cache module 116 compatibility with block storage clients 504. In a further embodiment, the direct cache module 116 may communicate with the cache 102 and/or the storage device 118 using corresponding block device interfaces.

In one embodiment, a storage client 504 communicates with the direct cache module 116 through a direct interface layer 508. In this embodiment, the direct cache module 116 directly exchanges information specific to the cache 102 and/or the storage device 118 with the storage client 504. Similarly, the direct cache module 116, in one embodiment, may communicate with the cache 102 and/or the storage device 118 through direct interface layers 508.

A direct cache module 116 using the direct interface 508 may store data on the cache 102 and/or the storage device 118 as blocks, sectors, pages, logical blocks, logical pages, erase blocks, logical erase blocks, ECC chunks or in any other format or structure advantageous to the technical characteristics of the cache 102 and/or the storage device 118. For example, in one embodiment, the storage device 118 comprises a hard disk drive and the direct cache module 116 stores data on the storage device 118 as contiguous sectors of 512 bytes, or the like, using physical cylinder-head-sector addresses for each sector, logical block addresses for each sector, or the like. The direct cache module 116 may receive a logical address and a command from the storage client 504 and perform the corresponding operation in relation to the cache 102, and/or the storage device 118. The direct cache module 116, the cache 102, and/or the storage device 118 may support a block I/O emulation layer 506, a direct interface 508, or both a block I/O emulation layer 506 and a direct interface 508.

As described above, certain storage devices, while appearing to a storage client 504 to be a block storage device, do not directly associate particular logical block addresses with particular physical blocks, also referred to in the art as sectors. Such storage devices may use a logical-to-physical translation layer 510. In the depicted embodiment, the cache 102 includes a logical-to-physical translation layer 510. In a further embodiment, the storage device 118 may also include a logical-to-physical translation layer 510. In another embodiment, the direct cache module 116 maintains a single logical-to-physical translation layer 510 for the cache 102 and the storage device 118. In another embodiment, the direct cache module 116 maintains a distinct logical-to-physical translation layer 510 for each of the cache 102 and the storage device 118.

The logical-to-physical translation layer 510 provides a level of abstraction between the logical block addresses used by the storage client 504 and the physical block addresses at which the cache 102 and/or the storage device 118 store the data. In the depicted embodiment, the logical-to-physical translation layer 510 maps logical block addresses to physical block addresses of data stored on the media of the cache 102. This mapping allows data to be referenced in a logical address space using logical identifiers, such as a logical block address. A logical identifier does not indicate the physical location of data in the cache 102, but is an abstract reference to the data. Further examples of a logical-to-physical translation layer 510, in various embodiments, include the direct mapping module 606 of Figures 3 and 4, the forward mapping module 802 of Figure 5, and the reverse mapping module 804 of Figure 5, each of which is discussed below.

In the depicted embodiment, the cache 102 and the storage device 118 separately manage the physical block addresses in the distinct, separate physical address spaces of the cache 102 and the storage device 118. In one example, contiguous logical block addresses may in fact be stored in non-contiguous physical block addresses as the logical-to-physical translation layer 510 determines the location on the physical media of the cache 102 at which to perform data operations.

Furthermore, in one embodiment, the logical address space of the cache 102 is substantially larger than the physical address space or storage capacity of the cache 102. This "thinly provisioned" or "sparse address space" embodiment allows the number of logical addresses for data references to greatly exceed the number of possible physical addresses. A thinly provisioned and/or sparse address space also allows the cache 102 to cache data for a storage device 118 with a larger address space (i.e. a larger storage capacity) than the physical address space of the cache 102.

In one embodiment, the logical-to-physical translation layer 510 includes a map or index that maps logical block addresses to physical block addresses. The map or index may be in the form of a B-tree, a content addressable memory ("CAM"), a binary tree, a hash table, or the like. In certain embodiments, the logical-to-physical translation layer 510 is a tree with nodes that represent logical block addresses and include references to corresponding physical block addresses. Example embodiments of a B-tree mapping structure are described below with regard to Figures 7 and 8.
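By way of illustration, the sketch below uses a sorted list searched with bisect as a stand-in for such a tree; each entry maps a variable-length run of logical block addresses to a physical location, consistent with the range-based entries discussed above, though the concrete structure here is an assumption.

    # Sketch of a range-based logical-to-physical map; a sorted list stands in
    # for the B-tree described above.

    import bisect

    class RangeMap:
        def __init__(self):
            self._starts = []       # sorted starting LBAs
            self._entries = []      # parallel list of (start, length, phys)

        def insert(self, start, length, phys):
            i = bisect.bisect_left(self._starts, start)
            self._starts.insert(i, start)
            self._entries.insert(i, (start, length, phys))

        def lookup(self, lba):
            i = bisect.bisect_right(self._starts, lba) - 1
            if i >= 0:
                start, length, phys = self._entries[i]
                if start <= lba < start + length:
                    # Physical address of this LBA within the cached range.
                    return phys + (lba - start)
            return None             # not cached

    m = RangeMap()
    m.insert(100, 8, 5000)          # LBAs 100..107 cached starting at phys 5000
    assert m.lookup(103) == 5003
    assert m.lookup(200) is None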

As stated above, in conventional block storage devices, a logical block address maps directly to a particular physical block. When a storage client 504 communicating with the conventional block storage device deletes data for a particular logical block address, the storage client 504 may note that the particular logical block address is deleted and can re-use the physical block associated with that deleted logical block address without the need to perform any other action.

Conversely, when a storage client 504, communicating with a storage controller 104 or device driver with a logical-to-physical translation layer 510 (a storage controller 104 or device driver that does not map a logical block address directly to a particular physical block), deletes data of a logical block address, the corresponding physical block address may remain allocated because the storage client 504 may not communicate the change in used blocks to the storage controller 104 or device driver. The storage client 504 may not be configured to communicate changes in used blocks (also referred to herein as "data block usage information"). Because the storage client 504, in one embodiment, uses the block I/O emulation layer 506, the storage client 504 may erroneously believe that the direct cache module 116, the cache 102, and/or the storage device 118 is a conventional block storage device that would not utilize the data block usage information. Or, in certain embodiments, other software layers between the storage client 504 and the direct cache module 116, the cache 102, and/or the storage device 118 may fail to pass on data block usage information.

Consequently, the storage controller 104 or device driver may preserve the relationship between the logical block address and a physical address and the data on the cache 102 and/or the storage device 118 corresponding to the physical block. As the number of allocated blocks increases, the performance of the cache 102 and/or the storage device 118 may suffer depending on the configuration of the cache 102 and/or the storage device 118.

Specifically, in certain embodiments, the cache 102 and/or the storage device 118 are configured to store data sequentially, using an append-only writing process, and use a storage space recovery process that re-uses non-volatile storage media storing deallocated/unused logical blocks. As described above, the cache 102 and/or the storage device 118 may write data sequentially on the solid-state storage media 110 in a log-structured format, so that within the one or more physical structures of the storage elements the data is stored sequentially on the solid-state storage media 110. Those of skill in the art will recognize that other embodiments that include several caches 102 can use the same append-only writing process and storage space recovery process.

As a result of storing data sequentially and using an append-only writing process, the cache 102 and/or the storage device 118 achieve a high write throughput and a high number of I/O operations per second ("IOPS"). The cache 102 and/or the storage device 118 may include a storage space recovery, or garbage collection, process that re-uses data storage cells to provide sufficient storage capacity. The storage space recovery process reuses storage cells for logical blocks marked as deallocated, invalid, unused, or otherwise designated as available for storage space recovery in the logical-to-physical translation layer 510. In one embodiment, the direct cache module 116 marks logical blocks as deallocated or invalid based on a cache eviction policy, to recover storage capacity for caching additional data for the storage device 118. The storage space recovery process is described in greater detail below with regard to the garbage collection module 710 of Figure 4.

As described above, the storage space recovery process determines that a particular section of storage may be recovered. Once a section of storage has been marked for recovery, the cache 102 and/or the storage device 118 may relocate valid blocks in the section. The storage space recovery process, when relocating valid blocks, copies the packets and writes them to another location so that the particular section of storage may be reused as available storage space, typically after an erase operation on the particular section. The cache 102 and/or the storage device 118 may then use the available storage space to continue sequentially writing data in an append-only fashion. Consequently, the storage controller 104 expends resources and overhead in preserving data in valid blocks. Therefore, physical blocks corresponding to deleted logical blocks may be unnecessarily preserved by the storage controller 104, which expends unnecessary resources in relocating the physical blocks during storage space recovery.

Some storage devices are configured to receive messages or commands notifying the storage device of these unused logical blocks so that the storage device may deallocate the corresponding physical blocks. As used herein, to deallocate a physical block includes marking the physical block as invalid, unused, or otherwise designating the physical block as available for storage space recovery, its contents on storage media no longer needing to be preserved by the storage device. Data block usage information may also refer to information maintained by a storage device regarding which physical blocks are allocated and/or deallocated/unallocated and changes in the allocation of physical blocks and/or logical-to-physical block mapping information. Data block usage information may also refer to information maintained by a storage device regarding which blocks are in use and which blocks are not in use by a storage client 504. Use of a block may include storing of data in the block on behalf of the storage client 504, reserving the block for use by the storage client 504, and the like.

While physical blocks may be deallocated, in certain embodiments, the cache 102 and/or the storage device 118 may not immediately erase the data on the storage media. An erase operation may be performed later in time. In certain embodiments, the data in a deallocated physical block may be marked as unavailable by the cache 102 and/or the storage device 118 such that subsequent requests for data in the physical block return a null result or an empty set of data.

One example of a command or message for such deallocation is the "TRIM" function of the "Data Set Management" command under the T13 technical committee command set specification maintained by INCITS. A storage device, upon receiving a TRIM command, may deallocate physical blocks for logical blocks whose data is no longer needed by the storage client 504. A storage device that deallocates physical blocks may achieve better performance and increased storage space, especially storage devices that write data using certain processes and/or use a similar data storage recovery process as that described above.
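For illustration, the following minimal Python sketch shows how a deallocation command such as TRIM might be handled under the embodiments above; the dictionary and set arguments are hypothetical stand-ins for the logical-to-physical map and the storage space recovery bookkeeping, and the media itself is not erased here, leaving erasure to a later recovery pass.

    # Illustrative sketch: deallocating a range of logical blocks (TRIM-style).
    def trim(forward_map, invalid_blocks, start_lba, count):
        """forward_map: dict of LBA -> physical address; invalid_blocks: set of physical addresses."""
        for lba in range(start_lba, start_lba + count):
            physical = forward_map.pop(lba, None)    # data no longer needs to be preserved
            if physical is not None:
                invalid_blocks.add(physical)         # now available for storage space recovery

    forward_map = {100: 0x2000, 101: 0x2200}
    invalid = set()
    trim(forward_map, invalid, 100, 2)
    assert forward_map == {} and invalid == {0x2000, 0x2200}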

Consequently, the performance of the storage device is enhanced as physical blocks are deallocated when they are no longer needed such as through the TRIM command or other similar deallocation commands issued to the cache 102 and/or the storage device 118. In one embodiment, the direct cache module 116 clears, trims, and/or evicts cached data from the cache 102 based on a cache eviction policy, or the like. As used herein, clearing, trimming, or evicting data includes deallocating physical media associated with the data, marking the data as invalid or unused (using either a logical or physical address of the data), erasing physical media associated with the data, overwriting the data with different data, issuing a TRIM command or other deallocation command relative to the data, or otherwise recovering storage capacity of physical storage media corresponding to the data. Clearing cached data from the cache 102 based on a cache eviction policy frees storage capacity in the cache 102 to cache more data for the storage device 118.

The direct cache module 116, in various embodiments, may represent itself, the cache 102, and the storage device 118 to the storage client 504 in different configurations. In one embodiment, the direct cache module 116 may represent itself to the storage client 504 as a single storage device (i.e. as the storage device 118, as a storage device with a similar physical capacity as the storage device 118, or the like) and the cache 102 may be transparent or invisible to the storage client 504. In another embodiment, the direct cache module 116 may represent itself to the storage client 504 as a cache device (i.e. as the cache 102, as a cache device with certain cache functions or APIs available, or the like) and the storage device 118 may be separately visible and/or available to the storage client 504 (with part of the physical capacity of the storage device 118 reserved for the cache 102). In a further embodiment, the direct cache module 116 may represent itself to the storage client 504 as a hybrid cache/storage device including both the cache 102 and the storage device 118.

Depending on the configuration, the direct cache module 116 may pass certain commands down to the cache 102 and/or to the storage device 118 and may not pass down other commands. In a further embodiment, the direct cache module 116 may support certain custom or new block I/O commands. In one embodiment, the direct cache module 116 supports a deallocation or trim command that clears corresponding data from both the cache 102 and the storage device 118, i.e. the direct cache module 116 passes the command to both the cache 102 and the storage device 118. In a further embodiment, the direct cache module 116 supports a flush type trim or deallocation command that ensures that corresponding data is stored in the storage device 118 (i.e. that the corresponding data in the cache 102 is clean) and clears the corresponding data from the cache 102, without clearing the corresponding data from the storage device 118. In another embodiment, the direct cache module 116 supports an evict type trim or deallocation command that evicts corresponding data from the cache 102, marks corresponding data for eviction in the cache 102, or the like, without clearing the corresponding data from the storage device 118.
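A minimal Python sketch of dispatching the three deallocation variants described above follows; it is purely illustrative, and the dictionary and set parameters are hypothetical stand-ins for the cache 102, its dirty-data tracking, and the storage device 118.

    # Illustrative sketch: trim, flush-type trim, and evict-type trim.
    def handle_deallocate(kind, lbas, cache_map, dirty, storage_map):
        """cache_map/storage_map: dict LBA -> data; dirty: set of LBAs not yet on the storage device."""
        if kind == "trim":                 # clear from both the cache and the storage device
            for lba in lbas:
                cache_map.pop(lba, None)
                dirty.discard(lba)
                storage_map.pop(lba, None)
        elif kind == "flush_trim":         # destage dirty data, then clear it from the cache only
            for lba in lbas:
                if lba in dirty:
                    storage_map[lba] = cache_map[lba]
                    dirty.discard(lba)
                cache_map.pop(lba, None)
        elif kind == "evict_trim":         # evict from the cache without touching the storage device
            for lba in lbas:
                cache_map.pop(lba, None)
                dirty.discard(lba)
        else:
            raise ValueError("unknown deallocation command: " + kind)

    cache, dirty, store = {7: b"x"}, {7}, {}
    handle_deallocate("flush_trim", [7], cache, dirty, store)
    assert store[7] == b"x" and 7 not in cache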

In a further embodiment, the direct cache module 116 may receive, detect, and/or intercept one or more predefined commands that a storage client 504 or another storage manager sent to the storage device 118, that a storage manager sends to a storage client 504, or the like. For example, in various embodiments, the direct cache module 116 or a portion of the direct cache module 116 may be part of a filter driver that receives or detects the predefined commands, the direct cache module 116 may register with an event server to receive a notification of the predefined commands, or the like. In another embodiment, the direct cache module 116 may present an API through which the direct cache module 116 receives predefined commands. The direct cache module 116, in one embodiment, performs one or more actions on the cache 102 in response to detecting or receiving one or more predefined commands for the storage device 118, such as writing or flushing data related to a command from the cache 102 to the storage device 118, evicting data related to a command from the cache 102, switching from a write back policy to a write through policy for data related to a command, or the like.

One example of predefined commands that the direct cache module 116 may intercept or respond to, in one embodiment, includes "freeze/thaw" commands. "Freeze/thaw" commands are used in SANs, storage arrays, and the like, to suspend storage access, such as access to the storage device 118 or the like, to take a snapshot or backup of the storage without interrupting operation of the applications using the storage. A "freeze" command alerts a storage client 504 that a snapshot is about to take place; the storage client 504 flushes pending operations, for example in-flight transactions or data cached in volatile memory; the snapshot takes place while use of the storage by the storage client 504 is in a "frozen" or ready state; and once the snapshot is complete, the storage client 504 resumes normal use of the storage in response to a "thaw" command.

The direct cache module 116, in one embodiment, flushes or cleans dirty data from the cache 102 to the storage device 118 in response to detecting a "freeze/thaw" command. In a further embodiment, the direct cache module 116 suspends access to the storage device 118 during a snapshot or other backup associated with a detected "freeze/thaw" command and resumes access in response to completion of the snapshot or other backup. In another embodiment, the direct cache module 116 may cache data for the storage device 118 during a snapshot or other backup without interrupting the snapshot or other backup procedure. In other words, rather than the backup/snapshot software signaling the application to quiesce I/O operations, the direct cache module 116 receives and responds to the freeze/thaw commands. Other embodiments of predefined commands may include one or more of a read command, a write command, a TRIM command, an erase command, a flush command, a pin command, an unpin command, and the like.

Figure 3 depicts one embodiment of the direct cache module 116. In the depicted embodiment, the direct cache module 116 includes a storage request module 602, a cache fulfillment module 604, and a direct mapping module 606. The direct cache module 116 of Figure 3, in one embodiment, is substantially similar to the direct cache module 116 described above with regard to Figure 1 and/or Figure 2. In general, the direct cache module 116 caches data for the storage device 118 without an extra cache mapping layer. Instead of using a cache mapping layer, in one embodiment, the direct cache module 116 directly maps logical addresses of the storage device 118 to logical addresses of the cache 102 using the same mapping structure that maps the logical addresses of the cache 102 to the physical storage media 110 of the cache 102.

In one embodiment, the storage request module 602 detects input/output ("I/O") requests for the storage device 118, such as read requests, write requests, erase requests, TRIM requests, and/or other I/O requests for the storage device 118. The storage request module 602 may detect an I/O request by receiving the I/O request directly, detecting an I/O request sent to a different module or entity (such as detecting an I/O request sent directly to the storage device 118), or the like. In one embodiment, the host device 114 sends the I/O request. The direct cache module 116, in one embodiment, represents itself to the host device 114 as a storage device, and the host device 114 sends I/O requests directly to the storage request module 602.

An I/O request, in one embodiment, may include or may request data that is not stored on the cache 102. Data that is not stored on the cache 102, in various embodiments, may include new data not yet stored on the storage device 118, modifications to data that is stored on the storage device 118, data that is stored on the storage device 118 but not currently stored in the cache 102, or the like. An I/O request, in various embodiments, may directly include data, may include a reference, a pointer, or an address for data, or the like. For example, in one embodiment, an I/O request (such as a write request or the like) may include a range of addresses indicating data to be stored on the storage device 118 by way of a Direct Memory Access ("DMA") or Remote DMA ("RDMA") operation.

In a further embodiment, a single I/O request may include several different contiguous and/or noncontiguous ranges of addresses or blocks. In a further embodiment, an I/O request may include one or more destination addresses for data, such as logical and/or physical addresses for the data on the cache 102 and/or on the storage device 118. The storage request module 602 and/or another cooperating module, in various embodiments, may retrieve the data of an I/O request directly from an I/O request itself, from a storage location referenced by an I/O request (i.e. from a location in system memory or other data storage referenced in a DMA or RDMA request), or the like.

The direct mapping module 606, in one embodiment, directly maps logical or physical addresses of the storage device 118 to logical addresses of the cache 102 and directly maps logical addresses of the cache 102 to logical addresses of the storage device 118. As used herein, direct mapping of addresses means that for a given address in a first address space there is exactly one corresponding address in a second address space with no translation or manipulation of the address to get from an address in the first address space to the corresponding address in the second address space. The direct mapping module 606, in a further embodiment, maps addresses of the storage device 118 to logical addresses of the cache 102 such that each storage device 118 address has a one to one relationship with a logical address of the cache 102.

As described above, in certain embodiments, logical addresses of the cache 102 are independent of physical storage addresses of the solid-state storage media 110 for the cache 102, making the physical storage addresses of the solid-state storage media 110 fully associative with the storage device 118. Because the solid-state storage media 110 is fully associative with the storage device 118, any physical storage block of the cache 102 may store data associated with any storage device address of the storage device 118.

The cache 102, in one embodiment, is logically directly mapped and physically fully associative, combining the benefits of both cache types. The direct mapping module 606 maps each storage block of the storage device 118 to a distinct unique logical address of the cache 102 and associated distinct unique entry in the mapping structure, which may be associated with any distinct storage address of the solid-state storage media 110. This means that the direct mapping module 606 maps a storage block of the storage device 118 (represented by an LBA or other address) consistently to the same distinct unique logical address of the cache 102 while any distinct storage address of the solid-state storage media 110 may store the associated data, depending on a location of an append point of a sequential log-based writing structure, or the like.

The combination of logical direct mapping and full physical associativity that the direct mapping module 606 provides, in one embodiment, precludes cache collisions from occurring because logical addresses of the cache 102 are not shared and any storage block of the solid-state storage media 110 may store data for any address of the storage device 118, providing caching flexibility and optimal cache performance. Instead of overwriting data due to cache collisions, in one embodiment described below with regard to Figure 4, a garbage collection module 710 and/or an eviction module 712 clear invalid or old data from the cache 102 to free storage capacity for caching data. Further, because the direct mapping module 606 maps storage device addresses to logical addresses of the cache 102 directly, in certain embodiments, the cache 102 provides fully associative physical storage media 110 without the processing overhead and memory consumption of a separate cache map, cache index, cache tags, or other lookup means traditionally associated with fully associative caches, eliminating a cache translation layer. Instead of a separate cache translation layer, the direct mapping module 606 (which may be embodied by the logical-to-physical translation layer 510 described above and/or the forward mapping module 802 described below) and the associated single mapping structure serve as both a cache index or lookup structure and as a storage address mapping layer.

In one embodiment, the direct mapping module 606 maps addresses of the storage device 118 directly to logical addresses of the cache 102 so that the addresses of the storage device 118 and the logical addresses of the cache 102 are equal or equivalent. In one example of this embodiment, the addresses of the storage device 118 and the logical addresses of the cache 102 share a lower range of the logical address space of the cache 102, such as 0-2^32, or the like. In embodiments where the direct mapping module 606 maps addresses of the storage device 118 as equivalents of logical addresses of the cache 102, the direct mapping module 606 may use the addresses of the storage device 118 and the logical addresses of the cache 102 interchangeably, substituting one for the other without translating between them. Because the direct mapping module 606 directly maps addresses of the storage device 118 to logical addresses of the cache 102, an address of an I/O request directly identifies both an entry in the mapping structure for a logical address of the cache 102 and an associated address of the storage device 118. In one embodiment, logical block addresses of the storage device 118 are used to index both the logical address space of the cache 102 and the logical address space of the storage device 118. This is enabled by the direct mapping module 606 presenting an address space to the host device 114 that is the same size or larger than the address space of the storage device 118. In one embodiment, the direct mapping module 606 maps logical addresses of the cache 102 (and associated addresses of the storage device 118) to physical addresses and/or locations on the physical storage media 110 of the cache 102. In a further embodiment, the direct mapping module 606 uses a single mapping structure to map addresses of the storage device 118 to logical addresses of the cache 102 and to map logical addresses of the cache 102 to locations on the physical storage media 110 of the cache 102. The direct mapping module 606 references the single mapping structure to determine whether or not the cache 102 stores data associated with an address of an I/O request. An address of an I/O request may comprise an address of the storage device 118, a logical address of the cache 102, or the like.
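As a purely illustrative Python sketch under the assumptions above, a single structure can answer both "is this block cached?" and "where does it reside on the media?", with storage device addresses used unchanged as cache logical addresses; the dictionary and tuple layout shown are hypothetical.

    # Illustrative sketch: one mapping structure serves as cache index and storage map.
    mapping = {}                                  # storage device LBA == cache logical address

    def is_cached(lba):
        return lba in mapping                     # membership doubles as the cache lookup

    def media_location(lba):
        return mapping.get(lba)                   # same entry yields the physical storage address

    mapping[0x10] = ("erase_block_3", 512)        # any physical location may hold any LBA
    assert is_cached(0x10) and media_location(0x10) == ("erase_block_3", 512)
    assert not is_cached(0x11)                    # absence from the structure means a cache miss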

The single mapping structure, in various embodiments, may include a B-tree, B*-tree, B+-tree, a CAM, a binary tree, a hash table, an index, an array, a linked-list, a look-up table, or another mapping data structure. Use of a B-tree as the mapping structure, in certain embodiments, is particularly advantageous where the logical address space presented to the client is a very large address space (2^64 addressable blocks, which may or may not be sparsely populated). Because B-trees maintain an ordered structure, searching such a large space remains very fast. Example embodiments of a B-tree as a mapping structure are described in greater detail with regard to Figures 7 and 8. For example, in one embodiment, the mapping structure includes a B-tree with multiple nodes and each node may store several entries. In the example embodiment, each entry may map a variable sized range or ranges of logical addresses of the cache 102 to a location on the physical storage media 110 of the cache 102. Furthermore, the number of nodes in the B-tree may vary as the B-tree grows wider and/or deeper. Caching variable sized ranges of data associated with contiguous and/or non-contiguous ranges of storage device addresses, in certain embodiments, is more efficient than caching fixed size cache lines, as the cache 102 may more closely match data use patterns without restrictions imposed by fixed size cache lines.
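By way of a hypothetical Python sketch only, the ordered, range-based lookup described above might look as follows; a sorted list with binary search stands in for the B-tree, and each entry maps a variable-sized run of logical addresses to a starting location on the media.

    import bisect

    # Illustrative sketch: variable-sized range entries with an ordered lookup.
    entries = []                                   # sorted list of (start_lba, length, physical_start)

    def insert(start, length, physical):
        bisect.insort(entries, (start, length, physical))

    def lookup(lba):
        i = bisect.bisect_right(entries, (lba, float("inf"), float("inf"))) - 1
        if i >= 0:
            start, length, physical = entries[i]
            if start <= lba < start + length:      # lba falls inside this variable-sized range
                return physical + (lba - start)
        return None                                # not present in the mapping structure

    insert(1000, 8, 0x4000)                        # one entry covers eight contiguous blocks
    assert lookup(1003) == 0x4003 and lookup(2000) is None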

In one embodiment, the mapping structure of the direct mapping module 606 only includes a node or entry for logical addresses of the cache 102 that are associated with currently cached data in the cache 102. In this embodiment, membership in the mapping structure represents membership in the cache 102. The direct mapping module 606, in one embodiment, adds entries, nodes, and the like to the mapping structure as data is stored in the cache and removes entries, nodes, and the like from the mapping structure in response to data being evicted, cleared, trimmed, or otherwise removed from the cache 102. Similarly, membership in the mapping structure may represent valid allocated blocks on the solid-state storage media 110. The solid-state storage controller 104, in one embodiment, adds entries, nodes, and the like to the mapping structure as data is stored on the solid-state storage media 110 and removes entries, nodes, and the like from the mapping structure in response to data being invalidated, cleared, trimmed, or otherwise removed from the solid-state storage media 110. In the case where the mapping structure is shared for both cache management and data storage management on the solid-state storage media, the present invention also tracks whether the data is dirty or not to determine whether the data is persisted on the storage device 118.

In a further embodiment, the mapping structure of the direct mapping module 606 may include one or more nodes or entries for logical addresses of the cache 102 that are not associated with currently stored data in the cache 102, but that are mapped to addresses of the storage device 118 that currently store data. The nodes or entries for logical addresses of the cache 102 that are not associated with currently stored data in the cache 102, in one embodiment, are not mapped to locations on the physical storage media 110 of the cache 102, but include an indicator that the cache 102 does not store data corresponding to the logical addresses. The nodes or entries, in a further embodiment, may include information that the data resides in the storage device 118.

Nodes, entries, records, or the like of the mapping structure, in one embodiment, may include information (such as physical addresses, offsets, indicators, etc.) directly, as part of the mapping structure, or may include pointers, references, or the like for locating information in memory, in a table, or in another data structure. The direct mapping module 606, in one embodiment, optimizes the mapping structure by monitoring the shape of the mapping structure, monitoring the size of the mapping structure, balancing the mapping structure, enforcing one or more predefined rules with regard to the mapping structure, ensuring that leaf nodes of the mapping structure are at the same depth, combining nodes, splitting nodes, and/or otherwise optimizing the mapping structure.

The direct mapping module 606, in one embodiment, stores at least a copy of the mapping structure to the solid-state storage media 110 of the cache 102 periodically. By storing the mapping structure on the cache 102, in a further embodiment, the mapping of addresses of the storage device 118 to the logical addresses of the cache 102 and/or the mapping of the logical addresses of the cache 102 to locations on the physical storage media 110 of the cache 102 are persistent, even if the cache 102 is subsequently paired with a different host device 114, the cache 102 undergoes an unexpected or improper shutdown, the cache 102 undergoes a power loss, or the like. In one embodiment, the storage device 118 is also subsequently paired with the different host device 114 along with the cache 102. In a further embodiment, the cache 102 rebuilds or restores at least a portion of data from the storage device 118 on a new storage device associated with the different host device 114, based on the mapping structure and data stored on the cache 102.

The direct mapping module 606, in one embodiment, reconstructs the mapping structure and included entries by scanning data on the solid-state storage media 110, such as a sequential log-based writing structure or the like, and extracting logical addresses, sequence indicators, and the like from data at physical locations on the solid-state storage media 110. For example, as described below, in certain embodiments the cache fulfillment module 604 stores data of I/O requests in a format that associates the data with sequence indicators for the data and with respective logical addresses of the cache 102 for the data. If the mapping structure becomes lost or corrupted, the direct mapping module 606 may use the physical address or location of data on the solid-state storage media 110 with the associated sequence indicators, logical addresses, and/or other metadata stored with the data, to reconstruct entries of the mapping structure. The forward map module 802 described below with regard to Figures 5 and 6 is another embodiment of the direct mapping module 606.
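Purely as an illustrative Python sketch of the reconstruction described above, scanning the log in order and keeping the most recent sequence indicator for each logical address rebuilds the mapping; the record fields shown are hypothetical stand-ins for the metadata stored with the data on the media.

    # Illustrative sketch: rebuilding the mapping structure by scanning the log.
    log = [
        {"lba": 5, "seq": 1, "physical": 0x0000},
        {"lba": 9, "seq": 2, "physical": 0x0200},
        {"lba": 5, "seq": 3, "physical": 0x0400},   # a later write supersedes the first entry for LBA 5
    ]

    def rebuild(log_records):
        mapping, latest_seq = {}, {}
        for record in log_records:                  # physical scan order; sequence indicators break ties
            lba = record["lba"]
            if record["seq"] >= latest_seq.get(lba, -1):
                latest_seq[lba] = record["seq"]
                mapping[lba] = record["physical"]
        return mapping

    assert rebuild(log) == {5: 0x0400, 9: 0x0200}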

In one embodiment, the direct mapping module 606 receives one or more addresses of an I/O request, such as logical block addresses of the storage device 118 or the like, from the storage request module 602 and the direct mapping module 606 references the mapping structure to determine whether or not the cache 102 stores data associated with the I/O request. The direct mapping module 606, in response to referencing the mapping structure, may provide information from the mapping structure to the cache fulfillment module 604, such as a determination whether the cache 102 stores data of the I/O request, a physical storage address on the solid-state storage media 110 for data of the I/O request, or the like to assist the cache fulfillment module 604 in satisfying the I/O request. In response to the cache fulfillment module 604 satisfying an I/O request, in certain embodiments, the direct mapping module 606 updates the mapping structure to reflect changes or updates to the cache 102 that the cache fulfillment module 604 made to satisfy the I/O request.

The cache fulfillment module 604 satisfies I/O requests that the storage request module 602 detects. In certain embodiments, if the direct mapping module 606 determines that the cache 102 stores data of an I/O request, such as storing at least one data block of the I/O request or the like, the cache fulfillment module 604 satisfies the I/O request at least partially using the cache 102. The cache fulfillment module 604 satisfies an I/O request based on the type of I/O request. For example, the cache fulfillment module 604 may satisfy a write I/O request by storing data of the I/O request to the cache 102, may satisfy a read I/O request by reading data of the I/O request from the cache 102, and the like. An embodiment of the cache fulfillment module 604 that includes a write request module 703 for fulfilling write I/O requests and a read request module 704 for fulfilling read I/O requests is described below in greater detail with regard to Figure 4.

In one embodiment, if the direct mapping module 606 determines that the cache 102 does not store data of an I/O request, i.e. there is a cache miss, the cache fulfillment module 604 stores data of the I/O request to the cache 102. The cache fulfillment module 604, in response to a write I/O request, a cache miss, or the like, in certain embodiments, stores data of an I/O request to the solid-state storage media 110 of the cache 102 sequentially to preserve an ordered sequence of I/O operations performed on the solid-state storage media 110. For example, the cache fulfillment module 604 may store the data of I/O requests to the cache 102 sequentially by appending the data to an append point of a sequential, log-based, cyclic writing structure of the solid-state storage media 110, in the order that the storage request module 602 receives the I/O requests. One embodiment of a sequential, log-based, cyclic writing structure is described below with regard to Figure 8.
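As a minimal, hypothetical Python sketch of this append-point behavior, each write lands at the current append point of a cyclic log, the data is stored together with its logical address, and the single mapping structure is updated afterwards; the media size and list representation are illustrative assumptions, and wrap-around presumes the storage space recovery process has already reclaimed the overwritten region.

    # Illustrative sketch: appending data at the append point of a cyclic, log-based structure.
    MEDIA_BLOCKS = 8
    media = [None] * MEDIA_BLOCKS
    append_point = 0

    def append(lba, data, mapping):
        global append_point
        physical = append_point
        media[physical] = (lba, data)                        # data stored with its logical address
        mapping[lba] = physical                              # mapping structure updated after the write
        append_point = (append_point + 1) % MEDIA_BLOCKS     # cyclic: wraps onto reclaimed space
        return physical

    mapping = {}
    append(12, b"a", mapping)
    append(12, b"b", mapping)                                # an overwrite lands at a new physical location
    assert mapping[12] == 1 and media[0] == (12, b"a")       # the stale copy awaits garbage collection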

The cache fulfillment module 604, in one embodiment, stores data in a manner that associates the data with a sequence indicator for the data. The cache fulfillment module 604 may store a numerical sequence indicator as metadata with data of an I/O request, may use the sequential order of a log-based writing structure as a sequence indicator, or the like. In a further embodiment, the cache fulfillment module 604 stores data in a manner that associates the data with respective logical addresses of the data, storing one or more logical block addresses of the data with the data in a sequential, log-based writing structure or the like. By storing sequence indicators and logical addresses of data with the data on the solid-state storage media 110 of the cache 102, the cache fulfillment module 604 enables the direct mapping module 606 to reconstruct, rebuild, and/or recover entries in the mapping structure using the stored sequence indicators and logical addresses, as described above.

Figure 4 depicts another embodiment of the direct cache module 116. In the depicted embodiment, the direct cache module 116 includes the block I/O emulation layer 506, the direct interface layer 508, the storage request module 602, the cache fulfillment module 604, and the direct mapping module 606, substantially as described above with regard to Figures 2 and 3. The direct cache module 116, in the depicted embodiment, further includes a storage device interface module 702, a write acknowledgement module 706, a cleaner module 708, a garbage collection module 710, and an eviction module 712. The cache fulfillment module 604, in the depicted embodiment, includes a write request module 703 and a read request module 704.

In one embodiment, the write request module 703 services and satisfies write I/O requests that the storage request module 602 detects. A write request, in one embodiment, includes data that is not stored on the storage device 118, such as new data not yet stored on the storage device 118, modifications to data that is stored on the storage device 118, and the like. A write request, in various embodiments, may directly include the data, may include a reference, a pointer, or an address for the data, or the like. For example, in one embodiment, a write request includes a range of addresses indicating data to be stored on the storage device 118 by way of a Direct Memory Access ("DMA") or Remote DMA ("RDMA") operation.

In a further embodiment, a single write request may include several different contiguous and/or noncontiguous ranges of addresses or blocks. In a further embodiment, a write request includes one or more destination addresses for the associated data, such as logical and/or physical addresses for the data on the storage device 118. The write request module 703 and/or another cooperating module, in various embodiments, may retrieve the data of a write request directly from the write request itself, from a storage location referenced by a write request (i.e. from a location in system memory or other data storage referenced in a DMA or RDMA request), or the like to service the write request.

The write request module 703, in one embodiment, writes data of a write request to the cache 102 at one or more logical addresses of the cache 102 corresponding to the addresses of the write request as mapped by the direct mapping module 606. In a further embodiment, the write request module 703 writes the data of the write request to the cache 102 by appending the data to a sequential, log-based, cyclic writing structure of the physical solid-state storage media 110 of the cache 102 at an append point. The write request module 703, in one embodiment, returns one or more physical addresses or locations corresponding to the append point and the direct mapping module 606 maps the one or more logical addresses of the cache 102 to the one or more physical addresses corresponding to the append point.

In one embodiment, the read request module 704 services and satisfies read I/O requests that the storage request module 602 detects for data stored in the cache 102 and/or the storage device 118. A read request is a read command with an indicator, such as a logical address or range of logical addresses, of the data being requested. In one embodiment, the read request module 704 supports read requests with several contiguous and/or noncontiguous ranges of logical addresses, as discussed above with regard to the storage request module 602.

In the depicted embodiment, the read request module 704 includes a read miss module 718 and a read retrieve module 720. The read miss module 718, in one embodiment, determines whether or not requested data is stored in the cache 102, in cooperation with the direct mapping module 606 or the like. The read miss module 718 may query the cache 102 directly, query the direct mapping module 606, query the mapping structure of the direct mapping module 606, or the like to determine whether or not requested data is stored in the cache 102.

The read retrieve module 720, in one embodiment, returns requested data to the requesting entity, such as the host device 114. If the read miss module 718 and/or the direct mapping module 606 determine that the cache 102 stores the requested data, in one embodiment, the read retrieve module 720 reads the requested data from the cache 102 and returns the data to the requesting entity. The direct mapping module 606, in one embodiment, provides the read retrieve module 720 with one or more physical addresses of the requested data in the cache 102 by mapping one or more logical addresses of the requested data to the one or more physical addresses of the requested data.

If the read miss module 718 and/or the direct mapping module 606 determines that the cache 102 does not store the requested data, in one embodiment, the read retrieve module 720 reads the requested data from the storage device 118, stores the requested data to the cache 102, and returns the requested data to the requesting entity to satisfy the associated read request. In one embodiment, the read retrieve module 720 writes the requested data to the cache 102 by appending the requested data to an append point of a sequential, log-based, cyclic writing structure of the cache 102. In a further embodiment, the read retrieve module 720 provides one or more physical addresses corresponding to the append point to the direct mapping module 606 with the one or more logical addresses of the requested data and the direct mapping module 606 adds and/or updates the mapping structure with the mapping of logical and physical addresses for the requested data. The read retrieve module 720, in one embodiment, writes the requested data to the cache 102 using and/or in conjunction with the cache fulfillment module 604.

In one embodiment, the read miss module 718 detects a partial miss, where the cache 102 stores one portion of the requested data but does not store another. A partial miss, in various embodiments, may be the result of eviction of the unstored data, a block I/O request for noncontiguous data, or the like. The read miss module 718, in one embodiment, reads the missing data or "hole" data from the storage device 118 and returns both the portion of the requested data from the cache 102 and the portion of the requested data from the storage device 118 to the requesting entity. In one embodiment, the read miss module 718 stores the missing data retrieved from the storage device 118 in the cache 102.
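The following is a minimal, illustrative Python sketch of the read path under a partial miss, with the two dictionaries standing in for the cache 102 and the storage device 118: cached blocks are returned directly, and missing "hole" blocks are read from the backing storage device and also stored back into the cache.

    # Illustrative sketch: servicing a read when the cache holds only part of the range.
    def read_range(start_lba, count, cache_map, storage_map):
        result = []
        for lba in range(start_lba, start_lba + count):
            if lba in cache_map:                       # cache hit for this block
                result.append(cache_map[lba])
            else:                                      # miss: fetch the "hole" from the storage device
                block = storage_map[lba]
                cache_map[lba] = block                 # populate the cache for future reads
                result.append(block)
        return result

    cache = {1: b"c1"}
    store = {0: b"s0", 1: b"old", 2: b"s2"}
    assert read_range(0, 3, cache, store) == [b"s0", b"c1", b"s2"]
    assert cache[2] == b"s2"                           # the missing block is now cached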

In one embodiment, the write acknowledgement module 706 acknowledges, to a requesting entity such as the host device 114, a write request that the storage request module 602 receives. The write acknowledgement module 706, in a further embodiment, acknowledges persistence of the write request. In one embodiment, the write acknowledgement module 706 implements a particular data integrity policy. Advantageously, embodiments of the present invention permit variations in the data integrity policy that is implemented. The write acknowledgement module 706, in one embodiment, acknowledges the write request in response to the cache fulfillment module 604 writing data of the write request to the cache 102. In a further embodiment, the write acknowledgement module 706 acknowledges the write request in response to the cleaner module 708 writing data of the write request to the storage device 118, as described below.

In one embodiment, the cleaner module 708 writes data from the cache 102 to the storage device 118, destaging or cleaning the data. Data that is stored in the cache 102 that is not yet stored in the storage device 118 is referred to as "dirty" data. Once the storage device 118 stores data, the data is referred to as "clean." The cleaner module 708 cleans data in the cache 102 by writing the data to the storage device 118. The cleaner module 708, in one embodiment, may determine an address for the data in the storage device 118 based on a write request corresponding to the data. In a further embodiment, the cleaner module 708 determines an address for the data in the storage device 118 based on a logical address of the data in the cache 102, based on the mapping structure of the direct mapping module 606, or the like. In another embodiment, the cleaner module 708 uses the reverse mapping module 804 to determine an address for the data in the storage device 118 based on a physical address of the data in the cache 102.

The cleaner module 708, in one embodiment, writes data to the storage device 118 based on a write policy. In one embodiment, the cleaner module 708 uses a write-back write policy, and does not immediately write data of a write request to the storage device 118 upon receiving the write request. Instead, the cleaner module 708, in one embodiment, performs an opportunistic or "lazy" write, writing data to the storage device 118 when the data is evicted from the cache 102, when the cache 102 and/or the direct cache module 116 has a light load, when available storage capacity in the cache 102 falls below a threshold, or the like. In a write-back embodiment, the cleaner module 708 reads data from the cache 102, writes the data to the storage device 118, and sets an indicator that the storage device 118 stores the data, in response to successfully writing the data to the storage device 118. Setting the indicator that the storage device 118 stores the data alerts the garbage collection module 710 that the data may be cleared from the cache 102 and/or alerts the eviction module 712 that the data may be evicted from the cache 102.
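A minimal, hypothetical Python sketch of such lazy, write-back cleaning follows; the capacity threshold and the dictionary and set parameters are illustrative assumptions, with the dirty set acting as the indicator of which cached data the storage device does not yet hold.

    # Illustrative sketch: opportunistic ("lazy") write-back cleaning of dirty data.
    def clean(cache_map, dirty, storage_map, free_capacity, low_water_mark=2):
        if free_capacity >= low_water_mark:            # no pressure: defer the writes
            return
        for lba in sorted(dirty):
            storage_map[lba] = cache_map[lba]          # write the data to the storage device
            dirty.discard(lba)                         # indicator set: the data is now clean

    cache, dirty, store = {3: b"x", 4: b"y"}, {3, 4}, {}
    clean(cache, dirty, store, free_capacity=1)        # capacity fell below the threshold
    assert store == {3: b"x", 4: b"y"} and not dirty   # clean data may now be evicted or collected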

In one embodiment, the cleaner module 708 sets an indicator that the storage device 118 stores data by marking the data as clean in the cache 102. In a further embodiment, the cleaner module 708 may set an indicator that the storage device 118 stores data by communicating an address of the data to the direct mapping module 606, sending a request to the direct mapping module 606 to update an indicator in a logical to physical mapping or other mapping structure, or the like.

In one embodiment, the cleaner module 708 maintains a separate data structure indicating which data in the cache 102 is clean and which data is dirty. In another embodiment, the cleaner module 708 references indicators in a mapping of logical addresses to physical media addresses, such as a mapping structure maintained by the direct mapping module 606, to determine which data in the cache 102 is clean and which data is dirty.

In another embodiment, instead of cleaning data according to a write-back write policy, the cleaner module 708 uses a write-through policy, performing a synchronous write to the storage device 118 for each write request that the storage request module 602 receives. The cleaner module 708, in one embodiment, transitions from a write-back to a write-through write policy in response to a predefined error condition, such as an error or failure of the cache 102, or the like.

In one embodiment, the garbage collection module 710 recovers storage capacity of physical storage media corresponding to data that is marked as invalid, such as data cleaned by the cleaner module 708 and/or evicted by the eviction module 712. The garbage collection module 710, in one embodiment, recovers storage capacity of physical storage media corresponding to data that the cleaner module 708 has cleaned and that the eviction module 712 has evicted, or that has been otherwise marked as invalid. In one embodiment, the garbage collection module 710 allows clean data to remain in the cache 102 as long as possible until the eviction module 712 evicts the data or the data is otherwise marked as invalid, to decrease the number of cache misses.

In one embodiment, the garbage collection module 710 recovers storage capacity of physical storage media corresponding to invalid data opportunistically. For example, the garbage collection module 710 may recover storage capacity in response to a lack of available storage capacity, a percentage of data marked as invalid reaching a predefined threshold level, a consolidation of valid data, an error detection rate for a section of physical storage media reaching a threshold value, performance crossing a threshold value, a scheduled garbage collection cycle, identifying a section of the physical storage media 110 with a high amount of invalid data, identifying a section of the physical storage media 110 with a low amount of wear, or the like. In one embodiment, the garbage collection module 710 relocates valid data that is in a section of the physical storage media 110 in the cache 102 that the garbage collection module 710 is recovering to preserve the valid data. In one embodiment, the garbage collection module 710 is part of an autonomous garbage collector system that operates within the cache 102. This allows the cache 102 to manage data so that data is systematically spread throughout the solid-state storage media 110, or other physical storage media, to improve performance, data reliability and to avoid overuse and underuse of any one location or area of the solid-state storage media 110 and to lengthen the useful life of the solid-state storage media 110.
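For illustration only, the following Python sketch shows one way such opportunistic recovery might select and reclaim a section; the section layout, the "most invalid data" selection criterion, and the relocate callback are hypothetical simplifications of the behavior described above.

    # Illustrative sketch: recover the section with the most invalid data, preserving valid blocks.
    def recover_one(sections, relocate):
        """sections: dict name -> {"valid": {lba: data}, "invalid_count": int}."""
        victim = max(sections, key=lambda name: sections[name]["invalid_count"])
        for lba, data in sections[victim]["valid"].items():
            relocate(lba, data)                        # valid data is copied forward before recovery
        sections.pop(victim)                           # the recovered section rejoins the available pool
        return victim

    moved = []
    sections = {"erase_block_0": {"valid": {7: b"v"}, "invalid_count": 5},
                "erase_block_1": {"valid": {}, "invalid_count": 1}}
    assert recover_one(sections, lambda lba, data: moved.append(lba)) == "erase_block_0"
    assert moved == [7] and "erase_block_0" not in sections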

The garbage collection module 710, upon recovering a section of the physical storage media 110, allows the cache 102 to re-use the section of the physical storage media 110 to store different data. In one embodiment, the garbage collection module 710 adds the recovered section of physical storage media to an available storage pool for the cache 102, or the like. The garbage collection module 710, in one embodiment, erases existing data in a recovered section. In a further embodiment, the garbage collection module 710 allows the cache 102 to overwrite existing data in a recovered section. Whether or not the garbage collection module 710, in one embodiment, erases existing data in a recovered section may depend on the nature of the physical storage media. For example, flash media requires that cells be erased prior to reuse, whereas magnetic media such as hard drives do not have that requirement. In an embodiment where the garbage collection module 710 does not erase data in a recovered section, but allows the cache 102 to overwrite data in the recovered section, the garbage collection module 710, in certain embodiments, may mark the data in the recovered section as unavailable to service read requests so that subsequent requests for data in the recovered section return a null result or an empty set of data until the cache 102 overwrites the data.

In one embodiment, the garbage collection module 710 recovers storage capacity of the cache 102 one or more storage divisions at a time. A storage division, in one embodiment, is an erase block or other predefined division. For flash memory, an erase operation on an erase block writes ones to every bit in the erase block. This is a lengthy process compared to a program operation which starts with a location being all ones, and as data is written, some bits are changed to zero. However, where the solid-state storage 110 is not flash memory or has flash memory where an erase cycle takes a similar amount of time as other operations, such as a read or a program, the eviction module 712 may erase the data of a storage division as it evicts data, instead of the garbage collection module 710.

In one embodiment, allowing the eviction module 712 to mark data as invalid rather than actually erasing the data and allowing the garbage collection module 710 to recover the physical media associated with invalid data, increases efficiency because, as mentioned above, for flash memory and other similar storage an erase operation takes a significant amount of time. Allowing the garbage collection module 710 to operate autonomously and opportunistically within the cache 102 provides a way to separate erase operations from reads, writes, and other faster operations so that the cache 102 operates very efficiently.

In one embodiment, the garbage collection module 710 is integrated with and/or works in conjunction with the cleaner module 708 and/or the eviction module 712. For example, the garbage collection module 710, in one embodiment, clears data from the cache 102 in response to an indicator that the storage device stores the data (i.e. that the cleaner module 708 has cleaned the data) based on a cache eviction policy (i.e. in response to the eviction module 712 evicting the data). The eviction module 712, in one embodiment, evicts data by marking the data as invalid. In other embodiments, the eviction module 712 may evict data by erasing the data, overwriting the data, trimming the data, deallocating physical storage media associated with the data, or otherwise clearing the data from the cache 102.

The eviction module 712, in one embodiment, evicts data from the cache 102 based on a cache eviction policy. The cache eviction policy, in one embodiment, is based on a combination or a comparison of one or more cache eviction factors. In one embodiment, the cache eviction factors include wear leveling of the physical storage media 110. In another embodiment, the cache eviction factors include a determined reliability of a section of the physical storage media 110. In a further embodiment, the cache eviction factors include a failure of a section of the physical storage media 110. The cache eviction factors, in one embodiment, include a least recently used ("LRU") block of data. In another embodiment, the cache eviction factors include a frequency of access of a block of data, i.e. how "hot" or "cold" a block of data is. In one embodiment, the cache eviction factors include a position of a block of data in the physical storage media 110 relative to other "hot" data. One of skill in the art, in light of this specification, will recognize other cache eviction factors suitable for use in the cache eviction policy.
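As a purely illustrative Python sketch, several such eviction factors might be combined into a single score per block; the particular factor names, weights, and direction of each term are hypothetical and not a prescribed policy.

    # Illustrative sketch: combining cache eviction factors into one score.
    def eviction_score(block):
        score = 0.0
        score += 1.0 / (1.0 + block["access_frequency"])     # "cold" data is a better candidate
        score += block["age_since_last_use"] / 1000.0        # approximates least-recently-used
        score += block["media_pressure"]                     # media-driven factors (wear, reliability)
        return score

    candidates = [
        {"lba": 1, "access_frequency": 30, "age_since_last_use": 10,  "media_pressure": 0.0},
        {"lba": 2, "access_frequency": 1,  "age_since_last_use": 900, "media_pressure": 0.2},
    ]
    victim = max(candidates, key=eviction_score)
    assert victim["lba"] == 2                                # the cold, old block is evicted first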

In one embodiment, the direct mapping module 606 determines one or more of the cache eviction factors based on a history of access to the mapping structure. The direct mapping module 606, in a further embodiment, identifies areas of high frequency, "hot," use and/or low frequency, "cold," use by monitoring accesses of branches or nodes in the mapping structure. The direct mapping module 606, in a further embodiment, determines a count or frequency of access to a branch, directed edge, or node in the mapping structure. In one embodiment, a count associated with each node of a B-tree-like mapping structure may be incremented for each I/O read operation and/or each I/O write operation that visits the node in a traversal of the mapping structure. Of course, separate read counts and write counts may be maintained for each node. Certain counts may be aggregated to different levels in the mapping structure in other embodiments. The eviction module 712, in one embodiment, evicts data from the cache 102 intelligently and/or opportunistically based on activity in the mapping structure monitored by the direct mapping module 606, based on information about the physical storage media 110, and/or based on other cache eviction factors.
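A minimal, hypothetical Python sketch of this per-node counting follows; a tiny binary search tree stands in for the B-tree-like mapping structure, and every node visited during a traversal has its read or write counter incremented.

    # Illustrative sketch: counting node visits during traversal of the mapping structure.
    class Node:
        def __init__(self, lba, physical):
            self.lba, self.physical = lba, physical
            self.left = self.right = None
            self.read_count = self.write_count = 0

    def visit(root, lba, is_write=False):
        node = root
        while node is not None:
            if is_write:
                node.write_count += 1              # every node on the traversal path is counted
            else:
                node.read_count += 1
            if lba == node.lba:
                return node
            node = node.left if lba < node.lba else node.right
        return None

    root = Node(50, 0x100)
    root.left = Node(20, 0x200)
    visit(root, 20)                                # a read traversal
    visit(root, 20, is_write=True)                 # a write traversal, counted separately
    assert root.read_count == 1 and root.left.write_count == 1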

The direct mapping module 606, the eviction module 712, and/or the garbage collection module 710, in one embodiment, share information to increase the efficiency of the cache 102, to reduce cache misses, to make intelligent eviction decisions, and the like. In one embodiment, the direct mapping module 606 tracks or monitors a frequency that I/O requests access logical addresses in the mapping structure. The direct mapping module 606, in a further embodiment, stores the access frequency information in the mapping structure, communicates the access frequency information to the eviction module 712 and/or to the garbage collection module 710, or the like. The direct mapping module 606, in another embodiment, may track, collect, or monitor other usage/access statistics relating to the logical to physical mapping of addresses for the cache 102 and/or relating to the mapping between the logical address space of the cache 102 and the address space of the storage device 118, and may share that data with the eviction module 712 and/or with the garbage collection module 710.

One example of a benefit of sharing information between the direct mapping module 606, the eviction module 712, and the garbage collection module 710, in certain embodiments, is that write amplification can be reduced. As described above, in one embodiment, the garbage collection module 710 copies any valid data in an erase block forward to the current append point of the log-based append-only writing structure of the cache 102 before recovering the physical storage capacity of the erase block. By cooperating with the direct mapping module 606 and/or with the eviction module 712, in one embodiment, the garbage collection module 710 may clear certain valid data from an erase block without copying the data forward (for example because the replacement algorithm for the eviction module 712 indicates that the valid data is unlikely to be re-requested soon), reducing write amplification, increasing available physical storage capacity and efficiency.

For example, in one embodiment, the garbage collection module 710 preserves valid data with an access frequency in the mapping structure that is above a predefined threshold, and clears valid data from an erase block if the valid data has an access frequency below the predefined threshold. In a further embodiment, the eviction module 712 may mark certain data as conditionally evictable, conditionally invalid, or the like, and the garbage collection module 710 may evict the conditionally invalid data based on an access frequency or other data that the direct mapping module 606 provides. In another example, the direct mapping module 606, the eviction module 712, and the garbage collection module 710, in one embodiment, cooperate such that valid data that is in the cache 102 and is dirty gets stored on the storage device 118 by the garbage collection module 710 rather than copied to the front of the log, because the eviction module 712 indicated that it is more advantageous to do so.

Those of skill in the art will appreciate a variety of other examples and scenarios in which the modules responsible for managing the non-volatile storage media that uses a log-based append-only writing structure can leverage the information available in the direct cache module 116. Furthermore, those of skill in the art will appreciate a variety of other examples and scenarios in which the modules responsible for managing the cache 102 (direct cache module 116, cleaning and eviction determinations) can leverage the information available in solid-state controller 104 regarding the condition of the non- volatile storage media.

In another example, the direct mapping module 606, the eviction module 712, and the garbage collection module 710, in one embodiment, cooperate such that selection of one or more blocks of data by the eviction module 712 is influenced by the Uncorrectable Bit Error Rates (UBER), Correctable Bit Error Rates (BER), Program/Erase (PE) cycle counts, read frequency, or other non-volatile solid-state storage specific attributes of the region of the solid-state storage media 110 in the cache 102 that presently holds the valid data. High BER, UBER, or PE counts may be used as factors to increase the likelihood that the eviction module 712 will evict a particular block range stored on media having those characteristics.

In one embodiment, the storage device interface module 702 provides an interface between the direct cache module 116, the cache 102, and/or the storage device 118. As described above with regard to Figure 2, in various embodiments, the direct cache module 116 may interact with the cache 102 and/or the storage device 118 through a block device interface, a direct interface, a device driver on the host device 114, a storage controller, or the like. In one embodiment, the storage device interface module 702 provides the direct cache module 116 with access to one or more of these interfaces. For example, the storage device interface module 702 may receive read commands, write commands, and clear (or TRIM) commands from one or more of the cache fulfillment module 604, the direct mapping module 606, the read request module 704, the cleaner module 708, the garbage collection module 710, and the like and relay the commands to the cache 102 and/or the storage device 118. In a further embodiment, the storage device interface module 702 may translate or format a command into a format compatible with an interface for the cache 102 and/or the storage device 118.

In one embodiment, the storage device interface module 702 has exclusive ownership over the storage device 118 and the direct cache module 116 is an exclusive gateway to accessing the storage device 118. Providing the storage device interface module 702 with exclusive ownership over the storage device 118 and preventing access to the storage device 118 by other routes obviates stale data issues and cache coherency requirements, because all changes to data in the storage device 118 are processed by the direct cache module 116.

In a further embodiment, the storage device interface module 702 does not have exclusive ownership of the storage device 118, and the storage device interface module 702 manages cache coherency for the cache 102. For example, in various embodiments, the storage device interface module 702 may access a common directory with other users of the storage device 118 to maintain coherency, may monitor write operations from other users of the storage device 118, may participate in a predefined coherency protocol with other users of the storage device 118, or the like.

Figure 5 is a schematic block diagram illustrating one embodiment of an apparatus 800 to efficiently map physical and logical addresses in accordance with the present invention. The apparatus 800 includes a forward mapping module 802, a reverse mapping module 804, and a storage space recovery module 806, which are described below. At least a portion of one or more of the forward mapping module 802, the reverse mapping module 804, and the storage space recovery module 806 is located within one or more of a requesting device that transmits the storage request, the solid-state storage media 110, the storage controller 104, and a computing device separate from the requesting device, the solid-state storage media 110, and the storage controller 104.

In one embodiment, the forward mapping module 802 and the reverse mapping module 804 work in conjunction with the direct mapping module 606. The forward mapping module 802 and the reverse mapping module 804 may be part of the direct mapping module 606, may be separate and work together with the direct mapping module 606, or the like.

The apparatus 800 includes a forward mapping module 802 that uses a forward map to identify one or more physical addresses of data of a data segment. The physical addresses are identified from one or more logical addresses of the data segment, which are identified in a storage request directed to the solid-state storage media 110. For example, a storage request may include a request to read data stored in the solid-state storage media 110. The storage request to read data includes a logical address or logical identifier associated with the data stored on the solid-state storage media 110. The read request may include a logical or virtual address of a file from which the data segment originated, which may be interpreted to mean that the read request is a request to read an entire data segment associated with the logical or virtual address.

The read request, in another example, includes a logical address along with an offset as well as a data length of the data requested in the read request. For example, if a data segment is 20 blocks, a read request may include an offset of 16 blocks (i.e. start at block 16 of 20) and a data length of 5 so that the read request reads the last 5 blocks of the data segment. A read request may also include an offset and a data length in a request to read an entire data segment or to read from the beginning of a data segment. Other requests may also be included in a storage request, such as a status request. Other types and other forms of storage requests are contemplated within the scope of the present invention and will be recognized by one of skill in the art.
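
Purely as an illustrative sketch, the example read request described above (the last 5 blocks of a 20-block data segment) might be modeled as follows in Python; the class and field names are hypothetical and used only for illustration.

    from dataclasses import dataclass

    @dataclass
    class ReadRequest:
        logical_address: int   # logical address or identifier of the data segment
        offset: int            # starting block within the segment (1-based here)
        length: int            # number of blocks to read

    # Read the last 5 blocks of a 20-block data segment:
    request = ReadRequest(logical_address=182, offset=16, length=5)
    blocks_to_read = list(range(request.offset, request.offset + request.length))
    print(blocks_to_read)   # [16, 17, 18, 19, 20]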

The apparatus 800 includes a forward map that maps one or more logical addresses to one or more physical addresses of data stored in the solid-state storage media 110. The logical addresses correspond to one or more data segments relating to the data stored in the solid-state storage media 110. The one or more logical addresses typically include discrete addresses within a logical address space where the logical addresses sparsely populate the logical address space. For a logical address of a data segment, data length information may also be associated with the logical address and may also be included in the forward map. The data length typically corresponds to the size of the data segment. Combining a logical address and data length information associated with the logical address may be used to facilitate reading a particular portion within a data segment.

Often logical addresses used to identify stored data represent a very small number of logical addresses that are possible within a name space or range of possible logical addresses. Searching this sparsely populated space may be cumbersome. For this reason, the forward map is typically a data structure that facilitates quickly traversing the forward map to find a physical address based on a logical address. For example, the forward map may include a B-tree, a content addressable memory ("CAM"), a binary tree, a hash table, or other data structure that facilitates quickly searching a sparsely populated space or range. By using a forward map that quickly searches a sparsely populated logical namespace or address space, the apparatus 800 provides an efficient way to determine one or more physical addresses from a logical address.
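
The following Python sketch illustrates, under simplified assumptions, how a sparsely populated forward map of non-overlapping, variable-length ranges might be searched; a sorted list with binary search stands in here for the B-tree, CAM, binary tree, or hash table described above, and the names and physical-location strings are hypothetical.

    import bisect

    class ForwardMap:
        """Sketch of a sparsely populated forward map of non-overlapping ranges."""

        def __init__(self):
            self._starts = []    # sorted starting logical addresses
            self._entries = []   # parallel list of (start, end, physical_location)

        def insert(self, start, end, physical_location):
            i = bisect.bisect_left(self._starts, start)
            self._starts.insert(i, start)
            self._entries.insert(i, (start, end, physical_location))

        def lookup(self, logical_address):
            i = bisect.bisect_right(self._starts, logical_address) - 1
            if i >= 0:
                start, end, physical = self._entries[i]
                if start <= logical_address <= end:
                    return physical
            return None    # address not mapped, e.g. a cache miss

    fmap = ForwardMap()
    fmap.insert(205, 212, "erase-block-G")
    fmap.insert(178, 192, "erase-block-F")
    print(fmap.lookup(182))   # "erase-block-F"
    print(fmap.lookup(200))   # None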

While the forward map may be optimized, or at least designed, for quickly determining a physical address from a logical address, typically the forward map is not optimized for locating all of the data within a specific region of the solid-state storage media 110. For this reason, the apparatus 800 includes a reverse mapping module 804 that uses a reverse map to determine a logical address of a data segment from a physical address. The reverse map is used to map the one or more physical addresses to one or more logical addresses and can be used by the reverse mapping module 804 or other process to determine a logical address from a physical address. The reverse map beneficially maps the solid-state storage media 110 into erase regions such that a portion of the reverse map spans an erase region of the solid-state storage media 110 erased together during a storage space recovery operation. The storage space recovery operation (or garbage collection operation) recovers erase regions for future storage of data. By organizing the reverse map by erase region, the storage space recovery module 806 can efficiently identify an erase region for storage space recovery and identify valid data. The storage space recovery module 806 is discussed in more detail below.
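
As a non-limiting sketch only, the following Python example shows one possible way reverse map entries might be grouped by erase region so that a storage space recovery pass need scan only the entries of a single region; the class, method, and field names are hypothetical.

    from collections import defaultdict

    class ReverseMap:
        """Sketch of a reverse map whose entries are grouped by erase region."""

        def __init__(self):
            self._regions = defaultdict(dict)   # region id -> {physical_address: entry}

        def record(self, region, physical_address, logical_address, length, valid=True):
            self._regions[region][physical_address] = {
                "logical_address": logical_address,
                "length": length,
                "valid": valid,
            }

        def invalidate(self, region, physical_address):
            self._regions[region][physical_address]["valid"] = False

        def valid_data(self, region):
            # Only the entries of one erase region are scanned, not the whole map.
            return {pa: e for pa, e in self._regions[region].items() if e["valid"]}

    rmap = ReverseMap()
    rmap.record("erase-block-n", physical_address=0x1000, logical_address=182, length=1)
    rmap.record("erase-block-n", physical_address=0x1200, logical_address=50, length=4)
    rmap.invalidate("erase-block-n", 0x1200)
    print(list(rmap.valid_data("erase-block-n")))   # [4096] -- only 0x1000 is still valid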

The physical addresses in the reverse map are associated or linked with the forward map so that if logical address A is mapped to physical address B in the forward map, physical address B is mapped to logical address A in the reverse map. In one embodiment, the forward map includes physical addresses that are linked to entries in the reverse map. In another embodiment, the forward map includes pointers to physical addresses in the reverse map or some other intermediate list, table, etc. One of skill in the art will recognize other ways to link physical addresses to the forward map and reverse map.

In one embodiment, the reverse map includes one or more source parameters. The source parameters are typically received in conjunction with a storage request and include at least one or more logical addresses. The source parameters may also include data lengths associated with data of a data segment received in conjunction with a storage request. In another embodiment, the reverse map does not include source parameters in the form of logical addresses or data lengths, and the source parameters are stored with data of the data segment stored on the solid-state storage media 110. In this embodiment, the source parameters may be discovered from a physical address in the reverse map which leads to the source parameters stored with the data. Said differently, the reverse map may use the primary logical-to-physical map rather than a secondary logical-to-physical map.

Storing the source parameters with the data is advantageous in a sequential storage device because the data stored in the solid-state storage media 110 becomes a log that can be replayed to rebuild the forward and reverse maps. This is due to the fact that the data is stored in a sequence matching when storage requests are received, and thus the source data serves a dual role: rebuilding the forward and reverse maps and determining a logical address from a physical address.

The apparatus 800 includes a storage space recovery module 806 that uses the reverse map to identify valid data in an erase region prior to an operation to recover the erase region. The identified valid data is moved to another erase region prior to the recovery operation. By organizing the reverse map by erase region, the storage space recovery module 806 can scan through a portion of the reverse map corresponding to an erase region to quickly identify valid data or to determine a quantity of valid data in the erase region. An erase region may include an erase block, a fixed number of pages, etc. erased together. The reverse map may be organized so that once the entries for a particular erase region are scanned, the contents of the erase region are known.

By organizing the reverse map by erase region, searching the contents of an erase region is more efficient than searching a B-tree, binary tree, or other similar structure used for logical-to-physical address searches. Searching a forward map in the form of a B-tree, binary tree, etc. is cumbersome because the B-tree, binary tree, etc. would frequently have to be searched in its entirety to identify all of the valid data of the erase region. The reverse map may include a table, database, or other structure that allows entries for data of an erase region to be stored together to facilitate operations on data of an erase region.

In one embodiment, the forward map and the reverse map are independent of a file structure, a name space, a directory, etc. that organize data for the requesting device transmitting the storage request, such as a file server or client operating in a server or the host device 114. By maintaining the forward map and the reverse map separate from any file server of the requesting device, the apparatus 800 is able to emulate a random access, logical block storage device storing data as requested by the storage request.

Use of the forward map and reverse map allows the apparatus 800 to appear to be storing data in specific locations as directed by a storage request while actually storing data sequentially in the solid-state storage media 110. Beneficially, the apparatus 800 overcomes problems that random access causes for solid-state storage, such as flash memory, by emulating logical block storage while actually storing data sequentially. The apparatus 800 also allows flexibility because one storage request may be a logical block storage request while a second storage request may be an object storage request, file storage request, etc. Maintaining independence from file structures, namespaces, etc. of the requesting device provides great flexibility as to which type of storage requests may be serviced by the apparatus 800.

Figure 6 is a schematic block diagram illustrating another embodiment of an apparatus 900 for efficient mapping of logical and physical addresses in accordance with the present invention. The apparatus 900 includes a forward mapping module 802, a reverse mapping module 804, and a storage space recovery module 806, which are substantially similar to those described above in relation to the apparatus 800 of Figure 5. The apparatus 900 also includes a map rebuild module 902, a checkpoint module 904, a map sync module 906, an invalidate module 908, and a map update module 910, which are described below.

The apparatus 900 includes a map rebuild module 902 that rebuilds the forward map and the reverse map using the source parameters stored with the data. Where data is stored on the solid-state storage media 110 sequentially, by keeping track of the order in which erase regions or erase blocks in the solid-state storage media 110 were filled and by storing source parameters with the data, the solid-state storage media 110 becomes a sequential log. The map rebuild module 902 replays the log by sequentially reading data packets stored on the solid-state storage media 110. Each physical address and data packet length is paired with the source parameters found in each data packet to recreate the forward and reverse maps.
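
A minimal, non-limiting Python sketch of such a replay follows, assuming (hypothetically) that each data packet can be read back as a (physical address, packet length, logical address) tuple in the order the media was filled; later packets for the same logical address supersede earlier ones.

    def rebuild_maps(log_packets):
        """Rebuild forward and reverse maps by replaying a sequential log.

        'log_packets' yields (physical_address, packet_length, logical_address)
        tuples in the order the solid-state storage media was filled.
        """
        forward = {}   # logical_address -> (physical_address, packet_length)
        reverse = {}   # physical_address -> (logical_address, packet_length, valid)
        for physical, length, logical in log_packets:
            if logical in forward:
                old_physical, _ = forward[logical]
                la, ln, _ = reverse[old_physical]
                reverse[old_physical] = (la, ln, False)   # older copy becomes invalid
            forward[logical] = (physical, length)
            reverse[physical] = (logical, length, True)
        return forward, reverse

    log = [(0, 2, 10), (2, 1, 42), (3, 2, 10)]   # logical address 10 was rewritten
    fwd, rev = rebuild_maps(log)
    print(fwd[10])   # (3, 2) -- the newest copy wins
    print(rev[0])    # (10, 2, False) -- the superseded copy is marked invalid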

In another embodiment, the apparatus 900 includes a checkpoint module 904 that stores information related to the forward map and the reverse map where the checkpoint is related to a point in time or state of the data storage device. The stored information is sufficient to restore the forward map and the reverse map to a status related to the checkpoint. For example, the stored information may include storing the forward and reverse maps in non-volatile storage, such as on the data storage device, along with some identifier indicating a state or time checkpoint.

For example, a timestamp could be stored with the checkpoint information. The timestamp could then be correlated with a location in the solid-state storage media 110 where data packets were currently being stored at the checkpoint. In another example, state information is stored with the checkpoint information, such as a location in the solid-state storage media 110 where data is currently being stored. One of skill in the art will recognize other checkpoint information that may be stored by the checkpoint module 904 to restore the forward and reverse maps to the checkpoint.

In another embodiment, the apparatus 900 includes a map sync module 906 that updates the forward map and the reverse map from the status related to the checkpoint to a current status by sequentially applying source parameters and physical addresses. The source parameters applied are stored with data that was sequentially stored after the checkpoint. The physical addresses are derived from a location of the data on the solid-state storage media 110.

Beneficially the map sync module 906 restores the forward and reverse maps to a current state from a checkpoint rather than starting from scratch and replaying the entire contents of the solid-state storage media 110. The map sync module 906 uses the checkpoint to go to the data packet stored just after the checkpoint and then replays data packets from that point to a current state where data packets are currently being stored on the solid-state storage media 110. The map sync module 906 typically takes less time to restore the forward and reverse maps than the map rebuild module 902.
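
The following Python sketch illustrates, under hypothetical assumptions about how a checkpoint is stored (copies of the maps plus the append point at checkpoint time), how the maps might be rolled forward by replaying only packets stored after the checkpoint.

    def sync_from_checkpoint(checkpoint, log_packets):
        """Roll the forward and reverse maps forward from a stored checkpoint."""
        forward = dict(checkpoint["forward"])
        reverse = dict(checkpoint["reverse"])
        for physical, length, logical in log_packets:
            if physical < checkpoint["append_point"]:
                continue                                  # already reflected in the checkpoint
            if logical in forward:
                old_physical, _ = forward[logical]
                la, ln, _ = reverse[old_physical]
                reverse[old_physical] = (la, ln, False)   # superseded copy becomes invalid
            forward[logical] = (physical, length)
            reverse[physical] = (logical, length, True)
        return forward, reverse

    checkpoint = {"forward": {10: (0, 2)}, "reverse": {0: (10, 2, True)}, "append_point": 2}
    fwd, rev = sync_from_checkpoint(checkpoint, [(0, 2, 10), (2, 1, 42)])
    print(fwd)   # {10: (0, 2), 42: (2, 1)} -- only the post-checkpoint packet was replayed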

In one embodiment, the forward and reverse maps are stored on the solid-state storage media 110 and another set of forward and reverse maps are created to map the stored forward and reverse maps. For example, data packets may be stored on a first storage channel while the forward and reverse maps for the stored data packets may be stored as data on a second storage channel; the forward and reverse maps for the data on the second storage channel may be stored as data on a third storage channel, and so forth. This recursive process may continue as needed for additional forward and reverse maps. The storage channels may be on a single element of solid-state storage media 110 or on separate elements of solid-state storage media 110.

The apparatus 900 includes an invalidate module 908 that marks an entry for data in the reverse map indicating that data referenced by the entry is invalid in response to an operation resulting in the data being invalidated. The invalidate module 908 may mark an entry invalid as a result of a delete request, a read-modify-write request, and the like. The reverse map includes some type of invalid marker or tag that may be changed by the invalidate module 908 to indicate data associated with an entry in the reverse map is invalid. For example, the reverse map may include a bit that is set by the invalidate module 908 when data is invalid.

In one embodiment, the reverse map includes information for valid data and invalid data stored in the solid-state storage media 110 and the forward map includes information for valid data stored in the solid-state storage media 110. Since the reverse map is useful for storage space recovery operations, information indicating which data in an erase block is invalid is included in the reverse map. By maintaining the information indicating invalid data in the reverse map, the forward map, in one embodiment, need only maintain information related to valid data stored on the solid-state storage media 110, thus improving the efficiency and speed of forward lookup.

The storage space recovery module 806 may then use the invalid marker to determine a quantity of invalid data in an erase region by scanning the reverse map for the erase region to determine a quantity of invalid data in relation to a storage capacity of the erase region. The storage space recovery module 806 can then use the determined quantity of invalid data in the erase region to select an erase region for recovery. By scanning several erase regions, or even all available erase regions, the storage space recovery module 806 can use selection criteria, such as the highest amount of invalid data in an erase region, to select an erase region for recovery. Once an erase region is selected for recovery, in one embodiment the storage space recovery module 806 may then write valid data from the selected erase region to a new location in the solid-state storage media 110. The new location is typically within a page of an erase region where data is currently being stored sequentially. The storage space recovery module 806 may write the valid data using a data pipeline as described in U.S. Patent Application No. 11/952,091 entitled "Apparatus, System, and Method for Managing Data Using a Data Pipeline" for David Flynn et al. and filed December 6, 2007, which is incorporated herein by reference.
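
Purely as a non-limiting illustration of the selection criteria described above, the following Python sketch scores erase regions by their proportion of invalid data; the data layout and names are hypothetical.

    def invalid_fraction(region_entries, region_capacity):
        """Fraction of an erase region's capacity occupied by invalid data."""
        invalid = sum(length for (_, length, valid) in region_entries if not valid)
        return invalid / region_capacity

    def select_region_for_recovery(regions, region_capacity):
        # Pick the erase region with the highest proportion of invalid data.
        return max(regions, key=lambda r: invalid_fraction(regions[r], region_capacity))

    regions = {
        "erase-block-1": [(10, 4, False), (11, 4, True)],    # half invalid
        "erase-block-2": [(42, 4, False), (43, 4, False)],   # entirely invalid
    }
    print(select_region_for_recovery(regions, region_capacity=8))   # "erase-block-2"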

In one embodiment, the storage space recovery module 806 also updates the reverse map to indicate that the valid data written to the new location is invalid in the selected erase region and updates the forward and reverse maps based on the valid data written to the new location. In another embodiment, the storage space recovery module 806 coordinates with the map update module 910 (described below) to update the forward and reverse maps.

In a preferred embodiment, the storage space recovery module 806 operates autonomously with respect to data storage and retrieval associated with storage requests and other commands. Storage space recovery operations that may be incorporated in the storage space recovery module 806 are described in more detail in the Storage Space Recovery Application referenced above.

In one embodiment, the apparatus 900 includes a map update module 910 that updates the forward map and/or the reverse map in response to contents of the solid-state storage media 110 being altered. In a further embodiment, the map update module 910 receives information linking a physical address of stored data to a logical address from the data storage device based on a location where the data storage device stored the data. In the embodiment, the location where a data packet is stored may not be available until the solid-state storage media 110 stores the data packet.

For example, where data from a data segment is compressed to form a data packet, the size of each data packet may be unknown until after compression. Where the solid-state storage media 110 stores data sequentially, once a data packet is compressed and stored, an append point is set to a location after the stored data packet and a next data packet is stored. Once the append point is known, the solid-state storage media 110 may then report back the physical address corresponding to the append point where the next data packet is stored. The map update module 910 uses the reported physical address and associated data length of the stored data packet to update the forward and reverse maps. One of skill in the art will recognize other embodiments of a map update module 910 to update the forward and reverse maps based on physical addresses and associated data lengths of data stored on the solid-state storage media 110.
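
A simplified, non-limiting Python sketch of this flow follows; it assumes a hypothetical sequential log object and uses zlib compression only to illustrate that the packet size, and therefore the location of the next append point, is unknown until after compression and storage.

    import zlib

    class SequentialLog:
        """Sketch of sequential, append-only packet storage."""

        def __init__(self):
            self._media = bytearray()
            self.append_point = 0

        def append_packet(self, payload):
            packet = zlib.compress(payload)           # packet size is unknown until now
            physical_address = self.append_point
            self._media += packet
            self.append_point += len(packet)          # the next packet starts here
            return physical_address, len(packet)      # reported back to the map updater

    def update_maps(forward, reverse, logical_address, physical_address, length):
        forward[logical_address] = (physical_address, length)
        reverse[physical_address] = (logical_address, length, True)

    log, forward, reverse = SequentialLog(), {}, {}
    pa, ln = log.append_packet(b"x" * 4096)           # a highly compressible payload
    update_maps(forward, reverse, logical_address=182, physical_address=pa, length=ln)
    print(forward[182])                               # (0, <compressed packet length>)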

Figure 7 is a schematic block diagram of an example of a forward map 1004 and a reverse map 1022 in accordance with the present invention. Typically, the apparatus 800, 900 receives a storage request, such as a storage request to read an address. For example, the apparatus 800, 900 may receive a logical block storage request 1002 to start reading at read address "182" and to read 3 blocks. Typically the forward map 1004 stores logical block addresses as virtual/logical addresses along with other virtual/logical addresses, so the forward mapping module 802 uses the forward map 1004 to identify a physical address from the virtual/logical address "182" of the storage request 1002. In the example, for simplicity, only numeric logical addresses are shown, but one of skill in the art will recognize that any logical address may be used and represented in the forward map 1004. A forward map 1004, in other embodiments, may include alpha-numerical characters, hexadecimal characters, and the like.

In the example, the forward map 1004 is a simple B-tree. In other embodiments, the forward map 1004 may be a content addressable memory ("CAM"), a binary tree, a hash table, or other data structure known to those of skill in the art. In the depicted embodiment, a B-Tree includes nodes (e.g. the root node 1008) that may include entries of two logical addresses. Each entry, in one embodiment, may include a range of logical addresses. For example, a logical address may be in the form of a logical identifier with a range (e.g. offset and length) or may represent a range using a first and a last address or location.

Where a single logical address is included at a particular node, such as the root node 1008, if a logical address 1006 being searched is lower than the logical address of the node, the search will continue down a directed edge 1010 to the left of the node 1008. If the searched logical address 1006 matches the current node 1008 (i.e. is located within the range identified in the node), the search stops and the pointer, link, physical address, etc. at the current node 1008 is identified. If the searched logical address 1006 is greater than the range of the current node 1008, the search continues down a directed edge 1012 to the right of the current node 1008. Where a node includes two logical addresses and a searched logical address 1006 falls between the listed logical addresses of the node, the search continues down a center directed edge (not shown) to nodes with logical addresses that fall between the two logical addresses of the current node 1008. A search continues down the B-tree until either locating a desired logical address or determining that the searched logical address 1006 does not exist in the B-tree. As described above, in one embodiment, membership in the B-tree denotes membership in the cache 102, and determining that the searched logical address 1006 is not in the B-tree is a cache miss.

In the example depicted in Figure 7, the forward mapping module 802 searches for logical address "182" 1006 starting at the root node 1008. Since the searched logical address 1006 is lower than the logical address range of 205-212 in the root node 1008, the forward mapping module 802 searches down the directed edge 1010 to the left to the next node 1014. The searched logical address "182" 1006 is greater than the logical address range (072-083) stored in the next node 1014, so the forward mapping module 802 searches down a directed edge 1016 to the right of the node 1014 to the next node 1018. The next node 1018 includes a logical address range of 178-192, so the searched logical address "182" 1006 matches this node 1018 because "182" falls within the range 178-192.
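
The traversal rules described above may be sketched, purely for illustration, as the following Python routine over simplified nodes that each hold a single logical address range; the link values "e" and "g" are hypothetical placeholders, while "f" corresponds to the link described below with regard to the node 1018.

    class Node:
        """Simplified tree node holding one logical address range and a link."""

        def __init__(self, start, end, link, left=None, right=None):
            self.start, self.end, self.link = start, end, link
            self.left, self.right = left, right

    def search(node, logical_address):
        while node is not None:
            if logical_address < node.start:
                node = node.left         # continue down the left directed edge
            elif logical_address > node.end:
                node = node.right        # continue down the right directed edge
            else:
                return node.link         # within the node's range: a hit
        return None                      # not in the tree: a cache miss

    # Nodes of Figure 7: root 205-212, left child 072-083, its right child 178-192.
    tree = Node(205, 212, "g",
                left=Node(72, 83, "e", right=Node(178, 192, "f")))
    print(search(tree, 182))   # "f"
    print(search(tree, 200))   # None (cache miss)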

Once the forward mapping module 802 determines a match in the forward map 1004, the forward mapping module 802 returns a physical address, either found within the node 1018 or linked to the node 1018. In the depicted example, the node 1018 identified by the forward mapping module 802 as containing the searched logical address 1006 includes a link "f" that maps to an entry 1020 in the reverse map 1022.

In the depicted embodiment, for each entry 1020 in the reverse map 1022 (depicted as a row in a table), the reverse map 1022 includes an entry ID 1024, a physical address 1026, a data length 1028 associated with the data stored at the physical address 1026 on the solid-state storage media 110 (in this case the data is compressed), a valid tag 1030, a logical address 1032 (optional), a data length 1034 (optional) associated with the logical address 1032, and other miscellaneous data 1036. The reverse map 1022 is organized into erase blocks (erase regions). In this example, the entry 1020 that corresponds to the selected node 1018 is located in erase block n 1038. Erase block n 1038 is preceded by erase block n-1 1040 and followed by erase block n+1 1042 (the contents of erase blocks n-1 and n+1 are not shown). An erase block may be some erase region that includes a predetermined number of pages. An erase region is an area in the solid-state storage media 110 erased together in a storage recovery operation.
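
As a non-limiting illustration, the row layout described above might be modeled as the following Python record; the field names, the chosen physical address, and the example values are hypothetical stand-ins for the reference numerals of Figure 7.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ReverseMapEntry:
        entry_id: str                             # ties the entry to the forward map (1024)
        physical_address: int                     # location on the solid-state media (1026)
        stored_length: int                        # length of the stored data packet (1028)
        valid: bool                               # valid tag (1030)
        logical_address: Optional[int] = None     # optional source parameter (1032)
        logical_length: Optional[int] = None      # optional source parameter (1034)
        misc: dict = field(default_factory=dict)  # file name, object name, etc. (1036)

    # Entry "f": data compressed from 64 blocks down to 1 block before storage.
    entry_f = ReverseMapEntry("f", physical_address=0x4F00, stored_length=1,
                              valid=True, logical_address=182, logical_length=64)
    print(entry_f.valid, entry_f.stored_length, entry_f.logical_length)   # True 1 64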

While the entry ID 1024 is shown as being part of the reverse map 1022, the entry ID 1024 may be an address, a virtual link, or other means to tie an entry in the reverse map 1022 to a node in the forward map 1004. The physical address 1026 is an address in the solid-state storage media 110 where data that corresponds to the searched logical address 1006 resides. The data length 1028 associated with the physical address 1026 identifies a length of the data packet stored at the physical address 1026. (Together the physical address 1026 and data length 1028 may be called destination parameters 1044 and the logical address 1032 and associated data length 1034 may be called source parameters 1046 for convenience.) In the example, the data length 1028 of the destination parameters 1044 is different from the data length 1034 of the source parameters 1046 because, in this embodiment, the data packet stored on the solid-state storage media 110 was compressed prior to storage. For the data associated with the entry 1020, the data was highly compressible and was compressed from 64 blocks to 1 block.

The valid tag 1030 indicates if the data mapped to the entry 1020 is valid or not. In this case, the data associated with the entry 1020 is valid and is depicted in Figure 7 as a "Y" in the row of the entry 1020. Typically the reverse map 1022 tracks both valid and invalid data and the forward map 1004 tracks valid data. In the example, entry "c" 1048 indicates that data associated with the entry 1048 is invalid. Note that the forward map 1004 does not include logical addresses associated with entry "c" 1048. The reverse map 1022 typically maintains entries for invalid data so that valid and invalid data can be quickly distinguished during a storage recovery operation.

The depicted reverse map 1022 includes source parameters 1046 for convenience, but the reverse map 1022 may or may not include the source parameters 1046. For example, if the source parameters 1046 are stored with the data, possibly in a header of the stored data, the reverse map 1022 could identify a logical address indirectly by including a physical address 1026 associated with the data and the source parameters 1046 could be identified from the stored data. One of skill in the art will recognize when storing source parameters 1046 in a reverse map 1022 would be beneficial.

The reverse map 1022 may also include other miscellaneous data 1036, such as a file name, object name, source data, etc. One of skill in the art will recognize other information useful in a reverse map 1022. While physical addresses 1026 are depicted in the reverse map 1022, in other embodiments, physical addresses 1026, or other destination parameters 1044, may be included in other locations, such as in the forward map 1004, an intermediate table or data structure, etc.

Typically, the reverse map 1022 is arranged by erase block or erase region so that traversing a section of the map associated with an erase block (e.g. erase block n 1038) allows the storage space recovery module 806 to identify valid data in the erase block 1038 and to quantify an amount of valid data, or conversely invalid data, in the erase block 1038. Arranging an index into a forward map 1004 that can be quickly searched to identify a physical address 1026 from a logical address 1006 and a reverse map 1022 that can be quickly searched to identify valid data and quantity of valid data in an erase block 1038 is beneficial because the index may be optimized for searches and storage recovery operations. One of skill in the art will recognize other benefits of an index with a forward map 1004 and a reverse map 1022.

Figure 8 depicts one embodiment of a mapping structure 1100, a logical address space 1120 of the cache 102, a combined logical address space 1119 that is accessible to a storage client, a sequential, log-based, append-only writing structure 1140, and a storage device address space 1170 of the storage device 118. The mapping structure 1100, in one embodiment, is maintained by the direct mapping module 606. The mapping structure 1100, in the depicted embodiment, is a B-tree that is substantially similar to the forward map 1004 described above with regard to Figure 7, with several additional entries. Further, instead of links that map to entries in a reverse map 1022, the nodes of the mapping structure 1100 include direct references to physical locations in the cache 102. The mapping structure 1100, in various embodiments, may be used either with or without a reverse map 1022. As described above with regard to the forward map 1004 of Figure 7, in other embodiments, the references in the mapping structure 1100 may include alpha-numerical characters, hexadecimal characters, pointers, links, and the like.

The mapping structure 1100, in the depicted embodiment, includes a plurality of nodes. Each node, in the depicted embodiment, is capable of storing two entries. In other embodiments, each node may be capable of storing a greater number of entries, the number of entries at each level may change as the mapping structure 1100 grows or shrinks through use, or the like.

Each entry, in the depicted embodiment, maps a variable length range of logical addresses of the cache 102 to a physical location in the storage media 110 for the cache 102. Further, while variable length ranges of logical addresses, in the depicted embodiment, are represented by a starting address and an ending address, in other embodiments, a variable length range of addresses may be represented by a starting address and a length, or the like. In one embodiment, the capital letters 'A' through 'M' represent a logical or physical erase block in the physical storage media 110 of the cache 102 that stores the data of the corresponding range of logical addresses. In other embodiments, the capital letters may represent other physical addresses or locations of the cache 102. In the depicted embodiment, the capital letters 'A' through 'M' are also depicted in the writing structure 1140 which represents the physical storage media 110 of the cache 102.

In the depicted embodiment, membership in the mapping structure 1100 denotes membership (or storage) in the cache 102. In another embodiment, an entry may further include an indicator of whether the cache 102 stores data corresponding to a logical block within the range of logical addresses, data of the reverse map 1022 described above, and/or other data. For example, in one embodiment, the mapping structure 1100 may also map logical addresses of the storage device 118 to physical addresses or locations within the storage device 118, and an entry may include an indicator that the cache 102 does not store the data and a physical address or location for the data on the storage device 118. The mapping structure 1100, in the depicted embodiment, is accessed and traversed in a similar manner as that described above with regard to the forward map 1004.

In the depicted embodiment, the root node 1008 includes entries 1102, 1104 with noncontiguous ranges of logical addresses. A "hole" exists at logical address "208" between the two entries 1102, 1104 of the root node. In one embodiment, a "hole" indicates that the cache 102 does not store data corresponding to one or more logical addresses corresponding to the "hole." In one embodiment, a "hole" may exist because the eviction module 712 evicted data corresponding to the "hole" from the cache 102. If the eviction module 712 evicted data corresponding to a "hole," in one embodiment, the storage device 118 still stores data corresponding to the "hole." In another embodiment, the cache 102 and/or the storage device 118 supports block I/O requests (read, write, trim, etc.) with multiple contiguous and/or noncontiguous ranges of addresses (i.e. ranges that include one or more "holes" in them). A "hole," in one embodiment, may be the result of a single block I/O request with two or more noncontiguous ranges of addresses. In a further embodiment, a "hole" may be the result of several different block I/O requests with address ranges bordering the "hole."

In Figure 7, the root node 1008 includes a single entry with a logical address range of "205-212," without the hole at "208." If the entry of the root node 1008 were a fixed size cache line of a traditional cache, the entire range of logical addresses "205-212" would be evicted together. Instead, in the embodiment depicted in Figure 8, the eviction module 712 evicts data of a single logical address "208" and splits the range of logical addresses into two separate entries 1102, 1104. In one embodiment, the direct mapping module 606 may rebalance the mapping structure 1100, adjust the location of a directed edge, root node, or child node, or the like in response to splitting a range of logical addresses. Similarly, in one embodiment, each range of logical addresses may have a dynamic and/or variable length, allowing the cache 102 to store dynamically selected and/or variable lengths of logical block ranges.
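
A minimal, non-limiting Python sketch of such a split follows; it shows a variable-length range entry "205-212" being divided into "205-207" and "209-212" when the single logical address "208" is evicted. The tuple layout and the location label 'G' are hypothetical.

    def evict_address(entries, address):
        """Punch a "hole" at one logical address, splitting a range entry if needed.

        'entries' is a list of (start, end, location) variable-length range entries.
        """
        result = []
        for start, end, location in entries:
            if not (start <= address <= end):
                result.append((start, end, location))
                continue
            if start <= address - 1:
                result.append((start, address - 1, location))
            if address + 1 <= end:
                result.append((address + 1, end, location))
        return result

    entries = [(205, 212, "G")]
    print(evict_address(entries, 208))   # [(205, 207, 'G'), (209, 212, 'G')]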

In the depicted embodiment, similar "holes" or noncontiguous ranges of logical addresses exist between the entries 1106, 1108 of the node 1014, between the entries 1110, 1112 of the left child node of the node 1014, between entries 1114, 1116 of the node 1018, and between entries of the node 1118. In one embodiment, similar "holes" may also exist between entries in parent nodes and child nodes. For example, in the depicted embodiment, a "hole" of logical addresses "060-071" exists between the left entry 1106 of the node 1014 and the right entry 1112 of the left child node of the node 1014. The "hole" at logical address "003," in the depicted embodiment, can also be seen in the logical address space 1120 of the cache 102 at logical address "003" 1130. The hash marks at logical address "003" 1140 represent an empty location, or a location for which the cache 102 does not store data. In the depicted embodiment, storage device address "003" 1180 of the storage device address space 1170 does store data (identified as 'b'), indicating that the eviction module 712 evicted data from logical address "003" 1130 of the cache 102. The "hole" at logical address 1134 in the logical address space 1120, however, has no corresponding data in storage device address 1184, indicating that the "hole" is due to one or more block I/O requests with noncontiguous ranges, a trim or other deallocation command to both the cache 102 and the storage device 118, or the like.

The "hole" at logical address "003" 1130 of the logical address space 1120, however, in one embodiment, is not viewable or detectable to a storage client. In the depicted embodiment, the combined logical address space 1119 represents the data that is available to a storage client, with data that is stored in the cache 102 and data that is stored in the storage device 118 but not in the cache 102. As described above, the read miss module 718 of Figure 4 handles misses and returns requested data to a requesting entity. In the depicted embodiment, if a storage client requests data at logical address "003" 1130, the read miss module 718 will retrieve the data from the storage device 118, as depicted at address "003" 1180 of the storage device address space 1170, and return the requested data to the storage client. The requested data at logical address "003" 1130 may then also be placed back in the cache 102 and thus logical address 1130 would indicate 'b' as present in the cache 102.

For a partial miss, the read miss module 718 may return a combination of data from both the cache 102 and the storage device 118. For this reason, the combined logical address space 1119 includes data 'b' at logical address "003" 1130, and the "hole" in the logical address space 1120 of the cache 102 is transparent. In the depicted embodiment, the combined logical address space 1119 is the size of the logical address space 1120 of the cache 102 and is larger than the storage device address space 1170. In another embodiment, the direct cache module 116 may size the combined logical address space 1119 as the size of the storage device address space 1170, or as another size.

The logical address space 1120 of the cache 102, in the depicted embodiment, is larger than the physical storage capacity and corresponding storage device address space 1170 of the storage device 118. In the depicted embodiment, the cache 102 has a 64-bit logical address space 1120 beginning at logical address "0" 1122 and extending to logical address "2^64-1" 1126. The storage device address space 1170 begins at storage device address "0" 1172 and extends to storage device address "N" 1174. Storage device address "N" 1174, in the depicted embodiment, corresponds to logical address "N" 1124 in the logical address space 1120 of the cache 102. Because the storage device address space 1170 corresponds to only a subset of the logical address space 1120 of the cache 102, the rest of the logical address space 1120 may be shared with an additional cache 102, may be mapped to a different storage device 118, may store data in the cache 102 (such as a non-volatile memory cache) that is not stored in the storage device 118, or the like.

For example, in the depicted embodiment, the first range of logical addresses "000-002" 1128 stores data corresponding to the first range of storage device addresses "000-002" 1178. Data corresponding to logical address "003" 1130, as described above, was evicted from the cache 102 forming a "hole" and a potential cache miss. The second range of logical addresses "004-059" 1132 corresponds to the second range of storage device addresses "004-059" 1182. However, the final range of logical addresses 1136 extending from logical address "N" 1124 extends beyond storage device address "N" 1174. No storage device address in the storage device address space 1170 corresponds to the final range of logical addresses 1136. The cache 102 may store the data corresponding to the final range of logical addresses 1136 until the storage device 118 is replaced with larger storage or is expanded logically, until an additional storage device 118 is added, or may simply use the non-volatile storage capability of the cache 102 to indefinitely provide storage capacity directly to a storage client 504 independent of a storage device 118, or the like. In a further embodiment, the direct cache module 116 alerts a storage client 504, an operating system, a user application 502, or the like in response to detecting a write request with a range of addresses, such as the final range of logical addresses 1136, that extends beyond the storage device address space 1170. The user may then perform some maintenance or other remedial operation to address the situation. Depending on the nature of the data, no further action may be taken. For example, the data may represent temporary data which, if lost, would cause no ill effects.

The sequential, log-based, append-only writing structure 1140, in the depicted embodiment, is a logical representation of the physical storage media 110 of the cache 102. In a further embodiment, the storage device 118 may use a substantially similar sequential, log-based, append-only writing structure 1140. In certain embodiments, the cache 102 stores data sequentially, appending data to the writing structure 1140 at an append point 1144. The cache 102, in a further embodiment, uses a storage space recovery process, such as the garbage collection module 710 and/or the storage space recovery module 806, that re-uses non-volatile storage media storing deallocated/unused logical blocks. Non-volatile storage media storing deallocated/unused logical blocks, in the depicted embodiment, is added to an available storage pool 1146 for the cache 102. By evicting and clearing certain data from the cache 102, as described above, and adding the physical storage capacity corresponding to the evicted and/or cleared data back to the available storage pool 1146, in one embodiment, the writing structure 1140 is cyclic, ring-like, and has a theoretically infinite capacity.

In the depicted embodiment, the append point 1144 progresses around the log-based, append-only writing structure 1140 in a circular pattern 1142. In one embodiment, the circular pattern 1142 wear balances the solid-state storage media 110, increasing a usable life of the solid-state storage media 110. In the depicted embodiment, the eviction module 712 and/or the garbage collection module 710 have marked several blocks 1148, 1150, 1152, 1154 as invalid, represented by an "X" marking on the blocks 1148, 1150, 1152, 1154. The garbage collection module 710, in one embodiment, will recover the physical storage capacity of the invalid blocks 1148, 1150, 1152, 1154 and add the recovered capacity to the available storage pool 1146. In the depicted embodiment, modified versions of the blocks 1148, 1150, 1152, 1154 have been appended to the writing structure 1140 as new blocks 1156, 1158, 1160, 1162 in a read, modify, write operation or the like, allowing the original blocks 1148, 1150, 1152, 1154 to be recovered.
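
Purely as a simplified, non-limiting illustration, the following Python sketch models a cyclic writing structure in which the append point advances in a circular pattern and recovered blocks return to the available storage pool; the class and method names are hypothetical.

    class CyclicLog:
        """Sketch of a cyclic, log-based writing structure over a pool of blocks."""

        def __init__(self, num_blocks):
            self.blocks = [None] * num_blocks     # None means "in the available pool"
            self.append_point = 0

        def append(self, data):
            # Advance the append point in a circular pattern, skipping blocks
            # that still hold valid data.
            for _ in range(len(self.blocks)):
                i = self.append_point
                self.append_point = (self.append_point + 1) % len(self.blocks)
                if self.blocks[i] is None:
                    self.blocks[i] = data
                    return i
            raise RuntimeError("no blocks in the available storage pool")

        def recover(self, index):
            self.blocks[index] = None             # return the block to the available pool

    log = CyclicLog(4)
    old = log.append("block v1")
    log.append("block v1 (modified)")             # a read-modify-write appends a new copy
    log.recover(old)                              # the original block can now be reused
    print(log.blocks, log.append_point)           # [None, 'block v1 (modified)', None, None] 2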

Figure 9 depicts one embodiment of a method 1200 for caching data. The method 1200 begins and the storage request module 602 detects 1202 an I/O request for a storage device 118 cached by solid-state storage media 110 of a cache 102. The direct mapping module 606 references 1204 a single mapping structure to determine whether the cache 102 comprises data of the detected 1202 I/O request. The single mapping structure maps each logical block address of the storage device 118 directly to a logical block address of the cache 102 and also comprises a fully associative relationship between logical block addresses of the storage device 118 and physical storage addresses of the solid-state storage media 110. The cache fulfillment module 604 satisfies 1206 the detected 1202 I/O request using the cache 102 in response to the direct mapping module 606 determining 1204 that the cache 102 comprises at least one data block of the detected 1202 I/O request. The storage request module 602 continues to detect 1202 I/O requests and the method 1200 repeats.

Figure 10 depicts another embodiment of a method 1300 for caching data. The method 1300 begins and the storage request module 602 determines 1302 whether there are any I/O requests for a storage device 118 cached by solid-state storage media 110 of a cache 102. If the storage request module 602 does not detect 1302 an I/O request, the storage request module 602 continues to monitor 1302 I/O requests. If the storage request module 602 detects 1302 an I/O request, the storage request module 602 determines 1304 a storage device logical block address for the I/O request.

The direct mapping module 606 references 1306 a single mapping structure using the determined 1304 storage device logical block address to determine 1308 whether the cache 102 comprises/stores data of the I/O request. If the direct mapping module 606 determines 1308 that the cache 102 does not comprise data of the I/O request, the cache fulfillment module 604 stores 1310 data of the I/O request to the cache 102 in a manner that associates the data with the determined 1304 logical block address and a sequence indicator for the I/O request, to satisfy the I/O request.

If the direct mapping module 606 determines 1308 that the cache 102 comprises at least one data block of the I/O request, the cache fulfillment module 604 satisfies 1312 the I/O request, at least partially, using the cache 102. For a write I/O request, the cache fulfillment module 604 may satisfy 1312 the I/O request by storing data of the I/O request to the cache 102 sequentially on the solid-state storage media 110 to preserve an ordered sequence of storage operations. For a read I/O request, the cache fulfillment module 604 may satisfy 1312 the I/O request by reading data of the I/O request from the cache 102 using a physical storage address of the solid-state storage media 110 associated with the determined 1304 logical block address of the I/O request.

The direct mapping module 606 determines 1314 whether to update the mapping structure to maintain an entry in the mapping structure associating the determined 1304 logical block address and physical storage locations or addresses on the solid-state storage media 110. For example, the direct mapping module 606 may determine 1314 to update the mapping structure if storing 1310 data of the I/O request to the cache 102 or otherwise satisfying 1312 the I/O request changed the state of data on the cache 102, such as for a write I/O request, a cache miss, a TRIM request, an erase request, or the like.

If the direct mapping module 606 determines 1314 to update the mapping structure, the direct mapping module 606 updates 1316 the mapping structure to map the determined 1304 storage device logical block address for the I/O request directly to a logical block address of the cache 102 and to a physical storage address or location of data associated with the I/O request on the solid-state storage media 110 of the cache 102. If the direct mapping module 606 determines 1314 not to update the mapping structure, for a read I/O request resulting in a cache hit or the like, the method 1300 continues without the direct mapping module 606 updating 1316 the mapping structure. The direct mapping module 606 determines 1318 whether to reconstruct the mapping structure, in response to a reconstruction event such as a power failure, a corruption of the mapping structure, an improper shutdown, or the like. If the direct mapping module 606 determines 1318 to reconstruct the mapping structure, the direct mapping module 606 reconstructs 1320 the mapping structure using the logical block addresses and sequence indicators associated with data on the solid-state storage media 110 of the cache 102, scanning a sequential, log-based, cyclic writing structure or the like. If the direct mapping module 606 determines 1318 not to reconstruct the mapping structure, the method 1300 skips the reconstruction step 1320 and the storage request module 602 continues to monitor 1302 I/O requests for the storage device 118.

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.