
Title:
CACHE MANAGEMENT OF LOGICAL-PHYSICAL TRANSLATION METADATA
Document Type and Number:
WIPO Patent Application WO/2020/174428
Kind Code:
A4
Abstract:
The present disclosure describes aspects of cache management of logical-physical translation metadata. In some aspects, a cache (260) for logical-physical translation entries of a storage media system (114) is divided into a plurality of segments (264). An indexer (364) is configured to efficiently balance a distribution of the logical-physical translation entries (252) between the segments (264). A search engine (362) associated with the cache is configured to search respective cache segments (264), and a cache manager (160) may leverage masked search functionality of the search engine (362) to reduce the overhead of cache flush operations.

Inventors:
ZENG YU (CN)
GAO SHENGHAO (CN)
Application Number:
PCT/IB2020/051659
Publication Date:
December 10, 2020
Filing Date:
February 26, 2020
Assignee:
MARVELL ASIA PTE LTD (SG)
International Classes:
G06F12/02
Claims:

AMENDED CLAIMS

received by the International Bureau on 27.10.2020

1. A method for managing logical-physical translation metadata, comprising:

caching mapping entries configured to associate logical addresses with physical addresses of a non-volatile memory system within a cache comprising a plurality of segments, the caching of one of the mapping entries comprising:

assigning the mapping entry to a respective segment of the plurality of segments of the cache;

deriving a hash value from the logical address of the mapping entry; and

indexing the respective segment to which the mapping entry is assigned by the derived hash value; and

flushing mapping entries corresponding to a group of logical addresses from the cache to persistent storage, the flushing comprising:

searching segments of the cache with a masked search pattern configured to match mapping entries having logical addresses within the group, and

storing mapping entries determined to match the masked search pattern to the persistent storage.
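The method of claim 1 can be illustrated with a minimal Python sketch. This is not the patented implementation; the segment count, group size, and all function names here are hypothetical, and a power-of-two-aligned flush group is assumed so that a single masked compare can identify every entry in the group.

```python
# Hypothetical sketch of claim 1: mapping entries are assigned to cache
# segments by hashing their logical addresses, and a flush matches cached
# entries against a masked search pattern.

NUM_SEGMENTS = 8   # assumed segment count
GROUP_BITS = 6     # assumed: a flush group spans 2**6 logical addresses

def segment_for(logical_addr: int) -> int:
    """Derive a hash value from the logical address and pick a segment."""
    return hash(logical_addr) % NUM_SEGMENTS

def flush_group(cache_segments, group_base, persist):
    """Flush all cached entries whose logical address falls within the
    group [group_base, group_base + 2**GROUP_BITS)."""
    mask = ~((1 << GROUP_BITS) - 1)   # low-order bits are ignored
    pattern = group_base & mask
    for segment in cache_segments:
        for logical_addr, physical_addr in segment.items():
            if (logical_addr & mask) == pattern:   # masked compare
                persist(logical_addr, physical_addr)
```

The masked compare lets the flush identify every entry of the group in one pass over the segments, rather than probing the cache once per logical address in the group.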

2. The method of claim 1, wherein the mapping entries are assigned to respective segments of the cache in accordance with a logical address distribution scheme configured to balance distribution of entries between the respective segments of the cache.

3. The method of claim 2, wherein:

the logical address distribution scheme is a first logical address distribution scheme; and

the group of logical addresses are distributed in accordance with a second logical address distribution scheme different from the first logical address distribution scheme.

4. The method of any of claims 1 to 3, wherein searching a segment of the cache with the masked search pattern comprises:

populating a pattern buffer of a search engine with a logical address of the group; and

configuring the search engine to ignore logical address comparisons corresponding to a designated region of the pattern buffer.

5. The method of any of claims 1 to 4, wherein the group of logical addresses comprises a contiguous range of logical addresses, and wherein searching the segments of the cache with the masked search pattern comprises:

setting a target logical address of a search engine to a logical address within the contiguous range; and

configuring the search engine to mask low-order bits of the target logical address.
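Claim 5's low-order-bit masking can be shown in a few lines of Python. This is an illustrative sketch only: the range width is an assumption, and the patent's search engine applies the equivalent comparison in hardware.

```python
# Hypothetical illustration of claim 5: a contiguous, power-of-two-aligned
# range of logical addresses is matched by masking the low-order bits of a
# single target address.

RANGE_BITS = 4   # assumed: the range covers 2**4 = 16 logical addresses

def matches_range(target: int, candidate: int, range_bits: int = RANGE_BITS) -> bool:
    """True when candidate falls in the aligned range containing target."""
    mask = ~((1 << range_bits) - 1)   # low-order bits masked off
    return (candidate & mask) == (target & mask)
```

A single masked equality test thus stands in for 2**range_bits separate address comparisons.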

6. The method of any of claims 1 to 5, wherein searching a segment of the cache with the masked search pattern comprises comparing the masked search pattern to logical addresses of each of a plurality of mapping entries cached within the segment at least partially in parallel.

7. The method of any of claims 1 to 6, further comprising admitting mapping entries into the cache, wherein admitting a mapping entry comprises:

retrieving the mapping entry from persistent storage;

assigning the mapping entry to a segment of the cache based on a logical address of the mapping entry; and

caching the mapping entry within one of the assigned segment and an overflow segment of the cache.

8. The method of claim 7, further comprising retrieving mapping entries from the cache, wherein retrieving a mapping entry corresponding to a specified logical address from the cache comprises:

determining one or more segments of the cache assigned to the specified logical address based on a digest of the specified logical address; and

searching one or more of the determined segments of the cache and the overflow segment of the cache for a mapping entry matching the specified logical address.
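The admit/retrieve pattern of claims 7 and 8 can be sketched as follows. The structure is hypothetical: per-segment capacity, segment count, and the use of plain dictionaries are assumptions made only for illustration.

```python
# Sketch of claims 7-8: an entry is cached in its hash-assigned segment
# when room exists, otherwise in a shared overflow segment; retrieval
# searches the determined segment and then the overflow segment.

NUM_SEGMENTS = 4
SEGMENT_CAPACITY = 2   # assumed per-segment capacity

segments = [dict() for _ in range(NUM_SEGMENTS)]
overflow = {}

def admit(logical_addr: int, physical_addr: int) -> None:
    seg = segments[logical_addr % NUM_SEGMENTS]   # digest of the address
    if len(seg) < SEGMENT_CAPACITY:
        seg[logical_addr] = physical_addr
    else:
        overflow[logical_addr] = physical_addr    # segment full: overflow

def retrieve(logical_addr: int):
    seg = segments[logical_addr % NUM_SEGMENTS]
    # search the determined segment, then the overflow segment
    return seg.get(logical_addr, overflow.get(logical_addr))
```

The overflow segment keeps admissions from failing when hashing happens to crowd one segment, at the cost of one extra segment to search on retrieval.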


9. An apparatus, comprising:

an indexer configured to:

derive hash values for logical addresses of translation entries pertaining to a non-volatile memory device;

assign the translation entries pertaining to the non-volatile memory device to respective segments of a cache comprising a plurality of segments based on the hash values of logical addresses of the translation entries;

a search engine configured to search respective segments of the cache; and

a cache manager, wherein, in response to a request to retrieve a translation entry of a logical address from the cache, the cache manager is configured to:

assign the logical address to a segment of the cache by use of the indexer; and

compare the logical address to translation entries cached within the assigned segment of the cache by use of the search engine.


10. The apparatus of claim 9, wherein:

in response to the request to retrieve the translation entry of the logical address from the cache, the cache manager is further configured to:

compare the logical address to translation entries cached within an overflow segment of the cache; and

in response to a cache miss for the translation entry of the logical address, the cache manager is further configured to:

retrieve the translation entry for the logical address from persistent storage; and

cache the translation entry within one of the assigned segments and the overflow segment of the cache.

11. The apparatus of claim 9 or claim 10, wherein the search engine is configured to compare the logical address to translation entries cached within the assigned segment and the overflow segment of the cache at least partially in parallel.

12. The apparatus of any of claims 9 to 11, wherein the search engine comprises:

a pattern buffer configured to hold a target logical address, wherein a mask register of the pattern buffer is configured to selectively enable respective regions of the pattern buffer; and

a match component configured to determine whether an entry cached within a segment of the cache matches the pattern buffer based on comparisons between regions of the pattern buffer enabled by the mask register and corresponding regions of the logical address of the translation entry.
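A software model of the pattern buffer and mask register of claim 12 might look like the following. The region width and count are assumptions; the claimed apparatus performs this per-region comparison in hardware, with one match component per cached entry (claim 13) rather than a loop.

```python
# Hypothetical model of claim 12: the logical address is split into
# fixed-width regions, and a mask register enables or disables the
# comparison for each region of the pattern buffer independently.

REGION_BITS = 8    # assumed region width
NUM_REGIONS = 4    # assumed: 32-bit logical addresses, four regions

def region(value: int, i: int) -> int:
    """Extract region i of a logical address or pattern."""
    return (value >> (i * REGION_BITS)) & ((1 << REGION_BITS) - 1)

def match(pattern: int, mask_register: int, candidate: int) -> bool:
    """Compare only the regions enabled by the mask register.
    Bit i of mask_register enables region i of the pattern buffer;
    disabled regions are ignored entirely."""
    return all(
        region(pattern, i) == region(candidate, i)
        for i in range(NUM_REGIONS)
        if mask_register & (1 << i)
    )
```

With the lowest region disabled (mask 0b1110), a single pattern matches every address that shares its upper three regions, which is how the flush of claim 14 sweeps a whole mapping-page extent with one search.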

13. The apparatus of claim 12, wherein the search engine comprises a plurality of match components, each match component configured to determine whether a respective entry cached within the segment of the cache matches the pattern buffer.

14. The apparatus of claim 12, wherein, in response to a request to flush translation entries to a mapping page comprising an extent of logical addresses, the cache manager is further configured to:

set a logical address of the extent as the target logical address of the pattern buffer;

configure the mask register of the pattern buffer to disable a specified region of the pattern buffer, the specified region corresponding to a portion of the logical addresses within the extent determined to vary between the logical addresses of the extent; and

cause the search engine to identify translation entries within respective segments of the cache that match the masked target logical address of the pattern buffer.

15. The apparatus of claim 14, wherein the cache manager is further configured to:

update the mapping page with the identified translation entries; and

write the updated mapping page to persistent storage.

16. A System-on-Chip (SoC), comprising:

a host interface to communicate with a host system;

a cache comprising a plurality of segments, each segment configured to store entries of a logical-physical translation layer pertaining to a non-volatile memory (NVM) medium;

an indexer configured to:

derive hash values for logical addresses of the entries of the logical-physical translation layer pertaining to the NVM medium;

assign the entries pertaining to the NVM medium to respective segments of the plurality of segments based on the hash values of logical addresses of the entries;

a search engine to identify entries cached within the respective segments of the cache that match criteria comprising a search pattern and mask, the mask configured to selectively disable specified regions of the search pattern;

a hardware-based processor; and

a memory storing processor-executable instructions that, responsive to execution by the hardware-based processor, implement a cache manager configured to:

select an extent of logical addresses to update on persistent storage in a flush operation,

cause the search engine to search respective segments of the cache for entries matching first criteria in response to selecting the extent, the search pattern of the first criteria comprising a logical address within the extent of logical addresses and the mask of the first criteria configured to disable at least one region of the search pattern, and

write entries determined to match the first criteria to the persistent storage.

17. The SoC of claim 16, wherein:

the indexer is further configured to derive the hash values for the logical addresses of the entries based on a logical address hashing scheme configured to balance distribution of the entries between the plurality of segments; and

to admit an entry pertaining to a designated logical address into the cache, the cache manager is further configured to cache the entry within one of the determined segment of the cache and a secondary segment of the cache.

18. The SoC of claim 16 or claim 17, wherein to retrieve the entry pertaining to the designated logical address from the cache, the cache manager is further configured to:

determine the segment of the cache assigned to the designated logical address, and

cause the search engine to search one or more of the determined segment of the cache and the secondary segment of the cache for an entry matching second criteria, the search pattern of the second criteria comprising the designated logical address and the mask of the second criteria configured such that none of the regions of the search pattern are disabled.

19. The SoC of claim 18, wherein the cache manager is configured to cause the search engine to search the determined segment of the cache and the secondary segment of the cache at least partially in parallel.