Title:
SYSTEM AND METHOD FOR MEMORY COMPRESSION FOR DEEP LEARNING NETWORKS
Document Type and Number:
WIPO Patent Application WO/2021/226720
Kind Code:
A1
Abstract:
A system and method for memory compression for deep learning networks. The method includes: compacting an input data stream by identifying a bit width necessary to accommodate the value from the input data stream with the highest magnitude; storing a least significant bits of the input data stream in a first memory store, the number of bits equal to the bit width, wherein if the value requires more bits than those currently left unused in the first memory store, the remaining bits are written into a second memory store; and outputting the value of the first memory store, as a consecutive part of a compressed data stream, with an associated width of the data in the first memory store when the first memory store becomes full and copying the value of the second memory store to the first memory store; and decompressing the compressed data stream.

Inventors:
EDO VIVANCOS ISAK (CA)
MOSHOVOS ANDREAS (CA)
SHARIFYMOGHADDAM SAYEH (CA)
DELMAS LASCORZ ALBERTO (CA)
Application Number:
PCT/CA2021/050664
Publication Date:
November 18, 2021
Filing Date:
May 14, 2021
Assignee:
GOVERNING COUNCIL UNIV TORONTO (CA)
International Classes:
G06N3/063; G06F12/00; G06N3/04; H03M7/30
Foreign References:
US20200050918A12020-02-13
US20200026998A12020-01-23
Other References:
LASCORZ, ALBERTO DELMAS; SHARIFY, SAYEH; EDO, ISAK ET AL.: "ShapeShifter: Enabling Fine-Grain Data Width Adaptation in Deep Learning", PROCEEDINGS OF THE 52ND ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO '52, ACM PRESS, New York, New York, USA, 12 October 2019 - 16 October 2019, pages 28-41, XP058476943, ISBN: 978-1-4503-6938-1, DOI: 10.1145/3352460.3358295
VIVANCOS ISAK EDO; SHARIFY SAYEH; NIKOLIC MILOS; BANNON CIARAN; MAHMOUD MOSTAFA; LASCORZ ALBERTO DELMAS; MOSHOVOS ANDREAS: "Late Breaking Results: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick", 2020 57TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), IEEE, 20 July 2020 (2020-07-20), pages 1 - 2, XP033838245, DOI: 10.1109/DAC18072.2020.9218728
EDO, ISAK; SHARIFY, SAYEH; LY-MA, DANIEL; ABDELHADI, AMEER; BANNON, CIARAN; NIKOLIC, MILOS; MAHMOUD, MOSTAFA; DELMAS LASCORZ, ALBERTO ET AL.: "Boveda: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick", PROCEEDINGS OF THE 4TH MLSYS CONFERENCE, 5 April 2021, pages 1-20, XP055872134
Attorney, Agent or Firm:
BHOLE IP LAW (CA)
Claims:
CLAIMS

We claim:

1. A method for memory compression for a deep learning network, comprising: defining, for a first memory of a deep learning network, a plurality of rows each having a specified number of columns, each column having a column width; receiving an input data stream to be processed by one or more layers of the deep learning network, the input data stream having a plurality of values of a fixed bit width; dividing the input data stream into subsets, the number of values in each subset being equal to the number of columns; compressing the data stream by sequentially compacting each subset, comprising: identifying, for values within the subset, a compressed bit width necessary to accommodate the value with the highest magnitude; storing the bit width in a bit width register associated with the row; storing, in the respective column of the memory beginning from a first unoccupied bit, a least significant bits of each value within the subset, the number of bits equal to the bit width, wherein if storing the number of bits requires more bits than those currently left unused in the respective column of the respective row, the remaining bits are written into the respective column of a subsequent row; and wherein the compressed data stream can be decompressed to reproduce the input data stream by: identifying a location of a first unread bit of each column of the compressed data stream; sequentially outputting the reproduced input data by: obtaining the bit width of each subset from the respective bit width register; retrieving from each column of the first memory, beginning at the first unread bit of the column, the number of bits corresponding to the bit width and outputting the retrieved bits to the least significant bits of an output; updating the location of the first unread bit of each column to correspond to the bit location subsequent to the retrieved bits; zero or sign extending remaining most significant bits of the output to obtain the reproduced input data value.

2. The method of claim 1, wherein the location of a block of compressed values can be located by one or more pointers.

3. The method of claim 2, wherein the block is a filter map data block or an input or output activations data block.

4. The method of claim 2, wherein the location is for the first compressed value of the block.

5. The method of claim 2, wherein the one or more pointers comprise a first set of pointers to data for input or output activations maps and a second set of pointers to data for filter maps.

6. The method of claim 2, wherein receiving an input data stream comprises sequentially receiving portions of the block beginning at the location of the one or more pointers, compressing the portion of the block, and updating an offset pointer for recalling the next portion to be received.

7. The method of claim 2, wherein receiving an input data stream comprises sequentially receiving portions of the block, wherein a location for each portion is identified by one of the pointers.

8. The method of claim 2, wherein a portion of the compressed data values are forced to be stored starting at the least significant bit of a column by padding unoccupied most significant bits of a preceding data value.

9. The method of claim 1, wherein the bit width register for some rows stores a binary representation of the length of the bit width.

10. The method of claim 9, wherein the bit width register for other rows stores a single bit designating whether the bit width of the corresponding row is the same or different than the previous row.

11. The method of claim 1, wherein the method is used to store floating point values, the floating point values comprising a sign portion, an exponent portion and a mantissa portion, the input data stream consisting of the exponent portions of the floating point values, and wherein compressing further comprises, for each floating point value, storing the sign portion and mantissa portion adjacent to the compressed exponent portion.

12. The method of claim 1, wherein during decompression a pointer is established for the location of a particular one of the blocks that is known to be needed at a future time.

13. The method of claim 1, further comprising tracking a next unoccupied location in each column of the first memory while compressing and storing the values.

14. The method of claim 1, further comprising initializing a first storage location of the first memory as being unoccupied prior to compressing the data stream.

15. The method of claim 1, wherein the plurality of values are of a fixed bit width less than or equal to the column width.

16. The method of claim 1, wherein the reproduced data stream is output directly to an arithmetic/logic unit.

17. The method of claim 1, wherein the reproduced data stream is output to a second memory having a plurality of rows each having a plurality of columns corresponding to the first memory.

18. The method of claim 1, wherein compressing further comprises prior to identifying the compressed bit width, evaluating a function on the values of the input data stream to reduce the compressed bit width and reversing the function for decompression.

19. A method for memory decompression for a deep learning network, comprising: obtaining a compressed data stream representing an input data stream, the compressed data stream prepared by: defining, for a first memory of a deep learning network, a plurality of rows each having a specified number of columns, each column having a column width; receiving the input data stream to be processed by one or more layers of the deep learning network, the input data stream having a plurality of values of a fixed bit width; dividing the input data stream into subsets, the number of values in each subset being equal to the number of columns; compressing the data stream by sequentially compacting each subset, comprising: identifying, for values within the subset, a compressed bit width necessary to accommodate the value with the highest magnitude; storing the bit width in a bit width register associated with the row; storing, in the respective column of the memory beginning from a first unoccupied bit, a least significant bits of each value within the subset, the number of bits equal to the bit width, wherein if storing the number of bits requires more bits than those currently left unused in the respective column of the respective row, the remaining bits are written into the respective column of a subsequent row; and decompressing the compressed data stream to reproduce the input data stream by: identifying a first unread bit of each column of the compressed data stream; sequentially outputting the reproduced input data by: obtaining the bit width of each subset from the respective bit width register; retrieving from each column of the first memory, beginning at the first unread bit of the column, the number of bits corresponding to the bit width and outputting the retrieved bits to the least significant bits of an output; updating the first unread bit of each column to correspond to the bit location subsequent to the retrieved bits; zero or sign extending remaining most significant bits of the output to obtain the reproduced input data value.

20. A system for memory compression for a deep learning network, comprising: a first memory having a plurality of rows each having a specified number of columns, each column having a column width; an input module for: receiving an input data stream to be processed by one or more layers of the deep learning network, the input data stream having a plurality of values of a fixed bit width; and dividing the input data stream into subsets, the number of values in each subset being equal to the number of columns; a width detector module having a plurality of bit width registers each associated with a row, the width detector module identifying, for values within the subset, a compressed bit width necessary to accommodate the value with the highest magnitude and storing the bit width in the bit width register associated with the row; a compacting module for storing, in the respective column of the memory beginning from a first unoccupied bit, a least significant bits of each value within the subset, the number of bits equal to the bit width, wherein if storing the number of bits requires more bits than those currently left unused in the respective column of the respective row, the remaining bits are written into the respective column of a subsequent row; and a decompression module for decompressing the compressed data stream to reproduce the input data stream by: identifying a first unread bit of each column of the compressed data stream; sequentially outputting the reproduced input data by: obtaining the bit width of each subset from the respective bit width register; retrieving from each column of the first memory, beginning at the first unread bit of the column, the number of bits corresponding to the bit width and outputting the retrieved bits to the least significant bits of an output; updating the first unread bit of each column to correspond to the bit location subsequent to the retrieved bits; zero or sign extending remaining most significant bits of the output to obtain the reproduced input data value.

21. The system of claim 20, further comprising a pointer module having one or more pointers for tracking the location of a block of compressed values.

22. The system of claim 21, wherein the block is a filter map data block or an input or output activations data block.

23. The system of claim 21, wherein the location is for the first compressed value of the block.

24. The system of claim 21, wherein the one or more pointers comprise a first set of pointers to data for input or output activations maps and a second set of pointers to data for filter maps.

25. The system of claim 21, further comprising an offset pointer, wherein receiving an input data stream comprises sequentially receiving portions of the block beginning at the location of the one or more pointers, compressing the portion of the block, and updating the offset pointer for recalling the next portion to be received.

26. The system of claim 21, wherein receiving an input data stream comprises sequentially receiving portions of the block, wherein a location for each portion is identified by one of the pointers.

27. The system of claim 20, wherein a portion of the compressed data values are forced to be stored starting at the least significant bit of a column by padding unoccupied most significant bits of a preceding data value.

28. The system of claim 20, wherein the bit width register for some rows stores a binary representation of the length of the bit width.

29. The system of claim 20, wherein the bit width register for other rows stores a single bit designating whether the bit width of the corresponding row is the same or different than the previous row.

30. The system of claim 20, wherein the system is for storing floating point values, the floating point values comprising a sign portion, an exponent portion and a mantissa portion, the input data stream consisting of the exponent portions of the floating point values, and wherein compressing further comprises, for each floating point value, storing the sign portion and mantissa portion adjacent to the compressed exponent portion.

31. The system of claim 20, wherein during decompression a pointer is established for the location of a particular one of the blocks that is known to be needed at a future time.

32. The system of claim 20, wherein the compacting module is configured to track a next unoccupied location in each column of the first memory while compressing and storing the values.

33. The system of claim 20, wherein the compacting module is configured to initialize a first storage location of the first memory as being unoccupied prior to compressing the data stream.

34. The system of claim 20, wherein the plurality of values are of a fixed bit width less than or equal to the column width.

35. The system of claim 20, wherein the reproduced data stream is output directly to an arithmetic/logic unit.

36. The system of claim 20, wherein the reproduced data stream is output to a second memory having a plurality of rows each having a plurality of columns corresponding to the first memory.

37. The system of claim 20, wherein compressing further comprises prior to identifying the compressed bit width, evaluating a function on the values of the input data stream to reduce the compressed bit width and reversing the function for decompression.

Description:
SYSTEM AND METHOD FOR MEMORY COMPRESSION FOR DEEP LEARNING

NETWORKS

TECHNICAL FIELD

[0001] The following relates generally to deep learning networks and, more specifically, to a system and method for memory compression for deep learning networks.

BACKGROUND

[0002] Compression in memory hierarchy has received considerable attention especially in the context of general-purpose systems. However, there are different sets of technical challenges that exist for compression approaches for deep learning workloads. For example, general-purpose compression approaches generally need to support random, fine-grain accesses. Additionally, programs in general-purpose systems tend to exhibit value patterns and a variety of data types that are generally not present in neural networks.

SUMMARY

[0003] In one aspect, a method for memory compression for a deep learning network is provided, the method comprising: defining, for a first memory of a deep learning network, a plurality of rows each having a specified number of columns, each column having a column width; receiving an input data stream to be processed by one or more layers of the deep learning network, the input data stream having a plurality of values of a fixed bit width; dividing the input data stream into subsets, the number of values in each subset being equal to the number of columns; compressing the data stream by sequentially compacting each subset, comprising: identifying, for values within the subset, a compressed bit width necessary to accommodate the value with the highest magnitude; storing the bit width in a bit width register associated with the row; storing, in the respective column of the memory beginning from a first unoccupied bit, a least significant bits of each value within the subset, the number of bits equal to the bit width, wherein if storing the number of bits requires more bits than those currently left unused in the respective column of the respective row, the remaining bits are written into the respective column of a subsequent row; and wherein the compressed data stream can be decompressed to reproduce the input data stream by: identifying a location of a first unread bit of each column of the compressed data stream; sequentially outputting the reproduced input data by: obtaining the bit width of each subset from the respective bit width register; retrieving from each column of the first memory, beginning at the first unread bit of the column, the number of bits corresponding to the bit width and outputting the retrieved bits to the least significant bits of an output; updating the location of the first unread bit of each column to correspond to the bit location subsequent to the retrieved bits; zero or sign extending remaining most significant bits of the output to obtain the reproduced input data value.

[0004] In a particular case of the method, the location of a block of compressed values can be located by one or more pointers.

[0005] In a particular case of the method, the block is a filter map data block or an input or output activations data block.

[0006] In a particular case of the method, the location is for the first compressed value of the block.

[0007] In a particular case of the method, the one or more pointers comprise a first set of pointers to data for input or output activations maps and a second set of pointers to data for filter maps.

[0008] In a particular case of the method, receiving an input data stream comprises sequentially receiving portions of the block beginning at the location of the one or more pointers, compressing the portion of the block, and updating an offset pointer for recalling the next portion to be received.

[0009] In a particular case of the method, receiving an input data stream comprises sequentially receiving portions of the block, wherein a location for each portion is identified by one of the pointers.

[0010] In a particular case of the method, a portion of the compressed data values are forced to be stored starting at the least significant bit of a column by padding unoccupied most significant bits of a preceding data value.

[0011] In a particular case of the method, the bit width register for some rows stores a binary representation of the length of the bit width.

[0012] In a particular case of the method, the bit width register for other rows stores a single bit designating whether the bit width of the corresponding row is the same or different than the previous row.

[0013] In a particular case of the method, the method is used to store floating point values, the floating point values comprising a sign portion, an exponent portion and a mantissa portion, the input data stream consisting of the exponent portions of the floating point values, and wherein compressing further comprises, for each floating point value, storing the sign portion and mantissa portion adjacent to the compressed exponent portion.

[0014] In a particular case of the method, during decompression a pointer is established for the location of a particular one of the blocks that is known to be needed at a future time.

[0015] In a particular case of the method, the method further comprises tracking a next unoccupied location in each column of the first memory while compressing and storing the values.

[0016] In a particular case of the method, the method further comprises initializing a first storage location of the first memory as being unoccupied prior to compressing the data stream.

[0017] In a particular case of the method, the plurality of values are of a fixed bit width less than or equal to the column width.

[0018] In a particular case of the method, the reproduced data stream is output directly to an arithmetic/logic unit.

[0019] In a particular case of the method, the reproduced data stream is output to a second memory having a plurality of rows each having a plurality of columns corresponding to the first memory.

[0020] In a particular case of the method, compressing further comprises prior to identifying the compressed bit width, evaluating a function on the values of the input data stream to reduce the compressed bit width and reversing the function for decompression.

[0021] In another aspect, a method for memory decompression for a deep learning network is provided, the method comprising: obtaining a compressed data stream representing an input data stream, the compressed data stream prepared by: defining, for a first memory of a deep learning network, a plurality of rows each having a specified number of columns, each column having a column width; receiving the input data stream to be processed by one or more layers of the deep learning network, the input data stream having a plurality of values of a fixed bit width; dividing the input data stream into subsets, the number of values in each subset being equal to the number of columns; compressing the data stream by sequentially compacting each subset, comprising: identifying, for values within the subset, a compressed bit width necessary to accommodate the value with the highest magnitude; storing the bit width in a bit width register associated with the row; storing, in the respective column of the memory beginning from a first unoccupied bit, a least significant bits of each value within the subset, the number of bits equal to the bit width, wherein if storing the number of bits requires more bits than those currently left unused in the respective column of the respective row, the remaining bits are written into the respective column of a subsequent row; and decompressing the compressed data stream to reproduce the input data stream by: identifying a first unread bit of each column of the compressed data stream; sequentially outputting the reproduced input data by: obtaining the bit width of each subset from the respective bit width register; retrieving from each column of the first memory, beginning at the first unread bit of the column, the number of bits corresponding to the bit width and outputting the retrieved bits to the least significant bits of an output; updating the first unread bit of each column to correspond to the bit location subsequent to the retrieved bits; zero or sign extending remaining most significant bits of the output to obtain the reproduced input data value.

[0022] In yet another aspect, a system for memory compression for a deep learning network is provided, the system comprising: a first memory having a plurality of rows each having a specified number of columns, each column having a column width; an input module for: receiving an input data stream to be processed by one or more layers of the deep learning network, the input data stream having a plurality of values of a fixed bit width; and dividing the input data stream into subsets, the number of values in each subset being equal to the number of columns; a width detector module having a plurality of bit width registers each associated with a row, the width detector module identifying, for values within the subset, a compressed bit width necessary to accommodate the value with the highest magnitude and storing the bit width in the bit width register associated with the row; a compacting module for storing, in the respective column of the memory beginning from a first unoccupied bit, a least significant bits of each value within the subset, the number of bits equal to the bit width, wherein if storing the number of bits requires more bits than those currently left unused in the respective column of the respective row, the remaining bits are written into the respective column of a subsequent row; and a decompression module for decompressing the compressed data stream to reproduce the input data stream by: identifying a first unread bit of each column of the compressed data stream; sequentially outputting the reproduced input data by: obtaining the bit width of each subset from the respective bit width register; retrieving from each column of the first memory, beginning at the first unread bit of the column, the number of bits corresponding to the bit width and outputting the retrieved bits to the least significant bits of an output; updating the first unread bit of each column to correspond to the bit location subsequent to the retrieved bits; zero or sign extending remaining most significant bits of the output to obtain the reproduced input data value.

[0023] In a particular case of the system, the system further comprises a pointer module having one or more pointers for tracking the location of a block of compressed values.

[0024] In a particular case of the system, the block is a filter map data block or an input or output activations data block.

[0025] In a particular case of the system, the location is for the first compressed value of the block.

[0026] In a particular case of the system, the one or more pointers comprise a first set of pointers to data for input or output activations maps and a second set of pointers to data for filter maps.

[0027] In a particular case of the system, the system further comprises an offset pointer, wherein receiving an input data stream comprises sequentially receiving portions of the block beginning at the location of the one or more pointers, compressing the portion of the block, and updating the offset pointer for recalling the next portion to be received.

[0028] In a particular case of the system, receiving an input data stream comprises sequentially receiving portions of the block, wherein a location for each portion is identified by one of the pointers.

[0029] In a particular case of the system, a portion of the compressed data values are forced to be stored starting at the least significant bit of a column by padding unoccupied most significant bits of a preceding data value.

[0030] In a particular case of the system, the bit width register for some rows stores a binary representation of the length of the bit width.

[0031] In a particular case of the system, the bit width register for other rows stores a single bit designating whether the bit width of the corresponding row is the same or different than the previous row.

[0032] In a particular case of the system, the system is for storing floating point values, the floating point values comprising a sign portion, an exponent portion and a mantissa portion, the input data stream consisting of the exponent portions of the floating point values, and wherein compressing further comprises, for each floating point value, storing the sign portion and mantissa portion adjacent to the compressed exponent portion.

[0033] In a particular case of the system, during decompression a pointer is established for the location of a particular one of the blocks that is known to be needed at a future time.

[0034] In a particular case of the system, the compacting module is configured to track a next unoccupied location in each column of the first memory while compressing and storing the values.

[0035] In a particular case of the system, the compacting module is configured to initialize a first storage location of the first memory as being unoccupied prior to compressing the data stream.

[0036] In a particular case of the system, the plurality of values are of a fixed bit width less than or equal to the column width.

[0037] In a particular case of the system, the reproduced data stream is output directly to an arithmetic/logic unit.

[0038] In a particular case of the system, the reproduced data stream is output to a second memory having a plurality of rows each having a plurality of columns corresponding to the first memory.

[0039] In a particular case of the system, compressing further comprises, prior to identifying the compressed bit width, evaluating a function on the values of the input data stream to reduce the compressed bit width and reversing the function for decompression.

[0040] These and other aspects are contemplated and described herein. It will be appreciated that the foregoing summary sets out representative aspects of embodiments to assist skilled readers in understanding the following detailed description.

DESCRIPTION OF THE DRAWINGS

[0041] A greater understanding of the embodiments will be had with reference to the Figures, in which:

[0042] FIG. 1 is a schematic diagram of a system for memory compression for deep learning networks, in accordance with an embodiment;

[0043] FIG. 2 is a schematic diagram showing the system of FIG. 1 and an exemplary operating environment;

[0044] FIG. 3 is a flow chart of a method for memory compression for deep learning networks, in accordance with an embodiment;

[0045] FIG. 4A shows an example of imap value distribution over a batch of 64 randomly selected inputs;

[0046] FIG. 4B shows example fmap value distribution that is input independent;

[0047] FIG. 5A shows imap cumulative distribution over the batch of 64 randomly selected inputs;

[0048] FIG. 5B shows example fmap cumulative distribution that is input independent;

[0049] FIG. 6 shows a diagram of an example of a convolutional layer for the purposes of illustration;

[0050] FIG. 7 shows a diagram of an example of organization of a sparse convolutional neural networks (SCNN) tile;

[0051] FIG. 8A illustrates an example of a fixed datawidth buffer;

[0052] FIG. 8B illustrates an example of a naive approach to supporting variable datawidths;

[0053] FIG. 8C illustrates an example of supporting variable datawidths in accordance with the system of FIG. 1;

[0054] FIG. 9A shows a diagram of an example of a decompression module in accordance with the system of FIG. 1;

[0055] FIG. 9B is an example of a second cycle (iteration) of the decompression module of FIG. 9A;

[0056] FIG. 9C is an example of a third cycle (iteration) of the decompression module of FIG. 9A;

[0057] FIG. 10A shows a diagram of an example of a compacting module in accordance with the system of FIG. 1;

[0058] FIG. 10B shows an example structure of a compacting block of the compacting module of FIG. 10A;

[0059] FIG. 11A illustrates an example of the system of FIG. 1 used in a data-parallel accelerator that targets dense models;

[0060] FIG. 11B illustrates an example of a grid of processing elements in accordance with the system of FIG. 1;

[0061] FIG. 12A illustrates a chart reporting the memory footprint for a whole neural network for example experiments;

[0062] FIG. 12B shows a chart illustrating this reduction in traffic for the example experiments;

[0063] FIG. 13A is a chart illustrating on-chip memory capacity needed under each sizing policy for the example experiments;

[0064] FIG. 13B shows a chart illustrating off-chip traffic per model for the example experiments;

[0065] FIG. 14A shows a chart illustrating speedups normalized to a baseline for the example experiments;

[0066] FIG. 14B shows a chart illustrating the reduction in total model footprint for the example experiments;

[0067] FIG. 15A shows a chart illustrating speedup for the example experiments;

[0068] FIG. 15B is a chart showing footprint reduction for the example experiments;

[0069] FIG. 16A is a chart illustrating the memory energy breakdown for the example experiments;

[0070] FIG. 16B is a chart showing ideal compression rates in view of the example experiments; and

[0071] FIG. 17 is a chart showing a comparison of various approaches, showing memory footprint reduction for optimised bit width size overhead.

DETAILED DESCRIPTION

[0072] Embodiments will now be described with reference to the figures. For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.

[0073] Any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.

[0074] Compression in memory hierarchy is particularly appealing for deep learning workloads and accelerators where memory accesses are responsible for a large fraction of overall energy consumption. Compression can provide a technical advantage to the operation of computers and, more particularly in the present case, deep learning networks. First, for example, compression can increase the hierarchy’s effective capacity and bandwidth, boost energy efficiency and reduce overall access latency. Specifically, compressing data at any level of the hierarchy can boost its effective capacity as each value requires fewer physical bits when encoded. Second, it reduces accesses to higher levels of the hierarchy which require much more energy and time per access, thus improving effective latency and energy efficiency. Third, compression reduces the number of bits that are read or written per value, boosting effective bandwidth and energy efficiency. Further, it complements dataflow and blocking for reuse, the frontline techniques for boosting energy efficiency in the memory hierarchy. These benefits have motivated work on off-chip memory compression for neural networks. Embodiments of the present disclosure advantageously provide compression in the on-chip memory hierarchy.

[0075] Compression in the memory hierarchy has received considerable attention in the context of general-purpose computing systems. Compression for general-purpose computing systems has to support arbitrary access patterns and generally relies on value patterns that are common in computer programs (e.g., pointer or repeated values). However, the inventors have determined that deep learning workloads exhibit specific behaviours which present additional opportunities and technical challenges. For example, the access patterns for deep learning workloads are typically regular and consist of long sequential accesses. This mitigates the benefit of supporting random access patterns. Additionally, neural network values generally consist of feature and filter maps that generally do not exhibit the properties of typical program variables. Further, neural network hardware tends to be data-parallel, necessitating wide accesses.

[0076] Supporting random, fine-grain accesses requires the ability to locate the compressed values in memory both quickly and at a fine-grain granularity. This forces general-purpose compression methods to use small blocks, which generally severely inhibits the effective capacity. As a result, many compression approaches reduce the amount of data transferred, but not the size of the containers they use in storage. For example, they encode data within a cache line so that it needs to read or write fewer bits. However, the full cache line is still reserved. Alternatively, methods use a level of indirection to identify where data is currently located in memory, requiring careful balancing between flexible placement and metadata overhead.

[0077] Typical programs tend to exhibit full or partial value redundancy. For example, due to the use of memory pointers, several values tend to share prefixes (e.g., pointers to the stack or to heap allocated structures). Programs often use aggregate data structures which tend to exhibit partially repeated value patterns (e.g., flag fields). Approaches to compression generally need to handle a variety of datatypes, including integers and floating point numbers, or characters from a variety of character sets (e.g., UTF-16). Further, programs manage datatypes of various power-of-two datawidths, such as 8b, 16b, 32b or more. Finally, programmers often use the “default” integer or floating point datatypes (e.g., 32b or 64b). Compression techniques can capitalise on these characteristics to reduce data footprint.

[0078] In contrast, the inventors have determined that deep learning workloads tend to exhibit long sequential accesses even when blocking for reuse is used. This can mitigate the need to support random accesses to fine-grain blocks. Further, values in deep learning workloads do not generally exhibit the repeated patterns of general computer programs. The bulk of their memory footprint is for storing large arrays of short datatypes, such as 8b or 16b. Generally, given the large volume of data and computations, deep learning models choose their datatypes carefully to be as small as possible. Quantisation techniques to even smaller datatypes, such as 4b, can also be used. In some cases, there are models for which 16b is still necessary; for example, for certain segmentation models where even small drops in accuracy translate into highly visible artefacts. Further, while programs tend to perform narrow memory requests, neural networks generally exhibit data parallelism and prefer wide references.

[0079] Embodiments of the present disclosure advantageously provide an on-chip compression scheme where data remains encoded as much as possible. In some cases, data can be decompressed before the processing elements of the deep learning approach, which favours simple to implement schemes, especially for decoding. Many compression techniques for general purpose systems generally operate between the last-level cache and other caches of the on-chip hierarchy where latency is not as critical and thus can tolerate additional complexity. Advantageously, embodiments of the present disclosure provide a lossless on-chip compression scheme which, for example: (1) can support the relatively long sequential accesses generally needed by neural networks, (2) can support multiple wide accesses to maintain high utilisation of processing units, (3) allows decoding to happen just before the processing units, thus keeping data compressed for as long as possible, and (4) takes advantage of value behaviour that is typical of neural networks.

[0080] Embodiments of the present disclosure (in some cases, informally referred to as ‘Boveda’) provide an on-chip memory hierarchy compression scheme that advantageously exploits typical distribution of values in neural networks that operate on fixed-point values; particularly, in each layer, very few values are of a high magnitude as most tend to be close to zero. Accordingly, rather than storing all values using the same number of bits, embodiments of the present disclosure adjust datawidth to value content so that they use only as many bits as necessary. Allowing each value to select its datawidth independently would result in unacceptable metadata overhead (a width field per value). Instead, embodiments of the present disclosure group values and select a common datawidth, which is sufficiently wide to accommodate the value with the highest magnitude in the group. For example, for a group of eight 8b (8-bit) values where the highest magnitude value is 0x12, a container of 8 x 5b can be used, whereas for another group, where the maximum magnitude value is 0x0a, 8 x 4b can be used. In either case, a metadata field of 3b will specify the number of bits used per value (5 and 4 respectively). Since variable data width containers can be used, decoding the values and properly aligning them to feed to the processing units would normally require wide crossbars. For example, a processing element operating on 8 values of 8b each would require a crossbar of 64b to 64b, as well as additional logic to handle values that spread over two memory rows. Embodiments of the present disclosure exploit the regular access pattern of neural networks to organize the compressed data in memory such that it instead requires multiple, yet much smaller “crossbars”.
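
By way of a non-limiting illustration (and not the hardware implementation described herein), the following Python sketch shows how a common datawidth can be selected per group and paid for with a single 3b width field; apart from the 0x12 and 0x0a maxima taken from the example above, the group contents are made up.

    def group_width(values):
        # Bits needed to hold the largest magnitude in the group (at least 1).
        return max(1, max(v.bit_length() for v in values))

    def packed_size_bits(values, width_field_bits=3):
        # Total bits for the group: one common width per value plus the metadata field.
        w = group_width(values)
        return len(values) * w + width_field_bits, w

    # Eight 8b values whose maximum is 0x12 fit in an 8 x 5b container (plus 3b of metadata),
    # while a group whose maximum is 0x0a fits in 8 x 4b.
    print(packed_size_bits([0x12, 0x03, 0x0f, 0x01, 0x00, 0x11, 0x02, 0x05]))  # (43, 5)
    print(packed_size_bits([0x0a, 0x01, 0x06, 0x00, 0x09, 0x02, 0x03, 0x04]))  # (35, 4)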

[0081] Advantageously, embodiments of the present disclosure can boost the effective on-chip capacity without requiring any modifications to the neural network model. This can yield energy and/or performance benefits depending on whether the model is off-chip or compute bound. An architect can deploy the present embodiments during design time to reduce the amount of on-chip memory and thus the cost needed to meet a desired performance target. To a neural network developer, the present embodiments provide an approach that needs to go off-chip less often and that rewards quantisation without requiring it for all models. In the present disclosure, to demonstrate that the present approaches are not specific to a particular accelerator architecture, example experiments are applied on an accelerator for dense models and for sparse convolutional neural networks (SCNN), an accelerator targeting pruned models. For SCNN, the example experiments illustrate that the present embodiments can operate on top of SCNN’s zero compression. For the purposes of illustration, the example experiments use computer vision tasks, particularly image classification, to illustrate the effectiveness of the present embodiments. While this represents a fraction of the vast array of domains to which deep learning can be applied, it is of high importance and value due to the variety and volume of applications where image processing systems are employed. The example experiments determined that the present embodiments:

• Reduced total model footprint to 49%. For models that are quantised using specialised methods, it achieved nearly ideal compression rates. In the case of one method, it nearly doubled the compression rate compared to what specialised hardware would have delivered due to taking advantage of value content.

• Reduced the volume of bits accessed on-chip to 50%.

• Improved performance by 1.4x and energy by 28% for a dense accelerator with a 96KB global buffer.

• Reduced overall model footprint to 66% over SCNN’s zero compression.

• Reduced energy when merged with SCNN by 26% compared to an average of 20% for the configurations studied.

[0082] Referring now to FIG. 1 and FIG. 2, a system 100 for memory compression for deep learning networks, in accordance with an embodiment, is shown. In this embodiment, the system 100 is run on a computing device 26 and accesses content located on a server 32 over a network 24, such as the internet. In further embodiments, the system 100 can be run only on the device 26 or only on the server 32, or run and/or distributed on any other computing device; for example, a desktop computer, a laptop computer, a smartphone, a tablet computer, a server, a smartwatch, distributed or cloud computing device(s), or the like. In some embodiments, the components of the system 100 are stored by and executed on a single computer system. In other embodiments, the components of the system 100 are distributed among two or more computer systems that may be locally or remotely distributed.

[0083] FIG. 1 shows various physical and logical components of an embodiment of the system 100. As shown, the system 100 has a number of physical and logical components, including a central processing unit (“CPU”) 102 (comprising one or more processors), random access memory (“RAM”) 104, an input interface 106, an output interface 108, a network interface 110, non-volatile storage 112, and a local bus 114 enabling CPU 102 to communicate with the other components. CPU 102 executes an operating system, and various modules, as described below in greater detail. RAM 104 provides relatively responsive volatile storage to CPU 102. The input interface 106 enables an administrator or user to provide input via an input device, for example a keyboard and mouse. The output interface 108 outputs information to output devices, for example, a display and/or speakers. The network interface 110 permits communication with other systems, such as other computing devices and servers remotely located from the system 100, such as for a typical cloud-based access model. Non-volatile storage 112 stores the operating system and programs, including computer-executable instructions for implementing the operating system and modules, as well as any data used by these services. Additional stored data, as described below, can be stored in a database 116. During operation of the system 100, the operating system, the modules, and the related data may be retrieved from the non-volatile storage 112 and placed in RAM 104 to facilitate execution.

[0084] In an embodiment, the system 100 includes a number of functional modules, such as an input module 120, a decompression module 122, a width detector module 126, a compacting module 124, a deep learning (DL) module 128, and a pointer module 130. In further embodiments, the functions of the modules can be combined or run on other modules. In some cases, the functions of the modules can be run at least partially on dedicated hardware, while in other cases, at least some of the functions of the modules are executed on the CPU 102.

[0085] Distributions of input feature maps (imap) and filter maps (fmap) values are generally heavily skewed towards low magnitudes. It is this behaviour that can be exploited by the system 100 to construct a low-cost, energy efficient compression technique. To capitalise on these distributions, the system 100 can adapt the number of bits (datawidth) used per element to be just long enough to fit its current value. Since fmaps are typically static, the datawidth used will be different across fmap elements but will be input independent. On the other hand, imap values are input dependent, therefore the datawidth used by the system 100 can adapt to the value each element takes. In contrast, other memory hierarchies store all imap or fmap elements using a datawidth, which is sufficiently long to accommodate any value possible. However, as the present inventors have empirically determined, this proves excessive for most elements. For the purposes of illustration, two models are highlighted: ResNet18 (image classification), and SSD MobileNet (object-detection), both quantised to 8b. FIGS. 4A, 4B, 5A, and 5B show regular and cumulative distributions of imap and fmap values for some representative convolutional and fully-connected layers. FIGS. 4A and 5A show imap value distribution and cumulative distribution, respectively, over a batch of 64 randomly selected inputs, and FIGS. 4B and 5B show fmap value distribution and cumulative distribution that are input independent.

[0086] FIGS. 4A and 5A show that in ResNet18’s res2a_branch1, most of the imap values can be represented with 5b, which under ideal conditions translates into a 37.5% reduction in footprint over the 8b used. Just 4b, a 50% reduction over 8b, are sufficient for virtually all imap values in its fully-connected layer fc. SSD Mobilenet exhibits similar behaviour. In its 2D convolution layer depthwise12, 90% of the values need 6b or fewer, which is also enough to represent virtually all imap values in its object detection SSD module layer pointwise13_2_2. FIGS. 4B and 5B show similar trends for the fmaps. In ResNet18’s res2a_branch1, only 5b are sufficient for most of the fmap values, whereas 6b are sufficient for virtually all values in its fc layer. However, in fc, 95% of the fmap values need at most 5b. SSD-MobileNet’s fmaps are similar. Virtually all values fit in 6b, 90% fit in 5b and more than 80% in 4b.
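
The footprint figures quoted above follow from simple arithmetic; the short Python sketch below merely reproduces the ideal-case reductions for 5b and 4b values stored in place of 8b containers.

    def ideal_reduction(width, container_bits=8):
        # Fraction of footprint saved when a value of the given width replaces a full container.
        return 1.0 - width / container_bits

    print(f"{ideal_reduction(5):.1%}")  # 37.5% for 5b values instead of 8b
    print(f"{ideal_reduction(4):.1%}")  # 50.0% for 4b values instead of 8b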

[0087] In some cases, the system 100 can be applied over an SCNN accelerator, which is an accelerator for convolutional layers of pruned CNN models. For purposes of illustration, the system 100 is described as applied to convolutional layers of an SCNN; however, it is appreciated that it can be applied to other data-parallel deep learning accelerators and other types of layers, such as fully-connected layers.

[0088] FIG. 6 shows a diagram of an example of a convolutional layer for the purposes of illustration. The inputs are K fmaps of dimension SxRxC (height, width, channel), an HxWxC imap, where typically H > S and W > R, and a stride s. The fmaps are statically known values (weights), whereas the imaps are runtime calculated values (activations). The output is an H'xW'xK omap (activations). This example assumes s = 1. Each omap value is determined as a three-dimensional (3D) convolution of an fmap with an equally sized window of the imap. Each fmap produces the omap values for one channel by sliding the window over the imap using the stride s along the H and W dimensions. The 3D convolution involves a pair-wise multiplication of an fmap element with its corresponding imap element, followed by the accumulation of all these products into the omap value. Each 3D convolution is equivalently the sum of C two-dimensional (2D) convolutions on each input channel.
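
For reference, a minimal Python sketch of the layer in FIG. 6 is shown below (an illustration only, assuming no padding and using NumPy, not the accelerator dataflow described herein); each omap value is formed as the 3D convolution of one fmap with an equally sized imap window, i.e., the sum of C 2D convolutions.

    import numpy as np

    def conv_layer(imap, fmaps, s=1):
        # imap: H x W x C activations; fmaps: K filters of S x R x C weights (no padding assumed).
        H, W, C = imap.shape
        K, S, R, _ = fmaps.shape
        Ho, Wo = (H - S) // s + 1, (W - R) // s + 1
        omap = np.zeros((Ho, Wo, K), dtype=np.int64)
        for k in range(K):
            for y in range(Ho):
                for x in range(Wo):
                    # 3D convolution of one fmap with an equally sized imap window.
                    window = imap[y * s:y * s + S, x * s:x * s + R, :]
                    omap[y, x, k] = np.sum(window * fmaps[k])
        return omap

    # Example: a 6x6x3 imap with four 3x3x3 fmaps yields a 4x4x4 omap at stride 1.
    out = conv_layer(np.ones((6, 6, 3), dtype=np.int64), np.ones((4, 3, 3, 3), dtype=np.int64))
    print(out.shape)  # (4, 4, 4)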

[0089] SCNN stores values in an N.Samples-Channel-Height-Width (NCHW) order and the omap is determined by a spatial input stationary convolution. This allows SCNN to process imaps and fmaps one channel at a time, which in turn allows it to exploit sparsity. FIG. 7 shows a diagram of an example of organization of an SCNN tile. SCNN uses a grid of such tiles to scale up performance. For the purposes of illustration and ease of understanding, it can be assumed that there is only one tile. However, it is understood that the present embodiments of the system 100 can be used for multiple tiles.

[0090] The tile has three buffers respectively holding imaps (and omaps), fmaps, and accumulators. The accumulators accumulate omap values. SCNN uses a spatial dataflow where it performs all 2D convolutions for all windows of a single channel of the imap at a time. SCNN builds on the observation that in convolutional layers the product of any fmap value with any imap value from the same channel contributes to some omap value. Accordingly, at maximum throughput, the tile processes 4 imap and 4 fmap values all from the same channel and calculates the products for all 16 possible (imap, fmap) pairs. It then directs, via a crossbar, all these products into their corresponding accumulator. The accumulator buffer is organized into 32 banks in order to reduce conflicts which occur when multiple products map onto accumulators in the same bank. To take advantage of sparsity, the imap and the fmap omit zero values, storing non-zero values as (value, skip) pairs, where skip is the number of zero values omitted after each. By using these skip fields, SCNN deduces the original position of each value and maps the products to their respective accumulators. For the purposes of illustration and ease of understanding, the skip fields are omitted and 8b values are assumed. As described herein, 16b (original) and 8b SCNN configurations are considered.
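
A functional Python sketch of the (value, skip) zero compression is shown below for illustration; the skip-field bit width and the handling of a stream's leading zeros are simplifying assumptions here, not details taken from SCNN.

    def encode_skips(stream):
        # Store each non-zero value with the count of zeros omitted after it.
        nz = [i for i, v in enumerate(stream) if v != 0]
        pairs = []
        for j, i in enumerate(nz):
            end = nz[j + 1] if j + 1 < len(nz) else len(stream)
            pairs.append((stream[i], end - i - 1))
        return pairs

    def decode_skips(pairs):
        # Reconstruct the original positions from the skip fields.
        out = []
        for value, skip in pairs:
            out.append(value)
            out.extend([0] * skip)
        return out

    stream = [3, 0, 0, 7, 1, 0]                 # made-up channel data
    print(encode_skips(stream))                 # [(3, 2), (7, 0), (1, 1)]
    print(decode_skips(encode_skips(stream)))   # [3, 0, 0, 7, 1, 0]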

[0091] Typically, SCNN would process two consecutive blocks as follows. Consider two blocks (I0, ..., I3) and (I4, ..., I7) of 4 imap values each; referred to as BBlock 0 and BBlock 1. Note that the values within each block are conceptually ordered: I0 is the first value within BBlock 0 whereas I4 is the first value within BBlock 1. Initially, it can be assumed that these are unsigned numbers. FIG. 8A illustrates an example of a fixed datawidth buffer. In this example, SCNN’s imap buffer uses a container of 8b per value and supports 4-value-wide reads (32b). With this organization, the values as read from the imap buffer align directly with the multiplier inputs. However, all values in BBlock 0 have a prefix of at least 2 zero bits, and those in BBlock 1 have 3 bits. In contrast, one of the goals of the system 100 is to avoid storing these prefix bits.

[0092] FIG. 8B illustrates an example of a naive approach to supporting variable datawidths. This approach is a straightforward, yet generally undesirable, way to store the compressed values. For each BBlock of four values, a width field specifies the number of bits per value; in this example, 6 for BBlock 0 and 5 for BBlock 1 (encoded as 5 (101) and 4 (100)). A single width field per BBlock amortizes its overhead over multiple values. In this example, the values are stored sequentially across BBlock 0 and, once BBlock 0 is fully occupied, the values are stored sequentially across BBlock 1.

[0093] Unfortunately, decompression comes at a hefty price because the values are no longer aligned with the multiplier inputs and may even spread over two rows. For each multiplier column, width bits (varies per BBlock) need to be extracted and routed to a multiplier input after expanding to 8b. This routing requires a 32b-to-8b crossbar-like interconnect. Since there are four multiplier columns, four such crossbars are needed, which represents a significant cost in area and energy. If the multiplier grid had 8 x 8 multipliers, 64b-to-8b crossbars would have been needed.
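
The misalignment caused by this naive packing can be illustrated with the following Python sketch; apart from I0 = 0x2C (used later in the decompression example), the values are made up. After packing, the compressed values no longer start at multiples of 8 bits and one of them straddles two 32b rows, which is what drives the 32b-to-8b crossbars discussed above.

    def pack_naive(values, block=4):
        # FIG. 8B style: store each BBlock's values back to back at the block's common width.
        stream, offsets, widths, pos = [], [], [], 0
        for i in range(0, len(values), block):
            grp = values[i:i + block]
            w = max(1, max(v.bit_length() for v in grp))
            widths.append(w)
            for v in grp:
                offsets.append(pos)                              # starting bit of this value
                stream.extend((v >> b) & 1 for b in range(w))    # LSb first
                pos += w
        return stream, offsets, widths

    values = [0x2C, 0x31, 0x11, 0x23, 0x0A, 0x01, 0x15, 0x02]    # BBlock 0 and BBlock 1 (made up)
    stream, offsets, widths = pack_naive(values)
    print(widths)    # [6, 5]: common widths for BBlock 0 and BBlock 1
    print(offsets)   # [0, 6, 12, 18, 24, 29, 34, 39] -- not multiples of 8; the value at bit 29 straddles two 32b rows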

[0094] The system 100 can perform approaches that are advantageously of much lower complexity and cost. In an embodiment, the values can be treated as belonging to one of four groupings called hileras, which correspond to multiplier columns; the first value in each BBlock belongs to hilera 0, the second value to hilera 1, and so on. The approach of FIG. 8B breaks this mapping and allows compressed values to flow freely across hileras.

[0095] The system 100 instead restricts values to stay within their original hilera; as exemplified in the diagram of FIG. 8C. This example shows that I0 and I4 are packed together into the hilera mapped onto the buffer’s first 8 bits, whereas I3 and I7 are packed into the hilera mapped onto the last 8 bits. To draw an analogy, values are used as bovedillas (bricks) to fill its hileras. The “crossbars” needed in this example are now 8b-to-8b, and their size depends only on the maximum datawidth and is independent of the number of values read per cycle; an 8 x 8 multiplier grid would require eight 8b-to-8b crossbars instead of eight 64b-to-8b ones.
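
The hilera-restricted layout can be sketched as follows (illustrative Python, reusing the same made-up values as the previous sketch): the n-th value of every BBlock is packed, at the block's common width, into column (hilera) n only, so each 8b multiplier input can be recovered with at most an 8b-to-8b shift.

    def pack_hileras(values, block=4):
        # FIG. 8C style: one bitstream per column; the n-th value of each BBlock goes to hilera n.
        widths = []
        hileras = [[] for _ in range(block)]
        for i in range(0, len(values), block):
            grp = values[i:i + block]
            w = max(1, max(v.bit_length() for v in grp))
            widths.append(w)
            for col, v in enumerate(grp):
                hileras[col].extend((v >> b) & 1 for b in range(w))   # LSb first
        return hileras, widths

    values = [0x2C, 0x31, 0x11, 0x23, 0x0A, 0x01, 0x15, 0x02]
    hileras, widths = pack_hileras(values)
    print(widths)            # [6, 5]: common widths for BBlock 0 and BBlock 1
    print(len(hileras[0]))   # 11: hilera 0 holds I0 (6 bits) followed by I4 (5 bits)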

[0096] FIG. 9A shows a diagram of an example of the decompression module 122. The decompression module 122 decompresses a single value per cycle, properly handling those values that spread over two rows. While the illustrated decompression module 122 depicts one decompression block, keeping with the example of FIG. 8C, four decompression blocks of the decompression module 122 would operate in parallel to decompress four imap values per cycle. Reads from the imap buffer remain 32b wide. From each read, each block receives the corresponding 8b for its hilera. Within each block, two 8b registers, register L and register R, hold compressed data. Every time a new set of 8b is read in, it is written into register L while simultaneously the current contents of register L are “copied” into register R. In some cases, rather than physically copying register L into register R, a bit pointer can be used to “swap” the two. In steady state, register L and register R will contain two consecutive rows from one hilera from the imap buffer, and thus, all bits necessary to decompress an 8b value regardless of width. A 16b-to-8b shifter extracts the current value from the value formed by concatenating the output of register L and register R. In this 16b example, the shifter only needs to support shifts of up to 7 positions, as specified by a 3b “offset” register (“OFS”). A 3b register (“W”) holds the data width of the current BBlock. OFS and W, and the associated control logic, can be shared among all four decompression blocks. Initially, in this example, OFS=0 and W=7, both corresponding to the maximum datawidth. A “Bit-Extend” block passes the W LSbs from the output of the shifter and sign-extends them to 8b. The decompression block can operate as a two-stage pipeline where the first stage loads values into register L and register R, while the second stage extracts the next decompressed 8b value from the contents of register L and register R. In this example, the decompression block requires three cycles in total (an extra cycle is needed for the initiation interval) for the first multiplier column to decompress I0 and I4. In steady state, the decompression block can output a value per cycle.

[0097] FIGS. 9B and 9C illustrate examples of cycles 2 and 3 for the above example of the decompression module 122. In cycle 1, the imap buffer supplies the first set of 8b of the input data stream, 0110 1100, which is written into register L. Concurrently, W is loaded from the width memory with the datawidth 101 for BBlock 0. OFS is updated to OFS=(OFS+W+1) mod 8=0. Since OFS+W+1 reached 8 (carry out from the adder), register R contains no useful bits and thus the positions of register L and register R are swapped at the end of cycle 1 and a read from the imap buffer is triggered for the next cycle. In cycle 2, illustrated in FIG. 9B, the decompression block reads in the next 8b, copying them into register L at the end of the cycle. Now register L and register R contain two consecutive rows of compressed values from the same hilera, and thus are now in steady state. During cycle 2, and since OFS is 0, the 16b output of (register L, register R) is shifted by 0, thus aligning the LSb of the compressed I0 with the LSb of the output. A bit-extension block, upon guidance of W, passes through the lower 6 bits and fills in the upper 2 bits accordingly. In this example, it zero-extends the value to 8b since this imap is known to have only positive values. If the layer had signed imap values, the extender block would sign-extend instead. As a result, the value 0010 1100, the original I0, is sent to the multipliers. OFS is updated as before: OFS = (0+5+1) mod 8 = 6. Since this does not reach 8, the system will not read a value from the imap buffer in the next cycle. A new width field is read into W by the end of the cycle. This is the width for BBlock 1. In cycle 3, as FIG. 9C illustrates, OFS is used to instruct the shifter to shift (register L, register R) by 6 positions. The extender block passes through the 5 least significant bits, since W is 100, zero-extending to 8b. OFS can then be updated to (6+4+1) mod 8 = 3, and since the sum exceeds 8, register L and register R will be swapped and the next imap row will be loaded into register L in the next cycle.
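The OFS bookkeeping in this walkthrough reduces to a single update rule; the tiny helper below (names are illustrative assumptions) reproduces the three updates described above:

    def next_ofs(ofs, w_field):
        total = ofs + w_field + 1       # w_field stores width - 1, so w_field + 1 bits are consumed
        return total % 8, total >= 8    # (new OFS, carry out => swap L/R and read a new row)

    assert next_ofs(0, 7) == (0, True)    # cycle 1: initial W=7 triggers a buffer read
    assert next_ofs(0, 5) == (6, False)   # cycle 2: BBlock 0, width field 101
    assert next_ofs(6, 4) == (3, True)    # cycle 3: BBlock 1, width field 100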

[0098] Once the imap and fmap values for all channels of a layer are processed, the accumulators contain the output map. In most cases, the SCNN reads these values, passes them through the activation function, removes those that are zero, and copies the remainder into the omap buffer (in some cases, it then swaps a pointer so that the omap buffer becomes the imap buffer for the next layer). The system 100 compacts the output of this zero compression. The number of values per BBlock can be chosen by a user and/or designer. FIG. 10B shows an example of a compacting block of the compacting module 124, in accordance with the present embodiments, that processes four input values per cycle and where the BBlock size is four.

[0099] FIG. 10A shows that an example of the compacting module 124 can include three major components: (1) a width detector, (2) four compactor units (CUs), and (3) a 32b output register. The compacting module 124 reads in four 8b values per cycle and encodes them into a BBlock, storing them into the output register. When all 32b of the register are filled, the register is sent to the omap buffer. The module is fully capable of producing a full row per cycle. In some cases, it may output buffer rows at a slower pace, since compression can allow it to pack more values per row. For this reason, fewer bits will have to be copied into the imap buffer, advantageously saving energy.

[0100] The width detector module 126 identifies the bit width necessary to accommodate the value with the highest magnitude. If, for example, values are assumed to be positive (which is the case when using ReLU), the width detector module 126 first produces 8 signals, one per bit plane, each being the OR of all corresponding bits across the four values. The 8 signals then go through a leading-one detector module that identifies the most significant bit position that is 1 amongst all values. This is the width the BBlock needs. When a layer may have signed values, negative values can be inverted prior to the leading-one detector (for negative numbers, the detector effectively finds the most significant bit that is zero). The width in this case needs one more bit for the sign. Whether a map may contain negative numbers is known statically. The detected width can be written into the width buffer. Accordingly, for data values that may contain negative values, the values may be sign-extended after unpacking based on the value of this sign bit. Positive values may be extended to the full width with zero bits added in the most significant positions, whereas negative values, as determined by a sign bit of one, may be extended using bits of value 1.
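A minimal functional sketch of this width detection follows, assuming Python integers stand in for the bit planes (names are illustrative; the hardware realizes the same OR-of-bit-planes and leading-one logic with gates):

    def detect_width(values, signed=False, max_width=8):
        # OR all values together; the position of the leading one in the OR is the
        # most significant bit set in any value, i.e., the width an unsigned BBlock needs.
        planes = 0
        for v in values:
            planes |= (~v if (signed and v < 0) else v)   # invert negatives (two's complement)
        width = max(1, planes.bit_length())
        if signed:
            width += 1                                    # one extra bit for the sign
        return min(width, max_width)

For example, detect_width([44, 3, 7, 1]) returns 6, matching the 6-bit value 101100 used in the decompression walkthrough above (the other three values are hypothetical).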

[0101] FIG. 10B shows the structure of a compacting block of the compacting module 124 that approximately mirrors the decompression module 122. In some cases, there can be one compacting block per hilera. Register L and register R hold the current and the next row for the hilera. Every cycle the compacting block processes a value. It extracts the value's width least significant bits (the width reported by the width detector) and, via the "shift-and-mask" block, stores them into register R at the appropriate position. If the value requires more bits than those currently left unused in register R, the remaining bits are written into register L. When register R fills up, it is copied to the output row register (component (3)) and the two registers are swapped using a single bit pointer (not shown). A 3b continuation register specifies at which bit position filling register R should continue. The shift-and-mask block contains an 8b-to-16b shifter and needs to support shifts of up to 7 positions to the right; in most cases, the system never needs to shift more than 7 bits, since that would mean that register R had no free bits left and thus would have been written out.
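A functional Python sketch of one compacting block follows (an illustrative model under the naming assumptions used earlier, not the hardware; width_fields holds one entry per BBlock, which is also one entry per value within a single hilera):

    def compact_hilera(values, width_fields):
        # values:        stream of 8-bit values belonging to this hilera
        # width_fields:  (width - 1) for each BBlock (one value of this hilera per BBlock)
        # Returns the list of 8-bit rows emitted for this hilera.
        rows_out = []
        reg_r, reg_l = 0, 0        # R fills first; overflow bits spill into L
        fill = 0                   # continuation register: next free bit position in R
        for value, w in zip(values, width_fields):
            width = w + 1
            bits = value & ((1 << width) - 1)          # keep only the width LSbs
            packed = (reg_l << 8) | reg_r | (bits << fill)
            reg_r, reg_l = packed & 0xFF, packed >> 8
            fill += width
            if fill >= 8:                              # R is full: emit it and swap R and L
                rows_out.append(reg_r)
                reg_r, reg_l = reg_l, 0
                fill -= 8
        if fill:
            rows_out.append(reg_r)                     # flush a partially filled last row
        return rows_out

Feeding the output of compact_hilera back through the decompression sketch given earlier, with the same width fields, reproduces the original width-trimmed values, which is the round trip the compacting and decompression modules perform in hardware.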

[0102] In some cases, SCNN can store values in N.Samples-Channel-Height-Width (NCHW) order. In this way, SCNN sizes its on-chip buffers so the imap and the omap per layer fit in the on-chip buffers and reads fmaps from off-chip in channel order. When there are multiple tiles, each imap channel is mapped onto the tiles in equally sized portions and the fmaps are broadcast. The imap portion assigned to each tile depends only on the dimensions of the layer. However, since SCNN uses zero compression, the number of imap values contained in each portion will vary. The system 100 can use these properties for compression, which can be used to further compact data. Processing can still start at the beginning of the imap buffer. When values are written at the output of the layer, they are placed starting at the first position of the local omap buffer (at each layer, SCNN swaps the imap with the omap so that the omap of the preceding layer becomes the imap for the next layer).

[0103] The DL module 128, operating with the SCNN, stores the fmaps channel first, packing the values for all fmaps together: first the values for fmap 0, channel 0, then the values for fmap 1, channel 0, and so on. During processing, the tiles cycle through all fmap values for channel 0, then through all for channel 1, and so on. The DL module 128 can determine when it reaches the end of each channel since the fmap dimensions and count are known statically and it can count how many values it has processed and how many zeros it has skipped.

[0104] SCNN uses a per-value skip field to remove zeroes. Since the skip fields are used only in the control logic of the tile (e.g., to determine the original positions of values), it may be better to store them into a separate structure next to the control logic rather than close to the datapath. The DL module 128 widens this buffer to also store the per-BBlock width fields. In an example, if skip fields of 3b and 8b values are assumed, then the width field requires an overhead of 3b per BBlock, or an overhead in bits of less than 7% when BBlocks of 4 values are used. The overhead halves for BBlocks of 8 values.
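One way to arrive at these figures, assuming the width field is the only addition: a BBlock of 4 values already occupies 4 x (8b value + 3b skip field) = 44 bits, so an extra 3b width field adds 3/44, or roughly 6.8%, which is less than 7%; with BBlocks of 8 values the same 3 bits are amortized over 88 bits, roughly 3.4%, consistent with the overhead halving.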

[0105] FIG. 11A illustrates an example of the system 100 used in a data-parallel accelerator that targets dense models (i.e., it does not exploit sparsity to improve performance). The accelerator has a global buffer to avoid off-chip accesses and a grid of processing elements (PEs). The PEs, illustrated in the example of FIG. 11B, can process 16 (imap, fmap) value pairs per cycle, all accumulated to the same omap. Each PE has its own local imap, fmap and omap buffers. If necessary, the convert block first converts values read from off-chip into a usable format prior to writing them into the global on-chip buffer (and vice versa). The PE's local buffers read values from the global buffer, at which point they are decompressed. Omap values are compressed before writing them to the global buffer. The width fields are stored in a separate bank and address space of the global buffer.

[0106] Compared to a plain SCNN implementation, there are advantageous differences, which, for example, stem in part from the need to support a diverse set of dataflows and in part from the need to support predominantly dense models: (a) on-chip implementations do not implement zero compression; and (b) supporting a diverse set of dataflows requires support for blocking accesses to the imap and the fmap at various levels, and hence being able to locate the starting point for each reuse block as needed by the dataflow.

[0107] Supporting dataflows other than zero compression requires additional support, as the system 100 alters the mapping of values to memory. When all values are of the same length, the system 100 can directly index any value within the imap, fmaps and omap. Since the system 100 compresses these values, their location in memory becomes content dependent. The pointer module 130 can use pointers to support the blocking scheme of the chosen dataflow. Generally, only a few pointers are needed and only a few of them have to be explicitly stored when the data is compressed on-chip or off-chip. Most of the pointers can be generated in a timely fashion while processing and can be discarded once used. This is possible because: (a) dataflows use blocking to maximize reuse, and (b) as processing proceeds according to the dataflow, the system 100 naturally encounters the starting positions for the reuse block that will be processed next. This approach will be described first in the context of a fully-connected layer and then for convolutional layers, where it is understood that it can be applied to any suitable layer type.

[0108] In most cases, a fully-connected layer takes as input one imap and K fmaps and produces an omap having as many elements as fmaps. The imap and the fmaps all have the same number of elements C. Each of the K omap elements is the inner product of the imap with one of the fmaps. The system can take advantage of imap reuse by accessing it from on-chip. For the purposes of illustration, consider an accelerator having just a single PE. If the imap fits on-chip, it will be possible to read the imap once from off-chip and then cycle through the fmaps. In this case, the accesses to the imap and each of the fmaps will be sequential. When the imap is too large to fit on-chip, the system 100 can use blocking, where only a portion of the imap (a reuse block) is loaded on-chip at any time while the system cycles through the corresponding portions of the fmaps. The resulting on-chip access patterns remain sequential for each reuse block. Once the system 100 is done processing the current imap reuse block, it can move to the next one. Thus, for fully-connected layers, the system 100 generally only needs to support sequential accesses to relatively long blocks of the imap or the fmaps. When values are not compressed, the starting position of each reuse block is a linear function of the block's size and of its relative position. With compression, in most cases, these positions become dependent on value content. Since the access pattern is sequential, the DL module 128 will arrive at the start of each reuse block in sequence, as required by the dataflow. Thus, in most cases, the pointer module 130 only needs to maintain a single access pointer per fmap and one for the imap. When there are multiple PEs, the maps can be partitioned into smaller reuse blocks, which the DL module 128 can process concurrently. The system 100 then needs as many pointers as the number of reuse blocks it is required to process concurrently, which can be stored as additional metadata for the layer.

[0109] N.Samples-Height-Width-Channel (NHWC) memory mapping can be used to increase data locality for convolutional layers. Compared to fully-connected layers, the added challenge for convolutional layers is the need to be able to initiate accesses to multiple, often overlapping, windows. Without loss of generality, consider a channel-first output-stationary dataflow where each window is processed in channel-width-height order. The term column can be used to refer to all imap values with the same (width, height) coordinates. To compute a single omap value, a dataflow can access the values within a column sequentially and then access other columns in width-height order. Boveda can group values into BBlocks sequentially along each column, adhering to the NHWC mapping.

[0110] A technical challenge for the system 100 is that the starting position of each column will generally no longer be a linear function of its (width, height) coordinates. A naive solution would be to keep pointers to each column (2D coordinates of the first channel). This is excessive since: (a) each column is needed only during the processing of a few windows (e.g., for a 3x3 fmap, each column will be accessed 9 times), and (b) windows typically overlap and thus the starting position of each column will be encountered while processing an earlier window. Accordingly, the pointer module 130 reduces the number of pointers that are explicitly stored as metadata while "recovering" the rest during processing, keeping them around only as long as necessary. The number of pointers that need to be stored along with the imap depends on the imap and fmap dimensions and the number of windows; in an example, it can be expressed as a function of H, S, and windows, where H, S, and windows, respectively, are the imap rows, the fmap rows, and the maximum number of windows to process concurrently. For on-chip processing, in most cases, two sets of registers are needed: one for holding the current set of pointers and one to "recover" the next set. For example, for a layer with a 230x230 imap and a 3x3 fmap, storing around 700 pointers is enough to have more than 200 windows being processed in parallel. Since each fmap is read once per window, the pointer module 130 can also keep a pointer per fmap. The overhead is small; with the exception of depthwise separable convolutions, even the smallest filters are 3 x 3 in width and height and several tens of channels deep. In some cases, rather than storing absolute pointers, the pointer module 130 can store a base address and all other pointers as offsets.
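A minimal sketch of the base-plus-offset storage mentioned above (an illustration only; names and field widths are assumptions):

    def store_pointers(pointers):
        # Keep one absolute base address and store the remaining pointers as offsets,
        # which are typically much smaller than full addresses.
        base = pointers[0]
        return base, [p - base for p in pointers[1:]]

    def load_pointers(base, offsets):
        return [base] + [base + off for off in offsets]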

[0111] To maintain the ability to perform reads as wide as necessary for high PE utilization, the starting positions for some BBlocks can be restricted so that they align with rows in the on-chip memories. In some cases, the first value of every fmap and of every S-th column of the imap (where S is the stride) can be restricted so that it is aligned at the beginning of a memory row. Accordingly, padding may occasionally be needed; however, this padding does not increase footprint compared to not compressing the values, as it only minimally reduces the effective compression rate.

[0112] The system 100 can be applied to any other layer type, such as depthwise separable convolutions and pooling. Since each BBlock can be decoded in parallel, the system 100 may need to store parallelism x blocksize pointers to initiate parallelism operations in parallel.

[0113] Besides reducing pointer overhead, the system 100 can reduce group overhead too. The original design uses log2(bit_width) bits per BBlock to store the BBlock size, but this can be further reduced given the observation that the BBlock size value tends to be repeated. The system 100 can use an extra bit per BBlock to indicate whether the size of the BBlock is the same as that of the previous one; in that case, it does not need to read a new size from memory. Hence, a new BBlock size has an overhead of 1b+log2(bit_width)b, and repeated sizes have an overhead of 1b.
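A sketch of this repeated-size optimization in Python (an illustrative encoder under the assumptions above; the bit-level layout is hypothetical):

    def encode_size_fields(size_fields, bits=3):
        # Prefix each BBlock's size field with a 1-bit flag: '1' means "same as the
        # previous BBlock" (1b total), '0' means a new size follows (1b + log2(bit_width)b).
        out, prev = [], None
        for size in size_fields:
            if size == prev:
                out.append("1")
            else:
                out.append("0" + format(size, "b").zfill(bits))
                prev = size
        return "".join(out)

For example, encode_size_fields([5, 5, 5, 4]) costs 4 + 1 + 1 + 4 = 10 bits instead of 12, illustrating why the benefit grows when consecutive BBlocks share the same size.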

[0114] Advantageously, in various embodiments, the system 100 can target inference and is lossless and transparent. It can rely on the expected distribution of all values, and while it benefits from sparsity it does not require it.

[0115] Some neural networks exhibit spatial correlation of values, and this results in values that are in the same BBlock having similar magnitudes. In such cases it is advantageous to perform a function upon the values to reduce the amount of data that needs to be stored. For example, it may be advantageous to first express all values as a difference from a common bias value. A good choice for the bias is, for example, the maximum value within the BBlock or a constant. When the differences are of much smaller magnitude than the original values, this approach results in fewer bits used per packed value. The bias can be stored in an extra optional field. Functions other than difference may be used.
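As a brief illustration of this bias option (a sketch under the assumption that the maximum of the BBlock is chosen as the bias; names are illustrative):

    def bias_encode(bblock):
        # Store the bias in the optional extra field and pack the (smaller) differences.
        bias = max(bblock)
        return bias, [bias - v for v in bblock]

    def bias_decode(bias, deltas):
        return [bias - d for d in deltas]

For a spatially correlated BBlock such as [120, 118, 121, 119], the differences [1, 3, 0, 2] need only 2 bits each instead of 7, which is the saving described above.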

[0116] Some neural networks use a floating point representation of numbers. The representation uses a triplet (sign, exponent, mantissa). For example, a popular representation uses 32 bits, where the sign is 1b, the exponent 8b and the mantissa 23b. The method can be used to dynamically adjust the length of the exponent after removing the bias. For example, for a block of four floating point values (a, b, c, d), where the exponents are respectively Ea, Eb, Ec, and Ed, the encoded block can store instead (Ea-Bias, Eb-Ea, Ec-Ea, Ed-Ea). The width field in this case encodes the number of bits needed to represent the maximum of the values in the encoded block. The Bias is a constant defined by the floating-point standard. A set of adders after decoding can recover the original block (Ea, Eb, Ec, Ed) after (Ea-Bias, Eb-Ea, Ec-Ea, Ed-Ea) is decoded. During compression, a subtractor prior to the compression unit can calculate (Ea-Bias, Eb-Ea, Ec-Ea, Ed-Ea) given the original (Ea, Eb, Ec, Ed) and the Bias. Optionally, mantissas can be stored using a global common width without requiring an extra width field.
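A compact sketch of this exponent handling in Python (illustrative only; it assumes the last delta is taken relative to Ea, matching the pattern of the other deltas, and uses the IEEE-754 single-precision exponent bias of 127):

    def encode_exponents(exps, bias=127):
        # exps = [Ea, Eb, Ec, Ed] -> [Ea - bias, Eb - Ea, Ec - Ea, Ed - Ea]
        ea = exps[0]
        return [ea - bias] + [e - ea for e in exps[1:]]

    def decode_exponents(enc, bias=127):
        ea = enc[0] + bias
        return [ea] + [ea + d for d in enc[1:]]

    exps = [130, 128, 131, 129]
    assert decode_exponents(encode_exponents(exps)) == exps

Note that the deltas may be negative, so the width field may need to cover a sign bit in this case.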

[0117] Other approaches, such as an Efficient Inference Engine (EIE), use deep compression to drastically reduce fmap sizes for fully-connected layers. Deep Compression is very specialized as it alters the fmap to use a limited set of values (for example, 16), and uses Huffman encoding and lookup tables to decode values at runtime. In contrast, the system can operate on "out-of-the-box" neural networks.

[0118] In other approaches, such as compressing DMA, a bit vector per block can be used to remove zero values off-chip. In contrast, in various embodiments, the system can target on-chip compression and all values. In other approaches, such as Extended BitPlane Compression (EBPC), off-chip compression can be used that combines zero-length encoding with bit-plane compression, particularly for pruned models. EBPC's decompression module requires 8 cycles per block of eight 8b values. In contrast, in various embodiments, the system can benefit from both dense and sparse networks and decompresses a block per cycle. In other approaches, such as ShapeShifter, off-chip compression can be used that adapts the data container to value content and uses a zero bit vector. ShapeShifter's containers are stored sequentially in memory space with no regard to alignment. Decompression per block is done sequentially, one value at a time per block. Accordingly, ShapeShifter is not appropriate for on-chip compression. Other approaches, such as Diffy, extend ShapeShifter by storing values as deltas. Diffy targets computational imaging neural networks where the imap values exhibit high spatial correlation. Diffy is significantly more computationally expensive than embodiments of the system, as encoding and decoding require calculating deltas. In other approaches, such as Proteus, values can be stored on-chip and off-chip using profile-derived per-layer data widths; thus Proteus cannot exploit the lopsided distribution of the values within the layer, and the maximum magnitude per layer dictates the width for all its values. Embodiments of the present system can be used to adapt the data width at a substantially finer granularity.

[0119] FIG. 3 illustrates a flowchart for a method 300 for memory compression for deep learning networks, according to an embodiment.

[0120] At block 302, the input module 120 receives an input data stream to be processed by one or more layers of a deep learning model.

[0121] At block 304, the width detector module 126 determines a bit width necessary to accommodate the value from the input data stream with the highest magnitude.

[0122] At block 306, the compacting module 124 stores the least significant bits of the input data stream in a first memory store (e.g., register 'R'), the number of bits being equal to the bit width. If the value requires more bits than those currently left unused in the first memory store, the remaining bits are written into a second memory store (e.g., register 'L').

[0123] At block 308, the compacting module 124 outputs the value of the first memory store, as a consecutive part of a compressed data stream, with an associated width of the data in the first memory store when the first memory store becomes full. The compacting module 124 copies the value of the second memory store to the first memory store.

[0124] At block 310, the decompression module 122 receives data from the compressed data stream having a respective width and moves the data from a first memory store to a second memory store, where the first memory store contains previously stored data from the compressed data stream.

[0125] At block 312, the decompression module 122 stores respective bits of the compressed data stream into the first memory store having a length equal to the width of the first memory store.

[0126] At block 314, the decompression module 122 concatenates the data in the first memory store and the second memory store.

[0127] At block 316, the decompression module 122 outputs the concatenated data, the concatenated data having a width equal to an associated width of the concatenated value received from the compressed data stream.

[0128] The present inventors conducted example experiments to evaluate the technical advantages of the present embodiments. In the example experiments, a custom cycle-accurate simulator was used to model execution time and energy. The simulator used DRAMSim2 to model off-chip memory accesses. All accelerators and hardware modules were implemented in Verilog, synthesized with the Synopsys Design Compiler and laid out with Cadence Innovus for a TSMC 65nm cell library due to license constraints. Power was estimated via Innovus using the circuit activity reported by Mentor Graphics ModelSim. CACTI was used to model the area and power of the on-chip memories. All accelerators operated at 1GHz, matching the CACTI speed estimate for the on-chip memories. TABLE 1 lists the network models studied and the footprint for the fmaps and the imaps. Most models were quantized to 8b. Several models use more aggressive quantization. Originally, these models were developed in conjunction with specialized architectures.

TABLE 1

[0129] The example experiments demonstrated that the present embodiments delivered the highest memory benefits possible without requiring method-specific hardware. The models with specialized quantization include:

• Intel’s INQ, whose fmap values are limited to sixteen signed powers of two or zero. Representing weights as magnitudes requires 16b, whereas 5b were enough with specialized hardware.

• PACT, which requires a modified ReLU with a configurable saturation threshold and used 4b imaps and fmaps for all but the first and the last layer, which used 8b.

• Outlier-Aware quantization, which aggressively reduced the number of bits for most values (e.g., to 4b), except for a few large values (8b outliers) that were handled separately.

• Pruned models from Intel's Skim Caffe repository and MIT's Eyeriss group, since SCNN generally excels for pruned models.

[0130] The example experiments included examining the system with respect to a dense model accelerator with 256 processing engines organized in a 16x16 grid. Each processing engine performed 8 MACs in parallel, producing a single value. Each PE had 64-entry imap, fmap, and omap buffers. The system used a BBlock size of 8. A 32-bank global buffer supplied the processing engines.

[0131] FIG. 12A illustrates a chart reporting the memory footprint for the whole neural network. Footprint is measured in bits and the figure reports footprint with the system relative to the baseline. Boveda uses memory to store: a) the encoded values, b) the per-BBlock width metadata, c) padding due to memory alignment, and d) pointers. On average, the system reduces footprint to 49%. SSD-MobileNet and MobileNet see the least benefit, 16%, which is still considerable given that off-chip accesses are an order of magnitude more expensive. Models with specialized quantization are highlighted in FIG. 12A to demonstrate the ideal memory footprint, where the memory hierarchy was designed specifically for them. The system reduces their footprints to within 4% of what is ideally possible. In the case of ResNet18-PACT, the system reduces footprint much more than what would have been possible on 4b hardware. This is because the system takes advantage of actual value content.

[0132] The system increases the information content per bit of on-chip storage. Accordingly, the processing engines need to fetch less data from the on-chip hierarchy. FIG. 12B shows a chart illustrating this reduction in traffic. Without the system, an access only reads data, whereas with the system, an access may also read metadata. Accordingly, two measurements are shown: a) accesses, and b) bits transferred. Both are normalized to the baseline. The system performed 62% fewer transfers on average and transferred 50% fewer bits in total. As expected, the bulk of the accesses were for the fmaps and imaps. The reduction in bit traffic was less than the reduction in accesses due to the metadata. The trends observed are similar to those for overall footprint. This reduction is directly translatable into energy savings.

[0133] A major design choice when architecting accelerators is the amount of on-chip storage to use. Larger on-chip memory reduces the frequency of data fetches from off-chip. For example, SCNN's on-chip buffers were sized so that it rarely had to spill the feature maps off-chip. The example experiments studied four policies for sizing the on-chip capacity, being able to fit: a) the imap, omap and the fmaps for the largest layer; b) the fmaps and a full row of windows from the imap; and c) a full row of windows from the imap and an fmap per processing engine. With policy (a), only the input and the final output went off-chip. With policy (b), there was a guarantee per layer that each value is accessed once from off-chip. With policy (c), there was a guarantee of a single access per layer only for the imap and the omap. Also considered was (d) layer fusion, which processes subsets of several layers without going off-chip for the intermediate i/omap values.

[0134] FIG. 13A is a chart illustrating the on-chip memory capacity needed under each sizing policy, above. Capacities were normalized to a baseline under the same policy (which differs per policy). Overall, the reduction in storage needs followed the compression rates closely. In one case, SSD-MobileNet with the first policy (full layer on-chip), no reduction was possible: there was a single layer for which the system did not reduce overall on-chip data volume. There were energy and performance benefits regardless, since the system reduced overall model traffic and footprint. Regardless of the access policy used, the system reduced how often the accelerator had to go off-chip. FIG. 13B shows a chart illustrating off-chip traffic per model with the system (solid lines) and without (dotted lines). For clarity, only a subset of the networks are shown. Traffic was normalized so that, where possible, every value was accessed once per layer. As the on-chip memory size increased, traffic approached this minimum. The system allows for the use of smaller on-chip memories. Moreover, for a given memory capacity, the system reduces off-chip traffic. For example, in the case of SegNet, even 512KB of on-chip storage was not enough to achieve minimal traffic without use of the system. With 32KB of on-chip storage, the system reduces off-chip traffic by 3.8x for ResNet18 (relative to reading values once, traffic is 1.48x with the system and 5.66x without) and by 2.6x for ResNet50S-OA.

[0135] The example experiments measured performance for three configurations with on-chip global buffers of 96KB, 192KB, and 256KB. All used DDR4-3200 dual-channel off-chip memory. FIG. 14A shows a chart illustrating speedups normalized to the baseline with the 96KB global buffer. The system improves performance by 1.4x, 1.2x and 1.1x on average, respectively. Improvements are the highest for SegNet, whose convolutional layers are rather large and where the system compresses data considerably. Benefits of the system are also pronounced for MobileNetV2-OA, MobileNet, and ResNet18-INQ, where the system manages to avoid spilling off-chip for several layers. Since the on-chip hierarchy of the system can sustain peak execution bandwidth for the baseline, performance benefits with the system come from reduced off-chip traffic. FIG. 14A also shows relative energy for the same memory configurations. The system saves 28%, 16% and 10% of the energy on average for the 96KB, 192KB and 256KB configurations, respectively. These benefits are due to less off-chip and on-chip traffic. As the on-chip capacity increases, off-chip accesses reduce and so does their overall energy cost.

[0136] TABLE 2 reports area and power for the compression and decompression logic. The width detector module 126 is shared per BBlock. Total area overhead is 6.7%, 3.8%, and 3.2% for the 96KB, 192KB, and 256KB on-chip configurations, respectively. However, if this area is instead spent on extra memory for the baseline, the system is still 1.29x, 1.15x, and 1.1x faster on average, and is slightly more energy efficient since on-chip accesses for the baseline become slightly more expensive.

TABLE 2

                 BBlock    Area (µm²) 8b    Area (µm²) 16b    Power (mW) 8b    Power (mW) 16b
Width detector     4           109.8            247.68            0.042            0.498
Width detector     8           199.08           447.48            0.054            0.652
Compressor         -           288.36           710.64            0.238            0.470

[0137] SCNN used zero compression on-chip and off-chip. For 16b networks, SCNN used 4b zero-skip indexes. In the example experiments, the system used 3b indexes instead for the 8b networks to reduce metadata overhead. It was found that doing so does not affect the number of zeros that are eliminated. The system, in this case, does not compress the zero-skip indexes. FIG. 14B shows a chart illustrating the reduction in total model footprint using the system over SCNN's zero compression. The system reduces memory footprint by an average of 34% relative to zero compression. SCNN generally sizes its on-chip memory to fit all imaps on-chip for AlexNet and GoogLeNet. This configuration results in larger networks, such as ResNet50, spilling data off-chip. Furthermore, the accumulator size limits the number of omap values and, as a result, the number of concurrent filters. By amplifying on-chip storage capacity, the system reduces spills. These effects were studied over three different per-PE imap/accumulator configurations: 10KB/6KB as in SCNN; 4KB/4KB; and 2KB/2KB. The off-chip memory used two channels of DDR4-3200. The area overhead for these configurations was 3.1%, 2.3%, and 1.8%, respectively, for SCNN 16b. Overheads are smaller for SCNN 8b.

[0138] FIG. 15A shows a chart illustrating speedup over the 2KB/2KB configuration with and without use of embodiments of the system. For the 2KB/2KB configuration, the system improved performance by 29%. The improvement with the system was more pronounced for the more recent ResNet50 models as their imaps are larger. With 10KB/6KB, the system improved performance by 15%. FIG. 15A also shows that energy is reduced by 26%, 24%, and 20% on average, respectively, for the three configurations. The example experiments show that the system always reduced energy. Compute-bound models, such as GoogLeNet or ResNet50, saw more benefits since on-chip traffic accounts for a higher fraction of overall energy.

[0139] The example experiments demonstrated that the system can also benefit a first-generation tensor processing unit (TPU). The TPU incorporated 28MB of on-chip imap memory and streamed the fmaps from an off-chip DRAM with a weight-stationary dataflow. A 256x256x8b systolic array computed the omaps. The fmaps were kept compressed in DRAM and in the on-chip buffers, decompressing them just before the systolic array. Similarly, the imaps were kept compressed in the on-chip memory and were decompressed by the Systolic Data Setup unit. FIG. 16A is a chart illustrating the memory energy breakdown for the TPU with and without the system for BBlocks of 16. The system on top of the TPU had a negligible area overhead of less than 0.1%.

[0140] While models initially used 16b fixed-point, 8b is standard today for many models. To further investigate the system's potential effectiveness for narrow datatypes across a broader set of models, the example experiments generated synthetic 6b, 4b, and 3b networks by scaling existing 8b layers to fewer bits while maintaining the original relative distribution of values (linear quantization). FIG. 16B is a chart showing ideal compression rates for a representative subset of these layers compressed in BBlocks of 8. The results show that the system remains effective for 4b layers. For 3b layers, the system occasionally fails to reduce footprint or even expands it, but generally still provides computational advantages.

[0141] In general, the system's compression rate depends on the value distribution and is a function of Bmax, Bmin, the BBlock size, and P(X), where Bmax is the maximum bit length, P(X) is the probability of a certain bit length given by the value distribution, and Bmin is 2 for signed values and 1 otherwise. For signed values, maximum compression is achieved when P(X = 2) = 1. For 3b values and a group size of 8, maximum compression is limited to 25%, while for 4b it is limited to 43.75%. This does not take into account the overhead of padding and pointers, which depends on the dataflow, accelerator, and layer dimensions.
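A closed form consistent with the limits quoted above (offered as an illustrative reconstruction, assuming the width metadata costs ceil(log2(Bmax)) bits per BBlock of N values) is:

    rate = 1 - ( sum over X from Bmin to Bmax of P(X) * X  +  ceil(log2(Bmax)) / N ) / Bmax

which the following snippet checks against the stated best cases:

    import math

    def max_compression(bmax, bmin=2, group=8):
        # Best-case savings when every BBlock needs only bmin bits (signed data).
        width_field = max(1, math.ceil(math.log2(bmax)))
        return 1 - (bmin + width_field / group) / bmax

    assert abs(max_compression(3) - 0.25) < 1e-9      # 25% for 3b values, BBlocks of 8
    assert abs(max_compression(4) - 0.4375) < 1e-9    # 43.75% for 4b values, BBlocks of 8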

[0142] FIG. 17 is a chart showing footprint reduction with the optimized BBlock size overhead. On average, the repeated-group optimization reduces BBlock size overhead by 28%. ResNet18-PACT sees the largest reduction, 58%, since BBlock sizes for 4b values are more likely to be repeated.

[0143] FIG. 15B is a chart showing footprint reduction for Frequent Pattern Compression (FPC) and Base-Delta-Immediate (BDI), which are cache compression schemes for general purpose systems. Both target value width in addition to other properties. FPC was motivated by the observation that programmers tend to use 32b variables with no regard to the actual value range they need. FPC detects whether values can be stored in power-of-two sized containers (4b being the smallest). On average it reduces footprint by 18%. This is mostly from removing zeros. BDI exploits the low dynamic range of values in programs (neighbouring values tend to be close in value). It operates on chunks of 64 bytes and reduces width at a byte granularity. It represents values as deltas of 4, 2, or 1 bytes from either zero or a first value of 8, 4, or 2 bytes. All-zero chunks are represented as 1 byte plus metadata. This byte granularity is too large for neural networks. At best, BDI reduces footprint by 7%, for ResNet50S-OA, where it takes advantage of zero values.

[0144] The example experiments evaluated a variant of the system, system-BDI, which incorporates elements from BDI: it applied the per-value compression method of BDI but at a smaller granularity. The compression options were: all bits are zero, and delta sizes of 8b, 4b, and 2b. It packed values in hileras so that decompression can proceed in parallel and without requiring a large crossbar at the output. The base was set to always be 1 byte, while the working set of values was reduced to BBlocks of 8. The system, using BDI, achieved 44% compression on average, ignoring the overheads of width and pointer metadata. This is close to what the system, without using BDI, achieves. However, decompressing values with the system using BDI was considerably more complex and required more energy. For example, decompressing a block needs 8 additions in parallel, plus broadcasting the base across all of them. Compression is also more involved: it performs all compression possibilities in parallel before choosing the best. The system without BDI both achieves a better compression rate and is simpler to implement.

[0145] In addition, the example experiments compared the system to run-length encoding and dictionary-based compression, which also exploit value content. Run-length encoding was limited to 8 values and the dictionary table to 8 entries to avoid prohibitive overheads for 8b values. Both of these approaches, when compared to the system, achieved a lower compression rate while requiring an expensive crossbar for decompression.

[0146] The example experiments illustrate that the present embodiments are easy to implement and provide an effective on-chip compression technique for neural networks. They reduce on-chip traffic while boosting the effective on-chip capacity. As a result, they reduce the amount of on-chip storage needed to avoid excessive off-chip accesses. Moreover, for a given on-chip storage configuration, they reduce how often off-chip accesses are needed.

[0147] Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the claims appended hereto.