Title:
NEURAL NETWORK PROCESSING ELEMENT
Document Type and Number:
WIPO Patent Application WO/2019/213745
Kind Code:
A1
Abstract:
Described is a neural network accelerator tile. It includes an activation memory interface for interfacing with an activation memory to receive a set of activation representations, a weight memory interface for interfacing with a weight memory to receive a set of weight representations, and a processing element. The processing element is configured to implement a one-hot encoder, a histogrammer, an aligner, a reducer, and an accumulation sub-element which process the set of activation representations and the set of weight representations to produce a set of output representations.

Inventors:
MOSHOVOS ANDREAS (CA)
SHARIFY SAYEH (CA)
MAHMOUD MOSTAFA (CA)
Application Number:
PCT/CA2019/050525
Publication Date:
November 14, 2019
Filing Date:
April 25, 2019
Assignee:
GOVERNING COUNCIL UNIV TORONTO (CA)
International Classes:
G06N3/06; G06N3/04
Foreign References:
CA 2990709 A1 (2017-11-30)
CA 2990712 A1 (2017-12-21)
Other References:
KNAGGE, GEOFF: "Booth Recoding", ASIC DESIGN FOR SIGNAL PROCESSING, 27 July 2010 (2010-07-27), XP055650694, Retrieved from the Internet [retrieved on 20190628]
JORGE ALBERICIO ET AL.: "Bit-pragmatic deep neural network computing", IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, vol. 2017, 14 October 2017 (2017-10-14), pages 382 - 394, XP055450862, Retrieved from the Internet [retrieved on 20190628], DOI: 10.1145/3123939.3123982
ALBERTO DELMAS: "Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How", 9 March 2018, CORNELL UNIVERSITY LIBRARY
ALBERICIO, JORGE: "Bit-Pragmatic Deep Neural Network Computing", PROCEEDINGS OF THE 50TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, 14 October 2017 (2017-10-14)
JAMES GARLAND: "Low Complexity Multiply-Accumulate Units for Convolutional Neural Networks with Weight-Sharing", 30 January 2018, CORNELL UNIVERSITY LIBRARY
See also references of EP 3769266A4
Attorney, Agent or Firm:
HEER, Christopher (CA)
Claims:
What is claimed is:

1. A neural network accelerator tile, comprising: an activation memory interface for interfacing with an activation memory to receive a set of activation representations; a weight memory interface for interfacing with a weight memory to receive a set of weight representations; and a processing element configured to implement a one-hot encoder, a histogrammer, an aligner, a reducer, and an accumulation sub-element to process the set of activation representations and the set of weight representations to produce a set of output representations.

2. The accelerator tile of claim 1, wherein the activation memory interface is configured to provide the set of activation representations to the processing element as a set of activation one-offset pairs, and the weight memory interface is configured to provide the set of weight representations to the processing element as a set of weight one-offset pairs.

3. The accelerator tile of claim 2, wherein the processing element is further configured to implement an exponent sub-element.

4. The accelerator tile of claim 3, wherein the exponent sub-element is configured to combine the set of activation one-offset pairs and the set of weight one-offset pairs to produce a set of one-offset pair products.

5. The accelerator tile of claim 4, wherein the exponent sub-element includes a set of magnitude adders and a corresponding set of sign gates, one pair of magnitude adder and sign gate to provide each one-offset pair product of the set of one-offset pair products.

6. The accelerator tile of claim 1, wherein the one-hot encoder includes a set of decoders to perform one-hot encoding of a set of one-hot encoder inputs.

7. The accelerator tile of claim 2, wherein the histogrammer is configured to sort a set of histogrammer inputs by value.

8. The accelerator tile of claim 2, wherein the aligner is configured to shift a set of aligner inputs to provide a set of shifted inputs to be reduced.

9. The accelerator tile of claim 2, wherein the reducer includes an adder tree to reduce a set of reducer inputs into a partial sum.

10. The accelerator tile of claim 2, wherein the accumulation sub-element includes an accumulator and is configured to receive a partial sum and add the partial sum to the accumulator to accumulate a product over multiple cycles.

11. The accelerator tile of claim 1, wherein the processing element is further configured to implement a concatenator.

12. The accelerator tile of claim 11, wherein the concatenator is provided to concatenate a set of concatenator inputs to produce a set of grouped counts to be shifted and reduced by the aligner and reducer to produce a partial product.

13. A neural network comprising a set of neural network accelerator tiles according to claim 1.

14. The neural network of claim 13, wherein the neural network is implemented as a convolutional neural network.

15. The neural network of claim 13, wherein the set of neural network accelerator tiles includes a column of accelerator tiles, the tiles of the column of accelerator tiles configured to each receive the same set of activation representations and to each receive a unique set of weight representations.

16. The accelerator tile of claim 1, wherein the processing element is configured to, over multiple cycles, process an activation representation of the set of activation representations with a weight representation of the set of weight representations.

17. A method of producing a neural network partial product, comprising: receiving a set of activation representations; receiving a set of weight representations, each weight representation corresponding to an activation representation of the set of activation representations; combining the set of weight representations with the set of activation representations by combining each weight representation with its corresponding activation representation to produce a set of partial results; encoding the set of partial results to produce a set of one-hot representations; accumulating the set of one-hot representations into a set of histogram bucket counts; aligning the counts of the set of histogram bucket counts according to their size; and reducing the aligned counts of the set of histogram bucket counts to produce the neural network partial product.

18. The method of claim 17, further comprising outputting the neural network partial product to an accumulator to accumulate a product.

19. The method of claim 18, further comprising outputting the product to an activation memory.

20. The method of claim 17, wherein each activation representation is received as an activation pair of an activation magnitude and an activation sign, and each weight representation is received as a weight pair of a weight magnitude and a weight sign.

21. The method of claim 20, further comprising, prior to aligning the counts of the set of histogram bucket counts, recursively grouping the counts of the set of histogram bucket counts into a set of grouped counts and providing the set of grouped counts to be aligned and reduced in place of the set of histogram bucket counts.

22. The method of claim 17, wherein combining a weight representation of the set of weight representations with an activation representation of the set of activation representations is performed over multiple cycles.

Description:
NEURAL NETWORK PROCESSING ELEMENT

FIELD OF THE INVENTION

[0001] The present specification relates generally to neural networks, and relates specifically to implementing a processing element of a neural network.

BACKGROUND OF THE INVENTION

[0002] Modern computing hardware is energy constrained. Reducing the energy needed to perform computations is often essential in improving performance.

[0003] For example, many of the computations performed by convolutional neural networks during inference are due to 2D convolutions. 2D convolutions entail numerous multiply-accumulate operations where most of the work is due to the multiplication of an activation and a weight. Many of these multiplications are ineffectual.

[0004] Training, running, or otherwise using neural networks often involves performing a vast number of computations. Performing fewer computations typically results in efficiencies such as time and energy savings.

SUMMARY OF THE INVENTION

[0005] In an aspect of the present invention, there is provided a neural network accelerator tile, comprising: an activation memory interface for interfacing with an activation memory to receive a set of activation representations; a weight memory interface for interfacing with a weight memory to receive a set of weight representations; and a processing element configured to implement a one-hot encoder, a histogrammer, an aligner, a reducer, and an accumulation sub-element to process the set of activation representations and the set of weight representations to produce a set of output representations.

[0006] In an aspect of the present invention, there is provided a method of producing a neural network partial product, comprising receiving a set of activation representations; receiving a set of weight representations, each weight representation corresponding to an activation representation of the set of activation representations; combining the set of weight representations with the set of activation representations by combining each weight representation with its corresponding activation representation to produce a set of partial results; encoding the set of partial results to produce a set of one-hot representations; accumulating the set of one-hot representations into a set of histogram bucket counts; aligning the counts of the set of histogram bucket counts according to their size; and reducing the aligned counts of the set of histogram bucket counts to produce the neural network partial product.

[0007] Other aspects and features according to the present application will become apparent to those ordinarily skilled in the art upon review of the following description of embodiments of the invention in conjunction with the accompanying figures.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The principles of the invention may better be understood with reference to the accompanying figures provided by way of illustration of an exemplary embodiment, or embodiments, incorporating principles and aspects of the present invention, and in which:

[0009] FIG. 1 is a set of bar graphs comparing the average work reduction of eight example computational arrangements across a set of networks;

[0010] FIG. 2 is a schematic diagram of an example convolutional layer;

[0011] FIGs. 3A, 3B, and 3C are schematic diagrams of three example processing engines;

[0012] FIG. 4A is a schematic diagram of a processing element, according to an embodiment;

[0013] FIG. 4B is a schematic diagram of a portion of a processing element, according to an embodiment;

[0014] FIGs. 5A and 5B are schematic diagrams of concatenation units, according to embodiments;

[0015] FIG. 6 is a schematic diagram of an accelerator tile, according to an embodiment;

[0016] FIGs. 7A and 7B are schematic diagrams of example accelerator tile configurations;

[0017] FIG. 8 is a bar graph comparing the performance of a set of example tile configurations across a number of networks;

[0018] FIG. 9 is a schematic diagram of an example accelerator tile configuration;

[0019] FIG. 10 is a schematic diagram of an example accelerator tile configuration; and

[0020] FIG. 11 is a schematic diagram of an accumulator configuration.

[0021] Like reference numerals indicate like or corresponding elements in the drawings.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0022] The description that follows, and the embodiments described therein, are provided by way of illustration of an example, or examples, of particular embodiments of the principles of the present invention. These examples are provided for the purposes of explanation, and not of limitation, of those principles and of the invention. In the description, like parts are marked throughout the specification and the drawings with the same respective reference numerals. The drawings are not necessarily to scale and in some instances proportions may have been exaggerated in order to more clearly depict certain features of the invention.

[0023] This description relates to accelerators for decomposing multiplications down to the bit level to reduce the amount of work performed, such as the amount of work performed during inference for image classification models. Such reductions can improve execution time and improve energy efficiency.

[0024] This description further relates to accelerators which can improve the execution time and energy efficiency of Deep Neural Network (DNN) inference. In some embodiments, some of the work reduction potential is given up, yielding a low-cost, simple, and energy-efficient design.

[0025] As much modern computing hardware is energy-constrained, developing techniques to reduce the amount of energy needed to perform a computation is often essential for improving performance. For example, the bulk of the work performed by most convolutional neural networks during inference is due to 2D convolutions. These convolutions involve a great many multiply-accumulate operations, for which most work is due to the multiplication of an activation A and a weight W. Reducing the number of ineffectual operations may greatly improve energy efficiency.

[0026] A variety of computational arrangements have been suggested to decompose an A × W multiplication into a collection of simpler operations. For example, if A and W are 16b fixed-point numbers, A × W can be approached as 256 1b × 1b multiplications or 16 16b × 1b multiplications.

[0027] FIG. 1 includes six bar graphs, one graph for each of six models used in testing a set of eight example computational arrangements. Each bar graph compares the potential reduction in work for the eight compared computational arrangements.

[0028] The leftmost bar 1100 in each bar graph represents a first computational arrangement which avoids multiplications where the activation is zero, and is representative of the first generation of value-based accelerators which were motivated by the relatively large fraction of zero activations that occur in convolutional neural networks (see for example J. Albericio, P. Judd, T. Hetherington, T. Aamodt, N. Enright Jerger, and A. Moshovos, "CNVLUTIN: Ineffectual-Neuron-Free Deep Neural Network Computing," in Proceedings of the International Symposium on Computer Architecture, 2016, hereby incorporated by reference).

[0029] The second bar from the left 1200 in each graph represents a second computational arrangement which skips those multiplications where either the activation or the weight is zero, and is representative of accelerators that target sparse models where a significant fraction of synaptic connections have been pruned (see for example A. Parashar, M. Rhu, A. Mukkara, A. Puglielli, R. Venkatesan, B. Khailany, J. Emer, S. W. Keckler, and W. J. Dally, "SCNN: An accelerator for compressed-sparse convolutional neural networks," in Proceedings of the 44th Annual International Symposium on Computer Architecture, ISCA '17, (New York, NY, USA), pp. 27-40, ACM, 2017, hereby incorporated by reference).

[0030] The third and fourth bars from the left 1300 and 1400 represent third and fourth computational arrangements, respectively, which target precision. The third computational arrangement, represented by bar 1300, targets the precision of the activations (see for example P. Judd, J. Albericio, T. Hetherington, T. Aamodt, and A. Moshovos, "Stripes: Bit-serial Deep Neural Network Computing," in Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-49, 2016 and A. Delmas, P. Judd, S. Sharify, and A. Moshovos, "Dynamic stripes: Exploiting the dynamic precision requirements of activation values in neural networks," CoRR, vol. abs/1706.00504, 2017, both hereby incorporated by reference), and the fourth computational arrangement, represented by bar 1400, targets the precision of activations and weights (see for example S. Sharify, A. D. Lascorz, P. Judd, and A. Moshovos, "Loom: Exploiting weight and activation precisions to accelerate convolutional neural networks," CoRR, vol. abs/1706.07853, 2017, hereby incorporated by reference).

[0031] Further potential for work reduction exists if multiplication is decomposed at the bit level. For example, assuming these multiplications operate on 16b fixed-point values, the multiplication is given by equation (1) below:

A × W = Σ(i=0..15) Σ(j=0..15) A_i · W_j · 2^(i+j)     (1)

[0032] In equation (1) above, A_i and W_j are bits of A and W respectively. When decomposed down to the individual 256 single-bit multiplications, only those multiplications where A_i and W_j are non-zero are effectual.
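
By way of illustration only, and not as part of the original disclosure, the following Python sketch counts how many of the 256 single-bit products of equation (1) are effectual for a given activation and weight; the function names are illustrative assumptions, and non-negative values are assumed for simplicity.

    def bits(value, width=16):
        """Return the bit values (0 or 1) of a width-bit positional representation."""
        return [(value >> i) & 1 for i in range(width)]

    def effectual_bit_products(a, w, width=16):
        """Count single-bit products A_i * W_j * 2^(i+j) that are non-zero (equation (1))."""
        a_bits, w_bits = bits(a, width), bits(w, width)
        product = sum(ai * wj << (i + j)
                      for i, ai in enumerate(a_bits)
                      for j, wj in enumerate(w_bits))
        effectual = sum(ai & wj for ai in a_bits for wj in w_bits)
        assert product == a * w       # the bit-level decomposition reproduces the full product
        return effectual              # out of width * width single-bit products

    # Example: only 4 of the 256 single-bit products are effectual here.
    print(effectual_bit_products(0b0000000000000110, 0b0000000000000101))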

[0033] The fifth and sixth bars from the left 1500 and 1600 represent fifth and sixth computational arrangements, respectively, which decompose multiplications into single-bit multiplications. The fifth computational arrangement, represented by bar 1500, skips single-bit multiplications where the activation bit is zero (see for example J. Albericio, A. Delmas, P. Judd, S. Sharify, G. O'Leary, R. Genov, and A. Moshovos, "Bit-pragmatic deep neural network computing," in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-50 '17, pp. 382-394, 2017, hereby incorporated by reference). The sixth arrangement, represented by bar 1600, skips single-bit multiplications where either the activation or the weight bit is zero.

[0034] However, in some arrangements, rather than representing A and W as bit vectors, they can instead be Booth-encoded as signed powers of two, or higher radix terms. The seventh and eighth bars from the left 1700 and 1800 represent seventh and eighth computational arrangements, respectively, in which values are Booth-encoded as signed powers of two or as higher radix terms instead of being represented as bit vectors. The seventh arrangement, represented by bar 1700, Booth-encodes the activation values, while the eighth arrangement, represented by bar 1800, Booth-encodes both activation and weight values. The multiplication of activations by weights is then given by equation (2) below:

A × W = Σ_i Σ_j At_i · Wt_j     (2)

[0035] In equation (2) above, At_i and Wt_j are terms of the form ±2^x. As with the positional representation, it is only those products where both At_i and Wt_j are non-zero that are effectual. Accordingly, FIG. 1 shows the potential reduction in work where ineffectual terms for a Booth-encoded activation are skipped with 1700 (see for example J. Albericio, A. Delmas, P. Judd, S. Sharify, G. O'Leary, R. Genov, and A. Moshovos, "Bit-pragmatic deep neural network computing," in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-50 '17, pp. 382-394, 2017, incorporated by reference above), and shows the potential reduction in work where ineffectual terms for a Booth-encoded activation and Booth-encoded weight are skipped with 1800.

[0036] As indicated by FIG. 1, a hardware accelerator which computes only effectual terms, such as effectual terms of Booth-encoded activations and weights, has the potential to greatly reduce computational work. In many embodiments, configurations which target Booth-encoded computations can also be used to compute bit-vector representations.

[0037] Computational arrangements may be used in the implementation of neural networks, such as convolutional neural networks ('CNN'). CNNs usually consist of several convolutional layers followed by a few fully connected layers, and in image processing most of the operation time is spent on processing convolutional layers, in which a 3D convolution operation is applied to the input activations to produce output activations. An example of a convolutional layer is shown in FIG. 2, which shows a c × x × y input activation block 2100 and a set of N c × h × k filters 2200. The layer dot-products each of these N filters (denoted f^0, f^1, ..., f^(N-1)) 2200 with a c × h × k subarray of the input activations (or 'window'), such as window 2300, to generate a single output activation 2400. Convolving the N filters over the activation windows results in N × o_h × o_k outputs which are passed to the input of the next layer. The convolution of activation windows and filters takes place in a sliding window fashion with a constant stride S. Fully connected layers can be implemented as convolutional layers in which filters and input activations have the same dimensions (x = h and y = k).
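
As a purely illustrative aid, the following Python sketch (not part of the original disclosure) computes one convolutional layer as described above; the output dimensions o_h and o_k are assumed to follow the usual sliding-window relation for the given stride, which is an assumption of the sketch rather than a statement from the text.

    import numpy as np

    def conv_layer(activations, filters, stride=1):
        """activations: (c, x, y) input block; filters: (N, c, h, k). Returns (N, o_h, o_k)."""
        c, x, y = activations.shape
        N, cf, h, k = filters.shape
        assert c == cf
        o_h = (x - h) // stride + 1
        o_k = (y - k) // stride + 1
        out = np.zeros((N, o_h, o_k))
        for n in range(N):                      # one output channel per filter
            for i in range(o_h):
                for j in range(o_k):            # slide the c x h x k window with stride S
                    window = activations[:, i*stride:i*stride+h, j*stride:j*stride+k]
                    out[n, i, j] = np.sum(window * filters[n])   # dot product -> one output
        return out

    out = conv_layer(np.random.rand(3, 8, 8), np.random.rand(4, 3, 3, 3), stride=1)
    print(out.shape)   # (4, 6, 6)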

[0038] Data parallel engines, such as engines using 16b fixed-point activations and weights, have been suggested for use in implementing neural networks (see for example the DaDianNao accelerator disclosed in Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, and O. Temam, "Dadiannao: A machine-learning supercomputer," in Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pp. 609-622, Dec 2014, hereby incorporated by reference). In an example of such an engine, which will be referred to as a BASE engine, 8 inner product units (IPs) may be provided, each accepting 16 input activations and 16 weights as inputs. Where 8 IPs are used, the 16 input activations may be broadcast to all 8 IPs and each IP may receive its own 16 weights; every cycle each IP multiplies 16 input activations by their 16 corresponding weights and reduces them into a single partial output activation using a 16-input, 32b adder tree. The partial results may be accumulated over multiple cycles to generate the final output activation. An activation memory may provide the activations and a weight memory may provide the weights.
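
The BASE engine described above can be modelled, purely for illustration, by the following Python sketch; the function name and data layout are assumptions, not part of the referenced design.

    def base_engine_cycle(activations, weights_per_ip, partials):
        """activations: 16 values broadcast to all IPs; weights_per_ip: 8 lists of 16 weights;
        partials: 8 running partial output activations, accumulated across cycles."""
        assert len(activations) == 16 and len(weights_per_ip) == 8
        for ip in range(8):
            # 16 multiplications followed by an adder-tree style reduction
            partial = sum(a * w for a, w in zip(activations, weights_per_ip[ip]))
            partials[ip] += partial      # accumulate over multiple cycles
        return partials

    partials = [0] * 8
    partials = base_engine_cycle(list(range(16)), [[1] * 16 for _ in range(8)], partials)
    print(partials)   # each IP produced sum(range(16)) = 120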

[0039] Variations of data parallel engines may be used to implement the computational arrangements discussed above, such as the examples shown in FIG. 3.

[0040] FIG. 3A is a simplified schematic diagram of a bit-parallel engine 3100, shown multiplying two 4b activation and weight pairs, generating a single 4b output activation per cycle. 4b and other bit sizes are used in various parts of this description as examples, and other sizes may be used in other embodiments. The throughput of engine 3100 is two 4b × 4b products per cycle.

[0041] FIG. 3B is a simplified schematic diagram of a bit-serial engine 3200 (see for example S. Sharify, A. D. Lascorz, P. Judd, and A. Moshovos, "Loom: Exploiting weight and activation precisions to accelerate convolutional neural networks," CoRR, vol. abs/1706.07853, 2017, incorporated by reference above). To match the throughput of bit-parallel engine 3100, engine 3200 processes 8 input activations and 8 weights every cycle, producing 32 1b × 1b products. As both activations and weights are processed bit-serially, engine 3200 produces 16 output activations in P_a × P_w cycles, where P_a and P_w are the activation and weight precisions, respectively. As a result, engine 3200 outperforms engine 3100 by 16 / (P_a × P_w). As depicted in FIG. 3B, since both activations and weights can be represented in three bits, the speedup of engine 3200 over engine 3100 of FIG. 3A is 1.78×. Yet, engine 3200 still processes some ineffectual terms; for example, in the first cycle 27 of the 32 1b × 1b products are zero and thus ineffectual.

[0042] FIG. 3C is a simplified schematic diagram of an engine 3300 in which both the activations and weights are represented as vectors of essential powers of two (or 'one-offsets'). For example, A_0 = (110) is represented as a vector of its one-offsets A_0 = (2, 1). Every cycle each processing element ('PE') accepts a 4b one-offset of an input activation and a 4b one-offset of a weight and adds them to produce the power of the corresponding product term in the output activation. Since engine 3300 processes activations and weights term-serially, it takes t_a × t_w cycles for each PE to produce the product terms of an output activation, where t_a and t_w are the number of one-offsets in the corresponding input activation and weight. The engine processes the next set of activation and weight one-offsets after T cycles, where T is the maximum t_a × t_w among all the PEs. In the example of FIG. 3C, the maximum T is 6, corresponding to the pair A_0 = (2,1) and W_0^1 = (2,1,0) from PE(1,0). Thus, engine 3300 can start processing the next set of activations and weights after 6 cycles, achieving a 2.67× speedup over the bit-parallel engine 3100 of FIG. 3A.
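
The timing behaviour described for engine 3300 can be illustrated with the following Python sketch (illustrative only; the helper names are assumptions, and a plain positional bit count stands in for the one-offset count).

    def one_offsets(value):
        """Number of non-zero bits (one-offsets) in a non-negative value;
        a Booth encoding could reduce this count further."""
        return bin(value).count("1")

    def cycles_for_engine(pairs):
        """pairs: list of (activation, weight) values handled by the PEs in parallel.
        Returns T, the number of cycles before the next set can be processed."""
        return max(one_offsets(a) * one_offsets(w) for a, w in pairs)

    # Example mirroring FIG. 3C: the PE whose activation has 2 one-offsets and whose
    # weight has 3 one-offsets dominates, so T = 6 and the speedup over a 16-cycle
    # bit-serial schedule would be 16 / 6 = 2.67x.
    print(cycles_for_engine([(0b110, 0b111), (0b100, 0b001)]))   # -> 6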

[0043] Some embodiments of the present invention are designed to minimize the computation required for producing the products of input activations and weights by processing only the essential bits of both the input activations and weights. In some embodiments, input activations and weights are converted on-the-fly into a representation which contains only the essential bits, and one pair of essential bits is processed each cycle: one activation bit and one weight bit.

[0044] In some embodiments, a hardware accelerator may be provided for processing only essential bits, whether those essential bits are processed in the form of one-offsets, regular positional representations, non-fixed-point representations, or other representations of the essential bits.

[0045] In embodiments in which the essential bits are processed in the form of one-offsets, an accelerator may represent each activation or weight as a list of its one-offsets (o_n, ..., o_0). Each one-offset is represented as a (sign, magnitude) pair. For example, an activation A = -2(10) = 1110(2), whose Booth-encoding has the single term -2^1, would be represented as ((-,1)), and an activation A = 7(10) = 0111(2), whose Booth-encoding is 2^3 - 2^0, would be represented as ((+,3), (-,0)). The sign can be encoded using a single bit, with, for example, 0 representing '+' and 1 representing '-'.
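
The one-offset representation can be illustrated with the following Python sketch, which is not from the original text; a non-adjacent-form recoding is used here as one possible stand-in for the Booth encoder, and it reproduces the A = -2 and A = 7 examples above.

    def one_offset_pairs(value):
        """Recode value into (sign, magnitude) pairs so that
        value == sum(s * 2**m for s, m in pairs), using the non-adjacent form."""
        pairs, position = [], 0
        while value != 0:
            if value & 1:
                digit = 2 - (value & 3)       # +1 if value % 4 == 1, -1 if value % 4 == 3
                value -= digit
                sign = '+' if digit > 0 else '-'
                pairs.append((sign, position))
            value >>= 1
            position += 1
        return list(reversed(pairs))           # most-significant term first

    print(one_offset_pairs(-2))   # [('-', 1)]
    print(one_offset_pairs(7))    # [('+', 3), ('-', 0)]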

[0046] In some embodiments, a weight W and an input activation A are each represented as lists of (sign, magnitude) terms, (s_j, t_j) and (s'_i, t'_i) respectively, and the product is calculated as set out in equation (3) below:

A × W = (Σ_i s'_i · 2^(t'_i)) × (Σ_j s_j · 2^(t_j)) = Σ_i Σ_j (s'_i × s_j) · 2^(t'_i + t_j)     (3)

[0047] Implementing equation (3), instead of processing the full A × W product in a single cycle, an accelerator processes each product of a single t' term of the input activation A and a single t term of the weight W individually. Since these terms are powers of two, their product will also be a power of two. As such, embodiments implementing equation (3) can first add the corresponding exponents t' + t. If a single product is processed per cycle, the final 2^(t'+t) value can be calculated via a decoder. Where more than one term pair is processed per cycle, embodiments can use one decoder per term pair to calculate the individual 2^(t'+t) products and then employ an efficient adder tree to accumulate them all, as described further below with reference to an exemplary embodiment.
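
Continuing the illustration, equation (3) can be checked with the short Python sketch below (an assumption-laden software model, not the disclosed hardware): each term pair contributes a signed power of two whose exponent is the sum t' + t.

    def product_from_terms(a_terms, w_terms):
        """a_terms, w_terms: lists of (sign, magnitude) pairs, e.g. from a Booth-style recoding.
        Each pair of terms contributes sign' * sign * 2**(t' + t), as in equation (3)."""
        total = 0
        for s_a, t_a in a_terms:
            for s_w, t_w in w_terms:
                sign = 1 if s_a == s_w else -1      # XOR of the term signs
                total += sign * (1 << (t_a + t_w))  # exponents add; a decoder would one-hot this
        return total

    # Terms for A = -2 and W = 7 in (sign, magnitude) form:
    a_terms = [('-', 1)]
    w_terms = [('+', 3), ('-', 0)]
    assert product_from_terms(a_terms, w_terms) == (-2) * 7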

[0048] FIG. 4A is a schematic diagram of an exemplary processing element (PE) 4000 of an embodiment, such as may be used in an engine or computational arrangement implementing a neural network. PE 4000 implements six steps, each step implemented using a processing sub-element. In some embodiments, various sub-elements of a processing element, such as the sub-elements of PE 4000 described below, may be merged or split in various hardware and software implementations. Processing element 4000 is set up to multiply 16 weights, W_0, ..., W_15, by 16 input activations, A_0, ..., A_15.

[0049] A first sub-element is an exponent sub-element 4100, which accepts 16 4b weight one-offsets, t_0, ..., t_15, and their 16 corresponding sign bits s_0, ..., s_15, along with 16 4b activation one-offsets, t'_0, ..., t'_15, and their signs s'_0, ..., s'_15, and calculates 16 one-offset pair products. Since all one-offsets are powers of two, their products will also be powers of two. Accordingly, to multiply the 16 activations by their corresponding weights, PE 4000 adds their one-offsets to generate the 5-bit exponents (t_0 + t'_0), ..., (t_15 + t'_15) and uses 16 XOR gates to determine the signs of the products.

[0050] A second sub-element is a one-hot encoder 4200. For the i-th pair of activation and weight, where i ∈ {0, ..., 15}, one-hot encoder 4200 calculates 2^(t_i + t'_i) via a 5b-to-32b decoder which converts the 5-bit exponent result (t_i + t'_i) into its corresponding one-hot format, being a 32-bit number with one '1' bit and 31 '0' bits. The single '1' bit in the j-th position of a decoder output corresponds to a value of either +2^j or -2^j depending on the sign of the corresponding product, being EiSign as shown in FIG. 4A.

[0051] A third sub-element is a histogrammer 4300, which generates the equivalent of a histogram of the decoder output values. Histogrammer 4300 accumulates the 16 32b numbers from one-hot encoder 4200 into 32 buckets, N^0, ..., N^31, corresponding to the values 2^0, ..., 2^31, as there are 32 powers of two. The signs of these numbers, being EiSign as taken from one-hot encoder 4200, are also taken into account. Following this, each bucket contains the count of the number of inputs that had the corresponding value. Since each bucket has 16 signed inputs, the resulting count is in the range -16 to 16 and thus is represented by 6 bits in 2's complement.

[0052] Fourth and fifth sub-elements are aligner 4400 and reducer 4500, respectively. Aligner 4400 shifts the counts according to their weight, converting all of them to 31 + 6 = 37b, and then reducer 4500 uses a 32-input adder tree to reduce the 32 shifted counts into the final output, as indicated in FIG. 4A.

[0053] Following reduction, a sixth sub-element is an accumulation sub-element 4600. Accumulation sub-element 4600 accepts a partial sum from reducer 4500. Accumulation sub-element 4600 then accumulates the newly received partial sum with any partial sum held in an accumulator. In this way the complete A × W product can be calculated over multiple cycles, one effectual pair of one-offsets per cycle.
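
The six sub-elements described above can be followed end-to-end in the illustrative Python sketch below (a software-only model with assumed variable names and an arbitrarily small example, not the disclosed circuit): signed exponents are formed, one-hot counted into 32 histogram buckets, aligned by shifting, reduced, and folded into an accumulator.

    def pe_cycle(weight_terms, activation_terms, accumulator=0):
        """weight_terms, activation_terms: up to 16 (sign_bit, magnitude) pairs each,
        one pair per lane, with sign_bit 0 meaning '+' and 1 meaning '-'."""
        # Exponent sub-element: add one-offsets, XOR the sign bits.
        exponents = [(tw + ta, sw ^ sa)
                     for (sw, tw), (sa, ta) in zip(weight_terms, activation_terms)]

        # One-hot encoder + histogrammer: count signed hits per power of two (32 buckets).
        buckets = [0] * 32
        for exp, sign in exponents:
            buckets[exp] += -1 if sign else 1     # each bucket stays within -16..16 (6b)

        # Aligner + reducer: shift each count by its bucket weight and reduce.
        partial_sum = sum(count << j for j, count in enumerate(buckets))

        # Accumulation sub-element: fold the partial sum into the running accumulator.
        return accumulator + partial_sum

    # One lane multiplying A = 7 = 2^3 - 2^0 by W = 3 = 2^1 + 2^0, one term pair per cycle:
    lanes = [((0, 1), (0, 3)), ((0, 1), (1, 0)), ((0, 0), (0, 3)), ((0, 0), (1, 0))]
    acc = 0
    for w_term, a_term in lanes:
        acc = pe_cycle([w_term], [a_term], acc)
    print(acc)   # 21 == 7 * 3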

[0054] In some embodiments, sub-element designs may be better able to take advantage of the structure of information being processed. For example, FIG. 4B is a schematic diagram of a concatenator, aligner and reducer sub-element 4700. In some embodiments, both aligner 4400 and reducer 4500 of PE 4000 are replaced by concatenator, aligner and reducer sub-element 4700, which comprises concatenator 4710, aligner 4720, and reducer 4730.

[0055] Replacing aligner 4400 and reducer 4500 of PE 4000 with concatenator, aligner and reducer sub-element 4700 has the effect of adding a new concatenator to PE 4000, the new concatenator being sub-element 4710. The addition of a new concatenator also allows changes to be made to the aligner and reducer, such as making these sub-elements smaller and more efficient: corresponding changes to aligner 4400 implement aligner sub-element 4720, and corresponding changes to reducer 4500 implement reducer sub-element 4730.

[0056] Instead of shifting and adding the 32 6b counts, concatenator, aligner and reducer sub-element 4700 seeks to reduce cost and energy by exploiting the relative weighting of each count, grouping and concatenating counts as shown in FIG. 4B. For example, rather than adding N^0 and N^6, they are simply concatenated, as they are guaranteed to have no overlapping bits that are '1'. The concatenated values are then added via a 6-input adder tree to produce a 38b partial sum to be output to accumulation sub-element 4600.

[0057] Concatenator, aligner and reducer sub-element 4700 implements a more energy- and area-efficient adder tree than is possible using aligner 4400 and reducer 4500, and takes advantage of the fact that the outputs of histogrammer 4300 contain groups of numbers that have no overlapping bits that are '1'.

[0058] As an example, consider adding the 6th 6b input (N^6 = n^6_5 n^6_4 n^6_3 n^6_2 n^6_1 n^6_0) to the 0th 6b input (N^0 = n^0_5 n^0_4 n^0_3 n^0_2 n^0_1 n^0_0). Using the adder of aligner 4400 and reducer 4500, the 6th input N^6 must be shifted by 6 bits, which amounts to adding 6 zeros as the 6 least significant bits of the result. In this case, there will be no bit position in which both (N^6 shifted by 6) and N^0 have a bit that is 1. Accordingly, adding (N^6 << 6) and N^0 is equivalent to concatenating either N^6 and N^0, or (N^6 - 1) and N^0, based on the sign bit of N^0, as depicted schematically in FIG. 5A as concatenate unit 5100 and numerically in computation (4) below:

N^6 × 2^6 + N^0 = (N^6 << 6) + N^0     (4)
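
As a quick numerical check (illustrative only), the two's-complement reasoning behind computation (4) can be verified in Python: adding (N^6 << 6) to a signed 6b N^0 equals concatenating either N^6 or (N^6 - 1) with the 6-bit pattern of N^0, depending on the sign of N^0.

    def concat_pair(n6, n0):
        """Concatenate n6 with the 6-bit two's-complement pattern of n0, decrementing
        n6 when n0 is negative, as concatenate unit 5100 would."""
        upper = n6 if n0 >= 0 else n6 - 1
        return (upper << 6) | (n0 & 0x3F)

    # Exhaustive check over the possible signed 6b histogram counts (-16..16).
    for n6 in range(-16, 17):
        for n0 in range(-16, 17):
            assert (n6 << 6) + n0 == concat_pair(n6, n0)
    print("computation (4) holds for all count pairs")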

[0059] Accordingly, this process is applied recursively, by grouping those N^i where (i MOD 6) is equal. That is, the i-th input is concatenated with the (i + 6)-th, (i + 12)-th, and so on.

Example concatenating unit 5200 of FIG. 5B implements recursive concatenation as a stack for those N^i inputs where (i MOD 6) = 0, although other implementations are possible in other embodiments. For the 16-product unit described above, the process yields the six grouped counts of depiction (5) below:

G_0 = {N^30, N^24, N^18, N^12, N^6, N^0}
G_1 = {N^31, N^25, N^19, N^13, N^7, N^1}
G_2 = {N^26, N^20, N^14, N^8, N^2}
G_3 = {N^27, N^21, N^15, N^9, N^3}
G_4 = {N^28, N^22, N^16, N^10, N^4}
G_5 = {N^29, N^23, N^17, N^11, N^5}     (5)

[0060] The final partial sum is then given by equation (6) below:

Output = Σ(i=0..5) (G_i << i)     (6)
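
Equation (6) can be checked arithmetically with the following Python sketch (illustrative only; it abstracts away the bit-level concatenation of FIG. 5B and works directly on the signed counts).

    import random

    def partial_sum_grouped(counts):
        """counts: 32 signed bucket counts N^0..N^31 from the histogrammer."""
        groups = [sum(counts[j] << (j - i) for j in range(i, 32, 6)) for i in range(6)]
        return sum(g << i for i, g in enumerate(groups))      # equation (6)

    def partial_sum_direct(counts):
        return sum(n << j for j, n in enumerate(counts))       # plain align-and-reduce

    counts = [random.randint(-16, 16) for _ in range(32)]
    assert partial_sum_grouped(counts) == partial_sum_direct(counts)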

[0061] A hardware accelerator tile 6000 is depicted in FIG. 6. Tile 6000 is a 2D array of PEs, such as PE 4000 of FIG. 4A, processing 16 windows of input activations and K = 8 filters every cycle. PEs along the same column share the same input activations and PEs along the same row receive the same weights. Every cycle, PE(i,j) receives the next one-offset from each input activation from the j-th window and multiplies it by a one-offset of the corresponding weight from the i-th filter. Tile 6000 starts processing the next set of activations and weights when all the PEs are finished processing the terms of the current set of 16 activations and their corresponding weights.

[0062] As tiles such as tile 6000 process both activations and weights term-serially, to match the BASE configuration they must process more filters or windows concurrently. In the worst case each activation and weight possesses 16 terms, thus tiles such as tile 6000 should process 8 × 16 = 128 filters in parallel to match the peak compute bandwidth of BASE. However, as indicated in FIG. 1, with 16× more filters, the performance of some example implementations using tiles such as tile 6000 is more than two orders of magnitude improved over the BASE performance. Further, an embodiment can use a modified Booth encoding and reduce the worst case to 8 terms per weight or activation.

[0063] FIGs. 7A and 7B are schematic drawings of several hardware accelerator configurations, the performance of which is compared in FIG. 8. As indicated in FIG. 7B, a BASE configuration 7100 can process 8 filters with 16 weights per filter using around 2,000 wires. As indicated in FIG. 7A, using tiles such as tile 6000, configuration 7200 can process 8 filters using 128 wires, configuration 7300 can process 16 filters using 256 wires, configuration 7400 can process 32 filters using 512 wires, and configuration 7500 can process 64 filters using around 1,000 wires. In each configuration of FIG. 7 the number of activation wires is set to 256. In other embodiments, other configurations could be employed, such as fixing the number of filters and weight wires and increasing the number of activation windows and wires.

[0064] FIG. 8 compares the performance of configurations 7100 to 7500 illustrated in FIG. 7. The bar graph of FIG. 8 shows the relative performance improvement of configurations 7200 to 7500 over configuration 7100 as a set of four bars for each of six models, the six models identified below each set of bars. The relative improvement of configuration 7200 is shown as the leftmost bar of each set, the relative improvement of configuration 7300 is shown as the second bar from the left of each set, the relative improvement of configuration 7400 is shown as the third bar from the left of each set, and the relative improvement of configuration 7500 is shown as the rightmost bar of each set.

[0065] Simulations of embodiments of the present invention indicate that such embodiments deliver improvements in execution time, energy efficiency, and area efficiency. A custom cycle-accurate simulator was used to model the execution time of tested embodiments. Post-layout simulations of the designs were used to test energy and area results. Synopsys Design Compiler (see for example Synopsys, "Design Compiler," http://www.synopsys.com/Tools/Implementation/RTLSynthesis/DesignCompiler/Pages, hereby incorporated by reference) was used to synthesize the designs with the TSMC 65nm library. Layouts were produced with Cadence Innovus (see for example Cadence, "Encounter RTL Compiler," https://www.cadence.com/content/cadencewww/global/en_US/home/training/all-courses/84441.html, hereby incorporated by reference) using the synthesis results. Intel PSG ModelSim was used to generate data-driven activity factors to report the power numbers. The clock frequency of all designs was set to 1 GHz. The ABin and ABout SRAM buffers were modeled with CACTI (see for example N. Muralimanohar and R. Balasubramonian, "Cacti 6.0: A tool to understand large caches," 2015, hereby incorporated by reference) and the activation memory and weight memory were modeled as eDRAM with Destiny (see for example M. Poremba, S. Mittal, D. Li, J. Vetter, and Y. Xie, "Destiny: A tool for modeling emerging 3d nvm and edram caches," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, pp. 1543-1546, March 2015, hereby incorporated by reference).

[0066] FIG. 8 indicates the performance of configurations 7200 to 7500 relative to BASE configuration 7100 for convolutional layers with the 100% relative TOP-1 accuracy precision profiles of Table 1, compared using the indicated network architectures (see for example Yang, Tien-Ju and Chen, Yu-Hsin and Sze, Vivienne, "Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, hereby incorporated by reference):

Table 1: Activation and weight precision profiles in bits for the convolutional layers.

[0067] In some embodiments, further benefits result from the use of embodiments of the present invention with certain models, such as models designed to have a reduced precision, models which use alternative numeric representations that reduce the number of bits that are '1', or models with increased weight or activation sparsity. However, embodiments of the present invention target both dense and sparse networks and improve performance by processing only essential terms.

[0068] As indicated in FIG. 8, on average, configuration 7200 outperforms configuration 7100 by more than 2×, while for AlexNet-Sparse, a sparse network, it achieves a speedup of 3× over configuration 7100. As indicated in FIG. 8, configurations 7300 to 7500 deliver speedups of up to 4.8×, 9.6×, and 18.3× over configuration 7100, respectively.

[0069] The energy efficiency of configurations 7200 to 7500 relative to configuration 7100 is shown in Table 2, below. As configurations using tiles such as tile 6000 require less on-chip memory and communicate fewer bits per weight and activation, the overall energy efficiency is generally higher.

Table 2: Compute unit energy efficiency relative to BASE.

[0070] Post-layout measurements were used to measure the area of configurations 7100 to 7500. Configuration 7200 requires 1.78× the area of configuration 7100 while achieving an average speedup of 2.4×. The area overhead for 7300 is 3.56×, the area overhead for 7400 is 7.08×, and the area overhead for 7500 is 14.15×, while execution time improvements over configuration 7100 are 4.2×, 8.4×, and 16× on average, respectively. As such, tiles such as tile 6000 provide better performance vs. area scaling than configuration 7100.

[0071] The number of activation and weight pairs processed by a tile or accelerator can be varied. Some processing elements process 16 activation and weight pairs per cycle, all contributing to the same output activation. Some processing elements process other than 16 pairs per cycle. Some accelerator embodiments combine multiple tiles or processing elements of the same or different configurations. FIG. 9 is a schematic diagram of a tile 9000 containing 32 processing elements 9100 organized in an 8 by 4 grid. Input scratchpads, i.e., small local memories, provide activation and weight inputs, an activation pad 9200 providing activation inputs and a weight pad 9300 providing weight inputs. In some layers the activation pad 9200 provides weight inputs and the weight pad 9300 provides activation inputs. A third scratchpad, storage pad 9400, is used to store partial or complete output neurons.

[0072] A bus 9110 is provided for each row of processing elements 9100 to connect them to the storage pad 9400. Partial sums are read out of the processing element grid, accumulated in accumulator 9500, and then written to the storage pad 9400 one column of processing elements at a time. There is enough time to drain the processing element grid via the common bus 9800: since processing even a single group of activation and weight pair inputs is typically performed over multiple cycles, and multiple activation and weight pair groups can typically be processed before the partial sums need to be read out, each column of processing elements can access the common bus 9800 sequentially to drain its output while the other columns of processing elements are still processing their corresponding activations and weights.

[0073] Tile 9000 includes encoders 9600 between the input scratchpads and the processing element grid, one encoder corresponding to each input scratchpad. Encoders 9600 convert values into a series of terms. An optional composer column 9700 provides support for the spatial composition of 16b arithmetic while maintaining 8b processing elements. Tile 9000 allows for the reuse of activations and weights in space and time and minimizes the number of connections or wires needed to supply tile 9000 with activations and weights from the rest of a memory hierarchy.

[0074] For example, tile 9000 can proceed with 4 windows of activations and 8 filters per cycle. In this case, the weight pad 9300 provides 16 weights per filter and the activation pad 9200 provides the corresponding 16 activations per window. Processing elements 9100 along the same column share the same input activations, while processing elements 9100 along the same row share the same input weights. Encoders 9600 convert input values into terms at a rate of one term per cycle, as each PE can process a single term of an activation and a single term of a weight every cycle. Each cycle, the processing element in row 'i' and column 'j' multiplies the input activation from the j-th window by the weight from the i-th filter. Once all activation terms have been multiplied with the current weight term, the next weight term is produced. The processing element cycles through all the corresponding activation terms again to multiply them with the new weight term. The product is complete once all weight and activation terms have been processed. If there are 3 activation terms and 4 weight terms, at least 12 cycles will be needed. In total, tile 9000 processes 4 windows, 16 activations per window, and 8 filters; that is, 4 × 16 × 8 activation and weight pairs concurrently.

[0075] In practice, the number of terms will vary across weight and activation values, and as a result some processing elements will need more cycles than others to process their product. Tile 9000 implicitly treats all concurrently processed activation and weight pairs as a group and synchronizes processing across different groups; tile 9000 starts processing the next group when all the processing elements are finished processing all the terms of the current group. However, this gives up some speedup potential.

[0076] In some embodiments, computation is allowed to proceed in 16 independent groups. For example, the first synchronization group will contain activations A_0, A_16, A_32, A_48 and weights W_0, W_16, ..., the second group will contain A_1, A_17, A_33, A_49 and weights W_1, W_17, ..., and so on for the remaining 14 groups. This example is referred to as comb synchronization, since the groups physically form a comb-like pattern over the grid. A set of buffers at the inputs of the Booth-encoders 9600 can be used to allow the groups to slide ahead of one another.
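
The comb grouping can be sketched in Python as follows (illustrative only; the window-major and filter-major index layout assumed here is an assumption of the sketch, not a disclosed requirement).

    def comb_groups(num_windows=4, num_filters=8, lanes=16):
        """Return, for each of the 16 synchronization groups, the activation and weight
        indices that advance together (one lane across every window and every filter)."""
        groups = []
        for lane in range(lanes):
            activations = [window * lanes + lane for window in range(num_windows)]
            weights = [filt * lanes + lane for filt in range(num_filters)]
            groups.append((activations, weights))
        return groups

    groups = comb_groups()
    print(groups[0])   # ([0, 16, 32, 48], [0, 16, 32, 48, 64, 80, 96, 112])
    print(groups[1])   # ([1, 17, 33, 49], [1, 17, 33, 49, 65, 81, 97, 113])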

[0077] Some neural networks require 16b data widths or precisions only for some layers. Some neural networks require 16b data widths or precisions only for the activations, and few values require more than 8b. In some embodiments, a tile supports the worst-case data width required across all layers and all values. However, in some embodiments, tiles support data type composition in space or time or both space and time.

[0078] For example, a tile design can allow for 16b calculations over 8b processing elements for activations and optionally for weights, although other bit widths can also be used. Such tile designs can be useful for neural networks which require more than 8b for only some of their layers.

[0079] A spatial composition tile is shown in FIG. 10. Tile 10000 uses multiple, yet unmodified, 8b processing elements. The spatial composition tile requires extra processing elements whenever it processes a layer that has 16b values. Tile 10000 is a 2 by 2 grid of 8b processing elements 10100 extended to support the following weight and activation combinations: 8b and 8b, 16b and 8b, and 16b and 16b. The normalized peak compute throughput is respectively 1, ½, and ¼. To support 16b and 8b computation the activation terms are split into those corresponding to their lower and upper bytes, which are processed respectively by two adjacent processing element ("PE") columns.

[0080] In the example indicated in FIG. 10, PE(0,0) and PE(0,1) process the lower bytes of A_0 and A_1 and PE(1,0) and PE(1,1) process the upper bytes. Rows 0 and 1 respectively process filters 0 and 1 as before. Processing proceeds until the accumulation sub-elements 4600 have accumulated the sums of the upper and lower bytes for the data block. At the end, the partial sum registers are drained one column at a time. A composer column 10200 is added at the grid's output. When the partial sums of column 0 are read out, they are captured in the respective temporary registers ("tmp") 10300. In the next cycle, the partial sums of column 1 appear on the output bus. The per-row adders of the composer 10200 add the two halves and produce the final partial sum. While the example shows a 2 by 2 grid, the concept applies without modification to larger processing element grids. A single composer column is sufficient as the grid uses a common bus per row to output the partial sums one column at a time. While not shown, the composer can reuse the accumulator column adders instead of introducing a new set. In general, supporting 16b activations would require two adjacent processing elements per row.

[0081] Tile 10000 also supports 16b weights by splitting them along two rows. This requires four processing elements, each assigned to one of the four combinations of lower and upper bytes. In FIG. 10, PE(0,0), PE(1,0), PE(0,1) and PE(1,1) will respectively calculate (A_L, W_L), (A_H, W_L), (A_L, W_H), and (A_H, W_H). The second-level adder in the composer column takes care of combining the results from the rows by zero-padding row 1 appropriately.
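
The shift-and-add identity that the composer column implements can be checked with the following Python sketch (illustrative only; unsigned values are used for simplicity, and the function names are assumptions).

    def split_bytes(value):
        """Split an unsigned 16b value into (lower, upper) bytes."""
        return value & 0xFF, (value >> 8) & 0xFF

    def composed_product(a, w):
        """Combine the four 8b x 8b partial products as the composer column would."""
        a_lo, a_hi = split_bytes(a)
        w_lo, w_hi = split_bytes(w)
        return (a_lo * w_lo) + ((a_hi * w_lo) << 8) + ((a_lo * w_hi) << 8) + ((a_hi * w_hi) << 16)

    # 16b activations with 8b weights only need the first two terms (two PE columns);
    # full 16b x 16b needs all four byte combinations, as in FIG. 10.
    assert composed_product(0xBEEF, 0x1234) == 0xBEEF * 0x1234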

[0082] A temporal composition tile may also be used. A temporal composition tile would employ temporal composition processing elements. An embodiment of a temporal composition processing element is shown in FIG. 11. Processing element 11000 supports both 8b and 16b operation, albeit at a lower area cost than a native 16b processing element. The temporal composition tile requires extra cycles whenever it has to process 16b values.

[0083] Processing element 11000 allows the terms of activations and weights to be split into those belonging to the upper and the lower bytes, and for these to be processed separately in time. The output from the front-stage adder is appropriately padded with zeros and then added to the extended-precision partial sum. There are three cases based on the source of the activation and weight terms being processed: both belong to lower bytes (L/L), both belong to upper bytes (H/H), or one belongs to an upper byte and the other to a lower one (H/L or L/H). The multiplexer 11100 selects the appropriately padded value. The multiplexer's select signal can be shared among all processing elements in a tile. Processing 8b values incurs no overhead. Processing 16b activations and 8b weights (or vice versa) requires an extra cycle, whereas processing 16b weights and activations requires 3 extra cycles. However, with a tile having processing elements 11000, this time overhead has to be paid only when there is a value that actually needs 16b.

[0084] A temporal composition tile does not reserve resources for whole layers, and since values that require more than 8b may be few, it can be expected to achieve higher throughput per processing element than a spatial composition tile. However, a temporal composition tile requires larger processing elements and more sophisticated control at the Booth encoders. In an embodiment, an 8b processing element capable of temporal extension to 16b is 22% smaller compared to a native 16b processing element. In some embodiments, combinations of spatial and temporal designs may be used, such as spatial composition for weights and temporal composition for activations.

[0085] In some networks and for some layers (especially the first layer), the data type needed as determined by profiling sometimes exceeds 8b slightly; for example, 9b or 10b are found to be needed. In some embodiments, executing these layers or networks is possible with an unmodified 8b processing element and with a minor modification to the Booth-Encoder. Consider, for example, processing a value which needs the 9th bit, that is, where that bit is 1. Since an 8b processing element only supports calculations with terms up to ±2^7, the Booth-Encoder can effectively synthesize ±2^8 by sending ±2^7 twice. Importantly, this will be needed only for values where the 9th bit needs to be 1, that is, the negative values in 9b. As an added benefit, this flexibility makes quantization easier for certain networks.
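
The Booth-Encoder workaround described above can be illustrated with the following Python sketch (not part of the original text; the term-list representation and function names are assumptions): a term whose magnitude exceeds the supported maximum is emitted as repeated terms of the maximum magnitude.

    def emit_terms(pairs, max_magnitude=7):
        """Expand (sign, magnitude) terms so no emitted term exceeds max_magnitude,
        e.g. a magnitude-8 term is sent as two magnitude-7 terms (2^8 == 2 * 2^7)."""
        emitted = []
        for sign, magnitude in pairs:
            if magnitude > max_magnitude:
                emitted += [(sign, max_magnitude)] * (2 ** (magnitude - max_magnitude))
            else:
                emitted.append((sign, magnitude))
        return emitted

    def value_of(pairs):
        return sum((1 if s == '+' else -1) * (1 << m) for s, m in pairs)

    # A 9b negative value such as -256 = -2^8 is emitted as two -2^7 terms.
    terms = [('-', 8)]
    assert value_of(emit_terms(terms)) == value_of(terms) == -256
    print(emit_terms(terms))   # [('-', 7), ('-', 7)]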

[0086] Embodiments presented above exploit inter-value bit-level parallelism. However, some embodiments can exploit intra-value bit-level parallelism, and do so differently than bit-parallel hardware. This is possible since the processing element produces the correct result even if the terms of a value are processed spatially instead of temporally. For example, two input lanes can be assigned per weight and the Booth-encoder modified so that it outputs up to two terms per cycle. This also enables an accelerator to exploit bit-level parallelism within values, which may be useful to reduce synchronization overheads. Exploiting bit-level parallelism within values may also be useful to improve utilization for layers where there is not enough reuse of weights or activations (or both) to fill in all columns or rows (or both), respectively. This is the case, for example, for fully connected layers, where there is no reuse of weights. This optimization helps with energy efficiency as fully connected layers are typically memory bound. It is also useful for depth-separable convolutional layers. Exploiting bit-level parallelism within values may also be useful when there are not enough filters to fill in all rows. This is different than the intra-value bit-level parallelism exploited by conventional bit-parallel units: they process all bits regardless of value, whereas embodiments of the present accelerator process only the effectual ones.

[0087] The performance of an 8b accelerator with spatially composable processing elements and the performance of an accelerator with native 16b processing elements have been compared. The 8b spatially composable processing elements supported 8b and 16b weights and activations, and the 8b accelerator was scaled up to use the same compute area as the 16b accelerator. Testing was conducted using the GoogleNet-S and Resnet50-S models. The 8b spatially composable accelerator used multiple processing elements as needed only for those layers that required more than 8b precision. The 8b spatially composable accelerator was faster than the native 16b accelerator: 1.4 times faster for GoogleNet-S and 1.2 times faster for Resnet50-S.

[0088] In other embodiments, other configurations could be used, such as an increased number of weight wires. In some embodiments, performance improves sublinearly, such as due to inter-filter imbalance aggravated by scaling up only by increasing the number of filters. In some embodiments, the number of simultaneously processed activations may be increased instead of or in addition to increasing the number of weights. Combining configuration designs with minimal buffering configurations (see for example J. Albericio, A. Delmas, P. Judd, S. Sharify, G. O'Leary, R. Genov, and A. Moshovos, "Bit-pragmatic deep neural network computing," in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-50 '17, pp. 382-394, 2017, incorporated by reference above) may also reduce cross-activation imbalances. In other embodiments, the activation and weight memories may be distributed along the tiles or shared among all tiles or groups of tiles.

[0089] Embodiments of the present invention are compatible with compression approaches (see for example A. Delmas, S. Sharify, P. Judd, M. Nikolic, and A. Moshovos, "Dpred: Making typical activation values matter in deep learning computing," CoRR, vol. abs/1804.06732, 2018, hereby incorporated by reference), and can be expected to perform well with practical off-chip memory configurations and interfaces.

[0090] It is noted that while portions of the above description and associated figures may describe or suggest the use of hardware, some embodiments may be emulated in software on a processor, such as a GPU (Graphics Processing Unit), and may produce similar performance enhancements. In addition, various components may be dedicated, shared, distributed, or combined with other components; for example, activation and weight memories supplying activation and weight representations may be dedicated, shared, distributed, or combined in various embodiments.

[0091] Various embodiments of the invention have been described in detail. Since changes in and/or additions to the above-described best mode may be made without departing from the nature, spirit or scope of the invention, the invention is not to be limited to those details but only by the appended claims.