Title:
IMPROVED TECHNIQUES FOR NEURAL NETWORK EXECUTION UTILIZING MEMOIZATION
Document Type and Number:
WIPO Patent Application WO/2023/073591
Kind Code:
A1
Abstract:
A system and method for reducing power consumption in processing artificial neural networks utilizes memoization techniques. The method includes receiving computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes; detecting in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions; determining a first power consumption corresponding to retrieving a value from a value cache; determining a second power consumption corresponding to executing the cacheable block of instructions; and storing in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

Inventors:
KERAMIDAS GEORGIOS (GR)
STAMOULIS IAKOVOS (GR)
Application Number:
PCT/IB2022/060310
Publication Date:
May 04, 2023
Filing Date:
October 26, 2022
Assignee:
THINK SILICON RESEARCH AND TECH SINGLE MEMBER S A (GR)
International Classes:
G06N3/04; G06F1/3225; G06F1/3234; G06F9/50; G06F12/0895; G06N3/06; G06N3/08
Foreign References:
US20150347139A12015-12-03
Claims:
CLAIMS

What is claimed is:

1. A method for reducing power consumption in processing artificial neural networks utilizing memoization, comprising:
receiving computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes;
detecting in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions;
determining a first power consumption corresponding to retrieving a value from a value cache;
determining a second power consumption corresponding to executing the cacheable block of instructions; and
storing in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

2. The method of claim 1, further comprising: determining a number of least significant bits (LSBs) of the input; and storing in the value cache a first input value and a second input value, each with a respective output value, wherein the first input value and the second input value differ from each other by a number of LSBs equal to or less than the determined number of LSBs.

3. The method of claim 2, wherein the first input value and the second input value are stored in the value cache without the LSBs.

4. The method of claim 1, further comprising: determining a number of LSBs of the output; and storing in the value cache a first output value and a second output value, each with a respective input value, wherein the first output value and the second output value differ from each other by a number of LSBs equal to or less than the determined number of LSBs.

5. The method of claim 4, wherein the first output value and the second output value are stored in the value cache without the LSBs.

6. The method of claim 1, further comprising: determining a frequency at which an input value is received as an input to the cacheable block of instructions; and storing in the value cache an input value corresponding to the input further in response to determining that the frequency is above a predetermined threshold.

7. The method of claim 1, further comprising: evicting a value from the value cache.

8. The method of claim 1, wherein the cacheable block of instructions corresponds to a single layer of the neural network.

9. The method of claim 1, wherein the cacheable block of instructions corresponds to a first layer of the neural network and a second layer of the neural network, wherein the second layer is consecutive to the first layer.

10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to execute a process, the process comprising:
receiving computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes;
detecting in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions;
determining a first power consumption corresponding to retrieving a value from a value cache;
determining a second power consumption corresponding to executing the cacheable block of instructions; and
storing in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

11. A system for reducing power consumption in processing artificial neural networks utilizing memoization, comprising:
a processing circuitry; and
a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to:
receive computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes;
detect in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions;
determine a first power consumption corresponding to retrieving a value from a value cache;
determine a second power consumption corresponding to executing the cacheable block of instructions; and
store in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

12. The system of claim 11, wherein the memory includes further instructions which when executed by the processing circuitry further configure the system to: determine a number of least significant bits (LSBs) of the input; and store in the value cache a first input value and a second input value, each with a respective output value, wherein the first input value and the second input value differ from each other by a number of LSBs equal to or less than the determined number of LSBs.

13. The system of claim 12, wherein the memory includes further instructions which when executed by the processing circuitry further configure the system to: store the first input value and the second input value in the value cache without the LSBs.

14. The system of claim 11, wherein the memory includes further instructions which when executed by the processing circuitry further configure the system to: determine a number of LSBs of the output; and store in the value cache a first output value and a second output value, each with a respective input value, wherein the first output value and the second output value differ from each other by a number of LSBs equal to or less than the determined number of LSBs.

15. The system of claim 14, wherein the memory includes further instructions which when executed by the processing circuitry further configure the system to: store the first output value and the second output value in the value cache without the LSBs.

16. The system of claim 11, wherein the memory includes further instructions which when executed by the processing circuitry further configure the system to: determine a frequency at which an input value is received as an input to the cacheable block of instructions; and store in the value cache an input value corresponding to the input further in response to determining that the frequency is above a predetermined threshold.

17. The system of claim 11, wherein the memory includes further instructions which when executed by the processing circuitry further configure the system to: evict a value from the value cache.

18. The system of claim 11, wherein the cacheable block of instructions corresponds to a single layer of the neural network.

19. The system of claim 11, wherein the cacheable block of instructions corresponds to a first layer of the neural network and a second layer of the neural network, wherein the second layer is consecutive to the first layer.

Description:
IMPROVED TECHNIQUES FOR NEURAL NETWORK EXECUTION UTILIZING MEMOIZATION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 63/278,747 filed on November 12, 2021, and EP Patent No. 21386065.3 filed on October 29, 2021, all contents of which are hereby incorporated by reference.

All of the applications referenced above are hereby incorporated by reference.

TECHNICAL FIELD

[0002] The present disclosure relates generally to neural networks, and specifically to improved techniques for processing neural networks on parallel processors.

BACKGROUND

[0003] Artificial intelligence is a computing field which has been gaining momentum in the past few years, as an increasing number of applications, together with supporting hardware, has been found. Though the groundwork for the field was laid decades ago, only recent advances in hardware, and in the software provisioning that hardware, have allowed artificial intelligence to tackle a host of applications.

[0004] However ubiquitous artificial intelligence seems, there are still hurdles in the adoption of the different technologies under this umbrella term. Neural networks, as one example, require training data and substantial compute resources such as memory and processing circuitry. Such substantial requirements impede adoption of neural network solutions, for example on edge devices where power and memory are often conserved as much as possible.

[0005] It would therefore be advantageous to provide a solution that would overcome the challenges noted above.

SUMMARY

[0006] A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” or “certain embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.

[0007] Certain embodiments disclosed herein include a method for reducing power consumption in processing artificial neural networks utilizing memoization. The method comprises: receiving computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes; detecting in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions; determining a first power consumption corresponding to retrieving a value from a value cache; determining a second power consumption corresponding to executing the cacheable block of instructions; and storing in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

[0008] Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to execute a process, the process comprising: receiving computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes; detecting in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions; determining a first power consumption corresponding to retrieving a value from a value cache; determining a second power consumption corresponding to executing the cacheable block of instructions; and storing in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

[0009] Certain embodiments disclosed herein also include a system for reducing power consumption in processing artificial neural networks utilizing memoization. The system comprises: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: receive computer code representing a neural network model, the neural network model including an input layer having a first plurality of nodes, and an output layer having a second plurality of nodes; detect in the computer code a cacheable block of instructions, the cacheable block of instructions including an input and an output, wherein the input and the output are local to the cacheable block of instructions; determine a first power consumption corresponding to retrieving a value from a value cache; determine a second power consumption corresponding to executing the cacheable block of instructions; and store in the value cache an input value corresponding to the input and an output value corresponding to the output, in response to determining that the second power consumption is higher than the first power consumption.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.

[0011] Figure 1 is a schematic diagram of a neural network (NN) implemented in accordance with an embodiment.

[0012] Figure 2 is a diagram of a node of a neural network having multiple inputs and an output, implemented in accordance with an embodiment.

[0013] Figure 3 is a schematic diagram of a neural network processing system according to an embodiment.

[0014] Figure 4 is a flowchart of a method for utilizing memoization for processing a neural network model, implemented in accordance with an embodiment.

[0015] Figure 5 is a diagram of a neural network processing system, implemented in accordance with an embodiment.

DETAILED DESCRIPTION

[0016] It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.

[0017] The various disclosed embodiments include a method and system for improved processing of neural networks utilizing memoization techniques. In processing a neural network, and specifically a convolutional neural network (CNN), locality of values may be taken advantage of in order to reduce processing operations, thereby reducing power consumption by a processing circuitry. Specifically, memoization is utilized in order to store in a value cache input and output values which pertain to cacheable instruction blocks identified in the neural network model. CNNs perform convolutions, i.e., repeated multiplication operations, on images, which tend to exhibit value locality, meaning that for any selected pixel there is a high probability that the pixels in its vicinity share similar values. Therefore, when processing an image through a CNN, for example during training, many processing operations are performed redundantly. It is therefore advantageous to store the inputs and outputs of such operations in a value cache and retrieve them from the value cache when there is a hit, provided that the power consumption of retrieving a value from the value cache is lower than the power consumption required by a processing circuitry to execute the cacheable instruction block on the provided input.

[0018] Memoization techniques are disclosed which allow a system to efficiently process artificial intelligence neural networks. Code of a neural network model is received and parsed to detect cacheable blocks of instructions. A cacheable block of instructions includes an input and an output which are predefined, and does not include control instructions. In some embodiments cacheability may be determined based on the similarity of certain inputs, certain corresponding outputs, or a combination thereof. For example, a least significant bit (LSB) is dropped in an embodiment for certain inputs if their corresponding outputs are close to one another, for example within a predetermined threshold. In an embodiment, a plurality of least significant bits are dropped. Dropping an LSB trades accuracy (i.e., calculation accuracy by a processing circuitry) for conservation of memory space, and in an embodiment is adjusted based on the application of the neural network.

[0019] Fig. 1 is a schematic diagram of a neural network (NN) 100 implemented in accordance with an embodiment. The neural network 100 is, in an embodiment, an artificial neural network (ANN), such as a recursive neural network (RNN), a convolutional neural network (CNN), and the like.

[0020] In an embodiment, the neural network 100 includes an input numerical translator matrix 105. The input numerical translator matrix 105 is configured to receive an input and translate the input to a numerical representation. In an embodiment, the input is an alphanumeric representation. In certain embodiments, the numerical representation is a binary representation. In an embodiment the numerical representation is provided as input to an input neuron 115 of the neural network 100. An input neuron 115 is a function, for example defined by computer code, which provides an output based on a received input.

[0021] In an embodiment, the input numerical translator matrix 105 is configured to receive an input which includes a plurality of elements. For example, each element of the plurality of elements is represented by an alphanumeric string, in an embodiment. In certain embodiments, each element is mapped by an injective function to a unique numerical representation. For example, an RGB value which is represented using a hexadecimal representation is mapped to a unique binary value.
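
By way of a non-limiting illustration (the Python code and function name below are assumptions introduced for this example and are not part of the disclosure), such an injective mapping from a hexadecimal RGB value to a unique binary representation may be sketched as follows:

def rgb_hex_to_binary(rgb_hex: str) -> str:
    """Injectively map a hexadecimal RGB string (e.g., 'A0B1C2') to a
    unique 24-bit binary representation suitable as a numerical input."""
    value = int(rgb_hex, 16)          # a unique integer for each RGB string
    return format(value, "024b")      # fixed-width binary keeps the mapping injective

# Example: 'A0B1C2' -> '101000001011000111000010'
print(rgb_hex_to_binary("A0B1C2"))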

[0022] In an embodiment an input from the translation matrix 105 is provided to an input neuron 115, which is one of a plurality of input neurons, the plurality of input neurons together forming an input layer 110 of the neural network 100. In an embodiment, the input layer 110 includes neurons which receive an external input, i.e., an input which is not an output of another neuron of the neural network 100. In certain embodiments, the input layer 110 includes neurons which receive both the external input and an internal input.

[0023] Each neuron (also referred to as a node) of the neural network 100 is configured to apply a function to its input and to send the output of the function forward (e.g., to another neuron). Each neuron may include a weight function. A weight function of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher the weight value used by the weight function, the greater the effect the neuron's computation has on the output of the neural network.

[0024] The neural network 100 further includes a plurality of hidden neurons 125 in a hidden layer 120. In this example embodiment, a single hidden layer 120 is shown; however, a plurality of hidden layers may be implemented without departing from the scope of the disclosed embodiments.

[0025] In an embodiment, the neural network 100 is configured such that each output of an input neuron 115 of the input layer 110 is used as an input to one or more hidden neurons 125 in the hidden layer 120. In an embodiment, each output of the input neurons 115 is used as an input to all of the hidden neurons 125 of the hidden layer 120. In embodiments where multiple hidden layers are implemented, the output of the input layer 110 is used as the input for the hidden neurons of a first hidden layer among the multiple hidden layers. In certain embodiments, some input neurons may output to only some of the hidden layer nodes, while other input neurons provide their output to some other hidden layer nodes.

[0026] In certain embodiments, the neural network 100, such as a CNN, is configured to include a plurality of convolutional layers. A convolutional layer includes a filter, which is represented as a matrix. The filter is applied on an input, which produces a dot product, also known as an activation map. In certain embodiments, the plurality of convolutional layers are partially interconnected, to provide local connectivity, rather than fully interconnected. Full connectivity between convolutional layers is not practical due to the amount of memory required to store such connections. Furthermore, for certain applications, such as image recognition, full connectivity would not take advantage of the fact that image data is typically spatially local.

[0027] The neural network 100 further includes an output layer 130, which includes one or more output neurons 135. The output of the hidden layer 120 is the input of the output layer 130. In an embodiment where a plurality of hidden layers is implemented, the output of the final hidden layer is used as the input to the output layer 130. In some embodiments, the output neurons 135 of the output layer 130 may provide a result to an output numerical translator matrix 106, which is configured to translate the output of the output layer 130 from a numerical representation to an output. For example, the translation matrix 106 may translate a binary output comprised of ones and zeros to an alphanumeric string.
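
By way of a non-limiting illustration of the convolutional layer described in paragraph [0026] above (a simplified sketch; the Python code, filter values, and helper names are assumptions introduced for this example), a filter may be slid over an input matrix to produce an activation map of dot products:

def conv2d(image, kernel):
    """Slide a small filter (kernel) over the input and compute the dot
    product at each position, producing an activation map."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    activation_map = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            dot = sum(image[r + i][c + j] * kernel[i][j]
                      for i in range(kh) for j in range(kw))
            row.append(dot)
        activation_map.append(row)
    return activation_map

# A 3x3 edge-like filter applied to a 4x4 image yields a 2x2 activation map.
image = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
kernel = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]
print(conv2d(image, kernel))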

[0028] In an embodiment where a CNN is utilized, the output layer is realized as a fully connected layer. In certain embodiments, the output layer, and at least another layer preceding the output layer, are fully connected layers.

[0029] It should be noted that the neural network discussed with respect to Fig. 1 can be executed over, or realized by, general purpose or dedicated hardware. Examples of such hardware include analog or digital neuro-computers realized by any one or a combination of electronic components, optical components, a von Neumann multiprocessor, a graphics processing unit (GPU), a general purpose GPU (GPGPU), a vector processor, an array processor, a tensor processing unit, a combination thereof, and the like.

[0030] Fig. 2 is an example of a diagram 200 of a node of a neural network having multiple inputs and an output, implemented in accordance with an embodiment. A node 240 receives at least an input; in this example, the node 240 receives a first input (i) 210, a second input (j) 220 and a third input (k) 230. Each input is assigned a corresponding weight value, such that the first input (i) 210 has a weight w1, the second input (j) 220 has a weight w2, and the third input (k) 230 has a weight w3. The node executes a function f(x,y,z) which generates an output 250, which is equal to f(i·w1, j·w2, k·w3). In certain embodiments the function f(x,y,z) may be computationally expensive relative to the cost of storing a cached result for certain inputs. The process of storing outputs of a computationally expensive function for given inputs is known as memoization. Memoization is a process to decrease overall computation time in exchange for memory space.

[0031] In this example, if there are frequent occurrences of inputs which result in likewise recurring outputs, computation may be reduced (i.e., use of less resources) or made faster (using the same resources to get a faster result) by caching the output for given inputs. The input-output pairing may be stored in a lookup table, such that looking up the value for a given input(s) will result in an output, if one exists (i.e., a cache hit). If an output does not exist, the output may be stored in the cache. In an embodiment, memoization techniques populate a cache in run time, rather than in advance. In certain embodiments, it is advantageous to provide at least a portion of precomputed functions to populate the cache, to increase the chance of initial cache hits. For example, in certain embodiments input-output pairs (e.g., pairs including an input value and an output value, a plurality of each, and the like) are detected which are utilized with a frequency above a predetermined threshold. In an embodiment, the input-output pairs are preloaded into a value cache. In certain embodiments, the input-output pairs which are preloaded are stored in a value cache which applies an eviction policy only to input-output pairs which are not the preloaded input-output pairs.
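
By way of a non-limiting illustration (a simplified sketch in Python; the dictionary-based cache, the weight values, and the function names are assumptions introduced for this example, not a definitive implementation of the claimed method), memoization of the node of Fig. 2, including preloading of frequently occurring input-output pairs, may be sketched as follows:

value_cache = {}                     # maps (i, j, k) input tuples to cached outputs
cache_hits = 0

def node_output(i, j, k, w1=0.5, w2=0.25, w3=0.125):
    """Node function applied to the weighted inputs; here a simple weighted sum."""
    return i * w1 + j * w2 + k * w3

def memoized_node_output(i, j, k):
    """Return the cached output on a hit; otherwise compute and populate the cache."""
    global cache_hits
    key = (i, j, k)
    if key in value_cache:           # cache hit: no arithmetic performed
        cache_hits += 1
        return value_cache[key]
    result = node_output(i, j, k)    # cache miss: compute and store
    value_cache[key] = result
    return result

# Optionally preload input-output pairs known to occur frequently,
# increasing the chance of initial cache hits.
for frequent_inputs in [(0, 0, 0), (255, 255, 255)]:
    value_cache[frequent_inputs] = node_output(*frequent_inputs)

print(memoized_node_output(255, 255, 255), "hits:", cache_hits)  # served from the cache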

[0032] In an embodiment, the output 250 requires performing three multiplications and two additions. Arithmetic operations (e.g., multiplication and addition) are performed, in an embodiment, by an arithmetic logic unit (ALU). An ALU is an electronic circuit which performs arithmetic and bitwise operations on integer binary numbers. It is recognized that such ALUs require less power when performing, for example, addition than when performing multiplication. There are certain caveats to this, for example when the multiplication is with a number which is a power of two (i.e., 2^X, where 'X' is an integer), the multiplication operation can be replaced by a shift operation (e.g., using a shift register). In an embodiment, the inputs are used to determine if an input-output pair should be stored in a value cache.

[0033] For example, where an input causing a multiplication involves a power of two, the input is not a good candidate for storing in the value cache, as the multiplication operation can be replaced by a shift operation. In certain embodiments, inputs having a value close to a power of two may also be determined not to be good candidates for storing in a value cache. For example, if the first input 210 is 5 and the weight w1 is 17, the multiplication between the two values can be replaced by 5 multiplied by (16 + 1). The binary value representing 5 would be shifted 4 places (as 2^4 = 16), to the result of which another 5 would be added, thus replacing a multiplication operation with a shift operation and an addition operation. In certain embodiments, a check is performed to determine if a weight represents a value which, when multiplied, can be converted from a multiplication operation to a shift operation. In certain embodiments, the check is further performed to determine if a weight value can be converted from a multiplication operation to a shift operation and a predetermined number of addition operations.
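
By way of a non-limiting illustration (the Python code, the set-bit-count test, and the threshold parameter below are assumptions introduced for this example), such a check may be sketched as follows:

def multiplication_replaceable_by_shifts(weight: int, max_additions: int = 1) -> bool:
    """Return True if multiplying by `weight` can be replaced by shift
    operations and at most `max_additions` addition operations.
    Each set bit in the weight contributes one shifted copy of the input;
    e.g., 17 = 0b10001, so x * 17 == (x << 4) + x (one shift, one addition)."""
    if weight <= 0:
        return False
    set_bits = bin(weight).count("1")
    return set_bits - 1 <= max_additions

# Weights such as 16 (pure shift) or 17 (shift plus one addition) are poor
# candidates for the value cache, since recomputing them is already cheap.
for w in (16, 17, 13):
    print(w, multiplication_replaceable_by_shifts(w))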

[0034] In an embodiment, a power consumption is determined for a value cache read operation. A value cache read operation requires power in order to retrieve a value from the cache. It is desirable, in an embodiment, that the power required by the value cache read operation be lower than the power required to perform a multiplication operation, an addition operation, a shift operation, or a combination thereof. In certain embodiments a check is performed for each potential cache insertion operation to determine a power consumption of performing the operation, and the determined power consumption is compared to a predetermined value of cache lookup power consumption. In an embodiment, a potential cache insertion is inserted into the cache if the input values exceed a frequency threshold (e.g., the inputs appear together a number of times exceeding a threshold) and the power consumption of performing a computation using the inputs exceeds the power consumption of performing a lookup from the value cache.
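
By way of a non-limiting illustration (the Python code, the power figures, and the threshold values below are hypothetical assumptions introduced for this example), the insertion check described above may be sketched as follows:

def should_insert_in_value_cache(input_frequency: int,
                                 compute_energy_nj: float,
                                 frequency_threshold: int = 100,
                                 cache_lookup_energy_nj: float = 0.5) -> bool:
    """Insert an input-output pair only if the inputs recur often enough and
    recomputing the block costs more energy than a value-cache lookup."""
    recurs_often = input_frequency > frequency_threshold
    compute_costs_more = compute_energy_nj > cache_lookup_energy_nj
    return recurs_often and compute_costs_more

# A frequently seen input whose block is expensive to recompute is cached;
# a rare input is not, regardless of its computation cost.
print(should_insert_in_value_cache(input_frequency=250, compute_energy_nj=2.0))  # True
print(should_insert_in_value_cache(input_frequency=10, compute_energy_nj=2.0))   # False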

[0035] Fig. 3 is an example schematic diagram of a neural network processing system 300 according to an embodiment. The system 300 includes a parallel processing circuitry 310 and a serial processing circuitry 330, coupled to a memory 320, a storage 340, and a network interface 350. In an embodiment, the components of the system 300 may be communicatively connected via a bus 305.

[0036] The parallel processing circuitry 310 may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include graphics processing units (GPUs), general purpose GPUs (GPGPUs), and tensor processing units (TPUs).

[0037] The serial processing circuitry 330 may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), Application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.

[0038] The memory 320 may be volatile (e.g., random access memory, etc.), non-volatile (e.g., read only memory, flash memory, etc.), or a combination thereof. In an embodiment the memory 320 includes memory portion 325 which stores therein a neural network model, such as the one described in more detail above with respect to Fig. 1.

[0039] In one configuration, software for implementing one or more embodiments disclosed herein may be stored in the storage 340. In another configuration, the memory 320 is configured to store such software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the parallel processing circuitry 310, the serial processing circuitry 330, or a combination thereof, cause the parallel processing circuitry 310, the serial processing circuitry 330, or a combination thereof, to perform the various processes described herein.

[0040] The storage 340 may be magnetic storage, optical storage, solid state storage, and the like, and may be realized, for example, as flash memory or other memory technology, or any other medium which can be used to store the desired information.

[0041] The network interface 350 allows the system 300 to communicate with a network (not shown).

[0042] It should be understood that the embodiments described herein are not limited to the specific architecture illustrated in Fig. 3, and other architectures may be equally used without departing from the scope of the disclosed embodiments.

[0043] Fig. 4 is a flowchart 400 of a method for utilizing memoization for processing a neural network model, implemented in accordance with an embodiment. In certain embodiments, the method is utilized on a convolutional neural network.

[0044] At S410, source code of a neural network model is scanned to find at least a cacheable instruction block. In an embodiment, scanning may be performed by an instruction parser. In certain embodiments the instruction parser is part of a compiler toolchain. In other embodiments a stand-alone instruction parser is configured to perform scanning for a cacheable instruction block. In certain embodiments the instruction parser performs syntactic analysis of the source code, generates a syntax tree of the source code, and generates basic blocks of the source code (i.e., instruction blocks). In an embodiment, a basic instruction block is a portion of code having predefined input and output parameters, meaning the basic instruction block must not include control-like instructions, such as, for example, an if statement, a branch, or a "go-to" statement.
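
By way of a loose, simplified analogue of such a scan (illustrated here over Python source using the standard ast module rather than a compiler toolchain or binary instrumentation; the rejection rules and names below are assumptions introduced for this example), cacheable blocks may be detected as follows:

import ast

CONTROL_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.Break, ast.Continue)

def find_cacheable_blocks(source_code: str):
    """Return names of functions that look like cacheable instruction blocks:
    no control-like instructions and no calls to functions outside the block."""
    cacheable = []
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.FunctionDef):
            body_nodes = list(ast.walk(node))
            has_control = any(isinstance(n, CONTROL_NODES) for n in body_nodes)
            calls_out = any(isinstance(n, ast.Call) for n in body_nodes)
            if not has_control and not calls_out:
                cacheable.append(node.name)
    return cacheable

sample = """
def weighted_sum(i, j, k):          # straight-line arithmetic: cacheable
    return i * 0.5 + j * 0.25 + k * 0.125

def clamped(i):                     # contains an if statement: not cacheable
    if i < 0:
        return 0
    return i
"""
print(find_cacheable_blocks(sample))   # ['weighted_sum']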

[0045] In some embodiments, scanning of the executable code (i.e., object code) is performed by a binary instrumentation tool. In other embodiments scanning may be performed by utilizing a just-in-time environment to scan a target executing program and accordingly modify its executable code. Furthermore, in certain embodiments scanning is performed at the intermediate code level, at decompiled code level, or other form of executable code.

[0046] In an embodiment the cacheable instruction block is generated at compile time for execution on a processor, such as a general purpose graphics processing unit (GPGPU). In an embodiment, the executable code is generated by a CPU, GPGPU, a combination thereof, and the like. A cacheable instruction block is a block of instructions which includes an input and an output, such that the input and output are local to the block of instructions. An instruction block having a local input and a local output cannot call other functions for input outside of the instruction block.

[0047] At S420, the cacheable instruction block is executed on an input. In an embodiment, the cacheable instruction block is executed with multiple different inputs, each resulting in a corresponding output. In certain embodiments the cacheable instruction block includes instructions corresponding to a single node (i.e., function) of a neural network. In some embodiments the cacheable instruction block includes instructions corresponding to a plurality of nodes. In an embodiment, the plurality of nodes are each in the same layer. For example, the plurality of nodes are each in a convolutional layer of a convolutional neural network (CNN). In certain embodiments a first group of the plurality of nodes are associated with a first layer of a neural network, and a second group of the plurality of nodes are associated with a second layer of the neural network, where the second layer is consecutive to the first layer. In some embodiments the cacheable instruction block includes instructions corresponding to a subset of the instructions of a node, e.g., only a part of the instructions of a function forms a cacheable instruction block.

[0048] In an embodiment, a cacheable instruction block is determined to be cacheable by determining a power consumption required by a processing circuitry to perform a computation based on the instruction block. In an embodiment the cacheable instruction block is a potential cache insertion until it is determined that executing the cacheable instruction block by the processing circuitry requires more power than retrieving a result of executing the cacheable instruction block from a value cache.

[0049] At S430, a cache is populated with an input and output of the cacheable instruction block. In certain embodiments, an input-output pair includes a plurality of inputs, a plurality of outputs, and a combination thereof. In an embodiment, population of the cache occurs in run time, after the executable code is compiled. In an embodiment, the cache includes an eviction policy. An eviction policy clears the cache of stored values, for example in order to make room for more frequently used values. Eviction policies may be, for example, least recently used (LRU), first in first out (FIFO), combinations thereof, and the like.
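
By way of a non-limiting illustration (the Python code, the OrderedDict-based structure, the capacity parameter, and the pinning of preloaded pairs are assumptions introduced for this example), a value cache with LRU eviction may be sketched as follows:

from collections import OrderedDict

class ValueCache:
    """Small memoization cache with LRU eviction; preloaded entries are
    pinned and never evicted (cf. paragraph [0031] above)."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # input tuple -> output value
        self.pinned = set()            # preloaded keys exempt from eviction

    def preload(self, key, value):
        self.entries[key] = value
        self.pinned.add(key)

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)      # mark as most recently used
            return self.entries[key]
        return None                            # cache miss

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        # Evict the least recently used non-pinned entry when over capacity.
        if len(self.entries) > self.capacity:
            for candidate in list(self.entries):
                if candidate not in self.pinned:
                    del self.entries[candidate]
                    break

cache = ValueCache(capacity=2)
cache.preload((0, 0), 0.0)
cache.put((1, 2), 1.0)
cache.put((3, 4), 2.5)        # evicts (1, 2); the preloaded (0, 0) stays
print(cache.get((0, 0)), cache.get((1, 2)))   # 0.0 None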

[0050] In some embodiments, similar inputs may result in substantially similar outputs. In certain embodiments, populating the cache further includes: comparing a first input to a second input while ignoring the least significant bit of the input, comparing a first output to a second output ignoring the least significant bit (LSB), combinations thereof, and the like.

[0051] For example, if a first binary input '10001000' results in a first binary output '100010' and a second binary input '10001001' results in a second binary output '100011', the cache is populated by assigning an input having a value of '1000100_', where '_' signifies the LSB, and an output defined as '100011' (the second binary output). This increases the probability of a cache hit occurring, which is desirable as frequent cache hits result in reduced use of processing circuitries, which translates to a reduction in power consumption. This is beneficial, for example, in CNNs which are used for image processing, as this takes advantage of images, as inputs, having value locality (i.e., pixels proximate to each other are similar in value). In an embodiment, similarity is determined, for example, by a predetermined number of LSBs. In certain embodiments, the predetermined number of LSBs is different for different layers of the neural network model. For example, in a first layer two values are considered similar if they differ by one LSB, and in a second, consecutive, layer two values are considered similar if they differ by two LSBs. In certain embodiments, a number of LSBs is determined by the length of the input. In some embodiments, the number of LSBs is determined by a ratio between the number of LSBs and the total length of the input. In an embodiment, the ratio between the number of LSBs and the total length of the input is one LSB to every eight bits. For example, if the total length of the input is ten bits, the number of LSBs is one; if the total length of the input is sixteen bits, the number of LSBs is two.
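
By way of a non-limiting illustration (the Python code and helper names below are assumptions introduced for this example), the length-dependent choice of LSBs and the masked cache key described above may be sketched as follows:

def lsb_count_for_length(input_length_bits: int, bits_per_lsb: int = 8) -> int:
    """Derive the number of LSBs to ignore from the input length,
    e.g., one LSB per eight bits of input (10 bits -> 1 LSB, 16 bits -> 2 LSBs)."""
    return max(1, input_length_bits // bits_per_lsb)

def masked_cache_key(binary_input: str) -> str:
    """Replace the ignored LSBs with '_' so that similar inputs share one key."""
    n = lsb_count_for_length(len(binary_input))
    return binary_input[:-n] + "_" * n

# '10001000' and '10001001' both map to the key '1000100_', so either input
# hits the same cached output.
print(masked_cache_key("10001000"), masked_cache_key("10001001"))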

[0052] In an embodiment similarity between a first input and a second input, between a first output and a second output, and a combination thereof, is further determined based on a length of the input, a length of the output, a combination thereof, and a number of LSBs. For example, if the length of the input is 8 bits, and the number of LSBs is equal to 1, inputs may be considered similar if they differ from each other by one bit (the LSB). In certain embodiments, the length of the input (or output) and the number of LSBs may be taken into account when determining similarity. For example, if the input length is 4 bits, an LSB length equal to 2 may result in outputs which are wildly different, resulting in errors which are unacceptable or in meaningless results. However, if the input length is 16 bits, then an LSB length of 2 bits is acceptable, in an embodiment.

[0053] In some embodiments, the number of bits that can be ignored from the inputs (i.e., the LSBs) is determined, guided, and the like, by a circuitry external to the processing circuitry, that is configured to calculate the number of bits for each node, for each layer of nodes, and the like, based on a profiling analysis.

[0054] Fig. 5 is an example diagram of a neural network processing system 500, implemented in accordance with an embodiment. Executable code 520 is generated from a neural network model 510 which is initialized using some known input values. In an embodiment the executable code 520 is generated by the processing circuitry described with respect to Fig. 3 above. In certain embodiments the executable code 520 is provided to a code scanner 540 and a processing circuitry, such as a central processing unit (CPU), digital signal processor (DSP), Field Programmable Gate Array (FPGA), or multicore GPGPU 530. In certain embodiments, the executable code 520 is provided to a parallel processor, for example such as described above in Fig. 3. In an embodiment the multicore GPGPU 530 is connected with a memory 535, which is used as a memoization cache. In an embodiment the memory 535 utilizes the techniques disclosed herein.

[0055] In an embodiment, a code scanner 540 is configured to receive the executable code 520, and is further configured to determine if there are cacheable instruction blocks. The code scanner 540 is configured, in an embodiment, to implement the methods described herein. In an embodiment, the code scanner 540 is further configured to determine similarity between inputs, between outputs, between input-output pairings, and any combination thereof, to determine if the inputs, outputs, input-output pairings, or any combination thereof, can be used to populate a memoization cache in the memory 535. In certain embodiments, the code scanner 540 is implemented as part of the system 300 of Fig. 3.

[0056] The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.

[0057] All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

[0058] It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements.

[0059] As used herein, the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; 2A; 2B; 2C; 3A; A and B in combination; B and C in combination; A and C in combination; A, B, and C in combination; 2A and C in combination; A, 3B, and 2C in combination; and the like.