Title:
METHOD AND SYSTEM FOR CONVOLUTION MODEL HARDWARE ACCELERATOR
Document Type and Number:
WIPO Patent Application WO/2020/160653
Kind Code:
A1
Abstract:
A method and system for a convolution model hardware accelerator. The method comprises receiving a stream of an input feature map into the one or more processors utilizing a convolution model that includes a plurality of convolution layers, for a given convolution layer within the plurality of convolution layers, reconfiguring a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks, and in accordance with the reconfigured computational order, generating output features that are interpretive of the input feature map.

Inventors:
ZHANG LEI (CA)
QIAN JUN (US)
Application Number:
PCT/CA2020/050136
Publication Date:
August 13, 2020
Filing Date:
February 04, 2020
Assignee:
ZHANG LEI (CA)
QIAN JUN (US)
International Classes:
G06F17/10; G06F15/76; G06N20/00
Domestic Patent References:
WO2016092323A1, 2016-06-16
Foreign References:
US20190028752A1, 2019-01-24
Attorney, Agent or Firm:
MBM INTELLECTUAL PROPERTY LAW LLP (CA)
Claims:
What is claimed is:

1. A method for implementing a convolution model hardware accelerator in one or more processors, the method comprising: receiving a stream of an input feature map into the one or more processors, the input feature map utilizing a convolution model that includes a plurality of convolution layers; for a given convolution layer within the plurality of convolution layers, reconfiguring a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks; and in accordance with the reconfigured computational order, generating a plurality of output features that are interpretive of the input feature map.

2. The method of claim 1 wherein reconfiguring the computational order further comprises identifying at least one of a number of 0's (zeros) in the input feature data and the output filters associated with at least a set of the plurality of hardware accelerator sub-blocks.

3. The method of claim 2 further comprising dynamically re-allocating respective ones of the plurality of output filters amongst the hardware accelerator sub-blocks in a hardware implementation.

4. The method of claim 2 further comprising statically re-allocating respective ones of the plurality of output filters amongst the hardware accelerator sub-blocks in a firmware implementation controlled by an embedded central processing unit (CPU).

5. The method of claim 2 wherein reconfiguring the computational order minimizes the processing time for the given convolution layer.

6. The method of claim 1, wherein the convolution model hardware accelerator is implemented in one or more of a field-programmable gate array (FPGA) device, a massively parallel processor array device, a graphics processing unit (GPU) device, a central processing unit (CPU) device, and an application-specific integrated circuit (ASIC).

7. The method of claim 1 wherein the input feature map comprises an image.

8. A processing system comprising: one or more processors; a non-transient memory storing instructions executable in the one or more processors to implement a convolution model hardware accelerator by: receiving a stream of an input feature map into the one or more processors, the input feature map utilizing a convolution model that includes a plurality of convolution layers; for a given convolution layer within the plurality of convolution layers, reconfiguring a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks; and in accordance with the reconfigured computational order, generating a plurality of output features that are interpretive of the input feature map.

9. The processing system of claim 8 wherein reconfiguring the computational order further comprises identifying at least one of a number of 0's (zeros) in the input feature data and the plurality of output filters associated with at least a set of the plurality of hardware accelerator sub-blocks.

10. The processing system of claim 9 further comprising dynamically re-allocating respective ones of the plurality of output filters amongst the hardware accelerator sub-blocks in a hardware implementation.

11. The processing system of claim 9 further comprising statically re-allocating respective ones of the plurality of output filters amongst the hardware accelerator sub-blocks in a firmware implementation controlled by an embedded central processing unit (CPU).

12. The processing system of claim 8 wherein reconfiguring the computational order minimizes the processing time for the given convolution layer.

13. The processing system of claim 8 wherein the convolution model hardware accelerator is implemented in one or more of a field-programmable gate array (FPGA) device, a massively parallel processor array device, a graphics processing unit (GPU) device, a central processing unit (CPU) device, and an application-specific integrated circuit (ASIC).

14. The processing system of claim 8 wherein the input feature map comprises an image.

15. The processing system of claim 8 wherein the hardware accelerator is a first hardware accelerator, and further comprising at least a second hardware accelerator.

16. A non-transient processor-readable memory including instructions executable in one or more processors to: receive a stream of an input feature map into the one or more processors, the input feature map utilizing a convolution model that includes a plurality of convolution layers; for a given convolution layer within the plurality of convolution layers, reconfigure a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks; and in accordance with the reconfigured computational order, generate a plurality of output features that are interpretive of the input feature map.

Description:
METHOD AND SYSTEM FOR CONVOLUTION MODEL HARDWARE ACCELERATOR

TECHNICAL FIELD

[0001] The disclosure herein relates to the field of processor techniques, devices and systems for machine learning models including convolution networks.

BACKGROUND

[0002] Machine learning systems provide critical tools to advance new technologies including automatic speech recognition, autonomous vehicles, computer vision, and natural language understanding. Convolution models including convolution neural networks have been shown to be effective tools for performing image recognition, detection, and retrieval. Before a neural network can be used for these inference tasks, it must be trained on a data corpus in a computationally intensive process in which existing systems typically require weeks to months of time on graphics processing units (GPUs) or central processing units (CPUs).

[0003] As more and more data are included for training and machine learning inference networks, the time required is further exacerbated. Hardware accelerators are more energy efficient than existing GPU-based approaches, and significantly reduce the energy consumption required for neural network training and inference tasks.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] FIGS. 1A-1B illustrate, in example embodiments, convolution model instances for implementing a hardware accelerator.

[0005] FIG. 2 illustrates, in one example embodiment, an architecture of a platform device, including one or more processors, implementing a convolution model hardware accelerator.

[0006] FIG. 3 illustrates a method of operation, in one example embodiment, for implementing a convolution model hardware accelerator.

DETAILED DESCRIPTION

[0007] Among other technical advantages and benefits, solutions herein provide for re-shuffling, or reallocating, an initial order of output filters (also referred to herein as filters, weights or kernels) in a convolution model in a sparsity mode for machine learning inference and training accelerators. Solutions herein recognize that hardware accelerators used for machine learning inference and training workloads often provide higher throughput whilst consuming lower power than CPUs or GPUs. With regard to convolution models in particular, multi-instance machine learning hardware accelerators may be implemented to provide higher throughput compared to a single instance hardware accelerator, further enhancing speed and efficiency with regard to machine learning workloads.

[0008] Multi-instance hardware accelerators can all be used for one single machine learning job. For example, all the instances of the hardware accelerator can be used to do the machine learning inference work of a single image at the same time, typically for batch-one inference. A specific mode, the sparsity mode, utilizes the fact that there can be a lot of zeros (0's) in the input feature data and the output filter (or weight) portion of the convolution model. Data and weight components that are 0 are not used in the multiplication part of the computations in a given machine learning job, and this aspect may be applied using the techniques and systems herein to hardware accelerators to further speed up machine learning tasks. The disclosure herein describes a novel way to re-balance computational loading among the multi-instance convolution model machine learning inference and training hardware accelerators, especially in the sparsity mode, to increase the level of parallelism and reduce overall computational times.

[0009] In accordance with a first example embodiment, a method of implementing a convolution model hardware accelerator is provided. The method includes receiving a stream of an input feature map into the one or more processors utilizing a convolution model that includes a plurality of convolution layers, for a given convolution layer within the plurality of convolution layers, reconfiguring a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks, and in accordance with the reconfigured computational data flow, generating output features that are interpretive of the input feature map.

[0010] In accordance with a second example embodiment, a processing system that includes one or more processors and a memory storing instructions executable in the one or more processors to provide a convolution model hardware accelerator is disclosed. The memory includes instructions executable to receive a stream of an input feature map into the one or more processors utilizing a convolution model that includes a plurality of convolution layers, for a given convolution layer within the plurality of convolution layers, reconfigure a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks, and in accordance with the reconfigured computational order, generate output features that are interpretive of the input feature map.

[0011] In accordance with a third example embodiment, a non-transient memory including instructions executable in one or more processors is provided. The instructions are executable in the one or more processors to implement a convolution model hardware accelerator by receiving a stream of an input feature map into the one or more processors utilizing a convolution model that includes a plurality of convolution layers, for a given convolution layer within the plurality of convolution layers, reconfiguring a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks, and in accordance with the reconfigured computational order, generating output features that are interpretive of the input feature map.

[0012] One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device.

[0013] Furthermore, one or more embodiments described herein may be implemented through the use of logic instructions that are executable by one or more processors. These instructions may be carried on a computer-readable medium. In particular, machines shown with embodiments herein include processor(s), various forms of memory for storing data and instructions, including interface and associated circuitry. Examples of computer-readable mediums and computer storage mediums include flash memory and portable memory storage units. A processor device as described herein utilizes memory, and logic instructions stored on computer-readable medium. Embodiments described herein may be implemented in the form of computer processor-executable logic instructions in conjunction with programs stored on computer memory mediums, and in varying combinations of hardware in conjunction with the processor-executable instructions or code.

SYSTEM DESCRIPTION

[0014] FIG. 1A illustrates, in an example embodiment, a convolution model instance for implementing a hardware accelerator, having single output filter support. The convolution operation typically takes two parts of inputs: one is the input feature map data, and the other is a filter (variously referred to as output filter, kernel, or weight). Given the input channel data with a W (Width) x H (Height) x IC data cube and an R x S x IC filter, the output of direct convolution may be formulated as:

Y(w, h) = Σ_{c=0}^{C-1} Σ_{r=0}^{R-1} Σ_{s=0}^{S-1} X(w + r, h + s, c) · W(r, s, c)

where:

X= input data/input feature/input feature map

w= width of the input or output data

h= height of the input or output data

R= kernel size (width)

S= kernel size (height)

C= number of input channels

Y= output data/output feature/output feature map

W = filter/kernel/weight

[0015] FIG. 1A illustrates an input of 7x7xIC, where IC is the number of input channels. A 7x7 input is used in this example case; the input resolution can vary. A filter can have different sizes; typical sizes are 1x1, 3x3, 5x5, 7x7, etc. A 3x3 filter comprises 9 weights (or 9 values) in the example here. For each input channel, the 3x3 filter, or weight, is convolved with 3x3 data and generates 1 output value. The values at the same location across all the input channels are summed together to generate 1 output data channel. The final 5x5 output data is shown in FIG. 1A.
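The direct convolution just described can be expressed as a short, illustrative sketch (a non-limiting Python example with hypothetical variable names; not part of the claimed accelerator hardware), showing how one 3x3xIC filter over a 7x7xIC input yields a 5x5 output:

```python
import numpy as np

def direct_convolution(x, w):
    """Direct convolution of one output filter over an input feature map.

    x: input feature map of shape (H, W, IC), e.g. (7, 7, IC)
    w: one output filter of shape (R, S, IC), e.g. (3, 3, IC)
    Returns the output feature map of shape (H - R + 1, W - S + 1).
    """
    in_h, in_w, _ = x.shape
    r, s, _ = w.shape
    out_h, out_w = in_h - r + 1, in_w - s + 1
    y = np.zeros((out_h, out_w))
    for row in range(out_h):
        for col in range(out_w):
            # Multiply the RxSxIC window by the filter and sum over every
            # position and input channel to produce one output value.
            y[row, col] = np.sum(x[row:row + r, col:col + s, :] * w)
    return y

ic = 4  # hypothetical number of input channels
x = np.random.rand(7, 7, ic)   # 7x7xIC input feature map
w = np.random.rand(3, 3, ic)   # one 3x3xIC output filter
print(direct_convolution(x, w).shape)  # (5, 5)
```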

[0016] An output filter is applied to detect a particular feature of the input map from an input data stream, for example, to detect lines that curve outward and to the right. Other filters may detect other features of the input map, such as for lines that curve to the left or for straight edges. The more filters, the greater the depth of the activation map, and the more information we have about the input volume.

[0017] This leads to output channel (OC) definitions. Each OC is represented by an output filter used to detect one particular feature or pattern of the input feature map data stream. FIG. 1A shows 1 output filter (1 OC). Normally in deep learning networks there are many OCs (output filters) to look for different information, features or patterns in the data stream of an input feature map.

[0018] FIG. 1B illustrates, in another example embodiment, another convolution model instance for implementing a hardware accelerator; in particular, a convolution model having multiple output filter support. In the example of FIG. 1B, the input feature data is still 7x7xIC. For each output filter, after convolution, a 5x5 output data is generated, as in FIG. 1A. A total of 5x5xOC output data is generated for the K output channel filters (numbered 0 through K-1).
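Continuing the earlier sketch (and reusing its hypothetical direct_convolution helper, x, and ic), the multiple-output-filter case of FIG. 1B simply applies each of the K filters to the same input and stacks the resulting 5x5 planes into a 5x5xOC cube:

```python
import numpy as np

def convolve_all_filters(x, filters):
    """Apply K output filters (OCs) to the same input feature map.

    filters: array of shape (K, R, S, IC), one filter per output channel.
    Returns output data of shape (out_h, out_w, K), e.g. 5x5xOC.
    """
    planes = [direct_convolution(x, filters[k]) for k in range(filters.shape[0])]
    return np.stack(planes, axis=-1)

oc = 16  # hypothetical number of output filters (K)
filters = np.random.rand(oc, 3, 3, ic)
print(convolve_all_filters(x, filters).shape)  # (5, 5, 16)
```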

[0019] Machine learning inference and training networks are typically modeled to include many convolution layers. Typically, the output of one layer becomes the input of the next layer. For example, in FIG. 1B, if the IC of the current layer is 128 and the OC is 256, then the input of the current layer is 7x7x128 and the output is 7x7x256. The input of the next layer is 7x7x256.

[0020] While hardware accelerators are primarily described in the disclosure herein, it is contemplated that the techniques and system can be extended to central processing unit (CPU) and general purpose processing unit (GPU) implementations of the machine learning inference and training workloads.

[0021] FIG. 2 illustrates, in one example embodiment, an architecture 200 of a platform device or processing system, including one or more processors, implementing a convolution model hardware accelerator.

[0022] Convolution model hardware accelerator logic module 205 may include instructions stored in memory 202 executable in conjunction with processor 201. In implementations, the functionality ascribed to processor 201 may be performed using multiple processors deployed in cooperation. Convolution model hardware accelerator logic module 205 may comprise portions or sub-modules including feature input module 210, output filter re-shuffling module 211, and output feature generation module 212. In alternative implementations, it is contemplated that at least some hard-wired circuitry may be used in place of, or in combination with, all or certain portions of the software logic instructions of convolution model hardware accelerator 205 to implement hardware accelerator examples described herein. Thus, the examples described herein are not limited to particular fixed arrangements of hardware circuitry and software instructions.

[0023] Feature input module 210 of convolution model hardware accelerator logic module 205 may include instructions executable in processor 201 to receive a stream of an input feature map into the one or more processors utilizing a convolution model that includes a plurality of convolution layers.

[0024] Output filter re-shuffling module 211 of convolution model hardware accelerator logic module 205 may include instructions executable in processor 201 to, for a given convolution layer within the plurality of convolution layers, reconfigure a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks. In some embodiments, more than one hardware accelerator working in conjunction may be implemented in the processing system.

[0025] Output feature generation module 212 of convolution model hardware accelerator logic module 205 may include instructions executable in processor 201 to, in accordance with the reconfigured computational order, generate output features that are interpretive of the input feature map.

METHODOLOGY

[0026] FIG. 3 illustrates, in an example embodiment, method 300 of operation for implementing a convolution model hardware accelerator. In describing the example of FIG. 3, reference is made to the examples of FIG. 1 through FIG. 2 for purposes of illustrating suitable components or elements for performing a step or sub-step being described.

[0027] Examples of method steps described herein relate to the use of processing system 200 including convolution model hardware accelerator logic module 205 for implementing the techniques described. According to one embodiment, the techniques are performed in response to the processor 201 executing one or more sequences of software logic instructions that constitute convolution model hardware accelerator logic module 205. In embodiments, convolution model hardware accelerator logic module 205 may include the one or more sequences of instructions within sub-modules including feature input module 210, output filter re-shuffling module 211, and output feature generation module 212. Such instructions may be read into memory 202 from a machine-readable medium, such as memory storage devices. In executing the sequences of instructions contained in feature input module 210, output filter re-shuffling module 211, and output feature generation module 212 of convolution model hardware accelerator logic module 205, processor 201 performs the process steps described herein.

[0028] In alternative implementations, at least some hard-wired circuitry may be used in place of, or in combination with, the software logic instructions to implement examples described herein. Thus, the examples described herein are not limited to any particular combination of hardware circuitry and software instructions. Additionally, it is also contemplated that in alternative embodiments, the techniques herein, or portions thereof, may be distributed between several processors working in conjunction.

[0029] A single hardware accelerator instance is normally used to process a small number of output filters simultaneously. A simple example is as follows: a total of 128 output filters (128 OCs), and a hardware accelerator that processes 8 OCs simultaneously. This will take 16 iterations to process all 128 OCs.

[0030] Multi-instance hardware accelerators can all be used for one single machine learning job. For example, all the instances of the hardware accelerator can be used to do the machine learning inference work of a single image at the same time.

[0031] In the multi-instance hardware accelerators case, a simple example is for each hardware accelerator to process the total number of output filters divided by N, where N is the number of hardware accelerators.

[0032] The following network and system are used to illustrate an example embodiment: 1) a network layer with 128 output weight filters (128 OCs); 2) a hardware accelerator with 8 sub-blocks, each of which processes 1 OC at a time, for a total of 8 OCs simultaneously; 3) 4 parallel hardware accelerators. In this example, it takes 4 iterations to process all 128 OCs.
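The iteration counts in this example and in the single-accelerator example above follow directly from the parameters; the following is only an illustrative Python sketch (hypothetical names, not accelerator firmware):

```python
TOTAL_OCS = 128
SUB_BLOCKS_PER_ACCELERATOR = 8  # each sub-block processes 1 OC per iteration

def iterations_needed(num_accelerators):
    """Iterations needed to cover all OCs with the given accelerator count."""
    ocs_per_iteration = num_accelerators * SUB_BLOCKS_PER_ACCELERATOR
    return -(-TOTAL_OCS // ocs_per_iteration)  # ceiling division

print(iterations_needed(1))  # 16 iterations with a single accelerator
print(iterations_needed(4))  # 4 iterations with 4 parallel accelerators
```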

[0033] There is a fixed pool of multipliers in a hardware accelerator to do the multiplications/convolutions of the data and weights. Normally, there are a lot of 0's (zeros) in the input feature data and/or weight (in an output filter) portion of the convolution. In the non-sparsity mode (normal mode), multipliers are used to do the multiplications of data and weights even if one or both are zero. In this case, a fixed amount of time (a fixed number of hardware clock cycles) is consumed. Therefore, in both the single hardware accelerator case and the multiple hardware accelerators case, the numbers of cycles to finish an output channel (OC) are identical, as each sub-block inside a hardware accelerator takes about the same amount of time to process an OC.

[0034] A specific mode, the sparsity mode, utilizes the fact that there can be a lot of 0's (zeros) in the input feature data and/or the weight portion of the convolution. Data and/or weight components that are 0 are not used in the multiplication part of the machine learning job, and this further speeds up machine learning jobs.

[0035] In this special sparsity mode case, the number of cycles to process each OC can vary, depending on the number of 0's in the input feature data and also the number of 0's in the output filters.

[0036] For example (128 OCs total, a hardware accelerator processes 8 OCs simultaneously, 4 hardware accelerators), there are 32 OCs being processed simultaneously across the 4 hardware accelerators. These 32 OCs can finish in different amounts of time (different numbers of hardware clock cycles) due to the different numbers of 0's in the respective weights of the filters.
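One illustrative way to see why these 32 OCs finish at different times is to estimate each OC's relative cost by the multiplications it cannot skip. The sketch below is hypothetical: it counts only non-zero weights, whereas a dynamic scheme could also account for zeros in the data.

```python
import numpy as np

def estimated_cost(filter_weights):
    """Rough sparsity-mode cost proxy for one OC: its non-zero weight count.

    filter_weights: the OC's RxSxIC filter. Multiplications with a zero
    weight are skipped in sparsity mode, so fewer non-zero weights means
    fewer cycles for that OC.
    """
    return int(np.count_nonzero(filter_weights))

rng = np.random.default_rng(0)
dense = rng.random((3, 3, 4))                   # no zero weights
sparse = dense * (rng.random((3, 3, 4)) > 0.7)  # roughly 70% zero weights
print(estimated_cost(dense), estimated_cost(sparse))  # the dense OC costs more
```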

[0037] This invention describes a novel way to balance the loading among the multi-instance machine learning inference or training hardware accelerators, especially in the sparsity mode.

[0038] In the above example, it takes 16 iterations for one hardware accelerator to process all OCs and 4 iterations for 4 hardware accelerators to process all OCs.

[0039] In the case of the single hardware accelerator example, it processes OC0-7 in the first iteration, OC8-15 in the 2nd iteration, and OC120-127 in the 16th iteration. There are 8 sub-blocks in a hardware accelerator. Each sub-block processes 1 OC, so a single hardware accelerator can process 8 OCs simultaneously. The first sub-block processes OC0, 8, 16, 24, ..., 120, the 2nd sub-block processes OC1, 9, 17, 25, ..., 121, and the 8th sub-block processes OC7, 15, 23, ..., 127. The total process time of the first sub-block is the total time to process OC0, 8, 16, 24, ..., 120.

[0040] In the case of 4 hardware accelerators, the first hardware accelerator processes OC0-7 in the first iteration, OC8-15 in the 2nd iteration, OC16-23 in the 3rd iteration, and OC24-31 in the 4th iteration. The 2nd hardware accelerator processes OC32-39 in the first iteration, OC40-47 in the 2nd iteration, and so on. The 4th hardware accelerator processes OC96-127 in 4 iterations. The total process time of the first sub-block of the first hardware accelerator is the total time to process OC0, OC8, OC16 and OC24.

[0041] Alternatively, in the case of 4 hardware accelerators, the first hardware accelerator processes OC0-7 in the first iteration, OC32-39 in the 2nd iteration, OC64-71 in the 3rd iteration, and so on. The 2nd hardware accelerator processes OC8-15 in the first iteration, OC40-47 in the 2nd iteration, and so on. The total process time of the first sub-block of the first hardware accelerator is the total time to process OC0, OC32, OC64 and OC96.
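The two fixed assignment patterns of paragraphs [0040] and [0041] can be written out explicitly; the sketch below (illustrative only, with hypothetical function names) lists which OCs the first sub-block of the first accelerator processes under each pattern.

```python
NUM_ACCELERATORS = 4
SUB_BLOCKS = 8   # sub-blocks per accelerator, each handling 1 OC per iteration
ITERATIONS = 4
TOTAL_OCS = NUM_ACCELERATORS * SUB_BLOCKS * ITERATIONS  # 128

def blocked_ocs(accel, sub_block):
    """Paragraph [0040] pattern: accelerator 0 takes OC0-31, accelerator 1 OC32-63, ..."""
    base = accel * SUB_BLOCKS * ITERATIONS
    return [base + it * SUB_BLOCKS + sub_block for it in range(ITERATIONS)]

def interleaved_ocs(accel, sub_block):
    """Paragraph [0041] pattern: accelerators take consecutive groups of 8 OCs in turn."""
    return [(it * NUM_ACCELERATORS + accel) * SUB_BLOCKS + sub_block
            for it in range(ITERATIONS)]

print(blocked_ocs(0, 0))      # [0, 8, 16, 24]
print(interleaved_ocs(0, 0))  # [0, 32, 64, 96]
```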

[0042] In all the cases above, regardless of one hardware accelerator or 4 hardware accelerators, the OCs assigned to the later iterations follow a fixed pattern and do not take into consideration the time or hardware clock cycles consumed by the earlier iterations in the sparsity mode.

[0043] In the present invention, the OC assignment of the later iterations takes into consideration the estimated or actual time consumed by the earlier OCs.

[0044] Normally, for an OC whose weights have many 0's (zeros), fewer multiplications are needed and hence less time is required to generate output data for this OC.

[0045] Note that for a 3x3 convolution, the filter of an OC has a size of 3x3xIC, where IC is the number of input channels. The number of 0's in the 3x3xIC filter determines the number of multiplications needed. Furthermore, when both data sparsity and weight sparsity are considered, the number of 0's in the data along with the number of 0's in the 3x3xIC filter of an OC determines the number of multiplications needed for this OC.

[0046] For example, in the 3x3-weight filter case, there are up to a total of 9 non-zero weights in each input channel. A filter with 6 zero weights (3 non-zero weights) requires fewer multiplications (and hence consumes less time) than a filter with no zero weights (9 valid weights).

[0047] In the above example of a single hardware accelerator, taking sub-block0 as an example, it is possible that OC0, 8, 16, 24, ..., 120 all have filters with many 0 weights, while the OC1, 9, 17, ..., 121 filters of sub-block1 have few 0 weights. In this case, sub-block1 can take much longer than sub-block0. This is a non-optimal case, as only when all the OCs are finished processing can the current layer of the network complete and move on to the next layer.

[0048] The present invention dynamically or statically combines OCs with fewer 0 weights in the filters with OCs with more 0 weights in the filters across the multiple iterations for the same sub-block of a hardware accelerator. This optimization increases the chances that all the sub-blocks of a hardware accelerator, or all the sub-blocks of all the hardware accelerators, finish as close to the same time as possible for a given layer of a network.

[0049] For example, in the previous example, OC0, 8, 16, 24, ..., 120 all have filters with many 0 weights and they are all assigned to sub-block0, while OC1, 9, 17, 25, ..., 121 all have filters with few 0 weights and they are all assigned to sub-block1. A simple re-shuffle results in sub-block0 having OC0, 9, 16, 25, ..., 121 while sub-block1 has OC1, 8, 17, 24, ..., 120. This makes sure the input data of both sub-blocks are multiplied with filters having a similar density of 0's. The example here can also be extended to all the sub-blocks of a single hardware accelerator or to all the hardware accelerators. The above is an example of re-shuffling/re-allocation only.
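One possible static realization of such a re-shuffle (a hypothetical sketch only, using the non-zero-weight count as the cost estimate; the invention is not limited to this particular procedure) is to hand out OCs from most to least expensive, always to the least-loaded sub-block that still has an open iteration slot:

```python
def reshuffle_ocs(costs, num_sub_blocks, iterations):
    """Greedy static re-shuffle: balance estimated cost across sub-blocks.

    costs: dict mapping OC index -> estimated cost (e.g. non-zero weight count).
    Returns a dict mapping sub-block index -> list of assigned OCs, with at
    most `iterations` OCs per sub-block.
    """
    assignment = {sb: [] for sb in range(num_sub_blocks)}
    load = {sb: 0 for sb in range(num_sub_blocks)}
    for oc in sorted(costs, key=costs.get, reverse=True):
        # Pick the least-loaded sub-block that still has a free iteration slot.
        open_sbs = [sb for sb in assignment if len(assignment[sb]) < iterations]
        target = min(open_sbs, key=lambda sb: load[sb])
        assignment[target].append(oc)
        load[target] += costs[oc]
    return assignment

# Hypothetical costs: even OCs have many zero weights (cheap), odd OCs are dense.
costs = {oc: (3 if oc % 2 == 0 else 9) for oc in range(16)}
print(reshuffle_ocs(costs, num_sub_blocks=8, iterations=2))
# Each sub-block ends up with one cheap and one expensive OC.
```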

[0050] The decision as to which OCs are assigned to which sub-block during the re-shuffling can be made statically by firmware (controlled by an embedded CPU) or dynamically by hardware. Examples of decision criteria for allocating different OCs to different sub-blocks of one or more hardware accelerators: 1) the number of non-zero weights in a single output filter; 2) the number of non-zero weights across multiple output filters; 3) data sparsity in combination with the filter/weight sparsity - this can only be done dynamically rather than statically; 4) the actual processing time of previous iterations.

[0051] As mentioned, machine learning inference and/or training networks typically have many layers of convolutions. Typically, the output of one layer becomes the input of the next layer. For example, in FIG. 1B, if the IC of the current layer is 128 and the OC is 256, then the input of the current layer is 7x7x128 and the output is 7x7x256. The input of the next layer is 7x7x256. In the present invention, OCs are re-allocated across different sub-blocks of the hardware accelerator or accelerators. The 256 output channels of 7x7x256 for the current layer (or 256 input channels of 7x7x256 in the next layer), due to the re-shuffling or re-allocation of the sub-blocks, are thus subjected to a re-ordering of multiplication operations for the sub-blocks as re-allocated. This does not present a problem for the final summation and output, as all the input channels are summed or added together after the convolution operation, regardless of the particular order.
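That the final result is unaffected by the processing order can be checked with a trivial sketch (hypothetical values), since the output is simply the sum of the per-input-channel contributions:

```python
import numpy as np

rng = np.random.default_rng(1)
partials = rng.random(256)    # per-input-channel partial sums at one output position
order = rng.permutation(256)  # a re-shuffled processing order for the channels
# Summing the contributions in any order gives the same output value
# (up to floating-point rounding).
print(np.isclose(partials.sum(), partials[order].sum()))  # True
```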

[0052] In an example hardware accelerator operation embodying at least some aspects of the foregoing example embodiments of the disclosure herein, at step 310, processor 201 executes instructions of feature input module 210 to receive a stream of an input feature map into the one or more processors utilizing a convolution model that includes a plurality of convolution layers.

[0053] In one aspect, the input feature map comprises an image, which may include a plurality of image features such as lines curving to the left, to the right, upward or downward, for example.

[0054] At step 320, processor 201 of the hardware accelerator executes instructions included in output filter re-shuffling module 211 to, for a given convolution layer within the plurality of convolution layers, reconfigure a computational order for a plurality of hardware accelerator sub-blocks by re-shuffling a plurality of output filters among the plurality of sub-blocks.

[0055] In one embodiment, reconfiguring the computational data flow is based on identifying at least one of a number of 0's (zeros) in the input feature data and in the plurality of output filters associated with at least a set of the plurality of hardware accelerator sub-blocks.

[0056] In one variation, the reallocating, or re-shuffling, of the order of output filters comprises dynamically re-allocating the output filters amongst the hardware accelerator sub-blocks in a hardware implementation.

[0057] In another variation, the re-shuffling comprises statically re-allocating the output filters amongst the hardware accelerator sub-blocks in a firmware implementation controlled by an embedded central processing unit (CPU).

[0058] In embodiments, as a result of the re-shuffling of the output filters, the processing time is reduced for the given convolution layer to which the hardware accelerator technique and system is being applied.

[0059] At step 330, processor 201 executes instructions included in output feature generation module 212 to, in accordance with the reconfigured computational order, generate output features that are interpretive of the input feature map.

[0060] It is contemplated that the convolution model hardware accelerator may be implemented in one or more of a field-programmable gate array (FPGA) device, a massively parallel processor array device, a graphics processing unit (GPU) device, a central processing unit (CPU) device, and an application-specific integrated circuit (ASIC).

[0061] It is contemplated that embodiments described herein be extended and applicable to individual elements and concepts described herein, independently of other concepts, ideas or system, as well as for embodiments to include combinations of elements in conjunction with combinations of steps recited anywhere in this application. Although embodiments are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments. As such, many modifications and variations will be apparent to practitioners skilled in this art. Accordingly, it is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, any absence of describing combinations does not preclude the inventors from claiming rights to such combinations.