Title:
ALTERNATIVE LOOP LIMITS
Document Type and Number:
WIPO Patent Application WO/2018/236468
Kind Code:
A1
Abstract:
Methods, systems, and apparatus for accessing an N-dimensional tensor are described. In some implementations, a method includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After a penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop are performed for a final iteration of the first nested loop until an alternative loop bound is reached.

Inventors:
TEMAM OLIVIER (FR)
KHAITAN HARSHIT (US)
NARAYANASWAMI RAVI (US)
WOO DONG HYUK (US)
Application Number:
PCT/US2018/029796
Publication Date:
December 27, 2018
Filing Date:
April 27, 2018
Assignee:
GOOGLE LLC (US)
International Classes:
G06F8/41; G06N20/00; G06F9/30; G06F9/50; G06T1/20
Foreign References:
U.S. Patent Application No. 15/335,769, filed 2016-10-27
U.S. Patent Application No. 15/014,265, filed 2016-02-03
Other References:
"Compilers - Principles, Techniques, Tools", 1 January 2007, ADDISON-WESLEY, ISBN: 978-0-321-48681-3, article ALFRED V AHO ET AL: "Compilers - Principles, Techniques, Tools", pages: 782 - 787, XP055490480
DMITRY I. LYAKH: "An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU", COMPUTER PHYSICS COMMUNICATION., vol. 189, 1 April 2015 (2015-04-01), NL, pages 84 - 91, XP055493371, ISSN: 0010-4655, DOI: 10.1016/j.cpc.2014.12.013
ADAM COATES ET AL: "Deep learning with COTS HPC systems", JOURNAL OF MACHINE LEARNING RESEARCH, 21 June 2013 (2013-06-21), pages 1337 - 1345, XP055226960, Retrieved from the Internet [retrieved on 20151109]
Attorney, Agent or Firm:
WRIGHT, Christopher D. (US)
Claims:
CLAIMS

1. A method performed by a computing system for accessing an N-dimensional tensor, comprising:

for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached, wherein a number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a number of computing units of the computing system; and

after a penultimate iteration of the first nested loop has completed, performing one or more iterations of the second nested loop for a final iteration of the first nested loop until an alternative loop bound is reached, wherein the alternative loop bound is less than the first loop bound.

2. The method of claim 1, further comprising substituting the alternative loop bound for the first loop bound for the final iteration of the first nested loop in response to determining that the penultimate iteration of the first nested loop has completed.

3. The method of claim 1, wherein each individual computing unit comprises a compute tile, a processor, or a math unit.

4. The method of claim 2, wherein:

performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached comprises performing each iteration of the second nested loop in parallel using the computing units; and

each computing unit performs a respective iteration of the second nested loop.

5. The method of claim 2, wherein the alternative loop bound is based on a remainder value resulting from dividing the total number of iterations of the second nested loop by the number of computing units.

6. The method of claim 1, wherein a set of nested loops including the first nested loop and the second nested loop includes one or more loops nested between the first nested loop and the second nested loop, and the second nested loop is nested within another loop.

7. The method of claim 1, wherein the second nested loop is nested directly within the first nested loop without any other loops nested between the first nested loop and the second nested loop.

8. A system for accessing an N-dimensional tensor, the system comprising:

a plurality of individual computing units;

one or more processors configured to:

for each of one or more first iterations of a first nested loop, perform iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached, wherein a number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a number of the computing units; and

after a penultimate iteration of the first nested loop has completed, perform one or more iterations of the second nested loop for a final iteration of the first nested loop until an alternative loop bound is reached, wherein the alternative loop bound is less than the first loop bound.

9. The system of claim 8, wherein the one or more processors are further configured to substitute the alternative loop bound for the first loop bound for the final iteration of the first nested loop in response to determining that the penultimate iteration of the first nested loop has completed.

10. The system of claim 8, wherein each individual computing unit comprises a compute tile, a processor, or a math unit.

11. The system of claim 8, wherein:

performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached comprises performing each iteration of the second nested loop in parallel using the computing units; and

each computing unit performs a respective iteration of the second nested loop.

12. The system of claim 8, wherein the alternative loop bound is based on a remainder value resulting from dividing the total number of iterations of the second nested loop by the number of computing units.

13. The system of claim 8, wherein a set of nested loops including the first nested loop and the second nested loop includes one or more loops nested between the first nested loop and the second nested loop, and the second nested loop is nested within another loop.

14. The system of claim 8, wherein the second nested loop is nested directly within the first nested loop without any other loops nested between the first nested loop and the second nested loop.

15. An apparatus for accessing an N-dimensional tensor, the apparatus comprising: a plurality of individual computing units that each compute memory addresses for tensor elements; and

a controller configured to assign iterations of nested loops to the individual computing units by performing operations comprising:

for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached, wherein a number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a number of the computing units and wherein one of the computing units determines a memory address for a tensor element for each iteration of the second nested loop; and

after a penultimate iteration of the first nested loop has completed, performing one or more iterations of the second nested loop for a final iteration of the first nested loop until an alternative loop bound is reached, wherein the alternative loop bound is less than the first loop bound.

16. The apparatus of claim 15, wherein the controller is configured to perform further operations comprising substituting the alternative loop bound for the first loop bound for the final iteration of the first nested loop in response to determining that the penultimate iteration of the first nested loop has completed.

17. The apparatus of claim 15, wherein each individual computing unit comprises a compute tile, a processor, or a math unit.

18. The apparatus of claim 15, wherein:

performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached comprises performing each iteration of the second nested loop in parallel using the computing units; and

each computing unit performs a respective iteration of the second nested loop.

19. The apparatus of claim 15, wherein the alternative loop bound is based on a remainder value resulting from dividing the total number of iterations of the second nested loop by the number of computing units.

Description:
ALTERNATIVE LOOP LIMITS

BACKGROUND

[0001] This specification generally relates to performing machine learning computations using a special purpose computational unit that includes multiple computing units.

[0002] Neural networks are machine learning models that employ one or more layers of models to generate an output, e.g., a classification, for a received input. Some neural networks include one or more hidden layers in addition to an output layer. The output of each hidden layer is used as input to the next layer in the network, i.e., the next hidden layer or the output layer of the network. Each layer of the network generates an output from a received input in accordance with current values of a respective set of parameters.

[0003] Some neural networks include one or more convolutional neural network layers. Each convolutional neural network layer has an associated set of kernels. Kernels can be represented as a matrix structure of weight inputs. Each convolutional layer uses the kernels to process inputs to the layer. A set of inputs to the layer can also be represented as a matrix structure.

SUMMARY

[0004] According to one innovative aspect of the subject matter described in this specification, a method for accessing an N-dimensional tensor includes, for each of one or more first iterations of a first nested loop, performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached. A number of iterations of the second nested loop for the one or more first iterations of the first nested loop can be limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system. After a penultimate iteration of the first nested loop has completed, one or more iterations of the second nested loop can be performed for a final iteration of the first nested loop until an alternative loop bound is reached, wherein the alternative loop bound is less than the first loop bound.

[0005] These and other implementations can each optionally include one or more of the following features. Some aspects can include substituting the alternative loop bound for the first loop bound for the final iteration of the first nested loop in response to determining that the penultimate iteration of the first nested loop has completed.

[0006] In some aspects, the value of the hardware property includes a number of individual computing units of the computing system. Each individual computing unit can include a compute tile, a processor, or a math unit.

[0007] Performing iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached can include performing each iteration of the second nested loop in parallel using the computing units. Each computing unit can perform a respective iteration of the second nested loop.

[0008] In some aspects, the alternative loop bound is based on a remainder value resulting from dividing the total number of iterations of the second nested loop by the number of computing units. A set of nested loops including the first nested loop and the second nested loop can include one or more loops nested between the first nested loop and the second nested loop and the second nested loop can be nested within another loop. The second nested loop can be nested directly within the first nested loop without any other loops nested between the first nested loop and the second nested loop.

[0009] The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. By performing machine learning computations in parallel using multiple computing units, e.g., multiple compute tiles, multiple processors, or multiple math units, computational speed and efficiency are increased, allowing more complex machine learning computations to be performed in a shorter amount of time. An adjustable loop bound for a nested loop allows for parallel processing of iterations of the nested loop even when the number of iterations is not a multiple of the number of individual computing units or other hardware property. The loop bound for an inner loop can be set such that the number of iterations of the inner loop equals the number of individual computing units for all but a final iteration of an outer loop in which the inner loop is nested. This allows each iteration of the inner loop to be performed in parallel, e.g., at the same time, for each iteration of the outer loop. In addition, for all but the last iteration of the outer loop, each individual computing unit is utilized for each iteration of the outer loop, resulting in faster and more efficient computations. By substituting an alternative loop bound for the inner loop for the final iteration of the outer loop, the number of instructions needed to perform the iterations of the inner loop can be reduced, allowing for fewer memory devices and/or more available memory.

[0010] Other implementations of this and other aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. A system of one or more computers can be so configured by virtue of software, firmware, hardware, or a combination of them installed on the system that in operation cause the system to perform the actions. One or more computer programs can be so configured by virtue of having instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.

[0011] The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other potential features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 is a block diagram of an environment in which an example computing system accelerates tensor computations.

[0013] FIG. 2 illustrates example nested loops for traversing a tensor using multiple computing units.

[0014] FIG. 3 is a flow diagram that illustrates an example process for performing tensor computations.

[0015] Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0016] The subject matter described in this specification relates to using alternative loop limits for processing iterations of nested loops in parallel, e.g., using a hardware computing system that includes multiple computing units. Each computing unit may be implemented as a compute tile, a processor, or a math unit. The multiple computing units can be configured to accelerate inference workloads of a neural network and/or to accelerate computations for determining memory addresses for tensor elements. Each computing unit of the hardware computing system is self-contained and can independently execute computations required by a given layer of a multi-layer neural network.

[0017] A neural network having multiple layers can be used to compute inferences. For example, given an input, the neural network can compute an inference for the input. The neural network computes this inference by processing the input through each of the layers of the neural network. In particular, the layers of the neural network each have a respective set of weights. Each layer receives an input and processes the input in accordance with the set of weights for the layer to generate an output.

[0018] Therefore, in order to compute an inference from a received input, the neural network receives the input and processes it through each of the neural network layers in order to generate the inference, with the output from one neural network layer being provided as input to the next neural network layer. Data inputs to a neural network layer, e.g., either the input to the neural network or the outputs of the layer below the layer in the sequence, can be referred to as activation inputs to the layer.

[0019] Techniques described in this specification can perform the computation of memory addresses for tensor elements by distributing tensor computations across multiple computing units, e.g., multiple compute tiles, multiple processors, or multiple math units. The computation of a memory address can include determining a memory address offset based on tensor status elements and adding the offset to a base address for the tensor elements.

[0020] A tensor is a multi-dimensional geometric object and example multi-dimensional geometric objects include matrices and data arrays. In general, a software algorithm is executed by one or more compute tiles to perform tensor computations by processing a nested loop to traverse an N-dimensional tensor. In one example computational process, each loop may be responsible for traversing a particular dimension of the N-dimensional tensor. For a given tensor construct, a compute tile may require access to an element of a particular tensor to execute one or more dot product computations associated with the tensor. A computation process performed within a neural network layer may include a multiplication of an input tensor including input activations with a parameter tensor including weights. The computation includes multiplying an input activation with a weight on one or more cycles and performing an accumulation of the products over many cycles. Computation occurs when an input activation provided by a memory structure is multiplied with a parameter or weight provided by another memory structure. Because the tensor is stored in a memory, a set of tensor indices may require translation to a set of memory addresses in order to retrieve the correct element of the tensor from the memory. In general, a tensor traversal unit of a compute tile executes control operations that provide the index of each dimension associated with the tensor and an order in which index elements are traversed to perform computations. Tensor computations end when multiplication results are written to an output bus and stored in memory.

[0021] Multiple math units within a compute tile (or multiple compute tiles) can perform the memory address computations for an N-dimensional tensor in parallel. For example, a computation may be performed for each iteration of an inner-most loop of the nested loops. Each loop in which a tensor computation is performed is referred to as a "tensor computation loop" and may not always be the inner-most loop. The computations for these iterations can be performed in parallel using the math units.

[0022] Traversing a tensor in a nested loop requires a computation of a memory address value of an element to load or store the corresponding data value of the element. For example, the elements of the three dimensional tensor may represent the features of an image being classified by a neural network. A first dimension (Z) may represent the width of the image, the second dimension (Y) may represent the height of the image, and the third dimension (X) may represent RGB values for pixels in the image. To classify the image, each RGB value may be multiplied by a filter value of a convolutional layer to generate an activation map.

[0023] A nested loop can be used to determine the memory address for accessing each RGB value of the tensor. The nested loop can include a loop for each dimension of the tensor. For example, an outer loop (z) may be used to traverse the Z dimension (the width of the image), a middle loop (y) may be used to traverse the Y dimension (the height of the image), and an inner loop (x) may be used to traverse the X dimension (the three RGB values for each pixel). At each iteration of the inner loop, a memory address is determined for one of the three RGB values for a particular pixel of the image represented by the value of the outer loop z and the middle loop y. For example, the memory address for the R value of the pixel of the image represented by Z=0 and Y=0 may be determined during the first iteration of the inner loop x when z=0 and y=0 (e.g., z=0; y=0; x=0). Similarly, the memory address for the B value of the pixel of the image represented by Z=5 and Y=2 may be determined during the third iteration of the inner loop x when z=5 and y=2 (e.g., z=5; y=2; x=2).
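
The address arithmetic in this example can be sketched in a few lines of C. This is an illustrative sketch only, not code from the specification; it assumes a row-major layout with the X (RGB) dimension varying fastest, and the image dimensions are made-up values:

    #include <stdio.h>

    #define Z_LEN 640  /* image width (Z dimension), illustrative */
    #define Y_LEN 480  /* image height (Y dimension), illustrative */
    #define X_LEN 3    /* RGB values per pixel (X dimension) */

    /* Memory address of tensor element (z, y, x): the base address plus a
     * linear combination of the dimension indices, X varying fastest. */
    unsigned long element_address(unsigned long base, int z, int y, int x) {
        return base + (unsigned long)(z * Y_LEN * X_LEN + y * X_LEN + x);
    }

    int main(void) {
        unsigned long base = 0;
        /* R value of the pixel at Z=0, Y=0: first iteration of the inner loop */
        printf("%lu\n", element_address(base, 0, 0, 0));
        /* B value of the pixel at Z=5, Y=2: third iteration of the inner loop */
        printf("%lu\n", element_address(base, 5, 2, 2));
        return 0;
    }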

[0024] The memory address computations can be performed in parallel using multiple computing units. For example, if there are three computing units, the memory address value for each RGB value of a particular pixel can be determined in parallel. A first computing unit can determine the memory address for the R value for the pixel, a second computing unit can determine the memory address for the G value for the pixel, and a third computing unit can determine the memory address for the B value for the pixel. After a memory address is determined for an RGB value, a processing unit can access the value using the memory address and multiply the value with a filter value.

[0025] In some cases, the number of iterations of the tensor computation loop may exceed the number of computing units. In such cases, the iterations of the tensor computation loop can be divided into multiple parallel iterations of an outer loop in which the tensor computation loop is nested. For example, the dimension of an N-dimensional tensor that corresponds to the tensor computation loop may include 128 elements and the computing system may include 64 computing units. In this example, the tensor computation loop includes 128 iterations that can be divided into two outer loop iterations of 64 such that 64 computations are performed in parallel for each of two iterations of an outer loop. In this example, the first 64 iterations may be distributed amongst the computing units. After the first 64 iterations are complete, the next 64 iterations can be distributed amongst the computing units.
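
A minimal sketch of this division, assuming 64 computing units and a 128-iteration tensor computation loop; dispatch_to_unit is a hypothetical helper (not part of the specification) standing in for handing one iteration to one computing unit:

    #define NUM_UNITS 64     /* computing units in the system (assumption) */
    #define TOTAL_ITERS 128  /* iterations of the tensor computation loop */

    void dispatch_to_unit(int unit, int iteration);  /* hypothetical helper */

    void divide_iterations(void) {
        /* 128 iterations / 64 units = 2 iterations of the created outer loop */
        for (int i = 0; i < TOTAL_ITERS / NUM_UNITS; i++) {
            /* the 64 iterations of this inner loop are performed in
             * parallel, one per computing unit */
            for (int x = 0; x < NUM_UNITS; x++) {
                dispatch_to_unit(x, i * NUM_UNITS + x);
            }
        }
    }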

[0026] In some cases, the number of iterations of a tensor computation loop may not be an exact multiple of the number of computing units. For example, the dimension that corresponds to the tensor computation loop may include 160 elements and the computing system may have 64 computing units. In this example, the tensor computation loop includes 160 iterations that can be divided into two outer loop iterations of 64 and a third outer loop iteration of 32. To adjust the number of iterations of the tensor computation loop for the third outer loop iteration, the loop bound for the tensor computation loop may be changed from 64 to 32 after the second iteration of the outer loop, e.g., before the final iteration of the outer loop.

[0027] FIG. 1 is a block diagram of an environment 100 in which an example computing system 102 accelerates tensor computations. For example, the computing system 102 may accelerate computations associated with deep neural networks (DNNs). The computing system 102 includes a controller 105 and multiple individual compute tiles 112-A - 112-Z. The controller 105 is configured to execute one or more instructions relating to tensor computations within the computing system 102. Although not shown, the controller 105 can include data memory for storing and accessing a variety of data relating to computations that occur within the computing system 102 and instruction memory for storing one or more machine readable instructions that are executable by one or more processors of the controller 105.

[0028] The controller 105 can receive input 132, e.g., instructions, compiled programs, etc., from a host 130. After the computing system 102 performs tensor computations, the controller 105 can provide output 134 to the host. For example, the output 134 may be memory addresses for tensor elements. The controller 105 can receive the input 132 from and provide the output 134 to the host 130 via a host interface (not shown).

[0029] The controller 105 can communicate with the compute tiles 112-A - 112-Z via one or more data communication paths, e.g., one or more buses. Similarly, the compute tiles 112-A - 112-Z can communicate with each other via one or more buses. An example computing system having multiple compute tiles is described in U.S. Patent Application No. 15/335,769 titled "Neural Network Compute Tile" and filed on October 27, 2016, which is hereby incorporated by reference in its entirety.

[0030] Each compute tile 112-A - 112-Z includes a processing unit 114, a data storage medium 116, and a tensor traversal unit 120. The storage medium 116 stores information within the computing system 102. In some implementations, the storage medium 116 is a volatile memory unit or units. In some other implementations, the storage medium 116 is a non-volatile memory unit or units. The storage medium 116 may also be another form of computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. The instructions, when executed by the processing unit 114, cause the processing unit 114 to perform one or more tasks.

[0031] The processing unit 114 can include one or more processors and/or one or more finite-state machines (FSMs). The processing unit 114 can execute instructions received from the controller 105. For example, the processing unit 114 can execute instructions for computing memory addresses (or memory address offsets) for tensor elements using the tensor traversal unit 120. For processing units that include an FSM, the FSM can query memory addresses for tensor elements from the tensor traversal unit 120.

[0032] In general, the tensor traversal unit 120 determines a status associated with one or more tensors. In some implementations, the status may include loop bound values, current loop index variable values, partial address offset values for determining a memory address value, and/or program counter values for handling branch loop bounds. The tensor traversal unit 120 may be implemented as an application-specific integrated circuit.

[0033] The tensor traversal unit 120 translates tensor indices into memory addresses. For example, the tensor traversal unit 120 may translate a set of N-dimensional tensor indices into a one-dimensional address space. The tensor traversal unit 120 can perform such translations by making a tensor element's memory address a combination (e.g., a linear combination) of the element's dimension indices.

[0034] The tensor traversal unit 120 can include one or more tensor status elements 122 and one or more math units 124. For example, the tensor traversal unit 120 of the compute tile 112-A includes four math units 124-A - 124-D. Other tensor traversal units of other compute tiles may include other numbers of math units. Each of the tensor status elements 122 may be a storage element, for example, a register or any suitable storage circuitry. Each math unit 124 can include one or more arithmetic logic units (ALUs) and/or one or more hardware adders. The math unit 124 can be used to compute a memory address or memory address offset value for tensor elements, e.g., based on values stored in the tensor status elements. Example techniques for determining memory addresses using a tensor traversal unit are described in U.S. Patent Application No. 15/335,769 titled "Neural Network Compute Tile" and filed on October 27, 2016 and U.S. Patent Application No. 15/014,265 titled "Accessing Data in Multi-Dimensional Tensors" and filed on February 3, 2016. The controller 105 can coordinate tensor computations using the compute tiles 112-A - 112-Z. For example, the controller 105 can receive instructions to determine memory addresses for tensor elements. The controller 105 can perform the tensor computations using nested loops.

[0035] Each loop can be responsible for traversing a respective dimension of the N-dimensional tensor. A multi-dimensional tensor may be a matrix or a multi-dimensional matrix. For example, a 2-dimensional tensor is a matrix, while a 3-dimensional tensor is a three-dimensional matrix made up of multiple two-dimensional matrices. Each dimension of the N-dimensional tensor may include one or more elements, where each element may store a respective data value. For example, a tensor may be a variable in a program, where the variable may have three dimensions. The first dimension may have a length of three hundred elements, the second dimension may have a length of a thousand elements, and the third dimension may have a length of twenty elements. Of course, other numbers of elements in each dimension are possible.

[0036] Traversing the tensor in a nested loop can include a computation of a memory address value of an element to load or store the corresponding data value of the element. A for-loop is an example of a nested loop, where three loops tracked by three loop index variables (e.g., i, j, and k) can be nested to traverse through a three-dimensional tensor. In a neural network, a value of an element may be used in one or more dot product computations associated with the tensor. For example, the value of the element may be multiplied with a corresponding parameter or weight. The elements of the tensor may be traversed in order using nested for-loops to access the element and perform one or more computations using the value of the element. Continuing the three dimensional tensor example, an outer for-loop may be used to traverse the loop tracked by variable i, a middle for-loop may be used to traverse the loop tracked by variable j, and an inner for-loop may be used to traverse the loop tracked by variable k. In this example, the first element accessed may be (i=0, j=0, k=0), the second element may be (i=0, j=0, k=1), and so on. The tensor traversal units 120 of the compute tiles 112-A - 112-Z can be used to determine the memory address for the elements in order using nested loops so that a processing unit can access the value of the element and perform the one or more computations using the value of the element. The values of weights or parameters can also be accessed similarly using nested for-loops. The tensor traversal unit 120 can also be used to determine the addresses for weights or parameters used in the computations and/or for the outputs of the computations, which may be used as inputs to a hidden layer of the neural network.

[0037] For example, as described in U.S. Patent Application No. 15/014,265, the tensor status elements 122 can include a group of tensor index elements, a group of tensor bound elements, and a group of dimension multiplier elements. Each group of elements can be arranged as a 2-D array having M rows and N columns. Each row for a group can represent tensor index information for a tensor. Each column for a group can represent information (e.g., tensor index value, tensor bound value, or dimension multiplier value) for nested loop index variable values that are associated with a tensor. For example, one column in the 2-D array for the tensor index element can represent the tensor index information for variable i, one column can represent the tensor index information for variable j, and one column can represent the tensor index information for variable k.

[0038] Each tensor index element can track a nested loop variable for a loop in the nested loop. For example, one tensor index element may be assigned to track the nested loop index variable i, one tensor index element may be assigned to track the nested loop index variable j, and one tensor index element may be assigned to track the nested loop index variable k. Each tensor bound element has a corresponding element in the tensor index elements. Each tensor bound element may represent tensor bound information for nested loop index variable values that are associated with the tensor. For example, one tensor bound element may represent tensor bound information for nested loop index variable i, one tensor bound element may represent tensor bound information for nested loop index variable j, and one tensor bound element may represent tensor bound information for nested loop index variable k.

[0039] Each dimension multiplier element can represent a multiplier by which a corresponding element in the tensor index elements is multiplied. To determine a memory address for an element, the tensor traversal unit 120 can determine a memory address offset for each nested loop index variable by multiplying the value stored in the tensor index element for the nested loop index variable by the multiplier for the nested loop index variable. The tensor traversal unit 120 can then sum all of the multiplied products together to determine the memory address that corresponds to the element being accessed.
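
A minimal sketch of this multiply-and-sum, with the tensor index elements and dimension multipliers modeled as plain arrays (illustrative names, not the hardware interface):

    /* Memory address offset for the element being accessed: multiply each
     * tensor index element by its dimension multiplier and sum the products. */
    int address_offset(const int idx[], const int mult[], int ndims) {
        int offset = 0;
        for (int d = 0; d < ndims; d++) {
            offset += idx[d] * mult[d];
        }
        return offset;  /* the caller adds this to the tensor's base address */
    }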

[0040] The tensor traversal unit 120 can update the tensor index elements after each iteration of the inner loop of the nested loop. For each iteration of the inner loop, the tensor traversal unit 120 can update the tensor index element for the loop, e.g., by incrementing the tensor index element for the inner loop. If the updated tensor index element for the inner loop equals the value stored in the tensor bound element for the inner loop, the tensor index element can be reset and the tensor index element for the next outer loop in which the inner loop is nested can be updated. The tensor traversal unit 120 can then determine the memory address for the next element corresponding to this iteration of the inner loop by multiplying the tensor index elements by their corresponding multipliers and summing the products, as described above.
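
This update rule behaves like an odometer. A sketch, with the tensor index and tensor bound elements modeled as arrays and index ndims-1 standing for the inner loop:

    /* Advance the tensor index elements after an iteration of the inner
     * loop: increment the inner index; when it reaches its bound, reset
     * it and carry into the index of the next outer loop. */
    void advance_indices(int idx[], const int bound[], int ndims) {
        for (int d = ndims - 1; d >= 0; d--) {
            idx[d]++;
            if (idx[d] < bound[d]) {
                return;     /* no carry needed */
            }
            idx[d] = 0;     /* reset and carry into the next outer loop */
        }
    }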

[0041] The controller 105 may coordinate the tensor computations by iterating nested loops of a program and performing a computation for each iteration of one or more of the loops, e.g., for each iteration of an inner-most (or other) loop of the nested loops. To accelerate the tensor computations, the controller 105 may use multiple computing units to perform at least some of the tensor computations in parallel. The computing units may be individual compute tiles or individual math units. For example, the controller 105 may request that the compute tile 112-A perform a first tensor computation and, at the same time, request that the compute tile 112-B perform a second tensor computation. In another example, the controller 105 may request that the compute tile 112-A perform tensor computations for a particular tensor. The tensor traversal unit 120 can then use the math units 124-A - 124-D to perform tensor computations in parallel.

[0042] A loop is generally completed when an index variable for the loop equals (or exceeds) a bound for the loop. For example, a loop may be programmed as "for (i=0; i<3; i++)" in which i is the index variable and the bound is 3. This example loop includes three iterations (i=0, i=1, and i=2). If the index variable equals 3, the loop is exited without computation. When performing parallel computations using multiple computing units (e.g., using multiple compute tiles 112 or multiple math units 124), the controller 105 may iterate the index variable each time a computation is assigned to a computing unit and compare the index variable to the bound before assigning another iteration of the loop to another computing unit.

[0043] In some implementations, the nested loops of a program executed by the controller 105 may have loop bounds that have been determined based on a property of the computing system 102. For example, the loop bounds for one or more of the loops may be determined based on the number of compute tiles 112-A - 112-Z of the computing system 102 or the number of math units of a tensor traversal unit 120.

[0044] In some implementations, a compiler 136 compiles a program for performing tensor computations for a tensor. The compiler 136 can be configured to determine the loop bounds for one or more of the loops based on the number of elements included in one or more of the dimensions of the tensor and/or the number of computing units of the computing system 102. The loop bound for a loop is the number such that, when the index value for the loop equals the loop bound, the loop is completed. In other words, the loop bound for a loop can equal the number of iterations of the loop.

[0045] The compiler 136 may be configured to create an outer loop for one or more tensor computation loops (a loop in which a tensor computation is performed) and determine one or more loop bounds for the outer loop. The created outer loop may be used to divide the iterations of the tensor computation loop into multiple iterations of the outer loop. For example, the computing system 102 may include 64 computing units (e.g., compute tiles or math units) and the tensor computation loop may include 128 iterations. In this example, the computing system 102 is capable of performing 64 tensor computations in parallel. To perform the 64 tensor computations in parallel, the 128 iterations of the tensor computation loop can be divided into two outer loop iterations that each include 64 iterations of the tensor computation loop. For example, the first iteration of the outer loop may include iterations 1-64 of the tensor computation loop. The second iteration of the outer loop may include iterations 65-128 of the tensor computation loop. In this way, 64 tensor computations are performed in parallel for the first iteration of the outer loop using each of the 64 computing units of the computing system (e.g., one computation per tile) and 64 tensor computations are performed in parallel for the second iteration of the outer loop using the 64 computing units.

[0046] The compiler 136 can determine whether an outer loop should be created and, if so, create the outer loop in the compiled program. In some implementations, the compiler 136 only creates an outer loop (in addition to any outer loops in the program being compiled) when a tensor computation loop has more iterations than the number of computing units of the computing system 102 on which the program will be executed. If a tensor computation loop has more iterations than the number of computing units, the compiler 136 can create an outer loop to divide the iterations of the tensor computation loop into multiple outer loop iterations.

[0047] The compiler 136 can also determine a loop bound for the created outer loop based on the number of iterations of the loop in which the tensor computation is performed and/or the number of computing units of the computing system 102 on which the program will be executed. The number of iterations of the tensor computation loop may be equal to the number of elements in the dimension corresponding to the loop if the number of iterations is a multiple of the number of computing units. The compiler 136 can divide the number of iterations of the tensor computation loop by the number of computing units as the number of computing units represents the highest number of iterations that can be performed in parallel using the computing units. For example, if the number of iterations of the tensor computation loop is 128 and the number of computing units is 64, the loop bound for the created outer loop may be two (128/64). Thus, in this example, the first iteration of the outer loop will include 64 parallel iterations of the tensor computation loop and the second iteration of the outer loop will include 64 parallel iterations of the tensor computation loop. If the division results in a remainder, as discussed below, the loop bound for the outer loop may be incremented by one.

[0048] The compiler 136 can also determine one or more loop bounds for the tensor computation loop based on the number of iterations of the tensor computation loop and the number of computing units of the computing system 102 on which the program will be executed. If the number of iterations of the tensor computation loop is an exact multiple of the number of computing units, the loop bound for the tensor computation loop can be equal to the number of computing units for each iteration of the outer loop created for the tensor computation loop. For example, if the tensor computation loop has 120 iterations and the computing system includes 60 computing units, the loop bound for the tensor computation loop may be 60 and the loop bound for the outer loop may be 2. In this example, the first iteration of the outer loop will include 60 iterations (e.g., parallel iterations) of the tensor computation loop and the second iteration of the outer loop may include 60 iterations of the tensor computation loop.

[0049] If the number of iterations of the tensor computation loop is not an exact multiple of the number of computing units, the compiler 136 may determine two or more loop bounds for the tensor computation loop. For example, the compiler 136 may divide the number of iterations of the tensor computation loop by the number of computing units. As the number of iterations is not an exact multiple, the result of this division will include a remainder value. For example, the number of iterations may be 160 and the number of computing units may be 50. In this example, the compiler 136 may divide the number of iterations (160) by the number of computing units (50) to get a quotient of 3 and a remainder of 10. The compiler 136 can set a first loop bound for the tensor computation loop equal to the number of computing units (e.g., 50) and an alternative loop bound for the tensor computation loop equal to the remainder (e.g., 10). During execution of the program, the alternative loop bound may be used for the tensor computation loop for the final iteration of the outer loop and the first loop bound may be used for each other iteration of the outer loop. Continuing the previous example, the outer loop would have a loop bound of 4 as 160/50 = 3 with a remainder of 10 and the loop bound for the outer loop is incremented by one based on the remainder. For the first three iterations of the outer loop, the loop bound for the tensor computation loop would be 50. Thus, for each of the first three iterations of the outer loop, 50 iterations of the tensor computation loop would be performed in parallel, resulting in 150 iterations being performed. For the last iteration of the outer loop, the loop bound for the tensor computation loop would be 10, resulting in all 160 iterations of the tensor computation loop being performed in four iterations of the outer loop.
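
The bound selection in this paragraph reduces to integer division with a remainder. A sketch, assuming the iteration count exceeds the number of computing units (the function and parameter names are illustrative):

    /* Derive the outer loop bound, the first loop bound, and the
     * alternative loop bound from the tensor computation loop's iteration
     * count and the number of computing units. */
    void plan_bounds(int iterations, int units,
                     int *outer_bound, int *first_bound, int *alt_bound) {
        *first_bound = units;              /* e.g., 50 */
        *alt_bound = iterations % units;   /* remainder: 160 % 50 = 10 */
        *outer_bound = iterations / units; /* quotient: 160 / 50 = 3 */
        if (*alt_bound != 0) {
            (*outer_bound)++;              /* extra outer iteration: 4 */
        }
    }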

[0050] The compiled program can include instructions that cause the processor of the controller 105 to change the loop bound for the tensor computation loop from the first loop bound to the alternative loop bound after the penultimate iteration of the outer loop has completed and before the final iteration of the outer loop is performed. In this way, the alternative loop bound is used as the loop bound for the tensor computation loop for the final iteration of the outer loop that was created to divide the iterations of the tensor computation loop into multiple outer loop iterations.

[0051] In some implementations, the compiler 136 can create the outer loop for a tensor computation loop as the immediate outer loop in which the tensor computation loop is nested, i.e., no other loops nested between the outer loop and the tensor computation loop. In some implementations, the compiler 136 can create the outer loop as the most-outer loop of the nested loops in which the tensor computation loop is nested, i.e., the outer loop is not nested within another loop. By creating the outer loop at the most-outer loop of the nested loops, memory addresses determined using the nested loops and the tensor traversal units 120 align the tensor elements more contiguously. For example, without adjusting the loop bound for the final iteration, memory addresses may be determined for iterations of the loop at which no data will be stored, resulting in wasted memory space. The alternative limit for the last iteration of the loop allows the tensor traversal unit to determine memory addresses only for tensor elements without using additional instructions.

[0052] FIG. 2 illustrates example nested loops 215 and 220 for traversing a tensor 205 using multiple computing units 210. Each individual computing unit 210 can be a compute tile or a math unit. In this example, the tensor 205 is a three dimensional tensor with an X dimension, a Y dimension, and a Z dimension. The X dimension has a length of 160 elements, the Y dimension has a length of 30 elements, and the Z dimension has a length of 100 elements, although the tensor 205 is not drawn to scale. Each element in the tensor 205 can store a respective data value that is used in a neural network computation.

[0053] In general, the tensor can be traversed using the nested loops 215. In this example, the X dimension is traversed using the inner loop, the Y dimension is traversed using the middle loop, and the Z dimension is traversed using the outer loop. For each iteration of the inner loop, a memory address is computed for the tensor element corresponding to the values of x, y, and z for the iteration of the inner loop.

[0054] The multiple computing units 210 can be a part of a computing system, e.g., each computing unit 210 can be the same as or similar to the compute tiles 112-A - 112-Z of FIG. 1 or the math units 124 of FIG. 1. In this example, the computing system includes 64 computing units although other numbers of computing units are possible. The computing units 210 can perform tensor computations for the tensor 205 in parallel, e.g., using the nested loops 220.

[0055] A compiler, e.g., the compiler 136 of FIG. 1 , can generate the nested loops 220 based on a program that includes the nested loops 215 (or code that represents the nested loops 215) and the number of computing units 210 of a computing system on which the program will be executed. For example, the compiler may determine that an outer loop should be created to divide iterations of the tensor computation loop (the loop for the X dimension in this example) into multiple outer loop iterations.

[0056] To determine whether an outer loop should be created, the compiler may compare the number of iterations of each tensor computation loop to a hardware property of the computing system. For example, the hardware property may be the number of computing units 210 or the total number of computations that the computing system can perform in parallel. If the number of iterations of the tensor computation loop exceeds the value of the hardware property, the compiler may create an outer loop. In this example, the number of iterations (160) of the loop for the X dimension exceeds the number of computing units (64). Thus, the compiler has created an outer loop with an index variable "i".

[0057] The compiler can also determine a loop bound for the outer loop based on the number of iterations of the tensor computation loop and the value of the hardware property (e.g., number of computing units). For example, the compiler may determine the bound by dividing the number of iterations (160) of the tensor computation loop by the number of computing units (64), resulting in 2 with a remainder of 32. As described above, the outer loop bound may be incremented by 1 for any remainder. Thus, the outer loop bound in this example is 3.

[0058] The compiler can also determine one or more loop bounds for the tensor computation loop based on the number of iterations of the tensor computation loop and the value of the hardware property. If the number of iterations of the tensor computation loop does not exceed the value of the hardware property, the loop bound for the tensor computation loop can be equal to the number of iterations. If the number of iterations of the tensor computation loop is an exact multiple of the hardware property, the loop bound for the tensor computation loop may be equal to the value of the hardware property. If the number of iterations of the tensor computation loop exceeds the value of the hardware property but is not an exact multiple of the value of the hardware property, the tensor computation loop may have a first loop bound for all but the final iteration of the loop and an alternative loop bound for the final iteration of the loop. The first loop bound may be equal to the value of the hardware property and the alternative loop bound may be equal to the remainder after dividing the number of iterations of the tensor computation loop by the value of the hardware property.

[0059] In this example, the number of iterations (160) of the tensor computation loop exceeds the number of computing units (64) but is not an exact multiple of the number of computing units. Thus, the first loop bound for the X dimension is 64 and the alternative bound is 32 (160/64 = 2 with a remainder of 32). For the first two iterations of the outer loop (loop i), the loop bound for the loop for the X dimension will be 64. For the final iteration of the outer loop, the loop bound for the X dimension will be 32.

[0060] For the first iteration of the outer loop, 64 memory addresses of the tensor may be determined in parallel using 64 computing units. For example, a first computing unit may determine the memory address for z=0; y=0; x=0; a second computing unit may determine the memory address for z=0; y=0; x=1 ... and a sixty-fourth computing unit may compute the memory address for z=0; y=0; x=63. For the last iteration of the outer loop, 32 of the 64 computing units may be used to determine the last 32 iterations of the inner loop.
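
A sketch of this assignment for the FIG. 2 example (160 iterations of the X loop, 64 computing units); unit_compute_address is a hypothetical stand-in for one computing unit determining one memory address:

    void unit_compute_address(int unit, int z, int y, int x);  /* hypothetical */

    /* Traverse the 160-element X dimension for fixed z and y using 64
     * units: two full passes of 64, then a final pass of 32. */
    void traverse_x(int z, int y) {
        int total = 160, units = 64;
        int outer_bound = 3;  /* 160/64 = 2 remainder 32, incremented to 3 */
        for (int i = 0; i < outer_bound; i++) {
            /* the alternative bound (32) applies on the final outer iteration */
            int bound = (i == outer_bound - 1) ? total - i * units : units;
            for (int x = 0; x < bound; x++) {
                /* unit x determines the address for element (z, y, i*64 + x);
                 * the bound units (64 or 32) operate in parallel */
                unit_compute_address(x, z, y, i * units + x);
            }
        }
    }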

[0061] FIG. 3 is a flow diagram that illustrates an example process 300 for performing tensor computations. The process 300 may be performed by a system of one or more computers, e.g., the computing system 102 of FIG. 1.

[0062] For each of one or more first iterations of a first nested loop, the system performs iterations of a second nested loop that is nested within the first nested loop until a first loop bound for the second nested loop is reached (302). For example, the second nested loop may be a loop in which a tensor computation (e.g., a dot product computation or a memory address computation) is performed as part of a program.

[0063] The first loop may be an outer loop, e.g., created by a compiler that compiled a program that includes the first and second nested loops. For example, the compiler can identify a tensor computation loop, determine whether to create an outer loop for the tensor computation loop, and, if so, determine one or more loop bounds for the created outer loop and/or the tensor computation loop.

[0064] The compiler can determine the first loop bound for the second nested loop based on the total number of iterations of the second nested loop (e.g., the total number of elements in a dimension of a tensor corresponding to the second loop) and a number of computing units of the system. For example, if the total number of iterations of the second loop is less than the number of computing units, the first loop bound may be equal to the total number of iterations of the second nested loop. If the total number of iterations of the second nested loop is an exact multiple of the number of computing units, the first loop bound may be equal to the number of computing units. If the total number of iterations of the second nested loop is greater than the number of computing units, but not an exact multiple of the computing units, the compiler may set the first loop bound to the number of computing units and determine an alternative loop bound that is equal to the remainder of the total number of iterations of the second nested loop divided by the number of computing units.

[0065] In this example, assume that the total number of iterations of the second nested loop is greater than the number of computing units and that the total number of iterations of the second nested loop is not an exact multiple of the number of computing units. Thus, in this example, the number of iterations of the second nested loop for the one or more first iterations of the first nested loop is limited by the first loop bound in response to the second nested loop having a total number of iterations that exceeds a value of a hardware property of the computing system.

[0066] The system may perform the iterations of the second nested loop in parallel. For example, as described above, the first loop bound for the second nested loop may be determined such that the number of iterations of the second nested loop does not exceed the number of computing units. In this example, each iteration of the second loop for each of the one or more first iterations of the first nested loop can be performed in parallel. As the system assigns an iteration of the second nested loop to a computing unit, the system can iterate an index variable for the second loop. When the index variable equals the first loop bound, the second loop has completed.

[0067] The system determines whether the penultimate (i.e., next to last) iteration of the first loop has completed (304). For example, the system may compare an index variable for the first loop to a loop bound for the first loop. If a difference between the loop bound and index variable is a specified number (e.g., 1), the system may determine that the penultimate iteration of the first loop has completed. For example, a loop with an index variable of i may include three iterations. In this example, the loop may be programmed as "for (i=0; i<3)" or "for (i=1; i<4)".

[0068] In the first example, the loop bound is 3, the final iteration of the loop is performed for i=2, and the penultimate iteration of the loop is performed for i=1. In general, the index variable is typically incremented when or just after an iteration of a loop is performed. In this example, if the index variable is 2 after an iteration of the loop was performed, the performed iteration was the penultimate iteration. Thus, if the difference between the bound (3) and the index variable is equal to 1, then the penultimate iteration of the loop was the iteration that completed.

[0069] Similarly, in the second example, the loop bound is 4, the final iteration of the loop is performed for i=3, and the penultimate iteration of the loop is performed for i=2. In this example, if the index variable is 3 after an iteration of the loop was performed, the performed iteration was the penultimate iteration. Thus, if the difference between the bound (4) and the index variable is equal to 1, then the penultimate iteration of the loop was the iteration that completed. If the penultimate iteration of the first loop has not completed, the system returns to operation 302 to perform the iterations of the second nested loop for the next iteration of the first nested loop corresponding to the updated index variable value.

[0070] If the penultimate iteration of the first loop has completed, the system substitutes, for the second loop, an alternative bound in place of the first loop bound (308). For example, the system may use the alternative bound for the second nested loop for the final iteration of the first nested loop.
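
The control flow of operations 302 through 310 can be summarized in a short sketch; run_iteration is a hypothetical helper for one iteration of the second nested loop, and the bounds are assumed to have been determined as described above:

    void run_iteration(int i, int x);  /* hypothetical: one second-loop iteration */

    void process(int outer_bound, int first_bound, int alt_bound) {
        int bound = first_bound;
        for (int i = 0; i < outer_bound; i++) {
            /* operations 302/310: perform iterations of the second nested
             * loop, in parallel, until the current bound is reached */
            for (int x = 0; x < bound; x++) {
                run_iteration(i, x);
            }
            /* operation 304: the penultimate iteration of the first loop has
             * completed when the bound minus the index variable equals 1 */
            if (outer_bound - (i + 1) == 1) {
                bound = alt_bound;  /* operation 308: substitute the bound */
            }
        }
    }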

[0071] The system performs one or more iterations of the second nested loop for the final iteration of the first nested loop until the alternative loop bound is reached (310). For example, if there are multiple iterations of the second nested loop remaining, the system may perform the iterations in parallel using multiple computing units.

[0072] Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.

[0073] The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application specific integrated circuit), or a GPGPU (General purpose graphics processing unit).

[0074] Computers suitable for the execution of a computer program include, by way of example, general purpose or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.

[0075] Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[0076] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[0077] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0078] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
