

Title:
CONVOLUTIONAL NEURAL NETWORK
Document Type and Number:
WIPO Patent Application WO/2017/106464
Kind Code:
A1
Abstract:
Systems and methods of implementing a more efficient and less resource-intensive CNN are disclosed herein. In particular, applications of CNN in the analog domain using Sampled Analog Technology (SAT) methods are disclosed. Using a CNN design with SAT results in lower power usage and faster operation as compared to a CNN design with digital logic and memory. The lower power usage of a CNN design with SAT can allow for sensor devices that also detect features at very low power for isolated operation.

Inventors:
NESTLER ERIC G (US)
OSQUI MITRA M (US)
BERNSTEIN JEFFREY G (US)
Application Number:
PCT/US2016/066869
Publication Date:
June 22, 2017
Filing Date:
December 15, 2016
Assignee:
ANALOG DEVICES INC (US)
International Classes:
G06N3/063
Domestic Patent References:
WO1995030962A11995-11-16
Foreign References:
US20140355381A12014-12-04
US20050151680A12005-07-14
US20110029471A12011-02-03
Attorney, Agent or Firm:
IMBRIE, Annika K. (US)
Claims:
What is claimed is:

1. A convolutional neural network using sampled analog technology, comprising:

an input source including first and second analog input data points,

a first set of capacitors for analyzing the first analog input data point and outputting a first analog convolution output, and

a second set of capacitors for analyzing the second analog input data point and outputting a second analog convolution output,

wherein the first and second analog convolution outputs each include a plurality of features.

2. The convolutional neural network of claim 1, further comprising an array of variable capacitance structures, wherein the first and second outputs are multiplexed through the array of variable capacitance structures to generate a multiplexed convolution output.

3. The convolutional neural network of claim 1, wherein the first and second sets of capacitors comprise fixed capacitors, and wherein the fixed capacitors are analog memory cells.

4. The convolutional neural network of claim 1, wherein the first and second sets of capacitors are variable capacitance cells with fixed weights.

5. The convolutional neural network of claim 4, wherein the fixed weights are implemented using a memory capacitor size, and wherein the memory capacitor size is equal to a weight of the fixed weights.

6. The convolutional neural network of claim 1, wherein the first set of capacitors and the second set of capacitors are driven by the input source.

7. The convolutional neural network of claim 1, further comprising:

a first fixed voltage source having a charge, and

a first capacitor digital-to-analog converter (capDAC),

wherein a charge from the first fixed voltage source is sampled by the first capDAC to generate a first bias value, and

wherein the first bias value is added to the first output.

8. The convolutional neural network of claim 1, further comprising a sub-sampler coupled to the first and second analog convolution outputs, wherein the sub-sampler averages the first and second analog convolution outputs to generate a mean convolution output, and wherein the sub-sampler processes the mean convolution output with a nonlinear transfer function.

9. The convolutional neural network of claim 8, wherein the nonlinear transfer function is an analog rectification function.

10. The convolutional neural network of claim 1, further comprising a sub-sampler coupled to the first analog convolution output,

wherein the first analog convolution output includes a subwindow of values, and wherein the sub-sampler includes a plurality of analog voltage comparators for determining a maximum value of the subwindow of values of the first analog convolution output.

11. A method for implementing a neural network using sampled analog technology, comprising:

receiving analog input data including first and second analog input data points, analyzing the first analog input data point with a first set of capacitors to generate a first analog convolution output, and

analyzing the second analog input data point with a second set of capacitors to generate a second analog convolution output,

wherein generating the first and second analog convolution outputs includes performing an analog convolution operation on a plurality of features.

12. The method of claim 11, further comprising multiplexing the first and second analog convolution outputs through an array of variable capacitance structures to generate an analog multiplexed convolution output.

13. The method of claim 11, wherein the first and second sets of capacitors are variable capacitance cells, the first set of capacitors have a first fixed weight, and the second set of capacitors have a second fixed weight, and wherein

generating the first analog convolution output includes multiplying the first input data point with the first fixed weight, and

generating the second analog convolution output includes multiplying the second input data point with the second fixed weight.

14. The method of claim 11, further comprising:

generating a first bias value by sampling, using a first capacitor digital-to-analog converter, a scaled charge from a first fixed voltage source, and

adding the first bias value to the first output.

15. The method of claim 11, further comprising averaging the first analog convolution output with the second analog convolution output at a sub-sampler.

16. The method of claim 11, wherein the first analog convolution output includes a subwindow of values, and further comprising determining a maximum value of the subwindow of values.

17. A convolutional neural network using sampled analog technology, comprising

an input including analog input data,

a plurality of sets of capacitors, each set of capacitors configured to analyze a respective subwindow of the analog input data and output an analog convolution output for the respective subwindow, wherein the analog convolution output includes a plurality of features, and

an analog sub-sampler coupled to the analog convolution output, wherein the analog sub-sampler is configured to reduce a size of at least one of the plurality of features of the analog convolution output.

18. The convolutional neural network of claim 17, wherein the plurality of sets of capacitors comprise fixed capacitors, and wherein the fixed capacitors are analog memory cells.

19. The convolutional neural network of claim 17, wherein the plurality of sets of capacitors comprise variable capacitance cells with fixed weights.

20. The convolutional neural network of claim 17, further comprising:

a fixed voltage source having a charge, and

a capacitor digital-to-analog converter (capDAC),

wherein a charge from the fixed voltage source is sampled by the capDAC to generate a bias value, and

wherein the bias value is added to an output from one of the plurality of sets of capacitors.

Description:
CONVOLUTIONAL NEURAL NETWORK

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This Application claims the benefit of priority under 35 U.S.C. §120 of U.S. Application Serial Nos. 62/267,847 filed December 15, 2015, and 15/379,114 filed December 14, 2016, and entitled "Convolutional Neural Network" naming Eric Nestler et al. as inventors. The disclosures of the prior Applications are considered part of and are incorporated by reference in the disclosure of this Application.

FIELD OF THE DISCLOSURE

[0002] The present invention relates to the field of neural networks, and in particular to convolutional neural networks.

BACKGROUND

[0003] Neural networks are mathematical models used to estimate approximate functions that can depend on a large number of inputs. Convolutional neural networks are a type of feed-forward neural network used for feature detection, in which the artificial neurons (mathematical functions) are tiled such that they respond to overlapping regions in the input field. Neural networks are computationally and resource intensive.

SUMMARY OF THE DISCLOSURE

[0004] A neural network using sampled analog technology is disclosed. Convolutional Neural Networks (CNNs) are algorithms and circuits used for feature detection. In some implementations, the detection or analysis of features can be applied to image data, audio data, or any other complex data that requires sophisticated analysis to detect some feature of it. According to various implementations, CNNs can be helpful when the input data is in the form of an array with highly correlated local variables and exhibits shift invariance. The CNN algorithm is typically implemented with digital logic and memory. However, implementing the CNN algorithm with digital logic and memory is resource intensive.

[0005] Systems and methods of implementing a more efficient and less resource-intensive CNN are disclosed herein. In particular, applications of CNN in the analog domain using Sampled Analog Technology (SAT) methods are disclosed. Using a CNN design with SAT results in much lower power usage and faster operation as compared to a CNN design with digital logic and memory. In one example, a CNN design with SAT uses less than one tenth of the power that a typical digital CNN design uses. In one example, a CNN design with SAT operates more than ten times faster. One reason the operation of a CNN using SAT is faster is that, due to the simultaneous nature of charge sharing, an analog version can do in a single clock cycle an operation that takes many clock cycles in a digital version. The lower power usage of a CNN design with SAT can allow for sensor devices that also detect features at very low power for isolated operation, such as with IoT (Internet of Things) devices.

[0006] According to one implementation, a convolutional neural network using sampled analog technology includes an input source including first and second analog input data points, a first set of capacitors for analyzing the first analog input data point and outputting a first analog convolution output, and a second set of capacitors for analyzing the second analog input data point and outputting a second analog convolution output. The first and second analog convolution outputs each include a plurality of features.

[0007] In some implementations, a convolutional neural network further includes an array of variable capacitance structures, wherein the first and second convolution outputs are multiplexed through the array of variable capacitance structures to generate a multiplexed convolution output.

[0008] In some implementations, the first and second sets of capacitors include fixed capacitors, and the fixed capacitors are analog memory cells. In some implementations, the first and second sets of capacitors are variable capacitance cells with fixed weights. In some examples, the fixed weights are implemented using a memory capacitor size, and the memory capacitor size is equal to a weight of the fixed weights.

[0009] In some implementations, the first set of capacitors and the second set of capacitors are driven by the input source.

[0010] In various implementations, the convolutional neural network further includes a first fixed voltage source having a charge, and a first capacitor digital-to-analog converter (capDAC). The charge from the first fixed voltage source is sampled by the first capDAC to generate a first bias value, and the first bias value is added to the first output.

[0011] In some implementations, the convolutional neural network further includes a sub-sampler coupled to the first and second analog convolution outputs, wherein the sub-sampler averages the first and second analog convolution outputs to generate a mean convolution output, and wherein the sub-sampler processes the mean convolution output with a nonlinear transfer function. In some examples, the nonlinear transfer function is an analog rectification function.

[0012] In some implementations, the convolutional neural network further includes a sub-sampler coupled to the first analog convolution output, wherein the first analog convolution output includes a subwindow of values, and wherein the sub-sampler includes a plurality of analog voltage comparators for determining a maximum value of the subwindow of values of the first analog convolution output.

[0013] According to some examples, the input source includes multiple analog input data points, and multiple sets of capacitors are used, with each set of capacitors analyzing a subwindow of the analog input data points.

[0014] According to one implementation, a method for implementing a neural network using sampled analog technology includes receiving analog input data including first and second analog input data points, analyzing the first analog input data point with a first set of capacitors to generate a first analog convolution output, and analyzing the second analog input data point with a second set of capacitors to generate a second analog convolution output, wherein generating the first and second analog convolution outputs includes performing an analog convolution operation on a plurality of features. According to one example, analyzing includes performing a convolution operation.

[0015] In some implementations, the method further includes multiplexing the first and second analog convolution outputs through an array of variable capacitance structures to generate an analog multiplexed convolution output.

[0016] In some implementations, the first and second sets of capacitors are variable capacitance cells, the first set of capacitors have a first fixed weight, and the second set of capacitors have a second fixed weight, and generating the first analog convolution output includes multiplying the first input data point with the first fixed weight, and generating the second analog convolution output includes multiplying the second input data point with the second fixed weight.

[0017] In some implementations, the method includes generating a first bias value by sampling, using a first capacitor digital-to-analog converter, a scaled charge from a first fixed voltage source, and adding the first bias value to the first output.

[0018] In some implementations, the method includes averaging the first analog convolution output with the second analog convolution output at a sub-sampler. In some implementations, the first analog convolution output includes a subwindow of values, and the method includes determining a maximum value of the subwindow of values.

[0019] According to one implementation, a convolutional neural network using sampled analog technology includes an input including analog input data, a plurality of sets of capacitors, each set of capacitors configured to analyze a respective subwindow of the analog input data and output an analog convolution output for the respective subwindow, wherein the analog convolution output includes a plurality of features, and an analog sub-sampler coupled to the analog convolution output. The analog sub-sampler is configured to reduce a size of at least one of the plurality of features of the analog convolution output.

[0020] In some implementations, the plurality of sets of capacitors output a respective plurality of analog convolution outputs, and a plurality of analog sub-samplers are coupled to the plurality of analog convolution outputs. The plurality of analog convolution outputs each includes a plurality of features, and each of the plurality of analog convolution outputs is based on a convolution of an output from a respective set of capacitors. Each of the plurality of analog sub-samplers is configured to reduce a size of at least one of the plurality of features of a respective analog convolution output.

[0021] In some implementations, the sets of capacitors comprise fixed capacitors, and the fixed capacitors are analog memory cells. In some implementations, the sets of capacitors comprise variable capacitance cells with fixed weights.

[0022] In one implementation, the convolutional neural network includes a fixed voltage source having a charge, and a capacitor digital-to-analog converter (capDAC). A charge from the fixed voltage source is sampled by the capDAC to generate a bias value, and the bias value is added to an output from one of the plurality of sets of capacitors.

BRIEF DESCRIPTION OF THE DRAWINGS

[0023] To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:

[0024] FIGURE 1 is a diagram illustrating a convolutional neural network;

[0025] FIGURE 2 is a diagram illustrating a convolution multiplexed circuit implementation, according to some embodiments of the disclosure;

[0026] FIGURE 3 is a diagram illustrating another convolution multiplexed circuit implementation, according to some embodiments of the disclosure;

[0027] FIGURE 4 is a diagram illustrating a convolution circuit, according to some embodiments of the disclosure;

[0028] FIGURE 5 is a diagram illustrating non-overlapping subsampling, according to some embodiments of the disclosure;

[0029] FIGURE 6 is a diagram illustrating a circuit for sub-sampling, according to some embodiments of the disclosure;

[0030] FIGURE 7 is a diagram illustrating a nonsymmetric transform, according to some embodiments of the disclosure;

[0031] FIGURE 8 is a diagram illustrating another nonsymmetric transform, according to some embodiments of the disclosure;

[0032] FIGURE 9 is a diagram illustrating a symmetric nonlinear transform, according to some embodiments of the disclosure;

[0033] FIGURE 10 is a diagram illustrating a nonsymmetric transform, according to some embodiments of the disclosure; and

[0034] FIGURE 11 is a flowchart illustrating a method for implementing a neural network using sampled analog technology, according to some embodiments of the disclosure.

DETAILED DESCRIPTION

[0035] Systems and methods are provided for reducing the power and latency for the calculation of convolutional neural networks (CNNs) using Sampled Analog Technology (SAT). CNNs are used for a variety of applications. For example, CNNs are used for character recognition. CNNs are a very powerful way of detecting patterns or features in complex data.

[0036] Sampled Analog Technology signal processing is performed in the analog domain by charge sharing among capacitors using only electronic switches and capacitor elements. A sampled analog filter filters incoming analog signals without first digitizing the signals. Sampled analog technology uses discrete time filter architectures combined with analog signal processing, which eliminates any data path quantization noise issues and analog-to-digital and digital-to-analog conversion steps.
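
As a hedged illustration of the charge-sharing principle (an idealized, lossless model written in Python; it is not taken from this application), sampling each input voltage onto a capacitor and then connecting the capacitors to a common node yields the capacitance-weighted mean of the inputs in a single charge-sharing event:

    import numpy as np

    def charge_share(voltages, capacitances):
        # Idealized model: each charge Q_i = C_i * V_i is sampled, then the
        # capacitors are connected so the shared node settles to sum(Q_i) / sum(C_i).
        voltages = np.asarray(voltages, dtype=float)
        capacitances = np.asarray(capacitances, dtype=float)
        return float(np.sum(capacitances * voltages) / np.sum(capacitances))

    # Three sampled analog values weighted by capacitor sizes of 1, 2, and 1 unit capacitances.
    print(charge_share([0.5, -0.2, 0.8], [1.0, 2.0, 1.0]))

Because the result is normalized by the total capacitance, the shared node computes a weighted mean rather than a raw weighted sum; the capDAC weighting and bias circuits described later address scaling beyond this simple average.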

Convolutional Neural Networks

[0037] There are many different forms of CNN structures. Figure 1 shows a block diagram for a deep neural network structure with seven layers. The layers include convolution layers alternating with subsampling layers. Each layer is computationally intensive.

[0038] Each layer in the CNN shown in Figure 1 includes a convolution of an NxN sub-window of the input image pixel data 102. In the first layer 104 of Figure 1, the sub-window is 5x5 pixels with a stride of one. Thus, each sub-window is shifted one pixel from the last sub-window as the image data is scanned and convolved 120. The sub-window can be overlapping or non-overlapping by choice of N and the stride value.
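
The sub-window scan can be sketched in Python as a sliding window over the pixel array (a minimal illustration, assuming a 32x32 pixel input, which is consistent with the 28x28 grid of sub-windows described later for the first layer; the input size is not stated in this paragraph):

    import numpy as np

    def subwindows(image, n=5, stride=1):
        # Yield every n x n sub-window of a 2-D image at the given stride.
        rows, cols = image.shape
        for r in range(0, rows - n + 1, stride):
            for c in range(0, cols - n + 1, stride):
                yield image[r:r + n, c:c + n]

    image = np.random.rand(32, 32)                       # assumed 32x32 pixel input
    print(len(list(subwindows(image, n=5, stride=1))))   # 784 = 28 x 28 sub-windows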

[0039] The second operation 122 in the second layer 106 is a subsampling operation. It is a 2x2 subwindow weighted mean followed by a nonlinear function, or squashing function, to generate the output data of each subwindow. The subsampling result is a 14x14 array of processed pixel data when the subsampling uses a 2x2 window. The resulting subsampled data 106 is then processed by a convolution operation 124 resulting in a third layer 108, which is a convolution layer. The data from the third layer 108 is subsampled 126 resulting in a fourth layer 110, which is a subsampling layer. As shown in Figure 1, there can be many layer pairs alternating between a convolution layer and a subsampling layer. In Figure 1, a full connection operation 128 on the fourth layer 110 results in a fifth layer 112, which is a convolution layer. In one example, the fourth layer 110 is fully connected to the fifth layer 112 such that every output of the fourth layer 110 is connected to every input of the fifth layer 112. Each output of the fourth layer 110 can be connected to an input of the fifth layer via individual weights and non-linear functions. Note that the individual weights are learned weights. Similarly, a full connection operation on the fifth layer 112 results in the sixth layer 114. A Gaussian connection operation is performed on the sixth layer 114 to yield the seventh layer 116, which is the output.

[0040] In other implementations, the second operation begins with the nonlinear function followed by a subwindow weighted mean. In other implementations, the nonlinear function is part of the convolution layer, such that the output of the convolution layer is non-linear. Convolution and subsampling layers are described in greater detail below.

Convolutional Layer

[0041] According to one implementation, each convolution step takes a subwindow of the image data and weights each input to the convolution by a trainable and independent weight. In one example, there are 25 programmable weights used in each sum. The same weights are used for every subwindow scan of the image data for each feature. Additionally, there is a trainable bias weight added to the convolution sums.

[0042] A feature is an individual measurable property of the input data. For example, features may include edge detectors and color blob detectors. In other examples, features focus on finer details specific to the input data set and labels. In various applications, features can be numeric or structural. The neural network learns the features from the input data, and each layer of the neural network extracts some features from the input data. In some implementations, additional information is provided to the neural network in the form of derived features from the data.

[0043] In Figure 1, the convolution output from the first convolution operation 120 is shown as six features in the first layer 104. The number of features is application dependent. Each feature is an independent set of programmable weights for convolutional scans of the image data. The same weights are used for all convolution sums of the data of a particular feature, and each feature is associated with a unique set of weights.

[0044] Each convolution of each feature is implemented as a sum of products, as shown in Equation 1.1 below. In the example of Figure 1, using a 5x5 subwindow, N=25:

    y = w_1*x_1 + w_2*x_2 + ... + w_N*x_N + b     (Equation 1.1)

[0045] The weights w_i and the bias b are programmable and represent the learned behavior. Using SAT, the entire convolution using programmable weights and bias can be implemented passively.
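
A minimal numerical sketch of Equation 1.1 (the weights and bias below are random placeholders, not learned values): the 5x5 sub-window is flattened to N=25 values, multiplied element-wise by the programmable weights, summed, and shifted by the trainable bias.

    import numpy as np

    def convolution_sum(subwindow, weights, bias):
        # Sum of products for one feature: y = w_1*x_1 + ... + w_N*x_N + b.
        x = subwindow.ravel()                 # N = 25 values for a 5x5 sub-window
        return float(np.dot(weights, x) + bias)

    subwindow = np.random.rand(5, 5)          # one 5x5 patch of pixel data
    weights = np.random.rand(25)              # placeholder programmable weights w_i
    bias = 0.1                                # placeholder trainable bias b
    print(convolution_sum(subwindow, weights, bias))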

[0046] In this application, the multiple convolutions can be implemented in various ways. Image data is updated at a particular frame rate (frames per second (fps)). To operate in real time, the convolutions of all features of a single layer are completed before the next image data update (1/fps seconds). Two exemplary methods for implementing the convolutions in real time using SAT are described below. Other methods for implementing CNNs using SAT include combinations of the methods specifically described herein.

[0047] According to various implementations, the data can be in many forms. For example, in one implementation, CNN is used for DNA mapping.

Sub-Sampling Layer

[0048] The input to a CNN goes through multiple layers. In some implementations, such as illustrated in Figure 1, the input alternates between convolution layers (e.g., first 104, third 108, and fifth 112 layers) and sub-sampling layers (e.g., second 106 and fourth 110 layers). In other implementations, the convolution and sub-sampling layers are in non-alternating order. For example, one implementation includes multiple consecutive convolution layers. Another implementation includes multiple consecutive sub-sampling layers.

[0049] Sub-sampling reduces the complexity and spatial resolution of the image data, which reduces the sensitivity of the output to variation. Sub-sampling also reduces the size of the features by some factor. In one example, the reduction in feature size is accomplished by summing a group of MxM elements of the output of the previous convolution layer. In another example, the reduction in feature size is accomplished by averaging a group of MxM elements, and multiplying the average by a constant.

[0050] In some descriptions of CNNs, the sub-sampling is described as pooling. There are a number of methods for pooling. One method for pooling is determining a sum of MxM elements. Another method of pooling is determining a maximum of MxM elements. In other implementations, a subsampling region can be overlapping with other sub-sampling regions. For example, in a 4x4 grid of numbers (which may be the output of a layer), using non-overlapping 2x2 regions for pooling results in a 2x2 output. In another example, in a 4x4 grid of numbers, using overlapping 2x2 regions for pooling results in a 3x3 output.
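
The 4x4 example above can be reproduced with a short Python sketch (a hypothetical illustration, not part of the application): pooling 2x2 regions with a stride of two gives the non-overlapping 2x2 output, while a stride of one gives the overlapping 3x3 output.

    import numpy as np

    def pool2x2(grid, stride, mode="mean"):
        # Pool 2x2 regions of a 2-D array; mode selects mean (sum-like) or max pooling.
        rows, cols = grid.shape
        out = []
        for r in range(0, rows - 1, stride):
            row = []
            for c in range(0, cols - 1, stride):
                region = grid[r:r + 2, c:c + 2]
                row.append(region.mean() if mode == "mean" else region.max())
            out.append(row)
        return np.array(out)

    grid = np.arange(16, dtype=float).reshape(4, 4)
    print(pool2x2(grid, stride=2).shape)   # (2, 2): non-overlapping regions
    print(pool2x2(grid, stride=1).shape)   # (3, 3): overlapping regions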

Nonlinearity

[0051] According to various implementations, CNN structures can have data passed through a nonlinear function after the convolution sum, after the sub-sampling or pooling, or after both the convolution sum and the sub-sampling for each layer. Three symmetric functions that can be used to process the CNN data are the erfc(-x)-1 transfer function, the sigmoid function, and the tanh function. Additionally, the CNN data may be processed by a non-symmetric ReLU function, which is analogous to a rectifying function. In some implementations, the individual sums from a sub-sampling layer are passed through a squashing function before going to the next convolution layer. The squashing function can have a variety of shapes, and the shape of the squashing function can be symmetric or non-symmetric.
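
For reference, the ideal forms of these functions can be sketched as follows (a hedged illustration; the analog circuits described later only approximate these ideal curves, and the exact scaling is an assumption):

    import numpy as np
    from scipy.special import erfc

    def squash_erfc(x):
        return erfc(-x) - 1.0            # symmetric, ranges from -1 to +1

    def squash_sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))  # symmetric about (0, 0.5)

    def squash_tanh(x):
        return np.tanh(x)

    def relu(x):
        return np.maximum(x, 0.0)        # non-symmetric rectifying function

    x = np.linspace(-3.0, 3.0, 7)
    for f in (squash_erfc, squash_sigmoid, squash_tanh, relu):
        print(f.__name__, np.round(f(x), 3))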

Features

[0052] In the CNN diagram shown in Figure 1, the first layer 104 (a convolution layer) and the second layer 106 (a subsampling layer) have the same number of features. In particular, in Figure 1, the first 104 and second 106 layers each have six features. The third layer 108 (a second convolution layer) has 16 features. The number of features is increased by adding several other mappings of the image pixels of the second layer 106 features to the features of the third layer 108. Thus, the number of features of the convolution and subsampling layers can differ. The expansion of the number of features illustrates a break of symmetry in the network. Additionally, convolution and subsampling layers can have different features. In particular, features can be transformed as the feature data moves from one layer to the next layer. According to some examples, the weights are determined during a training phase and the weights are saved after the training phase ends. In some examples, different features are maintained in a convolution layer from the features maintained in a subsampling layer.

Convolution Full Implementation

[0053] In the convolution layer, the sum from a previous sub-sampling layer is multiplied by a trainable weight. Additionally, a trainable bias is added.

[0054] In a convolution full implementation, every weight of a layer exists in the device circuit. For example, referring to Figure 1, a convolution full implementation includes independent programmable weights for the six features of the first layer 104 and individual weights for each element of the convolution sum (shown in Equation 1.1 above). This is 25 weights for each sum. In one example, there are 28x28=784 convolutions for each of the six features of the first layer 104, resulting in 784 subwindows in the output array from the convolution operation 120 for each of the six features. Thus, the total number of convolutions in the first layer 104 is 6*28*28=4,704, and since there are 25 weights for each convolution, this results in 25*4,704 = 117,600 weighted sums.

[0055] In some implementations, the weights are tied such that the weights in each shifted window have the same value. When implementing a CNN with SAT, there is a separate CapDAC (capacitor digital-to-analog converter) for each weighted sum. In one example, when the weights are tied, the CapDACs are tied, and the tied CapDACs are programmed in a single operation. Thus, all the weights that are the same can be programmed in a single operation, rather than programming each of the weights separately. This helps improve efficiency of programming of the weights for the CNN.

[0056] One method used for reducing the number of weighted sums is increasing the stride of the subwindows. Increasing the stride of the subwindows means moving the subwindows by more than one pixel for each subwindow. For example, for a stride of two, the subwindow is moved two pixels horizontally and two pixels vertically, so the number of subwindows is reduced by a factor of four and there are four times fewer total weighted sums.
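
A small arithmetic check of the counts above (assuming a 32x32 pixel input, which is consistent with the 28x28 sub-window grid; the stride-of-two case is an extrapolation from the text rather than a stated example):

    features = 6                     # features in the first layer
    weights_per_sum = 5 * 5          # 25 programmable weights per convolution sum

    for stride in (1, 2):
        windows_per_side = (32 - 5) // stride + 1          # 28 at stride 1, 14 at stride 2
        sums = features * windows_per_side ** 2            # 4,704 or 1,176 convolutions
        print(stride, sums, sums * weights_per_sum)        # 117,600 or 29,400 weighted sums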

[0057] In one implementation, a full convolution implementation as described above has 117,600 weighted sums and therefore has 117,600 analog memory capacitors that connect to 117,600 digital-to-analog converters (DACs) that are the weights. In one example, the DACs are hybrid CapDACs. This implementation is fast since all of the convolution sums happen in parallel within a few clock cycles needed for the scaling by the weights. This implementation is also the largest in device area. For larger image data arrays, the full implementation method can be size and cost prohibitive.

Convolution Multiplexed Implementation

[0058] For imaging data at a selected frame rate, multiple convolution and sub-sampling layers can be pipelined, and each layer can operate independently in parallel. Each layer completes within the frame rate period. The boundaries for multiplexing can be implemented in a number of ways.

[0059] Figure 2 shows a convolution multiplexed circuit 200, according to some embodiments of the disclosure. In the circuit 200, each pixel has a set of capacitors connected to, or driven by, the image source. A first pixel 202 is coupled to a first set of capacitors 212, and a second pixel 204 is coupled to a second set of capacitors 214. In one example, the first 212 and second 214 sets of capacitors are each a 5x5 subwindow of memory cells. In some examples, as shown in Figure 2, a 5x5 subwindow scan of the image pixels is used, and there are 5 memory cells connected to each pixel source. In one example, the memory cells are analog memory cells.

[0060] In Figure 2, the memory cells are fixed capacitors as shown in the first 212 and second 214 sets of capacitors. The first 212 and second 214 sets of capacitors are multiplexed (216, 218) to an array of variable capacitance structures 222a-222d. The variable capacitance structures 222a-222d can be in a number of forms. In one example, the variable capacitance structures 222a-222d form a hybridCapDAC structure 220. The hybridCapDAC structure 220 outputs a convolution output 224.

[0061] In one example, the variable capacitance structures 222a-222d of Figure 2 form a single 5x5 matrix subwindow. In other implementations, any number of arrays or matrices of variable capacitance cells can be multiplexed to reduce the convolution time.

[0062] Figure 3 shows a convolution multiplexed circuit 300 that uses a variable capacitance cell directly, according to some embodiments of the disclosure. In the circuit 300, each pixel has a set of variable capacitors coupled to the image source. A first pixel 302 is coupled to a first set of variable capacitors 312, and a second pixel 304 is coupled to a second set of variable capacitors 314. In one example, the first 312 and second 314 sets of capacitors are each a 5x5 subwindow of weighted memory cells. In some implementations, the memory cells have programmable weights. In some implementations, the weights for each of the memory cells are fixed after the learning phase of the neural network, in which the weights are learned. Thus, the weighting can be incorporated directly as a memory capacitor size that is equal to the weight. The first set of capacitors 312 has a first subwindow convolution output 322, and the second set of capacitors 314 has a second subwindow convolution output 324.

[0063] According to various implementations, the architecture shown in Figure 3 is affected by the weight resolution requirements: when the weight resolution requirement is high, the variable capacitor structure may be too large to implement as arrays. However, the multiplexed approach shown in Figure 2 can be smaller and can be implemented as arrays even with high weight resolution requirements.

Bias Addition

[0064] In some implementations, the convolution sum of weighted values is shifted with a trainable bias value. Figure 4 shows a diagram of a convolution circuit 400 including a weighted bias, according to some embodiments of the disclosure. In the circuit 400, each pixel has a set of variable capacitors coupled to the image source, similar to the circuit 300 shown in Figure 3. A first pixel 402 is coupled to a first set of variable capacitors 412, and a second pixel 404 is coupled to a second set of variable capacitors 414. In one example, the first 412 and second 414 sets of capacitors are each a 5x5 subwindow of weighted memory cells. The first set of capacitors 412 has a first output 416, and a first weighted bias 432 is added to the first output to result in the first subwindow convolution output 422. The second set of capacitors 414 has a second output 418, and a second weighted bias 434 is added to the second output 418 to result in the second subwindow convolution output 424. The first 432 and second 434 weighted biases shift the convolution sum of weighted values.

[0065] The first weighted bias 432 is added with a scaled charge value from a fixed voltage source 436 sampled by a CapDAC structure 426. Similarly, the second weighted bias 434 is added with a scaled charge value from a fixed voltage source 438 sampled by a CapDAC structure 428. In various examples, the CapDAC structures 426, 428 are HybridCapDACs.
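
As a hedged, idealized model of the bias addition (the variable names and unit-capacitance scaling are assumptions for illustration, not values from the figures), the capDAC samples a charge from the fixed voltage source, scales it by its programmed capacitance, and the scaled charge shifts the convolution output:

    def bias_charge(v_ref, c_dac):
        # Charge sampled from a fixed reference onto a programmed capDAC: Q = C * V.
        return c_dac * v_ref

    def add_bias(conv_output, v_ref, c_dac):
        # Idealized bias addition: the sampled, scaled charge shifts the convolution output.
        return conv_output + bias_charge(v_ref, c_dac)

    # Example: a 1 V reference and a capDAC programmed to 0.2 unit capacitances.
    print(add_bias(conv_output=3.4, v_ref=1.0, c_dac=0.2))   # 3.6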

Subsampling Layer Implementation

[0066] The subsampling layer involves an MxM average of the convolution outputs followed by a nonlinear transform. According to one implementation, the MxM average is an average of the non-overlapping MxM subwindows of the convolution outputs. The MxM average essentially decimates the spatial array output of the convolution step. However, charge sharing of an MxM subset of capacitor outputs of the convolution layer can act as a mean function directly. In some examples, there may be normalization issues when charge sharing of an MxM subset of capacitor outputs of the convolution layer acts directly as a mean function.

[0067] In some implementations, subsampling is overlapping. In other implementations, subsampling is not overlapping. Figure 5 shows a convolution multiplexed circuit 500 in which subsampling is non-overlapping, according to some embodiments of the disclosure. The circuit 500 includes a first subwindow of variable capDACs 502, and a second subwindow of variable capDACs 504. The capDACs in the first 502 and second 504 subwindows may be hybridCapDACs. The first subwindow 502 outputs a first convolution output 506, and the second subwindow 504 outputs a second convolution output 508. The first 506 and second 508 outputs are input to a subsampling summer 510 including a switching element. The subsampling summer 510 includes a first switch 526 coupled to the first output 506, and a second switch 528 coupled to the second output 508. In one implementation, the first 526 and second 528 switches prevent overlapping of the subsampling.

[0068] The subsampling output 512 from the subsampling switching element 510 is input to a variable capDAC 514. The variable capDAC 514 acts as a trainable weight. In some examples, the variable capDAC 514 is a hybridCapDAC. A bias 516 is added to the output from the variable capDAC 514 to result in the subwindow subsampling output 522. The bias 516 is added with a scaled charge value from a fixed voltage source 520 sampled by a CapDAC structure 518.

[0069] When the subsampling subwindows do not overlap, the implementation is simple and passive. When overlapping is used, each output of the convolution sum drives several inputs of the subsequent subsampling layer.

Maximum Pooling Implementation

[0070] According to one implementation, another subsampling method is called MAX Pooling. MAX Pooling uses the maximum value of an MxM subwindow of values for the output. Implementing a MAX function in a switched capacitor (switchcap) circuit implies a comparison event among all of the MxM elements.

[0071] Figure 6 shows a circuit 600 for sub-sampling, and in particular for MAX pooling, according to some embodiments of the disclosure. The MAX pooling circuit 600 includes a set of six analog voltage comparators 602a-602f for comparing four elements 604a-604d of a 2x2 subwindow 606 of a window 620 with the other elements 604a-604d in the subwindow 606. As shown in Figure 6, the first element 604a is compared with the second element 604b at comparator 602b, the first element 604a is compared with the third element 604c at comparator 602a, and the first element 604a is compared with the fourth element 604d at comparator 602c. Similarly, the second element 604b is compared with the first 604a, third 604c, and fourth 604d elements, at comparators 602b, 602d, and 602e, respectively. The third element 604c is compared with the first 604a, second 604b, and fourth 604d elements, at comparators 602a, 602d, and 602f, respectively. The fourth element 604d is compared with the first 604a, second 604b, and third 604c elements at comparators 602c, 602e, and 602f, respectively. In various implementations, the analog voltage comparators can be continuous time comparators or clocked comparators.
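
A behavioral Python sketch of this comparator-and-decoder scheme (a hypothetical model of the decision logic, not the circuit itself): each of the six pairwise comparisons registers a win for the larger element, and the element that wins all three of its comparisons is routed to the output, mimicking the closed switch.

    from itertools import combinations

    def max_pool_2x2(elements):
        # Decode six pairwise comparator decisions among four values to select the maximum.
        assert len(elements) == 4
        wins = [0, 0, 0, 0]
        for i, j in combinations(range(4), 2):     # the six comparator decisions
            if elements[i] >= elements[j]:
                wins[i] += 1
            else:
                wins[j] += 1
        winner = wins.index(3)                     # element that wins all three comparisons
        return elements[winner]                    # switch closed for the winning element

    print(max_pool_2x2([0.2, 0.9, 0.4, 0.7]))      # 0.9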

[0072] The outputs of the comparators 602a-602f are decoded at a decoder 608 to determine the largest element. In one example, the decoder 608 is a combinatorial logic block. The decoder 608 outputs a digital decision 610 that selects which of four switches 612a-612d to close to enable the MAX operation output, where the switches 612a-612d connect lines from each of the elements 604a-604d to the output line 614. When a switch is closed, data from the respective element will be output to the output line 614.

Nonlinear Function Implementation

[0073] As discussed above, various different transfer functions can be used in the subsampling layers of a CNN. When a nonlinear transfer function is used in the subsampling layers of a CNN, the shape of the nonlinear transfer function has a broad range of possibilities. According to one aspect, implementing the CNN layers with SAT includes picking a nonlinear transfer function that is small and low power. According to various implementations, an analog rectification function is used for the subsampling layers of a CNN, and the rectification is either greater than zero or less than zero. In particular, the analog rectification is offset by a constant in input and output from the ideal function. The trainable bias terms can compensate for the constant offset.
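
A simple numerical model of the offset rectification described above (the offset value is a placeholder, and the single-offset form is a simplifying assumption): the analog rectifier behaves like the ideal rectifier shifted by a constant, and a trainable bias added ahead of it can cancel that shift.

    def ideal_relu(x):
        return max(x, 0.0)

    def analog_rectifier(x, offset=0.3):
        # Behaves like the ideal rectifier shifted by a constant offset (e.g., a device drop).
        return max(x - offset, 0.0)

    def with_bias_compensation(x, offset=0.3):
        # A trainable bias added before rectification cancels the constant offset.
        return analog_rectifier(x + offset, offset)

    for x in (-1.0, -0.2, 0.0, 0.5):
        print(x, ideal_relu(x), with_bias_compensation(x))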

[0074] Figures 7, 8, and 9 show three ways of creating nonlinear transforms. Figure 7 shows a nonsymmetric transform 700 including a diode rectification circuit 702, according to some embodiments of the disclosure. The nonsymmetric transform 700 results in a rectification transfer function 704. Figure 8 shows a nonsymmetric transform 800 including MOSFET rectification 802, according to some embodiments of the disclosure. The nonsymmetric transform 800 results in a rectification transfer function 804. Figure 9 shows a symmetric nonlinear transform 900 including MOSFET clamping 902. The symmetric nonlinear transform 900 results in a transfer function 904.

[0075] Figures 7, 8, and 9 include shunting elements, which connect to capacitance based charge outputs. One example of a capacitance based charge output is a subwindow sum output that is passed through the transform. The transforms 700, 800, and 900 can be used to change the charge on a subsampling layer output by implementing a charge sharing (CS) event.

[0076] In various examples, detection success can be sensitive to variability in process, temperature, and voltage. According to various implementations, the circuit can be adjusted to compensate for process, temperature, and voltage variability.

[0077] Figure 10 shows a nonsymmetric transform 1000 including MOSFET rectification 1002 and first 1006 and second 1008 switches. The first 1006 and second 1008 switches close according to the first 1016 and second 1018 timing diagrams, respectively. Figure 10 shows an implementation of rectification using a charge sharing event to discharge the charge on the subwindow sum during clock phase 2 when the voltage is negative. Other functions can be implemented in a similar way. The nonsymmetric transform 1000 results in a rectification transfer function 1004.

[0078] In some implementations, the nonlinear transfer function or squashing function is implemented at the convolution operation as well as at the output of the pooling operation.

Applications

[0079] Convolutional neural networks (CNNs) use spatial and temporal structure by sharing weights across features. The architecture of CNNs allows for equivariance in the feature activations, making CNNs well suited to image and video applications, for example character identification such as handwritten digit recognition and, more broadly, image recognition/classification and video recognition/classification. According to various implementations, CNNs are used when the input data is in the form of an array with highly correlated local variables and exhibits shift invariance.

[0080] Speech and natural language processing are two other areas in which CNNs can be used to take advantage of the inherent structure in the problem. Automatic Speech Recognition (ASR) is one application that has historically been approached using Hidden Markov Models (HMMs) or Gaussian Mixture Models (GMMs). Now, however, the state of the art is to use deep neural networks, and in particular CNNs, for ASR. CNNs that have been trained to do ASR can also be used to identify languages in noisy environments.

[0081] CNNs have broad-reaching applications, including drug discovery and chemical analysis. In one application, a CNN is trained to predict bioactivity and chemical interactions of molecules. Biological systems operate via molecular-level interactions; thus, the ability to predict molecular interactions using CNNs can greatly aid drug discovery.

[0082] In other applications, CNNs can be used for DNA Mapping. DNA mapping is used to describe the positions of genes on chromosomes, and to determine the distance between genes.

[0083] Figure 11 shows a method 1100 for implementing a neural network using sampled analog technology, according to some embodiments of the disclosure. At step 1102, analog input data including first and second analog input data points is received. At step 1104, the first analog input data point is analyzed with a first set of capacitors to generate a first output. At step 1106, the second analog input data point is analyzed with a second set of capacitors to generate a second output. At step 1108, an analog convolution output is generated based on the first and second outputs. The analog convolution output includes a plurality of features, as discussed above.
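
Tying the steps of method 1100 together, a hypothetical end-to-end sketch (placeholder shapes and random weights; the real data path is analog and is modeled here with floating-point values):

    import numpy as np

    def method_1100(analog_input, weights_1, weights_2, bias=0.0):
        # Steps 1102-1108: receive two analog data points (modeled as 5x5 sub-windows),
        # analyze each with its own set of weighted capacitors, and form the analog
        # convolution output from the two results.
        point_1, point_2 = analog_input                       # step 1102
        out_1 = float(np.dot(weights_1, point_1.ravel()))     # step 1104
        out_2 = float(np.dot(weights_2, point_2.ravel()))     # step 1106
        return np.array([out_1, out_2]) + bias                # step 1108

    data = (np.random.rand(5, 5), np.random.rand(5, 5))
    print(method_1100(data, np.random.rand(25), np.random.rand(25), bias=0.1))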

[0084] In some examples, the method includes multiplexing the first and second outputs through an array of variable capacitance structures to generate the analog convolution output. In some implementations, the first and second sets of capacitors are variable capacitance cells, the first set of capacitors has a first fixed weight, and the second set of capacitors has a second fixed weight. Generating the first output includes multiplying the first input data point with the first fixed weight, and generating the second output includes multiplying the second input data point with the second fixed weight.

[0085] In some implementations, a first bias value is generated by sampling a scaled charge from a first fixed voltage source, and adding the first bias value to the first output. The scaled charge is sampled by a first capacitor digital-to-analog converter. In some examples, the analog convolution output is averaged with a second analog convolution output at a sub-sampler. In some examples, the analog convolution output includes a subwindow of values, and a maximum value of the subwindow of values is determined.

Variations and Implementations

[0086] In various implementations, SAT as described herein can be used in any type of CNN. According to some implementations, SAT is used in Dense Convolutional Networks. In Dense Convolutional Neural Networks, each layer is connected with every other layer in a feed-forward manner. For each layer, the inputs include features of all the preceding layers. Similarly, for each layer, the layer's features are input to all subsequent layers.

[0087] In the discussions of the embodiments above, the capacitors, clocks, DFFs, dividers, inductors, resistors, amplifiers, switches, digital core, transistors, and/or other components can readily be replaced, substituted, or otherwise modified in order to accommodate particular circuitry needs. Moreover, it should be noted that the use of complementary electronic devices, hardware, software, etc. offers an equally viable option for implementing the teachings of the present disclosure.

[0088] In one example embodiment, any number of electrical circuits of the FIGURES may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on a non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities.

[0089] In another example embodiment, the electrical circuits of the FIGURES may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices. Note that particular embodiments of the present disclosure may be readily included in a system on chip (SOC) package, either in part, or in whole. An SOC represents an IC that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio frequency functions: all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the CNN functionalities may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.

[0090] In another example embodiment, the electrical circuits of the FIGURES may be implemented to be part of the training of the CNN circuit. The training is a feedback path which processes the output of the CNN block to determine the various weights.

[0091] It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of processors, logic operations, etc.) have been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.

[0092] Note that the activities discussed above with reference to the FIGURES are applicable to any integrated circuits that involve signal processing, particularly those that can execute specialized software programs, or algorithms, some of which may be associated with processing digitized real-time data. Certain embodiments can relate to multi-DSP signal processing, floating point processing, signal/control processing, fixed-function processing, microcontroller applications, etc.

[0093] In certain contexts, the features discussed herein can be applicable to medical systems, scientific instrumentation, wireless and wired communications, radar, industrial process control, audio and video equipment, current sensing, instrumentation (which can be highly precise), and other digital-processing-based systems.

[0094] Moreover, certain embodiments discussed above can be provisioned in digital signal processing technologies for medical imaging, patient monitoring, medical instrumentation, and home healthcare. This could include pulmonary monitors, accelerometers, heart rate monitors, pacemakers, etc. Other applications can involve automotive technologies for safety systems (e.g., stability control systems, driver assistance systems, braking systems, infotainment and interior applications of any kind). Furthermore, powertrain systems (for example, in hybrid and electric vehicles) can use high-precision data conversion products in battery monitoring, control systems, reporting controls, maintenance activities, etc.

[0095] In yet other example scenarios, the teachings of the present disclosure can be applicable in the industrial markets that include process control systems that help drive productivity, energy efficiency, and reliability. In consumer applications, the teachings of the signal processing circuits discussed above can be used for image processing, auto focus, and image stabilization (e.g., for digital still cameras, camcorders, etc.). Other consumer applications can include audio and video processors for home theater systems, DVD recorders, and high-definition televisions. Yet other consumer applications can involve advanced touch screen controllers (e.g., for any type of portable media device). Hence, such technologies could readily be part of smartphones, tablets, security systems, PCs, gaming technologies, virtual reality, simulation training, etc.

[0096] Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the FIGURES may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the FIGURES and its teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures.

[0097] Note that in this Specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in "one embodiment", "example embodiment", "an embodiment", "another embodiment", "some embodiments", "various embodiments", "other embodiments", "alternative embodiment", and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments.

[0098] It is also important to note that the functions related to CNNs illustrate only some of the possible CNN functions that may be executed by, or within, systems illustrated in the FIGURES. Some of these operations may be deleted or removed where appropriate, or these operations may be modified or changed considerably without departing from the scope of the present disclosure. In addition, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion.

Substantial flexibility is provided by embodiments described herein in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.

[0099] Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words "means for" or "step for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

OTHER NOTES, EXAMPLES, AND IMPLEMENTATIONS

[00100] Note that all optional features of the apparatus described above may also be implemented with respect to the method or process described herein and specifics in the examples may be used anywhere in one or more embodiments.

[00101] In a first example, a system is provided (that can include any suitable circuitry, dividers, capacitors, resistors, inductors, ADCs, DFFs, logic gates, software, hardware, links, etc.) that can be part of any type of computer, which can further include a circuit board coupled to a plurality of electronic components. The system can include means for clocking data from the digital core onto a first data output of a macro using a first clock, the first clock being a macro clock; means for clocking the data from the first data output of the macro into the physical interface using a second clock, the second clock being a physical interface clock; means for clocking a first reset signal from the digital core onto a reset output of the macro using the macro clock, the first reset signal output used as a second reset signal; means for sampling the second reset signal using a third clock, which provides a clock rate greater than the rate of the second clock, to generate a sampled reset signal; and means for resetting the second clock to a predetermined state in the physical interface in response to a transition of the sampled reset signal.

[00102] The 'means for' in these instances (above) can include (but is not limited to) using any suitable component discussed herein, along with any suitable software, circuitry, hub, computer code, logic, algorithms, hardware, controller, interface, link, bus, communication pathway, etc. In a second example, the system includes memory that further comprises machine-readable instructions that when executed cause the system to perform any of the activities discussed above.