


Title:
EXECUTION METHOD, EXECUTION DEVICE, LEARNING METHOD, LEARNING DEVICE, AND PROGRAM FOR DEEP NEURAL NETWORK
Document Type and Number:
WIPO Patent Application WO/2019/050771
Kind Code:
A1
Abstract:
Executing a deep neural network by obtaining, during deep neural network inference, a binary intermediate feature map in binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector using a first transformation module (S210, S215); generating a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer (S220); storing the compressed feature map into memory; reconstructing the binary intermediate feature map by decompressing the compressed feature map read from the memory using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer (S240); and converting the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module (S245, S250).

Inventors:
GUDOVSKIY, Denis A. (10900 North Tantau Ave., Suite 20, Cupertino, CA 95014, US)
RIGAZIO, Luca (10900 North Tantau Ave., Suite 20, Cupertino, CA 95014, US)
Application Number:
US2018/048867
Publication Date:
March 14, 2019
Filing Date:
August 30, 2018
Assignee:
PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA (20000 Mariner Avenue, Suite 200, Torrance, CA 90503, US)
International Classes:
G06N3/02; G06N3/04; G06N3/06; G06N3/08
Domestic Patent References:
WO2016145379A1, 2016-09-15
Foreign References:
US20140085501A1, 2014-03-27
US20170132515A1, 2017-05-11
US20160360202A1, 2016-12-08
US20030103667A1, 2003-06-05
US20070233477A1, 2007-10-04
US5161204A, 1992-11-03
US20160098249A1, 2016-04-07
US20030059121A1, 2003-03-27
US8527276B1, 2013-09-03
Attorney, Agent or Firm:
FIELDS, Kenneth W. (1030 15th Street NW, Suite 400 East, Washington, DC 20005, US)
Claims:
[CLAIMS]

[Claim 1]

An execution method for a deep neural network, the execution method comprising:

obtaining, during deep neural network inference, a binary intermediate feature map in binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector using a first transformation module;

generating a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer;

storing the compressed feature map into memory;

reconstructing the binary intermediate feature map by decompressing the compressed feature map read from the memory using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer; and

converting the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module.

[Claim 2]

The execution method according to claim 1, wherein

the nonlinear dimensionality reduction layer is a single projection convolved layer or a sequence of projection convolved layers, and

the reconstruction layer is a single reconstruction convolved layer or a sequence of reconstruction convolved layers.

[Claim 3]

A backpropagation-based learning method for the deep neural network executed using the execution method according to claim 1 or 2, the learning method comprising:

applying an analytical derivative of the first transformation module and the second transformation module to a gradient for a next layer among layers included in the deep neural network to generate a gradient for a previous layer among the layers included in the deep neural network; updating a weight and a bias based on the gradient generated for the previous layer; and

initializing a weight for the nonlinear dimensionality reduction layer and a weight for the reconstruction layer, based on an identity mapping function.

[Claim 4]

An execution device for a deep neural network, the execution device comprising:

a processor that executes deep neural network inference, wherein the processor:

obtains a binary intermediate feature map in binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector using a first transformation module;

generates a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer;

stores the compressed feature map into memory;

reconstructs the binary intermediate feature map by decompressing the compressed feature map read from the memory using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer; and

converts the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module.

[Claim 5]

A learning device for a deep neural network, the learning device comprising:

a processor that executes backpropagation-based learning by the deep neural network executed using the execution method according to claim 1 or 2,

wherein the processor:

applies an analytical derivative of the first transformation module and the second transformation module to a gradient for a next layer among layers included in the deep neural network to generate a gradient for a previous layer among the layers included in the deep neural network;

updates a weight and a bias based on the gradient generated for the previous layer; and

initializes a weight for the nonlinear dimensionality reduction layer and a weight for the reconstruction layer, based on an identity mapping function.

[Claim 6]

A program for causing a computer to execute the execution method according to claim 1.

[Claim 7]

A program for causing a computer to execute the learning method according to claim 3.

Description:
[DESCRIPTION]

[Title of Invention]

EXECUTION METHOD, EXECUTION DEVICE, LEARNING METHOD, LEARNING DEVICE, AND PROGRAM FOR DEEP NEURAL NETWORK

[Technical Field]

The present disclosure relates to, for example, an execution method for a deep neural network.

[Background Art]

Recent achievements of deep neural networks (hereinafter referred to as DNN) make them an attractive choice in many computer vision applications, including image classification and object detection. However, the memory and computations required for DNNs can place an excessive processing load on low-power deployments.

Proposed or suggested methods of reducing such processing load include layer fusion, in which the calculation of layers can be fused without storing intermediate feature maps into memory, and compression of quantized feature map values using nonlinear dimensionality reduction layers (see Non Patent Literature (NPL) 1 and 2).

[Citation List]

[Non Patent Literature]

[NPL 1]

M. Alwani, H. Chen, M. Ferdman, and P. A. Milder; Fused-layer CNN accelerators; MICRO; pages 1-12; October 2016

[NPL 2]

F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer; SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size; arXiv preprint arXiv:1602.07360, 2016

[Summary of Invention]

[Technical Problem]

However, the reductions in DNN processing load gained by the above methods are small and may introduce a reduction in accuracy. Furthermore, the improvement in processing speed is insufficient. The present disclosure provides, for example, an execution method and execution device characterized by a low processing load and fast processing with a reduced drop in accuracy.

[Solution to Problem]

In order to solve the above problem, an execution method for a DNN according to one aspect of the present disclosure includes: obtaining, during deep neural network inference, a binary intermediate feature map in binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector using a first transformation module; generating a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer; storing the compressed feature map into memory; reconstructing the binary intermediate feature map by decompressing the compressed feature map read from the memory using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer; and converting the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module.

These general and specific aspects may be implemented using a system, a device, an integrated circuit, a computer program, a computer-readable recording medium (e.g., CD-ROM), or any combination of systems, devices, integrated circuits, computer programs, or computer-readable recording media.

[Advantageous Effects of Invention]

The DNN execution method, etc., according to the present disclosure are characterized by a low processing load and fast processing with a reduced drop in accuracy.

[Brief Description of Drawings]

[FIG. 1] FIG. 1 shows a model illustrating a method according to an embodiment of executing a DNN during inference.

[FIG. 2] FIG. 2 is a model illustrating a forward pass during inference.

[FIG. 3] FIG. 3 is a model illustrating a backward pass during backpropagation corresponding to the execution method.

[FIG. 4] FIG. 4 shows the results of inference accuracies from an ImageNet dataset when the execution method is applied to SqueezeNet architecture in Implementation Example 1.

[FIG. 5] FIG. 5 shows the results of inference accuracies from an ImageNet dataset when the execution method is applied to MobileNetV2 architecture in Implementation Example 1.

[FIG. 6] FIG. 6 shows the results of inference accuracies from a VOC2007 dataset when the execution method is applied to an SSD512 model in Implementation Example 2.

[FIG. 7] FIG. 7 illustrates memory usage in SSD512 models in Implementation Example 2.

[FIG. 8] FIG. 8 is a block diagram illustrating a hardware configuration example of an information processing device that executes one of the above methods.

[Description of Embodiments]

An execution method for a DNN according to one aspect of the present disclosure includes: obtaining, during deep neural network inference, a binary intermediate feature map in binary representation by converting a floating-point or fixed-point intermediate feature map into a binary vector using a first transformation module; generating a compressed feature map by compressing the binary intermediate feature map using a nonlinear dimensionality reduction layer; storing the compressed feature map into memory; reconstructing the binary intermediate feature map by decompressing the compressed feature map read from the memory using a reconstruction layer corresponding to the nonlinear dimensionality reduction layer; and converting the reconstructed binary intermediate feature map into a floating-point or fixed-point intermediate feature map using a second transformation module.

With this, compared to conventional art, it is possible to execute DNNs with little memory usage. This reduction in memory bandwidth leads to an increase in processing speed.

Here, for example, the nonlinear dimensionality reduction layer may be a single projection convolved layer or a sequence of projection convolved layers, and the reconstruction layer may be a single reconstruction convolved layer or a sequence of reconstruction convolved layers.

A backpropagation-based learning method for the deep neural network executed using the above execution method includes: applying an analytical derivative of the first transformation module and the second transformation module to a gradient for a next layer among layers included in the deep neural network to generate a gradient for a previous layer among the layers included in the deep neural network; updating a weight and a bias based on the gradient generated for the previous layer; and initializing a weight for the nonlinear dimensionality reduction layer and a weight for the reconstruction layer, based on an identity mapping function.

The addition of processing for binary representation in inference makes learning possible at the binary level. This allows DNNs to be executed with less memory usage and faster processing speeds than conventional art.

These general and specific aspects may be implemented using a system, a device, an integrated circuit, a computer program, a computer-readable recording medium (e.g., CD-ROM), or any combination of systems, devices, integrated circuits, computer programs, or computer-readable recording media.

Hereinafter, an embodiment is described in detail with reference to the Drawings.

The following embodiment describes a general or specific example. The numerical values, shapes, elements, the arrangement and connection of the elements, steps, the processing order of the steps, etc., shown in the following embodiment are mere examples, and are not intended to limit the scope of the invention according to the present disclosure. Therefore, among the elements in the following embodiment, those not recited in any one of the independent claims are described as optional elements.

(EMBODIMENT)

[DNN]

First, a commonly used DNN will be described.

The input feature map X^{l-1} of an l-th convolution layer in commonly used DNNs can be represented by (1), where C, H, and W are the number of input channels, the height, and the width, respectively, and ℝ represents the real numbers.

X^{l-1} ∈ ℝ^{C×H×W} ... (1)

The input X^{l-1} is convolved with a weight tensor W^l represented by (2), where C̃ is the number of output channels, and H_f and W_f are the height and the width of the filter kernel, respectively.

W^l ∈ ℝ^{C̃×C×H_f×W_f} ... (2)

Note that the bias vector b represented by (3) is added to the result of the convolution operation.

b ∈ ℝ^{C̃} ... (3)

Once all C̃ channels are computed, an element-wise nonlinear function is applied to the result of the convolution operations. Then, the c̃-th channel of the output tensor X^l represented by (4) can be computed using the computational expression represented by (5).

X^l ∈ ℝ^{C̃×H×W} ... (4)

X^l_{c̃} = g(W^l_{c̃} * X^{l-1} + b_{c̃}) ... (5)

Note that * in (5) denotes the convolution operation, and g() is some nonlinear function. For example, assume g() is a rectified linear unit (ReLU) defined as g(x) = max(0, x), such that all activations are non-negative.

Refer to the above as needed for comprehension of the notation used in the figures and the following description.
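As a point of reference, the per-channel computation in (5) can be sketched with a direct, unoptimized implementation. All function and variable names here are ours, and a square odd-sized kernel with zero padding is assumed:

```python
import numpy as np

def conv_layer(x, w, b, g=lambda v: np.maximum(0.0, v)):
    """Direct convolution per (5): X^l_c = g(W^l_c * X^{l-1} + b_c).

    x: input feature map, shape (C, H, W)       -- see (1)
    w: weight tensor, shape (C_out, C, Hf, Wf)  -- see (2), Hf and Wf odd
    b: bias vector, shape (C_out,)              -- see (3)
    Returns an output of shape (C_out, H, W), zero-padded to "same" size.
    """
    c_out, c_in, hf, wf = w.shape
    _, h, wd = x.shape
    ph, pw = hf // 2, wf // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))  # zero padding
    y = np.empty((c_out, h, wd))
    for co in range(c_out):
        for i in range(h):
            for j in range(wd):
                # correlate the co-th filter with the local input patch
                y[co, i, j] = np.sum(w[co] * xp[:, i:i + hf, j:j + wf]) + b[co]
    return g(y)  # element-wise nonlinearity, e.g. ReLU
```

With a 1×1 kernel, all-ones weights, and bias 0.5, every output pixel is simply the channel sum plus the bias.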

[DNN Execution Method]

Next, a method according to this embodiment for executing a DNN during inference will be described. FIG. 1 shows a model that illustrates the DNN execution method according to this embodiment. In FIG. 1, the model indicated by (a) is a conventional method, and the model indicated by (b) is the method according to this embodiment. The two models are shown side by side for comparison.

The models indicated by (a) and (b) schematically illustrate neural networks including multiple convolution layers, processed from left to right. Note that to simplify notation, biases are not shown in either of the models.

First, with the conventional method indicated by (a), the calculation of N sequential layers is fused together (step S100) without storing intermediate feature maps X^{l-N+1} through X^{l-1}, to obtain feature map X^l. This fusion can be done in a channel-wise fashion using memory buffers that are much smaller than the whole feature map.

Next, feature map X^l is quantized using a nonlinear function q() (step S110). The pre-quantization feature map X^l is an element of the field of real numbers ℝ, and the quantized feature map X̂^l is an element of a finite field over integers. The quantization step may introduce a reduction in accuracy due to imperfect approximation. The network architecture is not changed after quantization, and feature maps are compressed only up to a certain suboptimal bitwidth resolution.
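The text does not fix a particular choice of q(). As one illustrative assumption, a uniform B-bit quantizer for non-negative (post-ReLU) activations, together with its approximate inverse q^{-1}(), might look like the following sketch:

```python
import numpy as np

def quantize(x, b_bits, x_max):
    """q(): map real activations to integers in {0, ..., 2^B - 1}.

    Uniform quantization is only one possible choice; as noted in the
    text, the step may lose accuracy due to imperfect approximation.
    """
    levels = 2 ** b_bits - 1
    q = np.rint(np.clip(x, 0.0, x_max) / x_max * levels)
    return q.astype(np.uint32)

def dequantize(q, b_bits, x_max):
    """q^{-1}(): map the integers back to approximate real values."""
    return q.astype(np.float64) / (2 ** b_bits - 1) * x_max
```

The round-trip error of this scheme is bounded by half a quantization step, i.e. x_max / (2 · (2^B − 1)).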

Next, nonlinear dimensionality reduction is performed using an additional convolution layer (step S120). Here, the mapping represented in (6) is performed using the projection weights P^l represented in (7).

X̂^l ∈ Q^{C×H×W} → Ŷ^l ∈ Q^{C̃×H×W} ... (6)

P^l ∈ ℝ^{C̃×C×H_f×W_f} ... (7)

Note that C̃ in (6) and (7) denotes the output channel dimension and is less than C. Here, only the compressed feature map Ŷ^l needs to be stored in the memory buffer (step S130).

Thereafter, the compressed feature map Ŷ^l can be projected back onto the high-dimensional tensor using the weights R^l (step S140). Lastly, an inverse transform using an inverse quantization function q^{-1}() is performed (step S150) to obtain the feature map X^{l+1}, which is an element of the field of real numbers.

Next, the DNN execution method according to this embodiment, which is illustrated in (b) in FIG. 1, will be described. Unlike the conventional method, this method includes a transformation for representing an intermediate feature map using a binary vector. First, this transformation will be described.

First, consider a scalar x derived from feature map X^l, which is an element of the field of real numbers. Here, conventional quantization can be represented by (8), where the scalar-to-scalar mapping or nonlinear function is expressed as q(x).

x ∈ ℝ^{1×1} → x̂ ∈ Q^{1×1} : min ‖x − x̂‖ ... (8)

Note that x̂ is the quantized scalar, Q is the GF(2^B) finite field for fixed-point representation, and B is the number of bits.

Here, in this embodiment, a new x̂ representation is introduced by a linear binarization function b() defined by (9).

x̂ ∈ Q^{1×1} → x̃ ∈ B^{B×1} : x̃ = b ⊗ x̂ ... (9)

Here, ⊗ is a bitwise AND operation, vector b = [2^0, 2^1, ..., 2^{B−1}]^T, and B is the finite field GF(2).

An inverse function of the linear binarization function b() is expressed as in (10).

x̃ ∈ B^{B×1} → x̂ ∈ Q^{1×1} : x̂ = b^T x̃ = b^T (b ⊗ x̂) = (2^B − 1) ⊗ x̂ ... (10)

Equations (9) and (10) show that a scalar over a higher cardinality finite field can be linearly converted to and from a vector over a finite field with two elements.
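Concretely, (9) and (10) amount to unpacking a B-bit integer into its bit vector and packing it back. A minimal sketch (function names are ours):

```python
import numpy as np

def binarize(x_hat, b_bits):
    """b() per (9): unpack a quantized scalar into a bit vector over GF(2).

    Component i is nonzero exactly when (2^i AND x_hat) is nonzero,
    i.e. it is bit i of x_hat.
    """
    weights = 1 << np.arange(b_bits, dtype=np.uint32)   # b = [2^0, ..., 2^{B-1}]^T
    return ((np.uint32(x_hat) & weights) > 0).astype(np.uint8)

def debinarize(x_tilde):
    """b^{-1}() per (10): the dot product b^T x_tilde recovers the scalar."""
    weights = 1 << np.arange(x_tilde.size, dtype=np.uint32)
    return int(weights @ x_tilde)
```

The round trip is exact for any value representable in B bits, which is the point of (9) and (10): no information is lost in moving between GF(2^B) and the B-dimensional space over GF(2).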

Hereinafter, a DNN execution method according to this embodiment including compression of a feature map based on these derivations will be described in accordance with (b) in FIG. 1.

The layers before the layer in which the feature map X^l is obtained (step S200) are the same as those through step S100 in (a). Accordingly, description and depiction in the drawings of the steps before step S200 are omitted.

In the next layer, feature map X^l, which is the activations, is quantized to obtain feature map X̂^l (step S210). In the next layer, the transformation in (9) is applied to feature map X̂^l (step S215). The result of this transformation, feature map X̃^l, is represented by (11).

X̃^l ∈ B^{B×C×H×W} ... (11)

The feature map X̃^l converted to binary representation in such a manner is hereinafter also referred to as a binary intermediate feature map.

For implementation convenience, the bit dimension newly added in (11) can be concatenated along the channel dimension, resulting in the binary intermediate feature map X̃^l shown in (12).

X̃^l ∈ B^{BC×H×W} ... (12)

Note that a module of layers in which the processes in steps S210 and S215 are executed is one example of a first transformation module according to this embodiment.

A single nonlinear dimensionality reduction layer using projection weights P^l, or a sequence of such layers with P^l_i projection weights, can be applied to this binary intermediate feature map X̃^l to obtain a compressed representation of the binary intermediate feature map in binary vectors over finite field GF(2) (hereinafter also referred to as compressed feature map Ŷ^l) (step S220).
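Over GF(2), multiplication is a bitwise AND and addition is an XOR, i.e. arithmetic modulo 2, so a 1×1 projection applied to the binary feature map reduces to a mod-2 matrix product. The toy sketch below assumes binary projection weights; this is one way such a layer could be realized at inference time, not necessarily the layer as trained:

```python
import numpy as np

def gf2_pointwise_conv(x_bin, p_bin):
    """1x1 convolution over GF(2): AND for products, XOR (mod-2 sum) for sums.

    x_bin: binary feature map, shape (BC, H, W)    -- see (12)
    p_bin: binary projection weights, shape (c_out, BC), c_out < BC
    Returns a compressed binary map of shape (c_out, H, W).
    """
    bc, h, w = x_bin.shape
    flat = x_bin.reshape(bc, h * w)
    y = (p_bin.astype(np.uint32) @ flat.astype(np.uint32)) % 2  # mod-2 sum of AND products
    return y.astype(np.uint8).reshape(-1, h, w)
```

Choosing rows of the identity matrix as weights makes the layer copy selected channels, which is the degenerate (lossless-for-those-channels) case of such a projection.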

Only the compressed feature maps Ŷ^l, which are elements of field B, need to be stored in memory during inference (step S230). Non-compressed feature maps can be processed using small buffers, e.g., in a sequential channel-wise fashion, and therefore need not be stored in memory.

Then, in the layer after processing using convolution layers R^l, a binary intermediate feature map whose above-described compression is undone is reconstructed (step S240). Once the reconstructed binary intermediate feature map is input into the next layer, the binary representation is undone using the inverse function b^{-1}() from (10) (step S245) to convert the binary intermediate feature map into an intermediate feature map that is an element of a finite field over integers, and then an inverse quantization function q^{-1}() is applied to further convert the intermediate feature map into a feature map that is an element of the field of real numbers (S250).

Note that a module of layers in which the processes in steps S245 and S250 are executed is one example of a second transformation module according to this embodiment.
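Putting steps S210 through S250 together, the round trip through the memory buffer can be sketched end to end. Here the bitwidth, the activation range, and the compression and reconstruction layers (identity stand-ins in place of the learned P^l and R^l layers) are all illustrative assumptions:

```python
import numpy as np

B, X_MAX = 4, 1.0  # illustrative bitwidth and activation range

def to_binary(x):
    """First transformation module (S210, S215): quantize, then unpack bits."""
    q = np.rint(np.clip(x, 0.0, X_MAX) / X_MAX * (2 ** B - 1)).astype(np.uint8)
    bits = (q[None, ...] >> np.arange(B).reshape(B, 1, 1, 1)) & 1
    return bits.reshape(-1, *x.shape[1:])  # bit dim folded into channels, per (12)

def from_binary(bits, c):
    """Second transformation module (S245, S250): pack bits, then dequantize."""
    bits = bits.reshape(B, c, *bits.shape[1:])
    q = np.sum(bits << np.arange(B).reshape(B, 1, 1, 1), axis=0)
    return q.astype(np.float64) / (2 ** B - 1) * X_MAX

compress = reconstruct = lambda t: t  # placeholders for the learned P^l and R^l layers

x = np.linspace(0.0, 1.0, 48).reshape(3, 4, 4)  # intermediate feature map X^l
stored = compress(to_binary(x))                 # S220/S230: only this is kept in memory
x_rec = from_binary(reconstruct(stored), 3)     # S240-S250: reconstruct and invert
```

With the identity stand-ins, the only error in x_rec comes from quantization, bounded by half a quantization step.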

This concludes the description of the DNN execution method according to this embodiment. Next, performance evaluations related to memory usage and accuracy under this method will be introduced and compared with those of conventional methods in the implementation examples (described later).

[DNN Learning Method]

Next, a backpropagation-based learning method for the deep neural network executed using the method described above will be described. FIG. 2 is a model illustrating a forward pass during inference in the execution method described above. FIG. 3 is a model illustrating a backward pass during backpropagation corresponding to the execution method described above. The pass in FIG. 2 corresponds to equation (9), and the pass in FIG. 3 corresponds to equation (10).

Newly introduced function b^{-1}() in the DNN execution method according to this embodiment can be represented as a gate that makes hard decisions, similar to ReLU. The gradient of function b^{-1}() can then be calculated using (13).

∇ ∈ ℝ^{1×1} → ∇̃ ∈ ℝ^{B×1} : ∇̃ = 1_{x̃>0} ∇ ... (13)

Lastly, the gradient of function b() is a scaled sum of the gradient vector, calculated by (14).

∇̃ ∈ ℝ^{B×1} → ∇ ∈ ℝ^{1×1} : ∇ = 1^T ∇̃ = 1^T 1_{x̃>0} ∇ = ‖x̃‖₀ ∇ ... (14)

Note that ‖x̃‖₀ in (14) is a gradient scaling factor that represents the number of nonzero elements in x̃. Practically, the scaling factor can be calculated based on statistical information only once and used as a static hyperparameter for gradient normalization.

Since the purpose of the network according to this embodiment is to learn and keep only the smallest Ŷ^l, the choice of P^l and R^l initialization is important. Therefore, these weight tensors can be initialized by an identity function that maps the non-compressed feature map to a compressed feature map and vice versa, to provide a suitable starting point for training. At the same time, other initializations are possible; e.g., noise sampled from some distribution can be added as well.
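For the 1×1-kernel case, identity initialization of the projection and reconstruction tensors can be sketched as follows. Treating them as plain matrices between channel spaces, and the optional noise perturbation, are our simplifications:

```python
import numpy as np

def identity_init(c_in, c_out, noise_std=0.0, rng=None):
    """Initialize a 1x1 projection P (c_out x c_in) and reconstruction
    R (c_in x c_out) as identity mappings between channel spaces, so that
    R @ P acts as the identity on the first c_out channels.

    noise_std > 0 optionally perturbs the identity, per the remark that
    noise sampled from some distribution can be added as well."""
    rng = rng or np.random.default_rng(0)
    p = np.eye(c_out, c_in)
    r = np.eye(c_in, c_out)
    if noise_std > 0.0:
        p = p + rng.normal(0.0, noise_std, p.shape)
        r = r + rng.normal(0.0, noise_std, r.shape)
    return p, r
```

This gives training a lossless starting point for the channels that survive the projection, after which gradient descent can reshape the mapping.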

[Implementation Example 1]

[Outline]

The binarization and quantization layers described in the embodiment were implemented using SqueezeNet V1.1 and MobileNetV2 as base floating-point network architectures.

[SqueezeNet V1.1]

In this implementation example, the "fire2/squeeze" and "fire3/squeeze" layers, which are the largest of the "squeeze" layers due to their high spatial dimensions, were compressed. The input to the network has a resolution of 227×227, and the weights are all floating-point.

The quantized and compressed models were retrained for 100,000 iterations with a mini-batch size of 1,024 on the ImageNet (ILSVRC2012) training dataset, using a stochastic gradient descent solver with a step-policy learning rate starting from 1e-3 and divided by 10 every 20,000 iterations. Although this large mini-batch size was used by the original model, it helps the quantized and compressed models to estimate gradients as well. The compressed models were derived and retrained iteratively from the 8-bit quantized model. FIG. 4 shows the results of inference accuracies on 50,000 images from an ImageNet validation dataset. The leftmost column indicates model type (fp32: single-precision floating point, uint8: 8-bit unsigned integer array, etc.), and the remaining columns indicate, from left to right: weight data size; activation, i.e., feature map, size; top-most answer (top-1) accuracy; and top five answer (top-5) accuracy. Values outside the parentheses indicate accuracies with retraining, and values inside the parentheses indicate accuracies without retraining.

According to these results, when binary representation conversion is performed and 1×1 convolution kernels are used, accuracy increases in the respective models compared to when binary representation conversion is not performed, although the number of weights slightly increases (top-1 accuracy: 1.0% increase for 6-bit, 2.4% increase for 4-bit). When a 3×3 stride-2 convolution kernel was used, an increase of approximately 47% in weight size for the 6-bit model was observed. That allowed for a decrease in the spatial dimension of feature maps by exploiting local spatial quantization redundancies. The size of feature map activations is then further reduced by a factor of 4 compared to when there is no conversion into binary representation, while top-1 accuracy dropped by 4.3% and 4.6% for the 8-bit and 6-bit models, respectively, compared to fp32.

[MobileNetV2]

The "conv2_1/linear" feature map was compressed. This feature map is nearly three times the size of any other feature map. The same training hyperparameters were used as in the SqueezeNet setup. The number of iterations is 50,000, with a proportional change in the learning rate policy. A ReLU layer was added after "conv2_1/linear" to be compatible with the current implementation of the compression method. Hence, the "conv2_1/linear" feature map includes signed integers in the original model and unsigned integers in the modified one. Note that batch normalization layers were observed to cause some instability in the training process. Accordingly, normalization and scaling parameters were fixed and merged into the weights and biases of the convolution layers. The modified model was then retrained from the original one. FIG. 5 shows a table of MobileNetV2 ImageNet inference accuracy results. The columns correspond to the table in FIG. 4 for the most part. However, the types of models used under the heading "quantized, not converted to binary representation" are models including the signed integers described above (int9: 9-bit signed integer array, etc.).

The quantized models without ReLU after retraining experience -0.3%, -0.3%, and 0.3% top-1 accuracy drops for 9-bit, 7-bit, and 5-bit quantization, respectively, compared to fp32. Note that quantized MobileNetV2 is resilient to smaller bitwidths, with only 0.6% degradation for the 5-bit model compared to the 9-bit model. On the other hand, the ReLU quantized model outperforms all other models in the table in terms of accuracy results.

Models represented in binary and using 1×1 convolution kernels achieved approximately the same scores as conventional methods without binary representation. When 2×2 stride-2 convolution kernels were used, feature maps were compressed by another factor of 2, with around 4.5% accuracy degradation and a 5% increase in weight size.

Although not included in the table, a comparison of the results obtained with a 2×2 stride-2 convolution kernel and with a 3×3 stride-2 convolution kernel showed that the former is superior in terms of both accuracy and data size.

[Implementation Example 2]

[Outline]

In this implementation example, a Pascal VOC dataset was used for object detection, and accuracies were evaluated. More specifically, 4,952 VOC2007 images and a training dataset of 16,551 VOC2007 and VOC2012 images were used. Moreover, an SSD (single shot detector) 512 model was used in execution of the method described in the above embodiment, and SqueezeNet pretrained on ImageNet was used for feature extraction instead of VGG-16. This reduces the number of parameters and the overall inference time by factors of 4 and 3, respectively.

The original VOC images are rescaled to 512×512 resolution. As with the ImageNet implementation example, several models were generated for comparison: a floating-point model, quantized models, and compressed models. Quantization and compression were applied to the "fire2/squeeze" and "fire3/squeeze" layers, which represent, if the fusion technique is applied, more than 80% of the total feature map memory due to their large spatial dimensions. Typically, spatial dimensions decrease quadratically because of max pooling layers, compared to linear growth in the depth dimension. The compressed models are derived from the 8-bit quantized model, and both are retrained for 10,000 mini-batch-256 iterations using an SGD solver with a step-policy learning rate starting from 1e-3 and divided by 10 every 2,500 iterations.

[Results]

FIG. 6 shows a table of the inference accuracy results for this implementation example. From left to right, the columns indicate: model type; weight size; feature map, i.e., activation, size; and mAP (mean average precision).

Among models that are quantized and not converted to binary representation (with retraining), compared to the floating-point model, the 8-bit quantized model decreases accuracy by 0.04%, while the 6-bit, 4-bit, and 2-bit models decrease accuracy by approximately 0.5%, 2.2%, and 12.3%, respectively. Values inside parentheses are reference values for models without retraining.

Among models using a 1×1 convolution kernel that include binary representation conversion, mAP for the 6-bit model increases by approximately 0.5%, and mAP for the 4-bit model decreases by approximately 0.5%.

A model using a 2×2 convolution kernel with stride 2 performs better than the corresponding 3×3 convolution kernel while requiring fewer parameters and computations, exhibiting close to 1% higher mAP.

[Memory Usage]

FIG. 7 shows a table summarizing memory usage in the evaluated SSD models. Note that only the largest feature maps, which represent more than 80% of total activation memory, are considered here.

Assuming that the input frame is stored separately, the fusion technique allows for compression of feature maps by a factor of 19.

Fused and quantized 8-bit and 4-bit fixed-point models decrease the size of feature maps by factors of 4 and 8, respectively.

When the method according to the embodiment described above, which includes binary representation conversion (2×2 stride-2 kernel), is applied, another factor of 2 compression is gained compared to the 4-bit model described above, with only 1.5% degradation in mAP.

In total, the memory usage required for this feature extractor is reduced by two orders of magnitude.

[Conclusion]

As described above, the DNN execution method according to this disclosure is performed by additionally including inference and learning over GF(2) in a conventional DNN method that includes fused layer computation and quantization. Such GF(2) binary representation allows for feature map compression in a higher-dimensional space using autoencoder-inspired layers embedded into a DNN. These compression-decompression layers can be implemented using conventional convolution layers with bitwise operations. More precisely, the method according to the present disclosure trades the cardinality of the finite field for the dimensionality of the vector space, which makes it possible to learn features at the binary level. The compression method for inference according to the present disclosure can be adopted for GPUs, CPUs, or custom accelerators. Alternatively, existing binary neural networks can be extended to achieve higher accuracy for emerging applications such as object detection, among others.

[Other Embodiments]

Hereinbefore, an execution method and a learning method for a DNN according to one or more aspects have been described based on an embodiment, but the present invention is not limited to this embodiment. As long as they do not depart from the essence of the present invention, various modifications to the embodiment conceived by those skilled in the art may be included as one of these aspects.

Moreover, although the embodiment in the present disclosure is described based on execution and learning methods for a DNN, the present invention can be implemented as execution and learning devices for a DNN including functional elements that execute the processes of each layer. Such devices are each implemented as one or more information processing devices, each including, for example, a processor that executes the method and memory for storing the uncompressed feature maps. FIG. 8 is a block diagram illustrating a hardware configuration example of an information processing device according to this embodiment.

As illustrated in FIG. 8, information processing device 100 includes a CPU (central processing unit) 101, main memory 102, storage 103, a communications I/F (interface) 104, and a GPU (graphics processing unit) 105. These elements are connected via a bus, and are capable of sending and receiving data to and from one another.

CPU 101 is a processor that executes a control program stored in, for example, storage 103, and a program for, for example, implementing the DNN execution method described above.

Main memory 102 is a volatile storage area used by CPU 101 as a work area for executing the programs.

Storage 103 is a non-volatile storage area that stores, for example, the programs.

Communications I/F 104 is a communications interface that communicates with external devices via a communications network (not shown in the drawings). For example, when the DNN execution device is implemented as a plurality of information processing devices 100, the sending and receiving of data between information processing devices 100 is performed by communications I/F 104 via the communications network. Communications I/F 104 is, for example, a wired LAN interface. Note that communications I/F 104 may be a wireless LAN interface. Moreover, communications I/F 104 is not limited to a LAN interface; it may be any sort of communications interface capable of communicatively connecting to a communications network.

GPU 105 is, for example, a processor that executes a program for implementing the DNN learning method described above.

The present invention can also be implemented as a program for causing an information processing device including a processor and memory to execute the DNN execution or learning method according to the embodiment, and as a non-transitory recording medium having such a program recorded thereon.

[Industrial Applicability]

The present disclosure is applicable in computer vision applications such as image classification and object detection.

[Reference Signs List]

100 information processing device

101 CPU

102 main memory

103 storage

104 communications I/F

105 GPU