Title:
INFERENCE DEVICE, CONVOLUTIONAL COMPUTATION EXECUTION METHOD, AND PROGRAM
Document Type and Number:
WIPO Patent Application WO/2019/082859
Kind Code:
A1
Abstract:
The present invention provides an inference device that reduces the number of multiplications performed in a convolutional layer. The inference device comprises a plurality of processing elements (PEs) and a control unit. By controlling the plurality of PEs, the control unit realizes convolutional computation in a convolutional neural network using a plurality of pieces of input data and a weight group comprising a plurality of weights, each corresponding to one of the pieces of input data. Each of the plurality of PEs executes computation that includes multiplying a single piece of input data by a single weight, and performs the multiplication processing included in the convolutional computation using only those elements of each piece of input data whose value is not zero.
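The abstract describes skipping zero-valued input elements so that only non-zero elements participate in the multiplications of a convolution. The following is a minimal sketch of that general idea, not the patented device: the function name, the single-channel 2-D "valid" cross-correlation formulation, and the use of NumPy are assumptions made purely for illustration.

```python
import numpy as np

def zero_skipping_conv2d(x, w):
    """Single-channel 2-D convolution (valid padding, CNN-style
    cross-correlation) that multiplies only non-zero input elements.

    Illustrative sketch of the zero-skipping idea in the abstract;
    it is not the inference device or PE architecture of the patent.
    """
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    # Iterate only over non-zero input positions; each non-zero element
    # is scattered into every output position it overlaps under the kernel.
    for i, j in zip(*np.nonzero(x)):
        for ki in range(kH):
            for kj in range(kW):
                oi, oj = i - ki, j - kj
                if 0 <= oi < out.shape[0] and 0 <= oj < out.shape[1]:
                    out[oi, oj] += x[i, j] * w[ki, kj]
    return out

if __name__ == "__main__":
    # Sparse input: most elements are zero, so most multiplications are skipped.
    x = np.zeros((5, 5))
    x[1, 2] = 3.0
    x[3, 4] = -1.0
    w = np.arange(9, dtype=float).reshape(3, 3)
    print(zero_skipping_conv2d(x, w))
```

With sparse activations (for example after a ReLU), the loop above performs kH * kW multiplications per non-zero element instead of per input element, which mirrors the reduction in multiplication count that the abstract attributes to the claimed device.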

Inventors:
SHIBATA SEIYA (JP)
Application Number:
PCT/JP2018/039248
Publication Date:
May 02, 2019
Filing Date:
October 22, 2018
Assignee:
NEC CORP (JP)
International Classes:
G06N3/04; G06N3/06; G06V10/764
Foreign References:
CN106447034A (2017-02-22)
JP2009080693A (2009-04-16)
US20160358070A1 (2016-12-08)
Other References:
OHNO, YOSHIYUKI ET AL.: "Optimization of Direct Sparse Convolution on Vector Processor", IPSJ SIG TECHNICAL REPORTS, vol. 2017-ARC-227, no. 16, 19 July 2017 (2017-07-19), pages 1 - 7, ISSN: 2188-8574
Attorney, Agent or Firm:
KATO, Asamichi (JP)