Title:
DISTRIBUTED PROCESSING SYSTEM AND DISTRIBUTED PROCESSING METHOD
Document Type and Number:
WIPO Patent Application WO/2020/085058
Kind Code:
A1
Abstract:
According to the present invention, a distributed processing node 1[1] transmits its distributed data from a communication port 10 to a distributed processing node 1[2] as intermediate aggregate data. Each distributed processing node 1[k] (k = 2, …, N) generates updated intermediate aggregate data from the received intermediate aggregate data and its own distributed data, and transmits the updated intermediate aggregate data from its communication port 10 to a distributed processing node 1[k+] (k+ = k + 1, provided that k+ = 1 when k = N). The distributed processing node 1[1] transmits the intermediate aggregate data received via its communication port 11 back from the communication port 11 to the distributed processing node 1[N] as aggregate data. Each distributed processing node 1[k] transmits the aggregate data received via its communication port 10 from its communication port 11 to the distributed processing node 1[k-1]. Each distributed processing node 1 updates the weights of a neural network on the basis of the aggregate data. As a result, the present invention can perform effective distributed processing when applied to deep learning.
Inventors:
KAWAI KENJI (JP)
KATO JUNICHI (JP)
NGO HUYCU (JP)
ARIKAWA YUKI (JP)
ITO TSUYOSHI (JP)
SAKAMOTO TAKESHI (JP)
Application Number:
PCT/JP2019/039449
Publication Date:
April 30, 2020
Filing Date:
October 07, 2019
Assignee:
NIPPON TELEGRAPH & TELEPHONE (JP)
International Classes:
G06F15/173; G06N3/08
Other References:
SERGEEV, ALEXANDER ET AL.: "Horovod: fast and easy distributed deep learning in TensorFlow", ARXIV, 21 February 2018 (2018-02-21), pages 1 - 10, XP081215801, Retrieved from the Internet [retrieved on 20191127]
Attorney, Agent or Firm:
YAMAKAWA, Shigeki et al. (JP)