

Title:
BYZANTINE TOLERANT GRADIENT DESCENT FOR DISTRIBUTED MACHINE LEARNING WITH ADVERSARIES
Document Type and Number:
WIPO Patent Application WO/2019/105543
Kind Code:
A1
Abstract:
The present application concerns a computer-implemented method for training a machine learning model in a distributed fashion, using Stochastic Gradient Descent, SGD, wherein the method is performed by a first computer in a distributed computing environment and comprises performing a learning round, comprising broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment, receiving an estimate vector (gradient) from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function, or an erroneous vector, and determining an updated parameter vector for use in a next learning round based only on a subset of the received estimate vectors. The method aggregates the gradients while guaranteeing resilience to up to half of the workers being compromised (malfunctioning, erroneous or modified by attackers).

Inventors:
BLANCHARD PEVA (CH)
EL MHAMDI EL MAHDI (CH)
GUERRAOUI RACHID (CH)
STAINER JULIEN (CH)
Application Number:
PCT/EP2017/080806
Publication Date:
June 06, 2019
Filing Date:
November 29, 2017
Assignee:
ECOLE POLYTECHNIQUE FED LAUSANNE EPFL (CH)
International Classes:
G06N3/08; G06N3/04
Other References:
P. Blanchard et al.: "Byzantine-Tolerant Machine Learning", arXiv preprint, 8 March 2017, XP080755117
P. Blanchard et al.: "Brief Announcement: Byzantine-Tolerant Machine Learning", Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC '17), ACM, 25 July 2017, pages 455-457, XP058369817, ISBN 978-1-4503-4992-5, DOI: 10.1145/3087801.3087861
Y. Chen et al.: "Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent", 15 May 2017, XP055493233, retrieved from the Internet on 18 July 2018
S. Shen et al.: "Auror: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems", Annual Computer Security Applications Conference (ACSAC), ACM, 5 December 2016, pages 508-519, XP058306858, ISBN 978-1-4503-4771-6, DOI: 10.1145/2991079.2991125
M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al.: "TensorFlow: A system for large-scale machine learning", Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016
P. Blanchard, E. M. El Mhamdi, R. Guerraoui, J. Stainer: "Brief announcement: Byzantine-tolerant machine learning", Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC '17), ACM, 2017, pages 455-457
L. Bottou: "Online learning and stochastic approximations", Online Learning in Neural Networks, vol. 17, no. 9, 1998, page 142
L. Bottou: "Large-scale machine learning with stochastic gradient descent", Proceedings of COMPSTAT'2010, Springer, 2010, pages 177-186
M. B. Cohen, Y. T. Lee, G. Miller, J. Pachocki, A. Sidford: "Geometric median in nearly linear time", Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC), ACM, 2016, pages 9-21
J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le et al.: "Large scale distributed deep networks", Advances in Neural Information Processing Systems, 2012, pages 1223-1231
D. L. Donoho, P. J. Huber: "The notion of breakdown point", A Festschrift for Erich L. Lehmann, 1983, pages 157-184
E. M. El Mhamdi, R. Guerraoui, S. Rouault: "On the robustness of a neural network", 2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS), September 2017, pages 84-93, XP033230766, DOI: 10.1109/SRDS.2017.21
E. M. El Mhamdi, R. Guerraoui: "When neurons fail", 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS), May 2017, pages 1028-1037, XP033114012, DOI: 10.1109/IPDPS.2017.66
A. Fawzi, S.-M. Moosavi-Dezfooli, P. Frossard: "Robustness of classifiers: from adversarial to random noise", Advances in Neural Information Processing Systems, 2016, pages 1624-1632
J. Feng, H. Xu, S. Mannor: "Outlier robust online learning", arXiv preprint arXiv:1701.00251, 2017
R. Gemulla, E. Nijkamp, P. J. Haas, Y. Sismanis: "Large-scale matrix factorization with distributed stochastic gradient descent", Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2011, pages 69-77
S. S. Haykin: "Neural networks and learning machines", vol. 3, Pearson, Upper Saddle River, 2009
M. Herlihy, S. Rajsbaum, M. Raynal, J. Stainer: "Computing in the presence of concurrent solo executions", Latin American Symposium on Theoretical Informatics, Springer, 2014, pages 214-225
J. Konečný, B. McMahan, D. Ramage: "Federated optimization: Distributed optimization beyond the datacenter", arXiv preprint arXiv:1511.03575, 2015
J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, D. Bacon: "Federated learning: Strategies for improving communication efficiency", arXiv preprint arXiv:1610.05492, 2016
L. Lamport, R. Shostak, M. Pease: "The Byzantine generals problem", ACM Transactions on Programming Languages and Systems (TOPLAS), vol. 4, no. 3, 1982, pages 382-401, XP058212778, DOI: 10.1145/357172.357176
X. Lian, Y. Huang, Y. Li, J. Liu: "Asynchronous parallel stochastic gradient for nonconvex optimization", Advances in Neural Information Processing Systems, 2015, pages 2737-2745
M. Lichman: UCI Machine Learning Repository, 2013
N. A. Lynch: "Distributed Algorithms", 1996
J. Markoff: "How many computers to identify a cat?", The New York Times, 25 June 2012
B. McMahan, E. Moore, D. Ramage, S. Hampson, B. A. y Arcas: "Communication-efficient learning of deep networks from decentralized data", Artificial Intelligence and Statistics, 2017, pages 1273-1282
H. Mendes, M. Herlihy: "Multidimensional approximate agreement in Byzantine asynchronous systems", Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing (STOC), ACM, 2013, pages 391-400
B. T. Polyak, A. B. Juditsky: "Acceleration of stochastic approximation by averaging", SIAM Journal on Control and Optimization, vol. 30, no. 4, 1992, pages 838-855
F. B. Schneider: "Implementing fault-tolerant services using the state machine approach: A tutorial", ACM Computing Surveys (CSUR), vol. 22, no. 4, 1990, pages 299-319, XP055323065, DOI: 10.1145/98163.98167
R. K. Srivastava, K. Greff, J. Schmidhuber: "Training very deep networks", Advances in Neural Information Processing Systems, 2015, pages 2377-2385
L. Su, N. H. Vaidya: "Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms", Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing (PODC), ACM, 2016, pages 425-434
L. Su, N. H. Vaidya: "Non-Bayesian learning in the presence of Byzantine agents", International Symposium on Distributed Computing, Springer, 2016, pages 414-427
A. Trask, D. Gilmore, M. Russell: "Modeling order in neural word embeddings at scale", ICML, 2015, pages 2266-2275
J. Tsitsiklis, D. Bertsekas, M. Athans: "Distributed asynchronous deterministic and stochastic gradient optimization algorithms", IEEE Transactions on Automatic Control, vol. 31, no. 9, 1986, pages 803-812
B. Wang, J. Gao, Y. Qi: "A theoretical framework for robustness of (deep) classifiers under adversarial noise", arXiv preprint arXiv:1612.00334, 2016
S. Zhang, A. E. Choromanska, Y. LeCun: "Deep learning with elastic averaging SGD", Advances in Neural Information Processing Systems, 2015, pages 685-693
T. Zhang: "Solving large scale linear prediction problems using stochastic gradient descent algorithms", Proceedings of the Twenty-First International Conference on Machine Learning (ICML), ACM, 2004, page 116
Attorney, Agent or Firm:
WEGNER, Hans et al. (DE)
Claims

1. A computer-implemented method for training a machine learning model using Stochastic Gradient Descent, SGD, wherein the method is performed by a first computer in a distributed computing environment and comprises: performing a learning round, comprising: broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment; receiving an estimate vector from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function, or an erroneous vector; and determining an updated parameter vector for use in a next learning round based only on a subset of the received estimate vectors.

2. The method of claim 1, wherein, if the first computer has not received an estimate vector from a given worker computer, the first computer uses a default estimate vector for the given worker computer.

3. The method of claim 1 or 2, wherein determining the updated parameter vector precludes the estimate vectors which have a distance greater than a predefined maximum distance to the other estimate vectors.

4. The method of any of the preceding claims, wherein determining the updated parameter vector comprises computing a score for each worker computer, the score representing the sum of distances, preferably squared distances, of the estimate vector of the worker computer to a predefined number of its closest estimate vectors.

5. The method of the preceding claim, wherein n is the total number of worker computers, f is the number of erroneous worker computers returning an erroneous estimate vector, and the predefined number of closest estimate vectors is n - f - 2.

6. The method of any of the preceding claims 4 or 5, wherein for each worker computer i, the score is computed as s(i) = Σ_{i→j} ‖V_i - V_j‖², wherein the sum runs over the n - f - 2 closest vectors to V_i, wherein n is the total number of worker computers, wherein f is the number of erroneous worker computers returning an erroneous estimate vector, and wherein i → j denotes the fact that an estimate vector V_j belongs to the n - f - 2 closest estimate vectors to V_i.

7. The method of any of the preceding claims 4 or 5, wherein for each worker computer i, the score is computed as s(i) = Σ_{i→j} ‖V_i - V_j‖², wherein the sum runs over the n - f - k closest vectors to V_i, wherein k is a predefined integer that can take values from -f-1 to n-f-1, wherein n is the total number of worker computers, wherein f is the number of erroneous worker computers returning an erroneous estimate vector, and wherein i → j denotes the fact that an estimate vector V_j belongs to the n - f - k closest estimate vectors to V_i.

8. The method of any of the preceding claims 4 or 5, wherein for each worker computer i, the score is computed as s(i) = (1/M)·Σ_{i→j} ‖V_i - V_j‖^a, wherein a is a predefined positive integer and M is a normalization factor, wherein n is the total number of worker computers, wherein f is the number of erroneous worker computers returning an erroneous estimate vector, and wherein i → j denotes the fact that an estimate vector V_j belongs to the n - f - 2 closest estimate vectors to V_i.

9. The method of any of the preceding claims 4-8, further comprising: selecting the estimate vector of the worker computer having the minimal score.

10. The method of the preceding claim, further comprising: if two or more worker computers have the minimal score, selecting the estimate vector of the worker computer having the smallest identifier.

11. The method of any of the preceding claims 4-8, further comprising: selecting two or more of the estimate vectors which have the smallest scores; and computing the average of the selected estimate vectors.

12. The method of the preceding claim, wherein the number of selected estimate vectors is selected to set a trade-off between convergence speed and resilience to erroneous worker computers.

13. The method of the preceding claim, wherein the number of selected estimate vectors is n - f, wherein n is the total number of worker computers and f is the number of erroneous worker computers returning an erroneous estimate vector.

14. The method of any of the preceding claims, wherein determining the updated parameter vector comprises computing the Medoid of the received estimate vectors, or a variant of the Medoid comprising minimizing the sum of non-squared distances over a subset of neighbors of a predetermined size.

15. The method of any of the preceding claims, wherein determining the updated parameter vector comprises computing the average of the received estimate vectors with a probability p or selecting the received estimate vector that minimizes the sum of squared distances to a predetermined number of closest estimate vectors with a probability 1 - p, wherein p decreases with each learning round.

16. The method of any of the preceding claims, wherein the machine learning model comprises a neural network, regression, matrix factorization, support vector machine and/or any gradient-based optimizable learning model.

17. The method of any of the preceding claims, wherein the method is used for training a spam filter, email filtering, recommender system, natural language processing, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), computer vision, pattern recognition, image classification and/or artificial intelligence.

18. A computer in a distributed computing environment, adapted for performing a method in accordance to any of the preceding claims.

19. A computer in a distributed computing environment for training a machine learning model using Stochastic Gradient Descent, SGD, wherein the computer comprises: a processor configured for performing a learning round, comprising: broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment; receiving an estimate vector from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function, or an erroneous vector; and determining an updated parameter vector for use in a next learning round based only on a subset of the received estimate vectors.

20. A distributed computing environment, comprising: a first computer according to claim 18 or 19; and a plurality of worker computers.

21. A computer program comprising instructions for implementing a method in accordance with any of the claims 1-17.

22. A non-transitory computer-readable medium comprising code that, when executed, causes a first computer of a distributed computing environment to: perform a learning round, comprising: broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment; receiving an estimate vector from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function, or an erroneous vector; and determining an updated parameter vector for use in a next learning round based only on a subset of the received estimate vectors.

Description:
Byzantine tolerant gradient descent for distributed machine learning with adversaries

1. Technical Field

The present invention generally relates to the field of machine learning and distributed implementations of stochastic gradient descent, and more particularly to a method for training a machine learning model, as well as systems and computer programs for carrying out the method.

2. The prior art

Nowadays, machine learning is increasingly popular for computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible, e.g. in applications such as email filtering, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), computer vision, pattern recognition and artificial intelligence. Generally speaking, machine learning employs algorithms that can learn from and make predictions on data, thereby giving computers the ability to learn without being explicitly programmed.

The increasing amount of data available (see reference [6]) together with the growing complexity of machine learning models (see reference [27]) has led to learning schemes that require vast amounts of computational resources. As a consequence, many industry-grade machine learning implementations are nowadays distributed among multiple, i.e. possibly thousands of, computers (see reference [1]). For example, as of 2012, Google reportedly used 16,000 processors to train an image classifier (see reference [22]). More recently, attention has been given to federated learning and federated optimization settings with a focus on communication efficiency (see references [15, 16, 23]).

However, distributing a computation over several machines (so-called worker processes) induces a higher risk of failures. Such failures include crashes and computation errors, stalled processes, biases in the way the data samples are distributed among the processes, but also attackers trying to compromise the entire system. Therefore, systems are needed that are robust enough to tolerate so-called "Byzantine" failures, i.e., completely arbitrary behaviors of some of the processes (see reference [17]).

One approach to mask failures in distributed systems is to use a state machine replication protocol (see reference [26]), which requires, however, state transitions to be applied by all worker processes. In the case of distributed machine learning, this constraint can be translated in two ways: either (a) the processes agree on a sample of data based on which they update their local parameter vectors, or (b) they agree on how the parameter vector should be updated. In case (a), the sample of data has to be transmitted to each process, which then has to perform a heavyweight computation to update its local parameter vector. This entails communication and computational costs that defeat the entire purpose of distributing the work. In case (b), the processes have no way to check if the chosen update for the parameter vector has indeed been computed correctly on real data. In other words, a Byzantine process could have proposed the update and may easily prevent the convergence of the learning algorithm. Therefore, neither of these solutions is satisfactory in a realistic distributed machine learning setting.

Many learning algorithms today rely on Stochastic Gradient Descent (SGD) (see references [4, 13]). SGD may be used e.g. for training neural networks (see reference [13]), regression (see reference [34]), matrix factorization (see reference [12]) and/or support vector machines (see reference [34]). Typically, a cost function - depending on a parameter vector - is minimized based on stochastic estimates of its gradient.

Distributed implementations of SGD (see reference [33]) typically take the following form: A single parameter server is in charge of updating the parameter vector, while worker processes perform the actual update estimation, based on the share of data they have access to. More specifically, the parameter server executes learning rounds, during each of which the parameter vector is broadcast to the workers. In turn, each worker computes an estimate of the update to apply (an estimate of the gradient), and the parameter server aggregates all results to finally update the parameter vector.

Nowadays, this aggregation is typically implemented through averaging (see reference [25]), or variants of averaging (see references [33, 18, 31]).

Until now, distributed machine learning frameworks have largely ignored the possibility of failures, especially of arbitrary (i.e., byzantine) ones. Causes of such failures may include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. In particular, the inventors have observed that no linear combination of the updates proposed by the workers (such as averaging according to the currently known approaches) can tolerate a single Byzantine worker. Basically, a single Byzantine worker can force the parameter server to choose any arbitrary vector, even one that is too large in amplitude or too far in direction from the other vectors. This way, the Byzantine worker can prevent any classic averaging-based approach to converge, so that the distributed system delivers an incorrect result, or stalls or crashes completely in the worst case.

As an alternative to linear averaging, a non-linear, e.g. squared-distance-based, aggregation rule that selects among the proposed vectors the vector "closest to the barycenter" (e.g. by taking the vector that minimizes the sum of the squared distances to every other vector) might look appealing. Yet, such a squared-distance-based aggregation rule tolerates only a single Byzantine worker. Two Byzantine workers can collude, one helping the other to be selected, by moving the barycenter of all the vectors farther from the "correct area".

Still another alternative would be a majority-based approach which looks at every subset of n - f vectors (n being the overall number of workers and f being the number of Byzantine workers to be tolerated), and considers the subset with the smallest diameter. While this approach is more robust to Byzantine workers that propose vectors far from the correct area, its exponential computational cost is prohibitive, which makes this method unfeasible in practice.

In summary, all known distributed machine learning implementations are either not fault tolerant at all, i.e. they output incorrect results in the presence of workers delivering erroneous vectors, or they are very inefficient in terms of computational cost.

It is therefore the technical problem underlying the present invention to provide a distributed machine learning implementation which is both fault tolerant, i.e. which delivers correct results even in the presence of arbitrarily erroneous workers, and efficient, i.e. less computationally intensive, thereby at least partly overcoming the above explained disadvantages of the prior art.

3. Summary of the invention

This problem is solved by the present invention as defined in the independent claims. Advantageous modifications of embodiments of the invention are defined in the dependent claims. In one embodiment, a computer-implemented method for training a machine learning model using Stochastic Gradient Descent (SGD) is provided. The method is performed by a first computer in a distributed computing environment and may comprise performing a learning round comprising broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment, and receiving an estimate vector from all or a subset of the worker computers. Each received estimate vector may either be an estimate of a gradient of a cost function (if it is delivered by a correct worker), or an erroneous vector (if it is delivered by an erroneous, i.e. "Byzantine", worker). If the first computer has not received an estimate vector from a given worker computer, the first computer may use a default estimate vector for that given worker computer. The method further determines an updated parameter vector for use in a next learning round.

The determination may be based only on a subset of the received estimate vectors. This way, the method differs from the above-explained averaging approach which takes into account all vectors delivered by the workers, even the erroneous ones. In contrast to this known approach, the present method operates only on a subset of the received estimate vectors. This greatly improves the fault tolerance while leading to a particularly efficient implementation, as will be explained further below.

In one aspect, determining the updated parameter vector precludes the estimate vectors which have a distance greater than a predefined maximum distance to the other estimate vectors. Accordingly, the first computer does not aggregate all estimate vectors received from the worker computers (as in the known averaging approach), but disregards vectors that are too far away from the other vectors. Since such outliers are erroneous with a high likelihood, this greatly improves the fault tolerance of the method in an efficient manner.

Determining the updated parameter vector may comprise computing a score for each worker computer, the score representing the sum of (squared) distances of the estimate vector of the worker computer to a predefined number of its closest estimate vectors.

Preferably, n is the total number of worker computers, f is the number of erroneous worker computers returning an erroneous estimate vector, and the predefined number of closest estimate vectors is n - f - 2.

Accordingly, the method advantageously combines the intuitions of the above-explained majority-based and squared-distance-based methods, so that the first computer selects the vector that is somehow the closest to its n - f neighbours, namely the estimate vector that minimizes the sum of squared distances to its n - f - 2 closest vectors.

This way, the method satisfies a strong resilience property capturing sufficient conditions for the first computer’s aggregation rule to tolerate Byzantine workers.

Essentially, to guarantee that the cost will decrease despite Byzantine workers, the vector output chosen by the first computer should (a) point, on average, in the same direction as the gradient and (b) have statistical moments (preferably up to the fourth moment) bounded above by a homogeneous polynomial in the moments of a correct estimator of the gradient. It will be shown further below that the present method satisfies this resilience property and that the corresponding machine learning scheme converges.

A further important advantage of the method is its (local) time complexity of O(n²·d), i.e. linear in the dimension of the gradient, where d is the dimension of the parameter vector (note that in modern machine learning, the dimension d of the parameter vector may take values in the hundreds of billions; see reference [30]).

Preferably, the score may be computed for each worker computer i as s(i) = Σ_{i→j} ‖V_i - V_j‖², wherein the sum runs over the n - f - 2 closest vectors to V_i, wherein n is the total number of worker computers, wherein f is the number of erroneous worker computers returning an erroneous estimate vector, and wherein i → j denotes the fact that an estimate vector V_j belongs to the n - f - 2 closest estimate vectors to V_i.
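For the sake of illustration only, the following Python sketch computes this score for all workers at once (the function name, the NumPy array layout with one estimate vector per row, and the dense pairwise-distance computation are assumptions of the sketch, not features recited in the claims):

```python
# Minimal sketch: compute the score s(i) for each worker, assuming the
# estimate vectors are the rows of an (n x d) NumPy array and n - f - 2 >= 1.
import numpy as np

def krum_scores(V: np.ndarray, f: int) -> np.ndarray:
    n = V.shape[0]
    m = n - f - 2  # number of closest neighbours entering the score
    # Pairwise squared Euclidean distances between estimate vectors.
    sq_dists = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(sq_dists, np.inf)  # a vector is not its own neighbour
    # For each worker i, sum the m smallest squared distances.
    closest = np.sort(sq_dists, axis=1)[:, :m]
    return closest.sum(axis=1)
```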

Alternatively, the score may be computed for each worker computer i as s(i) = Σ_{i→j} ‖V_i - V_j‖², wherein the sum runs over the n - f - k closest vectors to V_i, wherein k is a predefined integer that can take values from -f-1 to n-f-1, wherein n is the total number of worker computers, wherein f is the number of erroneous worker computers returning an erroneous estimate vector, and wherein i → j denotes the fact that an estimate vector V_j belongs to the n - f - k closest estimate vectors to V_i. This way, the designer can initially set the parameter k to 2 if the suspicion on f malicious workers is strong. Note that this alternative differs from the one described above essentially only in that the parameter k is set to 2 in the first alternative. This setting of k = 2 has the benefit of being provably Byzantine-resilient (as will be shown further below) even when no knowledge is provided on the suspected f machines. If evidence is provided that only part of those f machines are to be strongly suspected, while the others are only weakly suspected (for instance when some lack a critical operating system (OS) security update while the others only missed some minor security update), then k can be made smaller (and negative) to ultimately reach -f-1 when only one machine is strongly suspected. Conversely, when strong suspicion is there, the designer can prefer relying only on the very closest neighbours of i and set k to higher (positive) values, to ultimately rely only on the closest neighbour.

Moreover, the score may be computed for each worker computer i as s(i) = (1/M)·Σ_{i→j} ‖V_i - V_j‖^a, wherein a is a predefined positive integer and M is a normalization factor, wherein n is the total number of worker computers, wherein f is the number of erroneous worker computers returning an erroneous estimate vector, and wherein i → j denotes the fact that an estimate vector V_j belongs to the n - f - 2 closest estimate vectors to V_i. This way, a can be set to 1 when a median-based solution is desired (note that when a = 1 and k = -f-1, the method becomes exactly the Medoid, while a = 1 and k = 2 corresponds to what can be seen as a Medoid-like method, "Medoid-Krum" (see further below)). Alternatively, when a solution that is based on higher order moments is the goal, a can be set to take values higher than 1: for instance, a = 2 (the standard method, "Krum") corresponds to controlling the growth of the second moment (variance) between the aggregated gradient and the real gradient, and similarly, a = m with m larger than or equal to 3 will lead to a better control of the m-th order moment.

The present invention provides various ways of aggregating the estimate vectors received from the worker computers, all of which will be described in more detail further below:

For example, the method may comprise selecting the estimate vector of the worker computer having the minimal score. This approach is also referred to herein as "Krum". If two or more worker computers have the minimal score, the method may select the estimate vector of the worker computer having the smallest identifier. As another example, the method may select two or more of the estimate vectors which have the smallest scores, and compute the average of the selected estimate vectors. This approach is also referred to herein as "Multi-Krum". The number of selected estimate vectors may be selected to set a trade-off between convergence speed and resilience to erroneous worker computers. For example, the number of selected estimate vectors may be n - f, wherein n is the total number of worker computers and f is the number of erroneous worker computers returning an erroneous estimate vector.

In yet another example, determining the updated parameter vector may comprise computing the Medoid of the received estimate vectors, or a variant of the Medoid comprising minimizing the sum of non-squared distances over a subset of neighbors of a predetermined size. This approach is also referred to herein as "Medoid-Krum".

In still another example, determining the updated parameter vector may comprise computing the average of the received estimate vectors with a probability p, or selecting the received estimate vector that minimizes the sum of squared distances to a predetermined number of closest estimate vectors with a probability 1 - p, wherein p decreases with each learning round. This approach is also referred to herein as "1-p Krum".

Generally, the machine learning model may comprise a neural network, regression, matrix factorization, support vector machine and/or any gradient-based optimizable learning model. The method may be used for any computation-intensive task, such as without limitation training a spam filter, email filtering, recommender system, natural language processing, detection of network intruders or malicious insiders working towards a data breach, optical character recognition (OCR), computer vision, pattern recognition, image classification and/or artificial intelligence.

The present invention is also directed to a computer in a distributed computing environment, wherein the computer is adapted for performing any of the methods disclosed herein. For example, a computer is provided in a distributed computing environment for training a machine learning model using Stochastic Gradient Descent (SGD), wherein the computer comprises a processor configured for performing a learning round, the learning round comprising broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment, receiving an estimate vector from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function, or an erroneous vector, and determining an updated parameter vector for use in a next learning round based only on a subset of the received estimate vectors.

The invention also concerns a distributed computing environment comprising a first computer as disclosed above and a plurality of worker computers.

Lastly, a computer program and non-transitory computer-readable medium is provided for implementing any of the methods disclosed herein.

4. Short description of the drawings

In the following detailed description, presently preferred embodiments of the invention are further described with reference to the following figures:

Fig. 1: Gradient estimates computed by correct and byzantine workers around an actual gradient;

Fig. 2: A geometric representation of estimates computed by correct and byzantine workers around an actual gradient;

Fig. 3: A geometric representation of a convergence analysis;

Fig. 4: A cross-validation error evolution with rounds, respectively in the absence and in the presence of 33% byzantine workers;

Fig. 5: A cross-validation error at round 500 when increasing the mini-batch size;

Fig. 6: A cross-validation error evolution with rounds;

Fig. 7: A comparison of averaging aggregation with 0% byzantine workers to Multi-Krum facing 45% byzantine workers;

Fig. 8: Test accuracy at round 500 as a function of the mini-batch size, for an averaging aggregation with 0% byzantine workers versus Multi-Krum facing 45% byzantine workers;

Fig. 9: An evolution of cross-validation accuracy with rounds for the different aggregation rules in the absence of byzantine workers;

Fig. 10: An evolution of cross-validation accuracy with rounds for the different aggregation rules in the presence of 33% Gaussian byzantine workers.

5. Detailed description of preferred embodiments

In the following, presently preferred embodiments of the invention are described with respect to methods and systems for machine learning in a distributed computing environment. In particular, an aggregation rule also referred to herein as "Krum" (i.e. a way how the parameter server may process the estimate vectors received from the worker computers) is provided, which satisfies the (α, f)-Byzantine resilience condition defined further below.

For simplicity of presentation, a version of Krum which only selects one vector among the vectors provided by the worker computers will be presented first. Other variants of Krum will be disclosed herein as well. These variants comprise "Multi-Krum", which interpolates between Krum and averaging, thereby allowing to mix the resilience properties of Krum with the convergence speed of averaging. Furthermore, these variants comprise "Medoid-Krum", which is inspired by the geometric median. Lastly, these variants comprise "1-p Krum", wherein the average of proposed vectors is chosen with probability p and Krum is chosen with probability 1 - p. Furthermore, as p is inversely proportional to the number of learning rounds, with an increasing number of learning rounds the probability of choosing the average converges to 0, whereas the probability of choosing Krum converges to 1.

General system overview and computation flow

The distributed computing environment comprises a first computer (also referred to herein as "parameter server") and n worker computers (also referred to herein as "workers"), similar to the general distributed system model of reference [1]. Preferably, the parameter server is assumed to be reliable. Known techniques such as state-machine replication can be used to ensure such reliability. A portion f of the workers are possibly "Byzantine", i.e. they may deliver erroneous and/or arbitrary results (e.g. due to malfunctioning, being erroneous or being modified by an attacker).

Computation is preferably divided into (theoretically infinitely many) rounds. During round t, the parameter server broadcasts its parameter vector x_t to all the workers. Each correct worker p computes an estimate V_p^t = G(x_t, ξ_p^t) of the gradient ∇Q(x_t) of the cost function Q, where ξ_p^t is a random variable representing, e.g., the sample (or a mini-batch of samples) drawn from the dataset. A Byzantine worker b proposes a vector V_b^t which can deviate arbitrarily from the vector it is supposed to send if it was correct, i.e., according to the algorithm assigned to it by the system developer, as illustrated in Fig. 1.

More precisely, Fig. 1 illustrates that the gradient estimates computed by correct workers (dashed arrows) are distributed around the actual gradient (solid arrow) of the cost function (curved line). A Byzantine worker can propose an arbitrary vector (dotted isolated arrow).

The communication between the parameter server and the worker computers is preferably synchronous. If the parameter server does not receive a vector value V_b^t from a given Byzantine worker b, then the parameter server may act as if it had received a default value instead.

The parameter server preferably computes a vector F(V_1^t, ..., V_n^t) by applying a deterministic function F to the vectors received. F is also referred to herein as the "aggregation rule" of the parameter server. The parameter server preferably updates the parameter vector using the following SGD equation:

x_{t+1} = x_t - γ_t·F(V_1^t, ..., V_n^t)

The correct (non-Byzantine) workers are assumed to compute unbiased estimates of the gradient ∇Q(x_t). More precisely, in every round t, the vectors V_p^t proposed by the correct workers are preferably independent identically distributed random vectors, V_p^t ~ G(x_t, ξ) with E_ξ G(x_t, ξ) = ∇Q(x_t). This may be achieved by ensuring that each sample of data used for computing the gradient is drawn uniformly and independently (see reference [3]).
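For the sake of illustration, the following Python sketch shows one such learning round on the parameter server side (the names `workers`, `estimate_gradient` and `aggregate`, the zero default vector and the learning-rate schedule are assumptions of the sketch, not part of the claimed method):

```python
# Illustrative sketch of one learning round: broadcast x_t, collect estimate
# vectors, aggregate them, and apply the SGD update x_{t+1} = x_t - gamma_t * F.
import numpy as np

def learning_round(x, t, workers, aggregate, gamma=lambda t: 0.1 / (1 + t)):
    estimates = []
    for w in workers:                 # broadcast x_t and collect estimates
        v = w.estimate_gradient(x)    # may be erroneous for a Byzantine worker
        # Default vector when nothing is received (zero is an assumption here).
        estimates.append(v if v is not None else np.zeros_like(x))
    F = aggregate(np.stack(estimates))  # e.g. the Krum rule sketched further below
    return x - gamma(t) * F             # updated parameter vector for round t+1
```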

The Byzantine workers preferably have full knowledge of the system, including the aggregation rule F and/or the vectors proposed by the workers. They may furthermore collaborate with each other (see reference [21]).

As an alternative to the above-described client/server environment, embodiments of the present invention may also be realized in a peer-to-peer environment where some or all of the correct workers act as a server. For example, each worker may have a copy of the parameter vector that it updates by aggregating the other workers' gradients. In each round, each worker first broadcasts its gradient, then collects the other workers' gradients. On those collected gradients (including its own), the worker applies the Krum aggregation rule (or any variant thereof as disclosed herein) as if the worker was the server, then updates its local copy of the parameter vector, as sketched below.
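The following minimal sketch illustrates such a peer-to-peer round (the primitives `broadcast` and `collect` and the `krum` helper are assumptions of the sketch, not prescribed by the present disclosure):

```python
# A hedged sketch of the peer-to-peer variant: every worker plays the
# server's role on the gradients it collects.
def p2p_round(x_local, t, my_gradient, broadcast, collect, krum, gamma):
    broadcast(my_gradient(x_local))          # step 1: send own gradient
    grads = collect()                        # step 2: gather all gradients (incl. own)
    return x_local - gamma(t) * krum(grads)  # step 3: aggregate and update local copy
```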

The meaning of the terms "parameter vector", "cost function", "sample/mini-batch drawn from the dataset", "estimate" (of the gradient) computed by each worker and "updated parameter vector" should be apparent to those skilled in the art of machine learning and SGD. However, for the sake of illustration, consider the following use-case:

A deep learning solution is deployed and trained over several computers. The parameter vector is a vector comprising all the synaptic weights and the internal parameters of the deep learning model. The cost function is any measure of the deviation between what the deep learning model predicts, and what the computers actually observe (for example users’ behavior in a recommender system versus what the recommender system predicted, or correct tagging of photos versus what the tagging system tagged in photos of a social network, or correct translations uploaded by users versus predicted translations of the model, etc.). The sample/mini-batch drawn from the dataset is a set of observed data on which the cost function and its estimated gradient can be computed, by comparing observation to prediction. Using the aggregation of the estimated gradients the parameter vector can be updated by decrementing the previous parameter vector in the direction suggested by the gradient. This way, the model achieves smaller error between observation and prediction, as evaluated by the loss function.

Byzantine resilience

As already explained in the background section further above, most known SGD-based learning algorithms (see references [4, 13, 12]) employ an aggregation rule which computes the average (or a closely related rule) of the input vectors. Lemma 1 below states that no linear combination of the vectors can tolerate a single Byzantine worker. In particular, averaging is not Byzantine resilient.

Lemma 1. Consider an aggregation rule F_lin of the form

F_lin(V_1, ..., V_n) = Σ_{i=1}^{n} λ_i·V_i,

where the λ_i are non-zero scalars. Let U be any vector in R^d. A single Byzantine worker can make F_lin always select U. In particular, a single Byzantine worker can prevent convergence.

Proof. Immediate: if the Byzantine worker proposes

V_n = (1/λ_n)·U - Σ_{i=1}^{n-1} (λ_i/λ_n)·V_i,

then F_lin = U. Note that the parameter server could cancel the effects of the Byzantine behavior by setting, e.g., λ_n to 0. This, however, requires means to detect which worker is Byzantine.
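For the sake of illustration, the construction in the proof can be checked numerically; the sketch below assumes plain averaging, i.e. λ_i = 1/n, which is a special case of Lemma 1:

```python
# Numerical illustration of Lemma 1: with averaging (lambda_i = 1/n), one
# Byzantine worker steers the aggregated vector to any target U it chooses.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
V = rng.normal(size=(n, d))            # n - 1 correct vectors + 1 Byzantine slot
U = np.array([1e6, -1e6, 42.0])        # arbitrary target chosen by the attacker
V[-1] = n * U - V[:-1].sum(axis=0)     # Byzantine proposal from the proof of Lemma 1
assert np.allclose(V.mean(axis=0), U)  # the aggregated vector is exactly U
```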

In the following, we define basic requirements on an appropriate Byzantine-resilient aggregation rule. Intuitively, the aggregation rule should output a vector F that is not too far from the "real" gradient g, more precisely, the vector that points to the steepest direction of the cost function being optimized. This can be expressed as a lower bound (condition (i)) on the scalar product of the (expected) vector F and g. Fig. 2 illustrates this geometrically: if E F belongs to the ball centered at g with radius r, then the scalar product ⟨E F, g⟩ is bounded below by a term involving sin α = r/‖g‖.

Fig. 2 illustrates that if ‖E F - g‖ ≤ r, then ⟨E F, g⟩ is bounded below by (1 - sin α)·‖g‖², where sin α = r/‖g‖.

It will be appreciated that the norm of a vector, denoted ‖...‖, may appear as P...P in certain formulas. Both notations shall be considered to have the same meaning.

Condition (ii) is more technical, and states that the moments of F should be controlled by the moments of the (correct) gradient estimator G. The bounds on the moments of G are classically used to control the effects of the discrete nature of the SGD dynamics (see reference [3]). Condition (ii) allows to transfer this control to the aggregation rule.

Definition 1 ((α, f)-Byzantine Resilience). Let 0 ≤ α < π/2 be any angular value, and let 0 ≤ f ≤ n be any integer. Let V_1, ..., V_n be any independent identically distributed random vectors in R^d, V_i ~ G, with E G = g. Let B_1, ..., B_f be any random vectors in R^d, possibly dependent on the V_i's. An aggregation rule F is said to be (α, f)-Byzantine resilient if, for any 1 ≤ j_1 < ... < j_f ≤ n, the vector

F = F(V_1, ..., B_1, ..., B_f, ..., V_n)

satisfies (i) ⟨E F, g⟩ ≥ (1 - sin α)·‖g‖² > 0 and (ii) for r = 2, 3, 4, E ‖F‖^r is bounded above by a linear combination of terms E ‖G‖^{r_1} ··· E ‖G‖^{r_{n-1}} with r_1 + ... + r_{n-1} = r.

The Krum function

The barycentric aggregation rule can be defined as the vector F_bary = argmin_{x ∈ R^d} Σ_i ‖x - V_i‖² that minimizes the sum of squared distances to the V_i (note that removing the square of the distances leads to the geometric median, which will be discussed further below in certain Krum variants). Lemma 1 above, however, states that this approach does not tolerate even a single Byzantine failure.

One could try to select the vector U among the V_1, ..., V_n which minimizes the sum Σ_i ‖U - V_i‖², i.e., which is "closest to all vectors". However, because such a sum involves all the vectors, even those which are very far, this approach does not tolerate Byzantine workers: by proposing large enough vectors, a Byzantine worker can force the total barycenter to get closer to the vector proposed by another Byzantine worker.

To overcome this problem, one aspect of the present invention is to preclude the vectors that are too far away. More precisely, the Krum aggregation rule Kr(V_1, ..., V_n) may be defined as follows. For any i ≠ j, we denote by i → j the fact that V_j belongs to the n - f - 2 closest vectors to V_i. Then, we define for each worker i the score

s(i) = Σ_{i→j} ‖V_i - V_j‖²,

where the sum runs over the n - f - 2 closest vectors to V_i. Finally, Kr(V_1, ..., V_n) = V_{i*}, where i* refers to the worker minimizing the score, s(i*) ≤ s(i) for all i. If two or more workers have the minimal score, we choose the one with the smallest identifier.

Lemma 2. The expected time complexity of the Krum function Kr(V_1, ..., V_n), where V_1, ..., V_n are d-dimensional vectors, is O(n²·d).

Proof. For each V_i, the parameter server computes the n squared distances ‖V_i - V_j‖² (time O(n·d)). Then the parameter server selects the first n - f - 2 of these distances (expected time O(n) with Quickselect) and sums their values (time O(n)). Thus, computing the score of all the V_i takes O(n²·d). An additional term O(n) is required to find the minimum score, but it is negligible relative to O(n²·d).
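For the sake of illustration, a compact Python sketch of the Krum rule just defined is given below (an illustration under the stated assumptions, not a normative implementation; the array layout is one estimate vector per row):

```python
# Sketch of the Krum rule Kr. np.partition provides the Quickselect-style
# selection mentioned in the proof of Lemma 2, giving O(n^2 * d) overall.
import numpy as np

def krum(V: np.ndarray, f: int) -> np.ndarray:
    n = V.shape[0]
    assert 2 * f + 2 < n, "Krum requires n > 2f + 2"
    m = n - f - 2
    sq_dists = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(sq_dists, np.inf)   # exclude each vector from its own neighbours
    scores = np.partition(sq_dists, m - 1, axis=1)[:, :m].sum(axis=1)
    i_star = int(np.argmin(scores))      # argmin keeps the smallest index on ties
    return V[i_star]
```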

Proposition 1 below states that, if 2f + 2 < n and the gradient estimator is accurate enough (i.e. its standard deviation is relatively small compared to the norm of the gradient), then the Krum function is (α, f)-Byzantine resilient, where the angle α depends on the ratio of the deviation over the gradient.

Proposition 1. Let V_1, ..., V_n be any independent and identically distributed random d-dimensional vectors s.t. V_i ~ G, with E G = g and E ‖G - g‖² = d·σ². Let B_1, ..., B_f be any f random vectors, possibly dependent on the V_i's. If 2f + 2 < n and η(n, f)·√d·σ < ‖g‖, where

η(n, f) = √( 2·( n - f + ( f·(n - f - 2) + f²·(n - f - 1) ) / ( n - 2f - 2 ) ) ),

then the Krum function Kr is (α, f)-Byzantine resilient, where 0 ≤ α < π/2 is defined by

sin α = η(n, f)·√d·σ / ‖g‖.
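For the sake of illustration, the resilience condition of Proposition 1 can be evaluated numerically; the following is a minimal sketch, assuming the expression for η(n, f) reconstructed above (function names are illustrative):

```python
# Sketch: check the condition eta(n, f) * sqrt(d) * sigma < ||g|| of Prop. 1.
import math

def eta(n: int, f: int) -> float:
    assert 2 * f + 2 < n
    return math.sqrt(2 * (n - f + (f * (n - f - 2) + f ** 2 * (n - f - 1)) / (n - 2 * f - 2)))

def krum_is_resilient(n: int, f: int, d: int, sigma: float, grad_norm: float) -> bool:
    return eta(n, f) * math.sqrt(d) * sigma < grad_norm
```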

The condition on the norm of the gradient, η(n, f)·√d·σ < ‖g‖, can be satisfied, to a certain extent, by having the (correct) workers compute their gradient estimates on mini-batches (see reference [3]). Indeed, averaging the gradient estimates over a mini-batch divides the deviation σ by the square root of the size of the mini-batch.

Proof. (Sketch) Without loss of generality, we assume that the Byzantine vectors B_1, ..., B_f occupy the last f positions in the list of arguments of Kr, i.e., Kr = Kr(V_1, ..., V_{n-f}, B_1, ..., B_f). Let i* be the index of the vector chosen by the Krum function. We focus on condition (i) of (α, f)-Byzantine resilience (Definition 1).

Consider first the case where V_{i*} is a vector proposed by a correct process. The first step is to compare the vector V_{i*} with the average of the correct vectors V_j such that i* → j. Denoting by δ_c(i*) the number of such j,

E ‖V_{i*} - (1/δ_c(i*))·Σ_{i*→ correct j} V_j‖² ≤ (1/δ_c(i*))·Σ_{i*→ correct j} E ‖V_{i*} - V_j‖² ≤ 2·d·σ².    (1)

The last inequality holds because the right-hand side of the first inequality involves only vectors proposed by correct processes, which are mutually independent and follow the distribution of G.

Now, consider the case where V_{i*} = B_k is a vector proposed by a Byzantine process. The fact that k minimizes the score implies that, for all indices i of vectors proposed by correct processes,

Σ_{k→ correct j} ‖B_k - V_j‖² + Σ_{k→ byz. l} ‖B_k - B_l‖² ≤ Σ_{i→ correct j} ‖V_i - V_j‖² + Σ_{i→ byz. l} ‖V_i - B_l‖².    (2)

The term Σ_{i→ byz. l} ‖V_i - B_l‖² is the only term involving vectors proposed by Byzantine processes. However, the correct process i has n - f - 2 neighbors and f + 1 non-neighbors. Therefore, there exists a correct process ζ(i) which is farther from i than every neighbor j of i (including the Byzantine neighbors). In particular, for all l such that i → l, ‖V_i - B_l‖² ≤ ‖V_i - V_{ζ(i)}‖².

Combining equations (1), (2) and a union bound yields E ‖Kr - g‖² ≤ η²(n, f)·d·σ², which, together with the assumption η(n, f)·√d·σ < ‖g‖, in turn implies ⟨E Kr, g⟩ ≥ (1 - sin α)·‖g‖². Condition (ii) is proven by bounding the moments of Kr with moments of the vectors proposed by the correct processes only, using the same technique as above.

The full proof is as follows:

Proof. Without loss of generality, we assume that the Byzantine vectors B_1, ..., B_f occupy the last f positions in the list of arguments of Kr, i.e., Kr = Kr(V_1, ..., V_{n-f}, B_1, ..., B_f). An index is correct if it refers to a vector among V_1, ..., V_{n-f}. An index is Byzantine if it refers to a vector among B_1, ..., B_f. For each index (correct or Byzantine) i, we denote by δ_c(i) (resp. δ_b(i)) the number of correct (resp. Byzantine) indices j such that i → j. We have

δ_c(i) + δ_b(i) = n - f - 2,
n - 2f - 2 ≤ δ_c(i) ≤ n - f - 2,
δ_b(i) ≤ f.

We focus first on condition (i) of (α, f)-Byzantine resilience. We determine an upper bound on the squared distance ‖E Kr - g‖². Note that, for any correct index j, E V_j = g. We denote by i* the index of the vector chosen by the Krum function. Then

‖E Kr - g‖² ≤ E ‖Kr - (1/δ_c(i*))·Σ_{i*→ correct j} V_j‖²
≤ Σ_{correct i} E ‖V_i - (1/δ_c(i))·Σ_{i→ correct j} V_j‖²·I(i* = i) + Σ_{byz. k} E ‖B_k - (1/δ_c(k))·Σ_{k→ correct j} V_j‖²·I(i* = k),

where I denotes the indicator function (I(P) equals 1 if the predicate P is true, and 0 otherwise). We examine the case i* = i for some correct index i:

‖V_i - (1/δ_c(i))·Σ_{i→ correct j} V_j‖² ≤ (1/δ_c(i))·Σ_{i→ correct j} ‖V_i - V_j‖², whence E ‖V_i - (1/δ_c(i))·Σ_{i→ correct j} V_j‖² ≤ 2·d·σ².

We now examine the case i* = k for some Byzantine index k. The fact that k minimizes the score implies that for all correct indices i

Σ_{k→ correct j} ‖B_k - V_j‖² + Σ_{k→ byz. l} ‖B_k - B_l‖² ≤ Σ_{i→ correct j} ‖V_i - V_j‖² + Σ_{i→ byz. l} ‖V_i - B_l‖².

Then, for all correct indices i

‖B_k - (1/δ_c(k))·Σ_{k→ correct j} V_j‖² ≤ (1/δ_c(k))·Σ_{k→ correct j} ‖B_k - V_j‖² ≤ (1/δ_c(k))·Σ_{i→ correct j} ‖V_i - V_j‖² + (1/δ_c(k))·Σ_{i→ byz. l} ‖V_i - B_l‖².

We focus on the term Σ_{i→ byz. l} ‖V_i - B_l‖². Each correct process i has n - f - 2 neighbors, and f + 1 non-neighbors. Thus there exists a correct worker ζ(i) which is farther from i than any of the neighbors of i. In particular, for each Byzantine index l such that i → l, ‖V_i - B_l‖² ≤ ‖V_i - V_{ζ(i)}‖².

Putting everything back together, we obtain E ‖Kr - g‖² ≤ η²(n, f)·d·σ². By assumption, η(n, f)·√d·σ < ‖g‖, i.e., E Kr belongs to a ball centered at g with radius η(n, f)·√d·σ. This implies

⟨E Kr, g⟩ ≥ (‖g‖ - η(n, f)·√d·σ)·‖g‖ = (1 - sin α)·‖g‖².

To sum up, condition (i) of the (α, f)-Byzantine resilience property holds. We now focus on condition (ii).

Denoting by C a generic constant, when i* = k for some Byzantine index k, we have for all correct indices i

‖B_k‖ ≤ (1/δ_c(k))·‖Σ_{k→ correct j} V_j‖ + ( (1/δ_c(k))·Σ_{i→ correct j} ‖V_i - V_j‖² + (1/δ_c(k))·Σ_{i→ byz. l} ‖V_i - V_{ζ(i)}‖² )^{1/2} ≤ C·Σ_{correct i} ‖V_i‖.

The second inequality comes from the equivalence of norms in finite dimension. Now

E ‖Kr‖^r ≤ Σ_{correct i} E ‖V_i‖^r + C·E ( Σ_{correct i} ‖V_i‖ )^r.

Since the V_i are independent, we finally obtain that E ‖Kr‖^r is bounded above by a linear combination of terms of the form E ‖G‖^{r_1} ··· E ‖G‖^{r_{n-1}} with r_1 + ... + r_{n-1} = r. This completes the proof of condition (ii).

Convergence analysis

In the following, the convergence of the SGD using the Krum function defined above is analyzed. The SGD equation is expressed as follows:

x_{t+1} = x_t - γ_t·Kr(V_1^t, ..., V_n^t),

where at least n - f vectors among the V_i^t are correct, while the other ones may be Byzantine. For a correct index i, V_i^t = G(x_t, ξ_i^t), where G is the gradient estimator. We define the local standard deviation σ(x) by

d·σ²(x) = E ‖G(x, ξ) - ∇Q(x)‖².

The following proposition considers an (a priori) non-convex cost function. In the context of non-convex optimization, even in the centralized case, it is generally hopeless to aim at proving that the parameter vector tends to a local minimum. Many criteria may be used instead. We follow reference [2], and we prove that the parameter vector almost surely reaches a "flat" region (where the norm of the gradient is small), in a sense explained below.

Proposition 2. We assume that (i) the cost function Q is three times differentiable with continuous derivatives, and is non-negative, Q(x) ≥ 0; (ii) the learning rates satisfy Σ_t γ_t = ∞ and Σ_t γ_t² < ∞; (iii) the gradient estimator satisfies E G(x, ξ) = ∇Q(x) and, for all r ∈ {2, 3, 4}, E ‖G(x, ξ)‖^r ≤ A_r + B_r·‖x‖^r for some constants A_r, B_r; (iv) there exists a constant 0 ≤ α < π/2 such that for all x, η(n, f)·√d·σ(x) ≤ ‖∇Q(x)‖·sin α; (v) finally, beyond a certain horizon, ‖x‖² ≥ D, there exist ε > 0 and 0 ≤ β < π/2 - α such that ‖∇Q(x)‖ ≥ ε > 0 and ⟨x, ∇Q(x)⟩ / (‖x‖·‖∇Q(x)‖) ≥ cos β. Then the sequence of gradients ∇Q(x_t) converges almost surely to zero.

Fig. 3 illustrates the condition on the angles between x, ∇Q(x) and the chosen vector, in the region ‖x‖² ≥ D.

Conditions (i) to (iv) are the same conditions as in the non-convex convergence analysis in reference [3]. Condition (v) is a slightly stronger condition than the corresponding one in reference [3], and states that, beyond a certain horizon, the cost function Q is "convex enough", in the sense that the direction of the gradient is sufficiently close to the direction of the parameter vector x. Condition (iv), however, states that the gradient estimator used by the correct workers has to be accurate enough, i.e., the local standard deviation σ(x) should be small relative to the norm of the gradient. Of course, the norm of the gradient tends to zero near, e.g., extremal and saddle points. Actually, the ratio η(n, f)·√d·σ(x) / ‖∇Q(x)‖ controls the maximum angle between the gradient and the vector chosen by the Krum function. In the regions where ‖∇Q(x)‖ < η(n, f)·√d·σ(x), the Byzantine workers may take advantage of the noise (measured by σ(x)) in the gradient estimator G to bias the choice of the parameter server. Therefore, Proposition 2 is to be interpreted as follows: in the presence of Byzantine workers, the parameter vector x_t almost surely reaches a basin around points where the gradient is small (‖∇Q(x)‖ ≤ η(n, f)·√d·σ(x)), i.e. points where the cost landscape is "almost flat".

Note that the convergence analysis is based only on the fact that the Krum function is (α, f)-Byzantine resilient.

The complete proof of Proposition 2 is as follows:

Proof. For the sake of simplicity, we write Kr_t = Kr(V_1^t, ..., V_n^t). Before proving the main claim of the proposition, we first show that the sequence x_t is almost surely globally confined within the region ‖x‖² ≤ D.

(Global confinement).

Let u_t = φ(‖x_t‖²), where φ(a) = 0 if a < D and φ(a) = (a - D)² otherwise. Note that

φ(b) - φ(a) ≤ (b - a)·φ'(a) + (b - a)².    (1)

This becomes an equality when a, b ≥ D. Applying this inequality to u_{t+1} - u_t yields

u_{t+1} - u_t ≤ (-2·γ_t·⟨x_t, Kr_t⟩ + γ_t²·‖Kr_t‖²)·φ'(‖x_t‖²) + 4·γ_t²·⟨x_t, Kr_t⟩² - 4·γ_t³·⟨x_t, Kr_t⟩·‖Kr_t‖² + γ_t⁴·‖Kr_t‖⁴.

Let F_t denote the σ-algebra encoding all the information up to round t. Taking the conditional expectation with respect to F_t yields

E(u_{t+1} - u_t | F_t) ≤ (-2·γ_t·⟨x_t, E Kr_t⟩ + γ_t²·E(‖Kr_t‖² | F_t))·φ'(‖x_t‖²) + 4·γ_t²·E(⟨x_t, Kr_t⟩² | F_t) - 4·γ_t³·E(⟨x_t, Kr_t⟩·‖Kr_t‖² | F_t) + γ_t⁴·E(‖Kr_t‖⁴ | F_t).

Thanks to condition (ii) of (α, f)-Byzantine resilience, and the assumption on the first four moments of the gradient estimator G, there exist positive constants A₀, B₀ such that

E(u_{t+1} - u_t | F_t) ≤ -2·γ_t·⟨x_t, E Kr_t⟩·φ'(‖x_t‖²) + γ_t²·(A₀ + B₀·‖x_t‖⁴).

Thus, there exist positive constants A, B such that

E(u_{t+1} - u_t | F_t) ≤ -2·γ_t·⟨x_t, E Kr_t⟩·φ'(‖x_t‖²) + γ_t²·(A + B·u_t).

When ‖x_t‖² < D, the first term of the right-hand side is null because φ'(‖x_t‖²) = 0. When ‖x_t‖² ≥ D, this first term is negative because (see Fig. 3) ⟨x_t, E Kr_t⟩ ≥ ‖x_t‖·‖E Kr_t‖·cos(α + β) > 0. Hence

E(u_{t+1} - u_t | F_t) ≤ γ_t²·(A + B·u_t).

We define two auxiliary sequences

μ_t = Π_{i=1}^{t} 1/(1 - γ_i²·B) → μ_∞ and u'_t = μ_t·u_t.

Note that the sequence μ_t converges because Σ_t γ_t² < ∞. Then

E(u'_{t+1} - u'_t | F_t) ≤ γ_t²·μ_t·A.

Consider the indicator of the positive variations of the left-hand side,

χ_t = 1 if E(u'_{t+1} - u'_t | F_t) > 0, and χ_t = 0 otherwise.

Then

E(χ_t·(u'_{t+1} - u'_t)) ≤ E(χ_t·E(u'_{t+1} - u'_t | F_t)) ≤ γ_t²·μ_t·A.

The right-hand side of the previous inequality is the summand of a convergent series. By the quasi-martingale convergence theorem (see reference [2]), this shows that the sequence u'_t converges almost surely, which in turn shows that the sequence u_t converges almost surely, u_t → u_∞ ≥ 0.

Let us assume that u_∞ > 0. When t is large enough, this implies that ‖x_t‖² and ‖x_{t+1}‖² are greater than D. Inequality (1) becomes an equality, which implies that the following infinite sum converges almost surely:

Σ_t γ_t·⟨x_t, E Kr_t⟩·φ'(‖x_t‖²) < ∞.

Note that the sequence φ'(‖x_t‖²) converges to a positive value. In the region ‖x_t‖² > D, we have

⟨x_t, E Kr_t⟩ ≥ √D·‖E Kr_t‖·cos(α + β) > 0.

This contradicts the fact that Σ_t γ_t = ∞. Therefore, the sequence u_t converges to zero.

This convergence implies that the sequence ‖x_t‖² is bounded, i.e., the vector x_t is confined in a bounded region containing the origin. As a consequence, any continuous function of x_t is also bounded, such as, e.g., ‖x_t‖², E(‖Kr_t‖²) and all the derivatives of the cost function Q(x_t). In the sequel, positive constants K₁, K₂, etc. are introduced whenever such a bound is used.

(Convergence).

We proceed to show that the gradient ∇Q(x_t) converges almost surely to zero. We define

h_t = Q(x_t).

Using a first-order Taylor expansion and bounding the second derivative with K₁, we obtain

|h_{t+1} - h_t + 2·γ_t·⟨Kr_t, ∇Q(x_t)⟩| ≤ γ_t²·‖Kr_t‖²·K₁ almost surely.

Therefore

E(h_{t+1} - h_t | F_t) ≤ -2·γ_t·⟨E Kr_t, ∇Q(x_t)⟩ + γ_t²·K₁·E(‖Kr_t‖² | F_t).    (2)

By the properties of Byzantine resilience, this implies

E(h_{t+1} - h_t | F_t) ≤ γ_t²·K₂,

which in turn implies that the positive variations of h_t are also bounded:

E(χ_t·(h_{t+1} - h_t)) ≤ γ_t²·K₂.

The right-hand side is the summand of a convergent infinite sum. By the quasi-martingale convergence theorem, the sequence h_t converges almost surely, Q(x_t) → Q_∞.

Taking the expectation of Inequality (2), and summing on t = 1, ..., ∞, the convergence of h_t implies that

Σ_t γ_t·⟨E Kr_t, ∇Q(x_t)⟩ < ∞ almost surely.

We now define

ρ_t = ‖∇Q(x_t)‖².

Using a Taylor expansion, as demonstrated for the variations of h_t, we obtain

ρ_{t+1} - ρ_t ≤ -2·γ_t·⟨Kr_t, (∇²Q(x_t))·∇Q(x_t)⟩ + γ_t²·‖Kr_t‖²·K₃ almost surely.

Taking the conditional expectation, and bounding the second derivatives by K₄,

E(ρ_{t+1} - ρ_t | F_t) ≤ 2·γ_t·⟨E Kr_t, ∇Q(x_t)⟩·K₄ + γ_t²·K₃.

The positive expected variations of ρ_t are bounded:

E(χ_t·(ρ_{t+1} - ρ_t)) ≤ 2·γ_t·E(⟨E Kr_t, ∇Q(x_t)⟩)·K₄ + γ_t²·K₃.

The two terms on the right-hand side are the summands of convergent infinite series. By the quasi-martingale convergence theorem, this shows that ρ_t converges almost surely.

We have

⟨E Kr_t, ∇Q(x_t)⟩ ≥ (1 - sin α)·‖∇Q(x_t)‖² ≥ 0.

This implies that the following infinite series converges almost surely:

Σ_t γ_t·ρ_t < ∞.

Since ρ_t converges almost surely, and the series Σ_t γ_t = ∞ diverges, we conclude that the sequence ρ_t = ‖∇Q(x_t)‖² converges almost surely to zero.

Experimental evaluation

The following is an evaluation of the convergence and resilience properties of Krum, as well as variants of Krum.

Resilience to Byzantine processes

We consider the task of spam filtering (based on the dataset spambase of reference [19]) as an exemplary application of the concepts disclosed herein. The learning model is a multi-layer perceptron (MLP) with two hidden layers. There are n worker processes. Byzantine processes propose vectors drawn from a Gaussian distribution with mean zero and isotropic covariance matrix with standard deviation 200; we refer to this behavior as Gaussian Byzantine. Each (correct) worker estimates the gradient on a mini-batch of size 3. We measure the error using cross-validation. Fig. 4 shows how the error (y-axis) evolves with the number of rounds (x-axis).
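For the sake of illustration, the Gaussian Byzantine behavior can be sketched as follows (the gradient dimension d and the function name are assumptions of the sketch):

```python
# Sketch of a "Gaussian Byzantine" proposal: a random vector with mean zero
# and isotropic standard deviation 200 replaces the gradient estimate.
import numpy as np

def gaussian_byzantine(d: int, rng=np.random.default_rng()) -> np.ndarray:
    return rng.normal(loc=0.0, scale=200.0, size=d)
```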

Fig. 4 illustrates a cross-validation error evolution with rounds, respectively in the absence and in the presence of 33% Byzantine workers. The mini-batch size is 3. With 0% Gaussian Byzantine workers, averaging converges faster than Krum. With 33% Gaussian Byzantine workers, averaging does not converge, whereas Krum behaves as if there were 0% Byzantine workers. This experiment confirms that averaging does not tolerate (the rather mild) Gaussian Byzantine behavior, whereas Krum does.

The Cost of Resilience

As seen above, Krum slows down learning when there are no Byzantine workers. The following experiment shows that this overhead can be significantly reduced by slightly increasing the mini-batch size. To highlight the effect of the presence of Byzantine workers, the Byzantine behavior has been set as follows: each Byzantine worker computes an estimate of the gradient over the whole dataset (yielding a very accurate estimate of the gradient), and proposes the opposite vector, scaled to a large length. We refer to this behavior as omniscient.
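For the sake of illustration, the omniscient behavior can be sketched as follows (the `full_gradient` helper and the scaling factor are assumptions of the sketch):

```python
# Sketch of the "omniscient" attack: estimate the true gradient on the full
# dataset and propose the opposite vector, scaled to a large length.
import numpy as np

def omniscient_byzantine(x, full_gradient, scale=1e6) -> np.ndarray:
    g = full_gradient(x)                              # accurate gradient on the whole dataset
    return -scale * g / (np.linalg.norm(g) + 1e-12)   # opposite direction, large norm
```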

Fig. 5 illustrates how the error value at the 500th round (y-axis) evolves when the mini-batch size varies (x-axis). In this experiment, we consider the tasks of spam filtering (dataset spambase) and image classification (dataset MNIST). The MLP model is used in both cases. Each curve is obtained with either 0 or 45% of omniscient Byzantine workers. In all cases, averaging still does not tolerate Byzantine workers, but yields the lowest error when there are no Byzantine workers. However, once the size of the mini-batch reaches the value 20, Krum with 45% omniscient Byzantine workers is as accurate as averaging with 0% Byzantine workers. We observe a similar pattern for a ConvNet, as provided in the supplementary material.

Multi-Krum

Krum as presented above selects only one vector among the vectors proposed by the workers. Multi-Krum, by contrast, computes, for each vector proposed, the score as in the Krum function above. Then, Multi-Krum selects the m ∈ {1, ..., n} vectors which score the best, and outputs their average. Note that the cases m = 1 and m = n correspond to Krum and averaging respectively.
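For the sake of illustration, a minimal NumPy sketch of Multi-Krum as just described (the array layout and function name are assumptions of the sketch; m = 1 recovers Krum and m = n recovers averaging):

```python
# Sketch of Multi-Krum: average the m best-scoring estimate vectors.
import numpy as np

def multi_krum(V: np.ndarray, f: int, m: int) -> np.ndarray:
    n = V.shape[0]
    sq = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(sq, np.inf)
    scores = np.sort(sq, axis=1)[:, : n - f - 2].sum(axis=1)
    best = np.argsort(scores)[:m]     # indices of the m smallest Krum scores
    return V[best].mean(axis=0)       # m = 1 gives Krum, m = n gives averaging
```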

Fig. 6 shows how the error (y-axis) evolves with the number of rounds (x-axis). In the figure, we consider the task of spam filtering (dataset spambase) and the MLP model.

The Multi-Krum parameter m is set to a fixed value for this experiment. Fig. 6 shows that Multi-Krum with 33% Byzantine workers is as efficient as averaging with 0% Byzantine workers.

From the practitioner's perspective, the parameter m may be used to set a specific trade-off between convergence speed and resilience to Byzantine workers.

Medoid-Krum

This aggregation rule is an easily computable variant of the geometric median. As discussed above, the geometric median is known to have strong statistical robustness; however, there exists no algorithm yet (see reference [5]) to compute its exact value.

Recall that the geometric median of a set of $d$-dimensional vectors $v_1, \dots, v_n$ is defined as follows:

$$\mathrm{med}(v_1, \dots, v_n) \in \underset{v \in \mathbb{R}^d}{\arg\min} \; \sum_{i=1}^{n} \| v - v_i \| .$$

The geometric median does not necessarily lie among the vectors $v_1, \dots, v_n$. A computable alternative to the median are the medoids, which are defined as follows:

$$\mathrm{medoid}(v_1, \dots, v_n) \in \underset{v \in \{v_1, \dots, v_n\}}{\arg\min} \; \sum_{i=1}^{n} \| v - v_i \| .$$

A medoid is not necessarily unique; similarly to Krum, if more than one vector minimizes the sum, we refer to the Medoid as the medoid with the smallest index.
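For illustration (a sketch of ours, not part of the original disclosure), the Medoid with this tie-breaking convention can be computed as follows; numpy's argmin returns the smallest minimizing index, which matches the convention above:

    import numpy as np

    def medoid(vectors):
        # Pick the proposed vector minimizing the sum of Euclidean
        # distances to all proposed vectors; np.argmin breaks ties
        # by returning the smallest index.
        vectors = np.asarray(vectors)
        dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
        return vectors[np.argmin(dists.sum(axis=1))]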

1-p Krum

In this aggregation rule, the parameter server chooses the average of the proposed vectors with probability p, and Krum with probability 1 - p. Moreover, we choose p to depend on the learning round: in a preferred implementation, p decreases with the round number t. With such a probability, and despite the presence of Byzantine workers, 1-p Krum has a similar proof of convergence as Krum: the probability of choosing Krum goes to 1 as t tends to infinity. The rationale is to follow averaging in the early phases, to accelerate learning in the absence of Byzantine workers, while mostly following Krum in the later phases, to guarantee Byzantine resilience. Remember that the parameter server never knows whether there are Byzantine workers or not: the latter can behave like correct workers in the beginning and fool any fraud detection measure.
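As a hedged sketch (ours; the decaying schedule p = 1/t below is merely an illustrative choice consistent with the description above, not the claimed implementation), the rule can be written as follows, reusing the multi_krum sketch above with m = 1 as the Krum branch:

    import numpy as np

    def one_p_krum(vectors, f, t, rng=None):
        # Choose averaging with probability p and Krum with
        # probability 1 - p; p decays with the round number t >= 1,
        # so the probability of choosing Krum tends to 1.
        rng = rng or np.random.default_rng()
        p = 1.0 / t  # illustrative schedule only
        if rng.random() < p:
            return np.mean(vectors, axis=0)   # averaging branch
        return multi_krum(vectors, f, m=1)    # Krum branch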

Experimental details and additional results

The concepts underlying the present invention have been evaluated on a distributed framework in which some nodes are set to exhibit an adversarial behavior of one of two kinds:

(a) The omniscient Byzantine workers: these workers have access to the whole training set (as if they had breached the other workers' shares of the data). They compute a rather precise estimator of the true gradient, and send the opposite vector multiplied by an arbitrarily large factor.

(b) The Gaussian Byzantine workers: these workers do not compute an estimator of the gradient; instead, they send a random vector drawn from a Gaussian distribution whose standard deviation we set high enough (200) to break averaging strategies. Both behaviors are sketched below.
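For concreteness, the two behaviors could be sketched as follows (our illustration; the function names and the scaling factor are assumptions, not part of the original disclosure):

    import numpy as np

    def omniscient_byzantine(true_gradient, scale=1e6):
        # Send the opposite of an accurate gradient estimate,
        # scaled to an arbitrarily large length.
        return -scale * np.asarray(true_gradient)

    def gaussian_byzantine(dim, sigma=200.0, rng=None):
        # Send a random vector with zero-mean isotropic Gaussian
        # entries (standard deviation 200, as in the experiments).
        rng = rng or np.random.default_rng()
        return rng.normal(0.0, sigma, size=dim)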

On this distributed framework, we train two models with non-trivial (a priori non-convex) loss functions: a 4-layer convolutional network (ConvNet) with a final fully connected layer, and a classical multi-layer perceptron (MLP) with two hidden layers, on two tasks: spam filtering and image classification. We use cross-validation accuracy to compare the performance of the different algorithms. The focus is on the Byzantine resilience of the gradient aggregation rules, and not on the performance of the models per se.

Replacing an MLP by a ConvNet

Fig. 7 shows that, similarly to the situation with an MLP, mKrum (Multi-Krum) is, despite attacks, comparable to a non-attacked averaging.

More precisely, Fig. 7 compares an averaging aggregation with 0% Byzantine workers to mKrum facing 45% omniscient Byzantine workers for the ConvNet on the MNIST dataset. The cross-validation error evolution during learning is plotted for three mini-batch sizes.

In the same vein, Fig. 8 shows that, as for an MLP, the ConvNet only requires a reasonably low batch size for Krum to perform (despite 45% Byzantine workers) as well as a non-attacked averaging.

More precisely, Fig. 8 illustrates the test accuracy after 500 rounds as a function of the mini-batch size, for an averaging aggregation with 0% Byzantine workers for the ConvNet on the MNIST dataset versus mKrum facing 45% omniscient Byzantine workers.

Optimizing Krum

Fig. 9 compares the different variants in the absence of Byzantine workers. As can be seen, Multi-Krum is comparably fast to averaging, then comes 1-p Krum, while Krum and the Medoid are slower. In more detail, Fig. 9 illustrates the evolution of cross-validation accuracy with rounds for the different aggregation rules in the absence of Byzantine workers. The model is the MLP and the task is spam filtering. The mini-batch size is 3. Averaging and mKrum are the fastest, 1-p Krum is second, and Krum and the Medoid are the slowest.

In the presence of Byzantine workers (Fig. 10), Krum, the Medoid and 1-p Krum are similarly robust. Unsurprisingly, averaging is not resilient (no improvement over time). Multi-Krum outperforms all the tested aggregation rules. More precisely, Fig. 10 shows the evolution of cross-validation accuracy with rounds for the different aggregation rules in the presence of 33% Gaussian Byzantine workers. The model is the MLP and the task is spam filtering. The mini-batch size is 3. Multi-Krum (mKrum) outperforms all the tested aggregation rules.

Additional aspects of embodiments of the invention

The Distributed Computing Perspective

Although seemingly related, results in d-dimensional approximate agreement (see references [24, 14]) cannot be applied to our Byzantine-resilient machine learning context for the following reasons: (a) references [24, 14] assume that the set of vectors that can be proposed to an instance of the agreement is bounded, so that at least f + 1 correct workers propose the same vector, which would require a lot of redundant work in our setting; and, more importantly, (b) reference [24] requires a local computation by each worker that is in $O(n^d)$. While this cost seems reasonable for small dimensions, such as, e.g., mobile robots meeting in a 2D or 3D space, it becomes a real issue in the context of machine learning, where d may be as high as 160 billion (see reference [30]), making d a crucial parameter when considering complexities, either for local computations or for communication rounds. The expected time complexity of the Krum function is $O(n^2 \cdot d)$. A closer approach to the one proposed here has recently been proposed in references [28, 29]. In reference [28], the study only deals with parameter vectors of dimension one, which is too restrictive for today's multi-dimensional machine learning. In reference [29], the authors tackle a multi-dimensional situation, using an iterated approximate Byzantine agreement that reaches consensus asymptotically. This is, however, only achieved on a finite set of possible environmental states, and cannot be used in the continuous context of stochastic gradient descent.

The Statistics and Machine Learning View

Embodiments of the invention build in part upon the resilience of the aggregation rule (see reference [11]) and upon theoretical statistics on the robustness of the geometric median and the notion of breakdown (see reference [7]). For example, the maximum fraction of Byzantine workers that can be tolerated, i.e. f/n, reaches the optimal theoretical value (1/2) asymptotically in n. It is known that the geometric median does achieve the optimal breakdown. However, neither a closed form nor an exact algorithm to compute the geometric median is known (only approximations are available, see reference [5], and their Byzantine resilience is an open problem). An easily computable variant of the median is the Medoid, which is the proposed vector minimizing the sum of distances to all other proposed vectors. The Medoid can be computed with an algorithm similar to Krum.

Robustness Within the Model

It is important to keep in mind that embodiments of the present invention deal with robustness from a coarse-grained perspective: the unit of failure is a worker, receiving its copy of the model and estimating gradients based on either local data or data delegated by a server. The nature of the model itself is less important; the distributed system can train models spanning a large range, from simple regression to deep neural networks. As long as the training uses gradient-based learning, the proposed algorithm to aggregate gradients, Krum, provably ensures convergence when a simple majority of the nodes are not compromised by an attacker.
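As a minimal sketch of this coarse-grained setting (ours, with illustrative names; `workers` stands for callables returning each worker's gradient estimate for the broadcast parameters), one learning round of the parameter server could look as follows, reusing the multi_krum sketch above:

    import numpy as np

    def learning_round(params, workers, f, m, learning_rate):
        # Broadcast the current parameter vector and collect one
        # estimate per worker; Byzantine workers may return anything.
        gradients = [worker(params) for worker in workers]
        # Aggregate with Multi-Krum instead of plain averaging.
        aggregated = multi_krum(gradients, f, m)
        # Standard SGD update with the aggregated estimate.
        return params - learning_rate * aggregated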

A natural question to consider is the fine-grained view: is the model itself robust to internal perturbations? In the case of neural networks, this question can be tied to neuroscience considerations: could some neurons and/or synapses misbehave individually without harming the global outcome? We formulated this question in another work and proved a tight upper bound on the resulting global error when a set of nodes is removed or misbehaves (see reference [9]). One of the many practical consequences (see reference [8]) of such a fine-grained view is the understanding of memory cost reduction trade-offs in deep learning. Such memory cost reduction can be viewed as the introduction of precision errors at the level of each neuron and/or synapse (see reference [9]).

Other approaches to robustness within the model tackled adversarial situations in machine learning with a focus on adversarial examples (during inference; see references [10, 32, 11]) instead of adversarial gradients (during training), as Krum does. Robustness to adversarial input can be viewed through the fine-grained lens we introduced in reference [9]: for instance, one can see perturbations of pixels in the inputs as perturbations of neurons in layer zero. It is important to note the orthogonality and complementarity between the fine-grained (model/input units) and the coarse-grained (gradient aggregation) approaches. Being robust, as a model, either to adversarial examples or to internal perturbations does not necessarily imply robustness to adversarial gradients during training. Similarly, being distributively trained with a robust aggregation scheme such as Krum does not necessarily imply robustness to internal errors of the model or to adversarial input perturbations that may occur later during inference. For instance, the algorithm provided by the present invention is agnostic to the model being trained and to the technology of the hardware hosting it, as long as there are gradients to be aggregated.

References

[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI). Savannah, Georgia, USA, 2016.

[2] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer. Brief announcement: Byzantine-tolerant machine learning. In Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC’17, pages 455-457, New York, NY, USA, 2017. ACM.

[3] L. Bottou. Online learning and stochastic approximations. Online learning in neural networks, 17(9):142, 1998.

[4] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177-186. Springer, 2010.

[5] M. B. Cohen, Y. T. Lee, G. Miller, J. Pachocki, and A. Sidford. Geometric median in nearly linear time. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, pages 9-21. ACM, 2016.

[6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pages 1223-1231, 2012.

[7] D. L. Donoho and P. J. Huber. The notion of breakdown point. In A Festschrift for Erich L. Lehmann, pages 157-184, 1983.

[8] E. M. El Mhamdi, R. Guerraoui, and S. Rouault. On the robustness of a neural network. In 2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS), pages 84-93, Sept 2017.

[9] E. M. El Mhamdi and R. Guerraoui. When neurons fail. In 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 1028-1037, May 2017.

[10] A. Fawzi, S.-M. Moosavi-Dezfooli, and P. Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems, pages 1624-1632, 2016.

[11] J. Feng, H. Xu, and S. Mannor. Outlier robust online learning. arXiv preprint arXiv:1701.00251, 2017.

[12] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 69-77. ACM, 2011.

[13] S. S. Haykin. Neural networks and learning machines, volume 3. Pearson, Upper Saddle River, NJ, USA, 2009.

[14] M. Herlihy, S. Rajsbaum, M. Raynal, and J. Stainer. Computing in the presence of concurrent solo executions. In Latin American Symposium on Theoretical Informatics, pages 214-225. Springer, 2014.

[15] J. Konečný, B. McMahan, and D. Ramage. Federated optimization: Distributed optimization beyond the datacenter. arXiv preprint arXiv:1511.03575, 2015.

[16] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtarik, A. T. Suresh, and D. Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.

[17] L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems (TOPLAS), 4(3):382-401, 1982.

[18] X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In Advances in Neural Information Processing Systems, pages 2737-2745, 2015.

[19] M. Lichman. UCI machine learning repository, 2013.

[20] LPD-EPFL. The implementation is part of a larger distributed framework to run SGD in a reliable distributed fashion and will be released in the GitHub repository of the distributed computing group at EPFL, https://github.com/lpd-epfl.

[21] N. A. Lynch. Distributed algorithms. Morgan Kaufmann, 1996.

[22] J. Markoff. How many computers to identify a cat? 16,000. New York Times, June 25, 2012.

[23] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282, 2017.

[24] H. Mendes and M. Herlihy. Multidimensional approximate agreement in Byzantine asynchronous systems. In Proceedings of the forty-fifth annual ACM Symposium on Theory of Computing, pages 391-400. ACM, 2013.

[25] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.

[26] F. B. Schneider. Implementing fault-tolerant services using the state machine approach: A tutorial. ACM Computing Surveys (CSUR), 22(4):299-319, 1990.

[27] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2377-2385, 2015.

[28] L. Su and N. H. Vaidya. Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, pages 425-434. ACM, 2016.

[29] L. Su and N. H. Vaidya. Non-bayesian learning in the presence of byzantine agents. In International Symposium on Distributed Computing, pages 414-427. Springer, 2016.

[30] A. Trask, D. Gilmore, and M. Russell. Modeling order in neural word embeddings at scale. In ICML, pages 2266-2275, 2015.

[31] J. Tsitsiklis, D. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Transactions on Automatic Control, 31(9):803-812, 1986.

[32] B. Wang, J. Gao, and Y. Qi. A theoretical framework for robustness of (deep) classifiers under adversarial noise. arXiv preprint arXiv:1612.00334, 2016.

[33] S. Zhang, A. E. Choromanska, and Y. LeCun. Deep learning with elastic averaging sgd. In Advances in Neural Information Processing Systems, pages 685-693, 2015.

[34] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the twenty-first international conference on Machine learning, page 116. ACM, 2004.