


Title:
DISTRIBUTED MACHINE LEARNING
Document Type and Number:
WIPO Patent Application WO/2023/111658
Kind Code:
A1
Abstract:
A method (700) for distributed machine learning, ML, using a set of N CDs. The method includes obtaining first topology information, wherein, for each CD included in the set of N CDs, the first topology information identifies other CDs included in the set of N CDs to which the CD is connected, thereby identifying a set of CD pairs. The method also includes obtaining first network state information, wherein, for each CD included in the set of N CDs, the first network state information comprises a network state vector for the CD, wherein the network state vector for the CD comprises, for each other CD to which the CD is indicated as being connected by the first topology information, a determined network state value. The method further includes determining a consensus weight matrix (W) using the obtained first network state information and the first topology information.

Inventors:
DALAL HARDIK (CA)
FAPI EMMANUEL THEPIE (CA)
ZHU ZHONGWEN (CA)
LIANG BEN (CA)
WANG JINGRONG (CA)
Application Number:
PCT/IB2021/061946
Publication Date:
June 22, 2023
Filing Date:
December 17, 2021
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L41/12; G06N5/02; G06N20/00; H04L41/14; H04L41/16
Foreign References:
US20200195007A1 (2020-06-18)
Other References:
LIU BO ET AL: "Distributed Training for Multi-Layer Neural Networks by Consensus", IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, IEEE, USA, vol. 31, no. 5, 1 May 2020 (2020-05-01), pages 1771 - 1778, XP011785854, ISSN: 2162-237X, [retrieved on 20200430], DOI: 10.1109/TNNLS.2019.2921926
KHATANA VIVEK ET AL: "Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks", 2020 AMERICAN CONTROL CONFERENCE (ACC), AACC, 1 July 2020 (2020-07-01), pages 4689 - 4694, XP033797115, DOI: 10.23919/ACC45564.2020.9147544
A. DIEULEVEUT, K. K. PATEL: "Communication trade-offs for Local-SGD with large step size", PROC. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, vol. 32
D. ALISTARH, D. GRUBIC, J. LI, R. TOMIOKA, M. VOJNOVIC: "QSGD: Communication-efficient SGD via gradient quantization and encoding", PROC. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, vol. 30, 2017, pages 1709 - 1720
S. SHI, X. CHU, B. LI: "MG-WFBP: Efficient data communication for distributed synchronous SGD algorithms", PROC. IEEE INFOCOM, 2019, pages 172 - 180
L. XIAO, S. BOYD: "Fast linear iterations for distributed averaging", SYSTEMS & CONTROL LETTERS, vol. 53, no. 1, 2004, pages 65 - 78
X. LIAN, C. ZHANG, H. ZHANG, C.-J. HSIEH, W. ZHANG, J. LIU: "Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent", PROC. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2017, pages 5336 - 5346
NEGLIA, GIOVANNI, ET AL.: "The role of network topology for distributed machine learning", PROC. IEEE INFOCOM, 2019
M. CHEN, H. V. POOR, W. SAAD, S. CUI: "Convergence Time Optimization for Federated Learning Over Wireless Networks", IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, vol. 20, no. 4, 2021, pages 2457 - 2471
Z. MENG, H. XU, M. CHEN, Y. XU, Y. ZHAO, C. QIAO: "Learning-Driven Decentralized Machine Learning in Resource-Constrained Wireless Edge Computing", PROC. IEEE INFOCOM, 2021, pages 1 - 10, XP033947069, DOI: 10.1109/INFOCOM42981.2021.9488817
K. SRIVASTAVA, A. NEDIC, D. STIPANOVIC: "Agent-Based Optimization", 2013, SPRINGER STUDIES IN COMPUTATIONAL INTELLIGENCE, article "Distributed bregman-distance algorithms for min-max optimization", pages 143 - 174
Claims:
CLAIMS:

1. A method (700) for distributed machine learning, ML, using a set of N computing devices, CDs (101), the method comprising: obtaining (s702) first topology information, wherein, for each CD included in the set of N CDs, the first topology information identifies other CDs included in the set of N CDs to which the CD is connected, thereby identifying a set of CD pairs; obtaining (s704) first network state information, wherein, for each CD included in the set of N CDs, the first network state information comprises a network state vector for the CD, wherein the network state vector for the CD comprises, for each other CD to which the CD is indicated as being connected by the first topology information, a determined network state value; and using (s706) the first network state information and the first topology information, determining a consensus weight matrix, W.

2. The method of claim 1, further comprising, using the obtained first network state information and first topology information, determining a communication resource matrix, B.

3. The method of claim 1 or 2, wherein the set of N CDs comprises a first CD and a second CD, the topology information indicates that the first CD is connected to the second CD via a first link, and determining the network state values comprises calculating a first network state value, L1,2, associated with the first CD and the second CD.

4. The method of claim 3, further comprising: obtaining network bandwidth budget information indicating a network bandwidth budget, B; and determining an amount of bandwidth, B1,2, to allocate to the first link, wherein B1,2 is a function of B, and CL1,2 is a function of B1,2.

5. The method of any one of claims 1-4, wherein, for each CD included in the set of N CDs, the topology information comprises a topology vector for the CD, wherein the topology vector for the CD comprises, for each other CD included in the set of N CDs, a value indicating whether or not the CD is connected to the other CD.

6. The method of any one of claims 1-5, wherein determining W using the first network state information and the first topology information comprises: using the first network state information and the first topology information, determining an intermediate consensus weight matrix, W1; and determining W using W1.

7. The method of claim 6, wherein determining W based on W1 comprises: determining second topology information using W1, wherein the second topology information identifies a subset of the set of CD pairs identified by the first topology information; determining whether a counter has reached a threshold value; and as a result of determining that the counter has reached the threshold value, determining W using the second topology information.

8. The method of claim 7, further comprising determining a communication resource matrix, B, using the second topology information.

9. The method of claim 6, wherein determining W based on W1 comprises: determining second topology information using W1; determining whether the second topology information is equal to the first topology information; calculating a factor; as a result of determining that the second topology information is equal to the first topology information, using the factor to determine a second intermediate joint consensus weight matrix, W2; and determining W using W2.

10. The method of any one of claims 7-9, wherein W1 is an N x N matrix comprising a set of values W1i,j for i=0 to N and j=0 to N, and the second topology information is an N x N adjacency matrix, A2, comprising a set of values A2i,j for i=0 to N and j=0 to N, and determining A2 using W1 comprises: setting A2i,j = 1 as a result of determining that W1i,j > 0, otherwise setting A2i,j = 0.

11. The method of any one of claims 1-10, further comprising: providing at least a first portion of W to a first CD included in the set of N CDs, wherein the first CD is configured to use the first portion of W to determine model parameters for a first local ML model; providing at least a second portion of W to a second CD included in the set of N CDs, wherein the second CD is configured to use the second portion of W to determine model parameters for a second local ML model; obtaining the model parameters determined by the first and second CDs; and using the obtained model parameters to determine model parameters for a global ML model.

12. The method of any one of claims 1-11, wherein said network state value is a performance indicator value.

13. The method of claim 12, wherein said performance indicator value is: a latency value, a packet loss value, a packet re-transmission success rate value, a packet re-transmission failure rate value, a jitter value, a congestion value, a load value, or a capacity value.

14. A computer program (843) comprising instructions (844) which when executed by processing circuitry (802) of a network node (800) causes the network node to perform the method of any one of claims 1-13.

15. A carrier containing the computer program of claim 14, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium (842).

16. A network node (800), the network node (800) being configured to: obtain first topology information, wherein, for each CD included in a set of N CDs, the first topology information identifies other CDs included in the set of N CDs to which the CD is connected, thereby identifying a set of CD pairs; obtain first network state information, wherein, for each CD included in the set of N CDs, the first network state information comprises a network state vector for the CD, wherein the network state vector for the CD comprises, for each other CD to which the CD is indicated as being connected by the first topology information, a determined network state value; and determine a consensus weight matrix, W, using the first network state information and the first topology information.

17. A network node (800), the network node (800) comprising processing circuitry (802) and a memory (842), the memory containing instructions (844) executable by the processing circuitry whereby the network node is configured to: obtain first topology information, wherein, for each CD included in a set of N CDs, the first topology information identifies other CDs included in the set of N CDs to which the CD is connected, thereby identifying a set of CD pairs; obtain first network state information, wherein, for each CD included in the set of N CDs, the first network state information comprises a network state vector for the CD, wherein the network state vector for the CD comprises, for each other CD to which the CD is indicated as being connected by the first topology information, a determined network state value; and determine a consensus weight matrix, W, using the first network state information and the first topology information.

18. The network node of claim 16 or 17, wherein the network node is further configured to perform the method of any one of claims 2-13.
Description:
DISTRIBUTED MACHINE LEARNING

TECHNICAL FIELD

[001] Disclosed are embodiments related to distributed machine learning.

BACKGROUND

[002] Distributed machine learning (ML) has gained widespread attention as a way to speed up large-scale ML. The development of communication networks and parallel computing has enabled a shift of distributed ML from the cloud to the network edge. To speed up the ML model training process, communication-efficient learning algorithms have been extensively investigated. Existing research has studied communication frequency (see, e.g., reference [1]), coding and gradient compression (see, e.g., reference [2]), and the overlap of computation and communication (see, e.g., reference [3]).

[003] In a typical distributed ML architecture, a group of computing devices collaboratively train a shared ML model either in a centralized manner (i.e., using parameter server(s) (PS(s))) or in a decentralized manner without the use of any PS.

[004] In the centralized architecture, the computing devices need only communicate with a PS, which can easily become a communication bottleneck. A more general, distributed learning architecture (without any centralized PS) uses device-to-device (D2D) networks. The typical process of D2D learning is as follows: (1) a group of computing devices cooperatively train a common model using their local datasets (the training model is estimated locally by each computing device); (2) at each iteration, each computing device (or “device” for short) calculates the local gradients of its local loss function; (3) in the gradient descent step, each device updates its local model with a weighted average of the model estimates from its neighboring devices; (4) the training process repeats until convergence of the training model is reached; and (5) one of the local models can be selected to function as the global model, or a global model can be created by averaging some or all of the local models.

[005] In D2D learning, a consensus weight matrix is formed by the weights assigned to the neighbors of all devices for consensus. Heuristics based on a Laplacian matrix of a communication graph representing the group of communication devices and their interconnections, such as, for example, best constant weight, maximum-degree weight, and Metropolis weight, are common choices for the consensus weight matrix design (see, e.g., reference [4]).

[006] It has been proposed that the fastest distributed linear averaging (FDLA) through spectral norm minimization can be used to find the optimal weight matrix that leads to the fastest convergence rate (see, e.g., reference [4]). In summary, given the topology of the communication graph, FDLA gives the best weight matrix that leads to the fastest convergence rate in terms of the training iterations. At each iteration, the training time consists of both the communication and computation time of the devices.

SUMMARY

[007] Certain challenges presently exist. For example, conventional D2D learning systems generally make use of every physical link between computing devices, which may not be an effective way to speed up the training process. The training time is characterized by multiplying the number of iterations by the expected time of one iteration. In practice, there exists a trade-off between the convergence rate and communication cost when designing the consensus weight matrix.
A more connected network may result in fewer iterations to converge (i.e., a higher convergence rate), but it also introduces higher communication costs per iteration. In existing work, there has been no discussion of how to set the consensus weight matrix to speed up the training process.

[008] Accordingly, there is provided a method for distributed machine learning, ML, using a set of N computing devices (CDs) (also known as “workers”). The method includes obtaining first topology information, wherein, for each CD included in the set of N CDs, the first topology information identifies other CDs included in the set of N CDs to which the CD is connected, thereby identifying a set of CD pairs. The method also includes obtaining first network state information (e.g., a latency matrix or other network state information), wherein, for each CD included in the set of N CDs, the first network state information comprises a network state vector for the CD, wherein the network state vector for the CD comprises, for each other CD to which the CD is indicated as being connected by the first topology information, a determined network state value (e.g., a performance indicator (PI), such as, for example, a latency value). The method further includes determining a consensus weight matrix (W) using the obtained first network state information and the first topology information.

[009] There is also provided a computer program comprising instructions which when executed by processing circuitry of a network node causes the network node to perform any of the methods disclosed herein. In one embodiment, there is provided a carrier containing the computer program, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium. There is also provided a network node configured to perform any of the methods disclosed herein. The network node may include processing circuitry and a memory containing instructions executable by the processing circuitry whereby the network node is configured to perform any one of the methods disclosed herein.

[0010] The embodiments disclosed herein provide several advantages. For example, the embodiments enable a decrease in the amount of time it takes to train an ML model while retaining the training convergence rate. That is, the embodiments reduce the training “wall-clock time” (i.e., the end-to-end model training cycle in distributed machine learning) and reduce communication cost by using less than all of the available communication links.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The accompanying drawings, which are incorporated herein and form part of the specification, illustrate various embodiments.

[0012] FIG.1 illustrates an ML system according to an embodiment.

[0013] FIG.2 is a flowchart illustrating a process according to some embodiments.

[0014] FIG.3 shows the convergence of the ML process.

[0015] FIGs.4A and 4B illustrate various results.

[0016] FIG.4C illustrates an example physical network topology.

[0017] FIGs.5A and 5B illustrate various results.

[0018] FIG.5C illustrates an example physical network topology.

[0019] FIGs.6A, 6B, and 6C illustrate various results.

[0020] FIG.7 is a flowchart illustrating a process according to some embodiments.

[0021] FIG.8 is a block diagram of a network node according to some embodiments.
DETAILED DESCRIPTION

[0022] This disclosure describes a process for increasing the speed of a decentralized, D2D ML model training process performed by an ML system that comprises a group of computing devices, where each computing device (or “device” for short) is communicatively connected to at least one of the other devices in the group via a network link. Hence, the group of devices and network links can be represented by a communication graph. Given the topology of this communication graph and network link information, a sparse subgraph can be determined and used to speed up the training process of a global ML model. The proposed process retains the training convergence rate of FDLA while enforcing graph sparsity and avoiding the selection of poor links. The total training time is reduced through shorter per-iteration training latency.

[0023] FIG.1 illustrates an ML system 100 according to an example embodiment. ML system 100 includes a group of communication devices (CDs) 101 (i.e., CD1, CD2, CD3, CD4, ..., CDN) and a server 106. In the example shown, each CD is connected to at least one other CD via a network link. For example, a first network link connects CD1 with CD2, a second network link connects CD1 with CD4, and a third network link connects CD1 with CDN. The group of CDs 101 forms a D2D network. In the example shown, each CD is also communicatively connected to server 106, which may be an edge server. Each CD may be communicatively connected to server 106 via an access network (e.g., a Long Term Evolution (LTE) access network, a 5G access network, etc.).

[0024] In one embodiment, to facilitate direct D2D communications, the CDs use orthogonal channels to communicate with one or more neighboring CDs, and the server 106 is responsible for managing the CDs and controlling the direct D2D communication. The control signalling from edge server 106 to a CD may be done through an operator network. Message transfer between any two CDs may be done through a sidelink established between the two CDs or through the operator network.

[0025] The basic topology of the group of CDs can be depicted as an undirected graph G = (N, ℇ), where N is the set of CDs, ℇ is the edge set, and (i, j) ∈ ℇ if there exists a link between CD i and CD j.

[0026] The CDs participate in training a global model for a synchronized distributed learning task. In each training iteration, each CD calculates local gradients of the training model using its local dataset. Then, each CD updates its local model through gradient descent with a weighted average of its neighbors’ model estimates (including itself). The training process repeats until the convergence of the training model or until some prescribed maximum number of training iterations is reached.

[0027] To speed up the training process in wall-clock time, the consensus weight matrix and communication resource allocation scheme are designed efficiently and effectively. In other words, the goal is to have a sparse network for the message exchange among the CDs participating in the distributed ML training process. Arbitrary link reduction within the D2D network, however, might make it difficult for the ML model training process in each CD to converge, or might even lead to divergence.
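To make the per-iteration update of paragraph [0026] concrete, the following is a minimal numpy sketch of one synchronized D2D round. It is illustrative only: the edge list (only the CD1-CD2, CD1-CD4, and CD1-CDN links are named in the text above; the remaining edges are assumptions used to obtain a connected example graph), the model dimension, the learning rate, and the function names are not part of the disclosure, and the Metropolis-style weights are just the heuristic mentioned in paragraph [005] used here as a stand-in consensus weight matrix.

```python
import numpy as np

# Illustrative topology loosely following FIG.1 (N = 5, 0-indexed CDs).
N = 5
edges = [(0, 1), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4)]
A = np.zeros((N, N), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1          # undirected graph G = (N, E)

def d2d_round(models, grads, W, lr=0.01):
    """One synchronized D2D iteration (paragraph [0026]): each CD mixes its
    neighbours' model estimates (including its own) using the consensus
    weights W and then takes a local gradient step."""
    mixed = W @ models             # row i: sum_j W[i, j] * model_j
    return mixed - lr * grads

# Example usage with Metropolis-style weights restricted to the links in A
# (one of the heuristics mentioned in paragraph [005]).
deg = A.sum(axis=1)
W = np.zeros((N, N))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
np.fill_diagonal(W, 1.0 - W.sum(axis=1))   # make each row sum to one

d = 3                                       # assumed model dimension
models = np.random.randn(N, d)
grads = np.random.randn(N, d)               # stand-in for local gradients
models = d2d_round(models, grads, W)
```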
[0028] Hence, to guarantee the convergence of the training process, the following common assumptions can be made: i) the local loss functions are convex, with their gradients bounded by a constant L; ii) there exist global optimal model parameters; iii) the graph G is connected (or connected over each period) and the consensus weight matrix (W) is doubly stochastic; iv) ρ(W) = ‖W − (1/N)11^T‖2 < 1 guarantees that the learning models converge, and a smaller ρ(W) suggests a better network consensus; v) the total allocated resource of the selected links should not exceed a resource budget.

[0029] To speed up the training process, W and the communication resource allocation scheme B are jointly designed, given the basic topology G and the network information (i.e., a latency function L i,j). The problem of joint consensus weight matrix design and communication resource allocation can be formulated as follows:
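As a quick illustration of assumptions iii) and iv), here is a minimal numpy sketch that checks whether a candidate W is doubly stochastic and has ρ(W) < 1. The function name and tolerance are illustrative assumptions, and the expression used for ρ(W) follows the convergence factor commonly associated with FDLA (reference [4]); treat it as an assumption if the original formula differs.

```python
import numpy as np

def consensus_ok(W, tol=1e-9):
    """Check assumptions iii)-iv) of paragraph [0028] for a candidate
    consensus weight matrix W: double stochasticity and rho(W) < 1."""
    N = W.shape[0]
    ones = np.ones(N)
    doubly_stochastic = (np.allclose(W @ ones, ones, atol=tol) and
                         np.allclose(ones @ W, ones, atol=tol))
    # Spectral norm of W minus the averaging matrix (1/N) * 1 1^T.
    rho = np.linalg.norm(W - np.ones((N, N)) / N, 2)
    return doubly_stochastic and rho < 1.0
```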

where, in problem P0, the objective function denotes the theoretical upper bound of the training wall-clock time (the number of iterations multiplied by the per-iteration latency), L i,j is the latency function of the link from CD i to CD j, L ip denotes the processing latency of CD i, CL i,j denotes the communication latency from CD i to CD j, ξ i denotes the computation intensity required at CD i, μ i denotes the computation capacity of CD i, D i is the size of the packet sent by CD i to CD j (D i = 0 if CD i is not connected to CD j), B i,j is the bandwidth allocated to the link from CD i to CD j, η i,j is the spectrum efficiency of the link from CD i to CD j, 1{∙} is an indicator function, B is the resource budget, and 1 = [1, 1, ..., 1]^T is the all-one column vector.

[0030] Constraints (1) - (2) state that the total allocated resource should not exceed the resource budget. Constraint (3) states that the elements of each row vector in W sum to one. Constraint (4) states a symmetric W for the undirected graph; therefore, the sum of each column vector in W also equals one. Note that a directed graph is also applicable in this work by simply substituting constraint (4) with 1^T W = 1^T (see reference [4]). Constraint (5) shows that the selected links are restricted by the basic topology, i.e., an adjacency matrix A, and S A = {W ∈ ℝ^(N×N) | W i,j = 0 if A i,j = 0 and i ≠ j}.

[0031] In D2D learning, the coefficients ξ i and D i are determined by the model selected for training and by how the CDs process and transfer this shared model. For example, in anomaly detection, different models, e.g., support vector machines and neural networks, correspond to different coefficients in the latency function.

[0032] Due to the existence of the indicator function, the objective function of P0 is no longer convex with respect to W. Moreover, due to the vast search space, a greedy algorithm is computationally expensive. Furthermore, it is difficult to derive the closed form of the (sub)gradients of the spectral norm, making gradient-based solutions to non-convex optimization, e.g., successive convex approximation and majorization minimization, inapplicable to P0.
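The formulation of P0 is not reproduced above. For orientation only, a hedged LaTeX sketch consistent with the definitions in paragraphs [0029]-[0030] might read as follows; the exact objective (in particular, how the iteration count K depends on the convergence factor ρ(W)), the inner max over neighbors, and the split of the resource constraint into (1) and (2) are assumptions of this sketch, not the disclosed formulation.

```latex
% Hedged sketch of P0 (assumed form; symbols as in paragraphs [0029]-[0030])
\begin{aligned}
\textbf{P0:}\quad
\min_{W,\;B_{i,j}}\quad & K\!\big(\rho(W)\big)\cdot
   \max_{i}\Big( L_i^{p} + \max_{j:\,W_{i,j}\neq 0} CL_{i,j} \Big) \\
\text{s.t.}\quad
 & \textstyle\sum_{i\neq j} \mathbf{1}\{W_{i,j}\neq 0\}\,B_{i,j} \le B, \quad (1) \\
 & B_{i,j} \ge 0, \quad (2) \\
 & W\mathbf{1} = \mathbf{1}, \quad (3) \\
 & W = W^{T}, \quad (4) \\
 & W \in S_A. \quad (5)
\end{aligned}
```

Here K(ρ(W)) stands for the number of iterations needed for convergence (decreasing in ρ(W)), L_i^p is the processing latency of CD i (e.g., ξ_i/μ_i), and CL_{i,j} = D_i/(η_{i,j} B_{i,j}) is the communication latency over link (i, j); these functional forms are likewise assumptions of this sketch.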
[0033] FIG.2 is a flowchart illustrating an ML process 200, according to an embodiment, for increasing the speed of a decentralized, D2D ML model training process performed by ML system 100. Process 200 finds the right topology for distributed learning by considering graph sparsity and the inherent goodness of the links. Once the topology is found (i.e., once the links are selected), P0 naturally decomposes into two convex subproblems by separating the two decision variables (W and B), which can be solved efficiently through convex optimization. Here, the computation capacity of the CDs and the channel conditions are known and constant. Process 200 may begin with step s202 and assumes that the group of CDs consists of N CDs. At each iteration k of process 200, the following steps are performed.

[0034] Step s202 is an initialization step that comprises setting k = 0, λ(k) = 0, and A(k) = 0.

[0035] Step s204 comprises setting A(k) equal to A if A(k) = 0, where A is an adjacency matrix that, for each CD in the group of CDs 101, indicates the other CDs to which the CD is connected via a link. Thus, given N CDs in the group of CDs 101, A is an NxN matrix where A i,i = 0 and A i,j is equal to 1 if CD i is connected to CD j via a link, otherwise A i,j = 0.

[0036] Step s206 comprises setting the bandwidth allocated to each selected link equal to B divided by the number of elements of A(k) that have a value of 1, where B is a resource budget (e.g., an available bandwidth).

[0037] Step s208 comprises generating an NxN latency matrix L(k). For example, L(k) can be generated by evaluating the latency function L i,j (i.e., processing latency plus communication latency) for each link selected by A(k); an illustrative code sketch is given after this section.

[0038] Step s210 comprises determining a weight matrix W(k) by solving P1, a convex problem that trades off the convergence factor against a weighted sparsity term, as described in the next paragraph.

[0039] Solving P1 enforces graph sparsity and avoids selecting bad links. P1 is a trade-off between the convergence factor, ρ(W), and the weighted graph sparsity term. The L-1 norm encourages sparsity of W by zeroing out elements in W. Furthermore, to differentiate the links when enforcing graph sparsity, each link is weighted using a network state matrix L(k), which reflects the state (e.g., quality) of the links. Since P1 is convex in W, it can be efficiently solved through semidefinite programming. Accordingly, instead of directly solving the non-convex P0 with its large search space, the sparse graph design is recast as a convex problem P1, which can be solved efficiently. In one embodiment, the matrix L(k) is a latency matrix, and when L(k) is estimated, equal bandwidth allocation is used in order to capture the inherent goodness of the links. In particular, if one applies min-max resource allocation, i.e., Min-Max-RA(), to optimize the bandwidth allocation and obtain the latency matrix L(k), such optimal latency would be equal for all links. After obtaining W(k) by solving P1, step s212 is performed.

[0040] Step s212 comprises obtaining A(k+1) by replacing the nonzero elements of the weight matrix W(k) with ones and setting the diagonal elements to zero.

[0041] Step s214 comprises determining whether A(k+1) = A(k). If A(k+1) = A(k), then in step s216 λ(k+1) is set equal to λ(k) + Δ, where Δ is a predetermined constant; otherwise, in step s218, λ(k+1) is set equal to λ(k). That is, if the adjacency matrix A(k+1) changes, the process continues by further extracting a sparser graph with the current scale coefficient λ(k). If A(k+1) does not change, the convergence factor is retained by increasing the scale coefficient by Δ.

[0042] Step s220 comprises incrementing k by 1 (i.e., k is set equal to k + 1).

[0043] Step s222 comprises determining whether k is less than K, where K is a predetermined value. If k is less than K, then process 200 proceeds back to step s204; otherwise, process 200 proceeds to step s224.

[0044] When k = K, the loop terminates. Using the extracted sparse topology A(k), the corresponding consensus weight matrix W and communication resource allocation scheme B can be obtained by solving the spectral norm minimization and min-max resource allocation (Min-Max-RA) problems, respectively (step s224). There are known algorithms to solve these problems, e.g., FDLA (see reference [4]) and the primal-dual Lagrangian approach (see reference [9]). FDLA() and Min-Max-RA() correspond to the algorithms that solve the spectral norm minimization and min-max resource allocation problems, respectively. Since these are convex problems, they can be solved efficiently.

[0045] Since ρ(W) < 1 guarantees the convergence of distributed learning, it can be shown that the extracted topology satisfies ρ(W(k)) < 1, given that the physical network topology A is connected. FIG.3 shows that process 200 converges a few steps after k0.
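Since neither the code listing for L(k) (step s208) nor the formulation of P1 (step s210) is reproduced above, the following is a minimal Python sketch of the loop of process 200 under stated assumptions: the latency model L i,j = ξ i/μ i + D i/(η i,j · B i,j), the P1 objective ρ(W) + λ · Σ L i,j |W i,j| subject to constraints (3)-(5), the thresholding tolerance, and the use of cvxpy for the convex solves are all assumptions of this sketch, not the disclosed implementation.

```python
import numpy as np
import cvxpy as cp  # requires an SDP-capable solver such as SCS

def latency_matrix(A_k, B_budget, xi, mu, D, eta):
    """Assumed form of step s208: L[i, j] = processing latency of CD i plus
    communication latency over link (i, j), with the budget split equally
    among the currently selected links (step s206)."""
    n_links = int(A_k.sum())                 # counts both (i, j) and (j, i)
    B_link = B_budget / max(n_links, 1)      # equal bandwidth allocation
    L = np.zeros_like(A_k, dtype=float)
    for i, j in zip(*np.nonzero(A_k)):
        L[i, j] = xi[i] / mu[i] + D[i] / (eta[i, j] * B_link)
    return L

def solve_P1(A_k, L_k, lam):
    """Assumed form of step s210: trade off the convergence factor
    ||W - (1/N) 1 1^T||_2 against the L-weighted L1 sparsity of W,
    subject to constraints (3)-(5) of P0."""
    N = A_k.shape[0]
    W = cp.Variable((N, N), symmetric=True)               # constraint (4)
    J = np.ones((N, N)) / N
    off_diag = 1.0 - np.eye(N)
    objective = cp.Minimize(cp.sigma_max(W - J)
                            + lam * cp.sum(cp.multiply(L_k, cp.abs(W))))
    constraints = [W @ np.ones(N) == np.ones(N),           # constraint (3)
                   cp.multiply((1 - A_k) * off_diag, W) == 0]  # constraint (5)
    cp.Problem(objective, constraints).solve()
    return W.value

def process_200(A, B_budget, xi, mu, D, eta, K=20, delta=0.1):
    """Sketch of the loop of FIG.2 (steps s202-s224)."""
    A_k, lam = A.copy(), 0.0                               # steps s202/s204
    for _ in range(K):                                     # step s222
        L_k = latency_matrix(A_k, B_budget, xi, mu, D, eta)  # steps s206/s208
        W_k = solve_P1(A_k, L_k, lam)                      # step s210
        A_next = (np.abs(W_k) > 1e-6).astype(int)          # step s212
        np.fill_diagonal(A_next, 0)
        if np.array_equal(A_next, A_k):                    # steps s214-s218
            lam += delta
        A_k = A_next
    # Step s224: on the extracted sparse topology, pure spectral norm
    # minimization (FDLA) corresponds to the lam = 0 case of this P1 sketch;
    # Min-Max-RA() would then allocate bandwidth so that all selected links
    # see the same latency.
    W = solve_P1(A_k, np.zeros_like(A_k, dtype=float), 0.0)
    return A_k, W
```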
[0046] Performance evaluation:

[0047] The performance of process 200 is compared with that of the following benchmarks from reference [4]: (1) Max-degree (each edge is set with the same weight based on the maximum degree of the graph); (2) Metropolis (each edge is set with a weight based on the maximum degree of the two adjacent CDs); (3) Best-constant (each edge is set with the same optimal constant weight based on the eigenvalues of the Laplacian matrix of the graph); and (4) FDLA (the consensus weight matrix is calculated by solving the spectral norm minimization problem; this gives the fastest convergence rate with respect to the number of training iterations).

[0048] In a small-scale network with 8 CDs, the optimal solution can be calculated through exhaustive search. We observe that process 200 achieves the same optimal solution as the exhaustive search (OPT) and obtains the same (fastest) convergence rate as FDLA, as shown in FIG.4A and FIG.4B. With fewer edges selected for communications, process 200 further reduces the latency in each training iteration. Although the commonly used heuristics fully utilize all physical links, this reveals that a properly designed sparse network can also achieve efficient consensus for distributed learning. FIG.4C shows the physical network topology, which consists of both the solid and dotted edges; only the solid edges are selected by process 200. It turns out that 7 out of the 12 edges can achieve the fastest convergence rate with greatly reduced communication latency in each training iteration. In summary, process 200 efficiently enforces graph sparsity with reduced communication latency and, meanwhile, retains a convergence factor comparable with FDLA.

[0049] The performance on a network with 50 CDs is shown in FIG.5A. Although sacrificing a certain convergence rate in FIG.5A when compared with the lower bound FDLA, process 200 still achieves a significantly better training time than the other heuristics in FIG.5B. When compared with the first four benchmarks, process 200 reduces the training time by 85%, 74%, 73%, and 51%, respectively. Moreover, as shown in FIG.5C, only 40% of the edges are selected by process 200 to achieve a similar level of consensus when compared with FDLA.

[0050] So far, the results of different consensus weight matrix designs have been discussed in terms of the theoretical bound of the training time. Now the practical training time of an implemented distributed convolutional neural network (CNN) training is shown. FIG.6A and FIG.6B show the training and test accuracy over time, respectively. We observe that FDLA does not necessarily lead to the fastest convergence speed in practice in terms of wall-clock time. The performance of process 200 is in line with the observations in existing work that a sparse network may converge faster. With fewer edges selected for communications and reduced latency in each training iteration, process 200 spends the minimum time to achieve the same level of training accuracy. It is also worth pointing out that, in practice, with different realizations of the system setup, there does not exist a consensus weight matrix that strictly outperforms the other weight matrices in all situations, because the training performance is also affected by the problem characteristics, the learning algorithms, and even the design of the local datasets. Nevertheless, process 200 demonstrates the role of sparse consensus weight matrix design in speeding up distributed learning.
[0051] FIG.6C shows the speed-up over FDLA under different network scales with the same bandwidth budget. For a fair comparison, the batch size is adjusted by the number of CDs in order to maintain an overall batch size of 256. In theory, the speed-up is calculated with respect to the value of the objective function. In the experiment, the speed-up is calculated by comparing the time to achieve the same level of training accuracy. With more CDs sharing the same resource pool, the communication cost gradually outweighs the computation cost, and thus the speed-up is mainly due to the reduced per-iteration latency of the efficient sparse graph design. The slope of the speed-up decreases due to the scarcity of bandwidth resources. Furthermore, this reflects the fact that the convergence factor is prioritized when the network is rich in communication resources, whereas in a large-scale network a sparser network can achieve the same level of convergence rate with reduced latency in each training iteration. However, with limited communication resources, blindly increasing the number of CDs aggravates network congestion, canceling out the benefits of parallel computing.

[0052] FIG.7 is a flowchart illustrating a process 700, according to an embodiment, for distributed machine learning, ML, using the set of N CDs 101. Process 700 may begin in step s702.

[0053] Step s702 comprises obtaining first topology information (e.g., an adjacency matrix (A)), wherein, for each CD included in the set of N CDs, the first topology information identifies other CDs included in the set of N CDs to which the CD is connected (e.g., as shown in FIG.1, CD1 is connected to CD2, CD4, and CDN, and CD2 is connected to CD1, CD3, and CDN), thereby identifying a set of CD pairs.

[0054] Step s704 comprises obtaining first network state information (denoted L(k)) (e.g., a latency matrix), wherein, for each CD included in the set of N CDs, the first network state information comprises a network state vector for the CD, wherein the network state vector for the CD comprises, for each other CD to which the CD is indicated as being connected by the first topology information, a determined network state value (e.g., a performance indicator, such as, for example, any one or a combination of: a latency value, a packet loss value, a congestion value, a load value, a capacity value, etc.).

[0055] Step s706 comprises determining a consensus weight matrix (e.g., W) using the first network state information and the first topology information. In some embodiments, the process also includes determining a communication resource matrix using the first network state information and the first topology information.

[0056] In some embodiments, the set of N CDs comprises a first CD and a second CD, the topology information indicates that the first CD is connected to the second CD via a first link, and determining the network state values comprises calculating a first network state value (L1,2) associated with the first CD and the second CD. For example, in an embodiment where the network state values are latency values, L1,2 may be determined by calculating: L1,2 = L1p + CL1,2, wherein L1p is a processing latency of the first CD, and CL1,2 is a latency associated with the first link connecting the first CD with the second CD.
In some embodiments, the process also includes obtaining network bandwidth budget information indicating a network bandwidth budget (e.g., B); and determining an amount of bandwidth (B1,2) to allocate to the first link, wherein B1,2 is a function of the network bandwidth budget, and CL1,2 is a function of B1,2.

[0057] In some embodiments, for each CD included in the set of N CDs, the topology information comprises a topology vector for the CD, wherein the topology vector for the CD comprises, for each other CD included in the set of N CDs, a value indicating whether or not the CD is connected to the other CD.

[0058] In some embodiments, determining the consensus weight matrix using the first network state information and the first topology information comprises: using the first network state information and the first topology information, determining an intermediate consensus weight matrix (W1); and determining W using W1. In some embodiments, determining the consensus weight matrix based on W1 comprises: determining second topology information (e.g., A(k)) using W1, wherein the second topology information identifies a subset of the set of CD pairs identified by the first topology information; determining whether a counter has reached a threshold value; and, as a result of determining that the counter has reached the threshold value, determining the consensus weight matrix using the second topology information (e.g., solving FDLA(A(k)), wherein A(k) is the second topology information). In some embodiments, the process also includes determining a communication resource matrix B using the second topology information (e.g., solving Min-Max-RA(A(k))).

[0059] In some embodiments, determining the consensus weight matrix based on W1 comprises: determining second topology information using W1; calculating a factor (e.g., λ(k) + Δ); determining whether the second topology information is equal to the first topology information; as a result of determining that the second topology information is equal to the first topology information, using the factor to determine a second intermediate joint consensus weight matrix, W2; and determining W using W2.

[0060] In some embodiments, W1 is an N x N matrix comprising a set of values W1i,j for i=0 to N and j=0 to N, and the second topology information is an N x N adjacency matrix, A2, comprising a set of values A2i,j for i=0 to N and j=0 to N, and determining A2 using W1 comprises: setting A2i,j = 1 as a result of determining that W1i,j > 0, otherwise setting A2i,j = 0.

[0061] In some embodiments, the process also includes: providing at least a first portion of the consensus weight matrix to a first CD included in the set of N CDs, wherein the first CD is configured to use the first portion of the consensus weight matrix to determine model parameters for a first local ML model; providing at least a second portion of the consensus weight matrix to a second CD included in the set of N CDs, wherein the second CD is configured to use the second portion of the consensus weight matrix to determine model parameters for a second local ML model; obtaining the model parameters determined by the first and second CDs; and using the obtained model parameters to determine model parameters for a global ML model.

[0062] In some embodiments, the network state value comprises a performance indicator value.
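As a small illustration of how the second topology information A2 of paragraphs [0058]-[0060] can be derived from the intermediate matrix W1, here is a minimal numpy sketch; the function name is hypothetical, and the zeroing of the diagonal follows paragraph [0040] rather than being stated in paragraph [0060] itself.

```python
import numpy as np

def second_topology(W1):
    """Threshold the intermediate consensus weight matrix W1 into the second
    topology information A2: A2[i, j] = 1 if W1[i, j] > 0, else 0
    (paragraph [0060]), with the diagonal zeroed as in paragraph [0040]."""
    A2 = (W1 > 0).astype(int)
    np.fill_diagonal(A2, 0)
    return A2
```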
In some embodiments, the performance indicator value is: a latency value, a packet loss value, a packet re-transmission success rate value, a packet re-transmission failure rate value, a jitter value, a congestion value, a load value, or a capacity value.

[0063] FIG.8 is a block diagram of a network node 800, according to some embodiments, that can implement any one or more of the network nodes described herein. That is, network node 800 can perform the above-described methods. As shown in FIG.8, network node 800 may comprise: processing circuitry (PC) 802, which may include one or more processors (P) 855 (e.g., a general purpose microprocessor and/or one or more other processors, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like), which processors may be co-located in a single housing or in a single data center or may be geographically distributed (i.e., network node 800 may be a distributed computing apparatus); at least one network interface 848 comprising a transmitter (Tx) 845 and a receiver (Rx) 847 for enabling network node 800 to transmit data to and receive data from other nodes connected to a network 110 (e.g., an Internet Protocol (IP) network) to which network interface 848 is connected (directly or indirectly) (e.g., network interface 848 may be wirelessly connected to the network 110 via an AP and a core network, in which case network interface 848 is connected to an antenna arrangement); and a storage unit (a.k.a., "data storage system") 808, which may include one or more non-volatile storage devices and/or one or more volatile storage devices. In embodiments where PC 802 includes a programmable processor, a computer readable medium (CRM) 842 may be provided. CRM 842 stores a computer program (CP) 843 comprising computer readable instructions (CRI) 844. CRM 842 may be a non-transitory CRM, such as magnetic media (e.g., a hard disk), optical media, memory devices (e.g., random access memory, flash memory), and the like. In some embodiments, the CRI 844 of computer program 843 is configured such that when executed by PC 802, the CRI causes network node 800 to perform steps described herein (e.g., steps described herein with reference to the flow charts). In other embodiments, network node 800 may be configured to perform steps described herein without the need for code. That is, for example, PC 802 may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software.

[0064] Conclusion

[0065] As described above, there is provided a process that minimizes the total training time with a limited communication resource budget. The total training latency is characterized by multiplying the per-iteration delay by the number of iterations needed to reach convergence of the global model in distributed learning. In summary, the process produces: i) an optimal sparse topology design for distributed learning to accelerate the training time, ii) a smart consensus weight matrix design for information exchange in distributed learning, and iii) an efficient communication resource allocation scheme for network communication in distributed learning.

[0066] While various embodiments are described herein, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments.
Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.

[0067] Additionally, while the processes described above and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, and some steps may be performed in parallel.

[0068] References

[0069] [1] A. Dieuleveut and K. K. Patel, "Communication trade-offs for Local-SGD with large step size," in Proc. Advances in Neural Information Processing Systems, vol. 32, pp. 13601–13612, 2019.

[0070] [2] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic, "QSGD: Communication-efficient SGD via gradient quantization and encoding," in Proc. Advances in Neural Information Processing Systems, vol. 30, pp. 1709–1720, 2017.

[0071] [3] S. Shi, X. Chu, and B. Li, "MG-WFBP: Efficient data communication for distributed synchronous SGD algorithms," in Proc. IEEE INFOCOM, pp. 172–180, 2019.

[0072] [4] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.

[0073] [5] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu, "Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent," in Proc. Advances in Neural Information Processing Systems, pp. 5336–5346, 2017.

[0074] [6] G. Neglia et al., "The role of network topology for distributed machine learning," in Proc. IEEE INFOCOM, 2019.

[0075] [7] M. Chen, H. V. Poor, W. Saad, and S. Cui, "Convergence Time Optimization for Federated Learning Over Wireless Networks," IEEE Transactions on Wireless Communications, vol. 20, no. 4, pp. 2457–2471, 2021.

[0076] [8] Z. Meng, H. Xu, M. Chen, Y. Xu, Y. Zhao, and C. Qiao, "Learning-Driven Decentralized Machine Learning in Resource-Constrained Wireless Edge Computing," in Proc. IEEE INFOCOM, pp. 1–10, 2021.

[0077] [9] K. Srivastava, A. Nedic, and D. Stipanovic, "Distributed bregman-distance algorithms for min-max optimization," in Agent-Based Optimization, Springer Studies in Computational Intelligence, pp. 143–174, 2013.