

Title:
DISTRIBUTED MACHINE LEARNING WITH NEW LABELS USING HETEROGENEOUS LABEL DISTRIBUTION
Document Type and Number:
WIPO Patent Application WO/2022/162677
Kind Code:
A1
Abstract:
A method for distributed machine learning (ML) which includes providing a first dataset including a first set of labels to a plurality of local computing devices including a first local computing device and a second local computing device. The method further includes receiving, from the first local computing device, a first set of ML model probabilities values from training a first local ML model using the first set of labels. The method further includes receiving, from the second local computing device, a second set of ML model probabilities values from training a second local ML model using the first set of labels and one or more labels different from any label in the first set of labels. The method further includes generating a weights matrix using the received first set of ML model probabilities values and the received second set of ML model probabilities values. The method further includes generating a third set of ML model probabilities values by sampling using the generated weights matrix.

Inventors:
GAUTHAM KRISHNA GUDUR (IN)
PEREPU SATHEESH KUMAR (IN)
Application Number:
PCT/IN2021/050097
Publication Date:
August 04, 2022
Filing Date:
January 29, 2021
Assignee:
ERICSSON TELEFON AB L M (SE)
GAUTHAM KRISHNA GUDUR (IN)
International Classes:
G06N20/00; G06N3/08; G06N5/04
Foreign References:
US9147129B22015-09-29
US10657461B22020-05-19
Attorney, Agent or Firm:
D J, Solomon et al. (IN)
CLAIMS:

1. A method for distributed machine learning (ML) at a central computing device, the method comprising:
providing a first dataset including a first set of labels to a plurality of local computing devices including a first local computing device and a second local computing device;
receiving, from the first local computing device, a first set of ML model probabilities values from training a first local ML model using the first set of labels;
receiving, from the second local computing device, a second set of ML model probabilities values from training a second local ML model using the first set of labels and one or more labels different from any label in the first set of labels;
generating a weights matrix using the received first set of ML model probabilities values and the received second set of ML model probabilities values;
generating a third set of ML model probabilities values by sampling using the generated weights matrix;
generating a first set of data impressions using the generated third set of ML model probabilities values, wherein the first set of data impressions includes data impressions for each of the one or more labels different from any label in the first set of labels;
generating a second set of data impressions by clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels; and
training a global ML model using the generated second set of data impressions.

2. The method of claim 1, further comprising:
generating a fourth set of ML model probabilities values by averaging using the first set of data impressions and the second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels; and
providing the generated fourth set of ML model probabilities values to the plurality of local computing devices, including the first local computing device and the second local computing device, for training local ML models.

3. The method of any one of claims 1-2, wherein the received first set of ML model probabilities values and the received second set of ML model probabilities values are one of: Softmax values, sigmoid values, and Dirichlet values.

4. The method of any one of claims 1-3, wherein the generated weights matrix is a class similarity matrix and the class similarity matrix is generated according to:

C(i, j) = (W_i · W_j) / (||W_i|| ||W_j||)

where W_i is the vector of the weights connecting the previous layer nodes to the class node i and C ∈ R^(K×K) is the similarity matrix for K labels in the received first set of ML model probabilities values and the received second set of ML model probabilities values.

5. The method of any one of claims 1-4, wherein sampling using the generated weights matrix is according to:

Softmax = Dir(K, C)

where Dir is a Dirichlet distribution function and C is a concentration parameter which controls the spread of the Softmax values over labels in the received first set of ML model probabilities values and the received second set of ML model probabilities values.

6. The method of any one of claims 1-5, wherein clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels is according to a k-medoids clustering algorithm and uses the elbow method to determine the number of clusters k.

7. The method of any one of claims 1-6, wherein clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels is according to:

X_k = (1 / |M_Sk|) Σ_{m ∈ M_Sk} X_m

where M_Sk is the set of users with label k.

8. The method of claim 2, wherein generating the fourth set of ML model probabilities values is according to: acc(f_{D_{i+1}}(x_0)), wherein acc is an accuracy function defined by the ratio of correctly classified samples to the total samples for the given local ML model.

9. A method for distributed machine learning (ML) at a local computing device, the method comprising:
receiving a first dataset including a first set of labels;
generating a second dataset including the first set of labels from the received first dataset and one or more labels different from any label in the first set of labels;
training a local ML model using the generated second dataset;
generating a weights matrix using the one or more labels different from any label in the first set of labels;
generating a set of ML model probabilities values by using the generated weights matrix and trained local ML model; and
providing the generated set of ML model probabilities values to a central computing device.

10. The method of claim 9, wherein the received first data set is a public dataset and the generated second dataset is a private dataset.

11. The method of any one of claims 9-10, wherein the local ML model is one of: a convolutional neural network (CNN), an artificial neural network (ANN), and a recurrent neural network (RNN).

12. The method of any one of claims 9-11, further comprising: receiving a set of ML model probabilities values from the central computing device representing an averaging using a first set of data impressions and a second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels; and training the local ML model using the received set of ML model probabilities values.

13. The method of any one of claims 1-12, wherein the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of radio network nodes which are configured to classify an alarm type using the trained local ML models.

14. The method of any one of claims 1-12, wherein the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of wireless sensor devices which are configured to classify an alarm type using the trained local ML models.

15. A central computing device comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to:
provide a first dataset including a first set of labels to a plurality of local computing devices including a first local computing device and a second local computing device;
receive, from the first local computing device, a first set of ML model probabilities values from training a first local ML model using the first set of labels;
receive, from the second local computing device, a second set of ML model probabilities values from training a second local ML model using the first set of labels and one or more labels different from any label in the first set of labels;
generate a weights matrix using the received first set of ML model probabilities values and the received second set of ML model probabilities values;
generate a third set of ML model probabilities values by sampling using the generated weights matrix;
generate a first set of data impressions using the generated third set of ML model probabilities values, wherein the first set of data impressions includes data impressions for each of the one or more labels different from any label in the first set of labels;
generate a second set of data impressions by clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels; and
train a global ML model using the generated second set of data impressions.

16. The central computing device of claim 15, wherein the processor is further configured to: generate a fourth set of ML model probabilities values by averaging using the generated first set of data impressions and the generated second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels; and provide the generated fourth set of ML model probabilities values to a plurality of local computing devices, including the first local computing device and the second local computing device, for training local ML models.

17. The central computing device of any one of claims 15-16, wherein the received first set of ML model probabilities values and the received second set of ML model probabilities values are one of: Softmax values, sigmoid values, and Dirichlet values.

18. The central computing device of any one of claims 15-17, wherein the generated weights matrix is a class similarity matrix and the class similarity matrix is generated according to:

C(i, j) = (W_i · W_j) / (||W_i|| ||W_j||)

where W_i is the vector of the weights connecting the previous layer nodes to the class node i and C ∈ R^(K×K) is the similarity matrix for K labels in the received first set of ML model probabilities values and the received second set of ML model probabilities values.

19. The central computing device of any one of claims 15-18, wherein sampling using the generated weights matrix is according to:

Softmax = Dir (K, C) where Dir is a Dirichlet distribution function and C is a concentration parameter which controls the spread of the Softmax values over labels in the received first set of ML model probabilities values and the received second set of ML model probabilities values.

20. The central computing device of any one of claims 15-19, wherein clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels is according to a k-medoids clustering algorithm and uses the elbow method to determine the number of clusters k.

21. The central computing device of any one of claims 15-20, wherein clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels is according to:

X_k = (1 / |M_Sk|) Σ_{m ∈ M_Sk} X_m

where M_Sk is the set of users with label k.

22. The central computing device of claim 16, wherein generating the fourth set of ML model probabilities values for each label of the first set of labels and the one or more labels different from any label in the first set of labels is according to: acc(f_{D_{i+1}}(x_0)), wherein acc is an accuracy function defined by the ratio of correctly classified samples to the total samples for the given local ML model.

23. The central computing device of any one of claims 15-22, wherein the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of radio network nodes which are configured to classify an alarm type using the trained local ML models.

24. The central computing device of any one of claims 15-22, wherein the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of wireless sensor devices which are configured to classify an alarm type using the trained local ML models.

25. A local computing device comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to:
receive a first dataset including a first set of labels;
generate a second dataset including the first set of labels from the received first dataset and one or more labels different from any label in the first set of labels;
train a local ML model using the generated second dataset;
generate a weights matrix using the one or more labels different from any label in the first set of labels;
generate a set of ML model probabilities values by using the generated weights matrix and trained local ML model; and
provide the generated set of ML model probabilities values to a central computing device.

26. The local computing device of claim 25, wherein the received first data set is a public dataset and the generated second dataset is a private dataset.

27. The local computing device of any one of claims 25-26, wherein the local ML model is one of: a convolutional neural network (CNN), an artificial neural network (ANN), and a recurrent neural network (RNN).

28. The local computing device of any one of claims 25-27, wherein the processor is further configured to: receive a set of ML model probabilities values from the central computing device representing an averaging using a first set of data impressions and a second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels; and train the local ML model using the received set of ML model probabilities values.

29. The local computing device of any one of claims 25-28, wherein the local computing device comprises a radio network node and the processor is further configured to classify an alarm type using the trained local ML model.

30. The local computing device of any one of claims 25-28, wherein the local computing device comprises a wireless sensor device and the processor is further configured to classify an alarm type using the trained local ML model.

31. A computer program comprising instructions which, when executed by processing circuitry, cause the processing circuitry to perform the method of any one of claims 1-14.

32. A carrier containing the computer program of claim 31, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium.


Description:
DISTRIBUTED MACHINE LEARNING WITH NEW LABELS USING HETEROGENEOUS LABEL DISTRIBUTION

TECHNICAL FIELD

[001] Disclosed are embodiments related to distributed machine learning and, in particular, to distributed machine learning, such as, for example, federated learning, with new labels using heterogeneous label distribution.

BACKGROUND

[002] In the past few years, machine learning has led to major breakthroughs in various areas, such as natural language processing, computer vision, speech recognition, and the Internet of Things (IoT), with some breakthroughs related to automation and digitalization tasks. Most of this success stems from collecting and processing big data in suitable environments. For some applications of machine learning, this process of collecting data can be incredibly privacy invasive. One potential use case is to improve the results of speech recognition and language translation, while another is to predict the next word typed on a mobile phone to increase the speed and productivity of the person typing. In both cases, it would be beneficial to train directly on the same data instead of using data from other sources. This would allow for training a machine learning (ML) model (also referred to herein as a "model") on the same data distribution (i.i.d. - independent and identically distributed) that is also used for making predictions. However, directly collecting such data might not always be feasible owing to privacy concerns. Users may not want, nor have any interest in, sending everything they type to a remote server/cloud.

[003] One recent solution to address this is the introduction of federated learning, a new distributed machine learning approach where the training data does not leave the users’ computing device at all. Instead of sharing their data directly, the client computing devices themselves compute weight updates using their locally available data. It is a way of training a model without directly inspecting clients’ or users’ data on a server node or computing device. Federated learning is a collaborative form of machine learning where the training process is distributed among many users. A server node or computing device has the role of coordinating between models, but most of the work is not performed by a central entity anymore but by a federation of users or clients.

[004] After the model is initialized in every user or client computing device, a certain number of devices are randomly selected to improve the model. Each sampled user or client computing device receives the current model from the server node or computing device and uses its locally available data to compute a model update. All these updates are sent back to the server node or computing device where they are averaged, weighted by the number of training examples that the clients used. The server node or computing device then applies this update to the model, typically by using some form of gradient descent.
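As an illustration of the averaging described in the preceding paragraph, the following is a minimal sketch of sample-count-weighted averaging of client weight updates. The function name and data layout are assumptions made for illustration only and are not part of the disclosed method.

```python
import numpy as np

def federated_average(client_weights, client_sample_counts):
    """Average client weight tensors, weighted by each client's number of training examples.

    client_weights: list (one entry per client) of lists of np.ndarray layer tensors.
    client_sample_counts: list of ints (training examples used by each client).
    """
    total = float(sum(client_sample_counts))
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        # Weighted sum of this layer across clients.
        layer = sum(
            (n / total) * client_weights[c][layer_idx]
            for c, n in enumerate(client_sample_counts)
        )
        averaged.append(layer)
    return averaged
```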

[005] Current machine learning approaches require the availability of large datasets, which are usually created by collecting huge amounts of data from user or client computing devices. Federated learning is a more flexible technique that allows training a model without directly seeing the data. Although the machine learning process is used in a distributed way, federated learning is quite different from the way conventional machine learning is used in data centers. The local data used in federated learning may not have the same guarantees about data distributions as in traditional machine learning processes, and communication is oftentimes slow and unstable between the local users or client computing devices and the server node or computing device. To be able to perform federated learning efficiently, proper optimization processes need to be adapted within each user machine or computing device. For instance, different telecommunications operators will each generate huge alarm datasets and relevant features. In this situation, there may be a large list of false alarms compared to the list of true alarms. For such a machine learning classification task, typically, the dataset of all operators in a central hub/repository would be required beforehand. This is required since different operators will encompass a variety of features, and the resultant model will learn their characteristics. However, this scenario is extremely impractical in real time since it requires multiple regulatory and geographical permissions; moreover, it is extremely privacy-invasive for the operators. The operators often will not want to share their customers' data outside their premises. Hence, distributed machine learning, such as federated learning, may provide a suitable alternative that can be leveraged to greater benefit in such circumstances.

SUMMARY

[006] The concept of distributed machine learning, such as federated learning, is to build machine learning models based on data sets that are distributed across multiple computing devices while preventing data leakage. See, e.g., Bonawitz, Keith, et al. “Towards federated learning at scale: System design.” arXiv preprint arXiv: 1902.01046 (2019). Recent challenges and improvements have been focusing on overcoming the statistical challenges in federated learning. There are also research efforts to make federated learning more personalizable. The above works all focus on on-device federated learning where distributed mobile user interactions are involved and communication cost in massive distribution, imbalanced data distribution, and device reliability are some of the major factors for optimization.

[007] However, there is a shortcoming with the currently proposed federated learning approaches. It is usually inherently assumed that user or client computing devices, also referred to herein as clients or users, try to train/update the same model architecture. In this case, clients or users do not have the freedom to choose their own architectures and ML modeling techniques. This can be a problem for clients or users since it can result in either overfitting or underfitting the local models on the computing devices. This might also result in an incompetent global model in the global server node or computing device, hereinafter also referred to as the global user, after model updating. Hence, it can be preferable for clients or users to select their own architecture/model tailored to their convenience, and the central resource can be used to combine these (potentially different) models in an effective manner.

[008] Another shortcoming with the current approaches is that a real-time client or user might not have samples following an i.i.d. distribution. For instance, in an iteration, client or user A can have 100 positive samples and 50 negative samples, while user B can have 50 positive samples, 30 neutral samples, and 0 negative samples. In this case, the models in a federated learning setting with these samples can result in a poor global model.

[009] Further, current federated learning approaches can only handle the situation where each of the local models has the same labels across all the clients or users, and they do not provide the flexibility to handle unique labels, or labels that may only be applicable to a subset of the clients or users. However, in many practical applications, having unique labels or new or overlapping unseen labels that may only be applicable to a subset of the clients or users for each local model can be an important and common scenario owing to their dependencies and constraints on specific regions, demographics, etc. In this case, there may be different labels across all the data points specific to the region.

[0010] Recently, a method enabling the use of heterogeneous model types and architectures among users of federated learning was developed by the assignee of the subject application and is disclosed in PCT/IN2019/050736. Further, a method which can handle heterogeneous labels and heterogeneous models in a federated learning setting was developed by the assignee of the subject application and is disclosed in PCT/IN2020/050618. However, there still remains a need for a method which can handle new and unseen heterogeneous labels in a distributed machine learning setting.

[0011] Embodiments disclosed herein provide methods which can handle new and unseen heterogeneous labels in a distributed machine learning setting. Embodiments disclosed herein provide a way, in, for example, federated learning, to handle new unseen labels with heterogeneous label distribution for all the users for a given problem of interest, e.g., image classification, text classification, etc. The terms "labels" and "classes" are used herein interchangeably, and the methods disclosed and claimed herein are applicable to, and adapted for handling, both new and unseen heterogeneous labels and classes, as those terms are used herein and as generally understood by those of ordinary skill in the art. As described in further detail with respect to exemplary embodiments, classes may be, for example, "Cat", "Dog", "Elephant", etc., and labels from those classes similarly include specific instances of "Cat", "Dog", "Elephant", etc.

[0012] While embodiments handle the unseen heterogeneous labels in a federated learning setting, it is generally assumed that there is a public dataset available with all the local clients or users and a global user. Instead of sending the local model updates to the global user, the local users send the softmax probabilities obtained from the public dataset. Embodiments disclosed herein provide a framework with a zero-shot learning mechanism by synthesizing Data Impressions from class similarity matrices to learn new classes (labels). In some embodiments, in order to incorporate trustworthiness of local clients or users reporting labels, an unsupervised clustering technique to validate newly reported classes (labels) across local clients or users is used.

[0013] In this way, for example, the new classes (labels) are added to the public dataset again for the next federated learning iteration. An added advantage of embodiments is that the local clients or users can fit their own models (heterogeneous models) in the federated learning approach.

[0014] Embodiments can also advantageously handle new and unseen heterogeneous labels, enable the local clients or users to have different new and unseen classes (labels) across local devices during federated learning, handle heterogeneous label distributions across users, which is common in most industries, and handle different data distributions and model architectures across users.

[0015] According to a first aspect, a method for distributed machine learning (ML) at a central computing device is provided. The method includes providing a first dataset including a first set of labels to a plurality of local computing devices including a first local computing device and a second local computing device. The method further includes receiving, from the first local computing device, a first set of ML model probabilities values from training a first local ML model using the first set of labels. The method further includes receiving, from the second local computing device, a second set of ML model probabilities values from training a second local ML model using the first set of labels and one or more labels different from any label in the first set of labels. The method further includes generating a weights matrix using the received first set of ML model probabilities values and the received second set of ML model probabilities values. The method further includes generating a third set of ML model probabilities values by sampling using the generated weights matrix. The method further includes generating a first set of data impressions using the generated third set of ML model probabilities values, wherein the first set of data impressions includes data impressions for each of the one or more labels different from any label in the first set of labels. The method further includes generating a second set of data impressions by clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels. The method further includes training a global ML model using the generated second set of data impressions.

[0016] In some embodiments, the method further includes generating a fourth set of ML model probabilities values by averaging using the first set of data impressions and the second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels. In some embodiments, the method further includes providing the generated fourth set of ML model probabilities values to the plurality of local computing devices, including the first local computing device and the second local computing device, for training local ML models.

[0017] In some embodiments, the received first set of ML model probabilities values and the received second set of ML model probabilities values are one of: Softmax values, sigmoid values, and Dirichlet values. In some embodiments, sampling using the generated weights matrix is according to Softmax values and a Dirichlet distribution function. In some embodiments, the generated weights matrix is a class similarity matrix. In some embodiments, clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels is according to a k-medoids clustering algorithm and uses the elbow method to determine the number of clusters k.

[0018] According to a second aspect, a method for distributed machine learning (ML) at a local computing device is provided. The method includes receiving a first dataset including a first set of labels. The method further includes generating a second dataset including the first set of labels from the received first dataset and one or more labels different from any label in the first set of labels. The method further includes training a local ML model using the generated second dataset. The method further includes generating a weights matrix using the one or more labels different from any label in the first set of labels. The method further includes generating a set of ML model probabilities values by using the generated weights matrix and trained local ML model. The method further includes providing the generated set of ML model probabilities values to a central computing device.

[0019] In some embodiments, the received first data set is a public dataset and the generated second dataset is a private dataset. In some embodiments, the local ML model is one of: a convolutional neural network (CNN), an artificial neural network (ANN), and a recurrent neural network (RNN). In some embodiments, the method further includes receiving a set of ML model probabilities values from the central computing device representing an averaging using a first set of data impressions and a second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels. In some embodiments, the method further includes training the local ML model using the received set of ML model probabilities values.

[0020] In some embodiments, the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of radio network nodes which are configured to classify an alarm type using the trained local ML models. In some embodiments, the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of wireless sensor devices which are configured to classify an alarm type using the trained local ML models.

[0021] According to a third aspect, a central computing device is provided. The central computing device includes a memory and a processor coupled to the memory. The processor is configured to provide a first dataset including a first set of labels to a plurality of local computing devices including a first local computing device and a second local computing device. The processor is further configured to receive, from the first local computing device, a first set of ML model probabilities values from training a first local ML model using the first set of labels. The processor is further configured to receive, from the second local computing device, a second set of ML model probabilities values from training a second local ML model using the first set of labels and one or more labels different from any label in the first set of labels. The processor is further configured to generate a weights matrix using the received first set of ML model probabilities values and the received second set of ML model probabilities values. The processor is further configured to generate a third set of ML model probabilities values by sampling using the generated weights matrix. The processor is further configured to generate a first set of data impressions using the generated third set of ML model probabilities values, wherein the first set of data impressions includes data impressions for each of the one or more labels different from any label in the first set of labels. The processor is further configured to generate a second set of data impressions by clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels. The processor is further configured to train a global ML model using the generated second set of data impressions.

[0022] In some embodiments, the processor is further configured to generate a fourth set of ML model probabilities values by averaging using the generated first set of data impressions and the generated second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels. In some embodiments, the processor is further configured to provide the generated fourth set of ML model probabilities values to a plurality of local computing devices, including the first local computing device and the second local computing device, for training local ML models.

[0023] In some embodiments, the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of radio network nodes which are configured to classify an alarm type using the trained local ML models. In some embodiments, the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of wireless sensor devices which are configured to classify an alarm type using the trained local ML models.

[0024] According to a fourth aspect, a local computing device is provided. The local computing device includes a memory and a processor coupled to the memory. The processor is configured to receive a first dataset including a first set of labels. The processor is further configured to generate a second dataset including the first set of labels from the received first dataset and one or more labels different from any label in the first set of labels. The processor is further configured to train a local ML model using the generated second dataset. The processor is further configured to generate a weights matrix using the one or more labels different from any label in the first set of labels. The processor is further configured to generate a set of ML model probabilities values by using the generated weights matrix and trained local ML model. The processor is further configured to provide the generated set of ML model probabilities values to a central computing device.

[0025] In some embodiments, the received first data set is a public dataset and the generated second dataset is a private dataset. In some embodiments, the local ML model is one of: a convolutional neural network (CNN), an artificial neural network (ANN), and a recurrent neural network (RNN). In some embodiments, the processor is further configured to receive a set of ML model probabilities values from the central computing device representing an averaging using a first set of data impressions and a second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels. In some embodiments, the processor is further configured to train the local ML model using the received set of ML model probabilities values.

[0026] In some embodiments, the local computing device comprises a radio network node and the processor is further configured to classify an alarm type using the trained local ML model. In some embodiments, the local computing device comprises a wireless sensor device and the processor is further configured to classify an alarm type using the trained local ML model.

[0027] According to a fifth aspect, a computer program is provided comprising instructions which, when executed by processing circuitry, cause the processing circuitry to perform the method of any one of the embodiments of the first or second aspects.

[0028] According to a sixth aspect, a carrier is provided containing the computer program of the fifth aspect, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium.

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] The accompanying drawings, which are incorporated herein and form part of the specification, illustrate various embodiments.

[0030] FIG. 1 illustrates a federated learning system according to an embodiment.

[0031] FIG. 2 illustrates a federated learning system according to an embodiment.

[0032] FIG. 3A illustrates a message diagram according to an embodiment.

[0033] FIG. 3B illustrates a message diagram according to an embodiment.

[0034] FIG. 4A is a flow chart according to an embodiment.

[0035] FIG. 4B is a flow chart according to an embodiment.

[0036] FIG. 5A is a flow chart according to an embodiment.

[0037] FIG. 5B is a flow chart according to an embodiment.

[0038] FIG. 6 is a block diagram of an apparatus according to an embodiment.

[0039] FIG. 7 is a block diagram of an apparatus according to an embodiment.

DETAILED DESCRIPTION

[0040] FIG. 1 illustrates a system 100 of federated learning according to an embodiment. As shown, a central computing device 102 is in communication with one or more local computing devices 104. As described in further detail herein, in some embodiments, a local client or user is associated with a local computing device 104, and a global user is associated with a central server or computing device 102. In some embodiments, local computing devices 104 or local users may be in communication with each other utilizing any of a variety of network topologies and/or network communication systems. In some embodiments, central computing device 102 may include a server device, cloud server or the like. In some embodiments, local computing devices 104 may include user devices or user equipment (UE), such as a smart phone, tablet, laptop, personal computer, and so on, and may also be communicatively coupled through a common network, such as the Internet (e.g., via WiFi) or a communications network (e.g., LTE or 5G). While a central computing device is shown, the functionality of central computing device 102 may be distributed across multiple nodes, computing devices and/or servers, and may be shared between one or more of the local computing devices 104.

[0041] Federated learning as described in embodiments herein may involve one or more rounds, where a global model is iteratively trained in each round. Local computing devices 104 may register with the central computing device 102 to indicate their willingness to participate in the federated learning of the global model, and may do so continuously or on a rolling basis. Upon registration (and potentially at any time thereafter), the central computing device 102 may select a model type and/or model architecture for the local computing device to train. Alternatively, or in addition, the central computing device 102 may allow each local computing device 104 to select a model type and/or model architecture for itself. The central computing device 102 may transmit an initial model to the local users 104. For example, the central computing device 102 may transmit to the local users 104 a global model (e.g., newly initialized or partially trained through previous rounds of federated learning). The local users 104 may train their individual models locally with their own data. The results of such local training may then be reported back to central computing device 102, which may pool the results and update the global model. This process may be repeated iteratively. Further, at each round of training the global model, central computing device 102 may select a subset of all registered local users 104 (e.g., a random subset) to participate in the training round.

[0042] Embodiments disclosed herein provide a way to handle new, unseen heterogeneous labels among different local users or computing devices 104.

[0043] To demonstrate the general scenario of new, unseen classes or labels with heterogeneous label distribution among local users, let us assume the task of image classification in a federated learning setting across different animals with three local users. In this example, User 1 may have labels from two classes — ‘Cat’ and ’Dog’; User 2 may have labels from two classes — ‘Dog’ and ’Pig’; and User 3 may have labels from two classes — ‘Cat’ and ’Pig’ — in the first iteration of federated learning. In the second or following iterations, in this example, a new class exists with label ‘Sheep’ for User 1 and a new class exists with label ‘Elephant’ for User 3. In this example, for all the users, they are working towards image classification and the labels of the images are quite different for different users. This is a typical scenario with new labels with heterogeneous label distributions amongst users.

[0044] Generally speaking, many different types of problems relevant to many different industries will have local users 104 that have new, unseen heterogeneous labels. For instance, let us assume that the local users are telecommunications operators. Quite often, the operators have different data distributions and different labels with them. Some of the labels are common between these operators, while some labels tend to be very unique or more specialized, and catered to certain operators only, or to operators within certain regions. Furthermore, new and unseen labels can occur due to a variety of reasons depending on the specific operator, the operator’s region, etc. Embodiments herein provide, in such situations, for a common and unified model in the federated learning framework since the operator typically will not transfer data due to privacy concerns and can gather only insights.

[0045] One challenge in addressing this problem is to combine these different local labels, including existing heterogeneous labels and new unseen labels, and models into a single global model. This is not straightforward, since the local users cannot see the label distributions of the other local users, and local models are usually built to describe only the local labels they have. Hence, there is a need for a method which can combine these local models into a global model.

[0046] In some embodiments, a public dataset may be made available to all the local users and the global user. The public dataset contains data related to the union of all the labels across all the local users. Suppose, for example, that the label set for User 1 is U_1, for User 2 is U_2, and for User P is U_P; the union of all the labels forms the global user label set {U_1 ∪ U_2 ∪ U_3 ∪ ... ∪ U_P}. Multiple local computing devices include different model architectures in each federated learning (FL) iteration, and the incoming data in each local computing device includes data with new classes and labels in some FL iterations, with either unique or overlapping classes in each FL iteration. An initial public dataset with a label set consisting of all labels accessible across all local user computing devices is established, and the dataset is updated when new classes (labels) come in over different iterations. This public dataset is used as the test set, and it is not exposed to the local models during FL iterations so that consistency prevails while testing. There are cases when no local user reports new classes, or when any (or all) local users report new classes. There are also considerations regarding whether the reported labels of the new classes from local users are trustworthy or not.

[0047] In an exemplary embodiment, the new labels streaming into the local users are not present in the public dataset. All the local users have an idea of all the labels the other local users have.

[0048] Anonymized Data Impressions

[0049] Embodiments disclosed herein enable detection of similar labels across different local users in a federated learning setting without the knowledge of the data. Embodiments disclosed herein enable constructing the anonymized data without actually transferring data and identifying similarities on the anonymized data. In some embodiments, anonymized Data Impressions are generated using zero-shot learning to compute the anonymized data.

[0050] In some embodiments, a ML model M is provided, which relates the input X and the output y, where X ∈ R^(M×N) is the set of the features available and y ∈ R^M contains the labels (classes) for the M samples. As used in exemplary embodiments described herein, features may be, for example, in an image of a dog, cat, elephant, etc., characteristics from patterns, colors, shapes, textures, sizes, etc., such as fur, hair, brown, gray, angular, smooth, etc. In some applications of the methods disclosed herein, such as, for example, normal data-based classification, the features can be a set of sensor data which can be used for classification. An anonymized feature set X*, which has the same properties as X, can be created by, for example, utilizing the following exemplary algorithm, which is explained in further detail below with reference to "Sampling the Softmax values" and "Creating Data Impressions."

Algorithm

Input: Public dataset D_0 = (x_0, y_0), private datasets D_m, total users M, total iterations I, label set l_m for each user

Output: Trained global model scores f_{D_I}

Initialize f_{D_0} = 0 (global model scores)
for i = 0 to I do
    for m = 0 to M do
        Local Update (at local computing device):
            Build the local model and predict scores on the public dataset
            Choice 1: New classes are not reported
                The reported scores are the global scores of label set l_m
            Choice 2: New classes are reported
                Train a new model on the public data D_0 and the new data together; send the weights of the last layer (W_m) to the global user
    end for
    Global Update (at central computing device):
        Choice 1: No user reports new classes
            Update the global scores label-wise, weighted by acc(f_{D_{i+1}}(x_0)), where acc is the accuracy function, defined by the ratio of correctly classified samples to the total samples for the given local model
        Choice 2: Any user reports new classes
            Create Data Impressions (DI) for each user m with weights W_m
            Choice 2i: Global user trusts the new labels of all users
                Average the DI of all users to create a common DI for each new label: X_k = (1 / |M_Sk|) Σ_{m ∈ M_Sk} X_m, where M_Sk is the set of users with label k
            Choice 2ii: Global user does not trust the new labels of all users
                1. Perform k-medoids clustering on the combined DI across the new classes of all users, and arrive at the number of labels l_new
                2. With the new labels l_new, compute the DI as in Choice 2i
            At the end of Choice 2i and Choice 2ii, the users are asked to report model scores on the updated public dataset, and the new labels are added to the public label set
end for

[0051] Sampling the Softmax values

[0052] In some embodiments, softmax values from a Dirichlet distribution are sampled. The distribution may be controlled by using a weights matrix, such as, for example, a class/label similarity matrix. The class similarity matrix includes information on how similar the classes (labels) are to each other. If the classes (labels) are similar, the softmax values will likely be concentrated uniformly over these classes (labels) and vice-versa.

[0053] The class similarity matrix is obtained by considering the weights of the last layer of each local model. This weights matrix can be used for both the local and global ML models. In general, any ML model may have a fully connected final layer with a softmax non-linearity. If the classes (labels) are similar, there will be similar weights in the connections from the previous layer to the nodes of those classes (labels). The class similarity matrix (C) can be constructed, for example, as:

C(i, j) = (W_i · W_j) / (||W_i|| ||W_j||)

where W_i is the vector of the weights connecting the previous layer nodes to the class node i and C ∈ R^(K×K) is the similarity matrix for K classes in the data.

[0054] With the class similarity matrix constructed, the next step is to sample the softmax values as:

Softmax = Dir(K, C)

where Dir is a Dirichlet distribution function and C is a concentration parameter which controls the spread of the softmax values over classes in the data.
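As an illustration of the two steps above, the following is a minimal sketch assuming the last fully connected layer's weights are available as a NumPy array. The function names, and the shift used to keep the Dirichlet concentration strictly positive, are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def class_similarity_matrix(W):
    """Cosine similarity between last-layer weight vectors.

    W: array of shape (K, D), one weight vector per class node,
       connecting the previous layer (D nodes) to each of the K class nodes.
    Returns C of shape (K, K).
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_hat = W / np.clip(norms, 1e-12, None)
    return W_hat @ W_hat.T

def sample_softmax_targets(C, k, n_samples, scale=1.0, seed=0):
    """Sample n_samples softmax vectors for class k from a Dirichlet distribution
    whose concentration is derived from row k of the class similarity matrix."""
    rng = np.random.default_rng(seed)
    # Shift row k so all concentrations are strictly positive
    # (cosine similarities can be negative); the shift is an illustrative choice.
    alpha = scale * (C[k] - C[k].min() + 1e-3)
    return rng.dirichlet(alpha, size=n_samples)  # shape (n_samples, K)
```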

[0055] Creating Data Impressions

[0056] In some embodiments, Y^k = [y_1^k, y_2^k, ..., y_N^k] ∈ R^(K×N), where K is the number of classes (labels) in the data, denotes the N softmax vectors corresponding to class (label) k, sampled from the Dirichlet distribution constructed in the previous step, i.e., the sampling of the softmax values.

Using these softmax values, the input data features (data impressions) are computed by solving the following optimization problem using the model M and the sampled softmax values Y^k:

X* = argmin_X L_CE(y_i^k, M(X))

[0057] To solve the optimization problem, the input X is initialized randomly and the optimization is iterated until the change in the cross-entropy loss (L_CE) between two iterations falls below a threshold. The process is repeated for each of the K classes (labels), and data impressions for each class (label) are obtained, which represent anonymized data features for each class (label). For example, in an exemplary embodiment applying the methods disclosed herein to an image classification problem, the anonymized data features can be a set of anonymized (i.e., look-alike) image features which are responsible for classification. In another example, in an exemplary embodiment applying the methods disclosed herein to a sensor data classification problem, the anonymized data features can be a set of anonymized sensor data which are responsible for classification.
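The optimization in paragraph [0057] can be sketched as follows. This is a minimal, illustrative PyTorch implementation assuming a frozen classifier that returns logits; the learning rate, iteration budget, and stopping threshold are arbitrary choices for the sketch.

```python
import torch
import torch.nn.functional as F

def create_data_impressions(model, target_softmax, input_shape,
                            lr=0.01, max_iters=1500, tol=1e-6):
    """Optimize random inputs so the model's outputs match the sampled softmax targets.

    model: a trained (frozen) classifier returning logits.
    target_softmax: tensor of shape (N, K), sampled softmax vectors for one class.
    input_shape: shape of a single input sample, e.g. (C, H, W) or (num_features,).
    """
    model.eval()
    x = torch.randn(target_softmax.shape[0], *input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    prev_loss = float("inf")
    for _ in range(max_iters):
        optimizer.zero_grad()
        log_probs = F.log_softmax(model(x), dim=1)
        # Cross-entropy between the sampled soft targets and the model's predictions.
        loss = -(target_softmax * log_probs).sum(dim=1).mean()
        loss.backward()
        optimizer.step()
        if abs(prev_loss - loss.item()) < tol:  # stop when the loss change is negligible
            break
        prev_loss = loss.item()
    return x.detach()  # anonymized data impressions for this class
```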

[0058] Embodiments of the disclosed methods enable handling of heterogeneous labels as well as heterogeneous models in federated learning, which is very useful in applications where users are participating from different organizations which have multiple and disparate labels. A further advantage of the disclosed methods is that they can handle different distributions of samples across all the users, which can be common in any application.

[0059] Application of the foregoing steps for creating anonymized data impressions, including sampling softmax values and creating data impressions, and the exemplary algorithm provided, are further discussed with reference to FIG. 2, FIG. 3A, FIG. 3B, FIG. 4A, FIG. 4B, FIG. 5A, and FIG. 5B.

[0060] FIG. 2 illustrates a system 200 according to some embodiments. System 200 includes three users 104, labeled as "Local Computing Device 1", "Local Computing Device 2", and "Local Computing Device 3". These users may have heterogeneous labels, including new, unseen labels streaming in during different iterations. Continuing with the example image classification described above, User 1, Local Computing Device 1 may have labels from two classes — 'Cat' and 'Dog'; User 2, Local Computing Device 2 may have labels from two classes — 'Dog' and 'Pig'; and User 3, Local Computing Device 3 may have labels from two classes — 'Cat' and 'Pig' — in the first iteration of federated learning. In the second or following iterations, in this example, a new class exists with label 'Sheep' for User 1, Local Computing Device 1 and a new class exists with label 'Elephant' for User 3, Local Computing Device 3.

[0061] As illustrated, each of the users, Local Computing Devices 104, includes a Local ML Model. User 1, Local Computing Device 1 includes Local ML Model M1; User 2, Local Computing Device 2 includes Local ML Model M2; and User 3, Local Computing Device 3 includes Local ML Model M3. Each of the Local ML Models M1, M2, and M3 may be the same or different model types (e.g., a CNN model, an artificial neural network (ANN) model, and an RNN model).

[0062] System 200 also includes a Central Computing Device 102, which includes a Global ML Model.

[0063] As described above, for a given iteration of federated learning, new labels (classes) may stream into the users, local computing devices 104. If new labels (classes) are reported at a local computing device, the new labels (classes) are trained along with the public dataset, and the new model weights are sent to the global user. When a user reports new labels (classes), the global user clusters the averaged Data Impressions of each new label (class) using a simple k-medoids clustering algorithm, and uses the elbow method to determine the optimal number of clusters k. Considering the new clusters k as new labels (classes), the Data Impressions for each new label (class) are computed, the new Data Impressions are again added to the public dataset, and the new labels (classes) are added to the public label (class) set.
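A minimal sketch of this clustering step is shown below. It assumes the scikit-learn-extra package for KMedoids, and the elbow selection is implemented with a simple second-difference heuristic, which is an illustrative choice rather than the only way to apply the elbow method.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids  # assumes scikit-learn-extra is installed

def cluster_new_label_impressions(impressions, k_range=range(2, 10), seed=0):
    """Cluster the combined data impressions of newly reported labels with k-medoids,
    choosing the number of clusters with a simple elbow heuristic.

    impressions: array of shape (n_impressions, n_features).
    Returns (best_k, cluster_labels).
    """
    inertias, models = [], {}
    for k in k_range:
        km = KMedoids(n_clusters=k, random_state=seed).fit(impressions)
        inertias.append(km.inertia_)
        models[k] = km
    # Elbow heuristic: pick the k where the decrease in inertia slows the most.
    drops = np.diff(inertias)
    elbow_idx = int(np.argmax(np.diff(drops))) + 1 if len(drops) > 1 else 0
    best_k = list(k_range)[elbow_idx]
    return best_k, models[best_k].labels_
```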

[0064] As shown, there are three different local devices which include different labels and architectures. Interaction happens between a central global ML model, which exists in the central computing device 102, and the users are local computing devices 104, e.g., configurations with embedded systems or mobile phones.

[0065] The ML model training for each local user of a local computing device 104 and the transfer of ML model probabilities values to the global user of a central computing device 102 are capable of running on a low-resource constrained device, such as, for example, one having ~256 MB RAM. This makes the federated learning methods according to embodiments described herein suitable for running on many types of local client computing devices, including contemporary mobile/embedded devices such as smartphones. Advantageously, the federated learning methods according to embodiments described herein are not computationally intensive for local users and local computing devices, and can be implemented in low-power constrained devices.

[0066] Example

[0067] We collected a public dataset of all labels in the data and made it available to all the users - here, telecommunications operators. The public dataset consisted of an alarms dataset corresponding to three telecommunications operators. For the example, the first operator has three labels {l_1, l_2, l_3}, the second operator has three labels {l_2, l_3, l_4}, and the third operator has three labels {l_2, l_4, l_5}. The dataset has similar features, but has different patterns and different labels. In this example, the first operator has incoming data with a new label l_6, while the third operator has incoming data with a new label l_7. Due to geographical factors and technological innovation at one operator, there could be new labels from that operator which were previously unseen by that operator, by the other operators, or by the global model. The objective for each of the users is to classify the alarms as either a true alarm or a false alarm based on their respective features.

[0068] The users have the choice of building their own models. In this example, the users are given the choice of building a Convolutional Neural Network (CNN) model. However, rather than a conventional federated learning setting, the choice of designing the architecture - i.e., the different number of layers and filters in each layer - is given to the users.

[0069] As indicated, in this example, there are three different operators. The three different operators choose to fit three different models. Based on the dataset, operator 1 chooses to fit a three-layer CNN with 32, 64 and 32 filters in each layer respectively. Similarly, operator 2 chooses to fit a two-layer ANN model with 32 and 64 filters in each layer respectively. The third operator chooses to fit a two-layered Recurrent Neural Network (RNN) with 32 and 50 units each. These models are chosen based on the nature of the data and the different iterations.

[0070] In this case, the global model is constructed as follows. The softmax probabilities of the local model are computed on the subset of public data to which the labels in the local model have access. Data Impressions of new data for the respective new labels in the global model are created. The respective new label data is added to the public dataset after clustering across just the new label data. Now, the new label set includes l_6 and l_7 added to the initial label set. The label-based averaging of all distributions of all local softmax probabilities is computed and sent back to the local users. These steps repeat for multiple iterations of the federated learning model.
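The label-based averaging step can be sketched as follows. This is an illustrative interpretation in which each label's softmax column is averaged only over the users that model that label, weighted by each local model's accuracy on the public dataset (the acc function referenced in claim 8); the function and argument names are assumptions made for the sketch.

```python
import numpy as np

def label_wise_average(softmax_by_user, accuracy_by_user, label_sets, global_labels):
    """Average local softmax probabilities label by label, weighting each
    contribution by the local model's accuracy on the public dataset.

    softmax_by_user: dict user -> array (n_public_samples, K_global), zero-padded
                     for labels the user does not model.
    accuracy_by_user: dict user -> float in [0, 1].
    label_sets: dict user -> set of label indices the user actually models.
    global_labels: list of all label indices (0 .. K_global - 1).
    """
    n_samples = next(iter(softmax_by_user.values())).shape[0]
    out = np.zeros((n_samples, len(global_labels)))
    for k in global_labels:
        users_k = [u for u in softmax_by_user if k in label_sets[u]]
        if not users_k:
            continue  # no local model reports this label in this iteration
        weights = np.array([accuracy_by_user[u] for u in users_k])
        weights = weights / weights.sum()
        out[:, k] = sum(w * softmax_by_user[u][:, k] for w, u in zip(weights, users_k))
    return out
```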

[0071] The final average accuracies obtained for the three local models are 82%, 88% and 75%. After the global model is constructed, the final accuracies obtained at the three local models are 85%, 93% and 79%. In this way, we show that the federated learning model and methods disclosed herein are effective and yield better results when compared to the local models operating by themselves. The model is run for 50 iterations; the accuracies are reported across three different experimental trials and averaged.

[0072] FIG. 3A illustrates a message diagram 300 according to an embodiment. Local users or client computing devices 104 (three local users are shown) and central computing device 102 communicate with each other. The central computing device 102 first provides a public dataset to each of the local computing devices 104 at 310, 312, and 314. Each of the local computing devices 104 has a local ML model - Local Computing Device 1 has Local ML Model M1 320; Local Computing Device 2 has Local ML Model M2 322; and Local Computing Device 3 has Local ML Model M3 324. Each of the local computing devices 104 also has a private dataset - Local Computing Device 1 has Private Dataset plus new label(s) 330; Local Computing Device 2 has Private Dataset - no new label(s) 332; and Local Computing Device 3 has Private Dataset plus new label(s) 334. As described above with reference to FIG. 2 and below in further detail with reference to FIG. 5A, each of the local computing devices uses the received public dataset and its own private dataset, including with new label(s) in the case of Local Computing Device 1 and Local Computing Device 3, to train its local ML model and generate a weights matrix and ML model probabilities values, which are provided to the central computing device 102. Local Computing Device 1 provides ML model probabilities values (M1) to the central computing device at step 340, and Local Computing Device 3 provides ML model probabilities values (M3) to the central computing device at step 342. As explained below in further detail with reference to FIG. 4A, after receiving the ML model probabilities values from the local computing devices, the central computing device generates a weights matrix using the received ML model probabilities values, performs sampling using the generated weights matrix, and generates data impressions, including data impressions for the new labels, by clustering. The global ML model 345 is trained using the generated data impressions.

[0073] FIG. 3B illustrates further messages for the message diagram 300 according to an embodiment. As indicated with reference to FIG. 3A, the global ML model 345 is trained using the generated data impressions, which include the new labels. The trained global ML model is the Updated Global ML Model 350 shown in FIG. 3B for the central computing device 102. As explained below in further detail with reference to FIG. 4B, the central computing device generates ML model probabilities values by averaging using the generated data impressions, including data impressions for the new labels. The central computing device 102 provides these ML model probabilities values to each of the local computing devices 104 at 360, 362, and 364. Each of the local computing devices 104 uses the received ML model probabilities values to train its local ML model and, thereafter, Local Computing Device 1 has Updated Local ML Model M1 370; Local Computing Device 2 has Updated Local ML Model M2 372; and Local Computing Device 3 has Updated Local ML Model M3 374.

[0074] FIG. 4A illustrates a flow chart according to an embodiment. Process 400 is a method for distributed machine learning (ML) at a central computing device. Process 400 may begin with step s402.

[0075] Step s402 comprises providing a first dataset including a first set of labels to a plurality of local computing devices including a first local computing device and a second local computing device.

[0076] Step s404 comprises receiving, from the first local computing device, a first set of ML model probabilities values from training a first local ML model using the first set of labels.

[0077] Step s406 comprises receiving, from the second local computing device, a second set of ML model probabilities values from training a second local ML model using the first set of labels and one or more labels different from any label in the first set of labels.

[0078] Step s408 comprises generating a weights matrix using the received first set of ML model probabilities values and the received second set of ML model probabilities values.

[0079] Step s410 comprises generating a third set of ML model probabilities values by sampling using the generated weights matrix.

[0080] Step s412 comprises generating a first set of data impressions using the generated third set of ML model probabilities values, wherein the first set of data impressions includes data impressions for each of the one or more labels different from any label in the first set of labels.

[0081] Step s414 comprises generating a second set of data impressions by clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels.

[0082] Step s416 comprises training a global ML model using the generated second set of data impressions.
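Steps s412 through s416 leave the construction of data impressions to the implementer. By way of illustration only, one common way to realise them, assumed here, is to optimise randomly initialised inputs until the current global ML model's softmax outputs approach the sampled target distributions (the third set of ML model probabilities values); the model, optimiser settings, and iteration count below are illustrative.

```python
import tensorflow as tf

def generate_data_impressions(global_model, target_softmax, steps=200, lr=0.05):
    """Synthesise inputs whose softmax outputs under `global_model` move towards the sampled
    target distributions. target_softmax: float tensor of shape (n_impressions, n_labels)."""
    n = target_softmax.shape[0]
    x = tf.Variable(tf.random.normal((n,) + global_model.input_shape[1:]))
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    kld = tf.keras.losses.KLDivergence()
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = kld(target_softmax, global_model(x, training=False))
        grads = tape.gradient(loss, [x])
        optimizer.apply_gradients(zip(grads, [x]))
    return x.numpy()   # first set of data impressions (step s412)
```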

[0083] FIG. 4B illustrates a flow chart according to an embodiment. In some embodiments, process 400 further includes the steps of process 450. Process 450 may begin with step s452.

[0084] Step s452 comprises generating a fourth set of ML model probabilities values by averaging using the first set of data impressions and the second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels.

[0085] Step s454 comprises providing the generated fourth set of ML model probabilities values to the plurality of local computing devices, including the first local computing device and the second local computing device, for training local ML models.

[0086] In some embodiments, the received first set of ML model probabilities values and the received second set of ML model probabilities values are one of: Softmax values, sigmoid values, and Dirichlet values. In some embodiments, sampling using the generated weights matrix is according to Softmax values and a Dirichlet distribution function. In some embodiments, the generated weights matrix is a class similarity matrix. In some embodiments, clustering using the generated first set of data impressions for each of the one or more labels different from any label in the first set of labels is according to a k-medoids clustering algorithm and uses the elbow method to determine the number of clusters k.
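By way of illustration only, the sampling and clustering choices mentioned in this paragraph could be realised as below. The sketch assumes the class-similarity (weights) matrix is used row-wise as the concentration parameter of a Dirichlet distribution, and it uses the KMedoids implementation from the scikit-learn-extra package together with a simple elbow heuristic; the scaling factor, the candidate range for k, and the heuristic itself are assumptions rather than requirements of the embodiments.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids

def sample_target_softmax(class_similarity, label_index, n_samples=100, beta=1.0):
    """Draw softmax-like target vectors for one label from a Dirichlet distribution whose
    concentration is that label's row of the class-similarity (weights) matrix."""
    alpha = beta * np.clip(class_similarity[label_index], 1e-3, None)  # parameters must be > 0
    return np.random.dirichlet(alpha, size=n_samples)

def cluster_new_label_impressions(impressions, k_max=10):
    """Cluster data impressions of the new labels with k-medoids, choosing k at the point of
    the largest relative drop in within-cluster cost (a simple elbow heuristic).
    impressions: 2-D array (n_impressions, n_features); flatten higher-dimensional data first."""
    costs = []
    for k in range(1, min(k_max, len(impressions)) + 1):
        costs.append(KMedoids(n_clusters=k, random_state=0).fit(impressions).inertia_)
    if len(costs) < 2:
        best_k = 1
    else:
        drops = np.diff(costs) / np.array(costs[:-1])   # relative change in cost per added cluster
        best_k = int(np.argmin(drops)) + 2              # k just after the largest relative drop
    return KMedoids(n_clusters=best_k, random_state=0).fit(impressions)
```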

[0087] FIG. 5A illustrates a flow chart according to an embodiment. Process 500 is a method for distributed machine learning (ML) at a local computing device. Process 500 may begin with step s502.

[0088] Step s502 comprises receiving a first dataset including a first set of labels.

[0089] Step s504 comprises generating a second dataset including the first set of labels from the received first dataset and one or more labels different from any label in the first set of labels.

[0090] Step s506 comprises training a local ML model using the generated second dataset.

[0091] Step s508 comprises generating a weights matrix using the one or more labels different from any label in the first set of labels.

[0092] Step s510 comprises generating a set of ML model probabilities values by using the generated weights matrix and the trained local ML model.

[0093] Step s512 comprises providing the generated set of ML model probabilities values to a central computing device.
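By way of illustration only, steps s502 through s512 could be condensed as follows on a local computing device, assuming a compiled Keras model and numpy arrays. The construction of the local weights matrix from the new labels is not prescribed above, so this sketch simply sends the local softmax outputs on the public data upstream; the names and epoch count are placeholders.

```python
import numpy as np

def local_round(local_model, public_x, public_y, private_x, private_y, epochs=5):
    """Simplified local-device round: the model's output layer is assumed to already cover
    the local label set, including any new labels present in the private dataset."""
    # s502/s504: second dataset = received public data plus private data (possibly new labels)
    x = np.concatenate([public_x, private_x])
    y = np.concatenate([public_y, private_y])

    # s506: train the local ML model on the combined dataset
    local_model.fit(x, y, epochs=epochs, verbose=0)

    # s508/s510: the weights matrix over the new labels is implementation-specific; here the
    # probabilities values sent upstream are the local softmax outputs on the public dataset.
    probabilities = local_model.predict(public_x, verbose=0)

    # s512: provide the generated probabilities values to the central computing device
    return probabilities
```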

[0094] FIG. 5B illustrates a flow chart according to an embodiment. In some embodiments, process 500 further includes the steps of process 550. Process 550 may begin with step s552.

[0095] Step s552 comprises receiving a set of ML model probabilities values from the central computing device representing an averaging using a first set of data impressions and a second set of data impressions for each label of the first set of labels and the one or more labels different from any label in the first set of labels.

[0096] Step s554 comprises training the local ML model using the received set of ML model probabilities values.
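By way of illustration only, step s554 could be realised as a distillation-style update in which the averaged probabilities returned by the central computing device serve as soft targets for the public dataset; the KL-divergence loss and epoch count are assumptions of this sketch, and the local model's output layer is assumed to span the same label set as the received probabilities.

```python
import tensorflow as tf

def update_local_model(local_model, public_x, averaged_probabilities, epochs=3):
    """Step s554 (simplified): nudge the local model's softmax outputs on the public dataset
    towards the averaged ML model probabilities values received from the central device."""
    local_model.compile(optimizer="adam", loss=tf.keras.losses.KLDivergence())
    local_model.fit(public_x, averaged_probabilities, epochs=epochs, verbose=0)
    return local_model
```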

[0097] In some embodiments, the received first dataset is a public dataset and the generated second dataset is a private dataset. In some embodiments, the local ML model is one of: a convolutional neural network (CNN), an artificial neural network (ANN), and a recurrent neural network (RNN).

[0098] In some embodiments, the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of radio network nodes which are configured to classify an alarm type using the trained local ML models. In some embodiments, the plurality of local computing devices, including the first local computing device and the second local computing device, comprises a plurality of wireless sensor devices which are configured to classify an alarm type using the trained local ML models.

[0099] FIG. 6 is a block diagram of an apparatus 600 (e.g., a local computing device 104 and/or central computing device 102), according to some embodiments. As shown in FIG. 6, the apparatus may comprise: processing circuitry (PC) 602, which may include one or more processors (P) 655 (e.g., a general purpose microprocessor and/or one or more other processors, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like); a network interface 648 comprising a transmitter (Tx) 645 and a receiver (Rx) 647 for enabling the apparatus to transmit data to and receive data from other computing devices connected to a network 610 (e.g., an Internet Protocol (IP) network) to which network interface 648 is connected; and a local storage unit (a.k.a., “data storage system”) 608, which may include one or more non-volatile storage devices and/or one or more volatile storage devices. In embodiments where PC 602 includes a programmable processor, a computer program product (CPP) 641 may be provided. CPP 641 includes a computer readable medium (CRM) 642 storing a computer program (CP) 643 comprising computer readable instructions (CRI) 644. CRM 642 may be a non-transitory computer readable medium, such as magnetic media (e.g., a hard disk), optical media, memory devices (e.g., random access memory, flash memory), and the like. In some embodiments, the CRI 644 of computer program 643 is configured such that when executed by PC 602, the CRI causes the apparatus to perform steps described herein (e.g., steps described herein with reference to the flow charts). In other embodiments, the apparatus may be configured to perform steps described herein without the need for code. That is, for example, PC 602 may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software.

[00100] FIG. 7 is a schematic block diagram of the apparatus 600 according to some other embodiments. The apparatus 600 includes one or more modules 700, each of which is implemented in software. The module(s) 700 provide the functionality of apparatus 600 described herein (e.g., the steps described herein with respect to FIGS. 2, 3A, 3B, 4A, 4B, 5A, and 5B).

[00101] While various embodiments of the present disclosure are described herein, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein or otherwise clearly contradicted by context.

[00102] Additionally, while the processes described above and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of the steps may be re-arranged, and some steps may be performed in parallel.