Title:
ENABLING TRAINING OF A MACHINE-LEARNING MODEL FOR TRIGGER-WORD DETECTION
Document Type and Number:
WIPO Patent Application WO/2022/106682
Kind Code:
A1
Abstract:
It is provided a method for enabling training a machine-learning, ML, model for trigger-word detection, the method being performed in a training data provider (1). The method comprises: receiving (40) sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determining (42) that the sound-based data corresponds to a trigger word, and labelling this sound-based data to correspond to the trigger word; and providing (44) the labelled sound-based data to train the ML model.

Inventors:
TANZI TANZIA HAQUE (SE)
Application Number:
PCT/EP2021/082484
Publication Date:
May 27, 2022
Filing Date:
November 22, 2021
Assignee:
ASSA ABLOY AB (SE)
International Classes:
G10L15/06; G06N3/09; G06N3/091; G06N3/098; G10L15/08; G10L15/18; G10L15/30; G10L25/24
Foreign References:
US20160189706A12016-06-30
CN110706695A2020-01-17
Other References:
YU-AN CHUNG ET AL: "Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 23 March 2018 (2018-03-23), XP081236462
Attorney, Agent or Firm:
KRANSELL & WENNBORG KB (SE)
Claims:
CLAIMS

1. A method for enabling training a machine-learning, ML, model for trigger-word detection, the method being performed in a training data provider (1), the method comprising: receiving (40) sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determining (42) that the sound-based data corresponds to a trigger word, and labelling this sound-based data to correspond to the trigger word; and providing (44) the labelled sound-based data to train the ML model; wherein the determining (42) that the sound-based data corresponds to a trigger word comprises: performing (42a) speech recognition of the sound-based data; finding (42b) a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtaining (42c) semantic vector data based on the found section of sound-based data; and determining (42d) that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

2. The method according to claim 1, wherein the sound-based data is in the form of mel-frequency cepstral coefficients, MFCCs.

3. The method according to any one of the preceding claims, further comprising: discarding (41) sections of the sound-based data that fail to correspond to voice sounds.

4. The method according to any one of the preceding claims, further comprising, after the providing (44) the labelled sound-based data: discarding (46) all of the sound-based data.

5. The method according to any one of the preceding claims, further comprising: training (45) a local ML model; and transmitting (48) at least part of the local ML model to a central location for aggregated learning of a central ML model.

6. The method according to claim 5, further comprising: receiving (50) an updated ML model being based on the central ML model.

7. A training data provider (1) for enabling training a machine-learning, ML, model for trigger-word detection, the training data provider (1) comprising: a processor (60); and a memory (64) storing instructions (67) that, when executed by the processor, cause the training data provider (1) to: receive sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determine that the sound-based data corresponds to a trigger word, and label this sound-based data to correspond to the trigger word; and provide the labelled sound-based data to train the ML model; wherein the instructions to determine that the sound-based data corresponds to a trigger word comprise instructions (67) that, when executed by the processor, cause the training data provider (1) to: perform speech recognition of the sound-based data; find a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtain semantic vector data based on the found section of sound-based data; and determine that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

8. The training data provider (1) according to claim 7, wherein the sound-based data is in the form of mel-frequency cepstral coefficients, MFCCs.

9. The training data provider (1) according to any one of claims 7 to 8, further comprising instructions (67) that, when executed by the processor, cause the training data provider (1) to: discard sections of the sound-based data that fail to correspond to voice sounds.

10. The training data provider (1) according to any one of claims 7 to 9, further comprising instructions (67) that, when executed by the processor, cause the training data provider (1) to, after the instructions to provide the labelled sound-based data, discard all of the sound-based data.

11. The training data provider (1) according to any one of claims 7 to 10, further comprising instructions (67) that, when executed by the processor, cause the training data provider (1) to: train a local ML model; and transmit at least part of the local ML model to a central location for aggregated learning of a central ML model.

12. The training data provider (1) according to claim 11, further comprising instructions (67) that, when executed by the processor, cause the training data provider (1) to: receive an updated ML model being based on the central ML model.

13. A computer program (67, 91) for enabling training a machine-learning, ML, model for trigger-word detection, the computer program comprising computer program code which, when executed on a training data provider (1), causes the training data provider (1) to: receive sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determine that the sound-based data corresponds to a trigger word, and label this sound-based data to correspond to the trigger word; and provide the labelled sound-based data to train the ML model; wherein the instructions to determine that the sound-based data corresponds to a trigger word comprise instructions (67) that, when executed by the processor, cause the training data provider (1) to: perform speech recognition of the sound-based data; find a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtain semantic vector data based on the found section of sound-based data; and determine that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

14. A computer program product (64, 90) comprising a computer program according to claim 13 and a computer readable means on which the computer program is stored.

Description:
ENABLING TRAINING OF A MACHINE-LEARNING MODEL FOR TRIGGER-WORD DETECTION

TECHNICAL FIELD

[0001] The present disclosure relates to the field of trigger-word detection and in particular to enabling training of a machine-learning model for trigger-word detection.

BACKGROUND

[0002] New technology opens up new opportunities. For instance, with the evolution of electronic devices, it is now possible to monitor people using various sensors. This can be particularly useful for elderly or disabled people, who in this way can enjoy greatly improved quality of life by living in their own home instead of in a staffed care facility.

[0003] According to some such monitoring devices, the monitored person is able to trigger help when needed using their voice. This can be achieved by defining trigger words that the monitored person can say for this purpose. An audio sensor captures a sound signal and detects if someone utters the trigger word, e.g. ‘help’.

[0004] The detection of the trigger word can be based on a machine-learning (ML) model. The ML model needs to be trained, ideally also over time after deployment, to improve the performance of the trigger-word detection. However, providing the training data is a tedious and time-consuming process.

SUMMARY

[0005] One object is to improve how machine learning models for trigger-word detection can be trained.

[0006] According to a first aspect, it is provided a method for enabling training a machine-learning, ML, model for trigger-word detection. The method is performed in a training data provider. The method comprises: receiving sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determining that the sound-based data corresponds to a trigger word, and labelling this sound-based data to correspond to the trigger word; and providing the labelled sound-based data to train the ML model. The determining that the sound-based data corresponds to a trigger word comprises: performing speech recognition of the sound-based data; finding a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtaining semantic vector data based on the found section of sound-based data; and determining that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

[0007] The sound-based data may be in the form of mel-frequency cepstral coefficients, MFCCs.

[0008] The method may further comprise: discarding sections of the sound-based data that fail to correspond to voice sounds.

[0009] The method may further comprise, after the providing the labelled sound-based data: discarding all of the sound-based data.

[0010] The method may further comprise: training a local ML model; and transmitting at least part of the local ML model to a central location for aggregated learning of a central ML model.

[0011] The method may further comprise: receiving an updated ML model being based on the central ML model.

[0012] According to a second aspect, it is provided a training data provider for enabling training a machine-learning, ML, model for trigger-word detection. The training data provider comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the training data provider to: receive sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determine that the sound-based data corresponds to a trigger word, and label this sound-based data to correspond to the trigger word; and provide the labelled sound-based data to train the ML model. The instructions to determine that the sound-based data corresponds to a trigger word comprise instructions that, when executed by the processor, cause the training data provider to: perform speech recognition of the sound-based data; find a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtain semantic vector data based on the found section of sound-based data; and determine that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

[0013] The sound-based data may be in the form of mel-frequency cepstral coefficients, MFCCs.

[0014] The training data provider may further comprise instructions that, when executed by the processor, cause the training data provider to: discard sections of the sound-based data that fail to correspond to voice sounds.

[0015] The training data provider may further comprise instructions that, when executed by the processor, cause the training data provider to, after the instructions to provide the labelled sound-based data, discard all of the sound-based data.

[0016] The training data provider may further comprise instructions that, when executed by the processor, cause the training data provider to: train a local ML model; and transmit at least part of the local ML model to a central location for aggregated learning of a central ML model.

[0017] The training data provider may further comprise instructions that, when executed by the processor, cause the training data provider to: receive an updated ML model being based on the central ML model.

[0018] According to a third aspect, it is provided a computer program for enabling training a machine-learning, ML, model for trigger-word detection. The computer program comprises computer program code which, when executed on a training data provider, causes the training data provider to: receive sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determine that the sound-based data corresponds to a trigger word, and label this sound-based data to correspond to the trigger word; and provide the labelled sound-based data to train the ML model. The instructions to determine that the sound-based data corresponds to a trigger word comprise instructions that, when executed by the processor, cause the training data provider to: perform speech recognition of the sound-based data; find a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtain semantic vector data based on the found section of sound-based data; and determine that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

[0019] According to a fourth aspect, it is provided a computer program product comprising a computer program according to the third aspect and a computer readable means on which the computer program is stored.

[0020] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.

BRIEF DESCRIPTION OF THE DRAWINGS

[0021] Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, in which:

[0022] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied;

[0023] Figs 2A-C are schematic diagrams illustrating embodiments of where the training data provider can be implemented;

[0024] Figs 3A-B are flow charts illustrating embodiments of methods for enabling training an ML model for trigger-word detection, by providing training data for the ML model;

[0025] Fig 4 is a schematic diagram illustrating components of the training data provider of Figs 2A-C; and

[0026] Fig 5 shows one example of a computer program product comprising computer readable means.

DETAILED DESCRIPTION

[0027] The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown. These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.

[0028] Fig 1 is a schematic diagram illustrating an environment in which embodiments presented herein can be applied. A person 5 to be monitored is at least part of the time present in a physical space 14. The physical space 14 can e.g. be a room, a flat, a house, an office, etc. A monitoring device 2 is configured to monitor the person based on an audio sensor 3, such as a microphone. The monitoring device 2 contains a local ML model 4. There may be one or more monitoring devices 2 working in parallel on the same scene or on complementary scenes.

[0029] The monitoring device 2 can be connected to a network 6, which can be an Internet protocol (IP) based network. The network can e.g. comprise any one or more of a local wireless network, a cellular network, a wired local area network, a wide area network (such as the Internet), etc. Optionally, a central ML device 7, containing a central ML model 9, is also connected to the network 6. The central ML device 7 can then be used in cooperation with a plurality of separate monitoring devices 2 with respective local ML models 4, e.g. as part of a federated ML model.

[0030] The monitoring device 2 and/or the central ML device 7 infers results based on input data. Specifically, the monitoring device 2 and/or the central ML device 7 infers when the monitored person 5 utters a trigger word, such as ‘help’, based on audio data captured by the audio sensor 3. The ML models can support a single trigger word or multiple trigger words.

[0031] As described in more detail below, the local ML model 4 and/or the central ML model 9 are trained without manual input, based on captured sound data provided by a training data provider.

[0032] There may be one or more monitoring devices 2 to cover a smaller or larger physical space 14. While the training described in embodiments presented herein is based on sound data, the monitoring device 2 can also contain other sensors, such as any one or more of video, radar, an infrared sensor, etc.

[0033] Figs 2A-C are schematic diagrams illustrating embodiments of where the training data provider 1 can be implemented.

[0034] In Fig 2A, the training data provider 1 is shown, implemented in the monitoring device 2. The monitoring device 2 is thus the host device for the training data provider 1 in this implementation. In this case, the training data is thus obtained locally, at the site of the local ML model 4.

[0035] In Fig 2B, the training data provider 1 is shown, implemented in the central ML device 7. The central ML device 7 is thus the host device for the training data provider 1 in this implementation. In this case, the training data is thus obtained centrally, at the site of the central ML model 9.

[0036] In Fig 2C, the training data provider 1 is shown, implemented as a standalone device. The training data provider 1 thus does not have a host device in this implementation. The training data provider 1 can then be located in any suitable location, as long as it can communicate with an ML model for training and with the monitoring device 2 for obtaining sound-based data.

[0037] Figs 3A-B are flow charts illustrating embodiments of methods for enabling training an ML model for trigger-word detection, by providing training data for the ML model. The method can be performed regularly, e.g. once every 24 hours or on any other suitable schedule, based on the sound-based data collected during that period. In other words, the method can be performed on batch data and is not directly part of the execution of the ML model to infer a result.

[0038] While these embodiments are described with reference to one trigger word, the same processing can occur for multiple trigger words.

[0039] In a receive sound-based data step 40, the training data provider 1 receives sound-based data, the sound-based data being based on sounds captured in a space to be monitored. The sound-based data can be in the form of mel-frequency cepstral coefficients, MFCCs. MFCCs are an efficient way to represent sound data audible to humans.
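
By way of illustration, MFCC extraction from raw audio could look as follows. This is a minimal sketch, assuming the open-source librosa library, a 16 kHz mono signal and 13 coefficients; none of these choices is mandated by the method.

```python
# Minimal MFCC-extraction sketch; librosa and the parameter choices
# (16 kHz sample rate, 13 coefficients) are assumptions for illustration.
import librosa

def extract_mfccs(wav_path, n_mfcc=13):
    """Load an audio file and return its MFCC matrix (n_mfcc x frames)."""
    signal, sample_rate = librosa.load(wav_path, sr=16000)  # mono by default
    return librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=n_mfcc)
```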

[0040] In an optional discard non-voice sections step 41, the training data provider 1 discards sections of the sound-based data that fail to correspond to voice sounds. Since the training is used to recognise a trigger word uttered by a user, the sections (in time) that do not contain voice data can safely be discarded, thus reducing storage and processing requirements, as well as bandwidth when the sound-based data is communicated.
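
One simple way to realise step 41 is an energy-based voice-activity gate, sketched below. The frame length and energy threshold are illustrative assumptions, and a dedicated voice-activity detector would likely perform better in practice.

```python
# Energy-threshold sketch of step 41; frame size and threshold are
# illustrative assumptions, not values from the disclosure.
import numpy as np

def discard_non_voice(signal, frame_len=512, energy_threshold=0.01):
    """Keep only frames whose short-time energy suggests voice activity."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energies = (frames ** 2).mean(axis=1)       # short-time energy per frame
    return frames[energies > energy_threshold]  # drop quiet (non-voice) frames
```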

[0041] In a conditional trigger word step 42, the training data provider 1 determines when the sound-based data corresponds to a trigger word. This step can be performed repeatedly for a plurality of (time) sections of the sound-based data. When a trigger word is determined, the training data provider labels this (section of) sound-based data to correspond to the trigger word and the method proceeds to a provide labelled data for training step 44. Otherwise, the method ends. The labelling thus indicates that the sound-based data corresponds to a trigger word. In one embodiment, the trigger word is not determined using an ML model, since it is the ML model that is to be trained. Training an ML model based on its own inference would not result in any improvement of the ML model.

[0042] In a provide labelled data for training step 44, the training data provider 1 provides the labelled sound-based data to train the ML model. Since the data is labelled, the labelled sound-based data can be used by the ML model to improve its performance. The ML model can be the local ML model 4 and/or the central ML model 9. When the labelled sound-based data needs to be transmitted, e.g. from the monitoring device 2 to the central ML device 7, the labelled sound-based data can be encrypted to preserve privacy during the data transfer, e.g. as sketched below.
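
The disclosure does not specify an encryption scheme; the sketch below uses symmetric encryption from the Python cryptography package purely as one example of protecting labelled data in transit, with key handling and the payload format as assumptions.

```python
# Illustrative encryption of labelled data before transfer (step 44);
# the cryptography package, key handling and payload format are assumptions.
import pickle
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, a managed or pre-shared key
cipher = Fernet(key)

labelled_data = [(b"mfcc-bytes-for-section", "help")]  # placeholder payload
ciphertext = cipher.encrypt(pickle.dumps(labelled_data))
# ... transmit ciphertext to the central ML device 7 ...
restored = pickle.loads(cipher.decrypt(ciphertext))
```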

[0043] In an optional train local model step 45, the training data provider trains the local ML model 4.

[0044] In an optional discard all sound-based data step 46, the training data provider 1 discards all of the sound-based data. In this way, when the processing is done, no privacy sensitive sound-based data is stored.

[0045] In an optional transmit model for central training step 48, the training data provider 1 transmits at least part of the local ML model 4 to a central location for aggregated learning of a central ML model.

[0046] In an optional receive updated model step 50, the training data provider receives an updated ML model being based on the central ML model.

[0047] When steps 45, 48 and 50 are performed, these form part of a federated learning process, where the central ML model can be based on multiple local ML models, and where the resulting aggregated central ML model is used for the local ML models. Such a solution is particularly applicable for situations where events occur only rarely, such as trigger-word detection. By aggregating the local ML models in the central ML model, which is pushed back to the local level, the training is based on multiple local ML models, which thus increases the number of instances of trigger-word detection.
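
The disclosure leaves the aggregation method open; federated averaging (FedAvg) is a common choice and is sketched below, with each local ML model represented, for illustration, as a dictionary of weight arrays.

```python
# FedAvg-style aggregation sketch; the disclosure does not prescribe this
# particular aggregation method, and the weight-dict representation is assumed.
import numpy as np

def federated_average(local_models):
    """Average the weights of several local ML models into a central model."""
    return {name: np.mean([m[name] for m in local_models], axis=0)
            for name in local_models[0]}

# Usage: aggregate two monitoring devices' models, then push the result back.
m1 = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
m2 = {"w": np.array([3.0, 4.0]), "b": np.array([1.5])}
central = federated_average([m1, m2])  # {"w": [2., 3.], "b": [1.]}
```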

[0048] Looking now to Fig 3B, this is a flow chart illustrating sub-steps of the conditional trigger word step 42.

[0049] In a speech recognition sub-step 42a, the training data provider 1 performs speech recognition of the sound-based data.

[0050] In a find close to trigger word sub-step 42b, the training data provider 1 finds a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word. ‘Close’ can here be determined by a metric of speech-recognition similarity between the sound-based data and the trigger word being higher than a lower threshold, but lower than the threshold at which the sound-based data is considered to correspond to the trigger word.
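
In code, this near-miss test of sub-step 42b could be a dual-threshold comparison on the recogniser’s similarity score, as sketched below; both threshold values are illustrative assumptions.

```python
# Dual-threshold near-miss test for sub-step 42b; the similarity score is
# assumed to come from the speech recogniser, and both thresholds are
# illustrative assumptions.
def is_near_miss(similarity, match_threshold=0.9, near_threshold=0.6):
    """True when speech recognition almost, but not quite, matched the
    trigger word: above the near threshold, below the match threshold."""
    return near_threshold < similarity < match_threshold
```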

[0051] In a speech2vec sub-step 42c, the training data provider 1 obtains semantic vector data based on the found section of sound-based data. The found section of sound-based data is thus mapped to a vector. The vector is a sequence of numbers, in the format of (a, b, c, ...), where a, b, c, etc. are real numbers. The size of the vector is also called the number of dimensions of the vector. This step can e.g. be performed using a separate ML model, e.g. one based on the speech2vec framework. Speech2vec is similar to word2vec, which vectorises words in text format, but speech2vec is based on vectorisation of sound-based data, i.e. not text data.

[0052] In a conditional distance to trigger word < threshold sub-step 42d, the training data provider 1 determines when the found section of sound-based data corresponds to a trigger word. This is determined when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance. When this is the case, the method proceeds to the determine trigger word sub-step 42e. Otherwise, there is no trigger word determined in the conditional trigger word step 42. The distance can e.g. be a Euclidean distance or a cosine distance between the vectors.
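
Sub-step 42d could then be realised as below. The semantic vectors are assumed to come from a trained speech2vec-style encoder (not shown), and the vectors and threshold value in the usage example are illustrative assumptions.

```python
# Sketch of sub-step 42d using cosine distance (a Euclidean distance works
# analogously); vectors and the threshold are illustrative assumptions.
import numpy as np

def corresponds_to_trigger(section_vec, trigger_vec, threshold=0.35):
    """True when the cosine distance between the section's semantic vector
    and the trigger word's vector is below the threshold distance."""
    cos_sim = np.dot(section_vec, trigger_vec) / (
        np.linalg.norm(section_vec) * np.linalg.norm(trigger_vec))
    return (1.0 - cos_sim) < threshold  # cosine distance = 1 - similarity

# Usage with made-up vectors for a weakened utterance of 'help':
print(corresponds_to_trigger(np.array([0.9, 0.1, 0.4]),
                             np.array([1.0, 0.0, 0.5])))  # True
```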

[0053] In a determine trigger word sub-step 42e, the training data provider 1 determines the trigger word to correspond to the found section. This sub-step is performed when the distance comparison in sub-step 42d is positive. In other words, the speech recognition in sub-step 42b considers the section of sound-based data to be close to the trigger word, but not sufficiently close to be a match. When this close (speech recognition) match is combined with the vector analysis of the section of sound-based data, voice utterances which are not sufficient to trigger a speech-recognition match are still found to correspond to the trigger word based on the vector comparison. This allows variations of the trigger word to be detected as the trigger word, which can thus be used for the training of the ML model(s).

[0054] This solution can be exploited in semi-automated training, where a person utters the trigger word in deliberate variations which can correspond to when the person is in real need of help, e.g. when lying down or in a weakened state. Such variations of utterances of the trigger word will then be automatically captured in steps 42a-e and result in ML model(s) that are more capable of recognising the trigger word when such variations of utterances occur in operation.

[0055] Using embodiments presented herein, training can be performed without manual input, and this training can continue regularly based on real data after deployment. Since more training results in a better ML model, the ML model(s) will improve over time, resulting in better recognition of the trigger word, covering new variants over time. Additionally, by vectorizing the sound data (rather than word data from speech recognition), trigger words can be detected even if the person uttering the trigger word is on the floor or is in an otherwise weakened state where speech recognition fails. The ability to detect trigger words in this situation is of great importance for the context of embodiments presented herein, where trigger words can be used e.g. for elderly care. This solution thus solves a problem of improving the recognition of trigger words, and subsequent training of an ML model, when the monitored person may not be able to clearly say the trigger word.

[0056] Fig 4 is a schematic diagram illustrating components of the training data provider 1 of Figs 2A-C. It is to be noted that, when the training data provider 1 is implemented in a host device (see Figs 2A-B and corresponding text above), one or more of the mentioned components can be shared with the host device. A processor 60 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions 67 stored in a memory 64, which can thus be a computer program product. The processor 60 could alternatively be implemented using an application specific integrated circuit (ASIC), field programmable gate array (FPGA), etc. The processor 60 can be configured to execute the method described with reference to Figs 3A-B above.

[0057] The memory 64 can be any combination of random-access memory (RAM) and/or read-only memory (ROM). The memory 64 also comprises persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid-state memory or even remotely mounted memory.

[0058] A data memory 66 is also provided for reading and/or storing data during execution of software instructions in the processor 60. The data memory 66 can be any combination of RAM and/or ROM.

[0059] The training data provider 1 further comprises an I/O interface 62 for communicating with external and/or internal entities. For instance, the I/O interface 62 allows the training data provider 1 to communicate over the network 6. Optionally, the I/O interface 62 also includes a user interface.

[0060] Other components of the training data provider 1 are omitted in order not to obscure the concepts presented herein.

[0061] Fig 5 shows one example of a computer program product 90 comprising computer readable means. On this computer readable means, a computer program 91 can be stored, which computer program can cause a processor to execute a method according to embodiments described herein. In this example, the computer program product is in the form of a removable solid-state memory, e.g. a Universal Serial Bus (USB) drive. As explained above, the computer program product could also be embodied in a memory of a device, such as the computer program product 64 of Fig 4. While the computer program 91 is here schematically shown as a section of the removable solid-state memory, the computer program can be stored in any way which is suitable for the computer program product, such as another type of removable solid-state memory, or an optical disc, such as a CD (compact disc), a DVD (digital versatile disc) or a Blu-Ray disc.

[0062] Hereinafter follows a list of embodiments, enumerated with roman numerals.

[0063] i. A method for enabling training a machine-learning, ML, model for trigger-word detection, the method being performed in a training data provider, the method comprising: receiving sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determining that the sound-based data corresponds to a trigger word, and labelling this sound-based data to correspond to the trigger word; and providing the labelled sound-based data to train the ML model.

[0064] ii. The method according to embodiment i, wherein the sound-based data is in the form of mel-frequency cepstral coefficients, MFCCs.

[0065] iii. The method according to any one of the preceding embodiments, further comprising: discarding sections of the sound-based data that fail to correspond to voice sounds.

[0066] iv. The method according to any one of the preceding embodiments, wherein the determining that the sound-based data corresponds to a trigger word comprises: performing speech recognition of the sound-based data.

[0067] v. The method according to embodiment iv, wherein the determining that the sound-based data corresponds to a trigger word comprises: finding a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtaining semantic vector data based on the found section of sound-based data; and determining that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

[0068] vi. The method according to any one of the preceding embodiments, further comprising, after the providing the labelled sound-based data: discarding all of the sound-based data.

[0069] vii. The method according to any one of the preceding embodiments, further comprising: training a local ML model; and transmitting at least part of the local ML model to a central location for aggregated learning of a central ML model.

[0070] viii. The method according to embodiment vii, further comprising: receiving an updated ML model being based on the central ML model.

[0071] ix. A training data provider for enabling training a machine-learning, ML, model for trigger-word detection, the training data provider comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the training data provider to: receive sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determine that the sound-based data corresponds to a trigger word, and label this sound-based data to correspond to the trigger word; and provide the labelled sound-based data to train the ML model.

[0072] x. The training data provider according to embodiment ix, wherein the sound-based data is in the form of mel-frequency cepstral coefficients, MFCCs.

[0073] xi. The training data provider according to any one of embodiments ix to x, further comprising instructions that, when executed by the processor, cause the training data provider to: discard sections of the sound-based data that fail to correspond to voice sounds.

[0074] xii. The training data provider according to any one of embodiments ix to xi, wherein the instructions to determine that the sound-based data corresponds to a trigger word comprise instructions that, when executed by the processor, cause the training data provider to: perform speech recognition of the sound-based data.

[0075] xiii. The training data provider according to embodiment xii, wherein the instructions to determine that the sound-based data corresponds to a trigger word comprise instructions that, when executed by the processor, cause the training data provider to: find a section of sound-based data that, using the speech recognition, fails to be considered to be a trigger word, but is close to being considered to be a trigger word; obtain semantic vector data based on the found section of sound-based data; and determine that the found section of sound-based data corresponds to a trigger word when a distance, in vector space, between the semantic vector of the sound-based data and a vector corresponding to the trigger word, is less than a threshold distance.

[0076] xiv. The training data provider according to any one of embodiments ix to xiii, further comprising instructions that, when executed by the processor, cause the training data provider to, after the instructions to provide the labelled sound-based data, discard all of the sound-based data.

[0077] xv. The training data provider according to any one of embodiments ix to xiv, further comprising instructions that, when executed by the processor, cause the training data provider to: train a local ML model; and transmit at least part of the local ML model to a central location for aggregated learning of a central ML model.

[0078] xvi. The training data provider according to embodiment xv, further comprising instructions that, when executed by the processor, cause the training data provider to: receive an updated ML model being based on the central ML model.

[0079] xvii. A computer program for enabling training a machine-learning, ML, model for trigger-word detection, the computer program comprising computer program code which, when executed on a training data provider, causes the training data provider to: receive sound-based data, the sound-based data being based on sounds captured in a space to be monitored; determine that the sound-based data corresponds to a trigger word, and label this sound-based data to correspond to the trigger word; and provide the labelled sound-based data to train the ML model.

[0080] xviii. A computer program product comprising a computer program according to embodiment xvii and a computer readable means on which the computer program is stored.

[0081] The aspects of the present disclosure have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims. Thus, while various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.