Title:
METHOD AND SYSTEM FOR TARGET SOURCE SEPARATION
Document Type and Number:
WIPO Patent Application WO/2023/195541
Kind Code:
A1
Abstract:
Embodiments of the present disclosure disclose a system and method for extraction of a target sound signal. The system collects a mixture of sound signals. The system selects a query identifying the target sound signal to be extracted from the mixture of sound signals, the query comprising one or more identifiers. Each identifier is present in a predetermined set of one or more identifiers and defines at least one of mutually inclusive and mutually exclusive characteristics of the mixture of sound signals. The system determines one or more logical operators connecting the extracted one or more identifiers. The system transforms the one or more identifiers and the determined logical operators into a digital representation. The system executes a neural network trained to extract the target sound signal by combining the digital representation with intermediate outputs of intermediate layers of the neural network.

Inventors:
WICHERN GORDON (US)
TZINIS EFTHYMIOS (US)
SUBRAMANIAN ASWIN SHANMUGAM (US)
LE ROUX JONATHAN (US)
Application Number:
PCT/JP2023/014469
Publication Date:
October 12, 2023
Filing Date:
March 31, 2023
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
International Classes:
G10L21/0308; G06N3/02; G10L25/30
Other References:
EFTHYMIOS TZINIS ET AL: "Heterogeneous Target Speech Separation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 April 2022 (2022-04-07), XP091201116
XUBO LIU ET AL: "Separate What You Describe: Language-Queried Audio Source Separation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 28 March 2022 (2022-03-28), XP091185865
Attorney, Agent or Firm:
FUKAMI PATENT OFFICE, P.C. (JP)
Claims:
[CLAIMS]

[Claim 1]

A sound processing system to extract a target sound signal, the sound processing system comprising: at least one processor; and memory having instructions stored thereon that, when executed by the at least one processor, cause the sound processing system to: collect a mixture of sound signals along with the target sound signal; collect a query identifying the target sound signal to be extracted from the mixture of sound signals, the query comprising one or more identifiers; extract from the query each identifier of the one or more identifiers, said each identifier being present in a predetermined set of one or more identifiers, each identifier defining at least one of mutually inclusive and mutually exclusive characteristics of the mixture of sound signals; determine one or more logical operators connecting the extracted one or more identifiers; transform the extracted one or more identifiers and the one or more logical operators into a digital representation predetermined for querying the mixture of sound signals; execute a neural network trained to extract the target sound signal, identified by the digital representation, from the mixture of sound signals, by combining the digital representation with intermediate outputs of intermediate layers of the neural network processing the mixture of sound signals, wherein the neural network is trained with machine learning to extract different sound signals identified in a predetermined set of digital representations; and output the extracted target sound signal. (shown in FIG. 1, 2A, 2B)

[Claim 2]

The sound processing system of claim 1, wherein sound signals in the mixture of sound signals are collected from a plurality of sound sources with facilitation of one or more microphones, wherein each sound source of the plurality of sound sources corresponds to at least one of a speaker, a person or an individual, industrial equipment, a vehicle, or a natural sound. (FIG. 1)

[Claim 3]

The sound processing system of claim 1, wherein the predetermined set of one or more identifiers is associated with a plurality of sound sources, wherein each of the one or more identifiers in the predetermined set of one or more identifiers comprises at least one of: a loudest sound source identifier, a quietest sound source identifier, a farthest sound source identifier, a nearest sound source identifier, a female speaker identifier, a male speaker identifier, and a language specific sound source identifier. (FIG. 1)

[Claim 4]

The sound processing system of claim 1, wherein the one or more identifiers are combined using the one or more logical operators to extract the target sound signal having mutually inclusive and exclusive characteristics, wherein the one or more logical operators comprise at least one of: a NOT operator, an AND operator, and an OR operator, wherein the NOT operator is used with any single identifier of the one or more identifiers.

[Claim 5]

The sound processing system of claim 1, wherein the neural network is trained using the predetermined set of digital representations of a plurality of combinations of identifiers in the predetermined set of one or more identifiers. (FIG. 5A, 5B).

[Claim 6]

The sound processing system of claim 1, wherein the neural network is trained using a positive example selector and a negative example selector to extract the target sound signal. (Shown in FIG. 7)

[Claim 7]

The sound processing system of claim 1, wherein the digital representation is represented by at least one of: a one hot conditional vector, a multi-hot conditional vector, and a text description. (FIG. 3C)

[Claim 8]

The sound processing system of claim 1, wherein the intermediate layers of the neural network comprise one or more intertwined blocks, wherein each of the one or more intertwined blocks comprises at least one of: a feature encoder, a conditioning network, a separation network, and a feature decoder, wherein the conditioning network comprises a feature-invariant linear modulation (FiLM) layer that takes as an input the mixture of sound signals and the digital representation and modulates the input into the conditioning input, wherein the FiLM layer processes the conditioning input and sends the processed conditioning input to the separation network. (FIG. 6).

[Claim 9]

The sound processing system of claim 8, wherein the separation network comprises a convolution block layer that utilizes the conditioning input to separate the target sound signal from the mixture of sound signals, wherein the separation network is configured to produce a latent representation of the target sound signal. (FIG. 4, 6).

[Claim 10]

The sound processing system of claim 8, wherein the feature decoder converts a latent representation of the target sound signal produced by the separation network into an audio waveform. (FIG. 6).

[Claim 11]

A computer-implemented method for extracting a target sound signal, the method comprising: collecting a mixture of sound signals from a plurality of sound sources; selecting a query identifying the target sound signal to be extracted from the mixture of sound signals, the query comprising one or more identifiers; extracting from the query each identifier of the one or more identifiers, said each identifier being present in a predetermined set of one or more identifiers, each identifier defining at least one of mutually inclusive and mutually exclusive characteristics of the mixture of sound signals; determining one or more logical operators connecting the extracted one or more identifiers; transforming the extracted one or more identifiers and the one or more logical operators into a digital representation predetermined for querying the mixture of sound signals; executing a neural network trained to extract the target sound signal identified by the digital representation from the mixture of sound signals by combining the digital representation with intermediate outputs of intermediate layers of the neural network processing the mixture of sound signals, wherein the neural network is trained with machine learning to extract the target sound signal identified in the predetermined set of digital representations; and outputting the extracted target sound signal.

[Claim 12]

The computer-implemented method of claim 11, wherein the mixture of sound signals is collected from a plurality of sound sources with facilitation of one or more microphones, wherein the plurality of sound sources corresponds to at least one of speakers, a person or an individual, industrial equipment, and vehicles.

[Claim 13]

The computer-implemented method of claim 11, wherein the predetermined set of one or more identifiers is associated with a plurality of sound sources, wherein each of the one or more identifiers in the predetermined set of one or more identifiers comprises at least one of: a loudest sound source identifier, a quietest sound source identifier, a farthest sound source identifier, a nearest sound source identifier, a female speaker identifier, a male speaker identifier, and a language specific sound source identifier.

[Claim 14]

The computer-implemented method of claim 11, wherein the one or more identifiers are combined using the one or more logical operators to extract the target sound signal having mutually inclusive and exclusive characteristics.

[Claim 15]

The computer-implemented method of claim 14, wherein the neural network is trained using the set of predetermined digital representations of a plurality of combinations of identifiers in the predetermined set of one or more identifiers.

[Claim 16]

The computer-implemented method of claim 11, further comprising: generating one or more queries associated with the mutually inclusive and exclusive characteristics of the target sound signal during training of the neural network.

[Claim 17]

The computer-implemented method of claim 11, wherein the intermediate layers of the neural network comprise one or more intertwined blocks, wherein each of the one or more intertwined blocks comprises at least one of: a feature encoder, a conditioning network, a separation network, and a feature decoder, wherein the conditioning network comprises a feature-invariant linear modulation (FiLM) layer that takes as an input the mixture of sound signals and modulates the input into the conditioning input, wherein the FiLM layer processes the conditioning input and sends the processed conditioning input to the separation network.

Description:
[DESCRIPTION]

[Title of Invention]

METHOD AND SYSTEM FOR TARGET SOURCE SEPARATION

[Technical Field]

[0001] This disclosure generally relates to target sound source separation, and more specifically to a sound processing system for extracting the target sound from a mixture of sound signals.

[Background Art]

[0002] Traditional source separation systems for extracting a target sound signal are typically intended to isolate only a particular type of sound, such as for speech enhancement or instrument de-mixing, where the target was determined by the training scheme and may not be changed at test time. Traditional source separation approaches typically separate an audio mixture only into sources of a fixed type (for example, isolate vocals from background music), or else they separate all sources in the mixture (e.g., isolate each speaker in a meeting room) without any differentiating factor, and then use post-processing to find a target signal. Recently, conditioning-based approaches have emerged as a promising alternative, where an auxiliary input such as a class label can be used to indicate the desired source, but the set of available conditions is typically mutually exclusive and lacks flexibility.

[0003] For example, in the cocktail party problem, humans have the uncanny ability to focus on a sound source of interest within a complex acoustic scene and may change the target of their focus depending on the situation, relying on attention mechanisms that modulate the cortical responses to auditory stimuli. While the field of sound source separation has made great strides towards reproducing such abilities in machines, particularly with the advent of deep learning approaches, there is still a gap in terms of the flexibility with which the target source can be determined. As already discussed, early works developed “specialist” models intended to isolate only a particular type of sound. Later works such as deep clustering and permutation invariant training (PIT) focused on separating all sources in a mixture without any differentiating factor. However, this still leaves the problem of determining which of the extracted sources is the source of interest unsolved.

[0004] Accordingly, there exists a need for an advanced system that overcomes the above-stated disadvantages. To that end, there is a need for a technical solution to overcome the above-mentioned challenges. More specifically, there is a need for a system that outperforms conventional sound processing systems for extraction of target sound signals.

[Summary of Invention]

[0005] The present disclosure provides an enhanced sound processing system for identifying and extracting a target sound signal from a mixture of sounds. More specifically, the present disclosure provides a sound processing and training system that is configured to identify the target sound signal from the mixture of sounds based on mutually inclusive concepts, such as, loudness, gender, language, spatial location, etc.

[0006] To that end, some embodiments provide a conditioned model that is configured to mimic humans' flexibility when selecting a target sound signal, by focusing on extracting sounds based on semantic concepts and criteria of different natures, i.e., heterogeneous criteria, such as whether a speaker is near or far from the microphone, speaks softly or loudly, or speaks in a certain language, and the like. Some embodiments are based on a realization that the mixture of sound signals is collected from a plurality of sound sources. In addition, a query identifying a target sound signal to be extracted from the mixture of sound signals is collected. The query is associated with the one or more identifiers that are indicative of mutually inclusive characteristics of the target sound signal.

[0007] To that end, the mixture of sound signals is collected from the plurality of sound sources with facilitation of one or more microphones, wherein the plurality of sound sources corresponds to at least one of one or more speakers, a person or an individual, industrial equipment, and vehicles.

[0008] Further, each identifier present in the query having one or more identifiers belongs to a predetermined set of one or more identifiers and is extracted from the query. Each extracted identifier defines at least one of mutually inclusive and mutually exclusive characteristics of the target sound signal. In addition, one or more logical operators are used to connect the extracted one or more identifiers.

[0009] Some embodiments are based on a recognition that the extracted one or more identifiers and the one or more logical operators are transformed into a digital representation. The digital representation of the one or more identifiers is selected from a set of predetermined digital representations of a plurality of combinations of the one or more identifiers.

[0010] To that end, the digital representation corresponds to a conditioning input, which may be represented in any manner, such as by a one hot conditional vector or a multi-hot conditional vector, by a text input, an audio input, and the like, wherein the conditioning input comprises one or more of the mutually inclusive characteristics of the target sound signal.

[0011] Some embodiments are based on the recognition of execution of a neural network trained to extract the target sound signal from the mixture of sound signals by combining the digital representation with intermediate outputs of intermediate layers of the neural network. The neural network is trained for each of the set of predetermined digital representations of the plurality of combinations of the one or more identifiers for extracting the target sound signal from the mixture of sound signals. To that end, at training time, the extraction model is configured to generate one or more queries associated with the one or more identifiers from the predetermined set of one or more identifiers.

[0012] To that end, in some embodiments, the neural network is based on an architecture comprising one or more intertwined blocks, where each block comprises at least: a feature encoder, a conditioning network, a separation network, and a feature decoder. The conditioning network comprises a feature-invariant linear modulation (FiLM) layer that takes as an input the mixture of sound signals and modulates the input into the conditioning input, wherein the FiLM layer processes the conditioning input and sends the processed conditioning input to the separation network.

[0013] Accordingly, one embodiment discloses a method implemented by a computer for extracting a target sound signal. The method includes collecting a mixture of sound signals from a plurality of sound sources. The method further includes selecting a query identifying the target sound signal to be extracted from the mixture of sound signals. The method includes extracting from the query each identifier present in a predetermined set of one or more identifiers. The method includes determining one or more logical operators connecting the extracted one or more identifiers. The method further includes transforming the extracted one or more identifiers and the one or more logical operators into a digital representation predetermined for querying the mixture of sound signals. The method includes executing a neural network trained to extract the target sound signal identified by the digital representation from the mixture of sound signals, by combining the digital representation with intermediate outputs of intermediate layers of the neural network processing the mixture of sound signals. The neural network is trained with machine learning to extract different sound signals identified in a set of predetermined digital representations. Furthermore, the method includes outputting the extracted target sound signal.

[0014] Some embodiments provide the sound processing system that is configured to extract the target sound signal from the mixture of sound signals. The sound processing system comprises at least one processor and memory having instructions stored thereon forming executable modules of the sound processing system. The at least one processor is configured to collect a mixture of sound signals. In addition, the at least one processor is configured to collect a query identifying the target sound signal to be extracted from the mixture of sound signals. The query comprises one or more identifiers. The at least one processor is further configured to extract from the query, each identifier of the one or more identifiers, said each identifier being present in a predetermined set of one or more identifiers. Each identifier defines at least one of mutually inclusive and mutually exclusive characteristics of the mixture of sound signals. The at least one processor is configured to determine one or more logical operators connecting the extracted one or more identifiers. Further, the at least one processor is configured to transform the extracted one or more identifiers and the one or more logical operators into a digital representation predetermined for querying the mixture of sound signals. The at least one processor is further configured to execute a neural network trained to extract the target sound signal identified by the digital representation from the mixture of sound signals by combining the digital representation with intermediate outputs of intermediate layers of the neural network. The at least one processor is further configured to output the extracted target sound signal.

[0015] Various embodiments disclosed herein provide the sound processing system that can more accurately, efficiently and in a reduced time, extract the target sound signal from the mixture of sound signals. Further, various embodiments provide the sound processing system that is based on the neural network that may be trained to extract the target sound signal based on mutually inclusive and/or mutually exclusive characteristics of the target sound signal. The neural network may be trained using combinations of the mutually inclusive and/or mutually exclusive characteristic datasets, in the form of predetermined set of one or more identifiers, in a manner superior to the existing neural networks.

[0016] Further features and advantages will become more readily apparent from the detailed description when taken in conjunction with the accompanying drawings.

[Brief Description of Drawings]

[0017]

[Fig. 1]

FIG. 1 illustrates a block diagram of an environment for extraction of a target sound signal, according to some embodiments of the present disclosure;

[Fig. 2A]

FIG. 2A illustrates a block diagram of a sound processing system to extract the target sound signal, according to some embodiments of the present disclosure;

[Fig. 2B]

FIG. 2B illustrates a functional block diagram of the sound processing system to extract the target sound signal, according to some embodiments of the present disclosure;

[Fig. 2C]

FIG. 2C illustrates a block diagram of a query interface of the sound processing system, according to some embodiments of the present disclosure;

[Fig. 2D]

FIG. 2D illustrates an example of the query interface of the sound processing system, according to some embodiments of the present disclosure;

[Fig. 3A]

FIG. 3A illustrates a block diagram of a method for generating a digital representation, according to some embodiments of the present disclosure;

[Fig. 3B]

FIG. 3B illustrates a block diagram of a plurality of combinations of one or more identifiers, according to some embodiments of the present disclosure;

[Fig. 3C]

FIG. 3C illustrates a block diagram of different types of the digital representation, according to some embodiments of the present disclosure;

[Fig. 4]

FIG. 4 illustrates a block diagram of a neural network, according to some embodiments of the present disclosure;

[Fig. 5A]

FIG. 5A illustrates a block diagram of training of a neural network, according to some embodiments of the present disclosure;

[Fig. 5B]

FIG. 5B illustrates a block diagram of training of a neural network with a bridge condition, according to some embodiments of the present disclosure;

[Fig. 6]

FIG. 6 illustrates a block diagram of execution of the neural network for extracting the target sound signal, according to some embodiments of the present disclosure;

[Fig. 7]

FIG. 7 illustrates a flow diagram showing training of the neural network, in accordance with some embodiments of the present disclosure;

[Fig. 8]

FIG. 8 illustrates a flow diagram of a method executed by the sound processing system for performing signal processing, according to some embodiments of the present disclosure; and

[Fig. 9]

FIG. 9 illustrates a block diagram of the sound processing system for extraction of the target sound signal, according to some embodiments of the present disclosure.

[Description of Embodiments]

[0018] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details. In other instances, apparatuses and methods are shown in block diagram form only in order to avoid obscuring the present disclosure. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.

[0019] As used in this specification and claims, the terms “for example,” “for instance,” and “such as,” and the verbs “comprising,” “having,” “including,” and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open ended, meaning that the listing is not to be considered as excluding other, additional components or items. The term “based on” means at least partially based on. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.

[0020] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.

[0021] The present disclosure provides a sound processing system that is configured to identify a target sound signal from a mixture of sounds based on concepts including mutually inclusive concepts, such as, loudness, gender, language, spatial location, and the like. That is, the same target sound signal may be identified using multiple different such concepts. The sound processing system collects the mixture of sound signals and selects a query identifying the target sound signal to be extracted from the mixture of sound signals. Further, the sound processing system extracts from the query one or more identifiers associated with the target sound signal. The one or more identifiers are indicative of characteristics of the target sound signal including mutually inclusive and mutually exclusive characteristics of the target sound signal. The one or more identifiers are used as conditioning input and are transformed into a digital representation in the form of at least one of: one hot conditional vector, multi hot conditional vector, text input or audio input. Further, the digital representation of the conditioning input is utilized as an input to a neural network to extract the target sound signal from the mixture of sound signals. The neural network is trained to extract the target sound signal identified by the digital representation from the mixture of sound signals by combining the digital representation with intermediate outputs of intermediate layers of the neural network processing the mixture of sound signals. The neural network is trained with machine learning to extract the target sound signal identified in a set of predetermined digital representations. In addition, the neural network is trained based on an architecture having one or more intertwined blocks. The one or more intertwined blocks comprise at least one of: a feature encoder, a conditioning network, a separation network, and a feature decoder. The conditioning network comprises a feature-invariant linear modulation (FiLM) layer that takes as an input an encoded feature representation of the mixture of sound signals and modulates the input based on the conditioning input, which is in the form of the digital representation. The FiLM layer processes the conditioning input and sends the processed conditioning input to the separation network, where the target sound signal is separated from the mixture of sound signals. In addition, the FiLM layer repeats the process of sending the conditioning input to the separation network in order to separate the target sound signal from the mixture of sound signals.
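By way of illustration only, the following Python sketch summarizes the overall flow just described: a query is reduced to a set of identifiers, the identifiers are encoded into a digital representation, and a conditioned network maps the mixture and that representation to the target waveform. The function names, the identifier vocabulary, and the multi-hot encoding are hypothetical placeholders assumed for this sketch, not the disclosed implementation.

    import numpy as np

    # Assumed identifier vocabulary; the actual predetermined set is defined by the system.
    IDENTIFIERS = ["loudest", "quietest", "nearest", "farthest", "female", "male", "english", "french"]

    def encode_query(active_identifiers, vocabulary=IDENTIFIERS):
        # Turn a set of identifiers (e.g. {"female", "loudest"}) into a multi-hot digital representation.
        return np.array([1.0 if name in active_identifiers else 0.0 for name in vocabulary])

    def extract_target(mixture, active_identifiers, conditioned_model):
        # mixture: 1-D waveform; conditioned_model: a trained network f(mixture, condition).
        condition = encode_query(active_identifiers)
        return conditioned_model(mixture, condition)  # conditioned separation yields the target waveform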

[0022] System Overview

[0023] FIG. 1 illustrates an environment 100 for extraction of a target sound signal, according to some embodiments of the present disclosure. The environment 100 includes a plurality of sound sources 102, one or more identifiers 104, one or more microphones 106 and a mixture of sound signals 108, a network 110 and a sound processing system 112.

[0024] The plurality of sound sources 102 may correspond to at least one of: one or more speakers, such as a person or individual, industrial equipment, or vehicles. The mixture of sound signals 108 is collected from the plurality of sound sources 102 with facilitation of the one or more microphones 106. Each sound signal in the mixture of sound signals 108 is associated with criteria or one or more identifiers 104, which define some characteristic of that sound signal in the mixture of sound signals 108. For example, the one or more identifiers 104 may be used to mimic humans' flexibility when selecting which sound source to deal with, by focusing on extracting sounds from the mixture of sound signals 108 based on semantic concepts and criteria of different natures, i.e., heterogeneous criteria. These heterogeneous criteria include, in an example, whether a speaker is near or far from the one or more microphones 106, whether the speaker is talking softly or loudly, or whether the speaker speaks a certain language. In this manner, the one or more identifiers 104 are associated with the plurality of sound sources 102. Other examples of the one or more identifiers 104 comprise at least one of: a loudest sound source, a quietest sound source, a farthest sound source, a nearest sound source, a female speaker, a male speaker, and a language specific sound source.

[0025] The mixture of sound signals 108 associated with these one or more identifiers 104 may be transmitted to the sound processing system 112 through a network 110.

[0026] In one embodiment of the present disclosure, the network 110 is the internet. In another embodiment of the present disclosure, the network 110 is a wireless mobile network. The network 110 includes a set of channels. Each channel of the set of channels supports a finite bandwidth. The finite bandwidth of each channel of the set of channels is based on the capacity of the network 110. Further, the one or more microphones 106 are arranged in a pattern such that the sound signal of each of the plurality of sound sources 102 gets captured. The pattern of arrangement of the one or more microphones 106 allows the sound processing system 112 to use the relative time difference between microphones to estimate localization information of the plurality of sound sources 102. The localization information may be provided in the form of a direction of arrival of the sound or a distance of the sound source from the one or more microphones 106.
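For illustration only, a far-field direction-of-arrival estimate from the relative time difference between two microphones can be sketched as below; the arcsine relation and the two-microphone geometry are standard signal-processing assumptions, not details taken from the disclosure.

    import numpy as np

    def doa_from_tdoa(tdoa_seconds, mic_spacing_m, speed_of_sound_m_s=343.0):
        # Far-field model: tdoa = spacing * sin(angle) / c, so angle = arcsin(c * tdoa / spacing).
        sin_theta = np.clip(speed_of_sound_m_s * tdoa_seconds / mic_spacing_m, -1.0, 1.0)
        return np.degrees(np.arcsin(sin_theta))

    # Example: a 0.2 ms delay across microphones spaced 10 cm apart is roughly 43 degrees off broadside.
    print(doa_from_tdoa(0.0002, 0.10))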

[0027] In operation, the sound processing system 112 is configured to collect the mixture of sound signals 108 from the plurality of sound sources 102. In addition, the sound processing system 112 is configured to collect a query identifying a target sound signal to be extracted from the mixture of sound signals 108. Further, the sound processing system 112 is configured to extract from the query each identifier present in a predetermined set of the one or more identifiers defining mutually inclusive and exclusive characteristics of the mixture of sound signals 108. The sound processing system 112 is further configured to determine one or more logical operators connecting the extracted one or more identifiers. The sound processing system 112 is further configured to transform the extracted one or more identifiers and the one or more logical operators into a digital representation predetermined for querying the mixture of sound signals 108. Furthermore, the sound processing system 112 is configured to execute a neural network trained to extract the target sound signal identified by the digital representation from the mixture of sound signals 108 by combining the digital representation with intermediate outputs of intermediate layers of the neural network processing the mixture of sound signals 108. The sound processing system 112 is further explained in detail in FIG. 2A and FIG. 2B.

[0028] FIG. 2A shows a block diagram of the sound processing system 112 to extract a target sound signal 218, according to some embodiments of the present disclosure. The sound processing system 112 includes a memory 202, a processor 204, a database 206, a query interface 208, and an output interface 216. The memory 202 corresponds to at least one of RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage, or any other storage medium which can be used to store the desired information, and which can be accessed by the sound processing system 112. The memory 202 includes non-transitory computer-storage media in the form of volatile and/or nonvolatile memory. The memory 202 may be removable, non-removable, or a combination thereof. Exemplary memory devices include solid-state memory, hard drives, optical-disc drives, and the like. The memory 202 stores instructions which are executed by the processor 204. The memory 202 includes a neural network 210, and a transformation module 212. The memory 202 is associated with the database 206 of the sound processing system 112. The sound processing system 112 collects the mixture of sound signals 108 from the plurality of sound sources 102. The database 206 is configured to store the collected mixture of sound signals 108. The mixture of sound signals 108 corresponds to a mixture of sound signals having different characteristics. The characteristics include, but are not limited to, the farthest sound source from the one or more microphones 106, the nearest sound source, a female speaker, a French speaker, and the like. In addition, the database 206 stores the characteristics of each of the plurality of sound sources 102. In an embodiment, the database 206 is queried to extract the target sound signal 218 using the query interface 208 of the sound processing system 112. Further, the database 206 stores the predetermined set of the one or more identifiers and the set of predetermined digital representations associated with the target sound signal 218.

[0029] The sound processing system 112 is configured to collect a query identifying the target sound signal 218 to be extracted from the mixture of sound signals 108 using the query interface 208.

[0030] Further, the sound processing system 112 is configured to extract from the collected query each identifier present in a predetermined set of one or more identifiers with facilitation of the extraction model 214. In an example, an identifier corresponds to any characteristic of a sound source, such as "loudest" speaker, "female" speaker, and the like. The identifier is extracted from the collected query using the extraction model 214. The extraction model 214 analyzes the collected query and is configured to identify each identifier from the collected query based on that analysis. Each identifier is present in the predetermined set of one or more identifiers. Further, the predetermined set of one or more identifiers defines the mutually inclusive and exclusive characteristics of the mixture of sound signals 108. In an example, the predetermined set of one or more identifiers is stored in the database 206. The predetermined set of one or more identifiers is generated from a past set of data associated with the mixture of sound signals 108. In addition, the predetermined set of one or more identifiers may be generated from the past set of data through one or more third party sources associated with the sound processing system 112.

[0031] The collected query may contain a plurality of combinations of one or more identifiers. In an example, the collected query may be "Female" and "French" speaker. Here, "Female" speaker and "French" speaker are the two identifiers. The plurality of combinations of one or more identifiers is selected using one or more logical operators. The one or more logical operators allow the sound processing system 112 to select the plurality of combinations of one or more identifiers. The extraction model 214 is configured to determine the one or more logical operators connecting each of the one or more identifiers 104 extracted from the predetermined set of one or more identifiers. In addition, the extraction model 214 is configured to generate one or more queries using the collected query, the one or more identifiers 104, and the one or more determined logical operators. The one or more queries are further processed to generate a conditioning input 222. In an example, the extraction model 214 generates queries such as "Which is the farthest sound signal?" and (&) "Which is the English speaking sound source?". Further, the extraction model 214 utilizes all the queries and generates the conditioning input 222. Here, the conditioning input 222, for example, is "Which is the farthest English speaking sound source?". The conditioning input 222 is an input containing the plurality of combinations of one or more identifiers of the target sound signal 218. The conditioning input 222 corresponds to a processed query containing the plurality of combinations of one or more identifiers.

[0032] Furthermore, the conditioning input 222 is utilized by the transformation module 212. The transformation module 212 is configured to transform the extracted one or more identifiers 220 into a digital representation predetermined for querying the mixture of sound signals 108. In one example, the transformation module 212 selects the digital representation of the extracted one or more identifiers 220 from a set of predetermined digital representations of the plurality of combinations of one or more identifiers of the target sound signal 218 (further explained in FIG. 2B and 3A). Further, the one or more identifiers 104 may be used at training time to train the neural network 210 for extracting the target sound signal 218 from the mixture of sound signals 108, by generating different training combinations of the one or more identifiers. Furthermore, the one or more identifiers 104 are utilized by the extraction model 214 to generate the one or more queries. The one or more queries are associated with the mutually inclusive and exclusive characteristics of the target sound signal 218 used during training of the neural network 210. The extraction model 214 is configured to execute the neural network 210 trained to extract the target sound signal from the mixture of sound signals 108 by combining the digital representation of the one or more identifiers 104 with intermediate outputs of intermediate layers of the neural network 210. Further, the extracted target sound signal 218 is outputted from the output interface 216.

[0033] FIG. 2B shows a functional block diagram 200B of the sound processing system 112 to extract the target sound signal 218, according to some embodiments of the present disclosure. The sound processing system 112 collects the mixture of sound signals 108. The mixture of sound signals 108 is collected from the plurality of sound sources 102 with facilitation of the one or more microphones 106 (as explained in FIG. 1). The sound processing system 112 is configured to collect the query identifying the target sound signal 218 to be extracted from the mixture of sound signals 108 using the query interface 208. The query interface 208 is configured to accept, by collecting the query, the one or more identifiers 104 associated with the target sound signal 218 that are indicative of mutually inclusive and mutually exclusive characteristics of the target sound signal 218. The query is collected by the query interface 208. In an embodiment, the query is collected using a voice command with facilitation of natural language processing techniques. The collected query is further analyzed to identify the one or more identifiers 104 for generating the processed query (conditioning input 222). For example, one identifier from the one or more identifiers 104 corresponds to a loudest speaker, and another identifier from the one or more identifiers 104 corresponds to a female speaker. Here, the mutually inclusive characteristics of the target sound signal 218 correspond to "loudest" and "female", and the processed query corresponds to "female loudest speaker". The target sound signal 218 associated with the above mentioned one or more identifiers 104 indicative of mutually inclusive and exclusive characteristics is the sound source that is the loudest of all the plurality of sound sources 102 and has a female voice. (Explained in detail in FIG. 2C).

[0034] Further, the collected query is utilized by the extraction model 214 to extract each identifier present in the predetermined set of one or more identifiers defining mutually inclusive and exclusive characteristics of the mixture of sound signals 108. The extraction model 214 extracts each identifier to generate the conditioning input 222 (as explained above in FIG. 2A).

[0035] Furthermore, the conditioning input 222 is utilized by the transformation module 212. The transformation module 212 transforms the conditioning input 222 into a digital representation 224 (the transformation module is further explained in FIG. 3A). The digital representation 224 is further sent to the neural network 210 for training of the neural network 210 to extract the target sound signal 218 (the digital representation is further explained in FIG. 3C), and also at test time to generate an output associated with the extracted target sound signal 218 from the mixture of sound signals 108. The neural network 210 is trained with machine learning, with facilitation of one or more machine learning algorithms, to extract different sound signals identified in a predetermined set of digital representations. The predetermined set of digital representations includes representations of different sound signals that may be extracted from a past set of data or one or more third party sources. The target sound signal 218 is extracted out of these different sound signals present in the predetermined set of digital representations. In an embodiment, the one or more machine learning algorithms used to train the neural network 210 include, but may not be limited to, a voice activity detection (VAD) algorithm and a deep speech algorithm. In general, a deep speech algorithm is used for automatically transcribing spoken audio. A deep speech algorithm takes digital audio as an input and returns a "most likely" text transcript of that digital audio. In addition, VAD is a technique in which the presence or absence of human speech is detected.

[0036] FIG. 2C illustrates a block diagram 200C of the query interface 208 of the sound processing system 112, according to some embodiments of the present disclosure. In an example, the query interface 208 includes a section 208A and a section 208B. The section 208A corresponds to a section for selecting identifier(s) (the extracted one or more identifiers 220) associated with the target sound signal 218. The second section 208B corresponds to a section for selecting one or more logical operators. The one or more logical operators include, but may not be limited to, an AND (&) operator 208b1 and an OR (||) operator 208b2. In addition, the one or more logical operators include a NOT (!) operator 208c1. In an example, the NOT operator 208c1 may be applied with any single identifier of the one or more identifiers 220. The extracted one or more identifiers 220 of the first section 208A are combined using the one or more logical operators (the AND operator 208b1 and the OR operator 208b2) of the second section 208B to generate a processed query to extract the target sound signal 218 having mutually inclusive and mutually exclusive characteristics. The one or more logical operators allow generation of the processed query that may further function as the conditioning input 222 for extracting the target sound signal 218.

[0037] In an embodiment, the section 208A has a dropdown list that allows selecting an appropriate identifier such as "French speaker" 208aa, "English speaker" 208bb, "male speaker" 208cc, "female speaker" 208dd, "loudest speaker" 208ee, "quietest speaker" 208ff and the like. The section 208B has a dropdown list that allows selecting the one or more logical operators. In an example, the NOT operator 208c1 is selected and "male speaker" 208cc is selected from the section 208A. In addition, the AND operator 208b1 is selected from the dropdown list in the section 208B. Further, "loudest speaker" 208ee is selected from the section 208A. The conditioning input 222 generated using the inputs selected from the section 208A, the section 208B, and the NOT operator 208c1 corresponds to "Not the male speaker 208cc and must be the loudest speaker 208ee". Furthermore, the query interface 208 includes an "Add identifier" section 208d to select multiple identifiers of the one or more identifiers 220. The "Add identifier" section 208d may or may not be used.

[0038] Further, the query interface 208 has a voice interface 226 that allows a user to give voice commands associated with the target sound signal 218. The voice commands are analyzed using natural language processing techniques, and the one or more identifiers 104 (which may also be a plurality of combinations of one or more identifiers 304) are extracted from the voice commands to generate the conditioning input 222. The query interface 208 is utilized during training of the neural network 210 for accurately extracting the target sound signal 218.

[0039] FIG. 2D illustrates an example of the query interface 208 of the sound processing system 112, according to some embodiments of the present disclosure. The query interface 208 has a section 228. The section 228 allows a user to type a logical expression 230 using the one or more identifiers 220 and the one or more logical operators. The one or more logical operators correspond to the logical operators explained in FIG. 2C, such as the AND operator 208b1, the OR operator 208b2 and the NOT operator 208c1. In an example, the user types the logical expression 230. The logical expression 230 is represented as:

[0040] (French speaker AND (NOT loudest speaker)) OR (Male speaker)

[0041] The logical expression 230 denotes that the user has selected either a French speaker who is not the loudest speaker, or a male speaker.

[0042] Further, the logical expression 230 is not limited to the mentioned expression.
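For illustration only, a rule-based evaluation of such a logical expression against per-source identifier tags might look as follows; the tag names and the reuse of Python's own boolean operators are assumptions made for this sketch, not the parser of the disclosure.

    def matches(source_tags, expression):
        # source_tags: identifiers describing one source, e.g. {"french speaker", "loudest speaker"}.
        env = {"French": "french speaker" in source_tags,
               "Loudest": "loudest speaker" in source_tags,
               "Male": "male speaker" in source_tags}
        return eval(expression, {"__builtins__": {}}, env)  # evaluate the boolean expression

    expr = "(French and (not Loudest)) or Male"
    print(matches({"french speaker", "quietest speaker"}, expr))                   # True
    print(matches({"english speaker", "female speaker", "loudest speaker"}, expr)) # False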

[0043] FIG. 3A shows a block diagram of a method for generating the digital representation 224, according to some embodiments of the present disclosure. In an example as illustrated, the database 206 includes a set of predetermined digital representations 302. The set of predetermined digital representations 302 may be extracted from one or more third party databases. The set of predetermined digital representations 302 may include a plurality of combinations of one or more identifiers 304 of the target sound signal 218. The plurality of combinations of one or more identifiers 304 corresponds to at least two or more characteristics of a particular sound source. In an example, the plurality of combinations of one or more identifiers 304 includes a "loudest" and "female" speaker, a "quietest", "male" and "French speaking" speaker, and the like (further explained in FIG. 3B). The set of predetermined digital representations 302 is utilized by the transformation module 212 to transform the conditioning input 222 into the digital representation 224.

[0044] The transformation module 212 generates the digital representation 224 of the conditioning input 222 of the extracted one or more identifiers 104 from the set of predetermined digital representations 302. For example, if the extracted one or more identifiers 104 correspond to a "loudest", "male" speaker, the transformation module 212 considers "loudest" and "male" speaker as identifiers, transforms these identifiers into a conditioning input for extracting the target sound signal 218, and generates the digital representation 224 of the conditioning input.

[0045] FIG. 3B shows an exemplary block diagram 300B of the plurality of combinations of one or more identifiers 304, according to some embodiments of the present disclosure. The plurality of combinations of the one or more identifiers 304 includes, but may not be limited to: a French male speaker 304a, a farthest English speaker 304b, a loudest Spanish speaker 304c, and a nearest female speaker 304d. The plurality of combinations of the one or more identifiers 304 is not limited to the above mentioned examples.

[0046] FIG. 3C shows a block diagram 300C of the digital representation 224, according to some embodiments of the present disclosure. The digital representation 224 corresponds to a transformed representation of the conditioning input 222. The digital representation 224 is represented by at least one of: a one hot conditional vector 306, a multi-hot conditional vector 308, a text description 310 and the like.

[0047] In an example, the digital representation 224 includes the one hot conditional vector 306. If the conditioning input 222 is "extract the farthest speaker from the microphone", the one hot conditional vector 306 will include '1' in the position corresponding to the farthest sound source and zero in all other positions corresponding to conditions such as closest speaker, male/female, loud/quiet, and the like, in a vector of features of the sound signal. In another example, the digital representation 224 includes the multi-hot conditional vector 308. If the conditioning input 222 is "extract the loudest female speaker", the multi-hot conditional vector 308 will include 1 in the positions corresponding to the loudest speaker and the female speaker, and all other conditions such as male speaker, quieter speaker, and the like will be set to zero.
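For illustration only, the one hot and multi-hot layouts described above can be sketched as follows; the ordering of the condition dimensions is an assumption made for this example.

    import numpy as np

    CONDITIONS = ["farthest", "nearest", "loudest", "quietest", "male", "female"]  # assumed ordering

    def conditional_vector(active_conditions):
        # A single active condition yields a one-hot vector; several yield a multi-hot vector.
        vec = np.zeros(len(CONDITIONS))
        for name in active_conditions:
            vec[CONDITIONS.index(name)] = 1.0
        return vec

    print(conditional_vector(["farthest"]))           # one-hot:   [1. 0. 0. 0. 0. 0.]
    print(conditional_vector(["loudest", "female"]))  # multi-hot: [0. 0. 1. 0. 0. 1.]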

[0048] In an example, the conditioning input 222 is transformed to the digital representation in the form of a one-hot vector or a multi-hot vector at run-time via one or more of: selection in a drop-down menu of possible options, rule-based parsing of text input, or first converting speech to text and then using rule-based parsing. Additionally, to generate a multi-hot vector representation, conditions may be combined using logical operators such as "and" and "or" to create multi-hot vectors. In some embodiments, additional one-hot dimensions are added to indicate "and"/"or" queries for generation of digital representations for conditioning inputs.
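For illustration only, the additional one-hot dimensions indicating an "and" or "or" query could be appended to the multi-hot vector as sketched below; the exact encoding is an assumption, not a detail of the disclosure.

    import numpy as np

    def with_operator_dims(multi_hot, operator):
        # Append two indicator dimensions: [1, 0] for "and" queries, [0, 1] for "or" queries.
        op_flags = {"and": [1.0, 0.0], "or": [0.0, 1.0], None: [0.0, 0.0]}[operator]
        return np.concatenate([multi_hot, op_flags])

    loudest_and_female = np.array([0., 0., 1., 0., 0., 1.])      # layout from the previous sketch
    print(with_operator_dims(loudest_and_female, "and"))          # [0. 0. 1. 0. 0. 1. 1. 0.]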

[0049] In yet another example, the digital representation 224 comprises the text description 310, which is especially important when the target sound signal 218 is not speech but general sound sources such as industrial equipment or vehicles. In this situation, descriptions such as male/female and English/French cannot be used. In this case, the text description 310 is converted into an embedding vector and then the embedding vector is inputted to the neural network 210 instead of the one hot conditional vector 306. For example, a model such as a word2vec model or a Bidirectional Encoder Representations from Transformers (BERT) model may be used to generate the embedding vector from the text description 310. In general, word2vec is a technique for natural language processing. The word2vec model generally uses a neural network to learn word associations from a large corpus of text. In addition, the BERT model is designed to help computers or machines understand the meaning of ambiguous language in text by using surrounding text to establish context. Irrespective of the type of representation of the digital representation 224, the neural network 210 is trained and guided by the digital representation 224 to extract the target sound signal 218. The training of the neural network 210 is further explained in FIG. 4 and FIG. 5A.
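For illustration only, one common way to obtain such an embedding vector from a text description is to mean-pool the token states of a pretrained BERT model, for example via the HuggingFace transformers library; the model choice and the mean pooling are assumptions for this sketch, not details of the disclosure.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased")

    def embed_description(text):
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = bert(**inputs).last_hidden_state   # (1, num_tokens, 768)
        return hidden.mean(dim=1).squeeze(0)            # (768,) embedding of the text description

    condition = embed_description("the machine sound closest to the microphone")
    print(condition.shape)  # torch.Size([768])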

[0050] FIG. 4 shows a block diagram illustrating an architecture 400 of the neural network 210, in accordance with some embodiments of the present disclosure. The neural network 210 may be a neural network trained to extract the target sound signal 218 and, in some examples, even localization information of the target sound signal 218. Further, the training is based on the premise that the training data comprises an unordered and heterogeneous set of training data components. For example, for the neural network 210, the digital representation 224 is generated to train the neural network 210 for extracting a target sound signal 410. The target sound signal 410 is the same as the target sound signal 218 of FIG. 2.

[0051] The neural network 210 is trained to extract the target sound signal 410 from the mixture of sound signals 108 by combining the conditioning input 222 with intermediate outputs of intermediate layers of the neural network 210. The neural network 210 comprises one or more intertwined blocks such as a feature encoder 404, a conditioning network 402, a separation network 406, and a feature decoder 408. In an example, the conditioning network 402 comprises a feature-invariant linear modulation (FiLM) layer (explained in FIG. 6).

[0052] The conditioning network 402 takes as input the conditioning input 222 transformed into the digital representation 224. The conditioning network processes the digital representation 224, which identifies the type of source to be extracted from the mixture of sound signals 108, into a form that is useful for the separation network 406. The feature encoder 404 receives the mixture of sound signals 108. The feature encoder 404 corresponds to a learned one-dimensional convolutional feature encoder (Conv1D) (explained below in FIG. 6). Further, the feature encoder 404 is configured to convert the mixture of sound signals 108 into a matrix of features for further processing by the separation network 406. The separation network 406 corresponds to a convolution block layer (explained in detail in FIG. 6). The separation network 406 utilizes the conditioning input 222 and the matrix of features to separate the target sound signal 410 from the mixture of sound signals 108. The separation network 406 is configured to produce a latent representation of the target sound signal 410. The separation network 406 combines the conditioning input 222 and the matrix of features to generate the latent representation of the target sound signal 410 separated from the mixture of sound signals 108. The feature decoder 408 typically performs an inverse process of the feature encoder 404 and converts the latent representation of the target sound signal produced by the separation network 406 into an audio waveform in the form of the target sound signal 410.
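For illustration only, a FiLM-style conditioning layer applies a feature-wise affine modulation, FiLM(h) = gamma(c) * h + beta(c), where gamma and beta are predicted from the digital representation c. The PyTorch sketch below follows that standard formulation; the layer sizes and the single linear projections are assumptions, not the disclosed architecture.

    import torch
    import torch.nn as nn

    class FiLM(nn.Module):
        def __init__(self, condition_dim, num_channels):
            super().__init__()
            self.to_gamma = nn.Linear(condition_dim, num_channels)
            self.to_beta = nn.Linear(condition_dim, num_channels)

        def forward(self, features, condition):
            # features: (batch, channels, frames) intermediate output; condition: (batch, condition_dim).
            gamma = self.to_gamma(condition).unsqueeze(-1)
            beta = self.to_beta(condition).unsqueeze(-1)
            return gamma * features + beta  # feature-wise affine modulation

    film = FiLM(condition_dim=8, num_channels=64)
    modulated = film(torch.randn(2, 64, 1000), torch.randn(2, 8))
    print(modulated.shape)  # torch.Size([2, 64, 1000])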

[0053] The neural network 210 undergoes a training phase which is further illustrated in FIG. 5A.

[0054] FIG. 5A shows an exemplary block diagram 500A of training of the neural network 210, according to some embodiments of the present disclosure. To that end, at training time, the plurality of combinations of one or more identifiers 304 are fed into the neural network 210 for training 504 of the neural network 210. The plurality of combinations of one or more identifiers 304 are converted to the set of predetermined digital representations 302. In an embodiment, the neural network 210 is trained using the set of predetermined digital representations 302 of the plurality of combinations of one or more identifiers. In addition, the neural network 210 is trained with training data 502.

[0055] In an example, the training data 502 includes a first training dataset 502a and a second training dataset 502b. The first training dataset 502a comprises sound data recorded in reverberant conditions and includes spatial data of the sound sources with respect to the one or more microphones 106, but does not have data associated with the language of the sound sources. The second training dataset 502b has data in multiple languages but was recorded in non-reverberant conditions. Therefore, the second training dataset comprises language related data about the sound sources but does not include spatial data of the sound sources. The neural network 210 is trained using both the first training dataset 502a and the second training dataset 502b. Further, the trained neural network 210 is configured to separate sound sources based on language in reverberant conditions by using a conditioning input as described previously, even though that combination was missing in the training data 502 during training 504. To enable this, the trained neural network 210 generates test mixtures 506 along with all available combinations of characteristic conditions of the sound source for execution 508 of the neural network 210. In an example, during execution 508, if a required condition is language specific but the recorded sound is reverberant, then the trained neural network 210 extracts the target sound source based on the required condition (language) even though reverberant data with language labels was missing in the training data 502 during training 504. This is particularly useful when there exists a bridge condition between the two different training datasets.

[0056] FIG. 5B shows an exemplary block diagram 500B of training of the neural network 210 with a bridge condition 502c, according to some embodiments of the present disclosure. The plurality of combinations of the mutually inclusive characteristics 304 are fed into the neural network 210 for training 504 of the neural network 210. The plurality of combinations of the mutually inclusive characteristics 304 is in the form of the digital representation 224. In addition, the neural network 210 is trained with the training data 502. In an example, the training data 502 includes the first training dataset 502a and the second training dataset 502b. The first training dataset 502a includes energy data of the sound sources but does not include gender data, while the second training dataset 502b includes both energy data and gender data. The neural network 210 is trained with the first training dataset 502a and the second training dataset 502b.

[0057] In an example, the bridge condition 502c is the loudest speaker in a mixture of sound signals. Such energy conditioning is convenient because training samples can often be easily introduced into the training data 502, as it is easy to control the loudness of each source when generating the mixture of sound signals. That is, a simple gain can be applied to the isolated source examples when creating the mixtures of sound signals during the training 504 of the neural network 210, such that any dataset can be made to have the ability to condition on energy. The terms loudness and energy are used interchangeably to represent some notion of volume of the sound signal.
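
As an illustration of this gain-based mixing, the following Python sketch applies a random gain to each isolated source, sums the results into a mixture, and labels the source with the highest energy. The function name, gain range, and "high"/"low" labels are assumptions made for illustration and are not taken from the disclosed embodiments.

```python
import numpy as np

def mix_with_energy_labels(sources, gain_db_range=(-5.0, 5.0), rng=None):
    """Apply a random gain to each isolated source, mix them, and label the
    loudest source so any dataset can be conditioned on energy (hypothetical helper)."""
    rng = rng or np.random.default_rng()
    gained = []
    for s in sources:  # sources: equal-length 1-D time-domain waveforms
        gain_db = rng.uniform(*gain_db_range)        # simple gain applied during mixing
        gained.append(s * 10.0 ** (gain_db / 20.0))
    energies = [float(np.sum(g ** 2)) for g in gained]
    loudest = int(np.argmax(energies))
    labels = ["high" if i == loudest else "low" for i in range(len(gained))]
    mixture = np.sum(np.stack(gained), axis=0)
    return mixture, gained, labels
```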

[0058] The trained neural network 210 generates test mixtures 506 along with all possible combinations of characteristic conditions of the sound source for execution 508 of the neural network 210. In an example, if only the first training dataset 502a is accessed for extracting a target source specific to gender, the neural network 210 will be able to extract the target source specific to gender accurately due to the bridge condition 502c used in the training data. The bridge condition 502c allows gender conditioning on data similar to the first training dataset 502a, even though gender labels were unavailable in the first training dataset 502a during training 504. In addition, during execution 508, all possible conditions are available for extracting the target source. The execution 508 of the neural network 210 is further explained in FIG. 6.

[0059] FIG. 6 shows a block diagram 600 for the execution 508 of the neural network 210 for extracting the target sound signal 218, according to some embodiments of the present disclosure. The neural network 210 inputs and outputs time-domain signals and includes the following components: (1) a learned one-dimensional convolutional feature encoder (Conv1D) 606 (hereinafter feature encoder 606) configured to obtain an intermediate representation, (2) a feature-wise linear modulation (FiLM) layer 602, (3) B intermediate blocks 604 for processing the intermediate representations, and (4) a learned one-dimensional transposed convolutional decoder 608 for returning to a time-domain signal. The FiLM layer 602 corresponds to B FiLM layers, one per intermediate block. The learned one-dimensional convolutional feature encoder (Conv1D) 606 corresponds to the feature encoder 404 of FIG. 4. The B intermediate blocks 604 correspond to a convolution block layer 604. The convolution block layer 604 corresponds to the separation network 406 of FIG. 4. The convolution block layer 604 is a stack of U-net convolutional blocks. Each U-net block contains several convolution blocks that learn a high-level latent representation and several transposed convolution blocks that go from the high-level latent representation back to a representation comparable to the U-net input. In an example, the combination of the FiLM layer 602 and the intermediate blocks 604 is repeated B times.
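
For concreteness, a minimal PyTorch sketch of such a pipeline is shown below. It is a simplified stand-in, not the disclosed implementation: the U-net convolution blocks are replaced by plain residual convolution stacks, and the class names, kernel size, stride, channel count, and number of blocks are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: scale and bias derived from the
    conditioning vector c (e.g., a one-hot digital representation)."""
    def __init__(self, cond_dim, channels):
        super().__init__()
        self.scale = nn.Linear(cond_dim, channels)
        self.bias = nn.Linear(cond_dim, channels)

    def forward(self, feats, cond):
        # feats: (batch, channels, frames), cond: (batch, cond_dim)
        gamma = self.scale(cond).unsqueeze(-1)
        beta = self.bias(cond).unsqueeze(-1)
        return gamma * feats + beta

class ConditionalSeparator(nn.Module):
    """Sketch of the FIG. 6 pipeline: Conv1D encoder -> B x (FiLM + block) ->
    transposed-Conv1D decoder. Plain conv stacks stand in for the U-net blocks."""
    def __init__(self, cond_dim=10, channels=512, kernel=16, stride=8, num_blocks=4):
        super().__init__()
        self.encoder = nn.Conv1d(1, channels, kernel, stride=stride)
        self.films = nn.ModuleList(FiLM(cond_dim, channels) for _ in range(num_blocks))
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, 3, padding=1), nn.PReLU(),
                nn.Conv1d(channels, channels, 3, padding=1), nn.PReLU(),
            )
            for _ in range(num_blocks)
        )
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel, stride=stride)

    def forward(self, mixture, cond):
        # mixture: (batch, 1, samples), cond: (batch, cond_dim)
        feats = self.encoder(mixture)
        for film, block in zip(self.films, self.blocks):
            feats = block(film(feats, cond)) + feats   # FiLM before each block, residual add
        return self.decoder(feats)                     # estimated target waveform
```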

[0060] The mixture of sound signals 108 is sent to the feature encoder 606. The feature encoder 606 converts the mixture of sound signals 108 into a matrix of features for further processing by the FiLM layer 602 and the convolution block layer 604. The FiLM layer 602 takes as an input the matrix of features of the mixture of sound signals 108. In addition, the FiLM layer 602 takes as an input the digital representation 224 (for example, the one-hot conditional vector 306 shown in FIG. 3C). The FiLM layer 602 processes the input (the matrix of features and the one-hot conditional vector 306) and sends the processed input to the convolution block layer 604. The convolution block layer 604 combines the matrix of features and the processed conditioning input to produce a latent representation of the target sound signal 218. The latent representation is sent to the learned one-dimensional transposed convolutional decoder 608 for separating the target sound signal 218 from the other sound sources 610. The FiLM layer 602 and the convolution block layer 604 are trained and executed to extract the target sound signal 218 and estimate localization information of the extracted target sound signal 218. The localization information of the target sound signal 218 is indicative of a location of an origin of the extracted target sound signal 218.

[0061] In an example, a mixture $x = \sum_{j=1}^{N} s_j$ of $N$ sound source waveforms $s_1, \ldots, s_N$ is considered, where each $s_j \in \mathbb{R}^T$ has $T$ time-domain samples. In general, it is assumed that there exists a signal characteristic condition $C$ (e.g., the spatial location of a sound source) in a set $\mathcal{C}$ of conditions, and a desired concept value $v$ for that condition (e.g., far or near) which belongs to the set $\mathcal{V}$ of all discriminative concepts. Now, given the condition $C$ and its concept value $v$, a target submix $s_T$ of all sound sources whose condition $C$ matches the concept value $v$ is retrieved from the input mixture $x$, namely $s_T = \sum_{j=1}^{N} \mathbb{1}[C(s_j) = v]\, s_j$, where $\mathbb{1}[\cdot]$ is an indicator function, and the same notation $C$ is used to denote both a signal characteristic and the function $C: \mathbb{R}^T \to \mathcal{V}$ which returns the value of that characteristic for an input signal.

[0062] The input signals are signals from speech sources, and the signal characteristics $C$ are considered in the set $\mathcal{C} = \{E, G, S, L\}$, where $E$ denotes the signal energy (with values low/high), $G$ denotes the gender (female/male as self-identified by the dataset’s speakers), $S$ denotes the spatial location (near/far), and $L$ denotes the language (English/French/German/Spanish). Thus, a target is specified based on a total of $|\mathcal{V}| = 2 + 2 + 2 + 4 = 10$ concepts. Semantic discriminative information for the desired concept $v$ is encoded in a one-hot vector $c = \mathbb{1}[v] \in \{0, 1\}^{|\mathcal{V}|}$ which has a one only at the index corresponding to the concept $v$, given some arbitrary ordering of $\mathcal{V}$. The goal of the task is then to train a separation model $f$, parameterized by $\theta$, which takes as input a mixture of sound sources $x$ alongside a conditioning vector $c$ and estimates the target submix as $\hat{s}_T = f(x, c;\, \theta)$.
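
A minimal Python sketch of this one-hot encoding is shown below. The particular ordering of the ten concepts is an arbitrary assumption; the description above only requires that some fixed ordering of $\mathcal{V}$ be chosen.

```python
# Hypothetical ordering of the |V| = 2 + 2 + 2 + 4 = 10 concepts from paragraph [0062].
CONCEPTS = [
    ("E", "low"), ("E", "high"),
    ("G", "female"), ("G", "male"),
    ("S", "near"), ("S", "far"),
    ("L", "English"), ("L", "French"), ("L", "German"), ("L", "Spanish"),
]
INDEX = {concept: i for i, concept in enumerate(CONCEPTS)}

def one_hot(condition, value):
    """Return c = 1[v] in {0,1}^|V|: a one only at the index of concept v."""
    c = [0.0] * len(CONCEPTS)
    c[INDEX[(condition, value)]] = 1.0
    return c

# Example: condition on language, concept value "French".
c_french = one_hot("L", "French")
```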

[0063] The FiLM layer 602 is added at the input of each of the B U-ConvBlocks (convolution block layer 604), as shown in FIG. 6. In addition, the extra parameters for scaling and bias are B pairs of matrices $(W_\beta, W_\gamma)$ of size $|\mathcal{V}| \times C_m$, where $C_m = 512$ is the number of intermediate channels in each processing block. In an example, the network $f$ is set to produce estimates $\hat{s}_T$ and $\hat{s}_O$ for the target submix $s_T$ and the submix $s_O$ of other (non-target) sources, enforcing $\hat{s}_T + \hat{s}_O = x = s_T + s_O$.
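
Under assumed shapes, the per-block scaling and bias matrices could be applied to the intermediate features as in the short sketch below; the value of B, the variable names, and the final complementary-estimate comment are illustrative assumptions.

```python
import torch

V, Cm, B = 10, 512, 4   # |V| concepts, Cm = 512 intermediate channels, B blocks (B is illustrative)

# One pair of scaling/bias matrices per block, each of size |V| x Cm, as in paragraph [0063].
W_gamma = [torch.randn(V, Cm) for _ in range(B)]
W_beta = [torch.randn(V, Cm) for _ in range(B)]

def film_modulate(feats, c, b):
    """feats: (batch, Cm, frames); c: (batch, |V|) one-/multi-hot vector; b: block index."""
    gamma = (c @ W_gamma[b]).unsqueeze(-1)   # (batch, Cm, 1)
    beta = (c @ W_beta[b]).unsqueeze(-1)
    return gamma * feats + beta

# The non-target estimate can be recovered from the mixture and the target estimate,
# consistent with s_T_hat + s_O_hat = x (illustrative): s_O_hat = x - s_T_hat
```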

[0064] FIG. 7 shows a flow diagram 700 illustrating training of the neural network 210 for acting as a heterogeneous separation model 712. The flow diagram 700 includes the database 206, the extraction model 214, the conditioning input 222 transformed in the form of the digital representation 224, a negative example selector 702, a positive example selector 704, an audio mixer 706a, an audio mixer 706b, the neural network 210 acting as the heterogeneous separation model 712, and a loss function 714. The database 206 is an audio database that includes a collection of isolated sound signals, for example different speech signals for human voice applications, along with associated metadata for each isolated sound signal, such as the distance of a speaker to a microphone, the signal level, and the language.

[0065] The extraction model 214 is configured to generate one or more random queries associated with the mutually inclusive characteristics of the target sound signal 218. The one-hot conditional vector 306 or the multi-hot conditional vector 308 of the accepted one or more identifiers 220 is randomly selected based on the one or more random queries generated. The multi-hot conditional vector 308 may be a multi-hot “and” conditioning vector or a multi-hot “or” conditioning vector. For multi-hot “and” conditioning, all selected identifiers must be true for a sound signal to be a relevant target sound signal; for multi-hot “or” conditioning, at least one of the selected identifiers needs to be true. For the text description 310 conditioning, all sound signals in the database 206 are required to have one or more natural language descriptions of the corresponding sound signal.
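
The following Python sketch illustrates the “and”/“or” multi-hot semantics; the identifier list, helper names, and tag representation are hypothetical and serve only to make the conditioning logic concrete.

```python
# Hypothetical ordering of a predetermined set of identifiers.
IDENTIFIERS = ["loudest", "quietest", "nearest", "farthest",
               "female", "male", "English", "French", "German", "Spanish"]

def multi_hot(selected):
    """Multi-hot conditioning vector: ones at every selected identifier's index."""
    return [1.0 if name in selected else 0.0 for name in IDENTIFIERS]

def is_relevant(signal_tags, selected, mode):
    """mode "and": every selected identifier must describe the signal;
    mode "or": at least one selected identifier must describe the signal."""
    if mode == "and":
        return all(name in signal_tags for name in selected)
    return any(name in signal_tags for name in selected)

# Example: target the nearest female speaker ("and" conditioning).
c = multi_hot({"female", "nearest"})
relevant = is_relevant({"female", "nearest", "English"}, {"female", "nearest"}, "and")  # True
```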

[0066] In an example, an audio signal is randomly selected from the database 206 as a positive example and the corresponding text description is used as the conditioning input 222. The conditioning input 222, transformed into the digital representation 224, is sent to the heterogeneous separation model 712 for further processing, and is also sent to the negative example selector 702 and the positive example selector 704. The negative example selector 702 returns zero, one, or multiple sound signals from the database 206 that are not relevant for the given conditioning input used for training of the heterogeneous separation model 712 for the one or more random queries. In an embodiment, the negative example selector 702 may return zero non-relevant sound signals so that the heterogeneous separation model 712 can be robust to this case at inference time.

[0067] The positive example selector 704 returns zero, one, or multiple sound signals from the database 206 that are relevant for the given conditioning input. It is important to sometimes have the positive example selector 704 return zero relevant audio signals so that the heterogeneous separation model 712 can be robust to this case at inference time.

[0068] The zero, one, or multiple sound signals from the positive example selector 704 are passed through the audio mixer 706a to obtain a ground truth target sound signal for training. The sound signals returned from both the positive example selector 704 and the negative example selector 702 are also passed to the audio mixer 706b to create an audio mixture signal 708 during training, which is input to the heterogeneous separation model 712. The heterogeneous separation model 712 processes the digital representation 224 and the audio mixture signal 708 to extract the separated target sound signal 716.
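
A minimal sketch of this data pipeline is shown below, with the database represented as a list of (waveform, tags) pairs and relevance decided by a caller-supplied predicate such as the is_relevant helper sketched earlier. The sampling counts, helper names, and zero-padding mixer are assumptions made for illustration only.

```python
import random
import numpy as np

def make_training_triplet(database, query, is_relevant, rng=random):
    """Sketch of the FIG. 7 pipeline: split the database into positive and
    negative examples for a query, mix everything into the input mixture, and
    mix the positives alone into the ground-truth target. Zero positives or
    zero negatives are deliberately allowed, mirroring paragraphs [0066]-[0067]."""
    positives = [w for w, tags in database if is_relevant(tags, query, "and")]
    negatives = [w for w, tags in database if not is_relevant(tags, query, "and")]
    positives = rng.sample(positives, k=min(len(positives), rng.randint(0, 2)))
    negatives = rng.sample(negatives, k=min(len(negatives), rng.randint(0, 2)))
    length = max((len(w) for w in positives + negatives), default=0)

    def mix(waves):                       # audio mixer: zero-pad and sum
        out = np.zeros(length)
        for w in waves:
            out[: len(w)] += w
        return out

    target = mix(positives)               # ground-truth target sound signal
    mixture = mix(positives + negatives)  # audio mixture signal fed to the model
    return mixture, query, target
```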

[0069] The ground truth target sound signal 710 is compared with the separated target sound signal 716 using the loss function 714. In other words, the loss function 714 compares the ground truth target sound signal 710 with the separated target sound signal 716 returned by the heterogeneous separation model 712. In an example, relevant loss functions comparing the two sound signals (e.g., SNR, scale-invariant source-to-distortion ratio, mean-squared error, etc.) can be computed in the time domain, in the frequency domain, or as a weighted combination of time-domain and frequency-domain losses.
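
For illustration, minimal time-domain implementations of two such losses (negative SNR and negative scale-invariant SDR) might look as follows; these are standard formulations offered as a sketch, not necessarily the exact losses used during the training described above.

```python
import torch

def snr_loss(estimate, target, eps=1e-8):
    """Negative signal-to-noise ratio between estimated and ground-truth targets."""
    noise = estimate - target
    snr = 10.0 * torch.log10((target.pow(2).sum(-1) + eps) / (noise.pow(2).sum(-1) + eps))
    return -snr.mean()

def si_sdr_loss(estimate, target, eps=1e-8):
    """Negative scale-invariant source-to-distortion ratio (SI-SDR)."""
    estimate = estimate - estimate.mean(-1, keepdim=True)
    target = target - target.mean(-1, keepdim=True)
    scale = (estimate * target).sum(-1, keepdim=True) / (target.pow(2).sum(-1, keepdim=True) + eps)
    projection = scale * target                  # target component of the estimate
    noise = estimate - projection                # residual distortion
    ratio = (projection.pow(2).sum(-1) + eps) / (noise.pow(2).sum(-1) + eps)
    return -(10.0 * torch.log10(ratio)).mean()
```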

[0070] In an example, several sound sources, such as multiple people speaking, may be present, such as in a business meeting or at a party, and a machine listening device (e.g., a robot or a hearing-aid-like device) may be required that can focus on the speech of a particular person. However, the machine listening device needs input from a user to identify which person to focus on, which is often context dependent. For example, if two people are speaking and one is male and one is female, the user may give input to the machine listening device to focus on the male speaker. In an example, the machine listening device comprises the sound processing system 112, which uses the neural network 210 to perform the task of identification of the sound signal of interest using the heterogeneous separation model 712. If both speakers are male, other characteristics are used to describe the speech of the person of interest, such as how far they are from the microphone or the volume of their speech relative to competing speakers, and the heterogeneous separation model 712 is conditioned on those characteristics. The heterogeneous separation model 712 allows for using a control device to select the signal characteristic, for a given mixture of speakers, that is most appropriate for isolating the speaker of interest (a particular sound source) in the context of a particular situation.

[0071] The heterogeneous separation model 712 is trained such that it can perform multi-condition-based separation as described above. Typically, source separation models are trained using mixture/target pairs, where two or more isolated source signals (e.g., speech waveforms) are combined to create a mixture, and the isolated signals are used as targets. This combination, also referred to as a mixing process, takes each isolated source signal, optionally applies some basic signal processing operations (e.g., a gain, equalization, etc.) to the isolated sources, and then combines them together to obtain the audio mixture signal. The processed isolated sources then serve as training targets for a given audio mixture signal. However, the heterogeneous separation model 712 uses a triplet containing (1) an audio mixture signal, (2) a digital representation, for example represented by a one-hot conditional vector, and (3) a target signal corresponding to the description represented by the one-hot conditional vector.

[0072] Another example uses the heterogeneous separation model 712 in combination with a system that identifies the signal characteristics of all speakers present in a mixture signal using multiple criteria, but without isolating them. For example, detecting the gender or the language being spoken is possible even when speech is overlapping. Identified values of these criteria are used to conditionally extract the isolated signals of the speakers present in the audio mixture. Further, the different criteria present in the audio mixture may be combined using a process similar to a logical “and” (i.e., the one-hot vector becomes a multi-hot vector, with ones at the locations of all relevant criteria), and all criteria are used to separate the signal. Also, each of the criteria may be used independently to assess which of the conditioning criteria results in the best target signal separation performance for a given mixture.

[0073] FIG. 8 shows a flow chart 800 depicting a method for identifying a target sound signal based on the embodiments described above, according to some embodiments of the present disclosure. The method 800 is performed by the sound processing system 112. The flow chart initiates at step 802. Following step 802, at step 804, the method includes collecting the mixture of sound signals 108 from the plurality of sound sources 102 with facilitation of the one or more microphones 106. The plurality of sound sources 102 corresponds to at least one of speakers, persons or individuals, industrial equipment, and vehicles. The mixture of sound signals 108 is collected from the plurality of sound sources 102 with facilitation of the one or more microphones 106 along with the one or more identifiers 104.

[0074] At step 806, the method includes collecting the query identifying the target sound signal 218 to be extracted from the mixture of sound signals 108 with facilitation of the query interface 208 (as explained in FIG. 2). The query is indicative of the mutually inclusive and exclusive characteristics of the target sound signal 218. The query is associated with the one or more identifiers 104 of the plurality of sound sources 102. The one or more identifiers 104 comprise at least one of a loudest sound source, a quietest sound source, a farthest sound source, a nearest sound source, a female speaker, a male speaker, and a language-specific sound source.

[0075] At step 808, the method includes extracting from the query, each identifier present in the predetermined set of one or more identifiers defining mutually inclusive and exclusive characteristics of the mixture of sound signals 108 with facilitation of the extraction model 214 (as explained in FIG. 2B). Following step 808, at step 810, the method includes determining the one or more logical operators connecting the extracted one or more identifiers 220 using the query interface 208 (as explained in FIG. 2C).

[0076] At step 812, the method includes transforming the extracted one or more identifiers 220 into the digital representation 224 with facilitation of the transformation module 206 (as explained in FIG. 3A and FIG. 3C). The transformation module 206 is configured for generating the digital representation 224 of the extracted one or more identifiers 220 from the set of predetermined digital representations 302 of the plurality of combinations of the mutually inclusive characteristics 304 of the target sound signal 218. The digital representation 224 is represented by the one-hot conditional vector 306, the multi-hot conditional vector 308, or the text description 310 (as explained in FIG. 3C).

[0077] At step 814, the method includes executing the neural network 210 trained to extract the target sound signal 218 from the mixture of sound signals 108 with facilitation of the extraction model 214. In addition, the extraction model 214 is configured to generate one or more queries associated with the mutually inclusive and exclusive characteristics of the target sound signal during training of the neural network 210. The neural network 210 is trained using the set of predetermined digital representations 302 of the plurality of combinations of the mutually inclusive characteristics 304 for extracting the target sound signal 218. Further, the neural network 210 is trained to produce localization information of the target sound signal 218, indicative of the location of the origin of the sound source, among the plurality of sound sources 102, that produces the target sound signal 218.

[0078] At step 816, the method includes outputting the extracted target sound signal along with the localization information with facilitation of the output interface 216. Following step 816, the method terminates.
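
Tying the steps of FIG. 8 together, a hypothetical inference-time call is sketched below. It reuses the illustrative ConditionalSeparator model and multi_hot helper sketched earlier and is a usage illustration under those assumptions, not the claimed method itself.

```python
import torch

def extract_target(model, mixture, selected_identifiers, multi_hot):
    """Collect a mixture, build the digital representation from the query
    identifiers, run the trained network, and return the target estimate."""
    model.eval()
    cond = torch.tensor([multi_hot(selected_identifiers)])             # (1, |V|)
    x = torch.as_tensor(mixture, dtype=torch.float32).view(1, 1, -1)   # (1, 1, samples)
    with torch.no_grad():
        target = model(x, cond)
    return target.squeeze(0).squeeze(0).numpy()
```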

[0079] FIG. 9 shows a block diagram 900 of the sound processing system 112 for performing processing of the mixture of sound signals 108, according to some embodiments of the present disclosure. In some example embodiments, the block diagram 900 includes the one or more microphones 106 that collect data including the mixture of sound signals 108 of the plurality of sound sources 102 from an environment 902.

[0080] The sound processing system 112 includes a hardware processor 908. The hardware processor 908 is in communication with a computer storage memory, such as a memory 910. The memory 910 includes stored data, including algorithms, instructions, and other data that are executed by the hardware processor 908. It is contemplated that the hardware processor 908 may include two or more hardware processors depending upon the requirements of the specific application; the two or more hardware processors may be either internal or external. The sound processing system 112 may be incorporated with other components, including output interfaces and transceivers, among other devices.

[0081] In some alternative embodiments, the hardware processor 908 is connected to the network 904, which is in communication with the mixture of sound signals 108. The network 904 includes, by way of non-limiting example, one or more local area networks (LANs) and/or wide area networks (WANs). The network 904 also includes enterprise-wide computer networks, intranets, and the Internet. The sound processing system 112 includes one or more client devices, storage components, and data sources. Each of the client devices, storage components, and data sources comprises a device or multiple devices cooperating in a distributed environment of the network 904.

[0082] In some other alternative embodiments, the hardware processor 908 is connected to a network-enabled server 914 connected to a client device 916. The network-enabled server 914 corresponds to a dedicated computer connected to a network that runs software intended to process client requests received from the client device 916 and provide appropriate responses to the client device 916. The hardware processor 908 is connected to an external memory device 918 that stores all necessary data used in the target sound signal extraction, and to a transmitter 920. The transmitter 920 helps in the transmission of data between the network-enabled server 914 and the client device 916. Further, an output 922 associated with the target sound signal and the localization information of the target sound signal is generated.

[0083] The mixture of sound signals 108 is further processed by the neural network 210. The neural network 210 is trained with combinations of mutually inclusive characteristics 906 of each of the sound signals. The plurality of combinations of the mutually inclusive characteristics 906 are fed into the neural network 210 for training of the neural network 210 (as explained in FIG. 7). The plurality of combinations of the mutually inclusive characteristics 906 is in the form of the digital representation 224.

[0084] Many modifications and other embodiments of the disclosures set forth herein will come to mind to one skilled in the art to which these disclosures pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. It is to be understood that the disclosures are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.