

Title:
SOUND SOURCE ENUMERATION AND DIRECTION OF ARRIVAL ESTIMATION USING A BAYESIAN FRAMEWORK
Document Type and Number:
WIPO Patent Application WO/2020/264466
Kind Code:
A1
Abstract:
One embodiment provides a method of sound source enumeration and direction of arrival (DoA) estimation. The method includes estimating, by an enumeration module, a number of sound sources associated with an acoustic signal. The estimating includes selecting a specific parametric model from a generalized model. The generalized model is related to a microphone array architecture used to capture the acoustic signal. The method further includes estimating, by a DoA module, a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model. The estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

Inventors:
XIANG NING (US)
BUSH DANE (US)
LANDSCHOOT CHRISTOPHER (US)
Application Number:
PCT/US2020/040046
Publication Date:
December 30, 2020
Filing Date:
June 29, 2020
Assignee:
XIANG NING (US)
BUSH DANE (US)
LANDSCHOOT CHRISTOPHER (US)
International Classes:
G01S3/80; H04B15/00
Foreign References:
US 2008/0247565 A1 (2008-10-09)
US 2019/0208317 A1 (2019-07-04)
US 2006/0245601 A1 (2006-11-02)
US 2019/0116449 A1 (2019-04-18)
Other References:
BUSH DANE; XIANG NING: "A model-based Bayesian framework for sound source enumeration and direction of arrival estimation using a coprime microphone array", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 143, no. 6, 29 June 2018 (2018-06-29), pages 3934 - 3945, XP012229681
Attorney, Agent or Firm:
GANGEMI, Anthony, P. (US)
Claims:
CLAIMS

What is claimed is:

1. A method of sound source enumeration and direction of arrival (DoA) estimation, the method comprising:

estimating, by an enumeration module, a number of sound sources associated with an acoustic signal, the estimating comprising selecting a specific parametric model from a generalized model, the generalized model related to a microphone array architecture used to capture the acoustic signal; and

estimating, by a DoA module, a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model,

wherein the estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

2. The method of claim 1, further comprising capturing, by the microphone array, the acoustic signal, the microphone array configured to implement beamforming.

3. The method of claim 1, wherein estimating the DoA comprises estimating at least one parameter of the selected model.

4. The method of claim 1, wherein estimating the number of sound sources comprises comparing an evidence associated with a first parametric model with an evidence associated with a second parametric model.

5. The method according to any one of claims 1 to 4, wherein the microphone array architecture corresponds to a coprime microphone array.

6. The method according to any one of claims 1 to 4, wherein the microphone array architecture corresponds to a spherical microphone array.

7. The method according to any one of claims 1 to 4, wherein estimating the number of sound sources comprises nested sampling.

8. A computer readable storage device having stored thereon instructions that when executed by one or more processors result in the following operations comprising: estimating a number of sound sources associated with an acoustic signal, the estimating comprising selecting a specific parametric model from a generalized model, the generalized model related to a microphone array architecture used to capture the acoustic signal; and

estimating a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model,

wherein the estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

9. The device of claim 8, wherein the instructions that when executed by one or more processors result in the following additional operations comprising capturing the acoustic signal, the microphone array configured to implement beamforming.

10. The device of claim 8, wherein estimating the DoA comprises estimating at least one parameter of the selected model.

11. The device of claim 8, wherein estimating the number of sound sources comprises comparing an evidence associated with a first parametric model with an evidence associated with a second parametric model.

12. The device according to any one of claims 8 to 11, wherein the microphone array architecture corresponds to a coprime microphone array.

13. The device according to any one of claims 8 to 11, wherein the microphone array architecture corresponds to a spherical microphone array.

14. The device according to any one of claims 8 to 11, wherein estimating the number of sound sources comprises nested sampling.

15. A system for sound source enumeration and direction of arrival (DoA) estimation, the system comprising:

a computing device comprising:

an enumeration module configured to estimate a number of sound sources associated with an acoustic signal, the estimating comprising selecting a specific parametric model from a generalized model, the generalized model related to a microphone array architecture used to capture the acoustic signal; and a DoA module configured to estimate a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model,

wherein the estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

16. The system of claim 15, further comprising a microphone array configured to capture the acoustic signal, the microphone array configured to implement beamforming.

17. The system of claim 15, wherein estimating the DoA comprises estimating at least one parameter of the selected model.

18. The system of claim 15, wherein estimating the number of sound sources comprises comparing an evidence associated with a first parametric model with an evidence associated with a second parametric model.

19. The system of claim 16, wherein the microphone array architecture corresponds to a coprime microphone array.

20. The system of claim 16, wherein the microphone array architecture corresponds to a spherical microphone array.

Description:
SOUND SOURCE ENUMERATION AND DIRECTION OF ARRIVAL

ESTIMATION USING A BAYESIAN FRAMEWORK

CROSS REFERENCE TO RELATED APPLICATION(S)

This application claims the benefit of U.S. Provisional Application No. 62/867,444, filed June 27, 2019, U.S. Provisional Application No. 62/889,766, filed August 21, 2019, and U.S. Provisional Application No. 63/041,429, filed June 19, 2020, which are incorporated by reference as if disclosed herein in their entireties.

FIELD

The present disclosure relates to sound source enumeration and direction of arrival estimation, in particular to, sound source enumeration and direction of arrival estimation using a Bayesian framework.

BACKGROUND

Microphone arrays may be used in a variety of acoustic applications, including, but not limited to, passive-sonar localization, spatial-audio recording, and isolation of a particular signal from noise. For example, acoustic source localization based on microphone array signal processing may be used to aid in locating a source of gunfire. In another example, acoustic source localization may be used to localize an unknown number of spatially distributed sound sources in spatially distributed noise.

Localizing multiple concurrent sound sources simultaneously in complex sound environments can be a challenge as there may be variations in the number of sources, along with their locations, characteristics, and strengths. There may also be unwanted interference from fluctuating background noise.

SUMMARY

In an embodiment, there is provided a method of sound source enumeration and direction of arrival (DoA) estimation. The method includes estimating, by an enumeration module, a number of sound sources associated with an acoustic signal. The estimating includes selecting a specific parametric model from a generalized model. The generalized model is related to a microphone array architecture used to capture the acoustic signal. The method further includes estimating, by a DoA module, a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model. The estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

In some embodiments, the method may further include capturing, by the microphone array, the acoustic signal, the microphone array configured to implement beamforming.

In some embodiments of the method, the estimating the DoA includes estimating at least one parameter of the selected model.

In some embodiments of the method, the estimating the number of sound sources includes comparing an evidence associated with a first parametric model with an evidence associated with a second parametric model.

In some embodiments of the method, the microphone array architecture corresponds to a coprime microphone array.

In some embodiments of the method, the microphone array architecture corresponds to a spherical microphone array.

In some embodiments of the method, the estimating the number of sound sources includes nested sampling.

In some embodiments, there is provided a computer readable storage device. The device has stored thereon instructions that when executed by one or more processors result in the following operations including: estimating a number of sound sources associated with an acoustic signal. The estimating includes selecting a specific parametric model from a generalized model. The generalized model is related to a microphone array architecture used to capture the acoustic signal. The operations further include estimating a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model. The estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

In some embodiments, the instructions, when executed by one or more processors, result in the following additional operations including capturing the acoustic signal, the microphone array configured to implement beamforming.

In some embodiments of the computer readable storage device, estimating the DoA includes estimating at least one parameter of the selected model.

In some embodiments of the computer readable storage device, estimating the number of sound sources includes comparing an evidence associated with a first parametric model with an evidence associated with a second parametric model.

In some embodiments of the computer readable storage device, the microphone array architecture corresponds to a coprime microphone array. In some embodiments of the computer readable storage device, the microphone array architecture corresponds to a spherical microphone array.

In some embodiments of the computer readable storage device, estimating the number of sound sources includes nested sampling.

In an embodiment, there is provided a system for sound source enumeration and direction of arrival (DoA) estimation. The system includes a computing device. The computing device includes an enumeration module and a DoA module. The enumeration module is configured to estimate a number of sound sources associated with an acoustic signal. The estimating includes selecting a specific parametric model from a generalized model. The generalized model is related to a microphone array architecture used to capture the acoustic signal. The DoA module is configured to estimate a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model. The estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework.

In some embodiments, the system further includes a microphone array configured to capture the acoustic signal. The microphone array is configured to implement beamforming.

In some embodiments of the system, estimating the DoA includes estimating at least one parameter of the selected model.

In some embodiments of the system, estimating the number of sound sources comprises comparing an evidence associated with a first parametric model with an evidence associated with a second parametric model.

In some embodiments of the system, the microphone array architecture corresponds to a coprime microphone array.

In some embodiments of the system, the microphone array architecture corresponds to a spherical microphone array.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawings show embodiments of the disclosed subject matter for the purpose of illustrating features and advantages of the disclosed subject matter. However, it should be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:

FIG. 1 illustrates a functional block diagram of a sound source enumeration and direction of arrival estimation system consistent with several embodiments of the present disclosure; and FIG. 2 is a flowchart of sound source enumeration and direction of arrival estimation operations consistent with several embodiments of the present disclosure.

DETAILED DESCRIPTION

In many room acoustics and noise control applications, it may be challenging to identify the direction of arrival (DoA) of each of a possible plurality of incoming sound sources. When estimating the DoA, the acoustic signal under consideration may contain one or a plurality of concurrent sound sources originating from one or more directions. This leads to a two-tiered challenge of first identifying the correct number of sources, followed by determining the directional information of each source.

Generally, this disclosure relates to sound source enumeration and direction of arrival estimation using a Bayesian framework. The acoustic signal may be captured by an array of microphones arranged in a defined microphone array architecture. Microphone array architectures may include, but are not limited to, linear arrays, coprime linear microphone arrays, spherical microphone arrays (e.g., full spherical and hemispherical), etc. The captured acoustic signal may then be converted to one or more representative electrical signals.

A model-based Bayesian framework, consistent with the present disclosure, is configured to utilize Bayes’ Theorem both to estimate a number of sound sources and to estimate a direction of arrival (DoA) for each sound source associated with the captured acoustic signal. DoA may include one or more of an elevation and/or azimuth incident angle of a sound source relative to the microphone array.

Estimating the number of sound sources includes selecting a specific parametric model from a generalized model based, at least in part, on probability distributions. The generalized model includes the number of sound sources as a variable while the specific parametric model has the number of sound sources fixed at the estimated number of sound sources. The specific parametric model may then be used to estimate the DoA of the sound sources. A model-based parameter estimation technique may be considered an inverse technique. The inversion to be performed is to estimate a set of parameters encapsulated in a model, based on experimental data and the model itself (the hypothesis).

Bayes’ Theorem for data analysis may be written as:

p(Θ|D, I) = p(D|Θ, I) p(Θ|I) / p(D|I),    (1)

where Θ corresponds to the model parameters (θ_1, θ_2, ..., θ_N) for a parametric function (i.e., model) H(Θ); D is the experimental data (e.g., d_1, d_2, ..., d_K); and I represents background information. In Bayesian analysis, a goal is to estimate the parameters, Θ, contained in the model, H(Θ), from the experimental observations, D. p(Θ|I) is a probability representing the initial implication of the parameter values from the background information before taking into account the experimental data D, and is referred to as the prior probability for Θ (“prior” for short). p(D|Θ, I), called the “likelihood”, represents the probability of observing the data D if the parameters take any particular set of values Θ. This likelihood represents the probability of getting the measured data D supposing that the model H(Θ) holds with values of its defining parameters Θ.

p(Θ|D, I) represents the probability of the parameter values Θ after taking the data values D into account. p(Θ|D, I) may be referred to as the “posterior” probability. The quantity p(D|I) in the denominator on the right-hand side of Eq. 1 represents the probability that the observed data occur no matter what the values of the parameters. p(D|I) may be interpreted as a normalization factor ensuring that the posterior probability for the parameters integrates to unity. Data analysis tasks based on experimental observations generally include estimating a set of parameters encapsulated in the model H(Θ). The model may also be termed “the hypothesis”. Thus, Bayes’ theorem represents how an initial assignment may be updated based, at least in part, on data.
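
As a concrete numerical illustration of Eq. 1, the following sketch evaluates the posterior for a single hypothetical parameter on a discrete grid; the one-parameter model, the synthetic data, and the noise level are assumptions introduced purely for illustration.

```python
import numpy as np

# Minimal sketch of Eq. 1 on a grid: posterior = likelihood * prior / evidence.
# The parameter theta (a hypothetical source angle, degrees), the three-channel
# model and the synthetic data are illustrative assumptions only.
theta_grid = np.linspace(0.0, 180.0, 721)            # candidate parameter values
dtheta = theta_grid[1] - theta_grid[0]
prior = np.full_like(theta_grid, 1.0 / 180.0)        # flat (maximum-entropy) prior

def model(theta_deg):
    """Hypothetical model H(theta): predicted intensity at three sensors."""
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t), 0.5])

rng = np.random.default_rng(0)
data = model(60.0) + 0.05 * rng.standard_normal(3)   # synthetic observation
sigma = 0.05                                         # assumed known noise level

# Gaussian likelihood p(D | theta, I) evaluated over the grid.
likelihood = np.array(
    [np.exp(-np.sum((data - model(t)) ** 2) / (2.0 * sigma ** 2)) for t in theta_grid]
)

evidence = np.sum(likelihood * prior) * dtheta       # p(D | I), the Eq. 1 denominator
posterior = likelihood * prior / evidence            # p(theta | D, I)
print("posterior mean angle:", np.sum(theta_grid * posterior) * dtheta)
```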

The posterior probability (p(Θ|D, I)) may then be determined based, at least in part, on the prior probability (p(Θ|I)) and based, at least in part, on the likelihood function (p(D|Θ, I)). The prior probability may represent an initial indication of possible values of the parameters, Θ. The parameters may correspond to coefficients in a functional form that specifies the model, H. The likelihood function is configured to capture experimental data.

The prior probability may be selected based on prior information, if known, or may be selected to represent no preference for any particular value. For example, no preference may be represented by a relatively broad and/or relatively flat prior probability. In another example, a maximum entropy technique may be used to generate a maximum entropy prior. The maximum entropy technique is configured to provide a prior with no preference for a particular value while satisfying known constraints of a probability distribution. For example, a uniform prior may correspond to a location parameter. In another example, a logarithmic prior may correspond to a scale parameter.
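
The prior assignments described above may be expressed, for example, as a unit-cube transform of the kind commonly used by nested-sampling codes; the parameter names and ranges below (an arrival angle as a location parameter, a beam width as a scale parameter) are illustrative assumptions only.

```python
import numpy as np

# Illustrative maximum-entropy prior assignments written as a unit-cube transform:
# a uniform prior for a location parameter (an arrival angle) and a logarithmic
# (Jeffreys-type) prior for a scale parameter (a beam width).
def prior_transform(u):
    """Map u ~ Uniform(0, 1)^2 to (angle_deg, beam_width_deg)."""
    angle = 180.0 * u[0]                              # uniform prior on [0, 180) degrees
    log_w = np.log(0.5) + u[1] * (np.log(30.0) - np.log(0.5))
    return np.array([angle, np.exp(log_w)])           # log-uniform on [0.5, 30] degrees
```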

The likelihood function represents the probability of getting the measured data assuming values of the parameters. The likelihood function may be configured to incorporate a priori information that the model H(Θ) describes the experimental data D such that residual errors between the model and the data have a finite error variance σ². A resulting Gaussian likelihood distribution may then be a consequence of the principle of maximum entropy.

Generally, there may be a finite number of models (i.e., hypotheses), H_1, H_2, ..., H_M, that may explain captured data. Each model, H_s, may include a corresponding set of parameters Θ_s. Model selection may then correspond to an inverse problem to infer which model of the set of models may be preferred by the data. Bayesian analysis applied to parameter estimation may be understood as a first level of inference. Model selection may then be understood as a second level of inference.

Bayesian data analysis may be utilized to perform both parameter estimation and model selection, using Bayes’ theorem. Bayesian data analysis, consistent with the present disclosure, may be configured to first perform the second level of inference (model selection), followed by the first level of inference (parameter estimation).

Considering model selection, given a set of competing models, Bayes’ theorem may be applied to an arbitrary member, H_s, of the finite set of models H_1, H_2, ..., H_M, given the data D. In this context, the background information I indicates that each model of this finite model set, H_1, ..., H_M, describes the data D well. Bayes’ theorem applied to each model, H_s, in this set of M competing models, given the data D and the background information I, can be written by replacing Θ in Eq. 1 by H_s as:

p(H_s|D, I) = p(D|H_s, I) p(H_s|I) / p(D|I).    (2)

In the form of Eq. 2, Bayes’ theorem represents how prior knowledge about model H_s, encoded in the prior probability p(H_s|I), is updated in the presence of data D, given the background information I. The probability p(D|H_s, I) in the context of model selection is referred to as the “marginal likelihood” of the data and/or the “Bayesian evidence”. p(H_s|D, I) is the posterior probability of the model H_s, given the data.

Model selection may include evaluating a “Bayes’ factor”, K_ij, as:

K_ij = p(H_i|D, I) / p(H_j|D, I) = [p(D|H_i, I) / p(D|H_j, I)] × [p(H_i|I) / p(H_j|I)],    (3)

where 1 ≤ i, j ≤ M and i ≠ j. The fraction p(H_i|I)/p(H_j|I), termed the “prior ratio”, represents prior knowledge as to how strongly the model H_i is preferred over H_j, before considering the data D. In some situations, prior preference information regarding the models may not be available. In these situations, the principle of maximum entropy may be used to assign an equal prior probability to each of the M models as:

p(H_s|I) = 1/M,  s = 1, 2, ..., M.    (4)

In these situations, the Bayes’ factor for model comparison between two different models H_i and H_j relies on the posterior ratio between the models as:

K_ij = p(H_i|D, I) / p(H_j|D, I) = p(D|H_i, I) / p(D|H_j, I),    (5)

which is equal to the marginal likelihood ratio when the model probabilities are uniform.

For computational convenience, the Bayes’ factor may be determined using a logarithmic scale with units of “decibans” as:

L_ij = 10 log₁₀(K_ij) = 10 log₁₀(Z_i) − 10 log₁₀(Z_j)  [decibans],    (6)

with the simplified notation Z_i = p(D|H_i, I) for the Bayesian evidence. Thus, the evidence values (i.e., marginal likelihoods) for two models may be compared quantitatively. Among a finite set of competing models, a highest possible Bayes’ factor, L_ij, may then indicate that the data prefers model H_i over model H_j the most. The model most preferred by the data may then correspond to the selected model.
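
A minimal sketch of this comparison, assuming hypothetical log-evidence values for candidate models with S = 1, 2 and 3 sources:

```python
# Sketch of Eq. 6: comparing candidate models by log10-evidence in decibans.
# The log10 evidence values below are placeholders; in practice they would come
# from nested sampling over models with S = 1, 2, 3, ... sources.
log10_Z = {1: -412.7, 2: -398.1, 3: -397.9}           # hypothetical values

def decibans(i, j):
    """L_ij = 10 log10(Z_i) - 10 log10(Z_j), as in Eq. 6."""
    return 10.0 * (log10_Z[i] - log10_Z[j])

best = max(log10_Z, key=log10_Z.get)                  # model most preferred by the data
for s in log10_Z:
    if s != best:
        print(f"L_{best},{s} = {decibans(best, s):+.1f} decibans")
```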

Once a model has been selected, based on experimental data, the Bayesian framework may be used with the selected model, H_S, to estimate the selected model parameters, Θ_S, also based on experimental data. Bayes’ theorem may thus be applied to estimate the parameters, Θ_S, given the data D and the model H_S. The quantity p(D|I) in Eq. 1 becomes p(D|H_S), the probability of the data D given the model H_S. In this context, the background information, I, now includes that a specific model H_S is selected or given. Bayes’ theorem for parameter estimation operations may then be written as:

p(Θ|D, H) = p(D|Θ, H) p(Θ|H) / p(D|H),    (7)

where the subscript S and the background information, I, have been dropped for simplicity. Thus, Bayes’ theorem in this context represents how prior knowledge about the parameters Θ, given the specific model H and encoded in p(Θ|H), may be updated by incorporating the data D through the likelihood, p(D|Θ, H).

The prior, p(Θ|H), is configured to encode the knowledge about the parameters before the data are incorporated. Once the data have been observed and/or measured, the likelihood, p(D|Θ, H), incorporates the data for updating the prior probability for the parameters. The posterior probability, p(Θ|D, H), is configured to encode the updated knowledge in the light of the data.

The posterior probability is normalized and is constrained to integrate to unity over the parameter space. The normalization constraint includes integrating both sides of Eq. 7 over the parameter space, Θ, where the integral marginalizes out Θ, which appears in the likelihood, by assigning the prior for the parameters Θ. For parameter estimation, p(D|H) plays the role of a normalization constant in Eq. 7. p(D|H) is identical to the Bayesian evidence in Eq. 2. Rearrangement of the terms of Bayes’ theorem in Eq. 7 yields:

p(Θ|D, H) × p(D|H) = p(D|Θ, H) × p(Θ|H).    (8)

In other words, Eq. 8 may be understood as: posterior x evidence = likelihood x prior. The likelihood and the prior correspond to inputs and the posterior and evidence are outputs of the Bayesian inference.

To estimate the evidence p(D|H), the product of the likelihood and the prior probability is integrated over the parameter space, Θ, as:

p(D|H) = ∫ p(D|Θ, H) p(Θ|H) dΘ.    (9)

Estimation of the evidence may be performed using a numerical sampling technique such as a Markov chain Monte Carlo approach, e.g., nested sampling. Once the evidence has been estimated, the product of the likelihood function and the prior probability may contribute to the normalized posterior probability. From estimates of the posterior probability distributions, mean values of selected parameters, associated variances and interrelationships may be estimated.
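
The following is a compact, self-contained sketch of nested sampling for approximating the evidence of Eq. 9; it assumes user-supplied log-likelihood and prior-transform functions (such as those sketched elsewhere in this description) and uses simple rejection sampling for the constrained prior draws, whereas production samplers use more sophisticated exploration.

```python
import numpy as np

# Nested-sampling sketch: estimate log Z = log p(D|H) by shrinking the prior volume.
def nested_sampling(log_likelihood, prior_transform, ndim, n_live=100, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    live_u = rng.uniform(size=(n_live, ndim))                 # live points in the unit cube
    live_logl = np.array([log_likelihood(prior_transform(u)) for u in live_u])

    log_z = -np.inf                                           # running log-evidence
    log_x_prev = 0.0                                          # log prior volume, X_0 = 1
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)                          # lowest-likelihood live point
        log_x = -i / n_live                                   # expected shrinkage, X_i = exp(-i/N)
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))    # weight of the dead point
        log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
        log_x_prev = log_x

        # Replace the worst point with a prior draw obeying L > L_worst
        # (plain rejection sampling; real samplers explore more cleverly).
        threshold = live_logl[worst]
        while True:
            u_new = rng.uniform(size=ndim)
            logl_new = log_likelihood(prior_transform(u_new))
            if logl_new > threshold:
                break
        live_u[worst], live_logl[worst] = u_new, logl_new

    # Contribution of the remaining live points spread over the final volume.
    log_w_final = log_x_prev - np.log(n_live)
    for logl in live_logl:
        log_z = np.logaddexp(log_z, logl + log_w_final)
    return log_z
```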

It may be appreciated that Eq. 8 implicitly indicates that model selection and parameter estimation may both be accomplished within a unified Bayesian framework. Once the evidence has been estimated, the product of the associated likelihood function and the prior probability may contribute to the normalized posterior probability. Mean values of relevant parameters (and also their uncertainties as indicated by associated individual variances) may be estimated from estimates of posterior distributions.

Thus, two levels of inference may be utilized in the Bayesian framework to select a model from a plurality of models and to then estimate parameters associated with the selected model. Estimating a number of sound sources based, at least in part, on an acoustic signal received by a plurality of microphones in a microphone array corresponds to model selection. Estimating a direction of arrival (e.g., angle) for each sound source corresponds to parameter estimation.

An apparatus, method and/or system for sound source enumeration and DoA estimation are configured to estimate a number of sound sources associated with an acoustic signal. The estimating may include selecting a specific parametric model from a generalized model. The generalized model is related to a microphone array architecture used to capture the acoustic signal. The apparatus, method and/or system for sound source enumeration and DoA estimation are further configured to estimate a direction of arrival of each sound source of the number of sound sources based, at least in part, on the selected model. The estimating the number of sound sources and estimating the DoA of each sound source are performed using a Bayesian framework, as described herein.

As used herein, “generalized model” corresponds to a set of models, H_1, H_2, ..., H_M, or a model with at least one parameter that has not been specified. As used herein, “specific parametric model” corresponds to the specific model selected, by model selection operations, from the generalized model. The specific parametric model may thus correspond to one instance of the generalized model, e.g., H_s, or to the generalized model with the at least one parameter specified.

FIG. 1 illustrates a functional block diagram of a sound source enumeration and direction of arrival (DoA) system 100 consistent with several embodiments of the present disclosure. System 100 includes a computing device 102 and a microphone array 104.

Microphone array 104 is configured to capture an acoustic signal 106. In general, the acoustic signal 106 may include contributions from a plurality of acoustic sources. Computing device 102 may include, but is not limited to, a microcomputer, a portable computer, a desktop computer, a tablet computer, a laptop computer, etc. Computing device 102 includes a processor circuitry 110, a memory circuitry 112, an input/output (I/O) circuitry 114 and a user interface (UI) 116. Computing device 102 may include an enumeration module 120, a DoA module 122, a model data store 124, a configuration data store 126 and/or a sound data store 128.

Processor circuitry 110 may include, but is not limited to, a single core processing unit, a multicore processor, a graphics processing unit, etc. Processor circuitry 110 is configured to perform operations of computing device 102, including, for example, operations of the enumeration module 120 and the DoA module 122. Memory circuitry 112 may be configured to store data and/or configuration parameters associated with enumeration module 120, DoA module 122 and/or microphone array 104. For example, memory circuitry 112 may be configured to store the model data store 124, the configuration data store 126 and/or the sound data store 128. I/O circuitry 114 may be configured to receive electrical signals from microphone array 104, as described herein. UI 116 may include a user input device (e.g., keyboard, mouse, touch sensitive display, etc.) and/or a user output device, e.g., a display. Model data store 124 may be configured to store parameters associated with each model, as described herein. Configuration data store 126 may be configured to store model data associated with one or more microphone array architectures. Configuration data 126 may include, for example, a microphone array architecture identifier, a number of microphones in the array, etc.

Microphone array 104 includes a plurality of microphones 130-1,..., 130-N. The plurality of microphones may be arranged in a particular configuration. Characteristics of the physical configuration may include, for example, a number of microphones and a respective position of each microphone relative to one or more other microphones in the array.

Microphone array physical configurations may include, but are not limited to, one-dimensional linear microphone arrays, two-dimensional (e.g., rectangular) microphone arrays, coprime microphone arrays, spherical microphone arrays, etc. A spherical microphone array may be configured as a full sphere or may be configured as a portion (e.g., a hemisphere) of a sphere.

In operation, a sound signal may be captured by microphone array 104. The sound signal may include contributions from a plurality of sound sources. A number of sound sources and a location of each sound source may generally not be known. A corresponding electrical signal 106 may be received by computing device 102, by, e.g., I/O circuitry 114. The corresponding electrical signal 106 may be configured such that a respective individual electrical signal from each microphone 130-1,..., 130-N is maintained separate from each other individual electrical signal. The received electrical signals may be sampled and stored as one or more sound data sets in sound data store 128. Each sound data set may be associated with a microphone identifier and/or a time indicator. The sound data sets may thus correspond to data, D, as described herein.

Enumeration module 120 may be configured to perform sound source enumeration (i.e., model selection) operations within the Bayesian framework, as described herein. The sound source enumeration result may then be stored in model data store 124 and/or may be provided to a user via UI 116. DoA module 122 may be configured to estimate the direction of arrival, e.g., an angle, for each sound source estimated by enumeration module 120 within the Bayesian framework, as described herein. The DoA result may then be stored in model data store 124 and/or may be provided to a user via UI 116.

In an embodiment, the microphone array 104 is configured as a coprime microphone array architecture. A coprime microphone array may be one dimensional (i.e., linear) or two dimensional. A coprime linear microphone array allows for a relatively narrower beam with fewer microphones compared to a uniform linear microphone array. A coprime microphone array includes two uniform linear subarrays that are coincident (the same starting point and both continue in the same direction) with M and N microphones, respectively, where M and N are coprime with each other. By applying spatial filtering to both subarrays and combining their outputs, M + N - 1 microphones are configured to yield M x N directional bands.
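
A short sketch of the element layout implied by this construction, assuming illustrative values M = 4, N = 3 and an inter-element spacing d:

```python
import numpy as np

# Coprime linear array sketch: two coincident uniform subarrays with M and N elements
# (M, N coprime), spaced at N*d and M*d respectively and sharing the sensor at the
# origin, so M + N - 1 physical microphones are needed. M, N and d are assumed values.
M, N, d = 4, 3, 0.02                                  # spacing d in metres (assumed)
subarray_1 = np.arange(M) * N * d                     # M elements, spacing N*d
subarray_2 = np.arange(N) * M * d                     # N elements, spacing M*d
sensors = np.union1d(subarray_1, subarray_2)          # shared origin counted once
print(len(sensors), "microphones for", M * N, "directional bands")   # 6 -> 12
```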

Coprime linear microphone arrays may be configured to extend a frequency range of a given number of array elements by exceeding a spatial Nyquist limit.

In some embodiments, a coprime microphone array may be configured for broadband beamforming. Parametric models describing this broadband behavior enable the use of model-based Bayesian inference for estimating source directions as well as a number of sources present in the sound field. For example, for broadband beamforming coprime microphone array data, a generalized model may correspond to a generalized Laplace distribution function. The generalized Laplace distribution function may be written as:

H_S(θ) = A_0 + Σ_{s=1}^{S} A_s exp(−|θ − θ_s| / σ_s),    (10)

where θ is the azimuth angular variable, A_0 is a constant parameter configured to account for a noise floor, and the parameters per sound source include an amplitude, A_s, an angle of arrival, θ_s, and a beam width, σ_s, of each sound source. The number of parameters per sound source in this example is three. The model parameter set for S sound sources, Θ_S = {A_1, A_2, ..., A_S, θ_1, ..., θ_S, σ_1, ..., σ_S}, includes all the amplitude and angular parameters.
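
A sketch of such a model in code, assuming the sum-of-Laplacian-peaks form written above (a noise floor plus one Laplacian-shaped peak per source); the function name and the example parameter values are illustrative:

```python
import numpy as np

# Sketch of the generalized model of Eq. 10: noise floor A_0 plus one Laplacian-shaped
# beam peak per source with amplitude A_s, arrival angle theta_s and beam width sigma_s.
def coprime_beam_model(theta, a0, amplitudes, angles, widths):
    """H_S(theta) for S sources; theta and angles in degrees."""
    theta = np.asarray(theta, dtype=float)
    h = np.full_like(theta, a0)
    for a_s, theta_s, sigma_s in zip(amplitudes, angles, widths):
        h += a_s * np.exp(-np.abs(theta - theta_s) / sigma_s)
    return h

# Example: two hypothetical sources at 45 and 120 degrees.
theta = np.linspace(0.0, 180.0, 361)
h2 = coprime_beam_model(theta, a0=0.05, amplitudes=[1.0, 0.7], angles=[45.0, 120.0], widths=[5.0, 8.0])
```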

Model selection is related to the probability to which a modeled set of data matches an experimentally measured set of data. In this example, the experimentally measured data may correspond to a set of sound signal intensities measured at a plurality of angles, D = [D(θ)]. Model selection (corresponding to enumerating a number of sound sources) includes estimating (i.e., determining) the posterior probability, p(H_S|D), as described herein.

Consider the principle of maximum entropy, where no assumptions are made beyond the available known information. For the likelihood function, the variance between the observed data and the model-predicted data is assumed finite. Accordingly, the likelihood function corresponds to a normal distribution:

p(D|Θ_S, σ, H_S, I) = (2πσ²)^{−K/2} exp(−E²/σ²),    (11)

where E² = Σ_{k=1}^{K} (D_k − H_{S,k})² / 2, and K is the number of data points. The standard deviation term is not of interest; thus, dependence on this parameter may be removed by integrating the joint probability density distribution of the likelihood over all possible values of σ. Thus, the marginalized probability distribution function for the likelihood takes the form of a Student’s t-distribution as:

p(D|Θ_S, H_S, I) ∝ (Γ(K/2) / 2) (2πE²)^{−K/2},    (12)

where Γ(K/2) is the gamma function evaluated at K/2 (K being the number of data points). Based on the maximum entropy, the prior is assumed to be constant. The marginal likelihood, p(D|H_S), is determined numerically, as described herein. The evidence, i.e., the integral for the evidence, may then be approximated using a Markov chain Monte Carlo method, e.g., nested sampling.
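
In log form, the marginalized likelihood of Eq. 12 may be evaluated, for example, as follows; the model values are assumed to come from the parametric model of Eq. 10 evaluated at the measurement angles:

```python
import numpy as np
from scipy.special import gammaln

# Log of the marginalized (Student-t-like) likelihood of Eq. 12: the noise standard
# deviation has been integrated out, leaving the K-point data-misfit term E^2.
def log_marginal_likelihood(data, model_values):
    data = np.asarray(data, dtype=float)
    k = data.size                                     # number of data points K
    e2 = 0.5 * np.sum((data - model_values) ** 2)     # E^2 as defined above
    return gammaln(k / 2.0) - np.log(2.0) - (k / 2.0) * np.log(2.0 * np.pi * e2)
```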

Thus, given the generalized model of Eq. 10 and experimentally measured data captured from microphone array 104 by, e.g., I/O circuitry 114 and/or enumeration module 120, Bayesian analysis, as described herein, may be performed to estimate the number of sound sources indicated by the experimentally measured data. Estimating the number of sound sources corresponds to selecting a specific parametric model from the generalized model. In this example, the generalized model corresponds to S models of Eq. 10.

In another embodiment, the microphone array 104 is configured as a spherical microphone array architecture. In this embodiment, the spherical microphone array may correspond to a full sphere or to a hemisphere. A spherical microphone array includes a plurality of microphones distributed over a surface of at least a portion of a sphere, for example, a hemisphere.

A spherical microphone array may be configured to facilitate spherical harmonic beamforming. Spherical harmonic beamforming corresponds to processing a sound signal to map sound energy around the spherical array. Principles of spherical harmonics may be utilized to beamform spherical sound signals in order to map and model a sound energy of a sound scape. Spherical harmonics can be formulated by solving the spherical wave equation. Spherical harmonics may be represented as:

Y_n^m(θ, φ) = √[((2n + 1)/(4π)) ((n − m)!/(n + m)!)] P_n^m(cos θ) e^{jmφ},    (13)

where θ, φ are the elevation and azimuth angles, and m, n are integer numbers representing a degree and an order of the spherical harmonics, respectively. P_n^m(·) is the Legendre function of degree m and order n, and j = √−1. The degree m may be understood as representing different orientations of the spherical harmonics, and the order n provides an increase in resolution. The number of microphones sampling the sphere determines the order of spherical harmonics that the spherical array may achieve.
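
Spherical harmonics of this form are available in common numerical libraries; for example, assuming SciPy is available, scipy.special.sph_harm takes its arguments in the order (m, n, azimuth, polar), so the elevation and azimuth angles of the text map onto the polar and azimuthal arguments, respectively. The values below are illustrative.

```python
import numpy as np
from scipy.special import sph_harm

# Evaluate Y_n^m of Eq. 13. SciPy's convention: sph_harm(m, n, azimuth, polar),
# with azimuth in [0, 2*pi] and polar (colatitude) in [0, pi].
n, m = 2, 1                                  # order n and degree m, as named above
elevation = np.pi / 3                        # polar/colatitude angle (illustrative)
azimuth = np.pi / 4                          # azimuth angle (illustrative)
y = sph_harm(m, n, azimuth, elevation)       # complex spherical-harmonic value
print(f"Y_{n}^{m} = {y:.4f}")
```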

“Beamforming” relates to spatially filtering a signal, e.g., a sound signal. “Spherical beamforming” corresponds to spatially filtering data with respect to a sphere. Spatial filtering may facilitate “listening” to a selected direction and suppressing other directions. A spatial filter direction may thus be referred to as a beam, and a directional pattern of this “listening” direction may be described as a beam pattern. For filtering a plurality of sound sources, an energy sum of a plurality of filter directions may be written as:

where S is the number of concurrent sound sources, A_s represents an amplitude, and the fraction represents the normalized energy associated with the sth source, with

where Ψ_S = {θ_1, ..., θ_S; φ_1, ..., φ_S} are the S listening directions (sound sources), which are fixed, yet unknown, and ψ_s = {θ_s, φ_s} denotes the direction of the sth sound source. The symbol * represents the complex conjugate. Y_n^m(·) are the spherical harmonics of order n and degree m.

In cases where beamforming is used, beamforming data D(θ_i, φ_i) captured from M microphones embedded on a rigid spherical surface with radius, r, may be expressed as:

where:

where mic_l(k, r, θ_l, φ_l) represents the lth microphone output among the M microphone channels around the rigid sphere surface at angular position {θ_l, φ_l}, and

for axis-symmetric beamforming in the plane-wave decomposition mode, and the bracket represents the spherical mode amplitude for a rigid sphere, with j_n(kr) and h_n(kr) being the spherical Bessel and Hankel functions of the first kind, respectively. The prime denotes the derivative with respect to the argument. The angular variables, θ and φ, represent the elevation and the azimuth angles, respectively.
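
The bracketed rigid-sphere mode amplitude may be sketched, for example, with SciPy’s spherical Bessel functions; the specific expression below is a commonly used plane-wave-decomposition form for a rigid sphere and is an assumption, not a verbatim reproduction of the patent’s equation.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

# Assumed standard rigid-sphere mode amplitude:
#   b_n(kr) = j_n(kr) - (j_n'(kr) / h_n'(kr)) * h_n(kr),
# where h_n = j_n + i*y_n is the spherical Hankel function of the first kind and the
# prime denotes the derivative with respect to the argument.
def spherical_hn1(n, z, derivative=False):
    return spherical_jn(n, z, derivative) + 1j * spherical_yn(n, z, derivative)

def rigid_sphere_mode_amplitude(n, kr):
    """Mode amplitude of order n for microphones on a rigid sphere (kr = wavenumber * radius)."""
    return spherical_jn(n, kr) - (
        spherical_jn(n, kr, derivative=True) / spherical_hn1(n, kr, derivative=True)
    ) * spherical_hn1(n, kr)

kr = 2.0                                       # illustrative value of k*r
print([abs(rigid_sphere_mode_amplitude(n, kr)) for n in range(4)])
```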

Enumerating a number of sound sources (corresponding to model selection) is related to the probability to which a modeled set of data, H_s = [H_s(θ, φ)], matches an experimentally measured set of data, D = [D(θ, φ)], as given in Eq. 16. In other words, enumerating the number of sound sources corresponds to determining p(H_s|D), the posterior probability, as described herein. Thus, given the generalized model of Eq. 14 and experimentally measured data captured from microphone array 104 by, e.g., I/O circuitry 114 and/or enumeration module 120, Bayesian analysis, as described herein, may be performed to estimate the number of sound sources indicated by the experimentally measured data.

Estimating the number of sound sources corresponds to selecting a specific parametric model from the generalized model. In this example, the generalized model corresponds to S models of Eq. 14. Parameter estimation may be performed based, at least in part, on Bayes’ Theorem (Eq. 7), as described herein. The evidence, p(D|H), may be determined according to Eq. 8, by integrating the product of the likelihood and the prior distribution. The likelihood, p(D|Θ, H), may be determined as:

p(D|Θ, H) ∝ (Γ(Q/2) / 2) (2πE²)^{−Q/2},

with E² = (1/2) Σ_{j=1}^{J} Σ_{k=1}^{K} [D(θ_j, φ_k) − H(θ_j, φ_k)]², where Q is the total number of data points, Q = J · K, with θ_1 ≤ θ_j ≤ θ_J and φ_1 ≤ φ_k ≤ φ_K covering the entire angular range under consideration. The model H(θ_j, φ_k) and the data D are determined by Eqs. 14 and 16, respectively.

The Bayesian framework applied to prediction of the DoA for a plurality of sound sources includes determination of the evidence term. The evidence may be determined based, at least in part, on sampling. In one nonlimiting example, nested sampling may be used. The model parameters associated with a maximum-likelihood sample, or model parameters that have converged to within a tolerance, may then correspond to the directions of arrival, i.e., the angles θ and φ.
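
Putting the two levels of inference together for the coprime-array example, a sketch using the third-party dynesty nested-sampling package (assumed available) might run one sampler per candidate source count, compare log-evidences, and read the DoA angles from the preferred model’s maximum-likelihood sample; the synthetic data, prior ranges, and model form are illustrative assumptions.

```python
import numpy as np
import dynesty   # third-party nested-sampling package, assumed available

# Two levels of inference: evidence comparison over S (enumeration) followed by
# reading off the DoA parameters of the preferred model.
angles_deg = np.linspace(0.0, 180.0, 181)
measured = 0.05 + 1.0 * np.exp(-np.abs(angles_deg - 45.0) / 5.0)      # synthetic data

def make_loglike(s):
    def loglike(p):
        a0, amps, thetas, widths = p[0], p[1:1+s], p[1+s:1+2*s], p[1+2*s:1+3*s]
        h = np.full_like(angles_deg, a0)
        for a, t, w in zip(amps, thetas, widths):
            h += a * np.exp(-np.abs(angles_deg - t) / w)
        e2 = 0.5 * np.sum((measured - h) ** 2)
        # Student-t form with constants common to all models omitted.
        return -(angles_deg.size / 2.0) * np.log(e2)
    return loglike

def make_ptform(s):
    def ptform(u):
        p = np.empty(1 + 3 * s)
        p[0] = 0.2 * u[0]                                              # noise floor
        p[1:1+s] = 2.0 * u[1:1+s]                                      # amplitudes
        p[1+s:1+2*s] = 180.0 * u[1+s:1+2*s]                            # angles (degrees)
        p[1+2*s:] = 0.5 + 29.5 * u[1+2*s:]                             # beam widths
        return p
    return ptform

best = {}
for s in (1, 2, 3):                                                    # candidate source counts
    sampler = dynesty.NestedSampler(make_loglike(s), make_ptform(s), ndim=1 + 3 * s)
    sampler.run_nested(print_progress=False)
    res = sampler.results
    best[s] = (res.logz[-1], res.samples[np.argmax(res.logl)])
s_hat = max(best, key=lambda s: best[s][0])
print("estimated number of sources:", s_hat, "DoA parameters:", best[s_hat][1])
```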

Thus, the sound source enumeration and direction of arrival system 100 may be configured to estimate a number of sound sources and direction of arrival of an acoustic signal, within a probabilistic Bayesian framework.

FIG. 2 is a flowchart 200 of example sound source enumeration and DoA estimation operations consistent with several embodiments of the present disclosure. In particular, the flowchart 200 illustrates estimating a number of sound sources and a direction of arrival of each sound source using a Bayesian framework. The operations of flowchart 200 may be performed by, for example, computing device 102 (e.g., enumeration module 120 and/or DoA module 122) of FIG. 1.

Operations of flowchart 200 may begin with identifying a microphone array architecture at operation 202. A generalized model may be selected at operation 204. For example, the generalized model may be selected based, at least in part, on the identified microphone array architecture. An acoustic signal may be captured from the microphone array at operation 206. At operation 208, a number of sound sources may be estimated using a Bayesian framework. Estimating the number of sound sources may correspond to selecting a specific parametric model from the generalized model. At operation 210, a direction of arrival for each sound source may be estimated using a Bayesian framework. Program flow may then continue at operation 212.

Thus, a number of sound sources may be estimated and a direction of arrival of each sound source may be estimated using a Bayesian framework.

As used in any embodiment herein, the terms “logic” and/or “module” may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices.

“Circuitry”, as used in any embodiment herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The logic and/or module may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.

Memory circuitry 112 may include one or more of the following types of memory: semiconductor firmware memory, programmable memory, non-volatile memory, read only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, and/or optical disk memory. Either additionally or alternatively memory circuitry 112 may include other and/or later-developed types of computer-readable memory.

Embodiments of the operations described herein may be implemented in a computer-readable storage device having stored thereon instructions that when executed by one or more processors perform the methods. The processor may include, for example, a processing unit and/or programmable circuitry. The storage device may include a machine readable storage device including any type of tangible, non-transitory storage device, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of storage devices suitable for storing electronic instructions.

The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.