

Title:
PULSE-BASED AUTOMATIC SPEECH RECOGNITION
Document Type and Number:
WIPO Patent Application WO/2017/100298
Kind Code:
A1
Abstract:
Various examples are provided related to speech recognition. In one example, a method includes converting an auditory signal into a pulse train, segmenting the pulse train into a series of frames having a predefined duration, and identifying a portion of the auditory signal by applying at least a portion of the series of frames segmented from the pulse train to a kernel adaptive autoregressive-moving-average (KAARMA) network. In another example, a speech recognition system includes processing circuitry configured to convert an auditory signal into a pulse train, segment the pulse train into a series of frames, and identify a portion of the auditory signal by applying at least a portion of the series of frames segmented from the pulse train to a KAARMA network. The series of frames segmented from the pulse train can be applied to a KAARMA chain including a plurality of KAARMA networks for identification.

Inventors:
PRINCIPE JOSE C (US)
LI KAN (US)
Application Number:
PCT/US2016/065344
Publication Date:
June 15, 2017
Filing Date:
December 07, 2016
Assignee:
THE UNIV OF FLORIDA RES FOUND INC (US)
International Classes:
G06F15/18; G06F17/21; G06F40/00; G06N3/04; G10L15/02; G10L15/16; H03H17/02; H03H21/00
Foreign References:
US8756265B12014-06-17
US20150220832A12015-08-06
Other References:
LI ET AL., "The Kernel Adaptive Autoregressive-Moving-Average Algorithm," IEEE TNNLS Special Issue on Neurodynamic Systems for Optimization and Applications, vol. 27, no. 7, 1 January 2015 (2015-01-01), pages 1-14, XP055389254, retrieved from the Internet on 2017-01-24.
Attorney, Agent or Firm:
SCHOEN, Randy, R. et al. (US)
Claims:
CLAIMS

Therefore, at least the following is claimed:

1. A speech recognition method, comprising:

converting an auditory signal into a pulse train;

segmenting the pulse train into a series of frames having a predefined duration; and

identifying a portion of the auditory signal by applying at least a portion of the series of frames segmented from the pulse train to a kernel adaptive autoregressive-moving-average (KAARMA) network.

2. The speech recognition method of claim 1, wherein the pulse train is transformed into a reproducing kernel Hilbert space (RKHS) using a spike kernel.

3. The speech recognition method of any of claims 1 and 2, wherein the KAARMA network generates a state that identifies the portion of the auditory signal.

4. The speech recognition method of any of claims 1-3, wherein the portion of the auditory signal is a phoneme.

5. The speech recognition method of any of claims 1-3, wherein the portion of the auditory signal is a triphone.

6. The speech recognition method of any of claims 1-5, comprising identifying a word by applying the series of frames segmented from the pulse train to a KAARMA chain including the KAARMA network.

7. The speech recognition method of claim 6, wherein the word is identified based at least in part upon a plurality of states generated by KAARMA networks of the KAARMA chain.

8. The speech recognition method of any of claims 1-7, wherein the portion of the series of frames is applied to the KAARMA network in a natural left-to-right temporal sequence.

9. The speech recognition method of any of claims 1-7, wherein the portion of the series of frames is applied to the KAARMA network in a reversed right-to-left temporal sequence.

10. The speech recognition method of any of claims 1-8, wherein the portion of the auditory signal is identified by applying the portion of the series of frames to the KAARMA network in a natural left-to-right temporal sequence and applying the portion of the series of frames to a second KAARMA network in a reversed right-to-left temporal sequence.

11. The speech recognition method of claim 10, wherein the portion of the auditory signal is identified based upon a first state generated by the KAARMA network and a second state generated by the second KAARMA network.

12. The speech recognition method of any of claims 1-11, wherein the auditory signal is converted into a plurality of pulse trains associated with corresponding frequency bands of the auditory signal, individual pulse trains of the plurality of pulse trains segmented into a corresponding series of frames having the predefined duration.

13. The speech recognition method of claim 12, wherein the portion of the auditory signal is identified by applying corresponding portions of the corresponding series of frames associated with the corresponding frequency bands to the KAARMA network.

14. A speech recognition system, comprising:

processing circuitry including a processor and memory comprising a speech recognition application, where execution of the speech recognition application by the processor causes the processing circuitry to:

convert an auditory signal into a pulse train;

segment the pulse train into a series of frames having a predefined duration; and

identify a portion of the auditory signal by applying at least a portion of the series of frames segmented from the pulse train to a kernel adaptive autoregressive-moving-average (KAARMA) network.

15. The speech recognition system of claim 14, wherein the KAARMA network generates a state that identifies the portion of the auditory signal.

16. The speech recognition system of any of claims 14 and 15, wherein execution of the speech recognition application causes the processing circuitry to identify a word by applying the series of frames segmented from the pulse train to a KAARMA chain including the KAARMA network.

17. The speech recognition system of claim 16, wherein the word is identified based at least in part upon a plurality of states generated by KAARMA networks of the KAARMA chain.

18. The speech recognition system of any of claims 14-17, wherein the portion of the auditory signal is identified by applying the portion of the series of frames to the KAARMA network in a natural left-to-right temporal sequence and applying the portion of the series of frames to a second KAARMA network in a reversed right-to-left temporal sequence.

19. The speech recognition system of claim 18, wherein the portion of the auditory signal is identified based upon a first state generated by the KAARMA network and a second state generated by the second KAARMA network.

20. The speech recognition system of any of claims 14-19, wherein the auditory signal is converted into a plurality of pulse trains associated with corresponding frequency bands of the auditory signal, individual pulse trains of the plurality of pulse trains segmented into a corresponding series of frames having the predefined duration.

Description:
PULSE-BASED AUTOMATIC SPEECH RECOGNITION

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to, and the benefit of, co-pending U.S. provisional application entitled "Pulse-Based Automatic Speech Recognition" having serial no. 62/263,958, filed December 7, 2015, which is hereby incorporated by reference in its entirety.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002] This invention was made with government support under agreement N66001-10-C-2008 awarded by the U.S. Department of Defense. The Government has certain rights in the invention.

BACKGROUND

[0003] An automatic speech recognition (ASR) system translates a recorded audio signal into text. At its core, this is a pattern classification problem. However, both the nonstationarity of the signal and the large variation in the temporal dimension of the speech feature sequences prevent classical classifiers such as Bayes and nearest-neighbor, or state-of-the-art classifiers such as support vector machines (SVMs), which are limited to static patterns or fixed-dimension inputs, from being implemented in a straightforward manner.

SUMMARY

[0004] Embodiments of the present disclosure are related to speech recognition.

[0005] In one embodiment, among others, a speech recognition method comprises: converting an auditory signal into a pulse train; segmenting the pulse train into a series of frames having a predefined duration; and identifying a portion of the auditory signal by applying at least a portion of the series of frames segmented from the pulse train to a kernel adaptive autoregressive-moving-average (KAARMA) network. In one or more aspects of these embodiments, the pulse train can be transformed into a reproducing kernel Hilbert space (RKHS) using a spike kernel. The KAARMA network can generate a state that identifies the portion of the auditory signal. The portion of the auditory signal can be a phoneme, a triphone, or a word.

[0006] In one or more aspects of these embodiments, the speech recognition method can comprise identifying a word by applying the series of frames segmented from the pulse train to a KAARMA chain including the KAARMA network. The word can be identified based at least in part upon a plurality of states generated by KAARMA networks of the KAARMA chain. The portion of the series of frames can be applied to the KAARMA network in a natural left-to-right temporal sequence. The portion of the series of frames can be applied to the KAARMA network in a reversed right-to-left temporal sequence.

[0007] In one or more aspects of these embodiments, the portion of the auditory signal can be identified by applying the portion of the series of frames to the KAARMA network in a natural left-to-right temporal sequence and applying the portion of the series of frames to a second KAARMA network in a reversed right-to-left temporal sequence. The portion of the auditory signal can be identified based upon a first state generated by the KAARMA network and a second state generated by the second KAARMA network. The auditory signal can be converted into a plurality of pulse trains associated with corresponding frequency bands of the auditory signal. Individual pulse trains of the plurality of pulse trains can be segmented into a corresponding series of frames having the predefined duration. The portion of the auditory signal can be identified by applying corresponding portions of the corresponding series of frames associated with the corresponding frequency bands to the KAARMA network.

[0008] In another embodiment, a speech recognition system comprises processing circuitry including a processor and memory comprising a speech recognition application, where execution of the speech recognition application by the processor causes the processing circuitry to: convert an auditory signal into a pulse train; segment the pulse train into a series of frames having a predefined duration; and identify a portion of the auditory signal by applying at least a portion of the series of frames segmented from the pulse train to a kernel adaptive autoregressive-moving-average (KAARMA) network. In one or more aspects of these embodiments, the KAARMA network can generate a state that identifies the portion of the auditory signal.

[0009] In one or more aspects of these embodiments, execution of the speech recognition application can cause the processing circuitry to identify a word by applying the series of frames segmented from the pulse train to a KAARMA chain including the KAARMA network. The word can be identified based at least in part upon a plurality of states generated by KAARMA networks of the KAARMA chain. The portion of the auditory signal can be identified by applying the portion of the series of frames to the KAARMA network in a natural left-to-right temporal sequence and applying the portion of the series of frames to a second KAARMA network in a reversed right-to-left temporal sequence. The portion of the auditory signal can be identified based upon a first state generated by the KAARMA network and a second state generated by the second KAARMA network.

[0010] In one or more aspects of these embodiments, the auditory signal can be converted into a plurality of pulse trains associated with corresponding frequency bands of the auditory signal, individual pulse trains of the plurality of pulse trains segmented into a corresponding series of frames having the predefined duration. The portion of the auditory signal can be identified by applying corresponding portions of the corresponding series of frames associated with the corresponding frequency bands to the KAARMA network.

[0011] Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims. In addition, all optional and preferred features and modifications of the described embodiments are usable in all aspects of the disclosure taught herein. Furthermore, the individual features of the dependent claims, as well as all optional and preferred features and modifications of the described embodiments are combinable and interchangeable with one another.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.

[0013] FIG. 1 is a schematic diagram of an example of a general state-space model for a dynamical system in accordance with various embodiments of the present disclosure.

[0014] FIG. 2 illustrates an example of a kernel adaptive autoregressive-moving-average (KAARMA) network in accordance with various embodiments of the present disclosure.

[0015] FIGS. 3A and 3B are examples of algorithms that implement portions of the KAARMA network of FIG. 2 in accordance with various embodiments of the present disclosure.

[0016] FIGS. 4A through 4D include tables illustrating performance results of the KAARMA network of FIG. 2 in accordance with various embodiments of the present disclosure.

[0017] FIG. 5 illustrates an example of a KAARMA chain of 3 grammar states in accordance with various embodiments of the present disclosure.

[0018] FIGS. 6A through 6C illustrate an example of segmentation of frequency responses of a 12-filter gammatone filterbank in accordance with various embodiments of the present disclosure.

[0019] FIGS. 7A through 7C illustrate an example of word recognition using a pulse-based KAARMA chain in accordance with various embodiments of the present disclosure.

[0020] FIGS. 8A through 8C include tables illustrating and comparing performance results of the KAARMA chain and a hidden Markov model (HMM) framework in accordance with various embodiments of the present disclosure.

[0021] FIGS. 9A through 9C illustrate examples of spike train templates in accordance with various embodiments of the present disclosure.

[0022] FIG. 10 includes a table comparing performance of a liquid state machine (LSM) using various supervised learning techniques in accordance with various embodiments of the present disclosure.

[0023] FIG. 11 illustrates a misclassified spike train by the LSM of FIG. 10 in accordance with various embodiments of the present disclosure.

[0024] FIGS. 12A and 12B illustrate examples of online performance of quantized KAARMA (or QKAARMA) for spike train classification during training and testing in accordance with various embodiments of the present disclosure.

[0025] FIG. 13 illustrates an example of a minimized finite state machine in accordance with various embodiments of the present disclosure.

[0026] FIGS. 14A and 14B illustrate examples of state transition trajectories for spike trains generated by templates 0 and 1, respectively, in accordance with various embodiments of the present disclosure.

[0027] FIG. 15 is a schematic block diagram of an example of a speech processing device in accordance with various embodiments of the present disclosure.

DETAILED DESCRIPTION

[0028] Disclosed herein are various embodiments related to speech recognition. Reference will now be made in detail to the description of the embodiments as illustrated in the drawings, wherein like reference numbers indicate like parts throughout the several views.

[0029] A kernel adaptive autoregressive-moving-average (KAARMA) algorithm has demonstrated the ability to model dynamical systems and to act as a temporal classifier on the benchmark Tomita grammars. This disclosure describes the use of KAARMA networks as an acoustic model for speech recognition. Since speech production is both nonlinear and nonstationary in nature, KAARMA-based approaches can deliver computationally efficient solutions. The performance of these grammar classifiers is shown for a fundamental application in speech processing, isolated word recognition, using both conventional feature sequences (e.g., mel-frequency cepstral coefficients or MFCCs) and biologically inspired signals (e.g., spike or pulse trains). The performance of these classifiers can be enhanced under a statistical framework.

KAARMA

[0030] The KAARMA algorithm will now be briefly described. Refer to "The kernel adaptive autoregressive-moving-average algorithm" by K. Li and J. C. Principe (IEEE Trans. on Neural Networks and Learning Systems, April 2015), which is hereby incorporated by reference in its entirety, for a more in-depth discussion. Referring to FIG. 1, shown is an example of a general state-space model for a dynamical system. Let the dynamical system be defined in terms of general continuous nonlinear state transition and observation functions, $f(\cdot)$ and $h(\cdot)$, respectively, as:

$$x_i = f(x_{i-1}, u_i) \,, \tag{1}$$

$$y_i = h(x_i) \,, \tag{2}$$

where

$$f(x_{i-1}, u_i) = \left[ f^{(1)}(x_{i-1}, u_i), \cdots, f^{(n_x)}(x_{i-1}, u_i) \right]^T \,, \quad h(x_i) = \left[ h^{(1)}(x_i), \cdots, h^{(n_y)}(x_i) \right]^T \,, \tag{3}$$

with input $u_i \in \mathbb{R}^{n_u}$, state $x_i \in \mathbb{R}^{n_x}$, output $y_i \in \mathbb{R}^{n_y}$, and the parenthesized superscript $(k)$ indicating the $k$-th component of a vector or the $k$-th column of a matrix. Note that the input, state, and output vectors have independent degrees of freedom or dimensionality.

[0031] For simplicity, EQNS. (1) and (2) can be rewritten in terms of a new hidden state vector as:

$$s_i = \begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} f(x_{i-1}, u_i) \\ h \circ f(x_{i-1}, u_i) \end{bmatrix} \,, \tag{4}$$

$$y_i = \begin{bmatrix} 0 & I_{n_y} \end{bmatrix} s_i \triangleq \mathbb{I} s_i \,, \tag{5}$$

where $I_{n_y}$ is an $n_y \times n_y$ identity matrix, $0$ is an $n_y \times n_x$ zero matrix, and $\circ$ is the function composition operator. This augmented state vector $s_i \in \mathbb{R}^{n_s}$ is formed by concatenating the output $y_i$ with the original state vector $x_i$. With this rewriting, the measurement equation simplifies to a fixed selector matrix $\mathbb{I} = \begin{bmatrix} 0 & I_{n_y} \end{bmatrix}$.

[0032] Next, define an equivalent transition function taking as its argument the new state variable $s$:

$$g(s_{i-1}, u_i) \triangleq f(x_{i-1}, u_i) \,. \tag{6}$$

Using this notation, EQNS. (1) and (2) become:

$$x_i = g(s_{i-1}, u_i) \,, \tag{7}$$

$$y_i = h(x_i) = h \circ g(s_{i-1}, u_i) \,. \tag{8}$$

[0033] To learn the general continuous nonlinear transition and observation functions, $g(\cdot,\cdot)$ and $h \circ g(\cdot,\cdot)$, respectively, the theory of reproducing kernel Hilbert spaces (RKHS) can be applied. First, map the augmented state vector $s_i$ and the input vector $u_i$ into two separate RKHSs as $\varphi(s_i) \in \mathcal{H}_s$ and $\phi(u_i) \in \mathcal{H}_u$, respectively, using a spike kernel (e.g., a Schoenberg kernel). By the representer theorem, the state-space model defined by EQNS. (7) and (8) can be expressed as the following set of weights (functions in the input space) in the joint RKHS $\mathcal{H}_{su} = \mathcal{H}_s \otimes \mathcal{H}_u$:

$$\Omega \triangleq \Omega_{\mathcal{H}_{su}} = \begin{bmatrix} g(\cdot,\cdot) \\ h \circ g(\cdot,\cdot) \end{bmatrix} \,, \tag{9}$$

where $\otimes$ is the tensor-product operator. Define the new features in the tensor-product RKHS as:

$$\psi(s_{i-1}, u_i) \triangleq \varphi(s_{i-1}) \otimes \phi(u_i) \in \mathcal{H}_{su} \,. \tag{10}$$

It follows that the tensor-product kernel is defined by:

$$\mathcal{K}_{su}(s, u, s', u') = (\mathcal{K}_s \otimes \mathcal{K}_u)(s, u, s', u') = \mathcal{K}_s(s, s') \cdot \mathcal{K}_u(u, u') \,. \tag{11}$$

[0034] This construction has several advantages over the simple concatenation of the input u and the state s. First, the tensor product kernel of two positive definite kernels is also a positive definite kernel. Second, since the adaptive filtering is performed in an RKHS using features, there is no constraint on the original input signals or the number of signals, as long as the appropriate reproducing kernel is used for each signal. Last but not least, this formulation imposes no restriction on the relationship between the signals in the original input space. This is important for input signals having different representations and spatio-temporal scales. For example, under this framework, a neurobiological system can be modeled taking spike trains, continuous amplitude local field potentials (LFPs), and vectorized state variables as inputs.

[0035] Finally, the kernel state-space model becomes:

$$s_i = \Omega^T \psi(s_{i-1}, u_i) \,, \tag{12}$$

$$y_i = \mathbb{I} s_i \,. \tag{13}$$

FIG. 2 shows an example of a simple KAARMA network. In general, the states $s_i$ (503) are assumed hidden, and the desired state does not need to be available at every time step; e.g., a deferred desired output value for $y_i$ may only be observed at the final indexed step $i = f$, i.e., $d_f$.
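For illustration, the following minimal Python sketch runs the forward recursion of EQNS. (11)-(13) for a fixed dictionary of centers, using Gaussian kernels for both state and input. All names (gauss_kernel, kaarma_forward, the toy dimensions) are hypothetical, the sketch omits training, and it is not the patent's implementation.

```python
import numpy as np

def gauss_kernel(a, X, x):
    """Gaussian kernel exp(-a * ||x_j - x||^2) between each column x_j of X and x."""
    d = X - x[:, None]
    return np.exp(-a * np.sum(d * d, axis=0))

def kaarma_forward(u_seq, S_dict, U_dict, A, s0, a_s=2.5, a_u=2.5):
    """Forward recursion of EQNS. (11)-(13) for a fixed dictionary.

    S_dict: (n_s, m) state centers; U_dict: (n_u, m) input centers;
    A: (m, n_s) coefficients; s0: (n_s,) initial augmented state.
    Returns the final augmented state; the output y is read off through
    the selector matrix of EQN. (13).
    """
    s = s0
    for u in u_seq:
        # tensor-product kernel of EQN. (11): K_s(s_j, s) * K_u(u_j, u)
        k = gauss_kernel(a_s, S_dict, s) * gauss_kernel(a_u, U_dict, u)
        s = A.T @ k  # EQN. (12): s_i = Omega^T psi(s_{i-1}, u_i)
    return s

# toy usage: 3-dimensional hidden state, 12-dimensional input frames, 5 centers
rng = np.random.default_rng(0)
n_s, n_u, m = 3, 12, 5
S_dict = rng.normal(size=(n_s, m))
U_dict = rng.normal(size=(n_u, m))
A = rng.normal(size=(m, n_s))
s_final = kaarma_forward(rng.normal(size=(10, n_u)), S_dict, U_dict, A, np.zeros(n_s))
```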

[0036] Kernel Adaptive Recurrent Filtering. The learning procedure presented here computes the exact error gradient in the RKHS. For simplicity, consider only the Gaussian kernel in the derivation. For the state and input vectors, the joint inner products can be computed using $\mathcal{K}_{a_s}(s, s')$ and $\mathcal{K}_{a_u}(u, u')$, respectively.

[0037] The cost function at time i can be defined as:

$$\mathcal{E}_i = \frac{1}{2} e_i^T e_i \,, \tag{14}$$

where $e_i = d_i - y_i \in \mathbb{R}^{n_y \times 1}$ is the error vector, with $d_i$ as the desired signal. The error gradient with respect to the RKHS weights at time $i$ is:

$$\frac{\partial \mathcal{E}_i}{\partial \Omega} = -e_i^T \frac{\partial y_i}{\partial \Omega} = -e_i^T \mathbb{I} \frac{\partial s_i}{\partial \Omega} \,, \tag{15}$$

where the partial derivative $\frac{\partial s_i}{\partial \Omega}$ consists of the terms $\frac{\partial s_i}{\partial \Omega^{(1)}}, \cdots, \frac{\partial s_i}{\partial \Omega^{(n_s)}}$ corresponding to the state dimensions. Note that the feature-space weights $\Omega_i$ are functions with potentially infinite dimension. Fortunately, the functional derivative is well-posed in the RKHS: since Hilbert spaces are complete normed vector spaces, the Fréchet derivative can be used to compute EQN. (15).

[0038] For the $k$-th component of $\Omega_i$, the gradient can be expanded using the chain rule as:

$$\frac{\partial \mathcal{E}_i}{\partial \Omega_i^{(k)}} = -e_i^T \mathbb{I} \frac{\partial s_i}{\partial \Omega_i^{(k)}} \,, \tag{16}$$

where $\frac{\partial s_i}{\partial \Omega_i^{(k)}} = \frac{\partial \left( \Omega_i^T \psi(s_{i-1}, u_i) \right)}{\partial \Omega_i^{(k)}}$.

[0039] Applying the product rule to the gradient yields:

$$\frac{\partial s_i}{\partial \Omega_i^{(k)}} = I_{n_s}^{(k)} \psi(s_{i-1}, u_i)^T + \Omega_i^T \frac{\partial \psi(s_{i-1}, u_i)}{\partial s_{i-1}} \frac{\partial s_{i-1}}{\partial \Omega_i^{(k)}} \,, \tag{17}$$

where $I_{n_s}^{(k)} \in \mathbb{R}^{n_s}$ is the $k$-th column of the $n_s \times n_s$ identity matrix. The distinction between a recurrent formulation and the normal feed-forward kernel adaptive filtering (KAF) lies in the gradient term on the right-hand side of EQN. (17). In a recurrent network, past states are coupled with the current state input through feedback. Consequently, the partial derivatives of the previous states with respect to the current filter weights are nonzero.

[0040] Using the representer theorem, the weights $\Omega_i$ at time $i$ can be written as a linear combination of prior features:

$$\Omega_i = \Psi_i A_i \,, \tag{18}$$

where $\Psi_i = \left[ \psi(s_{-1}, u_0), \cdots, \psi(s_{m-2}, u_{m-1}) \right] \in \mathbb{R}^{n_\psi \times m}$ is a collection of the $m$ past tensor-product features with potentially infinite dimension $n_\psi$, and $A_i = \left[ \alpha_{i,1}, \cdots, \alpha_{i,n_s} \right] \in \mathbb{R}^{m \times n_s}$ is the set of corresponding coefficients. For feedforward KAF such as the kernel least mean squares (KLMS) algorithm, the number of basis functions grows linearly with each new sample ($m = i$). Here, $m$ denotes a dictionary $\Psi_i$ of arbitrary size, with initialization $\psi(s_{-1}, u_0)$. Thus, the $k$-th component ($1 \leq k \leq n_s$) of the filter weights at time $i$ becomes:

$$\Omega_i^{(k)} = \Psi_i A_i^{(k)} = \Psi_i \alpha_{i,k} \,. \tag{19}$$

[0041] Substituting the expression for the weights $\Omega_i$ in EQN. (18) into the feedback gradient on the right-hand side of EQN. (17) and applying the chain rule gives:

$$\Omega_i^T \frac{\partial \psi(s_{i-1}, u_i)}{\partial s_{i-1}} = A_i^T \Psi_i^T \frac{\partial \psi(s_{i-1}, u_i)}{\partial s_{i-1}} = 2 a_s A_i^T K_i D_i^T \,, \tag{20}$$

where the partial derivative is evaluated using the Gaussian tensor-product kernel of EQN. (11), $K_i = \mathrm{diag}\left( \mathcal{K}_{a_s}(s_{-1}, s_{i-1}) \mathcal{K}_{a_u}(u_0, u_i), \cdots, \mathcal{K}_{a_s}(s_{m-2}, s_{i-1}) \mathcal{K}_{a_u}(u_{m-1}, u_i) \right)$, and $D_i = \left[ (s_{-1} - s_{i-1}), \cdots, (s_{m-2} - s_{i-1}) \right]$ is the difference matrix between the state centers of the filter and the current input state $s_{i-1}$. The gradient coefficients in EQN. (20) can be collected into a matrix $\Lambda_i \triangleq \frac{\partial s_i}{\partial s_{i-1}} = 2 a_s A_i^T K_i D_i^T$, which is called the state-transition gradient. Substituting EQN. (20) into EQN. (17) gives the following recursion:

$$\frac{\partial s_i}{\partial \Omega_i^{(k)}} = \Lambda_i \frac{\partial s_{i-1}}{\partial \Omega_i^{(k)}} + I_{n_s}^{(k)} \psi(s_{i-1}, u_i)^T \,. \tag{21}$$

[0042] The gradient $\frac{\partial s_i}{\partial \Omega_i^{(k)}}$ measures the sensitivity of $s_i$, the state output at time $i$, to a small change in the $k$-th weight component, taking into account the effect of such variation in the weight values over the entire state trajectory $s_0, \cdots, s_i$. In this evaluation, the initial state $s_0$, the input sequence $u_1, \cdots, u_i$, and the remaining weights $\Omega_i^{(j)}$, where $j \neq k$, are fixed. The state is back-propagated with respect to a constant weight via feedback.

[0043] Clearly EQN. (21) is independent of any teacher signal or error that the system may incur in the future, and it can be computed entirely from the observed data. Therefore, the state gradients can be forward propagated. Since the initial state is user-defined and functionally independent of the filter weights, setting $\frac{\partial s_0}{\partial \Omega_i^{(k)}} = 0$, the ensuing recursion of EQN. (21) becomes:

$$\frac{\partial s_i}{\partial \Omega_i^{(k)}} = \sum_{j=1}^{i} \left( \prod_{l=j+1}^{i} \Lambda_l \right) I_{n_s}^{(k)} \psi(s_{j-1}, u_j)^T \,. \tag{22}$$

By induction, the basis functions can be factored out and the recursion expressed as:

$$\frac{\partial s_i}{\partial \Omega_i^{(k)}} = \Lambda_i'^{(k)} \Psi_i'^T \,, \tag{23}$$

where $\Psi_i' = \left[ \psi(s_0, u_1), \cdots, \psi(s_{i-1}, u_i) \right] \in \mathbb{R}^{n_\psi \times i}$ are centers generated by the input sequence and $\Lambda_i'^{(k)} \in \mathbb{R}^{n_s \times i}$ is the updated state-transition gradient, with initializations $\Psi_1' = \left[ \psi(s_0, u_1) \right]$ and $\Lambda_1'^{(k)} = I_{n_s}^{(k)}$.

[0044] Combining EQN. (17) with EQN. (23) gives the error gradient:

$$\frac{\partial \mathcal{E}_i}{\partial \Omega_i^{(k)}} = -e_i^T \mathbb{I} \Lambda_i'^{(k)} \Psi_i'^T \,. \tag{24}$$

Updating the weights in the negative direction yields:

$$\Omega_{i+1}^{(k)} = \Omega_i^{(k)} + \eta \Psi_i' \left( e_i^T \mathbb{I} \Lambda_i'^{(k)} \right)^T \,, \tag{26}$$

where $\eta$ is the learning rate. Because learning is performed in an LMS fashion in the RKHS, the conventional tradeoff of speed and accuracy exists, and the same rules as for the KLMS algorithm can be utilized.

[0045] The centers in a KAARMA network naturally form feature clusters during training, so its growth can be curbed by comparing each new center from the feature update $\Psi'$ with the existing ones. FIGS. 3A and 3B include pseudocode illustrating a KAARMA algorithm and a quantization algorithm, respectively. The algorithm of FIG. 3B outlines the quantization procedure of quantized KAARMA (or QKAARMA), which constrains the network growth in the last five lines of the KAARMA algorithm of FIG. 3A. Each new center from the feature update $\Psi'$ is compared with the existing ones. If the minimum joint distance (input and state) is below a quantization threshold $q$, the new center $\Psi'^{(i)} = \psi(s_{i-1}, u_i)$ is simply discarded, and its corresponding coefficients $A'^{(i)}$, where $(i)$ indicates the $i$-th row, are added to those of the nearest existing neighbor (indexed by $j^*$), thus updating the weights without growing the network structure.
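As a rough sketch of this discard-or-grow logic (hypothetical names; the sum of the state and input distances is one simple stand-in for the joint distance, not necessarily the exact measure used in FIG. 3B):

```python
import numpy as np

def quantize_update(S_dict, U_dict, A, s_new, u_new, a_new, q):
    """QKAARMA-style novelty check: merge the new center's coefficients
    into its nearest neighbor, or append it to the dictionary.

    S_dict: list of state centers; U_dict: list of input centers;
    A: list of coefficient rows (one per center); q: quantization threshold.
    """
    # joint (state, input) distance to every existing center
    dists = [np.linalg.norm(s_new - s) + np.linalg.norm(u_new - u)
             for s, u in zip(S_dict, U_dict)]
    j_star = int(np.argmin(dists)) if dists else -1
    if dists and dists[j_star] <= q:
        A[j_star] = A[j_star] + a_new   # merge: update weights without growth
    else:
        S_dict.append(s_new)            # novel center: grow the network
        U_dict.append(u_new)
        A.append(a_new)
    return S_dict, U_dict, A
```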

KAARMA FOR AUTOMATIC SPEECH RECOGNITION

[0046] Automatic speech recognition (ASR) research can involve connectionist networks that focus on a frame-to-frame predictive or discriminative model, computing the local observation probability at each time step. This can be based upon the assumption that each frame of the speech signal can be labeled or assigned a desired value. Alternatively, certain speech recognition tasks can be treated as grammatical inference problems, exactly like syntactic pattern recognition involving the Tomita grammars.

[0047] As a proof of concept, the TI-46 corpus of isolated English digits was used for experimental testing in this disclosure. The corpus of speech was designed and collected at Texas Instruments, Inc. in 1980, consisting of utterances from 16 speakers (eight males and eight females) each speaking the digits "zero" through "nine" 26 times. Of the 4160 possible utterances, 4000 were used in the subsequent experimental testing. These utterances were further partitioned randomly into a training set (of 2700 utterances with an equal number of male/female utterances and digits: 135 utterances per gender, per digit) and a testing set (of 1300 utterances with an equal number of male/female utterances and digits: 65 utterances per gender, per digit).

[0048] To help align each utterance and reduce the number of non-speech data points used in computation, each utterance can be normalized with respect to its maximum absolute amplitude, then truncated automatically into the smallest window containing all voiced regions using a simple threshold-based end-point detection algorithm. Next, each truncated utterance was analyzed on 25 ms speech frames at 100 fps. Each frame was Hamming windowed and filtered by a first-order pre-emphasis filter (a = 0.95). The magnitude spectrum from the discrete Fourier transform (DFT) can be computed and scaled by the mel-scale triangular filter bank. The output energy can then be log-compressed and transformed via the discrete cosine transform (DCT) to cepstral coefficients.
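A minimal Python sketch of this front-end is shown below. It assumes 16 kHz audio, applies pre-emphasis to the whole signal rather than per frame, and uses a common triangular mel filterbank construction and cepstral selection; these are illustrative choices, not necessarily the exact ones used in the experiments.

```python
import numpy as np
from scipy.fftpack import dct

def mel(f):                        # Hz -> mel
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):                    # mel -> Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filt, n_fft, fs):
    """Triangular filters equally spaced on the mel scale."""
    pts = mel_inv(np.linspace(mel(0), mel(fs / 2), n_filt + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(1, n_filt + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc_frames(x, fs=16000, frame_ms=25, hop_ms=10, n_filt=26, n_ceps=12):
    """Pre-emphasis -> Hamming window -> |DFT| -> mel filterbank -> log -> DCT."""
    x = np.append(x[0], x[1:] - 0.95 * x[:-1])         # first-order pre-emphasis
    flen, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    n_fft = 512
    fb = mel_filterbank(n_filt, n_fft, fs)
    feats = []
    for start in range(0, len(x) - flen + 1, hop):     # 25 ms frames at 100 fps
        frame = x[start:start + flen] * np.hamming(flen)
        mag = np.abs(np.fft.rfft(frame, n_fft))
        log_e = np.log(fb @ mag + 1e-10)               # log-compressed filter energies
        feats.append(dct(log_e, norm='ortho')[:n_ceps])  # cepstral coefficients
    return np.array(feats)                             # (n_frames, 12)
```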

[0049] For the ASR experiment, ten 1-vs-all KAARMA networks were trained, one for each digit. Each training utterance was represented by a sequence of 12 MFCCs and assigned a single label of ±1 based on the target class. The QKAARMA algorithm was used for training, with hidden states $s \in \mathbb{R}^3$, kernel parameters $a_s = a_u = 2.5$, learning rate $\eta = 0.05$, and quantization threshold $q = 0.81$. For the complete ASR system performance, each test utterance passes through all ten classifiers, and the digit corresponding to the KAARMA network with the largest output value was selected as the recognition result.
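The decision rule itself is a simple arg-max over the ten classifier outputs; a sketch, with `classifiers` standing in for the trained scoring functions (a hypothetical interface):

```python
import numpy as np

def recognize_digit(utterance_mfcc, classifiers):
    """Pass a test utterance through all ten 1-vs-all KAARMA classifiers
    and pick the digit whose network yields the largest output."""
    scores = [clf(utterance_mfcc) for clf in classifiers]  # each returns a scalar
    return int(np.argmax(scores))
```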

[0050] The classification performance of this KAARMA ASR system is summarized in the training and testing confusion matrices in FIG. 4A. The drop in performance from single 1-vs-all classifiers to the overall recognition system, and from the training set to the testing set, is expected, due to overfitting and the fact that errors in individual classifiers can accumulate across the entire testing set for the final 10-vs-10 word recognition system. Another contributing factor is that many digits share similar acoustic features in their temporal representation. Theoretically, recurrent networks that are trained using the gradient descent method are ill-suited for learning long-term dependencies. If two different digits converge to the same set of acoustic features after enough time has elapsed, or share significant overlap during utterances, it becomes much more difficult for recurrent networks such as the KAARMA algorithm to distinguish the two. To better illustrate this phenomenon, each of the 10 digits present in the corpus is broken up into sub-word speech units of phonemes in the table of FIG. 4B.

[0051] One simple way to circumvent this problem for certain digits, without changing the experiment, is to simply reverse the order of the acoustic features. Digits that used to share the same trailing phoneme may end up with different ones (of course, the converse can also happen for certain digits). In the table of FIG. 4C, the results are tabulated for individual 1-vs-all classifiers using both the natural left-to-right temporal sequence convention and the reversed (right-to-left) ordering during training and testing of the digit corpus. The true positive (TP), false positive (FP), and 1-vs-all classification accuracy are listed for each pair of classifiers, with the better performance in each category between the two orderings highlighted in bold font.

[0052] If the ASR system is constructed by selecting the 1-vs-all classifier with the higher accuracy between the two ordering categories for each digit, the testing set recognition rate of this modified ASR can be improved over the uniform sequence ordering. However, if both classifiers are combined by multiplying the softmax scores, the performance can be further improved, as shown in the table of FIG. 4D.
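A sketch of this score combination (the score vectors are hypothetical; each entry is one digit classifier's raw output, with the reversed scores coming from classifiers fed the right-to-left feature sequence):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

def combined_decision(fwd_scores, rev_scores):
    """Combine the left-to-right and right-to-left classifier banks by
    multiplying their softmax scores, as described above."""
    p = softmax(fwd_scores) * softmax(rev_scores)
    return int(np.argmax(p))

# usage sketch with toy scores for a 3-class problem
digit = combined_decision([0.1, 2.3, -0.5], [0.4, 1.9, 0.0])
```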

KAARMA CHAIN

[0053] A KAARMA chain approach can be formulated for isolated word recognition under a statistical framework. Commonly, a hidden Markov model (HMM) framework is used for speech recognition. In the HMM framework, a speech signal, specifically the sequence of acoustic feature vectors $\mathcal{U} = \{u_1, u_2, \cdots, u_{n_u}\}$, is generated by a finite state automaton comprising $L$ states $\mathcal{S} = \{s_1, s_2, \cdots, s_L\}$ under a probabilistic framework. An HMM is equivalent to a stochastic regular grammar, with each speech unit associated with a specific Markov model $\mathcal{M}_i$ comprising states from $\mathcal{S}$ according to a predefined topology. The left-right (Bakis) model is the most commonly used topology for speech recognition. States are aligned from left to right to form a single Markov chain, indexed incrementally and with only self- or right-transitions allowed, i.e., $a_{i,j} = 0$ for $j < i$. Furthermore, the initial state is fixed at state $s_1$. Left-right HMMs are able to model the temporal properties of speech.

[0054] The training and recognition criteria for HMMs are based on maximizing the a posteriori probability $\Pr(\mathcal{M}_i \mid \mathcal{U})$ that the observation $\mathcal{U}$ has been produced by the HMM $\mathcal{M}_i$. Using Bayes' rule, the expression can be rewritten as:

$$\Pr(\mathcal{M}_i \mid \mathcal{U}) = \frac{\Pr(\mathcal{U} \mid \mathcal{M}_i) \Pr(\mathcal{M}_i)}{\Pr(\mathcal{U})} \,, \tag{27}$$

where $\Pr(\mathcal{U} \mid \mathcal{M}_i)$ is the maximum likelihood estimate (MLE) criterion, $\Pr(\mathcal{U})$ is constant during recognition, and the a priori probability $\Pr(\mathcal{M}_i)$ is given by an appropriate language model.

[0055] The statistical recognition criterion can be solved directly using the KAARMA algorithm. The maximum a posteriori (MAP) criterion can be defined as:

$$\mathcal{M}^* = \arg\max_{\mathcal{M}} \Pr(\mathcal{M} \mid \mathcal{U}) \,, \tag{28}$$

where $\mathcal{M}$ is the inference model, which is equivalent to maximizing the a posteriori state sequence or most probable state sequence for each model.

[0056] Define the states in a KAARMA chain as context-free grammars (e.g., phoneme, syllable, word, etc.), denoted by $\mathcal{Q} = \{q_1, q_2, \cdots, q_L\}$ (this distinction is made so as not to confuse a grammar state $q_i$ with the KAARMA internal state variables $s_i$, i.e., $q_i \in \{q_1 < q_2 < \cdots < q_L\}$). Under this formulation, a single (global grammar) KAARMA network trained on the entire observation trajectory $\mathcal{U} = \{u_1, u_2, \cdots, u_{n_u}\}$ can be viewed as:

$$\Pr(q_l \mid \mathcal{U}) = \mathrm{softmax}(y_l) = \frac{e^{y_l}}{\sum_{j=1}^{L} e^{y_j}} \,, \tag{29}$$

where $y_l$ is the output of a KAARMA network trained to recognize or classify the grammar $q_l$. The softmax function can be used to ensure that the posterior estimates are non-negative and sum to one. To improve the classification results, several KAARMA networks that specialize in different regions of a word can be trained. For each isolated word, the observation sequence can be partitioned into $L$ equal segments. Each segment can be assigned a grammar state, and a separate KAARMA network can be used to learn its classification grammar. FIG. 5 shows an example of a KAARMA chain of 3 grammar states, applied to the digit "7."

[0057] Next, the state transitions for the KAARMA chain can be fixed at $a_{i,j} = 1$ for $j > i$. KAARMA networks are able to handle nonstationarities by leveraging their internal hidden states $s_i$. Unlike the restricted structure of a traditional left-to-right model, the hidden states $s_i$ in each grammar state are free to form transitions that best fit the available data, i.e., an ergodic model. In the KAARMA chain formulation, the recognized word can be given by the following MAP criterion:

$$\mathcal{M}^* = \arg\max_{\mathcal{M}} \prod_{l=1}^{L} \Pr(q_l \mid \mathcal{U}_l) \,, \tag{30}$$

where $\mathcal{U}_l$ is the $l$-th segment of the observation sequence.
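A sketch of chain scoring under this formulation, computed in the log domain for numerical stability (`state_nets` are hypothetical posterior estimators, one per grammar state; the equal-length segmentation follows the description above):

```python
import numpy as np

def chain_log_score(frames, state_nets):
    """Score one word model: split the observation sequence into L equal
    segments, apply each segment to its grammar-state KAARMA network, and
    sum the log posteriors (EQN. (30) in the log domain)."""
    L = len(state_nets)
    segments = np.array_split(frames, L)       # L roughly equal segments
    return sum(np.log(net(seg) + 1e-12)        # net returns a posterior estimate
               for net, seg in zip(state_nets, segments))

def recognize_word(frames, word_models):
    """MAP decision over candidate word models (each a list of L networks)."""
    scores = [chain_log_score(frames, m) for m in word_models]
    return int(np.argmax(scores))
```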

[0058] A major advantage of performing adaptive filtering in the RKHS is the freedom to choose the input representation, as long as the appropriate reproducing kernel is used. By having separate formulations of the exogenous input u and the internal state s vectors, the KAARMA algorithm imposes no restriction on the relationship between the two signals in the original input space. This enables it to work directly with nonnumeric data such as spike trains. The KAARMA chain paradigm will be applied for a biologically plausible speech recognition system. For each speech signal, biologically inspired features can be extracted to mimic the filtering performed by the human auditory system. The Schoenberg kernel can be used as a suitable reproducing kernel (or spike kernel) for spike trains. The spike kernel transforms the spike trains into a high dimensional feature space (e.g., Hilbert space).

SPIKE-BASED ACOUSTIC FEATURES

[0059] First, a gammatone filterbank can be applied to each acoustic signal and its outputs converted into spike trains using leaky integrate-and-fire neurons with spike rate adaptation and a refractory current. The neuron parameters are resistance $R_m = 10^7$, time constant $\tau_m = 10^{-2}$, spike threshold $V_{th} = -55 \times 10^{-3}$, spike delta $V_{spike} = 0.5$, reversal potential for spike-rate adaptation $E_K = -2 \times 10^{-1}$, and reset potential $V_{reset} = -8 \times 10^{-2}$. This formulation is motivated by the mechanical-to-electrical transduction in the cochlea. Different regions of the basilar membrane vibrate in response to particular sound frequencies, driven by fluid flow in the cochlea. Sensory hair cells in the organ of Corti then convert the mechanical response to electrical signals, which travel along the auditory nerve to the brain for processing.
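A minimal LIF sketch using the parameters listed above is given below. The resting potential is assumed equal to $V_{reset}$, the adaptation constants `tau_sra` and `dg_sra` are assumed values not given in the text, and the refractory current is omitted for brevity; this is an illustrative simulation, not the exact neuron model of the experiments.

```python
import numpy as np

def lif_spikes(current, dt=1e-4, R_m=1e7, tau_m=1e-2, V_th=-55e-3,
               V_reset=-8e-2, E_K=-2e-1, tau_sra=1e-1, dg_sra=5e-9):
    """Leaky integrate-and-fire neuron with spike-rate adaptation, driven by
    an injected current (e.g., one gammatone channel). Returns spike times (s)."""
    V, g_sra, spikes = V_reset, 0.0, []
    for i, I in enumerate(current):
        # leak toward rest; adaptation conductance pulls toward E_K
        dV = (-(V - V_reset) - R_m * g_sra * (V - E_K) + R_m * I) / tau_m
        V += dt * dV
        g_sra -= dt * g_sra / tau_sra          # adaptation conductance decays
        if V >= V_th:                          # threshold crossing -> spike
            spikes.append(i * dt)
            V = V_reset
            g_sra += dg_sra                    # spike-rate adaptation kick
    return np.array(spikes)
```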

[0060] The gammatone filterbank simulates the mechanical response of the cochlea, in which the output of each filter models the frequency response of the basilar membrane at a particular location. FIG. 6A illustrates an example of frequency responses of a 12-filter gammatone filterbank with center frequencies equally spaced between 50 Hz and 8 kHz on the ERB-rate scale. It can be defined in the time domain by the impulse response:

$$g(t) = a \, t^{n-1} e^{-2 \pi b t} \cos(2 \pi f_c t + \varphi) \,, \tag{31}$$

where $f_c$ is the center frequency (in Hz), $\varphi$ is the phase of the carrier (in radians), $a$ is the amplitude, $n$ is the filter order, $b$ is the filter bandwidth (in Hz), and $t$ indicates time (in s). FIG. 6B shows the gammatone filterbank outputs for an utterance of the digit "8," superimposed with the corresponding spike trains generated using leaky integrate-and-fire (LIF) neurons. Next, the spike trains can be segmented into frames the same way as the MFCCs, with a frame duration of 25 ms and a rate of 100 fps. FIG. 6C shows an example frame representation of the spike train inputs for the same digit "8" utterance. Again, the positive class is replicated 3 times, and this training set is used once to train the KAARMA classifiers. For the isolated word recognition problem, the spike frames are used directly as features (with hidden states $s \in \mathbb{R}^3$, kernel parameters $a_s = 4$, $a_u = 6$, learning rate $\eta = 0.1$, and quantization threshold $q = 0.25$). To reduce over-fitting, the parameters were not fully optimized over their respective ranges. The recognition performance is also shown using firing rates obtained from the spike frames (with hidden states $s \in \mathbb{R}^3$, kernel parameters $a_s = a_u = 5$, learning rate $\eta = 0.1$, and quantization threshold $q = 0.55$).

[0061] FIGS. 7A-7C illustrate an overview of the spike-based ASR system using KAARMA chains for isolated word recognition. FIG. 7A shows the segmentation of frequency responses, and FIGS. 7B and 7C show an example of portions of the KAARMA chain for word recognition. For a 5-state KAARMA chain, the recognition results are shown in the tables of FIGS. 8A and 8B. The spike-based performances are summarized and compared to the performance of a 5-state HMM with a mixture of 8 Gaussians in the table of FIG. 8C.
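Referring back to EQN. (31), a sketch of the gammatone impulse response and the 25 ms / 100 fps spike framing follows; the ERB-based default bandwidth is a common convention and an assumption here, not a value taken from the text.

```python
import numpy as np

def gammatone_ir(fc, fs=16000, n=4, b=None, dur=0.05, a=1.0, phi=0.0):
    """Impulse response of EQN. (31):
    g(t) = a * t^(n-1) * exp(-2 pi b t) * cos(2 pi fc t + phi)."""
    t = np.arange(int(dur * fs)) / fs
    if b is None:
        b = 1.019 * (24.7 + 0.108 * fc)   # 1.019 * ERB(fc), a common choice
    return a * t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t + phi)

def frame_spikes(spike_times, total_dur, frame_ms=25, hop_ms=10):
    """Segment one channel's spike train into overlapping 25 ms frames at 100 fps."""
    frames, hop, flen = [], hop_ms / 1000.0, frame_ms / 1000.0
    start = 0.0
    while start + flen <= total_dur:
        mask = (spike_times >= start) & (spike_times < start + flen)
        frames.append(spike_times[mask] - start)   # spike times relative to frame
        start += hop
    return frames
```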

[0062] The state-of-the-art bioinspired digit-recognition performance using a liquid state machine (LSM) on the TI-46 digit corpus has been reported. For the multispeaker spoken digit task with 1590 speech samples (80% used for training and the remaining 20% for testing) and 500 training epochs, the final classification rate for the 77-channel spike input LSM was 92.3%. The spike or pulse-based speech recognition of this disclosure achieves an accuracy of 95.23% for a larger speech set with 4000 samples (67% for training and 33% for testing), using a single training epoch (where only the desired class, or 10% of the training data, is replicated 3 times).

[0063] Furthermore, producing a constant output for a time-varying liquid state is a major challenge for an LSM, since the memory-less readout has to transform the transient and nonstationary states of the liquid filter into a stable output without any attractor states to rely on. For the KAARMA formulation using pulse-based signals, once the stable dynamics are learned, a finite state machine or deterministic finite automaton (DFA) can easily be extracted from the binary time sequences, where all the information of the input is contained in its temporal evolution, e.g., the interspike intervals of individual spike trains. The limitations of the LSM and the advantages of the DFA learned from the KAARMA algorithm are illustrated here for completeness.

[0064] Spike Train Classification using DFA learned from the KAARMA algorithm. Two Poisson spike trains of frequency 20 Hz and duration 0.5 s are generated as templates for two classes, 0 and 1. FIG. 9A illustrates the Poisson spike train templates (solid spikes for class 0 and dashed spikes for class 1). The actual spike trains used for training and testing are noisy versions of the templates, with each spike timing varying by a random amount, according to a zero-mean Gaussian distribution with a standard deviation, or Gaussian jitter, of 4 ms. An example of a jittered class 0 spike train is shown at the bottom of FIG. 9A. The training set consisted of 500 realizations, with another 200 forming the independent test set. Due to the random displacement of spikes, it becomes impossible to recognize spike trains generated from a specific template using a single interspike interval. Furthermore, as nonnumeric data, there are no spatial cues to rely on.
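A sketch of this data generation (homogeneous Poisson templates plus Gaussian jitter; function and variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

def poisson_template(rate=20.0, duration=0.5):
    """Homogeneous Poisson spike train: exponential interspike intervals."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= duration:
            return np.array(spikes)
        spikes.append(t)

def jitter(template, sigma=0.004, duration=0.5):
    """Noisy realization: displace each spike by zero-mean Gaussian jitter (4 ms)."""
    j = np.sort(template + rng.normal(0.0, sigma, size=template.shape))
    return j[(j >= 0.0) & (j < duration)]

templates = [poisson_template(), poisson_template()]           # classes 0 and 1
train = [(jitter(templates[c]), c) for c in rng.integers(0, 2, 500)]
```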

[0065] The liquid filter was a randomly connected recurrent neural microcircuit comprising 135 integrate-and-fire neurons, with 20% of the population randomly set as inhibitory. A readout neuron modeled as a perceptron or threshold gate was trained to classify the spike trains. The task of this threshold gate is to output 0 or 1 corresponding to the template used to generate the input spike train. The state of the microcircuit was sampled every 25 ms by low-pass filtering the response. Supervised learning was applied to the set of training examples in the form (state $x^M(t)$, target $y(t)$) to train a readout function $f^M$. The experimental setup for the LSM is described in "Learning-tool: analysing the computational power of neural microcircuits (version 1.0)" (The IGI LSM Group, June 11, 2006, http://www.lsm.tugraz.at/download/learning-tool-1.1-manual.pdf), which is hereby incorporated by reference in its entirety.

[0066] The QKAARMA network is trained directly on the spiking stimuli, downsampled using 10 ms bins such that multiple spikes in a bin record only a single 1. Each jittered version of the target template thus forms a binary string of length 50. Note that the tensor-product formulation allows KAARMA networks to operate directly on spike train inputs via an appropriate spike train kernel. Here, binning is used to produce an exact binary-input finite state machine and to compare with the binned performance of the LSM. FIGS. 9B and 9C illustrate examples of test set spike trains in their original spike timings (left) and in the binned form (right), separated by template classes 0 and 1, respectively. Hidden states of dimension 3 ($s \in \mathbb{R}^3$) were used with the kernel parameter for both the states and the binary inputs set at $a_s = a_u = 1$. The learning rate was fixed at $\eta = 0.1$, with a quantization threshold of $q = 0.4$ and a DFA extraction quantization threshold of $q_{DFA} = 0.11$.
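The binning step can be sketched as follows (10 ms bins over 0.5 s, yielding binary strings of length 50; names hypothetical):

```python
import numpy as np

def bin_spikes(spike_times, duration=0.5, bin_ms=10):
    """Downsample a spike train into 10 ms bins; multiple spikes in a bin
    still record a single 1, giving a binary string of length 50."""
    n_bins = int(round(duration * 1000 / bin_ms))
    idx = np.floor(np.asarray(spike_times) * 1000 / bin_ms).astype(int)
    bits = np.zeros(n_bins, dtype=int)
    bits[idx[idx < n_bins]] = 1
    return bits
```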

[0067] The performances of the LSM using various supervised learning techniques (linear classification, parallel delta (p-delta) rule, linear regression, and Levenberg-Marquardt (LM) backpropagation on a two-layer NN with 5 hidden units), measured by the correlation coefficient (CC), mean absolute error (MAE), error score, and mean squared error, are summarized in the table of FIG. 10. The best performance was given by the two-layer NN trained using LM backpropagation, which was trained for 48 epochs with the best validation performance at epoch 33. FIG. 11 illustrates a misclassified spike train by the LSM approach. Shown are LSM results for test input 3 with readout functions trained using linear classification, the p-delta rule, linear regression, and backpropagation (desired given by the dashed line and estimates given by circles).

[0068] FIG. 12A shows the online performance of the QKAARMA algorithm for the spike train classification task during training. While small perturbations in the input can cause large and unpredictable dynamical changes in the liquid states $x^M$ of an LSM, the KAARMA network trained with stable states easily handled the variations in interspike intervals for each class of spike trains. The KAARMA network starts to show strong discrimination after only 60 training samples, compared to the 33 epochs (where each epoch consists of 500 training samples) needed to train an LSM readout with suboptimal performance. FIG. 12B shows the performance on the testing set for the spike train classification task after training was completed. The QKAARMA algorithm was able to correctly label each test stimulus after a single learning pass through the training set. For this classification task, the KAARMA algorithm easily outperformed the LSM methods.

[0069] Unlike the LSM framework (which only has one attractor state: the resting state), KAARMA networks compute with stable attractors learned directly from data and can be easily converted into simple exact solutions in the form of deterministic finite automata. FIG. 13 illustrates the minimized finite state machine for accepting all template 0 spike trains and rejecting all template 1 spike trains, and vice versa, extracted from the trained KAARMA network. Shown are minimized DFA for Poisson spike trains with Gaussian jitter of 4 ms. State transitions were every 10 ms. As can be seen, the two minimized DFA are different, corresponding to distinct grammars governing the two Poisson spiking templates. Not only does this speed up computation dramatically via simple traversal of the states in the DFA with each input, as shown in FIGS. 14A and 14B, but this also opens the door for novel applications in neuroscience.

[0070] FIG. 14A shows the state transition trajectories for a spike train (shown as vertical lines at the bottom of each plot) generated by template 0, where accepted states are indicated by the dots. The final state at time step 51 gives the classification result. FIG. 14B shows the state transition trajectories for a spike train (shown as vertical lines at the bottom of each plot) generated by template 1, where accepted states are indicated by the dots. The final state at time step 51 again gives the classification result. KAARMA is able to create a state model for each class of spike train firings by extracting its grammar directly from the data. For example, the long-term firing rates of neurons can be analyzed by comparing the dynamics or grammars in stable regions associated with different behaviors.

[0071] The KAARMA algorithm's capability to deliver computationally efficient and competitive solutions to fundamental applications in automatic speech recognition has been demonstrated. Simulations show that KAARMA-based classifiers can outperform similar HMM architectures on the TI-46 digit corpus.

[0072] With reference now to FIG. 15, shown is a schematic block diagram of a speech processing device 2100 according to an embodiment of the present disclosure. The speech processing device 2100 includes at least one processor circuit, for example, having a processor 2103 and a memory 2106, both of which are coupled to a local interface 2109. To this end, the speech processing device 2100 may comprise, for example, at least one server computer or like device. The local interface 2109 may comprise, for example, a data bus with an accompanying address/control bus or other bus structure as can be appreciated.

[0073] Stored in the memory 2106 are both data and several components that are executable by the processor 2103. In particular, stored in the memory 2106 and executable by the processor 2103 are a speech recognition application 2112 based upon pulse-based detection using a KAARMA network and/or KAARMA chain as previously discussed, one or more pulse-based data sets 2115 that may be used for training and/or testing of the KAARMA network and/or KAARMA chain, and potentially other applications 2118. Also stored in the memory 2106 may be a data store 2121 including, e.g., audio, video and other speech data. In addition, an operating system may be stored in the memory 2106 and executable by the processor 2103. It is understood that there may be other applications that are stored in the memory and are executable by the processor 2103 as can be appreciated.

[0074] Where any component discussed herein is implemented in the form of software, any one of a number of programming languages may be employed such as, for example, C, C++, C#, Objective C, Java®, JavaScript®, Perl, PHP, Visual Basic®, Python®, Ruby, Delphi®, Flash®, or other programming languages. A number of software components are stored in the memory and are executable by the processor 2103. In this respect, the term "executable" means a program file that is in a form that can ultimately be run by the processor 2103. Examples of executable programs may be, for example, a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of the memory 2106 and run by the processor 2103, source code that may be expressed in proper format such as object code that is capable of being loaded into a random access portion of the memory 2106 and executed by the processor 2103, or source code that may be interpreted by another executable program to generate instructions in a random access portion of the memory 2106 to be executed by the processor 2103, etc. An executable program may be stored in any portion or component of the memory including, for example, random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, USB flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components.

[0075] The memory is defined herein as including both volatile and nonvolatile memory and data storage components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data upon a loss of power. Thus, the memory 2106 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, solid-state drives, USB flash drives, memory cards accessed via a memory card reader, floppy disks accessed via an associated floppy disk drive, optical discs accessed via an optical disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, or a combination of any two or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM) and other such devices. The ROM may comprise, for example, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other like memory device.

[0076] Also, the processor 2103 may represent multiple processors 2103 and the memory 2106 may represent multiple memories 2106 that operate in parallel processing circuits, respectively. In such a case, the local interface 2109 may be an appropriate network that facilitates communication between any two of the multiple processors 2103, between any processor 2103 and any of the memories 2106, or between any two of the memories 2106, etc. The processor 2103 may be of electrical or of some other available construction.

[0077] Although portions of the speech recognition application 2112, pulse-based data sets 2115, and other various systems described herein may be embodied in software or code executed by general purpose hardware, as an alternative the same may also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits having appropriate logic gates, or other components, etc. Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein.

[0078] The speech recognition application 2112 and pulse-based data sets 2115 can comprise program instructions to implement logical function(s) and/or operations of the system. The program instructions may be embodied in the form of source code that comprises human-readable statements written in a programming language or machine code that comprises numerical instructions recognizable by a suitable execution system such as a processor in a computer system or other system. The machine code may be converted from the source code, etc. If embodied in hardware, each block may represent a circuit or a number of interconnected circuits to implement the specified logical function(s).

[0079] Also, any logic or application described herein, including the speech recognition application 2112 and pulse-based data sets 2115, that comprises software or code can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor 2103 in a computer system or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present disclosure, a "computer-readable medium" can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system.

[0080] The computer-readable medium can comprise any one of many physical media such as, for example, magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, memory cards, solid-state drives, USB flash drives, or optical discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.

[0081] In this disclosure, a simple and naive segmentation-alignment technique has been provided by first detecting the end points of the voiced regions in each isolated word utterance and then partitioning the acoustic features into segments of equal length corresponding to a fixed number of grammar states. Unlike classical HMMs, which operate on quasi-stationary states, good performance was obtained due to the KAARMA algorithm's ability to model the dynamics within each arbitrarily defined state region. This has worked well for a small vocabulary of only ten words. For more complex and realistic speech recognition problems, the grammar state concept can be applied to a more fundamental speech unit, phonemes, in order to avoid learning redundant grammar states in the construction of each word model. A dedicated KAARMA network may be trained for each phoneme using phoneme-labeled training data such as the TIMIT database. Since the KAARMA network provides no native phone/frame alignment capability, it could rely on a more conventional hybrid HMM framework by leveraging a Viterbi pass to compute an alignment of frames to states and the re-estimation of duration models in order to correctly classify phones.

[0082] Furthermore, pulse-based signals can be treated as binary sequences, which can encompass all the necessary dynamics in the form of a finite state machine or DFA. Computing using DFA, extracted from a KAARMA network trained from the pulse data, can be much faster than traditional methods involving analog integration or kernel functions since the state transitions are done automatically based on pulse arrival, e.g., via a lookup table.
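A sketch of such a lookup-table traversal is shown below; the transition table and accepting set are toy values for illustration, not the DFA of FIG. 13.

```python
def classify_with_dfa(bits, delta, accept_states, start=0):
    """Traverse a DFA extracted from a trained KAARMA network: each binary
    input (pulse present/absent in a bin) indexes the transition table, and
    the final state determines the class. delta[state][bit] -> next state."""
    state = start
    for b in bits:
        state = delta[state][b]
    return state in accept_states

# toy usage with a hypothetical 3-state machine
delta = {0: {0: 0, 1: 1}, 1: {0: 2, 1: 1}, 2: {0: 0, 1: 2}}
is_class0 = classify_with_dfa([1, 0, 1, 1], delta, accept_states={1})
```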

[0083] It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

[0084] It should be noted that ratios, concentrations, amounts, and other numerical data may be expressed herein in a range format. It is to be understood that such a range format is used for convenience and brevity, and thus should be interpreted in a flexible manner to include not only the numerical values explicitly recited as the limits of the range, but also all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range were explicitly recited. To illustrate, a concentration range of "about 0.1% to about 5%" should be interpreted to include not only the explicitly recited concentration of about 0.1 wt% to about 5 wt%, but also the individual concentrations (e.g., 1%, 2%, 3%, and 4%) and the sub-ranges (e.g., 0.5%, 1.1%, 2.2%, 3.3%, and 4.4%) within the indicated range. The term "about" can include traditional rounding according to significant figures of numerical values. In addition, the phrase "about 'x' to 'y'" includes "about 'x' to about 'y'".