Title:
A SPEECH RECOGNITION METHOD AND APPARATUS
Document Type and Number:
WIPO Patent Application WO/2018/039500
Kind Code:
A1
Abstract:
The present application discloses speech recognition methods and apparatuses. An exemplary method may include extracting, via a first neural network, a vector containing speaker recognition features from speech data. The method may also include compensating bias in a second neural network in accordance with the vector containing the speaker recognition features. The method may further include recognizing speech, via an acoustic model based on the second neural network, in the speech data.

Inventors:
HUANG ZHIYING (CN)
XUE SHAOFEI (CN)
YAN ZHIJIE (CN)
Application Number:
PCT/US2017/048499
Publication Date:
March 01, 2018
Filing Date:
August 24, 2017
Assignee:
ALIBABA GROUP HOLDING LTD (US)
International Classes:
G10L15/00
Foreign References:
US20150127327A12015-05-07
US20150269933A12015-09-24
US20160163310A12016-06-09
Other References:
SHAOFEI XUE ET AL.: "Fast adaptation of deep neural network based on discriminant codes for speech recognition", IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, 1 December 2014, IEEE, pages 1713-1725
ZHIYING HUANG ET AL.: "Speaker adaptation of RNN-BLSTM for speech recognition based on speaker code", 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 20 March 2016, IEEE, pages 5305-5309
YAJIE MIAO ET AL.: "Towards speaker adaptive training of deep neural network acoustic models", 1 December 2014
EHSAN VARIANI ET AL.: "Deep neural networks for small footprint text-dependent speaker verification", 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 May 2014, IEEE, pages 4052-4056
See also references of EP 3504703A4
Attorney, Agent or Firm:
CAPRON, Aaron, J. (US)
Claims:
WHAT IS CLAIMED IS:

1. A speech recognition method, comprising:

extracting, via a first neural network, a vector containing speaker recognition features from speech data;

compensating bias in a second neural network in accordance with the vector containing the speaker recognition features; and

recognizing speech, via an acoustic model based on the second neural network, in the speech data.

2. The speech recognition method of claim 1, wherein compensating bias in the second neural network in accordance with the vector containing the speaker recognition features includes:

multiplying the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

3. The speech recognition method of claim 2, wherein the first neural network, the second neural network, and the weight matrix are trained through:

training the first neural network and the second neural network respectively; and collectively training the trained first neural network, the weight matrix, and the trained second neural network.

4. The speech recognition method of claim 3, further comprising:

initializing the first neural network, the second neural network, and the weight matrix; updating the weight matrix using a back propagation algorithm in accordance with a predetermined objective criterion; and updating the second neural network and a connection matrix using the error back propagation algorithm in accordance with a predetermined objective criterion.

5. The speech recognition method of any one of claims 1-4, wherein the speaker recognition features include at least speaker voiceprint information.

6. The speech recognition method of claim 1, wherein compensating bias in the second neural network in accordance with the vector containing the speaker recognition features includes:

compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features, wherein the vector containing the speaker recognition features is an output vector of a last hidden layer in the first neural network.

7. The speech recognition method of claim 6, wherein compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features includes:

transmitting the vector containing the speaker recognition features, output by nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

8. The speech recognition method of claim 1, wherein the speech data is collected original speech data or speech features extracted from the collected original speech data.

9. The speech recognition method of claim 1, wherein the speaker recognition features correspond to different users, or correspond to clusters of different users.

10. A non-transitory computer-readable medium storing a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to perform a method for speech recognition, the method comprising:

extracting, via a first neural network, a vector containing speaker recognition features from speech data;

compensating bias in a second neural network in accordance with the vector containing the speaker recognition features; and

recognizing speech, via an acoustic model based on the second neural network, in the speech data.

11. The non-transitory computer-readable medium of claim 10, wherein compensating bias in the second neural network in accordance with the vector containing the speaker recognition features includes:

multiplying the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

12. The non-transitory computer-readable medium of claim 11, wherein the first neural network, the second neural network, and the weight matrix are trained through:

training the first neural network and the second neural network respectively; and collectively training the trained first neural network, the weight matrix, and the trained second neural network.

13. The non-transitory computer-readable medium of claim 12, wherein the set of instructions that are executable by the one or more processors of the apparatus to cause the apparatus to further perform:

initializing the first neural network, the second neural network, and the weight matrix; updating the weight matrix using a back propagation algorithm in accordance with a predetermined objective criterion; and

updating the second neural network and a connection matrix using the error back propagation algorithm in accordance with a predetermined objective criterion.

14. The non-transitory computer-readable medium of claim 10, wherein the speaker recognition features include at least speaker voiceprint information.

15. The non-transitory computer-readable medium of claim 10, wherein compensating bias in the second neural network in accordance with the vector containing the speaker recognition features includes:

compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features, wherein the vector containing the speaker recognition features is an output vector of a last hidden layer in the first neural network.

16. The non-transitory computer-readable medium of claim 15, wherein compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features includes: transmitting the vector containing the speaker recognition features, output by nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

17. The non-transitory computer-readable medium of claim 10, wherein the speech data is collected original speech data or speech features extracted from the collected original speech data.

18. The non-transitory computer-readable medium of claim 10, wherein the speaker recognition features correspond to different users, or correspond to clusters of different users.

19. A speech recognition apparatus, comprising:

an extraction unit configured to extract, via a first neural network, a vector containing speaker recognition features from speech data; and

a recognition unit configured to:

compensate bias in a second neural network in accordance with the vector containing the speaker recognition features, and

recognize speech, via an acoustic model based on the second neural network, in the speech data.

Description:
A SPEECH RECOGNITION METHOD AND APPARATUS

CROSS REFERENCE TO RELATED APPLICATION

[001] The present application claims the benefit of priority to Chinese Application No. 201610741622.9, filed August 26, 2016, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

[002] The present application relates to speech recognition, and more particularly, to a speech recognition method and apparatus.

BACKGROUND

[003] At present, great progress has been made on speaker independent (SI) speech recognition systems. However, differences between users may degrade the performance of such systems for specific users.

[004] Speaker dependent (SD) speech recognition systems can solve the problem of performance degradation of the SI speech recognition systems. However, the SD speech recognition system requires an input of a large amount of user speech data for training, which causes great inconvenience for users and results in high cost.

[005] Speaker adaptation technologies can make up for the shortcomings of the SI and SD speech recognition systems to a certain extent. With the speaker adaptation technologies, SD speech features can be transformed into SI speech features, which are then provided to an SI acoustic model for recognition. Alternatively, SI acoustic systems may be converted into SD acoustic systems. Then, the SD speech features are recognized.

[006] Compared with the SI speech recognition systems, the speaker adaptation technologies consider speech features with user individual differences, and therefore have better recognition performance. Compared with the SD recognition systems, the speaker adaptation technologies introduce the prior information of the SI systems, and thus the amount of user speech data required is greatly reduced.

[007] The speaker adaptation technologies can be divided into off-line speaker adaptation technologies and on-line speaker adaptation technologies depending on whether user speech data is obtained in advance. With the on-line speaker adaptation technologies, parameters of speech recognition systems can be adjusted at regular intervals (e.g., 600 ms) according to the current user speech input, thereby realizing speaker adaptation.

[008] At present, one solution for on-line speaker adaptation is shown in Fig. 1. The solution may include splicing speech features of a user and an i-vector (i.e., a distinguishable vector) extracted for the user. The solution may also include inputting the spliced features into a deep neural network (DNN) for speech recognition. An extraction process of the i-vector may include inputting acoustic features of speech into a Gaussian mixture model to obtain a mean supervector, and multiplying the mean supervector by a T matrix to obtain the i-vector. When the user is speaking, according to the solution, the i-vector can be extracted from the beginning part of the user's speech. The extracted i-vector is used for speech recognition of the rest of the user's speech, thus realizing the on-line speaker adaptation.
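
Purely for illustration, the simplified i-vector pipeline described in this paragraph might be sketched as follows; the GMM size, feature dimension, T-matrix shape, and the crude distance-based posteriors are assumptions for the sketch, not details taken from this application or from standard i-vector estimation.

```python
import numpy as np

def extract_ivector_like(frames, gmm_means, gmm_weights, T):
    """Simplified sketch of the pipeline in paragraph [008]:
    acoustic frames -> GMM-based mean supervector -> projection by a T matrix.
    frames:      (num_frames, feat_dim) acoustic features
    gmm_means:   (num_mixtures, feat_dim) pre-trained GMM means
    gmm_weights: (num_mixtures,) pre-trained GMM weights
    T:           (ivector_dim, num_mixtures * feat_dim) projection matrix
    """
    # Crude posterior: soft-assign each frame to mixtures by distance (illustrative only).
    dists = ((frames[:, None, :] - gmm_means[None, :, :]) ** 2).sum(-1)       # (F, M)
    post = np.exp(-dists) * gmm_weights
    post /= post.sum(axis=1, keepdims=True) + 1e-10
    # Posterior-weighted mean per mixture, stacked into a mean supervector.
    weighted_means = (post.T @ frames) / (post.sum(axis=0)[:, None] + 1e-10)  # (M, D)
    supervector = weighted_means.reshape(-1)                                  # (M * D,)
    # Project the supervector with the T matrix to obtain the i-vector.
    return T @ supervector

# Illustrative shapes: 40-dim features, 8 mixtures, 100-dim i-vector.
rng = np.random.default_rng(0)
iv = extract_ivector_like(rng.normal(size=(200, 40)),
                          rng.normal(size=(8, 40)),
                          np.full(8, 1 / 8),
                          rng.normal(size=(100, 8 * 40)))
print(iv.shape)  # (100,)
```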

[009] The solution mainly has the following problems. In the on-line speaker adaptation technologies, since the i-vector extraction process is complicated and requires a certain length of speech data, the speech data for extracting the i-vector and the speech data for speech recognition are different from each other. In speech recognition, the speech data used to extract the i-vector precedes the speech data to be recognized. Therefore, the i-vector does not match the speech data that needs to be recognized, thus affecting the performance of speech recognition.

SUMMARY

[010] Embodiments of the present disclosure provide speech recognition methods and apparatuses that can effectively improve the performance of speech recognition in the online speaker adaptation without introducing too much computation complexity.

[011] These embodiments include a speech recognition method. The method may include extracting, via a first neural network, a vector containing speaker recognition features from speech data. The method may also include compensating bias in a second neural network in accordance with the vector containing the speaker recognition features. The method may further include recognizing speech, via an acoustic model based on the second neural network, in the speech data. Compensating bias in the second neural network in accordance with the vector containing the speaker recognition features can include multiplying the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.
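
As a minimal sketch of the compensation idea (not the claimed implementation), the speaker vector multiplied by a weight matrix can be added as an extra bias term at one hidden layer of the second network; the dimensions and sigmoid activation below are assumed for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed dimensions: 512-dim speaker vector, 1024-unit hidden layer, 40-dim input frame.
rng = np.random.default_rng(1)
speaker_vec = rng.normal(size=512)              # output of the first (speaker) network
W_conn = rng.normal(size=(1024, 512)) * 0.01    # the weight matrix of the compensation step
W_hidden = rng.normal(size=(1024, 40)) * 0.01   # a hidden layer of the second (acoustic) network
b_static = np.zeros(1024)                       # the layer's ordinary bias

frame = rng.normal(size=40)                     # one acoustic feature frame

# The speaker vector, multiplied by the weight matrix, becomes an extra bias term.
bias_term = W_conn @ speaker_vec
hidden = sigmoid(W_hidden @ frame + b_static + bias_term)
print(hidden.shape)  # (1024,)
```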

[012] The first neural network, the second neural network, and the weight matrix can be trained through: training the first neural network and the second neural network respectively, and training the trained first neural network, the weight matrix, and the trained second neural network collectively.

[013] In addition, the method may include initializing the first neural network, the second neural network, and the weight matrix. The method may also include updating the weight matrix using a back propagation algorithm in accordance with a predetermined objective criterion. The method may further include updating the second neural network and a connection matrix using the error back propagation algorithm in accordance with a predetermined objective criterion. The speaker recognition features may include at least speaker voiceprint information.

[014] Compensating bias in the second neural network in accordance with the vector containing the speaker recognition features may include compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features. The vector containing the speaker recognition features may be an output vector of a last hidden layer in the first neural network.

[015] Compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features can include transmitting the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network. The first neural network may be a recursive neural network. The speech data can be collected original speech data or speech features extracted from the collected original speech data. The speaker recognition features may correspond to different users, or correspond to clusters of different users.

[016] These embodiments also include a speech recognition method. The method may include collecting speech data. The method may also include extracting a vector containing speaker recognition features by inputting the collected speech data into a first neural network. The method may further include compensating bias in a second neural network in accordance with the vector containing the speaker recognition features. In addition, the method may include recognizing speech by inputting the collected speech data into the second neural network.

[017] Compensating bias in a second neural network in accordance with the vector containing the speaker recognition features may include multiplying the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network. The speaker recognition features may include at least speaker voiceprint information. The first neural network can be a recursive neural network.

[018] Compensating bias in a second neural network in accordance with the vector containing the speaker recognition features may include transmitting the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[019] Moreover, these embodiments include a speech recognition apparatus. The speech recognition apparatus may include a memory configured to store a program for speech recognition. The speech recognition apparatus may also include a processor configured to execute the program for speech recognition to extract, via a first neural network, a vector containing speaker recognition features from speech data. The processor may also be configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features. The processor may be further configured to recognize speech, via an acoustic model based on the second neural network, in the speech data.

[020] The processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features may include being configured to multiply the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network. The speaker recognition features may include at least speaker voiceprint information. The first neural network can be a recursive neural network.

[021] The processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features may include being configured to transmit the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[022] Furthermore, these embodiments include a speech recognition apparatus. The speech recognition apparatus may include a memory configured to store a program for speech recognition. The speech recognition apparatus may also include a processor configured to execute the program for speech recognition to collect speech data. The processor may also be configured to extract a vector containing speaker recognition features by inputting the collected speech data into a first neural network. The processor may be further configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features. In addition, the processor may be configured to recognize speech by inputting the collected speech data into the second neural network.

[023] The processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features includes being configured to multiply the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network. The speaker recognition features may include at least speaker voiceprint information. The first neural network can be a recursive neural network.

[024] The processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features may include being configured to transmit the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[025] These embodiments also include a speech recognition apparatus. The speech recognition apparatus may include an extraction unit configured to extract, via a first neural network, a vector containing speaker recognition features from speech data. The speech recognition apparatus may also include a recognition unit configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features, and recognize speech, via an acoustic model based on the second neural network, in the speech data.

[026] These embodiments further include a speech recognition apparatus. The speech recognition apparatus may include a collecting unit configured to collect speech data. The speech recognition apparatus may also include an extraction and compensation unit configured to extract a vector containing speaker recognition features by inputting the collected speech data into a first neural network, and compensate bias in a second neural network in accordance with the vector containing the speaker recognition features. The speech recognition apparatus may further include a recognition unit configured to recognize speech by inputting the collected speech data into the second neural network.

BRIEF DESCRIPTION OF THE DRAWINGS

[027] The accompanying drawings, which constitute a part of this specification, illustrate several embodiments and, together with the description, serve to explain the disclosed principles.

[028] Fig. 1 is a schematic diagram of an exemplary i-vector based on-line speaker adaptation solution.

[029] Fig. 2 is a flow chart of an exemplary speech recognition method, according to some embodiments of the present disclosure.

[030] Fig. 3 is a schematic diagram of an exemplary system architecture for speech recognition, according to some embodiments of the present disclosure.

[031] Fig. 4 illustrates a schematic diagram of an exemplary neural network, according to some embodiments of the present disclosure.

[032] Fig. 5 is a schematic diagram of an exemplary system architecture, according to some embodiments of the present disclosure.

[033] Fig. 6 is a schematic diagram of an exemplary speech recognition apparatus, according to some embodiments of the present disclosure.

[034] Fig. 7 is a flow chart of an exemplary speech recognition method, according to some embodiments of the present disclosure.

[035] Fig. 8 is a schematic diagram of an exemplary implementation process of the speech recognition method, according to some embodiments of the present disclosure.

[036] Fig. 9 is a schematic diagram of an exemplary speech recognition apparatus, according to some embodiments of the present disclosure.

DETAILED DESCRIPTION

[037] Many details are illustrated in the following descriptions to facilitate a comprehensive understanding of the present disclosure. Methods and apparatuses in the present disclosure may be implemented in many other ways different from those described herein. Those skilled in the art may make similar extensions without departing from the connotation of the present disclosure. Therefore, the present disclosure is not limited to the specific implementations disclosed in the following.

[038] The technical solution of the present application will be described in detail with reference to accompanying drawings and embodiments. It should be noted that, if not conflicting, the embodiments of this application and various features in the embodiments may be combined with each other, which are all within the protection scope of this application. In addition, although a logical order is shown in the flow charts, in some cases, the steps shown or described may be executed in an order different from that herein.

[039] In some embodiments, a computing device that executes the speech recognition method may comprise one or more processors (CPUs), an input/output interface, a network interface, and a memory.

[040] The memory may include a non-permanent memory, a random access memory (RAM), and/or a nonvolatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM), in computer-readable media. The memory is an example of computer-readable media. The memory may include module 1, module 2, ..., module N, where N is an integer greater than 2.

[041] The computer-readable media include permanent and non-permanent, removable and non-removable storage media. The storage media may realize information storage with any method or technology. The information may be a computer-readable instruction, data structure, program module or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disk read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media which can be used to store information that can be accessed by computing devices. As defined herein, the computer-readable media do not include transitory media, such as modulated data signals and carriers.

[042] The embodiments of the present disclosure provide many advantages. Some of these embodiments include extracting, via a first neural network, a vector containing speaker recognition features from speech data, and compensating bias in a second neural network in accordance with the vector containing the speaker recognition features. This results in transforming the neural network for speech recognition from an SI acoustic system to an SD acoustic system, thereby improving the recognition performance. Since the speech data from which the vector containing the speaker recognition features is extracted is the same as the speech data for speech recognition, the recognition performance may be significantly improved. Moreover, when the vector containing the speaker recognition features is extracted via a neural network for speaker recognition, the extraction can be realized through the forward process of the neural network.

[043] Fig. 2 is a flow chart of an exemplary speech recognition method, according to some embodiments of the present disclosure. The speech recognition method can include steps S110 and S120.

[044] In step S110, a vector containing speaker recognition features from speech data is extracted via a first neural network. In some embodiments, after step S110, the method may further include length normalizing the extracted vector containing speaker recognition features. In some embodiments, the extracted vector containing speaker recognition features can be used directly without length normalization.

[045] In step S120, a bias is compensated in a second neural network in accordance with the vector containing the speaker recognition features; and speech in the speech data is recognized via an acoustic model based on the second neural network.

[046] The language "first" and "second" are only used to distinguish, for example, different neural networks. Without departing from the scope of the exemplary embodiments, the first neural network may be referred to as a second neural network. Likewise, the second neural network may be referred to as a first neural network.

[047] The first neural network can be a neural network for classifying speakers. The first neural network may extract speaker recognition features according to the input speech data, such as, but not limited to, speaker voiceprint information. The second neural network can be a neural network for speech recognition. The second neural network may recognize text information according to the input speech data.

[048] The method can be applied to a system including a first neural network and a second neural network. Speech data can be input into the first neural network and the second neural network for recognition. The method may include extracting, via the first neural network, a vector containing speaker recognition features from speech data. The method may also include compensating bias in the second neural network in accordance with the vector containing the speaker recognition features. The method may also include recognizing speech, via an acoustic model based on the second neural network, to obtain text information in the speech data.

[049] The speaker recognition features refer to features that can effectively characterize individual differences of a speaker. The individual differences of the speaker may be caused by differences in the vocal tract. The individual differences of the speaker may also be caused by the environment or channels. The speaker adaptation technology is a compensation technology, which can be used for vocal tract compensation and compensation for different speaking environments, such as noisy environments and office environments. The speaker adaptation technology can also be used for compensation for different channels, such as telephone channels and microphone channels. In different environments or through different channels, speech data collected from the same user can be regarded as speech data of different speakers because of different speaker recognition features.

[050] In the embodiments herein, a neural network may contain a plurality of neuron nodes connected together. An output of one neuron node may be an input of another neuron node. The neural network may include multiple neural layers. According to their functions and nature, neural layers in the neural network can be divided into: an input layer, hidden layers, and an output layer. The hidden layers refer to layers invisible to a user. The input layer is responsible for receiving the input and distributing it to the hidden layers. There may be one or more hidden layers. The output result of the last hidden layer is provided to the output layer. The user can see the final result output from the output layer. The neural networks and bias compensation will be described in detail below.

[051] In the method, the acoustic model based on the second neural network may be speaker-independent, and can be converted into an SD acoustic model by introducing a vector containing speaker recognition features for bias compensation. Therefore, the performance of speech recognition may be improved.

[052] The neural network is a kind of global model that can compute multiple dimensions of acoustic features in parallel. In contrast, the Gaussian model used to extract the i-vector is a local model and needs to compute each of the dimensions separately. Therefore, when the first neural network is used to extract the vector containing the speaker recognition features in this method, short speech data can be used for extraction to achieve better real-time performance and to be feasible in actual products.

[053] In addition, since the system using the Gaussian mixture model is different from the neural network based system, joint optimization of these two systems may not be easy. Nonetheless, in the embodiments of the present application, the first neural network and the second neural network can be optimized as a whole. Moreover, the process where the vector containing the speaker recognition features is extracted via the first neural network is simple and involves a small computation amount. It may meet the real-time requirement of the on-line speaker adaptive recognition. Besides, short-time data can be used for the extraction.

[054] When short-term data can be used for the extraction, speech data from which the vector containing the speaker recognition features is extracted can be the speech data to be recognized. In other words, the extracted vector containing speaker recognition features may well match the speech data to be recognized. Accordingly, the performance of speech recognition can be improved significantly.

[055] Fig. 3 is a schematic diagram of an exemplary system architecture for speech recognition, according to some embodiments of the present disclosure. The system includes a speech collecting device 11 , a speech recognition device 12, and an output device 13.

[056] Speech recognition device 12 includes a speaker recognition unit 121 configured to perform the above-described step S110, and a speech recognition unit 122 configured to perform the above-described step S120. In other words, speaker recognition unit 121 may be configured to send the extracted vector containing speaker recognition features to speech recognition unit 122. Alternatively, speech recognition unit 122 may be configured to acquire the vector containing the speaker recognition features from speaker recognition unit 121.

[057] Speech collecting device 11 may be configured to collect original speech data, and output the original speech data or speech features extracted from the original speech data to speaker recognition unit 121 and speech recognition unit 122, respectively.

[058] Output device 13 is configured to output a recognition result of speech recognition unit 122. The output schemes of output device 13 may include, but are not limited to, one or more of the following: storing the recognition result in a database, sending the recognition result to a predetermined device, or displaying the recognition result on a predetermined device.

[059] In some embodiments, speech collecting device 11 and speech recognition device 12 may be integrated in one device. Alternatively, speech collecting device 11 may send, via a connection line, a wireless connection or the like, the original speech data or the extracted speech features to speech recognition device 12. In some embodiments, when speech recognition device 12 is located at a network side, speech collecting device 11 may send, via the Internet, the original speech data or the extracted speech features to speech recognition device 12.

[060] Output device 13 and speech recognition device 12 can be integrated in one device. Alternatively, output device 13 may be configured to receive or acquire, via a connection line, a wireless connection, or the like, the recognition result from speech recognition device 12. In some embodiments, when speech recognition device 12 is located at a network side, output device 13 may be configured to receive or acquire, via the Internet, the recognition result from speech recognition device 12.

[061] Speech recognition device 12 may further include a computing unit for multiplying the vector containing the speaker recognition features by a weight matrix. The vector containing the speaker recognition features is extracted by speaker recognition unit 121. Speech recognition device 12 may be configured to provide the product of the multiplication to speech recognition unit 122. Alternatively, speaker recognition unit 121 or speech recognition unit 122 may be configured to multiply the vector containing the speaker recognition features by the weight matrix.

[062] Speech recognition device 12 may not be an independent device. For example, speaker recognition unit 121 and speech recognition unit 122 may be distributed in two devices. Speaker recognition unit 121 or speech recognition unit 122 may also be implemented by one or more distributed devices.

[063] Fig. 4 illustrates a schematic diagram of an exemplary neural network, according to some embodiments of the present disclosure. As shown in Fig. 4, the neural network comprises an input layer L1, a hidden layer L2, and an output layer L3. Input layer L1 contains three neuron nodes X1, X2 and X3. Hidden layer L2 contains three neuron nodes Y1, Y2 and Y3. Output layer L3 contains one neuron node Z. The neural network shown in Fig. 4 is merely used to illustrate the principle of a neural network and is not intended to define the first neural network and the second neural network described above.

[064] In Fig. 4, a bias node B1 corresponds to hidden layer L2, and is used to store a bias term for bias compensation at hidden layer L2. The bias term in bias node B1 and outputs of each neuron node in input layer L1 provide inputs for each neuron node in hidden layer L2. A bias node B2 corresponds to output layer L3, and is used to store a bias term for bias compensation at output layer L3. The bias term in bias node B2 and outputs of each neuron node in hidden layer L2 provide inputs for each neuron node in output layer L3. The bias term may be either preset or input into the neural network from an external device.

[065] The bias term refers to a vector used for bias compensation. Bias compensation at a certain layer means that, for each neuron node of that layer, the computation is based on the result of the weighted summation of the output values of all neuron nodes of the previous layer, plus the value, corresponding to that neuron node, in the bias term provided by the bias node corresponding to the layer.

[066] For example, assume that the output values of neuron nodes X1, X2 and X3 in input layer L1 are x1, x2 and x3, respectively. For neuron node Y1 in hidden layer L2, the output value is:

f(W_11 x1 + W_12 x2 + W_13 x3 + b_1), where f represents the computation made by neuron node Y1 on the content in the brackets, and the content in the brackets represents the input value received by neuron node Y1; W_ij refers to the weight between the j-th neuron node in layer L1 and the i-th neuron node in the next layer (i.e., layer L2), so that for Y1, i = 1 and j = 1, 2, 3; and b_s refers to the value of the bias term in bias node B1 corresponding to the s-th neuron node in hidden layer L2, s = 1, 2, 3. For example, the value of the bias term in bias node B1 corresponding to neuron node Y1 is b_1.
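
A small numeric illustration of this computation, with arbitrary values and a sigmoid standing in for f:

```python
import numpy as np

def f(z):                                  # activation standing in for "f" above
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])             # outputs x1, x2, x3 of input layer L1
W_L1 = np.array([[0.1, 0.2, -0.3],         # row i holds the weights W_i1, W_i2, W_i3
                 [0.4, -0.5, 0.6],
                 [-0.7, 0.8, 0.9]])
b_B1 = np.array([0.05, -0.1, 0.2])         # bias term stored in bias node B1

y = f(W_L1 @ x + b_B1)                      # outputs of hidden layer L2
print(y[0])  # value of neuron node Y1: f(W_11*x1 + W_12*x2 + W_13*x3 + b_1)
```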

[067] Referring back to Fig. 2, in step S120, compensating bias in a second neural network in accordance with the vector containing the speaker recognition features may refer to linearly transforming the vector containing the speaker recognition features and taking it as the bias term of a certain layer or certain layers, other than the input layer, in the second neural network. The linear transformation may be made in, but is not limited to, a way of multiplication by the weight matrix.

[068] The first neural network can include three hidden layers. In some embodiments, the first neural network may include one or two hidden layers, or may include four or more hidden layers. In some embodiments, the second neural network may include three hidden layers. In some embodiments, the second neural network may include one or two hidden layers, or may include four or more hidden layers.

[069] In some embodiments, the speaker recognition features may include at least speaker voiceprint information. The speaker voiceprint information can be used to distinguish speech data of different users. In other words, the speaker voiceprint information extracted from the speech data of different users is different. In some embodiments, the speaker recognition features may include one or more of the following: speaker voiceprint information, environment information, and channel information. The environment information can be used to characterize the features of the environment where the speech data is collected. The channel information can be used to characterize the features of the channel where the speech data is collected.

[070] In some embodiments, the first neural network may be a recursive neural network. A recursive neural network refers to a neural network having one or more feedback loops, and can achieve real dynamic modeling for a nonlinear system. When a recursive neural network is used to extract the vector containing the speaker recognition features, the extraction can be performed on shorter-term data. The recurrent neural network may be, but is not limited to, a long short-term memory (LSTM) recurrent neural network.
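
For illustration, a recurrent first network might expose the final state of its last hidden (LSTM) layer as the speaker vector; the use of PyTorch, the layer sizes, and the class name below are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class SpeakerClassifier(nn.Module):
    """Sketch of a recurrent first network: feature frames in, speaker vector out."""
    def __init__(self, feat_dim=40, hidden_dim=256, num_speakers=1000):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_speakers)   # used only during training

    def forward(self, frames):                # frames: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(frames)
        speaker_vec = h_n[-1]                 # final state of the last hidden layer
        return speaker_vec, self.out(speaker_vec)

model = SpeakerClassifier()
frames = torch.randn(1, 50, 40)               # a short stretch of assumed 40-dim features
speaker_vec, logits = model(frames)
print(speaker_vec.shape)                      # torch.Size([1, 256])
```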

[071] In some embodiments, compensating bias in the second neural network in accordance with the vector containing the speaker recognition features in step S120 may include multiplying the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

[072] In some embodiments, when the weight matrix is a unit matrix, after multiplied by a weight matrix, the vector containing the speaker recognition features may not change. The vector containing the speaker recognition features can be directly taken as the bias term of the second neural network.

[073] In some embodiments, the first neural network, the second neural network, and the weight matrix may be trained through training the first neural network and the second neural network respectively, and training the trained first neural network, the weight matrix, and the trained second neural network collectively. Training collectively may refer to inputting speech data for training into the first neural network and the second neural network respectively, and compensating bias on the second neural network after multiplying the vector containing the speaker recognition features, extracted by the first neural network, by the weight matrix. The training may be performed by, but not limited to a graphics processing unit (GPU).

[074] In some embodiments, after training the trained first neural network, the weight matrix, and the trained second neural network collectively, the method may further include initializing the first neural network, the second neural network, and the weight matrix. The method may also include updating the weight matrix using a back propagation algorithm in accordance with a predetermined objective criterion. In addition, the method may include updating the second neural network and a connection matrix using the error back propagation algorithm in accordance with a predetermined objective criterion. The initialization on the weight matrix may be a random initialization according to the Gaussian distribution. The above predetermined objective criterion may include, but is not limited to: target least mean square error (LMS), recursive least square (RLS), and normalized least mean square error (NLMS).
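
A compact sketch of the staged training just described, under assumed choices (PyTorch, toy layer sizes, and cross-entropy standing in for the predetermined objective criterion):

```python
import torch
import torch.nn as nn

# Assume speaker_net and acoustic_net have already been trained separately (first stage).
speaker_net = nn.Sequential(nn.Linear(40, 256), nn.Sigmoid())          # stand-in first network
acoustic_net = nn.ModuleDict({
    "hidden": nn.Linear(40, 1024),
    "output": nn.Linear(1024, 3000),                                   # assumed number of output states
})
W_conn = nn.Linear(256, 1024, bias=False)                              # the weight/connection matrix
nn.init.normal_(W_conn.weight, std=0.01)                               # random (Gaussian) initialization

criterion = nn.CrossEntropyLoss()

def forward(frames):
    speaker_vec = speaker_net(frames)
    bias_term = W_conn(speaker_vec)                                    # compensation bias
    hidden = torch.sigmoid(acoustic_net["hidden"](frames) + bias_term)
    return acoustic_net["output"](hidden)

frames = torch.randn(8, 40)                                            # toy mini-batch of feature frames
targets = torch.randint(0, 3000, (8,))

# Collective stage, part 1: update only the connection matrix by back propagation.
opt = torch.optim.SGD(W_conn.parameters(), lr=0.01)
loss = criterion(forward(frames), targets)
opt.zero_grad(); loss.backward(); opt.step()

# Collective stage, part 2: update the second network together with the connection matrix.
opt = torch.optim.SGD(list(acoustic_net.parameters()) + list(W_conn.parameters()), lr=0.001)
loss = criterion(forward(frames), targets)
opt.zero_grad(); loss.backward(); opt.step()
```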

[075] In some embodiments, compensating bias in the second neural network in accordance with the vector containing the speaker recognition features includes compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features. The vector containing the speaker recognition features can be an output vector of a last hidden layer in the first neural network. For example, assuming that the second neural network contains an input layer, three hidden layers and an output layer, all the layers, except for the input layer, can refer to the output layer and the three hidden layers. Some layers, except for the input layer, can refer to one or more of the four layers, i.e., the output layer and the three hidden layers.

[076] The bias compensation on a certain layer in the second neural network according to the vector containing the speaker recognition features may refer to taking a vector, obtained by multiplying the vector containing the speaker recognition features by the weight matrix, as the bias term of the layer. For example, the bias compensation on all the layers, other than the input layer, in the second neural network according to the vector containing the speaker recognition features may refer to taking a vector, obtained by multiplying the vector containing the speaker recognition features by the weight matrix, as the respective bias terms of the output layer and the three hidden layers in the second neural network.
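
The multi-layer case might be illustrated as follows; here each compensated layer is given its own small connection matrix, which is only one possible reading (a single shared bias node, discussed below, would instead reuse one transformed vector), and all sizes are assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
layer_dims = [40, 1024, 1024, 1024, 3000]        # input, three hidden layers, output (assumed)
weights = [rng.normal(size=(layer_dims[i + 1], layer_dims[i])) * 0.01
           for i in range(len(layer_dims) - 1)]
speaker_vec = rng.normal(size=512)               # from the first neural network

# One connection matrix per compensated layer (illustrative choice).
conn = [rng.normal(size=(dim, 512)) * 0.01 for dim in layer_dims[1:]]

a = rng.normal(size=40)                          # activation of the input layer
for W, C in zip(weights, conn):                  # every layer except the input layer
    a = sigmoid(W @ a + C @ speaker_vec)         # weighted sum plus compensation bias
print(a.shape)                                   # (3000,) -> output layer scores
```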

[077] In some embodiments, the vector containing the speaker recognition features may be an output vector of the last hidden layer in the first neural network. The output vector of the last hidden layer has fewer dimensions with respect to the output vector of the output layer, thereby avoiding overfitting.

[078] In some embodiments, the vector containing the speaker recognition features may be an output vector of hidden layers, other than the last hidden layer, in the first neural network, or may be an output vector of the output layer.

[079] In some embodiments, compensating bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features may include transmitting the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network. The bias node corresponding to a certain layer may store a bias term used for bias compensation on the layer. The vector containing the speaker recognition features may be a vector consisting of respective output values of multiple neuron nodes in the last hidden layer in the first neural network.

[080] Transmitting the vector containing the speaker recognition features to the bias nodes may refer to directly sending the vector containing the speaker recognition features to the bias nodes, or may refer to linearly transforming the vector containing the speaker recognition features and then sending to the bias nodes.

[081] When multiple layers in the second neural network are subjected to bias compensation with the same vector, the multiple layers may respectively correspond to a bias node, or the multiple layers may also correspond to the same bias node. For example, the vector containing the speaker recognition features, extracted by the first neural network, may be transmitted to a plurality of bias nodes, respectively. The plurality of bias nodes correspond to a plurality of layers requiring bias compensation in the second neural network on a one-to-one basis. As another example, the vector containing the speaker recognition features, extracted by the first neural network, may also be transmitted to one bias node. The bias node corresponds to a plurality of layers requiring bias compensation in the second neural network.

[082] In some embodiments, the speech data is collected original speech data or speech features extracted from the collected original speech data. The speech features may include, but are not limited to, Mel frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP) coefficients, filter bank features, or any combination thereof.
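
For reference, MFCC and log filter-bank features of the kinds listed above could be extracted with a library such as librosa; the library choice, parameter values, and the file name utterance.wav are assumptions.

```python
import librosa
import numpy as np

# Hypothetical input file, resampled to an assumed 16 kHz rate.
y, sr = librosa.load("utterance.wav", sr=16000)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)            # (13, num_frames) MFCCs
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)   # 40-channel filter bank
fbank = librosa.power_to_db(mel)                              # log filter-bank features

features = np.vstack([mfcc, fbank]).T                         # (num_frames, 53) example combination
print(features.shape)
```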

[083] In some embodiments, the speaker recognition features may correspond to different users on a one-to-one basis, or may correspond to clusters of different users on a one-to-one basis. The speaker recognition features corresponding to different users on a one- to-one basis means that the output layer of the first neural network outputs the identity of the user. The speaker recognition features corresponding to clusters of different users on a one- to-one means that the output layer of the first neural network outputs the category identity after the users are clustered.

[084] A cluster may contain one or more patterns, wherein the pattern may refer to a vector of a measurement, or may be a point in a multidimensional space. The clustering operation is based on similarity, and the patterns in the same cluster have more similarity than the patterns in different clusters. The algorithms for clustering can be divided into division methods, hierarchical methods, density algorithms, graph theory-clustering methods, grid algorithms, and model algorithms. For example, the algorithms can be K-MEANS, K- MEDOIDS, Clara, or Clarans.

[085] Clustering users may refer to classifying speaker recognition features of a plurality of users into a plurality of clusters according to the similarity between the speaker recognition features of different users during the training, and computing (e.g., weighted averaging) the plurality of speaker recognition features, which are classified into one cluster to obtain a vector containing speaker recognition features corresponding to the cluster. The category identity can be an identity used to represent one cluster. The category identities correspond to the clusters on a one-to-one basis.

[086] When speech recognition for a very large number of users is required, if the clustering operation is performed, the set of output results may be a limited number of vectors containing speaker recognition features. For example, when there are millions of users, if the users are classified into thousands of clusters, there are only thousands of vectors containing speaker recognition features, thereby greatly reducing the implementation complexity.
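
A sketch of clustering speaker vectors with K-means (one of the algorithms named above); scikit-learn, the vector dimension, and the randomly generated stand-in vectors are assumptions, and the user count is reduced so the example runs quickly.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
speaker_vectors = rng.normal(size=(10_000, 256))        # one stand-in vector per user

kmeans = KMeans(n_clusters=1000, n_init=3, random_state=0).fit(speaker_vectors)

# Each cluster centre acts as the vector of speaker recognition features for that
# cluster; the category identity is simply the cluster index.
cluster_vectors = kmeans.cluster_centers_                # (1000, 256)
category_id = kmeans.predict(speaker_vectors[:1])        # cluster of the first user
print(cluster_vectors.shape, int(category_id[0]))
```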

[087] When the speaker recognition features are classified into a plurality of clusters according to the similarity between the speaker recognition features, different clustering results may also be obtained according to different dimensions of similarity, e.g., different types of speaker recognition features, such as voiceprint information, environmental information, channel information, etc. For example, speaker recognition features with similar voiceprints can be regarded as one cluster. As another example, speaker recognition features corresponding to the same or similar environment can be regarded as one cluster. Alternatively, speaker recognition features corresponding to similar channels can be regarded as one cluster.

[088] Fig. 5 is a schematic diagram of an exemplary system architecture for speech recognition, according to some embodiments of the present disclosure. As shown in Fig. 5, the system may include a speaker classifier 21 and a speech recognition system 23. The speaker recognition feature in the system is speaker voiceprint information. Speaker classifier 21 is configured to perform the above-described step S110. Speech recognition system 23 is configured to perform the above-described step S120.

[089] The vector containing speaker voiceprint information can be linearly transformed by a connection matrix 22. The connection matrix may be, but is not limited to, a weight matrix.

[090] Speaker classifier 21 utilizing the first neural network for extracting the vector containing the speaker recognition features may include an input layer 211, one or more hidden layers 212, and an output layer 213. In some embodiments, the number of hidden layers 212 may be three. Alternatively, there may be one or more hidden layers 212.

[091] Speech recognition system 23 utilizing the second neural network for recognizing speech may include an input layer 231, one or more hidden layers 232, and an output layer 233. In some embodiments, the number of hidden layers 232 may be three. In some embodiments, there may be one or more hidden layers 232.

[092] The speech data received by input layer 211 of the first neural network in speaker classifier 21 can be the same as that received by input layer 231 of the second neural network in speech recognition system 23. The speech data can be the collected original speech data. Alternatively, the speech data can be speech features extracted from the original speech data.

[093] Accordingly, the first neural network in speaker classifier 21 can have the same input as the second neural network in speech recognition system 23. That is, the speech data from which the vector containing speaker voiceprint information is obtained may be the same as the speech data for speech recognition. Therefore, the bias compensation on the second neural network according to the vector containing speaker voiceprint information can be completely matched with the speech data to be recognized. As a result, the performance of speech recognition can be effectively improved. The first neural network and the second neural network can each include any one or a combination of several of the following neural networks: a fully connected neural network (DNN), a convolution neural network (CNN), and a recurrent neural network (RNN).

[094] A vector expression containing speaker voiceprint information may be an output vector of the last hidden layer in speaker classifier 21.

[095] In speech recognition system 23, each of output layer 233 and one or more hidden layers 232 can take the linearly transformed vector expression containing speaker voiceprint information as a bias term. In some embodiments, in output layer 233 and one or more hidden layers 232, at least one or more layers can take the linearly transformed vector expression containing speaker voiceprint information as a bias term.

[096] Connection matrix 22 may also be configured to carry out length normalization on the vector containing speaker voiceprint information. In some embodiments, the vector containing speaker voiceprint information, output by the speaker classifier, can be directly provided, after being multiplied by a weight, to the speech recognition system 23, without being subjected to the length normalization.
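
Length normalization, when applied, simply rescales the speaker vector to unit Euclidean length, e.g.:

```python
import numpy as np

def length_normalize(v, eps=1e-10):
    """Scale the speaker vector to unit Euclidean length before the linear transform."""
    return v / (np.linalg.norm(v) + eps)

v = np.array([3.0, 4.0])
print(length_normalize(v))   # [0.6 0.8]
```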

[097] The data output by output layer 213 of speaker classifier 21 may be the tag IDs of different users, or may be the tag ID of a cluster after the users are clustered. The output data of the output layer may be used only for training. The recognition result output from output layer 233 of the speech recognition system 23 may be a state-level, a phone-level, or a word-level tag ID.

[098] The exemplary system architecture shown in Fig. 5 can further perform the following functionality:

[099] Train, using the training data, the acoustic model (e.g., the acoustic model referred to in Fig. 2) based on the second neural network and the first neural network of the speaker classifier. The first and the second neural networks can thereby achieve desired speaker recognition performance and speech recognition performance, respectively. Moreover, the training can include training the first neural network, the connection matrix, and the second neural network collectively. A GPU can be used for speeding up this training.

[0100] The system architecture can use the trained acoustic model and speaker classifier as the acoustic model and speaker classifier for network initialization. In some embodiments, the network initialization may also include randomly initializing the connection matrix of Fig. 5.

[0101] According to a predetermined objective criterion, the system architecture can use the back propagation (BP) algorithm to update the connection matrix to reach a convergent state.

[0102] According to a predetermined objective criterion, the system architecture can use the BP algorithm to update the acoustic model and the connection matrix to reach a convergent state. The predetermined objective criterion can be set according to the needs in actual applications.

[0103] Moreover, the system architecture can extract speech features from the collected original speech data. The extracted speech features are processed by the speaker classifier, so that a vector containing speaker voiceprint information corresponding to the speech features is obtained. The vector is linearly transformed by the connection matrix and sent to the speech recognition system. The extracted speech features are decoded by the acoustic model based on the second neural network in the speech recognition system. Finally, a speech recognition result can be obtained. In the speech recognition system, the bias terms of the output layer and the three hidden layers of the second neural network may be the linearly transformed vector containing speaker voiceprint information.
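
Tying the pieces together, the decoding pass described in this paragraph might look like the following sketch; the function names, per-layer connection matrices, and frame-by-frame flow are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recognize(frames, speaker_net, conn_matrices, acoustic_layers):
    """Sketch of the decoding pass described above.
    frames:          (num_frames, feat_dim) features from the collected speech
    speaker_net:     callable returning the speaker-voiceprint vector for those frames
    conn_matrices:   one linear transform per compensated layer of the second network
    acoustic_layers: list of (W, b) weight/bias pairs for its hidden and output layers
    Returns per-frame state-level indices, e.g. for a downstream decoder."""
    speaker_vec = speaker_net(frames)                    # same data that is being recognized
    biases = [C @ speaker_vec for C in conn_matrices]    # linearly transformed vector per layer
    states = []
    for x in frames:
        a = x
        for (W, b), extra in zip(acoustic_layers, biases):
            a = sigmoid(W @ a + b + extra)               # ordinary bias plus compensation bias
        states.append(int(np.argmax(a)))
    return states
```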

[0104] The present application also relates to a speech recognition apparatus. The speech recognition apparatus includes a memory configured to store a program for speech recognition. The speech recognition apparatus also includes a processor configured to execute the program for speech recognition. The processor, when executing the program for speech recognition, may be configured to extract, via a first neural network, a vector containing speaker recognition features from speech data.

[0105] The processor, when executing the program for speech recognition, may also be configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features. The processor, when executing the program for speech recognition, may further be configured to recognize speech, via an acoustic model based on the second neural network, in the speech data.

[0106] In some embodiments, the processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features may include being configured to multiply the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

[0107] In some embodiments, the speaker recognition features may include at least speaker voiceprint information.

[0108] In some embodiments, the processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features may include being configured to compensate bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features. The vector containing the speaker recognition features can be an output vector of a last hidden layer in the first neural network.

[0109] In some embodiments, the speaker recognition features may correspond to different users on a one-to-one basis, or may correspond to clusters of different users on a one-to-one basis. The speaker recognition features corresponding to different users on a one-to-one basis means that the output layer of the first neural network outputs the identity of the user. The speaker recognition features corresponding to clusters of different users on a one-to-one basis means that the output layer of the first neural network outputs the category identity after the users are clustered.

[0110] In some embodiments, the first neural network may be a recursive neural network.

[011 1] In some embodiments, the processor being configured to compensate bias in the second neural network in accordance with the vector containing the speaker recognition features includes being configured to transmit the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[0112] Moreover, the processor, when executing the program for speech recognition, may be configured to perform the above steps S110 and S120. More details of the operations executed by the processor, when executing the program for speech recognition, can be found above.

[0113] The present application further relates to a speech recognition apparatus. Fig. 6 is a schematic diagram of an exemplary speech recognition apparatus, according to some embodiments of the present disclosure. The speech recognition apparatus includes an extraction unit 31 and a recognition unit 32.

[0114] In general, these units (and any sub-units) can be a packaged functional hardware unit designed for use with other components (e.g., portions of an integrated circuit) and/or a part of a program (stored on a computer readable medium) that performs a particular function of related functions. The unit can have entry and exit points and can be written in a programming language, such as, for example, Java, Lua, C or C++. A software unit can be compiled and linked into an executable program, installed in a dynamic link library, or written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software units can be callable from other units or from themselves, and/or can be invoked in response to detected events or interrupts. Software units configured for execution on computing devices can be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other non-transitory medium, or as a digital download (and can be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code can be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions can be embedded in firmware, such as an EPROM. It will be further appreciated that hardware units can be comprised of connected logic units, such as gates and flip-flops, and/or can be comprised of programmable units, such as programmable gate arrays or processors. The units or computing device functionality described herein are preferably implemented as software units, but can be represented in hardware or firmware. Generally, the units described herein refer to logical units that can be combined with other units or divided into sub-units despite their physical organization or storage.

[0115] Extraction unit 31 may be configured to extract, via a first neural network, a vector containing speaker recognition features from speech data. Extraction unit 31 may be configured to perform operations similar to those for extracting the vector containing the speaker recognition features in the above-described apparatus.

[0116] Recognition unit 32 may be configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features, and recognize speech, via an acoustic model based on the second neural network, in the speech data. Recognition unit 32 may be configured to perform operations similar to those for recognizing speech in the above-described apparatus.

[0117] In some embodiments, recognition unit 32 being configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features may include being configured to multiply the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

[0118] In some embodiments, the speaker recognition features may include at least speaker voiceprint information.

[0119] In some embodiments, recognition unit 32 being configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features may include being configured to compensate bias at all or a part of layers, except for an input layer, in the second neural network in accordance with the vector containing the speaker recognition features. Alternatively, the vector containing the speaker recognition features can be an output vector of a last hidden layer in the first neural network.

[0120] In some embodiments, the first neural network may be a recursive neural network.

[0121] In some embodiments, recognition unit 32 being configured to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features may include being configured to transmit the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[0122] In some embodiments, extraction unit 31 may be configured as speaker recognition unit 121 in the system architecture shown in Fig. 3. Recognition unit 32 may be configured as speech recognition unit 122 in the system architecture shown in Fig. 3. The apparatus of Fig. 6 may be configured as the speech recognition device in the system architecture shown in Fig. 3. More detailed operations of the apparatus of Fig. 6 may be referred to as those described above for the speech recognition device in Fig. 3.

[0123] Moreover, the operations executed by extraction unit 31 and recognition unit 32 can be similar to steps S110 and S120 in the above speech recognition method. More details of the operations executed by extraction unit 31 and recognition unit 32 can also be found above.

[0124] The present application is also directed to a speech recognition method. Fig. 7 is a flow chart of an exemplary speech recognition method, according to some embodiments of the present disclosure. This method can be performed by the speech recognition device of Fig. 3 and/or the system architecture of Fig. 5. As shown in Fig. 7, the speech recognition method includes steps S410, S420, and S430, illustrated as follows:

[0125] In step S410, the system architecture collects speech data.

[0126] In step S420, the system architecture extracts a vector containing speaker recognition features by inputting the collected speech data into a first neural network, and compensates bias in a second neural network in accordance with the vector containing the speaker recognition features.

[0127] In step S430, the system architecture recognizes speech by inputting the collected speech data into the second neural network.

[0128] Steps S410, S420, and S430 can be performed continuously during the collecting process. Whenever a batch of speech data is collected, steps S420 and S430 may be performed on the batch of speech data to obtain a result of speech recognition for the batch of speech data. The size of a batch of speech data may be, but is not limited to, one or more frames.
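A hedged sketch of this per-batch operation follows, assuming the two hypothetical PyTorch networks sketched after paragraph [0103]: frames are buffered as they are collected (step S410) and, whenever a batch is full, steps S420 and S430 are run on that batch. The batch size, buffering strategy, and argmax readout are assumptions for illustration only.

```python
import torch

def recognize_stream(frame_source, speaker_net, acoustic_net, batch_frames=8):
    """Yield a recognition result for each collected batch of feature frames;
    a batch may be as small as a single frame."""
    buffer = []
    for frame in frame_source:                 # step S410: collect speech data
        buffer.append(frame)
        if len(buffer) < batch_frames:
            continue
        feats = torch.stack(buffer)
        buffer = []
        with torch.no_grad():
            spk_vec, _ = speaker_net(feats)          # step S420: extract + compensate
            scores = acoustic_net(feats, spk_vec)    # step S430: recognize
        yield scores.argmax(dim=-1)                  # e.g., state-level tag IDs
```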

[0129] Fig. 8 is a schematic diagram of an exemplary implementation process of the speech recognition method in Fig. 7, according to some embodiments of the present disclosure.

[0130] The implementation process includes collecting a user's speech. The implementation process also includes inputting the collected speech data directly, or speech features extracted therefrom, to the first neural network and the second neural network. The implementation process further includes extracting, via the first neural network, the vector containing the speaker recognition features and sending the vector, as a bias term, to the second neural network. The implementation process also includes outputting a recognition result of the speech data from the second neural network.

[0131] The collected original speech data may be directly provided to the first neural network and the second neural network. Alternatively, speech features may be extracted from the collected original speech data and the extracted speech features are then provided to the first neural network and the second neural network.

[0132] In some embodiments, compensating bias in a second neural network in accordance with the vector containing the speaker recognition features may include multiplying the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

[0133] In some embodiments, the speaker recognition features may include at least speaker voiceprint information.

[0134] In some embodiments, the first neural network may be a recursive neural network.

[0135] In some embodiments, compensating bias in a second neural network in accordance with the vector containing the speaker recognition features includes transmitting the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[0136] More details of the first neural network, the second neural network, the extraction of the vector containing the speaker recognition features, the bias compensation at the second neural network according to the vector containing the speaker recognition features, and the speech recognition based on the second neural network are similar to those described above for the speech recognition method.

[0137] The embodiments disclosed in the present application further relate to a speech recognition apparatus. The speech recognition apparatus includes a memory configured to store a program for speech recognition. The speech recognition apparatus also includes a processor configured to execute the program for speech recognition to extract, via a first neural network, a vector containing speaker recognition features from speech data. The processor is also configured to execute the program for speech recognition to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features. The processor is further configured to execute the program for speech recognition to recognize speech, via an acoustic model based on the second neural network, in the speech data.

[0138] In some embodiments, the processor being configured to execute the program for speech recognition to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features may include being configured to multiply the vector containing the speaker recognition features by a weight matrix to be a bias term of the second neural network.

[0139] In some embodiments, the speaker recognition features may include at least speaker voiceprint information.

[0140] In some embodiments, the first neural network may be a recursive neural network.

[0141] In some embodiments, the processor being configured to execute the program for speech recognition to compensate bias in a second neural network in accordance with the vector containing the speaker recognition features may include being configured to transmit the vector containing the speaker recognition features, output by neuron nodes at the last hidden layer of the first neural network, to bias nodes corresponding to the all or the part of layers, except for the input layer, in the second neural network.

[0142] When the processor is configured to read and execute the program for speech recognition, the processor being configured to collect speech data can refer to the operations of the speech collecting device in Fig. 3. More details of the operations can be found in the description of Fig. 3. More operational details of extracting the vector containing the speaker recognition features, compensating bias at the second neural network according to the vector containing the speaker recognition features, and recognizing speech based on the second neural network can also be found in the descriptions of the speech recognition method above.

[0143] The present application further relates to a speech recognition apparatus. Fig. 9 is a schematic diagram of an exemplary speech recognition apparatus, according to some embodiments of the present disclosure. As shown in Fig. 9, the speech recognition apparatus includes a collecting unit 61 configured to collect speech data. The speech recognition apparatus also includes an extraction and compensation unit 62 configured to extract a vector containing speaker recognition features by inputting the collected speech data into a first neural network, and compensate bias in a second neural network in accordance with the vector containing the speaker recognition features. The speech recognition apparatus further includes a recognition unit 63 configured to recognize speech by inputting the collected speech data into the second neural network. These units (and any sub-units) can be a packaged functional hardware unit designed for use with other components (e.g., portions of an integrated circuit) and/or a part of a program (stored on a computer readable medium) that performs a particular function of related functions.

[0144] Collecting unit 61 may be configured to perform operations similar to those for collecting speech data in the above-described apparatus.

[0145] Extraction and compensation unit 62 may be configured to perform operations similar to those for extracting the vector containing the speaker recognition features and compensating bias at the second neural network in the above-described apparatus.

[0146] Recognition unit 63 may be configured to perform operations similar to those for recognizing speech in the above-described apparatus.

[0147] Collecting unit 61 may be equipped in an independent device. Alternatively, collecting unit 61 can be equipped in the same device with extraction and compensation unit 62 and recognition unit 63.

[0148] Collecting unit 61 may be implemented with reference to the speech collecting device shown in Fig. 3. More implementation details of extracting the vector containing the speaker recognition features by the first neural network, compensating bias at the second neural network according to the vector containing the speaker recognition features, and recognizing speech by extraction and compensation unit 62 and recognition unit 63 can be found in the descriptions of the above speech recognition method.

[0149] As indicated above, it is appreciated that all or some steps of the above-mentioned methods may be performed by relevant hardware under the instruction of programs. The programs may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or a compact disc. Optionally, all or some steps of the embodiments described above may also be implemented using one or more integrated circuits. Accordingly, various modules/units in the above embodiments may be implemented in the form of hardware, or may be implemented in the form of software functional modules. This application is not limited to any particular form of combination of hardware and software.

[0150] Certainly, there may be various other embodiments of this application. Those skilled in the art would be able to make various changes and variations in accordance with this application without departing from the spirit and substance of this application. All these corresponding changes and variations should fall within the scope of the claims of this application.