


Title:
VOICE PROCESSING METHOD, APPARATUS, AND SYSTEM
Document Type and Number:
WIPO Patent Application WO/2014/194728
Kind Code:
A1
Abstract:
Methods, apparatus, and systems for voice processing are provided herein. An exemplary method can be implemented by a terminal. A voice bit stream to be sent can be obtained. Voice control information corresponding to the voice bit stream to be sent can be obtained. The voice control information can be used for a voice server to determine a voice-mixing strategy. The voice bit stream and the voice control information can be sent to the voice server. At least one voice bit stream,returned by the voice server based on the voice-mixing strategy,can be received. The at least one voice bit stream can be outputted.

Inventors:
PENG YUANJIANG (CN)
LIU HONG (CN)
Application Number:
PCT/CN2014/076090
Publication Date:
December 11, 2014
Filing Date:
April 24, 2014
Assignee:
TENCENT TECH SHENZHEN CO LTD (CN)
International Classes:
G10L19/00; H04L29/06
Foreign References:
CN103327014A2013-09-25
CN1845573A2006-10-11
CN101110868A2008-01-23
CN101378430A2009-03-04
Attorney, Agent or Firm:
ADVANCE CHINA IP LAW OFFICE (No.85 Huacheng Avenue Tianhe Distric, Guangzhou Guangdong 3, CN)
Claims:

1. A voice processing method, implemented by a terminal, comprising:

obtaining a voice bit stream to be sent;

obtaining voice control information corresponding to the voice bit stream to be sent, wherein the voice control information is used for a voice server to determine a voice-mixing strategy;

sending the voice bit stream and the voice control information to the voice server;

receiving at least one voice bit stream returned by the voice server based on the voice-mixing strategy; and

outputting the at least one voice bit stream.

2. The method according to claim 1, wherein, when the at least one voice bit stream includes multiple voice bit streams, the method further includes:

before the outputting of the at least one voice bit stream, performing a voice mixing process of the at least one voice bit stream.

3. The method according to claim 1, wherein the voice control information includes first voice control information directly extracted from the voice bit stream.

4. The method according to claim 3, wherein the voice control information further includes second voice control information related to a current user or a current voice session.

5. The method according to claim 4, wherein the second voice control information includes a request to specify a voice mixing process to be performed on the voice server.

6. The method according to claim 3, wherein the first voice control information includes short term amplitude energy, long term amplitude energy, voice activity detection information, or a combination thereof.

7. A voice processing method implemented by a voice server, comprising:

receiving, from each terminal of a plurality of terminals, a voice bit stream and voice control information used for the voice server to determine a voice-mixing strategy;

generating the voice-mixing strategy by generating a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal; according to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals, selecting multiple voice bit streams that need a voice mixing process; and according to the second voice-mixing strategy,

returning the multiple voice bit streams that need the voice mixing process to a corresponding terminal of the plurality of terminals, or performing the voice mixing process on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream, and returning the mixed voice bit stream to the corresponding terminal.

8. The method according to claim 7, wherein the voice control information includes first voice control information directly extracted from the voice bit stream for generating the first voice-mixing strategy according to the first voice control information.

9. The method according to claim 8, wherein the voice control information further includes second voice control information related to a current user or a current voice session for generating the second voice-mixing strategy according to the second voice control information.

10. The method according to claim 9, wherein the generating of the second voice-mixing strategy includes:

detecting whether the voice server has sufficient hardware resources; and

when the voice server has sufficient hardware resources, marking, in the second voice-mixing strategy, to perform the voice mixing process on the voice server; or

when the voice server does not have sufficient hardware resources, marking, in the second voice-mixing strategy, to perform the voice mixing process on the corresponding terminal.

11. The method according to claim 10, further including:

when the hardware resources are detected to have been changed, changing the second voice- mixing strategy accordingly.

12. The method according to claim 9, wherein the generating of the second voice-mixing strategy includes:

detecting user rights corresponding to the each terminal; and

when the user rights of the current user exceed a preset level, marking, in the second voice-mixing strategy, to perform the voice mixing process on the voice server; or

when the user rights of the current user do not exceed the preset level, marking, in the second voice-mixing strategy, to perform the voice mixing process on the corresponding terminal.

13. A voice processing terminal apparatus, comprising:

a first obtaining unit configured to obtain a voice bit stream to be sent;

a second obtaining unit configured to obtain voice control information corresponding to the voice bit stream to be sent, wherein the voice control information is used for a voice server to determine a voice-mixing strategy;

a sending unit configured to send the voice bit stream and the voice control information to the voice server;

a receiving unit configured to receive at least one voice bit stream returned by the voice server based on the voice-mixing strategy; and

an output unit configured to output the at least one voice bit stream.

14. The apparatus according to claim 13, further including:

a voice-mixing unit configured to perform a voice mixing process of the at least one voice bit stream, when the at least one voice bit stream includes multiple voice bit streams.

15. The apparatus according to claim 14, wherein the voice control information includes first voice control information directly extracted from the voice bit stream.

16. The apparatus according to claim 15, wherein the voice control information further includes second voice control information related to a current user or a current session.

17. The apparatus according to claim 16, wherein the second voice control information includes a request to specify a voice mixing process to be performed on the voice server.

18. The apparatus according to claim 16, wherein the first voice control information includes short term amplitude energy, long term amplitude energy, voice activity detection information, or a combination thereof.

19. A voice processing system including the apparatus of any claim of claims 13-18.

20. A voice processing server apparatus, comprising:

a receiving unit configured to receive, from each terminal of a plurality of terminals, a voice bit stream and voice control information used for a voice server to determine a voice-mixing strategy;

a voice-mixing-strategy generation unit configured to generate the voice-mixing strategy by generating a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal;

a selection unit configured to select multiple voice bit streams that need a voice mixing process, according to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals; and

a voice-mixing processing unit configured to, according to the second voice-mixing strategy, return the multiple voice bit streams that need the voice mixing process to a corresponding terminal of the plurality of terminals, or

perform the voice mixing process on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream, and return the mixed voice bit stream to the corresponding terminal.

21. The apparatus according to claim 20, wherein the voice control information includes first voice control information directly extracted from the voice bit stream, and the voice-mixing-strategy generation unit is further configured to:

generate the first voice-mixing strategy according to the first voice control information.

22. The apparatus according to claim 21, wherein the voice control information further includes second voice control information related to a current user or a current session, and the voice-mixing-strategy generation unit is further configured to:

generate the second voice-mixing strategy according to the second voice control information.

23. The apparatus according to claim 22, wherein the voice-mixing-strategy generation unit is further configured to:

detect whether the voice server has sufficient hardware resources; and

when the voice server has sufficient hardware resources, mark, in the second voice-mixing strategy, to perform the voice mixing process on the voice server; or

when the voice server does not have sufficient hardware resources, mark, in the second voice-mixing strategy, to perform the voice mixing process on the corresponding terminal.

24. The apparatus according to claim 23, wherein the voice-mixing-strategy generation unit is further configured to:

when the hardware resources are detected to have been changed, change the second voice-mixing strategy correspondingly.

25. The apparatus according to claim 23, wherein the voice-mixing-strategy generation unit is further configured to:

detect user rights corresponding to the terminal; and

when the user rights of the current user exceed a preset level, mark, in the second voice-mixing strategy, to perform the voice mixing process on the voice server; or

when the user rights of the current user do not exceed the preset level, mark, in the second voice-mixing strategy, to perform the voice mixing process on the corresponding terminal.

26. A voice processing system including the apparatus of any claim of claims 20-25.

Description:
VOICE PROCESSING METHOD, APPARATUS, AND SYSTEM


CROSS-REFERENCES TO RELATED APPLICATIONS

[0001] This application claims priority to Chinese Patent Application No. 201310222683.0, filed on June 6, 2013, the entire contents of which are incorporated herein by reference.

FIELD OF THE DISCLOSURE

[0002] The present disclosure generally relates to voice processing technology and, more particularly, relates to methods, apparatus, and systems for voice processing.

BACKGROUND

[0003] In a voice processing system, in order to support multi-party voice communication, a voice mixing process often needs to be performed on voices from multiple channels. Multi-channel voice mixing refers to a method or process for superimposing waveforms of voices from multiple channels upon each other, to form a single channel of voice. The simplest voice mixing is to directly add together all original waveforms of voices (e.g., pulse-code modulation (PCM) streams) from input channels to form one voice PCM stream after the voice mixing.
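
The direct waveform addition described above can be sketched as follows, assuming 16-bit PCM samples held in plain Python lists; the clamping step and the function name are illustrative details rather than part of the disclosure.

```python
# Naive multi-channel voice mixing: add the PCM samples of all input
# channels and clamp the sum to the 16-bit sample range.
def mix_pcm(channels):
    """channels: list of equal-length lists of 16-bit PCM samples."""
    if not channels:
        return []
    length = min(len(ch) for ch in channels)
    mixed = []
    for i in range(length):
        total = sum(ch[i] for ch in channels)
        # Clamping avoids output overflow when many channels are summed.
        mixed.append(max(-32768, min(32767, total)))
    return mixed

# Example: two input channels mixed into one output channel.
print(mix_pcm([[100, -200, 300], [50, 50, 50]]))  # [150, -150, 350]
```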

[0004] However, in a practical multi-channel voice mixing system, there are usually a large number of input channels that participate in the voice mixing. In this case, simply, directly adding together voice PCM streams from all input channels can cause a series of problems such as increased background noise and output overflow. Therefore, a multi-channel voice mixing system often selects inputted voices from a small number of channels (usually 2 to 5 channels) at a time for the voice mixing, according to a certain voice-mixing strategy (e.g., a first voice-mixing strategy), in order to minimize problems such as increased background noise and output overflow.

[0005] In a voice communication system, based on different locations for voice mixing, there are two mixing modes including server voice mixing and terminal voice mixing. The server voice mixing has relatively high voice mixing quality, but the voice mixing process consumes significant resources. Especially when there are a great number of voice users, the voice server can be overwhelmed. Terminal voice mixing can reduce resource load on the server, but has relatively low voice mixing quality and cannot meet the high quality requirements for occasions such as audio/video conferences.

BRIEF SUMMARY OF THE DISCLOSURE

[0006] One aspect of the present disclosure includes a voice processing method. An exemplary method can be implemented by a terminal. A voice bit stream to be sent can be obtained. Voice control information corresponding to the voice bit stream to be sent can be obtained. The voice control information can be used for a voice server to determine a voice-mixing strategy. The voice bit stream and the voice control information can be sent to the voice server. At least one voice bit stream, returned by the voice server based on the voice-mixing strategy, can be received. The at least one voice bit stream can be outputted.

[0007] Another aspect of the present disclosure includes a voice processing method implemented by a voice server. In an exemplary method, from each terminal of a plurality of terminals, a voice bit stream and voice control information used for the voice server to determine a voice-mixing strategy can be received. The voice-mixing strategy can be generated by generating a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal. According to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals, multiple voice bit streams that need a voice mixing process can be selected. According to the second voice-mixing strategy, the multiple voice bit streams that need the voice mixing process can be returned to a corresponding terminal of the plurality of terminals, or the voice mixing process can be performed on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream and the mixed voice bit stream can be returned to the corresponding terminal.

[0008] Another aspect of the present disclosure includes a voice processing terminal apparatus. The apparatus can include a first obtaining unit, a second obtaining unit, a sending unit, a receiving unit, and an output unit. The first obtaining unit can be configured to obtain a voice bit stream to be sent. The second obtaining unit can be configured to obtain voice control information corresponding to the voice bit stream to be sent. The voice control information can be used for a voice server to determine a voice-mixing strategy. The sending unit can be configured to send the voice bit stream and the voice control information to the voice server. The receiving unit can be configured to receive at least one voice bit stream returned by the voice server based on the voice-mixing strategy. The output unit can be configured to output the at least one voice bit stream.

[0009] Another aspect of the present disclosure includes a voice processing server apparatus. The voice processing server apparatus can be implemented on a voice server, and can include a receiving unit, a voice-mixing-strategy generation unit, a selection unit, and a voice-mixing processing unit. The receiving unit can be configured to receive, from each terminal of a plurality of terminals, a voice bit stream and voice control information used for a voice server to determine a voice-mixing strategy. The voice-mixing-strategy generation unit can be configured to generate the voice-mixing strategy by generating a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal. The selection unit can be configured to select multiple voice bit streams that need a voice mixing process, according to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals. The voice-mixing processing unit can be configured to, according to the second voice-mixing strategy, return the multiple voice bit streams that need the voice mixing process to a corresponding terminal of the plurality of terminals, or to perform the voice mixing process on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream and return the mixed voice bit stream to the corresponding terminal.

[0010] Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The following drawings are merely examples for illustrative purposes according to various disclosed embodiments and are not intended to limit the scope of the disclosure.

[0012] FIG. 1 depicts an exemplary operation environment of voice processing methods, apparatus and systems in accordance with various disclosed embodiments;

[0013] FIG. 2 depicts a structure diagram of an exemplary terminal in accordance with various disclosed embodiments;

[0014] FIG. 3 depicts a flow diagram of an exemplary voice processing method in accordance with various disclosed embodiments;

[0015] FIG. 4 depicts a structure diagram of an exemplary voice processing apparatus in accordance with various disclosed embodiments;

[0016] FIG. 5 depicts a flow diagram of another exemplary voice processing method in accordance with various disclosed embodiments;

[0017] FIG. 6 depicts an exemplary architecture diagram of multi-stage cascade voice mixing in accordance with various disclosed embodiments;

[0018] FIG. 7 depicts a structure diagram of another exemplary voice processing apparatus in accordance with various disclosed embodiments;

[0019] FIG. 8 depicts a flow diagram of another exemplary voice processing method in accordance with various disclosed embodiments;

[0020] FIG. 9 depicts a structure diagram of an exemplary voice processing system in accordance with various disclosed embodiments; and

[0021] FIG. 10 depicts an exemplary computing system consistent with the disclosed embodiments.

DETAILED DESCRIPTION

[0022] Reference will now be made in detail to exemplary embodiments of the disclosure, which are illustrated in the accompanying drawings.

[0023] Various embodiments provide voice processing methods, apparatus, and systems. FIG. 1 depicts an exemplary operation environment of voice processing methods, apparatus and systems in accordance with various disclosed embodiments. As shown in FIG. 1, a plurality of terminals 21 can respectively communicate with a voice server 22 via a network 23. The network 23 can include the Internet, a local area network (LAN), a mobile communication network, or any other types of computer networks or telecommunication networks, either wired or wireless.

[0024] The voice server 22 can be configured to provide certain server functionalities, e.g., receiving/sending voice bit streams, processing voice, and mixing voice. The voice server 22 may also include one or more processors to execute computer programs in parallel.

[0025] The voice server 22 may be implemented on any appropriate computing platform. FIG. 10 shows a block diagram of an exemplary computing system 1000 (or computer system) capable of implementing the voice server 22. As shown in FIG. 10, the exemplary computer system 1000 may include a processor 1002, a storage medium 1004, a monitor 1006, a communication module 1008, a database 1010, peripherals 1012, and one or more buses 1014 to couple the devices together. Certain devices may be omitted and other devices may be included.

[0026] The processor 1002 can include any appropriate processor or processors. Further, the processor 1002 can include multiple cores for multi-thread or parallel processing. The storage medium 1004 may include memory modules, e.g., Read-Only Memory (ROM), Random Access Memory (RAM), and flash memory modules, and mass storages, e.g., CD-ROM, U-disk, removable hard disk, etc. The storage medium 1004 may store computer programs for implementing various processes (e.g., obtaining voice bit stream, generating voice mixing strategy, etc.), when executed by the processor 1002.

[0027] The monitor 1006 may include display devices for displaying contents in the computing system 1000. The peripherals 1012 may include I/O devices such as keyboard and mouse.

[0028] Further, the communication module 1008 may include network devices for establishing connections through the network 23. The database 1010 may include one or more databases for storing certain data and for performing certain operations on the stored data, e.g., storing programs for mixing voice or for generating voice mixing strategies, etc.

[0029] The terminal 21 can include any appropriate client-side computing device, e.g., a personal computer (PC) (e.g., a desktop computer or a notebook computer), a work station computer, a mobile terminal (e.g., a smart phone or a personal digital assistant), a hand-held computing device (e.g., a tablet PC), etc. One or more smart operating systems can be installed on the terminal 21 and run on the terminal 21.

[0030] The terminal 21 may be configured to provide structures and functions for certain actions and operations. For example, FIG. 2 depicts a structure diagram of an exemplary terminal in accordance with various disclosed embodiments. As an electronic device 100 shown in FIG. 2, the terminal 21 can include one or more processors 102 (for illustrative purposes, one processor 102 is shown in FIG. 2), a memory 104, a transmission module 106, and an audio circuit 110. The structure of the terminal shown in FIG. 2 does not limit the terminal 21 according to various embodiments. More or fewer components than the components as shown in FIG. 2 can be included in the terminal 21. Certain components can be combined. Component arrangements different from FIG. 2 can be used.

[0031] The memory 104 can be configured to store software programs and/or modules, e.g., software programs and/or modules corresponding to the voice processing methods, apparatus and systems as disclosed in various embodiments. By running or executing the software programs and/or modules stored in the memory 104, and by retrieving data stored in the memory 104, the processor 102 can perform various functions and process data. The memory 104 can include high-speed RAM and/or non-volatile memory, e.g., one or more of magnetic storage devices, flash memory devices, and/or other non-volatile solid-state memory devices. In various embodiments, the memory 104 may further include a memory remotely located from the processor 102. In those cases, a remote memory 104 can be remotely connected to the electronic device 100 via a network. The network can include, but is not limited to, the Internet, an intranet, a local area network, a mobile communication network, and a combination thereof.

[0032] The transmission module 106 is configured to receive or transmit data via a network. For example, the network may include a wired network and/or a wireless network. In one example, the transmission module 106 can include a network interface controller (NIC). The NIC can be connected with other network devices (e.g., routers, modems, etc.) via a network cable, such that the NIC can communicate with the Internet.

[0033] In one example, the transmission module 106 can include a radio frequency (RF) module configured to receive and transmit electromagnetic waves. Thus, conversion between the electromagnetic waves and electrical signals can be accomplished, such that the transmission module 106 can communicate with communication networks or other communication devices. The RF module may include any suitable circuit elements that perform the functions of the RF module, including, e.g., an antenna, an RF transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory, etc. The RF module can communicate with various networks including, e.g., the Internet, intranet, wireless communication network, or can communicate with other devices via wireless networks.

[0034] The wireless networks can include cellular telephone networks, wireless LAN, and/or metropolitan area network (MAN). The wireless networks can use various communication standards, protocols and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Wireless Fidelity (WiFi) (e.g., the Institute of Electrical and Electronics Engineers standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), any other suitable protocols for email, instant messaging and short message, and/or any other suitable communication protocols.

[0035] The audio circuit 110, coupled with a speaker 101, an audio jack 103, and a microphone 105, can provide an audio interface between a user and the electronic device 100. For example, the audio circuit 110 can convert audio data to an electrical signal, and transmit the electrical signal to the speaker 101. The speaker 101 is configured to convert the electrical signal to an audio signal output (e.g., sound waves that can be heard by human ear). On the other hand, the audio circuit 110 is configured to receive an electrical signal from the microphone 105, convert the electrical signal to audio data, and transmit the audio data to the processor 102 for further processing. In certain examples, the audio data can be received from the memory 104 or the transmission module 106. In addition, the audio data can be stored in the memory 104 for further processing, or be transmitted via the transmission module 106.

[0036] FIG. 3 depicts a flow diagram of an exemplary voice processing method in accordance with various disclosed embodiments. The exemplary voice processing method can be implemented by the terminal 21. As shown in FIG. 3, the method can include the following exemplary steps.

[0037] In Step S110, a voice bit stream to be sent is obtained. For example, in Step S110, a sound can be recorded via the microphone 105. The microphone 105 can output an analog electrical signal that needs to be first converted to a digital signal, i.e., the voice bit stream. In various embodiments, a voice bit stream can be also referred to as a voice stream, or a voice code stream. A voice bit stream can refer to any suitable type of data sequence digitally representing an audio signal.

[0038] For example, a voice bit stream, e.g., a pulse-code modulation (PCM) stream of an audio signal, can be formed by sampling, quantization, and/or encoding. Further, in order to reduce volume of the voice bit stream, the PCM stream can be compressed, e.g., compressed using Adaptive Differential Pulse Code Modulation (MSADPCM) algorithm, audio compression algorithms of International Telephone and Telegraph Consultative Committee (CCITT) (e.g., A-law algorithm, μ-law algorithm), Moving Pictures Experts Group (MPEG) compression algorithm, and/or any other suitable algorithms. Thus, the voice bit stream can include a PCM stream, or a voice bit stream that has been compressed using certain algorithm(s).

[0039] Further, the voice bit stream can be inputted via the microphone 105 and/or any other suitable means, without limitation. For example, the voice bit stream can be obtained by directly reading an audio file stored in the memory 104.

[0040] In Step S120, corresponding to the voice bit stream, voice control information to be used by a voice server to determine a voice-mixing strategy (or voice-mixing strategies) is obtained. The voice control information can include first voice control information directly extracted from the voice bit stream, and/or second voice control information obtained using other methods.

[0041] As used herein, unless otherwise specified, voice mixing can refer to any suitable process that accomplishes mixing of audio signals from multiple channels. In various embodiments, a voice mixing process can include mixing voice bit streams from multiple channels, i.e., mixing multiple voice bit streams, to generate a single channel of voice, e.g., a mixed voice bit stream. In various embodiments, an audio signal can include a voice, noise, or any other suitable sounds, without limitation. The methods, apparatus, and systems in accordance with various embodiments can be used for processing any audio signals, based on needs of practical applications.

[0042] For example, the first voice control information can include, e.g., short time energy (i.e., short term amplitude energy), long time energy (i.e., long term amplitude energy), voice activity detection information, or a combination thereof. An audio signal can essentially include a discrete time signal, which can be expressed as X(n), where n represents time. Energy of the audio signal (E) can be defined as:

[0043] $E = \sum_{n=-\infty}^{+\infty} X^{2}(n)$

[0044] In the above formula, the upper limit and lower limit of the summation can be positive infinity and negative infinity, respectively. During actual processing, when a short time range or a long time range is used for calculation, the short time energy or the long time energy can be obtained, respectively. In actual voice processing, the short time energy can be used to distinguish between unvoiced, voiced, and mute. The long time energy can be used to indicate average energy during a long period of time. Voice activity detection is a voice processing technique used to detect whether there is a voice signal, and further, to distinguish between a normal voice and background noise, to determine whether a voice input is continuous, etc. Voice activity detection information can refer to information generated by the voice activity detection.
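
As a concrete illustration of these measures, the sketch below computes energy over short and long windows of a PCM signal and derives a crude voice-activity flag; the window lengths and threshold are illustrative assumptions, not values taken from the disclosure.

```python
def energy(samples):
    """E = sum of X^2(n) over the given window of samples."""
    return sum(x * x for x in samples)

def short_time_energy(pcm, frame=160):
    """Energy over the most recent short window (e.g., 20 ms at 8 kHz)."""
    return energy(pcm[-frame:])

def long_time_energy(pcm, window=8000):
    """Average energy over a longer window (e.g., the last second at 8 kHz)."""
    recent = pcm[-window:]
    return energy(recent) / max(1, len(recent))

def voice_activity(pcm, threshold=1_000_000):
    """Crude voice activity detection: short time energy above a threshold."""
    return short_time_energy(pcm) > threshold
```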

[0045] The second voice control information can include, e.g., the voice control information related to a current user or a current voice session (or conversation). In one embodiment, users corresponding to certain terminals can have higher rights or higher levels, and accordingly can have higher priority in voice input or voice mixing. For example, in a voice session, there can be a host among a plurality of users (or user participants) that participate in the voice session. Voice of the host can have the highest voice-mixing priority. For example, in another voice session, a user is a charged user (i.e., a user that makes a payment), and can thus have a higher voice-mixing priority relative to a free user (e.g., a user that does not make a payment).

[0046] As used herein, a session, or a voice session, can refer to a process of interactive information interchange involving audio signals. A session can include a dialogue, a conversation, a meeting, a call, etc. between two or more communicating devices (e.g., between two terminals, or a terminal and a server).

[0047] In one example, the second voice control information can further include a request that specifies voice mixing to be performed on the server side (i.e., on the server). For example, on the terminal 21, when a voice session is established or during the voice session, options can be provided for the users to select a voice session mode. When the user selects a mode that needs high voice mixing quality, a request that specifies voice mixing to be performed on the server can be included in the second voice control information.

[0048] In one embodiment, a certain current voice session can have a higher priority (e.g., voice-mixing priority) relative to other general sessions. For example, a voice session has been paid for, so voice mixing quality of the session needs to be ensured, and thus voice mixing may need to be performed on the server side.

[0049] In Step S130, the obtained voice bit stream and the voice control information are sent to a voice server. For example, the voice bit stream and the voice control information can be converted to a network data packet in a predetermined format, and then sent (i.e., transmitted) to the voice server 20 via the transmission module 106. Accordingly, the voice server 20 can receive the voice bit stream and the voice control information.
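
A minimal sketch of how a terminal might bundle the voice bit stream and its voice control information into one network packet, assuming a simple length-prefixed JSON header; the framing and field names are illustrative assumptions, not a format defined in the disclosure.

```python
import json
import struct

def build_packet(voice_frame: bytes, control_info: dict) -> bytes:
    """Prefix a length-framed JSON control header to the voice payload."""
    header = json.dumps(control_info).encode("utf-8")
    return struct.pack("!I", len(header)) + header + voice_frame

# Example: first and second voice control information travel alongside
# the encoded voice frame in a single packet.
packet = build_packet(b"\x01\x02\x03", {
    "short_time_energy": 123456,
    "long_time_energy": 98765,
    "vad": True,
    "request_server_mixing": True,
})
```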

[0050] In Step S140, at least one voice bit stream returned by the voice server is received. For example, after the voice server 20 receives the voice bit stream and the voice control information, the voice server 20 can perform a corresponding voice mixing process, and can then return a voice bit stream formed by the voice mixing process, or return multiple voice bit streams that have not been processed using the voice mixing process. The voice mixing process and the returning of voice bit stream(s) are further detailed above and below in the present disclosure. Accordingly, the terminal 21 can receive the voice bit stream(s) returned by the voice server 20.

[0051] In Step S150, the at least one voice bit stream is outputted. For example, the number of the voice bit streams can be determined first. When only one voice bit stream is received, it can indicate that the voice mixing process has been performed on the voice server 20, or that there is only one person at other end(s) of the current voice session. In this case, the voice bit stream can be directly decoded and outputted.

[0052] When multiple voice bit streams are received, the voice bit streams may need to be processed in a voice mixing process. In various embodiments, the voice mixing process can refer to adding together PCM streams of various voice bit streams, to generate a mixed voice bit stream. After the voice mixing process, the voice bit stream(s) (e.g., the mixed voice bit stream) can be outputted. In various embodiments, the outputting can refer to converting a voice bit stream to an analog electrical signal via the audio circuit 110 and then outputting the analog electrical signal to the speaker 101 or the audio jack 103. Thus, the user can hear an outputted sound directly or hear the outputted sound via any of headphones and/or speakers (e.g., loudspeakers) that are connected to the audio jack 103.
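
The receive-and-output behavior of Steps S140-S150 can be sketched as below; decode_to_pcm, play_pcm, and mix_pcm stand in for the terminal's codec, audio output, and mixing routines, and are assumptions rather than names used in the disclosure.

```python
def handle_returned_streams(streams, decode_to_pcm, play_pcm, mix_pcm):
    """streams: list of encoded voice bit streams returned by the voice server."""
    pcm_streams = [decode_to_pcm(s) for s in streams]
    if len(pcm_streams) == 1:
        # Either the server already mixed the streams, or only one remote
        # participant is speaking: decode and output directly.
        play_pcm(pcm_streams[0])
    else:
        # Multiple streams returned: mix them on the terminal before output.
        play_pcm(mix_pcm(pcm_streams))
```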

[0053] According to various embodiments, during a voice call, in addition to a voice bit stream sent to a voice server, voice control information to be used by the voice server to determine a voice-mixing strategy can also be sent to the voice server. Correspondingly, the voice server does not need to extract the voice control information from the voice bit stream. Burden on the voice server can thus be reduced.

[0054] FIG. 4 depicts a structure diagram of an exemplary voice processing apparatus in accordance with various disclosed embodiments. As shown in FIG. 4, a voice processing apparatus 200 can include a first obtaining unit 210, a second obtaining unit 220, a sending unit 230, a receiving unit 240, and/or an output unit 250. For example, the various units described above can be stored in the memory 104 (as shown in FIG. 2), in order to be executed by the processor 102 (as shown in FIG. 2). The units are included in the memory 104 (e.g., the memory 104 as shown in FIG. 2), for illustrative purposes only. The units can be included in, and/or distributed among various components of a computer system in order to accomplish the respective functions, without limitation.

[0055] The first obtaining unit 210 is configured to obtain a voice bit stream to be sent. The second obtaining unit 220 is configured to obtain voice control information to be used by a voice server to determine a voice-mixing strategy, corresponding to the voice bit stream.

[0056] The sending unit 230 is configured to send the obtained voice bit stream and the voice control information to a voice server. The receiving unit 240 is configured to receive at least one voice bit stream returned by the voice server.

[0057] The output unit 250 is configured to output the at least one voice bit stream. For example, the number of the voice bit streams can be determined first. When only one voice bit stream is received, it can indicate that a voice mixing process has been performed on the voice server 20 (e.g., as shown in FIG. 1), or that there is only one person at other end(s) of the current voice session. In this case, the voice bit stream can be directly decoded and outputted. When multiple voice bit streams are received, the voice bit streams may need to be processed in the voice mixing process.

[0058] Thus, the voice processing apparatus 200 can further include a voice-mixing unit 260. When the at least one voice bit stream includes multiple voice bit streams, the voice-mixing unit 260 is configured to perform a voice mixing process on the at least one voice bit stream. Further details of the voice processing apparatus 200 can be similar to or the same as depicted in the voice processing methods in accordance with various embodiments (e.g., as shown in FIG. 3).

[0059] According to various embodiments, during a voice call, in addition to a voice bit stream sent to a voice server, voice control information to be used by the voice server to determine a voice-mixing strategy can also be sent to the voice server. Correspondingly, the voice server does not need to extract the voice control information from the voice bit stream. Burden on the voice server can thus be reduced.

[0060] FIG. 5 depicts a flow diagram of another exemplary voice processing method in accordance with various disclosed embodiments. In various embodiments, the exemplary voice processing method can be implemented by the voice server 20 (e.g., as shown in FIG. 1). As shown in FIG. 5, the method can include the following exemplary steps.

[0061] In Step S310, voice bit streams and voice control information to be used by the voice server to determine a voice-mixing strategy (or voice-mixing strategies), sent by a plurality of terminals, are received. In various embodiments, the voice control information can include first voice control information directly extracted from the voice bit stream, and/or second voice control information obtained using other methods.

[0062] For example, as shown in FIG. 1, a plurality of terminals 21 can participate in a voice session. Therefore, each terminal 21 of the plurality of terminals 21 can simultaneously send a voice bit stream and voice control information to the server 20. Contents of the voice control information can be similar to or the same as the voice control information described in the voice processing methods as disclosed in various embodiments (e.g., as shown in FIG. 3). Accordingly, the voice server 20 can receive the voice bit streams and the voice control information sent by the plurality of terminals 21.

[0063] In Step S320, according to the voice control information, a first voice-mixing strategy and a second voice-mixing strategy are generated. The first voice-mixing strategy can include information that determines on which voice bit streams a voice mixing process is to be performed.

[0064] For example, there can be five terminals participating in a voice session, and the five terminals can be respectively identified as A, B, C, D, and E. Normally, for each terminal, the voice bit streams corresponding to other terminals can be processed using a voice mixing process. For example, for terminal A, terminals B, C, D and E can be selected for the voice mixing process. For terminal B, terminals A, C, D and E can be selected for the voice mixing process, and so on. However, such a method may process all the voice bit streams and consume significant resources.

[0065] The first voice-mixing strategy can be generated according to the first voice control information, and/or any other suitable information, without limitation. In one example, according to the first voice control information, voice bit stream(s) that are effective voice bit stream(s) can be identified. The voice mixing process can process only the effective voice bit streams. An effective voice bit stream can refer to a voice bit stream that provides a voice input that is an effective input. The effective input can refer to, e.g., having a sound being inputted and the sound is not background sound or noise. Whether a voice input is an effective input can be determined according to information including, e.g., short time energy, long time energy, voice activity detection information, or a combination thereof. In one example, a predetermined number (e.g., ranging from about 2 to about 5) of the greatest (e.g., loudest, or having the greatest short time energy, or having the greatest long time energy) voice bit streams can be identified for the voice mixing process.
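
A sketch of this first voice-mixing strategy: for each terminal, keep only effective streams from the other terminals and pick the few with the greatest energy. The limit of three streams and the dictionary layout are illustrative assumptions.

```python
def select_streams(inputs, max_streams=3):
    """inputs: dict terminal_id -> {"energy": int, "vad": bool}.
    Returns dict terminal_id -> list of terminal_ids selected for mixing."""
    selection = {}
    for target in inputs:
        # Candidates: effective streams (voice activity detected) from all
        # terminals other than the target itself.
        candidates = [(tid, info) for tid, info in inputs.items()
                      if tid != target and info["vad"]]
        # Keep the loudest few, per the first voice-mixing strategy.
        candidates.sort(key=lambda item: item[1]["energy"], reverse=True)
        selection[target] = [tid for tid, _ in candidates[:max_streams]]
    return selection

# Example with five terminals A-E.
inputs = {"A": {"energy": 900, "vad": True}, "B": {"energy": 50, "vad": False},
          "C": {"energy": 700, "vad": True}, "D": {"energy": 300, "vad": True},
          "E": {"energy": 100, "vad": True}}
print(select_streams(inputs)["A"])  # ['C', 'D', 'E']
```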

[0066] In one example, attribute information of a user corresponding to each terminal (e.g., terminal 21) can be obtained first. A voice-mixing priority of the user can be obtained according to the attribute information. For example, in a voice session, there can be a host among a plurality of participating users. Voice of the host can have the highest voice-mixing priority. Thus, the first voice-mixing strategy (or the second voice-mixing strategy) should include at least the voice bit stream from the terminal 21 logged in by the host. For other users, when a voice-mixing priority of a user exceeds a preset voice-mixing priority level, the user can be included in a list of users that must be selected, e.g., by the first voice-mixing strategy for the voice mixing process.

[0067] The second voice-mixing strategy can include information that determines whether to perform the voice mixing process on the voice server 20 or to have the voice mixing process performed by the terminal 21. In one embodiment, the second voice-mixing strategy can be generated according to the second voice control information.

[0068] For example, the second voice control information (or the second voice-mixing strategy) can include a request that specifies voice mixing to be performed on the server side (i.e., on the server). Thus, the second voice-mixing strategy can specify to perform the voice mixing process on the voice server 20. Optionally, the second voice-mixing strategy can specify to perform the voice mixing process on the voice server 20 by marking (i.e., marking in the second voice-mixing strategy), or including therein a tag, flag, or any other suitable identification that can specify to perform the voice mixing process on the voice server 20. When the second voice control information does not include the request that specifies voice mixing to be performed on the server side, the second voice-mixing strategy can by default specify to have the voice mixing process performed by the terminal 21. Optionally, the second voice-mixing strategy can specify to have the voice mixing process performed by the terminal 21 by including therein a tag, flag, or any other suitable identification that can specify to have the voice mixing process performed by the terminal 21.

[0069] In one embodiment, a current voice session has been paid and thus can have a relatively high priority (i.e., voice-mixing priority). In that case, the second voice-mixing strategy can specify to perform the voice mixing process on the voice server 20.

[0070] Further, the second voice-mixing strategy is not limited to being generated according to the second voice control information. For example, the voice server 20 may generate the second voice-mixing strategy according to its own hardware resource condition, and/or characteristics of the user associated with the terminal 21.

[0071] In one example, the voice server 20 can detect whether it has sufficient hardware resources. The hardware resources can include, e.g., quota of processing time of the processor, storage space, etc. When there are sufficient hardware resources, the second voice-mixing strategy can specify to perform the voice mixing process on the voice server 20. When there are not sufficient hardware resources, the second voice-mixing strategy can specify to have the voice mixing process performed by the terminal 21.

[0072] Further, the voice-mixing strategy (e.g., the second voice-mixing strategy) can change dynamically. For example, at a first moment, according to the hardware resource condition (e.g., hardware resource consumption condition), the second voice-mixing strategy can specify to have the voice mixing process performed by the terminal 21. When, at a second moment, the voice server 20 detects that certain hardware resources become available, the second voice-mixing strategy can be changed to specify to have the voice mixing process performed on the voice server 20.

[0073] In one embodiment, the voice server 20 can generate (or determine) the second voice-mixing strategy according to user rights corresponding to the terminals 21. First, the user rights corresponding to the user of each terminal 21 can be obtained. When the user rights of a certain user exceed a preset level, the second voice-mixing strategy can specify to perform the voice mixing process on the voice server 20. When the user rights of the certain user do not exceed the preset level, the second voice-mixing strategy can specify to have the voice mixing process performed by the terminal 21 (e.g., the terminal 21 corresponding to the certain user).

[0074] For example, during a voice session, a plurality of participating users can be charged users, and can thus have a higher voice-mixing priority relative to a free user. In that case, the second voice-mixing strategy can specify to perform the voice mixing process on the voice server 20 (e.g., for the charged users). Otherwise, when the plurality of participating users are free users, the second voice-mixing strategy can specify to have the voice mixing process performed by the terminal 21.
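
Drawing the factors of paragraphs [0068], [0071], and [0073]-[0074] together, the sketch below marks where the voice mixing process should run for one terminal; the priority order of the checks, the field names, and the threshold values are illustrative assumptions rather than rules stated in the disclosure.

```python
def decide_mixing_location(control_info, server_load, user_level,
                           load_limit=0.8, preset_level=5):
    """Return "server" or "terminal" for the second voice-mixing strategy.

    control_info: second voice control information sent by the terminal.
    server_load:  fraction of the voice server's hardware resources in use.
    user_level:   rights level of the current user.
    """
    # An explicit request from the terminal asks for server-side mixing.
    if control_info.get("request_server_mixing"):
        return "server"
    # Users whose rights exceed the preset level get server-side mixing.
    if user_level > preset_level:
        return "server"
    # With sufficient hardware resources, mix on the voice server.
    if server_load < load_limit:
        return "server"
    # Otherwise leave the voice mixing process to the terminal.
    return "terminal"

# Example: a free user on a busy server falls back to terminal mixing.
print(decide_mixing_location({"vad": True}, server_load=0.95, user_level=1))
```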

[0075] The above examples of generating the first voice-mixing strategy and the second voice-mixing strategy according to actual situations are merely illustrative, and the actual situations are not limited in the present disclosure. In various embodiments, information of how to generate the first voice-mixing strategy and the second voice-mixing strategy according to actual situations can be stored in management configuration information. The management configuration information can include document(s) or file(s) that describe how to generate the first voice-mixing strategy and the second voice-mixing strategy according to the first voice control information, the second voice control information and/or other suitable information.

[0076] In Step S330, multiple voice bit streams are selected respectively for each terminal for a voice mixing process, according to the first voice-mixing strategy. In Step S340, according to the second voice-mixing strategy, the multiple voice bit streams for the voice mixing process are returned to a corresponding terminal, or the voice mixing process is performed on the multiple voice bit streams and the voice bit streams (e.g., a mixed voice bit stream) from the voice mixing process are returned to the corresponding terminal.

[0077] The above-described voice mixing process can be completed on a single-level server. That is, the voice server 20 can complete the voice mixing tasks (e.g., the voice mixing process) and return the voice bit streams (e.g., a mixed voice bit stream obtained after the voice mixing process is completed) to the corresponding terminal. In various embodiments, the voice server 20 can complete the voice mixing tasks (e.g., the voice mixing process) to generate a mixed voice bit stream. Thus, the returning of the voice bit streams from the voice mixing process to the corresponding terminal can include returning the mixed voice bit stream to the corresponding terminal.
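
A sketch of Step S340 as just described: depending on the second voice-mixing strategy, the server either mixes the selected streams itself or returns them for terminal-side mixing. send_to_terminal and mix_pcm stand in for the transport and mixing routines and are assumptions.

```python
def dispatch(selected_streams, second_strategy, mix_pcm, send_to_terminal):
    """selected_streams: dict terminal_id -> list of PCM streams chosen by
    the first voice-mixing strategy for that terminal.
    second_strategy:  dict terminal_id -> "server" or "terminal"."""
    for terminal_id, streams in selected_streams.items():
        if second_strategy[terminal_id] == "server":
            # Mix on the voice server and return a single mixed stream.
            send_to_terminal(terminal_id, [mix_pcm(streams)])
        else:
            # Return the unmixed streams; the terminal mixes them itself.
            send_to_terminal(terminal_id, streams)
```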

[0078] The voice mixing process can also be completed by a multi-stage cascade voice server. For example, FIG. 6 depicts an exemplary architecture diagram of multi-stage cascade voice mixing in accordance with various disclosed embodiments. As shown in FIG. 6, after the voice server 20 directly receives a voice (i.e., a voice bit stream) inputted by the terminal 21 and completes the voice mixing process according to the voice-mixing strategies (e.g., the first voice-mixing strategy and/or the second voice-mixing strategy), output from the voice mixing process is not directly returned to the terminal 21 and can instead become input for a superior voice server 30 of the voice server 20.

[0079] For the superior voice server 30, the voice server 20 can be equivalent to a client-side (e.g., a terminal) of the superior voice server 30. The superior voice server 30 can complete the voice mixing process using voice-mixing strategies similar to or the same as the first voice-mixing strategy and/or the second voice-mixing strategy as depicted in various disclosed embodiments. The voice bit streams from the completed voice mixing process can be returned by the superior voice server 30 to the voice server 20, and then be forwarded to the corresponding terminal 21 by the voice server 20 after certain necessary processes are completed.
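
A rough sketch of the cascade arrangement in FIG. 6, where a lower-level server's mixed output is fed to its superior server as if it came from a terminal; the class layout and method names are illustrative assumptions.

```python
class MixingStage:
    """One stage in a multi-stage cascade of voice-mixing servers."""

    def __init__(self, mix_pcm, parent=None):
        self.mix_pcm = mix_pcm   # mixing routine used by this stage
        self.parent = parent     # superior voice server, if any
        self.inputs = []         # streams received from terminals or children

    def receive(self, pcm_stream):
        self.inputs.append(pcm_stream)

    def flush(self):
        """Mix the collected inputs; forward the result upstream when a
        superior server exists, otherwise return it for the terminals."""
        mixed = self.mix_pcm(self.inputs)
        self.inputs = []
        if self.parent is not None:
            self.parent.receive(mixed)
            return None
        return mixed
```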

[0080] Further, FIG. 6 depicts a two-stage cascade structure for illustrative purposes only. The methods, apparatus and systems depicted in various embodiments are not limited to the two-stage cascade structure. More stages can be used in the multi-stage cascade structure according to needs of practical applications, without limitation.

[0081] In the voice processing methods in accordance with various embodiments, a voice server (e.g., the voice server 20 as shown in FIGS. 1 or 6) can dynamically determine voice-mixing strategies according to various factors. Thus, consumption of hardware resources on the voice server can be reduced, and desired voice mixing effects for a terminal can be ensured.

[0082] FIG. 7 depicts a structure diagram of another exemplary voice processing apparatus in accordance with various disclosed embodiments. The voice processing apparatus can be a server apparatus and can be implemented on a voice server. As shown in FIG. 7, the voice processing apparatus 400 can include a receiving unit 410, a voice-mixing-strategy generation unit 420, a selection unit 430, and/or a voice-mixing processing unit 440. Certain units may be omitted and other units may be included.

[0083] The receiving unit 410 is configured to receive voice bit streams and voice control information to be used by the voice server to determine a voice-mixing strategy (or voice-mixing strategies), sent respectively by a plurality of terminals. The voice-mixing-strategy generation unit 420 is configured to generate a first voice-mixing strategy and a second voice-mixing strategy, according to the voice control information.

[0084] The selection unit 430 is configured to select multiple voice bit streams respectively for each terminal for a voice mixing process, according to the first voice-mixing strategy. The voice-mixing processing unit 440 is configured to return the multiple voice bit streams for the voice mixing process to a corresponding terminal, or to perform the voice mixing process on the multiple voice bit streams and return the voice bit streams (e.g., a mixed voice bit stream) from the voice mixing process to the corresponding terminal, according to the second voice-mixing strategy.

[0085] Further details of the voice processing apparatus 400 can be similar to or the same as depicted in the voice processing methods in accordance with various embodiments (e.g., as shown in FIG. 5). Using the voice processing apparatus in accordance with various embodiments, a voice server (e.g., the voice server 20 as shown in FIGS. 1 or 6) can dynamically determine voice-mixing strategies according to various factors. Thus, consumption of hardware resources on the voice server can be reduced, and desired voice mixing effects for a terminal can be ensured.

[0086] FIG. 8 depicts a flow diagram of another exemplary voice processing method in accordance with various disclosed embodiments. As shown in FIG. 8, the method can include the following exemplary steps that can be implemented on a plurality of terminals.

[0087] In Step S110, a voice bit stream to be sent is obtained. In Step S120, corresponding to the voice bit stream, voice control information to be used by a voice server to determine a voice-mixing strategy (or voice-mixing strategies) is obtained.

[0088] In Step S130, the obtained voice bit stream and the voice control information are sent to a voice server. Further details of Steps S110-S130 as above can be similar to or the same as depicted in the voice processing methods in accordance with various embodiments (e.g., as shown in FIG. 3).

[0089] As shown in FIG. 8, the method can further include the following exemplary steps that can be implemented on the voice server. In Step S310, voice bit streams and corresponding voice control information respectively sent by the plurality of terminals are received.

[0090] In Step S320, according to the voice control information, a first voice-mixing strategy and a second voice-mixing strategy are generated.

[0091] In Step S330, multiple voice bit streams are selected respectively for each terminal for a voice mixing process, according to the first voice-mixing strategy. In Step S340, according to the second voice-mixing strategy, the multiple voice bit streams for the voice mixing process are returned to a corresponding terminal, or the voice mixing process is performed on the multiple voice bit streams and the voice bit streams (e.g., a mixed voice bit stream) from the voice mixing process are returned to the corresponding terminal. Further details of Steps S310-S340 as above can be similar to or the same as depicted in the voice processing methods in accordance with various embodiments (e.g., as shown in FIG. 5).

[0092] As shown in FIG. 8, after Step S340, the method can further include the following exemplary steps that can be implemented on the plurality of terminals. In Step S140, at least one voice bit stream returned by the voice server is received. In Step S150, the at least one voice bit stream is outputted.

[0093] In the voice processing methods in accordance with various embodiments, a terminal can send voice control information to be used by a voice server to determine a voice-mixing strategy to the voice server. The voice server can dynamically determine voice-mixing strategies according to various factors. Thus, consumption of hardware resources on the voice server can be reduced, and desired voice mixing effects for a terminal can be ensured.

[0094] FIG. 9 depicts a structure diagram of an exemplary voice processing system in accordance with various disclosed embodiments. As shown in FIG. 9, the voice processing system 1000 can include a terminal module 61 and a server module 62.

[0095] The terminal module 61 can include a first obtaining unit 210, a second obtaining unit 220, and a sending unit 230. The first obtaining unit 210 is configured to obtain a voice bit stream to be sent.

[0096] The second obtaining unit 220 is configured to obtain voice control information corresponding to the voice bit stream, to be used by the server module 62 to determine a voice-mixing strategy. The sending unit 230 is configured to send the obtained voice bit stream and the obtained voice control information to the server module 62.

[0097] The server module 62 can include a receiving unit 410, a voice-mixing-strategy generation unit 420, a selection unit 430, and/or a voice-mixing processing unit 440. The receiving unit 410 is configured to receive voice bit streams and corresponding voice control information to be used by the server module 62 to determine a voice-mixing strategy (or voice-mixing strategies) sent respectively by a plurality of terminals (e.g., a plurality of the terminal modules 61).

[0098] The voice-mixing-strategy generation unit 420 is configured to generate a first voice-mixing strategy and a second voice-mixing strategy, according to the voice control information. The selection unit 430 is configured to select multiple voice bit streams respectively for each terminal (e.g., each terminal module 61) for a voice mixing process, according to the first voice-mixing strategy. The voice-mixing processing unit 440 is configured to return the multiple voice bit streams for the voice mixing process to a corresponding terminal module 61, or to perform the voice mixing process on the multiple voice bit streams and return the voice bit streams from the voice mixing process to the corresponding terminal module 61, according to the second voice-mixing strategy.

[0099] The terminal module 61 can further include a receiving unit 240 and an output unit 250. The receiving unit 240 is configured to receive at least one voice bit stream returned by the voice server.

[00100] The output unit 250 is configured to output the at least one voice bit stream. Further details of various units/modules of the voice processing system as above can be similar to or the same as depicted in the voice processing apparatus in accordance with various embodiments (e.g., as shown in FIGS. 4 and/or 7).

[00101] In the voice processing systems in accordance with various embodiments, a terminal can send, to a voice server, voice control information to be used by the voice server to determine a voice-mixing strategy. The voice server can dynamically determine voice-mixing strategies according to various factors. Thus, consumption of hardware resources on the voice server can be reduced, and desired voice mixing effects for a terminal can be ensured.

[00102] Therefore, various embodiments also provide a voice processing method. In an exemplary method, one terminal of a plurality of terminals can obtain a voice bit stream to be sent. The one terminal can obtain voice control information corresponding to the voice bit stream to be sent. The voice control information is used for a voice server to determine a voice-mixing strategy. The one terminal can send the voice bit stream and the voice control information to the voice server. The voice server can receive, from each terminal of the plurality of terminals, a voice bit stream and corresponding voice control information. The voice server can generate the voice-mixing strategy by generating a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal. According to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals, the voice server can select multiple voice bit streams that need a voice mixing process. According to the second voice-mixing strategy, the voice server can return the multiple voice bit streams that need the voice mixing process to a corresponding terminal of the plurality of terminals, or perform the voice mixing process on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream and return the mixed voice bit stream to the corresponding terminal.

[00103] Various embodiments also provide a voice processing system, comprising a voice server. The voice server communicates with one terminal of a plurality of terminals via a network. The one terminal includes a first obtaining unit configured to obtain a voice bit stream to be sent. The one terminal includes a second obtaining unit configured to obtain voice control information corresponding to the voice bit stream to be sent. The voice control information is used for the voice server to determine a voice-mixing strategy. The one terminal further includes a sending unit configured to send the voice bit stream and the voice control information to the voice server. The voice server includes a receiving unit configured to receive, from each terminal of the plurality of terminals, a voice bit stream and corresponding voice control information. The voice server includes a voice-mixing-strategy generation unit configured to generate a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal. The voice server includes a selection unit configured to select multiple voice bit streams that need a voice mixing process, according to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals. The voice server further includes a voice-mixing processing unit configured to, according to the second voice-mixing strategy, return the multiple voice bit streams that need the voice mixing process to a corresponding terminal of the plurality of terminals, or perform the voice mixing process on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream and return the mixed voice bit stream to the corresponding terminal.

[00104] Further, various embodiments provide a (non-transitory) computer-readable storage medium having computer-executable instructions stored therein. The computer-readable storage medium can include nonvolatile memory, e.g., an optical disk, a hard disk, and/or flash memory. The computer-executable instructions can be used for a computer or similar computing apparatus to implement (or complete, or accomplish) the voice processing methods in accordance with various disclosed embodiments.

[00105] The embodiments disclosed herein are exemplary only. Other applications, advantages, alterations, modifications, or equivalents to the disclosed embodiments are obvious to those skilled in the art and are intended to be encompassed within the scope of the present disclosure.

[00106] Without limiting the scope of any claim and/or the specification, examples of industrial applicability and certain advantageous effects of the disclosed embodiments are listed for illustrative purposes. Various alterations, modifications, or equivalents to the technical solutions of the disclosed embodiments can be obvious to those skilled in the art and can be included in this disclosure.

[00107] The disclosed methods and systems can be used in a variety of Internet applications. By using the disclosed methods and systems, one terminal of a plurality of terminals can obtain a voice bit stream to be sent. The one terminal can obtain voice control information corresponding to the voice bit stream to be sent. The voice control information is used for a voice server to determine a voice-mixing strategy. The one terminal can send the voice bit stream and the voice control information to the voice server.

[00108] Therefore, during a voice session, in addition to a voice bit stream sent to a voice server, voice control information to be used by the voice server to determine a voice-mixing strategy can also be sent to the voice server. Correspondingly, the voice server does not need to extract the voice control information from the voice bit stream. Burden on the voice server can thus be reduced.
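To make the point above concrete under assumed conventions, the sketch below parses only the small control header that accompanies each incoming packet and leaves the encoded voice payload untouched; the server would decode audio only for streams it actually has to mix. The two-part packet layout (a JSON header plus an opaque payload) mirrors the hypothetical terminal-side sketch earlier and is an assumption, not the disclosed wire format.

    # Hypothetical sketch of server-side reception: read the voice control
    # information sent alongside the bit stream instead of extracting it
    # from the encoded audio itself.
    import json

    def parse_packet(header_bytes, payload_bytes):
        """Return (control_info, encoded_frame) without decoding any audio."""
        control_info = json.loads(header_bytes.decode('utf-8'))
        return control_info, payload_bytes   # payload stays opaque here

    # Example: the server learns the sender is active without touching the audio.
    header = b'{"user_id": "A", "vad": true, "long_term_energy": 7.5}'
    control, frame = parse_packet(header, b'\x00\x01\x02')
    print(control['vad'], len(frame))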

[00109] The voice server can receive, from each terminal of the plurality of terminals, a voice bit stream and corresponding voice control information. The voice server can generate the voice-mixing strategy by generating a first voice-mixing strategy and a second voice-mixing strategy according to the voice control information received from the each terminal. According to the first voice-mixing strategy and respectively for the each terminal of the plurality of terminals, the voice server can select multiple voice bit streams that need a voice mixing process. According to the second voice-mixing strategy, the voice server can return the multiple voice bit streams that need the voice mixing process to a corresponding terminal of the plurality of terminals, or perform the voice mixing process on the multiple voice bit streams that need the voice mixing process to generate a mixed voice bit stream and return the mixed voice bit stream to the corresponding terminal.

[00110] Therefore, a voice server can dynamically determine voice-mixing strategies according to various factors. Thus, consumption of hardware resources on the voice server can be reduced, and desired voice mixing effects for a terminal can be ensured.

Text appearing in the drawings:
Terminal 21
Voice server 22
Network 23
Superior voice server 30
Electronic device 100
Processors 102
Memory 104
Transmission module 106
Audio circuit 110
Speaker 101
Audio jack 103
Microphone 105
Voice processing apparatus 200
First obtaining unit 210
Second obtaining unit 220
Sending unit 230
Receiving unit 240
Output unit 250
Voice-mixing unit 260
Voice processing apparatus 400
Receiving unit 410
Voice-mixing-strategy generation unit 420
Selection unit 430
Voice-mixing processing unit 440
Terminal module 61
Server module 62
System 1000
Processor 1002
Storage medium 1004
Monitor 1006
Communications 1008
Database 1010
Peripherals 1012
Bus 1014