
Title:
AUDIO TRANSPOSITION
Document Type and Number:
WIPO Patent Application WO/2021/254961
Kind Code:
A1
Abstract:
An electronic device comprising circuitry configured to separate by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and to transpose an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of a second vocal signal.

Inventors:
FERRAS FONT MARC (DE)
FABBRO GIORGIO (DE)
HOFFMANN FALK-MARTIN (DE)
KEMP THOMAS (DE)
UHLICH STEFAN (DE)
Application Number:
PCT/EP2021/065967
Publication Date:
December 23, 2021
Filing Date:
June 14, 2021
Assignee:
SONY GROUP CORP (JP)
SONY EUROPE BV (GB)
International Classes:
G10H1/00; G10H1/36
Domestic Patent References:
WO2020103550A12020-05-28
Foreign References:
KR20200065248A2020-06-09
US5296643A1994-03-22
US7974838B12011-07-05
Other References:
JIANGLIN WANG ET AL: "Vocal Folds Disorder Detection using Pattern Recognition Methods", 2007 ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY : [EMBC '07] ; LYON, FRANCE, 22 - 26 AUGUST 2007 ; [IN CONJUNCTION WITH THE BIENNIAL CONFERENCE OF THE SOCIÉTÉ FRANÇAISE DE GÉNIE BIOLOGIQUE ET MÉDICAL (SFGB, 22 August 2007 (2007-08-22), pages 3253 - 3256, XP031336902, ISBN: 978-1-4244-0787-3
UHLICH, STEFAN ET AL.: "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP", 2017, IEEE, article "Improving music source separation based on deep neural networks through data augmentation and network blending"
DER-JENQ LIU; CHIN-TENG LIN: "Fundamental frequency estimation based on the joint time-frequency analysis of harmonic spectral structure", IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, vol. 9, no. 6, September 2001 (2001-09-01), pages 609 - 621, XP011054124
SONDHI, M. M.: "New methods of pitch extraction", IEEE TRANS. AUDIO ELECTROACOUST. AU-16, 1968, pages 262 - 266, XP002112239
ROSS, M. J.; SHAFFER, H. L.; COHEN, A.; FREUDBERG, R.; MANLEY, H. J.: "Average magnitude difference function pitch extractor", IEEE TRANS. ACOUST. SPEECH SIGNAL PROCESS. ASSP-22, 1974, pages 353 - 362, XP002434823, DOI: 10.1109/TASSP.1974.1162598
MOORER, J. A.: "The optimum comb method of pitch period analysis of continuous digitized speech", IEEE TRANS. ACOUST. SPEECH SIGNAL PROCESS. ASSP-22, 1974, pages 330 - 338
MOORER, J. A: "Linear Prediction of Speech", 1974, SPRINGER-VERLAG
NOLL, A.M.: "Cepstrum pitch determination", J. ACOUST. SOC. AM., vol. 41, 1966, pages 293 - 309, XP000579956, DOI: 10.1121/1.1910339
SCHROEDER, M. R.: "Period histogram and product spectra: New methods for fundamental frequency measurement", J. ACOUST. SOC. AM., vol. 43, 1968, pages 829 - 834, XP008058427, DOI: 10.1121/1.1910902
R.C. MAHER; J. W. BEAUCHAMP: "Fundamental frequency estimation of musical signals using a two-way mismatch procedure", JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 95, no. 4, April 1994 (1994-04-01)
B. SECREST; G. DODDINGTON: "An integrated pitch tracking algorithm for speech systems", ICASSP '83. IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, BOSTON, MASSACHUSETTS, USA, 1983, pages 1352 - 1355
"New phase-vocoder techniques for pitch-shifting, harmonizing and other exotic effects", PROC. 1999 IEEE WORKSHOP ON APPLICATIONS OF SIGNAL PROCESSING TO AUDIO AND ACOUSTICS, NEW PALTZ, NEW YORK
JEAN LAROCHE; MARK DOLSON: "Improved Phase Vocoder Time-Scale Modification of Audio", IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, vol. 7, no. 3, May 1999 (1999-05-01), XP011054370
J. WANG; C. JO: "Vocal Folds Disorder Detection using Pattern Recognition Methods", 2007 29TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY, LYON, 2007, pages 3253 - 3256
DIBAZAR, ALIREZA; NARAYANAN, SHRIKANTH: "A system for automatic recognition of pathological speech", PROCEEDINGS OF THE ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS, November 2002 (2002-11-01)
Attorney, Agent or Firm:
MFG PATENTANWÄLTE (DE)
Claims:
CLAIMS

1. An electronic device comprising circuitry configured to separate by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and to transpose an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of a second vocal signal.

2. The electronic device of claim 1 wherein the circuitry is further configured to determine the first pitch range of the first vocal signal based on a first pitch analysis result of the first vocal signal and the second pitch range of the second vocal signal based on a second pitch analysis result of the second vocal signal.

3. The electronic device of claim 1 wherein the circuitry is further configured to determine the first pitch analysis result based on the first vocal signal and the second pitch analysis result based on the second vocal signal.

4. The electronic device of claim 1 wherein the accompaniment comprises all parts of the audio input signal except for the first vocal signal.

5. The electronic device of claim 1 wherein the audio output signal is the accompaniment.

6. The electronic device of claim 1 wherein the audio output signal is the audio input signal.

7. The electronic device of claim 1 wherein the audio output signal is a mixture of the accompaniment and the first vocal signal.

8. The electronic device of claim 1 wherein the circuitry is further configured to separate the accompaniment into a plurality of instruments.

9. The electronic device of claim 1 wherein the circuitry is further configured to separate a second audio input signal by audio source separation.

10. The electronic device of claim 9, wherein the second audio input signal is separated into the second vocal signal and a remaining signal.

11. The electronic device of claim 1 wherein the circuitry is further configured to determine a singing effort based on the second vocal signal, wherein the transposition value is based on the singing effort and the pitch ratio.

12. The electronic device of claim 11 wherein the singing effort is based on the second pitch analysis result of the second vocal signal and the second pitch range of the second vocal signal.

13. The electronic device of claim 11 wherein the circuitry is further configured to determine the singing effort based on a jitter value and/or a RAP value and/or a shimmer value and/or an APQ value and/or a Noise-to-Harmonic-Ratio and/or a soft phonation index.

14. The electronic device of claim 1, wherein the circuitry is configured to transpose the audio output signal based on a pitch ratio, such that the transposition value corresponds to an integer multiple of a semitone.

15. The electronic device of claim 1, wherein the circuitry comprises a microphone configured to capture the second vocal signal.

16. The electronic device of claim 1, wherein the circuitry is configured to capture the first audio input signal from a real audio recording.

17. A method comprising: separating by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and transposing an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of the second vocal signal.

18. A computer program comprising instructions, the instructions when executed on a processor causing the processor to perform the method of claim 17.

Description:
AUDIO TRANSPOSITION

TECHNICAL FIELD

The present disclosure generally pertains to the field of audio processing, and in particular, to devices, methods and computer programs for audio transposition.

TECHNICAL BACKGROUND

There is a lot of audio content available, for example, in the form of compact disks (CD), tapes, audio data files which can be downloaded from the internet, but also in the form of soundtracks of videos, e.g. stored on a digital video disk or the like, etc.

When a music player is playing a song of an existing music database, the listener may want to sing along. Naturally, the listener’s voice will add to the original artist’s voice present in the recording and potentially interfere with it. This may hinder or skew the listener’s own interpretation of the song. Therefore, karaoke systems provide a playback of a song in the musical key of the original song recording, for a karaoke singer to sing along with the playback. This can force the karaoke singer to reach a pitch range that is beyond his capabilities, i.e. too high or too low. This may result in a high singing effort for the karaoke singer to reach the pitch range of the original song, and therefore the karaoke singer may not be able to stand long singing sessions or could damage his vocal cords. This may also result in the karaoke user having to adapt his pitch to reduce his effort and save his vocal cords, and therefore the overall quality of the performance may be bad.

Although there generally exist techniques for audio transposition, it is generally desirable to improve methods and apparatus for transposition of audio content.

SUMMARY

According to a first aspect the disclosure provides an electronic device comprising circuitry configured to separate by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and to transpose an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of a second vocal signal.

According to a second aspect the disclosure provides a method comprising: separating by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and transposing an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of the second vocal signal.

Further aspects are set forth in the dependent claims, the following description and the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are explained by way of example with respect to the accompanying drawings, in which:

Fig. 1 schematically shows a first embodiment of a process of a karaoke system to automatically transpose an audio signal based on audio source separation and pitch range estimation;

Fig. 2 schematically shows a general approach of audio upmixing/remixing by means of blind source separation (BSS), such as music source separation (MSS);

Fig. 3 shows in more detail an embodiment of a process of pitch analysis performed in the pitch analyzer in Fig. 1;

Fig. 4 schematically shows a flow chart describing the process of the pitch range determiner of Fig. 1;

Fig. 5 schematically shows a graph of a pitch analysis result;

Fig. 6 schematically shows a flow chart describing the process of the pitch range comparator of Fig. 1;

Fig. 7 schematically shows a flow chart describing the process of the transposer of Fig. 1;

Fig. 8 schematically shows a second embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation;

Fig. 9 schematically describes a singing effort determiner of Fig. 8;

Fig. 10 schematically shows the transposition value determiner of Fig. 8;

Fig. 11 schematically shows a third embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation;

Fig. 12 schematically shows a fourth embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation;

Fig. 13 schematically shows a fifth embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation; and

Fig. 14 schematically describes an embodiment of an electronic device that can implement the processes of pitch range determination and transposition as described above.

DETAILED DESCRIPTION OF EMBODIMENTS

Before a detailed description of the embodiments under reference of Fig. 1 to Fig. 14, some general explanations are made.

The embodiments disclose an electronic device comprising circuitry configured to separate by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and to transpose an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of the second vocal signal.

The electronic device may for example be any music or movie reproduction device such as a karaoke box, a smartphone, a PC, a TV, a synthesizer, a mixing console or the like.

The circuitry of the electronic device may include a processor (for example a CPU), a memory (RAM, ROM or the like) and/or storage, interfaces, etc. Circuitry may comprise or may be connected with input means (mouse, keyboard, camera, etc.), output means (display (e.g. liquid crystal, (organic) light emitting diode, etc.), loudspeakers, etc.), a (wireless) interface, etc., as it is generally known for electronic devices (computers, smartphones, etc.). Moreover, circuitry may comprise or may be connected with sensors for sensing still images or video image data (image sensor, camera sensor, video sensor, etc.).

The input signal can be an audio signal of any type. It can be in the form of analog signals or digital signals, it can originate from a compact disk, digital video disk, or the like, it can be a data file, such as a wave file, mp3-file or the like, and the present disclosure is not limited to a specific format of the input audio content. An input audio content may for example be a stereo audio signal having a first channel input audio signal and a second channel input audio signal, without the present disclosure being limited to input audio contents with two audio channels. In other embodiments, the input audio content may include any number of channels, such as a remixing of a 5.1 audio signal or the like.

The input signal may comprise one or more source signals. In particular, the input signal may comprise several audio sources. An audio source can be any entity which produces sound waves, for example, music instruments, voice, vocals, artificially generated sound, e.g. originating from a synthesizer, etc.

The input audio content may represent or include mixed audio sources, which means that the sound information is not separately available for all audio sources of the input audio content, but that the sound information for different audio sources at least partially overlaps or is mixed.

The accompaniment may be a residual signal that results from separating the vocal signal from the audio input signal. For example, the audio input signal may be a piece of music that comprises vocals, guitar, keyboard and drums, and the accompaniment signal may be a signal comprising the guitar, the keyboard and the drums as residual after separating the vocals from the audio input signal.

Transposition may be the changing of the pitch of tones of a piece of music by a certain interval, or shifting an entire piece of music into a different key according to the interval.

A pitch ratio may be a ratio between two pitches. Transposition by a pitch ratio may mean shifting the pitch of tones of a piece of music by the ratio between two pitches, or shifting an entire piece of music into a different key according to the number of semitones that is defined by the ratio between two pitches.

Blind source separation (BSS), also known as blind signal separation, is the separation of a set of source signals from a set of mixed signals. One application for blind source separation (BSS) is the separation of music into the individual instrument tracks such that an upmixing or remixing of the original content is possible.

In the following, the terms remixing, upmixing, and downmixing can refer to the overall process of generating output audio content on the basis of separated audio source signals originating from mixed input audio content, while the term “mixing” can refer to the mixing of the separated audio source signals. Hence the “mixing” of the separated audio source signals can result in a “remixing”, “upmixing” or “downmixing” of the mixed audio sources of the input audio content.

In audio source separation, an input signal comprising a number of sources (e.g. instruments, voices, or the like) is decomposed into separations. Audio source separation may be unsupervised (called “blind source separation”, BSS) or partly supervised. “Blind” means that the blind source separation does not necessarily have information about the original sources. For example, it may not necessarily know how many sources the original signal contained or which sound information of the input signal belongs to which original source. The aim of blind source separation is to decompose the original signal into separations without knowing the separations beforehand. A blind source separation unit may use any of the blind source separation techniques known to the skilled person. In (blind) source separation, source signals may be searched that are minimally correlated or maximally independent in a probabilistic or information-theoretic sense, or structural constraints on the audio source signals can be found on the basis of a non-negative matrix factorization. Methods for performing (blind) source separation are known to the skilled person and are based on, for example, principal components analysis, singular value decomposition, independent component analysis, non-negative matrix factorization, artificial neural networks, etc.

Although some embodiments use blind source separation for generating the separated audio source signals, the present disclosure is not limited to embodiments where no further information is used for the separation of the audio source signals, but in some embodiments, further information is used for generation of separated audio source signals. Such further information can be, for example, information about the mixing process, information about the type of audio sources included in the input audio content, information about a spatial position of audio sources included in the input audio content, etc.
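As an illustration of one of the techniques named above (and not of the separation method used by this application), the following minimal Python/NumPy sketch factorizes a magnitude spectrogram with non-negative matrix factorization and builds a soft mask from a group of components; the frame parameters, the number of components and the assignment of components to "vocals" or "accompaniment" are assumptions made only for the example.

```python
import numpy as np
from sklearn.decomposition import NMF

def magnitude_spectrogram(x, n_fft=1024, hop=256):
    """Frame the signal, apply a Hann window and return |STFT| (freq x time)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

# x: mono audio signal as a float array (assumption: already loaded and normalized)
x = np.random.randn(4 * 22050)            # placeholder signal for the sketch
V = magnitude_spectrogram(x)              # non-negative matrix, shape (freq, time)

nmf = NMF(n_components=8, init="random", max_iter=300, random_state=0)
W = nmf.fit_transform(V)                  # spectral templates, shape (freq, 8)
H = nmf.components_                       # activations, shape (8, time)

# Soft mask for one group of components (here components 0-3, chosen arbitrarily);
# such a mask could be applied to the mixture STFT to obtain one separation.
approx = W @ H + 1e-9
mask = (W[:, :4] @ H[:4, :]) / approx     # values in [0, 1]
```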

The circuitry may be configured to perform the remixing or upmixing based on the at least one filtered separated source and based on other separated sources obtained by the blind source separation to obtain the remixed or upmixed signal. The remixing or upmixing may be configured to perform remixing or upmixing of the separated sources, here “vocals” and “accompaniment”, to produce a remixed or upmixed signal, which may be sent to the loudspeaker system. The remixing or upmixing may further be configured to perform lyrics replacement of one or more of the separated sources to produce a remixed or upmixed signal, which may be sent to one or more of the output channels of the loudspeaker system.

According to some embodiments the circuitry may be further configured to determine the first pitch range of the first vocal signal based on a first pitch analysis result of the first vocal signal and the second pitch range of the second vocal signal based on a second pitch analysis result of the second vocal signal.

According to some embodiments, the accompaniment comprises all parts of the audio input signal except for the first vocal signal.

According to some embodiments, the audio output signal may be the accompaniment.

According to some embodiments, the audio output signal may be the audio input signal.

According to some embodiments, the audio output signal may be a mixture of the accompaniment and the first vocal signal.

According to some embodiments, the circuitry may be further configured to separate the accompaniment into a plurality of instruments.

According to some embodiments, a second audio input signal may be separated into the second vocal signal and a remaining signal.

According to some embodiments the circuitry may be further configured to determine a singing effort based on the second vocal signal, wherein the transposition value is based on the singing effort and the pitch ratio.

According to some embodiments the singing effort may be based on the second pitch analysis result of the second vocal signal and the second pitch range of the second vocal signal.

According to some embodiments the circuitry may be further configured to determine the singing effort based on a jitter value and/or a RAP value and/or a shimmer value and/or an APQ value and/or a Noise-to-Harmonic-Ratio and/or a soft phonation index.
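For illustration only, the following Python/NumPy sketch computes a basic relative jitter and shimmer value from per-cycle pitch periods and peak amplitudes of a vocal signal; the simplified definitions and the example numbers are assumptions, not the exact measures used by this application.

```python
import numpy as np

def jitter_and_shimmer(periods, amplitudes):
    """Relative jitter and shimmer from per-cycle pitch periods (s) and peak amplitudes.

    Jitter: mean absolute difference of consecutive periods, relative to the mean period.
    Shimmer: mean absolute difference of consecutive peak amplitudes, relative to the mean amplitude.
    """
    periods = np.asarray(periods, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
    return jitter, shimmer

# Hypothetical per-cycle measurements extracted from the user's vocal signal
periods = [0.0050, 0.0051, 0.0049, 0.0052, 0.0050]   # seconds per glottal cycle
amps    = [0.80, 0.78, 0.82, 0.79, 0.81]             # linear peak amplitudes
print(jitter_and_shimmer(periods, amps))
```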

According to some embodiments the circuitry may be further configured to transpose the audio output signal based on a pitch ratio, such that the transposition value corresponds to an integer multiple of a semitone.

The transposition value may be rounded up (to the ceiling) or rounded down (to the floor) to the nearest integer multiple of a semitone. Therefore, the accompaniment may be transposed by an integer multiple of a semitone.

According to some embodiments the circuitry may comprise a microphone configured to capture the second vocal signal.

According to some embodiments the circuitry may be further configured to capture the first audio input signal from a real audio recording.

A real audio recording may be any recording of music that is recorded, for example, with a microphone, as opposed to a computer-generated sound. A real audio recording may be stored in a suitable audio file like WAV, MP3, AAC, WMA, AIFF etc. That means the audio input may be actual audio, meaning un-prepared raw audio from, for example, a commercial performance of a song.

The embodiments disclose a method comprising separating by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and transposing an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of the second vocal signal.

The embodiments disclose a computer program comprising instructions, the instructions when executed on a processor causing the processor to perform the method comprising separating by audio source separation a first audio input signal into a first vocal signal and an accompaniment, and transposing an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of the second vocal signal.

Embodiments are now described by reference to the drawings.

Fig. 1 schematically shows a first embodiment of a process of a karaoke system to automatically transpose an audio signal based on audio source separation and pitch range estimation. An audio input signal x(n), which is received from a mono or stereo audio input 13, contains multiple sources (see 1, 2, ..., K in Fig. 2) and is input to a process of Music Source Separation 12 and decomposed into separations (see separated source 2 and residual signal 3 in Fig. 2), here into a separated source 2, namely original vocals s_original(n), and a residual signal 3, namely accompaniment s_Acc(n). An exemplary embodiment of the process of Music Source Separation 12 is described in Fig. 2 below. The audio output signal x*(n) is equal to the accompaniment s_Acc(n), and the audio output signal x*(n) is transmitted to a transposer 17, while the original vocals s_original(n) are transmitted to a signal adder 18 and to a pitch analyzer 14 (more detail in Fig. 3) which estimates a pitch analysis result ω_f,original(n) of the original vocals s_original(n). The pitch analysis result ω_f,original(n) is input into a pitch range estimator 15 (described in more detail in Fig. 4) which estimates a pitch range R_ω,original of the original vocals s_original(n). The pitch range R_ω,original is input into a pitch comparator 16. A user’s microphone 11 acquires an audio input signal y(n), which is input into a process of Music Source Separation 12 and decomposed into separations (see separated source 2 and residual signal 3 in Fig. 2), here into a separated source 2, namely user vocals s_user(n), and a residual signal 3 which is not needed in the following. The user vocals s_user(n) are transmitted to a pitch analyzer 14 (more detail in Fig. 3) which estimates a pitch analysis result ω_f,user(n) of the user vocals s_user(n). The pitch analysis result ω_f,user(n) is input into a pitch range estimator 15 (described in more detail in Fig. 4) which estimates a pitch range R_ω,user of the user vocals s_user(n). The pitch range R_ω,user is input into the pitch comparator 16. The pitch comparator 16 (described in more detail in Fig. 6) receives the pitch range R_ω,original of the original vocals s_original(n) and the pitch range R_ω,user of the user vocals s_user(n) and outputs a pitch ratio R_ω between an average of the pitch range R_ω,original of the original vocals s_original(n) and an average of the pitch range R_ω,user of the user vocals s_user(n). The pitch ratio R_ω is input into a transposer 17 (described in more detail in Fig. 7). The transposer 17 receives as inputs a transposition value transpose_val, which is equal to the pitch ratio R_ω in this case, and the audio output signal x*(n) (= accompaniment s_Acc(n)), and transposes the audio output signal x*(n) (= accompaniment s_Acc(n)) by the pitch ratio R_ω. The transposer outputs a transposed accompaniment s*_Acc(n) and inputs it into a signal adder 18. The signal adder 18 receives the transposed accompaniment s*_Acc(n) and the original vocals s_original(n), adds them together and outputs the added signal to a loudspeaker system 19. The pitch ratio R_ω is further output to a display unit 20 where the value is presented to the user. The display unit 20 further receives lyrics of the user vocals s_user(n) and presents them to the user.

In the embodiment of Fig. 1, audio source separation is performed on the audio input signal y(n) in real-time. The audio input signal y(n) is for example a karaoke signal, which comprises the user’s vocals and a background sound. The background sound may be any noise that may be captured by the microphone of the karaoke singer, for example the noise of a crowd etc. The audio input signal y(n) is processed online through a vocal separation algorithm to extract and potentially remove the user vocals from the background sound. An example for real-time vocal separation is described in the published paper Uhlich, Stefan, et al. “Improving music source separation based on deep neural networks through data augmentation and network blending.” 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, wherein the bi-directional LSTM layers are replaced by uni-directional ones.

The audio source separation is performed on the audio input signal x(n) in real-time. The audio input signal x(n) is for example a song on which the karaoke singing should be performed, which comprises the original vocals and the accompaniment. The audio input signal x(n) may be processed online through a vocal separation algorithm to extract and potentially remove the original vocals from the playback sound, or the audio input signal x(n) may be processed in advance when the audio input signal x(n) is for example stored in a music library. In case of in-advance-processing, the pitch analysis and the pitch range estimation may also be performed in advance. In order to do in-advance-processing, each of the songs in a karaoke song database needs to be analyzed for pitch range.

There exist karaoke boxes where a manual transposition is possible. However, most karaoke singers (also called karaoke users) do not know whether the pitch range is adequate to their capabilities, and therefore an automatic on-line transposition of the accompaniment s_Acc(n) has a great advantage.

In one embodiment the audio input x(n) is a MIDI file (see more details in the description of Fig. 7 below). In this case the karaoke system transposes each of the MIDI tracks of the accompaniment s_Acc(n) by means of a MIDI synthesizer.

In another embodiment the audio input x(n) is an audio recording, for example a WAV file, an MP3 file, an AAC file, a WMA file, an AIFF file etc. That means the audio input x(n) is actual audio, meaning un-prepared raw audio from, for example, a commercial performance of a song. The karaoke material does not require any manual preparation, can be processed totally automatically and on-line, and provides good quality and high realism, so in this embodiment no pre-prepared audio/MIDI material is needed.

To analyze pitch range and singing effort (see Fig. 8) of the karaoke singer, the karaoke system uses a vocal/instrument separation algorithm (see Fig. 2) to obtain a clean vocal recording from the microphone of the karaoke singer or from the original song (sung by the original singer). Although the pitch analysis unit and the transposer unit are functionally separated in Fig. 1, they are both carried out automatically, and both stages are combined such that minimal transposition factors and deviation from the original recording are achieved while minimizing singer fatigue and effort. The system essentially optimizes the performance experience for both singers and listeners of the karaoke session.

Further advantages of the karaoke system described above are that the low-delay processing of vocal/instrument separation allows for an online pitch analysis and transposition. Further, the vocal separation allows for an accurate analysis of the vocal pitch range and determination of the singing effort. Further, since the vocal/instrument separation processes real audio, the karaoke system is not limited to MIDI karaoke songs, and therefore the music is much more realistic. Still further, the vocal/instrument separation enables improved transposition quality of real audio recordings.

Audio remixing/upmixing by means of audio source separation

Fig. 2 schematically shows a general approach of audio upmixing/remixing by means of blind source separation (BSS), such as music source separation (MSS). First, audio source separation (also called “demixing”) is performed which decomposes a source audio signal 1, here the audio input signal x(n), comprising multiple channels i and audio from multiple audio sources Source 1, Source 2, ..., Source K (e.g. instruments, voice, etc.) into “separations”, here a separated source 2, e.g. vocals s_O(n), and a residual signal 3, e.g. accompaniment s_A(n), for each channel i, wherein K is an integer number and denotes the number of audio sources. The residual signal here is the signal obtained after separating the vocals from the audio input signal. That is, the residual signal is the “rest” audio signal after removing the vocals from the input audio signal. In the embodiment here, the source audio signal 1 is a stereo signal having two channels i = 1 and i = 2. Subsequently, the separated source 2 and the residual signal 3 are remixed and rendered to a new loudspeaker signal 4, here a signal comprising five channels 4a-4e, namely a 5.0 channel system. The audio source separation process (see 12 in Fig. 1) may for example be implemented as described in more detail in the published paper Uhlich, Stefan, et al. “Improving music source separation based on deep neural networks through data augmentation and network blending.” 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017.

As the separation of the audio source signal may be imperfect, for example, due to the mixing of the audio sources, a residual signal 3 (r(n)) is generated in addition to the separated audio source signals 2a-2d. The residual signal may for example represent a difference between the input audio content and the sum of all separated audio source signals. The audio signal emitted by each audio source is represented in the input audio content 1 by its respective recorded sound waves. For input audio content having more than one audio channel, such as stereo or surround sound input audio content, a spatial information for the audio sources is typically also included or represented by the input audio content, e.g. by the proportion of the audio source signal included in the different audio channels. The separation of the input audio content 1 into separated audio source signals 2a-2d and a residual 3 is performed on the basis of blind source separation or other techniques which are able to separate audio sources.

In a second step, the separations 2a-2d and the possible residual 3 are remixed and rendered to a new loudspeaker signal 4, here a signal comprising five channels 4a-4e, namely a 5.0 channel system. On the basis of the separated audio source signals and the residual signal, an output audio content is generated by mixing the separated audio source signals and the residual signal on the basis of spatial information. The output audio content is exemplary illustrated and denoted with reference number 4 in Fig. 2.

The audio input x(n) and the audio input y(n) can be separated by the method described in Fig. 2, wherein the audio input y(n) is separated into the user vocals s_user(n) and a non-used background sound, and the audio input x(n) is separated into the original vocals s_original(n) and the accompaniment s_Acc(n). The accompaniment s_Acc(n) may be further separated into the respective tracks, for example drums, piano, strings etc. (see Fig. 11). The separation of the vocals allows large improvements in the way both accompaniment and vocals are processed.

Another method to remove the accompaniment from the audio input y(n) is for example a crosstalk cancellation method, where a reference of the accompaniment is subtracted in-phase from the microphone signal, for example by using adaptive filtering.
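As an illustration of such adaptive filtering (a minimal sketch, not the specific method of this application; variable names are hypothetical), the following Python/NumPy code runs a normalized LMS filter that estimates how the accompaniment reference leaks into the microphone signal and subtracts that estimate, leaving mainly the user's voice.

```python
import numpy as np

def nlms_cancel(mic, ref, taps=64, mu=0.5, eps=1e-8):
    """Normalized LMS: remove the accompaniment reference `ref` from the mic signal `mic`."""
    w = np.zeros(taps)                     # adaptive filter coefficients
    out = np.zeros(len(mic))               # residual = mic minus estimated leakage
    for n in range(taps, len(mic)):
        x = ref[n - taps:n][::-1]          # most recent reference samples, newest first
        y_hat = w @ x                      # estimated accompaniment leakage at sample n
        e = mic[n] - y_hat                 # error = cleaned (mostly vocal) sample
        w += mu * e * x / (x @ x + eps)    # NLMS coefficient update
        out[n] = e
    return out

# Hypothetical usage: `mic` is the karaoke microphone signal, `ref` the known playback.
# cleaned = nlms_cancel(mic, ref)
```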

Another method to separate the audio input y(n) can be utilized if a mastering recording for the audio input y(n) is available, i.e. in-detail knowledge about how the audio input y(n) (i.e. a song) was mastered. In this case the stems need to be mixed again without the vocals, and the vocals need to be mixed again without all the accompaniment. In this process a much larger number of stems is used during mastering, e.g. layered vocals, multi-microphone takes, effects being applied, etc.

Pitch analysis

Fig. 3 shows in more detail an embodiment of a process of pitch analysis performed in the pitch analyzer 14 in Fig. 1. As described in Fig. 1, a pitch analysis is performed on the original vocals s_original(n) and on the user vocals s_user(n), respectively, to obtain a pitch analysis result ω_f(n). In particular, a process of signal framing 301 is performed on vocals 300, namely on a vocals signal s(n), to obtain framed vocals s_n(i). A process of Fast Fourier Transform (FFT) spectrum analysis 302 is performed on the framed vocals s_n(i) to obtain the FFT spectrum S_ω(n). A pitch measure analysis 303 is performed on the FFT spectrum S_ω(n) to obtain a pitch measure result R_P(ω_f).

At the signal framing 301, a windowed frame, such as the framed vocals s_n(i), can be obtained by

s_n(i) = s(n + i) · h(i)

where s(n + i) represents the discretized audio signal (i representing the sample number and thus time) shifted by n samples, and h(i) is a framing function around time n (respectively sample n), like for example the Hamming function, which is well-known to the skilled person.

At the FFT spectrum analysis 302, each framed vocals is converted into a respective short-term power spectrum. The short-term power spectrum S(ω), as obtained by the Discrete Fourier transform and also known as power spectral density, may be obtained by

S(ω) = |S_ω(n)|², with S_ω(n) = Σ_{i=0..N−1} s_n(i) · e^(−j·2π·ω·i/N)

where s_n(i) is the signal in the windowed frame, such as the framed vocals s_n(i) as defined above, ω are the frequencies in the frequency domain, |S_ω(n)| are the components of the short-term power spectrum S(ω) and N is the number of samples in a windowed frame, e.g. in each framed vocals.
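A minimal sketch of these two steps in Python/NumPy (the frame length and hop size are illustrative assumptions, not values prescribed by this application):

```python
import numpy as np

def framed_power_spectrum(s, n, N=2048):
    """Short-term power spectrum of a Hamming-windowed frame starting at sample n."""
    h = np.hamming(N)                     # framing function h(i)
    s_n = s[n:n + N] * h                  # framed vocals s_n(i) = s(n + i) * h(i)
    S = np.fft.rfft(s_n)                  # discrete Fourier transform of the frame
    return np.abs(S) ** 2                 # power spectral density |S_w(n)|^2

# Hypothetical usage: s is a mono vocal signal, analyzed every `hop` samples
s = np.random.randn(44100)               # placeholder vocal signal
hop = 512
spectra = [framed_power_spectrum(s, n) for n in range(0, len(s) - 2048, hop)]
```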

The pitch measure analysis 303 may for example be implemented as described in the published paper Der-Jenq Liu and Chin-Teng Lin, “Fundamental frequency estimation based on the joint time-frequency analysis of harmonic spectral structure” in IEEE Transactions on Speech and Audio Processing, vol. 9, no. 6, pp. 609-621, Sept. 2001:

A pitch measure R_P(ω_f) is obtained for each fundamental-frequency candidate ω_f from the power spectral density S_ω(n) of the frame window s_n by combining an energy measure and an impulse measure, where R_E(ω_f) is the energy measure of a fundamental-frequency candidate ω_f, and R_I(ω_f) is the impulse measure of a fundamental-frequency candidate ω_f.

The energy measure R_E(ω_f) of a fundamental-frequency candidate ω_f is given by

R_E(ω_f) = (1/E) · Σ_{l=1..I(ω_f)} h_in(l·ω_f)

where I(ω_f) is the number of the harmonics of the fundamental-frequency candidate ω_f, h_in(l·ω_f) is the inner energy related to a harmonic l·ω_f of the fundamental-frequency candidate ω_f, and E is the total energy. The inner energy is the area under the curve of the spectrum bounded by an inner window of length W_in, and the total energy is the total area under the curve of the spectrum.

The impulse measure R_I(ω_f) of a fundamental-frequency candidate ω_f relates, for each of the I(ω_f) harmonics l·ω_f of the candidate, the inner energy h_in(l·ω_f) to the outer energy h_out(l·ω_f), where ω_f is the fundamental-frequency candidate, I(ω_f) is the number of the harmonics of the fundamental-frequency candidate ω_f, h_in(l·ω_f) is the inner energy related to the harmonic l·ω_f, and h_out(l·ω_f) is the outer energy related to the harmonic l·ω_f.

The outer energy is the area under the curve of the spectrum bounded by an outer window of length W_out.

The pitch analysis result for frame window s_n is obtained by

ω̂_f(n) = argmax over ω_f of R_P(ω_f)

where ω̂_f(n) is the fundamental frequency for window s_n, and R_P(ω_f) is the pitch measure for fundamental-frequency candidate ω_f obtained by the pitch measure analysis 303, as described above.

The fundamental frequency at sample n is the pitch measurement result that indicates the pitch of the vocals at sample n in the vocals signal s(n).
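The following Python/NumPy sketch illustrates the general idea of scoring fundamental-frequency candidates by their harmonic energy and picking the maximum; it uses a simplified energy measure only, and the candidate grid, harmonic count and window width are hypothetical, not the exact measures of the cited paper.

```python
import numpy as np

def estimate_f0(power_spec, sr, n_fft, f_min=80.0, f_max=800.0, n_harm=5, inner_bw=2):
    """Pick the f0 candidate whose harmonics carry the most spectral energy."""
    total_energy = power_spec.sum() + 1e-12
    candidates = np.arange(f_min, f_max, 1.0)           # 1 Hz grid of f0 candidates
    scores = np.zeros(len(candidates))
    for k, f0 in enumerate(candidates):
        inner = 0.0
        for l in range(1, n_harm + 1):                  # harmonics l * f0
            bin_l = int(round(l * f0 * n_fft / sr))     # FFT bin of the l-th harmonic
            if bin_l >= len(power_spec):
                break
            lo, hi = max(0, bin_l - inner_bw), bin_l + inner_bw + 1
            inner += power_spec[lo:hi].sum()            # "inner energy" around the harmonic
        scores[k] = inner / total_energy                # normalized harmonic energy of the candidate
    return candidates[np.argmax(scores)]                # candidate with the maximal score

# Hypothetical usage with the framed_power_spectrum() sketch above:
# f0 = estimate_f0(framed_power_spectrum(s, n), sr=44100, n_fft=2048)
```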

Still further, a low pass filter (LP) 304 is applied to the pitch measurement result ω̂_f(n) to obtain a pitch analysis result ω_f(n) 305.

The low pass filter 304 can be a causal discrete-time low-pass Finite Impulse Response (FIR) filter of order M given by

ω_f(n) = Σ_{i=0..M} α_i · ω̂_f(n − i)

where α_i is the value of the impulse response at the i-th instant for i = 0, ..., M. In this causal discrete-time FIR filter of order M, each value of the output sequence is a weighted sum of the most recent input values.

The filter parameters M and α_i can be selected according to a design choice of the skilled person. For example, α_0 = 1 for normalization purposes. The parameter M can for example be chosen on a time scale up to 1 sec.
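A minimal sketch of such a causal FIR smoothing of the raw pitch track in Python/NumPy (the filter length and the uniform coefficients are illustrative assumptions):

```python
import numpy as np

def smooth_pitch_track(f0_raw, M=20):
    """Causal FIR low-pass: each output is a weighted sum of the M+1 most recent pitch values."""
    alpha = np.ones(M + 1) / (M + 1)           # impulse response alpha_i (here a simple moving average)
    f0_smooth = np.zeros_like(f0_raw)
    for n in range(len(f0_raw)):
        lo = max(0, n - M)
        window = f0_raw[lo:n + 1][::-1]         # most recent input values, newest first
        f0_smooth[n] = alpha[:len(window)] @ window
    return f0_smooth

# Hypothetical usage: f0_raw holds one pitch measurement per analysis frame
f0_raw = 220.0 + 5.0 * np.random.randn(500)
f0_smoothed = smooth_pitch_track(f0_raw)
```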

A pitch analysis process as described with regard to Fig. 3 above is performed on the original vocals s_original(n) to obtain the original vocals pitch analysis result ω_f,original(n) and on the user vocals s_user(n) to obtain the user’s pitch analysis result ω_f,user(n).

In the embodiment of Fig. 3, it is proposed to perform pitch measure analysis, such as the pitch measure analysis 303, for estimating the fundamental frequency ω_f based on the FFT spectrum. Alternatively, the fundamental frequency ω_f may be estimated based on a Fast Adaptive Representation (FAR) spectrum algorithm.

Other methods for pitch analysis and estimation for monophonic signals which can be used instead of or in addition to the method described in Fig. 3 are described in the following scientific papers: A multiplicative autocorrelation method is described in “New methods of pitch extraction” by Sondhi, M. M., published in IEEE Trans. Audio Electroacoust. AU-16, 262-266, in 1968. An average magnitude difference function method is described in “Average magnitude difference function pitch extractor” by Ross, M. J., Shaffer, H. L., Cohen, A., Freudberg, R., and Manley, H. J., published in IEEE Trans. Acoust. Speech Signal Process. ASSP-22, 353-362, in 1974. A comb filtering method is described in “The optimum comb method of pitch period analysis of continuous digitized speech” by Moorer, J. A., published in IEEE Trans. Acoust. Speech Signal Process. ASSP-22, 330-338, in 1974. A linear prediction analysis based method is described in “Linear Prediction of Speech” by Moorer, J. A., published in Springer-Verlag, New York, in 1974. A cepstrum based method is described in “Cepstrum pitch determination” by Noll, A.M., published in J. Acoust. Soc. Am. 41, 293-309, in 1966. A period histogram method is described in “Period histogram and product spectra: New methods for fundamental frequency measurement” by Schroeder, M. R., published in J. Acoust. Soc. Am. 43, 829-834, in 1968.

Still further, another more advanced method for pitch analysis and estimation which can be used instead of or in addition to the method described in Fig. 3 is described in the scientific paper “Fundamental frequency estimation of musical signals using a two-way mismatch procedure” by R.C. Maher and J. W. Beauchamp, published in the Journal of the Acoustical Society of America 95(4), in April 1994.

For a robust pitch determination, it is necessary to use pitch tracking (to avoid pitch doubling errors and for voiced/unvoiced detection), which is often done by using dynamic programming on the pitch F0 candidates, as obtained with any of the methods given above. A pitch tracking method is described in “An integrated pitch tracking algorithm for speech systems”, B. Secrest and G. Doddington, published in ICASSP '83. IEEE International Conference on Acoustics, Speech, and Signal Processing, Boston, Massachusetts, USA, 1983, pp. 1352-1355, doi: 10.1109/ICASSP.1983.1172016.

Still further, the pitch analysis and the (key) transposition are better if the vocals and the accompaniment are separate.

Pitch Range Determination

Fig. 4 schematically shows a flow chart describing the process of the pitch range determiner 15 of Fig. 1. In step 41, a pitch analysis result ω_f(n) is received as input into the pitch range determiner 15. In step 42, it is tested whether the sample number n is zero. If the query from step 42 was answered with yes, the process continues with step 43. In step 43, a lower limit min_ω_f(n) of a pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is initialized with min_ω_f(0) = ω_f(0), and an upper limit max_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is initialized with max_ω_f(0) = ω_f(0). After step 43, the process continues with step 51. In step 51 the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is output as a result of the pitch range determiner 15 and stored in a storage, for example the storage memory 1202. If the query from step 42 was answered with no, the process continues with step 44. In step 44, an old pitch range R_ω,old = [min_ω_f(n − 1), max_ω_f(n − 1)] is loaded from the storage. In step 45, it is tested whether the pitch analysis result ω_f(n) is smaller than the lower limit min_ω_f(n − 1) of the old pitch range R_ω,old = [min_ω_f(n − 1), max_ω_f(n − 1)]. If the query from step 45 was answered with yes, the process continues with step 46. In step 46 the lower limit min_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is set to min_ω_f(n) = ω_f(n) and the process continues with step 50. In step 50, the upper limit max_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is set to max_ω_f(n) = max_ω_f(n − 1) and the process continues with step 51. In step 51 the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is output as a result of the pitch range determiner 15 and stored in a storage, for example the storage memory 1202. If the query from step 45 was answered with no, the process continues with step 47. In step 47 the lower limit min_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is set to min_ω_f(n) = min_ω_f(n − 1) and the process continues with step 48. In step 48, it is tested whether the pitch analysis result ω_f(n) is greater than the upper limit max_ω_f(n − 1) of the old pitch range R_ω,old = [min_ω_f(n − 1), max_ω_f(n − 1)]. If the query from step 48 was answered with yes, the process continues with step 49. In step 49, the upper limit max_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is set to max_ω_f(n) = ω_f(n) and the process continues with step 51. In step 51 the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is output as a result of the pitch range determiner 15 and stored in a storage, for example the storage memory 1202. If the query from step 48 was answered with no, the process continues with step 50. In step 50, the upper limit max_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is set to max_ω_f(n) = max_ω_f(n − 1) and the process continues with step 51. In step 51 the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is output as a result of the pitch range determiner 15 and stored in a storage, for example the storage memory 1202.

The pitch range determination process as described above can be carried out based on the pitch analysis result ω_f,original(n) of the original vocals s_original(n) and on the pitch analysis result ω_f,user(n) of the user vocals s_user(n).

The pitch range determination process of the pitch range determiner 15 as described above in Fig. 4 can be carried out on-line, which means that for each sample (or frame) from the audio input y(n), for example a karaoke performance of a user, a pitch analyzing process 14 and a pitch range determination process 15 are carried out.

In another embodiment the pitch range determination process of the pitch range determiner 15 as described above may be carried out on an in-advance stored audio input x(n), that is for example a stored song of a karaoke system whose pitch range should be determined. In this case the upper limit max_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is determined by setting max_ω_f(N) = max(ω_f(1), ..., ω_f(N)), wherein max is the maximum function and N is the number of all samples of the stored audio input x(n), and the lower limit min_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] is determined by setting min_ω_f(N) = min(ω_f(1), ..., ω_f(N)), wherein min is the minimum function.
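A minimal sketch of both variants (on-line update as in Fig. 4 and in-advance min/max over a stored pitch track) in Python/NumPy; function and variable names are illustrative assumptions:

```python
import numpy as np

def update_pitch_range(f0_n, pitch_range=None):
    """On-line update (Fig. 4): extend the stored [min, max] range with the new pitch value f0_n."""
    if pitch_range is None:                       # n == 0: initialize both limits with the first value
        return [f0_n, f0_n]
    lo, hi = pitch_range
    return [min(lo, f0_n), max(hi, f0_n)]

def pitch_range_offline(f0_track):
    """In-advance processing: range over all N samples of a stored pitch analysis result."""
    f0_track = np.asarray(f0_track)
    return [float(f0_track.min()), float(f0_track.max())]

# Hypothetical usage
r = None
for f0 in [220.0, 230.5, 210.2, 247.0]:           # pitch analysis results per frame
    r = update_pitch_range(f0, r)
print(r)                                          # [210.2, 247.0]
```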

In yet another embodiment the pitch range determination process of the pitch range determiner 15 as described above may be carried out on an in-advance stored audio input y(n), that is for example a stored karaoke performance of a user on a number of previous songs, from which a pitch range and singing effort (see below) profile can be compiled. In this case the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] can be determined as described in the previous paragraph.

Fig. 5 schematically shows a graph of a pitch analysis result. On the x-axis of a diagram 50 the number of samples n of an audio input y(n) or x(n) is shown, wherein the total number of samples is N. On the y-axis of the diagram 50 the pitch analysis result ω_f(n) is shown. A graph 53 indicates the pitch analysis result ω_f(n) over the sample number n. The lower limit min_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] over all N samples is given by the value min_ω_f(N), which is the lowest value that the graph 53 reaches over all N samples. The upper limit max_ω_f(n) of the pitch range R_ω(n) = [min_ω_f(n), max_ω_f(n)] over all N samples is given by the value max_ω_f(N), which is the highest value that the graph 53 reaches over all N samples.

Pitch Range Comparison

Fig. 6 schematically shows a flow chart describing the process of the pitch range comparator 16 of Fig. 1. In step 61, the pitch range R_ω,original(n) = [min_ω_f,original(n), max_ω_f,original(n)] (also called a first pitch range) of the original vocals s_original(n) (also called first vocal signal) is received as input into step 63. In step 62, the pitch range R_ω,user(n) = [min_ω_f,user(n), max_ω_f,user(n)] (also called a second pitch range) of the user’s vocals s_user(n) (also called a second vocal signal) is received as input into step 64. In step 63, an original vocal pitch range average avg_ω_f,original(n) is determined as avg_ω_f,original(n) = [max_ω_f,original(n) − min_ω_f,original(n)]/2 + min_ω_f,original(n). In step 64, a user’s vocal pitch range average avg_ω_f,user(n) is determined as avg_ω_f,user(n) = [max_ω_f,user(n) − min_ω_f,user(n)]/2 + min_ω_f,user(n). In step 65 a pitch ratio R_ω(n) is determined as R_ω(n) = (avg_ω_f,user(n) − avg_ω_f,original(n)) / avg_ω_f,original(n) + 1. In step 66, the pitch ratio R_ω(n) is output as result of the pitch range comparison process of the pitch range comparator 16.

The pitch range comparison process of the pitch range comparator 16 as described above is carried out for every sample n of the user’s vocals s_user(n). That means, while a user performs a karaoke song, the pitch ratio R_ω(n) can be adapted at every sample n. The final pitch ratio R_ω(N) over all samples n = 1 ... N after finishing a karaoke performance by a user can be stored in a database, for example the storage 1202, and be linked to the user.

The pitch ratio R_ω(n) is a value relative to the original vocal pitch range average avg_ω_f,original(n) and centered around 1, so that it can be seen as a kind of “transposition factor” which should be applied to the original vocal pitch frequency ω_f,original(n).
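A minimal sketch of the comparison in Python (names and the example ranges are illustrative; the formulas follow steps 63-65 above):

```python
def pitch_range_average(pitch_range):
    """Center of a pitch range [min_f, max_f] (steps 63 / 64)."""
    lo, hi = pitch_range
    return (hi - lo) / 2.0 + lo

def pitch_ratio(range_user, range_original):
    """Transposition factor relative to the original vocal pitch range average (step 65)."""
    avg_user = pitch_range_average(range_user)
    avg_original = pitch_range_average(range_original)
    return (avg_user - avg_original) / avg_original + 1.0

# Hypothetical ranges in Hz: the user sings roughly a fifth lower than the original singer
print(pitch_ratio(range_user=[130.0, 260.0], range_original=[196.0, 392.0]))   # ~0.663
```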

As described above, like the pitch analysis result ω_f(n) from the pitch analyzer 14 and the pitch range R_ω(n) from the pitch range determiner 15, the pitch ratio R_ω(n) can be determined on-line for every sample n from an audio input y(n), for example from a live karaoke performance of a user, and from an audio input x(n), for example from a chosen song to which a karaoke performance should be performed.

If a pitch range R_ω,user(N) of a user is known in advance (i.e. before the karaoke on a song is performed which yields an audio input y(n)), for example from another song that was performed by the user and is stored in the storage 1202, the pitch ratio R_ω(N) may be determined based on the in-advance known range R_ω,user(N) of the user and an in-advance known range R_ω,original(N) of the original vocals.

In the realm of music and musical transposition it is often stated by how many semitones or full tones a piece of music is transposed. Since an octave comprises 12 semitones and an octave corresponds to a pitch ratio R_ω(n) = 2, a transposition up by a semitone corresponds to a pitch ratio R_ω(n) = 2^(1/12) ≈ 1.059 and a transposition down by a semitone corresponds to a pitch ratio R_ω(n) = (1/2)^(1/12) ≈ 0.944. In this way, the pitch ratio R_ω(n) and a semitone transposition specification can be easily converted into each other. Therefore, in another embodiment the pitch ratio R_ω(n) may be rounded to ceil or to floor (i.e. up or down) to the next semitone such that the pitch ratio R_ω(n) always corresponds to a transposition by an integer multiple of a semitone.
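A short sketch of this conversion and of the semitone rounding in Python (a minimal illustration, not the patent's exact rounding rule):

```python
import math

def ratio_to_semitones(pitch_ratio):
    """Number of semitones corresponding to a pitch ratio (12 semitones per octave, ratio 2)."""
    return 12.0 * math.log2(pitch_ratio)

def snap_ratio_to_semitone(pitch_ratio):
    """Round the pitch ratio so that it corresponds to an integer number of semitones."""
    semitones = round(ratio_to_semitones(pitch_ratio))
    return 2.0 ** (semitones / 12.0)

print(ratio_to_semitones(0.663))        # ~ -7.1 semitones (user range lies about a fifth lower)
print(snap_ratio_to_semitone(0.663))    # ~ 0.667, i.e. exactly -7 semitones
```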

Transposition

As described above, the goal is, during a karaoke performance of a user to a song, to transpose the accompaniment s_Acc(n) of the song such that the user can more easily match his voice to the accompaniment s_Acc(n). The “transposition factor” by which the accompaniment s_Acc(n) should be transposed is determined as described in Fig. 6 above. Transposition of an audio input can for example be done by a standard pitch-scale modification technique, where all frequencies are multiplied by a predetermined value, in our case by the transposition value transpose_val(n). The standard pitch-scale modification technique comprises a step of time-scale modification and a step of resampling.

Fig. 7 schematically shows a flow chart describing the process of the transposer 17 of Fig. 1. In step 71, a transposition value transpose_val is received. In this embodiment the transposition value transpose_val(n) is set equal to the pitch ratio R_ω(n), i.e. transpose_val(n) = R_ω(n). In step 72 the accompaniment s_Acc(n) is received as input. In step 73 a time-scale modification of the accompaniment s_Acc(n) is performed with the transposition value transpose_val(n) as time factor. The time-scale modification of the accompaniment s_Acc(n) is done with a phase-vocoder. A phase vocoder expands or shortens the accompaniment s_Acc(n) by the factor of the transposition value transpose_val without altering the frequencies of the accompaniment s_Acc(n). This yields a time-scale modified accompaniment s_Acc,mod(n) as an output of step 73 and as input into step 74. In step 74, the time-scale modified accompaniment s_Acc,mod(n) is resampled with a new sampling period ΔT · transpose_val(n), wherein ΔT is the sampling period which was used when sampling the accompaniment s_Acc(n). That means that during the resampling with the new sampling period ΔT · transpose_val(n), the time-scale modified accompaniment s_Acc,mod(n) is shortened or expanded to the original length of the accompaniment s_Acc(n) and thereby all frequencies are multiplied by the factor of the transposition value transpose_val(n), which yields the transposed accompaniment s*_Acc(n). In step 75, the transposed accompaniment s*_Acc(n) is output as result of the transposer 17.
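For illustration, here is a hedged sketch of the same two steps (phase-vocoder time-scale modification followed by resampling) using librosa's time_stretch and resample functions; the keyword-argument forms assume a recent librosa version, and the defaults for FFT and hop size are librosa's own, not values prescribed by this application.

```python
import librosa

def transpose_accompaniment(s_acc, sr, transpose_val):
    """Pitch-scale modification: time-stretch (step 73) followed by resampling (step 74)."""
    # Step 73: expand/shorten the duration by the factor transpose_val without altering
    # frequencies. librosa's rate is a speed-up factor, hence 1 / transpose_val.
    stretched = librosa.effects.time_stretch(s_acc, rate=1.0 / transpose_val)
    # Step 74: resampling by the factor transpose_val restores the original duration and
    # thereby multiplies all frequencies by transpose_val (output interpreted at rate sr).
    return librosa.resample(stretched, orig_sr=sr * transpose_val, target_sr=sr)

# Hypothetical usage: shift the accompaniment down by 7 semitones
# s_acc, sr = librosa.load("accompaniment.wav", sr=None, mono=True)
# s_acc_transposed = transpose_accompaniment(s_acc, sr, transpose_val=2 ** (-7 / 12))
```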

In this embodiment the audio output signal x*(n) is equal to the accompaniment s_Acc(n). In general, the same process as described above in Fig. 7 can be applied to another audio output signal x*(n). For example, in another embodiment the audio output signal x*(n) may be equal to the audio input signal x(n). In this case the same transposition as described above in Fig. 7 is applied to the audio output signal x*(n), and the output signal of the transposer might be named transposed signal s*(n).

The time-scale modification with the phase-vocoder and the resampling are described in more detail for example in the paper “New phase-vocoder techniques for pitch-shifting, harmonizing and other exotic effects”, published in Proc. 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, New York, Oct. 17-20, 1999, or in the papers mentioned therein. Still further, an improved phase-vocoder is explained in more detail for example in the paper “Improved Phase Vocoder Time-Scale Modification of Audio”, by Jean Laroche and Mark Dolson, published in IEEE Transactions on Speech and Audio Processing, vol. 7, no. 3, May 1999. In case the transposition value transpose_val(n) is smaller than 1, the steps 73 and 74 of Fig. 7 might be interchanged.

As described above, just as the pitch ratio R_ω(n) can be determined on-line for every sample n, the transposed accompaniment s*_Acc(n) can be determined on-line for every sample n depending on the current transposition value transpose_val(n) (this can also be viewed as a transposition key) and can then be applied to the whole song in real-time.

If the pitch ratio R_ω(N) (and thereby the transposition value transpose_val(n)) for a chosen karaoke song and for a specified user is known in advance, as described above, the transposed accompaniment s*_Acc(n) may also be determined in advance.

As described above, the accompaniment s_Acc(n) as output by the MSS 12 (see Fig. 2) can for example include all instruments (tracks), like for example drums, piano, strings etc. In this case the transposition process of the transposer as described in Fig. 7 is directly applied to the “complete” accompaniment s_Acc(n) (also called polyphonic pitch transposition). The polyphonic pitch transposition may result in lower quality than the single-track pitch transposition (see Fig. 11) because it may be difficult to tackle very different attack/release, melodic/percussive, and multiple note-on/note-off characteristics for a track with multiple instruments. Therefore, artifacts like pre-echo for percussive parts or comb/flange effects for melodic parts may occur.

As described above, the pitch ratio R_ω(n) can also be stated in semitones or full tones, and exactly the same is true for the transposition value transpose_val(n).
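
A minimal sketch of the conversion between a frequency ratio and semitones is given below; the function names are illustrative and not taken from the embodiment.

```python
import numpy as np

def ratio_to_semitones(pitch_ratio: float) -> float:
    """Express a pitch ratio such as R_ω(n) in semitones (12 per octave)."""
    return 12.0 * np.log2(pitch_ratio)

def nearest_semitone_ratio(pitch_ratio: float) -> float:
    """Round to the nearest integer number of semitones and return the
    corresponding frequency ratio, e.g. for use as transpose_val(n)."""
    return 2.0 ** (round(ratio_to_semitones(pitch_ratio)) / 12.0)
```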

Still further, in another embodiment, the audio input signal x(n) may be available as a MIDI (Musical Instrument Digital Interface) file, and therefore the accompaniment s_Acc(n), or the single tracks of the accompaniment, may be available as MIDI files as well. In this case the transposition of the MIDI file accompaniment s_Acc(n) can be achieved by standard MIDI commands, like a transposition filter. That means in this case the transposition is performed by simply transposing the key of the MIDI track by the desired transposition value transpose_val(n) prior to the instrument synthesis.
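
A minimal sketch of such a MIDI-domain transposition is given below, assuming the third-party library mido for reading and writing MIDI files; the library and the file names are assumptions and not part of the embodiment.

```python
import mido

def transpose_midi(in_path: str, out_path: str, semitones: int) -> None:
    """Shift every note of the accompaniment MIDI file by the desired number
    of semitones prior to instrument synthesis."""
    mid = mido.MidiFile(in_path)
    for track in mid.tracks:
        for msg in track:
            if msg.type in ("note_on", "note_off"):
                # Clamp to the valid MIDI note range 0..127.
                msg.note = min(127, max(0, msg.note + semitones))
    mid.save(out_path)
```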

Therefore, the above-described transposer is able to process any type of recording (synthesized MIDI, third-party cover, or commercially released recordings), wherein the transposition quality may be improved by the high separation quality and by the pitch analysis and transposition value determination.

Singing Effort Determination

Fig. 8 schematically shows a second embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation. An audio input signal x(n), which is received from a mono or stereo audio input 13, contains multiple sources (see 1, 2, ..., K in Fig. 2) and is input to a process of Music Source Separation 12 and decomposed into separations (see separated source 2 and residual signal 3 in Fig. 2), here into a separated source 2, namely the original vocals s_original(n), and a residual signal 3, namely the accompaniment s_Acc(n). An exemplary embodiment of the process of Music Source Separation 12 is described with regard to Fig. 2. The audio output signal x*(n) is equal to the accompaniment s_Acc(n), and the audio output signal x*(n) is transmitted to a transposer 17, while the original vocals s_original(n) are transmitted to a signal adder 18 and to a pitch analyzer 14 (more detail in Fig. 3) which estimates a pitch analysis result ω_f,original(n) of the original vocals s_original(n). The pitch analysis result ω_f,original(n) is input into a pitch range estimator 15 (described in more detail in Fig. 4) which estimates a pitch range R_ω,original of the original vocals s_original(n). The pitch range R_ω,original is input into a pitch comparator 16. A user's microphone 11 acquires an audio input signal y(n), which is input into a process of Music Source Separation 12 and decomposed into separations (see separated source 2 and residual signal 3 in Fig. 2), here into a separated source 2, namely the user vocals s_user(n), and a residual signal 3 which is not needed in the following. The user vocals s_user(n) are transmitted to a singing effort determiner 22, to the signal adder 18 and to a pitch analyzer 14 (more detail in Fig. 3) which estimates a pitch analysis result ω_f,user(n) of the user vocals s_user(n). The pitch analysis result ω_f,user(n) is input into a pitch range estimator 15 (described in more detail in Fig. 4) which estimates a pitch range R_ω,user of the user vocals s_user(n). The pitch range R_ω,user is input into a pitch comparator 16. The pitch comparator 16 (described in more detail in Fig. 5) receives the pitch range R_ω,original of the original vocals s_original(n) and the pitch range R_ω,user of the user vocals s_user(n) and outputs a pitch ratio R_ω between an average of the pitch range R_ω,original of the original vocals s_original(n) and an average of the pitch range R_ω,user of the user vocals s_user(n). The pitch ratio R_ω is input into a transposition value determiner 23. The singing effort determiner 22 receives the user vocals s_user(n), the pitch analysis result ω_f,user(n) of the user vocals s_user(n) and the pitch range R_ω,user of the user vocals s_user(n) and determines a singing effort (see Fig. 9). The singing effort determiner 22 outputs a singing effort flag E which is input into the transposition value determiner 23. The transposition value determiner 23 determines a transposition value transpose_val based on the pitch ratio R_ω and the singing effort flag E. The transposition value determiner 23 outputs the transposition value transpose_val to a transposer 17. The transposer 17 receives the transposition value transpose_val and the audio output signal x*(n) (= accompaniment s_Acc(n)) and transposes the audio output signal x*(n) (= accompaniment s_Acc(n)) by the transposition value transpose_val.
The transposer 17 outputs a transposed accompaniment s_Acc(n) and inputs it into the signal adder 18. The signal adder 18 receives the transposed accompaniment s_Acc(n) and the original vocals s_original(n), adds them together and outputs the added signal to a loudspeaker system 19. The transposition value transpose_val is further output to a display unit 20 where the value is presented to the user. The display unit 20 further receives lyrics of the user vocals s_user(n) and presents them to the user.

Singing effort and vocal pathologies

The karaoke system can further estimate a singing effort of a karaoke user. The singing effort indicates whether a karaoke user has great effort to reach the pitch range of the original song, i.e. whether the karaoke user must make high efforts to sing as high or as low as the original song. If an amateur karaoke user sings beyond his natural capabilities for a longer period of time, the user will not be able to stand long singing sessions, could damage his vocal cords, and the quality of the performance will be poor.

There are different characteristic parameters which can be deduced from an analysis of the user vocals s_user(n) and/or the user's pitch analysis result ω_f,user(n) and which indicate a high singing effort. These characteristic parameters are for example:

- A jitter value (in percent [%]), which is a relative evaluation of the period-to-period (very short-term) variability of the user's pitch analysis result ω_f,user(n) within an analyzed voice sample, wherein voice break areas are excluded (a computational sketch for jitter and shimmer is given after this list).

- A RAP value (in percent [%]), which is a relative evaluation of the period-to-period variability of the pitch within the analyzed voice sample with a smoothing factor of three periods, wherein voice break areas are excluded.

- A shimmer value (in percent [%]), which is a relative evaluation of the period-to-period (very short-term) variability of the peak-to-peak amplitude within the analyzed voice sample, wherein voice break areas are excluded.

- An APQ value (in percent [%]), which is a relative evaluation of the period-to-period variability of the peak-to-peak amplitude within the analyzed voice sample at a smoothing of 11 periods, wherein voice break areas are excluded.

- A Noise-to-Harmonic-Ratio (NHR) value, which is the average ratio of the inharmonic spectral energy in the 1500-4500 Hz frequency range to the harmonic spectral energy in the 70-4500 Hz frequency range. This is a general evaluation of the noise present in the analyzed signal.

- A soft phonation index (SPI) value, which is the average ratio of the lower-frequency harmonic energy in the range of 70-1600 Hz to the higher-frequency harmonic energy in the range of 1600-4500 Hz. This parameter reflects the approximation of the vocal folds. High values of SPI are stated to correlate with incomplete vocal fold adduction and are a better indicator of breathiness than EGG. NHR and SPI are both computed using a pitch-synchronous frequency-domain method.
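
A simplified sketch of two of the parameters above (jitter and shimmer) is given below, assuming that a sequence of pitch period lengths and peak-to-peak amplitudes has already been extracted from the user vocals and that voice break areas have been removed beforehand; the exact definitions used in the embodiment may differ.

```python
import numpy as np

def jitter_percent(periods: np.ndarray) -> float:
    """Relative period-to-period variability of the pitch periods, in %."""
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes: np.ndarray) -> float:
    """Relative period-to-period variability of the peak-to-peak amplitudes, in %."""
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```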

A more in-depth analysis of the above-mentioned parameters and of ways to measure and detect them based on the user vocals s_user(n) and/or the user's pitch analysis result ω_f,user(n) is described in the scientific paper "Vocal Folds Disorder Detection using Pattern Recognition Methods", J. Wang and C. Jo, published in the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, 2007, pp. 3253-3256, doi: 10.1109/IEMBS.2007.4353023.

Most of the above parameters are related to the vocal cords. Some of them are related to expressiveness while singing as well, like jitter (vibrato), but exhibiting progressively chaotic vocal cord behavior throughout the karaoke singing session might be an indicator of developing short-term vocal cord issues like swelling. The NHR value could also be used to detect aphonia. The karaoke system can monitor the above-described parameters and their variations over a karaoke session of a user and determine the singing effort and a possible vocal cord damage (for example through a progressive degradation of the singing quality).

Fig. 9 schematically describes the singing effort determiner 22 of Fig. 8. In step 91, the user vocals s_user(n) are received as input into the singing effort determiner 22. In step 92, the user's pitch analysis result ω_f,user(n) is received as input into the singing effort determiner 22. In step 93, the pitch range R_ω,user(n) = [min_ω_f,user(n), max_ω_f,user(n)] of the user vocals s_user(n) is received as input into the singing effort determiner 22. In step 94, the jitter value jitter_val is determined based on the user's pitch analysis result ω_f,user(n) and the user vocals s_user(n). This is described in more detail in the paper of J. Wang and C. Jo cited above and in the papers cited therein. In step 95, a first singing effort value pitch_high(n) is initialized with pitch_high(n) = 0, wherein the first singing effort value pitch_high(n) indicates, if set to 1, that a karaoke singer must make great effort, or fails, to reach a high pitch. Still further in step 95, a second singing effort value pitch_low(n) is initialized with pitch_low(n) = 0, wherein the second singing effort value pitch_low(n) indicates, if set to 1, that a karaoke singer must make great effort, or fails, to reach a low pitch. In step 96, it is tested whether the jitter value jitter_val(n) is greater than a threshold of 5%. In another embodiment the threshold for the jitter can have another value. If the query from step 96 is answered with yes, it is proceeded with step 97. In step 97, it is tested whether the absolute value of the difference between the user's pitch analysis result ω_f,user(n) and the low value of the pitch range R_ω,user(n) is greater than the absolute value of the difference between the user's pitch analysis result ω_f,user(n) and the high value of the pitch range R_ω,user(n), i.e. |ω_f,user(n) − min_ω_f,user(n)| > |ω_f,user(n) − max_ω_f,user(n)|. If the query from step 97 is answered with yes, it is proceeded with step 98. In step 98, the first singing effort value pitch_high(n) is set to 1, pitch_high(n) = 1, and it is proceeded with step 100. If the query from step 97 is answered with no, it is proceeded with step 99. In step 99, the second singing effort value pitch_low(n) is set to 1, pitch_low(n) = 1, and it is proceeded with step 100. If the query in step 96 is answered with no, it is proceeded with step 100. In step 100, the singing effort E(n) = {pitch_low(n), pitch_high(n)} is output by the singing effort determiner 22.
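
A minimal sketch of the decision logic of steps 94 to 100 is given below, assuming that jitter_val(n), the pitch analysis result ω_f,user(n) and the pitch range [min_ω_f,user(n), max_ω_f,user(n)] for the current sample are already available; the 5 % threshold follows the embodiment above.

```python
def singing_effort(jitter_val: float, omega_f_user: float,
                   min_omega: float, max_omega: float,
                   threshold: float = 5.0) -> dict:
    """Return the singing effort E(n) = {pitch_low(n), pitch_high(n)}."""
    pitch_high, pitch_low = 0, 0                              # step 95
    if jitter_val > threshold:                                # step 96
        # Step 97: is the current pitch closer to the top of the range?
        if abs(omega_f_user - min_omega) > abs(omega_f_user - max_omega):
            pitch_high = 1                                    # step 98
        else:
            pitch_low = 1                                     # step 99
    return {"pitch_low": pitch_low, "pitch_high": pitch_high}  # step 100
```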

In the embodiment above, the singing effort E(n) is a "binarized" value of the jitter value jitter_val(n), i.e. a flag is set when the jitter value is above a threshold and is not set when it is below the threshold. In another embodiment the singing effort E(n) can be a quantitative value, for example a value that is directly proportional to the jitter value jitter_val(n).

In yet another embodiment, any of the other above-described characteristic parameters can be used instead of the jitter, or in addition to it, in order to determine a first and a second singing effort value as described in Fig. 9.

In yet another embodiment, the singing effort E(n) can be a quantitative value, for example a value that is directly proportional to any linear or nonlinear combination of the above-described characteristic parameters.
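
A minimal sketch of such a quantitative singing effort as a linear combination of the characteristic parameters is given below; the equal default weights are purely illustrative assumptions and not part of the embodiment.

```python
def singing_effort_quantitative(jitter: float, rap: float, shimmer: float,
                                apq: float, nhr: float, spi: float,
                                weights=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)) -> float:
    """Weighted linear combination of the characteristic parameters."""
    params = (jitter, rap, shimmer, apq, nhr, spi)
    return sum(w * p for w, p in zip(weights, params))
```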

In another embodiment the karaoke system can propose to stop or pause singing to prevent more severe vocal cord problems. More details on how to recognize pathological speech, which can also be utilized to detect a high singing effort, are for example described in "A system for automatic recognition of pathological speech" by Dibazar, Alireza & Narayanan, Shrikanth, published in Proceedings of the Asilomar Conference on Signals, Systems and Computers, November 2002. In this paper standard MFCC and pitch features are used for the classification of several speech-production-related pathologies.

If the singing effort determiner 22 has determined the singing effort value E and a pitch ratio R_ω, a transposition value transpose_val can be determined.

Fig. 10 schematically shows the transposition value determiner 23 of Fig. 8. In step 101, the pitch ratio R_ω is received as input into the transposition value determiner 23. In step 102, the singing effort E = {pitch_low(n), pitch_high(n)} is received as input into the transposition value determiner 23. In step 103, the transposition value transpose_val(n) is set equal to the pitch ratio R_ω, transpose_val(n) = R_ω. In step 104, it is tested whether the first singing effort value pitch_high is set to 1. If the query in step 104 is answered with yes, it is proceeded with step 105. In step 105, the transposition value transpose_val is decreased by 0.05, transpose_val(n) = transpose_val(n) − 0.05, and it is proceeded with step 108. If the query in step 104 is answered with no, it is proceeded with step 106. In step 106, it is tested whether the second singing effort value pitch_low is set to 1. If the query in step 106 is answered with yes, it is proceeded with step 107. In step 107, the transposition value transpose_val(n) is increased by 0.05, transpose_val(n) = transpose_val(n) + 0.05, and it is proceeded with step 108. In step 108, the transposition value transpose_val is output by the transposition value determiner 23.
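
A minimal sketch of the transposition value determiner of Fig. 10 (steps 101 to 108) is given below; the correction step of 0.05 follows the embodiment above, and the effort dictionary matches the sketch given after Fig. 9.

```python
def transposition_value(pitch_ratio: float, effort: dict) -> float:
    """Determine transpose_val(n) from the pitch ratio R_ω and the effort flags."""
    transpose_val = pitch_ratio              # step 103
    if effort["pitch_high"] == 1:            # step 104
        transpose_val -= 0.05                # step 105
    elif effort["pitch_low"] == 1:           # step 106
        transpose_val += 0.05                # step 107
    return transpose_val                     # step 108
```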

Fig. 11 schematically shows a third embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation. The embodiment of Fig. 11 is mostly similar to the embodiment of Fig. 1. However, in Fig. 11 the accompaniment s_Acc(n) can be separated by the music source separation 12 into different instruments (tracks), for example a first instrument s_A1(n), a second instrument s_A2(n) and a third instrument s_A3(n), for example drums, piano, strings etc. Each of the three instruments s_A1(n), s_A2(n) and s_A3(n) can be set as the output signal x*(n) and transposed by the transposer 17 by the same transposition value as described above in Fig. 7. The transposer 17 outputs for the input of the first instrument s_A1(n) a transposed first instrument s_A1(n), for the input of the second instrument s_A2(n) a transposed second instrument s_A2(n), and for the third instrument s_A3(n) a transposed third instrument s_A3(n). The transposed first instrument, the transposed second instrument and the transposed third instrument are summed together by the adders 1101 and 1102, and the complete transposed accompaniment s_Acc(n) is received.

In yet another embodiment the accompaniment s_Acc(n) can be separated into melodic/harmonic tracks and percussion tracks, and the same single-track (single-instrument) transposition as described above can be applied. If the accompaniment s_Acc(n) is separated into more than one track (instrument), the transposition process of the transposer 17 is applied to each of the separated tracks individually and the individually transposed tracks are summed up afterwards into a stereo recording to receive the complete transposed accompaniment s_Acc(n).
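
A minimal sketch of this per-track transposition and summation is given below; it reuses the transpose() function from the Fig. 7 sketch above, which is an assumption rather than the disclosed implementation.

```python
import numpy as np

def transpose_accompaniment(tracks: list, transpose_val: float) -> np.ndarray:
    """Transpose each separated track s_A1(n), s_A2(n), ... individually and
    sum the results into the complete transposed accompaniment."""
    # transpose() refers to the phase-vocoder/resampling sketch given above.
    return np.sum([transpose(track, transpose_val) for track in tracks], axis=0)
```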

Fig. 12 schematically shows a fourth embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation. The embodiment of Fig. 12 is mostly similar to the embodiment of Fig. 1. However, in Fig. 12 the audio output signal x*(n) which is transposed by the transposition value transpose_val(n) is equal to the audio input signal x(n), which means that the original vocals s_original(n) (and the accompaniment s_Acc(n)) are also transposed by the value transpose_val(n) as described above. The output of the transposer, that is the transposed signal s*(n), is input into the adder 18 and it is proceeded as described in Fig. 1.

Fig. 13 schematically shows a fifth embodiment of a process of a karaoke system which transposes an audio signal based on audio source separation and pitch range estimation. The embodiment of Fig. 13 is mostly similar to the embodiment of Fig. 1. However, in Fig. 13 the audio output signal x*(n) which is transposed by the transposition value transpose_val(n) consists of the original vocals s_original(n) mixed together with the accompaniment s_Acc(n). For example, the output signal x*(n) consists of the original vocals s_original(n) multiplied by a gain G (that means they are amplified or damped) plus the accompaniment s_Acc(n). The output of the transposer, that is the transposed signal s*(n), is input into the adder 18 and it is proceeded as described in Fig. 1.
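
A minimal sketch of the output signal of the Fig. 13 embodiment, x*(n) = G · s_original(n) + s_Acc(n), is given below; the function name is illustrative.

```python
import numpy as np

def mix_output(vocals: np.ndarray, accompaniment: np.ndarray, gain: float) -> np.ndarray:
    """Amplify or damp the original vocals by a gain G and mix them with the
    accompaniment to form the audio output signal x*(n) before transposition."""
    return gain * vocals + accompaniment
```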

Fig. 14 schematically describes an embodiment of an electronic device that can implement the processes of pitch range determination and transposition as described above. The electronic device 1200 comprises a CPU 1201 as processor. The electronic device 1200 further comprises a microphone array 1210, a loudspeaker array 1211 and a convolutional neural network unit 1220 that are connected to the processor 1201. The processor 1201 may for example implement a pitch analyzer, a pitch range determiner, a pitch comparator, a singing effort determiner, a transposition value determiner or a transposer that realize the processes described with regard to Fig. 1, Fig. 8, Fig. 3, Fig. 4, Fig. 5, Fig. 6, Fig. 7, Fig. 9 and Fig. 10 in more detail. The CNN 1220 may for example be an artificial neural network in hardware, e.g. a neural network on GPUs or any other hardware specialized for the purpose of implementing an artificial neural network. The CNN 1220 may for example implement a source separation 104. The loudspeaker array 1211, such as the loudspeaker system 111 described with regard to Fig. 1 and Fig. 8, consists of one or more loudspeakers that are distributed over a predefined space and is configured to render any kind of audio, such as 3D audio. The electronic device 1200 further comprises a user interface 1212 that is connected to the processor 1201. This user interface 1212 acts as a man-machine interface and enables a dialogue between an administrator and the electronic system. For example, an administrator may make configurations to the system using this user interface 1212. The electronic device 1200 further comprises an Ethernet interface 1221, a Bluetooth interface 1204, and a WLAN interface 1205. These units 1204, 1205 act as I/O interfaces for data communication with external devices. For example, additional loudspeakers, microphones, and video cameras with Ethernet, WLAN or Bluetooth connection may be coupled to the processor 1201 via these interfaces 1221, 1204, and 1205. The electronic device 1200 further comprises a data storage 1202 and a data memory 1203 (here a RAM). The data memory 1203 is arranged to temporarily store or cache data or computer instructions for processing by the processor 1201. The data storage 1202 is arranged as a long-term storage, e.g. for recording sensor data obtained from the microphone array 1210 and provided to or retrieved from the CNN 1220. The data storage 1202 may also store audio data that represents audio messages, which the public announcement system may transport to people moving in the predefined space.

It should be noted that the description above is only an example configuration. Alternative configurations may be implemented with additional or other sensors, storage devices, interfaces, or the like.

It should be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is, however, given for illustrative purposes only and should not be construed as binding.

It should also be noted that the division of the electronic device of Fig. 1 into units is only made for illustration purposes and that the present disclosure is not limited to any specific division of functions in specific units. For instance, at least parts of the circuitry could be implemented by a respectively programmed processor, field programmable gate array (FPGA), dedicated circuits, and the like.

All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example, on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.

In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.

Note that the present technology can also be configured as described below.

(1) An electronic device comprising circuitry configured to separate by audio source separation a first audio input signal (x(n)) into a first vocal signal (s_original(n)) and an accompaniment (s_Acc(n); s_A1(n), s_A2(n), s_A3(n)), and to transpose an audio output signal (x*(n)) by a transposition value (transpose_val(n)) based on a pitch ratio (R_ω(n)), wherein the pitch ratio (R_ω(n)) is based on comparing a first pitch range (R_ω,original(n)) of the first vocal signal (s_original(n)) and a second pitch range (R_ω,user(n)) of the second vocal signal (s_user(n)).

(2) The electronic device of (1), wherein the circuitry is further configured to determine the first pitch range (R_ω,original(n)) of the first vocal signal (s_original(n)) based on a first pitch analysis result (ω_f,original(n)) of the first vocal signal (s_original(n)) and the second pitch range (R_ω,user(n)) of the second vocal signal (s_user(n)) based on a second pitch analysis result (ω_f,user(n)) of the second vocal signal (s_user(n)).

(3) The electronic device of (1) or (2), wherein the circuitry is further configured to determine the first pitch analysis result (ω_f,original(n)) based on the first vocal signal (s_original(n)) and the second pitch analysis result (ω_f,user(n)) based on the second vocal signal (s_user(n)).

(4) The electronic device of anyone of (1) to (3), wherein the accompaniment (s_Acc(n); s_A1(n), s_A2(n), s_A3(n)) comprises all parts of the audio input signal (x(n)) except for the first vocal signal (s_original(n)).

(5) The electronic device of anyone of (1) to (4), wherein the audio output signal (x*(n)) is the accompaniment (s_Acc(n)).

(6) The electronic device of anyone of (1) to (5), wherein the audio output signal (x*(n)) is the audio input signal (x(n)).

(7) The electronic device of anyone of (1) to (6), wherein the audio output signal (x*(n)) is a mixture of the accompaniment (s_Acc(n)) and the first vocal signal (s_original(n)).

(8) The electronic device of anyone of (1) to (7), wherein the circuitry is further configured to separate the accompaniment (s_Acc(n)) into a plurality of instruments (s_A1(n); s_A2(n); s_A3(n)).

(9) The electronic device of anyone of (1) to (8), wherein the circuitry is further configured to separate a second audio input signal (y(n)) by audio source separation.

(10) The electronic device of (9), wherein the second audio input signal (y(n)) is separated into the second vocal signal (s user (n)) and a remaining signal.

(11) The electronic device of anyone of (1) to (10), wherein the circuitry is further configured to determine a singing effort (E(n)) based on the second vocal signal (s_user(n)), wherein the transposition value (transpose_val(n)) is based on the singing effort (E(n)) and the pitch ratio (R_ω(n)).

(12) The electronic device of (11), wherein the singing effort (E(n)) is based on the second pitch analysis result (ω_f,user(n)) of the second vocal signal (s_user(n)) and the second pitch range (R_ω,user(n)) of the second vocal signal (s_user(n)).

(13) The electronic device of (11) or (12), wherein the circuitry is further configured to determine the singing effort (E(n)) based on a jitter value (jitter_val(n)) and/or a RAP value and/or a shimmer value and/or an APQ value and/or a Noise-to-Harmonic-Ratio and/or a soft phonation index.

(14) The electronic device of anyone of (1) to (13), wherein the circuitry is configured to transpose the audio output signal (x*(n)) based on a pitch ratio (R_ω(n)), such that the transposition value (transpose_val(n)) corresponds to an integer multiple of a semitone.

(15) The electronic device of anyone of (1) to (14), wherein the circuitry comprises a microphone configured to capture the second vocal signal (s_user(n)).

(16) The electronic device of anyone of (1) to (15), wherein the circuitry is configured to capture the first audio input signal (x(n)) from a real audio recording.

(17) A method comprising: separating by audio source separation a first audio input signal (x(n)) into a first vocal signal (s_original(n)) and an accompaniment (s_Acc(n); s_A1(n), s_A2(n), s_A3(n)), and transposing an audio output signal (x*(n)) by a transposition value (transpose_val(n)) based on a pitch ratio (R_ω(n)), wherein the pitch ratio (R_ω(n)) is based on comparing a first pitch range (R_ω,original(n)) of the first vocal signal (s_original(n)) and a second pitch range (R_ω,user(n)) of the second vocal signal (s_user(n)).

(18) A computer program comprising instructions, the instructions when executed on a processor causing the processor to perform the method of (17).