Title:
SYSTEM AND METHOD FOR SEPARATING SOUND AND CONDITION MONITORING SYSTEM AND MOBILE PHONE USING THE SAME
Document Type and Number:
WIPO Patent Application WO/2015/022036
Kind Code:
A1
Abstract:
The invention provides a system and method for separating the sound of an object of interest from that of its background, and a condition monitoring system and mobile phone using the same. The sound separating system includes a sound source localizing part, including at least one microphone and a processing unit, and an object direction reference determination part, which determines, as object direction reference, information on directions of the object of interest with respect to the sound source localizing part. The processing unit obtains information on a direction from which sound arrives at the sound source localizing part from the sound source by using the microphone signal, and compares it with the object direction reference so as to filter out the sound from the background of the sound source. With the sound separating system, the background noise can be separated from the sound of a specific device at its respective location, and only the signals from the object of interest are analysed.

Inventors:
ORMAN MACIEJ (PL)
PAPE DETLEF (CH)
Application Number:
PCT/EP2013/070283
Publication Date:
February 19, 2015
Filing Date:
September 27, 2013
Assignee:
ABB TECHNOLOGY LTD (CH)
International Classes:
G01S3/802; H04S7/00; G01S3/803; G10K11/34; G10K11/35
Domestic Patent References:
WO2009153053A12009-12-23
Foreign References:
US20120062729A12012-03-15
US20120284619A12012-11-08
Attorney, Agent or Firm:
ZIMMERMANN & PARTNER (München, DE)
Claims:
CLAIMS

1. A system for separating sound from an object of interest and that from its background, a sound source consisting of said object of interest and its background, including:

a sound source localizing part, including at least one microphone and a processing unit; and

an object direction reference determination part, being adapted for determining, as object direction reference, information on directions of said object of interest with respect to said sound source localizing part;

wherein:

said processing unit is adapted for obtaining information on a direction from which sound arrives at said sound source localizing part from said sound source by using microphone signal, and comparing it with said object direction reference so as to separate the sound from the object of interest and its background.

2. The system according to any of the preceding claims, wherein:

said processing unit is further adapted for judging if said direction from which said sound arrived at said sound source localizing part falls under the scope of said object direction reference so as to filter the sound from said background of said sound source.

3. The system according to any of the preceding claims, wherein:

said sound source localizing part includes:

a movable unit, being adapted for free movement and being integrated with said microphone; and

a motion tracking unit, being adapted for tracking the movement of the movable unit; wherein:

said processing unit is further adapted for receiving said microphone signal and motion tracking unit signal and obtaining said information on direction from which sound from the sound source arrived using the microphone signal and motion tracking unit signal obtained during movement of the movable unit.

4. The system according to claim 3, wherein:

the movable unit is integrated with the motion tracking unit.

5. The system according to claim 3 or 4, wherein:

the motion tracking unit is an inertia measurement unit.

6. The system according to claim 3 or 4, wherein:

the motion tracking unit is a vision tracking system.

7. The system according to any of claims 3 to 6, wherein:

said processing unit is further adapted for evaluating Doppler Effect frequency shift of the microphone signal with respect to a directional movement of the movable unit from an initial position.

8. The system according to any of claims 3 to 7, wherein:

said processing unit is further adapted for evaluating a sound level of the microphone signal with respect to a maximum and/or minimum amplitude while the movable unit is moving with respect to the initial position.

9. The system according to claim 7 or 8, wherein: the processing unit is further adapted for providing the information on the direction depending on the Doppler Effect frequency shift of the microphone signal and the motion tracking unit signal.

10. The system according to claim 8 or 9, wherein:

the processing unit is further adapted for providing the information on the direction depending on the sound level of the microphone signal and the motion tracking unit signal.

11. The system according to any of the preceding claims, wherein:

said sound source localizing part is an acoustic camera, including a plurality of microphones.

12. The system according to any of claims 1 to 10, wherein:

said object direction reference determination part includes:

a camera, being adapted for capturing picture of said object of interest and its background; and

a human machine interface, being adapted for obtaining information on contour of said object of interest as regards its background based on said picture;

wherein:

said object direction reference determination part is further adapted for predetermining said object direction reference based on the information on said contour of said object of interest as regards its background.

13. A condition monitoring system, including the system for separating sound from an object of interest and that from its background according to any of claims 1 to 12, wherein: said processing unit is further adapted for judging a condition of said object of interest based on a frequency of said separated sound.

14. The condition monitoring system according to claim 13, further including:

an alarm device, being adapted for generating alarm in response to failure condition of said object of interest.

15. A mobile phone, including the system according to any of claims 1 to 12.

16. A method for separating sound from an object of interest and that from its background, a sound source consisting of said object of interest and its background, including:

determining, as object direction reference, information on directions of said object of interest with respect to a sound source localizing part;

obtaining information on a direction from which sound arrives at said sound source localizing part from said sound source by using microphone signal; and

comparing said obtained information with said object direction reference so as to separate the sound from the object of interest and its background.

17. The method according to claim 16, including:

receiving said microphone signal and motion tracking unit signal and obtaining said information on direction from which sound from the sound source arrived using the microphone signal and motion tracking unit signal obtained during movement of the movable unit.

18. The method according to claim 16 or 17, wherein:

predetermining said object direction reference based on the information on contour of said object of interest as regards its background.

19. The method for condition monitoring of an object of interest using the method according to any of claims 16 to 18, further comprising:

judging a condition of said object of interest based on a frequency of said separated sound.

20. The method according to claim 19, further including:

generating alarm in response to failure condition of said object of interest.

Description:
System and Method for Separating Sound and Condition Monitoring System and

Mobile Phone Using the Same

Technical Field

The invention relates to a system and method for separating sound arriving from an object of interest from that arriving from its background, and to a condition monitoring system and a mobile phone using the same.

Background Art

Acoustic analysis is a method which today is often used, for example, in speech recognition; however, it is rarely used in industrial applications as a condition monitoring technique. The quality of acoustic monitoring depends strongly on the background noise of the environment in which the machine is operated. The effect of background noise can be mitigated by sound source localization. Sound source localization may be performed by an acoustic camera.

Sound analysis can be an important part of a condition monitoring tool. When faults in machinery and plant installations occur, they can often be detected by a change in their noise emissions. In this way, an acoustic camera makes the listening procedure automated and more objective. Current acoustic camera technologies can be used to visualize sounds and their sources. Maps of sound sources that look similar to thermographic images are created. Noise sources can be localized rapidly and analysed according to various criteria. An acoustic camera consists of some sort of video device, such as a video camera, and a multiplicity of sound pressure measuring devices, such as microphones; sound pressure is usually measured in pascals (Pa). The microphones are normally arranged in a pre-set shape and position with respect to the camera.

The idea of an acoustic camera is to identify and quantify noise/sound sources and to form a picture of the acoustic environment by array processing of the multidimensional acoustic signals received by a microphone array, and to overlay that acoustic picture on the video picture. It is a device with an integrated microphone array and digital video camera, which provides visualization of the acoustic environment. Possible applications of the acoustic camera as test equipment are nondestructive measurements for noise/sound identification in the interior and exterior of vehicles, trains and airplanes, measurements in wind tunnels, etc. An acoustic camera can also be built into complex platforms such as underwater unmanned vehicles, robots and robotized platforms. Using a microphone array consisting of a multiplicity of microphones, however, entails problems regarding the relatively high complexity, relatively large volume and relatively high cost of the acoustic camera.

In some further conventional concepts, a few microphones are moved between measurements by way of drives, for example motors. The motion tracking of the microphones is done via detection of the parameters of the drives, for example the speed of the motor or the initial position of the motor. The motion of the microphones is limited due to the mechanical restrictions of the drive; in other words, the microphone cannot move randomly and some routes cannot be followed because of the restriction. Moreover, positional accuracy is limited here in many cases by the length of a sampled or "scanned" area. When moving microphones with motors, the problem of the accuracy of the microphone positions arises. For example, problems may result from tolerances of the motor or from vibrations of the construction. Furthermore, constructing an arrangement for moving microphones with motors without reflections at fixtures is difficult.

Besides, in the noisy environment of industrial plants, where many devices are operating at the same time, an acoustic analysis system, for example an acoustic camera, will take into account the frequencies of all the sound emitted from the object of interest and its background (environment), and the conventional system is unable to automatically separate the sound of the object of interest from that of its background. Therefore, the influence of the frequencies of the sound from the background on that from the object of interest cannot be automatically removed. In particular, a condition monitoring system for detecting a health state/failure of an object of interest, for example an electrical motor, which uses such an acoustic analysis system, looks for high-amplitude noise signals or certain noise frequencies or patterns and locates the sound source of the noise. However, if the specific failure signal is lower than the background noise and the frequencies or patterns to be looked for are not known, the analysis is not possible.

Brief Summary of the Invention

It is therefore an objective of the invention to provide a system for separating sound from an object of interest and that from its background, a sound source consisting of the object of interest and its background, including: a sound source localizing part, including at least one microphone and a processing unit; and an object direction reference determination part, being adapted for determining, as object direction reference, information on directions of the object of interest with respect to the sound source localizing part; wherein the processing unit is adapted for obtaining information on a direction from which sound arrives at the sound source localizing part from the sound source by using the microphone signal, and comparing it with the object direction reference so as to filter out the sound from the background of the sound source. With the sound separating system, the background noise can be separated from the sound of a specific device at its respective location, and only the signals from the object of interest are analysed.

According to another aspect, the invention provides a condition monitoring system. The condition monitoring system includes the system for separating sound from an object of interest and that from its background, wherein the processing unit is further adapted for judging a condition of the object of interest based on a frequency of the filtered sound. With the condition monitoring system, the extracted signals are processed automatically for failure detection.

According to another aspect, the invention provides a mobile phone. The mobile phone includes the system for separating sound from an object of interest and that from its background as an extension of its functionality.

According to another aspect, the invention provides a method for separating sound from an object of interest and that from its background, a sound source consisting of said object of interest and its background, including: determining, as object direction reference, information on directions of said object of interest with respect to a sound source localizing part; obtaining information on a direction from which sound arrives at said sound source localizing part from said sound source by using the microphone signal; and comparing said obtained information with said object direction reference so as to filter out the sound from said background of said sound source. With the method, the background noise can be separated from the sound of a specific device at its respective location, and only the signals from the object of interest are analysed.

According to another aspect, the invention provides a method for condition monitoring of an object of interest using the above method, further comprising: judging a condition of said object of interest based on a frequency of said separated sound. With the condition monitoring method, the extracted signals are processed automatically for failure detection.

Brief Description of the Drawings

The subject matter of the invention will be explained in more detail in the following text with reference to preferred exemplary embodiments which are illustrated in the drawings, in which:

Figure 1 is a block diagram of a system for localizing a sound source according to one embodiment of present invention;

Figure 2 illustrates a random movement of the movable unit according to one embodiment of present invention;

Figure 3A is a block diagram of a system for localizing a sound source according to one embodiment of present invention;

Figure 3B is a block diagram of a system for localizing a sound source according to one embodiment of present invention;

Figure 3C is a block diagram of a system for localizing a sound source according to one embodiment of present invention;

Figure 4 shows a schematic illustration of typical movement of the movable unit according to one embodiment of present invention;

Figure 5 shows a spectrum illustration of Doppler Effect frequency shift of the microphone signal provided by the microphone of the movable unit;

Figure 6 shows a visualization of 2-dimension source localization according to one embodiment of present invention;

Figure 7 shows a visualization of 3-dimension source localization according to one embodiment of present invention;

Figure 8 shows a typical directional sensitivity of a microphone according to one embodiment of present invention;

Figure 9 shows the determination of a sound source location according to the embodiment of figure 8;

Figure 10 shows a flowchart of a method for localizing a sound source according to one embodiment of present invention;

Figure 11 is a block diagram of a system for separating sound from an object of interest and that from its background using the system for localizing a sound source as described above according to figure 3C;

Figure 12 shows the illustration of the direction indicated by the object direction reference;

Figures 13A and 13B show the acoustic spectrum before and after the filtration according to an embodiment of present invention;

Figures 14 A and 14B show pictures taken by the camera and reproduced on the human machine interface before and after marking the contour of the object of interest according to an embodiment of present invention;

Figures 15A and 15B present example spectra of two electrical motor cases; and

Figure 16 shows a flowchart of a method for separating sound from an object of interest and that from its background according to an embodiment of present invention.

The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols in the figures.

Preferred Embodiments of the Invention

Figure 1 is a block diagram of a system for localizing a sound source according to one embodiment of present invention. The system according to figure 1 is designated with 1 in its entirety. As shown in figure 1, the system 1 includes a movable unit 10, a motion tracking unit 11 and a processing unit 12. The movable unit 10 may be integrated with a microphone 100. The movable unit 10 is freely movable with respect to a sound source along a random path, for example in linear movement, in circular movement, in forward and backward movement, and so on. The motion tracking unit 11 is adapted for tracking the movement of the movable unit 10. This allows flexibility in the selection of the movable unit path set by the operator. Figure 2 illustrates a random movement of the movable unit according to one embodiment of present invention. The microphone 100 is adapted for collecting the sound wave that is transmitted from the sound source and arrives at the microphone 100, and thus generating the microphone signal representing a value of a component of the collected sound wave. The motion tracking unit 11 is adapted for tracking a movement of the movable unit 10 and the microphone 100 integrated therewith where the sound wave is detected, and thus generating the motion tracking unit signal representing the position and velocity of the movable unit 10 and the microphone 100 integrated therewith. The movement can hereby be in the x, y, z directions as well as a rotation of the movable unit. The processing unit 12 is adapted for receiving the microphone signal of the microphone 100 of the movable unit 10 and the motion tracking unit signal from the motion tracking unit 11, and obtaining information on a direction from which sound from the sound source arrives using the microphone signal and motion tracking unit signal obtained during movement of the movable unit 10.

In the following, the functioning of the system 1 will be explained briefly. The processing unit 12 is capable of evaluating the microphone signal having been received, recorded or sampled with respect to the movement of the microphone 100 together with the movable unit 10 from an initial position. Hence, the microphone signal includes the Doppler Effect frequency shift. By determining the Doppler Effect frequency shift from the recorded signals collected by the same microphone during its movement, the direction of the movable unit relative to the sound source can be calculated, and in combination with the position signals the location of the sound source can be determined. This provides a simple, low-cost and low-volume system for sound source localization. In addition, because the microphone is integrated into a movable unit that can follow a random path during its movement, the position for collecting the sound wave can be selected with fewer restrictions. In addition, the accuracy of the motion tracking signal can be increased because the movable unit is not driven by devices that have tolerances, vibrations, or reflections at fixtures. Moreover, the motion tracking unit signal is expressive and well suited for indicating the movement of the microphone, and thus the accuracy of the determined position and velocity of the microphone increases.

Figure 3A is a block diagram of a system for localizing a sound source according to one embodiment of present invention. In the embodiment according to figure 3A, the motion tracking unit 11 may be integrated into the movable unit 10, so that the motion tracking unit 11 is movable together with the movable unit 10. For example, the motion tracking unit 11 may be an IMU (inertial measurement unit) including a gyroscope, and it may be integrated into the movable unit 10 together with the microphone 100. Thus, the movable unit 10 can simultaneously measure a combination of an acoustic signal, at least one direction of acceleration signals and at least one direction of gyroscope signals. With such a configuration, the system 1 becomes more compact. Figure 3B is a block diagram of a system for localizing a sound source according to one embodiment of present invention. In the embodiment according to figure 3B, the motion tracking unit 11 may be a vision tracking system which is separate from the movable unit 10 and generates the information about the velocity and position of the movable unit 10 by pattern recognition technology. Figure 3C is a block diagram of a system for localizing a sound source according to one embodiment of present invention. The processing unit 12 may be integrated with the movable unit 10. Of course, the processing unit 12 may also be separate from the movable unit 10 and may be implemented by a personal computer. The system may further include a screen adapted for optically visualizing the position of the sound source, presenting it as a map of sound locations, for example by displaying the location of sound sources on a picture of the scanned area. The screen may be integrated with the movable unit or provided on a personal computer.

In some embodiments, the processing unit 12 may be further adapted for determining direction information based on an evaluation of the level of the microphone signal during the movement of the microphone 100 and the movable unit 10. The processing unit 12 may, for example, determine a sound level of the microphone 100 with respect to a maximum and/or minimum amplitude while the movable unit 10 is moving with respect to the initial position, and may further be adapted for providing the information on the direction depending on the sound level of the microphone signal and the motion tracking unit signal. In summary, it can thus be stated that different information can be extracted from the microphone signal. For example, using the Doppler Effect frequency shift, an influence of the Doppler Effect on the microphone signal can be evaluated. The amplitude of the microphone signal may also be employed for improving the precision of the direction determination. The microphone signal may also include a combination of the above-mentioned information.

In some embodiments, all the measurements may be performed by a mobile phone.

Figure 4 shows a schematic illustration of a typical movement of the movable unit according to one embodiment of present invention. As shown in figure 4, the movable unit 10 is at an initial position 30, where the acoustic signal should be recorded while the microphone is not moving. This stationary signal (assuming that the sound source is generating a stationary signal) will be used as a reference in further analysis. The direction determination is based on the analysis of the non-stationary acoustic signal. To obtain such a non-stationary signal, the microphone has to be moved while the acoustic signal is being recorded. Such movement can be performed in three directions: forward-backward with respect to the object of interest, left-right and up-down as presented in figure 4, or as a combination of the movement components in the three directions, and it needs to be performed in front of the object of interest. As shown in figure 4, as an example, the movable unit 10 may move to position 31 in the forward-backward direction, to position 32 in the left-right direction, to position 33 in the up-down direction, or to a position along a direction that is a combination of the above three directions. Hence, the distance of the microphone 100 and the movable unit 10 from the sound source varies in the movement from the initial position 30 to positions 31, 32, 33. The closer the microphone is to the sound source, the better the resolution of the measurements that can be achieved.

Figure 5 shows a spectrum illustration of the Doppler Effect frequency shift of the microphone signal provided by the microphone. For example, the sound source contains 5 kHz as a main frequency. At the initial position 30, the Doppler Effect frequency shift is minimal and even negligible. A first signal 40 is recorded while the microphone 100 is not moving. A second signal 41 is recorded when the microphone 100 and the movable unit 10 are moving away from the sound source, while a third signal 42 is recorded when the microphone is getting closer to the sound source. As is visible, the frequency peak at 5 kHz in the case of the first signal 40 is relatively sharp, which means that the frequency is constant over the whole measurement period. It can be noticed that the second signal 41 is no longer a sharp peak and is clearly shifted to a lower frequency, while the third signal 42 is shifted to a higher frequency. The visible effect is known as the Doppler Effect. The figure illustrating the Doppler Effect demonstrates that the frequency shift is large enough to be measurable by the same microphone.
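As a rough, order-of-magnitude check using the standard moving-observer Doppler relation (the 5 kHz tone is the example above; a microphone speed of about 1 m/s is an assumed hand-movement speed, not a value from this text):

$$ \Delta f \approx f_0 \, \frac{v_0}{v} = 5000\ \text{Hz} \cdot \frac{1\ \text{m/s}}{340\ \text{m/s}} \approx 14.7\ \text{Hz} $$

A shift of this size is readily resolvable in the recorded spectrum, which is consistent with the observation above.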

While performing acoustic measurements with a moving microphone, it is required to simultaneously measure 3-direction acceleration signals and 3-direction gyroscope signals. Typically, contemporary mobile phones have those sensors embedded. These measurements will be utilized to detect the mobile phone path and speed. Alternatively, vision markers might be utilized to obtain the microphone movement path and velocity.

In the following, details regarding the procedure for determining the direction from which the sound from the sound source arrives at the microphone will be described. Here, it is assumed that the frequency of the sound source is known, for example 5 kHz. The direction of the sound source with respect to the microphone may, for example, be described by a direction of a velocity that deviates from the velocity of the microphone and the movable unit, in terms of the angle between the two velocities. This description may apply to the visualization of 2-dimensional or 3-dimensional source localization. For example, if the velocity of the movable unit is known, the direction of the sound source may also be described by the unknown angle, and this leads to the need to determine the unknown angle, which will be described hereinafter.

Figure 6 shows a visualization of 2-dimensional source localization according to one embodiment of present invention. Based on the stationary measurements, the frequencies of interest should be selected. In this example the frequency of interest is 5 kHz. Then, from the 2-direction speed path, the moment of time where the speed is most constant should be selected. This speed is marked as V. For the moment of time for which the speed V was selected, the frequency shift of the 5 kHz component should be determined in the respective acoustic signal. Since such a signal might be relatively short, methods like best-fitting sinusoid estimation may work better than a standard FFT approach.

The Doppler Effect equation describes the relation between the speed of a moving sound source or a moving observer and the frequency shift in the acoustic signal recorded by the observer. In the presented case only the observer is moving. For this case the Doppler Effect equation looks as follows:
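A reconstruction of the standard moving-observer Doppler relation, consistent with the variable definitions given below and under the assumption that f_s denotes the frequency shift relative to f_0, is:

$$ f_s = f_0 \, \frac{v_0}{v} \qquad (1) $$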

where f_s is the frequency shift due to the Doppler Effect, f_0 is the actual sound source frequency, v is the speed of sound, which can be assumed to be equal to 340 m/s, and v_0 is the motion speed of the observer, in this case the speed of the microphone. The sign of v_0 depends on the direction of the speed in relation to the sound source.

For proper localization of the sound source C, the distance |CB| between the object of interest C and the microphone at the initial position B is required. In the example presented in figure 6, the motion speed at point B is equal to V and, as can be noticed, the microphone is heading towards point D, which is shifted to the left side of the sound source C by an angle α. By rearranging eq. (1) into the form where v_0 is on the left side of the equation, and assuming that the microphone is getting closer to the sound source, the equation takes the following form:
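A reconstruction of this rearranged form under the same assumptions (v_0 taken as positive for a microphone approaching the source) is:

$$ v_0 = v \, \frac{f_s}{f_0} \qquad (2) $$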

If f_0 is the frequency of interest and f_s is the actual frequency shift of the respective f_0, then v_0 is the microphone speed in relation to the sound source C. Therefore we can write:
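A reconstruction of this relation, identifying the speed from eq. (2) with the component of the microphone speed towards the source, is:

$$ V_y = v_0 = v \, \frac{f_s}{f_0} \qquad (3) $$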

In figure 6 this component of the speed is marked as V_y. The cosine of the angle α can be expressed as:
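A reconstruction of this expression (V being the total microphone speed selected above and V_y its component towards the source) is:

$$ \cos\alpha = \frac{V_y}{V} \qquad (4) $$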

Substituting equation (3) into eq. (4) and using the speed V obtained previously, it is possible to calculate the angle α. Knowing the angle α, it is possible to determine the position of the sound source as point C or point A, as presented in figure 6.
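A minimal sketch of this angle calculation in Python, assuming the reconstructed equations (1)-(4) above (the function name, argument names and the example values are illustrative, not taken from the source):

```python
import math

def doppler_angle(f0_hz, fs_hz, v_mic_mps, c_mps=340.0):
    """Estimate the angle between the microphone's motion direction and
    the direction to the sound source from the Doppler frequency shift.

    f0_hz:     stationary (reference) frequency of interest
    fs_hz:     measured frequency shift of that component while moving
    v_mic_mps: microphone speed V from the motion tracking unit
    c_mps:     speed of sound (assumed 340 m/s)
    """
    v_radial = c_mps * fs_hz / f0_hz            # eq. (3): speed component towards the source
    cos_alpha = v_radial / v_mic_mps            # eq. (4)
    cos_alpha = max(-1.0, min(1.0, cos_alpha))  # guard against measurement noise
    return math.degrees(math.acos(cos_alpha))

# Example: 5 kHz tone, 10 Hz measured shift, microphone moving at 1 m/s
print(doppler_angle(5000.0, 10.0, 1.0))   # roughly 47 degrees
```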

Thus it can, for example, be seen that in two dimensions the sound source lies in the direction along one of the two sides, BA and BC, of the triangle BAC. Hence, based on this finding, a direction of the sound source can be determined. In summary, the processing unit 12 is adapted for evaluating a first Doppler Effect frequency shift of the first microphone signal with respect to a first directional movement of the movable unit from an initial position, and the processing unit 12 is adapted for providing the information on the direction depending on the first Doppler Effect frequency shift of the first microphone signal and the motion tracking unit signal.

In order to increase the accuracy of the determination of the sound source direction, the above procedure may be repeated at least once for a different selection of the velocity V. By having such a repetition, another triangle B'A'C' is obtained with at least one side overlapping one of the sides BA and BC of the triangle BAC. Hence, based on the finding involving a combination of the two triangles BAC and B'A'C', the direction of the sound source can be determined as the direction from the intersection of the sides, for example BC and B'C'. In summary, the processing unit is further adapted for evaluating a second Doppler Effect frequency shift of a second microphone signal with respect to a second directional movement of the movable unit from the initial position, and the processing unit is adapted for providing the information on the direction depending on the first Doppler Effect frequency shift and the second frequency shift of the first and second microphone signals and the motion tracking unit signals for the first and second directional movements of the movable unit.

Figure 7 shows a visualization of 3-dimensional source localization according to one embodiment of present invention. A 3D plane P may be created in such a way that it is perpendicular to the direction from the initial motion point B and its centre is at point C, which is the object of interest (in this case also the sound source). In the case of localizing the sound source in 3D, the calculated angle α defines a cone whose vertex is at the point B. The intersection of the cone and the plane P creates an ellipse, which is marked in figure 7. By repeating the calculation three times, three different cones are calculated and three different ellipses are drawn. The common part of the ellipses is the sound source C, which in this case is also the object of interest. By substituting the plane P with a real photo of the object of interest, the sound source localization can be visualized as proposed in figure 7. Hence, based on this finding, a direction of the sound source can be determined in three dimensions. In summary, the processing unit is adapted for evaluating a first, second and third Doppler Effect frequency shift of a first, second and third microphone signal with respect to a first, second and third directional movement of the movable unit from the initial position; and the processing unit is adapted for providing the information on the direction depending on the first, second and third Doppler Effect frequency shifts of the first, second and third microphone signals and the motion tracking unit signals for the first, second and third directional movements of the movable unit.
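As a hedged restatement of this geometry in formula form (the unit motion vector v̂ is notation introduced here, not taken from the source), the calculated angle α constrains the possible source positions to a cone with vertex at B and axis along the motion direction:

$$ \mathcal{K} = \left\{ \mathbf{x} \; : \; (\mathbf{x} - \mathbf{B}) \cdot \hat{\mathbf{v}} = \lVert \mathbf{x} - \mathbf{B} \rVert \cos\alpha \right\} $$

Intersecting this cone with the plane P yields the ellipse of figure 7, and the common point of three such ellipses gives the estimated position of C.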

Alternatively, the sound amplitude could also be evaluated from the recorded sound signals for determining the sound source location. The sound sensitivity of a microphone varies with the direction of the sound source, and figure 8 shows a typical directional sensitivity of a microphone according to one embodiment of present invention. The trajectory 802 shows the sensitivity of the microphone 100 for sound arriving under different angles. In the direction indicated by the arrow 801, the microphone 100 reaches its maximum sensitivity and thus the highest output level. By turning the movable unit 10, and thus the microphone 100, in such a way that the maximum and/or minimum output level is reached, the microphone unit points in the direction of the sound source and this direction can be determined.

Alternatively, other sound amplitude features, e.g. the minimum sound level, could also be used for direction detection. The level and the direction must only be clearly determinable. The shown sensitivity is a typical sensitivity of a microphone and will vary with the exact embodiment of the microphone and its environment. But the maximum sensitivity will only be achieved in a certain direction and can be used for determining the direction of a sound source.

Figure 9 shows the determination of a sound source location according to the embodiment of figure 8. At a known position A the microphone is turned in the direction X1 of its maximum and/or minimum amplitude and will thus point to the location of the sound source B. At a second known position C, which is not in line with the direction determined at position A, the microphone is again turned to its maximum and/or minimum amplitude and gives a second direction measurement X2 of the sound source location B. The intersection point of the two direction lines X1 and X2 gives the location of the sound source. The positions A and C are hereby determined with the motion tracking unit 11. The motion tracking can be one of the methods already described above or any other method for determining the positions A and C. Alternatively, the motion tracking unit could also consist of two marked positions in space, where the two measurements are performed.
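A minimal sketch of this two-position triangulation in Python, assuming 2D positions and unit direction vectors are already available from the motion tracking unit and the sensitivity measurement (all names and example values are illustrative):

```python
import numpy as np

def locate_source(pos_a, dir_a, pos_c, dir_c):
    """Locate a sound source as the intersection of two bearing lines.

    pos_a, pos_c: known 2D measurement positions A and C (from motion tracking)
    dir_a, dir_c: unit direction vectors X1 and X2 of maximum (or minimum)
                  microphone sensitivity measured at A and C
    Returns the 2D intersection point, i.e. the estimated source location B.
    """
    pos_a, dir_a = np.asarray(pos_a, float), np.asarray(dir_a, float)
    pos_c, dir_c = np.asarray(pos_c, float), np.asarray(dir_c, float)
    # Solve pos_a + t*dir_a = pos_c + s*dir_c for t and s
    m = np.column_stack((dir_a, -dir_c))
    t, _ = np.linalg.solve(m, pos_c - pos_a)
    return pos_a + t * dir_a

# Example: a source at (2, 3) observed from A = (0, 0) and C = (4, 0)
print(locate_source((0, 0), (2 / np.sqrt(13), 3 / np.sqrt(13)),
                    (4, 0), (-2 / np.sqrt(13), 3 / np.sqrt(13))))  # ~[2. 3.]
```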

Figure 10 shows a flowchart of a method for localizing a sound source according to one embodiment of present invention. The method 1000 includes, in a step 1010, obtaining a microphone signal and a motion tracking unit signal from a movable unit integrated with a microphone and a motion tracking unit during free movement of the movable unit. A sampling rate at which the microphone signal is sampled is determined based on a known frequency of the sound signal. For example, a sampled version of the microphone signal from the microphone 100 is present, which was generated by sampling the microphone signal at a sampling frequency at least twice the frequency of interest.

The method 1000 further includes, in a step 1020, determining a velocity and an initial position of the movable unit and the microphone on the basis of the motion tracking unit signal, and determining a Doppler Effect frequency shift on the basis of the microphone signal.

The method 1000 further includes, in a step 1030, determining a direction of the microphone with respect to the sound source, based on a value of the Doppler Effect frequency shift and the motion tracking unit signal. For example, the Doppler Effect frequency shift of the microphone signal is dependent on the speed of the microphone with respect to the sound source. The magnitude of the Doppler Effect frequency shift indicates how fast the movable unit is moving with respect to the sound source. For example, the evaluation of the direction of the sound source may be performed according to the algorithm described with reference to figures 6 and 7.

Alternatively, the method 1000 further includes, in a step 1010, evaluating a sound level of the microphone signal with respect to a maximum and/or minimum amplitude while the movable unit is moving with respect to the initial position, and in step 1030, providing the information on the direction depending on the sound level of the microphone signal and the motion tracking unit signal.

Depending on the level of accuracy of the sound source direction to be determined, the procedure may be performed at least once, iteratively, for a first Doppler Effect frequency shift of the first microphone signal with respect to a first directional movement of the movable unit from an initial position, a second Doppler Effect frequency shift of the second microphone signal with respect to a second directional movement of the movable unit from the initial position, and a third Doppler Effect frequency shift of the third microphone signal with respect to a third directional movement of the movable unit from the initial position, together with the motion tracking unit signals respectively for the first, second and third directional movements of the movable unit. Alternatively, the procedure may take into consideration a sound level of the microphone signal with respect to a maximum and/or minimum amplitude while the movable unit is moving from the initial position, and the motion tracking unit signal for the directional movement of the movable unit.

Figure 11 is a block diagram of a system for separating sound from an object of interest and that from its background using the system for localizing a sound source as described above according to figure 3C. The skilled person should understand that the sound source localizing system can be any of those described according to figures 1 to 10, or a static unit including a multiplicity of microphones using acoustic holography or a beamforming technique. The sound source direction can also be determined with a static unit where, e.g., two or more microphones are located at certain separated positions. Due to the different travel paths from the sound source to the different microphones, the sound will arrive with a certain time delay. This time delay will result in a phase shift of the signals determined by the different microphones, and by evaluating this phase shift, the direction of the sound can be determined. As shown in figure 11, an object of interest and many devices in its background constitute the sound source. The sound separating system 11 includes the sound source localizing system 1 with a microphone 100, and an object direction reference determination part 13. The object direction reference determination part 13 is for determining, as object direction reference, information on directions of said object of interest with respect to said sound source localizing system 1.

Figure 12 shows an illustration of the directions indicated by the object direction reference. As shown in figure 12, the object direction reference may indicate a multiplicity of directions, as shown by the arrows, in which the sound may arrive at the sound source localizing system 1 from different points of the contour 14 of the object of interest 15, for example the direction 12a starting from the upper contour 14 of the object of interest 15, the direction 12b starting from the lower contour 14 of the object of interest 15, the direction 12c starting from the left contour 14 of the object of interest 15, the direction 12d starting from the right contour 14 of the object of interest 15, etc. Based on the information on the object direction reference, the processing unit 12 of the sound source localizing system 1 can locate the object of interest with respect to its own position. The more points on the contour 14 of the object of interest are considered, the more accurately the location with respect to the sound source localizing system 1 can be determined. As an alternative, the skilled person shall understand that other technologies for determining the object direction reference are applicable, for example selecting the directions from points inside the contour 14 of the object of interest.

Returning to figure 11, the processing unit 12 of the sound source localizing system 1 can obtain information on a direction from which sound arrives at said sound source localizing part 1 from the sound source by using the microphone signal, as has been described above in the embodiments according to figures 1 to 10. To avoid redundancy, a detailed description thereof is omitted hereafter. The processing unit 12 can further compare this information with the object direction reference so as to separate the sound of the object of interest from that of the background of said sound source. For example, the processing unit 12 can judge whether said direction from which the sound arrived at the sound source localizing system 1 falls under the scope of the object direction reference so as to filter out the sound from the background, namely the sound remaining after the filtering mainly contains the sound arriving from the object of interest. The direction of sound can be described by two angles, α in the vertical plane and β in the horizontal plane (see figure 12). To separate sound originating from the object of interest from the sound of its background, the angles of the sound direction need to be compared with the angles of the reference directions. If the sound of interest has the respective direction angles α and β, the following comparison of angles needs to be performed:
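A reconstruction of comparison (5), consistent with the explanation that follows, with α_j, β_j and α_k, β_k denoting the reference-direction angles on opposite sides of the contour, is:

$$ \alpha_j \le \alpha \le \alpha_k \quad \text{and} \quad \beta_j \le \beta \le \beta_k \qquad (5) $$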

where α_j is the angle in the vertical plane of a reference direction, β_j is the angle in the horizontal plane of the same reference direction, α_k is the angle in the vertical plane of the reference direction located on the opposite side of the contour to the respective direction described by the angles α_j and β_j, and β_k is the angle in the horizontal plane of that opposite reference direction. If formula (5) is true, then the sound described by the direction angles α and β is coming from within the contour, which means its origin is in the object of interest. The contour 14 can be a two-dimensional line creating a two-dimensional plane, and the angles of the directions can be mapped into this two-dimensional plane as well, indicating the location of the sound in two-dimensional space.
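A minimal sketch of this angular gating of an acoustic spectrum in Python, assuming that a direction (α, β) has already been estimated for each frequency component and that the contour yields bounding reference angles (the function and variable names are illustrative, not from the source):

```python
def filter_spectrum_by_direction(spectrum, directions, alpha_bounds, beta_bounds):
    """Keep only spectrum components whose estimated arrival direction lies
    inside the contour of the object of interest.

    spectrum:     dict {frequency_hz: amplitude}
    directions:   dict {frequency_hz: (alpha_deg, beta_deg)} estimated per component
    alpha_bounds: (alpha_j, alpha_k) vertical-plane reference angles of the contour
    beta_bounds:  (beta_j, beta_k) horizontal-plane reference angles of the contour
    """
    a_lo, a_hi = sorted(alpha_bounds)
    b_lo, b_hi = sorted(beta_bounds)
    filtered = {}
    for freq, amp in spectrum.items():
        alpha, beta = directions[freq]
        inside = a_lo <= alpha <= a_hi and b_lo <= beta <= b_hi  # formula (5)
        filtered[freq] = amp if inside else 0.0  # background components are suppressed
    return filtered
```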

With the sound separating system, the background noise can be separated from the sound of a specific device at its respective location, and only the signals from the object of interest are analysed.

Figures 13A and 13B show the acoustic spectrum before and after the filtration according to an embodiment of present invention. Comparing figures 13A and 13B, there are differences in the sound component amplitudes at certain frequencies. For example, the amplitude of the sound component at frequency A or B in figure 13A has the same height as that in figure 13B, which means that all the sound with frequency A or B arrived at the sound source localizing system 1 from the object of interest; the amplitude of the sound component at frequency C in figure 13A has a value but its counterpart in figure 13B is zero, which means that all the sound with frequency C arrived at the sound source localizing system 1 from the background. By having such a comparison of sound directions performed by the processing unit 12, the sound separating system 11 is able to automatically separate the sound of the object of interest from that of its background. The sound source localizing system can detect only acoustic signals from the direction of the object of interest, while signals from other locations, i.e. from the background, are suppressed. Therefore, the influence of the sound from the background on that from the object of interest is removed, and this is particularly helpful in cases where the former is stronger than the latter and the characteristics of the sound from the object of interest are unknown.

Returning to figure 11, the object direction reference determination part 13 may include a camera 130 and a human machine interface 131. The camera 130 can capture a picture of the object of interest and its background, and the human machine interface 131 can obtain the information on the contour of said object of interest as regards its background based on the picture captured by the camera 130. By having the human machine interface, the user can select the object direction reference to be analysed. The skilled person should understand that the human machine interface 131 can be implemented by a touch screen, a personal computer, etc. The camera 130 can be directed to the area of the sound source including the object of interest and its background, which is monitored by the sound separating system 11, and this area is displayed on the screen. The user can then select the object of interest to be investigated by marking it on the screen. This can be done by either clicking onto the position, drawing a line around the object or any other input method. Figures 14A and 14B show pictures taken by the camera and reproduced on the human machine interface before and after marking the contour of the object of interest according to an embodiment of present invention. As shown in figures 14A and 14B, an electrical motor is an example of the object of interest and a touch screen serves as an example of the human machine interface 131. As illustrated by figure 14B, the user marked the contour of the electrical motor by drawing its contour on the touch screen. The object direction reference determination part 13 can determine the object direction reference based on the information on the contour of the electrical motor as regards its background. If the location is determined to be inside the contour of the electrical motor, as indicated by the marked portion in figure 14B, the specific frequency component remains in the spectrum in figure 13B. If the location is in the area outside of the contour of the electrical motor, the specific frequency component is removed from the spectrum in figure 13A and not reproduced in the spectrum in figure 13B. This operation will be done for all frequencies visible in the spectrum, or for a defined frequency range or noise pattern.

Besides, the localizing system 1 can be used for a condition monitoring system, where the processing unit 12 can be further adapted for judging a condition of said object of interest based on a frequency of said separated sound. Again taking the electrical motor as an example of the object of interest, the acoustic spectrum of figure 13B can be considered as relatively similar to the vibration spectrum of the electrical motor. Therefore methods similar to those for vibration analysis can be applied. For example, it is well known that in the case of electric motors, static eccentricity causes additional forces visible in vibration at a frequency f_ecc, given by the following equation:
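A reconstruction consistent with the worked example below (a 50 Hz supply giving a 100 Hz eccentricity component) is:

$$ f_{ecc} = 2 \, f_{pow} $$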

where f_pow is the power supply frequency. Figures 15A and 15B present example spectra of two electrical motor cases. Figure 15A presents the acoustic spectrum of a healthy motor, while figure 15B presents the acoustic spectrum of an electrical motor with static eccentricity. In the case below, both electrical motors were supplied at 50 Hz; therefore the static eccentricity related frequency f_ecc should be visible at 100 Hz. It can be noticed that figure 15B contains a high peak at around 100 Hz while figure 15A does not. The value of this peak is above 600 mPa, while in figure 15A this peak is smaller than 350 mPa. By taking the amplitude at the f_ecc frequency as a static eccentricity indicator, it is clearly visible that the electrical motor in the case of figure 15B has a higher level of static eccentricity than the healthy electrical motor in the case of figure 15A. Those results are very similar to vibration-based results and clearly indicate static eccentricity. However, if those acoustic measurements were taken by a normal microphone, there would be no guarantee that the source of the frequency of interest (in our case f_ecc) is the electrical motor body. Since in this embodiment all the remaining frequencies visible in the spectrum after filtration have their source within the contour (the electrical motor body), it is understandable that this frequency is not caused by background noise. By having the condition monitoring system, the extracted signals are processed automatically for failure detection. The condition monitoring system may further include an alarm device, for example a loudspeaker, so that the user can do a manual failure analysis.
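A minimal sketch of such an automatic check in Python, assuming the direction-filtered spectrum from the separation step and an amplitude threshold chosen by the user (the 0.45 Pa threshold and all names are illustrative assumptions, not values from the source):

```python
def static_eccentricity_alarm(filtered_spectrum, supply_freq_hz,
                              threshold_pa=0.45, tol_hz=2.0):
    """Return True if the filtered acoustic spectrum indicates static eccentricity.

    filtered_spectrum: dict {frequency_hz: amplitude_pa} after direction filtering
    supply_freq_hz:    power supply frequency f_pow (e.g. 50 Hz)
    threshold_pa:      alarm threshold in Pa (illustrative value)
    tol_hz:            tolerance window around f_ecc = 2 * f_pow
    """
    f_ecc = 2.0 * supply_freq_hz
    peak = max((amp for freq, amp in filtered_spectrum.items()
                if abs(freq - f_ecc) <= tol_hz), default=0.0)
    return peak > threshold_pa

# Example: a ~0.6 Pa (600 mPa) component near 100 Hz triggers the alarm
print(static_eccentricity_alarm({100.0: 0.6, 250.0: 0.1}, 50.0))  # True
```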

It is understandable to the skilled person that the components of the sound source localizing system 1 can be implemented in a mobile phone as an extension of its functionality.

Figure 16 shows a flowchart of a method for separating sound from an object of interest and that from its background according to an embodiment of present invention. The method 16 includes, in step 160, determining, as object direction reference, information on directions of the object of interest with respect to a sound source localizing part; in step 161, obtaining information on a direction from which sound arrives at the sound source localizing part from the sound source by using the microphone signal; and in step 162, comparing said obtained information with the object direction reference so as to separate the sound of the object of interest from that of said background of said sound source. The method 16 may further include predetermining said object direction reference based on the information on the contour of said object of interest as regards its background. The method 16 may be used for condition monitoring, and thus it may further include judging a condition of said object of interest based on a frequency of said filtered sound. It is possible to iteratively check whether the direction of each single frequency from the acoustic spectrum is inside the contour or not.

The method 16 may be supplemented by all those steps and features described herein. For example, the obtaining, in step 161, of information on a direction from which sound arrives at the sound source localizing part may include one or more steps of the method according to figure 10.

Though the present invention has been described on the basis of some preferred embodiments, those skilled in the art should appreciate that those embodiments should by no means limit the scope of the present invention. Without departing from the spirit and concept of the present invention, any variations and modifications to the embodiments should be within the apprehension of those with ordinary knowledge and skills in the art, and therefore fall within the scope of the present invention, which is defined by the accompanying claims.