

Title:
SYSTEM FOR COMBINED HEARING AND BALANCE TESTS OF A PERSON WITH MOVING SOUND SOURCE DEVICES
Document Type and Number:
WIPO Patent Application WO/2020/254462
Kind Code:
A1
Abstract:
The invention relates to a system (1) for hearing and balance tests of a person with moving sound source devices (30), comprising at least one sound source device (30) arranged around a common center (100), wherein the at least one sound source device (30) is arranged behind a visual barrier (40) so that the at least one sound source device (30) is invisible to a person (2) located at the common center (100), wherein each sound source device (30) of the at least one sound source device (30) is movably arranged and movable around the common center (100) independently from each other, wherein the system (1) comprises an automatically movable platform (50) for a person (2), wherein the platform (50) is arranged at the common center (100), wherein the movable platform (50) is configured to move the person (2) relatively to the at least one sound source device (30), wherein the movable platform comprises a motor arranged and configured to automatically move the platform (50).

Inventors:
FISCHER TIM (CH)
CAVERSACCIO MARCO (CH)
WIMMER WILHELM (CH)
Application Number:
PCT/EP2020/066869
Publication Date:
December 24, 2020
Filing Date:
June 18, 2020
Assignee:
UNIV BERN (CH)
International Classes:
A61B5/00; A61B5/12
Domestic Patent References:
WO1988004909A2 (1988-07-14)
Foreign References:
US7163516B1 (2007-01-16)
US20150032186A1 (2015-01-29)
Other References:
WATANABE K. ET AL.: "Dataset of head-related transfer functions measured with a circular loudspeaker array", Acoustical Science and Technology, vol. 35, no. 3, 1 May 2014, Tokyo, pages 159-165, XP055323676, DOI: 10.1250/ast.35.159
Attorney, Agent or Firm:
SCHULZ JUNGHANS PATENTANWÄLTE PARTGMBB (DE)
Claims:

1. A system (1) for a hearing and balance test of a person with moving sound source devices (30), comprising at least one sound source device (30) arranged around a common center (100),

wherein the at least one sound source device (30) is arranged behind a visual barrier (40) so that the at least one sound source device (30) is invisible to a person (2) located at the common center (100),

wherein each sound source device (30) of the at least one sound source device (30) is movably arranged and movable around the common center (100) independently from each other,

characterized in that

the system (1) comprises an automatically movable platform (50) for a person (2), wherein the platform (50) is arranged at the common center (100), wherein the movable platform (50) is configured to move the person (2) relatively to the at least one sound source device (30), wherein the movable platform comprises a motor arranged and configured to automatically move the platform (50).

2. The system (1) according to claim 1, wherein the system (1) comprises a plurality of sound source devices (30) arranged behind the visual barrier (40) so that the sound source devices (30) are invisible to a person (2) located at the common center (100), wherein each sound source device (30) is movably arranged and movable around the common center (100) independently from each other.

3. The system (1) according to claim 1 or 2, wherein the movable platform (50) is configured to rotate a person (2) on the platform (50) around a rotation axis extending particularly vertically through the common center (100).

4. The system (1) according to one of the preceding claims, wherein the movable platform (50) is configured to move a person (2) in a translational fashion, particularly along a horizontal plane.

5. The system (1) according to one of the preceding claims, wherein the system (1) comprises a computerized system (60) such as a computer (61) configured to control position and/or velocity of the at least one sound source device (30) by means of control commands provided to the sound source device (30) and wherein the computerized system (60) is further configured to control a position and/or a speed of the movable platform (50) by means of control commands provided to the movable platform (50).

6. The system (1) according to claim 5, wherein the computerized system (60) is configured to control the position and/or the velocity of the platform (50) in relation to the position and/or velocity of the at least one sound source device (30).

7. The system (1) according to one of the preceding claims, wherein the at least one sound source device (30) is arranged on a track (70), along which the sound source device (30) is movable, particularly wherein the plurality of sound source devices (30) are arranged on and are movable on the same track (70).

8. The system (1) according to one of the preceding claims, wherein each sound source device (30) comprises a mobile module (31) configured to move the sound source device (30) around the common center (100), particularly along the track (70), particularly wherein the mobile module (31) is motorized and configured to automatically move the sound source device (30), particularly in response to control commands received from the computerized system (60).

9. The system (1) according to one of the preceding claims, wherein the system (1) comprises at least one, particularly fixed, optical sensor (80) such as a camera, wherein the at least one optical sensor (80) is arranged with respect to the at least one sound source device (30) such that a position of all sound source devices (30) can be determined from sensor data acquired by the at least one optical sensor (80).

10. The system (1) according to claim 9, wherein each sound source device (30) has an optical marker (32), particularly wherein each marker (32) is different from the other markers (32), wherein each marker (32) is arranged such that the at least one optical sensor (80) records said marker (32).

11. The system (1) according to one of the preceding claims, wherein each sound source device (30) comprises a sensor configured to determine a position and/or a velocity of the sound source device, particularly a position and/or a velocity of the sound source device (30) on the track (70).

12. The system (1) according to any of the claims 5 to 11, wherein each sound source device (30) and/or the at least one optical sensor (80) of the system (1) each comprise a transmission module configured to particularly wirelessly transmit status information of the sound source device (30) to the computerized system (60), particularly wherein the transmission module is configured to transmit the sensor data acquired by the sensors of the sound source devices (30) and/or the at least one optical sensor (80) to the computerized system (60), particularly wherein the status information comprises or is indicative of a position and/or a velocity of the sound source device (30).

13. The system (1) according to any of the preceding claims, wherein the system (1) further comprises a physiological sensor system (90) particularly arranged at the common center (100), wherein the physiological sensor system (90) is arranged and configured to acquire sensor data from a person (2) at the common center (100), the sensor data being indicative of a heart rate, a skin resistance, a pupil diameter, a direction of gaze and/or a movement of the head of the person (2) at the common center (100), particularly wherein the physiological sensor system (90) is configured to provide the acquired sensor data to the computerized system (60).

14. The system (1) according to one of the claims 5 to 13, wherein the system (1), particularly the computerized system (60), is configured to store the status information, particularly the sensor data acquired by the sensors of the sound source devices (30), the at least one optical sensor (80) and/or the physiological sensor system (90) as well as position and/or speed of the platform (50).

15. The system (1) according to any of the preceding claims, wherein the system (1) is configured to be operated and functional in a non-anechoic space, particularly wherein the system (1) is arranged in a non-anechoic space.

Description:
System for combined hearing and balance tests of a person with moving sound source devices

Specification

The invention relates to a system for a particularly combined hearing and balance test of a person with moving sound source devices.

The human inner ear combines the structures required for the senses of hearing (cochlea) and balance (vestibule and semicircular canals). Although the structures are anatomically directly connected and share the same fluid compartments (i.e. the endolymphatic and perilymphatic spaces), sensory cross-talk is rare under physiological conditions. However, certain inner ear-associated diseases exist that can cause combined symptoms with hearing loss, tinnitus, and vertigo (e.g. Meniere’s disease) or cross-stimulation (e.g. Tullio’s phenomenon). So far, these phenomena and their related inner ear diagnostics are assessed separately in the domains of Audiology (hearing diagnostics) and Neurotology (balance diagnostics).

Inner ear diagnostics have to rely on the indirect observation of physiologic phenomena after stimulation, e.g. using behavioral tests, because the inner ear is difficult to access and is embedded within the densest bone of the human body (petrous part of the temporal bone). A direct in-situ application of sensors, e.g. pressure sensors, into the inner ear is highly invasive and currently not applicable.

Existing devices and methods are not capable of addressing the need for more specific and sensitive inner ear diagnostics that can also be applied to detect and investigate symptoms of sensory cross-talk. There is an unresolved need in the art to address the above-mentioned shortcomings and inadequacies. Furthermore, the performance of most systems disclosed in the art depends on the acoustic room characteristics, as for example the generation of virtual sound sources using two or more real - also referred to as primary - sound sources by producing predefined sound wave fields requires a strict control of the echoic behavior of the environment. Alternatively, headphones are used, which are not suitable for all persons, such as for example hearing-impaired persons wearing a hearing aid, or for all situations, as sound perception in headphones does not reflect real-world sound perception. Watanabe et al. [1] disclose a setup comprising a plurality of loudspeakers that are arranged along a horizontal and a vertical circle, with a seat in the center of these two loudspeaker circles. A combined hearing and balance test is not possible with this setup, as it for example does not provide a movable seat. Moreover, the setup of Watanabe et al. is comparably costly due to the large number of speakers required.

For directional hearing tests the so-called “Mainzer Kindertisch” is known, which comprises a plurality of sound sources arranged at fixed positions around a common center, where the test person sits. The sound source devices are configured to generate wave fields for generating virtual sound sources in order to estimate and record the directional hearing capabilities of the test person. The test person is required to point towards the direction the sound seemingly comes from. The virtual sound source position accuracy of such systems is in the range of 5°; however, the implemented virtual sound sources do not generate the same wave fields as real sound sources at the given position.

On the other hand, rotary chair tests are known - WO 8804909 A2 - for testing the vestibular system of a person. Specifically, systems are known for recording an eye movement of the test person in response to a rotational acceleration for assessing the so-called vestibular-ocular reflex (“nystagmus”).

However, no solution is known in the art to simultaneously estimate all the various inner ear functions in a correlated manner, i.e. testing the directional hearing capabilities in combination with the sense of balance.

An object of the present invention is to provide a system that solves said problem. The object is achieved by the device having the features of claim 1.

Advantageous embodiments are described in the subclaims.

According to claim 1, the system for a particularly combined hearing and balance test of the inner ear of a person with at least one moving sound source device comprises at least one sound source device arranged around a common center, wherein the at least one sound source device is arranged behind an acoustically transparent but visually non-transparent barrier, particularly wherein the barrier is not worn by the person but is arranged fixedly at the system, so that the at least one sound source device is invisible or particularly visually non-discernable to a person located at the common center, wherein each sound source device of the at least one sound source device is movably arranged and movable around the common center independently from each other.

The system is characterized in that the system comprises an automatically movable platform for a person, wherein the platform is arranged at the common center, wherein the movable platform is configured to move the person relatively to the at least one sound source device, wherein the movable platform comprises a motor arranged and configured to automatically move the platform.

The system according to the invention allows for simultaneous acoustical and motional exposure of persons, for reproducible assessments by acoustical and motional stimuli in a spatially and temporally coordinated manner.

It is a particular advantage that each sound source device is movable independently from the others, obviating the need for the generation of virtual sound sources and avoiding their sensitivity to room acoustics and to the position of the person with respect to the primary, i.e. real, sound sources.

A sound source device comprises for example a loudspeaker. In order to render the sound source device movable and well controllable, the sound source device can be a wireless controllable audio robot (WCAR).

The combination of the movable sound source devices and the movable platform at the common center of the at least one sound source device, allows for testing of responses and reactions of a person to acceleration and particularly simultaneously provided acoustic stimuli.

The common center is particularly an area or volume that is arranged approximately at the same distance from all sound source devices. Furthermore, the common center is particularly arranged on the side of the sound source device that is configured and arranged to emit sound.

The at least one sound source device, particularly all sound source devices, are particularly controllable by means of control commands causing the sound source device to emit a sound and/or to move to a specific position or to move with a predefined speed along a direction.

The sound source device is particularly configured to move in a horizontal plane. The platform is configured to automatically move the person in a predefined and controlled manner. For this purpose the platform particularly comprises means to receive control commands, wherein the control commands cause the platform to execute a movement along a specific lateral or vertical direction or around a specific rotation or tilt axis.
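For illustration only, the following Python sketch shows one way such control commands could be represented as data structures; the class and field names (e.g. MoveCommand, target_angle_deg) are assumptions for this sketch and are not part of the disclosed system.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of control commands as described above (illustrative names,
# not the actual protocol of the disclosed system).

@dataclass
class MoveCommand:
    """Command moving a sound source device or the platform."""
    target_id: str                               # e.g. "wcar_1" or "platform" (assumed identifiers)
    target_angle_deg: Optional[float] = None     # absolute angular position around the common center
    angular_speed_deg_s: Optional[float] = None  # signed speed; None keeps the current speed

@dataclass
class SoundCommand:
    """Command causing a sound source device to emit an acoustic stimulus."""
    target_id: str
    stimulus_file: str       # e.g. path to a file containing the stimulus
    level_db: float          # playback level
    start_time_s: float      # onset relative to test start, for temporal coordination

# Example: move one device to 90 deg at 10 deg/s, then play a stimulus from it.
commands = [
    MoveCommand(target_id="wcar_1", target_angle_deg=90.0, angular_speed_deg_s=10.0),
    SoundCommand(target_id="wcar_1", stimulus_file="stimulus.wav", level_db=65.0, start_time_s=5.0),
]
```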

The system according to the invention allows controlling the person’s motional state with respect to the sound source devices’ positions and velocities.

According to another embodiment of the invention, the at least one sound source device or all sound source devices are configured to move along a curved trajectory.

According to another embodiment of the invention, the platform comprises a seat for the person, particularly wherein the seat is movable.

According to another embodiment of the invention, the system comprises a plurality of sound source devices arranged behind the visual barrier so that the sound source devices are invisible and particularly visually non-discernable to a person located at the common center, wherein each sound source device is movably arranged and movable around the common center independently from each other.

As the movement of a sound source device often comes with a noise generated by the motion of the sound source device, a test person might be able to acoustically track and anticipate motions of the sound source device, which might influence the test in a negative way.

Having a plurality of sound source devices, it is possible to mask a motion-associated noise from one or more moving sound source devices, for example by providing a masking noise from the sound source device(s) that is/are not moving.

According to another embodiment of the invention, the movable platform is configured to rotate a person on the platform around a rotation axis extending particularly vertically through the common center.

This embodiment allows for performing measurements, for example related to the vestibular system of a person, in combination with a directional hearing test.

According to another embodiment of the invention, the movable platform is configured to tilt a person on the platform around at least one rotation axis extending parallel to a horizontal plane.

According to another embodiment of the invention, the movable platform is configured to move a person in a translational fashion, particularly along all three dimensions, particularly along a horizontal plane, particularly around the common center.

According to another embodiment of the invention, the system comprises a computerized system such as a computer configured to control positions and/or velocities of the at least one sound source device, particularly of the plurality of sound source devices by means of control commands issued by the computerized system and transmitted to the sound source device(s) and wherein the computerized system is further configured to control a particularly angular and/or a lateral position and/or a translational and/or an angular speed of the movable platform by means of control commands provided to the platform.

For this purpose, the computerized system is connected to each sound source device for example by means of a wireless data connection for transmitting the control commands to the at least one sound source device and the platform.

The computerized system allows for centrally controlling the at least one sound source device and the platform in relation to each other.

The computerized system can comprise an operator interface that allows an operator to control the experiments conducted.

Moreover, the computerized system is particularly configured to execute a computer program comprising computer program code with instructions that, when executed by the computerized system, cause the at least one sound source device and the platform to move according to the control commands comprised in the instructions of the computer program.
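As a minimal sketch of how such a test program could look on the computerized system side, the following assumes a hypothetical send_command() transport function (e.g. over the wireless data connection mentioned above); nothing here reflects the actual software of the disclosed system.

```python
import json
import time

def send_command(component_id: str, command: dict) -> None:
    """Hypothetical transport: serialize a control command and send it to a
    component (sound source device or platform), e.g. over a wireless link."""
    print(f"-> {component_id}: {json.dumps(command)}")  # placeholder for the real transport

def run_test_program() -> None:
    """Sketch of a test program: spatially and temporally coordinated
    acoustic and motional stimuli, as described in the text."""
    # 1. Move the platform and one sound source device to their start positions.
    send_command("platform", {"type": "move", "angle_deg": 0.0, "speed_deg_s": 0.0})
    send_command("wcar_1", {"type": "move", "angle_deg": 45.0, "speed_deg_s": 10.0})
    time.sleep(5.0)  # wait until the components report that they reached the targets

    # 2. Rotate platform and sound source device together while playing a stimulus.
    send_command("platform", {"type": "move", "speed_deg_s": 10.0})
    send_command("wcar_1", {"type": "move", "speed_deg_s": 10.0})
    send_command("wcar_1", {"type": "play", "stimulus": "tone_1kHz.wav", "level_db": 65.0})

if __name__ == "__main__":
    run_test_program()
```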

The term “controlling” in the context of the sound source device particularly refers to the controlling of the sound source device with regard to its sound emission, particularly by its loudspeaker, and to its position and/or velocity.

Further, according to another embodiment of the invention, the computerized system is configured and arranged to receive status information from the at least one sound source device and the platform, such that a bi-directional communication can be established between the computerized system and components of the system. The components comprise particularly the at least one sound source device and the platform.

According to another embodiment of the invention, the computerized system is configured to control the lateral and/or the angular position and/or the lateral and/or angular velocity of the platform in relation to the positions and/or velocities of the at least one sound source device, particularly of the plurality of sound source devices.

According to another embodiment of the invention, the at least one sound source device is arranged in a horizontally extending plane and movable in said horizontally extending plane, i.e. the at least one sound source device is movable in a horizontal plane at the same height.

According to another embodiment of the invention, the plurality of sound source devices is independently movable in the same horizontal plane.

According to another embodiment of the invention, the at least one sound source device is movable along a horizontally extending, round and curved trajectory, such as a circle, an oval or an ellipse, particularly wherein the plurality of sound source devices are independently movable along said trajectory.

According to another embodiment of the invention, the at least one sound source device is arranged on a track, along which the sound source device is movable.

According to another embodiment of the invention, the plurality of sound source devices is arranged on the same track, along which the sound source devices are independently movable.

A track can for example consist of one or more rails.

According to another embodiment of the invention, the track extends along a horizontal, particularly closed and curved trajectory, such as a circle, an oval or an ellipse.

Particularly, according to another embodiment of the invention, the track comprises optical markers arranged and configured to indicate predefined positions on the track to the at least one sound source device, particularly to a mobile module (see following paragraphs) or an optical sensor of the sound source device.

The optical markers provide a feedback means for the sound source device regarding its position and speed on the track.

According to another embodiment of the invention, each sound source device comprises a particularly separate mobile module configured to move the sound source device around the common center, particularly along the track, particularly wherein the mobile module is motorized and configured to automatically move the sound source device, particularly in response to control commands received from the computerized system.

Particularly, the mobile module is further configured to silently move the sound source device.

The mobile module or the sound source device can comprise a micro-controller for controlling the mobile module, for receiving control commands, and transmitting status information.
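A sketch of the device-side counterpart is given below: a dispatcher that applies a received control command and reports status information back. The command format and the DummyMotor stand-in are assumptions for illustration, not the actual firmware of the micro-controller.

```python
class DummyMotor:
    """Stand-in for the motorized mobile module (illustrative only)."""
    def __init__(self) -> None:
        self._position_deg = 0.0
        self._speed_deg_s = 0.0

    def set_speed_deg_s(self, speed: float) -> None:
        self._speed_deg_s = speed

    def status(self) -> dict:
        return {"position_deg": self._position_deg, "speed_deg_s": self._speed_deg_s}

def handle_command(command: dict, motor: DummyMotor) -> dict:
    """Sketch of a device-side dispatcher: apply a received control command
    and return status information for the computerized system. The command
    keys used here are assumptions, not the disclosed protocol."""
    if command.get("type") == "move":
        motor.set_speed_deg_s(command.get("speed_deg_s", 0.0))
    return motor.status()

# Usage with a dummy command:
print(handle_command({"type": "move", "speed_deg_s": 10.0}, DummyMotor()))
```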

Furthermore, according to another embodiment of the invention, the sound source device, particularly the mobile module comprises an inertial measurement unit (IMU) or a gyroscope for determining its speed, orientation and/or position, such that a position of the sound source device can be determined autonomously by the sound source device.

According to another embodiment of the invention, each mobile module comprises a hydraulic, a pneumatic, a magnetic or an electric force generation device for moving the sound source device.

According to another embodiment of the invention, the system comprises at least one optical sensor such as a camera, wherein the optical sensor is particularly fixedly arranged with respect to the at least one sound source device such that a position of all sound source devices can be determined from sensor data acquired by the at least one optical sensor, particularly wherein the system is configured to transmit the sensor data to the computerized system, wherein a position of all sound source devices is determined by the computerized system from the sensor data received from the at least one optical sensor.

According to another embodiment of the invention, each sound source device has a particularly differing optical marker, wherein each marker is arranged on the sound source device such that the at least one optical sensor can record said marker.

This embodiment allows a precise position determination and identification of each sound source device by the computerized system from the sensor data transmitted from the at least one optical sensor.
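To illustrate how an angular position on a circular rail could be recovered from a detected optical marker, the following sketch converts a marker's image coordinates into an angle around the common center; the calibration values and the detect_marker_center() helper are hypothetical placeholders, not the disclosed image-processing pipeline.

```python
import math

# Assumed calibration of a fixed camera looking onto the rail plane
# (purely illustrative values).
RAIL_CENTER_PX = (960.0, 540.0)   # image coordinates of the common center

def detect_marker_center(image) -> dict:
    """Hypothetical marker detector: returns image coordinates per marker ID.
    A real system could use a fiducial-marker library; the return format here
    is an assumption."""
    return {"wcar_1": (1200.0, 400.0)}  # dummy detection for illustration

def marker_angle_deg(marker_px: tuple) -> float:
    """Angular position of a marker around the common center, measured from
    the image x-axis, in degrees in [0, 360). Sign conventions depend on the
    camera mounting and are ignored in this sketch."""
    dx = marker_px[0] - RAIL_CENTER_PX[0]
    dy = marker_px[1] - RAIL_CENTER_PX[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

if __name__ == "__main__":
    detections = detect_marker_center(image=None)
    for marker_id, center_px in detections.items():
        print(marker_id, f"{marker_angle_deg(center_px):.1f} deg")
```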

Further, according to another embodiment of the invention, the at least one optical sensor or a second, further optical sensor (different from the at least one or more optical sensors) is configured and arranged to record the position and orientation of the platform and particularly to transmit sensor data relating to the position and orientation of the platform to the computerized system.

Further, according to another embodiment of the invention, the at least one optical sensor or a third, further optical sensor (different from the at least one or more optical sensors) is configured and arranged to record the person on the platform, particularly the gaze of the person and particularly to transmit sensor data relating to the person to the computerized system.

According to another embodiment of the invention, each sound source device of the at least one sound source device comprises a sensor, such as a gyroscope, an IMU, or an optical sensor configured to determine a position and/or a velocity of the sound source device, particularly a position and/or a velocity of the sound source device on the track.

Particularly, the sensor is comprised in the mobile module of the at least one sound source device.

According to another embodiment of the invention, the system is configured and arranged to transmit sensor data from the sensor of each sound source device to the computerized system, wherein the computerized system is configured to determine the position and/or velocity of the sound source device.

According to another embodiment of the invention, each sound source device (of the at least one sound source device) and/or the optical sensor of the system comprises a transmission module configured to particularly wirelessly transmit status information of the sound source device to the computerized system, particularly wherein the status information comprises the sensor data acquired by the sensors of the sound source devices and/or the optical sensor, particularly wherein the status information comprises or is indicative of a position and/or a velocity and/or an acceleration of the sound source device.

According to another embodiment of the invention, the computerized system is configured to control the position and/or the velocity of the sound source devices in response to status information received from sound source devices and/or the optical sensor, particularly transmitted by the transmission module.
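The control relationship described above can be illustrated with a simple proportional controller that turns a reported position into a speed command; the gain and speed limit are arbitrary illustrative values, not the disclosed control law.

```python
def angle_error_deg(target_deg: float, current_deg: float) -> float:
    """Shortest signed angular difference in degrees, in (-180, 180]."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

def speed_command_deg_s(target_deg: float, reported_deg: float,
                        gain: float = 0.5, max_speed: float = 20.0) -> float:
    """Sketch of a proportional position controller: the computerized system
    converts the position error (from the status information) into a speed
    command for the sound source device. Gain and limit are illustrative."""
    speed = gain * angle_error_deg(target_deg, reported_deg)
    return max(-max_speed, min(max_speed, speed))

# Example: the device reports 80 deg, the target is 90 deg -> command +5 deg/s.
print(speed_command_deg_s(target_deg=90.0, reported_deg=80.0))
```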

According to another embodiment of the invention, the system comprises a physiological sensor system for example arranged at the common center, wherein the physiological sensor system is arranged and configured to acquire sensor data from the person at the common center, the sensor data being indicative of a heart rate, a skin resistance, a pupil diameter, a direction of gaze and/or a movement of the head of a person at the common center, particularly wherein the physiological sensor system is configured to provide and particularly transmit the acquired sensor data to the computerized system.

According to another embodiment of the invention, the physiological sensor system comprises an electroencephalography (EEG) device or a functional near-infrared spectroscopy device.

According to another embodiment of the invention, the system is configured to adjust a position or a velocity of the at least one sound source device and/or a position or velocity of the platform in response to sensor data received from the physiological sensor system, particularly wherein the physiological sensor system transmits the acquired sensor data to the computerized system, wherein the computerized system processes said sensor data and transmits control commands to the at least one sound source and/or the platform.
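As a sketch of this feedback path from the physiological sensor system, the following shows how motion could be paused when a physiological reading leaves an assumed range; the threshold, device names, and the injected send_command callable are illustrative assumptions.

```python
def adjust_motion(heart_rate_bpm: float, send_command) -> None:
    """Illustrative reaction to physiological sensor data: if the heart rate
    exceeds an assumed threshold, stop the platform and the sound source
    devices; otherwise leave the running test untouched."""
    HR_MAX_BPM = 140.0  # assumed threshold, not from the source
    if heart_rate_bpm > HR_MAX_BPM:
        send_command("platform", {"type": "move", "speed_deg_s": 0.0})
        for device in ("wcar_1", "wcar_2", "wcar_3"):
            send_command(device, {"type": "move", "speed_deg_s": 0.0})

# Usage with a dummy transport:
adjust_motion(150.0, lambda component_id, command: print(component_id, command))
```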

According to another embodiment of the invention, the system, particularly the computerized system is configured to store the status information, particularly the sensor data acquired by the sensors of the sound source devices, the optical sensor and/or the physiological sensor system. For this purpose the computerized system particularly comprises a memory, such as a hard-drive, a RAM or similar electronic memory components.

According to another embodiment of the invention, the system comprises visually differing markers arranged in regular intervals around the common center, providing defined orientation points for the gaze of a person at the common center.

This allows accurately determining the gaze of a person during acoustic and motion tests.

According to another embodiment of the invention, the system is arranged in a non-anechoic space.

As the effect of echo might distort the generated sound wave field required for a virtual sound source, the system according to this embodiment is robust against such effects, as it is configured to be operable and functional even in a non-anechoic space. This is particularly due to the fact that the system does not necessarily comprise a virtual sound source or rely on the generation of virtual sound sources.

While an anechoic space is configured to not reflect sound at the walls of the space, a non-anechoic space does not meet this criterion, i.e. sound is reflected from the walls, which particularly distorts the generation of stable virtual sound sources.

This embodiment allows performing hearing tests in realistic sound environments, as echo influences particularly the understanding of speech.

The problem is furthermore solved by a method for diffusing motion noise emitted by a moving sound source device, particularly for a system according to the invention having two sound source devices, a first and a second sound source device, wherein, when the first sound source device is moving, the second sound source device emits a sound signal overlaying the motion noise, particularly wherein the sound source devices are arranged laterally shifted from each other.
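A minimal sketch of this masking logic follows: while one device moves, the stationary device(s) emit a masking sound that overlays the motion noise. Device names, the command dictionaries, and the chosen stimulus are assumptions, not the disclosed implementation.

```python
def masking_commands(moving_device: str, all_devices: list,
                     masking_stimulus: str = "broadband_noise.wav") -> list:
    """Sketch of the noise-diffusion method: while `moving_device` moves,
    every other (stationary) device emits a masking sound. The command
    format is illustrative, not a real API."""
    commands = []
    for device in all_devices:
        if device == moving_device:
            commands.append({"target": device, "type": "move", "speed_deg_s": 15.0})
        else:
            commands.append({"target": device, "type": "play",
                             "stimulus": masking_stimulus, "level_db": 60.0})
    return commands

# Example with two devices: the first moves, the second masks its motion noise.
print(masking_commands("wcar_1", ["wcar_1", "wcar_2"]))
```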

According to another embodiment of the invention, the method is executed on a system according to the invention.

The term “computerized device” or “computerized system” or a similar term denotes an apparatus comprising one or more processors operable or operating according to one or more programs. The terms 'processor' or 'computer', or system thereof, are used herein in the ordinary context of the art, such as a general purpose processor or a micro-processor, RISC processor, or DSP, possibly comprising additional elements such as memory or communication ports. Optionally or additionally, the terms 'processor' or 'computer' or derivatives thereof denote an apparatus that is capable of carrying out a provided or an incorporated program and/or is capable of controlling and/or accessing data storage apparatus and/or other apparatus such as input and output ports. The terms 'processor' or 'computer' denote also a plurality of processors or computers connected, and/or linked and/or otherwise communicating, possibly sharing one or more other resources such as a memory.

The term 'computer program' particularly denotes one or more instructions or directives or circuitry for performing a sequence of operations that generally represent an algorithm and/or other process or method. The program is stored in or on a medium such as RAM, ROM, or disk, or embedded in a circuitry accessible and executable by an apparatus such as a processor or other circuitry.

The processor and program may constitute the same apparatus, at least partially, such as an array of electronic gates, such as FPGA or ASIC, designed to perform a programmed sequence of operations, optionally comprising or linked with a processor or other circuitry.

Figure description and Examples

Particularly, exemplary embodiments are described below in conjunction with the Figures. The Figures are appended to the claims and are accompanied by text explaining individual features of the shown embodiments and aspects of the present invention. Each individual feature shown in the Figures and/or mentioned in said text of the Figures may be incorporated (also in an isolated fashion) into a claim relating to the device according to the present invention.

Fig. 1 shows a system having three sound source devices;

Fig. 2 shows a picture of a system according to the invention;

Fig. 3 shows a detail of a sound source device;

Fig. 4 shows a first operation mode of the system; and

Fig. 5 shows a second operation mode of the system.

Fig. 1 shows a schematic view of the system 1 according to one embodiment of the invention, while Fig. 2 shows a picture of the system 1.

The system 1 comprises three sound source devices 30 that are arranged on a circular rail 70 around a common center 100. The rail 70 is arranged horizontally at a height of 1.2 m and has a radius of 1 m. The system 1 can comprise up to 12 sound source devices, such as loudspeakers or wireless controllable audio robots (WCARs), to be positioned at any position on the rail.

The rail 70 and the sound source devices 30 (in Figs. 1 and 2 only three are shown) are covered by a sound-transparent but optically non-transparent curtain 40 to provide a visual barrier to the person 2 arranged at the common center 100 of the rail 70.

At the common center 100 a platform 50 comprising a seat and a user interface 62 is arranged. The platform 50 is movable, in this embodiment rotatable, around the common center 100.

The user interface 62 is part of a computerized system 60 for controlling the position, speed and sound emission state of the sound source devices 30.

In Fig. 1, three cameras 80 are arranged at the system 1 such that they can record the positions of all sound source devices 30 on the track 70. The cameras 80 are connected to a computer 61, being the central control unit for the system 1. From the cameras 80, images of the sound source devices 30 are provided to the computer 61. The computer 61 is part of the computerized system 60 that receives sensor data from the sensors of the system 1 and issues control commands to the individual components, i.e. the sound source devices 30 and the platform 50, causing the respective component to execute the issued command.

For each of the three sound source devices 30, the acoustic stimulus, position and movement speed are parameters which can be independently controlled and adapted for each sound source device by an operator operating the computerized system 60. The operator can modify the type, duration and loudness of the acoustic stimuli as well as the specific test or trajectory to be performed by the sound source devices 30. In particular, the three sound source devices 30 shown are WCARs. The hardware (cf. e.g. Fig. 3) of each WCAR 30 consists of a Raspberry Pi 3 Model B (RP3) for wireless communication (not shown) with the computerized system, audio signal and movement data processing. Audio output is provided by a loudspeaker 33 driven via a HiFiBerry DAC+ which is connected to the RP3. To enable movement of the WCARs 30, a DC motor in combination with a controller is used, forming a mobile module 31 (cf. e.g. Fig. 3). The power transmission for the movement of each WCAR takes place by means of a rubber V-belt on a rubber roller.

For audio output generation, each WCAR uses a 200 W, 3 Ohm Class D amplifier in combination with a loudspeaker 33. Power management for the RP3 is realized with a PiJuice board and a Li-ion battery (not shown). The amplifier and the motor controller each use a separate Li-ion battery for power supply (both not shown).

Wireless data transfer is realized via proprietary WLAN between the WCARs 30 and the computerized system 60.

To account for possible slippage of the WCAR on the rail, each WCAR 30 is equipped with an optical marker 32 for position control. Using the optical marker 32, the current position of the WCAR 30 on the rail 70 can be determined. Optical position control is realized via the three cameras 80 which are connected to the computerized system 60. The position of the WCAR 30 is evaluated based on the corresponding optical marker 32 that is attached to the geometric center of the WCAR 30.

In addition to the optical position determination, each WCAR 30 comprises a motor encoder, comprised by the mobile module 31, for determining the position of the WCAR 30.

The combined determination of the position using the three cameras 80 and the motor encoder provides the system 1 with a position accuracy of ~1 cm (~0.6 deg) on the rail 70 for each sound source device 30. The output of the motor encoder is recorded continuously by the RP3 with a sample rate of at least 5 Hz.
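For orientation, 1 cm of arc on a rail with 1 m radius corresponds to 0.01 rad, i.e. roughly 0.6 degrees, which matches the stated accuracy. The sketch below shows a simple complementary-filter style fusion of the encoder and camera angle estimates; the fixed blending weight is an assumption for illustration, not the disclosed fusion method.

```python
import math

RAIL_RADIUS_M = 1.0  # radius of the circular rail as stated in the text

# 1 cm of arc on a 1 m radius corresponds to 0.01 rad, i.e. about 0.6 degrees:
print(math.degrees(0.01 / RAIL_RADIUS_M))  # ~0.57

def fuse_angle_deg(encoder_deg: float, camera_deg: float,
                   camera_weight: float = 0.3) -> float:
    """Sketch of combining the (drift-prone, high-rate) encoder estimate with
    the (absolute, lower-rate) camera estimate. The fixed blending weight is
    an illustrative assumption."""
    # Blend along the shortest angular path so that e.g. 359 deg and 1 deg average to 0 deg.
    diff = (camera_deg - encoder_deg + 180.0) % 360.0 - 180.0
    return (encoder_deg + camera_weight * diff) % 360.0

print(fuse_angle_deg(encoder_deg=89.0, camera_deg=91.0))  # ~89.6
```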

Alternatively or additionally, the rail 70 comprises optical markers 71. These markers 71 can be evaluated by the computerized system 60, e.g. via the three cameras 80, in order to estimate an absolute position of the sound source devices 30 on the rail 70. However, additionally, these markers 71 can be recorded and evaluated by the sound source devices 30 by means of an optical sensor (not shown) that can be comprised by each sound source device 30.

The system further comprises a physiological sensor system 90 comprising a wearable device with integrated eye cameras for recording the position of the eyes of the person 2 on the platform 50.

The physiological sensor system 90 is also configured to transmit the acquired sensor data to the computerized system 60, such that for example the eye position and gaze direction of the person can be evaluated in correlation to the speed / position of the platform 50 and the sound source devices 30.

On the frontal hemisphere of the sound-transparent curtain 40, fiducial markers 41 with labels of the corresponding angular position can be placed with a spacing of 3 degrees, resulting in 61 markers 41. The markers 41 enable the physiological sensor system 90 to robustly evaluate gaze fixations and head positions. The labels of the angular positions may serve as orientation points if angle-specific fixations are desired during a test.

The physiological sensor system 90 is further configured to record and transmit sensor data to the computerized system 60, the sensor data comprising information in particular about at least one of:

- A position of the person in relation to the system;
- An absolute and relative position of head, ears, and hands;
- An acceleration of the person;
- A blood pressure of the person;
- An EEG;
- A head movement of the person;
- An eye movement of the person;
- A transfer function in the free field;
- A pupil size of the person;
- A skin conductance of the person;
- A speech recognition.

The computerized system 60 is particularly configured to:

- Transmit control commands for a stimulation pattern. The stimulation pattern enables temporal and spatial coordination of acoustic and motional stimuli to which the person is exposed;
- Carry out real-time adjustments to the system based on recorded sensor data, e.g. in response to a determined position of the person, the sound source devices and the type of stimulus;
- Store status information received from the components of the system, such as sensor data, for later data evaluation;
- Provide the operator with online feedback on the current status of the system;
- Generate a data evaluation report after completion of the measuring task.

Fig. 4 and Fig. 5 are similar to Fig. 1 and emphasize the various modes of operation of the system.

In Fig. 4, a synchronous rotation of the sound source devices and the platform is shown. Most features have already been elaborated for Fig. 1 and apply to Fig. 4 in a similar fashion; therefore only differing features are explicitly referred to.

In the embodiment shown in Fig. 4, the person is rotated clockwise with a predefined speed while the three sound source devices are rotated clockwise as well. Both the person and the sound source devices are rotating with the same angular velocity - the sound sources appear static to the person, however, the person’s inner ear is stimulated acoustically and rotationally.

The angular motion is indicated by the curved arrows 101 associated with the sound source devices and the platform.
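The geometry of this mode can be illustrated with a short calculation of the sound source azimuth in the person's rotating frame of reference: with equal angular velocities the relative azimuth stays constant even though both the person and the sources rotate. The numbers below are illustrative only.

```python
def relative_azimuth_deg(source_angle_deg: float, person_angle_deg: float) -> float:
    """Azimuth of a sound source as perceived by the person on the platform,
    i.e. the source angle expressed in the person's rotating frame."""
    return (source_angle_deg - person_angle_deg) % 360.0

# Synchronous rotation (Fig. 4): both rotate at 10 deg/s, so the relative
# azimuth stays constant while the vestibular system is still stimulated.
for t in range(0, 4):
    source_deg = (45.0 + 10.0 * t) % 360.0
    person_deg = (0.0 + 10.0 * t) % 360.0
    print(t, relative_azimuth_deg(source_deg, person_deg))  # always 45.0
```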

In Fig. 5, a diffuse rotation of the sound source devices and the platform is shown. Most features have already been elaborated for Fig. 1 and apply to Fig. 5 in a similar fashion; therefore only differing features are explicitly referred to.

The sound sources and the person, and thus the platform, are moved independently and in an uncorrelated manner during the presentation of acoustic stimuli to reduce the influence of room acoustics and minimize the contribution of directional sound.

With the system according to the invention, multisensory inner ear tests can be performed in a non-anechoic chamber.

References

[1] Watanabe K, et al. Dataset of Head-Related Transfer Functions Measured with a Circular Loudspeaker Array. Acoustical Science and Technology 2014; 35(3): 159-65.