Title:
DEVICE FOR DETERMINING A HEARING CAPACITY OF A PERSON
Document Type and Number:
WIPO Patent Application WO/2023/158304
Kind Code:
A1
Abstract:
The invention relates to a device (1) for determining a hearing capacity of a person (2), the device being configured for being worn over a head (3) of the person, comprising: - ear enclosure means (4) for enclosing the person’s ears (5), such as over-the-ear (OTE) headphones (6), comprising an audio stimulus device (7) for providing an audio stimulus to one or two of the person’s ears; - virtual reality (VR) glasses (8) for providing a visual stimulus to one or two of the person’s eyes (9); and - a brain signal measurement device (10) configured for measuring a brain signal of the person in response to the audio stimulus and/or visual stimulus, wherein the determining of the hearing capacity of the person is at least partly based on the measured brain signal.

Inventors:
MAJID PARZIJN (NL)
Application Number:
PCT/NL2023/050070
Publication Date:
August 24, 2023
Filing Date:
February 15, 2023
Assignee:
MIMI HOLDING B.V. (NL)
International Classes:
A61B5/00; A61B5/12; A61B5/369; A61B5/398; G02B27/01; G06F3/01
Domestic Patent References:
WO2018218356A1 (2018-12-06)
WO2022013566A1 (2022-01-20)
WO2019010540A1 (2019-01-17)
Foreign References:
EP3456259A1 (2019-03-20)
US20200315499A1 (2020-10-08)
IN201911025478A
Attorney, Agent or Firm:
ALGEMEEN OCTROOI- EN MERKENBUREAU B.V. (NL)
Claims:
CLAIMS

1. Device (1) for determining a hearing capacity of a person (2), the device being configured for being worn over a head (3) of the person, comprising: ear enclosure means (4) for enclosing the person’s ears (5), such as over-the-ear (OTE) headphones (6), comprising an audio stimulus device (7) for providing an audio stimulus to one or two of the person’s ears; virtual reality (VR) glasses (8) for providing a visual stimulus to one or two of the person’s eyes (9); and a brain signal measurement device (10) configured for measuring a brain signal of the person in response to the audio stimulus and/or visual stimulus, wherein the determining of the hearing capacity of the person is at least partly based on the measured brain signal.

2. Device (1) according to claim 1, wherein the brain signal comprises an electroencephalography (EEG) signal and/or a functional near-infrared spectroscopy (fNIRS) signal.

3. Device (1) according to any one of the preceding claims, wherein the VR glasses (8) comprise an eye signal measurement device (11) for measuring an eye signal of the person (2) in response to the audio stimulus and/or visual stimulus.

4. Device (1) according to claim 3, wherein the eye signal comprises an electrooculography (EOG) signal, an eye tracking signal and/or an eye movement (EM) signal.

5. Device (1) according to any one of the preceding claims, wherein the ear enclosure means (4) comprise an ear signal measurement device (12) configured for measuring an ear signal of the person (2) in response to the audio stimulus.

6. Device (1) according to claim 5, wherein the ear signal measurement device (12) comprises a light detection and ranging (LiDAR) device (13) for scanning the person’s ear canal (14), such as a volume of the person’s ear canal, and/or for measuring movement of the person’s eardrum structures (15), such as the person’s eardrum (16) or the tensor tympani muscle (17), in response to the audio stimulus.

7. Device (1) according to any one of the preceding claims, wherein the audio stimulus includes an audiological test, comprising providing audiological test stimuli to the one or two of the person’s ears (5) with 8 - 32 frequency channels, at 1/3 octave intervals.

8. Device (1) according to any one of the preceding claims, wherein the audio stimulus includes an audiological test, comprising providing audiological test stimuli to the one or two of the person’s ears (5) with frequency channels between 20 and 20000 Hz.

9. Device (1) according to any one of the preceding claims, wherein the ear enclosure means (4) comprise an audio stimulus device (7) configured for providing an audio stimulus to the two of the person’s ears (5) in the form of binaural audio.

10. Device (1) according to any one of the preceding claims, wherein the ear enclosure means (4) comprise a pair of individual headphones (18), each one of the pair of individual headphones comprising an audio stimulus device (7) having 6 - 8 speakers (19) and preferably one reference speaker (20) for generating a reference audio signal.

11. Device (1) according to any one of the preceding claims, wherein the ear enclosure means (4) comprise a pair of individual headphones (18), each one of the pair of individual headphones having an interior chamber (21).

12. Device (1) according to claim 11, wherein the interior chamber (21) is in the form of an anechoic chamber (22).

13. Device (1) according to claim 11 or 12, wherein the interior chamber (21) has the shape of an icosahedron.

14. Device (1) according to claim 11, 12 or 13, wherein the interior chamber (21) comprises an acoustic metamaterial (84).

15. Method for determining a hearing capacity of a person (2), using a device (1) according to any one of the preceding claims, comprising the steps of: arranging the device (1) over the head (3) of the person; providing an audio stimulus to the one or two of the person’s ears (5) with the audio stimulus device (7) of the ear enclosure means; and/or providing a visual stimulus to the one or two of the person’s eyes (9) with the virtual reality (VR) glasses (8); and measuring a brain signal with the brain signal measurement device (10) in response to the audio stimulus and/or visual stimulus; determining the hearing capacity of the person, wherein the determining of the hearing capacity is at least partly based on the measured brain signal.

16. Method according to claim 15, wherein the VR glasses (8) comprise an eye signal measurement device (11) for measuring an eye signal of the person (2) in response to the audio stimulus and/or visual stimulus, comprising the further step of: measuring the eye signal of the person in response to the audio stimulus and/or visual stimulus.

17. Method according to claim 15 or 16, wherein the ear enclosure means (4) comprise an ear signal measurement device (12) configured for measuring an ear signal of the person (2) in response to the audio stimulus, comprising the further step of: measuring the ear signal of the person in response to the audio stimulus.

Description:
Title: Device for determining a hearing capacity of a person

Description

The present invention relates to a device for determining a hearing capacity of a person, as well as a method for determining a hearing capacity of a person using such a device.

PRIOR ART

Hearing loss, especially untreated hearing loss, constitutes a great risk to humanity. It is estimated that in 2050, 1 out of 4 persons will suffer from insufficient hearing capacity. It is further estimated that 2 out of 3 persons are not treated satisfactorily for hearing loss. This may lead to fatigue, depression, loneliness and unemployment, in short: a much lower quality of life. Approximately two-thirds of individuals with hearing loss express interest in purchasing a hearing aid; however, only one-third of those who acknowledge the need for a hearing aid actually go through with the purchase. Of those who do purchase a hearing aid, a concerning 8% choose not to wear them and 16% wear them for less than an hour per day. Unfortunately, half of those who do use hearing aids are dissatisfied with the sound quality and speech intelligibility in noisy environments, and the vast majority experience difficulty identifying the source of sounds. For the persons that do get treated, for instance by being provided with a hearing aid, audio quality and speech intelligibility are sub-par. Accurate and reliable diagnosis is crucial for effective hearing aid adjustments and patient-centered treatment. This is largely because the standard diagnostic methods used for determining the hearing capacity of the person are essentially over 90 years old. The standard diagnostic method for determining hearing capacity can generally be seen as inaccurate, having “low resolution” and falling short when it comes to spatial sound localization issues and speech intelligibility in noise.

WO 2019/010540 A1 discloses a method for assessing the hearing of a patient using functional near-infrared spectroscopy (fNIRS), the method comprising: receiving at least one response signal from an optode placed on a scalp of the patient, the response signal comprising fNIRS data generated by the optode and relating to an aural stimulation received by the patient; comparing at least one parameter of the at least one response signal to a predetermined parameter value; and determining an auditory response of a patient based on the at least one parameter of the at least one response signal. The at least one response signal comprises signals relating to brain activity of the patient. However, merely determining a response based on limited multisensory information, purely based on brain activity, may still lead to an incomplete picture of the patient’s hearing capacity, which is particularly troublesome when a hearing aid is to be prescribed.

US 2020/0315499 A1 discloses means for determining acoustic immittance and other characteristics of ears by measuring oscillatory movements of the eardrum and vibrations of the eardrum, i.e. displacements of the eardrum. For example, optical coherence tomography may be applied to monitor eardrum displacements responsive to a sound. The pressure corresponding to the sound is measured by a suitable instrument such as a microphone. The measured displacements and pressures may be processed to obtain a measure of immittance. US 2020/0315499 A1 thus also primarily focuses on what happens in the ear canal/middle ear and leaves a lot to be desired when determining various aspects of the person’s hearing capacity, in particular when a tailor-made hearing aid is to be prescribed. Thus, the currently available diagnostic processes and/or available diagnostic instruments do not provide enough information to determine multiple/various aspects of the person’s hearing capacity, especially when a tailor-made hearing aid is to be prescribed and fitted.

OBJECT OF THE INVENTION

An object of the present invention is thus to improve the current diagnostic processes and/or available diagnostic instruments for determining multiple/various aspects of the person’s hearing capacity, especially when a tailor-made hearing aid is to be prescribed and fitted.

SUMMARY OF THE INVENTION

According to the present invention a device for determining a hearing capacity of a person is provided, the device being configured for being worn over a head of the person, comprising: ear enclosure means for enclosing the person’s ears, such as over-the-ear (OTE) headphones, comprising an audio stimulus device for providing an audio stimulus to one or two of the person’s ears; virtual reality (VR) glasses for providing a visual stimulus to one or two of the person’s eyes; and a brain signal measurement device configured for measuring a brain signal of the person in response to the audio stimulus and/or visual stimulus, wherein the determining of the hearing capacity of the person is at least partly based on the measured brain signal.

The above “integrated” device for determining the hearing capacity of the person takes various aspects of the user’s response into account, most notably brain signals of the person in response to an audio stimulus and/or visual stimulus. Thus, a much more accurate picture can be obtained of the person’s hearing capacity, leading to the prescription of a better-fitting and better-performing hearing aid and a much higher quality of life for the person concerned.

By evaluating multiple aspects of an individual's response to audio and visual stimuli, the device offers a comprehensive evaluation, leading to a better fitting and performing hearing aid, more effective treatment outcomes, and ultimately improving the quality of life for those with hearing loss.

Diagnosing tinnitus, as well as providing neuro-feedback training and treatment for people with cochlear implants and people with Sensory Processing Disorder (SPD) and Multisensory Audiovisual Processing (MAP), is also facilitated.

The visual stimulus may be provided to the person by showing pictures or (for instance, interactive, video) animations or the like.

In case of problems with the middle ear, an audio stimulus may be provided by means of an oscillator e.g. comprised by the VR glasses.

It should be noted that IN 2019/11025478 A discloses a device for detection of hearing loss, in particular a device for early detection of hearing loss in infants using virtual reality (VR) technology, as well as a method of early detection of hearing loss in infants. By delivering quality 3D audio in VR, the device can impart an interactive experience that modifies itself based on both an object's movements and the movements of the user. The device of IN 2019/11025478 A works on the principle of the 'Doppler effect' and uses a specially designed virtual reality headset for children, i.e. a head-mounted device that provides virtual reality (VR) for the wearer. Head motion tracking sensors, such as gyroscopes, accelerometers and structured light systems, are used to capture the rotation of the head. However, IN 2019/11025478 A merely provides a global indication of the wearer’s hearing loss in case discrepancies exist between the 3D audio and the VR environment “processed” by the wearer. IN 2019/11025478 A also does not take brain signals into account, for example.

EP 3 456 259 A1 discloses a device for adjusting a hearing aid using a "pupil tracker". Paragraphs [0047] and [0048] of EP 3 456 259 A1 casually suggest that the "pupil tracker" could be used in a VR headset. However, the VR headset does not appear to play an active role in EP 3 456 259 A1.

WO 2018/218356 A1 discloses wearable goggles to measure “biosignals.” However, the “hearing aspect” does not appear to get any attention in WO 2018/218356 A1.

An embodiment relates to an aforementioned device, wherein the brain signal comprises an electroencephalography (EEG) signal and/or a functional near-infrared spectroscopy (fNIRS) signal. Thus, the auditory pathway can be accurately assessed to determine the person’s brain signals in response to the audio and/or visual stimulus. Preferably, a combined fNIRS/EEG system is used to simultaneously measure electrical potentials in brain tissue and changes in brain tissue oxygenation and blood volume.

An embodiment relates to an aforementioned device, wherein the VR glasses comprise an eye signal measurement device for measuring an eye signal of the person in response to the audio stimulus and/or visual stimulus.

The eye signal preferably comprises an electrooculography (EOG) signal, an eye tracking signal and/or an eye movement (EM) signal, preferably combined with the fNIRS visual pathway. Thus, by measuring an aforementioned eye signal of the person, the person’s “overall” response to the audio and/or visual stimulus can be better assessed. This is particularly advantageous when investigating possible 3D/environmental audio localization issues. An eye signal or eye tracking may for instance relate to blink duration, blink frequency, eyeball movement, pupil constriction/dilation/size, et cetera.

An embodiment relates to an aforementioned device, wherein the ear enclosure means comprise an ear signal measurement device configured for measuring an ear signal of the person in response to the audio stimulus. Thus, an even more complete picture of the person’s hearing capacity can be obtained. The ear signal measurement device may for instance comprise a light detection and ranging (LiDAR) device for scanning the person’s ear canal, such as a volume of the person’s ear canal, and/or for measuring movement of the person’s eardrum structures, such as the person’s eardrum or the tensor tympani muscle (or any other structures connected to the eardrum), for instance tilting movements and/or vibrations of the eardrum, in response to the audio stimulus.

An embodiment relates to an aforementioned device, wherein the audio stimulus includes an audiological test, comprising providing audiological test stimuli to the one or two of the person’s ears with 8 - 32 frequency channels, at 1/3 octave intervals. Thus, the resolution of the audiological test is greatly increased, compared to the “gold standard” 4 - 8 frequency channels at 1 octave intervals.

An embodiment relates to an aforementioned device, wherein the audio stimulus includes an audiological test, comprising providing audiological test stimuli to the one or two of the person’s ears with frequency channels between 20 and 20000 Hz. The resolution of the audiological test is thus even further increased compared to the commonly used 250 - 8000 Hz audiological test.
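
Purely as an illustration of the channel spacing described above, the following sketch generates a set of 1/3-octave test frequencies between 20 Hz and 20 kHz (yielding roughly 29 channels, within the stated 8 - 32 range). The 1 kHz reference frequency and the rounding of the band edges are assumptions; the description does not specify them.

```python
import numpy as np

def third_octave_channels(f_min=20.0, f_max=20000.0, f_ref=1000.0):
    """Return audiological test frequencies spaced at 1/3-octave intervals.

    Band centres follow f_ref * 2**(n/3). The 1 kHz reference and the
    rounding of the band edges are illustrative assumptions; the text only
    specifies 8 - 32 channels between 20 Hz and 20 kHz.
    """
    n_low = int(np.ceil(3 * np.log2(f_min / f_ref)))
    n_high = int(np.floor(3 * np.log2(f_max / f_ref)))
    return f_ref * 2.0 ** (np.arange(n_low, n_high + 1) / 3.0)

channels = third_octave_channels()
print(f"{len(channels)} channels from {channels[0]:.1f} Hz to {channels[-1]:.0f} Hz")
```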

An embodiment relates to an aforementioned device, wherein the ear enclosure means comprise an audio stimulus device configured for providing an audio stimulus to the two of the person’s ears in the form of binaural (“surrounding sound”) audio. Thus, the audio stimulus provided to the person is much more realistic, i.e. resembling real-life situations where “spatial” audio stimuli are provided, leading to a much better determination of the hearing capacity of the person and a better-fitting hearing aid.

Preferably, the ear enclosure means comprise a pair of individual headphones, each one of the pair of individual headphones comprising an audio stimulus device having 6 - 8 speakers and preferably one reference speaker for generating a reference audio signal. Thus, an immersive, realistic, binaural (“surrounding sound”) audio stimulus can be provided to the person. The 6 - 8 speakers may each be configured to provide sound in different frequency ranges. The reference audio signal may for instance comprise reference speech, reference music or reference (white) noise. Preferably, the ear enclosure means, such as each one of the pair of individual headphones, are dimensioned, sized or configured to fully comprise or enclose the person’s auricle(s). Thus, the acoustics of the auricle can be taken into account when performing measurements. An inner surface of the ear enclosure means may e.g. be spaced at a minimum distance of 3, 4 or 5 mm from the respective auricle.

An embodiment relates to an aforementioned device, wherein the ear enclosure means comprise a pair of individual headphones, each one of the pair of individual headphones having an interior chamber. The interior chamber may be in the form of an anechoic chamber. The interior chamber preferably has the shape of an icosahedron. Thus, unwanted sound reflections can be avoided and powerful validation capabilities can be provided over a wide frequency range. Furthermore, a test environment can thus be created, allowing for reproducible and reliable testing.

An embodiment relates to an aforementioned device, wherein the interior chamber comprises an acoustic metamaterial. The acoustic metamaterial is designed to create an anechoic chamber that delivers precise sound reproduction for headphones. Creating multidimensional sound using multiple speakers in an anechoic environment may help to reproduce any type of acoustic environment for fields including hearing, speech understanding, signal processing, and noise pollution research. The term “metamaterial” can also be broadly applied to engineered materials, usually composites, in which an internal structure is used to induce effective properties in the artificial material that are substantially different from those found in its components.

The abovementioned acoustic metamaterials can be used to create 3D sound fields inside of ear enclosure means, such as headphones. These materials help create an anechoic chamber, wherein sounds do not bounce off of surfaces and are heard more clearly as a result.

An anechoic chamber may be defined as a sound-proofed room designed to absorb reflections of sound waves, making the room acoustically dead. These chambers are used in a variety of audio applications including recording studios, listening rooms, and laboratories for noise-cancellation research.

The word "anechoic" comes from the Greek prefix "a-" meaning "without" + the root word "echo", resulting in the meaning of "without echo or reflection". Echo is created when sound waves reflect off of surfaces in a room, so by absorbing these reflections, an anechoic chamber eliminates echoes. In addition to being echo-free, anechoic chambers are also typically free from background ambient noise such as HVAC systems or street traffic. Anechoic chambers are usually constructed with materials that absorb sound waves efficiently such as fiberglass insulation, porous Sound Absorbing Material (SAM), or acoustic foam. The walls, ceiling, and floor of an anechoic chamber are lined with these materials in order to achieve maximum absorption. In some cases, the entire room may be suspended from the ceiling to further reduce vibrations and reflections from the floor.

An acoustic metamaterial is a material that has been designed to control sound waves “in the desired way”. In this case, the acoustic metamaterial is being used to create an anechoic chamber, which is a space where sound waves can be controlled and redirected so that they do not bounce off surfaces and create echoes. This is important for creating surround sound in ear enclosure means, such as headphones, because it allows the sound to be directed straight into the ear without being disrupted by reflections off objects inside the headphone.

A basic element of metamaterial design is a unit cell of identical building blocks, similar to a crystal lattice. The interaction of these unit cells with electromagnetic, acoustic or other waves manifests as macroscopic behaviour with unusual properties. The wave interaction effects within the unit cell are critical, and the constituents of the unit cell can be engineered to interact so as to produce these unusual properties.

The acoustic metamaterial used for this purpose may be made up of a series of small, interconnected chambers that are filled with air. The chambers may be shaped and sized in such a way that they can reflect, absorb, or scatter sound waves depending on their frequency. By carefully designing the metamaterial, it is possible to create an anechoic chamber that will block out all unwanted sounds and allow only the desired sound to reach the listener's ears.
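
As a hedged illustration only: one common way to dimension such air-filled cells is to treat them as Helmholtz resonators, whose absorption/scattering peak can be estimated from the neck area, neck length and cavity volume. The description does not state that the cells are Helmholtz resonators or give any dimensions, so the model and the example dimensions below are assumptions.

```python
import math

def helmholtz_resonance(neck_area_m2, neck_length_m, cavity_volume_m3, c=343.0):
    """Estimate the resonance frequency (Hz) of an air-filled cell treated
    as a Helmholtz resonator: f = c/(2*pi) * sqrt(A / (V * L_eff)).

    Treating the metamaterial cells as Helmholtz resonators is an assumption;
    the description only states that the chambers reflect, absorb or scatter
    sound depending on their frequency.
    """
    # End correction (~0.85 * neck radius per side) for a circular neck opening.
    neck_radius = math.sqrt(neck_area_m2 / math.pi)
    l_eff = neck_length_m + 1.7 * neck_radius
    return c / (2 * math.pi) * math.sqrt(neck_area_m2 / (cavity_volume_m3 * l_eff))

# Example: a 2 mm diameter, 1 mm long neck opening into a 0.5 cm^3 cavity.
print(f"{helmholtz_resonance(math.pi * 0.001**2, 0.001, 0.5e-6):.0f} Hz")
```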

Traditional headphones only create sound in two dimensions (2D). They often lead to a tinny or flat-sounding audio experience. The acoustic metamaterial can help to create a 3D listening experience, which provides a more natural and lifelike listening experience, with sound seeming to come from all around the listener rather than just the front or back.

As mentioned in the foregoing, acoustic metamaterials have a number of advantages when it comes to creating surround sound in headphones. First, they are able to absorb sound waves very effectively, which means that the sound waves will not bounce around inside the headphone and create an echo. This is especially important for high-frequency sounds, which can be very difficult to manage without anechoic chambers. Second, acoustic metamaterials are very lightweight, which makes them much more comfortable to wear for long periods of time. Third, they are very effective at blocking out external noise, which means that one can test the hearing thresholds without having to worry about outside distractions.

By combining various types of metamaterials in multiple layers, anechoic headphones can be created with near-silence and a highly controlled environment for acoustic testing and research. The use of these different materials in distinct layers grants a high degree of control over the final device, resulting in an exceptional listening experience with optimal sound control and noise reduction.

Different types of acoustic metamaterials may be utilized in the design and construction of the anechoic headphones, including absorptive, reflective, scattering, and amplifying materials. Absorptive metamaterials are designed to absorb sound waves, reducing noise in applications. Reflective metamaterials reflect sound waves to create barriers that block or deflect sound. Scattering metamaterials scatter sound waves to create a diffused sound field or control the direction of sound waves. Amplifying metamaterials amplify sound waves to increase sound volume in a specific area. The headphones may also use metamaterials with the function of a Faraday cage to provide protection from electromagnetic fields and ensure clean measurements in each space.

The use of acoustic metamaterials in the construction of anechoic chambers within icosahedron headphones improves control over sound waves. These engineered materials are designed to manipulate sound, making them ideal for various acoustic tests and studies. Research, based on the analysis of sound mandalas from Cymascope, revealed that sound waves follow specific mathematical patterns of propagation, which were incorporated into the design of the metamaterial's cell units. These cell units have multiple layers arranged both vertically and horizontally with their distance, length, and shape determined by the patterns found in the sound mandalas.

The design of acoustic metamaterials is based on artificially engineered structure-based composites with complex and sophisticated designs. These materials have unique characteristics, such as negative effective mass density and negative effective bulk modulus, which are derived from their geometry and structure rather than their composition. Therefore, physics-based geometrical design is a crucial step in creating specific acoustic metamaterials.

Spiral structures have different effects on sound waves, depending on the specific design of the spiral and the properties of the material it is made of. A spiral structure of a size comparable to the wavelength of sound waves can cause the waves to be scattered in different directions. This scattering effect can be used to create devices that diffuse sound, such as sound-absorbing panels or noise barriers. If the spiral structure has a size much smaller or larger than the wavelength of sound waves, it can cause the waves to be focused or defocused, which can be used to create acoustic lenses or beam steering devices. The effect of a spiral structure on sound waves is determined by the specific design and material properties, and mathematical modeling and computer simulations can predict its behavior when sound waves are incident upon it.

As discussed, an anechoic chamber is an enclosure used to completely isolate a sound source from its surroundings. This is achieved by suspending the sound source in the center of the chamber and lining the walls with acoustically absorbent material. Anechoic chambers are used in a variety of applications, including testing headphones and loudspeakers, measuring noise levels, and conducting research on sound.

As mentioned before, the benefits of using an anechoic chamber are many. First, it allows for accurate measurements of sound levels and frequencies. Second, it eliminates background noise that can interfere with experiments or recordings. Third, it provides a controlled environment for studying sound propagation and other acoustic phenomena. Finally, it can be used to create surround sound in headphones by simulating different listening environments.

When it comes to simulating realistic surround sound in headphones, there are a few different methods that have been used with varying degrees of success. One common method is known as binaural recording, which relies on recording audio from two different perspectives and then playing it back through headphones. This can create a fairly convincing illusion of being in the same room as the original sound source, but it can sometimes suffer from phase issues that can degrade the overall quality of the experience.

Another popular method is called cross-feed processing, which uses filters to create the illusion of sound emanating from speakers placed at a distance from the listener. This can provide a more natural and spacious sounding experience, but again, it can sometimes suffer from phase issues that can impact fidelity.
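
A minimal sketch of such a cross-feed stage is given below, assuming the classic approach of mixing a delayed, attenuated and low-pass-filtered copy of each channel into the opposite channel; the delay, gain and cutoff values are illustrative assumptions, not values from the description.

```python
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs, delay_ms=0.3, gain=0.3, cutoff_hz=700.0):
    """Mix a delayed, attenuated, low-passed copy of each channel into the
    opposite channel to mimic speakers placed in front of the listener.

    The 0.3 ms interaural delay, 0.3 gain and 700 Hz cutoff are illustrative
    assumptions.
    """
    b, a = butter(1, cutoff_hz / (fs / 2))  # gentle low-pass for the cross path
    delay = int(round(delay_ms * 1e-3 * fs))

    def cross(x):
        y = lfilter(b, a, x) * gain
        return np.concatenate([np.zeros(delay), y[:len(y) - delay]])

    return left + cross(right), right + cross(left)

# Usage: out_l, out_r = crossfeed(in_l, in_r, fs=48000)
```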

Advantageously, acoustic metamaterials may be used to e.g. create anechoic chambers within the ear enclosure means, in particular headphones, themselves. This effectively isolates each ear from bleed-through from the other earcup, allowing for more accurate positioning of virtual sound sources.

The use of acoustic metamaterials with an interior chamber of ear enclosure means, to e.g. develop an anechoic chamber, is submitted to be a novel approach that has the potential to create surround sound in ear enclosure means, such as headphones. This technology can be used to improve the quality of sound in headphones and make them more realistic. By creating an anechoic chamber, echoes and background noise can be eliminated, making it easier for people to hear the audio clearly.

In a broader sense, it should be noted that 50% of people who buy hearing aids are satisfied with the sound quality in the company of others, while 93% have difficulty locating the source of the sound. The device according to the present disclosure allows for measuring responses “within” the human body - including changes to brain waves and e.g. pupil size, providing objective measurements, e.g. allowing for measuring up to 32 frequency channels and thus meeting the needs of those who want better quality sound.

Measuring “directional hearing” is also beneficial for people who desire speech intelligibility in social situations or while being in the company of others - 93% of people have difficulty locating the source of sound when wearing their hearing aids. The present invention also seeks to address this problem, by measuring directional hearing. Traditional headphones only provide sound from the left and right directions. But with icosahedron-shaped interior chambers or headphones, a more realistic soundscape can be created and a person will be able to hear sounds from any direction. Plus, they block out external noise more effectively than traditional headphones, which gives an improved listening experience. Icosahedron-shaped headphones are also very comfortable to wear for extended periods of time. As a result, different tests can be performed and more accurate measurements of the effects on a person’s hearing can be obtained. Moreover, their full-size design gives the opportunity to test the effects of the pinna and to obtain more accurate measurements and more data for fitting hearing aids.

It is preferred that the speakers are aligned in the anechoic chamber; this will achieve evenly distributed sound and eliminate echo or reverberation. Preferably, a surround sound effect is created with multiple speakers so a more immersive listening experience is provided.

When creating headphones in the form of an icosahedron, it is generally important to consider various factors in order to provide an optimal listening experience. To e.g. achieve a surround sound effect, multiple speakers are preferably incorporated into the design. One important consideration is the placement of the speakers within an anechoic chamber, which eliminates echoes and reverberations, and ensures even distribution of sound.

To achieve this design, the speakers, such as 8 speakers, can e.g. be positioned at the vertices of the icosahedron shape. These speakers are preferably positioned equidistant from one another and from the listener’s ears, in order to project sound evenly in all directions and achieve a surround sound effect. The optimal angles at which the speakers should be placed can be determined through experimentation and optimization, to enhance the overall sound quality.
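
A small sketch of such a placement is given below: it computes the 12 vertices of an icosahedron and scales them to a chosen circumradius, so that each vertex (and hence each candidate speaker position) is equidistant from the centre, i.e. from the listener's ear. The 40 mm radius and the use of all 12 vertices are illustrative assumptions.

```python
import numpy as np

def icosahedron_vertices(circumradius=0.04):
    """Return the 12 vertices of an icosahedron, scaled so that every vertex
    lies at `circumradius` metres from the centre (i.e. from the ear).

    The 40 mm value is an illustrative assumption; the description only says
    that speakers, such as 8 speakers, can be positioned at the vertices,
    equidistant from one another and from the listener's ears.
    """
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    base = []
    for s1 in (-1.0, 1.0):
        for s2 in (-1.0, 1.0):
            base += [(0.0, s1, s2 * phi), (s1, s2 * phi, 0.0), (s1 * phi, 0.0, s2)]
    verts = np.array(base)
    verts *= circumradius / np.linalg.norm(verts[0])
    return verts

positions = icosahedron_vertices()
print(positions.shape, np.round(np.linalg.norm(positions, axis=1), 3))
```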

Signal processing algorithms are mathematical techniques and procedures that are used to analyze, modify, or synthesize signals, such as sound waves. In the context of headphones with a geometric form of an icosahedron, signal processing algorithms could be used to create a surround sound experience by manipulating the sound waves in a way that creates the illusion of sound coming from multiple directions around the listener.

Spatialization algorithms are one type of signal processing algorithm that can be used to create a surround sound experience. These algorithms are used to manipulate the sound waves to create the illusion of sound coming from specific directions. Panning, head-related transfer function (HRTF), and ambisonics are some common techniques that are used in spatialization algorithms. Reverberation algorithms are another type of signal processing algorithm that can be used to create a surround sound experience. These algorithms are used to simulate the effects of sound reflecting off of surfaces and objects in space. By adding a delayed and attenuated version of the original sound wave to the output signal, the resulting sound wave has a more diffuse and reverberant character, which can create the impression of a larger or more reverberant space.
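
The reverberation mechanism described above (adding a delayed and attenuated copy of the signal to the output) corresponds to a simple feedback comb filter; a minimal sketch is given below, with the delay time and decay factor chosen purely for illustration.

```python
import numpy as np

def comb_reverb(signal, fs, delay_ms=40.0, decay=0.5):
    """Simple feedback comb filter: each output sample adds a delayed and
    attenuated copy of earlier output, giving a more diffuse, reverberant
    character. The 40 ms delay and 0.5 decay factor are illustrative
    assumptions.
    """
    delay = int(round(delay_ms * 1e-3 * fs))
    out = np.copy(signal).astype(float)
    for n in range(delay, len(out)):
        out[n] += decay * out[n - delay]
    return out

# Usage: wet = comb_reverb(dry_mono_signal, fs=48000)
```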

Equalization algorithms are another type of signal processing algorithm that can be used to adjust the frequency response of the sound waves to create the desired tonal characteristics. They can be used to boost or attenuate specific frequency ranges to create the desired balance of frequencies in the sound.
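
As a minimal illustration, the sketch below applies a standard "cookbook" peaking biquad to boost or attenuate a chosen band; the centre frequency, gain and Q are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(signal, fs, f0=3000.0, gain_db=6.0, q=1.0):
    """Boost (or, with negative gain_db, attenuate) a frequency band using a
    standard peaking biquad. The 3 kHz / +6 dB / Q=1 defaults are illustrative
    assumptions, not values from the description.
    """
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a_lin, -2.0 * np.cos(w0), 1.0 - alpha * a_lin])
    a = np.array([1.0 + alpha / a_lin, -2.0 * np.cos(w0), 1.0 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], signal)

# Usage: shaped = peaking_eq(audio, fs=48000, f0=4000.0, gain_db=-3.0)
```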

Another aspect of the invention concerns a method for determining a hearing capacity of a person, using an aforementioned device, comprising the steps of: arranging the device over the head of the person; providing an audio stimulus to the one or two of the person’s ears with the audio stimulus device of the ear enclosure means; and/or providing a visual stimulus to the one or two of the person’s eyes with the virtual reality (VR) glasses; and measuring a brain signal with the brain signal measurement device in response to the audio stimulus and/or visual stimulus; determining the hearing capacity of the person, wherein the determining of the hearing capacity is at least partly based on the measured brain signal.

Therein, the VR glasses may comprise an eye signal measurement device for measuring an eye signal of the person in response to the audio stimulus and/or visual stimulus, comprising the further step of: measuring the eye signal of the person, for instance in combination with real-time fNIRS brain signals, in response to the audio stimulus and/or visual stimulus.

The ear enclosure means may comprise an ear signal measurement device configured for measuring an ear signal of the person in response to the audio stimulus, comprising the further step of: measuring the ear signal, for instance with LiDAR, of the person in response to the audio stimulus. In an embodiment, configuring the device for being worn over a head of the person may comprise configuring the device as a helmet.

Preferably, the device is configured to act as a Faraday cage to prevent external electromagnetic fields from interfering with the functioning of the device.

The aforementioned device may furthermore comprise microphones to allow the device to perform active noise cancellation (ANC). Such microphones may be provided in/on an external surface of the device, such as on an external surface of the helmet, when the device is configured as a helmet.

The aforementioned device may also comprise a microphone for recording the person’s voice to perform speech analysis. Thus, an even more complete picture of the person’s capabilities may be obtained.

Furthermore, the aforementioned ear signal measurement device may be configured to perform Optical Coherence Tomography (OCT) and/or LiDAR on the one or more ears of the person, such as to determine movement or position of the eardrum.

Also, the aforementioned device may be configured to perform an Electromyography (EMG) on the person, such as the muscles of the person’s face or muscles in or around the person’s ears, such as to test the stapedius reflex or the functioning of the tensor tympani muscle.

Another aspect of the invention relates to a method for measuring a person's hearing capacity, using an aforementioned device, in order to distinguish between the person’s dominant and non-dominant ear, leading to better hearing aid adjustment and improved wearing comfort.

In general, the Applicant submits that the aforementioned device, and in particular fNIRS, can be used for diagnostics and/or detecting overactive brain areas in people with tinnitus, which may for instance be treated with psychoeducational sessions through VR and neuro-feedback therapy (possibly also via fNIRS).

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be explained in more detail below, with reference to illustrative embodiments shown in the drawings. Therein: Figure 1 shows a perspective view of an example embodiment of a device according to the invention arranged on the head of a person;

Figure 2 shows a schematic view of an example embodiment of a device according to the invention, such as the example embodiment shown in Figure 1, arranged on the head of a person;

Figure 3 shows an example of an ear signal measurement device embodied by a LiDAR device in an ear canal of the person;

Figure 4 shows an interior chamber of an individual headphone of a pair of individual headphones, in the form of an icosahedron anechoic chamber;

Figure 5 shows a schematic overview of a method for obtaining and/or analyzing data, using a device according to the invention;

Figure 6 shows a schematic overview of a diagnostic method, using a device according to the invention, as well as diagnostic submethods, device sensors to be used in such submethods, and relevant parameters to be measured with such submethods;

Figure 7 shows a schematic overview of a brain signal measurement device in the form of an fNIRS device;

Figure 8 shows a schematic depiction of the basic functionality of an fNIRS device, such as the fNIRS device of Figure 7;

Figures 9a and 9b show preferred positions of fNIRS emitters and detectors with respect to the brain of the person;

Figure 10 shows a schematic view of an example embodiment of an ear signal measurement device comprising a LiDAR device according to the invention to be inserted into a person’s ear canal, for instance for the detection of eardrum movements; and

Figure 11 shows a schematic view of an example embodiment of an interior chamber of an individual headphone of a pair of individual headphones comprised by ear enclosure means, in the form of an icosahedron anechoic chamber, comprising acoustic metamaterial.

DETAILED DESCRIPTION

Figure 1 shows a perspective view of an example embodiment of a device 1 for determining a hearing capacity of a person 2 according to the invention arranged on a head 3 of the person 2. The device 1 is configured for being worn over the head 3 of the person and may for instance comprise a helmet (as will be explained with reference to Figure 2). The device 1 comprises ear enclosure means 4 for enclosing the person’s ears 5, such as over-the-ear (OTE) headphones 6. The ear enclosure means 4 may have inner dimensions of 80 - 85 mm (height) and/or 40 - 45 mm (width). The ear enclosure means 4 comprise an audio stimulus device 7 for providing an audio stimulus to one or two of the person’s ears 5. The audio stimulus device 7 is preferably suitable for producing audio in a frequency range of 100 Hz - 20 kHz. The device 1 furthermore comprises virtual reality (VR) glasses 8 for providing a visual stimulus to one or two of the person’s eyes 9. The VR glasses 8 may operate on any conceivable operating system, such as Android or iOS, to name some examples, and may operate using any available engine, as long as the visual output is high-quality (e.g. having 4K resolution). The device 1 also comprises a brain signal measurement device 10 (e.g. attached to the head 3) configured for measuring a brain signal of the person 2 in response to the audio stimulus and/or visual stimulus. Therein, the determining of the hearing capacity of the person 2 is at least partly based on the measured brain signal. The brain signal may comprise an electroencephalography (EEG) signal and/or a functional near-infrared spectroscopy (fNIRS) signal (the latter will be explained in more detail with reference to Figures 7 - 9b). The device 1 may also comprise an ear signal (or ear response) measurement device 12, e.g. in the form of a(n) (internal) light detection and ranging (LiDAR) device 13 for scanning the person’s ear canal, e.g. in response to the audio stimulus. The device 1 may also comprise several external LiDAR devices 13, e.g. arranged on an external surface of the ear enclosure means 4 and/or on an external surface of the device 1, such as a helmet (e.g. shown in Figure 2), to scan the surroundings of the person 2. The VR glasses 8 may comprise an eye signal (or eye response) measurement device 11 for measuring an eye signal of the person 2 in response to the audio stimulus and/or visual stimulus. The eye signal may comprise an electrooculography (EOG) signal, an eye tracking signal and/or an eye movement (EM) signal. The skilled person will understand that the device 1 may be attached to the head 3 of the person 2 by using appropriate (i.e. comfortable) straps 72, bands, et cetera. The device 1 furthermore should preferably be configured to have sufficient battery life, such as 30 - 36 hours. The device 1 should preferably also be able to communicate via wireless technologies, such as Wi-Fi or Bluetooth. The audio stimulus provided by the audio stimulus device 7 may for instance comprise ERP stimuli, clicking sounds, specific tones, pink noise, music, mono sound, stereo sound, et cetera. The audio stimuli may have an interval of 1.5 seconds - 2.5 seconds, such as 2 seconds. The intensity of the audio stimuli may be 130 dB. Also, in view of the possibly large amounts of data to be communicated/analyzed, the device 1 may comprise photonic chips. The device 1 is preferably also configured to communicate via 5G. The VR glasses 8 may also be used to make people “aware” of hearing loss in a playful manner (e.g. in a VR game setting).
Developments with the LiDAR functionality of present-day VR glasses 8 (such as the Oculus Quest) also offer possibilities for hearing diagnostics, making optimal adjustment of hearing aids easier. In particular, linking an environmental LiDAR scan (such as performed with the aforementioned external LiDAR devices 13, e.g. as comprised by the VR glasses 8) with the acoustic value of the environment where the test is carried out is promising. Such linking can be realized based on algorithmic interpretation. The Applicant foresees that adjusting hearing aids based on the aforementioned acoustic value may help with creating a new generation of smart devices where the speech-to-noise ratio is optimized.
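
As a hedged illustration of how such a link could be realized: the scanned geometry could be reduced to an acoustic value such as the reverberation time, e.g. via Sabine's formula RT60 = 0.161 * V / A. The description only speaks of algorithmic interpretation, so the use of Sabine's formula and the material absorption coefficients below are assumptions.

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate the reverberation time RT60 = 0.161 * V / A of a room,
    where A is the summed equivalent absorption area.

    Using Sabine's formula as the 'acoustic value' of the scanned environment
    is an assumption. `surfaces` is a list of (area_m2, absorption_coefficient)
    pairs, e.g. derived from a LiDAR scan plus assumed surface materials.
    """
    absorption_area = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption_area

# Example living room: 4 x 5 x 2.5 m, plastered walls/ceiling and a carpeted floor.
room = [(2 * (4 + 5) * 2.5, 0.03), (4 * 5, 0.03), (4 * 5, 0.30)]
print(f"RT60 ~ {sabine_rt60(4 * 5 * 2.5, room):.2f} s")
```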

Figure 2 shows a schematic view of an example embodiment of a device 1 according to the invention, such as the example embodiment shown in Figure 1, arranged on the head 3 of a person 2. The device 1 as shown comprises a helmet 23. The outer shell of the helmet 23 may be made of Kevlar, graphite, carbon fibre, glass fibre, polycarbonate, et cetera. The helmet 23 is preferably configured to protect the person’s head 3 against impact in case of an accident. Thereto, an inner shell of the helmet may comprise acoustic fabric panels, acoustic foam panels, acoustic mineral wool, acoustic cotton batts, convoluted acoustic panels, fabric-covered foam panels, or the like, to increase comfort and impact resistance, as well as to increase sound absorption. The device 1 as shown in Figure 2 may have ear enclosure means 4 comprising an ear signal measurement device 12 configured for measuring an ear signal of the person 2 in response to the audio stimulus. As more clearly shown in Figure 3, the ear signal measurement device 12 may comprise a light detection and ranging (LiDAR) device 13 for scanning the person’s ear canal 14, such as a volume of the person’s ear canal 14, and/or for measuring movement of the person’s eardrum structures 15 in response to the audio stimulus, such as the person’s eardrum 16, the manubrium malleolaris, the umbo, the musculus stapedius 83 or the tensor tympani muscle 17, to name some examples. The ear signal measurement device 12 may also comprise an internal microphone 27 (such as an optical microphone). The audio stimulus preferably includes an audiological test, comprising providing audiological test stimuli to the one or two of the person’s ears 5 with 8 - 32 frequency channels, at 1/3 octave intervals. The audio stimulus may furthermore include an audiological test, comprising providing audiological test stimuli to the one or two of the person’s ears 5 with frequency channels between 100 and 20000 Hz. In case of problems with the middle ear, an audio stimulus may be provided by an oscillator 24 comprised by the VR glasses 8. The device 1 may be configured to perform an Electromyography (EMG) 28 on the person 2, such as on the muscles of the face of the person 2 or muscles in or around the person’s ears 5, such as to test the stapedius reflex or the functioning of the tensor tympani muscle. Alternatively, or in addition thereto, Optical Coherence Tomography (OCT) may be employed. The device 1 may also comprise one or more “skin sensors” 26, e.g. for performing a photoplethysmogram (PPG), i.e. an optically obtained plethysmogram that can be used to detect blood volume changes in a microvascular bed of tissue, such as of the face of the person 2. The one or more skin sensors 26 may also be used to analyze electrodermal activity (EDA), i.e. the property of the human body that causes continuous variation in the electrical characteristics of the skin. The device 1 may comprise an external microphone 25, preferably arranged near the mouth of the person 2, i.e. during use, e.g. for performing speech analysis. The ear signal measurement device 12 may also comprise a nano-camera, i.e. a camera sufficiently small to be inserted into the ear canal of the person 2, or a sufficiently small endoscope. However, a light detection and ranging (LiDAR) device 13 is preferred, because such a device 13 offers a wider range of measurement possibilities. The device 1 may furthermore comprise ANC microphones 30 to allow the device 1 to perform active noise cancellation (ANC). The ANC microphones 30 may be provided in/on an external surface of the device 1, such as on an external surface of the helmet 23. Preferably, the device 1, such as the helmet 23, is configured to act as a Faraday cage to prevent external electromagnetic fields from interfering with the functioning of the device 1. Thereto, the device 1, such as the helmet 23, may comprise aluminum, copper or chicken wire, to name some examples. Preferably, the device 1 also isolates or shields the person 2 (i.e. his or her ears and/or eyes) from light and/or noise.

As mentioned in the foregoing, Figure 3 shows an example of an ear signal measurement device 12 embodied by a LiDAR device 13 in an ear canal 14 of the person 2. The ear signal measurement device 12 may comprise one or more light detection and ranging (LiDAR) devices 13 for scanning the person’s ear canal 14, such as a volume of the person’s ear canal 14, and/or for measuring movement of the person’s eardrum structures 15, such as the person’s eardrum 16 or the tensor tympani muscle 17 (or any other structures connected to the eardrum), in response to the audio stimulus. For the sake of clarity, Figure 3 also shows the middle ear 31.

Figure 4 shows an interior chamber 21 of an individual headphone 18 of a pair of individual headphones 18, in the form of an icosahedron anechoic chamber 22. The ear enclosure means 4, for instance those shown in Figures 1 and 2, may comprise an audio stimulus device 7 configured for providing an audio stimulus to the two of the person’s ears 5 in the form of binaural audio. The ear enclosure means 4 may comprise a pair of individual headphones 18, each one of the pair of individual headphones 18 comprising an audio stimulus device 7 having 6 - 8 speakers 19 and preferably one reference speaker 20 for generating a reference audio signal. As shown in Figure 4, each one of the pair of individual headphones 18 may have an interior chamber 21 in the form of an icosahedron anechoic chamber 22. The interior chamber 21 may be provided with acoustic metamaterial 84.

Figure 5 shows a schematic (functional) overview of a method 32 for obtaining and/or analyzing data, such as using a device 1 according to the invention, for establishing the hearing capacity of the person 2. The method 32 (or device 1) may employ an image and sound generator (module) 44 for generating the audio and/or visual stimulus. The image and sound generator 44 may comprise one or more signal generators 29 for generating an audio signal (audio stimulus), an electric signal, and/or a visual signal (visual stimulus). The one or more signal generators 29 may be functionally coupled to a hardware driver 50 and a sensor driver 51 (both comprised by the image and sound generator 44). The hardware driver 50 may be used to “drive” one or more speakers 19, 20, an ear signal measurement device 12, e.g. in the form of a camera, and/or a microphone 25 (external) and/or microphone 27 (internal). The sensor driver 51 may be used to “drive” a brain signal measurement device 10, such as employing EEG, ERP or EMG, and/or a skin sensor 26, e.g. employing EDA or PPG, and/or an eye signal measurement device 11, e.g. employing EOG, EM, ET or EMG. The one or more signal generators 29 may comprise a microprocessor 46 and may furthermore be functionally coupled to a memory and a transmitter. Preferably, the image and sound generator 44 also utilizes artifact removal and correction, as indicated by reference numeral 36. Prior to activation of the image and sound generator module 44, anamnesis/pre-processing may be performed, as indicated by reference numeral 33, as well as prescriptive analytics, as indicated by reference numeral 34, to let the image and sound generator 44 generate the proper audio and/or visual stimuli.

Predictive analytics 35 may then be used to obtain speech data, biological data, visual data, audio data and/or video data. A multiplexer engine 45 may be used for data selection/integration. The multiplexer engine 45 may be part of a cognition and emotion analysis module 40, which may employ a controller 43 for proper functioning.

Real-time feedback may be provided between the artifact removal and correction module 36, or, generally speaking, the image and sound generator module 44 and a data collection/biometric analysis module 38. Data collection/biometric analysis 38 may comprise a scalp data controller 52, a measurement data controller 53, an emotion estimation unit 54, a cognition estimation unit 55, and/or an estimation result output unit 56. A descriptive analytics module 39 is also foreseen. A communication interface 41 between the cognition and emotion analysis module 40 and the data collection/biometric analysis module 38/descriptive analytics module 39 is also provided. A learning data generator and storage 47 and a data storage unit 48 are shown in the right portion of Figure 5. A communication interface 41 is again provided between the data collection/biometric analysis module 38/descriptive analytics module 39 and the learning data generator and storage 47 and a data storage unit 48. The learning data generator and storage 47 is connected to an input/output interface 57, which may provide output 58, for further use in data science, context-aware sensing, et cetera. A feedback artifact removal and correction module 37 may also be provided, functionally connected to the image and sound generator module 44, the cognition and emotion analysis module 40, the (lower) communication interface 41 and the data collection/biometric analysis module 38/descriptive analytics module 39.

Figure 6 shows a schematic overview of a diagnostic method 59, using a device 1 according to the invention, as well as diagnostic submethods 591 - 595, device sensors 691 - 695 to be used in such submethods 591 - 595, and relevant parameters 5911 - 5953 to be measured with - or specific tests 5911 - 5953 to be performed with - such submethods 591 - 595. Diagnostic submethods 591 - 595 may respectively relate to (from left to right) determining the absolute threshold of hearing (ATH) (591), determining a differential hearing threshold value/frequency and loudness (592), a directional hearing test (593), determining a threshold for speech intelligibility (594), as well as a free field hearing test (595). Device sensors 691 - 695 to be used in such submethods 591 - 595 may respectively comprise (from left to right) ear enclosure means 4/headphones 18 with: internal microphone 27, (nano-)camera, OCT 29 sensors, speaker(s) 19, 20 and/or various other sensors, such as skin sensors 26 (691); VR glasses 8 and ear enclosure means 4/headphones 18 and/or various other sensors (692); VR glasses 8, (nano-)camera, and ear enclosure means 4/headphones 18 and/or various other sensors (693); VR glasses 8, (nano-)camera, and ear enclosure means 4/headphones 18, internal microphone 27, and/or various other sensors (694); VR glasses 8 and REM (695). For diagnostic submethod 591, the specific tests to be performed or parameters to be measured (5911 - 5913) may comprise EEG (5911), PPG 26, EDA 26, heart rate, blood pressure, oxygen saturation, EOG, eye tracking and pupil measurements (5912), and/or EMG 28 (musculus tensor tympani) (5913). For diagnostic submethod 592, the specific tests to be performed or parameters to be measured (5921 - 5923) may comprise EMG 28 and EEG (5921), PPG 26, EDA 26, heart rate, blood pressure, oxygen saturation, EOG, eye tracking and pupil measurements (5922), and EMG 28 (stapedius reflex, tensor tympani), as well as (nano-)camera, OCT 29 (eardrum 16 position) (5923). For diagnostic submethod 593, the specific tests to be performed or parameters to be measured (5931 - 5933) may comprise EEG - peaks Na, Pa, Nb (gyrus temporalis superior, trigeminus), auditive cortex, selective and auditive attention (5931), PPG 26, EDA 26, heart rate, blood pressure, oxygen saturation, EOG, pupil measurements, ear drum 16 movement (5932), and EMG 28 (stapedius reflex, tensor tympani, auricular muscle reflex), (nano-)camera (ear drum 16 position) (5933). For diagnostic submethod 594, the specific tests to be performed or parameters to be measured (5941 - 5943) may comprise EEG (formatio reticularis/thalamus, auditive cortex) - peaks Na, Pa, Nb, superior temporal sulcus, reflexive attention (5941), PPG 26, EDA 26, heart rate, blood pressure, oxygen saturation, EOG, pupil measurements, ear drum 16 movement, speech analysis, attention, S/N ratio, speech synthesis (5942), and EMG 28 (stapedius reflex, tensor tympani, post-auricular muscle reflex), (nano-)camera/OCT 29 (ear drum 16 position) (5943). For diagnostic submethod 595, the specific tests to be performed or parameters to be measured (5951 - 5953) may comprise EEG (formatio reticularis/thalamus, auditive cortex) - peaks Na, Pa, Nb, superior temporal sulcus, reflexive attention, selective and auditive attention (olive superior, inferior colliculus) (5951), PPG 26, EDA 26, heart rate, blood pressure, oxygen saturation, EOG, pupil measurements, ear drum 16 movement, speech analysis, attention, S/N ratio, speech synthesis (5952), and ear canal 14 acoustics, stress and free field S/N ratio (5953).

Figure 7 shows a schematic overview of a brain signal measurement device 10 in the form of an fNIRS device 60. The fNIRS device 60 comprises an fNIRS emitter 61 to emit near-infrared light waves into a blood vessel system of a brain comprised by the person’s head 3. The light waves to be emitted by the fNIRS emitter 61 are created by lasers 63. The light waves are transmitted to the head 3 by fibers 65, which are connected to fiber switches 64 for distributing the light waves. An fNIRS detector 62 with ND wheels 67 and individual detectors 68 then detects the (altered) emitted near-infrared light waves that have travelled through the blood vessel system 69 to estimate cortical hemodynamic activity. One or more of the fibers 65, 66 may be provided with an EEG device 74 arranged on the person’s head 3 at the location where the fibers 65, 66 touch the head 3, such that a combined fNIRS/EEG “sensor” is created, for sensing both fNIRS as well as EEG brain signals (also see Figure 8).

Figure 8 shows a schematic depiction of the basic functionality of an fNIRS device 60, such as the fNIRS device 60 of Figure 7, for use as a brain signal measurement device 10. As mentioned before, the fNIRS device 60 comprises an fNIRS emitter 61 to emit near-infrared light waves into a blood vessel system 69 of a brain 70 comprised by the person’s head 3 (via a fiber 65). An fNIRS detector 62 then detects the (altered) emitted near-infrared light waves that have travelled through the blood vessel system 69 to estimate cortical hemodynamic activity (via another fiber 66). As shown in Figure 8, the fibers 65, 66 may again be provided with an EEG device 74 arranged on the person’s head 3 at the location where the fibers 65, 66 touch the head 3, such that a combined fNIRS/EEG “sensor” is created.
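
As general background on the functionality shown in Figures 7 and 8 (and not asserted to be part of the claimed device), fNIRS systems commonly convert the detected intensity changes into changes of oxy- and deoxyhemoglobin concentration via the modified Beer-Lambert law. A minimal Python sketch with placeholder extinction coefficients and pathlength factors:

import numpy as np

# Background sketch only: modified Beer-Lambert law as commonly used in fNIRS
# processing. The numerical values below are placeholders, not measured data.
EXTINCTION = np.array([
    [0.15, 0.35],   # wavelength 1 (e.g. ~760 nm): [HbO2, HbR], in 1/(mM*cm)
    [0.30, 0.18],   # wavelength 2 (e.g. ~850 nm): [HbO2, HbR], in 1/(mM*cm)
])

def hemoglobin_changes(delta_od, separation_cm=3.0, dpf=6.0):
    # Solve dOD = EXTINCTION @ dC * separation * DPF for dC = (dHbO2, dHbR).
    # delta_od: optical density changes -log10(I/I0) at the two wavelengths.
    # separation_cm: emitter 61 to detector 62 distance (Figure 9 suggests 25 - 35 mm).
    # dpf: differential pathlength factor (wavelength-dependent in practice).
    pathlength = separation_cm * dpf
    return np.linalg.solve(EXTINCTION * pathlength, np.asarray(delta_od, dtype=float))

# Example: small intensity decreases at both wavelengths.
d_hbo2, d_hbr = hemoglobin_changes([0.012, 0.020])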

Figures 9a and 9b show preferred positions of fNIRS emitters 61 and detectors 62 with respect to the brain 70 of the person, with the T3 position being indicated by reference numeral 71. The distance “d” may be 25 - 35 mm, such as 30 mm.

As mentioned in the foregoing, an aspect of the invention relates to a method for determining a hearing capacity of a person 2, using an aforementioned device 1. The method comprises the step of arranging the device 1 over the head 3 of the person 2. The method comprises the further step of providing an audio stimulus to one or two of the person’s ears 5 with the audio stimulus device 7 of the ear enclosure means 4 and/or providing a visual stimulus to one or two of the person’s eyes 9 with the virtual reality (VR) glasses 8. The method also comprises the step of measuring a brain signal with the brain signal measurement device 10 in response to the audio stimulus and/or visual stimulus. In addition, the method comprises the step of determining the hearing capacity of the person 2, wherein the determining of the hearing capacity is at least partly based on the measured brain signal.
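
Purely by way of illustration, the sequence of method steps can be sketched in Python-style pseudocode as follows; every class, function and variable name is hypothetical, and the actual signal analysis is not described at this level in the application:

# Illustrative sketch of the method steps; all identifiers are hypothetical.
class HearingTestDevice:
    def arrange_on_head(self):
        # Step 1: arrange the device 1 over the head 3 of the person 2.
        pass

    def present_stimuli(self, audio=None, visual=None):
        # Step 2: provide an audio stimulus via the audio stimulus device 7
        # and/or a visual stimulus via the VR glasses 8.
        pass

    def measure_brain_signal(self):
        # Step 3: measure the brain signal (e.g. EEG and/or fNIRS) with the
        # brain signal measurement device 10 in response to the stimuli.
        return []

def determine_hearing_capacity(device, audio=None, visual=None):
    # Step 4: determine the hearing capacity at least partly based on the
    # measured brain signal (the analysis itself is outside this sketch).
    device.arrange_on_head()
    device.present_stimuli(audio=audio, visual=visual)
    brain_signal = device.measure_brain_signal()
    return {"brain_signal": brain_signal, "hearing_capacity": None}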

The VR glasses 8 may comprise an eye signal measurement device 11 for measuring an eye signal of the person 2 in response to the audio stimulus and/or visual stimulus. The method then may comprise the further step of measuring the eye signal of the person 2 in response to the audio stimulus and/or visual stimulus.

The ear enclosure means 4 may comprise an ear signal measurement device 12 configured for measuring an ear signal of the person 2 in response to the audio stimulus. The method then may comprise the further step of measuring the ear signal of the person 2 in response to the audio stimulus.

Figure 10 shows a schematic view of an example embodiment of an ear signal measurement device 12 comprising a LiDAR device 13 according to the invention (such as the one shown in Figure 3) to be inserted into a person’s ear canal. Two external LiDAR devices 13 are also shown in the left portion of Figure 10. However, these external LiDAR devices 13 are to scan the person’s auricle, not the interior of the person’s ear canal. The LiDAR device 13 that is to scan the person’s ear canal may comprise an aspherical lens 75 at a distal end thereof, as well as an objective lens 82. The LiDAR device 13 may furthermore comprise a mirror telescope 76 comprising water 78 and semi-reflectors 79. The aspherical lens 75 may be provided with or may comprise polymethylmethacrylate (PMMA), also known as “Perspex” or “Plexiglas”. The LiDAR device 13 may further comprise photodiodes 77 and a polymer 80. A laser 81 connected to an optical fiber 73 provides the necessary laser beam. Thus, the LiDAR device 13 may provide a 270° view of the person’s ear canal.
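
As general background (the application does not specify the ranging principle used by the LiDAR device 13), LiDAR range measurements typically derive the distance to a reflecting surface, such as the ear canal wall or the eardrum 16, from the round-trip time of the emitted light. A minimal sketch under that assumption:

# Background sketch of generic time-of-flight ranging; not taken from the application.
SPEED_OF_LIGHT_M_S = 299_792_458.0  # in vacuum; any in-ear medium correction is ignored here

def distance_from_round_trip(round_trip_time_s):
    # Standard LiDAR relation: distance = c * t / 2 (the pulse travels there and back).
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# Example: a round-trip time of about 67 ps corresponds to roughly 1 cm of depth.
depth_m = distance_from_round_trip(67e-12)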

Figure 11 shows an interior chamber 21, such as in the form of an icosahedron anechoic chamber 22, comprising acoustic metamaterial 84. It should however be clear that any of the (ear enclosure means 4 of the) embodiments of the device 1 disclosed in the present patent application may be combined with the acoustic metamaterial 84. As mentioned before, by using the acoustic metamaterial 84, sound waves can be controlled and redirected so that they do not bounce off surfaces and create echoes. This is beneficial for creating surround sound in the ear enclosure means 4, such as the headphones 18, because it allows the sound to be directed straight into the ear 5 without being disrupted by reflections off objects in the headphone 18. As mentioned before, this in turn facilitates performing accurate measurements related to directional hearing.

Although the invention has been described above with reference to example embodiments, variants within the scope of the present invention will readily occur to those skilled in the art after reading the above description. Such variants are within the scope of the independent claims and the dependent claims. In addition, it is to be understood that express rights are requested for variants as described in the dependent claims. It should also be noted that the example embodiments shown in the Figures, or features thereof, may be combined to yield embodiments not explicitly shown in the Figures.

LIST OF REFERENCE NUMERALS

1. Device for determining a hearing capacity of a person

2. Person

3. Head of the person

4. Ear enclosure means

5. Person’s ear

6. Over-the-ear (OTE) headphones

7. Audio stimulus device

8. Virtual reality (VR) glasses

9. Person’s eye

10. Brain signal measurement device

11. Eye signal measurement device

12. Ear signal measurement device

13. LiDAR device

14. Person’s ear canal

15. Person’s eardrum structure

16. Person’s eardrum

17. Tensor tympani muscle

18. Individual headphone

19. Speaker

20. Reference speaker

21. Interior chamber

22. Icosahedron anechoic chamber

23. Helmet

24. Oscillator

25. External microphone

26. Skin sensor (PPG/EDA)

27. Internal microphone

28. EMG

29. OCT

30. ANC microphone

31. Middle ear

32. Method for analysis/data processing

33. Anamnesis/pre-processing

34. Prescriptive analytics

35. Predictive analytics

36. Artifact removal and correction

37. Feedback artifact removal and correction

38. Data collection/biometric analysis

39. Descriptive analytics

40. Cognition and emotion analysis

41. Communication interface

42. Real-time feedback

43. Controller

44. Image and sound generator

45. Multiplexer engine

46. Microprocessor

47. Learning data generator and storage unit

48. Data storage unit

49. Signal generator

50. Hardware driver

51. Sensor driver

52. Scalp data controller

53. Measurement data controller

54. Emotion estimation unit

55. Cognition estimation unit

56. Estimation result output unit

57. Input/output interface

58. Output (data)

59. Diagnostic method

60. fNIRS device

61. fNIRS emitter

62. fNIRS detector

63. Laser

64. Fiber switch

65. Fiber to head

66. Fiber from head

67. ND wheels

68. Individual fNIRS detector

69. Blood vessel system

70. Brain

71. T3 position

72. Head strap

73. Optical fiber

74. EEG device

75. Aspherical lens

76. Mirror telescope

77. Photodiodes

78. Water

79. Semi-reflectors

80. Polymer

81. Laser

82. Objective lens

83. Stapedius muscle

84. Acoustic metamaterial