

Title:
PROACTIVE AND CONTINUOUS VISION DEGRADATION DETECTION
Document Type and Number:
WIPO Patent Application WO/2021/093964
Kind Code:
A1
Abstract:
The invention relates to a method for detecting a change of at least one of a visual and auditory perception of a user (50) consuming a content on a content output device (61-63), the method comprising at a detection entity (100): - determining at least one content output parameter at the content output device (61-63) to output the content when the user (50) is consuming the content, - comparing the at least one content output parameter to previously determined one or more content output parameters collected for the user, - determining a probability that the perception of the user (50) has degraded over time based on the comparing of the at least one content output parameter to the previously determined one or more content output parameters, wherein if the probability is larger than a threshold, - informing the user (50) that at least one of the visual and auditory perception has degraded over time.

Inventors:
LIOKUMOVICH GREGORY (DE)
MOTISAN LORIN (DE)
NIEWIADOMY JAROSLAW (DE)
WILKEN CHRISTIANE (DE)
Application Number:
PCT/EP2019/081435
Publication Date:
May 20, 2021
Filing Date:
November 15, 2019
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
G16H50/20; A61B5/00; G16H50/30
Foreign References:
US20160166204A12016-06-16
US20170258319A12017-09-14
US20140168606A12014-06-19
Attorney, Agent or Firm:
BERTSCH, Florian (DE)
Claims

1. A method for detecting a change of at least one of a visual and auditory perception of a user (50) consuming a content on a content output device (61-63), the method comprising at a detection entity (100):

- determining at least one content output parameter at the content output device (61-63) to output the content when the user (50) is consuming the content,

- comparing the at least one content output parameter to previously determined one or more content output parameters collected for the user,

- determining a probability that the perception of the user (50) has degraded over time based on the comparing of the at least one content output parameter to the previously determined one or more content output parameters, wherein if the probability is larger than a threshold,

- informing the user (50) that at least one of the visual and auditory perception has degraded over time.

2. The method according to claim 1, wherein the previously determined one or more content output parameters were collected for the user when consuming the content or other content at the content output device (61-63) or at a plurality of different content output devices comprising the content output device.

3. The method according to claim 1 or 2, further comprising the step of identifying the user (50) who is consuming the content at the content output device, wherein the at least one content output parameter is stored with the previously determined one or more content output parameters in relation to a user identifier identifying the user.

4. The method according to claim 3, wherein at least one of the following is used to identify the user: image data of a camera (71-73) provided at the content output device, login information used to access the content output device and a selected profile used to access the content output device.

5. The method according to any of the preceding claims, wherein the comparing the at least one content output parameter to previously determined one or more content output parameters collected for the user is part of a relative analysis of the at least one content output parameter, the determined probability being a first probability, wherein an absolute analysis is carried out on the determined at least one content output parameter in which at least one parameter value of a possible parameter value range of the at least one content output parameter is evaluated in order to determine a second probability that the perception of the user has degraded, wherein a total probability is determined that the perception of the user has degraded based on the first probability and the second probability.

6. The method according to claim 5, wherein the total probability is determined using a corresponding weighting factor for the first and second probability.

7. The method according to any of the preceding claims, further determining the at least one content output parameter for other users consuming the content in a global analysis, wherein a third probability is determined that the perception over time has degraded taking into account the at least one content output parameter for the other users, wherein the total probability is determined based on the first and third probability.

8. The method according to any of the preceding claims, wherein determining the at least one content output parameter comprises determining with which of a plurality of content output devices (61-63) the content is consumed, wherein a device identifier identifying the content output device (61-63) is received and stored together with the at least one content output parameter, wherein the probability is determined that the perception of the user has degraded taking into account the device identifier.

9. The method according to any of the claims 5 to 8, wherein a dataset is provided which relates different parameter values of the at least one content output parameter to corresponding perception statuses of the user (50) when consuming a predefined content, wherein, when the user is consuming the predefined content, a perception status corresponding to the at least one determined content output parameter is determined using the dataset, wherein the perception status is used for determining the second probability.

10. The method according to any of the preceding claims, wherein the at least one of a visual and auditory perception comprises the visual perception of the user and the content output device comprises a display configured to output the content as visual content, wherein the at least one content output parameter comprises at least one display parameter set at the display to output the visual content.

11. The method according to claim 10, wherein the display parameter comprises at least one of the following parameters:

- a pixel resolution used at the display,

- a viewing distance between the user and the display,

- eye gestures of the user when consuming the visual content,

- a scaling factor with which the content is displayed,

- a font or icon size with which the content is displayed on the display,

- a contrast setting of the display,

- a sharpness setting of the display,

- a brightness setting of the display,

- light conditions under which the visual content is consumed by the user,

- a color setting of the display.

12. The method according to any of claims 1 to 9, wherein the at least one of a visual and auditory perception comprises the auditory perception, the content output device comprising a loudspeaker configured to output the content as audio content, wherein the at least one content output parameter comprises at least one audio parameter set at the loudspeaker to output the audio content.

13. The method according to claim 12, wherein the at least one audio parameter comprises at least one of the following parameters:

- at least one equalizer parameter used to output the audio content,

- an environment noise present in an environment into which the audio content is output,

- a fact whether the user is using headphones to consume the audio content,

- which type of headphones is used to consume the audio content,

- a genre of the audio content,

- a source of the audio content,

- audio content metadata.

14. The method according to any of the preceding claims, wherein informing the user comprises transmitting status information to the user describing an assumption of how much the perception has degraded and transmitting future steps to be carried out by the user.

15. The method according to any of the preceding claims, wherein the content consumed by the user is a content not especially designed to test the at least one of a visual and auditory perception.

16. The method according to any of the preceding claims, wherein a plurality of different content output parameters are used to determine the probability that the perception of the user has degraded.

17. The method according to any of the preceding claims, wherein the threshold comprises at least one of the following:

- a fixed threshold,

- an average of different thresholds,

- an average threshold adapted over time,

- a threshold depending on the content consumed.

18. A detection entity (100) configured to detect a change of at least one of a visual and auditory perception of a user consuming a content on a content output device, the detection entity comprising a memory (130) and at least one processing unit (120), the memory containing instructions executable by the at least one processing unit, wherein the detection entity (100) is operative to:

- determine at least one content output parameter at the content output device (61-63) to output the content when the user (50) is consuming the content,

- compare the at least one content output parameter to previously determined one or more content output parameters collected for the user,

- determine a probability that the perception of the user (50) has degraded over time based on the comparing of the at least one content output parameter to the previously determined one or more content output parameters, wherein if the probability is larger than a threshold,

- inform the user (50) that at least one of the visual and auditory perception has degraded over time.

19. The detection entity according to claim 18, further being operative to collect the previously determined one or more content output parameters for the user when consuming the content or other content at the content output device (61-63) or at a plurality of different content output devices comprising the content output device.

20. The detection entity according to claim 18 or 19, further being operative to identify the user (50) who is consuming the content at the content output device, wherein the at least one content output parameter is stored with the previously determined one or more content output parameters in relation to a user identifier identifying the user.

21. The detection entity according to claim 20, further being operative to identify the user based on at least one of the following: image data of a camera (71-73) provided at the content output device, login information used to access the content output device and a selected profile used to access the content output device.

22. The detection entity according to any of claims 18 to 21, further being operative to compare the at least one content output parameter to previously determined one or more content output parameters collected for the user as part of a relative analysis of the at least one content output parameter, the determined probability being a first probability, the detection entity further being operative to carry out an absolute analysis on the determined at least one content output parameter in which at least one parameter value of a possible parameter value range of the at least one content output parameter is evaluated in order to determine a second probability that the perception of the user has degraded, and to determine a total probability that the perception of the user has degraded based on the first probability and the second probability.

23. The detection entity according to claim 22, further being operative to determine the total probability using a corresponding weighting factor for the first and second probability.

24. The detection entity according to any of claims 18 to 23, further being operative to determine the at least one content output parameter for other users consuming the content in a global analysis, to determine a third probability that the perception has degraded over time taking into account the at least one content output parameter for the other users, and to determine the total probability based on the first and third probability.

25. The detection entity according to any of claims 18 to 24, further being operative, for determining the at least one content output parameter, to determine with which of a plurality of content output devices (61-63) the content is consumed, to receive a device identifier identifying the content output device (61-63), to store the device identifier together with the at least one content output parameter, and to determine the probability that the perception of the user has degraded taking into account the device identifier.

26. The detection entity according to any of claims 22 to 25, wherein a dataset is provided which relates different parameter values of the at least one content output parameter to corresponding perception statuses of the user (50) when consuming a predefined content, the detection entity being operative to determine a perception status corresponding to the at least one determined content output parameter using the dataset and to use the perception status for determining the second probability.

27. The detection entity according to any of claims 18 to 26, wherein the at least one of a visual and auditory perception comprises the visual perception of the user and the content output device comprises a display configured to output the content as visual content, wherein the at least one content output parameter comprises at least one display parameter set at the display to output the visual content.

28. The detection entity according to claim 27, wherein the display parameter comprises at least one of the following parameters:

- a pixel resolution used at the display,

- a viewing distance between the user and the display,

- eye gestures of the user when consuming the visual content,

- a scaling factor with which the content is displayed,

- a font or icon size with which the content is displayed on the display,

- a contrast setting of the display,

- a sharpness setting of the display,

- a brightness setting of the display,

- light conditions under which the visual content is consumed by the user,

- a color setting of the display.

29. The detection entity according to any of claims 18 to 28, wherein the at least one of a visual and auditory perception comprises the auditory perception, the content output device comprising a loudspeaker configured to output the content as audio content, wherein the at least one content output parameter comprises at least one audio parameter set at the loudspeaker to output the audio content.

30. The detection entity according to claim 29, wherein the at least one audio parameter comprises at least one of the following parameters:

- at least one equalizer parameter used to output the audio content,

- an environment noise present in an environment into which the audio content is output,

- a fact whether the user is using headphones to consume the audio content,

- which type of headphones is used to consume the audio content,

- a genre of the audio content,

- a source of the audio content,

- audio content metadata.

31. The detection entity according to any of claims 18 to 30, further being operative, for informing the user, to transmit status information to the user describing an assumption of how much the perception has degraded and to transmit future steps to be carried out by the user.

32. The detection entity according to any of claims 18 to 31, further being operative to use a plurality of different content output parameters to determine that the perception of the user has degraded.

33. The detection entity according to any of claims 18 to 32, wherein the content consumed by the user is a content not especially designed to test the at least one of a visual and auditory perception.

34. The detection entity according to any of claims 18 to 33, further being operative to use a plurality of different content output parameters to determine the probability that the perception of the user has degraded.

35. The detection entity according to any of claims 18 to 34, wherein the threshold comprises at least one of the following:

- a fixed threshold,

- an average of different thresholds,

- an average threshold adapted over time,

- a threshold depending on the content consumed.

36. A computer program comprising program code to be executed by at least one processing unit of a detection entity, wherein execution of the program code causes the at least one processing unit to execute a method according to any of claims 1 to 17.

37. A carrier comprising the computer program of claim 36, wherein the carrier is one of an electronic signal, an optical signal, a radio signal and a computer readable storage medium.

Description:
Proactive and continuous vision degradation detection

Technical Field

The present application relates to a method for detecting a change of at least one of a visual and auditory perception of a user. Furthermore, a corresponding detection entity is provided, configured to detect the change of at least one of the visual and auditory perception. In addition, a computer program and a carrier comprising the computer program are provided.

Background

It is known that the visual and auditory perception of a user changes over time. The eyes and the ears exhibit age-related changes. In addition to age-related perception changes, other reasons, such as maladies or certain events, can influence the visual and auditory perception of a user. Currently available tools or applications require a user to explicitly initiate a test to detect a visual and/or auditory degradation. This requires a special setup, such as ensuring or measuring a specific distance to the screen for testing the visual perception. For the auditory perception, a defined environment is needed to test the hearing of the user.

Accordingly, a need exists to provide a system and a method which can more easily detect the degradation of the auditory and visual perception of a user, especially without the need for special setups that measure the auditory and visual perception.

Summary

This need is met by the features of the independent claims. In the dependent claims further aspects of the invention are described.

According to a first aspect, a method is provided for detecting a change of at least one of a visual and auditory perception of a user consuming a content on a content output device. The method comprises the step of determining at least one content output parameter at the content output device to output the content when the user is consuming the content. Furthermore, the at least one content output parameter is compared to previously determined one or more content output parameters collected for the user. Furthermore, based on this comparison, a probability is determined that the perception of the user has degraded over time. If the probability is larger than a threshold, the user is informed that at least one of the visual and auditory perception has degraded over time.

Furthermore, a corresponding detection entity is provided, configured to detect the change of at least one of the visual and auditory perception of the user, wherein the detection entity comprises a memory and at least one processing unit, the memory containing instructions executable by the at least one processing unit. The detection entity is operative to work as discussed above or as discussed in further detail below.

Alternatively, a detection entity is provided comprising a first module configured to determine at least one content output parameter at the content output device to output the content when the user is consuming the content. The detection entity comprises a second module configured to compare the at least one content output parameter to previously determined one or more content output parameters collected for the user. A third module is configured to determine a probability that the perception has degraded based on the comparison. If the probability is larger than a threshold, a fourth module informs the user that at least one of the visual and auditory perception has degraded.

Additionally, a computer program comprising program code is provided, wherein execution of the program code causes at least one processing unit to execute a method as discussed above or as explained in further detail below.

A carrier comprising the computer program is provided, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.

It is to be understood that the features mentioned above and features yet to be explained below can be used not only in the respective combinations indicated, but also in other combinations or in isolation without departing from the scope of the present invention. Features of the above-mentioned aspects and embodiments described below may be combined with each other in other embodiments unless explicitly mentioned otherwise.

Brief description of the drawings

The foregoing and additional features and effects of the invention will become apparent from the following detailed description when read in conjunction with the accompanying drawings, in which like reference numerals refer to like elements.

Fig. 1 schematically shows a system with a detection entity configured to detect the vision and/or auditory degradation of a user in a proactive and continuous way.

Fig. 2 shows an example schematic message flow between the entities involved to detect the degradation of the vision and/or of the auditory system.

Fig. 3 shows an example schematic view of a flowchart of a method carried out by the detection entity when detecting the change of the visual and auditory perception.

Fig. 4 shows an example schematic representation of the detection entity configured to detect a degradation of the visual and/or auditory perception of a user.

Fig. 5 shows another example schematic representation of the detection entity configured to detect a degradation of the visual and/or auditory perception of a user.

Detailed Description of Drawings

In the following, embodiments of the invention will be described in detail with reference to the accompanying drawings. It is to be understood that the following description of embodiments is not to be taken in a limiting sense. The scope of the invention is not intended to be limited by the embodiments described hereinafter or by the drawings, which are to be illustrative only.

The drawings are to be regarded as being schematic representations, and elements illustrated in the drawings are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components of physical or functional units shown in the drawings and described hereinafter may also be implemented by an indirect connection or coupling. A coupling between components may be established over a wired or wireless connection. Functional blocks may be implemented in hardware, software, firmware, or a combination thereof.

In usual daily life, users use multiple devices to consume content, be it audio content, visual content or multimedia content. Many of these devices are equipped with cameras or other sensors, and often the devices can be grouped together to collect data. By analyzing how a user configures these devices to consume the visual and/or audio content, and how the parameters are changed over time, a probability can be determined that the perception of the user has degraded. If the probability is larger than a threshold, the user can be informed that the perception may have degraded.

In the following, the examples are explained in the context of the visual perception. However, it should be understood that the method described below can also be used for the auditory perception of a user or a combination of the visual and auditory perception.

Usually, vision degrades slowly over a lifetime, and individual users tend to ignore or fail to detect this degradation. In daily life, an individual user interacts with several devices to consume different types of visual content. In the following, the content output device outputting the visual and/or audio content will be referred to as the output device. Output devices are used for reading, watching pictures or video, or processing pictures or video. Each of these output devices has different configurations and usage patterns for a preferred visual experience.

An example of such a content output device may be a smartphone or user equipment, wherein the primary content types are text, images, and video. Normally, the screen is comparatively small and there is a shorter and variable distance from the eyes to the display. Within the context of the present application, the term “user equipment” (UE) refers to a device for instance used by a person (i.e. a user) for his or her personal communication. It can be a telephone type of device, for example a telephone or a Session Initiation Protocol (SIP) or Voice over IP (VoIP) phone, a cellular telephone, a mobile station, a cordless phone, or a personal digital assistant type of device like a notepad or tablet equipped with a wireless data connection.

A further output device is a computer screen or display, with the primary content types being text, video and video games. The display is normally larger than in the smartphone/UE and comprises a mid-size screen. The distance from the eyes to the display is rather static and is larger compared to the distance to the screen for the smartphone or user equipment.

A further possible output device is a television or smart TV where the primary content type is video. Here, the environment is rather dark and the display is larger than for the computer display or the smartphone. Furthermore, the distance from the eyes to the display is larger than for the computer screen and is mainly static. In general the output devices have displays of different sizes.

The solution discussed below provides a device and a method for detecting a change in the visual and/or auditory perception of a user, wherein content output parameters are collected from the above-mentioned different devices that are used on a daily basis. The information, such as the content output parameters, may be collected from one or several of the above-mentioned devices. The content output parameters may contain parameters such as a pixel resolution used at a display, a viewing distance between the user and the display, or the eye gestures of the user when consuming the content. Other factors, such as a scaling factor with which the content is displayed or a font or icon size with which the content is displayed on the screen, may be considered. Furthermore, display settings, such as a contrast setting, a sharpness setting, a brightness setting or the light conditions under which the content is consumed, may be considered. Additionally, a color setting of the display may be considered. Corresponding sensors collecting the desired content output parameters may be provided at the corresponding content output device or may be installed externally, by way of example, if all of the content output devices belong to the ecosystem of the same operator or vendor.
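Purely as an illustration of what one collected observation could look like, the following minimal Python sketch models a single record of content output parameters; the class and field names are hypothetical and are not taken from the application.

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class OutputParameterSample:
    """One observation of content output parameters (hypothetical schema)."""
    user_id: str                                 # identifier of the recognized user
    device_id: str                               # identifier of the content output device
    timestamp: float = field(default_factory=time.time)
    font_size_pt: Optional[float] = None         # font or icon size used for the content
    scaling_factor: Optional[float] = None       # scaling factor of the displayed content
    viewing_distance_cm: Optional[float] = None  # distance between user and display
    brightness: Optional[float] = None           # display brightness, 0.0 .. 1.0
    contrast: Optional[float] = None             # display contrast, 0.0 .. 1.0
    ambient_light_lux: Optional[float] = None    # measured light conditions
```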

The content output devices may be used by different users. If this is the case, sensors such as built-in cameras may be used to recognize the different users. Furthermore, authentication information used to log in to the corresponding content output device, or a user profile selected by the user, may be used to identify the user who consumes the content on the content output device. Other approaches for recognizing a user may also be used.

Fig. 1 shows a schematic view of a system in which a detection entity 100 is configured to detect a change of a visual and/or auditory perception of a user, wherein the entity 100 operates in a proactive way without the user triggering any testing of the visual and/or auditory perception.

As shown in Fig. 1, a user 50 can consume content on different content output devices 61, 62 or 63. To each of the content output devices 61 to 63, sensors 71 to 73 may be connected in order to identify the user. The sensors are not necessarily provided when the device is used by only a single user. Otherwise, each of the sensors 71 to 73 is capable of identifying the user. The sensor may comprise a camera or any other sensor configured to identify a user, such as a sensor configured to identify a fingerprint. Additionally, the user may also be identified by authentication information used to access the corresponding device 61 to 63.

Each of the devices may use a certain setting to output the content, be it visual or audio content. The one or more content output parameters mentioned in more detail above are collected at the corresponding devices and are sent to the detection entity 100. The detection entity 100 then collects and analyzes the received information in order to deduce a possible degradation of the perception of the user. The detection entity 100 can be implemented as a separate node and may be provided as a local node or may be distributed in a cloud environment. The functionality of the detection entity may also be integrated into one of the devices 61 to 63, named a leading device, or may even be distributed between the different devices.

In the following, a possible procedure is explained in more detail which can be used to deduce the change of the perception of the user. As explained in connection with Fig. 2, a user selects a content to be consumed and selects on which device he or she wants to consume the content, step S20. In step S21 the request for the content is input into the device, and in step S22 the settings for the output of the content are adjusted by the user. In step S23 the content is then delivered to the user with the adjusted settings. The settings are adjusted based on the current environment and based on the vision capabilities of the user. It should be understood that the settings may also be adjusted after the content is delivered to the user, when the user notices that the provided output is not satisfactory under the present settings.

In step S24, the devices automatically measure the multiple output parameters, such as the viewing parameters, during the user interaction and the content output. For a visual content this can mean that the viewing distance to the display is determined, using either a built-in front camera or a manual configuration for longer static distances. Furthermore, it is possible to detect eye gestures or movements like squinting and blinking, and the changes of these gestures over time, using the same sensors mentioned above. Furthermore, a screen resolution and a scaling factor used for the content may be monitored. Additionally, a font or icon size may be determined with which the content is displayed. Furthermore, any contrast settings, sharpness settings or brightness display settings may be determined.

The determined content output parameters discussed above are then transmitted to the detection entity in step S25. The detection entity 100 then stores the data in one or several datasets in step S26. Each dataset may be used for the consumption of one content. Based on the content viewing history and the different datasets, it is then possible to analyze whether there has been any change in the content output parameters which is considered a significant change.

Device 100 analyzes the collected information, e.g. using a static or absolute analysis and/or a dynamic or relative analysis. In the static or absolute analysis, the content output parameters, i.e. the parameter values of a possible parameter value range used by the user, are analyzed, such as the resolution, font size etc. Here the absolute parameter values are used and are not compared to previously detected parameter values; rather, the parameter values per se are analyzed to deduce a vision status. By way of example, the use of an unusually large font size provides a hint that the vision status might not be very good. In the dynamic or relative analysis, the different content output parameters, i.e. the parameter values collected over time for a certain user, may be compared in order to determine if there is any significant change in the content output parameters used when consuming the same or other content, or if there are specific patterns in the changes of the parameters. The static analysis is indicated in Fig. 2 as step S27, whereas the dynamic analysis is indicated in Fig. 2 as step S28. The static analysis may furthermore contain an analysis of pre-stored information which indicates what font size or what content output parameter corresponds to which vision status for a certain device. The dynamic analysis in step S28 can include the analysis of the data from the same user in the past on the same or other devices, and a global analysis in step S29 can comprise an analysis of other users consuming the same content.
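The two kinds of analysis might be sketched as follows. This is a minimal illustration under assumed value ranges and tolerances, not the algorithm of the application; both helper functions and their default parameters are hypothetical.

```python
from statistics import mean

def absolute_probability(font_size_pt: float,
                         min_size: float = 8.0, max_size: float = 24.0) -> float:
    """Static/absolute analysis (step S27, sketch): evaluate where the used
    value lies within its possible range; a value near the maximum, e.g. an
    unusually large font, hints at a degraded vision status."""
    position = (font_size_pt - min_size) / (max_size - min_size)
    return max(0.0, min(1.0, position))

def relative_probability(history: list[float], current: float,
                         tolerance: float = 0.15) -> float:
    """Dynamic/relative analysis (step S28, sketch): compare the current value
    with the user's own history; a relative increase beyond the tolerance
    raises the probability that the perception has degraded."""
    if not history:
        return 0.0
    baseline = mean(history)
    if baseline == 0:
        return 0.0
    change = (current - baseline) / baseline
    return max(0.0, min(1.0, 0.5 * change / tolerance))
```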

As far as the static analysis is concerned, the datasets for users having an unknown vision status may be created based on dedicated measurements involving users with known vision statuses. Here a specific vision status can be tied to a specific set of viewing parameters. Consequently, an initial user vision status can be identified by the initial content output parameters the user has configured, based on the above. The presets of known eye gestures can be statically included in the system as well. This means that if the user cannot see well, he or she often closes the eyes a bit to make the viewed content sharper. Furthermore, the user may place the head and eyes closer to the display than in a usual comfortable position.

For the dynamic analysis the entity 100 can use different algorithms which check how viewing patterns are changing for similar content over time, wherein the content may be unknown to entity 100. Here, machine learning algorithms may be used, which are well suited for these kinds of tasks.

In step S30 the entity 100 then finally determines if symptoms of degradation are detected. Here, the collected content output parameters are compared with the information provided for the same or other users in comparable cases. This means that cases are considered where similar devices were used for the content output and where a similar environment or a similar content was used. The determination can be based on the different analysis results obtained in steps S27-S29. The results of all analyses can be combined and weighted, and a likelihood may be determined that the vision status has degraded. If the total likelihood is larger than a threshold, the user may be informed accordingly. By way of example, if the dynamic analysis indicates that the content output parameter(s) change over a defined time period by more than a certain fixed or variable value, the dynamic analysis indicates that a vision degradation has occurred. If the comparison of the content output parameter(s) with the output parameter(s) of other users consuming the same content also indicates that the vision perception is poor, both analysis results lead to the same conclusion. Accordingly, the likelihood that the vision has degraded is quite high.
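The combination in step S30 could, in a minimal form, look as follows; the weights and the threshold are illustrative assumptions, not values from the application.

```python
def total_probability(p_relative: float, p_absolute: float, p_global: float,
                      weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Combine the results of the dynamic (S28), static (S27) and global (S29)
    analyses into one weighted likelihood (sketch; weights are illustrative)."""
    w_rel, w_abs, w_glob = weights
    return w_rel * p_relative + w_abs * p_absolute + w_glob * p_global

def should_notify(p_total: float, threshold: float = 0.7) -> bool:
    """Steps S30/S34 (sketch): inform the user only above the threshold."""
    return p_total > threshold
```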

The static and dynamic analyses may provide contradicting results; in general, each of the results may be weighted by a weighting factor, and if the total likelihood that the vision has degraded is higher than a fixed or variable threshold, the user may be informed accordingly. Furthermore, it is possible to also consider user attributes such as the age of the user, the sex of the user or any viewing habits. The datasets from the other users may be collected without revealing the identity of the different users. Accordingly, the data may be anonymized data. A degradation of the perception may be determined if the difference between the newly collected data and the other provided information (i.e. the difference in the parameter values collected for the content output parameter) is larger than a certain difference. This difference may be a fixed or an adaptive difference in the content output parameters, or may be an average difference adapted over the time period of the history data used. This difference may also be adapted dynamically and/or may depend on the used content output device.

Accordingly, the degradation of the visual perception is considered to be present when the content output parameters deviate substantially from the content output parameters detected for the same user in the past or for other users in the past.

Different algorithms can be used for processing the received data. By way of example, during an initial setup, the device 100 can ask whether a user has vision issues and when he or she last visited a doctor to check the vision status. The patterns of users who recently did vision tests could be used as master examples.

It may be determined that the perception of the user has degraded when one of the content output parameters changes by more than a certain difference. The content output parameters can vary within a certain value range, e.g. between a smallest font size and a largest font size. If the value, i.e. the font size, changes by more than a certain difference, a probability that the vision status has degraded may be increased accordingly. There may be a linear or non-linear relationship between the change of the value of the content output parameters and the probability that the degradation of the perception status occurs. However, it is also possible that the degradation is only determined when several of the content output parameters have changed by more than a certain difference. The more content output parameters are considered, the better the quality of the prediction that the visual and auditory perception has degraded will be.
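One possible non-linear mapping from a parameter change to a probability is a logistic curve over the change normalized to the parameter's value range; the steepness and midpoint below are illustrative tuning knobs, not values from the application.

```python
import math

def change_to_probability(old: float, new: float,
                          value_range: tuple[float, float],
                          steepness: float = 10.0, midpoint: float = 0.2) -> float:
    """Map a parameter change, normalized to its possible value range, onto a
    degradation probability via a logistic curve (sketch). With the
    illustrative defaults, a change of 20 % of the range yields 0.5."""
    lo, hi = value_range
    normalized_change = abs(new - old) / (hi - lo)
    return 1.0 / (1.0 + math.exp(-steepness * (normalized_change - midpoint)))
```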

In step S34, when a possible vision degradation has been detected, the detection entity 100 sends a notification to the user. Based on the analysis of the different content output parameters, entity 100 can determine a probability or likelihood that the degradation of the visual and auditory perception has occurred. When the determined probability or likelihood is higher than a fixed or a flexible threshold, the user may be informed accordingly. The notification sent to the user in step S34 can include the detected issues and may include suggestions on the next steps to be carried out by the user.

The algorithms used in steps S27 to S30 can include hardcoded rules and static or machine-learning-based logic.

The complete system described can be used in a standalone mode, but may also be integrated into a vendor's smart home ecosystem. In such a case, more information about the user might be available and could be considered. This kind of information could comprise medical conditions, the glasses used, the existence of chronic diseases, etc.

Furthermore, it is possible to suggest more concrete actions. By way of example, an ophthalmologist working with the given user can be informed as well.

In the method described above, mainly the visual degradation was considered, meaning the visual perception of the user. However, instead of the visual perception, the auditory perception can be checked by the system. When the content output parameter is an audio parameter, the equalizer parameters may be considered which are used to output the audio content. Furthermore, an environment noise present in the environment into which the audio content is output may be determined. Furthermore, it can be determined whether the user is using headphones to consume the audio content and, if so, which type of headphones is used. Furthermore, the genre of the audio content and/or the source of the audio content may be determined. Furthermore, audio content metadata providing more information about the audio source may be used as a content output parameter.

Fig. 3 summarizes some of the main steps carried out by the detection entity 100 in the method discussed above. In step S41 the detection entity determines the content output parameters collected from the different devices 61 to 63. The output parameters may be collected for a single device, such as one of the devices 61 to 63 of Fig. 1, or for several of the devices 61 to 63. In step S42, the output parameters as collected in step S41 are compared at least to the previously determined one or more content output parameters, and in step S43 it is detected whether a degradation has occurred. As discussed above, different approaches may be used and combined in step S43. Device 100 compares the newly collected output parameters to previously collected output parameters of the same user and of other users. One or several of the content output parameters may be used to finally determine whether a degradation has occurred. If this is the case, the user is informed in step S44 that the detection entity has concluded that a degradation of the visual and/or auditory perception has occurred.
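The S41-S44 loop might be condensed into a small stateful class like the following; the mean-based comparison and the notification via print are deliberate simplifications, and all names are hypothetical.

```python
from collections import defaultdict

class DetectionEntitySketch:
    """Minimal sketch of the S41-S44 loop: collect, compare, decide, notify."""

    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.history: dict[str, list[float]] = defaultdict(list)

    def on_sample(self, user_id: str, font_size_pt: float) -> None:
        past = self.history[user_id]
        if past:                                    # S42: compare with the past
            baseline = sum(past) / len(past)
            p = min(1.0, max(0.0, (font_size_pt - baseline) / baseline))
            if p > self.threshold:                  # S43/S44: decide and notify
                print(f"User {user_id}: perception may have degraded (p={p:.2f})")
        past.append(font_size_pt)                   # S41: store the new parameter
```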

Fig. 4 shows a schematic architectural view of the detection entity 100 which can carry out the above-discussed determination of the degradation of the visual and auditory perception. The device comprises an interface 110 provided for transmitting user data or control messages to other entities and for receiving user data or control messages from other entities. The interface 110 receives the content output parameters from the different devices 61 to 63 when the user is consuming the content. Furthermore, the interface 110 informs the user that a visual and auditory perception degradation has been detected. The device 100 furthermore comprises a processing unit 120 which is responsible for the operation of the device 100. The processing unit 120 comprises one or more processors and can carry out instructions stored on a memory 130. The memory may include a read-only memory, a random access memory, a mass storage, a hard disk, or the like. The memory 130 can furthermore include suitable program code to be executed by the processing unit 120 so as to implement the above-described functionalities in which the device 100 is involved.

Fig. 5 shows a further implementation of the detection entity 300 similar to device 100. The entity 300 comprises a first module 310 configured to determine the content output parameters used when the user is consuming the content. A second module 320 is configured to compare the collected content output parameters to previously determined one or more content output parameters collected for the same user and/or for other users. A third module 330 is configured to determine whether the perception of the user has degraded over time, and if this is the case, a fourth module 340 is configured to inform the user of the degradation of the visual and/or auditory perception.

From the above, some general conclusions can be drawn.

The previously determined one or more content output parameters were collected for the user when the content, or other content such as similar content, was consumed at the content output device or at a plurality of different content output devices comprising the content output device. Accordingly, this means that the previously determined content output parameters were collected at the same content output device or at any of the other content output devices 61 to 63 used by the same user 50.

Furthermore, the user 50 who is consuming the content at the content output device may be identified. The at least one content output parameter that is determined when the user is consuming the content is then stored with the previously determined one or more content output parameters in relation to a user identifier identifying the user 50.

A camera may be used to identify the user. In addition, login information used by the user to access the content output device, or a profile selected by the user when accessing the content output device, may be used to identify the user.

The comparing of the content output parameters to the previously determined content output parameters can be part of a relative analysis of the at least one content output parameter, the determined probability being a first probability. Furthermore, an absolute analysis may be carried out on the determined at least one content output parameter, in which at least one parameter value of a possible parameter value range of the at least one content output parameter is evaluated in order to determine a second probability that the perception of the user has degraded, wherein a total probability is determined that the perception of the user has degraded based on the first probability and the second probability.

The total probability may be determined using a corresponding weighting factor for the first and second probability. By way of example, the first weighting factor may be larger than the second weighting factor.

Furthermore, it is possible to determine the at least one content output parameter for other users consuming the content in a global analysis, wherein a third probability is determined that the perception has degraded over time, taking into account the at least one content output parameter determined for the other users, wherein a total probability is determined based on the first, second, and third probability. By way of example, if it is detected that a plurality of the other users also changed the content output parameters, the entity 100 may deduce that the content is presented in a different way, so that the change of the content output parameter is not a sign of a degradation of the visual and auditory perception but may be caused by a different setting of how the content is presented to different users. If, however, one user uses completely different output parameters compared to other users consuming the same content, one might deduce that the perception of the user is at least not very good. The different analysis steps of Fig. 2 discussed above may be used in different combinations for determining a total probability that a degradation of the perception has occurred. The total probability may be determined based on the second probability alone, the second and third probability, the first and second probability, or the first, second, and third probability.
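The damping effect of the global analysis could be sketched as follows: if most other users changed the same parameter, the change is attributed to the content rather than to the user. The share threshold and the change cutoff are illustrative assumptions.

```python
def global_probability(user_change: float, other_users_changes: list[float],
                       population_shift_share: float = 0.5) -> float:
    """Global analysis (step S29, sketch): damp the probability when the
    parameter change is population-wide, e.g. after a content redesign."""
    if not other_users_changes:
        return 0.0
    changed = sum(1 for c in other_users_changes if abs(c) > 0.05)
    if changed / len(other_users_changes) > population_shift_share:
        return 0.0                          # everyone changed: likely the content
    return max(0.0, min(1.0, user_change))  # lone outlier: count the change fully
```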

It may be confirmed that the perception of the user has degraded over time when the at least one content output parameter determined for the user differs from the at least one content output parameter determined for the other users by more than a certain parameter value of the possible value range of the corresponding content output parameter.

When the at least one content output parameter is determined for the user it may be determined with which of the plurality of content output devices the content is consumed. A device identifier identifying the content output device may be received and stored together with the at least one content output parameter. The probability can be determined that the perception of the user has degraded taking into account the device identifier.

A dataset may be provided which relates different parameter values of the at least one content output parameter to corresponding perception statuses of the user when consuming a predefined content. When the user is consuming the predefined content, the perception status corresponding to the at least one determined content output parameter is determined using the dataset, wherein the perception status is used for determining the second probability. The determined perception status can be compared to a former perception status of the user in order to determine if the perception of the user has degraded. The dataset may be a simple file. These datasets can be used as master examples where a user having a certain visual and auditory perception uses a certain set of content output parameters.
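Such a master-example dataset might, in its simplest form, be a lookup table; the reference values and status labels below are purely illustrative.

```python
# Hypothetical master examples: font sizes configured for a predefined
# content by users with a known vision status (illustrative values only).
MASTER_EXAMPLES = [
    (10.0, "good"),
    (14.0, "slightly reduced"),
    (18.0, "reduced"),
    (24.0, "strongly reduced"),
]

def perception_status(font_size_pt: float) -> str:
    """Return the status whose reference font size is closest to the
    observed one (sketch of the dataset-based lookup)."""
    return min(MASTER_EXAMPLES, key=lambda e: abs(e[0] - font_size_pt))[1]
```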

The method can consider the visual perception of the user. Here, the content output device comprises a display configured to output the content as visual content, and the at least one content output parameter comprises at least one display parameter set at the display to output the visual content.

The display parameter can contain at least one of the following parameters: a pixel resolution used at the display, a viewing distance between the user and the display, eye gestures of the user when consuming the visual content, a scaling factor with which the content is displayed, a font or icon size with which the content is displayed on the display, a contrast setting of the display, a sharpness setting of the display, a brightness setting of the display, the light conditions under which the visual content is consumed by the user, and a color setting of the display.

The entity 100 may also determine the auditory perception of the user, wherein the content output device then comprises a loudspeaker or transducer configured to output the content as audio content. The at least one content output parameter comprises at least one audio parameter set at the loudspeaker to output the audio content. The content output parameter can then include the equalizer parameters used to output the audio content, or an environment noise present in the environment into which the audio content is output. Furthermore, the fact whether the user is using headphones to consume the audio content and/or which type of headphones is used may be considered. Furthermore, a genre of the audio content, a source of the audio content or audio content metadata may be considered.
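As an audio counterpart to the visual sketches above, a growing boost of a high-frequency equalizer band could be turned into a probability as follows; the band choice and the maximum boost are illustrative assumptions.

```python
def treble_boost_probability(eq_history_db: list[float], current_db: float,
                             max_boost_db: float = 12.0) -> float:
    """Sketch: an increasing high-shelf equalizer gain relative to the user's
    own history can hint at degraded perception of high frequencies."""
    if not eq_history_db:
        return 0.0
    baseline = sum(eq_history_db) / len(eq_history_db)
    return max(0.0, min(1.0, (current_db - baseline) / max_boost_db))
```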

When the user is informed about the detected degradation, status information may be transmitted to the user describing an assumption of how much the perception has degraded. Furthermore, future steps to be carried out by the user may be proposed to the user.

The content that is consumed by the user is preferably content which is not especially designed to test at least one of the visual and auditory perception of the user. This means that it is a proactive and continuous detection method which uses the normal consumption habits of the user to detect whether a degradation has occurred.

The degradation may be determined based on a single one of the content output parameters. However, it is also possible that a plurality of different content output parameters are considered together to determine whether the perception of the user has degraded. This improves the likelihood that the detected degradation is actually present at the user.

The threshold which is used for the determination of the probability of the degradation of the perception may be a fixed threshold, an average of different thresholds, an average threshold adapted over time, or a threshold depending on the content consumed. The trigger that is used to detect the degradation symptoms can be flexible: it can be a weighted combination of parameters, a machine-learning-based algorithm, or a probability. When thresholds are used, the threshold could be content specific or could be provided from outside. It can be dynamically recalculated, e.g. when a popular website changes its design the change could impact the threshold, or when the producer of the content output device changes the fonts for the UE/smartphone or changes the display background, etc.
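A content-specific threshold adapted over time could be sketched as an exponential moving average; the default value and smoothing factor are illustrative assumptions.

```python
class AdaptiveThreshold:
    """Sketch of a content-specific threshold adapted over time."""

    def __init__(self, default: float = 0.7, alpha: float = 0.1):
        self.default = default        # fixed fallback threshold
        self.alpha = alpha            # smoothing factor of the moving average
        self.per_content: dict[str, float] = {}

    def get(self, content_id: str) -> float:
        return self.per_content.get(content_id, self.default)

    def update(self, content_id: str, observed: float) -> None:
        # Recalculate dynamically, e.g. after a site redesign shifted settings.
        old = self.get(content_id)
        self.per_content[content_id] = (1 - self.alpha) * old + self.alpha * observed
```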

The above discussion provides a solution which detects a vision degradation without the user having to actively test the perception status. It can act in the background continuously and, especially, proactively, without any intervention by the user. It can detect a progressing degradation based on the user's habits of using the different content output devices in daily life. This allows an earlier detection of the progressing degradation. This can be especially helpful for detecting a perception degradation for people needing a certain perception status for their profession. The invention can help to continuously determine a vision status for pilots, train conductors, or similar persons needing a certain vision status or a certain hearing status.

Another advantage of the solution is the combined evaluation of data received from different types of devices, which may improve the quality of the diagnosis.