

Title:
METHOD AND APPARATUS FOR INTERACTIVE USER SUPPORT
Document Type and Number:
WIPO Patent Application WO/2015/074771
Kind Code:
A1
Abstract:
The invention relates to a method for supporting a user (N) in the operation of a human machine interface adapted to recognize three-dimensional interaction gestures executed by the user (N) by means of his/her hand (H) in a vehicle (1). The method comprises the following steps: in an information mode - electronically reproducing information on using the human machine interface, in a training mode - electronically reproducing a request to the user (N) to execute a three-dimensional interaction gesture recognizable by the human machine interface, - recording the three-dimensional interaction gesture accordingly performed by the user (N), - determining and electronically reproducing a feedback on the three-dimensional interaction gesture accordingly performed by the user (N), in an operating mode - recognizing a three-dimensional interaction gesture executed by the user (N), - if the three-dimensional interaction gesture is successfully recognized, then issuing a command associated with the recognized three-dimensional interaction gesture, - determining and electronically reproducing a feedback on the three-dimensional interaction gesture executed by the user (N), wherein the feedback comprises information on the recognizability and/or precision and/or speed of a three-dimensional interaction gesture executed by the user (N).

Inventors:
SCHLIEP FRANK (DE)
KIRSCH OLIVER (DE)
WINTON STEPHEN (DE)
Application Number:
PCT/EP2014/064862
Publication Date:
May 28, 2015
Filing Date:
July 10, 2014
Assignee:
JOHNSON CONTROLS GMBH (DE)
International Classes:
B60K37/06
Foreign References:
US20130155237A1 (2013-06-20)
Attorney, Agent or Firm:
FINGER, Catrin (Gerhart-Hauptmann-Straße 10/11, Erfurt, DE)
Claims

1. A method for supporting a user (N) in the operation of a human machine interface adapted to recognize three-dimensional interaction gestures executed by the user (N) by means of his hand (H) in a vehicle (1) comprising the following steps:

in an information mode

- electronically reproducing information on using the human machine interface,

in a training mode

- electronically reproducing a request to the user (N) to execute a three-dimensional interaction gesture recognizable by the human machine interface,

- recording the three-dimensional interaction gesture accordingly performed by the user (N),

- determining and electronically reproducing a feedback on the three-dimensional interaction gesture accordingly performed by the user (N),

in an operating mode

- recognizing a three-dimensional interaction gesture executed by the user (N),

- if the three-dimensional interaction gesture is successfully recognized, then issuing a command associated with the recognized three-dimensional interaction gesture,

- determining and electronically reproducing a feedback on the three-dimensional interaction gesture executed by the user (N),

wherein the feedback comprises information on the recognizability and/or precision and/or speed of a three-dimensional interaction gesture executed by the user (N).

2. Method according to claim 1, characterized in that information on using the human machine interface is electronically reproduced on at least one display (2) and that the feedback is formed as a color code applied to the background color of the at least one display (2).

3. Method according to claim 2, characterized in that the interior of the vehicle (1) is at least partly illuminable and that the feedback is formed as a color code applied to the illumination of the interior of the vehicle or parts of the interior of the vehicle (1).

4. Method according to claim 2 or 3, characterized in that the color code indicates a sufficient execution of a requested three-dimensional interaction gesture by a first color, a recognizable yet improvable execution of a requested three-dimensional interaction gesture by a second color and an insufficiently recognizable or unrecognizable execution of a requested three-dimensional interaction gesture by a third color.

5. Method according to one of the previous claims, characterized in that the feedback is formed as an acoustic and/or vibration and/or haptic signal recognizable by the user (N).

6. Method according to one of the previous claims, characterized in that the feedback describes if and optionally what three-dimensional interaction gesture was recognized upon execution by the user (N).

7. Method according to one of the previous claims, characterized in that the feedback describes if a predetermined number of subsequent executions of a three-dimensional interaction gesture was successfully recognized.

8. Method according to one of the previous claims, comprising in the training mode the steps of

- storing the feedback on the three-dimensional interaction gesture accordingly performed by the user (N) and

- providing the user with a score value determined from the history of feedback on at least one three-dimensional interaction gesture previously performed by the user (N),

wherein the score value increases with the training progress of the user (N).

9. Method according to claim 8, characterized in that advanced three-dimensional interaction gestures are presented in the training mode when a predetermined score value is reached or exceeded, wherein an advanced three-dimensional interaction gesture is formed as a short-cut for a sequence of three-dimensional interaction gestures.

10. Method according to one of the previous claims, wherein the recognition of three-dimensional interaction gestures is adapted to individual variations repeatedly recognized in the execution of three-dimensional interaction gestures by the user (N) such that the execution of three-dimensional interaction gestures comprising such individual variations is recognized more reliably and/or more accurately.

11. An apparatus (6) for supporting a user (N) in the operation of a human machine interface adapted to recognize three-dimensional interaction gestures in a vehicle (1), the apparatus comprising

- at least one reproducing means (6.1) for electronically reproducing

o information on the usage of the human machine interface and/or

o a request to the user (N) to perform at least one three-dimensional interaction gesture with the human machine interface and/or

o feedback on a three-dimensional interaction gesture executed by the user (N),

- at least one input means (6.2) for registering a three-dimensional interaction gesture executed by the user (N),

- at least one control means (6.4) for determining a feedback on a three-dimensional interaction gesture executed by the user (N) and/or for controlling the execution of the electronic reproduction of

o information on the usage of the human machine interface and/or

o a request to the user to execute at least one three-dimensional interaction gesture with the human machine interface and/or

o a feedback on a three-dimensional interaction gesture executed by the user (N) and

- at least one storage means (6.3) for storing at least one feedback on a three-dimensional interaction gesture executed by the user (N),

wherein each of the at least one reproducing means (6.1), the at least one input means (6.2) and the at least one storage means (6.3) is connected with the at least one control means (6.4).

12. An apparatus (6) according to claim 11, characterized in that at least one input means (6.2) is formed as a 3D box (3) comprising a predetermined recognition region adapted for registering three-dimensional interaction gestures executed by a user (N).

13. An apparatus (6) according to claim 12, comprising light sources (4) adapted to illuminate the recognition region of the 3D box (3) or the boundary thereof.

14. An apparatus (6) according to one of the claims 11 to 13, characterized in that at least one reproducing means (6.1) is formed as a display (2) adapted to reproduce figures and/or animations and/or videos.

15. An apparatus (6) according to claim 14, characterized in that at least one reproducing means (6.1) is formed as a head-up display.

16. An apparatus (6) according to one of the claims 11 to 15, characterized in that at least one reproducing means (6.1) is formed as an indicator integrated in a master instrument (5) of the vehicle (1).

Description:
Method and Apparatus for Interactive User Support

The invention relates to a method and an apparatus for interactively supporting a user in the operation of a human machine interface in a vehicle.

In a vehicle, devices and systems of devices for controlling and/or providing information about functions of the vehicle, as well as integrated entertainment and information devices, exhibit human machine interfaces that are increasingly complex and increasingly difficult to explain.

From the state of the art, printed user manuals are known as a method for training a user and/or providing guidance in case of usability problems. It is further known to reproduce the content of such user manuals by electronic output devices such as a display or an audio device. Furthermore, the context-sensitive output of information is known for supporting a user in a current interaction, for example by means of a context-sensitive help text relating to a selectable option or further hints relating to a warning or error message.

Furthermore, from the state of the art, input means for a three-dimensional recording of interaction gestures of a user are known as a 3D box. Such a 3D box enables the recording of the posture of at least one finger relative to a palm, the recording of the movement of fingers relative to each other or relative to a palm, or the recording of the movement of a complete hand.

US 2013/0076615 discloses an interface apparatus and a method for inputting information with a user's finger gesture while the user is driving with his two hands on a steering wheel. In one aspect, the interface apparatus includes a gesture sensor, a gesture processor and a head-up display (HUD), where the gesture sensor and the gesture processor recognize and interpret the information input by the user's finger gesture and such information is displayed on the HUD. In one embodiment, a plurality of point of interest (POI) icons are displayed on the HUD after the user inputs at least one letter into the system, and the user can select a POI by his/her finger gesture. In another embodiment, the gesture sensor can recognize the user's finger gesture in non-alphabet characters.

WO 2014/040930 A1 discloses a method for operating a motor vehicle component by means of gestures, wherein a gesture made by means of an input means, in particular a hand or parts of a hand of an operator, in a three-dimensional operating space that is part of an interior of a motor vehicle is detected, and an operating function linked to the detected gesture is performed, wherein a feedback signal is output as soon as the input means enters or has entered the three-dimensional space from outside.

It is an object of the present invention to provide an improved apparatus and an improved method for supporting a user in the operation of a human machine interface in a vehicle.

The object is achieved by a method according to claim 1 and by an apparatus according to claim 11.

Preferred embodiments of the invention are given in the dependent claims.

According to the invention, a method for supporting a user in the operation of a human machine interface in a vehicle comprises the following steps:

in an information mode

- electronically reproducing information on using the human machine interface,

in a training mode

- electronically reproducing a request to the user to execute a predetermined basic user action for using the human machine interface,

- recording the basic user action accordingly performed by the user and

- determining, storing and electronically reproducing a feedback on the basic user action accordingly performed by the user.

In an embodiment of the invention, the steps of the training mode are iteratively repeated and a trend is determined for the feedback determined on the basic user interactions repeatedly executed by the user.

In a further embodiment of the invention, at least one predetermined advanced user interaction is chosen to be performed by the user, wherein the at least one predetermined advanced user interaction depends on the trend for the feedback determined on the basic user interactions repeatedly executed by the user by then.

In a further embodiment, the method for supporting a user in the operation of a human machine interface in a vehicle comprises an operating mode comprising the following steps:

- recording of a user action executed by the user and

- determining, storing and electronically reproducing a feedback on the user action executed by the user.

When using a 3D Box as an input means, a user action may be formed as a three-dimensional (3D) interaction gesture comprising a pose or a sequence of poses determined by the posture of at least one finger and/or a palm.

In an embodiment of the invention, a method for supporting a user in the operation of a human machine interface adapted to recognize three-dimensional (3D) interaction gestures executed by the user by means of his/her hand in a vehicle comprises the following steps:

in an information mode

- electronically reproducing information on using the human machine interface,

in a training mode

- electronically reproducing a request to the user to execute a 3D interaction gesture recognizable by the human machine interface,

- recording the 3D interaction gesture accordingly performed by the user,

- determining and electronically reproducing a feedback on the 3D interaction gesture accordingly performed by the user,

in an operating mode

- recognizing a 3D interaction gesture executed by the user,

- if the 3D interaction gesture is successfully recognized, then issuing a command associated with the recognized 3D interaction gesture,

- determining and electronically reproducing a feedback on the 3D interaction gesture executed by the user,

wherein the feedback comprises information on the recognizability and/or precision and/or speed of a 3D interaction gesture executed by the user.

While being advantageously powerful and expressive and requiring little or no visual attention of a user, 3D interaction gestures are also considerably more complex and ambiguous than user actions commonly used for user interfaces in a vehicle, such as turning a knob, pressing a button, sliding a slider or entering a key. Therefore, it is more difficult for a user to execute and memorize such 3D interaction gestures or even to decide on the correctness of a present execution of such a 3D interaction gesture. With this embodiment of the invention, a user is supported in learning and executing 3D interaction gestures more easily and efficiently than with paper-based or electronic manuals known from the state of the art, which improves the success rate of user interactions. Thus, this embodiment of the invention improves the efficiency and safety of human machine interfaces adapted to recognize 3D interaction gestures.

According to another aspect of the invention, an apparatus for supporting a user in the operation of a human machine interface in a vehicle comprises

- at least one reproducing means for electronically reproducing information on the usage of the human machine interface and/or for electronically reproducing a request to the user to perform at least one predetermined user action with the human machine interface and/or for electronically reproducing feedback on a user action executed by the user upon request,

- at least one input means for recording a user action executed by the user upon request,

- at least one control means for determining a feedback on a user action executed by the user upon request, for controlling the execution of the electronic reproduction of information on the usage of the human machine interface and/or of a request to the user to execute at least one predetermined user action with the human machine interface and/or a feedback on a user action executed by the user upon request and

- at least one storage means for storing at least one feedback on a user action executed by the user upon request,

wherein each of the at least one reproducing means, the at least one input means and the at least one storage means is connected with the at least one control means.

As an advantage, the method and the apparatus according to the invention enable a user, by means of the electronically reproduced feedback, to recognize an imprecision and/or an error associated with an executed user action. Thus, a user may correct such an imprecision and/or such an error in subsequent repetitions of the user action during the training mode until a satisfying precision and/or a satisfyingly low error rate is reached. Thus, a user may learn even complex user actions in a step-by-step manner.

In an embodiment of the invention, the reproducing means is a display and the input means is a 3D Box. The display may be formed as a head-up display or may be integrated into the console or into a master instrument of the vehicle.

In an embodiment of the invention, an apparatus for supporting a user in the operation of a human machine interface adapted to recognize three-dimensional interaction gestures in a vehicle comprises at least one reproducing means for electronically reproducing

- information on the usage of the human machine interface and/or

- a request to the user to perform at least one three-dimensional interaction gesture with the human machine interface and/or

- feedback on a three-dimensional interaction gesture executed by the user.

In such an embodiment of the invention, the apparatus for supporting a user in the operation of a human machine interface further comprises

- at least one input means for registering a three-dimensional interaction gesture executed by the user and

- at least one control means for determining a feedback on a three-dimensional interaction gesture executed by the user and/or for controlling the execution of the electronic reproduction on the electronic reproducing means.

The at least one control means may, for example, control the execution of the reproduction of

- information on the usage of the human machine interface and/or

- a request to the user to execute at least one three-dimensional interaction gesture with the human machine interface and/or

- a feedback on a three-dimensional interaction gesture executed by the user.

In such an embodiment of the invention, the apparatus for supporting a user in the operation of a human machine interface further comprises at least one storage means for storing at least one feedback on a three-dimensional interaction gesture executed by the user.

In such an embodiment of the invention, each of the at least one reproducing means, the at least one input means and the at least one storage means of the apparatus for supporting a user in the operation of a human machine interface is connected with the at least one control means.

An apparatus according to this embodiment of the invention supports a user in learning and executing 3D interaction gestures recognizable by a human machine interface more easily and efficiently than paper-based or electronic manuals known from the state of the art and improves the success rate of user interactions. Thus, an apparatus according to this embodiment of the invention improves the efficiency and safety of human machine interfaces adapted to recognize 3D interaction gestures.

In the information mode, the user is presented with potential interaction gestures recognizable by the 3D Box as a video and/or as an animation. Furthermore, the information mode explains to the user which control functions are available and which interaction gestures they are associated with when operating the system. Such interaction gestures can be presented to the user from a driver's perspective by means of a video.

In the training mode, the user is requested to perform an interaction gesture associated with a predetermined interaction gesture template, for example to extend the thumb, index finger and middle finger of the right hand. It is possible to demonstrate such an interaction gesture template by means of a figure and/or an animation and/or a video reproduced on the display.

The interaction gesture performed by the user is recorded. As an example, the positions and orientations of individual fingers and/or of a palm are recorded. The agreement with or the deviation from the predetermined interaction gesture template is then determined. As an example, the mean squared distances between predetermined positions and/or trajectories of the interaction gesture template and the actually recorded positions and/or trajectories of the interaction gesture performed by the user may be used as a measure for the degree of agreement.

The user is informed about the determined degree of agreement by a feedback reproduced by the reproducing means. As an example, the background color used in the presentation of potential user interaction gestures may visualize the quality of the last performed interaction gesture, i.e. its degree of agreement with the corresponding interaction gesture template. For example, a red background color may indicate an insufficient degree of agreement, a yellow background color may indicate a degree of agreement that has room for improvement and a green background color may indicate a satisfying degree of agreement, wherein, as an example, a satisfying degree of agreement fulfills at least 80% of a set of quality criteria regarding the execution of the interaction gesture. Furthermore, it is possible to provide the user with a qualitative and/or quantitative feedback, for example on the velocity of execution of the interaction gesture or on whether the interaction gesture was fully or partly inside or outside a predetermined recognition region of the 3D Box.

Furthermore, it is possible to partly color at least a region used for the presentation of potential user interaction gestures. If, for example, the position or orientation of a single finger involved in an interaction gesture deviates from its predetermined position or orientation according to the corresponding interaction gesture template, it is possible to color just this single finger in the figure, animation or video presented. It is also possible to support the information of the user by further reproduction means, such as Light Emitting Diodes (LEDs) that mark the range within which an interaction gesture may be recognized.

The user may further be supported by supplementary information delivered by a second display, such as a head-up display, in addition to a primary display, such as a display integrated into the console of a vehicle. Such supplementary information may be delivered automatically, for example after a certain predetermined number of unsuccessful executions of a predetermined interaction gesture. Such supplementary information may also be delivered on request by the user.

It is also possible to provide such feedback by non-graphical reproducing means that, for example, reproduce sound and/or vibration to indicate a successful execution of an interaction gesture.

It is possible to store the feedback associated with an interaction gesture execution and to compare such a feedback with feedback associated with previous executions. This way, it is possible to implement the training mode as a game, wherein a best score or high score based on previous executions of a certain interaction gesture is compared against the feedback on the current execution of this interaction gesture. It is also possible to provide quantitative feedback on at least one such execution of an interaction gesture to enable the user to evaluate his training progress. For example, it is possible to provide quantitative scores describing the portion of such an execution spent outside the recognition region of the 3D Box, the speed of the execution, the degree of agreement with the corresponding interaction gesture template or how often the recognition region of the 3D Box was left during the execution. It is possible to present such quantitative scores as graphs indicating the progress of the user in the training.

It is possible to differentiate the training mode into different training levels associated with different degrees of difficulty and/or different scope of interaction gestures to be executed within such a training level. As an example, it is possible to train in a more advanced level a more complex interaction gesture that constitutes a shortcut for a sequence of elementary interaction gestures that have been trained and accomplished by the user in a previous training level.

It is also possible to present the user with an interactive training graphic illustrating a template execution of an interaction gesture. Such an interactive training graphic may, for example, be formed as an animation or video which the user mimics in order to train the respective interaction gesture. The user may be given feedback on the correctness, precision and speed of the execution of the interaction gesture. As an advantage, a sufficient number of such repetitive executions improves the muscle memory and enables the user to execute the trained interaction gesture automatically or with less cognitive load.

Analogously to the training mode, the user may be provided with feedback on executed interaction gestures in the operating mode, for example on an unreliable, incomplete or unsuccessful recognition of an interaction gesture or on whether the interaction gesture was recognized in the predetermined recognition region of the 3D Box. If an interaction gesture could not be recognized, the user may be informed of possible reasons for such a failure. The user may also be informed if an unsupported interaction gesture was executed. Furthermore, it is possible to individually adapt the recognition to certain minor individual, i.e. user-specific, variations in the execution of an interaction gesture, similar to the adaptation of handwriting recognition.

Figures 1 and 2 are schematic views of the interior of a vehicle and

Figure 3 is a schematic view of an apparatus for supporting a user in the operation of a human machine interface.

Corresponding parts are marked with the same reference symbols in all figures.

Figure 1 is a schematic view of the interior of a vehicle 1 with a display 2 integrated into the console of the vehicle 1. Interaction gestures within a 3D Box 3 are registered by a camera system (not shown in Figure 1). A user N brings a hand H into the 3D Box 3 and executes an interaction gesture that may be formed as a 3D interaction gesture. For improving the recognition of the hand H and/or for marking the 3D Box 3 for better orientation of the user N, it is possible to illuminate the 3D Box 3 and/or the hand H of the user N fully or partly by light sources 4. It is possible to control the light sources 4 such that the color of the illumination provides a feedback to the user N on the quality of the execution of an interaction gesture. As an example, the color of the illumination provided by the light sources 4 may be green when the hand H is inside the recognition region of the 3D Box 3 and/or when an interaction gesture executed by the user N was recognized. Alternatively, this color may be red when the hand H is outside the 3D Box 3 and/or when an interaction gesture executed by the user N was not recognized and/or when an unsupported interaction gesture was recognized.

As shown in Figure 2, in the training mode, instructions on how to execute an interaction gesture are presented on the display 2. While and/or after an interaction gesture is recognized, feedback on the execution of the interaction gesture is presented on the display 2. It is also possible to present hints on the correction of an imperfect execution of the interaction gesture. It is further possible to use the display 2 for the presentation of information regarding the usage of the 3D Box 3 or for the presentation of feedback on the execution of interaction gestures in the operating mode. It is also possible to control the background color of the display 2 such that it provides a feedback to the user N on the quality of the execution of an interaction gesture. As an example, this background color may be green when the hand H is inside the recognition region of the 3D Box 3 and/or when an interaction gesture executed by the user N was recognized. Alternatively, this background color may be red when the hand H is outside the 3D Box 3 and/or when an interaction gesture executed by the user N was not recognized and/or when an unsupported interaction gesture was recognized.

It is also possible to use dedicated indicator elements of the master instrument 5, either alone or in addition to the display 2, for reproducing information and/or feedback on interaction gestures.

Figure 3 schematically shows an apparatus 6 for supporting a user N in the operation of a human machine interface (HMI). The apparatus 6 comprises a reproducing means 6.1 for electronically reproducing information on the usage of the HMI. The reproducing means 6.1 may further reproduce a request to the user N to perform at least one interaction gesture with the HMI. The reproducing means 6.1 may also reproduce feedback on an interaction gesture executed by the user N. The reproducing means 6.1 may be formed as a display 2 and/or as an indicator element integrated into a master instrument 5.

The apparatus 6 further comprises an input means 6.2 for registering an interaction gesture executed by the user N. The input means 6.2 may be formed as a 3D Box 3, for example a 3D Box 3 comprising at least one camera for registering the hand H of a user N.

The apparatus 6 further comprises a control means 6.4 for determining a feedback on an interaction gesture executed by the user N. The control means 6.4 may further control the operation of the reproducing means 6.1. The control means 6.4 may for example be formed as a microcontroller or as a general-purpose processor.

The apparatus 6 further comprises a storage means 6.3 for storing at least one feedback on an interaction gesture executed by the user N. The storage means 6.3 may for example be formed as a non-volatile memory such as an Electrically Erasable Programmable Read-Only Memory (EEPROM) or a Flash memory.

The reproducing means 6.1, the input means 6.2 and the storage means 6.3 are each connected with the control means 6.4. The connection between the control means 6.4 and the reproducing means 6.1 is adapted to transfer information at least in the direction from the control means 6.4 towards the reproducing means 6.1. The connection between the control means 6.4 and the input means 6.2 is adapted to transfer information at least in the direction from the input means 6.2 towards the control means 6.4. The connection between the control means 6.4 and the storage means 6.3 is adapted to transfer information bidirectionally.

List of Reference Signs

1 vehicle

2 display

3 3D Box

4 light source

5 master instrument

6 apparatus

6.1 reproducing means

6.2 input means

6.3 storage means

6.4 control means

N user

H hand