

Title:
EYE - TRACKING BASED COMMUNICATION
Document Type and Number:
WIPO Patent Application WO/2011/083092
Kind Code:
A1
Abstract:
A method for communicating between a first user and a second user comprises using an eye gaze tracker to track the eye gaze of a first user. A trigger is provided for activation by the first user to determine, using the eye gaze tracker, the part of an object at which the first user is directing his gaze when the trigger is activated. An indicator is applied to indicate the determined part of the object, the indicator being detectable by the second user.

Inventors:
SCANLON PATRICIA (IE)
Application Number:
PCT/EP2011/000052
Publication Date:
July 14, 2011
Filing Date:
January 10, 2011
Assignee:
ALCATEL LUCENT (FR)
SCANLON PATRICIA (IE)
International Classes:
G06F3/01
Domestic Patent References:
WO2004084054A2 (2004-09-30)
Foreign References:
US20060109237A1 (2006-05-25)
ES2302535T3 (2008-07-16)
Other References:
ISTVAN BARAKONYI ET AL: "Cascading Hand and Eye Movement for Augmented Reality Videoconferencing", 3D USER INTERFACES, 2007. 3DUI '07. IEEE SYMPOSIUM ON, IEEE, PISCATAWAY, NJ, USA, 1 March 2007 (2007-03-01), XP031069626, ISBN: 978-1-4244-0907-5
Attorney, Agent or Firm:
COCKAYNE, Gillian et al. (Unit 18 Core 3, Workzone,Innova Business Park,Electric Avenue, Enfield EN3 7XU, GB)
Claims:
Claims

1. A method for communicating between a first user and a second user comprising: using an eye gaze tracker to track the eye gaze of a first user;

providing a trigger for activation by the first user to determine, using the eye gaze tracker, the part of an object at which the first user is directing his gaze when the trigger is activated; and

applying an indicator to indicate the determined part of the object, said indicator being detectable by the second user.

2. The method as claimed in claim 1 and wherein the first and second users are located remote from one another and in contact over a communications link.

3. The method as claimed in claim 1 and wherein the first and second users are present in the same location.

4. The method as claimed in claim 1, 2 or 3 and wherein the object is at least one of: a physical subject; a webpage; a graphical representation; and an image on a display screen.

5. The method as claimed in claim 1, 2, 3 or 4 and wherein the first user views the object and the second user views a representation of the object with the indicator applied to the representation to indicate the part.

6. The method as claimed in claim 5 and wherein the first user views the object on one display device and the second user views the representation on another display device.

7. The method as claimed in claim 1, 2, 3 or 4 and wherein the second user views the object with the indicator applied to the object.

8. The method as claimed in any preceding claim wherein the indicator is at least one of: a visual highlight; an enlargement of an area of an image of the object; and a region delimited by a boundary line.

9. The method as claimed in any preceding claim wherein the trigger is activated by at least one of: pressing a button; voice activity detection; double eye blinking; head movement; and keyword spotting.

10. The method as claimed in any preceding claim and wherein the object is a document stored on a remote server and accessible to a plurality of users and a respective trigger is provided for each user.

11. The method as claimed in any preceding claim wherein the eye gaze tracker is combined with a camera in apparel worn by the first user.

12. The method as claimed in claim 11 and wherein video data from the camera is used to provide an image for the second user and indicator information is combined with the video data.

13. Communication apparatus comprising: an eye gaze tracker; a trigger device; and a processor programmed or configured to perform the method as claimed in any of claims 1 to 12.

Description:
EYE - TRACKING BASED COMMUNICATION

FIELD OF THE INVENTION

The present invention relates to a communication method and apparatus, and more particularly, but not exclusively, to a method and apparatus for use in collaborative working between users located in different places.

BACKGROUND

Two people sitting side by side and discussing an object, such as a physical subject or a document, may collaborate by, for example, gesturing to a part of the object under discussion. Where a larger number of people are involved in such a discussion, there may be problems if not all of the participants are able to see clearly what part of the object has been indicated by a speaker.

There are also difficulties in effectively implementing remote collaboration where participants must communicate over a communications link of some type. If the object under discussion is a physical subject, it may prove difficult to draw the attention of a remote participant to a specific point or area of interest. If video conferencing is employed, a camera can be directed at the subject so as to capture it within the general field of view of the camera.

In another form of remote collaboration, two computer monitors may be set up with a communications link and appropriate software applications so that a person at one location and another at a remote location simultaneously view the same display. One person may move a cursor or highlight text, for example, and the result is viewable by both parties on their respective monitor. One example of a commercially available collaboration tool is "Microsoft NetMeeting". Using such a tool, someone may give a presentation that may be viewed by a remote participant, or the remote participant may watch someone edit a locally stored document. One party may control their mouse to indicate a specific area or item, shown on the monitor, to the remote participant. However, the remote participant may only be able to draw attention to an item by taking over control from the other person, which can be awkward to implement. Where more than two participants are involved in active discussion, this method may prove impracticable. Verbal descriptions may be substituted in an effort to specify or clarify further an item under discussion, and this may lead to imprecision and confusion.

BRIEF SUMMARY

According to a first aspect of the invention, a method for communicating between a first user and a second user comprises using an eye gaze tracker to track the eye gaze of a first user. A trigger is provided for activation by the first user to determine, using the eye gaze tracker, the part of an object at which the first user is directing his gaze when the trigger is activated. An indicator is applied to indicate the determined part of the object, the indicator being detectable by the second user.

The term eye gaze tracker, as used in this specification, includes apparatus of the type which measures the point of gaze ("where we are looking") and apparatus in which the motion of an eye relative to the head is monitored. An eye gaze tracker provides a way to capture the actual part of an object a user is focusing on. There are a number of methods available for measuring eye movements to determine the direction of gaze. One variant uses video images from which the eye position is extracted.

Eye gaze tracking is capable of extremely accurate gaze estimation that can, for example, pinpoint the exact letter in a document that a user is focusing on. Eye gaze tracking is used by market researchers to learn about user preferences or habits for web usability, advertising and marketing. It is used by psychologists to learn more about how we read, how we infer visual information, what stimuli we respond to and the like. Eye gaze technology is used by the disabled to interface with computers instead of using a mouse and keyboard. It is also used to gauge driver alertness and for in-vehicle research and training simulators.

The use of a trigger, so that a part is indicated only when a participant explicitly wishes to illustrate or highlight it, enables, in one embodiment of the invention, all participants to take control and highlight their focal point to all other participants.

In an embodiment of the invention, the object under discussion by a user may be, for example, a physical subject; a webpage; a graphical representation; or an image on a display screen, but there are many other applications where the invention may be applied. For example, a physical object could be a piece of hardware or, in a medical field such as surgery, a body part.

In one embodiment, the first and second users are located remote from one another and in contact over a communications link. In another embodiment, the first and second users are present in the same location, for example both may be attendees at a presentation. The invention is applicable to groups of more than two users, who may be all located remote from one another, all collocated or some may be remote and others may be collocated.

In one embodiment, the trigger is activated by at least one of: pressing a button; voice activity detection; double eye blinking; head movement such as nodding or shaking the head; and keyword spotting. Double eye blinking, that is, blinking twice in rapid succession, may be used to activate the trigger for applications in which hands-free and noise-free operation is desired. For example, it may be beneficial in certain military uses.
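The double-eye-blink modality described above can be sketched as a small state machine that fires when two blink events arrive within a short window. This is an illustrative sketch only: the class name and the 0.5-second interval are assumptions, not values taken from this specification, and a deployed system would tune the timing empirically.

```python
class DoubleBlinkTrigger:
    """Fires when two blinks occur within a short interval.

    The maximum interval between blinks is a hypothetical default;
    real systems would calibrate it per user.
    """

    def __init__(self, max_interval: float = 0.5):
        self.max_interval = max_interval          # seconds between blinks
        self._last_blink: float | None = None     # time of previous blink

    def on_blink(self, timestamp: float) -> bool:
        """Feed each detected blink; return True when the trigger fires."""
        fired = (self._last_blink is not None
                 and timestamp - self._last_blink <= self.max_interval)
        # After firing, reset so a third blink does not immediately re-fire.
        self._last_blink = None if fired else timestamp
        return fired
```

The same interface could wrap the other modalities (button press, voice activity, keyword spotting), each producing a boolean "trigger fired" event that causes the gaze tracker's current fixation to be captured.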

BRIEF DESCRIPTION OF THE DRAWINGS

Some embodiments of the present invention will now be described by way of example only, and with reference to the accompanying drawings, in which:

Figure 1 schematically illustrates an arrangement in accordance with the invention.

DETAILED DESCRIPTION

With reference to Figure 1, a group of people, who may be located in the same room or in different remote locations, are jointly editing or reviewing a document. Each person displays the document on their own hand-held device or laptop 1, 2...n. The document being edited is stored on a remote server 3 that is able to be accessed by all participants. Each person looks at their own device 1, 2...n and has available an eye gaze tracker 1a, 2a...na to determine the direction of their gaze. When a person wishes to draw attention to an item, he activates his trigger 1b, 2b...nb using a button so that the eye gaze tracker captures what item in the document he is focusing on. This item is then automatically highlighted, so that the local and remote audience's attention is drawn to it. Once another person starts speaking, they activate their own trigger in turn so as to highlight the item that they are focusing on. In this way, all participants in the conference are able to take control and highlight the items they are focused on or referring to. In another embodiment, some or all of the participants have a speech detector that is arranged to activate the trigger, instead of requiring a physical activation by a button, say. The speech detector may be set up so as to activate the eye gaze tracker when any speech at all is detected from the user, or when the user says a particular key word.

In another embodiment, a presentation is made to an audience that includes both local and remote participants. The speaker is monitored by an eye gaze tracker to pinpoint what item the speaker is focusing on in a slide, say, when the speaker activates the tracker. This item is then automatically highlighted to alert the local and remote audience as to exactly what is being referred to in the slide, in order to draw the audience's attention to the salient points of the presentation and to avoid confusion when referring to detailed aspects of a graph or diagram.

In another embodiment, there may be a medical need, for example, in which a local participant, such as a local surgeon, doctor, flight attendant or ambulance emergency staff, does not have the expertise to deal with a problem presented to them. They require assistance from a remotely located specialist or expert. For example, the remote expert may be a medical professional or specialist whose direction, guidance and advice is required by a local surgeon who does not have the expertise to deal with all types of surgeries. If a specialist surgeon is required, the specialist may be some distance away from the local surgeon. The expert would then need to travel to the local hospital, or the patient may instead require transportation to the specialist's location. Both options lead to wasted time and may be impracticable depending on the nature of the surgery, the difficulty of transportation and the expert's caseload. By employing the invention, remote expert intervention and/or direction may be provided.

The local surgeon attends to the patient and wears specialized glasses. A small camera is mounted on the glasses and directed so as to capture the local surgeon's field of view. An eye gaze tracker is also mounted on the glasses to capture what the local participant is focusing on within their field of view. The field of view, available as a video stream, is overlaid with the exact area of focus of the user. This area may be, for example, an encircled and/or highlighted area. The video information, including that obtained using the eye tracker, is transmitted to the remote expert and is displayed for them on their computer monitor, showing the local surgeon's field of view with the area of focus overlaid on the image. Using this augmented video, the expert has a greater understanding of what the local surgeon is faced with and is able to provide advice based exactly on what he sees. An eye tracker is included on the remote expert's computer, although alternatively they may rely on a mouse, to capture what part of the image the remote expert is focusing on. This is then relayed back to the local surgeon by overlaying the new highlighted portion on the local surgeon's glasses. In this way, the local surgeon knows exactly what the expert surgeon would like the local surgeon to pay attention to.
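The overlay step described above, marking the tracked area of focus on each video frame before transmission, can be sketched as drawing a filled circular marker at the gaze point. This is an illustrative sketch under simplifying assumptions: the frame is modelled as a plain grid of pixel values rather than a real video buffer, and the function name is hypothetical.

```python
def overlay_gaze_marker(frame, gx, gy, radius=3, marker=255):
    """Return a copy of a frame (a list of rows of pixel values) with a
    filled circle drawn at the tracked gaze point (gx, gy).

    The original frame is left unmodified so the raw video stream
    remains available alongside the augmented one.
    """
    out = [row[:] for row in frame]
    for y in range(len(out)):
        for x in range(len(out[0])):
            # Mark every pixel within `radius` of the gaze point.
            if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2:
                out[y][x] = marker
    return out
```

In a practical system the same operation would be performed on colour frames with a drawing routine from a video library, and the remote expert's return highlight would be overlaid on the local surgeon's glasses in the same way.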

The expert remote participant is also set up to capture their own direction of gaze when triggered, so as to capture what part of the video stream they themselves are focusing on, or exactly where they wish the local participant to look or attend. The remote participant's focal point is relayed back to the non-expert to guide and direct them. This arrangement enables hands-free, specific communication, which is very beneficial in such a scenario.

In another embodiment, local and remote participants collaborate and discuss physical objects. In this case the physical object is located locally. The local participant transmits their augmented field of view, with the focal point or area highlighted, to all remote participants, using the triggered eye tracker to identify a particular area. Remote participants can also have their own focal point or area relayed back to the local participant in a similar manner. The physical object under discussion may be, for example, an engine or a production plant, among many other types of object. Again, this embodiment may be arranged to enable hands-free, specific communication, allowing a local participant to carry out work or adjustment during collaboration.

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.




 