

Title:
SYSTEM AND METHOD OF CAPTURING TRUE GAZE POSITION DATA
Document Type and Number:
WIPO Patent Application WO/2018/170538
Kind Code:
A1
Abstract:
Systems and methods for capturing true gaze position data of a subject (102) located within a monitoring environment (104). A system (100) includes: a plurality of light sources (106-114) positioned at known three dimensional locations within the monitoring environment (104); one or more cameras (117) positioned to capture image data corresponding to images of the eyes of the subject (102); and a system controller (123) that: (i) processes the captured images to determine a gaze position of the subject (102) within the monitoring environment (104); (ii) selectively illuminates respective ones of the light sources (106-114) at respective time intervals to temporarily attract the driver's gaze position towards a currently activated light source; (iii) detects a look event where the subject's gaze position is determined to be at the currently activated light source; (iv) during each look event, records the gaze position as being the known three dimensional location of the currently activated light source; and (v) stores the gaze position data in a database with the image data.

Inventors:
EDWARDS TIMOTHY JAMES HENRY (AU)
NOBLE JOHN (AU)
Application Number:
PCT/AU2018/050248
Publication Date:
September 27, 2018
Filing Date:
March 19, 2018
Assignee:
SEEING MACHINES LTD (AU)
International Classes:
A61B3/113; G06K9/00; B60K37/00; B60R21/015; G06F3/01; G06T7/00
Foreign References:
US20060132319A12006-06-22
JP2003080969A2003-03-19
US20090261979A12009-10-22
US6152563A2000-11-28
US20170039869A12017-02-09
US20140139655A12014-05-22
Other References:
"The human eye can see 'invisible' infrared light", INTERNET ARCHIVE WAYBACK MACHINE, 2 December 2014 (2014-12-02), XP055607972, Retrieved from the Internet [retrieved on 20180508]
Attorney, Agent or Firm:
SHELSTON IP PTY LTD (AU)
Claims:
We claim:

1. A system for capturing true gaze position data of a subject located within a monitoring environment, the system including:

a plurality of light sources positioned at known three dimensional locations within the monitoring environment;

one or more cameras positioned to capture image data corresponding to images of the eyes of the subject;

a system controller configured to:

process the captured images to determine a gaze position of the subject within the monitoring environment;

selectively illuminate respective ones of the light sources at respective time intervals thereby to temporarily attract the driver's gaze position towards a currently activated light source;

detect a look event in which the subject's gaze position is determined to be at the currently activated light source;

during each look event, record the gaze position as being the known three dimensional location of the currently activated light source; and store the gaze position data in a database together with the image data.

2. A system according to claim 1 wherein the light sources are wirelessly controlled.

3. A system according to claim 1 or claim 2 wherein the specific light source activated is based on the subject's current gaze position.

4. A system according to claim 3 wherein the system controller determines a current peripheral region of the subject's field of view from the current gaze position and selects a light source to be activated based on a determination that it is located in the peripheral region.

5. A system according to claim 1 or claim 2 wherein the system controller illuminates the light sources sequentially in a predefined pattern with subsequent illuminated light sources being proximal in location to a preceding illuminated light source.

6. A system according to any one of the preceding claims wherein the system controller detects a look event by detecting a fixation of the subject's gaze position for a predetermined time during a respective time interval.

7. A system according to any one of claims 1 to 5 wherein the system controller detects a look event by detecting a button press by the subject during a respective time interval.

8. A system according to any one of claims 1 to 5 wherein the system controller detects a look event by detecting a voice command issued by the subject during a respective time interval.

9. A system according to any one of the preceding claims wherein the system controller issues an audible alert in conjunction with the illumination of a light source.

10. A system according to any one of the preceding claims wherein, upon detecting a look event, the system controller deactivates the currently illuminated light source.

11. A system according to any one of the preceding claims wherein the gaze position data is stored in conjunction with one or more of ambient light condition data, subject details, time of day and monitoring environment details.

12. A system according to claim 11 wherein the monitoring environment is a vehicle cabin and the gaze position data is stored in conjunction with a speed of the vehicle.

13. A system according to any one of the preceding claims wherein the selective illumination of the light sources is based on a determination of the driver's attention or distraction.

14. A method of capturing true gaze position data of a subject located within a monitoring environment, the method including the steps:

installing a plurality of light sources within the monitoring environment;

registering the three dimensional locations of the light sources within the monitoring environment;

capturing, from one or more cameras, image data corresponding to images of the eyes of the subject;

selectively illuminating respective ones of the light sources at respective time intervals thereby to temporarily attract the driver's gaze position towards a currently activated light source;

detecting a look event being an event when the subject's gaze position is determined to be at the currently illuminated light source;

during each look event, recording the gaze position as being the known three dimensional location of the currently activated light source; and

storing the gaze position data and image data in a database.

15. A method according to claim 14 wherein the step of registering the three dimensional locations of the light sources includes scanning the monitoring environment with a three dimensional imaging device to generate a three dimensional scene of the monitoring environment.

16. A method according to claim 15 wherein the three dimensional imaging device includes a LIDAR device.

17. A method according to claim 15 or claim 16 wherein registering the three dimensional locations of the light sources includes manually designating the light sources within the three dimensional scene.

18. A method according to claim 15 or claim 16 wherein registering the three dimensional locations of the light sources includes automatically designating the light sources within the three dimensional scene by shape recognition of the light sources.

Description:
SYSTEM AND METHOD OF CAPTURING TRUE GAZE POSITION DATA

FIELD OF THE INVENTION

[0001] The present invention relates to eye gaze monitoring systems and in particular to a system for capturing true gaze position test data of a subject. Embodiments of the invention have been particularly developed for driver monitoring systems in vehicles. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.

BACKGROUND

[0002] Any discussion of the background art throughout the specification should in no way be considered as an admission that such art is widely known or forms part of common general knowledge in the field.

[0003] Various systems exist today for monitoring driver characteristics to detect fatigue and awareness. Systems which rely on monitoring eye gaze utilize camera systems for imaging the driver's eyes and image processing algorithms to derive information from the images about the eye gaze direction.

[0004] To assess the performance and robustness of these systems, the algorithms are typically fed test data which comprises data obtained from a variety of different driving conditions and vehicle scenarios. By way of example, it is desirable to assess the performance of eye gaze monitoring systems in conditions including:

> A wide field of view relative to the axis of the camera;

> A range of sunlight conditions;

> Drivers wearing various types of spectacles, and sunglasses with transparency of 45% or more; and

> Drivers having a lazy eye condition.

[0005] The inventors have identified that the degree to which a monitoring system can be assessed is often limited by the accuracy of the test data. The inventors have therefore identified a desire for improved or alternative means of obtaining accurate gaze position test data for driver monitoring systems.

SUMMARY OF THE INVENTION

[0006] It is an object of the present invention, in its preferred forms, to provide more accurate gaze position test data for use in eye gaze monitoring systems.

[0007] In accordance with a first embodiment of the present invention, there is provided a system for capturing true gaze position data of a subject located within a monitoring environment, the system including:

a plurality of light sources positioned at known three dimensional locations within the monitoring environment;

one or more cameras positioned to capture image data corresponding to images of the eyes of the subject;

a system controller configured to:

process the captured images to determine a gaze position of the subject within the monitoring environment;

selectively illuminate respective ones of the light sources at respective time intervals thereby to temporarily attract the driver's gaze position towards a currently activated light source;

detect a look event in which the subject's gaze position is determined to be at the currently activated light source;

during each look event, record the gaze position as being the known three dimensional location of the currently activated light source; and store the gaze position data in a database together with the image data.

[0008] In some embodiments the light sources are wirelessly controlled.

[0009] In some embodiments the specific light source activated is based on the subject's current gaze position. In some embodiments the system controller determines a current peripheral region of the subject's field of view from the current gaze position and selects a light source to be activated based on a determination that it is located in the peripheral region.

[0010] In some embodiments the system controller illuminates the light sources sequentially in a predefined pattern with subsequent illuminated light sources being proximal in location to a preceding illuminated light source.

[0011] In some embodiments the system controller detects a look event by detecting a fixation of the subject's gaze position for a predetermined time during a respective time interval. In other embodiments the system controller detects a look event by detecting a button press by the subject during a respective time interval. In further embodiments the system controller detects a look event by detecting a voice command issued by the subject during a respective time interval.

[0012] In some embodiments the system controller issues an audible alert in conjunction with the illumination of a light source. In some embodiments, upon detecting a look event, the system controller deactivates the currently illuminated light source.

[0013] In some embodiments the gaze position data is stored in conjunction with one or more of ambient light condition data, subject details, time of day and monitoring environment details.

[0014] In one embodiment the monitoring environment is a vehicle cabin and the gaze position data is stored in conjunction with a speed of the vehicle.

[0015] In some embodiments the selective illumination of the light sources is based on a determination of the driver's attention or distraction.

[0016] In accordance with a second embodiment of the present invention, there is provided a method of capturing true gaze position data of a subject located within a monitoring environment, the method including the steps:

installing a plurality of light sources within the monitoring environment;

registering the three dimensional locations of the light sources within the monitoring environment;

capturing, from one or more cameras, image data corresponding to images of the eyes of the subject;

selectively illuminating respective ones of the light sources at respective time intervals thereby to temporarily attract the driver's gaze position towards a currently activated light source;

detecting a look event being an event when the subject's gaze position is determined to be at the currently illuminated light source;

during each look event, recording the gaze position as being the known three dimensional location of the currently activated light source; and

storing the gaze position data and image data in a database.

[0017] In some embodiments the step of registering the three dimensional locations of the light sources includes scanning the monitoring environment with a three dimensional imaging device to generate a three dimensional scene of the monitoring environment. In one embodiment the three dimensional imaging device includes a LIDAR device.

[0018] In one embodiment registering the three dimensional locations of the light sources includes manually designating the light sources within the three dimensional scene. In another embodiment registering the three dimensional locations of the light sources includes automatically designating the light sources within the three dimensional scene by shape recognition of the light sources.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] Preferred embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings in which:

Figure 1 is a perspective view of an interior of a vehicle illustrating a driver monitoring camera and exemplary placement of light emitting diodes (LEDs) throughout the vehicle interior;

Figure 2 is a schematic functional diagram of a system for capturing true gaze position data of a subject;

Figure 3 is a perspective driver's view of the vehicle of Figure 1 illustrating a gaze monitoring field of view of the forward scene;

Figure 4 is a two dimensional trace of a driver's gaze position across the monitoring field of view of Figure 3 and a sequence of LED activation;

Figure 5 illustrates a pair of graphs of X and Y gaze position data extracted from the trace of Figure 4; and

Figure 6 is a process flow diagram illustrating a method of capturing true gaze position data.

DETAILED DESCRIPTION

System overview

[0020] Referring initially to Figure 1, described herein is a system 100 for capturing true gaze position data of a subject, in the form of driver 102, located within a monitoring environment in the form of vehicle 104. Although the invention will be described herein with reference to monitoring a driver of a vehicle, it will be appreciated that the system is equally applicable to monitoring subjects in broader monitoring environments such as in vehicle or aircraft training facilities and in air traffic control facilities.

[0021] As used herein, the term "gaze position" refers to a position in the vehicle cabin (or monitoring environment generally) where the driver is looking, and is derived from the gaze direction ray. The gaze direction ray has its origin at the mid point between the eyes (MPBE) of the driver and is directed toward the point the driver is observing. This gaze direction ray is not the same as the gaze vector from either eye, but is a synthetic estimate derived from either or both eyes, depending on measurement confidence, eye visibility and other information.
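By way of illustration only, the sketch below shows one way the relationship between a gaze direction ray and a known three dimensional target can be expressed. It is a minimal sketch assuming a numpy-based convention in which all points share one coordinate frame; the function name and return values are assumptions of the example and do not form part of the specification.

```python
import numpy as np

def gaze_position_on_ray(mpbe, gaze_dir, target):
    """Project a known 3D target (e.g. an LED) onto the gaze direction ray.

    mpbe     : (3,) ray origin - the mid point between the eyes
    gaze_dir : (3,) estimated gaze direction (need not be unit length)
    target   : (3,) known three dimensional location in the same frame

    Returns the point on the ray closest to the target (a gaze position
    estimate) and the angular error between ray and target in degrees.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    to_target = target - mpbe
    t = float(np.dot(to_target, gaze_dir))       # signed distance along the ray
    closest = mpbe + t * gaze_dir                # gaze position estimate on the ray
    cos_err = np.dot(to_target, gaze_dir) / np.linalg.norm(to_target)
    angle_err = float(np.degrees(np.arccos(np.clip(cos_err, -1.0, 1.0))))
    return closest, angle_err
```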

[0022] The present invention captures "true" gaze position data which is as close as possible to the driver's actual eye gaze position. As the accuracy of the measurements is limited by the accuracy of the hardware (camera resolution etc.), even a "true" measure of gaze position is still only an estimate. As such, reference to "true" gaze position data in this specification relates to gaze position data that has been calibrated or verified against known reference positions.

[0023] Referring collectively to Figures 1 and 2, system 100 includes a plurality of light sources, in the form of light emitting diodes (LEDs) 106-114, pre-installed at known three dimensional locations within vehicle 104. By way of example, LEDs 106-114 may comprise 10 mm diameter by 10 mm tall cylindrical structures having an orange LED at one end. The end of the LED may have a predefined pattern which shapes the emitted light. The number of LEDs required depends on the application. Each LED preferably includes a unique identifier that is associated with its location within the vehicle.

[0024] Although the light sources are described as being LEDs, it will be appreciated that other types of light sources are applicable such as fluorescent lights, halogen bulbs and incandescent globes. LEDs 106-114 are installed through a process described below and it will be appreciated that the locations of LEDs 106-114 shown are exemplary only. In practice, the LEDs can be installed at any known location within the monitoring environment that is visible to the subject being monitored.

[0025] An infrared camera 117 is positioned to capture images of the eyes of driver 102 at wavelengths in the infrared range. Two horizontally spaced apart infrared illumination devices 119 and 121 are disposed symmetrically about camera 117 to selectively illuminate the driver's face with infrared radiation during image capture by camera 117. Operation in the infrared range reduces distraction to the driver. Use of two spaced apart illumination devices 119 and 121 provides for illumination at different angles which allows for reduction of glare effects as described in PCT Patent Application Publication WO 2016/131075 entitled "Glare Reduction" and assigned to Seeing Machines Limited. It will be appreciated that, in alternative embodiments, system 100 is able to operate using only a single infrared illumination device at the expense of potential performance degradation in the presence of glare.

[0026] Camera 117 is a two dimensional camera having an image sensor that is configured to sense electromagnetic radiation in the infrared range. In other embodiments, camera 117 may be replaced by a single two dimensional camera having depth sensing capability or a pair of like cameras operating in a stereo configuration and calibrated to extract depth. Although camera 117 is preferably configured to image in the infrared wavelength range, it will be appreciated that, in alternative embodiments, camera 117 may image in the visible range.

[0027] As shown in Figure 2, a system controller 123 acts as the central processor for system 100 and is configured to perform a number of functions as described below. Controller 123 is located within the dash of vehicle 104 and may be connected to or integral with the vehicle on-board computer. In another embodiment, controller 123 may be located within a housing or module together with camera 117 and illumination devices 119 and 121. The housing or module is able to be sold as an after-market product, mounted to a vehicle dash and subsequently calibrated for use in that vehicle. In further embodiments, such as flight simulators, controller 123 may be an external computer or unit such as a personal computer.

[0028] Controller 123 may be implemented as any form of computer processing device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. As illustrated in Figure 2, controller 123 includes a microprocessor 124, executing code stored in memory 125, such as random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and other equivalent memory or storage systems as should be readily apparent to those skilled in the art.

[0029] Microprocessor 124 of controller 123 includes a vision processor 127 and a device controller 129. Vision processor 127 and device controller 129 represent functional elements which are performed by microprocessor 124. However, it will be appreciated that, in alternative embodiments, vision processor 127 and device controller 129 may be realized as separate hardware such as microprocessors in conjunction with custom or specialized circuitry.

[0030] Vision processor 127 is configured to process the captured images to determine a three dimensional gaze position of driver 102 within the monitoring environment. To achieve this, vision processor 127 utilizes one or more gaze determination algorithms. This may include, by way of example, the methodology described in US Patent 7,043,056 entitled "Facial Image Processing System" and assigned to Seeing Machines Pty Ltd. Vision processor 127 may also perform various other functions including determining attributes of driver 102 and tracking the driver's head motion. The raw image data, gaze position data and other data obtained by vision processor 127 are stored in memory 125.

[0031] Controller 123 also includes a device controller 129 configured to selectively illuminate respective ones of the LEDs 106-114 at respective time intervals thereby to temporarily attract the driver's gaze position towards a currently activated light source. LEDs 106-114 are preferably wirelessly controlled by device controller 129 through wireless communication such as Bluetooth™ or WiFi™ communication and powered by a small battery. However, in alternative embodiments, LEDs 106-114 may be directly electrically connected to device controller 129. Working in conjunction, device controller 129 and vision processor 127 provide for capturing true gaze position data of driver 102 during ordinary operation of vehicle 104. This operation will be described with reference to Figures 3 to 5.
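Where the LEDs are illuminated sequentially in a predefined pattern with each newly illuminated LED proximal to the preceding one, one simple ordering is a greedy nearest-neighbour sequence over the registered LED positions. The sketch below is illustrative only; the function name and the dictionary representation of LED locations are assumptions of the example, not part of the specification.

```python
import numpy as np

def proximal_activation_order(led_positions, start_id):
    """Order LED ids so that each LED is the nearest unvisited neighbour
    of the previously illuminated LED.

    led_positions : dict mapping LED id -> 3D position (array-like of length 3)
    start_id      : id of the first LED in the pattern
    """
    remaining = {k: np.asarray(v, dtype=float) for k, v in led_positions.items()}
    order = [start_id]
    current = remaining.pop(start_id)
    while remaining:
        next_id = min(remaining,
                      key=lambda i: np.linalg.norm(remaining[i] - current))
        order.append(next_id)
        current = remaining.pop(next_id)
    return order
```

A pattern computed this way could, for example, be stored in memory 125 and replayed by device controller 129 as described above.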

[0032] Figure 3 represents a forward view of driver 102 when seated in a driver's seat of vehicle 104. LEDs 106-114 are visible within a defined two dimensional monitoring field of view 131. Figure 4 illustrates a trace of the recorded X and Y components of gaze position of driver 102 during a period of time in which LEDs 115, 112, 106 and 107 are illuminated sequentially. In practice, the gaze position of driver 102 is tracked in three dimensions with the third dimension representing depth or distance from camera 117. However, for simplicity, only the transverse X and Y dimensions are shown graphically here.

[0033] During operation of vehicle 104, device controller 129 activates camera 117 to capture images of the face of driver 102 in a video sequence. Illumination devices 119 and 121 are alternately activated and deactivated in synchronization with alternate frames of the images captured by camera 117 to illuminate the driver during image capture. At predetermined times, and for predetermined time intervals during vehicle operation, device controller 129 activates predetermined LEDs 106-114 one at a time. During LED illumination, vision processor 127 is configured to look for and detect a "look event" in which the driver's gaze position is determined to be at the currently activated light source. Detection of a look event can be performed in a number of ways as described below.

[0034] In some embodiments, the specific LED activated is based on the driver's current gaze position and is selected so as to minimize distraction to the driver. In these embodiments, vision processor 127 determines a current peripheral region of the driver's field of view from the current gaze position and a driver field of view model. The peripheral region or regions may be defined by a predefined range of distances or angles (having inner and outer bounds) away from the current gaze position. Device controller 129 then selects an appropriate LED to be activated based on a determination that it is located in the peripheral region. A similar process may be performed by selecting an appropriate LED that is proximal to the driver's current gaze position. In other embodiments, device controller 129 illuminates the LEDs sequentially in a predefined pattern stored in memory 125 with subsequent illuminated LEDs being proximal in location to a preceding LED.
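As an illustration of the peripheral-region selection just described, the sketch below models the peripheral region as a band of angular offsets from the current gaze direction and picks an LED inside that band. The angular bounds, the function name and the dictionary of registered LED positions are assumptions of this example rather than values from the specification.

```python
import numpy as np

def select_peripheral_led(mpbe, gaze_dir, led_positions,
                          inner_deg=15.0, outer_deg=40.0):
    """Choose an LED lying within the driver's peripheral field of view.

    The peripheral region is modelled as an annulus of angular offsets
    from the current gaze direction, bounded by inner_deg and outer_deg.
    Returns the id of the candidate closest to the inner bound, or None.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    candidates = []
    for led_id, pos in led_positions.items():
        to_led = np.asarray(pos, dtype=float) - mpbe
        to_led = to_led / np.linalg.norm(to_led)
        offset = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_led), -1.0, 1.0)))
        if inner_deg <= offset <= outer_deg:
            candidates.append((offset, led_id))
    if not candidates:
        return None          # no LED currently falls in the peripheral band
    return min(candidates)[1]
```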

[0035] Vision processor 127 is preferably configured to detect a look event by detecting a "fixation" of the subject's gaze position for a predetermined time during a respective time interval. Gaze fixations are an observable natural phenomenon of the human eye. A gaze fixation may be defined as a stationary glance (or semi-stationary glance to account for small eye movements such as saccades) at a fixed location or region for a predetermined period of time. To account for small eye movements, the size of a fixation region should be defined with an appropriate spatial buffer, which is proportional to the distance to the region within the three dimensional monitoring environment. That is, fixation at a distant point will be subject to spatial fluctuations due to the small eye movements having a greater effect at long distance. Typical timeframes for a gaze fixation are in the range of 0.1 second to several seconds. An example time criterion for detecting a gaze fixation is that the gaze position remains within a predetermined spatial region for a period of greater than 1.5 seconds.
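A minimal sketch of fixation detection along these lines is shown below, assuming a stream of timestamped 3D gaze position samples in the camera frame. The sliding-window logic, the function name and the buffer constants are illustrative assumptions; the specification requires only that the gaze position remain within a spatial region (scaled with distance) for a predetermined time such as 1.5 seconds.

```python
import numpy as np

def detect_fixation(samples, camera_pos, min_duration=1.5,
                    base_radius=0.05, radius_per_metre=0.02):
    """Return (start_time, centroid) of the first gaze fixation, or None.

    samples    : list of (timestamp_s, gaze_position) tuples, timestamps
                 ascending, gaze_position an np.ndarray of shape (3,)
    camera_pos : (3,) camera location, used to scale the spatial buffer
                 with distance so distant fixations tolerate more jitter
    """
    start = 0
    for end in range(len(samples)):
        while start < end:
            window = [p for _, p in samples[start:end + 1]]
            centroid = np.mean(window, axis=0)
            radius = base_radius + radius_per_metre * np.linalg.norm(centroid - camera_pos)
            if all(np.linalg.norm(p - centroid) <= radius for p in window):
                break                  # window is spatially tight enough
            start += 1                 # otherwise drop the oldest sample
        if samples[end][0] - samples[start][0] >= min_duration:
            window = [p for _, p in samples[start:end + 1]]
            return samples[start][0], np.mean(window, axis=0)
    return None
```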

[0036] Referring now to Figure 5, there is illustrated a pair of graphs of X and Y gaze position data corresponding to the sequence of eye gaze recorded in Figure 4. As illustrated, fixations of gaze position can be observed in either or both the X and Y gaze position data as flat segments. Similar patterns can be observed in depth component data (Z direction). Fixations are detected to occur during periods where a particular LED is illuminated and also when the driver returns his/her gaze to viewing the center of the forward road scene for a period of time. Fixations may also occur when the driver observes other objects or events and it is important to distinguish fixations on the illuminated LED from fixations on other regions. To achieve this, vision processor 127 is configured to only consider the gaze fixations that occur during illumination of an LED (time filtering) and to also compare the location of the gaze fixation to that of the position of the illuminated LED (position filtering). Using predetermined timing and position criteria, erroneous fixations can be minimized. Example time criteria include only considering fixations which are detected to commence at least 0.1 seconds after an LED is illuminated.
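The time and position filtering described above could be expressed, for example, as a simple validation step applied to each candidate fixation. The 0.1 second delay follows the example criterion in the text; the 0.15 metre position tolerance, the function name and the tuple format are assumptions of this sketch.

```python
import numpy as np

def is_look_event(fixation, led_on_time, led_off_time, led_position,
                  min_delay=0.1, max_offset=0.15):
    """Accept a fixation as a look event on the illuminated LED.

    Time filtering     : the fixation must commence at least min_delay
                         seconds after the LED is illuminated and before
                         it is extinguished.
    Position filtering : the fixation centroid must lie within max_offset
                         metres of the LED's known 3D location.
    fixation : (start_time_s, centroid) as produced by detect_fixation().
    """
    start_time, centroid = fixation
    in_time = led_on_time + min_delay <= start_time <= led_off_time
    near_led = np.linalg.norm(np.asarray(centroid) - np.asarray(led_position)) <= max_offset
    return in_time and near_led
```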

[0037] Additionally or alternatively, vision processor 127 is able to detect or confirm a look event by detecting a button press by driver 102 during a respective time interval. In this scenario, driver 102 is provided with one or more button inputs to confirm they are currently glancing at the illuminated LED. The buttons may be located on the steering wheel or at another suitable, convenient location within vehicle 104.

[0038] Additionally or alternatively, vision processor 127 may detect a look event by detecting a voice command issued by the driver during a respective time interval. In this scenario, driver 102 glances at the illuminated LED and issues a predetermined voice command such as "Located" while holding their gaze steady. To facilitate this, system 100 includes a microphone (not illustrated) and voice recognition software to receive and detect the issued voice command.

[0039] So that the driver is promptly alerted to the illumination of an LED, controller 123 may include a speaker that issues an audible alert whenever an LED is illuminated.

[0040] During each detected look event, vision processor 127 records the three dimensional gaze position as being the known three dimensional location of the currently activated LED. The three dimensional gaze position data is stored in memory 125 in conjunction with raw image data of the driver's face.

[0041] Once vision processor 127 detects that the driver has glanced at the illuminated LED, the system controller deactivates the currently illuminated LED and a further LED can be illuminated at a later time.

[0042] The determination as to when to illuminate an LED may simply be based on a predetermined timing sequence (e.g. illuminate one LED every minute or every ten minutes). However, in some embodiments, system 100 may incorporate more advanced considerations such as determining a current level of the driver's attention or distraction (or driver workload). This may entail determining the driver's current workload based on current vehicle events. By way of example, if the vehicle is determined to be turning a corner (detected by a wheel turn and/or turn signal activation) then controller 123 may decide not to illuminate an LED so as to avoid further distraction to the driver. An assessment of driver workload, driver attention or driver distraction may also utilize additional inputs such as vehicle speed data, GPS location, on-board accelerometers and current light condition data. In one embodiment, system 100 may incorporate or receive data from a forward facing dash camera system that is capable of imaging and detecting potential hazards such as pedestrians and nearby vehicles. Such detection of hazards can also be used in the determination by controller 123 as to when to illuminate an LED.
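A workload-based gate of the kind described could look something like the rule-based sketch below. The input signals mirror those named in the paragraph above, but the thresholds, function name and return convention are assumptions of this example rather than requirements of the specification.

```python
def safe_to_illuminate(wheel_angle_deg, turn_signal_on, speed_kmh,
                       hazard_detected, max_wheel_angle=10.0, max_speed_kmh=110.0):
    """Decide whether illuminating an LED now would add unacceptable
    distraction, using simple rules over current vehicle events."""
    if turn_signal_on or abs(wheel_angle_deg) > max_wheel_angle:
        return False     # vehicle appears to be turning a corner
    if hazard_detected:
        return False     # dash camera system has flagged a nearby hazard
    if speed_kmh > max_speed_kmh:
        return False     # defer the calibration stimulus at very high speed
    return True
```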

[0043] The three dimensional gaze position data is stored in memory 125 together with the raw image data. The collected gaze position and image data is preferably stored in conjunction with various other relevant data which indicates the current vehicle and driver conditions under which the data was obtained. Such relevant data includes a current ambient light condition, details of the driver (e.g. age, sex, wearing glasses or not), time of day and monitoring environment details (e.g. make and model of vehicle), speed of the vehicle, GPS location of the vehicle, current driver workload, driver distraction or attention level and potential nearby hazards.
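One possible shape for a stored record combining the "truth" gaze position with this contextual data is sketched below. The field names and types are illustrative assumptions; the specification does not prescribe a particular database schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TrueGazeRecord:
    """One look event: the known LED location recorded as the true gaze
    position, plus the conditions under which it was captured."""
    timestamp: float
    led_id: str
    gaze_position: Tuple[float, float, float]   # known 3D LED location (metres)
    image_frames: bytes                         # raw image data of the driver's face
    ambient_light: Optional[float] = None       # ambient light condition data
    driver_age: Optional[int] = None
    driver_sex: Optional[str] = None
    wearing_glasses: Optional[bool] = None
    time_of_day: Optional[str] = None
    vehicle_make_model: Optional[str] = None
    vehicle_speed_kmh: Optional[float] = None
    gps_location: Optional[Tuple[float, float]] = None
    driver_workload: Optional[str] = None
    nearby_hazards: Optional[str] = None
```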

Installing the system

[0044] Installation of system 100 involves two primary steps: 1) installing the LEDs within the vehicle (or equivalent monitoring environment); and 2) registering the three dimensional locations of the light sources within the monitoring environment.

[0045] The first step involves manually fitting the LEDs at the desired locations within the vehicle using an adhesive or other fixing means. At this point, the LEDs may be synced with controller 123 through the relevant Bluetooth™ or other wireless connection. The LEDs may be installed in a temporary manner so as to be capable of being subsequently removed, or may be installed permanently.

[0046] The second step is performed by first scanning the monitoring environment with a three dimensional imaging device such as a LIDAR device to generate a high-resolution three dimensional model of the monitoring environment. The model is then loaded into CAD software and a system operator designates the location of each LED in the three dimensional model scene, as well as the three dimensional position of the driver imaging camera and/or a known three dimensional location in the vehicle cabin as a reference point. In one embodiment, this designation is performed manually (such as by clicking on the LEDs in the CAD software). In another embodiment, this designation is performed automatically by running a shape recognition algorithm on the three dimensional image data to recognize the light sources. In this embodiment, each light source may include a unique pattern such as a 2D barcode which is recognized by the shape recognition algorithm.
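Once the LED locations and the camera position have been designated in the three dimensional scene, the registration can be completed by expressing the LED locations in a common reference frame. The sketch below assumes, purely for illustration, that the camera's pose in the scan frame is available as a rotation and a translation and that the system works in the camera's coordinate frame; none of these conventions are mandated by the specification.

```python
import numpy as np

def leds_in_camera_frame(led_points_scan, camera_pos_scan, scan_to_camera_rotation):
    """Convert LED locations designated in the LIDAR/CAD scene into the
    driver imaging camera's coordinate frame.

    led_points_scan         : dict of LED id -> (3,) point in the scan frame
    camera_pos_scan         : (3,) designated camera position in the scan frame
    scan_to_camera_rotation : (3, 3) rotation taking scan-frame axes to camera axes
    """
    R = np.asarray(scan_to_camera_rotation, dtype=float)
    t = np.asarray(camera_pos_scan, dtype=float)
    return {led_id: R @ (np.asarray(p, dtype=float) - t)
            for led_id, p in led_points_scan.items()}
```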

[0047] In an alternative embodiment, the registration of the three dimensional positions of the LEDs may be performed entirely manually using a tape measure or the like.

Method of capturing true gaze position test data

[0048] The above described system allows for implementing a method 600 of capturing true gaze position test data as illustrated in Figure 6.

[0049] At installation step 601, a plurality of LEDs is installed within the monitoring environment. At registration step 602, the three dimensional locations of the LEDs are registered within the monitoring environment using a three dimensional scanning device as mentioned above. During operation of the vehicle, at step 603, image data corresponding to images of the eyes of the driver is captured from one or more driver imaging cameras. At step 604, respective ones of the LEDs are selectively illuminated (one at a time) at respective time intervals thereby to temporarily attract the driver's gaze position towards a currently activated light source. At step 605, a look event is detected as being an event when the driver's gaze position is determined to be at the currently activated LED. At step 606, during each look event, the driver's gaze position is recorded as being the known three dimensional location of the currently activated LED. Finally, at step 607, the gaze position data and image data are stored in a database.
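To show how steps 603 to 607 might fit together at run time, the following sketch wires the earlier helper functions into a single capture cycle. The `controller` and `db` objects and all of their methods are hypothetical placeholders standing in for the camera, gaze tracker, wireless LED control and database described above; this is a structural sketch under those assumptions, not an implementation of the claimed method.

```python
import time

def run_capture_cycle(controller, db, led_order, on_time_s=5.0, interval_s=60.0):
    """Illustrative top-level loop for method 600 (steps 603-607)."""
    for led_id in led_order:
        if not controller.safe_to_illuminate():       # workload gate (paragraph [0042])
            continue
        on_time = time.time()
        controller.illuminate(led_id)                 # step 604: attract the gaze
        fixation = controller.wait_for_fixation(timeout=on_time_s)   # step 605
        controller.extinguish(led_id)
        off_time = time.time()
        if fixation and controller.is_look_event(fixation, led_id, on_time, off_time):
            record = controller.make_record(led_id, fixation)        # step 606
            db.store(record)                                         # step 607
        time.sleep(interval_s)                        # space out the stimuli
```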

Conclusions

[0050] It will be appreciated that the foregoing describes improved systems and methods of capturing true gaze position data of a subject located within a monitoring environment.

[0051] The obtained raw image data is useful as test data to test the performance of gaze tracking algorithms. The obtained three dimensional gaze position data represents reference "truth" data that can be used to compare determined gaze position with actual gaze position for performance and calibration of the algorithms. The resulting dataset obtained by system 100 can be licensed or sold to interested parties seeking to test their gaze tracking algorithms.

INTERPRETATION

[0052] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing," "computing," "calculating," "determining", "analyzing" or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.

[0053] Reference throughout this specification to "one embodiment", "some embodiments" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases "in one embodiment", "in some embodiments" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.

[0054] As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

[0055] In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.

[0056] It should be appreciated that in the above description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, Figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this disclosure.

[0057] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the disclosure, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

[0058] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

[0059] Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Coupled" may mean that two or more elements are either in direct physical, electrical or optical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.

[0060] Thus, while there has been described what are believed to be the preferred embodiments of the disclosure, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the disclosure, and it is intended to claim all such changes and modifications as fall within the scope of the disclosure. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present disclosure.