

Title:
WEARABLE LASER POINTER FOR AUTOMOTIVE USER INTERFACE
Document Type and Number:
WIPO Patent Application WO/2019/069249
Kind Code:
A1
Abstract:
The proposed invention has a laser pointer based pointing device fixed or attached to one of the fingers of the user and an eye-gaze tracker attached to a display device. The pointing device is used to point at an intended target on the display device. The eye-gaze tracker is activated once it detects the user's eyes while the laser beam is being pointed, and is used as a switch to make a selection when the eye gaze of the user remains fixed on the intended target beyond a predefined time period and within a boundary of visual angle.

Inventors:
GOWDHAM PRABHAKAR P G (IN)
BISWAS PRADIPTA (IN)
Application Number:
PCT/IB2018/057680
Publication Date:
April 11, 2019
Filing Date:
October 03, 2018
Assignee:
INDIAN INST SCIENT (IN)
International Classes:
B60Q9/00; B60R21/00; G06F3/01
Foreign References:
US20160267336A12016-09-15
US20130162632A12013-06-27
Attorney, Agent or Firm:
KHURANA & KHURANA, ADVOCATES & IP ATTORNEYS (IN)
Claims:
We claim:

1. A system for user interaction with an infotainment system in a vehicle, comprising: a display having a plurality of options for selection; a wearable laser pointer for a user to point a light beam on the display; an eye gaze tracker for tracking eye gaze direction of the user; a camera to capture images of the display; and a processor system comprising a memory storing processor-executable instructions; and a processor configured to execute the processor-executable instructions to: ascertain, from the images captured by the camera, a location on the display where the light beam is pointed; ascertain a location on the display where eye gaze of the user is directed; and select an intended option out of the plurality of options available on the display, based on a combination of the user pointing the light beam on the option and the user directing his eye gaze on that option.

2. The system as claimed in claim 1, wherein the processor system is configured to ascertain if the light beam is pointed at the intended option, on which the eye tracker detects eyes, based on which the eye gaze tracker gets activated.

3. The system as claimed in claim 2, wherein on activation of the eye tracker, the processor system is configured to detect if eye gaze of the user is directed to the intended option for a time beyond a predefined time period and within a boundary of visual angle, and select the intended option if eye gaze of the user is directed to the intended option for a time beyond a predefined time period and within a boundary of visual angle.

4. The system as claimed in claim 3, wherein the predefined time period is 300 ms.

5. The system as claimed in claim 1, comprising one or more interfaces, the one or more interfaces configured to carry out one or more of: calibration of the system, activation/deactivation of the system, changing the predefined time period, and adjusting the exposure and focus of the camera.

6. The system as claimed in claim 1, wherein the calibration of the system includes calibration of the camera and calibration of the eye gaze tracker.

7. The system as claimed in claim 6, wherein calibration of the camera includes feeding location of corner points of the display to an image processing algorithm in the processor system.

8. The system as claimed in claim 1, wherein the display has a reflecting surface for reflection of the light beam from the display to enable the camera to capture location of the light beam on the display.

9. The system as claimed in claim 1, wherein the display includes a transparent glass sheet pasted on the surface to make the surface reflecting.

10. The system as claimed in claim 1, wherein the wearable laser pointer is fixed with a finger of the user.

11. A method for user interaction with an infotainment system in a vehicle, comprising: capturing, using a camera, images of a display having a plurality of options; ascertaining, from the captured images using a processor system, a location on the display where a light beam, originating from a laser pointer configured with a finger of a user, is pointed; ascertaining, using the processor system, if the light beam has been pointed on an intended option during which the eye gaze tracker detects eyes; activating an eye gaze detector if the light beam has been pointed on the intended option during which the eye gaze tracker detects eyes; ascertaining, using the processor system based on inputs from the eye gaze detector, a location on the display where eye gaze of the user is directed; ascertaining, using the processor system, if eye gaze of the user has been directed to the intended option beyond a predefined time period and within a boundary of visual angle; and selecting the intended option if it is ascertained that eye gaze of the user has been directed to the intended option beyond the predefined time period and within the boundary of visual angle.

12. The method as claimed in claim 11, comprising calibration of the camera and the eye gaze tracker using one or more interfaces.

13. The method as claimed in claim 12, wherein the calibration of the camera includes feeding location of corner points of the display to an image processing algorithm in the processor system.

14. The method as claimed in claim 12, comprising changing the predefined time period.

15. The method as claimed in claim 12, comprising adjusting exposure and focus of the camera.

Description:
WEARABLE LASER POINTER FOR AUTOMOTIVE USER INTERFACE

TECHNICAL FIELD

[0001] The present disclosure relates to systems and methods for allowing drivers to interact with an automotive user interface using a wearable laser pointer.

BACKGROUND

[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.

[0003] The rapid increase of new information systems and functionalities that can be used within vehicles can increase driver distraction while driving. Drivers are often unaware of the effects that this distraction can have on their own abilities for vehicle control. Driver behavior can indicate whether the driver is distracted while driving. Controlling vehicle systems, for example, a display device within the vehicle can be based on driver behavior.

[0004] To assist the driver in such behavior, several solutions have been proposed in the state of the art to track hand or finger movement or to develop alternate pointing modalities that avoid physically touching the display device. For example, known solutions include three different pointing techniques for operating secondary tasks while driving the vehicle: infrared (IR) sensor based hand tracking systems, wearable devices, and remote controllers.

[0005] The IR sensor based hand tracking systems track human finger position; however, such systems fail in terms of tracking accuracy and under different lighting conditions of the environment. Further, the wearable devices use sensors and processing machines. The sensors are attached to the human finger and the movement of the sensors is tracked by the connected processing machine. Such wearable devices require the driver to stretch his/her arm to reach near the screen of the display device. Yet further, a remote controller with buttons or a touch facility may be provided for operating the display device; however, remote controllers are generally prone to unintended selection of icons on the display device due to unintended movement of hands or fingers. Also, the remote controller requires the driver to stretch his/her arm toward the display device. Such movement of the arms or hands can degrade the driver's performance while operating secondary tasks related to the display device.

[0006] Other similar solutions are also known in the art. For example, gaze controlled interfaces for the automotive environment provide an eye gaze controlled smart display for passengers of a vehicle. Passengers of the vehicle may point at and select icons on the display device by staring at the appropriate portion of a screen of the display device.

[0007] Also, a graphic projection display for drivers is known for showing objects on the road ahead. The driver may use the objects shown on the display for utilizing different input modalities including eye gaze. However, the graphic projection display does not address the concern of improving the accuracy of gaze tracking, and is not intended to operate a display device (infotainment system mounted on the dashboard).

[0008] Further, in the state of the art, controlled display devices with touch screen capabilities are reported to have higher reaction times for the gaze controlled interface. Also, such controlled display devices do not propose any intelligent algorithm to reduce pointing and selection times for the gaze controlled interface.

[0009] In some embodiments, the numerical parameters set forth in the written description and attached figures are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.

[0010] As used in the description herein and throughout the figures that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.

[0011] The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g. "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.

OBJECTS OF THE INVENTION

[0012] It is an object of the present disclosure to provide a system and method that uses a laser pointer for pointing at an intended target and eye gaze for selection of the intended target on display systems such as infotainment systems.

[0013] It is a further object of the present disclosure to provide systems and methods that can be used in situations where drivers need to reach spaces out of their reach while performing secondary tasks during driving.

[0014] It is a yet further object of the present disclosure to provide systems and methods that would potentially reduce the movement of the hands of the driver to physically touch an intended target on the display device mounted in the dashboard of the vehicle, so as to improve the selection time of the intended target on the display device.

[0015] It is a yet further object of the present disclosure to provide systems and methods that would help elderly drivers, who have a reduced range of movement at the shoulder due to physical impairment, to perform secondary tasks while driving a vehicle.

SUMMARY

[0016] This summary is provided to introduce concepts related to allowing drivers to interact with an automotive user interface using a wearable laser pointer. The concepts are further described below in the detailed description. This summary is not intended to identify key features or essential features of the present subject matter, nor is it intended to be used to limit the scope of the present subject matter.

[0017] Aspects of the present disclosure relate to a system and method for user interaction with an infotainment system in a vehicle. The proposed system and method are based on a laser pointer fixed to one of the fingers of a user, such as the driver of the vehicle, and an eye gaze tracker. The user may use the laser pointer to point at an option on the display which he intends to select. Once the user points at an intended target, such as a selectable option, on the display, the eye gaze tracker tracks the eye gaze of the user to obtain input for selection of the option. Thus, the eye gaze tracker acts as a switch to make a selection of the intended option on the display, wherein the intended option is selected if the eye gaze of the user is directed towards the option for a time beyond a predefined time period and is within a predefined visual angle. With such selection, the user does not need to stretch his/her arm towards the display device and physically touch the display to select a target on the display device. As the proposed system reduces the amplitude of hand movement, the response time for interacting with the infotainment system is automatically reduced, improving the safety of drivers by reducing the time spent performing secondary tasks.

[0018] In an aspect, the proposed system for user interaction with an infotainment system in a vehicle comprises a display having a plurality of options for selection; a wearable laser pointer for a user to point a light beam on the display; an eye gaze tracker for tracking eye gaze direction of the user; and a camera to capture images of the display.

[0019] In an aspect, the proposed system also includes a processor system comprising a memory storing processor-executable instructions; and a processor, wherein the processor system is configured to execute the processor-executable instructions to: ascertain, from the images captured by the camera, a location on the display where the light beam is pointed; ascertain a location on the display where eye gaze of the user is directed; and select an option out of the plurality of options available on the display, based on a combination of the user pointing the light beam on the option and the user directing his eye gaze on that option within a boundary of visual angle beyond a time period.

[0020] In an aspect, the processor system is configured to ascertain if the light beam is pointed at the option during which the eye gaze tracker detects eyes, based on which the processor system activates the eye tracker.

[0021] In an aspect, on activation of the eye tracker, the processor system is configured to detect if eye gaze of the user is directed to the intended option for a time beyond a predefined time period and within a boundary of visual angle, and to select the intended option.

[0022] In an aspect, the system also comprises one or more interfaces, the one or more interfaces configured to carry out one or more of: calibration of the system, activation/deactivation of the system, changing the predefined time period, and adjusting the exposure and focus of the camera.

[0023] In an aspect, the calibration of the system includes calibration of the camera and calibration of the eye gaze tracker.

[0024] In an aspect, calibration of the camera includes feeding location of corner points of the display to an image processing algorithm in the processor system.

[0025] In an aspect, the display has a reflecting surface for reflection of the light beam from the display to enable the camera to capture location of the light beam on the display.

[0026] In an aspect, the display includes a transparent glass sheet pasted on the surface to make the surface reflecting.

[0027] In an aspect, the wearable laser pointer is fixed with a finger of the user.

[0028] An aspect of the present disclosure relates to a method for user interaction with an infotainment system in a vehicle, the proposed method comprising the steps of: (a) capturing, using a camera, images of a display having a plurality of options; (b) ascertaining, from the captured images using a processor system, a location on the display where a light beam, originating from a laser pointer configured with a finger of a user, is pointed; (c) ascertaining, using the processor system, if the light beam has been pointed on an intended option during which the eye gaze tracker detects eyes; (d) activating, if the light beam has been pointed on the intended option during which the eye gaze detector detects eyes, an eye gaze detector; (e) ascertaining, using the processor system based on inputs from the eye gaze detector, a location on the display where eye gaze of the user is directed; (f) ascertaining, using the processor system, if eye gaze of the user has been directed to the intended option beyond a predefined time period and within a boundary of visual angle; and (g) selecting the option if it is ascertained that eye gaze of the user has been directed to the intended option beyond the predefined time period and within the boundary of visual angle.

[0029] In an aspect, the method comprises the step of calibrating the camera and the eye gaze tracker using one or more interfaces.

[0030] In an aspect, the calibration of the camera includes feeding location of corner points of the display to an image processing algorithm in the processor system.

[0031] In an aspect, the method also comprises the step of changing the predefined time period.

[0032] In an aspect, the method further comprises the step of adjusting the exposure and focus of the camera.

BRIEF DESCRIPTION OF THE DRAWINGS

[0033] The illustrated embodiments of the subject matter will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and processes that are consistent with the subject matter as described herein, wherein:

[0034] FIG. 1 illustrates exemplary system architecture showing functional modules for a system in accordance with an embodiment of the present disclosure;

[0035] FIG. 2 illustrates an exemplary environment for implementation of the proposed system;

[0036] FIG. 3A illustrates an exemplary laser pointer based pointing device mounted on an index finger along with a battery and switch;

[0037] FIG. 3B illustrates an exemplary three dimensional representation of the laser pointer based pointing device;

[0038] FIG. 4 illustrates a graphical user interface for calibration and activation of the proposed system;

[0039] FIG. 5 illustrates a first exemplary setup for an exemplary implementation of the proposed system;

[0040] FIG. 6 illustrates an exemplary graphical representation of the number of correct selections for each colour space component for the first exemplary setup;

[0041] FIG. 7 illustrates an exemplary graphical representation of the selection time of colour space components for the first exemplary setup;

[0042] FIG. 8 illustrates exemplary screenshots of a centre selection (left) and a target selection (right) for a second exemplary setup for an exemplary evaluation of modalities for their performances;

[0043] FIG. 9 illustrates an exemplary graphical representation of comparison between pointing and selection times for different devices, for the second exemplary setup;

[0044] FIG. 10 illustrates an exemplary graphical representation of comparison of indices of performance for different devices, for the second exemplary setup;

[0045] FIG. 11 illustrates an exemplary graphical representation of comparison of cognitive load in terms of Task Load Index (TLX) scores, for the second exemplary setup;

[0046] FIG. 12 illustrates an exemplary graphical representation of comparison of subjective preference in terms of System Usability Scale (SUS), for the second exemplary setup;

[0047] FIG. 13 illustrates an exemplary graphical representation of comparison of percent of wrong selection, for the second exemplary setup;

[0048] FIG. 14 illustrates an exemplary graphical representation of calculation of driving performance, for the third exemplary setup;

[0049] FIG. 15 illustrates an exemplary graphical representation of comparison of driving performance in terms of mean deviation from the lane, for the third exemplary setup;

[0050] FIG. 16 illustrates an exemplary graphical representation of comparison of driving performance in terms of the driving speed, for the third exemplary setup;

[0051] FIG. 17 illustrates an exemplary graphical representation of comparison of the driving performance in terms of the standard deviation of steering angle, for the third exemplary setup;

[0052] FIG. 18 illustrates an exemplary graphical representation of comparison of the average selection time of the targets on the dashboard display, for the third exemplary setup;

[0053] FIG. 19 illustrates an exemplary graphical representation of comparison of cognitive loads in terms of TLX scores, for the third exemplary setup; and

[0054] FIG. 20 illustrates an exemplary graphical representation of comparison of system preference in terms of SUS scores, for the third exemplary setup.

[0055] FIG. 21 illustrates an exemplary flow diagram for the proposed method for interfacing with the infotainment system of a vehicle, in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION

[0056] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.

[0057] Various terms as used herein are shown below. To the extent a term is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.

[0058] Generally, drivers need to undertake secondary tasks of operating display systems, such as infotainment systems, navigation systems, and the like, while driving a vehicle. Such secondary tasks may distract the driver from the primary task of driving the vehicle. Although there are a number of methods for interacting with display systems, including voice based or gesture recognition, the predominant method of interaction is the touch-input modality. For the touch-input modality, the driver has to stretch his/her arm to the touch screen and tap on it to make a selection.

[0059] In modern cars, infotainment systems may employ buttons with a Liquid Crystal Display (LCD) and a touch screen for interaction. In addition, higher versions of infotainment systems may include improved interaction technologies, such as voice recognition, gesture recognition, haptic feedback, personalized instrument displays, predictive models for assisting drivers in parking, hand gesture tracking based input, and so forth.

[0060] However, such existing infotainment systems have the following issues: a) the accuracy of the digital video interface (DVI) changes for different languages and affective states, b) the need to remember a set of gestures or a screen sequence, and c) an intelligent prediction algorithm cannot improve the latency of an infrared sensor.

[0061] To address these issues, the present disclosure proposes a system and a method that utilizes a pointing device (laser pointer) and an eye-gaze system. The pointing device may point towards an intended target on a display device (infotainment system), and the eye-gaze system may select the intended target based on aimed pointing by the pointing device.

[0062] With the proposed system and method, a driver of a vehicle need not stretch his/her arm to the display device to select any target. Further, since the proposed system uses a separate method for pointing towards a target and a separate method for selecting the target, unintended selections of any target on the display device may be reduced. Yet further, the proposed system uses the eye-gaze system or tracker only for switching purposes, so that a low cost eye-gaze tracker with limited accuracy may perform tasks that are sufficient for the operation of the proposed system.

Exemplary Embodiments:

[0063] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.

[0064] Embodiments of the present disclosure include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, and firmware and/or by human operators.

[0065] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present disclosure with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present disclosure may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the present disclosure could be accomplished by modules, routines, subroutines, or subparts of a computer program product.

[0066] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.

[0067] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This present disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the present disclosure to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the present disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).

[0068] Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying the present disclosure. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named element.

[0069] Various embodiments are further described herein with reference to the accompanying figures. It should be noted that the description and figures relate to exemplary embodiments, and should not be construed as a limitation to the subject matter of the present disclosure. It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the subject matter of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the subject matter of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof. Yet further, for the sake of brevity, operation or working principles pertaining to the technical material that is known in the technical field of the present disclosure have not been described in detail so as not to unnecessarily obscure the present disclosure.

[0070] FIG. 1 illustrates exemplary functional modules of the proposed system 100 in accordance with an exemplary embodiment of the present disclosure. In an example, the system 100 may include a display device (infotainment system) 102 having a display/screen 104. In an alternative example, the system 100 may be in communication with the display device 102 having the display 104.

[0071] The system 100 may further include one or more processor(s) 106. The one or more processor(s) 106 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 106 are configured to fetch and execute computer-readable instructions stored in a memory 108 of the system 100. The memory 108 may store one or more computer-readable instructions or routines, which may be fetched and executed to select an intended target on the display device. The memory 108 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.

[0072] The system 100 also includes an interface(s) 110. The interface(s) 110 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 110 facilitate communication of the system 100 with various devices, coupled to the system 100. The interface(s) 110 may also provide a communication pathway for one or more components of the system 100. Examples of such components include, but are not limited to, module(s) 112 and data 114.

[0073] The module(s) 112 can be processing engine(s) implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the module(s) 112. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the module(s) 112 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the module(s) 108 may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the module(s) 112. In such examples, the system 100 may include the machine- readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to system 100 and the processing resource. In other examples, the module(s) 112 may be implemented by electronic circuitry.

[0074] In an implementation, the module(s) 112 further includes an input receiving module 116, a selection module 118, and other module(s) 120. The other module(s) 120 may include programs or coded instructions that supplement applications and functions of the system 100.

[0075] Further, the data 114 may include data that is either stored or generated as a result of functionalities implemented by any of the components of the module(s) 112.

[0076] Further, the proposed system 100 may further include a pointing device (laser pointer) 122, an eye-gaze tracker 124, and a camera 126.

[0077] Before initiation of operation of the proposed system 100, the system 100 may be calibrated and activated. For calibration, the system 100 may be accessed through the interface(s) 110. An exemplary implementation of the interface(s) 110 is shown in FIG. 4. FIG. 4 represents the interface(s) 110 implemented with a combination of hardware and programming (for example, programmable instructions) in Python using PyQt4. The interface(s) 110 may be designed in such a way that the users/drivers have the flexibility of controlling the exposure value and focus of the camera 126. The user/driver can calibrate the system 100 by clicking/capturing the four corners of the display 104 as an image during the setup of the proposed system 100. This process feeds the corner points of the real display 104 to an image processing algorithm.
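
By way of illustration only, a minimal sketch of how the clicked corner points might be fed to a perspective transform is given below; it assumes OpenCV and NumPy are available, and the function name, corner ordering, and working resolution are illustrative assumptions rather than the disclosed implementation.

```python
import cv2
import numpy as np

# Working resolution used for laser-dot processing (illustrative value).
WORK_W, WORK_H = 320, 240

def build_display_transform(clicked_corners):
    """clicked_corners: the four (x, y) pixel positions of the display corners in
    the camera image, ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.array(clicked_corners, dtype=np.float32)
    dst = np.array([[0, 0], [WORK_W - 1, 0],
                    [WORK_W - 1, WORK_H - 1], [0, WORK_H - 1]], dtype=np.float32)
    # Homography mapping the skewed camera view of the screen to a flat rectangle.
    return cv2.getPerspectiveTransform(src, dst)
```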

[0078] In addition to calibration, the interface(s) (Graphical User Interface) 110 may also provide features to activate as well as deactivate the proposed system 100. Also, the user interface(s) 110 may provide a feature to preview the video captured by the camera 126 which helps the user/driver to see the instantaneous reflection of parameter changes like exposure and focus, on the captured image.

[0079] Also, in addition to calibration and activation, the system 100 may be tested using, for example, a Tobii EyeX tracker which tracks the eye position of the user/driver. Such a tracker may be connected to a computing device by means of a universal serial bus (USB). With such a connection, eye gaze data from the tracker may be received through a software development kit (SDK) in the form of an (x, y) position corresponding to pixel coordinates on the display 104.

[0080] In operation, once the system 100 is calibrated, activated, and tested, the user/driver may wear the pointing device 122 on one of the fingers as shown in FIG. 3A. The pointing device 122 may be powered by a rechargeable battery, say, a LiPo 3.7 V battery. In an example, the battery may be strapped to the dorsum of the user's hand. A three dimensional model of the wearable pointing device 122 along with its dimensions in millimetres is shown in FIG. 3B.

[0081] The pointing device 122 may be used by the user/driver to point at an intended target on the display 104 of the display device (infotainment screen) 102 mounted in the dashboard of a vehicle. In an example, the display 104 may be a Light Emitting Diode (LED) display screen of about, say, 13 inches. The size of the display screen may be selected in such a way that the light of the pointing device 122 is reflected to an extent that allows the camera 126 to distinguish the pointer of the pointing device 122 from the screen brightness. In an example, if the chosen screen does not reflect the laser light of the pointing device 122, a transparent glass sheet may be pasted on the display screen in order to make the display screen reflect the pointer of the pointing device 122.

[0082] When the pointing device 122 is pointed at the display 104 for a predefined time period of 300 ms, the input receiving module 116 may receive the pointing as the user input and trigger the eye gaze tracker 124. The eye gaze tracker 124 may then track the eye gaze of the user. In an example, the camera 126 can be a Microsoft LifeCam webcam used to evaluate the proposed system 100. In another example, the system 100 can also be evaluated successfully with Logitech cameras with a minimum resolution of 320x240. The focus of the camera 126 may be adjusted either automatically or manually. The camera 126 may be connected to the system 100 or embedded/integrated inside the system 100.

[0083] Continuing with the present disclosure, once the images of the light beam projected on the display 104 are captured, the images captured by the camera 126 may be processed by the selection module 118. The selection module 118 may process the image to identify pixel coordinates on the display 104 corresponding to the laser point present in the image. The captured images undergo a perspective transformation so that the skew in the image due to the viewing angle of the camera 126 is removed. Then, the transformed images are processed for pixels with maximum intensities in the Red colour space, and the centroid of these pixels is calculated. The calculated centroid may then be scaled to the actual resolution of the display 104 (display screen).
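
A minimal sketch of the processing chain described above (perspective correction, maximum-intensity search in the red channel, centroid computation, and scaling) is given below; it assumes OpenCV and NumPy, reuses the transform from the calibration sketch, and its intensity threshold and sizes are illustrative, not the claimed implementation.

```python
import cv2
import numpy as np

def locate_laser_dot(frame, transform, work_size=(320, 240), display_size=(1280, 800)):
    """Return the laser-dot position in display pixel coordinates, or None."""
    # Perspective transformation removes the skew caused by the camera's view angle.
    rectified = cv2.warpPerspective(frame, transform, work_size)
    red = rectified[:, :, 2].astype(np.float32)      # OpenCV stores frames as BGR
    peak = float(red.max())
    ys, xs = np.nonzero(red >= peak - 5)             # brightest pixels in the red channel
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()                    # centroid of the laser spot
    # Scale the centroid to the actual resolution of the display screen.
    return (int(cx * display_size[0] / work_size[0]),
            int(cy * display_size[1] / work_size[1]))
```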

[0084] Then, the eye-gaze device 124 may check the eye glance duration at a particular location within a 1.6° viewing angle, where the user/driver gazes for a particular time of 300 ms to select an intended target, and a selection is made only if the glance duration is beyond the predefined time period. The timer for glance duration starts only when the eye gaze is directed within a square boundary of a defined size of 60x60 pixels. Also, for the particular time of 300 ms, the eye gaze should remain within the square boundary. Further, an optimal size of the square boundary, the time period within the square boundary, and the glance duration threshold value may be determined based on various case studies.
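
A minimal sketch of this dwell-based switching logic is given below; the 300 ms threshold and 60x60 pixel boundary follow the description above, while the gaze-sample interface and function name are illustrative assumptions.

```python
DWELL_MS = 300     # predefined glance-duration threshold from the description above
BOX_HALF = 30      # half of the 60x60 pixel square boundary

def gaze_dwell_select(gaze_samples, target_xy):
    """gaze_samples: iterable of (timestamp_s, x, y) gaze points in display pixels.
    Returns True once the gaze stays inside the square boundary around the target
    for longer than DWELL_MS; the timer resets whenever the gaze leaves the box."""
    dwell_start = None
    for t, x, y in gaze_samples:
        inside = (abs(x - target_xy[0]) <= BOX_HALF and
                  abs(y - target_xy[1]) <= BOX_HALF)
        if not inside:
            dwell_start = None                 # gaze left the boundary: restart timer
            continue
        if dwell_start is None:
            dwell_start = t
        elif (t - dwell_start) * 1000.0 >= DWELL_MS:
            return True                        # glance duration met: trigger selection
    return False
```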

[0085] In brief, the proposed system 100 may include the pointing device 122, which a user/driver wears on his/her finger to point towards an intended target or object being displayed on the display 104 of the display device 102. While pointing at the screen with the laser pointer 122, if the eye glance remains within a static region of 60x60 pixels on the screen for a duration of 300 ms, a trigger is sent through the input receiving module 116 to the selection module 118 to capture the image, process the laser coordinates on the screen, and perform a left click at those coordinates. Here, the eye gaze tracker 124 does the job of checking whether the eye glance, while the laser is pointed at the screen, is within a static region, and of checking the duration of the glance. Since the gaze tracker 124 sends a trigger to the selection module 118 to perform the selection operation, the eye gaze tracker 124 is used as a switch, performing an operation that could otherwise have been done with a simple hardware switch. The eye gaze monitoring is a separate program (written in C#) which runs independently from the selection module 118. The synergy of the pointing device 122 for pointing and the eye-gaze device 124 for selection is expected to give better performance results than conventional modes of operating such systems using a touch screen.

[0086] FIG. 21 is an exemplary flow diagram for the proposed method for interaction with the infotainment system of a vehicle. The method 2100 comprises, at step 2102, capturing images of a display, such as display 104 shown in FIG. 1, of the infotainment system, wherein the display has a plurality of options. The images may be captured using a camera such as camera 126 shown in FIG. 1.

[0087] Step 2104 of the method 2100 may be ascertaining the location of a light beam on the display. The light beam may originate from a laser pointer, such as pointing device 122 shown in FIG. 1, that may be configured with a finger of a user. This may be done by processing the captured images using a processor system, such as the processor system comprising processor 106 and data 114 shown in FIG. 1.

[0088] Step 2106 of the method 2100 may be to ascertain if the light beam has been pointed on an option.

[0089] Step 2108 of the method 2100 may be activating an eye gaze detector, such as eye gaze tracker 124 shown in FIG. 1, if it is ascertained at step 2106 that the light beam has been pointed on the intended option during which the eye gaze tracker detects eyes and gets activated.

[0090] Step 2110 of the method 2100 may be ascertaining, using the processor system based on inputs from the eye gaze detector, the location on the display where eye gaze of the user is directed.

[0091] Step 2112 of the method 2100 may be ascertaining, using the processor system, if eye gaze of the user has been directed to the intended option beyond a predefined time period.

[0092] Step 2114 of the method 2100 may be selecting the intended option if it is ascertained that the eye gaze of the user has been directed to the intended option beyond the predefined time period.

[0093] In an aspect, the method may also comprise the step of calibrating the camera and the eye gaze tracker using one or more interfaces.

[0094] In an aspect, the calibration of the camera may include feeding the location of corner points of the display to an image processing algorithm in the processor system.

[0095] In an aspect, the method may also comprise the step of changing the predefined time period.

[0096] In an aspect, the method may also comprise the step of adjusting the exposure and focus of the camera.

[0097] The disclosure hereinafter provides a brief description of various experiments or case studies performed to identify suitable materials and combinations for the proposed method and system.

EXEMPLARY IMPLEMENTATIONS:

Experiment 1— Optimizing Laser Pointer

[0098] In the first experiment or exemplary implementation of the proposed system 100, different colour spaces and ambient luminance levels are explored to optimize the performance of the laser pointer based pointing device 122. In this experiment, the following exemplary configuration is utilized:

• Material: The setup employs a 650 nm, 5 V, 5 mW laser light and a 19" Dell E1916HV display with a resolution of 1280 x 800 pixels. To capture the image of the red dot, a Microsoft LifeCam Studio webcam with a resolution of 320x240 pixels is used.

• Design: The camera was placed facing the display device exactly at the centre, at an inclination angle of 20°. The laser pointer based pointing device 122 was tied to a tripod and placed exactly perpendicular to the screen surface (as shown in FIG. 5).

[0099] In the present exemplary implementation or experiment, two images were initially created in Matlab, one with a black background and another with a white background. Then, a red spot was marked on the image at a specific location (1023, 576). The laser pointer based pointing device 122 was then fixed rigidly such that it pointed at the marked red spot on the screen of the display device 102. The display (or screen) 104 was made fully black as well as fully white to represent the two extreme conditions. The luminance of the room was set at 5 levels between 2 and 300 Lux by adjusting the lights, the brightness of the display screen was set at 6 levels between 10 and 60 Lux by adjusting the brightness manually, and the exposure of the camera was set at 5 levels by adjusting it in a Matlab program. Thereafter, one image was collected for each combination of lighting condition and camera exposure value, and in total 300 images were collected across the black and white backgrounds.

[00100] As a result of this exemplary implementation or experiment, the following 7 different colour spaces were explored to accurately detect the position of the red dot on the screen: RGB, HSV, YCbCr, YUV, HSL, XYZ, and GRAYSCALE. The error of the processed coordinates was calculated by taking the difference between the processed coordinate and the original coordinate. The time elapsed to process the coordinates was recorded for each image. Initially, the different components of each colour space model were analysed for errors of less than 6 pixels in both the X and Y axes. Some of the components in the spaces had huge errors and were not taken for comparison. The following components were considered for further analysis:

• All components of RGB,
• Value of HSV (HSV_V),

• Luma of YCbCr (YCbCr_Y),

• Luma and Chroma of YUV (YUV_Y),

• Luma of HSL (HSL_L),

• Saturation of HSL (HSL_S),

• All components of XYZ and

• Grayscale (GRAY).

[00101] Thus, in this exemplary implementation or experiment, a comparison was made between the number of correct selections and the processing time for the different components of the colour space models. As a result, it was found that the Red component of RGB produced the maximum number of correct selections, 214 out of 300 images, while taking the least processing time (as shown in FIGS. 6 & 7).

[00102] Also, as can be appreciated by a person skilled in the art, previous research in the art mainly presented algorithms for detecting the laser dot on the screen of the display device 102, but did not evaluate them with respect to a range of illuminance levels and colour space models. The proposed system 100 ensures that the laser pointer based pointing device 122 can be used to make a selection in less than 2 ms after highlighting, between 2 and 30 Lux of ambient illuminance.

Experiment 2— Comparing laser pointer with other pointing modalities

[00103] In this experiment or exemplary implementation, three different input modalities, namely Laser Point Tracker, IMU Tracker, Finger Tracker and Motion Tracker, were evaluated for performance index, cognitive load, and system usability. For evaluation, an ISO pointing task was utilized for pointing at and selecting targets on the display screen, and the average selection time of the target was calculated using Fitts' Law.

[00104] Laser Point Tracker: The idea involves the user holding a laser wand and pointing it at the display screen 104 to indicate the position of an object on the screen 104 to be selected. When a button is pressed on the wand, a camera placed in front of the computer screen captures an image of the screen 104. The image is then processed to extract the location of the laser dot and to move the mouse pointer, using a program implemented in Visual Basic, to the coordinates pointed at by the user/driver using the laser wand.

[00105] Inertial Measurement Unit (IMU) tracker (Xsens): The basic idea of this tracker is to attach an IMU sensor to a pair of fingers on the hand and control the mouse cursor movement in accordance with the finger-pair movement. This is achieved by mapping the position angles to coordinates on the display screen 104. Since the operation is done on a 2-D display screen 104, only the pitch and yaw angles are considered for determination of the x and y coordinates on the display screen 104.

[00106] Finger Tracker (LeapMotion): The LeapMotion sensor LM-010 consists of two monochromatic infrared (IR) cameras and three infrared Light Emitting Diodes (LEDs) for tracking finger positions in a 3D coordinate system.

[00107] Evaluation of cursor movement: In order to evaluate the user/driver experience for the different modalities, a standard target clicking procedure was followed. In this procedure, the user/driver performs a pointing task similar to the ISO 9241 pointing task. It contains a target in white colour and obstacles/distracters in blue colour, as shown in FIG. 8. The cursor is brought to the centre of the screen before each target is clicked in order to record the distance and time taken for the movement of the cursor from the centre of the screen. The user is made to navigate the cursor across the screen and select the target through each modality. Each device was tested both in normal mode and in adaptive mode, where the size of the target as well as the distracters may vary depending on the cursor movement. The average time taken for a person to move the cursor from the centre of the screen to the target was calculated for each modality, and then the Index of Difficulty (ID) was calculated using equation (1).

ID = log2(2D/W)     (1)

[00108] Graphs are plotted for ID vs. average selection time, and curve fitting is done to obtain the average time T to complete the cursor movement, as given by Fitts' Law:

T = a + b log2(2D/W)     (2)
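
A minimal sketch of how the Index of Difficulty and the Fitts' Law coefficients of equations (1) and (2) can be computed with an ordinary least squares fit is given below; the sample values in the usage comment are placeholders, not the study's data.

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Equation (1): ID = log2(2D / W)."""
    return np.log2(2.0 * np.asarray(distance, dtype=float) /
                   np.asarray(width, dtype=float))

def fit_fitts_law(distances, widths, times_ms):
    """Fit T = a + b * ID (equation (2)) by ordinary least squares; returns (a, b)."""
    ids = index_of_difficulty(distances, widths)
    b, a = np.polyfit(ids, np.asarray(times_ms, dtype=float), 1)   # slope b, intercept a
    return a, b

# Illustrative usage with made-up measurements (not the study's data):
# a, b = fit_fitts_law([80, 160, 240, 325], [75, 65, 55, 45], [900, 1100, 1300, 1500])
```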

[00109] The tasks were performed on a Windows PC running Windows 7 with a 2.0 GHz Intel Core i3 processor and 4 GB RAM. The size of the display screen was 435 mm x 325 mm at a resolution of 1024x768. The size of the laser point was 3.5 mm in diameter on the display screen. The size of the targets and the distance of their separation from the centre took a fixed value from a set of four values as shown below.

• Width of the targets: 45, 55, 65, 75

• Distances from centre: 80, 160, 240, 325

• Pixel width: 0.42 mm per pixel

[00110] Nine individuals participated in the evaluation task. The participants were mostly young students from the campus. The average age of the participants was around 26. The female to male ratio was 2:7. For each input device, two pointing tasks were performed, the former conducted in normal mode and the latter in adaptive mode. In total, 4 (devices) x 2 experiments were conducted for each person. Each experiment was conducted for about 4 minutes, logging about 120 data points/target hits per person.

[00111] The corresponding cognitive loads of the participants were also recorded using NASA's TLX (Task Load Index) and SUS (System Usability Scale) scores for each experiment. The students were made to fill in the TLX and SUS score sheets, which were in the form of questionnaires based on both positive and negative experiences.

Results and analysis of Experiment 2:

A. ID vs. Average selection time

[00112] Initially, the average pointing and selection times were compared for different combinations of target widths and distances. FIG. 9 plots the pointing and selection times with respect to the Indices of Difficulty for the different pointing devices. It may be noted that participants took the least time using the Laser Pointer based pointing (LSR) and the highest using the Leap Motion controller. It is observed from FIG. 9 that the Laser Point Tracker takes an average time of 1315 ms for the regular pointing and selection task. The adaptive modality of the Laser Tracker further reduces the selection time to 1156 ms. The Finger Tracker (LeapMotion) took an average time of 1732 ms for the regular task and 1703 ms for the adaptive task. The IMU Tracker took an average time of 2181 ms for the regular task and 2115 ms for the adaptive task. A repeated measures ANOVA among the pointing devices found a significant main effect of devices [F (2, 90) = 75.1, p < 0.01]. A set of pairwise comparisons between the adapted and non-adapted versions of pointing found significant improvement due to adaptation for the Laser and IMU pointers in pairwise t-tests (p < 0.05).

[00113] FIG. 10 plots the Indices of Performance for all pointing devices. The bar indicates the average value while the error bar indicates standard deviation. The highest Index of Performance (IP) was noted for the Laser pointer based pointing.

[00114] It may be noted that the IMU tracker was given a default of 1.5s fixed delay for selection of target. Reducing this delay according to the comfort of the user will bring down the selection time of the target.

B. Cognitive Load

[00115] The cognitive load was highest for Leap Motion Controller while least for the Laser Pointer (FIG. 11). The bar indicates the average value while the error bar indicates standard deviation. The differences among TLX scores were not significant in a one-way ANOVA.

C. Subjective Preference

[00116] The average SUS scores were calculated and plotted for each modality as shown in FIG. 12. The bar indicates the average value while the error bar indicates the standard deviation. All SUS scores were above 68, indicating that all devices were preferred by users. The highest score was noted for the Laser Pointer while the lowest was for the Leap Motion controller.

D. Wrong Selection

[00117] During the data collection task, the users clicked on wrong objects/distracters several times; these clicks were not accounted for in the calculation of the average selection time. Such clicks are referred to as wrong selections.

[00118] The graph in FIG. 13 shows that the Leap Motion Controller has the highest number of wrong selections. It was also found that the number of wrong selections is higher in the adaptive selection task for every modality.

[00119] Accordingly, from the present experiment or exemplary implementation, it can be concluded that existing gaming, automotive, and military aviation environments have explored finger or hand movement tracking based interaction, mainly as gesture recognition systems. However, any gesture recognition system requires users to remember a set of gestures, like the commands of early command prompt systems. New pointing modalities were explored, where an on-screen pointer can be directly controlled by finger or wrist movement. This differs from existing hand or air-gesture systems in the sense that users need not remember any specific gesture and can control an on-screen pointer by small finger movements in the right, left, up, and down directions.

[00120] The present disclosure proposes algorithms to control pointer using three different types of sensors as listed below:

• Laser Pointer uses a projected beam of light and uses image processing technique to detect the position of the projected beam and uses that to control the pointer;

• The IMU tracker uses inertial sensors to track the position of the hand and finger and uses that to control a cursor; and

• Leap Motion Controller uses an infrared emitter and receiver to take video of hand movement and an algorithm to control a pointer using the position of the tip of the index finger.

[00121] Additionally, an adaptation algorithm is also evaluated based on enlargement of targets to reduce pointing and selection times.

[00122] A standard ISO 9241 pointing task found that the Laser pointer is the fastest to operate and that the adaptation system can significantly reduce pointing and selection time. Users' cognitive load and subjective preference also supported this result.

[00123] The IMU Tracker was given a default fixed delay of 1.5 s for selection of a target. Reducing this delay according to the comfort of the user will bring down the selection time of the target. The Leap Motion tracker required a continuous lift of the wrist over the sensor, which may contribute to its poor performance. Additionally, the dwell time duration for making a click with the LeapMotion Controller was found to be too low, triggering wrong selections, which was even worse in the adaptive condition.

Experiment 3— Pilot Test with Driving Simulator

[00124] This exemplary implementation or experiment proposes a new interaction device using a laser pointer which does not require drivers to physically touch a display. Further, its performance is compared with a touch screen in the automotive environment in terms of the pointing and selection time for a secondary task. The results show that the laser pointer did not significantly degrade driving or pointing performance compared to the touch screen in the standard ISO 26022 lane changing task.

[00125] It was hypothesized that the average selection time of the target of Laser-point Tracker controlled system is significantly different from that of Touch-input controlled system. The user study to compare Touch-input controlled system and Laser-point Tracker controlled system is presented below.

• Participants: At first the user evaluation was done for Touch-input system. There were ten participants. The persons were young students from the campus. The average age of the participants was 26. The female to male ratio was 2:8.

• Material: The Laser-point Tracker designed was used with a Lenovo Yoga laptop for the secondary task. A Logitech G4 driving wheel and associated pedals were used for the primary driving task. An ISO 26022 lane changing task was used to measure participants' driving performance, and it was run on a 40 inch, say, MicroMax TV screen.

• Design: In this dual task study, participants undertook the ISO 26022 lane changing task as their primary task. In this task, participants needed to drive along a 3-lane motorway. At regular intervals, drivers were shown a sign and instructed to change lane. The destination lane was randomly chosen and the driving path was automatically logged. The secondary task involved pointing and selection on a screen. Further, the following two modalities were used for pointing and selection: (1) Laser-point Tracker, and (2) Touching.

[00126] We have used an existing dashboard display from Jaguar Land Rover. The buttons had the same dimensions as in the original display, but all button captions were removed. During the study, one of the buttons of the display was randomly selected as the target and a caption 'Target' appeared on that button. The primary and secondary tasks were linked through an auditory cue: while driving, participants were instructed to point at and select the designated target on the secondary screen after hearing the cue. The auditory cue was set to appear at an interval of between 5 and 7 seconds and was repeated throughout the entire driving session.
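A minimal sketch of this secondary-task protocol is given below, assuming hypothetical helper functions (play_cue, wait_for_selection, log) provided by the experimental software; it schedules the auditory cue at a random 5 to 7 second interval, marks a random button as the target, and records the response time from cue to selection.

```python
# Minimal sketch of the secondary-task loop; helper functions are assumed.
import random
import time

def run_secondary_task(buttons, play_cue, wait_for_selection, log, n_trials=20):
    """buttons: list of button identifiers; play_cue(), wait_for_selection()
    and log(...) are placeholders for the experimental software's routines."""
    for trial in range(n_trials):
        time.sleep(random.uniform(5.0, 7.0))   # cue repeats every 5-7 seconds
        target = random.choice(buttons)        # randomly chosen target button
        cue_time = time.perf_counter()
        play_cue()                             # auditory cue to the driver
        selected = wait_for_selection(target)  # blocks until a button is hit
        response_time = time.perf_counter() - cue_time
        log(trial, target, selected, response_time)
```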

[00127] Procedure: Initially, the aim of the study was explained to the participants. They were first trained with the Laser-point Tracker controlled interface and allowed to use the driving simulator to undertake a test run. After training, they undertook trials in different conditions in random order. Participants were instructed to drive realistically without veering off the driving lane. After each condition, participants filled in the NASA TLX and SUS questionnaires.

Results and analysis of Experiment 3

The following dependent variables were measured:

1. Driving performance: It was measured as

• Mean deviation from designated lane calculated according to Annex E of ISO 26022 standard.

• Average speed of driving; in particular, it was investigated whether the new modality significantly affected driving speed.

• Standard deviation of steering angle, where a large standard deviation indicated that the drivers made sharp turns while changing lanes.

2. Pointing and clicking performance: It was measured as the response time or selection time of the target, which was the time difference between the auditory cue and the time instant of selection of the target button. This duration comprised the time to react to the auditory cue, the time to switch from the primary to the secondary task, and the pointing and selection time in the secondary task.

3. Cognitive Load: It was measured as the NASA Task Load Index (TLX) score.

4. Subjective preference: It was measured as the System Usability Score (SUS).

[00128] For each dependent variable, a comparison was initially made using descriptive statistics, and then a parametric or non-parametric statistical hypothesis test was undertaken. If an ANOVA test found a significant difference, pairwise t-tests were also used for comparisons.
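For illustration only, the following is a minimal sketch of such an analysis pipeline, assuming SciPy is available; the variable names are illustrative and the statistical software actually used in the study is not specified here.

```python
# Minimal sketch: descriptive statistics, one-way ANOVA across modalities,
# and pairwise t-tests when the ANOVA is significant.
from itertools import combinations
import numpy as np
from scipy import stats

def analyse(values_by_modality, alpha=0.05):
    """values_by_modality: dict mapping modality name -> list of measurements."""
    names = list(values_by_modality)
    samples = [np.asarray(values_by_modality[n], dtype=float) for n in names]
    for name, s in zip(names, samples):
        print(f"{name}: mean={s.mean():.2f}, sd={s.std(ddof=1):.2f}")
    f_stat, p = stats.f_oneway(*samples)       # one-way ANOVA
    print(f"ANOVA: F={f_stat:.2f}, p={p:.3f}")
    if p < alpha:                              # follow up with pairwise t-tests
        for (n1, s1), (n2, s2) in combinations(zip(names, samples), 2):
            t, p_pair = stats.ttest_ind(s1, s2)
            print(f"{n1} vs {n2}: t={t:.2f}, p={p_pair:.3f}")
```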

[00129] Before undertaking the trial in the dual task condition, participants used the driving simulator to undertake the primary driving task only. The driving path of this single task situation was used as a baseline for evaluating the deterioration in driving caused by the secondary task. Following the description of ISO 26022 Annex E, a reference path trajectory with a constant lane change start position and lane change length was calculated which has maximum correlation with the baseline driving path. For example, in FIG. 14, the green line shows the reference path while the red line shows the driving path in the dual task situation.

[00130] The arithmetic means of the deviation from the reference path, the speed and the steering angle were compared as metrics of driving performance.
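As an illustration, the sketch below computes these measures from logged arrays of lateral position, reference position, speed and steering angle; it is a simplified stand-in under assumed data formats and is not the ISO 26022 Annex E procedure itself.

```python
# Minimal sketch of the driving-performance metrics; inputs are assumed to be
# 1-D NumPy arrays sampled at the same time instants.
import numpy as np

def driving_metrics(lateral_pos, reference_pos, speed, steering_angle):
    """Return (mean deviation from reference path, average speed,
    standard deviation of steering angle)."""
    mean_deviation = np.mean(np.abs(lateral_pos - reference_pos))
    average_speed = np.mean(speed)
    steering_sd = np.std(steering_angle, ddof=1)   # large value => sharp turns
    return mean_deviation, average_speed, steering_sd
```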

[00131] Further, it can be clearly seen from FIG. 15 that the mean deviation was lowest for the Touch-input modality and significantly higher for the Laser-point Tracker modality, as inferred from a pairwise t-test (p<0.01). This may indicate that the participants were distracted more by the selection process when using the Laser-point Tracker than when using the Touch-input modality.

[00132] The average speed of driving was higher for the Touch-input controlled interface and lower for the Laser-point Tracker controlled system, as can be seen in FIG. 16. However, the difference between the average speeds was less than 9 km/h and was not statistically significant. This may indicate that the participants slowed down by up to 9 km/h while performing the secondary task using the Laser-point Tracker.

[00133] The standard deviation of steering angle, as seen in FIG. 17, was not significantly different between the Touch-input modality and the Laser-point Tracker modality, as inferred from a pairwise t-test (p>0.05). This may indicate that drivers drove slower and more cautiously with the Laser-point Tracker controlled system.

[00134] The average time of selection of intended targets was lowest for the Touch-input controlled system but was not significantly different from that of the Laser-point Tracker controlled system, as inferred from a pairwise t-test (p>0.05). The difference between the average selection times of the two modalities was less than 110 ms, as seen in FIG. 18. This may indicate that the participants took less time to select the intended targets on the dashboard using the Touch-input modality. Since the difference in selection time between the two modalities was less than 110 ms, the Laser-point Tracker performed promisingly and remained competitive with the Touch-input controlled system.

[00135] The TLX score was lowest for the Touch-input controlled system, as seen in FIG. 19. However, a pairwise t-test (p>0.05) found that the TLX score of the Laser-point Tracker controlled system was not significantly different from that of the Touch-input controlled system.

[00136] The SUS scores were greater than 68 for both modalities, and higher for the Touch-input controlled system, from which it is inferred that both systems are usable, as seen in FIG. 20. However, a pairwise t-test (p>0.05) found that the SUS score of the Laser-point Tracker controlled system was not significantly different from that of the Touch-input controlled system.

[00137] Accordingly, as per the present experiment or exemplary implementation, an alternative modality such as the Laser-point Tracker could be used for performing a secondary task, for example controlling an infotainment system, instead of conventional buttons and Touch-input methods in which the driver undertakes secondary tasks by glancing at the screen and stretching his or her arm to reach the intended target for selection. The Touch-input controlled system was still the fastest to use, and driving performance was also better for the Touch-input controlled system than for the Laser-point Tracker controlled system.

[00138] It was found that the average selection time was marginally but non-significantly higher for the Laser-point Tracker controlled system. However, the users deviated significantly more from the lane with the Laser-point Tracker controlled system, although the standard deviation of steering angle and the speed were not significantly different. The cognitive load and the system usability were also not significantly different.

[00139] For a person skilled in the art, it is understood that the use of phrase(s) "is", "are", "may", "can", "could", "will", "should" or the like is for understanding various embodiments of the present disclosure and the phrases do not limit the disclosure or its implementation in any manner.

[00140] The above description does not provide specific details of manufacture or design of the various components. Those of skill in the art are familiar with such details, and unless departures from those techniques are set out, known techniques, related art or later developed designs and materials should be employed. Those in the art are capable of choosing suitable manufacturing and design details.

[00141] Note that throughout the present discussion, numerous references may be made regarding servers, services, engines, modules, interfaces, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured or programmed to execute software instructions stored on a computer-readable tangible, non-transitory medium, also referred to as a processor-readable medium. For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfil described roles, responsibilities, or functions. Within the context of this document, the disclosed devices or systems are also deemed to comprise computing devices having a processor and a non-transitory memory storing instructions executable by the processor that cause the device to control, manage, or otherwise manipulate the features of the devices or systems.

[00142] The exemplary embodiment also relates to a system/device for performing the operations discussed herein above. This system/apparatus/device may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

[00143] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present disclosure. It will be appreciated that several of the above-disclosed and other features and functions, or alternatives thereof, may be combined into other systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may subsequently be made by those skilled in the art without departing from the scope of the present disclosure.

[00144] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE DISCLOSURE

[00145] The present disclosure provides a system and a method for pointing at an intended target and using eye gaze for selection of the intended target on display systems such as infotainment systems.

[00146] The present disclosure provides a system and a method that could be used in situations where drivers would otherwise need to reach spaces beyond their reach in order to perform secondary tasks while driving.

[00147] The present disclosure provides a system and a method that would potentially reduce the hand movement required of the driver to physically touch an intended target on the display device mounted in the dashboard of the vehicle, and decrease the selection time of the intended target on the display device.

[00148] The present disclosure provides a system and a method that would help elderly drivers, who have a reduced range of movement at the shoulder due to physical impairment, to perform secondary tasks while driving a vehicle.