Title:
SELECTION OF USER INTERFACE ELEMENTS OF A GRAPHICAL USER INTERFACE
Document Type and Number:
WIPO Patent Application WO/2013/157013
Kind Code:
A1
Abstract:
Provided is a method of selecting a user interface element on a graphical user interface (UI). A first user input for moving a pointer on the UI is received from a touch-enabled handheld device. A first location of the pointer is recognized in response to the first user input. A second user input for moving the pointer on the UI is received from the touch-enabled handheld device. A second location of the pointer on the UI is recognized in response to the second user input. A user interface element is selected based on the second location of the pointer on the UI.

Inventors:
JAIN MOHIT (US)
MADHVANATH SRIGANESH (IN)
SHARMA VIMAL (IN)
Application Number:
PCT/IN2012/000270
Publication Date:
October 24, 2013
Filing Date:
April 17, 2012
Assignee:
HEWLETT PACKARD DEVELOPMENT CO (US)
JAIN MOHIT (US)
MADHVANATH SRIGANESH (IN)
SHARMA VIMAL (IN)
International Classes:
G06F3/048
Domestic Patent References:
WO2009080653A12009-07-02
WO1994029788A11994-12-22
Foreign References:
US20070070054A12007-03-29
Attorney, Agent or Firm:
ALANKI, N.V. Pradeep Kumar (198F 27th Cross, 3rd Block,Jayanagar, Bangalore 1, Karnataka, IN)
Claims:
We claim:

1. A method for selecting a user interface element of a graphical user interface (UI), comprising:

receiving, from a touch-enabled handheld device, a first user input for moving a pointer on the UI;

recognizing a first location of the pointer on the UI in response to the first user input;

receiving, from the touch-enabled handheld device, a second user input for moving the pointer on the UI;

recognizing a second location of the pointer on the UI in response to the second user input; and

selecting the user interface element based on the second location of the pointer on the UI.

2. The method of claim 1, wherein:

the first user input comprises a touch input on a touch screen of the mobile device and a succeeding movement of the mobile device in air to control a pointer on the UI; and

the second user input comprises a continuous swipe movement across the touch screen of the mobile device.

3. The method of claim 1, wherein:

the first user input comprises a touch input on a touch screen of the mobile device and a succeeding movement of the mobile device in air to control a pointer on the UI; and

the second user input comprises a discrete swipe movement across the touch screen of the mobile device.

4. The method of claim 1, wherein:

the first user input comprises: a touch input on a touch screen of the mobile device, a succeeding movement of the mobile device in air to control a pointer on the UI, and a subsequent release movement wherein a user releases the touch input from the touch screen of the mobile device; and the second user input comprises a discrete tap movement on the touch screen of the mobile device.

5. The method of claim 1, wherein:

the first user input comprises a continuous touch movement across a touch screen of the mobile device to control a pointer on the UI and a subsequent release movement wherein the touch movement is released from the touch screen of the mobile device; and

the second user input comprises a discrete tap movement on the touch screen of the mobile device.

6. The method of claim 1, wherein:

the first user input comprises a continuous swipe movement across a touch screen of the mobile device to control a pointer on the UI; and

the second user input comprises a discrete movement of the mobile device in air.

7. The method of claim 1, wherein the first user input is preceded by a touch input on a touch screen of the touch-enabled handheld device.

8. A system, comprising: a display device; and

a touch-enabled handheld device with a processor programmed to:

receive a first user input for moving a pointer on a graphical user interface (UI);

recognize a first location of the pointer on the UI in response to the first user input;

receive a second user input for moving the pointer on the UI;

recognize a second location of the pointer on the UI in response to the second user input; and

select a user interface element on the UI based on the second location of the pointer on the UI.

9. The system of claim 8, wherein the touch-enabled handheld device includes a gyroscope.

10. The system of claim 8, wherein the display device displays the graphical user interface (UI).

11. The system of claim 8, wherein the first user input comprises a touch input on a touch screen of the mobile device and a succeeding movement of the mobile device in air to control a pointer on the UI; and

the second user input comprises a continuous swipe movement across the touch screen of the mobile device.

12. The system of claim 8, wherein the first user input comprises: a touch input on a touch screen of the mobile device, a succeeding movement of the mobile device in air to control a pointer on the UI, and a subsequent release movement wherein a user releases the touch input from the touch screen of the mobile device; and

the second user input comprises a discrete stroke movement on the touch screen of the mobile device.

13. The system of claim 8, wherein the first user input comprises a continuous touch movement across a touch screen of the mobile device to control a pointer on the UI and a subsequent release movement wherein the touch movement is released from the touch screen of the mobile device; and

the second user input comprises a discrete stroke movement on the touch screen of the mobile device.

14. The system of claim 8, wherein the first user input comprises a continuous swipe movement across a touch screen of the mobile device to control a pointer on the UI; and the second user input comprises a discrete movement of the mobile device in air.

15. A computer program product for selecting a user interface element on a graphical user interface (UI), the computer program product comprising:

a computer readable storage medium having computer usable program code embodied therewith, the computer usable program code comprising:

computer usable program code that receives a first user input for moving a pointer on the UI;

computer usable program code that recognizes a first location of the pointer on the UI in response to the first user input;

computer usable program code that receives a second user input for moving the pointer on the UI;

computer usable program code that recognizes a second location of the pointer on the UI in response to the second user input; and

computer usable program code that selects a user interface element on the UI based on the second location of the pointer on the UI.

Description:
SELECTION OF USER INTERFACE ELEMENTS OF A GRAPHICAL USER INTERFACE

Background

An interactive system typically presents a graphical user interface ("UI") in a visual form. Commonly known interactive systems include television sets and computers.

A remote control with a four-way directional pad is the most common mode of interaction with an interactive system that is at some distance from a user. Most television sets and devices that are used from a distance (for instance, a DVD player, a set-top box, etc.) are shipped with a remote control so that a user can interact with the system. However, although ubiquitous, such a remote control does not always offer an intuitive way to interact with the system's UI, especially in the case of recent systems such as internet TV (iTV) or Smart TV.

Brief Description of the Drawings

For a better understanding of the solution, embodiments will now be described, purely by way of example, with reference to the accompanying drawings, in which:

FIG. 1 shows a flow chart of a method of selecting user interface elements of a UI, according to an embodiment.

FIGS. 2A and 2B are diagrams of an illustrative mechanism for selecting user interface elements of a UI, according to an embodiment.

FIGS. 3A and 3B are diagrams of an illustrative mechanism for selecting user interface elements of a UI, according to an embodiment.

FIGS. 4A and 4B are diagrams of an illustrative mechanism for selecting user interface elements of a UI, according to an embodiment.

FIGS. 5A and 5B are diagrams of an illustrative mechanism for selecting user interface elements of a UI, according to an embodiment.

FIGS. 6A and 6B are diagrams of an illustrative mechanism for selecting user interface elements of a UI, according to an embodiment.

FIG. 7 is a diagram of an illustrative mechanism for selecting user interface elements of a UI, according to an embodiment.

FIG. 8 illustrates a system for selecting user interface elements of a UI, according to an embodiment.

Detailed Description of the Invention

As mentioned earlier, a typical infrared remote control is not amenable to a smooth interaction with a UI from a distance. Such a UI may correspond to an internet TV or Smart TV system, or even a personal computer. Smart TV systems not only enable users to view regular broadcast programs but also provide advanced services such as video on-demand, catch-up TV services, an Electronic Program Guide (EPG), etc. In addition, Smart TV systems allow users to access their local or on-line content (for example, photos), browse the internet, download and launch apps, etc. Considering the range of services offered, the user interface of these systems (or of the content shown) is far more complex than that of a regular television set.

Proposed, therefore, is a solution that provides a more intuitive and fluid interaction with a UI, which may be at some distance from the user (for example, shown on a wall-mounted display). Specifically, embodiments of the present solution provide a method and system for interaction with user interface elements of a UI viewed on a large display by using a touch-enabled handheld device. Examples described later enable selection of a user interface element in such a way that there is fluidity in the process. Usually a visible pointer is shown on the UI to identify the target that is being selected. However, because the interaction takes place from a distance, it is hard to move the pointer directly over the target, which results in an overshooting problem. The problem of overshooting (of a pointer during selection of a user interface element on a UI) is overcome by providing intuitive means of refining a user's earlier pointer movements. Typically, a pointer is a graphical image on the UI which reflects movements of a pointing device. A pointer may be used to select and move the graphical user interface elements in a UI.

FIG. 1 shows a flow chart of a method of selecting user interface elements of a UI, according to an embodiment. In an example, the method may be implemented in a handheld device, like a mobile phone, an electronic tablet, a Personal Digital Assistant (PDA), etc., which may be coupled to an interactive system, like a television set, set-top box or a computer.

In an embodiment, the method includes receiving (110) a first user input for moving a pointer on a user interface. The method envisages a scenario where a user is interacting with an interactive system via a display device, and the user input is received from a touch-enabled handheld device. The display device may be in close proximity to the user or it may be at a distance. A display device is an output device for the presentation of information in visual or tactile form. Some illustrative examples of a display may include a television screen, a computer monitor, and mounted displays. The display device may employ different display technologies, such as Liquid Crystal Display (LCD), Light-Emitting Diode (LED) display, laser TV, etc.

It is also envisaged that the display device is capable of displaying a Graphical User Interface (UI) of the system being interacted with. The graphical user interface may include a number of graphical user interface elements, which may be of different types. Some non-limiting illustrative examples may include a window, a drop-down list, a button, a text box, a list box, a radio button, a check box, etc.

A pointer may be used to select and move the graphical user interface elements in a UI. Typically, a pointer is a graphical image on a user interface which reflects movements of a pointing device.

The mobile device which is used to provide a user input to the display device may include a mobile phone, a tablet PC (personal computer), an electronic tablet, a PDA, a smart phone, a watch, a remote controller with a touch screen (or touch pad), etc.

Upon receipt of a first user input for moving a pointer on a display, the method recognizes (120) a first location of the pointer on the UI. The user's input to move a pointer on the UI is tracked, and the location where the pointer movement comes to an end is determined as the first location of the pointer. To provide an illustration, let's assume that an MS Word document is open on a display. A user may provide a first user input to move a pointer on the display to a "Home" menu option on the graphical user interface of the MS Word program. This positioning of the pointer on the "Home" menu option is determined as the first location of the pointer. To provide another illustration, let's consider a scenario where a graphical user interface on a display device includes first level user interface elements and second level user interface elements within each of the first level user interface elements. A user may provide a first user input to move a pointer to one of the first level user interface elements. In this case, the placement of the pointer on the first level user interface element would be considered as the first location of the pointer.

Once the first user input is complete and the first location of the pointer has been identified, a user may provide a second user input to further move the pointer on the UI. The second user input may also be received (130) from a mobile device.

The second user input may be provided to fine-tune the first input and to refine the location of the pointer from its first location. The second user input may be used to place the pointer in the location the user actually desires. To illustrate in the context of the above MS Word example, after the "Home" menu option has been highlighted (as the first location of the pointer), a user may want to further refine the pointer location in order to select a "Bold" font option, an "Italic" font option, an "Underline" text option, or any other option on the "Home" menu. The second user input is provided to refine the location of the pointer from its first location. The method recognizes (140) the second location of the pointer on the UI in response to the second user input. The location where the pointer movement comes to an end in response to the second user input is determined as the second location of the pointer. In the above example, if a second input has moved the pointer to the "Bold" font option on the "Home" menu, then this position is identified as the second location of the pointer. The second location of the pointer highlights the user interface element which a user intends to select.

In the context of the latter illustration above, the second user input may be provided to place the pointer on a second level user interface element within the first level user interface element. In this way, the user refines the location of the pointer from its first location to the location he actually desires.

After the second location of the pointer is established, the user interface element corresponding to the second location of the pointer is selected (150). In the example given earlier, the "Bold" font option on the "Home" menu would be selected for execution by the computing system. A selection may be made by providing a third user input to the computing system. In an example, the third user input may be in the form of a single tap on the touch screen of the mobile device. However, a selection may be made by other methods as well, such as, but not limited to, pressing a hardware key, a voice input, a gesture input, etc. The above sequence of user inputs enables a user to select a graphical user interface element of his choice on a distant UI through a mobile device.
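
By way of illustration only (and not as part of the claimed method), the following Python sketch models the sequence of blocks 110-150: a coarse first input and a refining second input each update a pointer location, and a final input selects whichever element lies under the second location. All class, function and element names, and the flat layout, are hypothetical.

    class PointerSelector:
        """Illustrative model of blocks 110-150: coarse positioning, refinement, selection."""

        def __init__(self, ui_elements):
            # ui_elements maps an element name to its bounding box (x, y, width, height)
            # in a flat, purely illustrative layout.
            self.ui_elements = ui_elements
            self.pointer = (0, 0)

        def apply_input(self, dx, dy):
            # Both the first (coarse) and second (refining) user inputs reduce to a
            # pointer displacement; the location where the movement ends is returned.
            x, y = self.pointer
            self.pointer = (x + dx, y + dy)
            return self.pointer

        def element_at(self, point):
            px, py = point
            for name, (x, y, w, h) in self.ui_elements.items():
                if x <= px < x + w and y <= py < y + h:
                    return name
            return None

        def select(self):
            # A third user input (e.g. a single tap) selects whatever element the
            # pointer's current (second) location falls on.
            return self.element_at(self.pointer)

    # Usage, loosely mirroring the MS Word illustration above:
    ui = PointerSelector({"Home": (0, 0, 60, 40), "Bold": (70, 0, 30, 40)})
    first_location = ui.apply_input(30, 10)   # first input ends over "Home"
    second_location = ui.apply_input(50, 0)   # second input refines the location onto "Bold"
    print(ui.select())                        # -> "Bold"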

Various mechanisms may be followed to provide a first and a second user input to a UI using a mobile device. Some of these examples are mentioned below. These examples are for the purpose of illustration only, and any combinations thereof may be used to move a pointer on a UI.

In one example, illustrated in Fig. 2A and 2B, a first user input may comprise a touch input on a touch screen of a mobile device followed by a movement of the mobile device in air to move (control) a pointer on a UI. The first user input may be preceded by an initial touch input on the mobile device.

The initial touch input acts as a signal to indicate that the user may wish to use the mobile device as a pointing device (like a laser pointer). The initial touch transfers the control of a pointer on a UI to the mobile device, and a subsequent motion of the mobile device in air is used to move the pointer (Fig. 2A). In other words, in this method, if a user wants to control a pointer on a UI, all he needs to do is to tap on a touch screen of his mobile device and point to a distant display. If a pointer is present on the display, its control would be transferred to the user, who may move it to a desired location (on the UI) by moving his mobile device. The location where the pointer movement comes to an end is determined as the first location of the pointer. This is typically a location on the graphical user interface where a user would like to make a selection (of a user interface element).

A second user input in the above case may be provided through a movement of a user's hand across the touch screen of the mobile device. For instance, it may be in the form of a continuous touch movement (an uninterrupted stroke) on the touch screen of the mobile device (Fig. 2B), wherein the user maintains constant contact with the touch screen during the movement. The touch screen acts as a trackpad for moving a pointer on the display. In another instance, a user may make a continuous swipe movement over the touch screen to move the pointer to a location of his choice on the UI.

A final selection of a graphical user interface element may be made, for instance, by providing a single tap on the mobile device.
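
As a rough, non-authoritative sketch of the Fig. 2A/2B interaction, the snippet below maps an assumed gyroscope reading (first user input, device moved in air) and an assumed trackpad-style swipe (second user input) to pointer displacements. The gain constants and function names are illustrative assumptions, not values from the patent.

    AIR_GAIN = 300.0      # display pixels per radian of device rotation (assumed value)
    TRACKPAD_GAIN = 2.0   # display pixels per touch-screen pixel (assumed value)

    def pointer_delta_from_air_motion(yaw_rate, pitch_rate, dt):
        """First user input: convert angular rates (rad/s) sampled over dt seconds,
        e.g. from a gyroscope, into a pointer displacement on the distant display."""
        return (AIR_GAIN * yaw_rate * dt, AIR_GAIN * pitch_rate * dt)

    def pointer_delta_from_swipe(touch_dx, touch_dy):
        """Second user input: scale a continuous touch-screen movement into a smaller,
        more precise pointer displacement, as on a trackpad."""
        return (TRACKPAD_GAIN * touch_dx, TRACKPAD_GAIN * touch_dy)

    # Example: a wrist motion covers most of the distance, a short swipe refines it.
    coarse = pointer_delta_from_air_motion(yaw_rate=0.8, pitch_rate=0.1, dt=0.5)
    fine = pointer_delta_from_swipe(touch_dx=-12, touch_dy=4)
    print(coarse, fine)   # -> (120.0, 15.0) (-24, 8)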

In another example, illustrated in Fig. 3A and 3B, a first user input comprises a touch input on a touch screen of a mobile device followed by a movement of the mobile device in air to move (control) a pointer on a UI (Fig. 3A), similar to the one described in relation to Fig. 2A. However, a second user input in this case is provided through discrete touch movement(s) across the touch screen of the mobile device.

A user moves his hand across the touch screen as if it's a discrete trackpad. (In a discrete pad mechanism the touch screen of the mobile device acts as a trackpad, but only discrete motions are allowed.) For example, to move a pointer from a present location to the right, a user may perform a small swipe motion towards the right. The user may keep on making these discrete swipe motions until the pointer reaches the user interface element of his choice. Similarly, left, up and down movements may be made to select an interface element in the corresponding direction. A user could also make longer swipe movements across the surface of the touch screen to quickly move through the interface elements. In an example, the pointer could be a selection block, as illustrated in Fig. 3B. (A selection block may be considered a special type of pointer which encircles a user interface element to highlight its selection in a graphical user interface.)

A final selection of a graphical user interface element may be made, for instance, by providing a single tap on the mobile device.
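
The discrete-trackpad behaviour of Fig. 3B could be approximated as in the sketch below; the swipe threshold, the direction classification, and the element grid are assumptions made for this example only.

    SWIPE_THRESHOLD = 30  # minimum travel, in touch-screen pixels, to count as a swipe (assumed)

    def classify_swipe(dx, dy):
        """Return 'left', 'right', 'up', 'down', or None for a too-short stroke."""
        if max(abs(dx), abs(dy)) < SWIPE_THRESHOLD:
            return None
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    def move_selection(grid, row, col, direction):
        """Move the selection block one element in the given direction, staying in bounds."""
        steps = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}
        dr, dc = steps.get(direction, (0, 0))
        row = min(max(row + dr, 0), len(grid) - 1)
        col = min(max(col + dc, 0), len(grid[0]) - 1)
        return row, col

    # Example: a rightward swipe moves the selection block from "Photos" to "Videos".
    grid = [["Photos", "Videos", "Apps"], ["Browser", "EPG", "Settings"]]
    row, col = move_selection(grid, 0, 0, classify_swipe(dx=45, dy=3))
    print(grid[row][col])  # -> "Videos"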

In yet another example, illustrated in Fig. 4A and 4B, a first user input may comprise a touch input on a mobile device, followed by a movement of the mobile device in air and a subsequent release movement wherein the user releases the touch input from the touch screen of the mobile device.

In this example, the initial touch input on a touch screen of the mobile device followed by a movement of the mobile device in air to move (control) a pointer on a UI is similar to the example mentioned earlier in relation to Fig. 2A. However, the difference lies in a subsequent movement. This includes a step wherein the user who had provided the initial touch input releases his input from the touch screen of the mobile device. The release completes the first user input. For instance, a user may use a finger to provide an initial touch input to the device. The location where the pointer movement stops is construed as the first location of the pointer on the UI. Once the first location of the pointer is highlighted, a user may release his finger from the touch screen of the mobile device. The release action results in the disappearance of the regular pointer from the UI, and in its place a selection block appears on the screen (Fig. 4B). Since the appearance of a pointer changes (from cursor to selection block) upon release of the user's initial touch input, the location of the selection block may also be construed as the first location of the pointer.

A second user input in the above case may be provided through discrete touch inputs on the touch screen of the mobile device. A user provides his inputs on the touch screen as if it's a directional pad. (In a directional pad mechanism only discrete input taps are allowed to move a pointer. An input tap in this case may include one or multiple distinct taps, or a long press tap.) For example, to move a cursor from a present location to the right, a user may provide a discrete tap on the right hand side of the touch screen. Each stroke moves the cursor to the next interface element on the user interface. The direction of a stroke determines the direction of the cursor movement.

A user may keep on making these discrete strokes until the pointer highlights the user interface element of his choice. A final selection of a graphical user interface element may be made by, for instance, providing a single tap on the mobile device. In an example, a final selection may be made by tapping on an "OK" key on the directional pad (as illustrated in Fig. 4B).
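
A minimal sketch of the directional-pad interpretation of Fig. 4B follows, assuming the touch screen is simply divided into four directional zones around a central "OK" zone; the zone sizes and the function name are illustrative assumptions.

    def dpad_action(tap_x, tap_y, screen_w, screen_h, center_fraction=0.3):
        """Map a discrete tap position to 'left', 'right', 'up', 'down', or 'ok'."""
        # Normalise to [-0.5, 0.5] with the origin at the centre of the touch screen.
        nx = tap_x / screen_w - 0.5
        ny = tap_y / screen_h - 0.5
        if abs(nx) < center_fraction / 2 and abs(ny) < center_fraction / 2:
            return "ok"  # tap in the central zone confirms the selection
        if abs(nx) >= abs(ny):
            return "right" if nx > 0 else "left"
        return "down" if ny > 0 else "up"

    # Example: a tap near the right edge moves the cursor right; a central tap selects.
    print(dpad_action(460, 250, screen_w=480, screen_h=800))  # -> "right"
    print(dpad_action(240, 400, screen_w=480, screen_h=800))  # -> "ok"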

In a further example, illustrated in Fig. 5A and 5B, a first user input may comprise a continuous touch movement across a touch screen of the mobile device to control a pointer on the UI and a subsequent release movement wherein the touch movement is released from the touch screen of the mobile device (Fig. 5A).

In this method, a user may provide a first user input by performing a regular swipe motion on the touch screen of the mobile device, followed by a release of the user's hand (from the touch screen) when the swipe motion is completed. The location of the pointer (on the display), once the user has removed his contact from the touch screen of the mobile device, is identified as the first location of the pointer.

A second user input in the above case may be provided through a mechanism similar to the one described in relation to Fig. 4B. Discrete touch inputs are provided on the touch screen of the mobile device. A user provides his inputs on the touch screen as if it's a directional pad. Each stroke moves the cursor to the next interface element on the user interface, and the direction of a stroke determines the direction of the cursor movement. A final selection of a graphical user interface element may be made by, for instance, providing a single tap on the mobile device.

In another example, illustrated in Fig. 6A and 6B, a first user input may comprise a swipe movement across a touch screen of the mobile device to control a pointer on the display (Fig. 6A). This is similar to the method described in relation to Fig. 2B above. The location of the pointer on the UI, once the swipe movement is over, is identified as the first location of the pointer.

A second user input in the above case may be provided by moving the mobile device in air. The mobile device acts as a pointing device to move the pointer from its first location on the UI. The regular pointer disappears on moving the mobile device and a selection box appears in its place. A user moves the selection box from one interface element to another through discrete movements of the mobile device in air. For instance, to move a pointer from a present location to the right, a user may perform a small wrist-motion towards the right (Fig. 6B).
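
The discrete in-air movement of Fig. 6B could be detected, for instance, by thresholding the device's rotation between readings, as in the sketch below; the axis conventions and the threshold value are assumptions for illustration.

    ROTATION_THRESHOLD = 0.15  # radians of wrist rotation needed to register one step (assumed)

    def wrist_step(delta_yaw, delta_pitch):
        """Return one discrete step ('left', 'right', 'up', 'down') or None for small motion."""
        if max(abs(delta_yaw), abs(delta_pitch)) < ROTATION_THRESHOLD:
            return None  # below threshold: treat as hand tremor and ignore
        if abs(delta_yaw) >= abs(delta_pitch):
            return "right" if delta_yaw > 0 else "left"
        return "up" if delta_pitch > 0 else "down"

    print(wrist_step(delta_yaw=0.3, delta_pitch=0.02))  # -> "right"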

In yet another example, it is assumed that the user interface elements are organized in a hierarchy, wherein a user interface element (or a group of user interface elements) may lead to another user interface element (or a group of user interface elements). The user interface elements are organized at multiple levels, wherein a first level leads to a second level, the second level to a third level, and so on and so forth (Fig. 7).

To select a first level user interface element, a first user input may be obtained through discrete touch movement(s) across the touch screen of the mobile device, as illustrated in Fig. 3B. In another instance, it may be obtained through discrete touch inputs on the touch screen of a mobile device, wherein the touch screen acts as a D-pad (such as illustrated in Fig. 4B). A first user input may also be obtained by moving the mobile device in air, wherein a regular pointer disappears on moving the mobile device and a selection box appears in its place (as described in relation to Fig. 6B). Once a first user input is recognized, a cursor may appear inside the selected user interface element to provide access to a second level of user interface elements.

To select a second level user interface element(s), a second user input may be obtained by a swipe movement across a touch screen of the mobile device (similar to the method described in relation to Fig. 2B earlier) or through a touch input on a touch screen of a mobile device followed by a movement of the mobile device in air to move (control) a pointer on a UI (Fig. 2A).

To provide an example, a video may be selected from a collection of multimedia files by obtaining a first user input through any of the ways described above. Once a video is selected, a second level user interface element (for example, play, pause, rewind, stop, etc.) may be selected by a second user input through either of the input methods mentioned above.
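
A small sketch of this hierarchical navigation is given below; the two-level element tree (videos and playback controls) and the helper name are purely illustrative, not taken from the patent.

    # Hypothetical two-level element tree: media files at the first level,
    # playback controls at the second level.
    ui_tree = {
        "holiday.mp4": ["play", "pause", "rewind", "stop"],
        "birthday.mp4": ["play", "pause", "rewind", "stop"],
    }

    def navigate(items, index, direction):
        """Move a selection index one step within a level, clamped to the level's bounds."""
        index += 1 if direction == "right" else -1
        return min(max(index, 0), len(items) - 1)

    # First user input: step through the first-level elements and settle on a video.
    videos = list(ui_tree)
    v = navigate(videos, 0, "right")       # selection moves to "birthday.mp4"

    # Second user input: the cursor drops into the selected element's second level.
    controls = ui_tree[videos[v]]
    c = navigate(controls, 0, "right")     # selection moves to "pause"
    print(videos[v], controls[c])          # -> birthday.mp4 pause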

Fig. 8 illustrates a system for selecting user interface elements of a UI, according to an embodiment.

The system 800 includes a computing device 810 connected to a display device (screen) 860. The computing device 810 may be a mobile device (for example, a mobile phone, a touch pad, a personal digital assistant (PDA), a remote control, etc.), a desktop computer, a notebook computer, and the like.

Computing device 810 may include a processor 820, for executing machine readable instructions, a memory (storage medium) 830 (for storing machine readable instructions), a touch interface 840 and a communication interface 850.

Processor 820 is arranged to execute machine readable instructions. The machine readable instructions may be in the form of a software program. In an example, processor 820 executes machine readable instructions to: receive a first user input for moving a pointer on the UI, recognize a first location of the pointer on the UI in response to the first user input, receive a second user input for moving the pointer on the UI, recognize a second location of the pointer on the UI in response to the second user input, and select a user interface element on the UI based on the second location of the pointer on the UI.

The memory 830 may include computer system memory such as, but not limited to, SDRAM (Synchronous DRAM), DDR (Double Data Rate SDRAM), Rambus DRAM (RDRAM), Rambus RAM, etc. or storage memory media, such as, a floppy disk, a hard disk, a CD-ROM, a DVD, a pen drive, etc.

The touch interface 840 may include a touch input device, such as a touch screen. It may be used to receive a touch input from a user.

Communication interface 850 is used to communicate with an external device, such as a display screen 860. It may be a software program, hardware, firmware, or any combination thereof. Communication interface 850 may use a variety of communication technologies to enable communication between the computing device 810 and an external device. Some non-limiting examples of communication technologies which may be used include infrared, Bluetooth, Wi-Fi, etc. In an example, the computing device 810 may be a mobile device. In such a case the computing device may include additional components such as a receiver, a transmitter, an antenna, etc. for wireless communication over a voice or data communication network such as GSM, CDMA, etc.
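
The patent does not prescribe a wire protocol, but as a sketch of how communication interface 850 might be exercised, the snippet below sends pointer events from the handheld to a display device listening on an assumed host name and port over a socket connection (for example, over Wi-Fi). The message fields and event names are assumptions for illustration.

    import json
    import socket

    def send_pointer_event(sock, event_type, dx=0, dy=0):
        """Send one pointer event ('move', 'refine', or 'select') as a JSON line."""
        message = {"type": event_type, "dx": dx, "dy": dy}
        sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

    # Example (assumes the display device listens on an assumed host name and port):
    # sock = socket.create_connection(("display.local", 5050))
    # send_pointer_event(sock, "move", dx=42, dy=-7)    # first, coarse user input
    # send_pointer_event(sock, "refine", dx=-3, dy=1)   # second, refining user input
    # send_pointer_event(sock, "select")                # third input commits the selection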

The display device (screen) 860 may be any device that enables a user to receive visual feedback. For example, the display may be a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display panel, a television, a computer monitor, and the like. The display device 860 is capable of displaying a graphical user interface (UI). The display may include a communication interface for communication with an external device, such as the computing device 810. The communication interface may be a software program, hardware, firmware, or any combination thereof. It may use a variety of communication technologies (infrared, Bluetooth, Wi-Fi, etc.) to enable communication between the display device 860 and an external device.

It would be appreciated that the system components depicted in FIG. 8 are for the purpose of illustration only and the actual components may vary depending on the computing system and architecture deployed for implementation of the present solution. The various components described above may be hosted on a single computing system or multiple computer systems, including servers, connected together through suitable means.

It will be appreciated that the embodiments within the scope of the present solution may be implemented in the form of a computer program product including computer-executable instructions, such as program code, which may be run on any suitable computing environment in conjunction with a suitable operating system, such as Microsoft Windows, Linux or UNIX operating system. Embodiments within the scope of the present solution may also include program products comprising computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, such computer-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM, magnetic disk storage or other storage devices, or any other medium which can be used to carry or store desired program code in the form of computer-executable instructions and which can be accessed by a general purpose or special purpose computer.

It should be noted that the above-described embodiment of the present solution is for the purpose of illustration only. Although the solution has been described in conjunction with a specific embodiment thereof, numerous modifications are possible without materially departing from the teachings and advantages of the subject matter described herein. Other substitutions, modifications and changes may be made without departing from the spirit of the present solution.