

Title:
FINGERTIP IDENTIFICATION FOR GESTURE CONTROL
Document Type and Number:
WIPO Patent Application WO/2017/051199
Kind Code:
A1
Abstract:
There is disclosed a computer implemented method of fingertip centroid identification in real-time, implemented on a computer system comprising a processor, memory, and a camera system, the method including the steps of: (i) the processor receiving image data from the camera system; (ii) the processor running a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data; (iii) for each identified fingertip, the processor running a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip; (iv) the processor calculating a centroid for each best fit closed shape which corresponds to an identified fingertip, and (v) the processor storing in the memory the calculated centroids for the identified one or more fingertips. Related systems and computer program products are also disclosed.

Inventors:
BREBNER DAVID (NZ)
ROUNTREE RICHARD (NZ)
Application Number:
PCT/GB2016/052984
Publication Date:
March 30, 2017
Filing Date:
September 26, 2016
Assignee:
UMAJIN LTD (NZ)
International Classes:
G06F3/01
Foreign References:
US20100329509A12010-12-30
US20140104168A12014-04-17
Other References:
None
Attorney, Agent or Firm:
ORIGIN LIMITED (GB)
Claims:
CLAIMS

1. Computer implemented method of fingertip centroid identification in real-time, implemented on a computer system comprising a processor, memory, and a camera system, the method including the steps of:

(i) the processor receiving image data from the camera system;

(ii) the processor running a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data;

(iii) for each identified fingertip, the processor running a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip;

(iv) the processor calculating a centroid for each best fit closed shape which corresponds to an identified fingertip, and

(v) the processor storing in the memory the calculated centroids for the identified one or more fingertips.

2. Method of Claim 1, in which sub-pixel accuracy, and per-frame accuracy with no lag, are provided.

3. Method of any previous Claim, in which the camera system is a 3D or depth sensing image camera system.

4. Method of Claim 3, in which the 3D or depth sensing image camera system is an infrared-based depth sensor system or a stereoscopic camera system.

5. Method of Claim 4, in which the infrared-based depth sensor system is an Intel RealSense or a Microsoft Kinect.

6. Method of any of Claims 3 to 5, wherein depth data is thresholded so that hands are the only likely objects in the view.

7. Method of any of Claims 3 to 6, the processor running a modified morphological operator to erode edges based on depth and/or curvature.

8. Method of any of Claims 3 to 7, the processor running a modified morphological operator to infill holes and average the z values.

9. Method of any of Claims 3 to 8, the processor performing an identifying fill on spatially proximate pixels in image data, so as to identify possible hands or hand parts in the image data.

10. Method of any of Claims 3 to 9, the processor identifying fingers and determining respective angles in two dimensional space of the identified fingers.

11. Method of any of Claims 3 to 10, wherein a z sample from a finger boundary is used to determine the z gradient of the fingertip, to determine if the finger is facing into, flat or out of the view.

12. Method of any previous Claim, in which the camera system provides a stream of image data.

13. Method of Claim 12, the processor calculating a stream of centroids from the stream of image data, the processor tracking the identified one or more fingertips by analyzing a stream of calculated centroids for the identified one or more fingertips.

14. Method of Claims 12 or 13, wherein temporal tracking with a simple motion predictor is used to identify fingertips between frames.

15. Method of any of Claims 12 to 14, wherein the processor uses the tracked identified one or more fingertips to identify a gesture by the one or more fingertips.

16. Method of Claim 15, wherein the gesture is a tap, wiggle, swipe, hover, point, shake, tilt finger or dwell.

17. Method of Claim 16, wherein the dwell gesture performs a lock/select of a user interface item, and subsequent finger movement performs a drag of the user interface item.

18. Method of Claim 17, wherein a shake gesture releases the user interface item.

19. Method of any of Claims 15 to 18, wherein a tilt finger gesture performs a drag or paint.

20. Method of Claim 15, wherein the hover gesture snaps a cursor to the nearest UI element as if there is a gravity effect.

21. Method of Claim 15, wherein a plurality of fingers pointing up from an open hand is a rotate gesture.

22. Method of any of Claims 12 to 14, wherein the processor analyses stored fingertip tracking data and identifies an execution of a recognized fingertip gesture by a fingertip.

23. Method of any previous Claim, wherein the computer system further comprises a display.

24. Method of Claim 23, wherein an overlay of thresholded depth data is provided on the user's view on the display.

25. Method of Claims 23 or 24, wherein the intensity with which a hand is rendered on the display falls off visually as a hand gets too far away from the intended position.

26. Method of any of Claims 23 to 25, wherein a cursor is presented on the display for each of the identified fingertips.

27. Method of Claim 26, wherein a fingertip cursor has an icon and/or colour indication of its state.

28. Method of any of Claims 23 to 27, wherein the processor displays user interface elements on the display.

29. Method of Claim 28, wherein the processor identifies fingertip gestures for interaction with the user interface elements.

30. Method of Claim 28, wherein the processor detects selection of a displayed user interface element, when a tracked fingertip position satisfies a predefined spatial relationship with respect to a displayed user interface element.

31. Method of any of Claims 28 to 30, in which fingers tilted forward (i.e. tilt detected), or currently being moved toward the screen in the z axis (i.e. tapping motion detected), are used in a piano playing application.

32. Method of any of Claims 28 to 31, in which the display is a HUD display, in which for stereo views the flattened hand visualisation converges at approximately the same depth as the head-up display (HUD) displayed interface elements.

33. Method of any previous Claim, in which the processor is a graphics processing unit (GPU) or specialised image processing hardware (e.g. an FPGA or an ASIC).

34. Computer program product for fingertip centroid identification in real-time, the computer program product executable on a processor of a computer system comprising the processor, memory, and a camera system, to:

(i) receive image data from the camera system;

(ii) execute a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data;

(iii) for each identified fingertip, execute a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip;

(iv) calculate a centroid for each best fit closed shape which corresponds to an identified fingertip, and

(v) store in the memory the calculated centroids for the identified one or more fingertips.

35. Computer program product of Claim 34, executable to perform a method of any of Claims 1 to 33.

36. Graphics processing unit, a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), arranged to perform a method of any of Claims 1 to 33 in hardware, not in software.

37. Fingertip tracking computer system including a processor, memory, a display, a 3D or depth sensing image camera system, and a computer program executable by the processor, the computer system receiving a stream of 3D or depth sensing image camera system data from the 3D or depth sensing image camera system, the computer program executable by the processor to identify fingertips in the 3D or depth sensing image camera system data, and to track fingertips in the 3D or depth sensing image camera system data, in which

(i) the processor receives image data from the camera system;

(ii) the processor runs a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data;

(iii) for each identified fingertip, the processor runs a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip;

(iv) the processor calculates a centroid for each best fit closed shape which corresponds to an identified fingertip;

(v) the processor stores in the memory the calculated centroids for the identified one or more fingertips, and

(vi) the processor tracks the fingertips using the stored calculated centroids of the fingertips.

38. Fingertip tracking computer system of Claim 37, wherein the display is a desktop display, a laptop display, a tablet display, a mobile phone display, a smartwatch display, a smart TV display, a stereoscopic display, a holographic display, a wearable display or a HUD display.

39. Fingertip tracking computer system of Claim 37, wherein the computer system includes a desktop computer, a laptop computer, a tablet computer, a mobile phone computer, a smartwatch computer, a smart TV computer, a stereoscopic display computer, a holographic display computer, a wearable display computer or a HUD display computer.

40. Fingertip tracking computer system of any of Claims 37 to 39, wherein the tracked fingertips are displayed in the display.

41. Fingertip tracking computer system of Claim 40, wherein display data is sent to the display by wired connection.

42. Fingertip tracking computer system of Claim 40, wherein display data is sent to the display wirelessly.

43. Fingertip tracking computer system of any of Claims 37 to 42, wherein the fingertip tracking computer system is a fixed system.

44. Fingertip tracking computer system of any of Claims 37 to 43, wherein the fingertip tracking computer system is a mobile system.

45. Fingertip tracking computer system of any of Claims 37 to 44, arranged to perform a method of any of Claims 1 to 33.

Description:
FINGERTIP IDENTIFICATION FOR GESTURE CONTROL

BACKGROUND OF THE INVENTION

1. Field of the Invention

The field of the invention relates to methods of fingertip identification, and to related systems and computer program products.

2. Technical Background

One problem with several conventional 3D or depth sensing camera systems is that they find it difficult to accurately detect and track the movements of fingertips, making it difficult to accurately mimic 2D touch screen or touch pad interactions, such as tapping, dragging, selecting, scrolling, zooming etc.

3. Discussion of Related Art

Inverse kinematics refers to the use of the kinematics equations of e.g. an animated character to determine the joint parameters that provide a desired configuration of the animated character. Specification of the movement of the animated character so that its configuration achieves a desired motion as a function of time is known as motion planning. Inverse kinematics transforms the motion plan into joint actuation trajectories for the animated character.

Existing skeletal tracking solutions have problems where the final inverse kinematic phase, which attempts to rectify the skeleton, will almost always reposition joints, including the final position of the fingertips; as a result the fingertips are almost always pushed off centre and often outside of the hand silhouette. This causes a lot of inaccuracy and jitter between frames for the fingertip tracking. An example is shown in Figure 1, in which the four detected fingertips are located off-centre of the end regions of the fingers.

Hands which are only partially in view are a very common occurrence, e.g. when the hand is close to the camera and filling a lot of the view. This seriously undermines the stability of the skeletal approach, making it very difficult to use for interaction. Users do not get clear feedback on why the hand tracking is failing, as they are still clearly holding their hand up in front of the camera frustum; they do not understand that there is only a partial match in the hand tracking process. Accessibility to the bottom of the screen is very unreliable with the skeletal approach because, when approaching the bottom of the screen, approximately the bottom half of the hand or hands may not be in view. Figure 2 shows an example of a poor result in attempted finger tracking, in which two hands are only partially in view when approaching the bottom of the screen. Figure 3 shows an example of a poor result in attempted finger tracking, in which one hand is only partially in view.

Similarly, using 'hand centre' tracking approaches is also unsatisfactory, as the external shape of the hand can change a lot, and finding a stable centre of the blob for a 'cursor' is challenging; even selecting from 4 or 5 items horizontally is difficult to do with e.g. a 640x480 depth buffer. Examples of unsatisfactory hand centre tracking are shown in Figures 2 to 5, in which the large white circles identify the tracked hand centres. In particular, closing the fingers of the hand into a fist causes a number of problems for skeletal systems, where the centre of the hand moves and finger bones are left in an ambiguous position. Children's hands, endomorphic (soft, round body build) versus ectomorphic (lean, delicate body build) hand types, and long sleeves all provide specific challenges for skeletal trackers, often resulting in quite noisy results. Figure 4 shows an example of a poor result in attempted finger tracking, in which the fingers of the hand have been closed into a fist. Figure 5 shows an example of a poor result in attempted finger tracking, in which the fingers of the hand have been closed into a fist.

There is a need for an improved approach to finger tracking, which works well in the context of simulated finger-to-item (e.g. finger-to-screen) interactions.

SUMMARY OF THE INVENTION

According to a first aspect of the invention, there is provided a computer implemented method of fingertip centroid identification in real-time, implemented on a computer system comprising a processor, memory, and a camera system, the method including the steps of:

(i) the processor receiving image data from the camera system;

(ii) the processor running a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data;

(iii) for each identified fingertip, the processor running a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip;

(iv) the processor calculating a centroid for each best fit closed shape which corresponds to an identified fingertip, and

(v) the processor storing in the memory the calculated centroids for the identified one or more fingertips.

The term "concentric" denotes circles, arcs, or other shapes which share the same centre. Examples of closed shapes are circles, ellipses, ovals, triangles, squares and polygons.

An advantage is that fingertips can be detected and tracked accurately. An advantage is that fingertips can be detected and tracked accurately, even when a substantial part of the hand is not visible to the camera system. An advantage is that the use of kernels allows use in parallel with other processes, and may be optimised to take advantage of a graphics processing unit (GPU) or custom image processing hardware, such as in the form of a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).

The method may be one in which sub-pixel accuracy, and per-frame accuracy with no lag, are provided. The method may be one in which the camera system is a 3D or depth sensing image camera system. An advantage is that it becomes easier to identify the hand or hand parts based on their distance from the camera system. The method may be one in which the 3D or depth sensing image camera system is an infrared-based depth sensor system or a stereoscopic camera system.

The method may be one in which the infrared-based depth sensor system is an Intel RealSense or a Microsoft Kinect.

The method may be one wherein depth data is thresholded so that hands are the only likely objects in the view. An advantage is that by identifying the hand or hand parts based on their distance from the camera system, the identified hand or hand parts require less image processing than processing a full image with no depth-based filtering.

The method may be one wherein the processor runs a modified morphological operator to erode edges based on depth and/or curvature. An advantage is that a simpler shape is produced for subsequent analysis. The method may be one wherein the processor runs a modified morphological operator to infill holes and average the z values. An advantage is that a simpler, smoother shape is produced for subsequent analysis. Different filters may be used depending on the type of camera used. Time of flight infrared cameras may use a median filter for example. The method may be one wherein the processor performs an identifying fill on spatially proximate pixels in the image data, so as to identify possible hands or hand parts in the image data.

The method may be one wherein the processor identifies fingers and determines respective angles in two dimensional space of the identified fingers. An advantage is that input from the pointing direction of fingers is therefore readily obtained.

The method may be one wherein a z sample from a finger boundary is used to determine the z gradient of the fingertip, to determine if the finger is facing into, flat or out of the view. An advantage is that input from the tilting direction of fingers is therefore readily obtained.

The method may be one in which the camera system provides a stream of image data.

The method may be one wherein the processor calculates a stream of centroids from the stream of image data, the processor tracking the identified one or more fingertips by analyzing a stream of calculated centroids for the identified one or more fingertips. The method may be one wherein temporal tracking with a simple motion predictor is used to identify fingertips between frames.

The method may be one wherein the processor uses the tracked identified one or more fingertips to identify a gesture by the one or more fingertips.

The method may be one wherein the gesture is a tap, wiggle, swipe, hover, point, shake, tilt finger or dwell.

The method may be one wherein the dwell gesture performs a lock/select of a user interface item, and subsequent finger movement performs a drag of the user interface item.

The method may be one wherein a shake gesture releases the user interface item. The method may be one wherein a tilt finger gesture performs a drag or paint.

The method may be one wherein the hover gesture snaps a cursor to the nearest UI element as if there is a gravity effect. The method may be one wherein a plurality of fingers pointing up from an open hand is a rotate gesture.

The method may be one wherein the processor analyses stored fingertip tracking data and identifies an execution of a recognized fingertip gesture by a fingertip. The method may be one wherein the computer system further comprises a display.

The method may be one wherein an overlay of thresholded depth data is provided on the user's view on the display.

The method may be one wherein the intensity with which a hand is rendered on the display falls off visually as a hand gets too far away from the intended position. The method may be one wherein a cursor is presented on the display for each of the identified fingertips.

The method may be one wherein a fingertip cursor has an icon and/or colour indication of its state.

The method may be one wherein the processor displays user interface elements on the display.

The method may be one wherein the processor identifies fingertip gestures for interaction with the user interface elements.

The method may be one wherein the processor detects selection of a displayed user interface element, when a tracked fingertip position satisfies a predefined spatial relationship with respect to a displayed user interface element.

The method may be one in which fingers tilted forward (i.e. tilt detected), or currently being moved toward the screen in the z axis (i.e. tapping motion detected), are used in a piano playing application. The method may be one in which the display is a HUD display, in which for stereo views the flattened hand visualisation converges at approximately the same depth as the head-up display (HUD) displayed interface elements.

The method may be one in which the processor is a graphics processing unit (GPU).

According to a second aspect of the invention, there is provided a computer program product for fingertip centroid identification in real-time, the computer program product executable on a processor of a computer system comprising the processor, memory, and a camera system, to:

(i) receive image data from the camera system;

(ii) execute a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data;

(iii) for each identified fingertip, execute a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip;

(iv) calculate a centroid for each best fit closed shape which corresponds to an identified fingertip, and

(v) store in the memory the calculated centroids for the identified one or more fingertips.

The computer program product may be executable to perform a method of any aspect of the first aspect of the invention.

According to a third aspect of the invention, there is provided a graphics processing unit or specialised hardware (e.g. an FPGA or an ASIC), arranged to perform a method of any aspect of the first aspect of the invention in hardware, not in software.

According to a fourth aspect of the invention, there is provided a fingertip tracking computer system including a processor, memory, a display, a 3D or depth sensing image camera system, and a computer program executable by the processor, the computer system receiving a stream of 3D or depth sensing image camera system data from the 3D or depth sensing image camera system, the computer program executable by the processor to identify fingertips in the 3D or depth sensing image camera system data, and to track fingertips in the 3D or depth sensing image camera system data, in which

(i) the processor receives image data from the camera system;

(ii) the processor runs a first kernel comprising a set of concentric closed shapes over image data to identify an occupancy pattern in which internal closed shapes are at least nearly fully occupied, and in which a subsequent closed shape has at least a relatively low occupancy level, so as to identify one or more fingertips in the image data;

(iii) for each identified fingertip, the processor runs a second kernel over the identified one or more fingertips to establish a best fit closed shape which covers each identified fingertip;

(iv) the processor calculates a centroid for each best fit closed shape which corresponds to an identified fingertip;

(v) the processor stores in the memory the calculated centroids for the identified one or more fingertips, and

(vi) the processor tracks the fingertips using the stored calculated centroids of the fingertips.

The fingertip tracking computer system may be one wherein the display is a desktop display, a laptop display, a tablet display, a mobile phone display, a smartwatch display, a smart TV display, a stereoscopic display, a holographic display, a wearable display or a HUD display.

The fingertip tracking computer system may be one wherein the computer system includes a desktop computer, a laptop computer, a tablet computer, a mobile phone computer, a smartwatch computer, a smart TV computer, a stereoscopic display computer, a holographic display computer, a wearable display computer or a HUD display computer.

The fingertip tracking computer system may be one wherein the tracked fingertips are displayed in the display.

The fingertip tracking computer system may be one wherein display data is sent to the display by wired connection. The fingertip tracking computer system may be one wherein display data is sent to the display wirelessly.

The fingertip tracking computer system may be one wherein the fingertip tracking computer system is a fixed system. The fingertip tracking computer system may be one wherein the fingertip tracking computer system is a mobile system. The fingertip tracking computer system may be one arranged to perform a method of any aspect of the first aspect of the invention.

BRIEF DESCRIPTION OF THE FIGURES

Aspects of the invention will now be described, by way of example(s), with reference to the following Figures, in which:

Figure 1 shows an example in which the four detected fingertips are located off-centre of the ends of the fingers. The large white circle identifies the detected hand centre.

Figure 2 shows an example of a poor result in attempted finger tracking, in which two hands are only partially in view when approaching the bottom of the screen. The large white circles identify the detected hand centres.

Figure 3 shows an example of a poor result in attempted finger tracking, in which one hand is only partially in view. The large white circle identifies the detected hand centre.

Figure 4 shows an example of a poor result in attempted finger tracking, in which the fingers of the hand have been closed into a fist. The large white circle identifies the detected hand centre.

Figure 5 shows an example of a poor result in attempted finger tracking, in which the fingers of the hand have been closed into a fist. The large white circle identifies the detected hand centre.

Figure 6 shows an example in which depth data has been thresholded so that the hands are the only likely objects in the view.

Figure 7 shows an example in which an identifying fill has been run, which has resulted in identifying hand A, hand B and object C.

Figure 8 shows an example in which a fingertip of the hand on the left has been identified, and in which four fingertips of the hand on the right have been identified.

Figure 9 shows an example in which the respective pointing angles in two dimensional space of the fingers are determined.

Figure 10 shows an example in which the fingertips on the white keys four and five places from the left are highlighted because they are activating these keys, while the three rightmost detected fingertips are not activating a piano key, and hence are not highlighted.

Figure 11 shows an example in which for each of a left stereoscopic view and a right stereoscopic view, the fingertip is detected even though about half the hand is outside the tracked environment.

Figure 12 shows an example of a fingertip tracking computer system comprising a processor, memory, a display and a 3D or depth sensing image camera system.

DETAILED DESCRIPTION

Ghost Hands

'Ghost hands' is a user interface paradigm for using 2D and 3D user interfaces with fingertips. The fingertips are scanned or detected using a 3D or depth sensing camera system such as an Intel RealSense or Microsoft Kinect. A 3D or depth sensing camera system typically includes a conventional camera, an infrared laser projector, an infrared camera, and a microphone array. The infrared projector projects a grid onto the scene (in infrared light which is invisible to human eyes) and the infrared camera records it to compute depth information. A stereoscopic camera system is another example of a 3D or depth sensing camera system.

The Ghost Hands solution is applicable, for example, to desktop, Augmented Reality, Virtual Reality and Tabletop Mixed Reality scenarios: for example, where the 3D or depth sensing camera is mounted in the bezel of a laptop or tablet; where a 3D or depth sensing camera is mounted onto a wearable device, such as a stereoscopic headset facing outward; and also where an image projector is calibrated to the same field of view as the 3D camera, allowing the user's hands to interact with projected images.

In an example of a mixed reality application for this technology, a projector is aligned with the same physical space as the camera, with respect to a tabletop (i.e. the camera and the projector are pointing down towards the tabletop). Then images are projected onto the table, and the user's hands or other objects in the view are detected by the system in 3D, and the image and projection angles may be adjusted as a result.

Solution Aspects

Overview

There is provided a machine vision fingertip tracking capability. There is provided a gesture library which can detect taps, wiggles, swipes and dwells. There is provided a user interface (UI) library for visualising the user's hands and fingertips and their state. In the tracking, sub-pixel accuracy, per-frame accuracy with no lag, and very natural feedback for users may be provided.

Detailed Example

1) An overlay (e.g. a partially transparent overlay or a transparent overlay) of the thresholded raw depth data may be provided on the user's view in a display (e.g. on a screen, or in a stereo corrected view per eye).

2) The thresholding should allow for a normal range of hand motion near the camera, but the intensity with which the hand is rendered may fall off visually as a hand gets too far away from the intended position, so that users can be guided back into the correct distance for their hand from the camera (see the sketch after this list).

3) A cursor (e.g. crosshairs) should be presented to the user on each of the detected digit tips, e.g. at a smooth 60 fps.

4) These fingertip cursors can now be used for interaction with 2D or 3D user interface elements; the fingertip cursors may have icon and/or colour indications of their state.

5) For stereo views the flattened hand visualisation should converge at approximately the same depth as the head-up display (HUD) displayed interface elements, so there is no ambiguity in picking the interface elements. In a full 3D UI the z buffer can be used to suitably distort the view for one or both of the eye views, depending on how the z buffer view has been generated (e.g. structured light or stereopsis based techniques will have one eye already correct; time of flight will require distorting each view by half of the ocular distance).

a. Because virtual reality (VR) currently has a fixed projection/display distance, close-range focus cannot be used as a cue: close-range stereo is not ideal for the human visual system to align hands with objects in a 3D world. A compromise would be to show the fingertip cursors converging on the UI HUD plane while the hand/fingers retain their stereo nature.

b. The benefit of keeping the fingertip cursors and UI converged is that, for the user, the usability will be much better and more natural. Virtual reality use cases, just like desktop use cases, will often have hands reaching up into the view with potentially only the fingertips visible, making the approach described here ideal.
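
By way of illustration only, here is a minimal Python sketch of the intensity falloff in item 2; the intended distance, the falloff band (in metres) and the linear ramp are assumed values, not taken from this disclosure.

    def overlay_alpha(hand_z, intended_z=0.45, falloff_band=0.15):
        """Overlay opacity in [0, 1]; fades as the hand drifts from the
        intended distance (metres, assumed) from the camera."""
        deviation = abs(hand_z - intended_z)
        alpha = 1.0 - deviation / falloff_band
        return max(0.0, min(1.0, alpha))

    # Example: a hand 10 cm too close renders at reduced intensity.
    print(overlay_alpha(0.35))  # ~0.33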

Fingertip Gestures Examples

Simplicity and repeatability are critical for user interface interactions. With the algorithm-based approach described in this document, and a 640x480 z buffer, it is possible to accurately pick one of 26 items horizontally on the screen, such as keys on a piano keyboard, at a repeatability of over 98%. (At 26 items across 640 pixels, each target is only about 24 pixels wide, which the sub-pixel-accurate centroids make reliably selectable.)

A clear user mental model and immediate feedback from the UI is required for discoverability and usability. On-screen hands make it easy for users to understand how their hand and the user interface interact. The ability to have the interactive fingertips low on the view makes it easy to access the whole user interface.

Hover, dwell, tap and point metaphors may all be supported.

Swipes (e.g. of a finger or fingers) over an active area are also passed to the underlying object.

Dwell (e.g. of a finger or fingers) to lock/select and then drag. Shake or move out of view to release (e.g. show a tether line while locked/selected).

Tilt finger to drag or paint.

Gravity well: the hover cursor will snap to the nearest UI element as if there is a gravity effect, e.g. the closer to the UI, the more tightly it snaps. This gives the user additional support in picking UI elements where they are required to tap or tilt the finger, which may move the cursor. Note this effect is disabled when in other modes such as dragging or painting. A sketch of this snapping behaviour follows.
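
A minimal Python sketch of the gravity-well snapping, assuming screen-pixel coordinates, a hypothetical snap radius and a quadratic pull; the disclosure does not specify these details.

    def gravity_snap(cursor, elements, snap_radius=60.0):
        """Pull the hover cursor toward the nearest UI element centre;
        the pull strengthens as the cursor gets closer (gravity effect)."""
        nearest = min(elements,
                      key=lambda e: (e[0] - cursor[0]) ** 2 + (e[1] - cursor[1]) ** 2)
        dx, dy = nearest[0] - cursor[0], nearest[1] - cursor[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist >= snap_radius:
            return cursor                        # outside the well: no pull
        pull = (1.0 - dist / snap_radius) ** 2   # closer -> tighter snap
        return (cursor[0] + dx * pull, cursor[1] + dy * pull)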

5 or 4 Digit Gestures Examples

1) Use an open hand to rotate, e.g. based on detection of four or five digits in a high five pose. If a change in rotation of the hand is detected, this will rotate the locked/selected item around the z axis.

2) Panning left, right, up and down with four fingers in a row may act as an x,y scrolling/rotating gesture for the locked/selected item. This interaction may also be utilised with as few as two fingers in a row.

3) Push and pull into the screen may act as a modifier gesture on the locked/selected item (for example, two common modifiers would be rotating around the x axis or scaling in overall size).

Halo fingertip detection

A traditional fingertip detector using contour tracking is good at detecting fingertips, but has some computational challenges, or shortcomings, around finding the centre of the fingertip.

In contrast the halo approach described here is kernel based, hence it may be used in parallel with other processes, and may be optimised to take advantage of a graphics processing unit (GPU) or specialised image processing hardware (e.g. an FPGA or an ASIC). A core motivation is to generate extremely accurate and stable centroids for the fingertips.

In image processing, a kernel, convolution matrix, or mask is a matrix useful for e.g. blurring, sharpening, embossing, edge detection, and for other functions. This may be accomplished by convolution between a kernel and an image.
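
To make the kernel idea concrete, a minimal 2D convolution in Python with NumPy is sketched below, applying a 3x3 box-blur mask; this is a generic illustration, not the fingertip kernel itself.

    import numpy as np

    def convolve2d(image, kernel):
        """Slide the kernel over the image and sum the element-wise products."""
        kh, kw = kernel.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
        out = np.empty_like(image, dtype=float)
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
        return out

    box_blur = np.ones((3, 3)) / 9.0  # averaging (blurring) mask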

In an example of fingertip detection (Python sketches of the main steps follow this list):

1) Threshold the depth data so the hands are the only likely objects in the view.

An example in which depth data has been thresholded so that the hands are the only likely objects in the view is shown in Figure 6.

2) Run a modified morphological operator to erode edges based on curvature (e.g. using the z value differences we can estimate the normal vector, and hence detect the edges of the fingers and ensure that even fingers held tightly side by side will be separated).

3) Run a modified morphological operator to infill holes and average the z values.

4) Run an identifying (e.g. a colouring) fill over the remaining items (e.g. blobs) so overlapping hands, or other objects, can be separated and identified. An example is shown in Figure 7, in which an identifying fill, run on the data of Figure 6 after it has been subjected to steps 2 and 3, has resulted in identifying hand A, hand B and object C.

5) Run a sparse kernel comprising a series of concentric rings. This is used as a fast first pass to identify the best fingertip candidates at a range of scales (as the finger will appear larger in the camera view the closer it is to the camera). These rings are looking for an occupancy pattern where the internal rings are fully (or nearly fully) occupied, and a subsequent ring has a halo with very few occupied pixels, or with relatively few occupied pixels. For example, check only the item (e.g. blob) identifier (e.g. colour) from the central rings, and treat other items (e.g. blobs) as non-occupied pixels (e.g. because they have different colours). An example is shown in Figure 8, in which a fingertip of the hand on the left has been identified, and in which four fingertips of the hand on the right have been identified. For the fingertip of the hand on the left, the inner two circles are fully occupied, whereas the outer circle has relatively few occupied pixels. For the fingertips of the hand on the right, the inner two circles are fully occupied, or nearly fully occupied, whereas the outer circles have relatively few occupied pixels.

6) From the list of best matches, a full kernel is run over the fingertip to establish the best fit circle which covers the tip of the finger. The pixels inside the circle are averaged for x, y and z values. For example, at a midrange from the camera with a 640x480 buffer, a fingertip is roughly 60x80 = 4800 pixels in size. This generates a centroid which is accurate to a fifth of a pixel on a 640x480 buffer, with strong spatial coherency between frames, as the rings keep a constant fit at the tip despite noise in the finger silhouette.

7) The occupancy of an outer ring (e.g. at two times the radius of the fit of the fingertip) is used to determine the angle in two dimensional space of the finger (e.g. a compass angle). An example is shown in Figure 9, in which for the detected fingertip of the hand on the left, an outer ring segment S at two times the radius of fit of the fingertip has been drawn over the finger, and an arrow T has been drawn which points along the derived direction of the finger, away from the palm of the hand. In Figure 9, the same procedure has been followed for the four detected fingertips of the hand on the right. The z sample from the boundary is used to determine the z gradient of the fingertip (e.g. from finger to tip), to determine if it is facing into, flat or out of the view.

8) Some detected 'fingers' can be rejected if the radius and the z distance are outside the scale expected from human child/adult fingertips.

9) Temporal tracking with a simple motion predictor is used to identify fingertips between frames to ensure a consistent identification (ID) number is passed back to the underlying software; ID numbers are recycled when a finger is not detected for at least 2 frames. This feeds into the gesture processing system.
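
As an illustration of steps 1 to 4, a minimal Python/NumPy sketch follows; the depth units, the threshold values and the use of scipy.ndimage for the smoothing and the identifying fill are assumptions, since the operators above are left implementation-open.

    import numpy as np
    from scipy import ndimage

    def preprocess(z_buffer, max_z=600, z_delta_erode=12, min_blob_pixels=400):
        """z_buffer: 2D depth image (millimetres, assumed). Returns (depth, labels)."""
        z = z_buffer.astype(float)
        z[(z == 0) | (z > max_z)] = 0.0              # step 1: threshold to hand range
        gy, gx = np.gradient(z)
        z[np.hypot(gx, gy) > z_delta_erode] = 0.0    # step 2: erode steep depth edges
        z = ndimage.median_filter(z, size=3)         # step 3: infill holes, smooth z
        labels, n = ndimage.label(z > 0)             # step 4: identifying fill
        for blob_id in range(1, n + 1):
            if np.count_nonzero(labels == blob_id) < min_blob_pixels:
                labels[labels == blob_id] = 0        # drop tiny blobs (noise)
        return z, labels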
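
Steps 5 to 7 can be sketched for a single candidate pixel as follows; the ring radii, sample counts and occupancy cut-offs are hypothetical tuning values, and off-image samples are simply clamped to the border for brevity.

    import numpy as np

    def ring_occupancy(labels, cx, cy, radius, blob_id, samples=24):
        """Fraction of points on the ring that belong to the candidate's blob."""
        h, w = labels.shape
        angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        xs = np.clip(np.rint(cx + radius * np.cos(angles)).astype(int), 0, w - 1)
        ys = np.clip(np.rint(cy + radius * np.sin(angles)).astype(int), 0, h - 1)
        return np.mean(labels[ys, xs] == blob_id)

    def halo_candidate(labels, cx, cy, scale):
        """Step 5: full inner rings plus a sparse outer halo marks a fingertip."""
        blob_id = labels[cy, cx]
        if blob_id == 0:
            return False
        inner = min(ring_occupancy(labels, cx, cy, r * scale, blob_id) for r in (1, 2))
        outer = ring_occupancy(labels, cx, cy, 3 * scale, blob_id)
        return inner > 0.9 and outer < 0.3

    def fingertip_centroid(z, labels, cx, cy, radius, blob_id):
        """Step 6: average x, y, z over the best-fit circle for a sub-pixel centroid."""
        ys, xs = np.mgrid[0:z.shape[0], 0:z.shape[1]]
        inside = ((xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2) & (labels == blob_id)
        return xs[inside].mean(), ys[inside].mean(), z[inside].mean()

    def finger_angle(labels, cx, cy, fit_radius, blob_id, samples=32):
        """Step 7: the occupied arc of a ring at twice the fitted radius points
        back along the finger; the tip direction is the opposite of its mean."""
        h, w = labels.shape
        angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
        xs = np.clip(np.rint(cx + 2 * fit_radius * np.cos(angles)).astype(int), 0, w - 1)
        ys = np.clip(np.rint(cy + 2 * fit_radius * np.sin(angles)).astype(int), 0, h - 1)
        occ = angles[labels[ys, xs] == blob_id]
        if occ.size == 0:
            return None
        mean = np.arctan2(np.sin(occ).mean(), np.cos(occ).mean())  # circular mean
        return (mean + np.pi) % (2.0 * np.pi)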
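
Step 9 can be sketched as a small tracker class; the matching radius and the greedy nearest-prediction matching are assumptions, the disclosure only specifying a simple motion predictor and ID recycling after at least 2 missed frames.

    class FingertipTracker:
        """Keeps consistent IDs for fingertips across frames (step 9)."""

        def __init__(self, match_radius=25.0, max_missed=2):
            self.tracks = {}            # id -> {"pos", "vel", "missed"}
            self.next_id = 0
            self.match_radius = match_radius
            self.max_missed = max_missed

        def update(self, centroids):
            """centroids: list of (x, y) fingertip centroids for the current frame."""
            unmatched = list(centroids)
            for tid, t in list(self.tracks.items()):
                # simple motion predictor: last position plus last velocity
                px, py = t["pos"][0] + t["vel"][0], t["pos"][1] + t["vel"][1]
                best = min(unmatched, default=None,
                           key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)
                if best is not None and ((best[0] - px) ** 2 +
                                         (best[1] - py) ** 2) ** 0.5 <= self.match_radius:
                    t["vel"] = (best[0] - t["pos"][0], best[1] - t["pos"][1])
                    t["pos"], t["missed"] = best, 0
                    unmatched.remove(best)
                else:
                    t["missed"] += 1
                    if t["missed"] >= self.max_missed:
                        del self.tracks[tid]     # ID recycled after 2 missed frames
            for c in unmatched:                  # new fingertips get fresh IDs
                self.tracks[self.next_id] = {"pos": c, "vel": (0.0, 0.0), "missed": 0}
                self.next_id += 1
            return {tid: t["pos"] for tid, t in self.tracks.items()}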

In an example, for scrolling or panning, we don't require users to hold up their whole hand; in fact just two fingers are enough to go into scrolling mode or into panning mode, or into a scrolling and panning mode.

Piano with overlay

Fingers tilted forward (i.e. tilt detected), or currently being moved toward the screen in the z axis (i.e. tapping motion detected), may be displayed with a (e.g. red) highlight indicating they are activating the piano key they are directly above. An example is shown in Figure 10, in which the fingertips on the white keys four and five places from the left are highlighted because they are activating these keys, while the three rightmost detected fingertips are not activating a piano key, and hence are not highlighted.
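
A minimal sketch of this activation test, with hypothetical threshold values and sign conventions:

    def key_activated(tip_z_gradient, tip_z_velocity,
                      tilt_threshold=0.2, tap_threshold=-40.0):
        """tip_z_gradient: z slope from finger to tip (positive = tilted into the
        view, assumed convention). tip_z_velocity: mm/s, negative = toward screen."""
        tilted = tip_z_gradient > tilt_threshold      # tilt detected
        tapping = tip_z_velocity < tap_threshold      # tapping motion detected
        return tilted or tapping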

Stereo Views for Augmented and Virtual Reality

For fingertip tracking, the same camera that is used to scan the world (for augmented reality scenarios) can be used. A user does not need to have their hand completely in view. With the ghost hands approach, fingertips show up clearly, while traditional skeletal tracking faces ambiguous skeletal poses of the back of the hand. An example is shown in Figure 11, in which for each of a left stereoscopic view and a right stereoscopic view, the fingertip is detected even though about half the hand is outside the tracked environment.

Example Hardware Systems and Implementations

An example finger tracking hardware system comprises a computer system including a processor, memory, a display, a 3D or depth sensing image camera system, and a computer program executable by the processor, the computer system receiving a stream of 3D or depth sensing image camera system data from the 3D or depth sensing image camera system, the computer program executable by the processor to identify fingertips in the 3D or depth sensing image camera system data, and to track fingertips in the 3D or depth sensing image camera system data.

The tracked fingertips may be displayed in the display. Display data may be sent to the display by wired connection, or wirelessly. The display may be, for example, a desktop display, a laptop display, a tablet display, a mobile phone display, a smartwatch display, a smart TV display, a stereoscopic display, a holographic display, a wearable display or a HUD display. The finger tracking hardware system may be a fixed or a mobile finger tracking hardware system. An example finger tracking hardware system is shown in Figure 12. The processor may also generate user interface elements, for display in the display together with the tracked fingertips. The processor may detect selection of a displayed user interface element, when a tracked fingertip position satisfies a predefined spatial relationship with respect to a displayed user interface element. Fingertip tracking data may be stored (e.g. in memory) for further analysis, such as for analysis in order to identify gestures. The processor may analyse stored fingertip tracking data and identify an execution of a fingertip gesture by a fingertip.

Halo fingertip pseudocode example

Function process_fingertips_per_frame()
{
    buffer = Read_z_data()                          // raw depth frame from the camera
    z_buffer = pack_texture(buffer)
    gpu_buffer = transfer_to_GPU(z_buffer)
    threshold(gpu_buffer, maximum_z_distance)       // step 1: keep only hand-range depths
    z_erode(gpu_buffer, z_delta_erode_ratio)        // step 2: erode steep depth edges
    z_fill_smooth(gpu_buffer, kernel_size, amount)  // step 3: infill holes, average z
    color_blob_id(gpu_buffer, minimum_blob_pixels)  // step 4: identifying fill
    candidate_list = sparse_halo(gpu_buffer)        // step 5: fast first pass
    loop (candidate_list)
    {
        vector3_position_list, vector3_angle_list = fit_halo(candidate_list_entry)
    }
}

Function sparse_halo(gpu_buffer)
{
    foreach_pixel(gpu_buffer)
    {
        center_pixel = pixels(center_vector)
        foreach_ring()
        {
            z_occupancy[ring] = pixels(sample_vectors)   // sparse ring samples
        }
        if z_occupancy[inner_rings] == high && z_occupancy[outer_rings] == low
        {
            candidate_list += center_vector              // full core, sparse halo
        }
    }
    return candidate_list
}

Function fit_halo(vector candidate_list_entry)
{
    foreach_pixel_in_search_space(gpu_buffer)
    {
        foreach_ring()
        {
            z_occupancy[ring] = pixels(full_circle)      // dense ring samples
        }
        if z_occupancy[inner_rings] == high && z_occupancy[outer_rings] == low
           && ring_scale > min && ring_scale < max
        {
            vector3_position_list += precise_center_vector
            vector3_angle_list += angle_vector
        }
    }
    return vector3_position_list, vector3_angle_list
}

Note

It is to be understood that the above-referenced arrangements are only illustrative of the application for the principles of the present invention. Numerous modifications and alternative arrangements can be devised without departing from the spirit and scope of the present invention. While the present invention has been shown in the drawings and fully described above with particularity and detail in connection with what is presently deemed to be the most practical and preferred example(s) of the invention, it will be apparent to those of ordinary skill in the art that numerous modifications can be made without departing from the principles and concepts of the invention as set forth herein.




 