
Title:
MULTI-TOUCH HUMAN INTERFACE SYSTEM AND DEVICE FOR GRAPHICAL INPUT, AND METHOD FOR PROCESSING IMAGE IN SUCH A SYSTEM.
Document Type and Number:
WIPO Patent Application WO/2013/054155
Kind Code:
A1
Abstract:
The present invention proposes a human interface system or interface device for two-dimensional interaction by contact or near contact with a computerized system, typically executing graphic processing software. This system detects the contact position, on a two-dimensional contact surface, of an interface device comprising a plurality of faces separated by at least two edges which intersect each other by at least one vertex or corner. This interface system comprises means for discriminating between at least two contact types within the group of: face contact, corner contact, and edge contact. In embodiments used with an IR illumination multi-touch surface, detection may moreover comprise the steps of: - pre-processing the image through several iterations of a "Top Hat" morphological operation with a multipixel structuring element; - binarizing said image using a variable threshold tuned with the speed of movement; and - limiting processing of the next image for interpreting a later contact position to an area close to the last recorded contact position.

Inventors:
CASIEZ GERY (FR)
VOGEL DANIEL (CA)
Application Number:
PCT/IB2011/002856
Publication Date:
April 18, 2013
Filing Date:
October 14, 2011
Assignee:
UNIV LILLE 1 SCIENCES & TECHNOLOGIES (FR)
INST NAT RECH INF AUTOMAT (FR)
CASIEZ GERY (FR)
VOGEL DANIEL (CA)
International Classes:
G06F3/033; G06F3/042; G06F3/048
Foreign References:
US20090273585A1 (2009-11-05)
Other References:
Rekimoto, J. et al.: "ToolStone: effective use of the physical manipulation vocabularies of input devices", Proc. of the ACM symposium on User interface software and technology, ACM, New York, NY (2000), pages 109-117, XP001171598, ISBN: 978-1-58113-212-0, DOI: 10.1145/354401.354421
Brandl et al.: "Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces", Proc. of the working conference on Advanced visual interfaces, ACM (2008), pages 154-161
Hinckley et al.: "Pen + touch = new tools" (the "Manual Deskterity" system), Proc. of the 23rd annual ACM symposium on User interface software and technology, ACM (2010), pages 27-36
Rekimoto et al.: "ToolStone: effective use of the physical manipulation vocabularies of input devices", Proc. of the 13th annual ACM symposium on User interface software and technology, ACM (2000), pages 109-117
Attorney, Agent or Firm:
PONTET ALLANO & ASSOCIES Selarl (Versailles, FR)
Claims:
CLAIMS

1. Human interface system for two-dimensional interaction by contact or near contact with a computerized system, comprising means for detecting at least one so-called contact position of an interface device (1) related to a two-dimensional so-called contact surface, notably an interactive display surface, wherein:

said interface device (1) comprises a plurality of faces (F1-F6) separated by at least two edges (E1-E12) which intersect each other by at least one corner (V1-V8), and

said interface system comprises means for discriminating between at least two contact types of said interface device (1) with the contact surface, within the group of: face contact, corner contact, and edge contact, and

said interface system comprising means for sending to said computerized system an input data comprising complementary information related to said contact position and including a result from said discriminating operation.

2. System according to the preceding claim, wherein the interface device (1) is provided with an elongated form, in which one length (101-102) dimension is at least four times larger than its other dimensions, and notably five times larger.

3. System according to any one of the preceding claims, wherein the system comprises means for identifying which geometrical feature is in contact with the contact surface, within a group of several geometrical features of the same type.

4. System according to any one of the preceding claims, wherein means for detecting a contact are embedded within the contact surface, such as an infrared multi-touch interactive display.

5. System according to the preceding claim, wherein:

- the interface device (1) comprises at least two geometrical features (F1 relatively to F2, F5 and F6 relatively to F3 and F4) of the same type which present respective passive markings (F11, F61, F62) with differentiated forms and/or color, and

- the contact surface comprises means for detecting and/or discriminating said differentiated markings through image recognition.

6. System according to the preceding claim, wherein:

- the interface device (1) comprises at least two geometrical features (V1-V4 relatively to V5-V8) of the same type which present respective active emitters (111, 112) which emit differentiated signals, and

- the contact surface comprises means for detecting and/or discriminating said differentiated emitters through recognition processing of said differentiated signals.

7. System according to any one of the preceding claims, wherein means for detecting a contact are embedded within the interface device (1), such as a proximity sensor or mechanical switch (S1-S8).

8. System according to any one of the preceding claims, wherein at least one geometrical feature (F4, F6) of the interface device (1) presents a passive haptic identification pattern (F41, respectively F61, F62) which makes it different from at least one other geometrical feature (F3, respectively F5) of a same type, so as to provide the user with a tactile feedback enabling an eye-free identification of said at least one geometrical feature (F4, F6).

9. Interface device (1) for a system according to any one of claims 1 to 8.

10. Method for receiving a graphic input from a human interface system according to any one of claims 1 to 6 communicating with at least one computerized system, said method comprising processing at least one image (FIGURE 4a to FIGURE 6a) of the contact surface for interpreting a user interaction according to the following steps:

- detecting within said image one or several areas (441, 442, 54, 62) corresponding to one or several geometrical features of the interface device;

- comparing said detected areas with stored pattern data to discriminate said areas between at least two types of geometrical feature from the group of corner type, face type and edge type; and

- sending to said computerized system an input data representing type and position of a contact position for at least one geometrical feature detected within said one or several areas.

11. Method according to claim 10, wherein the step of detecting a type of geometrical feature comprises the steps of:

- comparing a region formed by the one or several detected areas (441, 442, 54, 62) with stored pattern data corresponding to one or several kinds of faces (F1-F6) of the interface device (1); and

- in case of a negative comparison, returning data corresponding to a corner type contact position or possibly an edge type contact position.

12. Method according to any one of claims 10 to 11, further comprising the steps of:

- in case of detection of a corner type, counting a number of detected corners;

- in case of detecting a plurality of corners (441, 442), returning data corresponding to an edge type (E1-E12) contact position; and

- in case of detecting only one corner (62), returning data corresponding to a corner type (V1-V8) contact position.

13. Method according to any one of claims 10 to 12, further comprising the following steps:

- in case of detection of a given type contact position, processing the related detected area (54) for identification of one or several specific geometrical features (F6) of said type within all the identically typed geometrical features (F1-F6) of the interface device (1), through image recognition of at least one permanent marking (541, 542) on said detected geometrical feature (54),

- or through chronological comparison of at least one such marking or color varying between several successive images for the same detected contact position;

- returning data corresponding to said one or several specific geometrical features (F6).

14. Method according to any one of claims 10 to 13, moreover comprising the following steps:

- in case of detection of an edge type contact position, identifying one or several specific edges within all the edges of the interface device, through either:

• processing the related detected area (441, 442) for identification of one or several specific edges (E1-E8) of said type within all the edges (E1-E12) of the interface device (1), through comparing a measured distance (D4) between said detected areas (441, 442) with stored distance values corresponding to one or several kinds of edges (E1-E12) of the interface device (1); and/or

• using identification data of at least one corner connected to said detected edge, and/or

• image recognition of at least one marking or color on said edge;

- returning data corresponding to said one or several specific edges.

15. Method according to any one of claims 10 to 14, moreover comprising the following steps:

- in case of detection of a face type contact position, identifying one or several specific faces within all the faces of the interface device, through either:

• using identification data of at least one corner connected to said detected face, and/or

• using identification data of at least one edge connected to said detected face, and/or

• image recognition of at least one marking or color on said face;

- returning data corresponding to said one or several specific faces.

16. Method according to any one of claims 10 to 15, wherein the step of detecting one or several areas corresponding to one or several geometrical features of the interface device moreover comprises the steps of:

- executing a pre-processing of said image (FIGURE 4a to FIGURE 6a) including a "Top Hat" morphological operation with several iterations, preferably between 2 and 5 iterations, of a multipixel structuring element comprising between 3 and 9 pixels in its two dimensions, and preferably between 3 and 5 pixels;

- binarizing said image using at least one given threshold; and

- applying several iterations of a dilation morphological operation, preferably between 5 and 100 iterations, or between 10 and 40 iterations.

17. Method according to claim 16, wherein a plurality of timely successive images are processed for interpreting successive user generated contact positions and/or interface device movement, said method moreover comprising the following steps:

- computing respective timed positions of one or several detected areas (62) to provide an information on a speed of a displacement of the respective geometrical features (V1-V8) detected in said detected areas, preferably during the pre-processing step;

- using said speed information to compute a value for a variable threshold data according to a stored rule providing a lower threshold for a higher speed; and

- using said threshold data as a threshold for binarizing one or several images (FIGURE 6a) of said detected areas (62).

18. Method according to claim 17, further comprising the following steps:

- using the speed information data to determine dimensions for a so-called future detection region which is included within a global image of the contact surface and smaller than said global image, according to a stored rule providing a smaller future detection region for a higher speed;

- defining a position for said future detection region, in relation with the last detected contact position (62); and

- limiting to said future detection region the processing of the next image for interpreting a later contact position and/or interface device movement.

Description:
« Multi-touch human interface system and device for graphical input, and method for processing image in such a system »

Introduction

The present invention proposes a human interface system or interface device for two-dimensional interaction by contact or near contact with a computerized system, typically executing graphic processing software. This human interface system comprises means for detecting at least one so-called contact position, relative to a two-dimensional contact surface, of an interface device comprising a plurality of faces separated by at least two edges which intersect each other by at least one vertex or corner. This interface system comprises:

- means for discriminating between at least two contact types with the contact surface, within the group of: face contact, corner contact, and edge contact, and

- means for sending to said computerized system input data comprising complementary information related to said contact position and including a result from said discriminating operation.

In embodiments used with an Infrared (IR) illumination multi-touch surface, detecting one or several geometrical features of the interface device may moreover comprise the steps of:

- pre-processing the image through several iterations of a "Top Hat" morphological operation with a multipixel structuring element;

- binarizing said image using a variable threshold tuned with the speed of movement; and

- limiting processing of the next image for interpreting a later contact position to an area close to the last recorded contact position.

Background

The present invention pertains to the field of human interface systems for computers, of the type which enables a user to interact with the computer in a graphically chosen position of an interaction surface.

While a computer mouse only detects relative movement within a recording surface, a so-called "absolute pointing" interface enables the direct detection of the absolute position where the user points or touches the interaction surface.

Such interfaces historically used opaque surfaces, as with many electronic pens used with graphical tablets in Computer Aided Design or graphical software. Such interfaces now commonly use interaction surfaces which are also dynamic displays, such as the many interactive touch displays used with a pen-like hand-held device, or simply with the finger, for example in earlier electronic "notepads", or now in smartphones and keyboardless tablet computers, often even for interacting with the Operating System (OS) of such computerized devices.

Contact surface

An interaction surface is here defined as a two-dimensional surface, typically though not always planar, interacting with one or more objects through contact or near contact. Such interaction may be detected in different ways.

Such interface systems have long used electrical methods, such as resistive, capacitive, magnetic, near-field or piezoelectric detection, with or without a grid. For some time now, such interface systems have also used optical methods, such as IR sensor patterns, or the analysis of an image of the bottom portion of objects near or above an interactive display table, captured from below by infrared cameras, as in the "Surface" technology from the Microsoft corporation.

For a long time, interactive contact surfaces were capable of detecting only a single touch, but many are now able to detect and position several simultaneous contacts independently, and are called "multi-touch" surfaces.

Multiple modes

An important feature of a graphical human interface is its capacity to exploit several different modes, i.e. the interpretation and usage of given input data by the software will differ according to the mode. One mode may be a drawing mode along the trajectory of the contact position, while another mode may use the contact position for erasing, and still another for selecting graphical objects on the screen.

While finger touch interaction is arguably more immediate and natural in many situations, it is known to be imprecise, making it difficult to write directly or to generate precise drawings. Using a pen (or stylus) makes writing more natural and pointing more precise, and is widely used with specialized graphic software, such as that used for technical or artistic drawing.

The number of possible modes with a single device, without having to input a specific mode-changing command, directly impacts the potential performance and ergonomics of the software, as well as the freedom of the software developer, for example because it enables numerous functions with the same device.

Various possibilities have been explored for enabling easy and intuitive mode-changing.

Multimode devices

Pens and other interface devices have been proposed with one or several buttons on them, but each mode needs its own button, and more than two modes may be necessary.

A pen whose opposite end carries another contact point used as an "eraser" has been proposed, but it proved quite slow while providing only two modes.

Other methods for enabling multiple explicit modes from a pen have been proposed, such as barrel rotation, or classifying pressure, tilt, or grip position. But these can be complex to use, error-prone and ambiguous. For example, the "6D Art Pen" from the Wacom corporation provides up to 6 degrees of freedom and pressure sensing, so in theory it could combine mode-changing techniques such as tilt, barrel rotation, and pressure, but in practice this is rarely done due to error-prone classification. This device already has complex technical features, and sensing grip position is much more difficult. These types of devices are not only less reliable but also complex and expensive. Inferring modes in software is difficult to do accurately, and most users prefer explicit control.

Combined touch and device

Solutions have been proposed that use a combined finger touch and interface device, such as described in Brandl et al. "Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces." Proc. of the working conference on Advanced visual interfaces, ACM (2008), 154-161.

For example, mode-changing is obtained by touching the screen with different combinations of fingers or postures of the non-dominant hand, depending on which mode is to be activated.

Another style of mode switching with bimanual pen and touch is to consider the contextual, coordinated relationship of direct manipulation objects (e.g. notes in a sketchbook application) to change modes, such as that proposed in Hinckley et al.'s "Manual Deskterity" system in "Pen + touch = new tools", Proc. of the 23rd annual ACM symposium on User interface software and technology, ACM (2010), 27-36.

When used alone, the pen always draws and the hand always manipulates; but when used in the context of an object, an implicit mode is entered, for example a pen stroke may cut like a knife, draw a straight line along an edge, or create a copy of the object. While the number of modes is increased, various hand postures need to be recalled and performed unambiguously.

Another evolution has been proposed through the "ToolStone" device, as described in Rekimoto et al.: "ToolStone: effective use of the physical manipulation vocabularies of input devices", Proc. of the 13th annual ACM symposium on User interface software and technology, ACM (2000), 109-117.

This device is a cordless, multiple degree-of-freedom (MDOF) input device. It senses physical manipulation of itself, such as rotating, flipping, or tilting. It is most often used as a complementary input device in a bimanual interface, alongside a pen manipulated by the dominant hand, and provides several interaction techniques used for mode-changing between various tasks in graphical software (such as tool palette selection, zooming, 3D rotation, and virtual camera control).

This device has the form of a rectangular parallelepiped with dimensions of 25x40x50 mm. A different mode is activated according to the face it lies on and the orientation of that face.

While such a device may make it easier to choose a mode and remember it, it still constitutes a supplementary device, which has to be manipulated in a coordinated way with the dominant hand.

However, since the pen itself supports few modes, single-handed usage may not take advantage of this type of device; thus the number of combined pen modes, or touch and pen modes, remains limited in many cases, such as mobile usage. Moreover, frequently switching between pen-oriented modes, such as drawing, handwriting, gestures, and lasso selection, can be confusing for many users and/or hurt performance.

An object of the invention is to remedy all or part of the drawbacks and problems of the prior art, and more particularly to provide a system that is simple and economic to build and implement, and/or that is simpler, more ergonomic and more intuitive to use, enabling higher human performance.

Another object of the invention is to enable using very different modes with a unimanual (single-handed) interface device, in an easy or more intuitive way.

Still another object of the invention is to provide such a system which may be easily implemented within an existing computerized system, preferably with few or no hardware modifications of this computerized system.

Furthermore, physical features of existing pens or other interface devices may have better affordances with some modes and worse affordances with others. This may be because of the form of the device around the contact point, which may be too sharp for one usage mode or too blunt for another. It may also be because the symbolic image of the device fits some modes better than others: a sharp tip is more easily remembered as a drawing mode than a blunt one, because it resembles a classic pen rather than an eraser.

Thus, another object of the invention is to provide such a hand-held device that is better adapted and more intuitive to use for different kinds of modes.

Summary of the invention

Accordingly, the present invention proposes a human interface system or interface device for two-dimensional interaction by touch or near contact with a computerized system, typically executing graphic processing software, as recited in the present claims.

Preferably, this human interface system comprises means for detecting at least one so-called contact position of an interface device related to a two-dimensional so-called contact surface, notably an interactive display surface, wherein said interface device comprises a plurality of faces separated by at least two edges which intersect each other by at least one corner (or vertex), and wherein said interface system comprises:

- means for discriminating between at least two contact types of said interface device with the contact surface, within the group of: face contact, corner contact, and edge contact, and

- means for sending to said computerized system input data comprising complementary information related to said contact position and including a result from said discriminating operation.

Typically, the system moreover comprises means for specifically discriminating between these three contact types: corner, edge and face.

Advantageously, the interface device is provided with an elongated form, in which one length dimension is at least four times larger than its other dimensions, and notably five times larger.

Various embodiments may be implemented, where:

- means for detecting a contact are embedded within the contact surface, such as an IR interactive display; and/or

- the interface device comprises at least two geometrical features of the same type which present respective passive markings with differentiated forms and/or color, and the contact surface comprises means for detecting and/or discriminating said differentiated markings through image recognition.

According to another aspect of the invention, an interface device specifically designed for working in such an interface system is proposed.

According to still another aspect of the invention, a method is proposed for receiving a graphic input from such a human interface system communicating with at least one computerized system, said method comprising processing at least one image of the contact surface for interpreting a user interaction according to the following steps:

- detecting within said image one or several areas corresponding to one or several geometrical features of the interface device;

- comparing said detected areas with stored pattern data to discriminate said areas between at least two types of geometrical features from the group of corner type, face type and edge type; and

- sending to said computerized system an input data representing type and position of a contact position for at least one geometrical feature detected within said one or several areas.

Preferably, the step of detecting a type of geometrical feature comprises the steps of:

- comparing a region formed by the one or several detected areas with stored pattern data corresponding to one or several kinds of faces of the interface device; and

- in case of a negative comparison, returning data corresponding to a corner type contact position or possibly an edge type contact position.

Advantageously, this method further comprises the steps of:

- in case of detection of a corner type, counting a number of detected corners;

- in case of detecting a plurality of corners, returning data corresponding to an edge type contact position; and

- in case of detecting only one corner, returning data corresponding to a corner type contact position.

Preferably, the step of detecting one or several areas corresponding to one or several geometrical features of the interface device moreover comprises the steps of:

- executing a pre-processing of said image including a "Top Hat" morphological operation with several iterations, preferably between 2 and 5 iterations, of a multipixel structuring element comprising between 3 and 9 pixels in its two dimensions, and preferably between 3 and 5 pixels;

- binarizing said image using at least one given threshold; and

- applying several iterations of a dilation morphological operation, preferably between 5 and 100 iterations, or between 10 and 40 iterations.

Moreover, said method may comprise the following steps:

- computing respective timed positions of one or several detected areas to provide an information on a speed of a displacement of the respective geometrical features detected in said detected areas, preferably during the pre-processing step;

- using said speed information to compute a value for a variable threshold data according to a stored rule providing a lower threshold for a higher speed; and

- using said threshold data as a threshold for binarizing one or several of said detected areas.

Furthermore, this method may comprise the following steps:

using the speed information data to determine dimensions for a so-called future detection region which is included within the global image of the contact surface and smaller than said global image, according to at least one stored rule;

- defining a position for said future detection region, in relation with the last contact position; and

- limiting to said future detection region the processing of the next image for interpreting a later contact position and/or interface device movement.

Implementation with various detection technologies

Alternatively, or in combination with these features, the interface device may comprise at least two geometrical features of the same type which present respective active emitters (such as luminous, radio, electrical, magnetic, or vibration emitters) emitting differentiated signals (such as wavelength, pulse frequency, or coded pulses). The contact surface then comprises means for detecting and/or discriminating said differentiated emitters through recognition processing of said differentiated signals.

This and other aspects and features of the present invention will now become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments of the invention and the accompanying drawings.

Brief description of the drawings

A detailed description of examples of implementation of the present invention is provided herein below with reference to the following drawings, in which:

- FIGURE 1 is a perspective illustrating an interface device according to the invention, in an embodiment with a form of an elongated rectilinear extruded rectangle with three face sizes and eight corner emitters for inputting on an IR image processing multi-touch surface;

- FIGURE 2 is a schematic of the electronic circuit for the device of FIGURE 1;

- FIGURE 3a to FIGURE 3g are illustrations of several ways to use the device of FIGURE 1;

- FIGURE 4a to FIGURE 4e are successive images within the image processing of a contact position from the device of FIGURE 1, in case of an edge contact;

- FIGURE 5a to FIGURE 5d are successive images within the image processing of a contact position from the device of FIGURE 1, in case of a face contact;

- FIGURE 6a to FIGURE 6d are successive images within the image processing of a contact position from the device of FIGURE 1, in case of a corner contact;

- FIGURE 7a and FIGURE 7b are illustrations of an example of use of the interface device of FIGURE 1 as a selection device for the dominant hand, used for selecting a contextual command in a graphics-oriented Operating System or software;

- FIGURE 8a and FIGURE 8b are illustrations of an example of use of the interface device of FIGURE 1 as a selection device for the dominant hand, used for opening and using a color attribute palette in a graphics-oriented Operating System or software;

- FIGURE 9 is an illustration of an example of use of the interface device of FIGURE 1 as a selection device for the dominant hand, used as a mouse-replacement relative pointing device in a graphics-oriented Operating System or software;

- FIGURE 10, FIGURE 11 and FIGURE 12 are perspectives illustrating an interface device according to the invention, in other embodiments with various elongated forms, as:

o an extruded triangle with a bevel edge tip,

o a rectilinear extruded rectangle with a conic tip,

o an extruded sector of a circle;

- FIGURE 13 is a perspective illustrating an interface device according to the invention, in an embodiment with a form of an elongated rectilinear extruded rectangle with three face sizes and eight embedded corner mechanical switches for contact detecting and discriminating.

In the embodiments described hereafter, the interface system comprises a mobile handheld device, active or passive and preferably cordless, and an interaction surface.

The device is typically a polyhedron, i.e. with substantially flat faces and straight edges, while alternatives may be implemented with only some of the faces being flat. Typically, the surface is flat, though non-flat surfaces may be used too, provided they comprise a sufficiently large and non-angular portion.

The interface device and the surface are designed to cooperate in recording as input data the position and possibly angle(s) of the interface device relative to the surface when they come in contact or near contact.

The global interface system may also be realized with the handheld device alone, in embodiments where this device is able to detect by itself its position relative to one or several types of passive and/or existing surfaces. In such cases, the surface can be any surface, like a table or wall with no sensing capability.

The interface system is able to detect which corner (or vertex, or corner point), edge or face of the handheld device is in contact with the surface. Corners, edges and faces are later called geometrical features of the device.

In the preferred embodiment described hereafter, the surface is a plane but it could alternatively be for instance a sphere or another shape; and the polyhedron is a rectangular prism or another shape that can be held by the hand in a way similar to a pen.

In the preferred embodiment, the sensing surface acts as a display but the display could also be separated from the sensing surface.

The system can detect the contacting geometrical features of the polyhedron using sensors in the surface, sensors in the polyhedron, sensors in the environment, or a combination of these methods.

Optionally, the system can detect the azimuth angle (i.e. within the plane of the contact surface) and elevation angle (i.e. above the plane of the contact surface) of the polyhedron relative to the contact surface, in addition to the geometrical feature position.

Detecting the contacting geometrical features of the polyhedron using sensors in the surface may be accomplished using multiple contact point sensing by means of optical, resistive, capacitive, touch wave or force based sensing technology. Preferably, the polyhedron is designed in accordance with the sensing technology of the surface, to help detect which geometrical features are in contact with it and track their position relative to it.

In embodiments where detecting the contacting geometrical features of the polyhedron is done by using sensors in the polyhedron, this may be accomplished using several local optical proximity detectors or electromechanical switches embedded in each corner, possibly with a six-degrees-of-freedom tracker, such that the position and orientation of the device may be calculated.

In embodiments where detecting the contacting geometrical features of the polyhedron is done by using sensors in the environment external to the handheld device, this may be accomplished with one or more optical sensors (e.g. cameras) placed in the immediate area, such that the position and orientation of the polyhedron may be calculated.

Description of an exemplary embodiment

As illustrated in FIGURE 1, the present and currently preferred embodiment has a form of an elongated rectilinear extruded rectangle.

It may be seen as a pen-like shape, similar to a pastel stick as used by many artists for drawing and coloring. Compared to a standard ink pen it is here shorter, faceted, and without a well-defined nib.

This form makes it possible to obtain both a precise pen-like contact point on any corner and a stable contact on any face.

This extruded rectangle shape has been tested favorably, and its slight irregularity may help users (and software) distinguish end edges and side faces, even without looking at it. This form factor comprises 26 different contacts, potentially supporting up to 26 modes or commands. These contacts may be classified into corner-typed, edge-typed and face-typed:

- 8 corners V1 to V8;

- 4 end short edges E1, E4, E5, E8 on its two ends 101 and 102;

- 4 end medium edges E2, E3, E6, E7 on its ends 101 and 102;

- 4 side long edges E9, E10, E11, E12 between its ends 101, 102;

- 2 end faces F1, F2, or small faces, on its two ends 101 and 102;

- 2 thin side faces F3, F4, or medium faces, between its ends 101, 102;

- 2 thick side faces F5, F6, or large faces, between its ends 101, 102.

Small face F1 of end 101 is marked with a visual dot F11, thus providing the contact surface with identification means for this specific small face F1 within the group of small faces F1 and F2.

Thick side face F6 is marked with visual dots (F61 and F62 shown for face F6), thus providing the contact surface with identification means for this specific thick side face within the group of thick side faces F5 and F6. Similarly, thin side face F4 is marked with a different number of visual dots, here one dot F41, thus providing the contact surface with identification means for this specific thin side face within the group of thin side faces F3 and F4. Optionally, the number of dots may vary from one face to another, even for faces of the same dimensions, thus providing such specific identification for each of these faces.

Preferably, such marking dots F41, F61 and F62 also present a passive haptic pattern, such as an embossed marking, thus providing eye-free haptic means for tactile identification by the user.

While not illustrated in the present example, such a passive haptic pattern may also be embodied differently, such as a specific recognizable texture on the whole surface of the geometrical feature to be identified. As examples, it may be one face with a rough texture while others are smooth; or one edge with notches while others are continuous; or one corner rounded while others are sharp; or the inverse.

Hardware

A tested prototype device was 9 x 11.5 x 84 mm, which fits comfortably in the palm, its length being around the width of the palm.

The stick 1 emits and reflects infrared (IR) light from each corner V1 to V8; this IR light is captured by tabletop cameras and translated by image processing algorithms into interface device input events. Emitting IR light uses simple and small electronics, and any unaltered diffuse IR illumination multi-touch tabletop can capture its IR patterns, such as the "Surface" interactive table from Microsoft, on which it was tested. It thus provides a flexible and economic solution, avoiding the need for complex electronics such as the magneto-electric coils of magneto-electric pens.

The stick 1 comprises an elongated rectangular parallelepiped block 100 in which an internal electronic circuit is fixed or cast, including a power source or battery 18, such as a single alkaline AAAA battery, and optionally a central unit 19. These electronics drive two oppositely facing Osram SFH485 LEDs, one for each end 101 and 102, which generate 880 nm near-IR light matching the IR pass-band filter of the Microsoft "Surface" touch table. Each LED is partially embedded in a highly polished 9 x 11.5 x 8 mm transparent end block 11 and 12, such as in acrylic, using open-flame welding. The acrylic end block and LED are wrapped with a reflecting surface, such as foil tape or a metalized layer, to reflect IR light internally. Small 2 x 2 mm openings were then cut into each corner to let IR light escape through the corners, and only through them. To house the electronics and fix the LED-embedded acrylic blocks at the tips 101, 102 of the stick 1, everything is encased in a global block, here cast in urethane resin. This method creates a very solid-feeling and durable stick. An alternative approach could be a plastic housing.

After casting, the stick was painted a dark color, here a matte black, to increase the contrast of the IR light. On one end, a marking was painted, such as a 3 mm colored dot F11, for identification. A reflecting layer superposed with a clear color, here white paper labels with a metal foil back, was then applied to the side faces. The foil backing increases IR reflectivity. Each side label has one or several visual markings, here 3 mm black dots: using the number of dots and the size of the label, it is possible to identify the side.

As illustrated in FIGURE 2, with one AAAA battery and the LEDs wired in parallel, in series with a 3.6 Ω resistor, the emitted IR was found to be sufficient for tracking from the interactive table. However, a higher voltage such as a 3 V supply may be used to improve the emitted IR and make tracking of high-velocity movements more reliable. A 3 V battery-powered stick operated for about 25 hours: the circuit draws 20 mA and alkaline AAAA batteries are rated for 500 mAh. Rechargeable NiMH or NiCd AAAA batteries are only rated 1.2 V, and could be used in pairs or with an LED driver, such as the Zetex ZXSC300.

Unexpectedly, permanently activated LEDs resulted in only slight hover artifacts when combining pinpoint IR light from the corners with a high-quality diffuser (Evonik ACRYLITE 7D006) for the interactive table. It was thus possible to avoid using tip switches to activate the LEDs only at contact. LEDs may thus be permanently activated during use, possibly through a mechanical on/off switch, or other means such as an accelerometer switch or a pressure-sensing switch, together with a time-out deactivation.

Contact types and grips

Use of these different contact points, with examples of associated grips, is illustrated in FIGURE 3a to FIGURE 3g:

FIGURE 3a: When contacting a corner, here V1, the stick 1 is held like a pen using a precision grip, typically a dynamic tripod. The stick can be held at an angle to clearly disambiguate between adjacent contact points.

Depending on the device thresholds for detecting adjacent contacts, usable elevation angles are smaller than with a pen (well within 0 to 90°, clustered near 45°, such as between 30° and 50°).

FIGURE 3b: contact on a short edge, here on edge E1, defined between corners V1 and V2.

FIGURE 3c: contact on a medium edge, here on edge E3, defined between corners V1 and V3.

When contacting a short or medium edge, a slightly modified dynamic tripod grip may be used, and the increased contact area of the edge adds stability. These contact types thus make it possible to define a linear input while still providing good fine-manipulation capability.

FIGURE 3d: contact on a side long edge, here E11, defined between corners V3 and V7; this makes it possible to define a linear input over a wider length, or may be seen as a tilting position between two stable face-lying positions.

FIGURE 3e: contact on an end face, here F1 with visible corners V1, V2, V4. These contacts enable a more stable contact, while still on a small, and thus still precise, area.

Contacting an end face suggests a further modified dynamic tripod grip which is moving towards a power grip. Due to the high centre of gravity, the equilibrium is unstable and must be held to maintain state. Rolling the stick along the thumb enables 180° of azimuth rotation, like a pen barrel rotation from the prior art.

FIGURE 3f: contact on a thin side face, here F3.

FIGURE 3g: contact on a thick side face, here F5.

These contacts enable a more stable contact, and the elongated form factor also makes it possible to use the stick as a ruler and to define one or several linear longitudinal inputs.

The thick and thin side faces have a large contact area and a low centre of gravity, creating a very stable equilibrium: the grip can be loosened, or the stick set down, and the state is maintained. The required pinch grip is much easier compared to the long edge of FIGURE 3d. Also, this stability means the azimuth angle covers the full 360° by manipulating with a clutching action.

The time required to flip the stick 1 end-over-end is similar to a pen nib-to-eraser transition, which takes 1.3 s on average.

Moreover, with the invention, not all contact points are polar opposites. This makes many nearby transitions much faster, such as moving from a corner V1 to an adjacent edge E1 in a "rolling" adjacency phrase for example.

A user "tucking" the pen in their palm to interleave dominant hand touch input is made easier by the stick's size being smaller than a standard pen. The stable equilibrium of the side contacts also enable the stick 1 to be set down and released (or grip partially loosened), which enables the same kind of modes as in the ToolStone prior art.

Labeling faces, visually as well as in a passive haptic way, may be used to facilitate learning, and adding tactile patterns may be used for eye-free manipulation, thus making it easier to use such an increased number of single-handed modes efficiently.

Image processing

When the stick 1 is used on an interactive IR surface (not represented), software translates the IR light patterns of the stick into events describing the current contact point, position, and, if available, azimuth angle. An embodiment was written in C# using the Emgu 2.2.1 (www.emgu.com) OpenCV wrapper.

Image processing steps are illustrated in FIGURE 4 to FIGURE 6 for different kinds of contact, here using the Microsoft Surface SDK 1.0 SP1:

- FIGURE 4a to FIGURE 4e: a slow or still edge contact,

- FIGURE 5a to FIGURE 5d: a slow or still face contact, and

- FIGURE 6a to FIGURE 6d: a moving single corner contact.

FIGURE 4a to FIGURE 6a are image captures by the internal IR cameras, here as 768 x 576 px, 8-bit greyscale images.

Pre-processing

FIGURE 4b to FIGURE 6b show the images returned after a preprocessing step where the image capture is accessed and prepared.

In this pre-processing, a Top Hat morphological operation is applied using 3 iterations of a 3x3 structuring element. This brightens the stick's sharp, bright shapes and darkens the duller smooth contacts and images of fingers 519 and palms. This initial operation has been found to be unexpectedly useful.
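By way of illustration, this step maps onto a few calls of the Emgu wrapper named above. The following is a minimal sketch, assuming the Emgu 2.x API (the class and enum names are from that wrapper generation, and the structuring element anchor choice is an assumption):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

static class PreProcessing
{
    // "Top Hat" = source minus its morphological opening: brightens the
    // stick's small, sharp, bright shapes and suppresses larger smooth
    // regions such as fingers and palms.
    public static Image<Gray, byte> TopHat(Image<Gray, byte> capture)
    {
        // 3x3 rectangular structuring element anchored at its centre,
        // applied for 3 iterations, as described above.
        StructuringElementEx element = new StructuringElementEx(
            3, 3, 1, 1, CV_ELEMENT_SHAPE.CV_SHAPE_RECT);
        return capture.MorphologyEx(element, CV_MORPH_OP.CV_MOP_TOPHAT, 3);
    }
}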

Position detection

FIGURE 4c to FIGURE 6c show the images returned after a Position Detection step, where the approximate size and position of the contact of the stick are detected and located.

During this Position Detection, the pre-processed image is binarized using a variable threshold value. 20 iterations of the Dilate morphological operation are then applied, creating large connected blobs 43, 53 and 63 respectively. The variable threshold is computed from the velocity of the stick's contact in the previous temporal frame. Preferably, the value used for the variable threshold is linearly interpolated between a threshold of 20 when the stick is moving faster than 5 mm/s, and a threshold of 66 when the stick is moving slower than 1 mm/s.
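This speed-to-threshold rule lends itself to a direct expression. A minimal sketch in C# (the clamped linear interpolation form is an assumption consistent with the two anchor points quoted above; ThresholdBinary and Dilate are the corresponding Emgu 2.x calls):

using System;
using Emgu.CV;
using Emgu.CV.Structure;

static class PositionDetection
{
    // Threshold 66 at or below 1 mm/s, 20 at or above 5 mm/s,
    // linearly interpolated in between.
    public static int VariableThreshold(double speedMmPerS)
    {
        double t = Math.Max(0.0, Math.Min(1.0, (speedMmPerS - 1.0) / 4.0));
        return (int)Math.Round(66.0 + t * (20.0 - 66.0));
    }

    public static Image<Gray, byte> Binarize(Image<Gray, byte> preProcessed,
                                             double speedMmPerS)
    {
        Gray threshold = new Gray(VariableThreshold(speedMmPerS));
        // 20 dilation iterations merge nearby bright spots into one large
        // connected blob per contact region (blobs 43, 53, 63).
        return preProcessed.ThresholdBinary(threshold, new Gray(255)).Dilate(20);
    }
}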

When at rest or moving slowly (FIGURE 4 and FIGURE 5), such a high threshold isolates the stick's image from everything else. With fast movements, the camera exposure blurs the image 61 of the corners or sides, as can be seen in FIGURE 6a compared to 411, 412 in FIGURE 4a and 51 in FIGURE 5a. Using a reduced threshold thus makes it possible to better exploit the stick's image.

Of course, as the threshold is reduced, the intensity of other types of surface contacts will be above the threshold. To address this, when several blobs are detected, a comparison is made with the last recorded stick position, and the closest binarized image blob is used.

This was found to be a surprisingly effective and simple rule. Using this approximate contact position, the velocity can then be updated (low-pass filtered with a 0.03 Hz cut-off).

If the velocity is above 1.4 mm/s, processing is stopped, and the approximate contact position, together with the last known stick contact type and azimuth angle, are used to construct a stick movement event.
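The 0.03 Hz low-pass filtering of the velocity mentioned above can be realized as a first-order exponential filter; a minimal sketch (the first-order form and the per-frame time-step handling are assumptions, only the cut-off frequency comes from the text):

using System;

// First-order low-pass filter used to smooth the velocity estimate.
sealed class LowPassFilter
{
    private readonly double _cutoffHz;
    private double _state;
    private bool _initialized;

    public LowPassFilter(double cutoffHz) { _cutoffHz = cutoffHz; }

    public double Filter(double value, double dtSeconds)
    {
        // Smoothing factor derived from the cut-off frequency:
        // alpha = dt / (dt + 1 / (2 * pi * fc)).
        double alpha = dtSeconds / (dtSeconds + 1.0 / (2.0 * Math.PI * _cutoffHz));
        _state = _initialized ? _state + alpha * (value - _state) : value;
        _initialized = true;
        return _state;
    }
}

A filter instance would be created once per tracked contact, e.g. new LowPassFilter(0.03), and fed the raw velocity estimate each frame.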

Side detection

If the velocity is lower, processing continues with a Side Detection step, which checks whether a side face F3 to F6 is touching; the result is shown in FIGURE 4d to FIGURE 5d.

The pre-processed image is first binarized using a threshold of 66.

Then, the minimum area rectangles 441, 442 and 54 respectively of the white outer connected component blobs 43, 53 are found, here by using the Emgu FindContours and GetMinAreaRect functions. These are compared to the expected width and height of the thick or thin side face labels, retrieved from memory.

If a match is found, the marking within the blob is used to identify which side is touching, by comparison with the marking feature patterns F41, F61 and F62 stored for each face of the stick. As illustrated in FIGURE 5d, the number of black connected component blobs 541, 542 within the matched blob 54 is used to identify which side face is touching, here thick side face F6. The minimum area rectangle function also provides the azimuth angle A5 of the side face relative to the surface.
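A sketch of this step, built around the FindContours and GetMinAreaRect functions the text names (the label sizes and the matching tolerance below are hypothetical placeholders; the real values are retrieved from memory as described):

using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static class SideDetection
{
    // Hypothetical expected label sizes in pixels for the thin (F3, F4)
    // and thick (F5, F6) side faces.
    static readonly SizeF ThinLabel = new SizeF(20f, 90f);
    static readonly SizeF ThickLabel = new SizeF(26f, 90f);

    public static bool TryDetectSide(Image<Gray, byte> preProcessed,
                                     out MCvBox2D sideBox)
    {
        sideBox = default(MCvBox2D);
        Image<Gray, byte> binary =
            preProcessed.ThresholdBinary(new Gray(66), new Gray(255));
        for (Contour<Point> c = binary.FindContours(); c != null; c = c.HNext)
        {
            MCvBox2D box = c.GetMinAreaRect();
            if (Matches(box.size, ThinLabel) || Matches(box.size, ThickLabel))
            {
                // box.angle is the azimuth A5 of the side face; counting
                // the dark dot blobs inside this rectangle then identifies
                // the specific face (e.g. F6).
                sideBox = box;
                return true;
            }
        }
        return false;
    }

    static bool Matches(SizeF got, SizeF want)
    {
        const float tol = 4f; // hypothetical tolerance
        return Math.Abs(got.Width - want.Width) < tol
            && Math.Abs(got.Height - want.Height) < tol;
    }
}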

Corner, Edge, and End Detection

If no matching blobs were found in the side detection step, processing goes on with searching for corners, as illustrated in FIGURE 4e.

The pre-processed image is binarized again, using a higher threshold, here of 75. Any connected component blob with an area greater than the maximal length of the edges to be detected is removed (here 16 pixels for short and medium edges). The number of remaining blobs 441, 442 and their relative distance D4 then determine the type of contact point, as sketched after the following list:

- 1 blob is a corner (VI to V8);

- 2 blobs are an edge (El to E12);

o a short edge if their distance is 9.5 px ± 3 px,

o a medium edge for 14 px ± 3 px,

o a long edge if their distance is 100 px ± 3 px (for the present stick example);

- 4 blobs are an end face (F1 or F2); and

- any other number of blobs is considered ambiguous and flagged as such.
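A minimal sketch of this classification rule (the blob centres are assumed to be already extracted; treating an unmatched two-blob distance as ambiguous is an assumption):

using System;
using System.Drawing;

enum ContactType { Corner, Edge, EndFace, Ambiguous }
enum EdgeKind { Short, Medium, Long, Unknown }

static class ContactClassifier
{
    // Classifies a contact from the number of remaining blobs and, for
    // two blobs, from their relative distance D4 in pixels.
    public static ContactType Classify(PointF[] blobCenters, out EdgeKind edge)
    {
        edge = EdgeKind.Unknown;
        switch (blobCenters.Length)
        {
            case 1:
                return ContactType.Corner;                    // V1 to V8
            case 2:
                double d = Distance(blobCenters[0], blobCenters[1]);
                if (Math.Abs(d - 9.5) <= 3.0) edge = EdgeKind.Short;        // E1, E4, E5, E8
                else if (Math.Abs(d - 14.0) <= 3.0) edge = EdgeKind.Medium; // E2, E3, E6, E7
                else if (Math.Abs(d - 100.0) <= 3.0) edge = EdgeKind.Long;  // E9 to E12
                return edge == EdgeKind.Unknown ? ContactType.Ambiguous
                                                : ContactType.Edge;
            case 4:
                return ContactType.EndFace;                   // F1 or F2
            default:
                return ContactType.Ambiguous;
        }
    }

    static double Distance(PointF a, PointF b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}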

If an end face is detected, a lower threshold of 55 is then used in a new binarization of the pre-processed image, applied to the area between the LEDs, to determine whether there is a white dot marking, which identifies which end face F1 or F2 is contacting the contact surface.

An alternative strategy would be to find the best match to all known contacts. For example, 3 blobs are either a corner with the stick held high, or an end with a slight tilt. These could be identified using intra-blob distances, rather than labeled ambiguous. In the Side Detection and Corner, Edge, and End Detection steps, an optimization was made to restrict the processing area to the contact blob bounding rectangle found in a first "rough" sub-step of the Position Detection step. The algorithm ran at 45 Hz on a standard Microsoft "Surface" interactive table under Vista, with a 2.13 GHz dual-core CPU and 2 GB of RAM.
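This processing-area restriction, and the speed-dependent "future detection region" recited in the claims, can be sketched as follows (the padding values are illustrative assumptions; the claims only require a smaller region for a higher speed):

using System.Drawing;

static class RegionOfInterest
{
    // Restricts processing of the next frame to a window around the last
    // detected contact, shrinking the window as the speed rises.
    public static Rectangle NextSearchRegion(Rectangle lastContactBounds,
                                             double speedMmPerS,
                                             Size frameSize)
    {
        int pad = speedMmPerS > 5.0 ? 20 : 60; // hypothetical padding rule
        Rectangle region = Rectangle.Inflate(lastContactBounds, pad, pad);
        region.Intersect(new Rectangle(Point.Empty, frameSize));
        return region;
    }
}

In Emgu, the resulting rectangle can be applied as the region of interest of the next captured image, so that all subsequent steps operate on the reduced area only.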

Applications

As illustrated in FIGURE 3, this large variety of different contact points may have diverse affordances, which may be chosen according to symbolic, intuitive features. For example: the small corner V1 (FIGURE 3a) size and tripod grip suggest drawing or writing. A medium edge E3 (FIGURE 3c) is reminiscent of a highlighter. When oriented vertically, the same medium edge E3 is easily associated with a text insertion point. A long side edge E11 (FIGURE 3d) has a sharpness like a cutting blade. The flat end face F1 (FIGURE 3e) is easily remembered as acting like a stamp. When laid on its side F3 (FIGURE 3f) or F5 (FIGURE 3g), the stick creates a tangible handle.

Moreover, it can be seen that the precise input in corner or edge position is combined with more stable face positions in the same device.

As the stick is the same device that may be used for main tasks by the dominant hand on a direct input display, it is the natural focus of attention.

Thus, differently from a bimanual manipulation from the prior art, the invention makes it possible to leverage a proximal visual feedback to confirm the current mode: e.g. a rectangular "shadow" outlining the current contact may be displayed, and most activated modes may have characteristic visual feedback such as displaying a guideline, or rendering a dashed trail for lasso selection.

At a base level, the stick 1 can function like a pen, but also utilize the affordances and characteristics of different kinds of contact to switch between modes (e.g. corner for drawing, end edge for highlighting).

Face input can also be interleaved or coordinated with corner or edge input. For example, the stick 1 may be used in the non-dominant hand to set the mode for dominant hand touch.

An interesting possibility is same-hand touch input (i.e. finger-touch input) enabled by tucking the stick 1 in the palm, or setting it down on a stable side, or loosening the grip to free up a finger or thumb. The latter enables a hybrid style of "touch + handle" interaction, in which nearby touch input fine-tunes a mode enabled by the stick, as illustrated in FIGURE 8.

It can also be used for pen-context techniques where the mode enabled by the stick contact creates the context for touch, as illustrated in FIGURE 7.

The invention can also realize easy and intuitive touch-context techniques where the pen mode is inferred by the object context created by touch.

Moreover, the invention makes it possible to obtain numerous modes with a single interface device while keeping all "pen-gripping" modes for writing-like modes, as is often recommended and implemented in recent practice, such as in Manual Deskterity, where "the pen always writes".

Below is a list of interesting qualities which define the design space enabled by the invention, which cover these different functional styles:

• Mode Mapping: Providing precision and physical affordance when assigning contact point to mode (e.g. short edge is a highlighter, long side a ruler, end a stamp).

• Group Modes: Keeping contact points of task-related modes near each other (e.g. creation and revision at each end).

• Use Adjacency Phrases: Providing fast transitions among contacts can enable specialized modes (e.g. edge to corner roll).

• Parameters for Direct Manipulation: X-Y position and angle can be used for direct manipulation (e.g. position a guideline), or to fine-tune a mode (e.g. pick a default option).

• Parameters at Invocation: X-Y position and angle can fine-tune modes (e.g. object under contact to form context, angle on contact to pick a sub-mode).

• Set It Down: For stable sides, the stick will maintain the mode while freeing the hand for touch. Including nearby widgets revealed by the mode can further exploit this.

• Use "Touch + Handle": By loosening the grip on the stick, a finger or thumb can manipulate nearby widgets.

• Leverage "Tucking": Design techniques that interleave touch and stick with one hand.

• Set Context for Touch: Enabling "pen" context modes.

• Non-Dominant Hand Usage: Using more stable stick contacts to set the mode for dominant hand touch.

Examples of functional utilization

FIGURE 7 to FIGURE 9 illustrate in more detail several examples of functional implementations, beyond the analogy with a real pastel crayon, which advantageously combine pen-like input with other complementary modes. The demonstrator software was a sketching- and drafting-inspired application built using C#, WPF, and the Microsoft Surface SDK. It includes sketch-based drawing annotation like "Manual Deskterity", but also demonstrates additional types of precise pen-like input and modal tools like guidelines. Other techniques may be implemented which provide specific differences and advantages, such as unimanual multi-modal input leveraging the characteristics of different contact points, and pen-context techniques where the invention's device sets the context for manual touch.

Pen-Like Input

The level of precision and affordance when contacting the corner or end edges is similar to one-handed pen-like interactions. The corner is used for freehand drawing, since this is arguably the most precise pen-like contact. Strokes made with the short end edge are interpreted as shape gestures. These shape strokes are analyzed with the .NET 4 ink recognizer and replaced by a beautified/corrected/perfected version. For example, a manually drawn circular shape may be recognized and replaced with a perfect circle; likewise for ellipses, rectangles, squares, triangles, etc. Strokes are rendered in a dashed pattern in the current stroke color as visual feedback of the current "beautifying" mode.

The medium end edge may perform a lasso selection, thus implementing a third pen-like input mode. Lasso selections are less precise than drawing and writing, but still require control over shape. A thick black dashed stroke may then provide visual feedback.

In support of our drafting application scenario, users can enter typographic text by writing with the device according to the invention. Like drawing, writing is a precise task requiring fine grain manipulation, so a corner contact is most appropriate (cf. FIGURE 3a).

Automatically distinguishing between drawing and writing is often unreliable due to many ambiguous strokes, such as strokes resembling an 'O', 'I', or 'L' for example. With the invention, the user may be enabled to explicitly switch between writing and drawing mode.

In embodiments where the stick 1 is able to distinguish between specific corners, two different corners may be used for these two different pen-like tasks.

Alternatively, contact point adjacencies may be exploited, such as using a "roll" from a short end edge E1 to a corner V1 to enter a corner text writing mode. A roll may then be recognized when there is a change to a corner contact less than one second after a short end edge contact.
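A sketch of this timing rule (the event-callback shape is an assumption; only the one-second window comes from the text):

using System;

// Recognizes the "roll" adjacency phrase: a corner contact arriving less
// than one second after a short end edge contact toggles text mode.
sealed class RollDetector
{
    private DateTime _lastShortEdgeContact = DateTime.MinValue;

    public void OnShortEndEdgeContact()
    {
        _lastShortEdgeContact = DateTime.UtcNow;
    }

    // Returns true when this corner contact completes a "roll".
    public bool OnCornerContact()
    {
        return (DateTime.UtcNow - _lastShortEdgeContact).TotalSeconds < 1.0;
    }
}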

Visual feedback may be provided with a little notification tab which says "Roll: Text", and with the stroke color changing to the current font color. Corner strokes may then return to freehand drawing after a subsequent mode change, or when the user explicitly exits by toggling with another "roll" from short end edge to corner.

Contextual Commands

Contextual commands are common in most applications. In Microsoft Windows, they are accessed by right-clicking an object and selecting from a context menu (such as 'copy'). In "Manual Deskterity", copying is performed by dragging the pen off an object held by a finger, which requires two hands and makes multiple or distant duplication difficult. Moreover, adding more contextual commands means adding more "touch + pen" gestures. As illustrated in FIGURE 7, with the invention an end contact on an object 71 may open a contextual menu showing various commands such as 'cut' 72, 'copy' 73, 'paste' 74, and 'attr' 75 (e.g. for pasting clipboard object attributes only, such as colors, typeface, etc.). The end stamping motion affordance (cf. FIGURE 3e) may be seen as intuitively matching these actions. To support one-handed operation, the azimuth angle A7a, A7b may be used to pre-select a default command when the stick is immediately lifted. For example, when the thick side face F6 faces left-right as in FIGURE 7b, a given button 74 'paste' is pre-selected, but when facing up-down as in FIGURE 7a, another button 73 'copy' is pre-selected. The command to run is then selected, such as with a finger or the stick itself.

Optionally, the menu may be revealed only after a short time, such as 200 ms, encouraging expert users to quickly access these default commands without visual clutter and enabling an easier novice-to-expert transition. Also, when a command is selected by a non-dominant finger, it may be used as the new default action for the current azimuth orientation of the stick. Thus, if the user selects the 'attr' button 75 (attributes) in the situation of FIGURE 7b, this command could become the new default when the thick side faces left-right, allowing the user to rapidly paste attributes to multiple drawing objects.
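
The following sketch illustrates this azimuth-dependent default mechanism under assumptions of our own: the quantization into two orientations and the 45-degree thresholds are illustrative, not taken from the present description:

    using System;
    using System.Collections.Generic;

    class ContextMenuDefaults
    {
        enum Orientation { LeftRight, UpDown }

        // Initial defaults matching FIGURES 7a/7b; each orientation remembers
        // its own default command.
        readonly Dictionary<Orientation, string> _defaults = new()
        {
            [Orientation.LeftRight] = "paste",   // FIGURE 7b
            [Orientation.UpDown] = "copy",       // FIGURE 7a
        };

        static Orientation Quantize(double azimuthDegrees)
        {
            double a = ((azimuthDegrees % 180) + 180) % 180;  // fold into [0, 180)
            return (a < 45 || a >= 135) ? Orientation.LeftRight : Orientation.UpDown;
        }

        // Pre-selected command executed on a quick stamp-and-lift.
        public string DefaultFor(double azimuth) => _defaults[Quantize(azimuth)];

        // A command picked from the revealed menu becomes the new default for
        // the current orientation (e.g. 'attr' in the FIGURE 7b situation).
        public void Learn(double azimuth, string command) =>
            _defaults[Quantize(azimuth)] = command;
    }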

Optionally, a different context menu could be associated with each end 101, 102 of the stick 1.

Attribute Palettes

As illustrated in FIGURE 8, laying the stick down on one thick side (here on face F5) may open one of several attribute palettes (such as 'fill color', 'stroke color', 'font color', and 'font properties'). Since this is a very stable contact, the stick 1 can be released as shown in FIGURE 8b, and an attribute can then be selected with the same hand. The stick 1 may also be held by the dominant hand as the non-dominant hand selects within the palette.

Touch sensitive buttons 821, 822 may be displayed just above the stick 1 to cycle through the different palettes. With this placement, simultaneous single-handed manipulation of device and touch is possible by loosening the grip to the forefinger only, while using the middle and ring fingers to tap on the selection buttons 81a to 81h as shown in FIGURE 8a.

Also, touch-dragging on the palette bezel may be used as a command for the palette 81 to be "peeled" off the stick, so that it remains visible after the stick is lifted away. This can be done with non-dominant fingers, or with the dominant hand by loosening the grip, dropping the index finger, and then tucking the stick in the hand. When cycling palettes, peeled palettes may be temporarily brought to the stick's location with their peeled location shown as a dashed outline. Palettes may be hidden by tapping the same thick side F6 on or near the palette, which has the feeling of "picking up" the palette. It has been found that with palettes it can also be natural to pass the stick to the non-dominant hand, thus freeing the dominant hand to touch for attribute selection.
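
A minimal state sketch of palette cycling and "peeling", under our own assumptions about the data kept per palette (nothing here is prescribed by the present description):

    using System;
    using System.Collections.Generic;

    class PaletteManager
    {
        readonly List<string> _palettes = new()
            { "fill color", "stroke color", "font color", "font properties" };
        readonly Dictionary<string, (double X, double Y)> _peeled = new();
        int _current;

        public string Current => _palettes[_current];

        // Buttons such as 821/822 cycle palettes; a peeled palette keeps its
        // own location, shown as a dashed outline while temporarily recalled.
        public void CycleNext() => _current = (_current + 1) % _palettes.Count;

        // Touch-drag on the bezel peels the current palette off the stick.
        public void Peel(double x, double y) => _peeled[Current] = (x, y);

        // Tapping the same thick side on or near the palette "picks it up".
        public void PickUp(string name) => _peeled.Remove(name);

        public bool IsPeeled(string name) => _peeled.ContainsKey(name);
    }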

Alternate methods of cycling palettes have been explored, such as using an up or down rolling action from a side face to an adjacent long edge, such as from F5 in FIGURE 3g to E11 in FIGURE 3d.

Guidelines and Alignment

In commercial applications, guidelines are usually created by dragging off rulers anchored at the edge of the canvas. On a large interactive tabletop, this may require reaching too far, may clutter the drawing area, and favors horizontal and vertical guidelines.

With the invention, users may be enabled to create guidelines (not represented) at any angle by using specific modes of the interface system, such as contacting the thin side face, such as F3 in FIGURE 3f. The stick may be used to translate and rotate the guideline, and, similar to palettes, guidelines may be peeled off (or perhaps "pinned down") with a dominant or non-dominant touch. Guidelines may be "picked up" by contacting the thin side nearby in the same orientation. Adjusting the guideline snap angle could be another example of single-handed, simultaneous stick and touch manipulation: with a loosened grip, the thumb is free to adjust a snap angle by dragging on the surface near the stick.
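
A minimal sketch of angle snapping, assuming a nearest-multiple rule and an illustrative 15-degree default increment (neither value is stated in the present description):

    using System;

    static class GuidelineSnap
    {
        // Snap an angle in degrees to the nearest multiple of 'increment'.
        public static double Snap(double angle, double increment = 15.0)
        {
            return Math.Round(angle / increment) * increment;
        }
    }

    // e.g. Snap(47.3) == 45 and Snap(52.6) == 60 with the default 15-degree step.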

The guideline tool may also be enabled to align objects. Tapping or swiping a touch sensitive target just above the stick may align the currently selected objects: swiping left may left-align, swiping right may right-align, and tapping may centre-align. While holding the stick, these gestures are most comfortably done with the non-dominant hand, though centre taps are not too difficult with the dominant hand. Moreover, given the stability of the thin side, the dominant hand can perform a complete alignment task: after positioning a temporary alignment guideline, the stick is released. Since the system is still in guideline mode, the desired alignment command may be performed with a tap or swipe, and the stick is then picked up to remove the temporary guideline.
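
The gesture-to-command mapping for the alignment target can be sketched as follows; the Gesture enum and the command names are our illustration:

    using System;

    enum Gesture { Tap, SwipeLeft, SwipeRight }

    static class AlignTool
    {
        // Swipe left -> left-align, swipe right -> right-align, tap -> centre-align.
        public static string CommandFor(Gesture g) => g switch
        {
            Gesture.SwipeLeft  => "align-left",
            Gesture.SwipeRight => "align-right",
            Gesture.Tap        => "align-centre",
            _ => "none",
        };
    }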

Mouse-Like Pointing

As illustrated in FIGURE 9, in cases where using a conventional mouse on an interactive tabletop can outperform touch, the interface device according to the invention may be implemented to behave like a standard mouse when it is laid down on one thick side face, such as on F5 in FIGURE 3f. A stylized image 91 of a mouse is rendered around the stick 1, with its entire surface 90 acting as a single touch sensitive button, or possibly as several buttons. This large button accommodates the restricted free finger movements when "clicking", while the remaining fingers continue to grip the stick 1. With a little practice, it is possible to click and drag objects while maintaining a grip. A "mouse-aligned reference frame" is then created by mapping the stick's movement vectors to cursor 99 movements in the display space. A pointer acceleration function is tuned for aggressive cursor movement with fast movements, but near 1:1 control-display gain when the stick is moved slowly. This enables precise selection and minimizes clutching over long distances.
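
One possible shape of such a pointer acceleration function is sketched below; the speed thresholds and gain values are illustrative assumptions, not the tuning used by the demonstrator:

    using System;

    static class PointerAcceleration
    {
        // 'speed' in device units per second; returns the control-display gain:
        // near 1:1 for slow, precise movements, aggressive for fast ones.
        public static double Gain(double speed,
                                  double slow = 50, double fast = 500,
                                  double minGain = 1.0, double maxGain = 8.0)
        {
            if (speed <= slow) return minGain;            // precise selection
            if (speed >= fast) return maxGain;            // cover long distances
            double t = (speed - slow) / (fast - slow);    // linear interpolation
            return minGain + t * (maxGain - minGain);
        }

        // Cursor displacement for one frame in the mouse-aligned reference frame.
        public static (double dx, double dy) Move(double vx, double vy, double dt)
        {
            double speed = Math.Sqrt(vx * vx + vy * vy);
            double g = Gain(speed);
            return (g * vx * dt, g * vy * dt);
        }
    }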

Specific identification of geometrical features

Optionally, the system may comprise means for specifically identifying which geometrical feature is in contact with the contact surface, within the plurality of geometrical features of the same type, such as:

- means for identifying which corner is in contact with the contact surface, and/or

- means for identifying which face is in contact with the contact surface, and/or

- means for identifying which edge is in contact with the contact surface.

Such identification may be strictly individual, or limited to a subgroup within all those of the same type (e.g. any corner belonging to a specific end 101 or 102 of the device).

Such identification may be implemented in several manners, possibly combined with each other.

In one implementation, the interface device comprises several geometrical features of the same type carrying respective active emitters (such as luminous, radio, electrical, magnetic, or vibration emitters) which emit differentiated signals (differing, for example, in wavelength, pulse frequency, or pulse coding), and the contact surface comprises means for detecting and/or discriminating these features through recognition processing of said differentiated signals.

As an example, LEDs 101 and 102 may be commanded to emit in discontinuous phases or pulses tuned to different frequencies, such as 50 Hz and 100 Hz. A chronological comparison is then made between several successive frames for the same detected contact position.
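
A minimal sketch of such a chronological comparison, under strong assumptions of our own (a camera frame rate that samples the two pulse rates into distinguishable on/off patterns, and a simple transition-count threshold):

    using System;

    static class LedDiscriminator
    {
        // 'visible' is the blob's on/off state in N successive frames at the
        // same detected contact position; the faster-pulsing LED produces more
        // on/off transitions over the window.
        public static string Classify(bool[] visible)
        {
            int transitions = 0;
            for (int i = 1; i < visible.Length; i++)
                if (visible[i] != visible[i - 1]) transitions++;

            // Threshold is illustrative; it depends on the camera frame rate.
            return transitions > visible.Length / 2
                ? "LED 102 (100 Hz)"
                : "LED 101 (50 Hz)";
        }
    }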

Other embodiment possibilities

FIGURE 13 illustrates an interface device 2 according to the invention, in another embodiment taking the form of an elongated rectilinear extruded rectangle with three face sizes and eight corners, similarly to the embodiment of FIGURE 1. The eight corners have embedded mechanical switches S1 to S8 for contact input on a passive contact surface. Each switch is connected to a central unit 19 (the inside box). This central unit detects which geometrical feature is in contact with the surface by identifying which switches are pressed.
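
The switch-to-feature inference can be sketched as follows; the mapping (one pressed switch for a corner, two for an edge, four for a face) is our own deduction from the box geometry, not a rule stated in the present description:

    using System;
    using System.Linq;

    static class SwitchClassifier
    {
        // 'switches' holds the pressed state of S1..S8, one per corner.
        public static string Classify(bool[] switches)
        {
            int pressed = switches.Count(s => s);
            return pressed switch
            {
                1 => "corner contact",
                2 => "edge contact",
                4 => "face contact",
                _ => "unknown / no contact",
            };
        }
    }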

The tracking of the geometrical feature is operated by means of an embedded six degrees of freedom tracker located inside the central unit 19. This tracking is done relative to the surface, or to a reference station whose position is known relative to the surface. The central unit is powered by a battery 18 and wirelessly sends the events, corresponding to the geometrical features and their positions relative to the touch surface, to a main central processing unit (not represented, e.g. a computer) which updates the display in accordance with the function associated with the geometrical feature.

In still other embodiments, not represented here and alternative to or combined with those described above, the switches may be replaced by contact or proximity detectors, such as capacitive sensors or passive or active optical sensors.

In some embodiments, alternative to or combined with those described above, optical sensors embedded in the corners may be chosen for reading a grid-type pattern existing on the interaction surface, thus directly providing position data relative to this surface.

Although various embodiments have been illustrated, this was for the purpose of describing, but not limiting, the invention. Various modifications will become apparent to those skilled in the art and are within the scope of this invention.