

Title:
OBJECT TRACKING USING OBJECT SPEED INFORMATION
Document Type and Number:
WIPO Patent Application WO/2018/140200
Kind Code:
A1
Abstract:
Embodiments herein include a method for tracking objects in a sensing region that includes determining a first position of a first object and a first position of a second object in the sensing region. The method includes determining that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position. The method includes calculating a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames. The method includes predicting a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects, and determining which of the objects remained in the sensing region based on the predicted next positions.

Inventors:
XU JUNZE (CN)
Application Number:
PCT/US2018/012239
Publication Date:
August 02, 2018
Filing Date:
January 03, 2018
Assignee:
SYNAPTICS INC (US)
International Classes:
G06F3/041; G06F3/044
Foreign References:
US20160357280A1 (2016-12-08)
US20150355740A1 (2015-12-10)
US20130257761A1 (2013-10-03)
US20120206380A1 (2012-08-16)
US20130050100A1 (2013-02-28)
Attorney, Agent or Firm:
TABOADA, Keith et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for tracking objects on a touch sensing device, comprising:

determining a first position of a first object and a first position of a second object in a sensing region;

determining that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position;

calculating a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames;

predicting a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects; and

determining which of the objects remained in the sensing region based at least in part on the predicted next positions.

2. The method of claim 1, wherein determining which of the objects remained in the sensing region based at least in part on the predicted next positions comprises determining a smaller of absolute values of each of the predicted next positions minus the current position of the object that remained in the sensing region, wherein the smaller absolute value determines the object that remained in the sensing region.

3. The method of claim 1, wherein the time interval is determined by calculating a difference between time stamps associated with the two previous frames.

4. The method of claim 1, wherein predicting the next positions for each of the objects further comprises adding a speed of an object multiplied by a time interval between a current frame and a previous frame to the first position of the object.

5. The method of claim 1, wherein determining the difference in position of each respective object in two previous frames comprises determining a difference in position between each of the objects at the first position and at a second position in an immediately previous frame.

6. The method of claim 1, wherein the time interval between the two previous frames comprises an inverse of a sensing frequency of a touch sensor device.

7. The method of claim 1, wherein predicting the next positions of each of the objects comprises predicting each next position along a line that coincides with a direction of movement of an object from its respective position during a previous frame to the first position.

8. An input device for capacitive touch sensing, comprising:

a capacitive touch sensor configured to:

detect a first position of a first object and a first position of a second object in a sensing region; and

detect that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position; and

a processing system configured to:

calculate a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames;

predict a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects; and

determine which of the objects remained in the sensing region based at least in part on the predicted next positions.

9. The input device of claim 8, further comprising:

a memory configured to store positions of the objects in at least two previous frames.

10. The input device of claim 8, wherein the processing system is further configured to determine which of the objects remained in the sensing region based at least in part on the predicted next positions by determining a smaller of absolute values of each of the predicted next positions minus the current position of the object that remained in the sensing region, wherein the smaller absolute value determines the object that remained in the sensing region.

11. The input device of claim 8, wherein the processing system is further configured to determine the time interval by calculating a difference between time stamps associated with the two previous frames.

12. The input device of claim 8, wherein the processing system is further configured to predict the next position of each of the objects by adding a speed of an object multiplied by a time interval between a current frame and a previous frame to the first position of the object.

13. The input device of claim 8, wherein the processing system is further configured to predict the next position of each of the objects by predicting each next position along a line that coincides with a direction of movement of an object from its respective position during a previous frame to the first position.

14. The input device of claim 8, wherein the time interval between the two previous frames comprises an inverse of a sensing frequency of a touch sensor device.

15. A processing system for tracking objects on a touch sensing device, the processing system comprising:

a determination module configured to:

determine a first position of a first object and a first position of a second object in a sensing region;

determine that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position; and

a processor configured to:

calculate a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames;

predict a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects; and

determine which of the objects remained in the sensing region based at least in part on the predicted next positions.

16. The processing system of claim 15, wherein the processor is further configured to determine which of the objects remained in the sensing region based at least in part on the predicted next positions by determining a smaller of absolute values of each of the predicted next positions minus the current position of the object that remained in the sensing region.

17. The processing system of claim 15, wherein the processor is further configured to determine the time interval by calculating a difference between time stamps associated with the two previous frames.

18. The processing system of claim 15, wherein the processor is further configured to predict the next position of each of the objects by adding a speed of an object multiplied by a time interval between a current frame and a previous frame to the first position of the object.

19. The processing system of claim 15, wherein the processor is further configured to predict the next position of each of the objects by predicting each next position along a line that coincides with a direction of movement of an object from its respective position during a previous frame to the first position.

20. The processing system of claim 15, wherein the processor determines the time interval by calculating a difference between time stamps associated with the two previous frames.

Description:
OBJECT TRACKING USING OBJECT SPEED INFORMATION

[0001] Embodiments of the present invention generally relate to a method and apparatus for touch sensing, and more specifically, to tracking objects using an input device.

Description of the Related Art

[0002] Input devices including proximity sensor devices (also commonly called touchpads or touch sensor devices) are widely used in a variety of electronic systems. A proximity sensor device typically includes a sensing region, often demarked by a surface, in which the proximity sensor device determines the presence, location and/or motion of one or more input objects. Proximity sensor devices may be used to provide interfaces for the electronic system. For example, proximity sensor devices are often used as input devices for larger computing systems (such as opaque touchpads integrated in, or peripheral to, notebook or desktop computers). Proximity sensor devices are also often used in smaller computing systems (such as touch screens integrated in cellular phones).

SUMMARY

[0003] Embodiments described herein include a method for tracking objects in a sensing region that includes determining a first position of a first object and a first position of a second object in the sensing region. The method includes determining that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position. A speed of each of the first object and the second object is calculated based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames. A next position of each of the objects is predicted based on the first position of each of the objects and the speed of each of the objects, and then which of the objects remained in the sensing region is determined based on the predicted next positions.

[0004] In another embodiment, an input device for capacitive touch sensing includes a processor, a memory, and a capacitive touch sensor configured to detect a first position of a first object and a first position of a second object in a sensing region. The capacitive touch sensor is further configured to detect that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position. The input device includes a processing system configured to calculate a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames. The processing system is further configured to predict a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects, and determine which of the objects remained in the sensing region based at least in part on the predicted next positions.

[0005] In another embodiment, a processing system for capacitive touch sensing is configured to determine a first position of a first object and a first position of a second object in a sensing region. The processing system is further configured to determine that one of the objects has left the sensing region and one of the objects has remained in the sensing region at a current position. The processing system is further configured to calculate a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames. The processing system is configured to predict a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects, and determine which of the objects remained in the sensing region based at least in part on the predicted next positions.

[0006] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

[0007] Figure 1 is a block diagram of a system that includes an input device according to an embodiment.

[0008] Figure 2 is an example sensor electrode pattern according to an embodiment.

[0009] Figures 3A-3C illustrate an example object tracking algorithm according to an embodiment.

[0010] Figures 4A-4C illustrate another example object tracking algorithm according to an embodiment.

[0011] Figure 5 is a flow diagram illustrating a method for tracking objects in a touch sensing region in accordance with an embodiment.

[0012] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. The drawings referred to here should not be understood as being drawn to scale unless specifically noted. Also, the drawings are often simplified and details or components omitted for clarity of presentation and explanation. The drawings and discussion serve to explain principles discussed below, where like designations denote like elements.

DETAILED DESCRIPTION

[0013] The following detailed description is merely exemplary in nature and is not intended to limit the embodiments or the application and uses of such embodiments. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

[0014] Various embodiments of the present technology provide input devices and methods for improving usability. Particularly, embodiments described herein advantageously provide algorithms for tracking objects, such as fingers, in a sensing region. As objects move across a sensing region, one or more of the objects may leave the sensing region, such as one of a user's fingers leaving the edge of a sensing region on a touchscreen. When this occurs, it can be difficult to determine which object or objects have left the sensing region and which object or objects have remained in the sensing region. Object speed information from previous frames can be calculated and used to predict a position for one or more objects as the objects move across the sensing region. This prediction can be compared to the position of the objects at the time that one or more objects have left the sensing region. The comparison determines which of the objects have left the sensing region and which remain in the sensing region.

[0015] Turning now to the figures, Figure 1 is a block diagram of an exemplary input device 100, in accordance with embodiments of the invention. The input device 100 may be configured to provide input to an electronic system (not shown). As used in this document, the term "electronic system" (or "electronic device") broadly refers to any system capable of electronically processing information. Some non-limiting examples of electronic systems include personal computers of all sizes and shapes, such as desktop computers, laptop computers, netbook computers, tablets, web browsers, e-book readers, and personal digital assistants (PDAs). Additional example electronic systems include composite input devices, such as physical keyboards that include input device 100 and separate joysticks or key switches. Further example electronic systems include peripherals such as data input devices (including remote controls and mice), and data output devices (including display screens and printers). Other examples include remote terminals, kiosks, and video game machines (e.g., video game consoles, portable gaming devices, and the like). Other examples include communication devices (including cellular phones, such as smart phones), and media devices (including recorders, editors, and players such as televisions, set-top boxes, music players, digital photo frames, and digital cameras). Additionally, the electronic system could be a host or a slave to the input device.

[0016] The input device 100 can be implemented as a physical part of the electronic system or can be physically separate from the electronic system. As appropriate, the input device 100 may communicate with parts of the electronic system using any one or more of the following: buses, networks, and other wired or wireless interconnections. Examples include I2C, SPI, PS/2, Universal Serial Bus (USB), Bluetooth, RF, and IRDA.

[0017] In Figure 1, the input device 100 is shown as a proximity sensor device (also often referred to as a "touchpad" or a "touch sensor device") configured to sense input provided by one or more input objects 140 in a sensing region 120. Example input objects include fingers and styli, as shown in Figure 1.

[0018] Sensing region 120 encompasses any space above, around, in, and/or near the input device 100 in which the input device 100 is able to detect user input (e.g., user input provided by one or more input objects 140). The sizes, shapes, and locations of particular sensing regions may vary widely from embodiment to embodiment. In some embodiments, the sensing region 120 extends from a surface of the input device 100 in one or more directions into space until signal-to-noise ratios prevent sufficiently accurate object detection. The distance to which this sensing region 120 extends in a particular direction, in various embodiments, may be on the order of less than a millimeter, millimeters, centimeters, or more, and may vary significantly with the type of sensing technology used and the accuracy desired. Thus, some embodiments sense input that comprises no contact with any surfaces of the input device 100, contact with an input surface (e.g., a touch surface) of the input device 100, contact with an input surface of the input device 100 coupled with some amount of applied force or pressure, and/or a combination thereof. In various embodiments, input surfaces may be provided by surfaces of casings within which the sensor electrodes reside, by face sheets applied over the sensor electrodes or any casings, etc. In some embodiments, the sensing region 120 has a rectangular shape when projected onto an input surface of the input device 100.

[0019] The input device 100 may utilize any combination of sensor components and sensing technologies to detect user input in the sensing region 120. The input device 100 comprises one or more sensing elements for detecting user input. As several non-limiting examples, the input device 100 may use capacitive, elastive, resistive, inductive, magnetic, acoustic, ultrasonic, and/or optical techniques. Some implementations are configured to provide images that span one, two, three, or higher dimensional spaces. Some implementations are configured to provide projections of input along particular axes or planes. In some resistive implementations of the input device 100, a flexible and conductive first layer is separated by one or more spacer elements from a conductive second layer. During operation, one or more voltage gradients are created across the layers. Pressing the flexible first layer may deflect it sufficiently to create electrical contact between the layers, resulting in voltage outputs reflective of the point(s) of contact between the layers. These voltage outputs may be used to determine positional information.

[0020] In some inductive implementations of the input device 100, one or more sensing elements pick up loop currents induced by a resonating coil or pair of coils. Some combination of the magnitude, phase, and frequency of the currents may then be used to determine positional information.

[0021] In some capacitive implementations of the input device 100, voltage or current is applied to create an electric field. Nearby input objects cause changes in the electric field and produce detectable changes in capacitive coupling that may be detected as changes in voltage, current, or the like.

[0022] Some capacitive implementations utilize arrays or other regular or irregular patterns of capacitive sensing elements to create electric fields. In some capacitive implementations, separate sensing elements may be ohmically shorted together to form larger sensor electrodes. Some capacitive implementations utilize resistive sheets, which may be uniformly resistive.

[0023] Some capacitive implementations utilize "self capacitance" (or "absolute capacitance") sensing methods based on changes in the capacitive coupling between sensor electrodes and an input object. In various embodiments, an input object near the sensor electrodes alters the electric field near the sensor electrodes, changing the measured capacitive coupling. In one implementation, an absolute capacitance sensing method operates by modulating sensor electrodes with respect to a reference voltage (e.g., system ground) and by detecting the capacitive coupling between the sensor electrodes and input objects.

[0024] Some capacitive implementations utilize "mutual capacitance" (or "transcapacitance") sensing methods based on changes in the capacitive coupling between sensor electrodes. In various embodiments, an input object near the sensor electrodes alters the electric field between the sensor electrodes, changing the measured capacitive coupling. In one implementation, a transcapacitive sensing method operates by detecting the capacitive coupling between one or more transmitter sensor electrodes (also "transmitter electrodes" or "transmitters") and one or more receiver sensor electrodes (also "receiver electrodes" or "receivers"). Transmitter sensor electrodes may be modulated relative to a reference voltage (e.g., system ground) to transmit transmitter signals. Receiver sensor electrodes may be held substantially constant relative to the reference voltage to facilitate receipt of resulting signals. A resulting signal may comprise effect(s) corresponding to one or more transmitter signals and/or to one or more sources of environmental interference (e.g., other electromagnetic signals). Sensor electrodes may be dedicated transmitters or receivers, or sensor electrodes may be configured to both transmit and receive. Alternatively, the receiver electrodes may be modulated relative to ground.

[0025] In Figure 1, a processing system 110 is shown as part of the input device 100. The processing system 110 is configured to operate the hardware of the input device 100 to detect input in the sensing region 120. The processing system 110 comprises parts of, or all of, one or more integrated circuits (ICs) and/or other circuitry components. For example, a processing system for a mutual capacitance sensor device may comprise transmitter circuitry configured to transmit signals with transmitter sensor electrodes and/or receiver circuitry configured to receive signals with receiver sensor electrodes. In some embodiments, the processing system 110 also comprises electronically-readable instructions, such as firmware code, software code, and/or the like. In some embodiments, components composing the processing system 110 are located together, such as near sensing element(s) of the input device 100. In other embodiments, components of processing system 110 are physically separate with one or more components close to sensing element(s) of input device 100 and one or more components elsewhere. For example, the input device 100 may be a peripheral coupled to a desktop computer, and the processing system 110 may comprise software configured to run on a central processing unit of the desktop computer and one or more ICs (perhaps with associated firmware) separate from the central processing unit. As another example, the input device 100 may be physically integrated in a phone, and the processing system 110 may comprise circuits and firmware that are part of a main processor of the phone. In some embodiments, the processing system 110 is dedicated to implementing the input device 100. In other embodiments, the processing system 110 also performs other functions, such as operating display screens, driving haptic actuators, etc.

[0026] The processing system 110 may be implemented as a set of modules that handle different functions of the processing system 110. Each module may comprise circuitry that is a part of the processing system 110, firmware, software, or a combination thereof. In various embodiments, different combinations of modules may be used. Example modules include hardware operation modules for operating hardware such as sensor electrodes and display screens, data processing modules for processing data such as sensor signals and positional information, and reporting modules for reporting information. Further example modules include sensor operation modules configured to operate sensing element(s) to detect input, identification modules configured to identify gestures such as mode changing gestures, and mode changing modules for changing operation modes.

[0027] In some embodiments, the processing system 110 responds to user input (or lack of user input) in the sensing region 120 directly by causing one or more actions. Example actions include changing operation modes, as well as GUI actions such as cursor movement, selection, menu navigation, and other functions. In some embodiments, the processing system 110 provides information about the input (or lack of input) to some part of the electronic system (e.g., to a central processing system of the electronic system that is separate from the processing system 110, if such a separate central processing system exists). In some embodiments, some part of the electronic system processes information received from the processing system 110 to act on user input, such as to facilitate a full range of actions, including mode changing actions and GUI actions.

[0028] For example, in some embodiments, the processing system 110 operates the sensing element(s) of the input device 100 to produce electrical signals indicative of input (or lack of input) in the sensing region 120. The processing system 110 may perform any appropriate amount of processing on the electrical signals in producing the information provided to the electronic system. For example, the processing system 110 may digitize analog electrical signals obtained from the sensor electrodes. As another example, the processing system 110 may perform filtering or other signal conditioning. As yet another example, the processing system 110 may subtract or otherwise account for a baseline, such that the information reflects a difference between the electrical signals and the baseline. As yet further examples, the processing system 110 may determine positional information, recognize inputs as commands, recognize handwriting, and the like.

[0029] "Positional information" as used herein broadly encompasses absolute position, relative position, velocity, acceleration, and other types of spatial information. Exemplary "zero-dimensional" positional information includes near/far or contact/no contact information. Exemplary "one-dimensional" positional information includes positions along an axis. Exemplary "two-dimensional" positional information includes motions in a plane. Exemplary "three-dimensional" positional information includes instantaneous or average velocities in space. Further examples include other representations of spatial information. Historical data regarding one or more types of positional information may also be determined and/or stored, including, for example, historical data that tracks position, motion, or instantaneous velocity over time.

[0030] In some embodiments, the input device 100 is implemented with additional input components that are operated by the processing system 110 or by some other processing system. These additional input components may provide redundant functionality for input in the sensing region 120 or some other functionality. Figure 1 shows buttons 130 near the sensing region 120 that can be used to facilitate selection of items using the input device 100. Other types of additional input components include sliders, balls, wheels, switches, and the like. Conversely, in some embodiments, the input device 100 may be implemented with no other input components.

[0031] In some embodiments, the input device 100 comprises a touch screen interface, and the sensing region 120 overlaps at least part of an active area of a display screen. For example, the input device 100 may comprise substantially transparent sensor electrodes overlaying the display screen and provide a touch screen interface for the associated electronic system. The display screen may be any type of dynamic display capable of displaying a visual interface to a user, and may include any type of light emitting diode (LED), organic LED (OLED), cathode ray tube (CRT), liquid crystal display (LCD), plasma, electroluminescence (EL), or other display technology. The input device 100 and the display screen may share physical elements. For example, some embodiments may utilize some of the same electrical components for displaying and sensing. As another example, the display screen may be operated in part or in total by the processing system 110.

[0032] It should be understood that while many embodiments of the invention are described in the context of a fully functioning apparatus, the mechanisms of the present invention are capable of being distributed as a program product (e.g., software) in a variety of forms. For example, the mechanisms of the present invention may be implemented and distributed as a software program on information bearing media that are readable by electronic processors (e.g., non-transitory computer-readable and/or recordable/writable information bearing media readable by the processing system 110). Additionally, the embodiments of the present invention apply equally regardless of the particular type of medium used to carry out the distribution. Examples of non-transitory, electronically readable media include various discs, memory sticks, memory cards, memory modules, and the like. Electronically readable media may be based on flash, optical, magnetic, holographic, or any other storage technology.

[0033] Figure 2 illustrates a system 210 including a processing system 110 and a portion of an example sensor electrode pattern configured to sense in a sensing region associated with the pattern, according to some embodiments. For clarity of illustration and description, Figure 2 shows a pattern of simple rectangles illustrating sensor electrodes, and does not show various components. This sensor electrode pattern comprises a plurality of transmitter electrodes 160 (160-1, 160-2, 160-3, ... 160-n), and a plurality of receiver electrodes 170 (170-1, 170-2, 170-3, ... 170-n) disposed over the plurality of transmitter electrodes 160.

[0034] Transmitter electrodes 160 and receiver electrodes 170 are typically ohmically isolated from each other. That is, one or more insulators separate transmitter electrodes 160 and receiver electrodes 170 and prevent them from electrically shorting to each other. In some embodiments, transmitter electrodes 160 and receiver electrodes 170 are separated by insulative material disposed between them at cross-over areas; in such constructions, the transmitter electrodes 160 and/or receiver electrodes 170 may be formed with jumpers connecting different portions of the same electrode. In some embodiments, transmitter electrodes 160 and receiver electrodes 170 are separated by one or more layers of insulative material. In some other embodiments, transmitter electrodes 160 and receiver electrodes 170 are separated by one or more substrates; for example, they may be disposed on opposite sides of the same substrate, or on different substrates that are laminated together.

[0035] The areas of localized capacitive coupling between transmitter electrodes 160 and receiver electrodes 170 may be termed "capacitive pixels." The capacitive coupling between the transmitter electrodes 160 and receiver electrodes 170 changes with the proximity and motion of input objects in the sensing region associated with the transmitter electrodes 160 and receiver electrodes 170.

[0036] In some embodiments, the sensor pattern is "scanned" to determine these capacitive couplings. That is, the transmitter electrodes 160 are driven to transmit transmitter signals. Transmitters may be operated such that one transmitter electrode transmits at one time, or multiple transmitter electrodes transmit at the same time. Where multiple transmitter electrodes transmit simultaneously, these multiple transmitter electrodes may transmit the same transmitter signal and effectively produce a larger transmitter electrode, or these multiple transmitter electrodes may transmit different transmitter signals. For example, multiple transmitter electrodes may transmit different transmitter signals according to one or more coding schemes that enable their combined effects on the resulting signals of receiver electrodes 170 to be independently determined.

[0037] The receiver sensor electrodes 170 may be operated singly or multiply to acquire resulting signals. The resulting signals may be used to determine measurements of the capacitive couplings at the capacitive pixels.

[0038] A set of measurements from the capacitive pixels form a "capacitive image" (also "capacitive frame") representative of the capacitive couplings at the pixels. Multiple capacitive images may be acquired over multiple time periods, and differences between them used to derive information about input in the sensing region. For example, successive capacitive images acquired over successive periods of time can be used to track the motion(s) of one or more input objects entering, exiting, and within the sensing region.

[0039] The background capacitance of a sensor device is the capacitive image associated with no input object in the sensing region. The background capacitance changes with the environment and operating conditions, and may be estimated in various ways. For example, some embodiments take "baseline images" when no input object is determined to be in the sensing region, and use those baseline images as estimates of their background capacitances.

[0040] Capacitive images can be adjusted for the background capacitance of the sensor device for more efficient processing. Some embodiments accomplish this by "baselining" measurements of the capacitive couplings at the capacitive pixels to produce a "baselined capacitive image." That is, some embodiments compare the measurements forming a capacitance image with appropriate "baseline values" of a "baseline image" associated with those pixels, and determine changes from that baseline image.
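As an illustration only (this sketch is not part of the patent text), the baselining step can be expressed in a few lines of Python; the array shapes and variable names here are assumptions:

import numpy as np

def baseline_capacitive_image(raw_image, baseline_image):
    # The baselined image is the per-pixel change relative to the stored
    # baseline (no-input) image.
    return raw_image - baseline_image

# Hypothetical 4x4 grid of capacitive pixels.
baseline = np.zeros((4, 4))
raw = np.zeros((4, 4))
raw[1, 2] = 5.0  # an input object changes the measured coupling at this pixel
delta = baseline_capacitive_image(raw, baseline)  # the "baselined capacitive image"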

[0041] In some touch screen embodiments, transmitter electrodes 160 comprise one or more common electrodes (e.g., "V-com electrode") used in updating the display of the display screen. These common electrodes may be disposed on an appropriate display screen substrate. For example, the common electrodes may be disposed on the TFT glass in some display screens (e.g., In Plane Switching (IPS) or Plane to Line Switching (PLS)), on the bottom of the color filter glass of some display screens (e.g., Patterned Vertical Alignment (PVA) or Multi-domain Vertical Alignment (MVA)), etc. In such embodiments, the common electrode can also be referred to as a "combination electrode", since it performs multiple functions. In various embodiments, each transmitter electrode 160 comprises one or more common electrodes. In other embodiments, at least two transmitter electrodes 160 may share at least one common electrode.

[0042] In various touch screen embodiments, the "capacitive frame rate" (the rate at which successive capacitive images are acquired) may be the same or be different from that of the "display frame rate" (the rate at which the display image is updated, including refreshing the screen to redisplay the same image). In some embodiments where the two rates differ, successive capacitive images are acquired at different display updating states, and the different display updating states may affect the capacitive images that are acquired. That is, display updating affects, in particular, the background capacitive image. Thus, if a first capacitive image is acquired when the display updating is at a first state, and a second capacitive image is acquired when the display updating is at a second state, the first and second capacitive images may differ due to differences in the background capacitive image associated with the display updating states, and not due to changes in the sensing region. This is more likely where the capacitive sensing and display updating electrodes are in close proximity to each other, or when they are shared (e.g., combination electrodes).

[0043] For convenience of explanation, a capacitive image that is taken during a particular display updating state is considered to be of a particular frame type. That is, a particular frame type is associated with a mapping of a particular capacitive sensing sequence with a particular display sequence. Thus, a first capacitive image taken during a first display updating state is considered to be of a first frame type, a second capacitive image taken during a second display updating state is considered to be of a second frame type, a third capacitive image taken during a first display updating state is considered to be of a third frame type, and so on. Where the relationship of display update state and capacitive image acquisition is periodic, the capacitive images acquired cycle through the frame types and then repeat.

[0044] Processing system 110 may include a driver module 230, a receiver module 240, a determination module 250, and an optional memory 260. The processing system 110 is coupled to receiver electrodes 170 and transmitter electrodes 160 through a plurality of conductive routing traces (not shown in Figure 2).

[0045] The receiver module 240 is coupled to the plurality of receiver electrodes 170 and configured to receive resulting signals indicative of input (or lack of input) in the sensing region 120 and/or of environmental interference. The receiver module 240 may also be configured to pass the resulting signals to the determination module 250 for determining the presence of an input object and/or to the optional memory 260 for storage. In various embodiments, the IC of the processing system 110 may be coupled to drivers for driving the transmitter electrodes 160. The drivers may be fabricated using thin-film transistors (TFTs) and may comprise switches, combinatorial logic, multiplexers, and other selection and control logic.

[0046] The driver module 230, which includes driver circuitry included in the processing system 110, may be configured for updating images on the display screen of a display device (not shown). For example, the driver circuitry may include display circuitry and/or sensor circuitry configured to apply one or more pixel voltages to the display pixel electrodes through pixel source drivers. The display and/or sensor circuitry may also be configured to apply one or more common drive voltages to the common electrodes to update the display screen. In addition, the processing system 110 is configured to operate the common electrodes as transmitter electrodes for input sensing by driving transmitter signals onto the common electrodes.

[0047] The processing system 110 may be implemented with one or more ICs to control the various components in the input device. For example, the functions of the IC of the processing system 110 may be implemented in more than one integrated circuit that can control the display module elements (e.g., common electrodes) and drive transmitter signals and/or receive resulting signals received from the array of sensing elements. In embodiments where there is more than one IC of the processing system 110, communications between separate processing system ICs may be achieved through a synchronization mechanism, which sequences the signals provided to the transmitter electrodes 160. Alternatively, the synchronization mechanism may be internal to any one of the ICs.

[0048] Figures 3A-3C illustrate an example object tracking algorithm according to an embodiment. In Figures 3A-3C, two objects (such as a user's fingers, a stylus, an active pen, or another detectable object) are moving through a sensing region 120 of an input device. The circles illustrated in Figures 3A-3C (302, 304, 306, etc.) represent the position of the objects within sensing region 120. As the objects move through sensing region 120, the position of the objects is updated by receiver module 240 and determination module 250 (illustrated in Figure 2). The arrows illustrate the trajectory (i.e., the direction of movement) of the objects in various scenarios.

[0049] Figure 3A illustrates a first object at position 302 and a second object at position 304 at a first point in time. As described above, a capacitive image may be acquired at different time periods, and differences in the capacitive images are used to derive information about input in sensing region 120. Successive capacitive images acquired over successive periods of time are used to track the motions of input objects. The sensing frequency may be 60 Hertz in some embodiments, which results in 60 capacitive images acquired per second. Other embodiments may utilize a higher or lower sensing frequency. A time interval between successive capacitive images is the inverse of the sensing frequency.
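As an illustration only (hypothetical Python, not part of the patent text), the time interval between successive capacitive frames can be obtained either from the sensing frequency or from stored frame timestamps:

SENSING_FREQUENCY_HZ = 60.0  # example rate; other embodiments may use higher or lower rates

def interval_from_frequency(sensing_frequency_hz):
    # The interval between successive capacitive images is the inverse of the
    # sensing frequency.
    return 1.0 / sensing_frequency_hz

def interval_from_timestamps(timestamp_earlier, timestamp_later):
    # Alternatively, subtract the timestamps stored with two frames.
    return timestamp_later - timestamp_earlier

time_lapse = interval_from_frequency(SENSING_FREQUENCY_HZ)  # roughly 0.0167 seconds at 60 Hz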

[0050] At the first point in time in Figure 3A, when a capacitive image is captured, the objects are at positions 302 and 304. At a second point in time, only one object is detected at position 306. The second object has left sensing region 120. If the first object and the second object are following the trajectory marked by the arrows in Figure 3A, it appears that the first object is the object that is detected at position 306. That is, the first object has moved from position 302 to position 306. The second object has moved from position 304 to a position outside sensing region 120.

[0051] However, Figure 3B illustrates an alternative scenario. In Figure 3B, at the first point in time, the first object is again illustrated at position 302 and the second object is again illustrated at position 304. At a second point in time, only one object is detected at position 306. If the first object and the second object are following the trajectory marked by the arrows in Figure 3B, it appears that the second object is the object that is detected at position 306. That is, the second object has moved from position 304 to position 306. The first object has moved from position 302 to a position outside the sensing region 120.

[0052] Certain object tracking algorithms that determine which object has moved to position 306 at the second point in time may identify the wrong object at position 306. For example, some algorithms use the distance between a previous position (positions 302 and 304) and a current position (position 306), and choose the smaller distance as the correct match. That is, if position 306 is closer to position 302, then the first object that was at position 302 is identified as the object at position 306. If position 306 is closer to position 304, then the second object that was at position 304 is identified as the object at position 306. However, if one or more objects have left the sensing region 120, and two or more of the calculated distances are close, the tracking algorithm may make an incorrect match. As illustrated in Figures 3A and 3B, position 306 is roughly the same distance from both position 302 and position 304. Therefore, a simple algorithm that looks at the distance between a previous position (positions 302 and 304) and a current position (position 306) may make a mistake and may not correctly identify which of the two objects has moved to position 306.
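For illustration only (a hypothetical Python sketch, not part of the patent text), a naive nearest-distance match of the kind described above might be written as follows, with positions represented as (x, y) tuples; when the candidate distances are nearly equal, as in Figures 3A and 3B, this rule can pick the wrong object:

import math

def naive_match(previous_positions, current_position):
    # Return the identifier of the previously tracked object whose last known
    # position is nearest to the single position detected in the current frame.
    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(previous_positions,
               key=lambda obj_id: distance(previous_positions[obj_id], current_position))

# Hypothetical coordinates: positions 302 and 304 are roughly equidistant from
# position 306, so the result may not reflect which object actually remained.
remaining = naive_match({"A": (10.0, 40.0), "B": (10.0, 10.0)}, (30.0, 25.0))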

[0053] Figure 3C illustrates an algorithm that considers capacitive images captured over at least two previous time periods (capacitive frames) to predict the position of the objects in sensing region 120 at a current time. Then, those predicted positions are compared to the actual position detected at the current time, and the closest predicted position is selected as the object that remains in sensing region 120.

[0054] In Figure 3C, an object is detected at position 306 at a current point in time. At a point in time before that (a previous time), two objects were detected, one at position 302 and one at position 304. At a point in time before the previous time (a previous-previous time), two objects were detected at positions 310 and 312. The trajectory of the two objects from the previous-previous time to the previous time is used to make a prediction about the positions of the two objects at the current time. Those predictions are shown as positions 320 and 322. The algorithm determines that predicted position 320 is closer to the actual position 306 than predicted position 322. Therefore, the object predicted to be at position 320 is matched to the object at position 306. The other object is not detected in sensing region 120 and is thus determined to be outside of sensing region 120.

[0055] The algorithm for determining which object is at a current location using at least two previous time periods can be illustrated with a series of equations. The object at position 302 is labeled "A" and the object at position 304 is labeled "B". The speed of object A as it moves across sensing region 120 is determined by subtracting the position of object A at position 310 from the position of object A at position 302, and dividing by the time interval between the capturing of those two positions:

Speed_prev-A = (Position_prev-A - Position_prev-prev-A) / time_lapse_old

[0056] The position of the objects at various points in time is stored in memory as an X-Y location within sensing region 120. An associated timestamp is also stored for each position. Position_prev-A is the position of object A at the previous point in time (illustrated as 302 in Figure 3C). Position_prev-prev-A is the position of object A at the previous-previous point in time (illustrated as 310 in Figure 3C). The difference in those two positions is divided by the time interval that elapsed between the points in time that those two positions were determined, which is denoted by time_lapse_old. The time interval time_lapse_old is determined by the difference between stored timestamps. The difference in position divided by the time lapse results in a speed of object A (Speed_prev-A) as object A moved from position 310 to position 302.

[0057] A similar calculation is performed to determine the speed of object B, as object B moved from position 312 to position 304:

Speed_prev-B = (Position_prev-B - Position_prev-prev-B) / time_lapse_old

[0058] Using the speed of each of the objects A and B, a predicted position can be calculated for each object. The speed of the object multiplied by the time lapse between the previous point in time and the time associated with the current location is added to the position at the previous point in time:

Position_predict-A = Position_prev-A + Speed_prev-A * time_lapse_new

[0059] Position_prev-A is the position of object A at the previous point in time (illustrated as 302 in Figure 3C). Speed_prev-A was calculated as shown above. The time interval time_lapse_new is the time interval that elapsed between the time that position 302 was determined (the previous time) and the time that position 306 was determined (the current time). Adding the position Position_prev-A to the Speed_prev-A * time_lapse_new product results in a predicted position of object A at the current time, Position_predict-A. This predicted position Position_predict-A is illustrated as position 320 in Figure 3C.

[0060] A similar calculation is performed to predict the position of object B at the current time using the following formula:

Position_predict-B = Position_prev-B + Speed_prev-B * time_lapse_new

[0061] The predicted position of object B is illustrated as position 322 in Figure 3C. As described above, the algorithm determines that predicted position 320 is closer to the actual position 306 than predicted position 322. One method to determine which object is closer is to compare the absolute values of the differences in position between the predicted positions and the current position:

| Position_predict-A - Position_current | < | Position_predict-B - Position_current |

[0062] In this example, the predicted position of object A is closer to the current position, position 306. Therefore, object A is matched to the object at position 306 and object B is determined to be outside sensing region 120.
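The speed, prediction, and comparison steps of paragraphs [0055] through [0062] can be gathered into a single sketch. The following hypothetical Python is illustrative only; positions are (x, y) tuples, speeds are computed per axis, and the Euclidean distance used for the comparison is an assumption rather than language from the patent:

def calculate_speed(position_prev, position_prev_prev, time_lapse_old):
    # Speed over the two previous frames, computed per axis.
    return ((position_prev[0] - position_prev_prev[0]) / time_lapse_old,
            (position_prev[1] - position_prev_prev[1]) / time_lapse_old)

def predict_position(position_prev, speed, time_lapse_new):
    # Predicted position at the current frame: previous position plus speed
    # multiplied by the elapsed time.
    return (position_prev[0] + speed[0] * time_lapse_new,
            position_prev[1] + speed[1] * time_lapse_new)

def object_that_remained(prev, prev_prev, current, time_lapse_old, time_lapse_new):
    # Return the identifier of the tracked object whose predicted position lies
    # closest to the single position detected in the current frame.
    def prediction_error(obj_id):
        speed = calculate_speed(prev[obj_id], prev_prev[obj_id], time_lapse_old)
        predicted = predict_position(prev[obj_id], speed, time_lapse_new)
        return ((predicted[0] - current[0]) ** 2 + (predicted[1] - current[1]) ** 2) ** 0.5
    return min(prev, key=prediction_error)

Applied to Figure 3C, prev would hold positions 302 and 304, prev_prev would hold positions 310 and 312, and current would be position 306, so the function would return the identifier of object A.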

[0063] Figures 4A-4C illustrate another example object tracking algorithm according to an embodiment. The procedures described above with respect to Figures 3A-3C operate similarly in Figures 4A-4C. Figures 4A-4C illustrate the tracking of three objects in a sensing region 120. The circles illustrated in Figures 4A-4C (402, 404, 406, etc.) represent the position of the objects within sensing region 120. As the objects move through sensing region 120, the position of the objects is updated by receiver module 240 and determination module 250 (illustrated in Figure 2). The arrows illustrate the trajectory of the objects in various scenarios.

[0064] Figure 4A illustrates a first object A at position 402, a second object B at position 404, and a third object C at position 406 at a first point in time. At a second point in time, two objects are detected at positions 408 and 410. The arrows illustrate the trajectory of the three objects. If the three objects move across sensing region 120 with the trajectory illustrated by the three arrows, object A moves to position 408 and object B moves to position 410. Object C moves to a position outside sensing region 120.

[0065] Figure 4B illustrates an alternative scenario where the three objects are moving in different trajectories than those illustrated in Figure 4A. If the three objects move across sensing region 120 with the trajectory illustrated by the three arrows, object B moves to position 408 and object C moves to position 410. Object A moves to a position outside sensing region 120. Therefore, a simple algorithm that looks at the distance between a previous position (positions 402, 404, and 406) and a current position (positions 408 and 410) may make a mistake and may not correctly identify which of the objects has left sensing region 120.

[0066] Figure 4C illustrates an algorithm to determine which of the three objects has left sensing region 120. In Figure 4C, an algorithm considers capacitive images captured over at least two previous time periods to predict the position of the objects in sensing region 120 at a current time. Then, those predicted positions are compared to the actual position detected at the current time, and the closest predicted positions are selected as the objects that remain in sensing region 120.

[0067] The equations described above with respect to Figure 3C are also used to predict the positions of the objects in Figure 4C. First, the speed of each of the three objects is calculated:

Speed_prev-A = (Position_prev-A - Position_prev-prev-A) / time_lapse_old
Speed_prev-B = (Position_prev-B - Position_prev-prev-B) / time_lapse_old
Speed_prev-C = (Position_prev-C - Position_prev-prev-C) / time_lapse_old

[0068] Then, a position is predicted for each of the three objects:

Position_predict-A = Position_prev-A + Speed_prev-A * time_lapse_new
Position_predict-B = Position_prev-B + Speed_prev-B * time_lapse_new
Position_predict-C = Position_prev-C + Speed_prev-C * time_lapse_new

[0069] The predicted positions (420, 422, and 424) are compared to the detected positions of the objects remaining in sensing region 120 (408 and 410), and the best match is determined to be object B at position 408 and object C at position 410.

[0070] To recap, at the previous point in time, the three objects were detected at positions 402, 404, and 406. At a time before that (the previous-previous time), the objects were detected at positions 412, 414, and 416. As described above, the speed of the objects between the previous-previous point in time and the previous point in time is used to predict the position of the objects at the current point in time. Those predictions are illustrated as positions 420, 422, and 424. Then, the algorithm determines that the predicted position 422 is closest to current position 408 and predicted position 424 is closest to current position 410. The algorithm concludes that object B has moved to position 408 and object C has moved to position 410. Object A is outside of sensing region 120.
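As a hypothetical extension of the same idea (not language from the patent), the matching can be generalized to any number of tracked objects and any number of detections remaining in the sensing region by assigning each detection to the nearest unused predicted position; objects left unassigned are treated as having exited:

import math

def match_remaining(predicted_positions, detected_positions):
    # predicted_positions: {object_id: (x, y)} predicted for the current frame.
    # detected_positions: list of (x, y) positions actually detected this frame.
    # Greedy nearest-prediction assignment; the patent does not prescribe this
    # particular matching order, so it is shown only as one possible approach.
    unmatched = dict(predicted_positions)
    matches = {}
    for position in detected_positions:
        nearest_id = min(unmatched,
                         key=lambda obj_id: math.hypot(unmatched[obj_id][0] - position[0],
                                                       unmatched[obj_id][1] - position[1]))
        matches[nearest_id] = position
        del unmatched[nearest_id]
    exited = list(unmatched)  # objects with no matching detection left the sensing region
    return matches, exited

In the scenario of Figure 4C, the predicted positions 420, 422, and 424 for objects A, B, and C would be matched against the detected positions 408 and 410, yielding B at 408, C at 410, and A as the object that left sensing region 120.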

[0071] Figure 5 is a flow diagram illustrating a method 500 for tracking objects on a touch sensing device in accordance with one embodiment of the invention. Although the method steps are described in conjunction with Figures 1-4, persons skilled in the art will understand that any system configured to perform the method steps, in any feasible order, falls within the scope of the present invention. In various embodiments, the hardware and/or software elements described in Figures 1-4 can be configured to perform the method steps of Figure 5. It is contemplated that the method 500 may be performed using other suitable input devices.

[0072] The method 500 begins at step 510, where the processing system 110 determines a first position of a first object and a first position of a second object in the sensing region 120. As an example, receiver module 240 can receive resulting signals indicative of input in sensing region 120 using an absolute or transcapacitive sensing routine as described above. The receiver module 240 passes the resulting signals to the determination module 250 for determining the presence of an input object, and passes the resulting signals to memory 260 for storage. The stored signals are used in the algorithms described above to determine positions of the objects and to predict positions of the objects.

[0073] The method 500 proceeds to step 520, where the processing system 110 determines that one of the objects has left the sensing region 120 and one of the objects has remained in the sensing region 120 at a current position. Receiver module 240 and determination module 250 again receive signals from touch sensing region 120 and perform this determination using an absolute or transcapacitive sensing routine as described above.

[0074] At step 530, the processing system 110 calculates a speed of each of the first object and the second object based on a difference in position of each respective object in two previous frames divided by a time interval between the two previous frames. Timestamps are stored in memory 260 for each frame. A time interval between two positions can be calculated by subtracting the time stamps associated with the two positions.

[0075] The method 500 proceeds to step 540, where the processing system 110 predicts a next position of each of the objects based on the first position of each of the objects and the speed of each of the objects. Exemplary equations for calculating these predicted positions are described above with respect to Figures 3A-3C and 4A-4C, although other prediction algorithms may be utilized.

[0076] Finally, the method 500 proceeds to step 550, where the processing system 110 determines which of the objects remained in the sensing region 120 based at least in part on the predicted next positions. As described above, the predicted position that is closest to the detected position of the object that remained in sensing region 120 determines which object remained in sensing region 120. Also, multiple objects may have remained in or left sensing region 120, and the algorithms described above can be performed on as many objects as necessary to determine which objects remain in sensing region 120.

[0077] Thus, the embodiments and examples set forth herein were presented in order to best explain the embodiments in accordance with the present technology and its particular application and to thereby enable those skilled in the art to make and use the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed.

[0078] In view of the foregoing, the scope of the present disclosure is determined by the claims that follow.