Title:
A DUAL PIXEL-TYPE ARRAY FOR IMAGING AND MOTION CENTROID LOCALIZATION
Document Type and Number:
WIPO Patent Application WO/2002/082545
Kind Code:
A1
Abstract:
An imager chip (10) is disclosed that has two different pixel types interleaved on a common array. The dual-pixel design enables optimization for two separate tasks. One type of pixel is an Active Pixel Sensor ("APS") (16), which is used to produce a low-noise image. The other type of pixel is a custom-designed pixel (18) optimized for computing the centroid of a moving object in a scene.

Inventors:
ETIENNE-CUMMINGS RALPH (US)
CLAPP MATTHEW (US)
Application Number:
PCT/US2002/010666
Publication Date:
October 17, 2002
Filing Date:
April 05, 2002
Assignee:
UNIV JOHNS HOPKINS (US)
ETIENNE-CUMMINGS RALPH (US)
CLAPP MATTHEW (US)
International Classes:
H03F3/08; H04N3/15; H04N5/232; (IPC1-7): H01L27/00
Foreign References:
US 5212392 A (1993-05-18)
US 5742699 A (1998-04-21)
US 6303920 B1 (2001-10-16)
US 5698861 A (1997-12-16)
US 6093923 A (2000-07-25)
US 5990471 A (1999-11-23)
Attorney, Agent or Firm:
Molan, Robert A. (VA, US)
Claims:
WHAT IS CLAIMED IS:
1. An imager for tracking a moving object comprising: an imager subsystem for obtaining real-time images of the moving object, a tracking subsystem for computing the centroid position of the moving object, and a dual pixel array for imaging and motion centroid localization to find a position of the moving object.
2. The imager recited in claim 1 wherein the dual pixel array includes a plurality of a first type of pixel for performing the imaging, whereby the first type of pixel is part of the imager subsystem, and a plurality of a second type of pixel for performing the motion centroid localization, whereby the second type of pixel is part of the tracking subsystem.
3. The imager recited in claim 1 wherein the imager subsystem includes a plurality of a first type of pixel for imaging the object, and wherein the centroid tracking subsystem includes a plurality of a second type of pixel to help compute the centroid position of the moving object.
4. The imager recited in claim 1 wherein the imager subsystem is an Active Pixel Sensor ("APS") imager subsystem.
5. The imager recited in claim 3 wherein the first and second types of pixels are interleaved in an array.
6. The imager recited in claim 2 wherein the centroid tracking subsystem includes a circuit for performing a center of mass calculation to find a central row and a central column of all of the second type of pixels sensing a change of light intensity.
7. The imager recited in claim 3 wherein the centroid tracking subsystem includes a circuit for performing a center of mass calculation to find a central row and a central column of all of the second type of pixels sensing a change of light intensity.
8. The imager recited in claim 1 wherein the centroid tracking subsystem outputs a first voltage that is an x-coordinate of the moving object and a second voltage that is a y-coordinate of the moving object.
9. The imager recited in claim 1 wherein the imager subsystem is an Active Pixel Sensor imager subsystem that obtains real-time images of the moving object.
10. The imager recited in claim 1 further comprising a dual pixel array for real-time imaging and motion centroid localization.
11. The imager recited in claim 2 wherein the first type of pixels are formed into a first array, and wherein the second type of pixels are formed into a second array, interleaved with the first array.
12. The imager recited in claim 11 wherein the number of pixels of the first type is twice the number of pixels of the second type.
13. The imager recited in claim 2 further comprising a first reset circuit for resetting the first type of pixels at a first cycle rate and a second reset circuit for resetting the second type of pixels at a second cycle rate, whereby the imager subsystem can image the object at a frame rate of at least 30fps and the centroid tracking subsystem can provide centroid data at a rate of at least 180 coordinates per second.
14. The imager recited in claim 3 further comprising a first reset circuit for resetting the first type of pixels at a first cycle rate and a second reset circuit for resetting the second type of pixels at a second cycle rate, whereby the imager subsystem can image the object at a frame rate of at least 30fps and the centroid tracking subsystem can provide centroid data at a rate of at least 180 coordinates per second.
15. The imager recited in claim 13 wherein the frame rate is about 8300fps and wherein the centroid data rate is about 3580 coordinates per second.
16. The imager recited in claim 14 wherein the frame rate is 8300fps and wherein the centroid data rate is 3580 coordinates per second.
17. The imager recited in claim 2 wherein the imager subsystem is comprised of a first chain of cyclical shift registers for re-setting selected rows of the first type of pixels and a second chain of cyclical shift registers for selecting rows of the first type of pixels.
18. The imager recited in claim 3 wherein the imager subsystem is comprised of a first chain of cyclical shift registers for re-setting selected rows of the first type of pixels and a second chain of cyclical shift registers for selecting rows of the first type of pixels.
19. The imager recited in claim 17 further comprising a switching circuit to subtract a reset voltage level, when a reset signal is applied by shift registers in the first chain of cyclical shift registers to the first type of pixels, from an output signal voltage of the first type of pixels to compensate for noise and/or different reset voltage levels resulting from different light intensities incident on the first type of pixels.
20. The imager recited in claim 18 further comprising a switching circuit to subtract a reset voltage level, when a reset signal is applied by shift registers in the first chain of cyclical shift registers to the first type of pixels, from an output signal voltage of the first type of pixels to compensate for noise and/or different reset voltage levels resulting from different light intensities incident on the first type of pixels.
21. The imager recited in claim 1 wherein the centroid tracking subsystem includes column decode circuitry and row decode circuitry to detect pixels of the second type sensing a change of light intensity.
22. The imager recited in claim 6 wherein the circuit for performing a center of mass calculation calculates the centroid of activated rows of pixels of the second type and the centroid of activated columns of pixels of the second type separately from one another to arrive at a final x, y-coordinate for the moving object.
23. The imager recited in claim 7 wherein the circuit for performing a center of mass calculation calculates the centroid of activated rows of pixels of the second type and the centroid of activated columns of pixels of the second type separately from one another to arrive at a final x, y-coordinate for the moving object.
24. The imager recited in claim 22 wherein each activated row of pixels of the second type and each activated column of pixels of the second type are assigned a weight of 1 in the respective center of mass calculation for the activated rows and columns of pixels of the second type.
25. The imager recited in claim 6 wherein the center of mass calculation is performed using a resistive ladder voltage divider.
26. The imager recited in claim 7 wherein the center of mass calculation is performed using a resistive ladder voltage divider.
27. An imager chip for tracking a moving object comprising: an Active Pixel Sensor imager subsystem for obtaining real-time images of the moving object, a tracking subsystem for computing the centroid position of the moving object, and a pixel array including a plurality of a first type of pixel used by the imager subsystem for imaging the moving object and a plurality of a second type of pixel used by the tracking subsystem to help compute the centroid position of the moving object.
28. The imager chip recited in claim 27 wherein the imager subsystem obtains real-time images of the moving object.
29. The imager chip recited in claim 27 wherein the centroid tracking subsystem includes a circuit for performing a first center of mass calculation to find a central row of all of the second type of pixels sensing a change in light intensity and a second center of mass calculation to find a central column of all of the second type of pixels sensing a change in light intensity.
30. The imager chip recited in claim 29 wherein the first center of mass calculation to find a central row and the second center of mass calculation to find a central column are done in parallel.
31. The imager chip recited in claim 29 wherein the centroid tracking subsystem uses the center of mass calculations to output a first voltage that is an x-coordinate of the moving object and a second voltage that is a y-coordinate of the moving object.
32. The imager chip recited in claim 27 further comprising a first reset circuit for resetting the first type of pixels at a first cycle rate, whereby the imager subsystem can image the moving object at a frame rate of at least 30fps, and a second reset circuit for resetting the second type of pixels at a second cycle rate, whereby the centroid tracking subsystem can provide centroid data for the moving object at a rate of at least 180 coordinates per second.
33. The imager chip recited in claim 31 wherein the frame rate is about 8300fps and wherein the centroid data rate is about 3580 coordinates per second.
34. The imager recited in claim 11 wherein the first array is 120 columns x 36 rows, and wherein the second array, interleaved with the first array, is 60 columns x 36 rows.
Description:
A DUAL PIXEL-TYPE ARRAY FOR IMAGING AND MOTION CENTROID LOCALIZATION FIELD OF THE INVENTION The present invention relates to imagers for tracking moving objects, and, more particularly, to an imager chip that uses a dual-pixel array for imaging and motion centroid localization to find the position of a moving object in a scene.

BACKGROUND OF THE INVENTION The tracking of moving objects has traditionally been important for military applications, and much of the relevant literature for the past couple of decades is devoted to such applications. However, as imagers and processors have become cheaper, their potential uses have also broadened. Machine vision for mobile robots, automation of security camera tasks for surveillance, image stabilization for medical applications, and other motor control applications have now become practical.

In medical applications involving eye surgery, for example, there are two problems, i.e., tremors in the surgeon's hand and tremors in the eyes of the patient. An imager that can quickly and automatically track a point on the instrument used by a surgeon during eye surgery could be used in a feedback loop to adjust the tip of the instrument in the opposite direction, effectively eliminating the surgeon's tremors. Such an imager could also be used in a similar way to adjust camera position and/or instrument position in the case of an eye tremor, moving the actual tissue being operated upon to effectively eliminate the patient's tremors. For feedback loop applications such as these, speed is of paramount importance. Thus, performing the required computation for tracking through an external processor would add a delay that would be unacceptable.

In the case of surveillance, a camera sensing a moving object that can move itself to point in the proper direction to center the movement in its field of view, and even alert a human operator that movement has occurred, could be substituted for a human operator panning a camera in the direction of some activity. It would be desirable for this surveillance function to be carried out cheaply and simply in the camera itself without the need for external computers to perform the necessary computations.

For mobile platforms such as robots, extracting extra information about the environment is generally useful. In such applications, generally, moving objects in a scene are more deserving of the robot's attention than static ones.

Low power is also very important for robots, and a low-power, single-chip imager giving information about the position of moving objects around the robot would be very valuable.

One type of low-power, single-chip imager design that has been of interest lately is the CMOS imager, which is used as an alternative to charge-coupled device ("CCD") arrays. Specifically, Active Pixel Sensors have been used with Correlated Double Sampling circuits to produce charge-integrating imagers approaching CCD image quality. In addition to image quality, the most important advantages of using CMOS technology include lower power consumption, lower fabrication costs, and the ability to integrate processing tasks on the same chip as the imaging array. The advantages of system integration are the basis of the "System-On-A-Chip" approach. Specifically, integration enables greater flexibility of processing, simplifies interconnects between imaging and processing blocks, and further reduces costs by assembling all functionality onto one die. However, many CMOS imager designs use these advantages simply to make a cheaper, more efficient substitute for a CCD array and scanning circuitry.

The combination of imaging and computation is not a new concept.

Numerous designs for imagers have been created in the past with computation and imaging combined on the same die. The most notable examples of this design philosophy carry out computation on the focal plane itself. Most of these designs use current-mode pixels, employing photodiodes or phototransistors to produce a current which is an instantaneous function of the incident light. The use of current-mode pixels facilitates continuous-time processing, and enables the use of space-efficient analog computation circuits.

Unfortunately, when image quality is important, current-mode pixels suffer worse noise performance than their charge-integrating cousins. Integration serves to average out such noise fluctuations, but its disadvantage is that it produces discrete-time data that is not as compatible with many space-friendly analog processing circuits. Furthermore, capacitors are more easily matched than the transconductances or threshold voltages of transistors. Thus, the prior designs require a choice between image quality and focal plane computation.

There have been other approaches to object tracking in the past that have used centroid location and tracking. For systems that include the centroid calculation in a feedback loop, high speed and low latency are vital. Latency is an especially important issue for image stabilization and mechanical feedback systems. They demand quick response or risk becoming oscillatory or simply ineffective. Finally, offloading the task of computing a centroid frees memory and processing time, which can be used for higher-level tasks such as image segmentation or for other jobs not related to image processing.

The most common approach to centroid location and tracking used in the past involves communicating all pixels from the imager block to a processing block, usually a digital processor, before computing the centroid. This approach not only involves the computation time of the processing block, but also the time it takes to read every pixel out of the array itself into memory. For a system with a digital processor, part of the communication process is analog-to-digital conversion, which is costly in terms of time, chip area, and power. If pixels are communicated out of the array one at a time, as is customary for most imagers, the time involved scales as O(n²) with the length n of a side of the array.

As resolution increases, this additional time involved in moving pixel information can become considerable. A 2Mpixel sensor with a 10MHz pixel clock takes 200ms to output every pixel, a significant latency cost. The pixel clock speed can be increased, but this again costs more chip area and/or power.
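
To make the scaling concrete, the following minimal Python sketch (illustrative only; the pixel count and clock are the example figures from the text, not parameters of the disclosed chip) reproduces the latency arithmetic:

```python
# Back-of-envelope latency for serial pixel readout: every pixel is
# clocked out of the array one at a time.
def readout_latency_s(total_pixels: int, pixel_clock_hz: float) -> float:
    """Time to output every pixel through a single serial port."""
    return total_pixels / pixel_clock_hz

latency = readout_latency_s(total_pixels=2_000_000, pixel_clock_hz=10e6)
print(f"{latency * 1e3:.0f} ms")  # 200 ms, as stated in the text
```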

Some prior centroid-tracker designs use the focal plane for fast, low-power processing, but they compute the centroid of the whole scene based on the brightness of every pixel in the imager. While useful for tracking a bright spot in a scene, these imagers are unable to discriminate between objects of interest and the background.

SUMMARY OF THE INVENTION It is an object of the present invention to provide an imager for tracking moving objects with a high degree of speed and power efficiency. It is another object of the present invention to provide a low power imager for tracking moving objects that requires very little space. It is a further object of the present invention to provide an imager chip that extracts moving object information from a scene with a degree of speed and power efficiency greater than that of an imager that uses physical or logical separation of the imaging and processing circuitry. It is yet another object of the present invention to provide an object tracking chip that uses a dual pixel array to enable optimization of imaging and motion centroid localization.

The imager of the present invention consists of two subsystems, the Active Pixel Sensor ("APS") imager subsystem and the centroid-tracking subsystem. The APS imager subsystem operates as an imager for obtaining real-time images of what the chip sees. The centroid tracking subsystem computes the location of moving targets within a scene. The APS imager subsystem and the centroid tracking subsystem are operated independently of one another, except for a mixed-pixel array that includes two distinct types of pixels. This dual-pixel design enables optimization for two separate tasks. One pixel-type is a standard Active Pixel Sensor used by the APS imager subsystem for lower-noise imaging. The other pixel type is a custom-designed pixel used to help compute the centroid position of a moving object in a scene. The two pixel types are interleaved in the array to ensure automatic registration of imaging tasks.

In the embodiment of the imager chip disclosed herein, the APS pixel array is 120 columns x 36 rows, with a pixel size of 14.7µm x 14.7µm in a 0.5µm CMOS process, and the centroid pixel array is 60 columns x 36 rows, with a pixel size of 29.4µm x 29.4µm. It should be noted that the pixel arrays could be much larger if a larger chip is used. It should also be noted that the pixel sizes could be smaller in different technologies. The chip is fabricated using standard scalable rules on a 0.5µm 1P3M CMOS process. The chip can take APS images at a frame rate of 30fps-8300fps, and centroid data at a rate of 180-3580 (x, y) coordinates per second. The chip nominally consumes 2.6mW of power.

The Active Pixel Sensor pixel is a standard pixel circuit that enables imagers to be constructed using standard CMOS fabrication processes, instead of more specialized and expensive CCD processes. In the APS system of the present invention, the APS pixel includes a photodiode, whose area is exposed to light, and three transistors that are protected from incident light by a metal layer over top. The voltage on the photodiode is first reset to a high voltage, and the voltage is read out later after a specific amount of integration time.

More light hitting the photodiode will cause more current to flow through it, causing a faster rate of decrease in the voltage on the node capacitance. The final result is an output voltage that is high for low light hitting the APS pixel, and low for more light hitting the pixel.
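
A minimal Python sketch of this integrate-then-read behavior, assuming an idealized linear discharge; the node capacitance below is a hypothetical value for illustration, while the reset level follows the Vdd - VTN figure given later in the description:

```python
# Idealized APS sense-node discharge during integration: brighter light ->
# larger photocurrent -> faster discharge -> lower read-out voltage.
V_RESET = 2.3      # volts on the sense node after reset (Vdd - VTN, per text)
C_NODE = 12e-15    # assumed node capacitance in farads (illustrative only)

def aps_output_voltage(photocurrent_a: float, t_int_s: float) -> float:
    """Voltage left on the source-follower gate after integration."""
    v = V_RESET - photocurrent_a * t_int_s / C_NODE
    return max(v, 0.0)  # the node cannot discharge below ground

print(aps_output_voltage(2e-12, 5e-3))   # dim pixel: voltage stays high
print(aps_output_voltage(20e-12, 5e-3))  # bright pixel: voltage falls low
```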

The centroid-tracking subsystem finds the centroid of moving objects within the scene using focal-plane computation. The centroid-tracking pixels in the array include circuitry for both photoreception and preliminary processing. Basic processing at the pixel level allows the elimination of more complex processing later. Edge circuitry receives simple binary data from these pixels in parallel along shared row and column lines. With one edge circuit cell per row or column, the size of the edge circuitry scales linearly with the size of the edge of the array. Digitization of pixel analog values is not necessary.

There is also no need for separate storage elements, because the centroid is computed instantaneously from the pixels themselves. Since the computation is carried out in parallel, the speed of the centroid circuitry is nearly constant over all imager sizes, depending mainly on the circuit reset frequency. The most significant scaling effect is an increase of line capacitance for row and column shared lines proportional to the square root of the total pixels. This relates primarily to rise and fall times of these signals, and does not dramatically affect computation speed. The imager treats moving objects as "interesting," and computes the position of these objects independently of the appearance of the background. This rule is a simple way to increase the number of situations that allow the centroid-tracking system to yield meaningful data.

The centroid tracking subsystem reports the pixel coordinates of the centroid of a moving object in the chip's field of view. All pixels that see a change of light intensity of a certain amount are first flagged as seeing movement. This is a purely temporal measure of movement. Spatial metrics are not used. Next, all rows containing flagged pixels are input to a center-of-mass calculation to find the central row, and all columns are put through the same computation to find the central column. Conceptually, the center of mass calculation works in the same way as finding the right balance point to place a fulcrum under a beam with masses on it. The fulcrum position is calculated by multiplying each mass by its distance from the end of the beam, and then dividing by the total mass. In the imager of the present invention, the weights of each row or column are all considered to be 1, regardless of how many pixels in a particular row or column are flagged. This is an approximation, but the error from a true center of mass calculation is usually negligible, especially for small objects. The centroid of all of the columns is considered to be the x-coordinate, and the centroid of the rows is considered to be the y-coordinate.

These are output as voltages from the imager chip.
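
The row/column rule above can be summarized in a short Python sketch (an illustration of the calculation, not the on-chip analog circuit; the grid and flag pattern are hypothetical):

```python
# Binary-weight centroid as described in the text: every row (or column)
# containing at least one flagged pixel gets a weight of 1, no matter how
# many of its pixels are flagged.
def binary_centroid(flagged: list[list[bool]]) -> tuple[float, float]:
    rows = [r for r, row in enumerate(flagged) if any(row)]
    cols = [c for c in range(len(flagged[0])) if any(row[c] for row in flagged)]
    if not rows or not cols:
        raise ValueError("no motion flagged")
    y = sum(rows) / len(rows)  # central row    -> y-coordinate
    x = sum(cols) / len(cols)  # central column -> x-coordinate
    return x, y

# A 2x2 blob flagged at rows 1-2, columns 2-3 yields the sub-pixel
# centroid (2.5, 1.5).
grid = [[False] * 5 for _ in range(5)]
for r, c in [(1, 2), (1, 3), (2, 2), (2, 3)]:
    grid[r][c] = True
print(binary_centroid(grid))  # (2.5, 1.5)
```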

The APS imager subsystem views the objects in a scene and the scene itself as a regular image. High fidelity images are important to applications such as medical imaging, where accurate visual information is important to human beings who will be viewing them. Good image quality is also important for computer vision applications that use the centroid-tracking capability of the chip as a first pre-processing step before more complicated algorithms that need the full image information. For these uses, current-mode photoreceptors, such as those used in the computation pixels, would produce an image that is too noisy. Instead, the Active Pixel Sensors are used for imaging. They are tiled with the computation pixels so that the centroid position reported has a very direct and strict mapping to the pixel position in the image. There is no calibration necessary as would be required for an optical setup involving two separate cameras.

The APS subsystem uses photodiodes in an integrative mechanism. The photodetectors are the same for both types of pixels, however, the way that the photocurrent is used is different. In the APS subsystem, the photocurrent is integrated on a capacitor and the capacitor voltage is output after some time. In the centroid motion pixel, the photocurrent is placed across a resistor and the voltage across the resistor provides a continuous measure of light intensity.

BRIEF DESCRIPTION OF THE DRAWINGS FIGURE 1 is a system level block diagram of the imager chip of the present invention.

FIGURE 2 is a schematic of the imager chip active pixel sensor ("APS") pixel.

FIGURE 3 is a schematic of the imager chip APS row circuitry.

FIGURE 4 is a schematic of the imager chip's Correlated Double Sampling, column buffer, and output switching circuit.

FIGURE 5 is a schematic of the imager chip centroid-tracking pixel.

FIGURE 6 is a schematic of one segment of the imager chip's centroid edge circuitry.

FIGURES 7 (a) and 7 (b) are sample pictures from the APS image array.

FIGURE 8 is a position plot of output centroid data from the imager chip of the present invention and a sum of a reverse-video series of APS images in the background.

FIGURE 9 is a reverse-video of one APS frame image with six corresponding centroid positions and six centroid positions from a previous APS frame.

FIGURE 10 is a reverse-video sum of APS images of Figure 8 and a stationary LED, with all centroid positions.

FIGURES 11A to 11C are histograms of centroid response with a target of three LEDs, varying from three blinking LEDs, to two blinking LEDs and one steady-on LED, to one blinking LED and two steady-on LEDs.

FIGURE 12 is a histogram over an image array showing reported centroid positions for two blinking LEDs.

DESCRIPTION OF THE PREFERRED EMBODIMENT Figure 1 is a block diagram of the imager chip 10 of the present invention. Chip 10 consists of two subsystems, an APS imager subsystem 12 and a centroid-tracking subsystem 14. The APS subsystem 12 operates as an imager for obtaining real-time images of what chip 10 sees. The centroid tracking subsystem 14 computes the location of moving targets within the scene. Each subsystem is operated independently of the other. No resources are shared between the two subsystems, except for a mixed-pixel array 20 that is the focal plane of chip 10. Mixed-pixel array 20 includes two distinct types of pixels, a standard Active Pixel Sensor pixel 16, used by APS imager subsystem 12 for lower-noise imaging, and a custom-designed pixel 18, used by centroid-tracking subsystem 14 to help compute the centroid position of a moving object in a scene. APS subsystem 12 includes APS column processing circuitry 22, APS row drivers 24 and a plurality of APS pixels 16. Centroid tracking subsystem 14 includes centroid column decoding circuitry 17, centroid row decoding circuitry 19 and a plurality of centroid pixels 18.

Figure 1 shows the layout on chip 10 of mixed-pixel array 20 relative to APS column processing circuit 22, APS row drivers 24, centroid column decoders 17, and centroid row decoders 19. The array of APS pixels 16 is 120 columns x 36 rows, with a pixel size of 14.7µm x 14.7µm. The array of centroid pixels 18 is preferably 60 columns x 36 rows, with a larger preferable pixel size of 29.4µm x 29.4µm. Pixel 18 for centroid computation is exactly twice as long on each side as APS pixel 16, to facilitate tiling. Pixels in the same row are of the same type, and array 20 alternates between rows 13 of centroid pixels 18 and rows 11 of APS pixels 16. Due to the difference in size of the pixels 16 and 18, each row 11 of APS pixels 16 includes twice as many pixels as each row 13 of centroid pixels 18. Imager chip 10 is fabricated in a standard analog 0.5µm 1P3M CMOS process, although any standard CMOS process can be used.

As shown in Figure 2, the circuit for APS pixel 16 is a basic three-transistor/one-photodiode circuit. Thus, each APS pixel 16 includes a photodiode 25, a reset transistor 26, an output transistor 28, and a transistor select switch 30 to address the pixel 16. APS pixel 16 has no provision for electronic shuttering, and is optimized primarily for density and secondly for fill factor. All transistors in pixel 16 are NMOSFETs to reduce the area of pixel 16. "Column" node 35 is the output of pixel circuit 16. A "row" output is not needed because row selection is done at the bottom of array 20, in the Correlated Double Sampling circuit shown in Figure 4.

The gate capacitance of transistor 28 is reset when a high signal is applied to the gate of transistor 26 through "reset" node 34. This pulls the gate of transistor 28 to supply voltage Vdd. The current through photodiode 25 then discharges the gate of transistor 28 towards ground. When the pixel value is read out by turning on transistor 30, transistor 28 behaves as a source follower so that its gate voltage (minus the voltage drop given by the threshold voltage of transistor 28 and the current flowing through it) appears at the "column" node 35. This voltage is interrogated by the CDS circuit of Figure 4.

The row circuitry for the APS pixels 16, i.e., APS Row Drivers 24, is comprised of two cyclical shift register chains, one for row reset signals and the other for row select signals. Figure 3 shows a schematic for a portion of a shift register chain 37 for row reset signals, e.g., 34 and 34A shown in Figure 3, although it should be understood that a similar shift register chain would be used for the row select signals, called the "rowSelect" signal 36 in Figure 2. Each row 11 of APS pixels 16 receives "reset" and "rowSelect" signals 34 and 36, respectively, from one stage of each chain of shift registers. Shift registers, e.g., 42 and 42A in Figure 3, are clocked by a signal 40 called "rowClk", which causes the shift registers' bit pattern to advance forward by one row, e.g., from the output of register 42 to the output of register 42A, causing readout of the next row to begin. The reset shift registers 42, 42A, et cetera, can be preloaded with blocks of ones and zeros in a flexible way, allowing integration time for each row to be specified as a modifiable fraction of a total frame time.

This can be viewed as a "rolling shutter." Since imager 10 is an integrative imager, the current flowing through photodiode 25 is allowed to discharge the gate of transistor 28 for a prescribed period of time (the integration time) after reset occurs. Immediately after reset, the gate capacitance of transistor 28 is fully charged. The more light that is incident on photodiode 25, the greater the current flow through it, and thus the smaller the voltage on the gate of transistor 28 after the prescribed time. For a given current flow through photodiode 25, the longer the prescribed time, the less will be the voltage stored in the gate capacitance of transistor 28 after the prescribed time. Thus, the amount of voltage on the gate of transistor 28 can be controlled by controlling the time over which current flowing through photodiode 25 is allowed to affect such voltage.
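
A toy Python model of this rolling-shutter pattern (purely illustrative; the register length matches the 36-row embodiment, but the block size is arbitrary) shows how a preloaded block of reset bits circulates one row per clock:

```python
# Cyclical reset shift register, modeled as a rotating bit list: a row is
# held in reset while its bit is 1 and integrates while its bit is 0, so
# the length of the preloaded block of ones sets each row's integration
# time as a fraction of the total frame time.
NUM_ROWS = 36

def reset_pattern(reset_rows: int) -> list[int]:
    """Preload: 'reset_rows' ones followed by zeros (integrating rows)."""
    return [1] * reset_rows + [0] * (NUM_ROWS - reset_rows)

def tick(pattern: list[int]) -> list[int]:
    """One rowClk: rotate the cyclical register by one position."""
    return [pattern[-1]] + pattern[:-1]

p = reset_pattern(reset_rows=6)  # each row integrates for 30/36 of a frame
for _ in range(3):
    p = tick(p)
print(p[:12])  # [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0]: block advanced 3 rows
```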

Another reset circuit (not shown) connected to the reset lines 34 of the APS pixels 16 facilitates reset timing on a shorter time scale than one row clock. A separate global signal 38, called "directReset" in Figure 3, is combined in an AND gate, e.g., 39, with each row's shift register output signal, e.g., 41, from a corresponding shift register, e.g., 42. Using reset signal 38, integration can be stopped and reset initiated in the middle of one row's output cycle. This is especially important to facilitate the operation of the Correlated Double Sampling circuit shown in Figure 4 and described below.

Each column of pixels in mixed-pixel array 20 has its own dedicated processing circuitry. The APS column processing circuitry 22 includes its most important block, the Correlated Double Sampling ("CDS") circuit 53 shown in Figure 4. The function of CDS circuit 53 is to subtract the reset voltage from the output signal voltage, ensuring that only the difference between the two is measured, instead of the absolute output signal itself. The output and reset signals appear at the "column" node 35 of Figure 2. The reset signal is the output voltage when the reset signal 34 on the gate of transistor 26 is low. It consists of the drop across transistor 26 and the gate-source drop across transistor 28. The output signal consists of the reset voltage minus the drop due to the integrated photocurrent. Hence, CDS circuit 53 takes the difference between these two signals, leaving only the integrated photocurrent voltage. This drastically reduces offset errors in readout. It also compensates for noise and different reset voltage levels resulting from different light intensities incident on photodiode 25 during reset. CDS circuit 53 is a switched capacitor circuit including an NMOSFET transistor 46 having its drain connected through a capacitor 51 to an "input" signal 52. Connected to the gate of transistor 46 is a clamping signal 54 called "phiClamp". Also connected to the source of transistor 46 is a bias signal called "nBias". The phiClamp signal 54 is a digital signal that is used to turn on transistor 46. When phiClamp is high, it turns on transistor 46 and the voltage at the input of buffer 49 and one side of capacitor 51 is set to VBias at node 56. This is done when the output of pixel 16 is available. Then phiClamp goes low, which causes the input of buffer 49 to float. At that time, pixel 16 is reset. Capacitor 51 then causes the voltage in buffer 49 to be equal to the difference between pixel 16's reset signal 34 and the output signal 35, plus Vbias 56. This is the image voltage. Vbias 56 is provided from off-chip. PhiClamp 54 is another control signal provided by scanning circuits (not shown). The simple switched capacitor circuit 53 shown in Figure 4 is used because it makes efficient use of space, which is especially important for the imager chip 10 of the present invention, since the switching circuit is used in each column of APS pixels 16.
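
The CDS operation reduces to simple arithmetic, sketched below in Python (an idealized model of the behavior described above, not a circuit simulation; the voltage values are hypothetical):

```python
# Correlated Double Sampling in one line: the buffer sees the clamp bias
# plus the difference between the pixel's reset level and its
# post-integration signal level, cancelling the reset-level offset.
def cds_output(v_reset: float, v_signal: float, v_bias: float) -> float:
    return v_bias + (v_reset - v_signal)

# Two pixels with different reset levels but the same integrated
# photocurrent drop produce the same CDS output:
print(cds_output(v_reset=1.30, v_signal=0.90, v_bias=1.0))  # 1.4
print(cds_output(v_reset=1.25, v_signal=0.85, v_bias=1.0))  # 1.4
```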

Buffer circuit 49 follows CDS circuit 53 for output buffering, and is a two-stage operational amplifier circuit with Miller compensation in a unity-gain configuration. Buffer circuit 49 includes an operational amplifier 44 with negative feedback, a switch 55, including transistors 48 and 50, that is connected to the output of opamp 44 and the output node 62 of buffer circuit 49, and a NOT gate 61 driven by output signal 60 named "phiOut". Biasing opamp 44 is bias signal "nBias" at node 56. The signal nBias 56 sets the operating point of opamp 44. It comes from outside of chip 10. PhiOut at node 60 is another digital control signal which samples and holds the final pixel value. The "output" signal is presented off chip 10 by a multiplexer that selects each pixel in a row, one at a time. Finally, the end of APS column processing circuit 22 employs yet another shift register chain (not shown) to sequentially activate the switches that output one column voltage at a time to the single output node 62.

The basic functionality of the centroid tracking subsystem 14 is the computation of the centroid of all pixels 18 whose brightness levels vary with time. This approximates finding the centroid of a moving object. A moving object will at least cause pixels 18 at its edges to change (in the case of a solid colored object, for example), and, at most, many pixels 18 within the object's image will also change if it contains details or texture. The centroid of time-varying pixels in both types of images will be close to the center of the object.

This scheme works most accurately for small objects. In the present invention, only an increase in brightness is detected. The output of centroid tracking subsystem 14 is a set of two voltages, one for the x position output by centroid column decode circuit 17, and one for the y position output by centroid row decode circuit 19.

The method employed by the present invention to detect pixel brightness changes is like a simplified form of an address event representation imager.

That is, the only output from each pixel 18 is a digital event, i.e., the assertion of that pixel's row and column through "row" signal 92 and "column" signal 94.

Centroid edge circuitry decoders 17 and 19 then process the activated rows and columns of pixels 18 to find the centroid. The circuit used by decoders 17 and 19 for this calculation is shown in Figure 6. Moving the more complicated processing to the edges of array 20 keeps the size of pixels 18 smaller and helps to increase fill factor for the motion sensitive pixels 18.

Figure 5 is a schematic of the centroid-tracking pixels 18 used with the centroid-tracking subsystem 14. Each pixel 18 includes a photodiode 63, which is biased by an NMOSFET transistor 64 with its gate voltage fixed by a bias signal 82 called "pixBias". PixBias is a biasing signal provided from off chip that sets the operating point of the centroid motion pixel 18. The voltage at the source of load NMOSFET 64 will be proportional to either the logarithm or the square root of the intensity of the light incident on photodiode 63, depending on whether photodiode 63's current operates NMOSFET 64 in the sub-threshold or above-threshold region, respectively. Since the function of the circuit of Figure 5 is to detect a relative change in pixel brightness, the circuit is designed to be sensitive to the same multiplicative change in the photocurrent of photodiode 63 at any absolute brightness level. The logarithmic transfer function of the sub-threshold operating transistor 64 translates a multiplicative increase or decrease in the photocurrent of photodiode 63 into an additive increase or decrease in output voltage, simplifying the task for the next stage of pixel circuit 18. The square root function does not share this property exactly, but has a similar curve and approximates a logarithm. For most light levels, pixels 18 of chip 10 operate in the sub-threshold region.
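
A numeric sketch of why the logarithmic front end matters (the slope constant below is an assumed illustrative value, not a measured parameter of transistor 64):

```python
import math

# A fixed multiplicative change in photocurrent maps to the same additive
# voltage step at any absolute brightness, so a single comparator
# threshold works across light levels.
V_SLOPE = 0.060  # assumed subthreshold slope, volts per natural-log unit

def log_receptor_v(photocurrent_a: float) -> float:
    """Idealized logarithmic photoreceptor output voltage."""
    return V_SLOPE * math.log(photocurrent_a / 1e-15)

for i0 in (1e-12, 1e-9):           # a dim scene and a bright scene
    step = log_receptor_v(2 * i0) - log_receptor_v(i0)
    print(f"{step * 1e3:.1f} mV")  # same ~41.6 mV step for a 2x change
```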

The photosensitive voltage at the junction between photodiode 63 and transistor 64 is AC coupled to the remainder of pixel circuit 18 through a PMOS capacitor 66, with the well of capacitor 66 tied to its drain and source. The rest of pixel 18 consists of a resettable comparator circuit 81, implemented using a CMOS inverter 68 biased through a signal called "invPbias", and a feedback switch 83 including transistors 70 and 72. The inverter includes a cascode transistor 69 to enhance gain. PhiReset is a digital signal provided from off chip that resets the centroid motion pixel 18.

PhiResetB is the complement of PhiReset. InvPbias and Vcas are bias voltages that set the operating point of comparator 81.

Operation of pixel 18 starts with reset of a comparator block within pixel 18 that is comprised of transistors 69, 70 and 72. In reset, transistors 74 and 76 are on. It should be noted that transistor 74 functions as a charge injection compensation device when transistor feedback switch 76 is opened. The inverter feedback switch 83 is closed, input is made equal to output, and inverter 68 settles at its switching voltage. At this time, the voltage difference between the cathode of photodiode 63 and the input voltage of inverter 68 is stored across PMOS capacitor 66. PMOS capacitor 66 is held in inversion, since the inverter reset voltage ("phiResetB") is significantly lower than the voltage of photodiode 63. When the switch 83 is opened, inverter 68 goes into open-loop operation. As the light level on photodiode 63 increases, the voltage on its cathode will decrease. Since the input to the inverter circuit 68 is floating (high impedance), its voltage will now track the voltage on photodiode 63, offset by the voltage across capacitor 66 itself. When the voltage on photodiode 63 decreases by a given amount ΔV, corresponding to a given factor increase in photocurrent, inverter 68 will trip and its output will go high. If light on the pixel decreases, however, no event will be signaled because inverter 68 will move even farther away from its switching threshold. The inverter drives two NMOS pull-down transistors 78 and 80, attached, respectively, to the particular row and column lines 92 and 94 associated with pixel 18. These lines 92 and 94 are set up in a wired-OR configuration, with weak PMOS pull-up transistors 100 (Figure 6) on the edges of the array of motion pixels 18. Switches (not shown) can disconnect the array of motion pixels 18 from the edge circuitry 17 and 19 to avoid current draw during reset.

To compute the 2-D centroid of a moving object, the centroids of the activated rows and activated columns are separately computed in order to arrive at a final (x, y) coordinate. A center-of-mass algorithm is employed, resulting in sub-pixel precision. All rows containing pixels that are flagged as seeing movement (i.e., an increase in brightness) are input into the center of mass calculation to find the central row. All columns containing flagged pixels are put through the same computation to find the central column. For the center of mass calculation, the weights of each row or column are all considered to be 1, regardless of how many pixels in a particular row or column experience an increase in brightness.

The centroid column decode 17 and the centroid row decode 19 shown in Figure 1 simultaneously perform the column and row center of mass calculations, respectively. To this end, transistors 78 and 80 in the centroid pixel circuit of Figure 5 each send a current through the "row" and "column" nodes 92 and 94, respectively. Nodes 92 and 94, in turn, are connected to "row" or "column" ports 108 used in the edge circuitry shown in Figure 6.

Figure 6 is a schematic of both the centroid column decode edge circuitry 17 and the centroid row decode edge circuitry 19 shown in Figure 1, since operation of column decode edge circuitry 17 is identical to the operation of row decode edge circuitry 19. Both the column and row decode circuits 17 and 19, respectively, include an array of the edge circuit of Figure 6, with each such edge circuit being connected to its two neighbor edge circuits through the nodes 112 and 114 called "resLeft" and "resRight", respectively. The node 110 called "vCentroid" is common to all the Figure 6 edge circuits.

The edge circuitry 17 of the centroid subsystem 14 receives a series of column outputs 108 corresponding to each column of the array of centroid pixels 18. Columns containing pixels 18 that have experienced an increase in brightness will show up as a logic low signal at output 108. The center-of-mass calculation computes a weighted average of every activated column using the column position as weight. For example, if only the 20th and 21st columns of pixels 18 are activated, the result of the center-of-mass calculation would be 20.5. This example illustrates sub-column position precision. The position weights are represented as a set of voltages from a resistive ladder voltage divider 111 in Figure 6, with as many taps as there are columns. These voltages are buffered using simple 5-transistor differential amplifiers 96 shown in Figure 6. The 20th column with a low (activated) output will first set an SR flip-flop 98, locking it high until the flip-flop 98 is reset with an externally provided reset signal called "edgeReset". The output of SR flip-flop 98 turns on weak PMOS transistor 102 acting as a resistor, which connects the column weight voltage to the centroid output node 110 called "vCentroid". All active columns will have their weight voltages connected to this common node 110 through a PMOS resistor 102, and this network of voltages interconnected through identical pseudo-resistors computes the average of all voltages connected. The computation is done by virtue of physics, i.e., when voltages are shorted together through identical resistors, the common node voltage is the average of the individual voltages. The output voltage is thus the center of mass value of all active columns. The centroid of all of the columns is considered to be the x-coordinate, and the centroid of the rows is considered to be the y-coordinate.
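
The resistive-ladder average can be mimicked in a few lines of Python (a sketch of the ideal behavior; the 0-1V ladder range is an assumption, and real PMOS pseudo-resistors only approximate identical resistances):

```python
# Identical pseudo-resistors short the weight voltages of all activated
# columns onto one node, whose voltage settles at the mean of the
# connected taps; each tap voltage encodes its column position.
NUM_COLS = 60
V_LADDER = [c / (NUM_COLS - 1) for c in range(NUM_COLS)]  # divider taps, 0-1V

def v_centroid(active_cols: list[int]) -> float:
    taps = [V_LADDER[c] for c in active_cols]
    return sum(taps) / len(taps)  # equal resistors -> simple mean

v = v_centroid([20, 21])
print(f"{v * (NUM_COLS - 1):.1f}")  # 20.5, the sub-column result from the text
```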

Each centroid pixel 18's position factors into the center of mass calculation with equal weight. Because the region of interest is defined as everywhere pixel light intensity has changed, it is necessary to assume that every point has a weight of "0" or "1". It is possible to use other functions, such as one that would weight each pixel by the extent to which its light intensity has changed. However, this is not the preferred metric, which is a binary condition, i.e., change or no change.

Circuits 17 and 19 also do not consider the number of pixels activated in a column or row. These circuits give every column or row the same weight independent of the number of activated pixels. Instead of noting the actual centroid of the pixels that are activated, circuits 17 and 19 detect the centroid of a rectangular box coincident with the edges of the region of activated pixels.
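
The difference between this bounding-box centroid and a true pixel centroid can be seen numerically (an illustrative Python comparison with a hypothetical L-shaped blob):

```python
# Equal row/column weights mean the circuits report the center of the
# bounding box of activity, not the true centroid of the flagged pixels.
pixels = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # (row, col) flagged pixels

true_y = sum(r for r, _ in pixels) / len(pixels)   # 1.4
true_x = sum(c for _, c in pixels) / len(pixels)   # 0.6

rows = sorted({r for r, _ in pixels})
cols = sorted({c for _, c in pixels})
box_y = sum(rows) / len(rows)                      # 1.0
box_x = sum(cols) / len(cols)                      # 1.0

print((true_x, true_y), (box_x, box_y))  # small error, negligible for small blobs
```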

The limiting speed of the operation of centroid subsystem 14 is dependent on the reset time and the propagation delay of pixel inverter 68. The reset time is the period between resets for the circuit of Figure 5, i.e., the time period between assertions of signal 86, called "phiReset". Signal 86 is a signal provided by the user of chip 10. There is a certain minimum time that the inverter in comparator 81 of pixels 18 needs to be reset in order to settle to its final trip point. The propagation delay of the inverter in comparator 81 is directly dependent upon the bias of transistor 69. The parasitic capacitance on the output of the inverter in comparator 81 is approximately 12.3fF. This yields a propagation delay of 134µs. Summing this delay time with the minimum reset time gives a total minimum cycle time of 279µs and a maximum centroid rate of approximately 3580Hz. It should be noted that increasing the inverter bias current by an order of magnitude will decrease the inverter propagation time by a factor of 10, and will increase the current consumption of centroid system 14 by only about 1.5%.
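
The timing budget implied by these figures (all numbers taken from the text; the reset time is inferred by subtraction):

```python
# Centroid-loop timing from the figures quoted above; the minimum reset
# time is inferred as the difference between the quoted totals.
T_PROP = 134e-6             # inverter propagation delay (from the text)
T_CYCLE = 279e-6            # total minimum cycle time (from the text)
T_RESET = T_CYCLE - T_PROP  # implied minimum comparator reset time

print(f"reset ~{T_RESET * 1e6:.0f} us, max rate ~{1.0 / T_CYCLE:.0f} Hz")
# -> reset ~145 us, max rate ~3584 Hz (the text's "approximately 3580Hz")
```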

The output voltage range of APS system 12 is initially limited by the maximum reset voltage in the pixel, minus the lowest voltage for reliable operation of photodiode 25. The reset voltage is approximately Vdd - VTN, or 3.3V - 1.0V = 2.3V for NMOSFET transistor 26, including the bulk effect. Reset transistor 26 supplies current to photodiode 25 during the reset cycle. Exactly how much current photodiode 25 draws is determined by the light intensity falling on pixel 16 at the time of reset, and the output voltage ("column" signal 35) of pixel 16 will reflect this. Since these factors can and do vary during the operation of the imager 10, the reset voltage also varies. Part of the function of the Correlated Double Sampling circuit shown in Figure 4 is to compensate for this normal variance of the reset signal. The follower, transistor 28 in pixel 16, causes the column voltage 35 to drop by another VTN, which with the bulk effect reduces the maximum (reset) voltage level on the column signal 35 to 1.3V.

APS subsystem 12 is readily capable of imaging a moving object at a frame rate of 30 frames per second ("fps"). Faster frame rates are possible, but there is a direct trade-off between exposure time and frame rate, with faster rates necessitating higher light levels. The absolute limit on the speed of APS subsystem 12 is set by the column current sources (not shown) that bias the source-followers 28 in each of APS pixels 16 for operation. These current sources, which are connected to column voltage node 35, sink current to ground and are necessary to power source follower 28 when pixel 16 is selected by turning on transistor 30. These current sources are normally biased to around 260nA for low-power operation. This current drive, combined with the column line capacitance of 200fF, gives a maximum fall rate of 1.3V/µs. This makes the worst case settling time for one column about 925ns with a 1.2V voltage range.

The settling time for each CDS amp 44 to be switched onto the pixel bus is 20ns. Thus, the columns in a row take 925ns to settle, and each pixel clocked out takes 20ns to settle. Since imager chip 10 has 36 rows and 120 columns, the maximum frame rate is approximately 8300fps, ignoring the exposure problems associated with short integration time. Sample images 128 and 130 taken with the array of APS pixels 16 running at 30 frames per second can be seen in Figures 7(a) and 7(b).
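
These settling figures reproduce the quoted frame-rate ceiling (illustrative arithmetic only):

```python
# Reconstructing the ~8300 fps ceiling from the per-column and per-pixel
# settling times quoted above.
ROWS, COLS = 36, 120
T_COLUMN_SETTLE = 925e-9  # worst-case column settling per row
T_PIXEL_OUT = 20e-9       # per-pixel output settling through the CDS amp

t_row = T_COLUMN_SETTLE + COLS * T_PIXEL_OUT  # 3.325 us per row
t_frame = ROWS * t_row                        # ~120 us per frame
print(f"{1.0 / t_frame:.0f} fps")             # ~8354 fps, i.e. about 8300
```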

Power consumption of the whole APS subsystem is the sum of the digital row circuitry, pixel reset current, pixel amplifier output current, CDS circuitry, and finally the digital shift registers for outputting each pixel. Assuming a normal photocurrent of 2pA/pixel, which is observed under normal indoor lighting conditions, the total current of the APS subsystem is approximately 493µA.

The power consumption of the centroid-tracking circuitry 14 depends on the photocurrent drawn by the continuously biased photodiode 63, the operation of the pixel inverter 68, and the digital and analog circuitry 17 and 19 on the periphery of the chip. Photocurrent can easily vary by decades depending on the intensity of the incident light on photodiode 63. This photocurrent is approximately 6 picoamps for indoor lighting conditions.

Given this level of incident light, the continuously-biased pixels 18 use about 13nA over the whole array of centroid pixels 18. The total current of this block is approximately 116µA, of which the largest component goes to the buffer differential amplifiers on the resistive ladder of the edge circuitry.

Centroid tracking system 14 was tested using an analog oscilloscope screen 132 as a target. The X/Y mode setting was used, and two function generators (not shown) set to 10 and 20 Hz supplied the scope channels. In this way, a moving point of light tracing a stable "figure-8" pattern 134 could be observed on oscilloscope screen 132. APS image data and centroid coordinate data were taken simultaneously. Centroid voltages were converted to digital data and sent to a controlling computer (not shown). A composite image of all APS frames was produced by summing all frames and then inverting the brightness of the image for easier printing. On top of this composite image are plotted the centroid positions reported by the centroid-tracking subsystem 14 of chip 10. The result is displayed in Figure 8. The data is an excellent match to the target, which was comprised of two sine waves in the x- and y-directions.

There are six centroid coordinates taken for every APS frame. One such APS image 136 and the centroid coordinates 138 of the current and previous frame (plotted as "o" and "x", respectively, on display 132) are shown in Figure 9. It is obvious that while the APS imager 12 sees one smear of the path of the oscilloscope point, the centroid-tracking circuitry 14 is able to accurately and precisely plot specific points along the path in real time.

Figure 10 shows another example of the cumulative centroid positions 138A reported for an oscilloscope target 134A. This time, a non-blinking, stationary LED 140 was placed next to the moving oscilloscope target 134A to show that stationary LED 140 has no effect on centroid positions 138A, despite the fact that LED 140 is much brighter than oscilloscope target 134A.

With faster moving targets, the speed of centroid subsystem 14 could be increased even more. Centroid pixels 18 are sensitive to changes in incident light since their last reset. Therefore, faster changes in light (faster movement) would allow for shorter reset intervals and higher measurement frequency.

In addition to trials involving a single moving target, experiments using chip 10 with multiple targets were performed. Figures 11(a) through 11(c) show a target consisting of three LEDs 140A, 140B and 140C laid out in a triangle formation being imaged. All LEDs were either blinking or steadily on, and were stationary. Three different tests were performed. The first test, shown in Figure 11(a), involved all three LEDs 140A, 140B and 140C blinking at exactly the same time. Figure 11(a) shows a histogram of the centroid positions 142 reported by chip 10, with blinking LED positions 140A, 140B and 140C marked by circles. From this histogram, it is seen that the vast majority of centroid positions 142 reported are in the center of the three LED positions. A second test (Figure 11(b)) was the same as the first test, except that LED 140C was continuously on and not blinking. In Figure 11(b), the non-blinking LED 140C is marked with a square outline instead of a circle outline. Again, the centroid positions 142 plotted lie in between the two blinking LEDs 140A and 140B, and are unaffected by the steadily-on LED 140C. Similarly, Figure 11(c) shows a third test with one blinking LED 140B marked with a circular outline, and two non-blinking, steadily-on LEDs 140A and 140C marked with square outlines. In this case, the only centroid positions 142 reported are right on top of the only element of the scene that is changing in time, i.e., LED 140B.

A fourth test with multiple blinking LEDs was performed that involved uncorrelated blinking. Two LEDs 144A and 144B with separate blinking periods and phases, at different x- and y-positions, were set up in front of imager 10, and centroid positions 146A and 146B were recorded. Figure 12 shows a histogram of the number of values recorded at each region of the array. It can be seen that in addition to the two centroid positions 146A and 146B of the actual LEDs 144A and 144B showing a marked response, the linear combination of their positions also shows a considerable number of recorded coordinates 146C. If two LEDs are seen to blink within the same detection period, the centroid of their positions will be computed and reported. This is the normal operation of centroid subsystem 14. Multiple target tracking is still possible, however, with the addition of some basic statistical analysis of the positions reported. Through techniques such as SVD, the linearly-independent positions 146A and 146B can be extracted, and the linear combination of the two positions 146C can be recognized as a false position. Of course, this has more limited applicability. For instance, if true movement of a third object happened to coincide with the linear combination of the movement of the other two objects, it might be falsely omitted. But for simple observations of a few objects, it is possible to extract meaningful position data for all objects involved.
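
One possible form of such post-processing, sketched in Python (a hypothetical illustration of rejecting the combined "ghost" position, not circuitry or software disclosed herein; the cluster centers and tolerance are assumed values):

```python
# With two uncorrelated blinking targets, reported positions cluster at
# each true target plus a "ghost" at their combination. If a cluster sits
# at the midpoint-style combination of two others, flag it as a likely
# false position.
def is_ghost(candidate, a, b, tol=1.5):
    gx, gy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return abs(candidate[0] - gx) <= tol and abs(candidate[1] - gy) <= tol

clusters = [(20.0, 10.0), (40.0, 30.0), (30.1, 19.8)]  # assumed cluster centers
a, b, c = clusters
print(is_ghost(c, a, b))  # True: c is consistent with the a/b combination
```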

A system with broader applicability could be constructed by changing the edge circuitry 17 and 19 of the centroid subsystem 14, allowing the location of multiple regions of activity in the array 20.

Although the present invention has been described in terms of particular embodiments, it is not intended that the invention be limited to those embodiments. Modifications of the disclosed embodiments within the spirit of the invention will be apparent to those skilled in the art. The scope of the present invention is defined by the claims that follow.