

Title:
METHOD FOR REDUCING THE ERROR INDUCED IN PROJECTIVE IMAGE-SENSOR MEASUREMENTS BY PIXEL OUTPUT CONTROL SIGNALS
Document Type and Number:
WIPO Patent Application WO/2017/205829
Kind Code:
A1
Abstract:
An image sensor for forming projective measurements includes a pixel-array wherein each pixel is coupled with conductors of a pixel output control bus and with a pair of conductors of a pixel output bus. In certain pixels, the pattern of coupling to the pixel output bus is reversed, thereby beneficially de-correlating the image noise induced by pixel output control signals from vectors of the projective basis.

Inventors:
MCGARRY JOHN (US)
Application Number:
PCT/US2017/034830
Publication Date:
November 30, 2017
Filing Date:
May 26, 2017
Assignee:
COGNEX CORP (US)
International Classes:
H04N5/378; H04N5/335
Foreign References:
US20160010990A1 (2016-01-14)
US20150062396A1 (2015-03-05)
Other References:
None
Attorney, Agent or Firm:
VACAR, Dan V. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. An image sensor comprising:

a first output conductor of a pixel output bus;

a second output conductor of the pixel output bus; and

a set of pixels comprising

a first pixel comprising a first output select transistor coupled to a first pixel output control bus for switching pixel output to the first output conductor, and a second output select transistor coupled to the first pixel output control bus for switching pixel output to the second output conductor, and

a second pixel comprising a first output select transistor coupled to a second pixel output control bus for switching pixel output to the second output conductor, and a second output select transistor coupled to the second pixel output control bus for switching pixel output to the first output conductor.

2. The image sensor of claim 1, wherein the set of pixels corresponds to a column of a rectangular pixel-array.

3. The image sensor of claim 2, wherein each pixel of the set of pixels is coupled to the pixel output bus through a respective crossover.

4. The image sensor of claim 3, wherein the second pixel of the set of pixels is coupled to the second pixel output control bus through crossovers.

5. An image sensor comprising: a set of pixels;

a first output conductor; and

a second output conductor;

wherein a first subset of the set of pixels are coupled to respective output control buses to receive a first pixel output control signal to switch pixel output to the first output conductor, and to receive a second pixel output control signal to switch pixel output to the second output conductor, and

wherein a second subset of the set of pixels are coupled to respective output control buses to receive the first pixel output control signal to switch pixel output to the second output conductor, and to receive the second pixel output control signal to switch pixel output to the first output conductor.

6. The image sensor of claim 5, comprising a rectangular pixel-array, wherein the set of pixels is a column of the rectangular pixel-array, and

pixels in every other row of the rectangular pixel-array belong to the second subset of the set of pixels.

7. The image sensor of claim 5 or claim 6, wherein each pixel of the set of pixels comprises a crossover to couple the output of the set of pixels.

8. The image sensor of claim 7, wherein the second subset of pixels receives pixel output control signals that are inverted by crossovers coupled to the pixel output control buses.

9. An image sensor comprising: a pixel array including pixels partitioned into rows and columns, wherein each pixel of the pixel array is coupled with

(i) a first conductor and a second conductor of a pixel select line, of a given row of the pixel array, with which at least some other pixels from the same given row also are coupled, and

(ii) a first conductor and a second conductor of a pixel output bus, of a given column of the pixel array, with which all other pixels from the same given column also are coupled, wherein the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, are swapped on at least one row of the pixel array.

10. The image sensor of claim 9, wherein the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, are swapped on every row of the pixel array.

11. The image sensor of claim 9, wherein the first conductor and the second conductor of the pixel select line, of at least one row of the pixel array, are swapped in correspondence with the swapped first conductor and the second conductor of the pixel output bus on the at least one row of the pixel array.

12. The image sensor of claim 9, wherein

the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, are swapped on every row of the pixel array, and

the first conductor and the second conductor of the pixel select line, of alternating rows of the pixel array, are swapped.

13. The image sensor of claim 12, comprising: circuitry coupled with the pixel array and configured to provide select signals on the select lines for the pixels in the rows,

wherein the select signals are provided in accordance with a sampling matrix comprising a product of a random basis function and a filtering function, such that coefficients associated with the sampling matrix have support from an equal number of even and odd rows of the pixel array, and

wherein, for each column, current signals from a first set of pixels selected with respective pixel select signals provided on the first conductor of the pixel select lines are summed on the first conductor of the pixel output bus of the column, and current signals from a second set of pixels selected with respective select signals provided on the second conductor of the pixel select lines are summed on the second conductor of the pixel output bus of the column.

14. The image sensor of claim 13, further comprising:

comparators, wherein each respective one of the comparators is coupled with the first and second conductors of the pixel output bus of each respective column of the pixel array, and each respective one of the comparators is configured to binarize, for each respective column of the pixel array, a difference between the summed current signals on the first conductor of the pixel output bus and the summed current signals on the second conductor of the pixel output bus.

15. The image sensor of claim 13, wherein the random basis function is a sparse random basis function.

16. The image sensor of claim 9, wherein all pixels within each respective row are coupled with the same pixel select line of the respective row.

17. The image sensor of claim 9, wherein every other kth one of the pixels within each respective row are coupled with a common one of multiple pixel select lines of the respective row.

Description:
METHOD FOR REDUCING THE ERROR INDUCED IN PROJECTIVE IMAGE-SENSOR MEASUREMENTS BY PIXEL OUTPUT CONTROL SIGNALS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority under 35 U.S.C. § 119(e)(1) of U.S. Provisional Application No. 62/342,632, filed on May 27, 2016, which is incorporated by reference herein.

BACKGROUND

[0002] The disclosed technologies relate generally to machine vision, and more particularly to machine vision systems for sensing depth information of a scene illuminated by a plane of light.

[0003] A method for acquiring 3-dimensional (3D) range images includes providing a light source with line generating optics to illuminate a single plane of a scene, positioning a digital camera to view the light plane such that objects illuminated by the light source appear in the optical image formed by the camera lens, capturing a digital image of the scene, processing the digital image to extract the image coordinates of points in the scene illuminated by the light source, and processing the image coordinates according to the triangulation geometry of the optical system to form a set of physical coordinates suitable for measurement of objects in the scene.

[0004] A major limitation associated with such a conventional machine vision process is that a 2-dimensional intensity image of substantial size must be captured by the digital camera for each and every line of physical coordinates formed by the system. This can make the time to capture the 3D image of a scene as much as 100 times longer than the time required to acquire an intensity image of a scene of the same size, thereby rendering laser-line based 3D image formation methods too slow for many industrial machine-vision applications.

[0005] Moreover, in an active pixel CMOS (Complementary Metal-Oxide Semiconductor) image sensor, the state of pixel output control signals is known to influence the voltage associated with charge stored by the floating-diffusion node of the pixels. This influence is related to capacitive coupling of the selected signal conductor with the floating-diffusion node, which may change its effective capacitance, and therefore the charge-to-voltage conversion factor. In a conventional image sensor, this influence is generally not a problem because pixels have only one output and only one row of the image sensor can be selected at any given time. Therefore, even though activation of the pixel output select signal may influence the charge-to-voltage conversion factor of pixels on the selected row, the influence is substantially similar for each pixel selected for readout, and the influence on the output image is uniform.

SUMMARY

[0006] Unlike conventional CMOS image sensor technology, the disclosed technologies are related to image sensors that form coefficients of a projective measurement by selecting the output of a first plurality of pixels of a pixel array to a first conductor of a pixel output bus, while selecting the output of a second plurality of pixels to a second conductor of the pixel output bus, according to a set of pixel output control signals determined by information of a sampling matrix as described below.

[0007] In an active-pixel CMOS image sensor, the influence on the charge-to-voltage conversion factor, induced at a pixel's floating-diffusion node through capacitive coupling, may be different for each state of the output control signals in spatial proximity to the pixel. For example, in an embodiment with 3:1 spatial interleaving, described below in connection with FIGs. 3-4, there is a different capacitive coupling value for each possible state of the 6 pixel output select lines per row of the pixel-array. This implies 2^6 = 64 possible states, although, in practice, the number is smaller due to constraints typically imposed on coefficients of the sampling matrix. For example, a 3:1 spatially-interleaved sampling matrix consisting of binary coefficients may have 2^3 = 8 possible states, and if the same sampling pattern is applied to all columns then there are only 2^1 = 2 possible states.

[0008] As described below, the sampling matrix Φ may be designed to increase the sparseness of the signal encoded in a measurement Y of an image signal X by rejecting certain aspects of the image signal that are unrelated to image features of interest. An example of such unrelated image features is a constant background level in the sensed signal that may be related to biasing of pixel current sources. However, non-uniform capacitive coupling of pixel output select conductors with the pixel floating-diffusion nodes can substantially alter the intended spatial frequency response characteristic of the image sensor, ultimately resulting in the need to acquire substantially more measurement coefficients to provide for the accurate encoding of a signal of interest.

[0009] Unfortunately, the unavoidable spatial proximity of the conductors of the pixel output control bus to the pixel floating-diffusion-node makes it difficult to completely eliminate parasitic capacitance that may exist between these circuit elements and therefore the unwanted signal coupling.

[0010] Notwithstanding the practical difficulty of reducing the capacitive coupling between conductors of the pixel output control bus and pixel floating-diffusion nodes, the technologies disclosed herein mitigate much of the detrimental effects of such capacitive coupling in the formation of coefficients of the measurement matrix through proper arrangement of pixel input/output connections.

[0011] In at least one aspect, the disclosed technologies can be implemented as an image sensor including a set of pixels; a first output conductor; and a second output conductor. A first subset of the set of pixels are coupled to respective output control buses to receive a first pixel output control signal to switch pixel output to the first output conductor, and to receive a second pixel output control signal to switch pixel output to the second output conductor. Additionally, a second subset of the set of pixels are coupled to respective output control buses to receive the first pixel output control signal to switch pixel output to the second output conductor, and to receive the second pixel output control signal to switch pixel output to the first output conductor.

[0012] Implementations can include one or more of the following features. In some implementations, the image sensor can include a rectangular pixel-array. Here, the set of pixels is a column of the rectangular pixel-array, and pixels in every other row of the rectangular pixel-array belong to the second subset of the set of pixels. In the foregoing implementations, each pixel of the set of pixels can include a crossover to couple the output of the set of pixels. Further, the second subset of pixels can receive pixel output control signals that are inverted by crossovers coupled to the pixel output control buses.

[0013] In at least one aspect, the disclosed technologies can be implemented as an image sensor including a first output conductor of a pixel output bus; a second output conductor of the pixel output bus; and a set of pixels including (i) a first pixel comprising a first output select transistor coupled to a first pixel output control bus for switching pixel output to the first output conductor, and a second output select transistor coupled to the first pixel output control bus for switching pixel output to the second output conductor, and (ii) a second pixel comprising a first output select transistor coupled to a second pixel output control bus for switching pixel output to the second output conductor, and a second output select transistor coupled to the second pixel output control bus for switching pixel output to the first output conductor.

[0014] Implementations can include one or more of the following features. In some implementations, the set of pixels can correspond to a column of a rectangular pixel-array. Further, each pixel of the set of pixels can be coupled to the pixel output bus through a respective crossover. Furthermore, the second pixel of the set of pixels can be coupled to the second pixel output control bus through crossovers.

[0015] In at least one aspect, the disclosed technologies can be implemented as an image sensor including a pixel array including pixels partitioned into rows and columns, wherein each pixel of the pixel array is coupled with (i) a first conductor and a second conductor of a pixel select line, of a given row of the pixel array, with which at least some other pixels from the same given row also are coupled, and (ii) a first conductor and a second conductor of a pixel output bus, of a given column of the pixel array, with which all other pixels from the same given column also are coupled, wherein the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, are swapped on at least one row of the pixel array.

[0016] Implementations can include one or more of the following features. In some implementations, the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, can be swapped on every row of the pixel array. In some implementations, the first conductor and the second conductor of the pixel select line, of at least one row of the pixel array, can be swapped in correspondence with the swapped first conductor and the second conductor of the pixel output bus on the at least one row of the pixel array.

[0017] In some implementations, the first conductor and the second conductor of the pixel output bus, of each respective column of the pixel array, can be swapped on every row of the pixel array, and the first conductor and the second conductor of the pixel select line, of alternating rows of the pixel array, can be swapped. In the foregoing implementations, the image sensor can include circuitry coupled with the pixel array and configured to provide select signals on the select lines for the pixels in the rows. Here, the select signals are provided in accordance with a sampling matrix comprising a product of a random basis function and a filtering function, such that coefficients associated with the sampling matrix have support from an equal number of even and odd rows of the pixel array. Additionally, for each column, current signals from a first set of pixels selected with respective pixel select signals provided on the first conductor of the pixel select lines are summed on the first conductor of the pixel output bus of the column, and current signals from a second set of pixels selected with respective select signals provided on the second conductor of the pixel select lines are summed on the second conductor of the pixel output bus of the column.

[0018] Further in the foregoing implementations, the image sensor can include comparators. Each respective one of the comparators is coupled with the first and second conductors of the pixel output bus of each respective column of the pixel array, and each respective one of the comparators is configured to binarize, for each respective column of the pixel array, a difference between the summed current signals on the first conductor of the pixel output bus and the summed current signals on the second conductor of the pixel output bus. In some cases, the random basis function is a sparse random basis function.

[0019] Further in the foregoing implementations, all pixels within each respective row can be coupled with the same pixel select line of the respective row. Furthermore in the foregoing implementations, every other kth one of the pixels within each respective row can be coupled with a common one of multiple pixel select lines of the respective row.

[0020] Details of one or more implementations of the disclosed technologies are set forth in the accompanying drawings and the description below. Other features, aspects, descriptions and potential advantages will become apparent from the description, the drawings and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0021] FIG. 1 shows aspects of a machine vision system in an operational environment.

[0022] FIG. 2A is a flow diagram of an example of a process depicting computations performed by the machine vision system of FIG. 1.

[0023] FIG. 2B is a flow diagram of an example of another process depicting computations performed by the machine vision system of FIG. 1.

[0024] FIG. 3 is a high-level block-diagram of an image sensor architecture that can be configured to perform the processes of FIGs. 2A and 2B.

[0025] FIG. 4 is a circuit diagram showing more detailed aspects of the image sensor of FIG. 3.

[0026] FIG. 5 is a circuit diagram of a portion of an image sensor having a pixel array where, in certain pixels, the pattern of coupling to a pixel output bus is reversed and the state of a control signal provided to the pixel is inverted.

[0027] Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0028] FIG. 1 is a diagram of a vision system 100 for implementing a method for capturing 3D range images. FIG. 1 comprises laser-line generator 101, object conveyor 102, object of interest 103, laser illuminated object plane 104, digital camera 105, digital communication channel 109, and digital computer 111 for storing, processing, interpreting and displaying 3D range data extracted from an object of interest 103, which are graphically represented in FIG. 1 by result 110. Digital camera 105 further comprises imaging lens 106, image sensor 107, and local image processor 108.

[0029] In operation, a narrow plane of illumination 112, formed by laser-line generator 101, intersects a 3D scene including conveyor 102 and object-of-interest 103. The narrow plane of illumination formed by laser-line generator 101 is coincident with object plane 104 of imaging lens 106. The imaging lens 106 collects light scattered by the 3D scene and focuses it on image sensor 107. Image sensor 107, which comprises a rectangular array of photosensitive pixels, captures an electrical signal representative of the average light intensity signal formed by lens 106 over an exposure time period. The electrical signal formed on image sensor 107 is converted into a digital information stream, which is received by local digital processor 108. Digital processor 108 formats the digital image information for transmission to digital computer 111. In some implementations local digital processor 108 also processes the image to form an alternative representation of the image or to extract relevant features to arrive at a critical measurement or some other form of compact classification based on the information of the digital image.

[0030] Generally, the image captured by digital camera 105 is processed, either by local digital processor 108 or digital computer 111, to measure the displacement of the line formed by the intersection of the illumination-plane with the object in the scene. Each displacement measurement represents an image coordinate that may be transformed into an object surface coordinate in object plane 104, according to a predetermined camera calibration. In some applications, object 103 is moved through the plane of the laser-line generator 101 while successively capturing images and extracting displacement coordinates at regular intervals. In some applications, the laser-line generator 101 is moved relative to a stationary object 103 while successively capturing images and extracting displacement coordinates. In either of these ways, a map of the surface of object 103 that is visible to the vision system of FIG. 1 can, over time, be constructed by digital computer 111.

[0031] In the following descriptions, uppercase symbols generally represent matrix quantities; row numbers of a matrix are identified by the subscript i, column numbers by the subscript j, and frame time by the subscript t. Lowercase symbols represent scalar or vector values; for example, x_i,j refers to one element of X and x_j refers to a column vector of X. Parentheses are used to collectively reference all of the vectors or elements of a matrix, for example X = (x_j) = (x_i,j).

[0032] In the example illustrated in FIG. 1, the image signal X formed on the image sensor 107 includes three segments of a laser line, with a third segment being horizontally between and vertically offset from a first segment and a second segment, representative of, for example, the image of the intersection of illumination plane 112 with conveyor 102 and object 103. The image signal X may also include unwanted off-plane illumination artifacts and noise (not shown). The illumination artifacts may be light internally diffused from one portion of an object to another, for example, light of the laser line, and the noise may be introduced by ambient light or by the image sensor.

[0033] In general, the function of the computations performed by the vision system of FIG. 1 is to extract row offset parameters associated with the image features of the curve formed of the intersection of a plane of illumination with objects of interest in a physical scene. Conventional technologies for performing such computations include sampling the image signal, forming a digital image, filtering the digital image, and extracting image features from the filtered digital image.

[0034] To improve the speed of image feature extraction for the vision system 100, the foregoing conventional technologies are supplemented by the technologies disclosed herein. The disclosed technologies use 1-bit compressive sensing techniques in which an image signal X is filtered as it is being encoded in a measurement signal Y. In 1-bit compressive sensing, each measurement is quantized to 1-bit by the function sign(.), and only the signs of the measurements are stored in the measurement vectors y.

Y = sign(AX), where X ∈ R^(N1×N2) and Y ∈ {-1,1}^(M×N2).

[0035] Here, the image signal X is formed on an image sensor 107 having a pixel array with N1 rows and N2 columns, and A is a sampling matrix, where A ∈ {-1,0,1}^(M×N1) and M << N1. This aspect of the disclosed technologies represents a simplification of the analog-to-digital conversion process, and the fact that the number of samples M can be much smaller than the number of rows of the image signal X allows for the noted increase in processing speed. Another aspect of the disclosed technologies is that, unlike the conventional technologies, the original image signal X is not encoded in the measurement Y, because doing so would, necessarily, require the encoding of additional image information that is not directly relevant to extracting the offset parameters of the intersection of the illumination plane with objects of interest in the physical scene. Rather, a filtered image signal Z is encoded in the measurement Y. One reason for this is that the number of samples required to embed all variation of the signal to a specific error tolerance ε is of order O(K log(N)). By filtering the image signal X to attenuate spatial frequencies that do not contain information essential to the extraction of the laser-line offset parameters, the sparseness of Z increases, such that K_Z < K_X, and the number of samples required to robustly encode the filtered signal in the measurement Y will, in practice, always be less (often much less) than the number of samples required to encode the raw image signal X, assuming that the error tolerance ε remains the same.
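
The 1-bit measurement described by this equation can be sketched numerically. The sketch below is an illustrative assumption (NumPy, arbitrary sizes N1 = 512, N2 = 64, M = 64, and an assumed sparsity for A), not the sensor implementation:

```python
import numpy as np

# Illustrative sketch of Y = sign(A X): a 1-bit compressive measurement
# of an N1 x N2 image X using a ternary sampling matrix A, with M << N1.
# All sizes and the sparsity level are assumptions for demonstration.
rng = np.random.default_rng(0)
N1, N2, M = 512, 64, 64

X = rng.random((N1, N2))          # stand-in for the image intensity signal

# Sparse ternary sampling matrix A in {-1, 0, +1}^(M x N1).
A = rng.choice([-1, 0, 1], size=(M, N1), p=[0.05, 0.9, 0.05])

# Each measurement coefficient keeps only the sign of the projection.
Y = np.sign(A @ X)
Y[Y == 0] = 1                     # resolve exact-zero ties to +1
```

Only the M × N2 sign matrix Y is retained, which is far less data than the N1 × N2 image when M << N1.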

[0036] FIG. 2A is a flow diagram of an example of a process depicting computations performed by the machine vision system 100.

[0037] In the computations outlined in FIG. 2A:

• The symbol X, X ∈ R^(N1×N2), represents an image intensity signal as it exists on the N1 × N2 pixel elements of an image sensor, with, for example, pixel elements of the image sensor forming a pixel array having N1 pixel rows and N2 pixel columns.

• The symbol Ψ, Ψ ∈ {-1,0,1}^(N1×N1), represents an image filtering function comprised of, and in some embodiments consisting of, coefficients used to compute a central difference approximation of the partial first derivative with respect to rows of the image signal X.

• The symbol r, r ∈ {-1,0,1}^(N3), represents a sparse random sequence, which in some embodiments is based on a Markov chain of order m, where m > 1. Here, N3 depends on the size of a spatial filtering kernel ψ, as described below.

• The symbol Θ, Θ ∈ {-1,0,1}^(M×N1), represents a random basis function, created by drawing row vectors from r.

• The symbol Φ, Φ ∈ {-1,0,1}^(M×N1), represents an image sampling function, formed from the product of the random basis Θ and the filtering function Ψ.

• The symbol Y, Y ∈ {-1,1}^(M×N2), represents a measurement of the filtered image intensity signal, formed from the product of the sampling function Φ and the image signal X, quantized by sign(.) to two levels {-1,1}.

• The symbol W, W ∈ {-M ... M}^(N1×N2), represents an estimate of the filtered image signal, formed from the product of the measurement Y and the transpose of the random basis function Θ.

• The symbol Z, Z ∈ {-M ... M}^(N1×N2), represents an estimate of the product of the original image signal X and the filtering function Ψ.

• The symbol Δ, Δ ∈ {0,1,2 ... N1}^(P×N2), represents image offset parameters of the local signal extremes, i.e., the P relevant signal peaks of the signal Z in each column.

[0038] In FIG. 2A, block 215 represents information of the image signal X, which is information representative of light energy of a scene. The information may be received by an image sensor, for example image sensor 107 of FIG. 1. The light energy may be light scattered from the scene, with at least some of the light focused by a lens onto the image sensor. The image may also include unwanted off-plane illumination artifacts and noise (not shown). The illumination artifacts may be light internally diffused from one portion of an object to another, for example light of the laser line, and the noise may be introduced by ambient light or by the image sensor.
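
The symbol definitions above can be exercised end to end in a small numerical sketch. All sizes, the sparsity of Θ, and the single-peak (P = 1) readout of Δ below are illustrative assumptions, not the patented design:

```python
import numpy as np

# Hedged sketch of the FIG. 2A pipeline: build Phi by row-wise convolution
# of Theta with psi, measure Y = sign(Phi X), then estimate the filtered
# image W = Theta^T Y and take one offset parameter Delta per column.
rng = np.random.default_rng(1)
N1, N2, M = 256, 32, 48

Theta = rng.choice([-1, 0, 1], size=(M, N1), p=[0.05, 0.9, 0.05])
psi = np.array([1, 1, 1, 0, -1, -1, -1])   # central-difference kernel

# Row-wise convolution of Theta with psi (Phi = Theta Psi in matrix form).
Phi = np.stack([np.convolve(row, psi, mode="same") for row in Theta])

X = rng.random((N1, N2))                   # stand-in image signal
Y = np.sign(Phi @ X)
Y[Y == 0] = 1

W = Theta.T @ Y                            # estimate of the filtered image
Delta = np.argmax(W, axis=0)               # strongest response row, per column
```

With real laser-line data, the argmax of W in each column would estimate the row offset of the line; here the random X only demonstrates the shapes and data flow.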

[0039] Block 217 includes a representation of a process that generates a measurement Y of the image intensity signal X. The measurement Y represents a product of the image signal X and the sampling function Φ, quantized to two levels. In most embodiments, the sampling function is a product of a random basis function and a spatial filtering function. In some embodiments, the random basis function is sparse, the non-zero elements drawn from a Bernoulli distribution or some other generally random distribution. In some embodiments, the sampling function is expected to generally pass spatial frequencies associated with portions of an image forming a laser line and to substantially reject spatial frequencies associated with portions of an image including noise and other unwanted image information. In some embodiments, the process of block 217 extracts information of the image signal X by iteratively generating elements of a measurement Y. Generation of the information of the measurement Y may be performed, in some embodiments, by an image sensor device and/or an image sensor device in conjunction with associated circuitry.

[0040] In some embodiments, elements of Y are generated in M iterations, with, for example, each of the M iterations generating elements of a different y_i. In some embodiments, for example embodiments with an image sensor having pixel elements arranged in N1 rows and N2 columns and a sampling function having M rows and N1 columns, in each iteration information of a different particular row of the sampling function is effectively applied to columns of the image sensor to obtain, after performing sign operations on a per-column basis, a y_i. In some embodiments, elements of a y_i are obtained substantially simultaneously. In some embodiments, comparators are used to perform the sign operations.
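
A single iteration of this readout can be sketched as follows. The dimensions and the random row φ_i are illustrative assumptions; the matrix product stands in for the analog per-column sums, and np.sign stands in for the per-column comparators:

```python
import numpy as np

# One iteration: apply one row phi_i of the sampling function to all N2
# columns at once and binarize each column sum, yielding the elements of
# one y_i substantially simultaneously.
rng = np.random.default_rng(4)
N1, N2 = 128, 16

X = rng.random((N1, N2))                   # stand-in image signal
phi_i = rng.choice([-1, 0, 1], size=N1, p=[0.05, 0.9, 0.05])

y_i = np.sign(phi_i @ X)                   # comparator outputs, one per column
y_i[y_i == 0] = 1
```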

[0041] In some embodiments, for each iteration, information of each row φ_i of the sampling function is used to generate pixel output control signals (also referred to as select signals) applied to pixel elements of the image sensor, with each row of pixel elements receiving the same control signal or signals. Accordingly, in some embodiments, for a first iteration, control signal(s) based on information of φ_1,1 may be applied to pixel elements of a first row of pixel elements, control signal(s) based on information of φ_1,2 may be applied to pixel elements of a second row, and so on. Similarly, for an Mth iteration, control signal(s) based on information of φ_M,1 may be applied to pixel elements of the first row, control signal(s) based on information of φ_M,2 may be applied to pixel elements of the second row, and so on.

[0042] In some embodiments, and as shown in FIG. 2A, the image signal sampling information is provided from the sampling function generator block 260. As illustrated in FIG. 2A, the sampling function generator block 260 is associated with an image processor 220, which in various embodiments may be the local digital processor 108 or digital computer 111 of FIG. 1. It should be recognized, however, that in various embodiments the sampling function generator 260, or portions thereof, may be included in the image sensor 211. In some embodiments, the image sensor 211, or memory or circuitry associated with the image sensor 211, provides storage for storing the image signal sampling information, for example as illustrated by block 216 of FIG. 2A. In some embodiments, neither the image sensor nor the image processor includes a sampling function generator block; instead, pre-generated image signal sampling information is stored in storage of or associated with the image sensor. In some embodiments, the image signal sampling information may be stored in both of two storage elements, with a first storage element physically closer to some pixel elements and a second storage element physically closer to other pixel elements. For example, if columns of pixel elements forming the pixel array are considered to be arranged in a manner defining a square or rectangle, the first storage element may be about what may be considered one side of the pixel array, and the second storage element may be about an opposing side of the pixel array. In some such embodiments, pixel elements closer to the first storage element may receive pixel output control signals associated with the first storage element, and pixel elements closer to the second storage element may receive pixel output control signals associated with the second storage element.

[0043] Referring now to blocks 261, 262, 259 of the sampling function generator 260, in some embodiments the vector r satisfies r ∈ {−1,0,1}^N3, where N3 = N1 + 2Md and d = support(ψ) is the size of the spatial filtering kernel ψ. In some embodiments, information of the vector r can be understood as having been formed from the element-wise product of two vectors b, b ∈ {−1,1}^N3, and c, c ∈ {0,1}^N3, as in the following:

r = (r_i) = (b_i · c_i)

where b is based on a random distribution:

P(b_i = 1) = P(b_i = −1) = 1/2

and c is based on a Markov chain of order m = 2d:

P(c_i = 1 | c_{i−r} = 1) = 0, for r < d

P(c_i = 1 | c_{i−r} = 1) = 1, for r > d

[0044] The random basis functions Θ are derived by sampling the vector r according to the following equation:

θ_{i,j} = r_k, where k = m(i − 1) + (j − 1)

[0045] In words, the rows of the random basis functions Θ are N 1 element segments of r that are shifted by no less than m relative to each other.

[0046] The sampling functions Φ can be thought of as being formed from the convolution of the rows of Θ with a filtering kernel ψ, as follows:

φ_i = θ_i * ψ

which in FIG. 2A is stated as:

Φ = ΘΨ, where Ψ = I * ψ

[0047] In some embodiments the convolution kernel ψ performs spatial filtering based on a central difference approximation of the first derivative, for example, ψ = (+1, +1, +1, 0, −1, −1, −1), in which case:

m ≥ 2d = 14
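
The construction of r, Θ and Φ described in paragraphs [0043]-[0047] can be sketched numerically as follows. This is a simulation sketch, not sensor hardware: the exact transition probabilities of the Markov chain generating c are an assumption (only the minimum spacing d between nonzero elements is taken from the text), and the toy dimensions N1 and M are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def sparse_random_sequence(n3, d):
    # c: support mask whose ones are spaced at least d apart; a sketch of
    # the order-2d Markov chain of [0043] (the exact transition
    # probabilities used here are an assumption)
    c = np.zeros(n3, dtype=int)
    i = int(rng.integers(0, d))
    while i < n3:
        c[i] = 1
        i += d + int(rng.integers(0, d + 1))  # next nonzero at distance in [d, 2d]
    b = rng.choice([-1, 1], size=n3)          # P(b_i = 1) = P(b_i = -1) = 1/2
    return b * c

def random_basis(r, n1, m, num_rows):
    # theta[i, j] = r[m*i + j], the 0-based form of k = m(i - 1) + (j - 1)
    return np.stack([r[m * i : m * i + n1] for i in range(num_rows)])

def sampling_function(theta, psi):
    # phi_i = theta_i * psi (row-wise convolution), i.e. Phi = Theta Psi
    return np.stack([np.convolve(row, psi, mode="same") for row in theta])

N1, M, d, m = 64, 10, 7, 14                   # m >= 2d = 14, per [0047]
psi = np.array([1, 1, 1, 0, -1, -1, -1])      # central-difference kernel
N3 = N1 + 2 * M * d
r = sparse_random_sequence(N3, d)
theta = random_basis(r, N1, m, M)
phi = sampling_function(theta, psi)
```

Because the nonzero elements of r are spaced at least d = support(ψ) apart, the shifted copies of ψ never overlap, and every element of Φ stays in {−1, 0, 1}, which is the hardware constraint noted in [0048].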

[0048] In general, m should be of sufficient size to ensure that the elements of the sampling function Φ remain within the discrete levels supported by the image sensor hardware. In most embodiments, the elements of Φ are all in range, i.e., φ_{i,j} ∈ {−1,0,1}, and the rows of the sampling function Φ are sufficiently uncorrelated.

[0049] In block 223, the process buffers a measurement Y of the image signal. The measurement comprises the column vectors y_j of the measurement of the image intensity signals. In most embodiments, the measurement of the image signal is formed by circuitry of, or associated with, the image sensor 211, and the measurement may be stored in memory of, or associated with, the image processor 220. The image sensor and the image processor, for the embodiment of FIG. 2A and the other embodiments, may be coupled by a serial data link in some embodiments, or a parallel data link in other embodiments. In addition, operations of blocks 225-231, discussed below, may also be performed by circuitry of, or associated with, the image processor.

[0050] In block 225, the process forms W as a first estimate of the filtered image Z. In the embodiment of FIG. 2A, the estimate is determined by the product of the transpose of the random basis function Θ and the measurement Y. In block 227, the process refines the estimate of the filtered image Z. In some embodiments, and as shown in FIG. 2A, the estimate of the filtered image formed by the process of block 225 is refined by convolution with a smoothing kernel.

[0051] In some applications involving laser-line illumination, the laser-line may be modeled by a square pulse of finite width, where the width of the laser-line pulse is greater than (or equal to) the support of the filtering kernel ψ. In accordance with the model described above, the image averaging kernel is sometimes matched to the expected output of the filtering kernel ψ. For example, if the filtering kernel is given by ψ = (+1, +1, +1, 0, −1, −1, −1), then the convolution kernel of block 227 may be (1, 2, 3, 3, 2, 1).
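
The matching can be checked numerically: filtering a width-7 square pulse with ψ produces exactly the lobe (1, 2, 3, 3, 2, 1), and convolving the filtered signal with that kernel yields a sharp extremum at a fixed offset from the pulse. This is an illustrative sketch; the signal length and pulse position are arbitrary.

```python
import numpy as np

psi = np.array([1, 1, 1, 0, -1, -1, -1])   # filtering kernel of [0047]
kernel = np.array([1, 2, 3, 3, 2, 1])      # averaging kernel of block 227

x = np.zeros(64)
p = 20
x[p : p + 7] = 1.0                         # laser line modeled as a square pulse

z = np.convolve(x, psi)                    # filtered image column
w = np.convolve(z, kernel)                 # refined estimate
```

The positive lobe of z equals the kernel itself (z[p : p + 6] is (1, 2, 3, 3, 2, 1)), so block 227 behaves as a matched filter: the refined signal w attains its maximum, 28, at index p + 5 and its minimum at the mirror position p + 12, and a peak detector (block 231) can locate the laser-line position from these extrema.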

[0052] It may be noted that the refinement step of block 227 can be performed in block 225 by folding the kernel into the transpose of the random basis function Θ before computing its product with the measurement Y. However, performing the operation by convolution in block 227 provides for a significant computational advantage in some embodiments where the matrix multiplication of block 225 is performed by methods of sparse matrix multiplication.

[0053] Block 229 buffers a final estimate of the filtered image Z. Locations of edges of laser lines in the estimate are determined by the process in block 231, for example using a peak detection algorithm.

[0054] FIG. 2B is a flow diagram of an example of another process depicting computations performed by the machine vision system 100.

[0055] The process of FIG. 2B takes advantage of the a priori knowledge that the temporal image stream formed by an illumination plane passing over a 3-dimensional object of interest is more generally sparse than anticipated by the methods of FIG. 2A; the image signal is sparse and/or compressible not only with respect to the row dimension of the signal X, but also with respect to columns and with respect to time. In other words, adjacent columns j of X are likely to be very similar, i.e., highly correlated with each other. Likewise, the image signal X is typically very similar from one frame time to another. A frame time may be, for example, a time period in which M samples are obtained for each of the columns of the image signal.

[0056] FIG. 2B shows computations of a vision system, similar to that of FIG. 2A, except that the random basis function Θ and sampling function Φ are partitioned into multiple independent segments, and these segments are used in a spatiotemporally interleaved fashion. Preferably, and in some embodiments, the spatiotemporal interleaving guarantees that, in any given frame time t, no column j of the image is sampled with the same pattern as either of its spatial neighbors j − 1 or j + 1, and that the sampling pattern used in the current frame time is different from the sampling pattern of the previous frame time and the sampling pattern of the next frame time.

[0057] As compared to FIG. 2A, the computations outlined in FIG. 2B show what may be thought of as nine smaller sampling functions used over three frame times, with three sampling functions applied concurrently to X at any given time t. In practice this method allows the number of samples M per frame time t to be reduced relative to the methods outlined in FIG. 2A, while maintaining the same error tolerance associated with the binary ε-stable embedding of the signal Z, thereby providing significantly more computational efficiency relative to the vision system of FIG. 2A.

[0058] Although FIG. 2B shows the use of both spatial and temporal interleaving of the sampling function, in alternative embodiments the sampling functions may be interleaved in space only, or in time only.
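
The interleaving rule can be sketched with the index arithmetic given later in [0059], h = (j % 3 + 1) and k = (t % 3 + 1): a column's sampling pattern (h, k) always differs from that of its spatial neighbors in the same frame and from its own pattern in adjacent frames. A minimal sketch:

```python
def pattern_id(j, t):
    # which of the nine sampling functions column j uses at frame time t;
    # h selects among three spatial patterns, k among three temporal ones
    h = j % 3 + 1
    k = t % 3 + 1
    return (h, k)
```

Because h changes whenever j changes by 1, and k changes whenever t changes by 1, no pattern is shared with the neighbors j − 1 and j + 1 or with the previous and next frame times, which is the guarantee stated above.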

[0059] In the process outlined in FIG. 2B:

• The symbol Xt represents an image intensity signal as it exists on the N1 pixel rows and N2 pixel columns of the pixel array at time t.

• The symbol Ψ, Ψ ∈ {−1,0,1}^(N1×N1), represents an image filtering function comprising (and in some embodiments consisting of) coefficients used to compute a central difference approximation of the partial first derivative.

• The symbol r, r ∈ {−1,0,1}^N3, represents a sparse random sequence, which in some embodiments is based on a Markov chain of order m, where m > 1.

• The symbol Θh,k, Θ ∈ {−1,0,1}^(M×N1), with h = (j%3 + 1) and k = (t%3 + 1), represents an array of random basis functions Θ, created by drawing row vectors from r.

• The symbol Φh,k, Φ ∈ {−1,0,1}^(M×N1), represents an array of image sampling functions, formed from the product of the random basis Θh,k and the filtering function Ψ.

• The symbol Yt, Y ∈ {−1,1}^(M×N2), represents a measurement of the filtered image intensity signal at time t, formed from the product of the sampling functions Φ1,k, Φ2,k and Φ3,k and the image signal Xt, quantized by sign(·) to two levels {−1,1}.

• The symbol Wt, W ∈ {−M … M}^(N1×N2), represents an estimate of the filtered image signal, formed from the product of the measurement Yt and the transpose of the random basis functions Θ1,k, Θ2,k and Θ3,k, convolved with a smoothing kernel.

• The symbol Zt−1, Z ∈ {−M … M}^(N1×N2), represents an estimate of the product of the original image signal X and the filtering function Ψ, formed from the sum of Wt, Wt−1 and Wt−2.

• The symbol Δt, Δ ∈ {0,1,2 … N1}^(P×N2), represents image offset parameters of the local signal extremes, i.e., the P relevant signal peaks of the signal Z on each column at time t − 1.

[0060] Accordingly, as with the process of FIG. 2A, in block 255 the process of FIG. 2B receives information representative of light energy of a scene, and in block 256 the process iteratively generates vectors of a measurement of image intensity signals, based on the relative light energy of the image of the scene. As in the process of FIG. 2A, the functions provided by blocks 255 and 256 may be performed using an image sensor 251.

[0061] In the process of FIG. 2B, however, generation of the measurement is performed using a plurality of sampling functions. In the embodiment of FIG. 2B, nine sampling functions are used, interleaved spatially and temporally. In some embodiments, three different sampling functions are used at any frame time t, with a prior frame time and a succeeding frame time using different sets of three sampling functions. The nine sampling functions, or information to generate the sampling functions, may be dynamically generated, at 259', 261, 262, and/or stored in memory 291 of, or associated with, the image sensor 251.

[0062] In block 263, the process buffers a measurement Yt of the image signal X at frame time t. In most embodiments, the measurement Y of the image signal is formed by circuitry of, or associated with, an image sensor and stored in memory of, or associated with, an image processor. In addition, operations of blocks 265-281, discussed below, may also be performed by circuitry of, or associated with, the image processor.

[0063] In block 265, the process computes partial estimates of the filtered image signal Z. In the embodiment of FIG. 2B, the estimate W is determined by taking the product of the transpose of the corresponding random basis function Θh,k 293 and the measurement Yt, with a new estimate W formed for each frame time t.

[0064] In block 267, the process convolves the partial sums emitted by block 265 with a smoothing kernel, which, in addition to refining the estimate of the filtered image as described earlier with respect to FIG. 2A, combines neighboring column vectors, such that each column vector is replaced by the sum of itself and its immediate neighbors on the left and right.

[0065] In block 279, the process combines the partial sums output by block 267 over the previous three frame times 269 to form the final estimate of the filtered image signal Z at frame time t − 1, storing the result in block 280. As in FIG. 2A, parameters of the illumination plane are determined by the process in block 281, for example using a peak detection algorithm.
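
Block 279's combination over three frame times can be sketched with a three-deep rolling buffer; the array sizes and the random stand-ins for block 267's output are arbitrary.

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(1)

frames = deque(maxlen=3)                     # holds W_t, W_(t-1), W_(t-2)
history, estimates = [], []

for t in range(5):
    w_t = rng.integers(-4, 5, size=(8, 6))   # stand-in for block 267 output
    history.append(w_t)
    frames.append(w_t)
    if len(frames) == 3:
        z = sum(frames)                      # estimate of Z at frame time t - 1
        estimates.append(z)
```

The `deque(maxlen=3)` discards the oldest partial sum automatically, so each estimate is Wt + Wt−1 + Wt−2, as in the bullet for Zt−1 above.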

[0066] FIG. 3 is a high-level block diagram depicting an image sensor architecture. The image sensor of FIG. 3 can be used in the machine vision system 100 in conjunction with either of the processes described above in connection with FIGs. 2A-2B. The image sensor of FIG. 3 includes sampling function storage buffer 300; sampling function shift register input buffers 311, 312, 313; sampling function shift registers 321, 322, 323; pixel array 301 with pixel columns 331, 332, 333, 334 included therein; analog signal comparator array 340, including analog signal comparators 341, 342, 343, and 344; 1-bit digital output signal lines 351, 352, 353, 354; and output data multiplexer 302. Each of the pixel columns includes a plurality of pixel elements. Generally, each pixel element includes a radiation sensitive sensor (light sensitive in most embodiments) and associated circuitry.

[0067] Pixel elements of pixel array 301 accumulate photo-generated electrical charge at local charge storage sites. The photo-generated charge on the image sensor pixels may be considered an image intensity signal in some aspects. In some embodiments, each pixel element includes a fixed capacitance that converts accumulated charge into a pixel voltage signal. Each pixel voltage signal controls a local current source, so as to provide a pixel current signal. The pixel current source can be selected and switched, under the control of a sampling function, onto one of two signal output lines 314 available per pixel column. A pair of output lines 314 associated with a column of the image sensor of FIG. 3 is also referred to as a pixel output bus. A pixel output bus 314 is shared by all pixels on a column, such that each of the two current output signals formed on a column represents the summation of current supplied by selected pixels.

[0068] As may be seen from the use of the three sampling function shift registers, the embodiment of FIG. 3 is suited for use in a system implementing spatial interleaving (and spatiotemporal interleaving) as discussed with respect to FIG. 2B. The architecture of FIG. 3 may also be used for the non-interleaved embodiment of FIG. 2A, with either the three shift registers filled with identical information, or the three shift registers replaced by a single register.

[0069] In some embodiments of the disclosed technologies, the rows of the sampling function φi are dynamically formed from the contents of a memory buffer using shift registers. There are three different sampling function rows active at any time. Sampling function shift register 321, which contains φi,1,k, provides the pixel output control signals for all pixels in columns {1, 4, 7 ...}. Sampling function shift register 322, which contains φi,2,k, provides the output control for all pixels in columns {2, 5, 8 ...}. Sampling function shift register 323, which contains φi,3,k, provides the pixel output control signals for all pixels in columns {3, 6, 9 ...}. In some embodiments of the disclosed technologies, the sampling function storage buffer 300 is a digital memory buffer holding pixel control signals, each pixel control signal consisting of 2 bits representing which, if any, of the two current output lines 314 is to be selected. In some embodiments the digital memory holding the pixel output control signals is accessed as words of 2m bits in length, where m > 2·support(ψ). In some embodiments of the disclosed technologies, m = 16 > 2·support(ψ) and the memory data width is 32 bits.

[0070] To dynamically generate a new row i of the sampling functions, the image sensor of FIG. 3 copies three words from storage buffer 300 into shift register input buffers 311, 312, 313, then causes the contents of the input buffers 311, 312, 313 and the shift registers 321, 322, 323 to jointly shift m places. In some embodiments, the sampling function shift registers 321, 322, 323 further comprise an N1-element shadow register to provide a means of maintaining the state of the pixel output control signals applied to pixel array 301 while the next shift operation occurs. In some embodiments of the disclosed technologies, sampling function memory buffer 300 is accessed in a cyclical pattern, such that the process of filling shift registers 321, 322, 323 with the first row need only be performed once, on power-up initialization.
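
The 2-bit-per-pixel control encoding described in [0069] can be sketched as follows; the particular bit-pair assignment (+1 selects the first output line, −1 the second, 0 neither) is an assumption, since the text specifies only that 2 bits select which, if any, of the two lines is used.

```python
def encode(coeff):
    # assumed 2-bit encoding of a ternary sampling coefficient:
    # +1 -> select first output line, -1 -> select second, 0 -> neither
    return {1: (1, 0), -1: (0, 1), 0: (0, 0)}[coeff]

def pack_row(row):
    # pack one sampling-function row into a flat 2-bit-per-pixel control word
    bits = []
    for coeff in row:
        bits.extend(encode(coeff))
    return bits
```

With m = 16 coefficients per word, this packing yields the 32-bit memory data width mentioned in [0069].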

[0071] Subsequent to initialization, new rows of the sampling function are formed and applied to pixel array 301 for each cycle of the shift register, thereby causing two new current output signals per column, indicative of the summation of selected pixel outputs, to form on the inputs of current comparator array 340. The two current output signals of a column are compared to form a 1-bit value that is representative of their relative magnitude. Column output bits, taken together, represent one row of digital output, and form a row vector of a measurement of image intensity signals on the image sensor pixels of the pixel array. Rows of digital output are multiplexed by multiplexer 302 into smaller words to form a digital output stream.
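
The per-column readout can be sketched numerically: the two line currents are sums over the pixels selected by the +1 and −1 coefficients of the active sampling row, and the comparator bit is the sign of their difference, i.e. sign(φi · xj). The values below are arbitrary stand-ins for pixel currents; ties are broken arbitrarily.

```python
import numpy as np

def column_bit(phi_row, x_col):
    # current summed onto each of the two column output lines
    i_plus = x_col[phi_row == 1].sum()
    i_minus = x_col[phi_row == -1].sum()
    # comparator reports the relative magnitude of the two line currents
    return 1 if i_plus > i_minus else -1

rng = np.random.default_rng(3)
phi_row = rng.choice([-1, 0, 1], size=32)  # one row of a sampling function
x_col = rng.random(32)                     # nonnegative pixel current signals
bit = column_bit(phi_row, x_col)
```

Comparing aggregated currents rather than digitizing each pixel is what lets the sensor output one bit per column per sampling row.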

[0072] In operation, M rows of three different sampling functions are generated for every frame time t to form a measurement matrix Yt, in some embodiments consisting of M bits for each of the N2 columns of the pixel array. In accordance with FIGs. 2A and 2B, each bit of the measurement matrix y_{i,j} can be thought of as the sign of the vector product of one column of the pixel array x_{j,t} and one row of one of the sampling functions, as previously explained with respect to FIG. 2B.

[0073] FIG. 4 is a circuit diagram showing more detailed aspects of portions of an image sensor in accordance with aspects of the disclosed technologies. The portions of the image sensor of FIG. 4 are, in some embodiments, portions of the image sensor of FIG. 3. The embodiment of FIG. 4 includes a pixel array 400, with only four elements of the pixel array shown for clarity. For instance, only the last two rows, e.g., Row(N1−1) and Row(N1), of three adjacent columns, e.g., Col(j−1), Col(j) and Col(j+1), of the image sensor of FIG. 4 are shown. Each of the columns of the image sensor of FIG. 4 includes two current output lines 414 to which all the pixels of the column are coupled. Each of the two current output lines is a conductor. The two current output lines 414 lead to a current comparator 404 through a current conveyor 401, a current limiter 402, and a current mirror 403. Each of the rows of the image sensor of FIG. 4 includes pixel output control lines 405. In the example shown in FIG. 4, the pixel output control lines 405 include a pair of pixel output control lines connected with a pixel of Col(j−1) and other pixels of every third column of the image sensor of FIG. 4; a pair of pixel output control lines connected with a pixel of Col(j) and other pixels of every third column of the image sensor of FIG. 4; and a pair of pixel output control lines connected with a pixel of Col(j+1) and other pixels of every third column of the image sensor of FIG. 4. Moreover, each pixel of the pixel array 400 includes a pinned photodiode 406, a reset transistor 407, a transfer gate 408, a transconductor 409, output select transistors 410, 411, and a floating diffusion node 412.

[0074] The pinned photodiode 406 can be reset through reset transistor 407, allowed to accumulate photo-generated electric charge for an exposure period, with the charge transferred to the floating diffusion node 412 through transfer gate 408 for temporary storage. The voltage V_FD at the floating diffusion node 412 controls transconductor 409 to provide a current source that is proportional to the voltage signal. Depending on the state of pixel control lines 405, the current from a pixel can be switched through transistor 410 or 411 to one of the two current output lines 414 shared by all the pixels on a column. For this reason, the two current output lines 414(j−1) form a pixel output bus for the column. As such, the two current output lines 414(j) for each column j are also referred to as conductors of the pixel output bus. Conceptually, the column output currents represent the simple sum of the currents from selected pixels, but in practice there are additional factors. A more realistic estimate includes offset and gain error introduced by readout circuitry blocks and the non-linearity error introduced by transconductor 409, as follows:

I_out(j) = Σ_i φ_i (a·V_FD(i,j)² + b·V_FD(i,j) + c)

where a, b and c are the coefficients of the second-order adjustment for I = f(V_FD), V_FD being the voltage stored in the floating diffusion 412 of a pixel. The coefficients depend on the operating point of the transistor (V_dd, V_O+ and V_O−). Although the coefficients a, b and c are approximately equal for all pixels, some mismatch may need to be considered.
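
A numerical sketch of this column-current model follows; the coefficient values and the mismatch level are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def column_current(phi_row, v_fd, a=0.02, b=1.0, c=0.01, mismatch=0.0):
    # per-pixel current I = a*V^2 + b*V + c, with optional pixel-to-pixel
    # mismatch on the second-order coefficient
    a_i = a * (1.0 + mismatch * rng.standard_normal(v_fd.size))
    i_pix = a_i * v_fd**2 + b * v_fd + c
    return float(phi_row @ i_pix)          # signed sum over selected pixels

phi_row = rng.choice([-1, 1], size=64)     # one row of the sampling matrix
v_fd = rng.random(64)                      # floating-diffusion voltages
ideal = float(phi_row @ v_fd)              # linear-response reference
measured = column_current(phi_row, v_fd, mismatch=0.01)
```

With a = 0 and c = 0 the model reduces to the ideal linear sum; the gap between `measured` and `ideal` illustrates the offset, gain and non-linearity errors described above.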

[0075] Voltages V_O+ and V_O− of each column are fixed using respective current conveyors 401. In some embodiments the current conveyor 401 is based on a single p-channel metal-oxide semiconductor (PMOS) transistor.

[0076] Current conveyor 401 is biased with a current I cc to ensure the minimum speed necessary to fulfill the settling requirements. The positive and negative branches are balanced using a current mirror 403, and the sign is obtained using current comparator 404. A current limiter 402 is included to avoid break-off problems caused by image columns having an excessive number of bright pixels driving the column output lines.

[0077] In some embodiments of the disclosed technologies, a first output select transistor of a pixel on an even row of the pixel array is connected to the first conductor of its column output bus and the second output select transistor is connected to the second conductor of its column output bus, as described above in connection with the embodiment shown in FIG. 4. However, in these embodiments, in odd rows of the pixel array the first output select transistor of a pixel is connected to the second output conductor and the second output select transistor is connected to the first output conductor, and, in odd rows, the state of the output select signals applied to the pixel is inverted. Notwithstanding some practical limitations with regard to the nature of the image signal and the construction of the sampling matrix Φ, this arrangement of pixel connections provides for measurement coefficients that are substantially free of the effects associated with capacitive coupling of pixel floating-diffusion nodes with pixel output select signal conductors.

[0078] Some innovative aspects of these embodiments can be understood from the following example relating to the spatially non-interleaved sampling described above in connection with FIG. 2A. Consider a sampling matrix Φ, Φ ∈ {−1,1}^(M×N), with values drawn from a Bernoulli distribution, applied to an image signal vector x_j ∈ R^N, such that current measurements y_{i,j} = Φ_i x_j are distributed around Y_ave = 0. When pixel output control signals (also referred to as select signals) couple with floating-diffusion nodes, a multiplicative error is introduced, represented here by variables e_p (positive) and e_n (negative), which may be conceptualized as remapping the sampling matrix Φ such that (1 → e_p) and (−1 → −e_n). In this case, measurements y_{i,j} = Φ_i x_j are distributed around Y_ave = (1/2)(e_p − e_n)·||x_j||₁. In other words, the relative difference in the magnitude of the multiplicative error coefficients shifts the measurement distribution away from zero by an amount related to the magnitude of the image signal. This translation error in the measurement signal distribution represents a major complication for the design and application of image sensors that digitize image measurements formed by projection of the image signal into a non-canonical basis.
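
The shift can be verified with a small Monte Carlo sketch (the error values e_p, e_n and the dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)

N, M = 500, 20000
e_p, e_n = 1.05, 0.95                       # assumed multiplicative errors
x = np.ones(N)                              # nonnegative image-signal vector

phi = rng.choice(np.array([1, -1], dtype=np.int8), size=(M, N))
phi_err = np.where(phi == 1, e_p, -e_n)     # remap (1 -> e_p), (-1 -> -e_n)

y_ideal = phi @ x                           # centered near 0
y_err = phi_err @ x                         # shifted away from 0
predicted_shift = 0.5 * (e_p - e_n) * np.abs(x).sum()
```

The empirical mean of `y_err` lands close to the predicted shift (1/2)(e_p − e_n)·||x||₁, here about 25, while `y_ideal` stays centered near zero.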

[0079] However, if, for example, the image signal vector x_j is split into even and odd terms,

x_odd = x_{i=1:2:N,j} and x_even = x_{i=2:2:N,j},

and the multiplicative error coefficient values are swapped on every other row, such that

e_p,even = e_n,odd and e_n,even = e_p,odd,

then

Y_ave = (1/2)(e_p,odd − e_n,odd)·(||x_odd||₁ − ||x_even||₁)

and therefore

Y_ave = 0, when ||x_even||₁ = ||x_odd||₁ = (1/2)||x_j||₁.

In other words, when the sums of the odd and even rows of the column vectors of X are approximately equal, which is typical of natural images, the effect of capacitive coupling on the measurement distribution will be significantly reduced, or eliminated altogether.
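
A Monte Carlo sketch of the cancellation (error values assumed; here the swap is applied to the even-indexed rows of the pixel vector):

```python
import numpy as np

rng = np.random.default_rng(13)

N, M = 500, 20000
e_p, e_n = 1.05, 0.95                       # assumed error coefficients
x = np.ones(N)                              # ||x_odd||_1 == ||x_even||_1 here

phi = rng.choice(np.array([1, -1], dtype=np.int8), size=(M, N))
shifted = np.where(phi == 1, e_p, -e_n)     # uniform error mapping
balanced = shifted.copy()
# swap the error coefficients on every other pixel row (the crossover)
balanced[:, ::2] = np.where(phi[:, ::2] == 1, e_n, -e_p)

y_shifted = shifted @ x
y_balanced = balanced @ x
```

With the uniform mapping the measurement mean sits near (1/2)(e_p − e_n)·||x||₁ = 25, while the alternating mapping pulls it back to approximately zero, matching Y_ave = 0 above.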

[0080] To extend the above example to spatially interleaved sampling, assume, for example, three sampling vectors Φ_1, Φ_2, Φ_3 applied in an interleaved pattern to the column vectors of X. By definition of Φ, each pixel must be in one of two output selection states, implying that there are a total of 8 possible multiplicative error coefficients corresponding to the 8 equally probable patterns of 3:1 interleaved pixel selection. Given a pixel's output selection state, there exist a first subset of 4 equally probable error coefficients corresponding to one output selection state and a second, complementary set of 4 equally probable error coefficients corresponding to the other output selection state. Since the error coefficients are randomly distributed, each occurring with equal probability, it is reasonable to represent the 2 sets of 4 possibilities by average error coefficients e_p and e_n, which, logically, reduces the problem to the non-interleaved case previously explained.

[0081] FIG. 5 is a circuit diagram showing portions of an image sensor in accordance with some embodiments of the disclosed technologies. The image sensor shown in FIG. 5 can be used in the machine vision system 100 in conjunction with processes similar to the ones described above in connection with FIGs. 2A-2B. Moreover, consistent with the image sensor described above in connection with FIGs. 3-4, each pixel (e.g., from the set of pixels 501, 502, 503, etc.) includes a pinned photodiode 506, a reset transistor 507, a transfer gate 508, a transconductor 509, a first output select transistor 510, a second output select transistor 511, and a floating-diffusion node 512. The pinned photodiode 506 can be reset through reset transistor 507, allowed to accumulate photo-generated electrical charge for an exposure period, with the charge transferred to the floating-diffusion node 512 through transfer gate 508 for temporary storage. The voltage at the floating-diffusion node controls transconductor 509 to supply an output current that is proportional to the voltage signal sensed on floating-diffusion node 512. A pixel output current can be switched to conductors 514a, 514b of pixel output bus 514 through activation of the first output select transistor 510 or the second output select transistor 511, according to the state of a pixel output control bus 505. Note that the pixel output control bus 505 is also referred to as a pixel select line. In addition to the foregoing components, the circuit diagram of FIG. 5 depicts parasitic capacitive coupling elements 521, 531, 541. In view of this, it can be appreciated that the state of the pixel select bus 505 may influence the effective capacitance of floating-diffusion node 512, and therefore the voltage that is supplied to transconductor 509, for a fixed amount of stored charge.

[0082] In embodiments of the disclosed technologies relating to FIG. 5, the pixels of the image sensor are arranged in a pixel array having rows 535, 536, 537 and columns 545, 546, 547. In this example, row 536 of the pixel array 500 is arranged such that first output select transistor 510 switches the pixel output to the first conductor 514a of the pixel's column output bus 514, and second output select transistor 511 switches the pixel output to the second conductor 514b of the pixel's column output bus 514, while on rows 535 and 537 the configuration is reversed, by means of crossovers 513, such that the first output select transistor 510 switches the pixel's output to the second conductor 514b of the pixel's column output bus 514, and second output select transistor 511 switches the pixel output to the first conductor 514a of the pixel's column output bus 514. As such, for the set of pixels 501, 502, 503, etc., that are arranged in column 546, a first subset of pixels including pixel 502, etc., are coupled to respective output control buses 505 to receive a first pixel output control signal (e.g. 1,0) to switch pixel output to the first output conductor 514a, and to receive a second pixel output control signal (e.g. 0,1) to switch pixel output to the second output conductor 514b; and a second subset of pixels including pixels 501, 503, etc., are coupled to respective output control buses 505 to receive the first pixel output control signal to switch pixel output to the second output conductor 514b, and to receive the second pixel output control signal to switch pixel output to the first output conductor 514a. Note that each of the first and second subsets of pixels is a proper subset including one or more pixels.

[0083] In rows 535 and 537 where the pixel output configuration is reversed, as described above, the state of signals on pixel output control bus 505 is inverted. In some implementations, e.g., in the portion of a pixel-array 500 depicted in FIG. 5, the pixel output configuration is reversed on rows 535 and 537 by means of crossovers 513, and the pixel output control signals supplied are inverted by crossovers 515. Note that in this example, each crossover 513 is formed by swapping the first conductor 514a and the second conductor 514b of a pixel output bus 514. Also in this example, each crossover 515 is formed by swapping a pair of conductors of a pixel output control bus 505.

[0084] In other implementations, the pixel output configuration can be reversed on rows 535 and 537 without using crossovers 513 along the pixel output bus 514. In such implementations, uncrossed first and second conductors are used for the pixel output bus, like in the case of the pixel output bus 414 shown in FIG. 4. However, to obtain the reversal of the pixel output configuration on rows 535 and 537 using an output bus like the output bus 414, for pixels in row 536, an output terminal of the first output select transistor 510 extends, and is directly connected, to the first conductor of the pixel's column output bus, and an output terminal of the second output select transistor 511 extends, and is directly connected, to the second conductor of the pixel's column output bus; while for pixels in rows 535 and 537, an output terminal of the first output select transistor 510 extends, and is directly connected, to the second conductor of the pixel's column output bus, and an output terminal of the second output select transistor 511 extends, and is directly connected, to the first conductor of the pixel's column output bus.

[0085] In some implementations, the pixel output control signals supplied on pixel output control bus 505 can be inverted by other means, for example by negating coefficients (comprising ternary values +1, 0 or -1) on every other column of sampling matrix Φ. For example, the negating of the noted coefficients can be performed upon retrieval of the sampling matrix Φ from storage 216 or 291. As described above in connection with FIGs. 2A-2B, rows of the sampling matrix Φ are combined, at 217 or 256, with columns of the image signal X in the image readout operation Y = ΦΧ. Thus, columns of the sampling matrix are related through the readout operation to rows of the image and negating the coefficients on every other column of the sampling matrix Φ represents a functionally equivalent alternative to providing crossovers 515.

[0086] The arrangement of pixel input/output connections disclosed above greatly reduces the influence of the pixel output control signals on the measurement y_{i,j} = Φ_i x_j by de-correlating the noise induced in the sensed image signal x_j from the state of the output control bus 505 generated by sampling vector Φ_i. When the error induced on the image signal x_j is distributed evenly between pixels selected to the first and second conductors 514a, 514b of a pixel output bus 514, the error contributions tend to cancel in the differential comparison of aggregated pixel output currents related to the computation of y_{i,j} = Φ_i x_j, which is a fundamental operation associated with projective measurement readout performed by a certain class of CMOS image sensors, for example the image sensor described above in connection with FIGs. 3-4.

[0087] While this specification contains many specifics, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[0088] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments.

[0089] Other embodiments fall within the scope of the following claims.