


Title:
IMAGE SENSOR AND METHOD OF GENERATING AN IMAGE SIGNAL
Document Type and Number:
WIPO Patent Application WO/2024/054511
Kind Code:
A1
Abstract:
An image sensor (1) comprises a pixel array (2) comprising a plurality of pixels (3), each pixel (3) configured to generate a photo signal in response to electromagnetic radiation captured by a photosensitive element of the pixel (3). The image sensor (1) further comprises a driver circuit (4) comprising a memory (5), the driver circuit (4) being configured to read out the photo signals generated by each of the plurality of pixels (3), generate corrected photo signals by applying a compensation algorithm to the photo signals, wherein the compensation algorithm is based on correction data stored in the memory (5), and provide the corrected photo signals to an image processing unit (6) for reconstructing the image and generating the image signal.

Inventors:
TAMMA ANANTH (US)
Application Number:
PCT/US2023/032103
Publication Date:
March 14, 2024
Filing Date:
September 06, 2023
Assignee:
AMS SENSORS USA INC (US)
International Classes:
H04N25/62; H01L27/146
Foreign References:
US20120206635A12012-08-16
US20120173175A12012-07-05
US20140178078A12014-06-26
US20110069210A12011-03-24
USPP63404278P
Attorney, Agent or Firm:
HSIEH, Timothy, M. (US)
Claims:

1. An image sensor (1), comprising: a pixel array (2) comprising a plurality of pixels (3), each pixel (3) configured to generate a photo signal in response to electromagnetic radiation captured by a photosensitive element of the pixel (3); a driver circuit (4) comprising a memory (5); and an image processing unit (6) configured to output an image signal; wherein the driver circuit (4) is configured to:

- read out the photo signals generated by each of the plurality of pixels (3);

- generate corrected photo signals by applying a compensation algorithm to the photo signals, wherein the compensation algorithm is based on correction data stored in the memory (5); and

- provide the corrected photo signals to the image processing unit (6) for reconstructing the image and generating the image signal.

2. The image sensor (1) according to claim 1, wherein the driver circuit (4) is configured to apply the compensation algorithm based on the correction data for inverting the cross-talk or point spread function, PSF, expressed as a matrix.

3. The image sensor (1) according to claim 1 or 2, wherein the memory (5) comprises correction data that is dependent on an angle of incidence of the electromagnetic radiation; and the driver circuit (4) is further configured to:

- determine a Chief Ray Angle, CRA, based on data stored in the memory (5);

- determine from the CRA the angle of incidence for each pixel (3) of the pixel array (2); and

- apply to each photo signal the compensation algorithm based on the determined angle of incidence of the respective pixel (3).

4. The image sensor (1) according to one of claims 1 to 3, wherein the correction data stored in the memory (5) describes the optical point spread function, PSF, of at least some of the pixels (3) of the pixel array (2); and the driver circuit (4), for generating the corrected photo signals, is configured to load pre-calculated, inverted optical PSF data, and multiply the inverted optical PSF data with the photo signals of the corresponding pixels (3).

5. The image sensor (1) according to claim 4, wherein the driver circuit (4) is further configured to interpolate the inverted optical PSF for the remaining pixels (3), and multiply the interpolated inverted optical PSF with the photo signals of the remaining pixels (3).

6. The image sensor (1) according to one of claims 1 to 5, wherein the correction data comprises a plurality of [K x K] matrices, wherein each of the [K x K] matrices is associated with a [K x K] subarray of the pixel array (2), and each element of the [K x K] matrices describes an amount of electromagnetic radiation captured by the associated pixel (3) when only a single pixel (3) of the [K x K] subarray is illuminated.

7. The image sensor (1) according to claim 6, wherein K is an odd integer and the single pixel (3) is a center pixel (3) of the [K x K] subarray.

8. The image sensor (1) according to claim 6 or 7, wherein the correction data stored in the memory (5) comprises a [K x K] matrix for a portion of the [K x K] subarrays of the pixel array (2), and the driver circuit (4) is further configured to interpolate inverted [K x K] matrices for the remaining [K x K] subarrays of the pixel array (2), and multiply the interpolated inverted [K x K] matrices with the photo signals of the remaining [K x K] subarrays of the pixel array (2).

9. The image sensor (1) according to claim 6 or 7, wherein each of the [K x K] matrices is associated with multiple [K x K] subarrays of the pixel array (2) that are symmetrically distributed around a center point of the pixel array (2).

10. The image sensor (1) according to one of claims 1 to 9, wherein the correction data comprises correction values and the driver circuit (4) is configured to generate the corrected photo signals by multiplying the photo signal with a corresponding correction value.

11. The image sensor (1) according to claim 10, wherein the correction data comprises sets of correction values for different angles of incidence and the driver circuit (4) is configured to generate the corrected photo signals by multiplying the photo signal with a corresponding correction value selected based on a determined angle of incidence.

12. The image sensor (1) according to one of claims 1 to 9, wherein the correction data comprises angle-of-incidence dependent functions or interpolation expressions, and the driver circuit (4) is configured to: generate correction values from the functions or expressions based on determined angles of incidence and/or pixel positions within the pixel array (2), and generate the corrected photo signals by multiplying the photo signal with a corresponding correction value.

13. The image sensor (1) according to one of claims 1 to 12, wherein generating the corrected photo signals is performed in the analog or digital domain.

14. A camera system (100) comprising an image sensor (1) according to one of claims 1 to 13.

15. A method of generating an image signal using an image sensor (1), the method comprising: generating photo signals by means of a pixel array (2) comprising a plurality of pixels (3); reading out the photo signals; generating corrected photo signals by applying a compensation algorithm to the photo signals, wherein the compensation algorithm is based on correction data stored in a memory (5); reconstructing the image signal from the corrected photo signals by means of an image processing unit (6); and outputting the image signal.

Description:

IMAGE SENSOR AND METHOD OF GENERATING AN IMAGE SIGNAL

CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of priority of U.S. Provisional Patent Application No. 63/404,278 filed September 7, 2022, the entire contents of which is hereby incorporated by reference.

FIELD OF THE INVENTION

This disclosure relates to an image sensor, to a camera system comprising such an image sensor, and to a method of generating an image signal.

BACKGROUND OF THE INVENTION

Image sensors, particularly CMOS image sensors, have found widespread applications in digital camera systems. Such image sensors employ an array of light capturing elements, the so-called pixels, which are typically arranged in a matrix formed from rows and columns. The pixels convert incident electromagnetic radiation, e.g. infrared, visible or ultraviolet light, x-rays, etc., into a charge that can be detected as an electronic photo signal and further processed for reconstructing and generating a digital image.

Imaging systems are often characterized by their modulation transfer function (MTF), a function derived from the optical transfer function (OTF) as MTF = |OTF|, i.e. neglecting phase effects. The MTF of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are handled by the system. It is used by optical engineers to describe how the optics project light from the object or scene onto a detector array, retina, screen, or simply the next item in the optical transmission chain. The MTF is a critical parameter for CMOS Image Sensors (CIS) as it directly affects the resolution and image quality of the camera and the imaging system.

The ubiquitous CMOS image sensors, CIS, made using silicon, spatially sample the incident image with sampling frequencies determined by the pixel pitch and pixel architecture. The MTF contribution due to sampling at the pixel pitch sets the upper bound on the MTF of the sensor. For optimal results, the pixels need to be perfectly isolated from each other both optically and electrically. However, due to the fundamental nature of fabrication limitations, material choices used for forming pixels, pixel architecture choices, and cost of implementation, the degree of optical and electrical isolation between pixels typically is not perfect. This causes some incident light and generated electrons to leak from one pixel to neighboring pixels, causing cross-talk among pixels. This cross-talk leads to a reduction in the MTF of the sensor, in turn resulting in a degradation of the image quality in terms of sharpness, blurring, etc. In addition, the employment of lenses with a low f-number causes light that enters the pixel to be incident at vastly different angles of incidence on the CIS surface. This again leads to various amounts of cross-talk.

It is an object to be achieved to provide an image sensor with enhanced performance and efficient cross-talk compensation, which overcomes the limitations of state-of-the-art devices described above. It is further an object to provide a method of generating an image signal.

These objects are achieved with the subject-matter of the independent claims. Further developments and embodiments are described in the dependent claims.

SUMMARY OF THE INVENTION

The invention overcomes the issue of image quality deterioration due to pixel cross-talk observed in modern image sensors. This is achieved by compensation of the cross-talk in the recorded image data after the image has been recorded. This compensation is performed in hardware, on-chip, i.e. on the CIS sensor chip or on an application ASIC, based on pre-calibrated coefficients derived from a physical model of the pixel (based on full-wave optical simulations of the pixel) and/or optical measurements of pixel performance. These pre-calibrated coefficients are stored on-chip on the sensor. The compensation is performed just after the image is recorded on the sensor and can be performed in either the analog or the digital domain. These coefficients are also generated for different locations on the image sensor surface based on the incident angle of light, corresponding to the f-number of the lens. This allows for improved image sharpness compensation.

In at least one embodiment, an image sensor according to the improved concept comprises a pixel array having a plurality of pixels, with each pixel being configured to generate a photo signal in response to electromagnetic radiation captured by a photosensitive element of the respective pixel. The image sensor further comprises a driver circuit having a memory, and an image processing unit configured to output an image signal. The driver circuit is configured to read out the photo signals generated by each of the plurality of pixels and generate corrected photo signals by applying a compensation algorithm to the photo signals. Therein, the compensation algorithm is based on correction data stored in the memory. The driver circuit is further configured to provide the corrected photo signals to the image processing unit for reconstructing the image and generating the image signal.

The pixels of the image sensor are conventional pixels in a back-side illumination, BSI, architecture. Specifically, for detection in the near-infrared, NIR, domain these pixels can comprise structures for enhancing light absorption at NIR wavelengths. For example, these structures are implemented as pyramidal structures arranged at an entrance surface of the pixel. Within the pixel array, the individual pixels are delineated by deep trench isolation, DTI, or more specifically by backside deep trench isolation, BDTI, wherein the DTI or BDTI can consist of a trench etched into the pixel substrate, e.g. silicon, and filled with an oxide such as silica. The trench isolation works on the principle of total internal reflection, TIR. The described pixel architectures themselves are a well-established concept and are not further detailed throughout this disclosure, except for certain aspects highlighting the working principle of the improved concept.

Arranging such pixels side-by-side in rows and columns to form an image sensor, however, introduces the possibility of optical and electrical cross-talk between the pixels. These cross-talks typically are caused by different effects. Firstly, light leakage due to the aforementioned DTI can occur. Typically, a DTI does not extend along the full pixel depth for manufacturing purposes. Thus, light leakage from neighboring pixels close to the photosensitive surface is not suppressed by the DTI in these cases. Moreover, the DTI itself is typically formed from an oxide material and relies on the principle of total internal reflection at the interface between the oxide and the substrate material of the pixel, e.g. silicon. The internal reflection, however, may not be perfect; in particular, it can be angle dependent, such that light leakage through the DTI towards neighboring pixels is likewise possible. Also, light that is reflected or scattered from a bottom interface between silicon and metal layers can cause a leakage of optical signals into neighboring pixels regardless of the DTI characteristics. Moreover, additional optical components arranged at an entrance surface of the pixels, e.g. lens elements and optical spacers and filters, can scatter light into unwanted directions towards neighboring pixels. It is further noted that for pixels operating at NIR wavelengths, the aforementioned NIR enhancement structures, e.g. pyramidal structures, are added to help spread light inside the pixel, thereby improving the Quantum Efficiency (QE) of the pixel. Due to the symmetry and nature of scattering, the light leakage into neighboring pixels need not be symmetric, i.e. light scattering into pixels on the X and Y axes need not be similar; any compensation mechanism needs to account for this asymmetry. However, these structures also increase scattering at interfaces, causing more light to leak out into neighboring pixels.
Specifically, the scattering structures scatter light such that light which was impinging on the pixel at normal incidence, for instance, is then scattered into a wider range of angles. Some of these angles might be relatively normal to the BDTI surface (in the BDTI frame of reference), causing light leakage into the neighboring pixels.

Secondly, electrical cross-talk between pixels can occur, particularly as modern image sensors become smaller with the pixel density significantly increasing. Thus, even in the absence of light leakage, the photo signals generated by neighboring pixels may include additional contributions due to the aforementioned electronic leakage. In general, for BSI pixel architectures, the electrical cross-talk is smaller than the optical cross-talk. However, the proposed technique can deal with both electrical and optical cross-talk.

In order to compensate for the aforementioned leakage channels, the image sensor comprises a driver circuit that is configured to control a readout of the photo signals from the pixel array and compensate for the leakage before the image is reconstructed by means of an image processing unit. To this end, the driver circuit comprises a memory element, on which correction data is stored that is read out and processed by the driver circuit and subsequently applied to the photo signals for generating corrected photo signals. For example, the correction data is generated during an initial calibration process of the respective image sensor. The compensation of the image data is performed on-chip, either on the sensor chip or on an accompanying image processing ASIC. The compensation can be performed using pre-calibrated coefficients or based on a physical model of the pixel array. The coefficients are stored in the on-chip memory for easy loading. The coefficients are generated by developing a physical model for the pixel array using optical simulations or measurements. In some embodiments, the driver circuit is configured to apply the compensation algorithm based on the correction data for inverting the cross-talk or point spread function between neighboring pixels, described as a matrix, of the image sensor or pixel array.

With the growing role played by optical devices in measurement, communication, and photonics, there is a clear need for characterizing optical components. A basic and useful parameter, especially for imaging systems, is the Modulation Transfer Function, or MTF. Basically, the MTF can be understood as a measure of the ability of an optical system to transfer various levels of detail from object to image.
For image sensors, the total MTF can be separated into the MTF describing the light propagation through a medium, air, the MTF describing the propagation through a lens focusing the incident light onto an image sensor, and the MTF of the image sensor. Moreover, the readout and image processing procedures can likewise be described by respective MTFs. Here, the MTF of the image sensor is due to optical cross-talk, commonly described via the point spread function, which can be expressed as a matrix. The driver circuit can apply an inverse of this matrix in order to compensate for the MTF of the image sensor pixel array. The inverse matrix can be dependent on the optical wavelength.

In these embodiments, the driver circuit is configured to compensate the detected photo signals for the main loss channel, which is the optical and electrical cross-talk between the pixels. To this end, the correction data contains information about the cross-talk between neighboring pixels, described as a matrix. This information can be used to compensate for the MTF of the CMOS image sensor pixel array, and optionally of a lens, such that the driver circuit can compensate the photo signals generated by the individual pixels of the pixel array for this MTF, e.g. by applying an inverse cross-talk function to the photo signals.
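As an illustration of this inversion, the following Python sketch models the cross-talk of a small pixel array as a linear operator built from a hypothetical 3x3 PSF kernel (the kernel values and array size are invented for the example, not calibration data from this disclosure), and applies the pre-computed inverse operator to the measured photo signals:

```python
import numpy as np

# Hypothetical 3x3 cross-talk (PSF) kernel: the center pixel keeps 84%
# of its light, each direct neighbor receives 3%, each diagonal
# neighbor 1%. These values are invented for illustration.
psf = np.array([[0.01, 0.03, 0.01],
                [0.03, 0.84, 0.03],
                [0.01, 0.03, 0.01]])

def crosstalk_operator(psf, n):
    """Dense N^2 x N^2 operator mapping the ideal image to the
    measured (cross-talk-blurred) photo signals."""
    k = psf.shape[0] // 2
    a = np.zeros((n * n, n * n))
    for y in range(n):
        for x in range(n):
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < n and 0 <= xx < n:
                        a[yy * n + xx, y * n + x] += psf[dy + k, dx + k]
    return a

n = 8
ideal = np.zeros((n, n))
ideal[4, 4] = 100.0                                # single illuminated pixel
A = crosstalk_operator(psf, n)
measured = (A @ ideal.ravel()).reshape(n, n)       # signal spills into neighbors
A_inv = np.linalg.inv(A)                           # pre-calculated off-line
corrected = (A_inv @ measured.ravel()).reshape(n, n)
assert np.allclose(corrected, ideal)               # cross-talk compensated
```

In a real sensor the inverse operator would be pre-calculated during calibration and stored in the driver circuit's memory rather than computed at readout time.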

In some embodiments, the memory comprises correction data that is dependent on an angle of incidence of the electromagnetic radiation. Moreover, in such embodiments the driver circuit is further configured to determine a Chief Ray Angle, CRA, based on data stored in the memory, determine from the CRA the angle of incidence for each pixel of the pixel array, and apply to each photo signal the compensation algorithm based on the determined angle of incidence of the respective pixel. The fact that pixels in different regions of the pixel array receive incident light at different angles means that the correction factor to be applied to the respective photo signals can be angle-of-incidence dependent. To this end, the driver circuit determines a CRA based on lens-dependent CRA versus image height data stored in the memory, the CRA being the centroid of the incidence angle of the chief ray incident on the sensor surface. This CRA versus image height data is read from the memory by the algorithm to determine how the incidence angle varies with the location of the individual pixels of the array. From the CRA, the actual incident angle for each individual pixel can be determined. For an ideal compensation, the memory comprises a set of correction data for different incidence angles, or an angle-dependent expression or look-up table from which the correction value to be applied can be calculated.
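A minimal sketch of this per-pixel angle determination, assuming a hypothetical CRA-versus-image-height table of the kind that would be stored in the memory (the table values, pixel pitch, and array geometry below are invented for illustration; real values come from the lens design):

```python
import numpy as np

# Hypothetical lens-dependent CRA-versus-image-height calibration data.
image_height_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
cra_deg         = np.array([0.0, 5.0, 10.0, 14.0, 17.0])

def incidence_angle(px, py, pitch_um, cx, cy):
    """Approximate the angle of incidence at pixel (px, py) by looking
    up the CRA at the pixel's radial image height (linear interpolation)."""
    r_mm = np.hypot(px - cx, py - cy) * pitch_um * 1e-3
    return float(np.interp(r_mm, image_height_mm, cra_deg))

# Center pixel of a 2000x2000 array (2 um pitch) sees the chief ray at
# 0 degrees; a pixel 500 rows away sits at 1.0 mm image height.
print(incidence_angle(1000, 1000, 2.0, 1000, 1000))  # 0.0
print(incidence_angle(1000, 1500, 2.0, 1000, 1000))  # 10.0
```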

In some embodiments, the correction data stored in the memory describes the optical point spread function, PSF, of at least some of the pixels of the pixel array. Moreover, in such embodiments, the driver circuit, for generating the corrected photo signals, is configured to load pre-calculated, inverted optical PSF data, and multiply the inverted optical PSF data with the photo signals of the corresponding pixels. The point spread function describes the response of an imaging system to a point source or point object. In other words, the correction data can be obtained by illuminating a single pixel of the pixel array and measuring the photo signals of said illuminated pixel and its neighboring pixels, optionally also the second neighbors. This way, the cross-talk of the pixel array can be easily determined during a pre-calibration or an optical simulation, based on which the correction data is generated and stored in the memory. The driver circuit then calculates an inverse function of the correction data in order to generate the corrected photo signals. Alternatively, the correction data already comprises the inverted PSF at different optical wavelengths, particularly NIR.

In some embodiments, the driver circuit is further configured to interpolate the inverted optical PSF for the remaining pixels and multiply the interpolated inverted optical PSF with the photo signals of the remaining pixels. For example, the correction data only comprises correction data for some pixels of the pixel array, e.g. in order to remain memory conservative. In order to compensate all photo signals from the pixel array, the driver circuit can be configured to interpolate the correction data for those pixels for which no correction data is stored in the memory. For example, the memory comprises correction data for a rectangular pixel array only for the pixels in one quadrant, such that the symmetry of the pixel array can be used to determine correction values or expressions for all remaining pixels. Alternatively, a rotational symmetry of the image sensor, e.g. based on the image circle or image radius, could be used to further reduce the number of pixels for which correction data is saved in the memory.
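The quadrant-symmetry idea can be sketched as a simple index mapping, here for a hypothetical n x n array whose correction data is stored only for the top-left quadrant (the mapping below is an illustration of the principle, not an implementation from the disclosure):

```python
# Hypothetical symmetry mapping: correction data is stored only for the
# top-left quadrant of an n x n pixel array; every other pixel is
# mirrored onto a stored entry.
def quadrant_index(x, y, n):
    """Mirror pixel (x, y) into the top-left quadrant of an n x n array."""
    qx = x if x < n // 2 else n - 1 - x
    qy = y if y < n // 2 else n - 1 - y
    return qx, qy

# All four corner pixels of an 8x8 array share one stored entry.
assert quadrant_index(0, 0, 8) == (0, 0)
assert quadrant_index(7, 0, 8) == (0, 0)
assert quadrant_index(7, 7, 8) == (0, 0)
assert quadrant_index(5, 6, 8) == (2, 1)
```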

In some embodiments, the correction data comprises a plurality of [K x K] matrices. Therein, each of the [K x K] matrices is associated with a [K x K] subarray of the pixel array, and each element of the [K x K] matrices describes an amount of electromagnetic radiation captured by the associated pixel when only a center pixel of the [K x K] subarray is illuminated. For even-integer matrices, a corner pixel can be illuminated, from which the cross-talk matrix can be reconstructed. To calculate the MTF due to optical cross-talk, it is sufficient to know the optical point spread function in matrix form, i.e. the information how much light leaks to neighboring pixels when only a particular pixel is illuminated. Such a calculation can be performed using computer simulations or can be measured using optical experiments. In both simulations and experiments, only one pixel is selectively illuminated and the power flow/spill-over into neighboring pixels is noted. The driver circuit calculates an inverse cross-talk, or PSF, matrix for compensating the photo signals for the pre-calibrated cross-talk. Said inverse matrix is normalized such that the total energy amounts to unity. Alternatively, the inverse data is preloaded into the memory, as the aforementioned calculations can become resource hungry and may take too long for an efficient readout with compensation mechanism. In some cases, the inverse of certain matrices cannot be found due to matrix singularity. In such cases, the inverse can be found by an 'approximate inverse' method. Moreover, a small random error can be applied to each value in the [K x K] matrices before generating an inverse. However, it must be ensured that the amount of noise is minimal and significantly lower than the actual dark noise in each pixel.
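Both fallbacks for singular matrices can be sketched as follows, using the Moore-Penrose pseudo-inverse as one 'approximate inverse' and, alternatively, a small random dither well below an assumed dark-noise level (the matrix values and noise scale are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical singular cross-talk matrix (two identical rows), standing
# in for a [K x K] matrix whose exact inverse does not exist.
ct = np.array([[0.2, 0.2, 0.1],
               [0.2, 0.2, 0.1],
               [0.1, 0.1, 0.8]])

# Approach 1: Moore-Penrose pseudo-inverse as the 'approximate inverse'.
ct_pinv = np.linalg.pinv(ct)

# Approach 2: add a small random error to each element, far below an
# assumed dark-noise level, so that the regular inverse exists.
dark_noise = 1e-3                      # assumed dark-noise scale, arbitrary units
ct_dithered = ct + rng.normal(scale=dark_noise * 1e-2, size=ct.shape)
ct_inv = np.linalg.inv(ct_dithered)    # now well-defined

assert np.allclose(ct @ ct_pinv @ ct, ct)          # pseudo-inverse property
assert np.allclose(ct_dithered @ ct_inv, np.eye(3), atol=1e-6)
```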

In some embodiments, K is an odd integer and the single pixel is a center pixel of the [K x K] subarray. Typically, the optical cross-talk between pixels can be estimated as 3x3 or 5x5 matrices with only the center pixel of the matrix being illuminated. The matrix contains data on how much power flows into nearest adjacent neighbors, diagonal neighbors, 2nd neighbors, etc. The size of this CT matrix [K, K] determines the size of the buffer used to hold the data for the data readout from each row.

In some embodiments, the correction data stored in the memory comprises a [K x K] matrix for a portion of the [K x K] subarrays of the pixel array. Moreover, in such embodiments, the driver circuit is further configured to interpolate inverted [K x K] matrices for the remaining [K x K] subarrays of the pixel array and multiply the interpolated inverted [K x K] matrices with the photo signals of the remaining [K x K] subarrays of the pixel array. As mentioned before, for memory conserving purposes the correction data comprises [K x K] matrices only for some pixels. The driver circuit, for generating the corrected photo signals for the remaining pixels, can be configured to interpolate the correction data or the compensation values generated from the correction data. For example, the memory comprises correction data for half of the pixels in a row, while interpolation techniques are used to determine the correction data for the remaining pixels.

In some embodiments, each of the [K x K] matrices is associated with multiple [K x K] subarrays of the pixel array that are located in a rotational symmetry around a center point of the pixel array. In cases in which the pixel array has a certain symmetry, for memory conserving purposes, the correction data comprises matrices that are associated with more than one subarray of pixels. For example, the correction data comprises [K x K] matrices for subarrays of pixels arranged within a first quadrant of the image sensor, wherein said matrices are likewise associated with corresponding subarrays of pixels in the remaining quadrants of the pixel array. Corresponding in this context means the location of the respective subarrays with respect to a center of the pixel array.
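The element-wise interpolation of stored inverted matrices can be sketched as follows, assuming hypothetically that inverted 3x3 matrices are stored only at the two ends of a pixel row (the matrix values are invented for illustration, not calibration data):

```python
import numpy as np

# Hypothetical: inverted 3x3 correction matrices are stored only for the
# first and last subarray of a row; the rest are interpolated.
inv_first = np.eye(3) + 0.02    # illustrative values, not calibration data
inv_last  = np.eye(3) - 0.01

def interpolated_matrix(col, n_cols):
    """Element-wise linear blend between the two stored matrices."""
    t = col / (n_cols - 1)
    return (1.0 - t) * inv_first + t * inv_last

mid = interpolated_matrix(50, 101)     # subarray halfway along the row
assert np.allclose(mid, 0.5 * (inv_first + inv_last))
```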

In some embodiments, the correction data comprises correction values and the driver circuit is configured to generate the corrected photo signals by multiplying the photo signal with a corresponding correction value. For example, the correction data comprises a correction factor for each pixel, such that the corresponding photo signal can be corrected by means of multiplication with the correction value in a straightforward manner. Alternatively, the correction can be performed based on an addition or subtraction operation. In some cases, in which only a few elements of the matrix dominate, instead of a full matrix multiplication, only a few multiply-add operations can be performed.

In some embodiments, the correction data comprises sets of correction values for different angles of incidence and the driver circuit is configured to generate the corrected photo signals by multiplying the photo signal with a corresponding correction value selected based on a determined angle of incidence. As mentioned above, different pixels can experience different angles of incidence of the detected radiation. Thus, the correction data can comprise multiple correction values for each pixel that are associated with different angles of incidence. The driver circuit is configured to determine the angle of incidence for each pixel, e.g. via a CRA versus image height dependence based on the employed lens, which is stored in the memory and is input into the compensation algorithm, and apply the appropriate correction value to each photo signal based on the determined angle of incidence for each pixel.

In some embodiments, the correction data comprises angle-of-incidence dependent functions or interpolation expressions. Moreover, in such embodiments, the driver circuit is configured to generate correction values from the functions or expressions based on determined angles of incidence and/or pixel positions within the pixel array, and generate the corrected photo signals by multiplying the photo signal with a corresponding correction value. In contrast to the correction data directly comprising correction values, for memory conserving purposes for example, the correction data can comprise functions or expressions from which the respective correction values can be calculated. For example, the functions are angle dependent, such that a correction value for each pixel can be determined based on the angle of incidence at that specific pixel of the pixel array.

In some embodiments, generating the corrected photo signals is performed in the analog or digital domain. The driver circuit can be configured to directly correct the photo signals before the photo signals are digitized. To this end, the system can comprise a DAC for converting digital correction data into analog data that can be used to generate the corrected photo signals in the analog domain.

Alternatively, the driver circuit can be configured to first digitize the photo signals via an ADC or a system of ADCs and subsequently generate the corrected photo signals using digital correction data from the memory.

The aforementioned object is further solved by a camera system comprising an image sensor according to one of the embodiments described above.

The aforementioned object is further solved by a method of generating an image signal using an image sensor. The method comprises generating photo signals by means of a pixel array comprising a plurality of pixels, reading out the photo signals, and generating corrected photo signals by applying a compensation algorithm to the photo signals, wherein the compensation algorithm is based on correction data stored in a memory. The method further comprises reconstructing the image signal from the corrected photo signals by means of an image processing unit and outputting the image signal.

Further embodiments of the method and the camera system become apparent to the skilled reader from the aforementioned embodiments of the image sensor, and vice versa.

BRIEF DESCRIPTION OF THE DRAWINGS

The following description of figures may further illustrate and explain aspects of the image sensor and the method of generating an image signal. Components and parts of the image sensor that are functionally identical or have an identical effect are denoted by identical reference symbols. Identical or effectively identical components and parts might be described only with respect to the figures where they occur first. Their description is not necessarily repeated in successive figures.

DETAILED DESCRIPTION

In the figures :

Figures 1 to 3 are schematic views of pixel structures employed in an image sensor;

Figure 4 shows a schematic illustrating the contributions to the total modulation transfer function of the image sensor without the improved concept;

Figure 5 shows a schematic illustrating the contributions to the total modulation transfer function of the image sensor according to the improved concept;

Figures 6 to 8 show schematics of an image sensor according to the improved concept;

Figure 9 is a graph illustrating the Chief Ray Angle versus image height relationship; and

Figure 10 is a schematic of a camera system comprising an image sensor according to the improved concept.

Figure 1 shows a schematic view of a pixel structure 3 employed in a pixel array 2 of an image sensor 1 according to the improved concept. The pixel 3 is a CMOS image sensor, CIS, pixel, for example, and comprises a semiconductor photodiode 3a, e.g. a silicon photodiode, formed atop a metal layer structure 3d for electrically contacting the photodiode 3a. For optically isolating the pixel 3 from neighboring pixels, a deep trench isolation, DTI, 3b is formed on a side surface of the pixel. The DTI 3b is a trench etched into the silicon and filled with an oxide such as silica. The DTI is configured to totally internally reflect incident light such that light that enters the pixel 3 does not leave the pixel volume through the side surface. The DTI 3b typically extends only partially from the light entrance surface towards the metal layer structure 3d, as indicated.

Particularly for detecting light in the NIR domain of the electromagnetic spectrum, enhancement structures 3c are formed on or within the pixel volume for improving the quantum efficiency of the pixel 3. Typically, these enhancement structures 3c are formed as pyramidal structures and are configured to alter the angle of incidence of the incident radiation in various directions by scattering, such that photons are distributed into substantially the entire pixel volume. The characteristics and working principles of these enhancement structures 3c are an established concept and are not further detailed throughout this disclosure. The pixel 3 further comprises an optical filter 3e for rejecting unwanted light and a lens element 3f, e.g. a micro lens, for directing incident light 10 towards the pixel volume formed by the photodiode 3a. The lens element 3f can further be arranged such that only a particular angle of incidence, or a range of angles defining a field-of-view of the pixel 3, is allowed to enter the pixel volume, while light incident at other angles is rejected.

As indicated by the arrows within the photodiode 3a, the fact that the DTI 3b does not extend fully towards the metal layer structure 3d, which is due to manufacturing purposes, means that optical light paths into neighboring pixels 3 are enabled, thus forming a first optical leakage channel. A second optical leakage channel is due to imperfect total internal reflection at the interface between the photodiode 3a and the DTI 3b, which may fall short of 100% due to natural transmission, particularly for some angles of incidence. Also through this second loss channel, light can enter a neighboring pixel 3 and be detected there erroneously. The leakage into the neighboring pixels 3 in both these loss channels can even be enhanced by the enhancement structures 3c, such that a substantial amount of light is lost and/or erroneously detected in a neighboring pixel 3.

Figure 2 shows a further embodiment of a pixel 3 employed in a pixel array 2 of an image sensor 1 according to the improved concept. In some cases, the pixel 3 is configured to preferentially detect incident light 10 that has an oblique incidence, as indicated. To this end, the lens element 3f of the respective pixel 3 is laterally translated or shifted, such that incident light 10 at a specific angle of incidence enters the pixel while other angles of incidence are prevented from entering the pixel volume. This, however, has the consequence that, particularly for NIR pixels having the aforementioned enhancement structures 3c, light can be scattered towards the side opposite the incidence and enter a neighboring pixel. Thus, the leakage in this case is characterized by a preferred direction.

Figure 3 shows an exemplary pixel array 2 comprising a number of pixels 3, shown for simplicity and illustrative purposes as a one-dimensional array. As shown, for some lens element 3f and filter 3e configurations, light may even be scattered into neighboring pixels 3 before entering the pixel volume in the first place, hence creating a third optical loss channel. Further loss channels may be created by light that is reflected or scattered from the interface between the silicon comprising the photodiode 3a and the metal layer structure 3d. In addition to the described optical loss channels creating the optical cross-talk, the pixels 3 in a pixel array 2 can further experience an electronic loss channel due to photo charge transfer between the pixels caused by imperfect electric isolation, which is particularly present in densely populated micro pixel arrays. In general, however, for backside illuminated pixel structures as illustrated in Figures 1 to 3, the electrical cross-talk is negligible compared to the optical cross-talk.

Figure 4 shows a schematic illustrating the contributions to the total modulation transfer function MTF_Total of a conventional image sensor 1, describing essentially the degradation in resolution of the system, or the degradation in the range of spatial frequencies that can pass through the system. Firstly, incident light 10 propagates from a scene or object 11 towards a lens 12 or lens system of a camera housing an image sensor 1. As the medium through which the light propagates is usually not evacuated, e.g. it is air, some degradation of the MTF due to scattering processes can occur. Typically, free-space propagation acts as a low-pass filter, such that higher frequency content is lost. This can be described by the term MTF_Prop. Furthermore, the lens 12 and the pixel array 2 likewise introduce further MTF degradation channels, the latter being detailed with reference to Figs. 1 to 3. These contributions to the total modulation transfer function are referred to as MTF_Lens and MTF_Sensor, respectively. Finally, the readout of the driver circuit 4 and the image reconstruction of the processing unit 6 likewise introduce a reduction or degradation in MTF due to the readouts performed and the application of algorithms, etc. Any operation, physical or numerical, or any data manipulation affects the spatial content and therefore the MTF. These effects are captured as MTF_Readout and MTF_ASIC, respectively. The total modulation transfer function MTF_Total can thus be defined as:

MTF_Total = MTF_Prop * MTF_Lens * MTF_Sensor * MTF_Readout * MTF_ASIC.

Therein, the contribution due to the image sensor, MTF_Sensor, can be separated into a contribution due to the fundamental sampling determined by the pixel pitch, which cannot be changed, and contributions due to the optical and electrical cross-talk, i.e. MTF_Array, MTF_Opt and MTF_Elec, such that:

MTF_Sensor = MTF_Array * MTF_Opt * MTF_Elec = MTF_Array * MTF_CT.

Therein, the latter two terms can be unified into a contribution due to cross-talk, MTF_CT, which can be compensated for. It is again noted, however, that the electrical cross-talk typically is negligible.

Figures 1 to 4 illustrate the various MTF degradation channels in a pixel array 2 due to imperfect optical isolation of the pixels 3. Hence, a compensation process is necessary in order to account for the light leakage. As it is undesirable to change the pixel architecture due to well-established manufacturing processes and limitations, the present disclosure proposes a compensation during the readout of the photo signals based on pre-calibrated correction data. In particular, it is easier and cheaper to manufacture the aforementioned partial BDTI. If a full BDTI is used, however, the pixel design changes and hence less area is available for circuits etc. With a partial BDTI, more transistors can be fit into the pixel and a larger photodiode is possible. Figure 5 extends the concept of conventional image sensors illustrated in Fig. 4 and adds a driver circuit 4 that is further configured to generate a compensation signal and process the signal from the pixel array 2, i.e. the photo signals, using the compensation signal to generate compensated photo signals, based on which the image is eventually reconstructed by means of the image processing unit 6.

Specifically, the driver circuit 4 is configured to generate a compensation function MTF_Comp, which essentially constitutes the inverted cross-talk matrix function of the cross-talk of the pixel array 2, and to apply this compensation function to the photo signals from the pixel array, such that an effective compensated modulation transfer function of the sensor, MTF_Sensor,comp, is given by:

MTF_Sensor,comp = MTF_Sensor * MTF_Comp.

Ideally, MTF_Comp is the inverse function of the cross-talk, e.g. of only the optical cross-talk, as the electric contribution is typically negligible, such that:

MTF_Comp = MTF_CT^(-1),

which constitutes the MTF calculated from the inverted cross-talk matrix. It is noted that the above-stated equations account for a compensation in the MTF domain; the compensation calculations, however, are implemented not in the MTF domain but in the optical PSF domain, particularly as matrices which are referred to as cross-talk matrices (CT). First, the PSF of the pixels is pre-calculated, and the inverse of this optical PSF matrix is stored in the memory 5 for the purpose of compensation.

To calculate the MTF degradation and compensation due to optical cross-talk, it is sufficient to know the optical point spread function in matrix form, i.e. the information of how much light leaks to neighboring pixels when a single particular pixel is illuminated. Such a calculation can be performed using computer simulations or can be measured using optical experiments. In both simulations and experiments, only one pixel is selectively illuminated and the power flow/spill-over into neighboring pixels is noted. Typically, the optical cross-talk between pixels can be described using [K x K] matrices, e.g. [3 x 3] or [5 x 5] matrices, with only the center pixel of the matrix being illuminated. The matrix then contains data on how much power flows into nearest adjacent neighbors, diagonal neighbors, 2nd neighbors, etc. The size of this cross-talk matrix, denoted as CT matrix [K, K], determines the size of the buffer used to hold the data for the readout of each row. For example, the matrices are normalized such that the sum of all elements equals one and each element essentially signifies a percentage of the detected photon energy.
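The normalization described above can be illustrated with a short numeric sketch; all matrix values are hypothetical placeholders for illustration, not measured cross-talk data:

```python
# Hypothetical normalized [3 x 3] cross-talk (CT) matrix: only the center
# pixel is illuminated, and each element is the fraction of the detected
# photon energy that ends up in the corresponding pixel of the subarray.
CT = [
    [0.01, 0.03, 0.01],  # diagonal and nearest neighbors
    [0.03, 0.84, 0.03],  # most of the power stays in the illuminated pixel
    [0.01, 0.03, 0.01],
]

# Normalization: the sum of all elements equals one.
total = sum(sum(row) for row in CT)
assert abs(total - 1.0) < 1e-9

# Fraction of the power leaking into neighboring pixels:
leakage = total - CT[1][1]
print(round(leakage, 2))  # prints 0.16
```

A measured or simulated matrix of this form is what the memory 5 holds, in inverted form, for the compensation.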

Fig. 6 shows a schematic of a first exemplary embodiment of an image sensor 1 according to the improved concept. The image sensor 1 comprises the pixel array 2 having a plurality of pixels 3, e.g. conventional pixels of the same architecture having a partial BDTI separating the pixels, arranged in rows and columns, e.g. a [M x N] matrix, for forming a capturing surface of the image sensor 1. For reading out the photo signals generated by each pixel 3 in response to incident light 10, column and row drivers 2a, 2b serve to control the readout of the pixels. For example, the image sensor is read out row-by-row by the driver circuit 4, which controls the column and row drivers 2a, 2b accordingly. The aforementioned compensation is performed in the digital domain in this exemplary embodiment. Thus, the photo signals from the pixel array 2 are converted to digital signals by passing them through an analog-to-digital converter, ADC, 4a.

The digitized signals are stored in a buffer 4b of the driver circuit. Therein, the size of the buffer is determined by the number of rows, or columns, that are simultaneously read out by the driver circuit. For example, the buffer is chosen to be [M x K] in size if the ADC is configured to output one row at any given point in time during the readout and analog-to-digital conversion. Specifically, the buffer is an SRAM buffer comprising K rows of [M x 1] data, each representing the photo signals from a single row. If the ADC is designed to simultaneously output [K] rows of data, the buffer size can be reduced to [K x K]. The driver circuit 4 further comprises a memory 5, on which the correction data is stored as [K x K] matrices of values or functions or expressions, e.g. functions of incident angle and/or pixel location within the pixel array 2. As mentioned above, K can be an odd integer, e.g. K equals 3 or 5. Both the signals stored in the buffer and the correction data from the memory are provided to a calculation unit 4d, e.g. a multiply-accumulate (MAC) or multiply-add (MAD) unit, which generates the corrected photo signals.
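The row-by-row buffering scheme can be sketched in software as follows; the values of M and K and the placeholder ADC data are hypothetical, and the actual buffer 4b is implemented in SRAM hardware rather than as a Python object:

```python
from collections import deque

M, K = 8, 3  # hypothetical: 8 pixels per row, [3 x 3] correction matrices

# Rolling buffer holding the K most recently digitized rows, i.e. an
# [M x K] buffer built from K sub-buffers of [M x 1] data each.
row_buffer = deque(maxlen=K)

def adc_read_row(index):
    # Placeholder for the ADC output of one pixel row (hypothetical data).
    return [index * 10 + col for col in range(M)]

for r in range(5):  # read out five rows, one row at a time
    row_buffer.append(adc_read_row(r))

# Only the last K rows are retained for the [K x K] windowed correction.
print(len(row_buffer), len(row_buffer[0]))  # prints "3 8"
```

Appending a new row automatically evicts the oldest one, mirroring the fixed [M x K] buffer size.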

For example, [K x K] segments of the sensor data stored in the buffer 4b, representing the signals obtained from a corresponding [K x K] subarray of pixels 3, are picked and provided to the calculation unit 4d. The calculation unit 4d also receives the [K x K] matrix from the memory 5 that corresponds to the pixel location of the chosen subarray and to the incident angle, e.g. determined via the Chief Ray Angle and the respective pixel location. The calculation unit 4d can then be configured to first perform a matrix inversion of the memory data for generating an inverse of the cross-talk matrix or point spread function, and then perform a matrix multiplication of the inverted matrix and the data from the buffer 4b and output a [K x K] matrix constituting the corrected photo signals. This process is repeated for each [K x K] subarray stored in the buffer 4b. In some cases, the inverse of a matrix cannot be found due to a singularity. In such cases, an 'approximate inverse' method can be used instead. Moreover, to prevent such cases, a small random error can be applied to each value of a [K x K] matrix having a singularity before performing the inversion. However, it must be ensured that the amount of noise introduced with said small error is minimal and lower than the actual dark noise in each pixel 3. Alternatively, the inverted data is pre-calculated and stored in the memory 5.
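The inversion with the singularity fallback described above might be realized as in the following software sketch; the function names and the noise magnitude are illustrative assumptions, and a hardware calculation unit would implement this differently:

```python
import random

def invert(matrix, eps=1e-12):
    """Gauss-Jordan inversion of a small [K x K] matrix.
    Returns None if a (near-)singular pivot is encountered."""
    k = len(matrix)
    # Augment the matrix with the identity matrix.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(k)]
           for i, row in enumerate(matrix)]
    for col in range(k):
        # Partial pivoting: pick the largest remaining pivot candidate.
        pivot = max(range(col, k), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < eps:
            return None  # singular: no usable pivot in this column
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(k):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[k:] for row in aug]

def invert_with_dither(matrix, noise=1e-6):
    """Fallback described above: if the matrix is singular, apply a small
    random error (to be kept below the pixel dark noise) and retry."""
    inv = invert(matrix)
    if inv is None:
        dithered = [[v + random.uniform(-noise, noise) for v in row]
                    for row in matrix]
        inv = invert(dithered)
    return inv
```

For a singular matrix such as [[1, 1], [1, 1]], `invert` returns None, while `invert_with_dither` almost surely succeeds on the perturbed copy.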

The memory 5 can comprise correction data for all subarrays in the buffer 4b, such that corrected photo signals can be generated by means of the corresponding correction data from the memory. Alternatively, the memory 5 can comprise correction data for only some subarrays. In this case, the calculation unit 4d can be configured to interpolate the correction data from the memory for [K x K] subarrays from the buffer 4b that do not have corresponding correction data stored in the memory 5. The calculation unit 4d can comprise a further buffer 4e for storing the corrected photo signals.
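The interpolation of correction data between stored calibration points might, for instance, be an element-wise linear blend of two neighboring [K x K] matrices; this is an illustrative assumption, as the disclosure does not fix a particular interpolation scheme:

```python
def interpolate_ct(ct_a, ct_b, t):
    """Element-wise linear interpolation between two stored correction
    matrices; t = 0.0 yields ct_a, t = 1.0 yields ct_b."""
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(ct_a, ct_b)]

# A subarray halfway between two calibration points (hypothetical data):
mid = interpolate_ct([[0.8, 0.2]], [[0.6, 0.4]], 0.5)
print([[round(v, 6) for v in row] for row in mid])  # prints [[0.7, 0.3]]
```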

The driver circuit further comprises a timing and sequence generator 4c, which is configured to control the readout and compensation processes. To this end, the timing and sequence generator 4c is coupled to the row and column drivers 2a, 2b, the ADC 4a, the buffer 4b, the calculation unit 4d and the memory 5.

The corrected photo signals are provided to an image processing unit 6, which is configured to reconstruct an image and output a corresponding image signal for further processing or for display purposes. The processing unit 6 can be arranged on a common substrate with the remaining components of the image sensor 1. Alternatively, it can be arranged on an ASIC die, on which a sensor chip with the remaining components of the image sensor 1 is arranged.

Fig. 7 shows a schematic of a second exemplary embodiment of an image sensor 1 according to the improved concept. Compared to the first embodiment of Fig. 6, the compensation in this embodiment is likewise performed in the digital domain; the driver circuit, however, comprises a plurality of ADCs 4a. For example, the driver circuit 4 comprises K ADCs 4a. This means that a simultaneous readout of K consecutive rows or columns of the pixel array 2 can be performed. In consequence, the buffer 4b, e.g. an SRAM buffer, is of size [K x K], having K sub-buffers of size [K x 1]. The operating principle otherwise remains the same as detailed in the context of the first embodiment of Fig. 6.

Fig. 8 shows a schematic of a third exemplary embodiment of an image sensor 1 according to the improved concept. In this embodiment, the compensation is performed in the analog domain. To this end, the driver circuit comprises a switching or multiplexing unit 4f that couples the pixels 3 of the pixel array 2 to the buffer 4b, which can be implemented as a capacitor array for storing charges from the pixels. For a readout in a row-by-row manner, the buffer 4b comprises K sub-buffers of size [M x 1], similar to the embodiment of Fig. 7. The switching or multiplexing unit 4f controls the storage of the row signals into these sub-buffers of the buffer 4b. The correction data, e.g. [K x K] matrices, are stored in digital form, thus requiring conversion to the analog domain via a digital-to-analog converter, DAC, 4g before reaching the calculation unit 4d. The calculation unit 4d receives the analog correction data from the memory 5 via the DAC 4g and the stored photo signals from the buffer 4b and generates the corrected photo signals, which are subsequently digitized via an ADC 4a before being supplied to an image processing unit 6.

The calculation unit 4d is a multiply-add unit, for example, which receives [K x K] image data from the buffer, representing a [K x K] subarray of pixels 3, and [K x K] correction coefficients or functions from the memory 5. The output of the calculation unit 4d is the matrix product, i.e. a [K x K] compensated image.
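The multiply-add step can be sketched as a plain matrix product; this is a software model only, as in the embodiment of Fig. 8 the product is formed on analog signals rather than on digital values:

```python
def multiply_add(correction, signals):
    """Matrix product of a [K x K] correction matrix (inverted cross-talk
    data) with a [K x K] block of buffered photo signals, computed as a
    series of multiply-add operations."""
    k = len(correction)
    return [[sum(correction[i][m] * signals[m][j] for m in range(k))
             for j in range(k)]
            for i in range(k)]

# With an identity correction matrix the signals pass through unchanged:
identity = [[1.0, 0.0], [0.0, 1.0]]
print(multiply_add(identity, [[5.0, 6.0], [7.0, 8.0]]))
# prints [[5.0, 6.0], [7.0, 8.0]]
```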

Fig. 9 illustrates an exemplary relationship between the Chief Ray Angle, CRA, and the image height for an exemplary image sensor 1. This relationship is important as the variation of the angle of incidence of light on a pixel 3 at location (X, Y) within the pixel array 2 can be determined via the image height, i.e. the radial distance on the sensor surface with origin at the center of the pixel array 2. The Chief Ray Angle is the centroid of the incidence angle of the chief ray on the sensor surface and is measured in degrees.

For any given image sensor architecture and camera system, this CRA vs. image height relationship is fixed and can thus be used to determine the angle of incidence for every pixel 3 of the pixel array 2, indicated by an index (i, j) and described by a spatial position (x, y). This is possible as a pixel with index (i, j) is mapped to a specific location (x, y) on the sensor surface. Thus, the two-dimensional angle of incidence (θ, φ) can be determined for each pixel. As the cross-talk in a pixel can be dependent on the angle of incidence, the correction data in the memory can comprise multiple sets of data for various angles of incidence. Thus, the cross-talk can be efficiently compensated for each pixel by accounting for the varying angle of incidence across the sensor surface.
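The pixel-index-to-angle mapping can be sketched as follows; all numbers, including the CRA table, pixel pitch and array center, are hypothetical placeholders rather than data from this disclosure:

```python
import math

# Hypothetical CRA vs. image height calibration table, as in Figure 9:
# (image height in mm, chief ray angle in degrees).
CRA_TABLE = [(0.0, 0.0), (1.0, 10.0), (2.0, 18.0), (3.0, 24.0)]

PIXEL_PITCH_MM = 0.001   # hypothetical 1 um pixel pitch
CENTER = (1500, 2000)    # hypothetical array center index (i, j)

def cra_for_pixel(i, j):
    """Map pixel index (i, j) to its chief ray angle via the image height,
    i.e. the radial distance from the array center, using linear
    interpolation in the fixed CRA-vs-height table."""
    dy = (i - CENTER[0]) * PIXEL_PITCH_MM
    dx = (j - CENTER[1]) * PIXEL_PITCH_MM
    h = math.hypot(dx, dy)  # image height in mm
    for (h0, a0), (h1, a1) in zip(CRA_TABLE, CRA_TABLE[1:]):
        if h0 <= h <= h1:
            return a0 + (a1 - a0) * (h - h0) / (h1 - h0)
    return CRA_TABLE[-1][1]  # clamp beyond the calibrated range
```

The resulting angle then selects which of the stored correction data sets is applied to the subarray around that pixel.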

Fig. 10 shows an embodiment of a camera system 100 comprising an input lens 12 and an image sensor 1 according to the improved concept. For example, the camera system 100 is a camera sensor of a smartphone. Alternatively, the camera system 100 can be a standalone camera system or a camera module of any portable or mobile electronic device, including tablet or laptop computers, augmented or virtual reality glasses, smartwatches or other wearable devices, or a dedicated distance sensor.

The embodiments of the image sensor 1, the camera system 100 and the method of generating corrected photo signals disclosed herein have been discussed for the purpose of familiarizing the reader with novel aspects of the idea. Although preferred embodiments have been shown and described, changes, modifications, equivalents and substitutions of the disclosed concepts may be made by one having skill in the art without unnecessarily departing from the scope of the claims. It will be appreciated that the disclosure is not limited to the disclosed embodiments and to what has been particularly shown and described hereinabove. Rather, features recited in separate dependent claims or in the description may advantageously be combined. Furthermore, the scope of the disclosure includes those variations and modifications which will be apparent to those skilled in the art and fall within the scope of the appended claims.

The term " comprising" , insofar it was used in the claims or in the description, does not exclude other elements or steps of a corresponding feature or procedure . In case that the terms " a" or " an" were used in conj unction with features , they do not exclude a plurality of such features . Moreover, any reference signs in the claims should not be construed as limiting the scope .

This patent application claims the priority of US patent application US 63/404,278, the disclosure content of which is hereby incorporated by reference.

References

1 image sensor

2 pixel array

2a, 2b row and column driver

3 pixel

3a photodiode

3b deep-trench isolation

3c enhancement structure

3d metal layer structure

3e filter element

3f lens element

4 driver circuit

4a analog-to-digital converter

4b buffer

4c timing and sequence generator

4d calculation unit

4e second buffer

4f switching or multiplexing unit

4g digital-to-analog converter

5 memory

6 image processing unit

10 incident light

11 object or scene

12 input lens

100 camera system

θ, φ incident angle