

Title:
MULTI-CHANNEL HIGH-RESOLUTION IMAGING DEVICES INCORPORATING METALENSES
Document Type and Number:
WIPO Patent Application WO/2023/012110
Kind Code:
A1
Abstract:
An apparatus includes, in some implementations, at least one image sensor, a plurality of metalenses, and readout and processing circuitry. The at least one image sensor includes a plurality of pixel arrays, each of which is associated, respectively, with a different one of a plurality of optical channels configured for detection of incoming light rays of a particular wavelength or a particular range of wavelengths centered on the particular wavelength. Each of the metalenses is disposed, respectively, in a different one of the optical channels and is configured, respectively, to focus incoming light rays onto a different one of the pixel arrays. The readout and processing circuitry is operable to read out signals from the pixel arrays and to generate a respective lower-resolution image for each of the optical channels, and to process the lower-resolution images to obtain a higher-resolution monochromatic image. Methods of operation are described as well.

Inventors:
QUAADE ULRICH (DK)
MATTINSON FREDRIK (DK)
FRANCOIS OLIVIER (DK)
EILERTSEN JAMES (DK)
JOHANSEN VILLADS EGEDE (DK)
Application Number:
PCT/EP2022/071563
Publication Date:
February 09, 2023
Filing Date:
August 01, 2022
Assignee:
NIL TECH APS (DK)
International Classes:
G06T3/40; G02B1/00; H04N1/40
Foreign References:
US20110080487A12011-04-07
US20200388642A12020-12-10
US20160241751A12016-08-18
Attorney, Agent or Firm:
FISH & RICHARDSON P.C. (DE)
Claims:
What is claimed is:

1. An apparatus comprising: at least one image sensor including a plurality of pixel arrays, each of the pixel arrays being associated, respectively, with a different one of a plurality of optical channels configured for detection of incoming light rays of a particular wavelength or a particular range of wavelengths centered on the particular wavelength; a plurality of metalenses, each of which is disposed, respectively, in a different one of the plurality of optical channels and is configured, respectively, to focus incoming light rays onto a different one of the pixel arrays; and readout and processing circuitry operable to read out signals from the plurality of pixel arrays and to generate a respective lower-resolution image for each of the optical channels, and to process the lower-resolution images to obtain a higher-resolution monochromatic image.

2. The apparatus of claim 1 wherein each of the plurality of metalenses is configured to focus incoming light rays of the particular wavelength, or falling within the particular range of wavelengths, onto a respective one of the pixel arrays.

3. The apparatus of claim 1 wherein each of the plurality of optical channels includes a respective optical filter.

4. The apparatus of claim 3 wherein each optical filter is configured to pass light having the particular wavelength or falling within the particular range of wavelengths.

5. The apparatus of any one of claims 3 or 4 wherein each of the optical filters is disposed between the image sensor and a different respective one of the metalenses.

6. The apparatus of any one of claims 3 or 4 wherein each of the optical filters is disposed over a different respective one of the metalenses.

7. The apparatus of any one of claims 1-6 wherein each of the pixel arrays is operable to acquire an image of a scene, and wherein there is a sub-pixel shift in the image acquired by a first one of the pixel arrays relative to the image acquired by a second one of the pixel arrays.

8. The apparatus of any one of claims 1-7 wherein the at least one image sensor includes a plurality of image sensors, each of which includes a different respective one of the pixel arrays.

9. The apparatus of any one of claims 1-7 wherein the at least one image sensor is a single image sensor that includes each of the pixel arrays.

10. The apparatus of any one of claims 1-9 wherein the readout and processing circuitry is operable to process the lower-resolution images to obtain a higher-resolution monochromatic image using a super-resolution protocol.

11. A method comprising: acquiring, by each of two or more pixel arrays associated with different respective optical channels of an imaging device, a respective lower-resolution image of a scene, where each of the lower-resolution images is based on light rays passing through a respective metalens in a respective one of the optical channels; reading out, from the pixel arrays, signals representing the acquired lower-resolution images; and using a super-resolution protocol to obtain a higher-resolution monochromatic image of the scene based on the lower-resolution images.

12. The method of claim 11 including displaying the higher-resolution image on a display screen of a computing device.

13. The method of claim 11 including displaying the higher-resolution image on a display screen of a smartphone.

14. The method of any one of claims 11-13 wherein each respective one of the plurality of metalenses focuses incoming light rays of a particular wavelength, or falling within a particular range of wavelengths centered on the particular wavelength, onto the respective one of the pixel arrays.

15. The method of claim 14 wherein each of the metalenses comprises meta-atoms arranged to resonate at a fixed frequency corresponding to the particular wavelength.

16. The method of any one of claims 11-15 wherein there is a sub-pixel shift in the lower-resolution image acquired by a first one of the pixel arrays relative to the lower-resolution image acquired by a second one of the pixel arrays.

Description:
MULTI-CHANNEL HIGH-RESOLUTION IMAGING DEVICES INCORPORATING METALENSES

FIELD OF THE DISCLOSURE

[0001] The present disclosure relates to multi-channel imaging devices.

BACKGROUND

[0002] Multi-channel imaging devices can acquire images using image sensors. For example, light entering through an aperture at one end of an imaging device is directed to one or more image sensors, which include pixels that generate signals in response to sensing received light. The imaging devices sometimes are incorporated into handheld or other portable electronic devices such as smartphones. However, space in such portable devices often is at a premium. Thus, reducing the size or dimensions of the imaging device can be important for such applications.

SUMMARY

[0003] The present disclosure describes multi-channel high-resolution imaging devices incorporating metalenses.

[0004] In one aspect, for example, the present disclosure describes an apparatus that includes at least one image sensor, a plurality of metalenses, and readout and processing circuitry. The at least one image sensor includes a plurality of pixel arrays, each of which is associated, respectively, with a different one of a plurality of optical channels configured for detection of incoming light rays of a particular wavelength or a particular range of wavelengths centered on the particular wavelength. Each of the metalenses is disposed, respectively, in a different one of the optical channels and is configured, respectively, to focus incoming light rays onto a different one of the pixel arrays. The readout and processing circuitry is operable to read out signals from the pixel arrays and to generate a respective lower-resolution image for each of the optical channels, and to process the lower-resolution images to obtain a higher-resolution monochromatic image.

[0005] Some implementations include one or more of the following features. For example, in some instances, each of the metalenses is configured to focus incoming light rays of the particular wavelength, or falling within the particular range of wavelengths, onto a respective one of the pixel arrays. In some cases, each of the optical channels includes a respective optical filter. In some instances, each optical filter is configured to pass light having the particular wavelength or falling within the particular range of wavelengths. Each of the optical filters can be disposed, for example, between the image sensor and a different respective one of the metalenses. In some cases, each of the optical filters is disposed over a different respective one of the metalenses.

[0006] In some implementations, each of the pixel arrays is operable to acquire an image of a scene, wherein there is a sub-pixel shift in the image acquired by a first one of the pixel arrays relative to the image acquired by a second one of the pixel arrays. In some implementations, the at least one image sensor includes a plurality of image sensors, each of which includes a different respective one of the pixel arrays, whereas in some implementations, the at least one image sensor is a single image sensor that includes each of the pixel arrays.

[0007] In some implementations, the readout and processing circuitry is operable to process the lower-resolution images to obtain a higher-resolution monochromatic image using a super-resolution protocol.

[0008] The present disclosure also describes a method that includes acquiring, by each of two or more pixel arrays associated with different respective optical channels of an imaging device, a respective lower-resolution image of a scene. Each of the lower-resolution images is based on light rays passing through a respective metalens in a respective one of the optical channels. The method includes reading out, from the pixel arrays, signals representing the acquired lower-resolution images, and using a super-resolution protocol to obtain a higher-resolution monochromatic image of the scene based on the lower-resolution images.

[0009] Some implementations include one or more of the following features. For example, in some instances, the method includes displaying the higher-resolution image on a display screen of a computing device (e.g., on a display screen of a smartphone). In some cases, each respective one of the metalenses focuses incoming light rays of a particular wavelength, or falling within a particular range of wavelengths centered on the particular wavelength, onto the respective one of the pixel arrays. Each of the metalenses can comprise, for example, meta-atoms arranged to resonate at a fixed frequency corresponding to the particular wavelength. In some implementations, there is a sub-pixel shift in the lower-resolution image acquired by a first one of the pixel arrays relative to the lower-resolution image acquired by a second one of the pixel arrays.

[0010] Some implementations include one or more of the following advantages. For example, using metalenses can be advantageous because they can be relatively flat, ultrathin, lightweight, and/or compact. Thus, using metalenses can help reduce the total track length (TTL) of the imaging device. Further, in some implementations, the metalenses can be used in conjunction with one or more relatively low-cost, low-resolution image sensors in a manner that allows for high-resolution images to be obtained.

[0011] The details of one or more implementations are set forth in the accompanying drawings and the description below. Other aspects, features and advantages will be apparent from the following detailed description, the accompanying drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 illustrates a first example of an imaging device.

[0013] FIG. 2 illustrates a second example of an imaging device.

[0014] FIG. 3 illustrates a third example of an imaging device.

[0015] FIG. 4 illustrates a fourth example of an imaging device.

[0016] FIG. 5 is a flow chart of an example method for operation of the imaging devices of FIGS. 1 through 4.

DETAILED DESCRIPTION

[0017] As illustrated in the example of FIG. 1, a multi-channel imaging device 100 is operable to capture images by respective pixel arrays 102A, 102B that are associated with different channels and are part of one or more image sensors. In the illustrated example, a single image sensor 104 is shown and includes both pixel arrays 102A, 102B.

[0018] In some implementations, each pixel array 102A, 102B is part of a different respective small image sensor, rather than a single larger image sensor. In any event, each image sensor can be implemented, for example, as a relatively low-cost, low-resolution CCD (charge-coupled device) image sensor or CMOS (complementary metal-oxide-semiconductor) image sensor. That is, although more expensive, high-resolution image sensors can be employed, it is not necessary to do so. Further, although the example of FIG. 1 shows only two optical channels 106A, 106B, some implementations may include a greater number of optical channels.

[0019] Each optical channel is configured for detection of incoming light rays of a particular wavelength or a particular range of wavelengths centered on the particular wavelength. For each channel 106A, 106B, a respective metalens is provided to focus incoming light rays onto a respective one of the pixel arrays 102A, 102B. That is, a first metalens 108A is disposed over the first part of the image sensor 104 that includes the first pixel array 102A, and a second metalens 108B is disposed over the second part of the image sensor 104 that includes the second pixel array 102B. The first metalens 108A is configured to focus incoming light rays onto the first pixel array 102A, and the second metalens 108B is configured to focus incoming light rays onto the second pixel array 102B. Each metalens 108A, 108B has a metasurface, which refers to a surface with distributed small structures (e.g., meta-atoms) arranged to interact with light in a particular manner. For example, a metasurface, which also may be referred to as a metastructure, can be a surface with a distributed array of nanostructures. The nanostructures are configured to interact, individually or collectively, with light waves so as to change a local amplitude, a local phase, or both, of an incoming light wave.

[0020] The meta-atoms (e.g., nanostructures) can be arranged to act as a metalens that resonates at a fixed frequency with a relatively sharp bandwidth. That is, the dimensions (e.g., diameter and length), shape, and material of the meta-atoms can be designed to induce a phase delay in an incident wave of a particular wavelength so as to focus an incident wave on a particular spot. In some implementations, the metalenses 108A, 108B are configured for a particular wavelength or narrow band of wavelengths in the infrared part of the electromagnetic spectrum, whereas in other implementations, the metalenses are configured for a particular wavelength or narrow band of wavelengths in another part of the spectrum (e.g., visible). In any event, each of the metalenses can be configured to focus, onto the respective pixel arrays, incoming light rays of a particular wavelength, or falling within a particular (e.g., narrow) range of wavelengths centered on the particular wavelength.
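The focusing behavior described above can be made concrete with the standard hyperbolic phase profile commonly used in metalens design (a textbook relation, not a formula taken from this disclosure): for a design wavelength λ and focal length f, the meta-atoms at position (x, y) on the metasurface are chosen to impart the phase

```latex
\varphi(x, y) = \frac{2\pi}{\lambda}\left( f - \sqrt{x^{2} + y^{2} + f^{2}} \right)
```

so that rays arriving anywhere across the aperture interfere constructively at the focal spot. Because this profile is exact only at the design wavelength λ, it also illustrates why each metalens here is paired with a narrow-band optical channel.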

[0021] The metalenses 108A, 108B can be supported, for example, by a glass or other substrate 110. Although the example of FIG. 1 shows the metalenses 108A, 108B on the upper surface of the substrate 110, in some cases, the metalenses are disposed on the lower surface of the substrate 110, as shown in FIG. 2. In some cases, metalenses may be disposed on both sides of the substrate 110.

[0022] In the imaging device 100, each of the optical channels 106A, 106B is configured to acquire monochromatic images of substantially the same color. That is, both channels are configured to detect optical signals having a particular wavelength or falling within the same relatively narrow wavelength range. In particular, the first pixel array 102A can capture an image based on light rays passing through the first optical channel 106A, and the second pixel array 102B can capture an image based on light rays passing through the second optical channel 106B. Further, each pixel array 102A, 102B is operable to acquire an image of a scene from a viewpoint that differs slightly from that of the other pixel array. That is, the image of the scene acquired by a particular one of the pixel arrays differs somewhat from images of the same scene acquired by the other pixel array. The slight difference in viewpoints results in a small shift (sometimes referred to as “motion”) in the images of the scene acquired by the pixel arrays 102A, 102B. The size of the shift may be, for example, sub-pixel.
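The effect of such a sub-pixel shift can be illustrated with a small numerical sketch (illustrative only; the function and variable names below are not from this disclosure). A finely gridded one-dimensional "scene" is sampled at a coarse pixel pitch from two viewpoints offset by half a coarse pixel, producing two distinct samplings of the same scene:

```python
import numpy as np

def sample_channel(scene, offset, factor):
    """Simulate one optical channel: sample a finely gridded scene at a
    coarse pixel pitch, starting at a (possibly sub-pixel) offset given in
    coarse-pixel units. 'factor' is the number of fine samples per coarse
    pixel."""
    start = int(round(offset * factor))
    return scene[start::factor]

scene = np.sin(np.linspace(0.0, np.pi, 64))   # stand-in for a 1-D scene
a = sample_channel(scene, 0.0, 4)   # first pixel array
b = sample_channel(scene, 0.5, 4)   # second array, half-pixel shift
# 'a' and 'b' are interleaved samplings of the same scene; together they
# sample it twice as densely as either channel alone.
```

The two arrays carry complementary information about the scene, which is precisely what the super-resolution reconstruction described later exploits.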

[0023] At first glance, introducing metalenses into an imaging device as described here is counterintuitive because metalenses are generally known to exhibit relatively large chromatic aberrations, and because they typically generate relatively small images, which makes it difficult in some cases to use the entire active area of a standard image sensor. Nevertheless, by associating each metalens with an optical channel that encompasses only a portion of the total pixels of the image sensor(s), and configuring each of the optical channels 106A, 106B in the imaging device 100 for a single wavelength or a relatively narrow band of wavelengths, the imaging device 100 can take advantage of benefits that metalenses can offer. In particular, using metalenses 108A, 108B rather than other types of lenses (e.g., refractive lenses) in the imaging device 100 can be advantageous because the metalenses can be relatively flat, ultrathin, lightweight, and compact. Thus, using metalenses can help reduce the total track length (TTL) or z-height of the imaging device 100. Further, as explained below, the metalenses can be used in conjunction with one or more relatively low-cost, low-resolution image sensors 104 in a manner that allows for high-resolution images to be obtained.

[0024] In some cases, it can be beneficial to include an optical filter in each of the channels 106A, 106B. The filters can help eliminate or reduce optical noise that may be present. For example, if the channels 106A, 106B are designed to detect infrared radiation, an infrared filter 120 can be included in each channel, as shown in FIG. 3. In other implementations, optical filters can be included in the optical channels to pass only radiation in a particular part of the visible portion of the spectrum (e.g., red, green or blue). FIG. 3 shows filters 120 as being disposed on the image sensor 104, that is, between the image sensor 104 and the metalenses 108A, 108B. In some implementations, the filters 120 can be disposed over the metalenses 108A, 108B, as shown in FIG. 4.

[0025] The imaging device 100 can include control circuitry 111 (e.g., logic) operable to control the image sensor(s) 104 to acquire images of a scene 112 containing one or more objects. In some implementations, the control circuitry 111 may be responsive to user input (e.g., a user interacting with, or otherwise providing input to, a user interface of a smartphone or other computing device coupled to the control circuitry).

[0026] The imaging device 100 also can include readout and processing circuitry 114, which can include, for example, a microprocessor and one or more associated memories storing instructions for execution by the microprocessor. The control circuitry 111 can be coupled to the readout and processing circuitry 114 to provide, for example, timing and control signals for reading out the pixel signals. Thus, signals from the pixel arrays 102A, 102B in the various channels 106A, 106B of the imaging device 100 can be read out by the readout and processing circuitry 114, which can include, for example, one or more integrated circuits in one or more semiconductor chips with appropriate digital logic and/or other hardware components (e.g., readout registers; amplifiers; analog-to-digital converters; clock drivers; timing logic; and/or signal processing circuitry).

[0027] Depending on the implementation, the readout circuitry can include, for example, active MOS readout amplifiers per pixel. In some implementations, the readout circuitry is operable for intra-pixel charge transfer along with an in-pixel amplifier to achieve correlated double sampling (CDS). The readout circuit can include, in some instances, a source follower or a charge amplifier with row- and column-selection. In some cases, the readout circuit includes a digital readout integrated circuit (DROIC) or a digital pixel readout integrated circuit (DPROIC). In some instances, the pixels are demodulation pixels. Other pixel readout circuits can be used in some implementations.

[0028] The readout and processing circuitry 114 is operable to process the pixel signals and to generate, for example, a small, low-resolution image 113A, 113B for each channel 106A, 106B. Thus, the readout and processing circuitry 114 is operable to read out signals from each of the pixels in the pixel arrays 102A, 102B, where the signals from pixels in a particular one of the pixel arrays correspond to a relatively small, low-resolution image of the scene 112.

[0029] The readout and processing circuitry 114 also is operable to process the low-resolution images to obtain a higher-resolution monochromatic image 118 using, for example, a super-resolution protocol 115. Super-resolution reconstruction refers to a process of combining information from multiple low-resolution images with sub-pixel displacements to obtain a higher-resolution image. The super-resolution reconstruction can include, for example, interpolation-based methods, reconstruction-based methods, or learning-based methods. In some instances, a standard super-resolution protocol can be used, such as an example-based technique, a sparse-coding-based technique, a projection onto convex sets (POCS) technique, or a Bayesian technique. Some implementations use a fusion super-resolution technique, in which high-resolution images are constructed from low-resolution images, thereby recovering high-frequency components and removing degradations introduced during acquisition of the low-resolution images. In some instances, the super-resolution protocol employs convolutional neural networks. Other super-resolution techniques can be used as well.
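As a concrete illustration of the interpolation-based family of methods mentioned above, the following sketch implements a minimal shift-and-add reconstruction. It is a toy, not the protocol 115 itself: the function name, nearest-neighbor placement, and assumption of known shifts are all illustrative. Each low-resolution frame is placed onto a finer grid according to its sub-pixel shift, and overlapping contributions are averaged:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Shift-and-add super-resolution sketch: place each low-resolution
    frame onto a grid 'scale' times finer according to its known
    sub-pixel shift (dy, dx, in coarse-pixel units), then average any
    overlapping contributions."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        # Nearest fine-grid coordinates of this frame's pixel centres.
        yi = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(yi, xi)] += img
        cnt[np.ix_(yi, xi)] += 1.0
    # Average where at least one frame contributed; leave other cells 0.
    out = np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
    return out, cnt > 0

# Two 2x2 frames of the same scene, the second shifted by half a pixel in
# each direction; reconstruct on a 2x-finer grid.
frames = [np.ones((2, 2)), 2.0 * np.ones((2, 2))]
hi, filled = shift_and_add(frames, [(0.0, 0.0), (0.5, 0.5)], scale=2)
```

In practice the shifts would first be estimated by image registration, and the sparsely filled fine grid would be interpolated or deconvolved; those steps are omitted here for brevity.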

[0030] The super-resolution reconstructed monochromatic image generated by the readout and processing circuitry 114 can be provided, for example, to a display 116, which displays the super-resolution reconstructed image. The display 116 can include, for example, a screen of a computing device (e.g., a smartphone, tablet, personal computer, or other small computing device).

[0031] The imaging device 100 can be used in any of a wide range of applications including, for example, cameras in smartphones and other handheld or portable computing devices, as well as medical imaging, satellite imaging, surveillance, facial recognition, high definition television, and others. In some instances, at least a portion of the readout and processing circuitry 114 for the imaging device 100 may be integrated into the smartphone or other computing device’s own processing circuitry. In other instances, the readout and processing circuitry 114 may be separate from such circuitry in the computing device.

[0032] FIG. 5 illustrates an example of a method of using the imaging devices 100 of FIGS. 1, 2, 3 or 4. As indicated by 200, each of two or more pixel arrays associated with different respective optical channels of the imaging device acquires a respective low-resolution image of a scene that includes one or more objects. Each low-resolution image is based (at least in part) on light rays passing through a respective metalens in a respective one of the optical channels. The low-resolution images for the optical channels are substantially monochromatic and are based on light of the same wavelength or the same narrow range of wavelengths. In some instances, the low-resolution images are acquired in response to user input (e.g., input provided by the user through an interactive user interface). As indicated by 202, signals representing the acquired low-resolution images are read out from the pixel arrays. Then, as indicated by 204, the method includes using a super-resolution protocol to obtain a higher-resolution monochromatic image of the scene based on the low-resolution images. In some instances, the higher-resolution image is displayed, for example, on a display screen of a smartphone or other computing device, as indicated by 206.

[0033] Various aspects of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Thus, aspects of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware.

[0034] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program, which may be stored as instructions in one or more memories, can be deployed to be executed on one computer or on multiple interconnected computers.

[0035] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

[0036] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[0037] Various modifications will be readily apparent from the foregoing detailed description and the drawings. Accordingly, other implementations also are within the scope of the claims.