


Title:
MULTISPECTRAL IMAGE SENSOR ARRANGEMENT, ELECTRONIC DEVICE AND METHOD OF MULTISPECTRAL IMAGING
Document Type and Number:
WIPO Patent Application WO/2024/046727
Kind Code:
A1
Abstract:
A multispectral image sensor arrangement comprises a main image sensor (IS), a multispectral image sensor (MS) and a processing unit (PU). The image sensor (IS) is operable to acquire a spatially resolved first image (IM1) of a scene. The multispectral sensor (MS) is operable to acquire a spectrally resolved second image (IM2) of the same scene. The processing unit (PU) is operable to define one or more regions-of-interest, ROIs, in the first image (IM1), define one or more spectral ROIs in the second image (IM2) corresponding to the ROIs in the first image (IM1), determine spectral data from the spectral ROIs of the second image (IM2), and to use the determined spectral data to adjust a spectral representation of the first image (IM1).

Inventors:
GAIDUK ALEXANDER (DE)
SIESS GUNTER (DE)
MOZAFFARI MOHSEN (DE)
Application Number:
PCT/EP2023/072014
Publication Date:
March 07, 2024
Filing Date:
August 09, 2023
Assignee:
AMS SENSORS GERMANY GMBH (DE)
International Classes:
G01J3/02; G01J3/28
Domestic Patent References:
WO2019089531A1  2019-05-09
WO2021105398A1  2021-06-03
Foreign References:
EP2944930A2  2015-11-18
US20150086117A1  2015-03-26
US20070064119A1  2007-03-22
US20120062888A1  2012-03-15
US20080297861A1  2008-12-04
DE102022121896A1
Attorney, Agent or Firm:
EPPING HERMANN FISCHER PATENTANWALTSGESELLSCHAFT MBH (DE)
Claims

1. A multispectral image sensor arrangement, comprising a main image sensor (IS), a multispectral image sensor (MS) and a processing unit (PU), wherein:

- the main image sensor (IS) is operable to acquire a spatially resolved first image (IM1) of a scene;

- the multispectral sensor (MS) is operable to acquire a spectrally resolved second image (IM2) of the same scene; and the processing unit (PU) is operable to:

- define one or more regions-of-interest, ROIs, in the first image (IM1),

- define one or more spectral ROIs in the second image (IM2) corresponding to the ROIs in the first image (IM1),

- determine spectral data from the spectral ROIs of the second image (IM2), and

- use the determined spectral data to adjust a spectral representation of the first image (IM1), wherein

- the main image sensor (IS) has a higher spatial resolution than the multispectral image sensor (MS), and

- the multispectral image sensor (MS) has a higher spectral resolution than the main image sensor (IS).

2. The sensor arrangement according to claim 1, wherein the processing unit (PU) is operable to define position, size and shape of the ROIs and spectral ROIs.

3. The sensor arrangement according to claim 1 or 2, wherein the processing unit (PU) is operable to:

- initiate image acquisition of the spatially resolved first image (IM1) by means of pixels of the image sensor (IS), and initiate image acquisition of the spectrally resolved second image (IM2) by means of spectral pixels of the multispectral sensor (MS), wherein the second image is acquired using spectral pixels corresponding to the defined spectral ROIs.

4. The sensor arrangement according to one of claims 1 to 3, wherein the processing unit (PU) is operable to adjust, or calibrate, a spectral representation of the one or more ROIs of the first image (IM1) with spectral data determined from the corresponding spectral ROIs.

5. The sensor arrangement according to one of claims 1 to 4, wherein the processing unit (PU) is operable to adjust, or calibrate, a spectral representation of the first image (IM1) without spectral data determined from spectral ROIs, denoted void spectral ROI.

6. The sensor arrangement according to one of claims 1 to 5, wherein the processing unit (PU) is operable to define the ROIs in the first image:

- by user input,

- by object recognition, and/or

- by database matching.

7. The sensor arrangement according to one of claims 1 to 6, wherein:

- the main image sensor (IS) has a first field-of-view (FOV1) and the multispectral image sensor (MS) has a second field-of-view (FOV2),

- the main image sensor (IS) and the multispectral image sensor (MS) are arranged next to each other such that the first and second fields-of-view are overlapping, and/or

- the main image sensor (IS) and the multispectral image sensor (MS) have a shared focal plane (FP).

8. The sensor arrangement according to one of claims 1 to 7, wherein :

- the main image sensor (IS) has a high spatial resolution and comparably low spectral resolution,

- the multispectral image sensor (MS) has a high spectral resolution and comparably low spatial resolution.

9. The sensor arrangement according to one of claims 1 to 8, wherein the main image sensor (IS) has a spatial resolution of more than 20 megapixels and a spectral resolution of at most three color channels, and the multispectral image sensor (MS) has a spatial resolution of at most 20 percent or at most 10 percent of the spatial resolution of the main image sensor (IS) and a spectral resolution of at least 6 color channels.

10. The sensor arrangement according to one of claims 1 to 9, further comprising at least one additional optical sensor, wherein the processing unit is operable to adjust the spectral representation of the first image (IM1) using the determined spectral data and using data generated by the additional optical sensor.

11. The sensor arrangement according to one of claims 1 to 10, wherein at least the main image sensor (IS) and the multispectral image sensor (MS) are integrated into a sensor module.

12. The sensor arrangement according to claim 11, wherein the processing unit (PU) is integrated into the sensor module.

13. An electronic device comprising:

- at least one multispectral image sensor arrangement according to one of claims 1 to 12, and

- a host system, wherein the host system comprises one of:

- a mobile device, a digital camera, such as a security camera or a drone camera, or a spectrometer.

14. A method of multispectral imaging using a main image sensor (IS), a multispectral image sensor (MS) and a processing unit (PU), the method comprising the steps of:

- using the main image sensor (IS), acquiring a spatially resolved first image (IM1) of a scene;

- using the multispectral sensor (MS), acquiring a spectrally resolved second image (IM2) of the same scene; and, using the processing unit (PU):

- defining one or more regions-of-interest, ROIs, in the first image (IM1),

- defining one or more spectral ROIs in the second image (IM2) corresponding to the ROIs in the first image (IM1),

- determining spectral data from the spectral ROIs of the second image (IM2), and

- using the determined spectral data to adjust a spectral representation of the first image (IM1), wherein

- the main image sensor (IS) has a higher spatial resolution than the multispectral image sensor (MS), and

- the multispectral image sensor (MS) has a higher spectral resolution than the main image sensor (IS).

15. The method according to claim 14, wherein the ROIs in the first image (IM1) are defined before the spectral ROIs in the second image (IM2), or vice versa.

Description

MULTISPECTRAL IMAGE SENSOR ARRANGEMENT, ELECTRONIC DEVICE AND METHOD OF MULTISPECTRAL IMAGING

This disclosure relates to a multispectral image sensor arrangement, an electronic device and to a method of multispectral imaging.

With the advent of smartphones and compact cameras, digital photography has become widely present in modern everyday life. The quality of pictures in consumer photography frequently requires color reproduction that is acceptable to the customer. The reproduction of colors depends on the illumination conditions and the ability of the imaging system to sense spectral information, i.e. colors/spectral distribution of detected light. Typically, a spectral response of a consumer image sensor is quantified or calibrated, and spectral characteristics of a limited number of channels (like RGB, RGBW, etc.) are known as average characteristics for a type of image sensor. Multiple methods in image analysis allow the introduction of color correction matrices and a white reference for a given illumination condition, integrated or averaged over a definite spatial region.

Frequently, dedicated light sensors are used in consumer and professional photography equipment to improve color reproduction in photography and to integrate over large observation angles. Information from such sensors complements various light metering information that is obtained from the main photographic image sensor. However, in the case of various illumination types, strong illumination or strong contrast difference in an image scene, it is advantageous to not only include space-averaging information from the assisting light sensor, but also to sense lighting conditions in different regions. The selection of regions (position and size) is typically under weak control.

An object to be achieved is to provide a multispectral image sensor arrangement for electronic devices that overcomes the aforementioned limitations and provides improved color correction or calibration. A further object is to provide an electronic device comprising such an image sensor arrangement and a method of multispectral imaging.

These objectives are achieved with the subject-matter of the independent claims. Further developments and embodiments are described in the dependent claims.

The following relates to an improved concept in the field of imaging. One aspect relates to the sensing of lighting conditions from different regions of an image. The proposed concept provides means to control selection of regions-of-interest, ROIs, (including position and size) and provides control over light monitoring regions.

For example, a multispectral image sensor arrangement, comprising a main image sensor, a multispectral image sensor and a processing unit to perform image processing of images taken by the sensors, is specified. The processing unit allows a functional workflow to be implemented that combines information from the main image sensor and the multispectral image sensor and, optionally, other sensors as well as databases or artificial intelligence, AI, based cores to address a task that a user wishes to solve. Such tasks can be light identification, chemical identification, plant identification, skin identification, face identification, etc. Furthermore, the improved concept enables a functional workflow that flexibly combines spatial and spectral information from multiple image sensors that have various spatial and spectral resolutions or temporal responses.

In at least one embodiment, a multispectral image sensor arrangement comprises a main image sensor, a multispectral image sensor and a processing unit. The image sensor is operable to acquire a spatially resolved first image of a scene. The multispectral sensor is operable to acquire a spectrally resolved second image of the same scene.

The processing unit is operable to perform a number of image processing and control steps. One step is to define one or more regions-of-interest, ROIs, in the first image. Another step is to define one or more spectral ROIs in the second image corresponding to the ROIs in the first image. Spectral data is determined from the spectral ROIs of the second image. The determined spectral data is used to adjust a spectral representation of the first image, i.e. image data of the first image is complemented with additional application-specific spectral data from the multispectral sensor. The sequence of procedural steps conducted by the processing unit may vary according to several possible workflows, for example.
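
The processing steps above can be sketched in Python. This is a purely illustrative sketch, not part of the disclosure: the function name, the ROI tuple layout, the single scale factor between the two sensors and the gray-world-style gain are all assumptions made for illustration.

```python
import numpy as np

def adjust_spectral_representation(im1, im2, rois, scale):
    """Illustrative workflow: map each ROI of the first image into the
    second (spectral) image, average its spectral data, and derive
    per-channel gains for the ROI of the first image.
    im1: (H, W, 3) RGB image; im2: (h, w, C) spectral image;
    rois: list of (row, col, height, width) in im1 coordinates;
    scale: spatial resolution ratio between im1 and im2."""
    out = im1.astype(float).copy()
    for (r, c, h, w) in rois:
        # define the corresponding spectral ROI in the second image
        sr, sc, sh, sw = (int(v / scale) for v in (r, c, h, w))
        spectral = im2[sr:sr + max(sh, 1), sc:sc + max(sw, 1)]
        # determine spectral data: mean per spectral channel
        spectrum = spectral.reshape(-1, spectral.shape[-1]).mean(axis=0)
        # adjust the spectral representation of the ROI, e.g. equalize
        # channels (the first 3 spectral channels stand in for R, G, B)
        gains = spectrum.mean() / np.maximum(spectrum[:3], 1e-9)
        out[r:r + h, c:c + w] *= gains
    return out
```

In practice the mapping between ROI and spectral ROI would also account for field-of-view offsets and optics, which this sketch omits.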

The regions-of-interest, ROIs, can be used to identify, and isolate, objects in the scene. For example, the first image may show a bright light source of some color, which is localized only in a small part of the image (spatial distribution). The bright light source, however, may have a different dominant color (spectral distribution) than other, or even most other, parts of the image. Thus, an appropriate ROI can be used to create a subset of image data, in order to adjust, or calibrate, a spectral representation of said ROI of the first image. Alternatively, the subset of image data can be used to adjust, or calibrate, a spectral representation of the entire first image, e.g. complemented with data from a dedicated optical sensor. The calibration involves spectral data which can be retrieved from the second image, which contains spectral data of the corresponding spectral ROI. The ROI and the spectral ROI may not be exactly the same, as typically the main image sensor and multispectral image sensor may have different spatial resolutions.
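
Since the two sensors typically have different spatial resolutions, a ROI in the first image and the corresponding spectral ROI need not coincide exactly. A minimal sketch of such a mapping follows; the function name and the convention of flooring the start and ceiling the end coordinates are illustrative assumptions, not taken from the disclosure.

```python
import math

def map_roi(roi, main_shape, spec_shape):
    """Map a ROI given in main-sensor pixel coordinates to the nearest
    enclosing spectral ROI on the lower-resolution multispectral sensor.
    roi: (row0, col0, row1, col1); shapes are (rows, cols)."""
    ry = spec_shape[0] / main_shape[0]
    rx = spec_shape[1] / main_shape[1]
    r0, c0, r1, c1 = roi
    # floor the start and ceil the end so the spectral ROI fully covers
    # the main-sensor ROI despite the coarser pixel grid
    return (math.floor(r0 * ry), math.floor(c0 * rx),
            math.ceil(r1 * ry), math.ceil(c1 * rx))
```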

The proposed concept addresses a number of shortcomings of previous solutions suggested in the art. The multispectral image sensor arrangement allows the combination of the high spatial resolution of the main image sensor with the high-quality spectral identification of the multispectral image sensor, or high specificity for a specific application. Often there is no need to reconstruct the exact spectrum in a point or in a ROI, but rather to have a dedicated spectral range in the point or the ROI. This spectral range is typically not available in a standard monochromatic or RGB color sensor. Correlating the images allows information from image sensors with different spatial and spectral resolutions to be combined and calibration to be performed on a finer scale, as defined by the regions-of-interest. For example, in photography, an accurate spectral identification of a direct or diffuse light source can be achieved in one or more local positions simultaneously, rather than being roughly estimated at a global level or at a smaller location. The proposed concept can also be applied to different fields, e.g. spectral identification and color matching applications. The range of applications may also include sensing of reflected light in fields such as medical imaging, digital health, wellness, diagnostics, agriculture, food inspection, counterfeit detection, security, sorting, etc. In general, analysis of multispectral data is dedicated to the application and to the optical system (with potentially non-negligible optical aberrations). The parameters of the analysis are flexible and potentially spectral settings are tunable.

The proposed concept suggests ways to combine two image sensors with different spatial and spectral characteristics, i.e. spectral analysis is performed for objects visible in images obtained by the main image sensor, but depends on the optical and digitalization properties and resolution of the multispectral image sensor. The properties of the multispectral image sensor may vary along the field and spectral channels (optics-defined) and are digitally corrected using the processing unit, e.g. by means of firmware and software.

Hereinafter, the term "region-of-interest", or ROI for short, refers to samples within a data set, i.e. a sub-image within the first or second (spectral) image. The image sensor comprises an array of pixels, e.g. implemented as a CMOS photodetector or CCD. The multispectral image sensor comprises an array of spectral pixels, e.g. the spectral pixels provide optical channels distributed over the visible, IR and/or UV range. There may be extra channels present, such as Clear, Flicker and NIR channels. The term "spectral" indicates that the image sensor, or spectral pixels, is arranged to generate an image of spectral data. The spectral image represents a three-dimensional array of data which combines precise spectral information with two-dimensional spatial correlation. Spectral information enables accurate object color measurements, spectral detection and characterization. The term "spatial" indicates that the image sensor, or pixels, is arranged to generate an image of spatial data. The spatial image represents a three-dimensional array of data which combines intensity, or color, and depth of field/focus for each sensor and spectral channel.

The term "adjust a spectral representation" refers to a combination of the spatial and spectral information. The image sensor generates an image, which could be representing a true color image of a scene. In order to reproduce true color, or any other color representation or desired color balance, a calibration may be needed. Calibration may be a special case of spectral representation adjustment. Typically, this can be achieved by means of a dedicated optical sensor. However, as these sensors typically integrate light over a large field-of-view, FOV, calibration may not be correct for every smaller area of an image. Thus, the proposed ROIs allow calibration to be extended or added to smaller local areas. However, the term "spectral representation" may be understood in a broader sense. The image acquired by the main image sensor may also be used for spectral imaging. The image may be adjusted, or calibrated, such that the spectral representation indicates accurate spectral information over the entire image. Examples for calibration include white balance or color balance in digital imaging.

The processing unit can be implemented in different ways. For example, the processing unit comprises an image processor as a type of media processor or specialized digital signal processor (DSP) used for image processing. The processing unit can be implemented as an ASIC or microprocessor, for example. In general, the processing unit can be part of a dedicated sensor module or be part of a larger electronic device such as a digital camera.
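
One common way to realize such a calibration, mentioned here only as an illustrative example and not as the claimed method, is a least-squares color correction matrix that maps measured RGB values (e.g. averaged over ROIs of the first image) onto reference values derived from the spectral data. The function name and data layout are assumptions.

```python
import numpy as np

def color_correction_matrix(measured, reference):
    """Fit a 3x3 color correction matrix M (least squares) such that
    measured @ M approximates reference. Rows are color samples,
    e.g. per-ROI averages; columns are R, G, B."""
    M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return M  # apply as corrected = rgb @ M
```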

In at least one embodiment, the processing unit is operable to define the position, size and shape of the ROIs and spectral ROIs. Ultimately, the spectral spatial distribution depends on the relative size of an object, detector pixel size and parameters of an optical system, such as a point-spread-function, PSF, of a camera lens, for example. In this respect, position, size and shape of the ROIs may only be limited by the design of the image sensors, e.g. number and shape of pixels.

For example, the processing unit may also be operable to define more ROIs and corresponding spectral ROIs for the same object, and to adjust a spectral representation based on a comparison of said more ROIs and corresponding spectral ROIs. For example, a smaller ROI selection may produce better spectral reconstruction (or better estimation of the light source type) compared to a larger ROI selection, depending on the size of an object in the scene.

In at least one embodiment, the processing unit is operable to initiate image acquisition of the spatially resolved first image by means of pixels of the image sensor. Furthermore, the processing unit is operable to initiate image acquisition of the spectrally resolved second image by means of spectral pixels of the multispectral sensor. The second image is acquired using spectral pixels corresponding to the defined spectral ROIs.

In at least one embodiment, the processing unit is operable to initiate image acquisition of the spatially resolved first image by means of pixels of the image sensor. Furthermore, the processing unit is operable to initiate image acquisition of the spectrally resolved second image by means of spectral pixels of the multispectral sensor, wherein the second image is acquired using all spectral pixels corresponding to the defined spectral ROIs.

In other words, the first image is acquired first and analyzed for ROIs. Then, the second image may be acquired as a whole and corresponding spectral ROIs are used for further analysis. Or, alternatively, the multispectral image sensor is used to only acquire data corresponding to the spectral ROIs, rather than taking an entire image. In both cases, image processing by means of the processing unit focuses on the ROIs rather than the entire images. This renders processing fast as well as accurate. In at least one embodiment, different ROIs can have different digitalization levels/sampling/binning for the corresponding ROIs in the multispectral image sensor.
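
Per-ROI binning as mentioned above can be sketched as a simple average-pooling step; the function name and the integer binning factor are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def bin_roi(spectral_roi, factor):
    """Average-bin a spectral ROI by an integer factor, illustrating a
    per-ROI digitalization/sampling/binning level. Input shape is
    (rows, cols, channels); edge rows/cols that do not fill a full
    bin are discarded."""
    h, w, c = spectral_roi.shape
    h2, w2 = h // factor * factor, w // factor * factor
    v = spectral_roi[:h2, :w2].reshape(h2 // factor, factor,
                                       w2 // factor, factor, c)
    return v.mean(axis=(1, 3))
```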

The processing unit may be arranged not only to conduct steps of image processing, but may also be involved in controlling operation of the sensors. The two examples discussed above can be used as alternatives or may complement each other.

In at least one embodiment, the processing unit is operable to adjust, or calibrate, a spectral representation of the one or more ROIs of the first image with spectral data determined from the corresponding spectral ROIs. In other words, the spectral data determined from the corresponding spectral ROIs can be used to adjust the spectral representation only of the corresponding ROI in the first image, leaving the rest of the spectral representation untouched. In this way, only parts of the image can be adjusted with spectral data from dedicated ROIs, while the rest may not be adjusted or may be adjusted with sensor data of another sensor, like an optical sensor.

In at least one embodiment, the processing unit is operable to adjust, or calibrate, a spectral representation of the first image without spectral data determined from spectral ROIs, denoted void spectral ROI. For multiple reasons, specific areas in the image may be excluded from analysis (set to ROI voids). For example, this can be due to saturation of one or several spectral channels, due to the need to compare the spatial-spectral distribution of detected light from light sources of different color temperatures or different light types, due to the need to analyze spatially distributed reflections, or due to known properties of the optical systems of the multispectral imaging sensor and/or main imaging sensor. This may be possible by defining position, size and shape of void ROIs and the appropriate definition of valid ROIs.
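
As an illustrative sketch of the saturation case only (the function name, full-scale value and 99 % threshold are assumptions, not taken from the disclosure), void regions could be flagged per spectral pixel as follows:

```python
import numpy as np

def find_void_mask(spectral_image, full_scale=1.0, frac=0.99):
    """Flag spectral pixels as 'void' when any channel is at or near
    saturation, so they can be excluded from calibration.
    Returns a boolean mask: True = void, exclude from analysis."""
    return (spectral_image >= frac * full_scale).any(axis=-1)
```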

In at least one embodiment, the processing unit is operable to define the ROIs in the first image by user input, by object recognition, and/or by database matching. Object recognition may involve known procedures of image processing, such as edge detection techniques, e.g. Canny edge detection, to find edges, as well as specific multispectral channel detection or specific feature detection.
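
A minimal stand-in for such an edge detector (much simpler than Canny, which adds smoothing, non-maximum suppression and hysteresis) thresholds the gradient magnitude of a grayscale image; the function name and threshold are illustrative assumptions.

```python
import numpy as np

def edge_mask(gray, thresh=0.1):
    """Toy edge detector: threshold the gradient magnitude of a
    grayscale image to locate object boundaries from which ROIs
    could be grown."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh
```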

In at least one embodiment, the main image sensor has a first field-of-view and the multispectral image sensor has a second field-of-view. The main image sensor and the multispectral image sensor are arranged next to each other such that the first and the second fields-of-view are overlapping. In addition, or alternatively, the main image sensor and the multispectral image sensor have a shared focal plane. For example, the main image sensor and multispectral image sensor comprise electronic sensors and at least one optical element or multiple optical elements for imaging.

In at least one embodiment, the main image sensor has a high spatial resolution and comparably low spectral resolution. The multispectral image sensor has a high spectral resolution and comparably low spatial resolution. In at least one embodiment, the main image sensor has a higher spatial resolution than the multispectral image sensor. The multispectral image sensor has a higher spectral resolution than the main image sensor. For example, the main imaging sensor has a high number of pixels and the multispectral sensor has a low number of pixels (e.g. 4x to 100x, to 1000x, or up to 80000x lower).

At least one embodiment further comprises at least one additional optical sensor, wherein the processing unit is operable to adjust, or calibrate, the spectral representation of the first image using the determined spectral data and using data generated by the additional optical sensor. The additional optical sensor can be used to adjust, or calibrate, the spectral representation of the first image in all areas other than the defined ROIs, or be used as an additional source of information to adjust, or calibrate, the spectral representation of the first image using the determined spectral data and the data generated by the additional optical sensor.

In at least one embodiment, at least the main image sensor and the multispectral image sensor are integrated into a sensor module.

In at least one embodiment, the processing unit is integrated into the sensor module.

In at least one embodiment, an electronic device comprises at least one multispectral image sensor arrangement according to one of the aforementioned aspects, and a host system. The host system comprises one of a mobile device, a digital camera, such as a security camera or drone camera, or a, e.g. handheld, spectrometer.

Furthermore, a method of multispectral imaging is suggested. The method uses a multispectral image sensor arrangement having a main image sensor, a multispectral image sensor and a processing unit. The method involves, using the main image sensor, acquiring a spatially resolved first image of a scene and, using the multispectral sensor, acquiring a spectrally resolved second image of the same scene. Using the processing unit, one or more regions-of-interest, ROIs, are defined in the first image, and one or more spectral ROIs are defined in the second image corresponding to the ROIs in the first image. Then, spectral data is determined from the spectral ROIs of the second image. Finally, a spectral representation of the first image is adjusted using the determined spectral data.

Further embodiments of the method become apparent to the skilled reader from the aforementioned embodiments of the multispectral image sensor arrangement and the electronic device, and vice-versa.

The following description of figures may further illustrate and explain aspects of the multispectral image sensor arrangement, electronic device and method of multispectral imaging. Components and parts of the multispectral image sensor arrangement that are functionally identical or have an identical effect are denoted by identical reference symbols. Identical or effectively identical components and parts might be described only with respect to the figures where they occur first. Their description is not necessarily repeated in successive figures.

Figure 1 shows an example embodiment of a multispectral image sensor arrangement,

Figure 2 shows an example image acquired by the main image sensor,

Figure 3 shows an example image acquired by the multispectral image sensor, and

Figure 4 shows another example image acquired by the main image sensor.

Figure 1 shows an example embodiment of a multispectral image sensor arrangement. The multispectral image sensor arrangement comprises an imaging module IM with a processing unit PU. The imaging module comprises a main image sensor IS and a multispectral image sensor MS.

The main image sensor IS is complemented with first optics O1. The main image sensor comprises an array of pixels, e.g. implemented as a CMOS photodetector or charge-coupled device, CCD. The first optics O1 provides a first field-of-view FOV1 characterized by a first solid angle Ω1.

The main image sensor IS features high lateral spatial resolution for both input and output, and high axial resolution, with short depth-of-field, DoF. For example, lateral spatial resolution may be in the range of 20 MP, 40 MP, 60 MP, 100 MP, 200 MP or more for input and in the range of 40 MP, 20 MP, 8 MP, 2 MP for output (MP = megapixel). However, the spectral resolution of the main image sensor can be low, i.e. the main image sensor may be arranged to provide a monochromatic (1 channel) or red, green, blue, or RGB, (3 channels) image as output. In addition, the main image sensor can be arranged with, or complemented with, autofocus, distance estimation, segmentation, region-of-interest, ROI, selection, coordinates selection, 3D reconstruction etc. functionality. Said functions may be controlled by the processing unit PU, or dedicated control circuitry.

The multispectral image sensor MS is complemented with second optics O2 and comprises an array of spectral pixels, e.g. the spectral pixels provide optical spectral channels distributed over the visible, IR and/or UV range. The term "spectral" indicates that the multispectral image sensor, or spectral pixels, is arranged to generate an image of spectral data. The second optics O2 provides a second field-of-view FOV2 characterized by a second solid angle Ω2.

The multispectral image sensor MS features a given lateral spatial resolution for an input and a lower spatial resolution for an output. This means that the lateral spatial resolution for a spectral image sensor could be e.g. in the range of 5 MP (the MS input) or lower, which could be low compared to the main image sensor IS, which could be in the range of 40 MP, for example. That the output of MS has a lower spatial resolution (e.g., in the range of 2 MP, 1 MP, 100 kP, 10 kP, 2 kP) means that the multispectral image sensor may not necessarily output an entire image but rather only ROIs or integral values for given ROIs, as will be discussed in further detail below (kP = kilopixel). The multispectral image sensor may also feature low axial resolution, with longer DoF. However, the spectral resolution of the multispectral image sensor can be high, i.e. the multispectral image sensor may have at least six spectral channels or higher (e.g., in the range of 6, 12, 20, 50, 120 channels, or 4 channels with defined spectral positions and FWHM) or provide a highly defined spectral position in a spectral image as output (specificity). For example, the multispectral image sensor can be sensitive in VIS, VIS/NIR, NIR, SWIR, and VIS/SWIR. In addition, the multispectral image sensor can be arranged with, or complemented with, receiving metadata from the main image sensor or other sensors/single-point calibrated color sensors or databases or coordinates associated with the imaging module IM. The multispectral image sensor can be arranged to provide additional color-related and/or spatial information to machine-learning-based algorithms for object identification, or spectral information with or without spatial and/or spectral averaging. Said functions may be controlled or implemented by the processing unit PU, or dedicated control circuitry.

The main image sensor IS and the multispectral image sensor MS, together with their respective optics O1, O2, are arranged next to each other in the imaging module IM such that the first and second fields-of-view FOV1, FOV2 are overlapping. The optics are further arranged so that the main image sensor and the multispectral image sensor have a shared focal plane FP at a distance d1 away from the imaging module IM. The focal plane may be shared in the sense that the depths of field DOF1, DOF2 corresponding to the respective optics O1, O2 are overlapping.

The processing unit PU can be implemented in different ways. For example, the processing unit comprises an image processor as a type of media processor or specialized digital signal processor (DSP) used for image processing. The processing unit can be implemented as an ASIC or microprocessor, for example. In general, the processing unit can be part of a dedicated sensor module or be part of a larger electronic device such as a digital camera.

Figure 2 shows an example image acquired by the main image sensor. The image sensor IS is operable to acquire a spatially resolved first image IM1 of a scene. Depicted are different objects of various size and shape. The objects are shown as black and white but generally may also have different colors, or spectral content. The objects can be represented by defining regions-of-interest, ROIs, in the first image IM1, as will be discussed further below.

Figure 3 shows an example image acquired by the multispectral image sensor. The multispectral image sensor MS is operable to acquire a spectrally resolved second image IM2 of the same scene. The multispectral image sensor can have a different field-of-view FOV2, electronic sensor pixel size, and optics-specific point spread function, PSF for short, compared to the image sensor IS. For example, the resolution of the multispectral image sensor could be limited by its PSF or by its pixel size. The PSF can even vary from one spectral channel to another spectral channel, and can vary within a given spectral channel.

The proposed principle suggests a specific method, e.g. to be conducted by means of the processing unit PU, to provide accurate spectral data for objects visible in first images IM1 of the main image sensor IS, e.g. by applying digital processing and knowledge about the combined opto-electronic performance of the multispectral image sensor MS. This may involve sampling of all spectral channels, under the constraint that the sampling is appropriate for all channel-dependent PSF sizes. In this context, the sampling should be able to digitize the spectral channel with the smallest PSF according to the Nyquist criterion.
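The Nyquist constraint described above can be sketched as follows (a minimal illustration; the function name and the FWHM-based PSF sizes are hypothetical, not taken from the text):

```python
def required_sampling_pitch(psf_sizes_um):
    """Return the maximum sampling pitch meeting the Nyquist criterion.

    psf_sizes_um: PSF sizes (e.g. FWHM, in micrometres) of all spectral
    channels. The finest detail is set by the smallest PSF, so the
    sampling pitch must be at most half of that size, i.e. at least two
    samples per smallest PSF.
    """
    smallest = min(psf_sizes_um)
    return smallest / 2.0

# hypothetical per-channel PSF sizes in micrometres
pitch = required_sampling_pitch([12.0, 8.0, 20.0])
```

Any sampling grid coarser than this pitch would under-sample the sharpest channel and, as noted above, lead to systematic error.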

In the drawing a second image IM2 is depicted which shows a number of objects represented by respective PSFs of variable size (shown in the upper row of the image) . A given PSF may vary in size over area for a single channel, as indicated in the middle row of the image. The PSF can be variable in size over different spectral channels. A spectral measurement according to the proposed concept may include the constraint that the areas of all involved PSFs overlap for best accuracy (shown in the bottom row of the image) . Other areas may have reduced accuracy of spectral measurements and/or reconstruction.

Sampling seeks to ensure that the smallest PSF size is resolved in the digitalization of the spectral channels. The drawing below the example image IM2 shows different grids representing different ROIs and sampling densities (fine, mid, coarse) . Furthermore, each grid also depicts PSFs of two different sizes. Coarse sampling (the example on the right side) does not ensure appropriate sampling for both PSFs and would lead to systematic error. Finer sampling works better for different or all PSF sizes and would allow improved channel overlap.

Figure 4 shows another example image acquired by the main image sensor. The drawing illustrates that regions-of-interest, ROIs, can be defined in the first image IM1, i.e. the objects shown in the first image IM1 can be selected for multispectral analysis by ROIs of different sizes and shapes.

In the drawing, objects are schematically presented as stars and three-fold lines of different sizes. ROIs of different sizes are presented as dashed squares. In the top left of the drawing, the objects are smaller than the ROI size, so a ROI includes objects. In the top right, the objects are of comparable or the same size as the ROI, so a ROI can include an object completely or partially. On the bottom, a ROI is smaller than the depicted objects, i.e. one ROI can be within an object or partially overlap with an object. The ROI size can be analyzed and adjusted to fit a size definition based on the sampling accuracy of the multispectral sensor MS (a set of spectral channels that is application dependent, e.g. depends on the actual implementation of the multispectral image sensor MS, such as the number of spectral pixels or channels) . In other words, the ROIs in the first image IM1 are defined based on the size definition of the multispectral image sensor. In turn, the size definition includes a predefined sampling parameter, e.g. to ensure that the Nyquist criterion is met for the multispectral image sensor.

For example, if the resolution defined based on the object size in the first image IM1 and required from the multispectral image sensor is too fine (finer than the resolution that can be provided by the multispectral image sensor) , then a loss of accuracy of spectral measurements may be inevitable and is indicated or reported by the processing unit PU, e.g. to a user. In turn, the parameters of multispectral acquisition can be adjusted according to the workflows described below.

In the following, two example workflows are presented, which can be considered as example embodiments of a method of multispectral imaging. A method of multispectral imaging includes procedural steps, e.g. conducted or initiated by the processing unit PU using the main image sensor IS and the multispectral image sensor MS. Details related to said steps, including the succession of procedural steps, may vary.

Generally, the method comprises the steps of:

- using the main image sensor IS, acquiring a spatially resolved first image IM1 of a scene;

- using the multispectral sensor MS, acquiring a spectrally resolved second image IM2 of the same scene; and, using the processing unit PU:

- defining one or more regions-of-interest , ROIs, in the first image IM1,

- defining one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1,

- determining spectral data from the spectral ROIs of the second image IM2, and

- using the determined spectral data to adjust a spectral representation of the first image IM1.
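The steps above can be sketched as a processing pipeline (a hedged illustration only; every callable and name below is a hypothetical placeholder, not part of the described arrangement):

```python
def multispectral_imaging(acquire_im1, acquire_im2, define_rois,
                          map_roi, extract_spectrum, apply_spectrum):
    """Sketch of the method steps; each argument is a hypothetical
    callable standing in for sensor or processing-unit functionality."""
    im1 = acquire_im1()                             # main image sensor IS
    im2 = acquire_im2()                             # multispectral sensor MS
    rois = define_rois(im1)                         # ROIs in the first image
    spectral_rois = [map_roi(roi) for roi in rois]  # corresponding ROIs in IM2
    spectra = [extract_spectrum(im2, sr) for sr in spectral_rois]
    return apply_spectrum(im1, rois, spectra)       # adjusted representation
```

The succession of steps may vary, as noted above; the sketch only fixes one possible ordering.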

Workflows may use the first image IM1 to define initial ROIs. Other workflows use the second image of the multispectral sensor MS to define initial ROIs. Both ROIs or spectral ROIs can be determined as initial ROIs.

According to a first workflow, a spatially resolved first image IM1, e.g. a high resolution RGB/BW picture, of a scene is acquired using the main image sensor IS.

One or more regions-of-interest, ROIs, are defined in the first image IM1. Methods for defining the ROIs could involve coordinate/area definition from the main image sensor, e.g. based on intensity threshold analysis, gradient analysis, artificial intelligence (AI) object recognition, RGB component recognition, etc. ROIs can also be user supervised (e.g. by user input such as touch, audio, eye tracking, etc.) or derived from other sources of coordinate definition.
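As a minimal illustration of the intensity threshold analysis named above, the following sketch finds bounding-box ROIs of connected bright regions (the function, the 4-connectivity choice and the bounding-box output format are assumptions for illustration, not taken from the text):

```python
def threshold_rois(image, threshold):
    """Return bounding boxes (row0, col0, row1, col1), inclusive, of
    4-connected regions whose intensity exceeds `threshold`.

    image: 2-D list of intensity values. A simple flood-fill sketch of
    intensity-threshold ROI definition in the first image IM1.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    rois = []
    for r in range(h):
        for c in range(w):
            if image[r][c] > threshold and not seen[r][c]:
                stack, r0, c0, r1, c1 = [(r, c)], r, c, r, c
                seen[r][c] = True
                while stack:                      # grow the region
                    y, x = stack.pop()
                    r0, c0 = min(r0, y), min(c0, x)
                    r1, c1 = max(r1, y), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and image[ny][nx] > threshold:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                rois.append((r0, c0, r1, c1))
    return rois
```

Gradient analysis or AI object recognition, also mentioned above, would replace the thresholding step but yield ROIs of the same kind.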

Then, a spectrally resolved second image IM2, e.g. a lower resolution multispectral image, of the same scene is acquired using the multispectral sensor MS. The next step can be summarized as defining one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1. However, this step may involve a number of additional steps, which are discussed now. One step relates to applying aberration corrections necessary for a specific multispectral image sensor, e.g. for all spectral channels or selected spectral channel (s) .

"Aberration corrections" for the multispectral image sensor MS could include:

- compensation of geometrical aberrations, for example correction of distortion, field curvature or coma for all spectral channels or specific channel(s) of interest,

- compensation of axial or lateral chromatic aberrations for all spectral channels or specific channel(s) of interest,

- compensation of the specific geometry design of the multispectral image sensor, for example including compensation of parallax for light field technology or for multiple image sensors,

- compensation of other aberrations that affect the parameters of the point spread function (PSF) of the multispectral image sensor for all spectral channels or a specific channel of interest. Specifically, these aberrations include defocus, spherical aberration, coma, astigmatism, and/or field curvature.

These corrections could be quantified for each individual multispectral image sensor, or be averaged for a type of multispectral image sensor, in the form of a lookup table, LUT. Specific parameters may be called up in the processing step by means of the processing unit, when one or more spectral channels are allocated to a spectral ROI according to the needs of an application, for example.
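A LUT-based correction could, for instance, look as follows (a sketch only; the channel names, the lateral-shift representation and the shift values are purely hypothetical):

```python
# Hypothetical per-channel LUT: channel -> (dx, dy) lateral chromatic
# shift in IM2 pixels, quantified per sensor or averaged per sensor type.
ABERRATION_LUT = {
    "ch_450nm": (0.0, 0.0),
    "ch_650nm": (1.5, -0.5),
    "ch_850nm": (3.0, -1.0),
}

def correct_roi(roi, channel, lut=ABERRATION_LUT):
    """Shift a spectral ROI (x0, y0, x1, y1) by the channel's LUT entry,
    called up when the channel is allocated to a spectral ROI."""
    dx, dy = lut[channel]
    x0, y0, x1, y1 = roi
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```

A real correction could also resample for distortion or field curvature; a rigid shift is the simplest case of the lateral chromatic correction named above.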

A next step may involve overlap, correlation or comparison of the first and second images IM1, IM2, e.g. in view of the requirements of the specific application. For example, the first and second fields-of-view FOV1, FOV2 as well as the scaling can be different: FOV2 can be 45 degrees while FOV1 could be 120 degrees. "Overlap" information from the imaging module IM could be applied for one or multiple regions of interest (ROIs) , defined as follows. Note that the overlap may not be available for the complete area of one of the sensors without additional motion of the two sensors:

- a single coordinate (point) ,

- a line around a specific area,

- a specific area with a definite size or dimensions, e.g. in x and y,

- a specific depth or dimension Z at a specific coordinate.

An overlap, correlation or comparison procedure to define one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1 could include the following steps:

- a geometrical relation or overlap is estimated for a definite ROI in the first image IM1 and a spectral ROI in the second image. The overlap could be visualized or highlighted in the first image; this step can be guided by using the "overlap" information discussed above,

- parameters of the PSF variation in the selected spectral ROI of the multispectral sensor are estimated. The PSF defines the spatial accuracy within a spectral ROI where the spectral reconstruction is reliable. This parameter, the point spread function (PSF) , can be the same over the area of analysis (from a minimum size up to the complete area of the multispectral image sensor) or can vary over the area of analysis,

- the estimated parameters of the PSF variation in the spectral channels of the MS provide a measure of how accurate the spectral representation in the first image IM1 could be.
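The geometrical relation between the two fields-of-view can be illustrated with a simple coaxial pinhole-model mapping (an assumption for illustration; the text does not prescribe a camera model). Using the example values FOV1 = 120 degrees and FOV2 = 45 degrees, a coordinate from IM1 either maps into IM2 or falls outside the overlap:

```python
import math

def map_im1_to_im2(coords_x, fov1_deg, fov2_deg, w1, w2):
    """Map coordinates along one axis of IM1 to IM2 pixel coordinates.

    Assumes both optics share the optical axis and a pinhole model.
    coords_x: IM1 pixel coordinates; w1, w2: sensor widths in pixels.
    Returns the matching IM2 coordinates, or None when a point lies
    outside FOV2 (no overlap without additional motion of the sensors).
    """
    f1 = (w1 / 2) / math.tan(math.radians(fov1_deg / 2))  # focal length, px
    f2 = (w2 / 2) / math.tan(math.radians(fov2_deg / 2))
    mapped = []
    for x in coords_x:
        theta = math.atan((x - w1 / 2) / f1)   # viewing angle of the pixel
        if abs(math.degrees(theta)) > fov2_deg / 2:
            return None                         # outside the second FOV
        mapped.append(w2 / 2 + f2 * math.tan(theta))
    return tuple(mapped)
```

In practice the "overlap" information of the imaging module would replace this idealised model, but the sketch shows why the overlap need not cover the full area of either sensor.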

As a further step, the spectral ROI selection can be visualized or highlighted as an overlap in the first image IM1.

Further steps may include spatial and/or spectral averaging or adjustment of parameters for multispectral data acquisition .

An option for averaging could include the following processing steps:

- evaluate a relation between the size of a ROI in the first image IM1 and the size of a specific corresponding spectral channel PSF from image IM2 at the corresponding coordinates, e.g. a size of the ROI in the first image IM1 in each coordinate could be at least three times larger than the selected PSF size in the corresponding area in the second image IM2 (a) ,

- then for ROIs passing said requirement (a) , apply binning or averaging of data in the second image,

- For ROIs not passing the requirement (a) , it is possible to exclude those from analysis or indicate a low spectral accuracy,

- For ROIs passing the requirement (a) and having a contour, it is possible to indicate a low spectral accuracy of the contour line,

- Optionally, repeat the evaluation procedure for another specific spectral channel PSF of image IM2 or for a number of specific spectral channels,

- Optionally, combine spectral channels (depending on the specific application requirement, if high spectral accuracy is not needed or if a particular spectral combination or spectral selectivity is needed) .
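The first averaging option above can be sketched as follows (the factor of three follows requirement (a); the function names, the classification labels and the plain mean are illustrative assumptions):

```python
def evaluate_roi(roi_size, psf_size, factor=3.0):
    """Classify a ROI against requirement (a): each ROI dimension must
    be at least `factor` times the corresponding channel PSF size."""
    if all(s >= factor * psf_size for s in roi_size):
        return "average"        # passes (a): bin/average data in IM2
    return "low_accuracy"       # fails (a): exclude or flag the ROI

def average_roi(samples):
    """Plain mean over the spectral samples inside a passing ROI,
    standing in for the binning/averaging step."""
    return sum(samples) / len(samples)
```

Repeating the evaluation per spectral channel, as the last optional steps suggest, would simply call `evaluate_roi` once per channel PSF.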

Another option for averaging could be as follows:

- Evaluate the relation between the size of a ROI from image IM1 and the size of the specific spectral channel PSF from image IM2 in the corresponding coordinates,

- The size of the ROI in each coordinate may be required to be at least three times larger than the selected channel PSF size in the corresponding area (a) ,

- Evaluate the intensity of spectral PSF to detect either overexposure, underexposure or a signal with low SNR (b) ,

- For ROIs passing the requirements (a) and (b) apply binning or averaging of data in the second image IM2,

- For ROIs passing the requirement (a) and not passing the requirement (b) , estimate a new integration time and/or gain reduction factor,

- For ROIs not passing the requirement (a) and passing the requirement (b) it is possible to exclude those from analysis or indicate a low spectral accuracy,

- For ROIs passing the requirement (a) and (b) and having a contour it is possible to indicate a low spectral accuracy of the contour line,

- Optionally, repeat the procedure above for another specific spectral channel PSF or for a number of specific spectral channels,

- Optionally, combine spectral channels (depending on the specific application requirement, if high spectral accuracy is not needed or if a particular spectral combination or spectral selectivity is needed) .
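The second option, combining the size requirement (a) with the exposure/SNR requirement (b), could be sketched like this (the saturation level, SNR threshold and return labels are hypothetical parameters chosen for illustration):

```python
def classify_roi(roi_size, psf_size, peak, noise, factor=3.0,
                 sat_level=1.0, min_snr=10.0):
    """Combine requirement (a) (ROI size vs PSF size) with requirement
    (b) (no over/underexposure, sufficient SNR) and return an action."""
    a = all(s >= factor * psf_size for s in roi_size)
    b = peak < sat_level and (noise == 0 or peak / noise >= min_snr)
    if a and b:
        return "average"            # bin or average data in IM2
    if a and not b:
        return "adjust_exposure"    # estimate new integration time/gain
    if b:
        return "low_accuracy"       # exclude or flag the ROI
    return "exclude"
```

The "adjust_exposure" branch corresponds to re-acquiring IM2 with new imaging parameters, as described in the next step.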

As a next step, verify whether it is necessary and possible to obtain an additional multispectral image IM2 with higher or lower spatial resolution or a different exposure time or gain. Estimate new imaging parameters: binning and integration time for all spectral channels or selected spectral channel(s).

The steps summarized above can be repeated for further spatial ROIs in the image IM1.

The next steps involve determining spectral data from the selected spectral ROIs of the second image IM2 and, as a consequence, using the determined spectral data to adjust a spectral representation of the first image IM1. The adjusted spectral representation may be a color or white balance for the corresponding ROI.
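A white-balance adjustment from the determined spectral data could be sketched as follows (a gray-world-style normalisation to the green channel; this specific rule is an illustrative assumption, not prescribed by the text):

```python
def white_balance_gains(roi_rgb):
    """Derive per-channel gains from a ROI's measured (R, G, B)
    response, normalising to the green channel."""
    r, g, b = roi_rgb
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply the gains to one (R, G, B) pixel of IM1 inside the ROI."""
    return tuple(p * k for p, k in zip(pixel, gains))
```

With more than three spectral channels, the determined spectrum would first be projected onto the RGB response of the main image sensor before deriving such gains.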

According to a second workflow, a spatially resolved first image IM1, e.g. a high resolution RGB/BW picture, of a scene is acquired using the main image sensor IS.

Then, a spectrally resolved second image IM2, e.g. a lower resolution multispectral image, of the same scene is acquired using the multispectral sensor MS.

As a next step, aberration corrections are conducted, if necessary for a specific multispectral image sensor MS, e.g. for all spectral channels or selected spectral channel(s). The "aberration corrections" could be similar to the steps discussed above in the context of the first workflow.

In a next step, spectral ROIs in the second image IM2 are defined, which correspond to ROIs in the first image IM1. Spatial ROIs and parameters are selected from the second image IM2. Selection methods could be based on:

- Coordinate/area definition from the second image IM2 based on specific application-defined single spectral channels, e.g. image analysis of intensity thresholds, gradient analysis, noise levels, etc.,

- Coordinate/area definition from the second image IM2 based on specific application-defined multiple spectral channels, e.g. cross-correlation analysis, spectral channel math, etc.
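The "spectral channel math" mentioned above could, for example, be a per-pixel normalised difference of two channels (an NDVI-like index; this particular formula is an illustrative assumption, not taken from the text):

```python
def channel_ratio_map(ch_a, ch_b, eps=1e-9):
    """Per-pixel normalised difference of two spectral channels,
    (a - b) / (a + b); high magnitudes mark candidate spectral ROIs.
    ch_a, ch_b: 2-D lists of per-pixel channel intensities."""
    return [[(a - b) / (a + b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(ch_a, ch_b)]
```

Thresholding such a map would then yield the coordinate/area definition from the second image IM2 described in this selection method.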

Note that aberration corrections and definition of ROIs can be switched in sequence, e.g. depending on specific system performance .

A further step may relate to an overlap, correlation or comparison procedure to define one or more spectral ROIs in the second image IM2 corresponding to the ROIs in the first image IM1. For example, a first image IM1, after overlap with results from a second image IM2, will show the areas with the strongest features derived from the multispectral analysis and specific to an application.

As discussed above, the fields-of-view of the two sensors as well as the scaling can be different. Thus, overlap may not be available for the complete area of one of the sensors without additional motion of the two sensors. Based on the results, regions-of-interest, ROIs, are defined in the first image IM1 and spectral ROIs are defined in the second image IM2 corresponding to the ROIs in the first image IM1. The selection allows for customer feedback and customer selection of regions of interest. Depending on the user input, the previous steps can be repeated. Optionally, the procedure allows switching to the first workflow depending on the customer's input.

While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Furthermore, as used herein, the term "comprising" does not exclude other elements. In addition, as used herein, the article "a" is intended to include one or more than one component or element, and is not to be construed as meaning only one.

This patent application claims the priority of German patent application 10 2022 121 896.1, the disclosure content of which is hereby incorporated by reference.

References

Ω1 first solid angle

Ω2 second solid angle

d1 (focal) distance

DOF1 first depth-of-field

DOF2 second depth-of-field

FOV1 first field-of-view

FOV2 second field-of-view

FP focal plane

IM imaging module

IS main image sensor

MS multispectral image sensor

O1 first optics

O2 second optics

PU processing unit