

Title:
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
Document Type and Number:
WIPO Patent Application WO/2023/078753
Kind Code:
A1
Abstract:
An electronic device comprising circuitry configured to perform blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

Inventors:
KENJO YUKINAO (DE)
Application Number:
PCT/EP2022/079911
Publication Date:
May 11, 2023
Filing Date:
October 26, 2022
Assignee:
SONY SEMICONDUCTOR SOLUTIONS CORP (JP)
SONY DEPTHSENSING SOLUTIONS SA/NV (BE)
International Classes:
G01S7/493; G01S17/36; G01S17/894; G06T5/00
Foreign References:
US20170322309A12017-11-09
US20210231812A12021-07-29
Attorney, Agent or Firm:
MFG PATENTANWÄLTE (DE)
Claims:
CLAIMS

1. An electronic device comprising circuitry configured to perform blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

2. The electronic device of claim 1, wherein the blending comprises a weighted selection of the best IQ signal from the IQ data obtained from spot noise reduction filtering or from the background spot detection filtering.

3. The electronic device of claim 1, wherein the blending comprises determining the third sparse IQ data based on the first sparse IQ data, the second sparse IQ data and a blend ratio.

4. The electronic device of claim 3, wherein the blend ratio depends on an NR-filtered amplitude.

5. The electronic device of claim 1, wherein the blending is performed based on the equation

Zout = (1 - r)ZDGS + rZbg

where Zout is the I or Q value of the third sparse IQ data obtained by the blending, r is the blend ratio, ZDGS is the I or Q value of the first sparse IQ data obtained by DGS filtering and Zbg is the I or Q value of the second sparse IQ data obtained by background spot detection filtering.

6. The electronic device of claim 1, wherein the circuitry is configured to perform dot finding on dense IQ data of the image to obtain dot positions.

7. The electronic device of claim 6, wherein the circuitry is configured to perform bilateral filtering on the dense IQ data based on the dot positions to obtain fourth sparse IQ data of the image.

8. The electronic device of claim 7, wherein the circuitry is configured to perform blending of the first sparse IQ data, with the second sparse IQ data, and with the fourth sparse IQ data to obtain the third sparse IQ data for determining sparse depth information with high completeness.

9. The electronic device of claim 8, wherein the blending comprises determining the third sparse IQ data based on the first sparse IQ data, the second sparse IQ data, the fourth sparse IQ data, a first blend ratio, and a second blend ratio.

10. The electronic device of claim 8, wherein the blending is performed based on the equation,

Zout = (1 - r1)ZDGS + r1{(1 - r2)Zbg2 + r2Zbg1}

where Zout is the I or Q value of the third sparse IQ data obtained by the blending, r1 is a first blend ratio, r2 is a second blend ratio, ZDGS is the I or Q value of the first sparse IQ data obtained by DGS filtering, Zbg1 is the I or Q value of the second sparse IQ data obtained by background spot detection filtering and Zbg2 is the I or Q value of the fourth sparse IQ data obtained by the bilateral filtering.

11. The electronic device of claim 1, wherein the circuitry is configured to perform the background spot detection filtering on dense IQ data of the image based on prefixed dot positions to obtain the second sparse IQ data.

12. The electronic device of claim 1, wherein the circuitry is configured to perform spot noise reduction filtering on dense IQ data of the image to obtain the first sparse IQ data.

13. The electronic device of claim 6, wherein the circuitry is configured to perform direct-global-separation, DGS, filtering on the dense IQ data based on the dot positions to obtain the first sparse IQ data.

14. The electronic device of claim 1, wherein the circuitry is further configured to perform IQ values determination on raw image data of the image captured by a Time-of-Flight image sensor to obtain dense IQ data of the image.

15. The electronic device of claim 1, wherein the circuitry is further configured to perform depth determination on the third sparse IQ data to obtain the sparse depth information for generating a depth map with high completeness.

16. The electronic device of claim 6, wherein the circuitry is configured to perform dot finding on the dense IQ data based on prefixed dot positions to obtain the dot positions.

17. The electronic device of claim 1, wherein the circuitry is further configured to perform sparse Gaussian filtering on dense IQ data of the image based on prefixed dot positions to obtain fifth sparse IQ data of the image.

18. The electronic device of claim 17, wherein the circuitry is further configured to perform IQ to amplitude conversion on the fifth sparse IQ data to obtain an amplitude value.

19. The electronic device of claim 18, wherein if the amplitude value is above a predefined threshold value, the circuitry is further configured to perform validation/invalidation on the fifth sparse IQ data to obtain the second sparse IQ data.

20. The electronic device of claim 18, wherein if the amplitude value is below a predefined threshold value, the circuitry is further configured to perform DGS target mask generation to obtain a DGS target mask.

21. The electronic device of claim 20, wherein the circuitry is further configured to perform DGS filtering on the dense IQ data based on DGS target mask to obtain the first sparse IQ data.

22. A method comprising performing blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

23. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 22.

Description:
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM

TECHNICAL FIELD

The present disclosure generally pertains to the field of Time-of-Flight imaging, and in particular to devices, methods, and computer programs for Time-of-Flight image processing.

TECHNICAL BACKGROUND

A Time-of-Flight (ToF) camera is a range imaging camera system that determines the distance of objects by measuring the time of flight of a light signal between the camera and the object for each point of the image. Generally, a Time-of-Flight camera has an illumination unit that illuminates a region of interest with modulated light, and a pixel array that collects light reflected from the same region of interest.

In indirect Time-of-Flight (iToF), three-dimensional (3D) images of a scene are captured by an iToF camera. Such an image is also commonly referred to as a "depth map" or "depth image", wherein each pixel of the iToF image is attributed with a respective depth measurement. The depth image can be determined directly from a phase image, which is the collection of all phase delays determined in the pixels of the iToF camera.

Although there exist techniques for determining depth images with an iToF camera, it is generally desirable to provide techniques which improve the determination of depth images with an iToF camera.

SUMMARY

According to a first aspect, the disclosure provides an electronic device comprising circuitry configured to perform blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

According to a second aspect, the disclosure provides a method comprising performing blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

According to a third aspect, the disclosure provides a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

Further aspects are set forth in the dependent claims, the following description, and the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are explained by way of example with respect to the accompanying drawings, in which:

Fig. 1 schematically shows the basic operational principle of an indirect Time-of-Flight imaging system, which can be used for depth sensing or providing a distance measurement;

Fig. 2 schematically shows a spot ToF imaging system which produces a spot pattern on a scene;

Fig. 3 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data to obtain a depth map with high completeness;

Fig. 4 schematically shows in more detail an embodiment of the spot noise reduction filtering performed in the process of blending sparse IQ data described in Fig. 3, wherein dot finding and direct-global-separation (DGS) filtering are performed;

Fig. 5 shows a flow diagram visualizing in more detail a local-maxima search as it may be used for dot finding of Fig. 4;

Fig. 6 schematically shows a direct-global-separation (DGS) algorithm applied to pixels of a spot pixel region;

Fig. 7 schematically shows in more detail an embodiment of background spot detection filtering performed in the process of blending sparse IQ data described in Fig. 3;

Fig. 8 schematically shows a diagram of an embodiment of the process of blending described in Fig. 3, wherein the sparse IQ values obtained by the BSDF and the sparse IQ values obtained by the spot noise reduction filtering are blended with a blend ratio 0.6 to obtain blended sparse IQ values;

Fig. 9 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data to obtain a depth map with high completeness, wherein the spot noise reduction filtering is performed based on prefixed dot positions stored in a database;

Fig. 10 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data to obtain a depth map with high completeness, wherein the background spot detection filtering is performed by performing sparse Gaussian filtering on dense IQ values based on prefixed dot positions stored in a database;

Fig. 11 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data and with sparse IQ data from bilateral filtering of raw image data to obtain a depth map with high completeness, wherein the bilateral filtering is performed based on dot positions obtained by dot finding;

Fig. 12 schematically shows two diagrams of an embodiment of the process of blending described in Fig. 11, wherein the sparse IQ values obtained by the BSDF, the sparse IQ values obtained by the spot noise reduction filtering and the sparse IQ values obtained by the bilateral filtering are blended with two blend ratios r1, r2 to obtain blended sparse IQ values;

Fig. 13 schematically shows a diagram of an embodiment of the process of blending described in Fig. 11, wherein the sparse IQ values obtained by the BSDF, the sparse IQ values obtained by the spot noise reduction filtering and the sparse IQ values obtained by the bilateral filtering are blended with a blend ratio to obtain blended sparse IQ values;

Fig. 14 shows a flow diagram visualizing a method for blending sparse IQ data to obtain a depth map with high completeness;

Fig. 15 illustrates two depth maps generated by a spot ToF device capturing a scene; and

Fig. 16 schematically describes an embodiment of an iToF device that can implement the processes for long depth detection range measurement of a spot or a pixel in an iToF system.

DETAILED DESCRIPTION OF EMBODIMENTS

Before a detailed description of the embodiments under reference of Fig. 1 to Fig. 16, general explanations are made.

The embodiments described below in more detail disclose an electronic device comprising circuitry configured to perform blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

Circuitry may include a processor, a memory (RAM, ROM or the like), a DNN unit, a storage, input means (mouse, keyboard, camera, etc.), output means (display (e.g. liquid crystal, (organic) light emitting diode, etc.), loudspeakers, etc.), a (wireless) interface, etc., as is generally known for electronic devices (computers, smartphones, etc.).

IQ data may represent raw data of an image captured by an image sensor. The image sensor may be for example an indirect time-of-flight (iToF) sensor. IQ data may include IQ values of a pixel in the pixel domain and IQ values of a spot (i.e. spot region) in the spot domain.

The spot noise reduction filtering and the background spot detection filtering (BSDF) may be implemented as two parallel pipelines, or the like. The spot noise reduction filtering may be a noise reduction (NR) filter applied to an image to obtain a better SNR in dark areas. For example, the multipath problem of a spot ToF camera system may be mitigated with the spot noise reduction filter. However, low-intensity areas may remain difficult due to pseudo maximum-intensity pixels generated by shot noise. This may be addressed by applying an invalidation filter in order to avoid inaccurate and noisy depth generation in the sparse depth map. Still, obtaining high completeness for the spot depth image may be important: a sparse depth image based on spots may need the same number of spots as the illuminator projects in order to achieve high completeness.

Blending of first sparse IQ data with second sparse IQ data may be any process that mixes the data, e.g. in a weighted manner. For example, blending of data may be performed using, as a weight, a blend ratio, or the like, without limiting the present disclosure in that regard. In a case where the blend ratio is 0 or 1, the blending may be performed by selecting the first sparse IQ data or the second sparse IQ data. In this manner, false negative detections of spots may be reduced.

Depth information with high completeness may be used to generate a depth map with high completeness. A depth map with high completeness may be a complete depth map wherein, for each dot (found or prefixed), the sparse IQ information (e.g. sparse IQ data/values) having the lowest noise is used. For example, a depth map with high completeness may be a sparse depth map with almost 100% completeness.

According to some embodiments, the blending may comprise a weighted selection of the best IQ signal from the IQ data obtained from spot noise reduction filtering or from the background spot detection filtering. For example, if the weight of the weighted selection, e.g. the blend ratio, is 0 or 1, the blending may be performed by selecting the first sparse IQ data or the second sparse IQ data. The best IQ signal may be the IQ showing the lowest SNR for each dot.

According to some embodiments, the blending may comprise determining the third sparse IQ data based on the first sparse IQ data, the second sparse IQ data and a blend ratio.

According to some embodiments, the blend ratio may depend on an NR-filtered amplitude, without limiting the present disclosure in that regard. Alternatively, an edge detection result may be considered. For example, if there is an edge near the dot, the DGS result should not be used, since the accuracy may be worse at the edge; it may be better to apply a bilateral filter for this dot.

According to some embodiments, the blending may be performed based on the equation

Zout = (1 - r)ZDGS + rZbg

where Zout is the I or Q value of the third sparse IQ data obtained by the blending (309), r is the blend ratio, ZDGS is the I or Q value of the first sparse IQ data obtained by DGS filtering (402) and Zbg is the I or Q value of the second sparse IQ data obtained by background spot detection filtering (305).
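The linear blend above can be sketched in a few lines of Python; the function name and the sample values are illustrative, not from the source:

```python
def blend_iq(z_dgs, z_bg, r):
    """Blend one I or Q value from spot noise reduction (DGS) filtering
    with the corresponding value from background spot detection filtering.

    r = 0 selects the DGS value, r = 1 selects the background value;
    intermediate ratios mix the two linearly.
    """
    return (1.0 - r) * z_dgs + r * z_bg

# A spot's I and Q values are blended independently with the same ratio.
i_out = blend_iq(0.8, 0.4, 0.25)   # 0.75*0.8 + 0.25*0.4 = 0.7
q_out = blend_iq(-0.2, 0.1, 0.25)
```

With r equal to 0 or 1 this reduces to selecting one of the two inputs, consistent with the selection behaviour described for the blend ratio.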

According to some embodiments, the circuitry may be configured to perform dot finding on dense IQ data of the image to obtain dot positions.

According to some embodiments, the circuitry may be configured to perform bilateral filtering on the dense IQ data based on the dot positions to obtain fourth sparse IQ data of the image.

According to some embodiments, the circuitry may be configured to perform blending of the first sparse IQ data, with the second sparse IQ data, and with the fourth sparse IQ data to obtain the third sparse IQ data for determining sparse depth information with high completeness.

According to some embodiments, the blending may comprise determining the third sparse IQ data based on the first sparse IQ data, the second sparse IQ data, the fourth sparse IQ data, a first blend ratio, and a second blend ratio.

According to some embodiments, the blending may be performed based on the equation,

Zout = (1 - r1)ZDGS + r1{(1 - r2)Zbg2 + r2Zbg1}

where Zout is the I or Q value of the third sparse IQ data obtained by the blending (1102), r1 is a first blend ratio, r2 is a second blend ratio, ZDGS is the I or Q value of the first sparse IQ data obtained by DGS filtering (402), Zbg1 is the I or Q value of the second sparse IQ data obtained by background spot detection filtering (305) and Zbg2 is the I or Q value of the fourth sparse IQ data obtained by the bilateral filtering (1100).
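The two-ratio blend above is a nested linear interpolation, which can be sketched as follows (function and argument names are illustrative, not from the source):

```python
def blend_iq3(z_dgs, z_bg1, z_bg2, r1, r2):
    """Two-stage blend of one I or Q value: first the bilateral-filtered
    value (z_bg2) is blended with the BSDF value (z_bg1) using r2, then
    the result is blended with the DGS-filtered value (z_dgs) using r1."""
    inner = (1.0 - r2) * z_bg2 + r2 * z_bg1   # inner blend of the two background paths
    return (1.0 - r1) * z_dgs + r1 * inner    # outer blend against the DGS value
```

Setting r1 = 0 selects the DGS value; r1 = 1 with r2 = 1 selects the BSDF value; r1 = 1 with r2 = 0 selects the bilateral-filtered value.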

According to some embodiments, the circuitry may be configured to perform the background spot detection filtering on dense IQ data of the image based on prefixed dot positions to obtain the sec- ond sparse IQ data.

According to some embodiments, the circuitry may be configured to perform spot noise reduction filtering on dense IQ data of the image to obtain the first sparse IQ data.

According to some embodiments, the circuitry may be configured to perform direct-global-separation, DGS, filtering on the dense IQ data based on the dot positions to obtain the first sparse IQ data.

According to some embodiments, the circuitry may be further configured to perform IQ values determination on raw image data of the image captured by a Time-of-Flight image sensor to obtain dense IQ data of the image.

According to some embodiments, the circuitry may be further configured to perform depth determination on the third sparse IQ data to obtain the sparse depth information for generating a depth map with high completeness.

According to some embodiments, the circuitry may be configured to perform dot finding on the dense IQ data based on prefixed dot positions to obtain the dot positions. A dot is the center of a spot and, in a case where dots of the spots are not found in the image, no depth may be derived for the spots that were not found, and thus, incomplete depth maps may be generated.

According to some embodiments, the circuitry may be further configured to perform sparse Gaussian filtering on dense IQ data of the image based on prefixed dot positions to obtain fifth sparse IQ data of the image.

According to some embodiments, the circuitry may be further configured to perform IQ to amplitude conversion on the fifth sparse IQ data to obtain an amplitude value.

According to some embodiments, if the amplitude value is above a predefined threshold value, the circuitry may be further configured to perform validation/invalidation on the fifth sparse IQ data to obtain the second sparse IQ data. For example, invalidating sparse IQ data, i.e. data in the spot domain, related to a spot may mean that all IQ values (included in the IQ data) of the pixels (image sensor pixel domain) within the spot pixel region (i.e. within the spot) are set to zero, to a predetermined value, or to the value of a neighboring spot. Invalidating sparse IQ data related to a spot may also mean that the IQ value of the pixel related to the spot in the spot domain is set to zero, to a predetermined value, or to the value of a neighboring pixel in the spot domain.
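The IQ-to-amplitude conversion followed by threshold-based validation/invalidation can be sketched as below; zeroing the IQ values is just one of the invalidation policies named in the text, and the function name and threshold are illustrative:

```python
import math

def validate_spot(i, q, amp_threshold):
    """Convert a spot's IQ pair to an amplitude and validate it: a spot
    whose amplitude falls below the threshold has its IQ values
    overwritten with zero (one possible invalidation policy)."""
    amplitude = math.hypot(i, q)   # amplitude = sqrt(I^2 + Q^2)
    if amplitude >= amp_threshold:
        return (i, q)              # valid: keep the sparse IQ values
    return (0.0, 0.0)              # invalid: set IQ values to zero
```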

According to some embodiments, if the amplitude value is below a predefined threshold value, the circuitry may be further configured to perform DGS target mask generation to obtain a DGS target mask.

According to some embodiments, the circuitry may be further configured to perform DGS filtering on the dense IQ data based on the DGS target mask to obtain the first sparse IQ data.

The embodiments described below in more detail disclose a method comprising performing blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

Such a method may show better performance than existing methods in terms of SNR under a large range of light conditions and may be utilized in mobile applications. For some applications, it may be beneficial to have depth maps where all spots (with depth information) are kept.

With the above described method, when a spot is not identified with the dot finding function, it may be assumed that the kernel signal measured at that spot location (known from the background spot overlay) is similar to a Full Field noisy value for which depth can be determined. Also, when a spot is not identified, it may be assumed that this is because the scene is too far or has too low reflectivity.

The embodiments described below in more detail disclose a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform blending of first sparse IQ data obtained from spot noise reduction filtering of an image with second sparse IQ data obtained from background spot detection filtering of the image to obtain third sparse IQ data for determining sparse depth information with high completeness.

Operational principle of an indirect Time-of-Flight imaging system (iToF)

Fig. 1 schematically shows the operational principle of an indirect Time-of-Flight imaging system, which can be used for depth sensing or providing a distance measurement. The iToF imaging system 101 includes an iToF camera, for instance the imaging sensor 102 and a processor (CPU) 105. The scene 107 is actively illuminated with amplitude-modulated infrared light LMS at a predetermined wavelength using the illumination unit 110, for instance with some light pulses of at least one predetermined modulation frequency generated by a timing generator 106. The amplitude-modulated infrared light LMS is reflected from objects within the scene 107. A lens 103 collects the reflected light RL and forms an image of the objects onto the imaging sensor 102, having a matrix of pixels, of the iToF camera. In indirect Time-of-Flight (iToF), the CPU 105 correlates the reflected light RL with the demodulation signal DML, which yields an in-phase component value ("I value") and a quadrature component value ("Q value") for each pixel, so called I and Q values (see Fig. 5). Based on the I and Q values for each pixel, a phase delay value may be calculated for each pixel, which yields a phase image. Based on the phase image, a depth value may be determined for each pixel, which yields the depth image. Still further, based on the I and Q values, an amplitude value and a confidence value may be determined for each pixel, which yield the amplitude image and the confidence image.
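The per-pixel derivation of phase, amplitude and depth from the I and Q values can be sketched with the standard single-frequency iToF relations; this is a simplified sketch (real pipelines add calibration and phase unwrapping), and the function name and example modulation frequency are illustrative:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pixel_phase_amp_depth(i, q, f_mod):
    """Derive phase delay, amplitude and depth for one iToF pixel from
    its I and Q values at modulation frequency f_mod (Hz)."""
    phase = math.atan2(q, i) % (2.0 * math.pi)   # phase delay in [0, 2*pi)
    amplitude = math.hypot(i, q)                 # sqrt(I^2 + Q^2)
    depth = C * phase / (4.0 * math.pi * f_mod)  # factor 4*pi halves the round trip
    return phase, amplitude, depth

# E.g. a pure-quadrature pixel (I = 0, Q = 1) at 100 MHz modulation:
p, a, d = pixel_phase_amp_depth(0.0, 1.0, 100e6)
```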

In a full field iToF system, a phase delay value and a depth value may be determined for each pixel of the image sensor 102. In a spot ToF system (see Fig. 2), a scene may be illuminated with spots by a spot illuminator, and a phase value and a depth value may only be determined for (a subset of) the pixels of the image sensor 102 which capture the reflected spots from the scene.

Spot Time-of-Flight imaging (spot ToF)

Fig. 2 schematically shows a spot ToF imaging system which produces a spot pattern on a scene. The spot ToF imaging system comprises a spot illuminator 110, which produces a pattern 202 of spots 201 on a scene 107 comprising objects 203 and 204. An iToF camera 102 captures an image (e.g. raw image data) of the spot pattern on the scene 107. The pattern 202 of light spots 201 projected onto the scene 107 by illumination unit 110 results in a corresponding pattern of light spots in the amplitude image and depth image captured by the pixels of the image sensor (102 in Fig. 1) of iToF camera 102. The light spots will appear in the amplitude image produced by iToF camera 102 as a spatial light pattern including high-intensity areas 201 (the light spots) and low-intensity areas 202. The spot illuminator 110 and the camera 102 are a distance B apart from each other. This distance B is called the baseline. The scene 107 is at a distance d. However, every object 203, 204 or object point within the scene 107 may have an individual distance d from baseline B. The depth image of the scene captured by ToF camera 102 defines a depth value for each pixel of the depth image and thus provides depth information of scene 107 and objects 203, 204.

Typically, the pattern of light spots projected onto the scene 107 may result in a corresponding pattern of light spots captured on the pixels of the image sensor 102. In other words, spot pixel regions may be present among the plurality of pixels (and thus in the pixel values included in the obtained image data) and valley pixel regions may be present among the plurality of pixels (and thus in the pixel values included in the obtained image data). The spot pixel regions (i.e. the pixel values of pixels included in the spot pixel regions) may include signal contributions from the light reflected from the scene 107 but also from background light and multi-path interference. The valley pixel region (i.e. the pixel values of pixels outside the spot pixel regions that are included in the valley pixel region) may include signal contributions from background light and from multi-path interference. Therefore, the CPU may apply a direct-global-separation (DGS) algorithm to the I and Q values of each spot, i.e. the pixels inside a spot pixel region, in order to reduce noise, for example from background light and from multi-path interference (see Fig. 6).

Fig. 3 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data to obtain a depth map with high completeness.

Raw data 300, e.g. raw image data, received from an image sensor (see 102 in Fig. 1) is input to an IQ values determination 301 to determine I and Q values. The determined I and Q values are dense IQ values 302 and are determined for each pixel based on the received raw data 300. A spot noise reduction filtering 303 is performed on the dense IQ values 302 to obtain sparse IQ values, e.g. sparse IQ data ZDGS. A background spot detection filtering 305 is performed on the dense IQ values 302 based on prefixed dot positions 307 stored in a database 306 to obtain sparse IQ values Zbg. Blending 309 of the sparse IQ values ZDGS and the sparse IQ values Zbg is performed to obtain sparse IQ values Zout. A depth determination 311 is performed on the sparse IQ values Zout to obtain sparse depth 312, i.e. sparse depth information.
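The data flow after the two parallel filters can be illustrated with a toy orchestration: blend the two sparse IQ dictionaries per dot, then run depth determination on the result. All names here are illustrative sketches, not from the source:

```python
def blend_pipeline(z_dgs, z_bg, r, depth_fn):
    """Blend per-dot sparse IQ pairs from the spot noise reduction path
    (z_dgs) and the BSDF path (z_bg) with ratio r, then apply a depth
    determination function to each blended IQ pair."""
    z_out = {}
    for dot, (i_bg, q_bg) in z_bg.items():
        i_dgs, q_dgs = z_dgs.get(dot, (0.0, 0.0))  # missing dots fall back to zero IQ
        z_out[dot] = ((1 - r) * i_dgs + r * i_bg,
                      (1 - r) * q_dgs + r * q_bg)
    # Depth determination on the blended (third) sparse IQ data.
    return {dot: depth_fn(i, q) for dot, (i, q) in z_out.items()}
```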

The raw data 300 is obtained in the pixel domain of the image sensor capturing an image of a scene. To each pixel of the raw data 300 of the image sensor corresponds one set of dense IQ values, for example an I value and a respective Q value of the pixel.

In the pixel domain, a spot is defined by a spot pixel region which includes a plurality of pixels related to the spot. The process of Fig. 3 maps the dense IQ data from the pixel domain of the image sensor to the spot domain. In the spot domain, each set of sparse IQ values describes a respective spot.
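As a toy illustration of this pixel-domain to spot-domain mapping, the sketch below aggregates dense IQ values into one sparse IQ pair per spot by simple averaging; averaging is an assumption for illustration only, as the actual mapping uses the filtering steps described in the text:

```python
def pixel_to_spot_iq(dense_iq, spot_regions):
    """Map dense per-pixel IQ values to one sparse IQ pair per spot by
    averaging over each spot pixel region.

    dense_iq:     {(row, col): (I, Q)} in the pixel domain
    spot_regions: {spot_id: [pixel, ...]} listing each spot's pixel region
    """
    sparse = {}
    for spot_id, pixels in spot_regions.items():
        n = len(pixels)
        sparse[spot_id] = (sum(dense_iq[p][0] for p in pixels) / n,
                           sum(dense_iq[p][1] for p in pixels) / n)
    return sparse
```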

After blending 309, the sparse IQ values Zout corresponding to the entire captured image are converted by the depth determination 311 to sparse depth information 312, for generating a depth map with high completeness.

In the embodiment of Fig. 3, the prefixed dot positions 307 are input from the database 306 to background spot detection filtering, such that background spot detection filtering 305 is performed on the dense IQ values 302 based on prefixed dot positions 307. In this manner, a mask image is created to limit the local-maxima search performed by the spot noise reduction filtering 303. The pixels, i.e. dots, that are not found by background spot detection filtering 305 are searched and found by spot noise reduction filtering 303, and in particular by dot finding, to obtain the sparse IQ data ZDGS.

Spot noise reduction filtering 303 is described in more detail in Figs. 4, 5 and 6. For the spot noise reduction filtering 303, for example a DGS algorithm may be used. DGS filtering can be performed based on dot finding, wherein each dot position obtained by the dot finding represents the center of a spot, i.e. the center of a spot pixel region including a plurality of pixels.

The background spot detection filtering 305 is described in more detail in Fig. 7 and the blending is described in more detail in Fig. 8.

For noisy illumination or background scene position, e.g. in a dark area due to noise, the dot position may not be found. However, in such situations, according to the embodiment of Fig. 3 this is compensated by using the background spot detection filtering (BSDF) to identify IQ information, i.e. IQ values, for all prefixed spot positions 307.

Blending 309 may for example be realized as described in Fig. 8 and the corresponding description below. Blending 309 selects, for depth calculation and for each dot (potentially in a "weighted" manner), the IQ values (i.e. the IQ data) having the best IQ signal, e.g. IQ values related to the dots detected by spot noise reduction filtering 303 or IQ values related to the dots with prefixed positions obtained from background spot detection filtering. The IQ values having the best IQ signal may be for example the IQ values showing the lowest SNR for each dot. With the above described blending process, false negative detections of spots may be reduced and a depth map with high completeness may be obtained.

It should be further noted that, in the embodiment of Fig. 3, the spot noise reduction filtering 303 and the background spot detection filtering 305 are performed in parallel on the raw IQ image to convert the dense IQ of the raw IQ to sparse IQ, without limiting the present embodiment in that regard.

Still further, the prefixed dot positions stored in the database may be any number of prefixed dot positions, e.g. 5000.

Fig. 4 schematically shows in more detail an embodiment of the spot noise reduction filtering performed in the process of blending sparse IQ data described in Fig. 3, wherein dot finding and direct-global-separation (DGS) filtering are performed.

Dense IQ values 302 are input to spot noise reduction filtering 303. The dense IQ values 302 are input to dot finding 400 to obtain dot positions 401. DGS filtering 402 is performed on the dense IQ values 302 based on the dot positions 401 to obtain sparse IQ values Z_DGS. In this way, when an accurate pixel position (dot position 401) for a spot is determined, DGS filtering 402 can be applied, and direct depth can be obtained.

Dot finding 400 may for example be implemented as described in more detail below with regard to Fig. 5. Each of the dot positions 401 obtained by the dot finding 400 represents the center of a spot. DGS filtering 402 may for example be implemented as described in more detail below with regard to Fig. 6. The DGS filtering is applied to the I and Q values of each spot, i.e. the pixels inside a spot pixel region. This may reduce noise, for example from background light and from multi-path interference. Based on the dot positions 401 and the dense IQ data 302, DGS filtering 402 provides, for the respective spot intensity distributions on the image sensor, the spot positions at which the illumination is most focused.

It should be further noted that in a case where actual spot positions cannot be determined during dot finding 400, e.g. in a dark area due to noise, and the DGS filtering 402 therefore does not allow accurate depth measurements to be obtained in this area, DGS filtering 402 may be applied only to the spots having sufficient SNR.

Dot finding may for example be performed by a local-maxima search using a 7x7 kernel, scanning the whole image area while shifting the kernel pixel-wise. By performing this filtering, the dot positions having a strong signal, i.e. high intensity, are detected. A dot position as determined by dot finding 400 represents the position of a spot.

Fig. 5 shows a flow diagram visualizing in more detail a local-maxima search as it may be used for dot finding of Fig. 4. As described with regard to Fig. 4 above, the dense IQ values 302 are input to dot finding 400 to obtain dot positions 401.

At 500, a target pixel within the IQ data (IQ image data) is set as center pixel of the kernel, e.g. of a 7x7 kernel. The position of the target pixel is e.g. (x = 3, y = 3). At 501, the intensity, i.e. the amplitude, of the neighboring pixels within the kernel is checked. The positions of the neighboring pixels may be (x = 0, y = 0), ..., (x = 6, y = 0), (x = 0, y = 1), ..., (x = 6, y = 1), ..., (x = 6, y = 6). At 502, if the amplitude of the center pixel (the target pixel) is the maximum amplitude within the kernel, the method proceeds at 503. At 503, the target pixel is defined as a dot position 401. If the amplitude of the center pixel (the target pixel) is not the maximum amplitude within the kernel, the method proceeds at 504. At 504, the next pixel within the IQ data (IQ image data) becomes the next target pixel, which is set as center pixel of the kernel. The above method steps are repeated until all pixels of the image area (IQ image data) have been scanned.

The kernel is then shifted by one pixel (pixel-wise), that is, to the next pixel within the IQ data, namely the pixel next to the center/target pixel described above, and steps 501 to 503 are performed again. Here the position of the target pixel is e.g. (x = 4, y = 3). The positions of the neighboring pixels may be (x = 1, y = 0), ..., (x = 7, y = 0), (x = 1, y = 1), ..., (x = 7, y = 1), ..., (x = 7, y = 6). If the amplitude of the center pixel is the maximum, the position of the center pixel (x = 4, y = 3) is set as dot position 401. The above method steps are repeated until all pixels of the image area (IQ image data) have been scanned.
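The pixel-wise local-maxima scan described above can be sketched as follows. This is a minimal illustration assuming a NumPy amplitude image; the function name `find_dots` and the tie-breaking rule (the center must be the unique maximum within the kernel) are illustrative choices, not taken from the disclosure.

```python
import numpy as np

def find_dots(amplitude, kernel=7):
    """Scan a kernel window over the amplitude image and keep every pixel
    that is the unique maximum of its neighborhood (a dot position)."""
    half = kernel // 2
    dots = []
    h, w = amplitude.shape
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = amplitude[y - half:y + half + 1, x - half:x + half + 1]
            # the center pixel must hold the maximum amplitude in the kernel;
            # requiring a unique maximum rejects flat (all-zero) regions
            if amplitude[y, x] >= window.max() and (window == amplitude[y, x]).sum() == 1:
                dots.append((x, y))
    return dots

# toy amplitude image with two bright spots
img = np.zeros((16, 16))
img[5, 5] = 10.0
img[10, 12] = 8.0
print(find_dots(img))  # [(5, 5), (12, 10)]
```

The default kernel size of 7 matches the 7x7 kernel of the described local-maxima search; a real implementation would also handle the image border.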

It should be noted that the pixels, i.e. dots, that are not found by background spot detection filtering 305 are searched and found by spot noise reduction filtering 303, and in particular by dot finding based on the above described method of Fig. 5, to obtain the sparse IQ data Z_DGS.

It should be noted that in a case where a spot is not identified by the dot finding 400, it may be assumed that the kernel signal measured at that spot location, which is known from the background spot overlay, is similar to a Full Field noisy value for which depth can be determined. Also, in a case where a spot is not identified, it may be assumed that this is because the scene is too far away or has too low reflectivity. In such a case, there may be no distribution with a peak (e.g. a Gaussian-like distribution) in the 7x7 kernel, and noisy values may be obtained as when a normal illuminator (e.g. a Full Field camera and not a Spot ToF camera) is used. The noise reduction filtering may be applied on the pixel area to get meaningful information from the low intensity pixel area.

Fig. 6 schematically shows a direct-global-separation (DGS) algorithm applied to pixels of a spot pixel region in raw image data. Generally, the I-Q values of a pixel of the raw image (dense data) may be displayed in a coordinate system having the in-phase component I on the horizontal axis and the quadrature component Q on the vertical axis. Each pixel value of a spot pixel region IQ 600 (including the spot peak pixel) has a different phase, which is given by the angle of the arrow from the origin of coordinates to the respective IQ value, even though the pixels belong to the same spot pixel region. A pixel 601 that is in the vicinity of the spot pixel region 600 but still outside of it, that is, a valley pixel region value 601, is also displayed in the coordinate system. Vicinity may mean that the number of pixels between the valley pixel and the spot pixel region is equal to or smaller than the number of pixels between the valley pixel and any other spot pixel region. Here, the amplitude of the valley pixel region IQ value 601 (phase amplitude value), which is given by the length of the arrow from the origin of coordinates to the respective IQ value, may stem from background noise or multipath interference.

The phase of the spot pixel region IQ values 600 may be corrected (or its accuracy may be improved) by subtracting the valley pixel region IQ value 601 from the spot pixel region IQ values 600, whereby corrected spot pixel region IQ values 602 are obtained. The pixels inside the corrected spot pixel region 602 may then have the same phase. Because the spot peak pixel is included in the spot pixel region, by applying the DGS to the I and Q values of each spot pixel region, corrected I and Q values for the spot peak pixel of each spot are also obtained. It should be noted that the root cause of multipath interference is the blending of reflection times from multiple paths. It can also be explained as the blending of two points on the IQ plane.

In the embodiment of Fig. 6, the spot pixel region IQ values 600 represent the measured points, the corrected spot pixel region IQ values 602 represent the ground-truth direct component of reflection, and the valley pixel region IQ value 601 represents the ground-truth indirect component of reflection. The mixing of multiple points on the IQ plane may be calculated by the addition of vectors, and the spot pixel region IQ values 600 may be the observed result of the addition of the valley pixel region IQ value 601 and the corrected spot pixel region IQ values 602. Since approximations of the spot pixel region IQ values 600 and the valley pixel region IQ value 601 can be observed, the corrected spot pixel region IQ values 602 may be recovered from them approximately.
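The subtraction of the valley (indirect) IQ value from the spot pixel region can be sketched on the IQ plane by treating each IQ pair as a complex number I + jQ; subtracting the valley vector leaves the direct component, whose pixels then share one phase. The function name and toy values below are illustrative assumptions.

```python
import numpy as np

def dgs_correct(spot_iq, valley_iq):
    """Direct-global separation sketch: subtract the valley (indirect) IQ
    vector from every IQ value of the spot pixel region, leaving the direct
    component with a common phase across the region's pixels."""
    spot = np.asarray(spot_iq, dtype=complex)   # each pixel as I + jQ
    return spot - complex(*valley_iq)

# toy spot: direct component (all at 45 degrees) plus one indirect offset
direct = np.array([3 + 3j, 2 + 2j, 1 + 1j])
valley = (0.5, -0.2)                            # measured indirect IQ value
measured = direct + complex(*valley)
corrected = dgs_correct(measured, valley)
phases = np.angle(corrected)
print(np.allclose(phases, np.pi / 4))  # True: one common phase after correction
```

The example mirrors the vector-addition view above: the measured points are the sum of the direct points and the valley vector, and the subtraction recovers the direct points.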

In the embodiment of Fig. 6, the DGS filtering 402 may allow direct IQ information of the scene to be obtained when the spot is detected fast. Fast spot detection may be realized by local maximum search. For example, the local maximum search is carried out by searching a 7 px x 7 px kernel, wherein the search scans the raw IQ image data while shifting the kernel by one pixel at a time over the whole image area.

Fig. 7 schematically shows in more detail an embodiment of background spot detection filtering performed in the process described in Fig. 3. Background spot detection filtering, BSDF, (see 305 in Fig. 3) is applied to prefixed dot positions in the image, as described in Fig. 3. In this embodiment, BSDF is performed by applying a filter kernel w_i,j to the dense IQ data Z_j (see 302 in Fig. 3) in the pixel domain of the raw image to obtain sparse IQ data (see Z_bg in Fig. 3) in the spot domain:

Z_bg,i = ( Σ_{j ∈ Ω_i} w_i,j · Z_j ) / ( Σ_{j ∈ Ω_i} w_i,j ), with w_i,j = exp(−|p_i − p_j|² / (2σ_s²))

where p_i and p_j are the pixel positions, Ω_i is the radius of the filter kernel, and σ_s is the space sigma.

In the embodiment of Fig. 7, the space sigma σ_s is the parameter which tunes the blending strength of the neighboring IQ values. If a large value is used for the space sigma σ_s, the filter kernel may behave like a box filter.
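A sketch of such a spatial Gaussian average evaluated at one prefixed dot position is shown below; the normalized-Gaussian kernel form and all names are assumptions for illustration, consistent with the description that a large space sigma approaches a box filter.

```python
import numpy as np

def bsdf_at(dense_iq, cx, cy, radius=3, sigma_s=2.0):
    """Background spot detection sketch: normalized Gaussian average of the
    dense IQ values inside a kernel of the given radius around a prefixed
    dot position (cx, cy). A large sigma_s makes the weights near-uniform,
    i.e. the kernel behaves like a box filter."""
    acc, norm = 0.0 + 0.0j, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            acc += w * dense_iq[cy + dy, cx + dx]
            norm += w
    return acc / norm

# on a uniform IQ patch the filtered value equals the patch value
dense = np.full((9, 9), 2.0 + 1.0j)
print(np.isclose(bsdf_at(dense, 4, 4), 2 + 1j))  # True
```

In a full pipeline this function would be evaluated once per prefixed dot position to produce the sparse IQ values Z_bg.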

Fig. 8 schematically shows a diagram of an embodiment of the process of blending described in Fig. 3, wherein the sparse IQ values obtained by the BSDF and the sparse IQ values obtained by the spot noise reduction filtering are blended with a blend ratio of 0.6 to obtain blended sparse IQ values. The abscissa displays the noise reduction filtered amplitude and the ordinate the blend ratio during blending. The upper horizontal dashed line represents the maximum value of the blend ratio, here a blend ratio equal to 1, the lower horizontal dashed line represents the actual value of the blend ratio, here 0.6, and the vertical dashed lines represent the two thresholds of the noise reduction filtered amplitude, namely thresh_0 and thresh_1.

Here, for noise reduction filtered amplitude values between 0.0 and thresh_0, the blending (see 309 in Fig. 3) outputs, as sparse IQ values (Z_out in Fig. 3), the sparse IQ values (Z_bg in Fig. 3) obtained by the background spot detection filtering (305 in Fig. 3). Between the thresholds of the noise reduction filtered amplitude, thresh_0 and thresh_1, the blending (309 in Fig. 3) outputs, as sparse IQ values (see Z_out in Fig. 3), the sparse IQ values obtained by blending, with a corresponding blend ratio, the sparse IQ values obtained by the background spot detection filtering (see 305 in Fig. 3) with the sparse IQ values obtained by the spot noise reduction filtering (see 303 in Fig. 3). Above threshold thresh_1, the blending (see 309 in Fig. 3) outputs, as sparse IQ values (see Z_out in Fig. 3), the sparse IQ values obtained by spot noise reduction filtering (see 303 in Fig. 3), and in particular the sparse IQ values (Z_DGS in Fig. 3) obtained by the DGS filtering (see 402 in Fig. 4).

In the embodiment of Fig. 8, as process of blending, alpha blending is performed between the DGS sparse IQ values and the background sparse IQ values, using the amplitude as a guide, namely

r = blend ratio = min(1, max(0, (thresh_1 − Â) / (thresh_1 − thresh_0)))

where Â is the filtered amplitude.

The filtered amplitude is given by

Â = sqrt(I_i² + Q_i²)

where I_i is the in-phase component value ("I value") and Q_i is the quadrature component value ("Q value") for the spot region, i.e. dot position.

The alpha blending is given by

Z_out = (1 − r)·Z_DGS + r·Z_bg where Z_out is the output I or Q value obtained by the blending, r is the blend ratio, Z_DGS is the output I or Q value obtained by the DGS filtering performed during the spot noise reduction filtering and Z_bg is the output I or Q value obtained by the background spot detection filtering.

In the present embodiment, the blend ratio is r = 0.6, and thus, the alpha blending is given by

Z_out = (1 − 0.6)·Z_DGS + 0.6·Z_bg

In this manner, 60% of the sparse IQ values (see Z_bg in Fig. 3) obtained by the background spot detection filtering (see 305 in Fig. 3) are blended with 40% of the sparse IQ values (see Z_DGS in Fig. 3) obtained by the spot noise reduction filtering (see 303 in Fig. 3), and in particular by the DGS filtering (see 402 in Fig. 4). In this manner, the sparse IQ values (see Z_out in Fig. 3) obtained by the process of blending (see 309 in Fig. 3) performed in Fig. 3 are a mixture of two sets of sparse IQ values.

For example, the I value obtained by the above described blending is given by I_out = (1 − 0.6)·I_DGS + 0.6·I_bg

Similarly, the Q value obtained by the above described blending is given by

Q_out = (1 − 0.6)·Q_DGS + 0.6·Q_bg

Hence, the output IQ values obtained by the blending are the sparse IQ values I_out, Q_out.

It should be noted that the blend ratio value is not limited to r = 0.6 as described in the embodiment of Fig. 8. Alternatively, the blend ratio r may be any value between 0 and 1.
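The blending of Fig. 8 can be sketched as follows, with the blend ratio ramping from 1 (pure background IQ) below the lower threshold to 0 (pure DGS IQ) above the upper threshold. The linear shape of the ramp and the threshold values are illustrative assumptions inferred from the figure description.

```python
def blend_ratio(amp, thresh0, thresh1):
    """Ratio r inferred from Fig. 8: r = 1 (pure Z_bg) below thresh0,
    r = 0 (pure Z_DGS) above thresh1, assumed linear in between."""
    if amp <= thresh0:
        return 1.0
    if amp >= thresh1:
        return 0.0
    return (thresh1 - amp) / (thresh1 - thresh0)

def blend(z_dgs, z_bg, amp, thresh0=0.2, thresh1=0.7):
    """Alpha blending Z_out = (1 - r) * Z_DGS + r * Z_bg, applied to an
    I or Q value; thresholds are example values."""
    r = blend_ratio(amp, thresh0, thresh1)
    return (1.0 - r) * z_dgs + r * z_bg

# an amplitude giving r = 0.6 reproduces the worked example: 40% DGS + 60% bg
amp = 0.7 - 0.6 * (0.7 - 0.2)
print(round(blend(10.0, 20.0, amp), 6))  # 16.0
```

At the extremes the function reproduces the three regions of Fig. 8: pure background output below thresh0 and pure DGS output above thresh1.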

Fig. 9 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data to obtain a depth map with high completeness, wherein the spot noise reduction filtering is performed based on prefixed dot positions stored in a database.

Raw data 300, e.g. raw image data, received from an image sensor (see 102 in Fig. 1) are input to an IQ values determination 301 to determine I and Q values. The determined I and Q values are dense IQ values 302 and are determined for each pixel based on the received raw data 300. A spot noise reduction filtering 303 is performed on the dense IQ values 302 based on prefixed dot positions 307 stored in a database 306 to obtain sparse IQ values 900. A background spot detection filtering 305 is performed on the dense IQ values 302 based on the prefixed dot positions 307 stored in the database 306 to obtain sparse IQ values Z_bg. By performing spot noise reduction filtering 303, dot finding 400 and DGS filtering 402 are performed, as described in Figs. 3 to 6 above, to obtain sparse IQ values 900. Dot finding 400 is performed on the dense IQ values 302 based on the prefixed dot positions 307 stored in the database 306 to obtain dot positions 401. DGS filtering is performed on the dense IQ values 302 based on the dot positions 401 obtained by the dot finding 400, to obtain the sparse IQ values 900. Blending 309 of the sparse IQ values 900 and the sparse IQ values Z_bg is performed to obtain sparse IQ values 901. A depth determination 311 is performed on the sparse IQ values Z_out to obtain sparse depth 902, i.e. sparse depth information.

It should be noted that for the spot noise reduction filtering 303 a DGS algorithm may be used (see Fig. 4). DGS filtering can be performed based on dot finding. However, for noisy illumination or background scene positions, e.g. in a dark area due to noise, the dot position may not be found. The background spot detection filtering (BSDF) can be used to identify IQ information, i.e. IQ values, for all prefixed spot positions 307. By performing blending 309, the IQ values having the best IQ signal for depth calculation are selected for each dot, e.g. the dots being detected or the dots with prefixed positions. The IQ values having the best IQ signal may for example be the IQ values showing the highest SNR for each dot.

Spot noise reduction filtering 303 is described in more detail in Figs. 4, 5 and 6. For the spot noise reduction filtering 303, for example a DGS algorithm may be used. DGS filtering can be performed based on dot finding, wherein each dot position obtained by the dot finding represents the center of the spot, i.e. the center of a spot pixel region including a plurality of pixels.

The background spot detection filtering 305 is described in more detail in Fig. 7 and the blending is described in more detail in Fig. 8.

It should be further noted that, in the embodiment of Fig. 9, the spot noise reduction filtering 303 and the background spot detection filtering 305 are performed in parallel on the raw IQ image to convert the dense IQ data of the raw IQ image to sparse IQ data, without limiting the present embodiment in that regard.

Still further, the prefixed dot positions stored in the database may be any number of prefixed dot positions, e.g. 5000.

Fig. 10 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data to obtain a depth map with high completeness, wherein the background spot detection filtering is performed by performing sparse Gaussian filtering on dense IQ values based on prefixed dot positions stored in a database.

Raw data 300, e.g. raw image data, received from an image sensor (see 102 in Fig. 1) are input to an IQ values determination 301 to determine I and Q values. The determined I and Q values are dense IQ values 302 and are determined for each pixel based on the received raw data 300. A background spot detection filtering 305 is performed on the dense IQ values 302 to obtain sparse IQ values Z_bg. By performing background spot detection filtering 305, a sparse Gaussian filtering 1000 is performed on the dense IQ values 302 based on prefixed dot positions 307 stored in a database 306 to obtain sparse IQ values 1001. An IQ to amplitude conversion 1002 is performed on the sparse IQ values 1001 to obtain an amplitude 1003, i.e. a filtered amplitude for each prefixed dot position corresponding to a set of sparse IQ values. Thresholding 1004 is performed on the filtered amplitude 1003 based on a predetermined threshold value. If the filtered amplitude 1003 of a dot position associated to a set of sparse IQ values is higher than the predetermined threshold value (if Â > threshold), validation/invalidation 1005 is performed on the sparse IQ values 1001 based on the predetermined threshold value to obtain validated sparse IQ values Z_bg. In this manner, the prefixed position's (background) dot is killed before the blender, i.e. blending 309. If the filtered amplitude 1003 is lower than the predetermined threshold value (if Â < threshold), a DGS target mask generation 1006 is performed using a filter kernel to obtain sparse IQ values 1007. If Â < threshold, the filter kernel invalidates all the pixels included inside the kernel and outputs the sparse IQ values 1007. In this way, a DGS filtering wherein spots are shifted depending on kernel distance may be avoided. A DGS filtering 1008 is performed on the dense IQ values 302 based on the sparse IQ values obtained by the DGS target mask generation 1006 to obtain the sparse IQ values Z_DGS.
Blending 309 of the sparse IQ values Z_bg and the sparse IQ values Z_DGS is performed to obtain sparse IQ values Z_out. A depth determination 311 is performed on the sparse IQ values Z_out to obtain sparse depth 1011, i.e. sparse depth information.

In the embodiment of Fig. 10, IQ to amplitude conversion 1002 is performed on the sparse IQ values 1001 to obtain an amplitude 1003, i.e. a filtered amplitude for each prefixed dot position corresponding to a set of sparse IQ values.

The filtered amplitude is given by

Â = sqrt(I_i² + Q_i²)

where I_i is the in-phase component value ("I value") and Q_i is the quadrature component value ("Q value") for the spot region, i.e. dot position. If the filtered amplitude 1003 is lower than the predetermined threshold value (if Â < threshold), the DGS target mask generation 1006 is performed and the pixels located inside the kernel are invalidated. If Â > threshold, the prefixed position's (background) dot is killed before the blender. In this manner, the thresholding 1004 may allow parallax issues to be avoided.

In this way, the local maxima search performed during DGS target mask generation 1006 may reduce the area of pixels to be searched. If the result of thresholding is that the amplitude is lower than the threshold (Â < threshold) for a given prefixed dot, sparse IQ values Z_DGS are output, since the intensity around this dot is small. Therefore, the local maxima search in DGS filtering 1008 may be avoided, so the neighboring pixels (i.e. dots) are removed from the search. Here, the statement that the filter kernel invalidates all the pixels included inside the kernel means that these pixels are removed from the search.
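The IQ-to-amplitude conversion 1002 and the threshold branch described above can be sketched as follows; the function names and the returned labels are illustrative, not taken from the disclosure.

```python
import math

def iq_amplitude(i, q):
    """Filtered amplitude of a dot position from its I and Q values:
    A = sqrt(I**2 + Q**2)."""
    return math.hypot(i, q)

def classify_dot(i, q, threshold):
    """Threshold branch sketch for Fig. 10: dots above the threshold keep
    their background IQ values for blending; dots below it have their
    kernel pixels invalidated, i.e. removed from the DGS local-maxima
    search."""
    if iq_amplitude(i, q) >= threshold:
        return "validate_background_iq"
    return "invalidate_kernel_pixels"

print(classify_dot(3.0, 4.0, threshold=2.0))  # validate_background_iq
print(classify_dot(0.1, 0.1, threshold=2.0))  # invalidate_kernel_pixels
```

Applying this classification once per prefixed dot position yields the validated set Z_bg and the mask used by the DGS target mask generation.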

Fig. 11 schematically shows an embodiment of blending sparse IQ data obtained from background spot detection filtering of raw image data with sparse IQ data from spot noise reduction filtering of raw image data and with sparse IQ data from bilateral filtering of raw image data to obtain a depth map with high completeness, wherein the bilateral filtering is performed based on dot positions obtained by dot finding.

Raw data 300, e.g. raw image data, received from an image sensor (see 102 in Fig. 1) are input to an IQ values determination 301 to determine I and Q values. The determined I and Q values are dense IQ values 302 and are determined for each pixel based on the received raw data 300. A spot noise reduction filtering 303 is performed on the dense IQ values 302 to obtain sparse IQ values Z_DGS. A background spot detection filtering 305 is performed on the dense IQ values 302 based on prefixed dot positions 307 stored in a database 306 to obtain sparse IQ values Z_bg. By performing spot noise reduction filtering 303, dot finding 400 and DGS filtering 402 are performed, as described in Figs. 3 to 6 above, to obtain sparse IQ values Z_DGS. Dot finding 400 is performed on the dense IQ values 302 to obtain dot positions 401. DGS filtering is performed on the dense IQ values 302 based on the dot positions 401 obtained by the dot finding 400, to obtain the sparse IQ values Z_DGS. A bilateral filtering 1100 is performed on the dense IQ values 302 based on the dot positions 401 obtained by dot finding 400 to obtain sparse IQ values Z_bg2. Blending 1102 of the sparse IQ values Z_DGS, the sparse IQ values Z_bg and the sparse IQ values Z_bg2 is performed to obtain sparse IQ values Z_out. A depth determination 311 is performed on the sparse IQ values Z_out to obtain sparse depth 1104, i.e. sparse depth information.

In the embodiment of Fig. 11, the bilateral filtering is performed by applying a filter kernel (see 700 in Fig. 7) to a dot position (see 701 in Fig. 7) obtained by the dot finding (see 400 in Fig. 11). The filter kernel to be applied has a radius Ω_i, and the applied bilateral filter is given by

Z_bg2,i = ( Σ_{j ∈ Ω_i} w_i,j · Z_j ) / ( Σ_{j ∈ Ω_i} w_i,j ), with w_i,j = exp(−|p_i − p_j|² / (2σ_s²)) · exp(−|Z_i − Z_j|² / (2σ_r²))

where σ_s is the space sigma, σ_r is the range sigma, Ω_i is the radius of the filter kernel, and Z is I or Q.

In the embodiment of Fig. 11, the range sigma σ_r is the parameter which tunes the blending strength of neighboring IQ values corresponding to the distance of the IQ values, which is described as |Z_i − Z_j|. If a large value for the range sigma σ_r is used, the filter kernel may behave like a box filter.

Fig. 12 schematically shows two diagrams of an embodiment of the process of blending described in Fig. 11, wherein the sparse IQ values obtained by the BSDF, the sparse IQ values obtained by the spot noise reduction filtering and the sparse IQ values obtained by the bilateral filtering are blended with two blend ratios r_1, r_2 to obtain blended sparse IQ values.

In part (a) of Fig. 12, the abscissa displays the noise reduction filtered amplitude and the ordinate the blend ratio 1, r_1, during blending (see 1102 in Fig. 11). The horizontal dashed line represents the maximum value of the blend ratio, here a blend ratio equal to 1, and the vertical dashed lines represent the two thresholds of the noise reduction filtered amplitude, namely low thresh_1 and high thresh_1. Here, above the second threshold, namely high thresh_1, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) the sparse IQ values obtained by spot noise reduction filtering (see 303 in Fig. 11), and in particular the sparse IQ values obtained by the DGS filtering (see 402 in Fig. 11), namely the sparse IQ values Z_DGS in Fig. 11. The blending below the second threshold, high thresh_1, is described in more detail in Fig. 13.

In part (b) of Fig. 12, the abscissa displays the noise reduction filtered amplitude and the ordinate the blend ratio 2, r_2, during blending (see 1102 in Fig. 11). The horizontal dashed line represents the maximum value of the blend ratio, here a blend ratio equal to 1, and the vertical dashed lines represent the two thresholds of the noise reduction filtered amplitude, namely low thresh_2 and high thresh_2. Here, between noise reduction filtered amplitude values 0.0 and the first threshold value, namely low thresh_2, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) the sparse IQ values obtained by the background spot detection filtering (see 305 in Fig. 11), namely the sparse IQ values Z_bg in Fig. 11. The blending above the second threshold, namely high thresh_2, is described in more detail in Fig. 13.

In the embodiment of Fig. 12, the blend ratio 1 is

r_1 = min(1, max(0, (high thresh_1 − Â) / (high thresh_1 − low thresh_1)))

and the blend ratio 2 is

r_2 = min(1, max(0, (high thresh_2 − Â) / (high thresh_2 − low thresh_2)))

where Â is the filtered amplitude.

The blending 1102 described in Fig. 11 above is a combination of part (a) and part (b) of Fig. 12, which is described in more detail in Fig. 13 in the following.

Fig. 13 schematically shows a diagram of an embodiment of the process of blending described in Fig. 11, wherein the sparse IQ values obtained by the BSDF, the sparse IQ values obtained by the spot noise reduction filtering and the sparse IQ values obtained by the bilateral filtering are blended with a blend ratio to obtain blended sparse IQ values. The abscissa displays the noise reduction filtered amplitude and the ordinate the blend ratio, r_i, during blending (see 1102 in Fig. 11). The horizontal dashed line represents the maximum value of the blend ratio, here a maximum blend ratio equal to 1, and the vertical dashed lines represent the four thresholds of the noise reduction filtered amplitude, namely low thresh_1, low thresh_2, high thresh_2, and high thresh_1.

Here, between noise reduction filtered amplitude values 0.0 and the first threshold value, namely low thresh_2, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) the sparse IQ values obtained by the background spot detection filtering (see 305 in Fig. 11), namely the sparse IQ values Z_bg in Fig. 11. Between the first and the second threshold, namely low thresh_2 and low thresh_1, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) sparse IQ values obtained by blending, with a corresponding blend ratio, the sparse IQ values obtained by the background spot detection filtering (see 305 in Fig. 11) with the sparse IQ values obtained by the bilateral filtering (see 1100 in Fig. 11). Between the second and the third threshold, namely low thresh_1 and high thresh_2, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) sparse IQ values obtained by blending, with a corresponding blend ratio, the sparse IQ values obtained by the background spot detection filtering (see 305 in Fig. 11) with the sparse IQ values obtained by the bilateral filtering (see 1100 in Fig. 11) and with the sparse IQ values obtained by the DGS filtering (see 402 in Fig. 11). Between the third and the fourth threshold, namely high thresh_2 and high thresh_1, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) sparse IQ values obtained by blending, with a corresponding blend ratio, the sparse IQ values obtained by the DGS filtering (see 402 in Fig. 11) with the sparse IQ values obtained by the bilateral filtering (see 1100 in Fig. 11). Above the fourth threshold, namely high thresh_1, the blending (see 1102 in Fig. 11) outputs as sparse IQ values (see Z_out in Fig. 11) the sparse IQ values obtained by spot noise reduction filtering (see 303 in Fig. 11), and in particular the sparse IQ values obtained by the DGS filtering (see 402 in Fig. 11), namely the sparse IQ values Z_DGS in Fig. 11.

In the embodiment of Fig. 13, as process of blending, alpha blending is performed between the bilateral sparse IQ values Z_bg2, the DGS sparse IQ values and the background sparse IQ values, using the amplitude as a guide, namely

r_i = blend ratio = min(1, max(0, (high thresh_i − Â) / (high thresh_i − low thresh_i)))

where Â is the filtered amplitude, and i = 1, 2.

The alpha blending is given by

Z_out = (1 − r_1)·Z_DGS + r_1·{(1 − r_2)·Z_bg2 + r_2·Z_bg1} where Z_out is the output I or Q value obtained by the blending, r_1 and r_2 are the blend ratios, Z_DGS is the output I or Q value obtained by the DGS filtering performed during the spot noise reduction filtering, Z_bg1 is the output I or Q value obtained by the background spot detection filtering using prefixed dot positions and Z_bg2 is the output I or Q value obtained by the bilateral filtering using the dot finding function.
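The nested alpha blending of Fig. 13 can be sketched as follows; the linear ramps assumed for r_1 and r_2 and the four threshold values are illustrative assumptions inferred from the figure description.

```python
def ramp(amp, low, high):
    """Ratio r_i inferred from Figs. 12 and 13: 1 below low thresh_i,
    0 above high thresh_i, assumed linear in between."""
    if amp <= low:
        return 1.0
    if amp >= high:
        return 0.0
    return (high - amp) / (high - low)

def blend3(z_dgs, z_bg1, z_bg2, amp,
           low1=0.3, high1=0.8, low2=0.1, high2=0.6):
    """Nested alpha blend Z_out = (1 - r1)*Z_DGS + r1*((1 - r2)*Z_bg2 + r2*Z_bg1),
    applied to an I or Q value; threshold values are examples."""
    r1 = ramp(amp, low1, high1)   # DGS vs. background mixture
    r2 = ramp(amp, low2, high2)   # prefixed-position vs. bilateral IQ
    return (1.0 - r1) * z_dgs + r1 * ((1.0 - r2) * z_bg2 + r2 * z_bg1)

# the extremes reproduce the outer regions of Fig. 13
print(blend3(1.0, 2.0, 3.0, amp=0.05))  # 2.0  (pure background, r1 = r2 = 1)
print(blend3(1.0, 2.0, 3.0, amp=0.9))   # 1.0  (pure DGS, r1 = 0)
```

Amplitudes between the thresholds yield the intermediate two-way and three-way mixtures described for Fig. 13.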

Fig. 14 shows a flow diagram visualizing a method for blending sparse IQ data to obtain a depth map with high completeness.

At 1400, the IQ values determination (see 301 in Figs. 3, 9, 10 and 11) receives raw data (see 300 in Figs. 3, 9, 10 and 11) from an image sensor. At 1401, determination of I and Q values for each pixel is performed based on the received raw data to obtain dense IQ values (see 302 in Figs. 3, 9, 10 and 11). At 1402, the background spot detection filtering (see 305 in Figs. 3, 9, 10 and 11) receives prefixed dot positions (see 307 in Figs. 3, 9, 10 and 11) from a database (see 306 in Figs. 3, 9, 10 and 11). At 1403, background spot detection filtering is performed on the dense IQ values based on the received prefixed dot positions to obtain sparse IQ values Z_bg. At 1404, spot noise reduction filtering (see 303 in Figs. 3, 9 and 11) is performed on the dense IQ values (see 302 in Figs. 3, 9 and 11) to obtain sparse IQ values Z_DGS. At 1405, blending (see 309 in Figs. 3 and 8) of the sparse IQ values Z_DGS with the sparse IQ values Z_bg is performed to obtain sparse IQ values Z_out. At 1406, depth determination (see 311 in Figs. 3, 9, 10 and 11) is performed based on the sparse IQ values to obtain sparse depth measurements, such that long depth detection range measurements of a spot or a pixel in an iToF system are achieved and a depth map with high completeness is obtained.
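For step 1406, the standard iToF relation between the phase of an IQ value and distance can be sketched as follows; the modulation frequency is an assumed example value, and the sketch ignores phase wrapping beyond the unambiguous range.

```python
import math

def depth_from_iq(i, q, f_mod=60e6, c=3.0e8):
    """Depth determination sketch (step 1406): the phase of a blended sparse
    IQ value maps to distance via d = c * phase / (4 * pi * f_mod).
    f_mod is an assumed example modulation frequency; c is the approximate
    speed of light."""
    phase = math.atan2(q, i) % (2.0 * math.pi)  # phase in [0, 2*pi)
    return c * phase / (4.0 * math.pi * f_mod)

# a 45-degree IQ phase at 60 MHz modulation maps to 0.3125 m
print(round(depth_from_iq(1.0, 1.0), 6))  # 0.3125
```

Evaluating this per dot position on the blended sparse IQ values Z_out yields the sparse depth information described above.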

Fig. 15 illustrates two depth maps generated by a spot ToF device capturing a scene. In part (a) of Fig. 15 the depth map is obtained by performing only dot finding and DGS filtering, such that an incomplete depth map is obtained due to not-found spots. In part (b) of Fig. 15 the depth map is obtained by performing blending of the sparse IQ values obtained by the DGS filtering with those obtained by the background spot detection filtering described in the present disclosure. The depth map of part (b) of Fig. 15 is a depth map with high completeness, which can be acquired as a sparse depth image based on spots, the number of spots corresponding to the illuminator's spot number.

Fig. 16 schematically describes an embodiment of an iToF device that can implement the processes for long depth detection range measurement of a spot or a pixel in an iToF system to obtain a depth map with high completeness. The electronic device 1600 may further implement all other processes of a standard iToF/spot ToF system (see Figs. 1 and 2), like I-Q value determination, phase, amplitude, confidence, and reflectance determination. The electronic device 1600 may further implement a DGS algorithm (see Fig. 6), a reflectance sharpening filter, or the like. The electronic device 1600 comprises a CPU 1601 as processor. The electronic device 1600 further comprises an iToF sensor 1606 connected to the processor 1601. The processor 1601 may for example implement two parallel pipelines (see Fig. 3) to determine spot positions and their related IQ values using two different techniques, and a blender (see Fig. 3) that realizes the process described with regard to Figs. 3, 4, 5, 6, 7 and 8. The electronic device 1600 further comprises a user interface 1607 that is connected to the processor 1601. This user interface 1607 acts as a man-machine interface and enables a dialogue between an administrator and the electronic system. For example, an administrator may make configurations to the system using this user interface 1607. The electronic device 1600 further comprises a Bluetooth interface 1604, a WLAN interface 1605, and an Ethernet interface 1608. These units 1604, 1605, and 1608 act as I/O interfaces for data communication with external devices. For example, video cameras with Ethernet, WLAN or Bluetooth connection may be coupled to the processor 1601 via these interfaces 1604, 1605, and 1608. The electronic device 1600 further comprises a data storage 1602, which may be the calibration storage described with regard to Fig. 7, and a data memory 1603 (here a RAM). The data storage 1602 is arranged as a long-term storage, e.g. for storing the algorithm parameters for one or more use-cases, for recording iToF sensor data obtained from the iToF sensor 1606, and the like. The data memory 1603 is arranged to temporarily store or cache data or computer instructions for processing by the processor 1601.

It should be noted that the description above is only an example configuration. Alternative configurations may be implemented with additional or other sensors, storage devices, interfaces, or the like.

It should be further noted that alternatively the electronic device 1600 may be implemented with a digital signal processor (DSP) or a graphics processing unit (GPU), without limiting the present disclosure in that regard.

It should be further noted that a ToF sensor, a processor, or an application processor may implement the processes for long depth detection range measurement of a spot or a pixel in an iToF system.


It should also be noted that the division of the electronic device of Fig. 16 into units is only made for illustration purposes and that the present disclosure is not limited to any specific division of functions in specific units. For instance, at least parts of the circuitry could be implemented by a respectively programmed processor, field programmable gate array (FPGA), dedicated circuits, and the like. All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example, on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.

In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.

The methods as described herein are also implemented in some embodiments as a computer program causing a computer and/or a processor to perform the method, when being carried out on the computer and/or processor. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the methods described herein to be performed.

It should be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is however given for illustrative purposes only and should not be construed as binding. Changes of the ordering of method steps may be apparent to the skilled person.

The method of Fig. 14 can also be implemented as a computer program causing a computer and/or a processor to perform the method, when being carried out on the computer and/or processor. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the method described to be performed.

Note that the present technology can also be configured as described below.

(1) An electronic device comprising circuitry configured to perform blending (309; 1102) of first sparse IQ data (ZDGS) obtained from spot noise reduction filtering (303) of an image with second sparse IQ data (Zbg; Zbg1) obtained from background spot detection filtering (305) of the image to obtain third sparse IQ data (Zout) for determining sparse depth information (312; 902; 1011; 1104) with high completeness.

(2) The electronic device of (1), wherein the blending (309) comprises a weighted selection of the best IQ signal from the IQ data obtained from spot noise reduction filtering (303) or from the background spot detection filtering (305).

(3) The electronic device of (1) or (2), wherein the blending (309) comprises determining the third sparse IQ data (Zout) based on the first sparse IQ data (ZDGS), the second sparse IQ data (Zbg) and a blend ratio (r).

(4) The electronic device of (3), wherein the blend ratio (r) depends on the NR-filtered amplitude.

(5) The electronic device of any one of (1) to (4), wherein the blending (309) is performed based on the equation

Zout = (1 − r)ZDGS + rZbg

where Zout is the I or Q value of the third sparse IQ data obtained by the blending (309), r is the blend ratio, ZDGS is the I or Q value of the first sparse IQ data obtained by DGS filtering (402) and Zbg is the I or Q value of the second sparse IQ data obtained by background spot detection filtering (305).
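Purely as an illustrative, non-limiting sketch of the blending of (5), the per-spot computation could be expressed as follows; the function name and the NumPy array representation are assumptions made for illustration, not part of the claimed device:

```python
import numpy as np

def blend_iq(z_dgs: np.ndarray, z_bg: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Per-spot blend of I or Q values: Zout = (1 - r) * ZDGS + r * Zbg.

    z_dgs: I or Q values from spot noise reduction (DGS) filtering
    z_bg:  I or Q values from background spot detection filtering
    r:     blend ratio per spot, expected in [0, 1]
    """
    r = np.clip(r, 0.0, 1.0)             # keep the blend ratio in a valid range
    return (1.0 - r) * z_dgs + r * z_bg
```

With r = 0 the DGS-filtered value is kept unchanged, with r = 1 the background spot detection value is selected, and intermediate ratios yield the weighted selection described in (2).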

(6) The electronic device of any one of (1) to (5), wherein the circuitry is configured to perform dot finding (400) on dense IQ data (302) of the image to obtain dot positions (401).

(7) The electronic device of (6), wherein the circuitry is configured to perform bilateral filtering (1100) on the dense IQ data (302) based on the dot positions (401) to obtain fourth sparse IQ data (Zbg2) of the image.

(8) The electronic device of (7), wherein the circuitry is configured to perform blending (1102) of the first sparse IQ data (ZDGS) with the second sparse IQ data (Zbg1) and with the fourth sparse IQ data (Zbg2) to obtain the third sparse IQ data (Zout) for determining sparse depth information (1104) with high completeness.

(9) The electronic device of (8), wherein the blending (1102) comprises determining the third sparse IQ data (Zout) based on the first sparse IQ data (ZDGS), the second sparse IQ data (Zbg1), the fourth sparse IQ data (Zbg2), a first blend ratio (r1), and a second blend ratio (r2).

(10) The electronic device of (8), wherein the blending (1102) is performed based on the equation

Zout = (1 − r1)ZDGS + r1{(1 − r2)Zbg2 + r2Zbg1}

where Zout is the I or Q value of the third sparse IQ data obtained by the blending (1102), r1 is a first blend ratio, r2 is a second blend ratio, ZDGS is the I or Q value of the first sparse IQ data obtained by DGS filtering (402), Zbg1 is the I or Q value of the second sparse IQ data obtained by background spot detection filtering (305) and Zbg2 is the I or Q value of the fourth sparse IQ data obtained by the bilateral filtering (1100).
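As an illustrative, non-limiting sketch of the two-stage blending of (10), the inner blend of the two background results can be computed first and then blended with the DGS result; the function name and array representation are assumptions for illustration:

```python
import numpy as np

def blend_iq_two_stage(z_dgs, z_bg1, z_bg2, r1, r2):
    """Two-stage blend: Zout = (1 - r1) * ZDGS + r1 * ((1 - r2) * Zbg2 + r2 * Zbg1).

    z_dgs:  I or Q values from DGS filtering
    z_bg1:  I or Q values from background spot detection filtering
    z_bg2:  I or Q values from bilateral filtering
    r1, r2: first and second blend ratios per spot, expected in [0, 1]
    """
    r1 = np.clip(r1, 0.0, 1.0)
    r2 = np.clip(r2, 0.0, 1.0)
    z_bg = (1.0 - r2) * z_bg2 + r2 * z_bg1   # inner blend of the two background results
    return (1.0 - r1) * z_dgs + r1 * z_bg
```

Setting r1 = 0 reduces the result to the single-pipeline DGS value, while r1 = 1 selects only the inner blend of the two background-filtered results.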

(11) The electronic device of any one of (1) to (10), wherein the circuitry is configured to perform the background spot detection filtering (305) on dense IQ data (302) of the image based on prefixed dot positions (307) to obtain the second sparse IQ data (Zbg; Zbg1).

(12) The electronic device of any one of (1) to (11), wherein the circuitry is configured to perform spot noise reduction filtering (303) on dense IQ data (302) of the image to obtain the first sparse IQ data (ZDGS).

(13) The electronic device of (6), wherein the circuitry is configured to perform direct-global-separation, DGS, filtering (402; 1008) on the dense IQ data (302) based on the dot positions (401) to obtain the first sparse IQ data (ZDGS).

(14) The electronic device of any one of (1) to (13), wherein the circuitry is further configured to perform IQ values determination (301) on raw image data (300) of the image captured by a Time-of-Flight image sensor (102) to obtain dense IQ data (302) of the image.

(15) The electronic device of any one of (1) to (14), wherein the circuitry is further configured to perform depth determination (311) on the third sparse IQ data (Zout; 901; Zout) to obtain the sparse depth information (312; 902; 1011; 1104) for generating a depth map with high completeness.

(16) The electronic device of (6), wherein the circuitry is configured to perform dot finding (400) on the dense IQ data (302) based on prefixed dot positions (307) to obtain the dot positions (401).

(17) The electronic device of any one of (1) to (16), wherein the circuitry is further configured to perform sparse Gaussian filtering on dense IQ data (302) of the image based on prefixed dot positions (307) to obtain fifth sparse IQ data (1001) of the image.

(18) The electronic device of (17), wherein the circuitry is further configured to perform IQ to amplitude conversion (1002) on the fifth sparse IQ data (1001) to obtain an amplitude value (1003).
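The IQ to amplitude conversion of (18) typically computes the magnitude of the IQ vector, A = sqrt(I² + Q²), which is the standard amplitude relation in iToF processing; a minimal sketch, with an assumed function name, for illustration only:

```python
import numpy as np

def iq_to_amplitude(i: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Amplitude of the IQ vector: A = sqrt(I^2 + Q^2)."""
    return np.hypot(i, q)  # numerically robust computation of sqrt(I**2 + Q**2)
```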

(19) The electronic device of (18), wherein if the amplitude value (1003) is above a predefined threshold value, the circuitry is further configured to perform validation/invalidation (1005) on the fifth sparse IQ data (1001) to obtain the second sparse IQ data (Zbg).

(20) The electronic device of (18), wherein if the amplitude value (1003) is below a predefined threshold value, the circuitry is further configured to perform DGS target mask generation to obtain a DGS target mask.

(21) The electronic device of (20), wherein the circuitry is further configured to perform DGS filtering (1008) on the dense IQ data (302) based on the DGS target mask to obtain the first sparse IQ data (ZDGS).
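The amplitude threshold decision of (19) to (21) can be sketched, for illustration only, as a per-spot mask computation; the function name and the boolean-mask representation are assumptions, not part of the claimed device:

```python
import numpy as np

def route_spots(amplitude: np.ndarray, threshold: float):
    """Per-spot routing sketch for (19)-(21): spots whose amplitude reaches the
    predefined threshold are validated (their filtered IQ data is kept as the
    second sparse IQ data); the remaining spots form the DGS target mask and
    are re-processed by DGS filtering."""
    dgs_mask = amplitude < threshold   # spots that fall back to DGS filtering
    valid_mask = ~dgs_mask             # spots validated by the amplitude check
    return valid_mask, dgs_mask
```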

(22) A method comprising performing blending (309; 1102) of first sparse IQ data (ZDGS) obtained from spot noise reduction filtering (303) of an image with second sparse IQ data (Zbg; Zbg1) obtained from background spot detection filtering (305) of the image to obtain third sparse IQ data (Zout) for determining sparse depth information (312; 902; 1011; 1104) with high completeness.

(23) A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of (22).