


Title:
PHYSICS-BASED RECOVERY OF LOST COLORS IN UNDERWATER AND ATMOSPHERIC IMAGES UNDER WAVELENGTH DEPENDENT ABSORPTION AND SCATTERING
Document Type and Number:
WIPO Patent Application WO/2020/234886
Kind Code:
A1
Abstract:
A method comprising: receiving an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering; estimating, based, at least in part, on a range map of the scene, one or more image formation model parameters; and recovering the scene from the input image, based, at least in part, on the estimating.

Inventors:
AKKAYNAK DERYA (US)
TREIBITZ AVITAL (IL)
Application Number:
PCT/IL2020/050563
Publication Date:
November 26, 2020
Filing Date:
May 21, 2020
Assignee:
CARMEL HAIFA UNIV ECONOMIC CORPORATION LTD (IL)
SEAERRA VISION LTD (IL)
International Classes:
G06T5/00; G06T7/90
Domestic Patent References:
WO2014060562A1 2014-04-24
Foreign References:
US20070274604A1 2007-11-29
CN106296597A 2017-01-04
CN108765342A 2018-11-06
Other References:
ROZNERE, MONIKA ET AL.: "Real-time Model-based Image Color Correction for Underwater Robots", ARXIV:1904.06437, 12 April 2019 (2019-04-12), XP081842062
LI, JIE ET AL.: "WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images", IEEE ROBOTICS AND AUTOMATION LETTERS, vol. 3, no. 1, 26 October 2017 (2017-10-26), pages 387 - 394, XP080748369
CHIANG, JOHN Y. ET AL.: "Underwater image enhancement by wavelength compensation and dehazing", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 21.4, 4 April 2012 (2012-04-04), pages 1756 - 1769, XP011492008
BRYSON ET AL.: "True Color Correction of Autonomous Underwater Vehicle Imagery"
See also references of EP 3973500A4
Attorney, Agent or Firm:
GEYRA, Assaf et al. (IL)
Claims:
CLAIMS

What is claimed is:

1. A system comprising:

at least one hardware processor; and

a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to:

receive an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering,

estimate, based, at least in part, on a range map of said scene, one or more image formation model parameters, and

recover said scene from said input image, based, at least in part, on said estimating.

2. The system of claim 1, wherein said image is selected from the group consisting of grayscale image, RGB image, RGB-Depth (RGBD) image, multi-spectral image, and hyperspectral image.

3. The system of any one of claims 1 or 2, wherein said medium is one of: water and ambient atmosphere.

4. The system of claim 3, wherein said scene is under water.

5. The system of any one of claims 1-4, wherein said recovering removes an effect of said wavelength-dependent absorption and/or scattering medium from said input image.

6. The system of any one of claims 1-5, wherein said image formation model parameters include at least one of: backscatter parameters in said input image, and attenuation parameters in said input image.

7. The system of any one of claims 1-6, wherein said image formation model parameters are estimated separately with respect to each color channel in said input image.

8. The system of any one of claims 1-7, wherein said estimating of said one or more image formation model parameters is based, at least in part, on distances to each object in said scene, wherein said distances are obtained using said range map.

9. The system of any one of claims 1-8, wherein said range map is obtained using one of: structure-from-motion (SFM) range imaging techniques, stereo imaging techniques, and monocular techniques.

10. A method comprising:

receiving an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering;

estimating, based, at least in part, on a range map of said scene, one or more image formation model parameters; and

recovering said scene from said input image, based, at least in part, on said estimating.

11. The method of claim 10, wherein said image is selected from the group consisting of grayscale image, RGB image, RGB-Depth (RGBD) image, multi-spectral image, and hyperspectral image.

12. The method of any one of claims 10 or 11, wherein said medium is one of: water and ambient atmosphere.

13. The method of claim 12, wherein said scene is under water.

14. The method of any one of claims 10-13, wherein said recovering removes an effect of said wavelength-dependent absorption and/or scattering medium from said input image.

15. The method of any one of claims 10-14, wherein said image formation model parameters include at least one of: backscatter parameters in said input image, and attenuation parameters in said input image.

16. The method of any one of claims 10-15, wherein said image formation model parameters are estimated separately with respect to each color channel in said input image.

17. The method of any one of claims 10-16, wherein said estimating of said one or more image formation model parameters is based, at least in part, on distances to each object in said scene, wherein said distances are obtained using said range map.

18. The method of any one of claims 10-17, wherein said range map is obtained using one of: structure-from-motion (SFM) range imaging techniques, stereo imaging techniques, and monocular techniques.

19. A computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to:

receive an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering;

estimate, based, at least in part, on a range map of said scene, one or more image formation model parameters; and

recover said scene from said input image, based, at least in part, on said estimating.

20. The computer program product of claim 19, wherein said image is selected from the group consisting of grayscale image, RGB image, RGB-Depth (RGBD) image, multi-spectral image, and hyperspectral image.

21. The computer program product of any one of claims 19 or 20, wherein said medium is one of: water and ambient atmosphere.

22. The computer program product of claim 21, wherein said scene is under water.

23. The computer program product of any one of claims 19-22, wherein said recovering removes an effect of said wavelength-dependent absorption and/or scattering medium from said input image.

24. The computer program product of any one of claims 19-23, wherein said image formation model parameters include at least one of: backscatter parameters in said input image, and attenuation parameters in said input image.

25. The computer program product of any one of claims 19-24, wherein said image formation model parameters are estimated separately with respect to each color channel in said input image.

26. The computer program product of any one of claims 19-25, wherein said estimating of said one or more image formation model parameters is based, at least in part, on distances to each object in said scene, wherein said distances are obtained using said range map.

27. The computer program product of any one of claims 19-26, wherein said range map is obtained using one of: structure-from-motion (SFM) range imaging techniques, stereo imaging techniques, and monocular techniques.

Description:
PHYSICS-BASED RECOVERY OF LOST COLORS IN UNDERWATER AND

ATMOSPHERIC IMAGES UNDER WAVELENGTH DEPENDENT ABSORPTION

AND SCATTERING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority to U.S. Provisional Patent Application No. 62/850,752, filed May 21, 2019, the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

[0002] Large image datasets like ImageNet have been instrumental in igniting the artificial intelligence boom, which fueled many important discoveries in science and industry in the last two decades. The underwater domain, which has no shortage of large image datasets, however, has not benefited from the full power of computer vision and machine learning methods which made these discoveries possible, partly because water masks many computationally valuable features of a scene.

[0003] An underwater photo is the equivalent of one taken in air, but covered in thick, colored fog, and subject to an illuminant whose white point and intensity change as a function of distance. It is difficult to train learning-based methods for different optical conditions that represent the global ocean, because calibrated underwater datasets are expensive and logistically difficult to acquire.

[0004] Existing methods that attempt to reverse the degradation due to water are either unstable, too sensitive, or only work for short object ranges. Thus, the analysis of large underwater datasets often requires costly manual effort. On average, a human expert spends over 2 hours identifying and counting fish in a video that is one hour long.

[0005] The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the figures.

SUMMARY

[0006] The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods which are meant to be exemplary and illustrative, not limiting in scope.

[0007] There is provided, in an embodiment, a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering, estimate, based, at least in part, on a range map of the scene, one or more image formation model parameters, and recover the scene from the input image, based, at least in part, on the estimating.

[0008] There is provided, in an embodiment, a method comprising receiving an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering; estimating, based, at least in part, on a range map of the scene, one or more image formation model parameters; and recovering the scene from the input image, based, at least in part, on the estimating.

[0009] There is provided, in an embodiment, a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive an input image, wherein the input image depicts a scene within a medium which has wavelength-dependent absorption and/or scattering; estimate, based, at least in part, on a range map of the scene, one or more image formation model parameters; and recover the scene from the input image, based, at least in part, on the estimating.

[0010] In some embodiments, the image is selected from the group consisting of grayscale image, RGB image, RGB-Depth (RGBD) image, multi-spectral image, and hyperspectral image.

[0011] In some embodiments, the medium is one of: water and ambient atmosphere.

[0012] In some embodiments, the scene is under water.

[0013] In some embodiments, the recovering removes an effect of the wavelength-dependent absorption and/or scattering medium from the input image.

[0014] In some embodiments, the image formation model parameters include at least one of: backscatter parameters in the input image, and attenuation parameters in the input image.

[0015] In some embodiments, the image formation model parameters are estimated separately with respect to each color channel in the input image.

[0016] In some embodiments, the estimating of the one or more image formation model parameters is based, at least in part, on distances to each object in the scene, wherein the distances are obtained using the range map.

[0017] In some embodiments, the range map is obtained using one of: structure-from-motion (SFM) range imaging techniques, stereo imaging techniques, and monocular techniques.

[0018] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the figures and by study of the following detailed description.

BRIEF DESCRIPTION OF THE FIGURES

[0019] Exemplary embodiments are illustrated in referenced figures. Dimensions of components and features shown in the figures are generally chosen for convenience and clarity of presentation and are not necessarily shown to scale. The figures are listed below.

[0020] Figs. 1A-1B show a schematic illustration of the present method for removing water from underwater images, by removing the degradation due to water, according to an embodiment;

[0021] Fig. 2 is a schematic illustration of underwater image formation, according to an embodiment;

[0022] Fig. 3A is a three-dimensional (3D) model created from 68 photographs, according to an embodiment;

[0023] Fig. 3B is a range map for the image in Fig. 3A, according to an embodiment;

[0024] Fig. 4A shows a color chart imaged underwater at various ranges;

[0025] Fig. 4B shows the $B_c$ calculation for each color channel, according to an embodiment; and

[0026] Figs. 5A-5D, 6A-6E and 7A-7E show experimental results, according to an embodiment.

DETAILED DESCRIPTION

[0027] Disclosed herein is a method that recovers lost colors in underwater images using a physics-based approach. In some embodiments, the present method estimates the parameters of the image formation model.

[0028] In some embodiments, the images may be acquired using a plurality of imaging formats, including grayscale, RGB, RGB-Depth (RGBD), multi-spectral, hyperspectral, and/or additional and/or other imaging techniques.

[0029] Robust recovery of lost colors in underwater images remains a challenging problem. This is partly due to the prevalent use of an atmospheric image formation model for underwater images. A recently proposed, physically accurate revised model showed that:

(i) the attenuation coefficient of the signal is not uniform across the scene, but depends on object range and reflectance, and

(ii) the coefficient governing the increase in backscatter with distance differs from the signal attenuation coefficient.

[0030] Using more than 1,100 images from two optically different water bodies, the present inventors show that the present method, which uses a revised image formation model, outperforms methods using the atmospheric model. Consistent removal of water will open up large underwater datasets to powerful computer vision and machine learning algorithms, creating exciting opportunities for the future of underwater exploration and conservation.

[0031] The present method aims to consistently remove water from underwater images, so that large datasets can be analyzed with increased efficiency. In some embodiments, the present method estimates model parameters for a given RGBD image.

[0032] In some embodiments, the present method provides for an image formation model derived for imaging in any medium which has wavelength-dependent absorption and/or scattering. In some embodiments, such medium may be, but is not limited to, water and ambient atmosphere. Accordingly, in some embodiments, the present method provides for deriving an image formation model for underwater imaging, for imaging in fog or haze conditions, and the like.

[0033] In some embodiments, the present method parametrizes the distance-dependent attenuation coefficient, which greatly reduces the unknowns in the optimization step.

[0034] The present inventors have used more than 1,100 images acquired in two optically different water types. On these images and another underwater RGBD dataset, the present inventors show qualitatively and quantitatively that the present method, which is the first to utilize the revised image formation model, outperforms others that use currently known models. Reference is now made to Figs. 1A-1B, which show a schematic illustration of the present method, which removes water from underwater images (Fig. 1A) by removing the degradation due to water (Fig. 1B).

[0035] Reference is now made to Fig. 2, which is a schematic illustration of underwater image formation governed by an equation of the form $I_c = D_c + B_c$. $D_c$ contains the scene with attenuated colors, and $B_c$ is a degrading signal that strongly depends on the optical properties of the water, and eventually dominates the image (shown in Fig. 2 as a gray patch). Insets show the relative magnitudes of $D_c$ and $B_c$ for a Macbeth chart imaged at 27 m in oceanic water.

[0036] A known image formation model for bad weather images assumes that the scattering coefficient is constant over the camera sensitivity range in each color channel, resulting in one coefficient per color channel. This model then became extensively used for bad weather, and was later adapted for the underwater environment. For scene recovery, these methods require more than one frame of the scene, or extra information such as 3D structure. These models are further simplified to include only one attenuation coefficient, uniform across all color channels. This was done to enable recovery from single images in haze, and was later used also for underwater recovery. While using the same coefficient for all color channels in underwater scenes is a very crude approximation, using a coefficient per channel may yield better results. Nevertheless, as will be further shown below, the accuracy of these methods is inherently limited by the model.

[0037] In previous work, backscatter is estimated from single images using the Dark Channel Prior (DCP) method (see K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior", IEEE Trans. PAMI, 33(12):2341-2353, 2011), some variants of it, or other priors. Attenuation coefficients can be measured by ocean optics instruments such as transmissometers or spectrometers. However, they cannot be used as-is for imaging because of differences in spectral sensitivity and acceptance angle. In addition, these instruments are expensive and cumbersome to deploy.

[0038] Thus, it is generally best to estimate the attenuation coefficients directly from images. The most basic method for that is to photograph a calibration target at a known distance. In one method, coefficients were taken from the estimated veiling light, ignoring the illumination color. In another method, the attenuation coefficients per channel were estimated using the grey-world assumption. Other methods alleviate this problem by using fixed attenuation coefficients measured for just one water type.

[0039] Known distances slightly simplify the problem and were used to estimate backscatter together with attenuation by fitting data from multiple images to the image formation model. Deep networks were recently used for reconstructing underwater scenes. Their training, however, relies on purely synthetic data, and thus highly depends on the quality of the simulation models. All the methods so far assume that the attenuation coefficients are only properties of the water and are uniform across the scene per color channel, but it has been shown that this is an incorrect assumption that leads to errors in reconstruction.

Scientific Background

[0040] Underwater image formation is governed by:

$$I_c = D_c + B_c \qquad (1)$$

where $c = R, G, B$ is the color channel, $I_c$ is the image captured by the camera (with distorted colors), $D_c$ is the direct signal which contains the information about the (attenuated) scene, and $B_c$ is the backscatter, an additive signal that degrades the image due to light reflected from particles suspended in the water column. The components $D_c$ and $B_c$ are governed by two distinct coefficients, $\beta_c^D$ and $\beta_c^B$, which are the wideband (RGB) attenuation and backscatter coefficients, respectively.

[0041] The expanded form of Eq. 1 is given as:

$$I_c = J_c \, e^{-\beta_c^D(\mathbf{v}_D)\, z} + B_c^\infty \left(1 - e^{-\beta_c^B(\mathbf{v}_B)\, z}\right), \qquad (2)$$

where $z$ is the range (distance) between the camera and the objects in the scene along the line of sight, $B_c^\infty$ is the veiling light, and $J_c$ is the unattenuated scene that would be captured at the location of the camera had there been no attenuation along $z$. The vectors $\mathbf{v}_D = \{z, \rho, E, S_c, \beta\}$ and $\mathbf{v}_B = \{E, S_c, b, \beta\}$ represent the dependencies of the coefficients $\beta_c^D$ and $\beta_c^B$ on the range $z$, the reflectance $\rho$, the spectrum of ambient light $E$, the spectral response of the camera $S_c$, and the physical scattering and beam attenuation coefficients of the water body, $b$ and $\beta$, all of which are functions of the wavelength $\lambda$.

[0042] Previously, it was assumed that $\beta_c^D = \beta_c^B$ and that these coefficients had a single value for a given scene, but it was shown in D. Akkaynak and T. Treibitz [2018] (see D. Akkaynak and T. Treibitz, "A revised underwater image formation model", in Proc. IEEE CVPR, 2018) that they are distinct and, furthermore, that they depend on different factors. Eq. 2 is formulated for imaging in the horizontal direction. However, it may be applied to scenes captured in different directions with the assumption that the deviations are small.
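By way of non-limiting illustration only, the following is a minimal Python sketch of the forward model of Eq. 2 with constant per-channel coefficients (i.e., ignoring their dependencies in $\mathbf{v}_D$ and $\mathbf{v}_B$); the coefficient values, array shapes, and function names are hypothetical placeholders, not values prescribed by the present disclosure.

```python
import numpy as np

def synthesize_underwater_image(J, z, beta_D, beta_B, B_inf):
    """Forward model of Eq. 2: I_c = J_c*exp(-beta_D_c*z) + B_inf_c*(1 - exp(-beta_B_c*z)).

    J      : (H, W, 3) unattenuated scene, linear RGB in [0, 1]
    z      : (H, W)    range map in meters
    beta_D : (3,)      wideband attenuation coefficients per channel (1/m)
    beta_B : (3,)      wideband backscatter coefficients per channel (1/m)
    B_inf  : (3,)      veiling light per channel
    """
    zc = z[..., None]                              # broadcast range over channels
    D = J * np.exp(-beta_D * zc)                   # direct (attenuated) signal
    B = B_inf * (1.0 - np.exp(-beta_B * zc))       # additive backscatter
    return D + B

# Hypothetical example: a flat gray scene 10 m away in ocean-like water.
J = np.full((4, 4, 3), 0.5)
z = np.full((4, 4), 10.0)
I = synthesize_underwater_image(
    J, z,
    beta_D=np.array([0.40, 0.12, 0.10]),           # red attenuates fastest
    beta_B=np.array([0.25, 0.20, 0.17]),
    B_inf=np.array([0.05, 0.20, 0.30]))
```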

[0043] The equations connecting the RGB coefficients to wavelength-dependent physical quantities are:

$$\beta_c^D(\mathbf{v}_D) = \ln\!\left[\frac{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\,\rho(\lambda)\,E(d,\lambda)\,e^{-\beta(\lambda)\,z}\,d\lambda}{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\,\rho(\lambda)\,E(d,\lambda)\,e^{-\beta(\lambda)(z+\Delta z)}\,d\lambda}\right] \bigg/ \Delta z, \qquad (3)$$

$$\beta_c^B(\mathbf{v}_B) = -\ln\!\left[1 - \frac{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\,B^\infty(\lambda)\left(1 - e^{-\beta(\lambda)\,z}\right)d\lambda}{\int_{\lambda_1}^{\lambda_2} S_c(\lambda)\,B^\infty(\lambda)\,d\lambda}\right] \bigg/ z. \qquad (4)$$

[0044] Here, $\lambda_1$ and $\lambda_2$ are the limits of the visible range (400 and 700 nm), and $E(d,\lambda)$ is the spectrum of ambient light at depth $d$.

[0045] Light penetrating vertically attenuates based on the diffuse downwelling attenuation coefficient $K_d(\lambda)$, which differs from the beam attenuation coefficient $\beta(\lambda)$, a quantity that is solely a function of the type, composition, and density of dissolved substances in the ocean. If $E(0,\lambda)$ is the light at the sea surface, then $E(d,\lambda)$ at depth $d$ is:

$$E(d,\lambda) = E(0,\lambda)\, e^{-K_d(\lambda)\, d}. \qquad (5)$$

[0046] Reference is now made to Fig. 3A, which shows a 3D model created from 68 photographs using Photoscan Professional (Agisoft LLC).

[0047] Reference is now made to Fig. 3B, which shows a range map $z$ (in meters) for the image in Figs. 1A-1B, obtained from this model. A color chart is placed on the seafloor to set the scale.

[0048] The veiling light in Eq. 2 is given as:

$$B_c^\infty = \int_{\lambda_1}^{\lambda_2} S_c(\lambda)\, B^\infty(\lambda)\, d\lambda, \qquad (6)$$

where

$$B^\infty(\lambda) = \frac{b(\lambda)\, E(d,\lambda)}{\beta(\lambda)}. \qquad (7)$$

The Present Method

[0049] Based on Eqs. 2-4, to recover $J_c$, the following parameters need to be known or estimated:

• the optical water type, determined by $b$, $\beta$, and $K_d$;

• the ambient light $E_d$;

• the distance between the camera and the scene along the line of sight, $z$;

• the depth at which the photo was taken, $d$;

• the reflectance of each object in the scene, $\rho$; and

• the spectral response of the camera, $S_c$.

[0050] These parameters are rarely, if ever, known at the time an underwater photo is taken.

[0051] In D. Akkaynak and T. Treibitz [2018], and in D. Akkaynak, T. Treibitz, T. Shlesinger, R. Tamir, Y. Loya, and D. Iluz, "What is the space of attenuation coefficients in underwater computer vision?", in Proc. IEEE CVPR, 2017, it was shown that $\beta_c^D$ was most strongly governed by $z$, and $\beta_c^B$ was most affected by the optical water type and illumination $E$. Therefore, in some embodiments, the present method attempts to tackle these specific dependencies. Because the coefficients vary with imaging angle and exposure, it is assumed that they generally cannot be transferred across images, even those taken sequentially with the same camera, and therefore the relevant parameters for a given image are estimated from that image only.

Imaging and Range Map Generation

[0052] As $\beta_c^D$ depends heavily on $z$, a range map of the scene is required, which may be obtained using, e.g., structure-from-motion (SFM), commonly used underwater to measure the structural complexity of reefs and in archaeology. The present method requires an absolute value for $z$, whereas SFM provides range only up to scale, so objects of known sizes are placed in the scene (see Figs. 3A-3B). When imaging from underwater vehicles, their navigation sensors can provide velocity or altitude. An alternative is stereo imaging, which requires the use of two synchronized cameras and a straightforward in-water calibration before the imaging survey begins. Additionally, methods for estimating range from monocular imaging can be used.
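By way of non-limiting illustration, the following is a minimal sketch of converting an up-to-scale SFM range map to absolute meters using an object of known size placed in the scene; the object dimensions, pixel measurements, and function names are hypothetical.

```python
import numpy as np

def scale_sfm_range_map(z_relative, known_length_m, measured_length_sfm):
    """Convert an up-to-scale SFM range map to meters using a scale reference
    (e.g., a color chart of known physical size) visible in the reconstruction."""
    scale = known_length_m / measured_length_sfm   # meters per SFM unit
    return z_relative * scale

# Hypothetical values: a 0.28 m chart edge measures 0.07 units in the SFM model.
z_rel = np.random.rand(480, 640) * 5.0
z_m = scale_sfm_range_map(z_rel, known_length_m=0.28, measured_length_sfm=0.07)
```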

Scene Reconstruction

[0053] From Eqs. 1 and 2 there can be obtained:

$$J_c = D_c\, e^{\beta_c^D(z)\, z}, \qquad (8)$$

where $D_c = I_c - B_c$.

[0054] In one example, Eq. 2 is solved where the $z$ dependency of $\beta_c^D$ is explicitly kept, but other dependencies are ignored. $J_c$ is an image whose colors are only corrected along $z$, and, depending on the imaging geometry, it may need further correction to achieve the colors of an image that was taken at the sea surface. Let $J_s$ denote the image taken at the surface. Then,

$$J_s = J_c / W_c, \qquad (9)$$

where $W_c$ is the white point of the ambient light at the camera (i.e., at depth $d$), and $J_s$ is $J_c$ globally white balanced.
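By way of non-limiting illustration, a minimal sketch of the recovery of Eqs. 8-9, assuming the backscatter map, the range-dependent attenuation coefficients, and the white point have already been estimated; all inputs and names are illustrative placeholders.

```python
import numpy as np

def recover_scene(I, B, beta_D, z, W):
    """Recover J_c per Eq. 8 and globally white balance per Eq. 9.

    I      : (H, W, 3) raw linear image
    B      : (H, W, 3) estimated backscatter map
    beta_D : (H, W, 3) range-dependent attenuation coefficients (e.g., from Eq. 11)
    z      : (H, W)    range map in meters
    W      : (3,)      estimated white point of the ambient light at the camera
    """
    D = np.clip(I - B, 0.0, None)            # direct signal: D_c = I_c - B_c
    J = D * np.exp(beta_D * z[..., None])    # undo attenuation along z (Eq. 8)
    J_s = np.clip(J / W, 0.0, 1.0)           # global white balance (Eq. 9)
    return J_s
```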

Parameter Estimation

[0055] Backscatter increases exponentially with $z$ and eventually saturates (Fig. 2). Where the scene reflectance $\rho_c \to 0$ (all light absorbed), or $E \to 0$ (complete shadow), the captured RGB intensity $I_c \to B_c$.

[0056] In some embodiments, the present disclosure provides for searching the image for very dark or shadowed pixels, and using them to get an initial estimate of backscatter. This approach attempts to find the backscattered signal where $D_c$ is minimal, but differs from previous methods in utilizing a known range map rather than relying on an estimated range map.

[0057] In some embodiments, the present method searches for the darkest RGB triplets, rather than identifying the darkest pixels independently in each color channel, and thus does not form a dark channel image. The small number of unconnected pixels identified by the present method is sufficient in view of the available corresponding range information and a physical model of how $B_c$ behaves with $z$.

[0058] In some embodiments, backscatter may be estimated as follows: first, the range map may be partitioned into evenly spaced clusters spanning the minimum and maximum values of $z$. In each range cluster, $I_c$ is searched for the RGB triplets in the bottom 1 percentile, denoted by $\Omega$. Across the whole image, $\Omega$ yields an overestimate of the backscatter, $\hat{B}_c$, which is modeled as:

$$\hat{B}_c = B_c^\infty \left(1 - e^{-\beta_c^B z}\right) + J_c'\, e^{-\beta_c^{D\prime} z}, \qquad (10)$$

where the expression $J_c'\, e^{-\beta_c^{D\prime} z}$ represents a small residual term that behaves like the direct signal.

[0059] Using non-linear least squares fitting, the parameters $B_c^\infty$, $\beta_c^B$, $J_c'$, and $\beta_c^{D\prime}$ are estimated subject to bounds of, e.g., [0, 1], [0, 5], [0, 1], and [0, 5], respectively. For this step, the $z$-dependency of $\beta_c^{D\prime}$ may be ignored. If information about the camera sensor, water type, etc., is available, the bounds may be further refined using a known locus of coefficient values.

[0060] Depending on the scene, the residual can be left out of Eq. 10 if the reflectance of the found dark pixels is perfectly black; if they are under a shadow; if $z$ is large; or if the water is extremely turbid ($B_c \gg D_c$). In all other cases, the inclusion of the residual term is important. In reef scenes, due to their complex 3D structure, there are often many shadowed pixels which provide direct estimates of backscatter.
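By way of non-limiting illustration, a sketch of the dark-pixel search and the bounded non-linear fit of Eq. 10 using SciPy; the number of range clusters and the percentile and bounds follow the values mentioned above, while function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_backscatter(I, z, n_bins=10, frac=0.01):
    """Fit Eq. 10, B_hat = B_inf*(1 - exp(-beta_B*z)) + J_res*exp(-beta_Dp*z),
    to the darkest RGB triplets found in evenly spaced range clusters."""
    # 1. Collect the darkest pixels (bottom percentile of R+G+B) per range cluster.
    edges = np.linspace(z.min(), z.max(), n_bins + 1)
    brightness = I.sum(axis=-1)
    pts_z, pts_rgb = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (z >= lo) & (z <= hi)
        if not mask.any():
            continue
        thresh = np.quantile(brightness[mask], frac)
        sel = mask & (brightness <= thresh)
        pts_z.append(z[sel])
        pts_rgb.append(I[sel])
    pts_z = np.concatenate(pts_z)
    pts_rgb = np.concatenate(pts_rgb)

    # 2. Fit Eq. 10 per color channel with bounded non-linear least squares.
    def model(zz, B_inf, beta_B, J_res, beta_Dp):
        return B_inf * (1 - np.exp(-beta_B * zz)) + J_res * np.exp(-beta_Dp * zz)

    params = []
    for c in range(3):
        p, _ = curve_fit(model, pts_z, pts_rgb[:, c],
                         p0=[0.1, 1.0, 0.1, 1.0],
                         bounds=([0, 0, 0, 0], [1, 5, 1, 5]))
        params.append(p)
    return np.array(params)   # rows: (B_inf, beta_B, J_res, beta_Dp) per channel

# The backscatter map for the whole image then follows from the fitted parameters:
# B[..., c] = B_inf_c * (1 - np.exp(-beta_B_c * z))
```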

[0061] In some embodiments, backscatter estimation may be performed using additional and/or other methods, such as, but not limited to, histograms, statistical analyses, deep learning methods, and the like.

[0062] Figs. 4A-4B demonstrate the performance of this method in a calibrated experiment.

[0063] Fig. 4A shows a color chart imaged underwater at various ranges. The top row in Fig. 4A shows the raw images $I_c$, and the bottom row shows the corresponding backscatter-removed images $D_c$.

[0064] Fig. 4B shows the $B_c$ calculation for each color channel according to the present method (x's), and the color-chart ground truth backscatter calculations (o's). As can be seen, the values are almost identical.

[0065] To acquire the images in Fig. 4A, a chart was mounted on a buoy line in blue water (to minimize interreflections from the seafloor and surface), and photographed from decreasing distances. The veiling effect of backscatter is clearly visible in the images acquired from farther away, and it decreases as the distance $z$ between the camera and the chart decreases (Fig. 4A).

[0066] For each image, the ground-truth backscatter is calculated using the achromatic patches of the chart, and also estimated using the present method. The results are presented in Fig. 4B. The resulting $B_c$ values are almost identical; no inputs (e.g., water type) other than $I_c$ and $z$ were needed to obtain this result. Note that the black patch of the color chart was not picked up in $\Omega$ in any of the images, indicating that it is indeed just a dark gray, much lighter than true black or shadowed pixels.

[0067] It was previously shown that $\beta_c^D$ varies most strongly with range $z$. Inspecting Eq. 3 suggests that this variation is in the form of an exponential decay. Accordingly, before extracting $\beta_c^D$ from images, the relationship between $\beta_c^D$ and $z$ must be formulated.

[0068] Figs. 5A-5D show an experiment where a color chart and a Nikon D90 camera were mounted on a frame roughly 20 cm apart, and lowered in this configuration from the surface to a depth of 30 m underwater, while taking photographs. Backscatter and attenuation between the camera and the chart are both negligible, because the distance $z$ between the chart and the camera is small, yielding $I_c \approx J_c$. In this setup, the color loss is due to the effective attenuation coefficient acting over the vertical distance $d$ from the sea surface, and is captured in the white point of the ambient light, $W_c$, at each depth.

[0069] Fig. 5A shows raw images captured by the camera (top row; not all are shown), and the same images after white balancing using the achromatic patch (bottom row). Brightness in each image was manually adjusted for visualization.

[0070] Fig. 5B shows a graph representing the value of $\beta_c^D$. The spectral response was that of a Nikon D90 camera, assuming a standard CIE D65 illuminant at the surface. The reflectance of the second-brightest gray patch was measured, noting that it does not reflect uniformly. The diffuse downwelling attenuation coefficient $K_d(\lambda)$ used for the optical water type was measured in situ.

[0071] Fig. 5C illustrates a measured water type curve which agrees well with the oceanic water types (black curves in the graph).

[0072] In Fig. 5D, $\beta_c^D$ decays as a 2-term exponential with $z$, as shown by all three methods. In some embodiments, additional and/or different parametrizations may be used.

[0073] From each image, the effective $\beta_c^D$ was calculated in the vertical direction in two different ways: from pairwise images, and by using Eq. 9 with $W_c$ extracted from the intensity of the second (24%) gray patch in the color chart. Additionally, Eq. 3 was used to calculate the theoretical value of $\beta_c^D$ in the respective water type using the spectral response of the camera and the measured $K_d(\lambda)$ of the water body (which acts in the vertical direction). All three ways of estimating $\beta_c^D$ in Figs. 5A-5D demonstrate that $\beta_c^D$ decays with distance, in this case $d$.

[0074] Based on the data in Figs. 5A-5D and additional simulations, the dependency of $\beta_c^D$ on any range $z$ may be described using a 2-term exponential in the form of:

$$\beta_c^D(z) = a\, e^{b z} + c\, e^{d z}. \qquad (11)$$

[0075] In some embodiments, additional and/or other parametrizations may be used, such as polynomials, a line model (for short ranges), or a 1-term exponential.

[0076] In some embodiments, an initial, coarse estimate of $\beta_c^D(z)$ may be obtained from an image. Assuming $B_c$ is successfully removed from the image, $\beta_c^D(z)$ can be estimated from the direct signal $D_c$. Note from Eq. 2 that the direct signal is the product of the scene $J_c$ (at the location of the camera) and the attenuation factor $e^{-\beta_c^D(z)\, z}$. Thus, the recovery of the scene $J_c$ reduces to a problem of estimating the illuminant map $\hat{E}_c(z)$ between the camera and the scene, which varies spatially. Given an estimate of the local illuminant map $\hat{E}_c(z)$, an estimate of $\beta_c^D(z)$ may be obtained as follows:

$$\hat{\beta}_c^D(z) = -\frac{\ln \hat{E}_c(z)}{z}. \qquad (12)$$

[0077] Estimation of an illuminant locally is a well-studied topic in the field of computational color constancy. Several methods, most notably the Retinex model which mimics a human’s ability to discount varying illuminations, have been applied on underwater imagery, and a recent work showed that there is a direct linear relationship between atmospheric image dehazing and Retinex. If backscatter is properly removed from original images, many of the multi-illuminant estimation methods may be expected to work well on underwater images.

[0078] In some embodiments, a variant of the local space average color (LSAC) method may be employed, as it utilizes a known range map. This method works as follows: for a given pixel $(x, y)$ in color channel $c$, the local space average color $a_c(x, y)$ is estimated iteratively through updating the equations:

$$a_c'(x, y) = \frac{1}{\left|N_e(x, y)\right|} \sum_{(x', y') \in N_e(x, y)} a_c(x', y'), \qquad (13)$$

$$a_c(x, y) = D_c(x, y)\, p + a_c'(x, y)\,(1 - p), \qquad (14)$$

where the neighborhood $N_e(x, y)$ is defined as the 4-connected pixels neighboring the pixel at $(x, y)$ which are closer to it than a range threshold $\epsilon$:

$$N_e(x, y) = \left\{(x', y') : \left|z(x, y) - z(x', y')\right| \le \epsilon\right\}.$$

[0079] Here, the initial value of $a_c(x, y)$ is taken as zero for all pixels, since after a large number of iterations the starting value becomes insignificant. The parameter $p$ describes the local area of support over which the average is computed and depends on the size of the image; a large $p$ means that the local space average color will be computed for a small neighborhood. Then, the local illuminant map is found as $\hat{E}_c = f\, a_c$, where $f$ is a factor based on geometry that scales all color channels equally and can be found based on the scene viewed. We use $f = 2$ for a perpendicular orientation between the camera and the scene.
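By way of non-limiting illustration, a sketch of the range-aware local space average color iteration (Eqs. 13-14 as reconstructed above); the values of $p$, the range threshold, and the iteration count are illustrative assumptions, and image borders are handled only crudely here.

```python
import numpy as np

def local_space_average_color(D, z, p=0.01, eps=0.3, n_iter=200):
    """Iteratively estimate the local space average color a_c(x, y), averaging only
    over 4-connected neighbors whose range differs by less than eps (meters)."""
    H, W, _ = D.shape
    a = np.zeros_like(D)                       # initial value is zero everywhere
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_iter):
        acc = np.zeros_like(D)
        cnt = np.zeros((H, W, 1))
        for dy, dx in shifts:
            a_n = np.roll(a, (dy, dx), axis=(0, 1))   # neighbor values (wraps at borders)
            z_n = np.roll(z, (dy, dx), axis=(0, 1))
            valid = (np.abs(z - z_n) < eps)[..., None]
            acc += np.where(valid, a_n, 0.0)
            cnt += valid
        a_prime = acc / np.maximum(cnt, 1)     # neighborhood average (Eq. 13)
        a = D * p + a_prime * (1 - p)          # blend with the direct signal (Eq. 14)
    return a

# The local illuminant map is then E_hat = f * a, with f = 2 for a camera
# oriented perpendicular to the scene.
```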

[0080] In some embodiments, the initial estimate of $\beta_c^D(z)$ may be refined using the known range map $z$. Accordingly, in some embodiments, Eq. 12 may be re-written as:

$$\hat{z} = -\frac{\ln \hat{E}_c(z)}{\beta_c^D(z)},$$

with a minimization:

$$\min_{\beta_c^D(z)} \left\lVert z - \hat{z} \right\rVert,$$

where $\beta_c^D(z)$ is defined in the form of Eq. 11 with parameters $a, b, c, d$. The lower and upper bounds for these parameters to obtain a decaying exponential will be $[0, -\infty, 0, -\infty]$ and $[\infty, 0, \infty, 0]$, respectively, but can be narrowed using the initial estimate obtained from Eq. 12.
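By way of non-limiting illustration, a sketch of the coarse estimate of Eq. 12 followed by a per-channel refinement of the 2-term exponential of Eq. 11, obtained by minimizing the discrepancy between the known range map and the range predicted from the illuminant map; optimizer settings, initial guesses, and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_beta_D(E_hat, z, eps=1e-6):
    """Per channel: coarse beta via Eq. 12, then refine beta(z) = a*exp(b*z) + c*exp(d*z)
    (Eq. 11) by minimizing || z - z_hat ||, with z_hat = -ln(E_hat) / beta(z)."""
    zf = z.ravel()
    params = []
    for ch in range(3):
        Ef = np.clip(E_hat[..., ch].ravel(), eps, 1.0)
        beta_coarse = -np.log(Ef) / np.maximum(zf, eps)          # Eq. 12

        def residual(q):
            a, b, c, d = q
            beta = a * np.exp(b * zf) + c * np.exp(d * zf)       # Eq. 11
            z_hat = -np.log(Ef) / np.maximum(beta, eps)
            return z_hat - zf

        q0 = [np.median(beta_coarse), -0.1, np.median(beta_coarse), -0.1]
        fit = least_squares(residual, q0,
                            bounds=([0, -np.inf, 0, -np.inf], [np.inf, 0, np.inf, 0]))
        params.append(fit.x)
    return np.array(params)    # rows: (a, b, c, d) per channel
```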

[0081] In some embodiments, attenuation parameter estimation may be performed using additional and/or other methods, such as, but not limited to, histograms, statistical analyses, deep learning methods, and the like.

[0082] In some embodiments, estimation of backscatter parameters and attenuation parameters may be performed as a single step analysis, using any one or more suitable statistical methods, and/or deep learning methods.

[0083] Using the estimated parameters, $J_c$ may be recovered using Eq. 8.

[0084] In some embodiments, white balancing may be performed before or after performing the steps of the present method. In $J_c$, the spatial variation of ambient light has already been corrected, so all that remains is the estimation of the global white point $W_c$. This can be done using statistical or learning-based methods. In some embodiments, for scenes that contain a sufficiently diverse set of colors, a method such as the Gray World Hypothesis may be used, and for monochromatic scenes, a spatial-domain method that does not rely on color information may be used.
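By way of non-limiting illustration, a sketch of a Gray World estimate of the global white point $W_c$ for use in Eq. 9, appropriate, as noted above, only for scenes containing a sufficiently diverse set of colors; the normalization choice is an illustrative assumption.

```python
import numpy as np

def gray_world_white_point(J):
    """Estimate the global white point W_c as the per-channel mean of the image,
    normalized so that the green channel equals 1 (Gray World Hypothesis)."""
    W = J.reshape(-1, 3).mean(axis=0)
    return W / W[1]

# Usage (Eq. 9): J_s = np.clip(J / gray_world_white_point(J), 0.0, 1.0)
```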

Photofinishing

[0085] In some embodiments, a camera pipeline manipulation platform may be used to convert any outputs of the present method to a standard color space. In some embodiments, any other photofinishing methods can be applied.

Datasets

[0086] Five underwater RGBD datasets were used for testing (see Table 1 below). All were acquired under natural illumination, in raw format, with constant exposure settings for a given set, and contain multiple images with color charts.

Table 1: Datasets used for testing with SFM-based range maps for each image. Each set contains multiple images with a color chart.

Experimental Results

[0087] The present method was validated using the datasets detailed in Table 1 above and a stereo RGBD dataset. The present method was evaluated using the following scenarios:

(i) Scenario 1 (S1): Simple contrast stretch.

(ii) Scenario 2 (S2): Applying the DCP model with an incorrect estimate of $B_c$ (e.g., because the model typically overestimates $B_c$ in underwater scenes). For this purpose, the built-in imreducehaze function in MATLAB was used.

(iii) Scenario 3 (S3): Applying a known (former) model with a correct estimate of $B_c$ (i.e., correct $B_c^\infty$ and $\beta_c^B$), and assuming $\beta_c^D = \beta_c^B$.

(iv) Scenario 4 (S4): Applying the revised model, with a correct estimate of $B_c$, and $J_c$ obtained directly from the estimated local illuminant map, without explicitly computing $\beta_c^D(z)$.

(v) Scenario 5 (S5): The present method, which uses the revised model where $\beta_c^D(z)$ is estimated in the form of Eq. 11.

[0088] Because the present method is the first algorithm to use the revised underwater image formation model and has the advantage of having a range map, it was not tested against single-image color reconstruction methods that also try to estimate the range/transmission. After a meticulous survey of these methods, it was found that DCP-based ones were not able to consistently correct colors, and others were designed to enhance images rather than achieve physically accurate corrections (see, e.g., D. Berman, D. Levy, S. Avidan, and T. Treibitz, "Underwater single image color restoration using haze-lines and a new quantitative dataset", Arxiv, 2018). A previously proposed method aimed to recover physically accurate colors (using the former model), but it only works for horizontal imaging with sufficiently large distances in the scene, making it unsuitable for many of our images.

[0089] Raw images, range maps, and the corresponding S1-S5 results are presented in Figs. 6A-6E (datasets 1-5 in Table 1), and on the stereo database (Figs. 7A-7E). For evaluation, the RGB angular error $\bar{\psi}$ was used between the six grayscale patches of each chart and a pure gray color, averaged per chart:

$$\bar{\psi} = \frac{1}{6} \sum_{i=1}^{6} \cos^{-1}\!\left(\frac{\hat{I}_i \cdot \mathbf{1}}{\sqrt{3}\,\lVert \hat{I}_i \rVert}\right),$$

where $\hat{I}_i$ is the RGB value of the $i$-th grayscale patch in the corrected image and $\mathbf{1} = [1, 1, 1]^\top$.

[0090] In Figs. 6A-6E, for each chart and method, $\bar{\psi}$ rounded to the nearest integer is given in the inset; chart #1 is closest to the camera. Average errors for the dataset are:
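By way of non-limiting illustration, a sketch of the average RGB angular error $\bar{\psi}$ between corrected gray-patch values and the achromatic axis [1, 1, 1]; extraction of the patch values from the image is assumed to be done elsewhere, and the function name is illustrative.

```python
import numpy as np

def mean_angular_error_deg(gray_patches):
    """gray_patches: (6, 3) RGB values of a chart's gray patches after correction.
    Returns the average angle (degrees) between each patch and pure gray [1, 1, 1]."""
    gray_axis = np.ones(3) / np.sqrt(3.0)
    v = gray_patches / np.linalg.norm(gray_patches, axis=1, keepdims=True)
    cosang = np.clip(v @ gray_axis, -1.0, 1.0)
    return np.degrees(np.arccos(cosang)).mean()
```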

[0091] Reference is now made to Figs. 7A-7E, which show results on the stereo dataset. Their range maps were further processed to remove spurious pixels. $\bar{\psi}$ rounded to the nearest integer is given in the inset; chart #1 is closest to the camera. S1 and S2 do not utilize range maps; for the others, a lack of $\bar{\psi}$ values indicates missing range information. The average errors across all images are:

[0092] A lower $\bar{\psi}$ value indicates better correction (though see exceptions below); errors (in degrees) are listed in the insets of Figs. 6A-6E and 7A-7E per color chart for scenes that had them, rounded to the nearest integer.

[0093] In all cases, the simple contrast stretch S1, which is global, works well when scene distances are more or less uniform. The DCP method (S2) often overestimates backscatter (which can improve visibility), and generally distorts and hallucinates colors. For example, what should be uniformly colored sand appears green and purple in both datasets.

[0094] In D1_3272, the gray patches of the color chart in S2 have visible purple artifacts, yet their error is lower than that of S5, suggesting that $\bar{\psi}$ is not an optimal metric for quantifying color reconstruction error. In S3-S5, the correct amount of $B_c$ is subtracted. In S3, attenuation is corrected with a constant $\beta_c^D$, as had been done by methods using the former model.

[0095] When there is a large variation in range (e.g., Figs. 7A-7E), the failure of the constant-$\beta_c^D$ assumption is most evident, and this is also where S5 has the biggest advantage (though S3 also fails on scenes with short ranges, e.g., D3 and D4). Range maps often tend to be least accurate furthest from the camera, which also adds to the difficulty of reconstructing colors at far ranges. S4 sometimes yields lower errors on the color cards than S5. This makes sense, as it is easier to calculate the illuminant on the cards; however, S5 results are better on the complete scenes. S4 can be used for a first-order correction that is better than previous methods.

[0096] The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.

[0097] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Rather, the computer readable storage medium is a non-transient (i.e., non-volatile) medium.

[0098] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

[0099] Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

[0100] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.

[0101] These computer readable program instructions may be provided to a processor of modified purpose computer, special purpose computer, a general computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein includes an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

[0102] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

[0103] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

[0104] The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.