

Title:
SYSTEMS AND METHODS FOR VISION ENHANCEMENT VIA VIRTUAL DIFFRACTION AND COHERENT DETECTION
Document Type and Number:
WIPO Patent Application WO/2024/092060
Kind Code:
A1
Abstract:
Systems and methods for vision enhancement via virtual diffraction and coherent detection in accordance with embodiments of the invention are illustrated. One embodiment includes a method for performing a 2D Fourier transform to project an input image into the spectral domain creating a spectral domain representation, multiplying the spectral domain representation by a complex exponential having an argument that is a low-pass 2D function of frequency, creating a complex representation, performing an inverse 2D Fourier transform to project the complex representation back into the spatial domain creating an intermediate output, obtaining an output phase for each pixel of the intermediate output by computing an inverse tangent of the quotient of the pixel's imaginary component by its real component, and generating an output image by modifying pixels of the input image with the output phase of the corresponding pixel of the intermediate output.

Inventors:
JALALI BAHRAM (US)
MACPHEE CALLEN (US)
Application Number:
PCT/US2023/077807
Publication Date:
May 02, 2024
Filing Date:
October 25, 2023
Assignee:
UNIV CALIFORNIA (US)
International Classes:
G06V10/58; G06F18/2413; G06N20/20
Domestic Patent References:
WO2022165526A12022-08-04
Foreign References:
US20190125190A12019-05-02
US20190307334A12019-10-10
US20210290052A12021-09-23
Attorney, Agent or Firm:
SUNG, Brian, K. (US)
Claims:
WHAT IS CLAIMED IS: 1. A computer-implemented method for performing image enhancement using simulative (virtual) diffraction and coherent detection, the method comprising: performing a 2D Fourier transform to project an input image into the spectral domain creating a spectral domain representation; multiplying the spectral domain representation by a complex exponential having an argument that is a low-pass 2D function of frequency, creating a complex representation; performing an inverse 2D Fourier transform to project the complex representation back into the spatial domain creating an intermediate output; obtaining an output phase for each pixel of the intermediate output by computing an inverse tangent of the quotient of the pixel’s imaginary component by its real component; and generating an output image by modifying pixels of the input image with the output phase of the corresponding pixel of the intermediate output. 2. A computer-implemented method for performing image enhancement using approximated simulative (virtual) diffraction and coherent detection, the method comprising: adding a constant DC bias term to each pixel of an input image creating a first modified image; multiplying each pixel of the first modified image by a negative constant gain value to create a second modified image; dividing each pixel of the second modified image by the spatially corresponding pixel of the input image creating a third modified image; and obtaining an output image by computing the inverse tangent of each pixel of the third modified image.

3. The method of claims 1 and 2, further comprising transforming the input image to HSV (hue, saturation, value) color space prior to projecting the input image into the spectral domain. 4. The method of claim 3, further comprising modifying the value channel of the HSV representation alone for low-light image enhancement. 5. The method of claim 3, further comprising modifying the saturation channel alone for color enhancement. 6. The method of claim 1, further comprising adding a small constant bias term to the spectral domain representation for numerical stabilization and noise reduction. 7. The method of claim 1, wherein the complex exponential is a low-pass filter comprising a Gaussian function with zero mean. 8. The method of claim 1, further comprising multiplying the argument of the inverse tangent by a phase activation gain constant. 9. The method of claim 1, further comprising approximating the real part of pixels after inverse Fourier transform by the input image pixels. 10. The method of claims 1 and 2, further comprising normalizing the output phase to match the image formatting convention. 11. The method of claim 2, further comprising calculating an additive bias term and a multiplicative gain term from a preset function based on an enhancement power term.

12. A vision enhancement system using simulated (virtual) diffraction and coherent detection, the system comprising: a computing device, comprising: at least one processor; a memory; and at least one non-transitory computer-readable media comprising program instructions that are executable by the at least one processor such that the system is configured to perform the method of claims 1-11.

Description:
Systems and Methods for Vision Enhancement via Virtual Diffraction and Coherent Detection STATEMENT OF FEDERAL SUPPORT [0001] This invention was made with government support under N00014-14-1-0505 awarded by the U.S. Navy, Office of Naval Research. The government has certain rights in the invention. CROSS-REFERENCE TO RELATED APPLICATIONS [0002] The current application claims the benefit of and priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/380,927 entitled “VEViD: Vision Enhancement via Virtual Diffraction and Coherent Detection” filed October 25, 2022. The disclosure of U.S. Provisional Patent Application No. 63/380,927 is hereby incorporated by reference in its entirety for all purposes. FIELD OF THE INVENTION [0003] The present invention generally relates to image processing and, more specifically, computer vision enhancement. BACKGROUND [0004] Image enhancement refers to the process of improving the visual quality of an image by manipulating its attributes, such as brightness, contrast, and sharpness. Image enhancement techniques aim to produce images that are more visually appealing, informative, and useful for various applications. Image enhancement can be performed on digital images obtained from various sources such as cameras, satellites, and medical imaging devices. The need for image enhancement arises due to various factors, such as low lighting conditions, poor-quality sensors, and environmental factors that can affect the quality of the captured image. [0005] Image enhancement techniques can be classified into two categories: point processing and spatial processing. Point processing techniques involve manipulating the pixel values of an image without considering the spatial relationship between neighboring pixels. Spatial processing techniques, on the other hand, use information from neighboring pixels to enhance the quality of the image. 
Image enhancement finds applications in numerous fields, such as medical imaging, satellite imaging, surveillance, and entertainment. SUMMARY OF THE INVENTION [0006] Systems and methods for vision enhancement via virtual diffraction and coherent detection in accordance with embodiments of the invention are illustrated. One embodiment includes a method for performing image enhancement using simulated (virtual) diffraction and coherent detection. The method includes steps for performing a 2D Fourier transform to project an input image into the spectral domain creating a spectral domain representation, multiplying the spectral domain representation by a complex exponential having an argument that is a low-pass 2D function of frequency, creating a complex representation, performing an inverse 2D Fourier transform to project the complex representation back into the spatial domain creating an intermediate output, obtaining an output phase for each pixel of the intermediate output by computing an inverse tangent of the quotient of the pixel’s imaginary component by its real component, and generating an output image by modifying pixels of the input image with the output phase of the corresponding pixel of the intermediate output. [0007] One embodiment includes a method for performing image enhancement using approximated simulative (virtual) diffraction and coherent detection. The method includes adding a constant DC bias term to each pixel of an input image creating a first modified image, multiplying each pixel of the first modified image by a negative constant gain value to create a second modified image, dividing each pixel of the second modified image by the spatially corresponding pixel of the input image creating a third modified image, and obtaining an output image by computing the inverse tangent of each pixel of the third modified image. 
[0008] In a further embodiment, transforming the input image to HSV (hue, saturation, value) color space prior to projecting the input image into the spectral domain. [0009] In still another embodiment, modifying the value channel of the HSV representation alone for low-light image enhancement. [0010] In a still further embodiment, modifying the saturation channel alone for color enhancement. [0011] In yet another embodiment, adding a small constant bias term to the spectral domain representation for numerical stabilization and noise reduction. [0012] In a yet further embodiment, the complex exponential is a low-pass filter comprising a Gaussian function with zero mean. [0013] In another additional embodiment, multiplying the argument of the inverse tangent by a phase activation gain constant. [0014] In a further additional embodiment, approximating the real part of pixels after inverse Fourier transform by the input image pixels. [0015] In another embodiment again, normalizing the output phase to match the image formatting convention. [0016] One embodiment includes a vision enhancement system using simulated (virtual) diffraction and coherent detection. The system includes a computing device. 
The computing device further includes at least one processor, a memory, and at least one non-transitory computer-readable media comprising program instructions that are executable by the at least one processor such that the system is configured to perform the method of performing a 2D Fourier transform to project an input image into the spectral domain creating a spectral domain representation, multiplying the spectral domain representation by a complex exponential having an argument that is a low-pass 2D function of frequency, creating a complex representation, performing an inverse 2D Fourier transform to project the complex representation back into the spatial domain creating an intermediate output, obtaining an output phase for each pixel of the intermediate output by computing an inverse tangent of the quotient of the pixel’s imaginary component by its real component, and generating an output image by modifying pixels of the input image with the output phase of the corresponding pixel of the intermediate output. [0017] One embodiment includes a method for performing image enhancement using simulated diffraction and coherent detection. The method includes converting a raw input image from an RGB (red, green, blue) color space representation to obtain a light field representation in hsv (hue, saturation, value) color space, transforming the light field representation to the Fourier domain, generating an imaginary component of an output phase for each pixel of the converted input image by applying a spectral phase filter to the converted input image, obtaining a complex signal of the light field in the spatial domain based on the transformed light field representation and the imaginary component, obtaining a combined output phase for each pixel of the converted input image using the complex signal of the light field, and applying the combined output phase for each pixel to the raw input image to obtain an output image. 
[0018] One embodiment includes a method for performing image enhancement using simulated diffraction and coherent detection. The method includes converting a raw input image from an RGB (red, green, blue) color space representation to obtain a light field representation in hsv (hue, saturation, value) color space, generating a real component of an output phase for each pixel of the converted input image using an approximation based on the corresponding pixel of the converted input image, generating an imaginary component of the output phase for each pixel of the converted input image by applying a simplified spectral phase filter to the converted input image, obtaining a combined output phase for each pixel of the converted input image using the real and imaginary components of the output phase, and applying the combined output phase for each pixel to the raw input image to obtain an output image. [0019] Additional embodiments and features are set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the specification or may be learned by the practice of the invention. A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure. BRIEF DESCRIPTION OF THE DRAWINGS [0020] The description and claims will be more fully understood with reference to the following figures and data graphs, which are presented as exemplary embodiments of the invention and should not be construed as a complete recitation of the scope of the invention. [0021] Fig.1 illustrates spatial domain representations of a low-light enhancement process in accordance with an embodiment of the invention. [0022] Fig.2A illustrates a process for image enhancement in accordance with an embodiment of the invention. [0023] Fig. 
2B illustrates a pseudocode of an image enhancement algorithm in accordance with an embodiment of the invention. [0024] Fig.3 illustrates the effects of low-light image enhancement in accordance with an embodiment of the invention. [0025] Fig.4 illustrates an accelerated image enhancement process in accordance with an embodiment of the invention. [0026] Fig. 5 illustrates a comparison of enhancement results by a Vision Enhancement via Virtual Diffraction and Coherent Detection (VEViD) algorithm on an NVIDIA Jetson Nano for real-time video processing. [0027] Fig.6 illustrates a network architecture for performing vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with an embodiment of the invention. [0028] Fig.7 illustrates a computing device that can be utilized to perform vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with an embodiment of the invention. [0029] Fig. 8 illustrates an edge server that can be utilized to perform vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with an embodiment of the invention. DETAILED DESCRIPTION [0030] Image enhancement plays a crucial role in improving the visual quality of images, making them more visually appealing, informative, and useful for various applications. In today's world, images are captured from various sources, such as cameras, satellites, and medical imaging devices. However, factors such as low lighting conditions, poor-quality sensors, and environmental factors can affect the quality of the captured image. For example, when digital images are captured in environments with low light, they often suffer from undesirable visual qualities such as low contrast, loss of features, and poor signal-to-noise ratio. [0031] This is where image enhancement technology can demonstrate its value. 
By altering image qualities such as brightness, contrast, and sharpness, image enhancement techniques can effectively enhance the visual quality of images. Image enhancement techniques play a crucial role in producing high-quality images that can be utilized in various fields, including but not limited to medical imaging, satellite imaging, surveillance, and entertainment. [0032] Low-light image enhancement aims to improve the visual quality of images to achieve two goals: enhancing the viewing experience for humans and increasing the accuracy of machine learning. Real-time processing can benefit convenient viewing, while adaptability with machine learning algorithms is a requirement for emerging applications such as autonomous vehicles and security. Capturing video and/or photos in low-light conditions may sometimes be a prerequisite. As video capturing involves a tradeoff between light sensitivity and frame rate, increasing exposure time as a solution to improve image quality can sacrifice frame rate. In some cases, like live-cell tracking in biology, image enhancement is crucial as low light conditions are necessary to avoid cell death caused by exposure to light. [0033] Current methods for image enhancement are largely split between two categories of approaches: classical and deep-learning-based. Many classical algorithms involve the use of Retinex theory, which stems from concepts in human perception theory concerning the decomposition of an image into an illumination and a reflectance constituent. An example of a Retinex-based algorithm is LIME, which utilizes optimized Retinex theory for illumination map generation for high-quality enhancement. Other classical algorithms can include the histogram equalization method, which creates an expanded, more uniform histogram for contrast enhancement and increased dynamic range. 
However, histogram equalization methods often suffer from color distortion and other artifacts and may require additional processing and optimization. [0034] Deep-learning-based image enhancement methods stem from the advancement of powerful data-driven machine-learning algorithms in recent years. Supervised learning methods such as LLNet, MBLLEN, EEMEFN, and TBEFN make use of ground truth datasets for training autoencoder-based algorithms. These methods are capable of high performance in target lighting conditions, but they can be limited in application to greater domains where training data are not readily available. The accuracy of loss functions can be affected by the absence of training data, which can make these methods less effective. Even if the lack of training data is overcome, these methods often do not clearly define the exact enhancement that is desired. This means that these algorithms may be able to produce enhancement effects that could satisfy the threshold set forth by the loss function but are still visually unsatisfactory. [0035] Systems and methods in accordance with many embodiments of the invention, hereinafter referred to as VEViD systems, which stands for vision enhancement via virtual diffraction and coherent detection, can remedy the above limitations of existing methods by providing enhancement algorithms based on the processes of propagation and detection of light. In many embodiments, VEViD systems can emulate physics-based processes in real-time in the virtual space and apply those processes to digital images to enhance colors and light levels. More specifically, VEViD systems can discretize digital images to the 2D spatial domain to present images as spatially varying light fields. Light fields can be subject to the physical processes. VEViD systems can simulate the propagation of light through a physical medium with engineered diffractive properties. 
In several embodiments, physical phenomena such as diffraction are coded as a part of the algorithm, with coherent detection included in the algorithm. Coherent detection can be performed to extract phases of images in the spatial domain to enhance colors and light levels. [0036] In some embodiments, light fields are pixelated, and propagation can impart a phase with dependence on frequency. Temporal frequencies exist in three bands corresponding to the RGB color channels of a digital image. In a variety of embodiments, output images are represented by the phase of the output instead of the intensity. Unlike current methods that involve either sequences of hand-crafted empirical rules or learning- based methods that require training and lack interpretability, in numerous embodiments, VEViD systems can leverage laws of physics as a blueprint for crafting an algorithm. The benefits of having a physics-based algorithm can include low computational burden owing to its simplicity, generalizability to a wide range of domains, and the potential for implementation in the analog (physical) domain using diffractive optics. In several embodiments, VEViD systems can be implemented in analog physical devices for fast and efficient computation. Physics and Mathematics Foundation [0037] Electromagnetic diffraction optical imaging is a process in which light acquires a frequency-dependent phase upon propagation. The phase can increase with spatial frequency, and in the paraxial approximation, the phase can be a quadratic function of frequency. While the human eye and common image sensors respond to the power in the light, electromagnetic diffraction optical imaging systems can work with both the intensity and phase of light, with the latter being measured through coherent detection. In many embodiments, VEViD systems can discretize digital images to the 2D spatial domain to present images as spatially varying light fields. 
VEViD systems can subject the field to physical processes akin to diffraction and coherent detection but in a virtual fashion. The light field may be pixelated, and propagation of the light field can impart a phase with an arbitrary dependence on frequency, which can be different from the monotonically increasing behavior of physical paraxial diffraction. [0038] A general solution to the homogeneous electromagnetic wave equation in rectangular coordinates (x, y, z) can be presented as:

E(x, y, z) = ∬ Ẽ(k_x, k_y, 0) e^{+i k_z z} e^{i (k_x x + k_y y)} dk_x dk_y, (1)

where k_z = sqrt((ω/c)^2 − k_x^2 − k_y^2). The spectral content of the signal after a distance z can gain a phase which can be represented by a spectral phase, φ(k_x, k_y):

Ẽ(k_x, k_y, z) = Ẽ(k_x, k_y, 0) e^{−i φ(k_x, k_y)}. (2)

The propagated signal subjected to the diffractive phase may be rewritten as

E(x, y, z) = IFT{ Ẽ(k_x, k_y, 0) e^{−i φ(k_x, k_y)} }, (3)

where IFT refers to the inverse Fourier transform. The result carries a frequency-dependent phase profile that is entirely described by the arbitrary phase φ(k_x, k_y). The propagation can convert a real-valued input E(x, y, 0) to a complex function E(x, y, z). [0039] The above analysis can be translated from a continuous-valued E(x, y) in the spatial domain to a discrete waveform E[n, m] for digital images. Similarly, analysis on the continuous momentum (k_x, k_y) in the frequency domain may be performed on the discrete momentum (k_n, k_m). [0040] Light fields can be defined as the distribution of “field” strength across a two-dimensional landscape of the input signal with the pixel brightness mapped onto the field strength. The equivalent temporal frequency of a light field may have three bands corresponding to the three fundamental color channels (RGB). To obtain light fields of color images, in many embodiments, input RGB images are transformed into the hsv color space. 
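The diffraction described by Equations (1)-(3) can be exercised numerically. The following is a minimal 1D sketch (the test signal and the phase parameters are illustrative assumptions, not values from this disclosure) showing that applying a spectral phase to a real-valued input returns a complex-valued output whose real part is nearly unchanged when the phase is small:

```python
import numpy as np

def virtual_diffraction_1d(E_in, phase):
    """Apply a frequency-domain phase to a real signal: IFT{FT{E} * exp(-i*phase)}."""
    spectrum = np.fft.fft(E_in)          # forward transform, analogue of Eq. (1)
    spectrum *= np.exp(-1j * phase)      # spectral phase of Eq. (2)
    return np.fft.ifft(spectrum)         # back to the spatial domain, Eq. (3)

# A real-valued test "light field" and a small low-pass Gaussian phase (assumed values).
x = np.linspace(0.0, 1.0, 256)
E_in = 0.5 + 0.4 * np.cos(2 * np.pi * 3 * x)
k = np.fft.fftfreq(x.size)
phase = 0.2 * np.exp(-k**2 / 0.01)       # S = 0.2, T = 0.01, purely illustrative

E_out = virtual_diffraction_1d(E_in, phase)
print(np.max(np.abs(E_out.imag)))        # nonzero: the output gained an imaginary part
```

Because the phase magnitude is small, the real component of `E_out` stays close to `E_in` while the imaginary component grows from exactly zero, which is the behavior the paragraphs above describe.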
Input image transformed into the hsv color space may be denoted as E[n, m; c], where c is the index for the color channel. To preserve color integrity, the diffractive transformation may be operated only on the "v" channel of the image when performing low-light enhancement. [0041] E_i[n, m; c], which represents an image frame, can be conceptualized as an information-carrying "pulse" that is subjected to diffraction, producing a complex output E_o[n, m; c], before applying coherent detection to extract the phase of the output. A normalization process N(⋅) may be performed to map “phixel” values to the appropriate range for digital image representation. Mathematically, the virtual diffraction step is formulated as,

E_o[n, m; c] = ℱ^{-1}{ ℱ{ E_i[n, m; c] + b } ⋅ H̃(k_n, k_m) }. (4)

[0042] In Equation (4), [n, m] denotes the spatial coordinates, and c represents a certain channel in the hsv color space. In several embodiments, VEViD can provide low-light enhancement when operating on the value (v) channel. In some embodiments, VEViD can provide color enhancement when operating on the saturation (s) channel. ℱ and ℱ^{-1} refer to the 2D Fourier transform and inverse Fourier transform, respectively, that may be performed on input signals. b is a regularization term, and G is the phase activation gain term, which will be explained later. The function tan^{-1}(⋅) can calculate the phase pixel, which is referred to as a “phixel”. In discussed embodiments, the spectral phase filter kernel H̃(k_n, k_m) has a phase profile φ(k_n, k_m) that comes with a low-pass characteristic. [0043] In many embodiments, VEViD systems perform diffraction on input light fields using spectral phase filters having a low-pass characteristic. A wide range of low-pass spectral phase functions can be used. A Gaussian function with zero mean and variance T for the frequency-dependent phase can be utilized for this purpose,

φ(k_n, k_m) = S ⋅ exp( −(k_n^2 + k_m^2) / T ). (6)

This can result in a spectral phase filter H̃(k_n, k_m) = e^{−i φ(k_n, k_m)}, where S is a model parameter that maps to propagation loss (or gain). In the propagation of physical waves, the spectral phase induced by diffraction can depend on the propagation length. In several embodiments, this propagation length is reflected in the phase scale parameter, S. The value of S and, hence, the propagation length can be constrained by the requirement that the propagation-induced phase must be small. Following the application of the spectral phase and inverse Fourier transform, coherent detection may be performed on the light field. In many embodiments, coherent detection is performed to produce the real and imaginary components of the light field from which the phase is obtained. The combined processes of diffraction with the low-pass spectral phase and coherent detection can produce the system output φ_o[n, m]:

φ_o[n, m; c] = IPHASE⟨ IFFT{ e^{−i φ(k_n, k_m)} ⋅ FFT{ E_i[n, m; c] } } ⟩, (8)

where IPHASE denotes the phase (angle) of the complex-valued function of its argument. [0044] Fig.1 illustrates spatial domain representations of a low-light enhancement process in accordance with an embodiment of the invention. The input image is a real-valued function. After simulated diffraction, the real component of the input is nearly unchanged. However, the input has acquired a significant imaginary component. After coherent detection, the output is once again a real-valued function but is significantly different from the input. The effect on the spatial frequency domain is shown in Figure 1 (bottom row). The imaginary portion of the spectrum may adopt a central low-frequency spike, while the real portion can undergo corresponding attenuation in its low-frequency component due to energy conservation. Vision Enhancement via Virtual Diffraction and Coherent Detection (VEViD) [0045] In numerous embodiments, VEViD systems can emulate physics-based processes such as diffraction in a virtual space to perform image enhancements. 
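Taken together, the virtual diffraction of Equation (4), the Gaussian spectral phase of Equation (6), and the coherent detection of Equation (8) can be sketched end to end in a few lines. This is a non-authoritative sketch: the parameter values (b, G, S, T) are assumptions chosen for illustration, `arctan2` is used in place of a plain inverse tangent to avoid division by zero, and the enhanced value channel is reapplied by rescaling the RGB pixels, which leaves hue and saturation unchanged without an explicit HSV round trip:

```python
import numpy as np

def vevid(rgb, b=0.16, G=1.2, S=0.2, T=0.05):
    """Low-light enhancement sketch: virtual diffraction + coherent detection.

    `rgb` is a float array of shape (H, W, 3) with values in [0, 1].
    All parameter values are illustrative assumptions.
    """
    v = rgb.max(axis=2)                                 # HSV value channel
    kn = np.fft.fftfreq(v.shape[0])[:, None]            # discrete momentum k_n
    km = np.fft.fftfreq(v.shape[1])[None, :]            # discrete momentum k_m
    phi = S * np.exp(-(kn**2 + km**2) / T)              # Gaussian spectral phase, Eq. (6)
    E_o = np.fft.ifft2(np.fft.fft2(v + b) * np.exp(-1j * phi))  # Eq. (4)
    phase = np.arctan2(G * E_o.imag, E_o.real)          # coherent detection, Eq. (8)
    # Min-max normalize the (negative) phase to [0, 1] as the new value channel.
    v_new = (phase - phase.min()) / (phase.max() - phase.min() + 1e-12)
    # Rescaling RGB by v_new / v modifies only the value channel.
    scale = v_new / np.maximum(v, 1e-12)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

rng = np.random.default_rng(1)
dark = rng.uniform(0.0, 0.25, size=(32, 32, 3))         # synthetic dim image
bright = vevid(dark)
```

Normalization here maps to floats in [0, 1] rather than the [0, 255] convention mentioned in the text; scaling by 255 recovers the 8-bit form.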
Fig.2A illustrates a process for image enhancement in accordance with an embodiment of the invention. Process 200 converts (210) an input image to a light field representation in the hsv (hue, saturation, value) color space. Input images may be real-valued. In many embodiments, input images are converted to light fields in the hsv color space. In some embodiments, a small constant bias term, b, can be added to the field for the purposes of numerical stabilization and noise reduction. [0046] Process 200 transforms (220) the light field to the Fourier domain. Transformation can be done using FFT. To achieve low-light enhancement, the hue and saturation channels need not be transformed, as they can help with retaining the original color mapping of the input image. In several embodiments, VEViD systems transform the v channel for light-level enhancement. [0047] In many embodiments, VEViD systems are capable of performing color enhancement for realistic tone matching. Color enhancement may be performed in a process similar to the process described in Fig.2A, with the transformation being applied to the saturation channel instead of the value channel. [0048] Process 200 applies (230) a spectral phase filter to the transformed light field. Spectral phase filters can be used to introduce diffraction to the light field representation, and the spectral phase induced by diffraction can produce an imaginary component. In several embodiments, the light field is multiplied elementwise by spectral phase filters, which are complex exponentials. In numerous embodiments, the argument of the exponential defines the frequency-dependent phase. [0049] Process 200 obtains (240) a complex signal of the light field in the spatial domain. In certain embodiments, VEViD systems use inverse Fourier transform (IFFT) to obtain an intermediate output, which can be a complex signal for each pixel in the spatial domain based on the transformed light field. 
[0050] Process 200 obtains (250) an output phase with coherent detection. In several embodiments, coherent detection is performed by multiplying the obtained complex signal with a parameter G called a phase activation gain before the computation of the output phase. In numerous embodiments, the output phase is obtained by computing an inverse tangent of the quotient of each pixel’s imaginary component by its real component. Mathematically, the inverse tangent operation in phase detection can behave like an activation function. In several embodiments, the output phase is normalized to match the image formatting convention [0 to 255]. [0051] Process 200 applies (260) the output phase to the original image. This output can be applied to the original image as the new v channel for low-light enhancement or the s channel for color enhancement. In many embodiments, this obtains an output image based on the input image with lighting levels enhanced. [0052] Process 200 can be performed by computing devices that utilize VEViD systems to process and enhance images and/or videos. Computing devices may include but are not limited to personal computers, mobile phones, and dashcams. Process 200 can also be performed by data servers or edge servers that utilize VEViD systems to process and enhance images and/or videos. [0053] While specific processes for image enhancement are described above, any of a variety of processes can be utilized to enhance images as appropriate to the requirements of specific applications. In certain embodiments, steps may be executed or performed in any order or sequence not limited to the order and sequence shown and described. In a number of embodiments, some of the above steps may be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. In some embodiments, one or more of the above steps may be omitted. [0054] Fig. 
2B illustrates pseudocode for an image enhancement algorithm in accordance with an embodiment of the invention. [0055] The input image is converted from RGB to hsv color space. A small constant bias term, b, can be added to the field for the purposes of numerical stabilization and noise reduction. The addition of b can improve the results obtained. The real-valued input image can be transformed into the Fourier domain by Fast Fourier Transform (FFT) and subsequently multiplied elementwise by the complex exponential with an argument that defines the frequency-dependent phase. IFFT can be performed on the signal to return a complex signal in the spatial domain. The signal may be multiplied by a parameter G called phase activation gain before the computation of an output phase. The output phase is then normalized to match the image formatting convention [0 to 255]. This output phase can be applied to the original image depending on the specific enhancement that was requested. [0056] Fig.3 illustrates the effects of low-light image enhancement of some images in accordance with an embodiment of the invention. In certain embodiments, VEViD systems can produce enhanced images with natural colors. The images are shown pairwise, with the left image being the original unenhanced image and the right image being the output image of a VEViD process. VEViD Lite [0057] Low latency is a crucial metric for real-time applications, including video analytics and broadcast. In some embodiments, VEViD systems can be accelerated through mathematical approximations that reduce computation time without appreciable decline in the quality of enhancement; such accelerated systems are also known as VEViD Lite. In selected embodiments, VEViD Lite is a closed-form equivalent model of VEViD where fast simulations can be performed using closed-form equations. [0058] In general, the most time-intensive operations in VEViD involve the forward and inverse Fourier transforms. 
Latency in processing can be reduced if there are equivalent formulations of VEViD that take place entirely in the spatial domain. By processing entirely in the spatial domain, Fourier transforms may not be necessary to enhance images, which can significantly improve the runtime of the algorithm. In many embodiments, VEViD Lite systems can be applied towards real-time enhancement of high-resolution and high frame-rate videos.

[0059] Fig. 4 illustrates an accelerated image enhancement process in accordance with an embodiment of the invention. Similar to VEViD as discussed in embodiments above with respect to Fig. 2A, process 400 converts (410) an input image to a light field representation in the hsv (hue, saturation, value) color space.

[0060] Process 400 generates (420) the real components of an output phase using an approximation based on the input. The output of the normal VEViD system can be represented as follows:

$$\phi_{out}(x, y; c) = \tan^{-1}\left( G \cdot \frac{\mathrm{Im}\{E_o(x, y; c)\}}{\mathrm{Re}\{E_o(x, y; c)\}} \right). \tag{9}$$

In numerous embodiments, the real-valued input has no imaginary component; virtual diffraction creates an imaginary component, but the change in the real component may be negligible. In some embodiments, original inputs may be used to approximate the real components of outputs:

$$\mathrm{Re}\{E_o(x, y; c)\} \approx E_i(x, y; c). \tag{10}$$

Hence, Equation 9 representing outputs of VEViD can be simplified as:

$$\phi_{out}(x, y; c) = \tan^{-1}\left( G \cdot \frac{\mathrm{Im}\{E_o(x, y; c)\}}{E_i(x, y; c)} \right). \tag{11}$$

[0061] Process 400 generates (430) the imaginary components of the output phase by applying a simplified spectral phase filter to the converted light field. In many embodiments, passing a real-valued input light field through spectral phase filters can generate an imaginary component. The generation of imaginary components will be discussed in detail further below.

[0062] Process 400 obtains (440) the output phase using the real and imaginary components of the output phase. Process 400 applies (450) the output phase to the original input image.
In many embodiments, this obtains an output image based on the input image with lighting levels or colors enhanced.

[0063] For the imaginary components of the output in Equation 9, further simplification can be made by linearizing the complex exponential operation encountered in the spectral phase. This can be done by restricting the phase to be small (nominal near field):

$$\exp(-i \cdot \phi(k_x, k_y)) = \cos(\phi(k_x, k_y)) - i \cdot \sin(\phi(k_x, k_y)) \approx 1 - i \cdot \phi(k_x, k_y). \tag{12}$$

This leads to the following expression for the imaginary component:

$$\mathrm{Im}\{E_o(x, y; c)\} = \mathrm{Im}\left[\mathrm{IFFT}\left\{\mathrm{FFT}\{E_i(x, y; c)\} \cdot \exp(-i \cdot \phi(k_x, k_y))\right\}\right] \approx \mathrm{Im}\left[\mathrm{IFFT}\left\{\mathrm{FFT}\{E_i(x, y; c)\} \cdot (1 - i \cdot \phi(k_x, k_y))\right\}\right] = -\mathrm{IFFT}\left\{\mathrm{FFT}\{E_i(x, y; c)\} \cdot \phi(k_x, k_y)\right\}. \tag{13}$$

[0064] By assuming the phase angle induced by simulated (virtual) diffraction to be small, the inherent nonlinearity of the complex exponential of the phase function can be removed. In some embodiments, the main effect of the spectral phase induced by diffraction is to produce an imaginary component. The real part of the output may be a bright-field image with a large initial value, whereas the imaginary part may be a dark-field image that is zero before diffraction. The existence of any numerical noise may affect the imaginary part far more than the real part. To mitigate this effect, in many embodiments, the imaginary component can be regularized with a constant, b:

$$E_i(x, y; c) \rightarrow E_i(x, y; c) + b. \tag{14}$$

[0065] In numerous embodiments, VEViD Lite systems can eliminate the Fourier transforms required in VEViD to obtain a simplified equivalent model by adjusting the spectral phase filters used for creating diffractions. In several embodiments, the spectral phase is taken to be a constant number. When the phase variance, T, approaches infinity, spectral phase filters can be simplified to:

$$\lim_{T \to \infty} \phi(k_x, k_y) = \lim_{T \to \infty} S \cdot \exp\left(-\frac{k_x^2 + k_y^2}{T}\right) = S. \tag{15}$$

By taking the spectral phase to be constant, its frequency dependence is avoided. In many embodiments, applying this simplification to the imaginary component of the diffracted image leads to the elimination of the Fourier and inverse Fourier transform operations:

$$\lim_{T \to \infty} \mathrm{Im}\{E_o(x, y; c)\} = \lim_{T \to \infty} \left(-\mathrm{IFFT}\left\{\mathrm{FFT}\{E_i(x, y; c) + b\} \cdot S\right\}\right) = -S \cdot (E_i(x, y; c) + b). \tag{16}$$

[0066] Combining these steps leads to a simple closed-form formulation of the VEViD algorithm:

$$\lim_{T \to \infty} \phi_{out}(x, y; c) = \lim_{T \to \infty} \tan^{-1}\left( G \cdot \frac{\mathrm{Im}\{E_o(x, y; c)\}}{E_i(x, y; c)} \right) = \tan^{-1}\left( -G \cdot \frac{E_i(x, y; c) + b}{E_i(x, y; c)} \right), \tag{17}$$

where G can absorb S and be redefined as the product of S and G. By eliminating the Fourier transforms, Equation 17 can become a computationally accelerated reformulation of VEViD.

[0067] Process 400 can be performed by computing devices that utilize VEViD Lite systems to process and enhance images and/or videos. Computing devices may include but are not limited to personal computers, mobile phones, and dashcams. Process 400 can also be performed by data servers or edge servers that utilize VEViD Lite systems to process and enhance images and/or videos.

[0068] While specific processes for accelerated image enhancement are described above, any of a variety of processes can be utilized for accelerated image enhancement as appropriate to the requirements of specific applications. In certain embodiments, steps may be executed or performed in any order or sequence not limited to the order and sequence shown and described. In a number of embodiments, some of the above steps may be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. In some embodiments, one or more of the above steps may be omitted.
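The closed-form VEViD Lite reformulation of Equation 17 reduces the whole enhancement to a single pixelwise operation: add the bias b, apply the negative gain, divide by the input, and take the inverse tangent. A minimal NumPy sketch follows; the parameter values and the small eps guarding division by zero are illustrative assumptions.

```python
import numpy as np

def vevid_lite(v, b=0.16, G=1.4):
    """Sketch of closed-form VEViD Lite (Equation 17): no Fourier
    transforms, just a pixelwise arctangent. v is the value channel
    scaled to (0, 1]; parameter values are illustrative assumptions."""
    eps = 1e-6
    # Division by v emphasizes low-intensity pixels; arctan compresses
    # the output, limiting dynamic range expansion.
    phi = np.arctan(-G * (v + b) / (v + eps))
    # Normalize to the image formatting convention [0 to 255].
    phi = (phi - phi.min()) / (phi.max() - phi.min() + 1e-12)
    return (phi * 255).astype(np.uint8)
```

Because every pixel is processed independently, this form vectorizes trivially and avoids the forward and inverse FFTs that dominate the runtime of full VEViD.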
[0069] In Equation 11, the division by the input image $E_i(x, y; c)$ in the argument of the arctan function can emphasize the low-intensity regions of the image, producing low-light enhancement. In many embodiments, the arctan operation compresses the output, preventing an undesirable dynamic range expansion and suppressing the noise. Together, these operations can redistribute the energy while managing the dynamic range and noise.

VEViD Lite+

[0070] Parameters b and G discussed above may be strongly correlated despite the parameters themselves performing different functions in the system. In many embodiments, consolidating their effects into a single parameter can greatly simplify the optimization of the algorithm and make it very simple to use. An empirical approach to combine the correlated b and G parameters into a single parameter, P, can be utilized to reduce VEViD Lite to a single-parameter model:

$$b(P) = \frac{1}{5P^2 + 0.05}, \tag{18}$$

$$G(P) = 1 - \sqrt{P}, \tag{19}$$

$$\phi_{out}(x, y; P) = \tan^{-1}\left( -G(P) \cdot \frac{E_i(x, y) + b(P)}{E_i(x, y)} \right). \tag{20}$$

In numerous embodiments, Equation 20 is a further simplified algorithm that can be referred to as VEViD Lite+. The single independent variable P can be interpreted as the enhancement "Power" and is defined within the range of [0, 1). The computational procedure of VEViD Lite+ is generally similar to VEViD Lite, as shown in Equation 20, with b and G being computed from the single variable, P.

[0071] In many embodiments, VEViD Lite+ adds a constant DC bias term to each pixel of an input image to create a first modified image. VEViD Lite+ can multiply each pixel of the first modified image by a negative constant gain value to create a second modified image. In several embodiments, VEViD Lite+ divides each pixel of the second modified image by the spatially corresponding pixel of the input image, creating a third modified image. VEViD Lite+ may obtain an output image by computing the inverse tangent of each pixel of the third image.
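The four steps above can be sketched as follows. Note that the specific b(P) and G(P) mappings used here are reconstructions of Equations 18 and 19 from the garbled source and should be treated as assumptions rather than the authoritative parameter mappings.

```python
import numpy as np

def vevid_lite_plus(v, P=0.5):
    """Sketch of single-parameter VEViD Lite+. P in [0, 1) is the
    enhancement "Power"; b(P) and G(P) below are assumed mappings."""
    b = 1.0 / (5.0 * P**2 + 0.05)   # assumed form of Equation 18
    G = 1.0 - np.sqrt(P)            # assumed form of Equation 19
    eps = 1e-6
    # Add bias, apply negative gain, divide by the input, take arctan.
    phi = np.arctan(-G * (v + b) / (v + eps))
    # Normalize to the image formatting convention [0 to 255].
    phi = (phi - phi.min()) / (phi.max() - phi.min() + 1e-12)
    return (phi * 255).astype(np.uint8)
```

Under these assumed mappings, P = 0 yields a large bias (b = 20) and unit gain, so after normalization the output tracks the input almost linearly (an approximate identity), while larger P produces stronger enhancement of dark regions.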
In some embodiments, VEViD Lite+ approximately performs an identity mapping when P is set to 0, leaving the input image nearly unchanged. Increasing the value of P can lead to greater enhancement. P should be kept below 1, as setting P equal to 1 would yield G equal to 0, causing all the output pixels to be zero.

Cloud Implementation

[0072] Cloud computing plays a crucial role in various fields and can provide significant advantages such as scalable and powerful computational resources. Cloud computing platforms can offer the capability of processing high-resolution images and high-framerate videos using more advanced and sophisticated neural network models. Amazon Rekognition is a cloud-based computer vision service offered by Amazon Web Services (AWS). It leverages advanced deep-learning techniques to automate a wide range of image and video analysis tasks. In many embodiments, all versions of VEViD systems discussed above can be implemented on the cloud and used together with example services such as Amazon Rekognition. Such implementations can aim to improve the accuracy and reliability of Amazon Rekognition on object detection and face detection tasks in challenging light-constrained environments.

[0073] Object detection in light-constrained scenes presents a major challenge, particularly in applications such as security surveillance and autonomous driving at night. Creating and labeling a dataset specifically for low-light conditions can be a burdensome task. Consequently, existing off-the-shelf pre-trained object detection models often struggle to generalize well to light-constrained scenes due to insufficient data in their training datasets. In many embodiments, systems can serve as a powerful tool to address this issue without the need for laborious data labeling and retraining of neural network models. Systems can provide substantial accuracy and robustness improvements to Amazon Rekognition.
These advantages can be extended to the realm of enhancing cloud-based security camera systems in light-constrained scenes. In many embodiments, systems can detect previously missed objects after being integrated into example surveillance systems, thereby minimizing the risk of missing critical identities or events. In certain embodiments, systems can help avoid false detections, which is crucial for reducing unnecessary alerts and ensuring accurate threat identification. In some embodiments, systems can enhance object detection by capturing finer details such as clothing attributes, aiding security cameras in accurately identifying individuals or objects in a scene. Systems can also maintain the true color of objects, a feature that can be invaluable in situations where IR (infrared) cameras fall short.

Edge Implementation

[0074] In numerous embodiments, all versions of VEViD systems are also well-suited for operating on embedded systems with limited power and memory resources because of their low complexity and high efficiency. This can open up numerous possibilities for edge computing applications such as smart surveillance systems and autonomous vehicles. Systems can reduce the need for transferring large amounts of data to remote servers, thereby minimizing bandwidth requirements and ensuring uninterrupted operation. In many embodiments, systems can enhance privacy and security by keeping sensitive data within the edge device itself, reducing the risks of transmitting sensitive information over networks. The low energy consumption of embedded systems can also be beneficial for sustainability efforts. In numerous embodiments, systems can be implemented on low-cost and low-power embedded systems.

[0075] Nighttime driving in limited light conditions poses a significant challenge for both human drivers and autonomous driving systems.
In nighttime driving scenarios, the scenes captured by human eyes and digital cameras exhibit adverse visual qualities such as low contrast, feature loss, and low signal-to-noise ratio, which may be undesirable for both human perception and machine vision. In many embodiments, systems can be implemented on edge processors such as those installed in vehicles to perform real-time low-light enhancement.

[0076] Fig. 5 illustrates a comparison of enhancement results by VEViD on an NVIDIA Jetson Nano for real-time video processing. In this example, the Jetson is powered by an inverter connected to the car's cigarette lighter, and a USB camera is mounted at the front of the car and connected to the Jetson to capture real-time video. VEViD implemented on the Jetson can process and enhance images and video sequences under dark conditions. The visual quality of enhancement by VEViD and VEViD Lite is comparable to, and even better than, that of neural network-based methods such as Zero-DCE++, while being 30 times faster. This real-time enhancement capability of VEViD for nighttime driving scenes can be beneficial for both human drivers and autonomous driving systems, allowing them to better spot potential dangers and make real-time control decisions.

Hardware Implementation

[0077] Processes that provide the methods and systems for performing vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with some embodiments can be executed by a computing device or computing system, such as a desktop computer, tablet, mobile device, laptop computer, notebook computer, server system, and/or any other device capable of performing one or more features, functions, methods, and/or steps as described herein.

[0078] Fig. 6 illustrates a network architecture for performing vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with an embodiment of the invention.
Such embodiments may be useful where sufficient computing power is not available at a local level and a central computing device (e.g., a server) performs one or more features, functions, methods, and/or steps described herein. In such embodiments, a computing device 610 (e.g., a personal computer) is connected to a network 620 (wired and/or wireless), where it can receive inputs from one or more computing devices, including data from a records database or repository 630 containing video and/or image data for enhancing, data provided from a personal computing device, and/or any other relevant information from one or more other remote devices 610 and/or 640. Once computing device 610 performs one or more features, functions, methods, and/or steps described herein, any outputs can be transmitted to one or more computing devices 610 for entering into records.

[0079] Fig. 7 illustrates a computing device that can be utilized to perform vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with an embodiment of the invention. Computing device 700 includes a processor 710. Processor 710 may direct the enhancement application 731 to perform vision enhancements based on media data 732 and model data 733. In many embodiments, processor 710 can include a processor, a microprocessor, a controller, or a combination of processors, microprocessors, and/or controllers that performs instructions stored in the memory 730 to perform vision enhancement. Processor instructions can configure the processor 710 to perform processes in accordance with certain embodiments of the invention. In various embodiments, processor instructions can be stored on a non-transitory machine-readable medium. Computing device 700 further includes a network interface 720 that can receive media data from external sources. Computing device 700 may further include a memory 730 to store enhancement models under model data 733.
Computing device 700 may further include peripherals 740 to allow for user control and analysis of the enhancement process.

[0080] Although a specific example of a computing device is illustrated in this figure, any of a variety of computing devices can be utilized to perform vision enhancements using simulated (virtual) diffraction and coherent detection similar to those described herein as appropriate to the requirements of specific applications in accordance with embodiments of the invention.

[0081] Fig. 8 illustrates an edge server that can be utilized to perform vision enhancements using simulated (virtual) diffraction and coherent detection in accordance with an embodiment of the invention. Edge server 800 includes a processor 810. Processor 810 may direct the enhancement application 831 to perform vision enhancements based on media data 832 and model data 833. In many embodiments, processor 810 can include a processor, a microprocessor, a controller, or a combination of processors, microprocessors, and/or controllers that performs instructions stored in the memory 830 to perform vision enhancement. Processor instructions can configure the processor 810 to perform processes in accordance with certain embodiments of the invention. In various embodiments, processor instructions can be stored on a non-transitory machine-readable medium. Edge server 800 further includes a network interface 820 that can receive media data from external sources. Edge server 800 may further include a memory 830 to store enhancement models under model data 833. Edge server 800 may further include peripherals 840 to allow for user control and analysis of the enhancement process.
[0082] Although a specific example of an edge server is illustrated in this figure, any of a variety of edge servers can be utilized to perform vision enhancements using simulated (virtual) diffraction and coherent detection similar to those described herein as appropriate to the requirements of specific applications in accordance with embodiments of the invention.

[0083] In accordance with still other embodiments, the instructions for the processes can be stored in any of a variety of non-transitory machine-readable media appropriate to a specific application.

[0084] Although specific methods of performing vision enhancements using simulated (virtual) diffraction and coherent detection are discussed above, many different methods of performing vision enhancements using simulated (virtual) diffraction and coherent detection can be implemented in accordance with many different embodiments of the invention. It is therefore to be understood that the present invention may be practiced in ways other than specifically described, without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.