

Title:
LIDAR SYSTEM WITH SUPPRESSED DOPPLER FREQUENCY SHIFT
Document Type and Number:
WIPO Patent Application WO/2022/223112
Kind Code:
A1
Abstract:
A LIDAR system which reduces or suppresses the frequency shift induced by the movement of objects in a scene relative to the LIDAR, and which comprises a light source, an input aperture (101), a splitter (2) configured to split a reflected light into a reference channel (4) and a first imaging channel (3), a first imaging optical IQ receiver (5) configured to obtain a first interference signal, a reference optical IQ receiver (6) configured to obtain a reference interference signal, an imaging oscillator (111) configured to be temporally coherent with the reflected light, and at least one mixer (12), connected to the first imaging optical IQ receiver (5) and to the reference optical IQ receiver (6) and configured to obtain a first intermodulation product with a higher frequency and an intermodulation product of interest with its Doppler shift scaled.

Inventors:
MARGALLO BALBÁS EDUARDO (ES)
RUBIO GUIVERNAU JOSÉ LUIS (ES)
Application Number:
PCT/EP2021/060395
Publication Date:
October 27, 2022
Filing Date:
April 21, 2021
Assignee:
OMMATIDIA LIDAR S L (ES)
International Classes:
G01S7/4912; G01S7/493; G01S17/34
Foreign References:
US20200072979A12020-03-05
US7307700B12007-12-11
Attorney, Agent or Firm:
PONS ARIÑO, Angel (ES)
Claims:
CLAIMS

1. A light detection and ranging (LIDAR) system with suppressed Doppler frequency shift, wherein the system comprises:

- at least one light source (102) configured to emit a first light,

- at least one imaging input aperture (101) and one imaging channel (3) associated to the at least one imaging input aperture (101), configured to receive an input reflected light that is reflected by a moving object (108) that is irradiated by the light source (102),

- at least one reference aperture (103) and one reference channel (4) associated to the at least one reference aperture (103), configured to receive a reference reflected light that is reflected by the moving object that is irradiated by the light source (102),

- at least one imaging oscillator (111),

- at least one first imaging optical receiver (5) associated to the imaging input aperture (101) and the imaging oscillator (111) and configured to obtain an interference signal between the input reflected light and the imaging oscillator

(111),

- a reference oscillator (113),

- a reference optical receiver (6) associated to the reference aperture (103) and the reference oscillator (113) and configured to obtain a reference interference signal between the reference reflected light and the reference oscillator (113),

- a signal filter arrangement (700), (900) positioned following the reference optical receiver (6), wherein the signal filter arrangement (700), (900) comprises a temporal filtering unit (704) configured to accumulate samples of the reference interference signal and combine the samples to increase the SNR of the reference interference signal, and

- at least one mixer (12), connected to the at least a first imaging optical receiver (5) and to the signal filter arrangement (700), (900) and configured to produce an intermodulation product (16) between the interference signal and the reference interference signal, such that the Doppler frequency shift caused by the moving object (108) is cancelled or decreased.

2. The LIDAR system of claim 1, wherein the at least a first imaging optical receiver (5) is an optical IQ receiver configured to obtain an interference signal between the input reflected light and the imaging oscillator (111) comprising a first in-phase component (7) and a first quadrature component (8), and the reference optical receiver (6) is an optical IQ receiver configured to obtain a reference interference signal comprising a reference in-phase component (9) and a reference quadrature component (10).

3. The LIDAR system of claim 2, further comprising a time derivation module (15), associated to the reference optical receiver (6) and intended to time derivate the reference in-phase component (9) and the reference quadrature component (10).

4. The LIDAR system of claim 2, wherein the at least one mixer (12) comprises:

- a first mixer (121), intended to mix the first quadrature component (8) and the reference in-phase component (9),

- a second mixer (122), intended to mix the first in-phase component (7) and the reference in-phase component (9).

5. The LIDAR system of claim 2, wherein the at least one mixer (12) comprises:

- a first mixer (121), intended to mix the first quadrature component (8) and the reference quadrature component (10),

- a second mixer (122), intended to mix the first in-phase component (7) and the reference quadrature component (10).

6. The LIDAR system of claim 2, wherein the at least one mixer (12) comprises:

- a third mixer (123), intended to mix the first in-phase component (7) and the time-derived reference quadrature component (91), and

- a fourth mixer (124), intended to mix the first quadrature component (8) and the time-derived reference quadrature component (91).

7. The LIDAR system of claim 2, wherein the at least one mixer (12) comprises:

- a third mixer (123), intended to mix the first in-phase component (7) and the time-derived reference in-phase component (90), and

- a fourth mixer (124), intended to mix the first quadrature component (8) and the time-derived reference in-phase component (90).

8. The LIDAR system of any of claims 4 to 7, further comprising a low-pass filter (23) associated with each mixer (12).

9. The LIDAR system of claim 1, wherein the reference oscillator (113) and the imaging oscillator (111) share a common origin.

10. The LIDAR system of claim 1, wherein the reference aperture (103) is the same as the input aperture (101) and the reference channel (4) and the imaging channel (3) are derived from it by means of a splitter (2).

11. The LIDAR system of claim 1, wherein the reference oscillator’s (113) wavelength stays static and the first optical oscillator’s (111) wavelength is swept following a standard FMCW (Frequency Modulated Continuous Wave) scheme.

12. The LIDAR system of claim 1, further comprising one or more low-pass filters (13), associated with the optical receivers (5, 6) and configured to filter the interference signal and the reference interference signal.

13. The LIDAR system of claim 2, further comprising transimpedance amplifiers (14) positioned following the reference optical receiver (6) and the first imaging optical receiver (5), and configured to amplify the reference in-phase component (9), the reference quadrature component (10), the first in-phase component (7) and the first quadrature component (8).

14. The LIDAR system of any of claims 1 or 4 to 7, wherein the mixers (12) are Gilbert cells.

15. The LIDAR system of claim 1, wherein the signal filter arrangement (900) mixes the reference interference signal with a plurality of other reference signals (902).

16. The LIDAR system of claim 1, wherein the temporal filtering unit (704) is configured to combine the samples by averaging the samples.

17. The LIDAR system of claim 1, wherein the temporal filtering unit (704) is configured to combine the samples by using a series of phase locked loops (PLLs).

18. A LIDAR system that comprises:

- at least one light source (102) configured to emit a first light,

- at least one imaging input aperture (101) and one imaging channel (3) associated to the at least one imaging input aperture (101), configured to receive an input reflected light that is reflected by a moving object (108) that is irradiated by the light source (102),

- at least one reference aperture (103) and one reference channel (4) associated to the at least one reference aperture (103), configured to receive a reference reflected light that is reflected by the moving object that is irradiated by the light source,

- at least one imaging oscillator (111),

- at least one first imaging optical receiver (5) associated to the imaging input aperture (101) and the imaging oscillator (111) and configured to obtain an interference signal between the input reflected light and the imaging oscillator

(111),

- a reference oscillator (113),

- a reference optical receiver (6) associated to the reference aperture (103) and the reference oscillator (113) and configured to obtain a reference interference signal between the reference reflected light and the reference oscillator (113),

- a signal filter arrangement (700), (900) positioned following the reference optical receiver (6), wherein the signal filter arrangement (700), (900) comprises a temporal filtering unit (704) configured to accumulate samples of the reference interference signal and combine the samples to increase the SNR of the reference interference signal, and

- an optical modulator (17) connected to the at least one imaging oscillator (111), and configured to apply an amplitude or phase modulation to the at least one imaging oscillator (111) based on a signal derived from the reference channel (4), such that an intermodulation product (16) between the interference signal and the reference interference signal appears at the output of the at least a first imaging optical receiver (5), such that the Doppler frequency shift caused by the moving object is cancelled or decreased.

19. The LIDAR system of claim 18, wherein the at least a first imaging optical receiver (5) is an optical IQ receiver configured to obtain an interference signal between the input reflected light and the imaging oscillator (111) comprising a first in-phase component (7) and a first quadrature component (8), and the reference optical receiver (6) is an optical IQ receiver configured to obtain a reference interference signal comprising a reference in-phase component (9) and a reference quadrature component (10).

20. The LIDAR system of claim 19, further comprising transimpedance amplifiers (14) positioned following the reference optical receiver (6) and the first imaging optical receiver (5), and configured to amplify the reference in-phase component (9), the reference quadrature component (10), the first in-phase component (7) and the first quadrature component (8).

21. The LIDAR system of claim 18, wherein the reference oscillator (113) and the imaging oscillator (111) share a common origin.

22. The LIDAR system of claim 18, wherein the reference aperture (103) is the same as the input aperture (101) and the reference channel (4) and the imaging channel (3) are derived from it by means of a splitter (2).

23. The LIDAR system of claim 18, wherein the reference oscillator’s (113) wavelength stays static and the first optical oscillator’s (111) wavelength is swept following a standard FMCW (Frequency Modulated Continuous Wave) scheme.

24. The LIDAR system of claim 18, further comprising one or more low-pass filters (13), associated to the optical receivers (5, 6) and configured to filter the interference signal and the reference interference signal.

25. The LIDAR system of claim 18, wherein the signal filter arrangement (900) mixes the reference interference signal with a plurality of other reference signals (902).

26. The LIDAR system of claim 18, wherein the temporal filtering unit (704) is configured to combine the samples by averaging the samples.

27. The LIDAR system of claim 18, wherein the temporal filtering unit (704) is configured to combine the samples by using a series of phase locked loops (PLLs).

28. A LIDAR system that comprises:

- at least one light source (102) configured to emit a first light,

- at least one imaging input aperture (101) and one imaging channel (3) associated to the at least one imaging input aperture (101), configured to receive an input reflected light that is reflected by a moving object (108) that is irradiated by the light source (102),

- at least one reference aperture (103) and one reference channel (4) associated to the at least one reference aperture (103), configured to receive a reference reflected light that is reflected by the moving object that is irradiated by the light source,

- at least one imaging oscillator (111),

- at least one first imaging optical receiver (5) associated to the imaging input aperture (101) and the imaging oscillator (111) and configured to obtain an interference signal between the input reflected light and the imaging oscillator

(111),

- a reference oscillator (113),

- a reference optical receiver (6) associated to the reference aperture (103) and the reference oscillator (113) and configured to obtain a reference interference signal between the reference reflected light and the reference oscillator (113),

- a signal filter arrangement (700), (900) positioned following the reference optical receiver (6), wherein the signal filter arrangement (700), (900) comprises a temporal filtering unit (704) configured to accumulate samples of the reference interference signal and combine the samples to increase the SNR of the reference interference signal, and

- wherein the at least one light source comprises a source modulation scheme (1100) configured to apply an amplitude or phase modulation to the emitted first light based on a signal derived from the reference channel (4), such that an intermodulation product (16) between the interference signal and the reference interference signal appears at the output of the at least a first imaging optical receiver (5), such that the Doppler frequency shift caused by the moving object is cancelled or decreased.

29. A method for suppressing Doppler frequency shift in a LIDAR system, which uses the system of any of the preceding claims, and comprises the steps of:

- emitting a first light (110), aimed at a moving object (108),

- receiving a reflected light (112) coming from the moving object (108),

- obtaining a first interference signal between the reflected light (112) and an imaging oscillator (111),

- obtaining a reference interference signal between the reflected light (112) and a reference oscillator (113),

- accumulating samples of the reference interference signal and combining the samples to increase the SNR of the reference interference signal, and

- obtaining an intermodulation product (16) between the interference signal and the reference interference signal, such that the Doppler frequency shift caused by the moving object (108) is cancelled or decreased.

Description:
LIDAR SYSTEM WITH SUPPRESSED DOPPLER FREQUENCY SHIFT

OBJECT OF THE DISCLOSURE

[0001] The object of the disclosure is a LIDAR system which allows reducing or completely suppressing the frequency shift induced by the movement of objects in a scene relative to the LIDAR, an effect known as Doppler frequency shift.

BACKGROUND

[0002] A light detection and ranging (LIDAR) device creates a distance map to a target by illuminating the target with laser light and measuring the reflected light with a sensor. Differences in the properties of laser light, including total round-trip times, phase or wavelength can then be used to make digital 3D representations of the target.

[0003] LIDAR is commonly used to make high-resolution maps, with applications in geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry. The technology is also used in control and navigation for some autonomous vehicles.

[0004] Some LIDARs make use of what is known as coherent detection. In this detection scheme, the light reflected by the sample is mixed with a local oscillator that is coherent with the reflected light. This approach has several advantages, such as an optical gain that allows single-photon sensitivity, and it enables the use of changes in the phase and wavelength of light to measure distance.

[0005] A common problem that appears when making use of this type of LIDAR is the frequency shift induced by the movement of the objects in the scene relative to the device, an effect known as Doppler frequency shift. Such frequency shifts may be large relative to the bandwidth of the signals used to measure relevant properties of the objects and may complicate the extraction of such relevant data. This problem becomes of major importance if the relative speed of the objects is significant, as in the case of vehicles, aircraft or satellites.

[0006] This frequency shift is variable and often unknown and can expand the bandwidth of the detected signals very significantly. In the case of ground vehicles, the relative speed can reach 300 km/h or higher. This relative speed corresponds to a Doppler frequency shift of 54.0 MHz for illumination at λ = 1.55 µm. This variable frequency shift complicates the electronic readout and signal processing chain of systems that depend on coherent detection of the object signal.
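
As a quick check on the figure quoted above (not part of the patent text), the shift can be reproduced numerically; the 54.0 MHz value appears to correspond to the one-way relation f_D = v/λ, and the variable names below are illustrative only.

    # Illustrative check of the Doppler figure quoted above (not from the patent).
    v = 300 / 3.6           # 300 km/h expressed in m/s
    wavelength = 1.55e-6    # 1.55 um illumination
    f_doppler = v / wavelength
    print(f"{f_doppler / 1e6:.1f} MHz")   # ~53.8 MHz, i.e. the ~54.0 MHz quoted above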

[0007] Even if the signal chain may still be manageable for a small number of channels, it adds to the cost, size and complexity of the final LIDAR system. Furthermore, it poses a major obstacle for the practical implementation of multi-channel coherent LIDAR systems with a large number of inputs.

[0008] To solve this problem, several approaches have been proposed, one of which is the use of non-uniform sampling or other compressed sensing schemes to reduce the overall data rate of the signals.

[0009] In general, all of these approaches share the same drawbacks: complex electronic readout circuitry and signal processing chains, which make them expensive, bulky and, in general, difficult to implement and to scale to multi-channel architectures with a large number of channels.

SUMMARY

[0010] The LIDAR system object of the present disclosure describes a modification of a coherent LIDAR system that makes use of one or more input apertures, and which is simple in its implementation. Its goal is to reduce or completely eliminate the frequency shift induced by the movement of objects in a scene relative to the LIDAR, an effect known as Doppler frequency shift.

[0011] According to some embodiments, the reduction or elimination of the frequency shift is achieved by measuring the Doppler-shifted signal in a reference channel and then, making use of mathematical properties of signal mixing in the time domain, shifting the frequency of one or more imaging channels to cancel or reduce said Doppler shift.

[0012] According to some embodiments, the light detection and ranging (LIDAR) system with suppressed Doppler frequency shift comprises at least one light source configured to emit a first light, aimed at an external object. The first light is reflected diffusely or specularly by the object and is then received at at least one input aperture, becoming a reflected light.

[0013] The reflected light can then be split in a splitter, positioned following the at least one input aperture, the splitter being configured to split the reflected light into a reference channel and at least a first imaging channel.

[0014] A part of the split reflected light is then guided through the at least a first imaging channel to a first imaging optical IQ (In-phase and Quadrature) receiver associated to the first imaging channel. The first imaging optical IQ receiver is configured to obtain a first interference signal which comprises a first in-phase component and a first quadrature component.

[0015] Additionally, another part of the reflected light is guided through the reference channel into a reference optical IQ receiver associated to the reference channel. The reference optical IQ receiver is configured to obtain a reference interference signal which comprises a reference in-phase component and a reference quadrature component.

[0016] At least a local optical oscillator is associated to the first imaging optical IQ receiver and to the reference optical IQ receiver and is configured to be temporally coherent with the reflected light.

[0017] Lastly, in an embodiment, the system comprises at least a mixer, connected to the first imaging optical IQ receiver and to the reference optical IQ receiver, and configured to obtain a first intermodulation product with a higher frequency and a second intermodulation product of interest with its Doppler shift scaled or completely eliminated.

[0018] The system described above is one possible embodiment. However, the system can comprise a reference aperture and several input apertures, or a reference channel and several imaging channels associated to one or more input apertures. The system can also comprise a single local optical oscillator associated to all the optical IQ receivers, or a reference local optical oscillator associated to the reference optical IQ receiver and an imaging local optical oscillator associated to the imaging optical IQ receivers, or a reference local optical oscillator associated to the reference optical IQ receiver, and several imaging local optical oscillators, associated each to one or more imaging optical IQ receiver.

[0019] The system can also comprise an optical amplitude and/or phase modulator applied to the imaging local optical oscillators, such that the generation of the intermodulation products happens directly at the photodetector without the need for electronic mixing.

DESCRIPTION OF THE DRAWINGS

[0020] To complement the description being made and in order to aid towards a better understanding of the characteristics of the LIDAR system, in accordance with a preferred example of a practical embodiment thereof, a set of drawings is attached as an integral part of said description wherein, with illustrative and non-limiting character, the following has been represented.

[0021] Figure 1 shows an example LIDAR system imaging an object in an embodiment.

[0022] Figure 1A shows a scheme of the input aperture, optical IQ receivers and reference and imaging local optical oscillators in an embodiment.

[0023] Figure 1B shows an alternative implementation in which 2x4 MMIs are used for the optical IQ receivers in an embodiment.

[0024] Figure 2 shows a scheme of the LIDAR system in an embodiment with a reference channel and an imaging channel.

[0025] Figure 3 shows a scheme of the LIDAR system in an embodiment with a reference channel and an array of imaging channels with direction information encoded in the relative phase between them.

[0026] Figure 4 shows a scheme of the LIDAR system in an embodiment with a plurality of input apertures and an amplitude modulator for direct mixing of the reference signal on the photodetectors.

[0027] Figure 5 shows two schemes of a Gilbert cell, one including the photodetectors to enable direct multiplication of the differential photocurrent.

[0028] Figure 6 shows an integration scheme of a Gilbert cell, using switched capacitors.

[0029] Figure 7 shows a signal filter arrangement used on a reference channel of the LIDAR system in an embodiment.

[0030] Figure 8 shows an example reference sampling period obtained using the signal filter arrangement of Figure 7.

[0031] Figure 9 shows another signal filter arrangement used on a reference channel of the LIDAR system in an embodiment.

[0032] Figure 10 shows an example reference sampling period and intermediate reference sample period obtained using the signal filter arrangement of Figure 9.

[0033] Figure 11 shows an example source modulation scheme to provide multiple source channels in an embodiment.

DETAILED DESCRIPTION

[0034] With the help of figures 1 to 11, preferred embodiments of the present disclosure are described below.

[0035] Embodiments herein relate to a LIDAR system, such as LIDAR system (100) illustrated in Figure 1 , which comprises at least a light source (102) configured to emit light (110), aimed at an external object (108). The light is reflected from object (108) and the reflected light (112) is received by a light receiving unit (104). More specifically, the light is received at a reference input aperture (103) and in an imaging input aperture (101) in a first embodiment, as discussed in more detail with reference to later figures. Light source (102) may represent a single light source, or multiple light sources having different wavelengths. In some embodiments, light source (102) includes one or more laser sources.

[0036] LIDAR system (100) also includes a processor (106) that is configured to receive electrical signals from light receiving unit (104) and perform one or more processes using the received electrical signals. For example, processor (106) may use the received electrical signals to reconstruct a 3D image that includes object (108). As noted above, movement of object (108) (identified by the upward arrow) while trying to capture reflected light (112) causes a frequency shift induced by the movement of object (108) relative to the LIDAR system (100), an effect known as Doppler frequency shift.

[0037] As seen in Figure 1A, the reference input aperture (103) allows the LIDAR system (100) to produce a reference interference signal between the reflected light coming from object (108) and a reference oscillator (113). This reference interference signal is then used to modulate an interference signal formed between the reflected light (112) coming from object (108), collected by the imaging input aperture (101), and an imaging oscillator (111). According to some embodiments, both the reference oscillator (113) and the imaging oscillator (111) are generated from the same light source, such as light source (102).

[0038] In any given implementation, the reference input aperture (103) and one or more of the imaging input aperture(s) (101) may overlap, as shown for example in the embodiment of Figure 1A, as well as the reference oscillator (113) and imaging oscillators (111).

[0039] According to some embodiments, the reference oscillator (113) and imaging oscillator (111) exhibit some degree of temporal coherence with the reflected light (112), in such a way that the interference signal formed can be processed at electrical frequencies.

[0040] In one example, shown in Figure 1A, the system comprises a single input aperture (101,103). In this case the system comprises light source (102), which emits a light aimed to object (108). The light reflects off the object and the reflected light (112), which enters the system by the input aperture (101,103), is split into a first imaging channel (3) and a reference channel (4) by a splitting element (2), which may be a 1x2 splitter.

[0041] According to some embodiments, at least two channels, (e.g., the reference channel (4) and the first imaging channel (3), are affected by the movement of the objects through Doppler frequency shift substantially in the same manner, while the non-Doppler information-bearing modulation stays different between them. This allows the signals on both channels to combine in a way where the Doppler frequency shift is eliminated or greatly reduced, while the information- bearing modulation is recovered.

[0042] As shown in Figure 1 A, the first imaging channel (3) is fed to a first imaging optical IQ receiver (5), and the reference channel (4) is fed to a reference optical IQ receiver (6). The first imaging optical IQ receiver (5) is associated with an imaging oscillator (111) and the reference optical IQ receiver (6) is associated with a reference oscillator (113). Within the optical IQ receivers (5, 6), both oscillators (111 , 113) are fed through 90° hybrids generating the phase shift between the in-phase component (7, 9) and the quadrature component (8, 10) of each channel, according to some embodiments.

[0043] In other embodiments, the IQ receivers (5, 6) are implemented by means of 2x4 MMI couplers designed to provide the phase shifts between the 4 outputs (7, 8, 9, 10) and each of the two inputs (3, 4). In Figure 1B, an embodiment is shown in which the first imaging optical IQ receiver (5) is a 2x4 MMI coupler which is fed with the first imaging channel (3) and an imaging oscillator (111), and the reference optical IQ receiver (6) is a 2x4 MMI coupler which is fed with the reference channel (4) and a reference oscillator (113).

[0044] In an embodiment, the imaging oscillator (111) has its wavelength swept following a standard FMCW (Frequency Modulated Continuous Wave) scheme and the reference oscillator (113) keeps its wavelength static. According to some embodiments, the reflected light (112) has components that are coherent with both components in the oscillators (111, 113). For this, either the illumination is derived from a combination of both components, or both components share a common origin with the illumination that guarantees mutual coherence.
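
For orientation, in a standard FMCW scheme the range-dependent beat frequency produced by a sweep of slope K and a target at distance x is f = 2Kx/c; the sketch below uses illustrative values that are assumptions, not parameters from the patent.

    # Illustrative FMCW beat-frequency estimate (example values, not from the patent).
    c = 3.0e8      # speed of light, m/s
    K = 1.0e12     # assumed sweep slope, Hz/s
    x = 75.0       # assumed target distance, m
    f_beat = 2 * K * x / c    # beat frequency set by the round-trip delay 2x/c
    print(f"{f_beat / 1e3:.0f} kHz")   # 500 kHz for this example slope and distance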

[0045] According to some embodiments, the first imaging optical IQ receiver (5) is associated with the first imaging channel (3) and it is configured to obtain a first interference signal comprising a first in-phase component (7) and a first quadrature component (8). The reference optical IQ receiver (6) is associated to the reference channel (4) and configured to obtain a reference interference signal comprising a reference in-phase component (9) and a reference quadrature component (10).

[0046] Both interference signals will be affected by Doppler substantially in the same way (with small differences due to the different wavelengths, in some embodiments). However, only the first interference signal, associated with the imaging oscillator (111), carries information about distance between object (108) and LIDAR system (100) in its interference frequency.

[0047] As seen in Figure 2, mixing the first interference signal and the reference interference signal, in at least one of the mixers 121-124 (sometimes also identified collectively as 12), results in the generation of two intermodulation products. For example, the mixing produces a first intermodulation product with a higher frequency, which can be discarded, and an output intermodulation product (16) with a lower frequency, which has its Doppler shift significantly scaled, and which provides the possibility of taking the ranging and amplitude information to baseband, thus minimizing sampling frequency and electronics readout complexity.
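
The two products follow from the standard product-to-sum identity (a generic mixing relation, not a formula quoted from the patent); writing the two mixer inputs as real tones at angular frequencies \omega_a and \omega_b:

    \cos(\omega_a t)\cos(\omega_b t) = \tfrac{1}{2}\cos\big((\omega_a - \omega_b)t\big) + \tfrac{1}{2}\cos\big((\omega_a + \omega_b)t\big)

The difference-frequency term is the low-frequency product of interest, while the sum-frequency term is the higher-frequency product that can be discarded or filtered out.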

[0048] For illustration, the first interference signal and the reference interference signal are derived for this implementation as discussed herein. It is assumed that the imaging input aperture (101) and the reference aperture (103) are substantially at the same position, except for possible relative phase shifts if the imaging input aperture (101) is part of an array. In the case of equal illumination of the scene with two light sources of two wavelengths (with associated wavenumbers and angular frequencies k_1, k_2 and ω_1, ω_2, respectively) and equal amplitude A, the light signal at a distance x from the light source is:

[0049] Where it is assumed that the first wavelength of the first light source of the LIDAR system undergoes a linear frequency modulation with constant K. If the object that reflects the light emitted by the first light source is a single diffuse reflector at a distance x_j, with intensity reflectivity ρ_j in the direction of the input aperture (101) and relative velocity v_j in the direction between the input aperture (101) and the object, the reflected field of the light collected at the input aperture (101) will be:
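
The displayed equations of the published application did not survive extraction into this text. The expressions below are a hedged reconstruction, not the patent's own formulas, written only to be consistent with the surrounding description (equal amplitudes A, linear frequency modulation constant K on the first wavelength, and the 2k_1 v_j, 2k_2 v_j Doppler terms referenced in the next paragraph); aperture-dependent relative phase factors and constant phase offsets are omitted:

    E(x,t) \approx A\,e^{\,i[\omega_1 (t - x/c) + \pi K (t - x/c)^2]} + A\,e^{\,i[\omega_2 (t - x/c)]}

    E_i(t) \approx A\sqrt{\rho_j}\,e^{\,i[(\omega_1 - 2 k_1 v_j) t - 2 k_1 x_j + \pi K (t - 2 x_j / c)^2]} + A\sqrt{\rho_j}\,e^{\,i[(\omega_2 - 2 k_2 v_j) t - 2 k_2 x_j]}

The first expression stands in for the light signal at a distance x from the source, and the second for the reflected field collected at imaging input aperture i.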

[0050] Where i is the index of the input aperture in case there is an array of apertures. The Doppler shift is visible in the 2k_1 v_j and 2k_2 v_j terms in the equations, modifying the frequency of the reflected light.

[0051] For the calculation of the interference signals in the optical IQ receivers (5, 6), it is assumed for simplicity that the two wavelength components of the reference and imaging oscillators (111, 113) have unity amplitude:

[0052] After the imaging optical IQ receiver (5) and the reference optical IQ receiver (6), the first interference signal and the reference interference signal are, respectively:

[0053] In these, the beating products where the difference of optical angular frequencies persists will be at a very high frequency by electrical standards once detected. For example, assuming that the two wavelengths of the light of the light sources are 0.1 nm apart at a wavelength of 1.55 µm, the intermodulation product has a frequency of 12.5 GHz:
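
The 12.5 GHz figure follows from the wavelength separation via Δf = c·Δλ/λ²; the short check below is illustrative and not part of the patent.

    # Illustrative check of the 12.5 GHz intermodulation frequency quoted above.
    c = 3.0e8               # speed of light, m/s
    wavelength = 1.55e-6    # centre wavelength, m
    d_lambda = 0.1e-9       # 0.1 nm separation between the two wavelengths
    d_f = c * d_lambda / wavelength**2
    print(f"{d_f / 1e9:.1f} GHz")   # ~12.5 GHz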

[0054] On the contrary, the beating products where the local oscillator and reflected light frequencies are equal are demodulated to a lower frequency, derived from the frequency difference between the emitted and received phase modulation frequency plus or minus the Doppler shift.

[0055] For the typical speed of ground vehicles, the Doppler shift will be equal to or lower than 100 MHz, so it is possible to suppress the higher-frequency mixing terms (those which include the difference of optical angular frequencies) by means of a low-pass filter, according to some embodiments. Therefore, as shown in figure 2, a first set of low-pass filters (13) can be associated with the optical IQ receivers (5, 6) in order to filter the first in-phase component (7), the first quadrature component (8), the reference in-phase component (9) and the reference quadrature component (10).

[0056] The low-frequency components of the interference signals are provided as the following:

[0057] The depth and speed information are encoded in the frequency (and phase) of both photocurrents. By focusing on the frequency information only, it is observed that the frequencies of I_1(t) and I_2(t) are:

[0058] The components of these two frequency shifts scale differently with the line rate. The modulation constant K has a direct impact on the distance-derived frequencies. However, the Doppler shift remains independent and is determined by scene properties. Since the Doppler shift can reach frequencies of several tens of MHz, its acquisition typically requires fast electronics, which can add to the cost of the system. These video frequencies may also be a problem when it comes to scaling up the scene detection with multiple parallel imaging channels (3).

[0059] However, the difference of these two frequencies is:

[0060] According to some embodiments, if the two wavelengths of the lights emitted by the two light sources are chosen to be close to each other (for example, a separation of 0.1 nm at a wavelength of 1.55 µm), the difference of Doppler frequency shifts is significantly reduced (2 kHz for a v of 50 m/s).
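
The residual shift can be estimated as the difference of the two Doppler shifts, approximately v·Δλ/λ² under the same one-way convention apparently used for the 54 MHz figure earlier; the check below is illustrative only.

    # Illustrative check of the ~2 kHz residual Doppler difference quoted above.
    v = 50.0                # relative speed, m/s
    wavelength = 1.55e-6    # centre wavelength, m
    d_lambda = 0.1e-9       # 0.1 nm wavelength separation
    residual = v * d_lambda / wavelength**2   # difference between the two Doppler shifts
    print(f"{residual:.0f} Hz")               # ~2081 Hz, i.e. roughly the 2 kHz quoted above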

[0061] However, it is noteworthy that both wavelengths can be equal. In this case, the Doppler shift may be totally suppressed, whereas the frequency shift due to FMCW is preserved. This approach simplifies the optical system and the associated electro-optical circuitry.

[0062] If both wavelengths are equal, the Doppler shift may be totally suppressed and the signal frequency is moved to baseband. This lower Doppler frequency allows for a significant reduction of line rate, data throughput and hardware complexity in systems where a large number of input apertures (101) are desired. If the Doppler frequency is preserved, then the Doppler shift should be disambiguated from the FMCW modulation in order to be measured. One example way to achieve this is to change K in the FMCW frequency sweep over time (e.g. alternating its sign) and to compare the resulting electrical frequency shifts between both modulation slopes.

[0063] One example way to subtract the frequencies obtained from the optical IQ receivers (5, 6) above is to multiply one of the currents with the complex conjugate of the other. Standard frequency mixing techniques can be applied. This can be done in the digital or analog domain and potentially on the basis of the interference signals as indicated below:

[0064] In an embodiment, this can be implemented, as shown in figure 2, using one or more mixers 121-124 connected to the first imaging optical IQ (5) and to the reference optical IQ (6) outputs or to the first low-pass filter set (13) outputs. Each of the four multiplicative terms above contains a first intermodulation product with the difference of frequencies Δf (low frequency) and a second intermodulation product which includes the sum of Doppler frequencies 2v_j(k_1 + k_2).

[0065] When the four multiplicative terms are combined, the terms related to the sum of Doppler frequencies are cancelled, and only the low-frequency intermodulation products, which contain the depth information in their frequency (as per Δf above), remain as output intermodulation products (16).
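
The cancellation can be illustrated with synthetic IQ signals: multiplying the imaging signal by the complex conjugate of the reference removes the Doppler term common to both channels while preserving the range-dependent beat. The frequencies below are arbitrary illustrative values, not parameters from the patent.

    import numpy as np

    # Synthetic demonstration of Doppler cancellation by conjugate mixing (illustrative only).
    fs = 1.0e6
    t = np.arange(0, 10e-3, 1 / fs)
    f_range = 20e3      # assumed range-dependent beat frequency on the imaging channel
    f_doppler = 150e3   # assumed Doppler shift common to both channels

    imaging = np.exp(1j * 2 * np.pi * (f_range + f_doppler) * t)   # I + jQ, imaging receiver
    reference = np.exp(1j * 2 * np.pi * f_doppler * t)             # I + jQ, reference receiver

    product = imaging * np.conj(reference)   # intermodulation product of interest
    f_est = np.angle(product[1:] * np.conj(product[:-1])).mean() * fs / (2 * np.pi)
    print(round(f_est))   # ~20000 Hz: the Doppler term is gone, the range beat remains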

[0066] According to some embodiments, higher frequency components of each of the multiplicative terms are filtered out using a second set of low-pass filters (23), such that only the low-frequency intermodulation products are kept. These low-frequency intermodulation products contain the depth information in their frequency (as per Δf above), as output intermodulation products (16). According to some embodiments, the output intermodulation products (16) are amplified using one or more non-linear amplifiers (25).

[0067] In the embodiment shown in figure 2, the one or more mixers 121-124 include a first mixer 121 designed to mix the first quadrature component (8) and the reference in-phase component (9), giving the multiplicative term (I_1Q · I_2I), and a second mixer 122, designed to mix the first in-phase component (7) and the reference in-phase component (9), giving the multiplicative term (I_1I · I_2I).

[0068] In an alternative demodulation technique, one can work with the individual components of the interference signals, meaning the first in-phase component (7), the first quadrature component (8), and the derivatives of the reference interference signals, as provided by a time derivation module (15), which produces the time-derivative of the reference in-phase component (90) and the time-derivative of the reference quadrature component (91), and adapt FM demodulation techniques that simultaneously carry out baseband conversion and demodulation.

[0069] This can be particularly useful in embodiments where both the imaging oscillator and the reference oscillator are the same, since in that situation the frequency difference in the multiplicative terms as expressed above would be Δf = 0, and the use of time-derivatives makes it possible to extract the frequency-encoded depth information into the amplitude of the time-derived signals.

[0070] For example, the operation that can be performed in the one or more mixers (121) - (124) in this case, in which the imaging and the reference oscillator are the same, is:

[0071] Similarly to the direct frequency mixing approach, in this case one can generate the four multiplicative terms above and combine them to leave only the DC component, or alternatively one can filter out the higher frequency component of each of the multiplicative terms using a second set of low-pass filters (23) and keep only the DC components, which contain the depth and Doppler information in their amplitude.

[0072] In order to separate the Doppler and depth information, one can change K in the FMCW frequency sweep over time, e.g., alternating its sign, and compare the resulting DC component shifts between both modulation slopes.
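
A minimal numerical sketch of this derivative-based demodulation, assuming identical imaging and reference oscillators, is given below; it shows the common interference frequency being mapped into a DC amplitude that is also scaled by the object amplitude, which is the mixing of reflectivity and frequency discussed in the next paragraph. Names and values are illustrative assumptions.

    import numpy as np

    # Sketch of derivative-based FM demodulation (illustrative, assumes identical oscillators).
    fs = 1.0e6
    t = np.arange(0, 10e-3, 1 / fs)
    f_common = 30e3    # assumed common interference frequency on both channels
    amp = 0.8          # assumed object-dependent amplitude of the imaging signal

    s_img = amp * np.exp(1j * 2 * np.pi * f_common * t)   # imaging IQ signal
    s_ref = np.exp(1j * 2 * np.pi * f_common * t)         # reference IQ signal
    ds_ref = np.gradient(s_ref, 1 / fs)                   # time derivative of the reference

    dc = np.mean(s_img * np.conj(ds_ref))   # mixing with the derivative yields a DC term
    print(abs(dc) / (2 * np.pi))            # ~ amp * f_common = 24000: frequency encoded in the amplitude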

[0073] A drawback of direct FM demodulation is the fact that the reflectivity of the object (ρ_j) and the frequency shift get mixed in this DC value. According to some embodiments, this can be addressed by demodulating the amplitude separately: I_1I(t)² + I_1Q(t)² = ρ_j·A(x_j)²

[0074] Alternatively, in cases where the imaging and reference oscillators are the same, the object reflectivity can be obtained also from the multiplicative terms between the signal components and the reference components before the time-derivative (e.g. as provided by the first mixer (121) and second mixer (122) from figure 2).

[0075] For use of the direct FM demodulation approach, Figure 2 illustrates a time derivation module (15) and the one or more mixers (121) - (124) that include a third mixer (123), designed to mix the first in-phase component (7) and the time-derived reference quadrature component (91), and a fourth mixer (124), designed to mix the first quadrature component (8) and the time-derived reference quadrature component (91). Therefore, the embodiment in figure 2 provides a demodulation scheme that includes both frequency and amplitude demodulation simultaneously.

[0076] Figure 3 shows an implementation where multiple imaging channels (3) are combined with a common reference channel (4) obtained from reflected light (112) coming from the same scene, but mixed with a separate optical source (one with a different wavelength, yet coherent with at least a fraction of the power collected from the scene).

[0077] The advantage of the scheme shown in figure 3 is that the different imaging channels (3) preserve the relative phase difference (contained in the IQ data) in the electrical domain after demodulation. This allows for the coherent combination of the demodulated signals coming from said imaging channels (3) in order to recover the different directions.

[0078] For the various mixers (represented collectively as 12 in Figure 3), it is possible to use different construction schemes. For example, the mixers may be implemented in the analog domain on the basis of circuits that rely on a translinear scheme. One of these circuits may be a Gilbert cell, an example of which is depicted in Figure 5. This circuit has the advantage of working in all four quadrants of the interference signals. Given that the inputs to the cell are differential and voltage-based, the photocurrents coming from the optical IQ receivers (5, 6) above may be amplified by a transimpedance amplifier (14) to a voltage and, if appropriate, derived in the analog domain, according to some embodiments.

[0079] In order to simplify the Gilbert cell, it may be possible to use the photocurrents of a balanced differential pair as the source of both input signals and current bias. This will reduce the need for intermediate transimpedance amplifiers and make the cell more amenable to replication to achieve large scale integration. According to some embodiments, the imaging oscillator (111) to be mixed with the different imaging channels (3) can be generated and distributed as a voltage signal over the detection array (e.g., the imaging channels) from a single imaging input aperture (101) without major scalability issues.

[0080] In order to simplify the readout of the cell, integration schemes with switched capacitors and multiplexed video outputs can be applied as shown, for example, in Figure 6. Readout of such switched capacitors can be structured in the same way as normal imaging sensors. For example, the switched capacitors can be organized by column and multiplexing schemes can be used to route the analog values to appropriate ADC circuitry.

[0081] Lastly, in order to provide the desired mixing function, it is also possible to modulate the amplitude of the optical local oscillator that goes to each of the imaging channels. If this is done, no electronic mixing is needed after photodetection, which provides advantages in terms of system complexity. According to some embodiments, an optical modulator (17) is used to modulate the amplitude of the optical local oscillator, as shown in Figure 4. In an embodiment, the optical modulator (17) is an optical amplitude modulator, whether based on electro-optic absorption, a Mach-Zehnder interferometer or otherwise.

[0082] If the amplitude modulation leaves some level of phase modulation, a phase modulator can be added in series to ensure constant phase operation and avoid undesired frequency shifts in the reference channel. Amplitude modulation can also be obtained in different ways, such as through an optical amplifier, modulation of a laser current, etc.

[0083] In some embodiments, the first in-phase component (7), the first quadrature component (8), the reference in-phase component (9) and the reference quadrature component (10) are multiplied with different versions of the reference signal, shifted 90° relative to each other, in order to achieve the desired mathematical result directly. To achieve this physically, distribution of separately modulated reference signals to each output mixer (12) may be used. Given the fact that the modulation to be applied to these two channels is also orthogonal in the electrical domain, it is possible, in some embodiments, to add them together in the modulation signal, as shown in Figure 4.

[0084] According to some embodiments, the products between the first in-phase component (7) and the first quadrature component (8) or between the reference in-phase component (9) and the reference quadrature component (10) produce high-frequency intermodulation products that can be filtered out.

[0085] In order to separate the amplitude and distance information, the modulation signal applied to the optical modulator (17) can be switched between different modes (with or without time derivative) so that alternatively depth information and/or signal amplitude is recovered, according to some embodiments. This time-domain multiplexing, which may be suitable for implementation with an integrator that is synchronized with the switching of the demodulation signal, can also be replaced by other multiplexing schemes (frequency domain multiplexing, code multiplexing, etc.). Switching the demodulation signal on both the imaging channel and the reference channel can be performed using switches (27).

[0086] According to some embodiments, Figure 4 shows the combination of the two implementation options described above - Doppler frequency demodulation by means of amplitude modulation of the optical reference signal and time multiplexing of amplitude/frequency demodulation, for the case of a single wavelength.

[0087] According to some embodiments, rather than modulating the optical local oscillator signals (e.g., by using optical modulator 17), different optical source channels are modulated to provide modulated source beams of illumination directed towards one or more objects. In this way, the light is modulated at the source before being transmitted towards the one or more objects. Figure 11 illustrates a source modulation scheme (1100) that can provide different modulation to any number of optical source channels. A laser source (1102) has its output split amongst any number of different channels using any number of 1x2 optical splitters (1104). Laser source (1102) may be the same as light source (102) used to generate the imaging light (110). In some other embodiments, light source (102) represents all of source modulation scheme (1100).

[0088] Each of the different source channels of source modulation scheme (1100) can have its optical signal amplified using a semiconductor optical amplifier (SOA) 1106, and subsequently modulated using optical modulator (1108), according to some embodiments. In some arrangements, optical modulator (1108) is before SOA (1106) on one or more of the source channels. Any of the optical modulators (1108) can be configured to modulate phase, frequency, or both phase and frequency of the corresponding optical signal, such that each of the source channels provides an optical output (1110) that can be independently modulated with respect to the optical outputs (1110) of the other source channels. Optical modulators (1108) may be any type of electro-optical modulator. According to some embodiments, any of the one or more SOAs (1106) and/or one or more optical modulators (1108) receive a signal from the reference channel to affect the amplitude, phase, and/or frequency modulation being performed on a given source channel. According to some embodiments, the various optical outputs (1110) are transmitted towards one or more objects and received from the one or more objects on imaging channels (3) as illustrated in Figures 3 or 4. The received light across the various imaging channels (3) can be mixed with the imaging oscillator (111) at the various imaging receivers (5) without the need for mixers (12) or optical modulator (17), since the modulation has already been performed on the source light, according to some embodiments. Imaging oscillator (111) may represent light generated from laser source (1102).

[0089] When Doppler shifts are large (e.g., due to a high relative speed of the object being imaged), demodulation of the individual signals from the array to baseband provides for highly scalable but slow electronics readout. This achieves the desired effect but may suffer from significant signal-to-noise ratio (SNR) degradation, especially when performance is considered relative to the potential array gain resulting from the mixing. This may be particularly relevant at optical wavelengths, where the signals collected by the different elements of the array are, in the ideal case, dominated by shot noise that stems from the discrete nature of photon detection. If the reference channel is not provided any SNR advantage relative to the other inputs to the mixers in the array, then the array gain from the coherent combination of the array outputs may be negated. Additionally, at low input signal SNR per element, there is an additional degradation, characteristic of incoherent demodulation. In a general LIDAR system, this can reduce the range that is achieved using such a construction.

[0090] Thus, according to some embodiments, an additional signal filter arrangement is provided on the reference channel to provide a clean set of tones and minimize the noise impact on the mixers. The sampling period of a camera reading out the imaging array is typically of the order of 100 µs to 20 ms, many orders of magnitude longer than what is possible for single-channel reference sampling (which can be in excess of 1 GSPS), but it can be faster than the frame update rate (typically around 50 ms) for many other applications. Therefore, according to some embodiments, additional filtering is applied to the reference signals, for example through long acquisition windows and narrow digital filters that are centered around the signal peaks in the spectrum.

[0091] Figure 7 illustrates an example of a signal filter arrangement (700) provided on the reference channel to increase the SNR of the reference signal. According to some embodiments, signal filter arrangement (700) is provided after the reference optical IQ receiver (6) but before the signal is mixed with the imaging channel(s) via, for example, mixers (12). According to some embodiments, signal filter arrangement (700) is provided after the reference optical IQ receiver (6) in the system illustrated in Figure 4, where the amplitude of the optical local oscillator that goes to each of the imaging channels is modulated such that no electronic mixing is needed (e.g., mixers 12 are not needed). According to some embodiments, signal filter arrangement (700) includes transimpedance amplifier (14) and low pass filter (13), which may be the same as transimpedance amplifier (14) and low pass filter (13) as seen on the reference channel from any of Figures 2-4. Following these elements, signal filter arrangement (700) includes an analog-to-digital converter (A/D) (702) and a temporal filtering unit (704). A/D (702) can be any standard analog-to-digital converter to convert the analog voltage output from transimpedance amplifier (14) into a digital signal.

[0092] According to some embodiments, temporal filtering unit (704) comprises a plurality of accumulators and filters that accumulate samples of the reference channel signal and average the samples to increase the SNR of the reference signal. Frequency bands having a low amplitude, or an amplitude beneath a given threshold, are suppressed to reduce noise and maximize the clean portions of the signal.

[0093] The filtered reference signal with the increased SNR is identified as Ref1 being output from the temporal filtering unit (704). According to some embodiments, the Ref1 signal is mixed with one or more of the imaging channels (represented as imaging array (706)) using mixers (12). According to some other embodiments, the Ref1 signal is used to affect the modulation provided by optical modulator (17) to the imaging oscillator (111) that is mixed with the various imaging channels (3) of imaging array (706). According to some other embodiments, the Ref1 signal is used to affect the modulation provided to the different source channels of source modulation scheme (1100). In any case, a clean carrier for each object in the field of view can be produced, which can in turn be used to optimize output SNR, even for low input SNR levels per channel. A longer sample accumulation time for the reference channel relative to the camera will give its channel an intrinsic SNR advantage from averaging under additive white Gaussian noise (AWGN) conditions, while subsequent thresholding and filtering can optimize low-SNR performance. Figure 8 illustrates the camera sampling rate and the higher sampling rate produced on the reference channel using signal filter arrangement (700), according to some embodiments.
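
A simplified sketch of such a temporal filtering unit is given below; it assumes the reference tones are stable over the accumulation window and that each accumulated block spans an integer number of signal periods. The function and its parameters are hypothetical, not taken from the patent.

    import numpy as np

    def filter_reference(samples, n_blocks, threshold):
        """Hypothetical sketch of the temporal filtering described above: accumulate
        blocks of the digitized reference signal, average them to raise the SNR,
        then suppress frequency bins whose amplitude falls below a threshold."""
        usable = samples[: (len(samples) // n_blocks) * n_blocks]
        averaged = usable.reshape(n_blocks, -1).mean(axis=0)   # average the accumulated blocks
        spectrum = np.fft.fft(averaged)
        spectrum[np.abs(spectrum) < threshold] = 0.0           # keep only the clean tones
        return np.fft.ifft(spectrum)                           # filtered reference (Ref1)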

[0094] According to some embodiments, temporal filtering unit (704) includes a series of phase locked loops (PLLs), assuming that a single tone can be expected per reference channel input. This scheme works when the imaged objects generate carriers with stable frequencies during the extended reference sample collection window, meaning that the objects have stable distances and relative velocities, at least over the integration time. Stable frequencies may not be generated, however, if the objects are subjected to ±1 g acceleration or higher and camera integration times are 0.1 ms or higher, for example. However, it is possible to compensate the chirp numerically at the filtering stage. This can be done through parallel application of multiple chirps to the digitized reference signal, corresponding to different object accelerations, finding the maximum for each peak, and then filtering and applying the filtered signal with the corresponding chirp as an output to the digital processor. In some cases with a large integration window, compensation becomes increasingly complex as the phase error grows with time and the potential gain from integration increases.
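
A sketch of the parallel chirp compensation described above is shown below, assuming a round-trip Doppler rate of 2a/λ for an object acceleration a; the candidate list, names and selection criterion are assumptions for illustration, not the patent's implementation.

    import numpy as np

    def dechirp_reference(ref, fs, accel_candidates, wavelength):
        """Hypothetical sketch: apply counter-chirps for several candidate object
        accelerations and keep the one that yields the sharpest spectral peak."""
        t = np.arange(ref.size) / fs
        best_peak, best_accel, best_signal = -np.inf, None, None
        for a in accel_candidates:
            doppler_rate = 2.0 * a / wavelength                      # Hz/s, round-trip convention
            test = ref * np.exp(-1j * np.pi * doppler_rate * t**2)   # counter-chirp
            peak = np.abs(np.fft.fft(test)).max()
            if peak > best_peak:
                best_peak, best_accel, best_signal = peak, a, test
        return best_accel, best_signal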

[0095] When multiple objects are being imaged simultaneously, the situation changes, as the presence of multiple received tones increases the noise bandwidth of the demodulation output and hence has an impact on the output of the array, which can negate the coherent combination of the signals and result in an SNR performance that grows only with the square root of the number of elements in the array. One way of dealing with multiple objects is to combine the detection and demodulation scheme discussed above with a suitable illumination control, in such a way that only one or a small number of targets is producing reflections at a given point in time. In one example, the optical source can be implemented using an optical phased array (OPA) to scan the scene. The OPA can be implemented using source modulation scheme (1100) with phase modulation applied (e.g., using optical modulators 1108) to each of the source channels. In another example, it is possible to do a spatial Fourier transform of the incoming optical signal through a lens focusing light on subarrays that correspond to specific directions. When this is done using a cylindrical lens, each subarray becomes a 1D coherent receiver array and the number of directions imaged (and the number of corresponding targets) becomes significantly smaller.

[0096] According to some embodiments, a different signal filter arrangement (900) can be provided on the reference channel (e.g., of any of the systems illustrated in one of Figures 2-4) to generate an intermediate array with a mixer that allows faster acquisition after mixing, as illustrated in Figure 9. This staged approach allows for a better tolerance to shifts in frequency, as it allows the downmixing frequency to adapt at a higher rate. Given that the sampling rate will be higher than for the camera array, this intermediate array can have a lower number of elements, and hence a lower angular resolution. However, this intermediate array will be able to resolve the directions of the different tones and apply both directional and frequency filtering, with different demodulation outputs, which can be useful in multi-target situations to reduce clutter and improve SNR.

[0097] According to some embodiments, signal filter arrangement (900) includes the temporal filtering unit (704) as discussed above with reference to Figure 7. The output from temporal filtering unit (704) (Ref1) is still mixed with each of the imaging channels from imaging array (706). However, signal filter arrangement (900) also generates a set of additional reference outputs (collectively referred to as Ref2 in Figure 9) to mix with the imaging channels from imaging array (706). According to some embodiments, each of the additional reference outputs (Ref2) corresponds to a coarse direction of received light from the scene containing the multiple objects. A plurality of secondary reference channels (902) are mixed with the Ref1 signal using a series of mixers (904). According to some embodiments, each of the plurality of secondary reference channels (902) represents a smaller version of imaging array (706) with some direction discrimination ability when they are all combined to generate the set of additional reference outputs (Ref2). The output from mixers (904) is received by a second A/D and then a fast Fourier transform (FFT) is performed on the signal using FFT element (906) in order to more easily distinguish the noise from the signal peaks and to transform the secondary reference channels (902) back to the channel domain. A thresholding/filtering stage (908) is used to filter out those frequency components having a low amplitude or an amplitude below a given threshold (e.g., removing the noise components). According to some embodiments, a phase alignment stage (910) is used to coherently accumulate the signals to compensate for acceleration or deceleration of the imaged objects and break the reference signal into intermediate sample periods. Each one of the generated additional reference outputs (Ref2) can be mixed with the signal of a particular imaging channel of imaging array 706, according to some embodiments. According to some other embodiments, the Ref2 signal is used to affect the modulation provided to the different source channels of source modulation scheme (1100). Figure 10 illustrates the camera sampling rate and the higher sampling rates produced on the reference channel for both the Ref1 signal and the Ref2 signal, with the Ref2 signal sampling rate being an intermediate sample rate, according to some embodiments.

[0098] Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like refer to the action and/or process of a computer or computing device, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (for example, electronic) within the registers and/or memory units of the computer system into other data similarly represented as physical quantities within the registers, memory units, or other such information storage transmission or displays of the computer system. The embodiments are not limited in this context.

[0099] The terms “circuit” or “circuitry,” as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The circuitry may include a processor and/or controller configured to execute one or more instructions to perform one or more operations described herein. The instructions may be embodied as, for example, an application, software, firmware, etc. configured to cause the circuitry to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a computer-readable storage device. Software may be embodied or implemented to include any number of processes, and processes, in turn, may be embodied or implemented to include any number of threads, etc., in a hierarchical fashion. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), an application-specific integrated circuit (ASIC), a system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc. Other embodiments may be implemented as software executed by a programmable control device. As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth.

[0100] Any of the various electro-optical or electrical elements discussed with reference to any of the systems disclosed herein may be components arranged on a planar lightwave circuit (PLC) or an optical integrated circuit (OIC). Accordingly, the PLC or OIC may include any number of integrated waveguide structures to guide light around the PLC or OIC. The PLC or OIC may include a silicon-on-insulator (SOI) substrate using silicon waveguides. In some other embodiments, the PLC or OIC includes a III-V semiconductor material having waveguides comprising gallium nitride (GaN), silicon nitride (Si3N4), indium gallium arsenide (InGaAs), gallium arsenide (GaAs), indium phosphide (InP), indium gallium phosphide (InGaP), or aluminum nitride (AlN), to name a few examples.