

Title:
LIGHT DETECTION AND RANGING SYSTEM
Document Type and Number:
WIPO Patent Application WO/2023/065004
Kind Code:
A1
Abstract:
There is provided a light detection and ranging (LiDAR) system and a method of operation of same. The LiDAR system is a neuromorphic LiDAR system-on-chip (SOC) that may include a CMOS chip that includes a field analog vision (FAV) module for generating transformed point clouds in a Cartesian coordinate system and a silicon photonics chip that includes an optical computing system for performing computations of layers of a neural network model to process the transformed point clouds in the Cartesian coordinate system with minimum latency. The LiDAR system can be configured as a hybrid 2.5D integrated circuit or a hybrid 3D integrated circuit, which reduces the size of the LiDAR system.

Inventors:
SAHA SREENIL (CA)
SALMANI MAHSA (CA)
ESHAGHI ARMAGHAN (CA)
Application Number:
PCT/CA2021/051469
Publication Date:
April 27, 2023
Filing Date:
October 19, 2021
Assignee:
HUAWEI TECH CANADA CO LTD (CA)
International Classes:
G01S17/89; G06E3/00; G06N3/02; G11C27/00; G01S17/894
Foreign References:
US20200219264A1  2020-07-09
US20210150230A1  2021-05-20
Other References:
YING LI; LINGFEI MA; ZILONG ZHONG; FEI LIU; DONGPU CAO; JONATHAN LI; MICHAEL A. CHAPMAN: "Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review", arXiv.org, Cornell University Library, Ithaca, NY 14853, 20 May 2020 (2020-05-20), XP081675428
Attorney, Agent or Firm:
MBM INTELLECTUAL PROPERTY LAW LLP (CA)
Claims:
WE CLAIM:

1. A light detection and ranging (LiDAR) system comprising:
a field analog vision (FAV) module comprising:
a scanning sensor configured to scan an environment and generate one or more point clouds based on the scanned environment, each point cloud including data points in a spatial coordinate system,
a coordinate transformer configured to convert each of the one or more point clouds to a transformed point cloud that includes data points in a Cartesian coordinate system,
a memory operatively connected to the coordinate transformer, the memory configured to store a neural network model and each transformed point cloud received from the coordinate transformer, and
one or more high voltage drivers operatively connected to the memory, the one or more high voltage drivers configured to, for each transformed point cloud, generate one or more electrical analog signals indicative of the transformed point cloud; and
an optical computing (OC) system operatively connected to the one or more high voltage drivers, the OC system configured to process each respective transformed point cloud by performing computations of a neural network model on each respective transformed point cloud received through the one or more electrical analog signals from the one or more high voltage drivers.

2. The system of claim 1, wherein the coordinate transformer comprises a sine generator and a cosine generator, each of the sine generator and the cosine generator implemented using analog computational circuits including one or more of a geometric-mean circuit, a divider circuit and a multiplier circuit.

3. The system of claim 1, wherein the memory is a digital memory, the system further comprising an analog-to-digital converter operatively connected to the digital memory, the analog-to-digital converter configured to convert each transformed point cloud from an analog domain to a digital domain.

4. The system of claim 3, further comprising: a digital-to-analog converter to convert the one or more transformed point clouds stored in the digital memory from the digital domain to the analog domain.


5. The system of claim 1, wherein the memory is a non-volatile analog memory operatively connected to the coordinate transformer.

6. The system of claim 1, wherein the one or more electrical analog signals are utilized to drive micro ring modulators in the OC system.

7. The system of claim 1, wherein the LiDAR system is configured as an integrated circuit comprising a silicon interposer configured to provide connectivity between the FAV module and the OC system, the integrated circuit including a chip including the FAV module, a chip including the memory and a chip including the OC system being placed onto the silicon interposer.

8. The system of claim 7, wherein the connectivity is provided using wire bonds, micro solder bumps, copper pillars, or a combination thereof.

9. The system of claim 1, wherein the LiDAR system is configured as an integrated circuit including the FAV module and the memory being placed onto a chip including the OC system.

10. A method performed by a light detection and ranging (LiDAR) system including a field analog vision (FAV) module and an optical computing system, the method comprising:
scanning, by a scanning sensor of the FAV module, an environment and generating, by the scanning sensor, one or more point clouds based on the scanned environment, each of the one or more point clouds including data points in a spatial coordinate system;
converting, by a coordinate transformer, each of the one or more point clouds to a transformed point cloud, wherein axes of the transformed point cloud are aligned with axes of the scanning sensor and each transformed point cloud includes data points in a Cartesian coordinate system;
storing in a memory each transformed point cloud;
generating, by one or more high voltage drivers, one or more electrical analog signals indicative of the one or more transformed point clouds; and
processing, by the optical computing system, each respective transformed point cloud by performing computations of a neural network model on each respective transformed point cloud received through the one or more electrical analog signals from the one or more high voltage drivers.


11. The method of claim 10, further comprising converting each transformed point cloud into a digital transformed point cloud, and wherein storing each transformed point cloud comprises storing each digital transformed point cloud in a digital memory.

12. The method of claim 11, further comprising converting each transformed point cloud into an analog transformed point cloud prior to generating the one or more electrical analog signals, and wherein storing each transformed point cloud comprises storing each analog transformed point cloud in an analog memory.

Description:
LIGHT DETECTION AND RANGING SYSTEM

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This is the first application filed for the present invention.

FIELD

[0002] The present disclosure pertains to light detection and ranging (LiDAR) technology, and in particular to LiDAR sensors.

BACKGROUND

[0003] In the field of autonomous vehicles, growing safety concerns and the need for accurate detection of objects (e.g. other vehicles, pedestrians and other relevant entities in the surrounding environment of the autonomous vehicle) have led to the use of light detection and ranging (LiDAR) sensors in complement with cameras. A LiDAR sensor may be mounted to the vehicle and may be used to scan an environment in a field of view of the LiDAR sensor to generate point clouds having high spatial resolution. Existing LiDAR sensors communicate with a separate computing system which processes point clouds provided by the LiDAR sensors. For example, the separate computing system that processes the point clouds may perform the computations using layers of a neural network model for a perception task, such as object detection, object recognition or semantic segmentation. Due to the separate computing system, the hardware size of existing LiDAR sensors is not ideal and therefore reduction in hardware size is desired.

[0004] Further, processing of the point clouds by a computing system can be time-consuming due to the large number of data points in these point clouds generated by a LiDAR sensor. A further problem is that the processing of the point clouds can be computationally heavy due to the nature of algorithms used for processing point clouds, for example because this processing includes iterative computation.

[0005] In order to address this issue of computation burden associated with the processing of point clouds generated by LiDAR sensors, computing systems that process point clouds generally include graphics processing units (GPUs). GPUs include a plurality of parallel cores, have high memory bandwidth for data transfer between components, and are used as specialized hardware accelerators which perform computing tasks more quickly than central processing units, thereby enabling point clouds to be processed in real time. However, limitations exist for GPUs. For example, GPUs cannot be used as standalone devices for hardware acceleration due to their dependency on a central processing unit (CPU) for data offloading and for scheduling the execution of instructions of algorithms used to process point clouds. Another problem is that GPUs have high execution time, especially for large data, which can be due to the GPUs' inability to handle all data at the same time. In other words, existing LiDAR sensors provide point clouds to separate computing systems that include GPUs as specialized hardware accelerators which process point clouds. However, these separate computing systems do not provide sufficient computing power for processing point clouds.

[0006] Therefore, there is a need for a new LiDAR system that is not subject to one or more limitations of the prior art.

[0007] This background information is provided to reveal information believed by the applicant to be of possible relevance to the present invention. No admission is necessarily intended, nor should be construed, that any of the preceding information constitutes prior art against the present invention.

SUMMARY

[0008] An object of embodiments of the present disclosure is to provide a light detection and ranging (LiDAR) system for scanning an environment in a field of view (FOV) of the LiDAR system and for processing one or more point clouds generated by each scan of the environment. The LiDAR system is implemented as a system-on-chip (SOC) that includes a complementary metal-oxide-semiconductor (CMOS) chip and a silicon photonics (SiPh) chip. The CMOS chip includes a field analog vision (FAV) module. The FAV module includes a scanning device configured to scan an environment and generate one or more point clouds based on the scanned environment and a coordinate transformer configured to convert each of the one or more point clouds to a transformed point cloud that includes data points in a Cartesian coordinate system. The FAV module further includes a memory operatively connected to the coordinate transformer, the memory configured to store a neural network model and each transformed point cloud received from the coordinate transformer. The FAV module further includes one or more high voltage drivers operatively connected to the memory, the one or more high voltage drivers configured to generate one or more electrical analog signals indicative of each respective transformed point cloud. The LiDAR system further includes a silicon photonics chip implementing an optical computing (OC) system that is operatively connected to the one or more high voltage drivers. The OC system is configured to receive each transformed point cloud through the one or more electrical analog signals from the high voltage drivers and to perform computations of each layer of the neural network model to process the respective transformed point cloud.

[0009] In some embodiments, the coordinate transformer includes a sine generator and a cosine generator, each of the sine generator and the cosine generator implemented using analog computational circuits including one or more of a geometric-mean circuit, a divider circuit and a multiplier circuit.

[0010] In some embodiments, the memory is a digital memory, the system further including an analog-to-digital converter operatively connected to the digital memory, the analog-to-digital converter configured to convert the one or more transformed point clouds from the analog domain to the digital domain. In some embodiments, the system further includes a digital-to-analog converter to convert the one or more point clouds stored in the digital memory from the digital domain to the analog domain.

[0011] In some embodiments, the memory is a non-volatile analog memory operatively connected to the coordinate transformer.

[0012] In some embodiments, the one or more electrical analog signals are utilized to drive micro ring modulators in the OC system.

[0013] In some embodiments, the system is integrated in a circuit including a silicon interposer configured to provide connectivity between the FAV module, the memory and the OC system. The system includes a chip including the FAV module, a chip including the memory and a chip including the OC system, each placed onto the silicon interposer.

[0014] In some embodiments, the system is integrated in a circuit without a silicon interposer. The system includes the FAV module and the memory being placed onto a chip including the OC system.

[0015] In accordance with embodiments, there is provided a method performed by a light detection and ranging (LiDAR) system. The LiDAR system includes a field analog vision (FAV) module and an optical computing system. The method includes generating, by the FAV module, one or more transformed point clouds, each transformed point cloud including data points in a Cartesian coordinate system. Generating includes scanning an environment and generating one or more point clouds based on the scanned environment, each point cloud including data points in a spatial coordinate system. Generating further includes converting the one or more point clouds to the one or more transformed point clouds, storing in a memory the one or more transformed point clouds and generating one or more electrical analog signals indicative of the transformed point clouds. The method further includes performing, by the optical computing system, computations of each layer of a neural network model based on the transformed point clouds received through the one or more electrical analog signals.

[0016] In some embodiments, the method further includes converting the one or more transformed point clouds into a digital domain prior to storing the one or more transformed point clouds, wherein the memory is a digital memory. In some embodiments, the method further includes converting the one or more transformed point clouds into an analog domain prior to generating the one or more electrical analog signals.

[0017] In some embodiments, storing includes storing the one or more transformed point clouds in an analog memory.

[0018] Embodiments have been described above in conjunction with aspects of the present invention upon which they can be implemented. Those skilled in the art will appreciate that embodiments may be implemented in conjunction with the aspect with which they are described, but may also be implemented with other embodiments of that aspect. When embodiments are mutually exclusive, or are otherwise incompatible with each other, it will be apparent to those skilled in the art. Some embodiments may be described in relation to one aspect, but may also be applicable to other aspects, as will be apparent to those of skill in the art.

BRIEF DESCRIPTION OF THE FIGURES

[0019] Further features and advantages of the present disclosure will become apparent from the following detailed description, taken in combination with the appended drawings, in which:

[0020] FIG. 1A illustrates a scanning spinning light detection and ranging (LiDAR) sensor.

[0021] FIG. 1B illustrates a flash LiDAR sensor.

[0022] FIG. 1C illustrates a direct time of flight (DTOF) system of a LiDAR sensor for directly measuring the time of flight (TOF) to compute the distance to an object in the environment of the LiDAR sensor.

[0023] FIG. 1D illustrates the principle of indirect time of flight (ITOF) measurement of the distance to an object in an environment of a LiDAR sensor.

[0024] FIG. 2 illustrates an overall structure of a LiDAR system, in accordance with an embodiment of the present disclosure.

[0025] FIG. 3 illustrates a structure of a field analog vision (FAV) module of the LiDAR system of FIG. 2, in accordance with embodiments of the present disclosure.

[0026] FIG. 4A illustrates a desired coordinate system utilized in the FAV module of FIG. 3, in accordance with embodiments of the present disclosure.

[0027] FIG. 4B illustrates the FAV module of FIG. 3 scanning an environment in a field of view (FOV) of the LiDAR system, in accordance with embodiments of the present disclosure.

[0028] FIG. 5 illustrates a computation process of the coordinate transformer of the field analog vision module of FIG. 3 for transforming a point cloud in a spherical coordinate system into a transformed point cloud in a Cartesian coordinate system, in accordance with embodiments of the present disclosure.

[0029] FIG. 6 illustrates, in block diagrams, a complementary metal oxide semiconductor (CMOS) sine function generator and a CMOS cosine function generator of the coordinate transformer of the FAV module of FIG. 3, in accordance with embodiments of the present disclosure.

[0030] FIG. 7 illustrates examples of a geometric-mean circuit, a squarer/divider circuit and a multiplier circuit used in the sine function generator and the cosine function generator of FIG. 6, in accordance with embodiments of the present disclosure.

[0031] FIG. 8 illustrates a photonics-based computing system that includes an optical computing system integrating a FAV module and a digital memory, in accordance with embodiments of the present disclosure.

[0032] FIG. 9 illustrates a FAV module of the LiDAR system with an analog memory pre/post processing unit, in accordance with embodiments of the present disclosure.

[0033] FIG. 10 illustrates a photonics-based computing system that includes an optical computing system integrating a FAV module and a non-volatile analog memory device, in accordance with embodiments of the present disclosure.

[0034] FIG. 11A illustrates an analog CMOS-based resistive processing unit (RPU) device used in an analog memory of FIG. 10, in accordance with embodiments of the present disclosure.

[0035] FIG. 11B illustrates a CMOS memristor (emulator) circuit used in an analog memory of FIG. 10, in accordance with embodiments of the present disclosure.

[0036] FIG. 12A illustrates a system-in-package (SiP) of the LiDAR system of FIG. 2, in accordance with an embodiment of the present disclosure.

[0037] FIG. 12B illustrates another SiP of the LiDAR system of FIG. 2, in accordance with another embodiment of the present disclosure.

[0038] FIG. 13 illustrates an example method performed by a light detection and ranging system of the present disclosure.

[0039] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.

DETAILED DESCRIPTION

[0040] The present disclosure provides a light detection and ranging (LiDAR) system for scanning an environment that includes objects in a field of view (FOV) of the LiDAR system. According to embodiments, the LiDAR system is a neuromorphic system-on-chip (SOC) including a field analog vision (FAV) module for generating one or more point clouds in a Cartesian coordinate system and a 3D integrated CMOS-photonics chip that forms an optical computing system for performing computations of layers of a neural network model to process the one or more point clouds in a Cartesian coordinate system with minimum latency. The optical computing system includes an optical neural network (ONN) onto which the processed point clouds are loaded. The ONN is a class of neural networks that can exploit the high bandwidth and low latency of optical computation to perform the tasks that are desirable in LiDAR applications, for example object detection, object classification, part segmentation, and the like. In various embodiments, analog circuitry processes the raw data obtained from a scanning sensor, for example an image sensor or a photodiode, and transforms point clouds in a spatial coordinate system (for example spherical coordinates) into point clouds in a Cartesian coordinate system in order to generate an appropriate format for processing the point clouds using the optical computing (OC) system of the LiDAR system. In some embodiments, a hybrid 2.5D system-in-package (SiP) integration of CMOS-photonics systems can reduce the size of an optical computing based LiDAR system. In some embodiments, a hybrid 3D SiP integration of CMOS-photonics systems can reduce the size of the optical computing based LiDAR system.

[0041] According to embodiments, the LiDAR sensor generates a labeled point cloud as an output (e.g. each point in the point cloud is associated with a class label identifying the class of the object (e.g. car, pedestrian, etc.)). In some embodiments, the labeled point cloud can additionally include instance labels.

[0042] According to embodiments, there is provided a non-volatile analog memory device in the LiDAR system. The LiDAR system can have a custom analog memory architecture to remove or reduce data movement between architecture components, since such data movements can be time-consuming. Hybrid 2.5D integrated circuits and 3D integrated circuits can include SiP integration of CMOS-analog memory stack-photonics systems, which can provide a high-performance and rugged automotive product in a small form factor.

[0043] LiDAR has been emerging as a powerful tool to collect billions of points on physical surfaces, over large areas, in a short period of time. The point clouds generated by a LiDAR sensor can be processed using machine learning algorithms to generate, for example, an image which includes labels, for example, for objects detected in the point clouds. Today, LiDAR is an important part of the sensor suite for 3D multi-target detection and for perception of an environment in which vehicles, watercrafts, robots, planes, helicopters or any static objects (e.g. light pole, building, etc.) operate or are located. The rush towards, for example, autonomous cars and robotic vehicles has forced the requirements of LiDAR sensors in new directions from those of remote sensing.

[0044] Light detection and ranging (LiDAR) sensors use light pulses to illuminate and scan an environment of the LiDAR sensor, whereas radio detecting and ranging (RADAR) sensors use radio waves to scan an environment of the RADAR sensor and sound navigation and ranging (SONAR) sensors use sound waves to scan an environment of the SONAR sensor. Light, radio waves or sound waves are backscattered or reflected off of objects in the environment of the LiDAR sensor, the RADAR sensor, or the SONAR sensor, and the returned signals (i.e. the backscattered or reflected light, radio waves or sound waves) are measured to compute distances to objects in the environment of the respective sensor.

[0045] When scanning the environment in which the LiDAR sensor is located (e.g. FOV), the LiDAR sensor estimates or computes the distance to each object in the environment. In order to compute the distance to an object, the time of flight (TOF) for the emitted light pulse (e.g. laser pulse) can be measured directly or indirectly. FIG. 1C illustrates a direct time of flight (DTOF) system 110 of a LiDAR sensor which is configured to directly measure the time of flight (TOF) for the light of an emitted light pulse in order to compute the distance to an object in the environment of the LiDAR sensor. The DTOF system 110 includes the electronics 111, pulse light source 112, transmitter optics 113, receiver optics 114, and avalanche photo-diode (APD) / single photon avalanche diode (SPAD) 115.

[0046] The distance to an object 116 (i.e. d_target) can be estimated by measuring the time (i.e. ΔT) between the moment that a light pulse (e.g. laser pulse) is emitted at the transmitter optics 113 and the moment when the backscattered or reflected light from the target object 116 is received by the receiver optics 114. The distance to the object 116 (i.e. d_target) can be determined based on Equation 1:

d_target = (c · ΔT) / 2    (1)

where c is the speed of light.

[0047] FIG. 1D illustrates an indirect time of flight (ITOF) method for measuring a distance to an object performed by a LiDAR sensor. When scanning the environment, the LiDAR sensor can measure, using the ITOF method, a distance to an object in the environment in which the LiDAR sensor is located. The principle of ITOF is associated with homodyne detection (i.e. the phase of the first harmonic), which is a method of extracting information encoded as the modulation of the phase and/or frequency of an oscillating signal by comparing that signal with a standard oscillation that would be identical to the signal if it carried null information. For ITOF, a continuous modulated sinusoidal light wave is emitted and the phase difference between the outgoing signal and the incoming signal is measured. The distance to the target object (i.e. d_target) can be computed using the phase difference between the outgoing signal and the incoming signal, and can be determined based on Equation 2:

d_target = (c · Δφ) / (4π · f_mod)    (2)

where Δφ is the measured phase difference and f_mod is the modulation frequency of the emitted light wave.
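For concreteness, Equations 1 and 2 can be evaluated in a few lines. The sketch below assumes the standard DTOF/ITOF relations as reconstructed above; the example pulse timing and modulation values are purely illustrative.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def dtof_distance(delta_t_s: float) -> float:
    """Equation 1: distance from a directly measured round-trip time."""
    return C * delta_t_s / 2.0

def itof_distance(delta_phi_rad: float, f_mod_hz: float) -> float:
    """Equation 2: distance from the phase difference of a modulated wave."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

# A 400 ns round trip corresponds to a target about 60 m away.
print(dtof_distance(400e-9))             # ~59.96 m
# A pi/2 phase shift at 10 MHz modulation corresponds to ~3.75 m.
print(itof_distance(math.pi / 2, 10e6))  # ~3.75 m
```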

[0048] As illustrated above, a LiDAR sensor can measure the distance to an object directly or indirectly. A LiDAR sensor may use the DTOF method to measure the time of flight directly to estimate a distance to an object in the environment of the LiDAR sensor. When the LiDAR sensor scans the environment, the distance to the target object(s) can be estimated using the direct measurement of the time of flight or the DTOF method. In other words, the task of scanning the environment can be accomplished by measuring the time of flight of the laser (or other type of light beam). For example, for a vertical-cavity surface-emitting laser (VCSEL) the output pulse length ranges from 0.2 ns to 5 ns. The shorter the output pulse length, the better the resolution and the eye safety. The DTOF method may be limited to a small number of sensor elements, and can be used to measure distances ranging from short up to long range (e.g. a few kilometers). The maximum measurable range can typically be dictated by the optical power budget, as the optical pulse returning from the target object needs to be received by the receiver optics.

[0049] On the other hand, in the ITOF method, the phase shift is measured to estimate the distance to a target object. When the LiDAR sensor scans the environment, the distance to the target object(s) can be estimated using the measurement of the phase shift or the ITOF method. In other words, the task of scanning the environment can be accomplished by measuring the phase shift. VCSEL output may be a 20-100 MHz modulated sine wave. ITOF uses a very small pixel, and typically uses standard complementary metal oxide semiconductor (CMOS) technology and enables high pixel count (QQVGA-VGA). ITOF may be used to measure the distance in short and/or medium ranges, with the distance range typically within 50 m. For ITOF the maximum measurable range can typically be dictated by the modulation frequency.
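To make the modulation-frequency point concrete: a single-frequency ITOF measurement wraps every 2π of phase, so its unambiguous range is bounded by c / (2·f_mod). The sketch below evaluates that standard bound; note it assumes a single modulation frequency, and multi-frequency schemes (not described here) can extend the usable range.

```python
C = 299_792_458.0  # speed of light (m/s)

def unambiguous_range_m(f_mod_hz: float) -> float:
    # Phase wraps every 2*pi, i.e. every full round trip of one modulation period.
    return C / (2.0 * f_mod_hz)

for f in (20e6, 50e6, 100e6):
    print(f"{f/1e6:.0f} MHz -> {unambiguous_range_m(f):.2f} m")
# 20 MHz -> 7.49 m, 50 MHz -> 3.00 m, 100 MHz -> 1.50 m
```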

[0050] There are different types of LiDAR sensors. Two types of LiDAR sensors include spinning scanning (or beam steering) LiDAR sensors and flash LiDAR sensors. FIG. 1A illustrates a scanning (or beam steering system) LiDAR sensor 120 scanning an environment of the LiDAR sensor that includes a vehicle. FIG. 1B illustrates an example of a flash LiDAR sensor 130.

[0051] A scanning LiDAR sensor (e.g. scanning LiDAR 120) scans the targeted scene in the environment in which the LiDAR sensor is located using narrow lasers emitted by a narrow laser transmitter which moves across the field of view (FOV) over time. The targeted scene may include one or more objects, for example, in the case of a LiDAR sensor installed on a vehicle, other vehicles, pedestrians and other relevant entities. To scan the targeted scene using narrow lasers, existing scanning LiDAR sensors include either a mechanical system or a solid state system. The mechanical system includes a rotating mirror system to enable high speed 360 degree detection. The solid state system typically operates using a micro-electromechanical system (MEMS) technology or an optical phased array (OPA) system in order to scan a surrounding environment of the LiDAR sensor. The solid state LiDAR sensor does not need spatial modulation of the light source; however, it faces the challenge of the scanning transmitter and receiver operating simultaneously. Some solid state scanning LiDAR sensors include a phased array (e.g. photonic integrated circuits (IC)) for beam steering light emitted by a light source. Such systems remain solid state and can scan both axes. Some solid state scanning LiDAR sensors include MEMS mirrors. Some solid state scanning LiDAR sensors use a rolling shutter single photon avalanche diode (SPAD) array and a solid state laser scanner. These systems can be characterized by the features of scanning of the transmitter and receiver using two independent mechanisms, CMOS implementation and standard optical components.

[0052] Rotating LiDAR sensors achieve the required performances (for example, a combination of long-range, high spatial resolution, real-time performance and tolerance to solar background in the daytime) using a rotating wheel configuration rotating at high speed and including multiple stacked detectors.

[0053] A second type of LiDAR sensor is a flash LiDAR sensor (e.g. flash LiDAR 130). A flash LiDAR sensor generates a flash of light to illuminate the environment of the flash LiDAR sensor (e.g. a field of view (FOV) of the LiDAR sensor) using a wide-angle beam of light. As such, the flash LiDAR sensor does not include any moving components. The flash LiDAR sensor does not require spatial modulation of a light source of the LiDAR sensor, as the light source of the LiDAR sensor illuminates the entire environment (e.g. FOV). The flash LiDAR emits a single light pulse to capture an entire scene in the three dimensional environment and therefore does not require scanning. Each pixel in the flash LiDAR sensor is illuminated by the laser and is actively collecting light simultaneously, similar to a camera with a flash.

[0054] When scanning the environment in which the LiDAR sensor is located (e.g. FOV), the LiDAR sensor estimates or computes the distance to each object in the environment. In order to compute the distance to an object, the time of flight for the emitted light pulse (e.g. laser pulse) can be measured directly or indirectly. FIG. 1C illustrates a direct time of flight (DTOF) system 110 of a LiDAR sensor which is configured to directly measure the time of flight (TOF) for the light of an emitted light pulse in order to compute the distance to an object in the environment of the LiDAR sensor. The DTOF system 110 includes the electronics 111, pulse light source 112, transmitter optics 113, receiver optics 114, and avalanche photo-diode (APD) / single photon avalanche diode (SPAD) 115.

[0055] The distance to an object 116 (i.e. d_target) can be estimated by measuring the time (i.e. ΔT) between the moment that a light pulse (e.g. laser pulse) is emitted at the transmitter optics 113 and the moment when the backscattered or reflected light from the target object 116 is received by the receiver optics 114. The distance to the object 116 (i.e. d_target) can be determined based on Equation 1:

d_target = (c · ΔT) / 2    (1)

where c is the speed of light.

[0056] FIG. 1D illustrates an indirect time of flight (ITOF) method for measuring a distance to an object performed by a LiDAR sensor. When scanning the environment, the LiDAR sensor can measure, using the ITOF method, a distance to an object in the environment in which the LiDAR sensor is located. The principle of ITOF is associated with homodyne detection (i.e. the phase of the first harmonic), which is a method of extracting information encoded as the modulation of the phase and/or frequency of an oscillating signal by comparing that signal with a standard oscillation that would be identical to the signal if it carried null information. For ITOF, a continuous modulated sinusoidal light wave is emitted and the phase difference between the outgoing signal and the incoming signal is measured. The distance to the target object (i.e. d_target) can be calculated using the phase difference between the outgoing signal and the incoming signal, and can be determined based on Equation 2:

d_target = (c · Δφ) / (4π · f_mod)    (2)

[0057] As stated above, LiDAR is an important part of the sensor suite for 3D multi-target detection and for perception of an environment in which vehicles, watercrafts, robots, planes, helicopters or any static objects (e.g. light pole, building, etc.) operate or are located. In the case of autonomous vehicles, LiDAR sensors can be integrated as part of their suite of sensors which are used to "perceive" an environment in which the autonomous vehicle operates, despite the high cost and moving parts associated with the LiDAR sensor. One typical case is the incorporation of LiDAR sensors into autonomous vehicles to develop robo-vehicles and associated services. For example, at the Écomobilité par Véhicules Autonomes sur le territoire de Paris-Saclay (EVAPS) field operational test in France, several participants (e.g. Renault™, Transdev™, Vedecom™, etc.) collaborated together to exploit mobility services associated with autonomous vehicles. In fact, there are more than 20 companies developing distinctive LiDAR systems for autonomous driving systems ranging from low level to high end car manufacturers. However, which LiDAR type(s) will dominate in autonomous driving is still unknown. There are various existing LiDAR products including a mechanical spinning 905nm LiDAR from Velodyne™, a 1550nm MEMS LiDAR from Luminar™, and a flash LiDAR from Continental™.

[0058] Large-scale automotive applications can demand additional requirements, for instance the capability to industrialize the sensor(s) with more reliability, faster computational speed and easier manufacturing methods thereby achieving lower manufacturing cost per unit. Large-scale automotive applications may further require that the sensor(s) be small, and more specifically small enough to fit in a small space in the car.

[0059] Work relating to LiDAR-based algorithms has increased significantly. For autonomous vehicles, LiDAR is primarily used for perception and localization. In the context of autonomous driving, a perception system (which can include LiDAR) provides a machine-interpretable representation of the surroundings of the vehicle (e.g. the environment around the vehicle). The processing of the raw LiDAR data, as represented by an unorganized 3D point cloud, can suffer from many problems, at least in part due to the limited time and energy efficiency of the GPU and/or CPU and limited memory resources. These problems can negatively impact performance and can lead to low quality in visualization of the surroundings.

[0060] As stated above, large-scale automotive applications require the development of a reliable, small, packaged sensor that can be fitted in small volumes associated with a car. Such a reliable, small, packaged sensor can be used in other markets, which can include robotics and defense applications. This has resulted in a quest for a state-of-the-art LiDAR system.

[0061] A central processing unit (CPU) has been used to perform all the computation for LiDAR applications. However, with recent advances in graphics processing units (GPU) by vendors such as NVIDIA™ and AMD™, a GPU is popularly used as a low-cost computing platform that has the ability to perform massively parallel data streaming. A GPU has also become a popular computing platform for performing LiDAR data processing in real time. For example, NVIDIA™ provides a low power embedded GeForce™ GT 650M GPU that is often selected as a prototyping platform for the parallel implementation (for example, a low-cost computing platform for massively parallel data streaming).

[0062] The processing of point clouds generated by a LiDAR sensor is time consuming due to the significant size of the point cloud as well as the computation-heavy nature of the machine learning based algorithms (e.g. the algorithms are typically computationally iterative). To address this issue, GPUs, which include massively parallel cores and high memory bandwidth, have been widely used for executing instructions of machine learning algorithms to process point clouds.

[0063] An advantage of GPUs lies in the standard unified programming interface, which supports single instruction multiple data (SIMD) execution, low cost for upgrading and a higher floating point operations per second (FLOPS) / Watt ratio. A GPU based parallel LiDAR processing algorithm can be implemented with GPU specific memory architecture optimizations. Due to built-in support for rendering pipelines using OpenGL™ and Cg™, GPUs can exhibit a low latency for computation and pipeline rendering, thereby making GPUs a viable alternative for handling processes performed in association with a LiDAR system.

[0064] Nonetheless, GPUs as processing units have some limitations. For example, GPUs cannot be used as standalone devices for hardware acceleration, due to their dependency on a CPU for data offloading and scheduling of algorithm execution. Another problem is that GPUs have high execution time, especially for large data, due to, for example, their inability to handle all data at the same time. For image sizes smaller than 150 pixels x 150 pixels, the processing algorithm has an approximately constant execution time, because at this scale GPUs can perform completely parallel processing of all pixels. However, larger images yield an increased execution time, as a single GPU does not have enough processors to handle all pixels and other necessary memory read/write constraints at the same time. As the per-pixel computations are not parallelized, the processing time is approximately linearly proportional to the mean number of active bins per pixel.

[0065] According to embodiments, a LiDAR system as further disclosed herein can overcome at least some of the drawbacks associated with conventional LiDAR sensors. With demand for reduced hardware size and increased computing power, a more advanced LiDAR system is desired for task automation which is highly time-consuming when performed using a conventional LiDAR sensor and a separate computing system. A problem associated with current LiDAR sensors is the necessary high computation rate requirements for processing point clouds generated by the LiDAR sensor. According to embodiments, the issue of a high computation rate requirement may be addressed using a LiDAR system that includes an optical or photonic computing system.

[0066] The present disclosure provides a LiDAR system which is implemented as a hybrid 3D-integrated CMOS-photonics neuromorphic system-on-chip (SOC). According to embodiments, the CMOS chip (e.g. field analog vision module) processes the analog signals received from the photodiode to generate processed data, transforms (or converts) the processed data into Cartesian coordinates, and generates electrical analog signals indicative of the transformed data. The electrical analog signals can be conveyed to an optical/photonic computing system through drivers (e.g. of the CMOS chip) interfacing with the optical/photonic computing system. All electronic (analog and digital) components are integrated onto the CMOS chip. The silicon photonics (SiPh) chip implements the optical/photonic computing system which can offer significant improvements over digital electronics in terms of energy, speed, and compute density, due to the high bandwidth (e.g. in the THz range) associated with photonics. All photonic components are integrated onto the silicon photonics (SiPh) chip. In a silicon photonics (SiPh) chip, data processing is performed in the optical domain. The optical neuromorphic SOC of the present disclosure may be as small as a single chip of 1 cm². In comparison, a current LiDAR system is required to interface with a separate chip or computing system including GPUs or tensor processing units (TPUs), which are much greater in size (e.g. approximately 24 cm² for GPUs, 331 mm² for TPUs).

[0067] The present disclosure provides a light detection and ranging (LiDAR) system for scanning an environment of the LiDAR system in a field of view (FOV) of the LiDAR system. According to embodiments, there is provided a neuromorphic LiDAR SOC system which includes an integrated CMOS-photonics chip that forms an optical computing system having the ability to process point cloud(s) with a low latency. In various embodiments, custom-designed analog/digital blocks process the raw signals detected by the scanning sensor (e.g. photodiode, other image sensors) to generate processed data (e.g. point cloud(s)), which are then transformed into point cloud(s) in a Cartesian coordinate system in order to generate a desired format for the point cloud(s) for further optical computation. The transformed point cloud(s) are loaded into the optical computing platform in a continuous manner for processing. Specifically, the transformed point cloud(s) are transmitted from the analog/digital blocks (e.g. field analog vision module) to the optical computing system, for example through electrical analog signals generated by high voltage drivers of the analog/digital blocks, in a continuous manner. There are further provided hybrid 2.5D integrated circuit and 3D integrated circuit system-in-package (SiP) configurations which include an integration of CMOS-photonics systems and which can reduce the size of the LiDAR system.

[0068] According to embodiments, the LiDAR system includes a non-volatile analog memory device. In some embodiments, larger-scale systems are realized such that the benefits of using analog computation of the raw data can result in improvements when further using an analog memory. The LiDAR system may have a custom analog memory device integrated therein, such that, with both the processing unit and the memory unit being analog, the latency related to reading from and writing to the memory the intermediate results from the processing of the point cloud using the optical computing platform is reduced. According to embodiments, there are provided a hybrid 2.5D integrated circuit and a 3D integrated circuit including SiP integration of CMOS-analog memory stack-photonics systems, which can result in a high-performance LiDAR system which has a small form factor. This configuration can, for example, provide a rugged product for use in automotive perception or sensory systems.

Hybrid CMOS-Photonics LiDAR System

[0069] According to embodiments, there is provided a hybrid CMOS-photonics LiDAR system including a field analog vision (FAV) module for generating point clouds in a Cartesian coordinate system and a 3D integrated CMOS-photonics chip that forms an optical computing system. The FAV module includes a LiDAR scanning sensor for generating point clouds in a spatial coordinate system, a coordinate transformer for transforming each point cloud in the spatial coordinate system into a transformed point cloud in the Cartesian coordinate system and a memory that stores a neural network model and the transformed point clouds. The optical computing system performs computations of layers of a neural network model in the optical domain to process the point cloud(s) to generate a map (i.e. an image in which each pixel has a color that is representative of an object class for objects detected in the point clouds) in a Cartesian coordinate system with a low latency. The hybrid CMOS-photonics LiDAR system can deliver greater bandwidth when compared to digital electronics chips and can also offer better energy efficiency. The hybrid CMOS-photonics LiDAR system can perform computations in the optical domain with low energy consumption. Linear or unitary operations are examples of such computations where minimal energy would be needed. The hybrid CMOS-photonics LiDAR system can produce lower latency, in particular when compared to currently existing LiDAR systems where computations are performed separately from the LiDAR sensor. In various embodiments, the photonic devices in the hybrid CMOS-photonics LiDAR system do not have an issue relating to data movement or clock distribution time along metal wires. As such, the hybrid CMOS-photonics LiDAR system according to embodiments requires a small number of photonic devices to perform operations.
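For orientation, the computation that the optical computing system accelerates in each layer is dominated by a matrix-vector product. The following is a minimal software stand-in for one such layer, not the optical implementation; the weights, sizes and ReLU nonlinearity are arbitrary illustrations.

```python
import numpy as np

def layer(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One neural-network layer: the product W @ x is the operation an
    optical matrix multiplier would carry out in the photonic domain;
    the nonlinearity would be applied after read-out."""
    return np.maximum(W @ x + b, 0.0)  # ReLU nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # e.g. features of a point-cloud patch
W, b = rng.normal(size=(8, 16)), np.zeros(8)
print(layer(x, W, b).shape)              # (8,)
```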

[0070] In a LiDAR scanning process, a scanning device, which emits one or more laser beams, scans a scene or region that encompasses the structure or object of interest (e.g. scene encompassing a surface of a target object). The laser beam is emitted towards the scene and reflected back to and captured by the scanning device. Therefore, the scanning device measures a large number of points lying on visible surfaces in the scene. Each scan point has a measured location in three-dimensional (3D) space, potentially with some measurement error. The measured location can typically be recorded relative to a certain point (x,y,z) in the local coordinate system of the scanner. The resultant point collection is typically referred to as one or more point clouds. Each point cloud can include points lying on a plurality of different surfaces in the scanned scene or region.

[0071] FIG. 2 illustrates an overall structure of a LiDAR system 200, in accordance with an embodiment of the present disclosure. The LiDAR system 200 includes a field analog vision (FAV) module 210, an optical computing (OC) system 230, a memory 240 and a read-out analog to digital converter (ADC) 250.

[0072] The field analog vision (FAV) module 210 includes a LiDAR scanning sensor 211 configured to scan an environment 260 of the LiDAR system which includes one or more objects. The LiDAR scanning sensor 211 generates point clouds, where each point cloud includes points, in three-dimensional space, at which light backscattered or reflected off of the surface of an object in the environment 260. As such, in various embodiments, the LiDAR scanning sensor 211 includes one or more optical pulse transmitters emitting light pulse(s) and one or more optical pulse receivers for receiving the backscattered or reflected light (i.e. the light of the light pulse(s) backscattered or reflected by objects in the environment 260 back to the LiDAR scanning sensor 211). A more detailed structure of the field analog vision (FAV) module 210 in the LiDAR system 200 is illustrated in FIG. 3. Based upon the light reflected back by the surface of an object in the environment 260, the LiDAR scanning sensor 211 generates a point cloud 220 in the Cartesian coordinate system that includes points in space where light was reflected off of objects. Each point in the point cloud 220 may also include other attributes of the reflected light, such as the intensity of the detected reflected light. The optical computing (OC) system 230 interacts with the FAV module 210 to obtain groups of measured points in the point cloud 220 that are transformed into a Cartesian coordinate system. The groups of measured points in the Cartesian coordinate system may be used to generate a model of the object (e.g. targeted object in the environment 260) which can subsequently be stored in a database. The optical computing system can implement a neural network operative in the optical domain in order to process the point clouds received from the FAV module.

[0073] As illustrated in FIG. 3, a synchronization module 312 provides synchronization between the transmitter and the receiver associated with the FAV module. Synchronization can be provided by, for example, a phase lock loop (PLL) or a voltage controlled oscillator (VCO). The synchronization module is operatively coupled to a laser driver 314 for providing signals thereto for the driving of the laser source 315 to emit light 318. The light emitted by the laser source 315 reflects off an object in the environment 260 and is subsequently received by the FAV module. The reflected light 322 is received and detected by an array of photodiodes (APD) 323, and this signal is transferred to a trans-impedance amplifier (TIA) 310 which is configured to produce the spherical coordinates associated with a measured point associated with the target object and an intensity of the received light. The coordinate transformer 320 is configured to transform a point cloud in a spatial coordinate system to a point cloud in the Cartesian coordinate system (i.e. coordinate system transformation from spherical coordinates to Cartesian coordinates). While not illustrated in FIG. 3, the coordinate transformer 320 includes a function generator and a lookup table (LUT). The coordinate system transformation may be performed with access to the LUT. The resultant Cartesian coordinates from the coordinate transformer 320 are transmitted to an analog to digital converter (ADC) 331 which converts Cartesian coordinates in an analog format to a digital format for subsequent saving in digital memory 330. As previously noted, the TIA 310 also produces an indication of the intensity of the returned light which is transferred to ADC 333 for conversion from an analog format to a digital format for subsequent saving in digital memory 330. The digital memory 330 is accessed by drivers 324 which are communicatively connected to the optical computing system. The output 326 (e.g. electrical analog signal) of the drivers 324 is conveyed to the optical computing system. The optical computing system is configured to perform further processing of the information indicative of the target object using the output 326 from the drivers 324, as is discussed in further detail elsewhere herein. The output 326 can also be utilized to drive micro ring modulators in the OC system.
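One step of this chain that is simple to state in software is the ADC boundary between the analog transformer and the digital memory. The sketch below models ADCs 331/333 as a plain uniform quantizer; the bit width and full-scale range are assumptions for illustration, not values from the disclosure.

```python
def adc(value: float, full_scale: float = 120.0, bits: int = 12) -> int:
    """Map an analog value in [-full_scale, +full_scale] to an n-bit code,
    as ADCs 331/333 do for coordinates and intensity before memory 330."""
    levels = (1 << bits) - 1
    clamped = max(-full_scale, min(full_scale, value))
    return round((clamped + full_scale) / (2 * full_scale) * levels)

print(adc(17.35))  # e.g. 12-bit code 2344 for x = 17.35 m
```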

[0074] With further reference to FIG. 2, according to embodiments, the FAV module 210 creates the desired point cloud (e.g. point cloud 220) prior to transmitting the data to the OC system (e.g. OC system 230). Hence, the raw data from the FAV module 210 can be converted into a coordinate system that can be manipulated by the OC system. In order for the FAV module 210 to create a point cloud map, the FAV module 210 needs to know the position of each scanned or measured point in the final coordinate system bounded to a local ground position. A desired final coordinate system utilized in the LiDAR scanning system (e.g. LiDAR system 200) is illustrated in FIG. 4A, in accordance with embodiments of the present disclosure. In various embodiments, various data for each scanned point (measured point of the point cloud) may be collected, for example by the scanning device 211. The data collected may include one or more parameters indicative of a time stamp, a distance, an azimuth angle (α), a vertical angle (ω), an intensity and a reflectivity.
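This per-point record can be pictured as a small typed structure; the field names, types and example values below are illustrative assumptions rather than the format actually used by the FAV module.

```python
from dataclasses import dataclass

@dataclass
class ScanPoint:
    """One raw LiDAR return, carrying the parameters listed in [0074]."""
    timestamp_us: float   # time stamp of the firing
    distance_m: float     # measured range R
    azimuth_deg: float    # azimuth angle (alpha)
    vertical_deg: float   # vertical/elevation angle (omega)
    intensity: float      # returned-light intensity
    reflectivity: float   # surface reflectivity estimate

p = ScanPoint(12.5, 18.7, 244.8, -9.0, 0.62, 0.41)
```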

[0075] According to embodiments, the FAV module 210 performs an interpolation and conversion for each measured point which is acquired from the detector (e.g. scanning device 211) integrated in the FAV module 210. For this specific requirement, the incoming coordinate system (e.g. spherical coordinate system) is transformed into the Cartesian coordinate system. For example, the spherical data indicative of the measured points that is obtained from the sensor in the FAV module 210 can be expressed in terms of radius R, elevation angle ω and azimuth angle α, and can be transformed to Cartesian coordinates, as illustrated in FIG. 4A. The X, Y, Z axes of the Cartesian coordinate system are aligned with the sensor's axes and origin at the LiDAR's mounting point. Equations 3, 4 and 5 can be used for the conversion from a spherical coordinate system to a Cartesian coordinate system for each of the measurement points.

x = R · cos(ω) · sin(α)    (3)
y = R · cos(ω) · cos(α)    (4)
z = R · sin(ω)    (5)

For the above equations, each of x, y, z represents Cartesian coordinates of the measured point with respect to the X, Y, Z axes, and R represents the distance between the LiDAR's optical pulse (e.g. laser pulse) transmitter / receiver and the measured point (e.g. data point 410 in FIG. 4A). The angles α and ω correspond to the azimuth and vertical (elevation) angles, respectively.
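Equations 3-5 translate directly into code. A minimal sketch follows; taking the angles in degrees and converting to radians is an implementation choice, not something the disclosure specifies.

```python
import math

def spherical_to_cartesian(r_m: float, omega_deg: float, alpha_deg: float):
    """Apply Equations (3)-(5): spherical (R, omega, alpha) -> Cartesian (x, y, z)."""
    w, a = math.radians(omega_deg), math.radians(alpha_deg)
    x = r_m * math.cos(w) * math.sin(a)
    y = r_m * math.cos(w) * math.cos(a)
    z = r_m * math.sin(w)
    return x, y, z

# A point 20 m away at 45 deg azimuth and +5 deg elevation:
print(spherical_to_cartesian(20.0, 5.0, 45.0))  # (~14.09, ~14.09, ~1.74)
```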

[0076] FIG. 4B illustrates a LiDAR system (e.g. LiDAR system 200) scanning across an environment in a field of view (FOV) of the LiDAR system, in accordance with embodiments of the present disclosure. As illustrated in FIG. 4B, the vertical angle that the laser source 315 of the LiDAR system scans across the FOV is 30 degrees (e.g. in the range of -15° to +15°). According to embodiments, the laser source 315 includes 16 rotational firing planes, thus there are 16 laser beams emitted from the laser source 315. However, it would be readily understood that a different number of firing planes may be suitable. As an example, a Velodyne™ VLP-16 laser scanner can have the channel number and vertical angle association as defined in Table 1.

TABLE 1

[0077] According to certain embodiments, the cycle time between the laser firings is 2.304 µs. As there are 16 firing planes, the cycle time to fire all 16 laser beams is 16 × 2.304 µs = 36.864 µs. Provided that the recharge period is 18.43 µs, the timing cycle to fire and recharge all 16 laser beams is 36.864 µs + 18.43 µs = 55.296 µs.
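The timing budget above is easy to verify numerically. The sketch below uses 18.432 µs for the recharge period (the text rounds this to 18.43 µs; 18.432 µs is what makes the stated 55.296 µs total exact).

```python
FIRE_US = 2.304       # per-laser firing cycle time (us)
PLANES = 16           # firing planes / laser beams
RECHARGE_US = 18.432  # recharge period (text rounds this to 18.43)

fire_all_us = PLANES * FIRE_US        # 36.864 us to fire all 16 beams
cycle_us = fire_all_us + RECHARGE_US  # 55.296 us full fire + recharge cycle
print(fire_all_us, cycle_us)
```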

[0078] In terms of the horizontal (azimuth) angles, the laser source 315 spins around the vertical axis (i.e. z-axis) thereby providing 360 degree coverage in terms of the horizontal FOV. In certain embodiments, the motor of the laser source 315 may be configured to rotate between 300 revolutions per minute (RPM) and 1200 RPM.

[0079] As calculated above, the firing timing of the laser source 315 is 55.296 µs per firing cycle, and the speed of rotation can be calculated as follows:

Azimuth Resolution = RPM × (1 min / 60 s) × (360° / rev) × (55.296 × 10⁻⁶ s / firing cycle)

[0080] For example, if the motor of the laser source 315 is configured to rotate at a speed of 600 RPM, then the angular resolution of the laser source 315 is 600 × 0.000331776° / firing cycle ≈ 0.199° / firing cycle.
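The same formula can be swept over rotation speeds in code; the specific RPM values below are illustrative (the contents of Table 2 are not reproduced in this text), but each line follows directly from the [0079] formula.

```python
CYCLE_S = 55.296e-6  # firing cycle from [0077], in seconds

def azimuth_resolution_deg(rpm: float) -> float:
    """Azimuth resolution per firing cycle, from the [0079] formula."""
    return rpm * (1 / 60) * 360 * CYCLE_S

for rpm in (300, 600, 900, 1200):
    print(f"{rpm:4d} RPM -> {azimuth_resolution_deg(rpm):.3f} deg/firing cycle")
# 300 -> 0.100, 600 -> 0.199, 900 -> 0.299, 1200 -> 0.398
```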

[0081] Based on the above, in various embodiments, the angular resolution of the laser source 315 is dependent upon the rotation speed of the laser source 315 (e.g. the speed of rotation changes the angular resolution of the laser source 315). Table 2 provides the angular resolutions of the laser source 315 at various rotation speeds.

TABLE 2

[0082] According to embodiments, the trans-impedance amplifier (TIA) (e.g. TIA 310 in FIG. 3) determines the distance R between the LiDAR's transmitter / receiver (e.g. transmitter / receiver of the scanning device 211 in FIG. 2) and the data point or measured point. The determined distance R is conveyed to the coordinate transformer (e.g. coordinate transformer 320 in FIG. 3) for the coordinate system transformation. As such, the TIA 310 and the coordinate transformer 320 are operatively connected to each other. The computation process performed by the coordinate transformer 320 for transforming the measured data in spherical coordinates into measured data in Cartesian coordinates is illustrated in FIG. 5, in accordance with embodiments of the present disclosure. Referring to FIG. 5, the coordinate transformer 320 includes a lookup table 510, a sine function generator 520, a cosine function generator 530 and a multiplier 540. The coordinate transformer 320 searches the lookup table 510 with the measured point in spherical coordinates obtained from the sensor in the FAV module (e.g. FAV module 210) in order to obtain an elevation angle and an azimuth angle associated with the measured point. The sine function generator 520 and the cosine function generator 530 calculate a sine value and a cosine value of the elevation angle and azimuth angle received from the lookup table 510. The calculated sine and cosine values are conveyed to the multiplier 540. The multiplier 540 calculates the x, y, z values of the data point in the Cartesian coordinate system using the calculated sine and cosine values received from the sine function generator 520 and the cosine function generator 530 and the distance R, which defines the distance between the LiDAR's transmitter / receiver and the measured point.
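The FIG. 5 dataflow factors the Equation 3-5 math into a lookup, two function generators and a multiplier. A small software mirror of that decomposition follows; the lookup-table entry and the (channel, step) indexing scheme are hypothetical.

```python
import math

# Lookup table 510 stand-in: (channel, azimuth step) -> (omega, alpha) in radians.
LOOKUP_510 = {(3, 1200): (math.radians(-9.0), math.radians(244.8))}

def transform_point(r: float, channel: int, step: int):
    omega, alpha = LOOKUP_510[(channel, step)]       # lookup table 510
    sin_w, sin_a = math.sin(omega), math.sin(alpha)  # sine generator 520
    cos_w, cos_a = math.cos(omega), math.cos(alpha)  # cosine generator 530
    # Multiplier 540 combines R with the generated sine/cosine values:
    return (r * cos_w * sin_a, r * cos_w * cos_a, r * sin_w)

print(transform_point(12.5, 3, 1200))
```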

[0083] FIG. 6 illustrates, in block diagrams, a CMOS sine function generator 610 and a CMOS cosine function generator 620 included in the function generator, in accordance with embodiments of the present disclosure. According to embodiments, each circuit in the CMOS sine function generator 610 and cosine function generator 620 can be implemented using precise analog computational circuits. The analog computational circuits include a geometric-mean circuit, a squarer/divider circuit and a multiplier circuit. The sine function generator 610 and cosine function generator 620 calculate the sine value and the cosine value of a provided angle (e.g. elevation angle and azimuth angle) using the analog computational circuits as illustrated in FIG. 6. The sine function implemented by the sine function generator 610 can be mathematically expressed as defined in Equation 6.

$$f(x) = \frac{x\,(1 - x^2)(2.83 - x^2)}{0.9\,(1 + 0.3x^2)} \qquad (6)$$

Similarly, the cosine function implemented by the cosine function generator 620 can be mathematically expressed as defined in Equation 7.
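As a quick numerical check, Equation 6 can be evaluated in a few lines of Python. The mapping of the normalized input x ∈ [0, 1] to the angle range [0, π] is an assumption inferred from the formula's behavior rather than something stated above.

```python
import math

def f_sine(x: float) -> float:
    """Equation 6: a rational approximation realizable with the
    geometric-mean, squarer/divider and multiplier circuits of FIG. 7."""
    return x * (1 - x**2) * (2.83 - x**2) / (0.9 * (1 + 0.3 * x**2))

# Assumption: x in [0, 1] corresponds to an angle in [0, pi]; under that
# reading the approximation closely tracks sin(pi*x).
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x={x:.2f}  f(x)={f_sine(x):.4f}  sin(pi*x)={math.sin(math.pi * x):.4f}")
```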

[0084] FIG. 7 illustrates examples of a geometric-mean circuit 710, a squarer/divider circuit 720 and a multiplier circuit 730 which can be used in the sine function generator 610 and the cosine function generator 620, in accordance with embodiments of the present disclosure.

[0085] According to embodiments, each of the transistors used in the circuits 710, 720 and 730 has a separate well connected to its source in order to remove the body effect, as the transistors are based on the trans-linear topology. The geometric-mean circuit 710 may be a current-mode geometric-mean circuit and the squarer/divider circuit 720 may be a current-mode squarer/divider circuit. The geometric-mean circuit 710 and the squarer/divider circuit 720 can be connected in a cascade configuration. The transistors in the geometric-mean circuit 710 and the squarer/divider circuit 720 can operate in strong inversion, which can allow transition between the saturation and triode regions of MOS transistors $M_{2A-D}$. Transition between these operation regions can be used to perform nonlinear processing, for example to achieve multiplication and division of currents. The geometric-mean circuit 710 provides a current $I_{GM} = I_1 + I_2 = 2\sqrt{I_x I_y}$, which is mirrored and injected into the squaring input of the squarer/divider circuit 720. The squarer/divider circuit 720 yields an output current $I_{OUT} = I_{GM}^2 / I_W = 4 I_x I_y / I_W$, thereby achieving the multiplier/divider operation. The multiplier circuit 730 is realized by combining two programmable transconductors, wherein the difference of the output currents yields a multiplication as $I_O = I_1 - I_2 = 4Kxy$ or $V_O = V_{O1} - V_{O2} = -4KZ_f xy$. The differential configuration uses four MOS transistors operating in the linear region, thereby improving the linearity and power supply rejection ratio (PSRR) due to an improved nonlinearity cancellation.
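For a sanity check of these relations, a minimal numeric sketch is given below, assuming ideal devices; the bias currents are arbitrary example values.

```python
import math

# Numeric check of the translinear relations above, assuming ideal devices.
def geometric_mean(i_x, i_y):
    return 2.0 * math.sqrt(i_x * i_y)      # I_GM = I_1 + I_2 = 2*sqrt(Ix*Iy)

def squarer_divider(i_in, i_w):
    return i_in ** 2 / i_w                 # I_OUT = I_in^2 / I_W

i_x, i_y, i_w = 10e-6, 5e-6, 8e-6          # assumed example bias currents (A)
i_out = squarer_divider(geometric_mean(i_x, i_y), i_w)
assert math.isclose(i_out, 4 * i_x * i_y / i_w)  # I_OUT = 4*Ix*Iy/I_W
print(f"I_OUT = {i_out * 1e6:.1f} uA")     # 25.0 uA for these values
```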

[0086] According to embodiments, the point clouds in the Cartesian coordinate system generated by the coordinate transformer are stored in the digital memory. The digital memory is operatively associated with or interfaced with a high voltage driver array. The driver array generates the appropriate electrical analog signals to drive the micro ring modulators (e.g. CMOS photonic micro ring modulators) of the optical computing system. The micro ring modulators may be fabricated in a standard silicon photonics process. According to embodiments, the size of the high voltage driver array can depend on the number of the micro ring modulators in the input ring bank and output ring bank of the optical computing system.

[0087] FIG. 8 illustrates, in a block diagram, a photonics-based LiDAR system that includes an optical computing system, a FAV module and a digital memory, in accordance with embodiments of the present disclosure. Referring to FIG. 8, the photonics-based system 800 includes the optical computing system 810. The optical computing system 810 may include an optical matrix multiplier. The optical computing system 810 may be identical to or essentially similar to the OC system 230 illustrated in FIG. 2. The optical computing system 810 includes the input ring bank 811 and the output ring bank 812. The optical computing system 810 is operatively associated with and connected to the FAV module 830. Specifically, one or more drivers 832 of the FAV module 830 may operatively interface with the OC system 810.

[0088] The one or more drivers 832 are operatively associated with the digital memory 831, as illustrated in FIG. 8. The digital memory 831 and the ADCs 820 are also operatively associated with each other, as illustrated in FIG. 8. The ADCs 820 may be identical to or essentially similar to the ADC 250 illustrated in FIG. 2. In some embodiments, the ADCs 820 are integrated into the FAV module 830. In some embodiments, the ADCs 820 are integrated into a separate CMOS chip.

[0089] According to embodiments, the photonics-based computing system 800 converts the digital data to the analog domain (e.g. converts it to analog data) using one or more digital-to-analog converters (DACs). The digital data is indicative of information relating to the one or more surrounding objects (e.g. target objects) that is stored in the digital memory 831. The information relating to the one or more surrounding objects can include the Cartesian coordinates of the scanned or measured points associated with the surrounding object, the associated intensity and/or reflectivity of the measured points, and the distance between the measured points associated with the one or more target objects and the LiDAR system.

[0090] According to embodiments, the analog data is loaded into the optical neural network (ONN) of the optical computing system in a continuous manner utilizing wavelength-division multiplexing (WDM). For instance, the processed point clouds are transmitted from the drivers 832 of the FAV module 830 to the optical computing system 810 in a continuous manner by utilizing WDM. The optical computing system can be configured as a neural network formed using optical components, which is generally referred to as an optical neural network (ONN). The processed point clouds may be transmitted from the drivers 832 to the optical computing system 810 through one or more electrical analog signals. The electrical analog signals are indicative of the processed point clouds or the information contained therein. The electrical analog signals may also be used to drive micro ring modulators in the optical computing system 810. An ONN is a class of neural networks that can exploit the high bandwidth and low latency of optical computation to perform perception tasks on point clouds such as object detection, object classification, part segmentation, and the like.

[0091] According to embodiments, various optical architectures with different optical and analog components may be employed by the ONN to perform the required calculations on the received data (e.g. the analog data, or the data contained in the electrical analog signals received from the drivers 832). For instance, the broadcast-and-weight (B&W) architecture is an approach that can be employed for performing optical multiply-and-accumulate (MAC) operations. In the broadcast-and-weight architecture, different light wavelength channels are weighted by separate micro ring resonators (MRRs), each of which serves as the multiplication portion of a MAC operation. Elements of the input ring bank 811 are linked to elements of the output ring bank (weights) 812 via an optical waveguide. Once the multiplication of signals and weights is performed, accumulation can be performed by detecting the total power of all wavelength channels.
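A toy numeric model of a B&W MAC is sketched below, assuming ideal per-channel MRR transmissions and an ideal photodetector; real devices add loss, crosstalk and nonlinearity that are ignored here.

```python
import numpy as np

# Toy model of a broadcast-and-weight MAC: one input per wavelength
# channel, one MRR transmission per channel, and an ideal photodetector
# that accumulates by summing the power of all channels.
inputs = np.array([0.8, 0.2, 0.5, 0.9])    # per-channel optical power (a.u.)
weights = np.array([0.1, 0.7, 0.4, 0.3])   # per-channel MRR transmission in [0, 1]

weighted = inputs * weights                # multiplication, one MRR per channel
mac_output = weighted.sum()                # accumulation at the photodetector
assert np.isclose(mac_output, inputs @ weights)
print(mac_output)
```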

[0092] According to embodiments, a balanced photodiode architecture can be incorporated at the output in order to facilitate representing positive and negative weights in analog photonics. The balanced photodiodes are followed by a trans-impedance amplifier (TIA) and analog-to-digital converters (ADCs) to provide electronic gain and digital conversion, respectively.
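A minimal sketch of this idea follows, assuming an idealized 50/50 power-splitting rule for the signed weight rather than any specific device behavior.

```python
import numpy as np

# Assumed idealization of balanced detection: a weight in [-1, 1] splits a
# channel's power between two photodiodes, and the output is the difference
# of the two photocurrents, so negative weights become representable.
inputs = np.array([0.8, 0.2, 0.5])
weights = np.array([0.6, -0.9, 0.25])      # signed weights

p_plus = inputs * (1 + weights) / 2        # power on the "+" photodiode
p_minus = inputs * (1 - weights) / 2       # power on the "-" photodiode
signed_mac = (p_plus - p_minus).sum()      # balanced output
assert np.isclose(signed_mac, inputs @ weights)
```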

[0093] Various embodiments of the hybrid CMOS-photonics LiDAR system can offer significant improvement in performance over conventional LiDAR sensors which use computing units based on digital electronics. For instance, the integrated photonics-based computing system of the instant application can deliver much higher bandwidth while providing better energy and time efficiency, when compared to digital electronics chips. The improvements can be achieved based on the feature that optical signals have approximately 5 THz of spectral bandwidth, which can provide 5 Tb/s of information capacity for each spatial mode and polarization.

[0094] In addition, various embodiments of the hybrid CMOS-photonics LiDAR system of the instant application can perform computations in the optical domain with minimal energy consumption. Linear or unitary operations are examples of such computations where minimal energy would be needed.

[0095] Furthermore, various embodiments of the hybrid CMOS-photonics LiDAR system exhibit lower latency, in particular when compared to existing LiDAR systems. The photonic devices of the hybrid CMOS-photonics LiDAR system do not suffer from the data movement and clock distribution delays of metal wires, and the hybrid CMOS-photonics LiDAR system requires only a small number of photonic devices to perform operations (e.g. MAC operations), thereby reducing the latency.

[0096] According to embodiments, based on a hybrid 2.5D integrated circuit or hybrid 3D integrated circuit which is configured using an integrated CMOS-photonics technology, the size of the optical-computing-based LiDAR system of the instant application can be smaller than that of current LiDAR devices, which typically require GPUs or TPUs.

Integrated Non-Volatile Analog Memory Devices

[0097] According to embodiments, one or more analog memories are incorporated into the LiDAR system of the present disclosure to accelerate processing of the one or more point clouds and to reduce the use of data converters for interfacing with the optical computing system. By using one or more analog memories connected to the optical computing system, a reduction in the latency associated with reading intermediate results of point cloud processing from, and writing them to, the memory can be realized. FIG. 9 illustrates an example FAV module of the LiDAR system with an analog memory pre/post processing unit, in accordance with embodiments of the present disclosure. According to embodiments, utilization of analog memory can provide one or more advantages including lower power consumption, higher computational efficiency, high noise margins, high speed operation and minimum capacitive loading.

[0098] With further reference to FIG. 9, multiple components are similar to those previously discussed with respect to FIG. 3; however, rather than including a digital memory, the FAV module includes an analog memory. As such, the resultant transformed point clouds in Cartesian coordinates, in an analog format, received from the coordinate transformer 320 are stored in the analog memory 930. As previously noted, the TIA 310 also produces an indication of the intensity of the returned light, which is stored in an analog format in the analog memory 930. The analog memory 930 is accessed by drivers 924 which are communicatively connected to the optical computing system, which is configured to perform further processing of the information indicative of the target object as discussed in further detail elsewhere herein. The output 926 (e.g. electrical analog signal) of the drivers 924 is provided to the optical computing (OC) system. The optical computing system is configured to implement a neural network model which is used to process each transformed point cloud received through the output 926 from the drivers 924. The output 926 can then be utilized to drive micro ring modulators in the OC system. The optical computing system may include an optical neural network which implements the neural network model.

[0099] In various embodiments, the output analog data is directly stored in the analog memory 930, thereby eliminating the requirement of ADCs for converting the analog data to digital data and storing the digital data in the memory. For example, the Cartesian coordinates of the data points of a point cloud generated by the function generator are stored in the analog memory. In various embodiments, the analog memory is operatively associated with or interfaced with a high voltage driver array (e.g. drivers 924) which generates the appropriate pulses to drive the micro ring modulators (e.g. CMOS photonic micro ring modulators) in the optical computing system. The micro ring modulators may be fabricated in a standard silicon photonics process. The size of the driver array depends on the number of micro ring modulators in the input and output ring banks of the optical computing system.

[0100] According to embodiments of the present disclosure, non-volatile analog memory devices are incorporated into the hybrid CMOS-photonics LiDAR system illustrated above. Such incorporation allows implementation of large scale neuromorphic LiDAR systems as well as the exploitation of the benefits of analog computation, which can result in tangible improvements at the application level. Utilization of analog memory can offer an advantage with respect to power consumption and computation efficiency, as data converters are not required to interface with the optical computing system. One example of the optical computing system integrating a non-volatile analog memory device as well as the FAV module is illustrated in FIG. 10. It is desirable to realize CMOS memristor (emulator) circuits for system-level design of a large array of analog memory so that it can be integrated into a commercially available CMOS system.

[0101] As stated above, FIG. 10 illustrates a photonics-based computing system that includes an optical computing system integrating a FAV module and a non-volatile analog memory device, in accordance with embodiments of the present disclosure. Referring to FIG. 10, the photonics-based system 1000 includes the optical computing system 1010. The optical computing system 1010 may include an optical matrix multiplier. The optical computing system 1010 may be identical or essentially similar to the OC system 230 illustrated in FIG. 2 or the optical computing system 810 illustrated in FIG. 8. The optical computing system 1010 includes the input ring bank 1011 and the output ring bank 1012. The optical computing system 1010 is operatively associated with and connected to the FAV module 1030. Specifically, one or more drivers 1032 of the FAV module 1030 may operatively interface with the optical computing system 1010.

[0102] The one or more drivers 1032 are operatively associated with the analog memory 1031, as illustrated in FIG. 10. The analog memory 1031 may include a pre/post processing unit.

[0103] According to embodiments, the photonics-based computing system 1000 does not need to perform format conversion on the data containing information about the surrounding objects (e.g. target object), as such information is stored directly in the analog memory 1031. For example, the Cartesian coordinates of the scanned or measured points of the target object computed by the function generator can be stored in the analog memory 1031 without format conversion (e.g. analog to digital). The stored information may be transmitted from the drivers 1032 to the optical computing system 1010 through one or more electrical analog signals. The electrical analog signals are indicative of the information stored in the analog memory 1031. The information about the surrounding object or target object may include the Cartesian coordinates of the scanned points of the surrounding object, intensity, reflectivity, distance between the object and the LiDAR system, etc.

[0104] FIG. 11A illustrates an analog CMOS-based resistive processing unit (RPU) device 1110 used in a LiDAR system, in accordance with embodiments of the present disclosure. According to embodiments, the analog CMOS-based RPU device 1110 is implemented with conventional circuit components, a capacitor and a set of transistors, based on the specifications and operation principles for the RPU device. In this configuration, the capacitor, $C_H$, serves as a memory element in the cell and stores the weight value in the form of electric charge. The capacitor voltage is directly applied to the gate terminal of the read transistor, $P_6$, and modulates its channel resistance, $R_{on,P6}$. It will be understood that $R_{on,P6}$ is the on-resistance that depends on the gate voltage of $P_6$. Therefore, the charge state stored in the capacitor can be accessed by applying a small bias across $P_6$ and measuring the current, $I_{read}$. Based on EN and ENb, the current source transistors, $P_3$ and $N_3$, receive bias voltages from the biasing circuit through switches, $P_4$ and $N_4$, respectively. Either $P_3$ or $N_3$ provides charging or discharging current to $C_H$, respectively, depending on a programming polarity signal, $V_{PRG}$, in the biasing circuit, which controls the $V_H$ level; the resulting resistance of $P_6$ can be defined by Equation 8.
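A behavioral sketch of this charge-based weight storage follows; the capacitance, programming current and pulse width are assumed values chosen only to make the arithmetic visible, and Equation 8 itself is not reproduced here.

```python
# Behavioral sketch of the RPU cell of FIG. 11A; component values are
# assumptions for illustration, not a circuit-accurate model.
C_H = 1e-12      # storage capacitor (F)
I_PRG = 1e-6     # charge/discharge current from P3 or N3 (A)
T_PULSE = 1e-9   # duration of one update pulse (s)

def update(v_h: float, polarity: int) -> float:
    """One update pulse: +1 charges C_H via P3, -1 discharges it via N3."""
    return v_h + polarity * I_PRG * T_PULSE / C_H

v_h = 0.0        # capacitor voltage = stored weight state
for _ in range(5):
    v_h = update(v_h, +1)
print(f"V_H after 5 positive pulses: {v_h * 1e3:.1f} mV")  # 5.0 mV
```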

[0105] FIG. 11B illustrates a CMOS memristor (emulator) circuit 1120 used in the LiDAR system, in accordance with embodiments of the present disclosure. The CMOS memristor (emulator) circuit 1120 is implemented by an n-channel MOSFET (NMOS), where the transistor M1 1123 implements a floating variable resistance between the terminal A 1121 and the terminal B 1122. The variable resistance is achieved by operating the transistor M1 1123 in the linear (triode) or near-linear region. The voltage difference across the terminal A 1121 and the terminal B 1122, $V_{AB}$, is sensed, integrated over time and used to control the gate of the transistor M1 1123. The integrator holds the 'state' of the variable resistor and updates it according to the voltage difference $V_{AB}$. If a positive voltage is applied across the resistor, the gate voltage is increased and therefore the resistance is decreased (or the conductance is increased) across the terminal A 1121 and the terminal B 1122. In contrast, if a negative voltage is applied across the resistor, the gate voltage is decreased and therefore the resistance is increased (or the conductance is decreased). According to embodiments, the CMOS memristor (emulator) circuit 1120 embodies a variable resistor with a dynamic memory, and carries out the functionality of the idealized memristor concept.
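This integrate-and-modulate loop can be sketched behaviorally as follows; all parameter values are assumptions for illustration, not device measurements.

```python
# Behavioral sketch of the memristor emulator of FIG. 11B.
K_INT = 1e3        # integrator gain (1/s), assumed
DT = 1e-6          # simulation time step (s), assumed
G_PER_V = 1e-4     # conductance per volt of gate overdrive (S/V), assumed
V_TH = 0.5         # threshold voltage of M1 (V), assumed

def integrate(v_gate: float, v_ab: float) -> float:
    """The integrator accumulates V_AB into the gate voltage (the 'state')."""
    return v_gate + K_INT * v_ab * DT

def conductance(v_gate: float) -> float:
    """Triode-region channel conductance of M1, floored at a small leakage."""
    return max(1e-9, G_PER_V * (v_gate - V_TH))

v_gate = 0.6
for _ in range(100):               # positive V_AB -> conductance increases
    v_gate = integrate(v_gate, v_ab=0.1)
print(f"G = {conductance(v_gate) * 1e6:.1f} uS")
```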

[0106] FIGs. 12A and 12B illustrate a system-in-package (SiP) implementation of the LiDAR system, in accordance with embodiments of the present disclosure. Specifically, FIG. 12A provides schematics of a 2.5D integrated circuit 1210 and FIG. 12B provides schematics of a 3D integrated circuit 1220 which implement the LiDAR system of the present disclosure. It should be noted that the peripheral electronic components of the LiDAR system (e.g. capacitors, resistors, etc.) that would connect the CMOS chip to the optical computing system are omitted in FIGs. 12A and 12B for ease of illustration.

[0107] As illustrated in FIGs. 12A and 12B, the 2.5D integrated circuit 1210 includes the silicon interposer platform 1211 containing redistribution layers. Optical coupling between the optical fibers and the optical computing system is depicted as edge couplers that could be used for the 2.5D integrated circuit 1210 as well as the 3D integrated circuit 1220.

[0108] In the 2.5D integrated circuit 1210 illustrated in FIG. 12A, the silicon interposer platform 1211 is utilized to provide connectivity between the CMOS chip implementing the FAV module 1212, the stacked memory die 1213 and the photonic integrated circuit (PIC) (e.g. optical computing system 1214). The CMOS chip implementing the FAV module 1212 includes an image sensor die 1212a, a control die 1212b and a laser source 1212c. The wire bonds 1215, micro solder bumps 1216, copper pillars or any combination thereof provide the connectivity between the photonic chip implementing the optical computing system 1214, the CMOS chip implementing the FAV module 1212, the digital chips (e.g. stacked memory die 1213) and the interposer (silicon interposer platform 1211) with a high input/output (I/O) rate. In some embodiments, the 2.5D integrated circuit 1210 is also referred to as a multi-chip module (MCM) where all the chips are flipped onto the interposer (e.g. silicon interposer platform 1211) serving as a redistribution layer.

[0109] The other SiP integration approach is the 3D integrated circuit 1220 as illustrated in FIG. 12B. In the 3D integrated circuit 1220, the FAV module 1222 and the memory stack chips 1223 are placed onto the optical computing system 1224, as photonic components are larger than the CMOS analog circuit(s) (e.g. FAV module 1222) or the CMOS digital circuit(s) (e.g. memory stack chips 1223). The 3D integrated circuit 1220 exhibits less parasitic capacitance than the 2.5D integrated circuit, as the flip chip micro solder bumps introduce small amounts of parasitic resistance (typically below 1 Ω) and capacitance (typically in the order of 20–30 fF). The 3D integrated circuit 1220 enables placement of the chips throughout the two-dimensional flip chipped area. The 3D integrated circuit 1220 also enhances the connectivity between the chips as the size of the micro solder bump or copper pillar bond is reduced to approximately 10 µm, thereby supporting pitches well below 100 µm. High density flip chip copper pillar (Cu-Cu) bonding allows for a denser pitch than wire bonding, and therefore significantly improves bandwidth densities by generating a 250-fold increase in terms of the number of I/Os.

[0110] As stated above, according to embodiments, one or more analog memories are incorporated into the hybrid CMOS-photonics LiDAR system to accelerate signal processing and reduce the use of data converters for interfacing with the optical computing system. The optical computing system of the LiDAR system (e.g. the optical computing system illustrated in FIG. 10) can perform computations in the analog domain, and the output is stored directly in the analog memory. Therefore, data converters such as ADCs are not required to store data (e.g. transformed point clouds) in a digital memory. Utilization of an analog memory can provide advantages such as lower power consumption, higher computational efficiency, high noise margins, high speed operation and minimum capacitive loading. Moreover, the utilization of analog memory also helps to overcome the limited count rate and bandwidth of the system and to resolve the bottlenecks of conventional computing systems used for processing point clouds, such as the I/O interfacing with digital systems via DACs and ADCs.

[0111] FIG. 13 illustrates a method performed by a light detection and ranging (LiDAR) system of the present disclosure. The method 1300 begins at 1310, where the LiDAR system, using a LiDAR scanning sensor of the LiDAR system, scans an environment of the LiDAR system and generates one or more point clouds based on the scan of the environment. Each generated point cloud includes data points in a spatial coordinate system. The method 1300 then proceeds to 1320. At 1320, the LiDAR system converts, using a coordinate transformer, each of the one or more point clouds to a transformed point cloud that includes data points in a Cartesian coordinate system. The method 1300 then proceeds to 1330. At 1330, the LiDAR system stores each transformed point cloud in the memory of the LiDAR system. The LiDAR system may store each transformed point cloud in a digital memory or an analog memory. The method 1300 then proceeds to 1340. At 1340, the LiDAR system generates, for each respective transformed point cloud using one or more high voltage drivers of the LiDAR system, one or more electrical analog signals indicative of the respective transformed point cloud. The method 1300 then proceeds to 1350. At 1350, the LiDAR system processes each respective transformed point cloud by performing computations of a neural network model on each respective transformed point cloud received through the one or more electrical analog signals from the one or more high voltage drivers.
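The control flow of the method 1300 can be summarized in the following sketch; every object and method name here is a hypothetical stand-in for the hardware blocks described above.

```python
# Control-flow sketch of method 1300 (names are illustrative only).
def method_1300(lidar):
    point_clouds = lidar.scanning_sensor.scan()                  # 1310
    for cloud in point_clouds:
        xyz = lidar.coordinate_transformer.to_cartesian(cloud)   # 1320
        lidar.memory.store(xyz)              # 1330: digital or analog memory
        signals = lidar.drivers.to_analog_signals(xyz)           # 1340
        lidar.optical_computing_system.run_model(signals)        # 1350
```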

[0112] In some embodiments, the method 1300 may further include converting each respective transformed point cloud into a digital domain prior to storing the respective transformed point cloud in the memory, when the memory is a digital memory. In some embodiments, the method 1300 may further include converting each respective transformed point cloud into an analog domain prior to generating the one or more electrical analog signals.

[0113] It will be appreciated that, although specific embodiments of the technology have been described herein for purposes of illustration, various modifications may be made without departing from the scope of the technology. The specification and drawings are, accordingly, to be regarded simply as an illustration of the invention as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present invention.

[0114] It is obvious that the foregoing embodiments of the invention are examples and can be varied in many ways. Such present or future variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.