Title:
LIDAR SYSTEM HAVING A LINEAR FOCAL PLANE, AND RELATED METHODS AND APPARATUS
Document Type and Number:
WIPO Patent Application WO/2023/129725
Kind Code:
A1
Abstract:
A light detection and ranging (LIDAR) device including a plurality of laser sources configured to provide a plurality of transmit beams, each laser source being positioned with a respective offset of a first plurality of offsets relative to a reference line, a plurality of transmit/receive (T/R) interfaces configured to pass the plurality of transmit beams and reflect received light towards a plurality of detectors, each T/R interface being positioned with a respective offset of a second plurality of offsets relative to the reference line, and a plurality of lenses positioned between the plurality of laser sources and the plurality of T/R interfaces, each lens being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, are configured to provide beam-steering of the plurality of transmit beams.

Inventors:
REKOW MATHEW NOEL (US)
PFNUER STEFAN (US)
Application Number:
PCT/US2022/054360
Publication Date:
July 06, 2023
Filing Date:
December 30, 2022
Assignee:
VELODYNE LIDAR USA INC (US)
International Classes:
G01S17/931; G01S7/481; G01S17/894
Domestic Patent References:
WO2020135802A1 (2020-07-02)
WO2017164989A1 (2017-09-28)
Foreign References:
US20190052844A1 (2019-02-14)
US202117392080A (2021-08-02)
Attorney, Agent or Firm:
STONE, Samuel S. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A lidar device, comprising: a plurality of laser sources configured to provide a plurality of transmit beams, each laser source of the plurality of laser sources being positioned with a respective offset of a first plurality of offsets relative to a reference line; a plurality of transmit/receive (T/R) interfaces configured to pass the plurality of transmit beams and reflect received light towards a plurality of detectors, each T/R interface of the plurality of T/R interfaces being positioned with a respective offset of a second plurality of offsets relative to the reference line; and a plurality of lenses positioned between the plurality of laser sources and the plurality of T/R interfaces, each lens of the plurality of lenses being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, are configured to provide beam-steering of the plurality of transmit beams.

2. The lidar device of claim 1, wherein each lens of the plurality of lenses intersects a respective transmit beam of the plurality of transmit beams, and wherein each lens of the plurality of lenses includes one or more micro-optic lenses configured to provide beam shaping of the respective transmit beam of the plurality of transmit beams.

3. The lidar device of claim 1, wherein each T/R interface of the plurality of T/R interfaces includes at least one mirror.

4. The lidar device of claim 3, wherein each detector of the plurality of detectors is included in a respective T/R interface of the plurality of T/R interfaces.

5. The lidar device of claim 1, wherein the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces are disposed on a substrate.

6. The lidar device of claim 5, wherein the first plurality of offsets, the second plurality of offsets, and the third plurality of offsets correspond to positions of the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces on the substrate relative to the reference line.

7. The lidar device of claim 1, wherein the lidar device is configured to be included in a lidar system.

8. The lidar device of claim 7, wherein the lidar device is one of a plurality of lidar devices included in the lidar system.

9. The lidar device of claim 7, wherein the lidar device corresponds to two or more channels of a plurality of channels of the lidar system.

10. The lidar device of claim 7, wherein the lidar device is aligned to a linear focal plane of the lidar system.

11. The lidar device of claim 7, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

12. The lidar device of claim 11, wherein the plurality of laser sources and the plurality of lenses, as positioned, are configured to steer the plurality of transmit beams towards the center of the system lens.

13. The lidar device of claim 12, wherein the first plurality of offsets are larger than the third plurality of offsets and the third plurality of offsets are larger than the second plurality of offsets.

14. The lidar device of claim 11, wherein the lidar device is aligned using an active alignment process that includes energizing at least one laser source of the plurality of laser sources and measuring energy associated with at least one transmit beam of the plurality of transmit beams at the center of the system lens.

15. The lidar device of claim 1, wherein the first plurality of offsets correspond to a first pitch between the plurality of laser sources, the second plurality of offsets correspond to a second pitch between the plurality of T/R interfaces, and the third plurality of offsets correspond to a third pitch between the plurality of lenses.

16. The lidar device of claim 15, wherein the plurality of laser sources are fabricated as a laser source array having the first pitch, the plurality of T/R interfaces are fabricated as a T/R interface array having the second pitch, and the plurality of lenses are fabricated as a lens array having the third pitch.

17. The lidar device of claim 16, wherein each of the laser source array, the T/R interface array, and the lens array is a monolithic array component.

18. A method for operating a lidar device, the method comprising: providing a plurality of transmit beams via a plurality of laser sources, each laser source of the plurality of laser sources being positioned with a respective offset of a first plurality of offsets relative to a reference line; conditioning the plurality of transmit beams via a plurality of lenses, each lens of the plurality of lenses being positioned with a respective offset of a second plurality of offsets relative to the reference line; and passing the plurality of transmit beams and reflecting received light towards a plurality of detectors via a plurality of transmit/receive (T/R) interfaces, each T/R interface of the plurality of T/R interfaces being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, provide beam-steering of the plurality of transmit beams.

19. The method of claim 18, wherein each lens of the plurality of lenses intersects a respective transmit beam of the plurality of transmit beams, and wherein each lens of the plurality of lenses includes one or more micro-optic lenses and conditioning the plurality of transmit beams includes beam shaping the plurality of transmit beams using the micro-optic lenses.

20. The method of claim 18, wherein each T/R interface of the plurality of T/R interfaces includes at least one mirror.

21. The method of claim 20, wherein each detector of the plurality of detectors is included in a respective T/R interface of the plurality of T/R interfaces.

22. The method of claim 18, wherein the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces are disposed on a substrate.

23. The method of claim 22, wherein the first plurality of offsets, the second plurality of offsets, and the third plurality of offsets correspond to positions of the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces on the substrate relative to the reference line.

24. The method of claim 18, wherein the lidar device is configured to be included in a lidar system.

25. The method of claim 24, wherein the lidar device is one of a plurality of lidar devices included in the lidar system.

26. The method of claim 24, wherein the lidar device corresponds to two or more channels of a plurality of channels of the lidar system.

27. The method of claim 24, wherein the lidar device is aligned to a linear focal plane of the lidar system.

28. The method of claim 24, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

29. The method of claim 28, wherein the plurality of laser sources and the plurality of lenses, as positioned, steer the plurality of transmit beams towards the center of the system lens.

30. The method of claim 29, wherein the first plurality of offsets are larger than the third plurality of offsets and the third plurality of offsets are larger than the second plurality of offsets.

31. The method of claim 28, further comprising aligning the lidar device using an active alignment process that includes energizing at least one laser source of the plurality of laser sources and measuring energy associated with at least one transmit beam of the plurality of transmit beams at the center of the system lens.

32. The method of claim 18, wherein the first plurality of offsets correspond to a first pitch between the plurality of laser sources, the second plurality of offsets correspond to a second pitch between the plurality of T/R interfaces, and the third plurality of offsets correspond to a third pitch between the plurality of lenses.

33. The method of claim 32, wherein the plurality of laser sources are fabricated as a laser source array having the first pitch, the plurality of T/R interfaces are fabricated as a T/R interface array having the second pitch, and the plurality of lenses are fabricated as a lens array having the third pitch.

34. The method of claim 33, wherein each of the laser source array, the T/R interface array, and the lens array is a monolithic array component.

35. A method for manufacturing a lidar device, the method comprising: providing a laser source array including a plurality of laser sources disposed with a first pitch on a first substrate, the plurality of laser sources configured to provide a respective plurality of transmit beams, wherein providing the laser source array comprises positioning at least one laser source of the plurality of laser sources with a first offset relative to a reference line; providing a lens array including a plurality of lenses disposed with a second pitch on a second substrate, the plurality of lenses configured to condition the respective plurality of transmit beams provided by the laser source array, wherein providing the lens array comprises positioning at least one lens of the plurality of lenses with a second offset relative to the reference line; and providing a transmit/receive (T/R) interface array including a plurality of T/R interfaces disposed with a third pitch on a third substrate, the plurality of T/R interfaces configured to pass the respective plurality of transmit beams conditioned by the lens array and to reflect received light towards a plurality of detectors, wherein providing the T/R interface array comprises positioning at least one T/R interface of the plurality of T/R interfaces with a third offset relative to the reference line, wherein the laser source array and the lens array, as positioned, are configured to provide beam steering of one or more of the plurality of transmit beams.

36. The method of claim 35, further comprising: fabricating the plurality of laser sources as the laser source array having the first pitch; fabricating the plurality of lenses as the lens array having the second pitch; and fabricating the plurality of T/R interfaces as the T/R interface array having the third pitch.

37. The method of claim 36, wherein each of the laser source array, the T/R interface array, and the lens array is fabricated as a monolithic array component.

38. A lidar device, comprising: a laser source configured to provide a transmit beam, the laser source being positioned with a first offset relative to a reference line; a transmit/receive (T/R) interface configured to pass the transmit beam and reflect received light towards a detector, the T/R interface being positioned with a second offset relative to the reference line; and a lens positioned between the laser source and the T/R interface, the lens being positioned with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam.

39. The lidar device of claim 38, wherein the lens includes one or more micro-optic lenses configured to provide beam shaping of the transmit beam.

40. The lidar device of claim 38, wherein the T/R interface includes at least one mirror.

41. The lidar device of claim 40, wherein the detector is included in the T/R interface.

42. The lidar device of claim 38, wherein the laser source, the lens, and the T/R interface are disposed on a substrate.

43. The lidar device of claim 42, wherein the first, second, and third offsets correspond to positions of the laser source, the lens, and the T/R interface on the substrate relative to the reference line.

44. The lidar device of claim 38, wherein the lidar device is one of a plurality of lidar devices included in a lidar system.

45. The lidar device of claim 44, wherein the lidar device corresponds to a channel of a plurality of channels of the lidar system.

46. The lidar device of claim 44, wherein the lidar device is aligned to a linear focal plane of the lidar system.

47. The lidar device of claim 44, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

48. The lidar device of claim 44, wherein the lidar system includes a system lens and the reference line is coincident with a boresight line of the system lens.

49. The lidar device of claim 47, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam towards the center of the system lens.

50. The lidar device of claim 47, wherein the lidar device is aligned using an active alignment process that includes energizing the laser source and measuring energy associated with the transmit beam at the center of the system lens.

51. A method for operating a lidar device, the method comprising: providing a transmit beam via a laser source positioned with a first offset relative to a reference line; conditioning the transmit beam via a lens positioned with a second offset relative to the reference line; and passing the transmit beam and reflecting received light towards a detector via a transmit/receive (T/R) interface positioned with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, steer the transmit beam.

52. The method of claim 51, wherein the lens includes one or more micro-optic lenses and conditioning the transmit beam includes beam shaping the transmit beam using the one or more micro-optic lenses.

53. The method of claim 51, wherein the T/R interface includes at least one mirror.

54. The method of claim 53, wherein the detector is included in the T/R interface.

55. The method of claim 51, wherein the laser source, the lens, and the T/R interface are disposed on a substrate.

56. The method of claim 55, wherein the first, second, and third offsets correspond to positions of the laser source, the lens, and the T/R interface on the substrate relative to the reference line.

57. The method of claim 51, wherein the lidar device is one of a plurality of lidar devices included in a lidar system.

58. The method of claim 57, wherein the lidar device corresponds to a channel of a plurality of channels of the lidar system.

59. The method of claim 57, wherein the lidar device is aligned to a linear focal plane of the lidar system.

60. The method of claim 57, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

61. The method of claim 60, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam towards the center of the system lens.

62. The method of claim 60, further comprising aligning the lidar device using an active alignment process that includes energizing the laser source and measuring energy associated with the transmit beam at the center of the system lens.

63. A method for manufacturing a lidar device, the method comprising: providing a laser source configured to provide a transmit beam, wherein providing the laser source comprises positioning the laser source with a first offset relative to a reference line; providing a transmit/receive (T/R) interface configured to pass the transmit beam and reflect received light towards a detector, wherein providing the T/R interface comprises positioning the T/R interface with a second offset relative to the reference line; and providing a lens positioned between the laser source and the T/R interface, wherein providing the lens comprises positioning the lens with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, steer the transmit beam toward a center of a system lens of a lidar system comprising the lidar device.

64. The method of claim 63, wherein the T/R interface includes at least one mirror.

65. The method of claim 64, wherein the detector is included in the T/R interface.

66. The method of claim 63, wherein positioning the laser source, the T/R interface, and the lens comprises disposing the laser source, the lens, and the T/R interface on a substrate.

67. The method of claim 66, wherein the first offset, the second offset, and the third offset correspond to respective positions of the laser source, the lens, and the T/R interface on the substrate relative to the reference line.

68. The method of claim 66, further comprising at least one of coupling, bonding, attaching, or fastening the laser source, the lens, and the T/R interface to the substrate.

69. The method of claim 65, wherein the lidar device is one of a plurality of lidar devices included in the lidar system.

70. The method of claim 69, wherein the positioning of the laser source, the T/R interface, and the lens aligns the lidar device to a linear focal plane of the lidar system.

71. The method of claim 69, further comprising aligning the lidar device using an active alignment process that includes energizing the laser source and measuring energy associated with the transmit beam at the center of the system lens.

72. The method of claim 69, wherein the reference line corresponds to the center of the system lens.

73. The method of claim 63, wherein the reference line is coincident with a boresight line of the system lens.

74. The method of claim 63, wherein the first offset is larger than the third offset and the third offset is larger than the second offset.


Description:
LIDAR SYSTEM HAVING A LINEAR FOCAL PLANE, AND RELATED METHODS

AND APPARATUS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to and the benefit of U.S. Patent Application No. 17/567,004, titled “LIDAR SYSTEM HAVING A LINEAR FOCAL PLANE, AND RELATED METHODS AND APPARATUS” and filed on December 31, 2021, and U.S. Patent Application No. 17/567,005, titled “LIDAR SYSTEM HAVING A LINEAR FOCAL PLANE, AND RELATED METHODS AND APPARATUS” and filed on December 31, 2021, the entire contents of each of which are hereby incorporated by reference herein.

FIELD OF TECHNOLOGY

[0002] The present disclosure relates generally to lidar technology and, more specifically, to lidar systems having linear focal planes.

BACKGROUND

[0003] Lidar (light detection and ranging) systems measure the attributes of their surrounding environments (e.g., shape of a target, contour of a target, distance to a target, etc.) by illuminating the environment with light (e.g., laser light) and measuring the reflected light with sensors. Differences in laser return times and/or wavelengths can then be used to make digital, three-dimensional (“3D”) representations of a surrounding environment. Lidar technology may be used in various applications including autonomous vehicles, advanced driver assistance systems, mapping, security, surveying, robotics, geology and soil science, agriculture, unmanned aerial vehicles, airborne obstacle detection (e.g., obstacle detection systems for aircraft), etc. Depending on the application and associated field of view, multiple optical transmitters and/or optical receivers may be used to produce images in a desired resolution. A lidar system with greater numbers of transmitters and/or receivers can generally generate larger numbers of pixels.

[0004] In a multi-channel lidar device, optical transmitters can be paired with optical receivers to form multiple “channels.” In operation, each channel’s transmitter can emit an optical signal (e.g., laser light) into the device’s environment, and the channel’s receiver can detect the portion of the signal that is reflected back to the channel’s receiver by the surrounding environment. In this way, each channel can provide “point” measurements of the environment, which can be aggregated with the point measurements provided by the other channel(s) to form a “point cloud” of measurements of the environment.
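As an editorial illustration (not part of the patent text), the per-channel "point" measurements described above can be aggregated into a point cloud. The sketch below assumes each channel reports a range along a known azimuth/elevation direction; the function names are hypothetical.

```python
import math

def channel_point(range_m, azimuth_rad, elevation_rad):
    """Convert one channel's range measurement to a Cartesian point (x, y, z)."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

def aggregate_point_cloud(channel_returns):
    """Aggregate per-channel (range, azimuth, elevation) tuples into a point cloud."""
    return [channel_point(r, az, el) for (r, az, el) in channel_returns]
```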

[0005] The measurements collected by a lidar channel may be used to determine the distance (“range”) from the device to the surface in the environment that reflected the channel’s transmitted optical signal back to the channel’s receiver. In some cases, the range to a surface may be determined based on the time of flight of the channel’s signal (e.g., the time elapsed from the transmitter’s emission of the optical signal to the receiver’s reception of the return signal reflected by the surface). In other cases, the range may be determined based on the wavelength (or frequency) of the return signal(s) reflected by the surface.
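The time-of-flight range determination described above reduces to halving the round-trip path length at the speed of light. A minimal sketch of that arithmetic (an editorial illustration, not the patent's implementation):

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(round_trip_time_s):
    """One-way range from round-trip time of flight.

    The emitted pulse travels to the reflecting surface and back,
    so the one-way distance is half the total path length.
    """
    return C_M_PER_S * round_trip_time_s / 2.0
```

For example, a return arriving 1 microsecond after emission corresponds to a surface roughly 150 m away.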

[0006] In some cases, lidar measurements may be used to determine the reflectance of the surface that reflects an optical signal. The reflectance of a surface may be determined based on the intensity of the return signal, which generally depends not only on the reflectance of the surface but also on the range to the surface, the emitted signal’s glancing angle with respect to the surface, the power level of the channel’s transmitter, the alignment of the channel’s transmitter and receiver, and other factors.
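As a hedged sketch of the inverse problem described in the preceding paragraph: if return intensity is modeled as proportional to surface reflectance, transmit power, and the cosine of the incidence angle, and inversely proportional to range squared, reflectance can be recovered by inverting that model. This is a simplified assumed model, not the patent's method, and `calibration` is a hypothetical lumped constant covering the remaining factors.

```python
import math

def estimate_reflectance(intensity, range_m, incidence_angle_rad, tx_power, calibration=1.0):
    """Invert a simplified lidar intensity model:

        intensity ≈ reflectance * tx_power * cos(angle) * calibration / range**2
    """
    return intensity * range_m ** 2 / (tx_power * math.cos(incidence_angle_rad) * calibration)
```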

[0007] The foregoing examples of the related art and limitations therewith are intended to be illustrative and not exclusive, and are not admitted to be “prior art.” Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the drawings.

SUMMARY

[0008] Lidar sensors having linear focal planes, and methods of manufacturing and operating such lidar sensors, are described herein.

[0009] At least one aspect of the present disclosure is directed to a lidar device. The lidar device includes a plurality of laser sources configured to provide a plurality of transmit beams, each laser source of the plurality of laser sources being positioned with a respective offset of a first plurality of offsets relative to a reference line; a plurality of transmit/receive (T/R) interfaces configured to pass the plurality of transmit beams and reflect received light towards a plurality of detectors, each T/R interface of the plurality of T/R interfaces being positioned with a respective offset of a second plurality of offsets relative to the reference line; and a plurality of lenses positioned between the plurality of laser sources and the plurality of T/R interfaces, each lens of the plurality of lenses being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, are configured to provide beam-steering of the plurality of transmit beams.
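To illustrate the offset-based beam-steering of this aspect, here is a minimal paraxial sketch (an assumed thin-lens model added editorially, not the claimed implementation): an emitter placed in the focal plane of a micro-lens and displaced laterally from the lens axis produces a collimated beam steered by roughly the arctangent of that displacement over the focal length.

```python
import math

def steering_angle_rad(source_offset_m, lens_offset_m, focal_length_m):
    """Paraxial steering angle for an emitter in a micro-lens focal plane.

    The relevant displacement is the emitter's lateral offset relative to
    the lens axis (source offset minus lens offset); under this convention,
    a source offset larger than the lens offset steers the beam back
    toward the reference line.
    """
    return math.atan2(source_offset_m - lens_offset_m, focal_length_m)
```

Under this model, making the source offsets progressively larger than the lens offsets (as in claim 13) steers the outer beams more strongly toward a common point such as the system lens center.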

[0010] Another aspect of the present disclosure is directed to a method for operating a lidar device. The method includes providing a plurality of transmit beams via a plurality of laser sources, each laser source of the plurality of laser sources being positioned with a respective offset of a first plurality of offsets relative to a reference line, conditioning the plurality of transmit beams via a plurality of lenses, each lens of the plurality of lenses being positioned with a respective offset of a second plurality of offsets relative to the reference line, and passing the plurality of transmit beams and reflecting received light towards a plurality of detectors via a plurality of transmit/receive (T/R) interfaces, each T/R interface of the plurality of T/R interfaces being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, provide beam-steering of the plurality of transmit beams.

[0011] Another aspect of the present disclosure is directed to a method for manufacturing a lidar device. The method includes providing a laser source array including a plurality of laser sources disposed with a first pitch on a first substrate, the plurality of laser sources configured to provide a respective plurality of transmit beams, wherein providing the laser source array comprises positioning at least one laser source of the plurality of laser sources with a first offset relative to a reference line, providing a lens array including a plurality of lenses disposed with a second pitch on a second substrate, the plurality of lenses configured to condition the respective plurality of transmit beams provided by the laser source array, wherein providing the lens array comprises positioning at least one lens of the plurality of lenses with a second offset relative to the reference line, and providing a transmit/receive (T/R) interface array including a plurality of T/R interfaces disposed with a third pitch on a third substrate, the plurality of T/R interfaces configured to pass the respective plurality of transmit beams conditioned by the lens array and to reflect received light towards a plurality of detectors, wherein providing the T/R interface array comprises positioning at least one T/R interface of the plurality of T/R interfaces with a third offset relative to the reference line, wherein the laser source array and the lens array, as positioned, are configured to provide beam steering of one or more of the plurality of transmit beams.

[0012] Another aspect of the present disclosure is directed to a lidar device. The lidar device includes a laser source configured to provide a transmit beam, the laser source being positioned with a first offset relative to a reference line, a transmit/receive (T/R) interface configured to pass the transmit beam and reflect received light towards a detector, the T/R interface being positioned with a second offset relative to the reference line, and a lens positioned between the laser source and the T/R interface, the lens being positioned with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam.

[0013] Another aspect of the present disclosure is directed to a method for operating a lidar device. The method includes providing a transmit beam via a laser source positioned with a first offset relative to a reference line, conditioning the transmit beam via a lens positioned with a second offset relative to the reference line, and passing the transmit beam and reflecting received light towards a detector via a transmit/receive (T/R) interface positioned with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, steer the transmit beam.

[0014] Another aspect of the present disclosure is directed to a method for manufacturing a lidar device. The method includes providing a laser source configured to provide a transmit beam, wherein providing the laser source comprises positioning the laser source with a first offset relative to a reference line, providing a transmit/receive (T/R) interface configured to pass the transmit beam and reflect received light towards a detector, wherein providing the T/R interface comprises positioning the T/R interface with a second offset relative to the reference line, and providing a lens positioned between the laser source and the T/R interface, wherein providing the lens comprises positioning the lens with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, steer the transmit beam toward a center of a system lens of a lidar system comprising the lidar device.

[0015] The above and other preferred features, including various novel details of implementation and combination of events, will now be more particularly described with reference to the accompanying figures and pointed out in the claims. It will be understood that the particular systems and methods described herein are shown by way of illustration only and not as limitations. As will be understood by those skilled in the art, the principles and features described herein may be employed in various and numerous embodiments without departing from the scope of any of the present inventions. As can be appreciated from the foregoing and the following description, each and every feature described herein, and each and every combination of two or more such features, is included within the scope of the present disclosure provided that the features included in such a combination are not mutually inconsistent. In addition, any feature or combination of features may be specifically excluded from any embodiment of any of the present inventions.

[0016] The foregoing Summary, including the description of some embodiments, motivations therefor, and/or advantages thereof, is intended to assist the reader in understanding the present disclosure, and does not in any way limit the scope of any of the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] The accompanying figures, which are included as part of the present specification, illustrate the presently preferred embodiments and, together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain and teach the principles described herein.

[0018] FIG. 1 is an illustration of an exemplary lidar system, in accordance with some embodiments.

[0019] FIG. 2A is an illustration of the operation of a lidar system, in accordance with some embodiments.

[0020] FIG. 2B is an illustration of optical components of a channel of a lidar system with a movable mirror, in accordance with some embodiments.

[0021] FIG. 2C is an illustration of an example of a 3D lidar system, in accordance with some embodiments.

[0022] FIG. 2D is a block diagram of a transmitter-receiver optical sub-assembly (TROSA), according to some embodiments.

[0023] FIG. 3A is a functional block diagram of an example 3D lidar system.

[0024] FIG. 3B depicts an illustration of the timing of emission of a pulsed measurement beam and capture of the returning measurement pulse, according to an example.

[0025] FIG. 3C depicts a view of a light emission/collection engine of a 3D lidar system, according to an example.

[0026] FIG. 3D depicts a view of collection optics of a 3D lidar system, according to an example.

[0027] FIG. 4 is a schematic diagram of a single channel lidar system, according to an example.

[0028] FIGS. 5A-5B are schematic diagrams of a lidar device array, according to an example.

[0029] FIG. 6 is a schematic diagram of a lidar device assembly in accordance with some embodiments.

[0030] FIGS. 7A-7B are schematic diagrams of lidar device assemblies in accordance with some embodiments.

[0031] FIG. 8 is a schematic diagram of a lidar device array in accordance with some embodiments.

[0032] FIG. 9 is a schematic diagram of a multi-channel lidar system in accordance with some embodiments.

[0033] FIG. 10 is a schematic diagram of a lidar device assembly in accordance with some embodiments.

[0034] FIG. 11 illustrates a method for operating a lidar system in accordance with some embodiments.

[0035] FIG. 12A is a flowchart of a method for manufacturing a lidar device array in accordance with some embodiments.

[0036] FIG. 12B is a flowchart of another method for manufacturing a lidar device array in accordance with some embodiments.

[0037] FIG. 13A is a schematic diagram of a lidar device in accordance with some embodiments.

[0038] FIG. 13B is a schematic diagram of a lidar device in accordance with some embodiments.

[0039] FIG. 14 is an illustration of an example continuous wave (CW) coherent lidar system, in accordance with some embodiments.

[0040] FIG. 15 is an illustration of an example frequency modulated continuous wave (FMCW) coherent lidar system, in accordance with some embodiments.

[0041] FIG. 16A is a plot of a frequency chirp as a function of time in a transmitted laser signal and reflected signal, in accordance with some embodiments.

[0042] FIG. 16B is a plot illustrating a beat frequency of a mixed signal, in accordance with some embodiments.

[0043] FIG. 17 is a diagram of a vehicle including a plurality of sensors, in accordance with some embodiments.

[0044] FIG. 18 is a block diagram of a silicon photonic integrated circuit (PIC) in accordance with some embodiments.

[0045] FIG. 19 is a block diagram of an example computer system, in accordance with some embodiments.

[0046] FIG. 20 is a block diagram of a computing device/information handling system, in accordance with some embodiments.

[0047] While the present disclosure is subject to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. The present disclosure should be understood to not be limited to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.

DETAILED DESCRIPTION

[0048] Systems and methods relating to lidar devices having linear focal planes are described herein. It will be appreciated that, for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that the exemplary embodiments described herein may be practiced without these specific details.

[0049] Three of the most significant technical challenges faced by the lidar industry are (1) reducing the manufacturing cost for lidar devices while maintaining existing performance levels, (2) improving the reliability of lidar devices under automotive operating conditions (e.g., weather, temperature, and mechanical vibration), and (3) increasing the range of lidar devices. One approach to reducing manufacturing costs is to reduce the amount of hardware (e.g., channels, transmitters, emitters, receivers, detectors, etc.) in the lidar device while increasing the utilization of the remaining hardware to maintain performance levels. One approach to improving device reliability is to develop lidar devices that use fewer moving mechanical parts (e.g., by eliminating or simplifying mechanical beam scanners). One approach to extending range is to develop lidar devices that use solid-state lasers.

Motivation for and Benefits of Some Embodiments

[0050] In many cases, lidar systems include multiple channels (or lidar devices). These channels are often arranged such that each channel points towards the center of a common lens (e.g., each channel's light source is oriented such that the chief rays of the beams it emits pass through the center of the lens). By physically pointing each channel towards the center of the common lens, the lidar system can achieve better overall system performance (e.g., reduced light clipping at the lens). However, due to the unique positioning of each lidar channel, the lidar system is configured with a curved focal plane. As such, manufacturing the lidar system may involve individually aligning each lidar channel, which can be expensive and time consuming. In addition, the unique positioning of each lidar channel around the curved focal plane can increase the size of the lidar system.

[0051] In at least one embodiment, at least one component included in a lidar device is positioned such that the lidar device provides micro-optic beam steering. For example, micro-optic beam steering is provided if a lens (e.g., microlens) of a lidar channel changes the direction of the chief ray of a laser beam passing through the lens by more than a threshold amount. For example, the laser and/or the lens (e.g., microlens) of a lidar channel may be positioned (e.g., relative to each other) to provide micro-optic beam steering. In some examples, the micro-optic beam steering allows the lidar device to be included in a device array having a linear (or flat) focal plane arrangement. As such, the size of the lidar device array (and the lidar system) can be reduced. In certain examples, the per-channel alignment time and cost of the lidar device array can be reduced by using multi-channel array components that are aligned at the component level.

Some Examples of Lidar Systems

[0052] A lidar system may be used to measure the shape and contour of the environment surrounding the system. Lidar systems may be applied to numerous applications including autonomous navigation and aerial mapping of surfaces. In general, a lidar system emits light that is subsequently reflected by objects within the environment in which the system operates. The light may be emitted by a laser (e.g., a rapidly firing laser). Laser light travels through a medium and reflects off points of surfaces in the environment (e.g., surfaces of buildings, tree branches, vehicles, etc.). The reflected (and/or scattered) light energy returns to a lidar detector where it may be sensed and used to perceive the environment.

[0053] The science of lidar systems is based on the physics of light and optics. Any suitable measurement techniques may be used to determine the attributes of objects in a lidar system’s environment. In some examples, the lidar system is configured to emit light pulses (e.g., individual pulses or sequences of pulses). The time each pulse (or pulse sequence) travels from being emitted to being received (“time of flight” or “TOF”) may be measured to determine the distance between the lidar system and the object that reflects the pulse. Lidar systems that operate in this way may be referred to as “pulsed lidar,” “TOF lidar,” “direct TOF lidar,” or “pulsed TOF lidar.” In some other examples, the time of flight may be calculated indirectly (e.g., using amplitude-modulated continuous wave (AMCW) structured light). Lidar systems that operate in this way may be referred to as “indirect TOF lidar” or “iTOF lidar.” In still other examples, the lidar system can be configured to emit continuous wave (CW) light. The wavelength (or frequency) of the received, reflected light may be measured to determine the distance between the lidar system and the object that reflects the light. In some examples, lidar systems can measure the speed (or velocity) of objects. Lidar systems that operate in this way may be referred to as “coherent lidar,” “continuous wave lidar,” or “CW lidar.” In a CW lidar system, any suitable variant of CW lidar sensing may be used. For example, frequency modulated continuous wave (FMCW) lidar sensing may be used.

[0054] FIG. 1 depicts the operation of a lidar system 100, according to some embodiments. In the example of FIG. 1, the lidar system 100 includes a lidar device 102, which may include a transmitter 104 that generates and emits a light signal 110, a receiver 106 that detects and processes a return light signal 114, and a control & data acquisition module 108. The transmitter 104 may include a light source (e.g., “optical emitter” or “emitter”), electrical components operable to activate (e.g., drive) and deactivate the light source in response to electrical control signals, and optical components adapted to shape and redirect the light emitted by the light source. The receiver 106 may include a light detector (e.g., “optical detector,” “photodetector,” or “detector”) and optical components adapted to shape return light signals 114 and direct those signals to the detector. In some implementations, one or more optical components (e.g., lenses, mirrors, etc.) may be shared by the transmitter and the receiver.

[0055] The lidar device 102 may be referred to as a lidar transceiver or “channel.” In operation, the emitted light signal 110 propagates through a medium and reflects off an object(s) 112, whereby a return light signal 114 propagates through the medium and is received by receiver 106. In one example, each lidar channel may correspond to a physical mapping of a single emitter to a single detector (e.g., a one-to-one pairing of a particular emitter and a particular detector). In other examples, however, each lidar channel may correspond to a physical mapping of multiple emitters to a single detector or a physical mapping of a single emitter to multiple detectors (e.g., a “flash” configuration). In some examples, a lidar system 100 may have no fixed channels; rather, light emitted by one or more emitters may be detected by one or more detectors without any physical or persistent mapping of specific emitters to specific detectors.

[0056] Any suitable light source may be used including, without limitation, one or more gas lasers, chemical lasers, metal-vapor lasers, solid-state lasers (SSLs) (e.g., Q-switched SSLs, Q-switched solid-state bulk lasers, etc.), fiber lasers (e.g., Q-switched fiber lasers), liquid lasers (e.g., dye lasers), semiconductor lasers (e.g., laser diodes, edge emitting lasers (EELs), vertical-cavity surface emitting lasers (VCSELs), quantum cascade lasers, quantum dot lasers, quantum well lasers, hybrid silicon lasers, optically pumped semiconductor lasers, etc.), and/or any other device operable to emit light. For semiconductor lasers, any suitable gain medium may be used including, without limitation, gallium nitride (GaN), indium gallium nitride (InGaN), aluminum gallium indium phosphide (AlGaInP), aluminum gallium arsenide (AlGaAs), indium gallium arsenide phosphide (InGaAsP), lead salt, etc. For Q-switched lasers, any suitable type or variant of Q-switching can be used including, without limitation, active Q-switching, passive Q-switching, cavity dumping, regenerative Q-switching, etc. The light source may emit light having any suitable wavelength or wavelengths, including but not limited to wavelengths between 100 nm (or less) and 1 mm (or more). Semiconductor lasers operable to emit light having wavelengths of approximately 905 nm, 1300 nm, or 1550 nm are widely commercially available. In some examples, the light source may be operated as a pulsed laser, a continuous-wave (CW) laser, and/or a coherent laser. A light signal (e.g., "optical signal") 110 emitted by a light source may consist of a single pulse, may include a sequence of two or more pulses, or may be a continuous wave.

[0057] A lidar system 100 may use any suitable illumination technique to illuminate the system's field of view (FOV). In some examples, the lidar system 100 may illuminate the entire FOV simultaneously. Such illumination techniques may be referred to herein as "flood illumination" or "flash illumination." In some examples, the lidar system 100 may illuminate fixed, discrete spots throughout the FOV simultaneously. Such illumination techniques may be referred to herein as "fixed spot illumination." In some examples, the lidar system 100 may illuminate a line within the FOV and use a scanner (e.g., a 1D scanner) to scan the line over the entire FOV. Such illumination techniques may be referred to herein as "scanned line illumination." In some examples, the lidar system 100 may simultaneously illuminate one or more spots within the FOV and use a scanner (e.g., a 1D or 2D scanner) to scan the spots over the entire FOV. Such illumination techniques may be referred to herein as "scanned spot illumination."

[0058] Any suitable optical detector may be used including, without limitation, one or more photodetectors, contact image sensors (CIS), solid-state photodetectors (e.g., photodiodes (PD), single-photon avalanche diodes (SPADs), avalanche photodiodes (APDs), etc.), photomultipliers (e.g., silicon photomultipliers (SiPMs)), and/or any other device operable to convert light (e.g., optical signals) into electrical signals. In some examples, CIS can be fabricated using a complementary metal-oxide semiconductor (CMOS) process. In some examples, solid-state photodetectors can be fabricated using semiconductor processes similar to CMOS. Such semiconductor processes may use silicon, germanium, indium gallium arsenide, lead (II) sulfide, mercury cadmium telluride, MoS2, graphene, and/or any other suitable material(s). In some examples, an array of integrated or discrete CIS or solid-state photodetectors can be used to simultaneously image (e.g., perform optical detection across) the lidar device's entire field of view or a portion thereof. In general, solid-state photodetectors may be configured to detect light having wavelengths between 190 nm (or lower) and 1.4 μm (or higher). PDs and APDs configured to detect light having wavelengths of approximately 905 nm, 1300 nm, or 1550 nm are widely commercially available.

[0059] The lidar system 100 may include any suitable combination of measurement technique(s), light source(s), illumination technique(s), and detector(s). Some combinations may be more accurate or more economical under certain conditions. For example, some combinations may be more economical for short-range sensing but incapable of providing accurate measurements at longer ranges. Some combinations may pose potential hazards to eye safety, while other combinations may reduce such hazards to negligible levels.

[0060] The control & data acquisition module 108 may control the light emission by the transmitter 104 and may record data derived from the return light signal 114 detected by the receiver 106. In some embodiments, the control & data acquisition module 108 controls the power level at which the transmitter 104 operates when emitting light. For example, the transmitter 104 may be configured to operate at a plurality of different power levels, and the control & data acquisition module 108 may select the power level at which the transmitter 104 operates at any given time. Any suitable technique may be used to control the power level at which the transmitter 104 operates. In some embodiments, the control & data acquisition module 108 or the receiver 106 determines (e.g., measures) particular characteristics of the return light signal 114 detected by the receiver 106. For example, the control & data acquisition module 108 or receiver 106 may measure the intensity of the return light signal 114 using any suitable technique.

[0061] Operational parameters of the transceiver 102 may include its horizontal field of view ("FOV") and its vertical FOV. The FOV parameters effectively define the region of the environment that is visible to the specific lidar transceiver 102. More generally, the horizontal and vertical FOVs of a lidar system 100 may be defined by combining the fields of view of a plurality of lidar devices 102.

[0062] To obtain measurements of points in its environment and generate a point cloud based on those measurements, a lidar system 100 may scan its FOV. A lidar transceiver system 100 may include one or more beam-steering components (not shown) to redirect and shape the emitted light signals 110 and/or the return light signals 114. Any suitable beam-steering components may be used including, without limitation, mechanical beam steering components (e.g., rotating assemblies that physically rotate the transceiver(s) 102, rotating scan mirrors that deflect emitted light signals 110 and/or return light signals 114, etc.), optical beam steering components (e.g., lenses, lens arrays, microlenses, microlens arrays, beam splitters, etc.), microelectromechanical (MEMS) beam steering components (e.g., MEMS scan mirrors, etc.), solid-state beam steering components (e.g., optical phased arrays, optical frequency diversity arrays, etc.), etc.

[0063] In some implementations, the lidar system 100 may include or be communicatively coupled to a data analysis & interpretation module 109, which may receive outputs (e.g., via a connection 116) from the control & data acquisition module 108 and may perform data analysis on those outputs. By way of example and not limitation, connection 116 may be implemented using wired or wireless (e.g., non-contact communication) technique(s).

[0064] FIG. 2A illustrates the operation of a lidar system 202, in accordance with some embodiments. In the example of FIG. 2A, two return light signals 203 and 205 are shown. Because laser beams generally tend to diverge as they travel through a medium, a single laser emission may hit multiple objects at different ranges from the lidar system 202, producing multiple return signals 203, 205. The lidar system 202 may analyze multiple return signals 203, 205 and report one of the return signals (e.g., the strongest return signal, the last return signal, etc.) or more than one (e.g., all) of the return signals. In the example of FIG. 2A, lidar system 202 emits laser light in the direction of near wall 204 and far wall 208. As illustrated, the majority of the emitted light hits the near wall 204 at area 206 resulting in a return signal 203, and another portion of the emitted light hits the far wall 208 at area 210 resulting in a return signal 205. Return signal 203 may have a shorter TOF and a stronger received signal strength compared to return signal 205. In both single- and multiple-return lidar systems, it is important that each return signal is accurately associated with the transmitted light signal so that one or more attributes of the object reflecting the light signal (e.g., range, velocity, reflectance, etc.) can be correctly estimated.
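The multiple-return reporting behavior described above can be sketched as follows. The `Return` type, the `select_return` function, and the example values are hypothetical (not taken from this disclosure); they illustrate how a "strongest" or "last" reporting policy might select among returns such as signals 203 and 205.

```python
from dataclasses import dataclass

@dataclass
class Return:
    tof_s: float       # round-trip time of flight, seconds
    intensity: float   # received signal strength (arbitrary units)

def select_return(returns, mode="strongest"):
    """Report one return per emitted pulse.

    'strongest' picks the highest-intensity return (e.g., the near wall);
    'last' picks the return with the longest time of flight (the far wall).
    """
    if not returns:
        return None
    if mode == "strongest":
        return max(returns, key=lambda r: r.intensity)
    if mode == "last":
        return max(returns, key=lambda r: r.tof_s)
    raise ValueError(f"unknown mode: {mode}")

near = Return(tof_s=100e-9, intensity=0.9)  # strong return, short TOF
far = Return(tof_s=400e-9, intensity=0.2)   # weak return, long TOF
```

A multiple-return system could instead report every element of `returns`; the policy above is only one of the reporting options the paragraph mentions.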

[0065] Some embodiments of a lidar system may capture distance data in a two-dimensional ("2D") (e.g., within a single plane) point cloud manner. These lidar systems may be used in industrial applications, or for surveying, mapping, autonomous navigation, and other uses. Some embodiments of these systems rely on the use of a single laser emitter/detector pair combined with a moving mirror to effect scanning across at least one plane. This mirror may reflect the emitted light from the transmitter (e.g., laser diode), and/or may reflect the return light to the receiver (e.g., to the detector). Use of a movable (e.g., oscillating) mirror in this manner may enable the lidar system to achieve 90, 180, or 360 degrees of azimuth (horizontal) view while simplifying both the system design and manufacturability. Many applications require more data than just a 2D plane. The 2D point cloud may be expanded to form a 3D point cloud, in which multiple 2D point clouds are used, each corresponding to a different elevation (e.g., a different position and/or direction with respect to a vertical axis). Operational parameters of the receiver of a lidar system may include the horizontal FOV and the vertical FOV.

[0066] FIG. 2B depicts a lidar system 250 with a movable (e.g., rotating or oscillating) mirror, according to some embodiments. In the example of FIG. 2B, the lidar system 250 uses a single emitter 252 / detector 262 pair combined with a fixed mirror 254 and a movable mirror 256 to effectively scan across a plane. Distance measurements obtained by such a system may be effectively two-dimensional (e.g., planar), and the captured distance points may be rendered as a 2D (e.g., single plane) point cloud. In some embodiments, but without limitation, the movable mirror 256 may oscillate at very fast speeds (e.g., thousands of cycles per minute).

[0067] The emitted laser signal 251 may be directed to a fixed mirror 254, which may reflect the emitted laser signal 251 to the movable mirror 256. As movable mirror 256 moves (e.g., oscillates), the emitted laser signal 251 may reflect off an object 258 in its propagation path. The reflected return signal 253 may be coupled to the detector 262 via the movable mirror 256 and the fixed mirror 254. In some embodiments, the movable mirror 256 is implemented with mechanical technology or with solid state technology (e.g., MEMS).

[0068] FIG. 2C depicts a 3D lidar system 270, according to some embodiments. In the example of FIG. 2C, the 3D lidar system 270 includes a lower housing 271 and an upper housing 272. The upper housing 272 includes a cylindrical shell element 273 constructed from a material that is transparent to infrared light (e.g., light having a wavelength within the spectral range of 700 to 1,700 nanometers). In one example, the cylindrical shell element 273 is transparent to light having wavelengths centered at 905 nanometers.

[0069] In some embodiments, the 3D lidar system 270 includes a lidar transceiver, such as transceiver 102 shown in FIG. 1, operable to emit laser beams 276 through the cylindrical shell element 273 of the upper housing 272. In the example of FIG. 2C, each individual arrow in the sets of arrows 275, 275’ directed outward from the 3D lidar system 270 represents a laser beam 276 emitted by the 3D lidar system. Each beam of light emitted from the system 270 (e.g., each laser beam 276) may diverge slightly, such that each beam of emitted light forms a cone of light emitted from system 270. In one example, a beam of light emitted from the system 270 illuminates a spot size of 20 centimeters in diameter at a distance of 100 meters from the system 270.
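Using the example numbers above (a 20-centimeter spot at 100 meters), the implied full-angle beam divergence can be estimated with a small-angle approximation. The calculation below is illustrative only; the function name is an assumption.

```python
def divergence_mrad(spot_diameter_m: float, distance_m: float) -> float:
    """Full-angle beam divergence, in milliradians, implied by a given
    illuminated spot diameter at a given distance (small-angle
    approximation: angle ~ diameter / distance)."""
    return spot_diameter_m / distance_m * 1000.0

# A 20 cm spot at 100 m implies a divergence of about 2 mrad.
d = divergence_mrad(0.20, 100.0)
```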

[0070] In some embodiments, the transceiver 102 emits each laser beam 276 transmitted by the 3D lidar system 270. The direction of each emitted beam may be determined by the angular orientation ω of the transceiver's transmitter 104 with respect to the system's central axis 274 and by the angular orientation ψ of the transmitter's movable mirror (e.g., similar or identical to movable mirror 256 shown in FIG. 2B) with respect to the mirror's axis of oscillation (or rotation). For example, the direction of an emitted beam in a horizontal dimension may be determined by the transmitter's angular orientation ω, and the direction of the emitted beam in a vertical dimension may be determined by the angular orientation ψ of the transmitter's movable mirror. Alternatively, the direction of an emitted beam in a vertical dimension may be determined by the transmitter's angular orientation ω, and the direction of the emitted beam in a horizontal dimension may be determined by the angular orientation ψ of the transmitter's movable mirror. (For purposes of illustration, the beams of light 275 are illustrated in one angular orientation relative to a non-rotating coordinate frame of the 3D lidar system 270 and the beams of light 275' are illustrated in another angular orientation relative to the non-rotating coordinate frame.)

[0071] The 3D lidar system 270 may scan a particular point (e.g., pixel) in its field of view by adjusting the angular orientation ω of the transmitter and the angular orientation ψ of the transmitter's movable mirror to the desired scan point (ω, ψ) and emitting a laser beam from the transmitter 104. Accordingly, the 3D lidar system 270 may systematically scan its field of view by adjusting the angular orientation ω of the transmitter and the angular orientation ψ of the transmitter's movable mirror to a set of scan points (ωi, ψj) and emitting a laser beam from the transmitter 104 at each of the scan points.
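The systematic scan over a set of scan points (ωi, ψj) can be sketched as a simple raster iteration. The generator below is a hypothetical illustration (the function name and orientation values are assumptions, not from this disclosure): each yielded pair is one orientation at which the transmitter would emit a beam.

```python
def scan_pattern(omega_values, psi_values):
    """Yield each scan point (omega_i, psi_j): a transmitter orientation
    omega paired with a movable-mirror orientation psi. One laser beam
    would be emitted at each yielded orientation."""
    for omega in omega_values:
        for psi in psi_values:
            yield (omega, psi)

# A 3 x 3 grid of orientations produces 9 scan points.
points = list(scan_pattern([0.0, 1.0, 2.0], [-0.5, 0.0, 0.5]))
```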

[0072] Assuming that the optical component(s) (e.g., movable mirror 256) of a lidar transceiver remain stationary during the time period after the transmitter 104 emits a laser beam 110 (e.g., a pulsed laser beam or “pulse” or a CW laser beam) and before the receiver 106 receives the corresponding return beam 114, the return beam generally forms a spot centered at (or near) a stationary location L0 on the detector. This time period is referred to herein as the “ranging period” or “listening period” of the scan point associated with the transmitted beam 110 and the return beam 114.

[0073] In many lidar systems, the optical component(s) of a lidar transceiver do not remain stationary during the ranging period of a scan point. Rather, during a scan point's ranging period, the optical component(s) may be moved to orientation(s) associated with one or more other scan points, and the laser beams that scan those other scan points may be transmitted. In such systems, absent compensation, the location L1 of the center of the spot at which the transceiver's detector receives a return beam 114 generally depends on the change in the orientation of the transceiver's optical component(s) during the ranging period, which depends on the angular scan rate (e.g., the rate of angular motion of the movable mirror 256) and the range to the object 112 that reflects the transmitted light. The distance between the location L1 of the spot formed by the return beam and the nominal location L0 of the spot that would have been formed absent the intervening rotation of the optical component(s) during the ranging period is referred to herein as "walk-off."
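The dependence of walk-off on scan rate and range can be sketched with a simplified model. This is an illustrative assumption, not the disclosure's method: the mirror rotates for the duration of the round-trip time of flight, a mirror deflects a beam by twice its own rotation, and the spot offset on the detector is approximately the focal length times the angular shift. The function name and parameters are hypothetical.

```python
C = 299_792_458.0  # speed of light, m/s

def walkoff_m(scan_rate_rad_s: float, range_m: float,
              focal_length_m: float) -> float:
    """Rough walk-off estimate (illustrative model only).

    The beam angle shifts by twice the mirror's rotation during the
    round-trip time of flight, and the spot on the detector moves by
    roughly focal_length * angle (small-angle approximation).
    """
    tof_s = 2.0 * range_m / C                  # round-trip time of flight
    angle_rad = 2.0 * scan_rate_rad_s * tof_s  # mirror doubles the deflection
    return focal_length_m * angle_rad
```

Consistent with the paragraph above, the estimate grows linearly with both the angular scan rate and the range to the reflecting object.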

[0074] Referring to FIG. 2D, a block diagram of a transmitter-receiver optical subassembly (TROSA) 281 is shown, according to some embodiments. In some embodiments, the TROSA 281 may include a TOSA 280, an optical detector 287, a beam splitter 283, signal conditioning electronics 289, an analog to digital (A/D) converter 290, controller 292, and digital input/output (I/O) electronics 293. In some embodiments, the TROSA components illustrated in FIG. 2D are integrated onto a common substrate 282 (e.g., printed circuit board, ceramic substrate, etc.). In some embodiments, the TROSA components illustrated in FIG. 2D are individually mounted to a common substrate 282. In some embodiments, groups of these components are packaged together and the integrated package(s) is/are mounted to the common substrate.

[0075] The TOSA 280 may include one or more light sources and may operate the light source(s) safely within specified safety thresholds. A light source of the TOSA may emit an optical signal (e.g., laser beam) 285.

[0076] A return signal 284 may be detected by the TROSA 281 in response to the optical signal 285 illuminating a particular location. For example, the optical detector 287 may detect the return signal 284 and generate an electrical signal 288 based on the return signal 284. The controller 292 may initiate a measurement window (e.g., a period of time during which collected return signal data are associated with a particular emitted light signal 285) by enabling data acquisition by optical detector 287. Controller 292 may control the timing of the measurement window to correspond with the period of time when a return signal is expected in response to the emission of an optical signal 285. In some examples, the measurement window is enabled at the time when the optical signal 285 is emitted and is disabled after a time period corresponding to the time of flight of light over a distance that is substantially twice the range of the lidar device in which the TROSA 281 operates. In this manner, the measurement window is open to collect return light from objects adjacent to the lidar device (e.g., negligible time of flight), objects that are located at the maximum range of the lidar device, and objects in between. In this manner, other light that does not contribute to a useful return signal may be rejected.
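The measurement-window duration described above can be sketched numerically: the window stays open for the time light takes to travel twice the device's maximum range. The snippet is illustrative only; the function name and example range are assumptions.

```python
C = 299_792_458.0  # speed of light, m/s

def measurement_window_s(max_range_m: float) -> float:
    """Duration the detector stays enabled after a pulse is emitted:
    the time for light to travel a distance of twice the maximum range,
    so returns from adjacent objects through objects at maximum range
    are all captured."""
    return 2.0 * max_range_m / C

# For a 300 m maximum range, the window stays open for about 2 microseconds.
window = measurement_window_s(300.0)
```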

[0077] In some embodiments, the signal analysis of the electrical signal 288 produced by the optical detector 287 is performed entirely by the controller 292. In such embodiments, the signals 294 provided by the TROSA 281 may include an indication of the distances determined by controller 292. In some embodiments, the signals 294 include the digital signals 291 generated by the A/D converter 290. These raw measurement signals 291 may be processed further by one or more processors located on board the lidar device or external to the lidar device to arrive at a measurement of distance. In some embodiments, the controller 292 performs preliminary signal processing steps on the signals 291 and the signals 294 include processed data that are further processed by one or more processors located on board the lidar device or external to the lidar device to arrive at a measurement of distance.

[0078] In some embodiments a lidar device (e.g., a lidar device 100, 202, 250, or 270) includes multiple TROSAs 281. In some embodiments, a delay time is enforced between the firing of each TROSA and/or between the firing of different light sources within the same TROSA. In some examples, the delay time is greater than the time of flight of the light signal 285 to and from an object located at the maximum range of the lidar device, to reduce or avoid optical cross-talk among any of the TROSAs 281. In some other examples, an optical signal 285 is emitted from one TROSA 281 before a return signal corresponding to a light signal emitted from another TROSA 281 has had time to return to the lidar device. In these embodiments, there may be sufficient spatial separation between the areas of the surrounding environment interrogated by the light signals of these TROSAs to avoid optical cross-talk.

[0079] In some embodiments, digital I/O 293, A/D converter 290, and signal conditioning electronics 289 are integrated onto a single, silicon-based microelectronic chip. In another embodiment, these same elements are integrated into a single gallium-nitride or silicon-based circuit that also includes components of the TOSA 280 (e.g., an illumination driver). In some embodiments, the A/D converter 290 and controller 292 are combined as a time-to-digital converter.

[0080] As depicted in FIG. 2D, return light 284 reflected from the surrounding environment is detected by optical detector 287. In some embodiments, optical detector 287 includes one or more avalanche photodiodes (APDs) and/or single-photon avalanche diodes (SPADs). Any suitable optical detector may be used. In some embodiments, optical detector 287 generates an output signal 288 that is amplified by signal conditioning electronics 289. In some embodiments, signal conditioning electronics 289 include an analog trans-impedance amplifier. However, in general, the amplification of output signal 288 may include multiple amplifier stages. In this sense, an analog transimpedance amplifier is provided by way of non-limiting example, as many other analog signal amplification schemes may be used.

[0081] In some embodiments, the amplified signal is communicated to A/D converter 290, and the digital signals generated by the A/D converter are communicated to controller 292. Controller 292 may generate an enable/disable signal to control the timing of data acquisition by ADC 290.

[0082] As depicted in FIG. 2D, the optical signal 285 emitted from the TROSA 281 and the return signal 284 directed toward the TROSA 281 share a common path within the lidar device. In the embodiment depicted in FIG. 2D, the return light 284 is separated from the emitted light 285 by a beam splitter 283. The beam splitter may direct the light 285 emitted by the TOSA 280 toward the lidar device’s environment, and direct the return light 284 to the optical detector 287. Any suitable beam splitter may be used, including (without limitation) a polarizing beam splitter, a nonpolarizing beam splitter, a dielectric film, etc. Some non-limiting examples of suitable beam splitters are described in International Patent Publication No. WO 2017/164989.

[0083] FIG. 3A depicts a lidar system 300 in one embodiment. The lidar system 300 includes a master controller 390, one or more lidar devices 330, one or more beam shaping optical assemblies 363, and one or more beam scanning devices 364. A lidar device 330 includes a return signal receiver integrated circuit (IC) 350, an illumination driver integrated circuit (IC) 352, a light source 360, a photodetector 370, and a trans-impedance amplifier (TIA) 380. In some embodiments, each of these components is mounted to a common substrate 335 (e.g., printed circuit board) that provides mechanical support and electrical connectivity among the components. In such embodiments, the lidar device 330 may be referred to herein as an “integrated lidar device.”

[0084] Light source (or emitter) 360 emits a light signal 362 in response to an electrical signal (e.g., electrical current) 353. The light signal 362 may include one or more pulses. In some embodiments, the light source 360 is laser based (e.g., a laser diode). In some embodiments, the light source is based on one or more light emitting diodes. In general, any suitable light source (e.g., pulsed light source) may be used. Light signal 362 exits lidar device 330 and reflects from an object in the surrounding 3D environment under measurement. A portion of the reflected light is collected as return measurement light signal 371 associated with the light signal 362. As depicted in FIG. 3A, light signal 362 emitted from integrated lidar device 330 and corresponding return measurement light signal 371 directed toward integrated lidar device 330 share a common optical path.

[0085] In one aspect, the light signal 362 is focused and projected toward a particular location in the surrounding environment by one or more beam shaping optical components (collectively, “beam shaping optical assembly”) 363 and a beam scanning device 364 of lidar measurement system 300. In a further aspect, the return measurement light 371 is directed and focused onto photodetector 370 by beam scanning device 364 and the one or more beam shaping optical components 363 of lidar measurement system 300. The beam scanning device is used in the optical path between the beam shaping optics and the environment under measurement. The beam scanning device effectively expands the field of view and/or increases the sampling density within the field of view of the 3D lidar system.

[0086] In the embodiment depicted in FIG. 3A, beam scanning device 364 includes a moveable mirror component that is rotated about an axis of rotation 367 by rotary actuator 365. Command signals 366 generated by master controller 390 are communicated from master controller 390 to rotary actuator 365. In response, rotary actuator 365 scans moveable mirror component 364 in accordance with a desired motion profile. Other beam scanning techniques may be used. In some embodiments, in addition to rotating about the axis of rotation 367 (or as an alternative to such rotation), the movable mirror may oscillate on one or more axes to scan in one or more dimensions (e.g., horizontally or vertically). The oscillation may provide the lidar system with 5-180 degrees (e.g., 5-120 degrees, 15-120 degrees, 70 degrees, 90 degrees, or 120 degrees) of view in the direction scanned via the mirror’s oscillation. Any suitable technique may be used to control the mirror’s oscillation including, without limitation, the techniques described in U.S. Patent Application Serial No. 17/392,080, titled “Scanning Mirror Mechanisms for LIDAR Systems, and Related Methods and Apparatus” and filed under Attorney Docket No. VLI-047CP on August 2, 2021, which is hereby incorporated by reference herein in its entirety.

[0087] Integrated lidar device 330 includes a photodetector (or detector) 370 having an active sensor area 374. As depicted in FIG. 3A, light source 360 is located outside the field of view of the active area 374 of the photodetector. As depicted in FIG. 3A, an overmold lens 372 is mounted over the photodetector 370. The overmold lens 372 includes a conical cavity that corresponds with the ray acceptance cone of return light signal 371. Light signal 362 from light source 360 is injected into the detector reception cone by a fiber waveguide. An optical coupler optically couples light source 360 with the fiber waveguide.
At the end of the fiber waveguide, a mirror component 361 is oriented at a 45 degree angle with respect to the waveguide to inject the light signal 362 into the cone of return light signal 371. In one embodiment, the end faces of the fiber waveguide are cut at a 45 degree angle and the end faces are coated with a highly reflective dielectric coating to provide a mirror surface. In some embodiments, the waveguide includes a rectangular shaped glass core and a polymer cladding of lower index of refraction. In some embodiments, the entire optical assembly is encapsulated with a material having an index of refraction that closely matches the index of refraction of the polymer cladding. In this manner, the waveguide injects the light signal 362 into the acceptance cone of return light signal 371 with minimal occlusion.

[0088] The placement of the waveguide within the acceptance cone of the return light signal 371 projected onto the active sensing area 374 of photodetector 370 may be selected to facilitate maximum overlap of the illumination spot and the detector field of view in the far field.

[0089] As depicted in FIG. 3A, return light signal 371 reflected from the surrounding environment is detected by photodetector 370. In some embodiments, photodetector 370 is an avalanche photodiode. Photodetector 370 generates an output signal 373 that is amplified by an amplifier 380 (e.g., an analog trans-impedance amplifier (TIA)). In general, the amplifier 380 may include one or more amplifier stages of any suitable type. In this sense, an analog trans-impedance amplifier is provided by way of non-limiting example, as many other analog signal amplification schemes may be contemplated within the scope of this disclosure. Although TIA 380 is depicted in FIG. 3A as a discrete device separate from the receiver IC 350, in general, TIA 380 may be integrated with receiver IC 350. In some embodiments, it is preferable to integrate TIA 380 with receiver IC 350 to save space and reduce signal contamination.

[0090] The amplified signal 381 is communicated to return signal receiver IC 350. Receiver IC 350 includes timing circuitry and a time-to-digital converter that estimates the time of flight of the light signal from light source 360, to a reflective object in the 3D environment, and back to the photodetector 370. A signal 355 indicative of the estimated time of flight is communicated to master controller 390 for further processing and/or communication to a user of the lidar system 300. In addition, return signal receiver IC 350 may be configured to digitize segments of the amplified signal 381 that include peak values (e.g., return pulses), and communicate signals 356 indicative of the digitized segments to master controller 390. In some embodiments, master controller 390 processes these signal segments to identify properties of the detected object. In some embodiments, master controller 390 communicates signals 356 to a user of the lidar system 300 for further processing.
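The time-of-flight estimate described above converts to a reported distance by a simple time-to-distance relation. The sketch below is illustrative (the function name is an assumption); a round-trip time of flight TOF corresponds to a one-way distance of c * TOF / 2.

```python
# Minimal sketch of the time-to-distance conversion implied above.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_distance_m(tof_s: float) -> float:
    """One-way distance for a measured round-trip time of flight."""
    return C * tof_s / 2.0

# A 1 microsecond round trip corresponds to an object roughly 150 m away.
d = tof_to_distance_m(1e-6)
```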

[0091] Master controller 390 is configured to generate a measurement command signal 391 (e.g., pulse command signal) that is communicated to receiver IC 350 of integrated lidar device 330. Measurement command signal 391 is a digital signal generated by master controller 390. Thus, the timing of measurement command signal 391 is determined by a clock associated with master controller 390. In some embodiments, the measurement command signal 391 is directly used to trigger generation of light signal 362 by illumination driver IC 352 and data acquisition by receiver IC 350. However, in some embodiments, illumination driver IC 352 and receiver IC 350 do not share the same clock as master controller 390. For this reason, precise estimation of time of flight becomes much more computationally tedious when the measurement command signal 391 is directly used to trigger light signal generation (e.g., “transmission” or “transmitting” by lidar device 330) and data acquisition (“reception” or “receiving” by lidar device 330).

[0092] In general, a lidar system includes a number of different lidar devices 330 (e.g., integrated lidar devices) each emitting a beam (e.g., pulsed beam) of light from the lidar device into the surrounding environment and measuring return light reflected from objects in the surrounding environment.

[0093] In these embodiments, master controller 390 communicates a measurement command signal 391 to each different lidar device. In this manner, master controller 390 coordinates the timing of lidar measurements performed by any number of lidar devices. In a further aspect, beam shaping optical components 363 and beam scanning device 364 are in the optical path of the light signal (e.g., pulses) and return light signal (e.g., pulses) associated with each of the integrated lidar devices. In this manner, beam scanning device 364 directs each light signal and return light signal of lidar system 300.

[0094] In the depicted embodiment, receiver IC 350 receives measurement command signal 391 and generates a measurement trigger signal VTRG 351 (e.g., pulse trigger signal) in response to the measurement command signal 391. Measurement trigger signal 351 is communicated to illumination driver IC 352 and directly triggers illumination driver IC 352 to electrically couple light source 360 to a power supply, thereby generating light signal 362. In addition, measurement trigger signal 351 directly triggers data acquisition of the return light signal and associated time of flight calculation. In this manner, measurement trigger signal 351 generated based on the internal clock of receiver IC 350 is used to trigger both light signal generation and return light signal data acquisition. This process facilitates precise synchronization of light signal generation and return light signal data acquisition, which enables precise time of flight calculations by time-to-distance conversion.

[0095] FIG. 3B depicts an illustration of the timing associated with the emission of a light signal from lidar device 330 and capture of the return light signal. For ease of illustration, in the example of FIG. 3B, the light signal 362 includes a single illumination pulse. As depicted in FIG. 3A, a measurement is initiated by the rising edge of measurement trigger signal 351 generated by receiver IC 350. As depicted in FIGS. 3A and 3B, an amplified return signal 381 is received by receiver IC 350. As described herein, a measurement window (e.g., a period of time over which collected return signal data is associated with a particular illumination beam) is initiated by enabling data acquisition at the rising edge of measurement trigger signal 351. Receiver IC 350 controls the duration of the measurement window, Tmeasurement, to correspond with the window of time when return measurement light is expected in response to the emission of the light signal (e.g., a measurement pulse sequence). In some examples, the measurement window is enabled at the rising edge of measurement trigger signal 351 and is disabled at a time corresponding to the time of flight of light over a distance that is approximately twice the range of the lidar system. In this manner, the measurement window is open to collect return light signals from objects adjacent to the lidar system (e.g., negligible time of flight) to objects that are located at the maximum nominal range of the lidar system. In this manner, other light that cannot possibly contribute to a useful return light signal is rejected.
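The measurement-window gating described above can be sketched as follows. This is a minimal illustration under the stated rule (window duration equals the time of flight over roughly twice the nominal range); the function names and the 100 m example range are assumptions.

```python
# Sketch of the measurement-window gating described above: returns that
# arrive after the window closes cannot correspond to objects within the
# nominal range and are rejected.
C = 299_792_458.0  # speed of light in vacuum, m/s

def measurement_window_s(nominal_range_m: float) -> float:
    """Window duration: time of flight over ~2x the nominal range."""
    return 2.0 * nominal_range_m / C

def in_measurement_window(t_after_trigger_s: float,
                          nominal_range_m: float) -> bool:
    """True when a return arrives while the measurement window is open."""
    return 0.0 <= t_after_trigger_s <= measurement_window_s(nominal_range_m)
```

For an assumed 100 m nominal range the window lasts about 667 ns, so a return 500 ns after the trigger is accepted while one arriving at 1 µs is rejected.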

[0096] As depicted in FIG. 3B, amplified return signal 381 includes three return measurement pulses that correspond with the emitted light signal. In general, signal detection is performed on all detected measurement pulses. Further signal analysis may be performed to identify the closest valid signal 381B (i.e., the first valid instance of a return measurement pulse), the strongest signal, and the furthest valid signal 381C (i.e., the last valid instance of a return measurement pulse in the measurement window). Any of these return measurement pulses may correspond to and be reported as potentially valid distance measurements by the lidar system.

[0097] Internal system delays associated with emission of light from the lidar system (e.g., signal communication delays and latency associated with the switching components, energy storage components, and pulsed light emitting device) and delays associated with collecting light and generating signals indicative of the collected light (e.g., amplifier latency, analog-digital conversion delay, etc.) can contribute to errors in the estimation of the time of flight of a measurement pulse of light. Thus, measurement of time of flight based on the elapsed time between the rising edge of the measurement trigger signal 351 and each valid return pulse (e.g., 381B and 381C) can introduce undesirable measurement error. In some embodiments, a calibrated, pre-determined delay time is used to compensate for the electronic delays to arrive at a corrected estimate of the actual optical time of flight. However, the accuracy of a static correction to dynamically changing electronic delays is limited. Although frequent re-calibrations may be used to account for such changes, this re-calibration comes at a cost of computational complexity and may interfere with system up-time.

[0098] In another aspect, receiver IC 350 may measure time of flight based on the time elapsed between the detection of a pulse 381A due to internal cross-talk between the light source 360 and photodetector 370 and a valid return pulse (e.g., 381B or 381C). In this manner, systematic delays can be eliminated from the estimation of time of flight. Pulse 381A is generated by internal cross-talk with effectively no distance of light propagation. Thus, the delay in time from the rising edge of the measurement trigger signal to the detection of pulse 381A captures all of the systematic delays associated with illumination and signal detection. By measuring the time of flight of valid return pulses (e.g., return pulses 381B and 381C) with reference to detected pulse 381A, all of the systematic delays associated with illumination and signal detection are eliminated. As depicted in FIG. 3B, receiver IC 350 estimates the time of flight, TOF1, associated with return pulse 381B and the time of flight, TOF2, associated with return pulse 381C with reference to return pulse 381A.
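The cross-talk-referenced measurement above amounts to subtracting the cross-talk pulse's timestamp from each valid return's timestamp, so the shared systematic delays cancel. A minimal sketch (names and timestamps are illustrative, not from the patent):

```python
# Sketch of the cross-talk-referenced time-of-flight measurement: the
# cross-talk pulse (381A in the text) carries all systematic delays with
# effectively zero propagation distance, so subtracting its timestamp
# from each return's timestamp cancels those delays.

def tofs_relative_to_crosstalk(t_crosstalk_s: float,
                               t_returns_s: list) -> list:
    """Time of flight of each valid return, referenced to the cross-talk pulse."""
    return [t - t_crosstalk_s for t in t_returns_s]

# Returns detected 0.3 us and 1.0 us after the trigger, with the cross-talk
# pulse detected at 0.1 us, yield TOFs of 0.2 us and 0.9 us.
tof1, tof2 = tofs_relative_to_crosstalk(0.1e-6, [0.3e-6, 1.0e-6])
```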

[0099] In some embodiments, the signal analysis is performed entirely by receiver IC 350. In these embodiments, signals 355 communicated from integrated lidar device 330 include an indication of the time of flight determined by receiver IC 350. In some embodiments, signals 356 include digitized segments of return signal 381 generated by receiver IC 350. These raw measurement signal segments are processed further by one or more processors located on board the 3D lidar system, or external to the 3D lidar system, to arrive at another estimate of distance, an estimate of one or more physical properties of the detected object, or a combination thereof.

[0100] FIG. 3C depicts a light emission/collection engine 312 in one embodiment. Light emission/collection engine 312 includes an array 313 of lidar devices (e.g., integrated lidar devices) 330. In some embodiments, each lidar device 330 includes a light emitting component, a light detecting component, and associated control and signal conditioning electronics integrated onto a common substrate (e.g., electrical board).

[0101] Light emitted from each integrated lidar device passes through beam shaping optical assembly 363, which includes beam shaping optical components 316 that collimate the emitted light to generate a light signal 362 projected from the 3D lidar system into the environment. In this manner, an array of light signals 305, each emitted from a different lidar device, is emitted from 3D lidar system 300 as depicted in FIG. 3C. In general, any number of lidar devices can be arranged to simultaneously emit any number of light signals from 3D lidar system 300. Light reflected from an object in the environment due to its illumination by a particular lidar device is collected by beam shaping optical components 316. The collected light passes through beam shaping optical components 316 where it is focused onto the detecting component of the same, particular lidar device. In this manner, collected light associated with the illumination of different portions of the environment by light signals generated by different lidar devices is separately focused onto the detector of each corresponding lidar device.

[0102] FIG. 3D depicts a view of beam shaping optical components 316 in greater detail. As depicted in FIG. 3D, beam shaping optical components 316 include four lens components 316A-D arranged to focus collected light 318 onto each detector of the array 313 of lidar devices 330. In the embodiment depicted in FIG. 3D, light passing through optical assembly 363 is reflected from mirror 324 (e.g., mirror 364) and is directed onto each detector of the array 313 of lidar devices 330. In some embodiments, one or more of the beam shaping optical components 316 is constructed from one or more materials that absorb light outside of a predetermined wavelength range. The predetermined wavelength range may include the wavelengths of light emitted by the array 313 of lidar devices 330. In one example, one or more of the lens components are constructed from a plastic material that includes a colorant additive to absorb light having wavelengths less than the infrared light generated by each of the array 313 of lidar devices 330. In one example, the colorant is Epolight 7276A available from Aako BV (The Netherlands). In general, any number of different colorants can be added to any of the plastic lens components of optical components 316 to filter out undesired spectra.

Some Examples of Lidar Systems with Curved Focal Planes

[0103] FIG. 4 is a schematic diagram of a single channel lidar system 400. In one example, the lidar system 400 includes a lidar arrangement 402 that corresponds to a portion of the lidar device 102 of FIG. 1. For example, the lidar arrangement 402 includes a laser source 404, a lens 406, and a transmit/receive (T/R) interface 408. In one example, the T/R interface 408 includes a mirror 408a. In some examples, the T/R interface 408 includes the mirror 408a and a detector 408b, which may be referred to collectively as a mirror-detector sub-assembly. In other examples, the detector 408b can be included as a separate component (or device). Alternatively, the mirror 408a may be interchanged with a beam splitter component (or device). In some examples, the lidar arrangement 402 may correspond to the lidar systems of FIGS. 2A, 2B, and 2C.

[0104] The laser source 404 is configured to provide a transmit beam (i.e., light) 420 (e.g., emitted light signal 110) to the lens 406. The lens 406 includes one or more micro-optic lenses configured to provide beam shaping functionality for the laser source 404. The mirror 408a passes the transmit beam 420, and reflects received light 422 (e.g., return light signal 114) towards the detector 408b. In one example, the mirror 408a may be configured as a pinhole mirror to pass the transmit beam 420 and reflect the received light 422; however, in other examples, the mirror 408a may include a material and/or coating that provides the selective reflection of light. In some examples, the detector 408b is an avalanche photodiode (APD).

[0105] Because the transmit beam 420 represents a single channel, the lidar arrangement 402 may direct the transmit beam 420 to a system-level lens 450 before the transmit beam 420 is projected into the environment along with transmit beams from other channels. Likewise, the received light 422 may be directed to the lidar arrangement 402 via the lens 450.

[0106] Multiple lidar devices can be arranged to provide an array of lidar devices. FIG. 5A depicts an example lidar device array 500. The lidar device array 500 includes a plurality of lidar devices 502 including a first device 502a, a second device 502b, and a third device 502c. In one example, each of the lidar devices 502 includes components corresponding to the lidar arrangement 402 of FIG. 4; however, in other examples, each of the lidar devices 502 may include components corresponding to two or more instances of the lidar arrangement 402 (i.e., multiple channels in array form).

[0107] The lidar devices 502 are positioned along a curved focal plane 504. In one example, the lidar devices 502 are positioned along the curved focal plane 504 such that each channel is pointing towards the center of the lens 450 (e.g., each channel’s light source is oriented such that the chief rays of the beams it emits pass through the center of the lens 450). By physically pointing each channel towards the center of the lens 450, the lidar array 500 can achieve overall better system performance (e.g., reduce light clipping at the lens 450). However, due to the unique positioning of each lidar device 502, manufacturing the lidar device array 500 may involve individually aligning each lidar device 502 with respect to the system lens 450. Also, the process of aligning each device 502 individually can be expensive and time-consuming. For example, the alignment process for each device may include iteratively tuning the laser source 404 on/off while adjusting the position of the lens 406 and/or the mirror 408a. In other examples, the alignment process may include precise, time-consuming lateral placement of each device 502 with respect to a reference point (e.g., fiducial) on a suitable mounting platform.

[0108] In addition, the unique positioning of each lidar device 502 around the curved focal plane 504 can increase the size of the lidar device array 500 (and the lidar system housing the array 500). For example, FIG. 5B depicts a component-level view of the lidar device array 500. As shown, each lidar device 502 includes a laser source 504, a lens 506, and a T/R interface 508. Due to the positioning of the lidar devices 502, a spacing (e.g., SI) exists between the components of each device. Such component-to-component spacing is undesirable as it increases the size and material cost of the lidar device array 500. In some examples, due to the orientations of the components, each lidar device 502 contains individual (i.e., discrete) components which can increase the cost of each channel.

Some Embodiments of Lidar Systems with Linear (or Flat) Focal Planes

[0109] An improved lidar device and device array are provided herein. In at least one embodiment, at least one component (e.g., laser source, lens, etc.) included in the lidar device is positioned such that the lidar device provides micro-optic beam steering. For example, micro-optic beam steering is provided if a lens (e.g., microlens) of a lidar channel changes the direction of the chief ray of a laser beam passing through the lens by more than a threshold amount. For example, micro-optic beam steering may be provided if the direction of the chief ray of a laser beam exiting the lens deviates from the direction of the chief ray of the laser beam entering the lens by more than 0.1-0.5 degrees, 0.5-1.0 degrees, 1-2 degrees, 2-5 degrees, 5-10 degrees, 10-20 degrees, or more than 20 degrees. For example, the laser and/or the lens (e.g., microlens) of a lidar channel may be positioned (e.g., relative to each other) to provide micro-optic beam steering. In some examples, the micro-optic beam steering allows the lidar device to be included in a device array having a linear (or flat) focal plane arrangement. As such, the size of the lidar device array (and the lidar measurement system) can be reduced. In certain examples, the time and cost per channel alignment of the lidar device array can be greatly reduced by using multi-channel array components that are aligned at the component level.
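The threshold criterion above can be stated as a one-line check. This is a minimal illustrative sketch (function name and default threshold are assumptions; the text gives a family of possible thresholds starting at 0.1 degrees).

```python
# Sketch of the micro-optic beam steering criterion: the chief ray's
# direction change through the channel's microlens exceeds a chosen
# threshold (0.1 degrees here, one of the ranges named in the text).

def is_micro_optic_steering(chief_ray_in_deg: float,
                            chief_ray_out_deg: float,
                            threshold_deg: float = 0.1) -> bool:
    """True when the lens deviates the chief ray by more than the threshold."""
    return abs(chief_ray_out_deg - chief_ray_in_deg) > threshold_deg
```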

[0110] FIG. 6 depicts an example of a lidar device assembly 600 in accordance with some embodiments. In one example, the lidar device assembly 600 includes a lidar device 602 (e.g., an integrated lidar device). The lidar device 602 includes a laser source 604, a lens 606, and a T/R interface 608. In some examples, the laser source 604 corresponds to the laser source 404, the lens 606 corresponds to the lens 406, and the T/R interface 608 corresponds to the T/R interface 408 of the lidar arrangement 402 of FIG. 4.

[0111] In one example, the components of the lidar device 602 are configured to be disposed on a substrate 610. The substrate 610 may be a printed circuit board (PCB). The components may be disposed on a first surface or a second surface of the substrate 610. In the illustrated example, the components are disposed on the first (e.g., top) surface of the substrate 610. In some examples, the components of the lidar device 602 may be disposed on the substrate 610 within a common device package; however, in other examples, the substrate 610 may be an external component on which the lidar device 602 is disposed. The components of the lidar device 602 can be electrically and/or mechanically coupled to the substrate 610.

[0112] The position of each component of the lidar device 602 is offset relative to a reference line 614. In one example, the reference line 614 corresponds to the center (or boresight) line of a system lens (e.g., system lens 450). In other examples, the reference line 614 may correspond to the next adjacent channel (e.g., a predetermined channel boundary or the nearest component of the adjacent channel). In this context, “offset” corresponds to a distance between a component and the reference line 614. In one example, the laser source 604 is positioned with a first offset Aa, the lens 606 is positioned with a second offset Ab, and the T/R interface 608 is positioned with a third offset Ac. In some examples, the offsets Aa-Ac can be scaled to provide micro-optic beam steering of a transmit beam 612 (e.g., the chief ray of the channel). For example, the lidar device 602 can be configured with a first offset Aa that is larger than the second offset Ab and the third offset Ac to steer the transmit beam 612 in a first direction (e.g., a ‘downward’ direction in the illustration of FIG. 6). In one example, the first offset Aa of the laser source 604 may be approximately 506 µm, the second offset Ab of the lens 606 may be approximately 501.8 µm, and the third offset Ac of the T/R interface 608 may be approximately 500 µm. Due to the relationship between the first offset Aa and the second offset Ab, the transmit beam 612 is provided from the laser source 604 to an upper portion of the lens 606, causing the transmit beam 612 to be directed in the downward direction through the T/R interface 608. In some examples, the second offset Ab of the lens 606 and the third offset Ac of the T/R interface 608 may be approximately the same; however, in other examples, the offsets Ab and Ac may be different. In certain examples, the relationship between the second offset Ab and the third offset Ac corresponds to the configuration of the T/R interface 608.
In some examples, the offset Ab is less than the offset Aa to steer the transmit beam 612 toward the reference line 614.
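The geometry above can be approximated with a thin-lens sketch. Only the 506 µm and 501.8 µm offsets come from the text; the 500 µm focal length, the function name, and the thin-lens model itself are assumptions for illustration. A source displaced laterally by d from the lens axis is steered by roughly atan(d / f).

```python
import math

# Rough thin-lens approximation of the micro-optic steering described
# above: the laser source sits slightly off the microlens axis, so the
# collimated beam leaves the lens at a small angle toward the axis.

def steering_angle_deg(source_offset_um: float,
                       lens_offset_um: float,
                       focal_length_um: float) -> float:
    """Approximate chief-ray deflection for a laterally offset source."""
    displacement = source_offset_um - lens_offset_um
    return math.degrees(math.atan2(displacement, focal_length_um))

# With the example offsets (506 um and 501.8 um) and an assumed 500 um
# focal length, the beam is steered by roughly 0.48 degrees.
angle = steering_angle_deg(506.0, 501.8, 500.0)
```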

[0113] As described above, the offsets Aa-Ac can be scaled to provide micro-optic beam steering of the transmit beam 612. In this context, “scaled” is used interchangeably with “adjusted” or “modified.” In some examples, scaling the offsets Aa-Ac includes a proportional adjustment of the offsets. In other words, after adjusting a first offset (e.g., Aa), at least one other offset (e.g., Ab and/or Ac) may be subsequently adjusted to maintain a proportional relationship with the first offset. In other examples, scaling the offsets Aa-Ac includes a disproportionate adjustment of one or more offsets relative to the other offsets.
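The proportional case of the scaling described above can be sketched as follows (the function name is illustrative; the offset values reuse the 506/501.8/500 µm example from FIG. 6).

```python
# Sketch of proportional offset scaling: every offset is multiplied by the
# same factor, preserving the ratios among the laser source, lens, and
# T/R interface offsets. A disproportionate adjustment would instead apply
# a different factor to one or more offsets.

def scale_offsets_um(offsets_um: list, factor: float) -> list:
    """Proportionally scale a set of component offsets."""
    return [offset * factor for offset in offsets_um]

scaled = scale_offsets_um([506.0, 501.8, 500.0], 1.1)
```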

[0114] While the lidar device 602 is described above as having a ‘downward’ beam steering configuration, it should be appreciated that the lidar device 602 can be configured differently. For example, the lidar device 602 may be configured with different component offsets to adjust the beam steering direction. In addition, the components of the lidar device 602 may be disposed on a different (e.g., opposite) side or region of the substrate 610 to change the beam steering direction. Alternatively, the lidar device assembly 600 may be rotated about an axis parallel to the reference line 614 to adjust the beam steering direction.

[0115] FIG. 7A depicts an example lidar device assembly 700a in accordance with aspects described herein. The lidar device assembly 700a includes the lidar device 602 that is configured to be disposed on the substrate 610. In the illustrated example, the components are disposed on the first (e.g., top) surface of the substrate 610; however, in other examples, the components may be disposed on the second (e.g., bottom) surface of the substrate 610. In one example, the lidar device 602 is configured with component offsets such that the components are positioned to transmit or steer the transmit beam 612 in a boresight direction. In other examples, the components of the lidar device 602 of FIG. 7A may be aligned such that the transmit beam overlaps with the reference line 614 without any of the components of the lidar device 602 performing any beam steering.

[0116] As described above, the component offsets Aa-Ac can be scaled relative to the reference line 614 to provide micro-optic beam steering of the transmit beam 612. In one example, the lidar device 602 can be configured with a first offset Aa, a second offset Ab, and a third offset Ac that are all approximately the same to steer the transmit beam 612 in a boresight direction. In one example, the first offset Aa of the laser source 604, the second offset Ab of the lens 606, and the third offset Ac of the T/R interface 608 may each be approximately 0 µm. Due to the relationship between the first offset Aa and the second offset Ab, the transmit beam 612 is provided from the laser source 604 to a center portion of the lens 606, causing the transmit beam 612 to be directed in the boresight direction through the T/R interface 608. As shown, the offsets Aa-Ac can be scaled such that the transmit beam 612 intersects the reference line 614 (e.g., at the center of the system lens).

[0117] Similarly, FIG. 7B depicts an example of a lidar device assembly 700b in accordance with aspects described herein. The lidar device assembly 700b includes the lidar device 602 that is configured to be disposed on the substrate 610. In one example, the lidar device 602 is configured with component offsets such that the components are positioned to transmit or steer the transmit beam 612 in a second direction (e.g., an ‘upward’ direction in the illustration of FIG. 7B).

[0118] As described above, the component offsets Δa-Δc can be scaled to provide micro-optic beam steering of the transmit beam 612. Similar to the configuration shown in FIG. 6, the lidar device 602 can be configured with a first offset Δa that is larger than the second offset Δb and the third offset Δc to steer the transmit beam 612. In one example, the first offset Δa of the laser source 604 may be approximately 506 µm, the second offset Δb of the lens 606 may be approximately 501.8 µm, and the third offset Δc of the T/R interface 608 may be approximately 500 µm. However, because the components are disposed on the opposite side of the reference line 614 (e.g., in a lower region of the substrate 610), the transmit beam 612 is provided from the laser source 604 to a lower portion of the lens 606, causing the transmit beam 612 to be directed in the ‘upward’ direction through the T/R interface 608. In some examples, the second offset Δb of the lens 606 and the third offset Δc of the T/R interface 608 may be approximately the same; however, in other examples, the offsets Δb and Δc may be different. In certain examples, the relationship between the second offset Δb and the third offset Δc corresponds to the configuration of the T/R interface 608. In some examples, the offset Δb is less than the offset Δa to steer the transmit beam 612 toward the reference line 614.

[0119] In some examples, rather than disposing the components in the lower region of the substrate 610, the configuration of the lidar device assembly 700b can be realized by rotating the lidar device assembly 600 of FIG. 6 about an axis parallel to the reference line 614. For example, the lidar device assembly 600 may be rotated 180 degrees, resulting in a mirrored configuration that provides similar performance to the lidar device assembly 700b (beam steering in an upward direction).
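The decenter-based steering described in paragraph [0118] can be illustrated with a paraxial sketch: a source displaced by (Δa − Δb) relative to its collimating lens produces a beam tilt of approximately that decenter divided by the lens focal length. The focal length value below is a hypothetical assumption for illustration only; the text does not specify one.

```python
# Sketch (not from the patent text): paraxial estimate of the micro-optic
# steering angle produced by decentering a laser source relative to its lens.
import math

def steering_angle_rad(source_offset_um: float, lens_offset_um: float,
                       focal_length_um: float) -> float:
    """Small-angle estimate: a source decentered by (Δa - Δb) relative to the
    lens axis is collimated into a beam tilted by about decenter / f."""
    decenter_um = source_offset_um - lens_offset_um
    return math.atan2(decenter_um, focal_length_um)

# Offsets from paragraph [0118]; the 1000 µm focal length is assumed.
angle = steering_angle_rad(506.0, 501.8, focal_length_um=1000.0)
print(math.degrees(angle))  # small downward-toward-axis tilt, in degrees
```

With equal offsets (the boresight case of FIG. 7A), the decenter and hence the tilt are zero, which matches the behavior described in paragraph [0116].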

[0120] In the examples of FIGS. 6, 7A, and 7B, each of the offsets (Δa, Δb, Δc) is illustrated as a distance between a reference line 614 and a component (e.g., 604, 606, 608) of a lidar device. In some embodiments, the offset Δa may specifically refer to the distance between the reference line 614 and the chief ray of the transmit beam 612 at the location where the transmit beam is emitted from the laser source 604. In some embodiments, the offset Δb may specifically refer to the distance between the reference line 614 and the chief ray of the transmit beam 612 at the location where the transmit beam exits the lens 606. In some embodiments, the offset Δc may specifically refer to the distance between the reference line 614 and the chief ray of the transmit beam 612 at the location where the transmit beam exits the T/R interface 608.

[0121] In some examples, because the lidar device 602 is capable of providing micro-optic beam steering, the lidar device 602 may be included in a lidar array having a linear (or flat) focal plane. For example, FIG. 8 depicts a lidar device array 800 in accordance with some embodiments. The lidar device array 800 includes a plurality of lidar devices 802 including a first device 802a, a second device 802b, and a third device 802c. In one example, each of the lidar devices 802 corresponds to the lidar device 602. In some examples, all of the lidar devices 802 are configured to be disposed on a common substrate 810; however, in other examples, each of the lidar devices 802 can be disposed on individual substrates.

[0122] The lidar devices 802 are positioned along a flat focal plane 804. In one example, one or more of the lidar devices 802 include at least one component positioned to steer the channel (e.g., to steer a beam emitted by the channel) towards the center of the lens 450. For example, the first lidar device 802a may be configured as shown in FIG. 6 to steer the transmit beam 812a in a downward direction towards the center of the lens 450. The second lidar device 802b may be configured as shown in FIG. 7A to transmit the transmit beam 812b in a boresight direction towards the center of the lens 450. The third lidar device 802c may be configured as shown in FIG. 7B to steer the transmit beam 812c in an upward direction towards the center of the lens 450. In some examples, the micro-optic beam steering provided by the lidar devices 802a and 802c prevents portions of received light from being clipped (i.e., falling outside the lens 450). As such, by steering each channel towards the center of the lens 450, the lidar array 800 can be implemented with a flat focal plane while maintaining desired (e.g., optimized) system performance and reducing (e.g., minimizing) the clipping (or loss) of light at the lens 450 (as indicated by receive cones 814a, 814b).
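The geometry of paragraph [0122] can be sketched as follows: on a flat focal plane, a device at height y must steer its chief ray by roughly atan(y / L) back toward the system lens center a distance L away. Both the device heights and the lens distance below are hypothetical values chosen for illustration; they are not taken from the text.

```python
# Sketch (assumed geometry): required per-device steering angle so that each
# channel on a flat focal plane aims at the system lens center, as in FIG. 8.
import math

def required_steer_deg(device_height_mm: float, lens_distance_mm: float) -> float:
    """Angle in degrees to aim a channel at the lens center.
    Negative means steering downward (device above the axis)."""
    return -math.degrees(math.atan2(device_height_mm, lens_distance_mm))

# Devices above, on, and below the boresight axis (cf. 802a, 802b, 802c);
# the 5 mm heights and 20 mm lens distance are assumptions.
for height_mm in (5.0, 0.0, -5.0):
    print(required_steer_deg(height_mm, 20.0))
```

The on-axis device (cf. 802b) needs no steering, while the symmetric off-axis devices need equal and opposite tilts, matching the downward/boresight/upward arrangement described above.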

[0123] In some examples, the flat focal plane arrangement of the array 800 allows for a linear channel alignment. As such, the active process of aligning (and/or calibrating) the mirror and/or the lens of the devices 802 may be simplified considerably. In certain examples, the simplified alignment process can reduce the amount of time needed for aligning (and/or calibrating) the array 800, which can decrease manufacturing costs.

[0124] While the above examples describe one lidar channel per device, it should be appreciated that each of the lidar devices may be configured as multi-channel devices. For example, the first lidar device 802a may be configured to steer two or more transmit beams in a downward direction towards the center of the lens 450. Likewise, the second lidar device 802b may be configured to steer or transmit two or more transmit beams in a boresight direction towards the center of the lens 450. Similarly, the third lidar device 802c may be configured to steer two or more transmit beams in an upward direction towards the center of the lens 450. In some examples, all channels within each device 802 may have the same component offset configuration; however, in other examples, at least one channel within each device 802 may have a slightly different component offset configuration than other channels within the same device 802.

[0125] In one example, given that the lidar devices 802 can be included in a lidar array having a linear (or flat) focal plane, the size and cost of the lidar array (and the system 100) may be reduced by consolidating the lidar devices 802 into a multi-channel lidar device.

[0126] FIG. 9 is a schematic diagram of a multi-channel lidar system 900. In one example, the lidar system 900 is similar to the lidar system 400 of FIG. 4, except the lidar system 900 includes a lidar arrangement 902 having array components. For example, the lidar arrangement 902 includes a laser source array 904, a lens array 906, and a T/R interface array 908. In one example, the T/R interface array 908 includes a single mirror; however, in other examples, the T/R interface array 908 may include an array of mirrors rather than a single mirror. In some examples, the T/R interface array 908 also includes an array of detectors; however, in other examples, the array of detectors can be provided as a separate array component (or device).

[0127] The laser source array 904 is configured to provide a plurality of transmit beams (i.e., light) 920 to the lens array 906. The lens array 906 is an array of micro-optic lenses configured to provide beam shaping and steering functionality at the laser source array 904. The mirror(s) of the T/R interface array 908 pass(es) the plurality of transmit beams 920 and reflects received light 922 towards the detectors of the detector array. In some examples, the detectors of the detector array are avalanche photodiodes (APDs).

[0128] The lidar arrangement 902 is configured to provide multiple lidar channels. In one example, each transmit beam of the plurality of transmit beams 920 corresponds to a single channel and is provided from the laser source array 904 to a corresponding lens of the lens array 906. Likewise, each detector of the detector array may correspond to a single channel. In one example, the received light 922 shown in FIG. 9 represents received light for a single channel. The lidar arrangement 902 may direct the plurality of transmit beams 920 to a system-level lens 950 before the plurality of transmit beams 920 are projected into the environment. Likewise, the received light 922 may be directed to the lidar arrangement 902 via the lens 950.

[0129] In one example, the lidar arrangement 902 can be implemented as a lidar device having component pitches (e.g., scaled component pitches) that provide micro-optic beam steering to direct the plurality of transmit beams towards the center of the lens 950. For example, FIG. 10 depicts a lidar device assembly 1000 in accordance with aspects described herein. In one example, the lidar device assembly 1000 includes a lidar device (e.g., integrated lidar device) 1002. The lidar device 1002 includes a laser source array 1004, a lens array 1006, and a T/R interface array 1008. In some examples, the laser source array 1004 corresponds to the laser source array 904, the lens array 1006 corresponds to the lens array 906, and the T/R interface array 1008 corresponds to the T/R interface array 908 of the lidar arrangement 902 of FIG. 9. It should be appreciated that the lidar device 1002 may be configured to provide any number of channels (e.g., 2, 4, 8, 16, 24, 32, 48, 50, 64, 128, 2-128, or more channels). For example, each channel of the lidar device 1002 may correspond to one laser source included in the laser source array 1004, one lens included in the lens array 1006, and one T/R interface included in the T/R interface array 1008. In some examples, each of the components 1004, 1006, and 1008 can be provided (and fabricated) as monolithic arrays.

[0130] In one example, the components of the lidar device 1002 are configured to be disposed on a substrate 1010. The substrate 1010 may be a printed circuit board (PCB). The components may be disposed on a first surface or a second surface of the substrate 1010. In the illustrated example, the components are disposed on the first (e.g., top) surface of the substrate 1010. In some examples, the components of the lidar device 1002 may be disposed on the substrate 1010 within a common device package; however, in other examples, the substrate 1010 may be an external component on which the lidar device 1002 is disposed. The components of the lidar device 1002 can be electrically and/or mechanically coupled to the substrate 1010.

[0131] Similar to the examples described above, each device included in the array components 1004, 1006, and 1008 is positioned with an offset relative to a reference line 1014. For example, the reference line 1014 may correspond to the center (or boresight) line of a system lens (e.g., system lens 950). As described above, the device offsets can be scaled to provide micro-optic beam steering of a plurality of transmit beams 1012. In some examples, the components 1004, 1006, and 1008 can be configured with scaled device offsets to steer the plurality of transmit beams 1012 in a convergent direction (e.g., towards the center of the lens 950). In other words, the offset of each device (within a respective array component) can be scaled to adjust the pitch between devices included in each of the components 1004, 1006, and 1008 such that the plurality of transmit beams 1012 converge. For example, the devices included in the components 1004, 1006, and 1008 can be configured such that one or more channels of the lidar device 1002 have a configuration similar to the lidar device 602 of FIG. 6 to steer one or more transmit beams in a downward direction (e.g., transmit beams 1012a, 1012b). Likewise, the devices included in the components 1004, 1006, and 1008 can be configured such that one or more channels of the lidar device 1002 have a configuration similar to the lidar device 602 of FIG. 7A to transmit or steer one or more transmit beams in a boresight direction (e.g., transmit beam 1012c). In addition, the devices included in the components 1004, 1006, and 1008 can be configured such that one or more channels of the lidar device 1002 have a configuration similar to the lidar device 602 of FIG. 7B to steer one or more transmit beams in an upward direction (e.g., transmit beams 1012d, 1012e).

[0132] While the lidar device 1002 is described above as having a convergent beam steering configuration, it should be appreciated that the lidar device 1002 can be configured differently. For example, the devices of each channel included in the components 1004, 1006, and 1008 may be configured with different offsets (or pitches) to adjust the beam steering direction. In addition, the components of the lidar device 1002 may be disposed on a different (e.g., opposite) side or region of the substrate 1010 to change the beam steering direction. Alternatively, the lidar device assembly 1000 may be rotated about an axis parallel to the reference line 1014 to adjust the beam steering direction. In certain examples, rather than providing a single lidar device 1002 having many channels (e.g., 128), it may be beneficial to include several lidar devices 1002 in an array (e.g., 4 devices having 32 channels each). As such, the lidar device 1002 may be included in a lidar array having a linear (or flat) focal plane. For example, the lidar device 1002 may be included in the lidar device array 800 of FIG. 8 (e.g., as the lidar devices 802). Each of the lidar devices 1002 included in the array may be configured with different beam steering directions (e.g., upward, downward, boresight, etc.). As described above, the flat focal plane arrangement of the array 800 allows for a linear channel alignment. As such, the active process of aligning (or calibrating) the mirror and/or the lens of the devices 1002 may be simplified considerably. In addition, because the components are fabricated as monolithic arrays, the channels of each device 1002 can be aligned at the component level (rather than the channel level). For example, if a lidar device array includes four lidar devices having 32 channels each, the device array may be aligned by aligning the four lidar devices rather than 128 individual channels.

[0133] As described above, the lidar devices provided herein may enable the size of the device arrays (and corresponding lidar systems) to be reduced. For example, the flat focal plane arrangement of the lidar device array 800 can enable smaller, more efficient layouts compared to lidar device arrays having curved focal planes causing excessive device spacing (e.g., lidar device array 500 of FIG. 5B). Likewise, the use of integrated lidar devices including array components (e.g., device 1002) may allow multiple channels to be provided in a compact form factor.

[0134] In addition, the compact and efficient nature of the lidar devices and device arrays provided herein may enable more channels to be included in lidar systems. As such, these lidar devices and device arrays can be used to improve the resolution of lidar systems while reducing (or maintaining) the size of the system. In some examples, the lidar devices 602, 1002 may be advantageously used in space-constrained lidar applications. For example, the lidar devices 602, 1002 may be used in small, handheld lidar devices, such as a device included in a mobile phone (or a device that plugs into a mobile phone).

[0135] FIG. 11 illustrates a flowchart of a method 1100 suitable for implementation by a lidar system as described herein. In some embodiments, lidar system 100 is operable in accordance with method 1100 illustrated in FIG. 11. However, in general, the execution of method 1100 is not limited to the embodiments of lidar system 100 described with reference to FIG. 1. In some examples, a lidar system including the lidar devices and/or the device arrays provided herein may be operable in accordance with method 1100 illustrated in FIG. 11. These illustrations and corresponding explanation are provided by way of example as many other embodiments and operational examples may be contemplated.

[0136] In block 1101, a plurality of light signals (e.g., pulsed beams of light) are emitted into a 3D environment from a plurality of light sources (e.g., pulsed light sources). Each of the plurality of light signals is incident on a beam scanning device.

[0137] In block 1102, each of the plurality of light signals is redirected in a different direction based on an optical interaction between each light signal and the beam scanning device.

[0138] In block 1103, an amount of return light reflected from the 3D environment illuminated by each light signal is redirected based on an optical interaction between each amount of return light and the beam scanning device.

[0139] In block 1104, each amount of return light reflected from the 3D environment illuminated by each light signal is detected (e.g., by a photosensitive detector).

[0140] In block 1105, an output signal indicative of the detected amount of return light associated with each light signal is generated.

[0141] In block 1106, a distance between the plurality of light sources and one or more objects in the 3D environment is determined based on a difference between a time when each light signal is emitted from the lidar device and a time when each photosensitive detector detects an amount of light reflected from the object illuminated by the light signal.
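The range calculation in block 1106 reduces to a simple time-of-flight formula: distance is half the round-trip time multiplied by the speed of light. A minimal sketch (timestamps and values are illustrative, not from the text):

```python
# Sketch of the time-of-flight range calculation described in block 1106.
C_M_PER_S = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(t_emit_s: float, t_detect_s: float) -> float:
    """Return the one-way distance in meters from emit/detect timestamps.
    The light travels to the object and back, hence the division by 2."""
    round_trip_s = t_detect_s - t_emit_s
    return C_M_PER_S * round_trip_s / 2.0

# A round trip of about 667 ns corresponds to roughly 100 m of range.
print(range_from_tof(0.0, 667e-9))
```

In practice each channel's detector timestamp is paired with its own emission time, so the same formula is evaluated once per light signal.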

[0142] FIG. 12A illustrates a method 1200 for manufacturing a lidar device array in accordance with aspects described herein. In one example, the lidar device array corresponds to the lidar device array 800 including lidar devices 802 of FIG. 8.

[0143] In block 1202, a laser source is provided. In one example, the laser source (e.g., laser source 604) is configured to provide a transmit beam (e.g., configured to emit a laser beam). In some examples, providing the laser source includes positioning the laser source with a first offset relative to a reference line (e.g., reference line 614).

[0144] In block 1204, a T/R interface is provided. In one example, the T/R interface (e.g., T/R interface 608) is configured to pass the transmit beam and reflect received light towards a detector. In some examples, providing the T/R interface includes positioning the T/R interface with a second offset relative to the reference line.

[0145] In block 1206, a lens is provided. In one example, the lens (e.g., lens 606) is positioned between the laser source and the T/R interface. In some examples, providing the lens includes positioning the lens with a third offset relative to the reference line.

[0146] In one example, the components provided in blocks 1202-1206 correspond to a lidar device 802 configured to be included in the lidar device array 800. In some examples, the components correspond to a single lidar channel. In certain examples, the components of the lidar device 802 are configured to be disposed on a substrate (e.g., substrate 810). In some examples, the positions of the components (relative to each other, relative to components of other lidar devices in the lidar device array, or relative to a reference line) correspond to the position of the lidar device 802 in the lidar device array 800. For example, the lidar device 802 may have a position in an upper region of the substrate (e.g., lidar device 802a), a middle region of the substrate (e.g., lidar device 802b), or a lower region of the substrate (e.g., lidar device 802c). It should be appreciated that the regions described above are provided merely as examples and that the lidar device 802 may be positioned in different regions of the substrate.

[0147] The component offsets of the lidar device 802 can be scaled based on the position of the device in the lidar device array 800. In one example, the component offsets can be scaled such that the device steers transmitted light towards the center of a system lens (e.g., lens 450). For example, if positioned in the upper region of the substrate, the component offsets can be scaled such that the device steers transmitted light in a first direction (e.g., a downward direction), similar to the lidar device 602 of FIG. 6. If positioned in the middle region of the substrate, the component offsets can be scaled such that the device transmits or steers transmitted light in a second direction (e.g., a boresight direction), similar to the lidar device 602 of FIG. 7A. Likewise, if positioned in the lower region of the substrate, the component offsets can be scaled such that the device steers transmitted light in a third direction (e.g., an upward direction), similar to the lidar device 602 of FIG. 7B. Again, it should be appreciated that the regions described above are provided merely as examples and that the component offsets of the lidar device 802 may be scaled differently based on the configuration of the lidar device array 800. In some examples, the component offsets of the lidar device 802 can be scaled (e.g., adjusted) using one or more precision die bonder machines.

[0148] In block 1208, it is determined if more lidar devices 802 are to be added to the lidar device array 800. In response to a determination that more lidar devices 802 are being added, the method returns to block 1202 and the lidar device configuration process repeats. Otherwise, the method continues to block 1210.

[0149] In block 1210, the lidar devices 802 are aligned to the linear focal plane of the lidar device array 800. In some examples, the alignment process includes the use of active alignment equipment (e.g., laser energized equipment) and/or passive alignment equipment (e.g., camera vision equipment). For example, each lidar device 802 may be aligned using an active alignment process that includes energizing the laser source and measuring energy (e.g., light) associated with the transmit beam at the center of the system lens 450. In such a process, the alignment of a lidar device 802 may be iteratively adjusted until the energy detected in the transmit beam at the center of the system lens 450 meets or exceeds a threshold energy level. As described above, because the scaled component offsets of the lidar devices 802 provide micro-optic beam steering, the lidar devices 802 can be arranged with a linear (or flat) focal plane (e.g., focal plane 804). As such, the process of aligning (and/or calibrating) the mirror and/or the lens of each lidar device 802 can be simplified considerably. In certain examples, the simplified alignment process can reduce the amount of time and cost needed for aligning (and/or calibrating) the lidar device array 800. In some examples, once the alignment is set, the lidar devices 802 and/or the lidar device components can be coupled, bonded, attached, and/or fastened to the substrate.
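The iterative active-alignment loop of block 1210 can be sketched as a measure-adjust cycle that terminates once the energy at the lens center meets the threshold. The `measure_energy` and `adjust_position` callables below are hypothetical stand-ins for the laser-energized measurement and actuation equipment described above; they are not part of the patent's disclosure.

```python
# Sketch of the iterative active-alignment process described in block 1210.
def align_device(measure_energy, adjust_position, threshold, max_iters=100):
    """Nudge a device until the measured energy at the system lens center
    meets or exceeds the threshold. Returns True once aligned."""
    for _ in range(max_iters):
        energy = measure_energy()
        if energy >= threshold:
            return True  # alignment set; the device can now be bonded
        adjust_position(energy)
    return False  # threshold never met; flag the device for rework

# Toy stand-in for the hardware: each nudge halves the residual misalignment,
# and measured energy rises as the misalignment shrinks.
state = {"misalignment": 10.0}
def measure(): return 1.0 / (1.0 + state["misalignment"])
def adjust(_): state["misalignment"] *= 0.5
print(align_device(measure, adjust, threshold=0.99))
```

The loop structure mirrors the described process: measure, compare against the threshold, adjust, and repeat, with a bounded iteration count so a misbehaving channel fails out rather than looping forever.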

[0150] FIG. 12B illustrates another method 1250 for manufacturing a lidar device array in accordance with aspects described herein. In one example, the lidar device array includes multiple instances of the lidar device 1002 of FIG. 10.

[0151] In block 1252, a laser source array is provided. In one example, the laser source array (e.g., laser source array 1004) includes a plurality of laser sources configured to provide a plurality of transmit beams (e.g., configured to emit a plurality of laser beams). In some examples, providing the laser source array includes positioning each laser source of the plurality of laser sources with a respective offset of a first plurality of offsets relative to a reference line (e.g., reference line 1014). A first pitch between the laser sources included in the laser source array can be scaled to adjust the first plurality of offsets (and the positions of the laser sources). In some examples, the laser source array may be fabricated or pre-assembled on a substrate with the laser sources in the above-described positions (e.g., positioned with the above-described offsets and/or pitch). In some examples, providing the laser source array includes positioning the laser source array (e.g., positioning a first laser source of the laser source array) at a first offset relative to a reference line.

[0152] In block 1254, a T/R interface array is provided. In one example, the T/R interface array (e.g., T/R interface array 1008) includes a plurality of T/R interfaces configured to pass the plurality of transmit beams and reflect received light towards a plurality of detectors. In some examples, providing the T/R interface array includes positioning each T/R interface of the plurality of T/R interfaces with a respective offset of a second plurality of offsets relative to the reference line. A second pitch between the T/R interfaces included in the T/R interface array can be scaled to adjust the second plurality of offsets (and the positions of the T/R interfaces). In some examples, the T/R interface array may be fabricated or pre-assembled on a substrate with the T/R interfaces in the above-described positions (e.g., positioned with the above-described offsets and/or pitch). In some examples, providing the T/R interface array includes positioning the T/R interface array (e.g., positioning a first T/R interface of the T/R interface array) at a second offset relative to the reference line.

[0153] In block 1256, a lens array is provided. In one example, the lens array (e.g., lens array 1006) is positioned between the plurality of laser sources and the plurality of T/R interfaces. In some examples, providing the lens array includes positioning each lens of the lens array with a respective offset of a third plurality of offsets relative to the reference line. A third pitch between the lenses included in the lens array can be scaled to adjust the third plurality of offsets (and the positions of the lenses). In some examples, the lens array may be fabricated or pre-assembled on a substrate with the lenses in the above-described positions (e.g., positioned with the above-described offsets and/or pitch). In some examples, providing the lens array includes positioning the lens array (e.g., positioning a first lens of the lens array) at a third offset relative to the reference line.

[0154] In one example, the array components provided in blocks 1252-1256 correspond to a lidar device 1002 configured to be included in the lidar device array. In one example, the array components correspond to multiple lidar channels. In some examples, the array components 1004, 1006, and 1008 of the lidar device 1002 are configured to be disposed on a substrate (e.g., substrate 1010). In some examples, the positions of the array components correspond to the position of the lidar device 1002 in the lidar device array. For example, the lidar device 1002 may have a position in an upper region of the substrate (e.g., lidar device 802a), a middle region of the substrate (e.g., lidar device 802b), or a lower region of the substrate (e.g., lidar device 802c). It should be appreciated that the regions described above are provided merely as examples and that the lidar device 1002 may be positioned in different regions of the substrate.

[0155] The device offsets (or pitches) of the array devices included in each array component 1004, 1006, and 1008 can be scaled based on the position of the lidar device 1002 in the lidar device array or based on the position of the lidar device 1002 relative to a reference line (e.g., a reference line through the center of the system lens). In one example, the device offsets (or pitches) of the array devices can be scaled such that the lidar device 1002 transmits light towards the center of a system lens (e.g., lens 450). For example, if positioned in the upper region of the substrate, the device offsets (or pitches) of the array devices can be scaled such that each channel of the lidar device 1002 steers transmitted light in a first direction (e.g., a downward direction), similar to the lidar device 602 of FIG. 6. If positioned in the middle region of the substrate, the device offsets (or pitches) of the array devices can be scaled such that each channel of the lidar device 1002 transmits or steers transmitted light in a second direction (e.g., a boresight direction), similar to the lidar device 602 of FIG. 7A. Likewise, if positioned in the lower region of the substrate, the device offsets (or pitches) of the array devices can be scaled such that each channel of the lidar device 1002 steers transmitted light in a third direction (e.g., an upward direction), similar to the lidar device 602 of FIG. 7B. Again, it should be appreciated that the regions described above are provided merely as examples and that the device offsets (or pitches) of the array devices may be scaled differently based on the configuration of the lidar device 1002 or the lidar device array. The scaling of device offsets (or pitches) is described in greater detail below with respect to FIGS. 13A, 13B. In some examples, the device offsets (or pitches) of the array devices can be scaled (e.g., adjusted) using one or more precision die bonder machines.

[0156] In block 1258, it is determined if more lidar devices 1002 are to be added to the lidar device array. In response to a determination that more lidar devices 1002 are being added, the method returns to block 1252 and the lidar device configuration process repeats. Otherwise, the method continues to block 1260.

[0157] In block 1260, the lidar devices 1002 are aligned to the linear focal plane of the lidar device array. In some examples, the alignment process includes the use of active alignment equipment (e.g., laser energized equipment) and/or passive alignment equipment (e.g., camera vision equipment). For example, each lidar device 1002 may be aligned using an active alignment process that includes energizing at least one laser source of the plurality of laser sources and measuring energy (e.g., light) associated with at least one transmit beam at the center of the system lens 450. In such a process, the alignment of a lidar device 1002 and/or the alignment of the array components (1004, 1006, 1008) of the lidar device 1002 may be iteratively adjusted until the energy detected in the transmit beam at the center of the system lens 450 meets or exceeds a threshold energy level. For example, the alignment of the lidar device 1002 may be adjusted with respect to the reference line 1014 or with respect to the system lens 450. In some examples, the alignment of the array components (1004, 1006, 1008) may be adjusted with respect to each other, with respect to the reference line 1014, or with respect to the system lens 450. As described above, because the scaled pitches of the lidar devices 1002 provide micro-optic beam steering, the lidar devices 1002 can be arranged with a linear (or flat) focal plane (e.g., focal plane 804). As such, the active process of aligning (and/or calibrating) the mirror and/or the lens of each lidar device can be simplified considerably. In addition, because the lidar devices 1002 include array components, the lidar devices 1002 can be aligned at the component level, rather than at the channel level. In certain examples, the simplified alignment process can reduce the amount of time and cost needed for aligning (and/or calibrating) the lidar device array. In some examples, once the alignment is set, the lidar device and/or the array components 1004, 1006, and 1008 of the lidar device 1002 can be coupled, bonded, attached, and/or fastened to an underlying substrate.

[0158] Referring to FIG. 13 A, an example configuration of the lidar device 1002 including array components with scaled device offsets is shown. In the illustrated example, the lidar device 1002 includes three lidar channels. A first channel (A) includes a first laser source device 1004a of the laser source array 1004, a first lens device 1006a of the lens array 1006, and a first T/R interface device 1008a of the T/R interface array 1008. Likewise, a second channel (B) includes a second laser source device 1004b of the laser source array 1004, a second lens device 1006b of the lens array 1006, and a second T/R interface device 1008b of the T/R interface array 1008; and a third channel (C) includes a third laser source device 1004c of the laser source array 1004, a third lens device 1006c of the lens array 1006, and a third T/R interface device 1008c of the T/R interface array 1008. As shown, the offsets of the devices can be scaled relative to the reference line 1014 to steer the transmit beams 1012 (e.g., the chief rays of each channel). For example, the first laser source device 1004a of the first channel (A) is configured with a first offset Aa to steer, in conjunction with the devices 1006a, 1008a, the first transmit beam 1012a. Likewise, the second laser source device 1004b of the second channel (B) is configured with a second offset Ab to steer, in conjunction with the devices 1006b, 1008b, the second transmit beam 1012b; and the third laser source device 1004c of the third channel (C) is configured with a third offset Ac to steer, in conjunction with the devices 1006c, 1008c, the third transmit beam 1012c. In some examples, the device offsets may be scaled such that the pitches between the devices included in each of the array components 1004, 1006, and 1008 are substantially equal. 
For example, the device offsets Δa-Δc can be scaled to set a first pitch P1 for the array component 1004 corresponding to the spacings between the laser source devices 1004a-1004c. Likewise, the device offsets of the lens devices 1006 can be scaled to set a second pitch P2 for the array component 1006; and the device offsets of the T/R interface devices 1008 can be scaled to set a third pitch P3 for the array component 1008. In other examples, the device offsets (or pitches) may be configured differently to change the beam steering angles of the transmit beams 1012. For example, the device offsets can be scaled such that the pitches between devices of the same array component are different. In other examples, the device offsets (or pitches) can be scaled to transmit or steer the transmit beams 1012 in different directions (e.g., boresight, upward, etc.).
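The offset-based steering described above is consistent with the simple paraxial lens relation: a source displaced laterally by an offset Δ from the optical axis of its lens has its chief ray steered by roughly θ = arctan(Δ/f), where f is the lens focal length. The sketch below illustrates that relation under stated assumptions; the focal length, offsets, and the relation itself are illustrative and are not taken from the application.

```python
import math

def steering_angle_deg(offset_m: float, focal_length_m: float) -> float:
    """Paraxial chief-ray steering angle for a source laterally offset from its lens."""
    return math.degrees(math.atan2(offset_m, focal_length_m))

# Illustrative values (not from the application): a 1 mm focal-length micro-lens
# with channel offsets scaled symmetrically about the reference line.
focal_length = 1.0e-3
for name, offset in (("A", -20e-6), ("B", 0.0), ("C", 20e-6)):
    angle = steering_angle_deg(offset, focal_length)
    print(f"channel {name}: offset {offset * 1e6:+.0f} um -> {angle:+.3f} deg")
```

Scaling every channel's offset by a common factor scales all steering angles together, which is the mechanism the paragraph above uses to make the beams converge, diverge, or stay parallel.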

[0159] While the example illustrated in FIG. 13A depicts the transmit beams 1012 being steered to converge at a common point (e.g., the center of system lens 950), it should be appreciated that in other examples the transmit beams 1012 can be steered differently. For example, the lidar device 1002 may be configured such that the transmit beams 1012 diverge away from a common point. In other examples, the transmit beams 1012 can be steered such that the beams are transmitted in a parallel manner. Referring to FIG. 13B, another example configuration of the lidar device 1002 including array components with scaled device pitches is shown. In the illustrated example, the lidar device 1002 includes seven lidar channels, where each channel includes a device from the laser source array 1004, the lens array 1006, and the T/R interface array 1008. As shown, the device pitches of the components 1004, 1006, and 1008 can be scaled to steer the transmit beams 1012. For example, the laser source array 1004 is configured with a first device pitch P1, the lens array 1006 is configured with a second device pitch P2, and the T/R interface array 1008 is configured with a third device pitch P3 to steer the transmit beams 1012 in a convergent direction (e.g., towards the center of the system lens 950). In one example, the first pitch P1 is scaled to be larger than the second pitch P2 and the third pitch P3 to steer the transmit beams 1012 in the convergent direction. In some examples, the second pitch P2 is larger than the third pitch P3; however, in other examples, the second pitch P2 and the third pitch P3 may be substantially the same. As described above, the device pitches P1-P3 may be set by scaling the device offsets relative to the reference line 1014.

[0160] As described above, the pitches P1-P3 can be scaled to provide micro-optic beam steering of the transmit beams 1012. In this context, “scaled” is used interchangeably with “adjusted,” “selected,” or “modified.” In some examples, scaling the pitches P1-P3 includes a proportional adjustment of the pitches. In other words, after adjusting a pitch (e.g., P1), at least one other pitch (e.g., P2 and/or P3) may be subsequently adjusted to maintain a proportional relationship with the adjusted pitch. In other examples, scaling the pitches P1-P3 includes a disproportionate adjustment of one or more pitches relative to the other pitches.

[0161] Referring again to FIG. 1, control & data acquisition module 108, data analysis & interpretation module 109, or any external computing system may include, but is not limited to, a personal computer system, mainframe computer system, workstation, image computer, parallel processor, or any other device known in the art. In general, the term “computing system” may be broadly defined to encompass any device having one or more processors, which execute instructions from a memory medium.

[0162] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0163] As described above, an improved integrated lidar device and device array is provided herein. In at least one embodiment, the pitch of at least one component included in the lidar device is scaled to provide micro-optic beam steering. In some examples, the micro-optic beam steering allows the lidar device to be included in a device array having a linear (or flat) focal plane arrangement. As such, the size of the lidar device array (and the lidar measurement system) can be reduced. In certain examples, the time and cost per channel alignment of the lidar device array can be reduced by using multi-channel array components that are aligned at the component level.

Some Examples of Continuous Wave (CW) Lidar Systems

[0164] As discussed above, some lidar systems may use a continuous wave (CW) laser to detect the range and/or velocity of targets, rather than pulsed TOF techniques. Such systems include continuous wave (CW) coherent lidar systems and frequency modulated continuous wave (FMCW) coherent lidar systems. For example, any of the lidar systems 100, 202, 250, and 270 described above can be configured to operate as a CW coherent lidar system or an FMCW coherent lidar system.

[0165] Lidar systems configured to operate as CW or FMCW systems can avoid the eye safety hazards commonly associated with pulsed lidar systems (e.g., hazards that arise from transmitting optical signals with high peak power). In addition, coherent detection may be more sensitive than direct detection and can offer better performance, including single-pulse velocity measurement and immunity to interference from solar glare and other light sources, including other lidar systems and devices.

[0166] FIG. 14 illustrates an exemplary CW coherent lidar system 1400 configured to determine the radial velocity (or speed) of a target. Lidar system 1400 includes a laser 1402 configured to produce a laser signal which is provided to a splitter 1404. The laser 1402 may provide a laser signal having a substantially constant laser frequency.

[0167] In one example, the splitter 1404 provides a first split laser signal Tx1 to a direction selective device 1406, which provides (e.g., forwards) the signal Tx1 to a scanner 1408. In some examples, the direction selective device 1406 is a circulator. The scanner 1408 uses the first laser signal Tx1 to transmit light emitted by the laser 1402 and receives light reflected by the target 1410 (e.g., “reflected light” or “reflections”). The reflected light signal Rx is provided (e.g., passed back) to the direction selective device 1406. The second laser signal Tx2 (provided by the splitter 1404) and the reflected light signal Rx are provided to a coupler (also referred to as a mixer) 1412. The mixer 1412 may use the second laser signal Tx2 as a local oscillator (LO) signal and mix it with the reflected light signal Rx. The mixer 1412 may provide the mixed optical signal to a differential photodetector 1414, which may generate an electrical signal representing the beat frequency f_beat of the mixed optical signals, where f_beat = |f_Tx2 − f_Rx| (the absolute value of the difference between the frequencies of the mixed optical signals). In some embodiments, the current produced by the differential photodetector 1414 based on the mixed light may have the same frequency as the beat frequency f_beat. The current may be converted to a voltage by an amplifier (e.g., a transimpedance amplifier (TIA)), which may be provided (e.g., fed) to an analog-to-digital converter (ADC) 1416 configured to convert the analog voltage signal to digital samples for a target detection module 1418. The target detection module 1418 may be configured to determine (e.g., calculate) the radial velocity of the target 1410 based on the digital sampled signal with the beat frequency f_beat.
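The mixing step above can be illustrated numerically: multiplying the LO tone with the return tone produces a component at the difference frequency |f_Tx2 − f_Rx|, which an FFT of the digitized samples recovers. The tone frequencies and sample rate below are illustrative values, not parameters from the application.

```python
import numpy as np

fs = 1.0e6  # sample rate (Hz), illustrative
n = 4096
t = np.arange(n) / fs
f_lo, f_rx = 100.0e3, 112.0e3  # assumed LO and Doppler-shifted return frequencies

# Mixing the two tones yields components at the difference (beat) and sum frequencies.
mixed = np.cos(2 * np.pi * f_lo * t) * np.cos(2 * np.pi * f_rx * t)
spectrum = np.abs(np.fft.rfft(mixed * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

# Search below 50 kHz to pick out the difference term rather than the sum term.
band = freqs < 50.0e3
f_beat = freqs[band][np.argmax(spectrum[band])]
print(f"recovered beat frequency: {f_beat / 1e3:.1f} kHz")  # ~12 kHz = |112 - 100| kHz
```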

[0168] In one example, the target detection module 1418 may identify Doppler frequency shifts using the beat frequency f_beat and determine the radial velocity of the target 1410 based on those shifts. For example, the radial velocity of the target 1410 can be calculated using the following relationship:

v_t = (λ · f_d) / 2

where f_d is the Doppler frequency shift, λ is the wavelength of the laser signal, and v_t is the radial velocity of the target 1410. In some examples, the direction of the target 1410 is indicated by the sign of the Doppler frequency shift f_d. For example, a positive signed Doppler frequency shift may indicate that the target 1410 is traveling towards the system 1400 and a negative signed Doppler frequency shift may indicate that the target 1410 is traveling away from the system 1400.
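The Doppler-to-velocity relation above can be sketched as follows; the 1550 nm wavelength and the Doppler shift are illustrative values only, not parameters from the application.

```python
def radial_velocity(doppler_shift_hz: float, wavelength_m: float) -> float:
    """v_t = (lambda * f_d) / 2; the sign follows the sign of the Doppler shift."""
    return wavelength_m * doppler_shift_hz / 2.0

# Illustrative: a 1550 nm laser and a +12.9 MHz Doppler shift (target approaching).
v = radial_velocity(12.9e6, 1550e-9)
print(f"radial velocity: {v:.2f} m/s")  # ~10 m/s, toward the system
```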

[0169] In one example, a Fourier Transform calculation is performed using the digital samples from the ADC 1416 to recover the desired frequency content (e.g., the Doppler frequency shift) from the digital sampled signal. For example, a controller (e.g., target detection module 1418) may be configured to perform a Discrete Fourier Transform (DFT) on the digital samples. In certain examples, a Fast Fourier Transform (FFT) can be used to calculate the DFT on the digital samples. In some examples, the Fourier Transform calculation (e.g., DFT) can be performed iteratively on different groups of digital samples to generate a target point cloud.
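The iterative, block-wise DFT processing described above can be sketched as follows: each group of ADC samples is transformed and the peak bin gives that group's frequency estimate. The signal here is synthetic and the sample rate, block size, and frequencies are illustrative assumptions.

```python
import numpy as np

fs = 1.0e6   # sample rate (Hz), illustrative
block = 1024  # samples per DFT, illustrative

# Assumed digitized beat signal whose frequency changes from block to block.
beat_freqs_true = [10e3, 20e3, 30e3]
samples = np.concatenate(
    [np.cos(2 * np.pi * f * np.arange(block) / fs) for f in beat_freqs_true]
)

# One DFT (computed via FFT) per block of samples, as in the iterative
# processing described above; the peak bin is the frequency estimate.
freqs = np.fft.rfftfreq(block, 1 / fs)
estimates = []
for i in range(0, len(samples), block):
    spectrum = np.abs(np.fft.rfft(samples[i:i + block]))
    estimates.append(freqs[np.argmax(spectrum)])
print([f"{f / 1e3:.2f} kHz" for f in estimates])
```

Each estimate is quantized to the DFT bin spacing fs/block (about 977 Hz here); longer blocks or interpolation between bins would refine it.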

[0170] While the lidar system 1400 is described above as being configured to determine the radial velocity of a target, it should be appreciated that the system can be configured to determine the range and/or radial velocity of a target. For example, the lidar system 1400 can be modified to use laser chirps to detect the velocity and/or range of a target.

[0171] Some examples have been described in which a DFT is used to generate points of a point cloud based on a group of samples. However, frequency analysis techniques (e.g., spectrum analysis techniques) other than the DFT may be used to generate points of a point cloud based on a group of samples. Any suitable frequency analysis technique may be used, including, without limitation, Discrete Cosine transform (DCT), Wavelet transform, Auto-Regressive moving average (ARMA), etc.

[0172] FIG. 15 illustrates an exemplary FMCW coherent lidar system 1500 configured to determine the range and/or radial velocity of a target. Lidar system 1500 includes a laser 1502 configured to produce a laser signal which is fed into a splitter 1504. The laser is “chirped” (e.g., the center frequency of the emitted laser beam is increased (“ramped up” or “chirped up”) or decreased (“ramped down” or “chirped down”) over time (or, equivalently, the central wavelength of the emitted laser beam changes with time within a waveband)). In various embodiments, the laser frequency is chirped quickly such that multiple phase angles are attained. In one example, the frequency of the laser signal is modulated by changing the laser operating parameters (e.g., current/voltage) or using a modulator included in the laser source 1502; however, in other examples, an external modulator can be placed between the laser source 1502 and the splitter 1504.

[0173] In other examples, the laser frequency can be “chirped” by modulating the phase of the laser signal (or light) produced by the laser 1502. In one example, the phase of the laser signal is modulated using an external modulator placed between the laser source 1502 and the splitter 1504; however, in some examples, the laser source 1502 may be modulated directly by changing operating parameters (e.g., current/voltage) or may include an internal modulator. Similar to frequency chirping, the phase of the laser signal can be increased (“ramped up”) or decreased (“ramped down”) over time.

[0174] Some examples of systems with FMCW-based lidar sensors have been described. However, some embodiments of the techniques described herein may be implemented using any suitable type of lidar sensors including, without limitation, any suitable type of coherent lidar sensors (e.g., phase-modulated coherent lidar sensors). With phase-modulated coherent lidar sensors, rather than chirping the frequency of the light produced by the laser (as described above with reference to FMCW techniques), the lidar system may use a phase modulator placed between the laser 1502 and the splitter 1504 to generate a discrete phase modulated signal, which may be used to measure range and radial velocity.

[0175] As shown, the splitter 1504 provides a first split laser signal Tx1 to a direction selective device 1506, which provides (e.g., forwards) the signal Tx1 to a scanner 1508. The scanner 1508 uses the first laser signal Tx1 to transmit light emitted by the laser 1502 and receives light reflected by the target 1510. The reflected light signal Rx is provided (e.g., passed back) to the direction selective device 1506. The second laser signal Tx2 and the reflected light signal Rx are provided to a coupler (also referred to as a mixer) 1512. The mixer 1512 may use the second laser signal Tx2 as a local oscillator (LO) signal and mix it with the reflected light signal Rx to generate a beat frequency f_beat. The mixed signal with beat frequency f_beat may be provided to a differential photodetector 1514 configured to produce a current based on the received light. The current may be converted to a voltage by an amplifier (e.g., a transimpedance amplifier (TIA)), which may be provided (e.g., fed) to an analog-to-digital converter (ADC) 1516 configured to convert the analog voltage to digital samples for a target detection module 1518. The target detection module 1518 may be configured to determine (e.g., calculate) the range and/or radial velocity of the target 1510 based on the digital sampled signal with beat frequency f_beat.

[0176] Laser chirping may be beneficial for range (distance) measurements of the target. In comparison, Doppler frequency measurements are generally used to measure target velocity. Resolution of distance can depend on the bandwidth size of the chirp frequency band such that greater bandwidth corresponds to finer resolution, according to the following relationships:

Range resolution: ΔR = c / (2 · BW) (given a perfectly linear chirp), and

Range: R = (c · f_beat · T_ChirpRamp) / (2 · BW)

where c is the speed of light, BW is the bandwidth of the chirped laser signal, f_beat is the beat frequency, and T_ChirpRamp is the time period during which the frequency of the chirped laser ramps up (e.g., the time period corresponding to the up-ramp portion of the chirped laser). For example, for a distance resolution of 3.0 cm, a frequency bandwidth of 5.0 GHz may be used. A linear chirp can be an effective way to measure range, and range accuracy can depend on the chirp linearity. In some instances, when chirping is used to measure target range, there may be range and velocity ambiguity. In particular, the reflected signal for measuring velocity (e.g., via Doppler) may affect the measurement of range. Therefore, some exemplary FMCW coherent lidar systems may rely on two measurements having different slopes (e.g., negative and positive slopes) to remove this ambiguity. The two measurements having different slopes may also be used to determine range and velocity measurements simultaneously.
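The two range relationships above, together with the 3.0 cm / 5.0 GHz example, can be checked numerically. The beat frequency and ramp time in the range calculation are illustrative values, not parameters from the application.

```python
C = 299_792_458.0  # speed of light (m/s)

def range_resolution(bandwidth_hz: float) -> float:
    """Delta_R = c / (2 * BW), assuming a perfectly linear chirp."""
    return C / (2.0 * bandwidth_hz)

def target_range(f_beat_hz: float, chirp_ramp_s: float, bandwidth_hz: float) -> float:
    """R = c * f_beat * T_ChirpRamp / (2 * BW)."""
    return C * f_beat_hz * chirp_ramp_s / (2.0 * bandwidth_hz)

# The example from the text: 5.0 GHz of chirp bandwidth gives ~3.0 cm resolution.
print(f"resolution at 5 GHz: {range_resolution(5.0e9) * 100:.1f} cm")  # ~3.0 cm

# Illustrative (assumed): a 10 us up-ramp and a 33.3 MHz beat -> roughly 10 m.
print(f"range: {target_range(33.3e6, 10e-6, 5.0e9):.2f} m")
```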

[0177] FIG. 16A is a plot of the ideal (or desired) frequency chirp as a function of time in the transmitted laser signal Tx (e.g., signal Tx2), depicted in solid line 1602, and the reflected light signal Rx, depicted in dotted line 1604. As depicted, the ideal Tx signal has a positive linear slope between time t1 and time t3 and a negative linear slope between time t3 and time t6. Accordingly, the ideal reflected light signal Rx returned with a time delay td of approximately t2−t1 has a positive linear slope between time t2 and time t5 and a negative linear slope between time t5 and time t7.

[0178] FIG. 16B is a plot illustrating the corresponding ideal beat frequency f_beat 1606 of the mixed signal Tx2 × Rx. Note that the beat frequency f_beat 1606 has a constant value between time t2 and time t3 (corresponding to the overlapping up-slopes of signals Tx2 and Rx) and between time t5 and time t6 (corresponding to the overlapping down-slopes of signals Tx2 and Rx).

[0179] The positive slope (“Slope P”) and the negative slope (“Slope N”) (also referred to as positive ramp (or up-ramp) and negative ramp (or down-ramp), respectively) can be used to determine range and/or velocity. In some instances, referring to FIGS. 16A-16B, when the positive and negative ramp pair is used to measure range and velocity simultaneously, the following relationships are utilized:

Range: R = (c · T_ChirpRamp / (4 · BW)) · (f_beat_P + f_beat_N), and

Velocity: v = (λ / 4) · (f_beat_N − f_beat_P)

where f_beat_P and f_beat_N are the beat frequencies generated during the positive (P) and negative (N) slopes of the chirp 1602, respectively, and λ is the wavelength of the laser signal.
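The ramp-pair relationships above can be sketched as follows. The sign convention (positive velocity meaning an approaching target) and all numeric inputs are illustrative assumptions, not taken from the application.

```python
C = 299_792_458.0  # speed of light (m/s)

def range_and_velocity(f_beat_p, f_beat_n, chirp_ramp_s, bandwidth_hz, wavelength_m):
    """Simultaneous range/velocity from a positive/negative ramp pair.

    R = (c * T_ChirpRamp / (4 * BW)) * (f_beat_P + f_beat_N)
    v = (lambda / 4) * (f_beat_N - f_beat_P)
    Assumed sign convention: positive v means the target is approaching.
    """
    rng = C * chirp_ramp_s * (f_beat_p + f_beat_n) / (4.0 * bandwidth_hz)
    vel = wavelength_m * (f_beat_n - f_beat_p) / 4.0
    return rng, vel

# Illustrative (assumed): 10 us ramps, 5 GHz bandwidth, 1550 nm laser.
r, v = range_and_velocity(f_beat_p=20.4e6, f_beat_n=46.2e6, chirp_ramp_s=10e-6,
                          bandwidth_hz=5.0e9, wavelength_m=1550e-9)
print(f"range: {r:.2f} m, radial velocity: {v:.2f} m/s")
```

With zero Doppler shift (f_beat_P = f_beat_N = f_beat), the range expression reduces to the single-ramp formula R = c · f_beat · T_ChirpRamp / (2 · BW) from paragraph [0176].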

[0180] In one example, the scanner 1508 of the lidar system 1500 is used to scan the environment and generate a target point cloud from the acquired scan data. In some examples, the lidar system 1500 can use processing methods that include performing one or more Fourier Transform calculations, such as a Fast Fourier Transform (FFT) or a Discrete Fourier Transform (DFT), to generate the target point cloud from the acquired scan data. Because the system 1500 is capable of measuring range, each point in the point cloud may have a three-dimensional location (e.g., x, y, and z) in addition to radial velocity. In some examples, the x-y location of each target point corresponds to a radial position of the target point relative to the scanner 1508. Likewise, the z location of each target point corresponds to the distance between the target point and the scanner 1508 (e.g., the range). In one example, each target point corresponds to one frequency chirp 1602 in the laser signal. For example, the samples collected by the system 1500 during the chirp 1602 (e.g., t1 to t6) can be processed to generate one point in the point cloud.
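The per-chirp point generation described above can be illustrated by converting one scan direction and its measured range into a Cartesian point carrying radial velocity. The axis convention (z along the scanner boresight, x/y transverse) and the numeric values are assumptions for illustration, not from the application.

```python
import math

def scan_to_point(azimuth_deg, elevation_deg, range_m, radial_velocity_mps):
    """One chirp -> one point: convert a scan direction and range to (x, y, z, v).

    Assumed convention: z along the scanner boresight, x/y transverse.
    """
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.sin(az)
    y = range_m * math.sin(el)
    z = range_m * math.cos(el) * math.cos(az)
    return (x, y, z, radial_velocity_mps)

# Illustrative: a 9.98 m return at 10 deg azimuth, 2 deg elevation, +10 m/s.
point = scan_to_point(azimuth_deg=10.0, elevation_deg=2.0, range_m=9.98,
                      radial_velocity_mps=10.0)
print(tuple(round(c, 3) for c in point))
```

Iterating this over every chirp in a scan yields the full target point cloud.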

Additional Embodiments, Computing Devices, and Information Handling Systems

[0181] In some embodiments, lidar systems and techniques described herein may be used to provide mapping and/or autonomous navigation for a vehicle. FIG. 17 illustrates a vehicle 1700 having a plurality of sensors 1702. As shown, a first sensor 1702a, a second sensor 1702b, a third sensor 1702c, and a fourth sensor 1702d may be positioned in a first location on (or inside) the vehicle 1700 (e.g., the roof). Likewise, a fifth sensor 1702e may be positioned in a second location on (or inside) the vehicle 1700 (e.g., the front of the vehicle 1700) and a sixth sensor 1702f may be positioned in a third location on (or inside) the vehicle 1700 (e.g., the back of the vehicle 1700). In other examples, a different number or configuration of sensors may be used.

[0182] In some examples, at least one sensor of the plurality of sensors 1702 is configured to provide (or enable) 3D mapping of the vehicle’s surroundings. In certain examples, at least one sensor of the plurality of sensors 1702 is used to provide autonomous navigation for the vehicle 1700 within an environment. In one example, each sensor 1702 includes at least one lidar system, device, or chip. The lidar system(s) included in each sensor 1702 may include any of the lidar systems disclosed herein. In some examples, at least one sensor of the plurality of sensors 1702 may be a different type of sensor (e.g., camera, radar, etc.). In one example, the vehicle 1700 is a car; however, in other examples, the vehicle 1700 may be a truck, boat, plane, drone, vacuum cleaner (e.g., robot vacuum cleaner), robot, train, tractor, ATV, or any other type of vehicle or moveable object.

[0183] In some embodiments, lidar systems and techniques described herein may be implemented using Silicon photonics (SiP) technologies. SiP is a material platform from which photonic integrated circuits (PICs) can be produced. SiP is compatible with CMOS (electronic) fabrication techniques, which allows PICs to be manufactured using established foundry infrastructure. In PICs, light propagates through a patterned silicon optical medium that lies on top of an insulating material layer (e.g., silicon on insulator (SOI)). In some cases, direct bandgap materials (e.g., indium phosphide (InP)) are used to create light (e.g., laser) sources that are integrated in an SiP chip (or wafer) to drive optical or photonic components within a photonic circuit. SiP technologies are increasingly used in optical datacom, sensing, biomedical, automotive, astronomy, aerospace, augmented reality (AR) applications, virtual reality (VR) applications, artificial intelligence (AI) applications, navigation, image identification, drones, robotics, etc.

[0184] FIG. 18 is a block diagram of a silicon photonic integrated circuit (PIC) 1800 in accordance with aspects described herein. In one example, the lidar systems described herein can be implemented as the PIC 1800. The PIC 1800 includes a transmitter module 1802, a steering module 1804, and a receiver module 1806. As shown, the transmitter module 1802, the steering module 1804, and the receiver module 1806 are integrated on a silicon substrate 1808. In other examples, the transmitter module 1802, the steering module 1804, or the receiver module 1806 may be included on a separate substrate. In some embodiments, the steering module 1804 is used by the PIC 1800 in connection with transmission (e.g., emission) and reception (e.g., collection) of optical signals.
In some examples, the silicon substrate 1808 is an SOI substrate with a silicon layer (e.g., between 200 nm and 10 micron thick) disposed over an oxide layer (e.g., approximately 2 micron thick). In certain examples, the silicon substrate 1808 can include multiple silicon and/or oxide layers.

[0185] In one example, the transmitter module 1802 includes at least one laser source. In some examples, the laser source(s) are implemented using a direct bandgap material (e.g., InP) and integrated on the silicon substrate 1808 via hybrid integration. The transmitter module 1802 may also include at least one splitter, a combiner, and/or a direction selective device that are implemented on the silicon substrate 1808 via monolithic or hybrid integration. In some examples, the laser source(s) are external to the PIC 1800 and the laser signal(s) can be provided to the transmission module 1802.

[0186] In some embodiments, lidar systems and techniques described herein may be implemented using micro-electromechanical system (MEMS) devices. A MEMS device is a miniature device that has both mechanical and electronic components. The physical dimension of a MEMS device can range from several millimeters to less than one micrometer. Lidar systems may include one or more scanning mirrors implemented as a MEMS mirror (or an array of MEMS mirrors). Each MEMS mirror may be a single-axis MEMS mirror or a dual-axis MEMS mirror. The MEMS mirror(s) may be electromagnetic mirrors. A control signal is provided to adjust the position of the mirror to direct light in at least one scan direction (e.g., horizontal and/or vertical). The MEMS mirror(s) can be positioned to steer light transmitted by the lidar system and/or to steer light received by the lidar system. MEMS mirrors are compact and may allow for smaller form-factor lidar systems, faster control speeds, and more precise light steering compared to other mechanical-scanning lidar methods. MEMS mirrors may be used in solid-state (e.g., stationary) lidar systems and rotating lidar systems.

[0187] In embodiments, aspects of the techniques described herein (e.g., timing the emission of the transmitted signal, processing received return signals, and so forth) may be directed to or implemented on information handling systems/computing systems. For purposes of this disclosure, a computing system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, route, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes.
For example, a computing system may be a personal computer (e.g., laptop), tablet computer, phablet, personal digital assistant (PDA), smart phone, smart watch, smart package, server (e.g., blade server or rack server), network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price.

[0188] FIG. 19 is a block diagram of an example computer system 1900 that may be used in implementing the technology described in this document. General-purpose computers, network appliances, mobile devices, or other electronic systems may also include at least portions of the system 1900. The system 1900 includes a processor 1910, a memory 1920, a storage device 1930, and an input/output device 1940. Each of the components 1910, 1920, 1930, and 1940 may be interconnected, for example, using a system bus 1950. The processor 1910 is capable of processing instructions for execution within the system 1900. In some implementations, the processor 1910 is a single-threaded processor. In some implementations, the processor 1910 is a multi-threaded processor. In some implementations, the processor 1910 is a programmable (or reprogrammable) general purpose microprocessor or microcontroller. The processor 1910 is capable of processing instructions stored in the memory 1920 or on the storage device 1930.

[0189] The memory 1920 stores information within the system 1900. In some implementations, the memory 1920 is a non-transitory computer-readable medium. In some implementations, the memory 1920 is a volatile memory unit. In some implementations, the memory 1920 is a nonvolatile memory unit.

[0190] The storage device 1930 is capable of providing mass storage for the system 1900. In some implementations, the storage device 1930 is a non-transitory computer-readable medium. In various different implementations, the storage device 1930 may include, for example, a hard disk device, an optical disk device, a solid-state drive, a flash drive, or some other large capacity storage device. For example, the storage device may store long-term data (e.g., database data, file system data, etc.). The input/output device 1940 provides input/output operations for the system 1900. In some implementations, the input/output device 1940 may include one or more network interface devices, e.g., an Ethernet card, a serial communication device, e.g., an RS-232 port, and/or a wireless interface device, e.g., an 802.11 card, a 3G wireless modem, or a 4G wireless modem. In some implementations, the input/output device may include driver devices configured to receive input data and send output data to other input/output devices, e.g., keyboard, printer and display devices 1960. In some examples, mobile computing devices, mobile communication devices, and other devices may be used.

[0191] In some implementations, at least a portion of the approaches described above may be realized by instructions that upon execution cause one or more processing devices to carry out the processes and functions described above. Such instructions may include, for example, interpreted instructions such as script instructions, or executable code, or other instructions stored in a non-transitory computer readable medium. The storage device 1930 may be implemented in a distributed way over a network, for example as a server farm or a set of widely distributed servers, or may be implemented in a single computing device.

[0192] Although an example processing system has been described in FIG. 19, embodiments of the subject matter, functional operations and processes described in this specification can be implemented in other types of digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible nonvolatile program carrier for execution by, or to control the operation of, a data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.

[0193] The term “system” may encompass all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. A processing system may include special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application specific integrated circuit), or a programmable general purpose microprocessor or microcontroller. A processing system may include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

[0194] A computer program (which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[0195] The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA, an ASIC, or a programmable general purpose microprocessor or microcontroller.

[0196] Computers suitable for the execution of a computer program can include, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. A computer generally includes a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.

[0197] Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[0198] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user’s user device in response to requests received from the web browser.

[0199] Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.

[0200] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship between client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship with each other.

[0201] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.

[0202] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0203] FIG. 20 depicts a simplified block diagram of a computing device/information handling system (or computing system) according to embodiments of the present disclosure. It will be understood that the functionalities shown for system 2000 may operate to support various embodiments of an information handling system - although it shall be understood that an information handling system may be differently configured and include different components.

[0204] As illustrated in FIG. 20, system 2000 includes one or more central processing units (CPU) 2001 that provide(s) computing resources and control(s) the computer. CPU 2001 may be implemented with a microprocessor or the like, and may also include one or more graphics processing units (GPU) 2017 and/or a floating point coprocessor for mathematical computations. System 2000 may also include a system memory 2002, which may be in the form of random-access memory (RAM), read-only memory (ROM), or both.

[0205] A number of controllers and peripheral devices may also be provided. For example, an input controller 2003 represents an interface to various input device(s) 2004, such as a keyboard, mouse, or stylus. There may also be a wireless controller 2005, which communicates with a wireless device 2006. System 2000 may also include a storage controller 2007 for interfacing with one or more storage devices 2008, each of which includes a storage medium such as a magnetic tape or disk, or an optical medium that might be used to record programs of instructions for operating systems, utilities, and applications, which may include embodiments of programs that implement various aspects of the techniques described herein. Storage device(s) 2008 may also be used to store processed data or data to be processed in accordance with some embodiments. System 2000 may also include a display controller 2009 for providing an interface to a display device 2011, which may be a cathode ray tube (CRT), a thin film transistor (TFT) display, or other type of display. The computing system 2000 may also include an automotive signal controller 2012 for communicating with an automotive system 2013. A communications controller 2014 may interface with one or more communication devices 2015, which enables system 2000 to connect to remote devices through any of a variety of networks including the Internet, a cloud resource (e.g., an Ethernet cloud, a Fiber Channel over Ethernet (FCoE)/Data Center Bridging (DCB) cloud, etc.), a local area network (LAN), a wide area network (WAN), a storage area network (SAN), or through any suitable electromagnetic carrier signals including infrared signals.

[0206] In the illustrated system, all major system components may connect to a bus 2016, which may represent more than one physical bus. However, various system components may or may not be in physical proximity to one another. For example, input data and/or output data may be remotely transmitted from one physical location to another. In addition, programs that implement various aspects of some embodiments may be accessed from a remote location (e.g., a server) over a network. Such data and/or programs may be conveyed through any of a variety of machine-readable media including, but not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store or to store and execute program code, such as application specific integrated circuits (ASICs), programmable logic devices (PLDs), flash memory devices, and ROM and RAM devices. Some embodiments may be encoded upon one or more non-transitory, computer-readable media with instructions for one or more processors or processing units to cause steps to be performed. It shall be noted that the one or more non-transitory, computer-readable media shall include volatile and non-volatile memory. It shall also be noted that alternative implementations are possible, including a hardware implementation or a software/hardware implementation. Hardware-implemented functions may be realized using ASIC(s), programmable arrays, digital signal processing circuitry, or the like. Accordingly, the “means” terms in any claims are intended to cover both software and hardware implementations. Similarly, the term “computer-readable medium or media” as used herein includes software and/or hardware having a program of instructions embodied thereon, or a combination thereof. With these implementation alternatives in mind, it is to be understood that the figures and accompanying description provide the functional information one skilled in the art would require to write program code (i.e., software) and/or to fabricate circuits (i.e., hardware) to perform the processing required.

[0207] It shall be noted that some embodiments may further relate to computer products with a non-transitory, tangible computer-readable medium that has computer code thereon for performing various computer-implemented operations. The medium and computer code may be those specially designed and constructed for the purposes of the techniques described herein, or they may be of the kind known or available to those having skill in the relevant arts. Examples of tangible, computer-readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store or to store and execute program code, such as application specific integrated circuits (ASICs), programmable logic devices (PLDs), flash memory devices, and ROM and RAM devices. Examples of computer code include machine code, such as produced by a compiler, and files containing higher level code that is executed by a computer using an interpreter. Some embodiments may be implemented in whole or in part as machine-executable instructions that may be in program modules that are executed by a processing device. Examples of program modules include libraries, programs, routines, objects, components, and data structures. In distributed computing environments, program modules may be physically located in settings that are local, remote, or both.

[0208] One skilled in the art will recognize that no computing system or programming language is critical to the practice of the techniques described herein. One skilled in the art will also recognize that a number of the elements described above may be physically and/or functionally separated into sub-modules or combined together.

[0209] In embodiments, aspects of the techniques described herein (e.g., timing the emission of optical signals, processing received return signals, generating point clouds, performing one or more (e.g., all) of the steps of the methods described herein, etc.) may be implemented using machine learning and/or artificial intelligence technologies.

[0210] “Machine learning” generally refers to the application of certain techniques (e.g., pattern recognition and/or statistical inference techniques) by computer systems to perform specific tasks. Machine learning techniques may be used to build models based on sample data (e.g., “training data”) and to validate the models using validation data (e.g., “testing data”). The sample and validation data may be organized as sets of records (e.g., “observations” or “data samples”), with each record indicating values of specified data fields (e.g., “independent variables,” “inputs,” “features,” or “predictors”) and corresponding values of other data fields (e.g., “dependent variables,” “outputs,” or “targets”). Machine learning techniques may be used to train models to infer the values of the outputs based on the values of the inputs. When presented with other data (e.g., “inference data”) similar to or related to the sample data, such models may accurately infer the unknown values of the targets of the inference data set.
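
The training-and-validation workflow described above can be sketched with a minimal, purely illustrative example (it is not part of the disclosure): a one-variable linear model is fit to sample (“training”) data by closed-form least squares, then validated against held-out (“testing”) data. The function names and data values below are hypothetical.

```python
def fit_linear(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# Sample ("training") data: inputs and targets with a known linear trend.
train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [1.0, 3.0, 5.0, 7.0]  # y = 2x + 1

# Validation ("testing") data held out from training.
test_x, test_y = [4.0, 5.0], [9.0, 11.0]

model = fit_linear(train_x, train_y)

# The trained model should accurately infer targets for unseen inputs.
max_error = max(abs(predict(model, x) - y) for x, y in zip(test_x, test_y))
```

In this toy case the validation error is zero because the data are exactly linear; with real, noisy data the validation step would measure how well the fitted model generalizes to the inference data described above.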

[0211] A feature of a data sample may be a measurable property of an entity (e.g., person, thing, event, activity, etc.) represented by or associated with the data sample. A value of a feature may be a measurement of the corresponding property of an entity or an instance of information regarding an entity. Features can also have data types. For instance, a feature can have an image data type, a numerical data type, a text data type (e.g., a structured text data type or an unstructured (“free”) text data type), a categorical data type, or any other suitable data type. In general, a feature’s data type is categorical if the set of values that can be assigned to the feature is finite.
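
A hypothetical data sample illustrating the feature data types mentioned above (all field names and values are invented for illustration only):

```python
# One record ("data sample") with features of different data types.
data_sample = {
    "height_cm": 172.5,              # numerical feature
    "notes": "prefers window seat",  # unstructured ("free") text feature
    "membership_tier": "gold",       # categorical feature
}

# A feature is categorical when its set of assignable values is finite.
MEMBERSHIP_TIERS = {"bronze", "silver", "gold"}
is_categorical = data_sample["membership_tier"] in MEMBERSHIP_TIERS
```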

[0212] As used herein, “model” may refer to any suitable model artifact generated by the process of using a machine learning algorithm to fit a model to a specific training data set. The terms “model,” “data analytics model,” “machine learning model” and “machine learned model” are used interchangeably herein.

[0213] As used herein, the “development” of a machine learning model may refer to construction of the machine learning model. Machine learning models may be constructed by computers using training data sets. Thus, “development” of a machine learning model may include the training of the machine learning model using a training data set. In some cases (generally referred to as “supervised learning”), a training data set used to train a machine learning model can include known outcomes (e.g., labels or target values) for individual data samples in the training data set. For example, when training a supervised computer vision model to detect images of cats, a target value for a data sample in the training data set may indicate whether or not the data sample includes an image of a cat. In other cases (generally referred to as “unsupervised learning”), a training data set does not include known outcomes for individual data samples in the training data set.

[0214] Following development, a machine learning model may be used to generate inferences with respect to “inference” data sets. For example, following development, a computer vision model may be configured to distinguish data samples including images of cats from data samples that do not include images of cats. As used herein, the “deployment” of a machine learning model may refer to the use of a developed machine learning model to generate inferences about data other than the training data.

[0215] “Artificial intelligence” (AI) generally encompasses any technology that demonstrates intelligence. Applications (e.g., machine-executed software) that demonstrate intelligence may be referred to herein as “artificial intelligence applications,” “AI applications,” or “intelligent agents.” An intelligent agent may demonstrate intelligence, for example, by perceiving its environment, learning, and/or solving problems (e.g., taking actions or making decisions that increase the likelihood of achieving a defined goal). In many cases, intelligent agents are developed by organizations and deployed on network-connected computer systems so users within the organization can access them. Intelligent agents are used to guide decision-making and/or to control systems in a wide variety of fields and industries, e.g., security; transportation; risk assessment and management; supply chain logistics; and energy management. Intelligent agents may include or use models.

[0216] Some non-limiting examples of AI application types may include inference applications, comparison applications, and optimizer applications. Inference applications may include any intelligent agents that generate inferences (e.g., predictions, forecasts, etc.) about the values of one or more output variables based on the values of one or more input variables. In some examples, an inference application may provide a recommendation based on a generated inference. For example, an inference application for a lending organization may infer the likelihood that a loan applicant will default on repayment of a loan for a requested amount, and may recommend whether to approve a loan for the requested amount based on that inference. Comparison applications may include any intelligent agents that compare two or more possible scenarios. Each scenario may correspond to a set of potential values of one or more input variables over a period of time. For each scenario, an intelligent agent may generate one or more inferences (e.g., with respect to the values of one or more output variables) and/or recommendations. For example, a comparison application for a lending organization may display the organization’s predicted revenue over a period of time if the organization approves loan applications if and only if the predicted risk of default is less than 20% (scenario #1), less than 10% (scenario #2), or less than 5% (scenario #3). Optimizer applications may include any intelligent agents that infer the optimum values of one or more variables of interest based on the values of one or more input variables. For example, an optimizer application for a lending organization may indicate the maximum loan amount that the organization would approve for a particular customer.
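
The lending examples above can be sketched as follows. This is a hypothetical illustration only: the risk “model” is a stand-in formula, and all function names, thresholds, and applicant figures are invented.

```python
def predicted_default_risk(loan_amount, annual_income):
    """Toy stand-in for a trained risk model (not a real model)."""
    return min(1.0, loan_amount / (10.0 * annual_income))

def recommend_approval(loan_amount, annual_income, max_risk):
    """Inference application: recommend approval when predicted risk
    falls below a threshold."""
    return predicted_default_risk(loan_amount, annual_income) < max_risk

# Comparison application: evaluate the same applicant pool under three
# scenarios with different risk thresholds (cf. scenarios #1-#3 above).
applicants = [(50_000, 80_000), (200_000, 60_000), (30_000, 90_000)]
scenarios = {"#1": 0.20, "#2": 0.10, "#3": 0.05}
approvals = {
    name: sum(recommend_approval(amount, income, threshold)
              for amount, income in applicants)
    for name, threshold in scenarios.items()
}
```

Tightening the threshold from scenario #1 to scenario #3 reduces the number of approvals, which is the kind of trade-off a comparison application would surface.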

[0217] As used herein, “data analytics” may refer to the process of analyzing data (e.g., using machine learning models, artificial intelligence, models, or techniques) to discover information, draw conclusions, and/or support decision-making. Species of data analytics can include descriptive analytics (e.g., processes for describing the information, trends, anomalies, etc. in a data set), diagnostic analytics (e.g., processes for inferring why specific trends, patterns, anomalies, etc. are present in a data set), predictive analytics (e.g., processes for predicting future events or outcomes), and prescriptive analytics (processes for determining or suggesting a course of action).

[0218] Data analytics tools are used to guide decision-making and/or to control systems in a wide variety of fields and industries, e.g., security; transportation; risk assessment and management; supply chain logistics; and energy management. The processes used to develop data analytics tools suitable for carrying out specific data analytics tasks generally include steps of data collection, data preparation, feature engineering, model generation, and/or model deployment.

[0219] As used herein, “spatial data” may refer to data relating to the location, shape, and/or geometry of one or more spatial objects. Data collected by lidar systems, devices, and chips described herein may be considered spatial data. A “spatial object” may be an entity or thing that occupies space and/or has a location in a physical or virtual environment. In some cases, a spatial object may be represented by an image (e.g., photograph, rendering, etc.) of the object. In some cases, a spatial object may be represented by one or more geometric elements (e.g., points, lines, curves, and/or polygons), which may have locations within an environment (e.g., coordinates within a coordinate space corresponding to the environment). In some cases, a spatial object may be represented as a cluster of points in a 3D point-cloud.
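
For illustration, a spatial object can be represented as a small cluster of points in a 3D point-cloud, with its location (centroid) and extent (axis-aligned bounding box) derived from the cluster. The coordinates below are hypothetical.

```python
# A spatial object as a cluster of (x, y, z) points in a 3D point-cloud.
cluster = [
    (1.0, 2.0, 0.5),
    (1.2, 2.1, 0.6),
    (0.9, 1.9, 0.4),
]

n = len(cluster)
# Location of the object: centroid of the cluster.
centroid = tuple(sum(p[i] for p in cluster) / n for i in range(3))
# Extent of the object: axis-aligned bounding box.
bbox_min = tuple(min(p[i] for p in cluster) for i in range(3))
bbox_max = tuple(max(p[i] for p in cluster) for i in range(3))
```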

[0220] As used herein, “spatial attribute” may refer to an attribute of a spatial object that relates to the object’s location, shape, or geometry. Spatial objects or observations may also have “non-spatial attributes.” For example, a residential lot is a spatial object that can have spatial attributes (e.g., location, dimensions, etc.) and non-spatial attributes (e.g., market value, owner of record, tax assessment, etc.). As used herein, “spatial feature” may refer to a feature that is based on (e.g., represents or depends on) a spatial attribute of a spatial object or a spatial relationship between or among spatial objects. As a special case, “location feature” may refer to a spatial feature that is based on a location of a spatial object. As used herein, “spatial observation” may refer to an observation that includes a representation of a spatial object, values of one or more spatial attributes of a spatial object, and/or values of one or more spatial features.

[0221] Spatial data may be encoded in vector format, raster format, or any other suitable format. In vector format, each spatial object is represented by one or more geometric elements. In this context, each point has a location (e.g., coordinates), and points also may have one or more other attributes. Each line (or curve) comprises an ordered, connected set of points. Each polygon comprises a connected set of lines that form a closed shape. In raster format, spatial objects are represented by values (e.g., pixel values) assigned to cells (e.g., pixels) arranged in a regular pattern (e.g., a grid or matrix). In this context, each cell represents a spatial region, and the value assigned to the cell applies to the represented spatial region.
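
The two encodings can be illustrated with the same hypothetical square region: in vector format the square is a polygon given as an ordered, closed set of points, while in raster format it is a grid of cells whose values mark the covered region.

```python
# Vector format: a 2x2 square as a closed polygon
# (the last point repeats the first to close the shape).
square = [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]

def inside(x, y):
    """True when (x, y) lies inside the square region."""
    return 0 <= x <= 2 and 0 <= y <= 2

# Raster format: a 4x4 grid where each cell covers a 1x1 region;
# a cell's value is 1 when the cell's center lies inside the square.
grid = [[1 if inside(col + 0.5, row + 0.5) else 0 for col in range(4)]
        for row in range(4)]
```

Here the four cells whose centers fall inside the square are set; using a finer grid (smaller cells) trades storage for spatial resolution, which is the usual vector-versus-raster trade-off.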

[0222] “Computer vision” generally refers to the use of computer systems to analyze and interpret image data. In some embodiments, computer vision may be used to analyze and interpret data collected by lidar systems (e.g., point-clouds). Computer vision tools generally use models that incorporate principles of geometry and/or physics. Such models may be trained to solve specific problems within the computer vision domain using machine learning techniques. For example, computer vision models may be trained to perform object recognition (recognizing instances of objects or object classes in images), identification (identifying an individual instance of an object in an image), detection (detecting specific types of objects or events in images), etc.

[0223] Computer vision tools (e.g., models, systems, etc.) may perform one or more of the following functions: image pre-processing, feature extraction, and detection / segmentation. Some examples of image pre-processing techniques include, without limitation, image re-sampling, noise reduction, contrast enhancement, and scaling (e.g., generating a scale space representation). Extracted features may be low-level (e.g., raw pixels, pixel intensities, pixel colors, gradients, patterns and textures (e.g., combinations of colors in close proximity), color histograms, motion vectors, edges, lines, corners, ridges, etc.), mid-level (e.g., shapes, surfaces, volumes, patterns, etc.), or high-level (e.g., objects, scenes, events, etc.). The detection / segmentation function may involve selection of a subset of the input image data (e.g., one or more images within a set of images, one or more regions within an image, etc.) for further processing.
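
A minimal, hypothetical sketch of these three stages on a tiny grayscale “image” (a 2D list of pixel intensities): pre-processing with a mean filter for noise reduction, low-level feature extraction of horizontal gradients, and detection by thresholding the gradients. The image content and threshold are invented for illustration.

```python
image = [
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
]

# Pre-processing: horizontal 3-point mean filter (noise reduction).
def smooth_row(row):
    out = []
    for i in range(len(row)):
        window = row[max(0, i - 1): i + 2]
        out.append(round(sum(window) / len(window)))
    return out

smoothed = [smooth_row(row) for row in image]

# Feature extraction: horizontal gradient magnitude between adjacent pixels.
gradients = [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
             for row in smoothed]

# Detection: select (row, column) locations whose gradient exceeds a threshold.
edges = [(r, c) for r, grow in enumerate(gradients)
         for c, g in enumerate(grow) if g > 50]
```

Note that the smoothing step smears the intensity step across neighboring pixels, so the threshold selects a few adjacent edge locations per row rather than a single one.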

Some Embodiments

[0224] Some embodiments may include any of the following:

[0225] A1. A lidar device, comprising: a plurality of laser sources configured to provide a plurality of transmit beams, each laser source of the plurality of laser sources being positioned with a respective offset of a first plurality of offsets relative to a reference line; a plurality of transmit/receive (T/R) interfaces configured to pass the plurality of transmit beams and reflect received light towards a plurality of detectors, each T/R interface of the plurality of T/R interfaces being positioned with a respective offset of a second plurality of offsets relative to the reference line; and a plurality of lenses positioned between the plurality of laser sources and the plurality of T/R interfaces, each lens of the plurality of lenses being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, are configured to provide beam-steering of the plurality of transmit beams.

[0226] A2. The lidar device of clause A1, wherein each lens of the plurality of lenses intersects a respective transmit beam of the plurality of transmit beams, and wherein each lens of the plurality of lenses includes one or more micro-optic lenses configured to provide beam shaping of the respective transmit beam of the plurality of transmit beams.

[0227] A3. The lidar device of clauses A1 or A2, wherein each T/R interface of the plurality of T/R interfaces includes at least one mirror.

[0228] A4. The lidar device of any of clauses A1 to A3, wherein each detector of the plurality of detectors is included in a respective T/R interface of the plurality of the T/R interfaces.

[0229] A5. The lidar device of any of clauses A1 to A4, wherein the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces are disposed on a substrate.

[0230] A6. The lidar device of clause A5, wherein the first plurality of offsets, the second plurality of offsets, and the third plurality of offsets correspond to positions of the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces on the substrate relative to the reference line.

[0231] A7. The lidar device of any of clauses A1 to A6, wherein the lidar device is configured to be included in a lidar system.

[0232] A8. The lidar device of clause A7, wherein the lidar device is one of a plurality of lidar devices included in the lidar system.

[0233] A9. The lidar device of clause A7 or A8, wherein the lidar device corresponds to two or more channels of a plurality of channels of the lidar system.

[0234] A10. The lidar device of any of clauses A7 to A9, wherein the lidar device is aligned to a linear focal plane of the lidar system.

[0235] A11. The lidar device of any of clauses A7 to A10, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

[0236] A12. The lidar device of clause A11, wherein the plurality of laser sources and the plurality of lenses, as positioned, are configured to steer the plurality of transmit beams towards the center of the system lens.

[0237] A13. The lidar device of clause A12, wherein the first plurality of offsets are larger than the third plurality of offsets and the third plurality of offsets are larger than the second plurality of offsets.

[0238] A14. The lidar device of any of clauses A11 to A13, wherein the lidar device is aligned using an active alignment process that includes energizing at least one laser source of the plurality of laser sources and measuring energy associated with at least one transmit beam of the plurality of transmit beams at the center of the system lens.

[0239] A15. The lidar device of any of clauses A1 to A14, wherein the first plurality of offsets correspond to a first pitch between the plurality of laser sources, the second plurality of offsets correspond to a second pitch between the plurality of T/R interfaces, and the third plurality of offsets correspond to a third pitch between the plurality of lenses.

[0240] A16. The lidar device of clause A15, wherein the plurality of laser sources are fabricated as a laser source array having the first pitch, the plurality of T/R interfaces are fabricated as a T/R interface array having the second pitch, and the plurality of lenses are fabricated as a lens array having the third pitch.

[0241] A17. The lidar device of clause A16, wherein each of the laser source array, the T/R interface array, and the lens array is a monolithic array component.

[0242] A18. A method for operating a lidar device, the method comprising: providing a plurality of transmit beams via a plurality of laser sources, each laser source of the plurality of laser sources being positioned with a respective offset of a first plurality of offsets relative to a reference line; conditioning the plurality of transmit beams via a plurality of lenses, each lens of the plurality of lenses being positioned with a respective offset of a second plurality of offsets relative to the reference line; and passing the plurality of transmit beams and reflecting received light towards a plurality of detectors via a plurality of transmit/receive (T/R) interfaces, each T/R interface of the plurality of T/R interfaces being positioned with a respective offset of a third plurality of offsets relative to the reference line, wherein the plurality of laser sources and the plurality of lenses, as positioned, provide beam-steering of the plurality of transmit beams.

[0243] A19. The method of clause A18, wherein each lens of the plurality of lenses intersects a respective transmit beam of the plurality of transmit beams, and wherein each lens of the plurality of lenses includes one or more micro-optic lenses and conditioning the plurality of transmit beams includes beam shaping the plurality of transmit beams using the micro-optic lenses.

[0244] A20. The method of clause A18 or A19, wherein each T/R interface of the plurality of T/R interfaces includes at least one mirror.

[0245] A21. The method of any of clauses A18 to A20, wherein each detector of the plurality of detectors is included in a respective T/R interface of the plurality of the T/R interfaces.

[0246] A22. The method of any of clauses A18 to A21, wherein the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces are disposed on a substrate.

[0247] A23. The method of clause A22, wherein the first plurality of offsets, the second plurality of offsets, and the third plurality of offsets correspond to positions of the plurality of laser sources, the plurality of lenses, and the plurality of T/R interfaces on the substrate relative to the reference line.

[0248] A24. The method of any of clauses A18 to A23, wherein the lidar device is configured to be included in a lidar system.

[0249] A25. The method of clause A24, wherein the lidar device is one of a plurality of lidar devices included in the lidar system.

[0250] A26. The method of clause A24 or A25, wherein the lidar device corresponds to two or more channels of a plurality of channels of the lidar system.

[0251] A27. The method of any of clauses A24 to A26, wherein the lidar device is aligned to a linear focal plane of the lidar system.

[0252] A28. The method of any of clauses A24 to A27, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

[0253] A29. The method of clause A28, wherein the plurality of laser sources and the plurality of lenses, as positioned, steer the plurality of transmit beams towards the center of the system lens.

[0254] A30. The method of clause A29, wherein the first plurality of offsets are larger than the third plurality of offsets and the third plurality of offsets are larger than the second plurality of offsets.

[0255] A31. The method of any of clauses A28 to A30, further comprising aligning the lidar device using an active alignment process that includes energizing at least one laser source of the plurality of laser sources and measuring energy associated with at least one transmit beam of the plurality of transmit beams at the center of the system lens.

[0256] A32. The method of any of clauses A18 to A31, wherein the first plurality of offsets correspond to a first pitch between the plurality of laser sources, the second plurality of offsets correspond to a second pitch between the plurality of T/R interfaces, and the third plurality of offsets correspond to a third pitch between the plurality of lenses.

[0257] A33. The method of clause A32, wherein the plurality of laser sources are fabricated as a laser source array having the first pitch, the plurality of T/R interfaces are fabricated as a T/R interface array having the second pitch, and the plurality of lenses are fabricated as a lens array having the third pitch.

[0258] A34. The method of clause A33, wherein each of the laser source array, the T/R interface array, and the lens array is a monolithic array component.

[0259] A35. A method for manufacturing a lidar device, the method comprising: providing a laser source array including a plurality of laser sources disposed with a first pitch on a first substrate, the plurality of laser sources configured to provide a respective plurality of transmit beams, wherein providing the laser source array comprises positioning at least one laser source of the plurality of laser sources with a first offset relative to a reference line; providing a lens array including a plurality of lenses disposed with a second pitch on a second substrate, the plurality of lenses configured to condition the respective plurality of transmit beams provided by the laser source array, wherein providing the lens array comprises positioning at least one lens of the plurality of lenses with a second offset relative to the reference line; and providing a transmit/receive (T/R) interface array including a plurality of T/R interfaces disposed with a third pitch on a third substrate, the plurality of T/R interfaces configured to pass the respective plurality of transmit beams conditioned by the lens array and to reflect received light towards a plurality of detectors, wherein providing the T/R interface array comprises positioning at least one T/R interface of the plurality of T/R interfaces with a third offset relative to the reference line, wherein the laser source array and the lens array, as positioned, are configured to provide beam steering of one or more of the plurality of transmit beams.

[0260] A36. The method of clause A35, further comprising: fabricating the plurality of laser sources as the laser source array having the first pitch; fabricating the plurality of lenses as the lens array having the second pitch; and fabricating the plurality of T/R interfaces as the T/R interface array having the third pitch.

[0261] A37. The method of clause A35 or A36, wherein each of the laser source array, the T/R interface array, and the lens array is fabricated as a monolithic array component.

[0262] B38. A lidar device, comprising: a laser source configured to provide a transmit beam, the laser source being positioned with a first offset relative to a reference line; a transmit/receive (T/R) interface configured to pass the transmit beam and reflect received light towards a detector, the T/R interface being positioned with a second offset relative to the reference line; and a lens positioned between the laser source and the T/R interface, the lens being positioned with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam.

[0263] B39. The lidar device of clause B38, wherein the lens includes one or more micro-optic lenses configured to provide beam shaping of the transmit beam.

[0264] B40. The lidar device of clause B38 or B39, wherein the T/R interface includes at least one mirror.

[0265] B41. The lidar device of any of clauses B38 to B40, wherein the detector is included in the T/R interface.

[0266] B42. The lidar device of any of clauses B38 to B41, wherein the laser source, the lens, and the T/R interface are disposed on a substrate.

[0267] B43. The lidar device of clause B42, wherein the first, second, and third offsets correspond to positions of the laser source, the lens, and the T/R interface on the substrate relative to the reference line.

[0268] B44. The lidar device of any of clauses B38 to B43, wherein the lidar device is one of a plurality of lidar devices included in a lidar system.

[0269] B45. The lidar device of clause B44, wherein the lidar device corresponds to a channel of a plurality of channels of the lidar system.

[0270] B46. The lidar device of clause B44 or B45, wherein the lidar device is aligned to a linear focal plane of the lidar system.

[0271] B47. The lidar device of any of clauses B44 to B46, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

[0272] B48. The lidar device of any of clauses B44 to B47, wherein the lidar system includes a system lens and the reference line is coincident with a boresight line of the system lens.

[0273] B49. The lidar device of clause B47, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam towards the center of the system lens.

[0274] B50. The lidar device of clause B47 or B49, wherein the lidar device is aligned using an active alignment process that includes energizing the laser source and measuring energy associated with the transmit beam at the center of the system lens.

[0275] B51. A method for operating a lidar device, the method comprising: providing a transmit beam via a laser source positioned with a first offset relative to a reference line; conditioning the transmit beam via a lens positioned with a second offset relative to the reference line; and passing the transmit beam and reflecting received light towards a detector via a transmit/receive (T/R) interface positioned with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, steer the transmit beam.

[0276] B52. The method of clause B51, wherein the lens includes one or more micro-optic lenses and conditioning the transmit beam includes beam shaping the transmit beam using the one or more micro-optic lenses.

[0277] B53. The method of clause B51 or B52, wherein the T/R interface includes at least one mirror.

[0278] B54. The method of any of clauses B51 to B53, wherein the detector is included in the T/R interface.

[0279] B55. The method of any of clauses B51 to B54, wherein the laser source, the lens, and the T/R interface are disposed on a substrate.

[0280] B56. The method of clause B55, wherein the first, second, and third offsets correspond to positions of the laser source, the lens, and the T/R interface on the substrate relative to the reference line.

[0281] B57. The method of any of clauses B51 to B56, wherein the lidar device is one of a plurality of lidar devices included in a lidar system.

[0282] B58. The method of clause B57, wherein the lidar device corresponds to a channel of a plurality of channels of the lidar system.

[0283] B59. The method of clause B57 or B58, wherein the lidar device is aligned to a linear focal plane of the lidar system.

[0284] B60. The method of any of clauses B57 to B59, wherein the lidar system includes a system lens and the reference line corresponds to a center of the system lens.

[0285] B61. The method of clause B60, wherein the laser source and the lens, as positioned, are configured to steer the transmit beam towards the center of the system lens.

[0286] B62. The method of clause B60 or B61, further comprising aligning the lidar device using an active alignment process that includes energizing the laser source and measuring energy associated with the transmit beam at the center of the system lens.

[0287] B63. A method for manufacturing a lidar device, the method comprising: providing a laser source configured to provide a transmit beam, wherein providing the laser source comprises positioning the laser source with a first offset relative to a reference line; providing a transmit/receive (T/R) interface configured to pass the transmit beam and reflect received light towards a detector, wherein providing the T/R interface comprises positioning the T/R interface with a second offset relative to the reference line; and providing a lens positioned between the laser source and the T/R interface, wherein providing the lens comprises positioning the lens with a third offset relative to the reference line, wherein the laser source and the lens, as positioned, steer the transmit beam toward a center of a system lens of a lidar system comprising the lidar device.

[0288] B64. The method of clause B63, wherein the T/R interface includes at least one mirror.

[0289] B65. The method of clause B63 or B64, wherein the detector is included in the T/R interface.

[0290] B66. The method of any of clauses B63 to B65, wherein positioning the laser source, the T/R interface, and the lens comprises disposing the laser source, the lens, and the T/R interface on a substrate.

[0291] B67. The method of clause B66, wherein the first offset, the second offset, and the third offset correspond to respective positions of the laser source, the lens, and the T/R interface on the substrate relative to the reference line.

[0292] B68. The method of clause B66 or B67, further comprising at least one of coupling, bonding, attaching, or fastening the laser source, the lens, and the T/R interface to the substrate.

[0293] B69. The method of any of clauses B63 to B68, wherein the lidar device is one of a plurality of lidar devices included in the lidar system.

[0294] B70. The method of any of clauses B63 to B69, wherein the positioning of the laser source, the T/R interface, and the lens aligns the lidar device to a linear focal plane of the lidar system.

[0295] B71. The method of any of clauses B63 to B70, further comprising aligning the lidar device using an active alignment process that includes energizing the laser source and measuring energy associated with the transmit beam at the center of the system lens.

[0296] B72. The method of any of clauses B63 to B71, wherein the reference line corresponds to the center of the system lens.

[0297] B73. The method of any of clauses B63 to B72, wherein the reference line is coincident with a boresight line of the system lens.

[0298] B74. The method of any of clauses B63 to B73, wherein the first offset is larger than the third offset and the third offset is larger than the second offset.

Terminology

[0299] The phrasing and terminology used herein is for the purpose of description and should not be regarded as limiting.

[0300] Measurements, sizes, amounts, and the like may be presented herein in a range format. The description in range format is provided merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as 1-20 meters should be considered to have specifically disclosed subranges such as 1 meter, 2 meters, 1-2 meters, less than 2 meters, 10-11 meters, 10-12 meters, 10-13 meters, 10-14 meters, 11-12 meters, 11-13 meters, etc.

[0301] Furthermore, connections between components or systems within the figures are not intended to be limited to direct connections. Rather, data or signals between these components may be modified, re-formatted, or otherwise changed by intermediary components. Also, additional or fewer connections may be used. The terms “coupled,” “connected,” or “communicatively coupled” shall be understood to include direct connections, indirect connections through one or more intermediary devices, wireless connections, and so forth.

[0302] Reference in the specification to “one embodiment,” “preferred embodiment,” “an embodiment,” “some embodiments,” or “embodiments” means that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the invention and may be in more than one embodiment. Also, the appearance of the above-noted phrases in various places in the specification is not necessarily referring to the same embodiment or embodiments.

[0303] The use of certain terms in various places in the specification is for illustration purposes only and should not be construed as limiting. A service, function, or resource is not limited to a single service, function, or resource; usage of these terms may refer to a grouping of related services, functions, or resources, which may be distributed or aggregated.

[0304] Furthermore, one skilled in the art shall recognize that: (1) certain steps may optionally be performed; (2) steps may not be limited to the specific order set forth herein; (3) certain steps may be performed in different orders; and (4) certain steps may be performed simultaneously or concurrently.

[0305] The term “approximately”, the phrase “approximately equal to”, and other similar phrases, as used in the specification and the claims (e.g., “X has a value of approximately Y” or “X is approximately equal to Y”), should be understood to mean that one value (X) is within a predetermined range of another value (Y). The predetermined range may be plus or minus 20%, 10%, 5%, 3%, 1%, 0.1%, or less than 0.1%, unless otherwise indicated.

[0306] The indefinite articles “a” and “an,” as used in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements).

[0307] As used in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used shall only be interpreted as indicating exclusive alternatives (i.e., “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.

[0308] As used in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements).

[0309] The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items.

[0310] Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Ordinal terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term), to distinguish the claim elements.

[0311] Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. Other steps or stages may be provided, or steps or stages may be eliminated, from the described processes. Accordingly, other implementations are within the scope of the following claims.

[0312] It will be appreciated by those skilled in the art that the preceding examples and embodiments are exemplary and not limiting to the scope of the present disclosure. It is intended that all permutations, enhancements, equivalents, combinations, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It shall also be noted that elements of any claims may be arranged differently including having multiple dependencies, configurations, and combinations.

[0313] Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description and drawings are by way of example only.