
Title:
A VISION SYSTEM AND VISION METHOD FOR A VEHICLE
Document Type and Number:
WIPO Patent Application WO/2019/012085
Kind Code:
A1
Abstract:
A vision system (1) for a vehicle (100) comprises a light source (2) arranged on an emitting side, adapted to emit a light beam (3) to a scanned surface (4) in the environment (5) of the vehicle (100), a receiving unit (26) arranged on a receiving side and comprising at least one light deflection device (6), at least one light sensing device (8), and a lens arrangement (11) with a field of view (90), wherein the at least one light deflection device (6) comprises an array of light deflection elements (7), wherein each light deflection element is adapted to redirect light which is incident on said light deflection element (7) from the scanned surface (4), and to change the direction of the redirected light between at least a first deflection direction and a second deflection direction, the at least one light sensing device (8) is adapted to sense light redirected from the light deflection device (6) in said first deflection direction, the lens arrangement (11) is adapted to focus a reflected light beam (16) from the scanned surface (4) to the at least one light deflection device (6), and a data processing device (19). The lens arrangement (11) comprises a plurality of lens systems (11a, 11b), and each one of the plurality of lens systems (11a, 11b) is adapted to focus a portion (16a, 16b) of the reflected light beam corresponding to a fraction (90a, 90b) of the field of view (90) to the at least one light deflection device (6).

Inventors:
ROYO SANTIAGO (ES)
RIU JORDI (ES)
RODRIGO NOEL (ES)
SANABRIA FERRAN (ES)
KÄLLHAMMER JAN-ERIK (SE)
Application Number:
PCT/EP2018/069036
Publication Date:
January 17, 2019
Filing Date:
July 12, 2018
Assignee:
VEONEER SWEDEN AB (SE)
BEAMAGINE S L (ES)
International Classes:
G01S17/89; G01S7/481; G01S17/931
Domestic Patent References:
WO 2017/040066 A1 (2017-03-09)
WO 2012/123809 A1 (2012-09-20)
WO 2014/125153 A1 (2014-08-21)
Foreign References:
DE 102015217908 A1 (2017-03-23)
DE 102005049471 A1 (2007-05-31)
EP 3182160 A1 (2017-06-21)
US 2008/0112029 A1 (2008-05-15)
Attorney, Agent or Firm:
MÜLLER VERWEYEN PATENTANWÄLTE (DE)
Claims:

1. A vision system (1) for a vehicle (100) comprising

- a light source (2) arranged on an emitting side, adapted to emit a light beam (3) to a scanned surface (4) in the environment (5) of the vehicle;

- a receiving unit (26) arranged on a receiving side and comprising at least one light deflection device (6), at least one light sensing device (8), and a lens arrangement (11) with a field of view (90); wherein

- the at least one light deflection device (6) comprises an array of light deflection elements (7), wherein each light deflection element (7) is adapted to redirect light which is incident on said light deflection element (7) from the scanned surface (4), and to change the direction of the redirected light between at least a first deflection direction and a second deflection direction;

- the at least one light sensing device (8) is adapted to sense light redirected from the light deflection device (6) in said first deflection direction;

- the lens arrangement (11) is adapted to focus a reflected light beam (16) from the scanned surface (4) to the at least one light deflection device (6);

- a data processing device (19);

characterized in that

- the lens arrangement (11) comprises a plurality of lens systems (11a, 11b); and

- each one of the plurality of lens systems (11a, 11b) is adapted to focus a portion (16a, 16b) of the reflected light beam (16) corresponding to a fraction (90a, 90b) of the field of view (90) to the at least one light deflection device (6).

2. The vision system (1) as claimed in claim 1, characterized in that the lens arrangement (11) is adapted to partition the field of view (90) so that the fractions (90a, 90b) of the field of view (90) corresponding to the plurality of lens systems (11a, 11b) overlap.

3. The vision system (1) according to claim 2, characterized in that the data processing device (19) is adapted to fuse the image data from overlapping portions of the fractions (90a, 90b).

4. The vision system (1) as claimed in any one of the preceding claims, characterized in that the plurality of lens systems (11a, 11b) is adapted to divide the field of view (90) so that the fractions (90a, 90b) align horizontally.

5. The vision system (1) as claimed in any one of the preceding claims, characterized in that the plurality of lens systems (11a, 11b) are arranged in a juxtaposition arrangement.

6. The vision system (1) as claimed in any one of the preceding claims, characterized in that the number of sensing devices (8) is less than the number of lens systems (11a, 11b) comprised in the lens arrangement (11) and/or the number of deflection devices (6) is less than the number of lens systems (11a, 11b).

7. The vision system (1) as claimed in any one of the preceding claims, characterized in that

- the receiving unit (26) comprises a prism system (91) arranged between the plurality of lens systems (11a, 11b) and the light deflection device (6); and

- the prism system (91) is adapted to direct light from a plurality of lens systems (11a, 11b) towards the at least one deflection device (6).

8. The vision system (1) as claimed in any one of the preceding claims, characterized in that

- the receiving unit (26) comprises at least one shutter (50a, 50b) ; and

- each of the at least one shutter (50a, 50b) corresponds to any one of the plurality of lens systems (11a, 11b) and is adapted to change between a light blocking state and a light transmitting state.

9. The vision system (1) as claimed in claim 8, characterized in that

- the vision system (1) is adapted to perform range-gated imaging by sequential opening and closing of the at least one shutter (50a, 50b) .

10. The vision system (1) as claimed in any one of the preceding claims, characterized in that the number of light sensing devices (8) is less than the number of light deflection devices (6) and in particular equals one.

11. The vision system (1) as claimed in any one of the preceding claims, characterized in that the receiving unit (26) comprises a plurality of light sensing devices (8) .

12. The vision system (1) as claimed in claim 11, characterized in that the number of light sensing devices (8) equals the number of lens systems (11a, 11b) comprised in the lens arrangement (11).

13. The vision system (1) as claimed in any one of the preceding claims, characterized in that the receiving unit (26) further comprises at least one light detecting device (9) adapted to detect light redirected from the light deflection device (6) in the second deflection direction.

14. The vision system (1) according to any one of the preceding claims, characterized in that the at least one light source (2), the at least one light sensing device (8), and the data processing device (19) form at least one LIDAR system to measure a set of time-of-flight values.

15. The vision system (1) according to any one of the preceding claims, characterized in that the at least one light sensing device (8) comprises avalanche photo diodes, photomultiplier tubes, single photon avalanche diodes, or an array of any of the aforementioned, and the light sensing device (8) is adapted to detect light in the UV, near infrared, short/mid/long wavelength infrared, and/or sub-mm/THz bands, either wideband or monochromatic, preferably wideband visible, near infrared, or short wavelength infrared.

16. A vision method for a vehicle (100) comprising

- emitting a light beam (3) to a scanned surface (4) in the environment (5) of the vehicle from a light source (2) arranged on an emitting side;

- a receiving unit (26) arranged on a receiving side and comprising at least one light deflection device (6), at least one light sensing device (8), and a lens arrangement (11) with a field of view (90); wherein

- deflecting light incident on a light deflection element (7) comprised in the at least one light deflection device (6), wherein each light deflection element (7) is adapted to redirect light which is incident on said light deflection element (7) from the scanned surface (4), and to change the direction of the redirected light between at least a first deflection direction and a second deflection direction;

- sensing light redirected from the light deflection device (6) in said first deflection direction by the at least one light sensing device (8);

- focusing a reflected light beam (16) from the scanned surface (4) to the at least one light deflection device (6) by the lens arrangement (11);

- comprising a data processing device (19);

characterized in that

- the lens arrangement (11) comprises a plurality of lens systems (11a, 11b); and

- each one of the plurality of lens systems (11a, 11b) focuses a portion (16a, 16b) of the reflected light beam (16) corresponding to a fraction (90a, 90b) of the field of view (90) to the at least one light deflection device (6).

Description:
A vision system and vision method for a vehicle

The invention relates to a vision system for a vehicle comprising a light source arranged on an emitting side, adapted to emit a light beam to a scanned surface in the environment of the vehicle, a receiving unit arranged on a receiving side and comprising at least one light deflection device, at least one light sensing device, and a lens arrangement with a field of view, wherein the at least one light deflection device comprises an array of light deflection elements, wherein each light deflection element is adapted to redirect light which is incident on said light deflection element from the scanned surface, and to change the direction of the redirected light between at least a first deflection direction and a second deflection direction, the at least one light sensing device is adapted to sense light redirected from the light deflection device in said first deflection direction, the lens arrangement is adapted to focus a reflected light beam from the scanned surface to the at least one light deflection device, and a data processing device.

Such systems are generally known, for example from WO 2012/123809 A1 and WO 2014/125153 A1. Herein, a surface is illuminated by a light beam that originates from a light source and is reflected by the surface. The reflected light beam is incident on a lens arrangement transmitting the light beam towards the light deflection device, where it is deflected in a specific manner to either a light sensing device or to an absorber. The time-of-flight of the light beam is indicative of the distance between the system and a point on the surface, whose spatial location is derivable from information on the light deflection device and from the time-of-flight. The field of view of such systems is limited by the optical properties of the lens arrangement, its geometry, and in particular the size of the entrance pupil through which the light enters the system. Due to imaging aberrations, a large field of view typically requires a small entrance pupil, while at the same time the area of the entrance pupil is proportional to the light gathering capability of the system. However, the amount of light passing through the entrance pupil and the lens arrangement is crucial for the light sensing capabilities of the sensing device. Thus, the size of the entrance pupil of the lens arrangement needs to balance the requirements of a large field of view and sufficient light transmission. A large field of view is favorable but leads to low light sensing capabilities and requires a higher-power light source to emit a high intensity light beam, leading to difficulties with power consumption, cooling, cost, durability, and eye safety constraints. A small field of view is typically not favorable either.

The object of the invention is to improve the field of view and the light sensing capabilities of the vision system and method.

The invention solves this problem with the features of the independent claims. The system according to the present invention suggests that the lens arrangement comprises a plurality of lens systems, and each one of the plurality of lens systems is adapted to focus a portion of the reflected light beam corresponding to a fraction of the field of view to the at least one light deflection device. According to the invention, the lens arrangement, which is adapted to focus a reflected light beam from the scanned surface to the at least one light deflection device, comprises a plurality of lens systems. A plurality of lens systems can admit more light into the receiving unit than a single lens system alone, and can reduce the required power of the light beam emitted by the light source. By a proper arrangement of the lens systems, the field of view can be extended compared to a single lens system. The division of the field of view can improve the light gathering capabilities, allowing a larger diameter of the entrance pupils of the lens systems, reducing the power of the light source, and/or increasing the range of the vision system.

The invention suggests that each one of the plurality of lens systems is adapted to focus a portion of the reflected light beam corresponding to a fraction of the field of view to the at least one light deflection device. The overall field of view is partitioned into fractions of the field of view and each lens system is adapted to focus the scanned surface corresponding to a respective fraction of the overall field of view.

The invention is applicable to any kind of vehicle, in particular motor vehicles and non-motorized vehicles, like cars, drones, boats, trains, airplanes and so on. The use of multiple imaging subsystems, i.e., segmentation of the field of view by the lens arrangement, can keep the range and power of the light source, e.g., the laser, while covering a larger field of view. In an equivalent manner, the invention can be used to keep the overall field of view and increase the attainable range. A number of design alternatives may be achieved, depending on the desired performance of the system and the number and geometrical distribution of the lens arrangements.

Preferably, the lens arrangement is adapted to partition the field of view so that the fractions of the field of view corresponding to the plurality of lens systems overlap. The overlap can be located in different areas of the images, depending on the application. The fractions of the field of view that are defined by the lens systems can overlap or touch in order to yield data of the scanned surface to provide a connected field of view without gaps or missing features of the scanned surface.

In a preferred embodiment, the data processing device is adapted to fuse the image data from overlapping portions of the fractions. To yield a unified image, the data processing device can fuse the image data corresponding to several portions of the field of view.
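As a minimal illustration of such fusion, the sketch below blends the overlapping columns of two horizontally aligned image fractions; the function name, the list-of-rows image representation, and the simple averaging rule are assumptions for illustration and are not taken from the application text:

```python
def fuse_fractions(left, right, overlap):
    """Fuse two horizontally aligned image fractions (lists of rows).

    The last `overlap` columns of `left` and the first `overlap`
    columns of `right` image the same region of the scanned surface;
    here they are simply averaged (hypothetical fusion rule).
    """
    fused = []
    for row_l, row_r in zip(left, right):
        blended = [(a + b) / 2 for a, b in zip(row_l[-overlap:], row_r[:overlap])]
        fused.append(row_l[:-overlap] + blended + row_r[overlap:])
    return fused
```

In practice the fusion would also have to register the fractions geometrically before blending; the sketch assumes the fractions are already aligned row by row.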

Advantageously, the plurality of lens systems is adapted to divide the field of view so that the fractions align horizontally. The fractions of the field of view can be aligned horizontally to increase the combined field of view in the horizontal direction.

Preferably, the plurality of lens systems are arranged in a juxtaposition arrangement. Each lens system is preferably arranged side by side with another lens system. The lens systems can be separated and can be inclined and/or be arranged with differing principal planes and/or optical axes to achieve a preferred set of fractions of the field of view.

Advantageously, the number of sensing devices is less than the number of lens systems comprised in the lens arrangement. This embodiment is particularly cost-effective, and a single sensing device is fed by light entering one or more lens systems.

In an advantageous embodiment, the receiving unit comprises a prism system arranged between the plurality of lens systems and the light deflection device, and the prism system is adapted to direct light from a plurality of lens systems towards the at least one deflection device. In this embodiment, light can be directed with prisms to the at least one deflection device, focused thereon and be deflected towards the at least one light sensing device.

In a preferred embodiment the receiving unit comprises at least one shutter. Each shutter corresponds to any one of the lens systems and is adapted to change between a light blocking state and a light transmitting state. To clearly differentiate light entering different lens systems, one or more lens systems can be equipped with shutters. More preferably, to yield data from a certain fraction of the field of view, light enters the optical unit through a single lens system only, controlled by a single open shutter, while the other shutters are closed.

In one embodiment, the vision system is adapted to perform range-gated imaging by sequential opening and closing of the at least one shutter in order to increase the light sensing capabilities. In particular, the vision system can be adapted to filter out signals within a given time period and/or to prevent the at least one light sensing device from sensing signals. In a preferred embodiment, the vision system comprises a first lens system providing a longer detection range than a second lens system due to the larger possible optical aperture of the first lens system compared to that of the second lens system. In this embodiment, the range-gated imaging can be performed by sequentially opening and closing individual shutters. Thereby, light is transmitted only through an opened shutter in a light transmitting state and/or preferably to a corresponding light sensing device. The first lens system and the second lens system are opened and closed in an alternating manner. This allows focusing on and/or detection of objects which are arranged at different distances from the vision system, in an alternating manner.
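The alternating shutter operation can be sketched as follows; the function and the two-shutter default are illustrative assumptions, since the application does not specify a timing scheme:

```python
def shutter_states(step, n_shutters=2):
    """Open exactly one shutter per scan step, in sequence, so that
    light reaches the sensing device through a single lens system
    at a time (illustrative model of the alternating operation)."""
    return ["open" if i == step % n_shutters else "closed"
            for i in range(n_shutters)]
```

With two shutters this reproduces the alternating pattern described above: shutter 50a open on even steps, shutter 50b open on odd steps.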

Advantageously, the number of light sensing devices is less than the number of light deflection devices, which is cost effective because fewer light sensing devices are required. In this embodiment, at least one light sensing device is adapted to sense the light redirected from a plurality of light deflection devices.

Preferably, the number of light sensing devices equals one. This embodiment is particularly cost-effective, and control and data processing is particularly simple.

Advantageously, the receiving unit comprises a plurality of light sensing devices. This embodiment is particularly versatile. The plurality of light sensing devices can acquire data simultaneously and the data processing device can be adapted to process the simultaneously acquired data, preferably on-the-fly.

In an advantageous embodiment, the number of sensing devices equals the number of lens systems comprised in the lens arrangement. In this embodiment, the receiving unit can comprise distinct optical paths which allow a versatile setup by introducing, for example, filters in order to detect and/or suppress different light characteristics, e.g., wavelengths, intensities, and/or polarizations. Each lens system and the corresponding sensing device form a subsystem in an arm of the receiving unit.

Preferably, the receiving unit further comprises at least one light detecting device adapted to detect light redirected from the light deflection device in the second deflection direction. The additional at least one light detecting device allows the acquisition of different information about the incident light; preferably, image data of different types can be recorded.

In a preferred embodiment, at least one light source, the at least one light sensing device and the data processing device form at least one LIDAR system adapted to measure time-of-flight values. The time-of-flight is the runtime of the light beam and/or a portion of a light beam from the light source, via reflection by the scanned surface in the vehicle's environment, until sensing by the light sensing device in the vision system. The time-of-flight corresponds to the distance between the vision system and the scanned surface, which is estimated or computed by the data processing device.
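Since the measured time-of-flight covers the round trip from the light source to the scanned surface and back, the distance follows as c·t/2; a minimal numeric sketch (the helper name is an assumption for illustration):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def tof_to_distance(tof_seconds):
    """One-way distance from a round-trip time-of-flight value:
    the beam travels to the scanned surface and back, so divide by 2."""
    return SPEED_OF_LIGHT * tof_seconds / 2.0
```

For example, a measured round trip of 1 µs corresponds to a surface roughly 150 m away.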

By an appropriate scanning technique, preferably realized by the deflection device and/or at least one MEMS device that directs the emitted light beam to the scanned surface, the environment can be scanned. The spatial resolution depends on the number of deflection elements comprised in the deflection device, the laser repetition rate, and on the number of light sensors comprised in the light sensing element. The measurement of the time-of-flight values can acquire depth information on a z-axis in addition to x- and y-coordinates given by the scanning procedure. In a preferred embodiment wherein the light source is a laser, the vision system comprises a LIDAR system.
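The combination of scan direction (x/y) and time-of-flight depth (z) can be sketched as the conversion of one scan sample into Cartesian coordinates; the axis conventions and the function itself are assumptions for illustration, not taken from the application:

```python
import math

def scan_sample_to_xyz(azimuth_deg, elevation_deg, range_m):
    """Convert a scan sample (beam direction plus time-of-flight
    range) into Cartesian coordinates, with z pointing along the
    optical axis (assumed convention)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.sin(az)
    y = range_m * math.sin(el)
    z = range_m * math.cos(el) * math.cos(az)
    return x, y, z
```

Repeating this over all scan positions yields a 3D point cloud of the scanned surface.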

In a preferred embodiment the at least one light sensing device comprises focal plane arrays, PIN photo diodes, avalanche photo diodes, photomultiplier tubes, single photon avalanche diodes, or an array of any of the aforementioned, and/or is adapted to detect light in the UV, near infrared, short/mid/long wavelength infrared, and/or sub-mm/THz bands, either wideband or monochromatic, preferably wideband visible, near infrared or short wavelength infrared. Sensors for LIDAR can be PIN photo diodes, avalanche photo diodes, single photon avalanche diodes and photomultiplier tubes, either based on Silicon, SiPM, or InGaAs technologies. Also, arrays of the aforementioned devices can be comprised in the light sensing device. The radiometric measurement could preferably be used to modify the responsivity of the light sensing device to enhance stability, frame rate, and/or other features of the light sensing device. Further, an interesting feature of the system is its ability to perform foveated imaging procedures, that is, to adjust the spatial resolution when scanning or detecting some region of interest, e.g., a pedestrian or a cyclist at the side of the vehicle, or some long-distance obstacle on the road.

In the following the invention shall be illustrated on the basis of preferred embodiments with reference to the accompanying drawings as non-limiting cases, wherein:

Fig. 1 shows a schematic view of a vision system;

Fig. 2 shows an optical unit of the vision system;

Figs. 3-7 show schematic views of an optical unit of the vision system in different embodiments of the invention.

According to Figure 1, the vision system 1 is mounted in/on a vehicle 100 to capture images of a scanned surface 4 in the environment 5 of the vehicle 100. The vision system 1 comprises an optical unit 22 having a light emitting unit 25 and a light receiving unit 26, and a data processing device 19. The vision system 1 can further include other detection systems and/or sensors such as radar or other cameras.

The light emitting unit 25 comprises a light source 2 that is adapted to emit a light beam 3, which is preferably directed through a first lens system 10 towards the environment 5 of the vehicle 100. The light beam 3 eventually interacts with the environment 5, in particular the scanned surface 4 or dust, snow, rain, and/or fog in the environment 5. The light beam 3 is reflected, and a reflected portion 16 of the light beam 3 enters the optical unit 22, more particularly the receiving unit 26 thereof, through a second lens arrangement 11. In addition to the reflected portion 16 of the light beam 3, other light from the environment 5 enters the optical unit 22, preferably through the second lens arrangement 11. The light entering the optical unit 22 is preferably recorded by the optical unit 22, and the recorded image data is processed in a combined manner with data fusion by the data processing device 19.

The driver assistance device 20 is able to trigger defined driver assistance actions to assist the driver, e.g. braking, acceleration, steering, showing information etc., based on the data provided by the data processing device 19. In the embodiment shown in Figure 1, the vision system 1 comprises a single optical unit 22 mounted for example in the front part of the vehicle 100 and directed for example towards the front of the vehicle 100. Other positions in/on the vehicle are also possible, for example mounted at and/or directed towards the rear, the left, and/or the right. The vision system 1 can also comprise a plurality of optical units 22 and/or a plurality of receiving units 26 that are mounted and/or directed for example towards the rear, the left, and/or the right. The viewing direction of the optical unit 22 and/or the receiving unit 26 is preferably variable. A plurality of optical units 22 and/or receiving units 26 allows covering a wide field of view, even up to 360° around the vehicle 100.
A plurality of optical units 22 and/or the receiving units 26 could communicate with separate data processing devices 19, communicate with a single data processing device 19, or work with a master-slave configuration. In particular, the images recorded from the environment 5 could be used for physical calibration of the plurality of the optical units 22 and/or the receiving units 26 to cover a large field of view. The multiplicity of imaging subsystems, defined by the lens arrangement 11, may be used to achieve a large field of view 90.

Figure 2 shows the vision system 1 in more detail. The light source 2, which is arranged in the light emitting unit 25, preferably comprises a laser, including laser diodes, fiber lasers, etc., such that the vision system 1 is a LIDAR system. Other embodiments of the light source 2 are also possible, e.g. LEDs, or polarized light sources of different wavebands, adapted to correspond with the recording capabilities of a light sensing device 8 comprised in the optical unit 22 and/or in the receiving unit 26.

In this embodiment, the light emitting unit 25 and the light receiving unit 26 are housed in the same housing 27. It is also possible to arrange the emitting unit 25 and the receiving unit 26 in different housings 27.

The use of a light source 2, or a plurality of light sources 2, corresponding to the sensing and detection capabilities of the vision system 1 could enable multi-specular methods, implementation of detection algorithms, improvement of the sensitivity of retro-reflected beams, and/or road friction estimations. The improved light gathering capabilities may be used to optimize the light beam power to reach a given range depending on the specific goals of the system and its environment 5.

The light beam 3 emitted from the light source 2 is preferably divided or split, and a portion of the light beam is directed towards a trigger device 21 comprised in the emitting unit 25 to allow an accurate time-of-flight estimation. The trigger device 21 is preferably in communication with the data processing device 19 to start and/or synchronize the time-of-flight measurement.

Another portion of the light beam 3 can in some embodiments be redirected by a scanning device 15, comprised in the emitting unit 25, for example a mirror rotatable around at least one axis. The light beam 3 can in some embodiments be widened by suited optical components, e.g., a first system of lenses 10, which may defocus, and thereby widen, the light beam 3, towards the scanned surface 4. The scanning device 15, like a mirror, a group of mirrors, a prism arrangement such as Risley prisms, and/or other optical components and/or a group thereof, is adapted to be controllable by and in communication with the data processing device 19. The scanning device 15, which may in particular be a MEMS device, is preferably adapted to be rotatable around at least two axes and/or a plurality of MEMS devices 15 is adapted to rotate around at least one axis, allowing scanning of the environment 5 of the vehicle 100. For example a first scanning device 15 could be arranged to perform the rotation around a first axis and a second scanning device 15 could be arranged to perform a rotation around a second axis. By means of the scanning device 15, the light beam 3 can be directed sequentially towards points or spots on the scanned surface 4. In another embodiment, a cylindrical lens could be used to widen the light beam 3, 29.

A portion of the light beam 3 is reflected on the scanned surface 4 or by other reflectors and/or scatterers in the environment 5, and a reflected light beam portion 16 of the light beam 3 enters the optical unit 22 and/or the receiving unit 26, e.g., by passing through the lens arrangement 11 or, alternatively, an entrance window. The first lens system 10 and the second lens arrangement 11 can coincide. The light beam portion 16 travelling through the optical unit 22 and/or the receiving unit 26 is preferably directed towards a prism 14 placed in the optical path of the light beam portion 16 within the optical unit 22 and/or the receiving unit 26, allowing for optical path division and/or light deflection.

The field of view 90 of the lens arrangement 11 is defined by the angle between the pairs of dashed lines indicated with the numeral of the field of view 90. In this embodiment, the lens arrangement 11 comprises two lens systems 11a and 11b. The fraction 90a of the field of view 90 of lens system 11a is defined by the angle between the pairs of dashed lines indicated with the numeral of the fraction 90a. The fraction 90b of the field of view 90 of lens system 11b is defined by the angle between the pairs of dashed lines indicated with the numeral of the fraction 90b.

In a preferred embodiment, the lens systems 11a, 11b can be complex optical systems involving a number of different single lenses, beam splitters, polarizers, filters, diaphragms or other optical components arranged in combination in order to provide a given optical performance. Preferably, the lens systems 11a, 11b comprise lens objectives.

The overall field of view 90 is a combination of and segmented into the fractions 90a, 90b of the field of view 90. The fractions 90a, 90b are comprised by the overall field of view 90. That is, the field of view 90 of the target is divided into more than one fraction 90a, 90b by the lens arrangement 11. In this embodiment, the fractions 90a, 90b overlap and the field of view 90 is connected. In other embodiments, the fractions 90a, 90b of the field of view 90 do not overlap and the combined overall field of view 90 is disconnected.

Preferably, the lens arrangement 11 is adapted to partition the field of view 90 so that the fractions 90a, 90b overlap. The overlap of the fractions 90a, 90b can be small and allows for image matching and/or image fusion, preferably by software and/or the data processing device 19. The overlap may be used to automatically reconstruct the full field of view 90 image or for other applications. In this embodiment, the lens systems 11a, 11b are arranged side by side in juxtaposition and do not overlap, that is, the lens systems 11a, 11b are arranged in a non-sequential manner with respect to the optical path. The lens systems 11a, 11b are preferably embedded in a frame or in separate frames which are adapted to orientate the optical axes of the lens systems 11a, 11b in a desired direction. The lens system frames can touch each other or be separated. The lens systems 11a, 11b can be arranged to achieve a preferred overall field of view 90; for example, the optical axes of the lens systems 11a, 11b can be parallel, slightly tilted (<15°), tilted (15°-60°), strongly tilted (60°-90°) and/or directed into different directions (>90°). The optical axes of the lens systems 11a, 11b can be adapted according to the individual fractions 90a, 90b of the field of view 90 to yield a preferred overall field of view 90. The fractions 90a, 90b of the field of view 90 can be different for each lens arrangement 11, so as to improve the performance. The lens systems 11a, 11b with a smaller field of view will have a larger range, or need less power for the same range. The overall field of view 90 can be divided into as many fractions 90a, 90b as desired by a corresponding number of lens systems 11a, 11b. The overall field of view 90 can in principle reach up to 360° if so desired, provided suitable arrangements are made. The number of lens systems comprised in the lens arrangement 11 is not limited to two and can be increased to meet the respective requirements.
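The relationship between the fraction size, the number of lens systems, and the pairwise overlap can be sketched as a simple angular budget; the formula below is an illustrative assumption for horizontally juxtaposed fractions with equal pairwise overlap, not a formula from the application:

```python
def combined_fov_deg(fraction_fov_deg, n_systems, overlap_deg):
    """Overall horizontal field of view of n juxtaposed lens systems
    whose neighbouring fractions each overlap by `overlap_deg`."""
    return n_systems * fraction_fov_deg - (n_systems - 1) * overlap_deg
```

For example, two lens systems each covering 60° with a 10° overlap would yield an overall field of view of 110°, while each individual entrance pupil only has to serve a 60° fraction.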

Smaller fractions 90a, 90b of the field of view 90 enable larger entrance apertures or lens diameters, which results in better light gathering capability and thus better performance of the vision system 1 due to the availability of a stronger signal. As the overall field of view 90 is divided, each single imaging subsystem, defined by the lens arrangement 11, needs to cover only a fraction 90a, 90b smaller than the overall field of view 90, relaxing the trade-off, related to the aperture of the entrance pupil, between the field of view 90 and the range of the vision system 1.
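The aperture-versus-range trade-off described above can be sketched with a simplified link-budget relation: the received power scales with aperture area over range squared, so for a fixed detection threshold the achievable range scales linearly with the entrance-aperture diameter. The reference values below are arbitrary illustrative assumptions, not figures from this disclosure.

```python
def max_range(aperture_diam_m, ref_range_m=100.0, ref_diam_m=0.02):
    """Relative detection range under a simplified LIDAR link budget.

    Received power ~ aperture area / range^2, so at a fixed detection
    threshold the range scales linearly with the aperture diameter.
    The reference range and diameter are illustrative assumptions.
    """
    return ref_range_m * (aperture_diam_m / ref_diam_m)

# Halving the field-of-view fraction allows (roughly) doubling the entrance
# aperture for the same deflection-device size, doubling the range:
print(max_range(0.04))  # -> 200.0
```

This is only a first-order sketch; real range additionally depends on laser power, target reflectivity, atmospheric losses and detector sensitivity.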

Each lens system 11a, lib comprised by the lens arrangement 11 is adapted to focus a corresponding light beam portion 16a, 16b to the light deflection device 6. The light beam portions 16a, 16b are indicated in the figure by the solid lines with the numerals of the respective light beam portions 16a, 16b. The light beam portions 16a, 16b are comprised in the overall light beam portion 16 which is reflected by the scanned surface 4.

Each lens system 11a, lib comprised by the lens arrangement 11 can preferably be linked to at least one shutter 50a, 50b as explained later with respect to Figure 7.

In the embodiment of Figure 2, the light beam portion 16 is directed towards a light deflection device 6 comprising a plurality of light deflection elements 7, each of which is adapted to deflect the light beam portion 16 and to change the direction of the redirected light between at least a first deflection direction as a first light beam portion 18 and a second deflection direction as a second light beam portion 17. The first light beam portion 18 preferably is deflected via the prism 14, and is directed towards the light sensing device 8 preferably through a third lens system 12. The second light beam portion 17 is directed towards an absorber to reduce light scattering or in an alternative embodiment towards a detecting device 9 preferably via a fourth lens system 13. The first light beam portion 18 is directed to, and incident on, the light sensing device 8. The second light beam portion 17 is directed to, and incident on, the absorber or the detecting device 9.

Preferably, the light deflection device 6 is a DMD, i.e., a digital micromirror device. The deflection device 6 is in communication with the data processing device 19, which is adapted to control the deflection elements 7. Alternatives to the DMD for the deflection device 6 may be other active optical elements, in particular pixelated arrays (such as LCDs) or even deformable mirrors. Additional passive or active optical elements, such as additional DMDs, LCDs, and/or deformable mirrors, can be used in the system to obtain better data quality. Active strategies for local image quality improvement, using a deformable mirror in the optical path or an LCD device used as a shutter or as a phase modulator, can enhance the quality of the data.

The detecting device 9 can comprise CCD/CMOS sensors, focal plane arrays or polarimeters, or any other area or line imaging sensor composed thereof. CCD/CMOS images may be RGB or grayscale depending on the embodiment. Polarimetric images can be in a convenient format obtained after sensing and computational analysis, which could be performed in the data processing device 19 and/or a suitable processing device comprised in the detecting device 9. Also, polarization filters can be located in the optical path within the vision system, preferably in combination with a laser as light source 2 and/or a polarimeter as detecting device 9. The detecting device 9 can in particular comprise different sensors to detect different properties, e.g., different polarizations and/or different spectral bands. Also a detecting device 9 with varying polarizing filters on different areas (pixels) of the detecting device 9 is possible. Additional optics, either for laser spot filtering and/or for optimization of focusing on the detecting device 9, is possible. Electronics comprised in the deflection device 6, the light sensing device 8 and/or the light detecting device 9 could be coordinated or controlled by the data processing device 19. The communication between the light sensing device 8 and the data processing device 19 can also be parallel to the communication between the detecting device 9 and the data processing device 19.

The data processing device 19 can comprise a pre-processor adapted to control the capture of images, time-of-flight measurements, and/or other data by the light sensing device 8 and the detecting device 9, to control the deflection device 6, and to receive the electrical signal containing the information from the light sensing device 8, the detecting device 9 and the light deflection device 6. The pre-processor may be realized by a dedicated hardware circuit, for example a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). Alternatively the pre-processor, or part of its functions, can be realized in the data processing device 19 or a System-On-Chip (SoC) device comprising, for example, FPGA, processing device, ARM and/or microprocessor functionality.

Further image and data processing is carried out by corresponding software in the data processing device 19. The image and data processing in the processing device 19 may for example comprise identifying and preferably also classifying possible objects in the surroundings of the vehicle, such as pedestrians, other vehicles, bicyclists and/or large animals; tracking over time the position of object candidates identified in the captured images; computing depth images based on the time-of-flight values; and activating or controlling at least one driver assistance device 20, for example depending on an estimation performed with respect to a tracked object, for example an estimated collision probability. The driver assistance device 20 may in particular comprise a display device to display information on a possibly detected object and/or on the environment of the vehicle. However, the invention is not limited to a display device. The driver assistance device 20 may in addition or alternatively comprise a warning device adapted to provide a collision warning to the driver by suitable optical, acoustical and/or haptic warning signals; one or more restraint systems such as occupant airbags or safety belt tensioners, pedestrian airbags, hood lifters and the like; and/or dynamic vehicle control systems such as brake or steering control devices.
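The computation of depth images from time-of-flight values mentioned above reduces, per measurement, to halving the round-trip distance travelled at the speed of light. A minimal sketch (the function name is an assumption made here; the disclosure does not specify the implementation):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth_m(round_trip_s):
    """Depth from a time-of-flight measurement: half the round-trip distance."""
    return C * round_trip_s / 2.0

# A round trip of 667 ns corresponds to a depth of roughly 100 m:
print(tof_depth_m(667e-9))
```

Applying this per deflection element 7 over a scan yields the depth image processed by the data processing device 19.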

The output of the data processing device 19 can also be used as input to highly automated, piloted, and/or autonomous driving functions in a vehicle.

The data processing device 19 is preferably a digital device which is programmed or programmable and preferably comprises a microprocessor, micro-controller, digital signal processor (processing device), field programmable gate array (FPGA), or a System-On-Chip (SoC) device, and preferably has access to, or comprises, a memory device. The data processing device 19, pre-processing device and the memory device are preferably realised in an on-board electronic control unit (ECU) and may be connected to other components of the vision system 1 via a separate cable or a vehicle data bus. In another embodiment the ECU and one or more of the vision systems 1 can be integrated into a single unit, where a one-box solution including the ECU and all vision systems 1 can be preferred. All steps from data acquisition, imaging, depth estimation, pre-processing and processing to the possible activation or control of the driver assistance device 20 are performed automatically and continuously during driving in real time.

In a particular application, LIDAR, polarimetric, RGB and/or FPA images are gathered together using electronics, including a frame-grabber and the data processing device 19, to have them processed and support the decision making process of the driver assistance device 20.

All parts of the receiving unit 26, in particular the lens arrangement 11, the prism 14, the light sensing device 8 and the light deflection device 6 are preferably housed in and/or attached to the same single housing 27.

Figures 3-7 show schematic views of embodiments of the receiving unit 26 of the vision system 1. In these embodiments, the light receiving unit 26 is housed in the housing 27, possibly separate from the light emitting unit 25. In other embodiments the light emitting unit 25 can be arranged within the shown housings 27.

Figure 3 shows a receiving unit 26 with a single deflection device 6 and a single sensing device 8. This embodiment is particularly cost-effective, compact and space-saving. The lens arrangement 11 comprises the lens systems 11a, 11b, and a prism system 91 comprised by the receiving unit 26 comprises second prisms 92a, 92b. The second prisms 92a, 92b are arranged in the optical path defined by the lens systems 11a, 11b. The prism system 91 can be absent, depending on the requirements of space and optics. Preferably, to each lens system 11a, 11b corresponds at least one prism of the prism system 91. The prisms 92a, 92b are arranged to guide the light beam portions 16a, 16b to a common prism 14 from where the light is directed towards the light deflection device 6. The light deflection device 6 can direct the light towards the light sensing device 8, preferably through the prism 14. In a typical configuration, the two imaging subsystems, defined by the lens systems 11a and 11b, divide the total field of view 90 horizontally and focus the light beam portion 16 onto a common deflection device 6, which then projects the received light onto a single light sensing device 8.

Preferably, to simplify the time-of-flight measurements, the prisms 92a, 92b are arranged so that the optical path lengths between the lens systems 11a, 11b and the light sensing device 8 are equal. The prism system 91 can comprise other optical components to control the optical path of the light beam portion 16, e.g., mirrors, apertures, lens systems, and/or a combination thereof.
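Equalizing the internal optical path lengths removes the per-arm timing offset that would otherwise have to be subtracted from each time-of-flight measurement. A minimal sketch of that correction (the function name and interface are assumptions for illustration):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_correction_s(path_a_m, path_b_m):
    """Timing offset between two receiving arms with different internal
    optical path lengths.

    If the prisms 92a, 92b equalize the paths, this correction is zero,
    which is why equal path lengths simplify the time-of-flight measurement.
    """
    return (path_a_m - path_b_m) / C

# Equal paths need no correction; a 30 cm difference costs about 1 ns:
print(tof_correction_s(0.1, 0.1), tof_correction_s(0.4, 0.1))
```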

The receiving unit 26 as shown schematically in Figure 3 is conceptually simple and desirable. Depending on the optical configuration, for example if large fields of view 90 or small detector sizes are desired, one of the embodiments of Figures 4 and 5 could be advantageous, wherein the deflection device 6 and the light sensing device 8 have been duplicated in two separate arms, each having its own deflection device 6a, 6b and light sensing device 8a, 8b. Figure 4 shows a receiving unit 26 with two deflection devices 6a, 6b and two sensing devices 8a, 8b. In this embodiment, the imaging system is doubled within the receiving unit 26 and the prism system 91 is preferably absent. The light sensing devices 8a, 8b can acquire data simultaneously or sequentially in order to be processed by the data processing device 19.

Figure 5 shows a receiving unit 26 as in Figure 4 with additional light detecting devices 9a, 9b. The additional light detecting devices indicate the incorporation of other optical elements, such as CCD/CMOS for RGB or polarimetric imaging or a focal plane array (FPA), to enable simultaneous depth and image acquisition. Multiple imaging sensors, e.g., an RGB and a polarimetric sensor combined with the time-of-flight image, could be combined using suitable optical and mechanical arrangements.

Figure 6 shows an embodiment of an optical unit 22 comprising a single light emitting unit 25, for example having a 60° scanning device 15, and two light receiving units 26, for example each having a 30° field of view to cover the whole field of view. The light emitting unit 25 comprises a single light source 2 and a single scanning device 15. In this embodiment, a single light beam 3 is emitted into the environment 5. The light beam 3 is widened by a first lens system 10 before being directed towards the scanned surface 4.

Each receiving unit 26 in Figure 6 is adapted to capture light originating from the same light emitting unit 25 and reflected by the scanned surface 4. The receiving units 26 can be identical, and can be similar to the one shown in Figure 2. However, the prism 14 shown in Figure 2 has been dispensed with in the embodiment of Figure 6, which reduces the optical losses. Both receiving units 26 use the same light sensing device 8. In other words, the light incident on the deflection device 6a belonging to one receiving unit 26 can be deflected onto the light sensing device 8, and the light incident on the other deflection device 6b belonging to the other receiving unit 26 can be deflected onto the same light sensing device 8. In short, the common light sensing device 8 is adapted to sense light from a plurality of light deflection devices 6a, 6b. This is a very cost-effective embodiment, since only one light sensing device 8 is required.

In addition to the embodiment in Figure 3, the embodiment shown in Figure 7 comprises a plurality of, for example two, shutters 50a, 50b. Generally, at least one shutter 50a, 50b is provided. The at least one shutter 50a, 50b is adapted to change between a light blocking and a light transmitting state. The at least one shutter 50a, 50b can be arranged upstream or downstream of the respective lens system 11a, 11b with respect to the optical path.

The shutters 50a, 50b are controlled by the data processing device 19, advantageously in coordination with the light deflection device 6. For example, if only one shutter 50a corresponding to a particular lens system 11a is open while the other shutter 50b corresponding to the other lens system 11b is closed, the only light which is focused by the lens arrangement 11 is the light beam portion 16a which passes through the lens system 11a and corresponds to the fraction 90a of the overall field of view 90, reducing ambient light and thus improving the signal-to-noise ratio.

A sequential opening and closing of the individual shutters 50a, 50b allows sequential scanning of the fractions 90a, 90b of the overall field of view 90, which is called range-gated imaging. Preferably, the number of shutters 50a, 50b equals the number of lens systems 11a, 11b.
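The sequential opening and closing of the shutters can be sketched as a simple control loop. The shutter and capture interfaces below are hypothetical, since the disclosure does not define a software API for the shutters 50a, 50b.

```python
class Shutter:
    """Hypothetical stand-in for a shutter 50a, 50b with two states."""
    def __init__(self):
        self.open = False

    def set_open(self, state):
        self.open = state

def scan_fractions(shutters, capture):
    """Open one shutter at a time and capture the corresponding
    fraction 90a, 90b of the overall field of view 90."""
    frames = []
    for active in shutters:
        for s in shutters:
            s.set_open(s is active)  # block every fraction except the active one
        frames.append(capture())     # only one field-of-view fraction reaches the sensor
    return frames

shutters = [Shutter(), Shutter()]
print(scan_fractions(shutters, lambda: [s.open for s in shutters]))
# -> [[True, False], [False, True]]
```

In the disclosed system this sequencing would be performed by the data processing device 19, coordinated with the light deflection device 6.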

The first lens system 11a has a first field of view 190. The second lens system 11b has a second field of view 191 that differs from the first field of view 190 of the first lens system 11a. The lens systems 11a, 11b can be arranged so that the fields of view 190, 191 overlap at least partially and/or are disjoint.

Overlapping fields of view 190, 191 of the lens systems 11a, 11b can have the following potential benefits. The detection range can be increased by using one field of view 190, 191 narrower than the other field of view 190, 191. The range can be increased by having at least two laser light beams illuminating the same location in the overlap region of the fields of view 190, 191, thereby providing more optical power at the specific location. The resolution can be increased by using one field of view 190, 191 narrower than the other field of view 190, 191, which allows subsampling of each illuminated spot.

In one embodiment the fields of view 190, 191 overlap so that the first field of view 190 is entirely located within the second field of view 191, i.e., the second field of view 191 is larger and/or wider than the first field of view 190 and fully overlaps the first field of view 190.

The lens systems 11a, 11b can be configured to have different fields of view 190, 191 and/or partially or fully overlapping detection areas, i.e., fields of view 190, 191. For example, with the first lens system 11a having a first field of view 190 that is narrower than, and overlaps, the second field of view 191 of the second lens system 11b, a zoom-in function is generated by the narrower first field of view 190. This provides a longer detection range due to the larger possible optical aperture of the first lens system 11a, which is possible for a narrower field of view 190 and the same size of the deflection device 6. The wider, second field of view 191, imaging through the second lens system 11b, will provide a wide overview but less range than the first field of view 190. The narrower, first field of view 190, imaging through the first lens system 11a, will provide a longer range at a part of the area covered by the second lens system 11b.