

Title:
SURROUND VIEW MONITORING SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2024/057060
Kind Code:
A1
Abstract:
The present disclosure provides a surround view monitoring system and method for a vehicle. The system comprises a plurality of cameras mounted at the vehicle, a kinematic data measuring part, a processor and a display screen. The kinematic data measuring part is configured to determine kinematic data of the vehicle. The processor is configured to process image data captured by the plurality of cameras to generate a first three dimensional (3D) representation representative of a surrounding area and to process the image data and the kinematic data to generate a second 3D representation representative of a playground area under the vehicle. The processor is further configured to generate a third 3D vehicle representation of the vehicle. The display screen is configured to display a combined representation of the first 3D representation, the second 3D representation and the third 3D vehicle representation as would be viewed from a virtual camera viewpoint. A portion of the third 3D vehicle representation is rendered as displayed to be transparent to enable viewing at the display screen a portion of the first 3D representation and a portion of the second 3D representation. The combined representation further comprises a dynamic guideline showing a trajectory of the vehicle. The dynamic guideline is rendered based at least on the kinematic data.

Inventors:
PHAN DAI THANH (VN)
NGUYEN PHUC THIEN (VN)
NGUYEN CHI THANH (VN)
VU DUC CHAN (VN)
NGUYEN TRUONG TRUNG TIN (VN)
TRAN VAN THANG (VN)
NGUYEN DANG QUANG (VN)
HO NGOC VUONG (VN)
BUI HAI HUNG (VN)
Application Number:
PCT/IB2022/058604
Publication Date:
March 21, 2024
Filing Date:
September 13, 2022
Assignee:
VINAI ARTIFICIAL INTELLIGENCE APPLICATION AND RES JOINT STOCK COMPANY (VN)
International Classes:
H04N7/18; B60R1/00; G06T15/20; G06T19/00
Foreign References:
US20190244324A12019-08-08
US20170341583A12017-11-30
EP3967554A12022-03-16
US20140347450A12014-11-27
US20140350834A12014-11-27
EP2769883A12014-08-27
Attorney, Agent or Firm:
VISION & ASSOCIATES COMPANY LIMITED (VISION & ASSOCIATES) (VN)
Claims:
CLAIMS

1. A surround view monitoring system comprising: a plurality of cameras mounted at a vehicle, wherein the plurality of cameras has respective fields of view exterior of the vehicle; a kinematic data measuring part mounted at the vehicle, wherein the kinematic data measuring part is configured to determine kinematic data of the vehicle; a processor configured to process image data captured by the plurality of cameras to generate a first three dimensional representation representative of a surrounding area surrounding the vehicle and process the image data and the kinematic data to generate a second three dimensional representation representative of a playground area under the vehicle; wherein the processor is further configured to generate a third three dimensional vehicle representation representative of the vehicle; and a display screen viewable by a driver of the vehicle, wherein the display screen is configured to display a combined representation of the first three dimensional representation, the second three dimensional representation and the third three dimensional vehicle representation as would be viewed from a virtual camera viewpoint selected from (i) exterior to, higher than and rear-left or rear-right of the vehicle, (ii) exterior to, higher than and top-down of the vehicle, (iii) interior with a field of view forward of the vehicle, and (iv) interior with a field of view rearward of the vehicle; wherein a portion of the third three dimensional vehicle representation is rendered as displayed to be transparent to enable viewing at the display screen a portion of the first three dimensional representation and a portion of the second three dimensional representation that would otherwise be hidden by non-transparent display of the portion of the third three dimensional vehicle representation; and wherein the combined representation further comprises a dynamic guideline showing a trajectory of the vehicle; wherein the dynamic guideline is rendered based at least on the kinematic data.

2. The surround view monitoring system of claim 1 further comprises an input interface configured to receive a user input representative of a transparency degree, a texture type, a color and a decoration of the third three dimensional vehicle representation, wherein the surround view monitoring system has a capability of adjusting at least one of a degree of transparency, a texture, a color and a decoration of the transparent portion of the third three dimensional vehicle representation based on the user input.

3. The surround view monitoring system of claim 2 further comprises a deep vision diagnostic mechanism configured to analyze a road condition and a driving condition to generate diagnosed data, wherein the capability of adjusting at least one of the degree of transparency, the texture, the color and the decoration of the transparent portion of the third three dimensional vehicle representation is further based on the diagnosed data.

4. The surround view monitoring system of claim 3, wherein the kinematic data measuring part comprises a wheel odometer configured to determine a moving direction, a moving distance and a rotation angle of wheels of the vehicle between successive measures of the wheel odometer, wherein the kinematic data comprises the moving direction, the moving distance and the rotation angle of wheels of the vehicle.

5. The surround view monitoring system of claim 4, wherein the kinematic data measuring part further comprises an Inertial Measurement Unit (IMU) configured to determine an angular acceleration, a linear acceleration, an angular velocity of the vehicle between successive measures of the IMU and further comprises a visual odometer configured to determine a position and an orientation of the vehicle, wherein the kinematic data further comprises the angular acceleration, the linear acceleration, the angular velocity, the position and the orientation of the vehicle.

6. The surround view monitoring system of claim 5, wherein the operation of processing of the image data captured by the plurality of cameras to generate the first three dimensional representation of the surrounding area, by the processor, comprises rendering or updating a first texture of the first three dimensional representation in real time based on the image data captured by the plurality of cameras.

7. The surround view monitoring system of claim 5, wherein the operation of processing of the image data and the kinematic data to generate the second three dimensional representation, by the processor, comprises rendering or updating a second texture of the second three dimensional representation in real time based on the image data and the kinematic data.

8. The surround view monitoring system of claim 7, wherein the operation of rendering or updating of the second texture of the second three dimensional representation in real time comprises iteratively performing: rendering the second texture of the second three dimensional representation; translating and rotating in real time the second texture of the second three dimensional representation according to the kinematic data; and updating in real time missing data and outdated data only when the missing data reaches a pre-determined amount.

9. The surround view monitoring system of claim 7, wherein the operation of rendering or updating of the second texture of the second three dimensional representation comprises performing steps of: a) obtaining a moving distance from the kinematic data; b) if the moving distance does not exceed a distance threshold, performing steps c) to e) then returning to step a); if the moving distance exceeds the distance threshold, performing steps f) to g) then returning to step a); c) updating a rotation matrix and a translation vector that respectively represent rotation and translation of the vehicle based on the kinematic data; d) rotating and translating the second texture of the second three dimensional representation using the updated rotation matrix and the updated translation vector in step c); e) rendering a third texture of an under-chassis area being a part of the playground area that is out of the respective fields of view of the plurality of cameras based on the rotated and translated second texture in step d); f) updating the second texture of the second three dimensional representation based on the rotation matrix and the translation vector estimated from the kinematic data and the image data; g) rendering the third texture of the under-chassis area based on the updated second texture of step f) and resetting the rotation matrix and the translation vector to a default value.

10. The surround view monitoring system of claim 9, wherein the rotation matrix and the translation vector are estimated from the kinematic data based on a bicycle kinematic model that is transformed from a four-wheel kinematic model.

11. The surround view monitoring system of claim 10 further comprises a deep vision object detection mechanism operable to detect an object that potentially could collide with the vehicle and to emphasize the object on the display screen.

12. The surround view monitoring system of claim 11 further comprises a deep vision brightness enhancement mechanism operable to augment a brightness of the combined representation when the driving condition is low-lighting.

13. The surround view monitoring system of claim 12 further comprises a deep vision photometric enhancement mechanism operable to equalize and balance image quality of a plurality of images captured from the plurality of cameras; wherein the image quality comprises brightness, color and tone.

14. The surround view monitoring system of claim 13, wherein the plurality of cameras comprises a front camera having a field of view forward of the vehicle, a rear camera having a field of view rearward of the vehicle, a left camera having a field of view left of the vehicle, and a right camera having a field of view right of the vehicle.

15. The surround view monitoring system of claim 14, wherein the generating of the third three dimensional vehicle representation of the vehicle, by the processor, is based on a pre-configured three dimensional mesh of the vehicle.

16. A surround view monitoring method comprising: providing a plurality of cameras at a vehicle, wherein the plurality of cameras has respective fields of view exterior of the vehicle; providing a kinematic data measuring part at the vehicle, wherein the kinematic data measuring part is configured to determine kinematic data of the vehicle; processing, by a processor, image data captured by the plurality of cameras to generate a first three dimensional representation representative of a surrounding area surrounding the vehicle; processing, by the processor, the image data and the kinematic data to generate a second three dimensional representation representative of a playground area under the vehicle; generating, by the processor, a third three dimensional vehicle representation representative of the vehicle; displaying, by a display screen viewable by a driver of the vehicle, a combined representation of the first three dimensional representation, the second three dimensional representation and the third three dimensional vehicle representation as would be viewed from a virtual camera viewpoint selected from (i) exterior to, higher than and rear-left or rear-right of the vehicle, (ii) exterior to, higher than and top-down of the vehicle, (iii) interior with a field of view forward of the vehicle, and (iv) interior with a field of view rearward of the vehicle; and rendering, by the processor, a portion of the third three dimensional vehicle representation as displayed to be transparent to enable viewing at the display screen a portion of the first three dimensional representation and a portion of the second three dimensional representation that would otherwise be hidden by non-transparent display of the portion of the third three dimensional vehicle representation; wherein the third three dimensional vehicle representation further comprises a dynamic guideline showing the trajectory of the vehicle; wherein the dynamic guideline is rendered based at least on the kinematic data.

Description:
SURROUND VIEW MONITORING SYSTEM AND METHOD

Field of the invention

The present invention relates to a surround view monitoring system and method for vehicles.

Discussion of related art

Recently, the demand for smart vehicles implementing a surround view monitoring system has rapidly increased. Surround view monitoring systems are known for providing a driver of the smart vehicle with a synthesized surround view of the environment and objects around the vehicle. The synthesized surround view may provide a plurality of viewpoints located, for example, in the exterior of the vehicle, and may be displayed on the display system of the vehicle so that the driver is aware of the surroundings of the vehicle while driving, for enhanced safety. The driver can select a desired viewpoint from the provided viewpoints, which span a 360-degree range around the vehicle.

The surround view monitoring system may be equipped with a plurality of cameras (typically four fisheye cameras) mounted on the vehicle to capture images outward of the vehicle and along each of its sides; the cameras have respective fields of view outside of the vehicle. The captured images are then used to render the synthesized surround view around the vehicle. While rendering the synthesized surround view, the vehicle body is also rendered as a solid block. Thus, in the synthesized surround view, the vehicle body obscures a portion of the field of view of the surround view monitoring system. A bird's eye view is the only case in which the vehicle's body does not interfere with the field of view of the prior art surround view monitoring systems, and selecting it demands the driver's intervention on a user input interface, for example, a Human-Machine Interface (HMI). Therefore, when avoiding imminent collisions or performing precise maneuvers such as parking next to curbs or passing along narrow roads, the prior art surround view monitoring systems have the limitation that the driver has to manually switch the point of view to the bird's eye view mode so that the vehicle body does not interfere with the field of view of these surround view monitoring systems. However, the bird's eye view has limitations with regard to highly distorted objects or objects that are very near to the vehicle. Further, while rendering the synthesized surround view, the area underneath the vehicle, which is referred to as a playground area hereinafter, cannot be observed due to the geometric field-of-view limitation of the cameras.

Therefore, there is a need for an improved surround view monitoring system and method that is capable of utilizing the entire field of view of the cameras equipped in the system without being obstructed by the rendering of the vehicle body, and capable of helping the driver of the vehicle to observe the playground area under the vehicle in tough driving conditions.

Summary of the invention

The invention has been made to solve the above-mentioned problems, and an object of the invention is to provide a surround view monitoring system and method that is capable of utilizing the entire field of view of the cameras equipped in the system without being obstructed by the rendering of the vehicle body. Thus, the system according to the invention helps the driver to observe the vehicle body and the surroundings simultaneously without losing any of the multi-camera system's field of view, thereby giving the driver a better view of objects or people, including children, that are very close to the vehicle (e.g., at the vehicle's left/right sides, rear bumper or front bumper) or in blind-spot zones. The driver can therefore maneuver precisely and avoid the risk of imminent collisions with very close objects or people, as well as potential collisions with more distant ones, which could otherwise lead to accidents or vehicle damage. Furthermore, this helps the driver to avoid collisions with children or low-height objects that cannot be observed by the driver directly or via mirrors. Moreover, the system according to the invention helps the driver to observe the playground area under the vehicle body in tough driving conditions.

Problems to be solved by embodiments of the invention are not limited thereto, and also include the following technical solutions as well as objectives or effects that can be understood from the embodiments.

According to an aspect of the invention, there is provided a surround view monitoring system, the system comprising: a plurality of cameras mounted at a vehicle, wherein the plurality of cameras has respective fields of view exterior of the vehicle; a kinematic data measuring part mounted at the vehicle, wherein the kinematic data measuring part is configured to determine kinematic data of the vehicle; a processor configured to process image data captured by the plurality of cameras to generate a first three dimensional representation representative of a surrounding area surrounding the vehicle and process the image data and the kinematic data to generate a second three dimensional representation representative of a playground area under the vehicle; wherein the processor is further configured to generate a third three dimensional vehicle representation representative of the vehicle; and a display screen viewable by a driver of the vehicle, wherein the display screen is configured to display a combined representation of the first three dimensional representation, the second three dimensional representation and the third three dimensional vehicle representation as would be viewed from a virtual camera viewpoint selected from (i) exterior to, higher than and rear-left or rear-right of the vehicle, (ii) exterior to, higher than and top-down of the vehicle, (iii) interior with a field of view forward of the vehicle, and (iv) interior with a field of view rearward of the vehicle; wherein a portion of the third three dimensional vehicle representation is rendered as displayed to be transparent to enable viewing at the display screen a portion of the first three dimensional representation and a portion of the second three dimensional representation that would otherwise be hidden by non-transparent display of the portion of the third three dimensional vehicle representation; and wherein the combined representation further comprises a dynamic guideline showing a trajectory of the vehicle; wherein the dynamic guideline is rendered based at least on the kinematic data.

According to an aspect of the invention, there is provided a surround view monitoring method, the method comprising: providing a plurality of cameras at a vehicle, wherein the plurality of cameras has respective fields of view exterior of the vehicle; providing a kinematic data measuring part at the vehicle, wherein the kinematic data measuring part is configured to determine kinematic data of the vehicle; processing, by a processor, image data captured by the plurality of cameras to generate a first three dimensional representation representative of a surrounding area surrounding the vehicle; processing, by the processor, the image data and the kinematic data to generate a second three dimensional representation representative of a playground area under the vehicle; generating, by the processor, a third three dimensional vehicle representation representative of the vehicle; displaying, by a display screen viewable by a driver of the vehicle, a combined representation of the first three dimensional representation, the second three dimensional representation and the third three dimensional vehicle representation as would be viewed from a virtual camera viewpoint selected from (i) exterior to, higher than and rear-left or rear-right of the vehicle, (ii) exterior to, higher than and top-down of the vehicle, (iii) interior with a field of view forward of the vehicle, and (iv) interior with a field of view rearward of the vehicle; and rendering, by the processor, a portion of the third three dimensional vehicle representation as displayed to be transparent to enable viewing at the display screen a portion of the first three dimensional representation and a portion of the second three dimensional representation that would otherwise be hidden by non-transparent display of the portion of the third three dimensional vehicle representation; wherein the third three dimensional vehicle representation further comprises a dynamic guideline showing the trajectory of the vehicle; wherein the dynamic guideline is rendered based at least on the kinematic data.

Brief description of the drawings

The above and other objects, features and advantages of the invention will become more apparent to those of ordinary skill in the art by describing exemplary embodiments thereof in detail with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram showing an example system for monitoring a surround view for a vehicle;

FIG. 2 shows components of a surround view for the vehicle;

FIG. 3 is a schematic diagram illustrating top view areas corresponding to components of FIG. 2;

FIG. 4, FIG. 5 and FIG. 6 show surround views for the vehicle from different virtual camera viewpoints;

FIG. 7 shows a surround view in which a dynamic guideline is presented;

FIG. 8 is a flow diagram of an example method for monitoring a surround view for a vehicle;

FIG. 9 shows an under-chassis area of a playground area;

FIG. 10 is a schematic diagram illustrating the operation of rendering and updating of a second texture of a second three dimensional representation by the system in FIG. 1; and

FIG. 11 illustrates a transformation from a four-wheel kinematic model to a bicycle kinematic model.

Detailed description of exemplary embodiments

While the invention may have various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will be described herein in detail. However, there is no intent to limit the invention to the particular forms disclosed. On the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the appended claims.

It should be understood that, although the terms “first,” “second,” and the like may be used herein to describe various elements, the elements are not limited by the terms. The terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element without departing from the scope of the invention. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting to the invention. As used herein, the singular forms “a,” “an,” “another,” and “the” are intended to also include the plural forms, unless the context clearly indicates otherwise. It should be further understood that the terms “comprise,” “comprising,” “include,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, parts, or combinations thereof.

Unless otherwise defined, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.

A vehicle as described in this disclosure may include, for example, a car or a motorcycle, or any suitable motorized vehicle, for example, vehicles used in maritime, load-handling, aviation and space applications. Hereinafter, a car will be described as an example.

A vehicle as described in this disclosure may be powered by any suitable power source, and may be, for example, an internal combustion engine vehicle including an engine as a power source, a hybrid vehicle including both an engine and an electric motor as a power source, and/or an electric vehicle including an electric motor as a power source.

As used herein, a texture refers to a graphics data structure which models the surface appearance of an object. A texture may represent the visual experience of many materials and substances. Textures may be created digitally by sampling a physical surface utilizing photographic techniques. Alternatively, textures may be created manually utilizing a suitable graphics design application.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings, in which the same or corresponding components are denoted by the same reference numerals regardless of the figure in which they appear, and redundant description thereof will not be repeated.

FIG. 1 is the block diagram showing an example system 100 for monitoring a surround view for a vehicle (hereinafter referred to as the system 100). FIG. 1 is described with reference to FIG. 2 to FIG. 7, and FIG. 9 to FIG. 11. The system 100 comprises a plurality of cameras 107 that includes multiple exterior facing imaging sensors or cameras (such as a rearwardly facing imaging sensor, a forwardly facing camera at the front (or at the windshield) of the vehicle, and sidewardly facing cameras at respective sides of the vehicle), which capture images exterior of the vehicle. According to an embodiment, the plurality of cameras 107 comprises a front camera having a field of view forward of the vehicle, a rear camera having a field of view rearward of the vehicle, a left camera having a field of view left of the vehicle, and a right camera having a field of view right of the vehicle.

The system 100 further comprises a processor 125 that is operable to process image data 108 captured by the plurality of cameras 107 and may provide displayed images at a display screen 124 for viewing by a driver of the vehicle.

The system 100 is operable to process the image data 108 to provide a top view or surround view or bird's-eye view image displayed at display screen 124 for viewing by the driver of the vehicle.

The plurality of cameras 107 may comprise optical image sensors, infrared sensors, long and short range RADAR, LIDAR, laser, ultrasound sensors, and/or the like. The cameras typically use a wide focal width, commonly referred to as a fisheye lens or optic, providing a wide angle field of view, typically about 180 degrees to about 220 degrees for each camera (such as for a four camera vision system). Typically, there are overlapping regions in the fields of view of the cameras. By providing such a wide angle field of view, the fields of view of the cameras typically not only include the environment around the vehicle but also partially include the vehicle's body, such as at the lateral regions of the captured images of each camera.

The system 100 further comprises a kinematic data measuring part 109 mounted at the vehicle. The kinematic data measuring part 109 is configured to determine kinematic data 118 of the vehicle. The kinematic data measuring part 109 comprises a wheel odometer 110 that is configured to determine a moving direction, a moving distance and a rotation angle of the wheels of the vehicle between successive measures of the wheel odometer. Accordingly, the kinematic data 118 comprises information on the moving direction, the moving distance and the rotation angle of the wheels of the vehicle, which are collectively determined by the wheel odometer 110. According to another embodiment, the kinematic data measuring part 109 may further comprise an Inertial Measurement Unit (IMU) 111 that is configured to determine an angular acceleration, a linear acceleration and an angular velocity of the vehicle between successive measures of the IMU. The IMU 111 may have a six degree-of-freedom (DOF) configuration and have one gyroscope and one accelerometer for each of three orthogonal axes. The accelerometer detects the specific force, and the gyroscope detects the angular rate. Accordingly, the kinematic data 118 may comprise information on the angular acceleration, the linear acceleration and the angular velocity, which are collectively determined by the IMU 111.

According to another embodiment, the kinematic data measuring part 109 may further comprise a visual odometer 112 that is configured to determine a position and an orientation of the vehicle. Accordingly, the kinematic data 118 may comprise information on the position and the orientation of the vehicle, which are collectively determined by the visual odometer 112.
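For illustration only, the kinematic data 118 gathered from the wheel odometer 110, the IMU 111 and the visual odometer 112 could be held in a simple container such as the following Python sketch; the field names, units and grouping are assumptions made for readability and are not part of the disclosure.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class KinematicData:
    # Illustrative container for the kinematic data 118 (names and units are assumed).
    # Wheel odometer 110: values between successive measures
    moving_direction: float        # heading of travel, radians
    moving_distance: float         # distance travelled, metres
    wheel_rotation_angle: float    # rotation angle of the wheels, radians
    # IMU 111: values between successive measures
    angular_acceleration: float    # rad/s^2
    linear_acceleration: float     # m/s^2
    angular_velocity: float        # rad/s
    # Visual odometer 112
    position: Tuple[float, float]  # (x, y) in a world frame, metres
    orientation: float             # yaw in the world frame, radians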

The processor 125 is configured to process the image data 108 captured by the plurality of cameras 107 to generate a first three dimensional representation 115 representative of a surrounding area surrounding the vehicle and process the image data 108 and the kinematic data 118 to generate a second three dimensional representation 116 representative of a playground area under the vehicle. The processor 125 is further configured to generate a third three dimensional vehicle representation 106 representative of the vehicle.

The three representations, namely the first three dimensional representation 115, the second three dimensional representation 116 and the third three dimensional vehicle representation 106, are defined in FIG. 2 and FIG. 3. FIG. 2 shows components of a surround view for the vehicle. FIG. 3 is a schematic diagram defining top view areas corresponding to the components of FIG. 2.

The third three dimensional vehicle representation 106 is displayed as 203 of FIG. 2. The third three dimensional vehicle representation 106 is representative of the vehicle that is represented as the area 303 of FIG. 3. The third three dimensional vehicle representation 106 is generated based on a pre-configured three dimensional mesh 105 of the vehicle so that the third three dimensional vehicle representation 106 closely resembles the actual vehicle equipped with the system 100 (such as in body type or style and/or car line and/or color). The first three dimensional representation 115 is displayed as 201 of FIG. 2. The first three dimensional representation 115 is representative of the surrounding area surrounding the vehicle that is represented as the area 301 of FIG. 3. The second three dimensional representation 116 is displayed as 202 of FIG. 2. The second three dimensional representation 116 is representative of the playground area under the vehicle that is represented as the area 302 of FIG. 3.

The processor 125 generates the first three dimensional representation 115 using an operation 114 that comprises rendering or updating a first texture of the first three dimensional representation 115 in real time based on the image data 108 captured by the plurality of cameras 107.
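As a rough illustration of operation 114, the first texture can be refreshed by projecting each vertex of the surround ("bowl") mesh into the calibrated cameras and sampling the corresponding pixel, which is one common way such surround textures are built; the per-camera data structure and the fisheye "project" callable in the Python sketch below are assumptions, not taken from the patent.

import numpy as np

def update_first_texture(bowl_vertices, cameras, images):
    # Refresh the first (surround) texture by sampling, for every bowl-mesh vertex,
    # the first camera that sees it. Each entry of `cameras` is assumed to hold
    # calibrated extrinsics "R" (3x3), "t" (3,) and a fisheye "project" callable
    # returning pixel coordinates (u, v); these names are illustrative.
    texture = np.zeros((len(bowl_vertices), 3), dtype=np.uint8)
    for i, vertex_world in enumerate(bowl_vertices):
        for cam, image in zip(cameras, images):
            p_cam = cam["R"] @ vertex_world + cam["t"]   # world -> camera coordinates
            if p_cam[2] <= 0:                            # point is behind this camera
                continue
            u, v = cam["project"](p_cam)
            h, w = image.shape[:2]
            if 0 <= int(v) < h and 0 <= int(u) < w:
                texture[i] = image[int(v), int(u)]       # per-vertex color of the bowl
                break
    return texture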

The processor 125 generates the second three dimensional representation 116 using an operation 117 that comprises rendering or updating a second texture of the second three dimensional representation 116 in real time based on the image data 108 and the kinematic data 118. The operation 117 comprises iteratively performing the following sub-operations: i) the processor 125 renders the second texture of the second three dimensional representation 116; ii) the processor 125 translates and rotates in real time the second texture of the second three dimensional representation 116 according to the kinematic data 118; and iii) the processor 125 updates missing data and outdated data in real time only when the missing data reaches a pre-determined amount.
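A minimal sketch of sub-operations ii) and iii): the second texture is rigidly moved in the ground plane according to the motion derived from the kinematic data 118, and newly exposed (missing) texels are only refilled once their fraction reaches a pre-determined amount. The nearest-neighbour interpolation, the fill policy and the 20% threshold below are assumptions, not the patent's implementation.

import numpy as np

def warp_ground_texture(texture, R, t, fill_value=0):
    # Rigidly move a top-down (H, W, 3) ground texture by a 2x2 rotation R and a
    # translation t given in pixels; holes left by the motion are set to fill_value.
    h, w = texture.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = np.rint((dst - t) @ R).astype(int)        # inverse rigid mapping, nearest neighbour
    out = np.full_like(texture, fill_value)
    ok = (src[:, 0] >= 0) & (src[:, 0] < w) & (src[:, 1] >= 0) & (src[:, 1] < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = texture[src[ok, 1], src[ok, 0]]
    return out

def needs_refill(texture, fill_value=0, max_missing=0.2):
    # Sub-operation iii): trigger an update of missing/outdated data only when the
    # missing fraction reaches a pre-determined amount (0.2 is an assumed value).
    missing = np.all(texture == fill_value, axis=-1)
    return missing.mean() >= max_missing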

FIG. 9 shows an under-chassis area of a playground area. The under-chassis area 902 is a part of the playground area 901 and is the area under the chassis of the vehicle. The under-chassis area 902 is out of the field of view of the plurality of cameras. Therefore, according to an embodiment, the operation 117 comprises performing sub-operations in order to render the under-chassis area 902 in real time by sub-sampling techniques, as illustrated in FIG. 10. In principle, to fill the under-chassis area's texture, the system retrieves the historic playground-area texture. When a historic playground texture is constructed, the system keeps it for under-chassis area rendering until the vehicle movement exceeds a distance threshold. To fill the rest of the playground area's texture, the system uses real-time image feeds from the cameras and maps them into the playground area's texture.

In particular, FIG. 10 illustrates the sub-operations to render and update the second texture of the second three dimensional representation 116 as follows.

In a screen render stage, as sub-operation a), the processor 125 obtains a moving distance from the kinematic data 118. As sub-operation b), if the moving distance (denoted as Dx in FIG. 10) does not exceed a distance threshold, the processor 125 performs sub-operations c) to e) and then returns to sub-operation a). Also in sub-operation b), if the moving distance exceeds the distance threshold, the processor 125 performs sub-operations f) to g) and then returns to sub-operation a). The distance threshold is denoted as MAX DISTANCE in FIG. 10.

Sub-operation c): the processor 125 updates a rotation matrix and a translation vector that respectively represent rotation and translation of the vehicle based on the kinematic data 118. This sub-operation is denoted as reference numeral 1001.

Sub-operation d): the processor 125 rotates and translates the second texture of the second three dimensional representation 116 using the updated rotation matrix and the updated translation vector in sub-operation c);

Sub-operation e) the processor 125 renders a third texture of the under-chassis area 902 based on the rotated and translated second texture in sub-operation d). This sub-operation is denoted as reference numeral 1002.

Sub-operation f) the processor 125 updates the second texture of the second three dimensional representation 116 based on the rotation matrix and the translation vector estimated from the kinematic data 118 and the image data 108. The sub-operation f) comprises two blocks denoted as 1004 and 1005 in an offline render stage in which block 1004 shows the rendering of a temporary second texture of the second three dimensional representation 116 and block 1005 shows the rendering of the second texture of the second three dimensional representation 116 from the temporary second texture of block 1004.

Sub-operation g) the processor 125 renders the third texture of the under-chassis area based on the updated second texture of sub-operation f).

In the sub-operation g), the processor 125 also resets the rotation matrix and the translation vector to a default value. This is denoted as reference numeral 1003.
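Putting sub-operations a) to g) together, one pass of the FIG. 10 loop might look like the following Python sketch. The state layout, the helper callables (a rigid texture warp and an offline rebuild of the second texture) and the threshold value are all assumptions made for illustration, not the patent's code.

import numpy as np

MAX_DISTANCE = 1.0  # distance threshold in metres (assumed value)

def rotation_2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def playground_update_step(state, kin, warp_texture, render_under_chassis,
                           rebuild_second_texture):
    # state = {"distance": float, "R": 2x2 array, "t": (2,) array, "texture": array}.
    # kin supplies the per-step moving distance, heading change and translation.
    # a) obtain the moving distance from the kinematic data
    state["distance"] += kin["moving_distance"]
    if state["distance"] <= MAX_DISTANCE:
        # c) update the rotation matrix and translation vector of the vehicle
        dR = rotation_2d(kin["heading_change"])
        state["R"] = dR @ state["R"]
        state["t"] = dR @ state["t"] + kin["translation"]
        # d) rotate/translate the kept second texture, e) render the under-chassis texture
        moved = warp_texture(state["texture"], state["R"], state["t"])
        render_under_chassis(moved)
    else:
        # f) rebuild the second texture from image data and kinematic data (offline render stage)
        state["texture"] = rebuild_second_texture()
        # g) render the under-chassis texture from it, then reset R, t and the distance
        render_under_chassis(state["texture"])
        state["R"], state["t"] = np.eye(2), np.zeros(2)
        state["distance"] = 0.0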

According to an embodiment, the rotation matrix and the translation vector are estimated from the kinematic data 118 based on a bicycle kinematic model that is transformed from a four-wheel kinematic model. As an example, the vehicle may have four wheels. FIG. 11 illustrates a transformation from the four-wheel kinematic model to the bicycle kinematic model. In particular, two front wheels of the four-wheel kinematic model are transformed into a virtual front wheel of the bicycle kinematic model located in the center of the two front wheels. Two rear wheels of the four-wheel kinematic model are transformed into a virtual rear wheel of the bicycle kinematic model located in the center of the two rear wheels.
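For illustration, under the bicycle model the per-step rotation matrix and translation vector can be estimated from the moving distance and a single virtual steering angle, with the wheelbase standing in for the four-wheel geometry. The Python sketch below assumes small steps along a circular arc (x forward, y left in the vehicle frame) and is not the patent's exact estimator.

import numpy as np

def bicycle_model_increment(distance, steering_angle, wheelbase):
    # Yaw change over the step under the bicycle model: d_theta = d * tan(delta) / L.
    d_theta = distance * np.tan(steering_angle) / wheelbase
    if abs(d_theta) < 1e-9:                        # straight-line motion
        translation = np.array([distance, 0.0])
    else:
        radius = distance / d_theta                # turning radius of the arc
        translation = np.array([radius * np.sin(d_theta),
                                radius * (1.0 - np.cos(d_theta))])
    c, s = np.cos(d_theta), np.sin(d_theta)
    rotation = np.array([[c, -s], [s, c]])         # per-step rotation matrix
    return rotation, translation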

According to an embodiment, the third three dimensional vehicle representation 106 is rendered as displayed to be transparent (for example, the transparent vehicle 203 of FIG. 2). The system 100 further comprises an input interface 103 that is configured to receive a user personalization input 104 that is representative of a transparency degree, a texture type, a color and a decoration of the third three dimensional vehicle representation 106 to be displayed on the display screen 124. Based on such personalization input, the system 100 would adjust a degree of transparency, a texture, a color and a decoration of the transparent third three dimensional vehicle representation 106 when displayed on the display screen 124. For example, the third three dimensional vehicle representation 106 may be adjusted to be more or less transparent based on the user input on transparency degree to enhance viewing of a portion of the first three dimensional representation 115 and a portion of the second three dimensional representation 116. The adjustment of the third three dimensional vehicle representation 106 based on the user personalization input 104 forms a personalized third three dimensional vehicle representation 113. Optionally, the system 100 further comprises a deep vision diagnostic mechanism 101 that is configured to analyze a road condition and a driving condition to generate driving diagnosed data 102. The system 100 would automatically adjust the degree of transparency, the texture, the color and the decoration of the transparent portion of the third three dimensional vehicle representation 106 when displayed on the display screen 124 based on the driving diagnosed data 102. Accordingly, the personalized third three dimensional vehicle representation 113 would be further automatically adjusted based on the driving diagnosed data 102.
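A minimal sketch of how this personalization flow might be wired: defaults for the transparent vehicle appearance are overridden by the user personalization input 104 and then adjusted automatically from the driving diagnosed data 102. The field names, default values and the "tight maneuver" rule below are assumptions for illustration only, not the disclosed behavior.

from dataclasses import dataclass

@dataclass
class VehicleAppearance:
    # Illustrative render parameters for the personalized vehicle representation 113.
    transparency: float = 0.6            # 0.0 = opaque, 1.0 = fully transparent
    texture_type: str = "glass"
    color: tuple = (0.8, 0.8, 0.9)
    decoration: str = "none"

def apply_personalization(appearance, user_input=None, diagnosed_data=None):
    # Apply the user personalization input 104, then the driving diagnosed data 102.
    for key, value in (user_input or {}).items():
        setattr(appearance, key, value)
    if (diagnosed_data or {}).get("tight_maneuver"):
        # assumed policy: make the vehicle body more see-through for precise maneuvers
        appearance.transparency = max(appearance.transparency, 0.8)
    return appearance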

The processor 125 assembles and renders a combined representation of the first three dimensional representation 115, the second three dimensional representation 116 and the third three dimensional vehicle representation 106 (or the personalized third three dimensional vehicle representation 113) using a virtual camera model as would be viewed from a virtual camera viewpoint (operation 123). The virtual camera viewpoint can be selected by the user via view selection data 122 through the input interface 103.

The display screen 124 displays the vehicle as a fully or partially transparent vehicle representation. According to another embodiment, a portion of the third three dimensional vehicle representation 106 is rendered as displayed to be transparent to enable viewing at the display screen 124 a portion of the first three dimensional representation 115 and a portion of the second three dimensional representation 116 that would otherwise be hidden by non-transparent display of the portion of the third three dimensional vehicle representation 106. In one example, the portion of the first three dimensional representation 115 and the portion of the second three dimensional representation 116 may comprise objects such that, for example, the third three dimensional vehicle representation 106 is transparent or substantially transparent over the objects that would be hidden by non-transparent display of the portion of the third three dimensional vehicle representation 106.
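To illustrate the display ordering, one plausible compositing of the combined representation is sketched below: the first and second representations form the background, the vehicle layer is alpha-blended over them with the chosen transparency, and the dynamic guideline is drawn last. All inputs are assumed to be already rasterized from the virtual camera viewpoint; this is an assumption about the pipeline, not the disclosed renderer.

import numpy as np

def composite_combined_view(surround_rgb, playground_rgba, vehicle_rgba, guideline_rgba):
    # surround_rgb: HxWx3 floats in [0, 1]; the other layers are HxWx4 (RGBA).
    def over(dst, src):
        a = src[..., 3:4]
        return src[..., :3] * a + dst * (1.0 - a)
    frame = surround_rgb.astype(float)
    frame = over(frame, playground_rgba)   # playground patch under the vehicle
    frame = over(frame, vehicle_rgba)      # semi-transparent third vehicle representation
    frame = over(frame, guideline_rgba)    # dynamic guideline drawn on top
    return frame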

The virtual camera viewpoint from which the combined representation is viewed may be (i) exterior to, higher than and rear-left or rear-right of the vehicle. As an example, FIG. 4 shows a surround view from the virtual camera viewpoint exterior to, higher than and rear-left of a vehicle 400. The virtual camera viewpoint from which the combined representation is viewed may be (ii) exterior to, higher than and top-down of the vehicle. As an example, FIG. 5 shows a surround view (or bird’s eye view) from the virtual camera viewpoint exterior to, higher than and top-down of a vehicle 500.

The virtual camera viewpoint from which the combined representation is viewed may be (iii) interior with a field of view forward of the vehicle, and (iv) interior with a field of view rearward of the vehicle. As an example, FIG. 6 shows a surround view from the virtual camera viewpoint interior with a field of view forward of a vehicle 600.

The combined representation further comprises a dynamic guideline showing a trajectory of the vehicle. The dynamic guideline is rendered based at least on the kinematic data. FIG. 7 shows a surround view in which the dynamic guideline 700 is presented based on the kinematic data 118 obtained from the wheel odometer, the IMU or the visual odometer of the vehicle.
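One plausible way to generate such a guideline from the kinematic data is to integrate the bicycle model forward for the current steering angle, producing a short predicted path that is then projected into the combined view. The horizon length, step size and frame convention in the Python sketch below are assumptions, not the patent's method.

import numpy as np

def dynamic_guideline_points(steering_angle, wheelbase, horizon=5.0, step=0.25):
    # Predicted ground-plane trajectory in the vehicle frame (x forward, y left).
    points, x, y, heading = [], 0.0, 0.0, 0.0
    for _ in range(int(horizon / step)):
        d_theta = step * np.tan(steering_angle) / wheelbase
        x += step * np.cos(heading + 0.5 * d_theta)   # midpoint integration of the arc
        y += step * np.sin(heading + 0.5 * d_theta)
        heading += d_theta
        points.append((x, y))
    return np.array(points)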

Optionally, the system 100 further comprises a deep vision object detection mechanism 119 operable to detect an object that potentially could collide with the vehicle and to emphasize the object on the display screen 124. According to an embodiment, a portion of the third three dimensional vehicle representation 106 is rendered as displayed to be transparent to enable viewing at the display screen 124 a portion of the first three dimensional representation 115 and a portion of the second three dimensional representation 116, such that the portion of the first three dimensional representation 115 or the portion of the second three dimensional representation 116 comprises the detected object. Accordingly, the portion of the third three dimensional vehicle representation 106 is rendered as displayed to be transparent to enable viewing at the display screen 124 the detected object (for example, children, animals, or low-height objects in blind-spot zones) that potentially could collide with the vehicle.

Optionally, the system 100 further comprises a deep vision brightness enhancement mechanism 120 operable to augment a brightness of the combined representation when the driving condition is low-lighting.

Optionally, the system 100 further comprises a deep vision photometric enhancement mechanism 121 operable to equalize and balance the image quality of a plurality of images captured from the plurality of cameras 107. The image quality comprises brightness, color and tone.

The display screen 124 is viewable through a reflective element when the display is activated to display information. The display element may be any type of display element, such as a vacuum fluorescent (VF) display element, a light emitting diode (LED) display element, such as an organic light emitting diode (OLED) or an inorganic light emitting diode, an electroluminescent (EL) display element, a liquid crystal display (LCD) element, a video screen display element or backlit thin film transistor (TFT) display element or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status, and/or the like.

Processor 125 refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be or further include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

FIG. 8 is a flow diagram of an example method 800 for monitoring a surround view for a vehicle. For convenience, the method 800 will be described as being performed by a system, for example, the surround view monitoring system 100 of FIG. 1 (hereinafter referred to as the system).

In step S801, the system provides a plurality of cameras (for example, the plurality of cameras 107 of FIG. 1) at a vehicle, wherein the plurality of cameras has respective fields of view exterior of the vehicle. The plurality of cameras may comprise a front camera having a field of view forward of the vehicle, a rear camera having a field of view rearward of the vehicle, a left camera having a field of view left of the vehicle, and a right camera having a field of view right of the vehicle. In step S802, the system provides a kinematic data measuring part (for example, the kinematic data measuring part 109 of FIG. 1) at the vehicle. The kinematic data measuring part is configured to determine kinematic data (for example, the kinematic data 118 of FIG. 1) of the vehicle.

The kinematic data measuring part comprises a wheel odometer (for example, the wheel odometer 110 of FIG. 1) that is configured to determine a moving direction, a moving distance and a rotation angle of wheels of the vehicle between successive measures of the wheel odometer. Accordingly, the kinematic data comprises the moving direction, the moving distance and the rotation angle of wheels of the vehicle.

Optionally, the kinematic data measuring part further comprises an Inertial Measurement Unit (IMU) (for example, the IMU 111 of FIG. 1) that is configured to determine an angular acceleration, a linear acceleration and an angular velocity of the vehicle between successive measures of the IMU, and further comprises a visual odometer (for example, the visual odometer 112 of FIG. 1) that is configured to determine a position and an orientation of the vehicle. Accordingly, the kinematic data further comprises the angular acceleration, the linear acceleration, the angular velocity, the position and the orientation of the vehicle.

In step S803, the system processes image data (for example, the image data 108 of FIG. 1) captured by the plurality of cameras to generate a first three dimensional representation (for example, the first three dimensional representation 115 of FIG. 1) representative of a surrounding area surrounding the vehicle.

Step S803 may further comprise sub-step (S803-1).

(S803-1) The system renders or updates a first texture of the first three dimensional representation in real time based on the image data captured by the plurality of cameras.

In step S804, the system processes the image data and the kinematic data to generate a second three dimensional representation (for example, the second three dimensional representation 116 of FIG. 1) representative of a playground area under the vehicle. Step S804 may further comprise sub-steps (S804-1), (S804-2) and (S804-3), which are iteratively performed by the system.

(S804-1) The system renders a second texture of the second three dimensional representation.

(S804-2) The system translates and rotates in real time the second texture of the second three dimensional representation according to the kinematic data.

(S804-3) The system updates missing data and outdated data in real time only when the missing data reaches a pre-determined amount.

According to another embodiment, step S804 may further comprise sub-steps a) to g), which are performed by the system. a) The system obtains a moving distance from the kinematic data; b) if the moving distance does not exceed a distance threshold, the system performs sub-steps c) to e) and then returns to sub-step a); if the moving distance exceeds the distance threshold, the system performs sub-steps f) to g) and then returns to sub-step a); c) the system updates a rotation matrix and a translation vector that respectively represent rotation and translation of the vehicle based on the kinematic data; d) the system rotates and translates the second texture of the second three dimensional representation using the rotation matrix and the translation vector updated in sub-step c); e) the system renders a third texture of an under-chassis area (for example, the under-chassis area of FIG. 9), being a part of the playground area that is out of the respective fields of view of the plurality of cameras, based on the second texture rotated and translated in sub-step d); f) the system updates the second texture of the second three dimensional representation based on the rotation matrix and the translation vector estimated from the kinematic data and the image data; g) the system renders the third texture of the under-chassis area based on the second texture updated in sub-step f) and resets the rotation matrix and the translation vector to a default value.

In step S805, the system generates a third three dimensional vehicle representation (for example, the third three dimensional vehicle representation 106 of FIG. 1) representative of the vehicle. According to an embodiment, the third three dimensional vehicle representation is generated based on a pre-configured three dimensional mesh (for example, the pre-configured three dimensional mesh 105 of FIG. 1) of the vehicle.

Step S805 may further comprise sub-steps (S805-1) and (S805-2).

(S805-1) The system receives a user input (for example, the user personalization input 104 of FIG. 1) representative of a transparency degree, a texture type, a color and a decoration of the third three dimensional vehicle representation; and

(S805-2) The system adjusts at least one of a degree of transparency, a texture, a color and a decoration of the transparent portion of the third three dimensional vehicle representation based on the user input.

Optionally, step S805 may further comprise sub-steps (S805-3) and (S805-4).

(S805-3) The system analyzes a road condition and a driving condition to generate diagnosed data (for example, the driving diagnosed data 102 of FIG. 1); and

(S805-4) The system further adjusts at least one of the degree of transparency, the texture, the color and the decoration of the transparent portion of the third three dimensional vehicle representation based on the diagnosed data.

In step S806, the system displays, by a display screen (for example, a display screen 124 of FIG. 1) viewable by a driver of the vehicle, a combined representation of the first three dimensional representation, the second three dimensional representation and the third three dimensional vehicle representation as would be viewed from a virtual camera viewpoint selected from (i) exterior to, higher than and rear-left or rear-right of the vehicle, (ii) exterior to, higher than and top-down of the vehicle, (iii) interior with a field of view forward of the vehicle, and (iv) interior with a field of view rearward of the vehicle.

In step S807, the system renders a portion of the third three dimensional vehicle representation as displayed to be transparent to enable viewing at the display screen a portion of the first three dimensional representation and a portion of the second three dimensional representation that would otherwise be hidden by non-transparent display of the portion of the third three dimensional vehicle representation in which the third three dimensional vehicle representation further comprises a dynamic guideline showing the trajectory of the vehicle. The dynamic guideline is rendered based at least on the kinematic data.

Optionally, the method 800 further comprises a step of detecting an object that potentially could collide with the vehicle and emphasizing the object on the display screen.

Optionally, the method 800 further comprises a step of augmenting a brightness of the combined representation when the driving condition is low-lighting.

Optionally, the method 800 further comprises a step of equalizing and balancing the image quality of a plurality of images captured from the plurality of cameras, in which the image quality comprises brightness, color and tone.

It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention. Accordingly, embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.