

Title:
SYSTEM AND METHOD FOR PROVIDING AN IMAGE DEPICTING A TRANSPARENT REAR VEHICLE PORTION
Document Type and Number:
WIPO Patent Application WO/2022/261671
Kind Code:
A1
Abstract:
A method and system (100) for creating a view of an environment rearward of a vehicle, including a rear camera (132) disposed along the rear of the vehicle, including receiving images (133) captured by the rear camera; creating a combined image (133') in which the representation of a rear portion of the vehicle (152) is replaced by at least a portion of one or more images received from the rear camera so that the representation of the rear portion is transparent (502); and for each combined image created, sending one or more instructions for displaying the combined image on a display unit (122) of the vehicle. The view is from a CHMSL location on the vehicle. The view may be a view from a CHMSL camera if images from the CHMSL camera are used, or a virtual view if CHMSL camera images are not available.

Inventors:
BURTCH JOSEPH (US)
Application Number:
PCT/US2022/072879
Publication Date:
December 15, 2022
Filing Date:
June 10, 2022
Assignee:
CONTINENTAL AUTONOMOUS MOBILITY US LLC (US)
International Classes:
B60R1/00
Domestic Patent References:
WO2016026870A1 (2016-02-25)
Foreign References:
US20180220082A1 (2018-08-02)
US20190241126A1 (2019-08-08)
Attorney, Agent or Firm:
ESSER, William F et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method of creating a view of an environment rearward of a vehicle having at least one rear facing camera, the method comprising: receiving, at data processing hardware, images captured by the at least one rear facing camera; storing, at the data processing hardware, the received captured images in memory; creating, at the data processing hardware, a combined image from the received images in which the representation of a rear portion of the vehicle is replaced by at least a portion of one or more images received from the at least one rear facing camera so that a representation of the rear portion is at least partly transparent; at least one of generating or retrieving from memory an overlay depicting an outline of the representation of the rear portion of the vehicle; and for each combined image created, sending, by the data processing hardware, one or more instructions for displaying the combined image and the overlay on a display unit of the vehicle.

2. The method of claim 1, wherein the rear vehicle portion comprises a bed of the vehicle.

3. The method of claim 1, wherein the at least one rear facing camera includes a CHMSL camera and a rear camera mounted along a rear of the vehicle, wherein receiving the images further includes receiving images captured by the CHMSL camera, wherein the method further includes identifying, by the data processing hardware, a representation of the rear portion of the vehicle in the images captured by the CHMSL camera, wherein the combined image is from a perspective of a location of the CHMSL camera, and wherein the representation of the rear portion of the vehicle in the combined image is replaced by at least a portion of images received from the rear camera.

4. The method of claim 1, wherein the vehicle further includes at least one side view camera, receiving the images further includes receiving images captured by the at least one side view camera, and the representation of the rear vehicle portion is replaced by at least a portion of the one or more images received from the rear camera and the at least one side view camera.

5. The method of claim 1, wherein the overlay includes a representation of rear tires of the vehicle, and wherein the display of the overlay depicts the tire representations as rotating.

6. The method of claim 5, further comprising receiving data corresponding to a speed of the vehicle or a speed of a wheel of the vehicle, wherein a rotational speed of the rear tire representations depicted corresponds to the received data.

7. The method of claim 5, wherein the overlay includes a representation of rear wheels of the vehicle, and wherein the display of the overlay depicts the rear wheel representations as rotating with the rear tire representations.

8. The method of claim 1, further comprising receiving data corresponding to an angle of a steering wheel of the vehicle and/or an angle of a steerable wheel of the vehicle, and determining a projected path of the vehicle based in part upon the received data, wherein the overlay includes a representation of a projected path of rear tires of the vehicle.

9. The method of claim 1, wherein the combined image is from a view from a location of a CHMSL of the vehicle.

10. A system for providing a view of an environment behind a vehicle having at least one rear facing camera disposed along the rear of the vehicle, the system comprising: a controller configured to perform operations comprising: receiving images captured by the at least one rear facing camera; storing the received captured images in memory; creating a combined image from the received images in which the representation of a rear portion of the vehicle is replaced by at least a portion of one or more images received from the at least one rear facing camera so that a representation of the rear portion is at least partly transparent; at least one of generating or retrieving from memory an overlay depicting an outline of the representation of the rear portion of the vehicle; and for each combined image created, sending one or more instructions for displaying the combined image and the overlay on a display unit of the vehicle.

11. The system of claim 10, wherein the rear vehicle portion comprises a bed of the vehicle.

12. The system of claim 10, wherein the at least one rear facing camera includes a CHMSL camera and a rear camera mounted along a rear of the vehicle, receiving the images further includes receiving images captured by the CHMSL camera, the operations further include identifying a representation of the rear portion of the vehicle in the images captured by the CHMSL camera, the combined image is from a perspective of a location of the CHMSL camera, and the representation of the rear portion of the vehicle in the combined image is replaced by at least a portion of images received from the rear camera.

13. The system of claim 10, wherein the vehicle further includes at least one side view camera, receiving the images further includes receiving images captured by the at least one side view camera, and the representation of the rear vehicle portion is replaced by at least a portion of the one or more images received from the rear camera and the at least one side view camera.

14. The system of claim 10, wherein the overlay includes a representation of rear tires of the vehicle, and wherein the display of the overlay depicts the tire representations as rotating.

15. The system of claim 14, wherein the operations further comprise receiving data corresponding to a speed of the vehicle or a speed of a wheel of the vehicle, wherein a rotational speed of the rear tire representations depicted corresponds to the received data.

16. The system of claim 14, wherein the overlay includes a representation of rear wheels of the vehicle, and wherein the display of the overlay depicts the rear wheel representations as rotating with the rear tire representations.

17. The system of claim 10, wherein the combined image is from a view from a location of a CHMSL of the vehicle.

18. The system of claim 10, wherein the operations further include detecting an object to be within a first predetermined distance of the vehicle, generating a warning overlay, detecting a representation of the object in the combined image, and sending at least one instruction for displaying the warning overlay, wherein the warning overlay at least partly covers the representation of the object in the combined image.

Description:
System and Method for Providing an Image Depicting a Transparent Rear Vehicle Portion

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority or claims the benefit of U.S. provisional patent application 63/202,430, filed June 10, 2021 and titled “System and Method for Providing an Image Depicting a Transparent Truck Bed,” and U.S. provisional patent application 63/266,273, filed December 30, 2021 and titled “System and Method for Providing an Image Depicting a Transparent Truck Bed and a Transparent Top View.” The disclosures of the above applications are incorporated herein by reference.

TECHNICAL FIELD

[0002] This disclosure relates to a vehicle imaging system that provides a view of a rearward environment of a vehicle having a rear portion, such as a truck bed, in which the view includes a representation of the truck bed being transparent.

BACKGROUND

[0003] Many vehicles include a cab in which vehicle occupants are located and a truck bed. An example of one such vehicle is a pickup truck. A challenge drivers of such vehicles face is in maneuvering the vehicle in the rearward direction due to the truck bed providing a visual obstruction to the rear view from the cab.

SUMMARY

[0004] According to an example embodiment of the present disclosure, a method is disclosed of creating a view of an environment rearward of a vehicle having at least one rear facing camera. The method includes receiving, at data processing hardware, images captured by the at least one rear facing camera. The method further includes storing, at the data processing hardware, the received captured images in memory. The data processing hardware creates a combined image from the received images in which the representation of a rear portion of the vehicle is replaced by at least a portion of one or more images received from the at least one rear facing camera so that a representation of the rear portion is at least partly transparent. The method further includes at least one of generating or retrieving from memory an overlay depicting an outline of the representation of the rear portion of the vehicle. For each combined image created, the method includes sending, by the data processing hardware, one or more instructions for displaying the combined image and the overlay on a display unit of the vehicle.

[0005] The rear vehicle portion may include a bed of the vehicle.

[0006] In one implementation, the at least one rear facing camera may include a CHMSL camera and a rear camera mounted along a rear of the vehicle. Receiving the images may further include receiving images captured by the CHMSL camera. The method may further include identifying, by the data processing hardware, a representation of the rear portion of the vehicle in the images captured by the CHMSL camera. The combined image is from a perspective of a location of the CHMSL camera. The representation of the rear portion of the vehicle in the combined image is replaced by at least a portion of images received from the rear camera.

[0007] The vehicle may further include at least one side view camera, receiving the images further includes receiving images captured by the at least one side view camera, and the representation of the rear vehicle portion is replaced by at least a portion of the one or more images received from the rear camera and the at least one side view camera.

[0008] The overlay may include a representation of rear tires of the vehicle. The display of the overlay may depict the tire representations as rotating.

[0009] The method may further include receiving data corresponding to a speed of the vehicle or a speed of a wheel of the vehicle. A rotational speed of the rear tire representations depicted corresponds to the received data.

[0010] The overlay may include a representation of rear wheels of the vehicle, and the display of the overlay depicts the rear wheel representations as rotating with the rear tire representations.

[0011] The method may further include receiving data corresponding to an angle of a steering wheel of the vehicle and/or an angle of a steerable wheel of the vehicle, and determining a projected path of the vehicle based in part upon the received data. The overlay may include a representation of a projected path of rear tires of the vehicle.

[0012] The combined image is from a view from a location of a CHMSL of the vehicle.

[0013] The method may further include detecting an object to be within a first predetermined distance of the vehicle, generating a warning overlay, detecting a representation of the object in the combined image, and sending at least one instruction for displaying the warning overlay, wherein the warning overlay at least partly covers the representation of the object in the combined image.

[0014] According to another example embodiment, a system is disclosed for providing a view of an environment behind a vehicle having at least one rear facing camera disposed along the rear of the vehicle. The system includes a controller configured to perform operations forming the method(s) described above.

DESCRIPTION OF DRAWINGS

[0015] FIG. 1 is a top view of a vehicle having a vehicle system according to an example embodiment.

[0016] FIG. 2 is a schematic view of the vehicle system of FIG. 1 according to one or more example embodiments.

[0017] FIG. 3 is a view of a display unit of the vehicle of FIG. 1 displaying an image from one of the cameras thereof.

[0018] FIG. 4 is another view of the display unit of the vehicle of FIG. 1 displaying an image from one of the cameras thereof.

[0019] FIG. 5 is a view of the display unit of the vehicle of FIG. 1 displaying a combined image according to one or more example embodiments.

[0020] FIGS. 6A and 6B form a flowchart depicting operations according to one or more example embodiments.

[0021] FIG. 7 is a view of a transparent representation of a truck bed of the vehicle of FIG. 1 according to an example embodiment.

[0022] FIGS. 8 and 9 are views of a transparent representation of a truck bed of the vehicle of FIG. 1 according to another example embodiment.

[0023] Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0024] It may be difficult to drive a vehicle having a bed, such as a truck bed, in a rearward direction. It is therefore desirable to have a system that provides the driver with a view of the rear and sides of the vehicle, widening the area the driver is able to see and thereby the driver's range of motion.

[0025] Example embodiments of the present disclosure are generally directed to an imaging system in which a rear portion of a vehicle is depicted on a display screen or a head-up display as being largely transparent. For the sake of simplicity, in the example embodiments described in detail below, the vehicle is a pickup truck and the rear vehicle portion is the vehicle’s truck bed. It is understood, however, that the present disclosure may be used with any land-based vehicle for depicting the vehicle’s rear portion as being largely transparent for display to the vehicle driver. For example, the vehicle may be a sedan and the rear portion that is depicted as being largely transparent may include the vehicle’s trunk. By depicting the rear portion of the vehicle as being transparent in a display viewable by the vehicle driver, the driver is able to better see the environment rearward of the vehicle and maneuver the vehicle in reverse in a safer manner.

[0026] Referring to FIGS. 1 and 2, in some implementations, a vehicle system 100 includes a vehicle 102. The vehicle 102 includes a vehicle tow ball 104 supported by a vehicle hitch bar 105. The vehicle 102 may include a drive system 110 that maneuvers the vehicle 102 across a road surface based on drive commands having x, y, and z components, for example. As shown, the drive system 110 includes a front right wheel 112, 112a, a front left wheel 112, 112b, a rear right wheel 112, 112c, and a rear left wheel 112, 112d. The drive system 110 may include other wheel configurations as well. The drive system 110 may also include a brake system that includes brakes associated with each wheel 112, 112a-d, a steering system for use in controlling a direction of travel of the vehicle 102, and an acceleration system that is configured to adjust a speed and direction of the vehicle 102. In addition, the drive system 110 may include a suspension system (not shown) that includes tires associated with each wheel 112, 112a-d, tire air, springs, shock absorbers, and linkages that connect the vehicle 102 to its wheels 112, 112a-d and allow relative motion between the vehicle 102 and the wheels 112, 112a-d. The vehicle 102 is a vehicle having a bed 113, such as a pickup truck or flatbed truck having a truck bed 113.

[0027] The vehicle 102 may move across the road surface by various combinations of movements relative to three mutually perpendicular axes defined by the vehicle 102: a transverse axis Xv, a fore-aft axis Yv, and a central vertical axis Zv. The transverse axis Xv extends between a right side and a left side of the vehicle 102. A forward drive direction along the fore-aft axis Yv is designated as Fv, also referred to as forward motion. In addition, an aft or rearward drive direction along the fore-aft axis Yv is designated as Rv, also referred to as rearward motion. In some examples, the vehicle’s suspension system may be adjusted to cause the vehicle 102 to tilt about the Xv axis and/or the Yv axis, or to move along the central vertical axis Zv.

[0028] The vehicle 102 may include a user interface 120 (FIG. 2). The user interface 120 may include a display 122, a knob, and/or a button, which are used as input mechanisms. In some examples, the display 122 may show the knob and the button, while in other examples the knob and the button are a knob-button combination. In some examples, the user interface 120 receives one or more driver commands from the driver via one or more input mechanisms or a touch screen display 122 and/or displays one or more notifications to the driver. The user interface 120 is in communication with a controller 140. In some examples, the display 122 is configured to display an image 133 of an environment of the vehicle 102.

[0029] The vehicle 102 may include a sensor system 130 (FIG. 2) to provide reliable and robust driving. The sensor system 130 may include different types of sensors that may be used separately or with one another to create a perception of the environment of the vehicle 102 that is used for the vehicle 102 to drive and to aid the driver in making intelligent decisions based on objects and obstacles detected by the sensor system 130. The sensor system 130 may include one or more cameras 132, 132a-e supported by the vehicle system 100. In some implementations, the vehicle 102 includes a rear vehicle camera 132a (i.e., a first camera) that is mounted to a tailgate or along the bumper of the vehicle 102 to provide a view of a rear-driving path for the vehicle 102; in other words, the rear vehicle camera 132a captures images 133a of a rear environment of the vehicle 102. Additionally, the sensor system 130 includes, in some implementations, a rear-facing center high mount stop light (CHMSL) camera 132b (i.e., a second camera) that is mounted at or with the CHMSL of the vehicle 102 to provide a second view of the rear-driving path for the vehicle 102; in other words, the CHMSL camera 132b captures images 133b of a rear environment of the vehicle 102, and particularly the vehicle cab. FIG. 3 is an image 133 of a rearward view 152 displayed on display 122 captured by the CHMSL camera 132b, including a representation 113’ of the truck bed 113 of the vehicle 102. In some examples, the sensor system 130 also includes side view vehicle cameras 132c, 132d (i.e., third camera and fourth camera) each mounted to provide a side image 133 of the side environment of the vehicle 102, and a front camera 132e (fifth vehicle camera) mounted at the front of the vehicle 102 to provide images 133 forwardly of the vehicle.

[0030] In some implementations, one or more of the rear vehicle camera 132a, the rear CHMSL camera 132b, and the side view vehicle cameras 132c, 132d includes a fisheye lens having an ultra-wide-angle lens that produces strong visual distortion intended to create a wide panoramic or hemispherical image. Fisheye cameras capture images having an extremely wide angle of view. Moreover, images captured by the fisheye camera have a characteristic convex non-rectilinear appearance. Other types of cameras may also be used to capture the images 133.

[0031] The sensor system 130 may also include other sensors 134 (FIG. 2) that detect the vehicle motion, i.e., speed, angular speed, position, etc. The other sensors 134 may include an inertial measurement unit (IMU) configured to measure the vehicle’s linear acceleration (using one or more accelerometers) and rotational rate (using one or more gyroscopes). In some examples, the IMU also determines a heading reference of the vehicle 102. Therefore, the IMU determines the pitch, roll, and yaw of the vehicle 102. The other sensors 134 may also include, but are not limited to, radar, sonar, LIDAR (Light Detection and Ranging, which can entail optical remote sensing that measures properties of scattered light to find range and/or other information of a distant target), LADAR (Laser Detection and Ranging), ultrasonic, HFL (High Resolution 3D Flash LIDAR), etc. In some implementations, the sensor system 130 may provide external sensor data received from other systems or vehicles, such as by way of V2X communication or any other communication.

[0032] The controller 140 includes a computing device (or processor) 142 (e.g., a central processing unit having one or more computing processors) in communication with non-transitory memory 144 (e.g., a hard disk, flash memory, random-access memory, etc.) capable of storing instructions executable on the computing processor(s) 142. The controller may be supported by the vehicle 102. In the example embodiments, the controller 140 executes a transparent rear vehicle portion module 151 of the imaging system 150, described below.

[0033] The imaging system 150 receives images 133 from one or more cameras 132 and provides a rearward view 152 of the rear environment of the vehicle 102. The imaging system 150 addresses the difficulties that the driver faces when backing up the vehicle 102 by showing a rearward view 152 of the vehicle 102 on the display 122, for example. The rearward view 152 may additionally or alternatively be included as part of the vehicle’s head-up display. In some examples, the rearward view 152 includes images 133 captured by the rear camera 132a, the CHMSL camera 132b and the side view cameras 132c, 132d.

[0034] In the example embodiments, the above-mentioned rearward view 152 combines images 133 captured by the rear camera 132a, the CHMSL camera 132b and the side view cameras 132c and 132d so that the combined image 133’, displayed on display 122, depicts the rearward view relative to the CHMSL camera 132b. However, instead of depicting the truck bed 113 on the display 122, the combined image 133’ depicts the absence of the truck bed 113 from the CHMSL camera rearward view, thereby depicting a transparent or largely transparent truck bed.

[0035] Specifically, when the vehicle 102 is moving in the reverse direction, the transparent rear vehicle portion module 151 of the imaging system 150 identifies, in the images captured by the CHMSL camera 132b, the representation 113’ of the truck bed 113, and replaces the representation 113’ with at least a portion of images recently captured by the rear camera 132a and, in at least one implementation, portions of images captured by the side view cameras 132c and 132d. The image portions from the rear camera 132a which replace the representation 113’ of the truck bed 113 on the display 122 are captured as the vehicle 102 moves in the reverse direction. The number and/or amount of the images captured by the rear camera 132a which replace the representation 113’ of the truck bed 113 on the display 122 may be based upon the size and/or length of the truck bed 113 as well as the height of the truck bed sidewalls and tailgate.
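
The replacement step described above can be sketched as a simple mask-based composite. This is an illustrative assumption only, not the disclosed implementation: the function name, the binary bed mask, and the premise that the rear-camera frame has already been warped into the CHMSL camera's perspective are all hypothetical.

```python
import numpy as np

def composite_transparent_bed(chmsl_frame, rear_frame, bed_mask):
    """Replace truck-bed pixels in a CHMSL frame with rear-camera pixels.

    chmsl_frame: HxWx3 image from the CHMSL camera.
    rear_frame:  HxWx3 rear-camera image assumed already warped into the
                 CHMSL camera's perspective (warping happens upstream).
    bed_mask:    HxW boolean array, True where the bed representation appears.
    """
    combined = chmsl_frame.copy()
    # Wherever the bed was identified, show the rear camera instead,
    # rendering the bed representation "transparent".
    combined[bed_mask] = rear_frame[bed_mask]
    return combined
```

A per-frame loop would call this once per CHMSL image and then draw the overlay on the result before display.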

[0036] The images 133 used in creating the combined image 133’ may be taken at different times. For example, the images 133 used in creating the combined image 133’ may include images 133a captured by the rear camera 132a in sequence.

[0037] In an example embodiment, the transparent rear vehicle portion module 151 of the imaging system 150 creates and presents an overlay 502 (FIG. 5) on the combined image 133’ displayed on the display 122 which shows an outline of the truck bed representation 113’, including representations of the truck bed walls and the tailgate. In one implementation, the overlay 502 includes a representation 116 of the rear tires of the vehicle 102, which may, for example, assist the vehicle driver in avoiding potholes in the road on which the vehicle 102 is traveling in the reverse direction. The location and dimensions of the tire representations 116 are based upon the known tire size as well as the known location of the tires on the vehicle 102. The tire size and tire location relative to the truck bed 113 may be stored in the memory 144. The overlay 502 may also include a representation of a projected path 115 of the rear tires of the vehicle 102, determined by the imaging system 150. In one aspect, the projected rear tire path 115 is based in part upon the angular position of the steering wheel of the vehicle 102 and/or the steerable front wheels 112a, 112b of the vehicle 102. Although the projected path 115 is shown in FIG. 5 as a straight path, the path may curve to the left or to the right in the combined CHMSL camera-referenced image 152 if the steering wheel is rotated clockwise or counterclockwise, respectively.
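
One common way to compute such a projected rear-tire path from a steering angle is a kinematic bicycle model; the sketch below is a hedged illustration under that assumption (the document does not specify the path model), with all parameter names invented for the example.

```python
import math

def project_rear_tire_paths(steer_angle_rad, wheelbase_m, track_m,
                            steps=20, step_m=0.25):
    """Sketch: ground paths of the rear tires while reversing.

    Kinematic bicycle model assumption: the rear axle center travels on a
    circle of radius R = wheelbase / tan(steer_angle); each rear tire follows
    a concentric circle offset by half the track width. Returns (left, right)
    lists of (x, y) points in a rear-axle frame (x lateral, y rearward, m).
    """
    half = track_m / 2.0
    if abs(steer_angle_rad) < 1e-4:  # wheel essentially straight
        left = [(-half, i * step_m) for i in range(steps)]
        right = [(half, i * step_m) for i in range(steps)]
        return left, right
    r = wheelbase_m / math.tan(steer_angle_rad)  # signed turn radius
    left, right = [], []
    for i in range(steps):
        theta = (i * step_m) / r  # arc angle traveled along the path
        left.append((r - (r + half) * math.cos(theta),
                     (r + half) * math.sin(theta)))
        right.append((r - (r - half) * math.cos(theta),
                      (r - half) * math.sin(theta)))
    return left, right
```

The resulting ground points would then be projected into the CHMSL image using the camera's known intrinsic and extrinsic parameters before being drawn as the path 115.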

[0038] As best seen in FIG. 7, in an example embodiment the overlay 502 includes 3-D representations 116 of the rear wheels 112c, 112d and rear tires of the vehicle 102. The representation 116 of the rear tires may include tire tread representations. Further, as the transparent rear vehicle portion module 151 of the imaging system 150 creates and presents the overlay 502 with or on the combined image 133’ on the display 122, the presented representations 116 of the rear wheels 112c, 112d and tires are animated with the sequence of the combined images 133’ displayed so as to depict the tire representations as rotating as the vehicle 102 moves in the reverse direction. In one implementation, the imaging system 150 receives vehicle speed information of the vehicle 102, which may be from speed sensors associated with the vehicle wheels 112 or another speed sensor on the vehicle 102, and presents the animation of the rear wheel/tire representations 116 as rotating at a rate corresponding to the sensed vehicle speed or sensed wheel speed. In addition or in the alternative, the imaging system 150 determines vehicle speed from the sequence of the combined images 133’ displayed on the display 122, and causes the animation of the rear wheel and tire representations 116 to rotate based upon the determined vehicle speed. FIG. 7 includes arrows indicating the depiction of the rotation of the tire and wheel representations.
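
The correspondence between sensed speed and animation rate follows from rolling without slip: angular speed equals vehicle speed divided by tire radius. The helper below is a minimal sketch of that relation; the function name and the frame-interval parameter are assumptions for illustration.

```python
import math

def tire_rotation_step_deg(vehicle_speed_mps, tire_radius_m, frame_dt_s):
    """Degrees to advance the tire representation between display frames.

    Rolling without slip: omega = v / r (rad/s); the per-frame increment
    is omega times the frame interval, converted to degrees.
    """
    omega = vehicle_speed_mps / tire_radius_m  # angular speed, rad/s
    return math.degrees(omega * frame_dt_s)
```

At 3.5 m/s with a 0.35 m tire and a 0.1 s frame interval, the tire representation would advance one radian (about 57.3 degrees) per frame.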

[0039] In one implementation, with the size (length, width and sidewall/tailgate height) of the truck bed 113 known and stored in memory, and with the intrinsic and extrinsic parameters/characteristics of the CHMSL camera 132b also being known and stored in memory, the imaging system 150 may automatically identify in the images 133b captured by the CHMSL camera 132b the representation 113’ of the truck bed 113 without analysis of the images 133b. In another implementation, the imaging system 150 analyzes the images 133b captured by CHMSL camera 132b and uses an object recognition algorithm to identify the representation 113’ of the truck bed 113 in the images 133b.
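
The analysis-free identification can be pictured as projecting the known 3-D bed corners into the image through the known camera parameters. The sketch below assumes an ideal pinhole model and ignores the fisheye distortion discussed in paragraph [0030]; the function and matrix names are illustrative, not the disclosed method.

```python
import numpy as np

def project_bed_corners(corners_vehicle, K, R, t):
    """Project known 3-D truck-bed corners into CHMSL image coordinates.

    corners_vehicle: Nx3 bed-corner points in the vehicle frame (from memory).
    K: 3x3 camera intrinsic matrix; R (3x3), t (3,): extrinsics mapping
       vehicle coordinates into the camera frame.
    Returns Nx2 pixel coordinates (pinhole model; lens distortion ignored).
    """
    cam = corners_vehicle @ R.T + t   # vehicle frame -> camera frame
    pix = cam @ K.T                   # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]   # perspective divide
```

The projected corner pixels would then bound the bed representation 113’ in each image 133b.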

[0040] With the representation 113’ of the truck bed 113 identified, the imaging system 150 may define a boundary which surrounds the representation 113’ of the truck bed 113 appearing in the images 133b generated by CHMSL camera 132b. The boundary may facilitate the replacement of the representation 113’ with the image portions captured by rear camera 132a and side view cameras 132c and 132d in the combined image.

[0041] The operation 600 of the vehicle system 100, and in particular the controller 140 when executing the instructions of the transparent rear vehicle portion module 151 of the imaging system 150, will be described with reference to FIGS. 6A and 6B. The operation 600 will be described according to the example embodiment(s) in which the CHMSL camera 132b is mounted on the vehicle 102 and the vehicle 102 includes a truck bed. In the illustrated example embodiment, the transparent rear vehicle portion module 151 is executed when the driver (or other occupant) of the vehicle 102 enters a request to activate the transparent rear vehicle portion function (i.e., the transparent rear vehicle portion module 151) via the user interface 120. This manual request may occur by, for example, pressing a button (either an actual button that is part of the user interface 120 or a virtual button displayed on the touch screen display 122), by entering a voice command into a microphone (not shown) forming part of the user interface 120, etc. In another implementation, the transparent rear vehicle portion module 151 is activated automatically whenever the vehicle 102 moves in the reverse direction and/or when the vehicle 102 is shifted into reverse. Following activation of the transparent rear vehicle portion module 151, the controller 140 receives at 602 images captured by the rear camera 132a, the CHMSL camera 132b and the two side view cameras 132c and 132d, and stores the received images in memory 144 at 604. The representation 113’ of the truck bed 113 in the images captured by the CHMSL camera 132b is identified at 606. This identification may be automatic based upon the known intrinsic and extrinsic characteristics and parameters of the CHMSL camera 132b as well as the known dimensions of the truck bed 113 (e.g., the width, length and depth).
For each image 133b generated by the CHMSL camera 132b, the controller 140 creates at 608 a combined image in which the representation 113’ of the truck bed 113 in the image 133b is replaced with one or more images 133a captured by the rear camera 132a, thereby creating a combined image with a truck bed representation that is transparent. In another implementation, for each image 133b generated by the CHMSL camera 132b, the controller 140 creates the combined image from image(s) 133a and 133b as well as images 133c and 133d captured by side view cameras 132c and 132d, respectively. The combined image may be created by stitching together the captured image portions, for example. The images 133 used may be images captured by the cameras 132 at the same time. Further, the images 133 used may include multiple images captured in sequence by at least some of the cameras, such as the rear camera 132a. The transparent rear vehicle portion module 151 also generates the overlay 502 at this time. Since the dimensions and relative location of the truck bed 113 and tires are known, the data corresponding to the truck bed portion and the representations 116 of the rear tires and rear wheels of the overlay 502 may be maintained in the memory 144 and retrieved therefrom for use following the vehicle 102 being placed in reverse or moving in reverse. The projected path 115 portion of the overlay 502 is determined based upon the sensed steering wheel angle and/or the sensed position of the steerable wheels 112a, 112b.

[0042] At 610, the controller 140 sends to the display 122 instructions to display the combined image, along with the overlay 502 showing an outline representation of the truck bed 113, the projected path 115 and the representations 116 of the rear wheels 112c, 112d and the rear tires. In one implementation, the overlay 502 is displayed with each combined image created.
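The disclosure does not detail how the projected path 115 is computed from the sensed steering angle. A minimal sketch using a standard kinematic bicycle model follows; the wheelbase, track width, step size, and point count are illustrative assumptions:

```python
import math

def projected_rear_path(steer_angle_deg: float,
                        wheelbase_m: float = 3.5,
                        track_m: float = 1.7,
                        n_points: int = 20,
                        step_m: float = 0.25):
    """Return (left, right) lists of (x, y) ground-plane points for the
    rear-tire projected path, x lateral and y rearward of the rear axle,
    from the sensed steering angle (kinematic bicycle model)."""
    delta = math.radians(steer_angle_deg)
    half = track_m / 2.0
    if abs(delta) < 1e-6:
        # Straight back: two parallel lines behind the rear tires.
        ys = [i * step_m for i in range(n_points)]
        return [(-half, y) for y in ys], [(half, y) for y in ys]
    r = wheelbase_m / math.tan(delta)  # signed turn radius at rear axle center
    def arc(radius):
        # Concentric arc about the turn center at (r, 0).
        pts = []
        for i in range(n_points):
            phi = (i * step_m) / r     # signed arc angle traversed
            pts.append((r - radius * math.cos(phi), radius * math.sin(phi)))
        return pts
    return arc(r + half), arc(r - half)
```

Each returned ground-plane point would then be projected into the displayed image using the camera's known intrinsic and extrinsic parameters before being drawn as part of the overlay 502.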

[0043] With reference to FIG. 6B, in an implementation, as the vehicle 102 continues maneuvering in the reverse direction, rearward facing ultrasonic sensors disposed along the rear portion of the vehicle 102 detect objects rearwardly of the vehicle 102. In the event an object O (see FIG. 8) is detected at 614 by the ultrasonic sensor(s), a determination is made by the controller 140 at 616 whether the object O is within a first predetermined distance from the vehicle 102. Upon an affirmative determination that the object O is within the first predetermined distance, the imaging system 150 and/or the transparent rear vehicle portion module 151 thereof generates at 618 a warning overlay 502W to visually indicate to the driver of the vehicle 102 that the object O is within the first predetermined distance, and sends an instruction to the user interface 120 to display the warning overlay 502W. A representation of the object O also appears in the combined image. In one implementation, the warning overlay 502W covers at least a portion of the representation of the object O in the combined image and is in a first color, as indicated in FIG. 9. In one implementation, the first color is yellow. In the event the object O is detected at 620 by the ultrasonic sensor(s) to be within a second predetermined distance from the vehicle 102, wherein the second predetermined distance is less than the first predetermined distance, the imaging system 150/transparent rear vehicle portion module 151 changes at 622 the warning overlay 502W to a second color, such as red, to indicate to the vehicle driver that the object is within the second predetermined distance, and sends an instruction to the user interface 120 to display the color-changed warning overlay 502W. It is understood that audio and haptic based warnings may also be transmitted to the user interface 120 as part of the warning provided to the vehicle driver of the proximity of the object O to the vehicle 102.
It is further understood that more than two different warning levels may be provided to the driver, such as three or four, with the third warning level corresponding to a third predetermined distance that is less than the second predetermined distance and the warning overlay 502W having a third color, and the fourth warning level corresponding to a fourth predetermined distance that is less than the third predetermined distance and the warning overlay 502W having a fourth color. The blocks 602-622 are repeated while the vehicle 102 continues traveling in the reverse direction.

[0044] In addition, the representations 116 of the rear wheels 112c, 112d and the rear tires of the vehicle 102 in the overlay 502, when displayed on the display 122, may be depicted as rotating as the sequence of combined images is presented to the driver of the vehicle 102 on the display 122. The rotational speed of the representations 116 of the rear wheels and tires may be based upon a sensed speed of the vehicle 102 and/or a sensed wheel speed in the reverse direction.
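One simple way such a rotation could be derived from the sensed vehicle speed is a per-frame angle increment; the tire radius and display frame rate below are assumed, illustrative values:

```python
import math

def wheel_rotation_step(vehicle_speed_mps: float,
                        tire_radius_m: float = 0.35,
                        frame_dt_s: float = 1.0 / 30.0) -> float:
    """Degrees to rotate the displayed tire/wheel representations 116
    between consecutive display frames, from sensed vehicle speed."""
    omega = vehicle_speed_mps / tire_radius_m  # wheel angular speed, rad/s
    return math.degrees(omega * frame_dt_s)
```

The accumulated angle would drive the rendered orientation of the wheel representations 116 in each successive combined image.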

[0045] Drivers would benefit from being able to see through the truck bed while driving in reverse. The above-described system provides a transparent truck bed through the CHMSL camera view. The driver can get a better sense of where the physical limits of the truck are, especially in constrained environments. Using the rear tailgate camera 132a, the CHMSL camera 132b, and the SurroundView side mirror cameras 132c and 132d, a transparent view of the truck bed from the CHMSL camera view is provided. Along with the transparent view, projected paths of the vehicle’s rear tires are provided as well.

[0046] The example embodiments discussed above utilize a CHMSL camera 132b in providing a rear view of the tow vehicle 102 with a transparent truck bed. It is understood that, in another aspect, the transparent rear vehicle portion module 151 may provide the same rear view having a transparent truck bed representation without utilizing a CHMSL camera 132b, for use of the transparent rear vehicle portion module 151 in a vehicle that does not include a CHMSL camera 132b. Specifically, the combined image 133’ uses image data from the rear camera 132a and optionally the side view cameras 132c and 132d, without image data from a CHMSL camera 132b. In this case, the transparent rear vehicle portion module 151 fuses, stitches or otherwise combines image data from the rear camera 132a and optionally image data from the side cameras 132c and 132d such that the combined image 133’ is from the perspective of a CHMSL camera 132b (even though there is no CHMSL camera). In this implementation, a transparent truck bed overlay may be created without identifying an image portion corresponding to the truck bed and without replacing said image portion. Instead, a virtual view from the CHMSL location is created using a sequence of images captured by the rear camera 132a and optionally the side view cameras 132c and 132d. The creation of the virtual view may use the known location of the CHMSL on the vehicle 102 relative to the locations of the cameras 132a, 132c and 132d on the vehicle 102, all of which may be stored in the memory 144. In one implementation, the transparent rear vehicle portion module 151 creates a transparent 3-D model from the location at or near where the CHMSL and/or the CHMSL camera would be located.
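One conventional technique for re-projecting a rear-camera image into a virtual CHMSL perspective, valid for points on the ground plane, is a planar homography computed offline from the known relative camera poses. The NumPy-only inverse warp below is an illustrative sketch under that assumption, not the disclosed implementation; a production system would more likely use an optimized routine such as OpenCV's `warpPerspective`:

```python
import numpy as np

def warp_to_virtual_view(src_img: np.ndarray, H: np.ndarray,
                         out_shape: tuple[int, int]) -> np.ndarray:
    """Inverse-warp a rear-camera image into the virtual CHMSL view.

    H: 3x3 homography mapping a virtual-view pixel (x, y, 1) to a
    source-image pixel, valid for the ground plane. Nearest-neighbor
    sampling; pixels with no source mapping remain black.
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1)
    pts = pts.reshape(-1, 3).T.astype(np.float64)   # 3 x N homogeneous
    src = H @ pts
    src /= src[2]                                   # perspective divide
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < src_img.shape[1]) & \
            (sy >= 0) & (sy < src_img.shape[0])
    out = np.zeros((h, w) + src_img.shape[2:], dtype=src_img.dtype)
    flat = out.reshape(-1, *src_img.shape[2:])
    flat[valid] = src_img[sy[valid], sx[valid]]
    return out
```

Warped contributions from the rear and side cameras could then be blended into the single combined image 133’ presented from the virtual CHMSL perspective.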

[0047] As mentioned, the transparent rear vehicle portion module 151 of the imaging system 150 may be part of a vehicle 102 that does not include a truck bed 113. In this instance, the transparent rear vehicle portion module 151 may depict a transparent rear portion of the vehicle 102 for display on the display 122 or head-up display. In the case of a sedan, the rear vehicle portion may include the vehicle trunk such that the perspective may be relative to a view from the rear portion of the passenger compartment of the vehicle. The overlay generated identifies the edges of the vehicle trunk as well as the projected path of the rear tires. In addition, the generated overlay includes the representations 116 of the rear tires and wheels of the vehicle 102, as discussed above. In the example in which the vehicle 102 is a sedan, images used in creating the combined image are captured by the vehicle’s rear camera and, optionally, side cameras. In the event the sedan includes a CHMSL camera or the like, the transparent rear vehicle portion module 151 of the imaging system 150 utilizes images from the CHMSL camera as described above.

[0048] Example embodiments are described herein in which an animation or other depiction is presented on the display 122 of tire representations rotating at a speed corresponding to the vehicle speed. The depicted rotating speed of the tire/wheel representation corresponding to the speed of the vehicle 102 may occur within a predetermined range of vehicle speeds. For example, at higher speeds above a predetermined speed threshold, the depicted rotational speed of the tire representations displayed may not correspond directly to the speed of the vehicle 102 and in this case may be a fraction thereof.
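The described behavior, in which the depicted rotational speed above a speed threshold is only a fraction of the actual speed, might be expressed as follows; the threshold and fraction values are illustrative assumptions:

```python
def displayed_wheel_speed(actual_rpm: float,
                          threshold_rpm: float = 120.0,
                          fraction: float = 0.25) -> float:
    """Rotational speed to depict for the tire representations.

    Below the threshold, the depiction tracks the actual wheel speed;
    above it, only a fraction of the excess is depicted so the
    animation stays readable at higher vehicle speeds.
    """
    if actual_rpm <= threshold_rpm:
        return actual_rpm
    return threshold_rpm + fraction * (actual_rpm - threshold_rpm)
```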

[0049] Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.

[0050] These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.

[0051] Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Moreover, subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The terms “data processing apparatus”, “computing device” and “computing processor” encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.

[0052] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multi-tasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0053] A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.