


Title:
CAMERA RADAR FUSION FOR ADVANCED DRIVER ASSISTANCE SYSTEM (ADAS) WITH RADAR AND MOBILE PHONE
Document Type and Number:
WIPO Patent Application WO/2022/104296
Kind Code:
A1
Abstract:
Novel tools and techniques are provided for implementing camera radar fusion for advanced driver assistance system ("ADAS") with radar and mobile device. In various embodiments, a computing system on a mobile device may receive one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; may receive first radar data from a first radar sensor that is mounted behind the windshield of the first vehicle; may fuse the received one or more first images and the received first radar data to generate first fused data; may analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle; and may present, on a display device, the identified, highlighted, and tracked one or more first objects.

Inventors:
YE XIAOYU (US)
ZHANG ZHEBIN (US)
SUN HONGYU (US)
SUN JIAN (US)
Application Number:
PCT/US2021/065473
Publication Date:
May 19, 2022
Filing Date:
December 29, 2021
Assignee:
INNOPEAK TECH INC (US)
International Classes:
G01S13/86; B60R1/00; G01S13/93; G01S13/931
Foreign References:
US20150234045A1, 2015-08-20
US20210103030A1, 2021-04-08
US20180107871A1, 2018-04-19
US20180232947A1, 2018-08-16
Other References:
VIPIN KUMAR KUKKALA, JORDAN TUNNELL, SUDEEP PASRICHA, THOMAS BRADLEY: "Advanced Driver-Assistance Systems: A Path Toward Autonomous Vehicles", IEEE CONSUMER ELECTRONICS MAGAZINE, vol. 7, no. 5, 1 September 2018 (2018-09-01), Piscataway, NJ, USA , pages 18 - 25, XP055704189, ISSN: 2162-2248, DOI: 10.1109/MCE.2018.2828440
Attorney, Agent or Firm:
BRATSCHUN, Thomas D. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method, comprising: receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; receiving, using the computing system on the mobile device, first radio detection and ranging ("radar") data from a first radar sensor that is mounted behind the windshield of the first vehicle; fusing, using the computing system on the mobile device, the received one or more first images and the received first radar data to generate first fused data; and analyzing, using the computing system on the mobile device, the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle.

2. The method of claim 1, wherein the computing system comprises at least one of a driver assistance system, a radar data processing system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN").

3. The method of claim 1 or 2, wherein the mobile device comprises at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device.

4. The method of any of claims 1-3, wherein the first radar sensor comprises at least one antenna disposed on an integrated circuit ("IC") chip, wherein the at least one antenna comprises one of a single IC-based antenna disposed on the IC chip, a plurality of IC-based antennas arranged as a one-dimensional ("1D") line of antennas disposed on the IC chip, or a 2D array of IC-based antennas disposed on the IC chip, wherein a radar signal emitted from the at least one antenna is projected orthogonally from a surface of the IC chip on which the at least one antenna is disposed.

5. The method of any of claims 1-4, further comprising: pre-processing, using the computing system on the mobile device, the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis, wherein the one or more image processing operations comprise at least one of de-hazing, de-blurring, pre-whitening, resizing, aligning, cropping, or formatting; and pre-processing, using at least one of the first radar sensor or the computing system on the mobile device, the received first radar data using one or more radar data processing operations to prepare the received first radar data for analysis, wherein the one or more radar data processing operations comprise at least one of data cleaning, data augmentation, projecting radar data to the same coordinate system as the one or more first images, or denoising.

6. The method of any of claims 1-5, wherein fusing the received one or more first images and the received first radar data to generate the first fused data and analyzing the generated first fused data comprise at least one of: performing early-stage fusion, by: concatenating the received one or more first images and the received first radar data at a data level, by matching image pixels of the received one or more first images with radar point cloud data, to generate second fused data as input to a neural network; and analyzing the generated second fused data using the neural network to generate a bounding box for each first object; performing middle-stage fusion, by: mapping radar point cloud data to the image coordinate system to form a point cloud image; concatenating the received one or more first images and the point cloud image at a feature level to generate third fused data as input to the neural network; and analyzing the generated third fused data using the neural network to generate a bounding box for each first object; or performing late-stage fusion, by: analyzing the received one or more first images to identify and highlight one or more second objects in front of the first vehicle that are captured by the first camera; analyzing the received radar data to identify and highlight one or more third objects in front of the first vehicle that are detected by the radar sensor; concatenating the identified and highlighted one or more second objects and the identified and highlighted one or more third objects at a target level to generate fourth fused data as input to the neural network; and analyzing the generated fourth fused data using the neural network to generate a bounding box for each first object.

7. The method of any of claims 1-6, wherein performing object detection and tracking comprises performing at least one of two-dimensional ("2D") object detection with distance and velocity determination, three-dimensional ("3D") object detection with distance and velocity determination, or object tracking in 2D or 3D space using Doppler-radar based analysis.

8. The method of any of claims 1-7, wherein analyzing the generated first fused data further comprises analyzing, using the computing system on the mobile device, the generated first fused data to perform at least one of simultaneous location and mapping ("SLAM") or depth estimation.

9. The method of any of claims 1-8, wherein the first camera comprises one of a windshield camera or a camera that is integrated with the mobile device.

10. The method of any of claims 1-9, further comprising: receiving, using the computing system on the mobile device, one or more second images from a second camera that is mounted to a second position on the windshield of the first vehicle; wherein generating the first fused data comprises fusing, using the computing system on the mobile device, the received one or more first images, the received one or more second images, and the received first radar data to generate the first fused data.

11. The method of any of claims 1-10, further comprising: receiving, using the computing system on the mobile device, at least one of second radar data from a second radar sensor, lidar data from one or more lidar sensors, ultrasound data from one or more ultrasound sensors, or infrared image data from one or more infrared cameras that are mounted on the first vehicle and that are communicatively coupled to the mobile device; wherein generating the first fused data comprises fusing, using the computing system on the mobile device, the received one or more first images, the received first radar data, and at least one of the second radar data, the lidar data, the ultrasound data, or the infrared image data to generate the first fused data.

12. The method of any of claims 1-11, wherein the one or more first objects comprise at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects.

13. The method of any of claims 1-12, further comprising: presenting, using the computing system on the mobile device and on a display device, the identified, highlighted, and tracked one or more first objects.

14. A mobile device, comprising: a computing system; and a non-transitory computer readable medium communicatively coupled to the computing system, the non-transitory computer readable medium having stored thereon computer software comprising a set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; receive first radio detection and ranging ("radar") data from a first radar sensor that is mounted behind the windshield of the first vehicle; fuse the received one or more first images and the received first radar data to generate first fused data; and analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle.

15. A system, comprising: a first camera that is mounted to a first position on a windshield of a first vehicle; a first radio detection and ranging ("radar") sensor that is mounted behind the windshield of the first vehicle; and a mobile device, comprising: a computing system; and a first non-transitory computer readable medium communicatively coupled to the computing system, the first non-transitory computer readable medium having stored thereon computer software comprising a first set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from the first camera; receive first radar data from the first radar sensor; fuse the received one or more first images and the received first radar data to generate first fused data; and analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle.

16. The system of claim 15, wherein the computing system comprises at least one of a driver assistance system, a radar data processing system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN").

17. The system of claim 15 or 16, wherein the mobile device comprises at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device.


Description:
CAMERA RADAR FUSION FOR ADVANCED DRIVER ASSISTANCE SYSTEM (ADAS) WITH RADAR AND MOBILE PHONE

COPYRIGHT STATEMENT

[0001] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

FIELD

[0002] The present disclosure relates, in general, to methods, systems, and apparatuses for implementing driver assistance technologies (e.g., advanced driver assistance systems ("ADASs"), other vision-based object detection, other radar-based object detection, other vision and radar-based object detection, or the like), and, more particularly, to methods, systems, and apparatuses for implementing camera radar fusion for ADAS with radar and mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).

BACKGROUND

[0003] Advanced driver assistance systems ("ADAS") are technological features that are designed to increase the safety and to improve the user experience of driving a vehicle. Popular ADAS features include keeping a vehicle centered in its lane, bringing a vehicle to a complete stop in an emergency, identifying approaching vehicles or pedestrians, and much more.

[0004] Most conventional ADAS implementations utilize either camera-only solutions or radar-only solutions. The detection performance of single-type sensors, such as a single radar sensor or a monocular camera, is susceptible to the impact of specific traffic environments, which may lead to false associations, missed judgments, and/or misjudgments in complex environment scenarios.

[0005] Most off-the-shelf products that utilize both millimeter-wave radar and camera to perform front view perception tasks normally have stand-alone sensors, a computing unit, and a display to showcase the detection results and to send warning messages to drivers. However, such systems are typically costly and extremely complex for customers to assemble and/or to mount on their vehicles.

[0006] Hence, there is a need for more robust and scalable solutions for implementing driver assistance technologies.

SUMMARY

[0007] The techniques of this disclosure generally relate to tools and techniques for implementing driver assistance technologies (e.g., advanced driver assistance systems ("ADASs"), other vision-based object detection, other radar-based object detection, other vision and radar-based object detection, or the like), and, more particularly, to methods, systems, and apparatuses for implementing camera radar fusion for ADAS with radar and mobile device.

[0008] In an aspect, a method may comprise receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; receiving, using the computing system on the mobile device, first radar data from a first radar sensor that is mounted behind the windshield of the first vehicle; fusing, using the computing system on the mobile device, the received one or more first images and the received first radar data to generate first fused data; analyzing, using the computing system on the mobile device, the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle; and presenting, using the computing system on the mobile device and on a display device, the identified, highlighted, and tracked one or more first objects.

[0009] In another aspect, a mobile device might comprise a computing system and a non-transitory computer readable medium communicatively coupled to the computing system. The non-transitory computer readable medium might have stored thereon computer software comprising a set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; receive first radar data from a first radar sensor that is mounted behind the windshield of the first vehicle; fuse the received one or more first images and the received first radar data to generate first fused data; analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle; and present, on a display device, the identified, highlighted, and tracked one or more first objects.

[0010] In yet another aspect, a system might comprise a first camera mounted to a first fixed position on a windshield of a first vehicle, a first radar sensor that is mounted behind the windshield of the first vehicle, and a mobile device. The mobile device may comprise a computing system and a first non-transitory computer readable medium communicatively coupled to the computing system. The first non-transitory computer readable medium might have stored thereon computer software comprising a first set of instructions that, when executed by the computing system, causes the mobile device to: receive one or more first images from the first camera; receive first radar data from the first radar sensor; fuse the received one or more first images and the received first radar data to generate first fused data; analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle; and present, on a display device, the identified, highlighted, and tracked one or more first objects.

[0011] Various modifications and additions can be made to the embodiments discussed without departing from the scope of the invention. For example, while the embodiments described above refer to particular features, the scope of this invention also includes embodiments having different combinations of features and embodiments that do not include all of the above-described features.

[0012] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] A further understanding of the nature and advantages of particular embodiments may be realized by reference to the remaining portions of the specification and the drawings, in which like reference numerals are used to refer to similar components. In some instances, a sub-label is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.

[0014] Fig. 1 is a schematic diagram illustrating a system for implementing camera radar fusion for advanced driver assistance system ("ADAS") with radar and a mobile device, in accordance with various embodiments.

[0015] Figs. 2A and 2B are schematic block flow diagrams illustrating various non-limiting examples of a process for implementing camera radar fusion for ADAS with radar and a mobile device, in accordance with various embodiments.

[0016] Figs. 2C-2E are schematic block flow diagrams illustrating various non-limiting examples of camera radar fusion that may be implemented during camera radar fusion for ADAS with radar and a mobile device, in accordance with the various embodiments.

[0017] Figs. 3A-3E illustrate a non-limiting example of various forms of radar signal data that may be used as input or may be converted into input for implementing camera radar fusion for ADAS with radar and a mobile device, in accordance with the various embodiments.

[0018] Fig. 3F is an image illustrating a non-limiting example of an output that may be generated during implementation of camera radar fusion for ADAS with radar and a mobile device, in accordance with various embodiments.

[0019] Figs. 4A-4G are flow diagrams illustrating a method for implementing camera radar fusion for ADAS with radar and a mobile device, in accordance with various embodiments.

[0020] Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.

[0021] Fig. 6 is a block diagram illustrating a networked system of computers, computing systems, or system hardware architecture, which can be used in accordance with various embodiments.

DETAILED DESCRIPTION

[0022] Overview

[0023] Various embodiments provide tools and techniques for implementing driver assistance technologies (e.g., advanced driver assistance systems ("ADASs"), other vision-based object detection, other radar-based object detection, other vision and radar-based object detection, or the like), and, more particularly, methods, systems, and apparatuses for implementing camera radar fusion for ADAS with radar and mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.).

[0024] In various embodiments, a computing system on a mobile device may receive one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; may receive first radio detection and ranging ("radar") data from a first radar sensor that is mounted behind the windshield of the first vehicle; may fuse the received one or more first images and the received first radar data to generate first fused data; may analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle; and may present, on a display device, the identified, highlighted, and tracked one or more first objects.

[0025] In some embodiments, the computing system may comprise at least one of a driver assistance system, a radar data processing system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN"), and/or the like. In some instances, the mobile device comprises at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device, and/or the like. In some cases, the first radar sensor may comprise at least one antenna disposed on an integrated circuit ("IC") chip. In some instances, the at least one antenna may comprise one of a single IC-based antenna disposed on the IC chip, a plurality of IC-based antennas arranged as a one-dimensional ("1D") line of antennas disposed on the IC chip, or a 2D array of IC-based antennas disposed on the IC chip, and/or the like. In some cases, a radar signal emitted from the at least one antenna may be projected orthogonally from a surface of the IC chip on which the at least one antenna is disposed.

[0026] According to some embodiments, the computing system may pre-process the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis. In such cases, the one or more image processing operations may comprise at least one of de-hazing, de-blurring, pre-whitening, resizing, aligning, cropping, or formatting, and/or the like. At least one of the first radar sensor or the computing system may pre-process the received first radar data using one or more radar data processing operations to prepare the received first radar data for analysis. In such cases, the one or more radar data processing operations may comprise at least one of data cleaning, data augmentation, projecting radar data to the same coordinate system as the one or more first images, or denoising, and/or the like.
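
Merely by way of illustration, and not by way of limitation, the following sketch (in Python, with all function names, variable names, and calibration parameters assumed for illustration rather than taken from the disclosure) shows one way that radar point cloud data might be projected into the same coordinate system as the one or more first images, assuming a pinhole camera model with a known intrinsic matrix K and a known radar-to-camera rotation R and translation t:

import numpy as np

def project_radar_to_image(points_radar, K, R, t):
    # points_radar: (N, 3) array of x/y/z returns in the radar frame (meters).
    # K: (3, 3) camera intrinsic matrix; R, t: radar-to-camera rotation and translation.
    pts_cam = points_radar @ R.T + t       # transform returns into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep only points in front of the camera
    uv = (K @ pts_cam.T).T                 # apply the pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]            # normalize by depth to obtain pixel coordinates
    return uv, pts_cam[:, 2]               # pixel locations and corresponding depths

Any actual implementation would, of course, depend on the particular calibration between the first radar sensor and the first camera.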

[0027] In some embodiments, fusing the received one or more first images and the received first radar data to generate the first fused data and analyzing the generated first fused data may comprise at least one of: (1) performing early-stage fusion, by (1a) concatenating the received one or more first images and the received first radar data at a data level, by matching image pixels of the received one or more first images with radar point cloud data, to generate second fused data as input to a neural network; and (1b) analyzing the generated second fused data using the neural network to generate a bounding box for each first object; (2) performing middle-stage fusion, by (2a) mapping radar point cloud data to the image coordinate system to form a point cloud image; (2b) concatenating the received one or more first images and the point cloud image at a feature level to generate third fused data as input to the neural network; and (2c) analyzing the generated third fused data using the neural network to generate a bounding box for each first object; or (3) performing late-stage fusion, by (3a) analyzing the received one or more first images to identify and highlight one or more second objects in front of the first vehicle that are captured by the first camera; (3b) analyzing the received radar data to identify and highlight one or more third objects in front of the first vehicle that are detected by the radar sensor; (3c) concatenating the identified and highlighted one or more second objects and the identified and highlighted one or more third objects at a target level to generate fourth fused data as input to the neural network; and (3d) analyzing the generated fourth fused data using the neural network to generate a bounding box for each first object; and/or the like.
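
Merely by way of illustration, the following non-limiting sketch (in Python) outlines the three fusion strategies at the data, feature, and target levels; the detection network `net`, the feature and association heads, and the use of the projection routine sketched above are all assumptions for illustration rather than a definitive implementation of any claimed embodiment:

import numpy as np

def early_fusion(image, radar_points, net, K, R, t):
    # Data-level fusion: rasterize projected radar depths as an extra image
    # channel so that image pixels are matched directly with radar returns.
    uv, depth = project_radar_to_image(radar_points[:, :3], K, R, t)
    depth_map = np.zeros(image.shape[:2], dtype=np.float32)
    for (u, v), d in zip(uv.astype(int), depth):
        if 0 <= v < depth_map.shape[0] and 0 <= u < depth_map.shape[1]:
            depth_map[v, u] = d
    fused = np.dstack([image, depth_map])   # H x W x (3 + 1) fused input
    return net(fused)                       # bounding boxes per detected object

def middle_fusion(image_features, point_cloud_image_features, net_head):
    # Feature-level fusion: concatenate feature maps from the camera branch
    # and the radar point cloud image branch before the detection head.
    fused = np.concatenate([image_features, point_cloud_image_features], axis=-1)
    return net_head(fused)

def late_fusion(camera_detections, radar_detections, association_head):
    # Target-level fusion: combine per-sensor detections into final boxes.
    return association_head(camera_detections + radar_detections)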

[0028] In some instances, performing object detection and tracking may comprise performing at least one of two-dimensional ("2D") object detection with distance and velocity determination, three-dimensional ("3D") object detection with distance and velocity determination, or object tracking in 2D or 3D space using Doppler-radar based analysis, and/or the like. In some cases, analyzing the generated first fused data may further comprise analyzing, using the computing system on the mobile device, the generated first fused data to perform at least one of simultaneous location and mapping ("SLAM") or depth estimation, and/or the like. In some instances, the first camera may comprise one of a windshield camera or a camera that is integrated with the mobile device, and/or the like.
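
By way of a non-limiting illustration of 2D object detection with distance and velocity determination, the sketch below attaches a radar-derived distance and radial (Doppler) velocity to each detected 2D bounding box by collecting the projected radar returns that fall inside the box; the function and field names and the use of a median statistic are assumptions for illustration only:

import numpy as np

def annotate_boxes(boxes, radar_uv, radar_range, radar_velocity):
    # boxes: list of (x1, y1, x2, y2) pixel boxes; radar_uv: (N, 2) projected
    # radar pixel locations; radar_range / radar_velocity: (N,) range in meters
    # and Doppler (radial) velocity in m/s for each return.
    annotated = []
    for (x1, y1, x2, y2) in boxes:
        inside = ((radar_uv[:, 0] >= x1) & (radar_uv[:, 0] <= x2) &
                  (radar_uv[:, 1] >= y1) & (radar_uv[:, 1] <= y2))
        if inside.any():
            dist = float(np.median(radar_range[inside]))    # robust per-box estimate
            vel = float(np.median(radar_velocity[inside]))
        else:
            dist, vel = None, None                          # no radar support for this box
        annotated.append({"box": (x1, y1, x2, y2),
                          "distance_m": dist, "velocity_mps": vel})
    return annotated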

[0029] According to some embodiments, the computing system may receive one or more second images from a second camera that is mounted to a second position on the windshield of the first vehicle. In such embodiments, generating the first fused data may comprise fusing the received one or more first images, the received one or more second images, and the received first radar data to generate the first fused data.

[0030] Alternatively, or additionally, the computing system may receive at least one of second radar data from a second radar sensor, lidar data from one or more lidar sensors, ultrasound data from one or more ultrasound sensors, or infrared image data from one or more infrared cameras, and/or the like, that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device. In such embodiments, generating the first fused data may comprise fusing the received one or more first images, the received first radar data, and at least one of the second radar data, the lidar data, the ultrasound data, or the infrared image data, and/or the like, to generate the first fused data.
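
The following non-limiting sketch illustrates how the optional additional inputs described above (a second camera, a second radar sensor, lidar, ultrasound, or infrared cameras) might be gathered into a single fused-input structure before fusion; the dictionary keys and the helper name are purely illustrative assumptions:

def build_fused_input(first_images, first_radar, second_images=None, optional=None):
    # Gather whichever sensor streams are present into one structure that the
    # fusion stage can consume; absent sensors are simply omitted.
    fused = {"images": list(first_images), "radar": [first_radar]}
    if second_images:
        fused["images"].extend(second_images)
    optional = optional or {}
    if "second_radar" in optional:
        fused["radar"].append(optional["second_radar"])
    for key in ("lidar", "ultrasound", "infrared"):
        if key in optional:
            fused[key] = optional[key]   # handed to the fusion stage as-is
    return fused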

[0031] Merely by way of example, in some cases, the one or more objects may comprise at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.

[0032] In the various aspects described herein, a system and method are provided for implementing camera radar fusion for ADAS with radar and mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.). In some cases, the system may utilize a camera(s) on the mobile device in conjunction with use of a stand-alone millimeter-wave radar for front view perception in ADAS, while utilizing computational resources on the cell phone for running detection algorithm inference, or the like. In some instances, the camera radar fusion algorithm may be designed and implemented with more dense point clouds from 4D radar or imaging radar as well as RGB frames from a monocular camera. Herein, "4D" may refer to the addition of high-dimensional data analysis to the target on the basis of the original distance, azimuth, and speed, which can realize information perception in the four dimensions of "3D + speed," or the like. The various embodiments also provide an inference engine on a mobile device with heterogeneous computing architecture for real-time processing, and high compatibility for different cameras and/or radar sensors to be integrated with.
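
As a non-limiting illustration of the "3D + speed" notion of 4D radar data referred to above, the sketch below (in Python) converts a return expressed in range, azimuth, and elevation into Cartesian coordinates and carries the Doppler (radial) velocity as the fourth dimension; the class and field names are assumptions made solely for illustration:

import math
from dataclasses import dataclass

@dataclass
class RadarReturn4D:
    x: float          # meters, forward of the sensor
    y: float          # meters, to the left of the sensor
    z: float          # meters, above the sensor
    velocity: float   # m/s, Doppler (radial) velocity

def from_polar(range_m, azimuth_rad, elevation_rad, doppler_mps):
    # Convert a polar radar measurement into the 3D + speed representation.
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return RadarReturn4D(x, y, z, doppler_mps)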

[0033] This allows for improvements over conventional ADAS systems that either utilize only a single sensor (i.e., camera only or radar only) or utilize both sensors but in a complicated-to-assemble or complicated-to-mount platform. For instance, the various embodiments involve fusing data from radar and camera in the object detection and tracking algorithms and models, which takes advantage of the strengths of the different sensors and overcomes the drawbacks of using only one of the sensors. For example, a radar signal has depth information but does not have enough features to classify different objects, whereas camera frames have abundant texture features, but it is hard to compute depth from them. The various embodiments are also designed to be based on mobile devices (e.g., mobile phones or the like), and do not require many other devices for data collection, processing, and display. Further, the algorithms, models, and pipeline are lightweight and could run in real-time on most mobile devices. Moreover, the camera radar fusion system is capable of handling severe weather conditions (e.g., rainy, foggy, snowy days or during the night, etc.) better than camera-only or radar-only systems.

[0034] These and other aspects of the system and method for implementing camera radar fusion for ADAS with radar and mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.) are described in greater detail with respect to the figures.

[0035] The following detailed description illustrates a few embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention.

[0036] In the following description, for the purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these details. In other instances, some structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features.

[0037] Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth should be understood as being modified in all instances by the term "about." In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms "and" and "or" means "and/or" unless otherwise indicated. Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise.

[0038] Various embodiments as described herein - while embodying (in some cases) software products, computer-performed methods, and/or computer systems - represent tangible, concrete improvements to existing technological areas, including, without limitation, object detection technology, image-based object detection technology, camera-mobile device video communication technology, radar-based object tracking technology, radar camera fusion technology, fused radar camera-based 2D and 3D object detection with distance velocity determination technology, and fused radar camera-based 2D and 3D object tracking technology, driver assistance technology, and/or the like. In other aspects, some embodiments can improve the functioning of user equipment or systems themselves (e.g., object detection systems, image-based object detection systems, camera-mobile device video communication systems, radar-based object tracking systems, radar camera fusion systems, fused radar camera-based 2D and 3D object detection with distance velocity determination systems, and fused radar camera-based 2D and 3D object tracking systems, driver assistance systems, etc.), for example, by receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle; receiving, using the computing system on the mobile device, first radar data from a first radar sensor that is mounted behind the windshield of the first vehicle; fusing, using the computing system on the mobile device, the received one or more first images and the received first radar data to generate first fused data; analyzing, using the computing system on the mobile device, the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle; and presenting, using the computing system on the mobile device and on a display device, the identified, highlighted, and tracked one or more first objects; and/or the like.

[0039] In particular, to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices, software, systems, and methods that involve novel functionality (e.g., steps or operations), such as implementing camera radar fusion for ADAS with radar and mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.), and/or the like, to name a few examples, that extend beyond mere conventional computer processing operations. These functionalities can produce tangible results outside of the implementing computer system, including, merely by way of example, providing a low-cost and easy-to-use ADAS system that may be used on any existing vehicle (even vehicles without designated or dedicated ADAS hardware) and that provides an easy-to-mount system centered on the mobile device to perform data collection, processing, and display of fused data from radar sensor(s) and camera(s). Such a system provides 2D and/or 3D object detection with distance and velocity determination and/or object tracking in 2D or 3D space using Doppler-radar based analysis of the results of camera radar fusion (and thus is suitable for non-ideal conditions (e.g., rainy, foggy, snowy days and/or during the night, etc.)), is capable of real-time (or near-real-time) operation on most mobile devices due to the algorithms, models, and pipeline being lightweight, and renders clearly delineated objects in respective bounding boxes with object class determination, distance, and/or velocity data for each object, and/or the like, at least some of which may be observed or measured by users (e.g., drivers, ADAS technicians, etc.), developers, and/or object detection system or other ADAS manufacturers.

[0040] Some Embodiments

[0041] We now turn to the embodiments as illustrated by the drawings. Figs. 1-6 illustrate some of the features of the method, system, and apparatus for implementing driver assistance technologies, and, more particularly, to methods, systems, and apparatuses for implementing camera radar fusion for ADAS with radar and mobile device, as referred to above. The methods, systems, and apparatuses illustrated by Figs. 1-6 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown in Figs. 1-6 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.

[0042] With reference to the figures, Fig. 1 is a schematic diagram illustrating a system 100 for implementing camera radar fusion for advanced driver assistance system ("ADAS") with radar and a mobile device, in accordance with various embodiments.

[0043] In the non-limiting embodiment of Fig. 1, system 100 may comprise a vehicle 105 and a mobile device 110 removably located therein. In some embodiments, the mobile device 110 may include, but is not limited to, computing system 115, communications system 120, one or more cameras 125, a display screen 130, and/or an audio speaker(s) 135 (optional), and/or the like. In some embodiments, the computing system 115 may include, without limitation, at least one of a driver assistance system (e.g., driver assistance system 115a, or the like), an object detection system or an object detection and ranging system (e.g., object detection system 115b, or the like), a positioning and mapping system, a processor on the mobile device (e.g., one or more processors 115c, including, but not limited to, one or more central processing units ("CPUs"), graphics processing units ("GPUs"), and/or one or more other processors, and/or the like), a machine learning system (e.g., machine learning system 115d, including, but not limited to, at least one of an artificial intelligence ("AI") system, a machine learning system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN"), and/or the like), an image processing system or an image data fusing system (e.g., image processing system 115e, or the like), a radio detection and ranging ("radar") data processing system or a radar data fusing system (e.g., radar data processing system 115f, or the like), a combination image data and radar data fusing system (e.g., combination of image processing system 115e and radar data processing system 115f (not shown), or the like), rendering or graphics engine (e.g., rendering or graphics engine 115g, or the like), or data storage (e.g., data storage device 115h, or the like), and/or the like. In some instances, the mobile device 110 may include, but is not limited to, at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device, and/or the like.

[0044] System 100 may further comprise one or more cameras 140 (including, without limitation, first camera 140a and/or second camera 140b (optional), or the like) that may be mounted in respective fixed positions on the windshield 150 at the front of vehicle 105. Cameras 140a, 140b, and/or 125 may capture images or videos in front of the vehicle 105, including images or videos of one or more objects 155a-155n (collectively, "objects 155" or the like) that may be in front of vehicle 105. Merely by way of example, in some cases, the one or more objects 155 may include, without limitation, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.

[0045] System 100 may further comprise one or more radar systems 145 (including, but not limited to, first radar sensor(s) 145a and/or second radar sensor(s) 145b (optional), or the like) that may be mounted behind the windshield 150 at the front of vehicle 105. In some cases, the one or more radar systems 145 may include, but are not limited to, at least one antenna disposed on an integrated circuit ("IC") chip. In some instances, the at least one antenna may include, without limitation, one of a single IC-based antenna disposed on the IC chip, a plurality of IC-based antennas arranged as a one-dimensional ("1D") line of antennas disposed on the IC chip, or a 2D array of IC-based antennas disposed on the IC chip, and/or the like. In some cases, a radar signal emitted from the at least one antenna may be projected orthogonally from a surface of the IC chip on which the at least one antenna is disposed.

[0046] In some embodiments, system 100 may further comprise a light detection and ranging ("lidar") system (including, without limitation, lidar sensor(s) 160 (optional), or the like), an ultrasound system (including, but not limited to, ultrasound sensor(s) 165 (optional), or the like), an infrared ("IR") system (including, without limitation, IR sensor(s) 170 (optional), or the like), and an external display device or a vehicle display device (e.g., display screen 130b, or the like), and/or the like.

[0047] According to some embodiments, communications system 120 may communicatively couple with one or more of first camera 140a, second camera 140b, and/or display screen 130b via wired cable connection (such as depicted in Fig. 1 by connector lines between communications system 120 and each of first camera 140a, second camera 140b, and display screen 130b, or the like) or via wireless communication link (such as depicted in Fig. 1 by lightning bolt symbols between communications system 120 and each of first camera 140a, second camera 140b, and display screen 130b, or the like). In some cases, communications system 120 may also communicatively couple with one or more of first radar sensor(s) 145a, second radar sensor(s) 145b, lidar sensor(s) 160, ultrasound sensor(s) 165, and/or IR sensor(s) 170 via wireless communication link(s) (such as depicted in Fig. 1 by lightning bolt symbols between communications system 120 and each of these components, or the like). In some embodiments, the wireless communications may include wireless communications using protocols including, but not limited to, at least one of Bluetooth™ communications protocol, WiFi communications protocol, or other 802.11 suite of communications protocols, ZigBee communications protocol, Z-wave communications protocol, or other 802.15.4 suite of communications protocols, cellular communications protocol (e.g., 3G, 4G, 4G LTE, 5G, etc.), or other suitable communications protocols, and/or the like.

[0048] In operation, computing system 115 (herein, simply referred to as "computing system" or the like) may receive one or more first images from a first camera (e.g., first camera 140a, or the like) that may be mounted to a first position on a windshield (e.g., windshield 150, or the like) of a first vehicle (e.g., vehicle 105, or the like). The computing system may receive first radar data from a first radar sensor (e.g., radar sensor(s) 145a, or the like) that may be mounted behind the windshield of the first vehicle. The computing system may fuse the received one or more first images and the received first radar data to generate first fused data, and may analyze the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects (e.g., objects 155a-155n, or the like) that may be located in front of the first vehicle. The computing system may present, on a display device (e.g., display screen 130a and/or 130b, or the like), the identified, highlighted, and tracked one or more first objects.

[0049] According to some embodiments, the computing system may pre-process the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis. In some instances, the one or more image processing operations may include, but are not limited to, at least one of de-hazing, de-blurring, pre-whitening, resizing, aligning, cropping, or formatting, and/or the like. Similarly, at least one of one or more radar systems (e.g., the one or more radar systems 145, or the like) or the computing system may pre-process the received first radar data using one or more radar data processing operations to prepare the received first radar data for analysis. In some cases, the one or more radar data processing operations may include, without limitation, at least one of data cleaning, data augmentation, projecting radar data to the same coordinate system as the one or more first images, or de-noising, and/or the like.

[0050] In some embodiments, fusing the received one or more first images and the received first radar data to generate the first fused data and analyzing the generated first fused data may comprise performing early-stage fusion, by concatenating the received one or more first images and the received first radar data at a data level, by matching image pixels of the received one or more first images with radar point cloud data, to generate second fused data as input to a neural network; and analyzing the generated second fused data using the neural network to generate a bounding box for each first object. Alternatively, or additionally, fusing the received one or more first images and the received first radar data to generate the first fused data and analyzing the generated first fused data may comprise performing middle-stage fusion, by mapping radar point cloud data to the image coordinate system to form a point cloud image; concatenating the received one or more first images and the point cloud image at a feature level to generate third fused data as input to the neural network; and analyzing the generated third fused data using the neural network to generate a bounding box for each first object. Alternatively, or additionally, fusing the received one or more first images and the received first radar data to generate the first fused data and analyzing the generated first fused data may comprise performing late-stage fusion, by analyzing the received one or more first images to identify and highlight one or more second objects in front of the first vehicle that are captured by the first camera; analyzing the received radar data to identify and highlight one or more third objects in front of the first vehicle that are detected by the radar sensor; concatenating the identified and highlighted one or more second objects and the identified and highlighted one or more third objects at a target level to generate fourth fused data as input to the neural network; and analyzing the generated fourth fused data using the neural network to generate a bounding box for each first object.

[0051] In some instances, performing object detection and tracking may comprise performing at least one of two-dimensional ("2D") object detection with distance and velocity determination, three-dimensional ("3D") object detection with distance and velocity determination, or object tracking in 2D or 3D space using Doppler-radar based analysis, and/or the like. In some cases, analyzing the generated first fused data may further comprise analyzing the generated first fused data to perform at least one of simultaneous location and mapping ("SLAM") or depth estimation, and/or the like.

[0052] According to some embodiments, the computing system may receive one or more second images from a second camera that may be mounted to a second position on the windshield of the first vehicle (e.g., second camera 140b, or the like) and/or from a third camera(s) that may be integrated with the mobile device, which may be mounted to a third position on the windshield of the first vehicle (e.g., camera(s) 125, or the like). In such embodiments, generating the first fused data may comprise fusing the received one or more first images, the received one or more second images, and the received first radar data to generate the first fused data.

[0053] Alternatively, or additionally, the computing system may receive at least one of second radar data from a second radar sensor (e.g., second radar sensor(s) 145b, or the like), lidar data from one or more lidar sensors (e.g., lidar sensor(s) 160, or the like), ultrasound data from one or more ultrasound sensors (e.g., ultrasound sensor(s) 165, or the like), or infrared image data from one or more infrared cameras (e.g., IR sensor(s) 170, or the like), and/or the like, that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device. In such embodiments, generating the first fused data may comprise fusing the received one or more first images, the received first radar data, and at least one of the second radar data, the lidar data, the ultrasound data, or the infrared image data, and/or the like, to generate the first fused data.

[0054] In the various aspects described herein, a system and method are provided for implementing camera radar fusion for ADAS with radar and mobile device (e.g., a smartphone, a mobile phone, a tablet computer, etc.). In some cases, the system may utilize a camera(s) on the mobile device in conjunction with use of a stand-alone millimeter-wave radar for front view perception in ADAS, while utilizing computational resources on the cell phone for running detection algorithm inference, or the like. In some instances, the camera radar fusion algorithm may be designed and implemented with more dense point clouds from 4D radar or imaging radar as well as RGB frames from a monocular camera. Herein, "4D" may refer to the addition of high-dimensional data analysis to the target on the basis of the original distance, azimuth, and speed, which can realize information perception in the four dimensions of "3D + speed," or the like. The various embodiments also provide an inference engine on a mobile device with heterogeneous computing architecture for real-time processing, and high compatibility for different cameras and/or radar sensors to be integrated with.

[0055] This allows for improvements over conventional ADAS systems that either utilize only a single sensor (i.e., camera only or radar only) or utilize both sensors but in a complicated-to-assemble or complicated-to-mount platform. For instance, the various embodiments involve fusing data from radar and camera in the object detection and tracking algorithms and models, which takes advantage of the strengths of the different sensors and overcomes the drawbacks of using only one of the sensors. For example, a radar signal has depth information but does not have enough features to classify different objects, whereas camera frames have abundant texture features, but it is hard to compute depth from them. The various embodiments are also designed to be based on mobile devices (e.g., mobile phones or the like), and do not require many other devices for data collection, processing, and display. Further, the algorithms, models, and pipeline are lightweight and could run in real-time on most mobile devices. Moreover, the camera radar fusion system is capable of handling severe weather conditions (e.g., rainy, foggy, snowy days or during the night, etc.) better than camera-only or radar-only systems.

[0056] The application of fusion results may be extended to other scenarios, including, but not limited to, video quality improvement using depth information from a cell phone's LiDAR sensor, surveillance camera, or traffic monitoring, and/or the like.

[0057] These and other functions of the system 100 (and its components) are described in greater detail below with respect to Figs. 2-4.

[0058] Figs. 2A and 2B are schematic block flow diagrams illustrating various non-limiting examples 200 and 200' of a process for implementing camera radar fusion for ADAS with radar and a mobile device, in accordance with various embodiments. Figs. 2C-2E are schematic block flow diagrams illustrating various non-limiting examples 200", 200"', and 200"" of camera radar fusion that may be implemented during camera radar fusion for ADAS with radar and a mobile device, in accordance with the various embodiments.

[0059] With reference to the non-limiting examples 200 and 200' of Figs. 2A and 2B, a radar system (e.g., radar system 145, or the like), a mobile device (e.g., mobile device 110, or the like), and, in some cases, a windshield camera as well (e.g., first camera 140a, or the like) may be used within vehicle 105 to provide driver assistance (including, but not limited to, ADAS functionalities, or the like). Radar system 145 may include, without limitation, first radar sensor(s) (e.g., first radar sensor(s) 145a, or the like). Mobile device 110 may include, but is not limited to, at least one of computing system 115, camera(s) 125, or display screen 130, and/or the like. In some instances, vehicle 105, mobile device 110, computing system 115, camera(s) 125, display screen 130, and first camera 140a in Fig. 2 may be similar, if not identical, to corresponding vehicle 105, mobile device 110, computing system 115, camera(s) 125, display screen 130, and first camera 140a in Fig. 1, and the descriptions of these components in Fig. 1 may be applicable to the descriptions of the corresponding components in Fig. 2, and vice versa.

[0060] In operation, as shown in the non-limiting example 200 of Fig. 2A, first radar sensor(s) 145a may acquire radar data either in the form of radar heatmap 205 or in the form of radar point cloud 210, and may pre-process the radar data (at block 215). Concurrently, camera(s) 140 (either first (windshield) camera 140a or (mobile device) camera(s) 125, or the like) may capture one or more images or videos of objects (e.g., objects 155 of Fig. 1, or the like) that are in front of vehicle 105. Camera(s) 140 may send the RGB images or videos to computing system 115 for pre-processing (at block 225). The pre-processed radar data and the pre-processed image data may be combined and fused (at block 230), and the results of data fusion used as inputs to deep neural network ("DNN") or other AI or machine learning models (at block 235), which may be used to perform object detection and tracking of objects based on the radar data and the image data. In some instances, performing object detection and tracking may comprise performing at least one of two-dimensional ("2D") object detection with distance and velocity determination 240a, three-dimensional ("3D") object detection with distance and velocity determination 240b, or object tracking in 2D or 3D space using Doppler-radar based analysis 245, and/or the like. In some cases, analyzing the fused data (e.g., the results of block 230, or the like) may further comprise analyzing the fused data to perform at least one of simultaneous location and mapping ("SLAM") or depth estimation, and/or the like. Inferencing (at block 250) may subsequently be performed based at least in part on the output of the DNN models (at block 235), and identified, highlighted, and/or tracked objects may be presented on display screen 130, or the like.
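
Merely by way of illustration, the per-frame flow of Fig. 2A might be orchestrated as in the following sketch, in which every helper name (the pre-processing, fusion, tracking, and rendering routines) is an assumption rather than an element of the disclosure, and the block numbers in the comments refer to Fig. 2A:

def process_frame(radar_frame, camera_frame, dnn_model, display):
    # Assumed helpers: preprocess_radar, preprocess_image, fuse, and
    # track_with_doppler stand in for blocks 215, 225, 230, and 245.
    radar_points = preprocess_radar(radar_frame)            # block 215
    image = preprocess_image(camera_frame)                  # block 225
    fused = fuse(image, radar_points)                       # block 230
    detections = dnn_model(fused)                           # block 235: boxes, classes, distance, velocity
    tracks = track_with_doppler(detections, radar_points)   # block 245
    display.render(image, detections, tracks)               # presented on display screen 130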

[0061] In some aspects, to better handle scenarios that are challenging for front view perception using only RGB cameras, the system may include millimeter-wave radar (or other suitable radar systems and/or sensors, or the like) as additional data input. The perception tasks, including, but not limited to, object detection and tracking, SLAM, depth estimation, etc., may also be performed based on fused data, or the like.

[0062] In this system, a single radar sensor or multiple radar sensors working separately or in a cascaded radar chip in multiple input multiple output ("MIMO") mode may be used. The output radar signal may be a range-Doppler-azimuth heatmap or a pre-processed 3D point cloud in its own coordinate system, or the like. In some cases, the signal processing may be performed on the radar system's computational unit (e.g., a digital signal processor ("DSP"), or the like), or the data may be passed to a cellphone or other mobile device(s) for processing and for performing early-stage fusion, or the like. For some perception tasks, such as object detection and tracking, or the like, the DNN model and algorithm may be used to perform inferencing on the radar system's computational unit using a light-weight framework (e.g., TensorFlow® Lite deep learning framework, or the like), and the result may then be sent to mobile devices for late-stage fusion, or the like.
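
Where inferencing is run on the radar side with a light-weight framework such as TensorFlow Lite, as mentioned above, the invocation pattern might resemble the following minimal sketch. The model file name, input shape, and output layout are assumed for illustration; only the standard tf.lite.Interpreter calls come from the framework itself.

```python
import numpy as np
import tensorflow as tf

# Hypothetical TFLite model that maps a range-Doppler-azimuth heatmap
# to per-object radar detections.
interpreter = tf.lite.Interpreter(model_path="radar_detector.tflite")  # assumed file
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

# Assumed input shape, e.g., [1, range_bins, azimuth_bins, doppler_bins].
heatmap = np.random.rand(*input_detail["shape"]).astype(np.float32)

interpreter.set_tensor(input_detail["index"], heatmap)
interpreter.invoke()
radar_detections = interpreter.get_tensor(output_detail["index"])

# These radar-side results would then be sent to the mobile device
# (e.g., over WiFi or Bluetooth) for late-stage fusion with camera results.
print(radar_detections.shape)
```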

[0063] The camera may be a stand-alone device, or may be the cell phone's back camera(s). If the cell phone has multiple cameras with different horizontal fields of view ("HFOV"), then only the main camera may be used, or two or more cameras may be used simultaneously as multi-view input, or the like. In some cases, the pre-processing of camera inputs may include, but is not limited to, de-hazing, de-blurring, low-light enhancement, etc.

[0064] Turning to the non-limiting example 200' of Fig. 2B, in response to a main activity (e.g., main activity 255, or the like), and based on a determination that a surface texture(s) is available (at block 260), the mobile inference pipeline may be started (at block 265). The mobile inference pipeline may be performed on the computing system 115 of the mobile device. Multi-threaded processes may subsequently be implemented for at least one of pre-processing, data fusion, DNN models, object detection, tracking, and/or rendering, and/or the like.
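
A minimal sketch of the camera-input pre-processing mentioned in paragraph [0063] is given below, using simple OpenCV operations (denoising, unsharp masking, and a global gain/offset) as rough stand-ins for de-hazing, de-blurring, and low-light enhancement; the specific operations and the target resolution are assumptions, not the embodiments' actual algorithms.

```python
import numpy as np
import cv2

def preprocess_frame(bgr):
    """Crude stand-ins for the de-hazing, de-blurring, and low-light
    enhancement steps mentioned above (cf. block 225)."""
    # Denoise as a rough proxy for de-hazing.
    out = cv2.fastNlMeansDenoisingColored(bgr, None, 5, 5, 7, 21)
    # Unsharp-mask style sharpening as a rough proxy for de-blurring.
    blur = cv2.GaussianBlur(out, (0, 0), sigmaX=3)
    out = cv2.addWeighted(out, 1.5, blur, -0.5, 0)
    # Global gain/offset as a rough proxy for low-light enhancement.
    out = cv2.convertScaleAbs(out, alpha=1.2, beta=15)
    # Resize to an (assumed) network input resolution.
    return cv2.resize(out, (640, 384))

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
print(preprocess_frame(frame).shape)
```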

[0065] Based on a determination that radar point cloud data is ready (at block 270), radar data may be pre-processed (at block 215, similar to the process at block 215 in Fig. 2A, or the like). Likewise, based on a determination that preview image data is ready (at block 275), image data may be pre-processed (at block 225, similar to the process at block 225 in Fig. 2A, or the like).

[0066] At block 230, camera radar fusion may be performed based on the pre-processed radar data (from block 215) and the pre-processed image data (from block 225). Object detection (at block 240) may then be performed, in some cases, based at least in part on at least one of 2D object detection with distance and velocity determination (such as at block 240a in Fig. 2A, or the like) and/or 3D object detection with distance and velocity determination (such as at block 240b in Fig. 2A, or the like), or the like. Based on results of object detection (at block 240), the system may determine or identify object position and class for each identified object (at block 280). For example, the relative position of each object with respect to the vehicle 105 (or with respect to the radar sensor(s) 145a and/or the camera(s) 140 of Fig. 2A, or the like) may be determined, along with the class of object (including, but not limited to, cars, trucks, motorcycles, bicycles, pedestrians, animals, buildings, etc.).

[0067] Concurrently, or sequentially, at block 245a, image-based and radar-based tracking may be performed, based also on the pre-processed radar data (from block 215) and the pre-processed image data (from block 225), in some cases, based at least in part on object tracking in 2D or 3D space using Doppler-radar based analysis (such as at block 245 of Fig. 2A, or the like), or the like. In some instances, to save resources, tracking may be performed, not frame by frame, but rather using Doppler analysis to track movements. Radar-based tracking also has the benefit of working well under poor conditions (e.g., snowy or rainy days, etc.). Based on the results of tracking (at block 245a), tracked object position data may be obtained or determined for each identified object (at block 285).

[0068] Concurrently, or sequentially, at block 230a, fusion may be performed with respect to the background objects, in some cases, based at least in part on one or more of the pre-processed radar data (from block 215), the pre-processed image data (from block 225), the determined or identified object position and class for each identified object (at block 280), and/or the tracked object position data for each identified object (at block 285), and/or the like. In some cases, the fusion (at block 230a) may be part of post-processing of the results from the processes at blocks 280 and/or 285 (which results may be represented, e.g., by a bounding box around each identified object, or the like). Although not shown, in some instances, noise filtering may be performed at or after this stage. For example, if there are many bounding boxes around a single object, this process may filter out the noise, in some cases by creating one bounding box spanning the maximum area covered by the originally identified bounding boxes to represent the single object. In some embodiments, any suitable kind or type of signal-based or image-based filtering may be utilized, as appropriate or as desired.
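
A minimal sketch of the noise-filtering step described in paragraph [0068] follows: overlapping bounding boxes around a single object are collapsed into one box spanning their maximum extent. The overlap test and the greedy grouping strategy are assumptions for illustration.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for boxes given as (x1, y1, x2, y2)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_noisy_boxes(boxes):
    """Greedily group overlapping boxes and replace each group with the
    single box spanning the maximum area of its members (cf. [0068])."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if boxes_overlap(box, m):
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(tuple(box))
    return merged

# Three noisy boxes around one object plus one distant box -> two boxes out.
print(merge_noisy_boxes([(10, 10, 50, 60), (12, 8, 52, 58),
                         (14, 12, 48, 62), (200, 30, 240, 90)]))
```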

[0069] Thereafter, at block 290, rendering may be performed (in some cases, using a rendering engine, such as rendering or graphics engine 115c of Fig. 1, or the like). In some instances, graphics library application programming interface ("API") draw calls (e.g., OpenGL API draw calls, or the like) may be used for rendering the results to produce an output frame(s) 295 (showing the bounding box for each identified object, such as shown, e.g., in Fig. 3F, or the like).
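
The embodiments above render via graphics-library draw calls (e.g., OpenGL API draw calls); purely as a simpler stand-in, the following sketch overlays labeled bounding boxes on an output frame using OpenCV drawing primitives. The detection fields shown are assumed.

```python
import numpy as np
import cv2

def render_detections(frame, detections):
    """Draw one labeled bounding box per detected object (cf. block 290)."""
    for det in detections:
        x1, y1, x2, y2 = det["bbox"]
        label = f'{det["cls"]} {det["distance_m"]:.1f} m {det["velocity_mps"]:.1f} m/s'
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, max(y1 - 5, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)
dets = [{"bbox": (100, 150, 220, 300), "cls": "truck",
         "distance_m": 15.5, "velocity_mps": -2.3}]
out_frame = render_detections(frame, dets)   # analogue of output frame 295
print(out_frame.shape)
```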

[0070] With reference again to Fig. 2B, the inference part may be performed on a mobile phone or other mobile device in real-time. The data from the radar and the camera are synchronized and transferred to the mobile phone or other mobile device via WiFi, Bluetooth, or Ethernet cable, or the like. There are data pre-processing modules (at blocks 215 and 225) to prepare the data for the fusion algorithms or models (at blocks 230, 245, 230a, etc.). The pre-processing may include, but is not limited to, data cleaning, data augmentation, projection to the same coordinate system (e.g., radar data is usually in polar coordinates, while image data is usually in Cartesian coordinates; radar data is usually represented in meters (or other distance measurements), while image data is usually represented in pixels, etc.), de-noising (e.g., by filtering out noise, etc.), and/or the like. The output of the fusion and object detection modules may include object locations - namely, one or more bounding boxes in a particular coordinate system, or the like. The next step may be to track the detected objects in sequential frames. This may reduce the time spent computing object detection, for as long as an object is being tracked. The detected and tracked objects may then be fed into the fusion and rendering engine for optimized results and for achieving a better user experience.
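
One possible realization of the "projection to the same coordinate system" step is sketched below: radar points expressed in meters in the radar frame are transformed into the camera frame with an assumed extrinsic rotation and translation, and then mapped to pixel coordinates with an assumed pinhole intrinsic matrix. The calibration values are placeholders.

```python
import numpy as np

def project_radar_to_pixels(points_xyz, R, t, K):
    """Map radar points (meters, radar frame) to image pixel coordinates.

    R, t : assumed extrinsic calibration (radar frame -> camera frame)
    K    : assumed 3x3 pinhole camera intrinsic matrix
    """
    cam = R @ points_xyz.T + t.reshape(3, 1)     # into the camera frame
    cam = cam[:, cam[2] > 0]                     # keep points in front of the camera
    uvw = K @ cam
    return (uvw[:2] / uvw[2]).T                  # pixel coordinates (u, v)

# Placeholder calibration: identity rotation, small offset, VGA-ish intrinsics.
R = np.eye(3)
t = np.array([0.0, -0.1, 0.05])
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[1.0, 0.0, 10.0], [-0.5, 0.2, 20.0]])  # x, y, z in meters
print(project_radar_to_pixels(points, R, t, K))
```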

[0071] The various embodiments provide for a framework and its implementation for camera radar fusion on mobile devices. The inference may be performed on mobile devices via the light-weight pipeline (of Fig. 2B) to run models in real-time (or near-real-time). The results can also be rendered to a display (e.g., display screen 130a and/or 130b of Fig. 1, or the like) via graphics rendering (e.g., at block 290, or the like). To identify other systems or products that utilize this framework, one may determine if the other systems or products: (a) use computational resources on a mobile device for inferencing models or algorithms in pre-processing, fusion, and/or post-processing stages; (b) use graphics rendering to render fused results to a display; and/or (c) utilize camera(s) (e.g., on the mobile device or windshield camera) and radar signals for perception tasks, including, but not limited to, object detection, tracking, segmentation, etc., based on fused data; and/or the like.

[0072] Turning to Figs. 2C-2E, various examples of camera radar fusion are shown, including, but not limited to, early-stage fusion (as shown, e.g., in example 200" in Fig. 2C, or the like), middle-stage fusion (as shown, e.g., in example 200"' in Fig. 2D, or the like), and late-stage fusion (as shown, e.g., in example 200"" in Fig. 2E, or the like). In Figs. 2C-2E, radar data (e.g., from radar system 145, or the like) and camera data (e.g., image data and/or video data from camera(s) 140, or the like) may be processed (or pre-processed). A result of the radar camera fusion and analysis (e.g., using fully connected ("FC") layers 235 or the like) may be used to produce or generate a bounding box for each identified object, which may in turn be used as input to generate an output frame(s) containing one or more bounding boxes for identifying one or more objects (at block 295), with one bounding box for each object, or the like. In some cases, output (at block 295) may represent a single bounding box for one identified object, and the process (in each of Figs. 2C-2E) may be repeated for other objects in the frame, resulting in a single bounding box output for each object. Alternatively, output (at block 295) may combine objects in a frame, with each bounding box (among two or more bounding boxes) representing one object in the frame.

[0073] Referring to the non-limiting example 200" in Fig. 2C, early-stage fusion is shown, in which fusion may be performed at the data level or layer. The detection signals from the radar (e.g., output of radar system 145, or the like) and the RGB frame data from the camera (e.g., output of camera 140, or the like) may be reprojected to the same coordinate system. The radar and camera data may be concatenated as a 5-channel input (including (3-channel) position, (1-channel) velocity, and (1-channel) distance or depth, etc.) for the DNN model (which, as shown in Fig. 2C, includes, but is not limited to, 3 sets of fully connected ("FC") layers or computer vision algorithm processing layers 235a, or the like). The DNN model may include, without limitation, a convolutional neural network ("CNN") or other classifiers or regressors to fulfill the tasks. The reprojection may require extrinsic calibration between the radar and the cellphone. According to some embodiments, extrinsic calibration of information between the radar and camera may occur during concatenation (optional). If the results of calibration are the same every time (such as is the case with fixed mounted camera and radar systems), then extrinsic calibration need only be performed once, and again only when the mounting of the radar and/or camera changes relative to each other or relative to the vehicle in which they are mounted.
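
Paragraph [0073] enumerates a 5-channel input comprising 3 position channels, 1 velocity channel, and 1 distance channel. The sketch below rasterizes projected radar points into such a 5-channel, image-aligned tensor; how this tensor is combined with the RGB frame (e.g., by further concatenation) is left open here, and the channel composition itself is one reading of the text rather than a definitive implementation.

```python
import numpy as np

def radar_to_five_channel(points_xyz, velocities, pixels, hw):
    """Rasterize projected radar points into a 5-channel, image-aligned tensor:
    channels 0-2 = 3D position (x, y, z), channel 3 = radial velocity,
    channel 4 = distance/depth (one reading of the enumeration in [0073])."""
    h, w = hw
    tensor = np.zeros((5, h, w), dtype=np.float32)
    for (x, y, z), vel, (u, v) in zip(points_xyz, velocities, pixels):
        col, row = int(round(u)), int(round(v))
        if 0 <= col < w and 0 <= row < h:
            tensor[0:3, row, col] = (x, y, z)                 # position channels
            tensor[3, row, col] = vel                         # radial velocity
            tensor[4, row, col] = np.linalg.norm((x, y, z))   # distance/depth
    return tensor

# Assumed inputs: radar points (meters), velocities (m/s), and their pixel
# locations from an extrinsic/intrinsic projection step.
points = np.array([[1.0, 0.0, 10.0], [-0.5, 0.2, 20.0]])
velocities = np.array([-2.5, 0.0])
pixels = np.array([[380.0, 234.0], [305.0, 246.0]])
five_ch = radar_to_five_channel(points, velocities, pixels, (384, 640))
print(five_ch.shape)   # (5, 384, 640), ready to feed the FC/CNN layers
```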

[0074] Raw data level fusion refers mainly to the matching of radar point cloud and image pixels. The characteristics of early-stage fusion include, without limitation, at least one of a significant amount of raw data being concatenated, the radar resolution being relatively low, the point clouds being very sparse, the noise being large, and it being difficult to match the radar data to the image data, etc.

[0075] Turning to the non-limiting example 200"' in Fig. 2D, middle-stage fusion is shown, in which fusion may be performed at the abstract, feature level or layer, which is a more abstract level compared with the data level or layer described above with respect to the early-stage fusion of Fig. 2C. Also, compared with the data level or layer, the feature level or layer is more conducive to the neural network learning the complementarity between the different sensors (i.e., the radar sensor(s) and the image sensor(s), etc.). The general approach herein is to map the point cloud data to the image coordinate system to form a point cloud image similar to the camera image. The fusion herein is mainly the concatenation operation, which is a common operation in deep learning. It stacks the feature maps from multiple inputs together, and then generally uses a convolutional layer with a kernel size of 1 x 1 to compress them, which is a weighted average process.

[0076] The characteristics of feature-level fusion may mainly include, without limitation, at least one of radar-assisted images, quick elimination of a large number of areas where there will be no objects (e.g., vehicles, non-vehicle targets, etc.), greatly improved recognition speed, enhancement of the reliability of the results, and/or it being difficult to implement, and/or the like.
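
A minimal sketch of the concatenate-then-compress operation described in paragraph [0075] is given below in PyTorch: the camera and point-cloud-image feature maps are stacked along the channel dimension and compressed with a 1 x 1 convolution. The feature-map shapes and channel counts are assumptions.

```python
import torch
import torch.nn as nn

class MiddleStageFusion(nn.Module):
    """Concatenate camera and radar (point-cloud-image) feature maps along
    the channel dimension, then compress with a 1x1 convolution, i.e., a
    learned weighted average across channels (cf. [0075])."""

    def __init__(self, cam_channels=64, radar_channels=16, out_channels=64):
        super().__init__()
        self.compress = nn.Conv2d(cam_channels + radar_channels,
                                  out_channels, kernel_size=1)

    def forward(self, cam_feat, radar_feat):
        fused = torch.cat([cam_feat, radar_feat], dim=1)  # channel concatenation
        return self.compress(fused)

# Assumed feature-map shapes: batch 1, 48x80 spatial grid.
cam_feat = torch.randn(1, 64, 48, 80)
radar_feat = torch.randn(1, 16, 48, 80)
print(MiddleStageFusion()(cam_feat, radar_feat).shape)  # torch.Size([1, 64, 48, 80])
```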

[0077] With reference to the non-limiting example 200"" in Fig. 2E, late-stage fusion is shown, in which fusion may be performed at the target level or layer. Target-level fusion, herein, may refer mainly to an effective fusion between the result of image detection and the result of radar detection. The characteristics of target-level fusion may include, but are not limited to, the longitudinal distance recognized by the monocular camera not being accurate, and it being difficult to accurately match the result of image detection with the result of radar detection when there are many obstacles, or the like.

[0078] The perception tasks, such as 2D/3D object detection and tracking, may respectively be performed based on data from the radar sensor(s) and on the RGB frames from the camera. The radar-based results may then be sent to the cell phone or other mobile device to be merged with the results based on the image data from the camera. The merge process may be performed with or without extrinsic calibration.
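
One possible (assumed) realization of target-level fusion is sketched below: a radar detection whose projected pixel location falls inside a camera bounding box contributes its distance and velocity to that box, while the object class comes from the camera detector. The point-in-box matching rule is an illustrative assumption; other association schemes could equally be used.

```python
def late_stage_fuse(camera_dets, radar_dets):
    """Target-level fusion sketch: match radar detections (already projected
    to pixel coordinates) to camera bounding boxes; take class from the
    camera result and distance/velocity from the radar result."""
    fused = []
    for cam in camera_dets:
        x1, y1, x2, y2 = cam["bbox"]
        match = next((r for r in radar_dets
                      if x1 <= r["u"] <= x2 and y1 <= r["v"] <= y2), None)
        fused.append({
            "bbox": cam["bbox"],
            "cls": cam["cls"],
            "distance_m": match["distance_m"] if match else None,
            "velocity_mps": match["velocity_mps"] if match else None,
        })
    return fused

camera_dets = [{"bbox": (100, 150, 220, 300), "cls": "truck"}]
radar_dets = [{"u": 160, "v": 220, "distance_m": 15.5, "velocity_mps": -2.3}]
print(late_stage_fuse(camera_dets, radar_dets))
```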

[0079] These and other functions of the example 200 (and its components) are described in greater detail below with respect to Figs. 1, 3, and 4.

[0080] Figs. 3A-3E illustrate a non-limiting example 300 of various forms of radar signal data that may be used as input or may be converted into input for implementing camera radar fusion for ADAS with radar and a mobile device, in accordance with the various embodiments. Fig. 3F is an image illustrating a non-limiting example 300' of an output that may be generated during implementation of camera radar fusion for ADAS with radar and a mobile device, in accordance with various embodiments.

[0081] As illustrated in the non-limiting example 300 of Figs. 3A-3E, characteristics of radar data are shown, including, but not limited to, a radar point cloud in which the points of millimeter wave radar include X, Y, Z coordinates, radar cross-section ("RCS"; which may represent an object's reflection area, and may be a measure of how detectable the object is), and Doppler (which may represent an object's speed), and/or the like. Herein, the radar used may include, but is not limited to, millimeter wave radar, which uses electromagnetic waves with a working frequency of 30-100 GHz and a wavelength of 1-10 mm, i.e., operating in the millimeter wave band. It accurately detects the direction of and distance to a target by emitting electromagnetic waves toward obstacles and receiving the resultant echoes.

[0082] The millimeter-wave radar has strong anti-clutter interference ability and a certain degree of diffraction ability. Its strong penetration ability allows it to see through smoke and dust, so it is less affected by light and weather factors, etc. In this manner, it has the ability to work constantly (or around the clock, etc.).

[0083] With reference to the non-limiting example of Fig. 3A, an example radar point cloud is shown. The data of millimeter wave radar is generally presented in the form of a point cloud, similar to the point cloud for lidar; only the data contained in each point of the radar point cloud is different. That is, the point cloud for lidar includes X, Y, Z coordinates and reflected signal strength, while the point cloud for millimeter wave radar includes X, Y, Z coordinates, radar cross-section ("RCS"; representing an object's reflection area), and Doppler data (representing the object's speed), etc. In Fig. 3A, the tight cluster of points near the bottom middle of the point cloud represents objects closest to the radar sensor(s), while the more dispersed or spread out points near the top and sides of the point cloud represent objects further away from the radar sensor(s).
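
The per-point content described above (X, Y, Z, RCS, and Doppler) could, for example, be held in a structured array as sketched below; the field names and sample values are assumptions for illustration.

```python
import numpy as np

# One possible layout for a millimeter-wave radar point cloud: each point
# carries X, Y, Z coordinates (meters), RCS (reflection area), and Doppler speed (m/s).
radar_point_dtype = np.dtype([("x", np.float32), ("y", np.float32),
                              ("z", np.float32), ("rcs", np.float32),
                              ("doppler", np.float32)])

cloud = np.array([(12.1, -0.4, 0.3, 8.5, -2.1),
                  (35.7, 2.2, 0.8, 14.0, 0.0)], dtype=radar_point_dtype)

# Range to each point and its radial (Doppler) speed.
ranges = np.sqrt(cloud["x"]**2 + cloud["y"]**2 + cloud["z"]**2)
print(list(zip(ranges.round(1), cloud["doppler"])))
```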

[0084] Referring to the non-limiting examples of Figs. 3B-3E, range-azimuth-Doppler tensor data or the like is shown. In general, the point cloud data of traditional millimeter wave radar is very sparse, and the generated point cloud image contains little information, which is not conducive to the feature learning of the neural network. To address this, the various embodiments herein convert both radar data (in polar coordinates) and image data to a Cartesian coordinate system (in some cases, bird's eye view ("BEV") coordinates, or the like, rather than vehicle coordinates, or the like). Radar data can actually be regarded as a multi-channel image in polar coordinates, whose channels are Doppler features. After coordinate conversion, it can be regarded as a multi-channel image in BEV. Similarly, the camera image can also be regarded as a multi-channel (such as RGB) image under BEV after coordinate conversion.
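
The polar-to-Cartesian (BEV) conversion described above might be sketched with nearest-neighbor resampling as follows; the range-bin spacing, field of view, and output grid parameters are placeholders.

```python
import numpy as np

def polar_to_bev(heatmap, r_res=0.5, az_fov_deg=150.0, bev_size=200, bev_res=0.5):
    """Resample a range-azimuth heatmap (polar) onto a Cartesian BEV grid.

    heatmap : (range_bins, azimuth_bins) array
    r_res   : assumed range-bin spacing in meters
    az_fov_deg, bev_size, bev_res : assumed FOV and output grid parameters
    """
    r_bins, az_bins = heatmap.shape
    half = bev_size * bev_res / 2.0
    xs = np.linspace(-half, half, bev_size)              # lateral (m)
    ys = np.linspace(0.0, bev_size * bev_res, bev_size)  # forward (m)
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)
    az = np.degrees(np.arctan2(X, Y))                    # 0 deg = straight ahead
    r_idx = np.clip((r / r_res).astype(int), 0, r_bins - 1)
    az_idx = np.clip(((az + az_fov_deg / 2) / az_fov_deg * (az_bins - 1)).astype(int),
                     0, az_bins - 1)
    bev = heatmap[r_idx, az_idx]
    bev[(r > r_bins * r_res) | (np.abs(az) > az_fov_deg / 2)] = 0.0  # outside coverage
    return bev

print(polar_to_bev(np.random.rand(128, 64)).shape)   # (200, 200)
```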

[0085] Fig. 3B depicts an example radar range profile averaged across all angles, while Fig. 3C depicts range or Doppler transmit beamforming ("TxBF") radar signals in polar coordinates, which is the typical output of most radar systems. Figs. 3D and 3E depict the converted radar signals in Cartesian coordinates (converted from the polar coordinate-based data in Fig. 3C, for instance), where Fig. 3D depicts an example radar stitch range per azimuth (also referred to as a 2D or flattened view or 2D heatmap, or the like), while Fig. 3E depicts an example radar stitch range per azimuth with intensity peaks (also referred to as a 3D view or 3D heatmap, or the like). As shown in Figs. 3D and 3E, the radar signal data has an ~150° horizontal field of view ("HFOV").

[0086] The radar data may be converted between point cloud form (as shown, e.g., in Fig. 3A) and heatmap form (as shown, e.g., in one of Figs. 3C-3E, or the like), with the intensity spikes in Fig. 3E being those above a predetermined or post-selected threshold level. In some instances, for filtering out noise, threshold levels may be changed to select or de-select peaks in the 3D view of Fig. 3E, or the like.
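
The conversion from heatmap to point data using a selectable intensity threshold, as described above, might look like the following sketch; the cell resolution and coordinate convention are assumptions.

```python
import numpy as np

def heatmap_to_points(bev_heatmap, threshold, cell_res=0.5):
    """Keep only cells whose intensity exceeds the (adjustable) threshold and
    return their Cartesian coordinates; raising the threshold filters noise,
    lowering it keeps weaker peaks (cf. Fig. 3E)."""
    rows, cols = np.nonzero(bev_heatmap > threshold)
    x = (cols - bev_heatmap.shape[1] / 2.0) * cell_res   # lateral, meters
    y = rows * cell_res                                  # forward, meters
    intensity = bev_heatmap[rows, cols]
    return np.stack([x, y, intensity], axis=1)

bev = np.random.rand(200, 200)
print(len(heatmap_to_points(bev, threshold=0.99)))   # fewer points at a higher threshold
print(len(heatmap_to_points(bev, threshold=0.90)))
```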

[0087] An example of fusion of road detection results is illustrated in the non-limiting example of Fig. 3F. As shown in Fig. 3F, the results of the 2D and/or 3D object detection with distance and velocity determination and/or object tracking in 2D or 3D space using Doppler-radar based analysis, based on radar camera fusion data, include a plurality of bounding boxes, each corresponding to an identified object, and each identifying or highlighting the position of the object, the class of the object (e.g., truck, human, car, etc.), the distance or depth to the object (from the radar sensor(s) or vehicle), and the velocity of the object, and/or the like. This is superior to conventional ADAS systems that use a camera only (in which case it is difficult to estimate the distance and velocity of detected objects) or that use radar only (in which case it is difficult to classify objects).

[0088] In some cases, the distance or depth data may represent an average distance between the radar sensor(s) and a large object (e.g., a truck). For example, the 15.5 m distance result for the truck in the left portion of the resultant image in Fig. 3F may represent an average distance between the radar sensor(s) and the truck. In some embodiments, the output as shown in Fig. 3F may be displayed on the display screen of the mobile phone or other mobile device, or may be displayed on an augmented reality ("AR") headset (or mixed reality ("MR") headset, or the like), or may be displayed on a central console display screen of the vehicle or a heads-up display ("HUD") of the vehicle, or the like.

[0089] These and other functions of the examples 300 and 300' (and their components or results) are described in greater detail below with respect to Figs. 1, 2, and 4.

[0090] Figs. 4A-4G (collectively, "Fig. 4") are flow diagrams illustrating a method 400 for implementing ADAS with a camera(s) on a windshield of a vehicle and a mobile device, in accordance with various embodiments.

[0091] While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the method 400 illustrated by Fig. 4 can be implemented by or with (and, in some cases, is described below with respect to) the systems, examples, or embodiments 100, 200, 200', 200", 200'", 200"", 300, and 300' of Figs. 1, 2A, 2B, 2C, 2D, 2E, 3A-3E, and 3F, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation. Similarly, while each of the systems, examples, or embodiments 100, 200, 200', 200", 200'", 200"", 300, and 300' of Figs. 1, 2A, 2B, 2C, 2D, 2E, 3A-3E, and 3F, respectively (or components thereof), can operate according to the method 400 illustrated by Fig. 4 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100, 200, 200', 200", 200'", 200"", 300, and 300' of Figs. 1, 2A, 2B, 2C, 2D, 2E, 3A-3E, and 3F can each also operate according to other modes of operation and/or perform other suitable procedures.

[0092] In the non-limiting embodiment of Fig. 4A, method 400, at block 405, may comprise receiving, using a computing system on a mobile device, one or more first images from a first camera that is mounted to a first position on a windshield of a first vehicle. In some embodiments, the computing system may comprise at least one of a driver assistance system, a radar data processing system, an object detection system, an object detection and ranging system, a positioning and mapping system, an image processing system, an image data fusing system, a graphics engine, a processor on the mobile device, at least one central processing unit ("CPU") on the mobile device, at least one graphics processing unit ("GPU") on the mobile device, a machine learning system, an artificial intelligence ("AI") system, a deep learning system, a neural network, a convolutional neural network ("CNN"), a deep neural network ("DNN"), or a fully convolutional network ("FCN"), and/or the like. In some instances, the mobile device comprises at least one of a smartphone, a tablet computer, a display device, an augmented reality ("AR") device, a virtual reality ("VR") device, or a mixed reality ("MR") device, and/or the like. In some instances, the first camera may comprise one of a windshield camera or a camera that is integrated with the mobile device, and/or the like.

[0093] At block 410, method 400 may comprise pre-processing, using the computing system on the mobile device, the received one or more first images using one or more image processing operations to prepare the received one or more first images for analysis. In some cases, the one or more image processing operations may include, without limitation, at least one of de-hazing, de-blurring, pre-whitening, resizing, aligning, cropping, or formatting, and/or the like.

[0094] Method 400 may further comprise, at block 415, receiving, using the computing system on the mobile device, first radio detection and ranging ("radar") data from a first radar sensor that is mounted behind the windshield of the first vehicle. In some cases, the first radar sensor may comprise at least one antenna disposed on an integrated circuit ("IC") chip. In some instances, the at least one antenna may comprise one of a single IC-based antenna disposed on the IC chip, a plurality of IC-based antennas arranged as a one-dimensional ("1D") line of antennas disposed on the IC chip, or a 2D array of IC-based antennas disposed on the IC chip, and/or the like. In some cases, a radar signal emitted from the at least one antenna may be projected orthogonally from a surface of the IC chip on which the at least one antenna is disposed.

[0095] Method 400, at block 420, may comprise pre-processing, using at least one of the first radar sensor or the computing system on the mobile device, the received first radar data using one or more radar data processing operations to prepare the received first radar data for analysis. In some instances, the one or more radar data processing operations may include, without limitation, at least one of data cleaning, data augmentation, projection of the radar data to the same coordinate system as the one or more first images, or de-noising, and/or the like.

[0096] Method 400 may further comprise fusing, using the computing system on the mobile device, the received one or more first images and the received first radar data to generate first fused data (block 425a); and analyzing, using the computing system on the mobile device, the generated first fused data to perform object detection and tracking to identify, highlight, and track one or more first objects that are located in front of the first vehicle (block 425b). Merely by way of example, in some cases, the one or more first objects may include, but are not limited to, at least one of one or more people, one or more animals, one or more second vehicles, one or more traffic signs, one or more traffic lights, one or more roadway obstructions, or one or more other objects, and/or the like.

[0097] In some instances, performing object detection and tracking (at block 425b) may comprise performing at least one of two-dimensional ("2D") object detection with distance and velocity determination, three-dimensional ("3D") object detection with distance and velocity determination, or object tracking in 2D or 3D space using Doppler-radar based analysis, and/or the like. Alternatively, or additionally, analyzing the generated first fused data (at block 425b) may further comprise analyzing, using the computing system on the mobile device, the generated first fused data to perform at least one of simultaneous location and mapping ("SLAM") or depth estimation, and/or the like.

[0098] At block 430, method 400 may comprise presenting, using the computing system on the mobile device and on a display device, the identified, highlighted, and tracked one or more first objects.

[0099] With reference to the non-limiting example of Fig. 4B, method 400 may further comprise receiving, using the computing system on the mobile device, one or more second images from a second camera that is mounted to a second position on the windshield of the first vehicle (block 435); and pre-processing, using the computing system on the mobile device, the received one or more second images using the one or more image processing operations to prepare the received one or more second images for analysis (block 440). In such embodiments, generating the first fused data (at block 425a) may comprise fusing, using the computing system on the mobile device, the received one or more first images, the received one or more second images, and the received first radar data to generate the first fused data (block 425a').

[0100] Referring to the non-limiting example of Fig. 4C, method 400 may further comprise receiving, using the computing system on the mobile device, at least one of second radar data from a second radar sensor, lidar data from one or more lidar sensors, ultrasound data from one or more ultrasound sensors, or infrared image data from one or more infrared cameras, and/or the like, that may be mounted on the first vehicle and that may be communicatively coupled to the mobile device (block 445); and pre-processing, using at least one of the first radar sensor or the computing system on the mobile device, the at least one of the second radar data, the lidar data, the ultrasound data, or the infrared image data, and/or the like, in preparation for analysis (block 450). In such embodiments, generating the first fused data (at block 425a) may comprise fusing, using the computing system on the mobile device, the received one or more first images, the received first radar data, and at least one of the second radar data, the lidar data, the ultrasound data, or the infrared image data, and/or the like, to generate the first fused data (block 425a").

[0101] Turning to the non-limiting example of Fig. 4D, fusing the received one or more first images and the received first radar data to generate the first fused data, and analyzing the generated first fused data (at block 425) may comprise at least one of: performing early-stage fusion (block 455); performing middle-stage fusion (block 460); or performing late-stage fusion (block 465); and/or the like.

[0102] With reference to the non-limiting example of Fig. 4E, performing early-stage fusion (at block 455) may comprise concatenating the received one or more first images and the received first radar data at a data level, by matching image pixels of the received one or more first images with radar point cloud data, to generate second fused data as input to a neural network (block 455a); and analyzing the generated second fused data using the neural network to generate a bounding box for each first object (block 455b).

[0103] Referring to the non-limiting example of Fig. 4F, performing middle-stage fusion (at block 460) may comprise mapping radar point cloud data to the image coordinate system to form a point cloud image (block 460a); concatenating the received one or more first images and the point cloud image at a feature level to generate third fused data as input to the neural network (block 460b); and analyzing the generated third fused data using the neural network to generate a bounding box for each first object (block 460c).

[0104] Turning to the non-limiting example of Fig. 4G, performing late-stage fusion (at block 465) may comprise analyzing the received one or more first images to identify and highlight one or more second objects in front of the first vehicle that are captured by the first camera (block 465a); analyzing the received radar data to identify and highlight one or more third objects in front of the first vehicle that are detected by the radar sensor (block 465b); concatenating the identified and highlighted one or more second objects and the identified and highlighted one or more third objects at a target level to generate fourth fused data as input to the neural network (block 465c); and analyzing the generated fourth fused data using the neural network to generate a bounding box for each first object (block 465d); and/or the like.

[0105] Examples of System and Hardware Implementation

[0106] Fig. 5 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments. Fig. 5 provides a schematic illustration of one embodiment of a computer system 500 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., mobile device 110, computing system(s) 115, etc.), as described above. It should be noted that Fig. 5 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. Fig. 5, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.

[0107] The computer or hardware system 500 - which might represent an embodiment of the computer or hardware system (i.e., mobile device 110, computing system(s) 115, etc.), described above with respect to Figs. 1-4 - is shown comprising hardware elements that can be electrically coupled via a bus 505 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 510, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 515, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 520, which can include, without limitation, a display device, a printer, and/or the like.

[0108] The computer or hardware system 500 may further include (and/or be in communication with) one or more storage devices 525, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, or a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.

[0109] The computer or hardware system 500 might also include a communications subsystem 530, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. The communications subsystem 530 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system 500 will further comprise a working memory 535, which can include a RAM or ROM device, as described above.

[0110] The computer or hardware system 500 also may comprise software elements, shown as being currently located within the working memory 535, including an operating system 540, device drivers, executable libraries, and/or other code, such as one or more application programs 545, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.

[0111] A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 525 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 500. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer or hardware system 500 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 500 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.

[0112] It will be apparent to those skilled in the art that substantial variations may be made in accordance with particular requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.

[0113] As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system 500) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system 500 in response to processor 510 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 540 and/or other code, such as an application program 545) contained in the working memory 535. Such instructions may be read into the working memory 535 from another computer readable medium, such as one or more of the storage device(s) 525. Merely by way of example, execution of the sequences of instructions contained in the working memory 535 might cause the processor(s) 510 to perform one or more procedures of the methods described herein.

[0114] The terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion. In an embodiment implemented using the computer or hardware system 500, various computer readable media might be involved in providing instructions/code to processor(s) 510 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 525. Volatile media includes, without limitation, dynamic memory, such as the working memory 535. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 505, as well as the various components of the communication subsystem 530 (and/or the media by which the communications subsystem 530 provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radiowave and infra-red data communications).

[0115] Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.

[0116] Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 510 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 500. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.

[0117] The communications subsystem 530 (and/or components thereof) generally will receive the signals, and the bus 505 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 535, from which the processor(s) 510 retrieves and executes the instructions. The instructions received by the working memory 535 may optionally be stored on a storage device 525 either before or after execution by the processor(s) 510.

[0118] While particular features and aspects have been described with respect to some embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the methods and processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods provided by various embodiments are not limited to any particular structural and/or functional architecture but instead can be implemented on any suitable hardware, firmware and/or software configuration. Similarly, while particular functionality is ascribed to particular system components, unless the context dictates otherwise, this functionality need not be limited to such and can be distributed among various other system components in accordance with the several embodiments.

[0119] Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with or without particular features for ease of description and to illustrate some aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise. Consequently, although several embodiments are described above, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.