

Title:
VEHICLE-TO-PEDESTRIAN COMMUNICATION SYSTEMS
Document Type and Number:
WIPO Patent Application WO/2018/026803
Kind Code:
A1
Abstract:
Vehicle-to-pedestrian information systems that use directional sound transmission on autonomous vehicles are disclosed. A cloud computing system manages messages for transmission to pedestrians via autonomous vehicles having directional speakers. The cloud computing system identifies pedestrians and identifies messages for the pedestrians. Pedestrians may be known and authenticated to the cloud computing system or may be unknown. The cloud computing system maintains profiles for known pedestrians and transmits messages to vehicles based on the profiles. The cloud computing system keeps track of the location of vehicles and causes the vehicles to use directional speakers to transmit messages to the pedestrians based on the relative positions of the vehicles and the pedestrians.

Inventors:
NEWMAN AUSTIN L (US)
Application Number:
PCT/US2017/044880
Publication Date:
February 08, 2018
Filing Date:
August 01, 2017
Assignee:
NIO USA INC (US)
International Classes:
G08G1/005; B60Q1/00; B60Q1/26; B60Q5/00; B60W30/08; B60W30/095
Domestic Patent References:
WO2016109482A1    2016-07-07
WO2017061795A1    2017-04-13
Foreign References:
US20160167648A1    2016-06-16
US20150346845A1    2015-12-03
US20150035685A1    2015-02-05
US20140346823A1    2014-11-27
US20110090093A1    2011-04-21
Attorney, Agent or Firm:
LENNOX-GENTLE, Thaine (US)
Claims:
What Is Claimed Is:

1. A vehicle-to-pedestrian information system comprising:

a cloud computing system configured to communicate with a vehicle configured for autonomous piloting, the vehicle including a directional speaker,

wherein the cloud computing system is configured to:

identify a message for a pedestrian based on a location of the pedestrian;

transmit the message to the vehicle; and

cause the vehicle to play the message for the pedestrian via the directional speaker.

2. The vehicle-to-pedestrian information system of claim 1, wherein the cloud computing system is further configured to:

authenticate the pedestrian by communicating with a personal device associated with the pedestrian.

3. The vehicle-to-pedestrian information system of claim 2, wherein the cloud computing system is further configured to:

identify a user profile based on the authentication with the pedestrian,

wherein the message is identified based on the user profile.

4. The vehicle-to-pedestrian information system of claim 2, wherein the cloud computing system is further configured to:

identify the location of the pedestrian based on location data reported by the personal device associated with the pedestrian.

5. The vehicle-to-pedestrian information system of claim 1, wherein:

the message comprises a first portion of a composite message, and

the cloud computing system is further configured to:

transmit a second portion of the composite message to a second vehicle for playback to the pedestrian.

6. The vehicle-to-pedestrian information system of claim 5, wherein:

the cloud computing system is configured to instruct the first vehicle to play the first portion of the composite message and the second vehicle to play the second portion of the composite message in a manner that minimizes Doppler shift observed by the pedestrian.

7. The vehicle-to-pedestrian information system of claim 1, wherein:

the message comprises a safety message.

8. The vehicle-to-pedestrian information system of claim 1, wherein the vehicle is configured to:

display a first visual indicator communicating that the vehicle is operating autonomously when the vehicle is operating autonomously; and

display a second visual indicator communicating that the vehicle is operating non-autonomously when the vehicle is operating non-autonomously.

9. An autonomous vehicle capable of communicating information to a pedestrian, the autonomous vehicle comprising:

a steering system and a speed control system;

a directional speaker; and

an on-board computer configured to:

autonomously control the steering system and the speed control system based on environmental conditions and navigation conditions;

receive a message for a pedestrian from a cloud computing system;

determine a location of the pedestrian; and

cause the directional speaker to play the message for the pedestrian based on the location of the pedestrian.

10. The autonomous vehicle of claim 9, wherein:

the pedestrian is authenticated to the cloud computing system via a personal device associated with the pedestrian.

11. The autonomous vehicle of claim 10, wherein:

the message is based on a user profile that is associated with the authenticated pedestrian.

12. The autonomous vehicle of claim 10, wherein determining the location comprises:

receiving the location from the cloud computing system, which previously received the location from the personal device associated with the pedestrian.

13. The autonomous vehicle of claim 9, wherein:

the message comprises a first portion of a composite message; and

the composite message also includes a second portion that is sent to a different autonomous vehicle for playback to the pedestrian.

14. The autonomous vehicle of claim 9, wherein:

the message comprises a safety message.

15. The autonomous vehicle of claim 9, further comprising: a visual indicator display,

wherein the on-board computer is configured to:

display a first visual indicator on the visual indicator display communicating that the vehicle is operating autonomously when the vehicle is operating autonomously; and

display a second visual indicator on the visual indicator display communicating that the vehicle is operating non-autonomously when the vehicle is operating non-autonomously.

16. A method for facilitating vehicle-to-pedestrian communication, the method comprising:

identifying a message for a pedestrian based on a location of the pedestrian;

transmitting the message to an autonomous vehicle that includes a directional speaker; and

causing the vehicle to play the message for the pedestrian via the directional speaker.

17. The method of claim 16, further comprising:

authenticating the pedestrian by communicating with a personal device associated with the pedestrian.

18. The method of claim 17, further comprising:

identifying a user profile based on the authentication with the pedestrian, wherein the message is identified based on the user profile.

19. The method of claim 17, further comprising:

identifying the location of the pedestrian based on location data reported by the personal device associated with the pedestrian.

20. The method of claim 16, wherein:

the message comprises a first portion of a composite message, and the method further comprises:

transmitting a second portion of the composite message to a second vehicle for playback to the pedestrian.

Description:
VEHICLE-TO-PEDESTRIAN COMMUNICATION SYSTEMS

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims the benefits of and priority, under 35 U.S.C. § 119(e), to U.S. Patent Application Serial No. 15/250,356, filed on August 29, 2016, entitled "Vehicle-to-Pedestrian Communication Systems" which claims, under 35 U.S.C. § 119(e), the benefits of and priority to U.S. Provisional Application Serial No. 62/369,799, filed on August 2, 2016, entitled "Vehicle-to-Pedestrian Communication Systems." The entire disclosures of the applications listed above are hereby incorporated by reference, in their entirety, for all that they teach and for all purposes.

FIELD

[0002] The present disclosure relates to vehicle-to-pedestrian communication systems, and, more particularly, to a vehicle-to-pedestrian communication system that uses exterior-focused sound transmission and/or an exterior-focused information display.

BACKGROUND

[0003] An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Numerous companies and research organizations have developed working prototype autonomous vehicles. One area of discussion surrounding the development of autonomous vehicles is how the autonomous vehicles will interact with pedestrians. Without a driver, there is a need for technology which allows the vehicle itself to communicate information to the pedestrian.

SUMMARY

[0004] Vehicle-to-pedestrian information systems that use directional sound transmission on autonomous vehicles are disclosed. A cloud computing system manages messages for transmission to pedestrians via autonomous vehicles having directional speakers. The cloud computing system identifies pedestrians and identifies messages for the pedestrians. Pedestrians may be known and authenticated to the cloud computing system or may be unknown. The cloud computing system maintains profiles for known pedestrians and transmits messages to vehicles based on the profiles. The cloud computing system keeps track of the location of vehicles and causes the vehicles to use directional speakers to transmit messages to the pedestrians based on the relative positions of the vehicles and the pedestrians. The vehicle may also make decisions regarding which messages to play via the directional speakers without control by the cloud-based system. The vehicle may also include a visual indicator that indicates when the vehicle is operating autonomously.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The foregoing Summary and the following Detailed Description will be better understood when read in conjunction with the appended drawings, which illustrate a preferred embodiment of the invention. In the drawings:

[0006] Fig. 1A is a block diagram of a car-to-pedestrian communication system, according to an example;

[0007] Fig. 1B is a block diagram of a computing device that can be any of the servers, the on-board computers within the vehicles, or the personal devices, according to an example;

[0008] Fig. 2 illustrates interactions between the vehicles of the car-to-pedestrian system of Fig. 1A and pedestrians, according to an example;

[0009] Fig. 3 illustrates visual indicators for indicating whether vehicles are operating in autonomous mode, according to an example; and

[0010] Fig. 4 is a flow diagram of a method for performing car-to-pedestrian communication, according to an example.

DETAILED DESCRIPTION

[0011] The present disclosure relates to a vehicle-to-pedestrian information system for autonomous vehicles that uses an exterior-focused sound transmission that directs sound at pedestrians.

[0012] Fig. 1A is a block diagram of a car-to-pedestrian communication system 100, according to an example. As shown, the car-to-pedestrian communication system 100 includes one or more autonomous vehicles 102 and one or more personal devices 104, where the vehicles 102 and personal devices 104 are coupled to each other via a cloud computing system 106. The cloud computing system 106 includes one or more servers 108 coupled to each other and configured to be coupled to the vehicles 102 and personal devices 104. The servers 108 of the cloud computing system 106 are coupled together to transfer data between each other and to perform other functions typically associated with cloud computing systems.

[0013] Communications links 112 exist between the servers 108 in the cloud system 106, between the servers 108 and vehicles 102, and between the servers 108 and personal devices 104. The communications links 112 may be of different kinds and may be any technically feasible link for communicating data between two entities. For example, the communications links 112 may be cellular data links, wireless network links (e.g., 802.11), Bluetooth links, wired networking links (e.g., Ethernet), or any other technically feasible communications links. In one example, the communications links 112 between the servers 108 of the cloud computing system 106 are wired networking links, and the communications links 112 between the vehicles 102 and the servers 108 and between the personal devices 104 and the servers 108 are cellular networking links.

[0014] The vehicles 102 have standard vehicle systems such as an electric motor and/or engine, a steering system, a speed control system (engine, accelerator, brakes, gear shifter, and the like), a lighting system, and other systems typically present in vehicles. The vehicles 102 have the capability to perform at least some driving functions autonomously. For example, the vehicles 102 are able to control speed and steering based on road layouts (e.g., maps), current traffic, environmental conditions (e.g., weather), speed limits, obstacles (including pedestrians and other obstacles), signal conditions (e.g., traffic lights, etc.), and on other factors. In some embodiments, the vehicles 102 are able to drive fully or nearly-fully autonomously, controlling all driving functions such as acceleration, gears, braking, and steering, as well as navigation. The vehicles 102 include sensors 114 for performing the above tasks. The sensors 114 include any technically feasible sensors for detecting environmental conditions, such as speed, presence of objects, and other factors and may include sensors such as cameras, radar sensors, sonar sensors, and other technically feasible sensors. The vehicles 102 include an on-board computer 103 that processes data received from sensors 114 to autonomously control the vehicle 102.

[0015] The vehicles 102 also include output devices 116 for communicating with pedestrians. The output devices 116 include either or both of a directional speaker and a visual signal generator. The directional speaker generates sound waves that travel in a specific direction towards a particular target. Using directional speakers allows the vehicles 102 to deliver sound messages to specific pedestrians and avoid delivering such messages to other pedestrians in different locations. This function can be achieved in any technically feasible manner, with any technically feasible directional speaker included in output devices 116. The on-board computer 103 outputs information (e.g., sound and visual information) via output devices 116.

[0016] In one example, a directional speaker consists of an array of ultrasonic transducers that produce two modulated ultrasound waves. Ultrasound waves have high frequencies that are inaudible to humans. The ultrasound travels out from a directional speaker in a narrowly focused beam, since higher-frequency waves have a relatively shorter wavelength and diffract less as they travel. When the two ultrasound waves hit someone, they mix together via parametric interaction, producing a new wave with a much lower frequency which humans can hear. Persons standing outside the beam traveling out of the directional speaker cannot hear anything because the sound waves do not diverge from the source of the sound. Since a directional speaker sends its sound in a much more tightly focused beam than a conventional speaker and with far less energy dissipation, the sound can travel approximately twenty times further than sound from a conventional speaker.
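
As a worked illustration of the parametric interaction described above (the carrier frequencies below are assumed example values, not figures from the disclosure), the audible tone is the difference of the two ultrasonic carrier frequencies:

    f_{\mathrm{audible}} = \lvert f_1 - f_2 \rvert, \qquad \text{e.g., } f_1 = 40\ \mathrm{kHz},\ f_2 = 41\ \mathrm{kHz} \;\Rightarrow\; f_{\mathrm{audible}} = 1\ \mathrm{kHz}.

Both carriers lie well above the roughly 20 kHz upper limit of human hearing, so only listeners inside the beam, where the two waves mix, perceive the resulting tone.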

[0017] The visual signal generator includes one or more lighting modules capable of generating a visual signal to be seen by a pedestrian and to be understood by that pedestrian as indicating that the vehicle 102 is being operated autonomously. Any visual signal may be used for this purpose. In various examples, a solid or blinking light of a specific color may be used, or lighted text may be used.

[0018] The personal device 104 includes a computing device configured to communicate with the servers 108 in the cloud system. Examples of personal devices 104 include smart phones, tablet computers, and laptop computers, and may include any other computing device capable of communicating with the servers 108. The personal device 104 indicates to the vehicles 102 (through the cloud computing system 106 or directly) the presence and location of a pedestrian (under the assumption that the pedestrian is likely carrying that personal device 104) so that the vehicles 102 are able to transmit sound and visual signals to such pedestrians.

[0019] The personal device 104 also identifies pedestrians to the cloud computing system 106 for the purpose of accessing profiles 110. Profiles 110 store data that are used by vehicles 102 to determine what audio messages to play for the associated pedestrian. Profiles 110 include user preferences indicating what each pedestrian wants to hear. For example, user preferences may indicate music, messages (from other people or automatically generated messages), news, advertisements, or any other audio messages indicated as being desired by the pedestrian. The preferences may be set up and/or automatically determined by the cloud computing system 106 based on previous interactions with the user via the personal device 104 associated with the user or via a different computing device, as is generally known.
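
A minimal sketch of how a profile 110 and preference-based message selection might be represented, assuming a Python representation with hypothetical field names (the disclosure does not specify a schema or implementation language):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PedestrianProfile:
        """Hypothetical shape of a profile 110; all field names are assumptions."""
        pedestrian_id: str  # identity authenticated via the personal device 104
        preferred_categories: List[str] = field(default_factory=list)  # e.g. ["news", "music"]
        blocked_categories: List[str] = field(default_factory=list)    # e.g. ["advertisements"]

    def select_message(profile: PedestrianProfile, candidates: List[dict]) -> Optional[dict]:
        """Return the first candidate message whose category the pedestrian wants to hear."""
        for message in candidates:
            category = message["category"]
            if category in profile.preferred_categories and category not in profile.blocked_categories:
                return message
        return None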

[0020] Fig. 1B is a block diagram of a computing device 150 that can be any of the servers 108, the on-board computers 103 within the vehicles 102, or the personal devices 104, according to an example. The device 150 includes a processor 152, a memory 154, a storage device 156, one or more input devices 158, and one or more output devices 160. The device 150 also includes an input driver 162 and an output driver 164. It is understood that the device 150 may include additional components not shown in Fig. 1B.

[0021] The processor 152 includes a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core is a CPU or a GPU. The memory 154 may be located on the same die as the processor 152, or may be located separately from the processor 152. The memory 154 includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.

[0022] The storage device 156 includes a fixed or removable storage, for example, a hard disk drive, a solid state drive, an optical disk, or a flash drive. The input devices 158 include one or more of a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, and a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices 160 include one or more of a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). Other types of input or output devices may also be included.

[0023] The input driver 162 communicates with the processor 152 and the input devices 158, and permits the processor 152 to receive input from the input devices 158. The output driver 164 communicates with the processor 152 and the output devices 160, and permits the processor 152 to send output to the output devices 160.

[0024] Fig. 2 illustrates interactions between the vehicles 102 of the car-to-pedestrian system 100 of Fig. 1A and pedestrians 204, according to an example. Some of the pedestrians carry personal devices 104 illustrated in Fig. 1A and some other pedestrians do not carry such personal devices 104.

[0025] In operation, the cloud computing system 106 tracks the location of pedestrians 204 having personal devices 104. Based on the tracked locations and on the profiles 110, the cloud computing system 106 provides information to one or more vehicles 102 instructing such vehicles to provide audio and/or visual output to one or more pedestrians 204. Tracking location may be done via a global positioning system module included within a personal device 104 associated with a particular pedestrian 204. The association between a personal device 104 and a pedestrian 204 may be made through an app executing in a personal device 104 through which the pedestrian 204 has provided authentication credentials known by the cloud computing system 106 to be associated with that pedestrian 204.
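
A sketch of the kind of location report the app on a personal device 104 might send to the cloud computing system 106; the field names and token handling are assumptions for illustration, not a disclosed protocol:

    import time

    def build_location_report(auth_token: str, latitude: float, longitude: float) -> dict:
        """Assemble a hypothetical location update tying a GPS fix to an authenticated pedestrian."""
        return {
            "auth_token": auth_token,   # credential previously issued by the cloud computing system 106
            "latitude": latitude,       # GPS fix from the personal device 104
            "longitude": longitude,
            "timestamp": time.time(),   # seconds since the epoch, so stale fixes can be discarded
        }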

[0026] For pedestrians 204 having personal devices 104 registered with the cloud computing system 106, the cloud computing system 106 is able to instruct vehicles 102 regarding audio or visual output to provide to such pedestrians 204. To instruct vehicles 102, the cloud computing system 106 identifies one or more vehicles 102 to provide output, based on the location of the vehicles 102 and the pedestrian 204, identifies one or more items to output to the pedestrian 204, and then instructs the identified one or more vehicles 102 to output those items to the pedestrian.

[0027] There are several techniques by which the cloud computing system 106 identifies one or more vehicles 102 and one or more items for the vehicles 102 to output. In one technique, the cloud computing system 106 identifies the vehicle 102 that is closest to the pedestrian 204 and transmits items to that vehicle 102 to transmit to the pedestrian. In another technique, the cloud computing system 106 keeps track of driving paths for multiple vehicles 102 and instructs different vehicles 102 to output items to a pedestrian 204 in a sequence, the sequence being dependent on the driving paths for the multiple vehicles 102. For example, the cloud computing system 106 determines that multiple vehicles 102 will pass the pedestrian 204 on a road and instructs the different vehicles 102 to output different portions of a message to the pedestrian 204 based on the relative location between the different vehicles 102 and the pedestrian. In one such example, based on relative location between vehicles 102 and pedestrians 204, the cloud computing system 106 determines that a first vehicle 102 should play the first four seconds of a message for the pedestrian 204, that a second vehicle 102 should play the second four seconds of the message, that a third vehicle should play the third four seconds of the message, and so on. The cloud computing system 106 thus causes each vehicle 102 to "pass off" the message to another vehicle. In such situations, the different portions of the message played by different vehicles 102 can be said to be parts of a "composite message." In one example, the cloud computing system 106 determines that a change in Doppler shift would occur and causes the vehicles 102 to pass off the message before such a change in Doppler shift would occur, in order to minimize distortion of the signal transmitted to the pedestrian 204. In general, the determination by the cloud computing system 106 of when to pass a message off from one vehicle 102 to another vehicle 102 is based on the total length of the message, how long the vehicle 102 is in range of the pedestrian 204 to which the message is targeted, and the quality with which the message is to be played. For example, if more vehicles are in range than are needed, the multiple vehicles can be used to produce a stereo effect, to increase sound fidelity, or to minimize the Doppler effect.
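
The hand-off logic described above might be sketched as follows; the in-range time windows, the four-second default segment length, and the data shapes are illustrative assumptions rather than details from the disclosure:

    def assign_message_segments(message_length_s, vehicle_windows, segment_s=4.0):
        """Split a message into fixed-length segments and assign each segment to a
        vehicle whose in-range window covers that part of the playback timeline.
        vehicle_windows: list of (vehicle_id, start_s, end_s) relative to playback start.
        Returns a list of (vehicle_id, segment_start_s, segment_end_s)."""
        assignments = []
        t = 0.0
        while t < message_length_s:
            seg_end = min(t + segment_s, message_length_s)
            # Prefer a vehicle whose window fully covers the segment, so no hand-off happens mid-segment.
            candidates = [vid for vid, start, end in vehicle_windows if start <= t and end >= seg_end]
            if not candidates:
                break  # no vehicle in range; the cloud system could retry later or drop the remainder
            assignments.append((candidates[0], t, seg_end))
            t = seg_end
        return assignments

    # Example: a 12-second composite message handed off across three passing vehicles.
    windows = [("vehicle_a", 0.0, 5.0), ("vehicle_b", 3.0, 9.0), ("vehicle_c", 7.0, 14.0)]
    print(assign_message_segments(12.0, windows))
    # [('vehicle_a', 0.0, 4.0), ('vehicle_b', 4.0, 8.0), ('vehicle_c', 8.0, 12.0)]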

[0028] The messages played for the pedestrians 204 comprise anything potentially of interest to the pedestrians 204 and could include, for example, advertisements, music or other audio, radio stations, news alerts, stock quotes, or any other audio content. Another example is a message providing information about an arriving vehicle (such as a taxi or other car service) that allows the pedestrian 204 to know what type of vehicle to look for when the vehicle arrives. Preferences regarding what audio content to play for any particular pedestrian 204 are determined based on the stored profiles 110.

[0029] The above messages can be played for pedestrians with devices 104. Other messages, like safety messages, can be played for pedestrians with or without such devices. Safety messages relate to the presence of one or more vehicles 102, obstacles, or other objects in the environment of a pedestrian 204. In one example, a safety message includes a warning to a pedestrian 204 that the pedestrian 204 is or will be in the path of an object such as a vehicle 102. The safety message may come from the vehicle 102 whose path the pedestrian 204 is in or crossing into or may come from a different vehicle 102. Another example includes a warning that the pedestrian 204 is crossing a road or train tracks in a manner that violates pedestrian laws (e.g., jaywalking) or safety (e.g., crossing train tracks when a train is coming). Yet another example includes a warning that a pedestrian is walking towards a hazard, such as a road-related hazard or building-related hazard. Other safety messages may of course be emitted by the vehicle 102. For pedestrians 204 without devices 104, the cloud computing system 106 determines the location of such pedestrians 204 based on the sensors 114 of one or more vehicles 102. For pedestrians 204 with devices 104, the cloud computing system 106 can use a combination of the global positioning data on such devices 104, the sensors 114 of the vehicles, and triangulation techniques using networking hardware of the vehicles 102 to determine position of the pedestrians 204.

[0030] Fig. 3 illustrates visual indicators 302 for indicating whether vehicles 102 are operating in autonomous mode, according to an example. The vehicle 102 causes the visual indicator 302 to display an indication that the vehicle 102 is operating in non-autonomous mode when the vehicle 102 is operating in non-autonomous mode and to display an indication that the vehicle 102 is operating in autonomous mode when the vehicle 102 is operating in autonomous mode. In one embodiment, the vehicle 102 is in autonomous mode when the driver does not have control of steering and is not in autonomous mode when the driver has control of steering. It should be understood that other embodiments are possible. For example, the visual indicator 302 may provide other information related to the control of a vehicle, such as whether a human is present in an autonomously driven car. This information may be important to a pedestrian because it may be beneficial for the pedestrian to know whether a vehicle is autonomously controlled. For example, the pedestrian would be able to judge the risk of crossing in front of a vehicle depending on whether the vehicle is autonomously controlled or controlled by a human operator.

[0031] In some embodiments, the visual indicator 302 includes a light source or display screen or both. In an exemplary embodiment, the light source is on when a vehicle is being controlled autonomously and off when a vehicle is being controlled non-autonomously. In another exemplary embodiment, the display screen displays a message which indicates whether a vehicle is being controlled autonomously (e.g., "AUTO") or non-autonomously (e.g., "NON-AUTO").
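
A minimal sketch of the indicator logic described in the two paragraphs above, assuming a Python representation; the mode names and display strings are illustrative choices, not values from the disclosure:

    from enum import Enum

    class DrivingMode(Enum):
        AUTONOMOUS = "AUTO"
        NON_AUTONOMOUS = "NON-AUTO"

    def update_visual_indicator(mode: DrivingMode) -> dict:
        """Light on and "AUTO" text when autonomous; light off and "NON-AUTO" text otherwise."""
        return {
            "light_on": mode is DrivingMode.AUTONOMOUS,
            "display_text": mode.value,
        }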

[0032] Fig. 4 is a flow diagram of a method 400 for performing vehicle-to-pedestrian communication, according to an example. Although described with respect to the system shown and described in Figs. 1A-3, it should be understood that any system configured to perform the method, in any technically feasible order, falls within the scope of the present disclosure.

[0033] As shown, the method 400 begins at step 402, where the cloud computing system 106 identifies a pedestrian 204 in range of a vehicle 102. Determination of whether a pedestrian 204 is in range of a vehicle is done by comparing locations of vehicles 102 with locations of pedestrians 204. These locations can be obtained directly from the vehicles 102 and pedestrians 204 (via a device 104). The location of a pedestrian 204 can also be obtained from one or more vehicles 102 that detect the pedestrian via known autonomous-car pedestrian detection techniques. If a pedestrian 204 is within a threshold distance from a vehicle 102, the pedestrian 204 is considered in range of the vehicle 102, which is able to transmit a message to the pedestrian 204.
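
One way the in-range test of step 402 could be implemented is a great-circle distance check between reported coordinates; the haversine formula below is a standard technique, and the 50-meter threshold is an assumed value rather than one taken from the disclosure:

    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two latitude/longitude points, in meters."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def in_range(vehicle_pos, pedestrian_pos, threshold_m=50.0):
        """True if the pedestrian is within the assumed playback range of the vehicle."""
        return distance_m(*vehicle_pos, *pedestrian_pos) <= threshold_m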

[0034] At step 404, the cloud computing system 106 determines whether the pedestrian 204 has an authenticated personal device 104. If the pedestrian 204 has an authenticated personal device 104, then the method proceeds to step 406, and if the pedestrian 204 does not have an authenticated personal device 104, then the method proceeds to step 408. In some embodiments, this determination is made based on how the location information was obtained for the pedestrian 204. For example, if the location information was obtained through a personal device 104, the cloud computing system 106 knows that the pedestrian 204 has an authenticated personal device 104. If the location information for the pedestrian 204 was obtained through sensors 114 of a vehicle 102, then the cloud computing system 106 assumes that the pedestrian 204 does not have an authenticated personal device 104.

[0035] At step 406, the cloud computing system 106 identifies a safety message or a profile-based message to provide to the pedestrian 204. The safety message can be based on environmental information associated with the pedestrian 204, such as whether a vehicle and the pedestrian 204 will cross paths, whether the pedestrian 204 is close to a particular danger, or any other safety message. The profile-based message may include anything related to personalized information for the pedestrian 204, such as bank information, stock information, news items, music, or any other personalized information. At step 408, the cloud computing system 106 identifies a safety message for the pedestrian 204. The safety message is similar to the safety message described for step 406.

[0036] At step 410, the cloud computing system 106 selects a vehicle 102 for playback. The selected vehicle 102 may be the vehicle 102 identified in step 402 or may be a different vehicle. The cloud computing system 106 may also select multiple vehicles to transmit the message in succession. For example, if a large number of vehicles are passing the pedestrian 204, the cloud computing system 106 may cause different portions of the message to be played by different vehicles 102. The cloud computing system 106 would cause the different portions to be played while the vehicles 102 playing such portions are in an "advantageous" playing position. For example, the cloud computing system 106 may select different vehicles 102 based on their proximity to the pedestrian 204 (a closer location resulting in better sound), based on a desire to avoid a Doppler shift, and based on other sound quality considerations. At step 412, the cloud computing system 106 transmits the message to the one or more vehicles 102 for playback.
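
A condensed sketch of the cloud-side flow of method 400 (steps 404 through 412); the cloud object and its methods are hypothetical placeholders standing in for the behaviors described above, not a disclosed API:

    def handle_pedestrian(cloud, pedestrian, vehicles):
        """Illustrative cloud-side flow: authentication check, message selection,
        vehicle selection, and transmission for directional playback."""
        # Step 404: does the pedestrian have an authenticated personal device?
        if cloud.has_authenticated_device(pedestrian):
            # Step 406: either a safety message or a profile-based message.
            message = cloud.select_safety_or_profile_message(pedestrian)
        else:
            # Step 408: safety message only.
            message = cloud.select_safety_message(pedestrian)
        if message is None:
            return
        # Step 410: pick one or more vehicles in an advantageous playing position.
        selections = cloud.select_vehicles_for_playback(message, pedestrian, vehicles)
        # Step 412: transmit the message (or its portions) to the selected vehicles.
        for vehicle, portion in selections:
            cloud.transmit(vehicle, portion, target=pedestrian)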

[0037] Having thus described the presently preferred embodiments in detail, it is to be appreciated and will be apparent to those skilled in the art that many physical changes, only a few of which are exemplified in the detailed description of the invention, could be made without altering the inventive concepts and principles embodied therein. It is also to be appreciated that numerous embodiments incorporating only part of the preferred embodiment are possible which do not alter, with respect to those parts, the inventive concepts and principles embodied therein. The present embodiments and optional configurations are therefore to be considered in all respects as exemplary and/or illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all alternate embodiments and changes to this embodiment which come within the meaning and range of equivalency of said claims are therefore to be embraced therein.

[0038] It should be understood that many variations are possible based on the disclosure herein. Although features and elements are described above in particular combinations, each feature or element may be used alone without the other features and elements or in various combinations with or without other features and elements.

[0039] The methods provided may be implemented in a general purpose computer, a processor, or a processor core. Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors may be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer readable media). The results of such processing may be mask works that are then used in a semiconductor manufacturing process to manufacture a processor which implements aspects of the embodiments.

[0040] The methods or flow charts provided herein may be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).

[0041] Any of the steps, functions, and operations discussed herein can be performed continuously and automatically.

[0042] The exemplary systems and methods of this disclosure have been described in relation to vehicle systems and electric vehicles. However, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claimed disclosure. Specific details are set forth to provide an understanding of the present disclosure. It should, however, be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific detail set forth herein.

[0043] Furthermore, while the exemplary embodiments illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a server, communication device, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system.

[0044] Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire, and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[0045] While the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configurations, and aspects.

[0046] A number of variations and modifications of the disclosure can be used. It would be possible to provide for some features of the disclosure without providing others.

[0047] In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as discrete element circuit, a programmable logic device or gate array such as PLD, PLA, FPGA, PAL, special purpose computer, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Exemplary hardware that can be used for the present disclosure includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.

[0048] In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.

[0049] In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.

[0050] Although the present disclosure describes components and functions implemented in the embodiments with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present disclosure.

[0051] The present disclosure, in various embodiments, configurations, and aspects, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, subcombinations, and subsets thereof. Those of skill in the art will understand how to make and use the systems and methods disclosed herein after understanding the present disclosure. The present disclosure, in various embodiments, configurations, and aspects, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments, configurations, or aspects hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease, and/or reducing cost of implementation.

[0052] The foregoing discussion of the disclosure has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the disclosure are grouped together in one or more embodiments, configurations, or aspects for the purpose of streamlining the disclosure. The features of the embodiments, configurations, or aspects of the disclosure may be combined in alternate embodiments, configurations, or aspects other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment, configuration, or aspect. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.

[0053] Moreover, though the description of the disclosure has included description of one or more embodiments, configurations, or aspects and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights, which include alternative embodiments, configurations, or aspects to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges, or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges, or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

[0054] Embodiments include a vehicle-to-pedestrian information system comprising: a cloud computing system configured to communicate with a vehicle configured for autonomous piloting, the vehicle including a directional speaker, wherein the cloud computing system is configured to: identify a message for a pedestrian based on a location of the pedestrian; transmit the message to the vehicle; and cause the vehicle to play the message for the pedestrian via the directional speaker.

[0055] Aspects of the above system include wherein the cloud computing system is further configured to: authenticate the pedestrian by communicating with a personal device associated with the pedestrian. Aspects of the above system include wherein the cloud computing system is further configured to: identify a user profile based on the authentication with the pedestrian, wherein the message is identified based on the user profile. Aspects of the above system include wherein the cloud computing system is further configured to: identify the location of the pedestrian based on location data reported by the personal device associated with the pedestrian. Aspects of the above system include wherein the message comprises a first portion of a composite message, and the cloud computing system is further configured to: transmit a second portion of the composite message to a second vehicle for playback to the pedestrian. Aspects of the above system include wherein the cloud computing system is configured to instruct the first vehicle to play the first portion of the composite message and the second vehicle to play the second portion of the composite message in a manner that minimizes Doppler shift observed by the pedestrian. Aspects of the above system include wherein the message comprises a safety message. Aspects of the above system include wherein the vehicle is configured to: display a first visual indicator communicating that the vehicle is operating autonomously when the vehicle is operating autonomously; and display a second visual indicator communicating that the vehicle is operating non-autonomously when the vehicle is operating non-autonomously.

[0056] Embodiments include an autonomous vehicle capable of communicating information to a pedestrian, the autonomous vehicle comprising: a steering system and a speed control system; a directional speaker; and an on-board computer configured to: autonomously control the steering system and the speed control system based on environmental conditions and navigation conditions; receive a message for a pedestrian from a cloud computing system; determine a location of the pedestrian; and cause the directional speaker to play the message for the pedestrian based on the location of the pedestrian.

[0057] Aspects of the above vehicle include wherein the pedestrian is authenticated to the cloud computing system via a personal device associated with the pedestrian. Aspects of the above vehicle include wherein the message is based on a user profile that is associated with the authenticated pedestrian. Aspects of the above vehicle include wherein determining the location comprises: receiving the location from the cloud computing system, which previously received the location from the personal device associated with the pedestrian. Aspects of the above vehicle include wherein the message comprises a first portion of a composite message; and the composite message also includes a second portion that is sent to a different autonomous vehicle for playback to the pedestrian. Aspects of the above vehicle include wherein the message comprises a safety message. Aspects of the above vehicle further comprising: a visual indicator display, wherein the on-board computer is configured to: display a first visual indicator on the visual indicator display communicating that the vehicle is operating autonomously when the vehicle is operating autonomously; and display a second visual indicator on the visual indicator display communicating that the vehicle is operating non-autonomously when the vehicle is operating non-autonomously.

[0058] Embodiments include a method for facilitating vehicle-to-pedestrian communication, the method comprising: identifying a message for a pedestrian based on a location of the pedestrian; transmitting the message to an autonomous vehicle that includes a directional speaker; and causing the vehicle to play the message for the pedestrian via the directional speaker.

[0059] Aspects of the above method further comprising: authenticating the pedestrian by communicating with a personal device associated with the pedestrian. Aspects of the above method further comprising: identifying a user profile based on the authentication with the pedestrian, wherein the message is identified based on the user profile. Aspects of the above method further comprising: identifying the location of the pedestrian based on location data reported by the personal device associated with the pedestrian. Aspects of the above method include wherein the message comprises a first portion of a composite message, and the method further comprises: transmitting a second portion of the composite message to a second vehicle for playback to the pedestrian.

[0060] Any one or more of the aspects/embodiments as substantially disclosed herein.

[0061] Any one or more of the aspects/embodiments as substantially disclosed herein optionally in combination with any one or more other aspects/embodiments as substantially disclosed herein.

[0062] One or more means adapted to perform any one or more of the above aspects/embodiments as substantially disclosed herein.

[0063] The phrases "at least one," "one or more," "or," and "and/or" are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions "at least one of A, B and C," "at least one of A, B, or C," "one or more of A, B, and C," "one or more of A, B, or C," "A, B, and/or C," and "A, B, or C" means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.

[0064] The term "a" or "an" entity refers to one or more of that entity. As such, the terms "a" (or "an"), "one or more," and "at least one" can be used interchangeably herein. It is also to be noted that the terms "comprising," "including," and "having" can be used interchangeably.

[0065] The term "automatic" and variations thereof, as used herein, refers to any process or operation, which is typically continuous or semi-continuous, done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be "material."

[0066] Aspects of the present disclosure may take the form of an embodiment that is entirely hardware, an embodiment that is entirely software (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module," or "system." Any combination of one or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.

[0067] A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

[0068] A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

[0069] The terms "determine," "calculate," "compute," and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.