Title:
CHANGING VEHICLE CONFIGURATION BASED ON VEHICLE STORAGE COMPARTMENT CONTENTS
Document Type and Number:
WIPO Patent Application WO/2019/172927
Kind Code:
A1
Abstract:
The present invention extends to methods, systems, and computer program products for changing vehicle configuration based on vehicle storage compartment contents. At an autonomous vehicle, a camera is mounted inside a storage compartment. The camera monitors the interior of the storage compartment. The camera can confirm that the storage compartment is empty when it is supposed to be empty and contains an object when it is supposed to contain an object. Any discrepancies can be reported to a human operator. The human operator can instruct the autonomous vehicle to change configuration to address discrepancies. In one aspect, a machine-learning camera memorizes a background pattern permeated to a surface of the storage compartment. The machine-learning camera detects objects in the storage compartment based on disturbances to the background pattern.

Inventors:
SCHMIDT DAVID (US)
KREDER RICHARD (US)
Application Number:
PCT/US2018/021717
Publication Date:
September 12, 2019
Filing Date:
March 09, 2018
Assignee:
FORD GLOBAL TECH LLC (US)
International Classes:
B60K28/08; B60R22/00; B60W10/00; G06N5/00; G06Q10/08; H04W4/02
Foreign References:
US7455225B12008-11-25
US20140036072A12014-02-06
US7151447B12006-12-19
US20160196527A12016-07-07
US20030125855A12003-07-03
Attorney, Agent or Firm:
STEVENS, David, R. (US)
Claims:
CLAIMS

1. At a vehicle, a method comprising:

detecting an event purported to alter the content of a compartment at the vehicle;

a camera monitoring the compartment for disturbances relative to a background image permeated on an interior surface of the compartment after the event;

determining if the content of the compartment accords with a defined event outcome based on any monitored disturbances; and

modifying the configuration of the vehicle based on the determination.

2. The method of claim 1, further comprising the camera memorizing the background image prior to detecting the event.

3. The method of claim 1, wherein detecting an event purported to alter the content of a compartment at the vehicle comprises detecting an event purported to remove an object from the interior of the compartment.

4. The method of claim 3, wherein determining if the content of the compartment accords with a defined event outcome comprises detecting the presence of an object in the compartment; and

wherein modifying the configuration of the vehicle based on the determination comprises:

sending network communication to notify a computer system that the compartment contains the object;

receiving instructions from the computer system indicating how to proceed to facilitate removal of the object from the compartment; and

operating the vehicle in accordance with the received instructions.

5. The method of claim 1, wherein detecting an event purported to alter the content of a compartment at the vehicle comprises detecting an event purported to place an object in the interior of the compartment.

6. The method of claim 5, wherein determining if the content of the compartment accords with a defined event outcome comprises determining the compartment is empty; and

wherein modifying the configuration of the vehicle based on the determination comprises:

sending network communication to notify a computer system that the compartment is empty;

receiving instructions from the computer system indicating how to proceed to facilitate pickup of the object; and

operating the vehicle in accordance with the received instructions.

7. A method at a vehicle, the method comprising:

a camera memorizing a background image permeated on an interior surface of a vehicle compartment at the vehicle;

detecting an event purported to alter the number of objects contained in the vehicle compartment;

the camera monitoring the vehicle compartment for any disturbance relative to the background image after the event;

determining if the contents of the vehicle compartment are appropriate based on the event and any monitored disturbance; and

modifying the configuration of the vehicle to respond to the determination.

8. The method of claim 7, wherein a camera memorizing a background image permeated on an interior surface of a vehicle compartment comprises a machine-learning camera learning spectral, spatial, and temporal features of the background image.

9. The method of claim 7, wherein a camera memorizing a background image permeated on an interior surface of a vehicle compartment comprises a machine-learning camera memorizing a background image including a principal feature or a pattern.

10. The method of claim 7, wherein a camera memorizing a background image permeated on an interior surface of a vehicle compartment comprises a machine-learning camera memorizing a background image that is outside the visible light spectrum.

11. The method of claim 7, wherein detecting an event purported to alter the number of objects contained in the vehicle compartment comprises detecting an event purported to remove all objects from the interior of the vehicle compartment;

wherein the camera monitoring the vehicle compartment for any disturbance relative to the background image comprises the camera detecting a disturbance relative to the background image after the event;

wherein determining if the contents of the vehicle compartment are appropriate comprises determining that the contents of the vehicle compartment are inappropriate based on the camera detecting the disturbance; and

wherein modifying the configuration of the vehicle to respond to the determination comprises:

sending network communication to notify a computer system that the vehicle compartment contains the object;

receiving instructions from the computer system indicating how to facilitate removal of the object from the vehicle compartment; and

operating the vehicle in accordance with the received instructions.

12. The method as recited in claim 11, wherein operating the vehicle in accordance with the received instructions comprises driving the vehicle to a designated location.

13. The method of claim 7, wherein detecting an event purported to alter the number of objects contained in the vehicle compartment comprises detecting an event purported to insert an object into the interior of the vehicle compartment;

wherein the camera monitoring the vehicle compartment for any disturbance relative to the background image comprises the camera failing to detect a disturbance relative to the background image after the event;

wherein determining if the contents of the vehicle compartment are appropriate comprises determining that the contents of the vehicle compartment are inappropriate based on the camera failing to detect a disturbance; and

wherein modifying the configuration of the vehicle to respond to the determination comprises:

sending network communication to notify a computer system that the vehicle compartment is empty;

receiving instructions from the computer system indicating how to facilitate pick up of the object; and

operating the vehicle in accordance with the received instructions.

14. The method of claim 13, wherein operating the vehicle in accordance with the received instructions comprises driving the vehicle to a designated location.

15. A vehicle, the vehicle comprising:

a compartment having an interior surface permeated with a background image;

a camera mounted inside the compartment;

a processor; and

system memory coupled to the processor and storing instructions configured to:

cause the camera to memorize the background image;

cause the processor to detect an event purported to alter the number of objects contained in the compartment;

cause the camera to monitor the compartment for any disturbance relative to the background image after the event;

cause the processor to determine if the contents of the compartment are appropriate based on the event and any monitored disturbance; and

cause the processor to modify the configuration of the vehicle to respond to the determination.

16. The vehicle of claim 15, wherein instructions configured to cause the camera to memorize the background image comprise instructions configured to cause the camera to learn spectral, spatial, and temporal features of a principal feature or a pattern of the background image.

17. The vehicle of claim 15, wherein instructions configured to cause the processor to detect an event purported to alter the number of objects contained in the compartment comprise instructions configured to cause the processor to detect an event purported to remove all objects from the interior of the compartment;

wherein instructions configured to cause the camera to monitor the compartment for any disturbances relative to the background image comprise instructions configured to cause the camera to detect a disturbance relative to the background image after the event;

wherein instructions configured to cause the processor to determine if the contents of the compartment are appropriate comprise instructions configured to cause the processor to determine that the contents of the vehicle compartment are inappropriate based on the camera detecting the disturbance and the event purporting to remove all objects from the interior of the compartment; and

wherein instructions configured to cause the processor to modify the configuration of the vehicle to respond to the determination comprise instructions configured to cause the processor to:

send network communication to notify a computer system that the compartment contains the object;

receive an indication from the computer system indicating how to facilitate removal of the object from the compartment; and

operate the vehicle in accordance with the received indication.

18. The vehicle of claim 17, wherein instructions configured to cause the processor to operate the vehicle in accordance with the received indication comprise instructions configured to cause the processor to drive the vehicle to a designated location.

19. The vehicle of claim 15, wherein instructions configured to cause the processor to detect an event purported to alter the number of objects contained in the compartment comprise instructions configured to cause the processor to detect an event purported to insert an object into the interior of the compartment;

wherein instructions configured to cause the camera to monitor the compartment for any disturbances relative to the background image comprise instructions configured to cause the camera to fail to detect a disturbance relative to the background image after the event;

wherein instructions configured to cause the processor to determine if the content of the compartment is appropriate comprise instructions configured to cause the processor to determine that the content of the compartment is inappropriate based on the camera failing to detect a disturbance and the event purporting to insert an object into the interior of the compartment; and

wherein instructions configured to cause the processor to modify the configuration of the vehicle to respond to the determination comprise instructions configured to cause the processor to:

send network communication to notify a computer system that the compartment is empty;

receive an indication from the computer system indicating how to facilitate pick up of the object; and

operate the vehicle in accordance with the received indication.

20. The vehicle of claim 19, wherein instructions configured to cause the processor to operate the vehicle in accordance with the received indication comprise instructions configured to cause the processor to drive the vehicle to a designated location.

Description:
CHANGING VEHICLE CONFIGURATION

BASED ON VEHICLE STORAGE COMPARTMENT CONTENTS

BACKGROUND

[0001] 1. Field of the Invention

[0002] This invention relates generally to the field of changing vehicle configurations, and, more particularly, to changing vehicle configuration based on the contents of vehicle storage compartments.

[0003] 2. Related Art

[0004] Autonomous vehicles (AVs) can be equipped with various (and possibly secured) storage compartments that can be used for object delivery and/or object pickup. For example, an autonomous pizza delivery vehicle can include a pizza warming oven for keeping pizzas warm during transit to a customer. Similarly, an autonomous grocery delivery vehicle can include a refrigerator, a freezer, and another storage compartment for other grocery items (possibly for grocery bags) to prevent food from spoiling during transit to a customer. Likewise, an autonomous package delivery vehicle can include one or more storage compartments for holding packages in transit to a customer.

[0005] In other cases, an autonomous vehicle with a storage compartment is sent to a customer to accept a returned object. The customer can place the returned object into the storage compartment and the autonomous vehicle can return to a designated location, such as, for example, a warehouse, a store, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] The specific features, aspects and advantages of the present invention will become better understood with regard to the following description and accompanying drawings where:

[0008] Figure 1 illustrates an example block diagram of a computing device.

[0009] Figure 2 illustrates an example computer architecture that facilitates network communication between an autonomous vehicle and other electronic devices.

[0010] Figure 3A illustrates an example computer architecture that facilitates delivery of an object from a vehicle storage compartment.

[0011] Figure 3B illustrates an example computer architecture that facilitates pickup of an object into a vehicle storage compartment.

[0012] Figure 4 illustrates a flow chart of an example method for changing the configuration of an autonomous vehicle based on the contents of a vehicle storage compartment.

[0013] Figure 5A illustrates an example vehicle including a vehicle compartment.

[0014] Figure 5B illustrates an example background pattern.

[0015] Figure 5C illustrates the example background pattern of Figure 5B permeated to an interior surface of the vehicle compartment of Figure 5A.

[0016] Figure 5D illustrates an example view of objects in the vehicle compartment of Figure 5A on top of the example background pattern of Figure 5B.

[0017] Figure 5E illustrates an example magnified view of the objects of Figure 5D on top of the example background pattern of Figure 5B.

[0018] Figure 5F illustrates an example image captured by a camera mounted above the example background pattern of Figure 5B in the vehicle compartment of Figure 5A.

[0019] Figure 5G illustrates another example view of an object in the vehicle compartment of Figure 5A on top of the example background pattern of Figure 5B.

[0020] Figure 5H illustrates an example magnified view of the object of Figure 5G on top of the example background pattern of Figure 5B.

[0021] Figure 5I illustrates another example image captured by the camera mounted above the example background pattern of Figure 5B in the vehicle compartment of Figure 5A.

DETAILED DESCRIPTION

[0022] The present invention extends to methods, systems, and computer program products for changing vehicle configuration based on vehicle storage compartment contents. In some aspects, an autonomous vehicle is used for delivering an object. For example, an object can be placed in a vehicle storage compartment and the autonomous vehicle can then travel to a customer location. At the customer location, a customer can remove the object from the vehicle storage compartment. After the object is removed, the autonomous vehicle can return to a designated location, for example, back to a store or a warehouse.

[0023] In other aspects, an autonomous vehicle is used to pick up an object. For example, the autonomous vehicle can travel to a customer location with an empty vehicle storage compartment. At the customer location, a customer can place the returned object into the vehicle storage compartment. After the object is placed into the vehicle storage compartment, the autonomous vehicle can return to a designated location, for example, back to a store or a warehouse. At the designated location, an employee can then remove the object from the vehicle storage compartment.

[0024] Generally, it is appropriate to ensure that vehicle storage compartments are actually empty when expected to be empty and actually contain an (appropriate or correct) object when expected to contain the (appropriate or correct) object. However, an autonomous vehicle may not include a human. As such, vehicle storage compartments of an autonomous vehicle can be electronically monitored both before and after travel to a customer location and before and after a customer contact. For example, cameras mounted inside vehicle storage compartments can be used to monitor the interior of the vehicle storage compartments.

[0025] For object deliveries, a camera can be used to monitor a vehicle storage compartment after purported loading at a loading location (e.g., to confirm presence of a delivery object in the vehicle storage compartment) and after purported unloading at a customer location (e.g., to confirm the object has been retrieved). For object pickups, a camera can be used to monitor vehicle storage compartments prior to leaving for a customer location (e.g., to confirm the vehicle storage compartment is empty) and after purported loading at the customer location (e.g., to confirm presence of a returned object in the vehicle storage compartment).

[0026] In one aspect, a machine-learning camera is used to monitor a vehicle storage compartment. The machine-learning camera is mounted inside the vehicle storage compartment. An artificially created background is permeated onto an interior surface of the vehicle storage compartment (e.g., a surface where objects are placed for transport). The artificially created background can include a principal feature or a known pattern. The artificially created background can be configured to help objects stand out and reduce the likelihood of objects blending in with the artificially created background.

[0027] The machine-learning camera memorizes the artificially created background, for example, as a reference image, including learning specific features (e.g., one or more of spectral, spatial, and temporal features) that can be used to characterize the background appearance of specific regions of the interior of the vehicle storage compartment. Image processing decision rules can be derived for background classification of the principal feature or known pattern.
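For illustration only, the memorization step might be sketched in Python with OpenCV as below. The camera index, frame count, and simple frame averaging are assumptions made for this sketch; an actual machine-learning camera would learn richer spectral, spatial, and temporal features rather than a single averaged reference image.

    # Hedged sketch of "memorizing" the permeated background as a reference image.
    # Assumes an OpenCV-compatible camera; the index and frame count are arbitrary.
    import cv2
    import numpy as np

    def memorize_background(camera_index: int = 0, num_frames: int = 30) -> np.ndarray:
        """Average several frames of the empty compartment into a reference image."""
        cap = cv2.VideoCapture(camera_index)
        if not cap.isOpened():
            raise RuntimeError("compartment camera not available")
        accumulator = None
        captured = 0
        while captured < num_frames:
            ok, frame = cap.read()
            if not ok:
                break  # camera stopped delivering frames; keep what we have
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            accumulator = gray if accumulator is None else accumulator + gray
            captured += 1
        cap.release()
        if captured == 0:
            raise RuntimeError("no frames captured from compartment camera")
        return (accumulator / captured).astype(np.uint8)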

[0028] The machine-learning camera can detect any changes or disturbances to the background caused by objects present on the surface (within the vehicle storage container). A non-zero difference between the reference image and a current image of the artificially created background can indicate a disturbance. Thus, a foreground object can be detected through change classification of the principal background feature or known pattern. Upon detecting an object in a vehicle storage compartment, a human can confirm if the object is authorized. Detection of an unauthorized object can occur when a foreign object is present in a vehicle storage compartment but the vehicle storage container should be empty. Upon detection of a foreign or unauthorized object, the machine-learning camera can provide imagery from inside the vehicle storage container to another computer system.
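As a rough illustration of that non-zero-difference test, a disturbance check against the reference image could look like the following; the pixel threshold and minimum disturbed-pixel fraction are invented tuning values, not figures from the patent.

    # Hedged sketch of the disturbance test: a thresholded absolute difference
    # between the memorized reference image and a current image of the surface.
    import cv2
    import numpy as np

    def detect_disturbance(reference: np.ndarray, current: np.ndarray,
                           pixel_threshold: int = 25,
                           min_fraction: float = 0.01) -> bool:
        """Return True when enough pixels deviate from the background reference."""
        diff = cv2.absdiff(reference, current)            # per-pixel difference
        disturbed = np.count_nonzero(diff > pixel_threshold)
        return disturbed / diff.size >= min_fraction      # "non-zero difference"

Run per region rather than over the whole frame, the same comparison would also localize a foreground object within the compartment.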

[0029] In one aspect, the computer system is at a central hub where a human can assess the disposition of and/or identify the foreign object. If a foreign object is a nefarious, dangerous, or hazardous object, the human can take precautionary actions with an autonomous vehicle and notify the proper authorities. If the foreign object belongs to a customer, the customer can be notified via text, email, or voice that the foreign object was left in the vehicle storage container. The human can also have the autonomous vehicle stay at or return to a customer location. A foreign object belonging to a customer can be a delivery object the customer failed to retrieve from the vehicle storage compartment or an object (e.g., cell phone, keys, etc.) the customer inadvertently placed in the vehicle storage compartment. If the foreign object is a “nuisance” object, such as, an empty bag or box, the human can allow the autonomous vehicle to return to a designated location (e.g., to a store or warehouse or to another delivery location).

[0030] In one aspect, an artificially created background is inside the visible light spectrum and is visible to the human eye. In another aspect, an artificially created background is outside the visible light spectrum and is not visible to the human eye. For example, the artificially created background can be in the InfraRed (IR) spectrum, Ultraviolet (UV) spectrum, etc.

[0031] As such, prior to object delivery, a machine-learning camera can monitor a vehicle storage compartment at a loading location to confirm that the vehicle storage compartment contains one or more objects (e.g., pizzas, groceries, packages, boxes, bags, etc.) for delivery to the customer. After a customer delivery, the machine-learning camera can monitor the vehicle storage compartment to confirm that the vehicle storage compartment is empty. If the vehicle storage compartment is not empty, a human operator can be notified and can take appropriate action.

[0032] Similarly, prior to object pickup, the vehicle storage compartment can be monitored to confirm that the compartment is empty. If the vehicle storage compartment is not empty, a human operator can be notified and can take appropriate action, such as, for example, returning the autonomous vehicle to a warehouse or store for unloading. After a customer pickup, the vehicle storage compartment can be monitored to confirm that the compartment includes an (authorized) object for return. If the vehicle storage compartment is empty, a human operator can be notified and take appropriate action. For example, the customer can be notified via text, email, or voice that the returned object was not placed in the vehicle storage container.

[0033] In one aspect, when a customer is returning an object, a vehicle storage compartment is monitored after any object is placed in the vehicle storage container. The machine-learning camera can provide imagery from inside the vehicle storage container to another computer system (e.g., a central hub). Based on the imagery, a human operator can confirm that the object placed in the storage container is the (authorized) returned object. If the object placed in the vehicle storage compartment is not the (authorized) returned object and is otherwise benign, the customer can be notified via text, email, or voice that the returned object was not the object placed in the vehicle storage container. If the object placed in the vehicle storage compartment is a nefarious, dangerous, or hazardous object, the human operator can take precautionary actions with an autonomous vehicle and notify the proper authorities.

[0034] Thus, in general, electronically monitoring vehicle storage containers facilitates changes to autonomous vehicle configuration to ensure proper object delivery and object pickup, address use of vehicle storage compartments for nefarious purposes, and assist in recovering objects inadvertently and/or improperly left in vehicle storage compartments.

[0035] Figure 1 illustrates an example block diagram of a computing device 100. Computing device 100 can be used to perform various procedures, such as those discussed herein. Computing device 100 can function as a server, a client, or any other computing entity. Computing device 100 can perform various communication and data transfer functions as described herein and can execute one or more application programs, such as the application programs described herein. Computing device 100 can be any of a wide variety of computing devices, such as a mobile telephone or other mobile device, a desktop computer, a notebook computer, a server computer, a handheld computer, tablet computer and the like.

[0036] Computing device 100 includes one or more processor(s) 102, one or more memory device(s) 104, one or more interface(s) 106, one or more mass storage device(s) 108, one or more Input/Output (I/O) device(s) 110, and a display device 130 all of which are coupled to a bus 112. Processor(s) 102 include one or more processors or controllers that execute instructions stored in memory device(s) 104 and/or mass storage device(s) 108. Processor(s) 102 may also include various types of computer storage media, such as cache memory.

[0037] Memory device(s) 104 include various computer storage media, such as volatile memory (e.g., random access memory (RAM) 114) and/or nonvolatile memory (e.g., read-only memory (ROM) 116). Memory device(s) 104 may also include rewritable ROM, such as Flash memory.

[0038] Mass storage device(s) 108 include various computer storage media, such as magnetic tapes, magnetic disks, optical disks, solid state memory (e.g., Flash memory), and so forth. As depicted in Figure 1, a particular mass storage device is a hard disk drive 124. Various drives may also be included in mass storage device(s) 108 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 108 include removable media 126 and/or non-removable media.

[0039] I/O device(s) 110 include various devices that allow data and/or other information to be input to or retrieved from computing device 100. Example I/O device(s) 110 include cursor control devices, keyboards, keypads, barcode scanners, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, cameras, lenses, radars, CCDs or other image capture devices, and the like.

[0040] Display device 130 includes any type of device capable of displaying information to one or more users of computing device 100. Examples of display device 130 include a monitor, display terminal, video projection device, and the like.

[0041] Interface(s) 106 include various interfaces that allow computing device 100 to interact with other systems, devices, or computing environments as well as humans. Example interface(s) 106 can include any number of different network interfaces 120, such as interfaces to personal area networks (PANs), local area networks (LANs), wide area networks (WANs), wireless networks (e.g., near field communication (NFC), Bluetooth, Wi-Fi, etc., networks), and the Internet. Other interfaces include user interface 118 and peripheral device interface 122.

[0042] Bus 112 allows processor(s) 102, memory device(s) 104, interface(s) 106, mass storage device(s) 108, and I/O device(s) 110 to communicate with one another, as well as other devices or components coupled to bus 112. Bus 112 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.

[0043] Figure 2 illustrates an example computer architecture 200 that facilitates network communication between an autonomous vehicle 210 and other electronic devices. Autonomous vehicle 210 can be a land-based vehicle having a plurality of wheels, such as, for example, a car, a van, a light truck, etc. and can operate fully autonomously under virtually all conditions. Autonomous vehicle 210 can be instructed to follow a path, travel to one or more destinations, etc., and can safely travel along roadways to move between locations as instructed. Autonomous vehicle 210 may also include manual operator controls so that a driver can operate autonomous vehicle 210 when appropriate.

[0044] As depicted, autonomous vehicle 210 includes Vehicle-to-Infrastructure (V-to-I) interface 211, powertrain controller 212, brake controller 213, steering controller 214, computing device 215, sensors 216, and storage compartment 217. Computing device 215 can perform computations for piloting autonomous vehicle 210 during autonomous operation. Computing device 215 can receive information regarding the operation, status, configuration, etc., of autonomous vehicle 210 and corresponding components from sensors 216. Computing device 215 can make decisions with respect to controlling autonomous vehicle 210 based on information received from sensors 216.

[0045] Sensors 216 can include a variety of devices for monitoring the operating components of autonomous vehicle 210 (e.g., tires, wheels, brakes, throttle, engine, etc.), monitoring an environment surrounding autonomous vehicle 210 (e.g., for other vehicles, for pedestrians, for cyclists, for static obstacles, etc.), and monitoring storage compartment 217. Sensors 216 can include cameras, LIDAR sensors, Radar sensors, ultrasonic sensors, etc.

[0046] For example, a radar fixed to a front bumper (not shown) of the vehicle 210 may provide a distance from autonomous vehicle 210 to a next vehicle in front of the vehicle 210. A global positioning system (GPS) sensor at autonomous vehicle 210 may provide geographical coordinates of autonomous vehicle 210. The distance(s) provided by the radar and/or other sensors 216 and/or the geographical coordinates provided by the GPS sensor can be used to facilitate autonomous operation of autonomous vehicle 210.

[0047] Computing device 215 can include any of the components described with respect to computing device 100. Computing device 215 can include programs for controlling vehicle components, including: brakes, propulsion (e.g., by controlling a combustion engine, an electric motor, a hybrid engine, etc.), steering, climate control, interior and/or exterior lights, etc. Computing device 215 can also determine whether it or a human operator is in control of autonomous vehicle 210.

[0048] Computing device 215 can be communicatively coupled, for example, via a vehicle communications bus, to other computing devices and/or controllers at autonomous vehicle 210. For example, computing device 215 can be coupled to powertrain controller 212, brake controller 213, and steering controller 214 via a communications bus to monitor and/or control various corresponding vehicle components. In one aspect, V-to-I interface 211, computing device 215, sensors 216, powertrain controller 212, brake controller 213, and steering controller 214 as well as any other computing devices and/or controllers are connected via a vehicle communication network, such as, a controller area network (CAN). V-to-I interface 211, computing device 215, sensors 216, powertrain controller 212, brake controller 213, and steering controller 214 as well as any other computing devices and/or controllers can create message related data and exchange message related data via the vehicle communication network.

[0049] V-to-I interface 211 can include a network interface for wired and/or wireless communication with other devices via network 230. Server computer 220 and user mobile device 260 can also include network interfaces for wired and/or wireless communication with other devices via network 230. As such, each of autonomous vehicle 210, server computer 220, and user mobile device 260, as well as their respective components, can be connected to one another over (or be part of) network 230, such as, for example, a LAN, a WAN, and even the Internet. Accordingly, autonomous vehicle 210, server computer 220, and user mobile device 260, as well as any other connected computer systems or vehicles and their components, can create message related data and exchange message related data (e.g., near field communication (NFC) payloads, Bluetooth packets, Internet Protocol (IP) datagrams and other higher layer protocols that utilize IP datagrams, such as, Transmission Control Protocol (TCP), Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), etc.) over network 230. In one aspect, V-to-I interface 211 also facilitates vehicle-to-vehicle (V-to-V) communication via ad hoc networks formed among autonomous vehicle 210 and other nearby vehicles.
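Purely to illustrate one such exchange, a compartment-status notification from V-to-I interface 211 to a central hub might be sent over HTTP as sketched below; the endpoint URL and JSON fields are invented for the example and are not part of the patent.

    # Hedged sketch of sending a compartment-status notification over network 230
    # via HTTP. The hub URL and payload schema are assumptions for illustration.
    import json
    import urllib.request

    def notify_hub(status: str, vehicle_id: str = "AV-210") -> None:
        """Post a small JSON status message to a (hypothetical) central hub."""
        payload = json.dumps({"vehicle": vehicle_id, "compartment": status}).encode()
        request = urllib.request.Request(
            "https://hub.example.com/notifications",  # hypothetical hub endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)  # the hub may later reply with instructions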

[0050] Figure 3A illustrates an example computer architecture 300 that facilitates delivery of an object from a vehicle storage compartment. As depicted in computer architecture 300, storage compartment 217 further includes surface 301 and (e.g., machine-learning) camera 303. Surface 301 is permeated with background 302. Camera 303 can memorize background 302 (e.g., as a reference image) and subsequently monitor storage compartment 217 for any disturbance to background 302. A disturbance, such as, a non-zero difference between a current image of surface 301 and the reference image, can indicate the presence of an object in storage compartment 217.

As depicted, object 321 (e.g., a package) is currently contained in storage compartment 217. As such, camera 303 can detect a disturbance in background 302 indicative of an object being contained in storage compartment 217. Autonomous vehicle 210 can be dispatched to a location of person 313 so that person 313 can remove object 321 from storage compartment 217. Subsequent to arriving at the location, computing device 215 can detect storage compartment 217 being opened and then closed (e.g., via a contact sensor on a door, lid, top, etc. of storage compartment 217) purportedly to remove object 321 (a removal event). After storage compartment 217 is closed, camera 303 can monitor storage compartment 217 for any disturbance in background 302.
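A minimal sketch of detecting such an open-then-close event appears below; read_door_contact is a hypothetical callable standing in for whatever contact-sensor interface sensors 216 actually expose.

    # Hedged sketch of detecting an open-then-close event on the compartment door.
    # read_door_contact is a hypothetical callable returning True while the door
    # is open; an open followed by a close counts as a purported content change.
    import time

    def wait_for_compartment_event(read_door_contact, poll_seconds: float = 0.1) -> None:
        """Block until the compartment door is opened and then closed again."""
        while not read_door_contact():   # wait for the door to open
            time.sleep(poll_seconds)
        while read_door_contact():       # wait for the door to close
            time.sleep(poll_seconds)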

[0051] In one aspect, person 313 is a customer and object 321 is being delivered to the customer. In another aspect, person 313 is a worker and object 321 is being returned to a store, a warehouse, or other return location.

[0052] It may be that camera 303’s monitoring does not detect any disturbances in background 302. As such, camera 303 considers storage compartment 217 to be empty. In response, vehicle 210 can automatically proceed to a designated location, for example, back to a warehouse or store (e.g., to pick up another package). Alternatively, V-to-I interface 211 can send notification 331 to central hub 311 notifying central hub 311 that storage compartment 217 is empty. Human operator 312 can view notification 331. In response, human operator 312 can send instructions 332 to vehicle 210 instructing vehicle 210 to proceed to a designated location.

[0053] In another aspect, camera 303 detects a disturbance in background 302 indicating the presence of an object. In response, V-to-I interface 211 can send notification 331 to central hub 311 notifying central hub 311 that an object was detected in storage compartment 217. Camera 303 can also send imagery 334 of the interior of storage compartment 217 (through V-to-I interface 211) to central hub 311. Human operator 312 can receive notification 331 and view imagery 334.

[0054] Imagery 334 may depict that object 321 is still contained in storage compartment 217. In response, human operator 312 can send message 333 (e.g., text, email, etc.) to mobile device 314. Message 333 can notify person 313 that object 321 was left in storage compartment 217. Human operator 312 can also send instructions 332 to vehicle 210. Instructions 332 can instruct vehicle 210 to return to or remain at the location so that person 313 can remove object 321 from storage compartment 217.

[0055] Imagery 334 may alternately depict object 322 contained in storage compartment 217. Object 322 may be an object that person 313 intentionally or inadvertently placed in or left in storage compartment 217.

[0056] Object 322 may be a personal item of person 313, such as, for example, a phone (mobile device 314) or keys. In response, human operator 312 can send message 333 to mobile device 314. When message 333 is an email message, message 333 can also be received at other devices associated with person 313. Message 333 can notify person 313 that object 322 was left in storage compartment 217. Human operator 312 can also send instructions 332 to vehicle 210. Instructions 332 can instruct vehicle 210 to return to or remain at the location so that person 313 can retrieve object 322 from storage compartment 217.

[0057] Object 322 may be a “nuisance” object, such as, a leftover bag, box, or other packaging associated with object 321. In response, human operator 312 can send instructions 332 to vehicle 210 instructing vehicle 210 to proceed to a designated location.

[0058] Object 322 may be a dangerous or hazardous object (e.g., explosives, chemicals, etc.). In response, human operator 312 can send instructions 332 to vehicle 210 instructing vehicle 210 to proceed to a designated safer location (e.g., away from other vehicles and people). Human operator 312 can also notify authorities including passing along the identity and last known location of person 313.

[0059] It may also be that multiple objects are contained in storage compartment 217 for delivery. If less than all of the objects are removed, human operator 312 can notify person 313 to retrieve any remaining objects.

[0060] Figure 3B illustrates an example computer architecture 350 that facilitates pickup of an object into a vehicle storage compartment. Vehicle 210 can be dispatched to a location of person 343 with storage compartment 217 empty so that person 343 can place an authorized object 361 in storage compartment 217. Subsequent to arriving at the location, computing device 215 can detect storage compartment 217 being opened and then closed (e.g., via a contact sensor on a door, lid, top, etc. of storage compartment 217) purportedly to insert authorized object 361 (an insertion event). After storage compartment 217 is closed, camera 303 can monitor storage compartment 217 for any disturbance in background 302.

[0061] In one aspect, person 343 is a customer and object 361 is being returned by the customer. In another aspect, person 343 is an employee and object 361 is being loaded into storage compartment 217 for delivery to a customer.

[0062] It may be that camera 303’s monitoring does not detect any disturbances in background 302. As such, camera 303 considers storage compartment 217 to be empty. In response, V-to-I interface 211 can send notification 371 to central hub 311 notifying central hub 311 that storage compartment 217 is empty. Camera 303 can also send imagery 374 of the interior of storage compartment 217 (through V-to-I interface 211) to central hub 311. Human operator 312 can receive notification 371 and view imagery 374.

[0063] In response, human operator 312 can send message 373 (e.g., text, email, etc.) to mobile device 344. Message 373 can notify person 343 that storage compartment 217 remains empty and that authorized object 361 is to be inserted into storage compartment 217. Human operator 312 can also send instructions 372 to vehicle 210. Instructions 372 can instruct vehicle 210 to return to or remain at the location so that person 343 can insert authorized object 361 into storage compartment 217.

[0064] In another aspect, camera 303 detects a disturbance in background 302 indicating the presence of an object. In response, V-to-I interface 211 can send notification 371 to central hub 311 notifying central hub 311 that storage compartment 217 contains an object. Camera 303 can also send imagery 374 of the interior of storage compartment 217 (through V-to-I interface 211) to central hub 311. Human operator 312 can receive notification 371 and view imagery 374.

[0065] Imagery 374 may depict that authorized object 361 is the only object contained in storage compartment 217. In response, human operator 312 can send instructions 372 to vehicle 210 instructing vehicle 210 to proceed to a designated location, such as, a delivery location or a return location.

[0066] Imagery 374 may alternately depict that unauthorized object 362 is contained in storage compartment 217 (either alone or along with authorized object 361). Object 362 may be an object that person 343 intentionally or inadvertently placed in or left in storage compartment 217.

[0067] Object 362 may be a personal item of person 343, such as, for example, a phone (mobile device 344) or keys, an incorrect package, etc. In response, human operator 312 can send message 373 to mobile device 344. When message 373 is an email message, message 373 can also be received at other devices associated with person 343. Message 373 can notify person 343 that object 362 is to be retrieved from storage compartment 217 and that only authorized object 361 is to be inserted into storage compartment 217. Human operator 312 can also send instructions 372 to vehicle 210. Instructions 372 can instruct vehicle 210 to return to or remain at the location so that person 343 can retrieve object 362 from storage compartment 217 and possibly insert authorized object 361 into storage compartment 217.

[0068] Object 362 may be a “nuisance” object that is not authorized but is otherwise benign. If object 362 is a “nuisance” object and authorized object 361 is not contained in storage compartment 217, a response can be similar to the response when object 362 is a personal item. On the other hand, if object 362 is a “nuisance” object and authorized object 361 is also contained in storage compartment 217, human operator 312 can send instructions 372 to vehicle 210 instructing vehicle 210 to proceed to a designated location, such as, a delivery location or a return location.

[0069] Object 362 may be a dangerous or hazardous object. In response (and whether or not authorized object 361 is also contained in storage compartment 217), human operator 312 can send instructions 372 to vehicle 210 instructing vehicle 210 to proceed to a designated safer location (e.g., away from other vehicles and people). Human operator 312 can also notify authorities including passing along the identity and last known location of person 343.

[0070] Thus, generally, an autonomous vehicle can detect an event purported to alter the content of a vehicle compartment. For example, an autonomous vehicle can detect opening and closing a vehicle compartment (an event) to purportedly remove an object from or insert an object into the vehicle compartment. A machine-learning camera can monitor the vehicle compartment for any disturbances relative to a (e.g., previously memorized) background image permeated on an interior surface of the vehicle compartment after the event. For example, the machine-learning camera can monitor a vehicle compartment for any disturbances relative to a background pattern after the vehicle compartment is opened and closed.

[0071] The autonomous vehicle can determine if the content of the vehicle compartment accords with a defined event outcome based on any monitored disturbances. For example, if the autonomous vehicle was making a delivery, the autonomous vehicle can determine if the vehicle compartment is empty based on any monitored disturbances after the vehicle compartment was opened and closed. If the autonomous vehicle was making a pickup, the autonomous vehicle can determine the presence of an object in the vehicle compartment based on any monitored disturbances after the vehicle compartment was opened and closed.

[0072] The autonomous vehicle can modify the configuration of the autonomous vehicle based on the determination. Modifying the configuration of the autonomous vehicle can include sending a notification to a central hub, sending imagery to a central hub, staying at a location, driving back to a prior location, driving to a new location, driving to a safer location, etc.

[0073] Figure 4 illustrates a flow chart of an example method 400 for changing the configuration of an autonomous vehicle based on the contents of a vehicle storage compartment. Method 400 will be described with respect to the components and data in computer architectures 300 and 350.

[0074] Method 400 includes a camera memorizing a background image permeated on an interior surface of a vehicle compartment at the vehicle (401). For example, camera 303 can memorize background 302. Method 400 includes detecting an event purported to alter the number of objects contained in the vehicle compartment (402). For example, computing device 215 can detect person 313 opening and closing storage compartment 217 to purportedly remove object 321 (a removal event). Alternatively, computing device 215 can detect person 343 opening and closing storage compartment 217 to purportedly insert object 361 (an insertion event).

[0075] Method 400 includes the camera monitoring the vehicle compartment for any disturbance relative to the background image after the event (403). For example, camera 303 can monitor storage compartment 217 for any disturbance relative to background 302 after person 313 purportedly removed object 321. Alternatively, camera 303 can monitor storage compartment 217 for any disturbance relative to background 302 after person 343 purportedly inserted object 361.

[0076] Method 400 includes determining if the contents of the vehicle compartment are appropriate based on the event and any monitored disturbance (404). For example, computing device 215 can determine if the contents of storage compartment 217 are appropriate or inappropriate based on person 313 purporting to remove object 321 (the removal event) and any monitored disturbance in background 302. Computing device 215 can consider the contents of storage compartment 217 to be appropriate when camera 303 considers storage compartment 217 to be empty after the removal event. On the other hand, computing device 215 can consider the contents of storage compartment 217 to be inappropriate when camera 303 detects the presence of an object in storage compartment 217 after the removal event.

[0077] Alternatively, computing device 215 can determine if the contents of storage compartment 217 are appropriate based on person 343 purporting to insert object 361 into storage compartment 217 (the insertion event) and any monitored disturbance in background 302. Computing device 215 can consider the contents of storage compartment 217 to be appropriate when camera 303 detects the presence of an object in storage compartment 217 after the insertion event (although human confirmation based on imagery may still occur). On the other hand, computing device 215 can consider the contents of storage compartment 217 to be inappropriate when camera 303 considers storage compartment 217 to be empty after the insertion event.

[0078] Method 400 includes modifying the configuration of the vehicle to respond to the determination (405). For example, the configuration of vehicle 210 can be modified to respond to a determination that storage compartment 217 is appropriately or inappropriately empty or appropriately or inappropriately contains an object. Modifying the configuration of vehicle 210 can include sending a notification to a central hub, sending imagery to a central hub, staying at a location, driving back to a prior location, driving to a new location, driving to a safer location, etc. How the configuration of vehicle 210 is modified can vary depending on the contents of storage compartment 217 matching or not matching an expected outcome.

[0079] For example, if vehicle 210 is making a delivery, detecting that storage compartment 217 is empty after customer contact is an expected outcome. As such, changing the configuration of vehicle 210 can include instructing vehicle 210 to drive to a new location. On the other hand, detecting that storage compartment 217 still contains an object after customer contact is an unexpected outcome. As such, changing the configuration of vehicle 210 can include sending a notification and imagery to a central hub. Depending on whether the object belongs to the customer, is a “nuisance” object, or is a dangerous or hazardous object, vehicle 210 can be instructed to stay at a location, return to a warehouse or store, or drive to a safer location, respectively.
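Tying the earlier sketches together, steps 401 through 405 might compose as follows; capture_frame and drive_to are placeholder stubs, and the appropriateness rule simply encodes the delivery and pickup expectations described above.

    # Hedged end-to-end sketch of method 400, reusing memorize_background,
    # wait_for_compartment_event, detect_disturbance, and notify_hub from the
    # earlier sketches. capture_frame and drive_to are placeholder stubs.
    import numpy as np

    def capture_frame() -> np.ndarray:
        raise NotImplementedError("grab one frame from the compartment camera")

    def drive_to(location: str) -> None:
        print(f"driving to {location}")  # stand-in for the vehicle drive system

    def run_method_400(read_door_contact, event_is_removal: bool) -> None:
        reference = memorize_background()              # 401: memorize background image
        wait_for_compartment_event(read_door_contact)  # 402: detect purported event
        current = capture_frame()                      # 403: monitor after the event
        disturbed = detect_disturbance(reference, current)
        # 404: a removal should leave no disturbance; an insertion should create one
        appropriate = disturbed != event_is_removal
        # 405: modify the vehicle configuration to respond to the determination
        if appropriate:
            drive_to("designated location")
        else:
            notify_hub("discrepancy")                  # hub can reply with instructions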

[0080] The configuration of vehicle 210 can be similarly varied when picking up an object depending on the contents of storage compartment 217 matching or not matching an expected outcome.

[0081] Figure 5A illustrates an example vehicle 500 including a vehicle compartment 501 (i.e., a pizza warming oven). Figure 5B illustrates an example background pattern 502. Figure 5C illustrates the example background pattern 502 permeated to an interior surface of the vehicle compartment 501. A machine-learning camera (not shown) can be mounted to the top of vehicle compartment 501 and have a lens pointed down towards background pattern 502. The machine-learning camera can memorize background pattern 502 as a reference image indicating that vehicle compartment 501 is empty.

[0082] Figure 5D illustrates an example view of objects 511 and 512 (i.e., pizza boxes) in the vehicle compartment 501 on top of background pattern 502. Figure 5E illustrates an example magnified view of the objects 511 and 512 on top of background pattern 502. Figure 5F illustrates an image 522 captured by the machine-learning camera (not shown) mounted to the top of vehicle compartment 501 above background pattern 502. As depicted, objects 511 and 512 are loaded into vehicle compartment 501. Image 522 depicts disturbance 521, a non-zero difference between the reference image and an image of objects 511 and 512 sitting on top of background 502. Disturbance 521 indicates the presence of one or more objects in vehicle compartment 501. If a customer ordered two pizzas, vehicle 500 would expect the presence of one or more objects in vehicle compartment 501. As such, no notifications or warnings are sent to a central hub (e.g., back to a delivery service).

[0083] Figure 5G illustrates another example view of object 511 in vehicle compartment 501 on top of background pattern 502. Object 511 can be a pizza that was not initially retrieved by a customer. Figure 5H illustrates an example magnified view of the object 511 on top of background pattern 502. Figure 5I illustrates image 524 captured by the machine-learning camera (not shown). Image 524 depicts disturbance 523, a non-zero difference between the reference image and an image of object 511 sitting on top of background 502. Disturbance 523 indicates the presence of one or more objects in vehicle compartment 501.

[0084] Since vehicle 500 expects the customer would take both of their pizzas, disturbance 523 can trigger a notification or warning to a central hub (e.g., back to the delivery service). Vehicle 500 can also send imagery similar to Figure 5H to the central hub for evaluation by a human operator. From the imagery, the human operator can tell that the customer left a pizza in vehicle compartment 501. The human operator can then notify the customer via text, email, or voice that they left a pizza in vehicle compartment 501. The human operator can also prevent vehicle 500 from leaving the customer’s delivery location until the customer returns and retrieves their other pizza.

[0085] In one aspect, one or more processors are configured to execute instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) to perform any of a plurality of described operations. The one or more processors can access information from system memory and/or store information in system memory. The one or more processors can transform information between different formats, such as, for example, background features, background patterns, reference images, imagery, notifications, messages, autonomous vehicle instructions, etc.

[0086] System memory can be coupled to the one or more processors and can store instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) executed by the one or more processors. The system memory can also be configured to store any of a plurality of other types of data generated by the described components, such as, for example, background features, background patterns, reference images, imagery, notifications, messages, autonomous vehicle instructions, etc.

[0087] In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

[0088] Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.

[0089] Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.

[0090] An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.

[0091] Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.

[0092] Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, an in-dash or other vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.

[0093] Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.

[0094] It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).

[0095] At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.

[0096] While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications, variations, and combinations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure.