Title:
MOBILE APPARATUS WITH DISPLAY AND LENS TECHNOLOGY
Document Type and Number:
WIPO Patent Application WO/2024/015708
Kind Code:
A1
Abstract:
A system comprising a mobile goods and/or person carrying apparatus (200). The mobile goods and/or person carrying apparatus includes a plurality of wheels (29, 31). The mobile goods and/or person carrying apparatus also includes at least one processor (120) and at least one memory (130) operatively coupled to the processor. The mobile goods and/or person carrying apparatus also includes one or more external-facing displays (6, 7) operatively coupled to the at least one processor and one or more imaging sensors (202) operatively coupled to the at least one processor. The at least one processor is configured to define, using the one or more imaging sensors, an environment in which the mobile goods and/or person carrying apparatus operates. The at least one processor is also configured to display, on at least one of the one or more external-facing displays, media content based on the defined environment.

Inventors:
BARTLETT DOUGLAS JAMES (US)
Application Number:
PCT/US2023/069749
Publication Date:
January 18, 2024
Filing Date:
July 07, 2023
Assignee:
AISLEWORX TECH LLC (US)
International Classes:
B62B3/14; G01C21/20; G01C21/30; H04W4/021
Foreign References:
US20200182634A12020-06-11
US20200302510A12020-09-24
US20190094876A12019-03-28
Attorney, Agent or Firm:
HARDEN, Keith D. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A system comprising: a mobile goods and/or person carrying apparatus including: a plurality of wheels; at least one processor; at least one memory operatively coupled to the at least one processor; one or more external-facing displays operatively coupled to the at least one processor; and one or more imaging sensors operatively coupled to the at least one processor, wherein the at least one processor is configured to: define, using the one or more imaging sensors, an environment in which the mobile goods and/or person carrying apparatus operates; and display, on at least one of the one or more external-facing displays, media content based on the defined environment.

2. The system of Claim 1, wherein each one of the one or more imaging sensors is associated with one of the one or more external-facing displays.

3. The system of Claim 1, wherein to define the environment, the at least one processor is further configured to: receive at least one image of the environment from the one or more imaging sensors; and detect one or more objects in the at least one image.

4. The system of Claim 3, wherein the at least one processor is further configured to select the media content for display based on the detected one or more objects in the at least one image.

5. The system of Claim 4, wherein the media content is selected based on at least one of: a temporal condition; a product detected in the at least one image; an actor detected in the at least one image; a demographic of the actor detected in the at least one image; and external conditions of the environment.

6. The system of Claim 1, wherein, to define the environment, the at least one processor is further configured to: receive at least one image of the environment from the one or more imaging sensors; and create a virtual map of the environment using the at least one image.

7. The system of Claim 6, wherein, to create the virtual map of the environment, the at least one processor is further configured to: create an initialized virtual map using a plurality of frames corresponding to the at least one image of the environment; generate a plurality of three-dimensional (3D) points on the initialized virtual map, the plurality of 3D points corresponding to detected objects in the environment; create a 3D point cloud using the 3D points; perform feature tracking by receiving a plurality of key frames of the environment periodically and estimating a pose of the mobile goods and/or person carrying apparatus by matching environmental features in each key frame with features in a previous key frame; perform loop closure and optimize a pose graph; and update a database based on the loop closure and optimized pose graph.

8. The system of Claim 6, wherein the at least one processor is further configured to select the media content for display based on a location of the mobile goods and/or person carrying apparatus determined using the virtual map of the environment.

9. The system of Claim 6, wherein the at least one processor is further configured to transmit the virtual map and data associated with locations on the virtual map for viewing on an electronic device.

10. The system of Claim 1, wherein the at least one processor is further configured to instruct a transmission, to an electronic device, of a communication including data gathered concerning the environment, wherein the data includes at least one of: a number of actors detected during display of a media content selection; an amount of time associated with a facial orientation of one or more of the actors detected during display of the media content selection; demographics of actors detected during display of the media content selection; emotions of actors detected during display of the media content selection; a number of detected objects of one or more object types; and an alert condition detected within the environment.

11. The system of Claim 1, wherein the at least one processor is further configured to display the media content upon detection of movement of the mobile goods and/or person carrying apparatus.

12. A method comprising: defining, using one or more imaging sensors, an environment in which a mobile goods and/or person carrying apparatus operates, the mobile goods and/or person carrying apparatus including: a plurality of wheels, at least one processor, at least one memory operatively coupled to the at least one processor, one or more external-facing displays operatively coupled to the at least one processor, and one or more imaging sensors operatively coupled to the at least one processor; and displaying, on at least one of the one or more external-facing displays, media content based on the defined environment.

13. The method of Claim 12, wherein each one of the one or more imaging sensors is associated with one of the one or more external-facing displays.

14. The method of Claim 12, wherein defining the environment includes: receiving at least one image of the environment from the one or more imaging sensors; and detecting one or more objects in the at least one image.

15. The method of Claim 14, further comprising selecting the media content for display based on the detected one or more objects in the at least one image.

16. The method of Claim 15, wherein selecting the media content is based on at least one of: a temporal condition; a product detected in the at least one image; an actor detected in the at least one image; a demographic of the actor detected in the at least one image; and external conditions of the environment.

17. The method of Claim 12, wherein defining the environment includes: receiving at least one image of the environment from the one or more imaging sensors; and creating a virtual map of the environment using the at least one image.

18. The method of Claim 17, wherein creating the virtual map of the environment includes: creating an initialized virtual map using a plurality of frames corresponding to the at least one image of the environment; generating a plurality of three-dimensional (3D) points on the initialized virtual map, the plurality of 3D points corresponding to detected objects in the environment; creating a 3D point cloud using the 3D points; performing feature tracking by receiving a plurality of key frames of the environment periodically and estimating a pose of the mobile goods and/or person carrying apparatus by matching environmental features in each key frame with features in a previous key frame; performing loop closure and optimizing a pose graph; and updating a database based on the loop closure and optimized pose graph.

19. The method of Claim 17, further comprising selecting the media content for display based on a location of the mobile goods and/or person carrying apparatus determined using the virtual map of the environment.

20. The method of Claim 17, further comprising transmitting the virtual map and data associated with locations on the virtual map for viewing on an electronic device.

21. The method of Claim 12, further comprising instructing a transmission, to an electronic device, of a communication including data gathered concerning the environment, wherein the data includes at least one of: a number of actors detected during display of a media content selection; an amount of time associated with a facial orientation of one or more of the actors detected during display of the media content selection; demographics of actors detected during display of the media content selection; emotions of actors detected during display of the media content selection; a number of detected objects of one or more object types; and an alert condition detected within the environment.

22. The method of Claim 12, further comprising displaying the media content upon detection of movement of the mobile goods and/or person carrying apparatus.

Description:
MOBILE APPARATUS WITH DISPLAY AND LENS TECHNOLOGY

TECHNICAL FIELD

[0001] This disclosure relates generally to computer vision and machine learning systems. More specifically, this disclosure relates to a mobile apparatus with display and lens technology.

BACKGROUND

[0002] Mobile apparatuses are used at various locations such as grocery stores or other retail stores, airports, hospitals, amusement parks, city streets, and other public spaces. These mobile apparatuses can include various manned apparatuses such as manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc. Given the ubiquity of such mobile apparatuses in public spaces, there exists an opportunity to utilize mobile apparatuses to both gather and provide information on the environments in which they operate.

SUMMARY

[0003] This disclosure relates to a system and methods for a mobile apparatus with display and lens technology.

[0004] In one aspect, a system includes a mobile goods and/or person carrying apparatus. The mobile goods and/or person carrying apparatus includes a plurality of wheels. The mobile goods and/or person carrying apparatus also includes at least one processor. The mobile goods and/or person carrying apparatus also includes at least one memory operatively coupled to the at least one processor. The mobile goods and/or person carrying apparatus also includes one or more external-facing displays operatively coupled to the at least one processor. The mobile goods and/or person carrying apparatus also includes one or more imaging sensors operatively coupled to the at least one processor. The at least one processor is configured to define, using the one or more imaging sensors, an environment in which the mobile goods and/or person carrying apparatus operates. The at least one processor is also configured to display, on at least one of the one or more external-facing displays, media content based on the defined environment.

[0005] In another aspect, a method includes defining, using one or more imaging sensors, an environment in which a mobile goods and/or person carrying apparatus operates. The mobile goods and/or person carrying apparatus includes a plurality of wheels. The mobile goods and/or person carrying apparatus also includes at least one processor. The mobile goods and/or person carrying apparatus also includes at least one memory operatively coupled to the at least one processor. The mobile goods and/or person carrying apparatus also includes one or more external-facing displays operatively coupled to the at least one processor. The mobile goods and/or person carrying apparatus also includes one or more imaging sensors operatively coupled to the at least one processor. The method also includes displaying, on at least one of the one or more external-facing displays, media content based on the defined environment.

[0006] Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.

[0007] Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “transmit,” “receive,” and “communicate,” as well as derivatives thereof, encompass both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, means to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like.

[0008] Moreover, various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.

[0009] As used here, terms and phrases such as “have,” “may have,” “include,” or “may include” a feature (like a number, function, operation, or component such as a part) indicate the existence of the feature and do not exclude the existence of other features. Also, as used here, the phrases “A or B,” “at least one of A and/or B,” or “one or more of A and/or B” may include all possible combinations of A and B. For example, “A or B,” “at least one of A and B,” and “at least one of A or B” may indicate all of (1) including at least one A, (2) including at least one B, or (3) including at least one A and at least one B. Further, as used here, the terms “first” and “second” may modify various components regardless of importance and do not limit the components. These terms are only used to distinguish one component from another. For example, a first user device and a second user device may indicate different user devices from each other, regardless of the order or importance of the devices. A first component may be denoted a second component and vice versa without departing from the scope of this disclosure.

[0010] It will be understood that, when an element (such as a first element) is referred to as being (operatively or communicatively) “coupled with/to” or “connected with/to” another element (such as a second element), it can be coupled or connected with/to the other element directly or via a third element. In contrast, it will be understood that, when an element (such as a first element) is referred to as being “directly coupled with/to” or “directly connected with/to” another element (such as a second element), no other element (such as a third element) intervenes between the element and the other element.

[0011] As used here, the phrase “configured (or set) to” may be interchangeably used with the phrases “suitable for,” “having the capacity to,” “designed to,” “adapted to,” “made to,” or “capable of” depending on the circumstances. The phrase “configured (or set) to” does not essentially mean “specifically designed in hardware to.” Rather, the phrase “configured to” may mean that a device can perform an operation together with another device or parts. For example, the phrase “processor configured (or set) to perform A, B, and C” may mean a general-purpose processor (such as a CPU or application processor) that may perform the operations by executing one or more software programs stored in a memory device or a dedicated processor (such as an embedded processor) for performing the operations.

[0012] The terms and phrases as used here are provided merely to describe some embodiments of this disclosure but not to limit the scope of other embodiments of this disclosure. It is to be understood that the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. All terms and phrases, including technical and scientific terms and phrases, used here have the same meanings as commonly understood by one of ordinary skill in the art to which the embodiments of this disclosure belong. It will be further understood that terms and phrases, such as those defined in commonly-used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined here. In some cases, the terms and phrases defined here may be interpreted to exclude embodiments of this disclosure.

[0013] Examples of an “electronic device” according to embodiments of this disclosure may include a mobile apparatus with display and lens technology, for example an apparatus used in grocery stores or other retail stores, airports, hospitals, amusement parks, city streets, etc., such as manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc. Other examples of an “electronic device” according to embodiments of this disclosure may include at least one of a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop computer, a netbook computer, a workstation, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, an image sensor, or a wearable device (such as smart glasses, a head-mounted device (HMD), electronic clothes, an electronic bracelet, an electronic necklace, an electronic accessory, an electronic tattoo, a smart mirror, or a smart watch). Other examples of an electronic device include a smart home appliance. Examples of the smart home appliance may include at least one of a television, a digital video disc (DVD) player, an audio player, a refrigerator, an air conditioner, a cleaner, an oven, a microwave oven, a washer, a drier, an air cleaner, a set-top box, a home automation control panel, a security control panel, a TV box (such as SAMSUNG HOMESYNC, APPLETV, or GOOGLE TV), a smart speaker or speaker with an integrated digital assistant (such as SAMSUNG GALAXY HOME, APPLE HOMEPOD, or AMAZON ECHO), a gaming console (such as an XBOX, PLAYSTATION, or NINTENDO), an electronic dictionary, an electronic key, a camcorder, or an electronic picture frame. Still other examples of an electronic device include at least one of various medical devices (such as diverse portable medical measuring devices (like a blood sugar measuring device, a heartbeat measuring device, or a body temperature measuring device), a magnetic resonance angiography (MRA) device, a magnetic resonance imaging (MRI) device, a computed tomography (CT) device, an imaging device, or an ultrasonic device), a navigation device, a global positioning system (GPS) receiver, an event data recorder (EDR), a flight data recorder (FDR), an automotive infotainment device, a sailing electronic device (such as a sailing navigation device or a gyro compass), avionics, security devices, vehicular head units, industrial or home robots, automatic teller machines (ATMs), point of sale (POS) devices, or Internet of Things (IoT) devices (such as a bulb, various sensors, electric or gas meter, sprinkler, fire alarm, thermostat, street light, toaster, fitness equipment, hot water tank, heater, or boiler). Other examples of an electronic device include at least one part of a piece of furniture or building/structure, an electronic board, an electronic signature receiving device, a projector, or various measurement devices (such as devices for measuring water, electricity, gas, or electromagnetic waves). Note that, according to various embodiments of this disclosure, an electronic device may be one or a combination of the above-listed devices. According to some embodiments of this disclosure, the electronic device may be a flexible electronic device. The electronic device disclosed here is not limited to the above-listed devices and may include any other electronic devices now known or later developed.

[0014] In the following description, electronic devices are described with reference to the accompanying drawings, according to various embodiments of this disclosure. As used here, the term “user” may denote a human or another device (such as an artificial intelligent electronic device) using the electronic device.

[0015] Definitions for other certain words and phrases may be provided throughout this patent document. Those of ordinary skill in the art should understand that in many if not most instances, such definitions apply to prior as well as future uses of such defined words and phrases.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] For a more complete understanding of this disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:

[0017] FIGURE 1 illustrates an example network configuration including an electronic device in accordance with embodiments of this disclosure;

[0018] FIGURE 2A illustrates a front perspective view of a mobile apparatus in accordance with embodiments of this disclosure;

[0019] FIGURE 2B illustrates a rear perspective view of a mobile apparatus in accordance with embodiments of this disclosure;

[0020] FIGURE 2C illustrates a rear exploded view of a mobile apparatus in accordance with embodiments of this disclosure;

[0021] FIGURE 2D illustrates a top exploded view of a mobile apparatus in accordance with embodiments of this disclosure;

[0022] FIGURE 2E illustrates a side perspective view of a mobile apparatus in accordance with embodiments of this disclosure;

[0023] FIGURE 2F illustrates another side perspective view of a mobile apparatus in accordance with embodiments of this disclosure;

[0024] FIGURE 3A illustrates a front perspective view of a charging apparatus in accordance with embodiments of this disclosure;

[0025] FIGURE 3B illustrates a rear perspective view of a charging apparatus in accordance with embodiments of this disclosure;

[0026] FIGURE 3C illustrates an exploded view of a charging apparatus in accordance with embodiments of this disclosure;

[0027] FIGURES 4A-4D illustrate various views of an interaction between a mobile apparatus and a charging apparatus in accordance with embodiments of this disclosure;

[0028] FIGURE 5 illustrates one example of various internal components of a mobile apparatus and of a charging apparatus in accordance with various embodiments of this disclosure;

[0029] FIGURE 6 illustrates an example communications network architecture in accordance with embodiments of this disclosure;

[0030] FIGURE 7 illustrates an example shopping environment in accordance with embodiments of this disclosure;

[0031] FIGURE 8 illustrates an example environment with multiple beacons in accordance with embodiments of this disclosure;

[0032] FIGURE 9 illustrates an example outdoor environment in accordance with embodiments of this disclosure;

[0033] FIGURE 10 illustrates an example travel hub environment in accordance with embodiments of this disclosure;

[0034] FIGURE 11 illustrates an example communications process in accordance with embodiments of this disclosure;

[0035] FIGURES 12A-12C illustrate an example operational process of a mobile apparatus in accordance with embodiments of this disclosure;

[0036] FIGURE 13 illustrates an example object recognition process in accordance with embodiments of this disclosure;

[0037] FIGURE 14 illustrates an example object recognition system in accordance with embodiments of this disclosure;

[0038] FIGURE 15 illustrates an example virtual mapping process in accordance with embodiments of this disclosure;

[0039] FIGURE 16 illustrates an example virtual mapping and object detection process in accordance with embodiments of this disclosure;

[0040] FIGURE 17 illustrates an example user interface of a user device in accordance with embodiments of this disclosure; and

[0041] FIGURES 18A and 18B illustrate an example user interaction process in accordance with embodiments of this disclosure.

DETAILED DESCRIPTION

[0042] FIGURES 1 through 18B, discussed below, and the various embodiments of this disclosure are described with reference to the accompanying drawings. However, it should be appreciated that this disclosure is not limited to these embodiments, and all changes and/or equivalents or replacements thereto also belong to the scope of this disclosure. The same or similar reference denotations may be used to refer to the same or similar elements throughout the specification and the drawings.

[0043] As noted above, mobile apparatuses are used at various locations such as grocery stores or other retail stores, airports, hospitals, amusement parks, city streets, and other public spaces. These mobile apparatuses can include various manned apparatuses such as manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc. Given the ubiquity of such mobile apparatuses in public spaces, there exists an opportunity to utilize mobile apparatuses to both gather and provide information on the environments in which they operate.

[0044] This disclosure provides various systems, apparatuses, electronic devices, and methods for using mobile apparatuses with display and lens technologies to both gather and provide information on the environments in which they operate. For example, the mobile apparatuses and associated systems of the various embodiments of this disclosure can capture and analyze information regarding environments to create virtual maps of the environments; determine what objects are present within the environments; determine whether actors or people are present within the environments, how many are present, the types of actors or people present, and the facial orientations of those actors or people; determine environmental conditions in the environments; and report any of this information to one or more external systems over a networked architecture. The virtual maps can be used by the mobile apparatuses and associated systems, as well as by people within the environment, to understand current environmental conditions and/or the locations of items of interest within the environments.

[0045] The mobile apparatuses and associated systems of the various embodiments of this disclosure can also receive and present media content for viewing by actors or people within the environment. This media content can be tailored based on the type of mobile apparatus, the type of environment in which the mobile apparatus operates, detected environmental conditions such as detected objects, actors, or people, the location of the mobile apparatus within the particular environment, geographic location of the environment, day of week and/or time of day, alert or emergency conditions detected by or transmitted to the mobile apparatus, etc. The media content can be of various kinds, such as educational content, entertainment content, advertising content, informational content such as information on how to navigate the current environment, public advisory content such as weather, allergen, or pollution alerts, or emergency conditions such as criminal activity or natural disasters, etc.

[0046] To achieve the above, various embodiments of this disclosure provide for a unique combination of a mobile apparatus communicatively connected within a networked architecture and including multiple external displays associated with lens or imaging sensor technology to intimately understand the environment and customize content for the displays using computer vision and machine learning techniques.

[0047] FIGURE 1 illustrates an example network configuration 100 including an electronic device in accordance with this disclosure. The embodiment of the network configuration 100 shown in FIGURE 1 is for illustration only. Other embodiments of the network configuration 100 could be used without departing from the scope of this disclosure.

[0048] According to embodiments of this disclosure, an electronic device 101 is included in the network configuration 100. The electronic device 101 can include at least one of a bus 110, a processor 120, a memory 130, at least one input/output (I/O) interface 150, a display 160, a communication interface 170, or a sensor 180. In some embodiments, the electronic device 101 may exclude at least one of these components or may add at least one other component. The bus 110 includes a circuit for connecting the components 120-180 with one another and for transferring communications (such as control messages and/or data) between the components.

[0049] The processor 120 includes one or more of a central processing unit (CPU), an application processor (AP), or a communication processor (CP). The processor 120 is able to perform control on at least one of the other components of the electronic device 101 and/or perform an operation or data processing relating to communication. In some embodiments, the processor 120 can be a graphics processing unit (GPU). In some cases, the processor 120 can receive and process inputs, such as image data, and the inputs or information associated with the inputs can be analyzed by the processor 120, or provided to another electronic device, to understand a particular environment, such as by performing computer vision and machine learning tasks. The processor 120 can also instruct other devices to perform certain operations, such as display content on one or more displays 160.

[0050] The memory 130 can include a volatile and/or non-volatile memory. For example, the memory 130 can store commands or data related to at least one other component of the electronic device 101. According to embodiments of this disclosure, the memory 130 can store software and/or a program 140. The program 140 includes, for example, a kernel 141, middleware 143, an application programming interface (API) 145, and/or an application program (or “application”) 147. At least a portion of the kernel 141, middleware 143, or API 145 may be denoted as an operating system (OS).

[0051] The kernel 141 can control or manage system resources (such as the bus 110, processor 120, or memory 130) used to perform operations or functions implemented in other programs (such as the middleware 143, API 145, or application 147). The kernel 141 provides an interface that allows the middleware 143, the API 145, or the application 147 to access the individual components of the electronic device 101 to control or manage the system resources. The application 147 includes one or more applications supporting the capture, manipulation, and exchange of data, such as image data, and/or the selection and display of media content. These functions can be performed by a single application or by multiple applications that each carries out one or more of these functions. The middleware 143 can function as a relay to allow the API 145 or the application 147 to communicate data with the kernel 141, for instance. A plurality of applications 147 can be provided. The middleware 143 is able to control work requests received from the applications 147, such as by allocating the priority of using the system resources of the electronic device 101 (like the bus 110, the processor 120, or the memory 130) to at least one of the plurality of applications 147. The API 145 is an interface allowing the application 147 to control functions provided from the kernel 141 or the middleware 143. For example, the API 145 includes at least one interface or function (such as a command) for filing control, window control, display control, image processing, or text control.

[0052] The I/O interfaces 150 serve as an interface that can, for example, transfer commands or data input from a user or other external devices to other component(s) of the electronic device 101. The I/O interfaces 150 can also output commands or data received from other component(s) of the electronic device 101 to the user or the other external device.

[0053] The display 160 includes, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a quantum-dot light emitting diode (QLED) display, a microelectromechanical systems (MEMS) display, or an electronic paper display. The display 160 can also be a depth-aware display, such as a multi-focal display. The display 160 is able to display, for example, various contents (such as text, images, videos, icons, or symbols) to the user. The display 160 can include a touchscreen and may receive, for example, a touch, gesture, proximity, or hovering input using an electronic pen or a body portion of the user.

[0054] The communication interface 170, for example, is able to set up communication between the electronic device 101 and an external electronic device (such as a first electronic device 102, a second electronic device 104, or a server 106). For example, the communication interface 170 can be connected with a network 162 or 164 through wireless or wired communication to communicate with the external electronic device. The communication interface 170 can be a wired or wireless transceiver or any other component for transmitting and receiving signals, such as images.

[0055] The wireless communication is able to use at least one of, for example, WIFI, BLUETOOTH, near field communications (NFC), long term evolution (LTE), long term evolution-advanced (LTE-A), 5th generation wireless system (5G), millimeter-wave or 60 GHz wireless communication, Wireless USB, code division multiple access (CDMA), wideband code division multiple access (WCDMA), universal mobile telecommunication system (UMTS), wireless broadband (WiBro), or global system for mobile communication (GSM), as a cellular communication protocol. The wired connection can include, for example, at least one of a universal serial bus (USB), high definition multimedia interface (HDMI), recommended standard 232 (RS-232), or plain old telephone service (POTS). The network 162 includes at least one communication network, such as a computer network (like a local area network (LAN) or wide area network (WAN)), Internet, or a telephone network. In some embodiments, the communication interface can be a power interface to provide electrical power to another electronic device or charge the electronic device, such as via a wired connection, e.g., NEMA, IEC, USB, USB-C cables, Power over Ethernet (PoE), etc., or a wireless connection, e.g., induction or resonant charging.

[0056] The electronic device 101 further includes one or more sensors 180 that can meter a physical quantity or detect an activation state of the electronic device 101 and convert metered or detected information into an electrical signal. For example, one or more sensors 180 can include one or more imaging sensors, which may be used to capture images of scenes. The sensor(s) 180 can also include one or more buttons for touch input, one or more microphones, a gesture sensor, a gyroscope or gyro sensor, an air pressure sensor, a magnetic sensor or magnetometer, an acceleration sensor or accelerometer, a grip sensor, a proximity sensor, a color sensor (such as an RGB sensor), a bio-physical sensor, a temperature sensor, a humidity sensor, an illumination sensor, an ultraviolet (UV) sensor, an electromyography (EMG) sensor, an electroencephalogram (EEG) sensor, an electrocardiogram (ECG) sensor, an infrared (IR) sensor, an ultrasound sensor, an iris sensor, or a fingerprint sensor. The sensor(s) 180 can further include an inertial measurement unit, which can include one or more accelerometers, gyroscopes, and other components. In addition, the sensor(s) 180 can include a control circuit for controlling at least one of the sensors included here. Any of these sensor(s) 180 can be located within the electronic device 101.

[0057] The first external electronic device 102 or the second external electronic device 104 can be a wearable device or an electronic device-mountable wearable device (such as an HMD). When the electronic device 101 is mounted in the electronic device 102 (such as the HMD), the electronic device 101 can communicate with the electronic device 102 through the communication interface 170. The electronic device 101 can be directly connected with the electronic device 102 to communicate with the electronic device 102 without involving a separate network. The electronic device 101 can also be an augmented reality wearable device, such as eyeglasses, that include one or more imaging sensors.

[0058] The first and second external electronic devices 102 and 104 and server 106 each can be a device of the same or a different type from the electronic device 101. According to certain embodiments of this disclosure, the server 106 includes a group of one or more servers. Also, according to certain embodiments of this disclosure, all or some of the operations executed on the electronic device 101 can be executed on another or multiple other electronic devices (such as the electronic devices 102 and 104 or server 106). Further, according to certain embodiments of this disclosure, when the electronic device 101 should perform some function or service automatically or at a request, the electronic device 101, instead of executing the function or service on its own or additionally, can request another device (such as electronic devices 102 and 104 or server 106) to perform at least some functions associated therewith. The other electronic device (such as electronic devices 102 and 104 or server 106) is able to execute the requested functions or additional functions and transfer a result of the execution to the electronic device 101. The electronic device 101 can provide a requested function or service by processing the received result as it is or additionally. To that end, a cloud computing, distributed computing, or client-server computing technique may be used, for example. While FIGURE 1 shows that the electronic device 101 includes the communication interface 170 to communicate with the external electronic device 104 or server 106 via the network 162, the electronic device 101 may be independently operated without a separate communication function according to some embodiments of this disclosure.

[0059] The server 106 can include the same or similar components as the electronic device 101 (or a suitable subset thereof). The server 106 can support driving the electronic device 101 by performing at least one of the operations (or functions) implemented on the electronic device 101. For example, the server 106 can include a processing module or processor that may support the processor 120 implemented in the electronic device 101. In some cases, the server 106 can support avatar-based exchange of information between electronic devices, such as by receiving user input or information associated with user input from one device and providing the user input or the information associated with the user input (with or without processing by the server 106) to another device.

[0060] Although FIGURE 1 illustrates one example of a network configuration 100 including an electronic device 101, various changes may be made to FIGURE 1. For example, the network configuration 100 could include any number of each component in any suitable arrangement. In general, computing and communication systems come in a wide variety of configurations, and FIGURE 1 does not limit the scope of this disclosure to any particular configuration. Also, while FIGURE 1 illustrates one operational environment in which various features disclosed in this patent document can be used, these features could be used in any other suitable system.

[0061] FIGURES 2A-2F illustrate an example mobile apparatus 200 in accordance with various embodiments of this disclosure. FIGURE 2A illustrates a front perspective view of the mobile apparatus 200. FIGURE 2B illustrates a rear perspective view of the mobile apparatus 200. FIGURE 2C illustrates a rear exploded view of the mobile apparatus 200. FIGURE 2D illustrates a top exploded view of the mobile apparatus 200. FIGURE 2E illustrates a side perspective view of the mobile apparatus 200. FIGURE 2F illustrates another side perspective view of the mobile apparatus 200.

[0062] For ease of explanation, the mobile apparatus 200 is described as being used in the network configuration 100 to support the exchange of information between electronic devices (such as multiple instances of the electronic device 101). However, the mobile apparatus 200 may be used with any other suitable devices and in any other suitable systems. In various embodiments, the mobile apparatus 200 is or incorporates electronic device 101, and can include all or some of the components of electronic device 101, such as the processor(s) 120, the memory 130, and the sensor(s) 180. For purposes of illustration, the mobile apparatus 200 is shown as a shopping cart or stroller, but it will be understood that other types of mobile apparatuses such as described herein can be used without departing from the scope of this disclosure.

[0063] The example mobile apparatus 200 includes a body, housing, or chassis that is designed to accommodate the carrying of goods or people. For instance, as shown in FIGURES 2A-2F, in this example, the mobile apparatus includes an at least partially enclosed area with a seat to accommodate a child passenger, a handle 10 for an adult to push the mobile apparatus 200 via wheeled axle assemblies, and one or more trays to carry personal items and/or purchasable goods, such as a top tray 2 and a rear tray 9. The trays can include a wire mesh cage, a net, or other retaining device to assist with preventing items from falling out of the trays. The at least partially enclosed area can include various entertainment features for the child passenger, including a dashboard screen configured to play a variety of educational or entertainment content, e.g., shows and/or games, and a toy steering wheel. The content displayed on the dashboard screen can be played on a preset loop, on a schedule such as based on day of the week and/or time of day, controlled by the child passenger via dashboard controls, and/or controlled by the adult operator of the mobile apparatus 200, such as via the dashboard controls or via a separate adult screen 11 located on the handle or the trays, which can be disposed in one or both of the top tray 2 or the rear tray 9. In some embodiments, the dashboard screen and the adult screen can be touchscreens, allowing for features of the mobile apparatus 200, such as child entertainment content, to be controlled.

[0064] In some embodiments, the adult screen can also provide entertainment content, or other content such as educational content, entertainment content, advertising content, informational content such as information on how to navigate the current environment, public advisory content such as weather, allergen, or pollution alerts, or emergency conditions such as criminal activity or natural disasters, etc. In some embodiments, the adult screen can also provide other information such as a virtual map of the environment, an electronic copy of a paper map of the environment, planning features such as a list creation user interface to create lists such as shopping or meal component lists, a travel itinerary, a to-do list, etc. In some embodiments, items within a created list can be displayed on the environmental map, such as via icons showing, e.g., the location of a product in a store, an attraction at an amusement park, a restaurant in an area of a city, the emergency room of a hospital, a gate location within an airport, etc. The adult screen can also provide other content such as a searchable list of products, attractions, important points of interest, etc., information on special offers or discounts, customer surveys, such as surveys inquiring whether the user remembers what advertisements they saw or if advertisements influenced their decisions, a self-checkout and/or payment user interface, etc. In some embodiments, the mobile apparatus 200 includes a barcode scanner to allow users to scan barcodes of products to retrieve information, such as price, nutrition facts, etc., or to assist with checkout.
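As a non-limiting illustration of associating items in a created list with locations on the environment map so that icons can be displayed, consider the following Python sketch; the item names, coordinates, and labels are assumptions for illustration only and do not represent any particular implementation:

    # Illustrative sketch: place created-list items on the environment map as icons.
    # Item names, coordinates, and labels are assumptions for illustration only.
    shopping_list = ["milk", "bread", "steak"]

    # Known location of each item on the virtual map, in map coordinates.
    item_locations = {
        "milk":  {"x": 12.5, "y": 3.0, "label": "Dairy, aisle 2"},
        "bread": {"x": 8.0,  "y": 7.5, "label": "Bakery, aisle 5"},
        "steak": {"x": 20.0, "y": 1.5, "label": "Meat counter"},
    }

    def map_icons(items, locations):
        """Return icon placements for every list item with a known location."""
        return [{"item": item, **locations[item]}
                for item in items if item in locations]

    icons = map_icons(shopping_list, item_locations)  # passed to the map renderer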

[0065] In some embodiments, any or all of the features of the adult screen can also be provided via an application downloadable and executable on a mobile device of the user or operator of the mobile apparatus. For example, a user can download an application to the user’s smartphone or other personal computing device, which can provide various information for a location or environment. In some embodiments, the application can provide information on a plurality of different locations of different types, and information associated with each, such as maps, information on points of interest, media content such as advertisements, etc., even when the user is away from the particular location. In some embodiments, this information and additional information can be provided to the user device when at a location, such as via communication between the user device and servers, or via a communication connection established between the mobile apparatus 200 and the user device, such as via an NFC, WIFI, or BLUETOOTH connection. In various embodiments, features, information, and content can be stored in internal memory of the mobile apparatus 200 and provided to the user device over the established connection, can be downloaded or streamed from servers to the mobile apparatus 200 and then to the user device, or data can be provided to the user device via a combination of data accessed from servers by the user device and data provided to the user device by the mobile apparatus 200. In some embodiments, servers, which can be server 106, maintain an account for the user, and, upon logging in using the user device or the tray and/or handle screen, the user can view additional information such as user preferences, loyalty rewards, history information, e.g., products frequently purchased by the user or set as a favorite by the user, or other information associated with the user.

[0066] The mobile apparatus 200 further includes a plurality of external screens, such as side screens 6 and front screen 7 illustrated in FIGURES 2A-2F. The external screens are operated by one or more processors 120 of the mobile apparatus 200 to display media content to both the user of the mobile apparatus 200 and other actors or people proximate to the mobile apparatus 200. The mobile apparatus further includes a plurality of image sensors, such as image sensors 202 disposed near and associated with each of the external screens. Other image sensors can also be disposed at locations on the chassis of the mobile apparatus 200, such as additional image sensors disposed near and associated with the adult screens. The mobile apparatus 200 can also include various other sensors, e.g., MEMS sensors, such as one or more accelerometers, gyroscopic sensors, etc.

[0067] The various displays and imaging technologies incorporated in the mobile apparatus 200 are used to both gather and provide information on the environments in which the mobile apparatus 200 operates. For example, the mobile apparatus 200 and associated systems, such as a cloud server system, can capture images of an environment using the imaging sensors, as well as movement or positional data from the other sensors, and analyze information regarding environments to create virtual maps of the environments using computer vision techniques, such as Visual Simultaneous Localization and Mapping (VSLAM). The mobile apparatus and associated systems can also determine what objects are present within the environments; determine whether actors or people are present within the environments, how many are present, the types of actors or people present, and the facial orientations of those actors or people; determine environmental conditions in the environment; and report any of this information to one or more external systems over a networked architecture. Such actor/object detection can be performed using machine learning techniques for object detection and identification, such as convolutional neural networks.

[0068] The virtual maps can be used by the mobile apparatuses and associated systems, as well as by people within the environment, to understand current environmental conditions and/or the locations of items of interest within the environments. In some embodiments, images are captured by the imaging sensors of the mobile apparatus 200, and mapping and/or actor/object detection are performed by the mobile apparatus 200, such as via applications or processors on the mobile apparatus 200. In some embodiments, one or more of these processes can be distributed, such as by the mobile apparatus 200 performing image capture and movement and positional data capture, and then transmitting the images and data to the server(s) to perform mapping and/or actor/object detection.
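As a non-limiting illustration of the kind of key-frame feature tracking and relative pose estimation used in such VSLAM pipelines, consider the following Python sketch using OpenCV ORB features; the camera intrinsics and parameter values are assumptions for illustration only and do not represent the actual implementation of the mobile apparatus 200:

    # Illustrative sketch: match ORB features between two grayscale key frames and
    # recover the relative rotation R and translation t of the apparatus.
    # The camera matrix K contains assumed example values, not calibrated ones.
    import cv2
    import numpy as np

    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def estimate_relative_pose(prev_frame, key_frame):
        """Match features between consecutive key frames and estimate the pose change."""
        kp1, des1 = orb.detectAndCompute(prev_frame, None)
        kp2, des2 = orb.detectAndCompute(key_frame, None)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t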

[0069] In addition to providing environmental and mapping information, the mobile apparatus 200 and associated systems can also receive and present media content for viewing by actors or people within the environment. For example, a content management server can transmit media content to the mobile apparatus 200 via a communication network such as illustrated in FIGURE 1, with the mobile apparatus 200 receiving, via a communications interface, such as communications interface 170, the media content along with display parameters. These display parameters can be initial default parameters for the content, such as start and stop dates, time of day, day of the week, etc. In some embodiments, the external screens and other components, such as the imaging sensors, processors, communication interfaces, etc., can be in a standby or low power mode until movement of the mobile apparatus 200 is detected by using the accelerometer, causing the mobile apparatus 200 to begin displaying content on the screens. The types of content can also be tailored to the environment type in which the mobile apparatus 200 is to be located. For example, a mobile apparatus 200 to be operated in a grocery store may have a default setting to display an advertisement for PEPSI every hour between 8:00 and 18:00 every day of the week, an advertisement for a special discount on steaks every Thursday at 11:00, and so on. As another example, a mobile apparatus 200 operated within an amusement park may play informational content on a particular attraction or ride every other hour every day of the week. As another example, a mobile apparatus 200 operated in a hospital can display content related to various doctors practicing at the hospital, services such as navigation assistance or food offerings provided at the hospital, etc. In some embodiments, the media content can include a banner or logo associated with the location in which the mobile apparatus 200 operates. Mobile apparatuses 200 operating at locations of a same type can also play different content based on geographic location; for example, a mobile apparatus 200 in a grocery store in Texas can play media content for particular products found in that grocery store or region, while another mobile apparatus 200 in a grocery store in New York plays different content.
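As a non-limiting illustration of what such default display parameters and a schedule check might look like, consider the following Python sketch; the field names, dates, and content identifiers are assumptions for illustration only:

    # Illustrative sketch: default display parameters and a schedule check.
    # Field names and example values are assumptions for illustration only.
    from dataclasses import dataclass
    from datetime import datetime, date, time

    @dataclass
    class DisplayParameters:
        content_id: str
        start_date: date
        stop_date: date
        days_of_week: set     # e.g., {0, 1, 2, 3, 4, 5, 6} with Monday = 0
        start_time: time
        stop_time: time

        def is_due(self, now: datetime) -> bool:
            """True if the content should be shown at 'now' under the default schedule."""
            return (self.start_date <= now.date() <= self.stop_date
                    and now.weekday() in self.days_of_week
                    and self.start_time <= now.time() <= self.stop_time)

    # Example: a beverage advertisement shown between 8:00 and 18:00 every day.
    params = DisplayParameters("ad-0001", date(2024, 1, 1), date(2024, 12, 31),
                               {0, 1, 2, 3, 4, 5, 6}, time(8, 0), time(18, 0))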

[0070] In some embodiments, different locations can be identified in the memory or databases of the mobile apparatus and associated systems, e.g., servers, by a unique identifier for that location, in order to associate particular media content for that location, with each media content item also having a unique identifier. This also allows statistics to be gathered for the location and the media content displayed at that location, such as number of times played, number of actors/people who viewed the content based on facial orientation, length of time actors/people viewed the content, how actors/people reacted to viewing the content (e.g., picking up and/or purchasing the product associated with a displayed advertisement), etc. This information can be periodically provided in logs uploaded by the mobile apparatus 200 to servers, such as a content management system, which can in turn be provided to other systems such as third-party systems, e.g., advertisers, entertainment companies, hospitals, amusement parks, stores, etc., to better understand how actors/people respond to the media content.
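As a non-limiting illustration, a per-location, per-content statistics record of the kind described above might resemble the following Python sketch; the keys and identifier formats are assumptions for illustration only:

    # Illustrative sketch: one statistics record for one media content item at one
    # location, serialized for periodic upload to the content management servers.
    # Keys and identifier formats are assumptions for illustration only.
    import json
    import time

    log_entry = {
        "location_id": "store-TX-0042",         # unique identifier of the location
        "content_id": "ad-0001",                 # unique identifier of the media content
        "play_count": 12,                        # number of times the content was played
        "viewers_detected": 7,                   # actors whose facial orientation faced the screen
        "total_view_seconds": 94,                # cumulative viewing time across viewers
        "reactions": {"picked_up_product": 2},   # observed responses to the content
        "uploaded_at": int(time.time()),
    }
    payload = json.dumps(log_entry)  # transmitted to the servers in a periodic log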

[0071] Parameters can also provide that certain media content be displayed on certain ones of the external screens, a subset of the screens, or all the screens. In some embodiments, different media content can be played on different screens simultaneously. In some embodiments, the mobile apparatus 200 can include speakers and the media content can be played with audio or without based on the parameters. For example, audio may be disabled when playing different media content on separate screens simultaneously, in order to prevent confusion for viewers.

[0072] Although there may be default parameters for the display of content, the default parameters can be overridden based on the environmental conditions detected by the mobile apparatus 200 and/or the associated systems. For example, media content can be tailored based on detected environmental conditions such as detected objects, actors, or people, the location of the mobile apparatus within the particular environment, alert or emergency conditions detected by or transmitted to the mobile apparatus, etc. For instance, when the mobile apparatus recognizes it is in a particular location within the environment, the mobile apparatus can be configured to play media content associated with that location, such as an advertisement for a product within an aisle of a grocery store, information on a ride or attraction when in proximity to the ride or attraction at an amusement park, flight information when in proximity to a particular gate at an airport, etc. This environmental awareness can be achieved by using a virtual map created by the mobile apparatus (which can also be created in conjunction with multiple mobile apparatuses) and/or based on a detected object, such as by detecting a particular product on a shelf via shape, packaging, RFID tags, barcodes, and/or shelf labels, detecting a particular building or structure, recognizing information (such as a gate number in an airport), etc.
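As a non-limiting illustration of overriding a scheduled default with location- or object-triggered content, consider the following Python sketch; the rule tables and identifiers are assumptions for illustration only:

    # Illustrative sketch: prefer content tied to the detected location, then content
    # triggered by a detected object, falling back to the scheduled default.
    # Rule tables and identifiers are assumptions for illustration only.
    location_rules = {
        "aisle-7":       "ad-soft-drink",    # advertisement for a product in this aisle
        "ride-entrance": "info-ride-wait",   # attraction information near a ride
        "gate-B12":      "info-flight-b12",  # flight information near an airport gate
    }
    object_rules = {
        "cola-bottle": "ad-soft-drink",      # product detected on a nearby shelf
    }

    def choose_content(default_id, detected_location=None, detected_objects=()):
        """Select the media content identifier to display next."""
        if detected_location in location_rules:
            return location_rules[detected_location]
        for obj in detected_objects:
            if obj in object_rules:
                return object_rules[obj]
        return default_id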

[0073] Media content displayed can also be changed based on detecting, with the imaging sensors, people of certain demographics. For example, if the mobile apparatus is within a pharmacy area of a location and the mobile apparatus and associated systems detect that a nearby actor is an African American female, the mobile apparatus can display media content pertaining to a medical condition that affects African American females, and possibly medication information related to that medical condition. Media content can also be played based on external circumstances for a specific location, such as information on current weather conditions, allergen or pollution levels, medical conditions such as flu or COVID, criminal activity, natural disaster information, breaking news, etc. Content regarding external circumstances could be video or image content specific to each condition, or could be displayed as icons, such as icons overlaid on other media content.

[0074] In some embodiments, media content and/or content change parameters can have set priorities. For example, if an advertisement for PEPSI has a highest priority setting, the mobile apparatus 200 may finish playing the PEPSI advertisement even if an environmental condition that would normally trigger a media content change is detected. However, if an advertisement for PRINGLES has a low priority setting, the mobile apparatus 200 can change the media content displayed to a PEPSI advertisement even during the PRINGLES advertisement when, for example, the mobile apparatus detects a bottle of PEPSI on a nearby shelf. Environmental conditions may also have priority levels that can override media content. For example, media content related to natural disaster information may override all content, even content having a high priority level.

[0075] The media content and their display parameters can be updated via periodic updates provided to the mobile apparatus from associated servers. Software and firmware of the mobile apparatus 200 can also be updated remotely by servers transmitting updates to the mobile apparatus 200. In some embodiments, software or firmware updates can be triggered by the mobile apparatus 200 transmitting a log to the servers, where the log includes a current version of software or firmware installed on the mobile apparatus 200. In this case, the servers can check the software or firmware version indicated in the log against a latest version stored on the server and initiate a push communication to the mobile apparatus 200 to provide the latest software or firmware and instruct the mobile apparatus 200 to install the software or firmware. In some embodiments, the mobile apparatus 200 can pull software or firmware updates from the servers. In some embodiments, updates, including media content and software/firmware updates, may only occur during downtime periods, such as when the mobile apparatus 200 determines it has remained stationary for a preset period of time, when the mobile apparatus 200 is in a charging state, etc.
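
The version-check and downtime behavior described above could, as one hedged illustration, look like the following; the version tuple, thresholds, and log fields are assumed for the example and do not reflect a specific implementation.

```python
# Hedged illustration: push the latest firmware only when the apparatus reports
# an outdated version and is currently in a downtime period (charging or idle).
from dataclasses import dataclass

@dataclass
class ApparatusLog:
    apparatus_id: str
    firmware_version: tuple[int, int, int]
    charging: bool
    stationary_seconds: float

LATEST_FIRMWARE = (2, 4, 0)
STATIONARY_THRESHOLD_S = 600.0

def needs_update(log: ApparatusLog) -> bool:
    return log.firmware_version < LATEST_FIRMWARE

def in_downtime(log: ApparatusLog) -> bool:
    return log.charging or log.stationary_seconds >= STATIONARY_THRESHOLD_S

def should_push_update(log: ApparatusLog) -> bool:
    """Push the latest firmware only when the apparatus is out of date and idle."""
    return needs_update(log) and in_downtime(log)

if __name__ == "__main__":
    log = ApparatusLog("cart-017", (2, 3, 1), charging=True, stationary_seconds=0.0)
    print(should_push_update(log))   # True: outdated and currently charging
```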

[0076] Possible components of the example mobile apparatus 200 are shown in Tables 1 and 2 below, although other components and arrangements can be used without departing from the scope of this disclosure.

Table 1 - Mobile Apparatus Parts Illustrated in FIGURES 2A-2C

Table 2 - Mobile Apparatus Parts Illustrated in FIGURE 2D

[0077] Although FIGURES 2A-2F illustrate one example of a mobile apparatus 200, various changes may be made to FIGURES 2A-2F. For example, various components or functions shown or described may be combined, further subdivided, replicated, omitted, or rearranged, and additional components or functions may be added according to particular needs. For example, the dimensions, shape, and overall structure of the mobile apparatus 200 can be different based on the environment in which the mobile apparatus is used. The mobile apparatus can be various manned apparatuses such as manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc., including external displays and imaging sensors as described in the various embodiments of this disclosure. In general, this disclosure is not limited to any particular physical implementation of a mobile apparatus.

[0078] FIGURES 3A-3C illustrate an example charging stand 300 in accordance with various embodiments of this disclosure. FIGURE 3A illustrates a front perspective view of the charging stand 300. FIGURE 3B illustrates a rear perspective view of the charging stand 300. FIGURE 3C illustrates an exploded view of the charging stand 300.

[0079] For ease of explanation, the charging stand 300 is described as being used in the network configuration 100 to support the exchange of information between electronic devices (such as multiple instances of the electronic device 101). However, the charging stand 300 may be used with any other suitable devices and in any other suitable systems. In various embodiments, the charging stand 300 is or incorporates electronic device 101. For purposes of illustration, the charging stand 300 is shown in a tower configuration, but it will be understood that other types of charging stations can be used without departing from the scope of this disclosure.

[0080] The example charging stand 300 includes a body, housing, or chassis that includes a main body portion and a base portion. The main body portion includes a display screen 12. In some embodiments, the screen 12 can be a touchscreen to allow for interaction between actors and the charging stand. For example, advertisements displayed on the screen 12 of the charging stand 300 could include a QR code that can be scanned by a user device to receive more information on the product in the advertisement, recipes, coupons, etc. In various embodiments, the charging stand 300 is configured to display content in a similar manner as described with respect to the mobile apparatus 200. That is, although the charging stand 300 is not mobile, the charging stand 300 can still receive media content for display on the screen 12 from servers, such as server 106 and/or a content management system, in order to display media content associated with the particular location and environment. In some cases, different charging stands 300 positioned at different locations within a same venue can display different content, such as a charging stand 300 in a product section of a grocery store displaying different media content than a charging stand 300 in a toy aisle.

[0081] Referring also to FIGURES 4A-4D, which illustrate various views of an interaction 400 between a mobile apparatus 200 and a charging stand 300, the base portion of the charging stand 300 includes a pad or platform on which the mobile apparatus 200 is parked to charge one or more batteries 35 of the mobile apparatus. The pad can include stops that abut each side of the front wheels of the mobile apparatus to hold the mobile apparatus in place during charging operations. The charging stand 300 can provide induction or resonant charging to the mobile apparatus via a wireless charging transmitter located, for example, in a bottom portion of the body of the charging stand 300 and operatively coupled to the charging pad, to provide power to a wireless charging receiver disposed within a front interior portion of the mobile apparatus. In some embodiments, the charging stand 300 can link to the mobile apparatus via BLUETOOTH, WIFI, or the inductive system to exchange information such as analytics, images, and/or media content.

[0082] In some embodiments, the type of message or media content displayed on the screen 12 of the charging stand 300 can change based on whether the charging stand 300 is currently in a charging mode, triggered when a mobile apparatus is detected on the charging pad. For example, the charging stand 300 may display a customer survey (e.g., questions on advertising effectiveness) when a mobile apparatus is placed on the pad, as at this moment a customer is likely near the charging stand 300 and looking at the charging stand screen. In some embodiments, the charging stand 300 can also include one or more imaging sensors 302. The charging stand 300 can therefore tailor media content in a similar manner as described with respect to the mobile apparatus 200, such as changing displayed media content based on detected types of actors or people. In various embodiments, the charging stand 300 can also track viewing statistics for media content, such as number of times particular media content is played, number of viewers, time viewed for each viewer, viewer demographics, etc., and these statistics can be uploaded in periodic logs such as described with respect to the mobile apparatus 200. In some embodiments, the type of message or media content displayed on the screen 12 of the charging stand 300 can change based on actor proximity, such as to display a customer survey.

[0083] Possible components of the example charging stand 300 are shown in Table 3 below, although other components and arrangements can be used without departing from the scope of this disclosure.

Table 3 - Charging Stand Parts Illustrated in FIGURES 3A-3C

[0084] Although FIGURES 3A-3C illustrate one example of a charging stand 300, and FIGURES 4A-4D illustrate one example of an interaction between a mobile apparatus and a charging stand, various changes may be made to FIGURES 3A-3C and FIGURES 4A-4D. For example, various components or functions shown or described may be combined, further subdivided, replicated, omitted, or rearranged, and additional components or functions may be added according to particular needs. For example, the dimensions, shape, and overall structure of the mobile apparatus or charging stand can be different based on the environment in which the mobile apparatus or charging stand is used. In general, this disclosure is not limited to any particular physical implementation of a mobile apparatus or charging stand.

[0085] FIGURE 5 illustrates one example of various internal components of a mobile apparatus and of a charging stand in accordance with various embodiments of this disclosure. As illustrated in this example, the mobile apparatus includes three external LCD displays and another smaller 10.1-inch LCD display, e.g., the handle or tray screen, that are each associated with a component board that includes USB power to the display, HDMI connections to the display, SD card interfaces, accelerometers, mobile communications interfaces (SIM/LTE), an ISO CAN connected to a wireless receiver/charger, a 12vDC battery connected to a main battery of the mobile apparatus and the wireless receiver/charger, and PoE interfaces connected to a PoE switch. In this example, each of the external displays also includes an imaging sensor. In this example, the smaller 10.1-inch display also includes WIFI and BLUETOOTH capabilities, such as to allow for communications between a user device and the mobile apparatus.

[0086] The charging stand itself includes an LCD display associated with a component board that includes an image sensor, an HDMI connection to the display, an SD card interface, an accelerometer, mobile communications interfaces (SIM/LTE), an ISO CAN, a 12vDC battery connected to a main power source, and a PoE interface connected to a wireless power transmitter.

[0087] Although FIGURE 5 illustrates one example of mobile apparatus and charging stand components, various changes may be made to FIGURE 5. For example, various components shown or described may be combined, further subdivided, replicated, omitted, or rearranged, and additional components or functions may be added according to particular needs. For example, each of the external screen boards can also include WIFI and/or BLUETOOTH capabilities, such as if the external screens are configured to connect to the network using WIFI of the location in which the mobile apparatus operates. As another example, connection formats other than HDMI, USB, etc. can be used. The mobile apparatus and charging stand can also each include other components such as those illustrated in FIGURE 1, e.g., at least one processor, memory, storage drives, etc. In general, this disclosure is not limited to any particular physical implementation of a mobile apparatus or charging stand.

[0088] FIGURE 6 illustrates an example communications network architecture 600 in accordance with various embodiments of this disclosure. The architecture 600 includes a plurality of mobile apparatuses 602, such as described with respect to FIGURES 2A-2F, and a plurality of charging apparatuses 604, such as described with respect to FIGURES 3A-3C, disposed within an environment 605. The environment 605 can be any location, venue, building, outdoor space, or other locations, such as grocery stores or other retail stores, airports, hospitals, amusement parks, city streets, etc. Likewise, the mobile apparatuses 602 can be manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc.

[0089] In various embodiments, the mobile apparatuses 602 and the charging apparatuses 604 are communicatively connected to an access point 606 to communicate with one or more remote systems over a network 607 (e.g., the Internet), including a content management system 608. In some embodiments, the access point 606 can be a WIFI access point within the environment 605 that the mobile apparatuses 602 and charging apparatuses 604 wirelessly connect to communicate over the network 607. In some embodiments, the access point 606 can be a cellular network base station located within or near the environment 605. In some embodiments, the mobile apparatuses 602 and charging apparatuses 604 can communicate with one another using the access point 606, e.g., over WIFI, or can otherwise communicate via BLUETOOTH or NFC communications.

[0090] The content management system 608 can be a cloud server system and can include a datastore 609, including hard drives and other permanent storage devices and databases for associating different types of data. The content management system 608 receives content from a content provider system 610, having a data store 611, in communication with the content management system 608. The content management system 608 provides content to the mobile apparatuses 602 and the charging apparatuses 604, whereby the mobile apparatuses 602 and charging apparatuses 604 can present media content for viewing by actors or people within the environment 605. For example, the content management system 608 can transmit media content to the mobile apparatus 602, with the mobile apparatus 602 receiving, via a communications interface, such as communications interface 170, the media content along with display parameters. These display parameters can be initial default parameters for the content, such as start and stop dates, time of day, day of the week, etc. In some embodiments, the external screens and other components, such as the imaging sensors, processors, communication interfaces, etc., can be in a standby or low power mode until movement of the mobile apparatus 602 is detected by using an accelerometer, causing the mobile apparatus 602 to begin displaying content on the screens. The types of content can also be tailored to the environment type in which the mobile apparatuses 602 are located. For example, a mobile apparatus 602 to be operated in a grocery store may have a default setting to display an advertisement for PEPSI every hour between 8:00-18:00 every day of the week, an advertisement for a special discount on steaks every Thursday at 11:00, and so on. As another example, a mobile apparatus 602 operated within an amusement park may play informational content on a particular attraction or ride every other hour every day of the week. As another example, a mobile apparatus 602 operated in a hospital can display content related to various doctors practicing at the hospital, services such as navigation assistance or food offerings provided at the hospital, etc. In some embodiments, the media content can include a banner or logo associated with the location in which the mobile apparatus 602 operates. Mobile apparatuses 602 operating at locations of a same type can also play different content based on geographic location, such as one mobile apparatus 602 in a grocery store in Texas playing media content for particular products found in that grocery store or region, while another mobile apparatus 602 in a grocery store in New York plays different content.
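
As an illustration of the kind of default scheduling parameters mentioned above, the following sketch encodes the grocery-store example (an hourly spot between 8:00 and 18:00 and a Thursday 11:00 slot); the rule structure and content identifiers are assumptions, not a prescribed format.

```python
# Purely illustrative default display schedule; none of these identifiers come
# from the disclosure itself.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduleRule:
    content_id: str
    days: set[int]            # 0 = Monday ... 6 = Sunday
    start_hour: int
    end_hour: int
    every_n_hours: int = 1

    def matches(self, now: datetime) -> bool:
        in_day = now.weekday() in self.days
        in_window = self.start_hour <= now.hour < self.end_hour
        on_cadence = (now.hour - self.start_hour) % self.every_n_hours == 0
        return in_day and in_window and on_cadence

RULES = [
    ScheduleRule("ad-pepsi", days=set(range(7)), start_hour=8, end_hour=18),
    ScheduleRule("ad-steak-discount", days={3}, start_hour=11, end_hour=12),
]

def scheduled_content(now: datetime) -> list[str]:
    """Return the content identifiers whose default rules match the current time."""
    return [r.content_id for r in RULES if r.matches(now)]

if __name__ == "__main__":
    print(scheduled_content(datetime(2024, 1, 18, 11)))  # a Thursday at 11:00
```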

[0091] In some embodiments, different locations can be identified in the memory or databases of the mobile apparatuses 602 and in the datastore 609 of the content management system 608 by a unique identifier for the environment 605, in order to associate particular media content with the environment 605, with each media content item also having a unique identifier. This also allows statistics to be gathered for the environment 605 and the media content displayed at that location, such as number of times played, number of actors/people who viewed the content based on facial orientation, length of time actors/people viewed the content, how actors/people reacted to viewing the content (e.g., picking up and/or purchasing the product associated with a displayed advertisement), etc. This information can be periodically provided in logs uploaded by the mobile apparatus 602 or the charging apparatuses 604 to the content management system 608, which can in turn provide it to other systems such as the content provider system 610, e.g., systems belonging to advertisers, entertainment companies, hospitals, amusement parks, stores, etc., to better understand how actors/people respond to the media content. In some embodiments, the logs can also include other data such as amount of time a mobile apparatus 602 is in movement, detected product information, geolocation of the mobile apparatus 602, battery level of the mobile apparatus 602, etc.

[0092] Parameters can also provide that certain media content be displayed on certain ones of the external screens, a subset of the screens, or all the screens. In some embodiments, different media content can be played on different screens simultaneously. In some embodiments, the mobile apparatuses 602 and/or the charging apparatuses 604 can include speakers and the media content can be played with audio or without audio based on the parameters. For example, audio may be disabled when playing different media content on separate screens simultaneously, in order to prevent confusion for viewers.

[0093] Although there may be default parameters for the display of content, the default parameters can be overridden based on the environmental conditions detected by the mobile apparatuses 602 or the charging apparatuses 604. For example, media content can be tailored based on detected environmental conditions such as detected objects, actors, or people, the location of the mobile apparatus within the particular environment, alert or emergency conditions detected by or transmitted to the mobile apparatus, etc. For example, when a mobile apparatus 602 recognizes it is in a particular location within the environment, the mobile apparatus 602 can be configured to play media content associated with that location, such as an advertisement for a product within an aisle of a grocery store, information on a ride or attraction when in proximity to the ride or attraction at an amusement park, flight information when in proximity to a particular gate at an airport, etc. In some embodiments, advertisements can be played based on the detection of a competitor product, or when associated products, e.g., other types of soft drinks, are detected. This environmental awareness can be achieved by using a virtual map created by the mobile apparatus 602, which can also be created in conjunction with multiple mobile apparatuses 602, and/or based on an object detected, such as by detecting a particular product on a shelf via shape, packaging, RFID tags, barcodes, and/or shelf labels, detecting a particular building or structure, recognizing information (such as a gate number in an airport), etc.

[0094] In some embodiments, virtual mapping, object detection, facial detection, etc. can be performed locally on the mobile apparatuses 602 or the charging apparatuses 604, or, in some embodiments, images captured by the mobile apparatuses 602 and/or the charging apparatuses 604 can be transmitted to the content management system 608, and the content management system 608 processes the images to determine a mapping result, an object detection result, a facial orientation result, and/or other results, and transmits the result(s) to the mobile apparatuses 602 and/or the charging apparatuses 604. The mobile apparatus can use the created mapping of the environment and objects within the environment to anticipate which objects should be coming up based on the mobile apparatus's current travel path within the environment and/or based on objects detected so far during travel, and to play media content before reaching an object the mobile apparatus knows is coming up.
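
One hedged way to picture this look-ahead behavior is sketched below, reducing the virtual map to an ordered list of landmarks along a travel path; the landmark names and content mapping are hypothetical.

```python
# Sketch under assumptions: the virtual map is reduced to an ordered list of
# object landmarks along an aisle, and content is pre-queued for the next
# landmark ahead of the last object the apparatus detected.
AISLE_MAP = ["shelf:chips", "shelf:soda", "shelf:water"]   # order along travel path
CONTENT_FOR_OBJECT = {"shelf:soda": "ad-cola-001", "shelf:water": "ad-water-002"}

def upcoming_content(last_seen: str, lookahead: int = 1) -> list[str]:
    """Return content to pre-queue for objects expected after the last detected one."""
    if last_seen not in AISLE_MAP:
        return []
    idx = AISLE_MAP.index(last_seen)
    ahead = AISLE_MAP[idx + 1 : idx + 1 + lookahead]
    return [CONTENT_FOR_OBJECT[o] for o in ahead if o in CONTENT_FOR_OBJECT]

if __name__ == "__main__":
    # After detecting the chips shelf, queue the soda advertisement before reaching it.
    print(upcoming_content("shelf:chips"))   # -> ['ad-cola-001']
```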

[0095] Media content displayed can also be changed based on detecting, with the imaging sensors, people of certain demographics. For example, if the environment 605 includes a pharmacy, and the mobile apparatus 602 and associated systems detect that a nearby actor is an African American female, the mobile apparatus 602 could display media content pertaining to a medical condition that affects African American females, and possibly medication information related to that medical condition. Media content can also be played based on external circumstances of the environment 605, such as information on current weather conditions, allergen or pollution levels, medical conditions such as flu or COVID, criminal activity, natural disaster information, breaking news, etc. Content regarding external circumstances could be video or image content specific to each condition, or could be displayed as icons, such as icons overlaid on other media content.

[0096] In some embodiments, media content can play for a set period of time before changing based on detected environmental conditions, such as 8.5 seconds, 1 minute, etc., which can give the mobile apparatus 602 time to track metrics such as counting the available audience, what proportion of the audience turned their heads, where the mobile apparatus 602 is within the environment at the time, whether viewers react in a certain way, such as taking a product from a shelf, etc. In some embodiments, media content and/or content change parameters can have set priorities. For example, if an advertisement for PEPSI has a highest priority setting, a mobile apparatus 602 may finish playing the PEPSI advertisement even if an environmental condition that would normally trigger a media content change is detected. However, if an advertisement for PRINGLES has a low priority setting, the mobile apparatus 602 can change the media content displayed to a PEPSI advertisement even during the PRINGLES advertisement when, for example, the mobile apparatus detects a bottle of PEPSI on a nearby shelf. Environmental conditions may also have priority levels that can override media content. For example, media content related to breaking news, e.g., natural disaster information, may override all content, even content having a high priority level.
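
A minimal sketch combining the minimum dwell period and the priority rules described above is shown below, with assumed priority values and content identifiers.

```python
# Hedged sketch only: content plays for a minimum dwell period before
# environment-triggered changes are honored, and higher-priority candidates
# (or emergency conditions) can interrupt lower-priority content.
from dataclasses import dataclass

PRIORITY_LOW, PRIORITY_HIGH, PRIORITY_EMERGENCY = 1, 10, 100
MIN_DWELL_SECONDS = 8.5

@dataclass
class ContentItem:
    content_id: str
    priority: int

def should_switch(current: ContentItem, candidate: ContentItem,
                  seconds_on_screen: float) -> bool:
    """Decide whether an environment-triggered candidate replaces the current item."""
    if candidate.priority >= PRIORITY_EMERGENCY:
        return True                              # e.g. natural disaster information
    if seconds_on_screen < MIN_DWELL_SECONDS:
        return False                             # let audience metrics accumulate first
    return candidate.priority > current.priority

if __name__ == "__main__":
    pepsi = ContentItem("ad-pepsi", PRIORITY_HIGH)
    pringles = ContentItem("ad-pringles", PRIORITY_LOW)
    alert = ContentItem("alert-tornado", PRIORITY_EMERGENCY)
    print(should_switch(pringles, pepsi, seconds_on_screen=12.0))  # True
    print(should_switch(pepsi, pringles, seconds_on_screen=12.0))  # False
    print(should_switch(pepsi, alert, seconds_on_screen=2.0))      # True
```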

[0097] In some embodiments, the type of message or media content displayed on the screen of the charging apparatuses 604 can change based on whether the charging apparatus 604 is currently in a charging mode, triggered when a mobile apparatus is detected on the charging pad. For example, the charging apparatus 604 may display a customer survey (e.g., questions on advertising effectiveness) when a mobile apparatus 602 is placed on a wireless charging pad of the charging apparatus, as at this moment a customer is likely near the charging apparatus 604 and looking at the screen of the charging apparatus 604. In some embodiments, the charging apparatus 604 can also include one or more imaging sensors that can be used to tailor media content in a similar manner as described with respect to the mobile apparatus 602, such as changing displayed media content based on detected types of actors or people. In various embodiments, the charging apparatus 604 can also track viewing statistics for media content, such as number of times particular media content is played, number of viewers, time viewed for each viewer, viewer demographics, etc., and these statistics can be uploaded to the content management system 608 in periodic logs such as described with respect to the mobile apparatus 602. In some embodiments, the type of message or media content displayed on the screen of the charging apparatus 604 can change based on actor proximity, such as to display a customer survey.

[0098] The media content and their display parameters can be updated via periodic updates transmitted by the content management system 608 to the mobile apparatuses 602 and the charging apparatuses 604. The content management system 608 can also remotely update software and firmware of the mobile apparatuses 602 and the charging apparatuses 604 by transmitting updates over the network 607. In some embodiments, software or firmware updates can be triggered by a mobile apparatus 602 or a charging apparatus 604 transmitting a log to the content management system 608, where the log includes a current version of software or firmware installed on the mobile apparatus 602. In this case, the content management system 608 can check the software or firmware version indicated in the log against a latest version stored in the datastore 609 of the content management system 608, and initiate a push communication to the mobile apparatus 602 or the charging apparatus 604 to provide the latest software or firmware and instruct the mobile apparatus 602 or the charging apparatus 604 to install the software or firmware. In some embodiments, the mobile apparatus 602 or the charging apparatus 604 can pull software or firmware updates from the content management system 608. In some embodiments, updates, including media content and software/firmware updates, may only occur during downtime periods, such as when the mobile apparatus 602 determines it has remained stationary for a preset period of time, when the mobile apparatus 602 is in a charging state, at a particular time of day, etc. In some embodiments, mobile apparatuses 602 and/or charging apparatuses 604 can be remotely repaired or maintained by the content management system 608, such as remotely resetting or troubleshooting the mobile apparatuses 602 and/or the charging apparatuses 604.

[0099] The architecture 600 also includes, in various embodiments, an environment management system 612, having a datastore 613. In various embodiments, the environment management system 612 is associated with the environment 605, such as a grocery store system including inventory management, pricing, personnel, and other associated data, an airport system including flight data, personnel data, security data, and other data regarding the airport, an amusement park system including data on attractions, personnel data, security data, etc. In various embodiments, the mobile apparatuses 602 and/or the charging apparatuses 604 can provide information to the environment management system 612, either directly via a local communications path, e.g., the access point 606, or by transmitting information first to the content management system 608, which in turn provides the information to the environment management system 612.

[0100] In various embodiments, the environment management system 612 includes a contact point, such as a telephone number (e.g., for text messages), email address, integrated messaging service, etc., for alert conditions detected by the mobile apparatuses 602 and/or the charging apparatuses 604. For example, the mobile apparatuses 602 and/or the charging apparatuses 604 can be configured, either locally or in collaboration with the content management system 608, to detect various environmental conditions such as a product being out of stock on a shelf, a liquid spill in an area of the location, a person lying in an apparent injured state, individuals carrying weapons, etc. The mobile apparatuses 602 and/or the charging apparatuses 604 can transmit to the environment management system 612, either directly or through the content management system 608, an alert identifying the detected event, allowing the environment management system 612 to notify one or more personnel associated with the environment 605 of the alert condition, and allowing the one or more personnel to alleviate the situation, such as by cleaning a spill, restocking a product, alerting emergency authorities, etc. In various embodiments, mobile apparatuses 602 and/or charging apparatuses 604 can be linked to the environment management system 612 to provide an integrated experience for users. For example, mobile apparatuses 602 and/or charging apparatuses 604 can provide product searching functions, product barcode scanning functions, self-checkout functions, emergency assistance request functions, navigational assistance functions, etc.
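
The alert routing described above might, as a simple illustration, be structured as follows; the alert condition names, contact points, and delivery mechanism are placeholders, not part of this disclosure.

```python
# Sketch with assumed names: routing a detected alert condition to the contact
# point registered with the environment management system. The notifier here just
# prints; a real deployment would send an email, text message, or API call.
ALERT_CONTACTS = {
    "spill": "facilities@example-store.test",
    "out_of_stock": "inventory@example-store.test",
    "injury": "+1-555-0100",
    "weapon": "security@example-store.test",
}

def route_alert(condition: str, location_id: str, detail: str = "") -> str:
    """Pick the contact point for a condition and deliver a notification message."""
    contact = ALERT_CONTACTS.get(condition, "manager@example-store.test")
    message = f"[{condition}] at {location_id}: {detail or 'see attached image'}"
    # Placeholder delivery; substitute the venue's messaging integration here.
    print(f"notify {contact}: {message}")
    return contact

if __name__ == "__main__":
    route_alert("spill", "aisle-7", "liquid spill detected by cart-017")
```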

[0101] One or more user devices 615 can also be present within the environment 605, such as smartphones or other personal computing devices. The user device 615 can include an application installed and executed on the user device 615 that can communicate with the content management system 608 to receive and display various information regarding the environment 605, such as copies of virtual maps created using the mobile apparatuses 602, environmental information including prices, products, services, sales, attractions, office locations, dining locations, etc., personalized information such as user preferences and behavior history, loyalty rewards, and lists created for or by the user such as shopping lists, to-do lists, travel itineraries, upcoming appointments, etc. In some embodiments, QR codes displayed in media content on screens of the mobile apparatuses 602 and/or charging apparatuses 604 can be scanned by user devices to receive information on the subject of the media content, e.g., the product in the advertisement, recipes, coupons, flight information, information on local services, information on professionals offering services such as doctors, etc.

[0102] In some embodiments, the user device 615 can communicatively connect to a mobile apparatus 602 via BLUETOOTH, WIFI, NFC, etc. to allow the user operating the mobile apparatus 602 to control aspects of the mobile apparatus 602, such as changing or selecting entertainment content for a child, or otherwise interacting with the mobile apparatus and other connected systems, such as searching for and purchasing products. In some embodiments, features offered by the application, such as viewing virtual maps, creating shopping lists, etc., can also be offered by the mobile apparatuses 602, such as via a touchscreen in a tray or on a handle of the mobile apparatus 602. In some embodiments, the mobile apparatuses 602 can include on the body of the mobile apparatus, or display on one of the screens of the mobile apparatus, a QR code that can be scanned by the user device to automatically and wirelessly connect the user device 615 and the mobile apparatus 602. In some embodiments, media content can be provided by the mobile apparatus 602, or the content management system 608, to the user device 615 for display on the user device 615, tailored to the environment analyzed by the mobile apparatus 602 in a similar fashion to how content is tailored for display on the mobile apparatus 602. In some embodiments, if a mobile apparatus 602 remains stationary for a certain period of time, or is returned to a charging apparatus 604, the user device 615 can be automatically disconnected and/or the user can be automatically logged out in order to prevent access to a user’s account or device after the user has ceased operating the mobile apparatus 602.
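
As a hypothetical sketch of the QR-code pairing described above, the following encodes an apparatus identifier and a one-time token into a URL payload that a companion application could parse after scanning; the URL, field names, and token scheme are assumptions.

```python
# Hypothetical pairing payload: the mobile apparatus renders this URL as a QR code,
# and the user device's application parses it to connect automatically.
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

def build_pairing_url(apparatus_id: str) -> str:
    """Payload that would be rendered as a QR code on the apparatus body or screen."""
    token = secrets.token_urlsafe(8)   # one-time pairing token
    return "https://pair.example.test/connect?" + urlencode(
        {"apparatus_id": apparatus_id, "token": token}
    )

def parse_pairing_url(url: str) -> dict[str, str]:
    """What the companion application on the user device would extract after scanning."""
    query = parse_qs(urlparse(url).query)
    return {"apparatus_id": query["apparatus_id"][0], "token": query["token"][0]}

if __name__ == "__main__":
    url = build_pairing_url("cart-017")
    print(url)
    print(parse_pairing_url(url))
```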

[0103] Although FIGURE 6 illustrates one example of a communications network architecture 600, various changes may be made to FIGURE 6. For example, various components shown or described may be combined, further subdivided, replicated, omitted, or rearranged and additional components or functions may be added according to particular needs. For example, in some cases, the content provider system 610 and the environment management system 612 can be a same or associated system, such as if, for example, a grocery store both provides store advertisements to the content management system 608 for display on the mobile apparatuses 602 and/or charging apparatuses 604, and also receives alerts and other messages regarding the environment 605. In general, this disclosure is not limited to any particular physical or logical implementation of a communications network architecture.

[0104] Referring now to FIGURES 7-10, FIGURE 7 illustrates an example shopping environment 700 in accordance with embodiments of this disclosure, FIGURE 8 illustrates an example environment 800 with multiple beacons 802 in accordance with embodiments of this disclosure, FIGURE 9 illustrates an example outdoor environment 900 in accordance with embodiments of this disclosure, and FIGURE 10 illustrates an example travel hub environment 1000 in accordance with embodiments of this disclosure.

[0105] As shown in FIGURES 7-10, the mobile apparatuses and their associated systems of the embodiments of this disclosure can operate in various environments to capture images to understand the environment, create virtual maps of the environment, detect objects and people within the environment, and display media content on a plurality of screens of the mobile apparatus based on parameters, including parameters influenced by the detected environment. For example, as shown in FIGURE 7, mobile apparatuses 602 can operate within a store or retail environment as, for example, shopping carts and/or strollers, whereby users push the mobile apparatus 602 around the store among aisles 702, and the mobile apparatus 602 constantly captures images of the store using a plurality of imaging sensors. A virtual map of the store can be created to visualize the physical layout of the store as well as the location and arrangement of products located on store shelves.

[0106] While being pushed around the store, as described in this disclosure, the mobile apparatuses 602 display media content such as educational or entertainment content for child passengers, as well as entertainment content, advertising content, informational content such as information on how to navigate the current environment, public advisory content such as weather, allergen, or pollution alerts, or emergency conditions such as criminal activity or natural disasters, alerts on store conditions (e.g., a liquid spill in one of the aisles 702 detected by the mobile apparatus 602, out of stock product notices, etc.), etc. Charging apparatuses 604 present within environment 700 can also display media content in a similar manner. As described herein, the mobile apparatuses 602 can also transmit alerts to the environment management system 612, such as to connected internal store computing systems or via a point of contact for the store, such as a message sent to a store manager’s phone number or email address notifying the store of alert conditions (e.g., a liquid spill in one of the aisles 702 detected by the mobile apparatus 602, out of stock product notices, etc.).

[0107] As shown in FIGURE 9, mobile apparatuses 602 and charging apparatuses 604 can also operate in an outdoor environment 900, such as a university campus, amusement park, city park, city street blocks, etc. In such environments, the mobile apparatuses 602 can virtually map the environment in a similar fashion, detecting and recognizing objects within the environment such as particular buildings, known environmental actors, such as the mascot 902 in the example of FIGURE 9, street signs and names on street signs, signs and names on buildings, etc. The media content displayed in the environment 900 can be tailored for the environment and the current environmental status. For example, in addition to all the types of media content mentioned in this disclosure, the mobile apparatus 602 could detect the presence of the mascot 902 and play media content pertaining to the mascot 902, such as a cartoon, an advertisement for an area of the environment 900 where one can later find the mascot 902, etc. Other content can be tailored as described herein based on the location of the mobile apparatus 602 within the environment, hazards in the environment, weather, demographics of people detected in the environment, etc. As described herein, the mobile apparatus 602 can also report hazardous or alert conditions to an environment management system 612, such as detected actor injuries, hazardous weather conditions, etc.

[0108] As shown in FIGURE 10, mobile apparatuses 602 and charging apparatuses 604 can also operate in other environments such as a travel hub environment 1000, e.g. , an airport. In such environments, the mobile apparatuses 602 can virtually map the environment in a similar fashion as with the other environments described herein, i.e., detecting and recognizing objects within the environment such as particular internal structures, known environmental factors such as security personnel, signs and/or names of shopping or restaurant services, gate numbers and gate signs, etc. The media content displayed in the environment 1000 can be tailored for the environment and the current environmental status. For example, in addition to all the types of media content mentioned in this disclosure, the mobile apparatus 602 could detect the presence of a gate sign, retrieve information relating to the gate, such as nearby amenities within the airport or information regarding the current destination of the flight set to leave from the detected gate, and play media content associated with the retrieved information. Other content can be tailored as described herein based on the location of the mobile apparatus 602 within the environment, hazards in the environment, weather, demographics of people detected in the environment, etc. As described herein, the mobile apparatus 602 can also report hazardous or alert conditions to an environment management system 612, such as detected actor injuries, suspicious actors, etc. Additionally, as shown in FIGURE 10, various types of mobile apparatuses can be operated in a single environment, such as manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc. having a plurality of displays 1002 and imaging sensors.

[0109] The mobile apparatuses 602 and charging apparatuses 604 as illustrated in FIGURES 7-10 can communicate with each other and/or with external systems, such as systems 608, 610, 612, via access point 606. In some embodiments, the access point 606 can be a WIFI access point. In some embodiments, the access point 606 can be a cellular network base station located within or near the environment 605. In some embodiments, the mobile apparatuses 602 and charging apparatuses 604 can communicate with one another using the access point 606, e.g., over WIFI, or can otherwise communicate via BLUETOOTH or NFC communications. The mobile apparatuses 602 can determine their location within their respective environments in a number of ways, such as by virtual mapping and a detected position using the imaging and movement sensors, via object detection (e.g., recognizing products and/or labels on store shelves, particular buildings, signs, etc.), via satellite geo-location, etc. In some embodiments, such as shown in FIGURE 8, one or more beacons 802 within an environment can be used by the mobile apparatuses 602 to determine their location within the environment. For example, the beacons 802 can be BLUETOOTH or NFC beacons that a mobile apparatus 602 connects to when within proximity to the beacon 802, indicating to the mobile apparatus 602 it is at a location associated with a particular identifier for the beacon 802. In some embodiments, the beacons 802 can be signal devices that are used to triangulate the positions of mobile apparatuses 602 within the environment.
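
One simple, assumed approach to the beacon-based localization described above is sketched below, picking the strongest known beacon from signal-strength readings; the beacon identifiers and location labels are illustrative only.

```python
# Illustrative sketch (assumed data, not the disclosed method): coarsely localize
# the mobile apparatus by selecting the nearest known beacon from received
# signal strengths.
BEACON_LOCATIONS = {
    "beacon-01": "entrance",
    "beacon-02": "aisle-7",
    "beacon-03": "checkout",
}

def locate_by_beacon(rssi_readings: dict[str, float]) -> str | None:
    """Return the location label of the strongest (least negative RSSI) known beacon."""
    known = {b: r for b, r in rssi_readings.items() if b in BEACON_LOCATIONS}
    if not known:
        return None
    strongest = max(known, key=known.get)
    return BEACON_LOCATIONS[strongest]

if __name__ == "__main__":
    readings = {"beacon-01": -80.0, "beacon-02": -55.0, "unknown": -40.0}
    print(locate_by_beacon(readings))   # -> aisle-7
```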

[0110] Although FIGURES 7-10 illustrate various examples of environments for mobile apparatuses in accordance with embodiments of this disclosure, various changes may be made to FIGURES 7-10. For example, although certain environments and types of mobile apparatuses are depicted in FIGURES 7-10, mobile apparatuses of various types such as manual or motorized shopping carts, luggage carts, child strollers, wheelchairs, shuttles, bikes, scooters, etc. equipped with external displays and imaging sensors can be used in various environments. In general, this disclosure is not limited to any particular type of environment or mobile apparatus.

[0111] FIGURE 11 illustrates an example communications process 1100 in accordance with embodiments of this disclosure. For ease of explanation, the process 1100 is described as being performed using the architecture 600 of FIGURE 6 within the network configuration 100 of FIGURE 1. It will be understood that various steps of the process 1100 can be carried out or instructed by a processor of the associated device, such as the processor 120. However, the process 1100 may be performed using any other suitable device(s) and in any other suitable system(s).

[0112] In this example, the process 1100 is carried out by components of the architecture 600, including the mobile apparatus 602, charging apparatus 604, content management system 608, content provider system 610, and the environment management system 612. At step 1102, the content provider system 610 provides display content to the content management system 608. This display content can include various types of content based on the type of content provider, such as content from advertisers, entertainment companies, hospitals, amusement parks, stores, etc. At step 1104, the environment management system 612 provides environmental content, such as initial maps of the environment, environment-specific display content, such as advertisements for products sold at the location of the environment, alert parameters such as types of alert conditions to monitor (spills, out of stock items, injuries, weather), and points of contact (telephone numbers, email addresses, etc. of personnel at the environment in which the mobile apparatuses 602 operate).

[0113] At steps 1106 and 1108, the content management system transmits content to the mobile apparatus 602 and the charging apparatus 604, along with display parameters for the content. These display parameters can include initial default parameters for the content, which can be originally provided by the content provider system 610 in some embodiments, such as start and stop dates, time of day, day of the week, etc. The types of content can also be tailored to the environment type in which the mobile apparatuses 602 are located. For example, a mobile apparatus 602 to be operated in a grocery store may have a default setting to display an advertisement for PEPSI every hour between 8:00-18:00 every day of the week, an advertisement for a special discount on steaks every Thursday at 11:00, and so on. As another example, a mobile apparatus 602 operated within an amusement park may play informational content on a particular attraction or ride every other hour every day of the week. As another example, a mobile apparatus 602 operated in a hospital can display content related to various doctors practicing at the hospital, services such as navigation assistance or food offerings provided at the hospital, etc. In some embodiments, the media content can include a banner or logo associated with the location in which the mobile apparatus 602 operates. Mobile apparatuses 602 operating at locations of a same type can also play different content based on geographic location, such as one mobile apparatus 602 in a grocery store in Texas playing media content for particular products found in that grocery store or region, while another mobile apparatus 602 in a grocery store in New York plays different content.

[0114] Parameters can also provide that certain media content be displayed on certain ones of the external screens, a subset of the screens, or all the screens. In some embodiments, different media content can be played on different screens simultaneously. In some embodiments, the mobile apparatuses 602 and/or the charging apparatuses 604 can include speakers and the media content can be played with audio or without audio based on the parameters. For example, audio may be disabled when playing different media content on separate screens simultaneously, in order to prevent confusion for viewers. At steps 1110 and 1112, the mobile apparatus 602 and the charging apparatus 604 store the received display content and the associated display parameters. At step 1114, the charging apparatus 604 displays a first content selection according to the parameters.

[0115] In some embodiments, the external screens and other components, such as the imaging sensors, processors, communication interfaces, etc., can be in a standby or low power mode until movement of the mobile apparatus 602 is detected by using an accelerometer, such as by being pushed by a user, causing the mobile apparatus 602 to begin displaying content on the screens. In some embodiments, the imaging sensor of the mobile apparatus 602 may detect movement, such as an actor walking by the mobile apparatus 602. At step 1116, the mobile apparatus 602 detects movement, and, in response to detecting the movement, displays content according to the parameters. At step 1118, the mobile apparatus 602 detects environmental conditions of the environment in which it operates, and, at step 1120, the mobile apparatus updates the displayed content according to the parameters. For example, although there may be default parameters for the display of content, the transmitted parameters can also include parameters for overriding the default parameters, or for queuing media content to play directly after content played based on the default parameters, based on the environmental conditions detected by the mobile apparatuses 602 or the charging apparatuses 604. For example, media content can be tailored based on detected environmental conditions such as detected objects, actors, or people, the location of the mobile apparatus within the particular environment, alert or emergency conditions detected by or transmitted to the mobile apparatus, etc. For instance, when a mobile apparatus 602 recognizes it is in a particular location within the environment, the mobile apparatus 602 can be configured to play media content associated with that location, such as an advertisement for a product within an aisle of a grocery store, information on a ride or attraction when in proximity to the ride or attraction at an amusement park, flight information when in proximity to a particular gate at an airport, etc. As another example, media content can be played in response to actor actions, such as an actor picking up a product, followed by playing an advertisement associated with that product. In some embodiments, advertisements can be played based on the detection of a competitor product, or when associated products, e.g., other types of soft drinks, are detected.

[0116] This environmental awareness can be achieved by using a virtual map created by the mobile apparatus 602, which can also be created in conjunction with multiple mobile apparatuses 602, and/or based on an object detected, such as by detecting a particular product on a shelf via shape, packaging, RFID tags, barcodes, and/or shelf labels, detecting a particular building or structure, recognizing information (such as a gate number in an airport), etc. In some embodiments, virtual mapping, object detection, facial detection, etc. can be performed locally on the mobile apparatuses 602 or the charging apparatuses 604, or, in some embodiments, images captured by the mobile apparatuses 602 and/or the charging apparatuses 604 can be transmitted to the content management system 608, and the content management system 608 processes the images to determine a mapping result, an object detection result, a facial orientation result, and/or other results, and transmits the result(s) to the mobile apparatuses 602 and/or the charging apparatuses 604. The mobile apparatus can use the created mapping of the environment and objects within the environment to anticipate which objects should be coming up based on the mobile apparatus's current travel path within the environment and/or based on objects detected so far during travel, and to play media content before reaching an object the mobile apparatus knows is coming up.

[0117] Media content displayed can also be changed based on detecting, with the imaging sensors, people of certain demographics. For example, if the environment 605 includes a pharmacy, and the mobile apparatus 602 and associated systems detect that a nearby actor is an African American female, the mobile apparatus 602 could display media content pertaining to a medical condition that affects African American females, and possibly medication information related to that medical condition. Media content can also be played based on external circumstances of the environment 605, such as information on current weather conditions, allergen or pollution levels, medical conditions such as flu or COVID, criminal activity, natural disaster information, breaking news, etc. Content regarding external circumstances could be video or image content specific to each condition, or could be displayed as icons, such as icons overlaid on other media content.

[0118] In some embodiments, media content and/or content change parameters can have set priorities. For example, if an advertisement for PEPSI has a highest priority setting, a mobile apparatus 602 may finish playing the PEPSI advertisement even if an environmental condition that would normally trigger a media content change is detected. However, if an advertisement for PRINGLES has a low priority setting, the mobile apparatus 602 can change the media content displayed to a PEPSI advertisement even during the PRINGLES advertisement when, for example, the mobile apparatus detects a bottle of PEPSI on a nearby shelf. Environmental conditions may also have priority levels that can override media content. For example, media content related to breaking news, e.g., natural disaster information, may override all content, even content having a high priority level.

[0119] At step 1122, the mobile apparatus transmits logs and/or alerts to the content management system 608. In some embodiments, different locations can be identified in the memory or databases of the mobile apparatuses 602 and in the datastore 609 of the content management system 608 by a unique identifier for the environment 605, in order to associate particular media content with the environment 605, with each media content item also having a unique identifier. This also allows statistics to be gathered for the environment 605 and the media content displayed at that location, such as number of times played, number of actors/people who viewed the content based on facial orientation, length of time actors/people viewed the content, how actors/people reacted to viewing the content (e.g., picking up and/or purchasing the product associated with a displayed advertisement, or picking up and/or purchasing a competitor product), etc. In some embodiments, the mobile apparatus can also detect apparent emotions of people viewing the media content to track and report viewers’ reactions to the media content. In some embodiments, the mobile apparatus can also track and report which media content had the longest average facial orientation to the screen. This information can be periodically provided in logs uploaded by the mobile apparatus 602 to the content management system 608, which can in turn provide it to other systems such as the content provider system 610, e.g., systems belonging to advertisers, entertainment companies, hospitals, amusement parks, stores, etc., to better understand how actors/people respond to the media content, and to gauge the effectiveness of the media content.

[0120] In some embodiments, the logs can also include other data such as amount of time a mobile apparatus 602 is in movement, detected product information, geolocation of the mobile apparatus 602, battery level of the mobile apparatus 602, current software or firmware versions installed on the mobile apparatus 602, etc. At step 1124, the content management system 608 reports content analytics to the content provider system 610, using the information provided in the logs received from the mobile apparatus 602. In some embodiments, analytics can be viewed remotely by users of the content provider system 610, such as via an online portal or user interface provided by the content management system 608 providing and organizing the analytics, such as providing analytics for the various tracked metrics described in this disclosure for selectable or given periods of time.

[0121] At step 1126, the content management system 608 reports environmental conditions to the environment management system 612. For example, in various embodiments, the environment management system 612 is associated with the environment in which the mobile apparatus 602 and the charging apparatus 604 operate, such as a grocery store system including inventory management, pricing, personnel, and other associated data, an airport system including flight data, personnel data, security data, and other data regarding the airport, an amusement park system including data on attractions, personnel data, security data, etc. In various embodiments, the environment management system 612 includes a contact point, such as a telephone number (e.g., for text messages), email address, integrated messaging service, etc., for alert conditions detected by the mobile apparatuses 602 and/or the charging apparatuses 604. For example, the mobile apparatuses 602 and/or the charging apparatuses 604 can be configured, either locally or in collaboration with the content management system 608, to detect various environmental conditions such as a product being out of stock on a shelf, a liquid spill in an area of the location, a person lying in an apparent injured state, individuals carrying weapons, etc. The content management system 608 transmits the alert identifying the detected event, allowing the environment management system 612, at step 1128, to generate an environmental action, such as to notify one or more personnel associated with the environment of the alert condition, and allowing the one or more personnel to alleviate the situation, such as by cleaning a spill, restocking a product, alerting emergency authorities, etc. In some embodiments, alert conditions can also be reported to the content provider system 610, such as to report out of stock items to a brand or company, e.g., reporting PEPSI out of stock at a store to PEPSI. In some embodiments, detected alert conditions can also influence the media content displayed, such as avoiding playing advertisements for out of stock products.

[0122] At step 1130, the content provider system 610 provides updated content to the content management system 608. In some embodiments, the updated content can be sent in response to the analytics reported at step 1124, such as to fine-tune media content to be presented and their parameters based on the analytics. At step 1132, the content management system 608 transmits updates to the mobile apparatus 602. At step 1134, the content management system 608 transmits updates to the charging apparatus 604. The media content and their display parameters can be updated via periodic updates transmitted by the content management system 608 to the mobile apparatuses 602 and the charging apparatuses 604. The content management system 608 can also remotely update software and firmware of the mobile apparatuses 602 and the charging apparatuses 604 by transmitting updates over the network 607. In some embodiments, software or firmware updates can be triggered by a mobile apparatus 602 or a charging apparatus 604 transmitting the log to the content management system 608, where the log includes a current version of software or firmware installed on the mobile apparatus 602. In this case, the content management system 608 can check the software or firmware version indicated in the log against a latest version stored in the datastore 609 of the content management system 608, and initiate a push communication to the mobile apparatus 602 or the charging apparatus 604 to provide the latest software or firmware and instruct the mobile apparatus 602 or the charging apparatus 604 to install the software or firmware. In some embodiments, the mobile apparatus 602 or the charging apparatus 604 can pull software or firmware updates from the content management system 608. In some embodiments, updates, including media content and software/firmware updates, may only occur during downtime periods, such as when the mobile apparatus 602 determines it has remained stationary for a preset period of time, when the mobile apparatus 602 is in a charging state, at a particular time of day, etc. In some embodiments, mobile apparatuses 602 and/or charging apparatuses 604 can be remotely repaired or maintained by the content management system 608, such as remotely resetting or troubleshooting the mobile apparatuses 602 and/or the charging apparatuses 604.

[0123] At step 1136, the mobile apparatus 602 connects to the charging apparatus 604 to begin a charging operation. In some embodiments, the mobile apparatus 602 ceases display of content and/or other functions when in the charging operation. In some embodiments, the mobile apparatus 602 can continue to play the content, or change the content based on entering the charging operation. At step 1138, the charging apparatus detects the connection with the mobile apparatus 602 and, in response, displays a second content selection according to the display parameters. For example, in some embodiments, the type of message or media content displayed on the screen of the charging apparatuses 604 can change based on whether the charging apparatus 604 is currently in a charging mode, triggered when a mobile apparatus is detected on the charging pad. For instance, the charging apparatus 604 may display a customer survey (e.g., questions on advertising effectiveness) when a mobile apparatus 602 is placed on a wireless charging pad of the charging apparatus, as at this moment a customer is likely near the charging apparatus 604 and looking at the screen of the charging apparatus 604. In some embodiments, the charging apparatus 604 can also include one or more imaging sensors that can be used to tailor media content in a similar manner as described with respect to the mobile apparatus 602, such as changing displayed media content based on detected types of actors or people. In various embodiments, the charging apparatus 604 can also track viewing statistics for media content, such as number of times particular media content is played, number of viewers, time viewed for each viewer, viewer demographics, etc., and these statistics can be uploaded to the content management system 608 in periodic logs such as described with respect to the mobile apparatus 602. In some embodiments, the type of message or media content displayed on the screen of the charging apparatus 604 can change based on actor proximity, such as to display a customer survey.

[0124] Although FIGURE 11 illustrates one example of a communications process 1100, various changes may be made to FIGURE 11. For example, various components shown or described may be combined, further subdivided, replicated, omitted, or rearranged, and additional components or functions may be added according to particular needs. Additionally, while shown as a series of steps, various steps in FIGURE 11 may overlap, occur in parallel, occur in a different order, or occur any number of times. For example, in some cases, the content provider system 610 and the environment management system 612 can be the same or an associated system, such as if, for example, a grocery store both provides store advertisements to the content management system 608 for display on the mobile apparatuses 602 and/or charging apparatuses 604, and also receives alerts and other messages regarding the environment 605. As another example, logs and/or alerts may be provided to the content provider system 610 or the environment management system 612 directly from either the mobile apparatus 602 or the charging apparatus 604. In general, this disclosure is not limited to any particular physical or logical implementation of a communications network architecture.

[0125] FIGURES 12A-12C illustrate an example operational process 1200 of a mobile apparatus in accordance with embodiments of this disclosure. For ease of explanation, the process 1200 is described as being performed using the architecture 600 of FIGURE 6 within the network configuration 100 of FIGURE 1. It will be understood that various steps of the process 1200 can be carried out or instructed by a processor of the identified device, such as the processor 120. However, the process 1200 may be performed using any other suitable device(s) and in any other suitable system(s).

[0126] The process 1200 begins at block 1202. At block 1202, the mobile apparatus receives and stores a plurality of content for display, and associated display parameters, from remote servers, such as the content management system 608. As described herein, these display parameters can include initial default parameters for the content, which can be originally provided by the content provider system in some embodiments, such as start and stop dates, time of day, day of the week, etc., and parameters for dynamically displaying content based on detected environmental conditions. Parameters can also provide that certain media content be displayed on certain ones of the external screens, a subset of the screens, or all the screens. In some embodiments, different media content can be played on different screens simultaneously. As described in this disclosure, the types of content can also be tailored to the environment type in which the mobile apparatus is located.
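
A minimal sketch, assuming a simple parameter record, of how such default display parameters (start/stop dates, day of week, time-of-day window, target screens) might be evaluated before content is shown; the field names and values are illustrative only.

```python
# Evaluate illustrative default display parameters against the current time.
from dataclasses import dataclass, field
from datetime import datetime, date, time

@dataclass
class DisplayParameters:
    start_date: date
    stop_date: date
    days_of_week: set[int]            # 0 = Monday ... 6 = Sunday
    start_time: time
    stop_time: time
    screens: set[str] = field(default_factory=lambda: {"all"})

def is_eligible(params: DisplayParameters, now: datetime) -> bool:
    """Check whether content may be displayed at the given moment."""
    return (
        params.start_date <= now.date() <= params.stop_date
        and now.weekday() in params.days_of_week
        and params.start_time <= now.time() <= params.stop_time
    )

# Example: an advertisement scheduled Thursdays from 08:00 to 20:00 in July
params = DisplayParameters(
    start_date=date(2024, 7, 1), stop_date=date(2024, 7, 31),
    days_of_week={3}, start_time=time(8, 0), stop_time=time(20, 0),
)
print(is_eligible(params, datetime(2024, 7, 4, 9, 30)))  # True (a Thursday)
```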

[0127] At decision block 1204, the mobile apparatus determines if movement is detected, such as if the mobile apparatus is being moved by a user based on accelerometer data, or if imaging sensors detect movement of actors near the mobile apparatus. If no movement is detected, the process 1200 loops at decision block 1204. If movement is detected, at block 1206, the mobile apparatus displays content selections based on the display parameters. At block 1208, the mobile apparatus captures images of the environment using imaging sensors. In some embodiments, these captured images can be transmitted to servers, such as the content management system 608, to assist with virtual environment mapping, object/actor detection, facial recognition, etc. Each imaging sensor of the mobile apparatus can be associated with a screen of the mobile apparatus, to perform these tasks for each screen. In some embodiments, the virtual environment mapping, object/actor detection, facial recognition, etc., can be performed by the mobile apparatus if the mobile apparatus has enough processing resources to perform these tasks independently. In some embodiments, mobile apparatuses may be able to independently detect well-known environmental conditions, such as recognizing ubiquitous types of products, while identification of less well-known or new products can be performed by the servers using images transmitted to the servers from the mobile apparatus.
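
One simple way the accelerometer-based movement check could be realized is sketched below, flagging movement when the acceleration magnitude deviates from gravity by more than a threshold; the threshold value and sample format are assumptions for illustration.

```python
# Rough sketch of accelerometer-based movement detection (values illustrative).
import math

GRAVITY = 9.81          # m/s^2
THRESHOLD = 0.5         # m/s^2 deviation treated as movement (illustrative)

def movement_detected(samples: list[tuple[float, float, float]]) -> bool:
    """Return True if any (x, y, z) sample deviates from rest by the threshold."""
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if abs(magnitude - GRAVITY) > THRESHOLD:
            return True
    return False

# Example: the apparatus is being pushed, so magnitudes fluctuate around gravity
print(movement_detected([(0.1, 0.2, 9.8), (1.2, 0.4, 10.6)]))  # True
```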

[0128] At decision block 1210, the mobile apparatus determines whether any actors/people are detected within the captured images of the environment. If not, the process 1200 moves to block 1218. If so, at block 1212, the mobile apparatus determines the number of people detected within the image, and the facial orientation of the people detected, and tracks the amount of time each face is oriented towards, i.e., viewing, the screen. In some embodiments, the mobile apparatus may record a view count for each detected actor, where a view is counted when an actor's face is oriented towards the screen for at least a threshold amount of time, such as two seconds.
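
A minimal sketch, under assumed data structures, of tracking how long each detected face stays oriented toward a screen and counting a view once the dwell time reaches a threshold (two seconds in the example above); the per-frame update interface is an illustrative assumption.

```python
# Track per-actor dwell time toward a screen and count completed views.
VIEW_THRESHOLD_S = 2.0

def update_view_stats(dwell: dict[str, float], views: dict[str, int],
                      actor_id: str, facing_screen: bool, dt: float) -> None:
    """Accumulate per-actor dwell time per frame and count completed views."""
    if facing_screen:
        before = dwell.get(actor_id, 0.0)
        dwell[actor_id] = before + dt
        if before < VIEW_THRESHOLD_S <= dwell[actor_id]:
            views[actor_id] = views.get(actor_id, 0) + 1
    else:
        dwell[actor_id] = 0.0   # orientation lost; reset the dwell timer

# Example: an actor faces the screen for 2.5 seconds at 10 frames per second
dwell, views = {}, {}
for _ in range(25):
    update_view_stats(dwell, views, "actor-1", True, 0.1)
print(views)  # {'actor-1': 1}
```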

[0129] At decision block 1214, the mobile apparatus determines whether to change content based on the actor detection. If not, the process 1200 moves to block 1218. If so, at block 1216, the mobile apparatus changes content for at least one of the screens based on the actor detection. For example, the media content displayed can be changed based on detecting, with the imaging sensors, people of certain demographics. For instance, if the environment includes a pharmacy, and the mobile apparatus and associated systems detect a nearby actor is an African American female, the mobile apparatus could display media content pertaining to a medical condition that affects African American females, and possibly medication information related to that medical condition. As another example, the mobile apparatus could detect the presence of a mascot, such as at a sports event or at an amusement park, and play media content pertaining to the mascot, such as a cartoon, an advertisement for an area of the environment where one can later find the mascot, an advertisement for merchandise associated with the mascot, etc.

[0130] At block 1218, the mobile apparatus stores actor analytics in association with a content identifier. These analytics can be included in logs periodically sent to the servers. At decision block 1220, the mobile apparatus determines whether any objects are detected within the captured images of the environment. If not, the process moves to block 1228. If so, at block 1222, the mobile apparatus identifies the detected objects and stores object analytics. The object analytics allow statistics to be gathered on detected objects, such as whether products are in stock or have low stock, how often users visit a particular location, how actors/people reacted to viewing the content (e.g., picking up and/or purchasing the product associated with a displayed advertisement), etc. At decision block 1224, the mobile apparatus determines whether to change content based on the object detection. If not, the process 1200 moves to block 1228. If so, at block 1226, the mobile apparatus changes content for at least one of the screens based on the object detection. For example, the mobile apparatus can be configured to play media content associated with a detected object, such as an advertisement for a product detected within an aisle of a grocery store, information on a ride or attraction when detected at an amusement park, flight information when a particular gate at an airport is detected, etc. In some embodiments, advertisements can be played based on the detection of a competitor product, or when associated products, e.g., other types of soft drinks, are detected. This environmental awareness can be achieved by detecting a particular product on a shelf via shape, packaging, RFID tags, barcodes, and/or shelf labels, detecting a particular building or structure, recognizing information (such as a gate number in an airport), etc. The actor detection, facial detection, and object detection can be achieved using one or more machine learning models such as a convolutional neural network, as described in the embodiments of this disclosure.

[0131] At block 1228, the mobile apparatus updates the environmental map or its location within the environmental map. In some embodiments, the mobile apparatus independently creates the environmental map, while in other embodiments multiple mobile apparatuses within the environment can contribute to the environmental map to increase map generation accuracy and speed. At decision block 1230, the mobile apparatus determines whether to change content based on the updated map or its location within the map. If not, the process 1200 moves to decision block 1234. If so, at block 1232, the mobile apparatus changes content for at least one of the screens based on the updated map or its location within the map. For example, the mobile apparatus can be configured to play media content associated with that location, such as an advertisement for a product within an aisle of a grocery store, information on a ride or attraction when in proximity to the ride or attraction at an amusement park, flight information when in proximity to a particular gate at an airport, etc. This environmental awareness can be achieved by using the virtual map created by the mobile apparatus, and can also involve other location checks performed by the mobile apparatus, such as beacon proximity, signal triangulation, satellite geo-location, etc.

[0132] At decision block 1234, the mobile apparatus determines whether an alert condition is detected. If not, the process 1200 moves to decision block 1238. If so, at block 1236, the mobile apparatus transmits an alert signal, such as to the content management system 608, or directly to the environment management system 612. In various embodiments, the environment management system 612 includes a contact point, such as a telephone number (e.g., for text messages), email address, integrated messaging service, etc., for alert conditions detected by the mobile apparatuses and/or the charging apparatuses. For example, the mobile apparatus can be configured, either locally or in collaboration with the content management system 608, to detect various environmental conditions such as a product being out of stock on a shelf, a liquid spill in an area of the location, a person lying in an apparent injured state, individuals carrying weapons, etc. The content management system 608 transmits the alert identifying the detected event, allowing the environment management system 612, at step 1128, to generate an environmental action, such as to notify one or more personnel associated with the environment of the alert condition, and allowing the one or more personnel to alleviate the situation, such as by cleaning a spill, restocking a product, alerting emergency authorities, etc.

[0133] At decision block 1238, the mobile apparatus determines whether external data has been received, such as new media content to display based on external circumstances for the environment, for example information on current weather conditions, allergen or pollution levels, medical conditions such as flu or COVID, criminal activity, natural disaster information, breaking news, etc. Content regarding external circumstances could be video or image content specific to each condition, or could be icons, such as icons overlaid on other media content. If no external data has been received, the process 1200 moves to block 1244. If so, at decision block 1240, the mobile apparatus determines whether to change content based on the external data. If not, the process 1200 moves to block 1244. If so, at block 1242, the mobile apparatus changes the displayed content on at least one screen based on the external data. External data can be prioritized over normal media content due to public safety concerns.

[0134] At block 1244, the mobile apparatus transmits one or more logs to servers, such as the content management system 608. In some embodiments, different locations can be identified in the memory or databases of the mobile apparatuses and at the servers of the content management system by a unique identifier for the environment, in order to associate particular media content with the environment, with each media content item also having a unique identifier. This also allows statistics to be gathered for the environment and the media content displayed at that location, such as number of times played, number of actors/people who viewed the content based on facial orientation, length of time actors/people viewed the content, how actors/people reacted to viewing the content (e.g., picking up and/or purchasing the product associated with a displayed advertisement), etc. This information can be periodically provided in logs uploaded by the mobile apparatus to the servers, which can in turn be provided to other systems such as the content provider system 610, e.g., systems belonging to advertisers, entertainment companies, hospitals, amusement parks, stores, etc., to better understand how actors/people respond to the media content, and to gauge the effectiveness of the media content. In some embodiments, the logs can also include other data such as amount of time a mobile apparatus is in movement, detected product information, geolocation of the mobile apparatus, battery level of the mobile apparatus, current software or firmware versions installed on the mobile apparatus, etc.
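
A hypothetical sketch of such a periodic log upload is shown below; the JSON field names are illustrative assumptions rather than a defined log format of the content management system 608.

```python
# Build one illustrative log entry combining per-content analytics and status.
import json

def build_log(environment_id: str, apparatus_id: str,
              content_stats: list[dict], battery_level: float,
              firmware_version: str) -> str:
    """Serialize per-content analytics and apparatus status into one log entry."""
    log = {
        "environment_id": environment_id,   # unique ID of the store/park/airport
        "apparatus_id": apparatus_id,
        "battery_level": battery_level,
        "firmware_version": firmware_version,
        "content_stats": content_stats,     # one record per media content item
    }
    return json.dumps(log)

# Example: one content item played 12 times with 7 viewers
stats = [{"content_id": "ad-123", "plays": 12, "viewers": 7, "avg_view_s": 3.4}]
print(build_log("store-tx-001", "cart-0042", stats, 0.82, "1.4.0"))
```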

[0135] At decision block 1246, the mobile apparatus determines whether no movement has been detected for a predetermined amount of time. If not, the process 1200 moves to decision block 1250. If so, to preserve battery life, for example, at block 1248, the mobile apparatus ceases the display of content. In some embodiments, other components of the mobile apparatus, such as communications interfaces, processors, imaging sensors, etc., can also enter a low power or rest mode. From block 1248, the process 1200 loops back to decision block 1204. At decision block 1250, the mobile apparatus determines whether it detects a charging connection or operation. If not, the process 1200 loops back to block 1206. If so, at block 1252, the mobile apparatus commences a charging state, and installs updates. In some embodiments, when the mobile apparatus connects to a charging apparatus to begin a charging operation, the mobile apparatus ceases display of content and/or other functions. In some embodiments, the mobile apparatus can continue to play the content, or change the content based on entering the charging operation, such as displaying surveys, charging mode graphics, general information on the location, etc.

[0136] Servers, such as the content management system 608, can remotely update software and firmware of the mobile apparatus by transmitting updates over a network, such as the network 607. In some embodiments, software or firmware updates can be triggered by the mobile apparatus transmitting the log to the servers, where the log includes a current version of software or firmware installed on the mobile apparatus. In this case, the servers can check the software or firmware version indicated in the log against a latest version stored in association with the servers and initiate a push communication to the mobile apparatus to provide the latest software or firmware and instruct the mobile apparatus to install it. In some embodiments, software or firmware updates may not be performed until the servers receive a status update from the mobile apparatus that it has entered the charging mode. In some embodiments, the mobile apparatus can pull software or firmware updates from the servers. For example, after transmitting a log to the servers indicating an old software or firmware version number, the servers may send an update instruction to the mobile apparatus, and, upon stopping movement for a period of time, or upon entering the charging state, the mobile apparatus transmits a request for the latest version for installation. In some embodiments, the mobile apparatus can be remotely repaired or maintained by the servers, such as by remotely resetting or troubleshooting the mobile apparatus.

[0137] Although FIGURES 12A-12C illustrate one example operational process 1200 of a mobile apparatus, various changes may be made to FIGURES 12A-12C. For example, while shown as a series of steps, various steps in FIGURES 12A-12C may overlap, occur in parallel, occur in a different order, or occur any number of times. In general, the steps of the process 1200 are performed according to detected environmental conditions, and thus can be dynamically performed based on current conditions detected by the mobile apparatus. Additionally, as described in this disclosure, various steps in process 1200 can also be performed by a charging apparatus, including displaying content based on parameters, detecting products or actors, sending logs, etc.

[0138] Referring now to FIGURES 13 and 14, FIGURE 13 illustrates an example object recognition process 1300 in accordance with embodiments of this disclosure and FIGURE 14 illustrates an example object recognition system 1400 in accordance with embodiments of this disclosure.

[0139] As shown in FIGURE 13, as a mobile apparatus 602 views objects, such as products on a shelf in this example, the mobile apparatus 602, using one or more imaging sensors, detects objects, such as products, product labels, and/or faces and facial orientation, and determines where in the images the objects are located using one or more machine learning models 1402, such as a convolutional neural network, executed either on the mobile apparatus 602, and/or by a content management system 608 as shown in FIGURE 14. In some embodiments, each imaging sensor takes images at a particular number of frames per second (FPS) such as 10 FPS. In some embodiments, the mobile apparatus and/or servers analyze each snapshot instantly, avoiding permanently recording the images. In some embodiments, the machine learning model creates a bounding box 1302 around each detected object, such as products or product labels as shown in FIGURE 13, based on recognizable or regionally common features (pixel intensity, etc.). The machine learning model then can classify the detected object based on the model’s training.

[0140] For example, the model can be a convolutional neural network that receives an input image, and convolutional layer filters (feature detectors) extract image features from the input image, sliding through the elements of the image and multiplying each element by the filter, to produce a feature map (a matrix). In some embodiments, an activation function, such as a Rectified Linear Unit (ReLU) function, is performed on the feature map to replace negative pixel values, adding non-linearity to the model. In some embodiments, the machine learning model includes pooling layers between convolutional layers to reduce the dimensionality of each feature map, reducing noisy and redundant convolutions. In some embodiments, the model also includes at least one fully connected layer that receives the outputs from the convolutional layers and classifies the object within the image, for example, as a product type.
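
A minimal sketch in PyTorch of the convolution / ReLU / pooling / fully connected structure described above is shown below, assuming 64x64 RGB crops and an illustrative number of product classes; it is not the actual model of this disclosure.

```python
# Minimal convolutional classifier mirroring the layer types described above.
import torch
import torch.nn as nn

class ProductClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # feature detectors
            nn.ReLU(),                                     # zero out negatives
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)          # class scores, e.g., product types

# Example: classify a batch of one 64x64 image crop
model = ProductClassifier()
scores = model(torch.randn(1, 3, 64, 64))
print(scores.shape)   # torch.Size([1, 10])
```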

[0141] In some embodiments, as shown in FIGURE 14, images and data are sent from the mobile apparatus to the servers via a communications interface in the mobile apparatus, such as an inbuilt LTE modem, to identify products from a cloud-based object library 1404, and, once identified, the results are downloaded back to the mobile apparatus and incorporated into the map. The cloud-based object library 1404 can include a large number of objects. For example, in embodiments including a product identification system, the object library 1404 can include over 100,000 brands and products, and, using the machine learning model, each image is sorted by shape and brand label to see if it can be identified and filed in the internal mobile apparatus memory as an address ID. In addition to optical recognition of the product itself, the mobile apparatus will attempt to read the labels on the shelf edge and match the product ID to the label and price information. The mobile apparatus will then sort different products and locate each product relative to the identified products on every side of it, thus building a map of product locations in its internal memory. Over time, the mobile apparatus will assemble a complete picture of all the store products and the location of each product relative to its neighboring products. The product location map is in three dimensions, i.e., in this example, the mobile apparatus will know the physical location in the aisle in terms of which aisle and how far up the aisle, what side it is on, and the height of the product from the floor, i.e., which shelf each product is on. As shown in FIGURE 13, the mobile apparatus can also determine when items are out of stock, such as by identifying an open space 1304 on a product shelf, such as by detecting open space between products on the shelf, or by detecting open space above a product label.
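
A simplified sketch of the open-space check mentioned above is shown below: a shelf label is flagged as potentially out of stock when no detected product bounding box sits above it. The box format and pixel values are illustrative assumptions.

```python
# Flag shelf labels with no product bounding box detected directly above them.
# Box format (x1, y1, x2, y2) in image pixels, y increasing downward.
Box = tuple[float, float, float, float]

def overlaps_horizontally(a: Box, b: Box) -> bool:
    return a[0] < b[2] and b[0] < a[2]

def out_of_stock_labels(labels: list[Box], products: list[Box]) -> list[Box]:
    """Return shelf-label boxes with no product box detected directly above."""
    empty = []
    for label in labels:
        has_product = any(
            overlaps_horizontally(label, p) and p[3] <= label[1]  # product bottom above label top
            for p in products
        )
        if not has_product:
            empty.append(label)
    return empty

# Example: two labels, but only one has a product above it
labels = [(100, 400, 180, 430), (200, 400, 280, 430)]
products = [(105, 250, 170, 390)]
print(out_of_stock_labels(labels, products))   # [(200, 400, 280, 430)]
```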

[0142] As shown in FIGURE 14, the content management system 608 can also include a database 1406 including and associating object IDs, content IDs, locations, priority levels, demographics associated with the objects and/or content, and content display parameters for the content. These display parameters can include initial default parameters for the content, which can be originally provided by the content provider system 610 in some embodiments, such as start and stop dates, time of day, day of the week, etc. The types of content can also be tailored to the environment type in which the mobile apparatuses 602 are located. For example, as shown in FIGURE 14, a mobile apparatus 602 to be operated in a grocery store may have a default setting to display an advertisement for PEPSI every day at 7:00, 13:00, and 18:00, an advertisement for TIDE products Thursdays at 8:00, and so on, while a mobile apparatus at an amusement park location may have a default setting to display content related to a mouse mascot every day at 9:00 and 11:15. Each content type has an associated object that can trigger display of the content, a location in which the content is played on mobile apparatuses, a priority level indicating whether the content should take priority over other content, and a target demographic. Mobile apparatuses 602 operating at locations of a same type can also play different content based on geographic location, such as one mobile apparatus 602 in a grocery store in Texas playing media content for particular products found in that grocery store or regionally, while another mobile apparatus 602 in a grocery store in New York plays different content.

[0143] Parameters can also provide that certain media content be displayed on certain ones of the external screens, a subset of the screens, or all the screens. In some embodiments, different media content can be played on different screens simultaneously. In some embodiments, the mobile apparatuses 602 and/or the charging apparatuses 604 can include speakers and the media content can be played with audio or without audio based on the parameters. For example, audio may be disabled when playing different media content on separate screens simultaneously, in order to prevent confusion for viewers.

[0144] Accordingly, although there may be default parameters for the display of content, the parameters can also include parameters for overriding the default parameters, or for queuing the playing of media content directly after the playing of content based on the default parameters, based on the environmental conditions detected by the mobile apparatuses 602. For example, media content can be tailored based on detected environmental conditions such as detected objects, actors, or people, the location of the mobile apparatus within the particular environment, e.g., a particular aisle as shown in FIGURE 14, alert or emergency conditions detected by or transmitted to the mobile apparatus, etc. For instance, when a mobile apparatus 602 recognizes it is in a particular location within the environment, the mobile apparatus 602 can be configured to play media content associated with that location, such as an advertisement for a product within an aisle of a grocery store, information on a ride or attraction when in proximity to the ride or attraction at an amusement park, flight information when in proximity to a particular gate at an airport, etc.

[0145] This environmental awareness can be achieved by using a virtual map created by the mobile apparatus 602, which can also be created in conjunction with multiple mobile apparatuses 602, and/or based on an object detected, such as by detecting a particular product on a shelf via shape, packaging, RFID tags, barcodes, and/or shelf labels, detecting a particular building or structure, recognizing information (such as a gate number in an airport), etc. Media content displayed can also be changed based on detecting, with the imaging sensors, people of certain demographics. For example, as shown in the example of FIGURE 14, each of the PEPSI, TIDE, and mouse mascot content items has an associated demographic such that, if the mobile apparatus detects an actor of the associated demographic, the mobile apparatus can display the associated content on at least one of its screens, such as a screen associated with the imaging sensor that detected the actor.

[0146] As also shown in FIGURE 14, media content and/or content change parameters can have set priorities. For example, if the advertisement for PEPSI has a highest priority setting, a mobile apparatus 602 may finish playing the PEPSI advertisement even if an environmental condition that would normally trigger a media content change is detected. However, if the advertisement for TIDE has a low priority setting, the mobile apparatus 602 can change the media content displayed to a PEPSI advertisement even during the TIDE advertisement when, for example, the mobile apparatus detects a bottle of PEPSI on a nearby shelf. Environmental conditions may also have priority levels that can override media content. For example, media content related to breaking news, e.g., natural disaster information, may override all content, even content having a high priority level.
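
A hypothetical sketch of the priority logic described above follows: a detected trigger only interrupts the currently playing content if its priority outranks it. The priority values, record fields, and example content identifiers are assumptions for illustration.

```python
# Decide whether a triggered content item interrupts the current one.
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    priority: int          # higher number = higher priority (illustrative)

def next_content(current: ContentItem, triggered: ContentItem) -> ContentItem:
    """Interrupt the current item only when the trigger outranks it."""
    return triggered if triggered.priority > current.priority else current

tide = ContentItem("tide-ad", priority=1)
pepsi = ContentItem("pepsi-ad", priority=5)
breaking_news = ContentItem("storm-warning", priority=10)

# A PEPSI bottle detected mid TIDE advertisement interrupts it...
print(next_content(tide, pepsi).content_id)            # pepsi-ad
# ...but a detected trigger never interrupts higher-priority breaking news.
print(next_content(breaking_news, pepsi).content_id)   # storm-warning
```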

[0147] Although FIGURES 13 and 14 illustrate example object detection methods and systems, various changes can be made to FIGURES 13 and 14. For example, in some embodiments, the database 1406 can be part of the object library 1404. As another example, the machine learning models can include convolutional neural networks (CNNs), such as deep and/or region-based CNNs, single shot detector (SSD) models, You Only Look Once (YOLO) models, OpenCV models and/or TensorFlow models, or other image recognition and object detection models, without departing from the scope of this disclosure. In general, this disclosure is not limited to any particular implementation of a machine learning model. Additionally, although FIGURES 13 and 14 illustrate a product shelf example, as described in this disclosure, various other environments can be mapped and objects within those environments identified. Also, charging apparatuses of this disclosure can also use the system illustrated in FIGURE 14 to identify objects and actors, and to receive and display media content.

[0148] FIGURE 15 illustrates an example virtual mapping process 1500 in accordance with embodiments of this disclosure. For ease of explanation, the process 1500 is described as being performed using the architecture 600 of FIGURE 6 within the network configuration 100 of FIGURE 1. It will be understood that various steps of the process 1500 can be carried out or instructed by a processor of the identified device, e.g., the mobile apparatus, such as the processor 120. However, the process 1500 may be performed using any other suitable device(s) and in any other suitable system(s).

[0149] In various embodiments of this disclosure, each mobile apparatus will have its own environmental map that is continually updated as the mobile apparatus moves around the environment, but also has the ability to send the updated map to the other mobile apparatuses in the same store, so all mobile apparatuses can contribute to the accuracy of the store mapping system. The mobile apparatus will then always know where it is in the store by analyzing images of the objects, e.g., products, surrounding it. One key benefit of the environmental map is that the mobile apparatus can play relevant media content to surrounding actors related to objects or groups of objects, e.g., products or groups of products, in the immediate vicinity. Another benefit is that the map can be made available to actors in the environment via a user device application to assist them with navigating the environment, such as providing product locations, entertainment locations, travel destinations, and other valuable information relevant to where they are in the environment.

[0150] In various embodiments of this disclosure, location mapping uses VSLAM (Visual Simultaneous Localization and Mapping) based on a combination of image sensor inputs and MEMS sensors. Local mapping can be created on the mobile apparatus and updated continuously. In the example illustrated in FIGURE 15, a mobile apparatus captures a plurality of image frames at step 1502. At step 1504, the mobile apparatus initializes the map using more than one of the frames, such as two frames. The map is initialized with 3D points or map points identified by the mobile apparatus as corresponding to detected objects in the environment. The 3D points and relative image poses can be computed using triangulation based on 2D feature correspondences. As 3D points are added to the images, the process 1500 creates a 3D point cloud corresponding to the physical shape of the environment.
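
A minimal sketch of the triangulation used during map initialization is given below, assuming OpenCV, known camera intrinsics, and a known relative pose between the two frames; the intrinsics, pose, and matched pixel coordinates are illustrative assumptions.

```python
# Triangulate 3D map points from 2D feature correspondences in two frames.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                 # illustrative intrinsics

def triangulate(pts1: np.ndarray, pts2: np.ndarray,
                R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Triangulate Nx2 correspondences into Nx3 points (frame-1 coordinates)."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # first camera at origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])               # second camera [R|t]
    homog = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
    return (homog[:3] / homog[3]).T                        # back to Euclidean

# Example: an assumed 0.1 m baseline between frames and two matched features
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])                 # extrinsic translation (illustrative)
pts1 = np.array([[320.0, 240.0], [400.0, 260.0]])
pts2 = np.array([[306.0, 240.0], [386.0, 260.0]])
print(triangulate(pts1, pts2, R, t))   # rows are (X, Y, Z) map points, ~5 m deep
```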

[0151] At step 1506, the mobile apparatus performs feature tracking. That is, once a map is initialized, for each new frame, the pose is estimated by matching features in the current frame to features in the last key frame. A key frame is a subset of video frames that contain cues for localization and tracking. Two consecutive key frames usually involve sufficient visual change. Non-key frames can be discarded. At step 1508, the mobile apparatus performs local mapping by using the current frame to create new 3D map points if it is identified as a key frame. In some embodiments, bundle adjustment is used to minimize reprojection errors by adjusting the camera pose and 3D points.

[0152] At step 1510, the mobile apparatus performs loop closure. Loops can be detected for each key frame by comparing it against all previous key frames using a bag-of-features approach. The main goal in loop closure is being able to detect when the mobile apparatus is observing a previously explored scene, so that additional constraints can be added to the map. Once a loop closure is detected, a pose graph is optimized to refine the camera poses of all the key frames. After step 1510, if the loop has not been closed, the process 1500 updates a recognition database and moves back to step 1506 to process the next frame. Once a scene has been thoroughly analyzed, at step 1512, the virtual map is generated and stored on the mobile apparatus and/or transmitted to other mobile apparatuses or to servers, such as the content management system 608.
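
A rough sketch of the pose estimation step in feature tracking is shown below, assuming OpenCV's essential-matrix route; the feature matching itself (e.g., ORB descriptors) is omitted, and the synthetic correspondences and intrinsics are illustrative assumptions rather than values from this disclosure.

```python
# Estimate relative pose between the last key frame and the current frame
# from matched pixel coordinates using the essential matrix.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def estimate_relative_pose(pts_prev: np.ndarray, pts_curr: np.ndarray):
    """Estimate rotation and (unit-scale) translation from matched pixel points."""
    E, _ = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K)
    return R, t

# Example with synthetic correspondences: project random 3D points into two
# camera poses separated by a small translation, then recover that motion.
rng = np.random.default_rng(0)
points_3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))

def project(points: np.ndarray, t_cam: np.ndarray) -> np.ndarray:
    proj = K @ (points + t_cam).T          # identity rotation for simplicity
    return (proj[:2] / proj[2]).T

pts_prev = project(points_3d, np.zeros(3))
pts_curr = project(points_3d, np.array([-0.1, 0.0, 0.0]))
R, t = estimate_relative_pose(pts_prev, pts_curr)
print(np.round(t.ravel(), 2))   # translation direction, up to scale (~[-1, 0, 0])
```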

[0153] Although FIGURE 15 illustrates one example virtual mapping process 1500, various changes may be made to FIGURE 15. For example, while shown as a series of steps, various steps in FIGURE 15 may overlap, occur in parallel, occur in a different order, or occur any number of times. In general, this disclosure is not limited to any particular implementation of virtual environment mapping.

[0154] FIGURE 16 illustrates an example virtual mapping and object detection process 1600 in accordance with embodiments of this disclosure. For ease of explanation, the process 1600 is described as being performed using the architecture 600 of FIGURE 6 within the network configuration 100 of FIGURE 1. It will be understood that various steps of the process 1600 can be carried out or instructed by a processor of the identified device, e.g., the mobile apparatus, such as the processor 120. However, the process 1600 may be performed using any other suitable device(s) and in any other suitable system(s).

[0155] The process 1600 starts at block 1602. At block 1602, the mobile apparatus captures images of an environment using one or more image sensors, and captures other sensor data such as pose and movement data using sensors such as accelerometers, gyroscopic sensors, etc. At block 1604, the mobile apparatus detects objects within the captured images and generates a 3D point cloud using the captured images and sensor data, such as described with respect to FIGURE 15. At block 1606, the mobile apparatus uploads data such as current mapping data to the cloud, e.g., servers such as the content management system 608, or to other mobile apparatuses. At decision block 1608, the mobile apparatus determines if mapping data is available, either from the cloud or from one or more other mobile apparatuses. If not, the process 1600 moves to decision block 1612; if so, the process 1600 moves to block 1610. At block 1610, the mobile apparatus receives mapping data from the cloud or from one or more other mobile apparatuses.

[0156] In various embodiments of this disclosure, mobile apparatuses can work in tandem to map an environment, with each mobile apparatus contributing to the overall mapping as the mobile apparatus travels within the environment, continuously capturing images and positional data. The mobile apparatuses share their mapping information with each other in this manner to, over time, create a full virtual map of the environment. At decision block 1612, the mobile apparatus determines if there is sufficient data to generate a full environmental map. If not, the process loops back to block 1602. If so, at block 1614, the mobile apparatus, individually or in collaboration with other mobile apparatuses, generates an environmental map of the environment.

[0157] At block 1616, the mobile apparatus detects objects in captured images of the environment, and, at block 1618, classifies the objects and determines locations of the objects within the environmental map. At decision block 1620, the mobile apparatus determines whether the detected object is a static object. If not, the process 1600 moves to block 1624. For example, a product on a shelf would be a static object that would normally be part of the environment, while an actor or person detected in the environment would not be, unless the actor is a special actor that is normally part of the environment to be mapped. Determining whether an object is a static object can be performed by analyzing multiple captured images for a pose to determine if the object is common to all or most of the images, or is temporary, e.g., a person leaving the images due to leaving an image capture area. If, at decision block 1620, the mobile apparatus determines the object is a static object, at block 1622 the mobile apparatus updates the environmental map with the location of the object.
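
A simplified sketch of the static-object test described above follows: an object is treated as static (part of the environment) when it appears in most of the images captured for a pose rather than only briefly. The detection-set representation and threshold are assumptions for illustration.

```python
# Classify an object as static if it appears in most frames for a given pose.
def is_static_object(object_id: str, frames: list[set[str]],
                     min_fraction: float = 0.8) -> bool:
    """Return True if the object is detected in at least min_fraction of frames."""
    if not frames:
        return False
    hits = sum(1 for detections in frames if object_id in detections)
    return hits / len(frames) >= min_fraction

# Example: a shelf product is seen in every frame, a passing person in only one
frames = [{"cereal-box", "person-3"}, {"cereal-box"}, {"cereal-box"}]
print(is_static_object("cereal-box", frames))   # True
print(is_static_object("person-3", frames))     # False
```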

[0158] At block 1624, the mobile apparatus determines its location in the environment using the generated environmental map, current image and sensor data, and/or detected objects. In some embodiments, the mobile apparatus can analyze raw mapping data to determine its location within the environment, such as mathematical representations of object(s) within the environment, along with a determined trajectory using its sensors, to determine, based on the object(s) detected, what objects are expected to come up next. In some embodiments, the environmental map can be generated as a virtual rotatable image that can be sent to user devices, or displayed on a screen of a mobile apparatus, such as a tray or handle screen. The process 1600 ends at block 1626.

[0159] Although FIGURE 16 illustrates one example virtual mapping and object detection process 1600, various changes may be made to FIGURE 16. For example, while shown as a series of steps, various steps in FIGURE 16 may overlap, occur in parallel, occur in a different order, or occur any number of times. In general, this disclosure is not limited to any particular implementation of virtual environment mapping or object detection.

[0160] FIGURE 17 illustrates an example user interface of a user device 1700 in accordance with embodiments of this disclosure. The user device 1700 can be the user device 615 of FIGURE 6, and an electronic device 100 as described with respect to FIGURE 1.

[0161] One or more user devices 1700 can also be present within the environment in which mobile apparatuses and charging apparatuses operate, such as smartphones or other personal computing devices. The user device 1700 can include an application installed and executed on the user device 1700 that can communicate with servers, such as the content management system 608, or with mobile apparatuses, such as mobile apparatus 602, to receive and display various information regarding the environment. In some embodiments, the user device can communicatively connect to a mobile apparatus via BLUETOOTH, WIFI, NFC, etc. to allow the user operating the mobile apparatus to control aspects of the mobile apparatus using the user interface of the user device 1700, such as changing or selecting entertainment content for a child, or otherwise interacting with the mobile apparatus and other connected systems, such as searching for and purchasing products. In various embodiments, the user can access the user interface of the user device 1700 even when outside the associated environment, retrieving data on the environment from servers, such as the content management system 608, for example, to view an environmental map, view products, create lists, etc.

[0162] In some embodiments, the mobile apparatuses can include on the body of the mobile apparatus, or display on one of the screens of the mobile apparatus, a QR code that can be scanned by the user device to automatically and wirelessly connect the user device and the mobile apparatus. In some embodiments, the user has an account that can be logged into using the application and the user interface of the user device 1700 and/or using a screen on the mobile apparatus. In some embodiments, if a mobile apparatus remains stationary for a certain period of time, is returned to a charging apparatus, or the user device leaves the vicinity of the mobile apparatus, the user device can be automatically disconnected and/or the user can be automatically logged out in order to prevent access to a user's account or device after the user has ceased operating the mobile apparatus.
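
A hypothetical sketch of the automatic disconnect/logout conditions described above is given below; the timeout, signal-strength threshold, and state fields are illustrative assumptions rather than defined behavior.

```python
# Decide whether to end the user session based on illustrative conditions.
STATIONARY_TIMEOUT_S = 600      # disconnect after 10 minutes without movement
MIN_RSSI_DBM = -80              # treat weaker signal as "user left the vicinity"

def should_disconnect(stationary_seconds: float, on_charger: bool,
                      user_device_rssi_dbm: float | None) -> bool:
    """Decide whether to end the session and log the user out."""
    if on_charger:
        return True                                   # returned to a charging apparatus
    if stationary_seconds >= STATIONARY_TIMEOUT_S:
        return True                                   # mobile apparatus abandoned
    if user_device_rssi_dbm is None or user_device_rssi_dbm < MIN_RSSI_DBM:
        return True                                   # user device out of range
    return False

# Example: apparatus still moving, strong signal, not charging -> keep the session
print(should_disconnect(30.0, False, -55.0))   # False
```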

[0163] The application can display a copy of a virtual map 1702 of the environment created using the mobile apparatuses. In some embodiments, the virtual map 1702 can include text and/or graphics overlayed or annotated on the virtual map 1702 showing environmental information including prices, products, services, sales, attractions, office locations, dining locations, etc. For example, as shown in FIGURE 17, the map 1702 is of a grocery store and is annotated with icons identifying items 1 and 3, for example corresponding to items on a shopping list created by the user of the user device 1700 using the application or corresponding to items included in user preferences or history information, an advertisement item, e.g., a product selected by the application, mobile apparatus, and/or servers to showcase on the map 1702 or an advertisement corresponding to a product in the user's shopping list, and a discount notice (50% off) showing the location of a current sales promotion.

[0164] As also illustrated in FIGURE 17, the user interface of the user device 1700 can also include options to access personalized information such as a shopping list button 1704, a checkout button 1706, a meal planning button 1708, and a user preferences and behavior history button 1710. The shopping list button 1704 links to another user interface for creating shopping lists. The shopping list interface can provide a search function for searching products carried by the store, and search results can be selectable for easily adding the products to the list. In some embodiments, shopping list contents and items purchased can be tracked and reported to servers to provide information on what users planned to buy, what they actually bought, and whether any advertisements appeared to influence changes in purchasing behavior.

[0165] The checkout button 1706 can provide a user interface for a user to perform a self-checkout. For example, in some embodiments, the mobile apparatus can include self-checkout features such as a barcode scanner to scan products, the ability to scan items using the imaging sensors and retrieve pricing information for the identified item, and/or a scale to weigh food such as produce items. In some embodiments, the application on the user device 1700 can use imaging sensors on the user device to read barcodes. In various embodiments, the mobile apparatus can be integrated with the store point-of-sale systems to provide pricing and product information. Payment information can be input and processed either directly using a screen of the mobile apparatus or using the user interface of the user device 1700, with payment being accomplished either via a communication link with the mobile apparatus, a charging apparatus, another in-store point-of-sale system, or a remote payment server.

[0166] The meals button 1708 allows a user to select different meals and view recipes, ingredients, nutrition facts, etc. In some embodiments, ingredients can be selected for immediate addition to the user's shopping list, populating the selected item on the map 1702 for easy navigation to the desired ingredient. In some embodiments, entire meals can be selected, with all ingredients associated with the meal added to the shopping list and/or map 1702. The preferences and history button 1710 provides access to another user interface for viewing user preferences and history, such as favorited products, viewing products of a preferred category, e.g., organic products, and viewing user history such as previous purchases, behavioral trends such as categorizing types of items purchased, etc.

[0167] Other UI elements to access other features can also be included, such as UI elements for loyalty rewards, other lists created for or by the user such as to-do lists, travel itineraries, upcoming appointments, etc. In some embodiments, QR codes displayed in media content on screens of the mobile apparatuses and/or charging apparatuses can be scanned by the application on the user device to receive information on the subject of the media content, e.g., the product in the advertisement, recipes, coupons, flight information, information on local services, information on professionals offering services such as doctors, etc. In some embodiments, the application can display surveys to test the user's memory of advertising media in the store, e.g., unprompted awareness questions such as asking what product advertisements the user remembers seeing (either on the user device, on a mobile apparatus, or on a charging apparatus), or prompted awareness questions such as asking if the user remembers seeing an advertisement for a specific product, and/or questions regarding whether the user changed a purchasing decision based on advertisements.

[0168] In some embodiments, media content provided by the mobile apparatus, or by servers such as the content management system, can be displayed on the user device in, for example, a content window 1712, and can be tailored to the environment analyzed by the mobile apparatus, in a similar fashion to how content is tailored for display on the mobile apparatus as described in this disclosure. In some embodiments, features offered by the application, such as viewing virtual maps, creating shopping lists, etc., can also be offered by the mobile apparatuses, such as via a touchscreen in a tray or on a handle of the mobile apparatus.

[0169] Although FIGURE 17 illustrates one example of a user interface of a user device 1700, various changes may be made to FIGURE 17. For example, various UI elements shown or described may be combined, further subdivided, replicated, omitted, or rearranged and additional components or functions may be added according to particular needs. For example, the virtual map may be of any environment mapped by mobile apparatuses, as described in the various embodiments of this disclosure. Additionally, the various buttons or other UI elements can be rearranged, or features can be within other types of UI elements, such as drop down menus. In general, this disclosure is not limited to any particular physical or logical implementation of a user interface or a user device.

[0170] FIGURES 18A and 18B illustrate an example user interaction process 1800 in accordance with embodiments of this disclosure. For ease of explanation, the process 1800 is described as being performed using the architecture 600 of FIGURE 6 within the network configuration 100 of FIGURE 1. It will be understood that various steps of the process 1800 can be carried out or instructed by a processor of the identified device, such as the processor 120. However, the process 1800 may be performed using any other suitable device(s) and in any other suitable system(s).

[0171] The process 1800 begins at block 1802. At block 1802, the mobile apparatus detects a connection request from an application on a user device, such as the user devices 615 and 1700 described in this disclosure. In some embodiments, the user device can communicatively connect to a mobile apparatus via BLUETOOTH, WIFI, NFC, etc. to allow the user operating the mobile apparatus to control aspects of the mobile apparatus using the user interface of the user device. In various embodiments, the user can access the user interface of the user device even when outside the associated environment, retrieving data on the environment from servers, such as the content management system, for example, to view an environmental map, view products, create lists, etc. In some embodiments, the mobile apparatuses can include on the body of the mobile apparatus, or display on one of the screens of the mobile apparatus, a QR code that can be scanned by the user device to automatically and wirelessly connect the user device and the mobile apparatus. In some embodiments, the user has an account that can be logged into using the application and the user interface of the user device and/or using a screen on the mobile apparatus. At block 1804, the mobile apparatus establishes a communication channel with the user device.

[0172] At decision block 1806, the mobile apparatus determines if it receives a channel change request from the user device, such as a request to change or select entertainment content for a child passenger of the mobile apparatus. If not, the process 1800 moves to decision block 1810. If so, at block 1808, the mobile apparatus changes the media content displayed on an entertainment screen of the mobile apparatus based on the received request. At decision block 1810, the mobile apparatus determines if it receives a map request from the user device. If not, the process 1800 moves to block 1814. If so, at block 1812, the mobile apparatus provides a virtual map to the user, which can be displayed in the application of the user device, or on a screen of the mobile apparatus.

[0173] At block 1814, the mobile apparatus receives lists, preferences, and/or history data. At block 1816, the mobile apparatus displays information on the virtual map corresponding to the list, preferences, and/or history data, e.g., product locations and/or advertisements in a virtual map of a store. Additionally, media content corresponding to the list, preferences, and/or history data can be displayed on one or more screens of the mobile apparatus. If, at decision block 1810, no map request was received, at block 1816, the mobile apparatus may only display media content corresponding to the data. In some embodiments, the user device can receive the list, preferences, and/or history data and display the data on the virtual map on a screen of the user device or play media content associated with the data.

[0174] At block 1818, the mobile apparatus, or the user device, connects with an environmental system, such as content management system 608 or environmental management system 612, and retrieves special event data, such as sales events, special attractions, travel itinerary changes, etc. At block 1820, the mobile apparatus, or the user device, associates the special event data with a map location and/or displays media content pertaining to the special event data. At decision block 1822, the mobile apparatus determines if a connection termination event occurs. If not, the process moves to block 1826. If so, at block 1824, the mobile apparatus terminates the connection with the user device, and/or logs out the user from the mobile apparatus, and the process 1800 ends at block 1832. In some embodiments, if a mobile apparatus remains stationary for a certain period of time, is returned to a charging apparatus, or the user device leaves the vicinity of the mobile apparatus, the user device can be automatically disconnected and/or the user can be automatically logged out in order to prevent access to a user’s account or device after the user has ceased operating the mobile apparatus.

[0175] At block 1826, the mobile apparatus, or the user device, receives a checkout request. At block 1828, the user scans items for checkout. In various embodiments, the mobile apparatus or the user device can provide a user interface for a user to perform a self-checkout. For example, in some embodiments, the mobile apparatus can include self-checkout features such as a barcode scanner to scan products, the ability to scan items using the imaging sensors and retrieve pricing information for the identified item, and/or a scale to weigh food such as produce items. In some embodiments, the application on the user device can use imaging sensors on the user device to read barcodes. At decision block 1830, the mobile apparatus, or the user device, determines whether payment has been received. If not, the process loops back to block 1828. If so, payment is processed and, at block 1832, the user is provided a receipt. In various embodiments, the mobile apparatus can be integrated with the store point-of-sale systems to provide pricing and product information. Payment information can be input and processed either directly using a screen of the mobile apparatus, or using the user interface of the user device, with payment being accomplished either via a communication link with the mobile apparatus, with a charging apparatus, with another in-store point-of-sale system, or with a remote payment server. The process 1800 ends at block 1834.

[0176] Although FIGURES 18A and 18B illustrate one example user interaction process 1800, various changes may be made to FIGURES 18A and 18B. For example, while shown as a series of steps, various steps in FIGURES 18A and 18B may overlap, occur in parallel, occur in a different order, or occur any number of times. In various embodiments, requests, functions, or commands (e.g., channel change requests, list creation, virtual map viewing, etc.) performed by the user device described in FIGURES 18A and 18B can also be performed using a user interface screen of the mobile apparatus, such as a screen on a handle of the mobile apparatus. Additionally, in environments in which products are not purchased, blocks 1826-1830 can be omitted.

[0177] In one example, a system comprises a mobile goods and/or person carrying apparatus. The mobile goods and/or person carrying apparatus includes a plurality of wheels, at least one processor, at least one memory operatively coupled to the at least one processor, one or more external-facing displays operatively coupled to the at least one processor, and one or more imaging sensors operatively coupled to the at least one processor. The at least one processor is configured to define, using the one or more imaging sensors, an environment in which the mobile goods and/or person carrying apparatus operates, and display, on at least one of the one or more external-facing displays, media content based on the defined environment.

[0178] In one or more of the above examples, each one of the one or more imaging sensors is associated with one of the one or more external-facing displays.

[0179] In one or more of the above examples, to define the environment, the at least one processor is further configured to receive at least one image of the environment from the one or more imaging sensors and detect one or more objects in the at least one image.

[0180] In one or more of the above examples, the at least one processor is further configured to select the media content for display based on the detected one or more objects in the at least one image.

[0181] In one or more of the above examples, the media content is selected based on at least one of a temporal condition, a product detected in the at least one image, an actor detected in the at least one image, a demographic of the actor detected in the at least one image, and external conditions of the environment.

[0182] In one or more of the above examples, to define the environment, the at least one processor is further configured to receive at least one image of the environment from the one or more imaging sensors and create a virtual map of the environment using the at least one image.

[0183] In one or more of the above examples, to create the virtual map of the environment, the at least one processor is further configured to create an initialized virtual map using a plurality of frames corresponding to the at least one image of the environment, generate a plurality of three-dimensional (3D) points on the initialized virtual map, the plurality of 3D points corresponding to detected objects in the environment, create a 3D point cloud using the 3D points, perform feature tracking by receiving a plurality of key frames of the environment periodically and estimating a pose of the mobile goods and/or person carrying apparatus by matching environmental features in each key frame with features in a previous key frame, perform loop closure and optimize a pose graph, and update a database based on the loop closure and optimized pose graph.

[0184] In one or more of the above examples, the at least one processor is further configured to select the media content for display based on a location of the mobile goods and/or person carrying apparatus determined using the virtual map of the environment.
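Once the apparatus can locate itself on the virtual map, location-based selection reduces to mapping positions to content zones. The zone names and the point-in-rectangle test below are hypothetical simplifications, not the claimed method.

```python
# Hypothetical sketch: key media content to rectangular zones on the virtual map.
def content_for_location(pose, zones):
    """Return the content id of the first zone containing the current (x, y) pose.

    `zones` maps a content id to an axis-aligned rectangle (xmin, ymin, xmax, ymax).
    """
    x, y = pose[0], pose[1]
    for content_id, (xmin, ymin, xmax, ymax) in zones.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return content_id
    return None


# Example: play produce-related content while the apparatus is in the produce aisle.
zones = {"produce_promo": (0.0, 0.0, 5.0, 20.0), "bakery_promo": (5.0, 0.0, 10.0, 20.0)}
print(content_for_location((2.5, 12.0, 0.0), zones))  # -> "produce_promo"
```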

[0185] In one or more of the above examples, the at least one processor is further configured to transmit the virtual map and data associated with locations on the virtual map for viewing on an electronic device.

[0186] In one or more of the above examples, the at least one processor is further configured to instruct a transmission, to an electronic device, of a communication including data gathered concerning the environment, wherein the data includes at least one of a number of actors detected during display of a media content selection, an amount of time associated with a facial orientation of one or more of the actors detected during display of the media content selection, demographics of actors detected during display of the media content selection, emotions of actors detected during display of the media content selection, a number of detected objects of one or more object types, and an alert condition detected within the environment.

[0187] In one or more of the above examples, the at least one processor is further configured to display the media content upon detection of movement of the mobile goods and/or person carrying apparatus.
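The data items enumerated in the transmitted communication can be pictured as a simple report structure. The field names below mirror the listed data items but are otherwise hypothetical, as is the JSON transport; the disclosure does not prescribe a particular serialization or link.

```python
# Hypothetical sketch of an environment report transmitted to an electronic device.
import json
from dataclasses import dataclass, asdict, field
from typing import Dict, List


@dataclass
class EnvironmentReport:
    content_id: str
    actors_detected: int = 0
    facial_orientation_seconds: float = 0.0          # time faces were oriented toward the display
    actor_demographics: Dict[str, int] = field(default_factory=dict)
    actor_emotions: Dict[str, int] = field(default_factory=dict)
    object_counts: Dict[str, int] = field(default_factory=dict)
    alerts: List[str] = field(default_factory=list)   # e.g. an alert condition in the environment


def build_payload(report: EnvironmentReport) -> str:
    """Serialize the report for transmission over whatever link the system uses."""
    return json.dumps(asdict(report))


report = EnvironmentReport(content_id="produce_promo", actors_detected=3,
                           facial_orientation_seconds=4.5,
                           actor_demographics={"adult": 2, "child": 1})
print(build_payload(report))
```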

[0188] In another example, a method comprises defining, using one or more imaging sensors, an environment in which a mobile goods and/or person carrying apparatus operates, the mobile goods and/or person carrying apparatus including a plurality of wheels, at least one processor, at least one memory operatively coupled to the at least one processor, one or more external-facing displays operatively coupled to the at least one processor, and one or more imaging sensors operatively coupled to the at least one processor, and displaying, on at least one of the one or more external-facing displays, media content based on the defined environment.

[0189] In one or more of the above examples, each one of the one or more imaging sensors is associated with one of the one or more external-facing displays.

[0190] In one or more of the above examples, defining the environment includes receiving at least one image of the environment from the one or more imaging sensors, and detecting one or more objects in the at least one image.

[0191] In one or more of the above examples, the method further comprises selecting the media content for display based on the detected one or more objects in the at least one image.

[0192] In one or more of the above examples, selecting the media content is based on at least one of a temporal condition, a product detected in the at least one image, an actor detected in the at least one image, a demographic of the actor detected in the at least one image, and external conditions of the environment.

[0193] In one or more of the above examples, defining the environment includes receiving at least one image of the environment from the one or more imaging sensors, and creating a virtual map of the environment using the at least one image.

[0194] In one or more of the above examples, creating the virtual map of the environment includes creating an initialized virtual map using a plurality of frames corresponding to the at least one image of the environment, generating a plurality of three-dimensional (3D) points on the initialized virtual map, the plurality of 3D points corresponding to detected objects in the environment, creating a 3D point cloud using the 3D points, performing feature tracking by receiving a plurality of key frames of the environment periodically and estimating a pose of the mobile goods and/or person carrying apparatus by matching environmental features in each key frame with features in a previous key frame, performing loop closure and optimizing a pose graph, and updating a database based on the loop closure and optimized pose graph.

[0195] In one or more of the above examples, the method further comprises selecting the media content for display based on a location of the mobile goods and/or person carrying apparatus determined using the virtual map of the environment.

[0196] In one or more of the above examples, the method further comprises transmitting the virtual map and data associated with locations on the virtual map for viewing on an electronic device.

[0197] In one or more of the above examples, the method further comprises instructing a transmission, to an electronic device, of a communication including data gathered concerning the environment, wherein the data includes at least one of a number of actors detected during display of a media content selection, an amount of time associated with a facial orientation of one or more of the actors detected during display of the media content selection, demographics of actors detected during display of the media content selection, emotions of actors detected during display of the media content selection, a number of detected objects of one or more object types, and an alert condition detected within the environment.

[0198] In one or more of the above examples, the method further comprises displaying the media content upon detection of movement of the mobile goods and/or person carrying apparatus.

[0199] Although this disclosure has been described with example embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that this disclosure encompass such changes and modifications as fall within the scope of the appended claims.