

Title:
ROBOT CONTEXTUALIZATION OF MAP REGIONS
Document Type and Number:
WIPO Patent Application WO/2019/210157
Kind Code:
A1
Abstract:
A mobile robot can navigate within an area using a keepout map. The keepout map identifies portions of the area through which the robot can safely navigate, and likewise identifies portions of the area through which the robot is unable to navigate. The keepout map can be generated based on external data (such as blueprints or local area maps) or based on data from sensors included on or within the robot (such as cameras or LIDAR sensors). The keepout map can also be updated as the robot navigates within the area, for instance to identify new obstacles within the area. The robot navigates through the area by selecting a route that avoids the portions of the area, identified by the keepout map, through which the robot is unable to navigate. The updated keepout map can be provided to a central server, for storage or distribution to one or more other robots.

Inventors:
MORONITI DAVID (US)
SWEET NICHOLAS (US)
GREEN MICAH (US)
Application Number:
PCT/US2019/029335
Publication Date:
October 31, 2019
Filing Date:
April 26, 2019
Assignee:
MAIDBOT INC (US)
International Classes:
G05D1/02; A47L11/40; G01C21/20
Domestic Patent References:
WO2018038553A12018-03-01
Foreign References:
US20170225336A12017-08-10
US20170312916A12017-11-02
US9844879B12017-12-19
Attorney, Agent or Firm:
JACOBSON, Anthony, T. (US)
Claims:
CLAIMS

What is claimed is:

1. A mobile robot comprising:

an input configured to access map data corresponding to an area in which the mobile robot is located;

a map making engine configured to generate an initial keepout map corresponding to the accessed map data;

a navigation engine configured to navigate the mobile robot during the performance of one or more tasks within the area such that the mobile robot does not enter a region identified as a keepout region within the initial keepout map;

one or more sensors configured to produce sensor data based on a context of the robot; and

a map modification engine configured to generate an updated keepout map based on the initial keepout map and the sensor data produced by the one or more sensors.

2. The mobile robot of claim 1, wherein the map data comprises blueprints for a building in which the mobile robot is located.

3. The mobile robot of claim 1, wherein the map data comprises a SLAM map previously generated by the mobile robot or by another mobile robot.

4. The mobile robot of claim 1, wherein the map data is received from a robot server to which the mobile robot is communicatively coupled.

5. The mobile robot of claim 1, wherein the initial keepout map includes a representation of one or more keepout regions within the area, each keepout region comprising a portion of the area through which the mobile robot will not travel.

6. The mobile robot of claim 5, wherein at least one keepout region is designated based on one or more of: a wall, building structure, furniture, access gates, and a permanent fixture.

7. The mobile robot of claim 1, wherein the mobile robot is configured to perform tasks comprising one or more of: vacuuming, mopping, scrubbing, cleaning, dusting, making a bed, moving a customer’s belongings, moving furniture, opening doors, and communicating with a human.

8. The mobile robot of claim 1, wherein the navigation engine is configured to navigate the mobile robot by identifying an initial location of the mobile robot, identifying an end location of the mobile robot, and selecting a path through the area from the initial location to the end location that substantially minimizes the distance traveled by the mobile robot without entering a keepout region represented by the initial keepout map.

9. The mobile robot of claim 1, wherein at least one sensor comprises a LIDAR system configured to detect objects within a vicinity of the mobile robot, and wherein generating an updated keepout map comprises including an object detected by the LIDAR system within the initial keepout map.

10. The mobile robot of claim 1, wherein at least one sensor comprises a floor sensor configured to detect a type of the floor under the mobile robot, and wherein generating an updated keepout map comprises including a detected floor type within the initial keepout map.

11. The mobile robot of claim 1, wherein at least one sensor comprises a camera configured to take images of objects or entities within a vicinity of the mobile robot, and wherein generating an updated keepout map comprises including an object, obstacle, or entity captured within an image taken by the camera within the initial keepout map.

12. The mobile robot of claim 1, wherein generating an updated keepout map comprises removing a keepout region represented by the initial keepout map based on a determination that the portion of the area corresponding to the keepout region is safe for the mobile robot to traverse.

13. The mobile robot of claim 1, wherein the mobile robot is configured to provide the updated keepout map to one or both of another mobile robot and a robot server.

14. The mobile robot of claim 1, wherein the mobile robot is configured to perform one or more tasks based on the updated keepout map.

15. A method comprising:

accessing, by a mobile robot, map data corresponding to an area in which the mobile robot is located;

performing, by the mobile robot, one or more tasks within the area;

during the performance of the one or more tasks, detecting, by one or more sensors of the mobile robot, one or more regions within the area through which the mobile robot is unable to navigate; and

generating, by the mobile robot, a keepout map based on the accessed map data and the detected one or more regions.

16. The method of claim 15, wherein the generated keepout map is provided to one or both of another mobile robot and a computing system associated with the mobile robot or a building or location corresponding to the area in which the mobile robot is located.

17. A method comprising:

accessing, by a mobile robot, a keepout map corresponding to an area in which the mobile robot is located, the keepout map including one or more keepout regions;

navigating, by the mobile robot, through the area by selecting a path through the area that avoids the one or more keepout regions;

generating, by one or more sensors of the mobile robot, information describing a context of the mobile robot; and

modifying, by the mobile robot, the keepout map based on the generated information describing the context of the mobile robot to produce an updated keepout map.

18. The method of claim 17, wherein accessing the keepout map comprises receiving the keepout map from another robot or from a computing system associated with the mobile robot or a building or location corresponding to the area in which the mobile robot is located.

19. A method comprising:

accessing, by a robot server, a map of an area in which one or more mobile robots are located;

receiving, by the robot server, information generated by the one or more mobile robots, the received information associated with regions within the area through which the mobile robots are unable to safely navigate; and

generating, by the robot server, a keepout map based on the accessed map and the received information generated by the one or more mobile robots.

20. The method of claim 19, wherein the robot server is configured to do one or more of:

provide the keepout map to the one or more mobile robots, and display the keepout map to an operator of a mobile robot within the area such that a location of the mobile robot within the keepout map is highlighted and such that the regions within the area through which the mobile robots are unable to safely navigate are highlighted.

Description:
Robot Contextualization of Map Regions

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of and priority to U.S. Provisional Application No. 62/663,143, filed April 26, 2018, the contents of which are incorporated by reference herein in their entirety.

BACKGROUND

[0002] This disclosure relates generally to mobile robotics, and more specifically, to the contextualization of map regions by robots.

[0003] Blueprint maps are useful for determining a general layout of a building or area, but without further information, may be limited in usefulness to a robot attempting to navigate based on the maps. A robot often needs additional context to demarcate regions within a map through which the robot can travel, and regions the robot should avoid. Further, the location of objects, obstacles, and other entities within the building or area can change over time, and thus a map of the building or area may become outdated over time.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Fig. 1 illustrates an example environment in which a mobile robot can operate, according to one embodiment.

[0005] Fig. 2 is a system-level diagram that illustrates an example embodiment of a robot.

[0006] Fig. 3A is a flow chart illustrating the generation of a localization map, according to one embodiment.

[0007] Fig. 3B is a flow chart illustrating the generation of contextual map layers, according to one embodiment.

[0008] Fig. 4 is a flow chart illustrating a process for a robot to perform a task and update a contextual map, according to one embodiment.

[0009] Figs. 5A-5F illustrate examples of contextualized maps, according to various embodiments.

[0010] The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.

DETAILED DESCRIPTION

Environment Overview

[0011] Embodiments described herein detail functionality associated with a mobile robot. Mobile robots can include a number of sensors, data retrieval systems, communication systems, and other components as described in greater detail below. Accordingly, a mobile robot can access data associated with or describing an environment in which the robot operates, for instance from the on-board sensors, from a data source external to the robot, or from a human operator. In response to or based on the accessed data, the mobile robot can perform a number of functionalities, either autonomously or in response to operator input. It should be noted that while the mobile robot is described herein as operating in an indoor environment and performing various housekeeping functions, in other embodiments the mobile robot can operate in other environments and can perform other functions while adhering to the principles described herein.

[0012] Fig. 1 illustrates an example environment 100 in which a mobile robot can operate, according to one embodiment. The environment includes three mobile robots including robot 102A, robot 102B, and robot 102C (collectively “robots 102”), operating within a building 110. The building 110 includes two rooms: room 112 and room 114. In an example embodiment, the room 112 is a hotel hallway and the room 114 is a hotel bedroom. In such embodiments, the robots 102 can perform housekeeping or other operations within the rooms 112 and 114, as described herein.

[0013] The robots 102 can wirelessly communicate with a robot server 130 via a first network 104. Likewise, the robot server 130 and/or the robots 102 (via the robot server 130) can communicate with one or more of a client device 132, a remote terminal 134, a building database 136, and a robot database 138 via the first network 104. The robots 102 can also communicate with building infrastructure, such as the dock station 111 and the building system 112, via the second network 106. Further, the robots 102 can communicate with each other via the third network 108. It should be noted that although three distinct networks are illustrated in the embodiment of Fig. 1, in other embodiments, the robots can communicate via fewer or more networks, or one or more of the networks 104, 106, and 108 can be the same network. It should be noted that although the term “robot server” is used herein, in practice, the robot server 130 can be any server, any computing device, or any system with which the robots 102 can communicate and configured to perform the functionality described herein.

[0014] The networks 104, 106, and 108 can each include one or more networks, including, but not limited to, wireless networks (e.g., wireless communication networks), mobile telephone networks (e.g., cellular telephone networks), closed communication networks, open communication networks, satellite networks, navigation networks, broadband networks, narrowband networks, the Internet, local area networks, and any other networks capable of carrying data and/or communications signals between the robots 102, building infrastructure, the robot server 130, and any other systems or components (including but not limited to the client device 132, the remote terminal 134, the building database 136, and the robot database 138). For instance, the network 104 can include the Internet, and the robots 102 can interface with the Internet via WiFi. Likewise, the network 106 can be a Bluetooth or Bluetooth Low Energy (BLE) wireless network, enabling the robot 102A to communicate with the dock station 111 and the building system 112 when within a communication threshold proximity of either system. Finally, the network 108 can be a long-distance communicative network, enabling the robots to communicate as a point-to-point network, for instance using GSM transceivers.

[0015] The robots 102 can navigate within the building 110, for instance by autonomously planning a route through the building, by following a pre-determined route, or by modifying a route based on the detected environment in which the robot is navigating. In some embodiments, the robots 102 can navigate collectively, for instance by individually selecting a route such that a collective goal is achieved. For example, if the robots 102 are tasked with vacuuming the entirety of the building 110, the robots 102 can each select a route that minimizes an amount of time it takes the robots 102 to entirely vacuum the building, or can select routes that maximize an amount of time each robot 102 is vacuuming a portion of the building floor that has yet to be vacuumed. Likewise, if the robots 102 are tasked with mapping an entirety of the floorplan of the building 110, each robot 102 can individually select a route that minimizes an amount of time it takes the robots 102 to view the floorplan. The robots 102 can communicate with each other via the network 108 or via the robot server 130 in order to coordinate route selection.

[0016] The robots 102 can also navigate independently, for instance to complete a task assigned to the robot 102 (such as individually vacuuming or mapping a space, identifying anomalies within a room, identifying other tasks that need to be performed, etc.). In some embodiments, a robot 102 can select a route that minimizes a transit time from a first location to a second location, that minimizes travel through unknown locations or that avoids keepout regions altogether, or that guarantees that the robot 102 can navigate to a power dock station 111 before it runs out of power. For example, in the embodiment of Fig. 1, the room 114 includes the beds 120A and 120B, the furniture 122, and the closet 124. In this embodiment, the robot 102B can access a map (for instance, from the building database 136) to identify the locations of these objects within the room 114, or can identify the locations of these objects using object detection functionality (such as a camera and/or depth sensor), and can select a route around these objects based on the identified locations. Likewise, as discussed below, the robot 102B can use a keepout or contextualized map to identify the closet 124 as a region within which the robot 102B is unable or unlikely to be able to navigate, and can avoid navigating into or within the closet 124 in response.

[0017] In some embodiments, one or more rooms within the building 110 can include a dock station 111. The dock station 111 can electrically couple to a robot 102, for instance when the robot 102 drives up to a dock station base and physically couples to a dock station power outlet or connector. In some embodiments, the robot 102 can select a route to the dock station 111 when the power available to the robot 102 falls below a threshold, or can automatically return to a dock station 111 when a particular task is completed. In some embodiments, the robots 102 can remain at a dock station 111 until a subsequent task is assigned to the robot 102, until a subsequent periodic task is to be performed, and the like. In some embodiments, the dock station 111 can also act as an access point, enabling communications between the robots 102 and one or more systems external to the robot 102. Likewise, the dock station 111 may be able to perform a diagnostic check of one or more robot components or systems, and can communicate this information to an external system (e.g., the robot server 130).
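
As a rough sketch of the return-to-dock decision described above (low available power or a completed task triggers a return), the snippet below assumes a normalized battery level and a simple threshold; the function name and the 15% value are illustrative assumptions, not details from the application.

```python
# Illustrative only: the threshold value and battery/task interface are assumptions.
def should_return_to_dock(battery_fraction, task_complete, threshold=0.15):
    """Return True when the robot should navigate back to the dock station 111."""
    return task_complete or battery_fraction < threshold

assert should_return_to_dock(0.10, task_complete=False)    # low power
assert should_return_to_dock(0.80, task_complete=True)     # task finished
assert not should_return_to_dock(0.80, task_complete=False)
```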

[0018] The building system 112 can include any system capable of wirelessly communicating with a robot 102, such as a fire detector system, a wireless modem, a thermostat, a clock, an entertainment system, a smart lighting system, an elevator, a housekeeping or front desk computer, or the like. A robot 102 can communicate with the building system 112, for instance by receiving information or instructions from the building system 112 or by providing information or instructions to the building system 112. For instance, the robot 102 may determine that the beds 120A and 120B in the room 114 need to be made, and can instruct a housekeeping computer (e.g., a client device 132) to schedule the room for housekeeping. Likewise, the robot 102 can determine that the temperature within the room 114 is above a comfort threshold, and can instruct a thermostat to reduce the temperature. It should be noted that in some embodiments, the building system 112 does not wirelessly communicate with the robot 102, and instead the robot 102 can determine information displayed by or otherwise conveyed by the building system 112. For instance, the robot can capture an image of a thermostat display in order to determine a temperature of a room, or can capture an image of a room number to identify a room.

[0019] The robot server 130, the client device 132, the remote terminal 134, the building database 136, and the robot database 138 are computing systems configured to communicate with each other and with one or more robots 102 and/or building infrastructure via the network 104 or one or more other networks. In some embodiments, fewer, additional, or different computing systems are included within the environment 100 than those shown in Fig. 1. Further, one or more of the computing systems of Fig. 1 can be combined (such as the remote terminal 134 and the client device 132). Although the computing systems of Fig. 1 are illustrated as external to the building 110, in practice these computing systems can be located in any suitable location, including within the building, within a different building, in a datacenter, in the possession of an operator or user, and the like.

[0020] The robot server 130 provides a communicative interface between the robots 102 and one or more other systems of Fig. 1. For instance, the robot server 130 can provide an instruction to a robot 102 to vacuum the room 114, can receive a request for floorplan data from the robot 102 in response to the instruction, can access the floorplan data from the building database 136 and provide the floorplan to the robot 102, and can receive diagnostic information from the robot 102 indicating that a repair to the robot 102 is needed and provide the diagnostic information to a client device of hotel staff. In some embodiments, the robot server 130 is a cloud server, enabling long distance operation of and communication with the robots 102.

[0021] The client device 132 is a computing system, such as a mobile phone, a desktop computer, a tablet, and the like enabling an operator to interface with the robots 102. For instance, video captured by a robot 102 can be provided to the robot server 130 and routed to a client device 132 for viewing by an operator. In some embodiments, the environment 100 of Fig. 1 can include any number of client devices 132, each assigned to a particular operator. In such embodiments, data received from the robots 102 by the robot server 130 can be routed to the appropriate client device 132. For instance, if the robot server 130 receives an indication from a robot 102 that a particular hallway needs to be cleared of debris, the robot server 130 can provide the indication to the client device 132 operated by housekeeping staff. Likewise, if the robot server 130 receives an indication that a pipe is leaking, the robot server 130 can provide the indication to the client device 132 operated by maintenance staff. Operators can provide instructions to the robots 102 via the robot server 130 from the client devices 132. For example, housekeeping staff can use a mobile phone client device to provide an instruction to the robot 102 to vacuum room 114, and the robot 102 can navigate to the room 114 and begin vacuuming in response to receiving the instruction.

[0022] The remote terminal 134 includes an interface displayed to an operator via a device display, such as a mobile device, computer monitor, and the like. As noted above, in some embodiments, the remote terminal is displayed via a client device 132, though in other embodiments, the remote terminal is displayed within dedicated hardware. Information received from a robot 102 (for instance, via the robot server 130) can be displayed within the remote terminal interface, such as video or images captured by the robot 102, diagnostic information associated with a robot 102, a map indicating a route being navigated by the robot 102, a map indicating a location of the robot 102 received from the robot 102, a map allowing an operator to configure a route of the robot 102, a set of displayed instruction elements that (when selected) can configure the behavior of the robot to perform one or more tasks, or any other suitable information associated with the robot 102, the building, or one or more other systems or components of Fig. 1. The remote terminal 134 can enable an operator to provide instructions to one or more robots 102 by interacting with the remote terminal interface. For instance, an operator can select a room (such as room 112) from a map interface displayed within the remote terminal interface, can select a robot (such as robot 102A) from a list or set of corresponding robot icons displayed within the remote terminal interface, and can select an interface element corresponding to a particular task (such as vacuuming) in order to instruct robot 102A to vacuum room 112.

[0023] The building database 136 includes data associated with the building 110, such as a floorplan of the building, a location of objects within the building 110, a location of doors and windows within the building 110, information associated with building infrastructure, a schedule associated with the building 110, building occupancy or guest information, building staff information, and the like. The robot database 138 includes data associated with the robots 102, functions the robots 102 can perform, robot maintenance information, and the like. In some embodiments, the robot database 138 can include routes through the building 110, robot routine information (such as periodic cleaning schedules), diagnostic or communication information associated with the robots, and the like. The building database 136 and the robot database 138 can be accessed by the robot server 130, by the robots 102, or by one or more other components of Fig. 1.

Robot Overview

[0024] Fig. 2 illustrates an example embodiment of a robot 102. The robot 102 comprises a processor 210, a storage medium 220, sensors 250, a drive system 252, input/output devices 254, a communications system 256, special purpose equipment 258, and a power system 260. In alternative embodiments, the robot 102 may include different, fewer, or additional components.

[0025] The sensors 250 comprise devices that measure or detect conditions or properties of the robot, the surrounding environment, or objects and entities within the surrounding environment. The sensors 250 then generate signals to the processor 210, storage medium 220, or other components of the robot 102 representative of the sensed conditions or properties. The sensors 250 may include, for example, motion and orientation sensors such as an accelerometer, gyroscope, velocity sensor, or positioning sensor. The sensors 250 may also include object detection sensors, such as a RADAR sensor, LIDAR sensor, and the like. Furthermore, the sensors 250 may include devices such as a light sensor, a noise sensor, a moisture sensor, a thermometer, a dust sensor, a mold sensor, or other sensor for detecting environmental conditions. In yet other embodiments, the sensors 250 may include capture systems such as a still image camera, a video camera, or a microphone. The sensors 250 may include integrated processing devices for processing sensor data before sending it to other components of the robot 102. For example, the sensors 250 may include integrated filters or data conversion processors that may convert the sensor data from its raw captured form to a processed form.

[0026] The drive system 252 includes one or more drive components and/or controllers for controlling motion of the robot 102. For example, the drive system 252 may include a set of wheels, one or more drive motors, wheel shafts for providing energy from the drive motors to the wheels, and a drive controller for controlling the drive motors. In an embodiment, the drive system 252 may be configured to enable the robot 102 to navigate in any of multiple different directions regardless of the orientation of the robot. For example, the drive system 252 may include Mecanum wheels or Omni wheels that enable the robot 102 to move in a direction parallel to its axis of orientation, perpendicular to its axis of orientation, or diagonally relative to its axis of orientation. Furthermore, the drive system 252 may enable the robot 102 to rotate in place to a desired orientation.

[0027] The input/output devices 254 comprise various devices for capturing inputs provided to the robot 102 and generating output information. Input devices may include, for example, buttons, dials, switches, or a touchscreen display for controlling various functions of the robot 102, or may include ports for connecting external input devices such as, for example, a USB port or other serial or parallel input port. The output devices may include, for example, a display, a lighted indicator such as an LED, a haptic device, a speaker, or an output port for connecting to external output devices (e.g., a USB port).

[0028] The communications system 256 comprises a wired or wireless communication interface for communicating with external devices (e.g., via the network 104). The communications system 256 may comprise, for example, a Bluetooth interface, a WiFi interface, a cellular communication interface, or other communication interface.

[0029] The special purpose equipment 258 comprises components of the robot 102 for performing specific tasks relating to a specialized configuration of the robot 102. For example, in a robot 102 designed for vacuuming, the special purpose equipment 258 may comprise a vacuum motor and a vacuum brush. In a robot 102 designed for mopping, the special purpose equipment 258 may comprise a water tank and a mop head. In other robots 102 (e.g., a robot designed for mapping an area), the special purpose equipment 258 may be omitted.

[0030] The power system 260 stores power and supplies power to the robot 102. For example, the power system 260 may comprise a battery or a port for connecting to an external power source (such as a dock station 111). In an embodiment, the power system 260 may include separate power sources for powering high-voltage components such as the drive system 252 and low-voltage components such as integrated electronics.

[0031] The storage medium 220 (e.g., a non-transitory computer-readable storage medium) comprises volatile memory, non-volatile storage, or a combination thereof. The storage medium 220 stores instructions that, when executed by the processor 210, cause the processor 210 to carry out the functions attributed to the various modules described herein (e.g., a mapping module 222, a navigation module 224, a function control module 226, and a data collection module 228). Additionally, the storage medium 220 may include a data storage 230 to store various data collected by or utilized by the robot 102.

[0032] The mapping module 222 includes instructions for automatically mapping an area (e.g., a room). For example, the mapping module 222 may control the drive system 252 of the robot 102 to navigate an unexplored space in a particular pattern and process data from the sensors 250 to predict locations of walls, furniture, or other objects or obstacles. The mapping data may be stored to the data storage 230 or may be transmitted to the robot server 130 for storage in the building database 136.

[0033] The navigation module 224 includes instructions for controlling navigation of the robot 102. The navigation module 224 may control the robot 102 in an automated manner to complete a certain task assigned to the robot 102 (e.g., mapping a space or cleaning a room) or may navigate in response to manual commands from an operator. In an embodiment, the navigation module 224 may utilize mapping data from the local data storage 230 or the building database 136 to determine an efficient navigation route for completing the assigned task. Furthermore, the navigation module 224 may utilize mapping data and real-time sensing data to navigate in a manner that avoids objects and keepout regions.

[0034] The function control module 226 comprises instructions for controlling various functions of the robot 102 to enable the robot 102 to complete an assigned task. In some embodiments, the functions of the robot include operations such as vacuuming, mopping, scrubbing, cleaning, dusting, making a bed, moving a customer’s belongings, moving furniture, opening doors, and communicating with a human. For example, for a vacuuming robot 102, the function control module 226 may control the speed of a brush assembly or suction power of a vacuum. For a mopping robot, the function control module 226 may control the spray of water and the spin speed of the mop head. In other embodiments, the function control module 226 may control other special purpose equipment 258 of the robot 102 according to its particular assigned task.

[0035] The data collection module 228 comprises instructions for controlling collection and processing of data (e.g., from the sensors 250) obtained by the robot 102. The data collection module 228 may collect and process different types of data depending on a particular data collection task that may be assigned to the robot. For example, the data collection module 228 may collect moisture data to determine areas of a room that are particularly damp. In another example, the robot 102 may collect dust data to determine an overall cleanliness factor for a particular room. The data collection module 228 may store the collected data or may share it in real-time with other modules to enable those modules to execute their respective functions, or may provide the collected data to one or more external entities (such as another robot 102, the robot server 130, the remote terminal 134, the robot database 138, and the like).

Robot Contextualization of Map Regions

[0036] Robots can be broken down into two categories: mobile and non-mobile robots. Mobile robots may be further broken down into localized and non-localized robots; a localized robot knows its absolute position with respect to some reference. This reference for a localized mobile robot is typically a fixed map representation of an area, such as a two-dimensional occupancy grid, which is a collection of cells that each contain values indicating whether the cell is occupied by a physical object, unoccupied, or unknown.
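
A minimal sketch of such a two-dimensional occupancy grid is shown below; the three-state cell encoding, class name, and 5 cm resolution are assumptions made for illustration rather than details taken from the application.

```python
import numpy as np

# Illustrative cell states: each cell is occupied by a physical object,
# unoccupied, or unknown, as described above.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

class OccupancyGrid:
    """Fixed 2D map representation of an area, indexed in metres."""

    def __init__(self, width_m, height_m, resolution_m=0.05):
        self.resolution = resolution_m
        rows = int(height_m / resolution_m)
        cols = int(width_m / resolution_m)
        self.cells = np.full((rows, cols), UNKNOWN, dtype=np.int8)

    def to_cell(self, x_m, y_m):
        # Convert a metric position to (row, col) grid indices.
        return int(y_m / self.resolution), int(x_m / self.resolution)

    def mark(self, x_m, y_m, state):
        row, col = self.to_cell(x_m, y_m)
        self.cells[row, col] = state

# Example: a 10 m x 8 m room with one cell observed as a wall.
grid = OccupancyGrid(10.0, 8.0)
grid.mark(2.5, 3.0, OCCUPIED)
```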

[0037] Simultaneous Localization and Mapping (SLAM) is a robotics technique to generate a representation of a physical space. Multiple sensors are used to take observations of the immediate area around a robot, which are then combined with estimates of the robot's motion to build a representation of the overall space, or more generally a map. However, this transformation of a physical space to a representation thereof will necessarily lose contextual information. This contextual map information may be vital to a robot's functionality.

[0038] In practice, map generation is reliant on sensor data (e.g. optical systems such as cameras or lasers). Vision-based systems rely on identifiable features in the area to generate a physical representation of distances. Laser-based systems rely on direct distance measurements of the surrounding environment by reflecting laser light at known angles. These techniques do not currently offer a robot the contextual information required for its functionality that a human would naturally understand.

[0039] Many aspects of physical environments might not be understandable or easily representable for robots. There are many situations where contextual knowledge of a space can influence the performance of a task by an actor. For instance, a human told to vacuum a room would understand not to vacuum adjoining spaces in the same room such as a bathroom or a pool, but a robot without this knowledge would not. Likewise, a robot instructed to mop a tile portion of a floor needs to know which areas of the floor are tile and which are covered by an area rug so that it only cleans the correct regions. Providing a robot with relevant contextual information (e.g., within a contextual map) can enable a robot to avoid keepout areas (areas through or into which a robot is unable to safely navigate) identified by the map. This is a simple contextual cue that limits the space the robot may visit, but does not change its internal representation of the physical space.

[0040] Other approaches might include physical barriers or signal-based “fences”, which require a physical device to provide the robot with the context of a keepout area. For example, a doorway may be outfitted with an infrared emitter or magnet strip that the robot treats as a fence. This approach suffers from a dependency on installing physical infrastructure in every region demarcated as a keepout, which can have substantial repercussions on commercial applications.

[0041] In large open spaces, such as warehouses or conference centers, sufficient infrastructure may not exist to support physical devices. Businesses that have a strong emphasis on decor, such as hotels and museums, may not be willing to incorporate the required physical changes in their infrastructure. Some environments may also have misaligned external requirements on their infrastructure, such as cleanliness in hospitals or security in government agencies. An additional concern with fence-based approaches is that they do not contextualize which side of the fence a robot should be on. This becomes especially problematic if a robot finds itself on the wrong side of a fence and is unable to cross back over.

[0042] One approach is to have multiple overlapping map representations: localization maps, in which the environment is represented as a two-dimensional occupancy grid that models the physical environment the robot would perceive through its sensors; and additional map overlays used to contextualize regions with information pertaining to the actions a robot may or may not perform in that space (“contextual map layers” as used herein). For instance, a localization map may include boundaries representative of the walls within a room, and an overlaid contextual map layer may identify a floor type (such as tile or carpet), enabling the robot to determine that a vacuuming function should be performed only on the carpeted portion and a mopping function should be performed only on the tiled portion. This provides semantic meaning for the physical environment without the reliance on physical infrastructure.
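
The overlapping-representation idea can be sketched as a localization grid plus named overlay layers that share the same cell indexing. This is a hedged illustration; the class, layer names, and values below are assumptions, not a specification from the application.

```python
import numpy as np

class LayeredMap:
    """A localization grid with contextual map layers overlaid on the same cells."""

    def __init__(self, shape):
        self.localization = np.zeros(shape, dtype=np.int8)   # 0 = free, 1 = wall
        self.layers = {}                                      # layer name -> grid

    def add_layer(self, name, fill=None):
        self.layers[name] = np.full(self.localization.shape, fill, dtype=object)
        return self.layers[name]

    def context_at(self, row, col):
        # Collect every contextual value attached to one cell.
        return {name: layer[row, col] for name, layer in self.layers.items()}

room = LayeredMap((80, 100))
floor = room.add_layer("floor_type", fill="carpet")
floor[:, 60:] = "tile"                 # e.g. a tiled bathroom on one side
keepout = room.add_layer("keepout", fill=False)
keepout[20:35, 10:30] = True           # e.g. the footprint of a bed

print(room.context_at(25, 15))         # {'floor_type': 'carpet', 'keepout': True}
```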

[0043] To create a localization map, a trusted initial set of information is accessed, typically the blueprints/floor plans of the building. The localization map is created from the walls and permanent features of the building. This is the base layer that is used by the robot to understand where it is within the building in the global sense, and to visualize data collected by the robot for humans to consume. The blueprints are combined with alternate sources of data such as images, prior sensor measurements, and human input to create the localization map.

[0044] Fig. 3A is a flow chart illustrating the generation of a localization map, according to one embodiment. In the embodiment of Fig. 3A, building floor plans 300 and robot sensor data 302 are accessed. For instance, the building floor plans can be blueprints for a building, and the robot sensor data can be images captured by a robot, LIDAR data captured by the robot, or a SLAM map generated by the robot. From the accessed data, walls and permanent features are identified and extracted 304. The localization map is generated 306 based on the extracted walls and features. It should be noted that in some embodiments, the localization map is generated by a robot (for instance, the same robot or a different robot than the robot that captured the robot sensor data), while in other embodiments, the localization map is generated manually by a human user, or automatically by a computing system associated with the robot, the building, or any other suitable entity.
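
A compact sketch of the flow in Fig. 3A follows, under the assumption that the floor plan and the robot sensor data have already been rasterized to the same grid; the simple agreement rule stands in for real wall and permanent-feature extraction and is not the application's method.

```python
import numpy as np

def generate_localization_map(floor_plan, sensor_map, threshold=0.5):
    """Combine a rasterized floor plan with robot sensor data (e.g. a SLAM map),
    keeping cells whose weighted evidence of permanent structure meets a threshold."""
    evidence = 0.5 * (floor_plan > 0) + 0.5 * (sensor_map > 0)
    return (evidence >= threshold).astype(np.int8)   # 1 = wall / permanent feature

# Toy example: outer walls from the blueprint plus one sensed interior partition.
plan = np.zeros((50, 50), dtype=np.int8)
plan[0, :] = plan[-1, :] = plan[:, 0] = plan[:, -1] = 1
scan = plan.copy()
scan[25, 10:20] = 1
localization_map = generate_localization_map(plan, scan)
```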

[0045] As with the localization map, the contextual map layers can include an occupancy grid, with each cell of the grid representing information about that location in that layer. For instance, a floor type map layer would identify for each cell of the grid any relevant floor types within the cell (e.g., carpet, tile, hardwood, etc.) A different layer can identify the room type (such as “hallway”, “bathroom”, “bedroom”, etc.). Example contextual map layer types include:

• Obstacle layers (walls, permanent obstacles)

• Flooring type (carpet, tile, marble, hardwood)

• Keep-out regions

• Doors/entryways

• Room type (e.g., kitchen, bathroom, bedroom, office, hallway, etc.)

• Furniture (or regions that have semi-permanent obstacles)

• Historical areas where robots get stuck

• Foot traffic density

• Historically dirty areas

• Historical cleaning history

[0046] Fig. 3B is a flow chart illustrating the generation of contextual map layers, according to one embodiment. Building floor plans 300 and robot sensor data 302 are accessed. From the accessed data, one or more properties or characteristics of an area are identified for use in generating contextual map layers. In the embodiment of Fig. 3B, a floor type for each of one or more regions within an area can be identified 310 (for instance, using a floor sensor), furniture within one or more regions of the area is identified 312, room thresholds are identified 314, and room usage is identified 316. It should be noted that in other embodiments, other properties or characteristics of the room can be identified (including but not limited to those listed above). One or more contextual map layers are generated 320 based on the identified properties or characteristics of the area. For instance, a first “floor type” map layer, a second “furniture” map layer, a third “room threshold” map layer, and a fourth “room usage” map layer can be generated. As with the localization map, it should be noted that one or more contextual map layers can be generated by one or more robots (e.g., different robots can generate different contextual map layers), and one or more contextual map layers can be generated by a robot server or building server, manually by a human user, or by any other suitable computing system. It should be noted that the map layers can be stored as grid layers, as raster graphics images, as vector graphics images, or as any other suitable format.

[0047] The localization map and the one or more contextual map layers can be combined to form a combined map. As a robot traverses an area using the combined map, the robot can update or modify the combined map based on image or sensor data captured by the robot. For instance, the robot can traverse a hotel room using a combined map that includes a contextual map layer identifying the location of furniture within the room. In an example where a hotel guest has moved a chair within the room to a different location, the robot can detect the new position and orientation of the chair, and can modify the combined map to reflect the chair’s new position. Alternatively, the robot can move the chair back to the location and orientation initially identified by the combined map, or can flag the chair for manual relocation by a human.

[0048] A robot can use a combined map to navigate or to perform one or more functions within an area. For instance, the combined map can include a contextual map layer that identifies regions of the map within which point-to-point travel by the robot is permissible, and the robot can navigate through the room based on these regions. Likewise, the combined map can include a keepout layer defining areas the robot cannot or should not enter, and the robot can navigate through the room by avoiding these areas. In some embodiments, the combined map can include a floor type layer, and the robot can perform one or more cleaning functions (e.g., vacuuming, mopping, etc.) based on the identified floor types. In other embodiments, the combined map can include layers identifying areas in which spot cleaning or edge cleaning is required, identifying areas in which particular settings (such as a vacuum suction power or brush roller RPM) should be used, and the like. In some embodiments, the combined map can define, for each of one or more portions of the map, one or more functions that a robot can perform within the portion of the map, one or more functions that a robot must perform within the portion of the map, and/or one or more functions that a robot is prohibited from performing within the portion of the map.

[0049] For example, a robot can be instructed to vacuum a room. A floor type map layer can be used to identify two regions within the room that are carpeted. The robot identifies a path that enables vacuuming of the regions that are carpeted. The two carpeted regions can be separated by a hardwood floor on which vacuuming is prohibited. Accordingly, as the robot traverses the hardwood floor, the robot can disable the vacuuming functionality. In addition, a keepout map layer can identify a region within the room that includes a table and chairs, and the robot can select a shortest path within the room that enables the robot to vacuum the carpeted regions of the room while avoiding the table and chairs. Finally, a map layer can identify heavily trafficked portions of the carpeted regions, and the robot can plan a route that vacuums each of these heavily trafficked portions more than once.
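
The application does not name a specific planning algorithm; as an assumed, minimal stand-in, the sketch below finds a shortest grid path with breadth-first search while treating keepout cells as impassable.

```python
from collections import deque

def plan_path(keepout, start, goal):
    """Shortest 4-connected path over a grid of boolean keepout cells
    (True = cell the robot must avoid). Returns a list of (row, col) cells
    from start to goal, or None if no route avoids the keepout regions."""
    rows, cols = len(keepout), len(keepout[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not keepout[nr][nc] \
                    and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

# Toy room: a keepout block (e.g. a table and chairs) between start and goal.
room = [[False] * 6 for _ in range(5)]
for r in range(1, 4):
    room[r][3] = True
print(plan_path(room, (0, 0), (4, 5)))
```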

[0050] Fig. 4 is a flow chart illustrating a process for a robot to perform a task and update a contextual map, according to one embodiment. A task is received 400 by a robot. The task can be provided by a human, or can be assigned to the robot as part of an automated building-wide directive (e.g., a directive to vacuum all rooms on a floor of the building). A combined localization and contextual map is accessed 405, for instance from the robot itself, from a building server or computing system, from a mobile device of a human user, or from another robot. The robot then plans 410 a path through an area using the accessed map, for instance based on the received task. For example, if the task is to clean a room, the robot can plan a path from its current location to the room, and can plan a path within the room that will allow it to clean the room.

[0051] The robot navigates 415 the path, and receives 420 sensor data while navigating the path. For instance, the robot can include a LIDAR sensor and camera that allows it to identify the location of obstacles, people, furniture, and the like within the path. In the event that the robot identifies something that will prevent it from navigating the path or performing the task (such as a previously undetected piece of furniture at a location on the path), the robot can update 425 the accessed map or the planned path based on the detected obstacle, and can perform the task using the updated map or path. For example, the robot can navigate to a room in which the task is to be performed using an alternative route, or can perform the task at a different starting location within the room based on the updated map. The updated map can then be provided by the robot to a building server, computing system, or another robot for subsequent use.
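
A small, self-contained sketch of the map update in step 425 is given below, assuming detected obstacles arrive as metric bounding boxes and the keepout layer is a boolean grid at 5 cm resolution; both are illustrative assumptions, and a planner such as the one sketched earlier could then be re-run on the updated layer.

```python
def add_obstacle(keepout, bbox_m, resolution_m=0.05):
    """Mark every cell covered by an obstacle bounding box as keepout.
    `bbox_m` is (x_min, y_min, x_max, y_max) in metres; `keepout` is a
    list of boolean rows."""
    x0, y0, x1, y1 = bbox_m
    for r in range(int(y0 / resolution_m), int(y1 / resolution_m) + 1):
        for c in range(int(x0 / resolution_m), int(x1 / resolution_m) + 1):
            if 0 <= r < len(keepout) and 0 <= c < len(keepout[0]):
                keepout[r][c] = True
    return keepout

keepout = [[False] * 40 for _ in range(40)]          # a 2 m x 2 m patch of floor
add_obstacle(keepout, (0.5, 0.5, 1.0, 0.8))          # a previously undetected chair
```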

[0052] As noted above, one example contextual map layer is a keepout map. The keepout map defines portions of an area that a robot should not or cannot enter or travel through. Each of these portions is referred to as a “keepout region”. For instance, within the context of a hotel room, a keepout region may be defined for a bed within the hotel room (e.g., the portion of the localization map corresponding to the boundary of the bed can be designated as a keepout region, since the robot is unable to enter or travel through the boundary). Likewise, a keepout region can be defined for highly trafficked areas where a robot may inhibit movement of people within the areas. A keepout region can be manually defined, for instance by an owner of a building, a manager of a company, a cleaning or maintenance staff member, a user of the robot, or any other individual. Keepout regions can also correspond to regions where the presence of a robot may risk damaging the floor or objects within an area. Likewise, keepout regions can be defined for areas inaccessible to humans, for areas where the robot may get stuck, areas without wireless reception, walls, building structures, access gates, permanent fixtures, or any other suitable area.

[0053] In some embodiments, a contextual map layer can include keepout regions, can include free space regions (where full speed movement is permitted), and can further include reduced speed regions (where movement is permitted but only at a reduced speed). In such embodiments, the reduced speed regions can be defined based on highly trafficked areas, based on areas where full speed movement may risk damaging furniture or other objects within the areas, based on particular floor types, based on a presence of a person or type of object, or the like. Likewise, in some embodiments, a contextual map layer can include reduced functionality regions (where the robot can only perform certain operations but not others). For example, a contextual map layer can identify a first set of regions where the robot can perform all operations, and can identify a second set of regions where the robot is not allowed to vacuum. It should be noted that a contextual map can identify reduced functionality regions by either identifying allowed operations within the regions or by identifying operations that aren’t allowed within the regions.
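
One way to encode these region types is sketched below under assumed names and values; the speed caps and operation sets are illustrative, not drawn from the application.

```python
# Each region type carries a speed cap and a set of permitted operations.
REGION_POLICIES = {
    "keepout":       {"max_speed_mps": 0.0, "allowed_ops": set()},
    "free_space":    {"max_speed_mps": 1.0, "allowed_ops": {"vacuum", "mop", "dust"}},
    "reduced_speed": {"max_speed_mps": 0.3, "allowed_ops": {"vacuum", "mop", "dust"}},
    "no_vacuum":     {"max_speed_mps": 1.0, "allowed_ops": {"mop", "dust"}},
}

def permitted(region_type, operation):
    """Unknown region types fall back to the most restrictive policy."""
    policy = REGION_POLICIES.get(region_type, REGION_POLICIES["keepout"])
    return operation in policy["allowed_ops"]

assert permitted("free_space", "vacuum")
assert not permitted("no_vacuum", "vacuum")
```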

[0054] Figs. 5A-5F illustrate examples of contextualized maps, according to various embodiments. In the example of Fig. 5A, a map includes areas designated as “known” and “unknown”. In this example, the designation between “known” and “unknown” can be part of a contextual map layer overlaid on a localization map, and can reflect a status of room data based on previously-acquired robot data. For instance, areas designated as “known” can be areas previously determined by a robot to not include obstacles, and areas designated as “unknown” can be areas previously determined by a robot to either include obstacles or areas for which the presence of obstacles is unknown. A robot can use the map of Fig. 5A, for instance by traveling within the “known” areas and scanning the unknown areas to detect the presence of obstacles. In the event that the robot either detects obstacles within the unknown areas, or determines that the unknown areas do not include obstacles, the robot can update the map and provide the updated map to another robot or a computing system associated with the area, or can provide information describing the presence or absence of obstacles within the unknown area to a computing system or server configured to update the map based on the provided information.

[0055] In the example of Fig. 5B, a map includes a keepout contextual layer. In this example, the white region (labeled “free space”) represents areas that a robot can enter or travel within. Likewise, the black region (labeled “keepout”) represents areas that a robot cannot, should not, or is not permitted to enter. As the robot traverses within the area using the map of Fig. 5B, the robot can determine that portions of the area designated as keepout regions are in fact obstacle-free, and can update the map or can inform a remote computing system (such as a building server) that the portions should be re-classified as “free space” regions. Likewise, the robot, traversing through the room, can identify an obstacle not otherwise indicated by the map of Fig. 5B, and can update the map or can provide information describing the location and/or boundary of the object to a remote system in order to update the map of Fig. 5B.

[0056] In the example of Fig. 5C, a map includes a floor type contextual map layer. For instance, the white area represents tile floor, the grey area represents carpeted floor, and the black area represents keepout regions. A robot can use the map of Fig. 5C to identify cleaning operations to perform within each area of the room (e.g., a mopping operation on the tile floor and a vacuuming operation on the carpeted floor). In the example of Fig. 5D, a map includes an obstacle contextual layer identifying obstacles within the room, for instance, a bed, a desk, a table, a chair, a toilet, and a sink. A robot can use the map of Fig. 5D to identify a route to take through the room, for instance by avoiding the obstacles or by identifying boundaries of the obstacles to aid in cleaning. In the example of Fig. 5E, a map includes a historical dust data contextual map layer identifying regions of the room that have accumulated dust in the past. A robot can use the map of Fig. 5E to identify portions of the room that require additional vacuuming (e.g., another vacuuming pass, a higher vacuuming power, a slower vacuuming speed, etc.) In the example of Fig. 5F, a map includes a human occupation contextual map layer identifying portions of the room occupied by humans. A robot can use the map of Fig. 5F to plan a route through the room that avoids (or doesn’t obstruct) human occupants of the room.

[0057] As data is gathered by the robot via sensors within the robot, the contextual map layers can be continuously updated to maintain accuracy. Walls, permanent furniture, or other obstacles that may not be properly positioned on the localization map are updated from high-accuracy sensors like a LIDAR sensor. The robot can then update the contextual map layers to include the proper position of the detected objects or obstacles, or can provide information describing the location of the objects or obstacles to a remote system (such as a building server) to update the map layers. The updated contextual map layers can be distributed to one or more other robots (such as a set of robots within a threshold distance of the robot, on a same floor or in a same building as the robot, or assigned a particular task or navigating a particular route affected by the updated map), for instance in real-time, in order to provide the other robots with the most up-to-date contextual maps available.
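
A hedged sketch of the distribution step follows, assuming each robot reports an identifier, a floor, and a planar position; the threshold-distance rule mirrors the selection criteria listed above, but the data model is an assumption.

```python
import math

def recipients(robots, origin, floor, radius_m=25.0):
    """Return ids of robots on the given floor and within radius_m of the update origin."""
    return [robot["id"] for robot in robots
            if robot["floor"] == floor and math.dist(robot["pos"], origin) <= radius_m]

fleet = [{"id": "102A", "floor": 3, "pos": (4.0, 2.0)},
         {"id": "102B", "floor": 3, "pos": (60.0, 2.0)},
         {"id": "102C", "floor": 1, "pos": (5.0, 5.0)}]
print(recipients(fleet, origin=(5.0, 3.0), floor=3))   # ['102A']
```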

[0058] In one embodiment, the robot (or a computing system to which the robot communicates sensor information) can update a contextual map layer in response to detecting an obstacle within a free space region. In such an embodiment, the contextual map layer is updated by adding a keepout region to the contextual map layer corresponding to the boundary or location of the detected obstacle. Further, the robot can notify maintenance or security personnel of the detected obstacle. Likewise, the robot (or the computing system) can update a contextual map layer by removing a keepout region in response to determining that the obstacle corresponding to the keepout region is no longer there (for instance, in response to being moved). It should be noted that although reference is made to “contextual map layers”, in practice, a contextual map layer may itself be a map, or may be combined with a localization layer to form a map.
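
A brief sketch of this add-and-notify and remove behaviour over a boolean keepout layer is shown below; the `notify` callback stands in for whatever channel reaches maintenance or security personnel and is purely hypothetical.

```python
def on_obstacle_detected(keepout, cells, notify=print):
    """Add a keepout region covering the detected obstacle and alert personnel."""
    for r, c in cells:
        keepout[r][c] = True
    notify(f"Obstacle detected; {len(cells)} cells added to the keepout layer")

def on_obstacle_cleared(keepout, cells):
    """Remove a keepout region whose obstacle is confirmed to be gone."""
    for r, c in cells:
        keepout[r][c] = False

layer = [[False] * 10 for _ in range(10)]
on_obstacle_detected(layer, [(2, 3), (2, 4), (3, 3), (3, 4)])
on_obstacle_cleared(layer, [(2, 3), (2, 4), (3, 3), (3, 4)])
```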

Additional Considerations

[0059] The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.

[0060] Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like, for instance, within a robot or robot system. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or engines, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.

[0061] Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a non-transitory computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described. Further, the functionalities described herein can be performed by a hardware processor or controller located within the robot.

[0062] Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

[0063] Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.

[0064] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.