

Title:
A METHOD AND SYSTEM FOR MAPPING A REAL-WORLD ENVIRONMENT
Document Type and Number:
WIPO Patent Application WO/2024/033801
Kind Code:
A1
Abstract:
A method and system for mapping a real-world environment using a mobile robot platform, comprising receiving an indication of an update to at least a portion of an occupancy grid, the occupancy grid comprising a representation of the real-world environment. A distance map representative of at least the portion of the occupancy grid is generated. A contour based on the distance map is generated that comprises a plurality of waypoints spaced along the contour, and a locomotion-enabled component of the mobile robot platform is navigated to at least one of the waypoints. The portion of the occupancy grid is updated based on an input received from one or more sensors mounted on the locomotion-enabled component of the mobile robot platform.

Inventors:
TOSAS BAUTISTA MARTIN (GB)
Application Number:
PCT/IB2023/057997
Publication Date:
February 15, 2024
Filing Date:
August 08, 2023
Assignee:
DYSON TECHNOLOGY LTD (GB)
International Classes:
G01C21/00; G01C21/10
Foreign References:
US10571925B12020-02-25
US20220024034A12022-01-27
SG11202009494YA2020-10-29
Attorney, Agent or Firm:
MITCHELL, Joshua et al. (GB)
Claims:
CLAIMS

1. A method of mapping a real-world environment using a mobile robot platform, the method comprising:
receiving, by the mobile robot platform, at least one indication of an update to at least a portion of an occupancy grid, the occupancy grid comprising a representation of the real-world environment;
generating a distance map representative of at least the portion of the occupancy grid;
generating at least one contour based on the distance map and comprising a plurality of waypoints spaced along the contour, each representing a geographical location in the real-world environment;
navigating a locomotion-enabled component of the mobile robot platform to at least one of the waypoints; and
updating the portion of the occupancy grid based on an input received from one or more sensors that are mounted on the locomotion-enabled component of the mobile robot platform.

2. The method of claim 1, wherein the distance map comprises a plurality of areas, each categorised as at least one of: occupied space representative of an area in the real-world environment comprising at least a part of an object; known empty space representative of an area in the real-world environment comprising no objects; and unknown space representative of an area in the real-world environment which has not been mapped by the mobile robot platform.

3. The method of claim 1, comprising: selecting an appropriate distance, wherein the appropriate distance is representative of a visibility characteristic of the one or more sensors of the mobile robot platform; and refining the distance map based on the appropriate distance.

4. The method of claim 3, wherein the step of refining the distance map comprises removing, from the distance map, areas categorised as unknown space.

5. The method of claim 3 or claim 4, wherein the visibility characteristic of one or more sensors comprises determining a minimum distance such that at least one object within the real-world environment is in the field of view of the one or more sensors.

6. The method of any previous claim, comprising selecting a nearest waypoint of the plurality of waypoints, wherein the nearest waypoint is a waypoint geographically closest to a current location of the mobile robot platform in the real-world environment.

7. The method of claim 6, wherein the nearest waypoint is not in a blacklist map, wherein the blacklist map represents portions in the occupancy grid already visited by the mobile robot platform.

8. The method of claim 7, comprising adding a visited portion to the blacklist map, the visited portion being the portion of the occupancy grid that has been updated.

9. The method of any previous claim, wherein updating the portion of the occupancy grid comprises identifying at least one object in the field-of-view of the one or more sensors.

10. The method of claim 9, comprising storing a representation of the at least one object in storage associated with the mobile robot platform.

11. The method of claim 10, wherein the representation of the at least one object comprises an indication of a geographical location of the at least one object in the real-world environment.

12. The method of claim 10 or claim 11, comprising storing characteristics of the mobile robot platform and associating the characteristics with the representation of the at least one object.

13. The method of any previous claim, whereby the occupancy grid is updated at each of, or at least some of, the plurality of waypoints.

14. A system for mapping a real-world environment, the system comprising:
at least one sensor to capture information associated with the real-world environment;
storage to store at least an occupancy grid representative of the real-world environment;
at least one processor arranged to:
update the occupancy grid representative of the real-world environment based on information captured by the at least one sensor;
generate a distance map representative of at least a portion of the occupancy grid;
determine at least one contour in the distance map comprising a plurality of waypoints spaced along the contour, each waypoint being representative of a geographic location in the real-world environment; and
determine a route comprising a plurality of the waypoints; and
a locomotion-enabled component to navigate to at least one of the waypoints of the route determined by the motion planning component.

15. The system according to claim 14, comprising a simultaneous localization and mapping module for determining a location of at least the locomotion-enabled component within the real-world environment.

16. The system according to claim 14 or claim 15, wherein the at least one sensor for capturing information associated with the real-world environment comprises at least one of: a camera unit; a time of flight sensor unit; an array distance sensor unit; and an inertial measuring unit.

17. A non-transitory computer-readable storage medium comprising a set of computer-readable instructions stored thereon which, when executed by at least one processor, are arranged to control a mobile robot platform to map a real-world environment, wherein the instructions, when executed, cause the processor to:
receive, by the mobile robot platform, at least one indication of an update to at least a portion of an occupancy grid, the occupancy grid comprising a representation of the real-world environment;
generate a distance map representative of at least the portion of the occupancy grid;
generate at least one contour based on the distance map and comprising a plurality of waypoints spaced along the contour, each representing a geographical location in the real-world environment;
navigate a locomotion-enabled component of the mobile robot platform to at least one of the waypoints; and
update the portion of the occupancy grid based on an input received from one or more sensors of the locomotion-enabled component of the mobile robot platform.

Description:
A METHOD AND SYSTEM FOR MAPPING A REAL-WORLD ENVIRONMENT

Field of the Invention

The present invention relates to a method and system for mapping a real-world environment. More particularly, but not exclusively, the method and system relate to mapping a real-world environment using a mobile robot platform.

Background of the Invention

When navigating a robotic device in a real-world environment, determining the location and characteristics of objects within said environment enables the robotic device to detect and avoid those objects. Sensor systems detect the objects and store the locations and characteristics within the real-world environment such that, when navigating, the robotic device can determine where it can move unimpeded. Processing the data from the sensor system is a computationally complex and resource-intensive process, and therefore mapping the entirety of the real-world environment can take a long time. This is further compounded by objects within the real-world environment obstructing the sensor system of the robotic device such that determining a representation of the entirety of the real-world environment is further complicated.

Summary of the Invention

According to a first aspect of the present invention, there is provided a method of mapping a real-world environment using a mobile robot platform, the method comprising receiving, from the mobile robot platform, at least one indication of an update to at least a portion of an occupancy grid, the occupancy grid comprising a representation of the real-world environment; generating a distance map representative of at least the portion of the occupancy grid; generating at least one contour based on the distance map and comprising a plurality of waypoints spaced along the contour, each representing a geographical location in the real-world environment; navigating a locomotion-enabled component of the mobile robot platform to at least one of the waypoints; and updating the portion of the occupancy grid based on an input received from the one or more sensors that are mounted on the locomotion-enabled component of the mobile robot platform. This enables the mapping of a real-world environment to be undertaken whilst exploring said environment, and in particular enables mapping information represented in the occupancy grid to be captured and updated for areas which are initially obscured by objects within the environment. Furthermore, by determining contours within the real-world environment, the mapping can be undertaken efficiently by focussing on areas within the real-world environment which are visible within the field of view of the sensors associated with the mobile robot platform.

The distance map may comprise a plurality of areas, each categorised as at least one of: occupied space representative of an area in the real-world environment comprising at least a part of an object; known empty space representative of an area in the real-world environment comprising no objects; and unknown space representative of an area in the real-world environment which has not been mapped by the mobile robot platform. By categorising areas in the distance map, object avoidance can be implemented, and the mapping focussed on portions which have yet to be mapped by the mobile robot platform, thereby increasing the efficiency of the method and the speed at which an accurate map of the entirety of the real-world environment can be generated.

Preferably, the method comprises selecting an appropriate distance, wherein the appropriate distance is representative of a visibility characteristic of the one or more sensors of the mobile robot platform; and refining the distance map based on the appropriate distance. This enables the distance map to be refined based on the characteristics of the sensors of the mobile robot platform, thereby ensuring that the distance map accurately represents the portions of the real-world environment visible to the one or more sensors. Optionally, the step of refining the distance map comprises removing, from the distance map, areas categorised as unknown space. This ensures that the distance map only comprises areas which have been previously mapped, thereby ensuring that the locomotion-enabled component of the mobile robot platform is navigated to geographical locations categorised as known empty space.

The visibility characteristic of one or more sensors may comprise determining a minimum distance such that at least one object within the real-world environment is in the field of view of the one or more sensors. Refining the distance map based on a minimum distance associated with the field-of-view of the sensors ensures that the distance map is limited to the areas where the sensor is capable of obtaining data, such that only areas within the field-of-view of the sensor at a given time are used when determining the contour and, subsequently, the areas the mobile robot platform moves within.

A nearest waypoint of the plurality of waypoints may be selected, wherein the nearest waypoint is a waypoint geographically closest to the location of the mobile robot platform in the real-world environment. This enables the mobile robot platform to navigate to the geographical location of the start of the closest contour in the distance map, ensuring that the real-world environment is mapped in the most efficient manner.

Preferably, the nearest waypoint is not in a blacklist map. The blacklist map may represent portions in the occupancy grid already visited by the mobile robot platform. A visited portion may be added to the blacklist map, the visited portion being the portion of the occupancy grid that has been updated. This prevents the mobile robot platform from revisiting areas in the environment which have already been mapped, and as such, where the occupancy grid has already been updated. This ensures that a map of the entirety of the real-world environment is captured in the most efficient and timely manner. Updating the portion of the occupancy grid may comprise identifying at least one object in the field-of-view of the one or more sensors. This enables the identification of objects which may obstruct the movement of the mobile robot platform when navigating around the real-world environment. Preferably, a representation of the at least one object is stored in storage associated with the mobile robot platform. By identifying objects, and subsequently storing a representation of an object in the real-world environment, the mobile robot system may use this data to interact with the objects at a future time.

The representation of the at least one object may further comprise an indication of a geographical location of the at least one object in the real-world environment. By determining the geographical location of the object in the real- world environment, the mobile robot system may determine a desired location of the object, and at a future time reposition and/or reorient the object in accordance with the desired location.

Optionally, characteristics of the mobile robot platform may be stored and associated with the representation of the at least one object. By associating characteristics of the mobile robot platform with the object representation, the mobile robot platform may return to the location of the object at a future time, and position itself in such a way as to efficiently facilitate interaction with the object, such as adjusting the height of a manipulator of the robot platform that is arranged to interact with the object.

According to a second aspect of the present invention, there is provided a system for mapping a real-world environment, the system comprising at least one sensor to capture information associated with the real-world environment; storage to store at least an occupancy grid representative of the real-world environment; at least one processor arranged to update the occupancy grid representative of the real-world environment based on information captured by the at least one sensor; generate a distance map representative of at least a portion of the occupancy grid; determine at least one contour in the distance map comprising a plurality of waypoints spaced along the contour, each waypoint being representative of a geographic location in the real-world environment; determine a route comprising a plurality of the waypoints; and a locomotion-enabled component to navigate to at least one of the waypoints of the route determined by the motion planning component. This enables the mapping of a real-world environment to be undertaken whilst exploring said environment, and in particular enables mapping information represented in the occupancy grid to be captured and updated for areas which are initially obscured by objects within the environment. Furthermore, by determining contours within the real-world environment, the mapping can be undertaken in an efficient manner, by focussing on areas within the environment which may initially be obscured from the field of view of the sensors associated with the mobile robot platform.

The system may comprise a simultaneous localization and mapping module for determining the location of at least the locomotion-enabled component within the real-world environment. This enables the mobile robot platform to determine its current location within the real-world environment accurately using the data gathered from one or more sensors.

Preferably, the at least one sensor for capturing information associated with the real-world environment comprises at least one of a camera unit; a time of flight sensor unit; an array distance sensor unit; and an inertial measuring unit. Using different sensors enables differing information to be gathered and processed, and therefore can be used to generate a more accurate and comprehensive representation of the real-world environment.

According to a third aspect of the present invention, there is provided a non-transitory computer-readable storage medium comprising a set of computer-readable instructions stored thereon which, when executed by at least one processor, are arranged to control a mobile robot platform to map a real-world environment, wherein the instructions, when executed, cause the processor to receive, from the mobile robot platform, at least one indication of an update to at least a portion of an occupancy grid, the occupancy grid comprising a representation of the real-world environment; generate a distance map representative of at least the portion of the occupancy grid; generate at least one contour based on the distance map, and comprising a plurality of waypoints spaced along the contour, each representing a geographical location in the real-world environment; navigate a locomotion-enabled component of the mobile robot platform to at least one of the waypoints; and update the portion of the occupancy grid based on an input received from one or more sensors of the locomotion-enabled component of the mobile robot platform. This enables the mapping of a real-world environment to be undertaken whilst exploring said environment, and in particular enables mapping information represented in the occupancy grid to be captured and updated for areas which are initially obscured by objects within the environment. Furthermore, by determining contours within the real-world environment, the mapping can be undertaken efficiently, by focussing on areas within the environment which may initially be obscured from the field of view of the sensors associated with the mobile robot platform.

Further features and advantages of the invention will become apparent from the following description of examples of the invention, given by way of example only, which is made with reference to the accompanying drawings. Optional features of aspects of the present invention may be equally applied to other aspects of the present invention, where appropriate.

Brief Description of the Drawings

Figure 1 is a flowchart illustrating a method of mapping a real-world environment using a mobile robot platform, according to an example;

Figure 2 is a schematic diagram of an occupancy grid used for mapping a real-world environment using a mobile robot platform, according to an example;

Figure 3 is a schematic diagram of a distance map generated by the mobile robot platform, according to an example;

Figure 4A is a schematic diagram of a refined distance map indicating a plurality of contours according to an example;

Figure 4B is a schematic diagram of a refined distance map of Figure 4A where the plurality of contours have associated waypoints; and

Figure 5 is a block diagram of a system for mapping a real-world environment according to an example.

Detailed Description

Details of methods and systems according to examples will become apparent from the following description with reference to the figures. In this description, for the purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to ‘an example’ or similar language means that a feature, structure, or characteristic described in connection with the example is included in at least that one example but not necessarily in other examples. It should be further noted that certain examples are illustrated schematically with certain features omitted and/or necessarily simplified for the ease of explanation and understanding of the concepts underlying the examples.

Accurately mapping a real-world environment using a robot can be a time consuming and processor-intensive operation, which is dependent on a number of factors, including the complexity of the real-world environment, the presence of objects within the real-world environment, and the capability of the hardware used to map the real-world environment. With the advent of robotic systems, the mapping of such real-world environments can be optimised, and the accuracy of the generated maps increased. Figure 1 is a flowchart showing a method 100 in accordance with an example. The method 100 maps a real-world environment using a given mobile robot platform, such as the mobile robot platform described below with reference to Figure 5. The method 100 will also be described with reference to representations 200, 300, 400 of an exemplary real-world environment shown in Figures 2 - 4B. Whilst a given real-world environment is shown in Figures 2 - 4B, it will be appreciated that the method 100 may be applied to any given real-world environment, and different representations generated. Although the method 100 is described with reference to a contour exploration methodology, it will be appreciated that other methodologies, such as frontier exploration, may be used in other examples.

At step 110 of method 100, an indication to update at least a portion of an occupancy grid representing the real-world environment is received by a mobile robot platform 220. In some examples, the indication may be a message sent from a user device, indicating that the mobile robot platform 220 is to update the occupancy grid; in other examples, the indication may be a notification that the mobile robot platform 220 is to update the occupancy grid, such as at a predetermined time and/or according to a given schedule. The occupancy grid is a representation of a given real-world environment, such as a room in a dwelling or other building. An example of an occupancy grid 200 is shown in Figure 2. The occupancy grid 200 contains areas of known occupied space 210 representing the edges of objects, surfaces, and other obstructions within the real-world environment, such as walls. In some examples, the occupancy grid 200 may comprise other information, such as data related to the position of the mobile robot platform 220. Whilst four areas of known occupied space 210 are labelled in the occupancy grid 200 shown in Figure 2, in reality there may be any number of positions contained within the occupancy grid 200 representing, for example, the locations of walls, surfaces, objects and obstructions within the real-world environment. Taken together, these multiple positions in the occupancy grid 200 signify the edges of objects, surfaces and/or obstructions within the real-world environment indicated by the distance map. These positions are then used to generate the contours which are representative of paths in the real-world environment in which it is safe for the mobile robot platform 220 to move without impinging on the positions.

Other information may also be associated with the mobile robot platform 220 and stored in storage. Examples of such information include the positioning of servos, motors, hydraulics and/or pneumatic components of the mobile robot platform 220. This information may be used to position one or more armatures or other moveable components.

Given the nature of the real-world environment, and the fact that the mobile robot platform 220 is in a given location, areas within the real-world environment may not be visible to one or more sensors associated with the mobile robot platform that are used to capture data representing the real-world environment. For example, in some areas, the view of one or more sensors may be obscured by one or more objects, such as object 260. Examples of such sensors will be described below with reference to Figure 5. Based on the data captured, areas within the occupancy grid 200 may be categorised as one of:

• known empty space 230, representing areas within the real-world environment where it is known that there are no objects, obstructions and/or surfaces;

• known occupied space 210 as described above representing the edges of objects, obstructions and/or surfaces within the real-world environment; and

• unknown space 240 representing areas within the real-world environment where the one or more sensors are unable to obtain data, for example based on a current field-of-view of the sensor at the location of the mobile robot platform 220.

The indication to update at least a portion of the occupancy grid may be received by the mobile robot platform via a wired or wireless network connection, including but not limited to 802.11, Ethernet, Bluetooth®, or a near-field communications (‘NFC’) protocol. The portion of the occupancy grid may represent a predefined area within the real-world environment, such as a room in a dwelling, may represent a group of pre-defined areas, such as a whole apartment within an apartment block, or may represent a given area within another area, such as part of a larger room, as indicated by area 250 in the occupancy grid 200 of Figure 2. In some examples, a user of the mobile robot platform 220 can use a network-connected computing device, such as a mobile telephone, laptop computer, desktop computer, or wearable device to indicate the portions of the occupancy grid to update. Alternatively, in other examples, indications may be received periodically, or at predetermined times. In other examples, the indications may be received from one or more sensors in the real-world environment which detect changes, and/or may be received from a remote device such as a user’s smartphone. These indications may be used to notify the mobile robot platform that a portion of the occupancy grid is to be updated, thereby ensuring an accurate representation of the real-world environment is maintained.
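Purely as an illustration, the occupancy grid and its three categories of space can be pictured as a simple two-dimensional array. The cell values, grid size and helper name in the following sketch are assumptions made for explanation and are not taken from the application itself.

```python
import numpy as np

# Cell categories corresponding to the description above (the numeric values are arbitrary).
UNKNOWN = -1   # unknown space 240: not yet observed by any sensor
FREE = 0       # known empty space 230: observed and containing no objects
OCCUPIED = 1   # known occupied space 210: edge of an object, surface or obstruction

def make_occupancy_grid(rows: int, cols: int) -> np.ndarray:
    """Every cell starts as unknown space until sensor data is received."""
    return np.full((rows, cols), UNKNOWN, dtype=np.int8)

# Example: a 10 x 10 grid with a wall along the top edge and a small explored area.
grid = make_occupancy_grid(10, 10)
grid[0, :] = OCCUPIED      # wall observed by the sensors
grid[1:4, 2:8] = FREE      # area already confirmed to be empty
```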

Returning again to Figure 1, following the receipt of the indication to update a portion of the occupancy grid, at step 120 a distance map is generated. Carrying on from the exemplary occupancy grid 200 of Figure 2, a distance map 300 for this given real-world environment is shown in Figure 3. The distance map 300 represents the portion of the occupancy grid 200 for which the indication was received, in this example the entirety of the real-world environment, and represents the maximum distance in the real-world environment within which the mobile robot platform 220 can move. By moving within this maximum distance, the mobile robot platform 220 is able to capture data regarding the object(s), surface(s), and/or obstruction(s) represented in the occupancy grid 200 clearly. For example, the mobile robot platform 220 is able to position itself such that it is close enough to the object, surface and/or obstruction for the one or more sensors to be able to capture in-focus data regarding said object, surface and/or obstruction. Positions, such as positions 310, 320, 330, are represented on the distance map 300 as areas surrounding the known occupied space 210 in the occupancy grid 200.
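Continuing the illustrative sketch above, one common way to obtain per-cell distances from known occupied space is a Euclidean distance transform; the application does not mandate this particular transform, so the following is an assumption for illustration only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def generate_distance_map(grid: np.ndarray) -> np.ndarray:
    """Distance (in cells) from each cell to the nearest known occupied cell."""
    # distance_transform_edt measures distance to the nearest zero element,
    # so occupied cells are marked 0 and every other cell 1.
    not_occupied = (grid != OCCUPIED).astype(np.uint8)
    return distance_transform_edt(not_occupied)

distance_map = generate_distance_map(grid)   # 'grid' and OCCUPIED come from the sketch above
```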

The positions 310, 320, 330 may be based on characteristics of the mobile robot platform 220 itself, for example a clearance required for any locomotion components such as a wheel assembly, or, in some examples, may be based on visibility characteristics of one or more sensors associated with the mobile robot platform 220. For example, a given sensor associated with the mobile robot platform 220 may have a set focal length. As such, the distance map 300 may represent areas within the occupancy grid 200 to which the mobile robot platform 220 may move within the real-world environment. This enables the mobile robot platform 220 to accurately capture in-focus data of whatever object, obstruction, and/or surface is at a given location. Where the position is based on the visibility characteristics of more than one sensor, multiple positions 310, 320, 330 may be generated, such that each sensor is able to obtain accurate in-focus data regarding the known occupied space 210. In some examples, based on this distance, the distance map 300 may be refined, such that only positions within the real-world environment where the mobile robot platform 220 can capture accurate in-focus data are represented.

In some examples, the distance map 300 may be refined such that it represents areas of the occupancy grid that are known empty space 230 or unknown space 240. In yet further examples, the distance map may be refined such that it only includes areas which are known empty space 230. This may involve removing, from the distance map 300, areas categorised as unknown space 240 and/or known occupied space 210. This ensures that the distance map 300 includes areas which the mobile robot platform 220 is able to traverse. However, it will be appreciated that this need not be undertaken, and that other means of ensuring the mobile robot platform 220 does not enter unknown space 240 when mapping the real-world environment may be used. Examples of this include the use of a location positioning system in combination with geofencing, which may be provided by a simultaneous localization and mapping method performed by a simultaneous localization and mapping module, as described below. In yet further examples, the movement of the mobile robot platform 220 itself may be used to update the occupancy grid 200 as it traverses the real-world environment.
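A minimal sketch of this refinement step is given below, reusing the grid and distance map from the earlier sketches. The assumption here is that the "appropriate distance" is expressed in grid cells and that discarded cells are simply masked out (NaN); neither detail is prescribed by the application.

```python
import numpy as np

def refine_distance_map(distance_map: np.ndarray,
                        grid: np.ndarray,
                        appropriate_distance: float) -> np.ndarray:
    """Keep only known empty cells within the appropriate distance of occupied
    space; cells in unknown or occupied space are discarded (marked NaN)."""
    refined = distance_map.astype(float).copy()
    refined[grid != FREE] = np.nan                          # remove unknown and occupied space
    refined[distance_map > appropriate_distance] = np.nan   # too far to capture in-focus data
    return refined

refined_map = refine_distance_map(distance_map, grid, appropriate_distance=3.0)
```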

Following the generation of the distance map 300, at step 130 one or more contours are generated. The contours represent appropriate paths in the known empty space 230 that the mobile robot platform 220 may follow, so as to avoid the objects, obstructions and/or surfaces 210 in the real-world environment. As mentioned above, the distance map 300 comprises positions 310, 320, 330 which represent a distance from known occupied space 210 identified within the occupancy grid. The positions 310, 320, 330 may be based on a number of characteristics, and represent a location of the mobile robot platform 220 in the real-world environment. This ensures that accurate data can be captured relating to objects, surfaces and/or obstructions in the real-world environment. However, some of the positions 310, 320, 330 may fall within unknown space 240. It would, therefore, be undesirable to position the mobile robot platform 220 at such a position, since it is unknown whether there is a surface, object and/or obstacle at that position. As such, one or more contours, such as contours 410, 420, 430, are generated based on the positions 310, 320, 330, as shown in Figure 4A. The contours 410, 420, 430 represent portions of the positions 310, 320, 330 that fall within known empty space 230. Accordingly, it is possible to move the mobile robot platform 220 to that location within the real-world environment without interacting with any of the objects, surfaces and/or obstructions.

Associated with each contour 410, 420, 430 is a start waypoint 410A, 420A, 430A. In some examples, the start waypoint represents a starting waypoint of the contour 410, 420, 430 closest to the geographical location of the mobile robot platform 220 in the real-world environment. However, it will be appreciated that other methods for selecting a start waypoint 410A, 420A, 430A may be used. For example, the start waypoint 410A, 420A, 430A may be selected in accordance with other characteristics, such as determining whether the selected waypoint is on a blacklist of waypoints, such as the blacklist of waypoints described below, or based on the length of the path the mobile robot platform 220 must traverse in order to reach the start waypoint 410A, 420A, 430A. It will be appreciated that the methodology for selecting a start waypoint may be based on a combination of methods, such as the above-mentioned closest methodology and the blacklist methodology. Following the selection of a start waypoint 410A, 420A, 430A, the contour 410, 420, 430 may be separated into a plurality of waypoints between the start waypoint 410A, 420A, 430A and an ending waypoint 410Z, 420Z, 430Z representing the furthest point along the contour 410, 420, 430. This, therefore, represents a continuous route from the start waypoint 410A, 420A, 430A to the respective end waypoint 410Z, 420Z, 430Z. An example of the contours 410, 420, 430 being separated into a plurality of waypoints, each with a start waypoint 410A, 420A, 430A and an end waypoint 410Z, 420Z, 430Z, is shown in Figure 4B. It will be appreciated that there are a number of methodologies for separating the contour into a plurality of waypoints. One such example may be to divide the contour 410, 420, 430 evenly, such that each waypoint is equidistant from another. This waypoint information may then be stored in association with the occupancy grid to enable easy and quick subsequent access during operations requiring the mobile robot platform 220 to transit the waypoints.
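The sketch below illustrates one way a contour could be reduced to roughly equidistant waypoints, reusing the refined map from the earlier sketches. The nearest-neighbour chaining and the fixed sampling interval are assumptions made purely for illustration, not the method prescribed by the application.

```python
import numpy as np

def extract_contour_cells(refined_map: np.ndarray, target: float, tol: float = 0.5):
    """Known empty cells whose distance value lies within tol of the target distance."""
    rows, cols = np.where(np.abs(refined_map - target) <= tol)
    return list(zip(rows.tolist(), cols.tolist()))

def order_and_sample_waypoints(cells, spacing: int = 3):
    """Chain the cells by nearest neighbour, then keep every 'spacing'-th cell so
    the resulting waypoints are approximately equidistant along the contour."""
    if not cells:
        return []
    remaining = list(cells)
    ordered = [remaining.pop(0)]
    while remaining:
        last = ordered[-1]
        nxt = min(remaining, key=lambda c: (c[0] - last[0]) ** 2 + (c[1] - last[1]) ** 2)
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered[::spacing]

contour_cells = extract_contour_cells(refined_map, target=3.0)
waypoints = order_and_sample_waypoints(contour_cells)   # waypoints[0] is the start, waypoints[-1] the end
```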

Once the contours 410, 420, 430 have been generated and the waypoints determined, at step 140, the waypoint information is used to navigate the mobile robot platform 220 to a waypoint in one of the contours 410, 420, 430. In some examples, this may be the start waypoint 410A, 420A, 430A associated with the contour 410, 420, 430. Navigating the mobile robot platform 220 to the waypoint may include initiating a locomotion-enabled component associated with the mobile robot platform 220. The locomotion-enabled component enables the mobile robot platform 220 to physically move around the real-world environment in accordance with the occupancy grid 200 and the contours 410, 420, 430. This is achieved by navigating the mobile robot platform 220 to the start waypoint 410A, 420A, 430A of one of the contours 410, 420, 430, and then traversing the contour 410, 420, 430 by navigating to the next waypoint of the contour 410, 420, 430 until the end waypoint 410Z, 420Z, 430Z is reached. This will be described in further detail below with reference to Figure 5.

Referring again to Figure 1, at step 150, the occupancy grid 200 is updated whilst navigating along the contours 410, 420, 430. This may occur whilst navigating along a contour, for example at a waypoint and/or between waypoints. This enables data to be captured using one or more sensors associated with the mobile robot platform 220, such that a more accurate representation of the real-world environment can be obtained. This is achieved since areas of the real-world environment which were previously outside the field-of-view of the one or more sensors (that is, areas of unknown space 240) may now be within the field-of-view (and can thus be classed as either known occupied space 210 or known empty space 230) as the mobile robot platform 220 traverses the contours 410, 420, 430. The updating process may occur as each waypoint is visited by the mobile robot platform 220 or, in other examples, may occur at selected waypoints of the contour 410, 420, 430. In yet further examples, the mobile robot platform 220 may traverse the entire contour 410, 420, 430 and perform the updating process when the end waypoint 410Z, 420Z, 430Z is reached, and/or may perform the updating process whilst navigating between the waypoints. In some examples, the process can be repeated, such that new contours are determined, enabling further exploration of the real-world environment and further increasing the accuracy of the map and occupancy grid 200.
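A hedged sketch of this update step is shown below: it assumes that sensor processing has already classified the cells currently in the field of view, and that the occupancy grid is simply overwritten with those classifications. The observation format is an assumption for illustration only.

```python
def update_occupancy_grid(grid, observations):
    """observations: iterable of ((row, col), state) pairs, where state is FREE or
    OCCUPIED for cells currently within the sensors' field of view."""
    newly_mapped = 0
    for (row, col), state in observations:
        if grid[row, col] == UNKNOWN:
            newly_mapped += 1          # previously unknown space is now mapped
        grid[row, col] = state
    return newly_mapped

# Example: at a waypoint the sensors resolve one occupied cell and two empty cells.
update_occupancy_grid(grid, [((5, 5), OCCUPIED), ((4, 5), FREE), ((4, 6), FREE)])
```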

As the mobile robot platform 220 traverses along the contour 410, 420, 430 and visits the waypoints, performing the update action described above, each waypoint visited may be added to a blacklist, and the updated blacklist is then stored in storage. By recording the visited waypoints in this way, it is possible to track which waypoints have and have not been visited, and for which updated information has been obtained. This enables a starting waypoint to be selected from the waypoints which are not contained within the blacklist, and for which updated information has not already been obtained. Furthermore, this enables the mapping process to be stopped and started as required whilst maintaining an understanding of the current progress of the mapping process.
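The blacklist bookkeeping described above might look as follows; the set-based representation and the helper names are illustrative assumptions rather than the application's stated data structures.

```python
blacklist = set()   # waypoints (or grid portions) already visited and updated

def mark_visited(waypoint):
    blacklist.add(waypoint)

def select_start_waypoint(candidates, robot_cell):
    """Nearest candidate waypoint to the robot's current cell that is not blacklisted."""
    unvisited = [w for w in candidates if w not in blacklist]
    if not unvisited:
        return None   # every waypoint has already been visited
    return min(unvisited,
               key=lambda w: (w[0] - robot_cell[0]) ** 2 + (w[1] - robot_cell[1]) ** 2)
```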

During the exploration of the real-world environment by the mobile robot platform 220, and whilst updating the occupancy grid 200, the data obtained by the one or more sensors may be analysed to identify objects within the real-world environment. By analysing the data and identifying objects therein, the mobile robot platform can obtain further information about the real-world environment and thereby generate a more accurate mapping of the locations of objects, surfaces and/or other obstructions. In some examples, information associated with the identity and/or representation of the objects may be stored as part of, or separately from, the occupancy grid, enabling subsequent access and analysis to be undertaken. In addition, the identity and/or representation of the object may be associated with its geographical location in the real-world environment.

As mentioned above, in some examples, the mobile robot platform 220 may need to adjust the position of one or more armatures or other moveable components associated with it in order to traverse a contour, and/or to perform an action at a given location. In such examples, the position of the armature or other moveable component may also be tracked and stored alongside the identity of the object, the representation of the object and/or a given location in the occupancy grid 200. It will be appreciated that other characteristics of the mobile robot platform 220 may also be stored, not just the position of the armature and/or other moveable components.

Figure 5 shows a schematic representation of a system 500, such as the mobile robot platform 220 described above with reference to Figures 1 through 4B. The components 510, 520, 530, 540, 550 of the system may be interconnected by a bus, or in some examples, may be separate such that data is transmitted to and from each component via a network.

The system 500 comprises at least one sensor 510A, 510Z for capturing information associated with the real-world environment. The one or more sensors 510A, 510Z may include a camera unit for capturing frames of image data representing the real-world environment. The camera unit may be a visual camera unit configured to capture data in the visible light frequencies. Alternatively, and/or additionally, the camera unit may be configured to capture image data in the infra-red frequencies. It will be appreciated that other types of camera unit may be used. In some examples, the camera unit may comprise multiple individual cameras each configured differently, such as with different lens configurations, and may be mounted in such a way as to be a 360-degree camera. In other examples, the camera unit may be arranged to rotate such that it scans the real-world environment, thereby increasing its field of view. Again, it will be appreciated that other configurations may be possible.

In addition to, or instead of, a camera unit, the at least one sensor 510A, 510Z may comprise a time of flight sensor unit or array distance sensor unit configured to measure the distance from the sensor unit to objects, surfaces and/or obstacles in the real-world environment. An example of such time of flight or array distance sensors includes laser imaging, detection, and ranging (LIDAR). Other time of flight and/or array distance sensors may also be used. In addition to detecting objects within the environment, the one or more sensors 510A, 510Z may also include an inertial measuring unit for measuring the movement of the mobile robot platform 220 around the real-world environment. The one or more sensors 510A, 510Z provide the captured data to a processor 530 for processing. The processor 530 is arranged to use the captured data to update the occupancy grid 200 accordingly, such that the occupancy grid 200 represents the real-world environment within the field-of-view of the one or more sensors 510A, 510Z.

The system 500 also comprises storage 520 which may include any type of storage medium such as a solid-state drive (SSD) or other semiconductor-based RAM; a ROM, for example, a CD ROM or a semiconductor ROM; a magnetic recording medium, for example, a floppy disk or hard disk; optical memory devices in general; etc. The storage 520 is configured to store at least an occupancy grid representing the real-world environment, such as the occupancy grid 200 described above. In some examples, and as explained above, the storage 520 may be configured to store characteristics associated with the mobile robot platform, such as the position of armatures and other moveable components, in addition to the identities and characteristics of objects within the real-world environment.

The processor 530 is configured to perform at least the method 100 described above with reference to Figures 1 - 4B, and is configured to receive the occupancy grid data from storage 520 and data representing the real-world environment from the one or more sensors 510A, 510Z. The processor 530 comprises at least an updating module 532 for updating the occupancy grid based on the data obtained by the one or more sensors 510A, 510Z, as described above in relation to step 150 of method 100. Updating the occupancy grid comprises analysing the data obtained by the one or more sensors 510A, 510Z and categorising portions of the occupancy grid as known empty space, unknown space, and known occupied space.

A distance map generating module 534 associated with the processor 530 is configured to generate the distance map, such as distance map 300, based on the known occupied space represented in the occupancy grid in accordance with step 120 of method 100 described above. A contour determination module 536 is used to generate a plurality of contours based on the generated distance map, where each contour comprises a plurality of waypoints as described above in accordance with step 130 of method 100.

The processor 530 also comprises a motion planning component 538 for determining a route comprising the plurality of waypoints of a given contour. The motion planning component 538 analyses the contours generated by the contour determination module 536, and maps a route for a mobile robot platform, such as the mobile robot platform 220 described above, to take. In some examples, where the occupancy grid comprises other information, such as data relating to the positioning of armatures and/or other moveable components of the mobile robot platform 220, the motion planning component 538 may be arranged to indicate this positioning data in accordance with a given waypoint on the determined route.
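Purely as an illustration of how the modules described above could interact, the helpers from the earlier sketches can be chained into a single exploration cycle. The control flow, the `get_observations` callable and the return values below are assumptions for explanation, not the application's prescribed pipeline.

```python
def exploration_cycle(grid, robot_cell, appropriate_distance, get_observations):
    """One cycle: distance map -> contour waypoints -> route -> grid update."""
    distance_map = generate_distance_map(grid)
    refined = refine_distance_map(distance_map, grid, appropriate_distance)
    cells = extract_contour_cells(refined, target=appropriate_distance)
    waypoints = order_and_sample_waypoints(cells)
    start = select_start_waypoint(waypoints, robot_cell)
    if start is None:
        return False   # nothing left to explore along this contour
    for waypoint in waypoints[waypoints.index(start):]:
        # the locomotion-enabled component would navigate to 'waypoint' here
        update_occupancy_grid(grid, get_observations(waypoint))
        mark_visited(waypoint)
    return True
```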

Once the route has been determined by the motion planning component 538, it is output to a locomotion-enabled component 540 for navigating the waypoints of the route. The locomotion-enabled component 540 may be a wheel assembly, propeller assembly, or other controllable means for moving a mobile robot platform around the real-world environment. This enables the one or more sensors 510A, 510Z to capture data relating to areas of the real-world environment that were previously outside the field-of-view of the one or more sensors 510A, 510Z.

In some examples, the system 500 may comprise a simultaneous localization and mapping (SLAM) module 550 for locating the system 500 in the real-world environment. The SLAM module 550 may comprise several additional sensors and/or components such as a local positioning sensor. This may be used in combination with other sensors such as the inertial measuring sensor described above, and/or a satellite radio-navigation system. Examples of such satellite radio-navigation systems include the Global Positioning System, Galileo, or GLONASS. These sensors, either individually or together, are capable of tracking the location of the system 500 in the real-world environment as the locomotion-enabled component 540 moves the system around the real-world environment. It will be appreciated that the simultaneous localization and mapping module may comprise other components for performing these functions.

At least some aspects of the examples described herein with reference to Figures 1 - 5 comprise computer processes performed in processing systems or processors. However, in some examples, the disclosure also extends to computer programs, particularly computer programs on or in an apparatus, adapted for putting the disclosure into practice. The program may be in the form of non-transitory source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other non-transitory form suitable for use in the implementation of processes according to the disclosure. The apparatus may be any entity or device capable of carrying the program. For example, the apparatus may comprise a storage medium, such as a solid-state drive (SSD) or other semiconductor-based RAM; a ROM, for example, a CD ROM or a semiconductor ROM; a magnetic recording medium, for example, a floppy disk or hard disk; optical memory devices in general; etc.

In the preceding description, for purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to "an example" or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples.

The above examples are to be understood as illustrative examples of the disclosure. Further examples of the disclosure are envisaged. It is to be understood that any feature described in relation to any one example may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the examples, or any combination of any other of the examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the disclosure, which is defined in the accompanying claims.