Title:
METHODS FOR CONTROLLING THE MOVEMENT OF A ROBOT FOR DISINFECTION
Document Type and Number:
WIPO Patent Application WO/2023/144523
Kind Code:
A1
Abstract:
There is disclosed a method of controlling movement of a UV cleaning and/or disinfection robot (1), the robot comprising one or more wheels, and a depth detection system (7), the method including a learning phase followed by an operational phase wherein the learning phase comprises the following steps: moving the robot along a first route from a finish point to a start point of the operational phase; performing measurements associated with the at least one wheel as the robot is moved along the first route, wherein the measurements are indicative of the movement and orientation of the robot; and performing first depth measurements with the depth detection system whilst the robot is moving along the first route; wherein once the robot has reached the start point and the operational phase is commenced the method comprises the following steps: moving the robot along a second route under its own control, wherein the second route is intended to be the reverse of the first route from the start point to the finish point by using the measurements associated with the at least one wheel; performing second depth measurements with the depth detection system at positions where first depth measurements were taken along the first route; comparing the first and second depth measurements; determining from differences between the first and second depth measurements errors in the positioning of the robot, wherein errors correspond to differences between the robot's actual position and the robot's expected position based on the measurements associated with the at least one wheel; and correcting the position of the robot based on the errors by moving it to the expected position.

Inventors:
RATHBONE KEVIN (GB)
NORTHWOOD EWEN (GB)
GRAUERS OLIVER (GB)
SYKES NATASHA (GB)
Application Number:
PCT/GB2023/050153
Publication Date:
August 03, 2023
Filing Date:
January 24, 2023
Assignee:
GAMA HEALTHCARE LTD (GB)
International Classes:
G05D1/02
Foreign References:
US20190171210A12019-06-06
US20070198144A12007-08-23
US20210373559A12021-12-02
US20150271991A12015-10-01
Attorney, Agent or Firm:
MEISSNER BOLTE UK (GB)
Claims:
Claims

1. A method of controlling movement of a UV cleaning and/or disinfection robot, the robot comprising one or more wheels, and a depth detection system, the method including a learning phase followed by an operational phase wherein the learning phase comprises the following steps: moving the robot along a first route from a finish point to a start point of the operational phase; performing measurements associated with the at least one wheel as the robot is moved along the first route, wherein the measurements are indicative of the movement and orientation of the robot; and performing first depth measurements with the depth detection system whilst the robot is moving along the first route; wherein once the robot has reached the start point and the operational phase is commenced the method comprises the following steps: moving the robot along a second route under its own control, wherein the second route is intended to be the reverse of the first route from the start point to the finish point by using the measurements associated with the at least one wheel; performing second depth measurements with the depth detection system at positions where first depth measurements were taken along the first route; comparing the first and second depth measurements; determining from differences between the first and second depth measurements errors in the positioning of the robot, wherein errors correspond to differences between the robot's actual position and the robot's expected position based on the measurements associated with the at least one wheel; and correcting the position of the robot based on the errors by moving it to the expected position.

2. The method of any preceding claim, wherein the depth detection system comprises one of: a LIDAR detection system, a radar system, depth sensing camera, stereoscopic camera, or other suitable apparatus, optionally wherein the depth detection system is a LIDAR detection system, and the first depth measurements are first LIDAR measurements, and the second depth measurements are second LIDAR measurements.

3. The method of any preceding claim, further comprising continuing to move the robot along the second route after the correction of the position of the robot.

4. The method of any preceding claim, further comprising when the error is above a first bound discarding the individual depth detection measurements and/or discarding the determined errors in the positioning of the robot, optionally wherein the first bound is between 0.5m and 0.1m, preferably wherein the first bound is 0.5m.

5. The method of any previous claim, wherein determining the errors in the positioning of the robot comprises determining a first error associated with error in a first direction, and determining a second error associated with error in a second direction, wherein the first and second directions are perpendicular.

6. The method of any previous claim, wherein the measurement associated with the wheel comprises at least one of the number of revolutions of the wheel, the orientation of the wheel, and/or the number of revolutions at a specific orientation.

7. The method of any preceding claim wherein using the measurements associated with the wheel to move along the second route, comprises using the measurements associated with the wheel taken whilst the robot was being moved along the first route to follow the first route in reverse.

8. The method of any preceding claim, wherein the measurement associated with the wheel is converted into x, y and theta positions, wherein x and y define the plane of the ground on which the robot is located and theta defines an angle relative to a reference orientation, optionally wherein the start or finish position is denoted by the origin.

9. The method of any preceding claim, wherein the depth detection system is positioned at a height of 1.8m to 2.2m above the base of the robot, preferably wherein the depth detection system is positioned at a height of 1.9m above the base of the robot.

10. The method of any preceding claim, wherein each depth measurement uses the robot's current position at the time of the measurement as the origin in the coordinate system for each depth measurement.

11. The method of any preceding claim, wherein when a first depth measurement is taken it is stored and associated with a position based on the measurements associated with the wheel.

12. The method of any preceding claim, wherein first depth measurements are taken at one of: regular time intervals, and/or regular distance intervals, and/or regular angular intervals, optionally wherein in the case of regular distance intervals said distance interval is 3cm, further optionally wherein in the case of regular angular intervals the angular interval is between 0.1 and 0.2 radians.

13. The method of any preceding claim, wherein the method comprises powering the robot during the learning phase, with power provided by batteries housed on/within the robot itself, optionally wherein during the learning phase a power cable is stored on/within the robot.

14. The method of any preceding claim, wherein the method comprises powering the robot during the operational phase, with power provided from an external source, optionally wherein the learning phase is powered by the batteries, and the operational phase is powered by an external power source.

15. The method of claim 14, wherein powering the robot during the operational phase comprises plugging a power cord into an external power source, optionally wherein the external power source is mains electricity.

16. The method of any previous claim, wherein a depth reading of the second depth measurements detects an object, and wherein the robot uses the depth reading of the second LIDAR measurements to determine a vector representing the orientation and/or position of the object relative to the robot.

17. The method of claim 16, wherein the depth reading of the second depth measurements is compared to a first depth reading of the first depth measurements, wherein said readings are taken at the same expected position, further comprising comparing the position of the object in the depth reading of the second LIDAR measurements to the closest two data points from the first depth reading of the first LIDAR measurements, the closest two data points forming candidate positions for the expected position of the object.

18. The method of claim 17, wherein the two candidate positions are joined by a line to form a candidate surface on which the object detected is expected to sit.

19. The method of claim 18, comprising determining a surface unit normal vector indicating the perpendicular direction to the surface.

20. The method of claim 19, further comprising using the depth readings to determine an offset vector leading from the object as measured in the depth reading of the second depth measurement to a candidate position of the object as measured in the first depth reading of the first depth measurement, preferably wherein said candidate position is the closest of the two candidate positions identified.

21. The method of claim 20, comprising determining a single measurement error by multiplying the offset vector by the surface unit normal vector.

22. The method of claim 21, comprising multiplying the single measurement error by the surface unit normal vector to determine a single measurement error vector.

23. The method of claim 21 or 22, wherein the multiplication is an inner product.

24. The method of claim 22 or 23, wherein the single measurement error vector is discarded if the magnitude of the single measurement error vector is above a first bound, preferably wherein said first bound is between 0.5m and 0.1m, further preferably wherein the first bound is 0.5m.

25. The method of any preceding claim, wherein data points of the depth measurements in opposite directions are treated as being in alignment.

26. The method of claim 22, wherein each depth reading comprises a plurality of data points, each data point representing an object, and each object associated with a single measurement error vector, the method further comprising summing the squares of the plurality of single measurement error vectors, and taking the square root of the result to give a major direction.

27. The method of claim 26, wherein the vectors are represented as complex numbers.

28. The method of claim 26 or 27, further comprising determining the perpendicular of the major direction to determine the minor direction.

29. The method of claim 26, 27 or 28, further comprising projecting each single measurement error unit vector along the major direction, and summing the absolute value of said projections to determine a major direction evidence level.

30. The method of claim 28, further comprising projecting each single measurement unit vector error along the minor direction, and summing the absolute value of said projections to determine a minor direction evidence level.

31. The method of claim 29 or 30, further comprising weighting the use of the single measurement unit error vectors prior to the summation of the projections.

32. The method of claim 31, wherein the weighting decreases the value of a first single measurement error unit vector in the summation as the first single measurement error vector approaches the first bound.

33. The method of claim 32, wherein the weighting is performed by a cos²θ function, optionally wherein θ is the modulus of the error vector divided by the first bound and multiplied by π/2.

34. The method of claim 29, further comprising determining a major direction correction vector by summing the projection of each single measurement error vector along the major direction, and dividing the resulting total by the major direction evidence level.

35. The method of claim 30, further comprising determining a minor direction correction vector by summing the projection of each single measurement error vector along the minor direction, and dividing the resulting total by the minor direction evidence level.

36. The method of claims 29 or 30, further comprising comparing the major or minor direction evidence level with a major or minor direction threshold, and if above the threshold continuing with the determination of the major or minor direction correction vector.

37. The method of claims 34 and 35, further comprising applying the major direction correction vector and the minor direction correction vector to the offset vectors of the LIDAR reading.

38. The method of any preceding claim, further comprising determining a fitness score to determine how accurately the orientation of the route is being tracked, optionally using an equation in which the error is the modulus of the single measurement error vectors after the correction vectors have been applied, and the threshold is a pre-set value, preferably 0.5m.

39. The method of any preceding claim, further comprising rotating the measurements from the depth detection system by an angle and simulating a second depth reading in the second depth measurements.

40. The method of claim 39, further comprising repeating the method steps set out in claims 16-39, to determine a fitness score for the second depth reading in the second depth measurements and determining whether the fitness score is reduced, and then using the major direction correction vector and minor direction correction vector for the measurement corresponding to the lowest fitness score.

41. The method of any preceding claim, wherein comparing the first depth measurements and the second depth measurements comprises comparing the perpendicular distance, and/or the vector between the robot and the object detected in the first depth measurements at the position, with the perpendicular distance and/or the vector between the robot and the object detected in the second depth measurements at the same expected position based on the measurements associated with the wheels, optionally wherein when the difference between the first and second depth measurements is greater than a set value the depth measurements are discarded, further optionally wherein the set value is 50cm.

42. The method of any preceding claim, wherein the expected position is the position on the second route at which the first depth measurement was taken, whilst following the first route.

43. The method of any preceding claim, wherein moving the robot along a first route is performed under manual control.

44. The method of any preceding claim, wherein the depth detection system comprises a two-dimensional LIDAR detection system.

45. The method of any preceding claim, wherein in the operational phase the robot is configured to abort moving along the second route in the event that it comes into contact with an object, optionally wherein coming into contact with an object is detected by one or more of: a bumper sensor positioned at the perimeter of the base of the robot, detected deviation of the robot from its expected path by more than 50mm, an accelerometer, a gyroscope, or by the depth detection system determining that the robot has tilted due to the contact.

46. The method of any preceding claim, wherein in the learning phase the robot communicates a warning to the user if it is moving along the first route at a velocity above a set limit, optionally wherein the set limit is 1 metre per second.

47. The method of any preceding claim, wherein after the learning phase is complete, but before the operational phase is commenced, the robot provides a confidence score based on the first depth measurements as to whether the first route is sufficient to clean the room during the operational phase, optionally wherein the confidence score is based on the first depth measurements determining if the robot was positioned close enough to each of the objects detected by the depth detection system during movement along the first route.

48. The method of any preceding claim, wherein correcting the position comprises moving the robot and iteratively determining whether the robot is closer to the expected position, and moving the robot again until the robot is at the expected position, or within a set distance of the expected position.

49. The method of any preceding claim, wherein during the operational phase the expected position is determined using the measurements associated with the at least one wheel, and interpolating between the locations at which such measurements were taken during movement along the first route in the learning phase.

50. The method of any preceding claim, wherein the robot comprises UV lamps, wherein the UV lamps are configured to be activated during the operational phase.

51. A method of controlling movement of a UV cleaning and/or disinfection robot, the robot comprising one or more wheels, and a depth detection system, the method including a learning phase followed by an operational phase in which the route of the learning phase is followed, wherein the learning phase comprises the following steps: moving the robot manually along a first route from a finish point to a start point of the operational phase; performing measurements associated with the at least one wheel as the robot is moved along the first route, wherein the measurements are indicative of the movement and orientation of the robot; and performing first depth measurements with the depth detection system whilst the robot is moving along the first route.

52. The method of claim 51, further comprising the features of any of claims 2-50.

53. A method of controlling movement of a UV cleaning and/or disinfection robot, the robot comprising one or more wheels, and a depth detection system, the method including an operational phase in which a previously learnt first route is followed, wherein the operational phase comprises the following steps: moving the robot along a second route under its own control, wherein the second route is intended to be the reverse of the first by using the measurements associated with the at least one wheel; performing depth measurements with the depth detection system and comparing this with data associated with the first route; determining from the comparison errors in the positioning of the robot, wherein the errors correspond to differences between the robot's actual position and the robot's expected position based on the measurements associated with the at least one wheel; and correcting the position of the robot based on the errors by moving it to the expected position.

54. The method of claim 53, further comprising the features of any of claims 2-50.

55. A UV cleaning and/or disinfection robot comprising at least one wheel, and a LIDAR detection system, wherein the robot is configured to perform the method of any of claims 1-54.

56. A non-tangible machine readable media configured to perform the method of any of claims 1-54.

57. A UV cleaning and/or disinfection robot comprising at least one wheel, and a depth detection system, wherein the robot is configured to measure its position on the basis of sensors associated with the at least one wheel to determine the orientation and revolutions of the wheel as the robot traverses a route, and wherein the depth detection system is configured to determine the accuracy of the robot's positioning on the basis of the wheel sensor measurement, wherein the wheel sensor uses a global coordinate system such that the wheel sensor measurements are made using the same co-ordinate axis, whereas the depth detection system uses a new co-ordinate system for each measurement.

Description:
Methods for Controlling the Movement of a Robot for Disinfection

Technical Field

The present invention relates to a robotic, mobile apparatus for treating an enclosed space, in particular a room in a hospital, for example by disinfecting same using ultraviolet (UV-C) radiation. The robot may be for the purpose of cleaning and/or disinfecting. In particular, the robot may be configured to perform UV cleaning and/or disinfection of the enclosed space.

Background

Infectious microbe strains that are resistant to antibiotics and chemical disinfectants are a growing threat to the general public. Hospitals and clinics are particularly prone to harbouring these dangerous microbes, which pose an elevated danger to patients that have weakened immune systems. To counter these microbes in a manner which prevents their acquiring resistance, the use of apparatus which irradiates them with high frequency ultraviolet radiation (UV-C) is becoming more common. This is because electric lamps that produce UV-C radiation with wavelengths in the range 2800 Å to 1500 Å (optionally 2537 Å produced by mercury vapour lamps) are now widely available. Such bulbs have been incorporated into hospital building structures in order that they can be operated remotely in empty rooms to sterilize the room. They have also been incorporated into transportable, free-standing apparatus for placement into rooms requiring disinfection.

In present embodiments these UV-C bulbs (or other disinfectant apparatus) are mounted to a robot that can traverse a room. Such a robot can be taught a route, and may then autonomously attempt to follow that route. There is therefore a need to accurately track a route around an enclosed area to ensure that there is sufficient irradiation of all intended surfaces. A lack of irradiation may lead to surfaces not being sufficiently disinfected. It is noted that as the irradiation is dangerous to the operator it is necessary for the robot to track along the intended route without being assisted by an operator (unless the operator is wearing sufficient protective apparatus). As such it is the aim of this application to improve the accuracy with which a robot tracks along an intended disinfection pathway.

Statements of Invention

According to a first aspect there is provided a method of controlling movement of a UV cleaning and/or disinfection robot, the robot comprising one or more wheels, and a depth detection system, the method including a learning phase followed by an operational phase wherein the learning phase comprises the following steps: moving the robot along a first route from a finish point to a start point of the operational phase; performing measurements associated with the at least one wheel as the robot is moved along the first route, wherein the measurements are indicative of the movement and orientation of the robot; and performing first depth measurements with the depth detection system whilst the robot is moving along the first route; wherein once the robot has reached the start point and the operational phase is commenced the method comprises the following steps: moving the robot along a second route under its own control, wherein the second route is intended to be the reverse of the first route from the start point to the finish point by using the measurements associated with the at least one wheel; performing second depth measurements with the depth detection system at positions where first depth measurements were taken along the first route; comparing the first and second depth measurements; determining from differences between the first and second depth measurements errors in the positioning of the robot, wherein errors correspond to differences between the robot’s actual position and the robot’s expected position based on the measurements associated with the at least one wheel; and correcting the position of the robot based on the errors by moving it to the expected position. This method is particularly advantageous because it has been found that the use of the measurements associated with the wheel alone is not sufficiently accurate. In either the learning or operational phase the wheels may slip or skid, such as experiencing microslippage, and this may not be replicable in the other phase. Therefore, by simply tracking the inverse of the wheel-based measurements, errors may be introduced in the operational phase. The above method allows these errors to be detected and corrected during the tracking of the route, such that the robot tracks the intended route with greater accuracy. This is important for disinfecting applications where errors in tracking the route may lead to unsuitable levels of disinfection. This method therefore improves disinfection efficacy and prevents the robot from hitting walls and obstacles.

Optionally, the method comprises powering the robot during the learning phase, with power provided by batteries housed on/within the robot itself. This is highly advantageous as it may allow the user to perform the learning phase more efficiently, without cord tangling, trip hazards etc.

Optionally, during the learning phase a power cable is stored on/within the robot. This may reduce complications during the learning phase. This may also enable the use of the power cable during the operational phase.

Optionally, the method comprises powering the robot during the operational phase, with power provided from an external source. This is highly advantageous as the robot may perform disinfection making use of UV-C producing bulbs. The use of batteries in the operational phase may be highly restrictive for the range of the robot. Therefore, the use of the power cable during disinfection allows longer/more complex routes to be taken by the robot.

Optionally, powering the robot during the operational phase comprises plugging a power cord into an external power source. This may provide a simple means of connection to an external power source.

Optionally, the external power source is mains electricity. This may allow for the robot to be used in a variety of locations where mains power is provided.

Optionally wherein the learning phase is powered by the batteries, and the operational phase is powered by an external power source. This is highly advantageous as this dual functionality enables ease of use in both the operational and learning phases and therefore provides considerable benefits to the system as a whole in terms of efficiency of use.

Optionally, the depth detection system comprises one of: a LIDAR detection system, a radar system, depth sensing camera, stereoscopic camera, or other suitable apparatus.

Optionally, the depth detection system is a LIDAR detection system, and the first depth measurements are first LIDAR measurements, and the second depth measurements are second LIDAR measurements. LIDAR may be particularly advantageous for use in the above method because it is accurate, and may be efficiently processed by the robot.

Optionally, the method further comprising continuing to move the robot along the second route after the correction of the position and/or orientation of the robot. This allows the operational phase to continue after the correction has taken place.

Optionally, the method further comprising when the error is above a first bound discarding the individual depth detection measurements and/or discarding the determined errors in the positioning of the robot. This is advantageous as the error being above a set bound may be indicative that the first and second depth measurements may be of differing objects, or otherwise may not be directly comparable.

Optionally, wherein the first bound is between 0.5m and 0.1m, preferably wherein the first bound is 0.15m. This may be particularly useful for determining if the differences between the depth measurements are accurate reflections of the robot’s position.

Optionally, wherein the error being above the first bound is indicative of the robot sensing a different surface during disinfection than it did in routing. This may allow the measurements to be ignored, or for this to cause the pausing of the routing in some embodiments.

Optionally, wherein determining the errors in the positioning of the robot comprises determining a first error associated with error in a first direction, and determining a second error associated with error in a second direction, wherein the first and second directions are perpendicular. This is advantageous as it is common that errors may be predominantly in one of these directions and therefore separating the directions allows the true error in position to be determined, by using different divisors in the two directions.

Optionally, wherein the measurement associated with the wheel comprises at least one of the number of revolutions of the wheel, the orientation of the wheel, and/or the number of revolutions at a specific orientation. This allows the wheel to be tracked in a data efficient manner.

Optionally, wherein using the measurements associated with the wheel to move along the second route, comprises using the measurements associated with the wheel taken whilst the robot was being moved along the first route to follow the first route in reverse. The reversing of the robot allows the user to simply teach the robot the route without repositioning the robot at the beginning.

Optionally, wherein the measurement associated with the wheel is converted into x, y and theta positions, wherein the x and y axes define the plane of the ground on which the robot is located and theta defines an angle relative to a reference orientation. This allows all of the information regarding the wheels to be stored and used efficiently.

Optionally, wherein the start or finish position is denoted by the origin. This allows the wheel measurements to use a single co-ordinate set which is efficient for data use, and allows for the wheel measurements to be used continuously. It also allows for interpolation between measurements.
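
By way of non-limiting illustration, the conversion of wheel measurements into x, y and theta positions might be sketched as follows. This is a minimal sketch in Python: the wheel circumference value and the pose-update rule are assumptions made for illustration, not details taken from the application.

```python
import math

# Illustrative value only; the application does not specify a wheel size.
WHEEL_CIRCUMFERENCE = 0.30  # metres of travel per revolution

def integrate_wheel_measurement(pose, revolutions, wheel_angle):
    """Advance an (x, y, theta) pose by one wheel measurement.

    revolutions: wheel revolutions since the previous sample.
    wheel_angle: the wheel's recorded orientation, in radians,
                 relative to the reference orientation.
    """
    x, y, theta = pose
    distance = revolutions * WHEEL_CIRCUMFERENCE
    heading = theta + wheel_angle
    # Move `distance` along the current heading; the heading becomes
    # the new theta.
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)

# The start position is the origin, as suggested above.
pose = (0.0, 0.0, 0.0)
for revolutions, wheel_angle in [(2.0, 0.0), (1.5, math.pi / 2)]:
    pose = integrate_wheel_measurement(pose, revolutions, wheel_angle)
```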

Optionally, wherein the depth detection system is positioned at a height of 1.8m to 2.2m above the base of the robot, preferably wherein the depth detection system is positioned at a height of 1.9m above the base of the robot. This is advantageous as most obstacles are positioned lower, so the measurements will have less noise (which may be referred to as there being fewer occlusions), and may be predominantly based on wall positioning around the room. Having a depth detection system above the height of the operator may also be advantageous as the operator is unlikely to obstruct the measurement.

Optionally, wherein each depth measurement uses the robot’s current position at the time of the measurement as the origin in the co-ordinate system for each depth measurement. This is advantageous as each depth measurement has its own coordinate system. This means the depth detection and wheel based measurements work with separate co-ordinate systems. This is advantageous as this means they are independent of one another - and therefore the depth detection system works independently as a check of the accuracy of the wheel based measurements. Moreover, this reduces the processing and memory power needed for the robot (as opposed to attempting to translate each LIDAR measurement to a global co-ordinate system).

Optionally, wherein when a first depth measurement is taken it is stored and associated with a position based on the measurements associated with the wheel. This is advantageous in the operational phase as the associated wheel measurement may then be used to determine the expected position of the robot.

Optionally, wherein first depth measurements are taken at one of: regular time intervals, and/or regular distance intervals, and/or regular angular intervals.

Optionally, wherein in the case of regular distance intervals said distance interval is 3cm. This allows the robot’s position to be tracked accurately, whilst limiting the amount of data recorded and stored.

Optionally, wherein in the case of regular angular intervals the angular interval is 0.1 radians. This allows the robot’s position to be tracked accurately, whilst limiting the amount of data recorded and stored.

Optionally, wherein a LIDAR reading of the second LIDAR measurements detects an object, and wherein the robot uses the LIDAR reading of the second LIDAR measurements to determine a vector representing the orientation and/or position of the object relative to the robot.

Optionally, wherein the LIDAR reading of the second LIDAR measurements is compared to a LIDAR reading of the first LIDAR measurements, wherein said readings are taken at the same expected position, further comprising comparing the position of the object in the LIDAR reading of the second LIDAR measurements to the closest two data points from the LIDAR reading of the first LIDAR measurements, the closest two data points forming candidate positions for the expected position of the object. The two candidate positions are advantageous as two is sufficient to draw a straight line between them. Moreover in some embodiments it has been found that this number of candidate points is sufficient, and any more increases the processing required exponentially without meaningfully increasing the accuracy of the result.

Optionally, wherein the two candidate positions are joined by a line to form a candidate surface on which the object detected is expected to sit. The candidate positions are assumed to form a surface (as they are normally close to one another). It is also assumed that this same surface has been detected in the second LIDAR reading. Therefore, the position of the object in the second LIDAR reading should sit on this same surface (and so somewhere on the line between the candidate positions).

Optionally, the method comprising determining a surface unit normal vector indicating the perpendicular direction to the surface.

Optionally, the method further comprising using the LIDAR readings to determine an offset vector leading from the object as measured in the LIDAR reading of the second depth measurement to a candidate position of the object as measured in the LIDAR reading of the first depth measurement, preferably wherein said candidate position is the closest of the two candidate positions identified. This gives an indication of the distance and direction by which the measured value differs from the expected value.

Optionally, the method comprising determining a single measurement error by multiplying the offset vector by the surface unit normal vector. The single measurement error is advantageous as it is indicative of the error associated with the measurement within the LIDAR reading. Each measurement within the LIDAR reading may have an associated single measurement error.

Optionally, wherein the multiplication is an inner product.

Optionally, the method comprising multiplying the single measurement error by the surface unit normal vector to determine a single measurement error vector.

Optionally, wherein the single measurement error vector is discarded if the single measurement error vector’s magnitude is above a first bound, preferably wherein said first bound is between 0.5m and 0.1m, further preferably wherein the first bound is 0.15m.
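
The chain of steps set out above (two candidate positions, the candidate surface and its unit normal, the offset vector, the inner product, and the discard bound) might be sketched as follows. This is a minimal Python illustration; the function name and the assumption of two-dimensional (x, y) data points are illustrative only.

```python
import math

def single_measurement_error_vector(point, first_reading, first_bound=0.15):
    """Per-point error for one data point of the second reading.

    point: (x, y) of an object seen in the second (operational) reading.
    first_reading: list of (x, y) data points from the learning-phase
                   reading taken at the same expected position (assumes
                   at least two distinct points).
    Returns the single measurement error vector, or None if discarded.
    """
    # The two closest learning-phase points form the candidate positions.
    a, b = sorted(first_reading, key=lambda p: math.dist(p, point))[:2]
    # The candidate surface is the line joining them; take its unit normal.
    sx, sy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(sx, sy)
    if length == 0.0:
        return None  # degenerate candidates; no surface can be formed
    nx, ny = -sy / length, sx / length
    # Offset vector from the second-reading point to the closest candidate.
    ox, oy = a[0] - point[0], a[1] - point[1]
    # Single measurement error: inner product of offset and unit normal.
    error = ox * nx + oy * ny
    # Single measurement error vector: the scalar error along the normal.
    ex, ey = error * nx, error * ny
    if math.hypot(ex, ey) > first_bound:
        return None  # likely a different surface was measured; discard
    return (ex, ey)
```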

Optionally, wherein LIDAR measurements in opposite directions are treated as being in alignment. This allows the data to be used together such that only two directions of errors are recorded.

Optionally, wherein each LIDAR reading comprises a plurality of data points, each data point representing an object, and each object associated with a single measurement error vector, the method further comprising summing the squares of the complex numbers representing the plurality of single measurement error vectors, and taking the square root of the result to give a major direction. This is advantageous as the major direction is indicative of the direction in which the error is estimated with the greatest confidence.
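
The use of complex numbers here might be sketched as below. Squaring each error vector doubles its angle, which is why errors pointing in opposite directions reinforce one another rather than cancelling (consistent with opposite directions being treated as in alignment); the square root of the sum then recovers a single axis. The sketch assumes the error vectors are supplied as (x, y) pairs.

```python
import cmath

def major_direction(error_vectors):
    """Estimate the major direction from single measurement error vectors.

    Each (ex, ey) error vector is treated as a complex number. Squaring
    doubles its angle, so opposed errors reinforce; the square root of
    the sum recovers a single dominant axis, returned as a unit complex
    number (an arbitrary default axis is returned if there is no error).
    """
    total = sum(complex(ex, ey) ** 2 for ex, ey in error_vectors)
    axis = cmath.sqrt(total)
    return axis / abs(axis) if axis != 0 else complex(1.0, 0.0)
```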

Optionally, the method further comprising determining the perpendicular of the major direction to determine the minor direction. This is advantageous as the minor direction is perpendicular to the major direction and so may have a lower confidence associated with the estimated error in this direction.

Optionally, the method further comprising projecting each single measurement error vector along the major direction, and summing said projections to determine a major direction evidence level. This gives an indication of confidence in the estimated error in this direction.

Optionally, the method further comprising projecting each single measurement error vector along the minor direction, and summing said projections to determine a minor direction evidence level. This gives an indication of confidence in the estimated error in this direction.

Optionally, the method further comprising weighting the use of the single measurement error vectors prior to the summation of the projections. This is advantageous as some larger errors may be less reliable, as there is a greater chance that these are actually measurements of different surfaces from those measured during the learning phase and so weighting them as such may prohibit them from skewing the overall results.

Optionally, wherein the weighting decreases the value of a first single measurement error vector in the summation as the first single measurement error vector approaches the first bound.

Optionally, wherein the weighting is performed by a cos²θ function, optionally wherein θ is the modulus of the error vector divided by the first bound and multiplied by π/2.
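
A minimal sketch of this weighting, assuming the cos²θ form with θ as defined above and the 0.15m first bound mentioned previously:

```python
import math

def error_weight(error_modulus, first_bound=0.15):
    """cos^2(theta) weight for a single measurement error vector,
    with theta = (|error| / first_bound) * pi / 2. The weight falls
    smoothly from 1 at zero error to 0 at the first bound; errors
    beyond the bound are assumed to have been discarded already."""
    theta = (error_modulus / first_bound) * (math.pi / 2)
    return math.cos(theta) ** 2
```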

Optionally, determining a major direction correction vector by summing the projection of each single measurement error vector along the major direction, and dividing the resulting total by the major direction evidence level. This is advantageous as it is the amount the robot may correct by in the major direction. As set out below this may be only one candidate correction, or may be used immediately.

Optionally, determining a minor direction correction vector by summing the projection of each single measurement error vector along the minor direction, and dividing the resulting total by the minor direction evidence level. This is advantageous as it is the amount the robot may correct by in the minor direction. As set out below this may be only one candidate correction, or may be used immediately.

Optionally, the method further comprising comparing the major or minor direction evidence level with a major or minor direction threshold, and if above the threshold continuing with the determination of the major or minor direction correction vector.
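
Putting the projection, weighting, evidence-level and threshold steps together, a correction along one axis (major or minor) might be sketched as follows. The interface is an assumption made for illustration: the axis is a unit complex number and the weighting function is passed in.

```python
def direction_correction(axis, error_vectors, evidence_threshold, weight_fn):
    """Correction along one axis (major or minor).

    axis: unit direction as a complex number.
    error_vectors: iterable of (ex, ey) single measurement error vectors.
    weight_fn: weighting of each error by its modulus, e.g. error_weight.
    Returns the correction vector as a complex number, or None when the
    evidence level does not clear the threshold.
    """
    evidence = 0.0
    total = 0.0
    for ex, ey in error_vectors:
        e = complex(ex, ey)
        projection = (e * axis.conjugate()).real  # signed length along axis
        w = weight_fn(abs(e))                     # down-weight near the bound
        evidence += w * abs(projection)           # evidence level
        total += w * projection
    if evidence <= evidence_threshold:
        return None  # not enough evidence to correct in this direction
    return (total / evidence) * axis
```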

Optionally, the method further comprising adding the major direction correction vector and the minor direction correction vector to the offset vectors of the LIDAR reading.

Optionally, the method further comprising determining a fitness score to determine how accurately the route is being tracked, optionally using an equation in which the error is the modulus of the offset vector after the correction vectors have been added, and the threshold is a pre-set value, preferably 0.5m. This is advantageous as it allows the candidate robot orientation to be assessed, alongside alternative candidate robot orientations, to determine which is most suitable.

Optionally, the method further comprising rotating the measurements from the LIDAR detection system by an angle and performing a second LIDAR reading in the second LIDAR measurements. This allows a second correction vector to be determined at this new angle.

Optionally, the method further comprising repeating the method steps for determining the correction vector, to determine a fitness score for the second LIDAR reading in the second LIDAR measurements and determining whether the fitness score is reduced, and then using the major direction correction vector and minor direction correction vector for the measurement corresponding to the lowest fitness score.
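
The rotate-and-re-score search might be sketched as below. The fitness equation itself is not reproduced in this text, so `evaluate` is an assumed callable returning a (fitness score, major correction, minor correction) tuple for a given reading; the candidate angles are likewise illustrative.

```python
import cmath

def best_orientation(second_reading, candidate_angles, evaluate):
    """Try rotated copies of the second reading and keep the result
    with the lowest fitness score, as described above.

    second_reading: list of (x, y) data points.
    candidate_angles: rotations to test, in radians.
    evaluate: assumed callable, reading -> (fitness, major, minor).
    """
    def rotated(points, angle):
        r = cmath.exp(1j * angle)
        return [((complex(x, y) * r).real, (complex(x, y) * r).imag)
                for x, y in points]

    results = [evaluate(rotated(second_reading, a))
               for a in candidate_angles]
    return min(results, key=lambda result: result[0])
```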

Optionally, wherein comparing the first depth measurements and the second depth measurements comprises comparing the perpendicular distance, and/or the vector between the robot and the object detected in the first depth measurements at the position, with the perpendicular distance and/or the vector between the robot and the object detected in the second depth measurements at the same expected position based on the measurements associated with the wheels.

Optionally, wherein when the difference between the first and second depth measurements is greater than a set value the depth measurements are discarded. Optionally, wherein the set value is 50cm.

Optionally, wherein the expected position is the position on the second route at which the first depth measurement was taken, whilst following the first route.

Optionally, wherein moving the robot along a first route is performed under manual control.

Optionally, wherein the depth detection system comprises a two-dimensional LIDAR detection system. This allows a single slice along the x-y plane to be viewed by the LIDAR. This reduces the data required to be processed, but has been found to be reliable.

Optionally, wherein in the operational phase the robot is configured to abort moving along the second route in the event that it comes into contact with an object. This is advantageous because if this occurs it is likely the routing has gone wrong. As such the operator may reroute the robot from scratch through the learning phase.

Optionally, wherein coming into contact with an object is detected by one or more of: a bumper sensor positioned at the perimeter of the base of the robot, detected deviation of the robot from its expected path by more than 50mm, an accelerometer, a gyroscope, or by the depth detection system determining that the robot has tilted due to the contact.

Optionally, wherein in the learning phase the robot communicates a warning to the user if it is moving along the first route at a velocity above a set limit. This may be advantageous as the route may not be accurately learnt above a set velocity in some embodiments, and therefore communicating a warning lets the operator become aware that said limit is exceeded. Optionally, wherein the set limit is 1 metre per second.

Optionally, wherein after the learning phase is complete, but before the operational phase is commenced, the robot provides a confidence score based on the first depth measurements as to whether the first route is sufficient to clean the room during the operational phase. This may be advantageous as routes that are not suitable (such as by not providing enough disinfection of a part of the floor surface) may be mitigated against.

Optionally, wherein the confidence score is based on the first depth measurements determining if the robot was positioned close enough to each of the objects detected by the depth detection system during movement along the first route. This may be advantageous to ensure that the border of the enclosed space receives sufficient disinfection.

Optionally, wherein correcting the position comprises moving the robot and iteratively determining whether the robot is closer to the expected position, and moving the robot again until the robot is at the expected position, or within a set distance of the expected position. This may be advantageous as it reduces the processing required by the robot. Moreover, for small correction errors this may be an efficient method of correcting position errors.
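
A minimal sketch of this iterative correction, assuming a hypothetical robot interface with `position()` and `step_towards()` methods and an assumed stopping tolerance:

```python
import math

def correct_position(robot, expected, tolerance=0.02):
    """Iteratively move towards the expected position and check progress.

    robot: hypothetical interface; position() -> (x, y) and
           step_towards(target) performing a small corrective move.
    tolerance: assumed acceptable distance from the expected position.
    """
    while math.dist(robot.position(), expected) > tolerance:
        before = math.dist(robot.position(), expected)
        robot.step_towards(expected)
        # Confirm the robot actually got closer before iterating again.
        if math.dist(robot.position(), expected) >= before:
            break  # no progress; handling this case is not specified here
```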

Optionally, wherein during the operational phase the expected position is determined using the measurements associated with the at least one wheel, and interpolating between the locations at which such measurements were taken during movement along the first route in the learning phase. This allows a continuous prediction of the expected position of the robot at all times.

Optionally, wherein the robot comprises UV lamps, wherein the UV lamps are configured to be activated during the operational phase. The UV lamps may be advantageous for disinfection. Alternatively, a hydrogen peroxide vapour (HPV) fogging device may be used for disinfection.

According to a second aspect there is provided a method of controlling movement of a UV cleaning and/or disinfection robot, the robot comprising one or more wheels, and a depth detection system, the method including a learning phase followed by an operational phase in which the route of the learning phase is followed, wherein the learning phase comprises the following steps: moving the robot manually along a first route from a finish point to a start point of the operational phase; performing measurements associated with the at least one wheel as the robot is moved along the first route, wherein the measurements are indicative of the movement and orientation of the robot; and performing first depth measurements with the depth detection system whilst the robot is moving along the first route. This method efficiently teaches the robot a route that it may then track autonomously. This learning technique allows the wheel measurements to be complemented by depth measurements as a fall-back in the case of errors while following the route. This method may be used with any of the optional features listed above.

According to a third aspect there is provided a method of controlling movement of a UV cleaning and/or disinfection robot, the robot comprising one or more wheels, and a depth detection system, the method including an operational phase in which a previously learnt first route is followed, wherein the operational phase comprises the following steps: moving the robot along a second route under its own control, wherein the second route is intended to be the reverse of the first by using the measurements associated with the at least one wheel; performing depth measurements with the depth detection system and comparing this with data associated with the first route; determining from the comparison errors in the positioning of the robot, wherein the errors correspond to differences between the robot’s actual position and the robot’s expected position based on the measurements associated with the at least one wheel; and correcting the position of the robot based on the errors by moving it to the expected position. This method allows the robot to follow a pre-taught route effectively without significant deviation, and is therefore considered advantageous. Any of the optional features described above may be used in combination with this method. It is noted that a previously learnt route may include one learnt in the learning phase outlined in other aspects, or alternatively a pre-programmed route.

According to a fourth aspect there is provided a UV cleaning and/or disinfection robot comprising at least one wheel, and a LIDAR detection system, wherein the robot is configured to perform the method of the aspects described above.

According to a fifth aspect there is provided a non-tangible machine readable media configured to perform the method of any of the aspects described above.

According to a sixth aspect there is provided a UV cleaning and/or disinfection robot comprising at least one wheel, and a depth detection system, wherein the robot is configured to measure its position on the basis of sensors associated with the at least one wheel to determine the orientation and revolutions of the wheel as the robot traverses a route, and wherein the depth detection system is configured to determine the accuracy of the robot’s positioning on the basis of the wheel sensor measurement, wherein the wheel sensor uses a global coordinate system such that the wheel sensor measurements are made using the same co-ordinate axis, whereas the depth detection system uses a new co-ordinate system for each measurement. The optional features described above in relation to other aspects may also be used in combination with this aspect.

Brief Description of Figures

Figure 1 shows a disinfection robot.

Figure 2 shows a flow diagram of a process to be performed during the route learning phase.

Figure 3 shows a flow diagram of a process to be performed during the disinfection phase.

Figure 4a shows an exemplary LIDAR scan at a position during the learning phase.

Figure 4b shows an exemplary LIDAR scan at a position during the disinfection phase.

Figure 5 shows the exemplary LIDAR scans of Figures 4a and 4b superimposed to illustrate the differences between them.

Figure 6 shows a diagram illustrating a single point from the LIDAR scan during the disinfection phase and the two closest candidate matches from the learning phase.

Figure 7 shows a flow diagram detailing the determination of a fitness score for the position of the robot.

Detailed Description of Figures

Figure 1 shows a robot for disinfection of an enclosed space. The robot may be particularly advantageous at disinfecting, or cleaning hospital rooms or wards. The robot comprises one or more wheels, and a depth detection system. The robot may also comprise a processor and/or a memory element.

The wheels allow the robot to move along the floor of the enclosed area. The robot is configured to be pushed so that the wheels track a user’s preferred route during the learning phase. The orientation of the wheels, and the number of revolutions of travel at said orientation is configured to be recorded by a processor and/or memory element.

It is noted that in some embodiments said wheel measurements may use a global coordinate system such that the entire route track is recorded using the same coordinate system.

The depth detection system may comprise any suitable system for determining the distance that objects are situated away from the robot. For example, a LIDAR detection system, a radar system, depth sensing camera, stereoscopic camera, or other suitable apparatus may be used. Advantageously the depth detection system may comprise a LIDAR system. A LIDAR system may be particularly accurate, and may also simplify the calculations needed. It is noted that objects include items such as walls which form the enclosed space. It is noted that in the remaining description the depth detection system and associated scans may be referred to as LIDAR scans, and a LIDAR system. However, any suitable system such as those listed above may be used in place of a LIDAR scan/system.

LIDAR is an acronym for “light detection and ranging” or “laser imaging, detection and ranging”. LIDAR involves targeting an object with a light source (such as a laser) and measuring the time for the reflected light to return. Some alternate LIDAR systems measure the angle of returned light. The two-dimensional LIDAR discussed here relates to light being emitted along a plane, and the reflected light from all directions in said plane being collected. This may be achieved by a rotating turret measuring one direction at a time as it rotates to detect the entire plane with each rotation. Each reflection comprises a data point (as shown in Figures 4a, 4b and 5), and is associated with an object detected by the LIDAR scan. The other depth detection measurements listed above provide similar data (for example radar and sonar use radio waves and sound waves to similar effect). During the time the robot traverses the route (in both the learning and operational phases) multiple LIDAR scans will be made. Each scan is composed of a plurality of data points. It is noted that each scan may use a local co-ordinate system (i.e. the origin is different for each scan) in some embodiments.
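
A minimal sketch of how one turret rotation might be turned into such a scan, each return becoming a data point in that scan’s own local co-ordinate system (origin at the robot’s position at the time of measurement); the angular step and the interface are illustrative assumptions:

```python
import math

def reading_to_points(ranges, angular_step=0.1):
    """Convert one turret rotation into (x, y) data points.

    ranges: one measured distance per turret direction, in the order
            the turret sweeps the plane; None marks directions with
            no return. The 0.1 radian step is illustrative only.
    """
    return [(r * math.cos(i * angular_step),
             r * math.sin(i * angular_step))
            for i, r in enumerate(ranges) if r is not None]
```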

It may be particularly advantageous to use a 2-dimensional LIDAR detection system that detects the presence of objects in a single plane. For example, this may be the horizontal plane approximately parallel with the floor of the enclosed space. This is because a two-dimensional depth detection system allows a single plane to be captured. This plane comprises a plurality of data points, and as such possesses detail of the position of objects within the plane. Whilst a three-dimensional detection system may comprise more data points it has been found that this greatly increases the complexity of sorting, ordering and using said data, whilst not demonstrably improving the performance of the robot’s tracking of the intended route. Alternatively, a 3-D LIDAR scan may be used, or a 2-D LIDAR scan may be used in the vertical direction. Such scans are likely to still allow the robot to function, but may increase the data processing required.

It may be advantageous for the depth detection system to be raised to a height above the floor, for example to around the height of a fully grown adult male (around 1.7m-2.2m). Generally, rooms contain a great deal of obstacles at low level which may make matching LIDAR scans more difficult. Positioning the LIDAR detection system above the level of most obstacles/furniture makes this task simpler. It is also advantageous for the depth detection apparatus to be situated above the height of the operator so that the operator does not interfere in the measurements.

The robot may also comprise a disinfection apparatus. This may comprise a UV-C device, or an alternative device such as a hydrogen peroxide vapour (HPV) fogging device. Other suitable disinfection devices may be used. It is noted that the present claims are directed to a method for controlling the movement of the robot, and therefore the disinfection system is not essential to the claims.

Figure 1 shows a robot 1 for use in disinfecting a space. Shown in Figure 1 are a robot base comprising a floor element 2, a stand 3, and handle elements 4 for a user to grasp whilst pushing the robot 1 through the space to be disinfected. Also shown is a disinfection column 5. This disinfection column contains one or more lamps/bulbs 6 that are configured to emit radiation at a wavelength to enable disinfection. For example, these may be UV-C emitting bulbs. The floor element 2 may be approximately 2m from the top of the disinfection column 5. Therefore, the top of this disinfection column 5 may be higher than the average human operator (and indeed the majority of human operators). At the top of the disinfection column 5 is situated the LIDAR detector. This is shown in the form of a turret 7 that is configured to rotate around and continuously perform LIDAR measurements of the plane in which the turret is sitting (and the plane it is rotating through). Due to the height of the disinfection column 5 the user is unlikely to interfere with the LIDAR measurements.

The embodiment shown in Figure 1 may comprise a power cord for connecting the robot to an external power source such as mains electricity. The power cord may be referred to as an electrical connection, connector, cable, conducting member or any such other equivalent term. The robot may also comprise a number of batteries or electrical cells or the like. These may be located locally on the robot such that the batteries travel with the robot as the robot traverses, or is guided, along a route. Indeed, in some particularly advantageous embodiments the robot may comprise both a power cord and a number of batteries.

Mains power may be that of any country. Common examples include 230V at 50Hz (UK), 120V at 60Hz (USA), 220V at 50Hz (China) and other suitable voltages/frequencies. The power may also be transferred from an external power source with direct current, as opposed to alternating current.

During use of the robot, as set out below, the robot may be guided through a learning phase in which the robot is taught a route by a user, and an operational phase in which the robot performs disinfection along said route. The method of routing in both phases is set out below, and in the appended claims and the statements of invention.

During the learning phase the robot may operate using battery power alone. This means that during the learning phase a user may be able to guide the robot along a preferred route without the power cord of the robot being plugged in to mains electricity. The power cord may be wound so as to be kept contained within/on the robot. This may make the guiding process simpler for the user as there is less risk of a cord becoming tangled, and this reduces the trip hazard to the user.

Prior to commencement of the operational phase the user may plug the power cord of the robot into the mains electricity (or other suitable electricity supply). The mains power, or equivalent, will then power the robot during the operational phase. During the operational phase the robot makes use of UV-C producing bulbs to aid with the disinfection. The use of mains electricity enables longer routes that take more time to traverse as batteries may not be able to power the robot for the duration of the disinfection.

The ability to use battery power during the learning phase, and then the ability to plug the robot into the mains for the operational phase is highly advantageous as this makes the learning phase simpler for the user, and maximises the length of time for which the operational phase can run.

Figure 2 shows a flow diagram illustrating steps taken during the learning or routing phase. In the learning phase the operator pushes the robot along a path. It is intended that the robot will then reverse back along the path, following it faithfully, during the operational phase. The operator therefore selects the path carefully so that optimal disinfection is achieved. This route creation may be aided by other instruments informing the operator of an optimal path, or it may be determined by the operator themselves. As the operator pushes the robot along the intended path the process in the flow diagram of Figure 2 is carried out.

As the learning phase begins a first waypoint is recorded. The robot’s position and orientation according to the dead reckoning system of the wheels is recorded (for the first measurement this is likely to be (0, 0, 0) in the x, y and optional theta coordinates). A scan (for example a LIDAR scan) is then taken. As the robot moves, the robot’s position and orientation are updated based on the wheel measurements. If the robot has moved more than a set threshold, then a second LIDAR scan at a second waypoint is recorded. This is associated with the robot’s position and orientation. The robot then continues. If the robot’s orientation has changed by more than a set angular amount (or it has moved along a second length axis, if using Cartesian co-ordinates) then a further LIDAR scan is recorded and is associated with the robot’s new position and orientation. This continues, with the robot making further waypoints with associated LIDAR scans as it moves by set distance amounts, or turns by set angular amounts, until the user begins disinfection. At this point the learning phase ends (an illustrative sketch of this waypoint-recording logic is given after the following paragraph).

Figure 3 shows a flow chart illustrating process steps taken during the operational phase. This begins with the user starting disinfection. The user is typically remote from the robot for safety reasons at the time at which disinfection begins. This is because the disinfection means is often hazardous to the operator (such as UV-C, which may be carcinogenic in large doses).
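
Purely by way of illustration, the waypoint-recording logic of the learning phase described above may be sketched in code as follows. This is a minimal sketch only, not the claimed method as such: the helper callables (get_wheel_odometry_pose, take_lidar_scan, disinfection_started), the threshold values and the representation of poses as (x, y, theta) tuples are all assumptions made for the purposes of illustration.

    import math

    DIST_THRESHOLD = 0.5                  # metres moved before a new waypoint (example value)
    ANGLE_THRESHOLD = math.radians(15.0)  # heading change before a new waypoint (example value)

    def record_route(get_wheel_odometry_pose, take_lidar_scan, disinfection_started):
        # Record the first waypoint, typically at pose (0, 0, 0).
        x, y, theta = get_wheel_odometry_pose()
        waypoints = [((x, y, theta), take_lidar_scan())]
        while not disinfection_started():
            nx, ny, ntheta = get_wheel_odometry_pose()
            moved = math.hypot(nx - x, ny - y)
            # Wrap the heading change into the range [-pi, pi].
            turned = abs((ntheta - theta + math.pi) % (2.0 * math.pi) - math.pi)
            if moved > DIST_THRESHOLD or turned > ANGLE_THRESHOLD:
                # Record a new waypoint: dead-reckoned pose plus a LIDAR scan.
                waypoints.append(((nx, ny, ntheta), take_lidar_scan()))
                x, y, theta = nx, ny, ntheta
        return waypoints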

As the robot traverses the path from the end point back to the start point the wheel sensor measures the movement of the wheels. The wheel movements should accurately correspond to the inverse of those recorded during the learning phase. That is, the number of revolutions of the wheels and the orientation should be consistent with the inverse of those during the learning phase. However, due to slippage etc., despite the wheels performing these same wheel movements the actual position of the robot may differ slightly from the expected position (i.e. the corresponding position during the learning phase). Therefore, as the robot moves an updated estimated position and orientation of the robot is determined based on the wheel measurements. Between waypoints a target position and orientation is determined by interpolating between the waypoints. When the robot believes it has arrived at a waypoint it performs a LIDAR scan. This belief may be based on the amount of time or distance the robot believes it would have taken to reach the next waypoint.
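
The interpolation of a target pose between waypoints might, for example, be performed as below. This is a sketch under the assumption that poses are (x, y, theta) tuples and that the fraction of the segment covered is estimated from the wheel measurements; the claims do not prescribe any particular interpolation scheme.

    import math

    def interpolate_pose(pose_a, pose_b, fraction):
        # fraction is the proportion of the segment between the two waypoints
        # that the robot believes it has covered, based on dead reckoning.
        ax, ay, atheta = pose_a
        bx, by, btheta = pose_b
        x = ax + fraction * (bx - ax)
        y = ay + fraction * (by - ay)
        # Interpolate the heading through the shortest angular difference.
        dtheta = (btheta - atheta + math.pi) % (2.0 * math.pi) - math.pi
        return (x, y, atheta + fraction * dtheta)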

Once the robot has performed the LIDAR scan it compares this scan with that taken during the learning phase at the same expected position (of course, due to slippage etc., the actual position may be different, but the robot’s estimated position is that at which said scan was taken during the learning phase). The robot then determines differences between the scan taken during the learning phase and the scan taken during the operational phase. If the robot were in exactly the same position then the scans would be virtually identical (there would likely still be some minor differences due to noise in both scans). However, even very small changes in the position or orientation of the robot will affect the scans, and so differences will arise between them.
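
The description does not prescribe how such differences are quantified. Purely by way of illustration, one simple metric is the mean distance from each point of one scan to its nearest neighbour in the other, sketched below under the assumption that scans are lists of (x, y) points; the function name scan_difference is hypothetical.

    import math

    def scan_difference(scan_a, scan_b):
        # Mean nearest-neighbour distance from the points of scan_a to scan_b.
        # A brute-force sketch; a real implementation might use a spatial index.
        total = 0.0
        for ax, ay in scan_a:
            total += min(math.hypot(ax - bx, ay - by) for bx, by in scan_b)
        return total / len(scan_a)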

The robot will then determine from differences between the first and second depth measurements errors in the positioning of the robot, wherein errors correspond to differences between the robot’s actual position and the robot’s expected position based on the measurements associated with the at least one wheel. This determination may be performed in any number of ways. For example, the process described with respect to Figure 7 may be used (this is an entirely optional method and other alternative ways are also described herein). Alternatively, the robot may simply perform a trial and error routine. As the position error is likely to be small, the robot may move slightly in one direction and then perform the LIDAR scan once more. If the differences have increased then it is likely the robot has moved in the wrong direction, whereas if the differences between the LIDAR scans have decreased then this is likely the correct direction of movement. This trial and error may be continued until the robot finds a position with a minimum level of difference between the LIDAR scans, or it may be continued until the difference is below a pre-set threshold.
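
The trial and error routine might be sketched as follows, reusing the illustrative scan_difference metric above. The move callable, step size and stopping threshold are hypothetical assumptions; the loop simply keeps probe moves that reduce the scan difference and undoes those that increase it.

    def trial_and_error_correction(move, take_lidar_scan, reference_scan,
                                   scan_difference, step=0.02, threshold=0.01):
        # Probe small moves in each direction; keep a move only if it reduces
        # the difference between the current scan and the learning phase scan.
        directions = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]
        best = scan_difference(take_lidar_scan(), reference_scan)
        improved = True
        while improved and best > threshold:
            improved = False
            for dx, dy in directions:
                move(dx, dy)
                diff = scan_difference(take_lidar_scan(), reference_scan)
                if diff < best:
                    best = diff
                    improved = True
                else:
                    move(-dx, -dy)  # the difference increased: undo the probe move
        return best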

After determining the error in the position of the robot, the position of the robot may be corrected. This may be achieved in a number of ways. For example, the robot may move directly to the correct position and may then continue following the intended route. Alternatively, the position correction may be factored into the direction of the robot. In this way the robot may never reach the expected position at the point at which it should have been situated; rather it will converge back onto the expected route as it moves along, so as to re-join the intended path. For example, over the course of 1 m of movement the robot may add a correction vector of approximately 0.1 m so that it converges with the expected route. This continues until the end of the route is reached.
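
The gradual convergence strategy may be sketched as below, where at each commanded step a fraction of the outstanding correction (for example 0.1 m per 1 m travelled, as above) is blended into the motion. The function name and the tuple representations are illustrative assumptions only.

    import math

    def blended_step(route_step, correction, blend_ratio=0.1):
        # route_step  -- (dx, dy) motion that would follow the learnt route
        # correction  -- (cx, cy) outstanding positional correction
        # blend_ratio -- correction applied per unit of travel (e.g. 0.1 m per 1 m)
        step_len = math.hypot(route_step[0], route_step[1])
        cx, cy = correction
        corr_len = math.hypot(cx, cy)
        apply_len = min(corr_len, blend_ratio * step_len)
        if corr_len > 0.0:
            scale = apply_len / corr_len
            applied = (cx * scale, cy * scale)
        else:
            applied = (0.0, 0.0)
        adjusted = (route_step[0] + applied[0], route_step[1] + applied[1])
        remaining = (cx - applied[0], cy - applied[1])
        return adjusted, remaining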

It is noted that the robot may determine that the positional error is too great and exceeds a pre-defined threshold. If the robot is off course by too great an amount, in some embodiments, it may be configured to stop the operational phase. In such embodiments the robot would then need to be re-programmed in the learning phase by an operator as if from scratch. This is because the desired disinfection would not be achieved and therefore it is no longer safe to continue. There would also be increased risk of the robot colliding with an obstacle in such an eventuality.

It is also noted that if the positional correction determined is deemed to be below a pre-set threshold the robot may discard the correction. This is because a correction based upon just a few measurements would have a high uncertainty associated with it.

Figure 4A shows an example of a 2D LIDAR scan (it is noted that any depth detection means may be used) taken during the learning phase at a first position. This shows a number of points that have been detected. These correspond to the walls of the enclosed space, as in this example no other objects are detected. Of course, in other embodiments or enclosed spaces additional objects may also be detected. This is the simplest result of the scan, used for ease of discussion below. Figure 4B shows an example of a 2D LIDAR scan taken during the operational phase at a position believed to be the same as the first position. This also shows a number of points that have been detected. These correspond to the walls of the enclosed space, as in this example no other objects are detected. Of course, in other embodiments or enclosed spaces additional objects may also be detected.

Figure 5 shows a superimposed view of Figures 4A and 4B to illustrate that there is a slight difference between the scans. It is noted that actual differences between scans are likely to be significantly smaller, and this difference is shown only to aid the understanding of the reader. However, every pair of scans will likely have slight differences between one another. If these errors are allowed to propagate then the errors will increase in magnitude and may eventually lead to the robot significantly deviating from its intended path. This in turn may lead to areas receiving a lower dose of radiation than intended. This could of course be dangerous in hospital settings, and may be disadvantageous in other settings such as hospitality.

It is noted that these differences may be made up both of noise (for example, even if the robot is in a near identical position it is possible that slightly different points of the same surface may return signals in a second scan as opposed to a first scan, and there may also be sensor noise), as well as of differences due to an actual difference in the position of the robot. Differences in the actual position may include both orientation and translation changes. If the orientation of the robot is different, it is likely the two scans will be rotations of one another (with the addition of noise). If there is a translation error in terms of the x or y axis along the plane of the floor, then the points returned in the scan will be slightly different. Depending on the shape of the walls, and the position of the robot within the room, the points returned will change accordingly.

Figure 6 focuses on one point detected in the LIDAR scan during the operational phase. The robot’s task is to determine which of the points in the corresponding scan (i.e. that taken at the same expected position) in the learning phase this selected point corresponds with. In this example, the two points from the learning phase scan that lie closest to the selected point are taken as the best match. The robot’s position is also shown in Figure 6.

Figure 6 shows a linear path drawn between the two closest learning phase points (this linear path may be recorded as a vector). A line is also drawn between the selected point of the operational phase and the linear path; this line is perpendicular to the linear path and represents the single measurement error vector. This is a direction and magnitude that represents a likely error in the position of the selected point (due to an error in the position of the robot). An arrow is also drawn from one of the two points from the learning phase to the selected point from the operational phase. This is known as the offset vector and demonstrates the offset between the point in the learning phase and the point in the operational phase.

The offset vector can be determined as the positions of the selected point in the learning phase and the selected point in the operational phase are both known. The vector taking the linear path between the two candidate points from the learning phase is also known, and from this the direction perpendicular to the linear path is also known. This is the linear path unit normal vector (as it is the vector normal to the linear path, and has a magnitude equal to one).

From these known parameters the single measurement error vector (and the single measurement error unit vector) may be determined. Taking the dot product (inner product) of the offset vector with the linear path unit normal vector gives the single measurement error magnitude. Multiplying this magnitude by the linear path unit normal vector then gives the single measurement error vector.
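
For a single detected point, this computation may be sketched as below, with points represented as (x, y) tuples. The function name is hypothetical, the nearest-neighbour search that selects the two learning phase points is assumed to have been performed already, and the rejection threshold anticipates the discussion in the following paragraph.

    import math

    def single_measurement_error(op_point, learn_a, learn_b, reject_above=0.5):
        # learn_a and learn_b are the two closest learning phase points to the
        # selected operational phase point op_point.
        px, py = learn_b[0] - learn_a[0], learn_b[1] - learn_a[1]
        length = math.hypot(px, py)
        if length == 0.0:
            return None  # degenerate pair: no linear path can be drawn
        # Unit normal to the linear path between the two learning phase points.
        nx, ny = -py / length, px / length
        # Offset vector from a learning phase point to the selected point.
        ox, oy = op_point[0] - learn_a[0], op_point[1] - learn_a[1]
        # The dot product with the unit normal gives the signed error magnitude.
        magnitude = ox * nx + oy * ny
        if abs(magnitude) > reject_above:
            return None  # points likely belong to different objects; discard
        # Scaling the unit normal by the magnitude gives the error vector.
        return (magnitude * nx, magnitude * ny)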

It is noted that the single measurement error vector magnitude is unlikely to be very large. Indeed, if it is over a certain threshold it is likely to be the result of a measurement error, rather than a positioning error of the robot. Therefore, in some embodiments, if the magnitude of the single measurement error vector is above 500 mm then it is assumed that the points being compared belong to different objects, and the single measurement error vector is therefore discarded. In other embodiments this threshold may be between 100 mm and 600 mm.

As each scan (for example each LIDAR scan) generates a multitude of data points, it is the collective use of these data points that enables the accuracy of the position of the robot to be verified. For example, a single measurement error vector may be large while the other single measurement error vectors associated with the other points in the scan are small. It is therefore advantageous to consider at least a portion of the points in the scan together as a whole. A naive approach (which may nevertheless be used in some embodiments) to combining all the single measurement error vectors would be to sum them all and divide the result by the number of vectors. However, this may distort the result. For example, consider a case in which most vectors are parallel to the x-axis, with the remainder parallel to the y-axis. Only those parallel to the x-axis provide evidence of the positional error in the x direction, and similarly for the y direction, so the two should be handled independently: adding the x-direction vectors and dividing by the (large) number of vectors in that direction, then separately adding the y-direction vectors and dividing by the (small) number of vectors in that direction. This clearly gives quite a different result to the naive approach. In reality the vectors do not neatly align with the axes, and are in general not aligned with just two perpendicular directions, so the two directions in which to independently assess the vectors must themselves be determined. To determine these directions, vectors that are offset by 180° from one another need to be treated as representing the same direction. Therefore, in some embodiments it is advantageous to let each single measurement error unit vector be represented by a complex number. This may then be squared such that measurements with a 180 degree offset have the same value.

To then view the results of the scan as a collective, the sum of the squares of these complex numbers representing the single measurement error unit vectors may be taken. The square root of this total may then be taken. This gives a major direction. This major direction is the direction in which the scan gives the greatest evidence level (and hence the direction in which there is the least uncertainty in the estimated robot positional error). The normal to this major direction is the minor direction. As this is perpendicular to the major direction, any additional errors that do not have a constituent part in the major direction can be mapped onto this minor direction. These directions are the directions in which the least uncertainty and greatest uncertainty, respectively, of the estimated robot positional error lie. These are the two independent directions in which the single measurement error vectors may be analysed.
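
The determination of the major and minor directions may be sketched as follows, as one plausible implementation of the description above; the function name and the list-of-tuples input are assumptions.

    import cmath
    import math

    def major_minor_directions(error_vectors):
        # error_vectors is a list of (ex, ey) single measurement error vectors.
        total = 0j
        for ex, ey in error_vectors:
            mag = math.hypot(ex, ey)
            if mag == 0.0:
                continue
            unit = complex(ex / mag, ey / mag)
            total += unit * unit  # squaring maps 180-degree-offset vectors together
        major = cmath.sqrt(total)  # direction of greatest evidence
        if abs(major) > 0.0:
            major /= abs(major)    # normalise to a unit direction
        minor = major * 1j         # the perpendicular (minor) direction
        return major, minor

Squaring the complex representation means that two unit vectors pointing in opposite directions reinforce rather than cancel, which is exactly the 180° identification described above.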

It may then be useful to determine an evidence level for each of the major and minor directions. This can be done by projecting each single measurement error unit vector along the major direction. This measures the component of each single measurement error unit vector along the major direction. The magnitudes of these projections are then summed. This provides the evidence level in the major direction. An additional optional step may involve the moderation of the projections along the major direction. If a threshold is used, all measurements over the threshold will be thrown out before the summation (as described above). However, a data point just shy of the threshold may still be viewed as potentially erroneous (though it is not discarded). Therefore, measurements just below the threshold may be moderated so that they do not unnecessarily skew the results. For example, a cos²θ function may be used to moderate the projections. Other functions may be used, such as any function used in damping models (when suitably modified for the specific application), for example logarithmic functions, negative exponents or other such functions. The moderated projections may then be summed to determine the evidence level in the major direction. The same process may take place by projecting each single measurement error unit vector along the minor direction and repeating the process to determine an evidence level in the minor direction. The moderation is likewise an optional step in the determination of the evidence level in the minor direction.
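
An evidence level along a direction may then be computed as sketched below, where direction is a unit complex number as returned by the sketch above. The cos²-based moderation weight is only one plausible reading of the moderation described (the text permits other damping functions), and the 0.5 m threshold mirrors the rejection threshold above; both are assumptions for illustration.

    import math

    def evidence_level(error_vectors, direction, threshold=0.5):
        # Project each single measurement error unit vector onto the direction
        # and sum the magnitudes of the projections, moderating measurements
        # whose error magnitude approaches the rejection threshold.
        evidence = 0.0
        for ex, ey in error_vectors:
            mag = math.hypot(ex, ey)
            if mag == 0.0:
                continue
            unit_proj = (ex * direction.real + ey * direction.imag) / mag
            weight = math.cos(0.5 * math.pi * min(mag / threshold, 1.0)) ** 2
            evidence += weight * abs(unit_proj)
        return evidence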

To determine the position correction for the robot, each single measurement error vector may be projected once more along the major direction, and these projections summed, with moderation if moderation has been used in determining the major direction evidence level. The total may then be divided by the evidence level in the major direction. This determines a magnitude of position correction in the major direction; together with the major direction itself this gives a major direction correction vector. This same process may be used in the minor direction to determine the minor direction correction vector (by projecting the single measurement error vectors along the minor direction, optionally with moderation, summing, and dividing by the minor direction evidence level).
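
A direction’s correction component may then be determined as sketched below, dividing the sum of moderated projections of the error vectors by that direction’s evidence level (from the sketch above); the combined correction is the sum of the major and minor components. As before, the function name and moderation weight are illustrative assumptions.

    import math

    def direction_correction(error_vectors, direction, evidence, threshold=0.5):
        # Sum the (moderated) projections of the single measurement error
        # vectors along the direction, then divide by the evidence level.
        total = 0.0
        for ex, ey in error_vectors:
            mag = math.hypot(ex, ey)
            if mag == 0.0:
                continue
            proj = ex * direction.real + ey * direction.imag
            weight = math.cos(0.5 * math.pi * min(mag / threshold, 1.0)) ** 2
            total += weight * proj
        magnitude = total / evidence if evidence > 0.0 else 0.0
        # The correction vector along this direction.
        return (magnitude * direction.real, magnitude * direction.imag)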

This position correction may simply be used by the robot. Alternatively, a fitness score may be taken from the correction (before the correction is used by the robot), and alternative rotational corrections may also be evaluated in the same way. The rotational correction with the lowest associated fitness score may then be determined to be the most accurate rotational error, and, along with its associated positional error, used to change the position and orientation of the robot. The robot may then either be navigated directly through this positional change, or navigate back onto the route by adding this error change into its expected path so as to re-join the expected path at a later stage.

The fitness score may be calculated as a function of an error and a threshold, where the error is the modulus of the offset vector after the correction vectors have been applied (i.e. the major and minor direction correction vectors), and the threshold is a pre-set value, preferably 0.5 m. The measurements from the depth detection device may then be rotated by an angle and the process repeated. The fitness scores may then be compared; the lower the fitness score, the better the rotational correction.

This method is summarised in Figure 7, which shows various steps that may be employed to determine a correction vector. The first step involves determining the offset vector between each detected point in a LIDAR scan and the closest point in the LIDAR scan taken during the learning phase at the same expected position, and determining the single measurement error vector for each point in the LIDAR scan. The next step comprises determining the major direction and the minor direction, these being the directions in which the least and greatest uncertainty in the positional error lie. The next step comprises determining the evidence level for each of the major and minor directions. This is followed by using the above to determine the combined correction vector. This may be done by projecting the single measurement error vectors for each point onto the major and minor directions, summing these projections, and dividing the resulting sum by the evidence level for each respective direction. The next step is optional and comprises determining a fitness score for the correction vector. The following steps then comprise comparing the correction vector to other correction vectors determined using the same process, iterating by rotating the measurements from the depth detection apparatus by a set angle, until the optimal correction vector is found. It is noted that the steps relating to the fitness score are optional and the correction vector may be used immediately to correct the position of the robot.
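
The iteration over candidate rotational corrections might be orchestrated as below. This is a sketch only: the compute_correction and fitness_score callables stand in for the correction pipeline and the (optional) fitness calculation described above, rather than fixing any particular formula, and the candidate angles are illustrative assumptions.

    import math

    def best_rotational_correction(scan, compute_correction, fitness_score,
                                   candidate_angles=None):
        # Try a set of candidate rotations of the operational phase scan,
        # compute a correction for each, and keep the rotation with the
        # lowest fitness score (lower is better, as described above).
        if candidate_angles is None:
            candidate_angles = [math.radians(a) for a in (-2.0, -1.0, 0.0, 1.0, 2.0)]
        best = None
        for angle in candidate_angles:
            rotated = [(x * math.cos(angle) - y * math.sin(angle),
                        x * math.sin(angle) + y * math.cos(angle)) for x, y in scan]
            correction = compute_correction(rotated)
            score = fitness_score(rotated, correction)
            if best is None or score < best[0]:
                best = (score, angle, correction)
        return best  # (fitness score, rotation angle, positional correction)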

It will be appreciated from the discussion above that the embodiments shown in the Figures are merely exemplary, and include features which may be generalised, removed or replaced as described herein and as set out in the claims. With reference to the drawings in general, it will be appreciated that the schematic flow diagrams are used to indicate the functionality of the system and apparatus described herein. This functionality may be provided by any part of the system; for example, data storage may be provided by a processor and/or memory element. The depth detection device, or LIDAR device, may be in direct communication with a processor, which in turn may be in direct communication with a memory element, with the depth detection element making the measurement, the processor processing the measurement, and the memory element storing the result. Of course, other suitable arrangements (of which there are many) may be used to achieve this same result. In addition, the processing functionality may also be provided in whole or in part by devices which are supported by the UV cleaning and/or disinfection apparatus. For example, an additional computational device may be in communication with this apparatus and the processor and/or memory functionality may take place external to the robot itself. It will be appreciated, however, that the functionality need not be divided in this way, and this description should not be taken to imply any particular structure of hardware other than that claimed below. The function of one or more elements shown in the drawings may be further subdivided, and/or distributed throughout the apparatus, and any supported further devices. In some embodiments the function of the one or more elements shown in the drawings may be integrated into a single functional unit.

The above embodiments are to be understood as illustrative examples. Further embodiments are also envisaged. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

In some examples, one or more memory elements can store data and/or program instructions used to implement the operations described herein. Embodiments of the disclosure provide tangible, non-transitory storage media comprising program instructions operable to program a processor to perform any one or more of the methods described and/or claimed herein and/or to provide data processing apparatus as described and/or claimed herein.

The processor of the robot (and any of the methods, activities or instructions outlined herein) may be implemented with fixed logic, such as assemblies of logic gates, or programmable logic, such as software and/or computer program instructions executed by a processor. Other kinds of programmable logic include programmable processors and programmable digital logic (e.g. a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM) or an application specific integrated circuit (ASIC)), or any other kind of digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD-ROMs, magnetic or optical cards, other types of machine-readable media suitable for storing electronic instructions, or any suitable combination thereof. Such data storage media may also provide the data storage of the robot.