

Title:
IMPROVED CONTROL DEVICE
Document Type and Number:
WIPO Patent Application WO/2018/033715
Kind Code:
A1
Abstract:
In accordance with an aspect of the invention there is provided a method of controlling operation of one or more devices in accordance with a usage pattern. The one or more devices may be operatively coupled to a controller, the controller having an observation phase in which usage of the one or more devices by a user is observed, and a subsequent automation phase in which operation of the one or more devices is automated in accordance with the observed usage. The observation phase may comprise: receiving a first input, the first input defining a desired operative state of a first one of the one or more devices; receiving a second input, the second input defining a desired operative state of a second one of the one or more devices; determining if the second input was received within a period of time less than or equal to a predefined threshold time period; and generating the usage pattern in dependence on the second input having been received within a period of time less than or equal to the predefined threshold time period, the usage pattern defining the desired operative state of the one or more devices in accordance with the received inputs. The subsequent automation phase may comprise: controlling operation of the one or more devices in accordance with the usage pattern.

Inventors:
NAMBIAR KRISHNAN RATNAKARAN (GB)
MEMON MOHAMMADSHAHID ABDULSHAKUR (IN)
Application Number:
PCT/GB2017/052395
Publication Date:
February 22, 2018
Filing Date:
August 15, 2017
Assignee:
NAMBIAR KRISHNAN RATNAKARAN (GB)
MEMON MOHAMMADSHAHID ABDULSHAKUR (IN)
International Classes:
G05B15/02
Domestic Patent References:
WO2016057243A12016-04-14
Foreign References:
US20160192461A12016-06-30
Attorney, Agent or Firm:
RICHARDSON, Mark et al. (GB)
Claims:
CLAIMS

1. A method of controlling operation of one or more devices in accordance with a usage pattern, the one or more devices being operatively coupled to a controller, the controller having an observation phase in which usage of the one or more devices by a user is observed, and a subsequent automation phase in which operation of the one or more devices is automated in accordance with the observed usage, the observation phase comprising:

receiving a first input, the first input defining a desired operative state of a first one of the one or more devices;

receiving a second input, the second input defining a desired operative state of a second one of the one or more devices;

determining if the second input was received within a period of time less than or equal to a predefined threshold time period; and

generating the usage pattern in dependence on the second input having been received within a period of time less than or equal to the predefined threshold time period, the usage pattern defining the desired operative state of the one or more devices in accordance with the received inputs; and wherein the subsequent automation phase comprises:

controlling operation of the one or more devices in accordance with the usage pattern.

2. The method of claim 1, wherein the observation phase comprises:

capturing a first time coordinate associated with receipt of any one of the first and/or second input, the first time coordinate defining a time of day associated with receipt of the input;

associating the first time coordinate with the generated usage pattern; and wherein the automation phase comprises:

monitoring a second time coordinate associated with a time of day; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the second time coordinate being correlated with the first time coordinate.

3. The method of claim 2, wherein the automation phase comprises:

controlling operation of the one or more devices in accordance with the usage pattern in dependence on a time difference between the second time coordinate and the first time coordinate being less than or equal to a predefined threshold period of time.

4. The method of any preceding claim, wherein the observation phase comprises: receiving a first operative setting associated with any one of the first and/or second input, the operative setting comprising data associated with a characteristic of a location associated with any one or more of the first one and the second one of the one or more devices at a time of receipt of any one of the first and/or second input;

associating the operative setting with the generated usage pattern; and wherein the automation phase comprises:

monitoring a second operative setting associated with the location; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the monitored second operative setting being correlated with the first operative setting.

5. The method of claim 4, wherein the operative setting comprises an environmental state associated with the location of any one or more of the first one and the second one of the one or more devices, and the observation phase comprises: receiving first environmental state data associated with any one of the first and second received input, the first environmental state data comprising data associated with an environmental characteristic associated with the location at a time of receipt of any one of the first and second input;

associating the first environmental state data with the generated usage pattern; and wherein the automation phase comprises:

monitoring second environmental state data associated with an environmental characteristic of the location; and

controlling operation of the one or more devices in accordance with the usage pattern in dependence on the monitored second environmental state data being correlated with the first environmental state data.

6. The method of claim 5, wherein the environmental state data comprises any one or more of:

i. a temperature value;

ii. a brightness value associated with a level of brightness of the location; and

iii. a humidity value associated with the location.

7. The method of any one of claims 4 to 6, wherein the operative setting comprises occupancy information relating to a number of occupants present in the location, and the observation phase comprises:

receiving first occupancy data associated with any one of the first and second input, the occupancy data comprising data associated with a number of occupants present at the location at the time of receipt of any one of the first and second input;

associating the first occupancy data with the generated usage pattern; and wherein the automation phase comprises:

monitoring second occupancy data associated with the location; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the monitored second occupancy data being correlated with the first occupancy data.

8. The method of any preceding claim, wherein the automation phase comprises: determining a current operative state of each one of the one or more devices; generating one or more control signals for changing the current operative state of the one or more devices from its current operative state to the operative state defined in the usage pattern; and

controlling operation of the one or more devices in accordance with the generated one or more control signals.

9. The method of any preceding claim, wherein the observation phase comprises: receiving a plurality of pairs of first and second inputs;

determining if each second input comprised within each pair of first and second inputs was received within a period of time less than or equal to the predefined threshold time period; and

generating a usage pattern for each pair of first and second inputs, in dependence on the second input within each pair having been received within a period of time less than or equal to the predefined threshold time period.

10. The method of claim 9, wherein more than two devices are operatively coupled to the controller, and the observation phase comprises:

determining a final system state associated with each generated usage pattern, the final system state being dependent on the final operative state of each one of the devices operatively coupled to the controller after receipt of the pair of first and second inputs;

associating a weighting variable with the final system state associated with each generated usage pattern, the weighting variable being indicative of a number of times that the final system state has arisen or a frequency with which the final system state has been determined to arise;

defining a usage pattern as its associated final system state in dependence on the weighting variable associated with the usage pattern being greater than or equal to a predefined weighting threshold value.

11. The method of any preceding claim, wherein the observation phase comprises: determining a final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more devices operatively coupled to the controller after receipt of the first input and the second input;

determining an energy consumption value associated with the final system state associated with the usage pattern;

determining an environmental state associated with a location of the one or more devices and the final system state;

varying the energy consumption value by defining an amended final operative state of at least one of the one or more devices, in dependence on the determined environmental state being different to a predefined environmental threshold state;

amending the usage pattern with the amended final operative state of the at least one device; and

wherein the automation phase comprises:

controlling operation of the one or more devices in accordance with the amended usage pattern.

12. The method of claim 11, wherein the observation phase comprises:

reducing the energy consumption value by defining an amended final operative state of at least one of the one or more devices, the amended final operative state being associated with a lower energy consumption value, in dependence on the determined environmental state being greater than the predefined environmental threshold state.

13. The method of claim 11, wherein the observation phase comprises: increasing the energy consumption value by defining an amended final operative state of at least one of the one or more devices, the amended final operative state being associated with a higher energy consumption value, in dependence on the determined environmental state being less than the predefined environmental threshold state.

14. The method of any one of claims 11 to 13, wherein the one or more devices comprise one or more illumination devices, the environmental state relates to an ambient brightness value associated with a location of the one or more illumination devices; and the observation phase comprises:

determining the final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices operatively coupled to the controller after receipt of the first input and the second input;

determining an energy consumption value associated with the final system state associated with the usage pattern;

determining a first ambient brightness value associated with the location of the one or more illumination devices and the final system state;

varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, in dependence on the determined first ambient brightness value being different to a predefined ambient brightness threshold value;

amending the usage pattern with the amended final operative state of the at least one illumination device; and

wherein the automation phase comprises:

controlling operation of the one or more illumination devices in accordance with the amended usage pattern.

15. The method of claim 14, wherein varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, comprises the steps of:

varying the brightness setting of the at least one illumination device by a predefined increment;

measuring a second ambient brightness value associated with the brightness setting of the at least one illumination device varied by the predefined increment; and determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and iteratively repeating the steps of varying the brightness value by the predefined increment and measuring the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

16. The method of claim 14 or 15, wherein varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, comprises the steps of:

increasing the brightness setting of the at least one illumination device by a predefined increment in dependence on the first ambient brightness value being less than the predefined ambient brightness threshold value;

measuring a second ambient brightness value associated with the brightness setting of the at least one illumination device increased by the predefined increment; and

determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and iteratively repeating the steps of increasing the brightness value by the predefined increment and measuring the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

17. The method of any one of claims 14 or 15, wherein varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, comprises the steps of:

decreasing the brightness setting of the at least one illumination device by a predefined increment in dependence on the first ambient brightness value being greater than the predefined ambient brightness threshold value;

measuring a second ambient brightness value associated with the brightness setting of the at least one illumination device decreased by the predefined increment; and

determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and iteratively repeating the steps of decreasing the brightness value by the predefined increment and measuring the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

18. A controller for controlling operation of one or more devices in accordance with a usage pattern, the one or more devices being operatively coupled to the controller, the controller being configured in use with an observation phase in which usage of the one or more devices by a user is observed, and a subsequent automation phase in which operation of the one or more devices is automated in accordance with the observed usage, the controller comprising:

an input arranged in use to receive a first input, the first input defining a desired operative state of a first one of the one or more devices, and a second input, the second input defining a desired operative state of a second one of the one or more devices;

a processor arranged in use during the observation phase to:

determine if the second input was received within a period of time less than or equal to a predefined threshold time period; and

generate the usage pattern in dependence on the second input having been received within a period of time less than or equal to the predefined threshold time period, the usage pattern defining the desired operative state of the one or more devices in accordance with the received input; and

wherein the controller comprises an output arranged in use during the subsequent automation phase to:

output a control signal for controlling operation of the one or more devices in accordance with the usage pattern.

19. The controller of claim 18, wherein the processor is configured during the observation phase to:

capture a first time coordinate associated with receipt of any one of the first and/or second input, the first time coordinate defining a time of day associated with receipt of the input;

associate the first time coordinate with the generated usage pattern; and wherein the processor is arranged in use during the automation phase to: monitor a second time coordinate associated with a time of day; and the output is arranged in use to:

output the control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on the second time coordinate being correlated with the first time coordinate.

20. The controller of claim 19, wherein the output is arranged in use during the automation phase to:

output a control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on a time difference between the second time coordinate and the first time coordinate being less than or equal to a predefined threshold period of time.

21. The controller of any one of claims 18 to 20, wherein during the observation phase:

the input is arranged in use to receive a first operative setting associated with any one of the first and second input received from one or more sensors present in a location associated with any one of the first one and the second one of the one or more devices, the operative setting comprising data associated with a characteristic of the location at a time of receipt of any one of the first and second input;

the processor is configured in use to associate the operative setting with the generated usage pattern; and

wherein during the automation phase:

the input is configured in use to receive a second operative setting associated with the location from any one of the one or more sensors; and the output is configured in use to output the control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on the received second operative setting being correlated with the first operative setting.

22. The controller of claim 21, wherein the operative setting comprises an environmental state associated with the location of any one or more of the first one and the second one of the one or more devices, and during the observation phase:

the input is arranged in use to receive first environmental state data from the one or more sensors, the environmental state data being associated with any one of the first and second received input, the first environmental state data comprising data associated with an environmental characteristic associated with the location at a time of receipt of any one of the first and second input;

the processor is arranged in use to associate the first environmental state data with the generated usage pattern; and

wherein during the automation phase: the input is arranged in use to receive second environmental state data from the one or more sensors, the second environmental state data being associated with an environmental characteristic of the location; and

the output is configured in use to output the control signal for controlling the one or more devices in accordance with the usage pattern in dependence on the second environmental state data being correlated with the first environmental state data.

23. The controller of claim 22, wherein the environmental state data comprises any one or more of:

i. a temperature value received from at least one thermometer;

ii. an ambient brightness value associated with a level of ambient brightness of the location, received from one or more ambient light sensors; and

iii. a humidity value associated with the location, received from at least one humidity sensor.

24. The controller of any one of claims 21 to 23, wherein the operative setting comprises occupancy information relating to a number of occupants present in the location, and in the observation phase:

the input is arranged in use to receive first occupancy data associated with any one of the first and second input, the occupancy data comprising data associated with a number of occupants present at the location at the time of receipt of any one of the first and second input;

the processor is arranged in use to associate the first occupancy data with the generated usage pattern; and

wherein during the automation phase:

the input is arranged in use to receive second occupancy data associated with the location; and

the output is arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on the received second occupancy data being correlated with the first occupancy data.

25. The controller of any one of claims 18 to 24, wherein in the automation phase: the processor is arranged in use to determine a current operative state of each one of the one or more devices, and to generate one or more control signals for changing the current operative state of the one or more devices from its current operative state to the operative state defined in the usage pattern; and the output is arranged in use to output the one or more generated control signals.

26. The controller of any one of claims 18 to 25, wherein in the observation phase: the input is arranged in use to receive a plurality of pairs of first and second inputs; and

the processor is arranged in use to determine if each second input comprised within each pair of first and second inputs was received within a period of time less than or equal to the predefined threshold time period, and to generate a usage pattern for each pair of first and second inputs, in dependence on the second input within each pair having been received within a period of time less than or equal to the predefined threshold time period.

27. The controller of claim 26, wherein more than two devices are operatively coupled to the controller, and in the observation phase the processor is arranged to:

determine a final system state associated with each generated usage pattern, the final system state being dependent on the final operative state of each one of the devices operatively coupled to the controller after receipt of the pair of first and second inputs;

associate a weighting variable with the final system state associated with each generated usage pattern, the weighting variable being indicative of a number of times that the final system state has arisen or a frequency with which the final system state has been determined to arise; and

define a usage pattern as its associated final system state in dependence on the weighting variable associated with the usage pattern being greater than or equal to a predefined weighting threshold value.

28. The controller of any one of claims 18 to 27, wherein in the observation phase: the input is arranged in use to receive environmental state data from one or more sensors located in a location associated with the one or more devices, the environmental state data comprising data associated with an environmental characteristic of the location after receipt of any one of the first and second input; the processor is arranged in use to: determine a final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more devices operatively coupled to the controller after receipt of the first input and the second input;

determine an environmental state associated with the location and the final system state from the received environmental state data;

determine an energy consumption value associated with the final system state associated with the usage pattern;

determine if the environmental state is equal to a predefined environmental threshold state;

determine an amended final operative state of at least one of the one or more devices associated with a different energy consumption value in dependence on the determined environmental state being different to the predefined environmental threshold state;

amend the usage pattern with the amended final operative state;

and in the automation phase:

the output is arranged in use to output a control signal for controlling operation of the one or more devices in accordance with the amended usage pattern associated with the different energy consumption value.

29. The controller of claim 28, wherein in the observation phase the processor is arranged in use to:

reduce the energy consumption value by determining an amended final operative state of at least one of the one or more devices associated with a lower energy consumption value in dependence on the determined environmental state being greater than the predefined environmental threshold state.

30. The controller of claim 28, wherein in the observation phase the processor is arranged in use to:

increase the energy consumption value by determining an amended final operative state of at least one of the one or more devices associated with a higher energy consumption value in dependence on the determined environmental state being less than the predefined environmental threshold state.

31. The controller of any one of claims 28 to 30, wherein the one or more devices comprise one or more illumination devices, the environmental state relates to an ambient brightness value associated with a location of the one or more illumination devices, and in the observation phase:

the input is arranged in use to receive ambient brightness data from one or more brightness sensors located in a location associated with the one or more devices, the ambient brightness data comprising data associated with an ambient brightness value of the location after receipt of any one of the first and second input;

the processor is arranged in use to:

determine the final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices operatively coupled to the controller after receipt of the first input and the second input;

determine a first ambient brightness value associated with the location and the final system state from the received ambient brightness data;

determine an energy consumption value associated with the final system state associated with the usage pattern;

determine if the first ambient brightness value is equal to a predefined ambient brightness threshold level;

determine an amended final operative state of at least one illumination device associated with a different energy consumption value in dependence on the determined first ambient brightness value being different to the predefined ambient brightness threshold level;

amend the usage pattern with the amended final operative state;

and in the automation phase:

the output is arranged in use to output a control signal for controlling operation of the one or more illumination devices in accordance with the amended usage pattern associated with the different energy consumption value.

32. The controller of claim 31, wherein in the observation phase:

the processor is arranged in use to:

vary the brightness setting of the at least one illumination device by a predefined increment;

determine a second ambient brightness value associated with the brightness setting of the at least one illumination device varied by the predefined increment from ambient brightness data received from the one or more brightness sensors; and iteratively vary the brightness setting of the at least one illumination device by the predefined increment until the second ambient brightness value is equal to the predefined ambient brightness threshold level.

33. The controller of any one of claims 18 to 32 comprising one or more environmental sensors arranged in use to capture environmental data associated with a location of any one or more operatively coupled devices.

34. The controller of any one of claims 18 to 32 arranged to carry out the method of any one of claims 1 to 17.

35. A method of controlling operation of one or more devices in accordance with a usage pattern, the one or more devices being operatively coupled to a controller, the usage pattern having been generated in dependence on an observed usage of the one or more devices by a user and being associated with at least one parameter value, the method comprising:

identifying a user;

receiving the parameter value;

determining if a predefined threshold condition is satisfied by the received parameter value;

selecting a usage pattern from a plurality of available usage patterns, the usage pattern being associated with the predefined threshold condition, and being selected in dependence on the identified user and in dependence on the predefined threshold condition being satisfied; and

controlling operation of the one or more devices in accordance with the selected usage pattern.

36. The method of claim 35, wherein the parameter value comprises a time coordinate associated with a time of day, and the method comprises:

receiving the time coordinate;

determining if the received time coordinate lies within a predefined time period of a predefined threshold time coordinate;

selecting the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined threshold time coordinate, and being selected in dependence on the identified user and in dependence on the received time coordinate lying within the predefined time period; controlling operation of the one or more devices in accordance with the selected usage pattern.

37. The method of claim 35 or claim 36, wherein the parameter value comprises environmental state data associated with an environmental characteristic of a location comprising the one or more devices, and the method comprises:

receiving the environmental state data;

determining if the received environmental state data satisfies a predefined environmental threshold condition;

selecting the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined environmental threshold condition, and being selected in dependence on the identified user and in dependence on the received environmental state data satisfying the predefined environmental threshold condition; and

controlling operation of the one or more devices in accordance with the selected usage pattern.

38. The method of claim 37, wherein the environmental state data comprises any one or more of:

i. a temperature value;

ii. a brightness value associated with a level of brightness of the location; and

iii. a humidity value associated with the location.

39. The method of any one of claims 35 to 38, wherein the parameter value comprises occupancy information relating to a number of occupants present in a location associated with at least one of the one or more devices, and the method comprises:

receiving the occupancy information;

determining if the received occupancy information satisfies a predefined occupancy threshold condition;

selecting the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined occupancy threshold condition, and being selected in dependence on the identified user and in dependence on the received occupancy information satisfying the predefined occupancy threshold condition; and controlling operation of the one or more devices in accordance with the selected usage pattern.

40. The method of any one of claims 35 to 39, comprising:

determining a current operative state of each one of the one or more devices; generating one or more control signals for changing the current operative state of the one or more devices from its current state to the operative state defined in the selected usage pattern; and

outputting the generated one or more control signals to the one or more devices.

41. The method of any one of claims 35 to 40, comprising:

receiving environmental state data associated with a location comprising the one or more devices;

determining a final system state associated with the selected usage pattern, the final system state being dependent on a final operative state of each one of the one or more devices;

determining an energy consumption value associated with the final system state associated with the selected usage pattern;

determining if the received environmental state data satisfies a predefined environmental state threshold;

determining an amended usage pattern associated with a different energy consumption value by varying the final operative state of at least one of the one or more devices in dependence on the received environmental state data not satisfying the predefined environmental state threshold; and

controlling operation of the one or more devices in accordance with the amended usage pattern.

42. The method of claim 41, comprising:

determining an amended usage pattern associated with a lower energy consumption value by selecting a final operative state of at least one of the one or more devices, the final operative state being associated with a lower energy consumption value, in dependence on the received environmental state data being greater than the predefined environmental state threshold.

43. The method of claim 41 or claim 42, wherein the one or more devices comprises one or more illumination devices, the environmental state relates to an ambient brightness value associated with a location of the one or more illumination devices, and the method comprises:

receiving a first ambient brightness value associated with the location comprising the one or more illumination devices;

determining the final system state associated with the selected usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices;

determining the energy consumption value associated with the final system state associated with the selected usage pattern;

determining if the received first ambient brightness value satisfies a predefined ambient brightness threshold value;

determining an amended usage pattern associated with a lower energy consumption value by reducing a brightness value of at least one of the one or more illumination devices in dependence on the received first ambient brightness value being greater than the predefined ambient brightness threshold value; and controlling operation of the one or more illumination devices in accordance with the amended usage pattern.

44. The method of claim 43, comprising the steps of:

reducing the brightness value of at least one of the one or more illumination devices by a predefined increment;

receiving a second ambient brightness value associated with the reduced brightness value of the at least one of the one or more illumination devices; and determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and repeating the steps of reducing the brightness value by the predefined increment and receiving the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

45. A controller for controlling operation of one or more devices in accordance with a usage pattern, the one or more devices being operatively coupled to the controller, the usage pattern having been generated in dependence on an observed usage of the one or more devices by a user and being associated with at least one parameter value, the controller comprising:

an input arranged in use for receiving user data, from which an identity of a user may be determined, and for receiving the parameter value;

a processor arranged in use to: determine if a predefined threshold condition is satisfied by the received parameter value;

select a usage pattern from a plurality of available usage patterns, the usage pattern being associated with the predefined threshold condition, and being selected in dependence on the identified user and in dependence on the predefined threshold condition being satisfied; and

an output arranged in use to output a control signal for controlling operation of the one or more devices in accordance with the selected usage pattern.

46. The controller of claim 45, wherein the parameter value comprises a time coordinate associated with a time of day, and the input is arranged in use to receive the time coordinate;

the processor is arranged in use to:

determine if the received time coordinate lies within a predefined time period of a predefined threshold time coordinate;

select the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined threshold time coordinate, and being selected in dependence on the identified user and in dependence on the received time coordinate lying within the predefined time period; and

the output is arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the selected usage pattern.

47. The controller of claim 45 or claim 46, wherein the parameter value comprises environmental state data associated with an environmental characteristic of a location comprising the one or more devices, the input is arranged in use to receive the environmental state data;

the processor is arranged in use to:

determine if the received environmental state data satisfies a predefined environmental threshold condition;

select the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined environmental threshold condition, and being selected in dependence on the identified user and in dependence on the received environmental state data satisfying the predefined environmental threshold condition; and the output is arranged in use to control operation of the one or more devices in accordance with the selected usage pattern.

48. The controller of claim 47, wherein the environmental state data comprises any one or more of:

i. a temperature value received from one or more temperature sensors;

ii. a brightness value associated with a level of brightness of the location, the brightness value being received from one or more brightness sensors;

iii. a humidity value associated with the location received from one or more humidity sensors.

49. The controller of any one of claims 45 to 48, wherein the parameter value comprises occupancy information relating to a number of occupants present in a location associated with at least one of the one or more devices, the input being arranged in use to receive the occupancy information;

the processor being arranged in use to:

determine if the received occupancy information satisfies a predefined occupancy threshold condition;

select the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined occupancy threshold condition, and being selected in dependence on the identified user and in dependence on the received occupancy information satisfying the predefined occupancy threshold condition; and

the output being arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the selected usage pattern.

50. The controller of any one of claims 45 to 49, wherein the processor is arranged in use to:

determine a current operative state of each one of the one or more devices; generate one or more control signals for changing the current operative state of the one or more devices from its current state to the operative state defined in the selected usage pattern; and the output is arranged in use to output the generated one or more control signals to the one or more devices.

51. The controller of any one of claims 45 to 50, the input being arranged in use to receive environmental state data associated with a location comprising the one or more devices;

the processor is arranged in use to:

determine a final system state associated with the selected usage pattern, the final system state being dependent on a final operative state of each one of the one or more devices;

determine an energy consumption value associated with the final system state associated with the selected usage pattern;

determine if the received environmental state data satisfies a predefined environmental state threshold;

determine an amended usage pattern associated with a different energy consumption value by varying the final operative state of at least one of the one or more devices in dependence on the received environmental state data not satisfying the predefined environmental state threshold; and

the output is arranged in use to output a control signal for controlling operation of the one or more devices in accordance with the amended usage pattern.

52. The controller of claim 51, wherein the processor is arranged in use to determine an amended usage pattern associated with a lower energy consumption value by selecting a final operative state of at least one of the one or more devices, the final operative state being associated with a lower energy consumption value, in dependence on the received environmental state data being greater than the predefined environmental state threshold.

53. The controller of claim 51 or claim 52, wherein the one or more devices comprises one or more illumination devices, the environmental state relates to an ambient brightness value associated with a location of the one or more illumination devices, and the input is arranged in use to receive a first ambient brightness value associated with the location comprising the one or more illumination devices;

the processor is arranged in use to: determine the final system state associated with the selected usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices; determine the energy consumption value associated with the final system state associated with the selected usage pattern;

determine if the received first ambient brightness value satisfies a predefined ambient brightness threshold value;

determine an amended usage pattern associated with a lower energy consumption value by reducing a brightness value of at least one of the one or more illumination devices in dependence on the received first ambient brightness value being greater than the predefined ambient brightness threshold value; and

the output is arranged in use to output the control signal for controlling operation of the one or more illumination devices in accordance with the amended usage pattern.

54. The controller of claim 53, wherein the output is arranged in use to output a control signal for reducing the brightness value of at least one of the one or more illumination devices by a predefined increment;

the input is arranged in use to receive a second ambient brightness value associated with the reduced brightness value of the at least one of the one or more illumination devices;

the processor is arranged in use to determine if the second ambient brightness value is equal to the predefined ambient brightness threshold value; and wherein

the output is arranged in use to output a further control signal for further reducing the brightness value of at least one of the one or more illumination devices by the predefined increment, until the received ambient brightness value is equal to the predefined ambient brightness threshold value.

55. The controller of any one of claims 45 to 54 comprising one or more environmental sensors arranged in use to capture environmental data associated with a location of any one or more operatively coupled devices.

56. The controller of any one of claims 45 to 54 arranged to carry out the method of any one of claims 35 to 44.

57. A method of locating a misplaced object within a physical space, the misplaced object having been misplaced within the physical space by a user, the method comprising:

receiving a query from the user to locate the misplaced object;

analysing a plurality of maps of the physical space, each map defining a location of one or more objects located within the physical space at a time coordinate, the time coordinate defining a point in time;

identifying the misplaced object in the plurality of maps; and

determining a current location of the misplaced object from the most recent map comprising the misplaced object.

58. The method of claim 57, comprising:

analysing a plurality of user location data, the user location data defining the location of the user within the physical space over a period of time;

identifying maps associated with a time coordinate correlated with the period of time;

analysing the identified maps and identifying the misplaced object in the identified maps; and

determining the current location of the misplaced object from the identified map occurring most recently comprising the misplaced object.

59. The method of claim 58, comprising:

identifying one or more objects associated with a location correlated with the user location data from the identified maps;

analysing the one or more identified objects in the identified maps;

identifying the misplaced object from the one or more analysed objects; and determining the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

60. The method of claim 59, comprising:

identifying one or more objects located at a distance equal to or less than a predefined threshold distance from the user location data;

analysing the one or more objects located at the distance equal to or less than the predefined threshold distance from the user location data; identifying the misplaced object from the analysed one or more objects located at the distance equal to or less than the predefined threshold distance from the user location data; and

determining the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

61. The method of any one of claims 57 to 60, wherein each map comprises an image captured at a different point in time of one or more objects located in the physical space and distance information associated with each one of the one or more imaged objects, the distance information being expressed with respect to a point of reference, and the method comprises:

analysing a plurality of images of the physical space;

identifying the misplaced object in the plurality of images using image recognition; and

determining the current location of the misplaced object from the most recently captured image comprising an image of the misplaced object.

62. The method of claim 61, wherein the distance information associated with each one of the one or more imaged objects is determined in dependence on a time of flight of a reflectance signal reflected from each one of the one or more objects located in the physical space; and the method comprises:

determining the current location of the misplaced object from the most recently captured image comprising the image of the misplaced object and distance information associated with the imaged objects comprised in the most recently captured image.

63. A controller for locating a misplaced object within a physical space, the misplaced object having been misplaced within the physical space by a user, the controller comprising:

an input arranged in use to receive a query from the user to locate the misplaced object;

a processor arranged in use to:

analyse a plurality of maps of the physical space, each map defining a location of one or more objects located within the physical space at a time coordinate, the time coordinate defining a point in time; identify the misplaced object in the plurality of maps; and determine a current location of the misplaced object from the most recent map comprising the misplaced object.

64. The controller of claim 63, wherein the processor is arranged in use to:

analyse a plurality of user location data, the user location data defining the location of the user within the physical space over a period of time;

identify maps associated with a time coordinate correlated with the period of time;

analyse the identified maps and identify the misplaced object in the identified maps; and

determine the current location of the misplaced object from the identified map occurring most recently comprising the misplaced object.

65. The controller of claim 64, wherein the processor is arranged in use to:

identify one or more objects associated with a location correlated with the user location data from the identified maps;

analyse the one or more identified objects in the identified maps;

identify the misplaced object from the one or more analysed objects; and determine the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

66. The controller of claim 65, wherein the processor is arranged in use to:

identify one or more objects located at a distance equal to or less than a predetermined threshold distance from the user location data;

analyse the one or more objects located at the distance equal to or less than the predetermined threshold distance from the user location data;

identify the misplaced object from the analysed one or more objects located at the distance equal to or less than the predetermined threshold distance from the user location data; and

determine the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

67. The controller of any one of claims 63 to 66, wherein each map comprises an image captured at a different point in time of one or more objects located in the physical space and distance information associated with each one of the one or more imaged objects, the distance information being expressed with respect to a point of reference, and the processor is arranged in use to:

analyse a plurality of images of the physical space;

identify the misplaced object in the plurality of images using image recognition; and

determine the current location of the misplaced object from the most recently captured image comprising an image of the misplaced object.

68. The controller of claim 67, wherein the input is arranged in use to receive time of flight data from a time of flight (ToF) camera, the time of flight data relating to the time of flight of a reflectance signal reflected from each one of the one or more objects located in the physical space; and

the processor is arranged in use to determine the current location of the misplaced object from the most recently captured image comprising the image of the misplaced object and distance information determined from the received time of flight data.

69. The controller of any one of claims 63 to 68, comprising an output arranged in use to output the determined current location of the misplaced object to a user.

70. The controller of any one of claims 63 to 69 comprising an image capture device.

71. The controller of claim 70 comprising a time-of-flight (ToF) camera.

72. The controller of any one of claims 63 to 71 comprising a motion sensor.

73. The controller of any one of claims 63 to 72 arranged in use to carry out the method of any one of claims 57 to 62.

Description:
Improved Control Device

TECHNICAL FIELD

The present disclosure relates to the field of home automation. In particular, an aspect of the invention relates to a control device arranged to be operatively coupled to one or more devices, such as one or more home appliances and/or systems, and configured to learn a usage pattern associated with a user's usage of an operatively connected electronic device, and subsequently to automate operation of the operatively connected one or more devices in dependence on the generated usage pattern. A further aspect of the invention relates to a controller arranged to identify an object misplaced by a user within a physical space.

BACKGROUND

Home automation systems are becoming ever more popular. However, a shortcoming of known home automation systems is that their operation is either pre-configured or requires customised user configuration or programming to provide the desired functionality. To achieve this, known solutions often rely on trained installers, which increases the time required to configure the home automation system for operation. To mitigate this, pre-configured systems are often provided. However, pre-configured systems are inflexible and do not accurately replicate a user's desired usage pattern, which may change with the season of the year, for example. Similarly, a user's usage pattern may change over time as a result of changes in the user's routine. A pre-configured system will always behave in the same manner, regardless of any changes in its environment or in the user's usage pattern resulting from a change in routine.

Known solutions to the problem of locating misplaced objects include the use of geolocation chips, that is, wireless chips capable of transmitting location information. When affixed to an object, they may transmit information regarding the location of the object to a receiver. Such solutions have been employed in military applications with much success. However, their cost means that they are not cost effective for use in a domestic environment, in particular for monitoring the location of inexpensive items such as keys. Accordingly, a simpler and more cost-effective solution is required, in particular for use in a domestic setting such as within a home.

It is an object of an aspect of the invention to resolve at least some of the shortcomings associated with known prior art solutions. In particular, it is an object of at least one aspect of the invention to provide a control device that is intuitive to use and does not require any special expertise, such as programming or technical skills, for a non-expert user to configure and use.

SUMMARY OF THE INVENTION

In accordance with an aspect of the invention there is provided a method of controlling operation of one or more devices in accordance with a usage pattern. The one or more devices may be operatively coupled to a controller, the controller having an observation phase in which usage of the one or more devices by a user is observed, and a subsequent automation phase in which operation of the one or more devices is automated in accordance with the observed usage. The observation phase may comprise: receiving a first input, the first input defining a desired operative state of a first one of the one or more devices; receiving a second input, the second input defining a desired operative state of a second one of the one or more devices; determining if the second input was received within a period of time less than or equal to a predefined threshold time period; and generating the usage pattern in dependence on the second input having been received within a period of time less than or equal to the predefined threshold time period, the usage pattern defining the desired operative state of the one or more devices in accordance with the received inputs. The subsequent automation phase may comprise: controlling operation of the one or more devices in accordance with the usage pattern. In this way, usage of one or more devices operatively coupled to a controller may be observed, and a usage pattern generated. In addition, operation of the one or more devices may be subsequently controlled in accordance with the generated usage pattern. In many embodiments the automation phase occurs at a period of time after the observation phase. This helps to ensure that the generated usage pattern is an accurate reflection of how a user controls the one or more devices. The current method does not require any pre-configuration, and so is convenient for use with home automation systems. The use of a predefined threshold time period ensures that the received inputs are associated with the same event. For example, it may be assumed that where a user wishes to control operation of two or more devices, then the inputs required to change the operative states of the two or more devices will be received within a short period of time. The use of the predefined threshold time period assists in distinguishing between inputs associated with a related event, and unrelated events.
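By way of illustration only, the following Python sketch shows one possible reading of the observation and automation phases described above. The class, function and device names (Controller, observe, automate, hall_light and so on), the 30-second threshold, and the pairing of exactly two inputs per usage pattern are assumptions made for this example and are not specified by the application.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Dict, List, Optional

    # Assumed threshold: inputs arriving within this window are treated as
    # belonging to the same event and are grouped into one usage pattern.
    THRESHOLD = timedelta(seconds=30)

    @dataclass
    class UsagePattern:
        # Desired operative state per device, e.g. {"hall_light": "on", "heater": "22C"}
        states: Dict[str, str]

    @dataclass
    class Controller:
        patterns: List[UsagePattern] = field(default_factory=list)
        _pending: Optional[tuple] = None  # (device_id, state, timestamp) of the first input

        def observe(self, device_id: str, state: str, now: Optional[datetime] = None) -> None:
            """Observation phase: pair up inputs received within THRESHOLD."""
            now = now or datetime.now()
            if self._pending is None:
                self._pending = (device_id, state, now)
                return
            first_dev, first_state, first_time = self._pending
            if now - first_time <= THRESHOLD:
                # Second input arrived inside the window: generate a usage pattern.
                self.patterns.append(UsagePattern({first_dev: first_state, device_id: state}))
                self._pending = None
            else:
                # Too late to be related; treat this input as the start of a new pair.
                self._pending = (device_id, state, now)

        def automate(self, apply_state) -> None:
            """Automation phase: drive the devices to the states in each learned pattern."""
            for pattern in self.patterns:
                for device_id, state in pattern.states.items():
                    apply_state(device_id, state)

In this sketch, unrelated inputs simply restart the pairing, which is one simple way of distinguishing related from unrelated events using the predefined threshold time period.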

The observation phase may comprise: capturing a first time coordinate associated with receipt of any one of the first and/or second input, the first time coordinate defining a time of day associated with receipt of the input; associating the first time coordinate with the generated usage pattern. The automation phase may comprise: monitoring a second time coordinate associated with a time of day; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the second time coordinate being correlated with the first time coordinate.

In certain embodiments the automation phase may comprise: controlling operation of the one or more devices in accordance with the usage pattern in dependence on a time difference between the second time coordinate and the first time coordinate being less than or equal to a predefined threshold period of time.

In this way, usage behaviour that is regularly observed to occur at certain times of day may advantageously be accurately captured and associated with a generated usage pattern. Operation of the one or more devices may subsequently be controlled in accordance with the generated usage pattern at or near the observed time of day. This is particularly advantageous for automating observed routine use of one or more devices by a user.
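
As a purely illustrative sketch of the time-correlation step described above, the Python fragment below compares a monitored time of day with the time coordinate captured during the observation phase, treating the two as correlated when they lie within an assumed tolerance. The tolerance value and function names are hypothetical.

```python
from datetime import datetime, time

TIME_TOLERANCE_MINUTES = 15  # illustrative correlation tolerance

def minutes_of_day(t: time) -> int:
    return t.hour * 60 + t.minute

def should_automate(first_time_coordinate: time, now: datetime) -> bool:
    """Trigger the usage pattern when the monitored time of day lies within the
    tolerance of the time of day captured during the observation phase."""
    delta = abs(minutes_of_day(now.time()) - minutes_of_day(first_time_coordinate))
    delta = min(delta, 24 * 60 - delta)  # handle wrap-around near midnight
    return delta <= TIME_TOLERANCE_MINUTES

print(should_automate(time(19, 30), datetime(2017, 8, 15, 19, 40)))  # True
```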

In certain embodiments the observation phase may comprise: receiving a first operative setting associated with any one of the first and/or second input, the operative setting comprising data associated with a characteristic of a location associated with any one or more of the first one and the second one of the one or more devices at a time of receipt of any one of the first and/or second input; associating the operative setting with the generated usage pattern. The automation phase may comprise: monitoring a second operative setting associated with the location; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the monitored second operative setting being correlated with the first operative setting. This improves the versatility of the method, and in particular enables the system to automate control of the one or more devices in dependence on an observed operative setting associated with a characteristic of the location associated with the devices.

The operative setting may comprise an environmental state associated with the location of any one or more of the first one and the second one of the one or more devices, and the observation phase may comprise: receiving first environmental state data associated with any one of the first and second received input, the first environmental state data comprising data associated with an environmental characteristic associated with the location at a time of receipt of any one of the first and second input; associating the first environmental state data with the generated usage pattern. The automation phase may comprise: monitoring second environmental state data associated with an environmental characteristic of the location; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the monitored second environmental state data being correlated with the first environmental state data. In this way advantageously the one or more devices may be controlled in accordance with the usage pattern in dependence on an environmental characteristic of the location. The environmental state data may comprise any one or more of: a temperature value; a brightness value associated with a level of brightness of the location; and a humidity value associated with the location. This feature is particularly useful for automatically controlling climate devices, such as central heating devices or air conditioning devices in dependence on an ambient temperature of the physical location. For example, air conditioning devices may be automatically operated in accordance with the usage pattern when the ambient temperature within a room of a building exceeds a predefined threshold temperature value. Similarly, a central heating system may be automatically operated in accordance with a usage pattern when the ambient temperature within a room is less than a predefined threshold temperature.
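
The following minimal Python sketch illustrates, under assumed threshold temperatures, how a controller might select between climate-control usage patterns in dependence on a monitored ambient temperature. The threshold values and pattern identifiers are illustrative only.

```python
# Hypothetical threshold values for climate-control usage patterns.
AC_TRIGGER_CELSIUS = 26.0
HEATING_TRIGGER_CELSIUS = 18.0

def select_climate_pattern(ambient_celsius: float):
    """Return an identifier of the usage pattern to apply for the monitored
    ambient temperature, or None if neither threshold condition is met."""
    if ambient_celsius > AC_TRIGGER_CELSIUS:
        return "usage_pattern:air_conditioning_on"
    if ambient_celsius < HEATING_TRIGGER_CELSIUS:
        return "usage_pattern:central_heating_on"
    return None

print(select_climate_pattern(27.5))  # usage_pattern:air_conditioning_on
```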

In certain embodiments the operative setting may comprise occupancy information relating to a number of occupants present in the location, and the observation phase may comprise: receiving first occupancy data associated with any one of the first and second input, the occupancy data comprising data associated with a number of occupants present at the location at the time of receipt of any one of the first and second input; associating the first occupancy data with the generated usage pattern. The automation phase may comprise: monitoring second occupancy data associated with the location; and controlling operation of the one or more devices in accordance with the usage pattern in dependence on the monitored second occupancy data being correlated with the first occupancy data. In this way it is possible to control operation of devices in dependence on the number of people located in, for example, a room. For example, it is known that the greater the number of people present in a room, the greater the ambient room temperature tends to be as a result of each person within the room radiating an amount of heat. This is particularly problematic in public venues, such as offices. In accordance with the present method, climate control devices such as air conditioning systems or central heating systems may be automatically controlled in accordance with a usage pattern in dependence on the number of people located in the room - for example, the air conditioning system may be automatically engaged to reduce the ambient temperature once the room occupancy exceeds a predefined threshold number.
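
A similarly hedged sketch of occupancy-based triggering is given below; the occupancy threshold and its combination with a temperature check are illustrative assumptions rather than a definitive implementation.

```python
OCCUPANCY_THRESHOLD = 8      # illustrative value for a meeting room
COMFORT_CELSIUS = 24.0       # illustrative comfort limit

def engage_air_conditioning(occupants: int, ambient_celsius: float) -> bool:
    """Engage the air-conditioning usage pattern once the monitored occupancy
    exceeds the threshold and the room has warmed past the comfort limit."""
    return occupants > OCCUPANCY_THRESHOLD and ambient_celsius > COMFORT_CELSIUS

print(engage_air_conditioning(occupants=10, ambient_celsius=25.5))  # True
```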

In certain embodiments the automation phase may comprise: determining a current operative state of each one of the one or more devices; generating one or more control signals for changing the current operative state of the one or more devices from its current operative state to the operative state defined in the usage pattern; and controlling operation of the one or more devices in accordance with the generated one or more control signals.
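
To illustrate the control-signal generation step, the following Python sketch compares the current operative state of each device with the state defined in the usage pattern and emits a signal only for devices whose state needs to change. The dictionary-based representation of device states is an assumption made for the example.

```python
def generate_control_signals(current: dict, usage_pattern: dict) -> list:
    """Compare the current operative state of each device with the state defined
    in the usage pattern, and emit a control signal only where the two differ."""
    return [(device, desired)
            for device, desired in usage_pattern.items()
            if current.get(device) != desired]

current_states = {"lamp": "off", "blinds": "closed", "heater": "on"}
pattern_states = {"lamp": "on", "blinds": "closed"}
print(generate_control_signals(current_states, pattern_states))
# [('lamp', 'on')] -- the blinds are already in the desired state.
```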

In certain embodiments the observation phase may comprise: receiving a plurality of pairs of first and second inputs; determining if each second input comprised within each pair of first and second inputs was received within a period of time less than or equal to the predefined threshold time period; and generating a usage pattern for each pair of first and second inputs, in dependence on the second input within each pair having been received within a period of time less than or equal to the predefined threshold time period. In this way a plurality of different usage patterns may be generated associated with different usage characteristics.

In certain embodiments more than two devices may be operatively coupled to the controller. The observation phase may comprise: determining a final system state associated with each generated usage pattern, the final system state being dependent on the final operative state of each one of the devices operatively coupled to the controller after receipt of the pair of first and second inputs; associating a weighting variable with the final system state associated with each generated usage pattern, the weighting variable being indicative of a number of times that the final system state has arisen or a frequency with which the final system state has been determined to arise; defining a usage pattern as its associated final system state in dependence on the weighting variable associated with the usage pattern being greater than or equal to a predefined weighting threshold value. Determining the final system state associated with each usage pattern assists in determining what the user's intention was when providing the received inputs, which cannot be inferred exclusively from receipt of the user inputs. The user inputs only provide information regarding the operative states of the devices that the user wishes to change, which will be dependent on their initial operative states. However, a desired final system state is not dependent on the initial operative states of the devices. Defining a usage pattern in terms of the final system state of each one of the operatively coupled devices ensures that, irrespective of the initial operative states of the devices, the desired final operative state may always be achieved. The use of weighting variables helps to identify associated usage patterns, in dependence on a shared final system state. In particular, the more frequently a specific final system state is observed, the greater the value of the associated weighting variable. During the automation phase the method may comprise determining the initial operative state of each one of the operatively coupled devices. The desired final operative state of each operatively coupled device may be obtained from the usage pattern. In dependence on the initial operative state of each device and the desired final operative state, the control signals required to change the initial operative state of each device to the required final operative state may be generated.
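
One simple way to realise the weighting of final system states described above is sketched below in Python, using an occurrence count as the weighting variable and an assumed promotion threshold. The representation of a final system state as a frozen set of device/state pairs is an illustrative choice.

```python
from collections import Counter

WEIGHTING_THRESHOLD = 3  # illustrative: promote a state observed at least 3 times

def promote_usage_patterns(observed_final_states: list) -> list:
    """Count how often each final system state has been observed and promote the
    states whose weighting meets the threshold to stand-alone usage patterns."""
    weights = Counter(observed_final_states)
    return [state for state, weight in weights.items()
            if weight >= WEIGHTING_THRESHOLD]

# Each final system state captures the operative state of every coupled device.
evening = frozenset({("lamp", "on"), ("blinds", "closed"), ("tv", "on")})
morning = frozenset({("lamp", "off"), ("blinds", "open"), ("tv", "off")})
observations = [evening, evening, morning, evening]
print(promote_usage_patterns(observations) == [evening])  # True
```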

In certain embodiments the observation phase may comprise: determining a final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more devices operatively coupled to the controller after receipt of the first input and the second input; determining an energy consumption value associated with the final system state associated with the usage pattern; determining an environmental state associated with a location of the one or more devices and the final system state; varying the energy consumption value by defining an amended final operative state of at least one of the one or more devices, in dependence on the determined environmental state being different to a predefined environmental threshold state; and amending the usage pattern with the amended final operative state of the at least one device. The automation phase may comprise: controlling operation of the one or more devices in accordance with the amended usage pattern.

In particular, the observation phase may comprise: reducing the energy consumption value by defining an amended final operative state of at least one of the one or more devices, the amended final operative state being associated with a lower energy consumption value, in dependence on the determined environmental state being greater than the predefined environmental threshold state. In this way, advantageously the method is able to reduce the energy consumption value associated with a particular usage pattern by amending the final operative state of at least one of the operatively coupled devices. The use of a predefined environmental threshold state ensures that user comfort is not compromised. For example, the predefined environmental threshold state may relate to an environmental state that the user wishes to maintain, such as an ambient temperature or an ambient brightness. The operative state of one or more devices may be amended to reduce energy consumption whilst simultaneously maintaining the ambient temperature or ambient brightness. Accordingly, user comfort is maintained. In use, the energy reduction may occur without the user noticing the amended final operative state of the at least one device.

In certain embodiments, the observation phase may comprise: increasing the energy consumption value by defining an amended final operative state of at least one of the one or more devices, the amended final operative state being associated with a higher energy consumption value, in dependence on the determined environmental state being less than the predefined environmental threshold state. Similarly, the method may be arranged to amend the final operative state of at least one device in order to maintain the predefined environmental threshold state. For example, if the ambient temperature or brightness in a physical location falls below the predefined threshold state, then the system may amend the final operative state of at least one device in order to maintain the predefined threshold state. In certain circumstances this may result in an increase in the overall energy consumption value. In combination with the preceding embodiment, this ensures that as environmental state conditions change, for example, the system adapts by either decreasing or increasing energy consumption in order to maintain the predefined environmental threshold state. This is particularly convenient when controlling ambient brightness in a room comprising at least one window through which external illumination may illuminate the room. The ambient brightness conditions within the room will be dependent on both the external illumination conditions and on the illumination conditions within the room. Accordingly, a change in external illumination conditions will have an effect on the ambient brightness conditions within the room. At different times of day the external illumination conditions will vary, and therefore, in order to maintain a predefined ambient brightness threshold state, it may be necessary to vary the brightness of the illumination sources within the room to compensate for the change in external brightness conditions. The present method ensures that the predefined environmental threshold state may be automatically maintained.
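
A minimal sketch of the bidirectional energy adjustment described in the two preceding paragraphs is given below, assuming a single dimmable lamp and a generic measured environmental value compared against a target; the 10% adjustment step and parameter names are hypothetical.

```python
def adjust_for_environment(final_state: dict, measured: float, target: float) -> dict:
    """Amend the final operative state of a dimmable lamp: reduce energy use when
    the measured environmental value exceeds the target, and increase it when the
    measured value falls below the target."""
    amended = dict(final_state)
    level = amended.get("lamp_brightness_percent", 0)
    if measured > target:
        amended["lamp_brightness_percent"] = max(0, level - 10)    # save energy
    elif measured < target:
        amended["lamp_brightness_percent"] = min(100, level + 10)  # restore comfort
    return amended

state = {"lamp_brightness_percent": 80}
print(adjust_for_environment(state, measured=520.0, target=500.0))  # dimmed to 70
```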

In accordance with an embodiment the one or more devices may comprise one or more illumination devices. The environmental state may relate to an ambient brightness value associated with a location of the one or more illumination devices. The observation phase may comprise: determining the final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices operatively coupled to the controller after receipt of the first input and the second input; determining an energy consumption value associated with the final system state associated with the usage pattern; determining a first ambient brightness value associated with the location of the one or more illumination devices and the final system state; varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, in dependence on the determined first ambient brightness value being different to a predefined ambient brightness threshold value; amending the usage pattern with the amended final operative state of the at least one illumination device. The automation phase may comprise: controlling operation of the one or more illumination devices in accordance with the amended usage pattern.

Varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, may comprise the steps of: varying the brightness setting of the at least one illumination device by a predefined increment; measuring a second ambient brightness value associated with the brightness setting of the at least one illumination device varied by the predefined increment; and determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and iteratively repeating the steps of varying the brightness value by the predefined increment and measuring the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

Similarly, varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, may comprise the steps of: increasing the brightness setting of the at least one illumination device by a predefined increment in dependence on the first ambient brightness value being less than the predefined ambient brightness threshold value; measuring a second ambient brightness value associated with the brightness setting of the at least one illumination device increased by the predefined increment; and determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and iteratively repeating the steps of increasing the brightness value by the predefined increment and measuring the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

Similarly, varying the energy consumption value by defining an amended final operative state of at least one of the one or more illumination devices, may comprise the steps of: decreasing the brightness setting of the at least one illumination device by a predefined increment in dependence on the first ambient brightness value being greater than the predefined ambient brightness threshold value; measuring a second ambient brightness value associated with the brightness setting of the at least one illumination device decreased by the predefined increment; and determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and iteratively repeating the steps of decreasing the brightness value by the predefined increment and measuring the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

These embodiments benefit from the previously mentioned advantages. In particular, the brightness setting of one or more illumination devices may be varied in order to maintain the predefined ambient brightness threshold value in the described ways.
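
As an illustration of the iterative increment-and-measure loop described above, the following Python sketch repeatedly adjusts a lamp's brightness setting until a simulated ambient brightness reading reaches a target value. The sensor gain, step size and tolerance used to decide when the reading is 'equal' to the target are assumptions for the example.

```python
def maintain_ambient_brightness(read_lux, set_lamp_percent, target_lux,
                                start_percent=50, step=5, max_iterations=40):
    """Iteratively vary the lamp brightness setting by a fixed increment until the
    measured ambient brightness reaches the target (a reading within 5 lux of the
    target is treated as 'equal' here, since real sensors are noisy)."""
    level = start_percent
    for _ in range(max_iterations):
        measured = read_lux()
        if abs(measured - target_lux) < 5:
            break
        level += step if measured < target_lux else -step
        level = max(0, min(100, level))
        set_lamp_percent(level)
    return level

# Toy simulation: ambient brightness = daylight + lamp contribution (1 lux per %).
daylight_lux = 460.0
lamp_percent = [30]
final = maintain_ambient_brightness(
    read_lux=lambda: daylight_lux + lamp_percent[0],
    set_lamp_percent=lambda p: lamp_percent.__setitem__(0, p),
    target_lux=500.0,
    start_percent=30)
print(final, daylight_lux + lamp_percent[0])  # 40 500.0
```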

In accordance with a further aspect of the invention, there is provided a controller for controlling operation of one or more devices in accordance with a usage pattern, the one or more devices being operatively coupled to the controller, the controller being configured in use with an observation phase in which usage of the one or more devices by a user is observed, and a subsequent automation phase in which operation of the one or more devices is automated in accordance with the observed usage. The controller may comprise: an input arranged in use to receive a first input, the first input defining a desired operative state of a first one of the one or more devices, and a second input, the second input defining a desired operative state of a second one of the one or more devices; a processor; and an output. The processor may be arranged in use during the observation phase to: determine if the second input was received within a period of time less than or equal to a predefined threshold time period; and generate the usage pattern in dependence on the second input having been received within a period of time less than or equal to the predefined threshold time period, the usage pattern defining the desired operative state of the one or more devices in accordance with the received inputs. The output may be arranged in use during the subsequent automation phase to: output a control signal for controlling operation of the one or more devices in accordance with the usage pattern. This aspect of the invention and its embodiments benefit from the same advantages as mentioned in relation to the previously summarised aspect of the invention. In practice, a plurality of controllers may be provided in a physical location, such as within different rooms within a house, and arranged in a mesh network. Accordingly, in the ensuing detailed description the controller may interchangeably be referred to as a node.

In certain embodiments the processor may be configured during the observation phase to: capture a first time coordinate associated with receipt of any one of the first and/or second input, the first time coordinate defining a time of day associated with receipt of the input; and associate the first time coordinate with the generated usage pattern. The processor may be arranged in use during the automation phase to: monitor a second time coordinate associated with a time of day; and the output may be arranged in use to: output the control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on the second time coordinate being correlated with the first time coordinate.

The output may be arranged in use during the automation phase to: output a control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on a time difference between the second time coordinate and the first time coordinate being less than or equal to a predefined threshold period of time.

In certain embodiments, during the observation phase: the input may be arranged in use to receive a first operative setting associated with any one of the first and second input received from one or more sensors present in a location associated with any one of the first one and the second one of the one or more devices, the operative setting comprising data associated with a characteristic of the location at a time of receipt of any one of the first and second input. The processor may be configured in use to associate the operative setting with the generated usage pattern. During the automation phase: the input may be configured in use to receive a second operative setting associated with the location from any one of the one or more sensors; and the output may be configured in use to output the control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on the received second operative setting being correlated with the first operative setting.

In certain embodiments the operative setting may comprise an environmental state associated with the location of any one or more of the first one and the second one of the one or more devices. During the observation phase: the input may be arranged in use to receive first environmental state data from the one or more sensors, the environmental state data being associated with any one of the first and second received input, the first environmental state data comprising data associated with an environmental characteristic associated with the location at a time of receipt of any one of the first and second input. The processor may be arranged in use to associate the first environmental state data with the generated usage pattern. During the automation phase: the input may be arranged in use to receive second environmental state data from the one or more sensors, the second environmental state data being associated with an environmental characteristic of the location; and the output may be configured in use to output the control signal for controlling the one or more devices in accordance with the usage pattern in dependence on the second environmental state data being correlated with the first environmental state data.

In certain embodiments the environmental data may comprise any one or more of: a temperature value received from at least one thermometer; an ambient brightness value associated with a level of ambient brightness of the location, received from one or more ambient light sensors; and a humidity value associated with the location, received from at least one humidity sensor.

In certain embodiments the operative setting may comprise occupancy information relating to a number of occupants present in the location, and in the observation phase: the input may be arranged in use to receive first occupancy data associated with any one of the first and second input, the occupancy data comprising data associated with a number of occupants present at the location at the time of receipt of any one of the first and second input; the processor may be arranged in use to associate the first occupancy data with the generated usage pattern. During the automation phase: the input may be arranged in use to receive second occupancy data associated with the location; and the output may be arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the usage pattern in dependence on the received second occupancy data being correlated with the first occupancy data.

In certain embodiments in the automation phase: the processor may be arranged in use to determine a current operative state of each one of the one or more devices, and to generate one or more control signals for changing the current operative state of the one or more devices from its current operative state to the operative state defined in the usage pattern; and the output may be arranged in use to output the one or more generated control signals.

In accordance with an embodiment, in the observation phase: the input may be arranged in use to receive a plurality of pairs of first and second inputs; and the processor may be arranged in use to determine if each second input comprised within each pair of first and second inputs was received within a period of time less than or equal to the predefined threshold time period, and to generate a usage pattern for each pair of first and second inputs, in dependence on the second input within each pair having been received within a period of time less than or equal to the predefined threshold time period.

In certain embodiments more than two devices may be operatively coupled to the controller, and in the observation phase the processor may be arranged to: determine a final system state associated with each generated usage pattern, the final system state being dependent on the final operative state of each one of the devices operatively coupled to the controller after receipt of the pair of first and second inputs; associate a weighting variable with the final system state associated with each generated usage pattern, the weighting variable being indicative of a number of times that the final system state has arisen or a frequency with which the final system state has been determined to arise; and define a usage pattern as its associated final system state in dependence on the weighting variable associated with the usage pattern being greater than or equal to a predefined weighting threshold value.

In certain embodiments in the observation phase: the input may be arranged in use to receive environmental state data from one or more sensors located in a location associated with the one or more devices, the environmental state data comprising data associated with an environmental characteristic of the location after receipt of any one of the first and second input. The processor may be arranged in use to: determine a final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more devices operatively coupled to the controller after receipt of the first input and the second input; determine an environmental state associated with the location and the final system state from the received environmental state data; determine an energy consumption value associated with the final system state associated with the usage pattern; determine if the environmental state is equal to a predefined environmental threshold state; determine an amended final operative state of at least one of the one or more devices associated with a different energy consumption value in dependence on the determined environmental state being different to the predefined environmental threshold state; and amend the usage pattern with the amended final operative state. In the automation phase: the output may be arranged in use to output a control signal for controlling operation of the one or more devices in accordance with the amended usage pattern associated with the different energy consumption value.

In the observation phase the processor may be arranged in use to: reduce the energy consumption value by determining an amended final operative state of at least one of the one or more devices associated with a lower energy consumption value in dependence on the determined environmental state being greater than the predefined environmental threshold state.

In the observation phase the processor may be arranged in use to: increase the energy consumption value by determining an amended final operative state of at least one of the one or more devices associated with a higher energy consumption value in dependence on the determined environmental state being less than the predefined environmental threshold state.

In certain embodiments the one or more devices may comprise one or more illumination devices, the environmental state relates to an ambient brightness value associated with a location of the one or more illumination devices, and in the observation phase: the input may be arranged in use to receive ambient brightness data from one or more brightness sensors located in a location associated with the one or more devices, the ambient brightness data comprising data associated with an ambient brightness value of the location after receipt of any one of the first and second input. The processor may be arranged in use to: determine the final system state associated with the usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices operatively coupled to the controller after receipt of the first input and the second input; determine a first ambient brightness value associated with the location and the final system state from the received ambient brightness data; determine an energy consumption value associated with the final system state associated with the usage pattern; determine if the first ambient brightness value is equal to a predefined ambient brightness threshold level; determine an amended final operative state of at least one illumination device associated with a different energy consumption value in dependence on the determined first ambient brightness value being different to the predefined ambient brightness threshold level; amend the usage pattern with the amended final operative state. In the automation phase: the output may be arranged in use to output a control signal for controlling operation of the one or more illumination devices in accordance with the amended usage pattern associated with the different energy consumption value.

In certain embodiments, in the observation phase the processor may be arranged in use to: vary the brightness setting of the at least one illumination device by a predefined increment; determine a second ambient brightness value associated with the brightness setting of the at least one illumination device varied by the predefined increment from ambient brightness data received from the one or more brightness sensors; and iteratively vary the brightness setting of the at least one illumination device by the predefined increment until the second ambient brightness value is equal to the predefined ambient brightness threshold level.

In certain embodiments the controller may comprise one or more environmental sensors arranged in use to capture environmental data associated with a location of any one or more operatively coupled devices. The environmental sensors enable data to be captured regarding the environment in which the one or more devices are located. In turn this data may be used to increase the versatility of use of the controller.

In accordance with yet a further aspect of the invention there is provided a method of controlling operation of one or more devices in accordance with a usage pattern, the one or more devices being operatively coupled to a controller, the usage pattern having been generated in dependence on an observed usage of the one or more devices by a user and being associated with at least one parameter value, the method may comprise: identifying a user; receiving the parameter value; determining if a predefined threshold condition is satisfied by the received parameter value; selecting a usage pattern from a plurality of available usage patterns, the usage pattern being associated with the predefined threshold condition, and being selected in dependence on the identified user and in dependence on the predefined threshold condition being satisfied; and controlling operation of the one or more devices in accordance with the selected usage pattern. In this way it is possible to control operation of the one or more devices in accordance with a specific user's usage pattern. Accordingly, operation of the one or more devices may be customised to each different user's specific usage patterns. The advantages associated with the previously described aspects of the invention apply to the current aspect.
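
A possible, purely illustrative realisation of the user-specific pattern selection described above is sketched below in Python: each stored usage pattern carries an owning user and a predefined threshold condition, and a pattern is selected only when both the identified user and the received parameter value match. The data structures and example thresholds are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StoredPattern:
    user: str
    threshold_check: Callable[[float], bool]  # the predefined threshold condition
    desired_states: dict

def select_pattern(patterns: list, user: str, parameter_value: float):
    """Select the first usage pattern belonging to the identified user whose
    predefined threshold condition is satisfied by the received parameter value."""
    for pattern in patterns:
        if pattern.user == user and pattern.threshold_check(parameter_value):
            return pattern
    return None

library = [
    StoredPattern("alice", lambda temp: temp > 26.0, {"fan": "on"}),
    StoredPattern("bob", lambda temp: temp > 26.0, {"aircon": "on", "blinds": "closed"}),
]
chosen = select_pattern(library, user="bob", parameter_value=27.3)
print(chosen.desired_states)  # {'aircon': 'on', 'blinds': 'closed'}
```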

In certain embodiments the parameter value may comprise a time coordinate associated with a time of day, and the method may comprise: receiving the time coordinate; determining if the received time coordinate lies within a predefined time period of a predefined threshold time coordinate; selecting the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined threshold time coordinate, and being selected in dependence on the identified user and in dependence on the received time coordinate lying within the predefined time period of the predefined threshold time coordinate; controlling operation of the one or more devices in accordance with the selected usage pattern.

The parameter value may comprise environmental state data associated with an environmental characteristic of a location comprising the one or more devices, and the method may comprise: receiving the environmental state data; determining if the received environmental state data satisfies a predefined environmental threshold condition; selecting the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined environmental threshold condition, and being selected in dependence on the identified user and in dependence on the received environmental state data satisfying the predefined environmental threshold condition; and controlling operation of the one or more devices in accordance with the selected usage pattern.

The environmental state data may comprise any one or more of: a temperature value; a brightness value associated with a level of brightness of the location; and a humidity value associated with the location.

The parameter value may comprise occupancy information relating to a number of occupants present in a location associated with at least one of the one or more devices, and the method may comprise: receiving the occupancy information; determining if the received occupancy information satisfies a predefined occupancy threshold condition; selecting the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined occupancy threshold condition, and being selected in dependence on the identified user and in dependence on the received occupancy information satisfying the predefined occupancy threshold condition; and controlling operation of the one or more devices in accordance with the selected usage pattern.

In certain embodiments the method may comprise: determining a current operative state of each one of the one or more devices; generating one or more control signals for changing the current operative state of the one or more devices from its current state to the operative state defined in the selected usage pattern; and outputting the generated one or more control signal to the one or more devices.

In certain embodiments the method may comprise: receiving environmental state data associated with a location comprising the one or more devices; determining a final system state associated with the selected usage pattern, the final system state being dependent on a final operative state of each one of the one or more devices; determining an energy consumption value associated with the final system state associated with the selected usage pattern; determining if the received environmental state data satisfies a predefined environmental state threshold; determining an amended usage pattern associated with a different energy consumption value by varying the final operative state of at least one of the one or more devices in dependence on the received environmental state data not satisfying the predefined environmental state threshold; and controlling operation of the one or more devices in accordance with the amended usage pattern.

In certain embodiments the method may comprise: determining an amended usage pattern associated with a lower energy consumption by selecting a final operative state of at least one of the one or more devices, the final operative state being associated with a lower energy consumption value, in dependence on the received environmental state data being greater than the predefined environmental state threshold.

The one or more devices may comprise one or more illumination devices, the environmental state may relate to an ambient brightness value associated with a location of the one or more illumination devices, and the method may comprise: receiving a first ambient brightness value associated with the location comprising the one or more illumination devices; determining the final system state associated with the selected usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices; determining the energy consumption value associated with the final system state associated with the selected usage pattern; determining if the received first ambient brightness value satisfies a predefined ambient brightness threshold value; determining an amended usage pattern associated with a lower energy consumption by reducing a brightness value of at least one of the one or more illumination devices in dependence on the received first ambient brightness value being greater than the predefined ambient brightness threshold value; and controlling operation of the one or more illumination devices in accordance with the amended usage pattern.

In certain embodiments the method may comprise the steps of: reducing the brightness value of at least one of the one or more illumination devices by a predefined increment; receiving a second ambient brightness value associated with the reduced brightness value of the at least one of the one or more illumination devices; and determining if the second ambient brightness value is equal to the predefined ambient brightness threshold value, and repeating the steps of reducing the brightness value by the predefined increment and receiving the second ambient brightness value until the second ambient brightness value is equal to the predefined ambient brightness threshold value.

In accordance with a further aspect of the invention there is provided a controller for controlling operation of one or more devices in accordance with a usage pattern. The one or more devices may be operatively coupled to the controller. The usage pattern may have been generated in dependence on an observed usage of the one or more devices by a user, and may be associated with at least one parameter value. The controller may comprise: an input arranged in use for receiving user data, from which an identity of a user may be determined, and for receiving the parameter value; a processor arranged in use to: determine if a predefined threshold condition is satisfied by the received parameter value; select a usage pattern from a plurality of available usage patterns, the usage pattern being associated with the predefined threshold condition, and being selected in dependence on the identified user and in dependence on the predefined threshold condition being satisfied; and an output arranged in use to output a control signal for controlling operation of the one or more devices in accordance with the selected usage pattern. The parameter value may relate to any parameter which characterises the usage pattern. For example, it may relate to, but is not limited to: a time of day, an environmental setting, or an operational setting such as the number of occupants present in a physical location. The use of the parameter value facilitates the identification and implementation of the appropriate usage pattern. This aspect of the invention and its embodiments benefit from the same advantages as set out in relation to the preceding aspects of the invention.

In certain embodiments the parameter value may comprise a time coordinate associated with a time of day. The input may be arranged in use to receive the time coordinate. The processor may be arranged in use to: determine if the received time coordinate lies within a predefined time period of a predefined threshold time coordinate; select the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined threshold time coordinate, and being selected in dependence on the identified user and in dependence on the received time coordinate lying within the predefined time period. The output may be arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the selected usage pattern.

The parameter value may comprise environmental state data associated with an environmental characteristic of a location comprising the one or more devices. The input may be arranged in use to receive the environmental state data. The processor may be arranged in use to: determine if the received environmental state data satisfies a predefined environmental threshold condition; select the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined environmental threshold condition, and being selected in dependence on the identified user and in dependence on the received environmental state data satisfying the predefined environmental threshold condition. The output may be arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the selected usage pattern.

The environmental state data may comprise any one or more of: a temperature value received from one or more temperature sensors; a brightness value associated with a level of brightness of the location, the brightness value being received from one or more brightness sensors; and a humidity value associated with the location received from one or more humidity sensors.

In certain embodiments the parameter value may comprise occupancy information relating to a number of occupants present in a location associated with at least one of the one or more devices. The input may be arranged in use to receive the occupancy information. The processor may be arranged in use to: determine if the received occupancy information satisfies a predefined occupancy threshold condition; select the usage pattern from the plurality of available usage patterns, the usage pattern being associated with the predefined occupancy threshold condition, and being selected in dependence on the identified user and in dependence on the received occupancy information satisfying the predefined occupancy threshold condition. The output may be arranged in use to output the control signal for controlling operation of the one or more devices in accordance with the selected usage pattern.

The processor may be arranged in use to: determine a current operative state of each one of the one or more devices; generate one or more control signals for changing the current operative state of the one or more devices from its current state to the operative state defined in the selected usage pattern. The output may be arranged in use to output the generated one or more control signals to the one or more devices.

In certain embodiments the input may be arranged in use to receive environmental state data associated with a location comprising the one or more devices. The processor may be arranged in use to: determine a final system state associated with the selected usage pattern, the final system state being dependent on a final operative state of each one of the one or more devices; determine an energy consumption value associated with the final system state associated with the selected usage pattern; determine if the received environmental state data satisfies a predefined environmental state threshold; determine an amended usage pattern associated with a different energy consumption value by varying the final operative state of at least one of the one or more devices in dependence on the received environmental state data not satisfying the predefined environmental state threshold. The output may be arranged in use to output a control signal for controlling operation of the one or more devices in accordance with the amended usage pattern.

The processor may be arranged in use to determine an amended usage pattern associated with a lower energy consumption value by selecting a final operative state of at least one of the one or more devices, the final operative state being associated with a lower energy consumption value, in dependence on the received environmental state data being greater than the predefined environmental state threshold.

In certain embodiments the one or more devices comprise one or more illumination devices, the environmental state may relate to an ambient brightness value associated with a location of the one or more illumination devices, and the input may be arranged in use to receive a first ambient brightness value associated with the location comprising the one or more illumination devices. The processor may be arranged in use to: determine the final system state associated with the selected usage pattern, the final system state being dependent on the final operative state of each one of the one or more illumination devices; determine the energy consumption value associated with the final system state associated with the selected usage pattern; determine if the received first ambient brightness value satisfies a predefined ambient brightness threshold value; determine an amended usage pattern associated with a lower energy consumption value by reducing a brightness value of at least one of the one or more illumination devices in dependence on the received first ambient brightness value being greater than the predefined ambient brightness threshold value. The output may be arranged in use to output the control signal for controlling operation of the one or more illumination devices in accordance with the amended usage pattern.

In certain embodiments the output may be arranged in use to output a control signal for reducing the brightness value of at least one of the one or more illumination devices by a predefined increment. The input may be arranged in use to receive a second ambient brightness value associated with the reduced brightness value of the at least one of the one or more illumination devices. The processor may be arranged in use to determine if the second ambient brightness value is equal to the predefined ambient brightness threshold value. The output may be arranged in use to output a further control signal for further reducing the brightness value of at least one of the one or more illumination devices by the predefined increment, until the received ambient brightness value is equal to the predefined ambient brightness threshold value. In certain embodiments the controller may comprise one or more environmental sensors arranged in use to capture environmental data associated with a location of any one or more operatively coupled devices.

In accordance with yet a further aspect of the invention there is provided a method of locating a misplaced object within a physical space, the misplaced object having been misplaced within the physical space by a user. The method may comprise: receiving a query from the user to locate the misplaced object; analysing a plurality of maps of the physical space, each map defining a location of one or more objects located within the physical space at a time coordinate, the time coordinate defining a point in time; identifying the misplaced object in the plurality of maps; and determining a current location of the misplaced object from the most recent map comprising the misplaced object. The maps may comprise information regarding the location of each object within a physical space, and in certain embodiments may comprise image data. The present method enables the last known location of the misplaced object to be determined using the map data.

The method may comprise: analysing a plurality of user location data, the user location data defining the location of the user within the physical space over a period of time; identifying maps associated with a time coordinate correlated with the period of time; analysing the identified maps and identifying the misplaced object in the identified maps; and determining the current location of the misplaced object from the identified map occurring most recently comprising the misplaced object. In certain embodiments the method may comprise identifying one or more objects associated with a location correlated with the user location data from the identified maps; analysing the one or more identified objects in the identified maps; identifying the misplaced object from the one or more analysed objects; and determining the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently. This embodiment assumes that at some preceding point in time the misplaced object was in the user's possession, and it therefore follows that its current location will to some extent be dependent on the user's previous location. Accordingly, analysing the user's preceding location and using this information to restrict the analysis to objects only within a vicinity of the user's preceding location significantly reduces the processing overhead required to identify the misplaced object.
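
The following Python sketch illustrates one way the map analysis described above might be restricted using user location data: maps are scanned from most recent to oldest, and an object is only accepted if it lies within an assumed maximum distance of the user's recorded position at that time. The map and track representations are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class MapEntry:
    timestamp: float
    objects: dict        # object name -> (x, y) position within the physical space

def locate(maps, target, user_track=None, max_distance=2.0):
    """Return the most recent known location of the misplaced object. If user
    location data is supplied (timestamp -> (x, y)), only objects lying within
    max_distance of the user's position at that time are considered."""
    for entry in sorted(maps, key=lambda m: m.timestamp, reverse=True):
        if target not in entry.objects:
            continue
        if user_track is not None and entry.timestamp in user_track:
            ox, oy = entry.objects[target]
            ux, uy = user_track[entry.timestamp]
            if ((ox - ux) ** 2 + (oy - uy) ** 2) ** 0.5 > max_distance:
                continue
        return entry.objects[target]
    return None

maps = [MapEntry(10.0, {"keys": (1.0, 1.0)}), MapEntry(20.0, {"keys": (4.0, 2.5)})]
print(locate(maps, "keys", user_track={20.0: (4.5, 2.0)}))  # (4.0, 2.5)
```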

The method may comprise: identifying one or more objects located at a distance equal to or less than a predefined threshold distance from the user location data; analysing the one or more objects located at the distance equal to or less than the predefined threshold distance from the user location data; identifying the misplaced object from the analysed one or more objects located at the distance equal to or less than the predefined threshold distance from the user location data; and determining the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

In certain embodiments each map may comprise an image captured at a different point in time of one or more objects located in the physical space and distance information associated with each one of the one or more imaged objects, the distance information may be expressed with respect to a point of reference, and the method may comprise: analysing a plurality of images of the physical space; identifying the misplaced object in the plurality of images using image recognition; and determining the current location of the misplaced object from the most recently captured image comprising an image of the misplaced object.

The distance information associated with each one of the one or more imaged objects may be determined in dependence on a time of flight of a reflectance signal reflected from each one of the one or more objects located in the physical space. The method may comprise: determining the current location of the misplaced object from the most recently captured image comprising the image of the misplaced object and distance information associated with the imaged objects comprised in the most recently captured image.

In accordance with yet a further aspect of the invention there is provided a controller for locating a misplaced object within a physical space, the misplaced object having been misplaced within the physical space by a user. The controller may comprise: an input arranged in use to receive a query from the user to locate the misplaced object; a processor arranged in use to: analyse a plurality of maps of the physical space, each map defining a location of one or more objects located within the physical space at a time coordinate, the time coordinate defining a point in time; identify the misplaced object in the plurality of maps; and determine a current location of the misplaced object from the most recent map comprising the misplaced object. In certain embodiments the controller may also comprise an output for outputting the determined current location of the misplaced object to the user. The controller benefits from the same advantages as set out in relation to preceding aspects of the invention.
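
A hedged sketch of the controller-side search combining image recognition with ToF distance information is given below; the recognise_objects stub stands in for a real image-recognition stage and is purely hypothetical, as are the frame and detection data structures.

```python
from dataclasses import dataclass

@dataclass
class ToFFrame:
    timestamp: float
    image: object        # raw image data captured by the camera (placeholder)
    depth_map: dict      # pixel -> distance in metres, from the reflectance signal

def recognise_objects(image) -> dict:
    """Hypothetical image-recognition stage returning object name -> pixel.
    A real system would use a trained detector here; this stub is illustrative."""
    return getattr(image, "labels", {})

def last_seen(frames, target):
    """Scan frames from newest to oldest; when the misplaced object is recognised,
    combine its pixel position with the ToF distance to report where it was seen."""
    for frame in sorted(frames, key=lambda f: f.timestamp, reverse=True):
        detections = recognise_objects(frame.image)
        if target in detections:
            pixel = detections[target]
            return pixel, frame.depth_map.get(pixel)
    return None

class FakeImage:
    def __init__(self, labels):
        self.labels = labels

frames = [ToFFrame(1.0, FakeImage({"keys": (120, 80)}), {(120, 80): 2.4}),
          ToFFrame(2.0, FakeImage({}), {})]
print(last_seen(frames, "keys"))  # ((120, 80), 2.4)
```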

In certain embodiments the processor may be arranged in use to: analyse a plurality of user location data, the user location data defining the location of the user within the physical space over a period of time; identify maps associated with a time coordinate correlated with the period of time; analyse the identified maps and identify the misplaced object in the identified maps; and determine the current location of the misplaced object from the identified map occurring most recently comprising the misplaced object.

In certain embodiments the processor may be arranged in use to: identify one or more objects associated with a location correlated with the user location data from the identified maps; analyse the one or more identified objects in the identified maps; identify the misplaced object from the one or more analysed objects; and determine the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

In certain embodiments the processor may be arranged in use to: identify one or more objects located at a distance equal to or less than a predetermined threshold distance from the user location data; analyse the one or more objects located at the distance equal to or less than the predetermined threshold distance from the user location data; identify the misplaced object from the analysed one or more objects located at the distance equal to or less than the predetermined threshold distance from the user location data; and determine the current location of the misplaced object as the location of the identified misplaced object occurring in the identified map occurring most recently.

In certain embodiments each map may comprise an image captured at a different point in time of one or more objects located in the physical space and distance information associated with each one of the one or more imaged objects, the distance information being expressed with respect to a point of reference. The processor may be arranged in use to: analyse a plurality of images of the physical space; identify the misplaced object in the plurality of images using image recognition; and determine the current location of the misplaced object from the most recently captured image comprising an image of the misplaced object.

In certain embodiments the input may be arranged in use to receive time-of-flight data from a time-of-flight (ToF) camera, the time-of-flight data relating to the time of flight of a reflectance signal reflected from each one of the one or more objects located in the physical space; and the processor may be arranged in use to determine the current location of the misplaced object from the most recently captured image comprising the image of the misplaced object and distance information determined from the received time-of-flight data.

In certain embodiments the controller may comprise an image capture device. The image capture device may comprise a time-of-flight (ToF) camera. This significantly simplifies how location data may be determined for each object captured in an image. In particular, the use of a ToF camera means that each captured image of a physical location may also be associated with location information regarding each imaged object located within the physical space, from a reflectance signal captured by the ToF camera.

In certain embodiments the controller may comprise a motion sensor. In this way, image data of a physical space may only be captured in dependence on a user having entered the physical space. This significantly reduces the amount of data captured and requiring analysis.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a schematic illustration of an environment in which a node in accordance with an embodiment of the invention may be implemented, and in particular the figure illustrates two rooms each comprising a node and one or more devices located in each room operatively coupled to the nodes, and arranged to be operated by the node;

Figure 2 is a process flow chart outlining the steps comprised in a method implemented by the nodes of Figure 1 for controlling operation of the one or more operatively coupled devices;

Figure 3 is a schematic illustration of the functional components of the node of Figure 1 ;

Figure 4 is a process flow chart outlining a method that may be implemented by the node of Figure 3 in processing received inputs;

Figure 5 is a schematic representation of a mesh network in which a plurality of nodes each located at different locations within a house may be arranged;

Figure 6 is a process flow chart illustrating a method of how the node of Figure 3 may process received inputs to generate a usage pattern, in accordance with an embodiment;

Figure 7 is a process flow chart illustrating a method of how the node of Figure 3 may refine a generated usage pattern, in accordance with an embodiment;

Figure 8 is a diagram illustrating the principle of how a plurality of different usage patterns may each result in the same desired final system state;

Figure 9 is a process flow chart illustrating a method carried out by the node of Figure 3, for determining a user's intention associated with a generated usage pattern, in accordance with an embodiment;

Figure 10 is a process flow chart illustrating a method carried out by the node of Figure 3, for classifying frequently executed usage patterns, in accordance with an embodiment;

Figure 11 is a process flow chart illustrating a method carried out by the node of Figure 3, for predicting a desired usage pattern;

Figure 12 is a process flow chart illustrating a method carried out by the node of Figure 3 in implementing the method of Figure 11, in accordance with an embodiment;

Figure 13 is a process flow chart illustrating a method carried out by the node of Figure 3 in reducing the energy consumption associated with a usage pattern;

Figure 14 is a process flow chart illustrating a method carried out by the node of Figure 3 for varying the brightness setting of one or more operatively coupled illumination devices whilst maintaining an ambient brightness threshold value;

Figure 15 is a process flow chart illustrating a user compromise algorithm that may be comprised in the method of Figure 14, in accordance with an embodiment of the invention;

Figure 16 is a process flow chart illustrating a method for conducting user identification carried out by the node of Figure 3, in accordance with an embodiment;

Figure 17 is a schematic illustration of the functional components comprised in an alternative node, in accordance with an embodiment;

Figure 18 is a process flow chart outlining the steps comprised in a method for locating a misplaced object within a physical environment as carried out by the node of Figure 17;

Figure 19 is a process flow chart outlining how user location information may be generated for subsequent use in the method of Figure 18; and

Figure 20 is a process flow chart outlining a method carried out by the node of Figure 17 for locating a misplaced object upon receipt of a user request.

DETAILED DESCRIPTION

Embodiments of the present invention relate to a home automation system configured to learn a user's usage behaviour with respect to home appliances or electronic devices and to automate usage of these appliances in dependence on this learned behaviour. The system may be used to automate usage of any electronic devices in the home such as heating systems, lighting or other household appliances. The term device may refer to one device or a set of devices controlled by one electric circuit, for example ceiling lights within a room. The system is adaptable to compensate for changes in users' usage behaviour of operatively connected devices, for example the use of heating systems may vary in different seasons.

The system may identify and distinguish between users via user identification means such as their mobile phone, facial recognition or any other identification means configured to uniquely identify a user to the system. The system may then retrieve usage patterns of appliances for the identified user and then, using information captured, for example from sensors, determine which usage pattern is consistent with the current observed situation. The system may then operate the appliances in accordance with the selected usage pattern. Within the present context, a usage pattern may be defined as a sequence of inputs defining the desired state of one or more operatively connected devices.

In certain embodiments the usage patterns of appliances may be optimised in order to reduce energy consumption. This may be carried out without requiring any user input and with minimal impact on the user's comfort. Energy reductions may also be made by monitoring user occupancy information, and turning devices OFF, when a room is unoccupied.

Figure 1 is a schematic illustration in accordance with embodiments of the invention of how the proposed home automation system may be implemented in a home via control devices 100 situated within each room 102. Each room in a house may contain a control device 100, henceforth referred to as a node, which may be operatively connected to any electrical devices within the room 102 via hard wiring or any wireless communication means such as Wi-Fi™, Bluetooth ® or mesh networks such as Z-Wave™ or Zigbee™. Each node enables the user to control operatively connected devices, but also monitors usage patterns of devices and learns preferred user behaviour in relation to the devices in order to automate operation of the operatively connected devices in accordance with the usage patterns. The nodes may be connected to form a wireless mesh network. This system of inter-connected nodes may also be referred to as a swarm. A user may control any device, which is operatively connected to any one node from any node in the network. Thus, each node may control any device connected to any node in the network.

In the example of Figure 1, node 1 100(1) is operatively connected to the devices within room 1 102(1), namely a thermostat 104, lamp 106 and a television 108. Node 2 100(2) is operatively connected to the devices within room 2 102(2), namely a washing machine 110 and a ceiling light 112.

In the example of Figure 1 , node 1 100(1) and node 2 100(2) are both given equal status. Node 1 100(1) and node 2 100(2) communicate with each other such that information is shared between the nodes. Control of the devices within room 1 may be carried out by node 1 100(1) unless there is a partial failure at the node, in which case node 2 100(2) takes over control of these operatively connected devices. A partial node failure may involve a component of the node such as the user interface failing.

In certain embodiments the nodes 100 may be operatively connected to any device within the home.

In certain embodiments the system may comprise one or more nodes 100 located within one or more rooms 102. One or more different appliances may be operatively connected to one or more nodes.

In some embodiments a node may be operatively coupled to a single set of appliances, such as a heating system comprising one or more heating devices, and/or a lighting system comprising one or more lights, and/or a system comprising one or more power plug points and/or a system comprising one or more fan controls.

For illustrative purposes, the description that follows will describe an embodiment in which the operatively connected devices are lights. Each device may relate to a single light such as a lamp, or to a set of lights which are controlled by a single circuit or switch, for example a plurality of ceiling mounted lights operatively controlled by a single light switch. Each node may be operatively connected to each light within a room and may be configured to replace the function of a traditional light switch. The system may be configured to learn user behaviour relating to the user's usage of the connected lights, and to automate operation of the lighting system in dependence on the learned usage behaviour. The node may be retrofitted to replace an existing light switch. The node may be configured to control a light by turning the light on or off and, where the light has a dimmer function, the node may also control the brightness of each operatively connected light.

A general overview of the method adopted by the home automation system for automating operation of the lighting system is illustrated by the flow chart of Figure 2. A user may be identified at step 202 using user identification means which may include, but are not limited to, entering a password or pin into the node, facial recognition, identification of a user's mobile device, voice recognition, use of a key fob or any other form of user identification method, which uniquely identifies the user to the system.

At step 204 the system retrieves usage patterns associated with connected appliances, which usage patterns are associated with the identified user. In dependence on the current situational status, such as the time of day, year etc., the system may select at step 206 the usage pattern which is anticipated to be most desirable to the user, and then at step 208 operate the connected appliances in accordance with the selected usage pattern. The current situational status may refer to the current ambient brightness, room occupancy, time of day, time of the year, user location within the house or any other user preferences. The system may determine the current situational status based on received sensor data, and/or predefined user preference data.

Overview of the functional components of a node:

Figure 3 is a schematic overview of the functional components of a node 100 in accordance with an embodiment.

The node 100 may comprise an input/output (I/O) 306, which enables other functional components to communicate with the node.

These other components may include, but are not limited to, a user interface 308, sensors 310, communication module 312 and a camera 314. In Figure 3 these devices are shown as external to the node, however in other embodiments, one or more of these components may be embedded within the node.

The user interface 308 provides a means for the user to interact with the node 100. The user can input instructions to the node and receive information back from the node. The user interface may include a screen with control elements such as buttons or a touch screen. In some embodiments the user interface may also have the ability to process voice commands or commands based on motions such as hand gestures. The user inputs commands to the user interface to operate the operatively connected devices within the room or within any other room in the house.

In some embodiments the user may also input user information such as names of people who live within the house or set up new user profiles.

The user interface may also display information to the user such as energy consumption of connected devices or usage patterns assigned to different users. The user interface 308 may replace light switches within a room and the user may control the lighting via the node 100. The user interface may output information relating to the usage and energy consumption of each light.

One or more sensors 310 may provide the node with information about the ambient environment and/or activity within a room. Sensors may include, but are not limited to, ambient light sensors for detecting how bright/dark a room is, temperature sensors, motion sensors, presence sensors and humidity sensors.

Communication module 312 may be configured as an access point for a node to communicate with other nodes within the network. This allows the user to control any device operatively connected to any other node within the network. The communication module 312 may also allow nodes to exchange information such as power consumption information of connected devices.

The node 100 may also receive input from a camera 314 located on the node or elsewhere, which supplies the node with video information of the room.

Core functional modules:

To enable the system to automate control of operatively connected devices, a user's control inputs for operatively connected devices to the node may be monitored. The node 100 may comprise a processor 316, a storage 318, and the following functional modules: a pattern generating module 302; a pattern refining module 303; a frequent pattern analysing module 304; energy saving module 320; and identification module 322. It is to be appreciated that the functionality of one or more of the aforementioned modules may be provided via software executed on the processor 316, or by one or more additional processors. Figure 4 outlines how user inputs may be passed through various functional modules of the node in order to generate usage patterns, which may be subsequently automated.

Firstly the user inputs received at the node, which inputs relate to inputs for operating operatively connected devices, may be used by the pattern generating module 302 to generate a usage pattern.

This usage pattern may then be refined by the pattern refining module 303 in order to account for user intention, generating a refined usage pattern, which may be stored in storage 318.

In order to determine which of these stored refined usage patterns should be automated, the system may analyse the stored refined usage patterns to determine which patterns a particular user frequently uses. This step may be carried out by the frequent pattern analysing module 304.

These functional modules are now described in more detail.

Pattern generating module 302 may be configured to monitor the usage of appliances and generate usage patterns based on the received usage data. This may include storing a sequence of inputs received in relation to operatively coupled devices as a usage pattern. In some embodiments the usage pattern generated from the sequence of inputs may be stored in storage 318 as a directed graph. A directed graph is a term of art, which refers to a graph comprising a set of vertices connected by edges where the edges have a direction associated with them (e.g. see Wikipedia). Inputs are stored as vertices and the order of the sequence of inputs is represented by the direction associated with the edges. By representing the usage patterns as directed graphs, the system may more easily analyse these patterns and extract the desired data for automating particular usage patterns.

In some embodiments the usage pattern may include information such as sensor information, time or user information. This information may be contained in the edges of the directed graph.
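Purely as a hedged illustration of the directed graph representation described above, the following Python sketch stores each input as a vertex and each ordering relation as a directed edge carrying contextual metadata. The class and field names are illustrative assumptions rather than the claimed implementation.

```python
class UsagePattern:
    """A usage pattern stored as a simple directed graph of user inputs."""

    def __init__(self):
        self.vertices = []   # each vertex is one input, e.g. ("lamp", "ON")
        self.edges = []      # (from_index, to_index, metadata) tuples encoding the input order

    def add_input(self, device, state, metadata=None):
        self.vertices.append((device, state))
        if len(self.vertices) > 1:
            # the directed edge points from the previous input to the new one and may
            # carry user, time-of-day or sensor information, as described above
            self.edges.append((len(self.vertices) - 2, len(self.vertices) - 1, metadata or {}))

pattern = UsagePattern()
pattern.add_input("ceiling_light", "ON")
pattern.add_input("lamp", "ON", {"user": "user_1", "time_of_day": "19:05"})
```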

Pattern refining module 303 may be configured to refine the received usage pattern to remove accidental inputs. Usage patterns may also be refined based on assumed user intention predicted by previous behaviour.

Frequent pattern analysing module 304 may be configured to retrieve all refined usage patterns and to then analyse these stored patterns to determine which patterns are the most frequently used patterns. These patterns are labelled frequent usage patterns and may then be stored in storage 318. Operation of these frequent usage patterns may later be automated by the node.

In an embodiment the frequent usage patterns are determined for individual users.

In one embodiment the frequent pattern analysing module 304 determines frequent usage patterns by applying a frequent itemset mining algorithm to the directed graph representing the refined usage patterns. Frequent itemset mining is a term of art, which the person skilled in the art will be familiar with, and which refers to a branch of data mining that focuses on looking at sequences of actions or events.

Energy saving module 320 may monitor the power consumption of devices operatively coupled to the node 100. The power consumption information may be stored in storage 318. The energy saving module may be configured to make energy savings by analysing stored usage patterns whilst still maintaining the user's comfort. Whilst operatively connected devices are being used, the energy saving module may be configured to make energy savings based on information obtained from the sensor information or any other information such as the local weather or time of day.

In an embodiment, a method carried out by the energy saving module may include defining a threshold ambient brightness level for each room. The threshold ambient brightness level may be determined based on scientific data relating to optimum brightness levels, time of day or user preference. The energy saving module may maintain this threshold brightness without requiring user input by varying the brightness of lights within the room based on the measured ambient light. For example, if the sun comes out, the ambient brightness may increase and the lights may be dimmed to maintain the desired brightness, thereby saving energy. If the user then increases the brightness of the light, the energy saving module may attempt to compromise with the user in order to make energy savings whilst maintaining user comfort.

The energy saving module may also be configured to turn off devices or put devices into standby mode if all users have left the room for a certain period of time. The energy saving module may determine that all users have left the room using motion sensors or data from the camera 314. After a predefined amount of time has passed, devices may be turned off.

Identification module 322 may include means for identifying users of the node. This may include one or more of facial recognition, detection of a user's mobile device, voice recognition or use of a password.

The node 100 may include storage 318 which may store generated usage patterns, refined usage patterns, frequent usage patterns, energy consumption information, user information, sensor information or any other information.

Main processor 316 may be configured to process data received from the other modules in the node 100. The functional components of a node 100 may operate in parallel.

Nodes 100 may be connected via a decentralised mesh network 500 as shown in Figure 5, forming a swarm. There is no central or main node through which the other nodes must pass information in order to communicate. In this way, if one node fails, the other nodes continue to function and the entire system will not fail. The nodes 100 communicate via communication modules 312. In this way the system is never reliant on the communication between any two nodes to function. A single point of failure will not result in failure of the entire system.

Generating usage patterns:

To automate usage behaviour of operatively connected devices, the system may first obtain usage data for individual users, generate usage patterns dependent on the received data and analyse these usage patterns, which may later be used to automate control of operatively connected devices.

The pattern generating module may gather user usage data, generate usage patterns determined by this data and store these usage patterns as directed graphs in storage 318.

The process flow chart 600 of Figure 6 provides an exemplary method which may be adopted by the pattern generating module 302.

When a user input is received from the user interface at step 602, the main processor sends this input to the pattern generating module. At step 604 a new pattern is created and the received user input is assigned as the first element of the pattern. In some embodiments a sensor signal such as a signal received by a motion sensor may be input as the first element in a pattern. When the node receives a sensor signal it may send this signal to the pattern generating module.

At step 606 a threshold timer is started in response to the first user input. At step 608 it is determined whether the timer has expired. If the timer has not yet expired, the process proceeds to step 610, in which it is determined whether a user input has been received. If an input has been received, at step 612, the input is added to the pattern.

If instead at step 610, no user input has been received, the process returns to step 608. The pattern generating module therefore waits for an input within a predefined threshold time. The threshold time allows the system to determine whether multiple user inputs occur within a set time of each other. If user inputs are related and intended to form part of the same usage pattern, they will likely occur within a short period of time.

At step 614 it is determined whether the generated usage pattern is longer than a certain threshold length. If many inputs are received within the threshold time, after a certain length it is unlikely that the user intended for, for example, 30 inputs to all form part of the same usage pattern. In this situation it is possible that the inputs are a mistake or a child playing with the node. A threshold length, beyond which no further inputs are added to the pattern, prevents excessively long usage patterns from being stored.

If at step 614 the pattern is shorter than the threshold length, the timer is restarted and the pattern generating module waits for another user input within the specified time period. The pattern is determined to be finished either at step 608 when a user input is not received within the specified time period or at step 614 when the usage pattern is determined to be greater than a predefined threshold length.

After the timer has expired at step 608 or the pattern has reached the threshold length at step 614, the process proceeds to step 616 in which the generated usage pattern is sent to the pattern refining module 303.
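The observation loop of Figure 6 may be sketched, under assumptions, roughly as follows in Python. The wait_for_input callable and the two constants are hypothetical stand-ins for the node's real input handling and configuration.

```python
THRESHOLD_SECONDS = 10     # assumed threshold time between related inputs
MAX_PATTERN_LENGTH = 10    # assumed threshold length of a usage pattern

def observe_pattern(wait_for_input):
    """Collect inputs into one usage pattern while they keep arriving within the threshold."""
    first = wait_for_input(timeout=None)                  # block until the first input arrives (steps 602/604)
    pattern = [first]
    while len(pattern) < MAX_PATTERN_LENGTH:              # threshold length check (step 614)
        nxt = wait_for_input(timeout=THRESHOLD_SECONDS)   # threshold timer (steps 606/608)
        if nxt is None:                                   # timer expired: the pattern is finished
            break
        pattern.append(nxt)                               # input received in time: add it (step 612)
    return pattern                                        # hand over to the pattern refining module (step 616)
```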

Refining usage patterns:

Once a usage pattern has been generated by the pattern generating module 302, the usage pattern may be passed to the pattern refining module 303.

Figure 7 provides a method 700 which may be adopted by the pattern refining module.

At step 702, the system analyses the generated usage pattern stored as a directed graph to determine whether the directed graph includes any cycles. Cycles are defined as repeated usage inputs, for example a switch being turned on and off multiple times in succession. Cycles are assumed by the system to result from user error or from a child playing with a node. If cycles are found within the usage pattern, they are removed at step 704. Otherwise the process proceeds to step 706 in which it is determined whether a directed graph already exists for this pattern. If a directed graph does not exist for this pattern, at step 708 a directed graph for the pattern is created. This ensures that the system does not reproduce a directed graph that already exists. Representing the usage pattern as a directed graph allows the system to more easily refine and analyse the generated usage patterns. The nodes of the directed graph represent user inputs in the pattern and the edges contain information such as the user, time of the input and relevant sensor data. The order of execution of the pattern is determined by the directed edges of the directed graph.
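One possible, simplified reading of the cycle-removal step 704 is sketched below in Python: runs of repeated inputs for the same device are collapsed so that only the device's final requested state survives. This is an illustrative interpretation only, not the claimed algorithm.

```python
def remove_cycles(pattern):
    """pattern is an ordered list of (device, state) inputs; returns the refined list."""
    refined = []
    for device, state in pattern:
        # drop any earlier input for the same device: an on/off/on "cycle" collapses
        # to the final state requested for that device
        refined = [(d, s) for d, s in refined if d != device]
        refined.append((device, state))
    return refined

print(remove_cycles([("lamp", "ON"), ("lamp", "OFF"), ("lamp", "ON"), ("tv", "ON")]))
# [('lamp', 'ON'), ('tv', 'ON')]
```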

If at step 706 it is determined that a directed graph already exists for the pattern, or following step 708 in which a directed graph is created for the pattern, the method proceeds to step 710, in which user intention analysis is carried out. Following the user intention analysis, the directed graph is updated to represent the now refined usage pattern and stored at step 712.

User intention analysis comprises determining the desired end state of the affected devices (i.e. the operatively connected lights) in dependence on the inputs received from the user. The end state may be defined as the state of every operatively connected device within the system after the inputs have been received from the user. This may include whether each light is either ON or OFF, and the brightness of each light. In particular, user intention analysis comprises determining whether the desired end state is of more importance or whether the usage pattern adopted to arrive at the end state is of greater importance.

A usage pattern may be represented by a directed graph defining the sequence in which the user inputs are received along with any associated sensor data. The user inputs define the end states of at least some of the operatively connected devices (i.e. the lights) that the user wishes to change. The usage pattern thus defines a desired change in state for at least some of the operatively connected devices and the sequence in which the desired change is carried out. By analysis of usage patterns in combination with analysis of the end system state of all operatively connected devices, it may be possible to determine a user's intention when executing the given pattern.

In certain embodiments it may be advantageous to determine a user's intention in dependence on analysis of one or more usage patterns, and to subsequently automate operation of the operatively connected devices, such as lights, in accordance with the determined user intention rather than by strict adherence to observed usage patterns. For example, a user may wish to achieve a particular end system state, but the usage pattern required to achieve the desired end system state will vary depending on the initial state of the system - that is to say, it will vary depending on the initial state of each device within the system. It follows that there may exist a plurality of different usage patterns, associated with different user inputs, which are all intended to achieve the same desired end system state. The diagram of Figure 8 visually represents this. In Figure 8, each different pattern may be represented by a different directed graph. For example, consider a first scenario in which a user wishes to turn off all the lights in a room. The user will provide inputs for turning off those lights which are currently ON in the room and the usage pattern would be characterised by those inputs - the lights which are already OFF may remain unchanged. If in an alternative scenario all of the lights are ON, then the user input will comprise inputs for turning all of the lights OFF. In other words, the usage patterns between the two scenarios, which may be characterised by the user inputs, may differ in dependence on the starting state of the system. In both scenarios the user intention is the same: to turn all of the lights OFF. The aforementioned principle is illustrated in Figure 8, where usage patterns 1, 2 and 3 are all different, but lead to the same desired end state.

In order to determine user intention, the system may analyse the end system state associated with each stored usage pattern, in order to identify correlations between the different usage patterns. For example, this might comprise identifying usage patterns that share a common end system state, and which are associated with the same user intention.

To further illustrate the aforementioned concept, consider an example in which there are two rooms, each containing a node. For simplicity, each room may comprise two lights, which are operatively connected to the corresponding nodes. The lights in room 1 are labelled L1 and L2 and the lights in room 2 are labelled L3 and L4. In this example the user may wish to turn off all of the lights present in both rooms, e.g. before going to bed.

The initial system state may be: L1 on, L2 on, L3 off, L4 on

For the user to turn off all of the lights given this initial system state, the usage pattern must be: L1 off, L2 off, L4 off. Recall that the usage pattern is characterised by the received inputs; since L3 was already off during the initial system state, only inputs for L1, L2 and L4 were required.

The final system state is then: L1 off, L2 off, L3 off, L4 off

In a different scenario, the initial system state may be: L1 off, L2 off, L3 on, L4 on

The usage pattern must then be: L3 off, L4 off.

Again, the final system state is then: L1 off, L2 off, L3 off, L4 off.

In both scenarios the user intent was the same - to turn off all the lights. However, the usage patterns required to achieve this differ. Accordingly, by exclusively analysing the usage patterns it is not possible to determine that the user intent was the same. This may be achieved only by looking at the end system state. It is clear that in this example the usage pattern itself is of little importance, but rather it is the intent, and specifically the end system state, which is of more interest with a view to automating operation of the lights.

In this example it is not efficient for the system to store the individual usage patterns, which are characterised by different sequences of inputs, in dependence on the initial system state associated with each usage pattern. Rather it is more efficient to represent the plurality of usage patterns via a single usage pattern characterised by the desired end system state, which is for all of the lights to be in the OFF state, irrespective of the initial system state. The system should therefore store the usage pattern as the desired end system state: L1 off, L2 off, L3 off, L4 off. When subsequently automating use of the lights, the individual inputs required to achieve the desired end system state may be automatically generated in dependence on the initial system state.
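As a hedged sketch of that last point, the inputs required to reach a stored end system state from an arbitrary initial state might be generated as follows; the dictionary representation of a system state is an assumption made for the example.

```python
def inputs_for_end_state(initial_state, end_state):
    """Both arguments map device name -> 'ON'/'OFF'; returns only the inputs that must change."""
    return {device: desired
            for device, desired in end_state.items()
            if initial_state.get(device) != desired}

initial = {"L1": "ON", "L2": "ON", "L3": "OFF", "L4": "ON"}
end     = {"L1": "OFF", "L2": "OFF", "L3": "OFF", "L4": "OFF"}
print(inputs_for_end_state(initial, end))   # {'L1': 'OFF', 'L2': 'OFF', 'L4': 'OFF'}, matching the first scenario above
```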

Storing a usage pattern defined in terms of the end system state ensures that, irrespective of what the initial state of the system may be, execution of the usage pattern will always result in the desired end system state. Additionally, if a final system state is often obtained after repeated use, storing the usage pattern as the final system state aids the frequent pattern analysing module in more accurately determining the frequently used usage patterns. In turn this improves the system's ability to automate operation of the operatively connected devices in accordance with a user's desired usage.

To assist the system in identifying user intentions, and more particularly in determining whether a desired end state may be used to represent a usage pattern, the system may assign each state associated with each operatively connected device a weighting variable. This weighting variable may be increased or decreased depending on various factors.

For example, the weighting variable for a system state may depend on how often and/or frequently the end state is achieved. The more frequently usage patterns resulting in the same specific end system state are observed, the more likely it is that the individual usage patterns actually relate to a shared user intention, and therefore would be better characterised by the achieved end system state.

In certain embodiments the weighting variable associated with an end system state may also be dependent on how often and/or frequently a specific usage pattern results in the final end system state. If the same usage pattern is recorded to result in the same end state frequently, then it is likely that the user's intention is to achieve the specified end system state. A record may be maintained of the number of times a particular usage pattern results in a particular end system state. The weighting variable may also be dependent on the time of day at which the usage patterns resulting in the end system state are observed. Monitoring of time is helpful in identifying user routines, and may be helpful in identifying usage patterns which relate to routine behaviour, which in turn lends itself to automation. For example, a user may routinely execute a usage pattern resulting in a particular end system state at a similar time each morning, e.g. the user may turn on the kitchen lights every morning around 7AM shortly after waking. A similar usage pattern resulting in the same end system state, observed in the evening, e.g. at 8PM, is unlikely to belong to the same routine behaviour. Accordingly, identifying different observed usage patterns as belonging to the same routine behaviour purely on the basis of a shared end system state is unlikely to accurately capture routine behaviour. Monitoring a time variable assists in more accurately identifying routine behaviours.

Routine behaviours, and in particular usage patterns, may change over time, and it is important that the current system is flexible and capable of accounting for such changes. Again, weighting variables associated with usage patterns and/or end system states may be used to this end. For example, a weighting variable associated with a particular end system state may be decreased where that end system state is not observed for a predefined period of time. End system states associated with relatively low weighting variable values are less representative of routine usage behaviour than those associated with high weighting variable values. Usage patterns and end system states associated with higher weighting values are more likely to accurately reflect routine usage behaviours, and this information may be used by the system when automating operation of the operatively connected devices (i.e. when automating operation of the lights). In this regard, usage patterns and end system state weighting variables may be time dependent.

Figure 9 provides an example of how the user intention analysis step 710 of Figure 7 may be carried out to determine whether to store the generated usage pattern in terms of the received user input or to store it in terms of the end system state.

The usage intention analysis commences after either step 706 or step 708. The current system state is retrieved at step 902. The current system state defines the state that the system is in after receipt of the user inputs in steps 602 and 610 of Figure 6. Accordingly, the current system state within the present context relates to the aforementioned end system state.

The generated usage pattern is then analysed, at step 904, to determine if the inputs defining the pattern affect every operatively connected device, e.g. all of the lights. In some embodiments the directed graph may be analysed. If the usage pattern affects every device within the system, then the usage pattern is effectively the end system state, since it will necessarily define the end state of each one of the operatively connected lights, and the method proceeds to step 910.

If instead the usage pattern does not affect every device, then the method proceeds to step 906, where the weighting variable associated with the end system state is increased. In certain embodiments the weighting variable value associated with the end system state may be increased by a predefined increment.

At step 908 it is determined if the weighting variable associated with the end system state is larger than a predefined threshold value. In this regard it is important to note that the weighting variable value is cumulative. That is to say the value of the weighting variable will be dependent on how frequently the given end system state has been observed. To determine this the system will consult previously recorded end system states, to determine if the current end system state has been previously observed, and to update the cumulative weighting value associated with the given end system state. It follows that the first time that the home automation system is used, the weighting variable will never be larger than the predefined threshold value, and the decision at step 908 will necessarily be NO, and the method proceeds with step 910, where the usage pattern is left unchanged, and the method continues with step 712 of Figure 7. It is only once the home automation system has been observing and recording usage behaviour over a time period, and has collated a volume of usage patterns and end system state information, that the cumulatively increasing weighting variable value associated with a routinely observed end system state will be large enough to be greater than the predefined threshold value at step 908. If the cumulative weighting variable value is determined to be greater than the predefined threshold value, then the method proceeds with step 912, and the end system state is stored as the usage pattern - that is to say that the previously received inputs comprised in the usage pattern are replaced by the end system state. In this way the system is able to identify a user's intentions. This information may subsequently be used when automating operation of the connected devices, e.g. when automating operation of the lights.
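A minimal sketch of the decision made at steps 904 to 912, assuming an arbitrary increment and threshold and a simple dictionary of cumulative weights, could look as follows in Python. None of the chosen values or structures are prescribed by the method itself.

```python
WEIGHT_INCREMENT = 1.0    # assumed predefined increment (step 906)
WEIGHT_THRESHOLD = 5.0    # assumed predefined threshold value (step 908)

end_state_weights = {}    # frozenset of (device, state) pairs -> cumulative weighting value

def refine_with_intention(pattern_inputs, end_state, all_devices):
    """Return either the original inputs or the end system state, per the weighting rule."""
    affected = {device for device, _ in pattern_inputs}
    if affected == set(all_devices):
        return pattern_inputs                          # the pattern already defines every device (step 904)
    key = frozenset(end_state.items())
    end_state_weights[key] = end_state_weights.get(key, 0.0) + WEIGHT_INCREMENT
    if end_state_weights[key] > WEIGHT_THRESHOLD:      # routinely observed end state (step 908)
        return list(end_state.items())                 # store the end system state as the pattern (step 912)
    return pattern_inputs                              # otherwise leave the pattern unchanged (step 910)
```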

Extracting frequent patterns:

The refined usage patterns stored as directed graphs maintain a record of all user usage behaviour of operatively connected devices. However the system intends to automate control of operatively connected devices only when the user desires this. There may therefore be a threshold certainty that the correct pattern has been selected, which must be reached before the system may automate stored usage patterns.

The stored usage patterns may further be narrowed down by selecting the usage patterns, which are most frequently used. The system may determine which usage patterns are most frequently used by different users at various times to attempt to learn the user's habits relating to how they use devices within their home.

In order to determine which of the stored refined usage patterns are most frequently used, the system may apply a frequent itemset mining algorithm to the stored directed graphs. Examples of frequent itemset mining algorithms which may be used include the Apriori and FP-Growth algorithms.

The frequent pattern analysing module may operate according to the flow chart in Figure 10. At step 1002 a frequent itemset mining algorithm is applied to the directed graph. The algorithm finds any frequently executed usage patterns. At step 1004, the frequent usage patterns are classified based on the user, sensor information and time. At step 1006 the frequent usage patterns are stored.
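As a simplified stand-in for this analysis (and not a full Apriori or FP-Growth implementation), the following Python sketch simply counts identical refined patterns and keeps those whose support meets an assumed minimum.

```python
from collections import Counter

MIN_SUPPORT = 3   # assumed minimum number of occurrences for a pattern to count as frequent

def frequent_patterns(refined_patterns):
    """refined_patterns is a list of patterns, each an ordered tuple of (device, state) inputs."""
    counts = Counter(tuple(p) for p in refined_patterns)
    return [pattern for pattern, count in counts.items() if count >= MIN_SUPPORT]
```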

System learning via user feedback:

Once the frequent usage patterns have been extracted from the refined usage patterns, the system may aim to automate operation of devices according to these frequent usage patterns.

However, the system may first require a certain level of confidence to be met relating to which usage pattern a user desires before operation of operatively connected devices may be fully automated. This may ensure that the system does not inconvenience the user.

The system may therefore undergo an initial training period in which user behaviour relating to operation of operatively connected devices is monitored and in which the system attempts to predict desired usage behaviour from initial inputs and received sensor data. The system may take this initial data and then suggest a predicted desired usage pattern to the user. The user may agree or disagree with the system's predicted usage pattern and the system may store this feedback.

The user feedback may be stored as a weighting variable, which depends on the current operative setting, such that the system is more likely to suggest the same pattern again when similar operative settings arise. The system may collect this data over an extended period of time, such as weeks or months, in order to collect a large amount of user feedback before attempting to automate control of devices. In this way the system may improve the predictions of desired user usage of operatively connected devices over time in order to be able to automate operation of the one or more connected devices in accordance with observed usage patterns.

Example ways in which the system may predict a desired usage pattern of a user after the initial inputs are entered are described below.

In an embodiment the system may be configured to sort the frequent usage patterns into two groups depending on whether they affect all operatively connected devices within the system or only a selection of the operatively connected devices.

Figure 11 shows an example method for executing usage patterns based on a user input via the user interface on a node. The process flow chart aims to anticipate desired operation of devices within a single room.

In this embodiment the system retrieves, at step 1102, only those usage patterns which do not affect all operatively connected devices. If a user is controlling devices within a single room, there is no requirement for patterns that affect all operatively connected devices within a home.

At step 1104 the system waits for an input. An input may be an input at a user interface or received sensor data, for example from a motion sensor. If an input is received, at step 1106 it is determined whether the input has been received at the current node. This step ensures that only the node at which the input is received carries out this process; the remaining nodes in the system return to step 1102 and subsequently await an input. At step 1108 the system determines if patterns exist which begin with the same input and which are assigned to the user. If a pattern does exist, the process proceeds to step 1110 and verifies that the pattern matches the necessary time constraints. For example, if the stored frequent usage pattern always occurs between 7am and 7:30am, this may be a time constraint for executing the pattern. In other embodiments, other constraints may exist, such as a dependence on other sensor data such as ambient brightness.

If the pattern fits within the time constraints, the method proceeds to step 1114 in which the user is asked if they wish to execute the identified pattern. If the user declines, the method proceeds to step 1116 in which the user's negative response is recorded. The method then returns to step 1102.

If instead the user accepts at step 1114, the method proceeds to step 1118 and the stored frequent usage pattern is executed and the user's positive response is stored. Maintaining a stored record of the user's responses improves the automation system's ability to predict desired usage patterns in the future.
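The lookup performed at steps 1108 and 1110 might, purely for illustration, be sketched as below; the record fields used for a stored frequent usage pattern are assumptions made for the example.

```python
from datetime import time

def matching_pattern(first_input, user, now, stored_patterns):
    """Return a stored frequent pattern for this user that starts with first_input and fits its time window."""
    for p in stored_patterns:
        if p["user"] != user:
            continue
        if p["inputs"][0] != first_input:
            continue
        if p["start"] <= now <= p["end"]:    # time constraint, e.g. 07:00 to 07:30 (step 1110)
            return p                         # candidate pattern to suggest to the user (step 1114)
    return None

patterns = [{"user": "user_1",
             "inputs": [("kitchen_light", "ON"), ("hall_light", "ON")],
             "start": time(7, 0), "end": time(7, 30)}]
print(matching_pattern(("kitchen_light", "ON"), "user_1", time(7, 10), patterns))
```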

Figure 12 is a further exemplary method, which demonstrates how the system may attempt to predict a user's desired usage pattern of operatively connected devices in dependence on their initial inputs.

If a user wishes to affect every device within the system, for example to turn off all devices, it is assumed that multiple inputs to the system will be received within a short period of time. For example, a user may be turning off every light within a home and to achieve this will rapidly input "OFF" for each light. In other examples, the user may affect every device within a home upon waking up in the morning or when returning home for the evening.

If there is a large enough difference between the initial state of the system and the current state of the system after multiple user inputs have been received within a short period of time, it is determined that it is likely the user is trying to affect all devices operatively connected to the system.

The method of Figure 12 aims to predict user behaviour relating to operation of operatively connected devices in dependence on two conditions being met. Firstly, the user inputs must occur within a short time period of each other. Secondly, the difference between the system state at the beginning of the process and after the inputs have been received must be sufficiently large. This enables the home automation system to assume that the user intends to affect the state of all operatively connected devices. The received user inputs are temporarily stored as a temporary usage pattern, to enable the system to retrieve and execute the pattern that is most closely matched to the received user inputs.

The method begins at step 1202 where the system retrieves usage patterns which affect all operatively connected devices. Only usage patterns, which affect all devices are considered in this method. At step 1204 the current system state is stored in storage 318 as a temporary variable initial_state. This enables the system to compare any changes to this initial state at a later stage in the method 1200.

At step 1206 the system waits for receipt of an initial user input. Once the input has been received, a timer is started at step 1208. The timer enables the system to determine whether any subsequent user inputs occur within the predefined threshold time period. The predefined threshold time period is required as the system attempts to determine whether the state of the overall system has changed sufficiently within this threshold time period.

At step 1210 the new current system state, comprising the change resulting from the received user input, is saved as a temporary variable curr_state. At step 1212 it is determined whether a temporary usage pattern has already been created to store the user inputs. If this is the first user input and no pattern exists, at step 1214 a temporary pattern, temp_pattern, is created and the user input is set as the first element of this pattern. If the pattern already exists, the process instead moves to step 1216 where the user input is appended to the temporary pattern. The temporary pattern comprises the current user inputs and later in the method 1200 enables the system to select a stored frequent pattern to execute, which most closely matches the current temporary pattern.

The system waits for receipt of another input at step 1218. Once another input has been received, at step 1220 it is queried whether the timer has expired. If the timer has expired, the method resets the temporary pattern at step 1222 and returns to step 1202. The entire method starts again as the user input was not received within the predefined threshold time period, and the system determines that the user is not trying to affect all operatively connected devices.

If instead at step 1220 the timer has not expired, the method proceeds to step 1224. At this step the temporary variable initial_state is compared to the temporary variable curr_state. There is a predefined threshold difference to which the difference between the two states is compared. The difference is calculated as the percentage of devices which are in a different state in the current state compared to the initial state. The system then attempts to determine whether the user is attempting to alter the state of every device within the system. If the difference between the starting system state and the current system state is sufficiently large, that is, greater than or equal to the threshold difference, and the inputs have already been determined to have occurred within the threshold time period, the system determines that the user is trying to affect all operatively connected devices.

If the difference between the two states is found to be less than the threshold difference, the method returns to step 1208 and the timer is reset. The system then repeats steps 1210 - 1220 and waits for another user input to be received within the predefined threshold time period. If the difference is found to be greater than or equal to the threshold difference, the method proceeds to step 1226, in which the system retrieves the frequent usage pattern which most closely matches the temporary pattern. The user is then asked if they wish to execute this pattern at step 1228. If the user declines, the user response is then stored at step 1230 and the method returns to step 1202.

If the user accepts at step 1228, the method proceeds to step 1232 where the pattern is executed and the user response is stored.

To better illustrate how this method may be applied, an example is given in which there are three rooms within a house, each with a node, each node being operatively connected to two non-dimmable lights. There are therefore a total of 6 lights within the home, which will be labelled L1 to L6 for simplicity.

In this example the threshold time is set at 5 seconds and the threshold difference is 50%.

We also assume that we have a frequent usage pattern which affects all the lights within the system:

L1 off, L2 off, L3 off, L4 off, L5 off, L6 off

Initially we assume that all 6 lights are on, such that at step 1204 the initial system state is set as: initial_state = L1 on, L2 on, L3 on, L4 on, L5 on, L6 on

At step 1206, the system waits for an initial user input. The initial user input relates to the user switching OFF L1. The timer is then started.

Initial user input:

The user switches OFF L1 and the temporary variables are as follows:

initial_state = L1 on, L2 on, L3 on, L4 on, L5 on, L6 on

curr_state = L1 off, L2 on, L3 on, L4 on, L5 on, L6 on

temp_pattern = L1 off

The system then waits for a second user input at step 1218.

User input 2:

Before the 5 second timer expires, the user switches OFF L2 and the temporary variables are set as follows:

initial_state = L1 on, L2 on, L3 on, L4 on, L5 on, L6 on

curr_state = L1 off, L2 off, L3 on, L4 on, L5 on, L6 on

temp_pattern = L1 off, L2 off

The difference between curr_state and initial_state is calculated as two out of the six lights being in a different state and the difference is therefore:

Difference = (2/6) × 100% = 33.33%

At step 1224, this difference is determined to be less than the threshold of 50% and the timer is reset at step 1208.

User input 3:

Before the 5 second timer expires, the user switches OFF L3 and the temporary variables are set as follows:

initial_state = L1 on, L2 on, L3 on, L4 on, L5 on, L6 on

curr_state = L1 off, L2 off, L3 off, L4 on, L5 on, L6 on

temp_pattern = L1 off, L2 off, L3 off

At step 1224 the difference is calculated as:

Difference = (3/6) × 100% = 50%

This difference is equal to the threshold. The system therefore retrieves, at step 1226, the pattern which most closely matches the temporary pattern and asks the user if they want to execute it. In this example this is the stored frequent usage pattern:

L1 off, L2 off, L3 off, L4 off, L5 off, L6 off

If the user accepts at step 1228, this pattern is executed at step 1232.

This method 1200 enables the system to predict intended user operation of operatively connected devices.
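The state-difference test at step 1224, and the arithmetic used in the worked example above, might be sketched as follows; the 50% threshold is simply the value assumed in the example.

```python
THRESHOLD_DIFFERENCE = 50.0   # per cent, as assumed in the example above

def state_difference(initial_state, current_state):
    """Percentage of devices whose state differs between the two system states."""
    changed = sum(1 for device in initial_state
                  if initial_state[device] != current_state.get(device))
    return 100.0 * changed / len(initial_state)

initial = {f"L{i}": "ON" for i in range(1, 7)}            # all six lights on
current = dict(initial, L1="OFF", L2="OFF", L3="OFF")     # after the third user input
difference = state_difference(initial, current)
print(difference, difference >= THRESHOLD_DIFFERENCE)     # 50.0 True: retrieve the closest frequent pattern
```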

Automation of operatively connected devices according to usage patterns:

Once a certain level of certainty regarding which usage pattern a user desires has been reached, the home automation system may begin to automate operation of operatively connected devices according to stored frequent usage patterns.

Automation of stored frequent usage patterns may be achieved via the method previously discussed in Figure 2, in which a user is identified at step 202 using any user identification means such as facial recognition or other available recognition techniques, and refined frequent usage patterns associated with the user are subsequently retrieved at step 204.

At step 206 the system may analyse the current situational status in order to choose which pattern to execute. In some embodiments this may involve analysing current sensor data, user location, time of day, day of the week or any other information. At step 208, appliances may be operated in accordance with a pattern consistent with the user and the current situational status.

In other embodiments, automation of operatively connected devices in accordance with frequent usage patterns may be triggered without user recognition, but in accordance with sensor data such as ambient brightness and/or in accordance with the time of day and/or time of year.

In certain embodiments where there is more than one identified user in a room the system may wait for receipt of a user input at a node. The user operating the node may be identified via means such as the proximity of their phone to the node or via image recognition using the camera. The system may then designate this user as the main user for automation purposes, and retrieve usage patterns linked to this user.

In some embodiments there may be a hierarchy of user profiles within the system. This information may be input by a user at any one of the nodes within the system. If multiple different users are present in a room, then priority may be given to automating operation of the operatively connected devices in accordance with the usage profiles associated with the users having a higher hierarchical ranking. In certain embodiments, usage patterns may be classified in dependence on multiple users being present within a room. The edges of directed graphs may contain information gathered by the camera 314 relating to the number of users detected in a room. In some embodiments these users may additionally be identified. For example, a family may sit together whilst eating dinner each evening and set the lighting within the dining room to the same state. By storing the information about which users are present within the room in connection with the stored usage pattern, the system may execute this frequent usage pattern when these users are all detected within the dining room around the same time each evening.

In an embodiment, if the system automates a frequent usage pattern, but a user disagrees with the chosen usage pattern, and provides inputs to change the state of any one of the operatively connected devices at a node, the system may store this information, and use this information to refine the affected usage pattern. For example, the system may account for this by decreasing a weighting variable associated with the affected frequent usage pattern.

Energy consumption reduction:

The home automation system may also automate operation of operatively connected devices when trying to reduce energy consumption via the energy saving module. The energy saving module 320 may continuously monitor the energy consumption of all operatively connected devices and store this information in storage 318. This energy consumption data may then be utilised to reduce the overall energy consumption within the system.

Figure 13 is a process flow chart 1300 illustrating a method adopted by the energy saving module to reduce energy consumption.

The method begins at step 1302, where the energy saving module 320 retrieves the current usage pattern. Then at step 1304, the energy consumption data for each device in the usage pattern is retrieved from storage 318. For present purposes any lights that are controlled via a shared switch may be treated as one device, for example ceiling lights whose operation is controlled by a shared switch. The user inputs comprised in the retrieved current usage pattern are sorted based on the energy consumption data, at step 1306. Step 1306 therefore ensures that the lighting devices are sorted in order of their associated energy consumption, when operated in accordance with the usage pattern. The order may either be in increasing or decreasing order of energy consumption. At step 1308, operation of the lighting devices is optimised in order to reduce energy consumption, beginning with the lighting devices associated with the highest energy consumption. Ordering the lighting devices in accordance with their associated energy consumption enables the system to identify the biggest energy consumers, which in turn makes it easier to identify where the largest energy savings may be made. At step 1310 the pattern is updated to include any changes made at the optimisation step 1308 to the operation of the affected lighting devices. For example, in certain embodiments this might comprise reducing the brightness of the lights associated with the highest energy consumption.
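The sorting performed at step 1306 might, as a hedged illustration, look like the following; the consumption log structure is an assumption made for the example.

```python
def order_by_consumption(pattern_devices, consumption_log):
    """consumption_log maps device name -> recorded energy use (e.g. watt-hours); highest consumers first."""
    return sorted(pattern_devices,
                  key=lambda device: consumption_log.get(device, 0.0),
                  reverse=True)

print(order_by_consumption(["lamp", "ceiling_lights", "desk_light"],
                           {"lamp": 12.0, "ceiling_lights": 60.0, "desk_light": 8.0}))
# ['ceiling_lights', 'lamp', 'desk_light'] - the ceiling lights are optimised first
```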

Figure 14 is a process flow chart providing an example of how the optimisation step 1308 of Figure 13 may be carried out. The method 1308 aims to maintain a threshold brightness level within a room, whilst simultaneously reducing overall energy consumption associated with the lighting devices. The objective is to reduce energy consumption without reducing a user's comfort. The threshold brightness may be determined based on scientific data, user preferences, user activity or other relevant data. The ambient brightness within a room is dependent on external lighting conditions if there are any windows present in the room, and on the brightness levels of artificial lighting sources present in the room, such as lights. Accordingly, the ambient brightness within a room may vary due to external weather conditions, and time of day. Depending on the external lighting conditions, it is therefore possible to reduce the brightness levels of the lights within a room without reducing the ambient brightness below a threshold brightness level, and without requiring user input. This method may result in energy savings if, for example, a user has set a light at a high brightness but does not require this level of brightness in order to see comfortably. By reducing the brightness of lights within the room, even by a small amount, significant energy savings may be made without reducing user comfort. The method is adaptable such that if the ambient brightness within the room dips below the threshold brightness level, then the brightness of lights within the room may be increased in order to maintain the ambient brightness level equal to the threshold level. This ensures that user comfort is maintained.

The method 1308 commences at step 1404, in which the energy saving module fetches sensor data. In this embodiment the relevant sensor data relates to the ambient brightness within the room, which may be retrieved from an ambient light sensor located within the room. At step 1406, the energy saving module compares the ambient brightness level to the predefined threshold brightness level. If the ambient brightness is greater than the threshold value, the process proceeds to step 1408, where the lights within the room are dimmed by a predefined increment. The process then returns to step 1406, where it is again determined whether the ambient brightness is greater than the threshold brightness level and, if so, the lights are dimmed by a further increment at step 1408. Steps 1406 and 1408 are repeated over as many iterations as are required until the ambient brightness is determined to be equal to or less than the predefined threshold brightness level, at which point the method proceeds to step 1410.

At step 1410 it is determined whether the ambient brightness is below the threshold brightness level. If the ambient brightness is below the threshold brightness level, the process proceeds to step 1412 where the brightness of the lights is increased by a predefined increment. The method returns to step 1410, where again it is determined if the ambient brightness level is less than the threshold brightness level. Steps 1410 and 1412 are repeated iteratively until the ambient brightness level is equal to or greater than the threshold brightness level, at which point the method proceeds to step 1414.

Steps 1406, 1408, 1410 and 1412 are carried out iteratively to adjust the brightness of the operatively connected lights until the ambient brightness level is equal to the predefined threshold brightness level.

At step 1414 it is determined whether the ambient brightness is equal to the threshold brightness level. If the ambient brightness is not equal to the threshold, the method returns to step 1406 and steps 1406 through 1414 are repeated until the ambient brightness level is equal to the predefined brightness threshold value. If it is determined that the ambient brightness is equal to the threshold value, the method proceeds to step 1416 in which a user compromise algorithm is carried out.
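A minimal sketch of the control loop of Figure 14 is given below. The callbacks read_ambient_lux, set_brightness and get_brightness, the lux values and the tolerance are assumptions standing in for the connected sensor and lighting devices; they are not part of the disclosed system.

```python
def regulate_brightness(read_ambient_lux, set_brightness, get_brightness,
                        threshold_lux, step=5, tolerance=1.0, max_iter=50):
    """Dim or brighten the room lights until the ambient brightness is
    approximately equal to the threshold (steps 1406-1414)."""
    for _ in range(max_iter):
        ambient = read_ambient_lux()
        if ambient > threshold_lux + tolerance:        # steps 1406/1408: dim
            set_brightness(max(0, get_brightness() - step))
        elif ambient < threshold_lux - tolerance:      # steps 1410/1412: brighten
            set_brightness(min(100, get_brightness() + step))
        else:                                          # step 1414: at the threshold
            break
    return get_brightness()

if __name__ == "__main__":
    # Simulated room: ambient brightness is external light plus a contribution
    # from the artificial lights (purely illustrative model).
    class Room:
        def __init__(self):
            self.external_lux = 200.0
            self.brightness = 90
        def read_ambient_lux(self):
            return self.external_lux + 2.0 * self.brightness
        def set_brightness(self, value):
            self.brightness = value
        def get_brightness(self):
            return self.brightness

    room = Room()
    final = regulate_brightness(room.read_ambient_lux, room.set_brightness,
                                room.get_brightness, threshold_lux=300.0)
    print("final brightness:", final)
```

The tolerance band simply avoids oscillation around the threshold when exact equality cannot be reached with a fixed increment.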

The method then continues to step 1310 of Figure 13, and the updated frequent usage pattern is stored in storage 318.

The method comprised in the user compromise algorithm 1416 is illustrated in Figure 15. This method 1416 may enable the home automation system to make energy savings in scenarios where a user has increased the brightness of lights in response to the energy optimisation step 1308. If a user is not comfortable with the predefined threshold brightness within a room after the lights have been dimmed to maintain this threshold value, they may increase the brightness of the lights in the room in order to increase the ambient brightness level. However, it is possible that there may still exist an ambient brightness level that is greater than the predefined threshold brightness level, but lower than the selected brightness level, that the user would be comfortable with. Accordingly, the object of the compromise algorithm is to identify such intermediary brightness levels, in order to reduce energy consumption whilst maintaining user comfort. In this scenario, the system may propose a brightness level value lying between the user's selected brightness level and the predefined threshold brightness level, and dim the lights accordingly in order to reduce the ambient brightness level to the proposed brightness level value. Should the user subsequently increase the brightness level again, then the system may propose a further alternative ambient brightness level greater than the previously proposed level, yet still lower than the user selected brightness level. This iterative process may be carried out multiple times until an intermediate ambient brightness level has been proposed which is comfortable for the user.

Alternatively, the compromise algorithm 1416 may only carry out a predefined number of iterations.

Once a comfortable intermediate ambient brightness level has been identified, or where no such intermediate level is found, the user's threshold brightness preferences may be updated and stored, and the existing predefined threshold brightness level replaced with either the intermediate brightness level or the user's selected brightness level. This ensures that user comfort is prioritised.

An example of method 1416 carried out by the user compromise algorithm is provided in the flow chart of Figure 15. The method is initiated after step 1414 of Figure 14, where it has been determined that the ambient brightness level within the room is equal to the predefined threshold brightness level. At step 1502 a threshold timer is started. The threshold timer enables the system to determine whether a user input is received within a threshold time period following the energy optimisation process 1308.

At step 1504 it is determined if a user input has been received for increasing the brightness of a light which had previously been dimmed during the energy optimisation process. If no such input has been received, the method proceeds to step 1506 where it is determined if the threshold timer has expired. If the timer has not expired, the method returns to step 1504. If the timer has expired, the user's brightness preferences are updated and set at the current threshold brightness at step 1508, and stored with other user information in storage 318.

If at step 1504 it is determined that a user input to increase the brightness of a light previously dimmed by the energy optimisation step has occurred, then the method proceeds to step 1510.

At step 1510 it is determined whether a default number of iterations of user compromise steps 1502 to 1512 have been carried out. If the default number of iterations has been carried out, then the method proceeds to step 1514, where the user brightness preferences are updated. If the default number of iterations has not been carried out, the method proceeds instead to step 1512 where the brightness of the light is decreased by an increment, which is less than the previous increment. On the first iteration this increment may be set to a default value that decreases the brightness of the light to any value lying between the user's preferred brightness and the stored threshold brightness level. Following each iteration the increment is adjusted to be smaller than the increment in the preceding iteration. In this way the brightness of the light is decreased by ever smaller amounts until the resulting brightness is closer to the user's preferred brightness.

Once the timer has expired at step 1506 without receipt of a user input, or the system has reached the default number of iterations at step 1510, the user brightness preferences are updated at step 1514. The method then proceeds to step 1310 of Figure 13 in which the updated pattern, including the user's brightness preferences, is stored in storage 318.
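The following Python sketch illustrates one possible form of the compromise loop of Figure 15. The callbacks wait_for_user_increase and dim_lights, the timeout and the halving strategy are assumptions introduced for illustration; the disclosed method only requires that each successive dimming increment be smaller than the last.

```python
def compromise(user_level, threshold_level, dim_lights, wait_for_user_increase,
               timeout_s=300, max_iterations=3):
    """Propose intermediate brightness levels between the user's chosen level
    and the stored threshold, moving closer to the user's level each time the
    user overrides a proposal (steps 1502-1514)."""
    proposed = threshold_level
    for _ in range(max_iterations):                 # step 1510: iteration limit
        if not wait_for_user_increase(timeout_s):   # steps 1504/1506: timer expired
            return proposed                         # step 1508: accept current level
        # User overrode the proposal: propose a level half-way towards the
        # user's preferred level, so each dimming increment is smaller (step 1512).
        proposed = (proposed + user_level) / 2
        dim_lights(proposed)
    return user_level                               # step 1514: keep user preference
```

Halving the gap is simply one convenient way of guaranteeing that each increment is smaller than the previous one.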

In certain embodiments, the system may record the number of times that a user has overridden all compromise attempts, and may cease using the user compromise algorithm with the affected user once a default number of compromise attempts have been overridden. In some embodiments the user's brightness preferences may only be updated for the current pattern. Different patterns may be used during different user activities, and the same user may not require the same brightness for every activity; therefore, energy savings may still be made for the same user's other usage patterns.

In some embodiments, an algorithm may be carried out after the brightness of the lights has been increased in response to sensor data at step 1412. In this scenario, the system may wait, for a threshold time period, for a user input dimming the lights that were brightened by the energy saving module. If the user dims the lights, the system may infer that the user prefers the ambient brightness level to be lower than the predefined threshold brightness. The user's preferences may be updated and the new ambient brightness stored as the threshold brightness for this user. In this way, energy savings may be made.

Although the above description of embodiments of the invention has been presented in relation to a home automation system for automating control of a lighting system within a home, the above methods may also be used to automate operation of any operatively connected devices within a home, such as centralised heating systems, sprinkler systems, and home appliances.

User identification using facial recognition

In some embodiments, the user identification process may comprise facial recognition. The user identification module 322 may use data received from the camera 314 to carry out the facial recognition process.

In certain embodiments an initial configuration step may be required before the home automation system is configured to identify the user. During this initial configuration process users may be required to stand in front of the node 100 for a set period of time to enable the camera 314 to capture at least one image of the users' faces, which image is subsequently stored for use in a facial recognition process.

The initial configuration step may be carried out at any node in the network and the information may be circulated to the other nodes, via the mesh network.

Figure 16 is a process flow chart, which demonstrates how a user may be identified in accordance with an embodiment.

The method commences at step 1602 where human detection is carried out. This step is a precursor to the facial recognition step in which the node 100 first uses image data received from the camera 314 to determine if a person is present in the room. Human detection may involve analysing captured image data to identify humans present in the image data. The human detection step may also comprise receiving data from one or more motion sensors located in the room.

If at step 1604 it is determined that a person has been detected, then the method proceeds to step 1606, where facial detection is carried out. If instead a person has not been detected, the method returns to the human detection step 1602.

In some embodiments the user identification process may not comprise the human detection step 1602, and may instead begin at step 1606.

At step 1606 facial detection is carried out, which may comprise determining the relative position of facial features such as the eyes, nose or mouth. Known facial recognition algorithms may be used.

At step 1608 it is determined if a face has been detected. If not, the process returns to step 1602 and begins afresh. If a face is detected, the method proceeds to step 1610, where the detected face is compared with a library of faces of known users. If the detected face matches the face of a known user, then the process proceeds to step 1612, where the user is authenticated. Any of the frequently used patterns associated with the authenticated user may then be automated, and any inputs received at the node may be attributed to the authenticated user.

If instead at step 1610 it is determined that the detected face does not match the face of a known user, then the method returns to the step 1602, and the method begins afresh.

In certain embodiments, if at step 1610 the face does not match a face of a known user, then the face may be registered against a new user profile. Any inputs received at the node 100, may then be attributed to the new user profile, and a usage pattern may be generated as described previously.
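The identification flow of Figure 16 can be outlined in Python as follows. The helper callbacks (capture_frame, detect_person, detect_face, match_face, register_new_user) are hypothetical placeholders for the camera and recognition routines; no specific recognition library or API is implied.

```python
def identify_user(capture_frame, detect_person, detect_face, match_face,
                  known_faces, register_new_user):
    """Loop until a face is matched against a known user (step 1612) or a new
    user profile is registered for an unrecognised face."""
    while True:
        frame = capture_frame()
        if not detect_person(frame):            # steps 1602/1604: human detection
            continue
        face = detect_face(frame)               # step 1606: facial detection
        if face is None:                        # step 1608: no face found
            continue
        user = match_face(face, known_faces)    # step 1610: compare with library
        if user is not None:
            return user                         # step 1612: authenticated user
        return register_new_user(face)          # unmatched face: new profile
```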

Location of lost objects:

A further aspect of the invention relates to the use of the camera 314 operatively connected to each node 1700 making up the home automation system for locating misplaced user items. In order to achieve this, certain assumptions are made. For example, the system may assume that a misplaced item relates to an object which, at some point in time, has been placed at its current location within a room by a user, and that the user has subsequently vacated the room without the object in their possession. In accordance with this assumption, the misplaced item is associated with an event comprising a location and a time coordinate associated with the time that the object was introduced to the location by the user, the location being the location of the misplaced object.

The system may retrieve the location of a lost object by generating mapping data of rooms within the home such that a change to a room may be identified by the system. Generating a map of a room may comprise capturing image data of a room in order to model its contents. Using the captured image data, a coordinate may be assigned to every identified object within the room. This information may be stored as a map of the room. Accordingly, mapping data captures the objects within a room and their location within the room. In order to distinguish between changes occurring due to, for example, a pet in a room or wind, the system may also record locations of users within the home throughout the day. If a user is recorded to have been present within a room, a new map of the room may be generated. The newly generated map may be compared to the previously generated map in order to identify any changes between the maps, which may be indicative of changes in objects located within the room. The changes may be assumed to have resulted from the presence of the user within the room.

Upon realising that a desired object has been misplaced, a user may request at the node 1700, that the system locate the misplaced object. This may be achieved by the system analysing stored mapping data in combination with user location information. Once the misplaced object has been located, its location may be output to the user via the node 1700.

This functionality is particularly useful in scenarios in which an object is concealed from view. For example, a user may have placed their mobile phone on a table and later, without realising, the user has placed a magazine on top of the mobile phone, thus concealing it from view. The system, using the stored mapping data and the user location data, may review the mapping data associated with the user's known locations, in order to determine the last known location of the mobile phone. The system will then identify the last known position of the mobile phone when it was visible on the table. In this way, the system may be able to infer the current location of the mobile phone from previously stored mapping information, despite the phone no longer being visible to the camera 314.

Figure 17 is a schematic illustration of a node 1700 in accordance with an embodiment in which the node additionally comprises an environment scanning module 1702, a user locating module 1704 and an object finding module 1706.

Within this embodiment, the camera 314 may be a time of flight (ToF) camera, enabling depth information associated with the distance of an object within the room to be determined. In addition, the use of a ToF camera enables a point cloud map of the room to be generated. This may enable mapping information and locations of users to be determined more accurately within the three dimensional space of the room.

Environment scanning module 1702 may be configured to map the portion of the room visible to the camera, and to identify objects located within the portion of the room visible to the camera. Objects located within the room may be assigned a location coordinate, and the location coordinates of the objects stored in storage 318. The environment scanning module 1702 may be configured with image recognition software to recognise common household objects, for example items of furniture or keys.

User locating module 1704 may use the stored mapping information relating to each room in order to monitor the location of users within the home. In some embodiments the location of a user may only be stored once the user has been determined to be static. It may be assumed by the system that a user will be standing still at the moment that an object is put down. This reduces the number of user locations to be stored by the system in order to later find lost objects.

The object finding module 1706 may be configured to use the stored mapping information of a room and stored locations of a user, in order to locate an object in response to a user prompt made at a node. Any locations of the required object may be extracted and compared to stored user locations. The system may then select the most recent time at which the object and the user were in close proximity. The node may then output this location to the user.

Environment mapping:

The object of the environment scanning module 1702 is to detect changes associated with the objects located within a room following a user's presence in the room. By identifying any changes in the objects located in the room, including any newly introduced objects, the system may then use this information to identify the location of a misplaced object, if it is requested by a user.

To locate objects within a room, the environment scanning module 1702 may be configured to assign a coordinate value to every object located, and to generate a map of the room using the camera. To enable the system to map the room in three dimensions, the camera may be a ToF camera.

Figure 18 outlines a method 1800 carried out by the environment scanning module 1702 for generating mapping information associated with a room within a home.

The method commences at step 1802, in which the environment scanning module 1702 determines whether mapping information relating to the present room has previously been stored by the node. If a map for the room already exists, then the system proceeds to step 1804, where the room is scanned to determine if there are any changes to the objects located within the room. This may be achieved by capturing image data of the room using the camera 314. The captured image may then be compared to the stored mapping information.

At step 1806 it is determined if the objects located within the room have changed with respect to the previously stored mapping information. If the objects located within the room are unchanged, the method ends. If instead the objects located within the room are determined to have changed, then the method proceeds to step 1808. If at step 1802 it is determined that no previous mapping information exists for the current room, then the method proceeds directly to step 1808.

At step 1808, the portion of the room visible to the camera is mapped.

At step 1810 objects such as items of furniture or other household items located within the room are identified. Each of these objects is assigned a location within the map, for example by associating a coordinate value to each identified object, at step 1812.

At step 1814 the mapping information comprising the object location information for the room is stored in storage 318.
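A minimal sketch of the mapping comparison of Figure 18 is given below. Object detection is represented by a hypothetical detect_objects(image) callback returning object names and coordinates; the data layout and names are assumptions for illustration only.

```python
def update_room_map(room_id, image, stored_maps, detect_objects, timestamp):
    """Scan a room, compare the result against the stored map, and store a new
    map only when the detected objects or their positions have changed
    (steps 1802-1814)."""
    current = detect_objects(image)            # steps 1808-1812: map and assign coordinates
    previous = stored_maps.get(room_id)        # step 1802: existing map for this room?
    if previous is not None and previous["objects"] == current:
        return False                           # step 1806: no change, method ends
    stored_maps[room_id] = {"objects": current, "time": timestamp}  # step 1814
    return True

if __name__ == "__main__":
    maps = {}
    detect = lambda img: {"keys": (1.2, 0.4, 0.9), "magazine": (1.3, 0.4, 0.9)}
    print(update_room_map("living_room", None, maps, detect, timestamp=0))   # True: first map
    print(update_room_map("living_room", None, maps, detect, timestamp=60))  # False: unchanged
```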

In certain embodiments, the system may first determine whether any users are present in the room before running the environment scanning module 1702. This may be achieved by using image recognition software to identify images of humans within captured image data.

The environment scanning module 1702 may be configured to implement the method 1800 of Figure 18 after it has been determined that a user was previously present in a room. Changes to the room due to the user may then be detected. This helps to focus the system's resources on analysing mapping data and/or image data of rooms where the misplaced object is more likely to be located.

In accordance with an embodiment, mapping information of a room may be retained for a predefined period of time before being discarded. For example, the mapping data may be retained for a few hours, days, or weeks.

User locations:

In certain embodiments the system may determine user locations using the camera 314, as mentioned previously.

In certain embodiments it is assumed that if a user has misplaced an object, then at some point in time the user would have been in close proximity to the object. Accordingly, it follows that the location of a misplaced object within a room will be in close proximity to a user's location within the room at some previous point in time. The user's locations may then be used to focus the analysis for identifying the location of the misplaced object.

A further assumption may also be made: when a user misplaces an object, it is assumed that they will have been stationary when misplacing it, even if only for a very short period of time. Therefore, the system may be configured to update the location of a user only when the user is stationary for a period of time in excess of some threshold period, for example a few seconds. If a user walks through a room without pausing, then the locations of the user as the user traversed the room need not be stored, since the user would not have had time to misplace an object in the room. Selectively retaining user location information associated only with periods of time in which the user was stationary reduces the storage requirements of the system.

Figure 19 shows an exemplary method, which may be adopted by the user locating module.

At step 1902 human detection is carried out. Human detection may comprise identifying a user within captured image data in a manner as previously described.

At step 1904 it is determined whether a person has been detected in captured image data. If a person has not been detected, the method returns to step 1902. In some embodiments, the user may be identified and authenticated by the user identification module 322 at this step. If a user has been detected at step 1904, the method proceeds to step 1906, where it is determined if the user is static. If the user is determined to be static, a timer is started at step 1908. The timer enables the system to determine the length of time for which the user has been static at a location within the room. At step 1910 it is again determined if the user is static and this step is repeated until the user moves, at which point the method proceeds to step 1912, where the timer is stopped. In this way, the system is able to determine the length of time for which the user was not moving at a location within the room.

At step 1914 it is determined if the time for which the user was static is greater than a predefined threshold value. If the time is not greater than the threshold time, the process recommences at step 1902.

If the time period that the user is static in the room is greater than the threshold value, then the user location is stored at step 1916.
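The static-user test of Figure 19 might be sketched as follows. The callbacks is_user_static, get_user_position and store_location, together with the polling interval and threshold, are assumptions standing in for analysis of successive camera frames.

```python
import time

def record_static_locations(is_user_static, get_user_position, store_location,
                            min_static_s=3.0, poll_s=0.5):
    """Store a user's position only when the user has remained stationary for
    longer than the threshold period (steps 1906-1916). Runs continuously,
    mirroring the behaviour of the user locating module."""
    while True:
        if not is_user_static():                    # step 1906: user moving
            time.sleep(poll_s)
            continue
        start = time.monotonic()                    # step 1908: start timer
        position = get_user_position()
        while is_user_static():                     # step 1910: still static?
            time.sleep(poll_s)
        elapsed = time.monotonic() - start          # step 1912: stop timer
        if elapsed > min_static_s:                  # step 1914: exceeded threshold
            store_location(position, time.time())   # step 1916: store location
```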

In certain scenarios the assumption that the user and the misplaced object were in close proximity at some point in time may increase the accuracy with which the misplaced object may be located. For example, the misplaced object may relate to a user's keys, but there may be multiple different sets of keys located within a home. The requirement that the misplaced keys were observed in close proximity to the user at some point in time, increases the probability that the system will locate the correct set of keys.

Finding lost objects:

In certain embodiments, the system may have identified a change in the location of a specific object within a room, for example an object newly introduced into the room. The system may also have determined that a person was located near to the object at around the time the change was identified. If the system is subsequently requested by the user, at the node, to identify the location of a misplaced object, the system may retrieve the most recent time at which such a change was identified in combination with the user having been located near to the object, and output this information to the user.

The accuracy with which misplaced objects are located may be improved by targeted searching. For example, if the user requests that the system locate a specific misplaced object, such as car keys, the potential locations where the misplaced object may be located may be narrowed. For example, this may be achieved by using image recognition software to identify only the specific object - i.e. car keys.

Figure 20 is a process flow chart providing an example of how the object finding module 1706 may be configured to operate.

At step 2002 a user request to locate a misplaced object is received. The request may comprise an identification of the misplaced object. This information may be input at any node 1700 using any available input means, such as voice commands where voice recognition is available, or inputting the name via the user interface 308.

At step 2004 the object finding module 1706 retrieves stored mapping information and information about the user's locations.

At step 2006 the object finding module 1706 analyses the retrieved data to identify locations in which the requested object was identified, and to determine the user locations. At step 2008 the object finding module 1706 refines the list of potential locations to only those locations where the user and the requested misplaced object were in close proximity to each other at a similar point in time. For example, if a user was detected as being static in a location at 10AM and then left the room at 10:30AM, and the subsequent environment map shows a new object has appeared in the room at a similar location and time to the user, this location would be shortlisted as a potential location of interest.

At step 2010 the most recent location meeting the aforementioned criteria is selected, and at step 2012 this location is output to the user via the available user interface 308.
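The shortlisting and selection of steps 2008 and 2010 can be illustrated with the following Python sketch. The record formats, the distance threshold and the time window are assumptions introduced for the example; in the described system the mapping data and user locations would be retrieved from storage 318 at steps 2004 and 2006.

```python
def locate_object(object_sightings, user_locations,
                  max_distance=1.5, max_time_gap_s=1800):
    """object_sightings: list of (timestamp, (x, y)) where the object appeared.
    user_locations:      list of (timestamp, (x, y)) where the user was static.
    Returns the position of the most recent sighting that was close to the
    user in both space and time, or None if no candidate is found."""
    def close(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= max_distance

    candidates = [
        (t_obj, pos_obj)
        for t_obj, pos_obj in object_sightings
        for t_usr, pos_usr in user_locations
        if abs(t_obj - t_usr) <= max_time_gap_s and close(pos_obj, pos_usr)
    ]                                                  # step 2008: shortlist
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0])[1]      # steps 2010/2012: most recent

if __name__ == "__main__":
    sightings = [(36000, (2.0, 1.0)), (38000, (5.0, 3.0))]
    user = [(36010, (2.3, 1.2))]
    print(locate_object(sightings, user))  # (2.0, 1.0): only the first sighting matches
```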

In certain embodiments, the node 1700 may be provided with natural language functionality, such that the predicted location of the misplaced object may be described to the user.

It is to be appreciated that many modifications may be made to the above examples and embodiments without departing from the scope of the present invention as defined in the accompanying claims.




 