

Title:
SMART INSTALLATION
Document Type and Number:
WIPO Patent Application WO/2022/113082
Kind Code:
A1
Abstract:
Installation of a security device, by an installer, at an installation location in an installation site, is facilitated by capturing images of the installation site. Images are processed to determine a computerised model of the installation site. A graphical representation of the computerised model is presented to an expert. A user interface enables the expert to indicate a recommended device installation location. An expert message is sent, comprising installation position data specifying the recommended installation location, to an installer device. This generates a user interface, presenting a graphical representation of the installation site. A user input action indicative of readiness of the installer to install the device causes generation of an installation position graphical representation, based on the installation position data, the installation position graphical representation comprising an image depicting a portion of the installation site substantially in the region of the installation location.

Inventors:
AMIR HAIM (IL)
AMIR OHAD (IL)
SCHNAPP JONATHAN MARK (IL)
SHALOM YAIR (IL)
Application Number:
PCT/IL2021/051410
Publication Date:
June 02, 2022
Filing Date:
November 26, 2021
Assignee:
ESSENCE SECURITY INTERNATIONAL ESI LTD (IL)
International Classes:
G08B29/18
Domestic Patent References:
WO2019241772A1 (2019-12-19)
WO2016154312A1 (2016-09-29)
WO2019197196A1 (2019-10-17)
Foreign References:
US20200242282A1 (2020-07-30)
US20100238164A1 (2010-09-23)
Attorney, Agent or Firm:
EHRLICH, Gal et al. (IL)
Claims:
WHAT IS CLAIMED IS:

1. A computer implemented method for facilitating installation of a security device, by an installer, at an installation location in an installation site, comprising: capturing images of the installation site; processing the images to determine a computerised model of the installation site; presenting a graphical representation of the computerised model to an expert and a user interface to enable the expert to indicate a recommendation for an installation location of the device; sending an expert message, comprising installation position data specifying the installation location, recommended by the expert, to a device accessible by the installer; generating a user interface, for use by the installer, presenting a graphical representation of the installation site; receiving a user input action indicative of readiness of the installer to install the device; and in response to receiving the user input action, generating an installation position graphical representation, based on the installation position data, the installation position graphical representation comprising an image depicting a portion of the installation site substantially in the region of the installation location.

2. A method in accordance with claim 1 wherein the capturing of images comprises capturing photographic images of the installation site.

3. A method in accordance with claim 2 wherein the capturing of images comprises capturing stereoscopic photographic images of the installation site.

4. A method in accordance with claim 1 wherein the capturing of images comprises emitting an electromagnetic irradiation and detecting a reflection of said irradiation.

5. A method in accordance with claim 4 wherein the emitting comprises conducting a LIDAR scan.

6. A method in accordance with claim 1 wherein the image processing comprises detecting a size dimension of a feature in at least one of the images.

7. A method in accordance with claim 6 wherein the detecting of the size dimension comprises performing a photogrammetry method.

8. A method in accordance with claim 7 wherein the photogrammetry method comprises receiving an input data item, the input data item corresponding to a size dimension of a feature in at least one of the images, and inferring at least one size dimension of a further feature in said images from the input data item.

9. A method in accordance with claim 7 wherein the photogrammetry method comprises identifying a feature in at least one of the images, looking up a size dimension for the identified feature, and inferring from that size dimension a size dimension for a further feature in said images.

10. A method in accordance with claim 1 wherein the image processing comprises identifying at least one feature common to two or more of the images, and from the one or more identified features, stitching the images together to produce a composite image.

11. A method in accordance with claim 10 wherein the stitching comprises resizing at least one of the images.

12. A method in accordance with claim 10 wherein the stitching comprises warping at least one of the images.

13. A method in accordance with claim 1 wherein the image processing comprises forming a three dimensional model of the installation site from the captured images.

14. A method in accordance with claim 1 wherein the presenting of a graphical representation of the installation site to the expert user comprises presenting an image derived from the one or more images captured in the image capturing.

15. A method in accordance with claim 14 wherein the presenting of a graphical representation comprises presenting a representation of a three dimensional model of the installation site from a point of view in the installation site.

16. A method in accordance with claim 15 wherein the presenting comprises providing, to the expert user, a graphical user interface to enable change of the point of view.

17. A method in accordance with claim 14 wherein the presenting of a graphical representation comprises presenting a plan view of the installation site.

18. A method in accordance with claim 14 wherein the presenting of a graphical representation comprises providing, to the expert user, a user interface to enable the expert user to provide a user input action comprising an indication as to an expert recommendation as to a position of a device in the installation site.

19. A method in accordance with claim 1 wherein the installation position data comprises location information for the device.

20. A method in accordance with claim 19 wherein the location information comprises location data with respect to a feature in the installation site.

21. A method in accordance with claim 19 wherein the installation position data further comprises orientation information for the device.

22. A method in accordance with claim 21 wherein the orientation information comprises orientation data describing an orientation of the device with respect to a feature in the installation site.

23. A method in accordance with claim 1 wherein the installation position graphical representation is compiled from device depiction data, the device depiction data defining a graphical representation of the device to be installed.

24. A method in accordance with claim 23 wherein the device depiction data defines a graphical representation of a portion of the installation site, the portion having a boundary, the device depiction data comprising boundary data defining the boundary of the portion.

25. A method in accordance with claim 24 wherein the boundary data defines the boundary with respect to an outline of a graphical representation of the device.

26. A method in accordance with claim 24 wherein the boundary data defines the boundary with respect to a shape of the portion, the boundary data defining size dimensions of the shape.

27. A method in accordance with claim 26 wherein the shape of the portion is predetermined, and the boundary data defines one or more size dimensions of the shape.

28. A method in accordance with claim 1 wherein the generating of the user interface comprises generating a three dimensional representation of the installation site.

29. A method in accordance with claim 1 wherein the generating of the user interface comprises generating a two dimensional representation of the installation site.

30. A method in accordance with claim 29 wherein the two dimensional representation comprises a plan view of the installation site.

31. A method in accordance with any one of the preceding claims wherein the generating of the user interface comprises generating a graphical representation of the installation site, including an indication of the installation location of the device in the installation site.

32. A method in accordance with claim 31 wherein the indication comprises a symbol representing the device.

33. A method in accordance with claim 31 wherein the indication comprises a visual representation of the device to be installed.

34. A method in accordance with claim 1 wherein the user input action indicative of readiness comprises a manual user input.

35. A method in accordance with claim 34 wherein the manual user input comprises a user actuation of one of a switch, button or virtual button.

36. A method in accordance with claim 1 wherein the user input action indicative of readiness comprises an audible user input.

37. A method in accordance with claim 36 wherein the audible user input comprises a verbal utterance.

38. A method in accordance with any one of the preceding claims wherein the generating of an installation position graphical representation comprises generating a graphical representation of the device in the portion, the graphical representation of the device being based on device graphical information in the expert message.

39. A method in accordance with claim 38 wherein the expert message comprises a device image file, the method further comprising rendering an image of the device, in the portion, from the device image file.

40. A method in accordance with any one of the preceding claims wherein the expert message comprises portion configuration information, the portion configuration information comprising portion boundary information defining a boundary of the portion of the installation site, wherein the method comprises rendering, from the portion boundary information, a graphical representation of the portion of the installation site.

41. A method in accordance with claim 40 wherein the method comprises extracting, from the expert message, device identification information, the device identification information identifying the device from a plurality of pre-stored device image files, and rendering a graphical representation of the identified device in the portion of the installation site on the basis of the corresponding pre-stored device image file.

42. A method in accordance with any one of the preceding claims wherein the expert message comprises portion position information, and the method further comprises rendering the portion of the installation site based on the portion position information, such that the portion position information defines the position of the portion in the installation site.

43. A method in accordance with claim 42 wherein the portion position information expresses the position of the portion with reference to a reference frame defined across the installation site.

44. A method in accordance with claim 42 wherein the portion position information expresses the position of the portion with reference to a feature within the installation site.

45. A method in accordance with claim 44 wherein the portion position information expresses the position of the portion such that a feature within the installation site is included in the portion to be graphically represented.

46. A method in accordance with any one of the preceding claims, wherein the user interface for use by the installer presents a plurality of devices for installation and wherein the user interface is operable to receive a user input action indicating which of the plurality of devices is selected to be installed by the installer.

47. A method in accordance with claim 46 wherein the user input action comprises a selection action with respect to the user interface.

48. A method in accordance with claim 47 wherein the selection action comprises an interaction with a feature displayed in the user interface.

49. A method in accordance with claim 48 wherein the interaction comprises an actuation of a pointing device corresponding with the feature displayed in the user interface.

50. A method in accordance with claim 48 wherein the interaction comprises an actuation of a touch screen corresponding with the feature displayed in the user interface.

51. A method in accordance with claim 47 wherein the selection action comprises capturing an identification feature of the device to be installed, and mapping the identification feature to a device of the plurality of devices for installation.

52. A method in accordance with claim 51 wherein the identification feature comprises an identification token on or in the device to be installed.

53. A method in accordance with claim 52 wherein the identification token is an identification mark on the device and the method further comprises opto-electronically reading the identifying mark to generate the input thereof.

54. A method in accordance with claim 53 wherein the identification mark comprises a machine readable encoded representation of a string.

55. A method in accordance with claim 53 or 54 wherein the identification mark comprises a barcode.

56. A method in accordance with any one of claims 53 to 55 wherein the identification mark comprises a multi-dimensional graphical matrix code.

57. A method in accordance with claim 56 wherein the matrix code comprises a QR code.

58. A method in accordance with claim 52 wherein the identification token comprises an identification data string stored in a machine readable device affixed to or integrated within the device to be installed.

59. A computer implemented method for facilitating installation of a security device, by an installer, at an installation location in an installation site, the method comprising: capturing images of the installation site; sending the images for processing to determine a computerised model of the installation site; receiving an expert message, comprising installation position data specifying the installation location, recommended by the expert, to a device accessible by the installer; generating a user interface, for use by the installer, presenting a graphical representation of the installation site; receiving a user input action indicative of readiness of the installer to install the device; and in response to receiving the user input action, generating an installation position graphical representation, based on the installation position data, the installation position graphical representation comprising an image depicting a portion of the installation site substantially in the region of the installation location.

60. A computer implemented method for enabling expert facilitation of installation of a security device, by an installer, at an installation location in an installation site, the method being performed on the basis of a computerised model of the installation site, the computerised model being generated on the basis of images captured of the installation site, the method comprising: presenting a graphical representation of the computerised model to an expert and a user interface to enable the expert to indicate a recommendation for an installation location of the device; and sending an expert message, comprising installation position data specifying the installation location, recommended by the expert, to a device accessible by the installer.

61. A computer system configured to facilitate installation of a security device, by an installer, at an installation location in an installation site, wherein the computer system comprises: an image capture system element operable to capture images of the installation site; an image processing system element operable to process the images to determine a computerised model of the installation site; an expert computer for presenting a graphical representation of the computerised model to an expert and a user interface to enable the expert to indicate a recommendation for an installation location of the device, and for sending an expert message, comprising installation position data specifying the installation location, recommended by the expert, to an installer device; the installer device being operable to: generate a user interface, for use by the installer, presenting a graphical representation of the installation site; receive a user input action indicative of readiness of the installer to install the device; and in response to receiving the user input action, generate an installation position graphical representation, based on the installation position data, the installation position graphical representation comprising an image depicting a portion of the installation site substantially in the region of the installation location.

62. An installer computer for facilitating installation of a security device, by an installer, at an installation location in an installation site, the installer computer being operable to: capture images of the installation site; send the images for processing to determine a computerised model of the installation site; receive an expert message, comprising installation position data specifying the installation location, recommended by the expert; generate a user interface, for use by the installer, presenting a graphical representation of the installation site; receive a user input action indicative of readiness of the installer to install the device; and in response to receiving the user input action, generate an installation position graphical representation, based on the installation position data, the installation position graphical representation comprising an image depicting a portion of the installation site substantially in the region of the installation location.

63. An expert computer for enabling expert facilitation of installation of a security device, by an installer, at an installation location in an installation site, on the basis of a computerised model of the installation site, the computerised model being generated on the basis of images captured of the installation site, the expert computer being configured to: present a graphical representation of the computerised model to an expert and a user interface to enable the expert to indicate a recommendation for an installation location of the device; and send an expert message, comprising installation position data specifying the installation location, recommended by the expert, to a device accessible by the installer.

64. A computer program product comprising processor executable instructions which, when executed by a computer, cause the computer to perform a method in accordance with claim 59 or claim 60.

65. A system comprising an installer computer according to claim 62 and an expert computer according to claim 63.

66. A system in accordance with claim 65, further comprising a server implementing an image processing system element operable to receive images from the installer computer and to process the images to determine a computerised model of the installation site.

67. A method of processing computer graphics data, the computer graphics data comprising model data describing an installation site, the model data comprising feature data describing features within the installation site, and location recommendation data describing a location of an object to be installed in the installation site, the model data being derived from one or more images of the installation site, the method comprising: generating image data defining an image, the image data being generated from the model data and the location recommendation data, the image data corresponding to at least one image from which the model data was derived, wherein the image defined by the image data includes an indication as to said location.

68. A method in accordance with claim 67 wherein the indication as to location of the object comprises an illustration of the object at said location.

69. A method in accordance with claim 67 wherein the generated image data maps directly onto an image from which the model was derived.

70. A method in accordance with claim 67 comprising capturing one or more images at the installation site by means of an installer client.

71. A method in accordance with claim 70 further comprising obtaining object identification information identifying an object to be included in the installation site.

72. A method in accordance with claim 71 further comprising offering to a user a user input interface to enable input of object identification information.

73. A method in accordance with claim 72 further comprising implementing the user input interface at the installer client.

74. A method in accordance with claim 72 wherein the installer client comprises an image capture device at which the capturing of images is performed.

75. A method in accordance with claim 74 wherein the image capture device attaches user account information to captured images, wherein the user account information maps to user account information obtained at the user input interface.

76. A method in accordance with claim 75 wherein the installer client comprises an interface device at which the user input interface is offered.

77. A method in accordance with claim 72 wherein the user input interface enables manual input of object identification.

78. A method in accordance with claim 72 wherein the user input interface enables speech based input of object information.

79. A method in accordance with claim 72 wherein the user input interface enables capture of object information by photographic capture of information associated with or attached to the object.

80. A method in accordance with claim 67 comprising presenting at the installer client an image comprising a plan view of the installation site, the plan view comprising a location indication, derived from the location recommendation data, indicating a recommended installation location of the object to be installed in the installation site.

81. A method in accordance with claim 80 wherein the plan view comprises a floorplan of the installation site, the floorplan comprising indications of features in the installation site.

82. A method in accordance with claim 80 comprising, responsive to a user input action, the user input action representing a selection action corresponding to the location indication, presenting at the installer client said image defined by said image data, the image comprising the location indication.

83. A method in accordance with claim 82 wherein the location indication comprises a graphical representation of the object to be installed.

84. A method in accordance with claim 83 wherein the graphical representation is a photographic image of the object to be installed.

85. A method in accordance with claim 82 wherein the indication of the recommended installation location comprises a graphical representation of an object to be installed at the recommended installation location.

86. A method in accordance with claim 85 wherein the generated image includes at least a portion of an image captured at the installation client, or an image synthesised therefrom.

87. A method in accordance with claim 67 wherein the generated image comprises a photographic image based on or synthesised from the images of the installation site.

88. A method in accordance with claim 67 wherein the generated image is with respect to a point of view substantially corresponding to a side elevation.

89. A method in accordance with claim 88 wherein the generated image is a perspective view.

90. A method in accordance with claim 67 wherein the model data is derived from a process comprising: processing image data defining at least one image of the installation site, to identify one or more features in the image, constructing a model from the or each identified feature, generating model data describing the installation site, including feature data for the one or more identified features.

91. A method in accordance with claim 90 comprising rendering a graphical representation of the model data, obtaining an expert user input action identifying a location recommendation of the object in the installation site described by the model data, and generating location recommendation data from the expert user input action.

92. A method in accordance with claim 91 wherein the obtaining and generating are performed at an expert client of a computer system.

93. A method in accordance with claim 91 wherein the rendering of the graphical representation is performed at the expert client.

94. A computer system configured to cause performance of a method in accordance with any one of claims 67 to 93.

95. A computer program product comprising processor executable instructions which, when performed on a computer, cause the computer to effect the performance of a method in accordance with any one of claims 67 to 93.

96. A computer implemented method of facilitating installation of a device at an installation location in an installation site, the method comprising storing a description of the installation site, in which a plurality of devices are to be installed, each device being to be installed at a respective installation location in the installation site; storing a reference corresponding to a subset of the installation locations, the reference comprising information assigning at least one device to the subset of installation locations; receiving an input identifying a device to be installed in the installation site; looking up, using the input as an index, a subset of the installation locations corresponding to the device identified by the input, and presenting to a user, on the basis of the description of the installation site, a graphical display of at least a portion of the installation site with an indication of one of the installation locations in the looked up subset of the installation locations.

97. The computer implemented method of claim 96, wherein the subset of installation locations comprises a single one of the installation locations.

98. The computer implemented method of claim 96 or 97, wherein the reference comprising information assigning at least one device to the subset of installation locations consists of information assigning a single device of the plurality of devices to the subset of installation locations.

99. The computer implemented method of any one of claims 96 to 98, wherein the input corresponding to the device to be installed in the installation site comprises device type information for identifying a device type to be installed at the subset of installation locations.

100. The computer implemented method of any one of claims 96 to 98, wherein the indication of at least one of the subset of the installation locations in the installation site comprises an indication of a plurality of the installation locations, and the method further comprises receiving a selection corresponding to one of the subset of locations, and presenting to a user, on the basis of the description of the installation site, a graphical display of at least a portion of the installation site with an indication of the selected one of the subset of locations in the installation site.

101. The computer implemented method of any one of claims 96 to 100 and further comprising capturing an identification feature of a device to be installed, and mapping the identification feature to a device of the plurality of devices for installation.

102. A computer implemented method in accordance with claim 101 wherein the identification feature comprises an identification token on or in the device to be installed.

103. A computer implemented method in accordance with claim 102 wherein the identification token is an identification mark on the device and the method further comprises opto-electronically reading the identifying mark to generate the input thereof.

104. A computer implemented method in accordance with claim 103 wherein the identification mark comprises a machine readable encoded representation of a string.

105. A computer implemented method in accordance with claim 102 or 103 wherein the identification mark comprises a barcode.

106. A computer implemented method in accordance with any one of claims 104 to 105 wherein the identification mark comprises a multi-dimensional graphical matrix code.

107. A computer implemented method in accordance with claim 106 wherein the matrix code comprises a QR code.

108. A computer implemented method in accordance with claim 101 wherein the capturing of an identification feature comprises capturing an image of the device, processing the image so as to match the image to a pre- stored model of a device, and thereby identifying the device to be installed.

109. A computer implemented method of facilitating installation of a device, by an installer, at an installation location in an installation site, the method comprising: receiving an input identifying a device to be installed in the installation site; accessing a description of the installation site, in which a plurality of devices are to be installed, each device being to be installed at a respective installation location in the installation site, looking up, using the input as an index, a subset of the installation locations corresponding to the device identified by the input, and presenting to the installer, on the basis of the description of the installation site, a graphical display of at least a portion of the installation site with an indication of one of the installation locations in the looked up subset of the installation locations.

110. A computer implemented method of facilitating installation of a device at an installation location in an installation site, the method comprising: storing a description of the installation site, in which a plurality of devices are to be installed, each device being to be installed at a respective installation location in the installation site; storing a reference corresponding to a subset of the installation locations, the reference comprising information assigning at least one device to the subset of installation locations; receiving a look-up request, the look-up request comprising an index identifying a device to be installed in the installation site; looking up, using the input as an index, a subset of the installation locations corresponding to the device identified by the input, and serving to an installer device, on the basis of the description of the installation site, information defining a graphical display of at least a portion of the installation site with an indication of one of the installation locations in the looked up subset of the installation locations.

111. An installer computer for facilitating installation of a device, by an installer, at an installation location in an installation site, the installer computer being operable to: receive an input identifying a device to be installed in the installation site; access a description of the installation site, in which a plurality of devices are to be installed, each device being to be installed at a respective installation location in the installation site, looking up, using the input as an index, a subset of the installation locations corresponding to the device identified by the input, and present to the installer, on the basis of the description of the installation site, a graphical display of at least a portion of the installation site with an indication of one of the installation locations in the looked up subset of the installation locations.

112. An expert computer for enabling expert facilitation of installation of a device, by an installer, at an installation location in an installation site, the expert computer being operable to: store a description of the installation site, in which a plurality of devices are to be installed, each device being to be installed at a respective installation location in the installation site; store a reference corresponding to a subset of the installation locations, the reference comprising information assigning at least one device to the subset of installation locations; receive a look-up request, the look-up request comprising an index identifying a device to be installed in the installation site; look up, using the input as an index, a subset of the installation locations corresponding to the device identified by the input, and serve to an installer device, on the basis of the description of the installation site, information defining a graphical display of at least a portion of the installation site with an indication of one of the installation locations in the looked up subset of the installation locations.

113. A computer program product comprising processor executable instructions which, when executed by a computer, cause the computer to perform a method in accordance with claim 111 or claim 112.

114. A system comprising an installer computer according to claim 111 and an expert computer according to claim 112.

Description:
SMART INSTALLATION

RELATED APPLICATION/S

This application claims the benefit of priority of Great Britain Patent Application No. 2018724.1 filed on 27 November 2020, the contents of which are incorporated herein by reference in their entirety.

FIELD

The present disclosure is concerned with providing a facility to assist in installation of equipment in a domestic or industrial setting.

BACKGROUND

Home improvement can be challenging for a home-owner. In particular, a home-owner may be insufficiently informed to be able to install equipment in the home, particularly where the position and/or orientation of the equipment is critical to its effective performance.

A typical example of this is a scenario whereby a home-owner desires to install a home security system. Such a system can include one or more cameras, one or more motion detectors, and/or other components. The position and orientation of such devices can materially affect the effectiveness of the installed system. Installation can benefit from expert advice.

In one approach, a provider of a home security system can provide a set of generic written instructions, with a view to informing a prospective installer as to particular issues to bear in mind, constraints as to location of devices of a network, and other considerations. However, even the most carefully crafted instructions can be unclear or insufficient to the average home-owner.

As such, it can be daunting or implausible for a home-owner to perform an installation of such devices without expert advice. An expert may therefore need to be engaged for the installation. Such expert installations involve a home visit, expense, and waiting for availability of such an expert to attend to the installation.

Any reference to a computer implementation of an embodiment described herein may be considered to optionally include a distributed implementation of an embodiment on a plurality of interconnected computers, whether locally connected or via interconnections, except where the context requires otherwise. Any embodiment disclosed herein may be implemented on one or more general purpose computers, suitably configured by computer executable instructions. Such computer executable instructions may be introduced by way of a computer program product which may be in the form of a non-transient computer readable storage device. In other embodiments, computer executable instructions may be provided borne on a computer readable signal.

In some embodiments, a computer program product may be configured on the basis that a general purpose computer stores existing instructions defining a platform or operating system.

As used herein, except where the context requires otherwise, the terms “comprises”, “includes”, “has”, and grammatical variants of these terms, are not intended to be exhaustive. They are intended to allow for the possibility of further additives, components, integers or steps.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIGURE 1 is a schematic diagram of a system in accordance with an embodiment.

FIGURE 2 is a schematic diagram of hardware architecture of an installer smartphone of the system of figure 1.

FIGURE 3 is a schematic diagram of functional architecture of the installer smartphone of figure 2.

FIGURE 4 is a schematic diagram of hardware architecture of an installer computer of the system of figure 1.

FIGURE 5 is a schematic diagram of functional architecture of the installer computer of figure 4.

FIGURE 6 is a schematic diagram of hardware architecture of an image/model manager of the system of figure 1.

FIGURE 7 is a schematic diagram of functional architecture of the image/model manager of figure 6.

FIGURE 8 is a schematic diagram of hardware architecture of an expert computer of the system of figure 1.

FIGURE 9 is a schematic diagram of functional architecture of the expert computer of figure 8.

FIGURE 10 is a process diagram illustrating operation of the installer smartphone.

FIGURE 11 is a schematic perspective view of an installation site.

FIGURE 12 is a schematic diagram of images captured by the installer smartphone of the installation site of figure 11.

FIGURE 13 is a process diagram illustrating operation of the image/model manager in response to receipt of images from the installer smartphone.

FIGURE 14 is a sequence of images depicting stages of a feature matching and image stitching process carried out at the image/model manager.

FIGURE 15 is a plan view depicting a model derived by the image/model manager.

FIGURE 16 is a process diagram illustrating operation of the expert computer.

FIGURE 17 is an illustration of an example of a graphical user interface presented at the expert computer.

FIGURE 18 is an illustration of the graphical user interface of figure 17, in another state.

FIGURE 19 is a process diagram illustrating operation of the image/model manager in response to receipt of a location recommendation from the expert computer.

FIGURE 20 is an illustration of an example of a graphical user interface presented at the installer smartphone.

FIGURE 21 is an illustration of an example of the graphical user interface of figure 20, in another state.

FIGURE 22 is an illustration of an example of the graphical user interface of figure 20, in another state.

FIGURE 23 illustrates a system in accordance with a further embodiment.

FIGURE 24 illustrates functional architecture of an installer smartphone in accordance with the embodiment of figure 23.

FIGURE 25 illustrates a device for installation according to a further embodiment.

FIGURE 26 illustrates a device for installation according to a yet further embodiment.

DESCRIPTION OF EMBODIMENTS

In general terms, embodiments disclosed herein facilitate the acquisition of expert input into an installation, without the expert having to attend the installation. Embodiments also cause the generation of instructions to an installer, the instructions having had the benefit of expert input, to enable the installer to complete the installation.

Broadly, in non-limiting terms, embodiments disclosed herein concern a smart installation system comprising an installer client and an expert client. The installer client is used by an installer to capture images (which may be still images or video) of an installation site. The installation system comprises a system element which uses a mapping algorithm to generate a model (e.g. a floor plan or 3D model) of the installation site in which installation is to be effected. Embodiments disclosed herein contemplate the capture of images, such as by a camera functionality integrated into a personal electronic device such as a smartphone. However, in addition or alternatively, image capture may be effected in such a way as to accommodate photogrammetry. For instance, stereo images may be captured so that, using suitable image processing, a three dimensional representation of the scene can be generated. In addition, or alternatively, any other form of electromagnetic irradiation may be used to enable capture of images of the installation site. Further, LIDAR could be used, thereby facilitating an ability to determine distances to and between features in the images.
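
By way of illustration of how stereoscopic capture can yield depth, the following Python sketch computes a disparity map from a rectified left/right pair using OpenCV and converts it to approximate distances. The file names, focal length and baseline are hypothetical placeholders, and the block-matching parameters are illustrative only; this is a sketch of the principle, not a prescribed implementation.

    import cv2
    import numpy as np

    # Assumed inputs: a rectified greyscale stereo pair of the installation site.
    left = cv2.imread("site_left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("site_right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching stereo correspondence; parameters are illustrative only.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

    # depth = focal_length_px * baseline_m / disparity (calibration values assumed)
    focal_length_px = 800.0
    baseline_m = 0.12
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_length_px * baseline_m / disparity[valid]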

Measurement of distances or dimensions of features in the scene may be determined additionally or alternatively by calibration relative to a real distance measurement data input to the installer client (or to another device). The feature may for example be an object, e.g. a door or a wall, or may be an aspect of an object, for example an edge or a height of a wall. The real distance measurement data could, in certain embodiments, be derived from identification of a feature in the scene, wherein at least one dimension measurement of the feature is known. For instance, a device to be installed in the scene can be recognised, and the size dimensions of that device would then be known to the system.
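
A minimal sketch of this calibration idea, assuming a recognised feature of known real-world width (for example the device itself) whose pixel extent in the image is available; the scale it yields is only reliable for features lying at a comparable depth or on the same plane.

    def metres_per_pixel(known_width_m: float, known_width_px: float) -> float:
        """Scale factor inferred from a feature of known real-world size."""
        return known_width_m / known_width_px

    # Example: a recognised device is 0.10 m wide and spans 50 px in the image.
    scale = metres_per_pixel(0.10, 50.0)   # 0.002 m per pixel

    # Infer the size of a further feature, e.g. a door measured at 450 px wide.
    door_width_m = 450.0 * scale           # approximately 0.9 m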

In embodiments disclosed herein, for reasons of clarity, examples of the abovementioned system element are shown as a unitary element of the system; however, the reader will appreciate that the present disclosure also contemplates the provision of the system element in a distributed fashion, across a plurality of computing resources in a network.

The expert client, used by an expert, is then presented with a representation of the model, and identifies where in the installation site a device should be installed. A system element then determines an image that corresponds to the identified installation location in the model. In embodiments disclosed herein, the image comprises a representation of a portion of the installation site, the portion being a view of less than the whole installation site and, in many implementations, of less than the region captured by totality of the captured images.

The image optionally may be one of the images captured by the installer client, or it may be generated from one or more of those images. So, for example, it may be derived from a combination of images or a part of an image, or it may be synthesised from image data captured by the installer client. Thus, the image may appear as a photographic image, as opposed to a model-generated image. The image is then presented on the user’s phone with an indication on the image as to the location recommended by the expert for installation of the device. An installation may involve a plurality of devices. The installer client may implement a device identification facility which enables identification of the device or devices to be installed. The device identification facility may, in one embodiment, offer a user interface to enable manual entry by the installer of device information, such as device type or serial number. Manual entry may be achieved by user keyboard input action or, for example, by scanning a computer readable coded alphanumeric symbol (e.g. a barcode, such as a QR code) affixed to the device.
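
For the scanning path mentioned above, a minimal sketch using OpenCV's QR-code detector is shown below; the photograph file name and the serial-number-to-device-type table are assumptions for illustration.

    import cv2

    # Photograph of the label affixed to the device to be installed (assumed file name).
    label = cv2.imread("device_label.jpg")

    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(label)

    # Hypothetical mapping from decoded serial strings to device types.
    DEVICE_CATALOGUE = {"ESI-PIR-001": "motion sensor", "ESI-CAM-002": "camera"}
    if payload:
        device_type = DEVICE_CATALOGUE.get(payload, "unknown device")
        print(f"Scanned {payload}: {device_type}")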

The image may be determined so as to be assistive to the installer. In one embodiment, this is achieved by defining a boundary around the location of the object to be installed, and devising an image which comprises a representation of the model bounded by the boundary.

The boundary may be determined with reference to absolute dimensions in the model. The boundary may be defined with respect to a location on, or a perimeter of, a representation of the object to be installed, as positioned in the model. In an embodiment, the boundary is defined as a predetermined distance extending beyond the representation of the object to be installed. In an embodiment, the boundary is defined as a predetermined distance extending from a central location within (e.g. a centre of) the object to be installed. In an embodiment, a user interface is provided to enable configuration of the predetermined distance.
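
The following sketch shows one plausible way to compute such boundaries, assuming the device's representation is already placed in the image as an axis-aligned pixel box and that a pixels-per-metre scale for the relevant plane is known; the second helper corresponds to the multiple-of-dimensions variant described in the next paragraph.

    def boundary_with_margin(device_box, margin_m, px_per_m):
        """Expand an (x0, y0, x1, y1) device box by a fixed real-world margin."""
        x0, y0, x1, y1 = device_box
        m = margin_m * px_per_m
        return (x0 - m, y0 - m, x1 + m, y1 + m)

    def boundary_as_multiple(device_box, factor=3.0):
        """Alternative: make the portion a multiple of the device's own extent."""
        x0, y0, x1, y1 = device_box
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        w, h = (x1 - x0) * factor / 2, (y1 - y0) * factor / 2
        return (cx - w, cy - h, cx + w, cy + h)

    # Example: a 0.30 m margin around a device occupying pixels (400, 250)-(460, 330).
    portion = boundary_with_margin((400, 250, 460, 330), margin_m=0.30, px_per_m=200)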

In an embodiment, the boundary may be determined as a function of dimensions of the representation of the object to be installed. So, in an embodiment, the representation of the portion of the model may, for example, be a multiple larger than the representation of the object to be installed.

In an embodiment, the boundary may be determined to include a feature in the model of the installation site. So, for example, a feature, such as a wall boundary, a corner, a doorframe, a window frame, an electrical power supply socket, or any other recognisable feature of the model of the installation site, may be used as an anchor from which the installation location can be surmised by the installer. In particular, when the image portion is displayed to the installer, the utility of the image is enhanced if it contains a recognisable feature. The installer can then determine, on the basis of the recognisable feature, how the device to be installed should be positioned. For instance, if the image represents an installation by way of positioning the device with respect to a wall boundary and, for example, a light-switch, the scale and position of the features can convey information to the viewer as to how the device should be oriented prior to installation. Further information may be included in or with the image, such as distance or angle data to aid the installer to position and orient the device with respect to a reference feature in the image portion. The reader will appreciate that the boundary can be defined with respect to features in the model, or features in images generated of the model. The preference for one or other of these approaches will depend on the specific implementation of the embodiment.
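
A sketch of the anchor-feature variant, assuming the device and a recognised reference feature (e.g. a light switch) are both available as axis-aligned pixel boxes; the boundary is grown to the union of the two, plus illustrative padding.

    def boundary_including_feature(device_box, feature_box, pad_px=20):
        """Union of the device box and a recognised anchor feature, plus padding."""
        x0 = min(device_box[0], feature_box[0]) - pad_px
        y0 = min(device_box[1], feature_box[1]) - pad_px
        x1 = max(device_box[2], feature_box[2]) + pad_px
        y1 = max(device_box[3], feature_box[3]) + pad_px
        return (x0, y0, x1, y1)

    # Example: include a light switch detected at (600, 300)-(640, 360).
    portion = boundary_including_feature((400, 250, 460, 330), (600, 300, 640, 360))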

The image of the portion may be derived from one or more of the images captured by the installer client.

A speech recognition facility may be provided at the installer client. This can be useful in the event that the installer is unable to use manual entry, such as in a dirty or wet working environment, or if the installer is wearing personal protective equipment.

The installer client may present to the installer an image indicating recommended installation location, wherein the image includes a visual representation of the device (e.g. as a picture or a drawing), or a part of the device, superimposed onto the originally captured image or the image synthesised from one or more originally captured images.
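
A minimal sketch of such superimposition using the Pillow library, assuming a device depiction with an alpha channel; the file names and paste position are placeholders rather than values defined by the system.

    from PIL import Image

    site = Image.open("installation_site.jpg").convert("RGBA")
    device = Image.open("motion_sensor.png").convert("RGBA")  # assumed depiction with alpha

    # Recommended installation location, expressed as the top-left pixel of the overlay.
    x, y = 420, 260
    site.paste(device, (x, y), device)  # third argument uses the device's alpha as a mask
    site.convert("RGB").save("installation_guidance.jpg")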

The installer client may comprise more than one device. In one embodiment, an image capture process is implemented on a hand-held device, such as a smartphone, while an installer interface process is implemented on another device, which may be a computer, a tablet or so on. In use, coordination between the two device processes may be implemented by user account information. Such user account information may be attached to images captured in the image capture process, while the user account information may correspond to an account log-in process at the other device hosting the installer user interface.

The orientation of the device to be installed may be indicated in the image. Other features in the image may be identified. A system element may have a feature recognition facility, capable of identifying features such as walls, wall edges, cornices, wall vertices, ceilings, floors, doors, door frames, windows, window openings, furniture. The image presented to the user may include indication of such recognised features, so as to aid the installer in appreciating the location recommended for installation.

The image presented to the user may comprise a floor to ceiling representation of a room in which a device is to be installed. The image presented to the user may contain a representation of a reference feature. Whether or not the reference feature appears in the image presented to the user, the feature may be used to determine the boundary of the image. For example, in a scenario in which a device is to be installed in a room having a floor and a ceiling, the vertical extent of the boundary of the image may be defined with reference to the floor and the ceiling, even if the floor and ceiling are not depicted in the image. So, for instance, the vertical extent of the image may be set to extend from 5 cm below the ceiling level to 5 cm above the floor level. Optionally, the horizontal boundary may be determined by a predefined aspect ratio with respect to the dimension spanning the vertical boundary. The horizontal boundary may further be defined to position the object to be installed in a predefined horizontal relationship with respect to the boundary, e.g. midway between the left and right bounds.
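
This rule reduces to simple arithmetic once the pixel rows of the ceiling and floor and a pixels-per-metre scale are known. A sketch, with illustrative numbers, that also derives the horizontal bounds from a predefined aspect ratio centred on the device:

    def vertical_bounds(ceiling_row, floor_row, px_per_m, offset_m=0.05):
        """Rows spanning from 5 cm below the ceiling to 5 cm above the floor."""
        off = offset_m * px_per_m
        return ceiling_row + off, floor_row - off

    def horizontal_bounds(device_centre_col, top, bottom, aspect=4 / 3):
        """Width from a fixed aspect ratio, centred on the device column."""
        half_width = (bottom - top) * aspect / 2
        return device_centre_col - half_width, device_centre_col + half_width

    # Illustrative values; results may need clamping to the image extent.
    top, bottom = vertical_bounds(ceiling_row=80, floor_row=980, px_per_m=200)
    left, right = horizontal_bounds(device_centre_col=430, top=top, bottom=bottom)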

In certain embodiments, the installer client may present to the installer a user interface responsive to user input action to enable a zoom (in or out) function to apply to the presented image.

In certain embodiments, the image presented to the user may be representative of a view of the installation location from a particular point of view, and the installer client may present to the installer a user interface responsive to user input action to enable the presented image to reflect a change of position of the point of view.

In an embodiment, the installer client is configured to present a plurality of images to the installer, each generated in response to expert input at the expert client. For example, images may be generated with respect to different intermediate stages of installation of a device. In an embodiment, a device may be installed in stages, such as to attach a bracket to a wall, then to attach a battery holder to the bracket, then finally to affix a housing to the battery holder. Images may be generated, for the benefit of the installer, indicative of each installation stage.

In an embodiment, the installer client may prompt the installer to capture further images of the installation site, during installation, so that the installation can be inspected by an expert before a subsequent stage of installation is commenced.

In an embodiment, the images or other visual representations output to the installer may comprise an animation. A plurality of animations may be provided, each corresponding to a stage in a multi-stage installation process.

Embodiments may be used to guide the installation of a wide range of devices, particularly but not exclusively concerned with premises security. So, for example, control hubs, motion sensors, door/window sensors, radar or sonar sensors, cameras, devices that emit deterrent substances (e.g. a light obscuring material) and so on may be deployed using an embodiment of the present disclosure.

A system element used to process captured images, to develop the model of the installation site, to identify features in the images, and to generate images including expert location information, may be located at the expert client. Alternatively, the system element may be separately located, either in a centralised server or in a distributed computing facility. Facilities offered through the installation client may be the result of execution of processing code hosted at the installation client, or may be drawn from cloud-based processing. That is, applications may be hosted remotely, and facilities offered by such applications may be called upon by the installation client.

Figure 1 illustrates a network 10 in accordance with an embodiment of the present disclosure. The network 10 enables interaction between an installer and an expert, to enable the installer to impart information to the expert as to the site in which installation is to be effected, to enable the expert to input expertise specific to the situation, and to enable that expertise to be provided to the installer in a context which fully or largely delivers the value of the expertise.

At an installation site, the network 10 thus comprises an installer smartphone 100, connected by Wi-Fi connection to a Wi-Fi hub 20. Also, an installer computer 200 is provided, similarly connected to the Wi-Fi hub 20. The installer smartphone 100 also has cellular connectivity, and through that establishes a communications channel with a cellular mast 22 employing a communications technology such as a 3GPP specified technology (e.g. LTE or 5G).

Both the Wi-Fi hub 20 and the mast 22 enable connectivity to the Internet 30. The Wi-Fi hub 20 is connected by physical connection, such as fibre or cable, to a communications network and thence to an internet service provider, through which internet based facilities are accessible. The mast 22 is part of a cellular telecommunications network through which internet facilities are also accessible.

As will be understood by the reader, while the Internet 30 is shown for simplicity as a functional block, in fact it comprises a vast number of interconnected devices offering facilities such as data, applications and connectivity to user devices.

In some embodiments, one or other of the connectivity channels may be dispensed with. So, for example, if a site is under construction, it is possible that no Wi-Fi connectivity will be available. In such a case, both the installer smartphone 100 and the installer computer 200 will be reliant on connectivity through the cellular mast. In that case, while not shown in figure 1, the computer 200 would naturally have cellular connectivity capability.

Similarly, it is possible to envisage dispensing with the cellular connectivity channel. In that case, the Wi-Fi hub 20 would provide the channel by which both the installer smartphone 100 and the computer 200 can access the Internet 30 and the services that are available through the Internet 30.

In use, the installer smartphone 100 is used by an installer to capture photographic images of the installation site. These images are processed into image files. As will be appreciated by the reader, the installer smartphone 100 may be substituted by any other computing device, or combination of computing devices, with hardware capabilities sufficient to accommodate an embodiment as described herein. So, for example, alternative devices may include a tablet computer, a laptop computer, an application specific handheld device, or a combination of these. In one alternative approach, images may be captured, in the pursuit of an embodiment, by a digital camera, and then digital images may be transferred therefrom to a computing device to enable implementation of a described embodiment.

Further, in some embodiments, the installer smartphone 100 and the installer computer 200 may be integrated into a single device. Thus, a single device may perform both the functions of the installer smartphone 100 and the installer computer 200.

An image/model manager 300, also with access to the Internet 30, provides computing capability, to receive image files from the smartphone 100, and from those image files to build a model of the installation site.
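
One plausible way for such a system element to combine overlapping photographs into a single composite, as a first step towards a site model, is OpenCV's high-level stitching API; the file names below are assumptions, and stitching is only one of several possible modelling approaches.

    import cv2

    # Overlapping photographs received from the installer smartphone (assumed names).
    files = ["site_01.jpg", "site_02.jpg", "site_03.jpg"]
    images = [cv2.imread(f) for f in files]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, composite = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("site_composite.jpg", composite)
    else:
        # Stitching can fail if the images do not overlap sufficiently.
        print(f"Stitching failed with status {status}")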

As noted above, though depicted as a single device in figure 1, in some implementations the image/model manager 300 may be implemented in a distributed manner, by cooperatively employing processing capabilities on a plurality of interconnected computing devices in a network. Further, this principle applies to any of the computing devices described in the context of specific embodiments - computing functions and capabilities may be provided in situ on a device using a self-contained set of software instructions, or a device may call up computing facilities offered by other computer devices with which the device is able to establish a connection, such as web-based or cloud based applications. This is particularly relevant where local processing capability is limited but high capacity communications channels are available, enabling fast upload and download of information in pursuit of a distributed processing model.

In one embodiment, the model is a 2D plan of the installation site. In another embodiment, the model is a 3D model of the installation site. In each case, a computer interpretable description of the model is created. The image/model manager 300 stores information relating to the manner in which the model is derived from the images, so that expert information inserted into the model can thereafter be mapped to images presented back to the installer user.
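
A sketch of the kind of book-keeping this implies, assuming point correspondences between the model (plan) and a captured image have been found for points lying on a common plane (for example the floor): a per-image homography is stored so that a location later marked on the model can be projected back into the source image. The coordinate values are illustrative.

    import cv2
    import numpy as np

    # Matched 2D points: model (plan) coordinates and the corresponding image pixels,
    # assumed to come from the feature-matching stage and to lie on a common plane.
    model_pts = np.array([[0.0, 0.0], [4.2, 0.0], [4.2, 3.1], [0.0, 3.1]], dtype=np.float32)
    image_pts = np.array([[55, 40], [910, 52], [930, 700], [60, 690]], dtype=np.float32)

    H, _ = cv2.findHomography(model_pts, image_pts)  # model -> image mapping, stored per image

    # Later: map an expert-recommended location (in model coordinates) into the image.
    location = np.array([[[2.1, 1.4]]], dtype=np.float32)
    pixel = cv2.perspectiveTransform(location, H)[0][0]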

This model is passed on to an expert computer 400, also with access to the Internet 30 by way of a Wi-Fi hub 25, though in other embodiments (not shown) the expert computer may be equipped with a cellular modem to enable receipt of the model via a cellular network. An expert using the expert computer 400 is able to view a graphical representation of an installation site, and to input information as to recommended configurations of devices to be installed in the installation site.

In one embodiment, the installer computer 200 is used to configure a user subscription to the expert service. This enables a user to input personal information, including payment information, to establish a transaction before accessing the expert service.

The transaction may include information identifying the equipment to be installed. This information may be bound up in the transaction, in that the equipment purchased by the user may be recorded in a transaction record. This transaction record may then be linked to the user, to the installation site, and to images captured at the smartphone 100.

So, the expert computer 400 may present to the expert information identifying the equipment to be installed. In some cases, there may be no need to provide this information, as it could be the case that the product offered for sale to the user is invariable and always comprises the same number and types of device. So, for example, if a home security system always comprises a single motion-detecting device and a single alarm beacon, then there is no need to actively specify this to the expert, as this will be implicit.

The expert, presented with the graphical representation of the installation site, and in possession of the information as to the or each device that is to be installed in the installation site, uses a graphical interface provided by the expert computer 400 to specify a recommended installation location for the or each device to be installed. This specification of recommended locations can be termed expert location information.

The expert location information is associated with the model. In some embodiments, the expert location information is combined with the information defining the model, in a data file that can be termed an enhanced model. In other embodiments, the expert location information is simply linked to the existing model.

The expert location information is passed from the expert computer 400 to the image/model manager 300. There, the image/model manager 300 may reverse the processing used to create the model from the installer captured images, to create one or more enhanced image files. The or each enhanced image file comprises data enabling the construction of a corresponding image.

The enhanced image file is constructed so as to enable formation at the installer client of an image which is a representation of a portion of the installation site, the portion being defined about the installation location. The portion is, in practice, constrained to enable the size of the enhanced image file to be controlled. This has advantage in terms of transmission time and data consumption for transmitting the enhanced image file to the installer client. It also has potential advantage in terms of rendering speed at the installer client - there is no need to render an image of the whole installation site, just that part relevant to the installation. Further, by limiting the portion defined about the installation location, the installer is directed to the precise part of the site at which the installation is required, with minimal, if any, user interface interaction required of the installer.

There are a number of possible approaches to constraining the size of the portion to be represented in the enhanced image file.

In one approach, a boundary is drawn around the object to be installed. This boundary may be an outline of the object, or, for example, an extended outline - that is, a boundary defined by the outline of the object plus a margin. The outline may be coincident with the maximum extent of the object, or it may be a simpler shape, such as a quadrilateral. In one example, the boundary is a rectangular bounding box. The margin may be preset, and/or may be set by the installer or expert. In an embodiment, the boundary may be set by the expert user.
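
Purely as a non-limiting sketch of this boundary approach, the following Python fragment shows one way a rectangular bounding box around a device outline might be extended by a margin and clamped to the image extent. The data layout, field names and margin value are assumptions for illustration only and are not part of the described method.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned rectangle in image pixel coordinates."""
    left: int
    top: int
    right: int
    bottom: int

def extended_bounding_box(device_box: Box, margin_px: int,
                          image_width: int, image_height: int) -> Box:
    """Grow the device outline by a fixed margin, clamped to the image extent."""
    return Box(
        left=max(0, device_box.left - margin_px),
        top=max(0, device_box.top - margin_px),
        right=min(image_width, device_box.right + margin_px),
        bottom=min(image_height, device_box.bottom + margin_px),
    )

if __name__ == "__main__":
    # A hypothetical 60x40 px device outline in a 1920x1080 image,
    # extended by a 150 px margin on all sides.
    device = Box(left=900, top=300, right=960, bottom=340)
    print(extended_bounding_box(device, margin_px=150,
                                image_width=1920, image_height=1080))
```

A preset margin, as here, is the simplest policy; as noted above, the margin could equally be set by the installer or the expert.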

The boundary may be set so as to encompass one or more features in the installation scene. Such features as may be encompassed could include wall corners, floor/wall corners, ceiling/wall corners, door frames, window features, door features, wall mounted features such as hanging pictures, power sockets or light switches. The advantage of encompassing one or more features is that the installer will be better able to locate the object with the context provided of an illustrated feature of the installation scene.

The or each enhanced image file is passed back to either the installer smartphone 100 or the installer computer 200 (for instance governed by the preference of the installer), so that images including location information corresponding to the expert location information can be displayed to the installer in the context of one or more of the images captured by the installer.

In some implementations, prior to the presentation of such an enhanced image, the installer smartphone 100 or computer 200 is presented with a plan view of the installation site with one or more location indications, each indicating a recommended installation location for a corresponding item of equipment to be installed at the site. A user input action can be used to select one of the location indications and in response the smartphone 100 or computer 200 displays the enhanced image corresponding to the location indication. Once the item(s) corresponding to that location indication have been installed, the installer smartphone 100 or computer 200 may then be again presented with the plan view, enabling a next location indication to be selected and so on until all of the items of equipment have been installed.

By this approach, an expert can remotely impart information to an installer as to where a device, or a plurality of devices, should be installed in an installation site, and the person at the site can see on their smartphone 100 or computer 200 where and how the device(s) should appear in the site’s environment.

A schematic hardware architecture of the installer smartphone 100 is shown in figure 2. As illustrated, the smartphone 100 comprises a processor 102 which, in use, is operable to execute computer executable instructions. In specific examples of the described embodiment, the processor 102 may be implemented as one or more processing chips (e.g. microprocessor, microcontroller, FPGA, ASIC) and other processing hardware.

An input interface 104 is provided. This includes a connection to an incoming signal from a microphone 106 which, in use, can be used by a user to provide speech commands to the smartphone 100.

A communications interface 110 enables establishment of communications, according to predetermined communications protocols, with other devices. So, for example, a radio based communication channel can be established with the use of a radio transmitter/receiver (TxRx) 112 and an antenna 114. In some cases, a plurality of transmitter/receivers and corresponding antennas 114 are provided, to account for different communication technologies. So, in this described example, facilities are established for communication according to 3GPP technologies and according to Wi-Fi network protocols.

Other wireless communication technologies could also be provided, such as Bluetooth, and others yet to be contemplated. Further, wired technologies such as USB, Ethernet or powerline communication, could also be implemented in certain embodiments.

Further, as well as or instead of real-time communication, a removable readable data medium, such as a non-transient computer readable storage medium, could be implemented, to enable data (and, in particular, program information) to be introduced to and removed from the smartphone 100. In other embodiments, data (and, in particular, program information) may be introduced into the smartphone 100 borne on a computer readable signal.

A digital camera 116 provides a facility for the capture of images, or movies, by the smartphone 100.

Mass storage 120 is provided to enable non-volatile storage of program code and data in support of the function of the smartphone 100. The mass storage 120 is typically implemented as a solid state mass storage device, as such devices are advantageous in terms of weight, form factor, power consumption and robustness in the context of a hand-held device.

A Read Only Memory (ROM) 122 provides rapid-access non-volatile storage, for system-level and other essential functionalities of the smartphone 100. A Random Access Memory (RAM) 124 provides read/write working memory for use, by the processor 102, in the execution of computer programs. In operation, the processor 102 draws application data stored in the mass storage 120 and stores it temporarily in the RAM 124, so that program instructions can be executed without delay that could be associated with memory access from the (relatively slow) mass storage 120.

An output interface 126 is provided, for interaction between processing elements of the smartphone 100 and the user. For instance, a display 128 is provided, as is a speaker 130. The output interface 126 enables the display of user interface images on the display 128, including, for instance, images of selectable buttons, or, for example, a virtual keyboard. A touch sensitive element of the display 128 may allow digital user interactions, which will then be received by the input interface 104 previously described.

Finally, a clock 132 is illustrated, which governs the timing of execution of program instructions by processing elements of the smartphone 100. The clock 132 may include a facility for synchronisation to an external time signal, such as a time code transmitted by a radio transmitter (e.g. the timing signal established by the GPS satellite system).

The function provided by the smartphone in connection with the presently described embodiment is illustrated in figure 3. As shown in figure 3, the processing elements of the smartphone 100 establish an installation assistant facility 150 which includes a camera driver 152 for operating the camera 116, an image processor 154 for processing images or movies captured by the camera 116 into a file which can be transmitted from the smartphone 100, a user interface driver 156 for running a graphical user interface on the display 128, and a communications driver 158 for effecting communication of files to and from the smartphone 100.

In use, the graphical user interface generated by the user interface driver 156 guides the installer through a process of capturing one or more images or movies of a site (e.g. a room or building) in which it is intended that a device (such as an infrared motion detector) is to be installed. Then, the installation assistant facility 150 causes the image processor 154 to process such images (such as by compression algorithms) to enable them to be communicated efficiently, by means of the communications driver, with other devices in the network 10. On receipt of files resulting from processing by other elements of the network 10, to be described in due course, the graphical user interface generates a display of an image, or a sequence of images, comprising a view of the installation site with indicative information to guide the installer on position and/or orientation of the device to be installed. The indicative information may be symbolic, or may include an image of the device superimposed on image data corresponding to images generated at the installation site in the first instance. The image of the installation site may comprise the original image data, or may comprise image data synthesised from the original image data.
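
By way of illustration of the image processing step described above (compression of captured images prior to transmission), the following Python sketch uses the Pillow library to downscale and re-encode a captured image as a reduced-quality JPEG held in memory. The library choice, file name and parameter values are assumptions of this example only.

```python
import io
from PIL import Image  # Pillow; assumed available on the capture device

def compress_for_upload(path: str, max_side: int = 1600, quality: int = 70) -> bytes:
    """Downscale and JPEG-encode a captured image for efficient transmission."""
    img = Image.open(path)
    img.thumbnail((max_side, max_side))          # in-place resize, preserving aspect ratio
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality, optimize=True)
    return buf.getvalue()

if __name__ == "__main__":
    payload = compress_for_upload("site_capture_001.jpg")  # hypothetical capture file
    print(f"compressed payload: {len(payload)} bytes")
```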

Alongside the installer smartphone 100, the installer computer 200 is provided. Collectively, these two devices can be considered an “installer client” commensurate with the introductory portion of this description. Figure 4 illustrates structural architecture of the installer computer 200.

As noted above, the installer computer 200 may be provided in a range of recognised formats, including a tablet or a laptop. Embodiments are not limited to any particular form factor for the installer computer 200 or, for that matter, for the installer smartphone 100.

As illustrated, the installer computer 200 comprises a processor 202 which, in use, is operable to execute computer executable instructions. In many implementations, a typical computer will include a plurality of processors, to provide application specific processing capability. For instance, a graphics processing unit (GPU) provides parallel processing facilities, favourable to the establishment of a graphics processing pipeline.

An input interface 204 is provided. This may include a physical keyboard, touchpad or connectivity to allow engagement of separate devices to implement the same. Further, the input interface receives a signal from a microphone 206 which, in use, can be used by a user to provide speech commands to the computer 200.

A communications interface 210 enables establishment of communications, according to predetermined communications protocols, with other devices. So, for example, a radio based communication channel can be established with the use of a radio transmitter/receiver (TxRx) 212 and an antenna 214. In some cases, a plurality of transmitter/receivers and corresponding antennas 214 are provided, to account for different communication technologies. So, in this described example, facilities are established for communication according to Wi-Fi network protocols, but this disclosure does not exclude the possibility that the computer 200 will also have access to 3GPP technologies. Other wireless communication technologies could also be provided, such as Bluetooth, and others yet to be contemplated. Further, wired technologies such as USB, Ethernet or powerline communication, could also be implemented in certain embodiments.

Further, as well as, or instead of real-time communication, a removable readable data medium could be implemented, to enable data to be introduced to and removed from the computer 200.

A digital camera 216 provides a facility for the capture of images, or movies, by the computer 200. This can be useful in establishing video communication with another party in the network. Further, it can be useful for user recognition, which can then be employed as a security/authorisation method.

Mass storage 220 is provided to enable non-volatile storage of program code and data in support of the function of the computer 200. The mass storage 220 is typically implemented as a magnetic disk based storage device, which has advantages in terms of storage capacity and cost.

A Read Only Memory (ROM) 222 provides rapid-access non-volatile storage, for system-level and other essential functionalities of the computer 200. A Random Access Memory (RAM) 224 provides read/write working memory for use, by the processor 202, in the execution of computer programs. In operation, the processor 202 draws application data stored in the mass storage 220 and stores it temporarily in the RAM 224, so that program instructions can be executed without delay that could be associated with memory access from the (relatively slow) mass storage 220.

An output interface 226 is provided, for interaction between processing elements of the computer 200 and the user. For instance, a display 228 is provided, as is a speaker 230. The output interface 226 enables the display of user interface images on the display 228, including, for instance, images of selectable buttons, or, for example, a virtual keyboard. A touch sensitive element of the display 228 may allow digital user interactions, which will then be received by the input interface 204 previously described. Alternatively, a pointing device, such as a mouse, may be used to collect user input actions which correspond to selection of a virtual button on-screen.

Finally, a clock 232 is illustrated, which governs the timing of execution of program instructions by processing elements of the computer 200. The clock 232 may include a facility for synchronisation to an external time signal, such as a time code transmitted by a radio transmitter (e.g. the timing signal established by the GPS satellite system).

As shown in figure 5, the installer computer 200 implements a number of auxiliary functionalities to be offered to the installer in use of the system. These functionalities can be collectively considered as an installation configuration utility 250, which presents a user interface by means of a user interface driver 252, by which the installer can configure how the installation instructions are to be presented. For this, an installation instructions driver 254 provides a facility for presentation of on-screen installation instructions, as configured by expert input at the expert computer 400 (as will be described in further detail later). This facility is provided in the event that it is deemed more convenient or appropriate to display the instructions on the generally larger screen of a computer than on the small display of a smartphone; the computer 200 may simply be easier to view than the hand-held smartphone 100.

The image/model manager 300 is illustrated schematically in figure 6. As illustrated, the image/model manager 300 comprises a processor 302 which, in use, is operable to execute computer executable instructions. An input interface 304 is provided, by which a user can enter control or configuration commands. Generally, in use, the image/model manager 300 operates without significant user intervention, but an input interface is useful for when the need arises. This includes a connection to an incoming signal from a microphone 306 which, in use, can be used by an operator to provide speech commands to the image/model manager 300.

A communications interface 310 enables establishment of communications, according to predetermined communications protocols, with other devices. For reasons of conciseness, the specific technology supporting the connection of the image/model manager 300 to the Internet 30 is not illustrated in figure 6 but it will be understood that this is not material to the disclosure.

Other wireless communication technologies could also be provided, such as Bluetooth, and others yet to be contemplated. Further, wired technologies such as USB, Ethernet or powerline communication, could also be implemented in certain embodiments.

Further, as well as or instead of real-time communication, a removable readable data medium could be implemented, to enable data to be introduced to and removed from the image/model manager 300.

Mass storage 320 is provided to enable non-volatile storage of program code and data in support of the function of the image/model manager 300. A Read Only Memory (ROM) 322 provides rapid-access non-volatile storage, for system-level and other essential functionalities of the image/model manager 300. A Random Access Memory (RAM) 324 provides read/write working memory for use, by the processor 302, in the execution of computer programs. In operation, the processor 302 draws application data stored in the mass storage 320 and stores it temporarily in the RAM 324, so that program instructions can be executed without delay that could be associated with memory access from the (relatively slow) mass storage 320. An output interface 326 is provided, for interaction between processing elements of the image/model manager 300 and an operator thereof. For instance, a display 328 is provided, as is a speaker 330. The output interface 326 enables the display of user interface images on the display 328, including, for instance, images of selectable buttons, or, for example, a virtual keyboard. A touch sensitive element of the display 328 may allow digital user interactions, which will then be received by the input interface 304 previously described.

Finally, a clock 332 is illustrated, which governs the timing of execution of program instructions by processing elements of the image/model manager 300. The clock 332 may include a facility for synchronisation to an external time signal, such as a time code transmitted by a radio transmitter (e.g. the timing signal established by the GPS satellite system).

As noted above in general terms, it will be clear to the reader that the image/model manager 300 can be implemented not only by a single computing device as shown in figure 6, but by a collection of interacting computing devices in a distributed manner. In such an approach, different computers, with different processing capabilities, may result in a more efficient processing outcome.

The functional architecture of the image/model manager 300 is illustrated in Figure 7. The image/model manager 300 implements an image/model processing utility 350 which, as suggested by the name, is operable on the one hand on images generated at the installer smartphone 100, and on the other hand on model data generated at the expert computer 400.

In general terms, the image/model processing utility 350 comprises an image stitcher 352 which is operable to receive image data from the installer client, and to identify how those images relate to each other. The image stitcher 352 will compare image data of two images, to find features common to both. Found correspondences between features of images are then used to build a map as to how images relate to each other - areas of overlap and whether images of the same object were captured from different orientations or distances. To aid in the stitching process, reference is made to metadata attached to an image, such as location data and/or orientation data, which is habitually captured by digital cameras and which, for the purposes of the described embodiment, is assumed to be activated.
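
Purely as a sketch of this feature-correspondence step, and not the only way the image stitcher 352 could be realised, the following Python example uses OpenCV ORB descriptors with brute-force matching to log correspondences between two captured images. The choice of ORB, the library and the file names are assumptions for illustration only.

```python
import cv2  # OpenCV; assumed available to the image/model manager

def match_features(path_a: str, path_b: str, keep: int = 50):
    """Detect ORB features in two images and return the best correspondences."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:keep]
    # Log each correspondence as a pair of pixel coordinates (image A, image B).
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]

if __name__ == "__main__":
    for pt_a, pt_b in match_features("capture_01.jpg", "capture_02.jpg")[:5]:
        print(f"{pt_a} <-> {pt_b}")
```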

A model builder 354 processes the relationships found between images by the image stitcher 352, to form a model in memory, describing the location captured in the original images.

In one embodiment, the model formed by the model builder 354 is three dimensional. This would ultimately enable images, such as those captured at the installer client, to be overlaid or integrated on this model, to enable a “walk-through” experience to be constructed on a display. Advantageously this can give the installer a real-world feel for the installation site, and the installer can easily determine the vertical position of various features which may assist the installer in assessing the site. In any case a 3D model can advantageously provide the expert with a virtual reality experience approximating their experience of on-site installations.

In another embodiment, the model formed by the model builder 354 is two-dimensional. Depending on the number and type of images captured of the installation site, the model may include representations of rooms in the site, and sizes and relative positions of those rooms.

There are a variety of ways in which a two-dimensional model of an installation site can be represented to a human viewer. One way is simply to provide a plan view - i.e. a technical illustration of the installation site from a viewpoint above the site. This plan view, which may for example be a “bird’s eye view”, conveys the general shape, size and configuration of the site, such as the position of walls and other features visible from the point of view.

It is worth noting that such an elevation is an orthographic projection of the installation site into a plane. That is, features which do not create a projection in the projection plane are not shown. In such a view, then, doors and windows present in the installation site may not be represented in the plan view.

Conventions have been established to convey additional information in a plan view other than those produced by a strictly applied orthogonal elevation. So, for instance, features such as windows and doors, electrical outlets, electrical switches, and so on, may be illustrated in a plan view. Such a representation may be described as a “floorplan”. Often in fact, a floorplan is a cross-section, in a plane parallel to the floor of an installation site, at a level to enable the portrayal of the location of doors and windows in the installation site. Other more symbolic approaches are also known, in which features of an installation site can be illustrated.

For the avoidance of doubt, reference to “plan” in the present disclosure does not attract any special meaning, and includes any elevation representation of the installation site (e.g. architectural drawings, bird’s eye views, etc) generated from a nominal viewpoint generally vertically above the installation site.

All of the above can be achieved while retaining the character of the model as a two-dimensional model. However, it may also be convenient to describe the installation site as a three-dimensional model.

In a convenient approach, a three-dimensional model is described as a data set, such as a file expressing shape, configuration and dimensions of elements of the installation site, for instance using a pre-defined mark-up language suitable for use by a computer aided design (CAD) package. In some cases, the data set includes line and vector information to enable suitable description of the installation site. The exact nature and structure of the data set is not material to the present disclosure.
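
Since the exact data set format is not prescribed, the following is offered purely as a hypothetical sketch of how a simple three-dimensional site description (walls as extruded line segments, plus tagged features) might be serialised, here as JSON; the schema and field names are assumptions of this example only.

```python
import json

# Hypothetical, minimal schema: walls as 2D segments extruded to a ceiling height,
# with features (door, window, switch) positioned by offset along named walls.
site_model = {
    "units": "metres",
    "ceiling_height": 2.4,
    "walls": [
        {"id": "w1", "from": [0.0, 0.0], "to": [4.0, 0.0]},
        {"id": "w2", "from": [4.0, 0.0], "to": [4.0, 3.0]},
        {"id": "w3", "from": [4.0, 3.0], "to": [0.0, 3.0]},
        {"id": "w4", "from": [0.0, 3.0], "to": [0.0, 0.0]},
    ],
    "features": [
        {"type": "door",         "wall": "w1", "offset": 0.5, "width": 0.9, "height": 2.0},
        {"type": "window",       "wall": "w1", "offset": 2.2, "width": 1.2, "sill": 0.9},
        {"type": "light_switch", "wall": "w1", "offset": 1.6, "height": 1.2},
    ],
}

print(json.dumps(site_model, indent=2))
```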

In this example, it is also possible to produce a number of different views of the three-dimensional model so defined. For instance, a plain, orthogonal plan view (e.g. bird’s eye view), as discussed above in the context of the two-dimensional model example, can be generated from the three-dimensional line and vector model. Further, as the data set contains information as to the position of identified features (e.g. doors, windows, electrical outlets) in the installation site, these features can be illustrated symbolically in a floorplan generated from the three-dimensional model.

In some embodiments, features recognised in the images may include items of furniture, either free-standing or built-in, in the installation site. These features may further be identified in the two-dimensional representation of the installation site.

There may be practical advantages to a two-dimensional model, as opposed to a three-dimensional model.

Firstly, a two-dimensional model is likely to impose a lower computational demand on the image/model manager 300. It is also likely that a two-dimensional model can be defined in less data than can a three-dimensional model. Finally, with regard to rendering of useful images at the expert computer 400, it is likely that a two-dimensional model will impose a lower computational demand on the expert computer 400.

Secondly, some expert users of the expert computer 400 will be highly familiar with working with two dimensional plans and technical drawings. So, presenting information to an expert in this paradigm will aid in interaction between the expert computer 400 and the expert, leading to improved results. A “walk-through”, as would be envisaged from a three dimensional model, may in fact be a less familiar experience for some expert users.

The model is sent to the expert computer 400, to enable the expert to impart expert information on location of the or each device to be installed.

An image transformer 356 is also implemented, and detailed description of this element will be provided in due course.

A communications driver 358 supports communication of files between the image/model manager 300 and the other network elements in the system 10.

As illustrated in figure 8, the expert computer 400 comprises a processor 402 which, in use, is operable to execute computer executable instructions. In many implementations, a typical computer will include a plurality of processors, to provide application specific processing capability. For instance, a graphics processing unit (GPU) provides parallel processing facilities, favourable to the establishment of a graphics processing pipeline.

An input interface 404 is provided. This may include a physical keyboard, touchpad or connectivity to allow engagement of separate devices to implement the same. Further, the input interface receives a signal from a microphone 406 which, in use, can be used by a user to provide speech commands to the computer 400. It also offers the opportunity for a voice channel to be established between the expert and the installer.

A communications interface 410 enables establishment of communications, according to predetermined communications protocols, with other devices. So, for example, a radio based communication channel can be established with the use of a radio transmitter/receiver (TxRx) 412 and an antenna 414. In some cases, a plurality of transmitter/receivers and corresponding antennas 414 are provided, to account for different communication technologies. So, in this described example, facilities are established for communication according to Wi-Fi network protocols, but this disclosure does not exclude the possibility that the computer 400 will also have access to 3GPP technologies.

Other wireless communication technologies could also be provided, such as Bluetooth, and others yet to be contemplated. Further, wired technologies such as USB, Ethernet or powerline communication, could also be implemented in certain embodiments.

Further, as well as, or instead of real-time communication, a removable readable data medium could be implemented, to enable data to be introduced to and removed from the computer 400.

A digital camera 416 provides a facility for the capture of images, or movies, by the computer 400. This can be useful in establishing video communication with another party in the network. Further, it can be useful for user recognition, which can then be employed as a security/authorisation method.

Mass storage 420 is provided to enable non-volatile storage of program code and data in support of the function of the computer 400. The mass storage 420 is typically implemented as a magnetic disk based storage device, which has advantages in terms of storage capacity and cost.

A Read Only Memory (ROM) 422 provides rapid-access non-volatile storage, for system-level and other essential functionalities of the computer 400. A Random Access Memory (RAM) 424 provides read/write working memory for use, by the processor 402, in the execution of computer programs. In operation, the processor 402 draws application data stored in the mass storage 420 and stores it temporarily in the RAM 424, so that program instructions can be executed without delay that could be associated with memory access from the (relatively slow) mass storage 420.

An output interface 426 is provided, for interaction between processing elements of the computer 400 and the expert user. For instance, a display 428 is provided, as is a speaker 430. The output interface 426 enables the display of user interface images on the display 428, including, for instance, images of selectable buttons, or, for example, a virtual keyboard. A touch sensitive element of the display 428 may allow digital user interactions, which will then be received by the input interface 404 previously described. Alternatively, a pointing device, such as a mouse, may be used to collect user input actions which correspond to selection of a virtual button on-screen.

Finally, a clock 432 is illustrated, which governs the timing of execution of program instructions by processing elements of the computer 400. The clock 432 may include a facility for synchronisation to an external time signal, such as a time code transmitted by a radio transmitter (e.g. the timing signal established by the GPS satellite system).

Figure 9 shows the processing facilities established by software executed at the expert computer 400. These processing facilities provide an expert interface utility 450, comprising a display driver 452, a user interface driver 454 and a communications driver 456. While a more detailed description of a process performed at the expert computer 400 will follow, in broad terms the display driver 452 enables the display of a graphical representation of the 3D model generated by the image/model manager 300. This can be viewed by the expert. A user interface is managed by the user interface driver 454, enabling the expert to enter user input actions to the computer 400, to manipulate the display of the model.

The manner in which the installation site is presented to the expert user is implementation specific.

So, in one example, the expert user may be presented with a two dimensional plan of the installation site. Note that this is not dependent on whether the model of the installation site is two-dimensional or three-dimensional; however, it will be appreciated that a two-dimensional model may be conveniently processed into a representation, on-screen, of a two dimensional plan of the installation site.

Conversely, the expert user may be presented with a representation of the installation site as a three dimensional scene. This could be derived conveniently from a model describing the installation site as a three dimensional model, but of course it will be appreciated by the reader that it is conceivable to construct a three dimensional scene from a two dimensional plan. Among other things, if a three dimensional scene is represented, the user interface may allow a “walk-through” function, with user input facilities provided so that the nominal point of view with reference to the model can be moved, thus causing corresponding change to the generated display. A zoom function, various rotation functions to change the orientation of the point of view, and other functions relating to the display may be provided.

The user interface may include a facility for an expert user to manually enter tags to identified objects in the displayed view. This applies whether the displayed view is of a two dimensional floorplan or of a three dimensional scene. So, for instance, if a particular door frame is a suitable reference point for the installation, this could be tagged. Then the user interface provides a facility to the expert to indicate a recommendation as to a location of a device to be installed in the displayed installation site. This location may be indicated by way of a symbol, or a graphical representation of a device, or of a part of a device. The location may be identified with respect to its distance from other objects identified or identifiable in the scene, such as a door frame, a cornice, a wall corner, a ceiling, and so on.

In the event that a two dimensional plan is presented, it may be convenient to associate, with a location recommendation, information describing a recommended installation height for the device. This height indication may be with reference to a floor datum or a ceiling datum.

The expert information is passed back to the image/model manager 300, by the communications driver 456. The expert information is related to the 3D model, but the reader will appreciate that it is not necessary for the 3D model to be sent back to the image/model manager 300 as it can be assumed that the image/model manager 300 retains a copy.

Then, returning to the functional representation of the image/model manager 300 in figure 7, the image transformer 356 generates image data corresponding to the image data originally produced by the installer smartphone 100. The image data generated by the image transformer 356 is enhanced by location information corresponding to the expert information entered by the expert on the expert computer 400.

Figure 10 illustrates a process performed at the installer smartphone 100 to initiate an installation advice request. The present example is based on an installation to be carried out at an installation site as illustrated in Figure 11. As shown in figure 11, the installation site 600 comprises a room, bounded by walls 602, a floor 604 and a ceiling 606. In one of the walls 602, a door 610 is installed, bounded on three sides (top and sides) by a doorframe 612. A skirting board 614 is affixed to the walls 602, following an edge thereof juxtaposed with the floor 604. A window 620 is positioned in the wall comprising the door 610, the window 620 being bounded by a window frame 622. A light switch 630 is positioned on the same wall as the door 610 and the window 620, positioned between them.

In step S1-2, the user interface instructs the user to capture a number of images of the intended installation site. The user interface may provide instructions as to the type of images to be captured, such as views, orientations, light conditions, and so on. Each image may capture a portion of the installation site. A bounding box 640 is illustrated in figure 11, in broken line, showing a typical extent of an image captured at this stage.

As shown in figure 12, the image capture stage results in a plurality of images, each of a portion of the room 600 designated as the installation site. The images may not all be captured from the same point, and may not all be of the same zoom settings. Metadata may be attached to each image, if available, indicating zoom level, location, and the orientation of the image capture device.

In an alternative approach, a video recording may be made of the installation site, in which case a stream of images may be conveyed.

Then, in step S1-4, an expert advice request is assembled.

In some embodiments, the expert advice request comprises an act of uploading captured images to a server. In other embodiments, a sequence of exchanges may take place, in which a server is informed, by the installer smartphone, that a request is being made for expert advice. This request element may precede or follow an act of uploading the images. The expert advice request can specify, by means of a suitable user interface, devices to be installed in the installation site. For instance, the user may specify which, of a range of available security devices, are to be installed. Alternatively, the system may provide for the expert advice to include recommendations as to the device(s) that should be selected by the user for installation and, for instance, a facility for the user to purchase the recommended devices.
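
As a non-limiting sketch of assembling and sending such a request, the following Python fragment uploads the captured images and a device list to a server in a single request. The endpoint URL, field names, response fields and the use of the requests library are assumptions for illustration only.

```python
import requests  # assumed HTTP client library

def send_expert_advice_request(image_paths, devices, site_id):
    """Upload captured images and the requested device list as one advice request."""
    files = [("images", (p, open(p, "rb"), "image/jpeg")) for p in image_paths]
    data = {"site_id": site_id, "devices": ",".join(devices)}
    # Hypothetical endpoint exposed by the image/model manager.
    resp = requests.post("https://example.com/api/advice-requests",
                         files=files, data=data, timeout=30)
    resp.raise_for_status()
    return resp.json()["request_id"]   # hypothetical response field

if __name__ == "__main__":
    request_id = send_expert_advice_request(
        ["capture_01.jpg", "capture_02.jpg"],
        devices=["motion_detector", "door_magnet"],
        site_id="site-0001")
    print("request logged as", request_id)
```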

In step S1-6, the request is sent. The process completes.

Figure 13 illustrates the response of the image/model manager 300 to receiving a request from the installer smartphone 100. In step S2-2, features in each received image are identified. This can be achieved using image processing and a suitably configured machine learning algorithm, which has been trained on training data of identified features in environments in which devices are expected to be installed. So the algorithm may be configured to identify, with an appropriate degree of confidence, features such as corners, picture rails, ceiling cornices, light switches, items of furniture, doors and windows and corresponding frames and other carpentry features. In step S2-4, identified features in each received image (or in frames of a video, if a video file is supplied) are compared. If features are found to correspond to each other, mappings are created and logged, in step S2-6.

An example of this can be seen from figure 14, which shows two images being compared for feature matching. In the first (top) pair of images, two features have been identified in one image which are determined to have a match to corresponding features of the other image. These features are indicated by broken line outlines.

From these feature mappings, and from orientation and location information for the images, the images can be stitched, in step S2-8. Then, in the subsequent stages of the stitching process, the features are brought into alignment, so that an overlap of the two images can be established. If one of the images needs to be scaled or warped to make the overlap fit, this is applied at this stage. As will be understood by the reader, the greater the number of matching features, the better the accuracy of the scaling and warping process.
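
The alignment and warping stage could, again purely as an illustrative sketch, be realised with a RANSAC homography estimated from matched feature coordinates, as in the following Python/OpenCV fragment; the file names are hypothetical and the matching approach follows the sketch given earlier.

```python
import cv2
import numpy as np

def warp_onto(path_a: str, path_b: str):
    """Warp image B into the coordinate frame of image A using matched features."""
    img_a = cv2.imread(path_a)
    img_b = cv2.imread(path_b)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = orb.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards correspondences that do not fit a single planar mapping.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    return cv2.warpPerspective(img_b, H, (w, h))

if __name__ == "__main__":
    cv2.imwrite("warped_02.jpg", warp_onto("capture_01.jpg", "capture_02.jpg"))
```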

Then, once the stitching has completed, a composite image is the end result, as shown in the bottom panel of figure 14.

The stitched image set then forms the basis of a model building process, in step S2-10, which builds a model in memory of the installation site. Figure 15 shows a graphical representation of a model of the installation site of the present example, in the form of a plan view. Naturally, as the reader will appreciate, this is simply a graphical representation and the model itself will be a data set which could be rendered in a number of different ways. For the purposes of this example, however, the walls 602, door 610, window 620 and light switch 630 are illustrated symbolically using a predetermined schema for representation of features.

The model is then transmitted to the expert computer 400 in step S2-12. In some embodiments, as explained in further detail below, transmission of the model to the expert computer 400 may be initiated by the expert computer 400.

Figure 16 illustrates how the expert computer 400 responds to receipt (step S3-2) of a model from the image/model manager 300. The model is either accompanied by information from the original request identifying the or each device to be located in the installation site, or this information can at this point be acquired from the installer computer 200. In some embodiments, the identity of the device to be installed can be implicit, in that only one device of known type is to be installed in all cases. In other embodiments, the device or devices to be installed may be the subject of expert recommendation, in which case a facility can be provided for the expert to input this information to the expert computer 400. In some embodiments, the expert computer 400 may download a model from the image/model manager 300 by way of a software implemented application. The application may present a front-end user interface. In that interface, an indication may be pushed, that a model is available for download. The download may be initiated by user input action, or may be initiated by a polling procedure, such as a regularly repeating polling procedure. In this way, the transfer of the model from the image/model manager 300 may be initiated by the expert computer 400, rather than by the image/model manager 300 as indicated above.
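
One possible, simplified realisation of the repeating polling procedure by which the expert computer 400 fetches a model is sketched below; the endpoint, response fields and polling interval are illustrative assumptions only.

```python
import time
import requests  # assumed HTTP client library

def poll_for_model(request_id: str, interval_s: int = 30) -> bytes:
    """Repeatedly ask the image/model manager whether a model is ready, then fetch it."""
    status_url = f"https://example.com/api/advice-requests/{request_id}"  # hypothetical
    while True:
        status = requests.get(status_url, timeout=10).json()
        if status.get("model_ready"):                       # hypothetical response field
            model = requests.get(status["model_url"], timeout=30)
            model.raise_for_status()
            return model.content                            # the model data set
        time.sleep(interval_s)                              # wait before the next poll

if __name__ == "__main__":
    data = poll_for_model("req-0001")
    print(f"downloaded model: {len(data)} bytes")
```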

A representation of the model is generated on screen (step S3-4) so that the expert user can view it. Various controls are offered to the expert user, so that the representation can be changed. The system responds to user input actions (step S3-6), by changing the representation as required.

A suitable graphical user interface at the expert computer 400 is illustrated in figure 17. As shown, a window 700 is represented on screen, comprising three fields. A site display field 702 provides a display region for display of a representation of the installation site to which the expert is to impart expertise. An inventory field 704, top right, provides a region for display of a list of devices to be installed in the installation site. As noted above, this list can be the result of input by the installer making the request for expert assistance, or it can be the result of input by the expert using the expert computer. The expert user has a facility to select (as shown by a line outline) one of the devices to be installed, so that a position recommendation can be made by the expert.

A configuration zone 706 (bottom right) presents configuration functions to the expert to aid in the operation of the expert advice application. As shown, the configuration zone 706 offers two display options, either a plan view or a 3D model view, but other control options may be provided as may be apparent to the reader.

For instance, for a 2D floorplan, it may be convenient to illustrate but a portion of the whole plan on screen. Functions may be provided to enable a user to move through the plan, or to magnify or reduce the presented image.

On the other hand, for a 3D representation of the installation site, a walk-through experience may be provided, and controls presented to the user may allow the on-screen representation to reflect changes in viewer position or angle of gaze, or a zoom function.

On input by the expert user, the computer 400 illustrates a location recommendation, the location recommendation being a positioning, specified by the expert user, of a device to be installed in the installation site represented by the model. This location recommendation is logged in step S3-8. As shown in figure 18, when each device listed in the inventory panel 704 has been selected and positioned, it is rendered in strike-through text.

The location recommendation is then sent back, S3-10, to the image/model manager 300 for processing as shown in figure 19. The location recommendation may include an indication of recommended installation height - this is particularly (but not exclusively) beneficial if the model is a two dimensional floorplan.

On receipt of a location recommendation, step S4-2, the location recommendation is integrated, step S4-4, with the model held at the image/model manager 300. As noted previously, this obviates the need for the expert computer 400 to transmit the whole model back to the image/model manager 300, with benefit in terms of communication channel demand.

The image/model manager 300 then generates, step S4-6, at least one image representation from the model, including the location recommendation. The or each image representation may correspond with an image captured by the installer smartphone 100, or it may be a synthesis of such images. Further, the image/model manager 300 may generate a plurality of image representations, each corresponding to installation stages of the device to be installed. This approach provides further guidance to the installer.

The or each image representation is then sent to the installer smartphone, step S4-8. The smartphone 100 is then configured to display the or each representation to the installer, to guide and inform installation of the device.

In certain embodiments, the image representation, including the expert installation recommendation, may be part of a 3D walk-through model, sent to or accessible by the installer smartphone 100. This particularly applies to embodiments in which the image/model manager 300 constructs a three dimensional model of the installation site, but the reader will appreciate that it may be possible to construct an approximate walk-through experience for the installer based on a two dimensional floorplan representation.

One example of the user interface presented to the installer at the installer smartphone 100 at this stage in the process is shown in figure 20. The user interface 800 comprises an installation site display panel 802 which offers a display region to enable the installer to see how devices are to be positioned in the installation site. An inventory panel 804 lists the devices to be installed, and enables the user to select one of the devices to be installed. The selection can be made by using a pointing device, such as a mouse, to identify one of the listed devices to be installed. The device may be selected in the inventory panel 804, or in the installation site display panel 802. This selection provides an indication of readiness of the installer to install the device in question. A configuration zone 806 enables the user to select how the installation site is to be displayed on screen.

The embodiment has been described by reference to selection of a device using a pointing device based selection. However, the reader will appreciate that other user selection actions can also be used to convey an indication of readiness to install a device. For instance, a device may be selected by keyboard action. A device may be selected by tactile interaction with a touchscreen facility of the installer smartphone 100. A device may further be selected by detecting a user utterance at a microphone of the smartphone 100, and using a suitable voice to text interface.

As shown in figure 20, the installation site is shown in schematic plan view, showing the locations 805 where devices are to be installed in the installation site by including indications 810 at the respective locations 805. This offers minimal detail of the site, beyond the general shape and configuration. It may be a line drawing, with no rendered detail. This is advantageous from the perspective of communication speed to the installer smartphone 100, data communication burden, and processing speed at the smartphone 100.

Figure 21 shows the same user interface but, in this case, the user has selected, in the configuration zone 806, the option to display the installation site as a 3D model. This 3D model is a line drawing, constructed from the model data generated at the image/model manager. Again, it is a minimal detail representation, simply indicating the locations 805 in the installation site where devices are to be installed.

On selecting a device to be installed, the installer user interface then transitions to a device installation representation. An example of this is shown in figure 22. In this case, the display panel 802 contains an illustration of an image portion, of the specific part of the installation site, corresponding to an installation location of the selected device to be installed. The image portion is bounded by a boundary 820 determined so as to be around the location of the object to be installed. Optionally the boundary of the image portion may be shown as a line along a perimeter of the image portion. The image portion may appear in a predefined part of the display panel 802, for example as shown in figure 22, or in other embodiments may encompass the entire display panel 802. There are a number of possible ways in which this detailed portion can be rendered.

In a first example, the image representation sent to the installer smartphone in step S4-8 comprises a rendered image of a portion of the installation site, the portion including the device 830 to be installed with a surrounding region of the installation site. The size of the portion is determined in one of a number of different ways. In the example shown in figure 22 the device 830 is composed of two components 830a and 830b that cooperate with each other, e.g. a magnet sensor and a magnet, respectively. However, in other embodiments, the device being installed may be a single component, whether that single component be the entirety of the device or one of a plurality of components that make up the device, e.g. a particular component of the plurality of components to be installed first. In the illustration of figure 22, there is a depiction of the device 830 and a further indication 810 of the location, in this example a box around the installation location. However, in other embodiments, the illustration of the device 830 at the installation location 805 may be the only provided visual indication of the installation location.

Firstly, there may be a pre-set outline defined at the image/model manager. This can be defined, for example, in the context of the size of the device to be installed. So, for example, if a relatively large device needs to be installed, a relatively large pre-set outline is defined. If a relatively small device needs to be installed, a relatively small pre-set outline is defined. Desirably, the outline is sufficiently large so as to enable the installer to appreciate, from the rendered image of the portion, how the device should be located and installed in the installation site, without unduly sacrificing the benefit of fully rendering only a portion of the installation site.

Secondly, the outline of the portion of the installation site may be defined with respect to an outline of the device to be installed. So, at the image/model manager, a tool may extend an outline of the device to be installed, to create a portion to be rendered with a suitable border around it. This portion may be defined as a geometric enlargement of the outline of the device to be installed. Alternatively, it may be defined as a bounding shape, such as a rectangular bounding box or an elliptical or circular bounding outline, sufficiently large to enable the installer to appreciate, from the rendered image of the portion, how the device should be located and installed in the installation site, without unduly sacrificing the benefit of fully rendering only a portion of the installation site.

Thirdly, the outline of the portion of the installation site can be defined at least partly with respect to a feature of the installation site in the vicinity of the location at which the device is to be installed. So, for example, the image/model manager can perform a vicinity search for a feature such as a wall, wall edge, cornice, wall vertex, ceiling, floor, door, door frame, window, window opening, article of fixed furniture, or electrical fitting, with respect to which the device installation location can be defined. Then, the outline of the portion of the installation site to be fully rendered can be set so as to encompass, at least in part, a feature identified in the vicinity search, so that the feature can then be used as a point of reference by an installer viewing the fully rendered portion on display thereof at the installer device.

Fourthly, the outline of the portion can be manually set by the expert, in conjunction with any or all of the preceding considerations. So, for example, the shape of the portion can be set as either matching the outline of the device to be installed, or another preset shape such as a circle, ellipse or rectangle, or a manually drawn outline. Then, the size of the outline can be set so as to balance the need to show a reasonable portion of the installation site while obtaining the benefit of only rendering a portion of the installation site. The outline can be set so as to encompass a feature, such as identified in the installation site by the expert, which can act as a point of reference for the installer. In this way, the expert knowledge of the expert can best be conveyed to the installer using the technology of the present disclosure.

In the context of all of the above examples, the image representation can be conveyed to the installer device and then rendered on screen for the benefit of the installer, in a number of ways.

Firstly, the fully rendered portion can be conveyed as a file, for instance compressed by an established compression process, for superposition on a low-detail (e.g. line drawing) representation of the installation site as a whole. This approach enables a limited use of the communication channel between the expert device and the installer device. Then, at the installer device, the fully rendered portion is displayed superposed on the low-detail representation of the installation site, which enables the viewer to appreciate the installation location without requiring full rendering of a graphical representation of the whole installation site.
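
A minimal sketch of this superposition step, assuming the Pillow library and hypothetical file names and coordinates, is given below: a fully rendered portion is pasted at the installation location onto a low-detail line drawing of the site.

```python
from PIL import Image  # Pillow; assumed available on the installer device

def superpose_portion(plan_path: str, portion_path: str, top_left):
    """Paste the fully rendered portion onto the low-detail site representation."""
    plan = Image.open(plan_path).convert("RGB")       # e.g. line-drawing plan view
    portion = Image.open(portion_path).convert("RGB")
    plan.paste(portion, top_left)                     # position received with the portion
    return plan

if __name__ == "__main__":
    composite = superpose_portion("site_plan.png", "portion_805.jpg", (420, 180))
    composite.save("installer_view.png")
```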

Secondly, the fully rendered portion may be defined by way of a coordinate file, the coordinate file defining a representation of the device to be installed, an installation position of the device to be installed, and data defining the boundary of the portion to be fully rendered. In this case, the representation itself of the fully rendered portion is not conveyed directly to the installer device, instead the fully rendered portion is rendered at the installer device based on the received coordinate file. This approach is useful if the processing capability of the installer device and the capacity of the communication channel are such that greater benefit can be derived from conveying less information and causing more processing to be effected at the installer device.
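
The coordinate file is not limited to any particular syntax; one hypothetical encoding, shown purely as a sketch, carries the device identity, its recommended position and the boundary of the portion to be rendered at the installer device. All identifiers and field names below are illustrative assumptions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PortionCoordinates:
    """Hypothetical coordinate file contents for one installation location."""
    device_id: str
    device_model: str
    position_xyz_m: tuple      # recommended installation position in site coordinates
    boundary_px: tuple         # (left, top, right, bottom) of the portion to render
    reference_feature: str     # nearby feature anchoring the rendering, e.g. a door frame

coords = PortionCoordinates(
    device_id="dev-830",
    device_model="door_magnet_sensor",
    position_xyz_m=(3.1, 0.05, 2.0),
    boundary_px=(640, 120, 1040, 420),
    reference_feature="door_frame_612",
)

# Serialised form conveyed to the installer device in place of a rendered image.
print(json.dumps(asdict(coords), indent=2))
```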

As will be understood by the reader, various functionalities of the described embodiment are not tied to specific devices. Indeed, depending on relative processing capabilities, functionalities can be hosted at client devices or server devices. An alternative configuration is shown in figure 23 to demonstrate this. The system 1010 of figure 23 comprises an installer smartphone 1100 and an expert computer 1400 of similar hardware configuration to the previous example. In this case, for simplicity, the installer smartphone 1100 is shown connected to the internet 1030 by means of a cellular network represented by a mast 1022, whereas the expert computer 1400 is shown connected via a Wi-Fi hub 1025.

In this case, the previously described image/model manager 300 is not provided in the embodiment by a specific network element, but its functionalities are provided by other elements of the network. In particular, as shown in figure 24, numerous functionalities are provided at the smartphone 1100, as an installation assistant utility 1150. These functionalities are either hosted directly on the smartphone 1100 or distributed, network-hosted processing resources are called upon by the smartphone 1100. In particular, the smartphone 1100, as before, implements a camera driver 1152, an image processor 1154, a user interface driver 1156 and a communications driver 1166. However, in addition, the smartphone 1100 implements functionalities hosted, in the previous example, at the image/model manager 300. So, the installation assistant utility 1150 also implements an installation instructions driver 1158, an image stitcher 1160, a model builder 1162, and an image transformer 1164.

Working in conjunction with the expert computer 1400, the installation assistant utility 1150 is able to provide to the user one or more images illustrating an installation recommendation provided by an expert, without the need for the expert to actually perform a physical site visit.

As noted above, the inventory field 704 in the graphical user interface displayed at the expert computer lists devices to be installed. This can be the result of input by the expert of a recommended inventory of devices, or of data entry by the installer. In the latter case, for the greater convenience of the installer, facilities may be provided on devices to enable data entry to be made as efficiently as possible, and with minimal risk of device confusion where multiple devices are to be installed.

So, figure 25 illustrates a device 810 onto which a machine-readable visual code 812 is integrated, e.g. by affixing or printing. In this example, the visual code 812 is a matrix code, which may be in the QR format. The code 812 encodes product identifying information which, when scanned using a suitable application on the installer smartphone 100, is used to identify the device in the installer's possession. The product identifying information is conveyed to the expert computer and is represented in the inventory field 704. Then, the expert is able to determine a recommendation as to installation of the device represented by the product identifying information.
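
By way of illustration, the sketch below decodes the matrix code 812 using the OpenCV library; the report_to_expert call is a hypothetical stand-in for whatever channel conveys the product identifying information to the expert computer.

import cv2

def scan_device_code(image_path):
    # Decode the machine-readable visual code 812 from a photograph of the device 810.
    image = cv2.imread(image_path)
    detector = cv2.QRCodeDetector()
    product_id, corner_points, _ = detector.detectAndDecode(image)
    if not product_id:
        raise ValueError("no machine-readable code found in the image")
    return product_id

# product_id = scan_device_code("device_photo.jpg")
# report_to_expert(product_id)  # hypothetical: populates the inventory field 704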

Separately, and alternatively or in addition, the scannable code 812 can be used by an installer at the stage represented in figure 20, at which the installer is invited to select a device to install. In this scenario, the identity of the devices to be installed may have been dictated by expert recommendation, or may have been entered manually by the installer, or may have been entered through scanning of devices already in the possession of the installer.

Regardless of the method by which the device list has been populated, the graphical user interface of figure 20 offers a choice to the installer as to which listed device is to be installed. For the convenience of the installer, the device 810 illustrated in figure 25 can be selected by scanning, by the installer smartphone 100, the scannable code 812. Upon such a selection, in the context of the graphical user interface of figure 20, the installer user interface transitions, as before, to a device installation representation, as exemplified in figure 22.

Further implementations of the device 810 can be provided with scannable device information in alternative ways. Figure 26 illustrates one implementation, in which the device 810 is implemented with a near-field communication (NFC) device 814, readable by a suitable NFC scanner such as is commonly implemented in a smartphone. So, assuming that the installer smartphone 100 includes NFC reading capability, the NFC device 814 of the device 810 can be read to obtain device identification information as related above.
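
A sketch of reading such an NFC device follows, assuming the third-party nfcpy library and a contactless reader attached over USB; on an actual installer smartphone, the platform's native NFC interface would be used instead.

import nfc
import ndef

def read_device_identity():
    # Wait for the device's NFC tag 814 and return the identification string it carries.
    clf = nfc.ContactlessFrontend("usb")
    try:
        tag = clf.connect(rdwr={"on-connect": lambda tag: False})  # return the tag immediately
        if tag and tag.ndef:
            for record in tag.ndef.records:
                if isinstance(record, ndef.TextRecord):
                    return record.text
        return None
    finally:
        clf.close()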

In an embodiment, an image recognition facility may be provided, such that, on capture of a photographic image of the device 810 at the smartphone 100, the image may be processed by the image recognition facility to determine the identity of the device. In particular, the image recognition facility may be capable of determining a category, type or model of a device so recognised. From this, the device can be selected for installation simply by taking a photograph of the device. The image recognition facility may be a classifier established by means of a machine learning algorithm, trained on a range of devices that may be contemplated for installation.
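
An inference-time sketch of such an image recognition facility is given below, assuming PyTorch and torchvision; the weights file device_classifier.pt and the label list are hypothetical stand-ins for a classifier trained on the range of devices contemplated for installation.

import torch
from torchvision import transforms
from PIL import Image

LABELS = ["motion sensor", "door contact", "siren", "indoor camera"]  # illustrative categories

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def recognise_device(photo_path):
    # Classify a photograph of the device into a category, type or model.
    model = torch.jit.load("device_classifier.pt")  # hypothetical trained classifier
    model.eval()
    image = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return LABELS[int(logits.argmax(dim=1))]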

In an embodiment, a magnetic storage device may be affixed to the device to be installed, the magnetic storage device storing an identification data item comprising an identification data string in a format readable by a suitably configured magnetic reader.

It will be understood that the invention is not limited to the embodiments described above, and various modifications and improvements can be made without departing from the concepts described herein. Except where mutually exclusive, any of the features may be employed separately or in combination with any other features, and the disclosure extends to and includes all combinations and sub-combinations of one or more features described herein.

While the foregoing is directed to embodiments presented in this disclosure, other and further embodiments may be devised without departing from the basic scope of the contemplated embodiments. That is, although specific embodiments and numerous specific details are set forth to provide a more thorough understanding of the present disclosure, persons skilled in the art will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.