
Title:
A PHOTOGRAMMETRY SCANNER SYSTEM AND IMAGING ASSEMBLY
Document Type and Number:
WIPO Patent Application WO/2024/011287
Kind Code:
A1
Abstract:
A photogrammetry scanner system for capturing images of a target. The photogrammetry scanner system includes a frame, a plurality of image sensors connected to and spaced apart on the frame, and a plurality of light emitting devices connected to and spaced apart on the frame and configured to project light in a lighting pattern.

Inventors:
POWELL SEAN KEIRAN (AU)
WOODRUFF MARIA ANN (AU)
Application Number:
PCT/AU2023/050644
Publication Date:
January 18, 2024
Filing Date:
July 12, 2023
Assignee:
UNIV QUEENSLAND TECHNOLOGY (AU)
International Classes:
G01C11/02; G01B11/25; G01B11/30; G06T1/00; G06T17/20; H04N13/00; H04N13/167; H04N13/243; H04N13/254; H04N13/271; H04N13/296
Domestic Patent References:
WO2019101901A1 (2019-05-31)
WO2017063011A1 (2017-04-20)
Foreign References:
US20180302572A1 (2018-10-18)
US20190063917A1 (2019-02-28)
Other References:
ANONYMOUS: "3D scanning", WIKIPEDIA, 28 June 2022 (2022-06-28), XP093130950, Retrieved from the Internet [retrieved on 20240213]
HESAMH: "DIY 3D Scanner Based on Structured Light and Stereo Vision in Python Language", INSTRUCTABLES, 11 December 2020 (2020-12-11), XP093130953, Retrieved from the Internet [retrieved on 20240213]
Attorney, Agent or Firm:
MICHAEL BUCK IP (AU)
Claims:
CLAIMS

1. A photogrammetry scanner system for capturing images of a target comprising: a frame; a plurality of image sensors connected to and spaced apart on the frame; and a plurality of light emitting devices connected to and spaced apart on the frame and configured to project light in a lighting pattern, wherein the scanner system is configured to perform a coordinated lighting and imaging sequence of a target, wherein the plurality of light emitting devices project light in a lighting pattern and one or more images of the target are captured by the plurality of image sensors.

2. The photogrammetry scanner system according to claim 1, wherein each image sensor of the plurality of image sensors is configured with a wide field of view and a short focal distance.

3. The photogrammetry scanner system according to claim 1 or claim 2, wherein the projection of light from the plurality of light emitting devices is synchronised to the image capture sequence of the plurality of image sensors.

4. The photogrammetry scanner system according to any one of claims 1 to 3, wherein the image capture sequence comprises activating a light emitting device adjacent an image sensor and subsequently capturing an image via the image sensor adjacent the light emitting device.

5. The photogrammetry scanner system according to any one of claims 1 to 4, wherein an arrangement of the plurality of image sensors and the plurality of light emitting devices comprises each image sensor of the plurality of image sensors being adjacent a light emitting device of the plurality of light emitting devices.

6. The photogrammetry scanner system according to any one of claims 1 to 5, wherein no two image sensors of the plurality of image sensors are coplanar.

7. The photogrammetry scanner system according to any one of claims 1 to 6, wherein each image sensor of the plurality of image sensors is offset from every other image sensor of the plurality of image sensors in three dimensions.

8. The photogrammetry scanner system according to any one of claims 1 to 7, wherein each image sensor of the plurality of image sensors has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other image sensor of the plurality of image sensors.

9. The photogrammetry scanner system according to any one of claims 1 to 8, wherein a light emitting device of the plurality of light emitting devices is independently controllable relative to another light emitting device of the plurality of light emitting devices.

10. The photogrammetry scanner system according to any one of claims 1 to 9, wherein each light emitting device of the plurality of light emitting devices comprises a plurality of lights emitting light at different wavelengths.

11. The photogrammetry scanner system according to any one of claims 1 to 10, wherein the plurality of light emitting devices are configured to emit a range of wavelengths of light including at least one of ultraviolet light, visible light and near infrared light.

12. The photogrammetry scanner system according to any one of claims 1 to 11, wherein a grate is located over the plurality of light emitting devices.

13. The photogrammetry scanner system according to any one of claims 1 to 12, wherein the photogrammetry scanner system further comprises an orientation sensor and/or an acceleration sensor configured to determine and track scanner location and orientation to assist with alignment of the target within the field of view of the image sensors.

14. The photogrammetry scanner system according to any one of claims 1 to 13, wherein the frame is modular and the frame comprises a plurality of releasably connectable receptacles, each receptacle configured to receive an image sensor and a light emitting device therein.

15. The photogrammetry scanner system according to any one of claims 1 to 14, wherein the frame comprises a surface or surfaces having the plurality of image sensors and the plurality of light emitting devices connected thereto.

16. A method of photogrammetry scanning, the method includes: providing a photogrammetry scanner system having a plurality of image sensors and a plurality of light emitting devices connected to and spaced apart on a frame; activating the photogrammetry scanner system to perform a coordinated lighting and imaging sequence of a target, wherein the plurality of light emitting devices project light in a lighting pattern and one or more images of the target are captured by the plurality of image sensors.

17. An imaging assembly, the imaging assembly comprising: a body having an image sensor, a light emitting device and a processing assembly connected thereto; the processing assembly being configured to coordinate a lighting and imaging sequence of a target, wherein the imaging assembly is designated as a controller imaging assembly amongst a network of imaging assemblies each in digital communication with the processing assembly and is thus configured to coordinate the lighting and imaging sequence of the target of one or more imaging assemblies in the network of imaging assemblies.

18. The imaging assembly according to claim 17, wherein the imaging assembly comprises a housing, wherein the body forms part of the housing.

19. The imaging assembly according to claim 17 or claim 18, wherein the processing assembly is configured to form the network of imaging assemblies by detecting and communicating with one or more additional imaging assemblies.

20. The imaging assembly according to any one of claims 17 to 19, wherein an additional imaging assembly of the one or more additional imaging assemblies is detected by the controller imaging assembly when the additional imaging assembly is electrically activated.

Description:
A PHOTOGRAMMETRY SCANNER SYSTEM AND IMAGING ASSEMBLY

TECHNICAL FIELD

[1] The present invention relates to a photogrammetry scanner system and imaging assembly for capturing images of a target. In particular, the photogrammetry scanner system is for capturing images of a target that can be used to create three-dimensional images and models of the target.

BACKGROUND

[2] Any references to methods, apparatus or documents of the prior art are not to be taken as constituting any evidence or admission that they formed, or form part of the common general knowledge.

[3] Photogrammetry is the technique of extracting three-dimensional information from photographs. It involves the processing of a set of overlapping photographs of a target object to produce a 3D computer model of the object.

[4] Typical photogrammetry reconstruction algorithms search sets of images for common features, and compute the three-dimensional (3D) coordinates of these features by correlating relative feature locations. The relative location of corresponding cameras can also be determined in the same way.
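
By way of illustration only, the following sketch makes the feature-correlation step concrete: it matches two overlapping images and triangulates the matched features into 3D points using OpenCV. This is an illustrative implementation choice, not software disclosed in this application; the projection matrices P1 and P2 are assumed to be known from calibration.

```python
import cv2
import numpy as np

def triangulate_pair(img1, img2, P1, P2):
    """Match features between two overlapping views and triangulate them."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming-distance matching with a ratio test to reject ambiguous matches.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T  # 2 x N pixels
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

    # Linear triangulation against the 3x4 projection matrices; normalise
    # the homogeneous result to obtain an N x 3 point-cloud fragment.
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T
```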

[5] By extrapolating to many features and photographs, the relative camera coordinates and a point-cloud representing the 3D morphology of the target object can be computed. From this point-cloud, artificial neural-net or meshing techniques can be employed to construct a three-dimensional computer model consisting of vertices and polygons (triangles or N-gons). To further improve the quality of the final 3D object, per-pixel depth maps can be computed for each input image, with a value assigned to each pixel to indicate its relative distance from the sensor. The depth maps for several images can then be combined to estimate additional structural information between the points of the computed 3D feature point-cloud prior to computing the mesh. During each of the processing stages, various algorithms are available to filter unwanted noise data and apply 3D smoothing, remeshing and noise reduction. The final stage typically involves computing UV coordinates for the resulting 3D mesh and, using the source images, producing a 2D texture for mapping onto the mesh via the UV coordinates.

[6] Two challenges in photogrammetry are (1) providing optimal source images for feature detection and depth estimation, and (2) ensuring optimal sampling of the target object and sufficient (and reliable) image overlap for un-biased 3D object reconstruction.

[7] Optimal source images would contain only the object to be scanned (without background), have no noise or lens distortion, be sharp and have high detail. The object would be maximised in the field of view with sufficient sharp distinguishable features distributed across its surface and be matt (non-reflective) and lit in such a way as to minimise lighting artefacts such as specular reflections and unwanted shadows. To enhance feature detection, indirect lighting that produces shadows may be desirable as opposed to flat direct lighting.

[8] Optimal object sampling is important to maximise the useful data that is collected for a given set of images. If regions of the object are insufficiently sampled (i.e., not enough image coverage), or not all regions of the object are covered with sufficiently overlapping images, the reconstruction process will produce a sub-optimal model.

[9] Photogrammetry can be carried out using a single camera, often handheld or attached to a moving object such as a plane or other vehicle, which takes multiple photographs of the object from different locations. Another approach is to use multiple cameras and lights arranged in different locations on a framework aiming at the object, with a system used to synchronously trigger the cameras. A related technique uses overlapping survey images (aerial, satellite, or ground-vehicle based) to reconstruct 3D models of large areas. These multi-camera setups are often heavy and expensive, require a lot of space, and are difficult to assemble and move around. Furthermore, due to the high image quality produced by the cameras used in such systems, long processing times can also create a bottleneck impacting throughput.

[10] A weakness of photogrammetry is its difficulty reconstructing uniform surfaces due to a lack of detail for feature detection and depth-map computation. A method to remove this limitation is to take two images per sensor; one image with a pattern projected onto the object for geometry reconstruction, and another without a pattern for texture map generation.

[11] A related 3D surface scanning technology uses structured light methods. Unlike photogrammetry, which uses computer vision techniques to look for features in the target object, structured light methods project a pattern (or patterns) onto the object being scanned (typically a single line or series of parallel lines), and then observe the resulting shape of the pattern. By determining the changes in shape, the 3D geometry of the target object can be determined. Common implementations of structured light 3D scanning involve a handheld 3D scanner (pattern projector/imager) which is swept over the object. The computed 3D geometry of the target object is refined over time from multiple samples.
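
As an illustration of the structured light principle, the sketch below recovers a 3D surface point by intersecting a camera pixel ray with the known plane of a projected laser line. The intrinsics (fx, fy, cx, cy) and the laser-plane parameters (p0, n) are assumed calibration values; none of these names come from this application.

```python
import numpy as np

def pixel_to_point(u, v, fx, fy, cx, cy, p0, n):
    """Intersect the viewing ray of pixel (u, v) with the laser plane."""
    # Back-project the pixel to a viewing ray through the camera centre.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Ray/plane intersection: choose t so that (t * ray - p0) . n == 0.
    t = np.dot(n, p0) / np.dot(n, ray)
    return t * ray  # 3D point on the object surface, camera coordinates

# Example: a pixel on the detected laser line, with arbitrary calibration
# values (100 mm projector baseline, plane tilted towards the camera axis).
point = pixel_to_point(412, 300, fx=900, fy=900, cx=320, cy=240,
                       p0=np.array([0.1, 0.0, 0.0]),
                       n=np.array([0.8, 0.0, 0.6]))
print(point)  # approximately [0.012, 0.008, 0.117] metres
```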

SUMMARY OF INVENTION

[12] In an aspect, the invention provides a photogrammetry scanner system for capturing images of a target comprising: a frame; a plurality of image sensors connected to and spaced apart on the frame; and a plurality of light emitting devices connected to and spaced apart on the frame and configured to project light in a lighting pattern.

[13] Preferably, each image sensor of the plurality of image sensors is configured with a wide field of view and a short focal distance. This enables the image sensors to be located closer to the target object than in a typical camera photogrammetry rig, so the photogrammetry scanner system can be made smaller and more portable.

[14] Preferably, the plurality of image sensors comprises a plurality of cameras. Preferably, the plurality of light emitting devices comprises a plurality of light emitting diodes and/or a plurality of laser diodes for pattern recognition.

[15] Preferably, the scanner system is configured to perform a coordinated lighting and imaging sequence of a target, wherein the plurality of light emitting devices project light in a lighting pattern and one or more images of the target are captured by the plurality of image sensors.

[16] Preferably, the plurality of image sensors are located at fixed positions on the frame. Preferably, an orientation or angle of each image sensor of the plurality of image sensors is dynamically variable. Preferably, each image sensor of the plurality of image sensors is connected to a servo motor to allow the orientation or angle to be dynamically varied.

[17] Preferably, the plurality of light emitting devices and the plurality of image sensors are configured to coordinate the projection of light from the plurality of light emitting devices with an image capture sequence by the plurality of image sensors. Preferably, the projection of light from the plurality of light emitting devices is synchronised to the image capture sequence of the plurality of image sensors. Preferably, the image capture sequence comprises each image sensor of the plurality of image sensors capturing one or more images. Preferably, the image capture sequence comprises the plurality of image sensors simultaneously capturing one or more images. Preferably, the image capture sequence comprises activating a light emitting device adjacent an image sensor and subsequently capturing an image via the image sensor adjacent the light emitting device.

[18] Preferably, an arrangement of the plurality of image sensors and the plurality of light emitting devices comprises each image sensor of the plurality of image sensors being adjacent a light emitting device of the plurality of light emitting devices.

[19] Preferably, there is an arrangement of image sensors of the plurality of image sensors and light emitting devices of the plurality of light emitting devices on each of a plurality of arms of the frame, wherein the arrangement on each arm comprises each image sensor being adjacent a light emitting device.

[20] Preferably, the arrangement on each arm is inverse to the arrangement of the immediately adjacent arm. Alternatively, the arrangement on each arm is inverted relative to the arrangement of the immediately adjacent arm.

[21] Preferably, no two image sensors of the plurality of image sensors are coplanar.

[22] Preferably, the frame comprises a surface or surfaces having the plurality of image sensors and the plurality of light emitting devices connected thereto. Preferably, the surface or surfaces is/are flat or curved or arcuate.

[23] Preferably, the frame is a handheld, portable frame. Alternatively, the frame is a portable frame.

[24] Preferably, the frame is modular. Preferably, the frame comprises a plurality of receptacles, each receptacle configured to receive an image sensor and a light emitting device therein. Preferably, the plurality of receptacles are releasably connectable.

[25] Preferably, the frame comprises a plurality of arms. Preferably, the plurality of arms are spaced apart along a frame support member. Preferably, each of the plurality of arms extend substantially perpendicular from the frame support member.

[26] Preferably, each arm includes one or more of the plurality of image sensors thereon. Preferably, each arm is dynamically configurable.

[27] Preferably, each arm of the plurality of arms is curved. Preferably, a curvature of each arm is substantially equal.

[28] Preferably, the scanner system includes a plurality of scanner assemblies. The scanner assemblies comprise one or more imaging assemblies. The scanner assemblies further comprise one or more secondary assemblies.

[29] Preferably, each arm includes an imaging assembly including an image sensor and a light emitting device.

[30] Preferably, each image sensor of the plurality of image sensors is offset from every other image sensor of the plurality of image sensors in three dimensions.

[31] Preferably, each image sensor of the plurality of image sensors has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other image sensor of the plurality of image sensors.

[32] Preferably, the plurality of image sensors is configured to be sensitive to a range of wavelengths of light. Preferably, the plurality of image sensors is configured to be sensitive to a range of wavelengths of light (including both visible and non-visible wavelengths) including at least one of ultraviolet light, visible light and near infrared light.

[33] Preferably, a light emitting device of the plurality of light emitting devices is independently controllable relative to another light emitting device of the plurality of light emitting devices. Preferably, the plurality of light emitting devices are configured to emit a range of wavelengths of light. Preferably, each light emitting device of the plurality of light emitting devices comprises a plurality of lights emitting light at different wavelengths. Preferably, the plurality of lights are independently configurable and/or controllable. Preferably, the plurality of light emitting devices are configured to emit a range of wavelengths of light including at least one of ultraviolet light, visible light and near infrared light.

[34] Preferably, the plurality of image sensors are configured to detect a wavelength output by the plurality of light emitting devices.

[35] Preferably, the light emitting devices are configured to project a static or dynamic lighting pattern at a predetermined wavelength. Preferably, the light emitting devices are configured to project a static or dynamic lighting pattern at a predetermined wavelength by: a) controlling the lighting element sequencing; or b) projecting a pattern via a pattern screen and/or lens; or c) using a laser pattern generator (either scanning or through an optical grating or holographic lens).

[36] Preferably, a grate is located over the plurality of light emitting devices. Preferably, the grate may be static or dynamic. The grating enables modulated pattern projection for hybrid photogrammetry and/or structured light scanning.

[37] Preferably, the photogrammetry scanner system further comprises an orientation sensor and/or an acceleration sensor. Preferably, the orientation sensor comprises a gyroscope and/or an accelerometer. Preferably, the acceleration sensor comprises an accelerometer. The orientation sensor and/or acceleration sensor can be used to determine and track scanner location and orientation to assist with alignment of the target within the field of view of the image sensors.

[38] Preferably, each light emitting device of the plurality of light emitting devices is offset from every other light emitting device of the plurality of light emitting devices in each of three dimensions.

[39] Preferably, each light emitting device of the plurality of light emitting devices has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other light emitting device of the plurality of light emitting devices.

[40] Preferably, the photogrammetry scanner system further comprises one or more controllers configured to control the plurality of image sensors and/or the plurality of light emitting devices.

[41] Preferably, the photogrammetry scanner system includes a three-dimensional (3D) depth sensor for computing accurate point cloud data, enabling hybrid LIDAR and photogrammetry systems to be implemented. Preferably, the 3D depth sensor comprises a LIDAR sensor.

[42] In another aspect, the invention provides a method of photogrammetry scanning, the method includes: providing a photogrammetry scanner system having a plurality of image sensors and a plurality of light emitting devices connected to and spaced apart on a frame; activating the photogrammetry scanner system to perform a coordinated lighting and imaging sequence of a target, wherein the plurality of light emitting devices project light in a lighting pattern and one or more images of the target are captured by the plurality of image sensors.

[43] Preferably, the method includes projecting patterned light. Preferably, the patterned light is generated by projecting light from a laser diode through a grating or hologram adjacent the laser diode. This provides for hybrid structured light and photogrammetry systems to be implemented.

[44] In another aspect, the invention provides an imaging assembly, the imaging assembly comprising: a body having an image sensor, a light emitting device and a processing assembly connected thereto; the processing assembly being configured to coordinate a lighting and imaging sequence of a target, wherein the imaging assembly is designated as a controller imaging assembly amongst a network of imaging assemblies each in digital communication with the processing assembly and is thus configured to coordinate the lighting and imaging sequence of the target of all imaging assemblies in the network of imaging assemblies.

[45] Preferably, the imaging assembly comprises a housing, wherein the body forms part of the housing.

[46] Preferably, the processing assembly is configured to form the network of imaging assemblies by detecting and communicating with one or more additional imaging assemblies.

[47] Preferably, an additional imaging assembly of the one or more additional imaging assemblies is detected by the controller imaging assembly when the additional imaging assembly is electrically activated.

BRIEF DESCRIPTION OF THE DRAWINGS

[48] Preferred features, embodiments and variations of the invention may be discerned from the following Detailed Description which provides sufficient information for those skilled in the art to perform the invention. The Detailed Description is not to be regarded as limiting the scope of the preceding Summary of the Invention in any way. The Detailed Description will make reference to a number of drawings as follows:

Figure 1 is a front view of a photogrammetry scanner system for capturing images of a target according to an embodiment of the invention;

Figure 2 is a side view of the photogrammetry scanner system;

Figure 3 is a front view of an arm of the photogrammetry scanner system;

Figure 4 illustrates a side view of an arm of the photogrammetry scanner system;

Figures 5 and 6 illustrate the image sensors, light emitting devices and controller of the photogrammetry scanner system on a printed circuit board;

Figure 7 is a functional block diagram of the photogrammetry scanner system;

Figures 8 and 9 illustrate an embodiment of the photogrammetry scanner system imaging a face;

Figures 10 and 11 illustrate the photogrammetry scanner system being reconfigured to be folded for storage and/or transport;

Figures 12 to 14 illustrate an imaging assembly according to an embodiment of the present invention;

Figures 15 and 16 illustrate a frame for a photogrammetry scanner system;

Figures 17 and 18 illustrate a photogrammetry scanner according to another embodiment of the present invention;

Figure 19 illustrates a modular frame for a photogrammetry scanner system;

Figure 20 illustrates a portion of a photogrammetry scanner system using a modular frame; and

Figure 21 illustrates a close up view of a connecting member for connecting receptacles (modules) of a modular frame.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

[49] Figures 1 to 5 depict a photogrammetry scanner system 10 for capturing images of a target. In particular, the photogrammetry scanner system 10 is for capturing images of a target that can be used to create three dimensional images of the target. An example of a target includes the head or face of a person. However, the target could include any part of a person, or another object.

[50] The photogrammetry scanner system 10 includes a frame 100, and a plurality of scanner assemblies comprising a plurality of imaging assemblies 101 connected to the frame 100. The scanner assemblies may also comprise one or more secondary assemblies that perform secondary functions, such as lighting or control of the imaging sequence, for example.

[51] The imaging assemblies 101 are connected to and spaced apart on the frame 100 in an array. More particularly, the imaging assemblies 101 include a plurality of image sensors 120 connected to and spaced apart on the frame 100 in an array and a plurality of light emitting devices 140 connected to and spaced apart on the frame 100 and configured to emit light in a lighting pattern. In some embodiments, each imaging assembly 101 includes at least one image sensor and one light emitting device as a pair.

[52] In use, the plurality of light emitting devices 140 project light in a lighting pattern onto a target and one or more images of the target are captured by the plurality of image sensors 120.

[53] That is, the photogrammetry scanner system 10 is configured to perform a coordinated lighting and imaging sequence of a target, wherein the plurality of light emitting devices 140 project light in a lighting pattern and one or more images of the target are captured by the plurality of image sensors 120.

[54] In some embodiments, the coordinated lighting and imaging sequence includes activating a light emitting device of the plurality of light emitting devices 140 adjacent an image sensor of the plurality of image sensors 120 and subsequently capturing an image via the image sensor adjacent the light emitting device. In other embodiments, the light emitting devices and image sensors may be activated non-sequentially such that a light emitting device is activated and a non-adjacent image sensor subsequently captures an image.
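
A minimal sketch of such a coordinated sequence is shown below. The LightEmitter and ImageSensor classes are hypothetical stand-ins for the scanner's hardware drivers, which this application does not specify; the settle delay is likewise an assumption.

```python
import time

class LightEmitter:
    def on(self): ...
    def off(self): ...

class ImageSensor:
    def capture(self) -> bytes: ...

def run_sequence(pairs, settle_s=0.01):
    """pairs: list of (LightEmitter, ImageSensor) adjacent pairs on the frame."""
    images = []
    for light, sensor in pairs:
        light.on()                       # activate the light adjacent to the sensor
        time.sleep(settle_s)             # allow illumination to stabilise
        images.append(sensor.capture())  # then capture via the adjacent sensor
        light.off()
    return images
```

A non-sequential variant, as described above, would simply pair a light emitting device on one assembly with an image sensor on another.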

[55] In some embodiments, the image sensors have a wide field of view and short focal distance which allows the image sensors to be located closer to the target than typical camera photogrammetry rigs to optimise their object/sensor ratio. That is, to optimise the per-sensor coverage of the target and ensure sufficient overlap between images from adjacent sensors.

[56] The plurality of image sensors take the form of a plurality of cameras and the plurality of light emitting devices take the form of a plurality of light emitting diodes. It will be appreciated that any form of light emitting device may be used, including, for example, organic light-emitting diodes (OLED).

[57] Embodiments of the present invention use inch image sensors.

[58] By providing some embodiments of the photogrammetry scanner system 10 with image sensors having a wide field of view and short focal distance, the photogrammetry scanner system 10 can be located reasonably close to the target as noted above, thereby reducing the need for relatively expensive optics to acquire high resolution images from further away and enabling a higher practical density of sensors in the array than with typical camera photogrammetry rigs. The relatively high sensor density means that small regions of the target can completely fill each sensor, and the sensors can be arranged to ensure sufficient overlap such that the object is completely imaged.

[59] It will be appreciated that the exact specifications (e.g., focal distance and field of view) and density will depend on the typical desired camera to object distance and the geometry of the object and scanner rig.
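
For a rough sense of this trade-off, the following back-of-envelope calculation relates sensor field of view, working distance and sensor spacing to image overlap. All numbers are illustrative assumptions (the 27 cm working distance echoes the example given later in this description).

```python
import math

def footprint_width(distance_m, fov_deg):
    """Width of the strip of target surface seen by one sensor."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

d = 0.27        # sensor-to-target distance in metres (illustrative)
w = footprint_width(d, fov_deg=120)   # a wide-FOV sensor
spacing = 0.10  # spacing between adjacent sensors on the frame (illustrative)
overlap = 1 - spacing / w             # fraction of each image shared with a neighbour
print(f"footprint {w:.2f} m, overlap {overlap:.0%}")  # ~0.94 m, ~89%
```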

[60] The illustrated embodiment uses a plurality of image sensors 120 in an array which are connected to a surface of the frame 100. The arrangement of the image sensors in an array (to be explained in further detail below) optimises coverage and overlap of a target object to maximise efficacy of the photogrammetric process.

[61] The frame 100, in the form of a handheld and portable frame, includes a frame support member 105 and a plurality of surfaces in the form of arms 110a-f having the plurality of image sensors 120 and the plurality of light emitting devices 140 connected thereto. The arms 110a-f are connected to and spaced apart along the length of the frame support member 105.

[62] In a preferable embodiment, the frame 100 is formed from lightweight plastic or lightweight metal materials, such as carbon-fibre, for example.

[63] In some embodiments, the frame may comprise a single, curved or arcuate surface having the plurality of image sensors and plurality of light emitting devices arranged thereon in accordance with the description herein in relation to other embodiments.

[64] The arms 110a-f and frame support member 105 are arcuate and the curvature or arc of each arm 110a-f is substantially equal. However, the arms 110a-f and the frame support member may be flat or planar, and may also be dynamically configurable between flat and arcuate.

[65] The illustrated embodiment shows the frame support member 105 as being arcuate, as noted above, and extending longitudinally in a first direction and the arms 110a-f being arcuate and extending therefrom longitudinally in a second direction. Put another way, the frame support member 105 extends in a first direction along a first longitudinal axis 106 thereof extending from a first end 107 to a second end 108 of the frame support member 105, and each arm 110a-f extends in a second direction along respective second longitudinal axes 111 thereof extending from a first end 112 to a second end 113 of each arm 110a-f. The first longitudinal axis 106 and the second longitudinal axes 111 are substantially perpendicular.

[66] Despite being arcuate, each of the arms 110a-f can be thought of as extending substantially perpendicularly from the frame support member 105 at the point at which each respective arm 110a-f is fixed to the frame support member 105.

[67] The curvature of the arms 110a-f and the frame support member 105 provides the frame 100 with a substantially semi-spherical (or hemispherical) shape or arrangement.

[68] The semi-spherical arrangement allows each image sensor to sample (image) the target/subject from substantially the same distance depending on the target’s geometry. This arrangement also allows each image sensor to be located at unique 3D coordinates (i.e., none of the image sensors are coplanar) as opposed to 2D in the case of cylindrical or flat array configurations. It is envisioned that such an arrangement may provide improved feature localisation in 3D space (e.g., a triangulation advantage).
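
The sketch below illustrates how staggering sensors along the arms of a hemispherical frame yields positions that are unique along each of the three axes. The six-arm, five-sensor layout matches the embodiment described below; the radius, stagger and azimuth offset values are arbitrary assumptions for illustration.

```python
import numpy as np

def hemisphere_positions(n_arms=6, n_per_arm=5, radius=0.27):
    pts = []
    for i in range(n_arms):
        # small azimuth offset keeps arms away from the x/y axes
        azimuth = 2 * np.pi * (i + 0.37) / n_arms
        for j in range(n_per_arm):
            # elevation staggered per arm: 15 degree steps plus a
            # 2.5 degree per-arm offset, so no two sensors share a height
            elevation = np.radians(15 * (j + 1) + 2.5 * i)
            pts.append([radius * np.cos(elevation) * np.cos(azimuth),
                        radius * np.cos(elevation) * np.sin(azimuth),
                        radius * np.sin(elevation)])
    return np.array(pts)

pts = hemisphere_positions()
for axis in range(3):  # every sensor coordinate is unique along x, y and z
    assert len(np.unique(np.round(pts[:, axis], 6))) == len(pts)
```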

[69] Turning now to the image sensors and light emitting devices connected to the frame 100, a general arrangement of the plurality of image sensors 120 and the plurality of light emitting devices 140 includes each image sensor of the plurality of image sensors 120 being adjacent a light emitting device of the plurality of light emitting devices 140.

[70] As can be seen in Figures 1 and 2, there is an arrangement of image sensors of the plurality of image sensors 120 and light emitting devices of the plurality of light emitting devices 140 on each of the six arms 110a-f of the frame 100. The arrangement on each arm 110a-f comprises each image sensor being adjacent a light emitting device.

[71] While the embodiment shown in the figures includes six arms, any number of arms may be provided. For example, in one embodiment, five arms may be provided. In another embodiment, seven arms may be provided.

[72] The arrangement on each arm 110a-f is inverse to, or inverted relative to, the arrangement of the immediately adjacent arm. For example, in this embodiment, each arm 110a-f includes five image sensors and five light emitting devices. Arm 110a includes an arrangement of alternating image sensors and light emitting devices, starting with an image sensor and finishing with a light emitting device from the top down (or from end to end), while arm 110b includes alternating light emitting devices and image sensors, starting with a light emitting device and finishing with an image sensor from the top down (or from end to end). This pattern is repeated for each of the six arms. More generally, the pattern is repeated alternatingly on each successive or subsequent arm.

[73] As noted above, no two image sensors of the photogrammetry scanner system 10 are coplanar and this ensures optimal overlap is achieved in the images obtained by all of the image sensors.

[74] In the illustrated embodiment, each image sensor of the plurality of image sensors 120 is offset from every other image sensor of the plurality of image sensors 120 in each of three dimensions such that each image sensor of the plurality of image sensors 120 has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other image sensor of the plurality of image sensors 120.

[75] Similarly, each light emitting device of the plurality of light emitting devices 140 is offset from every other light emitting device of the plurality of light emitting devices 140 in each of three dimensions such that each light emitting device of the plurality of light emitting devices 140 has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other light emitting device of the plurality of light emitting devices 140. However, it will be appreciated that, in some embodiments, the image sensors are offset from every other image sensor while the light emitting devices are not offset from every other light emitting device.

[76] Turning briefly to the light emitting devices, each light emitting device may be either synchronised with or independently controllable relative to another light emitting device of the plurality of light emitting devices 140.

[77] The light emitting devices can be configured to emit a range of wavelengths of light or each light emitting device may include a plurality of lights (in the form of light emitting diodes or other suitable light emitting devices) emitting light at different wavelengths including, but not limited to, ultraviolet light, visible light and near infrared light. An example of a plurality of different lights being provided can be seen in Figures 3 and 5, showing a first type of light emitting diode 140a configured to emit a first type of light and a second type of light emitting diode 140b configured to emit a second type of light that has a different wavelength to the wavelength of the first type of light emitted by the first type of light emitting diode 140a.

[78] The independently controllable light emitting devices, which may also project lighting patterns at varying wavelengths, facilitate maximal visibility of the target in each image.

[79] The independently controllable light emitting devices allow the image sensor to receive reflected light at a target wavelength (or wavelengths) of the light emitting device to capture images of the target object in that wavelength.

[80] In some embodiments, bright light (using high-intensity LEDs, for example) is projected very close to the target object. This allows the target object to be illuminated with the light at the target wavelength(s) more brightly than surrounding ambient broad-spectral light. This also means the shutter speed of the image sensor can be reduced, thereby reducing the incidence of the image sensor focusing on objects outside the target area that are illuminated more dimly than the brightly lit target.

[81] In order to utilise these different wavelengths of light provided by the light emitting devices, the plurality of image sensors 120 are configured to detect (or be capable of imaging) a wavelength corresponding to a wavelength output by the plurality of light emitting devices 140.

[82] In use, the plurality of light emitting devices 140 is configured to project a static or dynamic lighting pattern at a predetermined wavelength (as discussed above) by: a) controlling the lighting element sequencing; or b) projecting a pattern via a pattern screen and/or lens; or c) using a laser pattern generator (either scanning or through an optical grating, optical filter or holographic lens).

[83] A grating (either static or dynamic) in the form of the optical grating mentioned above may be located over the plurality of light emitting devices 140 to give effect to the lighting pattern and facilitate modulated pattern projection for hybrid photogrammetry and/or structured light scanning.

[84] In some embodiments, the grating provides patterned light. The patterned light is generated by projecting light from a laser diode through the grating (or a hologram) adjacent the laser diode. This provides for hybrid structured light and photogrammetry systems to be implemented.

[85] In some embodiments, a dynamic grating shapes the light pattern from the light emitting devices in a sweeping motion across the surface of the target object. Sequences of images could be taken of this changing light pattern on the surface of the object from the fixed camera locations, and this data used in a hybrid structured-light/photogrammetry algorithm to improve 3D object geometry accuracy and resolution.

[86] In some embodiments, an optical filter (or series of filters that can be selectively placed over the image sensors) is provided to selectively transmit or reject a wavelength or range of wavelengths. The optical filter is located over the image sensors to optimise scanning for specific wavelengths in addition, or alternatively, to using a multi-wavelength lighting configuration (e.g., multiple light emitting devices emitting light at different wavelengths).

[87] A pattern can be used to provide additional features to photogrammetric reconstruction software to improve 3D model mesh accuracy and detail, or used by structured light algorithms in addition to the standard photogrammetry processing for the same purpose. In particular, this can be achieved by sequencing the activation of the light emitting devices in coordination with sequencing the image sensor capture of either single images or video. Alternatively, the grating or holographic lens can be moved over the light emitting devices as light is projected therefrom to move the projected pattern over the target.

[88] Each arm 110a-f of the photogrammetry scanner system 10 includes a cover panel 114. The image sensors and light emitting devices are located between the body of the arm 110a-f and the cover panel 114. The cover panel 114 is transparent to allow light from the light emitting devices to be projected onto a target and includes an opening (such as openings 115) for each image sensor to allow the image sensor to capture images. Alternatively, the opening could be replaced with a transparent cover that allows the image sensor to capture images. With reference to the light patterns mentioned above and the use of an optical grating, optical filter, holographic lens and pattern screen, these may be formed in the cover panel or applied to a surface of the cover panel.

[89] The imaging assemblies 101 of the photogrammetry scanner system 10 also each include a processing assembly including a controller 180 in the form of a microprocessor connected to and configured to control the plurality of image sensors 120 and/or the plurality of light emitting devices 140. The controllers 180 can be seen in Figure 6 and in Figure 7, which shows a functional block diagram of the photogrammetry scanner system 10.

[90] The controllers 180 in the present embodiment are connected to the image sensors 120 by a Camera Serial Interface (CSI) but could be connected using any other suitable camera data interface. Each controller 180 includes software executable to control the image sensors 120.

[91] The controller software can electronically configure the sensor imaging parameters, such as resolution, pixel binning, capture mode (still or video), shutter speed, exposure time and gamma, as required or desired.

[92] The controllers 180 are located on the frame 100 in proximity to the image sensors 120 (preferably on a printed circuit board 121 as shown in Figures 5 and 6) and control the imaging parameters, trigger the cameras (to capture still images or video), receive the image data via the CSI ports, interface with local memory to store the image data, communicate with other microcontrollers or external computer devices 2 via physical connections or wireless communications (Bluetooth, Wi-Fi, etc.), perform diagnostics, control the lighting, and run program code which is stored locally.
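
As an illustration only, the imaging parameters listed above might be grouped into a configuration record and pushed to each sensor, as sketched below. The CsiSensor wrapper and all default values are hypothetical, since the application does not disclose a software interface.

```python
from dataclasses import dataclass

@dataclass
class ImagingConfig:
    resolution: tuple = (3280, 2464)  # illustrative values only
    pixel_binning: int = 1            # 1 = none, 2 = 2x2 binning, ...
    capture_mode: str = "still"       # "still" or "video"
    shutter_speed_us: int = 8000
    exposure_us: int = 8000
    gamma: float = 2.2

class CsiSensor:
    """Hypothetical wrapper around one camera on the CSI bus."""
    def configure(self, cfg: ImagingConfig) -> None:
        ...  # push each parameter to the sensor over the CSI control channel
    def trigger(self) -> bytes:
        ...  # capture a frame and return the raw image data
```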

[93] The controllers 180 can also be programmed to enable coordinated sequencing and video capture depending on the application as discussed below.

[94] The controllers 180 are connected to onboard non-volatile storage 181, with the images stored in a way that enables identification of the images for later processing. To aid in the storage and association of each image set with each physical object, an identification method can be employed that uses imaging of an object descriptor prior to imaging the object itself. Examples of the object descriptor include a barcode, QR code, text readable by a text recognition algorithm, or any other symbol designed to identify the object. For example, many hospitals use digital patient identification numbers which are used to link patient information such as notes, measurements, and images within the hospital patient database system. Alternatively, or additionally, facial recognition or object recognition can be used to identify the person or target object to be scanned using single or multiple image sensors.
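
A hedged sketch of this descriptor workflow follows: it decodes a barcode or QR code from an identification frame and embeds the result in each image file name. pyzbar and Pillow are one possible choice of decoding libraries, assumed here rather than taken from the application.

```python
from pathlib import Path
from PIL import Image
from pyzbar.pyzbar import decode

def tag_scan_images(id_frame: Path, scan_dir: Path) -> None:
    """Decode the object descriptor and prefix every scan image with it."""
    symbols = decode(Image.open(id_frame))   # barcodes/QR codes found in frame
    if not symbols:
        raise ValueError("no object descriptor visible in the ID frame")
    patient_id = symbols[0].data.decode("ascii")
    for img in sorted(scan_dir.glob("*.png")):  # associate the ID with each image
        img.rename(img.with_name(f"{patient_id}_{img.name}"))
```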

[95] This patient ID can be imaged prior to, or during, 3D scanning of the patient by placing an identifier within the vision of the sensor array and triggering a scan. Onboard software can identify the relevant code within the captured images and ensure that each subsequent patient image is associated with the code in the image file name, image header or other mechanism. This code can be stored in the image header for transfer to the 3D scan file header during the photogrammetry 3D reconstruction or within the filename. After each desired imaging sequence has completed, the onboard microcontrollers store and label the images in the storage location using the intra-controller communication protocol. The photogrammetry scanner system 10 is now ready for the next imaging sequence to be triggered. [96] Advantageously, the semi-spherical (or hemispherical) arrangement described above allows each image sensor to sample the target from substantially the same distance, and also ensures each image sensor is located at unique 3D coordinates as opposed to 2D in the case of cylindrical or flat array configurations. However, as will be described in more detail below, the location and orientation of the image sensors can be reconfigured to optimise for objects of different geometries, and can be fully spherical to simultaneously image 360 degrees of the object (by adding more image sensors and arms or by relocating image sensors and arms along the frame dynamically), or a flat or cylindrical arrangement to image objects of like-geometry.

[97] The arms of the frame can also be dynamically configurable to enable a single scanner to be optimised for different target objects and to create different shaped arrangements. For example, the arms may be configurable between a substantially flat (or planar) arrangement and a curved (or arcuate) arrangement. In some embodiments, the image sensors can also be configured to diverge to scan surrounding geometries such as a building interior or exterior.

[98] To perform a 3D scan of an object, the photogrammetry scanner system 10 is placed in a position with respect to the object, and a signal is supplied to the controllers 180 to initiate the 3D scan. In one embodiment, the image sensors are approximately 27 cm from a central focus point, and the target should be located approximately at the central focus point. The signal can be supplied by a single input trigger 182, wirelessly by a remote device, or via a timer device (such as a clock, for example) or code. On detection of the trigger, the controllers 180 operate the programmed lighting and image capture sequence, which involves (1) communicating with the image sensors 120 to set their parameters, acquiring the images, and transferring the images to the microcontroller for storage in memory 181; and (2) communicating with the lighting control hardware to turn on and off each of the light emitting devices 140 (and pattern generators).

[99] Once the images have been captured, the images are communicated from the non-volatile storage 181 of the photogrammetry scanner system 10 to an external computer 2 equipped with photogrammetric software (such as AliceVision, for example) which is programmed to create the 3D model from the images captured by the photogrammetry scanner system 10. Alternatively, photogrammetric software may be stored in the non-volatile storage 181 of the imaging assembly and executed onboard to facilitate onboard digital image processing and 3D model processing, with processing performed on a single on-board processor or by clustering on-board processors.
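
For example, the hand-off to external photogrammetric software could be as simple as invoking Meshroom's batch pipeline (Meshroom is built on AliceVision, which the description names). The exact command line flags below are assumptions and should be checked against the installed version.

```python
import subprocess

def reconstruct(image_dir: str, output_dir: str) -> None:
    """Run the full photogrammetry pipeline on a captured image set."""
    subprocess.run(
        ["meshroom_batch", "--input", image_dir, "--output", output_dir],
        check=True,  # raise if the reconstruction pipeline fails
    )
```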

[100] In Figures 12-18, a photogrammetry scanner system 20 for capturing images of a target is shown. The photogrammetry scanner system 20 is for capturing images of a target that can be used to create three dimensional images or models of the target. An example of a target includes the head, face, hand or foot of a person. However, the target could include any part of a person, or another object.

[101] The photogrammetry scanner system 20 includes a frame 200, and a plurality of scanner assemblies comprising a plurality of imaging assemblies 201 connected to the frame 200.

[102] The imaging assembly 201 also includes a processing assembly 280 (described in more detail below).

[103] The scanner assemblies may also comprise one or more secondary assemblies that perform secondary functions, such as lighting or control of the imaging sequence, for example.

[104] The imaging assemblies 201 are connected or connectable to the frame 200 such that the plurality of imaging assemblies 201 are spaced apart on the frame 200 in an array. More particularly, the imaging assemblies 201 include a plurality of image sensors 220 (the image sensor 220 shown includes both the image sensor and a housing to which the image sensor is mounted) connected or connectable to and spaced apart on the frame 200 in an array, and a plurality of light emitting devices 240 connected or connectable to and spaced apart on the frame 200 and configured to emit light in a lighting pattern. In some embodiments, each imaging assembly 201 includes at least one image sensor 220 and one light emitting device 240 as a pair.

[105] Each imaging assembly 201 includes a body 202 having at least one image sensor 220 and at least one light emitting device 240 connected thereto.

[106] The imaging assembly 201 may also include a grating (either static or dynamic) in the form of an optical grating located over the plurality of light emitting devices 240 to give effect to the lighting pattern and facilitate modulated pattern projection for hybrid photogrammetry and/or structured light scanning. The grating in the illustrated embodiment takes the form of a diffuser 250 located over the light emitting device 240 to improve light diffusion and thus illumination of the subject/target.

[107] The imaging assembly 201 shown in the illustrations is hexagonal but could be any shape.

[108] In the illustrated embodiment, the imaging assembly 201 is arranged such that the image sensor 220 is located between the body 202 and the light emitting device 240. More particularly, the arrangement is in the following order: body 202, processing assembly 280, image sensor 220, light emitting device 240 and diffuser 250 (if used).

[109] The body 202, image sensor 220 and light emitting device 240 each include a connector for connecting to a connector on another component (e.g. the image sensor 220 includes a connector for connecting to the body 202 and the light emitting device 240). In the illustrated embodiment, the connectors take the form of multi-pin magnetic connectors 203 that provide for connection and the distribution of power to necessary components (including the processing assembly 280, image sensor 220 and light emitting device 240).

[110] The imaging assembly 201 may also be located in a housing 204 that is environmentally sealed. The body 202 forms part of the housing 204.

[111] The processing assembly 280 includes a system on chip having a programmable microprocessor and memory. The processing assembly 280 is configured to control and operate the image sensor 220, light emitting device 240 and associated inputs and outputs (I/O).

[112] The processing assembly 280 may include a wireless transceiver and be configured to enable wireless synchronous operation and data communication without the need for physical inter-device signalling. Thus, in some embodiments, the processing assemblies 280 of the imaging assemblies 201 are in data communication only by wireless means, or only interface wirelessly. However, the processing assemblies 280 may also be physically interconnected for data communication in some embodiments.
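
One way such wireless, physically unconnected assemblies might find each other is a broadcast discovery scheme, sketched below: each assembly announces itself on power-up and the controller assembly records it (cf. the detection of newly activated assemblies described in the Summary). The port and message format are assumptions; the application does not specify a protocol.

```python
import socket

DISCOVERY_PORT = 50111  # arbitrary choice for this sketch

def announce(assembly_id: str) -> None:
    """Called by an imaging assembly when it is electrically activated."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(f"HELLO {assembly_id}".encode(),
             ("255.255.255.255", DISCOVERY_PORT))

def listen_for_assemblies(known: set) -> None:
    """Run on the controller imaging assembly to grow the network."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", DISCOVERY_PORT))
    while True:
        data, addr = s.recvfrom(1024)
        if data.startswith(b"HELLO"):
            known.add((data.decode().split()[1], addr[0]))
```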

[113] In use, the plurality of light emitting devices 240 project light in a lighting pattern onto a target and one or more images of the target are captured by the plurality of image sensors 220.

[114] That is, the photogrammetry scanner system 20 is configured to perform a coordinated lighting and imaging sequence of a target, wherein the plurality of light emitting devices 240 project light in a lighting pattern and one or more images of the target are captured by the plurality of image sensors 220.

[115] In some embodiments, the coordinated lighting and imaging sequence includes activating a light emitting device 240 associated with an image sensor 220 of an imaging assembly 201 and subsequently capturing an image via the image sensor 220 associated with the light emitting device 240. In other embodiments, the light emitting devices 240 and image sensors 220 may be activated non-sequentially such that one light emitting device 240 on a first imaging assembly 201 is activated and an image sensor 220 on a second imaging assembly 201 subsequently captures an image.

[116] In some embodiments, the image sensors 220 have a wide field of view and short focal distance which allows the image sensors 220 to be located closer to the target than typical camera photogrammetry rigs to optimise their object/sensor ratio. That is, to optimise the per-sensor coverage of the target and ensure sufficient overlap between images from adjacent sensors.

[117] The plurality of image sensors 220 take the form of a plurality of cameras and the plurality of light emitting devices 240 take the form of a plurality of light emitting diodes. It will be appreciated that any form of light emitting device may be used, including, for example, organic light-emitting diodes (OLED).

[118] Embodiments of the present invention use inch image sensors, but any type of image sensor may be used.

[119] By providing some embodiments of the photogrammetry scanner system 20 with image sensors 220 having a wide field of view and short focal distance, the photogrammetry scanner system 20 can be located reasonably close to the target as noted above, thereby reducing the need for relatively expensive optics to acquire high resolution images from further away and enabling a higher practical density of sensors in the array than with typical camera photogrammetry rigs. The relatively high sensor density means that small regions of the target can completely fill each sensor, and the sensors can be arranged to ensure sufficient overlap such that the object is completely imaged.

[120] It will be appreciated that the exact specifications (e.g., focal distance and field of view) and density will depend on the typical desired camera to object distance and the geometry of the object and scanner rig.

[121] The frame 200 may be formed modularly.

[122] In a preferable embodiment, the frame 200 is formed from lightweight plastic or lightweight metal materials, such as carbon-fibre, for example.

[123] In some embodiments, the frame 200 may comprise a curved or arcuate body 210 having the plurality of imaging assemblies 201 arranged thereon in accordance with the description herein in relation to other embodiments.

[124] The frame 200 includes a plurality of receptacles 211 formed in the body 210 of the frame 200 that are shaped congruently to the shape of the imaging assemblies 201. This allows the imaging assemblies 201 to be connected to the frame 200.

[125] The receptacles 211 may be releasably connected to allow for the frame 200 to be formed modularly and reconfigured as required for different targets/subjects.

[126] In an in-use configuration, the illustrated embodiment uses a plurality of image sensors 220 in an array spaced about the frame 200 based on the location of the receptacles 211 that receive the imaging assemblies 201. The arrangement of the image sensors 220 in an array (to be explained in further detail below) optimises coverage and overlap of a target object to maximise efficacy of the photogrammetric process.

[127] The body 210 of the frame 200 is substantially semi-spherical having an arcuate face. However, the body may be flat or planar, and may also be dynamically configurable between flat and arcuate.

[128] The semi-spherical arrangement allows each image sensor 220 to sample (image) the target/subject from substantially the same distance depending on the target's geometry. This arrangement also allows each imaging assembly 201 to be located at unique 3D coordinates when measured from a centrepoint (i.e., none of the imaging assemblies 201 are coplanar) as opposed to 2D in the case of cylindrical or flat array configurations. Each imaging assembly 201 may be programmed with designated coordinates based on a known position in the frame 200 or, alternatively, be configured to determine its coordinates within the frame 200 by communication with any other imaging assemblies 201 within the frame 200. In such an embodiment, at least one imaging assembly 201 may be programmed with coordinates to act as an initial reference for any new imaging assemblies 201 added to the frame 200.

[129] As no two imaging assemblies 201 are coplanar, each imaging assembly 201 is offset from every other imaging assembly 201.

[130] Accordingly, each image sensor 220 is located at unique 3D coordinates when measured from a centrepoint (i.e., none of the image sensors 220 are coplanar) as opposed to 2D in the case of cylindrical or flat array configurations. It is envisioned that such an arrangement may provide improved feature localisation in 3D space (e.g., a triangulation advantage).

[131] As noted above, no two image sensors 220 of the photogrammetry scanner system 20 are coplanar and this ensures optimal overlap is achieved in the images obtained by all of the image sensors 220.

[132] As no two image sensors 220 are coplanar, each image sensor 220 is offset from every other image sensor 220 of the plurality of image sensors 220 in each of three dimensions such that each image sensor 220 has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other image sensor 220.

[133] Similarly, each light emitting device 240 is offset from every other light emitting device 240 of the plurality of light emitting devices 240 in each of three dimensions, such that each light emitting device 240 has a coordinate in three-dimensional space that is unique in each of three dimensions relative to the coordinates of every other light emitting device 240. However, it will be appreciated that, in some embodiments, the image sensors 220 are offset from every other image sensor 220 while the light emitting devices 240 are not offset from every other light emitting device 240.

[134] Turning briefly to the light emitting devices 240, each light emitting device 240 may be either synchronised with all other light emitting devices 240 in the frame 200 or independently controllable, relative to the other light emitting devices 240, by its corresponding processing assembly 280 (to be explained below).

[135] The light emitting devices 240 can be configured to emit a range of wavelengths of light, or one or more light emitting devices 240 may include a plurality of lights (in the form of light emitting diodes or other suitable light emitting devices) emitting light at different wavelengths including, but not limited to, ultraviolet light, visible light and near-infrared light. An example of a plurality of different lights being provided includes a first type of light emitting diode configured to emit a first type of light and a second type of light emitting diode configured to emit a second type of light that has a different wavelength to the wavelength of the first type of light.

[136] The independently controllable light emitting devices 240, which may also project lighting patterns at varying wavelengths, facilitate maximal visibility of the target in each image captured by the corresponding image sensor 220.

[137] The independently controllable light emitting devices 240 allow the corresponding image sensors 220 to receive reflected light at the target wavelength(s) of the light emitting device 240 and to capture images of the target object at that wavelength.

[138] In some embodiments, bright light (using high-intensity LEDs, for example) is projected very close to the target object. This allows the target object to be illuminated with light at the target wavelength(s) more brightly than the surrounding ambient broad-spectral light. This also means the exposure time of the image sensor can be reduced, thereby reducing the incidence of the image sensor 220 imaging objects outside the target area that are illuminated more dimly than the brightly lit target.

[139] In order to utilise these different wavelengths of light provided by the light emitting devices 240, the plurality of image sensors 220 are configured to detect (or be capable of imaging) a wavelength corresponding to a wavelength output by the plurality of light emitting devices 240.

[140] In use, the plurality of light emitting devices 240 is configured to project a static or dynamic lighting pattern at a predetermined wavelength (as discussed above) by: a) controlling the lighting element sequencing; b) projecting a pattern via a pattern screen and/or lens; or c) using a laser pattern generator (either scanning or through an optical grating, optical filter or holographic lens).
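
By way of example only, option a) (lighting element sequencing) might be realised as in the following Python sketch, assuming a Raspberry Pi-style processing assembly driving one LED per GPIO pin via the gpiozero library; the pin numbers, mask values and timing are purely illustrative.

```python
from time import sleep
from gpiozero import LED  # assumes Raspberry Pi-style GPIO hardware

# One GPIO-driven LED per light emitting device; the pin mapping is hypothetical.
lights = [LED(pin) for pin in (5, 6, 13, 19, 26)]

def run_static_pattern(mask):
    """Drive a fixed on/off pattern across the lighting elements."""
    for led, on in zip(lights, mask):
        led.on() if on else led.off()

def run_dynamic_pattern(frames, dwell=0.02):
    """Step through a sequence of on/off masks to form a dynamic pattern."""
    for mask in frames:
        run_static_pattern(mask)
        sleep(dwell)

# A simple travelling-dot pattern across the first three elements.
run_dynamic_pattern([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]])
```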

[141] The processing assemblies 280 are configured similarly to the processing assemblies illustrated in Figure 6 and in Figure 7 which shows a functional block diagram of the photogrammetry scanner system 10.

[142] The processing assemblies 280 in the present embodiment are connected to the image sensors 220 by a Camera Serial Interface (CSI) but could be connected using any other suitable camera data interface. Each processing assembly 280 includes software executable to control the associated image sensor 220 and light emitting device 240 of the imaging assembly 201.

[143] The controller software can electronically configure the sensor imaging parameters, such as resolution, pixel binning, capture mode (still or video), shutter speed, exposure time and gamma, for example, as required or desired.
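
For instance, on a Raspberry Pi-style processing assembly using the picamera2 library (an assumption; the specification does not name particular hardware), these imaging parameters could be configured electronically as follows. The resolution and control values shown are illustrative only.

```python
from picamera2 import Picamera2  # assumes a Raspberry Pi camera stack

picam2 = Picamera2()
# Still-capture mode at an assumed full resolution; video capture would use
# create_video_configuration() instead.
config = picam2.create_still_configuration(main={"size": (4056, 3040)})
picam2.configure(config)
# Electronic configuration of imaging parameters by the controller software.
picam2.set_controls({
    "ExposureTime": 4000,   # microseconds ("shutter speed"/exposure time)
    "AnalogueGain": 1.0,    # ISO-like sensor gain
})
picam2.start()
picam2.capture_file("capture.jpg")
picam2.stop()
```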

[144] The processing assemblies 280 control the imaging parameters, trigger the cameras (to capture still images or video), receive the image data via the CSI ports, interface with local memory to store the image data, communicate with other microcontrollers or external computer devices 2 via physical connections or wireless communications (Bluetooth, Wi-Fi, etc.), perform diagnostics, control the lighting, and run program code which is stored locally.

[145] The processing assemblies 280 can also be programmed to enable coordinated sequencing and video capture depending on the application, as discussed below.

[146] The processing assemblies 280 include onboard memory 281, with the images stored in a way that enables identification of the images for later processing. To aid in the storage and association of each image set(s) with each physical object, an identification method can be employed that uses imaging of an object descriptor prior to imaging the object itself. Examples of the object descriptor include a barcode, a QR code, text read by a text recognition algorithm, or any other symbol designed to identify the object. For example, many hospitals use digital patient identification numbers which are used to link patient information such as notes, measurements, and images within the hospital patient database system. Alternatively, or additionally, facial recognition or object recognition can be used to identify the person or target object to be scanned using single or multiple image sensors.
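
A minimal sketch of such descriptor-based identification, assuming the pyzbar and OpenCV libraries and an illustrative file name, might read:

```python
import cv2
from pyzbar.pyzbar import decode  # illustrative choice of decoder library

def read_object_descriptor(frame):
    """Decode a barcode/QR code from a descriptor image captured before the
    scan, returning an identifier string used to tag the image set."""
    if frame is None:
        return None
    results = decode(frame)
    return results[0].data.decode("utf-8") if results else None

frame = cv2.imread("descriptor.jpg")   # e.g., an image of a patient wristband
scan_id = read_object_descriptor(frame)
print(f"Tagging scan set with ID: {scan_id}")
```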

[147] To perform a 3D scan of an object, the photogrammetry scanner system 20 is placed in a position with respect to the object, and a signal is supplied to the processing assemblies 280 to initiate the 3D scan. In one embodiment, the image sensors 220 are approximately 27 cm from a central focus point, and the target should be located approximately at the central focus point. The signal can be supplied by a single input trigger, wirelessly by a remote device, or via a timer device (such as a clock, for example) or code. On detection of the trigger, the processing assemblies 280 operate the programmed coordinated lighting and image capture sequence, which involves: 1) communicating with the image sensors 220 to set their parameters, acquiring the images, and transferring the images to the microcontroller for storage in memory 281; and 2) communicating with the lighting control hardware to turn on and off each of the light emitting devices 240 (and pattern generators).
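
The coordinated sequence might be sketched in Python as follows; the ImagingAssembly class and its methods are hypothetical stand-ins for the real GPIO and CSI calls, and the exposure value is illustrative only.

```python
import time

class ImagingAssembly:
    """Stand-in for one imaging assembly: an LED plus an image sensor.
    Real hardware calls (GPIO, CSI capture) would replace the prints."""
    def __init__(self, assembly_id):
        self.assembly_id = assembly_id
        self.memory = []   # onboard image store (cf. memory 281)

    def set_parameters(self, **params):
        self.params = params

    def light_on(self):  print(f"[{self.assembly_id}] LED on")
    def light_off(self): print(f"[{self.assembly_id}] LED off")

    def capture(self):
        # A real implementation would pull a frame over CSI here.
        self.memory.append(f"img_{self.assembly_id}_{time.time_ns()}")

def run_scan(assemblies, exposure_us=4000):
    """Programmed coordinated lighting and image capture sequence:
    set parameters, light the adjacent LED, capture, store onboard."""
    for a in assemblies:
        a.set_parameters(exposure_time=exposure_us)
    for a in assemblies:
        a.light_on()
        a.capture()          # image lands in the assembly's local memory
        a.light_off()

run_scan([ImagingAssembly(i) for i in range(4)])
```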

[148] Once the images have been captured, the images are communicated from the memory 281 of the photogrammetric scanner system 20 to an external computer (such as computer 2 in the earlier illustrations) equipped with photogrammetric software (such as AliceVision, for example) which is programmed to create the 3D model from the images captured by the photogrammetry scanner system 20. Alternatively, photogrammetric software may be stored and executed in the memory 281 of the imaging assembly to facilitate onboard digital image processing and 3D model processing, with processing performed on a single on-board processor or by clustering on-board processors.

[149] Further, or alternatively to the above, the processing assembly 280 automatically and dynamically reconfigures both the software and hardware of the respective imaging assembly 201 to ensure each component performs independently as desired and collectively to achieve the 3D scan function.

[150] In use, when an imaging assembly 201 is provided with power by way of connection to a connector of the frame 200, the processing assembly 280 automatically connects to the processing assemblies 280 of any other imaging assemblies 201 and communicates all required information (preferably wirelessly), so that it operates within the photogrammetric scanner system 20 in the same way as the other imaging assemblies 201 or secondary assemblies.

[151] In addition to the standard microprocessor operating system installed on the processing assembly 280 for each imaging assembly 201, the photogrammetric scanner system 20 (collection of individual imaging assemblies 201) has an overarching operating system which has the capability to perform required functions such as imaging assembly identification and coordination, data transfer, processor and sensor configuration (network and physical), scan operation sequencing, lighting sequencing, hardware and software fault monitoring and diagnostics, control of additional devices, data transfer to external devices, power and temperature monitoring, and distributed data processing (such as computer vision functions like object recognition/segmentation) across the network of imaging assemblies 201.

[152] External devices in the form of secondary assemblies such as physical button triggers, displays, motion sensors, etc. can also be integrated into this scanner operating system.

[153] The photogrammetric scanner system 20 can also be configured to run multiple programmed coordinated lighting and imaging sequences to enable flexible 3D scanning functions. For example, specific imaging assemblies can be set to scan at different times or in different orders to capture motion data, or set to capture multiple images at different exposure times to combine into high dynamic range images, or the lighting devices can be sequenced to produce structured patterns during scanning to enhance features. Multiple sequences can also be coded and selected to operate during scanning based on user input (e.g., via a button) or automatically.

[154] To enable the dynamic configuration capability of the photogrammetric scanner system 20, the imaging assemblies 201 may follow a pre-determined sequence to connect and operate. When a given imaging assembly powers up (by connecting it to the required power pins), it initialises the operating system and connects to the scanner network (via a wireless router or using routerless peer-to-peer networking). Each imaging assembly 201 is assigned a network ID. The photogrammetric scanner system 20 can then operate in one of two modes: 1) a physical controller is specifically assigned the function of control and data management; or 2) one of the imaging assemblies 201 is automatically assigned as a controller (in addition to its usual function) via the 3D scanner platform operating system, such that there is only one controller.

[155] On startup, each imaging assembly 201 queries the network for the existence and ID of the controller. If no controller is present, any imaging assembly 201 will nominate itself as the controller on a first-come, first-served basis. To avoid situations where multiple imaging assemblies 201 may self-nominate simultaneously (due to network delays, etc.), each imaging assembly 201 checks again after a short delay for the presence of other controllers and, after negotiation, redundant controllers withdraw their nominations. The controller periodically announces its presence (using a specified network port) to ensure the existence of one controller at all times. If, after some time, scanner assemblies do not receive repeated announcements from the controller, they can assume it is no longer part of the photogrammetric scanner system 20, and self-nomination as a controller will commence among the available imaging assemblies 201. Each imaging assembly 201 individually contains all of the required code to function as a controller, such that the choice of controller is arbitrary and can dynamically adapt to the addition or removal of imaging assemblies 201 from the photogrammetric scanner system 20 during operation. The function of the controller is to interface with external devices, coordinate the individual scanner assemblies, run the desired programmed 3D scan sequences, aggregate the 3D scan data (such as images or temperature maps), and interface with an external networked device (computer) for transmission of data and further processing.
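
One possible realisation of this self-nomination and periodic-announcement scheme is sketched below using UDP broadcast; the port number, timing constants and message format are illustrative assumptions, not part of the specification.

```python
import json
import random
import socket
import time
import uuid

PORT = 50000            # the "specified network port" for announcements (assumed)
ANNOUNCE_PERIOD = 1.0   # seconds between controller heartbeats (assumed)
MY_ID = uuid.uuid4().hex[:8]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
sock.settimeout(0.1)

def listen_for_controller(window):
    """Return the ID of any controller heard within the window, else None."""
    deadline = time.time() + window
    while time.time() < deadline:
        try:
            msg = json.loads(sock.recvfrom(1024)[0])
            if msg.get("role") == "controller" and msg["id"] != MY_ID:
                return msg["id"]
        except socket.timeout:
            pass
    return None

# Query for an existing controller; self-nominate if none is heard.
controller = listen_for_controller(3 * ANNOUNCE_PERIOD)
if controller is None:
    # Random backoff reduces simultaneous self-nomination; the re-check lets
    # redundant nominees withdraw (first-come, first-served).
    time.sleep(random.uniform(0.0, 0.5))
    if listen_for_controller(0.5) is None:
        controller = MY_ID

if controller == MY_ID:
    # Periodic announcement so exactly one controller persists at all times.
    while True:
        sock.sendto(json.dumps({"role": "controller", "id": MY_ID}).encode(),
                    ("255.255.255.255", PORT))
        time.sleep(ANNOUNCE_PERIOD)
```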

[156] Each imaging assembly 201 (and any secondary assemblies) includes hardware configuration information, a version of the overall 3D scanner operating system, sequence information, hardware configuration settings information, 3D scan data, and a live map of the entire 3D scanner configuration which is dynamically updated if scanner assemblies are added or removed during operation. In addition, a physical representation of the 3D scanner can be stored on each imaging assembly 201 as arbitrary 6D coordinates (location/orientation) which have been determined by following a physical configuration process involving structure-from-motion software processing or by physical signalling. For example, if using the building-block style scanner assembly configuration, each interconnect can contain physical information about its shape and dimensions (either electronically, physically, or using RFID or similar). Each scanner assembly, when added to a receptacle 211 of the frame 200, can then capture the required physical information and communicate this information to all other scanner assemblies in the system (with them doing the same), so a virtual map/representation of the physical configuration of the entire 3D scanner is constructed.

[157] Software and data types include: the operating system, function code, sequence instruction set, data transfer protocols, and 3D scan data (images, temperature, video streams, etc.).

[158] The operation of each imaging assembly 201 is more specifically described below.

[159] Firstly, on startup, an imaging assembly 201 announces its ID to the network of imaging assemblies 201.

[160] Next, a nomination and election process is conducted to establish one imaging assembly 201 as a controller (master).

[161] A notification is provided to the controller of scanner assembly configuration information (hardware specs, function, network ID, etc.).

[162] A notification to the controller of current software versions is also provided. The controller then polls all imaging assemblies 201 to identify the latest versions of the software and pulls the latest version.

[163] The controller updates all software on all imaging assemblies 201 to the latest versions, if necessary.

[164] Each imaging assembly 201 operates according to software instructions; the sensors listen for a network (or other) signal to begin processing the sequence code, which may include single image capture, video stream capture, or timed sequences of capture (depending on requirements). This might also include loading configuration data from a separate file prior to operation.

[165] Some assemblies may be provided that have functions other than imaging or data capture, such as emitting targeting lights (lasers), acting as remote triggers with physical or optical buttons, displaying information (LED, OLED, e-Ink, etc.), performing direct measurement using time-of-flight (TOF) sensors or ultrasound, or emitting sound signals to the user.

[166] Collection of images, video, or data from all connected imaging assemblies (and other assemblies) is then coordinated by the controller.

[167] As each scan is initiated, the controller sends out a trigger signal to all other imaging assemblies 201, which begin their 3D scan sequencing. The controller records, in a list, information about the triggered capture command of each of the imaging assemblies 201. Successive capture trigger commands are added to this list. The controller then works through the list and copies the scan data from each respective imaging assembly to a central memory location on the controller. This operation is configured to occur in the background of scanner operation and is intended to optimise data transfer off the scanner to another device (without the need for the external device to poll each imaging assembly 201 for the scan data). If a scan trigger is activated during this data transfer procedure, the procedure is interrupted to enable transmission of the trigger signal and then resumes, with the additional scan data added to the scan list. This occurs automatically until the available scan list is empty.
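
A simplified sketch of the controller's capture list and background copy loop follows; the Controller class, its method names and the stand-in data strings are hypothetical, with real network pulls replacing the placeholder assignment.

```python
from collections import deque

class Controller:
    """Sketch of the controller's scan list and background aggregation."""
    def __init__(self):
        self.scan_list = deque()   # (assembly_id, filename) pairs, in order
        self.repository = {}       # central memory location on the controller

    def on_trigger(self, assemblies, scan_no):
        # Record the triggered capture command for every imaging assembly.
        for a in assemblies:
            self.scan_list.append((a, f"scan{scan_no:04d}_{a}.jpg"))

    def copy_pending(self, trigger_pending=lambda: False):
        # Work through the list in the background; pause if a new scan
        # trigger arrives, then resume until the list is empty.
        while self.scan_list:
            if trigger_pending():
                return            # yield so the trigger signal can be sent
            assembly, fname = self.scan_list.popleft()
            self.repository[fname] = f"<data pulled from {assembly}>"

ctrl = Controller()
ctrl.on_trigger(["cam01", "cam02", "cam03"], scan_no=1)
ctrl.copy_pending()
print(sorted(ctrl.repository))
```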

[168] The scan sequence list contains the filenames and corresponding imaging assembly ID for each item of scan data and other relevant data. This dynamic list is maintained by the controller (items are added and removed). For redundancy, items copied from the imaging assemblies to the controller data repository are not deleted from each imaging assembly until after the entire data repository is transferred off-board the controller to an external computer.

[169] If, at any time, the controller is removed from the photogrammetry scanner system 20 or fails in some way, it cannot continue controlling, updating and accumulating scan data. In this case, another imaging assembly 201 is automatically assigned as the replacement controller and takes over the functions of the previous controller. As set out above, each imaging assembly 201 is configured to operate either as an imaging assembly within the network or as a controller (with the exception of specific-function secondary assemblies, such as a trigger-button or display-only secondary assembly).

[170] In some embodiments, multiple imaging assemblies are designated as controllers. Alternatively, no imaging assembly may be designated as a controller, with coordination being facilitated by an external device, such as a computer, for example.

[171] To detect the absence of the controller during operation, some embodiments include a health-check function in the network communication protocol whereby each imaging assembly 201 announces its status on a given port or ports on a periodic basis. If a status signal is not received by either the controller or the other scanner assemblies of the photogrammetry scanner system 20, a health status request can be broadcast to the relevant scanner assembly. If no further communication occurs from that scanner assembly, it is removed from the active assembly list and is not activated, nor is data requested from it. As each scanner assembly contains its own internal representation (software, sequences, scan data, etc.) and the system-wide representation (IDs, functions and location information about all other active scanner assemblies), the absence of a previously active assembly is detected on each scanner assembly and removed from its representation of the system.
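
By way of example only, the pruning of silent assemblies from the active list might look like the following sketch; the timeout value and class names are assumed for illustration.

```python
import time

HEALTH_TIMEOUT = 5.0   # seconds without a status announcement (assumed value)

class AssemblyRegistry:
    """Each assembly's view of the system: last-heard time per peer, with
    silent peers dropped from the active list (and from scan coordination)."""
    def __init__(self):
        self.last_seen = {}

    def on_status(self, assembly_id):
        self.last_seen[assembly_id] = time.monotonic()

    def prune(self):
        now = time.monotonic()
        dead = [a for a, t in self.last_seen.items()
                if now - t > HEALTH_TIMEOUT]
        for a in dead:
            del self.last_seen[a]   # removed from the active assembly list
        return dead

registry = AssemblyRegistry()
registry.on_status("cam07")
# ... later, in the periodic health-check loop:
for lost in registry.prune():
    print(f"{lost} missed its health check; excluding it from the scan")
```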

[172] Information about off-board transfer of scan data is also communicated by the controller to all other scanner assemblies in the system (this also means locally stored scan data can be removed from each scanner assembly once transferred off the scanner). This means that if the controller is removed from the frame during operation, the other scanner assemblies are aware of the status of the internal scan data transfers (from each scanner assembly to the controller). In this case, the newly nominated controller can reconstruct (or copy) the list of scan data and network locations (or scanner assembly IDs) and take over the data aggregation operation.

[173] The photogrammetry scanner systems described herein may also include an orientation sensor and/or an acceleration sensor to provide information on the physical scanner coordinates with respect to the real world. These sensors can be used to autonomously identify and communicate correct positioning between the photogrammetry scanner system and the target. The orientation sensor and/or acceleration sensor can be used to determine and track the scanner location and orientation to assist with alignment of the target within the field of view of the image sensors.

[174] The orientation sensor may take the form of a gyroscope and/or an accelerometer and the acceleration sensor may take the form of an accelerometer.

[175] In some embodiments, a display in communication with the orientation sensor and/or the acceleration sensor can be provided. The orientation and acceleration sensors provide data that indicates the current scanner orientation with respect to an optimal orientation which can be indicated visually via the display.

[176] The display may also be used to display information related to the system or to display captured images.

[177] Other sensors that may be used include ultrasonic distance sensors, optical range-finding sensors, magnetometers, GPS sensors, temperature sensors (shown as temperature sensor 183 in Figure 7, for example), and humidity sensors.

[178] Range sensors (e.g., ultrasound, laser and LIDAR) may also be provided. The range sensors can be used to measure the distance between the arms and the target and, in some embodiments, to indicate to the user holding the scanner that the scanner is too close to, or too far away from, the target to be scanned. Alternatively, the range detection could be achieved using computer vision code that detects the features of the target (e.g., facial features for a face) and determines the distances using that information. The computer vision code can also be used to determine distance-to-target information, which can be used for real-time calculation of optimal scanner geometry for automated variable-geometry variants of the frame.
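
As a non-limiting sketch of the computer-vision alternative, a Haar-cascade face detector (via OpenCV) combined with a pinhole-camera approximation can estimate the camera-to-face distance; the assumed face width, focal length, target range and file name are illustrative calibration values, not taken from the specification.

```python
import cv2

# Pinhole approximation: distance = f_px * real_width / pixel_width.
FACE_WIDTH_M = 0.15      # assumed average face width; calibration would refine
FOCAL_PX = 1400.0        # focal length in pixels, from camera calibration

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_range(frame):
    """Return an estimated camera-to-face distance in metres, or None."""
    if frame is None:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    _, _, w, _ = max(faces, key=lambda f: f[2])   # widest detection
    return FOCAL_PX * FACE_WIDTH_M / w

d = estimate_range(cv2.imread("preview.jpg"))    # placeholder preview frame
if d is not None and not (0.22 <= d <= 0.32):
    print(f"Target at {d:.2f} m: move until roughly 0.27 m away")
```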

[179] In some embodiments, the frame may comprise multiple arms (as in Figure 1 ) with receptacles for receiving an imaging assembly therein (as in the frames of Figures 15 and 19). The receptacles may be movably connected to the arms or frame to provide dynamic reconfiguration and arrangement of the imaging assemblies about the body of the frame to suit different purposes and requirements.

[180] The physical configuration of the sensors in the illustrated embodiment is optimised to enable simultaneous imaging of an object the size of a human head (shown in Figures 8 and 9, as an example), with the spacing of the sensors optimised to enable an approximately uniform distribution around the object, converging on the centre of the object, in a semi-spherical (or hemispherical) arrangement as shown in Figure 1. This arrangement can be varied, however, to enable imaging of various objects through the addition of sensors optimised for each object's geometry. For example, a configuration to scan a human foot would involve sensors for the bottom of the foot as well as surrounding the sides and lower leg. Such an embodiment is illustrated in Figures 19-21.

[181] The photogrammetry scanner system 30 for scanning a foot includes a frame 300, and a plurality of imaging assemblies 201 connected to the frame 300.

[182] The frame 300 is substantially similar to the frame 200, including receptacles 311 configured to receive the imaging assemblies 201.

[183] The receptacles 311 also include a connector in the form of a multi-pin magnetic connector 303 that provides for connection and the distribution of power to the necessary components (including the processing assembly 280, image sensor 220 and light emitting device 240).

[184] The receptacles 311 are separable, allowing the frame 300 to be constructed modularly. This can be best seen in Figures 19 and 20.

[185] The receptacles 311 include connectors 312 that matingly engage with corresponding connector members to allow two or more receptacles 311 to be connected together. The connector members may come in various configurations to allow the receptacles 311 to be suitably angled relative to each other or to create specific and/or custom geometry for a scanner on an as-needed basis. For example, a first connector member 313 is shown having an angled portion. In another example, a second connector member 314 is shown where the member is straight.

[186] The connector members and the connectors 312 of the receptacles 311 may also include reciprocal pins for facilitating power and/or data distribution to the receptacles 311 to power the imaging assemblies 201.

[187] Secondary assemblies, such as guide lights, laser pattern devices and batteries, for example, can also be connected to the frame 300 by way of a connector and corresponding connector member.

[188] For example, secondary assembly 315 is shown having a connector 312 that matingly engages, via a connector member, with the connector 312 of a receptacle 311.

[189] The connectors and connecting members preferably matingly engage with a transition fit to limit movement of the receptacles when connected together.

[190] In use, the photogrammetry scanner system 30 would be placed within a housing having a clear or transparent face to allow a foot in gait to be placed on the clear or transparent face and allow the imaging assemblies 201 to image the foot.

[191] In some embodiments, the photogrammetry scanner can include additional imaging assemblies for imaging the foot and the leg (e.g., lower leg).

[192] Embodiments of the photogrammetry scanner system could scan the entire body.

[193] The embodiments described above having a variable or reconfigurable frame allow the system to be reconfigured to optimise for different target geometries.

[194] In some embodiments, the image sensors can also capture video (at 30 fps), allowing for 3D video scanning and multiple 3D models to be constructed from a single set of videos.

[195] In some embodiments, each image sensor captures a 5-second video (although the video captured could be any length of time), which is stored as a motion-JPEG or other format. Video capture is commenced approximately 0.5 seconds before the lighting pattern projected by the light emitting devices is activated, whereby the lighting can be used for synchronisation of the frames of the video. The Inventors have found that the initialisation varies between image sensors by a few milliseconds (due to each image sensor having a microcontroller independent of the other microcontrollers connected to the other image sensors), which has limited impact when capturing images of a stationary (or nearly stationary) target.

[196] The captured videos for each image sensor can be exported to photogrammetry software which uses the lighting pattern to determine a starting/first frame based on when the light is first detected in each video. This allows frames captured before this determined starting frame to be excluded and the videos/frames from each image sensor to be synchronised.
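
A minimal sketch of this light-based synchronisation, assuming OpenCV for video decoding and illustrative file names and a brightness threshold, might be:

```python
import cv2
import numpy as np

def first_lit_frame(path, threshold=40.0):
    """Return the index of the first frame whose mean brightness rises above
    the threshold, i.e., when the projected lighting pattern switches on."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)) > threshold:
            cap.release()
            return idx
        idx += 1
    cap.release()
    return None

# Align every sensor's video on its own starting frame; earlier frames are
# excluded, synchronising the streams before photogrammetric processing.
starts = {cam: first_lit_frame(f"{cam}.mjpeg") for cam in ("cam01", "cam02")}
print(starts)
```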

[197] In some embodiments, one or more arms of the frame may be intersecting (to form an X configuration, for example), where image sensors are located along the length of each arm.

[198] Advantageously, embodiments of the invention allow the images necessary for a three-dimensional scan to be taken in less than approximately 1 ms. This can be important and/or useful in hospitals, particularly children's hospitals, where the patients are often unable, or unwilling, to stay motionless, and thus keeping the scanning time as low as possible is important.

[199] In some embodiments, a single button on the photogrammetry scanner system 10 activates the lighting and imaging sequence for all of the light emitting devices and image sensors to quickly capture a 3D model of the target (e.g., a patient) or to capture a predetermined sequence of frames for a video.

[200] In some embodiments, the photogrammetry scanner system includes a beam sensor to automatically activate the lighting and imaging sequence, either instantly or after a predetermined amount of time.

[201] In some embodiments, the image sensors 120 can be controlled by the controllers 180 to capture multiple single images per sensor with different shutter speed/ISO/exposure settings to provide the raw image data for high dynamic range (HDR) imaging, or as raw image data for use in 3D reconstruction. As explained above, this is synchronised with the lighting devices to enable two or more scans with the lighting and aperture set to different levels to, for example, correctly expose both dark skin and hair and light skin and hair at the same time.
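
By way of illustration, such bracketed captures could be fused using OpenCV's Mertens exposure fusion, one of several possible HDR techniques, chosen here for the sketch because it needs no exposure-time metadata; the file names are placeholders.

```python
import cv2

# Bracketed captures of the same view at different exposure settings; in the
# scanner these would come from one image sensor under synchronised lighting.
exposures = [cv2.imread(f"bracket_{i}.jpg") for i in range(3)]

# Mertens exposure fusion merges the bracket without exposure metadata and
# returns a float32 image in [0, 1].
merge = cv2.createMergeMertens()
fused = merge.process(exposures)
cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```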

[202] In some embodiments, a microphone is provided to capture voice instructions for triggering a scan operation, or other instructions to query status information (battery level, number of onboard scans, etc.) or to change scanning parameters.

[203] In some further embodiments, a speaker is provided to inform the user of important information such as the number of scans in memory, battery level, etc. A short sound can also be played through the speaker to give an audible cue on scan capture (e.g., similar to a camera shutter sound).

[204] Power may be provided to the photogrammetry scanner system 10 through either mains power or a battery.

[205] Embodiments provide a lightweight and robust frame designed with the potential for disinfecting and/or sterilization, with the internal processors and imaging sensors encased in sealed compartments.

[206] The ability of some embodiments to control the lighting to enable capturing 3D models that have been illuminated in the UV and IR areas of the spectrum close to the visible region provides a broader range of applications for the photogrammetry scanner system.

[207] In some embodiments, it may be advantageous to augment the LED lighting of the light emitting device with one or more lasers that can illuminate the subject with coherent light.

[208] By recording a video of the reflected light and performing post-capture data processing, it is possible to measure changes in the skin and surface tissue for diagnostics. In such embodiments, all sensors in the photogrammetry scanner system record short video sequences of the reflected light simultaneously and synchronously. Images corresponding to a given time-point from each sensor are extracted from the video sequence and processed in the photogrammetry pipeline to produce a single 3D object. The 3D objects for all desired frames in the video sequence can then be processed, and the surface maps processed, to perform the desired analysis of the reflected light over time. Light polarisation can also be incorporated at the laser output and/or sensor input to change the detected light. One example of this is cross-polarisation, which is designed to minimise specular reflections and enhance target features. In this implementation, a linear polariser is attached to the light sources at a given orientation, with another linear polariser attached to the sensor lens or input at an orthogonal orientation with respect to the light source polariser.

[209] In compliance with the statute, the invention has been described in language more or less specific to structural or methodical features. The term "comprises" and its variations, such as "comprising" and "comprised of", are used throughout in an inclusive sense and not to the exclusion of any additional features.

[210] It is to be understood that the invention is not limited to specific features shown or described since the means herein described comprises preferred forms of putting the invention into effect.

[211] The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted by those skilled in the art.