

Title:
CAMERA APPARATUS AND METHODS
Document Type and Number:
WIPO Patent Application WO/2023/089321
Kind Code:
A1
Abstract:
According to an aspect of the present invention there is provided a camera apparatus comprising: an event camera comprising a multi-pixel sensor; a lens assembly for focusing an image on the multi-pixel sensor; and an actuation mechanism configured to cause relative movement between the multi-pixel sensor and the lens assembly along an optical axis of the lens assembly to alter the focus of the image on the multi-pixel sensor. The apparatus comprises control circuitry configured to: drive the actuation mechanism to cause relative movement between the multi-pixel sensor and the lens assembly along the optical axis to alter the focus of the image on the multi-pixel sensor; and detect movement of the image on the multi-pixel sensor to detect one or more static objects with the event camera.

Inventors:
FOOTE WILL (GB)
CARR JOSHUA (GB)
BROWN ANDREW BENJAMIN SIMPSON (GB)
Application Number:
PCT/GB2022/052921
Publication Date:
May 25, 2023
Filing Date:
November 17, 2022
Assignee:
CAMBRIDGE MECHATRONICS LTD (GB)
International Classes:
H04N23/55; G01B11/24; G01B11/25
Domestic Patent References:
WO2021180294A1 (2021-09-16)
WO2011104518A1 (2011-09-01)
WO2020120997A1 (2020-06-18)
WO2018096347A1 (2018-05-31)
Foreign References:
GB2594334A (2021-10-27)
US10387477B2 (2019-08-20)
GB2022052893W (2022-11-15)
GB202116960A (2021-11-24)
GB202118753A (2021-12-22)
Attorney, Agent or Firm:
CAMBRIDGE MECHATRONICS LIMITED (GB)
Claims:

1. A camera apparatus comprising: an event camera comprising a multi-pixel sensor; a lens assembly for focusing an image on the multi-pixel sensor; and an actuation mechanism configured to cause relative movement between the multi-pixel sensor and the lens assembly along an optical axis of the lens assembly to alter the focus of the image on the multi-pixel sensor; and control circuitry configured to: drive the actuation mechanism to cause relative movement between the multi-pixel sensor and the lens assembly along the optical axis to alter the focus of the image on the multi-pixel sensor; and detect movement of the image on the multi-pixel sensor to detect one or more static objects with the event camera.

2. A camera apparatus according to claim 1, wherein the control circuitry is configured to drive the actuation mechanism to cause movement of the sensor and/or the lens assembly relative to a support structure of the camera apparatus to induce movement of the image on the sensor.

3. A camera apparatus according to claim 2, wherein the control circuitry is configured to drive the actuation mechanism to cause relative movement between the sensor and the lens assembly along a first axis which is perpendicular to the optical axis to induce movement of the image on the sensor.

4. A camera apparatus according to claim 3 wherein the control circuitry is configured to drive the actuation mechanism to cause relative movement between the sensor and the lens assembly along a second axis which is perpendicular to both the optical axis and the first axis to induce movement of the image on the sensor.

5. A camera apparatus according to claim 1 comprising a module upon which the sensor and the lens assembly are supported and wherein the control circuitry is configured to drive the actuation mechanism to tilt the module about one or more axes which are perpendicular to the optical axis to induce movement of the image on the sensor.

6. A camera apparatus according to claim 3 wherein the control circuitry is configured to determine a first threshold displacement value and to induce movement of the image on the sensor by driving the actuation mechanism to cause relative movement between the multi-pixel sensor and lens assembly along the first axis such that a maximum displacement between the multi-pixel sensor and the lens assembly along the first axis is equal to or less than the first threshold displacement value.

7. A camera apparatus according to claim 4 wherein the control circuitry is configured to determine a respective threshold displacement value for each of the first axis and the second axis and to induce the movement of the image on the sensor by driving the actuation mechanism to cause relative movement of the multi-pixel sensor and lens assembly along each of the first and second axes such that the maximum displacement between the multi-pixel sensor and the lens assembly along each axis is equal to or less than the respective threshold displacement value.

8. A camera apparatus according to claim 5 wherein the control circuitry is configured to determine an angular threshold displacement value and to drive the actuation mechanism to tilt the module about one of the one or more axes which are perpendicular to the optical axis such that the maximum angular displacement between the module and a plane perpendicular to the optical axis is equal to or less than the angular threshold displacement value.

9. A camera apparatus according to any of claims 6 to 8, wherein the control circuitry is configured to determine the threshold displacement value based on one or both of the focal length and aperture diameter of the camera.

10. A camera apparatus according to any of claims 1, 2, 6, 7 or 9, wherein the control circuitry is configured to cause movement of the image on the sensor by driving the actuation mechanism to cause the relative movement between the sensor and one or more lenses of the lens assembly along the optical axis.

11. A camera apparatus according to claim 1, wherein the sensor and the lens assembly are disposed on a handheld or wearable device and the control circuitry is configured to detect movement of the image on the sensor as a result of movement of the handheld or wearable device.

12. A camera apparatus according to any of claims 1, 2, 6, 7 or 9, wherein the control circuitry is configured to cause movement of the image on the sensor by driving the actuation mechanism to cause relative movement between the sensor and the one or more lenses of the lens assembly along the optical axis and concurrent movement of the sensor and/or lens assembly in a plane perpendicular to the optical axis.

13. A camera apparatus according to claim 12, wherein the control circuitry is configured to drive the movement of the sensor and/or lens assembly in the plane perpendicular to the optical axis.
14. A camera apparatus according to claim 13, wherein the control circuitry is configured to drive the actuation mechanism to move the lens assembly or sensor along a helical path, the axis of the helix being aligned with the optical axis, in order to achieve the relative movement between the sensor and the one or more lenses of the lens assembly along the optical axis and the concurrent movement of the sensor and/or lens assembly in a plane perpendicular to the optical axis.

15. A camera apparatus according to any preceding claim wherein the ratio of the focal length of the camera to the diameter of the aperture of the camera is less than or equal to 4, optionally less than or equal to 2.

16. A camera apparatus according to any preceding claim wherein the multi-pixel sensor comprises a plurality of pixels which are each configured to send a signal to a processor of the event camera when a rate of change of the intensity of illumination received by the pixel over time is above a threshold value.

17. A camera apparatus according to any preceding claim wherein the multi-pixel sensor comprises a plurality of pixels which are each configured to send a signal to a processor of the event camera when a change in the intensity of illumination received by the pixel is above a threshold value.

18. A camera apparatus according to any preceding claim, wherein the control circuitry is configured to determine an optimum focal depth value for a pixel of the multi-pixel sensor based on the intensity received by that pixel over time.

19. A camera apparatus according to claim 18, wherein the control circuitry is configured to determine a distance from the event camera to the one or more objects in the field of view of the pixel based on the optimum focal depth value.

20. A camera apparatus according to any preceding claim wherein the actuation mechanism comprises one or more SMA wires.

21. A camera apparatus according to claim 20, wherein the apparatus is configured to maintain a relative position between the lens assembly and the sensor along the optical axis when the one or more SMA wires are not contracted.

22. A method of detecting one or more static objects in a scene, the method comprising: causing relative movement between a multi-pixel sensor of an event camera and one or more lenses of a lens assembly along an optical axis of the lens assembly to alter the focus of an image on the multi-pixel sensor; and detecting movement of the image on the multi-pixel sensor to detect the one or more static objects with the event camera.

23. A method according to claim 22 comprising causing movement of the sensor and/or the lens assembly relative to a support structure to induce the movement of the image on the sensor.

24. A method according to claim 23 wherein causing movement of the sensor and/or the lens assembly comprises causing relative movement between the sensor and the lens assembly along a first axis which is perpendicular to the optical axis.

25. A method according to claim 24 wherein causing movement of the sensor and/or the lens assembly relative to a support structure further comprises causing relative movement between the sensor and the lens assembly along a second axis which is perpendicular to both the optical axis and the first axis.

26. A method according to claim 23 wherein causing the movement of the sensor and/or the lens assembly relative to a support structure comprises: tilting a module on which the sensor and the lens assembly are supported about one or more axes which are perpendicular to the optical axis; and/or rotating the sensor or a module on which the sensor and the lens assembly are supported about the optical axis.

27. A method according to claim 24 comprising determining a first threshold displacement value and inducing the movement of the image on the sensor by causing relative movement between the multi-pixel sensor and lens assembly along the first axis such that the maximum displacement between the multi-pixel sensor and the lens assembly along the first axis is equal to or less than the first threshold displacement value.

28. A method according to claim 25 comprising determining a respective threshold displacement value for each of the first axis and the second axis and inducing the movement of the image on the sensor by causing relative movement between the multi-pixel sensor and the lens assembly along each of the first and second axes such that the maximum displacement between the multi-pixel sensor and the lens assembly along each axis is equal to or less than the respective threshold displacement value.

29. A method according to claim 26 comprising determining an angular threshold displacement value and inducing the movement of the image on the sensor by tilting the module on which the sensor and the lens assembly are supported about an axis which is perpendicular to the optical axis such that the maximum angular displacement between the module and a plane perpendicular to the optical axis is equal to or less than the angular threshold displacement value.

30. A method according to any of claims 27 to 29, wherein the threshold displacement value is calculated based on one or both of the focal length and aperture diameter of the camera.

31. A method according to any of claims 22, 23, 27, 28 or 30, wherein the movement of the image on the sensor is caused by the relative movement between the sensor and the one or more lenses of a lens assembly along the optical axis.

32. A method according to claim 22, wherein the sensor and the lens assembly are disposed on a handheld or wearable device and the movement of the image on the sensor is caused by movement of the handheld or wearable device.

33. A method according to any of claims 22 to 25, 27, 28 or 30, wherein the movement of the image on the sensor is caused by the relative movement between the sensor and the one or more lenses of the lens assembly along the optical axis and concurrent movement of the sensor and/or lens assembly in a plane perpendicular to the optical axis.

34. A method according to claim 33, comprising causing the movement of the sensor and/or lens assembly in the plane perpendicular to the optical axis.

35. A method according to claim 34, wherein the relative movement between the sensor and the one or more lenses of the lens assembly along the optical axis and the concurrent movement of the sensor and/or lens assembly in a plane perpendicular to the optical axis comprises moving the lens or sensor along a helical path, the axis of the helix being aligned with the optical axis.

36. A method according to any of claims 22 to 35 wherein the ratio of the focal length of the camera to the diameter of the aperture of the camera is less than or equal to 4, optionally less than or equal to 2.
37. A method according to any of claims 22 to 36 wherein the multi-pixel sensor comprises a plurality of pixels which are each configured to send a signal to a processor of the event camera when a rate of change of the intensity of illumination received by the pixel over time is above a threshold value.

38. A method according to any of claims 22 to 37 wherein the multi-pixel sensor comprises a plurality of pixels which are each configured to send a signal to a processor of the event camera when a change in the intensity of illumination received by the pixel is above a threshold value.

39. A method according to any of claims 22 to 38, wherein the movement of the image on the multi-pixel sensor causes a change in intensity level received by a pixel and wherein the method comprises determining an optimum focal depth value for that pixel based on the intensity received by that pixel over time.

40. A method according to claim 39, comprising determining a distance from the event camera to the one or more objects in the field of view of the pixel based on the optimum focal depth value.

41. A method according to any of claims 22 to 40 wherein causing relative movement of the multi-pixel sensor relative to the lens assembly along the optical axis comprises passing current through one or more shape memory alloy, SMA, wires to contract the one or more SMA wires, thereby effecting the movement.

42. A computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out the method of any of claims 22 to 41.

Description:
Camera apparatus and methods

Field

The present application relates to a camera apparatus and methods related thereto.

Background

A conventional frame camera comprises a multi-pixel sensor and is configured to produce an image which is a pictorial representation of the intensity of the illumination received by each pixel at a given moment in time. Conversely, an event camera comprises a multi-pixel sensor in which each pixel is configured to report on intensity changes only as and when they occur. Accordingly, in the case of an event camera, when viewing a static scene no signals will be received by the camera processor from the pixels. For an event camera, movement in the field of view is required to cause the pixels to send data to the processor. As a result, event cameras are low power and low latency as compared to conventional frame cameras.
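To make the contrast concrete, an event-camera pixel is commonly modelled as a comparator on log-intensity: an event is emitted only when the log-intensity has changed by more than a contrast threshold since the last event. The sketch below is a toy model under that assumption; the class and parameter names are illustrative, not taken from the application.

```python
import math

class EventPixel:
    """Toy model of a single event-camera pixel: an event is emitted only
    when the log-intensity changes by more than a contrast threshold since
    the last event (names and threshold value are illustrative)."""

    def __init__(self, contrast_threshold=0.15):
        self.threshold = contrast_threshold
        self.reference = None  # log-intensity at the last emitted event

    def update(self, intensity, timestamp):
        """Feed one intensity sample; return (timestamp, polarity) or None."""
        log_i = math.log(max(intensity, 1e-6))
        if self.reference is None:
            self.reference = log_i
            return None
        delta = log_i - self.reference
        if abs(delta) >= self.threshold:
            self.reference = log_i
            return (timestamp, 1 if delta > 0 else -1)
        return None  # static scene: nothing is sent to the processor

pixel = EventPixel()
events = [pixel.update(100.0, t) for t in range(5)]          # static: no events
events += [pixel.update(150.0, 5), pixel.update(150.0, 6)]   # step change: one event
print([e for e in events if e])  # -> [(5, 1)]
```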

An event camera comprises a multi-pixel sensor, as described above, a lens assembly configured to focus light on the sensor and optionally one or more further optical elements, such as a mirror and/or prism. Methods and apparatuses are provided herein which involve the control (for example actuation of one or more elements) of an event camera apparatus in order to provide advantages over known systems. Some of the methods and apparatuses relate to optical image stabilisation and focusing of a camera apparatus.

Summary

In an aspect of the present disclosure there is provided a camera apparatus comprising: an event camera comprising a multi-pixel sensor; a lens assembly for focusing an image on the multi-pixel sensor; and an actuation mechanism configured to cause relative movement between the multi-pixel sensor and the lens assembly along an optical axis of the lens assembly to alter the focus of an image on the multi-pixel sensor.

The camera apparatus comprises control circuitry configured to: drive the actuation mechanism to cause relative movement between the multi-pixel sensor and the lens assembly along the optical axis to alter the focus of an image on the multi-pixel sensor; and detect movement of an image on the multi-pixel sensor to detect one or more static objects with the event camera.

By altering the focus of an image on the multi-pixel sensor, certain static objects (for example those which a user desires to detect) can be brought into focus. Detecting movement of the image on the sensor of the event camera then facilitates detection of the static objects in a scene (as without motion, the event camera would not detect static objects). The movement of the image on the sensor may be achieved in a number of ways, as will be explained below.

Altering the focus may also be achieved in a number of ways: a separate camera may be used for focusing, an auto-focusing algorithm could be used (for example phase detection auto-focus) and/or a range of focal depths could be iterated through (determining a focus value for each focal depth and selecting the optimal value). Alternatively, a pre-set focal depth may be used, for example a value stored on a memory. Any method of setting the focal value of the assembly could be used.
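As a minimal sketch of the last option (iterating through a range of focal depths and keeping the best), assuming a hypothetical actuator-drive callable and a hypothetical focus metric such as event rate or image-gradient energy:

```python
def autofocus_sweep(positions, drive_to_focus_position, focus_metric):
    """Sweep candidate positions along the optical axis, score each with a
    focus metric, and settle at the best one.

    positions: iterable of candidate lens/sensor positions
    drive_to_focus_position: hypothetical callable that moves the actuator
    focus_metric: hypothetical callable returning a sharpness score
    """
    best_position, best_score = None, float("-inf")
    for z in positions:
        drive_to_focus_position(z)   # alter focus of the image on the sensor
        score = focus_metric()       # e.g. event rate, gradient energy
        if score > best_score:
            best_position, best_score = z, score
    drive_to_focus_position(best_position)  # return to the optimal depth
    return best_position, best_score

# Toy usage: pretend the sharpest focus lies at z = 0.3 (arbitrary units).
state = {"z": 0.0}
move = lambda z: state.update(z=z)
metric = lambda: -abs(state["z"] - 0.3)
print(autofocus_sweep([0.0, 0.1, 0.2, 0.3, 0.4], move, metric))  # (0.3, -0.0)
```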

An event camera may also be referred to as a dynamic vision sensor or a neuromorphic camera. The event camera may comprise a multi-pixel sensor only.

In some embodiments, the movement of the image on the sensor may be induced by movement of the sensor and/or the lens assembly relative to a support structure of the event camera. Specifically, the control circuitry may be configured to drive the actuation mechanism to cause movement of the sensor and/or the lens assembly (or one or more lenses of the lens assembly) relative to a support structure to induce movement of the image on the sensor. This may be achieved in a number of different ways, as set out below.

In some embodiments, the control circuitry may be configured to drive the actuation mechanism to cause relative movement between the sensor and the lens assembly along a first axis which is perpendicular to the optical axis of the lens assembly. This could be achieved in a number of ways:

The sensor remaining stationary relative to a support structure of the camera apparatus and the lens assembly moving relative to the sensor (lens shift);

The lens assembly remaining stationary relative to a support structure of the camera apparatus and the sensor moving relative to the lens assembly (sensor shift);

Both the lens assembly and the sensor moving relative to a support structure of the camera apparatus and relative to each other.

In some embodiments, the control circuitry may be configured to drive the actuation mechanism to cause relative movement between the sensor and the lens assembly along a second axis which is perpendicular to both the optical axis and the first axis. For example, the motion of the sensor and/or lens assembly could follow a loop in a plane perpendicular to the optical axis, for example a circular or oval path.

In some embodiments the apparatus may comprise a module upon which the sensor and the lens assembly are supported. The control circuitry may be configured to drive the actuation mechanism to tilt the module (relative to a support structure of the camera apparatus) about one or more axes which are perpendicular to the optical axis of the lens assembly to induce movement of the image on the sensor. For example, the module may be tilted about the x and/or y axes (where the z axis is parallel to the optical axis of the lens assembly).

Additionally or alternatively, the movement of the sensor and/or the lens assembly relative to a support structure may comprise rotating the sensor (and optionally the lens, e.g. rotating a module upon which the sensor and the lens assembly are supported) about an optical axis of the lens assembly or about an axis parallel to the optical axis.

In some embodiments, the control circuitry may be configured to determine a first threshold displacement value and to induce movement of the image on the sensor by driving the actuation mechanism to cause relative movement of the multi-pixel sensor and lens assembly along the first axis (which is perpendicular to the optical axis, as set out above) such that the maximum displacement between the multi-pixel sensor and the lens assembly along the first axis is equal to or less than the first threshold displacement value. In other words, a threshold maximum displacement along a given axis may be set and movement along that axis constrained such that the displacement between the sensor and lens assembly never exceeds the threshold. Accordingly, the threshold can be set such that the relative movement only results in detection of objects which are in focus (and objects which are not in focus are not detected). If the movement is kept small enough, only sharp edges (i.e. edges of objects which are in focus) will be detected by the event camera and any blurred edges (i.e. edges of objects which are out of focus) will not be detected. This is because a sharp edge will produce a change in intensity at at least one pixel, whereas a sufficiently blurred edge will produce no detectable change in intensity provided the movement is small enough. This process is described with reference to the first axis (perpendicular to the optical axis) but may be carried out for any axis with a component which is perpendicular to the optical axis.

The control circuitry may be configured to determine a respective threshold displacement value for each of the first axis and the second axis (which is perpendicular to both the first axis and the optical axis) and to induce the movement of the image on the sensor by driving the actuation mechanism to cause relative movement of the multi-pixel sensor and lens assembly along each of the first and second axes such that the maximum displacement between the multi-pixel sensor and the lens assembly along each axis is equal to or less than the respective threshold displacement value. This has the advantage explained above in relation to the first axis.
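A minimal sketch of such a threshold-constrained dither, assuming a hypothetical `shift_actuator` interface and a threshold already determined as described above:

```python
import math

def dither_within_threshold(shift_actuator, threshold_um, n_steps=32):
    """Oscillate the lens (or sensor) along the first axis, capping the peak
    displacement at the threshold so that only sharp (in-focus) edges shift
    far enough on the sensor to trigger events.

    shift_actuator: hypothetical callable taking a displacement in micrometres
    threshold_um: the first threshold displacement value described above
    """
    for i in range(n_steps):
        x = threshold_um * math.sin(2 * math.pi * i / n_steps)
        shift_actuator(x)  # |x| <= threshold_um at every step
```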

In embodiments in which the apparatus comprises a module upon which the lens assembly and sensor are supported and that module is tilted, the control circuitry may be configured to determine an angular threshold displacement value and induce movement of the image on the sensor by driving the actuation mechanism to tilt the module about one of the one or more axes which are perpendicular to the optical axis such that the maximum angular displacement between the plane perpendicular to the optical axis and the module is equal to or less than the angular threshold displacement value. This also has the advantage explained above in relation to the first axis but here, the relative movement involves tilting the module, so the relevant displacement is angular rather than linear. The module may define a plane and the angular displacement of the module may be measured with reference to that plane. The plane may be parallel to the light-sensitive region of the sensor.

The threshold displacement value (either angular or linear) may be calculated based on one or more parameters of the camera system, for example one or both of the focal length of the camera and the camera's aperture size (e.g. diameter). For example, the threshold displacement value may be calculated based on the f-number of the camera (which is the ratio of the focal length of the camera to the diameter of the aperture). Alternatively or additionally the threshold displacement value may be calculated based on the derivative of the intensity received by a pixel or group of pixels with respect to distance across the field of view. Such a derivative may be calculated for each of a number of focal depths and these derivatives may be used to calculate the threshold displacement value.

The threshold displacement value may be pre-determined and may be retrieved from a memory.

The image may be moved by at least one pixel (which is achieved by moving the lens or sensor by one pixel pitch). In the case of module tilt, the angular displacement would correspond to at least one pixel. For example, the threshold displacement value (either angular or linear) may correspond to one pixel, two pixels, or three pixels. The threshold displacement value for any axis may be a multiple (e.g. 1, 2, 3 or 4, for example) of the pixel pitch (the distance between the respective centres of two adjacent pixels).
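Putting the two rules of thumb together (a small multiple of the pixel pitch, capped by the defocus blur of out-of-focus objects so their edges stay undetected), one plausible way to pick the threshold is sketched below. The blur-circle relation c = defocus/N is standard geometric optics, not a formula taken from the application.

```python
def displacement_threshold_um(pixel_pitch_um, f_number,
                              image_defocus_um, pitch_multiple=2):
    """Choose a dither threshold: a small multiple of the pixel pitch, capped
    below the defocus blur diameter of out-of-focus objects so their blurred
    edges never move by a detectable amount.

    Uses the standard geometric blur-circle estimate c = defocus / N for
    image-side defocus at f-number N (an assumption, not a formula from the
    application).
    """
    candidate = pitch_multiple * pixel_pitch_um
    blur_circle_um = image_defocus_um / f_number
    return min(candidate, blur_circle_um)

# e.g. 1.0 um pixels at f/1.8, out-of-focus objects defocused by 20 um
print(displacement_threshold_um(1.0, 1.8, 20.0))  # -> 2.0 (um)
```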

In some embodiments, the control circuitry is configured to cause movement of the image on the sensor by driving the actuation mechanism to cause relative movement between the sensor and the one or more lenses of a lens assembly along the optical axis. In this sense, the act of changing the focus of the image on the sensor may itself provide enough movement of the image on the sensor to facilitate detection of the one or more static objects.

In some embodiments, the sensor and the lens assembly may be disposed on a handheld or wearable device (e.g. a smart phone, watch or glasses). Further, the movement of the image on the sensor may be caused by movement of the handheld or wearable device. For example, this could be user handshake or other motion associated with motion of the whole device, rather than actuated movement of one or more elements in the optical pathway of the camera. In other words, the movement of the image on the sensor may not be the result of movement of the sensor and/or the lens assembly (or one or more lenses thereof) relative to a support structure but instead may be the result of movement of the whole device on which the event camera is disposed. Such motion of the whole device may, in some embodiments, be driven by an actuation mechanism.

In some embodiments, the control circuitry is configured to cause movement of the image on the sensor by driving the actuation mechanism to cause relative movement between the sensor and the one or more lenses of the lens assembly along the optical axis and concurrent movement of the sensor and/or lens assembly in a plane perpendicular to the optical axis. In this sense, the sensor and/or lens assembly are driven to move in the x-y plane (where the z-axis is aligned with the optical axis) and it is this movement and also the change in focus of the image on the sensor which results in the movement of the image on the sensor. In some embodiments the control circuitry is configured to cause movement of the sensor and/or lens assembly in the plane perpendicular to the optical axis, for example by shifting the sensor and/or lens assembly such that there is relative motion between the two in the plane.

In some embodiments the control circuitry is configured to drive the actuation mechanism to move the lens assembly (or one or more lenses thereof) and/or sensor along a helical path, the axis of the helix being aligned with the optical axis. Accordingly, one or both of the lens assembly and sensor are moved in a loop (e.g. an ellipse, for example a circle) in the x-y plane whilst simultaneously being translated along the optical (z) axis. Alternatively, one of the sensor and lens assembly (or one or more lenses thereof) may be moved in a loop (e.g. an ellipse) while the other is moved along the optical axis. In this sense, the relative motion between the sensor and lens assembly follows a helical path.
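A sketch of how such helical set-points might be generated, assuming (an assumption on our part) that the actuation mechanism accepts (x, y, z) position commands:

```python
import math

def helical_trajectory(radius_um, z_start_um, z_end_um, turns=3, samples=120):
    """Generate (x, y, z) set-points for a helical scan: a circular loop in
    the x-y plane combined with a linear focus sweep along the optical (z)
    axis. Assumes the actuation mechanism accepts position commands."""
    points = []
    for i in range(samples):
        t = i / (samples - 1)
        angle = 2 * math.pi * turns * t
        points.append((radius_um * math.cos(angle),                 # x loop
                       radius_um * math.sin(angle),                 # y loop
                       z_start_um + t * (z_end_um - z_start_um)))   # z sweep
    return points

# Three loops of 1.5 um radius while sweeping focus over 50 um.
path = helical_trajectory(1.5, 0.0, 50.0)
print(path[0], path[-1])
```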

In some embodiments, the camera may have a large aperture. Specifically, the f-number (which is the ratio of the focal length of the camera to the diameter of the aperture) may be less than or equal to 4.0. In some embodiments, the f-number of the camera may be less than or equal to 3 or less than or equal to 2. For example, the f-number may be 1.6 or 1.8. If the camera has a large aperture (and hence a shallow depth of field), then the focus of objects at different distances will be different. In this case the position of the lens relative to the image sensor can be used to preferentially identify the edges of objects at some distances over objects at other distances.

The event camera may be configured as a derivative magnitude event camera. In this case, the plurality of pixels of the sensor are each configured to send a signal to a processor of the event camera when a rate of change of the intensity of illumination received by the pixel over time is above a threshold value. A derivative magnitude camera may be particularly useful when detecting edges of objects which are in focus because across a sharp edge of an object, the rate of change of intensity will peak sharply and be more easily detected.

The event camera may equally be configured as an absolute magnitude event camera. In this case, the plurality of pixels of the sensor are each configured to send a signal to a processor of the event camera when a change in the intensity of illumination received by the pixel is above a threshold value. The event camera may equally be any other type of event camera.

Movement of the image on the multi-pixel sensor causes a change in intensity level received by a pixel. In some embodiments, the control circuitry may be configured to determine an optimum focal depth value for that pixel based on the intensity received by that pixel over time. In this way, the history of the intensity received by the pixel over time may be analysed in order to determine an optimum focal depth value for that pixel. It will be appreciated that optimum does not necessarily mean the absolute optimal focal depth but could instead refer to the best of a set of possible values. At the optimum focal depth, the object (or part of an object) which is in the field of view of the pixel is in focus.
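One hedged way to realise this per-pixel search: log which focal depth each event occurred at during a focus sweep (with the image being dithered), and take the depth at which the pixel responded most strongly. The function and variable names below are ours.

```python
from collections import defaultdict

def optimum_focal_depth(event_log):
    """event_log: iterable of (focal_depth, pixel_id) pairs recorded while
    sweeping focus and dithering the image. Returns, per pixel, the focal
    depth at which that pixel produced the most events - 'optimum' meaning
    the best of the depths actually tried."""
    counts = defaultdict(lambda: defaultdict(int))
    for depth, pixel in event_log:
        counts[pixel][depth] += 1
    return {pixel: max(by_depth, key=by_depth.get)
            for pixel, by_depth in counts.items()}

log = [(0.1, "p0"), (0.2, "p0"), (0.2, "p0"),
       (0.3, "p1"), (0.3, "p1"), (0.2, "p1")]
print(optimum_focal_depth(log))  # -> {'p0': 0.2, 'p1': 0.3}
```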

In some embodiments the control circuitry may be configured to determine a distance from the event camera to the object captured by the pixel based on the optimum focal depth value.

In some embodiments, the actuation mechanism may comprise one or more shape memory alloy (SMA) wires. The one or more SMA wires may drive relative movement between the multi-pixel sensor and the lens assembly along the optical axis.
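For the distance determination just described, the thin-lens equation offers one route: if v is the lens-to-sensor distance at the optimum focal depth and f the focal length, then 1/f = 1/u + 1/v gives the object distance u. This is standard optics rather than a method stated in the application.

```python
def object_distance_mm(focal_length_mm, image_distance_mm):
    """Thin-lens estimate: 1/f = 1/u + 1/v, so u = 1 / (1/f - 1/v), where v
    is the lens-to-sensor distance at the optimum focal depth. Returns None
    when the sensor sits at (or inside) the focal plane, i.e. focus at
    infinity."""
    if image_distance_mm <= focal_length_mm:
        return None
    return 1.0 / (1.0 / focal_length_mm - 1.0 / image_distance_mm)

# A 4 mm lens with the sensor 4.1 mm behind it -> object roughly 164 mm away.
print(round(object_distance_mm(4.0, 4.1), 1))
```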

The actuation mechanism may be configured to cause movement of the image on the multi-pixel sensor. This may be either:

(a) by inducing relative movement of the sensor and lens assembly along the optical axis or

(b) in one of the other ways listed above (module tilt, rotation of the sensor/module, lens shift or sensor shift, for example).

The control circuitry may be configured to cause relative movement between the multi-pixel sensor and the lens assembly along the optical axis and/or cause movement of the sensor and/or the lens assembly relative to a support structure by passing current through the one or more shape memory alloy, SMA, elements (e.g. wires) to contract the one or more SMA elements, thereby effecting the movement. For example, a particular configuration of SMA elements used for causing such movement is detailed in patent application WO2011/104518A1, which is hereby incorporated by reference in its entirety.
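Purely as an illustrative sketch (real SMA drivers use calibrated feedback, such as wire resistance, and the referenced application details the actual configurations): heating an SMA wire with current contracts it, so a simple proportional loop can nudge the drive current toward a position target.

```python
def sma_drive_step(target_position, measured_position,
                   current_ma, gain=0.5, max_current_ma=80.0):
    """One step of a toy proportional controller for an SMA wire: more
    current heats and contracts the wire, less current lets it cool and
    relax. Illustrative only - real drivers use calibrated feedback (e.g.
    wire resistance) and careful thermal limits."""
    error = target_position - measured_position
    new_current = current_ma + gain * error
    return min(max(new_current, 0.0), max_current_ma)  # clamp drive current
```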

Equally, other actuators may be used in addition to or instead of SMA (to cause relative movement between the sensor and the lens assembly along the optical axis and/or to cause movement of the image on the sensor), for example voice coil motors, piezoelectric actuators, ultrasonic motors, or microelectromechanical systems (MEMS). Additionally or alternatively, one or more liquid lenses or polymer lenses could be employed.

In some embodiments, the camera apparatus may comprise a further, separate actuation mechanism to drive movement as in (b) above. This further actuation mechanism may comprise one or more SMA wires and/or another type of actuator (e.g. one or more of those listed above).

In some embodiments, the camera apparatus may be configured to maintain a relative position between the lens assembly and the multi-pixel sensor along the optical axis when the one or more SMA wires are not contracted. In other words, the apparatus is configured such that when the one or more SMA wires are unpowered the component being moved by the one or more SMA wires remains stationary relative to the support structure. Such a configuration may be referred to as a 'zero hold power' configuration. Various means of configuring the apparatus are detailed in applications WO2020/120997A1, PCT/GB2022/052893, GB2116960.2, GB2118753.9, all of which are incorporated herein by reference in their entirety. A benefit of such a zero-hold-power arrangement is that the power consumption of the apparatus may be reduced. The actuation mechanism may be driven to set the relative distance between the sensor and the lens assembly to a first value, which is then maintained by the zero-hold-power configuration or mechanism, and then movement of the image on the sensor is induced (e.g. by sensor shift) in order to detect edges of static objects which are in focus.

In another aspect of the present disclosure there is provided a method of detecting one or more static objects in a scene. The method comprises: causing relative movement between a multi-pixel sensor of an event camera and one or more lenses of a lens assembly along an optical axis of the lens assembly to alter the focus of an image on the multi-pixel sensor; and detecting movement of the image on the multi-pixel sensor to detect the one or more static objects with the event camera.

Any of the features described above with respect to the apparatus for detecting one or more static objects in a scene may also be applied to this method.

In another aspect of the present disclosure there is provided a computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out any of the above methods. The instructions may further cause the processor to receive data from an event camera.

Reference to movement of an object in a plane is intended to include movement of an object where the movement is not entirely restricted to motion in that plane. Instead, all that may be required is that the direction of motion has a component in that plane. Equally, movement of an object in a plane includes movement which is entirely restricted to motion in that plane.

The method of detecting static objects in a scene set out above may also be described as follows:

A method of detecting one or more static objects in a scene, the method comprising: altering the focus of an image of the scene on a multi-pixel sensor of an event camera, such that a first set of static objects that are at a first distance from the event camera is relatively in focus and a second set of static objects that are at a second distance from the event camera is relatively out of focus; and causing an actuator assembly to move at least part of the event camera, thereby moving the image on the multi-pixel sensor by a distance selected to preferentially detect edges of the first set of static objects over edges of the second set of static objects with the event camera.

Any of the method steps described herein and/or features described with reference to the apparatus may also be applied to this method.

In an aspect of the present disclosure there is provided a camera apparatus comprising: an event camera comprising a multi-pixel sensor; a lens assembly for focusing an image on the multi-pixel sensor; and an actuation mechanism configured to cause relative movement between the multi-pixel sensor and the lens assembly along an optical axis of the lens assembly to alter the focus of an image on the multi-pixel sensor.

The camera apparatus comprises control circuitry configured to: drive the actuation mechanism to cause relative movement between the multi-pixel sensor and the lens assembly along the optical axis to alter the focus of an image on the multi-pixel sensor; and detect a change in the image on the multi-pixel sensor to detect one or more objects with the event camera.

A change in the image on the sensor may include any type of change, for example a change in focus of the image or a shift or rotation of the image. The one or more objects may be static objects or moving objects.

Optical image stabilisation in an event camera

According to an aspect of the present disclosure, there is provided a camera apparatus comprising: an event camera comprising a multi-pixel sensor and a lens assembly for focusing an image on the multi-pixel sensor; an actuation mechanism configured to cause relative movement between a first element of the camera apparatus and a second element of the camera apparatus; and control circuitry configured to drive the actuation mechanism to cause the relative movement to effect stabilisation of an image on the multi-pixel sensor.

As mentioned above, an event camera comprises a multi-pixel sensor in which each pixel is configured to report on a detected intensity change only when such a change occurs. In a conventional frame camera, all pixels report their detected intensity levels at the same time (when a frame is captured) but in an event camera, pixels report on detected intensity changes asynchronously. Accordingly, by stabilising the image on the sensor, stationary objects in the scene will not be detected by the event camera and instead, only moving objects will be detected. The term 'image' here refers to the light incident on the multi-pixel sensor.

An event camera may also be referred to as a dynamic vision sensor or a neuromorphic camera.

The control circuitry may be configured to: receive a signal from a motion sensor which is provided on the camera apparatus (or on a device on which the camera apparatus is provided) or otherwise associated with the camera apparatus upon the motion sensor detecting motion; and compensate for the detected motion by driving the actuation mechanism to cause relative movement between the first and second elements. Specifically, the control circuitry may send a signal to the actuation mechanism to drive such movement.

The motion sensor may be a gyroscope or accelerometer or any other device capable of detecting motion and/or acceleration. The apparatus may comprise the motion sensor.
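A minimal sketch of that compensation path, under our own assumptions about the interfaces: integrate the gyroscope rate into a tilt angle, then command an equal-and-opposite image shift of approximately f·tan(θ).

```python
import math

def ois_compensation_step(gyro_rate_rad_s, dt_s, angle_rad, focal_length_mm):
    """One optical-image-stabilisation update: integrate the gyro rate into a
    tilt angle, then return the lens/sensor shift that moves the image back,
    approximately -f * tan(angle). Interface names are assumptions."""
    angle_rad += gyro_rate_rad_s * dt_s                 # accumulated handshake tilt
    shift_mm = -focal_length_mm * math.tan(angle_rad)   # equal-and-opposite shift
    return angle_rad, shift_mm

# 0.01 rad/s of handshake over 10 ms on a 4 mm lens -> ~0.4 um of correction.
angle, shift = ois_compensation_step(0.01, 0.01, 0.0, 4.0)
print(round(shift * 1000, 3), "um")  # -> -0.4 um
```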

The first element may be a support structure and the second element may comprise the multi-pixel sensor and the lens assembly. For example, the second element may comprise a module which supports both the sensor and the lens assembly and this module may be moved relative to the support structure.

Further, the actuation mechanism may be configured to tilt the sensor and the lens assembly about one or more axes which are perpendicular to the optical axis of the lens assembly. For example, this could be done by tilting a module which supports the sensor and lens assembly. Tilting both the sensor and lens assembly together in this way may be advantageous (as compared to shifting one or both of the lens assembly and sensor relative to each other along an axis perpendicular to the optical axis of the lens assembly) because less image distortion results from module tilt as opposed to lens/sensor shift.

In some embodiments, the first element may be a support structure and the second element may comprise the multi-pixel sensor and optionally the lens assembly. The actuation mechanism may be configured to rotate the second element about the optical axis of the lens assembly or about an axis parallel to the optical axis. In this way, the image on the sensor may be stabilized by rotating the sensor (and optionally also the lens).

In another embodiment, the first element may comprise the multi-pixel sensor and the second element may comprise the lens assembly. In this way, the actuation mechanism causes relative motion between the sensor and lens assembly. This could be achieved by:

The sensor remaining stationary relative to a support structure of the camera apparatus and the lens assembly moving relative to the sensor (referred to as 'lens shift');

The lens assembly remaining stationary relative to a support structure of the camera apparatus and the sensor moving relative to the lens assembly (referred to as 'sensor shift');

Both the lens assembly and the sensor moving relative to a support structure of the camera apparatus and relative to each other.

The relative motion between the sensor and lens assembly may be along an axis which is perpendicular to the optical axis of the lens assembly. The relative motion may be translational movement in a plane perpendicular to the optical axis.

The actuation mechanism may also be configured to cause relative movement of the sensor and the lens assembly along the optical axis, for example for the purposes of focusing an image on the sensor.

In some embodiments, the first element may comprise the multi-pixel sensor and the second element may comprise one or more lenses of the lens assembly. In this way, the actuation mechanism causes relative motion between the sensor and one or more lenses of the lens assembly. This could be achieved by lens shift, sensor shift, or by moving both the one or more lenses and the sensor relative to a support structure of the camera apparatus.

The relative motion between the sensor and one or more lenses may be along an axis which is perpendicular to the optical axis of the lens assembly. The relative motion may be translational movement in a plane perpendicular to the optical axis.

The actuation mechanism may also be configured to cause relative movement of the sensor and the one or more lenses along the optical axis, for example for the purposes of focusing an image on the sensor.

The actuation mechanism may comprise one or more shape memory alloy (SMA) wires. For example, the wires could be arranged as detailed in patent application WO2011/104518A1, which is hereby incorporated by reference in its entirety.

Any other suitable actuator could also be used, for example voice coil motors, piezoelectric actuators, ultrasonic motors, or microelectromechanical systems (MEMS). Additionally or alternatively, one or more liquid lenses or polymer lenses could be employed.

The apparatus may be provided on a handheld or wearable device, for example a smart phone, smart watch or glasses.

The control circuitry may be configured to receive one or more signals from a first set of pixels (of the multi-pixel sensor) and detect a moving object in a field of view of the event camera based on the one or more received signals. An object moving in the field of view will cause a change in intensity of light received by one or more of the pixels of the multi-pixel sensor (i.e. the first set of pixels). One or more signals may then be received by the control circuitry from the one or more pixels and these signals may be used to detect the moving object.

The multi-pixel sensor may be configured to detect the moving object only when a displacement of the image on the sensor from a first position to a second position exceeds a threshold value. The threshold value may be a distance (either linear or angular) across the sensor. The distance may be an integer or half-integer multiple of the distance between pixels on the sensor. Applying a displacement threshold in this way may be useful because some movement of an image on the sensor may be detected as a result of less-than-perfect optical image stabilisation rather than true movement of an object in the field of view of the camera. In other words, one or more pixels may report a change in intensity of light received but this may only be a result of the image moving on the sensor as a result of e.g. handshake that is not entirely corrected by optical image stabilisation. Accordingly, a threshold can be applied such that only movement of the image on the sensor which is likely (e.g. above a certain confidence level) to result from movement of an object in the scene, rather than from handshake for example, is reported.
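One hedged way to apply such a threshold: track the centroid of recent events and report a moving object only once the centroid has shifted by more than a set number of pixel pitches. The names below are ours, not from the application.

```python
def detect_true_motion(events_before, events_after, threshold_px=1.5):
    """Report a moving object only if the event centroid has shifted by more
    than threshold_px pixel pitches between two observations; smaller shifts
    are attributed to residual (imperfectly stabilised) handshake.

    Each argument is a non-empty list of (x, y) pixel coordinates of events.
    """
    def centroid(points):
        return (sum(p[0] for p in points) / len(points),
                sum(p[1] for p in points) / len(points))

    (x0, y0), (x1, y1) = centroid(events_before), centroid(events_after)
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > threshold_px

print(detect_true_motion([(10, 10), (11, 10)], [(10.5, 10.2), (11.5, 10.2)]))  # False
print(detect_true_motion([(10, 10), (11, 10)], [(14, 10), (15, 10)]))          # True
```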

Herein, the term control circuitry refers to a control system generally, which may comprise one or any number of processors, chips or circuits. The control circuitry may comprise separate elements to control one or more parts of the camera apparatus and any processing may be carried out on a single processor or on a distributed processing system.

According to another aspect of the present disclosure there is provided a method of controlling a camera apparatus, the camera apparatus comprising: an event camera comprising a multi-pixel sensor and a lens assembly for focusing an image on the multi-pixel sensor; and an actuation mechanism configured to cause relative movement between a first element of the camera apparatus and a second element of the camera apparatus.

The method comprises driving the actuation mechanism to cause the relative movement to effect stabilisation of an image on the multi-pixel sensor.

As explained above, by stabilising the image on the sensor, stationary objects in the scene will not be detected by the event camera and instead, only moving objects will be detected.

The method may further comprise receiving data from one or more pixels of the multi-pixel sensor and using the data to detect only one or more objects which are moving relative to the camera apparatus.

Any of the features described herein with respect to the camera apparatus may also be applied to this method.

The method may comprise receiving one or more signals from a first set of pixels and detecting a moving object in a field of view of the event camera based on the one or more received signals. As explained above, an object moving in the field of view will cause a change in intensity of light received by one or more of the pixels of the multi-pixel sensor (i.e. the first set of pixels). One or more signals may then be received from the one or more pixels and these signals may be used to detect the moving object.

The method may comprise configuring the multi-pixel sensor to detect the moving object only when a displacement of the image on the sensor from a first position to a second position exceeds a threshold value. This has the advantages explained above with reference to the apparatus. Configuring the sensor may comprise setting a parameter of the sensor, for example the threshold value.

The method may comprise receiving one or more signals from a second set of pixels of the sensor, different to the first set of pixels, and sending an object-detection signal to a processor only when a distance between the first and second sets of pixels exceeds a threshold value. In other words, the threshold value mentioned above may be a distance between pixel groups. Accordingly, a minimum movement is required to prompt an object-detection signal.

In another aspect of the present disclosure there is provided a method comprising effecting stabilisation of an image on a multi-pixel sensor of an event camera. Any of the features described above may be carried out as part of this method.

In another aspect of the present disclosure there is provided a computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out any of the methods described above. The instructions may also cause the processor to receive data from the multi-pixel sensor of an event camera and detect only one or more objects which are moving relative to the camera apparatus.

According to another aspect of the present disclosure there is provided a non-transitory data carrier comprising instructions which, when executed by a processor, cause the processor to carry out a method described herein (for example in any of the method claims).

The term 'actuation mechanism' may refer to a single actuator or actuator assembly or to multiple separate actuators or actuator assemblies. For example, the actuation mechanism may comprise one or more voice coil motors, and/or one or more SMA wires. The actuation mechanism may comprise two or more groups of SMA wires wherein each group is configured to cause movement of a respective part of the apparatus (e.g. the first and/or second elements respectively) along one or more axes. In this case, the axes may be perpendicular to each other (e.g. the optical axis and an axis perpendicular to the optical axis).

The term 'element' may refer to an object comprising multiple parts (for example the term 'second element' may refer to the sensor and the lens assembly) or may equally refer to a single part of the apparatus.

3D sensing with an event camera

In another aspect of the present disclosure there is provided a method of determining a three-dimensional representation of a scene using an event camera comprising a multi-pixel sensor, each pixel having a respective location on the sensor. The method comprises: emitting illumination having a spatially non-uniform intensity over a field of view of the multi-pixel sensor; detecting a change in intensity of the illumination received by a first set of the pixels of the multi-pixel sensor; moving the illumination across at least part of the field of view; detecting a change in intensity of the illumination received by a second set of the pixels of the multi-pixel sensor as a result of the movement, wherein the first set of pixels is different to the second set of pixels; and determining a three-dimensional representation of the scene based on the respective location on the multi-pixel sensor of each pixel of the first and second sets of pixels.

This method takes advantage of the structured light method of 3D sensing, in which the depth of the scene is calculated from the apparent arrangement of a light pattern as observed by a camera which views the scene from a different direction than the light source. Further details can be found below and also in patent application WO2018096347A1, which is incorporated herein by reference in its entirety.
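The triangulation underlying structured light, sketched with standard pinhole geometry (our formula, not one stated in the application): for a projector and camera separated by a baseline b, a dot observed with a disparity of d pixels (pitch p) from its reference position lies at a depth of roughly z = f·b/(d·p).

```python
def depth_from_disparity_mm(focal_length_mm, baseline_mm,
                            disparity_px, pixel_pitch_mm):
    """Standard structured-light triangulation: z = f * b / (d * p), where
    d * p is the observed dot's shift (in mm on the sensor) from its
    reference position."""
    disparity_mm = disparity_px * pixel_pitch_mm
    if disparity_mm <= 0:
        return float("inf")  # no measurable shift: effectively at infinity
    return focal_length_mm * baseline_mm / disparity_mm

# f = 4 mm, 50 mm projector-camera baseline, 2 um pixels, 10 px disparity
print(depth_from_disparity_mm(4.0, 50.0, 10, 0.002))  # -> 10000.0 mm (10 m)
```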

By taking a first reading and then moving the illumination across at least part of the scene and taking a second reading, as is done in the method of generating a three-dimensional representation of a scene set out above, the depth sensing is improved in the following ways.

Firstly, by taking two readings more data is captured and can be used to generate the 3D representation of the scene. Taking the example of a dot pattern, moving the dots increases the effective density of projected dots, and hence the resolution of the resulting depth map, without increasing the number of dots emitted at any one time. A sparser instantaneous pattern in turn allows the intensity of each dot to be increased, improving the signal-to-noise ratio and hence the range, without risking safety issues associated with the intensity of the emitted illumination (e.g. laser safety issues). In this sense, by taking two readings the resolution of the 3D representation can be increased.

Secondly, with conventional cameras (i.e. frame cameras), it can be hard to distinguish the illumination from the scene itself at long ranges. This issue is mitigated by using an event camera because, for a static scene, only the illumination itself is detected: the illumination is all that is moving. The scene itself is stationary and so is not picked up by the event camera. This advantage also applies to scenes which are not entirely stationary - the amount of background noise is still reduced because any stationary elements of the scene are not detected and so any moving objects can more easily be distinguished from the reflected illumination.

Thirdly, the accuracy and/or resolution of the 3D representation can be increased as follows. In some cases, when an illumination pattern is shone onto a scene it can be difficult to determine which part of the pattern is which. For example, if the pattern is a pattern of dots, it can be hard to tell which dot is which (i.e. which emitted dot matches up to which reflected dot) because the pattern is distorted when it falls onto a scene of varying depth. By moving the illumination pattern and capturing data at each of a first and second position, more data is acquired and this data can be used to build up a confidence level as to which dot is which.

Further advantages are that, by moving the illumination, the set-up is made more efficient because more pixels of the sensor are employed in generating the 3D representation, and a simpler light source may be used: if the illumination pattern is to be moved then (taking the example of a dot pattern) a pattern containing fewer dots can be used and moved around multiple times to build up a higher-resolution depth map of the scene.

Moving the illumination may also have the advantage that dots which had previously not been reflected (because they were incident upon a surface which did not reflect them back to the detector, for example an absorptive surface) are now incident on a sufficiently reflective surface as a result of the movement of the illumination.

Moving the illumination may comprise moving the illumination from a first position to a second position, wherein the illumination is moved continuously between the first and second positions. In other words, the illumination is emitted continuously during movement between the first and second positions. This may be advantageous because more data is captured, as compared to instead emitting the illumination at a first position in the field of view and then at a second position, where no light is emitted during the transition between the first and second positions.

The method may further comprise moving the illumination across at least part of the field of view a second time, detecting a change in intensity of the illumination received by a third set of the pixels of the multi-pixel sensor as a result of this second movement and generating the three-dimensional representation of the scene based on the respective location on the multi-pixel sensor of each pixel of the first, second and third sets of pixels. This process may be repeated such that the illumination is moved any number of times and a corresponding data set is captured each time and used to generate the 3D representation.

Determining a three-dimensional representation of a scene may comprise generating the three-dimensional representation of the scene. Alternatively, determining a three-dimensional representation of a scene may comprise selecting a representation from a set of pre-determined three-dimensional representations. For example, a set of possible hypotheses as to what is happening in the scene may be determined and the above methods may be used to aid in determining which of these hypotheses (i.e. which 3D representation) is the most likely to be correct.

The illumination may be constant (i.e. the source of illumination is always on) or the illumination may be intermittent. Intermittent illumination (or varying the intensity of the illumination over time) has the advantage of reducing power consumption. As an example, the illumination may comprise pulses of illumination.

Moving the illumination may comprise passing current through one or more SMA wires to contract the one or more SMA wires, thereby effecting movement of the illumination. For example, one or more SMA wires may be connected to one or more of the following components in order to move the illumination: a lens, a prism, a mirror, a dot projector, and the source of illumination. Equally, any other actuators (for example any of those listed above) may be used in addition to or instead of SMA.

In another aspect of the present disclosure there is provided a computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out any of the above methods.

In another aspect of the present disclosure there is provided an apparatus for determining a three-dimensional representation of a scene. The apparatus comprises: an event camera comprising a multi-pixel sensor, each pixel having a respective location on the sensor; an illumination assembly configured to emit illumination having a spatially non-uniform intensity over a field of view of the sensor; an actuation mechanism configured to move the illumination across at least part of the field of view; and control circuitry.

The control circuitry is configured to: detect a change in intensity of the illumination received by a first set of the pixels of the sensor; drive the actuation mechanism to move the illumination across at least part of the field of view; detect a change in intensity of the illumination received by a second set of the pixels of the sensor as a result of the movement, wherein the first set of pixels is different to the second set of pixels; and determine a three-dimensional representation of the scene based on the respective location on the sensor of each pixel of the first and second sets of pixels. Any of the features described above with respect to the method of generating a three-dimensional representation of a scene using an event camera may also be applied to this camera apparatus.

According to another aspect of the present disclosure there is provided a non-transitory data carrier comprising instructions which, when executed by a processor, cause the processor to carry out a method described herein (for example in any of the method claims).

In some embodiments, moving the illumination comprises passing current through one or more shape memory alloy wires to move one or more optical components. In this way a shape memory alloy (SMA) actuator is employed to move the illumination. Other actuators may equally be used, for example voice coil motors, piezoelectric actuators, or microelectromechanical systems (MEMS). Additionally or alternatively, one or more liquid lenses or polymer lenses could be employed.

In another aspect of the present disclosure there is provided a computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out any of the above methods. The instructions may further cause the processor to receive data from an event camera.

Brief description of the drawings

Certain embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:

Figure 1 is a schematic view of a camera apparatus including an actuation mechanism;

Figure 2A illustrates a method of controlling a camera apparatus;

Figure 2B illustrates a method of controlling a camera apparatus;

Figure 3A is a schematic view of a camera apparatus including an actuation mechanism;

Figure 3B is a schematic view of a camera apparatus including an actuation mechanism;

Figure 3C is a schematic view of a camera apparatus including an actuation mechanism;

Figure 3D is a schematic view of an actuation mechanism;

Figure 3E is a schematic side view of the actuation mechanism of Figure 3D;

Figure 4 illustrates a schematic arrangement of a control circuit for controlling a camera apparatus;

Figure 5 illustrates a method of conducting optical image stabilisation;

Figure 6A illustrates the broad scheme of a structured light depth-sensing device;

Figure 6B illustrates an example light pattern for use in 3D sensing methods;

Figure 7 is a schematic diagram of an apparatus for generating a three-dimensional representation of a scene;

Figures 8A and 8B illustrate an apparatus for generating a 3D representation of a scene; and

Figure 9 illustrates a method of producing a 3D representation of a scene.

Detailed description

With reference to Figure 1, a camera apparatus 1 incorporating an actuation mechanism 2 is shown. Details of the camera apparatus will be described with reference to Figure 1 and a method involving use of the apparatus of Figure 1 will be described with reference to Figures 2A and 2B.

Referring to Figure 1, the camera apparatus 1 comprises a support structure 3 and includes a base 5. The camera apparatus further comprises a multi-pixel sensor 6 and a lens assembly 4 which is configured to focus an image on the multi-pixel sensor 6. The lens assembly has an optical axis O.

The multi-pixel sensor 6 is disposed on a front side of the base 5, i.e., the multi-pixel sensor 6 is interposed between the lens assembly 4 and the base 5. The multi-pixel sensor 6 is a sensor in which each pixel is configured to report changes in intensity of illumination received by that pixel as the changes occur; it may otherwise be referred to as an event camera sensor. The sensor 6 is thus distinguished from the sensor of a conventional frame camera, in which all pixels are polled at the same time when a frame is captured; in an event camera sensor, each pixel reports intensity changes only as and when they occur.
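By way of illustration, a minimal sketch of this per-pixel behaviour follows, assuming the common log-intensity contrast-threshold model of event sensors; the class and function names are hypothetical, not part of this disclosure:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 brighter, -1 darker

def emit_events(last_log_i, curr_log_i, t, contrast_threshold=0.2):
    """Each pixel fires independently, and only when its log-intensity
    has changed by more than the contrast threshold since the last
    event it emitted; there is no global frame readout."""
    delta = curr_log_i - last_log_i
    ys, xs = np.nonzero(np.abs(delta) > contrast_threshold)
    return [Event(int(x), int(y), t, int(np.sign(delta[y, x])))
            for y, x in zip(ys, xs)]
```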

The lens assembly 4 and the multi-pixel sensor are both suspended on the support structure 3 of the camera apparatus 1 by the actuation mechanism 2 such that the multi-pixel sensor 6 and the lens assembly 4 can move independently, relative to the support structure 3. The actuation mechanism 2 comprises one or more SMA (shape memory alloy) wires but it will be appreciated that other types of actuation mechanism may be used, for example one or more voice coil motors (VCMs) or any other actuator type (e.g. any of those listed above).

The actuation mechanism 2 is configured to cause relative movement between the lens assembly 4, or at least one lens 10 thereof, and the multi-pixel sensor 6 parallel to the optical axis O (i.e. parallel to the z axis) to alter a focus of an image formed on the multi-pixel sensor 6, for example as part of an automatic focussing (AF) function or otherwise. The actuation mechanism 2 may additionally be configured to cause relative movement between the lens assembly 4 (or at least one lens thereof) and the sensor 6 along one or more directions perpendicular to the optical axis O.

The lens assembly 4 includes a lens carriage 9 in the form of a cylindrical body supporting the two lenses 10 arranged along the optical axis O. In general, any number of one or more lenses 10 may be included. Preferably, each lens 10 has a diameter of up to about 20 mm. The camera apparatus 1 can therefore be referred to as a miniature camera.

The camera apparatus 1 includes an integrated circuit (IC) 7, which implements control circuitry, and also optionally a gyroscope sensor (not shown). The support structure 3 also includes a can 8 which protrudes from the base 5 to encase and protect the other components of the camera apparatus 1.

Figure 2A is a flow diagram depicting a method of controlling a camera apparatus 1. The camera apparatus 1 is shown in Figure 1 and is described above.

Referring to Figure 2A, at step 101 relative movement between the sensor 6 and one or more lenses of the lens assembly 4 (see Figure 1) along the optical axis is caused. Specifically, the actuation mechanism 2 drives relative movement between the sensor 6 and the one or more lenses along the optical axis to focus an image on the sensor 6. This can be achieved in a number of ways, for example any one of the following: the sensor 6 remains stationary relative to the support structure 3 and the lens assembly 4 (or one or more lenses thereof) is moved along the optical axis relative to the multi-pixel sensor; the lens assembly 4 remains stationary relative to the support structure 3 and the sensor 6 is moved along the optical axis relative to the lens assembly 4; or the sensor 6 and the lens assembly 4 move relative to each other and relative to the support structure 3 along the optical axis.

At step 103, movement of the image on the sensor is detected. In one example, the movement of the image on the sensor is caused by movement of the sensor 6 and/or one or more lenses of the lens assembly relative to the support structure (e.g. lens shift, sensor shift, module tilt or rotating the sensor (and optionally the lens assembly) about the optical axis and/or movement along the optical axis). This motion of the image on the sensor is detected by the event camera as the intensity of illumination received by at least some of the pixels of the sensor changes. As a result of this motion of the image, one or more static objects can be detected, as illustrated in the sketch below.
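A minimal control-loop sketch of the method of Figure 2A follows; `camera` and `actuator` are hypothetical driver objects standing in for the control circuitry interfaces, and the one-pixel dither amplitude is an assumption:

```python
def detect_static_edges(camera, actuator, dither_px=1.0):
    """Sketch of Figure 2A: focus the image (step 101), then induce a
    small lateral image motion and collect the events it triggers
    (step 103).  Static, in-focus edges reveal themselves because the
    dither sweeps them across pixel boundaries."""
    actuator.autofocus()                      # step 101: focus the image
    events = []
    for dx, dy in [(dither_px, 0), (-dither_px, 0),
                   (0, dither_px), (0, -dither_px)]:
        actuator.shift_lateral(dx, dy)        # induce image motion
        events += camera.read_events()        # step 103: collect events
    return events                             # events mark static edges
```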

Figure 2B illustrates a specific embodiment of the general method described with reference to Figure 2A. Referring to Figure 2B, at step 104 relative movement between the multi-pixel sensor 6 and the lens assembly 4 (see Figure 1) along the optical axis is caused. Specifically, the actuation mechanism 2 drives relative movement between the multi-pixel sensor 6 and the lens assembly 4 along the optical axis to focus an image on the multi-pixel sensor 6. For example, an auto-focus process may be used to focus an image on the sensor.

At step 106, a first threshold displacement value for a first axis perpendicular to the optical axis, e.g. the x-axis, is determined. The value is calculated using the f-number of the camera. Alternatively, a pre-set value may be used. The image may be moved by at least one pixel (which is achieved by moving the lens or sensor by one pixel pitch). In the case of module tilt, the angular displacement would correspond to at least one pixel. For example, the pre-set value may correspond to one pixel, two pixels, or three pixels.
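As a rough sketch of how such a threshold might be computed: the one-pixel baseline follows the text above, but the f-number scaling shown is an illustrative assumption, not a formula from this disclosure:

```python
def threshold_displacement(pixel_pitch_m, n_pixels=1, f_number=None,
                           f_number_scale=0.0):
    """Baseline rule: move the image by a small integer number of pixel
    pitches, so that only sharp, in-focus edges change pixel
    intensities.  Optionally widen the threshold with the f-number,
    since a slower lens has a larger depth of focus (illustrative
    scaling only, not the disclosed calculation)."""
    threshold = n_pixels * pixel_pitch_m
    if f_number is not None:
        threshold *= 1.0 + f_number_scale * f_number
    return threshold

# e.g. a 1.4 um pixel pitch and a two-pixel movement:
print(threshold_displacement(1.4e-6, n_pixels=2))  # 2.8e-06 (metres)
```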

At step 108, a second threshold displacement value for a second axis perpendicular to the optical axis and the first axis, e.g. the y-axis, is determined. This may be determined in the same way as the first threshold displacement value. Alternatively, a single threshold value may be determined and used for each of the two axes.

At step 110, the actuation mechanism drives relative movement between the multi-pixel sensor 6 and the lens assembly 4 along the first and second axes, which has the effect of moving the image on the multi-pixel sensor 6. This changes the intensity of illumination received by at least some of the pixels, which in turn report this change. Accordingly, one or more edges of objects in the scene are detected.

During the movement, the relative displacement between the multi-pixel sensor 6 and the lens assembly 4 along the x axis is restricted such that, throughout the movement, it is equal to or less than the first threshold displacement value, and the relative displacement between the multi-pixel sensor 6 and the lens assembly 4 along the y axis is restricted such that, throughout the movement, it is equal to or less than the second threshold displacement value. In this way, only objects which are in focus are detected by the multi-pixel sensor.

For objects which are not in focus, the edges of such objects are blurred and will not be detected by such a small movement in the x-y plane.

In a further embodiment, an analogous method is provided for a camera apparatus in which the sensor 6 and the lens assembly 4 are supported on a module. This module is tilted about one or more axes which are perpendicular to the optical axis in order to move the image on the sensor 6. The first and second threshold values described above are determined in an analogous way but in the case of a module-tilt apparatus, the displacement values are angular rather than linear.

Further camera apparatuses for use in detecting static objects as described herein will now be described with reference to Figures 3A, 3B and 3C, which use various means of moving an image on the sensor.

With reference to Figure 3A, a variation on the apparatus of Figure 1 is described. The camera apparatus 1 comprises a multi-pixel sensor 6 and a lens assembly 4.

As in the apparatus shown in Figure 1, the lens assembly 4 is suspended on the support structure 3 by the actuation mechanism 2. The actuation mechanism 2 is configured to cause relative movement between the multi-pixel sensor 6 and the lens assembly 4. In this embodiment, the multi-pixel sensor 6 is stationary relative to the support structure 3. It will be appreciated that, in this embodiment, the multi-pixel sensor 6 may not be supported by the actuation mechanism 2 and may instead be fixed relative to the support structure 3, as shown in Figure 3A.

The actuation mechanism 2 is configured to move the lens assembly 4 in a plane orthogonal to the optical axis O. Movement of the lens assembly in a plane orthogonal to the optical axis O (i.e. the x-y plane) relative to the multi-pixel sensor 6 has the effect that the image on the multi-pixel sensor 6 is moved. For example, if a set of right-handed orthogonal axes x, y, z is aligned so that a third axis z is oriented substantially parallel to the optical axis O, then the lens assembly 4 may be moveable in a direction parallel to the x axis and/or in a direction parallel to the y axis. The actuation mechanism 2 is also configured to move the lens assembly (or one or more lenses thereof) along the optical axis to alter a focus of the image on the sensor.

With reference to Figure 3B, a second embodiment is described. In this second embodiment, the apparatus 1 comprises a module upon which the sensor 6 and lens assembly 4 are supported. The image on the sensor is moved by tilting the module about an axis parallel to the x axis and/or about an axis parallel to the y axis (the y axis is directed out of the page, perpendicular to the x and z axes). The module comprises a casing (not shown) on which the multi-pixel sensor 6 and lens assembly 4 are supported and the casing is supported on the support structure 3 by the actuation mechanism 2.

With reference to Figure 3C, a third embodiment is described. The actuation mechanism 2 is configured to move the multi-pixel sensor 6 in a plane orthogonal to the optical axis O. In this third embodiment, movement of the multi-pixel sensor 6 in a plane orthogonal to the optical axis O (i.e. the x-y plane) and relative to the lens assembly 4 has the effect that the image on the multi-pixel sensor 6 is moved. For example, if a set of right-handed orthogonal axes x, y, z is aligned so that a third axis z is oriented substantially parallel to the optical axis O, then the multi-pixel sensor 6 may be moveable in a direction parallel to the x axis and/or in a direction parallel to the y axis.

In the first and third embodiments described above, one of the lens assembly 4 and the multi-pixel sensor 6 is held static relative to the support structure 3 and the other of the lens assembly 4 and the multi-pixel sensor 6 is moved in a plane orthogonal to the optical axis to effect optical image stabilisation. In some embodiments, however, both the lens assembly 4 and the multi-pixel sensor 6 may move in the x-y plane to effect relative movement between the multi-pixel sensor 6 and the lens assembly 4, thus stabilising the image on the multi-pixel sensor 6.

In any of the apparatuses described herein, the actuation mechanism 2 may comprise one or more SMA wires. A particular embodiment of such an actuation mechanism 2 is described with reference to Figures 3D and 3E. By way of an overview, in Figure 3D, the actuation mechanism is an SMA actuator that uses SMA wires to move a first element relative to a second element, for example to provide autofocus and/or optical image stabilization. Eight SMA wires are arranged inclined with respect to a notional primary axis, with a pair of SMA wires on each of four sides around the primary axis. The SMA wires are connected so that on contraction two groups of four SMA wires provide a force with a component in opposite directions along the primary axis, so that the groups are capable of providing movement along the primary axis. The SMA wires of each group may have twofold rotational symmetry about the primary axis, and there are SMA wires opposing each other that are capable of providing lateral movement or tilting. Such an actuation mechanism is described in patent application WO 2011/104518A1, which is incorporated herein by reference in its entirety.

Referring to Figures 3D and 3E, an actuator assembly 20 (also referred to simply as the actuator) will now be described.

The actuator assembly 20 includes a first element which in this case is a movable element 11 and which is supported on a support structure 14 by eight SMA wires w0–w7. The movable element 11 may in general be any type of element. The movable element 11 may be a lens or lens assembly or an image sensor, for example. As viewed along a primary axis z (i.e. the optical axis), the movable element 11 has the shape of a square with two diagonally-opposite corners that are rounded. However, more generally, the movable element 11 could have any shape. The support structure 14 has a square base 14a with two parts 14b (also referred to as support posts) that extend from this base 14a into the space left by the rounded corners of the movable element 11. However, in general, the support structure 14 could be any type of element suitable for supporting the movable element 11. The support structure 14 supports the movable element 11 in a manner allowing movement of the movable element 11 relative to the support structure 14. In this example, the movable element 11 is supported on the support structure 14 solely by the SMA wires w0–w7 but the SMA actuator 20 may comprise a suspension system additionally supporting the movable element 11 on the support structure 14.

Each SMA wire w comprises a piece of SMA wire connected at each end via a connector 15 to a respective one of the movable element 11 and the support structure 14. As will be described in more detail below, the connectors 15 are crimp portions (and will be generally referred to as such). However, more generally, any suitable means that provides mechanical connection may be used. In addition, electrical connections are made to the SMA wires w, for example via the crimp portions 15.

Each SMA wire w extends along a side s of the primary axis z perpendicular to a notional line radial of the primary axis z and inclined with respect to the primary axis. Each SMA wire w is held in tension, thereby applying a component of force in a direction along the primary axis z and a component of force in a lateral direction perpendicular to the primary axis z.

SMA material has the property that on heating it undergoes a solid-state phase change which causes the SMA material to contract. At low temperatures, the SMA material enters the Martensite phase. At high temperatures the SMA enters the Austenite phase which induces a deformation causing the SMA material to contract. The phase change occurs over a range of temperature due to the statistical spread of transition temperature in the SMA crystal structure. Thus heating of the SMA wires w causes them to decrease in length. The SMA wires w may be made of any suitable SMA material, for example Nitinol or another titanium-alloy SMA material. Advantageously, the material composition and pre-treatment of the SMA wires w is chosen to provide phase change over a range of temperature that is above the expected ambient temperature during normal operation and as wide as possible to maximise the degree of positional control.

On heating of one of the SMA wires w, the stress therein increases and it contracts. This causes movement of the movable element 11. A range of movement occurs as the temperature of the SMA increases over the range of temperature in which there occurs the transition of the SMA material from the Martensite phase to the Austenite phase. Conversely, on cooling of one of the SMA wires w, the stress therein decreases and it expands under the force from opposing ones of the SMA wires w. This allows the movable element 11 to move in the opposite direction.

The position of the movable element 11 relative to the support structure 14 along the primary axis z is controlled by varying the temperature of the SMA wires w. This is achieved by passing a drive current through the SMA wires w that provides resistive heating. Heating is provided directly by the drive current. Cooling is provided by reducing or ceasing the drive current to allow the SMA wires w to cool by conduction to their surroundings.
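A minimal sketch of one such drive loop follows. Resistance feedback is a common way of inferring SMA wire length, but this particular control law is an assumption for illustration, not the scheme of this disclosure:

```python
def sma_drive_step(target_ohm, measured_ohm, current_a,
                   gain=0.05, i_min=0.0, i_max=0.12):
    """One iteration of a simple proportional drive loop.  Within the
    usual operating range, wire resistance falls as the wire contracts,
    so resistance serves as a position proxy: when the wire is too long
    (resistance above target), raise the current to heat and contract
    it; otherwise lower the current and let it cool and extend."""
    error = measured_ohm - target_ohm
    current_a += gain * error                  # heat more when too long
    return min(max(current_a, i_min), i_max)   # clamp to a safe range
```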

Two of the SMA wires w are arranged on each of four sides s (i.e. a first side s1, a second side s2, a third side s3 and a fourth side s4) around the primary axis z. The two of the SMA wires w on each side s, for example SMA wires w6 and w7, are inclined in opposite senses with respect to each other, as viewed perpendicular to the primary axis z, and cross each other. The four sides s1, s2, s3, s4 on which the SMA wires w are arranged extend in a loop around the primary axis z. In this example, the sides s1, s2, s3, s4 are perpendicular and so form a square as viewed along the primary axis z, but alternatively the sides s1, s2, s3, s4 could take a different, e.g. quadrilateral, shape. In this example, the SMA wires w0–w7 are parallel to the outer faces of the square envelope of the movable element 11, which conveniently packages the SMA actuator 20 but is not essential.

One of the SMA wires w on each side s provides a force on the movable element 11 in the same direction along the primary axis z. In particular, the SMA wires w0, w3, w4, w7 form a 'first' group that provide a force in one direction ('upwards') and the other SMA wires w1, w2, w5, w6 form a 'second' group that provide a force in the opposite direction ('downwards'). Herein, 'up' and 'down' generally refer to opposite directions along the primary axis z, wherein movement of the movable element 11 away from the base 14a of the support structure 14 is 'up'.

The SMA wires w0–w7 have a symmetrical arrangement in which lengths and inclination angles are the same, so that both the first group of SMA wires w0, w3, w4, w7 and the second group of SMA wires w1, w2, w5, w6 are each arranged with twofold rotational symmetry about the primary axis z (i.e. bisecting the angle between SMA wires w on adjacent sides s and across the diagonals of the square envelope of the movable element 11). As a result of this symmetrical arrangement, different combinations of the SMA wires w0–w7, when selectively actuated, are capable of driving movement of the movable element 11 with multiple degrees of freedom, as follows. The first group of SMA wires w0, w3, w4, w7 and the second group of SMA wires w1, w2, w5, w6, when commonly actuated, drive movement in opposite directions along the primary axis z.

Within each group, adjacent pairs of the SMA wires (for example on one hand SMA wires w1, w6 and on the other hand SMA wires w2, w5) when differentially actuated drive tilting about a lateral axis perpendicular to the primary axis z. Tilting in any arbitrary direction may be achieved as a linear combination of tilts about the two lateral axes.

Sets of four SMA wires, including two SMA wires from each group (for example on one hand SMA wires w4, w5, w6, w7 and on the other hand SMA wires w0, w1, w2, w3), when commonly actuated drive movement along a lateral axis (e.g. the line y=-x) perpendicular to the primary axis z. Movement in any arbitrary direction perpendicular to the primary axis z may be achieved as a linear combination of movements along the two lateral axes.
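These degrees of freedom can be summarised as a mixing matrix mapping a demanded motion onto the eight wires. The sketch below follows the groupings in the text, but the exact sign pattern depends on the wire geometry of Figures 3D and 3E, so the matrix entries here are illustrative assumptions:

```python
import numpy as np

# Columns: [z, tilt_a, tilt_b, lateral_u, lateral_v]; rows: wires w0-w7.
# z:       'up' group {w0, w3, w4, w7} versus 'down' group {w1, w2, w5, w6}
# tilt_a:  differential pairs, e.g. {w1, w6} versus {w2, w5}
# tilt_b:  the corresponding pairs of the other group (assumed here)
# lateral: sets of four, e.g. {w4, w5, w6, w7} versus {w0, w1, w2, w3}
M = np.array([
    [+1,  0, +1, -1, -1],  # w0
    [-1, +1,  0, -1, -1],  # w1
    [-1, -1,  0, -1, +1],  # w2
    [+1,  0, -1, -1, +1],  # w3
    [+1,  0, -1, +1, +1],  # w4
    [-1, -1,  0, +1, +1],  # w5
    [-1, +1,  0, +1, -1],  # w6
    [+1,  0, +1, +1, -1],  # w7
], dtype=float)

def wire_demands(demand, bias=0.5):
    """Map a demanded motion vector onto per-wire drive levels.  SMA
    wires can only pull, so a common bias tension is added and the
    result clipped to the drivable range."""
    return np.clip(bias + M @ np.asarray(demand, dtype=float), 0.0, 1.0)

# e.g. a pure movement 'up' along the primary axis:
print(wire_demands([0.3, 0.0, 0.0, 0.0, 0.0]))
```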

A control circuit can be electrically connected to the SMA wires for supplying drive currents thereto to drive these movements, e.g. as described in WO 2011/104518 A1 (which is herein incorporated by reference in its entirety).

The moveable element 11 could comprise one or more of a lens assembly, a sensor (e.g. a multi-pixel sensor of an event camera) or a module on which both the sensor and lens assembly are supported.

Figure 4 shows a schematic arrangement of a control circuit 12 for controlling the camera apparatus 1. The control circuit 12 includes a processor 41 and a memory device 42. The processor 41 may receive inputs from sensor 40 which may be physically coupled to the camera apparatus 1 and may detect vibrations experienced by the camera assembly. The sensor 40 may be a vibration sensor such as a gyroscope sensor which detects the angular velocity of the camera assembly in three dimensions or an accelerometer which detects motion allowing the orientation and/or position to be inferred. The control circuit 12 may monitor these inputs and may determine any shake of the camera apparatus 1.

The control circuit 12 sends actuation signals to the actuation mechanism 2 to cause relative movement of one or more elements of the camera apparatus 1 according to the techniques described above. The memory device 42 may store predetermined algorithms which the processor 41 uses to drive the actuation mechanism 2.

Optical image stabilisation in an event camera

Some of the methods and apparatuses described herein relate to optical image stabilisation in an event camera. Figure 5 shows a flow diagram setting out a method of conducting optical image stabilisation. At step 100, movement of the camera apparatus 1 is detected. Specifically, sensor 40 (see Figure 4) detects movement of the camera apparatus 1 and sends a signal to the processor 41, which forms part of the control circuit 12. At step 102, the actuation mechanism 2 (see Figure 1) is driven to compensate for movement of the camera apparatus 1. Specifically, the control circuit 12 sends actuation signals to the actuation mechanism to drive the actuation mechanism.
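As a sketch of step 102, a small-angle model can convert the measured shake into an opposing image-shift demand; the sign conventions and the pitch/yaw-to-axis mapping below are assumptions for illustration:

```python
def ois_shift_demand(gyro_rate_rad_s, dt_s, focal_length_px):
    """An angular displacement theta moves the image by roughly
    f * theta on the sensor, so command the opposite lateral shift."""
    yaw_rate, pitch_rate = gyro_rate_rad_s      # rad/s from the gyroscope
    return (-focal_length_px * yaw_rate * dt_s,    # x shift (pixels)
            -focal_length_px * pitch_rate * dt_s)  # y shift (pixels)
```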

The method may further comprise step 102a which comprises receiving one or more signals from a first set of pixels and detecting a moving object in a field of view of the event camera based on the one or more received signals. As explained above, an object moving in the field of view will cause a change in intensity of light received by the first set of pixels. The one or more signals may be used to detect the moving object.

As mentioned above, the multi-pixel sensor may be configured to detect the moving object only when a displacement of the image on the sensor from a first position to a second position exceeds a threshold value. The threshold value may be a distance (either linear or angular) across the sensor. The distance may be an integer or half multiple of the distance between pixels on the sensor. Step 102a may comprise setting this threshold value and only detecting a moving object when the image on the sensor moves by a distance greater than the threshold value. As explained above, applying a displacement threshold in this way may be useful because some movement of an image on the sensor may be detected as a result of less-than-perfect optical image stabilisation rather than true movement of an object in the field of view of the camera. In other words, one or more pixels may report a change in intensity of light received, but this may only be a result of the image moving on the sensor as a result of e.g. handshake that is not entirely corrected by optical image stabilisation. Accordingly, a threshold can be applied such that only movement of the image on the sensor which is likely (e.g. above a certain confidence threshold) to result from movement of an object in the scene, rather than e.g. handshake, is reported as object motion.
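A minimal sketch of such a displacement test follows; the centroid-tracking approach and the 1.5-pixel default are assumptions for illustration:

```python
import numpy as np

def is_object_motion(first_set_xy, second_set_xy, threshold_px=1.5):
    """Report a moving object only when the displacement between the
    centroids of an earlier and a later set of reporting pixels exceeds
    the threshold; smaller displacements are attributed to residual,
    imperfectly stabilised handshake rather than scene motion."""
    d = np.mean(second_set_xy, axis=0) - np.mean(first_set_xy, axis=0)
    return float(np.hypot(d[0], d[1])) > threshold_px
```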

Various apparatuses may be used to implement optical image stabilization, for example those illustrated in Figures 1 and 3A-E, as follows:

Figure 3A: In this embodiment, the multi-pixel sensor corresponds to the first element A and the lens assembly corresponds to the second element B. Movement of the lens assembly 4 (as described above with reference to Figure 3A) may be used to provide optical image stabilization (OIS), compensating for movement of the camera apparatus 1, which may be caused by hand shake etc. The movement providing OIS need not be constrained to the x-y plane.

Figure 3B: the support structure 3 corresponds to the first element A and the module comprising the lens assembly 4 and the sensor 6 corresponds to the second element B. OIS functionality is provided by tilting the module comprising the lens assembly 4 and the multi-pixel sensor 6, about an axis parallel to the x axis and/or about an axis parallel to the y axis (the y axis is directed out of the page, perpendicular to the x and z axes).

Figure 3C: the lens assembly 4 corresponds to the first element A and the multi-pixel sensor 6 corresponds to the second element B. Movement of the sensor 6 in a plane orthogonal to the optical axis relative to the lens assembly is used to provide optical image stabilization (OIS), compensating for movement of the camera apparatus 1, which may be caused by hand shake etc. The movement providing OIS need not be constrained to the x-y plane.

In the embodiments described above with respect to Figures 3A and 3C, one of the lens assembly 4 and the multi-pixel sensor 6 is held static relative to the support structure 3 and the other of the lens assembly 4 and the multi-pixel sensor 6 is moved in a plane orthogonal to the optical axis to effect optical image stabilisation. In some embodiments, however, both the lens assembly 4 and the multi-pixel sensor 6 may move in the x-y plane to effect relative movement between the multi-pixel sensor 6 and the lens assembly 4, thus stabilising the image on the multi-pixel sensor 6.

3D sensing in an event camera

As mentioned above, some methods described herein make use of the structured light method of 3D sensing. Figure 6A illustrates the broad scheme of a structured light depth-sensing device 50. The device 50 is for generating a depth map of a scene 52.

As shown in Figure 6A, in an embodiment the device 50 comprises an emitter 54 and a detector 56. The emitter 54 is configured to emit radiation to the scene 52. The detector 56 is configured to detect the radiation reflected from the scene 52. Optionally, the emitter 54 is configured to emit structured radiation (i.e. a light pattern) to the scene 52. Figure 6B depicts an example of a light pattern 70 formed of a plurality of dots (i.e. points of light) that may be used in the context of the 3D sensing methods described herein. Optionally, the radiation is infrared radiation. The light pattern is transmitted to the scene 52 and extends across an area of the scene 52, which area may have varying depths, as for example when the scene comprises a human face and the depth mapping device is used for face recognition. The detector 56 is configured to detect radiation received from the scene 52. When a light pattern is used, the measurements of the detector 56 are used to determine distortion of the projected light pattern such that a depth map of the scene 52 can be generated.
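For background, the standard structured-light triangulation relation underlying this scheme is sketched below; this is textbook geometry, not a formula taken from this disclosure:

```python
def depth_from_dot_shift(focal_length_px, baseline_m, dot_shift_px):
    """A dot projected from an emitter offset by a baseline from the
    detector lands on the sensor shifted, relative to its reference
    position (e.g. for a calibration plane at infinity), in inverse
    proportion to the depth of the surface it strikes."""
    if dot_shift_px == 0:
        return float("inf")  # no shift: surface at the reference depth
    return focal_length_px * baseline_m / dot_shift_px
```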

The device 50 includes a depth mapping processor unit 58 that is supplied with the output of the detector 56 and which processes that output to generate a depth map of the scene. The depth mapping processor unit 58 may perform this processing using known techniques. The depth mapping processor unit 58 may be implemented in a processor executing appropriate software.

The device 50 also includes a control processor unit 60 configured to control the emitter 54. The control processor unit 60 may be implemented in a processor executing appropriate software, which may be the same processor as, or a different processor from, that which implements the depth mapping processor unit 58.

In the 3D sensing methods described herein, the detector 56 is a sensor of an event camera, i.e. it is a multi-pixel sensor in which each pixel is configured to report on intensity changes only as and when they occur. Methods and systems for generating a 3D depth map of a scene using an event camera will now be described with reference to Figures 7 to 9.

Figure 7 shows a schematic diagram of an apparatus 120 for generating a three-dimensional (3D) representation of a scene using an event camera. For example, the apparatus 120 may be, or may be included in, any of: a smartphone, a mobile computing device, a laptop, a tablet computing device, a security system, a gaming system, an augmented reality system, an augmented reality device, a wearable device such as a watch or glasses, a drone, an aircraft, a spacecraft, a vehicle, an autonomous vehicle, a robotic device, a consumer electronics device, a domotic device, and a home automation device.

The apparatus 120 comprises a light source 122 which is arranged to emit non-uniform illumination (for example a dot pattern such as that shown in Figure 6B) and a multi-pixel sensor 124 for receiving reflected light from a field of view. The multi-pixel sensor 124 is configured such that each pixel is configured to report changes in intensity of illumination received by that pixel as the changes occur. The multi-pixel sensor 124 may otherwise be referred to as an event camera sensor.

The non-uniform illumination may be any form of illumination and the light source 122 may be any suitable light source. For example, the light source 122 may be a source of non-visible light or a source of near infrared (NIR) light. The light source 122 may comprise at least one laser, laser array (e.g. a VCSEL array), or may comprise at least one light emitting diode (LED). The non-uniform illumination emitted by the light source 122 (or by the overall apparatus 120) may have any form or shape. For example, the non-uniform illumination may be a light beam having a circular beam shape or may comprise a pattern of parallel stripes of light or may comprise a uniform or non-uniform pattern of dots or circles of light, for example as shown in Figure 6B. It will be understood that these are merely example types of illumination and are non-limiting.

The apparatus 120 comprises an actuation mechanism 126 for moving the emitted non-uniform illumination across at least part of the field of view of the sensor 124. The actuation mechanism 126 may be any suitable actuation mechanism for incorporation into the apparatus 120 and for use in an imaging system. For example, the actuation mechanism 126 may be a shape memory alloy (SMA) actuation system, which comprises at least one SMA actuator wire. The at least one SMA actuator wire may be coupled to the or each element of the apparatus 120 which may be moved in order to move the emitted non-uniform illumination across at least part of the scene. Additionally or alternatively, the actuation mechanism 126 may comprise a voice coil motor (VCM), or an adaptive beam-steering mechanism for steering the non-uniform illumination (which may comprise an electrically switchable spatial light modulator). Alternatively or additionally any of the actuator types listed above may be used. The actuation mechanism 126 may be arranged to move the emitted non-uniform illumination by moving any one of the following components of the apparatus 120: a lens, a prism, a mirror, a dot projector, and the light source 122.

In embodiments, the apparatus 120 may comprise at least one moveable optical element 128 which is provided 'in front of' the light source 122, i.e. between the light source 122 and the object field/scene. The actuation mechanism 126 may be arranged to spin or rotate, or otherwise move, the optical element 128 in order to move the emitted non-uniform illumination. The optical element 128 may be any one of: a lens, a prism, a mirror, and a diffraction grating.

Figures 8A and 8B show an embodiment of an apparatus for generating a 3D map of a scene using an event camera; in Figure 8A the illumination is directed onto the centre of the scene and in Figure 8B onto the right side of the scene. Elements that are shown in Figure 7 are given the same reference numerals here. The apparatus 120 comprises a light source 122 (e.g. a VCSEL array). The light emitted by the light source 122 passes through one or more optical elements 128 (e.g. lenses, mirrors, diffraction gratings, etc.) before being emitted from the apparatus 120 and projecting onto a scene/object field 132. The apparatus 120 may comprise a receiver lens and filter system 134 and comprises a multi-pixel sensor 124 of an event camera for sensing reflected light. One or more of the optical elements 128 are coupled to an actuation mechanism 126. The actuation mechanism 126 is arranged to move the optical element 128 to which it is coupled. Figure 8A shows the optical elements 128 in their central or default position, which causes the emitted non-uniform illumination to project onto the centre of the scene 132 corresponding to the field of view of the sensor 124. Figure 8B shows how one of the optical elements 128 may be moved by the actuation mechanism 126 in order to move the non-uniform illumination to different areas of the scene 132. In the illustration, moving an optical element 128 to the left of the figure may cause the non-uniform illumination to be projected on the right side of the scene 132.
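A rough small-angle estimate of this shift-to-steering relationship follows; it is an assumption for illustration, since the actual steering depends on the optical design:

```python
import math

def beam_steer_angle_deg(element_shift_m, projector_focal_m):
    """Laterally shifting a projection optic by d relative to the
    source steers the emitted pattern by roughly atan(d / f)."""
    return math.degrees(math.atan2(element_shift_m, projector_focal_m))

# e.g. a 100 um shift with a 4 mm projector focal length:
print(beam_steer_angle_deg(100e-6, 4e-3))  # ~1.43 degrees
```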

Figure 9 illustrates a method of producing a 3D representation of a scene. Reference is made to the elements of apparatus 120 shown in Figures 7, 8A and 8B.

At step 140, illumination having a spatially non-uniform intensity over a field of view of the multi-pixel sensor 124 (of an event camera) is emitted by the light source 122.

At step 142, illumination reflected back from the scene is received by the sensor 124 and as a result, a change in intensity of the illumination received by a first set of the pixels of the multi-pixel sensor 124 is detected.

Subsequently, at step 144, the illumination is moved across at least part of the field of view. This is achieved by driving actuation mechanism 126 to move one or more moveable optical elements 128. The movement could be continuous and illumination continuously emitted during the movement such that the illumination is projected onto the scene while the actuation mechanism 126 drives movement of one or more optical elements 128. Alternatively, the illumination may be emitted non-continuously, e.g. only when one or more of the moveable optical elements 128 has reached the required position.

In either case, as a result of the movement, a different pattern of illumination is reflected by the scene and is received by the sensor 124. As a result, at step 146 a change in intensity of the illumination received by a second set of pixels of the multi-pixel sensor 124 (which is different to the first set of pixels) is detected. In this way, the event camera is used advantageously to detect movement of the illumination across the field of view (preferentially over detecting the scene itself). Any stationary elements of the scene are not detected by the event camera sensor 124 and accordingly, the illumination is more easily detected (as compared to a corresponding method using a standard frame camera).

At step 148, a 3D representation of the scene is generated based on the respective location on the multi-pixel sensor 124 of each pixel of the first and second sets of pixels. This may be done using a standard algorithm, for example. For example, the data may be combined using statistical techniques to generate a 3D representation of the scene.
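A minimal sketch of step 148 under simplifying assumptions follows; the nearest-neighbour dot matching and the triangulation constants are illustrative, not the algorithm of this disclosure:

```python
import numpy as np

def depth_from_event_sets(first_xy, second_xy, focal_px, baseline_m,
                          ref_shift_px):
    """Pair each dot's pre-movement event location with its nearest
    post-movement location, then convert the deviation of each dot's
    observed shift from a reference shift into a depth estimate by
    triangulation.  first_xy, second_xy: (N, 2) pixel coordinates."""
    first_xy = np.asarray(first_xy, dtype=float)
    second_xy = np.asarray(second_xy, dtype=float)
    depths = []
    for p in first_xy:
        q = second_xy[np.argmin(np.linalg.norm(second_xy - p, axis=1))]
        disparity = abs(np.linalg.norm(q - p) - ref_shift_px)
        depths.append(np.inf if disparity == 0
                      else focal_px * baseline_m / disparity)
    return np.array(depths)
```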

It will be appreciated that the illumination could be moved over the field of view any number of times (for example 3, 4, 5 or more), collecting data each time and using this data to generate the 3D representation.

The above-described actuator assemblies may comprise an SMA wire. The term 'shape memory alloy (SMA) wire' may refer to any element comprising SMA. The SMA wire may have any shape that is suitable for the purposes described herein. The SMA wire may be elongate and may have a round cross section or any other shape cross section. The cross section may vary along the length of the SMA wire. It is also possible that the length of the SMA wire (however defined) may be similar to one or more of its other dimensions. The SMA wire may be pliant or, in other words, flexible. In some examples, when connected in a straight line between two elements, the SMA wire can apply only a tensile force which urges the two elements together. In other examples, the SMA wire may be bent around an element and can apply a force to the element as the SMA wire tends to straighten under tension. The SMA wire may be beam-like or rigid and may be able to apply different (e.g. non-tensile) forces to elements. The SMA wire may or may not include material(s) and/or component(s) that are not SMA. For example, the SMA wire may comprise a core of SMA and a coating of non-SMA material. Unless the context requires otherwise, the term 'SMA wire' may refer to any configuration of SMA wire acting as a single actuating element which, for example, can be individually controlled to produce a force on an element. For example, the SMA wire may comprise two or more portions of SMA wire that are arranged mechanically in parallel and/or in series. In some arrangements, the SMA wire may be part of a larger piece of SMA wire. Such a larger piece of SMA wire might comprise two or more parts that are individually controllable, thereby forming two or more SMA wires.

It will be appreciated that there may be many other variations of the above-described examples.

The above description of embodiments is made by way of example only and various modifications and juxtapositions of the described features will occur to the skilled person. The above description is made for the purpose of illustration of embodiments of the invention and not limitation of the invention, which is defined in the appended claims. Also disclosed is the following:

1. A camera apparatus comprising: an event camera comprising a multi-pixel sensor and a lens assembly for focusing an image on the multi-pixel sensor; an actuation mechanism configured to cause relative movement between a first element of the camera apparatus and a second element of the camera apparatus; and control circuitry configured to drive the actuation mechanism to cause the relative movement to effect stabilisation of an image on the multi-pixel sensor.

2. A camera apparatus according to item 1, wherein the first element is a support structure and the second element comprises the multi-pixel sensor and the lens assembly.

3. A camera apparatus according to item 2, wherein the actuation mechanism is configured to tilt the sensor and the lens assembly about one or more axes which are perpendicular to the optical axis of the lens assembly.

4. A camera apparatus according to item 1, wherein the first element is a support structure and the second element comprises the multi-pixel sensor and optionally the lens assembly, wherein the actuation mechanism is configured to rotate the second element about the optical axis of the lens assembly or about an axis parallel to the optical axis.

5. A camera apparatus according to item 1, wherein the first element comprises the multi-pixel sensor and the second element comprises the lens assembly.

6. A camera apparatus according to any preceding item wherein the actuation mechanism comprises one or more SMA wires.

7. A camera apparatus according to any preceding item wherein the apparatus is provided on a handheld or wearable device.

8. A camera apparatus according to item 1, item 2 or any of items 4 to 7, wherein the relative movement is in a plane perpendicular to the optical axis of the lens assembly.

9. A camera apparatus according to any preceding item, wherein the actuation mechanism is also configured to cause relative movement of the sensor and the lens assembly along the optical axis.

10. A camera apparatus according to any preceding item wherein the control circuitry is configured to receive one or more signals from a first set of pixels and detect a moving object in a field of view of the event camera based on the one or more received signals.

11. A camera apparatus according to item 10, wherein the multi-pixel sensor is configured to detect the moving object only when a displacement of the image on the sensor from a first position to a second position exceeds a threshold value.

12. A method of controlling a camera apparatus, the camera apparatus comprising: an event camera comprising a multi-pixel sensor and a lens assembly for focusing an image on the multi-pixel sensor; and an actuation mechanism configured to cause relative movement between a first element of the camera apparatus and a second element of the camera apparatus, the method comprising: driving the actuation mechanism to cause the relative movement to effect stabilisation of an image on the multi-pixel sensor.

13. A method according to item 12, wherein the first element is a support structure and the second element comprises the multi-pixel sensor and the lens assembly.

14. A method according to item 13, wherein the actuation mechanism is configured to tilt the sensor and the lens assembly about one or more axes which are perpendicular to the optical axis of the lens assembly.

15. A method according to item 12, wherein the first element is a support structure and the second element comprises the multi-pixel sensor and optionally the lens assembly, wherein the actuation mechanism is configured to rotate the second element about the optical axis of the lens assembly or about an axis parallel to the optical axis.

16. A method according to item 12, wherein the first element comprises the multi-pixel sensor and the second element comprises the lens assembly.

17. A method according to any of items 12 to 16 wherein the actuation mechanism comprises one or more SMA wires.

18. A method according to any of items 12 to 17 wherein the apparatus is provided on a handheld or wearable device.

19. A method according to item 12, item 13 or any of items 15 to 18, wherein the relative movement is in a plane perpendicular to the optical axis of the lens assembly.

20. A method according to any of items 12 to 19, wherein the actuation mechanism is also configured to cause relative movement of the sensor and the lens assembly along the optical axis.

21. A method according to any of items 12 to 20 comprising receiving one or more signals from a first set of pixels of the sensor and detecting a moving object in a field of view of the event camera based on the one or more received signals.

22. A method according to item 21 comprising configuring the multi-pixel sensor to detect the moving object only when a displacement of the image on the sensor from a first position to a second position exceeds a threshold value.

23. A method according to item 22, wherein configuring the sensor comprises setting a parameter of the sensor.

24. A method according to any of items 21 to 23 comprising receiving one or more signals from a second set of pixels of the sensor, different to the first set of pixels, and sending an object-detection signal to a processor only when a distance between the first and second sets of pixels exceeds a threshold value.

25. A method comprising effecting stabilisation of an image on a multi-pixel sensor of an event camera.

26. A computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out the method of any of items 12 to 25.

27. A method of determining a three-dimensional representation of a scene using an event camera comprising a multi-pixel sensor, each pixel having a respective location on the sensor, the method comprising: emitting illumination having a spatially non-uniform intensity over a field of view of the multi-pixel sensor; detecting a change in intensity of the illumination received by a first set of the pixels of the multi-pixel sensor; moving the illumination across at least part of the field of view; detecting a change in intensity of the illumination received by a second set of the pixels of the multi-pixel sensor as a result of the movement, wherein the first set of pixels is different to the second set of pixels; and determining a three-dimensional representation of the scene based on the respective location on the multi-pixel sensor of each pixel of the first and second sets of pixels.

28. A method according to item 27, wherein moving the illumination comprises moving the illumination from a first position to a second position, wherein the illumination is moved continuously between the first and second positions.

29. A method according to item 27 or item 28, wherein determining a three-dimensional representation of the scene comprises generating the three-dimensional representation of the scene based on the respective location on the multi-pixel sensor of each pixel of the first and second sets of pixels.

30. A method according to item 27 or item 28, wherein determining a three-dimensional representation of the scene comprises selecting, from a set of pre-determined three-dimensional representations, a three-dimensional representation of the scene based on the respective location on the multi-pixel sensor of each pixel of the first and second sets of pixels.

31. A method according to item 27 or item 28, wherein moving the illumination comprises passing current through one or more SMA wires to contract the one or more SMA wires, thereby effecting movement of the illumination.

32. A computer program product comprising instructions which, when the program is executed by a processor, cause the processor to carry out the method of any of items 27 to 31.

33. An apparatus for determining a three-dimensional representation of a scene, the apparatus comprising: an event camera comprising a multi-pixel sensor, each pixel having a respective location on the sensor; an illumination assembly configured to emit illumination having a spatially non-uniform intensity over a field of view of the sensor; an actuation mechanism configured to move the illumination across at least part of the field of view; and control circuitry configured to: detect a change in intensity of the illumination received by a first set of the pixels of the sensor; drive the actuation mechanism to move the illumination across at least part of the field of view; detect a change in intensity of the illumination received by a second set of the pixels of the sensor as a result of the movement, wherein the first set of pixels is different to the second set of pixels; and determine a three-dimensional representation of the scene based on the respective location on the sensor of each pixel of the first and second sets of pixels.

34. An apparatus according to item 33, wherein the actuation mechanism is configured to move the illumination continuously across at least part of the field of view.

35. An apparatus according to item 33 or item 34, wherein the control circuitry is configured to generate a three-dimensional representation of the scene based on the respective location on the sensor of each pixel of the first and second sets of pixels.

36. An apparatus according to item 33 or item 34, wherein the control circuitry is configured to select, from a set of pre-determined three-dimensional representations, a three-dimensional representation of the scene based on the respective location on the sensor of each pixel of the first and second sets of pixels.

37. An apparatus according to any of items 33 to 36, wherein the actuation mechanism comprises one or more SMA wires.




 