

Title:
DETERMINING EXPOSURE PARAMETERS FOR IMAGING
Document Type and Number:
WIPO Patent Application WO/2022/167098
Kind Code:
A1
Abstract:
A method for determining an exposure parameter for imaging is disclosed. The method comprises detecting, using an event camera, a change in brightness within a field-of-view of a frame camera, and determining the exposure parameter for imaging using the frame camera based on the detected change in brightness.

Inventors:
MUUKKI MIKKO (SE)
BILCU RADU (SE)
Application Number:
PCT/EP2021/052945
Publication Date:
August 11, 2022
Filing Date:
February 08, 2021
Assignee:
HUAWEI TECH CO LTD (CN)
MUUKKI MIKKO (SE)
International Classes:
H04N5/225; H04N5/235; H04N5/247
Domestic Patent References:
WO2020071683A1 (2020-04-09)
Foreign References:
US20200137287A1 (2020-04-30)
US20150341576A1 (2015-11-26)
Attorney, Agent or Firm:
KREUZ, Georg (DE)
Claims:

1. A method for determining an exposure parameter for imaging, the method comprising: detecting, using an event camera, a change in brightness within a field-of-view of a frame camera; and determining the exposure parameter for imaging using the frame camera based on the detected change in brightness.

2. The method as claimed in claim 1, further comprising imaging using the frame camera based on the determined exposure parameter.

3. The method as claimed in claim 1 or claim 2, wherein the determining of the exposure parameter comprises determining an exposure time and/or a gain for imaging using the frame camera based on the detected change in brightness.

4. The method as claimed in any one of the preceding claims, wherein the detecting, using an event camera, the change in brightness within a field-of-view of the frame camera comprises detecting, using the event camera, changes in brightness at spatially different positions within the field-of-view of the frame camera, and the method further comprises: quantifying motion within the field-of-view of the frame camera based on the detected changes in brightness, wherein the determining an exposure parameter comprises determining an exposure parameter for imaging using the frame camera based on the quantity of the motion.

5. The method as claimed in claim 4, wherein the determining an exposure parameter for imaging using the frame camera based on the quantity of the motion comprises determining an acceptable level of image blur in imagery acquired by the frame camera, and determining a maximum exposure parameter suitable for maintaining image blur in imagery acquired by the frame camera below the acceptable level.

6. The method as claimed in any one of the preceding claims, comprising determining a further exposure parameter for imaging using the frame camera based on the determined exposure parameter.

7. The method as claimed in any one of the preceding claims, wherein the determining the exposure parameter comprises determining an exposure time parameter, and the method comprises: starting acquisition of an image frame using the frame camera based on an initial exposure time parameter with a rolling shutter whereby periods of exposure for different regions of an image sensor of the frame camera end at mutually different times, determining the exposure time parameter during acquisition of the image frame using the frame camera based on the initial exposure time parameter, and imaging using the frame camera based on the determined exposure time parameter in dependence on a magnitude of the determined exposure time relative to the initial exposure time and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter.

8. The method of claim 7, wherein the imaging using the frame camera based on the determined exposure time parameter comprises: determining that no period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended; determining that no period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determinations, continuing acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby any period of exposure of the image sensor of the frame camera based on the initial exposure time parameter that has elapsed that is shorter than the determined exposure time parameter is extended to the determined exposure time parameter.

9. The method of claim 7, wherein the imaging using the frame camera based on the determined exposure time parameter comprises: determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended, and/or determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determination, restarting acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby an entirety of the image frame is acquired using the determined exposure time parameter.

10. An optical imaging device comprising an exposure parameter determining entity for determining an exposure parameter for imaging, the exposure parameter determining entity being configured to: receive from an event camera a signal indicative of a change in brightness within a field- of-view of a frame camera, and determine an exposure parameter for imaging using the frame camera based on the signal indicative of a change in brightness within the field-of-view of the frame camera received from the event camera.

11. The optical imaging device as claimed in claim 10, wherein the exposure parameter determining entity is configured to control the frame camera for imaging using the frame camera based on the determined exposure parameter.

12. The optical imaging device as claimed in claim 10 or claim 11, wherein the determining an exposure parameter comprises determining an exposure time and/or a gain.

13. The optical imaging device as claimed in any one of claims 10 to 12, wherein the exposure parameter determining entity is configured to control the event camera to detect a change in brightness within a field-of-view of the frame camera and generate a signal indicative of a detected change in brightness within the field-of-view of the frame camera.

14. The optical imaging device as claimed in claim 13, wherein the exposure parameter determining entity is configured to: control the event camera to detect changes in brightness at spatially different positions within the field-of-view of the frame camera, and quantify motion within the field-of-view of the frame camera based on the detected changes in brightness, wherein the determining an exposure parameter comprises determining an exposure parameter for imaging using the frame camera based on the quantity of the motion.

15. The optical imaging device as claimed in claim 14, wherein the exposure parameter determining entity is configured to: determine an acceptable level of image blur in imagery acquired by the frame camera, and determine an exposure parameter suitable for maintaining image blur in imagery acquired by the frame camera below the acceptable level.

16. The optical imaging device as claimed in any one of claims 10 to 15, wherein the exposure parameter determining entity is configured to determine a further exposure parameter for imaging using the frame camera based on the determined exposure parameter.

17. The optical imaging device as claimed in any one of claims 10 to 16, wherein the determining the exposure parameter comprises determining an exposure time parameter, and the exposure parameter determining entity is configured to: start acquisition of an image frame using the frame camera based on an initial exposure time parameter with a rolling shutter whereby periods of exposure for different regions of an image sensor of the frame camera end at mutually different times, determine the exposure time parameter during acquisition of the image frame using the frame camera based on the initial exposure time parameter, and image using the frame camera based on the determined exposure time parameter in dependence on a magnitude of the determined exposure time relative to the initial exposure time and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter.

18. The optical imaging device as claimed in claim 17, wherein the imaging using the frame camera based on the determined exposure time parameter comprises: determining that no period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended, determining that no period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determinations, continuing acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby any period of exposure of the image sensor of the frame camera based on the initial exposure time parameter that has elapsed that is shorter than the determined exposure time parameter is extended to the determined exposure time parameter.

19. The optical imaging device as claimed in claim 17, wherein the imaging using the frame camera based on the determined exposure time parameter comprises: determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended, and/or determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determination, restarting acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby an entirety of the image frame is acquired using the determined exposure time parameter.

20. The optical imaging device of any one of claims 10 to 19, further comprising: a frame camera, and an event camera for detecting a change in brightness within a field-of-view of the frame camera and outputting a signal indicative of a change in brightness within the field-of-view, wherein the exposure parameter determining entity is in communication with each of the frame camera and the event camera.

21. The optical imaging device of claim 20, wherein the event camera is mechanically rigidly connected to the frame camera such that movement of the frame camera causes movement of the event camera.

22. The optical imaging device of claim 20 or claim 21, wherein image sensors of the frame camera and the event camera are co-located.

23. A computer program comprising machine-readable instructions which, when executed by a computer, cause the computer to carry out the method of any one of claims 1 to 9.

24. A computer-readable data carrier having the computer program of claim 23 stored thereon.

Description:
DETERMINING EXPOSURE PARAMETERS FOR IMAGING

Field of the Disclosure

The present disclosure relates to determining exposure parameters for imaging.

Background of the Disclosure

The brightness of an image acquired using a frame-based camera is a function of the camera exposure parameters, such as exposure time, gain and aperture size, and the brightness of the scene being imaged. For a given scene brightness, the exposure parameters of a frame-based camera may be adjusted to obtain a desired image brightness. In particular, low ambient light level imaging conditions may be compensated for by adjusting the exposure parameters to obtain a desirably high brightness image. For example, the image brightness may be increased by increasing the exposure time and/or increasing the gain. However, increasing the exposure time typically increases image blurring for a moving scene, and increasing the gain typically increases image noise. It is therefore desirable to take into account ambient light level conditions and/or movement of an imaged scene when determining exposure parameters for imaging using a frame-based camera.

Summary of the Disclosure

An objective of the present disclosure is to provide a method for determining exposure parameters for imaging using a frame camera based on scene dynamics. Determining exposure parameters based on scene dynamics may desirably result in improved image quality.

A first aspect of the present disclosure provides a method for determining an exposure parameter for imaging, the method comprising: detecting, using an event camera, a change in brightness within a field-of-view of a frame camera, and determining the exposure parameter for imaging using the frame camera based on the detected change in brightness.

In other words, the method involves determining one or more exposure parameters for acquiring images using a frame camera, for example, exposure time, gain and/or aperture size parameters, based on changes in brightness in the field-of-view of the frame camera, i.e., the scene imaged by the frame camera, detected using an event camera.
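
By way of illustration only, the shape of such a method can be sketched in a few lines of Python. Everything here is hypothetical: the event tuples, the one-second event window and the event-rate heuristic are assumptions made for the sketch, not details prescribed by the disclosure.

```python
def determine_exposure(brightness_events, current_exposure_s,
                       min_exposure_s=0.001):
    """Map brightness-change events detected by an event camera to an
    exposure time (seconds) for the frame camera: the more scene
    dynamics observed, the shorter the exposure, to limit motion blur."""
    if not brightness_events:
        return current_exposure_s          # static scene: keep exposure
    events_per_s = len(brightness_events)  # events in an assumed 1 s window
    return max(min_exposure_s, min(current_exposure_s, 1.0 / events_per_s))

# Three (x, y, t) events in the last second: exposure capped at ~333 ms.
print(determine_exposure([(1, 2, 0.1), (3, 4, 0.5), (5, 6, 0.9)], 0.5))
```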

Such changes in brightness in the imaged scene may be indicative of large-scale changes in scene brightness, i.e., ambient light levels, and/or motion in the scene. Both changes in scene brightness and motion in the scene are significant when determining exposure parameters, inasmuch as they may affect image brightness and/or image blur. For example, where a change in scene brightness occurs during a session of imaging using the frame camera, i.e., between temporally-separated image frames, it may be desirable to adjust exposure parameters used by the frame camera to compensate for the change in scene brightness, to thereby obtain images of a desired image brightness. Further, where there is motion in the scene, e.g., moving objects in the scene, it may be desirable to acquire images using exposure parameters which minimise image blur whilst maintaining a desired image brightness.

Determining exposure parameters used for imaging based on changes in scene brightness may thus mean that the exposure parameters take account of changes in scene brightness and/or motion in the scene. As a result, the quality of images acquired using the exposure parameters may be improved. For example, image brightness may be maintained approximately constant between temporally-separated image frames even for a scene with temporally-varying scene brightness, and/or image blur may be maintained at a desirably low level for a scene containing motion.

Such changes in the scene imaged by the frame camera, e.g., changes in scene brightness and/or motion in the scene, may be detectable using means such as a frame-based camera. However, frame cameras may produce full images at a fixed frame rate, e.g., 30 frames-per-second (fps). In the context of a fast-changing scene, e.g., a scene in which objects are moving quickly and/or in which scene brightness is rapidly changing, the delay in receiving frame data, e.g., 33 ms at 30 fps, means that exposure parameter determinations may typically lag frame acquisition by several frames. Thus, exposure parameters may be suboptimal, particularly where fast changes in the speed of motion in the scene or in scene brightness occur. Additionally, frame-based cameras may capture image data globally for an entire field-of-view, and continuously during a period of operation, with the result that data may be generated irrespective of whether any motion or change in brightness in the scene occurs. This may result in acquisition of a relatively large volume of image data, including redundant data unrelated to changes in brightness, correspondingly demanding a relatively high degree of computational resource to identify brightness changes/infer motion in the scene.

In the disclosure, however, changes in brightness in the imaged scene are detected using an event camera. The event camera may trigger events asynchronously across its pixels in response to a change in light intensity, i.e., a brightness change, at a given pixel exceeding a pre-set threshold. This has advantages in the present application. In particular, because event cameras sample light based on scene dynamics, rather than on a clock that has no relation to the viewed scene, event cameras may typically offer relatively high temporal resolution and low latency. Additionally, because event cameras only trigger events in response to brightness changes exceeding a threshold, rather than continuously, and/or because their pixels are triggered asynchronously by changes in brightness in the scene, they may advantageously generate less redundant data, thereby demanding lower computational resource to detect motion/brightness changes in the scene.

Such a relatively fast response time of an event camera to changes in brightness in the scene, and/or a reduction in the complexity, and thus time, involved in processing data generated by an event camera, may desirably allow relatively fast identification of changes in brightness in the scene, e.g., identification of changes in scene brightness and/or motion in the scene. Advantageously, this may allow exposure parameters for imaging using the frame camera to be set based on recent scene dynamics, and/or be adjusted relatively quickly to adapt to changes in the scene. Consequently, the quality of images acquired by the frame camera may be further improved. For example, image brightness may be maintained relatively constant between temporally-separated images of a scene, even for a scene with temporally-varying scene brightness, and/or image blur may be maintained relatively low even for a scene containing motion.

In an implementation, the method further comprises imaging using the frame camera based on the determined exposure parameter. In other words, the method may comprise an additional step of acquiring one or more images with the frame camera using the determined exposure parameters. In a simpler example, the method could instead involve outputting the determined exposure parameters to an external system functional to control a frame camera to acquire images using the determined exposure parameters.

In an implementation, the determining an exposure parameter comprises determining an exposure time and/or a gain for imaging using the frame camera based on the detected change in brightness. In other words, the method could involve determining one or both of an exposure time and a gain level to be applied for imaging based on the detected changes in brightness. Determining an exposure time may desirably allow control and/or reduction of blur in an image of a scene containing motion. Determining a gain level, e.g., an analog and/or digital gain level, may desirably allow control and/or reduction of noise in an image.

In an implementation, the detecting, using an event camera, the change in brightness within a field-of-view of the frame camera comprises detecting, using the event camera, changes in brightness at spatially different positions within the field-of-view of the frame camera, and the method further comprises: quantifying motion within the field-of-view of the frame camera based on the detected changes in brightness, wherein the determining an exposure parameter comprises determining an exposure parameter for imaging using the frame camera based on the quantity of the motion.

In other words, the method may involve quantifying motion of the scene relative to the event camera, e.g., quantifying motion of objects in the scene, based on detected changes in brightness occurring at spatially different positions of the field-of-view, which may be inferred to depict motion of an object in the scene.

Where the scene is moving relative to the event camera, e.g., where an object in the scene is moving, the movement may be expected to result in the triggering of brightness changes in the pixels of the event camera at different times, depending on the magnitude of the motion of the scene relative to the event camera, e.g., the speed of moving objects in the scene. The motion may thus be quantified, e.g., the speed of moving objects may be quantified, using known values of the inter-pixel distance between pixels of the event camera and the time difference between the events detected by those pixels.
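
As a minimal sketch of this quantification, assuming two events (x, y, t), in pixel and second units, triggered by the same moving edge at spatially different pixels:

```python
def estimate_speed_px_per_s(event_a, event_b):
    """Estimate apparent speed from two events assumed to be triggered
    by the same moving edge at spatially different pixels."""
    (xa, ya, ta), (xb, yb, tb) = event_a, event_b
    distance_px = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5  # inter-pixel distance
    dt = abs(tb - ta)                                       # time between events
    if dt == 0:
        return float("inf")  # simultaneous events: speed unresolvable
    return distance_px / dt

# Example: an edge crossing 5 pixels in 2 ms gives 2500 px/s.
print(estimate_speed_px_per_s((10, 20, 0.000), (13, 24, 0.002)))
```

In practice, associating events with a single moving edge would require clustering or optical-flow estimation; the pairing here is assumed given.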

Determining one or more exposure parameters based on the quantity of motion, i.e., the speed of motion of the scene relative to the event camera, for example, the speed of objects in the scene, may advantageously allow for determination of exposure parameters which best reduce image blur resulting from the motion. As a result, image blur in images acquired by the frame camera using the determined exposure parameters may be reduced.

In an implementation, the determining an exposure parameter for imaging using the frame camera based on the quantity of the motion comprises determining an acceptable level of image blur in imagery acquired by the frame camera, and determining a maximum exposure parameter suitable for maintaining image blur in imagery acquired by the frame camera below the acceptable level.

In other words, the method may involve determining an acceptable level of image blur, e.g., a maximum permitted level of blur in images acquired by the frame camera, and determining a maximum extent of an exposure parameter, e.g., an exposure time parameter, that still allows acquisition of images by the frame camera with no more than the acceptable level of blur. As a result, image blur in images acquired by the frame camera is advantageously maintained below the acceptable level. This may advantageously avoid acquisition of images that are unacceptably blurred, which may, for example, minimise the volume of storage incurred in storing image data corresponding to unacceptably blurred images. For example, the method may involve determining a maximum exposure time parameter that, based on the determined quantity of motion, will still allow acquisition of images using the frame camera with no more than the acceptable level of image blur. The determining an acceptable level of image blur may, for example, involve receiving, by a computing device involved in the determining an exposure parameter, an input value from an operator via a human-machine interface, or accessing a pre-defined blur value, e.g., a value stored in machine-readable memory accessible by such a computing device, defining an acceptable level of image blur.
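
Assuming, for illustration, that motion blur in pixels scales as apparent speed multiplied by exposure time, the maximum exposure time respecting the acceptable blur level follows directly (a sketch; the linear blur model is an assumption, not part of the disclosure):

```python
def max_exposure_for_blur(speed_px_per_s, acceptable_blur_px):
    """Longest exposure time (s) keeping motion blur within the acceptable
    level, under the model blur_px ~= speed_px_per_s * exposure_s."""
    if speed_px_per_s <= 0:
        return float("inf")  # static scene: exposure not blur-limited
    return acceptable_blur_px / speed_px_per_s

# At 2500 px/s, an acceptable blur of 2 px allows at most 0.8 ms exposure.
print(max_exposure_for_blur(2500.0, 2.0))
```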

In an implementation, the method comprises determining a further exposure parameter for imaging using the frame camera based on the determined exposure parameter. In other words, the method may involve determining a further, i.e., an additional, exposure parameter that complements the previously determined exposure parameter. This may advantageously result in further improved image quality. For example, where an exposure time parameter has previously been determined, the method may further involve determining a gain parameter and/or an aperture size parameter that complements the exposure time parameter.

In an implementation, the determining the exposure parameter comprises determining an exposure time parameter, and the method comprises: starting acquisition of an image frame using the frame camera based on an initial exposure time parameter with a rolling shutter whereby periods of exposure for different regions of an image sensor of the frame camera end at mutually different times, determining the exposure time parameter during acquisition of the image frame using the frame camera based on the initial exposure time parameter, and imaging using the frame camera based on the determined exposure time parameter in dependence on a magnitude of the determined exposure time relative to the initial exposure time and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter.

In other words, the method may involve operating the frame camera to image using a rolling shutter method, whereby periods of exposure for different sensing regions of an image sensor of the frame camera, e.g., different sub-groups of photosensors of the image sensor, are staggered such that the exposure periods start and end at mutually different times. This mode of operation may advantageously allow for readout of the acquired image data directly via a relatively low-bandwidth readout channel, thereby avoiding intermediate storage of the acquired image data.

The rolling shutter mode of operation, i.e., the staggering of exposure times of regions of the image frame, may however disadvantageously increase the overall time incurred in acquiring an image frame. As a result of the increased frame acquisition time, the initial exposure parameters employed for acquiring the image frame, and in particular an initial exposure time parameter, may be relatively old by the time of exposure of later-exposed regions of the image sensor. In the context of a dynamic scene, this may result in sub-optimal image quality. For example, where the scene contains objects having fast-changing speeds of motion, the initial exposure parameters, e.g., an initial exposure time parameter, may be inappropriately long for exposure of later-exposed regions of the image sensor, thereby resulting in an unacceptably high level of image blurring.

The method thus provides for determining an exposure time parameter, based on changes in brightness detected by the event camera, during acquisition of an image frame by the frame camera based on the initial exposure time parameter, i.e., midway through acquisition of the image frame, and subsequently imaging using the newly determined exposure time parameter. The newly determined exposure time parameter may be more appropriate for acquiring all or part of the image frame, and the method may thus advantageously result in improved image quality, e.g., a reduction in image blurring.

A risk exists, however, in using the newly determined exposure time parameter for acquiring the ongoing frame, inasmuch as such use may result in corruption of the image frame caused by different regions of the image sensor being exposed for mutually different times. Such differences in exposure times may cause spatially-varying image brightness, which may be considered an undesirable image characteristic.

The disclosed method, however, may advantageously avoid such frame corruption by imaging using the newly determined exposure time parameter depending on a magnitude of the newly determined exposure time relative to the initial exposure time, and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter. In other words, the method may involve selectively employing the newly determined exposure time parameter, taking into account the length of the newly determined exposure time parameter relative to the initial exposure time parameter, and the time at which the newly determined exposure time parameter is determined in the course of the acquisition of the ongoing image frame. By taking these factors into account when determining whether or not to employ the newly determined exposure time parameter, image frame corruption resulting from spatially-varying exposure times may be avoided. Thereby, the quality of acquired image frames may be improved.

In examples, the initial exposure time parameter may be an exposure time parameter determined by performance of the method disclosed hereinbefore. In other words, the initial exposure time parameter may be an exposure time that has been determined at a previous time-point by the presently disclosed method based on changes in scene brightness detected using an event camera. The determining the exposure time parameter during acquisition of the image frame may thus represent a repetition of the method at a relatively later time-point. In other words, in examples, the method for determining an exposure time parameter may be performed repeatedly in the course of acquisition of an image frame, such that an exposure time parameter used for imaging may be current at a sub-frame level, i.e., the exposure time parameter may be updated during the course of acquisition of an image frame.

In an implementation, the imaging using the frame camera based on the determined exposure time parameter comprises: determining that no period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended, determining that no period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determinations, continuing acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby any period of exposure of the image sensor of the frame camera based on the initial exposure time parameter that has elapsed that is shorter than the determined exposure time parameter is extended to the determined exposure time parameter.

In other words, the method may involve updating an exposure time parameter in respect of an ongoing image frame, by starting acquisition of the image frame based on the initial exposure time parameter, and continuing acquisition of the image frame based on the determined exposure time parameter. This updating of the exposure time parameter during acquisition of the image frame may advantageously result in an exposure time parameter that is more appropriate to the extant scene dynamics, and thus in improved image quality. A risk may exist in such a mode of operation, inasmuch as updating the exposure time parameter during acquisition of the image frame may undesirably result in frame corruption, i.e., different regions of the image sensor being exposed for mutually different periods of time, thereby potentially resulting in spatially-varying image brightness. Such a situation could, for example, arise where a period of exposure of a region of the image sensor based on the initial exposure time period has ended, and/or where a period of exposure of a region of the image sensor has already elapsed which is longer than the newly determined exposure time parameter.

By determining that no period of exposure of the image sensor based on the initial exposure time parameter has ended, i.e., that the initial exposure time period has not elapsed for any region of the image sensor and that readout of the acquired image data for that region has not commenced, and that no period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and employing the newly determined exposure time parameter for continued acquisition of the image frame based on such determinations, the risk of frame corruption may be avoided.

Moreover, by extending any elapsed periods of exposure of the image sensor based on the initial exposure time parameter that are shorter than the newly determined exposure time parameter, so that they equal the newly determined exposure time parameter, the already-elapsed periods of exposure may still be utilised for acquiring the image frame using the newly determined exposure parameters. Thereby, the overall time incurred for frame acquisition may be minimised.

In an implementation, the imaging using the frame camera based on the determined exposure time parameter comprises determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended, and/or determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determination, restarting acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby an entirety of the image frame is acquired using the determined exposure time parameter.

Where a period of exposure of the image sensor has already ended, i.e., the image data readout for the corresponding region of the image frame has already begun, and/or a period of exposure of the image sensor has already elapsed that is longer than the newly determined exposure time parameter, usage of the newly determined exposure time parameter for acquiring a remainder of the frame may risk different regions of the image sensor being exposed for mutually different exposure times, which may undesirably result in spatially-varying image brightness.

The disclosed method avoids this risk: upon determining that a period of exposure for the image frame based on the initial exposure time has ended, or that a period of exposure for the image frame has elapsed that is longer than the newly determined exposure time parameter, the method involves restarting acquisition of the entire image frame using the newly determined exposure time parameter. For example, acquisition of the image frame using the initial exposure time parameter may be ceased, the acquired image data discarded, and frame acquisition restarted using the newly determined exposure time parameter. Image quality may thereby be improved, whilst frame corruption is avoided.
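
The choice between the two modes described above, continuing the ongoing frame or restarting it, might be expressed as in the following sketch; the per-row elapsed exposure times and the readout flag are hypothetical inputs chosen for illustration:

```python
def apply_new_exposure_midframe(new_exp_s, elapsed_s_per_row, any_row_ended):
    """Decide, mid-frame, whether a newly determined exposure time can be
    applied to the ongoing rolling-shutter frame or the frame must restart.

    elapsed_s_per_row: exposure time already elapsed for each sensor row.
    any_row_ended: True if any row's exposure has ended (readout begun).
    """
    if any_row_ended or any(e > new_exp_s for e in elapsed_s_per_row):
        # Some region is already exposed longer than the new time, or its
        # data is being read out: restart the whole frame to avoid
        # spatially-varying brightness (frame corruption).
        return "restart"
    # No row has finished and none exceeds the new exposure time:
    # continue the frame, extending every row's exposure to new_exp_s.
    return "continue"

print(apply_new_exposure_midframe(0.008, [0.004, 0.005, 0.006], False))  # continue
print(apply_new_exposure_midframe(0.004, [0.004, 0.005, 0.006], False))  # restart
```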

A second aspect of the present disclosure provides an optical imaging device comprising an exposure parameter determining entity for determining an exposure parameter for imaging, the exposure parameter determining entity being configured to: receive from an event camera a signal indicative of a change in brightness within a field-of-view of a frame camera, and determine an exposure parameter for imaging using the frame camera based on the signal indicative of a change in brightness within the field-of-view of the frame camera received from the event camera.

In an implementation, the exposure parameter determining entity is configured to control the frame camera for imaging using the frame camera based on the determined exposure parameter.

In an implementation, the determining an exposure parameter comprises determining an exposure time and/or a gain.

In an implementation, the exposure parameter determining entity is configured to control the event camera to detect a change in brightness within a field-of-view of the frame camera and generate a signal indicative of a detected change in brightness within the field-of-view of the frame camera.

In an implementation, the exposure parameter determining entity is configured to: control the event camera to detect changes in brightness at spatially different positions within the field-of-view of the frame camera, and quantify motion within the field-of-view of the frame camera based on the detected changes in brightness, wherein the determining an exposure parameter comprises determining an exposure parameter for imaging using the frame camera based on the quantity of the motion.

In an implementation, the exposure parameter determining entity is configured to: determine an acceptable level of image blur in imagery acquired by the frame camera, and determine an exposure parameter suitable for maintaining image blur in imagery acquired by the frame camera below the acceptable level.

In an implementation, the exposure parameter determining entity is configured to determine a further exposure parameter for imaging using the frame camera based on the determined exposure parameter.

In an implementation, the determining the exposure parameter comprises determining an exposure time parameter, and the exposure parameter determining entity is configured to: start acquisition of an image frame using the frame camera based on an initial exposure time parameter with a rolling shutter whereby periods of exposure for different regions of an image sensor of the frame camera end at mutually different times, determine the exposure time parameter during acquisition of the image frame using the frame camera based on the initial exposure time parameter, and image using the frame camera based on the determined exposure time parameter in dependence on a magnitude of the determined exposure time relative to the initial exposure time and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter.

In an implementation, the imaging using the frame camera based on the determined exposure time parameter comprises: determining that no period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended; determining that no period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determinations, continuing acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby any period of exposure of the image sensor of the frame camera based on the initial exposure time parameter that has elapsed that is shorter than the determined exposure time parameter is extended to the determined exposure time parameter.

In an implementation, the imaging using the frame camera based on the determined exposure time parameter comprises determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has ended, and/or determining that a period of exposure of the image sensor of the frame camera based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, and in response to such determination, restarting acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby an entirety of the image frame is acquired using the determined exposure time parameter.

In an implementation, the optical imaging device may further comprise a frame camera, and an event camera for detecting a change in brightness within a field-of-view of the frame camera and outputting a signal indicative of a change in brightness within the field of view, wherein the exposure parameter determining entity is in communication with each of the frame camera and the event camera.

In an implementation, the event camera is mechanically rigidly connected to the frame camera such that movement of the frame camera causes movement of the event camera.

Rigidly connecting the event camera to the frame camera has the result that movement of the frame camera causes movement of the event camera. As a result, the event camera may detect relative motion between the frame camera and the imaged scene resulting from movement of the frame camera. For example, in the event that the frame camera is adapted to be hand-held, the event camera may detect movement of the frame camera resulting from shaking of a user’s hand. For example, the event camera and the frame camera may be incorporated in a same handheld assembly.

In an implementation, image sensors of the frame camera and the event camera are co-located.

An advantage of the co-location of image sensors of the frame camera and the event camera is that the image sensors may be expected to be exposed to light emanating from the imaged scene in a substantially same location, e.g., at a same angle of reflection. As a result, the perspective of the event camera may be substantially the same as the perspective of the frame camera, and consequently the event camera may be most likely to detect changes of brightness within the field-of-view of the frame camera.

A third aspect of the present disclosure provides a computer program comprising machine-readable instructions which, when executed by a computer, cause the computer to carry out the method of any implementation of the first aspect of the present disclosure.

A fourth aspect of the present disclosure provides a computer-readable data carrier having the computer program of any implementation of the third aspect of the present disclosure stored thereon.

The foregoing and other objectives are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the Figures.

These and other aspects of the invention will be apparent from the embodiment(s) described below.

Brief Description of the Drawings

In order that the present invention may be more readily understood, embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 shows schematically an example of an optical imaging device embodying an aspect of the disclosure, comprising a frame camera, an event camera, and an exposure parameter determining entity for determining exposure parameters for imaging using the frame camera;

Figures 2A and 2B show schematically first and second examples of image sensors of the frame camera and the event camera;

Figure 3 shows schematically an example of the exposure parameter determining entity identified with reference to Figure 1;

Figure 4 shows example processes involved in imaging using the optical imaging device identified with reference to Figure 1, which includes a process of setting exposure parameters for imaging using the frame camera;

Figure 5 shows example processes involved in the process of setting exposure parameters for imaging, which includes processes of determining exposure parameters;

Figure 6 shows example processes involved in the process of determining exposure parameters, which includes a process of determining a first exposure parameter;

Figure 7 shows example processes involved in the determining a first exposure parameter;

Figure 8 shows schematically example processes involved in the imaging using the frame camera based on the determined exposure parameters;

Figure 9 shows example processes involved in the setting exposure parameters and acquiring images using the frame camera; and

Figure 10 shows example processes involved in the imaging using the frame camera.

Detailed Description of the Disclosure

Referring firstly to Figure 1, in examples, an optical imaging device 101 embodying an aspect of the present disclosure comprises a frame camera 102, an event camera 103, an exposure parameter determining entity 104, input/output interface 105, and communication link 106.

Frame camera 102 is operable to optically image a scene within its field-of-view. For example, frame camera 102 may comprise an image sensor, such as a complementary metal-oxide-semiconductor sensor or a charge-coupled device, comprising a plurality of pixels for converting incident light into electrical signals. Frame camera 102 may further comprise computational resource, such as a computer processor, for controlling the operation of the image sensor, e.g., for controlling a mechanical or electronic shutter of the image sensor, and/or for receiving and processing electrical signals output by the image sensor, e.g., for applying gain to the signals, and/or for converting the signals between analogue and digital domains. Frame camera 102 may further comprise storage for storing image data corresponding to images acquired by the image sensor.

Event camera 103 is operable to detect changes in brightness within the field-of-view of the frame camera 102. Event camera 103 comprises an image sensor comprising a pixel array, responsive to changes in brightness in the field-of-view of the event camera asynchronously and independently for each pixel. Each pixel memorizes the light intensity each time it sends an event, and continuously monitors for a change in intensity of sufficient magnitude from this memorized value. When the change exceeds a threshold, the camera triggers an event: a signal defining the x, y location of the event, the time t, and the 1-bit polarity of the change (i.e., brightness increase (“ON”) or decrease (“OFF”)). Thus, the output of the event camera 103 is a variable data-rate sequence of digital “events” or “spikes”, with each event representing a change of brightness of predefined magnitude at a pixel at a particular time.
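
As a toy model of this per-pixel behaviour (the logarithmic intensity signal and the threshold value are illustrative assumptions, and the pixel's x, y location is omitted for brevity):

```python
def pixel_events(log_intensities, times, threshold=0.2):
    """Model of a single event-camera pixel: emit (t, polarity) whenever
    the log intensity moves by more than `threshold` from the last
    memorized value; polarity is +1 (ON) or -1 (OFF)."""
    events = []
    memorized = log_intensities[0]
    for log_i, t in zip(log_intensities[1:], times[1:]):
        delta = log_i - memorized
        if abs(delta) > threshold:
            events.append((t, 1 if delta > 0 else -1))
            memorized = log_i  # memorize the intensity at the event
    return events

# A step increase in brightness yields a single ON event at t=2.
print(pixel_events([0.0, 0.05, 0.5, 0.5], [0, 1, 2, 3]))  # [(2, 1)]
```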

In use, event camera 103 is located and oriented relative to the frame camera 102 such that the event camera 103 is operable to detect changes in brightness of at least a region of the field-of-view of the frame camera 102, and optionally such that the event camera 103 is operable to detect changes in brightness of substantially a whole field-of-view of the frame camera 102. In other words, event camera 103 may be configured in use to image at least a part of, and optionally substantially all of, a field-of-view of the frame camera 102.

Exposure parameter determining entity 104 is operable to determine one or more exposure parameters, e.g., exposure time and/or gain, for imaging using the frame camera 102. In particular, as will be described in further detail with reference to later Figures, exposure parameter determining entity 104 is operable to detect, using the event camera 103, changes in brightness within the field-of-view of the frame camera 102, and determine exposure parameters for further imaging using the frame camera based on the detected changes in brightness.

Input/output interface 105 is provided for communication of the optical imaging device 101 with external systems. For example, input/output interface 105 may allow coupling of the optical imaging device 101 to a system supporting a human-machine interface device, to allow control of the optical imaging device 101 by an operator.

The components 102 to 105 of the optical imaging device 101 are in communication via communication link 106. As described below, in examples, communication link 106 may comprise a system bus. In other examples, communication link 106 may comprise a network.

Thus, as will be described in further detail herein, in examples, frame camera 102 is operable to acquire optical images, in accordance with imaging exposure parameters determined by exposure parameter determining entity 104. Exposure parameter determining entity 104 is functional to detect changes in brightness within a field-of-view of the frame camera 102 using the event camera 103. Such changes in brightness may be taken to indicate motion of the scene contained in the field-of-view of the frame camera 102 and/or changes in the ambient light level of the imaged scene. Both motion in the scene and changes in ambient light level of the scene are significant factors when determining exposure parameters, for example, exposure time, gain and/or aperture size, for imaging using the frame camera 102. Such motion of the imaged scene, or more generally relative motion between the frame camera and objects in the imaged scene, could be local motion, resulting from movement of objects contained in the scene, or could be global motion resulting from movement of the frame camera relative to the imaged scene, e.g., resulting from shaking of a user’s hand in the event that the frame camera is hand-held.

For example, for a given desired image brightness, where motion in the scene imaged by the frame camera 102, or more particularly relative motion between the frame camera 102 and the imaged scene, is low, e.g., where objects in the scene are static relative to the frame camera 102, it may be acceptable to utilise a relatively long exposure time for imaging using the frame camera 102, as the risk of image blurring may be expected to be relatively low. Such a long exposure time may desirably allow a gain factor involved in imaging using the frame camera 102 to be maintained relatively low, whilst still achieving the required image brightness. For example, one or both of an analog or digital gain level of the image sensor of the frame camera may be maintained at a relatively low level. Additionally, or alternatively, a digital gain applied during processing in the digital domain of imagery acquired by the frame camera may be maintained relatively low. A low gain factor may desirably reduce image noise. In contrast, where relatively fast motion is detected by the event camera in the scene imaged by the frame camera, e.g., where objects in the scene are moving quickly, it may be desirable to utilise a shorter exposure time, to thereby reduce image blurring, whilst applying a higher gain factor to achieve the required image brightness. Similarly, where the ambient brightness level is relatively high, e.g., where a change in brightness of the ambient light level has not been detected by the event camera 103 after a baseline light level reading is taken, it may be determined that the gain factor may be maintained relatively low, thereby desirably minimising image noise.
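
This trade-off can be made concrete with a simple model in which image brightness is proportional to scene brightness, exposure time and gain; the model and all values below are assumptions for illustration only:

```python
def exposure_and_gain(target_brightness, scene_brightness, max_exposure_s):
    """Choose exposure time and gain for a target image brightness, under
    the model image_brightness ~= scene_brightness * exposure_s * gain.
    Prefer long exposure (low noise) up to the motion-limited maximum;
    make up any shortfall with gain, at the cost of noise."""
    needed_exposure_s = target_brightness / scene_brightness  # at unity gain
    exposure_s = min(needed_exposure_s, max_exposure_s)
    gain = target_brightness / (scene_brightness * exposure_s)
    return exposure_s, gain

# Little motion: the needed 10 ms exposure is permitted, so gain stays 1.0.
print(exposure_and_gain(1.0, 100.0, 0.033))  # (0.01, 1.0)
# Fast motion caps exposure at 1 ms: gain rises to 10 to compensate.
print(exposure_and_gain(1.0, 100.0, 0.001))  # (0.001, 10.0)
```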

Such changes in the scene imaged by the frame camera, e.g., changes in brightness resulting from changes in ambient light level and/or motion in the scene, may be detectable using means such as a frame-based camera. However, frame cameras produce full images at a fixed frame rate, e.g., 30 frames-per-second (fps). In the context of a fast-changing scene, e.g., a scene in which objects are moving quickly and/or in which ambient light levels are rapidly fluctuating, the delay in receiving frame data, e.g., 33 ms at 30 fps, means that exposure parameter calculations may typically lag frame acquisition by several frames. Thus, exposure settings are likely to be suboptimal, particularly where fast changes in the speed of motion in the scene or in scene brightness occur. Additionally, because frame-based cameras capture image data globally for an entire field-of-view, and continuously during a period of operation rather than selectively depending on scene dynamics, data is generated irrespective of whether any motion or change in brightness in the scene occurs. This may result in acquisition of a relatively large volume of image data, correspondingly demanding a relatively high degree of computational resource to identify brightness changes/motion in the scene.

In comparison, event camera 103 triggers events asynchronously across its pixels in response to a change in light intensity, i.e., a brightness change, at a given pixel exceeding a preset threshold. This has advantages in the present application. In particular, because event cameras sample light based on scene dynamics, rather than on a clock that has no relation to the viewed scene, event cameras may typically offer relatively high temporal resolution and low latency. Additionally, because event cameras only trigger events in response to brightness changes exceeding a threshold, rather than continuously, they may advantageously generate less redundant data, thereby demanding lower computational resource to detect motion/brightness changes in the scene. Additionally, because event cameras report an x, y location of a detected event, i.e., a location of the triggering change in the field-of-view of the event camera, the location of a brightness change (e.g., motion) in the scene may be determined.

In the present application, as will be described in detail herein, a particular advantage of employing an event camera for detecting brightness changes in the field-of-view of the frame camera is that, due to the relatively fast response time of the event camera to scene changes, and/or the reduction in complexity for processing event camera data, exposure parameters for imaging using the frame camera may be adjusted relatively quickly to adapt to changes in the scene, thereby advantageously improving image quality. In examples, the relatively fast response time of the event camera/reduction in complexity of data processing may even facilitate updating of exposure parameters for imaging midway through acquisition of an image frame by the frame camera, such that exposure parameters may be optimised even in the course of a single frame.

In examples, optical imaging device 101 has a unitary structure, in which the components 102 to 105 are co-located in a single unit. In examples, the optical imaging device 101 could comprise a hand-held unit, in which two or more of the components 102 to 105 are located inside a single casing. In examples, optical imaging device 101 could take the form of a hand-held camera or a smartphone. More generally, in examples, the event camera 103 is mechanically rigidly connected to the frame camera 102. Such a mechanical connection has the effect that movement of the frame camera 102 causes movement of the event camera 103, such that the event camera 103 may thereby detect global motion of the scene imaged by the frame camera 102 resulting from movement of the frame camera relative to the imaged scene. Such movement of the frame camera could result from shaking of an operator’s hand in the case where the frame camera is configured to be hand-held. For example, the event camera 103 and frame camera 102 may be mounted to a common rigid structure, e.g., a common casing of a hand-held unit. In other examples, a rigid mechanical linkage could be provided for transferring force between the frame camera 102 and the event camera 103. In such examples, in which two or more of the components 102 to 105 are co-located, communication link 106 may comprise a system bus for communication between the co-located components.

In other examples, one or more of the components 102 to 105 of optical imaging device 101 may be located remotely of one or more other of the components. For example, frame camera 102 and/or event camera 103 and/or exposure parameter determining entity 104 may be located in mutually different locations, and may communicate via communication link 106, which could, in examples, be a network implemented, for example, by a wide area network (WAN) such as the Internet, a local area network (LAN), a metropolitan area network (MAN), and/or a personal area network (PAN), etc. Such a network could be implemented using wired technology (e.g., Ethernet, Data Over Cable Service Interface Specification (DOCSIS), synchronous optical networking (SONET), and/or synchronous digital hierarchy (SDH), etc.) and/or wireless technology (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), Bluetooth, ZigBee, near-field communication (NFC), and/or Long-Term Evolution (LTE), etc.). The network may include at least one device for communicating data in the network. For example, the network may include computing devices, routers, switches, gateways, access points, and/or modems.

Referring next to Figures 2A and 2B, in examples, each of the frame camera 102 and the event camera 103 comprises an image sensor, 201 and 202 respectively.

In examples, the respective image sensors of the frame camera 102 and the event camera 103 may be co-located. For example, a plurality of photosensors may be arranged to form an array, a subset of the photosensors may form an image sensor of the frame camera, and a further subset of the photosensors may form an image sensor of the event camera. In the examples of Figures 2A and 2B, in which the respective image sensors are co-located, the image sensors each comprise a plurality of photosensors, for example, photo-diodes, mounted to a surface of a rigid substrate 203.

Referring firstly to Figure 2A, in this example a first image sensor 201, utilised by frame camera 102, comprises a first plurality of photosensors, such as photosensor 204, and a second image sensor 202, utilised by event camera 103, comprises a second plurality of photosensors, such as photosensor 205, and the second plurality of photosensors of the second image sensor 202 is located adjacent the first plurality of photosensors of the first image sensor 201.

Referring next to Figure 2B, in another example configuration the photosensors 204, 205 of the respective image sensors are arranged such that photosensors 204 of the first image sensor 201 are interspersed with photosensors 205 of the second image sensor 202.

An advantage of the co-location of image sensors of the frame camera and the event camera is that the image sensors may be expected to be exposed to light emanating from the scene in a substantially same location, e.g., at a same angle of reflection. As a result, the perspective of the event camera may be substantially the same as the perspective of the frame camera, and consequently the event camera may be most likely to detect changes of brightness within the field-of-view of the frame camera. In the examples depicted in Figures 2A and 2B, the image sensors each comprise four photosensors. In other examples, either or both of the image sensors may comprise more or fewer photosensors. For example, either or both of the image sensors may comprise several million photosensors. In a further example, the image sensors may share one or more photosensors, such that shared photosensors are dual-functional, forming a part of both the frame camera and the event camera.
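
For illustration, an interspersed layout in the spirit of Figure 2B could assign photosensors in a checkerboard pattern; the exact interspersal pattern is an assumption, as the disclosure does not specify one:

```python
def interleaved_layout(rows, cols):
    """Checkerboard assignment of photosensors to the frame camera ('F')
    and the event camera ('E'), illustrating an interspersed layout."""
    return [["F" if (r + c) % 2 == 0 else "E" for c in range(cols)]
            for r in range(rows)]

for row in interleaved_layout(4, 8):
    print(" ".join(row))
```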

Referring next to Figure 3, in examples, exposure parameter determining entity 104 comprises processor 301, storage 302, memory 303, and input/output interface 304. The exposure parameter determining entity 104 is configured to run a computer program for determining exposure parameters for imaging using the frame camera 102.

Processor 301 is configured for execution of instructions of a computer program. Storage 302 is configured for non-volatile storage of computer programs for execution by the processor 301. In examples, the computer program for determining exposure parameters for imaging using the frame camera 102 is stored in storage 302. Memory 303 is configured as read/write memory for storage of operational data associated with computer programs executed by the processor 301. Input/output interface 304 is provided for connecting exposure parameter determining entity 104 to communication link 106 of the optical imaging device 101. The components 301 to 304 of the exposure parameter determining entity 104 are in communication via communication link 305.

Referring next particularly to Figure 4, in examples, optical imaging device 101 is configured to perform an imaging procedure for imaging using the frame camera comprising three stages. In examples, the imaging procedure may be controlled by exposure parameter determining entity 104, in accordance with a computer program stored in storage 302.

At stage 401, the optical imaging device 101 initiates an imaging procedure. For example, the optical imaging device 101 may initiate the imaging procedure in response to receiving a prompt from a human-machine interface, or other input system, coupled to input/output interface 304. Stage 401 may, for example, involve the frame camera 102, event camera 103, and/or exposure parameter determining entity 104 being powered-on, and/or performing self-tests.

At stage 402, the exposure parameter determining entity 104 is configured to set exposure parameters, e.g., an exposure time and/or gain and/or aperture size parameter, for imaging using the frame camera, in accordance with instructions of the computer program stored in storage 302. As will be described, in examples, the exposure parameter determining entity 104 functions as a controller for controlling the operation of the frame camera 102 and the event camera 103, in accordance with the computer program.

At stage 403, the frame camera 102 is controlled, e.g., by the exposure parameter determining entity 104 in accordance with the computer program stored in storage 302, to acquire one or more image frames using the exposure parameters and mode of operation set at stage 402. In examples, stages 402 and 403 of the imaging procedure may be performed repetitively, such that repeated imaging using the frame camera 102 may occur, wherein between each instance of acquiring images the exposure parameters and mode of operation may be updated.
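By way of illustration only, the repetition of stages 402 and 403 may be sketched as a simple control loop; the function names below are hypothetical placeholders for the operations described above, not an actual camera API.

```python
# Hypothetical stand-ins for the operations described in the text; an
# actual implementation would drive real camera hardware.

def set_exposure_parameters():
    # Stage 402: would evaluate event-camera output to choose parameters.
    return {"exposure_time_s": 0.010, "gain": 2.0}

def acquire_frame(params):
    # Stage 403: would control the frame camera using the parameters.
    print("acquiring frame with", params)

for _ in range(3):  # stages 402 and 403 performed repetitively
    acquire_frame(set_exposure_parameters())
```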

Referring next particularly to Figure 5, in examples, the method of stage 402 for setting exposure parameters for imaging using the frame camera 102 comprises three stages. In examples, the method of stage 402 is implemented by the processor 301 of exposure parameter determining entity 104, in accordance with instructions of the computer program stored in storage 302.

At stage 501, the computer program stored in storage 302 causes the processor 301 to control the event camera 103 to detect changes in brightness of the scene in the field-of-view of the event camera 103. As previously described, the event camera 103 is in use configured such that its field-of-view includes at least a region of the field-of-view of the frame camera 102. Thus, by the process of stage 501, the event camera 103 may detect changes in brightness within a field-of-view of the frame camera 102.

In examples, stage 501 could involve controlling the event camera 103 to detect changes in brightness at a single time step, compared to reference light intensity values stored in event camera 103. In other words, stage 501 could involve a single instance of detecting changes in brightness using the event camera. By such a method, it may be possible to detect changes in brightness, e.g., changes in scene illumination, compared to reference baseline intensity values. In other examples, stage 501 could involve controlling the event camera 103 to detect changes in brightness at plural time steps. In other words, stage 501 could involve performing in quick succession multiple instances of detecting changes in brightness using the event camera. By such a method, as will be described in further detail with reference to Figure 6, it may be possible to detect/infer motion of objects in the scene, or more generally relative motion between the scene and the event camera, by tracking a spatially varying pattern of brightness changes.

Where brightness changes exceeding one or more predetermined thresholds are detected, pixels of the event camera 103 will be triggered, and the output signal(s) will comprise a stream of asynchronous events, where each event comprises spatiotemporal coordinates and a polarity value indicating whether the brightness has increased or decreased by a magnitude exceeding a predefined threshold. Such signals may thus be output by the event camera 103 via the communication link 106.
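By way of illustration only, such an asynchronous event stream may be sketched as follows; the record layout, the intensity units, and the threshold value are assumptions for illustration rather than a format prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One asynchronous event: spatiotemporal coordinates and a polarity."""
    x: int        # pixel column
    y: int        # pixel row
    t_us: int     # timestamp in microseconds
    polarity: int # +1 brightness increased, -1 brightness decreased

def maybe_emit_event(x, y, t_us, old_intensity, new_intensity, threshold=0.15):
    """Emit an Event only if the relative brightness change at a pixel
    exceeds the (assumed) threshold; otherwise return None. Intensities
    are in arbitrary linear units for illustration."""
    change = new_intensity - old_intensity
    if abs(change) <= threshold * max(old_intensity, 1e-9):
        return None
    return Event(x, y, t_us, polarity=1 if change > 0 else -1)

print(maybe_emit_event(3, 7, 125_000, old_intensity=0.40, new_intensity=0.55))
# -> Event(x=3, y=7, t_us=125000, polarity=1)
```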

At stage 502, the computer program stored in storage 302 causes the processor 301 to receive any event signals from the event camera 103 output at stage 501. As described previously, such event signals may be expected to be generated by the event camera, and so received by the processor 301 at stage 502, only in response to changes in scene brightness exceeding one or more change thresholds.

At stage 503, the computer program stored in storage 302 causes the processor 301 to determine one or more exposure parameters for imaging using the frame camera 102. In examples to be described in detail herein, stage 503 involves the processor 301 determining two exposure parameters, namely, exposure time and gain, for imaging using the frame camera 102.

Referring next particularly to Figure 6, in examples, the method of stage 503 for determining exposure parameters for imaging using the frame camera 102 comprises three stages.

At stage 601, the computer program stored in storage 302 causes the processor 301 to determine a desired image brightness for images acquired using the frame camera 102. For example, a desired image brightness level could be input at stage 601 by an operator via a human-machine interface connected to input/output interface 304, or a desired image brightness value could be stored in storage 302 and accessed by processor 301.

At stage 602, the computer program stored in storage 302 causes the processor 301 to evaluate any event signals received by the processor at stage 502 and determine a first exposure parameter suitable for imaging using the frame camera 102 to achieve the desired image brightness value determined at stage 601. The determination of the first exposure parameter may be in accordance with logic defined in the computer program stored in storage 302. For example, stage 602 may involve the processor 301 evaluating event signals output by the event camera 103 to detect an extent, and polarity, of a change in brightness of the imaged scene. For example, where the global illumination of the scene has decreased significantly, the processor 301 may determine, in accordance with predefined logic, that a particular, relatively long, exposure time is desirable.

At stage 603, the computer program stored in storage 302 causes the processor 301 to determine one or more further exposure parameters for imaging using the frame camera 102 based on the first exposure parameter determined at stage 602. In other words, stage 603 may involve determining one or more other exposure parameters which are complementary to the exposure parameter determined at stage 602, to achieve the desired image brightness determined at stage 601. For example, where an exposure time has been determined at stage 602, stage 603 may involve determining a minimum gain value to achieve the desired image brightness, or vice versa.
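By way of illustration only, the complementary determination of stages 602 and 603 may be sketched under an assumed linear exposure model in which image brightness is proportional to the product of scene luminance, exposure time, and gain; the model, the gain limits, and the numbers are illustrative assumptions, not part of the disclosure.

```python
def complementary_gain(desired_brightness, scene_luminance, exposure_time_s,
                       min_gain=1.0, max_gain=16.0):
    """Given an exposure time fixed at stage 602, return the minimum gain
    that achieves the desired image brightness under an assumed linear
    model: brightness = scene_luminance * exposure_time * gain."""
    gain = desired_brightness / (scene_luminance * exposure_time_s)
    return min(max(gain, min_gain), max_gain)  # clamp to assumed sensor limits

# Example: the scene darkened, so stage 602 chose a longer 20 ms exposure.
print(complementary_gain(desired_brightness=0.5,
                         scene_luminance=5.0,
                         exposure_time_s=0.020))  # -> 5.0
```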

Although the example implementation of the invention described herein in detail involves determining more than one exposure parameter for imaging using the frame camera, simpler implementations of the invention may involve determining only a single exposure parameter, e.g., only one of exposure time, gain, and aperture size. Thus, in simpler examples, stage 603 may be omitted from the method.

Referring next particularly to Figure 7, in examples, the method of stage 602 for determining the first exposure parameter for imaging using the frame camera 102 comprises three stages.

In particular, the example method depicted in Figure 7 facilitates detection and quantification of motion in the imaged scene, or more generally relative motion between the event camera 103 and objects in the scene, thereby allowing the exposure parameters to be adjusted in response to motion. In examples, the method of stage 602 detects motion by analysing brightness changes between different time steps at which the event camera has been activated to detect changes in brightness. Thus, in examples, the method of Figure 7 is reliant on the event camera 103 being controlled, at earlier stage 501, to detect brightness changes in the imaged scene at plural time steps and at spatially different positions of the imaged scene, and accordingly to output event signals at different time steps in the event of brightness changes.

At stage 701, the computer program stored in storage 302 causes the processor 301 to evaluate event signals received from the event camera 103 at stage 502, to determine whether an object in the scene is moving relative to the event camera, e.g., where one or both of a scene object and the event camera is moving, and to quantify the motion. Where an object in the scene is moving, the moving object may be expected to trigger brightness changes in different pixels of the event camera at mutually different times, depending on the velocity of the object, i.e., the speed and direction of motion of the object. The velocity vector may thus be determined using known values of the inter-pixel distance and the time difference between detected events. Motion, and a quantity/magnitude of the motion, may thus be detected directly from events, or via an intensity-change edge map generated from detected events.
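By way of illustration only, the velocity determination described above may be sketched as follows; the pixel pitch value, and the assumption that the two events were triggered by the same moving edge, are illustrative.

```python
def velocity_from_events(x1, y1, t1_us, x2, y2, t2_us, pixel_pitch_um=3.0):
    """Estimate an image-plane velocity vector (um/s per axis) from two
    events assumed to have been triggered by the same moving edge, using
    the inter-pixel distance and the time difference between the events."""
    dt_s = (t2_us - t1_us) * 1e-6
    if dt_s <= 0:
        raise ValueError("events must be given in time order")
    vx_um_per_s = (x2 - x1) * pixel_pitch_um / dt_s
    vy_um_per_s = (y2 - y1) * pixel_pitch_um / dt_s
    return vx_um_per_s, vy_um_per_s

# Example: the same edge seen 5 pixels to the right, 2 ms later.
print(velocity_from_events(10, 20, 1_000, 15, 20, 3_000))  # -> (7500.0, 0.0)
```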

At stage 702, the computer program stored in storage 302 causes the processor 301 to determine an acceptable level of image blur for images acquired by the frame camera 102. For example, an acceptable level of image blur could be input at stage 702 by an operator via a human-machine interface connected to input/output interface 304, or an acceptable blur value could be stored in storage 302.

At stage 703, the computer program stored in storage 302 causes the processor 301 to determine an exposure parameter, e.g., an exposure time parameter, for imaging using the frame camera based on the quantity/magnitude of the motion determined at stage 701, and the acceptable level of image blur determined at stage 702. For example, where it has been determined at stage 701 that a scene to be imaged by the frame camera 102 contains a fast-moving object, e.g., an object moving at a speed exceeding a predefined threshold, stage 703 may involve determining a maximum exposure time parameter for the frame camera that will be expected to produce an image with no more than the acceptable level of image blur.
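The relation underlying stage 703 may be illustrated as follows: motion blur in pixels is approximately the image-plane speed multiplied by the exposure time, so the maximum exposure time is the acceptable blur divided by the speed. A minimal sketch, with all values assumed for illustration:

```python
def max_exposure_time_s(speed_px_per_s, acceptable_blur_px, ceiling_s=0.1):
    """Longest exposure keeping blur = speed * exposure below the
    acceptable level; falls back to an assumed ceiling for static scenes."""
    if speed_px_per_s <= 0:
        return ceiling_s
    return min(acceptable_blur_px / speed_px_per_s, ceiling_s)

# Example: an object sweeping 2000 px/s with 2 px of blur tolerated.
print(max_exposure_time_s(2000.0, 2.0))  # -> 0.001 (i.e., 1 ms)
```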

Referring next to Figure 8, in examples, the method of stage 403 for acquiring images using the frame camera 102 comprises exposing the image sensor 201 of the frame camera 102 to incident light using a rolling shutter technique, whereby periods of exposure for different regions of the image sensor 201, and so in respect of different regions of the imaged scene, start and end at mutually different times. The method may thus correspondingly involve reading out of acquired image data from photosensors 204 of the image sensor 201 by the processor 301 at mutually different times.

In Figure 8, the start of a period of exposure of the image sensor 201 of the frame camera 102 in respect of an image frame is demarcated by a broken line, and the end of a period of exposure and start of image data readout in respect of the image frame is demarcated by a solid line. In particular, Figure 8 depicts acquisition of two temporally separated image frames. Broken line 801 demarcates a start of a period of exposure in respect of a first image frame. Solid line 802 demarcates a start of reading out of image data from the image sensor 201 in respect of the first image frame. Broken line 803 demarcates a start of a period of exposure in respect of a second image frame. Solid line 804 demarcates a start of reading out of image data from the image sensor 201 in respect of the second image frame.

Thus, in the example of Figure 8, at time t=1, the exposure parameter determining entity 104 may control the frame camera 102 to begin exposure for a first image frame. At time t=1, exposure in respect of a first image frame is started for a first region of the image sensor, which may for example be a first horizontal region, or line, of the image sensor, corresponding to a first horizontal region of the imaged scene. Between time t=1 and time t=2, exposure is subsequently started in respect of further regions of the image sensor, e.g., in respect of further horizontal regions of the image sensor.

Subsequently, at time t=2, exposure in respect of the first image frame is ended for the first region of the image sensor, and readout of image data from the image sensor in respect of the first region thereof is started. Between time t=2 and time t=3, exposure in respect of the first image frame is ended for the further regions of the image sensor, and readout of data in respect of the further regions is started.

Then, at time t=3 exposure in respect of a second image frame is started for a first region of the image sensor, and between time t=3 and time t=4, exposure is subsequently started in respect of further regions of the image sensor. At time t=4 exposure in respect of the second image frame is ended for the first region of the image sensor, and readout of image data from the image sensor in respect of the first region thereof is started. And between time t=4 and time t=5, exposure in respect of the second image frame is ended for the further regions of the image sensor, and readout of data in respect of the further regions is started.
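By way of illustration only, the rolling-shutter timing of Figure 8 may be sketched by computing per-row exposure windows; the row count, line delay, and exposure duration below are arbitrary illustrative values.

```python
def rolling_shutter_windows(frame_start_s, exposure_s, line_delay_s, n_rows):
    """Per-row (start, end) exposure windows for one frame: each row starts
    one line delay after the previous, and every row is exposed for the
    same duration, so the rows end at mutually different times."""
    return [(frame_start_s + i * line_delay_s,
             frame_start_s + i * line_delay_s + exposure_s)
            for i in range(n_rows)]

# Example: 4 rows, 10 ms exposure, 1 ms between row starts (frame_start_s
# playing the role of t=1 in Figure 8).
for start, end in rolling_shutter_windows(0.0, 0.010, 0.001, 4):
    print(f"exposure {start * 1e3:.0f} ms .. {end * 1e3:.0f} ms")
```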

In examples, the frame camera 102 may be operated such that exposure parameters for imaging using the frame camera 102 are updated, i.e., modified, in respect of an ongoing frame, i.e., midway through a period of exposure of the image sensor 201 for an image frame. Updating the exposure parameters for an ongoing frame, based on event information detected by the event camera, may optimise image quality during an image frame. In comparison, updating exposure parameters only ahead of a frame, i.e., prior to a start of exposure for a frame, results in exposure parameters that may lag events in the scene by a relatively short period of time. The method of updating exposure parameters for an ongoing frame is facilitated by the use of the event camera for detecting events in the scene, due in particular to the improved temporal resolution of the event camera compared to other modes of detecting events, and to the typically reduced volume of redundant image data produced by the event camera, which facilitates faster processing of event data and so faster detection of events.

Thus, during acquisition of an image frame using the frame camera 102, e.g., during the period of exposure of the image sensor 201 in respect of the first image frame started at time t=1, the exposure parameter determining entity 104 may perform the method of stage 402, i.e., detecting changes in scene brightness using the event camera 103 and determining exposure parameters for imaging based on the detected changes. The new exposure parameter(s) may then be communicated to the frame camera 102, and the frame camera 102 may then acquire the image frame using the new exposure parameter(s).

In examples therefore, the method of stage 403 for acquiring images using the frame camera 102 may involve starting acquisition of an image frame using the frame camera 102 based on initial exposure parameters, e.g., an initial exposure time parameter, and subsequently changing the operation of the frame camera to acquire the image frame using different exposure parameters, e.g., a different exposure time parameter, determined by the method based on detected changes in brightness in the scene.

In such examples, however, it may be desirable to control the mode of operation of the frame camera 102 in order to avoid corruption of an image frame resulting from different regions of the image sensor of the frame camera being exposed for mutually different exposure times. Such frame corruption could undesirably result in spatially-varying image brightness, whereas it may be desirable for image brightness to be spatially constant. In examples therefore, the method comprises selectively updating one or more exposure parameters, and in particular an exposure time parameter, in respect of an ongoing image frame, in dependence on a magnitude of the determined exposure time relative to the initial exposure time and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter, with the aim of avoiding differences in exposure times for different regions of an image frame and thereby avoiding frame corruption.

Such frame corruption could result from a situation in which, at a midway point of acquisition of an image frame, a new exposure time parameter that is longer than the initial exposure time set for the initial period of image acquisition is set at stage 402 for acquiring a remainder of the image frame. In such a scenario, where it is not possible to adjust the exposure time parameter in respect of the entire image frame, i.e., in respect of all regions of the image frame, it may be desirable to restart acquisition of the image frame using the new exposure time parameter.

Referring then to Figure 9, in examples, at stage 901, the computer program stored in storage 302 may cause the processor 301 to start acquisition of an image frame using the frame camera 102 in a rolling-shutter mode of operation, based on an initial exposure time parameter. The initial exposure time parameter could, for example, be an exposure time parameter set by a previous iteration of the method of stage 402.

At stage 902, the computer program stored in storage 302 may cause the processor 301 to perform the method of stage 402 to determine an exposure time parameter for imaging using the frame camera based on detected changes in brightness in the scene imaged by the frame camera.

At stage 903, the computer program stored in storage 302 may cause the processor 301 to determine whether or not to image using the frame camera based on the determined exposure time parameter, in dependence on a magnitude of the determined exposure time relative to the initial exposure time and on a time of determination of the determined exposure time relative to the imaging using the frame camera based on the initial exposure parameter.

Referring finally to Figure 10, in examples, the method of stage 903, for determining whether or not to image using the frame camera based on the determined exposure time parameter, comprises four stages.

At stage 1001, the computer program stored in storage 302 may cause the processor 301 to determine whether any period of exposure of the image sensor of the frame camera, at stage 901, based on the initial exposure time parameter has ended.

In the event of a negative determination at stage 1001, i.e., a determination that no period of exposure of the image sensor of the frame camera, at stage 901, based on the initial exposure time parameter has ended, at stage 1002, the computer program stored in storage 302 may cause the processor 301 to determine whether any period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter.

In the event of a negative determination at stage 1002, i.e., a determination that no period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, at stage 1003, the computer program stored in storage 302 may cause the processor 301 to continue acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby any period of exposure of the image sensor of the frame camera based on the initial exposure time parameter that has elapsed that is shorter than the determined exposure time parameter is extended to the determined exposure time parameter.
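By way of illustration only, the decision logic of stages 1001 to 1003, together with the restart branch of stage 1004 described in the following paragraph, may be sketched as the following function; the per-row exposure bookkeeping is an illustrative assumption.

```python
def mid_frame_update_action(rows_elapsed_s, rows_ended, new_exposure_s):
    """Decide how to apply a newly determined exposure time mid-frame.

    rows_elapsed_s: elapsed exposure so far for each row that has started.
    rows_ended: whether each row's initial exposure has already ended.

    Stage 1001: if any row has finished exposing, restart the frame.
    Stage 1002: if any row has already been exposed for longer than the
                newly determined exposure time, restart the frame.
    Stage 1003: otherwise continue, extending every ongoing row's exposure
                to the newly determined exposure time.
    """
    if any(rows_ended):
        return "restart"                      # stage 1004 via stage 1001
    if any(e > new_exposure_s for e in rows_elapsed_s):
        return "restart"                      # stage 1004 via stage 1002
    return "continue-and-extend"              # stage 1003

# Example: three rows mid-exposure, none ended, none past the new 8 ms.
print(mid_frame_update_action([0.004, 0.003, 0.002],
                              [False, False, False], 0.008))
```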

In the alternative, in the event of an affirmative determination at stage 1001, i.e., a determination that a period of exposure of the image sensor of the frame camera, at stage 901, based on the initial exposure time parameter has ended, and/or an affirmative determination at stage 1002, i.e., a determination that a period of exposure of the image sensor based on the initial exposure time parameter has elapsed that is greater than the determined exposure time parameter, at stage 1004 the computer program stored in storage 302 may cause the processor 301 to restart acquisition of the image frame using the frame camera based on the determined exposure time parameter, whereby an entirety of the image frame is acquired using the determined exposure time parameter.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality.