Title:
METHOD AND SYSTEM TO DETECT ORBITING OBJECTS
Document Type and Number:
WIPO Patent Application WO/2020/191427
Kind Code:
A1
Abstract:
A method and system for detecting an orbiting object is disclosed. The method involves receiving a sequence of consecutive images of a region in space in which the object moves, processing the sequence of consecutive images to generate a corresponding sequence of foreground images where background artefacts are removed to identify one or more candidate orbiting objects, and registering the corresponding sequence of foreground images to a common coordinate frame to generate a sequence of registered foreground images. The method then includes identifying the orbiting object from the one or more candidate orbiting objects in the sequence of registered foreground images.

Inventors:
DO HUAN (AU)
CHIN TAT-JUN (AU)
Application Number:
PCT/AU2020/000022
Publication Date:
October 01, 2020
Filing Date:
March 25, 2020
Assignee:
UNIV ADELAIDE (AU)
International Classes:
G06T7/262; B64G3/00; G06K9/40; G06T5/20
Foreign References:
US 9401029 B2 (2016-07-26)
Other References:
XI, J. ET AL.: "Space debris detection in optical image sequences", APPLIED OPTICS, vol. 55, no. 28, October 2016 (2016-10-01), pages 7929 - 7940, XP055742525
DANESCU, R. ET AL.: "A Low Cost Automatic Detection and Ranging System for Space Surveillance in the Medium Earth Orbit Region and Beyond", SENSORS, vol. 14.2, 11 February 2014 (2014-02-11), pages 2703 - 2731, XP055721492
ONIGA, F. ET AL.: "Automatic Recognition of Low Earth Orbit Objects from Image Sequences", 2011 IEEE 7TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING, pages 335 - 338, XP032063525
CIURTE, A. ET AL.: "Automatic Detection of MEO Satellite Streaks from Single Long Exposure Astronomic Images", 2014 INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS (VISAPP, vol. 1, pages 538 - 544, XP032792035
KONG, S. ET AL.: "Effect Analysis of Optical Masking Algorithm for GEO Space Debris Detection", INTERNATIONAL JOURNAL OF OPTICS, vol. 2019, pages 1 - 8, XP055742520
ZHANG, X. ET AL.: "Space Object Detection in Video Satellite Images Using Motion Information", INTERNATIONAL JOURNAL OF AEROSPACE ENGINEERING, vol. 2017, no. 1024529, pages 1 - 9, XP055742518
Attorney, Agent or Firm:
MADDERNS PATENT AND TRADE MARK ATTORNEYS (AU)
Claims:
CLAIMS

1. A method for detecting an orbiting object, comprising:

receiving a sequence of consecutive images of a region in space in which the orbiting object moves;

processing the sequence of consecutive images to generate a corresponding sequence of foreground images where background artefacts are removed to identify one or more candidate orbiting objects;

registering the corresponding sequence of foreground images to a common coordinate frame to generate a sequence of registered foreground images; and

identifying the orbiting object from the one or more candidate orbiting objects in the sequence of registered foreground images.

2. The method of claim 1, wherein processing the sequence of consecutive images to generate a corresponding sequence of foreground images comprises:

selecting an image from the sequence of consecutive images;

determining a foreground mask to select pixels from the image corresponding to the one or more candidate orbiting objects present in the image; and

applying the foreground mask to the image to generate the corresponding foreground image.

3. The method of claim 2, wherein the foreground mask comprises the difference between a reconstructed foreground image and a reconstructed background image.

4. The method of claim 3, wherein the reconstructed background image selects the low frequency components of the image up to a first cut-off frequency, where the low frequency components correspond to the background artefacts in the image, and the reconstructed foreground image selects the low and medium frequency components of the image up to a second cut-off frequency, the second cut-off frequency greater than the first cut-off frequency, where the low and medium frequency components of the image correspond to background artefacts and the one or more candidate orbiting objects in the image.

5. The method of claim 4, wherein the reconstructed background image and the reconstructed foreground image are formed by a combined optimisation process that seeks to:

minimise the difference between the spectral content of the reconstructed background image and the reconstructed foreground image for frequencies below the first cut-off frequency;

maximise the difference between the spectral content of the reconstructed background image and the reconstructed foreground image for frequencies between the first cut-off frequency and the second cut-off frequency, and minimise the spectral content of both the reconstructed background image and the reconstructed foreground image for frequencies greater than the second cut-off frequency.

6. The method of claim 5, wherein the reconstructed background image and the reconstructed foreground image are generated by a statistical regression technique.

7. The method of claim 6, wherein the statistical regression technique is a Gaussian process regression (GPR) based on a kernel function dependent on the spatial relationship between pixels and including as free parameters the spatial frequencies of the reconstructed image.

8. The method of claim 7, wherein the kernel function is a squared exponential (SE) kernel.

9. The method of any one of claims 1 to 8, wherein registering the corresponding sequence of foreground images to a common coordinate frame comprises:

(a) selecting a reference foreground image to define the common coordinate frame;

(b) reducing the sequence of foreground images to respective sets of discrete points;

(c) defining the set of discrete points corresponding to the reference foreground image to be the reference set of discrete points;

(d) selecting a set of discrete points not equal to the reference set of discrete points;

(e) determining a respective transformation based on the selected set of discrete points to the reference set of discrete points;

(f) applying the respective transformation to a foreground image corresponding to the selected set of discrete points to register the foreground image to the reference foreground image; and

(g) repeating steps (d) to (f) until all foreground images have been registered to the reference foreground image to generate the sequence of registered foreground images.

10. The method of claim 9, wherein determining the respective transformation comprises determining an estimate of a planar perspective transform that connects the selected set of discrete points to the reference set of discrete points.

11. The method of claim 10, wherein determining the estimate of the planar perspective transform comprises adopting an iterative closest points (ICP) procedure based on determining the residuals between the selected set of discrete points and the reference set of discrete points.

12. The method of claim 11, wherein the ICP procedure is based on a reduced set of residuals between the selected set of discrete points and the reference set of discrete points.

13. The method of any one of claims 1 to 12, wherein identifying the orbiting object from the one or more candidate orbiting objects comprises:

removing the candidate orbiting objects corresponding to celestial objects from the sequence of registered foreground images to determine one or more remaining candidate orbiting objects;

determining whether a selection of remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images; and

associating the selection of determined remaining candidate orbiting objects that follows the trajectory in the sequence of registered foreground images to identify the orbiting object.

14. The method of claim 13, wherein removing the candidate orbiting objects corresponding to celestial objects from the sequence of registered foreground images comprises:

determining overlapping pixel regions that overlap in respective registered foreground images of the sequence of registered foreground images; and

removing the overlapping pixel regions from each of the respective registered foreground image of the sequence of registered foreground images.

15. The method of claim 13 or 14, wherein determining that a selection of remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images comprises:

selecting a pair of candidate orbiting objects from the remaining candidate orbiting objects in a respective pair of registered foreground images;

determining a proposed trajectory that passes through the selected pair of candidate orbiting objects, the proposed trajectory defining a hypothetical location of the orbiting object in each of the other registered foreground images in the sequence of registered foreground images; and

determining that at least one of the other registered foreground images comprises a remaining candidate orbiting object that lies on the proposed trajectory.

16. The method of claim 15, wherein determining that a selection of remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images comprises determining that a plurality of the other registered foreground images includes a remaining candidate orbiting object that lies on the proposed trajectory.

17. The method of any one of claims 1 to 16, wherein the orbiting object is in a geostationary orbit.

18. The method of any one of claims 1 to 17, further comprising initially capturing the sequence of consecutive images of the region in space in which the orbiting object moves.

19. An object detection system for detecting an orbiting object, comprising one or more processors configured to carry out the method of any one of claims 1 to 17.

20. The object detection system of claim 19, further comprising a sensor for capturing the sequence of consecutive images.

21. An object detection system for detecting an orbiting object, comprising

a data receiving module comprising one or more processors and configured to receive a sequence of consecutive images of a region in space in which the object moves;

an image processing module comprising one or more processors and configured to process the sequence of consecutive images to generate a corresponding sequence of foreground images where background artefacts are removed to identify one or more candidate orbiting objects;

an image registration module comprising one or more processors and configured to register the corresponding sequence of foreground images to a common coordinate frame to generate a sequence of registered foreground images; and

an orbiting object identification module comprising one or more processors and configured to identify the orbiting object from the one or more candidate orbiting objects in the sequence of registered foreground images.

22. The object detection system for detecting an orbiting object, further comprising a sensor to capture the sequence of consecutive images of the region in space in which the orbiting object moves.

Description:
METHOD AND SYSTEM TO DETECT ORBITING OBJECTS

PRIORITY DOCUMENTS

[0001] The present application claims priority from Australian Provisional Patent Application No. 2019900985 titled "METHOD AND SYSTEM TO DETECT ORBITING OBJECTS" and filed on 25 March 2019, the content of which is incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates to the detection of moving objects. In a particular form, the present disclosure relates to the detection of orbiting objects and in one example to objects occupying a geostationary orbit.

BACKGROUND

[0003] Virtually all major public and private assets, such as transportation hubs, commercial buildings, power stations and the like, are protected by extensive surveillance networks. Unfortunately, the same cannot be said of space assets such as communications satellites and space stations. Currently there are more than a thousand operating satellites and space installations, representing a significant financial investment, and furthermore these space assets often provide essential services. As such, protecting these assets from interference and destruction is of utmost importance.

[0004] A major risk to these assets is collision with other resident space objects (RSO), including debris and satellites, both operational and disused. As an example, the collision between Iridium 33 (then an operational communications satellite) and Cosmos 2251 (a retired military communications satellite) introduced over 2,000 unregistered pieces of fragmentation debris which have now drifted and spread to various orbits, increasing the risk of further collisions. As such, the assessment of space situational awareness (SSA) has grown in importance with the rapid growth of space utilisation. SSA involves the tracking of artificial earth orbiting objects with a view to determining the potential risk of collision for a particular orbit.

[0005] Ground-based observations have been the primary tools for assessing and maintaining SSA. The efficacy of ground-based observations is affected by uncontrollable factors such as atmospheric effects, weather, and night-time-only observational constraints. Therefore, space-based SSA has been considered as a promising alternative. However, due to the much higher establishment cost, there is a much smaller number of fully functional space-based SSA systems at the present time. Since maintaining a fleet of observatory-class spacecraft is financially prohibitive, the development of "nano-satellites", eg, CubeSat, with equivalent observation capacity to larger satellites has been an active research area.

[0006] In both ground-based and space-based SSA frameworks, a critical element is object detection, ie, identifying potential artificial objects in a given set of measurements taken of the target region in space. This is typically achieved using optical sensors and the measurements comprise a sequence of images taken or captured at successive times.

[0007] In general, there are two main operational modes for optical telescopes. The first is sidereal tracking where the telescope is re-oriented continuously to be fixed to the stars. In this case, objects will appear as elongated or streak-like regions in the sequence of images. The other mode is object tracking where the telescope is re-oriented continuously to be fixed to the selected object or objects. In this case, the selected objects appear as point-like regions in the sequence of images. In practice, point-like object tracking refers to fixating on a target region in near space and as such this mode of tracking can also be referred to as region tracking. As the relative speed between objects in the region and the camera is usually small, the orbiting objects will then take on a point-like form.

[0008] Previous approaches to point-like object detection typically involve first capturing multiple images consecutively in time (ie, a sequence of images or image sequence) and then detecting potential objects, which are usually referred to as "candidates". The spurious candidates are then removed based on the observation that the actual objects move in a different pattern with respect to the background stars. There are a number of issues with these approaches, and different methods have been adopted to determine potential candidates from the image sequence which aim to suppress noise in the image and preferably intensify a true object's signal. Unfortunately, the tradeoff for obtaining greater accuracy is that the systems cannot be run in real time, or alternatively require highly specialised and expensive hardware such as a field-programmable gate array (FPGA) in order to implement a usable system.

[0009] There is therefore a need for a method of detecting orbiting objects that can be implemented in generic computer hardware in either ground-based or space-based systems without sacrificing accuracy and the ability to be run in real time.

SUMMARY

[0010] In a first aspect, the present disclosure provides a method for detecting an orbiting object, comprising:

receiving a sequence of consecutive images of a region in space in which the orbiting object moves;

processing the sequence of consecutive images to generate a corresponding sequence of foreground images where background artefacts are removed to identify one or more candidate orbiting objects;

registering the corresponding sequence of foreground images to a common coordinate frame to generate a sequence of registered foreground images; and

identifying the orbiting object from the one or more candidate orbiting objects in the sequence of registered foreground images.

[0011] In another form, processing the sequence of consecutive images to generate a corresponding sequence of foreground images comprises:

selecting an image from the sequence of consecutive images;

determining a foreground mask to select pixels from the image corresponding to the one or more candidate orbiting objects present in the image; and

applying the foreground mask to the image to generate the corresponding foreground image.

[0012] In another form, the foreground mask comprises the difference between a reconstructed foreground image and a reconstructed background image.

[0013] In another form, the reconstructed background image selects the low frequency components of the image up to a first cut-off frequency, where the low frequency components correspond to the background artefacts in the image, and the reconstructed foreground image selects the low and medium frequency components of the image up to a second cut-off frequency, the second cut-off frequency greater than the first cut-off frequency, where the low and medium frequency components of the image correspond to background artefacts and the one or more candidate orbiting objects in the image.

[0014] In another form, the reconstructed background image and the reconstructed foreground image are formed by a combined optimisation process that seeks to:

minimise the difference between the spectral content of the reconstructed background image and the reconstructed foreground image for frequencies below the first cut-off frequency;

maximise the difference between the spectral content of the reconstructed background image and the reconstructed foreground image for frequencies between the first cut-off frequency and the second cut-off frequency, and

minimise the spectral content of both the reconstructed background image and the reconstructed foreground image for frequencies greater than the second cut-off frequency.

[0015] In another form, the reconstructed background image and the reconstructed foreground image are generated by a statistical regression technique.

[0016] In another form, the statistical regression technique is a Gaussian process regression (GPR) based on a kernel function dependent on the spatial relationship between pixels and including as free parameters the spatial frequencies of the reconstructed image.

[0017] In another form, the kernel function is a squared exponential (SE) kernel.

[0018] In another form, registering the corresponding sequence of foreground images to a common coordinate frame comprises:

(a) selecting a reference foreground image to define the common coordinate frame;

(b) reducing the sequence of foreground images to respective sets of discrete points;

(c) defining the set of discrete points corresponding to the reference foreground image to be the reference set of discrete points;

(d) selecting a set of discrete points not equal to the reference set of discrete points;

(e) determining a respective transformation based on the selected set of discrete points to the reference set of discrete points;

(f) applying the respective transformation to a foreground image corresponding to the selected set of discrete points to register the foreground image to the reference foreground image; and

(g) repeating steps (d) to (f) until all foreground images have been registered to the reference foreground image to generate the sequence of registered foreground images.

[0019] In another form, determining the respective transformation comprises determining an estimate of a planar perspective transform that connects the selected set of discrete points to the reference set of discrete points.

[0020] In another form, determining the estimate of the planar perspective transform comprises adopting an iterative closest points (ICP) procedure based on determining the residuals between the selected set of discrete points and the reference set of discrete points.

[0021] In another form, the ICP procedure is based on a reduced set of residuals between the selected set of discrete points and the reference set of discrete points.

[0022] In another form, identifying the orbiting object from the one or more candidate orbiting objects comprises:

removing the candidate orbiting objects corresponding to celestial objects from the sequence of registered foreground images to determine one or more remaining candidate orbiting objects;

determining whether a selection of remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images; and

associating the selection of determined remaining candidate orbiting objects that follows the trajectory in the sequence of registered foreground images to identify the orbiting object.

[0023] In another form, removing the candidate orbiting objects corresponding to celestial objects from the sequence of registered foreground images comprises:

determining overlapping pixel regions that overlap in respective registered foreground images of the sequence of registered foreground images; and

removing the overlapping pixel regions from each of the respective registered foreground image of the sequence of registered foreground images.

[0024] In another form, determining that a selection of remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images comprises:

selecting a pair of candidate orbiting objects from the remaining candidate orbiting objects in a respective pair of registered foreground images;

determining a proposed trajectory that passes through the selected pair of candidate orbiting objects, the proposed trajectory defining a hypothetical location of the orbiting object in each of the other registered foreground images in the sequence of registered foreground images; and

determining that at least one of the other registered foreground images comprises a remaining candidate orbiting object that lies on the proposed trajectory.

[0025] In another form, determining that a selection of remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images comprises determining that a plurality of the other registered foreground images includes a remaining candidate orbiting object that lies on the proposed trajectory.

[0026] In another form, the orbiting object is in a geostationary orbit.

[0027] In another form, the method further comprises initially capturing the sequence of consecutive images of the region in space in which the orbiting object moves.

[0028] In a second aspect, the present disclosure provides an object detection system for detecting an orbiting object, comprising one or more processors configured to carry out the method in accordance with the first aspect.

[0029] In another form, the object detection system further comprises a sensor for capturing the sequence of consecutive images.

[0030] In a third aspect, the present disclosure provides an object detection system for detecting an orbiting object, comprising:

a data receiving module comprising one or more processors and configured to receive a sequence of consecutive images of a region in space in which the object moves;

an image processing module comprising one or more processors and configured to process the sequence of consecutive images to generate a corresponding sequence of foreground images where background artefacts are removed to identify one or more candidate orbiting objects;

an image registration module comprising one or more processors and configured to register the corresponding sequence of foreground images to a common coordinate frame to generate a sequence of registered foreground images; and

an orbiting object identification module comprising one or more processors and configured to identify the orbiting object from the one or more candidate orbiting objects in the sequence of registered foreground images.

[0031] In another form, the object detection system further comprises a sensor to capture the sequence of consecutive images of the region in space in which the orbiting object moves.

BRIEF DESCRIPTION OF DRAWINGS

[0032] Embodiments of the present disclosure will be discussed with reference to the accompanying drawings wherein:

[0033] Figure 1 is a flowchart of a method for detecting an orbiting object in accordance with an illustrative embodiment;

[0034] Figure 2 is a system overview diagram of an object detection system in accordance with an illustrative embodiment;

[0035] Figure 3 is an example image of a region in space in accordance with an illustrative embodiment;

[0036] Figure 4 is an enlarged subimage from the image illustrated in Figure 3;

[0037] Figure 5 is a processed image based on the enlarged subimage illustrated in Figure 4;

[0038] Figure 6 is a plot of the normalised pixel intensities along the sectional line depicted on the image illustrated in Figure 3;

[0039] Figure 7 is a flowchart of a method for processing a sequence of consecutive images to generate a corresponding sequence of foreground images in accordance with an illustrative embodiment;

[0040] Figure 8 is a plot of the Fourier transform of the pixel intensities along the sectional line depicted on the image illustrated in Figure 3;

[0041] Figure 9 is a reconstructed background image of the subimage illustrated in Figure 4 in accordance with an illustrative embodiment;

[0042] Figure 10 is a reconstructed foreground image of the subimage illustrated in Figure 4 in accordance with an illustrative embodiment;

[0043] Figure 11 is a plot of the foreground and background reconstructions of pixel intensities along the sectional line of the image illustrated in Figure 3;

[0044] Figure 12 is a representation of a foreground mask based on the reconstructed background and foreground images illustrated in Figures 9 and 10;

[0045] Figure 13 is a plot of the foreground mask along the sectional line of the image illustrated in Figure 3 based on the foreground and background reconstructions of pixel intensities illustrated in Figure 11;

[0046] Figure 14 is a plot of the Fourier transform of the pixel intensities along the sectional line depicted in Figure 3 in addition to the Fourier transform of the foreground and background reconstructed images shown in Figure 11;

[0047] Figure 15 is a representation of a foreground image obtained by applying the foreground mask shown in Figure 12 to the subimage shown in Figure 4;

[0048] Figure 16 is a flowchart of a method for registering a sequence of foreground images to a common coordinate frame in accordance with an illustrative embodiment;

[0049] Figure 17 is a figurative view showing the correspondence between points from a foreground image and points from the reference foreground image in accordance with an illustrative embodiment;

[0050] Figure 18 is a composite image of the sequence of registered foreground images generated by an iterative closest points (ICP) procedure in accordance with an illustrative embodiment;

[0051] Figure 19 is a composite image of the sequence of registered foreground images generated by an iterative closest points (ICP) procedure based on a reduced set of residuals in accordance with an illustrative embodiment;

[0052] Figure 20 is a flowchart of a method for identifying the orbiting object from the candidate orbiting objects in the sequence of registered foreground images in accordance with an illustrative embodiment;

[0053] Figure 21 is a flowchart of a method for removing candidate orbiting objects corresponding to celestial objects from the sequence of registered foreground images in accordance with an illustrative embodiment;

[0054] Figure 22 is a flowchart of a method for determining that a selection of the remaining candidate orbiting objects follows a trajectory in accordance with an illustrative embodiment; and

[0055] Figure 23 is an image of an identified trajectory indicating an orbiting object in accordance with an illustrative embodiment.

[0056] In the following description, like reference characters designate like or corresponding parts throughout the figures.

DESCRIPTION OF EMBODIMENTS

[0057] Referring now to Figure 1, there is shown a flowchart of a method 100 for detecting an orbiting object according to an illustrative embodiment. By way of overview, method 100 comprises at step 110 receiving a sequence of consecutive images 111 of a region in space in which the orbiting object moves and at step 120 then processing the sequence of consecutive images 111 to generate a corresponding sequence of foreground images 121 where background artefacts are removed to identify one or more candidate orbiting objects present in the sequence of foreground images 121. At step 130, the corresponding sequence of foreground images 121 is registered to a common coordinate frame to generate a sequence of registered foreground images 131 and at step 140 the orbiting object is identified from the one or more candidate orbiting objects present in the sequence of registered foreground images 131.
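By way of illustration only, the four steps of method 100 map onto a simple processing pipeline, sketched below in Python. The stage functions are placeholders for the techniques elaborated later in this description (GPR-based segmentation, T-ICP registration and trajectory identification), and all names are illustrative rather than part of the disclosure.

    import numpy as np

    def detect_orbiting_objects(images, segment, register, identify):
        """Illustrative skeleton of method 100 (steps 110 to 140).

        segment, register and identify are callables implementing the
        stages sketched later in this description."""
        # Step 110: receive the sequence of consecutive images 111.
        sequence = [np.asarray(image, dtype=float) for image in images]
        # Step 120: remove background artefacts to obtain foreground images 121.
        foregrounds = [segment(image) for image in sequence]
        # Step 130: register the foreground images to a common coordinate frame.
        registered = register(foregrounds)
        # Step 140: identify the orbiting object among the candidates.
        return identify(registered)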

[0058] Referring also to Figure 2, there is shown an object detection system 200 comprising in this example an optional sensor or camera 210, data processor 220 and an optional display 230. As would be appreciated, any type of imaging sensor and system may be adopted that is capable of capturing images of the region in space where relevant objects may be resolved and imaged, and further the operable wavelength of the imaging system need not necessarily be in the visible wavelength spectrum. In addition, sensor 210 may be either ground-based or space-based such as forming part of a satellite.

[0059] In one example configuration, data processor 220 comprises a data receiving module 221 comprising one or more computer processors for receiving a sequence of consecutive images of a region in space in which the object moves and an image processing module 222 comprising one or more computer processors for processing the sequence of consecutive images 111 to generate a corresponding sequence of foreground images 121 where background artefacts are removed to identify one or more candidate orbiting objects. Data processor 220 in this example also includes an image registration module 223 comprising one or more computer processors for registering the corresponding sequence of foreground images 121 to a common coordinate frame to generate a sequence of registered foreground images 131 and an orbiting object identification module 224 comprising one or more computer processors for identifying the orbiting object from the one or more candidate orbiting objects in the sequence of registered foreground images 131.

[0060] At step 110, a sequence of consecutive images 111, I_t for t = 1, …, N, of the region of interest in which the object moves is received, where N is the number of images. As referred to above, the sequence of images 111 may be captured by any suitable sensor including, but not limited to, a CCD camera combined with an appropriate optical imaging system.

[0061] In one example, directed to the detection of orbiting objects where the orbiting object occupies a geostationary (GEO) orbit, the images I_t are captured by an imaging system operating in region tracking mode to view a region in space corresponding to the GEO orbit of the object in space. In one example, suitable for ground-based observation, the sensor comprises an Officina Stellare RH200 telescope and an FLI Proline PL4240 camera operating in the visible wavelength with a field of view of approximately 2.6 degrees with 4.6 arcseconds per pixel. In this example, the sensor is mounted to a Software Bisque Paramount MEII robotic mount, inside an Aphelion Domes 7ft clamshell dome, and each observation captures a sequence of consecutive images containing, in this example, 5 images, each captured at a 5-second exposure time, with 10 seconds between each image capture or shot, and where each image is 16-bit with a size of 2048 x 2048 pixels.

[0062] As would be appreciated, these image capture parameters may vary depending on requirements. As an example, more than 5, 10, 15, 20, or greater than 20 consecutive images may be taken or captured. Furthermore, the exposure time for the image capture can vary depending on the brightness of the objects in the region being viewed, and the size of the image can also be varied depending on the pixel resolution required.

[0063] As is apparent, relative motion will exist between the RSO or orbiting object and background stars regardless of whether the observational mode is sidereal or region tracking. In region tracking mode, due to the short observing intervals in which the images are captured, ie, in the order of seconds, and the respective hyper-velocity movement of the orbiting object in space, the rotational motion caused by either the Earth's rotation or any changes of course by the orbiting object in the images is negligible compared to that of the orbiting object.

[0064] This effectively results in only translational motion of the orbiting object or RSO through the region of space in the sequence of consecutive images, ie, the position of the RSO in each image of the image sequence will form a line when the images are considered collectively. Similarly, background astronomical objects such as stars and the like that are present in the images will remain relatively static, but as will be apparent, become elongated as there is some minimal movement of these celestial objects with respect to the region of space that is being viewed. As would be appreciated, in any region of space there will likely be multiple moving orbiting objects.

[0065] The region of space being viewed may not necessarily correspond to orbiting objects having a GEO orbit, and potentially the relative motion of a moving object in each image of the sequence of consecutive images may be higher, in which case this can lead to distortion of the object in each of the images. However, the position of the RSO in each image of the image sequence will still form a line when considered collectively.

[0066] At step 120, the sequence of consecutive images or image sequence 111 is processed to generate a corresponding sequence of foreground images 121. In one embodiment, this is achieved by a foreground/background (FG/BG) segmentation process to segment the foreground pixels, corresponding to candidate orbiting objects (which at this stage will comprise those objects that will eventually be determined to be moving objects as well as effectively static celestial objects such as stars and the like), from the background pixels corresponding to the space void. As would be appreciated, the potential faintness of the candidate orbiting object pixels as compared to any background artefacts arising in the images as a result of potential noise makes this process challenging. Sources of noise forming these background artefacts include, but are not limited to, celestial phenomena such as especially bright or varying stars, inherent flaws in the imaging pipeline arising from optical distortion, and/or inherent electronic noise which may include systematic components such as banding and the like.

[0067] A further challenge is that the intensity of background artefacts corresponding to noise in the image often also varies as a function of location in the image. As such, the use of a simple uniform threshold based on intensity that is applied to separate foreground pixels corresponding to candidate orbiting objects from background pixels will fail, since there may be background pixels that are as bright as the candidate orbiting object depending on the location of the pixel within the image.

[0068] Referring now to Figure 3, there is shown an example image 300 of a region of space or star field image comprising both orbiting objects (ie, RSOs) and static celestial objects such as stars and the like according to an illustrative embodiment. As can be seen by inspection, there is a discrete change 330 in intensity moving from left to right of image 300. Figure 4 shows an enlarged subimage 400 corresponding to the rectangle 310 shown in Figure 3 indicating the position of the moving object 420, while Figure 5 shows an example processed image 500 obtained by directly applying a uniform threshold to the intensities in subimage 400 in an attempt to remove background artefacts. As can be seen by inspection, image 400 is evidently noisy, especially near the bottom left corner. Figure 6 is a plot 600 of the normalised intensities along the sectional line 320 illustrated in Figure 3. In this example, the peak 610 between horizontal pixel coordinates 1200 and 1400 corresponds to the moving object 420.

[0069] As can be seen in Figure 5, the challenge in determining the foreground pixels (belonging to stars and moving objects) from the background pixels (belonging to the space void) in a region of space such as depicted by image 300 is due to the faintness of the candidate orbiting object pixels relative to the noise artefacts in the image, which may be due to unexpected celestial phenomena or inherent flaws in the imaging pipeline as described above. By examining the intensities along sectional line 320 of image 300 as shown in Figure 6, the reason behind the undesirable outcomes of direct thresholding is clear, as the background intensities are not only noisy, they also vary as a function of location in the image. This example shows that no single threshold would cleanly separate the foreground from the background, since there are background pixels that are as bright as the target object 420 along the cross section as indicated by the single peak 610 in Figure 6, effectively meaning that orbiting object 420 cannot be discerned from the noise as can be seen in Figure 5.

[0070] Referring now to Figure 7, there is shown a flowchart of a method 700 for processing a sequence of consecutive images 111 to generate a corresponding sequence of foreground images 121 according to an illustrative embodiment. At step 710, an image I_t is selected from the sequence of consecutive images to generate a corresponding foreground image.

[0071] Consider X = {x_i}, i = 1, …, N, to define a 2D image grid where each x_i is a pixel having a corresponding pixel location in the input image I_t. The image I_t can be interpreted as a function y that gives the observed (noisy) intensity value y(x_i) over each x_i ∈ X. Define y* as the noiseless image, where

y(x_i) = y*(x_i) + ε_i (Equation 1)

[0072] and ε_i ~ N(0, δ_n²) is independent and identically distributed normally distributed noise, for all x_i ∈ X.

[0073] At step 720, a foreground mask is determined to select pixels from the image that correspond to the one or more candidate moving objects or foreground objects that may be present in the image.

[0074] An ideal foreground mask m* may be defined as:

Equation 2

[0075] and in accordance with the present disclosure an estimate of m* is determined for each image y comprising a 2D image grid X.

[0076] In one embodiment, a statistical regression technique is adopted to reconstruct the noiseless image y*. In one example, a Gaussian process regression (GPR) approach is adopted to reconstruct y* by imposing a Gaussian process prior over the noiseless image y*. In this context, this implies that the vector of noiseless intensity values

y* = [y*(x_1), y*(x_2), …, y*(x_N)]ᵀ (Equation 3)

[0077] will distribute according to the multivariate Gaussian distribution

y* ~ N(0, K) (Equation 4)

[0078] where the covariance matrix K is defined as

K_ij = k(x_i, x_j) (Equation 5)

[0079] Here, k is called the kernel function and K is referred to as the kernel matrix. In approximation, k(x_i, x_j) computes the inner product between x_i and x_j in a higher-dimensional embedding space (defined by the form of k).

[0080] The Gaussian process prior defined by Equation 4 induces a distribution of functions over the domain X, where the specific form of k and the setting of its internal parameters define this distribution of functions. Combining Equations 1 and 4, the marginal distribution of the observed intensities

y = [y(x_1), y(x_2), …, y(x_N)]ᵀ (Equation 6)

[0081] can be established as

y ~ N(0, K + δ_n²I) (Equation 7)

[0082] where δ_n is a noise magnitude factor.

[0083] Consider x_* to be an image coordinate for which the value of y*(x_*) is to be predicted. Given Equations 1 and 4, it can be established that the distribution of the extended vector

[y; y*(x_*)] (Equation 8)

[0084] is again a Gaussian of the form

[y; y*(x_*)] ~ N( 0, [ K + δ_n²I , k(x_*) ; k(x_*)ᵀ , k(x_*, x_*) ] ) (Equation 9)

[0085] where k(x_*) is the vector of kernel evaluations,

k(x_*) = [k(x_1, x_*), k(x_2, x_*), …, k(x_N, x_*)]ᵀ (Equation 10)

[0086] As y is observed and y*(x_*) is to be predicted, the posterior distribution is obtained as follows:

p( y*(x_*) | y ) = N( μ(x_*), σ²(x_*) ) (Equation 11)

[0087] which is a univariate Gaussian with mean and variance as defined below

μ(x_*) = k(x_*)ᵀ (K + δ_n²I)⁻¹ y (Equation 12)

σ²(x_*) = k(x_*, x_*) − k(x_*)ᵀ (K + δ_n²I)⁻¹ k(x_*) (Equation 13)

[0088] Define the maximum a posteriori (MAP) estimate for y*(x_*) to be μ(x_*). In effect, μ(x_*) is the most probable estimate for the unknown function value y*(x_*). In the context of 2D images, x_* is taken from the original grid X, ie, x_* is always a pixel location, and it follows that μ(x_*) for all x_* ∈ X is then a reconstruction of the noiseless image y* over the grid X (see Equation 1 above).

[0089] In accordance with the present disclosure, an estimate of m* is determined for each image y comprising 2D image grid X. In this example, the estimate m(x_i) of the ideal foreground mask m* is defined as follows:

m(x_i) = μ_FG(x_i) − μ_BG(x_i) (Equation 14)

[0090] where μ_FG and μ_BG are respectively a "foreground" reconstructed image and a "background" reconstructed image that are each derived from the original image.

[0091] In this example, both reconstruction processes perform a "denoising" of the original signal, but the foreground reconstructed image, μ_FG, is tuned to be more sensitive to foreground intensities (eg, corresponding to small spikes from faint target objects), while the background reconstructed image, μ_BG, adapts more strongly to the general trend of the background intensities.
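By way of illustration only, the following sketch evaluates the GPR posterior mean of Equation 12 over the full pixel grid of a small image patch and forms the mask of Equation 14. It assumes a two-argument kernel callable such as the SE kernel sketched after Equation 17 below; function names are illustrative rather than part of the disclosure. Because the kernel solve is cubic in the number of pixels, this direct form is practical only on small patches (the block subdivision discussed later addresses this).

    import numpy as np

    def gpr_reconstruct(patch, kernel, noise_var):
        """Posterior-mean (MAP) reconstruction of Equation 12 evaluated on
        the full grid X: mu(x*) = k(x*)^T (K + dn^2 I)^-1 y."""
        h, w = patch.shape
        rows, cols = np.mgrid[0:h, 0:w]
        X = np.column_stack([cols.ravel(), rows.ravel()]).astype(float)
        y = patch.ravel().astype(float)
        K = kernel(X, X)                      # kernel matrix (Equation 5)
        alpha = np.linalg.solve(K + noise_var * np.eye(y.size), y)
        # With x* drawn from the same grid X, the vectors k(x*) stack into
        # K itself, so all reconstructions come from a single product.
        return (K @ alpha).reshape(h, w)

    def foreground_mask(patch, kernel_fg, kernel_bg, noise_var):
        """Foreground mask estimate of Equation 14: m = mu_FG - mu_BG."""
        mu_fg = gpr_reconstruct(patch, kernel_fg, noise_var)
        mu_bg = gpr_reconstruct(patch, kernel_bg, noise_var)
        return mu_fg - mu_bg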

[0092] As described previously, the kernel function corresponds to an inner product in a higher-dimensional embedding space (see Equation 5). This requires K to be positive semidefinite for any X, which imposes certain conditions on k. In this embodiment, X is a uniform 2D grid and this commends the use of a homogeneous kernel dependent on the spatial relationship between pixels. In one example, a squared exponential (SE) kernel, where the value k(x, x') is a function only of the displacement x − x', is adopted.

[0093] The squared exponential (SE) kernel may be defined as follows:

k(x, x') = δ_f² exp( −½ (x − x')ᵀ Λ⁻¹ (x − x') ) (Equation 15)

[0094] where δ_f is the scale factor, and Λ = diag(δ_1², δ_2²) with δ_1, δ_2 being the spatial bandwidths. The reconstructed background image in Figure 9 and the reconstructed foreground image in Figure 10 use the same noise δ_n (see Equation 7) and scale δ_f magnitudes, but as will be apparent from below, different spatial bandwidths δ_1, δ_2.

[0095] Other homogeneous kernels that may be adopted include the radial basis function (RBF) kernel

k(x, x') = δ_f² exp( −‖x − x'‖² / (2δ²) ) (Equation 16)

[0096] and the Ornstein-Uhlenbeck (OU) kernel

k(x, x') = δ_f² exp( −‖x − x'‖ / δ ) (Equation 17)

[0097] The RBF and OU kernels can be interpreted as special cases of the SE kernel, with a main distinguishing factor being that the former two are isotropic. This implies that the RBF and OU kernels assume the rate of intensity change in the horizontal and vertical directions to be the same, and this assumption may not be applicable depending on the original image.
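The three kernels of Equations 15 to 17 may be sketched as follows, in a form that plugs directly into the gpr_reconstruct sketch above; parameter names are illustrative. For example, kernel_bg = lambda A, B: se_kernel(A, B, scale, bw1_bg, bw2_bg) would supply the background kernel with its larger bandwidths.

    import numpy as np

    def se_kernel(X1, X2, scale, bw1, bw2):
        """Anisotropic squared exponential kernel (Equation 15) with
        Lambda = diag(bw1^2, bw2^2)."""
        diff = X1[:, None, :] - X2[None, :, :]
        quad = (diff[..., 0] / bw1) ** 2 + (diff[..., 1] / bw2) ** 2
        return scale ** 2 * np.exp(-0.5 * quad)

    def rbf_kernel(X1, X2, scale, bw):
        """Isotropic RBF kernel (Equation 16): the SE kernel with equal
        horizontal and vertical bandwidths."""
        return se_kernel(X1, X2, scale, bw, bw)

    def ou_kernel(X1, X2, scale, bw):
        """Ornstein-Uhlenbeck kernel (Equation 17): decay with the
        Euclidean distance itself rather than its square."""
        diff = X1[:, None, :] - X2[None, :, :]
        return scale ** 2 * np.exp(-np.linalg.norm(diff, axis=-1) / bw)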

[0098] As noted above, the free parameters or hyperparameters for the SE kernel are θ = [δ_n, δ_f, δ_1, δ_2]ᵀ. As would be appreciated, the GPR reconstruction of the image behaves as a tuneable low-pass filter that denoises the image. In this application, where the foreground mask in accordance with Equation 14 is to be estimated, the reconstructed background image, μ_BG, is parameterised or tuned to allow only low frequency (background) signals to pass through corresponding to background artefacts, while the reconstructed foreground image, μ_FG, is parameterised to admit both low (background) signals corresponding to background artefacts and medium frequency signals that correspond to the candidate orbiting objects. As referred to above, the candidate orbiting objects at this stage will include both static objects such as stars and moving objects such as RSOs which are to be detected.

[0099] Referring now to Figure 8, there is shown a plot 800 of the Fast Fourier Transform (FFT) of the pixel intensities along the sectional line depicted on the image illustrated in Figure 4 assuming a sampling frequency of 1000. Superimposed on plot 800 are shown conceptually the two low-pass filters 810, 820 that are required, corresponding to the background reconstructed image, μ_BG, and the foreground reconstructed image, μ_FG. As can be seen from Figure 8, the respective cut-off frequencies for each of the filters are configured to in effect attenuate much of the noise but still retain some medium frequency components including the expected target object signal.

[00100] Based on the above considerations, the hyperparameters θ_BG and θ_FG, respectively for μ_BG and μ_FG, that can achieve the above effects for a given input image are determined.

[00101] In one example embodiment, the determination of the hyperparameters θ_BG and θ_FG is by a combined optimisation process based on the spectral content of both the reconstructed background image and the reconstructed foreground image. In one example, define the loss function

L(θ_BG, θ_FG) = Σ_{f < f₁} |M_BG(f; θ_BG) − M_FG(f; θ_FG)| − Σ_{f₁ ≤ f < f₂} |M_BG(f; θ_BG) − M_FG(f; θ_FG)| + Σ_{f ≥ f₂} ( |M_BG(f; θ_BG)| + |M_FG(f; θ_FG)| ) (Equation 18)

[00102] where M_BG(f; θ_BG) is the FFT of the GPR based reconstructed background image, μ_BG, using the hyperparameters θ_BG. Similarly, M_FG(f; θ_FG) is the FFT of the GPR based reconstructed foreground image, μ_FG, using the hyperparameters θ_FG.

[00103] In this example, f₁ is the desired first cut-off frequency of the background reconstructed image and f₂ is the desired second, greater cut-off frequency of the foreground reconstructed image, where both first and second cut-off frequencies are determined based on the image and object characteristics as will be described below. The desired hyperparameters are then optimised as

(θ*_BG, θ*_FG) = argmin over (θ_BG, θ_FG) of L(θ_BG, θ_FG) (Equation 19)

[00104] The combination of Equations 18 and 19 functions to determine those respective hyperparameters that define the reconstructed background image and the reconstructed foreground image that collectively:

• minimise the difference between the spectral content of the reconstructed background image, μ_BG, and the reconstructed foreground image, μ_FG, in the frequency band [0, f₁), as a result preserving the background signals;

• maximise the difference between the spectral content of the reconstructed background image, μ_BG, and the reconstructed foreground image, μ_FG, in the frequency band [f₁, f₂), as a result separating the background signal from the object signal; and

• minimise the spectral content of both μ_BG and μ_FG in the frequency band [f₂, ∞), as a result eliminating noise in this frequency regime.

[00105] The setting of the first and second cut-off frequencies f₁ and f₂ depends on the expected appearance of objects of interest (eg, RSOs, stars) in an image. For example, in the image illustrated in Figure 3, the foreground objects range in size from 6 to 100 pixels. In this example, this corresponds to a first cut-off frequency of f₁ = 10 Hz and a second cut-off frequency of f₂ = 150 Hz, again using an FFT sampling frequency of 1000 Hz. In effect, the first cut-off frequency corresponds to the noise floor, and the second cut-off frequency is that frequency that corresponds approximately to the object size and above which it is accepted the spectral content arises from high frequency noise.

[00106] Equation 19 may be solved by a suitable non-linear optimisation process. In one example, Equation 19 is solved by employing a grid search over the domain of θ_BG and θ_FG.
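By way of illustration only, the following sketch evaluates a band-wise loss in the spirit of Equation 18 on the 1D FFT of a sectional line through a prototypical image and grid-searches candidate bandwidths per Equation 19. As an illustrative simplification, the search is shown over a single spatial bandwidth for each reconstruction, with the noise and scale magnitudes held fixed, and reconstruct is a placeholder for a 1D GPR reconstruction.

    import itertools
    import numpy as np

    def spectral_loss(recon_bg, recon_fg, f1, f2, fs=1000.0):
        """Band-wise loss in the spirit of Equation 18: preserve agreement
        below f1, separate in [f1, f2), suppress everything above f2."""
        freqs = np.fft.rfftfreq(recon_bg.size, d=1.0 / fs)
        F_bg = np.abs(np.fft.rfft(recon_bg))
        F_fg = np.abs(np.fft.rfft(recon_fg))
        low, mid, high = freqs < f1, (freqs >= f1) & (freqs < f2), freqs >= f2
        return (np.sum(np.abs(F_bg[low] - F_fg[low]))
                - np.sum(np.abs(F_bg[mid] - F_fg[mid]))
                + np.sum(F_bg[high] + F_fg[high]))

    def grid_search(section, reconstruct, bandwidth_grid, f1=10.0, f2=150.0):
        """Exhaustive search (Equation 19) over candidate bandwidth pairs;
        reconstruct(section, bw) returns the GPR reconstruction of the
        1D section for bandwidth bw."""
        best = None
        for bw_bg, bw_fg in itertools.product(bandwidth_grid, repeat=2):
            if bw_fg >= bw_bg:    # background must be the smoother filter
                continue
            loss = spectral_loss(reconstruct(section, bw_bg),
                                 reconstruct(section, bw_fg), f1, f2)
            if best is None or loss < best[0]:
                best = (loss, bw_bg, bw_fg)
        return best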

[00107] Referring now to Figure 9, there is shown a reconstructed background image 900 of the subimage 410 illustrated in Figure 4 in accordance with an illustrative embodiment.

[00108] Referring now to Figure 10, there is shown a foreground reconstruction image 1000 of the subimage 410 illustrated in Figure 4 in accordance with an illustrative embodiment. As referred to previously, both reconstructed images have the same noise δ_n and scale δ_f magnitudes but different spatial bandwidths δ_1, δ_2 that are determined in accordance with the optimisation process described above. As expected, the bandwidths for the reconstructed background image, μ_BG (ie, Figure 9), are larger than the bandwidths for the reconstructed foreground image, μ_FG (ie, Figure 10). As can be seen from inspection, the noise in the original image (ie, Figure 4) is suppressed, though with different strengths in Figures 9 and 10, and in the reconstructed background image of Figure 9 the object of interest 420 has been removed while in the reconstructed foreground image of Figure 10 object 420 is still present.

[00109] Referring now to Figure 11, there is shown a plot 1100 of the foreground 1120 and background 1130 reconstructions of pixel intensities along the sectional line 320 of the image illustrated in Figure 3 as compared to the raw or original pixel intensities 1140 from the original image.

[00110] Referring now to Figure 12, there is shown the estimated foreground mask 1200, m(x_i), for application to the subimage 410 illustrated in Figure 4 based on the reconstructed background image, μ_BG, and the reconstructed foreground image, μ_FG, illustrated in Figures 9 and 10 respectively. As can be seen by inspection, m(x_i) has suppressed the effects of noise and non-stationary background pixel intensities while the faint target object remains apparent in m(x_i). This can be seen in Figure 13, which shows a plot 1300 of the foreground mask 1310 as determined along the sectional line 320 shown in Figure 3, and which shows m(x_i) selecting for the target object 420 at pixel locations 1311.

[00111] Referring now to Figure 14, there is shown a plot 1400 of the Fourier transform of the pixel intensities 1410 along the sectional line depicted on the image illustrated in Figure 3 in addition to the Fourier transform of the foreground 1420 and background reconstructed images 1430 shown in Figure 11. Also shown on plot 1400 is the first cut-off frequency 1450 and the second cut-off frequency 1460. As can be seen from inspection, both reconstructed images have attenuated much of the noise while the foreground reconstructed image 1420 still retains some medium frequency components including in this case the target object signal.

[00112] Referring back to Figure 7, at step 730 the determined foreground mask is then applied to the image to generate the corresponding foreground image, and this step is repeated for the entire sequence of consecutive images 111 to generate the corresponding sequence of foreground images 121.

[00113] Referring now to Figure 15, there is shown the foreground image 1500 obtained by applying the foreground mask 1200, m(x_i), as shown in Figure 12 to the subimage 410. As can be seen by comparison with the processed image in Figure 5, where a simple threshold was applied, the resulting foreground image 1500 shown in Figure 15 is much cleaner and clearly shows object 420. As can be seen from inspection of foreground image 1500, celestial objects (eg, 1510) will be elongated due to their relative movement in each image.

[00114] Generating foreground images 121 in accordance with the present disclosure provides many computational and processing benefits. As a first consideration, it is generally only necessary to carry out hyperparameter tuning on a few prototypical images from the sequence of images 111, which may be selected by an operator, and the optimised hyperparameters may then be reused for other images that were captured in the same setting (eg, see Equations 18 and 19). In this manner, computational savings of O(K), where K is the number of images in the sequence of consecutive images 111, may be expected.

[00115] For the posterior mean as defined by Equation 12, since the domain X is the same 2D grid, as long as the input image is of the same size, the matrix K + δ_n²I can be pre-computed and pre-inverted before calculating the reconstruction μ(x_*). Moreover, across different x_* the input intensity values y do not change, hence (K + δ_n²I)⁻¹y can also be precomputed. In this manner, computational savings of O(N³), where N is the number of pixels in each image, may be expected.

[00116] Accordingly, the main computational effort is to calculate k(x_*) (linear in the number of pixels) and multiply it with (K + δ_n²I)⁻¹y (quadratic in the number of pixels), which is still computationally intensive. In one example embodiment, in order to speed up computations the input image may be further subdivided into 100 pixel by 100 pixel blocks and the reconstruction of the foreground and background image is carried out on each subimage individually. This speeds up the reconstruction process without sacrificing accuracy.
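By way of illustration only, the precomputation and block subdivision just described can be sketched as follows; padding partial edge blocks up to the full block size is an illustrative choice so that a single precomputed smoother serves every block of every image captured in the same setting.

    import numpy as np

    def make_block_reconstructor(kernel, noise_var, block=100):
        """Precompute the GPR smoother for a fixed block-sized grid X so
        the O(N^3) inversion is paid once."""
        rows, cols = np.mgrid[0:block, 0:block]
        X = np.column_stack([cols.ravel(), rows.ravel()]).astype(float)
        K = kernel(X, X)
        # Pre-invert (K + dn^2 I); per block, one matrix-vector product.
        smoother = K @ np.linalg.inv(K + noise_var * np.eye(block * block))

        def reconstruct(image):
            h, w = image.shape
            hp = -(-h // block) * block      # round up to whole blocks
            wp = -(-w // block) * block
            padded = np.pad(image.astype(float),
                            ((0, hp - h), (0, wp - w)), mode="edge")
            out = np.empty_like(padded)
            for r in range(0, hp, block):
                for c in range(0, wp, block):
                    tile = padded[r:r + block, c:c + block].ravel()
                    out[r:r + block, c:c + block] = (
                        smoother @ tile).reshape(block, block)
            return out[:h, :w]

        return reconstruct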

[00117] Referring back to Figure 1, at step 130, the corresponding sequence of foreground images 121 is registered to a common coordinate frame to bring the foreground images into geometric alignment. In this common coordinate frame, different observations of the same celestial object will converge to the same pixel location, while the observations of an orbiting object or RSO will be point-like and will follow a trajectory over the sequence of foreground images 121 when they are considered collectively.

[00118] Referring now to Figure 16, there is shown a flowchart of a method 1600 for registering a sequence of foreground images to a common coordinate frame according to an illustrative embodiment.

[00119] At step 1610, a reference foreground image F_r is selected from the sequence of foreground images 121 to define the common coordinate frame. In one example, the middle image in the sequence of foreground images 121 is selected.

[00120] At step 1620, the reference foreground image and the other foreground images F_t, where t ≠ r, are reduced into respective sets of discrete points by, in this example, adopting a connected component analysis to extract the centroids of the connected foreground regions in the reference foreground image and the other foreground images.

[00121] At step 1630, the reference set of discrete points is defined as P_r, which corresponds to the reference foreground image which will define the common coordinate frame.
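By way of illustration only, the reduction to discrete points at steps 1620 and 1630 may be sketched with SciPy's connected component labelling; thresholding the foreground image at zero is an illustrative choice.

    import numpy as np
    from scipy import ndimage

    def to_discrete_points(foreground_image, threshold=0.0):
        """Reduce a foreground image to the centroids of its connected
        foreground regions (steps 1620 and 1630)."""
        binary = foreground_image > threshold
        labels, count = ndimage.label(binary)
        # center_of_mass returns (row, col); flip to (x, y) convention.
        centroids = ndimage.center_of_mass(binary, labels,
                                           range(1, count + 1))
        return np.array([(c, r) for r, c in centroids])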

[00122] At step 1640, a set of discrete points, say P_t, which is not equal to the reference set of discrete points P_r, is selected.

[00123] At step 1650, the planar perspective transform or 2D homography transform, defined by the 3 x 3 matrix H_t, that warps or transforms P_t to align with P_r is estimated.

[00124] The warping of an image point p (a 2D coordinate) in P_t is calculated as

f(p | H_t) = ( H_t^(1:2) [p; 1] ) / ( H_t^(3) [p; 1] ) (Equation 20)

[00125] where H_t^(1:2) is the first 2 rows of H_t, and H_t^(3) is the 3rd row of H_t. In this case, an estimate of the planar perspective transform is determined based on the discrete point sets P_t and P_r, as referred to above.
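The warp of Equation 20 may be sketched as follows for a 3 x 3 homography H and row-stacked 2D points; names are illustrative.

    import numpy as np

    def warp_points(points, H):
        """Apply the planar perspective transform of Equation 20:
        f(p | H) = (H[0:2] @ [p; 1]) / (H[2] @ [p; 1])."""
        hom = np.column_stack([points, np.ones(len(points))])   # [p; 1]
        num = hom @ H[:2].T      # first two rows of H
        den = hom @ H[2]         # third row of H
        return num / den[:, None]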

[00126] If the two point sets fully overlap, ie, each point in P_t has a genuine matching point in P_r (implying also that N_t = N_r), then in one example embodiment an iterative closest points (ICP) algorithm may be adopted to estimate H_t. In the context of determining orbiting objects, the point sets will only partially overlap due to the apparent motion of the background stars. This produces non-matching points, which act as outliers that can bias the ICP approach.

[00127] In another example embodiment, a modified ICP approach is adopted. In accordance with this approach, given a candidate H_t, define

r_i(H_t) = min over q ∈ P_r of ‖ q − f(p_i | H_t) ‖ (Equation 21)

[00128] as the residual of the i-th point p_i ∈ P_t. Intuitively, r_i(H_t) is the distance of the point in P_r that is closest to the warped or transformed version of p_i.

[00129] Also, let

r_(1)(H_t) ≥ r_(2)(H_t) ≥ … ≥ r_(N_t)(H_t) (Equation 22)

[00130] indicate the ordering of the residuals, such that r_(i)(H_t) is the i-th largest residual amongst r_1(H_t), …, r_(N_t)(H_t) for the candidate homography H_t.

[00131] In this example, the ICP procedure is based on a reduced set of residuals in that the trimmed sum of squared residuals defined as follows is minimised

C(H_t) = Σ from i = N_t − x + 1 to N_t of r_(i)(H_t)² (Equation 23)

[00132] over the unknown homography H_t, where x ≤ N_t.

[00133] In this approach, the trimmed ICP (T-ICP) only minimises the smallest residuals,

which enables outliers (non-matching points) to be ignored. In one example, x is chosen to be half of

[00134] To minimise the T-ICP cost, an alternating point-to-point assignment and transformation estimation (using only the x-best assignments) technique is used. This is summarised in Procedure 1 below.

[00135] Procedure 1 for Image Registration

Require Point sets and trimming parameter convergence

threshold e.
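By way of illustration only, the following is a minimal sketch of the alternating assignment and estimation loop summarised in Procedure 1, assuming NumPy, SciPy and OpenCV are available; the function name t_icp, the iteration cap and the form of the convergence test are illustrative assumptions rather than details taken from this specification.

    import numpy as np
    import cv2
    from scipy.spatial import cKDTree

    def t_icp(P_t, P_r, kappa, eps=1e-6, max_iter=100):
        # Estimate the 3 x 3 homography H_t warping point set P_t
        # (N_t x 2) onto the reference set P_r (N_r x 2), keeping only
        # the kappa best (smallest-residual) assignments per iteration.
        # kappa must be at least 4 so the homography is determined.
        tree = cKDTree(P_r)
        H = np.eye(3)
        prev_cost = np.inf
        for _ in range(max_iter):
            # Warp P_t with the current homography (Equation 20).
            p_h = np.hstack([P_t, np.ones((len(P_t), 1))])
            w = p_h @ H.T
            warped = w[:, :2] / w[:, 2:3]
            # Residuals: distance to closest reference point (Equation 21).
            dist, idx = tree.query(warped)
            # Trim: keep the kappa smallest residuals (Equations 22, 23).
            keep = np.argsort(dist)[:kappa]
            cost = float(np.sum(dist[keep] ** 2))
            if prev_cost - cost < eps:
                break
            prev_cost = cost
            # Re-estimate H by least squares on the kept correspondences.
            H, _ = cv2.findHomography(P_t[keep].astype(np.float32),
                                      P_r[idx[keep]].astype(np.float32), 0)
        return H

In this sketch the homography is re-estimated by ordinary least squares over the $\kappa$ best correspondences, mirroring the trimmed cost of Equation 23; a robust estimator could be substituted at that step.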

[00136] Referring now to Figure 17, there is shown a selection of points 1710 (ie, dark) and 1720 (ie, light) and a connection indication 1730 showing a true correspondence between the respective points from the foreground image and the reference foreground image. As can be seen, and in accordance with the modified ICP adopted in the present disclosure, not all points have to be linked, which improves the estimate for $\mathbf{H}_t$.

[00137] At step 1660, once the transformation $\mathbf{H}_t$ has been determined it may then be applied to the corresponding foreground image to register that image to the coordinate frame of the reference foreground image. This process may then be repeated until all the foreground images have been registered to the reference foreground image to generate the sequence of registered foreground images 131. Alternatively, all the homographies may be estimated initially and then used to register the foreground images to the foreground reference frame $I_r$ to generate the sequence of registered foreground images 131.
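By way of illustration only, step 1660 may be realised with a perspective warp; the following minimal sketch assumes OpenCV and that the homographies of the other foreground images have already been estimated (the function and argument names are illustrative):

    import cv2

    def register_sequence(foreground_images, homographies, ref_shape):
        # Warp each foreground image I_t into the coordinate frame of
        # the reference image I_r using its estimated homography H_t.
        h, w = ref_shape
        return [cv2.warpPerspective(img, H, (w, h))
                for img, H in zip(foreground_images, homographies)]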

[00138] Referring now to Figures 18 and 19, there are shown composite images 1800, 1900 of the sequence of registered foreground images generated by the standard ICP procedure (ie, Figure 18) and by the T-ICP procedure based on the reduced set of residuals (ie, Figure 19) according to an illustrative embodiment. In this example, the images 1800, 1900 have been composited by the max operator, which chooses for a given pixel location the maximum pixel value for that location from the sequence of registered foreground images 131.
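By way of illustration only, the max-operator compositing amounts to a pixel-wise maximum over the registered sequence; a minimal NumPy sketch (assuming the registered images share a common shape) is:

    import numpy as np

    # registered: list of registered foreground images of identical shape.
    # For each pixel location, keep the maximum value across the sequence.
    composite = np.maximum.reduce(registered)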

[00139] As can be seen from inspection of Figure 18, in this example the registration of candidate orbiting object 1810, corresponding to a background star (as can be determined by its elongated characteristics), has resulted in a banded structure in which the depictions of the same candidate orbiting object from the original sequence of foreground images (ie, 1810a, 1810b, 1810c and 1810d) can be seen separately in the composited image 1800. This can be compared with the same object 1910 depicted in image 1900 of Figure 19, which shows some substructure but is more sharply resolved due to the improved registration of the T-ICP procedure in this example.

[00140] Referring again to Figure 1, at step 140 the orbiting object is identified from the one or more candidate orbiting objects in the sequence of registered foreground images. In one example, the orbiting object is identified by determining whether a candidate orbiting object follows a trajectory in the sequence of registered foreground images 131.

[00141] Referring now to Figure 20, there is shown a flowchart of a method 2000 for identifying the orbiting object from the one or more candidate orbiting objects according to an illustrative embodiment.

[00142] At step 2010, the candidate orbiting objects corresponding to celestial objects are removed from the sequence of registered foreground images 131 to determine remaining candidate orbiting objects. As has been referred to previously, celestial objects such as stars and the like present in the region of space being observed will, in a common coordinate frame, appear as overlapping elongated regions arising from the apparent motion of the star during the exposure time for each captured image.

[00143] Referring now to Figure 21, there is shown a flowchart of a method 2010 for removing candidate orbiting objects corresponding to celestial objects from the sequence of registered foreground images according to an illustrative embodiment to determine one or more remaining candidate objects. As shown in Figure 21, in one example removing the celestial objects includes, at step 2011, determining the pixel regions that overlap in respective registered foreground images of the sequence of registered foreground images and, at step 2012, removing the overlapping pixel regions from each of the respective registered foreground images 131. In another example, the overlapping pixel regions and any connected pixel regions are also removed from the respective registered foreground images. This effectively removes any objects that are static over time across the sequence of registered foreground images 131, corresponding to background celestial objects such as stars.
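By way of illustration only, steps 2011 and 2012 may be realised as follows, assuming binary foreground masks and SciPy's connected-component labelling; the function name and the rule that a pixel is "overlapping" when it is foreground in two or more frames are illustrative assumptions:

    import numpy as np
    from scipy import ndimage

    def remove_static_objects(masks):
        # masks: list of binary (H, W) foreground masks from the
        # registered sequence. Pixels foreground in two or more frames
        # are treated as static celestial objects; any connected region
        # touching such pixels is removed from every frame.
        stack = np.stack(masks).astype(bool)
        overlap = stack.sum(axis=0) >= 2
        cleaned = []
        for m in stack:
            labels, _ = ndimage.label(m)
            static = np.unique(labels[overlap & m])  # labels to drop
            cleaned.append(m & ~np.isin(labels, static))
        return cleaned

Removing whole connected regions, rather than just the overlapping pixels, corresponds to the second example above in which any connected pixel regions are also removed.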

[00144] Following this step, the sequence of registered foreground images will contain only the remaining candidate orbiting objects, consisting of orbiting objects and any false foreground pixels that were not removed by the background artefact removal process at step 120.

[00145] Referring back to Figure 20, at step 2020 the orbiting object is identified by determining whether any of the remaining candidate orbiting objects follows a trajectory in the sequence of registered foreground images 131.

[00146] Referring now to Figure 22, there is shown a flowchart of a method 2020 for determining whether a selection of the remaining candidate orbiting objects follows a trajectory according to an illustrative embodiment. As would be appreciated, each of the registered foreground images 131 corresponds to one of the sequence of consecutive images that were captured at successive times and, as such, each of the registered foreground images is associated with a time which may be expressed as an absolute time or a relative time. As an example, for a sequence of five consecutive images captured with a 5 second exposure time and with a 10 second interval between each image, the relative timings for the sequence of consecutive images would be at 0, 15, 30, 45 and 60 seconds.

[00147] At step 2021, a pair of candidate orbiting objects is selected from the remaining candidate orbiting objects in a respective pair of registered foreground images. In one example, the pair of registered foreground images are consecutive images from the sequence of registered foreground images 131.

[00148] At step 2022, a proposed trajectory that passes through the selected pair of candidate orbiting objects is determined. In one example, the proposed trajectory is a line that is based on the times associated with each of the candidate orbiting objects, which correspond to the respective times of the images in which the candidate orbiting objects are located. As would be appreciated, other types of trajectories may be proposed, but this may require further candidate orbiting objects from other registered foreground images. The proposed trajectory will define hypothetical locations of the orbiting object in each of the other registered foreground images in the sequence of registered foreground images.

[00149] At step 2023, it is determined whether there is at least one other registered foreground image that includes a remaining candidate orbiting object lying on the proposed trajectory. Assuming that this is the case, the candidate orbiting objects from the original pair of registered foreground images and the remaining candidate orbiting object in the at least one other registered foreground image that lies on the proposed trajectory are linked or associated together and determined to identify an orbiting object. In another embodiment, remaining candidate orbiting objects that lie on the proposed trajectory must be found in at least two other registered foreground images in order for there to be a determination that these objects may be associated together and relate to an orbiting object.

[00150] In one example, the sequence of registered foreground images 131 is "collapsed" by application of the max operator, as referred to previously, to form a composite image so that the images 131 may be considered collectively. In another example, the centroids of all the binary connected regions remaining in the sequence of registered foreground images 131, corresponding to the remaining candidate orbiting objects, are determined in order to define a set of spatial coordinates for the remaining candidate orbiting objects, each of which will also have an associated time based on the corresponding time of the registered foreground image in which the remaining candidate orbiting object is present.

[00151] In the composite image, the proposed trajectory can be displayed, and whether a respective registered foreground image includes a remaining candidate orbiting object can be determined by whether that object lies on the line in the composite image. In this example, a trajectory is said to have a support when the location of a remaining candidate orbiting object, which may be defined by its centroid, exists within a predetermined vicinity of the proposed trajectory. In this manner, the number of supports may be determined for each proposed trajectory and a threshold applied to determine whether an orbiting object has been detected.
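By way of illustration only, the trajectory proposal and support count may be sketched as follows, assuming NumPy; the constant-velocity (straight line) model follows the example above, while the function name and the default radius of the "predetermined vicinity" are illustrative assumptions:

    import numpy as np

    def trajectory_support(points, times, i, j, radius=3.0):
        # points: (M, 2) centroids of the remaining candidate orbiting
        #         objects pooled over the registered sequence.
        # times:  (M,) capture time of the frame each centroid came from.
        # The pair (i, j) proposes a constant-velocity trajectory; a
        # centroid supports it when it lies within `radius` pixels of the
        # predicted location at its own capture time (the pair itself
        # contributes to the count). Assumes times[i] != times[j].
        p_i, p_j = points[i], points[j]
        v = (p_j - p_i) / (times[j] - times[i])   # pixels per second
        predicted = p_i + np.outer(times - times[i], v)
        distances = np.linalg.norm(points - predicted, axis=1)
        return int(np.sum(distances <= radius))

A proposed trajectory whose support meets a chosen threshold (eg, the support of 4 in the example of Figure 23 below) would then be determined to identify an orbiting object.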

[00152] Referring now to Figure 23, there is shown a composite image 2300 of an identified trajectory 2310 indicating an orbiting object in accordance with an illustrative embodiment. Composite image 2300 shows a close-up of five observations of an object 2320 in the registered foreground images collapsed by the max operator. The frame indices of the sequence of registered foreground images 131 and the proposed coordinates are shown above (2330) and below (2340) respectively. In this particular example, candidate orbiting objects from images 3 and 4 are chosen as the initial pair 2321. A proposed trajectory 2310 is formed by the line connecting the two candidate orbiting objects 2321, and the neighbourhoods of the locations of the remaining observations are marked by validly activated circles 2322 and invalidly activated circles 2323. In this example, the proposed trajectory has a support of 4, as the remaining candidate orbiting objects 2324 are within the validly activated circles 2322.

[00153] As would be appreciated, the present disclosure provides a method and system for detecting an orbiting object having significant computational benefits when compared to prior art methods, being capable of providing real time detection using standard computer processing power such as would be found in a typical laptop computer. As such, the method and system of the present disclosure may be deployed in more cost-effective systems for determining SSA, such as nano-satellites, due to the reduced computational burden and the requirement for only generic computing componentry.

[00154] Throughout this specification the term "real time", when pertaining to the object detection method and system of the present disclosure, is taken to mean that the results of the method and system are available substantially in real time, or near real time, as compared to the time involved in the capturing of the relevant images. It is understood that the term "real time" is not intended to require that the method and system of the present disclosure provide results instantaneously.

[00155] Those of skill in the art would appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed may be implemented as electronic hardware, computer software or instructions, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether the disclosed functionality is implemented as hardware or software will depend upon the requirements of the application and design constraints imposed on the overall system.

[00156] It would be further appreciated that the systems and methods described here may be implemented using multiple components and modules that may be separate or co-located. As an example, a space-based satellite system may include a suitable sensor, and all of the processing steps may be carried out on board the satellite or, alternatively, the processing steps may be distributed between the satellite and any related ground-based computing facility. Furthermore, the components of the system may be interconnected by any form or medium of digital data communication.

[00157] Throughout the specification and the claims that follow, unless the context requires otherwise, the words "comprise" and "include" and variations such as "comprising" and "including" will be understood to imply the inclusion of a stated integer or group of integers, but not the exclusion of any other integer or group of integers.

[00158] The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement of any form of suggestion that such prior art forms part of the common general knowledge.

[00159] It will be appreciated by those skilled in the art that the invention is not restricted in its use to the particular application described. Neither is the present invention restricted in its preferred embodiment with regard to the particular elements and/or features described or depicted herein. It will be appreciated that the invention is not limited to the embodiment or embodiments disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the scope of the invention as set forth and defined by the following claims.