

Title:
IMAGE CAPTURE APPARATUS
Document Type and Number:
WIPO Patent Application WO/2022/219327
Kind Code:
A2
Abstract:
An image capture apparatus for imaging at least one fingerprint on an object, the apparatus comprising a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor. The one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate. The image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate. The image processor is operable to process the captured at least one image to detect the location of the or each fingerprint present on the object, and to create fingerprint location data based at least in part on the detected location of the or each fingerprint on the object.

Inventors:
KING ROBERTO (GB)
SMITH ADAM (GB)
Application Number:
PCT/GB2022/050921
Publication Date:
October 20, 2022
Filing Date:
April 13, 2022
Assignee:
FOSTER & FREEMAN LTD (GB)
International Classes:
G06V40/13; G06V10/141; G06V10/143
Attorney, Agent or Firm:
LAWRIE IP LIMITED (GB)
Claims

1. An image capture apparatus for imaging at least one fingerprint on an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate; and wherein the image processor is operable to process the captured at least one image to detect the location of the or each fingerprint present on the object, and to create fingerprint location data based at least in part on the detected location of the or each fingerprint on the object.

2. The image capture apparatus of claim 1, wherein the image capture apparatus is operable to capture the at least one image at a resolution of at least 1,000 pixels per inch (ppi), or greater than 1,000 ppi.

3. The image capture apparatus of claim 2, wherein the object comprises one or more major surfaces, and wherein the image capture device is operable to capture the at least one image of substantially all of the major surface of the object at a resolution of at least 1,000 ppi, or greater than 1,000 ppi, when the object is located on the substrate.

4. The image capture apparatus of any preceding claim, wherein the image capture device comprises a resolution enhancement element operable to increase the resolution of the captured at least one image.

5. The image capture apparatus of claim 4, wherein the resolution enhancement element includes a pixel stepping and/or pixel sub-stepping element.

6. The image capture apparatus of any preceding claim, wherein the image capture apparatus is operable to automatically selectively activate some, or all, of the one or more light sources and to capture at least one image during the activation of the, or each, light source.

7. The image capture apparatus of any preceding claim, wherein the image capture apparatus comprises at least two light sources, each light source being operable to illuminate the object at a different wavelength band to the other light source(s).

8. The image capture apparatus of any preceding claim, wherein the image processor is operable to detect the location of the at least one fingerprint using an image, or images, captured using any one or all of the light source(s).

9. The image capture apparatus of any preceding claim, wherein the image capture apparatus comprises a light filter arranged in the optical light path between the image capture device and the substrate.

10. The image capture apparatus of claim 9, wherein the image capture apparatus is operable to configure the light filter between one or more filter modes, wherein the, or each, filter mode is configured to permit at least a portion of light within a wavelength band to pass therethrough, and wherein the wavelength band is associated with one or more of the light sources of the apparatus.

11. The image capture apparatus of any preceding claim, wherein the image capture apparatus comprises a display operable to display the captured at least one image and/or one or more images created by the image processor.

12. The image capture apparatus of claim 11, wherein the display is operable to display one or more images, the one or more images being based at least in part on the fingerprint location data created by the image processor.

13. The image capture apparatus of any preceding claim, wherein the image processor is operable to carry out one or more pattern detection processes, edge detection processes, ridge detection processes, and/or friction ridge detection processes, and wherein the detection of the location of the at least one fingerprint is carried out, at least in part, using the one or more pattern detection processes, edge detection processes, ridge detection processes, and/or friction ridge detection processes.

14. The image capture apparatus of any preceding claim, wherein the image processor comprises one or more detection units operable to, at least in part, detect the location of the at least one fingerprint, and wherein the one or more detection units include an artificial intelligence unit, a machine learning unit, a neural network unit, a convolutional neural network and/or a faster R-CNN (FRCNN) neural network.

15. The image capture apparatus of any preceding claim, wherein the image capture device is configured to capture the at least one image in monochrome or colour, and wherein the image processor is configured to convert the, or each, at least one colour image from the image capture device to a monochrome image prior to detecting the location of the, or each, fingerprint present on the object.

16. The image capture apparatus of claim 15 when dependent on claim 14, wherein the monochrome image created by the image processor or the monochrome image captured by the image capture device forms the input to the one or more detection units.

17. The image capture apparatus of any preceding claim, wherein the image processor is operable to process the captured at least one image to detect the location of a plurality of fingerprints on the object, or every fingerprint located on the object.

18. The image capture apparatus of any preceding claim, wherein the fingerprint location data created by the image processor includes the probability that a fingerprint is present at a particular location or zone of the object.

19. The image capture apparatus of any preceding claim, wherein the image processor is operable to represent the location of the at least one fingerprint using a feature-mapping process.

20. The image capture apparatus of claim 19, wherein the feature-mapping process is operable to represent the probability of a fingerprint being present at one or more discrete locations on the object.

21. The image capture apparatus of claim 19 or claim 20, wherein the feature-mapping process is operable to make a plurality of predictions of the presence or absence of a fingerprint within the same fingerprint area.

22. The image capture apparatus of any of claims 19 to 21, wherein the feature-mapping process is operable to represent the location of the at least one fingerprint using a heat-map process representing the probability of one or more points, or a plurality of points, or one or more zones, or a plurality of zones, having a fingerprint located there.

23. The image capture apparatus of any preceding claim, wherein the image processor includes a classification module operable to compare the at least one fingerprint on the captured at least one image against a classification library and to output the probability that a fingerprint is present at that location.

24. The image capture apparatus of claim 23, wherein the classification module is operable to use the fingerprint location data, at least in part, to determine which parts of the captured at least one image to analyse and/or wherein the classification module is operable to use data from the detection unit(s), at least in part, to determine which parts of the captured at least one image to analyse and/or wherein the classification module is operable to use the feature-mapping data, at least in part, to determine which parts of the captured at least one image to analyse.

25. The image capture apparatus of any preceding claim, wherein the image processor is operable to create a report, or reports, indicating the location of the fingerprint, or fingerprints, on the object.

26. The image capture apparatus of any preceding claim, wherein the image capture apparatus comprises a user input control operable to activate or control at least one of: the light source, or sources, to be used, the filter, and/or the operation of the image capture device in response to user input, and wherein the image capture apparatus is configured to automatically detect the location of the, or each, fingerprint on the object in response to the user input.

27. An image capture apparatus for imaging at least a portion of an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; and wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate.

28. A method of detecting at least one fingerprint on an object, the method comprising the steps of: providing an image capture apparatus for imaging at least one fingerprint on an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate; and wherein the image processor is operable to process the captured at least one image to detect the location of the, or each, fingerprint present on the object, and to create fingerprint location data based at least in part on the detected location of the, or each, fingerprint on the object; positioning the object on the substrate; activating at least one of the light source(s); capturing at least one image of the object using the image capture device; and determining the location of the at least one fingerprint on the object using the image processor.

29. A method of imaging an object, the method comprising the steps of: providing an image capture apparatus for imaging at least a portion of an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor; wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; and wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate; positioning the object on the substrate; activating at least one of the light source(s); and capturing at least one image of the object using the image capture device.

Description:
IMAGE CAPTURE APPARATUS

Field of the invention

The present invention relates to an image capture apparatus and particularly, but not exclusively, to an image capture apparatus for imaging at least one fingerprint on an object.

Background to the invention

An image capture apparatus can be used to capture and process images of an object. One such application is the use of fingerprint imaging cameras to image fingerprints on an object, such as a piece of paper.

Known fingerprint imaging cameras and viewers can be laborious to use. The operator must typically manually capture an image of a piece of paper having a fingerprint thereon. Subsequently, the images obtained using this method are categorised and reviewed. The process of obtaining fingerprints, scanning them, sorting them, and viewing them one-by-one is very time-consuming.

It would therefore be desirable to reduce the time taken to view, analyse, image, and/or store fingerprints.

Some fingerprints located on certain types of object can be hard to locate, and may be missed by a user, particularly if the user is inexperienced or has deteriorating eyesight. It would therefore be desirable to mitigate the risk of fingerprints being missed when an object is analysed, or viewed, for fingerprints.

The inventors have appreciated the shortcomings in known image capture apparatuses.

Statements of invention

According to a first aspect of the present invention there is provided an image capture apparatus for imaging at least one fingerprint on an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate; and wherein the image processor is operable to process the captured at least one image to detect the location of the, or each, fingerprint present on the object, and to create fingerprint location data based at least in part on the detected location of the, or each, fingerprint on the object.
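The role of the image processor can be illustrated with a minimal Python sketch. All names here (`FingerprintLocation`, `detect_fingerprints`), the window-scoring heuristic, and the threshold are assumptions for illustration only and are not taken from the application; a real implementation would use the pattern, ridge, or neural-network detection processes described elsewhere in the text. The sketch scans a greyscale image and emits fingerprint location data as bounding boxes with associated probabilities:

```python
from dataclasses import dataclass

@dataclass
class FingerprintLocation:
    # One item of "fingerprint location data": a bounding box plus a
    # detection probability for that region of the object.
    x: int
    y: int
    width: int
    height: int
    probability: float

def detect_fingerprints(image, window=4, threshold=0.5):
    """Toy image-processor stage: slide a non-overlapping window over a
    greyscale image (a list of rows, pixel values 0..1) and report windows
    whose mean intensity exceeds `threshold` as candidate fingerprint
    locations. The mean-intensity score is a placeholder heuristic."""
    height, width = len(image), len(image[0])
    found = []
    for y in range(0, height - window + 1, window):
        for x in range(0, width - window + 1, window):
            pixels = [image[y + dy][x + dx]
                      for dy in range(window) for dx in range(window)]
            mean = sum(pixels) / len(pixels)
            if mean > threshold:
                found.append(FingerprintLocation(x, y, window, window,
                                                 round(mean, 3)))
    return found
```

The output list is the shape of data a classification module or display stage could then consume.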

The at least one fingerprint may include one or more latent fingerprints or chemically treated fingerprints. Thus, the apparatus can image and detect the location of latent fingerprints, or non-latent fingerprints, or chemically treated fingerprints. For example, the object could include latent, non-latent, or a combination of latent and non-latent fingerprints, and the apparatus is operable to image and create fingerprint location data of both types of fingerprints.

The at least one fingerprint, whether latent or non-latent, may be at least partially obscured. In this example, the image capture apparatus is operable to detect the location of the at least partially obscured fingerprint and to create fingerprint location data accordingly. It will be understood that the fingerprint could be obscured in many ways, such as by debris/dirt, ink, or the like.

The at least one fingerprint may be processed with one or more fingerprint enhancement agents prior to image capture by the apparatus, as is known in the field of fingerprint analysis.

The object may be a substantially planar object. The object may comprise one or more major surfaces. The one or more major surfaces may be sized such that at least one fingerprint can be located thereon. The one or more major surfaces may be sized such that a plurality of fingerprints can be located thereon. The one or more major surfaces may be rectangular shaped, or square-shaped, surfaces. The one or more major surfaces may be planar surfaces. The object may be a paper-based object. The object may be a piece of paper, a document, a book or page thereof, or the like.

The object may be up to ISO A4 size or up to US letter size. The object may be at least A4 size, or US letter size. The major surfaces may be A4 size, or up to US letter size.

The substrate may be configured to accommodate all of the object thereon. The substrate may be configured to accommodate a major surface of the object thereon. The image capture apparatus may comprise one or more of the objects to be imaged. The image capture apparatus may comprise one or more objects for receiving one or more fingerprints thereon. In this arrangement, the, or each, object may be provided with the apparatus as component parts.

The image capture apparatus may comprise a housing. The one or more light sources may be located in the housing. The image capture device may be located in the housing. The image processor may be located in the housing, or may be external to the housing.

The housing may be substantially opaque at the transmission wavelength(s) of the one or more light sources. The housing may be substantially non-fluorescent at the wavelength(s) of the one or more light sources.

The substrate may be a substantially planar member, a plate member, a platen, or the like. The substrate may be located on the housing. The substrate may be located at a top portion of the housing, or a wall portion of the housing. The housing may include one or more cut-out portions, or the like, to allow light from the one or more light sources to reach the substrate, and/or to allow light from the object to reach the image capture device. The substrate may be integral with the housing.

The substrate may be configured to be in conformal contact with at least a portion of a major surface of the object when the object is located on the substrate. The substrate may include a major surface configured to be in contact with the major surface of the object when the object is located on the substrate.

The substrate may include one or more substantially optically transparent portions configured to permit the light from the one or more light sources to be transmitted and/or reflected therethrough.

The optically transparent portion of the substrate may be made from glass, or the like. The optically transparent portion may be a glass plate, or the like.

The substrate may be configured to be non-fluorescent at the transmission wavelength(s) of the one or more light sources. The optically transparent portion of the substrate may be configured to be non-fluorescent at the transmission wavelength(s) of the one or more light sources. The substrate may be sized (i.e. dimensioned) to allow at least an ISO A4 piece of paper or a US letter sized piece of paper to be located thereon. The optically transparent portion of the substrate may be sized to cover at least an A4 piece of paper or a US letter sized piece of paper.

The substrate may include an indicator arranged to indicate where the object should be positioned. The indicator may be located on or adjacent to the optically transparent portion of the substrate. The indicator may be substantially the same size as A4 or US letter paper. The indicator may be one or more lines, labels, etched portions, or the like, on the substrate or transparent portion thereof. The indicator may include one or more guide members configured to guide the object into position relative to the substrate. The guide member(s) may be rail members, raised portions of the substrate, or any suitable guide member for guiding the object into position.

The one or more light sources may be arranged to illuminate at least a portion of the object through the optically transparent portion(s) of the substrate. The one or more light sources may be arranged to illuminate substantially all of the substrate or substantially all of the optically transparent portion(s) thereof.

The apparatus may have a light path between the, or each, light source and the image capture device. The apparatus may include a plurality of light paths from each light source to the image capture device.

The substrate may be located in the light path of the apparatus between the one or more light sources and the image capture device. The optically transparent portion of the substrate may be located in the light path between the one or more light sources and the image capture device.

The housing may include one or more cover members configurable to cover at least a portion of the substrate. The cover member may be configurable to cover at least a portion of the object when the object is located on the substrate.

The cover member may be a hinged cover member.

The cover member may be configured to prevent or mitigate light from being transmitted from the outside of the housing to the inside of the housing. The cover member may be configured to substantially block or reflect light in the visible, infrared and/or ultraviolet wavelengths. The cover member may be configured to be non-fluorescent at the transmission wavelengths of the one or more light sources.

The cover member may be configured to secure the object to the substrate. The cover member may be configured to press the object to the substrate.

The image capture device may be arranged to capture at least a portion of light that is reflected from the object. The image capture device may be arranged to capture at least a portion of light that is reflected from the object and that passes through the optically transparent portion(s) of the substrate.

The one or more light sources may include at least one fluorescence light source operable to cause fluorescence in the object, the fingerprint(s) located on the object, or in a fingerprint enhancement agent applied to the fingerprint(s), as is known in the art of fluorescence-based fingerprint detection.

The image capture device may be arranged to capture at least a portion of light that is emitted from the object, the fingerprint(s) located on the object, or from the fingerprint enhancement agent. The image capture device may be arranged to capture at least a portion of light that is emitted from the object and transmitted through the optically transparent portion(s) of the substrate. The emitted light from the object may be at least partly from fluorescence emission.

The image capture device may be arranged to capture at least a portion of light that is reflected and/or emitted from the object, the fingerprint(s) located on the object, or from the fingerprint enhancement agent. The image capture device may be arranged to capture at least a portion of light that is emitted and/or reflected from the object and transmitted through the optically transparent portion(s) of the substrate.

The apparatus may comprise a single image capture device. The image capture device may be fixed relative to the substrate, or relative to the optically transparent portion of the substrate. The image capture device may be fixed relative to the, or each light source. Each light source may be fixed relative to the substrate, or relative to the optically transparent portion of the substrate.

The image capture device may be configured to be movable relative to the substrate, or relative to the optically transparent portion of the substrate. The image capture device may be movable relative to the, or each light source. The image capture device may be a scan camera, a line scan camera, or an area scan camera.

The image capture device may be a camera, or a digital camera. The image capture device may be operable to capture colour images and/or monochrome images.

The image capture device may include one or more light focussing elements configured to focus light reflected and/or emitted from the object. The one or more light focussing elements may include at least one lens element.

The image capture device may be operable to capture the at least one image at a resolution of at least 1,000 pixels per inch (ppi), or greater than 1,000 ppi. The image capture device may be operable to capture the at least one image of a major surface of the object. The image capture device may be operable to capture the at least one image of the major surface of the object at a resolution of at least 1,000 pixels per inch (ppi), or greater than 1,000 ppi. The image capture device may be operable to capture the at least one image of substantially all of the major surface of the object at a resolution of at least 1,000 pixels per inch (ppi), or greater than 1,000 ppi.
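As a worked example of what these resolution figures imply (assuming ISO A4 dimensions of 210 mm × 297 mm; the function name is illustrative), the pixel count needed to image a surface at a given ppi follows directly from the surface dimensions:

```python
def pixels_for_scan(width_mm: float, height_mm: float, ppi: int) -> int:
    """Total pixel count needed to image a width_mm x height_mm surface
    at the given resolution in pixels per inch (1 inch = 25.4 mm)."""
    MM_PER_INCH = 25.4
    px_w = round(width_mm / MM_PER_INCH * ppi)  # pixels across the width
    px_h = round(height_mm / MM_PER_INCH * ppi)  # pixels down the height
    return px_w * px_h

# ISO A4 (210 mm x 297 mm) at 1,000 ppi
a4_pixels = pixels_for_scan(210, 297, 1000)
print(a4_pixels)  # roughly 96.7 million pixels, i.e. ~97 MP
```

This situates the megapixel figures quoted for the imaging modes: imaging substantially all of an A4 major surface at 1,000 ppi requires a capture of roughly 97 MP.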

The image capture device may be configured to increase the resolution of the captured at least one image. The image capture device may comprise a resolution enhancement element operable to increase the resolution of the captured at least one image. The resolution enhancement element may include a pixel stepping and/or pixel sub-stepping element. In this arrangement, the image capture device is operable to perform pixel stepping and/or pixel sub-stepping.

The image capture device may be operable to capture the at least one image at a resolution of up to 300 megapixel (MP), optionally up to 260 MP, optionally up to 250 MP, optionally up to 244 MP, optionally 244 MP, optionally between 230 MP and 250 MP, optionally between 240 MP and 250 MP, optionally at least 240 MP, optionally up to 240 MP.

The image capture device may be operable to capture the at least one image at up to 100 megapixel (MP), optionally up to 80 MP, optionally up to 70 MP, optionally up to 61 MP, optionally 61 MP, optionally between 50 MP and 70 MP, optionally between 55 MP and 65 MP, optionally at least 50 MP, optionally at least 55 MP. The image capture device may be operable to capture the at least one image at between 50 MP and 260 MP, optionally between 60 MP and 245 MP, optionally 61 MP or 244 MP.

The image capture device may be configurable between a plurality of imaging modes. The plurality of imaging modes may be defined, at least in part, by the resolution of the imaging modes expressed in MP.

The plurality of imaging modes may include a first imaging mode and a second imaging mode. The second imaging mode may have a higher resolution than the first imaging mode. The first imaging mode and/or the second imaging mode may activate the resolution enhancement element to increase the resolution of the captured image.

In the first imaging mode, the image capture device may be operable to capture the at least one image at up to 100 megapixel (MP), optionally up to 80 MP, optionally up to 70 MP, optionally up to 61 MP, optionally 61 MP.

In the second imaging mode, the image capture device may be operable to capture the at least one image at up to 300 megapixel (MP), optionally up to 260 MP, optionally up to 250 MP, optionally up to 244 MP, optionally 244 MP.
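One hedged reading of the two imaging modes, not stated in the application itself, is a 2 × 2 pixel sub-stepping scheme: shifting the sensor by a fraction of a pixel along each axis captures four interleaved sub-frames, multiplying the native pixel count by four, which is consistent with 61 MP × 4 = 244 MP. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def enhanced_megapixels(native_mp: float, steps_x: int, steps_y: int) -> float:
    """Effective resolution after pixel sub-stepping: the sensor is shifted
    to steps_x * steps_y sub-pixel positions and the resulting sub-frames
    are interleaved into one higher-resolution image."""
    return native_mp * steps_x * steps_y

print(enhanced_megapixels(61, 2, 2))  # 244
```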

The image capture apparatus may be operable to save one or more image files from the image(s) obtained in the first imaging mode and/or the second imaging mode.

The image capture apparatus may comprise a plurality of light sources, each light source being operable to illuminate at least a portion of the object. The image capture apparatus may comprise a plurality of different types of light source.

The image capture apparatus may be operable to selectively activate the one or more light sources. The image capture apparatus may be operable to selectively activate some, or all, of the light sources. The image capture apparatus may be configured to automatically activate the, or each, light source, and to capture at least one image during the activation of the, or each, light source. In this arrangement, the image capture apparatus may activate each light source, optionally in a sequence. Each light source may be associated with a wavelength band dictated by its light transmission properties.
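A control loop of the kind described, where each light source is activated in sequence and at least one image is captured during each activation, can be sketched in Python. The driver callables `activate`, `deactivate`, and `capture` are hypothetical stand-ins for the apparatus hardware, not part of the application:

```python
def capture_sequence(light_sources, activate, deactivate, capture):
    """Automatically activate each light source in turn and capture an
    image during each activation, returning a list of (source, image)
    pairs. activate/deactivate/capture stand in for hardware drivers."""
    results = []
    for source in light_sources:
        activate(source)
        try:
            results.append((source, capture()))
        finally:
            deactivate(source)  # never leave a source lit, even on error
    return results
```

Each captured image remains associated with the wavelength band of the light source that was active, so the image processor can use them individually or together.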

The light sources may be arranged to illuminate substantially all of a major surface of the object when the object is located on the substrate. The light sources may be arranged to illuminate substantially all of a major surface of the object when the object is located on the optically transparent portion(s) of the substrate.

The one or more light sources may be operable to illuminate the object at one or more wavelength bands, or a plurality of wavelength bands. The image capture apparatus may be operable to selectively illuminate the object at one or more wavelength bands, two or more wavelength bands, three or more wavelength bands, or four or more wavelength bands.

The image capture apparatus may comprise at least two light sources, each light source being operable to illuminate the object at a different wavelength band to the other of the one or more light sources.

The one or more light sources may include an ultraviolet light source, a visible light source, a violet visible light source, a blue visible light source, a blue-green visible light source, a green visible light source, a yellow visible light source, an amber visible light source, a red visible light source, an infrared light source, and/or a substantially white visible light source.

The ultraviolet light source may have a wavelength of between approximately 180 nm and 375 nm, optionally between approximately 200 nm and approximately 375 nm, optionally between 355 nm and 375 nm, optionally approximately 365 nm.

The visible light source may have a wavelength of between approximately 395 nm and approximately 705 nm.

The green visible light source may have a wavelength of between approximately 510 nm and approximately 530 nm, optionally approximately 520 nm.

The infrared light source may have a wavelength of between approximately 840 nm and approximately 860 nm, optionally approximately 850 nm.

The white visible light source may have a wavelength of between approximately 395 nm and approximately 705 nm.

The, or each, light source may be a light emitting diode (LED) light source.
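The nominal bands above can be collected into a simple lookup table. The numeric ranges are those given in the preceding paragraphs; the band names, dictionary layout, and the `sources_covering` helper are illustrative assumptions:

```python
# Wavelength bands (nm) as stated in the text for each light source type.
LIGHT_SOURCE_BANDS_NM = {
    "ultraviolet": (180, 375),   # optionally approximately 365 nm
    "visible":     (395, 705),   # the white light source spans this band
    "green":       (510, 530),   # optionally approximately 520 nm
    "infrared":    (840, 860),   # optionally approximately 850 nm
}

def sources_covering(wavelength_nm: float):
    """Names of light source types whose band contains the wavelength."""
    return [name for name, (lo, hi) in LIGHT_SOURCE_BANDS_NM.items()
            if lo <= wavelength_nm <= hi]
```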

One or more of the light sources may be arranged at a first location of the apparatus. The first location may be at a first side region of the apparatus. The first location may be at a first side region of the housing. One or more of the light sources may be arranged at a second location of the apparatus. The second location may be at a second side region of the apparatus. The second location may be at a second side region of the housing.

At least two of the light sources may be arranged at opposing regions of the apparatus, or housing, when one or more light sources are arranged in the first location and when one or more light sources are arranged in the second location.

The one or more light sources may be arranged such that the incident light from the, or each, light source is angularly offset with respect to the substrate, or the optically transparent portion of the substrate. The one or more light sources may be arranged such that the incident light from the, or each, light source is angularly offset with respect to the object, or the major surface of the object, when the object is located on the substrate.

The image capture device may be operable to capture at least one image associated with the wavelength band of the activated light source. Each of the captured at least one images may then be used by the image processor to create fingerprint location data.

The image capture apparatus may comprise one or more optical components configured to guide reflected and/or emitted light from the object/fingerprint(s) to the image capture device. The one or more optical components may include a mirror.

The image capture apparatus may include a frame on which the light sources are mounted. The frame may be arranged such that the one or more light sources illuminate at least a portion of the object.

The image capture apparatus may comprise a light filter. The light filter may be arranged in the optical light path between the image capture device and the substrate. The light filter may be located between the image capture device and the optical component, or components, used to guide the reflected and/or emitted light from the object to the image capture device.

The image capture apparatus may be operable to configure the light filter between one or more filter modes. The, or each, filter mode may be configured to permit at least a portion of light within a wavelength band to pass therethrough. The wavelength band may be associated with one or more of the one or more light sources of the apparatus. The light filter may comprise one or more filter elements, or two or more filter elements, or three or more filter elements, or four or more filter elements, or five or more filter elements. The, or each, filter element may be configured to permit at least a portion of the light reflected from, or emitted from, the object therethrough. Each filter mode may be defined by the use of a filter element.
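A hypothetical sketch of the configurable light filter follows. The `FilterWheel` class, the mode names, and the infrared upper bound of 1,100 nm are assumptions; the green-contrast passband of 550–570 nm and the approximately 780 nm infrared cut-on are taken from the values given for the filter elements:

```python
class FilterWheel:
    """Illustrative model of the configurable filter: a set of filter
    elements, at most one of which is in the optical path at a time;
    selecting a mode moves the matching element in-line."""

    def __init__(self, elements):
        # elements: mapping of mode name -> (pass_lo_nm, pass_hi_nm)
        self.elements = dict(elements)
        self.in_line = None  # off-line position: no element in the path

    def select(self, mode):
        """Move the element for `mode` into the optical path."""
        if mode not in self.elements:
            raise ValueError(f"no filter element for mode {mode!r}")
        self.in_line = mode
        return self.elements[mode]

    def passes(self, wavelength_nm):
        """True if the current configuration passes the wavelength."""
        if self.in_line is None:
            return True  # no filter element in the optical path
        lo, hi = self.elements[self.in_line]
        return lo <= wavelength_nm <= hi
```

In use, the apparatus would select the filter mode associated with the active light source, or with the fluorescence emission band expected from that source, before capture.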

The, or each, filter element may be configured to permit at least a portion of light in a wavelength band to pass therethrough. The wavelength band may be associated with transmitted light from one or more of the light sources, and/or associated with emitted light from the object in response to fluorescence excitation from one or more of the light sources.

The, or each, filter element may be locatable in the optical light path of the apparatus. The, or each, filter element may be locatable in the optical light path between the substrate and the image capture device. The apparatus may be operable to move the, or each, filter element between an in-line position, in which the filter element is located in the optical light path, and an off-line position, in which the filter element is not located in the optical light path.

The image capture apparatus may comprise a frame member for mounting the filter, or filter elements, thereto. The image capture apparatus may comprise an actuator for moving the frame member between a plurality of filter positions. The actuator may be a motor, or a stepper motor, or the like.

The filter, or filter elements, may be locatable adjacent to an aperture of the image capture device.

At least one filter element may be configured to permit at least a portion of visible light to pass therethrough. At least one filter element may be configured to permit at least a portion of green light to pass therethrough. At least one filter element may be a green contrast filter configured to permit at least a portion of green light to pass therethrough. At least one filter element may be configured to permit at least a portion of ultraviolet light to pass therethrough. At least one filter element may be configured to permit at least a portion of visible light to pass therethrough in a wavelength band of between approximately 390 nm and approximately 710 nm, optionally between approximately 400 nm and approximately 700 nm. At least one filter element may be configured to permit at least a portion of green light to pass therethrough at a wavelength of between approximately 550 nm and approximately 570 nm, optionally approximately 560 nm. At least one filter element may be configured to permit at least a portion of infrared light to pass therethrough. At least one filter element may be configured to permit at least a portion of infrared light to pass therethrough at wavelengths above a cut-on of between approximately 770 nm and approximately 790 nm, optionally above approximately 780 nm.

The image capture apparatus may comprise one or more displays. The, or each, display may include a user-interface, which may be a graphical user-interface (GUI). The, or each, display may be a touch-screen display.

The display may be operable to display the captured at least one image of the at least a portion of the object. The display may be operable to display one or more images created by the image processor.

The plurality of imaging modes may include a viewer mode. The viewer mode may use images from the image capture device to display images or video of the object, which may be in real time. The viewer mode may use captured image(s) at a resolution lower than the first imaging mode and/or at a resolution lower than the second imaging mode.

The image capture apparatus may be operable to save one or more image or video files from the images or videos obtained in the viewer mode.

The image capture apparatus may be operable to display one or more images on the display. The image may be the at least one image captured by the image capture device. The image may be created based at least in part on the fingerprint location data created by the image processor.

The image processor may be operable to carry out one or more pattern detection processes, edge detection processes, ridge detection processes, and/or friction ridge detection processes. The detection of the location of the, or each, fingerprint may be carried out by the image processor using, at least in part, the one or more pattern detection processes, edge detection processes, ridge detection processes, and/or friction ridge detection processes.

The image processor may comprise one or more detection units operable to, at least in part, detect the location of the at least one fingerprint. The detection units may include an artificial intelligence unit, machine learning unit, and/or a neural network unit. The neural network unit may be a convolutional neural network (CNN) or a faster R-CNN (FRCNN). The detection unit may be a neural network, a convolutional neural network (CNN), a faster R-CNN (FRCNN), or the like. The detection unit may include one or more convolution layers forming at least a part of the neural network.
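By way of illustration only, the basic operation of a convolution layer of the kind such CNN detectors are built from can be sketched as follows. The function name and the example kernel are hypothetical and are not taken from the disclosure; note that the "convolution" used in deep learning is implemented as cross-correlation, as here.

```python
import numpy as np

def conv2d(image, kernel):
    """One valid-mode 2-D convolution (cross-correlation) pass, the
    basic building block of the CNN detectors mentioned above.
    Illustrative sketch only."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # sum of the element-wise product of the kernel and the patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a horizontal-difference kernel responding to ridge-like transitions
image = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))   # 4 x 4 test image
edge = conv2d(image, np.array([[-1.0, 1.0]]))   # strong response at the step
```

In a trained detector, many such kernels (with learned weights) are stacked in layers; here a single fixed kernel merely shows the mechanism.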

The image processor may be operable to use the image, or images, created using at least one of the imaging modes of the image capture device, optionally using any two or more of the imaging modes, optionally using the first imaging mode and/or the second imaging mode, to detect the location of the at least one fingerprint.

The image processor may be operable to detect the location of the at least one fingerprint using an image, or images, captured using any one or all of the light source(s) and/or any one or all of the filter, or filter element(s).

The image capture device may be configured to capture the at least one image in monochrome or colour. The image processor may be configured to convert the at least one colour image to a monochrome image. The image processor may be configured to convert each colour image from the image capture device to a monochrome image. The image processor may be configured to convert the colour image to a monochrome image prior to detecting the location of the, or each, fingerprint present on the object.
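A colour-to-monochrome conversion of the kind described might, for example, use a standard luminance weighting. The weights below are the ITU-R BT.601 coefficients, an illustrative choice rather than anything specified in the disclosure.

```python
import numpy as np

def to_monochrome(rgb):
    """Convert an H x W x 3 colour image to a single-channel image
    using the ITU-R BT.601 luminance weights (illustrative choice)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # contracts the colour axis

# a 2 x 2 colour image: red, green, blue and white pixels
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)
mono = to_monochrome(rgb)  # shape (2, 2)
</antml>```

The same conversion would be applied to each captured colour image before it is passed to the detection unit(s).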

The monochrome image created by the image processor or by the image capture device may form the input to the detection unit(s). The monochrome image created by the image processor may form the input to the neural network unit. The monochrome image created by the image processor may form a convolution layer of the neural network.

The image processor may be operable to process the captured at least one image to detect the location of every fingerprint located on the object. The image processor may be operable to process the captured at least one image to detect the location of a plurality of fingerprints on the object, or every fingerprint located on the object.

The fingerprint location data created by the image processor may be referenced to one or more reference points of the object.

The fingerprint location data created by the image processor may include one or more predicted locations of the, or each fingerprint. The fingerprint location data created by the image processor may include the probability that a fingerprint is present at a particular location or zone of the object. The fingerprint location data created by the image processor may include one or more predicted locations of the, or each fingerprint and the probability that a fingerprint is present at the predicted locations or zones of the object.

The image processor may be operable to represent the location of the at least one fingerprint using a feature-mapping process. The feature-mapping process may be operable to represent the probability of a fingerprint being present at one or more discrete locations on the object.

The feature-mapping process is carried out using, at least in part, the fingerprint location data.

The feature-mapping process may be operable to represent the probability density of one or more points, or a plurality of points, or one or more zones, or a plurality of zones, having a fingerprint located there. In this arrangement, the feature mapping process can provide single, or multiple, predictions of the presence of a fingerprint within the area of the fingerprint itself.

The feature mapping process may be operable to make a plurality of predictions of the presence or absence of a fingerprint within the same fingerprint area. In this arrangement, for a given fingerprint, the feature mapping process makes at least two predictions covering at least two points or zones within the area defined by the fingerprint, rather than making a single prediction covering the area of the fingerprint.

The feature-mapping process may be operable to represent the location of the at least one fingerprint using a heat-map process representing the probability of one or more points, or a plurality of points, or one or more zones, or a plurality of zones, having a fingerprint located there. The heat-map may be a probability map, or probability density map.
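One possible (purely illustrative) realisation of such a heat-map is to place a probability bump at each predicted point; the Gaussian form, grid size and parameter names below are assumptions, not part of the disclosure.

```python
import numpy as np

def heat_map(shape, predictions, sigma=5.0):
    """Build a probability-style heat map from (row, col, score)
    fingerprint predictions by placing a Gaussian bump at each
    predicted point. Illustrative sketch only."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    hm = np.zeros(shape)
    for r, c, score in predictions:
        hm += score * np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                             / (2.0 * sigma ** 2))
    return np.clip(hm, 0.0, 1.0)  # keep values in a probability-like range

# two predicted fingerprint locations with different confidences
hm = heat_map((50, 50), [(10, 10, 0.9), (35, 40, 0.6)])
```

The resulting map peaks at the predicted locations, so several predictions can fall within the area of a single fingerprint, as described above.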

The feature mapping process may be operable to create a boundary at least partially around the location, or predicted location, of the, or each, fingerprint. The boundary may surround the at least one fingerprint. It will be understood that the boundary could trace the outline of the, or each, fingerprint, or could be spaced from the fingerprint. For example, a box could be used to indicate the location of the fingerprint.
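The boxed-boundary idea can be sketched as follows; thresholding a probability map and taking the extent of the above-threshold region is one simple way to obtain such a box. The function and threshold below are hypothetical.

```python
import numpy as np

def bounding_box(prob_map, threshold=0.5):
    """Return (row_min, col_min, row_max, col_max) of the region where
    the fingerprint probability exceeds `threshold`, or None if no
    pixel qualifies. Illustrative sketch only."""
    rows, cols = np.where(prob_map > threshold)
    if rows.size == 0:
        return None
    return (rows.min(), cols.min(), rows.max(), cols.max())

pm = np.zeros((20, 20))
pm[5:9, 3:12] = 0.8            # a high-probability patch
box = bounding_box(pm)         # extent of the patch
```

A tighter boundary tracing the fingerprint outline could instead follow the threshold contour; the axis-aligned box is simply the easiest indicator to draw.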

The feature-mapping process may be operable to make predictions based on one or more threshold values. The feature-mapping process may be used to map features onto the captured at least one image.

The image processor may be operable to indicate the probability, or likelihood, of a fingerprint being present at a location.

The image processor may include a classification module. The classification module may be based on a Resnet50 architecture. The classification module may be operable to use the fingerprint location data, at least in part, to determine which parts of the captured at least one image to analyse. The classification module may be operable to use data from the detection unit(s), at least in part, to determine which parts of the captured at least one image to analyse. The classification module may be operable to use the feature-mapping data, at least in part, to determine which parts of the captured at least one image to analyse.

The classification module may be operable to compare the at least one fingerprint on the captured at least one image against a classification library and to output the probability that a fingerprint is present at that location.
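The final probability output might be produced by a softmax over a two-class head (no-fingerprint vs. fingerprint); the disclosure names a Resnet50 backbone but not the output head, so the formulation below is an assumption.

```python
import numpy as np

def fingerprint_probability(logits):
    """Turn a classifier's raw two-class output
    (no-fingerprint, fingerprint) into the probability that a
    fingerprint is present. Hypothetical output head."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax over the two classes
    return p[1]                          # probability of the fingerprint class

p = fingerprint_probability([0.2, 2.3])  # strongly favours "fingerprint"
```

This is the per-location probability that would be reported alongside each predicted location or zone.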

The image processor may be operable to create a report, or reports. The report, or reports, may indicate the location of the fingerprint, or fingerprints, on the object. The image processor may be operable to transmit the report, or reports, to the display, the computing device, and/or a further computing device.

The image capture apparatus may be operable to create at least one annotated image, based at least in part on the captured at least one image from the image capture device and the fingerprint location data from the image processor. The annotated image may highlight, tag, select, delimit, or otherwise indicate the location of the at least one fingerprint on the object. The annotated image may highlight, tag, select, delimit, or otherwise indicate the location of each of the fingerprints on the object. The annotated image may be based, at least in part, on the feature-mapping. The annotated image may include data obtained from the feature mapping.

The image capture apparatus may be operable to divide the captured at least one image into one or more areas of interest, each area of interest including at least one fingerprint.

The image capture apparatus may be operable to create a cropped image based at least in part on the captured at least one image from the image capture device and the fingerprint location data from the image processor. The cropped image may include at least one of the detected fingerprint(s).
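Creating such a cropped image from a detected bounding box can be sketched as below; the coordinate convention (row/column box with an added safety margin) is an assumption for illustration.

```python
import numpy as np

def crop_fingerprint(image, box, margin=10):
    """Crop `image` around a detected fingerprint box
    (row_min, col_min, row_max, col_max), with a safety margin,
    clamped to the image bounds. Illustrative sketch only."""
    r0, c0, r1, c1 = box
    r0 = max(r0 - margin, 0)
    c0 = max(c0 - margin, 0)
    r1 = min(r1 + margin, image.shape[0] - 1)
    c1 = min(c1 + margin, image.shape[1] - 1)
    return image[r0:r1 + 1, c0:c1 + 1]

img = np.arange(100 * 100).reshape(100, 100)
crop = crop_fingerprint(img, (40, 40, 60, 60))  # 41 x 41 crop with margin
```

The crop's offset within the full image would be retained as the reference data relating the fingerprint back to the object.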

The annotated image and/or the cropped image may include reference data indicative of the location of the fingerprint, or fingerprints, relative to the object.

The display may be configured to permit the user of the apparatus to select one or more images for display. The images may be selected from at least one of: the captured at least one image, the one or more annotated images, the one or more cropped images, images including feature mapping, images used by the image processor to determine the location of the fingerprints, and/or any other suitable image for display.

The image processor may be operable to create the annotated or cropped images.

The image processor may be operable to create one or more further images based at least in part on the captured at least one image and the fingerprint location data. The one or more further images may be displayed on the display.

The image processor may be operable to transmit data and/or images to the display. The image processor may be operable to transmit data and/or images to a computing device for processing and/or for display. The computing device may be part of the image capture device, or the computing device may be located remote to the apparatus.

The computing device may include the display of the image capture apparatus.

The display may be located on the housing. The display may be located on a wall of the housing. The computing device may be located on the housing. The computing device may be located on the wall of the housing.

The computing device may be a tablet PC, a PC, or the like.

The image capture apparatus may comprise a power supply module configured to supply electrical power to the one or more light sources, the image capture device, the actuator for moving the frame member between a plurality of filter positions, the image processor and/or any component of the image capture apparatus that requires electrical power. The power supply module may be located in the housing.

The image capture apparatus may comprise a user input control. The user input control may be operable to permit the user to input one or more parameters associated with the at least one fingerprint and/or the object on which the at least one fingerprint is located, such as the fingerprint enhancement technique used (e.g. reagents used, type of analysis to be used, such as fluorescence). The user input control may be operable to permit the user to select from a list of fingerprint analysis techniques, processes and parameters. The image capture apparatus may be operable to automatically configure the operation of the apparatus based on the data input to the user input control.

The user input control may be operable to permit the user to adjust the operating parameters of the image capture apparatus. The user input control may be operable to permit the user to manually select the light source, or sources, to be used, the filter, or filter element(s) to be used, and/or the imaging mode to be used.

The image capture apparatus may be operable to automatically configure the operation of the apparatus based on the data input to the user input control and the user input control may be operable to permit the user to manually select at least one of: the light source, or sources, to be used, the filter, or filter element(s) to be used, and/or the imaging mode to be used.

The user input control may be included on the display, or may be part of the user interface of the display.

The user input control may be operable to activate or control at least one of: the light source, or sources, to be used, the filter, or filter element(s) to be used, and/or the imaging mode to be used, and/or the operation of the image capture device in response to user input.

The image capture apparatus may be configured to automatically detect the location of the, or each, fingerprint on the object once the user has activated the one or more light sources and the image capture device. The image capture apparatus may be configured to automatically generate output data, including any images, reports, etc, in response to the user activating the one or more light sources and the image capture device.

The image capture apparatus may be configured to automatically capture an image using each of the one or more light sources to give a set of captured images, to process each of the captured images to detect the location of the at least one fingerprint, and to generate output data including a set of captured images and any cropped or further images. The image capture apparatus may be operable to transmit the output data to the display and/or to a further computing device, or further display. The output data may include the fingerprint location data created by the image processor and referenced to one or more reference points of the object.
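The automated capture sequence described above can be sketched schematically as follows; all function names here are hypothetical stand-ins for the hardware and processor interfaces, which the disclosure does not specify.

```python
# Schematic of the automated capture sequence: for each light source,
# illuminate, capture an image, then detect fingerprint locations.
# All callables are hypothetical stand-ins, not a real device API.
def run_capture_sequence(light_sources, capture, detect):
    """Return the set of captured images and, for each image, the
    fingerprint location data produced by the image processor."""
    images, detections = [], []
    for source in light_sources:
        image = capture(source)           # image under this illumination
        images.append(image)
        detections.append(detect(image))  # fingerprint location data
    return images, detections

# stub capture/detect functions standing in for the hardware and processor
images, dets = run_capture_sequence(
    ["white", "uv", "ir"],
    capture=lambda s: f"image_{s}",
    detect=lambda img: [(10, 20)] if img != "image_ir" else [],
)
```

The output data bundle (images plus detections) is what would then be transmitted to the display or a further computing device.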

The image capture apparatus may be configured to alert if no location of the at least one fingerprint is detected, or if no fingerprints are detected. The alert may be one or more of an audible alert, a visual alert, and an alert in the output data.

According to a second aspect of the present invention there is provided an image capture apparatus for imaging at least a portion of an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; and wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate.

Embodiments of the second aspect of the present invention may include one or more features of the first aspect of the present invention or its embodiments. Similarly, embodiments of the first aspect of the present invention may include one or more features of the second aspect of the present invention or its embodiments.

According to a third aspect of the present invention there is provided a method of detecting at least one fingerprint on an object, the method comprising the steps of: providing an image capture apparatus for imaging at least one fingerprint on an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate; and wherein the image processor is operable to process the captured at least one image to detect the location of the, or each fingerprint present on the object, and to create fingerprint location data based at least in part on the detected location of the, or each fingerprint on the object; positioning the object on the substrate; activating at least one of the one or more light sources; capturing at least one image of the object using the image capture device; and determining the location of the at least one fingerprint on the object using the image processor.

Embodiments of the third aspect of the present invention may include one or more features of the first and/or second aspects of the present invention or their embodiments. Similarly, embodiments of the first and/or second aspects of the present invention may include one or more features of the third aspect of the present invention or its embodiments.

According to a fourth aspect of the present invention there is provided a method of imaging an object, the method comprising the steps of: providing an image capture apparatus for imaging at least a portion of an object, the apparatus comprising: a substrate for positioning an object relative to the apparatus; one or more light sources; an image capture device; and an image processor, wherein the one or more light sources are operable to illuminate at least a portion of the object when the object is located on the substrate; and wherein the image capture device is operable to capture at least one image of at least a portion of the object when the object is located on the substrate; positioning the object on the substrate; activating at least one of the one or more light sources; and capturing at least one image of the object using the image capture device.

Embodiments of the fourth aspect of the present invention may include one or more features of the first, second and/or third aspects of the present invention or their embodiments. Similarly, embodiments of the first, second, and/or third aspects of the present invention may include one or more features of the fourth aspect of the present invention or its embodiments.

According to a fifth aspect of the invention, there is provided a method of detecting at least one fingerprint on an object, the method comprising the steps of: providing at least one image of the at least one fingerprint on the object; and processing the at least one image to detect the location of the, or each fingerprint present on the object, and to create fingerprint location data based at least in part on the detected location of the, or each fingerprint on the object.

Embodiments of the fifth aspect of the present invention may include one or more features of the first, second, third and/or fourth aspects of the present invention or their embodiments. Similarly, embodiments of the first, second, third and/or fourth aspects of the present invention may include one or more features of the fifth aspect of the present invention or its embodiments.

The term "comprising" as used herein to specify the inclusion of components also includes embodiments in which no further components are present.

Brief description of the drawings

Embodiments of the invention will now be described, by way of example, with reference to the drawings, in which:

Fig. 1a shows an image capture apparatus in accordance with an embodiment of the invention;

Fig. 1b shows a top view of the image capture apparatus of Fig. 1a;

Fig. 2 shows an image captured using the image capture apparatus of Fig. 1a;

Figs. 3a to 3c show convolutional layers of a neural network used by the image processor of the image capture apparatus of Fig. 1a;

Figs. 4 and 5 show images created by the image processor of the image capture apparatus of Fig. 1a;

Fig. 6a shows a colour image captured by the image capture apparatus of Fig. 1a; and

Fig. 6b shows a monochrome image of Fig. 6a created by the image capture apparatus of Fig. 1a.

Description of preferred embodiments

A list of the reference signs used herein is given at the end of the description, immediately prior to the claims. A full and enabling disclosure of the present invention, including the best mode thereof, to one of ordinary skill in the art, is set forth more particularly in the remainder of the specification. Reference now will be made in detail to the embodiments of the invention, one or more examples of which are set forth below. Each example is provided by way of explanation of the invention, not limitation of the invention.

It will be apparent to those of ordinary skill in the art that various modifications and variations can be made in the present invention without departing from the scope of the invention. For instance, features described as part of one embodiment can be used on another embodiment to yield a still further embodiment. Thus, it is intended that the present invention cover such modifications and variations as come within the scope of the appended claims and their equivalents.

Other objects, features, and aspects of the present invention are disclosed in the remainder of the specification. It is to be understood by one of ordinary skill in the art that the present discussion is a description of exemplary embodiments only and is not intended as limiting the broader aspects of the present invention, which broader aspects are embodied in the exemplary constructions.

Repeat use of reference symbols in the present specification and drawings is intended to represent the same or analogous features or elements.

With reference to Figs. 1a to 6b, an image capture apparatus 1 for imaging at least one fingerprint 2 on an object 4 is shown, and related images captured or created by the apparatus 1.

As shown in Fig. 1a and 1b, the apparatus 1 comprises a substrate 6 for positioning the object relative to the apparatus 1, one or more light sources 8, an image capture device 10 and an image processor 12.

The one or more light sources 8 are operable to illuminate at least a portion of the object 4 when the object 4 is located on the substrate 6 and the image capture device 10 is operable to capture at least one image 14 (an example of which is shown in Fig. 2) of at least a portion of the object 4 when the object 4 is located on the substrate 6. Figs. 6a and 6b best show an image of the object 4, which is an A4 piece of paper. The image processor 12 is operable to process the captured at least one image 14 to detect the location of the, or each fingerprint 2 present on the object 4, and to create fingerprint location data based at least in part on the detected location of the, or each fingerprint 2 on the object 4.

The at least one fingerprint 2 could include one or more latent fingerprints. In this embodiment, the apparatus 1 can image and detect the location of latent fingerprints, or non-latent fingerprints. For example, the object 4 could include latent, non-latent, or a combination of latent and non-latent fingerprints, and the apparatus 1 is operable to image and create fingerprint location data of both types of fingerprints.

The apparatus 1 is operable to detect an at least partially obscured fingerprint 2 as will be described in more detail below. In this embodiment, the image capture apparatus 1 is operable to detect the location of the at least partially obscured fingerprint 2 and to create fingerprint location data accordingly. It will be understood that the fingerprint 2 could be obscured in many ways, such as by debris/dirt, ink (as shown in Fig. 2 and Fig. 5), or the like.

The at least one fingerprint 2 may be processed with one or more fingerprint enhancement agents prior to image capture by the apparatus 1, as is known in the field of fingerprint analysis.

The object 4 is a substantially planar object 4. The object 4 comprises one or more major surfaces 4a. The one or more major surfaces 4a are sized such that at least one fingerprint 2 can be located thereon. The one or more major surfaces 4a are sized such that a plurality of fingerprints 2 can be located thereon. The one or more major surfaces 4a are rectangular shaped surfaces. The one or more major surfaces 4a are planar surfaces. The object 4 is a paper-based object, such as a piece of paper, a document, a book or page thereof, or the like.

In these embodiments, the object 4 is an A4 piece of paper, but the object 4 could be up to A4 size or up to US letter size. The object 4 could be at least A4 size, or US letter size. The major surfaces 4a may be A4 size, or up to US letter size.

The substrate 6 is configured to accommodate all of the object 4 thereon. The substrate 6 is configured to accommodate a major surface 4a of the object 4 thereon.

The image capture apparatus 1 could comprise one or more of the objects 4 to be imaged. For example, the image capture apparatus 1 could be sold with one or more objects 4.

The image capture apparatus 1 comprises a housing 16. The one or more light sources 8 are located in the housing 16. The image capture device 10 is located in the housing 16.

The housing 16 is substantially opaque at the transmission wavelengths of the light sources 8. The housing 16 is substantially non-fluorescent at the wavelengths of the light sources 8. Fig. 1a shows the housing 16 with transparent portions, merely to illustrate the inside of the housing 16.

The substrate 6 is a substantially planar member, and is a platen. The substrate 6 is located at a top portion 16a of the housing 16. The housing 16 includes one or more cut-out portions to allow light from the light sources 8 to reach the substrate 6 and to allow light from the object 4 to reach the image capture device 10. The substrate 6 is integral with the housing 16.

The substrate 6 is configured to be in conformal contact with at least a portion of a major surface 4a of the object 4 when the object 4 is located on the substrate 6. The substrate 6 includes a major surface 6a configured to be in contact with the major surface 4a of the object 4 when the object 4 is located on the substrate 6.

The substrate 6 includes one or more substantially optically transparent portions 6b configured to permit the light from the one or more light sources 8 to be transmitted and/or reflected therethrough.

The optically transparent portion 6b of the substrate 6 is a glass plate.

The substrate 6 is configured to be non-fluorescent at the transmission wavelengths of the one or more light sources 8. The optically transparent portion 6b of the substrate 6 is configured to be non-fluorescent at the transmission wavelengths of the one or more light sources 8.

The substrate 6 is sized to allow at least an A4 piece of paper or a US letter sized piece of paper to be located thereon. The optically transparent portion 6b of the substrate 6 is sized to cover at least an A4 piece of paper or a US letter sized piece of paper.

The substrate 6 includes an indicator 6c arranged to indicate where the object 4 should be positioned. The indicator 6c is located on or adjacent to the optically transparent portion 6b of the substrate 6. The indicator 6c is substantially the same size as A4 paper or US letter size. The indicator 6c is formed from one or more lines on the transparent portion 6b. The one or more light sources 8 are arranged to illuminate at least a portion of the object 4 through the optically transparent portion 6b of the substrate 6.

The apparatus 1 has a plurality of light paths from each light source 8 to the image capture device 10.

The optically transparent portion 6b of the substrate 6 is located in the light path between the one or more light sources 8 and the image capture device 10.

The housing 16 includes a hinged cover member (not shown) configurable to cover at least a portion of the substrate 6. The cover member is configurable to cover at least a portion of the object 4 when the object 4 is located on the substrate 6.

The cover member is configured to prevent or mitigate light from being transmitted from the outside of the housing 16 to the inside of the housing 16. The cover member 18 may be configured to substantially block or reflect light in the visible, infrared and/or ultraviolet wavelengths. The cover member is configured to be non-fluorescent at the transmission wavelengths of the light sources 8.

The cover member is configured to secure the object 4 to the substrate.

The image capture device 10 is arranged to capture at least a portion of light that is reflected from the object 4. The image capture device 10 is arranged to capture at least a portion of light that is reflected from the object 4 and that passes through the optically transparent portion 6b of the substrate 6.

The one or more light sources 8 include at least one fluorescence light source operable to cause fluorescence in the fingerprints 2 located on the object 4, or in a fingerprint enhancement agent applied to the fingerprints 2, as is known in the art of fluorescence-based fingerprint detection.

The image capture device 10 is arranged to capture at least a portion of light that is emitted from the fingerprints 2 located on the object 4, or from the fingerprint enhancement agent. The emitted light from the fingerprint 2 or fingerprint enhancement agent may be at least partly from fluorescence emission. The image capture device 10 is fixed relative to the optically transparent portion 6b of the substrate 6. The image capture device 10 is fixed relative to each light source 8. Each light source 8 is fixed relative to the optically transparent portion 6b of the substrate 6.

The image capture device 10 is a digital camera.

The image capture device 10 includes one or more light focussing elements 10a configured to focus light reflected and/or emitted from the object 4. The light focussing element 10a includes at least one lens element.

The image capture device 10 is operable to capture the at least one image 14 at a resolution of at least 1,000 pixels per inch (ppi), or greater than 1,000 ppi. The image capture device 10 is operable to capture the at least one image 14 of substantially all of the major surface 4a of the object 4 at a resolution of at least 1,000 ppi, or greater than 1,000 ppi.

The image capture device 10 is configured to increase the resolution of the captured at least one image 14. The image capture device 10 comprises a resolution enhancement element operable to increase the resolution of the captured at least one image 14. The resolution enhancement element includes a pixel stepping and/or pixel sub-stepping element. In this embodiment, the image capture device 10 is operable to perform pixel stepping and/or pixel sub-stepping to increase the resolution of the captured at least one image 14.
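As an illustration of the pixel sub-stepping principle, and not of the apparatus's actual implementation, four frames captured at half-pixel sensor offsets can be interleaved into a single image with twice the resolution in each dimension (a fourfold pixel count, consistent with a step from a 61 MP mode to a 244 MP mode). The function name and the assumed capture offsets are illustrative.

```python
import numpy as np

def combine_substep_frames(frames):
    """Interleave four half-pixel-shifted frames into one double-resolution image.

    `frames` is assumed to hold four equally sized images captured at sensor
    offsets of (0, 0), (0, 0.5), (0.5, 0) and (0.5, 0.5) pixels, in that order.
    """
    h, w = frames[0].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
    out[0::2, 0::2] = frames[0]  # (0, 0) offset fills even rows, even columns
    out[0::2, 1::2] = frames[1]  # (0, 0.5) fills even rows, odd columns
    out[1::2, 0::2] = frames[2]  # (0.5, 0) fills odd rows, even columns
    out[1::2, 1::2] = frames[3]  # (0.5, 0.5) fills odd rows, odd columns
    return out
```

In practice a sub-stepping system would also register and deconvolve the frames; the interleaving above shows only the core resolution-doubling idea.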

The image capture device 10 is operable to capture the at least one image at a resolution of 61 MP or 244 MP.

The image capture device 10 is configurable between a plurality of imaging modes. The plurality of imaging modes are defined, at least in part, by their image resolution, expressed in megapixels (MP).

In this embodiment, the plurality of imaging modes include a first imaging mode, corresponding to images captured at 61 MP, and a second imaging mode, corresponding to images captured at 244 MP. The second imaging mode has a higher resolution than the first imaging mode. The second imaging mode activates the resolution enhancement element to increase the resolution of the captured image 14.

The image capture apparatus 1 is operable to save one or more image files from the image(s) obtained in the first imaging mode and/or the second imaging mode. The image capture apparatus 1 comprises a plurality of light sources 8, each light source being operable to illuminate at least a portion of the object 4. The image capture apparatus 1 comprises a plurality of different types of light source 8.

The image capture apparatus 1 is operable to selectively activate the one or more light sources 8. The image capture apparatus 1 is operable to selectively activate some, or all, of the light sources 8. The image capture apparatus 1 is configured to automatically activate the, or each, light source 8, and to capture at least one image 14 during the activation of the, or each, light source 8. In this arrangement, the image capture apparatus 1 can activate each light source 8, optionally in a sequence. Each light source 8 is associated with a wavelength band dictated by its light transmission properties.

The one or more light sources 8 are operable to illuminate the object at a plurality of wavelength bands (one wavelength band for each type of light source).

Each light source 8 is operable to illuminate the object 4 at a different wavelength band to the other light sources 8.

The one or more light sources 8 include an ultraviolet light source, a visible light source, a green visible light source, an infrared light source, and/or a substantially white visible light source.

The ultraviolet light source has a wavelength of approximately 365 nm.

The visible light source has a wavelength of between approximately 395 nm and approximately 705 nm.

The green visible light source has a wavelength of approximately 520 nm.

The infrared light source has a wavelength of approximately 850 nm.

The white visible light source has a wavelength of between approximately 395 nm and approximately 705 nm.

The, or each, light source 8 is a light emitting diode (LED) light source 8. One or more of the light sources 8 are arranged at one side region of the housing 16 and one or more of the light sources 8 are arranged at a second side region of the housing 16.

The image capture device 10 is operable to capture at least one image 14 associated with the wavelength band of the activated light source 8. Each of the captured at least one images 14 may then be used by the image processor 12 to create fingerprint location data.
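The automatic activation sequence described above can be pictured as a simple capture loop, one image per light source. This is a hypothetical sketch: the `set_light` and `capture_image` callbacks and the source names stand in for the apparatus's actual drivers, while the wavelength values follow the approximate figures listed above.

```python
# Hypothetical light-source table; wavelengths (nm) follow the approximate
# values given in the description (365 nm UV, 520 nm green, 850 nm IR).
LIGHT_SOURCES = {
    "uv": 365,
    "green": 520,
    "ir": 850,
    "white": None,  # broadband visible, approx. 395-705 nm
}

def capture_sequence(set_light, capture_image):
    """Activate each light source in turn and capture an image under it.

    `set_light(name, on=...)` and `capture_image()` are hypothetical
    hardware callbacks; returns a mapping from source name to its image.
    """
    images = {}
    for name in LIGHT_SOURCES:
        set_light(name, on=True)
        images[name] = capture_image()  # image associated with this band
        set_light(name, on=False)
    return images
```

Each captured image is thereby tagged with the wavelength band of its light source, ready for the image processor to use.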

The image capture apparatus 1 comprises one or more optical components configured to guide reflected and/or emitted light from the object 4 or fingerprints 2 to the image capture device 10. The optical components include a mirror 20.

The image capture apparatus 1 includes a frame 8a on which the light sources 8 are mounted. The frame 8a is arranged such that the light sources 8 illuminate at least a portion of the object 4.

The image capture apparatus 1 comprises a light filter 22. The light filter 22 is arranged in the optical light path between the image capture device 10 and the substrate 6. The light filter 22 is located between the image capture device 10 and the mirror 20 used to guide the reflected and/or emitted light from the object 4 to the image capture device 10.

The image capture apparatus 1 is operable to configure the light filter 22 between one or more filter modes. The, or each, filter mode is configured to permit at least a portion of light within a wavelength band to pass therethrough. The wavelength band may be associated with one or more of the light sources 8 of the apparatus 1.

The light filter 22 comprises one or more filter elements 22a configured to permit at least a portion of the light reflected from, or emitted from, the object 4 therethrough. Each filter mode is defined by the use of a filter element 22a.

The, or each, filter element 22a is configured to permit at least a portion of light in a wavelength band to pass therethrough. The wavelength band is associated with transmitted light from one or more of the light sources 8 and/or associated with emitted light from the object 4 in response to fluorescence excitation from one or more of the light sources 8.

Each filter element 22a is locatable in the optical light path of the apparatus 1. Each filter element is locatable in the optical light path between the substrate 6 and the image capture device 10. The apparatus 1 is operable to move each filter element 22a between an in-line position, in which the filter element 22a is located in the optical light path, and an off-line position, in which the filter element 22a is not located in the optical light path.

The image capture apparatus 1 comprises a frame member 22b for mounting the filter elements 22a thereto. The image capture apparatus 1 comprises an actuator for moving the frame member 22b between a plurality of filter positions. The actuator is a stepper motor.
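By way of illustration, moving the frame member 22b with a stepper motor amounts to translating a target filter position into a signed step count. The step and slot counts below are assumptions made for this sketch, not values taken from the apparatus.

```python
STEPS_PER_REV = 200       # assumed stepper motor resolution (steps/revolution)
N_FILTER_POSITIONS = 5    # assumed number of filter slots on the frame member

def steps_to_position(current_pos, target_pos):
    """Return the signed number of motor steps to rotate the filter frame
    from `current_pos` to `target_pos`, taking the shorter direction."""
    steps_per_slot = STEPS_PER_REV // N_FILTER_POSITIONS
    delta = (target_pos - current_pos) % N_FILTER_POSITIONS
    if delta > N_FILTER_POSITIONS / 2:
        delta -= N_FILTER_POSITIONS  # shorter to rotate the other way round
    return delta * steps_per_slot
```

For example, with five slots on a 200-step motor, each slot is 40 steps apart, and a move of three slots forward is issued as two slots backward.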

The filter elements 22a are locatable adjacent to an aperture of the image capture device 10.

The filter elements 22a are selected from the following examples: at least one filter element 22a configured to permit at least a portion of visible light to pass therethrough; at least one filter element 22a configured to permit at least a portion of green light to pass therethrough; at least one filter element 22a being a green contrast filter configured to permit at least a portion of green light to pass therethrough; at least one filter element 22a configured to permit at least a portion of ultraviolet light to pass therethrough; at least one filter element 22a associated with the ultraviolet light source and configured to permit light in a wavelength band of between approximately 400 nm and approximately 700 nm to pass therethrough; at least one filter element 22a configured to permit at least a portion of green light to pass therethrough at a wavelength of approximately 560 nm; at least one filter element 22a configured to permit at least a portion of infrared light to pass therethrough; and at least one filter element 22a configured to permit at least a portion of infrared light to pass therethrough at a wavelength of approximately 780 nm.

The image capture apparatus 1 comprises a display 24. The display 24 includes a graphical user-interface 26. The display 24 is a touch-screen display.

The image processor 12 is operable to transmit data and/or images to the display 24. The image processor 12 is part of a computing device 30, which in this embodiment is a tablet PC. In other embodiments, the computing device 30 could be located remote to the apparatus 1.

The display 24 is operable to display the captured at least one image 14 of the at least a portion of the object 4. The display 24 is operable to display one or more images created by the image processor 12.

The plurality of imaging modes includes a viewer mode. The viewer mode is configured such that the at least one image 14 captured by the image capture device 10 is used to display a real-time, or delayed, image or video of the object 4. The viewer mode uses captured image(s) 14 at a resolution lower than the first imaging mode and/or at a resolution lower than the second imaging mode.

The image capture apparatus 1 is operable to save one or more image or video files from the images or videos obtained in the viewer mode.

The image capture apparatus 1 is operable to display one or more images on the display 24. The image may be the at least one image 14 captured by the image capture device 10. The image may be created based at least in part on the fingerprint location data created by the image processor 12.

The image processor 12 is operable to carry out one or more pattern detection processes, edge detection processes, ridge detection processes, and/or friction ridge detection processes. The detection of the location of the, or each, fingerprint 2 is carried out by the image processor 12 using, at least in part, the one or more pattern detection processes, edge detection processes, ridge detection processes, and/or friction ridge detection processes.

The image processor 12 comprises one or more detection units operable to, at least in part, detect the location of the at least one fingerprint 2. In this embodiment, the detection unit includes a Faster R-CNN (FRCNN) neural network. The detection unit includes one or more convolution layers forming at least a part of the neural network.

The image processor 12 is operable to use the image, or images, created using at least one of the imaging modes of the image capture device 10 to detect the location of the at least one fingerprint 2.

The image processor 12 is operable to detect the location of the at least one fingerprint 2 using an image, or images, captured using any one or all of the light sources 8 and/or any one or all of the filter elements 22a.

The image capture device 10 is configured to capture the at least one image 14 in colour. The image processor 12 is configured to convert each colour image 14 from the image capture device 10 to a monochrome image prior to detecting the location of the, or each, fingerprint 2 present on the object 4.
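The colour-to-monochrome conversion can be sketched as a weighted sum of the colour channels. The BT.601 luma weights used here are a common convention for this step; the apparatus's actual conversion is not specified, so this is an illustrative assumption.

```python
import numpy as np

def to_monochrome(rgb):
    """Convert an H x W x 3 RGB image to a single-channel image using the
    ITU-R BT.601 luma weights (0.299 R + 0.587 G + 0.114 B), a common
    choice for greyscale conversion before further processing."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(rgb.dtype)
```

The resulting single-channel image is the form typically fed to a detection network such as the one described below.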

The monochrome image created by the image processor 12 forms the input to the neural network of the detection unit. The image processor 12 is operable to process the captured at least one image 14 to detect the location of substantially every fingerprint 2 located on the object 4.

The fingerprint location data created by the image processor 12 is referenced to one or more reference points of the object 4.

The fingerprint location data created by the image processor 12 includes one or more predicted locations of the, or each, fingerprint 2. The fingerprint location data created by the image processor 12 includes the probability that a fingerprint 2 is present at a particular location or zone of the object 4. The fingerprint location data created by the image processor 12 includes one or more predicted locations of the, or each, fingerprint 2 and the probability that a fingerprint 2 is present at the predicted locations or zones of the object 4.

The image processor 12 is operable to represent the location of the at least one fingerprint 2 using a feature-mapping process (as shown in Figs. 5, 6a and 6b). The feature-mapping process is operable to represent the probability of a fingerprint 2 being present at one or more discrete locations on the object 4.

The feature-mapping process is carried out using, at least in part, the fingerprint location data.

The feature-mapping process is operable to represent the probability density of one or more points, or a plurality of points, or one or more zones, or a plurality of zones, having a fingerprint 2 located there. In this arrangement, the feature mapping process can provide single, or multiple, predictions of the presence of a fingerprint 2 within the area of the fingerprint 2 itself.

The feature mapping process is operable to make a plurality of predictions of the presence or absence of a fingerprint 2 within the same fingerprint area. In this arrangement, for a given fingerprint 2, the feature mapping process makes at least two predictions covering at least two points or zones within the area defined by the fingerprint 2, rather than making a single prediction covering the area of the fingerprint 2.

The feature-mapping process is operable to represent the location of the at least one fingerprint 2 using a heat-map process representing the probability of one or more points, or a plurality of points, or one or more zones, or a plurality of zones, having a fingerprint 2 located there. The heat-map may be a probability map, or probability density map. The feature mapping process is operable to create a boundary at least partially around the location, or predicted location, of the, or each, fingerprint 2. The boundary may surround the at least one fingerprint 2. It will be understood that the boundary could trace the outline of the, or each, fingerprint 2, or could be spaced from the fingerprint 2. For example, a box could be used to indicate the location of the fingerprint 2.
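One way to realise such a probability heat-map, sketched here under the assumption that the detection unit emits boxes with confidences, is to accumulate overlapping predictions probabilistically rather than suppressing them to a single box. The function name and box format are illustrative.

```python
import numpy as np

def boxes_to_heatmap(shape, boxes):
    """Accumulate overlapping detector predictions into a probability map,
    rather than reducing them to a single box.

    `boxes` is a list of (x0, y0, x1, y1, confidence) tuples in pixel
    coordinates; overlapping predictions reinforce each other, pushing
    the map value towards 1 within the shared area.
    """
    heat = np.zeros(shape, dtype=np.float64)
    for x0, y0, x1, y1, conf in boxes:
        # Combine as independent probabilities: p = 1 - (1 - p_old)(1 - conf)
        region = heat[y0:y1, x0:x1]
        heat[y0:y1, x0:x1] = 1.0 - (1.0 - region) * (1.0 - conf)
    return heat
```

Two overlapping predictions of confidence 0.5 thus yield 0.75 in their shared area, illustrating how multiple predictions in a similar region raise the overall confidence.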

The feature-mapping process may be operable to make predictions based on one or more threshold values.

The feature-mapping process may be used to map features onto the captured at least one image 14.

The image processor 12 is operable to indicate the probability, or likelihood, of a fingerprint 2 being present at a location.

The image processor 12 includes a classification module. The classification module is based on a ResNet-50 architecture. The classification module is operable to use the fingerprint location data, at least in part, to determine which parts of the captured at least one image 14 to analyse. The classification module may be operable to use data from the detection unit(s), at least in part, to determine which parts of the captured at least one image 14 to analyse. The classification module is operable to use the feature-mapping data, at least in part, to determine which parts of the captured at least one image 14 to analyse.

The classification module is operable to compare the at least one fingerprint 2 on the captured at least one image 14 against a classification library and to output the probability that a fingerprint 2 is present at that location.

The image processor 12 is operable to create a report, or reports. The report, or reports, may indicate the location of the fingerprint 2, or fingerprints 2, on the object 4. The image processor 12 is operable to transmit the report, or reports, to the display 24.

The image capture apparatus 1 is operable to create at least one annotated image, based at least in part on the captured at least one image 14 from the image capture device 10 and the fingerprint location data from the image processor 12. The annotated image may highlight, tag, select, delimit, or otherwise indicate the location of the at least one fingerprint 2 on the object 4. The annotated image may highlight, tag, select, delimit, or otherwise indicate the location of each of the fingerprints 2 on the object 4. The annotated image may be based, at least in part, on the feature-mapping. The annotated image may include data obtained from the feature-mapping.

The image capture apparatus 1 is operable to divide the captured at least one image 14 into one or more areas of interest, each area of interest including at least one fingerprint 2.

The image capture apparatus 1 is operable to create a cropped image based at least in part on the captured at least one image 14 from the image capture device 10 and the fingerprint location data from the image processor 12. The cropped image may include at least one of the detected fingerprint(s) 2.
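Creating such a cropped image from the fingerprint location data can be as simple as slicing the detection box out of the captured image with a safety margin; the margin value and function name here are illustrative.

```python
import numpy as np

def crop_with_margin(image, box, margin=10):
    """Crop a detected fingerprint region from `image` (an H x W array),
    padding the detection box by `margin` pixels on every side and
    clamping to the image bounds. `box` is (x0, y0, x1, y1) in pixels."""
    h, w = image.shape[:2]
    x0 = max(box[0] - margin, 0)
    y0 = max(box[1] - margin, 0)
    x1 = min(box[2] + margin, w)
    y1 = min(box[3] + margin, h)
    return image[y0:y1, x0:x1]
```

The retained offsets (x0, y0) can be stored alongside the crop as the reference data locating the fingerprint relative to the object.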

The annotated image and/or the cropped image may include reference data indicative of the location of the fingerprint 2, or fingerprints 2, relative to the object 4.

The display 24 is configured to permit the user of the apparatus 1 to select one or more images for display. The images may be selected from at least one of: the captured at least one image 14, the one or more annotated images, the one or more cropped images, images including feature-mapping, images used by the image processor 12 to determine the location of the fingerprints 2, and/or any other suitable image for display.

The image processor 12 is operable to create the annotated or cropped images.

The image processor 12 is operable to create one or more further images based at least in part on the captured at least one image 14 and the fingerprint location data. The one or more further images may be displayed on the display 24.

The display 24 and the computing device 30 are located on a wall 16b of the housing 16.

The image capture apparatus 1 comprises a power supply module 32 configured to supply electrical power to the one or more light sources 8, the image capture device 10, the actuator for moving the frame member 22b between a plurality of filter positions, the image processor 12 and/or any component of the image capture apparatus 1 that requires electrical power.

The image capture apparatus 1 comprises a user input control which is implemented as part of the touch screen display 24 and the graphical user interface 26 thereof. The user input control is operable to permit the user to input one or more parameters associated with the at least one fingerprint 2 and/or the object 4 on which the at least one fingerprint 2 is located, such as the fingerprint enhancement technique used (e.g. reagents used, type of analysis to be used, such as fluorescence). The user input control is operable to permit the user to select from a list of fingerprint analysis techniques, processes and parameters. The image capture apparatus 1 is operable to automatically configure the operation of the apparatus 1 based on the data input to the user input control.
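The automatic configuration from user input can be pictured as a lookup from enhancement technique to capture settings. The pairing shown for 1,2-indandione (green illumination, as in the example of Figs. 6a and 6b below) is illustrative; the preset names, the filter labels, and the fallback behaviour are all assumptions of this sketch.

```python
# Hypothetical fallback: plain white-light imaging through a visible filter.
DEFAULT_PRESET = {"light": "white", "filter": "visible"}

# Hypothetical technique-to-settings table; the light and filter options
# echo those described above, but the pairings are illustrative only.
TECHNIQUE_PRESETS = {
    "1,2-indandione": {"light": "green", "filter": "green_560nm"},
    "untreated": DEFAULT_PRESET,
}

def preset_for(technique):
    """Return the capture settings for a named enhancement technique,
    falling back to plain white-light imaging for unknown input."""
    return TECHNIQUE_PRESETS.get(technique, DEFAULT_PRESET)
```

In the apparatus, the selected preset would drive the light sources 8, the filter elements 22a, and the imaging mode; the user can still override any of these manually.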

The user input control is operable to permit the user to adjust the operating parameters of the image capture apparatus 1. The user input control may be operable to permit the user to manually select the light source, or sources 8 to be used, the filter 22, or filter elements 22a to be used, and/or the imaging mode to be used.

The image capture apparatus 1 is operable to automatically configure the operation of the apparatus 1 based on the data input to the user input control and the user input control is operable to permit the user to manually select at least one of: the light source 8, or sources 8, to be used, the filter 22, or filter elements 22a to be used, and/or the imaging mode to be used.

The user input control is operable to activate or control, in response to user input, at least one of: the light sources 8 to be used, the filter elements 22a to be used, the imaging mode to be used, and/or the operation of the image capture device 10.

The image capture apparatus 1 is configured to automatically detect the location of the, or each, fingerprint 2 on the object 4 once the user has activated the one or more light sources 8 and the image capture device 10. The image capture apparatus 1 is configured to automatically generate output data, including any images, reports, etc, in response to the user activating the one or more light sources 8 and the image capture device 10.

An example of how the invention can be used will now be provided.

First, the user inputs information about the type of fingerprint enhancement agent used (if any) and any other relevant input parameters to the user input control 26. The user can optionally manually tweak any of the parameters associated with the operation of the apparatus 1 at this stage, such as the light sources 8 to be used.

The user will then typically use the viewer mode to view a live image of the object 4, primarily to ensure that it is positioned such that the entire major surface 4a of the object is visible. The user then activates the apparatus, typically in the first (61 MP) or second (244 MP) imaging mode, to capture at least one image 14 of the at least one fingerprint 2 on the object 4. Fig. 2 depicts such a captured image 14.

In Fig. 2, it will be apparent that one of the fingerprints 2, visible on the right-hand side of the image, is unobscured (though not easily visible), while the other fingerprint 2, in the centre, is obscured by ink: a tree having a similar pattern to a fingerprint 2 has been deliberately sketched over it to demonstrate how the image processor 12 can detect the location of the fingerprint 2 despite it being partially obscured.

Next, the image processor 12 carries out an analysis of the image 14 shown in Fig. 2 to determine the location of the fingerprints 2 by using the FRCNN neural network. These analysis steps include the creation of convolution layers shown in Figs. 3a to 3c. For brevity, only 3 of the 512 layers used with the FRCNN are shown here.

The image processor 12 then creates an image, shown in Fig. 4, which highlights a plurality of possible locations of the fingerprints 2 for each fingerprint. For example, the fingerprint on the left has three boxes indicating the potential locations of that single fingerprint 2. The fingerprint 2 in the centre, at least partially obscured by the sketched tree, has several boxes highlighting that fingerprint’s potential location. The boxes used here, an example of feature mapping, have different degrees of probability.

Fig. 5 shows a heat-map generated by the image processor 12 based on the feature mapping carried out in Fig. 4.

A technical challenge which separates this invention from the more typical object detection used in artificial intelligence applications is the lack of a silhouette. As any type of fingerprint must be detectable (including a partial fingerprint), the silhouette of a fingerprint should not be of any relevance to the image processing. This differs from most object detection methods, where a silhouette can be a key part of identification. There are also no key features for the model to extract (e.g. wheels on a car, eyes or mouth on a face), which is thought to be a technique commonly employed by artificial intelligence models. Instead, identification of fingerprints must be made from a very localised pattern. In essence, the embodiments illustrated and described here create a pattern recognition AI, rather than an object detection AI. To achieve this, the image processing is able to extract important detail from small areas, instead of simply generalising local detail, as many models do. Most AI programs will use some form of suppression or combination to reduce the output to a single predicted area. However, because the aim is to accurately locate fingerprints rather than to precisely define their boundaries, and because fingerprints do not have clear and defined edges, these 'traditional' techniques could hamper the results. The heat map approach, by contrast, facilitates a higher confidence level (combining multiple predictions in a similar area) and preserves more information than a limited output of boxes with individual confidences.

Fig. 6a shows an image of an A4 document that has been treated with 1,2-indandione and imaged under green fluorescence conditions. Fig. 6b shows a monochromatic image created from the image of Fig. 6a. Figs. 6a and 6b best highlight the field of view of the apparatus 1 and its efficacy in imaging an A4-sized object 4 comprising a plurality of fingerprints 2.

Any of the images shown in Figs. 2 to 6b can be displayed on the display 24.

Modifications and improvements may be made to the foregoing embodiments without departing from the scope of the present invention.

Reference signs:

1 image capture apparatus

2 fingerprint

4 object

4a major surface (of object 4)

6 substrate

6a major surface (of substrate 6)

6b substantially optically transparent portion (of substrate 6)

6c indicator

8 light source

8a frame

10 image capture device

12 image processor

14 image

16 housing

16a top portion (of housing 16)

16b wall (of housing 16)

18 cover member

20 mirror

22 light filter

22a filter element

22b frame member

24 display

26 graphical user interface

30 computing device

32 power supply module