Title:
SYSTEM FOR AND METHOD OF CLASSIFYING A FINGERPRINT
Document Type and Number:
WIPO Patent Application WO/2019/021264
Kind Code:
A1
Abstract:
This invention relates to a method of classifying a captured image of a fingerprint into a preselected fingerprint class, the method comprising the steps of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image. The invention extends to a related system and non-transitory computer-readable medium thereof.

Inventors:
MSIZA ISHMAEL SBUSISO (ZA)
Application Number:
PCT/IB2018/055687
Publication Date:
January 31, 2019
Filing Date:
July 30, 2018
Assignee:
MMAPRO IT SOLUTIONS PTY LTD (ZA)
International Classes:
G06K9/46; G06K9/00; G06K9/64
Foreign References:
US20150347804A12015-12-03
TW201015450A2010-04-16
US20120195478A12012-08-02
Other References:
AWAD ET AL.: "Efficient Fingerprint Classification Using Singular Point", INTERNATIONAL JOURNAL OF DIGITAL INFORMATION AND WIRELESS COMMUNICATIONS (IJDIWC), vol. 1, no. 3, 2011, pages 611 - 616, XP055570496, Retrieved from the Internet
JEYALAKSHMI ET AL.: "Fingerprint Image Classification using Singular Points and Orientation Information", INT. JOURNAL OF ENGINEERING RESEARCH AND APPLICATION, September 2017 (2017-09-01), pages 33 - 42, XP055570501, Retrieved from the Internet
Attorney, Agent or Firm:
FIANDEIRO, João Achada (ZA)
Claims:
CLAIMS

1. A method of classifying a captured image of a fingerprint into preselected fingerprint classes, the method comprising the steps of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.

2. The method of claim 1, wherein the step of detecting the first quasi-singular point includes, navigating across the orientation image in the first region thereof in a first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image, so as to locate a first location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which first location marks the presence of the first quasi-singular point which corresponds with the first singular point, and wherein the change in orientation values is measured against a line extending in the first, preferably horizontal direction.

3. The method of claim 2, wherein the step of detecting the second quasi-singular point includes, navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a second location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point, and wherein the change in orientation values is measured against a line extending in the first direction, preferably horizontal direction.

4. The method of claim 3, wherein the step of providing the orientation image comprises overlaying at least a part of the captured image of the fingerprint with block-wise orientation values.

5. The method of claim 4, wherein the first region of the orientation image is a peripheral region of the orientation image, preferably an upper peripheral region of the orientation image.

6. The method of claim 5, wherein the second region of the orientation image may be a lower peripheral region of the orientation image.

7. The method of claim 6, wherein the data relating to the detected first and second quasi-singular points includes: their respective presence or absence; their relative locations, preferably their coordinates relative to a reference coordinate defined on the orientation image; and/or the respective types of singular points.

8. The method of claim 7, including the step of detecting a third quasi-singular point that corresponds with a third singular point of the fingerprint, at the second region of the orientation image, wherein the step of detecting the third quasi-singular point includes navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.

9. The method of claim 8, including the step of detecting a fourth quasi-singular point that corresponds with a fourth singular point in the second region of the orientation image.

10. The method of claim 9, wherein the step of using the data of the detected first and second quasi-singular points to classify the fingerprint includes forming a slope of a line between the first quasi-singular point and the second quasi-singular point, if both the first and second quasi-singular points are present in the fingerprint; and utilising the slope to classify the image.

11. The method according to claim 10, including the step of determining the class of the fingerprint in the captured image if none of the first and second quasi-singular points are detected in the fingerprint.

12. A system for classifying a captured image of a fingerprint into a preselected fingerprint class, the system comprising: a processor; and a memory that is connected to the processor, the memory containing instructions which when executed by the processor cause the processor to: provide data relating to the captured image; provide an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detect a first quasi-singular point in a first region of the orientation image, which first quasi-singular point corresponds with a first singular point of the fingerprint; detect a second quasi-singular point in a second region of the orientation image, which second quasi-singular point corresponds with a second singular point of the fingerprint; and use data relating to the first and second quasi-singular points to classify the captured image.

13. The system of claim 12, wherein the instructions that cause the processor to detect the first quasi-singular point further cause the processor to navigate across the orientation image in the first region thereof in a first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image, so as to locate a first location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which first location marks the presence of the first quasi-singular point which corresponds with the first singular point, and wherein the change in orientation values is measured against a line extending in the first, preferably horizontal direction.

14. The system of claim 13, wherein the instructions that cause the processor to detect the second quasi-singular point further cause the processor to navigate across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a second location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point, and wherein the change in orientation values is measured against a line extending in the first direction, preferably horizontal direction.

15. The system of claim 14, wherein the instructions that cause the processor to provide the orientation image further cause the processor to overlay at least a part of the captured image of the fingerprint with block-wise orientation values.

16. The system of claim 15, wherein the first region of the orientation image is a peripheral region of the orientation image, preferably an upper peripheral region of the orientation image.

17. The system of claim 16, wherein the second region of the orientation image is a lower peripheral region of the orientation image.

18. The system of claim 17, wherein the data relating to the detected first and second quasi-singular points includes: their respective presence or absence; their relative locations, preferably the coordinates of the first and second quasi-singularities with respect to a reference point; and/or the respective types of singular points.

19. The system of claim 18, wherein the memory contains instructions which when executed cause the processor to detect a third quasi-singular point that corresponds with a third singular point of the fingerprint, at the second region of the orientation image, wherein the instructions causing the processor to detect the third quasi-singular point further cause the processor to navigate across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.

20. The system of claim 19, wherein the instructions further cause the processor to detect a fourth quasi-singular point that corresponds with a fourth singular point in the second region of the orientation image.

21. The system of claim 20, wherein the instructions that cause the processor to use the data of the detected first and second quasi-singular points to classify the fingerprint further cause the processor to form a slope of a line between the first quasi-singular point and the second quasi-singular point, if both the first and second quasi-singular points are present in the fingerprint; and use the slope to classify the image.

22. The system of claim 21, wherein the memory contains instructions which when executed cause the processor to determine the class of the fingerprint in the captured image if none of the first and second quasi-singular points are detected in the fingerprint.

23. A non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.

24. A method of classifying a fingerprint in a captured image into one of preselected fingerprint classes, the method comprising: providing an orientation image of the captured image of the fingerprint; detecting a first singular point in the fingerprint in a first region of the orientation image; detecting a second quasi-singularity of a second singularity of the fingerprint in a second region of the orientation image, wherein the second singular point is different from the first singular point; and using data of the detected first singular point and second quasi-singular point to classify the fingerprint.

25. The method of claim 24, wherein the step of detecting the first singular point includes conducting a plurality of horizontal row by row navigations or vertical column by column navigations with respect to a reference point defined on the orientation image, wherein the navigations are conducted in a first region of the orientation image so as to locate transition points defining locations where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute with respect to the direction of the navigation; and connecting the transition points to define a transition path, the end of which path defining the location of the first singular point.

26. The method of claim 25, wherein the step of detecting the second quasi-singular point includes determining the type of first singular point which was previously detected; and accordingly navigating across the orientation image in a second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point that is different from the first singular point, and wherein the change in orientation values is measured against a line extending in the navigation direction.

27. The method of claim 26, wherein the step of providing the orientation image comprises overlaying at least a part of the captured image of the fingerprint with block-wise orientation values.

28. The method of claim 27, wherein the first region of the orientation image extends between an upper peripheral region and middle region of the orientation image.

29. The method of claim 28, wherein the second region of the orientation image is a lower peripheral region of the orientation image.

30. The method of claim 29, including the step of, detecting a third quasi-singular point that corresponds with a third singular point of the fingerprint at the second region of the orientation image, wherein the step of detecting the third quasi-singular point includes navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.

31. The method of claim 30, including the step of detecting a fourth quasi-singular point that corresponds with a fourth singular point, in the second region of the orientation image.

32. The method of claim 31, wherein the step of using the data of the detected first singular point and second quasi-singular point to classify the fingerprint includes forming a slope of a line between the first singular point and the quasi-singular point; and utilising the slope to classify the fingerprint.

33. The method of claim 32, including the step of determining the class of the fingerprint in the captured image if none of the first singular point and quasi-singular point are detected in the fingerprint.

34. A system for classifying a fingerprint in a captured image into one of preselected fingerprint classes, the system comprising: a processor which is connected to the database; and a memory that is connected to the processor, the memory containing instructions which when executed by the processor cause the processor to: provide data relating to the captured image; provide an orientation image of the captured image of the fingerprint; detect a first singular point in the fingerprint in a first region of the orientation image; detect a second quasi-singularity of a second singularity of the fingerprint in a second region of the orientation image, in which the second singular point is different from the first singularity; and use data of the detected first singular point and second quasi-singular point to classify the fingerprint.

35. A non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing an orientation image of the captured image of the fingerprint; detecting a first singular point in the fingerprint in a first region of the orientation image; detecting a second quasi-singularity of a second singularity of the fingerprint in a second region of the orientation image, wherein the second singular point is different from the first singular point; and using data of the detected first singular point and second quasi-singular point to classify the fingerprint.

Description:
SYSTEM FOR AND METHOD OF CLASSIFYING A FINGERPRINT

FIELD OF INVENTION

This invention relates to fingerprint classification. More particularly, but not exclusively, this invention relates to a system for and a method of classifying a captured image of a fingerprint into preselected classes.

BACKGROUND OF INVENTION

When authenticating subjects through the use of their fingerprints, they inevitably get exposed to one of two types of transactions: (i) fingerprint verification, and (ii) fingerprint identification.

With fingerprint verification, a subject first claims a particular identity by, for example, entering a unique PIN or presenting a personalized card. The recognition system then extracts the fingerprint template associated with that PIN or card, and compares it to the template generated from the fingerprint presented by the subject. It is a 1:1 comparison.

In a fingerprint identification transaction, a subject does not claim any identity. The individual merely presents their fingerprint to the recognition system, for the system to identify the individual. The system then has to go through the entire database of stored fingerprint templates, comparing the template generated from the presented fingerprint with all the stored templates in the database. It is a 1:M comparison, where M is the total number of records in the database. The database search time, T, is directly proportional to the value of M. This implies that, if the number of records in the template database is significantly large, the system takes longer to generate a result. For a database with a large value of M, it is necessary to have it fragmented into a few partitions. This is done so that, when doing a database search, it does not become necessary to go through the entire database. A search through one of the partitions should, in most transactions, be able to generate the required result. This, obviously, minimizes the value of T.

These database partitions are known as fingerprint classes, and they are determined by a fingerprint classifier, being one of the modules of an automated fingerprint recognition system, by performing fingerprint analytics on captured images of fingerprints. Examples of such fingerprint classes include the Left Loop (LL), Right Loop (RL), Central Twins (CT), Tented Arch (TA), and Plain Arch (PA). In this regard, fingerprints that belong to the LL class have a ridge pattern that emanates from the left-hand side of the fingerprint, flows inwards, and returns in the same direction. Fingerprints that belong to the RL class have a ridge pattern that emanates from the right-hand side of the fingerprint, flows inwards, and returns in the same direction.

Fingerprints that belong to the CT class have a circular ridge pattern. Fingerprints that belong to the TA class have a ridge pattern that emanates from one side of the fingerprint, and returns in the opposite direction. The convex ridges in the middle of a fingerprint that belongs to the TA class have significant curvature. Fingerprints that belong to the PA class have a ridge pattern that emanates from one side of the fingerprint, and returns in the opposite direction. The convex ridges in the middle of a fingerprint that belongs to the PA class have insignificant curvature.

In order to classify a captured fingerprint image into one of the preselected fingerprint classes, it is well known to make use of fingerprint characteristics or landmarks known as fingerprint singular points or singularities. The terms 'singularity' and 'singular point' are often used by those skilled in the art to refer to a fingerprint core and a fingerprint delta. A fingerprint core is forensically defined as the inner-most turning point of a fingerprint loop. A fingerprint delta is a point where the fingerprint ridges tend to form a triangular shape. Accordingly, these terms should be understood, for purposes of this specification, as embracing such meaning.

A problem associated with conventional singularity detection techniques is that, although a particular subject's fingerprint contains both a fingerprint core and a fingerprint delta, for example, in some instances it may happen that the captured image thereof does not include one of the delta and core. For such cases, the default classification rule would fail to order the captured image into the correct fingerprint class.

Furthermore, the process of accurately determining the location of singular points is a time-consuming and, relatively, computationally expensive exercise.

Accordingly, it is an object of the present invention to provide a method of and system for classifying a captured image of a fingerprint into preselected classes with which the applicant believes the aforementioned problems may at least be alleviated, and/or which may provide a useful alternative for the known methods and/or systems.

SUMMARY OF INVENTION

According to a first aspect of the invention there is provided a method of classifying a captured image of a fingerprint into a preselected class, the method comprising the steps of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.

In an embodiment, the step of detecting the first quasi-singular point may include, navigating across the orientation image in the first region thereof in a first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image, so as to locate a first location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which first location marks the presence of the first quasi-singular point which corresponds with the first singular point, and wherein the change in orientation values is measured against a line extending in the first, preferably horizontal direction.

In an embodiment, the step of detecting the second quasi-singular point may include, navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a second location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point, and wherein the change in orientation values is measured against a line extending in the first direction, preferably horizontal direction.

In an embodiment, the step of providing the orientation image may comprise overlaying at least a part of the captured image of the fingerprint with a matrix or an array of non-overlapping pixel blocks, i.e. overlaying at least a part of the captured image of the fingerprint with block-wise orientation values.

Accordingly, in an embodiment, the orientation image may include a grid or an array comprising a plurality of horizontally spaced pixels that define rows and a plurality of vertically spaced pixels that define columns.
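
Purely by way of illustration, and not as a definitive implementation of the invention, the row navigation described in the detection steps above may be sketched as follows in Python. The function name, the use of 90 degrees as the boundary between acute and obtuse orientations, and the synthetic orientation values are assumptions made for this sketch only.

    import numpy as np

    def find_row_transition(orientation_row, change="acute_to_obtuse"):
        # Scan one row of block-wise orientation values (degrees, 0-180) and
        # return the index of the first block where adjacent values switch
        # between acute (<90) and obtuse (>90), or None if no switch occurs.
        for col in range(len(orientation_row) - 1):
            left, right = orientation_row[col], orientation_row[col + 1]
            if change == "acute_to_obtuse" and left < 90 < right:
                return col + 1  # location marking the quasi-singular point
            if change == "obtuse_to_acute" and left > 90 > right:
                return col + 1
        return None

    # Illustrative top row of an orientation image:
    top_row = np.array([30.0, 45.0, 60.0, 80.0, 100.0, 120.0, 150.0])
    print(find_row_transition(top_row, "acute_to_obtuse"))  # prints 4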

In an embodiment, the first region of the orientation image may be a peripheral region of the orientation image, preferably an upper peripheral region of the orientation image.

In an embodiment, the second region of the orientation image may be a lower peripheral region of the orientation image.

In an embodiment, the data relating to the detected first and second quasi-singular points may include: their respective presence or absence; their relative locations, preferably their coordinates relative to a reference point defined on the orientation image; and/or the respective types of singular points.

In an embodiment, the method may also include the step of, detecting a third quasi-singular point that corresponds with a third singular point of the fingerprint, at the second region of the orientation image, wherein the step of detecting the third quasi-singular point includes navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.

In an embodiment, the method may also include the step of, detecting a fourth quasi-singular point that corresponds with a fourth singular point in the second region of the orientation image.

In an embodiment, the step of using the data of the detected first and second quasi-singular points to classify the fingerprint may include forming a slope of a line between the first quasi-singular point and the second quasi-singular point if both the first and second quasi-singular points are present in the fingerprint; and utilising the slope to classify the fingerprint.

In an embodiment, the step of utilizing the data of the detected first and second quasi-singular points to classify the fingerprint may include the steps of determining x-coordinate positions of the first and second quasi-singular points; determining the difference between the x-coordinate positions of the first and second quasi-singular points to determine a first value; comparing the first value to a predefined threshold value; and classifying the fingerprint into one of the preselected classes when the first value is less than the predefined threshold value.
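
A minimal sketch of this x-coordinate threshold test follows, assuming block-grid x-coordinates. The threshold value of 3 and the association of a small difference with the Tented Arch (TA) class (suggested by the class descriptions given later in this specification) are illustrative assumptions, not values taken from the patent.

    def classify_by_x_difference(quasi_core_x, quasi_delta_x, threshold=3):
        # Compare the difference between the x-coordinates of the two
        # quasi-singular points against a predefined threshold; a small
        # difference (core roughly above the delta) is read here as the
        # Tented Arch (TA) class.
        first_value = abs(quasi_core_x - quasi_delta_x)
        return "TA" if first_value < threshold else "not TA by this test"

    print(classify_by_x_difference(9, 10))   # small difference -> "TA"
    print(classify_by_x_difference(3, 14))   # large difference -> "not TA by this test"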

In an embodiment, the method may include the step of determining the class of the fingerprint in the captured image if none of the first and second quasi-singular points are detected in the fingerprint.

In an embodiment, each of the first, second, third and fourth singular points may either be a fingerprint core or a fingerprint delta.

According to a second aspect of the invention there is provided a system for classifying a captured image of a fingerprint into a preselected class, the system comprising: a processor; and a memory that is connected to the processor, the memory containing instructions which when executed by the processor cause the processor to: provide data relating to the captured image of the fingerprint; provide an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detect a first quasi-singular point in a first region of the orientation image, which first quasi-singular point corresponds with a first singular point of the fingerprint; detect a second quasi-singular point in a second region of the orientation image, which second quasi-singular point corresponds with a second singular point of the fingerprint; and use data relating to the detected first and second quasi-singular points to classify the captured image.

According to a third aspect of the invention, there is provided a non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.

According to a fourth aspect of the invention, there is provided a method of classifying a fingerprint in a captured image into one of preselected fingerprint classes, the method comprising: providing an orientation image of the captured image of the fingerprint; detecting a first singular point in the fingerprint in a first region of the orientation image; detecting a second quasi-singularity of a second singular point of the fingerprint in a second region of the orientation image, wherein the second singular point is different from the first singular point; and using data of the detected first singular point and second quasi-singular point to classify the fingerprint.

In an embodiment, the step of detecting the first singular point may include, making horizontal row by row navigations or vertical column by column navigations, in a first region of the orientation image, so as to locate transition points defining locations where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute with respect to the direction of the navigation; and connecting the transition points to define a transition path, the end of which path defining the location of the first singular point.
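
A rough sketch of this transition-path tracing is given below. The 90-degree acute/obtuse boundary, the simplification of taking at most one transition point per row, and the synthetic data are assumptions made for illustration only.

    import numpy as np

    def trace_transition_path(orientation):
        # For each row, record the first point where adjacent blocks switch
        # between acute and obtuse orientation (a transition point), then
        # connect these points into a transition path; the end of the path
        # approximates the location of the first singular point.
        path = []
        for r in range(orientation.shape[0]):
            for c in range(orientation.shape[1] - 1):
                a, b = orientation[r, c], orientation[r, c + 1]
                if (a < 90 < b) or (a > 90 > b):
                    path.append((r, c + 1))
                    break
        return path, (path[-1] if path else None)

    orientation = np.array([[30, 60, 80, 110, 140],
                            [35, 70, 100, 120, 150],
                            [40, 85, 105, 130, 155]], dtype=float)
    path, singular_point = trace_transition_path(orientation)
    print(path, singular_point)  # e.g. [(0, 3), (1, 2), (2, 2)] (2, 2)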

In an embodiment, the step of detecting the second quasi-singular point may include the step of determining the type of first singular point which was previously detected; and accordingly navigating across the orientation image in a second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point that is different from the first singular point, and wherein the change in orientation values is measured against a line extending in the navigation direction (i.e. first direction, preferably horizontal direction).

In an embodiment, the step of providing the orientation image may comprise overlaying at least a part of the captured image of the fingerprint with a matrix or an array of non-overlapping pixel blocks, i.e. overlaying at least a part of the captured image of the fingerprint with block-wise orientation values.

Accordingly, in an embodiment, the orientation image may include a grid or an array comprising a plurality of horizontally spaced pixels that define rows and a plurality of vertically spaced pixels that define columns.

In an embodiment, the first region of the orientation image may extend between an upper peripheral region and middle region of the orientation image.

In an embodiment, the second region of the orientation image may be a lower peripheral region of the orientation image.

In an embodiment, the method may also include the step of, detecting a third quasi-singular point that corresponds with a third singular point of the fingerprint, at the second region of the orientation image, wherein the step of detecting the third quasi-singular point includes navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.

In an embodiment, the method may also include the step of, detecting a fourth quasi-singular point that corresponds with a fourth singular point in the second region of the orientation image.

In an embodiment, the step of using the data of the detected first singular point and second quasi-singular point to classify the fingerprint may include forming a slope of a line between the first singular point and the quasi-singular point; and utilising the slope to classify the fingerprint.

In an embodiment, the method may include the step of determining the class of the fingerprint in the captured image if none of the first singular point and quasi-singular point are detected in the fingerprint.

In an embodiment, the first singular point may either be a fingerprint core or a fingerprint delta.

According to a fifth aspect of the invention, there is provided a system for classifying a fingerprint in a captured image into one of preselected fingerprint classes, the system comprising: a processor; and a memory that is connected to the processor, the memory containing instructions which when executed by the processor cause the processor to: provide data relating to the captured image of the fingerprint; provide an orientation image of the captured image of the fingerprint; detect a first singular point in the fingerprint in a first region of the orientation image; detect a second quasi-singularity of a second singularity of the fingerprint in a second region of the orientation image, in which the second singular point is different from the first singularity; and use data of the detected first singular point and second quasi-singular point to classify the fingerprint.

According to a sixth aspect of the invention, there is provided a non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing an orientation image of the captured image of the fingerprint; detecting a first singular point in the fingerprint in a first region of the orientation image; detecting a second quasi-singularity of a second singularity of the fingerprint in a second region of the orientation image, wherein the second singular point is different from the first singular point; and using data of the detected first singular point and second quasi-singular point to classify the fingerprint.

The other features of the invention are set out in the detailed description and the claims.

BRIEF DESCRIPTION OF DRAWINGS

The objects of this invention and the manner of obtaining them, will become more apparent, and the invention itself will be better understood, by reference to the following description of embodiments of the invention taken in conjunction with the accompanying diagrammatic drawings, wherein:

FIG. 1 shows a flow diagram illustrating steps of a method of classifying a captured image of a fingerprint into preselected classes according to the invention;

FIG. 2 shows an example captured image of a fingerprint;

FIG. 3 shows an example orientation image representation of a fingerprint;

FIG. 4 shows an example algorithm that could be executed in order to determine a quasi-location of a convex core in the image of FIG. 3;

FIG. 5 shows an example algorithm that could be executed in order to determine a quasi-location of a concave core in the image of FIG. 3;

FIG. 6 shows an example algorithm that could be executed in order to determine a quasi-location of a delta in the image of FIG. 3;

FIG. 7 shows a high-level block diagram illustrating a system for classifying a captured image of a fingerprint into preselected classes according to the invention; and

FIGS. 8 - 13 show example uses of the invention as herein described.

DETAILED DESCRIPTION OF AN EXAMPLE EMBODIMENT

The following description of the invention is provided as an enabling teaching of the invention. Those skilled in the relevant art will recognise that many changes can be made to the embodiment described, while still attaining the beneficial results of the present invention. It will also be apparent that some of the desired benefits of the present invention can be attained by selecting some of the features of the present invention without utilising other features. Accordingly, those skilled in the art will recognise that modifications and adaptations to the present invention are possible and can even be desirable in certain circumstances, and are a part of the present invention. Thus, the following description is provided as illustrative of the principles of the present invention and not a limitation thereof.

Referring to the figures, in which like features are indicated by like numerals, example embodiments of a method of and a system for classifying a captured image of a fingerprint into preselected classes are generally designated by the reference numeral 10 in FIGs. 1 and 7.

FIG. 1 shows a flow diagram of the method 10 of classifying a captured image of a fingerprint into preselected classes. The method 10 comprises, at 12, providing a captured image of a fingerprint. The captured image is an electronic representation of a subject's fingerprint. It should be appreciated that the captured image could have been captured by a fingerprint reader, or by scanning a physical representation of a fingerprint. At 14, data relating to an orientation image of at least part of the captured image is generated. The orientation image comprises a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image. At 16, quasi-singular points of the fingerprint are detected in first and second peripheral regions of the orientation image.

As described further below, a plurality of quasi-singularity detection modules (16.1, 16.n) could be executed in relation to the orientation image in order to detect the plurality of quasi-singular points, if they exist. At 18, data of the quasi-singularities, such as the coordinates of the detected quasi-singularities with respect to a predefined reference point defined on the orientation image, is utilised to classify the fingerprint in the captured image.
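
Purely as an illustration of how the modules referenced in FIG. 1 might be chained together, and with every function name being a placeholder rather than the patent's own interface, a high-level sketch of the pipeline is:

    def classify_fingerprint(captured_image, estimate_orientation,
                             detect_quasi_singularities, classify_from_data):
        # High-level pipeline mirroring steps 12 to 18 of FIG. 1; the three
        # callables stand in for the orientation-image module (14), the
        # quasi-singularity detection module (16) and the classification
        # module (18), and are assumptions for illustration only.
        orientation_image = estimate_orientation(captured_image)      # step 14
        quasi_points = detect_quasi_singularities(orientation_image)  # step 16
        return classify_from_data(quasi_points)                       # step 18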

FIG. 2 shows a first example captured image 20 of a fingerprint 22. The fingerprint 22 comprises alternating ridges (24.1, 24.n), which are represented by the dark lines, and furrows 26, which are indicated by white spaces in between the dark lines of the ridges 24. FIG. 3 is an example orientation image 28 of another fingerprint, being a fingerprint region of interest (ROI) overlaid with block-wise orientation values 30 obtained from a preceding ridge orientation estimation module. The orientation image 28 illustrates the local orientation of the fingerprint ridges and is seen as a matrix of pixels 32 that represent the local orientation of every ridge in a captured image of a fingerprint. As can be seen in FIG. 3, the angle θ denotes an acute angle formed by a ridge with reference to a horizontal line A, and the angle β denotes an obtuse angle formed by a ridge with reference to the horizontal line A. Part of the method includes generating data that relates to the orientation image 28 that represents at least part of a captured image of a fingerprint.
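
The specification does not spell out how the preceding ridge orientation estimation module works. One commonly used, gradient-based way of producing such block-wise orientation values is sketched below under that assumption; it is offered only to illustrate what an orientation image such as that of FIG. 3 contains, and is not the patent's own estimation module.

    import numpy as np

    def blockwise_orientation(image, block=16):
        # Estimate one ridge orientation (degrees, 0-180) per non-overlapping
        # block of pixels by averaging doubled gradient angles within the
        # block (a standard technique in fingerprint image processing).
        gy, gx = np.gradient(image.astype(float))
        rows, cols = image.shape
        out = np.zeros((rows // block, cols // block))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
                sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
                vx = np.sum(2.0 * sx * sy)
                vy = np.sum(sx ** 2 - sy ** 2)
                theta = 0.5 * np.arctan2(vx, vy)                 # dominant gradient angle
                out[i, j] = (np.degrees(theta) + 90.0) % 180.0   # ridge is orthogonal to it
        return out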

The main reason behind looking at an orientation image 28 per block of pixels 32 (instead of individual pixels) is that a single fingerprint ridge has a thickness that is more than one pixel width. An immediate advantage of a block-wise (i.e. pixel block to pixel block) approach is that the effects of spurious orientation values (at a pixel level) are cancelled out. For analysis purposes, each pixel-block 32 is treated as a pixel, and a collection of horizontal pixel blocks 32 is treated as a row (R), while a collection of vertical pixel blocks is treated as a column (C). The orientation image 28 thus includes a grid comprising a plurality of rows (for example, 20 rows in total as shown in FIG. 3) and columns (for example, 20 columns in total as shown in FIG. 3).

The orientation image 28 is accordingly provided with a point of origin or reference point, C, at its upper, left corner. The point of origin has the coordinates (0,0) from which a navigation for locating quasi-singularities (i.e. quasi-singular points) will emanate, as will be described below. The orientation image 28 has a second point D at its upper, right corner, having the coordinates (20,0). The orientation image 28 has a third point E at its lower, left corner, having the coordinates (0,20). Furthermore, the orientation image 28 has a fourth point F at its lower, right corner, having the coordinates (20,20).

The orientation image 28 includes two cores 34, 36 and one delta 38, which are detected by a conventional singularity detection module (not shown). In order to detect the singular points 34, 36, 38, the entire image 28 would have to be processed (as mentioned above, by the conventional singularity detection module), which can be a computationally expensive process, especially when many images need to be processed. This is largely due to its repetitive, iterative character. It is also important to note that singular points 34, 36, 38 are, in the conventional application, used as inputs to a conventional model-based fingerprint classification module. The said classifier simply uses the analytical geometry of these singular points 34, 36, 38 to order the fingerprint into one of the five fingerprint classes. This geometry - in many instances - does not have to be exact, as long as the structural characteristics are preserved. This suggests that the location of these singular points 34, 36, 38 does not have to be exact.

In this regard, features that serve as representatives of the singular points 34, 36, 38 could equally be used in classifying a fingerprint. In an application according to the present invention, these features are - collectively - introduced as quasi-singularities or quasi-singularity points. The representation of a forensic core is referred to as a quasi-core, while the representation of a forensic delta is referred to as a quasi-delta.

For purposes of the specification, a quasi-location of a core should be understood to mean a quasi-core, and vice versa. Similarly, for purposes of the specification, a quasi-location of a delta should be understood to mean a quasi-delta, and vice versa.

The method 10 of classifying a fingerprint in accordance with the present invention includes navigating across an orientation image 28, preferably starting at the point of origin C, marked in FIG. 3 with coordinates (0,0), as mentioned before, in order to locate a first quasi-location 44 of the core 34 (i.e. the quasi-core), which first quasi-location represents a first, forensic singular point (i.e. a forensic core). As can be seen in FIG. 3, a first row of the pixel blocks 30 is located in a first peripheral region 40 of the orientation image 28, wherein the first peripheral region 40 delineates an upper periphery of the orientation image 28. The first row of the pixel blocks extends from the point of origin C (0,0) and terminates at the second point D (20,0). The method 10 accordingly includes the step of detecting the first quasi-location 44 (i.e. first quasi-singularity), which step includes navigating across the orientation image 28 in a first direction (being the x-direction of a Cartesian x-axis) from the point of origin C (0,0) up to the second point D (20,0), to obtain a first location where adjacent pixel blocks 32 represent a change in orientation values 30 from acute (i.e. θ) to obtuse (i.e. β), measured against the horizontal line A extending in the first direction.

Similarly, according to the method 10, in a second peripheral region 42 of the orientation image 28, being a lower periphery of the orientation image 28, a second quasi-location 46 (i.e. second quasi-singularity point) of the core 36, being a second singular point, is detected. The lower periphery 42 is a bottom row of the image 28 which comprises a row of pixel blocks spanning between the point E (0,20) and F (20,20). The step of detecting the second quasi-location 46 (i.e. second quasi-singularity point) includes, navigating in the first direction (i.e. from point E to point F, or vice versa), to obtain a second location where adjacent pixels 32 represent a change in orientation 30 from obtuse-to-acute, measured against the horizontal line extending in the first direction (i.e. from left to right).

Also, according to the method 10, in the second peripheral region 42 of the orientation image 28, a third quasi-location 48 of the delta 38, being a third singular point, is located. The step of detecting the third quasi-location 48 includes, navigating in the first direction, to obtain a third location where adjacent pixels 32 represent a change in orientation 30 from acute-to-obtuse, measured against the horizontal line A extending in the first direction. Representations of the cores 34, 36 and delta 38, being quasi-cores and a quasi-delta, are, respectively, indicated by reference numerals 44, 46 and 48 in FIG. 3.

Careful inspection of FIG. 3 leads to an understanding of the following properties:

- Orientation values 30 range from 0 degrees to 180 degrees, measured against a horizontal plane (denoted by the horizontal line A), counterclockwise, from left to right.

- In most instances, the orientation of a fingerprint ridge is either acute (i.e. less than 90 degrees) or obtuse (greater than 90 degrees).

- When moving across the image 28, from left to right, a path 50 is formed at the point where the orientation values 30 change from acute-to-obtuse or obtuse-to-acute. It is known to refer to this path 50 as a transition line 50.

- A transition line 50 is a trajectory of transition points. A transition point is formed where there is a change in orientation values (acute-to-obtuse or obtuse-to-acute) when moving horizontally from one pixel-block 32 to the next 32.

- The fingerprint of the type shown in FIG. 3 has two forensic cores 34, 36 (one convex and one concave), two quasi-cores 44, 46 (one convex and one concave), one forensic delta 38, and one quasi-delta 48.

- It is convenient to track the forensic convex core 34 and convex quasi-core 44 from the top of the orientation image 28, while it is convenient to track the forensic delta 38, quasi-delta 48, forensic concave core 36 and the concave quasi-core 46 from the bottom of the orientation image 28.

- An acute change in orientation values 30 from one pixel-block 32 to the next/adjacent pixel block 32 in the same row indicates a transition point on a transition line 50 that leads to a core.

- An acute change (absolute) in orientation values 30 from one pixel-block 32 to the next 32 along the same row marks the location of a quasi-core 44, 46, that is, a quasi-core 44, 46 is located at the beginning of a transition line 50 that leads to a forensic core 34, 36. The absolute difference between these two adjacent transition blocks 32, Odiff, is less than a heuristically determined threshold, Tquasi-core (a minimal sketch of this transition test is given after this list).

- A forensic core 34, 36 is located at the end of the transition line 50, a point where there is a high degree of acuteness in the orientation value 30 change between the two pixel-blocks 32 that form the transition point. The absolute difference between these two transition blocks 32, Odiff, is less than a heuristically determined threshold, Tquasi-core.

- An obtuse change in orientation values 30 from one pixel-block 32 to the next 32 indicates a transition point on a transition line 50 that leads to a forensic delta 38.

- An obtuse change in orientation values 30 from one pixel-block 32 to the next 32 in the same row located at a second periphery of the orientation image 28, marks the location of a quasi-delta 48, that is, a quasi-delta 48 is located at the beginning of a transition line 50 that leads to a forensic delta 38. The absolute difference between these two transition blocks 32, Odiff, is less than a heuristically determined threshold, Tquasi-delta.

- A forensic delta 38 is located at the end of the transition line 50, a point where there is a high degree of obtuseness in the orientation value 30 change between the two pixel-blocks 32 that form the transition point. The absolute difference between these two transition blocks 32, Odiff, is less than a heuristically determined threshold, Tdelta.
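
The following minimal sketch illustrates the transition-point test and the core-line/delta-line distinction described in the properties above. The use of 90 degrees as the boundary between an 'acute change' and an 'obtuse change' is an assumption for this sketch; the thresholds Tquasi-core and Tquasi-delta are heuristic and no concrete values are given in the text, so none are assumed here.

    def odiff(a, b):
        # Absolute orientation difference between two adjacent pixel blocks.
        return abs(a - b)

    def is_transition(a, b):
        # A transition point exists where adjacent blocks in a row switch
        # between an acute (<90 deg) and an obtuse (>90 deg) orientation.
        return (a < 90 < b) or (a > 90 > b)

    def transition_kind(a, b):
        # Per the properties above: an acute change lies on a transition line
        # leading to a core, an obtuse change on one leading to a delta.
        if not is_transition(a, b):
            return None
        return "core-line" if odiff(a, b) < 90 else "delta-line"

    print(transition_kind(80, 100))   # small (acute) change  -> "core-line"
    print(transition_kind(20, 160))   # large (obtuse) change -> "delta-line"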

The method 10 further uses data relating to the determined quasi-locations to classify the captured fingerprint into the correct class. Such data includes, but is not limited to: the presence or absence of the quasi-singular points; the relative locations of the quasi-singularities on the orientation image with respect to, for example, a predefined point of reference such as the point of origin C; and the type(s) of detected quasi-singular point(s).

When the respective locations (such as the coordinates) of quasi-singular points have been established, a gradient (slope) of a vector or line linking the points is determined as part of classifying the fingerprint (for fingerprints that belong to the RL and LL classes). Fingerprints that belong to the CT class are classified based on the mere presence of quasi-singular points, while images that belong to the TA class are classified on the basis of the relationship between the x-coordinate of a quasi-core (if it exists) and a quasi-delta (if it exists). Images that belong to the PA class are classified on the basis of the absence of quasi-singular points.
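
A sketch of these decision rules, under stated assumptions, follows. The counting rule used for the CT class, the x-difference threshold, and the slope sign convention for distinguishing LL from RL are illustrative assumptions only and are not taken from the patent.

    def classify_from_quasi_singularities(quasi_cores, quasi_deltas, x_threshold=3):
        # Decision rules paraphrasing the paragraph above; points are
        # (x, y) block coordinates relative to the point of origin C.
        if not quasi_cores and not quasi_deltas:
            return "PA"                                    # no quasi-singularities present
        if len(quasi_cores) + len(quasi_deltas) >= 3:
            return "CT"                                    # several quasi-singularities present
        if quasi_cores and quasi_deltas:
            (cx, cy), (dx, dy) = quasi_cores[0], quasi_deltas[0]
            if abs(cx - dx) < x_threshold:
                return "TA"                                # quasi-core roughly above quasi-delta
            slope = (dy - cy) / float(dx - cx)             # slope of the core-delta line
            return "LL" if slope > 0 else "RL"             # sign convention assumed
        return "undetermined"

    print(classify_from_quasi_singularities([(10, 2)], [(16, 18)]))  # -> "LL" under these assumptions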

A quasi-core, as mentioned above, is defined as the transition point located at the beginning of a transition line that leads to a forensic core. Similarly, a quasi-delta is defined as the transition point located at the beginning of a transition line that leads to a forensic delta.

The example algorithm, shown in FIG. 4, outlines the procedure used to detect a convex quasi-core 44, if it exists, in a given fingerprint. For the orientation image 28, the instructions in the algorithm are executed only once before getting the location of the convex quasi-core 44.

Similarly, the example algorithm, shown in FIG. 5, outlines the procedure used to detect a concave quasi-core 46, if it exists, in a given fingerprint. For the orientation image 28, the instructions in each algorithm are executed only once before obtaining the location (such as x-coordinates) of the convex or concave quasi-core 46. Also, the example algorithm, shown in FIG. 6, outlines the procedure used to detect a quasi-delta 48, if it exists, in a given fingerprint. For the image 28, the instructions in the algorithm are executed only once before obtaining the location of the quasi-delta 48.
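
Read together with the description of FIG. 3, the three procedures may be sketched as follows. Which row is scanned for which feature, and the direction of the orientation change sought, follow the description above; the 90-degree acute/obtuse boundary and the synthetic example data are assumptions, and the sketch is not a reproduction of the algorithms of FIGS. 4 to 6.

    import numpy as np

    def scan_row(row, change):
        # Return the first column where adjacent blocks change in the stated
        # direction ("acute_to_obtuse" or "obtuse_to_acute"), else None.
        for c in range(len(row) - 1):
            a, b = row[c], row[c + 1]
            if change == "acute_to_obtuse" and a < 90 < b:
                return c + 1
            if change == "obtuse_to_acute" and a > 90 > b:
                return c + 1
        return None

    def detect_quasi_singularities(orientation):
        # The convex quasi-core is sought along the top row (acute-to-obtuse
        # change), while the concave quasi-core (obtuse-to-acute) and the
        # quasi-delta (acute-to-obtuse) are sought along the bottom row.
        top, bottom = orientation[0, :], orientation[-1, :]
        return {
            "convex_quasi_core": scan_row(top, "acute_to_obtuse"),
            "concave_quasi_core": scan_row(bottom, "obtuse_to_acute"),
            "quasi_delta": scan_row(bottom, "acute_to_obtuse"),
        }

    demo = np.array([[30, 60, 80, 110, 140],
                     [40, 70, 95, 120, 150],
                     [150, 120, 60, 100, 130]], dtype=float)
    print(detect_quasi_singularities(demo))
    # e.g. {'convex_quasi_core': 3, 'concave_quasi_core': 2, 'quasi_delta': 3}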

According to another aspect of the invention, the orientation image 28 for a portion of the fingerprint in the captured image may be provided, and the method may include detecting a forensic singular point, such as a first forensic core 34 and a second forensic core 36, in accordance with the conventional method of detecting the singular points in the fingerprint. The conventional detection module (not shown) may accordingly only be able to detect the first and second forensic cores 34, 36. In this regard, the method in accordance with the present invention would comprise establishing the type of detected forensic singularities 34, 36 in the orientation image 28, and accordingly detecting a third quasi-singular point 48 of a third forensic singular point 38 which is different in character to the previously detected first and second forensic singular points. Accordingly, the step of detecting the third quasi-singular point 48 may include navigating horizontally across a second periphery of the orientation image 28 (i.e. between points E and F) as described above to locate a transition point of the orientation values, which transition point would indicate the presence of the third quasi-singular point 48. The method would accordingly include the step of using the data of the detected first and second forensic singularities along with the detected third quasi-singularity to establish that the fingerprint has three singularities, and accordingly classify the fingerprint in the CT class.

The method mentioned in this version can equally be used in establishing the classes of other fingerprints whose forensic singularities have been established by the conventional singularity detection module (not shown). For example, the provided orientation image 28 may have been processed by the traditional singularity detection module (not shown) to establish the location of the forensic singularity, in most cases being a forensic core. Accordingly, the same orientation image 28 may be processed through the quasi-singularity detection module 16 according to the present invention, which would navigate across a second peripheral region of the orientation image 28 to locate a second quasi-singular point, typically a quasi-delta, in the orientation image 28, which second quasi-singularity corresponds with a second forensic singularity (i.e. a forensic delta) that is different from the first singular point (i.e. the forensic core). Accordingly, the data (such as the coordinates) of the located second quasi-singular point and the data of the first singular point (as detected by the traditional singularity detection module) can be used to form a slope between the first singular point and the second quasi-singular point in order to properly classify the fingerprint.

FIG. 7 shows a high-level block diagram illustrating a system 10 for classifying a captured image of a fingerprint into preselected fingerprint classes. A plurality of distributed fingerprint readers 52.1 to 52.n capture a plurality of images relating to fingerprints. Data relating to a captured image of a fingerprint could then be subjected to a plurality of computing processes 54, including, but not limited to, contrast enhancement and foreground segmentation, the results of which are then stored in a first database 56. The system 10 preferably comprises the first database 56 and a processor 58 which is connected to the first database 56. The system 10 further comprises a memory (not shown) which is connected to the processor 58, is configured to utilise the data relating to the fingerprint image, and contains several instructions which can be executed by the processor 58. The algorithm comprises an orientation image module 14 for providing an orientation image of at least part of the captured image of the fingerprint 12. The provision of the orientation image may comprise overlaying a grid of pixels, comprising rows and columns of non-overlapping pixel blocks, over the captured image of the fingerprint. The algorithm further comprises a quasi-singularity detection module 16 for detecting quasi-singular points of the fingerprint in the orientation image. The algorithm yet further comprises a classification module 18 for utilizing data relating to the detected quasi-singular points to classify the captured image 12 into preselected fingerprint classes. Data relating to the classification of the image is stored in a second database 60. It will be appreciated that a backend 62 for an operator may be utilised, comprising the processor 58, a memory (not shown) and the second database 60. However, other embodiments may be possible wherein the first and second databases 56, 60 form a single database. Backend 62 may hence receive data relating to a plurality of scanned fingerprint images which were captured by the plurality of fingerprint readers 52.1 to 52.n.

The effectiveness of the invention, as set out above, will now be described with reference to a few non-limiting examples.

Referring firstly to FIG. 8A, there is shown a region of interest (i.e. ROI) extracted from a fingerprint with one forensic core and no forensic delta. The fingerprint sensor was not able to capture the forensic delta of this fingerprint. In spite of this reality, the quasi-singularity detection module 16 was able to detect a quasi-delta 64 and a quasi-core 66. As seen in FIG. 8B, using the detected quasi-locations 64, 66 of the core and delta, the fingerprint classification module was able to order this fingerprint into the correct LL class.

Turning to FIG. 9A, there is shown a ROI extracted from a fingerprint with one forensic core and no forensic delta. The fingerprint sensor was not able to capture the forensic delta of this fingerprint. In spite of this reality, the quasi-singularity detection module was able to detect a quasi-delta 68 and a quasi-core 70. As seen in FIG. 9B, using the detected quasi-locations 68, 70 of the core and delta, the fingerprint classification module was able to order this fingerprint into the correct RL class.

FIG. 10A shows a ROI extracted from a fingerprint with two forensic cores and no forensic delta. The fingerprint sensor was not able to capture the two forensic deltas of this fingerprint. In spite of this reality, the quasi-singularity detection module was able to detect a quasi-delta 72 that represents one of the missing forensic deltas, as well as two quasi-cores 74.1, 74.2. As seen in FIG. 10B, this led to the accurate classification of the fingerprint in class CT. This is another instance where a quasi-singularity detector has an advantage over a conventional singularity detector. The classification result, however, is more reliable if the input is very close to being ideal (in this case, three singularities out of four), than in a case where the input is less close to being ideal (in this case, two singularities out of four).

FIG. 11A shows a ROI extracted from a fingerprint with one forensic core and one forensic delta. The fingerprint sensor was able to capture both these features. The quasi-singularity detection module was able to detect both the quasi-core 76 and the quasi-delta 78. As seen in FIG. 11B, using the detected quasi-locations (76, 78) of the core and delta, the fingerprint classification module was able to order this fingerprint into the correct (TA) class.

Turning to FIG. 12A, there is shown a ROI extracted from a fingerprint with no forensic core and no forensic delta. The quasi-singularity detection module did not detect any quasi-core or quasi-delta. As seen in FIG. 12B, this led to the accurate classification of the fingerprint in class PA.

FIG. 13A shows a ROI extracted from a fingerprint with one forensic core and no forensic delta. The fingerprint sensor was not able to capture the forensic delta of this fingerprint. In addition to that, this fingerprint is not properly aligned. It seems to have been slightly rotated clock-wise. In spite of all these limitations, the quasi-singularity detection module was able to detect a quasi-core 80 and a quasi-delta 82. As seen in FIG. 13B, this led to the accurate classification of the fingerprint in class LL.

The invention as described hereinabove describes new fingerprint features, collectively referred to as quasi-singularities, which can accordingly be used to classify the fingerprint. A quasi-singularity can either be a quasi-core or a quasi-delta. As mentioned above, a quasi-core is defined as the transition point located at the beginning of the transition line that leads to a forensic core. Similarly, a quasi-delta is defined as the transition point located at the beginning of the transition line that leads to a forensic delta.

Quasi-singularity detection is more computationally efficient than the conventional detection of forensic singularities. The detection time is reduced, while maintaining the same classification accuracy, and, in some instances, the classification accuracy is improved. Even fingerprint images whose fingerprint foreground is skewed or rotated are correctly classified, because it is possible to detect their quasi-singularities.

It will be appreciated that there are many variations in detail on the invention as herein defined and/or described without departing from the scope and spirit of this disclosure.