

Title:
METHOD AND COMPUTER PROGRAM PRODUCT FOR PROCESSING AN EXAMINATION RECORD COMPRISING A PLURALITY OF IMAGES OF AT LEAST PARTS OF AT LEAST ONE RETINA OF A PATIENT
Document Type and Number:
WIPO Patent Application WO/2017/046377
Kind Code:
A1
Abstract:
Method comprising the steps consisting in: - registering the images pairwise, - forming at least one group of images belonging to one same retina, wherein the step of forming at least one group of images is performed by implementing graph theory which includes: - building a graph comprising vertices and edges {I, J}, - forming at least one subgraph of vertices, consistent edges and inconsistent edges being identified among the edges {I, J}, inconsistent edges being removed so that the vertices of each subgraph are connected by consistent edges with each other, and are not connected by any edge with the vertices of another subgraph, - assigning the images corresponding to the vertices of each subgraph to one group of images belonging to one same retina, wherein during the step of forming at least one subgraph of vertices, the graph theory is implemented through a clique analysis.

Inventors:
QUELLEC GWENOLÉ (FR)
LAMARD MATHIEU (FR)
CAZUGUEL GUY (FR)
Application Number:
PCT/EP2016/072049
Publication Date:
March 23, 2017
Filing Date:
September 16, 2016
Assignee:
INSERM (INSTITUT NAT DE LA SANTÉ ET DE LA RECH MÉDICALE) (FR)
UNIV BRETAGNE OCCIDENTALE (FR)
INST MINES-TELECOM (FR)
International Classes:
G06K9/00; A61B3/12; G06K9/34; G06K9/62
Foreign References:
US20150186752A12015-07-02
US20100061601A12010-03-11
US8194936B22012-06-05
Other References:
KEXIN DENG ET AL: "Retinal Fundus Image Registration via Vascular Structure Graph Matching", INTERNATIONAL JOURNAL OF BIOMEDICAL IMAGING, vol. 20, no. 10, 1 January 2010 (2010-01-01), pages 1 - 13, XP055248119, ISSN: 1687-4188, DOI: 10.1109/TPAMI.2005.188
B. FANG ET AL: "Elastic Registration for Retinal Images Based on Reconstructed Vascular Trees", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING., vol. 53, no. 6, 1 June 2006 (2006-06-01), PISCATAWAY, NJ, USA., pages 1183 - 1187, XP055247845, ISSN: 0018-9294, DOI: 10.1109/TBME.2005.863927
"Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification (ETDRS report number 10)", OPHTHALMOLOGY, vol. 98, no. 5, May 1991 (1991-05-01), pages 786 - 806
A. CAN; C. V. STEWART; B. ROYSAM; H. L. TANENBAUM: "A feature-based technique for joint, linear estimation of high-order image-to-mosaic transformations: mosaicing the curved human retina", IEEE TRANS PATTERN ANAL MACH INTELL, vol. 24, no. 3, March 2002 (2002-03-01), pages 412 - 419, XP011094218, DOI: 10.1109/34.990145
G. YANG; C. V. STEWART: "Covariance-driven mosaic formation from sparsely-overlapping image sets with application to retinal image mosaicing", PROC IEEE CVPR, vol. 1, 27 June 2004 (2004-06-27), pages 1 - 7
T. E. CHOE; I. COHEN; M. LEE; G. MEDIONI: "Optimal global mosaic generation from retinal images", PROC IEEE ICPR, 2006, pages 681 - 684
J. LI; H. CHEN; C. YAO; X. ZHANG: "A robust feature-based method for mosaic of the curved human color retinal images", PROC IEEE ICIP, 2008
L. WEI; L. HUANG; L. PAN; L. YU: "The retinal image mosaic based on invariant feature and hierarchial transformation models", PROC IEEE CISP, 2009, pages 1 - 5, XP031553975
J. CHEN; S. AUSAYAKHUN; S. AUSAYAKHUN ET AL.: "Comparison of autophotomontage software programs in eyes with CMV retinitis", INVEST OPHTHALMOL VIS SCI, vol. 52, no. 13, December 2011 (2011-12-01), pages 9339 - 9344, XP055168662, DOI: 10.1167/iovs.11-8322
G. QUELLEC; M. LAMARD; M. D. ABRAMOFF; E. DECENCIERE; B. LAY; A. ERGINAY; B. COCHENER; G. CAZUGUEL: "A multiple-instance learning framework for diabetic retinopathy screening", MED IMAGE ANAL, vol. 16, no. 6, August 2012 (2012-08-01), pages 1228 - 1240, XP055130437, DOI: 10.1016/j.media.2012.06.003
R. PIRES; H. F. JELINEK; J. WAINER; E. VALLE; A. ROCHA: "Advancing Bag-of-Visual-Words representations for lesion classification in retinal images", PLOS ONE, vol. 9, no. 6, June 2014 (2014-06-01), pages E96814
H. LI; O. CHUTATAPE: "Automated feature extraction in color retinal images by a model based approach", IEEE TRANS BIOMED ENG, vol. 51, no. 2, February 2004 (2004-02-01), pages 246 - 254, XP011105951, DOI: 10.1109/TBME.2003.820400
M. NIEMEIJER; M. D. ABRAMOFF; B. VAN GINNEKEN: "Fast detection of the optic disc and fovea in color fundus photographs", MED IMAGE ANAL, vol. 13, no. 6, December 2009 (2009-12-01), pages 859 - 870, XP026718619, DOI: 10.1016/j.media.2009.08.003
S. SEKHAR; F. E. ABD EL-SAMIE; P. YU; W. AL-NUAIMY; A. K. NANDI: "Automated localization of retinal features", APPL OPT, vol. 50, no. 19, July 2011 (2011-07-01), pages 3064 - 3075, XP001564074, DOI: 10.1364/AO.50.003064
X. CHENG; D. W. WONG; J. LIU ET AL.: "Automatic localization of retinal landmarks", PROC IEEE EMBS, 2012, pages 4954 - 4957, XP032464048, DOI: 10.1109/EMBC.2012.6347104
M. E. GEGUNDEZ-ARIAS; D. MARIN; J. M. BRAVO; A. SUERO: "Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques", COMPUT MED IMAGING GRAPH, vol. 37, July 2013 (2013-07-01), pages 386 - 393, XP028775436, DOI: 10.1016/j.compmedimag.2013.06.002
M. BERTALMIO; A. L. BERTOZZI; G. SAPIRO: "Navier-stokes, fluid dynamics, and image and video inpainting", PROC IEEE CVPR, vol. 1, 2001, pages 355 - 362, XP010583767
R. WINDER; P. MORROW; I. MCRITCHIE; J. BAILIE; P. HART: "Algorithms for digital image processing in diabetic retinopathy", COMPUT MED IMAGING GRAPH, vol. 33, no. 8, December 2009 (2009-12-01), pages 608 - 622, XP026705367, DOI: 10.1016/j.compmedimag.2009.06.003
J. STAAL; M. ABRAMOFF; M. NIEMEIJER; M. VIERGEVER; B. VAN GINNEKEN: "Ridge based vessel segmentation in color images of the retina", IEEE TRANS MED IMAGING, vol. 23, no. 4, April 2004 (2004-04-01), pages 501 - 509, XP011110228, DOI: 10.1109/TMI.2004.825627
M. NIEMEIJER; J. STAAL; B. VAN GINNEKEN; M. LOOG; M. ABRAMOFF: "Proc SPIE Medical Imaging", vol. 5370, 2004, article "Comparative study of retinal vessel segmentation methods on a new publicly available database", pages: 648 - 656
F. ZANA; J. KLEIN: "Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation", IEEE TRANS IMAGE PROCESS, vol. 10, no. 7, July 2001 (2001-07-01), pages 1010 - 1019, XP011025815
G. LIEW; J. J. WANG; N. CHEUNG; Y. P. ZHANG; W. HSU; M. L. LEE; P. MITCHELL; G. TIKELLIS; B. TAYLOR; T. Y. WONG: "The retinal vasculature as a fractal: methodology, reliability, and relationship to blood pressure", OPHTHALMOLOGY, vol. 115, no. 11, November 2008 (2008-11-01), pages 1951 - 1956, XP025610407, DOI: 10.1016/j.ophtha.2008.05.029
H. BARROW; J. TENENBAUM; R. BOLLES; H. WOLF: "Parametric correspondance and chamfer matching: two new techniques for image matching", PROC INT JOINT CONF ARTIF INTELL, vol. 2, 1977
C. R. MAURER JR.; R. QI; V. RAGHAVAN: "A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions", IEEE TRANS PATTERN ANAL MACH INTELL, vol. 25, no. 2, February 2003 (2003-02-01), pages 265 - 270, XP011095636, DOI: doi:10.1109/TPAMI.2003.1177156
J. A. NELDER; R. MEAD: "A simplex method for function minimization", COMPUT J, vol. 7, no. 4, January 1965 (1965-01-01), pages 308 - 313
Attorney, Agent or Firm:
CABINET PLASSERAUD (FR)
Claims:
CLAIMS

1. Method for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient, the method comprising the steps consisting in:

- for each image, identifying pixels related to at least one structure of interest,

- registering the images pairwise, each image being associated to each of the other images by a transformation TI,J to form pairs, each pair comprising a first image I and a second image J, the second image J being the image transformed by the transformation TI,J of the first image I,

- forming at least one group of images belonging to one same retina,

wherein the step of forming at least one group of images is performed by implementing graph theory which includes:

- building a graph comprising vertices and edges {I, J}, each vertex corresponding to one of the images, and each edge {I, J} connecting a pair of vertices Iv, Jv corresponding to one of the pairs of first and second images I, J in which first transformed pixels resulting from an application of the transformation TI,J to the first image I and second pixels of the second image J overlap over a predetermined overlap threshold,

- forming at least one subgraph of vertices, consistent edges and inconsistent edges being identified among the edges {I, J} on the basis of a difference in an arrangement of the first transformed pixels with respect to the second pixels, inconsistent edges being removed so that the vertices of each subgraph are connected by consistent edges with each other, and are not connected by any edge with the vertices of another subgraph,

- assigning the images corresponding to the vertices of each subgraph to one group of images belonging to one same retina,

the method being characterized in that the transformation TI,J of each pair of first and second images I, J is invertible, and

in that during the step of forming at least one subgraph of vertices, the graph theory is implemented through a clique analysis, the clique analysis comprising:

- weighting each edge {I, J} with a registration error w{I,J} calculated on the basis of the difference in the arrangement of the first transformed pixels with respect to the second pixels, and

- performing a k-clique analysis,

wherein the k-clique analysis comprises:

- defining complete cliques A including k vertices, k ≥ 3, and in which all pairs of vertices are connected with edges,

- within each clique A in an increasing number k of vertices, for each pair of first and second vertices Iv, Jv of the clique A, checking whether the transformation TI,J from the first image I to the second image J meets a clique consistency condition according to which first transformed pixels resulting from an application of a composition of transformations of the other pairs of first and second images along any path within the clique A to the first image I and the second pixels of the second image J overlap over the predetermined overlap threshold:

TI,J = TI,K1 ∘ TK1,K2 ∘ … ∘ TKl,J

where I, J, Ki, i = 1, …, l, l ≤ k-2 are the k images of the clique A,

if TI,J meets the consistency condition, the edge {I, J} is identified as a consistent edge,

if TI,J does not meet the consistency condition, the edge {I, J} is identified as an inconsistent edge and is removed, and an inscribed clique B including k-1 vertices, in which all the transformations meet the clique consistency condition and having a minimum cumulative registration error ∑ w{I,J}, is retained.

2. Method according to claim 1, wherein the k-clique analysis further comprises:

- defining incomplete cliques C including k vertices, k ≥ 3, and in which the pairs of vertices are connected with edges except one pair of first and second vertices Iv, Jv,

- within each clique C in an increasing number k of vertices, checking whether the transformation TI,J from the first image I to the second image J meets a clique completeness condition according to which first transformed pixels resulting from an application of a composition of transformations of the pairs of one of the first and second images with each of the other images Kl of the clique C to the first image I and the second pixels of the second image J overlap over the predetermined overlap threshold:

TI,J = TI,Kl ∘ TKl,J, where l = 1, …, k-2,

- if TI,J meets the clique completeness condition, the edge {I, J} is identified as a consistent edge,

- if TI,J does not meet the clique completeness condition, retaining an inscribed clique D including: k-2 vertices connected by pairs with edges to all the other images of clique C, and

one vertex chosen between the first and second vertices Iv, Jv connected by pairs to said k-2 vertices with edges having the minimum cumulative registration error ∑l=1…k-2 w{X,Kl}, X ∈ {I, J}, and edges between the other image chosen between the first and second vertices Iv, Jv and said k-2 vertices are identified as inconsistent edges and are removed.

3. Method according to any of claims 1 and 2, wherein, if after performing the k-clique analysis, all vertices of the examination record are still connected with at least one edge, the clique analysis further includes a 2-clique analysis in which edges are removed in a decreasing order of registration error w{I,J} until a criterion chosen between obtaining at least two distinct subgraphs and reaching a predetermined connection threshold is reached.

4. Method according to any of claims 1 to 3, wherein the transformation TI,J of each pair of first and second images I, J has transformation parameters, in particular a rotation angle α, a scaling factor σ and translations along two orthogonal axes τx and τy, and

wherein, during the step of registering the images pairwise, the transformation TI,J of each pair of first and second images I, J is chosen by varying the transformation parameters and minimising the registration error w{I,J}.

5. Method according to any of claims 1 to 4, wherein, during the step of registering the images pairwise, the registration error w{I,J} is calculated on the basis of both the difference in the arrangement of the first transformed pixels with respect to the second pixels and a difference in the arrangement between second transformed pixels resulting from an application of the inverted transformation TI,J^-1 to the second image J with respect to the first pixels.

6. Method according to any of claims 1 to 5, wherein, during the step of registering the images pairwise, the registration error w{I,J} is calculated on the basis of a distance between transformed pixels of one of the first and second images and pixels of the other of the first and second images as the difference in the arrangement.

7. Method according to claim 6, wherein the registration error w{I,J} is calculated on the basis of the minimum distance between each of the transformed pixels of one of the first and second images and one of the pixels of the other of the first and second images.

8. Method according to any of claims 1 to 7 for processing at least one subsequent examination record comprising a plurality of new images of at least parts of at least one retina of the patient acquired subsequently with respect to previous images of a previous examination record, the method comprising:

- for each new image, identifying pixels related to at least one structure of interest,

- registering the new images pairwise,

- forming at least one new group of new images belonging to one same retina by implementing graph theory to pairs of new images,

- merging each new group with previous groups of previous images, each new image of each new group being registered pairwise with each previous image of each previous group,

- forming at least one group of images belonging to one same retina by implementing graph theory to pairs of new and previous images.

9. Method according to any of claims 1 to 8, wherein the step of identifying pixels comprises segmenting blood vessels as a structure of interest to obtain binary images of the blood vessels of the retina.

10. Method according to any of claims 1 to 9, further comprising a step consisting in creating a composite image of each retina, wherein the step of creating a composite image comprises for each group:

- selecting as reference image R the image of the group that minimizes a cumulative minimum distance with the other images of the group,

- projecting each image I of the group in the reference image by the transformation TI,R.

11. Method according to claim 10, further comprising a step consisting in determining a laterality of the retina of each composite image, the step of determining a laterality of the retina comprising:

- creating a composite image of the blood vessels of the retina and detecting a main blood vessel arch,

- creating a composite image of the retina in which the optic disc is enhanced and detecting the optic disc,

- determining the laterality of the retina from the main blood vessel arch and the optic disc.

12. Computer program product for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient, said computer program product being stored on a carrier readable by an electronic processing unit and comprising instructions operable to cause the electronic processing unit to implement the method according to any of claims 1 to 11.

Description:
Method and computer program product for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient

The invention relates to a method and a computer program product for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient.

The invention finds particular applications in automated detection and progression measurement of retinal pathologies.

Retinal pathologies are responsible for millions of blindness cases worldwide. According to the World Health Organization, 4.5 million people are blind due to glaucoma, 3.5 million people are blind due to age-related macular degeneration and 2 million people are blind due to diabetic retinopathy. Early diagnosis is the key to slowing down the progression of these diseases and therefore preventing the occurrence of blindness.

The leading modality for detecting retinal pathologies is color fundus photography. During an eye fundus examination, the operator usually takes multiple photographs of each retina. The number of photographs per retina depends on the camera's field of view, the institution's policy, the operator and the patient. The Early Treatment Diabetic Retinopathy Study (ETDRS) recommends that seven photographs are taken per retina in order to grade diabetic retinopathy reliably (ETDRS Research Group, "Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification (ETDRS report number 10)", Ophthalmology, vol. 98, no. 5 Suppl, pp. 786-806, May 1991). In the minimal setting, one photograph is centered on the fovea, which is responsible for sharp central vision, and one is centered on the optic disc, the point of exit for ganglion cell axons leaving the eye.

In order to facilitate image interpretation by a human reader, typically an ophthalmologist, several algorithms have been proposed to create a composite image, called mosaic, from all images of one retina. Examples of these algorithms are disclosed in the following references:

- A. Can, C. V. Stewart, B. Roysam and H. L. Tanenbaum, "A feature-based technique for joint, linear estimation of high-order image-to-mosaic transformations: mosaicing the curved human retina", IEEE Trans Pattern Anal Mach Intell, vol. 24, no. 3, pp. 412-9, March 2002, hereafter Can et al.,

- G. Yang and C. V. Stewart, "Covariance-driven mosaic formation from sparsely-overlapping image sets with application to retinal image mosaicing", in Proc IEEE CVPR, vol. 1, 2004, pp. 1-804-10, hereafter Yang et al.,

- T. E. Choe, I. Cohen, M. Lee and G. Medioni, "Optimal global mosaic generation from retinal images", in Proc IEEE ICPR, 2006, pp. 681-4, hereafter Choe et al.,

- J. Li, H. Chen, C. Yao and X. Zhang, "A robust feature-based method for mosaic of the curved human color retinal images", in Proc IEEE ICIP, 2008, hereafter Li et al.,

- L. Wei, L. Huang, L. Pan and L. Yu, "The retinal image mosaic based on invariant feature and hierarchial transformation models", in Proc IEEE CISP, 2009, pp. 1-5, hereafter Wei et al.,

- M. D. Abramoff, M. Niemeijer, S. Lee and J. Reinhardt, "Optimal registration of multiple deformed images using a physical model of the imaging distortion", U.S. Patent 8,194,936 B2, June 5, 2012, hereafter Abramoff et al..

In all these algorithms, registration between images relies on the extraction of features, either blood vessel features (vessel branchings and crossings) (see Can et al., Yang et al., Choe et al. and Abramoff et al.) or any features (SIFT, SURF, etc.) (see Li et al. and Wei et al.). Because the retina is spherical, images need to be registered using non-invertible transformations in order to produce undistorted mosaics: quadratic transformations (see Can et al., Yang et al., Li et al., Wei et al. and Abramoff et al.), general affine transformations (see Choe et al. and Wei et al.), etc. This implies that an "anchor image" has to be chosen and all other images are registered to that image. In order to produce eye-pleasing mosaics for the human reader, various image fusion strategies can be used to attenuate intensity variations between images forming the mosaic (see Can et al., Abramoff et al. and J. Chen, S. Ausayakhun and S. Ausayakhun et al., "Comparison of autophotomontage software programs in eyes with CMV retinitis", Invest Ophthalmol Vis Sci, vol. 52, no. 13, pp. 9339-44, December 2011).

Existing mosaicing algorithms have one major limitation: they assume that images in one examination record have been separated into two groups, one per retina. What usually happens during an eye fundus examination is that all images from both retinas are stored altogether into a single directory. This limitation is particularly important in the context of automatic image analysis, where no human interaction is possible.

Document US 2015/0186752 discloses a method for processing images of retinas implementing graph theory to determine if images are of the same eye. Such a method, based on images of the whole retinas, does not fulfil the need expressed above for a method capable of processing an examination record comprising images of only parts of one or several retinas to group images belonging to one same retina in a robust and fully automated manner.

The invention aims to fulfil such need.

To this end, the invention provides a method for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient, the method comprising the steps consisting in:

- for each image, identifying pixels related to at least one structure of interest,

- registering the images pairwise, each image being associated to each of the other images by a transformation TI,J to form pairs, each pair comprising a first image I and a second image J, the second image J being the image transformed by the transformation TI,J of the first image I,

- forming at least one group of images belonging to one same retina,

wherein the step of forming at least one group of images is performed by implementing graph theory which includes:

- building a graph comprising vertices and edges {I, J}, each vertex corresponding to one of the images, and each edge {I, J} connecting a pair of vertices Iv, Jv corresponding to one of the pairs of first and second images I, J in which first transformed pixels resulting from an application of the transformation TI,J to the first image I and second pixels of the second image J overlap over a predetermined overlap threshold,

- forming at least one subgraph of vertices, consistent edges and inconsistent edges being identified among the edges {I, J} on the basis of a difference in an arrangement of the first transformed pixels with respect to the second pixels, inconsistent edges being removed so that the vertices of each subgraph are connected by consistent edges with each other, and are not connected by any edge with the vertices of another subgraph,

- assigning the images corresponding to the vertices of each subgraph to one group of images belonging to one same retina,

wherein the transformation TI,J of each pair of first and second images I, J is invertible, and

wherein during the step of forming at least one subgraph of vertices, the graph theory may be implemented through a clique analysis, the clique analysis comprising:

- weighting each edge {I, J} with a registration error w{I,J} calculated on the basis of the difference in the arrangement of the first transformed pixels with respect to the second pixels, and

- performing a k-clique analysis,

wherein the k-clique analysis comprises:

- defining complete cliques A including k vertices, k ≥ 3, and in which all pairs of vertices are connected with edges,

- within each clique A in an increasing number k of vertices, for each pair of first and second vertices Iv, Jv of the clique A, checking whether the transformation TI,J from the first image I to the second image J meets a clique consistency condition according to which first transformed pixels resulting from an application of a composition of transformations of the other pairs of first and second images along any path within the clique A to the first image I and the second pixels of the second image J overlap over the predetermined overlap threshold:

TI,J = TI,K1 ∘ TK1,J

TI,J = TI,K1 ∘ TK1,K2 ∘ … ∘ TKl,J

where I, J, Ki, i = 1, …, l, l ≤ k-2 are the k images of the clique A,

if TI,J meets the consistency condition, the edge {I, J} is identified as a consistent edge,

if TI,J does not meet the consistency condition, the edge {I, J} is identified as an inconsistent edge and is removed, and an inscribed clique B including k-1 vertices, in which all the transformations meet the clique consistency condition and having a minimum cumulative registration error ∑ w{I,J}, is retained.

Hence, thanks to the implementation of an algorithm based on graph theory, the invention proposes an efficient and reliable automated method to gather images belonging to one same retina in one group, separate from possible other groups gathering respectively images belonging to other retinas. In particular, prior to subsequent automated processing of the images, since it is generally unknown whether the examination record contains the images of one or more retinas, the method makes it possible to ensure that all images of an examination record belong to one same retina, or to separate the images belonging to each of the two retinas of a patient.
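The clique consistency condition can be illustrated with invertible similarity transforms in matrix form. This is a minimal sketch under our own modelling assumptions (homogeneous 3x3 matrices, a chosen composition order, and a numeric tolerance); the method described here instead checks agreement through pixel overlap above the predetermined threshold.

```python
import math

def similarity(alpha, sigma, tx, ty):
    """Invertible similarity transform (rotation alpha, scale sigma,
    translation (tx, ty)) as a homogeneous 3x3 matrix."""
    c, s = math.cos(alpha), math.sin(alpha)
    return [[sigma * c, -sigma * s, tx],
            [sigma * s,  sigma * c, ty],
            [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a @ b: the transform b is applied first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def edge_is_consistent(t_ij, t_ik, t_kj, tol=1e-9):
    """Compare the direct edge T_IJ with the path I -> K -> J
    (a 3-clique check; longer paths compose more factors)."""
    via_k = compose(t_kj, t_ik)  # apply T_IK, then T_KJ
    return all(abs(via_k[r][c] - t_ij[r][c]) < tol
               for r in range(3) for c in range(3))
```

A direct transformation that matches the composition through the intermediate image is kept as a consistent edge; one that does not would be removed.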

The k-clique analysis may further comprise:

- defining incomplete cliques C including k vertices, k ≥ 3, and in which the pairs of vertices are connected with edges except one pair of first and second vertices Iv, Jv,

- within each clique C in an increasing number k of vertices, checking whether the transformation TI,J from the first image I to the second image J meets a clique completeness condition according to which first transformed pixels resulting from an application of a composition of transformations of the pairs of one of the first and second images with each of the other images Kl of the clique C to the first image I and the second pixels of the second image J overlap over the predetermined overlap threshold:

TI,J = TI,Kl ∘ TKl,J, where l = 1, …, k-2,

- if TI,J meets the clique completeness condition, the edge {I, J} is identified as a consistent edge,

- if TI,J does not meet the clique completeness condition, retaining an inscribed clique D including:

k-2 vertices connected by pairs with edges to all the other images of clique C, and

one vertex chosen between the first and second vertices Iv, Jv connected by pairs to said k-2 vertices with edges having the minimum cumulative registration error ∑l=1…k-2 w{X,Kl}, X ∈ {I, J}, and

edges between the other image chosen between the first and second vertices Iv, Jv and said k-2 vertices are identified as inconsistent edges and are removed.
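Once inconsistent edges have been pruned by the clique analysis, assigning images to per-retina groups reduces to extracting the connected subgraphs. A minimal sketch of that final step, with our own helper name and an illustrative edge list:

```python
# Images are vertices; surviving consistent edges connect images of one
# same retina; each connected component becomes one group.

def group_images(n_images, edges):
    """Return the connected components of the image graph, each component
    being one group of images assumed to belong to one same retina."""
    adj = {i: set() for i in range(n_images)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    seen, groups = set(), []
    for start in range(n_images):
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:  # depth-first traversal of one component
            v = stack.pop()
            if v in component:
                continue
            component.add(v)
            stack.extend(adj[v] - component)
        seen |= component
        groups.append(sorted(component))
    return groups

# Five images: 0-2 registered consistently together, 3-4 together.
groups = group_images(5, [(0, 1), (1, 2), (0, 2), (3, 4)])
```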

If, after performing the k-clique analysis, all vertices of the examination record are still connected with at least one edge, the clique analysis may further include a 2-clique analysis in which edges are removed in a decreasing order of registration error w{I,J} until a criterion chosen between obtaining at least two distinct subgraphs and reaching a predetermined connection threshold is reached.
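This 2-clique fall-back can be sketched as follows. The helper names are ours, and only the "at least two distinct subgraphs" stopping criterion is shown; a fuller implementation would also honour the connection threshold alternative.

```python
# Drop the worst-registered edges first until the graph splits.

def count_components(n, edges):
    """Number of connected components, via a small union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return len({find(v) for v in range(n)})

def two_clique_split(n, weighted_edges):
    """weighted_edges: (i, j, w{i,j}) triples; returns surviving edges."""
    edges = sorted(weighted_edges, key=lambda e: e[2], reverse=True)
    while edges and count_components(n, [(i, j) for i, j, _ in edges]) < 2:
        edges.pop(0)  # remove the edge with the largest registration error
    return [(i, j) for i, j, _ in edges]
```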

The transformation TI,J of each pair of first and second images I, J may have transformation parameters, in particular a rotation angle α, a scaling factor σ and translations along two orthogonal axes τx and τy, and

during the step of registering the images pairwise, the transformation TI,J of each pair of first and second images I, J may be chosen by varying the transformation parameters and minimising the registration error w{I,J}.
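An illustrative sketch of that parameter search follows. For brevity only the two translations are varied, over a coarse grid of our choosing; the full method also varies the rotation angle and scaling factor, for example with the Nelder-Mead simplex search cited among the references.

```python
import math

def registration_error(pixels_i, pixels_j, tx, ty):
    """Mean distance from each translated pixel of I to the nearest
    pixel of J (a toy stand-in for the registration error w{I,J})."""
    return sum(min(math.dist((x + tx, y + ty), p) for p in pixels_j)
               for x, y in pixels_i) / len(pixels_i)

def best_translation(pixels_i, pixels_j, search=range(-5, 6)):
    """Grid search for the translation minimising the registration error."""
    return min(((tx, ty) for tx in search for ty in search),
               key=lambda t: registration_error(pixels_i, pixels_j, *t))
```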

During the step of registering the images pairwise, the registration error w{I,J} may be calculated on the basis of both the difference in the arrangement of the first transformed pixels with respect to the second pixels and a difference in the arrangement between second transformed pixels resulting from an application of the inverted transformation TI,J^-1 to the second image J with respect to the first pixels.

During the step of registering the images pairwise, the registration error w{I,J} may be calculated on the basis of a distance between transformed pixels of one of the first and second images and pixels of the other of the first and second images as the difference in the arrangement.

The registration error w{I,J} may be calculated on the basis of the minimum distance between each of the transformed pixels of one of the first and second images and one of the pixels of the other of the first and second images.
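The symmetric minimum-distance error of the preceding paragraphs can be sketched as below. The explicit point lists, the homogeneous-matrix representation and the equal weighting of the two directions are our illustrative assumptions; a practical implementation would typically precompute nearest distances with a distance transform, as in the Maurer et al. reference.

```python
import math

def apply_transform(t, points):
    """Apply a 3x3 homogeneous transform to a list of (x, y) points."""
    return [(t[0][0] * x + t[0][1] * y + t[0][2],
             t[1][0] * x + t[1][1] * y + t[1][2]) for x, y in points]

def mean_min_distance(a, b):
    """Mean over the points of a of the distance to the closest point of b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def registration_error(t_ij, t_ij_inv, pixels_i, pixels_j):
    """w{I,J}: forward term under T_I,J on I's pixels, backward term
    under the inverse transformation on J's pixels."""
    forward = mean_min_distance(apply_transform(t_ij, pixels_i), pixels_j)
    backward = mean_min_distance(apply_transform(t_ij_inv, pixels_j), pixels_i)
    return 0.5 * (forward + backward)
```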

The aforementioned method may be part of an automated image analysis. In particular, in recent works, the inventors and others have developed image mining algorithms for automatically identifying and grading pathological retinal images (see G. Quellec, M. Lamard, M. D. Abramoff, E. Decenciere, B. Lay, A. Erginay, B. Cochener and G. Cazuguel, "A multiple-instance learning framework for diabetic retinopathy screening", Med Image Anal, vol. 16, no. 6, pp. 1228-40, August 2012 and R. Pires, H. F. Jelinek, J. Wainer, E. Valle and A. Rocha, "Advancing Bag-of-Visual-Words representations for lesion classification in retinal images", PLOS One, vol. 9, no. 6, p. e96814, June 2014). It has been hypothesized that including spatial localization into the image mining process will push the performance of these algorithms forward. Firstly, image mosaics could be used to localize the detected anomalies with respect to the optic disc and with respect to the fovea. Secondly, diagnoses assigned by ophthalmologists to one particular retina (e.g. "the left retina is healthy but the right retina has nonproliferative diabetic retinopathy") could be linked to images of that retina in a digital archive. In this context, there is no need for eye pleasing mosaics. Also, small mosaic distortions due to retinal sphericity are acceptable. Moreover, distortion of input images, which could affect the texture and shape of anomalies, is not desirable. A simple estimation of a transformation from each image to a common reference system may be sufficient.

Therefore, the method may further comprise a step consisting in creating a composite image of each retina, wherein the step of creating a composite image comprises for each group:

- selecting as reference image R the image of the group that minimizes a cumulative minimum distance with the other images of the group,

- projecting each image I of the group in the reference image by the transformation TI,R.

Once two mosaics are formed, it can be proposed to jointly identify which mosaic corresponds to the left retina and which one corresponds to the right retina. In recent years, several algorithms have been presented for the automatic detection of the optic disc and the fovea. Examples of these algorithms are disclosed in the following references:

- H. Li and O. Chutatape, "Automated feature extraction in color retinal images by a model based approach", IEEE Trans Biomed Eng, vol. 51, no. 2, pp. 246-54, February 2004,

- M. Niemeijer, M. D. Abramoff and B. van Ginneken, "Fast detection of the optic disc and fovea in color fundus photographs", Med Image Anal, vol. 13, no. 6, pp. 859-70, December 2009,

- S. Sekhar, F. E. Abd El-Samie, P. Yu, W. Al-Nuaimy and A. K. Nandi, "Automated localization of retinal features", Appl Opt, vol. 50, no. 19, pp. 3064-75, July 2011,

- X. Cheng, D. W. Wong and J. Liu et al., "Automatic localization of retinal landmarks", in Proc IEEE EMBS, 2012, pp. 4954-7,

- M. E. Gegundez-Arias, D. Marin, J. M. Bravo and A. Suero, "Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques", Comput Med Imaging Graph, vol. 37, no. 5-6, pp. 386-93, July-September 2013.

Detecting these two landmarks may be enough to identify the laterality of a retina.

However, fovea detection is not reliable in pathological images, due to the presence of possibly large anomalies in the area. Therefore, a different approach is proposed in the present application: it is proposed to detect, in both mosaics, the main vessel arch surrounding the macula and the optic disc, which is also enough for laterality identification.

Therefore, the method may further comprise a step consisting in determining a laterality of the retina of each composite image, the step of determining a laterality of the retina comprising:

- creating a composite image of the blood vessels of the retina and detecting a main blood vessel arch,

- creating a composite image of the retina in which the optic disc is enhanced and detecting the optic disc,

- determining the laterality of the retina from the main blood vessel arch and the optic disc.

Blood vessels need to be traced to detect retinal landmarks: for efficiency, blood vessel tracings may be used for mosaic formation as well. In this context, in the aforementioned method, the step of identifying pixels may comprise segmenting blood vessels as a structure of interest to obtain binary images of the blood vessels of the retina.

One particular application of image mosaicing in the context of automated image analysis is disease progression measurement (or follow-up). In this context, it can be useful to register mosaics formed using images from consecutive eye fundus examinations, in order to monitor the appearance or progression of anomalies.

Therefore, the method may be implemented for processing at least one subsequent examination record comprising a plurality of new images of at least parts of at least one retina of the patient acquired subsequently with respect to previous images of a previous examination record, the method comprising:

- for each new image, identifying pixels related to at least one structure of interest,

- registering the new images pairwise,

- forming at least one new group of new images belonging to one same retina by implementing graph theory to pairs of new images,

- merging each new group with previous groups of previous images, each new image of each new group being registered pairwise with each previous image of each previous group,

- forming at least one group of images belonging to one same retina by implementing graph theory to pairs of new and previous images.

Instead of forming mosaics independently (two per examination) and registering them afterward, two compound mosaics may be formed directly (one per retina). More precisely, these two mosaics can be initialized with images from the first examination and updated whenever the patient undergoes a new examination. This procedure is expected to form more accurate mosaics.

According to a second aspect, the invention proposes a computer program product for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient, said computer program product being stored on a carrier readable by an electronic processing unit and comprising instructions operable to cause the electronic processing unit to implement the previously defined method.

Other objects and advantages of the invention will emerge from the following disclosure of a particular embodiment of the invention given as non limitative example, the disclosure being made in reference to the enclosed drawings in which: - Figure 1 illustrates an embodiment of a method for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient, at least one group of images belonging to one same retina being formed in an automated manner, the method being implemented for creating a composite image of each retina of the examination record and for determining a laterality of each retina,

- Figure 2 illustrates image operations of a preprocessing step of the method of Figure 1 for detecting blood vessels and enhancing the optic disk of the images: a) green channel extraction, b) thresholding, c) inpainting, d) median filtering, e) background removal, f) vessel detection, g) thresholding, h) Maurer distance transform, i) skeletonization, j) optic disc enhancement,

- Figure 3 illustrates preprocessed images resulting from some image operations of the processing step of the method of Figure 1, five resized and cropped images being shown in a first column, five corresponding illumination-corrected intensity images being shown in a second column and five corresponding binary blood vessel- segmented images being shown in a third column,

- Figure 4 illustrates steps of the formation of groups of images belonging to one same retina according to graph theory implemented in the method of Figure 1, a graph being built with vertices corresponding to images of the examination record connected by edges, the edges being weighted by a registration error in pixels, two registration subgraphs being represented with dashed lines and dashed and dotted lines, respectively,

- Figure 5 illustrates definitions used in clique analysis performed in the graph theory implemented in the method of Figure 1,

- Figure 6 illustrates two composite images (mosaics) each created from the images of one of the group of images belonging to one same retina obtained by the method of Figure 1, main vessel arches and optic disc centers being identified in the mosaics,

- Figure 7 illustrates mosaics of the retinas of six different patients each referred to by a corresponding letter a to f (right eyes in the first column, left eyes in the second column), each mosaic being updated with new images of the retina resulting from a subsequent eye fundus examination,

- Figure 8 illustrates image operations of the step of creating a composite image: a) averaging, b) matched filtering, c) optic disc region of interest, d) matched filtering.

In relation to the Figures, a method for processing, in an automated manner, an examination record of at least one retina of a patient is disclosed. The examination record generally comprises multiple images of each of the two retinas of the patient mixed together in one single directory. The examination record may, however, comprise multiple images of one of the retinas of the patient or multiple images of retinas of different patients. In the examination record, each retina can be either in part or fully imaged by any appropriate modality and especially by color fundus photography.

The method enables the images of one same retina to be grouped together. Thanks to the method, in case the examination record comprises images of different retinas, be they those of one patient or of different patients, the images of each retina can be gathered in one group separate from the images of the other retinas. Otherwise, the method makes it possible to verify that the examination record comprises images of one same retina.

In particular, the method comprises the steps consisting in:

- for each image, identifying pixels related to at least one structure of interest such as blood vessels and/or optic disk,

- registering the images pairwise,

- forming at least one group of images belonging to one same retina.

The images of one same retina gathered in each group can then be processed in an automated manner to perform efficient and reliable detection and progression measurement of retinal pathologies of the retina. In the disclosed embodiment, the method further comprises the step consisting in creating a composite image of each retina and, possibly, the step of determining a laterality of the retina of each composite image.

These steps and any subsequent processing step of the images of each group can be performed on the basis of an initial examination record comprising images of one or several retinas taken at first and/or on the basis of an updated examination report comprising images of one or several retinas taken at first and images of one or several of the retinas taken subsequently.

The method may be implemented in an automated manner by a computer through a computer program product stored on a carrier readable by an electronic processing unit and comprising instructions operable to cause the electronic processing unit to implement the method.

I. IMAGE PROCESSING

In order to identify pixels related to at least one structure of interest such as blood vessels and/or the optic disc, and especially to detect retinal landmarks, each image is processed, for example by normalizing its intensity, detecting blood vessels and enhancing the optic disc. The image processing pipeline is illustrated in Figure 2.

A. Preprocessing

As a first preprocessing step, images can be resized and cropped to 780 x 780 pixels, in order to simplify the selection of filter sizes: it corresponds to a minimal image resolution (G. Quellec, M. Lamard, M. D. Abramoff, E. Decenciere, B. Lay, A. Erginay, B. Cochener and G. Cazuguel, "A multiple-instance learning framework for diabetic retinopathy screening", Med Image Anal, vol. 16, no. 6, pp. 1228-40, August 2012). Let I denote a resized and cropped image.

Retinal vessels are best contrasted in the green channel I_G, so the red and blue channels can be discarded (Figure 2a). The black border of image I can also be discarded (Figure 2b): a region of interest, corresponding to the camera's field of view, is segmented in I_G. Given a threshold τ_M, for example τ_M = 10, a binary mask image I_M is defined as follows:

I_M(x, y) = 1 wherever I_G(x, y) ≥ τ_M,

I_M(x, y) = 0 elsewhere.
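As a minimal illustration, the masking rule above can be sketched in Python; the function name and the row-major list-of-lists data layout are assumptions for the sketch, not part of the method as claimed, and τ_M = 10 follows the example value given in the text:

```python
# Sketch of the field-of-view mask of section I-A.
# 'green' is assumed to be the green channel I_G as a list of rows.

def binary_mask(green, tau_m=10):
    """Return I_M: 1 wherever I_G(x, y) >= tau_m, 0 elsewhere."""
    return [[1 if v >= tau_m else 0 for v in row] for row in green]
```

For instance, binary_mask([[0, 5, 200], [12, 9, 40]]) yields [[0, 0, 1], [1, 0, 1]].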

B. Intensity Normalization

To facilitate vessel segmentation in the green channel, illumination variations throughout the region of interest are corrected: a background image I_B, capturing those illumination variations, is defined and I_G is divided by I_B.

The background image is defined using a large smoothing filter. In order to avoid border effects, a temporary image I_I is defined by inpainting I_G wherever I_M = 0 (Figure 2c), using the Navier-Stokes algorithm (M. Bertalmio, A. L. Bertozzi and G. Sapiro, "Navier-Stokes, fluid dynamics, and image and video inpainting", in Proc IEEE CVPR, vol. 1, 2001, pp. 355-62). As already done in the literature (R. Winder, P. Morrow, I. McRitchie, J. Bailie and P. Hart, "Algorithms for digital image processing in diabetic retinopathy", Comput Med Imaging Graph, vol. 33, no. 8, pp. 608-22, December 2009), a median filter of radius r_OD is then applied to image I_I in order to obtain the background image I_B (Figure 2d), where r_OD denotes the typical radius of optic discs in images. For example, r_OD = 75 pixels. At this point, the mask image is updated: I'_M(x, y) = 1 wherever I_G(x, y) ≥ τ'_M and I_B(x, y) ≥ τ'_M; I'_M(x, y) = 0 elsewhere, for example τ'_M = 20. This update ensures that all poorly illuminated areas of the retina are discarded. Finally, I_G(x, y) is divided by I_B(x, y) wherever I'_M(x, y) = 1, in order to obtain the illumination-corrected intensity image I_ICI (Figure 2e).

C. Vessel Segmentation

The step of identifying pixels comprises segmenting blood vessels as a structure of interest to obtain binary images of the blood vessels of the retina. As will become apparent from the remainder of the description, such segmentation can advantageously be used for both mosaic creation and laterality determination.

Many retinal vessel segmentation algorithms have been developed over the past decades. A DRIVE (Digital Retinal Images for Vessel Extraction) benchmark was created to compare them. The best performing algorithms (J. Staal, M. Abramoff, M. Niemeijer, M. Viergever and B. van Ginneken, "Ridge based vessel segmentation in color images of the retina", IEEE Trans Med Imaging, vol. 23, no. 4, pp. 501-9, April 2004 and M. Niemeijer, J. Staal, B. van Ginneken, M. Loog and M. Abramoff, "Comparative study of retinal vessel segmentation methods on a new publicly available database", in Proc SPIE Medical Imaging, M. S. J. Michael Fitzpatrick, Ed., vol. 5370, 2004, pp. 648-56) require manual segmentation for training and are therefore less straightforward to use in a different dataset. Zana's algorithm (F. Zana and J. Klein, "Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation", IEEE Trans Image Process, vol. 10, no. 7, pp. 1010-9, July 2001), the third best performing algorithm, does not. This algorithm, based on mathematical morphology, can be used to define a vessel probability map I_VP from the illumination-corrected intensity image I_ICI (Figure 2f). This algorithm has one main parameter: the diameter d_BV of the largest blood vessels, for example d_BV = 12.

Then, a binary vessel segmentation I_VS can be obtained as follows:

I_VS(x, y) = 1 wherever I_VP(x, y) > τ_VS,

I_VS(x, y) = 0 elsewhere (Figure 2g).
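The adaptive choice of τ_VS described next relies on a box-counting estimate of the fractal dimension of the binary segmentation. A hedged sketch follows; the box sizes and the least-squares fit are illustrative choices, not necessarily those of the inventors:

```python
import math

# Box-counting fractal dimension of a binary image (list of rows).
# N(s) = number of s-by-s boxes containing a nonzero pixel; the
# dimension is the slope of log N(s) against log(1/s).

def box_count(img, s):
    """Count s-by-s boxes containing at least one nonzero pixel."""
    boxes = set()
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                boxes.add((y // s, x // s))
    return len(boxes)

def fractal_dimension(img, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

On a straight line of pixels the estimate is 1; on a filled square it is 2, so a vessel tree with D_f = 1.6 lies in between.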

Threshold τ_VS can be chosen adaptively in order to make sure that all vessel segmentations have the same level of detail: τ_VS can be chosen so that the fractal dimension of I_VS, estimated using the box-counting technique (G. Liew, J. J. Wang, N. Cheung, Y. P. Zhang, W. Hsu, M. L. Lee, P. Mitchell, G. Tikellis, B. Taylor and T. Y. Wong, "The retinal vasculature as a fractal: methodology, reliability, and relationship to blood pressure", Ophthalmology, vol. 115, no. 11, pp. 1951-6, November 2008), equals D_f, where, for example, D_f = 1.6.

D. Post-Processing the Vessel Segmentation for Image Registration

As detailed later in section II, two vessel segmentations can be registered using fast chamfer matching (H. Barrow, J. Tenenbaum, R. Bolles and H. Wolf, "Parametric correspondence and chamfer matching: two new techniques for image matching", in Proc Int Joint Conf Artif Intell, vol. 2, 1977). In fast chamfer matching, a distance transform of the reference image is required: the Maurer distance transform I_VSD of I_VS is used (C. R. Maurer Jr., R. Qi and V. Raghavan, "A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions", IEEE Trans Pattern Anal Mach Intell, vol. 25, no. 2, pp. 265-70, February 2003) (Figure 2h). Pixels from the object of interest in the moving image are also required: a skeleton I_VSS of I_VS is used (Figure 2i).

E. Optic Disc Enhancement

The optic disc is white to yellowish, therefore it is visible in all three color channels. In order to enhance the optic disc, it is proposed to subtract the background in all three color channels (negative intensities are set to 0). The background of each color channel is defined as explained in section I-B above, but using a 50% larger median filter in order to preserve the optic disc. An optic disc enhanced image I_ODE is obtained by averaging the three background-subtracted images and by removing blood vessels in the resulting image through morphological closing (Figure 2j). Because blood vessels converge in the optic disc, the size of the structuring element is set to twice the diameter of the largest blood vessels (2·d_BV).

II. PAIRWISE IMAGE REGISTRATION

Once every image in one examination record has been processed (see Figure 3), images are registered two by two. Let T_{I,J} denote the transformation from one moving image I to one reference image J. During pairwise image registration, each, first, of the images is associated to each, second, of the images by the transformation T_{I,J} to form pairs comprising a first image I and a second image J.

A. Transformation Model

Transformation T_{I,J} is invertible, i.e. T_{J,I} = T_{I,J}^{-1}, which excludes elastic and quadratic transformations, as well as general affine transformations. T_{I,J} can be defined as a linear transformation combining translation, rotation and scaling.

B. Fast Chamfer Matching

Chamfer matching (H. Barrow, J. Tenenbaum, R. Bolles and H. Wolf, "Parametric correspondence and chamfer matching: two new techniques for image matching", in Proc Int Joint Conf Artif Intell, vol. 2, 1977) is a popular technique to find the best alignment between two sets of pixel locations ℐ = {p_i} and 𝒥 = {q_j}. The chamfer distance d_CM(ℐ, 𝒥) between ℐ and 𝒥 is given by:

d_CM(ℐ, 𝒥) = (1 / |ℐ|) Σ_{p_i ∈ ℐ} min_{q_j ∈ 𝒥} ‖p_i − q_j‖ (1)

A fast implementation of the chamfer distance can be achieved using a distance transform d_𝒥 of 𝒥 (for instance the Maurer distance - C. R. Maurer Jr., R. Qi and V. Raghavan, "A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions", IEEE Trans Pattern Anal Mach Intell, vol. 25, no. 2, pp. 265-70, February 2003):

d_CM(ℐ, 𝒥) = (1 / |ℐ|) Σ_{p_i ∈ ℐ} d_𝒥(p_i) (2)

Assuming ℐ and 𝒥 come from vessel segmentations of images I and J, respectively, d_CM can be used to assess some transformation T_{I,J}:

d_CM(T_{I,J}(ℐ), 𝒥) = (1 / |ℐ|) Σ_{p_i ∈ ℐ} d_𝒥(T_{I,J}(p_i)) (3)

Chamfer matching provides a smooth measure of fitness for T_{I,J} and can tolerate small misalignments, occlusions and deformations. The latter property is particularly important given our choice of transformation model.
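Equations (1) and (2) can be illustrated with a small sketch; the brute-force distance table stands in for the Maurer transform used in the text, and point sets are assumed to be lists of (x, y) tuples:

```python
import math

# Chamfer distance: mean distance from each point of set I to the
# nearest point of set J (eq. 1), and the same quantity read off a
# precomputed distance transform d_J (eq. 2).

def chamfer(points_i, points_j):
    """Equation (1), by direct nearest-neighbor search."""
    return sum(min(math.dist(p, q) for q in points_j)
               for p in points_i) / len(points_i)

def distance_table(points_j, w, h):
    """Brute-force distance transform d_J over a w-by-h grid."""
    return {(x, y): min(math.dist((x, y), q) for q in points_j)
            for x in range(w) for y in range(h)}

def chamfer_fast(points_i, d_j):
    """Equation (2): average of d_J at the points of I."""
    return sum(d_j[p] for p in points_i) / len(points_i)
```

Both computations agree; the precomputed form pays off when many candidate transformations of ℐ are scored against the same fixed 𝒥.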

C. Transformation Optimization

In order to estimate T_{I,J} robustly, while taking advantage of its reversibility, the following fitness function f_{I,J}(T_{I,J}) is proposed:

f_{I,J}(T_{I,J}) = d_CM(T_{I,J}(ℐ), 𝒥) + d_CM(T_{I,J}^{-1}(𝒥), ℐ) (4)

where ℐ and 𝒥 are the locations of nonzero pixels in I_VS and J_VS, respectively (see section I-D). The distance transforms d_ℐ = I_VSD and d_𝒥 = J_VSD are used.

For faster matching, a modified fitness function f'_{I,J}(T_{I,J}) can also be used:

f'_{I,J}(T_{I,J}) = d_CM(T_{I,J}(ℐ'), 𝒥) + d_CM(T_{I,J}^{-1}(𝒥'), ℐ) (5)

where ℐ' and 𝒥' are the locations of nonzero pixels in I_VSS and J_VSS, respectively (see section I-D): i.e. only the skeletons of the moving vessel segmentations are used.

Transformation T_{I,J} has transformation parameters, namely four transformation parameters: a rotation angle α, a scaling factor σ and translations along the x- and y-axes, τ_x and τ_y respectively. These parameters are obtained by minimizing the function f'_{I,J} using the simplex algorithm (J. A. Nelder and R. Mead, "A simplex method for function minimization", Comput J, vol. 7, no. 4, pp. 308-13, January 1965). Simplex-based minimization requires an initial solution. In this initial solution, α is set to 0 and σ is set to 1. The translation parameters (τ_x, τ_y) are found by a 2-D grid search. The transformation parameters are then varied to minimize the function f'_{I,J}(T*_{I,J}), where T*_{I,J} denotes the optimal transformation from I to J.
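The 2-D grid search that seeds the simplex minimization can be sketched as follows; the search radius, the step, and the stand-in cost function are illustrative assumptions, not values from the text:

```python
# 2-D grid search over the translation (tx, ty), with alpha = 0 and
# sigma = 1 held fixed. 'cost' stands in for f'_{I,J}; radius and step
# are illustrative choices.

def grid_search_translation(cost, radius=10, step=2):
    """Return the (tx, ty) on the grid minimizing cost(tx, ty)."""
    best, best_t = float('inf'), (0, 0)
    for tx in range(-radius, radius + 1, step):
        for ty in range(-radius, radius + 1, step):
            c = cost(tx, ty)
            if c < best:
                best, best_t = c, (tx, ty)
    return best_t
```

The returned translation, together with α = 0 and σ = 1, forms the initial solution handed to the simplex optimizer.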

At the end of the registration process, a success criterion is computed. When two images do not match, minimization may lead to a trivial solution where the images only overlap in areas where no blood vessels were detected. Let O_{I,J} denote the percentage of pixels from the transformed skeleton image T*_{I,J}(I_VSS) projected inside the mask image J_M. O_{I,J} indicates how much I and J overlap, with an emphasis on the most vascularized regions. A success criterion S(I, J) can be defined as:

III. DOUBLE MOSAIC FORMATION

Once every input image in an examination record has been processed (see Figure 3) and every pair of images has been registered, images are divided into two groups, one per retina.

The step of forming groups of images is performed by implementing graph theory which includes:

- building a graph comprising vertices and edges {I, J},

- forming at least one subgraph of vertices,

- assigning the images corresponding to the vertices of each subgraph to one group of images belonging to one same retina.

Each vertex corresponds to one of the images, and each edge {I, J} connects a pair of vertices I_v, J_v corresponding to one of the pairs of first and second images I, J in which first transformed pixels resulting from an application of the transformation T_{I,J} to the first image I and second pixels of the second image J overlap over a predetermined overlap threshold.

In particular, in the following, a graph is built (see Figure 4) on the basis of the images of the examination record and two subgraphs are extracted from the graph in order to specify two groups of images corresponding respectively to the two retinas of a patient.

In the disclosed embodiment, the graph theory is implemented through a clique analysis for forming the subgraphs. To that end, consistent edges and inconsistent edges are identified among the edges {I, J} on basis of a difference in an arrangement of the first transformed pixels with respect to the second pixels. The clique analysis then comprises a k-clique analysis and, where necessary, a 2-clique analysis to remove inconsistent edges. In doing so, the vertices of each subgraph are connected by consistent edges with each other, and are not connected by any edge with the vertices of another subgraph.

A. Registration Graph and Subgraphs

Let G = (V, E) denote an undirected graph where each vertex I ∈ V corresponds to an input image and each edge {I, J} ∈ E corresponds to the pairwise registration between images I and J. Two images I and J are connected by an edge if and only if they were successfully registered based on the overlap criterion (see section II-C). Edge {I, J} is associated with a weight w_{I,J} denoting the cost of assigning images I and J to the same mosaic: w_{I,J} is defined as the registration error resulting from the application of the function f'_{I,J}(T*_{I,J}) between images I and J (see section II-C). G is referred to as the registration graph.
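The registration graph can be built directly from the pairwise results; the sketch below assumes these are given as a dict mapping image pairs to (success, error) tuples, a representation chosen for illustration:

```python
# Build the registration graph G = (V, E): keep an edge {I, J} only if
# the pairwise registration succeeded, weighted by its registration
# error. Frozensets model undirected edges.

def registration_graph(images, pairwise):
    """Return (vertices, edges) with edges as {frozenset({I, J}): w_IJ}."""
    edges = {}
    for (i, j), (success, error) in pairwise.items():
        if success:
            edges[frozenset((i, j))] = error
    return set(images), edges
```

Images whose every registration failed remain as isolated vertices, mirroring the case where V_1 ∪ V_2 does not cover V.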

As apparent from section II-C, the registration error w_{I,J} is calculated on the basis of both:

- the difference in the arrangement of the first transformed pixels resulting from the application of the transformation T_{I,J} to the first image I with respect to the second pixels, and

- a difference in the arrangement between second transformed pixels resulting from an application of the inverted transformation T_{I,J}^{-1} to the second image J with respect to the first pixels.

In addition, in the disclosed embodiment, the arrangement difference is a distance difference between transformed pixels of one of the first and second images and pixels of the other of the first and second images. In particular, from the above, the registration error w_{I,J} is calculated on the basis of the minimum distance between each of the transformed pixels of one of the first and second images and one of the pixels of the other of the first and second images.

In order to separate the set of input images into two mosaics, one per retina, two disjoint subgraphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2) of G should be identified such that:

V_1 ∩ V_2 = ∅,

E_1 ∩ E_2 = ∅.

G_1 and G_2 are referred to as registration subgraphs: each of them corresponds to the images of one of the retinas, on the basis of which one mosaic can be created.

Ideally, V_1 and V_2 should form a partition of V, i.e. V_1 ∪ V_2 = V, but in the presence of poor quality images, this condition may not be achievable.

B. Registration Subgraph Extraction

The general strategy for extracting registration subgraphs G_1 and G_2 is described hereafter, after giving the following definitions from graph theory (in relation to Figure 5):

- a clique is a set of pairwise adjacent vertices in a graph,

- a k-clique is a clique containing k vertices: a 2-clique is a line, a 3-clique is a triangle, etc.

- a graph is connected when there is a path between every pair of vertices,

- a connected component is a maximal connected subgraph.
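The last definition can be made concrete: a connected component is found by breadth-first search over an adjacency mapping. The sketch below assumes the graph is given as a vertex list and a set of frozenset edges, an illustrative representation:

```python
from collections import deque

# Maximal connected subgraphs of an undirected graph, found by BFS.

def connected_components(vertices, edges):
    """Return the connected components as a list of vertex sets."""
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, queue = set(), deque([v])
        while queue:
            u = queue.popleft()
            if u in comp:
                continue
            comp.add(u)
            queue.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components
```

After edge removal, each component returned here corresponds to one candidate registration subgraph.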

The proposed algorithm for registration subgraph extraction starts by progressively removing edges from a graph G' = (V, E'); initially G' = G, E' = E. This initial edge removal process relies on the analysis of k-cliques in G', k ≥ 3 (see section III-C). At the end of this process, graph G' may or may not be connected. If necessary, a second edge removal process, relying on the analysis of 2-cliques in G', is applied until G' is disconnected (see section III-D). Each connected component in G' defines a subgraph. Finally, if more than two subgraphs are defined, a score is computed for each pair of subgraphs and (G_1, G_2) is defined as the best pair (see section III-E).

C. k-Clique Analysis

Two rules on the k-cliques of G', k ≥ 3, are presented hereafter (see Figure 5). Whenever one of these rules is violated, edges are considered as inconsistent and are removed from E' to correct the graph. Each rule is first presented in the particular case k = 3 and generalized to k > 3 afterward. The overall rule checking strategy is detailed in section III-C-3).

1) First Rule (Clique Consistency)

If three images {I, J, K} form a 3-clique in G', it means that I could be registered to J, J could be registered to K and I could be registered to K. If these three registrations are free of error, then the transformations should meet the following condition:

T*_{I,J} = T*_{I,K} ∘ T*_{K,J} = T*_{I,K} ∘ (T*_{J,K})^{-1} (7)

If not, it means that at least one image was erroneously matched to the other two: only the edge with minimal weight among {I, J}, {J, K} and {I, K} is retained, the other two are removed from E'.

In the general case k > 3, let (I, J) be a pair of images inside a k-clique A of G', namely a complete clique A including k vertices and in which all pairs of vertices are connected with edges. For any path (I, K_1, K_2, ..., K_l, J) inside A, l ≤ k − 2, transformation T*_{I,J} should meet the following condition:

T*_{I,J} = T*_{I,K1} ∘ T*_{K1,K2} ∘ ... ∘ T*_{Kl,J} (8)

If one has already checked that all the (k − 1)-cliques comply with this rule, then only the paths of length k − 1 need to be considered. These conditions should be verified for every pair (I, J) inside A. If one of these conditions is not met, then only the inscribed (k − 1)-clique B with minimal cumulative weight Σ_{{I,J} ∈ B} w_{I,J} is retained; the remaining vertex is disconnected from B in G'.

In other words, a clique consistency condition is defined as being met if first transformed pixels resulting from an application of a composition of the transformations of the other pairs of first and second images along any path within a complete k-clique to the first image I and the second pixels of the second image J overlap over the predetermined overlap threshold:

T_{I,J} = T_{I,K1} ∘ T_{K1,K2} ∘ ... ∘ T_{Kl,J}

where I, J, K_i, i = 1, ..., l, l ≤ k − 2, are the k images of the clique A. Within each complete k-clique A in an increasing number k of vertices, for each pair of first and second vertices I_v, J_v of the clique A, it is checked whether the transformation T_{I,J} from the first image I to the second image J meets the clique consistency condition:

- if T_{I,J} meets the consistency condition, the edge {I, J} is identified as a consistent edge,

- if T_{I,J} does not meet the consistency condition, the edge {I, J} is identified as an inconsistent edge and is removed, and the inscribed clique B including k − 1 vertices, in which all the transformations meet the clique consistency condition and having a minimum cumulative registration error Σ_{{I,J} ∈ B} w_{I,J}, is retained.
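The 3-clique consistency test can be checked numerically by composing the transforms along the path I → K → J and comparing with the direct transform. The sketch below assumes a similarity-transform parametrization (angle, scale, tx, ty) and an illustrative probe point and tolerance; it is not the inventors' exact test:

```python
import math

# Similarity transform t = (angle, scale, tx, ty):
# p -> scale * R(angle) p + (tx, ty).

def apply(t, p):
    a, s, tx, ty = t
    x, y = p
    c, si = math.cos(a), math.sin(a)
    return (s * (c * x - si * y) + tx, s * (si * x + c * y) + ty)

def chain(t_first, t_then):
    """Transform equivalent to applying t_first, then t_then."""
    a1, s1, tx1, ty1 = t_first
    a2, s2, tx2, ty2 = t_then
    x, y = apply(t_then, (tx1, ty1))
    return (a1 + a2, s1 * s2, x, y)

def consistent(t_ij, t_ik, t_kj, probe=(100.0, 50.0), tol=1.0):
    """True when the path I->K->J agrees with the direct I->J transform."""
    return math.dist(apply(t_ij, probe),
                     apply(chain(t_ik, t_kj), probe)) <= tol
```

Comparing the images of a probe point (rather than raw parameters) expresses the mismatch in pixels, which is commensurate with the registration error weights.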

2) Second Rule (Clique Completeness)

For this second rule, sets of k vertices from V are considered such that all pairs of vertices but one are connected in E'. These sets of vertices are referred to as k-incomplete cliques.

If edges {I, K} and {J, K} belong to E' but {I, J} does not, it means that I could be registered to K, J could be registered to K but I could not be registered to J. There are two possible explanations. The first explanation is that images I and J do not overlap, or at least that the overlapping score (see section II-C) is too low: this would explain why G' does not contain the {I, J, K} 3-clique. A second explanation is that at least one image, among I and J, was erroneously matched to K. To find this out, the overlapping score min(O_{I,J}, O_{J,I}) is computed after defining T*_{I,J} = T*_{I,K} ∘ T*_{K,J} = T*_{I,K} ∘ (T*_{J,K})^{-1}. If I and J do overlap (S(I, J) = true), then only the edge with minimal weight among {I, K} and {J, K} is retained. The other one is removed from E'.

In the general case k > 3, let (I, J) be the pair of unconnected images inside a k-incomplete clique C of G'. To make sure clique C is valid (i.e. I and J do not overlap), it is needed to check that S(I, J) = false, after defining T*_{I,J} = T*_{I,Kl} ∘ T*_{Kl,J} for every other image K_l, l = 1, 2, ..., k − 2, in clique C. If one of these conditions is not met, then only the vertex with minimum cumulative weight Σ_l w_{X,Kl}, X ∈ {I, J}, remains connected to the K_1, K_2, ..., K_{k−2} vertices. The other one is disconnected from those vertices. In other words, a clique completeness condition is defined as being met if first transformed pixels resulting from an application of a composition of the transformations of the pairs of one of the first and second images with each of the other images K_l of the k-incomplete clique C to the first image I and the second pixels of the second image J overlap over the predetermined overlap threshold:

T_{I,J} = T_{I,Kl} ∘ T_{Kl,J}, where l = 1, ..., k − 2.

Within each clique C in an increasing number k of vertices, it is checked whether the transformation T_{I,J} from the first image I to the second image J meets the clique completeness condition:

- if T_{I,J} meets the clique completeness condition, the edge {I, J} is identified as a consistent edge,

- if T_{I,J} does not meet the clique completeness condition, an inscribed clique D is retained, the inscribed clique D including:

k − 2 vertices connected by pairs with edges to all the other images of clique C, and

one vertex, chosen between the first and second vertices I_v, J_v, connected by pairs to said k − 2 vertices with edges having the minimum cumulative registration error Σ_l w_{X,Kl}, X ∈ {I, J}; the edges between the other of the first and second vertices I_v, J_v and the k − 2 vertices are identified as inconsistent edges and are removed.

3) Rule Checking Strategy

The k-cliques and k-incomplete cliques of G' can be checked in any order. But for efficiency, k-cliques are checked before k-incomplete cliques, and both k-cliques and k-incomplete cliques are checked in increasing order of k. The 3-cliques are checked first. Once all the 3-cliques comply with the first rule, the 4-cliques are checked. And so on until no k-cliques are found in G'. Then, 3-incomplete cliques are checked with respect to the second rule, followed by 4-incomplete cliques and so on. The advantage of this order is that G' becomes progressively simpler, so fewer large cliques and incomplete cliques need to be tested.

D. 2-Clique Analysis for Graph Disconnection

If, after k-clique analysis (k ≥ 3), graph G' is still connected, then it is not possible to define two registration subgraphs (one per retina) yet: more edges need to be removed from E'. Edges can be simply removed from E' in decreasing order of weight until G' becomes disconnected. If, at any point, all remaining edge weights are below some predetermined connection threshold δ_W, for example δ_W = 3, chances are that a single retina was photographed during the eye fundus examination. In that case, mosaic formation stops: G_1 = G' and G_2 = (∅, ∅). Otherwise, exactly two connected components will appear: G_1 and G_2 are defined as these connected components.

E. Registration Subgraph Selection

If the operator took photographs from the periphery of the retina that do not overlap with the others, or if he or she took photographs of very poor quality, then more than two connected components may appear after k-clique analysis (k ≥ 3). So, two of them need to be selected.

A score sc is defined for each connected component G" = (V", E"):

where |I_VS| is the number of nonzero pixels in the vessel segmentation of I (see section I-C). By defining G_1 and G_2 as the two connected components maximizing this score, low-quality images and peripheral images are less likely to be selected (blood vessels converge around the optic disc).
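The selection step may be sketched as follows; since the score sc depends on the vessel segmentations, it is passed in here as a callable, an assumption of this sketch:

```python
def select_registration_subgraphs(components, score):
    # Rank connected components by the score sc and keep the best two;
    # the score (based on vessel-segmentation pixel counts |I_VS|) is
    # supplied as a callable since its exact form depends on the data.
    ranked = sorted(components, key=score, reverse=True)
    return ranked[0], (ranked[1] if len(ranked) > 1 else None)
```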

F. Mosaic Rendering

For creating a mosaic as a composite image of each retina, for each group:

- the image of the group that minimizes a cumulative minimum distance with the other images of the group is selected as reference image R,

- each image I of the group is projected in the reference image by the transformation T_{I,R}.

In order to render one mosaic from a registration subgraph G" = (V", E"), one image R ∈ V" is selected as reference. The shortest path in G" between each pair of images in V" is computed. R is defined as the image that minimizes the cumulative minimum distance to every other image in V". Then every other image I ∈ V" is projected in the reference system attached to R. Let (I, K_1, K_2, …, K_l, R) denote the shortest path from I to R. The transformation T_{I,R} from I to R is defined as:

T_{I,R} = T*_{I,K_1} ∘ T*_{K_1,K_2} ∘ … ∘ T*_{K_l,R}   (10)

An example is presented in Figure 6. A simple blending strategy is used: the intensity of one pixel in the mosaic image is the average illumination-corrected intensity among images projected in that pixel.
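The composition along the shortest path may be sketched in Python as follows; representing the pairwise transformations as 3x3 homogeneous matrices (nested lists) and using an unweighted BFS shortest path are simplifying assumptions of this sketch:

```python
from collections import deque

def compose_to_reference(adj, pairwise_T, I, R):
    # Chain the pairwise transformations along the shortest path from
    # image I to the reference R, as in equation (10). Plain BFS gives
    # the fewest-hops path; a weighted shortest path could be substituted.
    prev = {I: None}
    queue = deque([I])
    while queue:
        u = queue.popleft()
        if u == R:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, node = [], R
    while node is not None:
        path.append(node)
        node = prev[node]
    path.reverse()  # I, K1, K2, ..., R

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    T = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for a, b in zip(path, path[1:]):
        # x_b = T_{a,b} x_a, so each step left-multiplies the running product
        T = matmul(pairwise_T[(a, b)], T)
    return T
```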

It should be noted that if eye-pleasing mosaics are needed, a quadratic transformation (A. Can, C. V. Stewart, B. Roysam and H. L. Tanenbaum, "A feature-based technique for joint, linear estimation of high-order image-to-mosaic transformations: mosaicing the curved human retina", IEEE Trans Pattern Anal Mach Intell, vol. 24, no. 3, pp. 412-9, March 2002) can be estimated from every image I to the reference R using T_{I,R} (angle θ, scaling factor σ and translation (τ_x, τ_y)) as the initial solution. The initial quadratic transformation T_quad is given by:

The proposed optimization procedure based on the modified fast chamfer transform (section II-C) can also be used in the quadratic case.

IV. DOUBLE MOSAIC UPDATE

When a patient undergoes a subsequent eye fundus examination, in the context of pathology progression measurement, it is desirable to register the new images to the initial set of images. One way would be to build a new pair of mosaics from the new images and register them to the old pair, as described in section II. However, this solution is suboptimal: indeed, if two images from the same retina were misregistered in the initial patient examination record due to poor overlapping, chances are that one new image in the subsequent patient examination record will bridge the gap between them. So it is best to build a single pair of mosaics using all images from both examinations. Of course, images from two different examinations can be separated afterward in order to monitor the temporal changes: this is illustrated on several examples in Figure 7.

It should be noted that mosaic formation does not need to start from scratch every time the patient undergoes a new examination, which could lead to quite complex registration graphs, as the number of examinations increases. Two improvements are proposed hereafter.

A. First Improvement: Two-Stage Clique Analysis

A first improvement consists in 1) applying k-clique analysis (k ≥ 3) separately to each set of images (one per examination record) as described in section III-C, 2) merging the resulting graphs, and 3) extracting two registration subgraphs from the merged graph (one per retina).

More particularly, for processing at least one subsequent examination record comprising a plurality of new images of at least parts of at least one retina of the patient acquired subsequently with respect to previous images of a previous examination record, the method comprises:

- for each new image, identifying pixels related to at least one structure of interest,

- registering the new images pairwise,

- forming at least one new group of new images belonging to one same retina by implementing graph theory to pairs of new images,

- merging each new group with previous groups of previous images, each new image of each new group being registered pairwise with each previous image of each previous group,

- forming at least one group of images belonging to one same retina by implementing graph theory to pairs of new and previous images.

Namely, in the particular case of two consecutive examinations, let G'_1 (respectively G'_2) denote the graph obtained after k-clique analysis (k ≥ 3) using images from the first (respectively the second) eye fundus examination. G'_1 and G'_2 are merged by connecting each vertex of G'_1 to each vertex of G'_2; let G' denote the resulting graph. From that point, double clique formation is performed exactly as described in sections III-C to III-E. The advantage is that G' has a lower complexity than it would have had if created directly using all images from both examinations. But the resulting registration subgraphs are identical.
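The merging step may be sketched as follows; the `register` callable standing in for pairwise registration, and the frozenset-keyed edge weights, are assumptions of this sketch:

```python
def merge_examination_graphs(V1, E1, V2, E2, register):
    # Merge the per-examination graphs G'_1 and G'_2 by connecting every
    # vertex of G'_1 to every vertex of G'_2; `register` is a placeholder
    # for the pairwise registration step and returns an edge weight.
    V = list(V1) + list(V2)
    E = dict(E1)
    E.update(E2)
    for I in V1:
        for J in V2:
            E[frozenset((I, J))] = register(I, J)
    return V, E
```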

B. Second Improvement: Inline Clique Analysis

As mentioned above, a two-stage k-clique analysis (k ≥ 3) reduces the overall complexity. To follow up with this idea, it is also possible to progressively update the registration graph by adding images one by one, as described hereafter. Let {I_1, I_2, …, I_N} be a set of N images collected during one or several eye fundus examinations. Double mosaic formation starts with the empty graph G' = (V' = ∅, E' = ∅). Then for i = 1 to N, 1) I_i is connected to each image in V', 2) I_i is added to V' and 3) G' is updated through k-clique analysis (k ≥ 3). During k-clique analysis, only cliques and incomplete cliques involving I_i need to be examined. Finally, double clique formation is performed exactly as described in sections III-C to III-E.
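The inline update loop may be sketched as follows; `register` and `update_cliques` are placeholders (assumptions of this sketch) for pairwise registration and for the k-clique analysis restricted to cliques involving the newly added image:

```python
def build_graph_inline(images, register, update_cliques):
    # Inline clique analysis: start from the empty graph and add images
    # one by one, updating the registration graph after each addition.
    V, E = [], {}
    for I in images:
        for J in V:                      # 1) connect I to each image in V
            w = register(I, J)
            if w is not None:
                E[frozenset((I, J))] = w
        V.append(I)                      # 2) add I to V
        update_cliques(V, E, I)          # 3) local k-clique analysis
    return V, E
```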

V. LATERALITY IDENTIFICATION

Now that mosaics have been formed, it is possible to detect retinal landmarks: the main vessel arch (see section V-A) and the optic disc (see section V-B). The image processing pipeline for retinal landmark detection is illustrated in Figure 8. Based on their detection in both mosaics, the laterality of each mosaic can be determined robustly from the main blood vessel arch and the optic disc (see section V-C).

A. Main Vessel Arch Detection

The macular region is delimited by four large blood vessels: one artery and one vein running on top of the macula, one artery and one vein running below the macula.

These four blood vessels converge in the optic disc. They form a noticeable circular or elliptical pattern. A similar pattern can be observed on the other side of the optic disc, except that this second pattern is usually more elongated and, more importantly, it is significantly smaller.

The detection of this main vessel arch relies on a circular model of the main vessel arches. Let R_MVA and S_RMVA denote the average and standard deviation of the main vessel arch radius in images (for example R_MVA = 316, S_RMVA = 16). Let W_MVA denote the average width of the main vessel arch, i.e. the average maximal distance between the main artery and the main vein (for example W_MVA = 85). It should be noted that the width of the main vessel arch W_MVA is significantly larger than the variability of the main vessel arch radius 2·S_RMVA. This implies that a single circular pattern can be used reliably.

Main vessel arch detection is illustrated in the first line of Figure 8. It starts with a mosaic of blood vessel segmentations on the left, denoted by M_VS. Section I-C describes how the inputs of this mosaic were obtained. M_VS is convolved with a binary disc of radius W_MVA to form the main vessel arch probability image M_MVA (Figure 8a). M_MVA is convolved with a binary circle of radius R_MVA (Figure 8b). A main vessel arch center probability image M_MVAC is obtained: the pixel with maximal intensity in M_MVAC indicates the center of the main arch.
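The two-convolution pipeline may be sketched in Python as follows; the naive list-of-lists convolution and the offset-based kernels are simplifying assumptions of this sketch (a real implementation would use FFT-based or library convolutions):

```python
def binary_disc(r):
    # offsets of a binary disc of radius r, used as a convolution kernel
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]

def binary_circle(r, thickness=1):
    # offsets of a binary circle (thin annulus) of radius r
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if (r - thickness) ** 2 <= dy * dy + dx * dx <= r * r]

def convolve(image, offsets):
    # naive convolution of a 2-D list-of-lists image with an offset kernel
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(image[y + dy][x + dx]
                            for dy, dx in offsets
                            if 0 <= y + dy < h and 0 <= x + dx < w)
    return out

def arch_center(m_vs, w_mva, r_mva):
    # M_VS -> (disc of radius W_MVA) -> M_MVA -> (circle of radius R_MVA)
    # -> M_MVAC; the brightest pixel of M_MVAC is the arch center
    m_mva = convolve(m_vs, binary_disc(w_mva))
    m_mvac = convolve(m_mva, binary_circle(r_mva))
    h, w = len(m_mvac), len(m_mvac[0])
    return max(((y, x) for y in range(h) for x in range(w)),
               key=lambda p: m_mvac[p[0]][p[1]])
```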

The fitted circular pattern is reported in Figure 6. It can be seen that its center does not necessarily match the fovea: this is because the main arch is closer to an ellipse than to a circle. The advantage of a circular model, over an elliptical model, is that only one template is needed. With an elliptical model, several patterns (at least two) are needed to cover the range of possible orientations.

B. Optic Disc Detection

Knowing that the four main blood vessels converge in the optic disc, two regions of interest, one on each side of the circular pattern, are defined for optic disc detection (Figure 8c). The width of these regions of interest is set to the average diameter of optic discs in images, 2·r_OD. Let R_left and R_right denote the regions of interest on the left and right side of the main vessel arch, respectively.

Optic disc detection relies on a mosaic of optic disc enhanced images (second column of Figure 8, on the left), denoted by M_ODE. Section I-E describes how the inputs of this mosaic were obtained. An optic disc probability p_OD^δ is defined for each region of interest R_δ, δ ∈ {left, right}. A region of interest is likely to contain an optic disc if the optic disc enhanced image M_ODE is bright and if the main vessel arch probability M_MVA is high. Therefore, the optic disc probability for R_δ is defined as:

Limiting the search to highly vascularized regions on both sides of the main arch center dramatically reduces the probability of false alarms due to illumination artifacts and bright lesions.

The optic disc probability is used to determine mosaic laterality in section V-C.

Once laterality is determined, the region of interest R_OD containing the optic disc is known. To precisely locate the optic disc center inside R_OD, M_ODE is convolved with a binary disc of radius r_OD to form the optic disc center probability image M_ODC (Figure 8d). The pixel with maximal intensity in M_ODC, restricted to R_OD, indicates the center of the optic disc.

C. Laterality Identification

The main vessel arch detections and the optic disc probabilities defined above are used to determine the laterality of each mosaic.

If a single mosaic was detected, its laterality is defined as argmax_δ p_OD^δ, with δ ∈ {left, right}.

If two mosaics G_1 and G_2 were detected, their lateralities L_1 and L_2 are defined as follows:

L_1 = left and L_2 = right if p_OD^left(G_1) · p_OD^right(G_2) > p_OD^right(G_1) · p_OD^left(G_2),

L_1 = right and L_2 = left otherwise.
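This decision rule may be sketched in Python as follows; the dictionary keyed by (mosaic, side) pairs holding the optic disc probabilities p_OD^δ is an illustrative assumption of this sketch:

```python
def identify_laterality(p_od):
    # p_od[(mosaic, side)] holds p_OD^side(mosaic) for mosaics 'G1', 'G2'
    # and sides 'left', 'right'; returns the laterality pair (L1, L2).
    lhs = p_od[('G1', 'left')] * p_od[('G2', 'right')]
    rhs = p_od[('G1', 'right')] * p_od[('G2', 'left')]
    return ('left', 'right') if lhs > rhs else ('right', 'left')
```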