Title:
COMPUTER VISION BASED METHOD FOR EXTRACTING FEATURES RELATING TO THE DEVELOPMENTAL STAGE OF TRICHURIS SPP. EGGS
Document Type and Number:
WIPO Patent Application WO/2013/159856
Kind Code:
A1
Abstract:
There is provided a computer vision based method for extracting features relating to the developmental stages of Trichuris spp. eggs, wherein for the final developmental stages a larva is present inside the egg, said Trichuris spp. eggs having a substantially oblong or elliptical shape with a protruding polar plug at each end, the shape of the Trichuris spp. eggs thereby defining a longitudinal direction and a transverse direction of the eggs. The method comprises a step of obtaining and storing one or more digital images of Trichuris spp. eggs suspended in a liquid solution, a step of detecting one or more Trichuris spp. eggs in the image(s), and a step of extracting one or more features from an egg content image region representing at least part of the egg contents of a detected egg. The extraction of one or more features from the egg content image region may include one or more measurements of the direction-dependent structures of the egg contents, such as one or more measurements of the longitudinal structures of the egg contents and/or one or more measurements of the transverse structures of the egg contents. The Trichuris spp. eggs may be Trichuris suis eggs.

Inventors:
BRUUN JOHAN MUSAEUS (DK)
CARSTENSEN JENS MICHAEL (DK)
MOLIIN OUTZEN KAPEL CHRISTIAN (DK)
Application Number:
PCT/EP2013/000855
Publication Date:
October 31, 2013
Filing Date:
March 21, 2013
Assignee:
PARASITE TECHNOLOGIES AS (DK)
BRUUN JOHAN MUSAEUS (DK)
CARSTENSEN JENS MICHAEL (DK)
MOLIIN OUTZEN KAPEL CHRISTIAN (DK)
International Classes:
G06K9/00; A61K35/62
Other References:
AVCI D ET AL: "An expert diagnosis system for classification of human parasite eggs based on multi-class SVM", EXPERT SYSTEMS WITH APPLICATIONS, OXFORD, GB, vol. 36, no. 1, 1 January 2009 (2009-01-01), pages 43 - 48, XP025559923, ISSN: 0957-4174, [retrieved on 20081017], DOI: 10.1016/J.ESWA.2007.09.012
YOON SEOK YANG ET AL: "Automatic Identification of Human Helminth Eggs on Microscopic Fecal Specimens Using Digital Image Processing and an Artificial Neural Network", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, IEEE SERVICE CENTER, PISCATAWAY, NJ, USA, vol. 48, no. 6, 1 June 2001 (2001-06-01), XP011007098, ISSN: 0018-9294
DOGANTEKIN E ET AL: "A robust technique based on invariant moments - ANFIS for recognition of human parasite eggs in microscopic images", EXPERT SYSTEMS WITH APPLICATIONS, OXFORD, GB, vol. 35, no. 3, 1 October 2008 (2008-10-01), pages 728 - 738, XP022735303, ISSN: 0957-4174, [retrieved on 20080617], DOI: 10.1016/J.ESWA.2007.07.020
SOMMER C: "Quantitative characterization of texture used for identification of eggs of bovine parasitic nematodes", JOURNAL OF HELMINTHOLOGY, LONDON, vol. 72, no. 2, 1 June 1998 (1998-06-01), pages 179 - 182, XP008163730, ISSN: 0022-149X, [retrieved on 20090605], DOI: 10.1017/S0022149X00016370
CONOR NUGENT ET AL: "Using active learning to annotate microscope images of parasite eggs", ARTIFICIAL INTELLIGENCE REVIEW, KLUWER ACADEMIC PUBLISHERS, DO, vol. 26, no. 1-2, 20 September 2007 (2007-09-20), pages 63 - 73, XP019547854, ISSN: 1573-7462
PENG SHE-XIN ET AL: "Engineering Research on the Automatic Identification of Helminth Egg Images", no. 2, 1 February 2005 (2005-02-01), pages 11 - 15, XP008163737, ISSN: 1673-016X, Retrieved from the Internet [retrieved on 20130723]
ZHAO YAE: "Automatic recognition of human parasite egg pictures", ZHONGGUO TISHIXUE YU TUXIANG FENXI = CHINESE JOURNAL OF STEREOLOGY AND IMAGE ANALYSIS, ZHONGGUO TISHIXUE XUEHUI, CN, vol. 2, no. 3, 1 March 1997 (1997-03-01), pages 125 - 138, XP008163736, ISSN: 1007-1482
A. DAUGSCHIES ET AL: "Autofluorescence microscopy for the detection of nematode eggs and protozoa, in particular Isospora suis , in swine faeces", PARASITOLOGY RESEARCH, vol. 87, no. 5, 27 April 2001 (2001-04-27), pages 409 - 412, XP055073387, ISSN: 0932-0113, DOI: 10.1007/s004360100378
R. W. SUMMERS; D. E. ELLIOTT; J. F. URBAN; R. THOMPSON; J. V. WEINSTOCK: "Trichuris suis therapy in Crohn's disease.", GUT, vol. 54, no. 1, January 2005 (2005-01-01), pages 87 - 90
R. W. SUMMERS; D. E. ELLIOTT; J. F. URBAN; R. A. THOMPSON; J. V. WEINSTOCK: "Trichuris suis therapy for active ulcerative colitis: A randomized controlled trial", GASTROENTEROLOGY, vol. 128, no. 4, April 2005 (2005-04-01), pages 825 - 832
A. REDDY; B. FRIED: "The use of Trichuris suis and other helminth therapies to treat Crohn's disease", PARASITOLOGY RESEARCH, vol. 100, no. 5, April 2007 (2007-04-01), pages 921 - 7
Y. S. YANG; D. K. PARK; H. C. KIM; M. H. CHOI; J. Y. CHAI: "Automatic identification of human helminth eggs on microscopic fecal specimens using digital image processing and an artificial neural network", IEEE TRANSACTIONS ON BIO-MEDICAL ENGINEERING, vol. 48, no. 6, June 2001 (2001-06-01), pages 718 - 30
D. AVCI; A. VAROL: "An expert diagnosis system for classification of human parasite eggs based on multi-class SVM", EXPERT SYSTEMS WITH APPLICATIONS, vol. 36, no. 1, January 2009 (2009-01-01), pages 43 - 48
E. DOGANTEKIN; M. YILMAZ; A. DOGANTEKIN; E. AVCI; A. SENGUR: "A robust technique based on invariant moments - ANFIS for recognition of human parasite eggs in microscopic images", EXPERT SYSTEMS WITH APPLICATIONS, vol. 35, no. 3, October 2008 (2008-10-01), pages 728 - 738
D. THIENPONT; F. ROCHETTE; O. F. J. VANPARIJS, DIAGNOSING HELMINTHIASIS BY COPROLOGICAL EXAMINATION, 1986
L. S. ROBERTS; J. JANOVY JR.; G. D. SCHMIDT: "Foundations of Parasitology", 2005
J. S. PITTMAN; G. SHEPHERD; B. J. THACKER; G. H. MYERS: "Trichuris suis in finishing pigs - Case report and review", JOURNAL OF SWINE HEALTH AND PRODUCTION, vol. 18, no. 6, 2010, pages 306 - 313
M. ABRAMOWITZ; M. W. DAVIDSON, DARKFIELD ILLUMINATION OLYMPUS MICROSCOPY RESOURCE CENTER, 26 April 2012 (2012-04-26), Retrieved from the Internet
R. J. S. BEER: "Morphological descriptions of the egg and larval stages of Trichuris suis Schrank, 1788", PARASITOLOGY, vol. 67, December 1973, pages 263 - 278
M. I. BLACK; P. V. SCARPINO; C. J. O'DONNELL; K. B. MEYER; J. V. JONES; E. S. KANESHIRO: "Survival rates of parasite eggs in sludge during aerobic and anaerobic digestion", APPLIED AND ENVIRONMENTAL MICROBIOLOGY, vol. 44, no. 5, November 1982 (1982-11-01), pages 1138 - 43
N. OTSU: "A Threshold Selection Method from Gray-Level Histograms", IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, vol. 9, no. 1, 1979, pages 62 - 66
M. B. DILLENCOURT; H. SAMET; M. TAMMINEN: "A general approach to connected-component labeling for arbitrary image representations", JOURNAL OF THE ACM, vol. 39, no. 2, 1992, pages 253 - 280
MATHWORKS: "MATLAB regionprops", 2012, retrieved from the Internet on 26 April 2012
H. BERGE; D. TAYLOR; S. KRISHNAN; T. S. DOUGLAS, MRC/UCT MEDICAL IMAGING RESEARCH UNIT, DEPARTMENT OF HUMAN BIOLOGY, AND DIVISION OF CLINICAL PHARMACOLOGY, UNIVERSITY OF CAPE TOWN, SOUTH AFRICA, 2011, pages 204 - 207
G. DIAZ; F. GONZALEZ; E. ROMERO: "Automatic Clump Splitting for Cell Quantification in Microscopical Images", CIARP'07 PROCEEDINGS OF THE CONGRESS ON PATTERN RECOGNITION 12TH IBEROAMERICAN CONFERENCE ON PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS AND APPLICATIONS, 2007, pages 1 - 10
A. P. WITKIN: "Scale-space filtering: A New Approach To Multi-Scale Description", ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, IEEE INTERNATIONAL CONFERENCE ON ICASSP '84, vol. 9, 1984, pages 150 - 153
P. BURT; E. ADELSON: "The Laplacian Pyramid as a Compact Image Code", IEEE TRANSACTIONS ON COMMUNICATIONS, vol. 31, no. 4, April 1983 (1983-04-01), pages 532 - 540
S. G. MALLAT: "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE TRANSACTIONS ON, vol. 11, no. 7, 1989, pages 674 - 693
J. CANNY: "A computational approach to edge detection.", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. PAMI-8, no. 6, June 1986 (1986-06-01), pages 679 - 698
R. C. GONZALEZ; R. E. WOODS, DIGITAL IMAGE PROCESSING, INTERNATIONAL EDITION, 2008
Attorney, Agent or Firm:
NORDIC PATENT SERVICE A/S (Copenhagen K, DK)
Claims:
CLAIMS

1. A computer vision based method for extracting features relating to the developmental stages of Trichuris spp. eggs, wherein for the final developmental stages a larva is present inside the egg, said Trichuris spp. eggs having a substantially oblong or elliptical shape with a protruding polar plug at each end, the shape of the Trichuris spp. eggs thereby defining a longitudinal direction and a transverse direction of the eggs, said method comprising:

a) obtaining and storing one or more digital images of Trichuris spp. eggs suspended in a liquid solution,

b) detecting one or more Trichuris spp. eggs in the image(s), and

c) extracting one or more features from an egg content image region representing at least part of the egg contents of a detected egg.

2. A method according to claim 1, wherein in step a) the stored digital images of the Trichuris spp. eggs comprise one or more bright-field images and wherein in step c) one or more features are extracted from an egg content image region being a bright-field egg content image region.

3. A method according to claim 1 or 2, wherein one or more features are extracted from an egg content image region being extracted from an image or image region which includes a full representation of a detected Trichuris spp. egg.

4. A method according to claims 2 and 3, wherein the bright-field egg content image region is extracted from a bright-field image or image region, which includes a full representation of a detected Trichuris spp. egg.

5. A method according to claim 3 or 4, wherein the extracted egg content image region excludes the polar plugs of the detected Trichuris spp. egg.

6. A method according to any one of the claims 3-5, wherein the extracted egg content image region excludes the shell of the detected Trichuris spp. egg.

7. A method according to claim 6, wherein the extracted egg content image region has a substantially elliptical shape, thereby defining a content ellipse image.

8. A method according to any one of the claims 1-7, wherein the extraction of one or more features from the egg content image region includes one or more measurements of the direction-dependent structures of the egg contents.

9. A method according to claim 8, wherein the extraction of one or more features from the egg content region includes one or more measurements of the longitudinal structures of the egg contents and/or one or more measurements of the transverse structures of the egg contents.

10. A method according to claim 9, wherein the one or more measurements of the longitudinal structures are based on a measure of the linear structures and/or edge structures in the longitudinal direction.

11. A method according to claim 9 or 10, wherein one or more measurements of the transverse structures are based on a measure of the linear structures and/or edge structures in the transverse direction.

12. A method according to claim 10 or 11, wherein the linear structures and/or edge structures are measured at a predetermined scale.

13. A method according to claim 9, wherein the one or more measurements of the longitudinal structures are based on a measure of the linear structures and/or edge structures in the longitudinal direction at one or more scales in a multi-scale representation of the image region from which the features are extracted.

14. A method according to claim 9 or 13, wherein one or more measurements of the transverse structures are based on a measure of the linear structures and/or edge structures in the transverse direction at one or more scales in a multi-scale representation of the image region from which the features are extracted.

15. A method according to claim 13 or 14, wherein the multi-scale representation of the image region from which the features are extracted is a pyramid representation or a scale space representation.

16. A method according to any one of the claims 9-15, wherein one or more measurements of the longitudinal structures of the egg contents is based on a longitudinal comparison of pixel intensities obtained from similarly addressed pixels in first and second image parts representing at least part of the egg contents of a detected egg, with the second image part being obtained by shifting the first image part one or more pixels in a direction substantially following the longitudinal direction of the egg.

17. A method according to any one of the claims 9-16, wherein one or more measurements of the transverse structures of the egg contents is based on a transverse comparison of pixel intensities obtained from similarly addressed pixels in the first image part and a third image part representing at least part of the egg contents of a detected egg, with the third image part being obtained by shifting the first image part one or more pixels in a direction substantially following the transverse direction of the egg.

18. A method according to claims 16 and 17, wherein the longitudinal comparison of pixel intensities from the first and second image parts comprises calculating a longitudinal correlation coefficient ρ_long for pixel intensities representing at least part of the similarly addressed pixels, and wherein the transverse comparison of pixel intensities from the first and third image parts comprises calculating a transverse correlation coefficient ρ_trans for pixel intensities representing at least part of the similarly addressed pixels.

19. A method according to claim 18, wherein the feature extraction further includes a ratio measure based on the ratio between the longitudinal correlation coefficient ρ_long and the transverse correlation coefficient ρ_trans.

20. A method according to claims 10 and 11, wherein expressions representing a measure of the edge structures in the longitudinal and transverse directions are obtained by use of an edge detector algorithm.

21. A method according to claim 20, wherein the edge detector algorithm is selected from the following algorithms: the Canny edge detector algorithm, the Sobel edge detector algorithm, and the Prewitt edge detector algorithm.

22. A method according to claim 20 or 21, wherein the expression representing the edge structures in the longitudinal direction, longitudinal edge count, is defined as the number of edge pixels of the egg contents given by the edge detector algorithm and being oriented substantially in the longitudinal direction, and wherein the expression representing the edge structures in the transverse direction, transverse edge count, is defined as the number of edge pixels of the egg contents given by the edge detector algorithm and being oriented substantially in the transverse direction.

23. A method according to claim 22, wherein the longitudinal edge count is defined as the number of edge pixels of the egg contents given by the edge detector algorithm and being oriented in the longitudinal direction plus/minus an angle within the range of 10-45 degrees, such as 22.5 degrees, and wherein the transverse edge count is defined as the number of edge pixels of the egg contents given by the edge detector algorithm and being oriented in the transverse direction plus/minus an angle within the range of 10-45 degrees, such as 22.5 degrees.

24. A method according to any one of the claims 1-23, wherein in step a) the stored digital images of the Trichuris spp. eggs comprise one or more dark-field images and wherein in step c) one or more features are extracted from an egg content image region being a dark-field egg content image region.

25. A method according to claim 24, wherein one or more features are extracted from a dark-field egg content image region being extracted from a dark-field image region which includes a full representation of a detected Trichuris spp. egg.

26. A method according to claim 25, wherein the extracted dark-field egg content image region excludes the polar plugs of the detected Trichuris spp. egg.

27. A method according to claim 25 or 26, wherein the extracted dark-field egg content image region excludes the shell of the detected Trichuris spp. egg.

28. A method according to claim 27, wherein the extracted dark-field egg content image region has a substantially elliptical shape, thereby defining a content ellipse image.

29. A method according to claims 25-28, wherein the feature extraction of step c) includes dark-field features extracted from the dark-field egg content image region, said dark-field feature extraction being based on variations in pixel intensities measured or extracted for at least part of the dark-field egg content image region.

30. A method according to claim 29, wherein the dark-field feature extraction comprises a computation of the average of the extracted pixel intensities.

31. A method according to claim 29 or 30, wherein the dark-field feature extraction comprises a computation of the mean of the extracted pixel intensities, mean scattering intensity, and/or of the median of the extracted pixel intensities, median scattering intensity.

32. A method according to any one of the claims 1-31, wherein the Trichuris spp. eggs are Trichuris suis eggs.

33. A method according to any one of the claims 1-32, further comprising a classification step, wherein at least part of the features extracted from an egg content image region representing a detected egg are used for classifying the detected egg.

34. A method according to claim 33, wherein the classification of the detected egg is a binary classification with respect to the developmental stage of the egg.

35. A method according to claim 34, wherein the detected egg is classified as either containing a larva or not containing a larva.

36. A method according to claim 33, wherein the classification of the detected egg is a multi-class classification with respect to the developmental stage of the egg, said multi-class classification comprising at least three classes of developmental stages.

37. A method according to any one of the claims 33-36, wherein the classification is at least partly based on extracted features, for which features the extraction includes one or more measurements representing longitudinal structures and transverse structures of the egg contents.

38. A method according to claim 37, wherein the classification is at least partly based on a ratio measure obtained from a measure representing the longitudinal structures of the egg contents and a measure representing the transverse structures of the egg contents.

39. A method according to claim 37 or 38, wherein one or more measurements representing the longitudinal structures are based on a measure of the linear structures and/or edge structures in the longitudinal direction, and wherein one or more measurements representing the transverse structures are based on a measure of the linear structures and/or edge structures in the transverse direction.

40. A method according to claim 39, wherein measurements of the linear structures and/or edge structures in the longitudinal and transverse directions are measured according to one or more of the claims 12-23.

41. A method according to any one of the claims 37-40, wherein a measure representing the longitudinal structures of the egg contents has to exceed a corresponding measure representing the transverse structures of the egg contents by a predetermined factor being larger than one in order to have the egg classified as containing a larva.

42. A method according to any one of the claims 34-41, wherein the classification is at least partly based on extracted dark-field features according to one or more of the claims 29-31.

Description:
COMPUTER VISION BASED METHOD FOR EXTRACTING FEATURES

RELATING TO THE DEVELOPMENTAL STAGES OF TRICHURIS SPP. EGGS

FIELD OF THE INVENTION

The present invention relates to a computer vision based method for extracting features relating to the developmental stages of Trichuris spp. eggs, wherein for the final developmental stages a larva is present inside the egg. More particularly, the method may be used for extracting features for Trichuris suis eggs.

BACKGROUND OF THE INVENTION

Purpose of the invention

Exposure to helminths (intestinal worms) such as whipworms has been shown to have a mitigating effect on a number of autoimmune diseases such as Crohn's disease [1] and ulcerative colitis [2]. This new treatment is known as helminthic therapy, and it utilizes the immunoregulatory behavior [3] of helminths in the intestines, where orally administered whipworm eggs hatch into larvae which establish for a short period of time in a self-limiting intestinal infection. Only eggs that contain a fully developed larva can induce the positive immune response. Thus, the medicinal potency correlates with the proportion of eggs with fully developed larvae in a given egg suspension. The presented invention enables an automated, non-invasive and cost-effective way of assessing the biological potency of a particular egg suspension.

Related work

Several research papers have studied the use of image analysis to separate parasite eggs of distinct species, including helminths. Yang et al. [4] and others [5], [6] had some success in separating different species of human helminth eggs based on their exterior size and shape. Such an approach cannot be used to determine developmental stages, and thereby biological potency, since there are no visible differences in the exterior size and shape of eggs that contain a larva and eggs that do not. Moreover, the larva in an egg may not be fully developed (see figure 15) and may therefore be unable to establish in the intestine.

SUMMARY OF THE INVENTION

According to the present invention there is provided a computer vision based method for extracting features relating to the developmental stages of Trichuris spp. eggs, said method comprising:

a) obtaining and storing one or more digital images of Trichuris spp. eggs suspended in a liquid solution,

b) detecting one or more Trichuris spp. eggs in the image(s), and

c) extracting one or more features from an egg content image region representing at least part of the egg contents of a detected egg.

For the final developmental stages of Trichuris spp. eggs, a larva is present inside the egg. The Trichuris spp. eggs have a substantially oblong or elliptical shape with a protruding polar plug at each end. The shape of a Trichuris spp. egg thereby defines a longitudinal direction and a transverse direction of the egg.

Thus, according to the present invention there is also provided a computer vision based method for extracting features relating to the developmental stages of Trichuris spp. eggs, wherein for the final developmental stages a larva is present inside the egg, said Trichuris spp. eggs having a substantially oblong or elliptical shape with a protruding polar plug at each end, the shape of the Trichuris spp. eggs thereby defining a longitudinal direction and a transverse direction of the eggs, said method comprising: a) obtaining and storing one or more digital images of Trichuris spp. eggs suspended in a liquid solution,

b) detecting one or more Trichuris spp. eggs in the image(s), and c) extracting one or more features from an egg content image region representing at least part of the egg contents of a detected egg.

The extracted features are related to the contents of an egg, and may also be denoted egg content features.

It should be understood that for the computer vision based method of the present invention the digital images obtained in step a) may be stored in computer memory. The detection of one or more Trichuris spp. eggs in the image(s) of step b) may be performed by running one or more computer programs or executing instructions on a computer to thereby detect one or more Trichuris spp. eggs in the image(s). Similarly, the extraction of one or more features from an egg content image region in step c) may be performed by running one or more computer programs or executing instructions on a computer to extract said one or more features from the egg content image region representing at least part of the egg contents of a detected egg.

According to one or more embodiments of the invention, the Trichuris spp. eggs are Trichuris suis eggs. It is preferred that for step a) the stored digital images of the Trichuris spp. eggs comprise one or more bright-field images, and that for step c) one or more features are extracted from an egg content image region being a bright-field egg content image region. In a preferred embodiment the extracted egg content image region excludes the polar plugs of the detected Trichuris spp. egg. It is also preferred that the extracted egg content image region excludes the shell of the detected Trichuris spp. egg. Here, the extracted egg content image region may have a substantially elliptical shape, thereby defining a content ellipse image.

It is preferred that the extraction of one or more features from the egg content image region includes one or more measurements of the direction-dependent structures of the egg contents. Here, the extraction of one or more features from the egg content region may include one or more measurements of the longitudinal structures of the egg contents and/or one or more measurements of the transverse structures of the egg contents. One or more measurements of the longitudinal structures may be based on a measure of the linear structures and/or edge structures in the longitudinal direction, and one or more measurements of the transverse structures may be based on a measure of the linear structures and/or edge structures in the transverse direction.

According to an embodiment of the invention the linear structures and/or edge structures are measured at a predetermined scale.

According to another embodiment of the invention one or more measurements of the longitudinal structures may be based on a measure of the linear structures and/or edge structures in the longitudinal direction at one or more scales in a multi-scale representation of the image region from which the features are extracted. Also one or more measurements of the transverse structures may be based on a measure of the linear structures and/or edge structures in the transverse direction at one or more scales in a multi-scale representation of the image region from which the features are extracted. The multi-scale representation of the image region from which the features are extracted may be a pyramid representation or a scale space representation.

It is within one or more embodiments of the invention that one or more measurements of the longitudinal structures of the egg contents is based on a longitudinal comparison of pixel intensities obtained from similarly addressed pixels in first and second image parts representing at least part of the egg contents of a detected egg, with the second image part being obtained by shifting the first image part one or more pixels in a direction substantially following the longitudinal direction of the egg. It is also within one or more embodiments of the invention that one or more measurements of the transverse structures of the egg contents is based on a transverse comparison of pixel intensities obtained from similarly addressed pixels in the first image part and a third image part representing at least part of the egg contents of a detected egg, with the third image part being obtained by shifting the first image part one or more pixels in a direction substantially following the transverse direction of the egg.

It is preferred that the longitudinal comparison of pixel intensities from the first and second image parts comprises calculating a longitudinal correlation coefficient ρ_long for pixel intensities representing at least part of the similarly addressed pixels, and that the transverse comparison of pixel intensities from the first and third image parts comprises calculating a transverse correlation coefficient ρ_trans for pixel intensities representing at least part of the similarly addressed pixels. Here, the feature extraction may further include a ratio measure based on the ratio between the longitudinal correlation coefficient ρ_long and the transverse correlation coefficient ρ_trans.

For embodiments of the invention wherein one or more measurements of the longitudinal structures are based on a measure of the edge structures in the longitudinal direction, and one or more measurements of the transverse structures are based on a measure of the edge structures in the transverse direction, it is preferred that expressions representing a measure or measures of the edge structures in the longitudinal and transverse directions are obtained by use of an edge detector algorithm. Here, the edge detector algorithm may be selected from the following algorithms: the Canny edge detector algorithm, the Sobel edge detector algorithm, and the Prewitt edge detector algorithm.

The expression or expressions representing the edge structures in the longitudinal direction, longitudinal edge count, may be defined as the number of edge pixels of the egg contents, which are given by the edge detector algorithm, and which are oriented substantially in the longitudinal direction, and the expression or expressions representing the edge structures in the transverse direction, transverse edge count, may be defined as the number of edge pixels of the egg contents, which are given by the edge detector algorithm, and which are oriented substantially in the transverse direction. The longitudinal edge count may be defined as the number of edge pixels of the egg contents, which are given by the edge detector algorithm and which are oriented in the longitudinal direction plus/minus an angle within the range of 10-45 degrees, such as within the range of 15-35 degrees, such as about 22.5 degrees, and wherein the transverse edge count is defined as the number of edge pixels of the egg contents given by the edge detector algorithm and being oriented in the transverse direction plus/minus an angle within the range of 10-45 degrees, such as within the range of 15-35 degrees, such as about 22.5 degrees.

The present invention also covers embodiments, wherein in step a) the stored digital images of the Trichuris spp. eggs comprise one or more dark-field images and wherein in step c) one or more features are extracted from an egg content image region being a dark-field egg content image region. Here, one or more features may be extracted from a dark-field egg content image region being extracted from a dark-field image region which includes a full representation of a detected Trichuris spp. egg. Preferably, the extracted dark-field egg content image region excludes the polar plugs of the detected Trichuris spp. egg. It is also preferred that the extracted dark-field egg content image region excludes the shell of the detected Trichuris spp. egg, and here the extracted dark-field egg content image region may have a substantially elliptical shape, thereby defining a content ellipse image.

According to one or more embodiments of the invention the feature extraction of step c) may include dark-field features extracted from the dark-field egg content image region, where the dark-field feature extraction is based on variations in pixel intensities measured or extracted for at least part of the dark-field egg content image region. The dark-field feature extraction may comprise a computation of the average of the extracted pixel intensities. According to embodiments of the invention the dark-field feature extraction may comprise a computation of the mean of the extracted pixel intensities, mean scattering intensity, and/or a computation of the median of the extracted pixel intensities, median scattering intensity.

The present invention also covers embodiments which further comprise a classification step, wherein at least part of the features extracted from an egg content image region representing a detected egg are used for classifying the detected egg. Here, the classification of the detected egg may be a binary classification with respect to the developmental stage of the egg, and the detected egg may be classified as either containing a larva or not containing a larva.

The classification of the detected egg may also be a multi-class classification with respect to the developmental stage of the egg, where the multi-class classification comprises at least three classes of developmental stages.

The classification step may be performed by running one or more computer programs or executing instructions on a computer to determine or classify the detected egg as containing a larva or not, or to determine the developmental stage of the detected egg. The determination or classification results may be used as an indicator of the biological potency of the egg, and the computer system used for the classification may generate a report of the analysis results obtained from the different steps.

The classification may be at least partly based on extracted features, for which features the extraction includes one or more measurements representing longitudinal structures and transverse structures of the egg contents. Here, the classification may be at least partly based on a ratio measure obtained from a measure representing the longitudinal structures of the egg contents and a measure representing the transverse structures of the egg contents. It is preferred that the one or more measurements representing the longitudinal structures are based on a measure of the linear structures and/or edge structures in the longitudinal direction, and that the one or more measurements representing the transverse structures are based on a measure of the linear structures and/or edge structures in the transverse direction. The measurements of the linear structures and/or edge structures in the longitudinal and transverse directions may be measured according to one or more of the herein mentioned embodiments. It is within an embodiment of the invention that a measure representing the longitudinal structures of the egg contents has to exceed a corresponding measure representing the transverse structures of the egg contents by a predetermined factor being larger than one in order to have the egg classified as containing a larva. The present invention also covers embodiments, wherein the classification is at least partly based on extracted dark-field features, where the dark-field features may be extracted according to one or more of the embodiments described herein.

BRIEF DESCRIPTION OF THE DRAWINGS

List of figures:

Fig. 1 : Complete egg analysis system

Fig. 2: Detection phase

Fig. 3 : Feature extraction phase

Fig. 4: Classification phase

Fig. 5: Image definitions

Fig. 6: Orientation alignment

Fig. 7: Shape profile computation

Fig. 8: Examples of the seven egg categories

Fig. 9: Egg content region extraction

Fig. 10: Spatial autocorrelation computation

Fig. 11 : Correlation formula

Fig. 12: Edge orientations

Fig. 13: Classification graphs I

Fig. 14: Classification graphs II

Fig. 15: Developmental stages

Fig. 16: Examples of corresponding brightfield and darkfield images

Fig. 17: Example of physical system setup

Egg characteristics

The presented analysis relates to Trichuris spp. eggs, i.e. eggs from various species of the genus Trichuris, also known as whipworms. Trichuris spp. eggs, in the following denoted Trichuris eggs, have a distinct, lemon-like shape and a smooth outer shell. The shape can be described as a prolate spheroid, also known as a prolate ellipsoid of revolution, with protruding, operculate plugs at each end (apex). When observed from the side (lateral view), the shape of a Trichuris egg is oblong and elliptical with protruding plugs at the ends [7]. The average length (major axis) and width (the two minor axes) of Trichuris eggs depend to a certain degree on the specific species. The sizes of some of the main species of Trichuris are:

Trichuris trichiura (human whipworm): 50-58 × 22-27 μm [7], 50-54 × 22-23 μm [8]

Trichuris suis (pig whipworm): 50-68 × 21-31 μm [7], 60 × 25 μm [9]

Trichuris muris (mouse whipworm): 67-70 × 31-34 μm [7]

Trichuris vulpis (canine whipworm): 70-90 × 32-41 μm [7]

Trichuris ovis (ruminant whipworm): 70-80 × 30-42 μm [7]

Experimental setup

A generic setup is as follows:

A 40 μl sample of an egg suspension with approximately 40,000 eggs per ml is placed on a microscope slide or a similar container with a cover glass on top.

The slide is placed under an upright or inverted microscope and images are acquired at around 100-200X magnification, in both brightfield and darkfield illumination [10]. Examples of corresponding brightfield and darkfield images can be seen in figure 16.

Definitions

The following section defines some of the terms that are used throughout the description. It includes terms related to egg positions and egg categorization as well as image handling and digital image analysis.

Lateral

A lateral object is a prolate object that is placed on its side. A lateral egg is an egg lying on its side. In an image, the outline of such an egg is elliptical with protruding polar plugs at the ends.

Upright

An upright object is a prolate object that is placed on one of its apices.

An upright egg is an egg that is standing upright on one of its polar plugs. In an image, the outline of such an egg is circular or close to circular with a diameter corresponding to the width of a lateral egg.

Singularized

A singularized object is an object that does not touch or overlap with other objects, i.e. is clearly separated from nearby objects.

A singularized egg is an egg that does not touch or overlap with other eggs or foreign particles.

Touching

A touching object is an object that touches, but does not overlap with, other objects. A touching egg is an egg that touches one or more other eggs or impurities, but is not overlapping with them, i.e. its entire content region is clearly visible.

Partly covered egg

A partly covered egg is an egg whose full outline is not distinguishable because the egg is covering or covered by one or more other objects, for instance other eggs.

Foreign particle / impurity

The terms 'foreign particle' and 'impurity' are used interchangeably to describe all objects in the images that are not Trichuris eggs. This includes both organic and inorganic impurities such as dust particles, fibers, minerals, plant remnants, and pollen, as well as gas, air, or oil bubbles and non-Trichuris eggs.

Multiple objects

The term 'multiple objects' is used to describe two or more objects that touch or overlap. In the analysis these are seen and treated as the same object until they are split into a number of separate objects.

ROI, blob, subimage

A ROI (region-of-interest) is a rectangular, cropped region of an input image that contains the object of interest plus a margin of its surroundings. The blob (binary large object) is a binary image of the same dimensions that indicates which pixels belong to the object of interest in the ROI image. In this description, these are collectively called 'subimages' to distinguish them from the input images of the overall system. These terms are illustrated in figure 5.

Egg orientation / egg direction

The orientation of an egg is the angle between its longitudinal axis and the image x-axis. It is also called the direction of the egg. See the left half of figure 6 for an illustration of the image coordinate system and the egg axes.

Developmental stages / 'containing a larva'

The level of embryonation and larval development inside Trichuris spp. eggs, here referenced for Trichuris suis eggs, can be assessed by following morphological changes inside the egg shell [11], [12]. The eggs can be classified as either unsegmented eggs, eggs undergoing cellular division (1, 2, 3, 6, and many blastomeres), eggs containing a cylindrical embryo, and eggs with a fully developed, defined infective L1 larva [11]. The term 'containing a larva' is used to describe the latter two of these. The developmental stages are illustrated in figure 15.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The description of the presented invention is built around four main flowcharts that can be seen on figures 1 through 4. These are described one at a time in the following, and additional figures are introduced along the way when needed.

Figure 1: Complete egg analysis system

One or more microscopic images are passed to the Detection phase (see figure 2). The microscopic images can be both multispectral (more than one wavelength) and multimodal (more than one illumination mode). An image can therefore consist of several bands, each representing a combination of a wavelength (for instance ultraviolet, a visible color, or near-infrared) and an illumination mode (for instance brightfield, darkfield, or phase contrast). In the Detection phase, all foreground objects in a chosen or computed band of each input image are detected and assigned one of the following six categories:

Lateral singularized eggs - Trichuris eggs that do not touch or overlap with other objects and lie down on their side.

Upright singularized eggs - Trichuris eggs that do not touch or overlap with other objects and stand upright on one of their plugs.

Lateral touching eggs - Lateral Trichuris eggs that are touching but not covering or covered by other objects.

Upright touching eggs - Upright Trichuris eggs that are touching but not covering or covered by other objects.

Partly covered eggs - Trichuris eggs whose full outline is not distinguishable since the egg is covering or covered by other objects, for instance other eggs.

Foreign particles - All objects in the images that are not Trichuris eggs.

During the Detection phase an object can be assigned an additional, intermediate category: Multiple objects - Two or more objects that touch or overlap. In the analysis these are seen and treated as the same object until they are split into a number of separate objects.

See also the definitions of the terms used in the category names in the Definitions section above. In the following description, the term 'eggs' is referring to 'Trichuris eggs' unless otherwise noted.

Examples of ROI and blob subimages of each category can be seen on figure 8. All of the objects in the six main categories are counted. The total number of detected eggs (categories 1 through 5) can be used to assess the concentration of the analyzed egg suspension, i.e. the number of eggs per ml. For a good assessment, the analyzed sample should be representative of the entire suspension. The particle count (category 6) can be used to assess the purity of the analyzed egg suspension, for instance measured as the number of impurities or foreign particles per ml. Further analysis of the impurities, for instance a multiclass classification of the detected particles, is not presented here. Only the Lateral singularized eggs and the Lateral touching eggs are passed to the Feature Extraction phase (figure 3) since these have clearly visible and unobstructed contents. Here, relevant quantitative features of the egg contents are extracted for each egg. The resulting list of features is used in the Classification phase (figure 4), where each of the eggs is classified as either containing a larva or not (binary classification), or by their developmental stage (multiclass classification). The classification results can be used as an indicator of the biological potency of the egg suspension.

The system finally produces a report of the analysis results, including detection results (e.g. listings and subimages of the detected objects and their categories), feature extraction results (e.g. feature lists), and classification results (e.g. assigned classes, certainty measures of the assigned classes etc.).

Figure 2: Detection phase

This description and figure 2 illustrate one possible way of detecting the Trichuris eggs in an input image.

(1) Select or compute gray-level image

The detection of the eggs is done on a single gray-level (monochromatic) image. This gray-level image can be one of the following:

A band from the original image (corresponding to one wavelength or one channel of RGB)

A linear combination of bands, e.g. a standard grayscale representation or a band from the output of a dimensionality reduction algorithm like principal component analysis (PCA) or canonical discriminant analysis (CDA).

A non-linear combination of bands, e.g. a channel from another color space, for instance the V-channel of HSV, or a band from the output of a nonlinear dimensionality reduction algorithm.
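As an illustration of this step, a small helper for producing a gray-level band could look as follows. This is a minimal sketch assuming a Python/scikit-image implementation; the function name, the library choice, and the specific channel indices are assumptions for illustration and are not prescribed by the method.

```python
# Illustrative sketch of step (1): deriving a single gray-level band from an
# RGB (or multiband) input. Function name and library choice (scikit-image)
# are assumptions for illustration only.
from skimage.color import rgb2gray, rgb2hsv

def select_gray_level(image, mode="grayscale", band=1):
    if mode == "band":
        return image[..., band]          # a single band, e.g. the green channel
    if mode == "grayscale":
        return rgb2gray(image)           # linear combination of bands
    if mode == "hsv_value":
        return rgb2hsv(image)[..., 2]    # non-linear combination: V-channel of HSV
    raise ValueError(f"unknown mode: {mode}")
```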

(2) Threshold

The gray-level image is thresholded, resulting in a binary image indicating all foreground pixels, in this case pixels with values below the threshold. The thresholding can be either fixed (for instance at 0.6), automatic (for instance using Otsu's method [13]), or adaptive to local regions of the image.

(3) Morphological processing

The morphological processing used to prepare the binary image for object detection is hole filling and morphological closing.

(4) Identify objects (blobs)

Blobs (binary large objects) - connected groups of foreground pixels - are extracted using connected-components labeling [14].

(5) Remove small blobs

Small blobs are removed if their area (pixel count) is under a predefined threshold (e.g. 1000 pixels). Each of the remaining blobs is then processed one at a time.
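Steps (2) through (5) could, for example, be sketched as follows, again assuming a Python/scikit-image implementation. Otsu's method, the fixed-threshold option, and the 1000-pixel area limit come from the text above; the structuring element size and the helper name are illustrative assumptions.

```python
# Illustrative sketch of detection steps (2)-(5): threshold, morphological
# processing, connected-components labeling, and removal of small blobs.
from scipy.ndimage import binary_fill_holes
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import binary_closing, disk, remove_small_objects

def detect_blobs(gray, fixed_threshold=None, min_area=1000):
    # (2) Threshold: foreground pixels are those below the threshold
    #     (assumes a gray-level image scaled to [0, 1] if a fixed value is used)
    t = fixed_threshold if fixed_threshold is not None else threshold_otsu(gray)
    binary = gray < t
    # (3) Morphological processing: hole filling and morphological closing
    binary = binary_fill_holes(binary)
    binary = binary_closing(binary, disk(3))
    # (4)-(5) Connected-components labeling and removal of small blobs
    binary = remove_small_objects(binary, min_size=min_area)
    labels = label(binary)
    return labels, regionprops(labels)
```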

(6) Align blob

A copy of the blob and the corresponding original gray-level image are created. These are then aligned with the image axes as illustrated on figure 6, by being rotated the same number of degrees so that the longitudinal direction of the object is aligned with the vertical axis of the image coordinate system.

(7) Compute shape profile

The radial distance from the blob's centroid (center of mass) to the edge of the blob at angles from 0 to 360 degrees is computed. The resulting set of (angle, distance) measurements is denoted the 'shape profile' of the blob. A similar approach is used in [4].
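A minimal sketch of such a shape profile computation is given below. Binning boundary pixels by their angle relative to the centroid is one simple realisation; the exact procedure is not prescribed by the text, and the helper name is an assumption.

```python
# Illustrative sketch of step (7): the shape profile as the radial distance
# from the blob centroid to the blob boundary at angles 0-359 degrees.
import numpy as np
from scipy.ndimage import binary_erosion, center_of_mass

def shape_profile(blob, n_angles=360):
    blob = blob.astype(bool)
    boundary = blob & ~binary_erosion(blob)          # one-pixel-thick boundary
    rows, cols = np.nonzero(boundary)
    cy, cx = center_of_mass(blob)                    # blob centroid (center of mass)
    angles = np.degrees(np.arctan2(rows - cy, cols - cx)) % 360
    radii = np.hypot(rows - cy, cols - cx)
    profile = np.zeros(n_angles)
    bins = (angles * n_angles / 360).astype(int)     # one bin per degree
    for b, r in zip(bins, radii):
        profile[b] = max(profile[b], r)              # farthest boundary pixel per angle bin
    return profile                                   # the (angle, distance) measurements
```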

(8) Analyze shape

The blob and its shape profile are then analyzed in order to assign a category to the blob. The analysis consists of the following steps:

1) Compare the shape profile against a model shape profile of an 'ideal' lateral singularized egg. Assign it the category "Lateral singularized egg" if these are sufficiently similar. The similarity measure could for instance be the sum of absolute differences between the two shape profiles, which would then be compared to a threshold value in order to make the decision.

2) Compare the shape profile against a model shape profile of an 'ideal' upright singularized egg. Similarly to the above, assign it the category "Upright singularized egg" if these are sufficiently similar.

3) Compute the 'solidity' of the blob as the ratio between the area of the blob and the area of the blob's convex hull [15]. Blobs with a low solidity and an area somewhat larger (for instance 1.2 times larger) than the area of an 'ideal' lateral singularized egg are given the intermediate category of "Multiple objects"; the rest are assigned the category "Foreign particle".

4) If given the category "Multiple objects", the blob is split into one or more smaller blobs using a clump splitting algorithm, e.g. using concavity analysis [16] or template matching [17]. Each of these new blobs is then categorized as either "Lateral touching egg" or "Upright touching egg" if the objects did not overlap, "Partly covered egg" if it did overlap with another object but still can be identified as an egg, or "Foreign particle" if none of the above.
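For illustration, the profile comparison of steps 1) and 2) and the solidity test of step 3) could be sketched as follows. The sum of absolute differences and the 1.2 area factor follow the text; the similarity threshold, the solidity threshold, and the helper names are illustrative assumptions.

```python
# Illustrative sketch of step (8): assigning a category from the shape profile
# and the blob solidity. Threshold values are placeholders, not from the text.
import numpy as np

def profile_distance(profile, model_profile):
    # Sum of absolute differences between the measured and the model shape profile
    return np.sum(np.abs(profile - model_profile))

def categorize_blob(profile, lateral_model, upright_model, area, convex_area,
                    lateral_egg_area, sim_threshold=500.0, solidity_threshold=0.9):
    if profile_distance(profile, lateral_model) < sim_threshold:
        return "Lateral singularized egg"
    if profile_distance(profile, upright_model) < sim_threshold:
        return "Upright singularized egg"
    solidity = area / convex_area                    # blob area / convex hull area
    if solidity < solidity_threshold and area > 1.2 * lateral_egg_area:
        return "Multiple objects"                    # candidate for clump splitting
    return "Foreign particle"
```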

Figure 3: Feature extraction phase

The input to the feature extraction phase consists of at least two images, a ROI subimage and a blob subimage. Additionally it may include any number of other ROI subimages of the same region from other bands or illumination modes, like a darkfield ROI subimage used for darkfield-based features.

(1) Extract egg content region

The 'egg content region', i.e. the region inside the egg where the egg contents are located, is extracted for further analysis. One way to do this, which is independent of the egg orientation, is illustrated in figure 9 and consists of the following:

1) Compute the ellipse that has the same normalized second central moments as the blob [15]. This is denoted the 'blob ellipse'. See fig. 9 (c).

2) Define a new ellipse with a minor axis of the same length as that of the blob ellipse, a major axis of 95% of the length of the blob ellipse's major axis, and the same orientation as the blob ellipse. This ellipse is denoted the 'body ellipse' since it indicates the body of the egg including the egg shell but without the polar plugs. See fig. 9 (d) and (e).

3) Define a new ellipse with axis lengths at 80% of the length of the corresponding body ellipse axes. This ellipse is denoted the 'content ellipse' since it indicates the contents of the egg. See fig. 9 (f). The image region covered by this ellipse is used as the 'egg content region' in the subsequent analysis.

Examples of egg content regions extracted this way can be seen on fig. 9 (g).
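A minimal sketch of this content ellipse extraction, assuming a Python/scikit-image implementation, is given below; the 95% and 80% axis factors follow the text, while the helper name and the use of the regionprops ellipse parameters are assumptions of the sketch.

```python
# Illustrative sketch of step (1) of the feature extraction phase: deriving the
# blob, body, and content ellipses and masking out the egg content region.
import numpy as np
from skimage.measure import regionprops

def content_ellipse_mask(blob):
    props = regionprops(blob.astype(int))[0]
    r0, c0 = props.centroid
    theta = props.orientation                      # angle of the major axis vs. the row axis
    # Blob ellipse semi-axes (same normalized second central moments as the blob)
    a = props.major_axis_length / 2.0
    b = props.minor_axis_length / 2.0
    # Body ellipse: major axis shortened to 95%; content ellipse: both axes at 80%
    a_content = 0.80 * 0.95 * a
    b_content = 0.80 * b
    rows, cols = np.indices(blob.shape)
    dr, dc = rows - r0, cols - c0
    # Coordinates along the major and minor axes of the rotated ellipse
    p_major = dr * np.cos(theta) + dc * np.sin(theta)
    p_minor = -dr * np.sin(theta) + dc * np.cos(theta)
    return (p_major / a_content) ** 2 + (p_minor / b_content) ** 2 <= 1.0

# Usage sketch: content_pixels = roi[content_ellipse_mask(blob)]
```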

(2) Compute one or more features below at one or more scales

One or more features are computed for each egg based on the extracted egg content region. Each feature is computed at a chosen scale that depends on the size of the details and structures that the feature is meant to measure. Besides choosing the scale directly and resizing the image to this scale, there are several ways to represent and work with images at multiple scales; three important ones are scale space representation [18], image pyramid representation [19], and multi-resolution analysis [20].

(3) Measure longitudinal/transverse linear structures and/or edge structures

When a scale has been selected, the direction-dependent linear structures and/or edge structures of the egg contents are measured at this scale. The measurements are carried out according to the longitudinal and transverse directions of the egg, either by measuring them in-place at the egg's original orientation or by aligning the egg with the horizontal and vertical image axes via rotation, as illustrated on figure 6.

Below is presented a range of examples of methods for constructing features based on measurements of the longitudinal and transverse structures of the egg contents. The idea behind this is to construct features based on the measurements of the direction-dependent structures and use them in the classification phase. The underlying hypothesis is that eggs containing visible larvae have more prominent longitudinal structures than transverse structures due to the way larvae are positioned inside the eggs if present. Fully developed larvae exhibit this more strongly than partly developed larvae (see figure 15).

The measurement of longitudinal and transverse structures can sometimes be simplified by aligning the longitudinal and transverse axes of the egg with the image coordinate system as described above. If the aligned versions are used in the feature extraction, it is recommended to align the subimages before extracting the egg content region.

Example 1: Features based on spatial autocorrelation

The idea behind these features is to measure the longitudinal and transverse structures of the egg contents using spatial autocorrelation of the egg contents in the longitudinal and transverse directions. The longitudinal and transverse spatial autocorrelation coefficients of the egg contents are computed in the following way: The egg subimages are aligned with the image axes and the egg content region is extracted as described above. The extracted egg content region is then downscaled to a resolution of approximately 1.4 pixels per micrometer for eggs around 60 μm in length.

The resulting downscaled egg content image, denoted I, will form the basis of the spatial autocorrelation computations, which are explained in the following.

Three copies are made of I, called I1, I2, and I3. From I1, the last row and the last column of pixels are discarded (cropped away). For I2, the first row and the last column of pixels are discarded. For I3, the last row and the first column of pixels are discarded. This is illustrated in figure 10 (top).

These three images are now all of the same dimensions, which are equal to the height of I minus one and the width of I minus one. All of the three images contain the egg content region, although the region has shifted one pixel upwards on image I2 compared to I1, and one pixel to the left on image I3 compared to I1.

An image region called Ω is now computed. It is defined to be the intersection of the three egg content regions, i.e. all pixel locations (i,j) where I1(i,j), I2(i,j), and I3(i,j) all contain a pixel from the egg content region, as illustrated in figure 10 (middle). This way, the image region Ω covers exactly the locations where the three egg content regions overlap. The set of pixels in I1 that Ω covers is called A, and similarly for I2 with B, and I3 with C, as illustrated in figure 10 (bottom). The longitudinal autocorrelation coefficient is now computed as the correlation between A and B, and the transverse autocorrelation coefficient is now computed as the correlation between A and C. The formula for this is explained in figure 11.

The longitudinal and transverse autocorrelation coefficients can be used directly as two separate features in the classification, or they can be combined into a single feature, for instance the ratio between the two. The ratio between the two, defined as the longitudinal autocorrelation coefficient divided by the transverse autocorrelation coefficient, is hereby defined as the 'longitudinal anisotropy'. A high longitudinal anisotropy corresponds to a relatively larger longitudinal autocorrelation coefficient, which indicates that the longitudinal, linear structures of the egg contents are more prevalent than the transverse, linear structures of the egg contents.
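A minimal sketch of the computation of the two autocorrelation coefficients and the longitudinal anisotropy is given below, where the correlation of figure 11 is taken to be the ordinary Pearson correlation coefficient (here via np.corrcoef); the function and variable names are illustrative assumptions.

```python
# Illustrative sketch of the spatial autocorrelation features (Example 1).
# I is the downscaled, axis-aligned egg content image; mask marks the egg
# content region within it.
import numpy as np

def longitudinal_anisotropy(I, mask):
    # I1: drop last row and column; I2: drop first row and last column (shift up);
    # I3: drop last row and first column (shift left)
    I1, m1 = I[:-1, :-1], mask[:-1, :-1]
    I2, m2 = I[1:, :-1], mask[1:, :-1]
    I3, m3 = I[:-1, 1:], mask[:-1, 1:]
    omega = m1 & m2 & m3                      # Ω: intersection of the three content regions
    A, B, C = I1[omega], I2[omega], I3[omega]
    rho_long = np.corrcoef(A, B)[0, 1]        # longitudinal autocorrelation coefficient
    rho_trans = np.corrcoef(A, C)[0, 1]       # transverse autocorrelation coefficient
    return rho_long, rho_trans, rho_long / rho_trans
```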

Examples of the use of these spatial autocorrelation-based features for classification are presented in the Classification section.

Example 2: Features based on edge detection

The egg subimages are aligned with the image axes and the egg content region is extracted as described above. The extracted egg content region is then downscaled to a resolution of approximately 2.8 pixels per micrometer for eggs around 60 μm in length.

The Canny edge detector [21] is applied to the downscaled egg content region in order to locate and measure the prevalent edges of the egg contents. The standard deviation of the Gaussian filter is set to 1, and the high and low sensitivity thresholds are set to 0.15 and 0.05, respectively, although an automatic, heuristic determination of these could be used as well. The intermediate horizontal and vertical filter responses of the edge detector are used to compute the orientation of the detected prevalent edges. This is done using the default formula as seen in equation (10.2-11) of [22]. A possible quantification of the measured edge structures is to compute a 'longitudinal edge count' and a 'transverse edge count' as defined in the following. The 'longitudinal edge count' is defined to be the number of edge pixels of the egg content region that are oriented primarily 'north' or 'south'. Similarly, the 'transverse edge count' is defined to be the number of edge pixels that are oriented primarily 'east' or 'west'. Being oriented primarily in one direction is here defined to mean being oriented in that direction plus/minus a margin of 10 to 45 degrees, for instance 22.5 degrees, as illustrated in figure 12.

The longitudinal and transverse edge counts can be used directly as two separate features in the classification and/or they can be combined into a single feature, for instance the ratio between the two. It is suggested to use either the ratio as a single feature or use the two edge counts as two separate features.
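The edge count computation could be sketched as follows. The Canny parameters follow the text; estimating the edge orientation from Sobel gradients (rather than from the detector's intermediate filter responses) and the helper name are assumptions of this sketch.

```python
# Illustrative sketch of the edge-detection features (Example 2): counting
# edge pixels oriented primarily north/south (longitudinal) or east/west
# (transverse) in the downscaled, axis-aligned egg content image.
import numpy as np
from scipy.ndimage import sobel
from skimage.feature import canny

def edge_counts(content, margin_deg=22.5):
    edges = canny(content, sigma=1.0, low_threshold=0.05, high_threshold=0.15)
    gr = sobel(content, axis=0)               # gradient along rows
    gc = sobel(content, axis=1)               # gradient along columns
    # Edge direction is perpendicular to the gradient direction, folded into [0, 180)
    orientation = (np.degrees(np.arctan2(gr, gc)) + 90.0) % 180.0
    longitudinal = edges & (np.abs(orientation - 90.0) <= margin_deg)   # roughly north/south
    transverse = edges & ((orientation <= margin_deg) |
                          (orientation >= 180.0 - margin_deg))          # roughly east/west
    return int(longitudinal.sum()), int(transverse.sum())
```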

Examples of the use of these edge detection based features for classification are presented in the Classification section.

Examples of darkfield-based features

As a complement or an alternative to the above-described features, the light-scattering behavior/properties of the egg internals under darkfield illumination [10] can be measured and quantified in one or more ways and used as features in the classification.

The underlying idea is that the internal structures of eggs that do not contain a larva are different from those of the eggs that do contain a larva. The internal structures of the first group of eggs are generally not as coarse as those of the second group of eggs, and correspondingly seem to scatter the darkfield illumination to a higher degree. A quantification of the internal scattering can therefore be used to distinguish between the two groups. One possible way of quantifying the internal scattering under darkfield illumination is by first extracting the egg content region from the darkfield image (as opposed to extracting it from the brightfield image as in the above-presented examples) and then computing statistics of the extracted pixel intensities. One suitable statistic would be the mean scattering intensity, i.e. the mean of the extracted pixel intensities, or the median scattering intensity, but also other statistics like the standard deviation or other image moments or order statistics could be used, including a weighted average or a contrast measure. An example of the use of a darkfield-based feature for classification is presented in the Classification section.
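A minimal sketch of such darkfield statistics, with illustrative names, could be:

```python
# Illustrative sketch of darkfield-based features: simple statistics of the
# pixel intensities inside the darkfield egg content region.
import numpy as np

def darkfield_scattering_features(darkfield_roi, content_mask):
    values = darkfield_roi[content_mask]
    return {
        "mean_scattering_intensity": float(np.mean(values)),
        "median_scattering_intensity": float(np.median(values)),
        "scattering_std": float(np.std(values)),   # an optional spread measure
    }
```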

Figure 4: Classification phase

The classification of Trichuris eggs is a statistical classification problem, also known as a supervised learning problem. The classification problem can be either binary, where each egg is classified as either containing a larva or not, or multiclass, where the developmental stage of each egg is to be determined. The developmental stages are defined in the Definitions section. The classification results can be used as an indicator of the biological potency of the egg suspension.

The presented classification phase is illustrated in figure 4. It consists of two parts: a model construction part and an actual classification part.

The model construction part is used to build a classification model, which is later used for classifying the eggs. For the model construction, it uses an annotated dataset consisting of a feature matrix and an annotation vector. The feature matrix consists of a number of feature vectors that each correspond to an egg. Each feature vector is a row vector containing one or more features for the given egg. The annotation vector is a column vector with one value for each egg, namely the manually determined class of that egg. The manual annotation is performed by an experienced technician who is accustomed to classifying the eggs based on microscopic inspection. The classification model is built on all or a subset of the features in the feature matrix. The number of features influences the choice of classification method. There exists a plethora of classification methods and algorithms, from k-nearest neighbor classification via linear and quadratic classifiers to decision trees, support vector machines and neural networks, just to name some of the common approaches. The choice of algorithm is not important in this context, so a simple threshold or a linear discriminant analysis is used in the later examples.

In the actual classification part of the classification phase, the constructed classification model is applied to feature matrices of new sets of eggs with unknown classes. The result of a classification is a vector with an assigned class for each of the eggs. Along with results from the previous phases, the classification results are presented in a report of the analysis results. This report can include listings and images of the detected objects and their assigned categories, as well as feature scores, assigned classes and possibly a measure of the class assignment certainty.
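The following is a minimal sketch of the two parts, using linear discriminant analysis as the illustrative classifier choice mentioned above; scikit-learn and all names below are assumptions for illustration, not a prescribed implementation. The feature matrix has one row per egg, and the annotation vector holds the manually determined class of each egg.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_classification_model(feature_matrix: np.ndarray, annotations: np.ndarray):
    # priors=None lets the classifier estimate empirical prior probabilities
    # from the class proportions in the annotated dataset.
    model = LinearDiscriminantAnalysis(priors=None)
    model.fit(feature_matrix, annotations)
    return model

def classify_new_eggs(model, new_feature_matrix: np.ndarray) -> np.ndarray:
    # One assigned class per egg; model.predict_proba(new_feature_matrix)
    # could serve as a measure of the class assignment certainty.
    return model.predict(new_feature_matrix)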

Below are given some examples of classification based on some of the features that were introduced and explained in the previous section.

Classification example 1: Classification based on the Canny features

The presented Canny edge detection based feature resulted in two quantities: the longitudinal edge count and the transverse edge count. Besides using them separately as features, their ratio could be used as a feature as well.

Let the 'edge count ratio' be defined as the longitudinal edge count divided by the transverse edge count. A possible classification based on this feature alone is to use a single threshold value as classifier, for instance the value 1.8. All eggs with an edge count ratio above 1.8 are classified as containing a larva and the remaining eggs are classified as not containing a larva. A graph of the edge count ratios of 100 eggs, presented in descending order, is seen in figure 13(a). Notice that for a threshold of 1.8, only a single egg out of the 100 is misclassified based on this edge count ratio feature alone. Another way to classify the eggs is to use the longitudinal and transverse edge counts as two separate features as mentioned. A linear discriminant analysis with empirical prior probabilities could be used for this task. As seen in figure 13(b), this classification is also able to correctly classify all eggs except one.
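A minimal sketch of the single-threshold classifier on the edge count ratio, assuming arrays with one longitudinal and one transverse edge count per egg (the array names are illustrative; the 1.8 threshold follows the example above):

import numpy as np

def classify_by_edge_count_ratio(longitudinal_counts, transverse_counts, threshold=1.8):
    ratio = np.asarray(longitudinal_counts, dtype=float) / np.asarray(transverse_counts, dtype=float)
    # True -> classified as containing a larva, False -> not containing a larva.
    return ratio > threshold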

Classification example 2: Classification based on longitudinal anisotropy

In the previous section the longitudinal anisotropy was introduced as the ratio between the longitudinal spatial autocorrelation coefficient and the transverse spatial autocorrelation coefficient of the egg content region. Similar to the edge count features above, this feature could be used for a one-dimensional classification, or the individual correlation coefficients could be used as separate features. Figure 14(a) shows a classification based on the individual correlation coefficients. It is also possible to combine any of the above features with other features, for instance the darkfield-based mean scattering intensity, as described earlier. An example of a two-dimensional classification based on the longitudinal anisotropy and the mean scattering intensity for another dataset can be seen in figure 14(b).

Figure 17: Physical system setup

Figure 17 illustrates one possible embodiment of the physical system setup. A Trichuris spp. egg suspension sample is placed in a sample container, and a combination of light sources is directed at the sample. A camera (for instance a CCD or CMOS camera) acquires photos of the sample through an objective, possibly a microscope objective, to obtain the desired resolution. An extension tube or similar is used to make sure that the sample is in focus. The acquired digital images are transferred from the camera to a computer, where the image analysis takes place.

REFERENCES

[1] R. W. Summers, D. E. Elliott, J. F. Urban, R. Thompson, and J. V. Weinstock, "Trichuris suis therapy in Crohn's disease.," Gut, vol. 54, no. 1, pp. 87-90, Jan. 2005.

[2] R. W. Summers, D. E. Elliott, J. F. Urban, R. A. Thompson, and J. V. Weinstock, "Trichuris suis therapy for active ulcerative colitis: A randomized controlled trial," Gastroenterology, vol. 128, no. 4, pp. 825-832, Apr. 2005.

[3] A. Reddy and B. Fried, "The use of Trichuris suis and other helminth therapies to treat Crohn's disease.," Parasitology research, vol. 100, no. 5, pp. 921-7, Apr. 2007.

[4] Y. S. Yang, D. K. Park, H. C. Kim, M. H. Choi, and J. Y. Chai, "Automatic identification of human helminth eggs on microscopic fecal specimens using digital image processing and an artificial neural network.," IEEE transactions on bio-medical engineering, vol. 48, no. 6, pp. 718-30, Jun. 2001.

[5] D. Avci and A. Varol, "An expert diagnosis system for classification of human parasite eggs based on multi-class SVM," Expert Systems with Applications, vol. 36, no. 1, pp. 43-48, Jan. 2009.

[6] E. Dogantekin, M. Yilmaz, A. Dogantekin, E. Avci, and A. Sengur, "A robust technique based on invariant moments - ANFIS for recognition of human parasite eggs in microscopic images," Expert Systems with Applications, vol. 35, no. 3, pp. 728-738, Oct. 2008.

[7] D. Thienpont, F. Rochette, and O. F. J. Vanparijs, Diagnosing helminthiasis by coprological examination. 1986.

[8] L. S. Roberts, J. Janovy Jr., and G. D. Schmidt, Foundations of Parasitology, Seventh edition. 2005.

[9] J. S. Pittman, G. Shepherd, B. J. Thacker, and G. H. Myers, "Trichuris suis in finishing pigs - Case report and review," Journal of swine health and production, vol. 18, no. 6, pp. 306-313, 2010.

[10] M. Abramowitz and M. W. Davidson, "Darkfield Illumination | Olympus Microscopy Resource Center," 2012. [Online]. Available: http://www.olympusmicro.com/primer/techniques/dark. [Accessed: 26-Apr-2012].

[11] R. J. S. Beer, "Morphological descriptions of the egg and larval stages of Trichuris suis Schrank, 1788," Parasitology, vol. 67, pp. 263-278, Dec. 1973.

[12] M. I. Black, P. V. Scarpino, C. J. O'Donnell, K. B. Meyer, J. V. Jones, and E. S. Kaneshiro, "Survival rates of parasite eggs in sludge during aerobic and anaerobic digestion," Applied and environmental microbiology, vol. 44, no. 5, pp. 1138-43, Nov. 1982.

[13] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[14] M. B. Dillencourt, H. Samet, and M. Tamminen, "Automatic Clump Splitting for Cell Quantification in Microscopical Images," vol. 39, no. 2, pp. 253-280, 1992.

[15] MathWorks, "MATLAB regionprops," 2012. [Online]. Available: http://www.mathworks.se/help/toolbox/images/ref/regionprops.html. [Accessed: 26-Apr-2012].

[16] H. Berge, D. Taylor, S. Krishnan, and T. S. Douglas, MRC/UCT Medical Imaging Research Unit, Department of Human Biology, and Division of Clinical Pharmacology, University of Cape Town, South Africa, pp. 204-207, 2011.

[17] G. Diaz, F. Gonzalez, and E. Romero, "Automatic Clump Splitting for Cell Quantification in Microscopical Images," in CIARP '07: Proceedings of the 12th Iberoamerican Congress on Progress in Pattern Recognition, Image Analysis and Applications, 2007, pp. 1-10.

[18] A. P. Witkin, "Scale-space filtering: A new approach to multi-scale description," in Acoustics, Speech, and Signal Processing, IEEE International Conference on (ICASSP '84), vol. 9, pp. 150-153, 1984.

[19] P. Burt and E. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532-540, Apr. 1983.

[20] S. G. Mallat, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.

[21] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, Jun. 1986.

[22] R. C. Gonzalez and R. E. Woods, Digital Image Processing, International Edition. 2008.