Title:
A METHOD FOR MEDICAL SCREENING AND A SYSTEM THEREFOR
Document Type and Number:
WIPO Patent Application WO/2016/189469
Kind Code:
A1
Abstract:
Embodiments of the present disclosure provide a method and a portable system for screening for cervical cancer at the point of sample collection. In one embodiment, a sample collected from a user is disposed on a slide and stained with suitable reagents. The stained sample on the slide is then placed in the system, which captures one or more images of the stained sample. The captured images are processed to classify them into one or more cell types and stages of cancer. Based on the classification, one or more reports are generated and forwarded to experts such as cytotechnologists or cyto-pathologists for further analysis. Thus, the present disclosure enables the interpretation and screening of samples at the point-of-sample-collection centre and avoids the delay involved in forwarding the sample to a centralized facility for expert analysis and in sending the interpretation report back to the point of sample collection.

Inventors:
NATARAJAN ADARSH (IN)
K K HARINARAYANAN (IN)
SAO ANIL KUMAR (IN)
BHAVSAR ARNAV (IN)
GAUTAM SRISHTI (IN)
GUPTA KRATI (IN)
Application Number:
PCT/IB2016/053052
Publication Date:
December 01, 2016
Filing Date:
May 25, 2016
Assignee:
NATARAJAN ADARSH (IN)
International Classes:
A61B10/00; G01N33/574
Foreign References:
EP2685881A2 (2014-01-22)
US20090076368A1 (2009-03-19)
US20140199722A1 (2014-07-17)
Attorney, Agent or Firm:
MANICKAM, Devi, Sundaramurthy et al. (Intellectual Property Attorneys, Ivy Terrace, First Floor, East Wing, Plot no. 119, Kavuri Hills, Madhapur, Hyderabad Telangana 1, IN)
Claims:
The Claim:

1. A method of screening for cervical cancer, comprising steps of:

receiving a sample on one or more slides used for collecting sample;

staining the received sample, using a sample stainer, with one or more reagents to obtain a stained sample;

capturing, using an image acquisition unit, one or more set of images of the stained sample illuminated at a plurality of angular degrees by positioning the one or more slides at one or more positions;

segmenting, by an image processing unit, the one or more set of images thus captured into one or more cell regions;

classifying, by the image processing unit, the one or more determined cell regions into one or more cell types based on one or more features extracted from the one or more cell regions; and

determining, by the image processing unit, a confidence score indicating probability of the one or more cell regions likely to be cervical cancerous regions and determining one or more stages of cancer of the one or more cell regions based on the one or more cell type and the confidence score thus determined.

2. The method as claimed in claim 1, further comprising:

generating one or more reports including information associated with the one or more cell regions, the cell type and the confidence score, one or more recommendations of follow-up test procedures to be conducted and analysis opinion provided by a cytologist based on the cell type and the confidence score thus determined; and

updating one or more data models with the one or more set of images of the sample, one or more cell regions, the cell type and the confidence score corresponding to the one or more cell regions and the machine learning models and relevant parameters associated with the sample obtained during the image processing.

3. The method as claimed in claim 2, wherein the one or more data models and the machine learning models are created based on the one or more cell regions obtained from one or more sample collected from users, comprising the one or more sample images, the corresponding cell type, the confidence score, the one or more recommendations of follow-up test procedures to be conducted and the analysis opinion of a cytologist.

4. The method as claimed in claim 1, wherein the step of capturing one or more set of images, comprising:

controlling one or more positions of a stage holding the sample;

illuminating the stage comprising the stained sample at a plurality of angular degrees;

capturing one or more set of images of the stained sample arranged at the one or more set positions at each of the plurality of angular degrees;

generating a complete image by combining one or more set of images captured at one position at each of the plurality of angular degrees; and

storing the one or more set of captured images and the complete image.

5. The method as claimed in claim 1, wherein the step of segmenting the one or more set of images to determine the one or more cell regions comprising:

receiving the one or more set of images captured at the one or more positions;

pre-processing the one or more set of received images to obtain a normalized complete image of the stained sample;

obtaining one or more first clusters of pixels from the complete image based on colour and location information of the one or more pixels of the complete image;

extracting one or more first features of the complete image based on one or more predetermined texture filters; and

grouping the one or more pixels of the complete image into the one or more regions of interest based on a predetermined threshold value, the one or more first clusters of pixels and one or more extracted first features of the complete image.

6. The method as claimed in claim 4, wherein the step of segmenting the one or more set of images further comprising the steps of:

identifying one or more seed regions of the sample image and one or more pixels associated with the one or more seed regions;

obtaining one or more second clusters of pixels associated with the one or more identified seed regions and extracting one or more second features from the one or more identified seed regions;

grouping the one or more pixels associated with the one or more seed regions to obtain one or more second region of interest based on the one or more second clusters of pixels, and one or more second features extracted from the one or more identified seed regions;

determining one or more stable regions associated with the one or more identified seed regions based on one or more threshold values and stability of the one or more identified seed region determined in response to one or more threshold values; and

determining the one or more cell regions on the sample based on the one or more second region of interest and the one or more stable regions associated with the one or more identified seed regions of the sample image.

7. The method as claimed in claim 1, wherein the step of classifying the one or more cell regions into the one or more cell types comprising the steps of:

pre-processing the one or more cell regions to obtain one or more normalized cell regions;

determining one or more features, including colour, shape, dimension and texture information associated with the one or more normalized cell region;

calculating one or more contextual cues related to the one or more set of image of the received sample; and

determining the one or more cell types corresponding to the one or more cell regions based on the one or more determined features and the one or more contextual cues related to the one or more set of image of the received sample.

8. A system for medical screening for cervical cancer, comprising:

one or more slides used for collecting sample;

a sample stainer coupled with the one or more slides, configured to stain the received sample with one or more reagents to obtain a stained sample;

an image acquisition unit configured to capture one or more set of images of the stained sample illuminated at a plurality of angular degrees by positioning the one or more slides at one or more positions;

a data repository coupled with the image sensor, configured to store one or more set of images thus captured; and

an image processing unit coupled with the image acquisition unit, comprising:

a processor; and

a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to:

segment the one or more set of images into one or more cell regions;

classify the one or more cell regions into one or more cell types based on one or more features extracted from the one or more cell regions;

determine a confidence score indicating probability of the one or more cell regions likely to be cervical cancerous regions; and

determine one or more stages of cancer of the one or more cell regions based on the one or more cell type and the confidence score thus determined.

9. The system as claimed in claim 8, wherein the processor is further configured to perform one or more steps comprising:

generating one or more reports including information associated with the one or more cell regions, the cell type and the confidence score, one or more recommendations of follow-up test procedures to be conducted and analysis opinion provided by a cytologist based on the cell type and the confidence score thus determined; and

updating one or more data models, stored in a data repository coupled with the image processing unit, and the machine learning models and relevant parameters associated with the sample obtained during the image processing with the one or more set of images of the sample, one or more cell regions, the cell type and the confidence score corresponding to the one or more cell regions.

10. The system as claimed in claim 9, wherein the processor is configured to create the one or more data models based on the one or more cell regions obtained from one or more sample collected from users, the corresponding cell type, the confidence score, the one or more recommendations of follow-up test procedures to be conducted and relevant parameters and the analysis opinion of a cytologist.

11. The system as claimed in claim 8, wherein the processor is configured to segment the one or more set of images by performing the steps of:

receiving the one or more set of images captured at the one or more positions;

pre-processing the one or more set of received images to obtain a normalized complete image of the stained sample;

obtaining one or more first clusters of pixels from the complete image based on colour and location information of the one or more pixels of the complete image;

extracting one or more first features of the complete image based on one or more predetermined texture filters; and

grouping the one or more pixels of the complete image into the one or more first regions of interest based on a predetermined threshold value, the one or more first clusters of pixels and one or more extracted first features of the complete image.

12. The system as claimed in claim 8, wherein the processor is configured to segment the one or more set of images by further performing the steps of:

identifying one or more seed regions of the sample image and one or more pixels associated with the one or more seed regions;

obtaining one or more second clusters of pixels associated with the one or more identified seed regions and extracting one or more second features from the one or more identified seed regions;

grouping the one or more pixels associated with the one or more seed regions to obtain one or more second region of interest based on the one or more second clusters of pixels, and one or more second features extracted from the one or more identified seed regions;

determining one or more stable regions associated with the one or more identified seed regions based on one or more threshold values and stability of the one or more identified seed region determined in response to one or more threshold values; and

determining the one or more cell regions associated with the sample based on the one or more second region of interest and the one or more stable regions associated with the one or more identified seed regions of the sample image.

13. The system as claimed in claim 8, wherein the processor is configured to classify the one or more cell regions into the one or more cell types comprising the steps of:

pre-processing the one or more cell regions to obtain one or more normalized cell regions;

determining one or more features, including colour, shape, dimension and texture information, associated with the one or more normalized cell regions;

calculating one or more contextual cues related to the one or more set of image of the received sample; and

determining the one or more cell types corresponding to the one or more cell regions based on the one or more determined features and the one or more contextual cues related to the one or more set of image of the received sample.

14. The system as claimed in claim 8, wherein the image acquisition unit comprises:

an illumination source capable of emitting illumination light signals on a stained sample at a plurality of angular degrees;

an image sensor configured to capture one or more set of images of the stained sample illuminated at a plurality of angular degrees by positioning the one or more slides at one or more positions;

a slide position controlling unit coupled with the image sensor, configured to control the one or more positions of the slide; and

a processor; and

a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to:

set the slide at the one or more positions;

illuminate the stained sample with illumination light signals at the plurality of angular degrees;

capture the one or more images of the stained sample arranged at the one or more set positions at each of the plurality of illuminating angular degrees;

generate a complete image by combining one or more set of images captured at one position at each of the plurality of angular degrees; and

store the one or more set of captured images and the complete image.

15. The system as claimed in claim 14, further comprises a microscope coupled with the image sensor, capable of varying the illumination light signals passing through the sample slide.

16. The system as claimed in claim 14, wherein the slide position controlling unit comprises:

a stage for holding the stained sample;

an actuating assembly, comprising one or more actuators comprising at least one stepper motor and one or more guiding rods coupled with the at least one stepper motor and the stage, configured to enable movement of the stage in one or more of X-axis, Y-axis and Z-axis directions;

a processor;

a stage control unit coupled with the processor, the stage and the actuating assembly, configured to control the actuators and the movement of the stage;

one or more position sensors, coupled with the actuating assembly, configured to provide information related to current position of the one or more actuators; and

an auto focus module coupled with stage control unit and the one or more position sensors, configured to generate a feedback signal indicative of modification in the current position of the one or more actuators.

17. The system as claimed in claim 14, wherein the processor is configured to receive the one or more captured images of the stained sample with low resolution and convert into one or more corresponding images of high resolution.

Description:
A METHOD FOR MEDICAL SCREENING AND A SYSTEM THEREFOR

TECHNICAL FIELD

The present subject matter relates to medical screening in general, and more particularly, but not exclusively, to a method and a system for screening for cervical cancer at the point of sample collection.

BACKGROUND

Cervical cancer is one of the deadliest cancers affecting women worldwide. Cervical cancer screening is commonly based on cytological and colposcopy analyses. The generally accepted gold standard in screening for cervical cancer is the Pap smear test (Papanicolaou test or Pap test). The Pap smear detects cellular abnormalities and thus the development of potentially pre-cancerous lesions. In a Pap smear, the collected cervical cells are placed on a glass slide, stained and examined by a specially trained and qualified cytotechnologist using a light microscope. Even though this test has led to a reduction in the incidence of and mortality caused by cervical cancer, it is a subjective analysis with several known disadvantages, such as an increase in false negatives and equivocal results as a consequence of debris obscuring abnormal cells. There may also be a delay in analysing the samples and providing an interpretation in geographical areas where there is poor infrastructure. This delay not only discourages women from testing but also makes follow-up very difficult.

Hence, there exists a need for a method and a system for screening for cervical cancer that improve the effectiveness of screening at the point of sample collection and reduce the delay in delivering the screening results.

SUMMARY

One or more shortcomings of the prior art are overcome and additional advantages are provided through the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.

Embodiments of the present disclosure relate to a method of screening for cervical cancer. The method comprises the steps of receiving a sample on one or more slides used for collecting the sample and staining the received sample using one or more reagents. The method further comprises the step of capturing one or more set of images of the stained sample positioned at one or more positions and illuminated at a plurality of angular degrees. Upon capturing the one or more set of images of the stained sample, the method performs segmentation into one or more cell regions and further classifies the one or more determined cell regions into one or more cell types based on one or more features extracted from the cell regions. Further, the method comprises the steps of determining a confidence score and determining one or more stages of cancer of the one or more cell regions based on the confidence score and the one or more cell types thus determined.

In another aspect, the present disclosure relates to a system for screening for cervical cancer of a user. The system comprises one or more slides for collecting a sample from the user and a sample stainer coupled with the slides. The sample stainer is configured to automatically stain the received sample with one or more reagents to obtain a stained sample. The medical screening system further comprises an image acquisition unit configured to capture one or more set of images of the stained sample illuminated at a plurality of angular degrees by positioning the one or more slides at one or more positions. Further, the system comprises a data repository for storing the one or more set of images thus captured. The system also comprises an image processing unit comprising a processor and a memory communicatively coupled to the processor. The processor is configured to segment the one or more set of images into one or more cell regions and classify the one or more cell regions into one or more cell types based on one or more features extracted from the one or more cell regions. Further, the processor is configured to determine a confidence score indicating the probability of the one or more cell regions being cervical cancerous regions and to determine one or more stages of cancer of the one or more cell regions based on the one or more cell types and the confidence score thus determined.

The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

The novel features and characteristics of the disclosure are explained herein. The embodiments of the disclosure itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings. One or more embodiments are now described, by way of example only, with reference to the accompanying drawing in which:

Figure 1 illustrates exemplary architecture of a system for screening for cervical cancer in accordance with some embodiments of the present disclosure;

Figure 2 illustrates a block diagram of a medical screening system of Figure 1 in accordance with some embodiments of the present disclosure;

Figures 3a and 3b illustrate exemplary isometric and side views of an image acquisition device in accordance with one embodiment of the present disclosure;

Figure 4 illustrates an exemplary image acquisition device in accordance with another embodiment of the present disclosure;

Figure 5 illustrates a flowchart of a method of screening for cervical cancer in accordance with some embodiments of the present disclosure; and

Figures 6a - 6m illustrate exemplary images of a sample obtained during the segmentation stage of image processing in accordance with some embodiments of the present disclosure.

The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.

DETAILED DESCRIPTION

The foregoing has broadly outlined the features and technical advantages of the present disclosure in order that the detailed description of the disclosure that follows may be better understood. Additional features and advantages of the disclosure will be described hereinafter. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objectives and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

The present disclosure relates to a method and a system for screening for cervical cancer at the point of sample collection. In one embodiment, a sample collected from a user is deposited on a slide and stained with suitable reagents. The stained sample on the slide is then placed in the system, which captures one or more images of the stained sample. The captured images are then processed to classify them into one or more images of normal and abnormal samples. Based on the classification, one or more reports are generated and forwarded to experts such as cytotechnologists or cyto-pathologists for further analysis. Thus, the present disclosure enables the interpretation and screening of samples at the point-of-sample-collection centre and avoids the delay involved in forwarding the sample to a centralized facility for expert analysis and in sending the interpretation report back to the point of sample collection.

Figure 1 illustrates a block diagram of a system (100) for screening for cervical cancer in accordance with some embodiments of the present disclosure.

In one embodiment, the system (100) comprises at least one Image Acquisition & Processing Device (IAD) (102), a sample stainer (104), a screening server (106) and one or more user devices (108-1), (108-2), ... (108-N) (collectively referred to as user device 108) of one or more users coupled via the network (110). The users may be experts such as cytotechnologists or cyto-pathologists, and the user devices (108) may be, for example, mobile devices, generally portable computers or computing devices including the functionality for communicating over the network (110). For example, the mobile device can be a mobile phone, a tablet computer, a laptop computer, a handheld game console, or any other suitable portable device. The network (110) may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.

In one embodiment, a patient visits a health worker or a clinician who obtains a sample of, for example, cervical cells using known standard methods. The health worker or the clinician then fixes the sample on a medium such as a slide using known laboratory techniques that enable slide preparation. Slide preparation is a process in which the slide is uniformly processed and fixed, followed by a process called cover slipping. A slide is prepared using known fixation techniques of the medium, such as, for example, cyto fix, and is then subjected to a staining process. For example, the slide is prepared by depositing proportionate and representative cells from the sample onto the slide, and the slide is then subjected to the staining process. The sample stainer (104) comprises a slide holder for holding the slide comprising the sample and is configured to stain the sample with one or more reagents to distinguish the cells from one another using staining techniques. The sample stainer (104) may be a portable sample stainer and may be directly coupled with the IAD (102). In another embodiment, the sample stainer (104) may be integrated within the IAD (102). The stained sample is then subjected to a screening process by the IAD (102) to evaluate potential abnormalities in the sample and identify cells or cell clusters that require further review by experts such as cytotechnologists or cyto-pathologists.

The IAD (102) may be a portable system capable of communicating with the sample stainer (104), the screening server (106) and the user devices (108) via the network (110). In one embodiment, the IAD (102) includes a central processing unit ("CPU" or "processor") (112), a memory (114) and an I/O interface (not shown). The I/O interface is coupled with the processor (112) and an I/O device. The I/O device is configured to receive inputs via the I/O interface and transmit outputs for display in the I/O device via the I/O interface. The IAD (102) further includes one or more modules comprising at least an image acquisition module (116) and an image processing module (118) that comprises a segmentation module (120) and a classification module (122). The image acquisition module (116) is configured to acquire one or more set of images of the stained sample, illuminated by an illumination source at a plurality of angular degrees, by positioning the stained sample at one or more positions. Upon capturing, the segmentation module (120) segments the one or more set of images of the stained sample into one or more cell regions for further classification. The classification module (122) classifies the one or more cell regions into one or more cell types based on one or more features extracted from the one or more cell regions. Upon classification, the image processing module (118) determines a confidence score indicating the probability of the one or more cell regions being cervical cancerous regions. Based on the one or more cell types and the confidence score, the image processing module (118) determines one or more stages of cancer of the one or more cell regions. The image processing module (118) further enables report generation based on the determination of the stages of cancerous cells from the cell regions.
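For orientation only, the following is a minimal Python sketch of how the acquire-segment-classify-score flow performed by the modules (116), (118), (120) and (122) could be organised in software. The names (CellRegion, screen_sample) and the trivial stub implementations are illustrative assumptions, not identifiers or algorithms disclosed in the application; the stubs stand in for the segmentation and classification techniques detailed later.

```python
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class CellRegion:
    """A candidate cell region produced by the segmentation module (120)."""
    mask: np.ndarray            # boolean mask over the stitched sample image
    cell_type: str = "unknown"
    confidence: float = 0.0     # probability of being a cervical cancerous region
    stage: str = "unclassified"


def stitch_and_normalize(images: List[np.ndarray]) -> np.ndarray:
    # Stand-in for the image stitching module (224): concatenate the tiles
    # side by side and rescale intensities to [0, 1].
    mosaic = np.concatenate(images, axis=1).astype(np.float64)
    return (mosaic - mosaic.min()) / (np.ptp(mosaic) + 1e-9)


def segment_cells(image: np.ndarray) -> List[CellRegion]:
    # Stand-in for the segmentation module (120): treat dark pixels as one region.
    return [CellRegion(mask=image < 0.5)]


def classify_region(image: np.ndarray, region: CellRegion) -> Tuple[str, float]:
    # Stand-in for the classification module (122): darker regions score higher.
    score = float(1.0 - image[region.mask].mean()) if region.mask.any() else 0.0
    return ("abnormal" if score > 0.5 else "normal"), score


def stage_from(cell_type: str, confidence: float) -> str:
    # Illustrative mapping of cell type and confidence score to a reporting category.
    if cell_type == "normal":
        return "NILM"
    return "HSIL-suspect" if confidence > 0.8 else "LSIL-suspect"


def screen_sample(images: List[np.ndarray]) -> List[CellRegion]:
    """End-to-end flow: stitch, segment, classify and score one set of images."""
    complete = stitch_and_normalize(images)
    regions = segment_cells(complete)
    for region in regions:
        region.cell_type, region.confidence = classify_region(complete, region)
        region.stage = stage_from(region.cell_type, region.confidence)
    return regions


if __name__ == "__main__":
    tiles = [np.random.rand(64, 64) for _ in range(3)]   # stand-ins for captured tiles
    for r in screen_sample(tiles):
        print(r.cell_type, round(r.confidence, 2), r.stage)
```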

In one embodiment, the screening server (106) is configured to generate the report based on the determination of the stages of cancerous cells by the image processing module (118). The screening server (106) comprises a data repository (124) configured to store the one or more set of images thus captured by the image acquisition module (116). The data repository (124) is also configured to store one or more training datasets comprising training images of one or more cell types and one or more stages of cancer, one or more data models, generated reports, and the machine learning models and relevant parameters associated with the sample obtained during the image processing. In one aspect, the screening server (106) comprises a report generation module (126) and a learning module (128). In another embodiment, the screening server (106) comprises the image processing module (118). The report generation module (126) is configured to generate one or more reports comprising information associated with the one or more cell regions, the cell type and the confidence score, one or more recommendations of follow-up test procedures to be conducted, and the analysis opinion provided by a cytologist or by the image processing module in the server based on the cell type and the confidence score thus determined. The screening server (106) also enables transmission of the one or more generated reports to the user device (108) and to point-of-care centres. In one example, the screening server (106) transmits one or more notifications to the user device (108) via email, API, web interface, SMS or other channels of information to users regarding scheduling the next visit, reports, and system status/updates.

Upon report generation, the learning module (128) is configured to update one or more data models previously created and stored in the data repository (124) of the screening server (106). In one embodiment, the learning module (128) is configured to create the one or more data models comprising the one or more set of images of the stained sample, one or more cell regions, the cell type and the confidence score associated with a particular sample collected from a particular user, along with relevant parameters and features that are obtained and observed during the image processing. Further, the learning module (128) is configured to update the one or more data models, and the machine learning models and relevant parameters associated with the sample obtained during the image processing, with the cell type and the confidence score associated with a recently collected sample and any inputs received from the user device (108), including the opinion or annotation regarding the confirmation of the cell analysis.
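As a rough illustration of the kind of update the learning module (128) might perform, the sketch below folds a newly annotated cell region back into a stored model by incremental refitting. The feature layout, the label convention and the use of scikit-learn's SGDClassifier with partial_fit are assumptions made for illustration; the application does not specify a particular learning algorithm.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])              # 0 = normal, 1 = abnormal (illustrative labels)


def update_model(model, feature_vector, cytologist_label):
    """Fold one annotated cell region back into the machine learning model."""
    X = np.asarray(feature_vector, dtype=np.float64).reshape(1, -1)
    y = np.asarray([cytologist_label])
    if hasattr(model, "classes_"):      # classes are only declared on the first call
        model.partial_fit(X, y)
    else:
        model.partial_fit(X, y, classes=CLASSES)
    return model


if __name__ == "__main__":
    model = SGDClassifier(random_state=0)
    rng = np.random.default_rng(0)
    # Simulated feature vectors for previously reviewed regions (colour, shape, texture ...).
    for _ in range(20):
        features = rng.random(8)
        label = int(features[0] > 0.5)  # stand-in for the cytologist's opinion
        update_model(model, features, label)
    print("updated model coefficients:", np.round(model.coef_.ravel()[:4], 3))
```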

As illustrated in Figure 2, the IAD (102) comprises the processor (112), the memory (114), an I/O Interface (202), data (204) and modules (206). In one implementation, the data (204) and the modules (206) may be stored within the memory (114). In one example, the data (204) may include one or more sample images (208), one or more cell regions (210), one or more cell types (212), confidence score (214), results of analysis & reports (216) and other data (218). In one embodiment, the data (204) may be stored in the memory (114) in the form of various data structures. Additionally, the aforementioned data can be organized using data models, such as relational or hierarchical data models. The other data (218) may also be referred to as a reference repository for storing recommended implementation approaches as reference data. The other data (218) may also store data, including temporary data and temporary files, generated by the modules (206) for performing the various functions of the IAD (102). The other data (218) also stores the machine learning models and relevant parameters and features extracted during image processing, along with data models downloaded from the screening server (106).

The modules (206) may include, for example, the image acquisition module (116) comprising a slide positioning module (220) and an image capturing module (222), the image processing module (118), and an image stitching module (224). The modules (206) may also comprise other modules (226) to perform various miscellaneous functionalities of the IAD (102). It will be appreciated that such aforementioned modules may be represented as a single module or a combination of different modules. The modules (206) may be implemented in the form of software, hardware and/or firmware.

In operation, the IAD (102) receives one or more stained samples from the sample stainer (104) and acquires one or more set of images (208) of the received stained sample. In one embodiment, the image acquisition module (116) acquires the one or more set of images of the stained sample by positioning the stained sample at one or more positions. The slide positioning module (220) is configured to control the position of the slide having the stained sample, and also control the visual representation of the stained samples by increasing or decreasing the magnification levels of the image of the stained sample. An exemplary IAD (102) is illustrated in Figures 3a and 3b.

As shown, the IAD (102) comprises a user interface (302), an image sensor (304), an optics assembly (306), a slide position control unit (308), a display (310), a power source (312) and an illumination source (314). The image sensor (304) enables capturing of the photons that are transmitted from the illumination source (314) and pass across the sample. The illumination source (314) provides the necessary light levels, ranging from low light to bright light, which are automatically controlled based on the required type of image construction. The position of the illumination source (314) can be controlled by pan as well as tilt movements, which provide increased angular positional degrees. The illumination source can be, for example, an LED matrix, a laser matrix, an LED lamp or a halogen lamp. The optics assembly (306) comprises a magnifying digital microscope that magnifies the optical light passing through the sample slide at powers of 2x, 4x, 10x, 20x, 40x and 100x, i.e., ranging, for example, from 2 to 100 times. The digital microscope is configured to control the visual representation of the samples by increasing or decreasing the magnification levels of the image of the samples.

The slide positioning control unit (308) may be a motorized stage for positioning the sample slide in a given x-y and z plane. In one embodiment, the slide positioning control unit (308) comprises a stage for holding the stained sample, and an actuating assembly coupled with the stage and configured to enable movement of the stage in one or more of the X-axis, Y-axis and Z-axis directions. The actuating assembly comprises one or more actuators comprising at least one stepper motor and one or more guiding rods coupled with the at least one stepper motor and the stage. The positions are selected so as to have an overlap between the images captured at each position. By being positioned at multiple positions, the stage enables capture of the image of the whole sample. The slide positioning control unit (308) comprises a stage control unit coupled with the stage and the actuating assembly, configured to control the actuators and the movement of the stage. The slide positioning control unit (308) also comprises one or more position sensors, coupled with the actuating assembly, configured to provide information related to the current position of the one or more actuators, and an auto focus module coupled with the stage control unit and the one or more position sensors, configured to generate a feedback signal indicative of a modification in the current position of the one or more actuators.

In another embodiment, as illustrated in Figure 4, the IAD (102) comprises a user interface (402), a slide position control unit (404), an emergency/power stop switch (406), and a power source (410). The IAD (102) is externally coupled with an image sensor (412) and an optics assembly (414) for controlling the visual representation of the samples and capturing the one or more set of images of the stained sample by positioning the stained sample at one or more positions. The image capturing module (222) enables the image sensor (304, 412) to capture one or more images of the stained sample deposited on the slide, i.e., at least one of two-dimensional or three-dimensional images of the stained sample, and store the captured images in the memory (114). In one embodiment, if the one or more captured images are determined to have low resolution, then the image capturing module (222) converts the one or more captured images of the stained sample with low resolution into one or more corresponding images of high resolution and stores them in the memory (114) of the IAD (102) and the data repository (124) of the screening server (106). In another embodiment, the captured images of high resolution may be stored in the memory (114) of the IAD (102) and the data repository (124) of the screening server (106). The captured images are then processed by the image processing module (118).
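To make the overlapping-field scanning performed by the slide positioning control unit concrete, here is a small sketch that computes overlapping X-Y stage positions covering a slide area and converts them to stepper-motor step counts. The slide dimensions, field-of-view size, overlap fraction and steps-per-millimetre value are assumed example numbers, not parameters taken from the application.

```python
from typing import List, Tuple


def scan_positions(slide_w_mm: float, slide_h_mm: float,
                   fov_w_mm: float, fov_h_mm: float,
                   overlap: float = 0.2) -> List[Tuple[float, float]]:
    """X-Y stage positions (mm) chosen so neighbouring fields of view overlap."""
    step_x = fov_w_mm * (1.0 - overlap)
    step_y = fov_h_mm * (1.0 - overlap)
    positions = []
    y = 0.0
    while y + fov_h_mm <= slide_h_mm + 1e-9:
        x = 0.0
        while x + fov_w_mm <= slide_w_mm + 1e-9:
            positions.append((x, y))
            x += step_x
        y += step_y
    return positions


def to_stepper_steps(pos_mm: Tuple[float, float], steps_per_mm: int = 800) -> Tuple[int, int]:
    """Convert a stage position in millimetres to stepper-motor step counts."""
    return round(pos_mm[0] * steps_per_mm), round(pos_mm[1] * steps_per_mm)


if __name__ == "__main__":
    # Example: a 20 mm x 15 mm smear area imaged with a 2 mm x 1.5 mm field of view.
    grid = scan_positions(20.0, 15.0, 2.0, 1.5, overlap=0.2)
    print(len(grid), "fields of view, first three:", grid[:3])
    print("stepper steps for first position:", to_stepper_steps(grid[0]))
```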

In one embodiment, the image processing module (118) processes the received captured images and classifies the received images into one or more cell types and one or more stages of cancer cells as per the global standards for cancerous stages. In one implementation, the images thus captured may be pre-processed by the IAD (102) before segmentation and classification into cell types and stages. The image processing module (118) performs one or more pre-processing techniques on the received captured images, including, but not limited to, histogram equalization and contrast normalization, followed by image stitching. In one embodiment, the image stitching module (224) is configured to combine the one or more set of images taken at the one or more positions into a whole image. Upon combining, the image stitching module (224) also performs normalization to obtain a normalized image of the sample. The normalized image is then segmented into one or more sub-images using segmentation techniques.
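A minimal sketch of the stitching and normalisation step, assuming the tiles are captured on a regular grid with a known pixel overlap so they can be pasted directly into a mosaic (feature-based registration would be needed when the offsets are not known exactly), and using CLAHE as one possible contrast-normalisation choice. The tile sizes and overlap are illustrative.

```python
import cv2
import numpy as np


def stitch_grid(tiles, rows, cols, overlap_px):
    """Paste grid-ordered grayscale tiles into one mosaic using known offsets."""
    th, tw = tiles[0].shape
    step_y, step_x = th - overlap_px, tw - overlap_px
    mosaic = np.zeros((step_y * (rows - 1) + th, step_x * (cols - 1) + tw), np.uint8)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)
        mosaic[r * step_y:r * step_y + th, c * step_x:c * step_x + tw] = tile
    return mosaic


def normalize(image):
    """Contrast-normalize the stitched image with CLAHE (adaptive histogram equalization)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(image)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 2 x 3 grid of 128 x 128 tiles standing in for captured images.
    tiles = [rng.integers(0, 255, (128, 128), dtype=np.uint8) for _ in range(6)]
    complete = normalize(stitch_grid(tiles, rows=2, cols=3, overlap_px=16))
    print("complete image shape:", complete.shape)
```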

In accordance with a first embodiment, the segmentation module (120) is configured to obtain one or more first clusters of pixels from the complete image based on colour and location information of the one or more pixels of the complete image. Further, the segmentation module (120) is configured to extract one or more first features of the complete image based on one or more predetermined texture filters. In another embodiment, the segmentation module (120) is configured to extract the one or more first features using known texture filters. Upon obtaining the one or more first clusters of pixels and extraction of the one or more first features of the complete image, the segmentation module (120) groups the one or more pixels of the complete image into the one or more regions of interest based on a predetermined threshold value, the one or more first clusters of pixels and one or more extracted first features of the complete image. In another embodiment, the segmentation module (120) groups the one or more pixels of the complete image into the one or more first regions of interest using global thresholding techniques. Upon generating the one or more first regions of interest, the segmentation module (120) is configured to merge the one or more first regions of interest based on one or more features such as size, colour and so on. In another embodiment, the segmentation module (120) is configured to merge the one or more first regions of interest thus obtained using region merging techniques. Upon merging, the segmentation module (120) is configured to determine one or more seed regions of the sample image and one or more pixels associated with the one or more seed regions.
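The first-stage segmentation described above can be sketched as follows: pixels of the complete image are clustered on colour plus location, a small texture filter bank is applied, and a global (Otsu) threshold groups pixels into candidate regions of interest. The choice of k-means, the Lab colour space, Gabor filters as the "predetermined texture filters" and the synthetic test image are assumptions made for illustration.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import gabor, threshold_otsu
from sklearn.cluster import KMeans


def first_stage_segmentation(rgb, n_clusters=4):
    """Cluster pixels on (colour, location), add texture responses, then threshold."""
    h, w, _ = rgb.shape
    lab = rgb2lab(rgb)
    yy, xx = np.mgrid[0:h, 0:w]
    # One feature vector per pixel: L, a, b plus normalised row/column position.
    feats = np.column_stack([lab.reshape(-1, 3), (yy / h).ravel(), (xx / w).ravel()])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    clusters = km.fit_predict(feats).reshape(h, w)
    # A small texture filter bank (Gabor responses at two orientations).
    texture = np.dstack([np.abs(gabor(lab[..., 0], frequency=0.2, theta=t)[0])
                         for t in (0.0, np.pi / 2)])
    # Global (Otsu) threshold on lightness groups dark, stained pixels into ROIs.
    roi = lab[..., 0] < threshold_otsu(lab[..., 0])
    return clusters, texture, roi


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.full((96, 96, 3), 0.9)              # bright background
    img[30:60, 30:60] = [0.4, 0.2, 0.5]          # a darker, stained "cell"
    img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)
    clusters, texture, roi = first_stage_segmentation(img)
    print("clusters:", np.unique(clusters), "| ROI pixels:", int(roi.sum()))
```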

In one embodiment, the segmentation module (120) is configured to perform adaptive thresholding to identify one or more neighbouring regions associated with the one or more regions of interest and compare them with the identified neighbouring regions to determine the level of darkness or texture of the one or more first regions of interest. Based on the comparison, the segmentation module (120) determines the one or more seed regions where the expected cell may be present. For each of the one or more seed regions thus determined, the segmentation module (120) obtains one or more second clusters of pixels associated with the one or more identified seed regions and extracts one or more second features from the one or more identified seed regions. In another embodiment, the segmentation module (120) performs local texture clustering on each of the one or more seed regions to determine the one or more second clusters of pixels. Upon determination, the segmentation module (120) groups the one or more pixels associated with the one or more seed regions to obtain one or more second regions of interest based on the one or more second clusters of pixels and one or more second features extracted from the one or more identified seed regions. Further, the segmentation module (120) determines the one or more stable regions associated with the one or more identified seed regions based on one or more threshold values and the stability of the one or more identified seed regions determined in response to the one or more threshold values. The segmentation module (120) further determines the one or more cell regions on the sample based on the one or more second regions of interest and the one or more stable regions associated with the one or more identified seed regions of the sample image. In another embodiment, the segmentation module (120) is configured to determine the one or more cell regions by using graph cut techniques or conditional random field techniques.

Upon segmentation, the classification module (122) pre-processes the one or more cell regions to obtain one or more normalized cell regions. In one embodiment, the classification module (122) performs pre-processing of the one or more cell regions, including normalization, histogram equalization and contrast enhancement steps, to obtain the one or more normalized cell regions and determines one or more features associated with the one or more normalized cell regions. The one or more features thus extracted may include colour, shape, dimension and texture information of the one or more normalized cell regions. The classification module (122) is then configured to calculate one or more contextual cues related to the one or more set of images of the received sample. The contextual cues may include any feature of the acquired image that cannot be considered to be part of a single cell or multiple adjacent cells. The contextual cues may include, for example, image colour, image clutter, background colour and neutrophil size/count/distribution. Upon calculating the one or more contextual cues, the classification module (122) determines the cell type of the one or more cell regions.
In one embodiment, the classification module (122) determines the one or more cell types corresponding to the one or more cell regions based on the one or more determined features, the one or more contextual cues related to the one or more set of images of the received sample, and the one or more machine learning models comprising relevant parameters and features and the opinion received from a cytologist via the user device (108). Upon determination of the one or more cell types, the classification module (122) determines a confidence score indicating the probability of the one or more cell regions being cervical cancerous regions according to global standards and further determines one or more stages of cancer based on the one or more cell types, the confidence score, and the one or more machine learning models comprising relevant parameters and features and the opinion received from a cytologist.
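A hedged sketch of the classification and scoring step: hand-crafted features for each cell region are fed to a classifier whose predicted probability serves as the confidence score, and a simple lookup maps cell type and score to a reporting category. The random forest, the synthetic training data and the Bethesda-style category names are illustrative assumptions, not the models or standards fixed by the application.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CELL_TYPES = ["normal", "abnormal"]        # illustrative two-class setup


def stage_from(cell_type, confidence):
    """Map cell type + confidence score to an illustrative reporting category."""
    if cell_type == "normal":
        return "NILM"
    return "HSIL-suspect" if confidence >= 0.8 else "LSIL-suspect"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic feature vectors (e.g. mean colour, area, eccentricity, texture energy ...).
    X_train = rng.random((200, 6))
    y_train = (X_train[:, 0] + X_train[:, 3] > 1.0).astype(int)   # toy labelling rule
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    region_features = rng.random((3, 6))               # three segmented cell regions
    probs = clf.predict_proba(region_features)[:, 1]   # P(abnormal) = confidence score
    for p in probs:
        cell_type = CELL_TYPES[int(p >= 0.5)]
        print(cell_type, round(float(p), 2), stage_from(cell_type, p))
```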

In accordance with a second embodiment, the segmentation module (120) is configured to employ deep neural network techniques and one or more conditional random field techniques and one or more data models, machine learning models and relevant parameters associated with the sample obtained during the image processing to segment the one or more set of images into one or more cell regions. Upon segmentation, the classification module (122) is configured to deploy neural network techniques to classify the one or more cell regions into one or more cell types based on one or more features extracted from the one or more cell regions and determine one or more stages of the cancer as per the global standards based on the one or more cell type and the confidence score thus determined.
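For the second embodiment, a minimal fully convolutional network sketch in PyTorch that produces a per-pixel cell probability map. It is untrained and drastically simplified relative to any model the IAD would actually use; the layer sizes are arbitrary, and the CRF or graph-cut refinement is only indicated in a comment.

```python
import torch
import torch.nn as nn


class TinySegNet(nn.Module):
    """A toy fully convolutional network: image in, per-pixel cell probability out."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),             # one logit per pixel
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))               # probability of "cell region"


if __name__ == "__main__":
    net = TinySegNet()
    image = torch.rand(1, 3, 128, 128)                   # a stand-in RGB tile
    with torch.no_grad():
        prob_map = net(image)
    # Thresholding the probability map yields candidate cell regions;
    # a CRF or graph-cut step could then refine the region boundaries.
    print(prob_map.shape, float(prob_map.mean()))
```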

Upon classification, the results of the classification and one or more reports are generated and transmitted to the screening server (106) for further processing. In one embodiment, the classification results are stored in the data repository (124) and the report generation module (126) generates one or more reports including information associated with the one or more classified cell regions. In one example, the report generation module (126) is configured to generate one or more reports including information associated with the one or more cell regions, the cell type and the confidence score, one or more recommendations of follow-up test procedures to be conducted, and the analysis opinion provided by a cytologist based on the cell type and the confidence score thus determined. Further, the learning module is configured to update the one or more data models and the machine learning models and relevant parameters of the other data (218) with the one or more set of images of the sample, the one or more cell regions, the cell type and the confidence score corresponding to the one or more cell regions thus determined after report generation and annotation by the cytologist. Thus, the present disclosure enables the interpretation and screening of samples at the point-of-sample-collection centre and avoids the delay involved in forwarding the sample to a centralized facility for expert analysis and in sending the interpretation report back to the point of sample collection.

Figure 5 illustrates a flowchart of a method for screening for cervical cancer of a patient in accordance with an embodiment of the present disclosure.

As illustrated in Figure 5, the method (500) comprises one or more blocks implemented by the IAD (102) for screening cervical cancer of a patient. The method (500) may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.

The order in which the method (500) is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method (500). Additionally, individual blocks may be deleted from the method (500) without departing from the spirit and scope of the subject matter described herein. Furthermore, the method (500) can be implemented in any suitable hardware, software, firmware, or combination thereof.

At block (502), receive sample slide and stain the sample. In one embodiment, a sample obtained from a patient is collected on a slide and stained with one or more reagents to distinguish the cells from one another. The sample stainer (104) comprises a sample holder for holding the slide comprising the sample and is configured to stain the sample.

At block (504), capture image of the stained sample. In one embodiment, the image acquisition module (116) acquires the one or more set of images of the stained sample as illustrated in Figure 6a, by positioning the stained sample at one or more positions. The slide positioning module (220) is configured to control the position of the slide having the stained sample, and also control the visual representation of the stained samples by increasing or decreasing the magnification levels of the image of the stained sample.

In one embodiment, if the one or more captured images are determined to have low resolution, then the image capturing module (222) converts the one or more captured images of the stained sample with low resolution into one or more corresponding images of high resolution and stores them in the data repository (124) of the screening server (106). In another embodiment, the captured images of high resolution may be stored in the data repository (124) of the screening server (106). The captured images are then processed by the image processing module (118).

At block (506), segment the set of images into cell regions. The image processing module (118) processes the received captured images of Figure 6a and classifies the received images into one or more cell types and one or more stages of cancer based on the cell type classification as per the global standards. In one implementation, the images thus captured may be pre-processed by the IAD (102) before segmentation and classification into cell types and stages. The image processing module (118) performs one or more pre-processing techniques on the received captured images, including, but not limited to, histogram equalization and contrast normalization, followed by image stitching. In one embodiment, the image stitching module (224) is configured to receive the one or more captured images of the stained sample with low resolution and convert them into one or more corresponding images of high resolution. In one embodiment, the image stitching module (224) is configured to combine the one or more set of images taken at the one or more positions into a whole image. Upon combining, the image stitching module (224) also performs normalization to obtain a normalized image of the sample. The normalized image is then segmented into one or more sub-images using segmentation techniques, as illustrated in Figure 6b.
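The low-resolution to high-resolution conversion mentioned above is sketched below using plain bicubic interpolation purely as a stand-in; an actual implementation would more likely use a learned super-resolution model. The scale factor and image size are arbitrary.

```python
import cv2
import numpy as np


def upscale(image, factor=4):
    """Upscale a low-resolution capture. Bicubic interpolation is used here only as a
    stand-in for the low-to-high resolution conversion step."""
    h, w = image.shape[:2]
    return cv2.resize(image, (w * factor, h * factor), interpolation=cv2.INTER_CUBIC)


if __name__ == "__main__":
    low_res = np.random.default_rng(0).integers(0, 255, (128, 128), dtype=np.uint8)
    high_res = upscale(low_res, factor=4)
    print("low:", low_res.shape, "-> high:", high_res.shape)
```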

In accordance with a first embodiment, the segmentation module (120) is configured to obtain one or more first clusters of pixels from the complete image based on colour and location information of the one or more pixels of the complete image, as illustrated in Figure 6c. Further, the segmentation module (120) is configured to extract one or more first features of the complete image based on one or more predetermined texture filters as illustrated in Figure 6d. In another embodiment, the segmentation module (120) is configured to extract the one or more first features using known texture filters. Upon obtaining the one or more first clusters of pixels and extraction of the one or more first features of the complete image, the segmentation module (120) groups the one or more pixels of the complete image into the one or more regions of interest as illustrated in Figure 6e, based on a predetermined threshold value, the one or more first clusters of pixels and one or more extracted first features of the complete image. In another embodiment, the segmentation module (120) groups the one or more pixels of the complete image into the one or more first regions of interest using global thresholding techniques. Upon generating the one or more first regions of interest, the segmentation module (120) is configured to merge the one or more first regions of interest based on one or more features such as size, colour and so on as illustrated in Figure 6f. In another embodiment, the segmentation module (120) is configured to merge the one or more first regions of interest thus obtained using region merging techniques. Upon merging, the segmentation module (120) is configured to determine one or more seed regions of the sample image and one or more pixels associated with the one or more seed regions.

In one embodiment, the segmentation module (120) is configured to perform adaptive thresholding, as illustrated in Figure 6g, to identify one or more neighbouring regions associated with the one or more regions of interest and compare them with the identified neighbouring regions to determine the level of darkness/texture features of the one or more first regions of interest. Based on the comparison, the segmentation module (120) determines the one or more seed regions where the expected cell may be present, as illustrated in Figure 6h. For each of the one or more seed regions thus determined, the segmentation module (120) obtains one or more second clusters of pixels associated with the one or more identified seed regions and extracts one or more second features from the one or more identified seed regions. In another embodiment, the segmentation module (120) performs local texture clustering on each of the one or more seed regions to determine the one or more second clusters of pixels, as illustrated in Figure 6i. Upon determination, the segmentation module (120) groups the one or more pixels associated with the one or more seed regions to obtain one or more second regions of interest based on the one or more second clusters of pixels and one or more second features extracted from the one or more identified seed regions.
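One way to realise the adaptive-thresholding step that locates seed regions, sketched with OpenCV's adaptive threshold followed by connected-component analysis; the block size, offset constant and minimum-area value are assumed example parameters. The subsequent local texture clustering of each seed region would follow the same pattern as the clustering sketch shown earlier.

```python
import cv2
import numpy as np


def find_seed_regions(gray, block_size=31, offset=5, min_area=50):
    """Adaptive threshold + connected components -> candidate seed regions."""
    binary = cv2.adaptiveThreshold(gray, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV,   # dark (stained) pixels = foreground
                                   block_size, offset)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    seeds = []
    for i in range(1, n):                                   # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            seeds.append({"label": i,
                          "area": int(stats[i, cv2.CC_STAT_AREA]),
                          "centroid": tuple(centroids[i])})
    return labels, seeds


if __name__ == "__main__":
    img = np.full((120, 120), 200, np.uint8)                # bright background
    cv2.circle(img, (40, 40), 12, 90, -1)                   # two dark "nuclei"
    cv2.circle(img, (85, 70), 9, 80, -1)
    _, seeds = find_seed_regions(img)
    print("seed regions found:", len(seeds), seeds)
```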

Further, the segmentation module (120) determines the one or more stable regions, as illustrated in Figure 6j, associated with the one or more identified seed regions based on one or more threshold values and the stability of the one or more identified seed regions determined in response to the one or more threshold values. The segmentation module (120) further determines the one or more cell regions, as illustrated in Figure 6k, indicative of cervical cancerous cells on the sample based on the one or more second regions of interest and the one or more stable regions associated with the one or more identified seed regions of the sample image. In another embodiment, the segmentation module (120) is configured to determine the one or more cell regions by using graph cut techniques or conditional random field techniques, as illustrated in Figures 6k, 6l and 6m. Upon segmentation, the classification module (122) pre-processes the one or more cell regions to obtain one or more normalized cell regions.
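The graph-cut refinement mentioned above can be illustrated with OpenCV's GrabCut seeded from a coarse cell mask. Treating GrabCut as the graph-cut technique, and the particular mask labelling used below, are assumptions made so that the example runs end to end.

```python
import cv2
import numpy as np


def refine_with_graph_cut(bgr, seed_mask, iterations=5):
    """Refine a coarse cell mask with GrabCut (a graph-cut based segmenter)."""
    mask = np.full(bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)   # probable background everywhere
    mask[seed_mask] = cv2.GC_PR_FGD                          # probable cell pixels
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = cv2.GC_BGD  # sure background border
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))


if __name__ == "__main__":
    img = np.full((120, 120, 3), 220, np.uint8)              # bright background
    cv2.circle(img, (60, 60), 25, (120, 60, 140), -1)        # a stained "cell"
    coarse = np.zeros((120, 120), bool)
    coarse[40:80, 40:80] = True                              # rough square seed region
    refined = refine_with_graph_cut(img, coarse)
    print("refined cell pixels:", int(refined.sum()))
```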

At block (508), classify the cell regions into cell types. In one implementation, the classification module (122) performs pre-processing of the one or more cell regions, including normalization, histogram equalization and contrast enhancement steps, to obtain the one or more normalized cell regions and determines one or more features associated with the one or more normalized cell regions. The one or more features thus extracted may include colour, shape, dimension and texture information of the one or more normalized cell regions. The classification module (122) is then configured to calculate one or more contextual cues related to the one or more set of images of the received sample. The contextual cues may include any feature of the acquired image that cannot be considered to be part of a single cell or multiple adjacent cells. The contextual cues may include, for example, image colour, image clutter, background colour and neutrophil size/count/distribution. Upon calculating the one or more contextual cues, the classification module (122) determines the cell type of the one or more cell regions. In one embodiment, the classification module (122) determines the one or more cell types corresponding to the one or more cell regions based on the one or more determined features, the one or more contextual cues related to the one or more set of images of the received sample, and the one or more machine learning models comprising relevant parameters and features and the opinion received from a cytologist via the user device (108). Upon determination of the one or more cell types, the classification module (122) determines a confidence score indicating the probability of the one or more cell regions being cervical cancerous regions and further determines the stages of cancer of the one or more cell regions based on the one or more cell types, the confidence score, and the one or more machine learning models comprising relevant parameters and features and the opinion received from a cytologist.

In accordance with a second embodiment, the segmentation module (120) is configured to employ deep neural network techniques, one or more conditional random field techniques and one or more data models to segment the one or more set of images into one or more cell regions. Upon segmentation, the classification module (122) is configured to deploy neural network techniques to classify the one or more cell regions into one or more cell types based on one or more features extracted from the one or more cell regions and determine one or more stages of the cancer as per the global standards based on the one or more cell types and the confidence score thus determined.

At block (510), generate report. In one embodiment, the report generation module (126) generates one or more reports including information associated with the one or more classified cell regions. In one example, the report generation module (126) is configured to generate one or more reports including information associated with the one or more cell regions, the cell type and the confidence score, one or more recommendations of follow-up test procedures to be conducted, and the analysis opinion provided by a cytologist based on the cell type and the confidence score thus determined. Further, the learning module is configured to update the one or more data models and the machine learning models and relevant parameters of the other data (218) with the one or more set of images of the sample, the one or more cell regions, the cell type and the confidence score corresponding to the one or more cell regions thus determined after report generation and annotation by the cytologist.
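The report assembled at block (510) might look like the following sketch, which gathers the per-region results, a recommendation and the cytologist's opinion (when available) into a JSON document; the layout and field names are purely illustrative.

```python
import json
from datetime import date


def build_report(sample_id, regions, cytologist_opinion=None):
    """Assemble a screening report from classified cell regions."""
    worst = max(regions, key=lambda r: r["confidence"], default=None)
    recommendation = ("Refer for colposcopy / follow-up testing"
                      if worst and worst["confidence"] >= 0.5
                      else "Routine screening interval")
    return {
        "sample_id": sample_id,
        "date": date.today().isoformat(),
        "regions": regions,                        # cell regions with type and confidence score
        "recommendation": recommendation,
        "cytologist_opinion": cytologist_opinion,  # filled in after expert review
    }


if __name__ == "__main__":
    regions = [
        {"region_id": 1, "cell_type": "normal", "confidence": 0.12},
        {"region_id": 2, "cell_type": "abnormal", "confidence": 0.87},
    ]
    print(json.dumps(build_report("S-0001", regions), indent=2))
```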

Thus, the present disclosure enables the interpretation and screening of samples at the point-of-sample-collection centre and avoids the delay involved in forwarding the sample to a centralized facility for expert analysis and in sending the interpretation report back to the point of sample collection.

Advantages of the Present Invention

The present invention relates to a portable, lightweight, easy-to-carry medical screening device for cervical cancer used at point-of-care centres. The medical screening device also enables quick delivery of reports to users, with accurate analysis and opinion from the experts. The medical screening device is also capable of continuously learning from previous analyses and updating its analysis techniques based on the results of those analyses and processes. The device can be operated by an unskilled resource with minimal training to operate the Image Acquisition Device and Slide Stainer. The whole arrangement decentralizes the screening process, thereby optimizing the infrastructure while providing a point-of-care diagnostic routine. The system aims to realize a tele-pathology system.

As described above, the modules, amongst other things, include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The modules may also be implemented as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulate signals based on operational instructions. Further, the modules can be implemented by one or more hardware components, by computer-readable instructions executed by a processing unit, or by a combination thereof.

The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.

Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., to be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, CD-ROMs, DVDs, flash drives, disks, and any other known physical storage media.

Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for the sake of clarity. While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art.