Patent Searching and Data


Title:
SYSTEM AND METHOD FOR AUTOMATED TEST RESULT DIAGNOSTICS
Document Type and Number:
WIPO Patent Application WO/2022/016022
Kind Code:
A1
Abstract:
A diagnostic server and method for conducting analytics on results displayed on an image received from an image capturing device. According to one embodiment of the disclosure, the diagnostic server features a processor and a non-transitory storage medium. The storage medium includes image conditioning logic and result analytics logic. When executed, the image conditioning logic is configured to, after receipt of an image including a test cassette image, determine a test cassette type in response to identifying that the test cassette image is captured within the received image, extract a test strip image being a portion of the test cassette image, and enhance virtual indicia associated with results produced from an assay of the test cassette positioned within the test strip area. The result analytics logic is configured to conduct analytics of the enhanced virtual indicia to determine and confirm the results produced from the assay of the test cassette.

Inventors:
CHANG DONG SIK (US)
INDORIA SAURABH (IN)
Application Number:
PCT/US2021/041904
Publication Date:
January 20, 2022
Filing Date:
July 15, 2021
Assignee:
KAHALA BIOSCIENCES LLC (US)
International Classes:
B01L9/00; G01N21/77; G01N33/53; G06T7/00; G06T7/10; G06T7/70
Foreign References:
US20190073763A1 (2019-03-07)
US20190027251A1 (2019-01-24)
US20180004910A1 (2018-01-04)
US20160381265A1 (2016-12-29)
US20070161103A1 (2007-07-12)
Attorney, Agent or Firm:
SCHAAL, William (US)
Claims:
CLAIMS

What is claimed is:

1. A diagnostic server for conducting analytics on results displayed on an image received from an image capturing device, the diagnostic server comprising:
a processor; and
a non-transitory storage medium including
image conditioning logic configured to (i) receive an image including a test cassette image, (ii) determine a test cassette type in response to identifying the test cassette image is captured within the received image, (iii) extract a test strip image being a portion of the test cassette image, and (iv) enhance virtual indicia associated with results produced from an assay of the test cassette positioned within the test strip area, and
result analytics logic configured to conduct analytics of the enhanced virtual indicia to determine and confirm the results produced from the assay of the test cassette.

2. The diagnostic server of claim 1 being in communication with an image capture software module installed within the image capturing device to conduct preliminary analytics on the image captured by the image capture software module to determine whether the test cassette image is captured within the image prior to transmission of the image to the diagnostic server.

3. The diagnostic server of claim 1, wherein the image conditioning logic comprises image alignment logic that, when executed by the processor, compares features of the test cassette image to features of one or more reference images of one or more known test cassettes to determine the test cassette type.

4. The diagnostic server of claim 3, wherein the image alignment logic is further configured, when executed by the processor, to extract the test cassette image from the received image prior to comparison of the features of the test cassette image to features of the one or more reference images.

5. The diagnostic server of claim 4, wherein the features of the test cassette image comprise a particular grouping of pixels associated with a color or shape displayed at a prescribed location on the test cassette image illustrating a top surface of the test cassette.

6. The diagnostic server of claim 4, wherein the image alignment logic, when executed by the processor, is further configured to orient the test strip image for extraction from the test cassette image by image cropping logic of the image conditioning logic.

7. The diagnostic server of claim 1, wherein the image conditioning logic comprises greyscale conversion and separation logic that, when executed by the processor, alters a pixel content within the test strip image from color pixel values to greyscale pixel values to generate the enhanced virtual indicia, being the virtual indicia represented as greyscale pixel values.

8. The diagnostic server of claim 7, wherein the image conditioning logic further comprises binary image conversion logic to convert the content within the test strip image with greyscale pixel values into one or more corresponding binary images by at least conducting a blurring operation to avoid distortion caused by pixel noise that could be experienced during conversion from content with greyscale pixel values into the one or more binary images.

9. A diagnostic server for conducting analytics on results displayed on an image, comprising:
a processor; and
a non-transitory storage medium to store image conditioning logic and result analytics logic,
wherein the image conditioning logic comprises
image alignment logic to compare features of a test cassette image, being a portion of an image of a top surface of a test cassette that includes a well structure sized to receive a body fluid sample and a test strip area that allows for a display of visual indicia representative of results produced from an assay exposed to the body fluid sample,
image cropping logic to crop the test cassette image to recover a test strip image being a displayable representation of the test strip area,
greyscale conversion and separation logic to alter color pixel values for pixel content associated with the test strip area, including the visual indicia, into greyscale pixel values, and
binary image conversion logic to convert the content associated with the test strip area, including the visual indicia represented as greyscale pixel values, into one or more corresponding binary images by at least conducting a blurring operation to avoid distortion caused by pixel noise that could be experienced during conversion from content with greyscale pixel values into the one or more binary images, and
wherein the result analytics logic is configured to conduct analytics of the one or more binary images to determine and confirm the results produced from the assay of the test cassette.

10. The diagnostic server of claim 9, wherein the result analytics logic includes pixel analytic logic.

11. A computerized method for conducting analytics on results displayed on an image received from an image capturing device, the method comprising:
receiving an image including a test cassette image;
determining a test cassette type in response to identifying the test cassette image is captured within the received image;
extracting a test strip image being a portion of the test cassette image;
enhancing virtual indicia associated with results produced from an assay of the test cassette positioned within the test strip area; and
conducting analytics of the enhanced virtual indicia to determine and confirm the results produced from the assay of the test cassette.

12. The computerized method of claim 11, wherein, prior to receiving the image, the method further comprises: conducting preliminary analytics on the image to determine that the test cassette image is captured within the image prior to transmission to the diagnostic server.

13. The computerized method of claim 11, wherein the determining of the test cassette type comprises comparing features of the test cassette image to features of one or more reference images of one or more known test cassettes.

14. The computerized method of claim 13, wherein the extracting of the test cassette image from the received image is conducted prior to comparing the features of the test cassette image to features of the one or more reference images.

15. The computerized method of claim 14, wherein the features of the test cassette image comprise a particular grouping of pixels associated with a color or shape displayed at a prescribed location on the test cassette image illustrating a top surface of the test cassette.

16. The computerized method of claim 14, wherein the enhancing of the virtual indicia comprises altering a pixel content within the test strip image from color pixel values to greyscale pixel values.

17. The computerized method of claim 16, wherein the enhancing of the virtual indicia further comprises converting the pixel content within the test strip image having greyscale pixel values into one or more corresponding binary images by at least conducting a blurring operation to avoid distortion caused by pixel noise that could be experienced during the converting of the pixel content from the greyscale pixel values into the one or more binary images.

Description:
SYSTEM AND METHOD FOR AUTOMATED TEST RESULT DIAGNOSTICS

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application is based upon and claims the benefit of priority from U.S. Provisional Patent Application No. 63/053525 filed July 17, 2020, the entire contents of which are incorporated herein by reference.

BACKGROUND

[0002] All countries, including the United States, are grappling with a pandemic that has caused the death of hundreds of thousands of people and the largest job loss in the United States since the Great Depression. The pandemic is caused by a novel coronavirus, referred to as COVID-19, which is highly contagious and may cause infected persons to experience a respiratory condition. The severity of the respiratory condition may vary from person to person. For instance, some persons may be infected but fail to experience any recognized symptoms. Other persons may have mild symptoms and improve on their own. Yet other persons may experience severe respiratory issues that cause breathing difficulties.

[0003] Independent of the severity of the illness experienced by persons, at certain times, all persons infected with COVID-19 are highly contagious so that, if one individual is diagnosed with COVID-19, any persons in contact with that individual, especially over a prescribed period of time, may be infected. As a result, to curtail the spread of COVID-19, the person(s) in contact with the infected individual may need to be tested. In some cases, a workplace frequented by the infected individual may need to temporarily shut down until other employees in contact with the infected individual have received a negative (non-infected) test result.

[0004] Unfortunately, COVID-19 test results are taking days to process. These delays are causing workplaces to unnecessarily remain shut down even when the tested employees have not contracted COVID-19 after exposure. The lack of real-time reporting of any medical test results, especially COVID-19 test results, is causing tremendous economic loss to both employees and a number of industries because tested employees are not allowed to return to the workforce until negative COVID-19 test results are received. Besides economic loss, these delays may cause family members or other persons to be needlessly tested based on their exposure to a potentially infected individual. Such unnecessary testing wastes resources that are desperately needed by all communities.

[0005] There is a need to develop a testing system that, based on the COVID-19 or other medical tests being conducted outside of a medical facility (e.g., self-tests conducted by a potentially infected individual), performs real-time diagnostics of the test results to confirm these test results. The confirmation is needed as self-diagnosis, without real-time confirmation, may lead to higher rates of false positives and/or false negatives caused by an inaccurate reading of the test results.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] A more particular description of the present disclosure will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. Example embodiments of the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

[0007] FIG. 1 is an exemplary embodiment of a medical test kit with the capturing of an image of the test results produced by the medical test kit for transmission to a diagnostic server;

[0008] FIG. 2 is an exemplary block diagram of the diagnostic server of FIG. 1;

[0009] FIGS. 3A-3J are illustrative images associated with the test cassette image or test strip image, inclusive of indicia representing the test results;

[0010] FIG. 4 is an exemplary flowchart of the operability of the web-based image capture logic deployed within the diagnostic server of FIG. 2;

[0011] FIG. 5 is an exemplary flowchart of the operability of the image alignment logic deployed within the diagnostic server of FIG. 2;

[0012] FIG. 6 is an exemplary flowchart of the operability of the image cropping logic deployed within the diagnostic server of FIG. 2;

[0013] FIG. 7 is an exemplary flowchart of the operability of the greyscale conversion and separation logic deployed within the diagnostic server of FIG. 2;

[0014] FIG. 8 is an exemplary flowchart of the operability of the binary image conversion logic deployed within the diagnostic server of FIG. 2;

[0015] FIG. 9 is an exemplary flowchart of the operability of the result analytics logic deployed within the diagnostic server of FIG. 2.

[0016] These and other features of embodiments of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of embodiments of the invention as set forth hereinafter.

DETAILED DESCRIPTION

[0017] Reference will now be made to figures wherein like structures will be provided with like reference designations. It is understood that the drawings are diagrammatic and schematic representations of exemplary embodiments of the invention, and are neither limiting nor necessarily drawn to scale.

[0018] Regarding terms used herein, it should be understood the terms are for the purpose of describing some particular embodiments, and the terms do not limit the scope of the concepts provided herein. Ordinal numbers (e.g., first, second, third, etc.) are sometimes used to distinguish or identify different operations or indicia (e.g., detection lines), for example, and do not supply a serial or numerical limitation. For example, “first,” “second,” and “third” operations or indicia (detection line) need not necessarily appear in that order, and the particular embodiments including visible indicia representative of test results, need not necessarily be limited or restricted to the three indicia identified below. Similarly, labels such as “left,” “right,” “top,” “bottom,” “front,” “back,” and the like are used for convenience and are not intended to imply, for example, any particular fixed location, orientation, or direction. Instead, such labels are used to reflect, for example, relative location, orientation, or directions. Singular forms of “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.

[0019] The term “logic” is representative of hardware and/or software that is configured to perform one or more functions. As hardware, logic may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a processor, a programmable gate array, a microcontroller, an application specific integrated circuit, combinatorial circuitry, or the like. Alternatively, or in combination with the hardware circuitry described above, the logic may be software in the form of one or more software modules, which may be configured to operate in a manner as would its counterpart circuitry. The software modules may include, for example, an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, a routine, source code, or even one or more instructions. The software module(s) may be stored in any type of a suitable non-transitory storage medium, such as a programmable circuit, a semiconductor memory, non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”), persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device.

[0020] In certain instances, the terms “compare,” “comparing,” “comparison,” or other tenses thereof generally mean determining if a match (e.g., identical or a prescribed level of correlation) is achieved between data associated with two different items such as different images or portions of these images.

[0021] In the following description, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, components, functions, steps or acts are in some way inherently mutually exclusive.

[0022] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the art.

I. GENERAL ARCHITECTURE

[0023] Referring to FIG. 1, an illustrative embodiment of a capturing of an image including a test cassette 110 associated with a medical testing system 100 is shown. Herein, the test cassette 110 is configured to test for the presence or absence of certain antibodies (e.g., immunoglobulins) within a body fluid sample (e.g., a drop of blood) 120, where these antibodies may include an early marker antibody (Immunoglobulin M “IgM”) and a late marker antibody (e.g., Immunoglobulin G “IgG”). The test cassette 110 includes a well structure 112 that is sized to receive the sample 120 from a patient. The test cassette 110 may include a solid phase immunochromatographic assay 114, which returns a result 130 after the applied sample 120 has had a prescribed amount of exposure to the assay 114. The result 130 may include, but is not limited or restricted to, indicia visually perceived within a test strip area 140 of the test cassette 110 that signifies a test result such as detection of IgG and IgM antibodies based on exposure to an antigen (e.g., COVID-19 virus).

[0024] According to this embodiment, the visual indicia may constitute information that indicates (i) the results of one or more medical tests and/or (ii) whether the result of the medical test(s) is valid (or invalid, requiring a repeated test to be conducted). As an illustrative example, as shown in FIG. 1, the information may be represented by one or multiple (two or more) visible lines, where each visible line may convey the results of a certain medical test. For this embodiment, the medical tests may be directed to detecting the presence or absence of Immunoglobulin M and/or Immunoglobulin G antibodies for example. Immunoglobulin G (IgG) is the most common type of antibody in blood and other body fluids. In general, these antibodies protect individuals against infection by “remembering” prior germ exposure. Immunoglobulin M (IgM) is one of several isotypes of antibody that is normally the first antibody to appear in the response to initial exposure to an antigen, such as COVID-19 for example.

[0025] However, the medical test(s) provided by the assay 114 may be directed to detecting another type of antibody, a targeted protein, chemical, or other compound that may be associated with a medical condition. For example, besides COVID-19 test determination and/or confirmation, the medical testing system may be used for other types of viral COVID testing, drug testing, fecal colon testing, pregnancy testing, or the like. Herein, the information may be directed to the visible (detection) line, albeit other types of indicia (e.g., images, symbols, colors, etc.) may be used to identify detection or lack of detection of an element (e.g., presence of an antibody, protein, chemical, etc.) that the medical test is designed to detect. Herein, embodiments of the invention may be associated with different types of assays 114, depending on the medical condition for which a person is being tested.

[0026] Referring still to FIG. 1, for illustrative purposes, the test strip area 140 may feature a plurality of visual indicia (or markers) 150, such as detection lines for example. As indicated above, it is contemplated that other types of indicia besides the detection lines 150 may be used to convey the test results. For this embodiment, the detection lines 150 may be used to identify the presence or absence of a particular targeted protein (e.g., novel coronavirus antibodies such as IgG and/or IgM) or other material/chemical composition.

[0027] As an illustrative embodiment, the plurality of detection lines 150 may include the following: (1) a first (quality control) detection line 160, (2) a second detection line 162, and (3) a third detection line 164. A "negative result" occurs when only the first detection line 160 appears within the test strip area 140 to signify that a certain material/chemical composition has not been detected (e.g., no novel coronavirus antibody detected). Additionally, as an optional feature, the color of the first detection line 160 may be used to signify whether the test results 130 are valid (i.e., the assay 114 operated as expected) or invalid (e.g., an error). A "positive IgM result," which identifies the presence of the novel coronavirus IgM antibody for example, occurs when both the first and second detection lines 160/162 appear in the test strip area 140. A "positive IgG result," which identifies the presence of the novel coronavirus IgG antibody for example, occurs when both the first and third detection lines 160/164 appear in the test strip area 140. A "positive IgM/IgG result," identifying the presence of novel coronavirus IgM and IgG antibodies for example, occurs when the first, second and third detection lines 160/162/164 are all visibly displayed.
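
By way of a non-limiting illustrative sketch, the interpretation of the detection lines 150 described above may be expressed in Python as follows; the function name and the Boolean inputs (one per detection line 160/162/164) are hypothetical placeholders rather than part of the disclosed logic, and the treatment of a missing quality control line as "invalid" is an assumption:

    def interpret_result(line_160: bool, line_162: bool, line_164: bool) -> str:
        # line_160: quality control line; line_162: IgM line; line_164: IgG line
        if not line_160:
            return "invalid"            # assumption: no control line means the assay should be repeated
        if line_162 and line_164:
            return "positive IgM/IgG"
        if line_162:
            return "positive IgM"
        if line_164:
            return "positive IgG"
        return "negative"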

[0028] Referring still to FIG. 1, an image capturing device 170 (e.g., a smartphone, digital camera, etc.) is configured to capture an image 175, intended to include a sub-image of the test cassette 110 (hereinafter, “test cassette sub-image 177”) along with images of components of the test cassette 110, especially a sub-image of the test strip area 140 (hereinafter, “test strip sub-image 178”) and a sub-image of the visual indicia representing the test results 130 such as the detection lines 150 or the lack thereof (hereinafter, “visual indicia image 179”). The captured image 175 is transmitted to a diagnostic test evaluation device or service (e.g., diagnostic server) 180 via any type of transmission medium 190, such as a physical or logical communication link (or path). For instance, as a logical communication link, the transmission medium 190 may be in the form of a wireless communication network using radio frequency (RF), satellite or cellular communications as shown. As a physical communication link, the transmission medium 190 may be a wired interconnect in the form of electrical wiring, optical fiber, cable, or the like. The transmission medium 190 may be a combination of physical and logical communication links (e.g., RF transceiver and routing to the diagnostic server 180 over a wired interconnect).

[0029] Referring now to FIG. 2, an exemplary block diagram of the diagnostic server 180 of FIG. 1 is shown, where the diagnostic server 180 may be deployed as a web-based server accessed by one or more image capturing devices or a cloud service with functionality of the diagnostic server 180. Herein, the diagnostic server 180 features a processor 200, a memory 210 and a communication interface 220 configured to receive the captured image 175 via the transmission medium 190. Herein, the processor 200 may be deployed as a physical processor (e.g., a central processing unit, digital signal processor, a programmable gate array, a microcontroller, an application specific integrated circuit, or the like) or a virtual processor coded to exhibit functionality associated with a physical processor (e.g., cloud compute engine such as Amazon® Elastic Compute Cloud (EC2), Azure® Cloud Compute, etc.). Formed with or based on a non-transitory storage medium, the memory 210 may constitute physical or logical memory that features a web-based image capture software module 230, image conditioning logic 235 (e.g., image alignment logic 240, image cropping logic 250, greyscale conversion and separation logic 260, binary image conversion logic 270), and result analytics logic 280. The result analytics logic 280 includes pixel analytic logic 290 to process the binary image and detect (or confirm) the test results 130 as, in some cases, the test results 130 are difficult to read for self-diagnosis. Collectively, portions or all of the image conditioning logic 235 may be configured as artificial intelligence or machine-learning (ML) logic that relies on prior heuristics (e.g., prior successful or errored visual indicia determinations) when conducting image alignment, image cropping, greyscale conversion and/or separation, binary image conversion, or the like.
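
As a minimal structural sketch of the pipeline suggested by FIG. 2, the logic modules 240-280 may be composed as shown below; the class, attribute, and method names are hypothetical placeholders and do not reflect a disclosed implementation:

    from dataclasses import dataclass
    from typing import Callable, List
    import numpy as np

    @dataclass
    class DiagnosticServer:
        image_alignment: Callable[[np.ndarray], np.ndarray]              # image alignment logic 240
        image_cropping: Callable[[np.ndarray], np.ndarray]               # image cropping logic 250
        greyscale_separation: Callable[[np.ndarray], List[np.ndarray]]   # greyscale conversion and separation logic 260
        binary_conversion: Callable[[np.ndarray], np.ndarray]            # binary image conversion logic 270
        result_analytics: Callable[[List[np.ndarray]], str]              # result analytics logic 280

        def process(self, captured_image: np.ndarray) -> str:
            cassette = self.image_alignment(captured_image)
            strip = self.image_cropping(cassette)
            regions = self.greyscale_separation(strip)
            binaries = [self.binary_conversion(region) for region in regions]
            return self.result_analytics(binaries)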

[0030] Herein, as shown in FIGS. 1-2, the web-based image capture software module 230 is downloadable by the image capturing device 170 (e.g., smartphone). When executed, the web-based image capture software module 230 may be configured to perform preliminary analytics on the captured image 175. These preliminary analytics are intended to determine, prior to transmission to the diagnostic server 180, whether the captured image 175 exists in an acceptable state for subsequent image processing and analysis.

[0031] For example, as an illustrative embodiment, the preliminary analytics performed by the web-based image capture software module 230 may include a first analysis of the captured image 175 to determine that some type of test cassette (e.g., test cassette sub-image 177) is part of the captured image 175. Also, as an optional feature, the web-based image capture software module 230 may be configured to detect the specific type of test cassette included in the captured image 175. Also, as an optional feature, the web-based image capture software module 230 may be configured to detect whether a portion of the test cassette, such as the test strip area for example (e.g., test strip sub-image 178), is included as part of the captured image 175.

[0032] Additionally, or in the alternative, the preliminary analytics performed by the web-based image capture software module 230 may include a second analysis of the captured image 175 to determine whether any imaging artifacts (e.g., images of blood particles, excessive shading, etc.) exist within the test strip sub-image 178, and if so, the imaging artifact(s) is removed therefrom. Upon completion of the preliminary analytics, which signifies that the captured image 175 is in an acceptable state for further image processing by logic within the diagnostic server 180, the web-based image capture software module 230 may compress and resize the captured image 175.
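
A minimal sketch of the compress-and-resize step is shown below using OpenCV; the target width and JPEG quality are assumed values rather than parameters taken from the disclosure:

    import cv2

    def compress_and_resize(image_path: str, target_width: int = 1024, jpeg_quality: int = 80) -> bytes:
        image = cv2.imread(image_path)                       # captured image as a BGR array
        scale = target_width / image.shape[1]
        resized = cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
        ok, encoded = cv2.imencode(".jpg", resized, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
        if not ok:
            raise RuntimeError("JPEG encoding failed")
        return encoded.tobytes()                             # payload transmitted to the diagnostic server 180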

[0033] As an alternative embodiment, however, the web-based image capture software module 230 may be configured to merely compress and resize the captured image 175 for subsequent image processing by logic within the diagnostic server 180. Herein, the web-based image capture software module 230 is not configured to perform the preliminary analytics, as described above.

[0034] As shown in FIGS. 2 & 3A, the image alignment logic 240, during execution by the processor 200, is configured to compare features (e.g., characteristics) of the captured image 175 with features of a reference image 300 of a known test cassette. As shown in FIG. 3A, lead lines 310 illustrate the comparisons conducted between the features of the test cassette sub-image 177 depicted within the captured image 175 and the features of the reference test cassette image 300. For example, a first set of lead lines 312 may pertain to a comparison of a particular grouping of pixels 315, associated with a certain color(s) and/or decorative shape(s) displayed at a prescribed location on a top surface of the test cassette sub-image 177, to color(s) or decorative shape(s) 302 displayed on a top surface of the reference test cassette image 300. Similarly, a second set of lead lines 314 may pertain to a comparison of features such as the positioning and/or shape of the test strip sub-image 178 and/or the visual indicia sub-image 179, included as part of the test cassette sub-image 177, to the positioning and/or shape of the test strip area 305 and/or the test results 307 displayed within the reference test cassette image 300.

[0035] These features of the test cassette sub-image 177 and the reference test cassette image 300 are compared to identify the positioning of the test cassette sub-image 177 within the captured image 175. This allows for the extraction of the test cassette sub-image 177 from the captured image 175 in order to produce the test cassette image 320 as shown in FIG. 3B. The image alignment logic 240 is further configured to ensure that the test strip sub-image 178, which is a portion of the test cassette image 320, is properly oriented for extraction to generate a test strip image 330 as shown in FIG. 3C.

[0036] According to one embodiment of the disclosure, the test strip image 330 may undergo subsequent cropping by the image cropping logic 250. For example, the image cropping logic 250 may be configured to crop the test cassette image 320 to recover the test strip image 330 thereby separating the test strip image 330 from the test cassette image 320. Additionally, the image cropping logic 250 is configured to crop vertically-oriented edges 335 of the test strip image 330 to produce a cropped test strip area 340 as shown in FIG. 3D. The cropped test strip area 340 is oriented to retain the centrally located portions of the detection lines 150.

[0037] Referring to FIGS. 2 & 3E, the greyscale conversion and separation logic 260, during execution by the processor 200, is configured to produce a greyscale test strip image 350 by altering the pixel content within the cropped test strip area 340 from color pixel values to greyscale values. Before applying the greyscale conversion, a highlight operation is performed to highlight the hidden details within the image and improve the visibility of any faint lines within the cropped test strip area 340. In particular, as an illustrative example, the pixel values associated with the areas reserved for each of the detection lines 160/162/164 of FIG. 1 are converted to greyscale, and thereafter, portions of the images associated with these detection lines 160/162/164 are separated into different greyscale result images 360-362 for analysis, as shown in FIG. 3F.
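
The disclosure does not specify how the highlight operation is performed; one plausible, non-limiting sketch applies contrast-limited adaptive histogram equalization (CLAHE) to the lightness channel before the greyscale conversion, with the CLAHE parameters chosen here only for illustration:

    import cv2
    import numpy as np

    def highlight(strip_bgr: np.ndarray) -> np.ndarray:
        # Equalize local contrast on the lightness channel so faint detection lines
        # become more visible prior to the greyscale conversion.
        lab = cv2.cvtColor(strip_bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)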

[0038] Thereafter, for each greyscale result image 360, 361 and/or 362, the binary image conversion logic 270 is configured to convert that greyscale result image 360, 361 and/or 362 into a corresponding binary image. In particular, a greyscale result image (e.g., image 360) may undergo a blurring operation 370, as shown in FIG. 3G. Such blurring assists in avoiding distortion caused by pixel noise that may be experienced during conversion of the greyscale result images 360, 361 and/or 362 into binary images. Thereafter, analytics are performed on each binary image to determine or confirm a test result, where such analytics may be based on averaging of pixel intensity values associated with a particular region 380 (e.g., average pixel intensity values per each row of the binary image) as shown in FIG. 3H. FIG. 3I illustrates an exemplary embodiment of pixel values 390 averaged across rows after the blurring operation of FIG. 3G, and FIG. 3J illustrates an exemplary embodiment of pixel values 395 produced by the averaging of the pixel intensity per row as shown in FIG. 3H. Additionally, the standard deviations associated with the row pixel intensity values may be further computed by pixel analytic logic 290 being part of the result analytics logic 280 described below.

II. OPERABILITY OF MEDICAL TESTING SYSTEM

[0039] Referring now to FIG. 4, an exemplary flowchart of the operability of the web-based image capture logic 230 deployed within the diagnostic server 180 of FIG. 2 is shown. According to this embodiment, an image (e.g., photograph, digital scan, etc.) is captured after a self-test for a medical condition (e.g., novel coronavirus, etc.) has been conducted (operation 400). Thereafter, the captured image undergoes image processing to determine whether this image includes, at least in part, a test cassette sub-image (operation 410). This image processing may include a first analysis of certain portions of the captured image to detect features associated with the test cassette sub-image.

[0040] For instance, as an illustrative embodiment shown in FIG. 1, the test cassette sub-image may include, at least in part, the test strip sub-image with visible indicia that collectively represents the test results, such as the control IgM and IgG detection lines as described above and illustrated in FIG. 1. For this embodiment, to detect the test cassette sub-image, certain features of the test cassette sub-image may pertain to various features of a known reference test cassette image, such as a particular region of pixels that are arranged to include a type of visual indicia (e.g., text character, symbol, etc.), as defined by a particular shape, color, location, orientation and/or sizing. The test cassette sub-image includes the test strip sub-image.

[0041] In the event that the certain features associated with the test cassette sub-image are detected (operation 410), a second analysis of the captured image may be conducted to determine and remove any imaging artifacts (e.g., images of blood particles, shading, etc.) within the test strip sub-image shown in FIG. 1 (operation 420). Upon detecting imaging artifacts within the test strip sub-image and successfully removing these imaging artifacts (operation 430), the captured image may be compressed and/or resized prior to subsequent processing of the captured image (operation 440).

[0042] Referring to FIG. 5, an exemplary flowchart of the operability of the image alignment logic 240 deployed within the diagnostic server 180 of FIG. 2 is shown. Herein, according to one embodiment of the disclosure, portions of the captured image are compared with one or more test cassette reference images to locate points of interest (operation 500). More specifically, according to one embodiment of the disclosure, the points of interest may include comparing edges of the test cassette sub-image to prescribed perimeter edges associated with a known reference test cassette image. According to another embodiment of the disclosure, the points of interest may be directed to a comparison of certain pixel regions of the test cassette sub-image having a certain color (or colors) to corresponding regions within a known reference test cassette image. Similarly, the points of interest may include a comparison of certain elements pertaining to the test cassette sub-image within the captured image (e.g., test strip area, indicia located at a prescribed area for a particular type of test cassette, etc.) to similar elements within one of the known reference test cassette images.

[0043] Thereafter, the image alignment logic may be configured to find the homography between these points of interest and corresponding points of interest associated with a known reference test cassette image (operation 510). This operation is conducted to arrange the test cassette sub-image within the captured image for cropping by the image cropping logic 250, as illustrated in FIG. 6 and described below (operation 520). After such cropping, the image alignment logic may be configured to determine whether the test cassette sub-image was arranged properly, as portions of the test cassette sub-image may have been skewed (e.g., rotated, twisted, etc.) or accidentally removed when the image was captured (operation 530).
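
A non-limiting sketch of the point-of-interest matching and homography estimation of operations 500-520 is shown below using OpenCV; ORB features, the number of retained matches, and the RANSAC reprojection threshold are assumptions rather than requirements of the disclosure:

    import cv2
    import numpy as np

    def align_to_reference(captured_grey: np.ndarray, reference_grey: np.ndarray) -> np.ndarray:
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(captured_grey, None)
        kp2, des2 = orb.detectAndCompute(reference_grey, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        height, width = reference_grey.shape[:2]
        # Warp the captured image into the reference frame so the cassette sub-image
        # can be cropped at known coordinates.
        return cv2.warpPerspective(captured_grey, homography, (width, height))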

[0044] For example, in determining proper alignment (arrangement) of the test cassette sub-image, the image alignment logic may be configured to determine a structural similarity index (SSI) value for each point of interest or for a prescribed number or percentage of the points of interest. Each SSI value is a unit of measure that indicates a level of similarity between two features, such as a particular point of interest associated with the test cassette sub-image and the corresponding point of interest associated with a particular reference cassette image. The point of interest may include, but is not limited or restricted to, a grouping of pixels representing (i) indicia visible on the test cassette sub-image, or (ii) a particular form (shape) of a region of the test cassette sub-image that may be associated with the test strip area, the sample deposit area, or the like. Hence, as an illustrative example, the SSI value may indicate a percentage or number of pixels associated with the region of the test cassette sub-image that match a corresponding region of a known reference test cassette image.
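
Assuming the SSI value corresponds to the structural similarity index computed over the aligned images, a minimal sketch of the comparison is shown below; the 0.7 threshold is an illustrative assumption rather than a prescribed value:

    import numpy as np
    from skimage.metrics import structural_similarity

    def cassette_matches_reference(aligned_grey: np.ndarray, reference_grey: np.ndarray,
                                   threshold: float = 0.7) -> bool:
        # Both inputs are 8-bit greyscale images of identical dimensions.
        score = structural_similarity(aligned_grey, reference_grey, data_range=255)
        return score >= threshold    # above threshold: cassette located/identified, proceed to cropping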

[0045] Where the SSI values (e.g., a prescribed number of SSI values, an aggregate of the SSI values, an average (mean) of the SSI values, etc.) exceed a threshold, the image alignment logic concludes that the test cassette sub-image is properly arranged, and the test cassette sub-image is cropped to separate it from the captured (altered) image received from the image capturing device to form the test cassette image. Additionally, based on the SSI values exceeding the threshold, the particular type of test cassette has been determined and further cropping of the test cassette image may be conducted (operation 540). Otherwise, the image alignment logic reports an error requesting an additional image of the test cassette to be captured and resent to the diagnostic server, or may repeat any one or more of the operations 500-530 to confirm that the test cassette cannot be located within the received image (operation 550).

[0046] Referring now to FIG. 6, an exemplary flowchart of the operability of the image cropping logic 250 deployed within the diagnostic server 180 of FIG. 2 is shown. Herein, the image cropping logic is configured to crop the test cassette sub-image from the captured image after the test cassette sub-image has been determined to have been arranged properly (operation 600). In the event that the image alignment logic has determined that the test cassette sub-image is arranged properly and identified as described above (see FIG. 5), a portion of the test cassette image is now cropped in order to extract an image inclusive of the test strip area (hereinafter, "test strip image") from the cropped test cassette image (operations 610-620). For example, for a particular type of test cassette identified, a predetermined set of coordinates may correspond to the test strip area for that test cassette image and the area bounded by the set of coordinates is cropped to recover the test strip image.

[0047] Besides cropping the test strip image from the test cassette image, the edges of the test strip image may be cropped (removed) to remove potential “noise” associated with the results visible within the test strip image (operation 630). The edges may be identified by a subset of the predetermined set of coordinates and the amount of cropping may be based on a predetermined number of pixels or a particular distance from the edges of the test strip image. Stated differently, as most of the artifacts and bleeding of the colors associated with the indicia representing the test results (e.g., detection lines) occur along the edges of the indicia, a cropping of the edges produces a resultant test strip image including indicia that more accurately represents the actual test results.
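
A minimal sketch of the coordinate-based cropping and edge trimming is shown below; the coordinate table and edge margin are hypothetical placeholders for the predetermined, per-cassette-type values described above:

    import numpy as np

    # (row_start, row_end, col_start, col_end) bounding the test strip area for a
    # hypothetical cassette type; real values would be predetermined per cassette type.
    STRIP_COORDS = {"cassette_type_A": (40, 260, 110, 170)}

    def crop_test_strip(cassette_img: np.ndarray, cassette_type: str, edge_margin: int = 8) -> np.ndarray:
        r0, r1, c0, c1 = STRIP_COORDS[cassette_type]
        strip = cassette_img[r0:r1, c0:c1]
        # Trim the vertically oriented edges to discard artifacts and color bleeding.
        return strip[:, edge_margin:strip.shape[1] - edge_margin]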

[0048] Referring to FIG. 7, an exemplary flowchart of the operability of the greyscale conversion and separation logic 260 deployed within the diagnostic server 180 of FIG. 2 is shown. Before applying the greyscale conversion, a highlight operation is performed to highlight the hidden details within the image and improve the visibility of faint lines, most notably indicia representing the test results. The greyscale conversion and separation logic 260 is configured to locate the pixels associated with the test strip image and convert the color pixel values associated with these pixels into greyscale values (operations 700-710). This conversion may involve converting three intensity values for each color pixel into a single value for a greyscale image.

[0049] In particular, the conversion assigns the color intensity value associated with each pixel within the test strip image to a greyscale value (e.g., a pixel value representing an amount of light intensity associated with that pixel). The greyscale conversion assists in locating and identifying the indicia (detection line) associated with the test results within the test strip image. Besides the greyscale version of the test strip image, the color version of the test strip image (or the color version of the first detection line) may be stored for use in analysis as to the validity of the test results, as described above (operation 720).

[0050] The greyscale conversion and separation logic 260 further separates portions of the visible indicia that pertain to different test results into separate images for each test result (operation 730). According to the test cassette of FIG. 1 as an illustrative embodiment, the greyscale conversion and separation logic 260 separates the visible indicia into a plurality of separate greyscale result images, each image directed to a particular detection line converted into a greyscale version for subsequent processing by the binary image conversion logic 270 and result analytics logic 280 of FIG. 2, as illustrated in FIGS. 8-9. Therefore, each visible indicia (detection line) can be analyzed separately from the other indicia (operation 740).
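
A minimal sketch of the greyscale conversion and per-detection-line separation is shown below; the row ranges isolating each detection line are hypothetical placeholders for the per-cassette regions described above:

    import cv2
    import numpy as np

    # Assumed row ranges (within the cropped test strip) reserved for the control,
    # IgM, and IgG detection lines, respectively.
    LINE_ROWS = [(10, 60), (90, 140), (170, 220)]

    def separate_greyscale_lines(strip_bgr: np.ndarray) -> list:
        grey = cv2.cvtColor(strip_bgr, cv2.COLOR_BGR2GRAY)   # three color intensities collapse to one value
        return [grey[r0:r1, :].copy() for (r0, r1) in LINE_ROWS]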

[0051] Referring now to FIG. 8, an exemplary flowchart of the operability of the binary image conversion logic 270, which prepares each greyscale test result image for analysis, is shown. According to one embodiment of the disclosure, for every greyscale result image (operation 800), the binary image conversion logic performs operations to avoid distortions (operation 810) and convert the greyscale result image into a corresponding binary image (operation 820).

[0052] More specifically, with respect to prescribed regions of each greyscale result image (e.g., certain 5x5 pixel regions), an operation is conducted to "blur" the image to avoid distortions due to pixel noise. For example, with respect to this stage in image processing, a Gaussian blur (also known as Gaussian smoothing) may be performed to reduce image noise and reduce imaging details. This blurring technique serves as a pre-processing stage for computer vision operations, such as Otsu thresholding for example, which involves calculating a measure of spread for the pixel levels that reside as part of a foreground layer or a background layer of the greyscale result image. The resultant binary images produced from application of Otsu thresholding to the greyscale result images are provided to the result analytics logic for image processing and confirmation of the test results, as illustrated in FIG. 9.
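
A minimal sketch of the blur-and-threshold conversion is shown below; the 5x5 Gaussian kernel follows the 5x5 pixel regions mentioned above, while the remaining parameters are assumptions:

    import cv2
    import numpy as np

    def to_binary(grey_result: np.ndarray) -> np.ndarray:
        # Gaussian smoothing suppresses pixel noise before Otsu thresholding
        # separates foreground (detection line) from background.
        blurred = cv2.GaussianBlur(grey_result, (5, 5), 0)
        _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary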

[0053] More specifically, referring to FIG. 9, an exemplary flowchart of the operability of the result analytics logic 280 deployed within the diagnostic server 180 of FIG. 2 is shown. Herein, the result analytics logic features pixel analytic logic. According to one embodiment of the disclosure, the pixel analytic logic calculates an average pixel value for particular pixel regions associated with each resultant binary image (operation 900). For example, according to one embodiment of the disclosure in which the visible indicia is oriented as one or more (horizontal) detection lines and each detection line includes multiple (two or more) rows of pixels, the average pixel intensity value is calculated for each pixel row in order to focus only on unidirectional (horizontal) variations and remove any other directional variations (e.g., remove vertical variations between vertically oriented pixels). The average pixel intensity values may be represented by the average greyscale pixel intensity values along each row of the visible (test) indicia, ranging from 0 to 255 in value. Alternatively, where the visible (test) indicia (e.g., detection lines) are oriented in a vertical direction, it is contemplated that the pixel analytic logic may be configured to calculate the average pixel intensity values for each column of pixels associated with each resultant binary image.
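
A minimal sketch of the row-averaging computation of operation 900 is shown below; the function name is an illustrative placeholder:

    import numpy as np

    def row_averages(result_image: np.ndarray) -> np.ndarray:
        # Collapse each pixel row to its average intensity (0-255) so that only the
        # variation across rows, i.e., across the detection line, remains.
        return result_image.astype(np.float64).mean(axis=1)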

[0054] Additionally, the pixel analytic logic is configured to calculate the standard deviation between neighboring average pixel row values to assist in identifying the presence or absence of visible indicia (e.g., detection lines) representing a particular result of a medical test, such as a presence of IgG antibody and/or IgM antibody as shown in FIG. 1 (operation 910). It is contemplated that the changes in standard deviation may be used to identify a presence of indicia within the test strip image (operation 920). For example, the start of visible (test) indicia (e.g., a detection line) may be represented with a higher standard deviation value, suggesting that the pixel area pertaining to the standard deviation is part of a foreground layer of the test strip image. This higher standard deviation may be a deviation value greater than or equal to a first prescribed threshold. A lower standard deviation may represent a pixel area of the test strip image that is part of a background layer of the test strip image, as the pixel area, being devoid of the visible (test) indicia, may have a more uniform (constant) pixel intensity value resulting in a lower standard deviation value. This lower standard deviation may be a deviation value greater than or equal to a second prescribed threshold, which is less than or equal to the first prescribed threshold.
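
A minimal sketch of the standard-deviation-based detection of operations 910-920 is shown below; the sliding-window size and threshold are assumed values rather than the prescribed thresholds of the disclosure:

    import numpy as np

    def line_present(row_avgs: np.ndarray, window: int = 5, high_threshold: float = 20.0) -> bool:
        # A sharp transition between background and a detection line produces a high
        # standard deviation among neighboring row averages.
        stds = [np.std(row_avgs[i:i + window]) for i in range(len(row_avgs) - window + 1)]
        return bool(stds) and max(stds) >= high_threshold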

[0055] Embodiments of the invention may be embodied in other specific forms without departing from the spirit of the present disclosure. The described embodiments are to be considered in all respects only as illustrative, not restrictive. The scope of the embodiments is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.