Title:
AUTOMATIC STENOSIS DETECTION
Document Type and Number:
WIPO Patent Application WO/2021/117043
Kind Code:
A1
Abstract:
Embodiments of the invention provide a fully automated solution to vessel analysis based on image data. A system for analysis of a vessel receives a 2D lengthwise image of a patient's vessels, the image obtained during X-ray angiography, and applies a pre-trained classifier on the image to output an indication of a presence of a stenosis in the vessels and an x,y location of the stenosis. The indication of the stenosis is then displayed via a user interface device, on an image of the patient's vessels, at the x,y location of the stenosis.

Inventors:
BARUCH EL OR (IL)
GOLDMAN DAVID (IL)
LOEVSKY IGAL (IL)
Application Number:
PCT/IL2020/051276
Publication Date:
June 17, 2021
Filing Date:
December 10, 2020
Assignee:
MEDHUB LTD (IL)
International Classes:
A61B5/02; A61B5/026; G06T11/00
Foreign References:
EP3277183A1, 2018-02-07
US20140200867A1, 2014-07-17
US20190180153A1, 2019-06-13
Other References:
CONG, Chao; LIMA, Joao; VENKATESH, Bharath; VASCONCELLOS, Henrique Doria: "Automated Stenosis Detection and Classification in X-ray Angiography Using Deep Neural Network", IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 18 November 2019 (2019-11-18), pages 1301-1308, XP033703872
Attorney, Agent or Firm:
BENTOV, Rachel (IL)
Claims:
CLAIMS

1. A system for analysis of a vessel, the system comprising a processor in communication with a user interface device, the processor configured to: receive a 2D lengthwise image of a patient’s vessels, the image obtained during X-ray angiography; apply a pre-trained classifier on the image to output an indication of a presence of a stenosis in the vessels and an x,y location of the stenosis; and cause an indication of the stenosis to be displayed via the user interface device, on an image of the patient’s vessels, at the x,y location.

2. The system of claim 1 wherein the classifier is pre-trained on X-ray angiography images that include a stenosis.

3. The system of claim 1 wherein the 2D lengthwise image is selected from a plurality of angiography images of the patient’s vessels, as an image showing the most detail.

4. The system of claim 1 wherein the processor applies on the 2D lengthwise image an algorithm for segmenting, to obtain an image of segmented out vessels and applies the classifier on the image of segmented out vessels.

5. The system of claim 4 wherein the classifier is applied on a plurality of partially overlapping portions of the image of segmented out vessels.

6. The system of claim 1 wherein the processor determines a centerline of the vessel in the 2D lengthwise image and wherein the classifier is applied on a plurality of image portions, each portion including a different part of the centerline.

7. The system of claim 6 wherein each different part of the centerline partially overlaps another part of the centerline.

8. The system of claim 1 wherein the processor is to determine a probability of presence of the stenosis and to cause an indication of the stenosis to be displayed only if the probability is above a predetermined threshold.

9. The system of claim 1 wherein the processor causes the indication of stenosis to be displayed at a same x,y location on a plurality of consecutive images.

10. The system of claim 1 wherein the processor is configured to calculate a value of FFR for the stenosis and cause the value to be displayed by the user interface device.

11. The system of claim 10 wherein the processor is configured to calculate the FFR value based on a color feature of the image at a location of the stenosis; and output the FFR value to a user.

12. The system of claim 11 wherein the processor inputs the color feature into a machine learning model, the model to predict the FFR value.

13. A system for analysis of a vessel, the system comprising a processor in communication with a user interface device, the processor configured to: receive a plurality of consecutive 2D lengthwise images of a patient’s vessels; apply a classifier on one of the plurality of images to determine presence and location of a stenosis in the vessels in the one image; track the stenosis throughout the plurality of images; and cause an indication of stenosis to be displayed, via the user interface device, at the location on a plurality of the consecutive images.

14. The system of claim 13 wherein the processor attaches a virtual mark to the stenosis to track the stenosis throughout the plurality of consecutive images.

15. The system of claim 13 wherein the processor segments the one image to obtain an image of segmented out vessels; and applies a classifier on the image, using the segmented out vessels, the classifier to output an indication of a presence of a stenosis in the vessels and the location of the stenosis.

16. The system of claim 13 wherein the processor determines a centerline of the vessels in the 2D lengthwise images and wherein the classifier is applied on a plurality of portions of the 2D lengthwise images, each portion including a different part of the centerline, where each different part of the centerline partially overlaps another part of the centerline.

17. The system of claim 16 wherein the location comprises one or both of an x,y location in the image and a description of a section of a vessel in which the stenosis is located.

Description:
AUTOMATIC STENOSIS DETECTION

FIELD

[0001] The present invention relates to automated vessel analysis from 2D image data.

BACKGROUND

[0002] Artery diseases involve circulatory problems in which narrowed arteries reduce blood flow to body organs. For example, coronary artery disease (CAD) is the most common cardiovascular disease, which involves reduction of blood flow to the heart muscle due to build-up of plaque in the arteries of the heart.

[0003] Current clinical practices used in the diagnosis of coronary artery disease include coronary angiography and/or non-invasive image-based methods such as computerized tomography (CT), which require constructing a 3D model of arteries from which a computer can create cross-sectional images (slices) of the imaged tissues, and which require an expert's interpretation of the images. Other image-based diagnostic methods typically require user input (e.g., a physician is required to mark vessels in an image) based on which further image analysis may be performed to detect pathologies such as lesions and stenoses.

SUMMARY

[0004] Embodiments of the invention provide a fully automated solution to vessel analysis based on image data. A system, according to embodiments of the invention, detects a pathology and may provide a functional measurement value from a 2D image of a vessel, without having to construct a 3D model (i.e., without using CT techniques) and without requiring user input regarding the vessel and/or location of the pathology. Thus, embodiments of the invention enable detecting pathologies and providing a functional measurement value from 2D lengthwise images of a vessel obtained during X-ray angiography, as opposed to 2D cross section images that are used in CT procedures, such as CT angiography (CTA). Consequently, embodiments of the invention enable vessel analysis while exposing a patient to a significantly lower radiation dose compared with the level of radiation used during CT. Additionally, embodiments of the invention enable real-time analysis and treatment (e.g., stenting) of vessels, whereas analysis of CT-generated images cannot be done in real time and does not enable real-time treatment of vessels.

[0005] In one embodiment, a system for analysis of a vessel includes a processor in communication with a user interface device, the processor to receive a 2D image of a patient’s vessels, typically a lengthwise image of a vessel obtained during an X-ray angiogram procedure, and apply a classifier on the image. The classifier outputs an indication of a presence of a pathology, such as a stenosis, in the vessels and an x,y location of the pathology. The processor may then cause an indication of the pathology to be displayed via the user interface device, on an image of the patient’s vessels, at the x,y location.

[0006] The 2D image (typically a lengthwise image of a vessel obtained during X-ray angiography) may be selected, as the image showing the most detail, from a plurality of 2D images of the patient’s vessels.

[0007] In one embodiment the processor applies on the 2D image an algorithm for segmenting (e.g., the processor may apply a machine learning model), to obtain an image of segmented out vessels and applies the classifier on the image of segmented out vessels.

[0008] In some embodiments the processor determines a centerline of the segmented out vessels and the classifier is applied on a plurality of image portions, each portion including a different part of the centerline.

[0009] In some embodiments the processor determines a probability of presence of the stenosis (or other pathology) and causes an indication of the stenosis to be displayed only if the probability is above a predetermined threshold.

[0010] In some embodiments the processor can cause the indication of stenosis to be displayed at a same x,y location on a plurality of consecutive images. Thus, a system, according to embodiments of the invention, may include a processor to receive a plurality of consecutive images of a patient’s vessels and apply a classifier on one of the plurality of images to determine presence and location of a stenosis (or other pathology) in the vessels in the one image. The processor may then cause an indication of stenosis to be displayed, via a user interface device, at the location on each or a plurality of the consecutive images, thereby displaying a video of the consecutive images with an indication of stenosis.

[0011] The processor may track the stenosis throughout the plurality of consecutive images (e.g., by attaching a virtual mark to the stenosis) to enable displaying an indication of stenosis.

[0012] The location determined by the classifier and displayed to the user may include one or both of an x,y location in the image and a description of a section of a vessel in which the stenosis (or other pathology) is located.

[0013] In additional embodiments, a system for analysis of a vessel, includes a processor to receive an image of a patient’s vessels and to automatically determine a location of a stenosis in the vessels based on the image. The processor may calculate a functional measurement of the vessel based on a color feature of the image at the location of the stenosis, and may output to a user an indication of the functional measurement. For example, the processor may input the color feature into a machine learning model that can predict the functional measurement based on the input color feature.

BRIEF DESCRIPTION OF THE FIGURES

[0014] The invention will now be described in relation to certain examples and embodiments with reference to the following illustrative figures so that it may be more fully understood. In the drawings:

[0015] Fig. 1 schematically illustrates a system for analysis of a vessel, according to embodiments of the invention;

[0016] Fig. 2 schematically illustrates a method for automatically indicating a location of a stenosis on an image of a patient’s vessels, according to an embodiment of the invention;

[0017] Figs. 3A and 3B schematically illustrate images of vessels analyzed according to embodiments of the invention;

[0018] Fig. 4 schematically illustrates a method for tracking a pathology throughout images of vessels, according to an embodiment of the invention; and

[0019] Fig. 5 schematically illustrates a method for providing a functional measurement for a pathology, according to embodiments of the invention.

DETAILED DESCRIPTION

[0020] Embodiments of the invention provide methods and systems for automated analysis of vessels from images of the vessels, or portions of the vessels, and display of the analysis results.

[0021] Analysis, according to embodiments of the invention, may include diagnostic information, such as presence of a pathology, identification of the pathology, location of the pathology, etc. Analysis may also include functional measurements, such as estimates of FFR values. The analysis results may be displayed to a user.

[0022] A “vessel” may include a tube or canal in which body fluid is contained and conveyed or circulated. Thus, the term vessel may include blood veins or arteries, coronary blood vessels, lymphatics, portions of the gastrointestinal tract, etc.

[0023] An image of a vessel may be obtained using suitable imaging techniques, for example, X-ray imaging, ultrasound imaging, Magnetic Resonance Imaging (MRI) and other suitable imaging techniques. Embodiments of the invention use angiography, which includes injecting a radio-opaque contrast agent into a patient’s blood vessel and imaging the blood vessel using X-ray based techniques. The images obtained, according to embodiments of the invention, are typically 2D lengthwise images of a vessel, as opposed to, for example, 2D cross section images that are used in methods that require constructing a 3D model of the vessel, such as CTA and other CT methods.

[0024] A pathology may include, for example, a narrowing of the vessel (e.g., stenosis or stricture), lesions within the vessel, etc.

[0025] A “functional measurement” is a measurement of the effect of a pathology on flow through the vessel. Functional measurements may include measurements such as an estimate of fractional flow reserve (FFR), an estimate of instant flow reserve (iFR), coronary flow reserve (CFR), quantitative flow ratio (QFR), resting full-cycle ratio (RFR), quantitative coronary analysis (QCA), and more.

[0026] In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention.

[0027] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “using”, “analyzing”, "processing," "computing," "calculating," "determining," “detecting”, “identifying” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. Unless otherwise stated, these terms refer to automatic action of a processor, independent of and without any actions of a human operator.

[0028] In one embodiment, which is schematically illustrated in Fig. 1, a system 100 for analysis of a vessel includes a processor 102 in communication with a user interface device 106. Processor 102 receives one or more images 103 of a vessel 113. The images 103 may be consecutive images, typically forming a video that can be displayed via the user interface device 106.

[0029] Processor 102 performs analysis on the received image(s) and communicates analysis results and/or instructions or other communications, based on the analysis results, to a user via the user interface device 106. In some embodiments, user input can be received at processor 102, via user interface device 106.

[0030] Vessels 113 may include one or more vessel or portion of a vessel, such as a vein or artery, a branching system of arteries (arterial trees) or other portions and configurations of vessels.

[0031] Processor 102 may include, for example, one or more processors and may be a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multi-purpose or specific processor or controller. Processor 102 may be locally embedded or remote, e.g., on the cloud.

[0032] Processor 102 is typically in communication with a memory unit 112. In one embodiment the memory unit 112 stores executable instructions that, when executed by the processor 102, facilitate performance of operations of the processor 102, as described below. Memory unit 112 may also store image data (which may include data such as pixel values that represent the intensity of light having passed through body tissue and/or light reflected from tissue or from a contrast agent within vessels, and received at an imaging sensor, as well as partial or full images or videos) of at least part of the images 103.

[0033] Memory unit 112 may include, for example, a random access memory (RAM), a dynamic RAM (DRAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.

[0034] The user interface device 106 may include a display, such as a monitor or screen, for displaying images, instructions and/or notifications to a user (e.g., via graphics, images, text or other content displayed on the monitor). User interface device 106 may also be designed to receive input from a user. For example, user interface device 106 may include or may be in communication with a mechanism for inputting data, such as a keyboard and/or mouse and/or touch screen, to enable a user to input data.

[0035] All or some of the components of system 100 may be in wired or wireless communication, and may include suitable ports such as USB connectors and/or network hubs.

[0036] In one embodiment, processor 102 detects a location of a pathology, such as a stenosis, within a 2D image of a patient’s vessels. Thus, processor 102 may automatically indicate the actual location of a stenosis on an image of a patient’s vessels, e.g., on an X-ray image, thereby providing a fully automated solution for vessel analysis.

[0037] In one example, which is schematically illustrated in Fig. 2, processor 102 receives a 2D image of a patient’s vessels (e.g., an angiogram image) (step 202) and applies on the image algorithms for segmenting the image (e.g., semantic segmentation algorithms and/or machine learning models, as described below), to obtain an image of segmented out vessels, also referred to as a vessels mask (step 204). Processor 102 then applies a classifier on the image of the segmented out vessels (step 206) to obtain, from output of the classifier (assisted by using the vessels mask), an indication of a presence of a pathology (e.g., stenosis) in the vessels and a location of the pathology (step 208). The location may be an x,y location on a coordinate system describing the image and/or the location may be a description of the section of the vessel where the pathology is located.
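
A minimal sketch, in Python, of the pipeline of steps 202-210 described above; the function and model names (segmentation_model, classifier) are hypothetical placeholders for the segmentation and classification models discussed in the following paragraphs, and the 0.5 cut-offs are arbitrary assumptions, not values taken from the disclosure.

    import numpy as np

    def analyze_angiogram(image, segmentation_model, classifier):
        """Sketch of steps 204-208: segment out the vessels, then classify for a stenosis."""
        # Step 204: obtain a binary vessels mask (segmented out vessels)
        mask = segmentation_model(image)                  # hypothetical callable returning probabilities
        vessel_image = np.where(mask > 0.5, image, 0)     # keep only vessel pixels

        # Step 206: apply the classifier on the image of segmented out vessels
        presence_prob, x, y = classifier(vessel_image)    # hypothetical callable

        # Step 208: indication of presence and x,y location of the pathology
        return {"stenosis_present": presence_prob > 0.5,
                "probability": float(presence_prob),
                "location_xy": (int(x), int(y))}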

[0038] Classifiers, such as DenseNet, CNN (Convolutional Neural Network) or EfficientNet, may be used to obtain a determination of presence of a pathology and to determine the location of the pathology. Classifiers, according to embodiments of the invention, may be pre-trained on training data that includes 2D lengthwise X-ray angiogram images of vessels which may include a pathology (e.g., stenosis). In one embodiment the training data includes X-ray angiography images that include a stenosis and, optionally, X-ray angiography images that do not include a stenosis. The neural network composing the classifier may be trained by supervised learning, or possibly semi-supervised learning. During training, training data is repeatedly input to the neural network, an error between the network's output for the training data and a target is calculated, and the error is back-propagated in order to decrease it and update the network. In the case of supervised learning, the training data, which includes 2D X-ray angiogram images (e.g., images of a single vessel (with and possibly without stenoses) obtained from different points of view and/or images of different vessels with and possibly without stenoses), is labelled with a correct answer (e.g., stenosis exists/does not exist in the image).
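
The training scheme above could be sketched, assuming PyTorch and a hypothetical dataset of labelled angiogram images (label 1.0 where a stenosis exists), roughly as follows; none of these names come from the original disclosure.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader

    def train_stenosis_classifier(model, dataset, epochs=10, lr=1e-4):
        """Repeatedly input training data, compute the error between the network's
        output and the target label, and back-propagate to update the network."""
        loader = DataLoader(dataset, batch_size=16, shuffle=True)
        criterion = nn.BCEWithLogitsLoss()                    # stenosis exists / does not exist
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for images, labels in loader:                     # labels: 1.0 = stenosis exists
                optimizer.zero_grad()
                logits = model(images).squeeze(1)
                loss = criterion(logits, labels.float())
                loss.backward()                               # back-propagate the error
                optimizer.step()                              # update the network weights
        return model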

[0039] Applying a classifier on an angiography image enables detection of a pathology by using computer vision techniques, without requiring user input regarding a location of the vessels in the image and/or location of the pathology.

[0040] Processor 102 may then cause an indication of the pathology to be displayed, via the user interface device 106, on the image of the patient’s vessels (e.g., image 103), at the location of the pathology (step 210). In some embodiments the indication of pathology can be displayed at a same location on a plurality of consecutive images (e.g., a video angiogram).

[0041] An indication of a pathology displayed on a display of a user interface device may include, for example, graphics, such as, letters, numerals, symbols, different colors and shapes, etc., that can be superimposed on the image or video of the patient’s vessels.

[0042] In some embodiments, processor 102 determines a probability of presence of the pathology, e.g., based on output of the classifier, and causes an indication of the pathology to be displayed only if the probability is above a predetermined threshold.
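
As an illustration only, the threshold-gated display of paragraphs [0041]-[0042] might look as follows; OpenCV is used here purely for drawing, and the 0.8 threshold is an arbitrary assumption.

    import cv2

    def display_indication(frame, probability, location_xy, threshold=0.8):
        """Superimpose a stenosis marker on the frame only if the probability of
        presence of the pathology exceeds a predetermined threshold."""
        if probability < threshold:
            return frame                                      # no indication displayed
        x, y = location_xy
        marked = frame.copy()
        cv2.circle(marked, (x, y), 15, (0, 0, 255), 2)        # marker at the x,y location
        cv2.putText(marked, "stenosis %.0f%%" % (probability * 100), (x + 20, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        return marked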

[0043] In some embodiments, processor 102 obtains a vessels mask by using semantic segmentation algorithms on the image. A machine learning model can be used for the segmentation, e.g. deep learning models such as Unet or FastFCN or other deep learning based semantic segmentation techniques.
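
A hedged sketch of obtaining the vessels mask with a deep segmentation network; a pre-trained U-Net-style PyTorch module (seg_model) is assumed to be available, and the 0.5 cut-off is an arbitrary assumption.

    import numpy as np
    import torch

    def vessels_mask(image, seg_model):
        """Semantic segmentation of a grayscale angiogram frame into a binary vessel mask."""
        with torch.no_grad():
            tensor = torch.from_numpy(image).float()[None, None]    # shape 1 x 1 x H x W
            probs = torch.sigmoid(seg_model(tensor))[0, 0].numpy()  # per-pixel vessel probability
        return (probs > 0.5).astype(np.uint8)                       # 1 = vessel pixel, 0 = background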

[0044] Fig. 3A schematically illustrates a vessels mask image 300 including vessels 302. Processor 102 may determine a centerline 301 of the vessels 302 and may input to the classifier described above a distance between the centerline 301 and a border of the vessels, e.g., distance D1 and/or D2. The classifier may be applied on a plurality of portions of image 300, each portion including a different part of the centerline 301. The classifier may use the input distances D1 and/or D2 to determine presence of a pathology in the vessels and a location of the pathology.

[0045] In other embodiments, one example of which is schematically illustrated in Fig. 3B, the classifier is applied on a plurality of portions 311 of a lengthwise image 310 of a vessel 320, without input of distances D1 or D2 or input of any other measurement. In these embodiments, the classifier, optionally based on or including a deep neural network, accepts as input only an image (e.g., image 310), or portions of an image (e.g., portions 311) of a vessel and outputs an analysis of the vessel (e.g., the presence and location of a pathology in the vessel) based on the input image(s).

[0046] In some embodiments, the classifier may be applied on a plurality of partially overlapping portions of an image of a vessel (possibly, a vessel mask image) and may output an analysis of the vessel based on the partially overlapping portions of image. In one embodiment, the plurality of portions 311 of image 310, on which the classifier is applied, each include a different part of the centerline 301, such as parts 301a, 301b and 301c, where each of the different parts of the centerline may partially overlap another part of the centerline. For example, part 301a partially overlaps part 301b and part 301b partially overlaps parts 301a and 301c. This ensures that each portion of image input to the classifier includes a full portion of the vessel (typically including both borders 321 and 322) such that a possible stenosis will be located more or less in the center parts of the portion of image and not at the periphery of the portion of image, where it may be cut off or otherwise not clearly presented.
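
A minimal sketch of extracting partially overlapping portions centred along the centerline, as described in paragraph [0046]; the portion size and sampling step are arbitrary assumptions, and centerline_points is expected to be an array of (row, col) centerline coordinates (e.g., from a medial axis skeletonization such as the sketch further below).

    import numpy as np

    def centerline_portions(image, centerline_points, size=64, step=32):
        """Crop image portions centred on successive centerline points.
        Sampling every `step` points with a portion of `size` pixels makes
        consecutive portions partially overlap along the centerline."""
        half = size // 2
        portions = []
        for r, c in centerline_points[::step]:
            r0, c0 = max(r - half, 0), max(c - half, 0)
            portions.append(image[r0:r0 + size, c0:c0 + size])
        return portions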

[0047] Determining a centerline as well as calculating distances D1 and D2 can be done by using known algorithms for medial axis skeletonization, for example, scikit-image algorithms.

[0048] In some embodiments the 2D image (from which a vessels mask can be obtained) is an optimal image, selected from a plurality of 2D images of the patient’s vessels, as the image showing the most detail. In the case of angiogram images, which include contrast agent injected into a patient to make vessels (e.g., blood vessels) visible on an X-ray image, an optimal image may be an image of a blood vessel showing a large/maximum amount of contrast agent. Thus, an optimal image can be detected by applying image analysis algorithms (e.g., to detect the image frames having the most colored pixels) on a sequence of images.
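
For paragraph [0047], the centerline and the distances D1/D2 can be obtained, for example, with scikit-image's medial axis transform; the following sketch uses the existing skimage.morphology.medial_axis function on a binary vessels mask.

    import numpy as np
    from skimage.morphology import medial_axis

    def centerline_and_widths(mask):
        """Medial axis skeletonization of a binary vessels mask.
        For each skeleton pixel, `distance` holds the distance to the nearest
        vessel border, i.e. approximately D1 (and D2) along the centerline."""
        skeleton, distance = medial_axis(mask, return_distance=True)
        centerline_points = np.argwhere(skeleton)    # (row, col) coordinates of centerline pixels
        widths = distance[skeleton]                  # border distance at each centerline point
        return centerline_points, widths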

[0049] In one embodiment, an image captured at a time corresponding with maximum heart relaxation is an image showing a maximum amount of contrast agent. Thus, an optimal image may be detected based on capture time of the images compared with, for example, measurements of electrical activity of the heartbeat (e.g., ECG printout) of the patient.
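
An optimal frame may thus be selected either from pixel statistics (paragraph [0048]) or from capture time versus the ECG (paragraph [0049]); the following sketch illustrates only the pixel-counting variant, under the assumption that contrast-filled vessels appear dark in the X-ray frames and with an arbitrary intensity threshold.

    import numpy as np

    def select_optimal_frame(frames, intensity_threshold=80):
        """Pick the frame showing the largest amount of contrast agent,
        approximated here as the frame with the most dark pixels."""
        counts = [int(np.sum(frame < intensity_threshold)) for frame in frames]
        best_index = int(np.argmax(counts))
        return best_index, frames[best_index]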

[0050] In one embodiment, which is schematically illustrated in Fig. 4, processor 102 receives a plurality of consecutive images of a patient’s vessels (step 402). Processor 102 determines presence and location of a pathology (such as a stenosis) in the vessels in one image from the plurality of images (step 404), e.g., by applying a machine learning model on one of the images and applying a classifier on that image, as described above. Processor 102 then causes an indication of pathology to be displayed, via the user interface device 106, at the determined location on a plurality (possibly, on each) of the consecutive images (step 406).

[0051] In some embodiments, once a pathology is detected in a first image from the plurality of images, the pathology may be tracked throughout the plurality of images (e.g., video) (step 405), such that the same pathology can be detected in each of the images, even if its shape or other visual characteristics change between images.

[0052] One method of tracking a pathology may include attaching a virtual mark to the pathology detected in the first image. In some embodiments the virtual mark is location based, e.g., based on location of the pathology within portions of the vessel. In some embodiments, a virtual mark includes the location of the pathology relative to a structure of the vessel. A structure of a vessel can include any visible indication of anatomy of the vessel, such as junctions of vessels and/or specific vessels typically present in patients. Processor 102 may detect the vessel structure in the image by using computer vision techniques (such as by using the vessel mask described above), and may then index a detected pathology based on its location relative to the detected vessel structures.

[0053] For example, a segmenting algorithm can be used to determine which pixels in the image are part of the pathology and the location of the pathology relative to structures of the vessel can be recorded, e.g., in a lookup table or other type of virtual index. For example, in a first image a stenosis is detected at a specific location (e.g., in the distal left anterior descending artery (LAD)). A stenosis located at the same specific location (distal LAD) in a second image is determined to be the same stenosis that was detected in the first image. If, for example, more than one stenosis is detected within the distal LAD, each of the stenoses is marked with its location relative to additional structures of the vessel, such as a junction of vessels, enabling the stenoses to be distinguished from one another in a second image.

[0054] Thus, the processor 102 creates a virtual mark which is specific per pathology, and in a case of multiple pathologies in a single image, distinguishes the multiple pathologies from one another. The pathology can then be detected in following images of the vessel, based on the virtual mark.
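
The location-based virtual mark of paragraphs [0052]-[0054] could be sketched as a simple index keyed by the vessel segment and the distance to the nearest detected junction; the segment names, the millimetre units and the matching tolerance below are hypothetical and only illustrate the idea.

    def make_virtual_mark(segment_name, distance_to_junction_mm):
        """Index a detected pathology by its location relative to vessel structure."""
        return {"segment": segment_name, "dist_to_junction": distance_to_junction_mm}

    def match_pathology(mark, detections, tolerance_mm=2.0):
        """Find, among the detections in a later frame, the one carrying the same
        virtual mark: same vessel segment and a similar distance from the junction."""
        for detection in detections:
            same_segment = detection["segment"] == mark["segment"]
            same_offset = abs(detection["dist_to_junction"] - mark["dist_to_junction"]) <= tolerance_mm
            if same_segment and same_offset:
                return detection
        return None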

[0055] In some embodiments, processor 102 may assign a name or description to a pathology based on the location of the pathology within the vessel and the indication of pathology can include the name or description assigned to the pathology.

[0056] In one embodiment, the processor can calculate a value of a functional measurement, such as an FFR estimated value, for each pathology and may cause the value(s) to be displayed.

[0057] As schematically illustrated in Fig. 5, processor 102 receives an image of a patient’s vessels (step 502) and determines a location of a pathology in the vessels based on the image (step 504), e.g., as described above. Processor 102 then calculates a functional measurement of the vessel based on a color feature (which may include grayscale) of the image at the location of the stenosis, e.g., by inputting the color feature into a machine learning model that predicts a value of the functional measurement (step 506). In one embodiment, a machine learning model running a regression algorithm is used to predict a value of a functional measurement (e.g., FFR estimate) from an image of the vessel, namely, based on a color feature of the image at the location of a pathology. For example, the machine learning algorithm can be implemented by using the XGBoost algorithm or other Gradient Boosted Machine or decision trees regression. In other examples, neural network or deep neural network based regression can be used.
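
A minimal regression sketch using the real xgboost.XGBRegressor API; the grayscale feature extraction around the stenosis, the extra statistics and the training data are placeholders chosen for illustration and are not taken from the original disclosure.

    import numpy as np
    from xgboost import XGBRegressor

    def stenosis_color_features(image, location_xy, window=15):
        """Color (grayscale) features of the image around the stenosis location."""
        x, y = location_xy
        patch = image[max(y - window, 0):y + window, max(x - window, 0):x + window]
        return np.array([patch.mean(), patch.min(), patch.std()])

    def train_ffr_model(feature_rows, measured_ffr_values):
        """Gradient-boosted regression from image features to an FFR estimate."""
        model = XGBRegressor(n_estimators=200, max_depth=4)
        model.fit(np.vstack(feature_rows), np.asarray(measured_ffr_values))
        return model

    def estimate_ffr(model, image, location_xy):
        features = stenosis_color_features(image, location_xy)[None, :]
        return float(model.predict(features)[0])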

[0058] Processor 102 then outputs an indication of the functional measurement to a user (step 508), e.g., via user interface device 106.

[0059] The image of the patient’s vessels may be a grayscale image and the color feature may include shades of grey. Other features may be input to the machine learning model in addition to the color features, for example, morphological and/or shape features.

[0060] Thus, processor 102 determines a functional measurement directly from an image, e.g., by employing machine learning models and classifiers as described above, with no need for user input.

[0061] In some embodiments, an FFR estimate and/or other functional measurements can be obtained during or after stenting, by using the systems and methods described above, namely, obtaining an image of the patient’s vessels during or after stenting and calculating a functional measurement of the vessel based on a color feature of the image at the location of the stent (e.g., at the stent's ends and/or within the stent). Obtaining functional measurements during or after stenting provides information in real-time regarding the success of the stenting procedure.