Title:
ASSESSING ENDOSCOPE CHANNEL DAMAGE USING ARTIFICIAL INTELLIGENCE VIDEO ANALYSIS
Document Type and Number:
WIPO Patent Application WO/2020/096889
Kind Code:
A1
Abstract:
Embodiments of a system and method for training and using a model for assessing endoscope channel damage, based on artificial intelligence image and video analysis techniques, are generally described. In example embodiments, an analysis is performed by: receiving image data captured from a borescope advancing in a lumen of an endoscope; evaluating the image data with a trained artificial intelligence model to generate a predicted state of the lumen from the image data, with the model trained to generate the predicted state based on a feature identified from the image data; and outputting data that represents the predicted state of the lumen of the endoscope. Corresponding techniques for training the model are also performed by: receiving training image data captured from a lumen of an endoscope; performing feature extraction; training a model based on the identified features; and outputting the model for use in analyzing images.

Inventors:
CHEONG MER WIN (US)
Application Number:
PCT/US2019/059356
Publication Date:
May 14, 2020
Filing Date:
November 01, 2019
Assignee:
MEDIVATORS INC (US)
International Classes:
A61B1/00; A61B1/04; A61B90/70; B08B7/04
Foreign References:
US 20120316421 A1 (2012-12-13)
US 20180084162 A1 (2018-03-22)
US 20110255763 A1 (2011-10-20)
US 20170140528 A1 (2017-05-18)
US 20020165837 A1 (2002-11-07)
US 20190038791 A1 (2019-02-07)
Attorney, Agent or Firm:
ACKLEY, James (US)
Claims:
CLAIMS

What is claimed is:

1. A method of artificial intelligence analysis for endoscope inspection, comprising: receiving image data captured from a borescope, the image data capturing at least one image of a lumen of an endoscope;

analyzing the image data with a trained artificial intelligence model to generate a predicted state of the lumen from the image data, the trained model configured to generate the predicted state based on a trained feature identified from the at least one image of the lumen; and

outputting data that represents the predicted state of the lumen of the endoscope.

2. The method of claim 1, further comprising:

performing feature extraction on the at least one image of the lumen to identify a subject feature;

wherein the subject feature is correlated to the trained feature based on a classification produced from the trained model, the trained model utilizing a trained machine learning algorithm.

3. The method of claim 2, wherein the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

4. The method of claim 3, wherein the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective classification words.

5. The method of claim 4, wherein the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

6. The method of claim 1, wherein the trained model is further configured to generate the predicted state based on a feature set including the trained feature, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

7. The method of claim 1, further comprising:

performing image pre-processing on the image data, before analyzing the image data with the trained model.

8. The method of claim 7, wherein the image pre-processing comprises at least one of: noise removal, an illumination change, geometric transformation, color mapping, resizing, cropping, or segmentation, to the at least one image of the lumen.

9. The method of claim 1, wherein the predicted state of the lumen comprises a damage state.

10. The method of claim 9, wherein the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

11. The method of claim 1, wherein the trained model is a machine learning classifier.

12. The method of claim 11, wherein the machine learning classifier is a support vector machine classifier, random forest classifier, gradient boosting classifier, or a contribution-weighted combination of multiple classifiers.

13. The method of claim 11, wherein the machine learning classifier indicates a classification label associated with an integrity state or a state of damage.

14. The method of claim 1, wherein the trained model is a deep neural network, convolutional neural network, or recurrent neural network.

15. The method of claim 1, wherein the image data comprises a plurality of images extracted from a video, and wherein the analyzing of the image data with the trained artificial intelligence model is performed using multiple of the plurality of images extracted from the video.

16. The method of claim 1, wherein the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

17. A computing device, comprising:

at least one processor; and

at least one memory device including instructions embodied thereon, wherein the instructions, when executed by the processor, cause the processor to perform artificial intelligence analysis operations for endoscope inspection, with the operations comprising: receiving image data captured from a borescope, the image data capturing at least one image of a lumen of an endoscope;

analyzing the image data with a trained artificial intelligence model to generate a predicted state of the lumen from the image data, the trained model configured to generate the predicted state based on a trained feature identified from the at least one image of the lumen; and

outputting data that represents the predicted state of the lumen of the endoscope.

18. The computing device of claim 17, the operations further comprising:

performing feature extraction on the at least one image of the lumen to identify a subject feature;

wherein the subject feature is correlated to the trained feature based on a classification produced from the trained model, the trained model utilizing a trained machine learning algorithm.

19. The computing device of claim 18, wherein the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

20. The computing device of claim 19, wherein the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective classification words.

21. The computing device of claim 20, wherein the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

22. The computing device of claim 17, wherein the trained model is further configured to generate the predicted state based on a feature set including the trained feature, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

23. The computing device of claim 17, the operations further comprising:

performing image pre-processing on the image data, before analyzing the image data with the trained model.

24. The computing device of claim 23, wherein the image pre-processing comprises at least one of: noise removal, an illumination change, geometric transformation, color mapping, resizing, cropping, or segmentation, to the at least one image of the lumen.

25. The computing device of claim 17, wherein the predicted state of the lumen comprises a damage state.

26. The computing device of claim 25, wherein the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

27. The computing device of claim 17, wherein the trained model is a machine learning classifier.

28. The computing device of claim 27, wherein the machine learning classifier is a support vector machine classifier, random forest classifier, or gradient boosting classifier.

29. The computing device of claim 27, wherein the machine learning classifier indicates a classification label associated with an integrity state or a state of damage.

30. The computing device of claim 17, wherein the trained model is a deep neural network, convolutional neural network, or recurrent neural network.

31. The computing device of claim 17, wherein the image data comprises a plurality of images extracted from a video, and wherein the analyzing of the image data with the trained artificial intelligence model is performed using multiple of the plurality of images extracted from the video.

32. The computing device of claim 17, wherein the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

33. The computing device of claim 17, further comprising:

a user input device to receive input to select the image data and control the analyzing of the image data; and

a user output device to output a representation of data that represents the predicted state of the lumen of the endoscope.

34. A machine-readable storage medium including instructions, wherein the instructions, when executed by a processing circuitry of a computing system, cause the processing circuitry to perform operations of any of claims 1 to 33.

35. A system, comprising:

a borescope adapted to capture at least one digital image from a working channel of an endoscope; and

a visual inspection computing system, comprising a memory device and processing circuitry, the processing circuitry adapted to:

obtain the at least one digital image of the endoscope working channel;

analyze the at least one digital image with a trained artificial intelligence model to generate a predicted state of the working channel, wherein the trained model is configured to generate the predicted state based on a trained feature identified from the at least one digital image of the working channel; and

an output device adapted to provide a representation of data that represents the predicted state of the working channel of the endoscope.

36. The system of claim 35, further comprising:

a borescope movement device, adapted to advance the borescope within the working channel of the endoscope at a controlled rate to capture the at least one digital image of the endoscope working channel.

37. A method of training an artificial intelligence model for endoscope inspection, comprising:

receiving training data, the training data comprising image data capturing at least one image of a lumen of an endoscope, and respective labels associated with the at least one image;

performing feature extraction on the at least one image of the lumen to identify respective subject features;

training a model to associate the respective subject features with respective predicted states; and

outputting the model for use in analyzing subsequent images, to generate a predicted state of an image of a lumen from a subject endoscope.

38. The method of claim 37, wherein the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

39. The method of claim 38, wherein the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective words.

40. The method of claim 39, wherein the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

41. The method of claim 37, wherein the model is further trained to generate the predicted state based on a feature set including the subject features, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

42. The method of claim 37, further comprising:

performing image pre-processing on the image data, before performing the feature extraction.

43. The method of claim 42, wherein the image pre-processing comprises at least one of: noise removal, an illumination change, cropping, geometric transformation, color mapping, resizing, or segmentation, to the at least one image of the lumen.

44. The method of claim 37, wherein the predicted state of the lumen comprises a damage state.

45. The method of claim 44, wherein the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

46. The method of claim 37, wherein the model is trained as a machine learning classifier comprising a support vector machine classifier, random forest classifier, or gradient boosting classifier.

47. The method of claim 46, wherein the machine learning classifier is adapted to provide a classification label associated with an integrity state or a state of damage.

48. The method of claim 37, wherein the image data comprises a plurality of images extracted from a video, and wherein the training of the model is performed using multiple of the plurality of images extracted from the video.

49. The method of claim 37, wherein the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

50. A computing device, comprising:

at least one processor; and

at least one memory device including instructions embodied thereon, wherein the instructions, when executed by the processor, cause the processor to perform artificial intelligence analysis operations for training an artificial intelligence model for endoscope inspection, with the operations comprising:

receiving training data, the training data comprising image data capturing at least one image of a lumen of an endoscope, and respective labels associated with the at least one image;

performing feature extraction on the at least one image of the lumen to identify respective subject features;

training a model to associate the respective subject features with respective predicted states; and

outputting the model for use in analyzing subsequent images, to generate a predicted state of an image of a lumen from a subject endoscope.

51. The computing device of claim 50, wherein the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

52. The computing device of claim 51, wherein the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective words.

53. The computing device of claim 52, wherein the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

54. The computing device of claim 50, wherein the model is further trained to generate the predicted state based on a feature set including the subject features, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

55. The computing device of claim 50, the operations further comprising:

performing image pre-processing on the image data, before performing the feature extraction.

56. The computing device of claim 55, wherein the image pre-processing comprises at least one of: noise removal, an illumination change, cropping, geometric transformation, color mapping, resizing, or segmentation, to the at least one image of the lumen.

57. The computing device of claim 50, wherein the predicted state of the lumen comprises a damage state.

58. The computing device of claim 57, wherein the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

59. The computing device of claim 50, wherein the model is trained as a machine learning classifier comprising a support vector machine classifier, random forest classifier, or gradient boosting classifier.

60. The computing device of claim 59, wherein the machine learning classifier is adapted to provide a classification label associated with an integrity state or a state of damage.

61. The computing device of claim 50, wherein the image data comprises a plurality of images extracted from a video, and wherein the training of the model is performed using multiple of the plurality of images extracted from the video.

62. The computing device of claim 50, wherein the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

63. A machine-readable storage medium including instructions, wherein the instructions, when executed by a processing circuitry of a computing system, cause the processing circuitry to perform operations of any of claims 37 to 62.

Description:
ASSESSING ENDOSCOPE CHANNEL DAMAGE USING ARTIFICIAL INTELLIGENCE VIDEO ANALYSIS

PRIORITY CLAIM

[0001] This application claims priority to and the benefit of U.S. Provisional Application Serial No. 62/755,922, filed on November 5, 2018, entitled ASSESSING ENDOSCOPE CHANNEL DAMAGE USING ARTIFICIAL INTELLIGENCE VIDEO ANALYSIS, which is herein incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] Embodiments pertain to the cleaning and reprocessing of reusable medical equipment. Some embodiments more specifically relate to automated inspection and detection processes to identify damage from within the interior channels of an endoscope.

BACKGROUND

[0003] Specific decontamination procedures and protocols are utilized to clean reusable medical equipment. As one example in the medical setting, endoscopes that are designed for use in multiple procedures must be fully cleaned and reprocessed after a medical imaging procedure to prevent the spread of infectious organisms. Once an endoscope is used in a medical procedure, it is considered contaminated until it is properly cleaned and disinfected through a series of specific cleaning actions.

[0004] A number of protocols and assisting equipment for cleaning, disinfection, and inspection are used by current medical practices to reprocess endoscopes and prepare them for subsequent procedures. For example, various machines and devices such as automated endoscope reprocessors are used to perform deep cleaning of an endoscope, through the application of disinfecting cleaning solutions. High-level disinfection or sterilization processes are typically performed after manual cleaning to remove any remaining amounts of soils and biological materials. However, an endoscope is not considered ready for high-level disinfection or sterilization until it has been inspected and verified to function correctly, without any damage or leaking parts. If the endoscope includes damaged surfaces, leaks, broken controls, or the like, the endoscope may not be fully exposed to deep cleaning by the disinfecting chemicals, and the opportunity for spreading contamination significantly increases.

[0005] During existing manual cleaning procedures, a human technician may inspect the endoscope for damage and perform various types of inspections, verifications, or tests on the external surfaces and operational components of the endoscope. However, many types of contaminants and damage within the interior channels of the endoscope are not readily visible or observable by a human. Therefore, there is a need to improve cleaning processes of endoscopes to reduce the incidence and amount of residual biological material, as well as a need to improve inspection processes to detect residual biological material or damage to the endoscope.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 illustrates an overview of devices and systems involved in stages of endoscope use and reprocessing, according to various examples discussed herein;

[0007] FIG. 2 is a schematic cross-section illustration of an endoscope, operable according to various examples discussed herein;

[0008] FIG. 3 illustrates data flows provided with a cleaning workflow and tracking system, during respective stages of endoscope use and processing, according to various examples discussed herein;

[0009] FIG. 4 is a block diagram of system components used to interface among imaging, tracking, and processing systems, according to various examples discussed herein;

[0010] FIG. 5 illustrates a flowchart providing an overview of classifier training for analysis of endoscope lumen imaging data, according to various examples discussed herein;

[0011] FIG. 6 illustrates a flowchart providing an overview of classifier usage for analysis of endoscope lumen imaging data, according to various examples discussed herein;

[0012] FIG. 7 is a block diagram of model training and usage for analysis of endoscope lumen imaging data, according to various examples discussed herein;

[0013] FIG. 8 illustrates aspects of feature selection and extraction from endoscope lumen imaging data, according to various examples discussed herein;

[0014] FIGS. 9A and 9B respectively illustrate a flowchart of example methods of performing artificial intelligence analysis for endoscope inspection and training a model to perform such artificial intelligence analysis, according to various examples discussed herein; and

[0015] FIG. 10 is a block diagram of architecture for an example computing system used in accordance with some embodiments.

DETAILED DESCRIPTION

[0016] The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.

[0017] Various techniques for automated detection of endoscope damage with the use of imaging analysis are described herein. These techniques include the use of software and classification systems that are adapted to assess the state or degree of damage in an endoscope lumen, using computer vision processing logic and machine learning algorithms. These techniques, which may be used as part of a manual cleaning and visual inspection protocol, can be used to identify scratches, nicks, cracks, burns, stains, perforation, or other defects in the endoscope lumen. For instance, the techniques can enable analysis of an internal suction channel or the biopsy channel (the portion of the suction channel in which instruments are advanced and retracted), which is at a high risk of contamination if such defects are present.

[0018] In an example, imaging data, in the form of captured images, captured video, or a live image/video feed, is obtained from a borescope or other camera sensor and provided as input to a trained machine learning classifier model. Unique and localized information is extracted and enhanced from the imaging data using image segmentation and other image pre-processing techniques. The pre-processed imaging data is then fed into a machine learning classifier model to assess the state or degree of the damage of the respective channel. This classifier may be designed to produce an integrity classification (e.g., damaged, not damaged), a specific type of integrity classification (e.g., scratched, contaminated, discolored), a measurement (degree) or confidence level of classification (e.g., the amount or number of scratches observed, or a confidence level in a scratch classification being present), and other types of labels and outputs (e.g., data indicating a combination of which damage is detected, and a confidence level measurement of various types of damage, such as: scratched 97%, peeling 60%, and discoloration 2%). The classification of a specific type of damage may be based on factors additional to the observations measured in a specific image. These factors include, but are not limited to: a number of scratches, observed biofilm growth, a degree of discoloration, or device history.

[0019] In an example, discussed and illustrated in more detail below, a training process may employ a support vector machine (SVM) binary classifier that is trained to produce a classification of an outcome state (e.g., detected damage or no detected damage) based on various extracted features of endoscope lumen images. In this example, the classifier model may be trained upon extracted information from training images and labels until a desired model accuracy is achieved for the analysis of variable endoscope lumen conditions. Accordingly, this trained classifier model may be used in a prediction process to analyze new images captured from within endoscope lumens during reprocessing, to identify if a damaged or compromised condition state is present before attempting high-level disinfection or sterilization (e.g., with an automated cleaning machine). The trained classifier model may be further configured to identify particular damage states and defects, such as surface irregularities, scratches and fissures, material deposits, discoloration, or other defects or abnormal states, and the level of damage or the confidence in classification of such state.
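
As a purely illustrative sketch of the training and prediction flow described in this example (the patent does not prescribe a toolkit; scikit-learn, the 0.90 accuracy target, and the function names below are assumptions), the SVM binary classifier might be implemented as follows, operating on feature vectors produced by the extraction steps discussed with FIG. 5:

```python
# Minimal sketch, not the patented implementation: train a damaged / not-damaged
# SVM on pre-extracted lumen feature vectors, then score new images with a
# per-class confidence. Library choice and accuracy target are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_damage_classifier(features: np.ndarray, labels: np.ndarray,
                            target_accuracy: float = 0.90) -> SVC:
    """Train the binary classifier and check that a desired accuracy is achieved."""
    x_train, x_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    model = SVC(kernel="rbf", probability=True)  # probability=True enables confidence outputs
    model.fit(x_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(x_test))
    if accuracy < target_accuracy:
        raise RuntimeError(f"accuracy {accuracy:.2f} below target; gather more labeled images")
    return model

def predict_lumen_state(model: SVC, feature_vector: np.ndarray) -> dict:
    """Return a confidence level for each class (e.g., damaged vs. not damaged)."""
    probabilities = model.predict_proba(feature_vector.reshape(1, -1))[0]
    return dict(zip(model.classes_, probabilities))
```

The multi-type confidence outputs contemplated in the preceding paragraphs (e.g., separate confidence levels for scratches, peeling, and discoloration) could be produced with the same pattern by training one such classifier per damage type, or a single multi-class classifier.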

[0020] As referred to herein, references to the trained “model”, “algorithm”, or artificial intelligence “software” may refer to a variety of forms of binaries, data structures, or software, which include logic and/or organized information to analyze imaging data and features of such data. Such software may be hosted and executed at a variety of locations, in both client and server settings, including at a single machine or at multiple machines, devices, computing systems, and like electronic or mechanical apparatuses. Such software may be provided in a wide variety of artificial intelligence approaches, including with use of the machine learning classifiers discussed in many of the examples, or with the use of artificial neural networks (e.g., convolutional or “deep” neural networks) which involve supervised or unsupervised learning approaches.

[0021] Many aspects of the following techniques are discussed in reference to a cleaning and sterilization scenario in a clinical setting, commonly referred to as endoscope reprocessing. However, it will be understood that the techniques for damage detection of endoscope channels may also be applicable in other settings, such as repair, refurbishment, and quality assurance testing. Likewise, although the following techniques are discussed with reference to endoscope channel inspection, it will be understood that such techniques may also be applicable to other forms of reusable medical, industrial, or scientific equipment, in which inspection of a lumen or internal cavity is difficult or infeasible.

[0022] FIG. 1 illustrates an overview of devices and systems involved in example stages of endoscope use and reprocessing. In the environment illustrated in FIG. 1, a series of stages are sequentially depicted for use and handling of the endoscope, transitioning from a procedure use stage 110, to a manual reprocessing stage 120, to an automated reprocessing stage 140, to a storage stage 150. It will be understood that the stages 110, 120, 140, 150 as depicted and described provide a simplified illustration of typical scenarios in the use, handling, and reprocessing of reusable endoscopes. As a result, many additional steps and the use of additional devices and procedures (or substitute procedures and substitute devices) may be involved in the respective stages.

[0023] The procedure use stage 110 depicts a human user 112 (e.g., technician, nurse, physician, etc.) who handles an endoscope. At the commencement of the procedure use stage 110, the endoscope 116A is obtained in a sterile or high-level disinfected/clean state. This disinfected/clean state typically results from reprocessing and storage of the endoscope 116A, although the state may also be provided from a disinfected repair or factory-provided state (not shown). In the procedure use stage 110, the endoscope 116A may be used for various endoscopic procedures (e.g., colonoscopy, upper endoscopy, etc.) on a subject human patient, for any number of diagnostic or therapeutic purposes. During the endoscopic procedures, the endoscope 116A is exposed to biological material from the subject patient or the surrounding environment. Thus, at the completion of the procedure use stage 110, the endoscope 116A exists in a contaminated state.

[0024] The disinfected or contamination state of the endoscope 116A may be tracked by a tracking system for purposes of monitoring, auditing, and other aspects of workflow control. An interface 114 to the tracking system is shown, which receives an identifier of the endoscope 116A and provides a graphical status as output. The tracking system may be used in the procedure use stage 110 (and the other stages 120, 140, 150) to identify the use of the endoscope 116A to be associated with a particular imaging procedure, patient, procedure equipment, procedure room, preparation or cleaning protocol, or other equipment or activities. This identifying information may enable the tracking system to track the contamination or disinfected state of the endoscope, and to identify and prevent exposure of contamination or infectious agents to patients or handling personnel from damaged endoscopes or improper cleaning procedures.

[0025] After the procedure use stage 110, the endoscope transitions to handling in a manual reprocessing stage 120. The manual reprocessing stage 120 specifically depicts manual cleaning activities being performed by a technician 122 to clean the endoscope 116B. The types of manual cleaning activities may include disassembly and removal of components, applying brushes to clear channels, wiping to remove visible liquids and solids, and other human-performed cleaning actions. Some of the manual cleaning activities may occur according to a regulated sequence or manufacturer-specified instructions.

[0026] The manual reprocessing stage 120 also depicts the use of a flushing aid device 128 and a borescope 126 to conduct additional aspects of cleaning and inspection. In an example, the flushing aid device 128 serves to perform an initial chemical flush of the internal channels of the endoscope 116B (e.g., water, air, or suction channels) with cleaning agents. The flushing aid device 128 may also enable the performance of leak testing, to verify whether components or structures of the endoscope leak fluid (e.g., leak water or air). In other examples, the flushing or leak test actions of the flushing aid device 128 are instead performed manually by syringing chemicals or air into the endoscope channels. The results of the leak testing and the flushing may be tracked or managed as part of a device tracking or cleaning workflow.

[0027] In an example, the borescope 126 is used as part of an inspection process, such as to inspect an interior lumen of a channel in the endoscope 116B. This may include the inspection of a channel of the endoscope 116B used for instrument insertion and extraction (e.g., the biopsy channel portion of the suction channel). The borescope 126 may be inserted and moved (e.g., retracted or advanced) by a human or a machine within one or more lumens of the endoscope 116B to perform the inspection process. This inspection process may occur before or after the performance of the leak test, flushing, or other cleaning or testing activities in the manual reprocessing stage 120.

[0028] The borescope 126 may produce image data 132 (e.g., one or more images, such as a video) that provides a detailed, high-resolution view of the status of a channel of the endoscope 116B. The image data 132 may be provided to a computing system 130 for processing and analysis. The borescope 126 may be operated as part of a borescope inspection system, which provides controlled or mechanized advancement and movement of the borescope 126 within an inspection procedure. The results of the borescope inspection procedure may be tracked or managed as part of a device tracking or cleaning workflow, including with the aforementioned tracking system.

[0029] In an example, the computing system 130 is provided by a visual inspection processing system that uses a trained artificial intelligence (e.g., machine learning) model to analyze image data 132 and identify a state of the endoscope channel. For instance, the state of the endoscope channel may include no detected abnormalities (e.g., an integrity state), or a detected presence of biological material or of channel damage (e.g., a compromised state). Further examples of the borescope inspection system and the visual inspection processing system are provided in the examples discussed below.

[0030] After completion of the manual reprocessing stage 120, the endoscope is handled in an automated reprocessing stage 140. This may include the use of an automatic endoscope reprocessor (AER) 142, or other machines which provide a high-level disinfection and cleaning of the endoscope. For instance, the AER 142 may perform disinfection for a period of time (e.g., for a period of minutes) to expose the interior channels and exterior surfaces of the endoscope to deep chemical cleaning and disinfectant solutions. The AER 142 may also perform rinsing procedures with clean water to remove chemical residues.

[0031] After completion of the automated reprocessing stage 140 and the production of the endoscope in a disinfected state, the endoscope transitions to handling in a storage stage 150. This may include the storage of the endoscope in a sterile storage unit 152. In some examples, this stage may also include the temporary storage of the endoscope in a drying unit. Finally, retrieval of the endoscope from the storage stage 150 for use in a procedure results in transitioning back to the procedure use stage 110.

[0032] The overall cleaning workflow provided for an endoscope within the various reprocessing stages 120 and 140 may vary according to the specific type of device, device specific requirements and components, regulations, and the types of cleaning chemicals and devices applied. However, the overall device use and cleaning workflow, relative to stages of contamination, may be generally summarized in stages 110, 120, 140, 150, as involving the following steps:

[0033] 1) Performance of the endoscopic procedure. As will be well understood, the endoscopic procedure results in the highest amount of contamination, as measured by the amount of microbes contaminating the endoscope.

[0034] 2) Bedside or other initial post-procedure cleaning. This cleaning procedure removes or reduces the soils and biological material encountered on the endoscope during the endoscopic procedure. As a result, the amount of contamination, as measured by the amount of microbes, is reduced.

[0035] 3) Transport to reprocessing. The more time that elapses between the procedure and reprocessing, the greater the potential increase in the amount of contamination, or in the difficulty of removing the contamination, due to biological materials drying, congealing, growing, etc.

[0036] 4) Performance of a leak test (e.g., conducted in the manual reprocessing stage 120 with the flushing aid device 128 or a standalone leak testing device or procedure (not shown)). This leak test is used to verify whether any leaks exist within channels, seals, controls, valve housings, or other components of the endoscope. If the endoscope fails the leak test, or encounters a blockage during flushing, then high-level disinfection or sterilization attempted in automated reprocessing will be unable to fully flush and disinfect all areas of the endoscope. Further, if the leak test fails but the instrument is placed in an automatic reprocessing machine, the instrument will be damaged through fluid ingress during the reprocessing cycle.

[0037] 5) Manual washing (e.g., conducted in the manual reprocessing stage 120 with brushes, flushing, etc.). This aspect of manual washing is particularly important to remove biofilm and lodged biological agents from spaces on or within the endoscope. Biofilm generally refers to a group of microorganisms that adheres to a surface, which may become resistant or impervious to cleaning and disinfectant solutions. The successful application of manual washing significantly reduces the amount of contamination on the endoscope.

[0038] 6) Damage inspection (e.g., conducted in the manual reprocessing stage 120 with borescope inspection and artificial intelligence image processing). Microbes and in particular biofilm may resist cleaning if lodged in damaged or irregular portions of the endoscope. A procedure of damage inspection can be used to identify surface irregularities, scratches and fissures, or other defects or abnormal states (e.g., a compromised state) within the interior channels, exterior surfaces, or components of the endoscope. This damage inspection may also be accompanied by the detection of biological materials (such as biofilms) which remain after manual washing. Such damage inspection may be performed with use of a borescope inspection system, visual inspection system, and other mechanisms discussed herein.

[0039] 7) High level disinfection or sterilization (e.g., conducted in AER 142). Upon successful conclusion of the high-level disinfection or sterilization process, in an ideal state for an endoscope with no damage, no biological contamination will remain from the original endoscopic procedure.

[0040] 8) Rinse and Air Purge. This stage involves the introduction of clean water and air, to flush any remaining chemical solution and to place the endoscope in a disinfected and clean state. The risk of introducing new contamination may be present if contaminated water or air are introduced to the endoscope.

[0041] 9) Transport to Storage. This stage involves the transport from the AER or other device to storage. A risk of introducing new contamination may be present based on the method and environment of transport and handling.

[0042] 10) Storage. This stage involves the storage of the endoscope until needed for a procedure. A risk of introducing new contamination may be present based on the conditions in the storage unit.

[0043] 11) Transport to Patient. Finally, the endoscope is transported for use in a procedure. A risk of introducing new contamination may also be present based on the method and environment of transport and handling.

[0044] Further aspects which may affect contamination may involve the management of valves and tubing used with a patient. For instance, the use of reusable valves, tubing, or water bottles in the procedure may re-introduce contamination to the endoscope. Accordingly, the disinfected state of a processed endoscope can only be provided in connection with the use of other sterile equipment and proper handling in a clean environment.

[0045] FIG. 2 is a schematic cross-section illustration of an endoscope 200, operable according to various examples. The endoscope 200 as depicted includes portions that are generally divided into a control section 202, an insertion tube 204, a universal cord 206, and a light guide section 208. A number of imaging, light, and stiffness components and related wires and controls used in endoscopes are not depicted for simplicity. Rather, FIG. 2 is intended to provide a simplified illustration of the channels important for endoscope cleaning workflows. It will be understood that the presently discussed endoscope cleaning workflows will be applicable to other form factors and designs of endoscopes. The techniques, systems, and apparatus discussed herein can also be utilized for inspection operations on other instruments that include lumens that can become contaminated or damaged during use.

[0046] The control section 202 hosts a number of controls used to actuate the positioning, shape, and behavior of the endoscope 200. For instance, if the insertion tube 204 is flexible, the control section 202 may enable the operator to flex the insertion tube 204 based on patient anatomy and the endoscopic procedure. The control section 202 also includes a suction valve 210 allowing the operator to controllably apply suction at a nozzle 220 via a suction channel 230. The control section 202 also includes an air/water valve 212 which allows the distribution of air and/or water from an air channel 232 (provided from an air pipe source 218) or a water channel 228 (provided from a water source connected to a water source connector 224) to the nozzle 220. The depicted design of the endoscope 200 also includes a water jet connector 222 via a water-jet channel 226, to provide additional distribution of water separate from the air channel 232.

[0047] The universal cord 206 (also known as an “umbilical cable”) connects the light guide section 208 to the control section 202 of the endoscope. The light guide section 208 provides a source of light which is distributed to the end of the insertion tube 204 using a fiber optic cable or other light guides. The imaging element (e.g., camera) used for capturing imaging data may be located in the light guide section 208 or adjacent to the nozzle 220.

[0048] As shown, the various channels of the endoscope 200 allow the passage of fluids and objects, which may result in the contamination throughout the extent of the channels. The portion of the suction channel 230 which extends from the biopsy valve 214 to the distal end of the insertion tube 204 (to the nozzle 220) is also known as the biopsy channel. In particular, the biopsy channel, and the remainder of the suction channel 230, is subject to a high likelihood of contamination and/or damage in the course of an endoscopic procedure. For example, the insertion, manipulation, and extraction of instruments (and biological material attached to such instruments) through the suction channel 230 commonly leads to the placement of microbes within the suction channel 230.

[0049] Any damage to the interior layer(s) of the biopsy channel, such as scratches, nicks, or other depressions or cavities in the interior surface caused by instruments moving therein, may also lead to deposits of biological material. Such biological material which remains in cavities, or which congeals in the form of biofilm, may be resistant to many manual cleaning techniques such as brushes pulled through the suction channel. Such damage may also occur in the other channels 228, 230, 232, as a result of usage, deterioration, or failure of components. The techniques discussed herein provide enhanced capabilities for the inspection and verification of the integrity of the channels 228, 230, 232, including integrity from damage or defects, and/or integrity from deposited biological materials and contamination.

[0050] FIG. 3 illustrates data flows 300 provided with an example cleaning workflow and tracking system 380, during respective stages of endoscope use and processing, including the use of a borescope inspection system 350 and visual inspection processing system 360 used to perform an integrity verification of one or more endoscope channels.

[0051] The data flows 300 specifically illustrate the generation and communication of data as an endoscope is handled or used at various locations. These include: status of the endoscope at a storage facility 310 (e.g., the storage unit 152 in the storage stage 150), as indicated via status data (e.g., a location and sterilization status of the endoscope); status of the use of the endoscope at a procedure station 320 (e.g., as handled in the procedure use stage 110), as indicated via procedure data (e.g., an identification of a patient, physician, and handling details during the procedure); status of the testing of the endoscope at a testing station 330 (e.g., at a leak or component test device), as indicated via test result data (e.g., a pass or fail status of a test, measurement values, etc.); status of the manual cleaning actions performed at a manual cleaning station 340 (e.g., as performed by the technician 122), as indicated by inspection data (e.g., a status that logs the timing and result of inspection procedures, cleaning activities); and a status of the machine cleaning actions performed at an automated cleaning station 370 (e.g., as performed by the AER 142), as indicated by cleaning result data (e.g., a status that logs the procedures, chemicals, timing of automated reprocessing activities). Such statuses and data may be communicated for storage, tracking, maintenance, and processing, at a cleaning workflow and tracking system 380 (and databases operated with the system 380).

[0052] The location of the endoscope among the stations, and the activities performed with the endoscope, may be tracked in connection with a specific device handling workflow. Such a workflow may include a step-by-step cleaning procedure, maintenance procedures, or a tracking workflow, to track and manage a disinfected or contaminated status, operational or integrity status, or cleaning procedure status of the endoscope components or related equipment. In connection with cleaning operations at the manual cleaning station 340 or the automated cleaning station 370, the subject endoscope may be identified using a tracking identifier unique to the endoscope, such as a barcode, RFID tag, or other identifier coupled to or communicated from the endoscope. For instance, the manual cleaning station 340 and automated cleaning station 370 may host an identifier detector to receive identification of the particular endoscope being cleaned at the respective cleaning station. In an example, the identifier detector comprises an RFID interrogator or bar code reader used to perform hands-free identification.

[0053] Additionally, in connection with a cleaning workflow, tracking workflow, or other suitable device handling workflow, a user interface may be output to a human user via a user interface device (e.g., a display screen, audio device, or combination). For example, the user interface may request input from the human user to verify whether a particular cleaning protocol has been followed by the human user at each of the testing station 330, manual cleaning station 340, and automated cleaning station 370. A user interface may also output or receive modification of the status in connection with actions at the storage facility 310 and the procedure station 320. The input to such a user interface may include any number of touch or touch-free (e.g., gesture, audio command, visual recognition) inputs, such as with the use of touchless inputs to prevent contamination with an input device.

[0054] In various examples, input recognition used for control or identification purposes may be provided within logic or devices of any of the stations 310, 320, 330, 340, 370, or as interfaces or controls to the borescope inspection system 350 or the visual inspection processing system 360. In still further examples, patients, cleaning personnel, technicians, and users or handlers of the endoscope may be tracked within the data values communicated to the cleaning workflow and tracking system 380. The interaction with the cleaning workflow and tracking system 380 may also include authentication and logging of user identification information, including validation of authorized users to handle the device, or aspects of user-secure processing.

[0055] A variety of inquiries, prompts, or collections of data may occur at various points in a device cleaning or handling workflow, managed by the cleaning workflow and tracking system 380, to collect and output relevant data. Such data may be managed for procedure validation or quality assurance purposes, for example, to obtain human verification that a cleaning process has followed proper protocols, or that human oversight of the cleaning process has resulted in a satisfactory result. Workflow steps may also be required by the workflow and tracking system 380 to be performed in a determined order to ensure proper cleaning, and user inquiries and prompts may be presented in a determined order to collect full information regarding compliance or procedure activities. Further, the cleaning workflow and tracking system 380 may be used to generate an alert or display appropriate prompts or information if a user or device does not fully complete certain steps or procedures.

[0056] FIG. 4 is a block diagram of system components used to interface among example imaging, tracking, and processing systems. As shown, the components of the borescope inspection system 350 may include a borescope imaging device 352, which is operably coupled to a movement control device 354. The borescope inspection system 350 may provide video or still imaging output in connection with imaging of one or more internal channels of the endoscope 410. The movement control device 354 may also track a position of the borescope 352 in the imaged channel as the borescope 352 is actively moved (advanced or retracted), and associate or embed position data with the imaging data. The use of the borescope inspection system 350 may be tracked and managed as part of an inspection procedure in a cleaning workflow, with resulting tracking and inspection data facilitated by the cleaning workflow and tracking system 380.

[0057] The cleaning workflow and tracking system 380 may include functionality and processing components used in connection with a variety of cleaning and tracking purposes involving the endoscope 410. Such components may include device status tracking management functionality 422 that utilizes a device tracking database 426 to manage data related to status(es) of contamination, damage, tests, and usage for the endoscope 410 (e.g., among any of the stages 110, 120, 140, 150). Such components may also include a device cleaning workflow management functionality 424 used to track cleaning, testing, and verification activities initiated as part of a cleaning workflow for the endoscope 410 (e.g., among the reprocessing stages 120, 140). As specific examples, the workflow management database 428 may log the timing and performance of specific manual or automatic cleaning actions, the particular amount or type of cleaning or disinfectant solution applied, which user performed the cleaning action, and the like.
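
As a purely illustrative sketch (the patent does not define a database schema, and every field name below is an assumption), a record logged by the workflow management database 428 for a single cleaning action might resemble:

```python
# Illustrative sketch only: one possible shape for entries logged by the
# workflow management database 428. All field names are assumptions and are
# not part of the disclosure.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CleaningActionRecord:
    endoscope_id: str              # tracking identifier (e.g., barcode or RFID value)
    station: str                   # e.g., "manual_cleaning_340" or "automated_cleaning_370"
    action: str                    # e.g., "leak_test", "brush_channels", "high_level_disinfection"
    performed_by: str              # user who performed or supervised the action
    started_at: datetime
    completed_at: datetime
    solution_type: str = ""        # cleaning or disinfectant solution applied, if any
    solution_amount_ml: float = 0.0
    result: str = "pass"           # e.g., "pass", "fail", "needs_reinspection"
```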

[0058] The data and workflow actions in the cleaning workflow and tracking system 380 may be accessed (e.g., viewed, updated, input, or output) through use of a user computing system 430, such as with an input device 432 and output device 434 of a personal computer, tablet, workstation, or smartphone, operated by an authorized user. The user computing system 430 may include a graphical user interface 436 to allow access to the data and workflow actions before, during, or after any of the handling or cleaning stages for the endoscope 410 (e.g., among any of the stages 110, 120, 140, 150). For instance, the user computing system 430 may display a real-time status of whether the endoscope 410 is disinfected, which tests have been completed and passed during cleaning, and the like.

[0059] The visual inspection processing system 360 is shown as also including functionality and processing components used in connection with analysis of data from the borescope inspection system 350, and/or the cleaning workflow and tracking system 380. For instance, video captured by a borescope imaging device 352, advanced within a channel of the endoscope 410 at a particular rate by the movement control device 354, may be captured in real-time through use of inspection video capture processing 362. The respective images or video sequences captured are subjected to image pre-processing 364, such as to enhance, crop, or modify images from the borescope imaging device 352. Finally, respective images or sequences of images are input into an image recognition model 366 for computer analysis of the integrity state of the captured channel. The visual inspection processing performed by the processing system 360 may occur in real time with coordinated use (and potentially, automated or machine-assisted control) of the borescope inspection system 350, or as part of a subsequently performed inspection procedure.
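
One possible arrangement of the capture, pre-processing, and recognition blocks 362, 364, and 366 is sketched below. This is an assumption for illustration: the frame sampling rate, the aggregation rule, and the preprocess/classify callables are placeholders rather than the disclosed implementation.

```python
# Sketch of the FIG. 4 inspection flow: read borescope video, pre-process each
# sampled frame, classify it, and aggregate per-frame results into a channel
# state. preprocess() and classify() stand in for blocks 364 and 366.
import cv2

def inspect_channel(video_path: str, preprocess, classify, frame_stride: int = 10) -> str:
    capture = cv2.VideoCapture(video_path)
    total, compromised = 0, 0
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_stride == 0:             # sample every Nth frame
            total += 1
            label = classify(preprocess(frame))   # e.g., "integrity" or "compromised"
            if label != "integrity":
                compromised += 1
        index += 1
    capture.release()
    if total == 0:
        raise ValueError("no frames could be read from the inspection video")
    # Aggregation policy (an assumption): flag the channel if any sampled frame
    # is classified as compromised.
    return "compromised" if compromised > 0 else "integrity"
```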

[0060] In an example, the image recognition model 366 may be a machine-learning image classifier which is trained to identify normal (full integrity) conditions of an imaged channel lumen, versus abnormal (compromised integrity) conditions of the imaged channel lumen. Such abnormal conditions may include the existence of damage or defects to the lumen, the deposit of biological material on the lumen, etc. Other types and forms of artificial intelligence processing may also be used in combination with the image recognition model 366. The results (e.g., classification or other data outputs) from the image recognition model 366 may be output via the user computing system 430 or recorded in the cleaning workflow and tracking system 380. Additional implementation detail of the image recognition model 366 is provided in the following examples.

[0061] FIG. 5 illustrates a flowchart 500 providing an example overview of classifier training for analysis of endoscope lumen imaging data. This classifier training may be used to produce a trained image recognition model (e.g., model 366) or functional aspects of an image recognition engine. The flowchart 500 is provided from the perspective of a single training location (e.g., computing system) that analyzes a single set of imaging data using a model production process (e.g., an offline learning method). However, it will be understood that the training may be integrated into a model update process (e.g., an online learning method) which involves retraining and incorporating iterative improvements from quality assurance or human-assisted result verification. Further, new imaging data captured with a borescope may be used to train and update the model.

[0062] The flowchart 500 begins at 510 with obtaining imaging data from a lumen inspection, such as an inspection of a suction channel of an endoscope performed by a borescope digital camera. The imaging data may include (e.g., embed) or be associated with imaging characteristics, such as metadata which indicates a type of channel, the location of the image in the channel, the type or characteristics of the imaged endoscope channel or endoscope component, the type or characteristics of the borescope digital camera, and the like. In some examples, this metadata may be used as an input for training the model, to allow for variation in training and the model results based on imaging location, endoscope characteristics, or borescope characteristics.
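
For illustration only (field names are assumptions), the imaging characteristics described above might be carried alongside each captured frame as a small metadata record:

```python
# Illustrative sketch: metadata that might accompany borescope imaging data and
# optionally serve as auxiliary model input. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ImagingMetadata:
    channel_type: str       # e.g., "suction", "biopsy", "air", "water"
    position_mm: float      # location of the frame along the imaged channel
    endoscope_model: str    # type/characteristics of the imaged endoscope
    borescope_model: str    # type/characteristics of the borescope digital camera
    direction: str          # "advancing" or "retracting"
```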

[0063] The flowchart 500 continues at 520 with extracting subject images from the imaging data, and at 530 with performing pre-processing on the extracted subject images. In an example, the extracting may include capturing individual imaging frames from a multi-frame image set (e.g., video data captured with the borescope digital camera). In another example, the extracting may include capturing specific images from a time- or location-based sequence of sequentially captured images (e.g., images captured as the borescope digital camera is advanced or retracted in a direction in the lumen). In an example, the image pre-processing may include performing color enhancement or inversion, illumination changes (e.g., brightness, contrast, or saturation enhancement), segmentation (e.g., to center or crop the image), noise reduction, rotation or translation, or other changes to the size, shape, or illustrated characteristics. For instance, such pre-processing techniques may be designed to provide a standardized input to the training process, while removing or reducing extraneous imaging data. In an example, the borescope captures a 180-degree view of a lumen, projected into a two-dimensional camera image, as the camera is advanced in the lumen; however, the borescope may include specialized lenses or be coupled to image processing features that capture a different degree view or areas of the lumen.
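
By way of illustration only, the pre-processing at 530 might be sketched in OpenCV as follows; the specific operations and parameter values are assumptions, and frame extraction at 520 can reuse a video-reading loop like the one sketched with FIG. 4:

```python
# Sketch of operation 530: standardize an extracted frame before feature
# extraction. The chosen operations and parameters are illustrative assumptions.
import cv2
import numpy as np

def preprocess(frame: np.ndarray, size: int = 256) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # color mapping to grayscale
    gray = cv2.fastNlMeansDenoising(gray, None, 10)     # noise reduction
    gray = cv2.equalizeHist(gray)                       # illumination normalization
    height, width = gray.shape
    side = min(height, width)
    top, left = (height - side) // 2, (width - side) // 2
    cropped = gray[top:top + side, left:left + side]    # crude center crop / segmentation
    return cv2.resize(cropped, (size, size))            # resize to a standard input size
```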

[0064] The flowchart 500 continues at 540 with the labeling of the subject images for defect and non-defect conditions. In an example, an entire image may be labeled for the presence of a defect (e.g., labeled as an image that contains a scratch), or, an area of the image may be labeled for the presence of the defect. Subject images which do not include such defects may also be used in the training method, to provide a basis for learning to distinguish the visible characteristics of a lumen with known integrity and lack of defects.

[0065] The flowchart 500 continues at 550 with the extracting of features from the labeled subject images. These features may include point descriptors and patch descriptors (such as discussed below with reference to FIG. 8). The feature selection and extraction may be based on any number of image characteristics and descriptors, including one or more of such interest point descriptors, patch descriptors, and other image characteristics. In an example, such features are not sensitive to (e.g., are resistant to or unaffected by) noise, rotation, translation, and illumination changes, due to the variability in the imaging that is obtained from the endoscope model. Also in an example, the features that are detected and extracted from a plurality of key points of the image are produced into a feature vector of a plurality of features. In a further specific example, the plurality of features may be clustered for image classification using a bag of visual words model. This may include conversion of vector areas to “codewords” and performing clustering (e.g., k-means clustering) of the vector areas. The resulting clusters may serve as visual words that represent specific classified characteristics (e.g., tear, dirty, discolored, scratched, peeling, buckling, perforation, etc.).
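
A minimal sketch of this clustering step, assuming the OpenCV and scikit-learn libraries, follows; ORB is used here only as a freely available key point detector and patch descriptor, and the codebook size of 50 is a hypothetical choice.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def build_codebook(gray_images, n_codewords=50):
        """Cluster local feature descriptors from training images into visual 'codewords'."""
        orb = cv2.ORB_create(nfeatures=500)          # key point detection + patch descriptors
        descriptor_sets = []
        for img in gray_images:                      # img: grayscale numpy array
            _, descriptors = orb.detectAndCompute(img, None)
            if descriptors is not None:
                descriptor_sets.append(descriptors.astype(np.float32))
        stacked = np.vstack(descriptor_sets)
        # k-means clustering: each cluster center acts as one codeword of the visual vocabulary.
        return KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit(stacked)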

[0066] In an example, the “bag” of visual words is structured to include specific “codewords” used for determining the integrity state, and can be interpreted as a dictionary. A comprehensive bag/dictionary would contain codewords for all relevant damage states: tear, dirty, discoloration, peel, buckling, perforation, and the like. Then, when an image is being analyzed, a histogram of the image is produced. This histogram is binned into the codewords and compared with the dictionary to identify similarity.
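
As a non-limiting example, the histogram binning and comparison described above might be sketched as follows (NumPy assumed); the codebook object is the hypothetical k-means vocabulary from the previous sketch, and histogram intersection is only one of several possible similarity measures.

    import numpy as np

    def visual_word_histogram(descriptors, codebook):
        """Bin an image's descriptors into codewords and return a normalized histogram."""
        words = codebook.predict(descriptors.astype(np.float32))
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        return hist / max(hist.sum(), 1)

    def histogram_similarity(hist_a, hist_b):
        """Histogram intersection: higher values indicate greater similarity to a dictionary entry."""
        return float(np.minimum(hist_a, hist_b).sum())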

[0067] The flowchart 500 concludes at 560 with the production of a classifier model that is trained from the labeled image features. This classifier model may be produced from a support vector machine (i.e., support vector network) or other machine learning classifier (e.g., random forest classifier, or gradient boosting classifier), which uses a learning algorithm to produce the classifier model from the supervised training. In some examples, the training data and classifier model produced within flowchart 500 may comprise a training set with imaging data and labels that are customized to a specific condition, type, or model of endoscope, endoscope component, or lumen characteristic. In other examples, the training data may comprise images which are generic to multiple types or characteristics of endoscopes and lumens. Accordingly, some implementations of the produced classifier model may be trained with a relatively large data set to produce a trained classifier that considers defects and defect identification in the context of many variations of endoscopes and endoscope lumen conditions.
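
A minimal supervised training sketch, assuming scikit-learn, is shown below; the classifier choice, kernel, and label strings are hypothetical and only illustrate the production of such a classifier model from labeled feature vectors.

    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    def train_damage_classifier(feature_vectors, labels, kind="svm"):
        """Fit a supervised classifier on labeled feature vectors (e.g., visual-word histograms)."""
        if kind == "svm":
            model = SVC(kernel="rbf", probability=True)        # support vector machine
        else:
            model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(feature_vectors, labels)                      # labels such as "scratch", "undamaged"
        return model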

[0068] FIG. 6 illustrates a flowchart 600 providing an example overview of classifier usage for analysis of endoscope lumen imaging data. The flowchart 600 illustrates a sequence of operations being performed on a set of captured imaging data for classification and analysis using a trained model, whereas the flowchart 500 discussed above illustrates a sequence of operations being performed on a set of training imaging data for training of this model. However, it will be understood that many of the image extraction, pre-processing, feature extraction and analysis, and classification aspects discussed with reference to the operations of flowchart 500 may also apply to the operations of flowchart 600.

[0069] The flowchart 600 begins with operations at 610 to obtain imaging data from a lumen inspection of a subject endoscope. In an example, this lumen inspection involves the generation of a series (e.g., time-series, and/or location-series) of digital images as captured by a borescope that is advanced within a lumen. The imaging data may include imaging characteristics and metadata, such as relating to the channel, endoscope, or camera, as discussed above (in reference to operations at 510). The flowchart 600 then continues at 620 with extracting subject images from the imaging data, and at 630 with performing pre-processing on the extracted subject images (e.g., similar to the extraction and pre-processing operations at 520, 530). Such pre-processing may be used to extract relevant imaged portion(s) of the lumen and to discard irrelevant pixels or areas of the image from consideration or use in the prediction.

[0070] The flowchart 600 continues at 640 with the extraction of the features of the pre-processed images, such as with point descriptors and patch descriptors. These features are provided directly for analysis by the trained classifier model, at 650, such as with a machine learning classifier model produced at 560 which considers a similar set of features. Again, in various examples, such features may rely on the identification of visual words that are produced from clustering of feature descriptors to various vocabulary units. However, other types of features may be trained and predicted within the model.

[0071] Finally, the flowchart 600 concludes by producing (identifying) a classification of the images, based on analysis of the image features using the trained model. In various examples, a binary classification is produced (e.g., to indicate damage or no damage); whereas in other examples, a ranking, severity, or other predicted value of the classification is produced (e.g., a numerical value that represents the amount of, or confidence in, a classified condition being present).
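
By way of a non-limiting illustration, the prediction path of flowchart 600 might be sketched as follows; OpenCV and NumPy are assumed, and preprocess_frame, visual_word_histogram, the codebook, and the trained classifier are the hypothetical artifacts from the earlier sketches rather than required components.

    import cv2
    import numpy as np

    def classify_frames(frames, codebook, classifier):
        """Return a (label, confidence) pair for each captured frame."""
        orb = cv2.ORB_create(nfeatures=500)
        results = []
        for frame in frames:
            img = preprocess_frame(frame)                       # hypothetical helper from above
            _, descriptors = orb.detectAndCompute(img, None)
            if descriptors is None:
                results.append(("no-features", 0.0))
                continue
            hist = visual_word_histogram(descriptors, codebook).reshape(1, -1)
            probabilities = classifier.predict_proba(hist)[0]   # requires probability=True at training
            label = classifier.classes_[probabilities.argmax()]
            results.append((str(label), float(probabilities.max())))
        return results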

[0072] FIG. 7 is a block diagram of an example model training and usage scenario for the analysis of endoscope lumen imaging data. Specifically, this scenario illustrates the application of the training operations from flowchart 500 and prediction operations from flowchart 600 within a training phase 710 and a prediction phase 730. However, some aspects of the training operations in flowchart 500 and prediction operations in flowchart 600 (e.g., image pre-processing) are not depicted for simplicity.

[0073] As shown in FIG. 7, the training phase 710 demonstrates the input of a first image 702A and a second image 702B, received from a borescope, as input imaging data 714. Each of the images 702A, 702B is associated with one or more labels 712. Specifically, the labels 712 indicate the respective ground truth condition of the images 702A, 702B, and various conditions that are observable from the images 702A, 702B. For instance, the labels 712 may include a label value for image 702A to indicate the presence of a scratch (feature 704A); the labels 712 may include a label value for image 702B to indicate the absence of any damage (undamaged).

[0074] The feature extractor 716 operates within the training phase 710 to identify features of the images, including key points that correspond to illustrated features 704A, 704B, and 704C. The feature extractor 716 specifically will produce a feature vector (e.g., an n-dimensional vector of numerical values that represent the feature) for these and other identified aspects of each analyzed image, with the feature vector being produced into a feature set 718 for the image. In various examples, the feature extractor 716 may identify basic or advanced features. For instance, basic features might include: a first feature that represents a mean value of the entire image; a second feature that represents the number of black/white pixel intensities in the grayscale image; a third feature that represents an intensity value at location (i, j, k) of an RGB image (e.g., an M x N x P matrix). Also for instance, advanced features might include features produced by applying multiple filters (as pre-processing) and extracting individual pixel values after each filter; each resulting feature vector/set can be used to characterize the damage or non-damage present in the image, and the artificial intelligence classifier is designed to learn from these characterizations.
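
The basic features named above could be computed, for instance, with a short NumPy routine such as the following; the intensity thresholds and the sampled location (i, j, k) are hypothetical values used only for illustration.

    import numpy as np

    def basic_features(rgb_image, i=10, j=10, k=0):
        """Compute simple whole-image and per-pixel features from an M x N x P RGB array."""
        gray = rgb_image.mean(axis=2)                      # grayscale approximation of the image
        mean_value = float(rgb_image.mean())               # feature 1: mean value of the entire image
        n_black = int((gray < 32).sum())                   # feature 2: counts of near-black and
        n_white = int((gray > 223).sum())                  #            near-white pixel intensities
        point_value = int(rgb_image[i, j, k])              # feature 3: intensity at location (i, j, k)
        return np.array([mean_value, n_black, n_white, point_value])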

[0075] The labels 712 and the feature set 718 for the respective images are then provided as input to a machine learning algorithm 720, to learn the features of the images relative to the applied labels. For instance, based on the learning, the machine learning algorithm 720 may identify that the presence of feature 704A is indicative of a scratch, due to the labeling of the image 702A as including a scratch; whereas the presence of features 704B, 704C is not indicative of a scratch, due to the labeling of the image 702B as being undamaged.

[0076] The prediction phase 730 demonstrates the input of a first image 706A and a second image 706B as input imaging data 732. In a similar manner to the feature extractor 716, the prediction phase feature extractor 734 operates to identify features of the images and produce a feature set 736 tied to the respective images 706A, 706B. This feature set 736 is provided as input to the classifier model 740, which identifies a prediction label value 738 based on the previous training. Based on the training phase 710, for example, the classifier model 740 is configured to identify feature 708A included in image 706A as indicative of a scratch, whereas the presence of feature 708B included in image 706B is indicative of an undamaged condition.

[0077] Although not depicted, similar analyses and outcomes may be produced in the prediction phase 730 on additional features, labels, and classifications, including the use of sequential images. Likewise, with variation to the classifier model 740, the prediction label value 738 may be provided in the form of a binary confirmation of a damage or condition (e.g., a “yes” or “no” value), a measurement value, a confidence value, or the like. Further, it will be understood that additional training in the training phase 710 may be based on a far larger set of image data, to refine and improve the accuracy of the resulting classifier model.

[0078] FIG. 8 illustrates further aspects of feature selection and extraction from examples of endoscope lumen imaging data. Specifically, FIG. 8 illustrates a first imaging set 810 being analyzed with an interest point descriptor approach, in contrast to a second imaging set 820 being analyzed with a patch descriptor approach.

[0079] The illustration of the first imaging set 810 includes an original image 810A, having an image portion 812 being enlarged 814 to show pixel-by-pixel values for use with interest point detection. This illustration further shows the extraction of an interest point descriptor, to produce an analyzed image 810B which indicates relevant features for analysis. In this example, an interest point descriptor is used to determine intensity comparisons, for corner/edge detection of objects within an image.

[0080] An interest point is a point that lies on the defect and can lie anywhere on an image. The identification of this interest point may be performed with an interest point “test” with threshold selection. In an undamaged lumen channel (e.g., as depicted in image 820B), the lumen will include N interest points, and these points can be relevant/irrelevant and scattered across the entire image. In a scratched lumen channel (e.g., as depicted in image 810B), the lumen would include >N interest points (points that are present in an undamaged channel and points that show up due to the presence of a defect), as the presence of defects creates a difference between the foreground and background.

[0081] In an example, the interest point descriptor may obtain respective features from an accelerated segment test (FAST), which identifies a feature if there exists a set of n contiguous pixels in the circle which are brighter than I_p + t or darker than I_p - t, where I_p is the intensity of the candidate pixel p and t is a threshold. In this example, the threshold is universal across the whole picture, and the number of interest points is proportional to the number of scratches. An interest point descriptor, in particular, can be used to detect the presence of discoloration or a foreign object.
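
For instance, a FAST-based interest point count might be obtained with OpenCV as sketched below; the threshold value is hypothetical, and comparing the resulting count against a clean-channel baseline N follows the reasoning in the preceding paragraphs.

    import cv2

    def count_interest_points(gray_image, threshold=25):
        """Detect FAST interest points; a count well above the clean-channel baseline suggests defects."""
        fast = cv2.FastFeatureDetector_create(threshold=threshold, nonmaxSuppression=True)
        keypoints = fast.detect(gray_image, None)
        return len(keypoints), keypoints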

[0082] The illustration of the second imaging set 820 includes an original image 820A, relative to the use of a patch descriptor (oriented gradients) approach to produce an analyzed image 820B. In this example, a blob detection approach is used to identify respective features, using a patch descriptor technique such as speeded-up robust features (SURF). Further, a patch descriptor may be used to describe a pixel’s intensity distribution within the neighborhood of a point of interest.
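
A minimal patch descriptor sketch is shown below; SIFT is used as a freely available stand-in for SURF (which requires an OpenCV build with the non-free contrib modules), and it similarly describes the intensity distribution in the neighborhood of each detected key point.

    import cv2

    def patch_descriptors(gray_image):
        """Detect blob-like key points and compute a patch descriptor for each neighborhood."""
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray_image, None)
        return keypoints, descriptors      # descriptors: one 128-value vector per key point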

[0083] The use of a point descriptor may identify a single location. By including the neighboring points (3x3, 5x5, 7x7, etc.), the model may also identify whether the interest point is useful or not. For instance, consider a scenario where the raw intensity values of a point that lies on the scratch and a point that lies on the clear wall are both 5. However, the mean magnitude value of the surrounding points on the scratch is 3, while the mean magnitude value of the surrounding points on the clear wall is 7. The direction vector of the surrounding points on the scratch is 300, while it is 50 on the clear wall. This forms a distinction between the two categories (damaged and clear), which can reduce the rate of false positives.

[0084] Other artificial intelligence approaches may be applied to the training and analysis of borescope imaging data beyond the machine learning classifier models discussed above. For instance, a deep learning approach may provide improvements over traditional machine learning classifier approaches, because machine learning classifiers are designed to learn from features, whereas deep learning approaches are designed to learn from the data itself. For instance, deep learning approaches may be used to analyze raw pixel intensity as input, with this raw pixel data fed into a deep neural network to train a model and make predictions. This enables the consideration of a scale of features not feasible with machine learning classifiers (e.g., a 300 x 300 pixel grayscale image would have 1 x 90,000 features, a 3 x 300 x 300 pixel RGB image would have 1 x 270,000 features, and a 3 x 300 x 300 pixel HSV image would have 1 x 270,000 features). Additionally, in some examples, a deep neural network may be applied as a feature extractor to produce features used to feed into a traditional machine learning algorithm (thus, telling the machine to learn what another machine has learned).
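
As a non-limiting illustration of such a deep learning approach, a small convolutional network operating on raw pixel intensities might be defined as follows; TensorFlow/Keras is assumed, and the layer sizes and two-class output are hypothetical.

    import tensorflow as tf

    def build_cnn(input_shape=(300, 300, 1), n_classes=2):
        """A small convolutional network that learns directly from raw pixel data."""
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),    # could also serve as a feature extractor
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model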

[0085] FIG. 9A illustrates a flowchart 900 of an example method of training a model to perform artificial intelligence analysis for endoscope inspection, and FIG. 9B illustrates a flowchart 950 of an example method of performing such artificial intelligence analysis using the trained model.

[0086] The flowchart 900 begins at 910 with the receipt (e.g., capture, extraction, etc.) of training data from the inspection of one or more endoscope lumens. In an example, this training data comprises both image data providing images and labels defined for those images or areas of such images, from a variety of types and conditions of endoscope lumens. Such labels may include examples of damaged and undamaged image states, including damage or conditions in the form of a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

[0087] Prior to analysis by the feature extraction at 920, operations for image pre-processing (not illustrated) may be performed on the training data, such as by applying noise removal, an illumination change, cropping, geometric transformation (e.g., translation or rotation), color mapping (e.g., RGB to grayscale, RGB to HSV, etc.), resizing, or segmentation to a training image.

[0088] The flowchart 900 continues at 920 with the performance of feature extraction on the training images, to identify subject features for training. In an example, feature extraction includes extracting key points and feature descriptors from the respective images. For instance, the feature descriptors may include point descriptors and/or patch descriptors. As a specific example, the descriptors may be associated with respective words and used to populate a feature vector. Further, the respective features of the feature vector are clustered and analyzed using a bag of visual words model.

[0089] The flowchart 900 continues at 930 with the training of the classification model, to associate subject features with the labels that identify the image characteristics. In an example, the model is trained as a machine learning classifier comprising a support vector machine classifier, random forest classifier, or gradient boosting classifier. Further variation or optimization of the machine learning algorithm learning procedure, including optimization that is unique to a medical facility, may occur based on other known characteristics of the training data that are considered by the model. For example, a model optimization may consider that because Facility A has experienced detection of a large number of scratched channels, the model is tuned to be slightly more sensitive to that type of damage; or, because Facility B has experienced detection of discoloration at a high rate, the model is tuned to be more sensitive to that type of damage. Finally, the flowchart 900 concludes at 940 with the output (e.g., to a data storage device, to a computing system, etc.) of the classification model (e.g., as a binary, data structure, or algorithmic code) for use in predicting a state of subsequently analyzed images.
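
One hypothetical way to express such facility-specific tuning with scikit-learn is through per-class weights, as sketched below; the class names, weight values, and the feature_vectors/labels inputs are illustrative only.

    from sklearn.svm import SVC

    # Facility A has historically detected many scratched channels, so the "scratch" class is
    # weighted more heavily, making the trained classifier more sensitive to that damage type.
    facility_a_weights = {"undamaged": 1.0, "scratch": 2.0, "discoloration": 1.0}
    facility_a_model = SVC(kernel="rbf", probability=True, class_weight=facility_a_weights)
    # facility_a_model.fit(feature_vectors, labels)   # labels drawn from the keys above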

[0090] The flowchart 950 depicts a use process of predictive analysis for endoscope inspection, based on use of a trained model (e.g., as trained with the operations of flowchart 900). The flowchart 950 commences at 960 with the receipt of image data captured from a borescope, with this image data capturing at least one image of a lumen of a subject channel. In an example, the image data comprises a plurality of images extracted from a video, and the following analysis of the image data is performed using multiple of the plurality of images extracted from the video. Also in an example, the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen. The use of such images may occur based on previously captured images, previously captured video, or real-time images or video. User input and/or a user input device may also be used to select the imaging data and control the following analysis process.

[0091] The flowchart 950 continues at 970 with the performance of feature extraction, such as with the feature extraction operations discussed above at 920. This may also include feature extraction to identify a feature set, with respective point descriptors or patch descriptors that describe respective features of the feature set. (For instance, the feature set may include point descriptor A describing feature 1, point descriptor B describing feature 2, patch descriptor A describing feature 3, etc.) Prior to or after feature extraction, image pre-processing may be performed, such as with the pre-processing operations discussed above for flowchart 900.

[0092] The flowchart 950 continues at 980 with the analysis of one or more image data subject features, using the trained artificial intelligence model to identify a predicted state of the subject lumen. As indicated above, the model may comprise a machine learning classifier, such as a support vector machine classifier, random forest classifier, or gradient boosting classifier (or, a contribution-weighted combination of multiple models). In other examples, the trained model is a deep neural network, convolutional neural network, or recurrent neural network, or a combination thereof (e.g., an artificial neural network). Other variations of artificial intelligence models and engines may also be utilized.
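
A contribution-weighted combination of multiple classifiers could be expressed, for example, with a soft-voting ensemble as sketched below; scikit-learn is assumed, and the constituent models and weights are hypothetical.

    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
    from sklearn.svm import SVC

    # Soft voting averages each model's predicted probabilities, weighted per model.
    ensemble = VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True)),
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("gb", GradientBoostingClassifier()),
        ],
        voting="soft",
        weights=[2, 1, 1],      # the SVM contributes twice the weight of the other models
    )
    # ensemble.fit(feature_vectors, labels); ensemble.predict_proba(new_feature_vectors)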

[0093] The predicted state of the lumen (or, for a portion of the lumen or the subject channel) may represent a damaged or a non-damaged state, or a specific identified type or level of damage. For instance, the specific damage state may be associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation, detected from the image. The damage state may also be associated with a confidence level related to the level or frequency of occurrence in one or multiple images (e.g., in scenarios where one damaged image in a series of non-damaged images represents an anomaly or false positive).
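
One hypothetical aggregation of per-image predictions into a single damage confidence for a lumen region is sketched below; the probability threshold and agreement fraction are illustrative parameters rather than prescribed values.

    import numpy as np

    def sequence_damage_confidence(frame_probabilities, min_fraction=0.2):
        """Aggregate per-frame damage probabilities so one anomalous frame is not over-weighted."""
        probs = np.asarray(frame_probabilities, dtype=float)   # P(damage) for consecutive frames
        flagged = probs > 0.5
        if flagged.mean() < min_fraction:
            # Isolated positives in a run of clean frames are treated as likely false positives.
            return False, float(1.0 - probs.mean())
        return True, float(probs[flagged].mean())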

[0094] The flowchart 950 concludes at 990 with the output of data representing a predicted state of the subject lumen. This predicted state may be identified relative to an image, a set of images, an identifiable area of the lumen, a lumen, an endoscope, or the like. This data may be outputted as a data value or flag, as an indication on an output device, as a communication to another computing system or device (e.g., to a cleaning and workflow tracking system), produced or updated in a report, and the like.

[0095] The analysis of images may be performed on the fly, or with captured images or video. In terms of analysis, similar approaches may be used to analyze both on-the-fly and pre-captured images or video; both pre-captured and on-the-fly image data can be analyzed and used to display statistics/predictions in real/run time; further, results from both pre-captured and on-the-fly image data can be output to a file. However, in some examples, on-the-fly learning/online learning may be enhanced, such that once the prediction has been served, the image/video will be stored and used to dynamically update the model.
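
A hypothetical sketch of such an online update, substituting an incrementally trainable scikit-learn classifier for the batch-trained models discussed above, is shown below; the loss setting and label strings are illustrative (older scikit-learn versions name the loss "log" rather than "log_loss").

    from sklearn.linear_model import SGDClassifier

    # An incrementally trainable classifier that can fold in newly captured, verified examples
    # after each served prediction, without retraining on the full data set.
    online_model = SGDClassifier(loss="log_loss")
    CLASSES = ["undamaged", "damaged"]

    def update_after_prediction(feature_vector, verified_label):
        """Update the model with one newly verified example."""
        online_model.partial_fit([feature_vector], [verified_label], classes=CLASSES)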

[0096] Although many of the preceding examples were provided with reference to endoscope processing and similar medical device cleaning settings, it will be understood that a variety of other uses may be applied in both medical and non-medical settings to identify, prevent, or reduce the potential of contamination. These settings may include the handling of hazardous materials in a variety of scientific and industrial settings, such as the handling of objects contaminated with biological or radioactive agents; the human control of systems and devices configured to process and clean potentially contaminated objects; and other settings involving a contaminated object or human. Likewise, the preceding examples may also be applicable in clean room settings where the environment or particular objects are intended to remain in a clean state, and where human contact with substances or objects may cause contamination that is tracked and remediated.

[0097] Further, although the present examples were provided with reference to image processing performed using camera-captured images, it will be understood that other types of images or data, including thermal imaging, ultrasound imaging, and other imaging formats, may also be utilized with the present techniques.

[0098] FIG. 10 is a block diagram illustrating an example computer system machine upon which any one or more of the previous techniques may be performed or facilitated. Computer system 1000 specifically may be used in connection with facilitating the operations of the cleaning workflow and tracking system 380, the visual inspection processing system 360, the user computing system 430, or any other computing platform described or referred to herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments. The machine may be a personal computer (PC), a tablet PC, a smartphone, a web appliance, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

[0099] Example computer system 1000 includes a processor 1002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 1004, and a static memory 1006, which communicate with each other via a link 1008 (e.g., an interlink, bus, etc.). The computer system 1000 may further include a video display unit 1010, an alphanumeric input device 1012 (e.g., a keyboard), and a user interface (UI) navigation device 1014 (e.g., a mouse). In an example, the video display unit 1010, input device 1012, and UI navigation device 1014 are incorporated into a touch screen display. The computer system 1000 may additionally include a storage device 1016 (e.g., a drive unit), a signal generation device 1018 (e.g., a speaker), and a network interface device 1020 which may operably communicate with a communications network 1026 using wired or wireless communications hardware. The computer system 1000 may further include one or more human input sensors 1028 configured to obtain input (including non-contact human input) in accordance with input recognition and detection techniques. The human input sensors 1028 may include a camera, microphone, barcode reader, RFID reader, near field communications reader, or other sensor producing data for purposes of input. The computer system 1000 may further include an output controller 1030, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR)) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

[0100] The storage device 1016 may include a machine-readable medium 1022 on which is stored one or more sets of data structures or instructions 1024 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 1024 may also reside, completely or at least partially, within the main memory 1004, static memory 1006, and/or within the processor 1002 during execution thereof by the computer system 1000, with the main memory 1004, static memory 1006, and the processor 1002 also constituting machine-readable media.

[0101] While the machine-readable medium 1022 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 1024. The term “machine-readable medium” shall also be taken to include any tangible medium (e.g., a non-transitory medium) that is capable of storing, encoding or carrying instructions for execution by the computer system 1000 and that cause the computer system 1000 to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including, by way of example, semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

[0102] The instructions 1024 may further be transmitted or received over a communications network 1026 using a transmission medium via the network interface device 1020 utilizing any one of a number of well-known transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP)). Examples of communication networks include a local area network (LAN), wide area network (WAN), the Internet, mobile telephone networks, Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi, 3G, and 4G LTE/LTE-A or 5G networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the computing system 1000, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.

[0103] As an additional example, computing embodiments described herein may be implemented in one or a combination of hardware, firmware, and software. Embodiments may also be implemented as instructions stored on a computer-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A computer-readable storage device may include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a computer-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.

[0104] It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components or modules, in order to more particularly emphasize their implementation independence. Components or modules may be implemented in any combination of hardware circuits, programmable hardware devices, or other discrete components. Components or modules may also be implemented in software for execution by various types of processors. An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified component or module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module. Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices.

[0105] Similarly, operational data may be identified and illustrated herein within components or modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions.

[0106] Additional examples of the presently described method, system, and device embodiments include the following, non-limiting configurations. Each of the following non-limiting examples may stand on its own, or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.

[0107] Example 1 is a method of artificial intelligence analysis for endoscope inspection, comprising: receiving image data captured from a borescope, the image data capturing at least one image of a lumen of an endoscope; analyzing the image data with a trained artificial intelligence model to generate a predicted state of the lumen from the image data, the trained model configured to generate the predicted state based on a trained feature identified from the at least one image of the lumen; and outputting data that represents the predicted state of the lumen of the endoscope.

[0108] In Example 2, the subject matter of Example 1 includes, performing feature extraction on the at least one image of the lumen to identify a subject feature; wherein the subject feature is correlated to the trained feature based on a classification produced from the trained model, the trained model utilizing a trained machine learning algorithm.

[0109] In Example 3, the subject matter of Example 2 includes, subject matter where the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

[0110] In Example 4, the subject matter of Example 3 includes, subject matter where the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective classification words.

[0111] In Example 5, the subject matter of Example 4 includes, subject matter where the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

[0112] In Example 6, the subject matter of Examples 1-5 includes, subject matter where the trained model is further configured to generate the predicted state based on a feature set including the trained feature, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

[0113] In Example 7, the subject matter of Examples 1-6 includes, performing image pre-processing on the image data, before analyzing the image data with the trained model.

[0114] In Example 8, the subject matter of Example 7 includes, subject matter where the image pre-processing comprises at least one of: noise removal, an illumination change, geometric transformation, color mapping, resizing, cropping, or segmentation, to the at least one image of the lumen.

[0115] In Example 9, the subject matter of Examples 1-8 includes, subject matter where the predicted state of the lumen comprises a damage state.

[0116] In Example 10, the subject matter of Example 9 includes, subject matter where the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

[0117] In Example 11, the subject matter of Examples 1-10 includes, subject matter where the trained model is a machine learning classifier.

[0118] In Example 12, the subject matter of Example 11 includes, subject matter where the machine learning classifier is a support vector machine classifier, random forest classifier, gradient boosting classifier, or a contribution-weighted combination of multiple classifiers.

[0119] In Example 13, the subject matter of Examples 11-12 includes, subject matter where the machine learning classifier indicates a classification label associated with an integrity state or a state of damage.

[0120] In Example 14, the subject matter of Examples 1-13 includes, subject matter where the trained model is a deep neural network, convolutional neural network, or recurrent neural network.

[0121] In Example 15, the subject matter of Examples 1-14 includes, subject matter where the image data comprises a plurality of images extracted from a video, and wherein the analyzing of the image data with the trained artificial intelligence model is performed using multiple of the plurality of images extracted from the video.

[0122] In Example 16, the subject matter of Examples 1-15 includes, subject matter where the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

[0123] Example 17 is a computing device, comprising: at least one processor; and at least one memory device including instructions embodied thereon, wherein the instructions, when executed by the processor, cause the processor to perform artificial intelligence analysis operations for endoscope inspection, with the operations comprising: receiving image data captured from a borescope, the image data capturing at least one image of a lumen of an endoscope; analyzing the image data with a trained artificial intelligence model to generate a predicted state of the lumen from the image data, the trained model configured to generate the predicted state based on a trained feature identified from the at least one image of the lumen; and outputting data that represents the predicted state of the lumen of the endoscope.

[0124] In Example 18, the subject matter of Example 17 includes, performing feature extraction on the at least one image of the lumen to identify a subject feature; wherein the subject feature is correlated to the trained feature based on a classification produced from the trained model, the trained model utilizing a trained machine learning algorithm.

[0125] In Example 19, the subject matter of Example 18 includes, subject matter where the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

[0126] In Example 20, the subject matter of Example 19 includes, subject matter where the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective classification words.

[0127] In Example 21, the subject matter of Example 20 includes, subject matter where the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

[0128] In Example 22, the subject matter of Examples 17-21 includes, subject matter where the trained model is further configured to generate the predicted state based on a feature set including the trained feature, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

[0129] In Example 23, the subject matter of Examples 17-22 includes, performing image pre-processing on the image data, before analyzing the image data with the trained model.

[0130] In Example 24, the subject matter of Example 23 includes, subject matter where the image pre-processing comprises at least one of: noise removal, an illumination change, geometric transformation, color mapping, resizing, cropping, or segmentation, to the at least one image of the lumen.

[0131] In Example 25, the subject matter of Examples 17-24 includes, subject matter where the predicted state of the lumen comprises a damage state.

[0132] In Example 26, the subject matter of Example 25 includes, subject matter where the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

[0133] In Example 27, the subject matter of Examples 17-26 includes, subject matter where the trained model is a machine learning classifier.

[0134] In Example 28, the subject matter of Example 27 includes, subject matter where the machine learning classifier is a support vector machine classifier, random forest classifier, or gradient boosting classifier.

[0135] In Example 29, the subject matter of Examples 27-28 includes, subject matter where the machine learning classifier indicates a classification label associated with an integrity state or a state of damage.

[0136] In Example 30, the subject matter of Examples 17-29 includes, subject matter where the trained model is a convolutional neural network, deep neural network, or recurrent neural network.

[0137] In Example 31, the subject matter of Examples 17-30 includes, subject matter where the image data comprises a plurality of images extracted from a video, and wherein the analyzing of the image data with the trained artificial intelligence model is performed using multiple of the plurality of images extracted from the video.

[0138] In Example 32, the subject matter of Examples 17-31 includes, subject matter where the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

[0139] In Example 33, the subject matter of Examples 17-32 includes, a user input device to receive input to select the image data and control the analyzing of the image data; and a user output device to output a representation of data that represents the predicted state of the lumen of the endoscope.

[0140] Example 34 is a machine-readable storage medium including instructions, wherein the instructions, when executed by a processing circuitry of a computing system, cause the processing circuitry to perform operations of any of Examples 1 to 33.

[0141] Example 35 is a system, comprising: a borescope adapted to capture at least one digital image from a working channel of an endoscope; and a visual inspection computing system, comprising a memory device and processing circuitry, the processing circuitry adapted to: obtain the at least one digital image of the endoscope working channel; analyze the at least one digital image with a trained artificial intelligence model to generate a predicted state of the working channel, wherein the trained model is configured to generate the predicted state based on a trained feature identified from the at least one digital image of the working channel; and an output device adapted to provide a representation of data that represents the predicted state of the working channel of the endoscope.

[0142] In Example 36, the subject matter of Example 35 includes, a borescope movement device, adapted to advance the borescope within the working channel of the endoscope at a controlled rate to capture the at least one digital image of the endoscope working channel.

[0143] Example 37 is a method of training an artificial intelligence model for endoscope inspection, comprising: receiving training data, the training data comprising image data capturing at least one image of a lumen of an endoscope, and respective labels associated with the at least one image; performing feature extraction on the at least one image of the lumen to identify respective subject features; training a model to associate the respective subject features with respective predicted states; and outputting the model for use in analyzing subsequent images, to generate a predicted state of an image of a lumen from a subject endoscope.

[0144] In Example 38, the subject matter of Example 37 includes, subject matter where the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

[0145] In Example 39, the subject matter of Example 38 includes, subject matter where the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective words.

[0146] In Example 40, the subject matter of Example 39 includes, subject matter where the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

[0147] In Example 41, the subject matter of Examples 37-40 includes, subject matter where the model is further trained to generate the predicted state based on a feature set including the subject features, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

[0148] In Example 42, the subject matter of Examples 37-41 includes, performing image pre-processing on the image data, before performing the feature extraction.

[0149] In Example 43, the subject matter of Example 42 includes, subject matter where the image pre-processing comprises at least one of: noise removal, an illumination change, cropping, geometric transformation, color mapping, resizing, or segmentation, to the at least one image of the lumen.

[0150] In Example 44, the subject matter of Examples 37-43 includes, subject matter where the predicted state of the lumen comprises a damage state.

[0151] In Example 45, the subject matter of Example 44 includes, subject matter where the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

[0152] In Example 46, the subject matter of Examples 37-45 includes, subject matter where the model is trained as a machine learning classifier comprising a support vector machine classifier, random forest classifier, or gradient boosting classifier.

[0153] In Example 47, the subject matter of Example 46 includes, subject matter where the machine learning classifier is adapted to provide a classification label associated with an integrity state or a state of damage.

[0154] In Example 48, the subject matter of Examples 37-47 includes, subject matter where the image data comprises a plurality of images extracted from a video, and wherein the training of the model is performed using multiple of the plurality of images extracted from the video.

[0155] In Example 49, the subject matter of Examples 37-48 includes, subject matter where the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

[0156] Example 50 is a computing device, comprising: at least one processor; and at least one memory device including instructions embodied thereon, wherein the instructions, when executed by the processor, cause the processor to perform artificial intelligence analysis operations for training an artificial intelligence model for endoscope inspection, with the operations comprising: receiving training data, the training data comprising image data capturing at least one image of a lumen of an endoscope, and respective labels associated with the at least one image; performing feature extraction on the at least one image of the lumen to identify respective subject features; training a model to associate the respective subject features with respective predicted states; and outputting the model for use in analyzing subsequent images, to generate a predicted state of an image of a lumen from a subject endoscope.

[0157] In Example 51, the subject matter of Example 50 includes, subject matter where the feature extraction comprises extracting key points and feature descriptors from the at least one image of the lumen.

[0158] In Example 52, the subject matter of Example 51 includes, subject matter where the feature descriptors comprise point descriptors and patch descriptors, and wherein the feature descriptors are associated with respective words.

[0159] In Example 53, the subject matter of Example 52 includes, subject matter where the feature descriptors are used to populate a feature vector, and wherein respective features of the feature vector are clustered and analyzed using a bag of visual words model.

[0160] In Example 54, the subject matter of Examples 50-53 includes, subject matter where the model is further trained to generate the predicted state based on a feature set including the subject features, and wherein respective point descriptors or patch descriptors describe respective features of the feature set.

[0161] In Example 55, the subject matter of Examples 50-54 includes, performing image pre-processing on the image data, before performing the feature extraction.

[0162] In Example 56, the subject matter of Example 55 includes, subject matter where the image pre-processing comprises at least one of: noise removal, an illumination change, cropping, geometric transformation, color mapping, resizing, or segmentation, to the at least one image of the lumen.

[0163] In Example 57, the subject matter of Examples 50-56 includes, subject matter where the predicted state of the lumen comprises a damage state.

[0164] In Example 58, the subject matter of Example 57 includes, subject matter where the damage state is associated with at least one of: a discoloration, a foreign object, a residue, a scratch, a peeling surface, a buckling surface, or a perforation.

[0165] In Example 59, the subject matter of Examples 50-58 includes, subject matter where the model is trained as a machine learning classifier comprising a support vector machine classifier, random forest classifier, or gradient boosting classifier.

[0166] In Example 60, the subject matter of Example 59 includes, subject matter where the machine learning classifier is adapted to provide a classification label associated with an integrity state or a state of damage.

[0167] In Example 61, the subject matter of Examples 50-60 includes, subject matter where the image data comprises a plurality of images extracted from a video, and wherein the training of the model is performed using multiple of the plurality of images extracted from the video.

[0168] In Example 62, the subject matter of Examples 50-61 includes, subject matter where the image data comprises a sequence of images from a first position in the lumen to a second position in the lumen.

[0169] Example 63 is a machine-readable storage medium including instructions, wherein the instructions, when executed by a processing circuitry of a computing system, cause the processing circuitry to perform operations of any of Examples 37 to 62.

[0170] The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.