Title:
ADAPTIVE VISUALIZATION OF CONTEXTUAL TARGETS IN SURGICAL VIDEO
Document Type and Number:
WIPO Patent Application WO/2022/195305
Kind Code:
A1
Abstract:
An aspect includes a computer-implemented method that predicts a proposed region of interest in an image from a video of a surgical procedure based on one or more contextual targets. An image adjustment is synthesized based on the proposed region of interest and the image. A modified visualization of the surgical procedure is generated by incorporating the image adjustment in a real-time output of the video of the surgical procedure. The video of the surgical procedure is displayed with the modified visualization.

Inventors:
GRAMMATIKOPOULOU MARIA (GB)
STOYANOV DANAIL V (GB)
LUENGO MUNTION IMANOL (GB)
OWEN DAVID (GB)
SANCHEZ-MATILLA RICARDO (GB)
ROBU MARIA RUXANDRA (GB)
Application Number:
PCT/GR2022/000015
Publication Date:
September 22, 2022
Filing Date:
March 18, 2022
Assignee:
DIGITAL SURGERY LTD (GB)
GRAMMATIKOPOULOU MARIA (GB)
International Classes:
G06V10/82; G06T7/00; G06V20/20
Other References:
ISLAM MOBARAKOL ET AL: "AP-MTL: Attention Pruned Multi-task Learning Model for Real-time Instrument Detection and Segmentation in Robot-assisted Surgery", 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 31 May 2020 (2020-05-31), pages 8433 - 8439, XP033826236, DOI: 10.1109/ICRA40945.2020.9196905
COLLEONI EMANUELE ET AL: "Deep Learning Based Robotic Tool Detection and Articulation Estimation With Spatio-Temporal Layers", IEEE ROBOTICS AND AUTOMATION LETTERS, IEEE, vol. 4, no. 3, 1 July 2019 (2019-07-01), pages 2714 - 2721, XP011726185, DOI: 10.1109/LRA.2019.2917163
JIN YUEMING ET AL: "Multi-task recurrent convolutional network with correlation loss for surgical video analysis", MEDICAL IMAGE ANALYSIS, OXFORD UNIVERSITY PRESS, OXFORD, GB, vol. 59, 10 October 2019 (2019-10-10), XP085913951, ISSN: 1361-8415, [retrieved on 20191010], DOI: 10.1016/J.MEDIA.2019.101572
DUYGU SARIKAYA ET AL: "Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 29 July 2020 (2020-07-29), XP081731746, DOI: 10.1109/TMI.2017.2665671
YANG CONGMIN ET AL: "Image-based laparoscopic tool detection and tracking using convolutional neural networks: a review of the literature", COMPUTER ASSISTED SURGERY, vol. 25, no. 1, 1 January 2020 (2020-01-01), pages 15 - 28, XP055927718, Retrieved from the Internet [retrieved on 20220602], DOI: 10.1080/24699322.2020.1801842
Attorney, Agent or Firm:
ATHANASIADOU, Maria (GR)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method comprising: extracting as a prediction of a first machine-learning model or as a user input, a proposed region of interest in an image from a video of a surgical procedure based on one or more contextual targets; synthesizing an image adjustment, by a second machine-learning model, based on the proposed region of interest and the image; generating a modified visualization of the surgical procedure by incorporating the image adjustment in a real-time output of the video of the surgical procedure; and displaying the video of the surgical procedure with the modified visualization.

2. The computer-implemented method of claim 1, wherein the first machine-learning model comprises a surgical phase and structure network configured to determine a phase of the surgical procedure in the image, and the one or more contextual targets are determined based on one or more outputs of the surgical phase and structure network.

3. The computer-implemented method of claim 1, wherein the first machine learning model uses weak labels, and the second machine learning model uses weak labels and joint detection and segmentation.

4. The computer-implemented method of claim 1, further comprising: determining motion based on temporal data associated with the image; and determining a current area of focus as at least a portion of the proposed region of interest based on the motion.

5. The computer-implemented method of claim 1, further comprising: using a depth map to refine the proposed region of interest.

6. The computer-implemented method of claim 1, wherein the first machine-learning model is trained based on a training dataset of a plurality of temporally aligned annotated data streams comprising temporal annotations, spatial annotations, and sensor annotations.

7. The computer-implemented method of claim 1, further comprising: performing feature fusion to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more frames.

8. The computer-implemented method of claim 1, wherein the user input comprises a drawing input received from one or more devices.

9. The computer-implemented method of claim 1, further comprising: performing eye tracking of a surgeon during the surgical procedure; and predicting the proposed region of interest based at least in part on a detected area of focus from the eye tracking of the surgeon.

10. A system comprising: a data collection system configured to capture a video of a surgical procedure; a model execution system configured to execute one or more machine-learning models to predict a proposed region of interest in an image from the video of the surgical procedure based on one or more contextual targets; and an output generator configured to generate a modified visualization of the surgical procedure in a real-time output of the video of the surgical procedure based on the proposed region of interest.

11. The system of claim 10, wherein the system is further configured to determine motion based on temporal data and determine a current area of focus as at least a portion of the proposed region of interest based on the motion.

12. The system of claim 10, wherein the one or more machine-learning models are configured to use a depth map to refine the proposed region of interest and in training the one or more machine-learning models.

13. The system of claim 10, wherein the one or more machine-learning models are configured to perform feature fusion to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more frames of the video.

14. The system of claim 10, further comprising a display configured to output the modified visualization comprising an image adjustment within the proposed region of interest.

15. A computer program product comprising a memory device having computer executable instructions stored thereon, which when executed by one or more processors cause the one or more processors to perform a method comprising: identifying a proposed region of interest in an image from a video of a surgical procedure; synthesizing an image adjustment, by the one or more machine-learning models, based on the proposed region of interest and the image; and generating a modified visualization of the surgical procedure by incorporating the image adjustment in an output of the video of the surgical procedure.

16. The computer program product of claim 15, wherein execution of the computer executable instructions causes the one or more processors to determine motion based on temporal data associated with the image and determine a current area of focus as at least a portion of the proposed region of interest based on the motion, and one or more contextual targets are determined based on one or more outputs of a surgical phase and structure network.

17. The computer program product of claim 15, wherein the one or more machine learning models are configured to use a depth map to refine the proposed region of interest, and the depth map comprises an estimate of three-dimensional depth based on one or more two-dimensional images.

18. The computer program product of claim 15, wherein the one or more machine learning models are trained to perform feature fusion to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more frames, and the feature fusion is based on one or more transform-domain fusion algorithms to implement an image fusion neural network.

19. The computer program product of claim 15, wherein the proposed region of interest is identified based on a user input, and the image adjustment is concentrated with a greater intensity near a centroid of the proposed region of interest.

20. The computer program product of claim 15, wherein execution of the computer executable instructions causes the one or more processors to perform eye tracking of a surgeon during the surgical procedure to track a position of gaze relative to a region of an image being observed and predict the proposed region of interest based at least in part on a detected area of focus from the eye tracking of the surgeon.

Description:
ADAPTIVE VISUALIZATION OF CONTEXTUAL TARGETS IN SURGICAL VIDEO

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application is a PCT application which claims the benefit of U.S. Provisional Patent Application No. 63/163,417, filed March 19, 2021, U.S. Provisional Patent Application No. 63/163,425, filed March 19, 2021, and U.S. Provisional Patent Application No. 63/212,157, filed June 16, 2021, all of which are incorporated by reference in their entirety herein.

BACKGROUND

[0002] The present invention relates in general to computing technology and relates more particularly to computing technology for adaptive visualization of contextual targets in surgical video.

[0003] Computer-assisted systems can be useful to augment a person’s physical sensing, perception, and reaction capabilities. For example, such systems can effectively provide information corresponding to an expanded field of vision, both temporal and spatial, that enables a person to adjust current and future actions based on the part of an environment not included in his or her physical field of view. However, providing such information relies upon an ability to process part of this extended field in a useful manner. Highly variable, dynamic, and/or unpredictable environments present challenges in defining rules that indicate how representations of the environments are to be processed to output data to productively assist the person in action performance. Further, identifying and tracking multiple objects in complex scenes can be challenging where variations in lighting, obstructions, and orientation of the objects may occur.

SUMMARY

[0004] According to an aspect, a computer-implemented method extracts, as a prediction of a first machine-learning model or as a user input, a proposed region of interest in an image from a video of a surgical procedure based on one or more contextual targets. An image adjustment is synthesized based on the proposed region of interest and the image. A modified visualization of the surgical procedure is generated by incorporating the image adjustment in a real-time output of the video of the surgical procedure. The video of the surgical procedure is displayed with the modified visualization.

[0005] In one or more examples, the first machine-learning model can include a surgical phase and structure network configured to determine a phase of the surgical procedure in the image.

[0006] In one or more examples, the one or more contextual targets can be determined based on one or more outputs of the surgical phase and structure network.

[0007] In one or more examples, the first machine learning model uses weak labels, and the second machine learning model uses weak labels and joint detection and segmentation.

[0008] In one or more examples, the computer-implemented method can include determining motion based on temporal data associated with the image, and determining a current area of focus as at least a portion of the proposed region of interest based on the motion.

[0009] In one or more examples, the computer-implemented method can include using a depth map to refine the proposed region of interest.

[0010] In one or more examples, the first machine-learning model can be trained based on a training dataset of a plurality of temporally aligned annotated data streams including temporal annotations, spatial annotations, and sensor annotations.

[0011] In one or more examples, the computer-implemented method can include performing feature fusion to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more frames.

[0012] In one or more examples, the user input can include a drawing input received from one or more devices.

[0013] In one or more examples, the computer-implemented method can include performing eye tracking of a surgeon during the surgical procedure, and predicting the proposed region of interest based at least in part on a detected area of focus from the eye tracking of the surgeon.

[0014] According to another aspect, a system includes a data collection system configured to capture a video of a surgical procedure, a model execution system configured to execute one or more machine-learning models to predict a proposed region of interest in an image from the video of the surgical procedure based on one or more contextual targets, and an output generator configured to generate a modified visualization of the surgical procedure in a real-time output of the video of the surgical procedure based on the proposed region of interest.

[0015] In one or more examples, the system can be further configured to determine motion based on temporal data and determine a current area of focus as at least a portion of the proposed region of interest based on the motion.

[0016] In one or more examples, the one or more machine-learning models can be configured to use a depth map to refine the proposed region of interest and in training the one or more machine-learning models.

[0017] In one or more examples, the one or more machine-learning models can be configured to perform feature fusion to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more frames of the video.

[0018] In one or more examples, the system can include a display configured to output the modified visualization including an image adjustment within the proposed region of interest.

[0019] According to another aspect, a computer program product includes a memory device having computer executable instructions stored thereon, which when executed by one or more processors cause the one or more processors to perform a method. The method includes identifying a proposed region of interest in an image from a video of a surgical procedure, synthesizing an image adjustment, by the one or more machine learning models, based on the proposed region of interest and the image, and generating a modified visualization of the surgical procedure by incorporating the image adjustment in an output of the video of the surgical procedure.

[0020] In one or more examples, execution of the computer executable instructions can cause the one or more processors to determine motion based on temporal data associated with the image and determine a current area of focus as at least a portion of the proposed region of interest based on the motion, and one or more contextual targets can be determined based on one or more outputs of a surgical phase and structure network.

[0021] In one or more examples, the one or more machine-learning models can be configured to use a depth map to refine the proposed region of interest, and the depth map can include an estimate of three-dimensional depth based on one or more two-dimensional images.

[0022] In one or more examples, the one or more machine-learning models can be trained to perform feature fusion to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more frames, and the feature fusion can be based on one or more transform-domain fusion algorithms to implement an image fusion neural network.

[0023] In one or more examples, the proposed region of interest can be identified based on a user input, and the image adjustment can be concentrated with a greater intensity near a centroid of the proposed region of interest.

[0024] In one or more examples, execution of the computer executable instructions can cause the one or more processors to perform eye tracking of a surgeon during the surgical procedure to track a position of gaze relative to a region of an image being observed and predict the proposed region of interest based at least in part on a detected area of focus from the eye tracking of the surgeon.

[0025] Additional technical features and benefits are realized through the techniques of the present invention. Aspects of the invention are described in detail herein and are considered a part of the claimed subject matter. For a better understanding, refer to the detailed description and to the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0026] The specifics of the exclusive rights described herein are particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the aspects of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:

[0027] FIG. 1 shows a system for detection of surgical phases and structures in surgical data using machine learning according to one or more aspects;

[0028] FIG. 2 depicts a flowchart of a method for adaptive visualization of contextual targets using machine learning according to one or more aspects;

[0029] FIG. 3 depicts a visualization of surgical data used for training one or more machine-learning models according to one or more aspects;

[0030] FIG. 4 depicts a flow diagram for training of machine-learning models used to propose regions of interest according to one or more aspects;

[0031] FIG. 5A depicts an example of a non-augmented visualization of a surgical view according to one or more aspects;

[0032] FIG. 5B depicts an example of an augmented visualization of the surgical view of FIG. 5A according to one or more aspects;

[0033] FIG. 6 depicts a flow diagram of training a machine-learning model to perform region of interest enhancement according to one or more aspects;

[0034] FIG. 7 depicts a flow diagram of image synthesis with visualization enhancement according to one or more aspects;

[0035] FIG. 8 depicts a computer system in accordance with one or more aspects;

[0036] FIG. 9 depicts a surgical procedure system in accordance with one or more aspects;

[0037] FIG. 10 depicts improvements to machine learning provided by one or more aspects of the technical solutions herein using label relaxation;

[0038] FIG. 11 depicts a comparison of results in an example scenario obtained by using the different types of machine learning models according to one or more aspects;

[0039] FIG. 12 shows example frames and model outputs from an excerpt of a laparoscopic cholecystectomy video;

[0040] FIG. 13 depicts a block diagram of an architecture of a multi-task machine learning model for joint detection and segmentation of surgical instruments according to one or more aspects;

[0041] FIG. 14 depicts an architecture for segmentation according to one or more aspects; and

[0042] FIG. 15 depicts example results of using a machine learning model to perform joint detection and segmentation of surgical instruments according to one or more aspects.

[0043] The diagrams depicted herein are illustrative. There can be many variations to the diagrams or the operations described therein without departing from the spirit of the invention. For instance, the actions can be performed in a differing order, or actions can be added, deleted, or modified. Also, the term “coupled” and variations thereof describe having a communications path between two elements and do not imply a direct connection between the elements with no intervening elements/connections between them. All of these variations are considered a part of the specification.

DETAILED DESCRIPTION

[0044] Exemplary aspects of technical solutions described herein relate to, among other things, devices, systems, methods, computer-readable media, techniques, and methodologies for using machine learning and computer vision to automatically predict regions of interest in surgical data and generate modified visualizations. More generally, aspects can include detection, tracking, and predictions associated with one or more structures, the structures being deemed to be critical for an actor involved in performing one or more actions during a surgical procedure (e.g., by a surgeon). In one or more aspects, the structures are predicted dynamically and substantially in real-time as the surgical data is being captured and analyzed by technical solutions described herein. A predicted structure can be an anatomical structure, a surgical instrument, etc.

[0045] In some instances, a computer-assisted surgical (CAS) system is provided that uses one or more machine-learning models, trained with surgical data, to augment environmental data directly sensed by an actor involved in performing one or more actions during a surgical procedure (e.g., a surgeon). Such augmentation of perception and action can increase action precision, optimize ergonomics, improve action efficacy, enhance patient safety, and improve the standard of the surgical process. The output of the one or more machine-learning models can also be an alert used to trigger a real-time notification of a deviation in the surgical procedure or highlight a region outside of a current region of interest, for example.

[0046] The surgical data provided to train the machine-learning models can include data captured during a surgical procedure, as well as simulated data. The surgical data can include time-varying image data (e.g., a simulated/real video stream from different types of cameras) corresponding to a surgical environment. The surgical data can also include other types of data streams, such as audio, radio frequency identifier (RFID), text, robotic sensors, other signals, etc. The machine-learning models are trained to predict and identify, in the surgical data, “structures” including particular tools, anatomic objects, actions being performed in the simulated/real surgical stages. In one or more aspects, the machine-learning models are trained to define one or more parameters of the models so as to learn how to transform new input data (that the models are not trained on) to identify one or more structures. During the training, the models receive as input, one or more data streams that may be augmented with data indicating the structures in the data streams, such as indicated by metadata and/or image-segmentation data associated with the input data. The data used during training can also include temporal sequences of one or more input data.

[0047] In one or more aspects, the simulated data can be generated to include image data (e.g., which can include time-series image data or video data and can be generated in any wavelength of sensitivity) that is associated with variable perspectives, camera poses, lighting (e.g., intensity, hue, etc.) and/or motion of imaged objects (e.g., tools). In some instances, multiple data sets can be generated - each of which corresponds to the same imaged virtual scene but varies with respect to perspective, camera pose, lighting, and/or motion of imaged objects, or varies with respect to the modality used for sensing, e.g., red-green-blue (RGB) images or depth or temperature. In some instances, each of the multiple data sets corresponds to a different imaged virtual scene and further varies with respect to perspective, camera pose, lighting, and/or motion of imaged objects.

[0048] The machine-learning models can include a fully convolutional network adaptation (FCN) and/or conditional generative adversarial network model configured with one or more hyperparameters to perform image segmentation into classes. For example, the machine-learning models (e.g., the fully convolutional network adaptation) can be configured to perform supervised, self-supervised or semi-supervised semantic segmentation in multiple classes - each of which corresponds to a particular surgical instrument, anatomical body part (e.g., generally or in a particular state), and/or environment. Alternatively, or in addition, the machine-learning model (e.g., the conditional generative adversarial network model) can be configured to perform unsupervised domain adaptation to translate simulated images to semantic instrument segmentations. As a further example, the machine-learning models can include one or more transformer-based networks. It is understood that other types of machine-learning models or combinations thereof can be used in one or more aspects. Machine-learning models can be collectively managed as a group, also referred to as an ensemble, where the machine-learning models are used together and may share feature spaces between elements of the models. As such, reference to a machine-learning model or machine-learning models herein may refer to a combination of multiple machine-learning models that are used together, such as operating on a same group of data. Machine-learning models can also be further subdivided into multiple networks that have specific types of outputs, which may be individual parameters or multi-dimensional regions.
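
As an illustration of the multi-class semantic segmentation described above, the minimal sketch below configures an off-the-shelf fully convolutional network in PyTorch/torchvision, which is assumed to be available; the disclosure does not prescribe a specific library, and the class count and class meanings are hypothetical placeholders.

```python
# Minimal sketch (assuming PyTorch and a recent torchvision): a fully
# convolutional network configured for multi-class semantic segmentation of a
# surgical frame. The number of classes and their meaning are hypothetical.
import torch
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 4  # e.g., background, instrument, anatomy, other (assumed)

model = fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

frame = torch.rand(1, 3, 480, 640)       # one RGB video frame, values in [0, 1]
with torch.no_grad():
    logits = model(frame)["out"]          # shape: (1, NUM_CLASSES, 480, 640)
class_map = logits.argmax(dim=1)          # per-pixel class labels
```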

[0049] In one or more aspects, one or more machine-learning models are trained using multiple detectors that can operate on a shared set of input data and/or intermediate computed features. Further machine-learning refinements can be achieved by using one or more outputs of previously trained machine-learning networks. For example, semi- supervised learning can be used to initially train the one or more machine-learning models using partially annotated input data as a training dataset. A surgical workflow can include, for instance, surgical phases, steps, actions, and/or other such states/activities. Aspects further described herein with respect to surgical phase can be applied to other surgical workflow states/activities, such as surgical steps and/or actions. A surgical workflow and structure network learned as part of the one or more machine- learning models can be used to train and enhance one or more other networks. For example, feature space data from the surgical workflow and structure network can be fused with feature space data used for a region of interest proposal network. A proposed region of interest output of the region of interest proposal network can provide partial input to an image synthesis network. Thus, learning by multiple machine-learning models is chained to enhance detection and image adjustment using features related to surgical phase, anatomy, instruments, motion/depth, and multiple types of inputs, such as temporal, spatial, and sensor inputs.
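
The semi-supervised idea of initially training on partially annotated input data can be illustrated by masking the loss for unlabeled samples. The sketch below assumes PyTorch and a hypothetical phase-classification head; the convention of using label -1 for unannotated frames is chosen for this example and is not part of the disclosure.

```python
# Sketch of loss masking for partially annotated training data (assumed
# convention: label -1 marks frames without a phase annotation).
import torch
import torch.nn as nn

NUM_PHASES = 7                                   # hypothetical phase count
phase_head = nn.Linear(256, NUM_PHASES)          # stand-in classifier head
loss_fn = nn.CrossEntropyLoss(ignore_index=-1)   # -1 targets contribute no loss

features = torch.rand(8, 256)                    # features for 8 frames
labels = torch.tensor([2, -1, -1, 0, 5, -1, 1, -1])  # only 4 frames annotated

loss = loss_fn(phase_head(features), labels)     # gradients come from labeled frames only
```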

[0050] After training, the one or more machine-learning models can then be used in real-time to process one or more data streams (e.g., video streams, audio streams, RFID data, etc.). The processing can include predicting and characterizing visualization modifications in images of a video of a surgical procedure based on one or more surgical phases, instruments, and/or other structures within various instantaneous or block time periods. The visualization can be modified to highlight the presence, position, and/or use of one or more structures. Alternatively, or in addition, the structures can be used to identify a stage within a workflow (e.g., as represented via a surgical data structure), predict a future stage within a workflow, etc.

[0051] FIG. 1 shows a system 100 for predicting surgical phases and structures in surgical data and generating adaptive visualization of contextual targets using machine learning according to one or more aspects. System 100 uses data streams that are part of the surgical data to identify procedural states according to some aspects. System 100 includes a procedural control system 105 that collects image data and coordinates outputs responsive to predicted structures and states. The procedural control system 105 can include one or more devices (e.g., one or more user devices and/or servers) located within and/or associated with a surgical operating room and/or control center. System 100 further includes a machine-learning processing system 110 that processes the surgical data using one or more machine-learning models to identify a procedural state (also referred to as a phase or a stage), which is used to identify a corresponding output. It will be appreciated that machine-learning processing system 110 can include one or more devices (e.g., one or more servers), each of which can be configured to include part or all of one or more of the depicted components of the machine-learning processing system 110. In some instances, a part, or all of machine-learning processing system 110 is in the cloud and/or remote from an operating room and/or physical location corresponding to a part, or all of procedural control system 105. For example, the machine-learning training system 125 can be a separate device (e.g., a server) that stores its output as the one or more trained machine-learning models 130, which are accessible by the model execution system 140, separate from the machine-learning training system 125. In other words, in some aspects, devices that "train" the models are separate from devices that "infer," i.e., perform real-time processing of surgical data using the trained models 130.

[0052] Machine-learning processing system 110 includes a data generator 115 configured to generate simulated surgical data, such as a set of virtual images, or record surgical data from ongoing procedures, to train one or more machine-learning models. Data generator 115 can access (read/write) a data store 120 with recorded data, including multiple images and/or multiple videos. The images and/or videos can include images and/or videos collected during one or more procedures (e.g., one or more surgical procedures). For example, the images and/or video may have been collected by a user device worn by a participant (e.g., surgeon, surgical nurse, anesthesiologist, etc.) during the surgery, and/or by a non-wearable imaging device located within an operating room.

[0053] Each of the images and/or videos included in the recorded data can be defined as a base image and can be associated with other data that characterizes an associated procedure and/or rendering specifications. For example, the other data can identify a type of procedure, a location of a procedure, one or more people involved in performing the procedure, surgical objectives, and/or an outcome of the procedure. Alternatively, or in addition, the other data can indicate a stage of the procedure with which the image or video corresponds, rendering specification with which the image or video corresponds and/or a type of imaging device that captured the image or video (e.g., and/or, if the device is a wearable device, a role of a particular person wearing the device, etc.). Further, the other data can include image-segmentation data that identifies and/or characterizes one or more objects (e.g., tools, anatomical objects, etc.) that are depicted in the image or video. The characterization can indicate the position, orientation, or pose of the object in the image. For example, the characterization can indicate a set of pixels that correspond to the object and/or a state of the object resulting from a past or current user handling.

[0054] Data generator 115 identifies one or more sets of rendering specifications for the set of virtual images. An identification is made as to which rendering specifications are to be specifically fixed and/or varied. Alternatively, or in addition, the rendering specifications that are to be fixed (or varied) are predefined. The identification can be made based on, for example, input from a client device, a distribution of one or more rendering specifications across the base images and/or videos, and/or a distribution of one or more rendering specifications across other image data. For example, if a particular specification is substantially constant across a sizable data set, the data generator 115 defines a fixed corresponding value for the specification. As another example, if rendering-specification values from at least a predetermined amount of data span across a range, the data generator 115 can define the rendering specifications based on the range (e.g., to span the range or to span another range that is mathematically related to the range of distribution of the values).

[0055] A set of rendering specifications can be defined to include discrete or continuous (finely quantized) values. A set of rendering specifications can be defined by a distribution, such that specific values are to be selected by sampling from the distribution using random or biased processes.

[0056] One or more sets of rendering specifications can be defined independently or in a relational manner. For example, if the data generator 115 identifies five values for a first rendering specification and four values for a second rendering specification, the one or more sets of rendering specifications can be defined to include twenty combinations of the rendering specifications or fewer (e.g., if one of the second rendering specifications is only to be used in combination with an incomplete subset of the first rendering specification values or the converse). In some instances, different rendering specifications can be identified for different procedural phases and/or other metadata parameters (e.g., procedural types, procedural locations, etc.).

[0057] Using the rendering specifications and base image data, the data generator 115 generates simulated surgical data (e.g., a set of virtual images), which is stored at the data store 120. For example, a three-dimensional model of an environment and/or one or more objects can be generated using the base image data. Virtual image data can be generated using the model to determine - given a set of particular rendering specifications (e.g., background lighting intensity, perspective, zoom, etc.) and other procedure-associated metadata (e.g., a type of procedure, a procedural state, a type of imaging device, etc.). The generation can include, for example, performing one or more transformations, translations, and/or zoom operations. The generation can further include adjusting the overall intensity of pixel values and/or transforming RGB values to achieve particular color-specific specifications.
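
The combination of rendering specifications in paragraph [0056] can be pictured with a toy example; the specification names and values below are invented purely for illustration.

```python
# Toy illustration of combining rendering specifications; names and values
# are invented for the example.
from itertools import product

lighting_intensity = [0.2, 0.4, 0.6, 0.8, 1.0]                 # five values, spec #1
camera_pose = ["overhead", "lateral", "oblique", "close-up"]   # four values, spec #2

all_combinations = list(product(lighting_intensity, camera_pose))
print(len(all_combinations))   # 20 combinations

# A relational restriction, e.g. "close-up" only paired with the brightest
# lighting, yields fewer than twenty combinations.
restricted = [(light, pose) for light, pose in all_combinations
              if pose != "close-up" or light == 1.0]
print(len(restricted))         # 16 combinations
```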

[0058] A machine-learning training system 125 uses the recorded data in the data store 120, which can include the simulated surgical data (e.g., set of virtual images) and actual surgical data to train one or more machine-learning models. The machine-learning models can be defined based on a type of model and a set of hyperparameters (e.g., defined based on input from a client device). The machine-learning models can be configured based on a set of parameters that can be dynamically defined based on (e.g., continuous or repeated) training (i.e., learning, parameter tuning). Machine-learning training system 125 can use one or more optimization algorithms to define the set of parameters to minimize or maximize one or more loss functions. The set of (learned) parameters can be stored as a trained machine-learning model data structure 130, which can also include one or more non-learnable variables (e.g., hyperparameters and/or model definitions).

[0059] A model execution system 140 can access the machine-learning model data structure 130 and accordingly configure one or more machine-learning models for inference (i.e., prediction). The one or more machine-learning models can include, for example, a fully convolutional network adaptation, an adversarial network model, or other types of models as indicated in data structure 130. The one or more machine-learning models can be configured in accordance with one or more hyperparameters and the set of learned parameters.
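
A generic sketch of the parameter tuning described in paragraph [0058] follows; the stand-in model, optimizer, loss, and file name are placeholders chosen for the example and are not prescribed by the disclosure.

```python
# Generic training-step sketch for the parameter tuning described above; the
# model, optimizer, and loss are placeholders rather than the actual setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 8))   # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)         # hyperparameter
loss_fn = nn.CrossEntropyLoss()

def training_step(frames, labels):
    """One optimization step that reduces the loss over a batch of frames."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Learned parameters plus non-learnable settings persisted as the trained
# model data structure (cf. 130).
checkpoint = {"state_dict": model.state_dict(), "hyperparameters": {"lr": 1e-4}}
torch.save(checkpoint, "trained_model_130.pt")
```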

[0060] The one or more machine-learning models, during execution, can receive, as input, surgical data 147 to be processed and generate one or more inferences according to the training. For example, the surgical data 147 can include data streams (e.g., an array of intensity, depth, and/or RGB values) for a single image or for each of a set of frames representing a temporal window of fixed or variable length in a video. The surgical data 147 that is input can be received from a real-time data collection system 145, which can include one or more devices located within an operating room and/or streaming live imaging data collected during the performance of a procedure. The surgical data 147 can include additional data streams, such as audio data, RFID data, textual data, measurements from one or more surgical instruments/sensors, etc., that can represent stimuli/procedural state from the operating room. In some aspects, the real-time data collection system 145 can also receive eye tracking data 149 as a supplemental input, for example, based on one or more optical sensors that track the eyes of a surgeon performing a surgical procedure. The various inputs from different devices/sensors are synchronized before being input to the model.
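
One simple way to realize the synchronization of heterogeneous inputs mentioned above is nearest-timestamp alignment, sketched below with hypothetical stream layouts and field names; the actual synchronization mechanism is not specified in the disclosure.

```python
# Nearest-timestamp alignment sketch for pairing auxiliary streams with video
# frames before model input; stream layouts and field names are illustrative.
from bisect import bisect_left

def nearest_sample(stream, timestamp):
    """Return the (time, value) sample closest to `timestamp`.
    `stream` is a non-empty list of (time, value) pairs sorted by time."""
    times = [t for t, _ in stream]
    i = bisect_left(times, timestamp)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda sample: abs(sample[0] - timestamp))

def synchronize(frames, sensor_stream, gaze_stream):
    """Pair each (time, frame) with the nearest sensor and eye-tracking samples."""
    return [{
        "frame": frame,
        "sensor": nearest_sample(sensor_stream, t)[1],
        "gaze": nearest_sample(gaze_stream, t)[1],
    } for t, frame in frames]
```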

[0061] In some aspects, the real-time data collection system 145 can receive user input 148 from one or more devices, such as a tablet computer, a tactile input, or other such device. For example, the user input 148 can allow a surgical team to draw a contour of interest for tracking and enhancing over time. Drawings can be in a tree-form scribble format and/or an area selection, for instance, by marking corner/vertex points. Structures identified through user-based identification via user input 148 can be tracked in combination with or in place of structures identified through machine learning. Further, the user input 148 may support modifying contours or other features identified for enhancement by machine learning. Records of the user input 148 can be tracked to assist in further training of machine learning and/or can be tracked as preferences available for use by the surgical team in future surgical procedures.

[0062] The one or more machine-learning models can analyze the surgical data 147, and in one or more aspects, predict and/or characterize structures included in the visual data from the surgical data 147. The visual data can include image and/or video data in the surgical data 147. The prediction and/or characterization of the structures can include segmenting the visual data or predicting the localization of the structures with a probabilistic heatmap. In some instances, the one or more machine-learning models include or are associated with a preprocessing or augmentation (e.g., intensity normalization, resizing, cropping, etc.) that is performed prior to segmenting the visual data. An output of the one or more machine-learning models can include image-segmentation or probabilistic heatmap data that indicates which (if any) of a defined set of structures are predicted within the visual data, a location and/or position and/or pose of the structure(s) within the image data, and/or state of the structure(s) associated with a proposed region of interest. The location can be a set of coordinates in the image data. For example, the coordinates can provide a bounding box that defines the proposed region of interest. Alternatively, the coordinates provide boundaries that surround the structure(s) being predicted as the proposed region of interest. The one or more machine-learning models can be trained to perform higher-level predictions and tracking, such as predicting a phase of a surgical procedure and tracking one or more surgical instruments used in the surgical procedure. In some aspects, the machine-learning models can synthesize an image adjustment or provide supporting data for another component to synthesize an image adjustment.
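
The step from a probabilistic heatmap output to bounding-box coordinates for the proposed region of interest could look like the following sketch; the threshold is an arbitrary illustrative value, and the blob placement is fabricated for the example.

```python
# Illustrative conversion of a probabilistic heatmap into bounding-box
# coordinates for a proposed region of interest; the threshold is arbitrary.
import numpy as np

def heatmap_to_roi(heatmap, threshold=0.5):
    """Return (x_min, y_min, x_max, y_max) covering pixels at or above the
    threshold, or None when no pixel is confident enough."""
    ys, xs = np.where(heatmap >= threshold)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

heatmap = np.zeros((480, 640))
heatmap[200:260, 300:380] = 0.9        # a confident blob standing in for a structure
print(heatmap_to_roi(heatmap))          # (300, 200, 379, 259)
```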

[0063] A state detector 150 can use the output from the execution of the machine-learning model to identify a state within a surgical procedure (“procedure”). A procedural tracking data structure can identify a set of potential states that can correspond to part of a performance of a specific type of procedure. Different procedural data structures (e.g., and different machine-learning-model parameters and/or hyperparameters) may be associated with different types of procedures. The data structure can include a set of nodes, with each node corresponding to a potential state. The data structure can include directional connections between nodes that indicate (via the direction) an expected order during which the states will be encountered throughout an iteration of the procedure. The data structure may include one or more branching nodes that feed to multiple next nodes and/or can include one or more points of divergence and/or convergence between the nodes. In some instances, a procedural state indicates a procedural action (e.g., surgical action) that is being performed or has been performed and/or indicates a combination of actions that have been performed. In some instances, a procedural state relates to a biological state of a patient undergoing a surgical procedure. For example, the biological state can indicate a complication (e.g., blood clots, clogged arteries/veins, etc.) or a pre-condition (e.g., lesions, polyps, etc.).

[0064] Each node within the data structure can identify one or more characteristics of the state. The characteristics can include visual characteristics. In some instances, the node identifies one or more tools that are typically in use or availed for use (e.g., on a tool tray) during the state, one or more roles of people who are typically performing a surgical task, a typical type of movement (e.g., of a hand or tool), etc. Thus, state detector 150 can use the segmented data generated by model execution system 140 that indicates the presence and/or characteristics of particular objects within a field of view to identify an estimated node to which the real image data corresponds. Identification of the node (and/or state) can further be based upon previously detected states for a given procedural iteration and/or other detected input (e.g., verbal audio data that includes person-to-person requests or comments, explicit identifications of a current or past state, information requests, etc.).
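
The procedural tracking data structure described in paragraphs [0063] and [0064] can be pictured as a small directed graph. The sketch below uses invented state names, tools, and transitions purely for illustration; they are not taken from the disclosure.

```python
# Sketch of a procedural tracking data structure as a small directed graph;
# the states, tools, and transitions are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ProceduralState:
    name: str
    typical_tools: list = field(default_factory=list)
    next_states: list = field(default_factory=list)   # directional connections

states = {
    "access":     ProceduralState("access", ["trocar"], ["dissection"]),
    "dissection": ProceduralState("dissection", ["grasper", "hook"],
                                  ["clipping", "hemostasis"]),   # branching node
    "hemostasis": ProceduralState("hemostasis", ["bipolar forceps"], ["dissection"]),
    "clipping":   ProceduralState("clipping", ["clip applier"], ["extraction"]),
    "extraction": ProceduralState("extraction", ["retrieval bag"], []),
}

def expected_next(current: str) -> list:
    """States the detector would expect after the current one."""
    return states[current].next_states

print(expected_next("dissection"))   # ['clipping', 'hemostasis']
```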

[0065] In some aspects, state and phase values determined by the state detector 150 can be fed back to be used by one or more machine-learning models executed by the model execution system 140. For example, a machine-learning model can be used to generate features for use by the state detector 150, and state or phase determinations of the state detector 150 can become inputs for other machine-learning models or networks executed by the model execution system 140 as part of a chain or collection of machine-learning models trained to further enhance machine-learning results.

[0066] An output generator 160 can use the state to generate an output. Output generator 160 can include an alert generator 165 that generates and/or retrieves information associated with the state and/or potential next events. For example, the information can include details as to warnings and/or advice corresponding to current or anticipated procedural actions. The alert generator 165 can be configured to communicate with one or more other systems, such as procedural control system 105, to provide notice or trigger actions based on the information. The information can further include one or more events to monitor. The information can identify the next recommended action. An alert may include highlighting a structure that is not within the current region of interest, such as highlighting bleeding, anatomical structures, and/or surgical structures where further attention may be needed to shift the focus of the surgeon.

[0067] The user feedback can be transmitted to an alert output system 170, which can cause the user feedback to be output via a user device and/or other devices that is (for example) located within the operating room or control center. The user feedback can include a visual, audio, tactile, or haptic output that is indicative of the information. The user feedback can facilitate alerting an operator, for example, a surgeon, or any other user of the system 100. [0068] Output generator 160 can also include an augmentor 175 that generates or retrieves one or more graphics and/or text to be visually presented on (e.g., overlaid on) or near (e.g., presented underneath or adjacent to or on separate screen) real-time capture of a procedure. Augmentor 175 can further identify where the graphics and/or text are to be presented (e.g., within a specified size of a display). In some instances, a defined part of a field of view is designated as being a display portion to include augmented data. In some instances, the position of the graphics and/or text is defined so as not to obscure the view of an important part of an environment for the surgery and/or to overlay particular graphics (e.g., of a tool) with the corresponding real-world representation. A modified visualization of the surgical procedure can include incorporating an image adjustment in a real-time output of the video of the surgical procedure, such as adjusting one or more of contrast, color, and focus in a region of interest. Further examples of adjustments can include full new image synthesis, recoloring, pastelization, and/or other enhancement techniques known in the art. The image adjustments may appear as a virtual light source within the image and can be concentrated with a greater intensity near the centroid of the region of interest, for example.

[0069] Augmentor 175 can send the graphics and/or text and/or any positioning information to an augmented reality device 180, which can integrate the graphics and/or text with a user's environment in real-time as an augmented reality visualization. Augmented reality device 180 can include a pair of goggles that can be worn by a person participating in part of the procedure. It will be appreciated that, in some instances, the augmented display can be presented at a non-wearable user device, such as at a computer or tablet. The augmented reality device 180 can present the graphics and/or text at a position as identified by augmentor 175 and/or at a predefined position. Thus, a user can maintain a real-time view of procedural operations and further view pertinent state- related information.

[0070] FIG. 2 depicts a flowchart of a method for adaptive visualization of contextual targets in surgical data using machine learning according to one or more aspects. The method 200 can be executed by the system 100 of FIG. 1 as a computer-implemented method.

[0071] The method 200 includes using (in an inference phase) one or more machine learning models 702 of FIG. 7 to detect, predict, and track surgical phases being performed in a procedure and structures used in the procedure, propose a region of interest, and synthesize an image adjustment. The one or more machine-learning models 702 are examples of the one or more trained machine-learning models 130 of FIG. 1.

[0072] At block 202, the system 100 can capture a video of a surgical procedure as part of surgical data 147 of FIG. 1 as part of a live stream while the surgical procedure is performed. The surgical data 147 may also include spatial data and sensor data temporally associated with a video stream of the surgical procedure.

[0073] At block 204, a proposed region of interest is identified in an image from a video of the surgical procedure. For example, the one or more machine-learning models 702 can predict the proposed region of interest based on one or more contextual targets. The contextual targets can include identified surgical phases and structures associated with the image. For example, by tracking progress through the surgical procedure, such as steps completed and next steps to be performed, the system 100 can determine where the attention of the surgeon is or should be focused within the image. Contextual targets can also track previous areas where surgical work has been performed as part of the same surgical procedure or a previous surgical procedure. Alternatively, user input 148 of FIG. 1 can be used to identify the proposed region of interest, for instance, by a user drawing a contour around a structure of interest as drawing input received from one or more devices. Further, where the one or more machine-learning models 702 are used to predict the proposed region of interest, the user input 148 can modify one or more characteristics of the proposed region of interest, such as shifting location, size, and/or shape of the proposed region of interest as predicted. Thus, block 204 can include extracting as a prediction of a first machine-learning model or as a user input, a proposed region of interest in an image from a video of a surgical procedure based on one or more contextual targets. Further, it will be understood that multiple proposed regions of interest can be identified and may include a combination as predicted by one or more machine learning models and/or user inputs.

[0074] At block 206, the one or more machine-learning models 702 can synthesize an image adjustment based on the proposed region of interest and the image. As further described herein, the one or more machine-learning models 702 can include a plurality of networks that combine feature detection and region identification to understand the context of structures depicted in the image and how to enhance one or more structures within the image to improve visibility for the surgeon. At block 208, a modified visualization of the surgical procedure can be generated by incorporating the image adjustment in a real-time output of the video of the surgical procedure. In some aspects, the modified visualization can be based on the proposed region of interest but may not include image adjustments within the proposed region of interest. The real-time output can be visible to the surgeon while the surgical procedure is in progress to assist with performing further surgical actions. The modified visualization that shows the image adjustments can be enabled or disabled in response to user input 148 depending upon the preferences of the surgical team observing the output.

[0075] At block 210, the process can continue throughout the surgical procedure to modify the proposed region of interest and the image adjustment in one or more subsequent images in the video of the surgical procedure. At block 212, the video of the surgical procedure with the modified visualization can be displayed. For example, the video can be output to one or more augmented reality devices 180 of FIG. 1 and/or another display (e.g., display 819 of FIG. 8). The use of modified visualization can be turned on and off by the surgeon as desired. In one or more aspects, the modified visualization may be temporarily turned off while there is a substantial change in the scene, or the camera becomes obstructed. Further, relative intensity of the image adjustment can be constant across the region of interest or can be contoured to provide greater intensity in selected portions of the region of interest. In some aspects, the image adjustment intensity may be user selectable, to increase or decrease the adjustment effect to match the preferences of the user.
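
The per-frame flow of method 200 can be summarized in the short sketch below; propose_roi, synthesize, display, and enhancement_enabled are hypothetical stand-ins for the trained networks, the user-input path, and the output hardware, none of which are specified here.

```python
# Per-frame sketch of method 200; propose_roi, synthesize, and display are
# hypothetical stand-ins for the trained networks, user-input path, and
# output hardware.
def run_adaptive_visualization(video_frames, propose_roi, synthesize, display,
                               enhancement_enabled=lambda: True):
    for frame in video_frames:                      # block 202: live capture
        roi = propose_roi(frame)                    # block 204: model prediction or user input
        if enhancement_enabled() and roi is not None:
            output = synthesize(frame, roi)         # blocks 206/208: modified visualization
        else:
            output = frame                          # enhancement toggled off
        display(output)                             # block 212: real-time display
    # Block 210 is the loop itself: the ROI and adjustment are refreshed on
    # each subsequent image of the video.
```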

[0076] The one or more machine-learning models 702 of FIG. 7 can operate on surgical data per frame, but can use information from a previous frame, or a window of previous frames. FIG. 3 depicts a visualization of surgical data used for training the one or more machine-learning models 702 according to one or more aspects. The depicted example surgical data 300 includes video data, i.e., a sequence of N images 302. For training the one or more machine-learning models 702, images 302, and other inputs can be annotated. The annotations can include temporal annotations 306 that identify a surgical phase to which an image belongs or tracking information for different structures in temporal data 305. Accordingly, a particular set or subset of images 302 represents a surgical phase or tracking state. The subset of images 302 can include one or more images and may be sequential.

[0077] Further, the annotations can include spatial annotations 308 of spatial data 307 that identify one or more objects in the images 302. For example, the spatial annotations 308 can specify one or more regions of an image and identify respective objects in the regions. Further, an image can be associated with sensor annotations 310 that include values of one or more sensor measurements from sensor data 309 at the time the image 302 was captured. The sensor measurements can be from sensors associated with the patient, such as oxygen level, blood pressure, heart rate, etc. Alternatively, or in addition, the sensor measurements can be associated with one or more components being used in the surgical procedure, such as a brightness level of an endoscope, a fluid level in a tank, energy output from a generator, etc. Sensor measurements can also come from real-time robotic systems indicating surgical activations or position or pose information about instruments. Further, sensor data 309 can include eye tracking data 149 that tracks a position of gaze relative to a region of an image being observed. Other types of annotations can be used to train the one or more machine-learning models 702 in other aspects.

[0078] The one or more machine-learning models 702 can take into consideration one or more temporal inputs, such as sensor information and acoustic information, along with spatial annotations associated with images 302 when detecting features in the surgical data 300. A set of such temporally synchronized inputs from the surgical data 300 that are analyzed together by the one or more machine-learning models 702 can be referred to as an "input window" 320 of a training dataset 301. However, the input window can include any type of observable data, including video data, that can be temporally and/or spatially aligned from the surgical data 300 and/or other sources. The one or more machine-learning models 702, during inference, can operate on the input window 320 to predict a surgical phase represented by the images in the input window 320, although at least a portion of the annotations may not be available where the input window 320 uses real-time input data. Each image 302 in the input window 320 can be associated with synchronized temporal and spatial annotations, such as measurements at a particular timepoint including sensor information, acoustic information, and/or other information.

[0079] The input window 320 can span a plurality of frames of a video stream of images 302 in combination with the spatial data 307 and the sensor data 309 temporally associated with the frames of images 302, for example. The input window 320 also correlates the frames of images 302 with temporal data 305. The input window 320 can slide with respect to time as the one or more machine-learning models 702 predict the phase and track the one or more surgical instruments and/or as the input window 320 is used for training the one or more machine-learning models 702. In some instances, during training or in real-time use, the input window 320 slides on a frame-by-frame basis, such as starting with frame number 10 and advancing to frame number 11 as the starting position. Each iteration using the input window 320 may use available data sets of the starting position plus the next two frames before sliding the input window 320 to start at a different frame number. Thus, one iteration may act upon frame numbers 10, 11, and 12 as the input window 320, and the next iteration may act upon frame numbers 11, 12, and 13. Further, overlap during sliding of the input window 320 may be reduced, for instance, where the last frame number of one iteration becomes the starting frame number for the next iteration. Alternatively, there may be a gap 325 as one or more of the frames are skipped as the input window 320 slides to a subsequent input window 330 position. The gap 325 may result in no overlap of frame numbers between the input window 320 and the subsequent input window 330.
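
The sliding-window behavior described in paragraph [0079], including reduced overlap and gaps, can be reproduced with a simple generator; the window size and stride values below are chosen to mirror the frame-number examples and are not part of the disclosure.

```python
# Sliding-window sketch over a frame sequence; window_size and stride control
# the overlap or gap between consecutive windows, mirroring the examples above.
def sliding_windows(frames, window_size=3, stride=1):
    for start in range(0, len(frames) - window_size + 1, stride):
        yield frames[start:start + window_size]

frames = list(range(10, 20))                          # frame numbers 10..19
print(list(sliding_windows(frames, 3, 1))[:2])        # [[10, 11, 12], [11, 12, 13]]
print(list(sliding_windows(frames, 3, 2))[:2])        # [[10, 11, 12], [12, 13, 14]] reduced overlap
print(list(sliding_windows(frames, 3, 4))[:2])        # [[10, 11, 12], [14, 15, 16]] gap, no overlap
```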

[0080] In one or more aspects, separate instances of the one or more machine-learning models 702 can be trained for respective types of procedures. For example, separate instances of the one or more machine-learning models 702 can be trained to predict phases in knee arthroscopy, laparoscopic removal of a gallbladder, endoscopic mucosal resection, and other such surgical procedures. Because each procedure can have specific phases (e.g., the sequence of operations) and specific attributes (e.g., anatomical features, instruments, etc.), the one or more machine-learning models 702 can be trained to predict and identify the phases of the procedures. It is understood that the technical solutions described herein are not limited to a particular type of surgical procedure unless explicitly indicated. As such, "surgical procedure" or "procedure" can be any of one or more surgeries performed, and not limited to the above-listed examples.

[0081] Training of the one or more machine-learning models 702 can be performed using various types of encoders, feature extractors, and task-specific decoders to form task-specific networks for desired output types of the one or more machine-learning models 702. FIG. 4 depicts a flow diagram for training of a machine-learning model 400 used to propose regions of interest, which can be used to train the one or more machine-learning models 702. The machine-learning model 400 includes a region of interest proposal network 401 that receives input from multiple sources. Input data 420 used by the machine-learning model 400 can be from the input window 320 of FIG. 3, for example, or can be from one or more other sources. For instance, temporal input 422 can be motion data or a depth map associated with an image 302 of a frame 424, where a sequence of multiple images 302 prior to the frame 424 can be used to detect motion and/or depth in combination with an image 302 of the frame 424. Detection input 426 can include input data that temporally aligns with the frame 424. The detection input 426 can include, for instance, a plurality of temporally aligned annotated data streams, such as the temporal annotations 306, spatial annotations 308, and sensor annotations 310, and can include an expanded input window 320 with a temporal alignment of multiple frames of training data.

[0082] A detection model 430 can determine a feature space 432 based on the detection input 426 associated with surgical data for a procedure. Task-specific decoding of the feature space 432 in the detection model 430 can result in detecting a phase 434 and a structure 436, such as a phase of surgery and one or more surgical instruments being used. The detection model 430 can train one or more encoders and decoders, such as encoder 732 and decoders 734, 736 of FIG. 7, where the feature space 432 is trained for various tasks associated with the surgical procedure.

[0083] The machine-learning model 400 can include feature encoders 402, 412, a feature decoder 406, and feature fusion 408. The feature encoders 402, 412 can be based on one or more artificial neural networks, such as convolutional neural network (CNN), recurrent neural network (RNN), feature pyramid network (FPN), transformer networks, or any other type of neural network or a combination thereof. The feature encoders 402, 412 can use a known technique, supervised, self-supervised or unsupervised (e.g., autoencoder), to learn efficient data “coding” in the surgical data. The “coding” maps input data 420 to one or more feature spaces (e.g., feature spaces 404, 405, 414), which can be fused with feature space 432 and used by the feature decoder 406. The feature decoder 406, the phase 434, and the structure 436 can collectively predict a region of interest in an image 302 of the surgical data as a proposed region of interest 440. The feature encoders 402, 412 can be pre-trained to extract one or more features with task-agnostic input to form feature vectors or feature pyramids as the feature spaces 404, 405, 414.

[0084] In the example of FIG. 4, the structure 436 can detect and track one or more surgical instruments at least partially depicted in one or more images 302 of a video stream from the detection input 426, e.g., from input window 320. The structure 436 may be defined with respect to identifying one or more surgical instruments being present along with position, orientation, and/or movement. The phase 434 can identify a phase of a surgical procedure based on the detection input 426 and relationships learned over a training period. In one or more aspects, the machine-learning is further enhanced by establishing relationships between the feature spaces 404, 405, 414. For example, phase and structural detection features in the feature space 432 can be fused with temporal features from feature space 404 and image-based features from feature spaces 405 and 414. In some aspects, computer vision models can be used to label data in the input data 420 and/or detection input 426. The proposed region of interest 440 can be identified as contours or heatmaps in a current field of view that is likely to be of interest for the current surgical phase, objectives, structures, and instrument position.

[0085] Training of the machine-learning model 400 in combination with the detection model 430 can include using computer vision modeling in combination with one or more artificial neural networks, such as encoders, Recurrent Neural Networks (RNN, e.g., LSTM, GRU, etc.), CNNs, Temporal Convolutional Neural Networks (TCNs), decoders, Transformers, other deep neural networks, etc. For example, an encoder can be trained using weak labels (such as lines, ellipses, local heatmaps or rectangles) or full labels (segmentation masks, heatmaps) to predict (i.e., detect and identify) features in surgical data. In some cases, full labels can be automatically generated from weak labels by using trained machine-learning models. Encoders can be implemented using architectures, such as ResNet, VGG, or other such neural network architectures. During training, encoders can be trained using input windows 320 that include images 302 that are annotated with the labels (weak or full).

[0086] Extracted features from the input window 320 can include one or more labels assigned to one or more portions of the surgical data in the input window 320. Other types of localizations that can be predicted by task-specific decoders can include anatomical localization that provides locations, e.g., coordinates, heatmaps, bounding boxes, boundaries, masks, etc., of one or more anatomical structures identified in the input window 320. Anatomical structures that are identified can include organs, arteries, implants, surgical artifacts (e.g., staples, stitches, etc.), etc. Further yet, based on the type of surgical procedure being performed, one or more of the predicted anatomical structures can be identified as critical structures for the success of the procedure. Anatomical localization, in one or more aspects, can be limited to the spatial domain (e.g., bounding box, heatmap, segmentation mask) of the critical structures but uses temporal annotations 306 of FIG. 3 to enhance temporal consistency of the predictions. The temporal annotations 306 can be based on sensor measurements, acoustic information, and other such data that is captured at the time of capturing the respective images 302 of FIG. 3.

[0087] Temporal information that is provided by phase information can be used to refine confidence of the anatomy prediction in one or more aspects. In one or more aspects, the temporal information can be fused with a feature space, and the resulting fused information can be used by a decoder to output anatomical localization, for example.

[0088] Feature fusion 408 can be based on transform-domain image fusion algorithms to implement an image fusion neural network (IFNN). For example, an initial number of layers in the IFNN extract salient features from the temporal information output by the first model and the feature space. Further, the extracted features are fused by an appropriate fusion rule (e.g., elementwise-max, elementwise-min, elementwise-mean, etc.) or a more complex learning-based neural network module designed to learn to weight and fuse input data (e.g. using attention modules). The fused features can be reconstructed by subsequent layers of the IFNN to produce input data, such as an informative fusion image, for the decoder to analyze. Other techniques for fusing the features can be used in other aspects.
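To make the elementwise fusion rules concrete, here is a minimal sketch of fusing two feature maps; the tensor shapes and the choice of PyTorch are assumptions for illustration, not the IFNN architecture itself.

```python
import torch


def fuse_features(feat_a: torch.Tensor, feat_b: torch.Tensor, rule: str = "max") -> torch.Tensor:
    """Fuse two feature maps of identical shape with a simple elementwise rule."""
    if rule == "max":
        return torch.maximum(feat_a, feat_b)
    if rule == "min":
        return torch.minimum(feat_a, feat_b)
    if rule == "mean":
        return (feat_a + feat_b) / 2.0
    raise ValueError(f"unknown fusion rule: {rule}")


# Two hypothetical feature maps with shape (batch, channels, height, width).
a = torch.randn(1, 64, 32, 32)
b = torch.randn(1, 64, 32, 32)
fused = fuse_features(a, b, rule="mean")
```

A learned fusion module (e.g., attention-based weighting) could replace the fixed rule without changing the surrounding pipeline.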

[0089] When the input data 420 includes motion information, the motion can be detected based on changes between a sequence of images 302 over a period of time and/or through sensor information that tracks movement based on sensor data 309. A depth map may be a disparity map, a stereo map, a distance map, or other such maps that estimate three-dimensional depth based on one or more two-dimensional images. For example, a single two-dimensional image can be used for monocular depth estimation, or multiple two-dimensional images can be used, e.g., stereo depth estimation. In some aspects, a depth map can be determined using another machine-learning model or depth estimation network. Motion and/or depth maps need not be used in various aspects to predict the proposed region of interest 440. For example, other inputs can be used to assist in predicting the proposed region of interest 440. Further, predicting the proposed region of interest 440 can be performed without temporal input 422.

[0090] In one or more aspects, predicting the proposed region of interest 440 can further include a measure of the uncertainty of the processing, i.e., a level of confidence that the data points resulting from the processing are correct. The measure represents a confidence score of the outputs. The confidence score is a measure of the reliability of a prediction. For example, a confidence score of 95 percent or 0.95 means that there is a probability of at least 95 percent that the prediction is reliable. The confidence score can be computed as a distance transform from the central position to attenuate predictions near the boundaries. The confidence score can also be computed as a probabilistic formulation (e.g., Bayesian deep learning, probabilistic outputs like softmax or sigmoid functions, etc.). The confidence scores for various predictions can be scaled and/or normalized within a certain range, e.g., [0, 1]. The contours of the proposed region of interest 440 can be defined, for example, by comparing confidence scores to a minimum threshold.
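One plausible way to realize the boundary attenuation and thresholding described above is sketched below: a per-pixel confidence map is attenuated toward the image borders with a distance transform and then thresholded to obtain a binary region. The attenuation scheme and the threshold value are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def region_mask_from_confidence(confidence: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Attenuate confidence near image borders, then threshold to a binary region mask."""
    h, w = confidence.shape
    border = np.ones((h, w), dtype=bool)
    border[1:-1, 1:-1] = False
    # Distance of each pixel to the nearest image border, normalized to [0, 1].
    dist_to_border = ndimage.distance_transform_edt(~border)
    attenuation = dist_to_border / dist_to_border.max()
    attenuated = confidence * attenuation
    return attenuated >= threshold


conf = np.random.rand(128, 128)      # stand-in for a model's per-pixel confidence
mask = region_mask_from_confidence(conf, threshold=0.6)
```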

[0091] FIG. 5A depicts an example of non-augmented visualization of a surgical view, and FIG. 5B depicts an example of an augmented visualization of the surgical view of FIG. 5A according to one or more aspects. An image 500A can be captured as part of surgical data 147 of FIG. 1 from a frame of a video of a surgical procedure. In the example of FIG. 5A, the image 500A depicts a surgical instrument 502 proximate to an anatomical structure 504. In this example, the surgical instrument 502 and the anatomical structure 504 are located in close physical proximity to a centroid 505 of the image 500A, while background structures 506 are further separated from the centroid 505. According to one or more aspects, when adaptive visualization is activated, the one or more machine-learning models 702 of FIG. 7 can predict the proposed region of interest 440 in image 500B to generate a modified visualization of the surgical procedure by incorporating an image adjustment in a real-time output of the video of the surgical procedure. The contours of the proposed region of interest 440 need not be directly displayed as part of the image 500B to minimize distractions to the surgeon. The image adjustment may appear, for example, as a change in contrast, brightness, focus, or other such parameters to enhance visibility for the surgeon. In the example of FIG. 5B, the proposed region of interest 440 may align in close proximity to the centroid 505 of the image 500B or be located elsewhere depending upon the training provided to the one or more machine-learning models 702. The phase of surgery and objectives as determined or predicted can be used to assist in identifying anatomical structures in close proximity to surgical instruments that may be proximate to or within the predicted region of interest 440. The adjustment to apply may be selected based on parameters of the image 500B, such as average brightness, contrast, size ratio of the region of interest 440 relative to the background structures 506, and other such factors. Overlays and visualizations can be generated by the augmentor 175 of FIG. 1.

[0092] In some aspects, the region of interest 440 can be modified by a surgical team through the user input 148 of FIG. 1. For example, the user input 148 can allow a user to shift a position of the region of interest 440 for visual enhancement. Other examples can include expanding or contracting the region of interest 440. Other customizations can include redefining edges of the region of interest 440, for instance, to more closely align the region of interest 440 to one or more specific features of interest. Additionally, the user input 148 can provide selectable options to change enhancement options, such as increasing/decreasing brightness, contrast, sharpness, colorization, and other such image adjustments. The user input 148 may also support the addition of labels/annotations as an overlay or as metadata captured separately. Further, the user input 148 can support selection of additional structures such that multiple regions of interest 440 can be enhanced in the same image.

[0093] Structures that may be considered for modified visualization can include those identified as “critical anatomical structures". Critical anatomical structures can be specific to the type of surgical procedure being performed and identified automatically. Additionally, a surgeon or any other user can configure the system 100 to identify particular anatomical structures as critical for a particular patient. The selected anatomical structures are critical to the success of the surgical procedure, such as anatomical landmarks (e.g., Calot triangle, Angle of His, etc.) that need to be identified during the procedure or those resulting from a previous surgical task or procedure (e.g., stapled or sutured tissue, clips, etc.). The system 100 can access a plurality of surgical objectives associated with the surgical procedure and correlate the surgical objectives with the one or more surgical instruments and the phase of the surgical procedure. Observations relative to critical anatomical structures and surgical objectives can be used to control alert generation, as well as synthesize image adjustments.

[0094] FIG. 6 depicts a flow diagram of a machine-learning model 600 to perform region of interest enhancement according to one or more aspects. The machine-learning model 600 can be trained to implement a portion of the one or more machine-learning models 702 of FIG. 7. In the example of FIG. 6, frame 424 provides input to feature encoder 602. The feature encoder 602 can be based on one or more artificial neural networks, such as convolutional neural network (CNN), recurrent neural network (RNN), feature pyramid network (FPN), transformer networks, or any other type of neural network or a combination thereof. The feature encoder 602 can use a known technique, supervised, self-supervised or unsupervised (e.g., autoencoder), to learn efficient data “coding” in the surgical data. The “coding” maps input data to one or more feature spaces (e.g., feature space 604), which can be used by feature decoder 606 to perform synthesizing of a portion of an image as a synthesized region of interest 608. In one or more aspects, the feature decoder 606 can also receive the proposed region of interest 440 as an input.

[0095] In the example of FIG. 6, the machine-learning model 600 can include an image synthesis network 601, which may use an autoencoder or other technique to enhance one or more regions of an image in the frame 424. In some aspects, training can be performed in an unsupervised manner, for instance, by saturating or reducing color and/or contrast of images and learning to reconstruct the images. Image synthesis may use discriminator/generator components with temporal information for producing more consistent results. The proposed region of interest 440 can guide localization of where to apply the synthesized region of interest 608.
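A minimal sketch of the unsupervised training idea, degrading color/contrast and learning to reconstruct the original image, is shown below; the tiny autoencoder and the degradation function are assumptions chosen only to illustrate the training loop, not the image synthesis network 601 itself.

```python
import torch
from torch import nn


class TinyAutoencoder(nn.Module):
    """Illustrative encoder/decoder; a real image synthesis network would be larger."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                     nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))


def degrade(images: torch.Tensor, contrast: float = 0.4) -> torch.Tensor:
    """Reduce contrast around the per-image mean as a stand-in for saturating/reducing color."""
    mean = images.mean(dim=(2, 3), keepdim=True)
    return mean + contrast * (images - mean)


model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(4, 3, 64, 64)      # stand-in batch of video frames
for _ in range(2):                     # a couple of illustrative steps
    recon = model(degrade(images))
    loss = loss_fn(recon, images)      # learn to restore the original appearance
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```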

[0096] FIG. 7 depicts a flow diagram 700 of image synthesis with visualization enhancement using one or more machine-learning models according to one or more aspects. One or more machine-learning models 702 can include a first machine-learning model 704 to predict the proposed region of interest 440 in an image from a video of a surgical procedure based on one or more contextual targets, such as image 500A of FIG. 5A. The image may be part of a frame 720 that can include metadata, such as timing information to align with other observed parameters of the surgical procedure. The one or more machine-learning models 702 can also include a second machine-learning model 706 that can synthesize an image adjustment based on the proposed region of interest 440 and the image. The synthesized region of interest 608 can be used to generate a modified visualization of the surgical procedure by incorporating the image adjustment in a real time output of the video of the surgical procedure. For example, the synthesized region of interest 608 can modify image data in the proposed region of interest 440 resulting in a modified/enhanced image, such as image 500B of FIG. 5B. The modified image with the synthesized region of interest 608 can be inserted into a frame 710 of a video 703 to align with timing of the frame 720 used as input. The frame 710 effectively replaces the frame 720 in the video 703 at a designated point in time. Aligned inputs can include other types of sensed data. For example, eye tracking of a surgeon can be performed during the surgical procedure, and the proposed region of interest 440 can be predicted based at least in part on a detected area of focus from the eye tracking of the surgeon as temporally and spatially aligned with image data of the frame 720.
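The overall flow from frame 720 to frame 710 might be organized as in the sketch below; `roi_model`, `synthesis_model`, and `blend_into_frame` are hypothetical placeholders standing in for the first machine-learning model 704, the second machine-learning model 706, and the compositing step, and the confidence gate is an assumption drawn from the later discussion of confidence thresholds.

```python
import numpy as np


def enhance_frame(frame, roi_model, synthesis_model, min_confidence=0.5):
    """Sketch of the two-stage per-frame pipeline.

    roi_model(frame) -> (roi_mask, confidence)            # hypothetical interface
    synthesis_model(frame, roi_mask) -> synthesized image  # hypothetical interface
    """
    roi_mask, confidence = roi_model(frame)
    if confidence < min_confidence:
        return frame                               # leave the live feed untouched
    synthesized = synthesis_model(frame, roi_mask)
    return blend_into_frame(frame, roi_mask, synthesized)


def blend_into_frame(frame: np.ndarray, roi_mask: np.ndarray, synthesized: np.ndarray) -> np.ndarray:
    """Replace pixels inside the proposed region with the synthesized adjustment."""
    out = frame.copy()
    out[roi_mask] = synthesized[roi_mask]
    return out
```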

[0097] In one or more aspects, the frame 720 can be extracted from a live feed of video 703 of a surgical procedure and be analyzed to identify semantic labels in the field of view from a spatio-temporal perspective. Computer vision models can accept weak labels, such as ellipses, bounding boxes, or lines. Full labels can also be generated and used, such as contours, outlines, polygons, and curves. The at least partially labeled input can be used to predict detailed localization as masks, contours, heatmaps, and other such region identifiers for organs, objects, and other such structures as a proposed region of interest 440. There can be multiple regions of interest identified within a single frame 720. Spatial labels can be linked in time by using motion models and/or depth maps to improve temporal consistency and visualization with respect to time. As one example, image synthesis can enhance one or more image properties within each of the proposed regions of interest 440. Further, background regions of an image may be modified to create a greater enhancement effect in the proposed regions of interest. The modifications can be applied to the live feed of video 703. The effects of the visualization modification can be stabilized over time by blending in the temporal domain.

[0098] The first machine-learning model 704 can include a surgical phase and structure network 730 configured to determine a phase 434 of the surgical procedure in the image from frame 720. The surgical phase and structure network 730 can include an encoder 732 and task-specific decoders 734, 736 that can be trained as part of detection model 430 of FIG. 4 with feature spaces 432 to detect the phase 434 and structure 436 for a particular type of surgical procedure, for example. One or more contextual targets can be determined based on one or more outputs of the surgical phase and structure network 730, such as the phase 434 and/or structure 436.

[0099] Some aspects can include determining motion based on temporal data associated with the image and determining a current area of focus as at least a portion of the proposed region of interest 440 based on the motion. As an example, a motion model can be included in temporal input 422. Alternatively or additionally, a depth map can be used to refine the proposed region of interest 440 in temporal input 422. A feature encoder 738 can be implemented as part of the region of interest proposal network 401, for instance, as a combination of the feature encoders 402, 412 of FIG. 4 after training is performed. The first machine-learning model 704 can be trained based on a training dataset 301 that includes a plurality of temporally aligned annotated data streams which may include temporal annotations 306, spatial annotations 308, and/or sensor annotations 310. The feature encoder 738 can receive the frame 720 and temporal input 422 as input. Alternatively, in some aspects, the temporal input 422 can be omitted. The feature decoder 406 can also receive phase 434 along with fused features as previously described with respect to FIG. 4. Feature fusion can be performed to combine one or more task-specific features of the surgical procedure with one or more temporally aligned features spanning two or more images of frames 720.

[0100] In some aspects, the proposed region of interest 440 can be modified or replaced based on user input 148 of FIG. 1. As previously described, a surgical team can have the option to modify or replace the proposed region of interest 440 as determined by the first machine-learning model 704 with customized input defined through the user input 148. The user input 148 can be a hand-drawn or separate machine-generated contour that adjusts or replaces the proposed region of interest 440 as determined by the first machine-learning model 704.

[0101] The frame 720 and proposed region of interest 440 can be provided as input to the feature encoder 602 of the second machine-learning model 706 that is a trained instance of the machine-learning model 600 of FIG. 6, also referred to as an image synthesis network. The feature decoder 606 of second machine-learning model 706 uses the feature space 604 after training to determine the synthesized region of interest 608 based on inputs from the feature encoder 602.

[0102] Linking of various input data sources to align with the frame 720 input and frame 710 output can include the use of metadata, tags, overlays, or separately tracked relationship information. For example, when performing real-time inferences, only select frames 710 of the video stream 703 may be updated with or linked to the synthesized region of interest 608. The update rate may depend on the processing capacity of the system 100 of FIG. 1. In some aspects, the determination of the proposed region of interest 440 may occur at a different update rate than the generation of the synthesized region of interest 608. For example, once the proposed region of interest 440 is determined, the position of the proposed region of interest 440 can be tracked in one or more subsequent frames rather than being fully determined again for one or more subsequent frames. This can reduce the real-time computational burden of the system 100 of FIG. 1. By periodically determining the proposed region of interest 440 at a reduced rate, more computational resources may be available to generate the synthesized region of interest 608 with a faster update rate. Filters or other techniques can be used to smooth transitions between the enhancements added by the synthesized region of interest 608. For instance, the effects can fade in or fade out over several frames 710 to avoid flickering and/or other visual distractions.

[0103] During an inference phase, the one or more machine-learning models 702 can be input with surgical data 147 that has not been pre-processed. The one or more machine-learning models 702, in the inference phase, generate the predictions. The one or more machine-learning models 702 can also output corresponding confidence scores associated with the predictions. When confidence scores are below a minimum threshold, the synthesis of visualization can be turned off to reduce the chance of highlighting a non-targeted object or structure.
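The decoupled update rates and the temporal fading described above could be arranged roughly as follows; the keyframe interval, the `track_roi` helper, and the linear fade are illustrative assumptions rather than specifics of this disclosure.

```python
def track_roi(previous_roi, frame):
    """Hypothetical placeholder: a real tracker would shift the region using motion cues."""
    return previous_roi


def process_stream(frames, roi_model, synthesis_model, roi_every=10, fade_frames=5):
    """Determine the proposed region at a reduced rate and track it in between keyframes.

    roi_model(frame) and synthesis_model(frame, roi) are hypothetical callables
    standing in for the region proposal and image synthesis models.
    """
    roi = None
    outputs = []
    for i, frame in enumerate(frames):
        if i % roi_every == 0:
            roi = roi_model(frame)              # full proposal at the reduced rate
        else:
            roi = track_roi(roi, frame)         # cheap tracking on in-between frames
        enhanced = synthesis_model(frame, roi)  # synthesized region at the faster rate
        # Fade the enhancement in over the first few frames to avoid flicker.
        alpha = min(1.0, (i + 1) / fade_frames)
        outputs.append((1 - alpha) * frame + alpha * enhanced)
    return outputs
```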

[0104] The outputs of the one or more machine-learning models 702 can be used by the output generator 160 to provide augmented visualization via the augmented reality devices 180. The augmented visualization can include the graphical overlays being overlaid on the corresponding features (anatomical structure, surgical instrument, etc.) in the image(s).

[0105] The output generator 160 can also provide user feedback via the alert output system 170 in some aspects. The user feedback can include highlighting, using graphical overlays, one or more portions of the image(s) to depict proximity between the surgical instrument(s) and anatomical structure(s). Alternatively, or in addition, the user feedback can be displayed in any other manner, such as a message, an icon, etc., being overlaid on the image(s).

[0106] In some aspects, to facilitate real-time performance, an input window 320 can be analyzed at a predetermined frequency, such as 5 times per second, 3 times per second, 10 times per second, etc. The analysis can result in identification of locations of anatomical structures and surgical instruments in the images 302 that are in the input window 320. It can be appreciated that the video of the surgical procedure includes images 302 that are between two successive input windows 320. For example, if the video is captured at 60 frames per second, and if the input window 320 includes 5 frames, and if the input window 320 is analyzed 5 times per second, then a total of 25 frames from the captured 60 frames are analyzed. The remaining 35 frames are in between two successive input windows 320. It is understood that the capture speed, input window frequency, and other parameters can vary from one aspect to another, and that the above numbers are examples.

[0107] For the frames, i.e., images 302, between two successive input windows 320, the locations of the anatomical structures and surgical instruments can be predicted based on the locations predicted in the most recent input window 320. For example, a movement vector of the surgical instrument can be computed based on the changes in the location of the surgical instrument in the frames in the prior input window 320. The movement vector can be computed using a machine-learning model, such as a deep neural network. The movement vector is used to predict the location of the surgical instrument in the subsequent frames after the input window 320, until a next input window 320 is analyzed.
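A minimal sketch of the extrapolation idea follows, assuming the instrument location is summarized as an (x, y) centroid per analyzed frame; the constant-velocity assumption and the frame counts are illustrative, not the exact method of this disclosure.

```python
import numpy as np


def extrapolate_locations(window_centroids, frames_until_next_window, frame_interval=1.0):
    """Predict instrument centroids for frames between two analyzed input windows.

    window_centroids: (x, y) positions observed in the most recent input window.
    A movement vector is estimated as the mean per-frame displacement and applied
    forward under a constant-velocity assumption until the next window is analyzed.
    """
    pts = np.asarray(window_centroids, dtype=float)
    movement = np.diff(pts, axis=0).mean(axis=0)   # average per-frame displacement
    last = pts[-1]
    return [last + movement * (k + 1) * frame_interval
            for k in range(frames_until_next_window)]


# e.g., 5 frames analyzed per window; with 60 fps capture and 5 windows per second,
# roughly 7 in-between frames separate successive windows (25 analyzed, 35 skipped).
observed = [(100, 80), (102, 81), (104, 82), (106, 83), (108, 84)]
predicted = extrapolate_locations(observed, frames_until_next_window=7)
```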

[0108] The location of structure(s) predicted by the one or more machine-learning models 702 can also be predicted in the frames between two successive input windows 320 in the same manner. Graphical overlays that are used to overlay the images 302 to represent predicted features (e.g., surgical instruments, anatomical structures, etc.) are accordingly adjusted, if required, based on the predicted locations. Accordingly, a smooth visualization, in real time, is provided to the user with fewer computing resources being used. In some aspects, the graphical overlays can be configured to be switched off by the user, for example, the surgeon, and the system works without overlays, rather only generating the overlays and/or other types of user feedback when an alert is to be provided (e.g., instrument within predetermined vicinity of an anatomical structure).

[0109] In some aspects, the proposed region of interest 440 as determined by the first machine-learning model 704 or modified through user input 148 can be provided to other processes. For example, identifying and overlaying a contour on an image can be useful for identifying structures without necessarily performing synthesis. For instance, the proposed region of interest 440 can appear as an outline without altering the image within the proposed region of interest 440. Further, the proposed region of interest 440 can be used as input for other analysis for greater precision with respect to a structure of interest than using a general bounding box.

[0110] Aspects of the technical solutions described herein can improve surgical procedures by improving the safety of the procedures. Further, the technical solutions described herein facilitate improvements to computing technology, particularly computing techniques used during a surgical procedure. Aspects of the technical solutions described herein facilitate one or more machine-learning models, such as computer vision models, to process images obtained from a live video feed of the surgical procedure in real-time using spatio-temporal information. The machine-learning models use techniques such as neural networks to process information from the live video feed and (if available) a robotic sensor platform to predict one or more features, such as anatomical structures and surgical instruments, in an input window of the live video feed, and to further refine the predictions using additional machine-learning models that can predict a phase of the surgical procedure. The machine-learning models can be trained to identify the surgical phase(s) of the procedure and instruments in the field of view by learning from raw image data and instrument markers (bounding boxes, lines, key points, etc.). In a robotic procedure, the computer vision models can also accept sensor information (e.g., instruments enabled, mounted, etc.) to improve the predictions. Computer vision models that predict instruments and critical anatomical structures use temporal information from the phase prediction models to improve the confidence of the predictions in real-time.

[0111] The predictions and the corresponding confidence scores can be used to generate and display graphical overlays to the surgeon and/or other users in an augmented visualization of the surgical view. The graphical overlays can mark critical anatomical structures, surgical instruments, surgical staples, scar tissue, results of previous surgical actions, etc. The graphical overlays can further show a relationship between the surgical instrument(s) and one or more anatomical structures in the surgical view and thus, guide the surgeon and other users during the surgery. The graphical overlays are adjusted according to the user’s preferences and/or according to the confidence scores of the predictions. Aspects of the technical solutions described herein provide a practical application in surgical procedures.

[0112] Further yet, aspects of the technical solutions described herein address technical challenges of predicting complex features in a live video feed of a surgical view in real time. The technical challenges are addressed by using a combination of various machine learning techniques to analyze multiple images in the video feed. Further yet, to address the technical challenge of real-time analysis and augmented visualization of the surgical view, aspects of the technical solutions described herein predict the present state of the surgical view at a constant frame rate and update the present state using the machine learning models at a predetermined frame rate.

[0113] It should be noted that although some of the drawings depict endoscopic videos being analyzed, the technical solutions described herein can be applied to analyze video and image data captured by cameras that are not endoscopic (i.e., cameras external to the patient’s body) when performing open surgeries (i.e., not laparoscopic surgeries). For example, the video and image data can be captured by cameras that are mounted on one or more personnel in the operating room, e.g., surgeon. Alternatively, or in addition, the cameras can be mounted on surgical instruments, walls, or other locations in the operating room.

[0114] Turning now to FIG. 8, a computer system 800 is generally shown in accordance with an aspect that can implement the system 100 of FIG. 1 or a portion thereof. The computer system 800 can be an electronic, computer framework comprising and/or employing any number and combination of computing devices and networks utilizing various communication technologies, as described herein. The computer system 800 can be easily scalable, extensible, and modular, with the ability to change to different services or reconfigure some features independently of others. The computer system 800 may be, for example, a server, desktop computer, laptop computer, tablet computer, or smartphone. In some examples, computer system 800 may be a cloud computing node. Computer system 800 may be described in the general context of computer executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system 800 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media, including memory storage devices.

[0115] As shown in FIG. 8, the computer system 800 has one or more central processing units (CPU(s)) 801a, 801b, 801c, etc. (collectively or generically referred to as processor(s) 801). The processors 801 can be a single-core processor, multi-core processor, computing cluster, or any number of other configurations. The processors 801, also referred to as processing circuits, are coupled via a system bus 802 to a system memory 803 and various other components. The system memory 803 can include one or more memory devices, such as a read-only memory (ROM) 804 and a random access memory (RAM) 805. The ROM 804 is coupled to the system bus 802 and may include a basic input/output system (BIOS), which controls certain basic functions of the computer system 800. The RAM is read-write memory coupled to the system bus 802 for use by the processors 801. The system memory 803 provides temporary memory space for operations of said instructions during operation. The system memory 803 can include random access memory (RAM), read-only memory, flash memory, or any other suitable memory systems.

[0116] The computer system 800 comprises an input/output (I/O) adapter 806 and a communications adapter 807 coupled to the system bus 802. The I/O adapter 806 may be a small computer system interface (SCSI) adapter that communicates with a hard disk 808 and/or any other similar component. The I/O adapter 806 and the hard disk 808 are collectively referred to herein as a mass storage 810.

[0117] Software 811 for execution on the computer system 800 may be stored in the mass storage 810. The mass storage 810 is an example of a tangible storage medium readable by the processors 801, where the software 811 is stored as instructions for execution by the processors 801 to cause the computer system 800 to operate, such as is described hereinbelow with respect to the various Figures. Examples of a computer program product and the execution of such instructions are discussed herein in more detail. The communications adapter 807 interconnects the system bus 802 with a network 812, which may be an outside network, enabling the computer system 800 to communicate with other such systems. In one aspect, a portion of the system memory 803 and the mass storage 810 collectively store an operating system, which may be any appropriate operating system to coordinate the functions of the various components shown in FIG. 8.

[0118] Additional input/output devices are shown as connected to the system bus 802 via a display adapter 815 and an interface adapter 816. In one aspect, the adapters 806, 807, 815, and 816 may be connected to one or more I/O buses that are connected to the system bus 802 via an intermediate bus bridge (not shown). A display 819 (e.g., a screen or a display monitor) is connected to the system bus 802 by the display adapter 815, which may include a graphics controller to improve the performance of graphics intensive applications and a video controller. A keyboard, a mouse, a touchscreen, one or more buttons, a speaker, etc., can be interconnected to the system bus 802 via the interface adapter 816, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI). Thus, as configured in FIG. 8, the computer system 800 includes processing capability in the form of the processors 801, storage capability including the system memory 803 and the mass storage 810, input means such as the buttons and touchscreen, and output capability including the speaker 823 and the display 819.

[0119] In some aspects, the communications adapter 807 can transmit data using any suitable interface or protocol, such as the internet small computer system interface, among others. The network 812 may be a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), or the Internet, among others. An external computing device may connect to the computer system 800 through the network 812. In some examples, an external computing device may be an external web server or a cloud computing node.

[0120] It is to be understood that the block diagram of FIG. 8 is not intended to indicate that the computer system 800 is to include all of the components shown in FIG. 8. Rather, the computer system 800 can include any appropriate fewer or additional components not illustrated in FIG. 8 (e.g., additional memory components, embedded controllers, modules, additional network interfaces, etc.). Further, the aspects described herein with respect to computer system 800 may be implemented with any appropriate logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, an embedded controller, or an application-specific integrated circuit, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and firmware, in various aspects.

[0121] FIG. 9 depicts a surgical procedure system 900 in accordance with one or more aspects. The example of FIG. 9 depicts a surgical procedure support system 902 configured to communicate with a surgical procedure scheduling system 930 through a network 920. The surgical procedure support system 902 can include or may be coupled to the system 100. The surgical procedure support system 902 can acquire image data, such as images 302, using one or more cameras 904. The surgical procedure support system 902 can also interface with a plurality of sensors 906 and effectors 908. The sensors 906 may be associated with surgical support equipment and/or patient monitoring. The effectors 908 can be robotic components or other equipment controllable through the surgical procedure support system 902. The surgical procedure support system 902 can also interact with one or more user interfaces 910, such as various input and/or output devices. The surgical procedure support system 902 can store, access, and/or update surgical data 914 associated with a training dataset and/or live data as a surgical procedure is being performed. The surgical procedure support system 902 can store, access, and/or update surgical objectives 916 to assist in training and guidance for one or more surgical procedures.

[0122] The surgical procedure scheduling system 930 can access and/or modify scheduling data 932 used to track planned surgical procedures. The scheduling data 932 can be used to schedule physical resources and/or human resources to perform planned surgical procedures. Based on the surgical maneuver as predicted by the one or more machine learning models and a current operational time, the surgical procedure support system 902 can estimate an expected time for the end of the surgical procedure. This can be based on previously observed similarly complex cases with records in the surgical data 914. A change in a predicted end of the surgical procedure can be used to inform the surgical procedure scheduling system 930 to prepare the next patient, which may be identified in a record of the scheduling data 932. The surgical procedure support system 902 can send an alert to the surgical procedure scheduling system 930 that triggers a scheduling update associated with a later surgical procedure. The change in schedule can be captured in the scheduling data 932. Predicting an end time of the surgical procedure can increase efficiency in operating rooms that run parallel sessions, as resources can be distributed between the operating rooms. Requests to be in an operating room can be transmitted as one or more notifications 934 based on the scheduling data 932 and the predicted surgical maneuver.

[0123] As surgical maneuvers and steps are completed, progress can be tracked in the surgical data 914, and status can be displayed through the user interfaces 910. Status information may also be reported to other systems through the notifications 934 as surgical maneuvers are completed or if any issues are observed, such as complications.

[0124] The reports/views/annotations and other information described herein are added to an electronic medical record (EMR) in one or more cases. In some aspects, the information about specific surgical procedures can be stored in the patient record associated with the patient that was operated upon during the surgical procedure. Alternatively, or in addition, the information is stored in a separate database for later retrieval. The retrieval can be associated with the patient’s unique identification, such as EMR-identification, social security number, or any other unique identifier. The stored data can be used to generate patient-specific reports. In some aspects, information can also be retrieved from the EMR to enhance one or more operations described herein. In one or more aspects, an operational note may be generated, which includes one or more outputs from the machine learning models. The operational note may be stored as part of the EMR.

[0125] Laparoscopic cholecystectomy is a common surgery in which the gallbladder is removed. This involves exposing the critical structures (cystic duct and artery), clipping and dividing them, then extracting the gallbladder. Complications can occur when the structures are misidentified or confused with the common bile duct, particularly as they may be difficult to distinguish without thorough dissection. Official guidance has encouraged that surgeons establish “critical view of safety” (CVS) before clipping and division. In CVS, both structures can clearly and separately be identified, and traced as they enter the gallbladder.

[0126] Aspects of the technical solutions described herein provide computer assistance in achieving CVS to improve surgical safety and workflow by providing improvements in computer vision and machine learning. Existing systems have demonstrated a proof of principle approach using binary CVS classification. Some existing techniques create a bounding box detection system, based on anatomical landmarks that included the common bile duct and cystic duct but not the cystic artery. Some existing techniques have used joint segmentation of the hepatobiliary anatomy and classification of CVS.

[0127] Technical solutions described herein instead directly detect structures that are the critical structures that surgeons must identify and divide. Embodiments of the technical solutions herein accordingly facilitate guiding surgical workflow because once CVS has been achieved, then by definition the structures are already identified and dissected. Embodiments of the technical solutions described herein facilitate detecting the critical structures. Embodiments of the technical solutions described herein outperform conventional segmentation by using label relaxation to address technical challenges where the ground truth labels are ambiguous about where the structures are in the input images/video.

[0128] FIG. 10 depicts improvements to machine learning provided by one or more aspects of the technical solutions herein using label relaxation. The machine learning models are trained using labeled ground truth data. Hence, accuracy of the labeled ground truth data is important. Accuracy of the machine learning model’s predictions depends on the accuracy of the labeled ground truth data used during training. As can be seen in FIG. 10, there can be scenarios where the labeled data 1002 can have some ambiguity. The ambiguity can be because of difficulty in labeling structures separately, for example, due to overlap in structures. While several different examples exist, the view in the labeled data 1002 depicts a view of a cystic artery and cystic duct, in which labels 1004, 1006 are both valid, and as can be seen are inseparable. Such ambiguous labeling is a technical challenge for training the machine learning models using supervised segmentation with provided masks (1010). Conventional segmentation approaches struggle to perform well in this task because of the ambiguous and subjective nature of the annotations. This problem is exacerbated by the use of conventional one-hot encoding: a given pixel is assigned as either 100% structure or 100% background class. This impairs generalization and can cause the machine learning model(s) to have false negatives.

[0129] Some aspects of the technical solutions herein incorporate pseudo-label self-supervision (1012), using unlabeled data (1008) to address such technical challenges with ambiguous labels. The labeled data (1002) and unlabeled data (1008) can be obtained from multiple videos of prior surgical procedures for self-supervision. In comparison to supervised segmentation using ambiguous labels (1010), the self-supervised segmentation using unlabeled data (1012) improves clinical significance when detecting critical structures.

[0130] In some aspects of the technical solutions herein the labeled data 1002, even with ambiguous labels, is used to train the machine learning models using label relaxation. Here, the critical structures are labeled and treated as a single foreground class, with the rest of the image considered as background. This poses the technical problem as a binary segmentation problem. To address this technical challenge, in some aspects of the technical solutions herein, rather than using segmentation, the machine learning models are trained for heatmap regression, where the ground truth heatmap is derived from the original annotations’ Euclidean distance transforms.

[0131] Given a binary segmentation ground truth x_k for structure k, embodiments of the technical solutions described herein define the relaxed label as x_k' = 1 − exp(−edt(x_k ⊕ t) / d), where edt(·) is the Euclidean distance transform, ⊕ t represents dilation with a square of t pixels, and d is a parameter to control the relaxation. Each x_k' is then normalized by its maximum value to allow use as a probability heatmap. Where heatmaps overlap for different structures within an image, the maximum value is used.
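Under the reconstruction of the formula above (dilate the binary mask with a t x t square, take the Euclidean distance transform, and apply 1 − exp(−edt/d)), a sketch might look like the following; the exact composition of the dilation and distance transform, and the parameter values, are assumptions inferred from the surrounding definitions.

```python
import numpy as np
from scipy.ndimage import binary_dilation, distance_transform_edt


def relax_label(mask: np.ndarray, t: int = 5, d: float = 10.0) -> np.ndarray:
    """Turn a binary structure mask into a relaxed probability heatmap.

    Central pixels receive high confidence and pixels near the (dilated) boundary
    receive low confidence, reflecting the ambiguity of the structure extent.
    """
    dilated = binary_dilation(mask.astype(bool), structure=np.ones((t, t), dtype=bool))
    dist = distance_transform_edt(dilated)          # distance to the nearest background pixel
    heat = 1.0 - np.exp(-dist / d)
    peak = heat.max()
    return heat / peak if peak > 0 else heat        # normalize for use as a probability heatmap


def combine(heatmaps):
    """Where heatmaps for several structures overlap, keep the pixel-wise maximum."""
    return np.maximum.reduce(list(heatmaps))
```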

[0132] Through such a heatmap regression method, central pixels are assigned high confidence, and more distant pixels are assigned low confidence as shown in FIG. 10. This systematic label relaxation reflects the ambiguity of the structure boundaries, and copes better with variation in annotations (e.g., 1009, 1011).

[0133] Labelling medical imagery is widely recognized as a bottleneck due to its difficulty, high time cost and compliance challenges. This is particularly true for surgical video, which generates large amounts of unstructured data. Aspects of the technical solutions described herein further improve performance of a machine learning model by using unlabeled data via self-supervision in addition to the heatmap regression. Unlike existing techniques that use self-supervision in endoscopic surgery, which uses generative models and consistency-based losses, aspects of technical solutions described herein use a pseudo-label approach that requires minimal computational overhead.

[0134] After training the machine learning models 1005 on labeled data 1002, the predictions from the machine learning models 1005 are used to provide pseudo-labels in unlabeled data 1004. This serves as a teacher in a teacher-student machine learning architecture, where a newly initialized student is trained on both datasets. Combined with regularization, the student learns a superior distillation of the feature space compared to its teacher.

[0135] In some aspects of the technical solutions described herein the machine learning models 1009, 1011 are convolutional neural networks, with an FCN segmentation architecture for the machine learning models 1005, 1007 used for segmentation. The artificial neural networks in one or more aspects use ResNet101, although other types of network architectures can be used. For the segmentation and self-supervised segmentation models 1005, 1007 the FCN is trained with cross-entropy loss. The FCN can use class frequency weighted cross-entropy loss, equally weighted cross-entropy loss, or other techniques. To assist with comparison, the heatmap machine learning model 1009 is kept similar to the segmentation model 1005, by using softmax to convert raw logits to a heatmap. The heatmap machine learning model 1009 uses soft cross-entropy loss with relaxed ground truth labels as described herein.
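The pseudo-label scheme can be summarized as in the sketch below; `model_factory`, `train`, and `predict_heatmaps` are hypothetical placeholders for the training and inference routines, and the regularization details are omitted.

```python
def self_supervised_training(model_factory, train, predict_heatmaps,
                             labeled_images, labels, unlabeled_images):
    """Teacher-student pseudo-labeling, assuming hypothetical train/predict helpers.

    1. Train a teacher on the labeled (relaxed-heatmap) data.
    2. Use the teacher to generate pseudo-labels for the unlabeled frames.
    3. Train a newly initialized student on the union of both datasets.
    """
    teacher = model_factory()
    train(teacher, labeled_images, labels)

    pseudo_labels = predict_heatmaps(teacher, unlabeled_images)

    student = model_factory()
    train(student,
          labeled_images + unlabeled_images,
          labels + pseudo_labels)
    return student
```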

[0136] The machine learning models 1005, 1007, 1009, 1011 are trained until convergence. During training, the machine learning models 1005, 1007, 1009, 1011 use random image augmentations (padding, cropping, flipping, blurring, rotation, noising) and model regularization via dropout. Hyperparameter tuning for augmentation and/or label relaxation parameters t and d can be performed in one or more aspects. Alternatively, predetermined values can be used for t and d.

[0137] The machine learning models 1005, 1009 are trained in one or more aspects as a baseline segmentation model and a baseline heatmap model. Predictions from the machine learning models 1005, 1009 are used to train the variants of both models, i.e., the machine learning models 1007, 1011 using self-supervision to exploit the unlabeled data 1004.

[0138] Table 1 shows pixel-level metrics ordered by method (segmentation versus the heatmap method described herein) and whether self-supervision was used. As can be seen, the heatmap detection described herein consistently performs better than segmentation in intersection-over-union (IoU) regardless of whether self-supervision is used. IoU is an evaluation metric used to measure the accuracy of an object detector on a particular dataset. The self-supervision approach improves performance further. Self-supervision improves accuracy in general, but is particularly beneficial for a few difficult cases. The example of Table 1 generally illustrates advantages of heatmap models that can be realized over segmentation. It is noted that other values, including higher accuracy results, may be achieved using aspects disclosed herein, and thus, Table 1 should not be construed as limiting the scope of the disclosure.

Table 1

[0139] FIG. 11 depicts a comparison of results in an example scenario obtained by using the different types of machine learning models according to one or more aspects. Structure detection results 1102 are from the segmentation machine learning model 1005, structure detection results 1104 are from the heatmap machine learning model 1009, and structure detection results 1106 are from the heatmap self-supervised machine learning model 1011.

[0140] Each row depicts the same image processed by each of the machine learning models. In rows 1, 3 and 4, self-supervision slightly improved the accuracy, but the overall detection was not changed significantly, and hence the IoU remains similar. In row 2, however, the accuracy improvement was much larger as the supervised model entirely misses the cystic artery, whereas the self-supervised model detected it. Table 2 shows metrics for frame-level presence detection, where artery and duct detections must exceed an IoU threshold to count as true positives in a given frame. In other words, an IoU detection score below a predetermined threshold is counted as a false positive. Such statistics are conservative, as a lower IoU overlap may nonetheless be fairly accurate given the ambiguity of ground truth annotation extent. Nevertheless, results show a similar pattern to the pixel-level performance metrics, with the heatmap method outperforming segmentation, and self-supervision improving the models’ performance. Notably, the increased pixel-level precision of segmentation methods does not translate to structure detection, where the heatmap method performs better by every metric. In the specific example of Table 2, higher-level presence detection metrics are evaluated with an IoU threshold of 0.5 to count as a true positive detection. It is understood that a different IoU score can be used as the predetermined threshold in other aspects. The example of Table 2 generally illustrates advantages of heatmap models that can be realized over segmentation. It is noted that other values, including higher accuracy results, may be achieved using aspects disclosed herein, and thus, Table 2 should not be construed as limiting the scope of the disclosure.

Table 2
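For reference, intersection-over-union on binary masks and the frame-level true-positive rule described above can be computed as in the following sketch; the 0.5 threshold mirrors the example in the text.

```python
import numpy as np


def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, target).sum() / union)


def frame_level_true_positive(pred_mask, gt_mask, threshold=0.5) -> bool:
    """A detection counts as a true positive only if its IoU meets the threshold."""
    return iou(pred_mask, gt_mask) >= threshold
```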

[0141] FIG. 12 shows example frames and model outputs from an excerpt of a laparoscopic cholecystectomy video. Aspects of the technical solutions described herein do not suffer false detections before structures are visible (1202), although similar shapes near a tool tip can be ambiguous, particularly if such shapes are visible near the gallbladder. Even when the structures are heavily coated by fat, aspects of the technical solutions described herein tend to recognize them at least partially (1204, 1206). The structure being clipped in the image does not prevent detection (1208). Structures remain detectable after division (1212). In some embodiments, surgical phase recognition is used to deactivate detection after division of structures. Critical view of safety (1210) is facilitated by the aspects described herein.

[0142] The heatmap based machine learning models described herein are more accurate than segmentation based machine learning models, as shown in low-level pixel metrics such as IoU and higher-level presence detection metrics such as F1 score.

[0143] Accordingly, aspects of the technical solutions described herein detect the critical structures during surgical procedures, such as laparoscopic cholecystectomy. When trying to detect structures with ambiguous extent and challenging annotations, a heatmap-based approach based on label relaxation is used to improve performance over baseline techniques, such as segmentation based machine learning models. Self-supervision provided further improvement by using unlabeled data for additional training. Automatic detection of critical structures in surgery improves computing systems used to store and automatically process surgical data. Further, aspects of the technical solutions described herein improve surgical safety, training and workflow and ultimately patient outcomes.

[0144] It should be noted that although embodiments herein are described using laparoscopic cholecystectomy as an example, the technical solutions provided herein can be used for any other type of surgical procedures and are not limited to a particular type of surgical procedure.

[0145] As noted herein, detection of surgical instruments in minimally invasive surgery video frames allows automatic generation of offline surgical analytics that can provide valuable information for improving surgical procedures. Additionally, surgical instrument detection can provide real-time decision support during the surgery and notification of preventable risks during computer assisted interventions. Accurate models are required to successfully use decision support systems during surgical procedures. Current machine learning approaches typically estimate the location and type of surgical instruments via either bounding box detection or semantic segmentation. Surgical instrument (or tool) detection models generally rely on annotated bounding boxes during training. This has a major limitation for instrument detection as the annotated bounding boxes include a high number of background pixels due to the elongated dimensions of the surgical instruments, which might impede a model from learning discriminative features of the instruments. Alternatively, segmentation models directly estimate the probability of each pixel belonging to a specific instrument type by relying on fine-grained pixel-wise segmentation mask annotations. While masks solve the aforementioned technical challenge faced by bounding boxes, the annotation cost grows by up to almost two orders of magnitude for annotating masks with respect to only annotating frame-level labels or bounding boxes. In practice, the annotation of datasets with masks at scale can be infeasible, which can prevent models from achieving the generalization and robustness required to be applied in real-world applications.

[0146] To address the technical challenges above and leverage the strengths of both workstreams, aspects of technical solutions described herein use a multi-task machine learning model (“model”) that jointly learns to estimate bounding boxes and masks for surgical instruments. The model aggregates information from the multiple tasks by using a shared backbone as an encoder, while having a head for each individual task: instrument classification, bounding box regression, and segmentation. While the classification and regression heads allow the model to localize and classify surgical instruments using scalable annotations, the segmentation head achieves the detailed pixel-wise annotations. To alleviate the burden of expensive pixel-wise annotation on large datasets, one or more aspects of technical solutions described herein use a training framework that accounts for missing masks and uses a weakly-supervised loss computed on frame-level labels, which can be freely obtained from the bounding box annotations. Experimental comparison shows that the model achieves detection and segmentation performance on par with fully-supervised alternatives, while requiring as little as 1% of the masks in training.

[0147] Existing solutions to detect and localize surgical tools in video include semantic segmentation and tool detection. Segmentation models are able to segment instruments against background (binary segmentation), tool types (semantic segmentation), or tool instances (instance segmentation). Some existing solutions segment entire instrument instances instead of performing pixel-wise segmentation.

[0148] As a further example, existing machine learning models for detecting surgical instruments can be anchor-based, with a convolutional backbone having a ResNet architecture that generates feature maps at different scales, and two task-specific heads that perform object classification and bounding box regression from the feature pyramid. This approach faces a foreground-background class imbalance during training, which is handled by using the focal loss, a variation of the cross-entropy loss function that down-weights the loss assigned to well-classified examples. Some existing solutions jointly scale up model width, depth, and resolution to meet real-time requirements without sacrificing detection accuracy. One example of a model computes a feature pyramid using EfficientNet. Some existing solutions use a weighted bi-directional feature pyramid network (BiFPN) to leverage the multi-scale feature information. Further, some existing solutions use joint detection and segmentation. For example, some existing solutions use a model for semi-supervised object segmentation that relies on a single manual bounding box initialization to produce class-agnostic object masks and rotated bounding boxes with a fully-convolutional Siamese model. Using weak supervision, a multi-task model that performs detection and segmentation with a weakly-supervised cyclic policy can be used in such existing solutions to complement the learning of both tasks simultaneously. Further, some existing solutions use a weakly-supervised convolutional model to estimate the presence and localization of surgical instruments using only frame-label annotations. However, the performance of these weakly-supervised models is still far from that of fully-supervised ones.
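For illustration, the focal loss mentioned above can be sketched in its standard binary form as follows; the alpha and gamma values are common defaults and are not specified by this disclosure.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)**gamma so that
    well-classified examples contribute less to the total loss."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Example: per-anchor, per-class logits against binary targets.
loss = focal_loss(torch.randn(8, 5), torch.randint(0, 2, (8, 5)).float())
```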

[0149] Aspects of the technical solutions described herein address such technical challenges with machine learning models that detect surgical tools and instruments by performing detection and segmentation jointly. Consider x ∈ {0, ..., 255}^{W,H,C}, an RGB image with width W, height H, and C = 3 color channels. Let D(·) : x → (B^{N,4}, C^{N}, M^{W,H,M}) be a joint detection and segmentation model that localizes and classifies surgical instruments within x, whose outputs are a set of bounding boxes B^{N,4}, their corresponding estimated classes C^{N}, and a segmentation mask M^{W,H,M}, with N being the number of detected instruments and M the number of considered instrument types.

[0150] The technical problem of the joint detection and segmentation is formulated as a multi-task learning problem. FIG. 13 depicts a block diagram of an architecture of a multi-task machine learning model for joint detection and segmentation of surgical instruments according to one or more aspects. The architecture 1300 includes an encoder 1302, multi-scale feature fusion 1304, and multiple task heads 1306, 1308, 1310. As one example, the encoder 1302 can be an EfficientNet backbone, and the multi-scale feature fusion 1304 can be a set of bi-directional feature pyramid networks (BiFPNs). It will be understood that other types of encoders and feature fusion can be used with corresponding task heads 1306-1310, where the task heads 1306-1310 can include any number of task heads to process outputs of the multi-scale feature fusion 1304. For purposes of explanation, in the example of FIG. 13, the task heads 1306, 1308, 1310 are for the tasks of bounding box regression, bounding box classification, and segmentation, respectively. The encoder 1302 can be a shared backbone that acts as a joint representation learning module with the aim of learning multi-level feature representations suitable for all the tasks; task head 1306 can provide bounding box regression, task head 1308 can provide bounding box classification, and task head 1310 can provide segmentation. The input x can include one or more frames and/or data inputs. Having x as input, the encoder 1302, b(·) : x → P, can generate a pyramid of features at S scales, P = (P_s)_{s=1}^{S}. The feature pyramid can be fed to the next layers of the multi-scale feature fusion 1304. For example, when implemented as one or more bi-directional feature pyramid networks (BiFPNs), the layers of the multi-scale feature fusion 1304 can fuse the features across scales while maintaining their number and resolution, g(·) : P → P̃. The three task heads 1306, 1308, 1310 guide the learning of the encoder 1302 and the multi-scale feature fusion 1304 to learn more discriminative and complementary features that further improve the tasks, while adapting the generated features to task-specific problems. The tasks can include, for instance, bounding box regression, bounding box classification, and segmentation.
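For illustration only, a minimal PyTorch-style sketch of this shared-encoder, three-head arrangement is given below. The encoder and fusion modules are simple placeholders standing in for an EfficientNet backbone and BiFPN layers, and the channel counts, number of anchors, and number of instrument types are hypothetical choices rather than values taken from this disclosure.

```python
import torch
from torch import nn

class JointDetectionSegmentation(nn.Module):
    """Sketch: one shared encoder feeds a multi-scale fusion stage, whose fused
    pyramid drives bounding box regression, classification, and segmentation heads."""

    def __init__(self, num_scales=6, channels=64, num_types=9, num_anchors=9):
        super().__init__()
        # Placeholder encoder b(.): strided convs emulate a pyramid of S scales.
        self.encoder = nn.ModuleList(
            [nn.Conv2d(3 if s == 0 else channels, channels, 3, stride=2, padding=1)
             for s in range(num_scales)]
        )
        # Placeholder fusion g(.), standing in for BiFPN layers: per-scale 3x3 convs.
        self.fusion = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_scales)]
        )
        # Task heads sharing the fused pyramid.
        self.box_head = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)
        self.cls_head = nn.Conv2d(channels, num_anchors * num_types, 3, padding=1)
        self.seg_head = nn.Conv2d(channels, num_types, 1)

    def forward(self, x):
        pyramid, feat = [], x
        for conv in self.encoder:
            feat = torch.relu(conv(feat))
            pyramid.append(feat)
        fused = [torch.relu(g(p)) for g, p in zip(self.fusion, pyramid)]
        boxes = [self.box_head(p) for p in fused]   # bounding box regression
        logits = [self.cls_head(p) for p in fused]  # bounding box classification
        masks = self.seg_head(fused[0])             # coarse segmentation logits
        return boxes, logits, masks

model = JointDetectionSegmentation()
boxes, logits, masks = model(torch.randn(1, 3, 512, 512))
```

The sketch is only intended to show how a single fused feature pyramid can drive separate regression, classification, and segmentation heads; the actual head designs are described in the paragraphs that follow.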

[0151] In some aspects, as described herein, the amount of training data (annotated masks) used for the machine learning model 1300 is limited, because such masks are expensive to generate and annotate. Hence, the technical challenge posed by the lack of data is addressed by employing weak supervision to improve the performance of the machine learning model 1300.

[0152] The segmentation task head 1310 aims to generate a mask 1320, M^{W,H,M}, from the fused feature pyramid P̃. FIG. 14 depicts an architecture for the segmentation task head 1310 according to one or more aspects. The segmentation task head 1310 can include: feature up-sampling and concatenation (1402), a convolutional block (1404), and up-sampling (1406).

[0153] At block 1402, to make use of the information contained in the multiple scales of the fused feature pyramid P̃, the S − 2 feature maps with the smallest spatial resolutions, (P̃_s)_{s=3}^{S}, are first up-sampled to the spatial resolution of P̃_2 using bi-linear interpolation. Then, the S − 1 feature maps are concatenated:

F̃ = (U_2(P̃_S), ..., U_2(P̃_3), P̃_2)     (1)

where U_2(·) is the bilinear interpolation operation that up-samples a feature map to the resolution of P̃_2, and (·, ..., ·) represents the concatenation operation.

[0154] At block 1404, a convolutional block is then applied to obtain a feature map with the same number of channels as the number of considered instrument types (i.e., M):

M̂ = conv_2(relu(bn(conv_1(F̃))))     (2)

where conv_1(·) is a 2D convolution with kernel (1×1) and (S − 1) × 64 channels that fuses the features with different resolutions, bn(·) is a batch normalization layer, relu(·) is the Rectified Linear Unit (ReLU) operation, and conv_2(·) is a 2D convolution with kernel (1×1) and M channels that reduces the number of channels to the number of instrument types, M.

[0155] At block 1406, M̂ is up-sampled to generate the mask 1320, M, with the same dimensions as the input image:

M = U_0(M̂)     (3)

where U_0(·) is the bilinear interpolation operation that up-samples the feature map to the resolution of the input image x.
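A minimal sketch of blocks 1402-1406 (Eqs. (1)-(3)) follows, assuming S = 6 pyramid scales with 64 channels each and M = 9 instrument types; these dimensions are illustrative assumptions, not values prescribed by this disclosure.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SegmentationHead(nn.Module):
    """Sketch of the segmentation head: up-sample and concatenate the fused
    pyramid (Eq. 1), fuse channels down to M instrument types (Eq. 2), then
    up-sample to the input resolution (Eq. 3)."""

    def __init__(self, num_scales=6, channels=64, num_types=9):
        super().__init__()
        in_ch = (num_scales - 1) * channels
        self.conv1 = nn.Conv2d(in_ch, in_ch, kernel_size=1)      # fuse resolutions
        self.bn = nn.BatchNorm2d(in_ch)
        self.conv2 = nn.Conv2d(in_ch, num_types, kernel_size=1)  # reduce to M channels

    def forward(self, pyramid, image_size):
        # pyramid: list of S-1 fused maps, pyramid[0] being the finest one (P~_2).
        target = pyramid[0].shape[-2:]
        upsampled = [pyramid[0]] + [
            F.interpolate(p, size=target, mode="bilinear", align_corners=False)
            for p in pyramid[1:]
        ]
        fused = torch.cat(upsampled, dim=1)                         # Eq. (1)
        m_hat = self.conv2(F.relu(self.bn(self.conv1(fused))))      # Eq. (2)
        return F.interpolate(m_hat, size=image_size,
                             mode="bilinear", align_corners=False)  # Eq. (3)

# Example with 5 fused scales (S = 6) for a 512x512 input frame.
pyr = [torch.randn(1, 64, 128 // 2 ** s, 128 // 2 ** s) for s in range(5)]
masks = SegmentationHead()(pyr, (512, 512))   # -> (1, M, 512, 512)
```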

[0156] Some embodiments of the technical solutions described herein use semi-supervised learning with weak supervision. When the annotated mask 1320, M̄, is available for a given sample, the cross-entropy loss function is used for training the segmentation task head 1310. However, as not all samples have an annotated mask 1320, in each batch the cross-entropy loss is weighted by the ratio of the number of samples with annotated masks, A, to the total number of samples within the batch, B:

L_seg = (A / B) · L_CE(M, M̄)     (4)

where L_CE(·, ·) is the cross-entropy loss function. Thus, batches with fewer annotated samples have a lower weight.
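The batch-weighted loss of Eq. (4) might be sketched as follows; the use of class-index ground-truth masks and a boolean mask-availability flag per sample is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def weighted_seg_loss(pred_masks, gt_masks, has_mask):
    """Eq. (4) sketch: cross-entropy over samples that have an annotated mask,
    scaled by A/B (annotated samples over batch size)."""
    batch = pred_masks.shape[0]
    annotated = has_mask.sum()
    if annotated == 0:
        return pred_masks.new_zeros(())        # no supervised mask signal this batch
    ce = F.cross_entropy(pred_masks[has_mask], gt_masks[has_mask])
    return (annotated.float() / batch) * ce

# pred_masks: (B, M, H, W) logits; gt_masks: (B, H, W) class indices;
# has_mask: (B,) boolean flags indicating mask availability.
loss = weighted_seg_loss(torch.randn(4, 9, 64, 64),
                         torch.randint(0, 9, (4, 64, 64)),
                         torch.tensor([True, False, True, False]))
```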

[0157] In addition, when a mask is not available, embodiments of the segmentation task head 1310 are trained by comparing condensed mask presence labels with frame-level annotations 1322 only, where the estimated mask 1326 is condensed using global max pooling into a single value per instrument type:

Θ̂ = mpool(M)     (5)

where mpool(·) is the 2D max-pooling operation with kernel size (H, W) that generates a vector Θ̂ ∈ ℝ^{1,1,M}. The information within Θ̂ indicates the presence or absence of each instrument type within the frame. Note that these frame-level annotations, which are the cheapest to generate, are already available to the model within the bounding box annotations 1328.
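The condensation of Eq. (5) amounts to a global max pool over the spatial dimensions, for example as below (operating directly on raw mask logits, which is an assumption for illustration):

```python
import torch

def mask_to_presence(masks):
    """Eq. (5) sketch: condense a (B, M, H, W) mask prediction into a (B, M)
    presence score per instrument type via global max pooling."""
    return torch.amax(masks, dim=(-2, -1))

presence = mask_to_presence(torch.randn(4, 9, 64, 64))   # -> (4, 9)
```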

[0158] The weakly-supervised loss is the cross-entropy between Θ̂ and the instrument-type frame-level annotations, Θ:

L_ws = L_CE(Θ̂, Θ)     (6)

[0159] L_ws(·) is computed for all frames, regardless of whether their mask is provided or not. Accordingly, the full loss used to train the encoder 1302, multi-scale feature fusion 1304, and task heads 1306-1310 can be defined as:

L = w_reg · L_reg + w_clf · L_clf + w_seg · L_seg + w_ws · L_ws     (7)

where L_reg and L_clf are the weighted focal losses for bounding box regression and classification, and w_reg, w_clf, w_seg, and w_ws are weights of the regression, classification, segmentation, and weak-supervision losses that tune the contribution of each loss.
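Tying Eqs. (4)-(7) together, a hedged sketch of the full objective is shown below; the regression and classification terms are taken as precomputed inputs, a multi-label binary cross-entropy is assumed for the frame-level term, and the default weights follow the example values reported in paragraph [0160].

```python
import torch
import torch.nn.functional as F

def total_loss(loss_reg, loss_clf, pred_masks, gt_masks, has_mask,
               frame_labels, w_reg=1.0, w_clf=5.0, w_seg=700.0, w_ws=5.0):
    """Eq. (7) sketch: weighted sum of regression, classification,
    semi-supervised segmentation (Eq. 4), and weakly-supervised (Eq. 6) terms."""
    # Eq. (4): cross-entropy on annotated samples only, scaled by A/B.
    batch = pred_masks.shape[0]
    if has_mask.any():
        loss_seg = (has_mask.sum().float() / batch) * F.cross_entropy(
            pred_masks[has_mask], gt_masks[has_mask])
    else:
        loss_seg = pred_masks.new_zeros(())
    # Eqs. (5)-(6): condensed presence scores vs. frame-level labels, all frames.
    presence = torch.amax(pred_masks, dim=(-2, -1))
    loss_ws = F.binary_cross_entropy_with_logits(presence, frame_labels)
    return w_reg * loss_reg + w_clf * loss_clf + w_seg * loss_seg + w_ws * loss_ws
```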

[0160] In one or more aspects, the machine learning model 1300 is implemented as a neural network. The architecture of the neural network can use, as a backbone (e.g., for encoder 1302), the EfficientNet-D0 neural network with weights pre-trained on ImageNet. The BiFPN layer (e.g., of multi-scale feature fusion 1304) can be modified to aggregate S = 6 feature scales instead of five for improved segmentation accuracy. The five smallest scales are used for the regression and classification task heads 1306, 1308. Images can be downscaled to 512x512 pixels, and data augmentation that includes geometrical and color transformations is used. A sampler is used to balance the number of different instruments present in each batch. The models can be trained for a predetermined number of epochs, and the results obtained in the last epoch are reported. The weights of the proposed loss (Eq. (7)) are empirically set, for example, w_reg = 1, w_clf = 5, w_seg = 700, and w_ws = 5. It is understood that the values described above are examples and can be set differently in other aspects.

[0161] When compared against state-of-the-art joint detection and segmentation models, the machine learning model 1300 obtains improved results against the fully annotated alternatives while requiring only 1% of the masks to be annotated. Three visual segmentation samples, one for each sequence of the testing split, are displayed in FIG. 15 for models trained using 100% (1502), 20% (1504), 5% (1506), and 0% (1508) of annotated masks. The estimated masks maintain their quality even when the available masks are reduced to 5% (1506). Some classification errors are observed in the second sequence when limited masks are used.
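For illustration, the preprocessing and loss-weight settings described in paragraph [0160] might be collected into a configuration such as the following; the specific augmentation operations shown are generic stand-ins for the geometrical and color transformations mentioned, not the exact pipeline used.

```python
from dataclasses import dataclass, field
from torchvision import transforms

@dataclass
class TrainConfig:
    """Training configuration sketch mirroring the example values above."""
    image_size: int = 512          # frames downscaled to 512x512 pixels
    num_scales: int = 6            # BiFPN modified to aggregate S = 6 scales
    loss_weights: dict = field(default_factory=lambda: {
        "reg": 1.0, "clf": 5.0, "seg": 700.0, "ws": 5.0})

# Generic geometric and color augmentations (illustrative stand-ins only).
augment = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])
```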

[0162] Accordingly, aspects of the technical solutions described herein provide a multi-task machine learning model that jointly learns to detect and segment surgical instruments. A weakly-supervised adaptive loss is also used in some aspects, which enables the learning of segmentation masks when only a fraction of the masks is available during training by supervising the learning with frame-level annotations. Results show that the model 1300 provided herein obtains improved results compared to a fully-supervised alternative, while requiring only 1% of the masks.

[0163] The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the present invention.

[0164] The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

[0165] Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.

[0166] Computer-readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some aspects, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

[0167] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to aspects of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

[0168] These computer-readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

[0169] The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

[0170] The flowchart and block diagrams in any of the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

[0171] The descriptions of the various aspects of the present invention have been presented for purposes of illustration but are not intended to be exhaustive or limited to the aspects disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described aspects. The terminology used herein was chosen to best explain the principles of the aspects, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the aspects described herein.

[0172] Various aspects of the invention are described herein with reference to the related drawings. Alternative aspects of the invention can be devised without departing from the scope of this invention. Various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the present invention is not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship. Moreover, the various tasks and process steps described herein can be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein.

[0173] The following definitions and abbreviations are to be used for the interpretation of the claims and the specification. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” “contains,” or “containing,” or any other variation thereof are intended to cover a non-exclusive inclusion. For example, a composition, a mixture, process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but can include other elements not expressly listed or inherent to such composition, mixture, process, method, article, or apparatus.

[0174] Additionally, the term “exemplary” is used herein to mean “serving as an example, instance or illustration.” Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. The terms “at least one” and “one or more” may be understood to include any integer number greater than or equal to one, i.e., one, two, three, four, etc. The term “a plurality” may be understood to include any integer number greater than or equal to two, i.e., two, three, four, five, etc. The term “connection” may include both an indirect “connection” and a direct “connection.”

[0175] The terms “about,” “substantially,” “approximately,” and variations thereof are intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, “about” can include a range of ± 8% or 5%, or 2% of a given value.

[0176] For the sake of brevity, conventional techniques related to making and using aspects of the invention may or may not be described in detail herein. In particular, various aspects of computing systems and specific computer programs to implement the various technical features described herein are well known. Accordingly, in the interest of brevity, many conventional implementation details are only mentioned briefly herein or are omitted entirely without providing the well-known system and/or process details.

[0177] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the techniques). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a medical device.

[0178] In one or more examples, the described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).

[0179] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.