Title:
OBSTACLE AVOIDANCE TECHNIQUES FOR SURGICAL NAVIGATION
Document Type and Number:
WIPO Patent Application WO/2021/003401
Kind Code:
A1
Abstract:
Systems and methods are described herein in which a localizer is configured to detect a position of a first object and a vision device is configured to generate an actual depth map of surfaces near the first object. A virtual model corresponding to the first object is accessed, and a positional relationship between the localizer and the vision device in a common coordinate system is identified. An expected depth map of the vision device is then generated based on the detected position of the first object, the virtual model, and the positional relationship. A portion of the actual depth map that fails to match the expected depth map is identified, and a second object is recognized based on the identified portion.

Inventors:
MALACKOWSKI DONALD (US)
BOS JOSEPH (US)
DELUCA RICHARD (US)
Application Number:
PCT/US2020/040717
Publication Date:
January 07, 2021
Filing Date:
July 02, 2020
Assignee:
STRYKER CORP (US)
International Classes:
A61B34/10; A61B34/20; A61B34/30; A61B90/00; A61B17/00
Foreign References:
US20160191887A12016-06-30
US20140031668A12014-01-30
US20170333137A12017-11-23
US7725162B22010-05-25
US20140200621A12014-07-17
US10531926B22020-01-14
US9119655B22015-09-01
Attorney, Agent or Firm:
FARES, Samir, A. et al. (US)
Claims:
CLAIMS

1. A navigation system comprising:

a localizer configured to detect a position of a first object;

a vision device configured to generate an actual depth map of surfaces near the first object; and

a controller coupled to the localizer and the vision device, the controller configured to: access a virtual model corresponding to the first object;

identify a positional relationship between the localizer and the vision device in a common coordinate system;

generate an expected depth map of the vision device based on the detected position of the first object, the virtual model, and the positional relationship;

identify a portion of the actual depth map that fails to match the expected depth map; and

recognize a second object based on the identified portion.

2. The navigation system of claim 1, wherein the controller is configured to identify a position of the second object relative to the first object in the common coordinate system based on the detected position of the first object, a location of the second object in the actual depth map, and the positional relationship.

3. The navigation system of claim 2, wherein the first object defines a target volume of patient tissue to be treated according to a surgical plan, and the controller is configured to: determine whether the second object is an obstacle to treating the target volume according to the surgical plan based on the position of the second object relative to the target volume in the common coordinate system and the surgical plan; and

responsive to determining that the second object is an obstacle to the surgical plan, modify the surgical plan and/or trigger a notification and/or halt surgical navigation.

4. The navigation system of any one of claims 1-3, wherein a tracker is rigidly coupled to the first object, and the controller is configured to: detect, via the localizer, a position of the tracker in a first coordinate system specific to the localizer;

identify a position of the virtual model in the first coordinate system based on the detected position of the tracker in the first coordinate system and a positional relationship between the tracker and the first object in the first coordinate system;

transform the position of the virtual model in the first coordinate system to a position of the virtual model in a second coordinate system specific to the vision device based on the position of the virtual model in the first coordinate system and a positional relationship between the localizer and the vision device in the second coordinate system; and

generate the expected depth map based on the position of the virtual model in the second coordinate system.

5. The navigation system of any one of claims 1-4, wherein the controller is configured to identify a portion of the actual depth map that fails to match the expected depth map by being configured to:

compute a difference between the actual depth map and the expected depth map;

determine whether a first section of the difference indicates an absolute depth greater than a threshold depth; and

identify as the portion a second section of the actual depth map that corresponds to the first section of the difference responsive to determining that the first section of the difference indicates an absolute depth greater than the threshold depth.

6. The navigation system of claim 5, wherein the threshold depth is non-zero.

7. The navigation system of claims 5 or 6, wherein the controller is configured to identify the portion of the actual depth map that fails to match the expected depth map by being configured to:

determine whether a size of the first section is greater than a minimum size threshold; and

identify as the portion the second section responsive to determining that the size of the first section is greater than the minimum size threshold.

8. The navigation system of any one of claims 1-7, wherein the controller is configured to recognize the second object based on the identified portion by being configured to match the identified portion with a predetermined profile corresponding to the second object.

9. The navigation system of any one of claims 1-8, wherein the portion of the actual depth map comprises an arrangement of features corresponding to the second object and located in a first position of the actual depth map, and the controller is configured to track movement of the second object by monitoring whether the arrangement of features moves to a second position that differs from the first position in an additional actual depth map subsequently generated by the vision device.

10. The navigation system of any one of claims 1-9, wherein the controller is configured to generate a virtual boundary corresponding to the second object in the common coordinate system, the virtual boundary providing a constraint on a motion of a surgical tool.

11. The navigation system of any one of claims 1-10, wherein the controller is configured to crop the actual depth map to a region of interest based on the virtual model, the detected position of the first object, and the positional relationship between the localizer and the vision device in a common coordinate system, and the controller is configured to compare the actual depth map by being configured to compare the cropped actual depth map.

12. The navigation system of any one of claims 1-11, wherein the controller is configured to identify the positional relationship between the localizer and the vision device in the common coordinate system by being configured to:

project a pattern onto a surface in view of the localizer and the vision device;

generate localization data using the localizer indicating a position of the pattern in a first coordinate system specific to the localizer;

receive a calibration depth map illustrating the projected pattern generated by the vision device;

identify a position of the projected pattern in a second coordinate system specific to the vision device based on the calibration depth map; and

identify the positional relationship between the localizer and the vision device in the common coordinate system based on the position of the pattern in the first coordinate system and the position of the pattern in the second coordinate system.

13. The navigation system of any one of claims 1-12, wherein the localizer is configured to operate in a first spectral band to detect the position of the first object, the vision device is configured to operate in a second spectral band to generate the actual depth map of the surfaces near the first object, and the first spectral band differs from the second spectral band.

14. A robotic manipulator utilized with the navigation system of any one of claims 1-13, wherein the robotic manipulator supports a surgical tool and comprises a plurality of links and a plurality of actuators configured to move the links to move the surgical tool, and wherein the robotic manipulator is controlled to avoid the second object.

15. A method of operating a navigation system comprising a localizer configured to detect a position of a first object, a vision device configured to generate an actual depth map of surfaces near the first object, and a controller coupled to the localizer and the vision device, the method comprising:

accessing a virtual model corresponding to the first object;

identifying a positional relationship between the localizer and the vision device in a common coordinate system;

generating an expected depth map of the vision device based on the detected position of the first object, the virtual model, and the positional relationship;

identifying a portion of the actual depth map that fails to match the expected depth map; and

recognizing a second object based on the identified portion.

16. The method of claim 15, further comprising identifying a position of the second object relative to the first object in the common coordinate system based on the detected position of the first object, a location of the second object in the actual depth map, and the positional relationship.

17. The method of claim 16, wherein the first object defines a target volume of patient tissue to be treated according to a surgical plan, and further comprising:

determining whether the second object is an obstacle to treating the target volume according to the surgical plan based on the position of the second object relative to the target volume in the common coordinate system and the surgical plan; and

responsive to determining that the second object is an obstacle to the surgical plan, modifying the surgical plan and/or triggering a notification and/or halting surgical navigation.

18. The method of any one of claims 15-17, wherein a tracker is rigidly coupled to the first object, and further comprising:

detecting, via the localizer, a position of the tracker in a first coordinate system specific to the localizer;

identifying a position of the virtual model in the first coordinate system based on the detected position of the tracker in the first coordinate system and a positional relationship between the tracker and the first object in the first coordinate system;

transforming the position of the virtual model in the first coordinate system to a position of the virtual model in a second coordinate system specific to the vision device based on the position of the virtual model in the first coordinate system and a positional relationship between the localizer and the vision device in the second coordinate system; and

generating the expected depth map based on the position of the virtual model in the second coordinate system.

19. The method of any one of claims 15-18, wherein identifying a portion of the actual depth map that fails to match the expected depth map comprises:

computing a difference between the actual depth map and the expected depth map;

determining whether a first section of the difference indicates an absolute depth greater than a threshold depth; and

identifying as the portion a second section of the actual depth map that corresponds to the first section of the difference responsive to determining that the first section of the difference indicates an absolute depth greater than a threshold depth.

20. The method of claim 19, wherein the threshold depth is non-zero.

21. The method of claims 19 or 20, wherein identifying a portion of the actual depth map that fails to match the expected depth map comprises:

determining whether a size of the first section is greater than a minimum size threshold; and

identifying as the portion the second section responsive to determining that the size of the first section is greater than the minimum size threshold.

22. The method of any one of claims 15-21, wherein recognizing the second object based on the identified portion comprises matching the identified portion with a predetermined profile corresponding to the second object.

23. The method of any one of claims 15-22, wherein the portion of the actual depth map comprises an arrangement of features corresponding to the second object and located in a first position of the actual depth map, and the controller is configured to track movement of the second object by monitoring whether the arrangement of features moves to a second position that differs from the first position in an additional actual depth map subsequently generated by the vision device.

24. The method of any one of claims 15-23, further comprising generating a virtual boundary corresponding to the second object in the common coordinate system, the virtual boundary providing a constraint on a motion of a surgical tool.

25. The method of any one of claims 15-24, further comprising cropping the actual depth map to a region of interest based on the virtual model, the detected position of the first object, and the positional relationship between the localizer and the vision device in a common coordinate system, wherein comparing the actual depth map comprises comparing the cropped actual depth map.

26. The method of any one of claims 15-25, wherein identifying the positional relationship between the localizer and the vision device in the common coordinate system comprises:

projecting a pattern onto a surface in view of the localizer and the vision device;

generating localization data using the localizer indicating a position of the pattern in a first coordinate system specific to the localizer;

receiving a calibration depth map corresponding to the projected pattern generated by the vision device;

identifying a position of the projected pattern in a second coordinate system specific to the vision device based on the calibration depth map; and

identifying the positional relationship between the localizer and the vision device in the common coordinate system based on the position of the pattern in the first coordinate system and the position of the pattern in the second coordinate system.

27. The method of any one of claims 15-26, further comprising:

operating the localizer in a first spectral band to detect the position of the first object; and

operating the vision device in a second spectral band to generate the actual depth map of the surfaces near the first object, the second spectral band differing from the first spectral band.

28. A computer program product, comprising a non-transitory computer readable medium having instructions stored thereon, which when executed by one or more processors are configured to implement the method of any one of claims 15-27.

Description:
OBSTACLE AVOIDANCE TECHNIQUES FOR SURGICAL NAVIGATION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The subject application claims priority to and all the benefits of United States Provisional Patent Application No. 62/870,284, filed July 3, 2019, the contents of which are hereby incorporated by reference in their entirety.

TECHNICAL FIELD

[0002] The present disclosure generally relates to surgical navigation systems.

BACKGROUND

[0003] Surgical navigation systems assist in positioning surgical instruments relative to target volumes of patient tissue for treatment. During a surgical procedure, the target volume to be treated is frequently located adjacent to sensitive anatomical structures and surgical tools that should be avoided. Tracking these adjacent anatomical structures using attached trackers is often difficult due to the flexible nature of the structures. Furthermore, attaching trackers to each object adjacent to the target volume congests the surgical workspace and increases the cost and complexity of the surgical navigation system.

SUMMARY

[0004] In a first aspect, a navigation system is provided comprising: a localizer configured to detect a position of a first object; a vision device configured to generate an actual depth map of surfaces near the first object; and a controller coupled to the localizer and the vision device, the controller configured to: access a virtual model corresponding to the first object; identify a positional relationship between the localizer and the vision device in a common coordinate system; generate an expected depth map of the vision device based on the detected position of the first object, the virtual model, and the positional relationship; identify a portion of the actual depth map that fails to match the expected depth map; and recognize a second object based on the identified portion.

[0005] In a second aspect, a robotic manipulator is utilized with the navigation system of the first aspect, wherein the robotic manipulator supports a surgical tool and comprises a plurality of links and a plurality of actuators configured to move the links to move the surgical tool, and wherein the robotic manipulator is controlled to avoid the second object.

[0006] In a third aspect, a method of operating a navigation system is provided, the navigation system comprising a localizer configured to detect a position of a first object, a vision device configured to generate an actual depth map of surfaces near the first object, and a controller coupled to the localizer and the vision device, the method comprising: accessing a virtual model corresponding to the first object; identifying a positional relationship between the localizer and the vision device in a common coordinate system; generating an expected depth map of the vision device based on the detected position of the first object, the virtual model, and the positional relationship; identifying a portion of the actual depth map that fails to match the expected depth map; and recognizing a second object based on the identified portion.

[0007] In a fourth aspect, a computer program product is provided comprising a non-transitory computer readable medium having instructions stored thereon, which when executed by one or more processors are configured to implement the method of the third aspect.

[0008] According to one implementation for any of the above aspects: the localizer is configured to be: an optical localizer configured to detect optical features associated with the first object; an electromagnetic localizer configured to detect electromagnetic features associated with the first object; an ultrasound localizer configured to detect the first object with or without any tracker; an inertial localizer configured to detect inertial features associated with the first object; or any combination of the aforementioned.

[0009] According to one implementation for any of the above aspects: the first object can be any of: an anatomy or bone of a patient; equipment in the operating room, such as, but not limited to: a robotic manipulator, a hand-held instrument, an end effector or tool attached to the robotic manipulator, a surgical table, a mobile cart, an operating table onto which the patient can be placed, an imaging system, a retractor, or any combination of the aforementioned.

[0010] According to one implementation for any of the above aspects: the vision device is coupled to any of: the localizer; a separate unit from the localizer; a camera unit of the navigation system; an adjustable arm; the robotic manipulator; an end effector; a hand-held tool; a surgical boom system, such as a ceiling mounted boom, a limb holding device, or any combination of the aforementioned.

[0011] According to one implementation for any of the above aspects, the surfaces near the first object can be surfaces: adjacent to the first object; spaced apart from the first object by a distance; touching the first object; directly on top of the first object; located in an environment near the first object; located in an environment behind or surrounding the first object; within a threshold distance of the first object; within a field of view of the localizer; or any combination of the aforementioned.

[0012] According to one implementation for any of the above aspects: the second object can be an object that can form an obstacle, including any of: a second portion of the anatomy of the patient, such as surrounding soft tissue; equipment in the operating room, such as, but not limited to: a robotic manipulator, one or more arms of the robotic manipulator, a second robotic manipulator, a hand-held instrument, an end effector or tool attached to the robotic manipulator or hand-held instrument, a surgical table, a mobile cart, an operating table onto which the patient can be placed, an imaging system, a retractor, or the body of a tracking device; a body part of a human being in the operating room; or any combination of the aforementioned.

[0013] According to one implementation for any of the above aspects: the controller can be one or more controllers or a control system. According to one implementation, the controller is configured to identify a position of the second object relative to the first object in the common coordinate system. According to one implementation, the controller identifies this position based on the detected position of the first object, a location of the second object in the actual depth map, and the positional relationship.

[0014] According to one implementation, the first object defines a target volume of patient tissue to be treated according to a surgical plan. According to one implementation, the controller is configured to: determine whether the second object is an obstacle to treating the target volume according to the surgical plan based on the position of the second object relative to the target volume in the common coordinate system and the surgical plan. According to one implementation, responsive to determining that the second object is an obstacle to the surgical plan, the controller is configured to modify the surgical plan and/or trigger a notification and/or halt surgical navigation.
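
By way of illustration only, the following Python sketch shows one possible way to make the obstacle determination described above, assuming the surgical plan supplies a planned tool path as waypoints in the common coordinate system and the second object is approximated by a bounding sphere; the function name, clearance value, and example positions are hypothetical and not taken from the disclosure.

```python
import numpy as np

def is_obstacle(planned_path, obstacle_center, obstacle_radius, clearance=0.005):
    """Return True if the recognized second object intrudes on the planned tool path.

    planned_path    : (N, 3) array of waypoints in the common coordinate system (meters)
    obstacle_center : (3,) position of the second object in the same coordinate system
    obstacle_radius : bounding-sphere radius approximating the second object's extent
    clearance       : additional safety margin required between the tool path and the object
    """
    # Distance from each waypoint to the object's center
    distances = np.linalg.norm(planned_path - obstacle_center, axis=1)
    # The object is an obstacle if any waypoint comes within radius + clearance
    return bool(np.any(distances < obstacle_radius + clearance))

# Example: a straight approach path toward a target volume at the origin
path = np.linspace([0.0, 0.0, 0.2], [0.0, 0.0, 0.0], num=50)
retractor_position = np.array([0.0, 0.01, 0.05])   # hypothetical second-object position
if is_obstacle(path, retractor_position, obstacle_radius=0.02):
    print("Obstacle detected: modify plan, trigger notification, or halt navigation")
```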

[0015] According to one implementation, a tracker is coupled to the first object. According to one implementation, the controller is configured to: detect, via the localizer, a position of the tracker in a first coordinate system specific to the localizer. According to one implementation, the controller can identify a position of the virtual model in the first coordinate system based on the detected position of the tracker in the first coordinate system and a positional relationship between the tracker and the first object in the first coordinate system. According to one implementation, the controller transforms the position of the virtual model in the first coordinate system to a position of the virtual model in a second coordinate system specific to the vision device based on the position of the virtual model in the first coordinate system and a positional relationship between the localizer and the vision device in the second coordinate system. According to one implementation, the controller can generate the expected depth map based on the position of the virtual model in the second coordinate system.
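
As an illustrative sketch of the coordinate transformations described in this implementation, the following Python example composes 4x4 homogeneous transforms to carry the virtual model from the localizer-specific coordinate system into the vision-device-specific coordinate system; the matrix names and example offsets are assumptions made for the example, not notation from the disclosure.

```python
import numpy as np

def compose(*transforms):
    """Compose 4x4 homogeneous transforms left-to-right."""
    out = np.eye(4)
    for T in transforms:
        out = out @ T
    return out

# Hypothetical example poses, each a 4x4 homogeneous matrix (identity rotation for brevity):
T_localizer_tracker = np.eye(4); T_localizer_tracker[:3, 3] = [0.1, 0.0, 0.5]    # tracker detected by the localizer
T_tracker_model     = np.eye(4); T_tracker_model[:3, 3]     = [0.0, 0.02, 0.0]   # fixed tracker-to-object offset
T_vision_localizer  = np.eye(4); T_vision_localizer[:3, 3]  = [-0.05, 0.0, 0.1]  # localizer pose in the vision-device frame

# Virtual model pose in the localizer coordinate system, then in the vision-device coordinate system
T_localizer_model = compose(T_localizer_tracker, T_tracker_model)
T_vision_model = compose(T_vision_localizer, T_localizer_model)
print(T_vision_model[:3, 3])   # model origin as seen from the vision device
```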

[0016] According to one implementation, the controller is configured to identify a portion of the actual depth map that fails to match the expected depth map by being configured to compare the actual depth map and the expected depth map. In some implementations, the controller computes a difference between the actual depth map and the expected depth map. According to one implementation, the controller determines whether a first section of the difference indicates an absolute depth greater than a threshold depth. According to one implementation, the controller identifies as the portion a second section of the actual depth map that corresponds to the first section of the difference responsive to determining that the first section of the difference indicates an absolute depth greater than the threshold depth. According to one implementation, the threshold depth is non-zero.

[0017] According to one implementation, the controller is configured to identify the portion of the actual depth map that fails to match the expected depth map. In some implementations, the controller does so by being configured to determine whether a size of the first section is greater than a minimum size threshold. In some implementations, the controller identifies as the portion the second section responsive to the determining that the size of the first section is greater than the minimum size threshold.
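
A minimal sketch of the comparison described in the two preceding implementations, assuming the depth maps are dense arrays in meters and using connected-component labeling to apply the minimum size threshold; the threshold values and function names are illustrative only.

```python
import numpy as np
from scipy import ndimage

def find_unmatched_portion(actual, expected, depth_threshold=0.01, min_pixels=50):
    """Return a boolean mask of the actual depth map that fails to match the expected map.

    actual, expected : (H, W) depth maps in meters
    depth_threshold  : non-zero absolute depth difference (meters) that counts as a mismatch
    min_pixels       : minimum connected-region size to keep, suppressing sensor noise
    """
    difference = actual - expected
    mismatch = np.abs(difference) > depth_threshold          # sections deeper/shallower than expected
    labels, count = ndimage.label(mismatch)                  # group mismatched pixels into sections
    keep = np.zeros_like(mismatch)
    for lbl in range(1, count + 1):
        section = labels == lbl
        if section.sum() >= min_pixels:                      # minimum size threshold
            keep |= section
    return keep

# The surviving mask marks the portion of the actual depth map attributed to a second object.
```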

[0018] According to one implementation, the controller is configured to recognize a second object based on the identified portion by being configured to match the identified portion with a predetermined profile corresponding to the second object.

[0019] According to one implementation, the portion of the actual depth map comprises an arrangement of features corresponding to the second object and located in a first position of the actual depth map. According to one implementation, the controller is configured to track movement of the second object by monitoring whether the arrangement of features moves to a second position that differs from the first position. According to one implementation, the controller performs this monitoring in an additional actual depth map subsequently generated by the vision device.
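
As a simple illustration of this movement monitoring, the sketch below compares the pixel centroid of the feature arrangement between two successive actual depth maps; the centroid criterion and shift threshold are assumptions for the example rather than the method of the disclosure.

```python
import numpy as np

def has_moved(mask_prev, mask_next, min_shift_pixels=3.0):
    """Report whether the feature arrangement attributed to the second object has moved
    between two successive actual depth maps, using the shift of its pixel centroid.

    mask_prev, mask_next : boolean masks marking the arrangement of features in each map
    """
    c_prev = np.argwhere(mask_prev).mean(axis=0)   # centroid (row, col) in the earlier map
    c_next = np.argwhere(mask_next).mean(axis=0)   # centroid in the subsequent map
    return bool(np.linalg.norm(c_next - c_prev) > min_shift_pixels)
```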

[0020] According to one implementation, the controller is configured to generate a virtual boundary corresponding to the second object in the common coordinate system. According to one implementation, the virtual boundary provides a constraint. In some examples, the constraint is on a motion of an object, such as a surgical tool, a robotic manipulator, a working end of a robotic hand-held surgical device, an imaging device, or any other moveable equipment in the operating room. In some examples, the constraint is a keep-out boundary or a keep-in boundary.

[0021] According to one implementation, the controller is configured to crop the actual depth map to a region of interest based on the virtual model, the detected position of the first object, and the positional relationship between the localizer and the vision device in a common coordinate system. In some implementations, the controller is configured to compare the actual depth map by being configured to compare the cropped actual depth map.

[0022] According to one implementation, the controller is configured to identify the positional relationship between the localizer and the vision device in the common coordinate system by being configured to project a pattern onto a surface in view of the vision device, and optionally also within view of the localizer. In some implementations, the controller generates localization data using the localizer indicating a position of the pattern in a first coordinate system specific to the localizer. In some implementations, the controller receives a calibration depth map illustrating the projected pattern generated by the vision device. In some implementations, the controller identifies a position of the projected pattern in a second coordinate system specific to the vision device based on the calibration depth map. In some implementations, the controller identifies the positional relationship between the localizer and the vision device in the common coordinate system based on the position of the pattern in the first coordinate system and the position of the pattern in the second coordinate system. In some implementations, the localizer is configured to operate in a first spectral band to detect the position of the first object, the vision device is configured to operate in a second spectral band to generate the actual depth map of the surfaces near the first object, and the first spectral band differs from the second spectral band.

[0023] Any of the above aspects can be combined in full or in part.
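
Returning to the calibration of paragraph [0022]: one common way to recover the positional relationship from the pattern positions observed in the two coordinate systems is rigid point-set registration (the Kabsch algorithm), sketched below under the assumption that corresponding pattern points are available in both systems. This is an illustrative technique, not necessarily the one employed by the disclosure.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Estimate the 4x4 rigid transform mapping points_a to points_b (Kabsch algorithm).

    points_a : (N, 3) pattern positions in the localizer-specific coordinate system
    points_b : (N, 3) the same pattern positions in the vision-device coordinate system
    """
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)       # cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cb - R @ ca
    return T                                      # vision-device-from-localizer transform
```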

[0024] The above summary may present a simplified overview of some aspects of the invention in order to provide a basic understanding of certain aspects of the invention discussed herein. The summary is not intended to provide an extensive overview of the invention, nor is it intended to identify any key or critical elements or delineate the scope of the invention. The sole purpose of the summary is merely to present some concepts in a simplified form as an introduction to the detailed description presented below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0025] FIG. 1 is a perspective view of a surgical navigation system including a localizer and a vision device.

[0026] FIG. 2 is a schematic view of a control system for controlling the surgical navigation system of FIG. 1.

[0027] FIG. 3 is a perspective view of coordinate systems used in the surgical navigation system of FIG. 1.

[0028] FIG. 4 is a flow chart of a method for navigating one example of a target site using tracker-based localization and machine vision.

[0029] FIG. 5 is an illustration of one example of a target site, e.g., an anatomy being treated during a surgical procedure.

[0030] FIG. 6 is an illustration of a position of a virtual model corresponding to an object in the target site of FIG. 5.

[0031] FIG. 7 is an illustration of an expected depth map based on the virtual model of FIG. 6.

[0032] FIG. 8 is an illustration of an actual depth map captured by the vision device of FIG. 1.

[0033] FIG. 9 is an illustration of the actual depth map of FIG. 8 cropped to a region of interest.

[0034] FIG. 10 is an illustration of the difference between the expected depth map of FIG. 7 and the actual depth map of FIG. 9.

[0035] FIG. 11 is an illustration of a virtual model corresponding to surgical retractors identified in the actual depth map of FIG. 9.

[0036] FIG. 12 is an illustration of a virtual model corresponding to a ligament identified in the actual depth map of FIG. 9.

[0037] FIG. 13 is an illustration of a virtual model corresponding to epidermal tissue identified in the actual depth map of FIG. 9.

[0038] FIG. 14 is an illustration of the virtual models of FIGS. 6 and 11-13 in a common coordinate system.

[0039] FIG. 15 is an illustration of an actual depth map subsequently captured by the vision device of FIG. 1.

[0040] FIG. 16 is an illustration of the virtual models of FIG. 14 with updated positioning based on the actual depth map of FIG. 15.

DETAILED DESCRIPTION

[0041] FIG. 1 illustrates a surgical system 10 for treating a patient. The surgical system 10 may be located in a surgical setting such as an operating room of a medical facility. The surgical system 10 may include a surgical navigation system 12 and a robotic manipulator 14. The robotic manipulator 14 may be coupled to a surgical instrument 16, and may be configured to maneuver the surgical instrument 16 to treat a target volume of patient tissue, such as at the direction of a surgeon and/or the surgical navigation system 12. For example, the surgical navigation system 12 may cause the robotic manipulator 14 to maneuver the surgical instrument 16 to remove the target volume of patient tissue while avoiding other objects adjacent the target volume, such as other medical tools and adjacent anatomical structures. Alternatively, the surgeon may manually hold and maneuver the surgical instrument 16 while receiving guidance from the surgical navigation system 12. As some non-limiting examples, the surgical instrument 16 may be a burring instrument, an electrosurgical instrument, an ultrasonic instrument, a reamer, an impactor, or a sagittal saw.

[0042] During a surgical procedure, the surgical navigation system 12 may track the position (location and orientation) of objects of interest within a surgical workspace using a combination of tracker-based localization and machine vision. The surgical workspace for a surgical procedure may be considered to include the target volume of patient tissue being treated and the area immediately surrounding the target volume in which an obstacle to treatment may be present. The tracked objects may include, but are not limited to, anatomical structures of the patient, target volumes of anatomical structures to be treated, surgical instruments such as the surgical instrument 16, and anatomical structures of surgical personnel such as a surgeon’s hand or fingers. The tracked anatomical structures of the patient and target volumes may include soft tissue such as ligaments, muscle, and skin, and may include hard tissue such as bone. The tracked surgical instruments may include retractors, cutting tools, and waste management devices used during a surgical procedure.

[0043] Fixing trackers to objects of interest in a surgical workspace may provide an accurate and efficient mechanism for the surgical navigation system 12 to determine the position of such objects in the surgical workspace. During the procedure, the trackers may generate known signal patterns, such as in a particular non-visible light band (e.g., infrared, ultraviolet). The surgical navigation system 12 may include a localizer that is specific to detecting signals in the particular non-visible light band and ignores light signals outside of this band. Responsive to the localizer detecting the signal pattern associated with a given tracker, the surgical navigation system 12 may determine a position of the tracker relative to the localizer based on the angle at which the pattern is detected. The surgical navigation system 12 may then infer the position of an object to which the tracker is affixed based on the determined position of the tracker and a fixed positional relationship between the object and the tracker.
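
As an illustrative sketch of determining a tracker position from the angles at which its signal pattern is detected, the following example triangulates a marker as the closest point between two sensor rays; the two-ray formulation and variable names are assumptions made for the example, not details from the disclosure.

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Midpoint of the shortest segment between two sensor rays (closest-point triangulation).

    origin_*, dir_* : 3-vectors giving each optical sensor's position and the direction
                      toward the detected marker signal.
    """
    dir_a, dir_b = dir_a / np.linalg.norm(dir_a), dir_b / np.linalg.norm(dir_b)
    w0 = origin_a - origin_b
    a, b, c = dir_a @ dir_a, dir_a @ dir_b, dir_b @ dir_b
    d, e = dir_a @ w0, dir_b @ w0
    denom = a * c - b * b                      # near zero only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_a = origin_a + s * dir_a                 # closest point on ray A
    p_b = origin_b + t * dir_b                 # closest point on ray B
    return (p_a + p_b) / 2.0                   # estimated marker position
```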

[0044] While the above trackers may enable the surgical navigation system 12 to accurately and efficiently track hard tissue objects such as bone and surgical instruments in the surgical workspace, these trackers are generally not adequate for tracking soft tissue objects such as skin and ligaments. Specifically, due to the flexible nature of soft tissue objects, maintaining a fixed positional relationship between an entire soft tissue object and a tracker during the course of a surgical procedure is difficult. Moreover, attaching a tracker to each of the several patient tissues and instruments involved in a surgical procedure congests the surgical workspace making it difficult to navigate, and increases the cost and complexity of the surgical navigation system 12. Accordingly, in addition to tracker-based localization, the surgical navigation system 12 may also implement machine vision to track objects in a surgical workspace during a surgical procedure.

[0045] Specifically, in addition to detecting the position of objects in a surgical workspace using a localizer and affixed trackers, the surgical navigation system 12 may include a vision device configured to generate a depth map of surfaces in a workspace (also referred to herein as a target site). The target site can be various different objects or sites. In one example, the target site is a surgical site, such as a portion of anatomy (e.g., bone) requiring treatment or tissue removal. In other examples, the target site can be equipment in the operating room, such as the robotic manipulator, the end effector or tool attached to the robotic manipulator, a surgical table, a mobile cart, an operating table onto which the patient can be placed, an imaging system, or the like.

[0046] The surgical navigation system 12 may also be configured to identify a positional relationship between the localizer and the vision device in a common coordinate system, and may be configured to generate an expected depth map of the vision device based on a detected position of an object in the target site using the localizer, a virtual model corresponding to the object, and the positional relationship. Thereafter, the surgical navigation system 12 may be configured to compare the expected depth map to an actual depth map generated by the vision device, and to identify a portion of the actual depth map that fails to match the expected depth map based on the comparison. The surgical navigation system 12 may be configured to then identify an object in the target site based on the identified portion, and to determine whether the object is an obstacle to a current surgical plan.

[0047] The surgical navigation system 12 may display the relative positions of objects tracked during a surgical procedure to aid the surgeon. The surgical navigation system 12 may also control and/or constrain movement of the robotic manipulator 14 and/or surgical instrument 16 to virtual boundaries associated with the tracked objects. For example, the surgical navigation system 12 may identify a target volume of patient tissue to be treated and potential obstacles in the surgical workspace based on the tracked objects. The surgical navigation system 12 may then restrict a surgical tool (e.g., an end effector EA of the surgical instrument 16) from contacting anything beyond the target volume of patient tissue to be treated, improving patient safety and surgical accuracy. The surgical navigation system 12 may also eliminate damage to surgical instruments caused by unintended contact with other objects, which may also result in undesired debris at the target site.

[0048] As illustrated in FIG. 1, the surgical navigation system 12 may include a localizer 18 and a navigation cart assembly 20. The navigation cart assembly 20 may house a navigation controller 22 configured to implement the functions, features, and processes of the surgical navigation system 12 described herein. In particular, the navigation controller 22 may include a processor 23 programmed to implement the functions, features, and processes of the navigation controller 22 and surgical navigation system 12 described herein. For example, the processor 23 may be programmed to convert optical-based signals received from the localizer 18 into localizer data representative of the position of objects affixed to trackers in the surgical workspace.

[0049] The navigation controller 22 may be in operative communication with a user interface 24 of the surgical navigation system 12. The user interface 24 may facilitate user interaction with the surgical navigation system 12 and navigation controller 22. For example, the user interface 24 may include one or more output devices that provide information to a user, such as from the navigation controller 22. The output devices may include a display 25 adapted to be situated outside of a sterile field including the surgical workspace and may include a display 26 adapted to be situated inside the sterile field. The displays 25, 26 may be adjustably mounted to the navigation cart assembly 20. The user interface 24 may also include one or more input devices that enable user-input to the surgical navigation system 12. The input devices may include a keyboard, mouse, and/or touch screen 28 that can be interacted with by a user to input surgical parameters and control aspects of the navigation controller 22. The input devices may also include a microphone that enables user-input through voice-recognition technology.

[0050] The localizer 18 may be configured to detect the position of one or more objects affixed to trackers in the surgical workspace, such as by detecting the position of the trackers affixed to the objects. Specifically, the localizer 18 may be coupled to the navigation controller 22 of the surgical navigation system 12, and may generate and communicate optical-based signals to the navigation controller 22 that indicate the position of the one or more trackers in the surgical workspace. The navigation controller 22 may then be configured to generate localizer data indicative of the position of the objects affixed to the trackers in the surgical workspace based on the optical-based signals and fixed positional relationships between the objects and trackers. Objects in the target site tracked with the localizer 18 may be referred to herein as “localized objects.”

[0051] The localizer 18 may have an outer casing 30 that houses at least two optical sensors 32. Each of the optical sensors 32 may be adapted to detect signals in a particular non-visible light band specific to the trackers, such as infrared or ultraviolet. While FIG. 1 illustrates the localizer 18 as a single unit with multiple optical sensors 32, in an alternative example, the localizer 18 may include separate units arranged around the surgical workspace, each with a separate outer casing and one or more optical sensors 32.

[0052] The optical sensors 32 may be one-dimensional or two-dimensional charge-coupled devices (CCDs). For example, the outer casing 30 may house two two-dimensional CCDs for triangulating the position of trackers in the surgical workspace, or may house three one-dimensional CCDs for triangulating the position of trackers in the surgical workspace. Additionally or alternatively, the localizer 18 may employ other optical sensing technologies, such as complementary metal-oxide semiconductor (CMOS) active pixels.

[0053] In some implementations, the navigation system and/or localizer 18 are electromagnetically (EM) based. For example, the navigation system may comprise an EM transceiver coupled to the navigation controller 22 and/or to another computing device, controller, and the like. Here, the trackers may comprise EM components attached thereto (e.g., various types of magnetic trackers, electromagnetic trackers, inductive trackers, and the like), which may be passive or may be actively energized. The EM transceiver generates an EM field, and the EM components respond with EM signals such that tracked states are communicated to (or interpreted by) the navigation controller 22. The navigation controller 22 may analyze the received EM signals to associate relative states thereto. Here too, it will be appreciated that embodiments of EM-based navigation systems may have structural configurations that are different than the active marker-based navigation system illustrated herein.

[0054] In other implementations, the navigation system and/or the localizer 18 could be based on one or more types of imaging systems that do not necessarily require trackers to be fixed to objects in order to determine location data associated therewith. For example, an ultrasound-based imaging system could be provided to facilitate acquiring ultrasound images (e.g., of specific known structural features of tracked objects, of markers or stickers secured to tracked objects, and the like) such that tracked states (e.g., position, orientation, and the like) are communicated to (or interpreted by) the navigation controller 22 based on the ultrasound images. The ultrasound images may be 2D, 3D, or a combination thereof. The navigation controller 22 may process ultrasound images in near real-time to determine the tracked states. The ultrasound imaging device may have any suitable configuration and may be different than the camera unit as shown in FIG. 1. By way of further example, a fluoroscopy-based imaging system could be provided to facilitate acquiring X-ray images of radio-opaque markers (e.g., stickers, tags, and the like with known structural features that are attached to tracked objects) such that tracked states are communicated to (or interpreted by) the navigation controller 22 based on the X-ray images. The navigation controller 22 may process X-ray images in near real-time to determine the tracked states. Similarly, other types of optical-based imaging systems could be provided to facilitate acquiring digital images, video, and the like of specific known objects (e.g., based on a comparison to a virtual representation of the tracked object or a structural component or feature thereof) and/or markers (e.g., stickers, tags, and the like that are attached to tracked objects) such that tracked states are communicated to (or interpreted by) the navigation controller 22 based on the digital images. The navigation controller 22 may process digital images in near real-time to determine the tracked states.

[0055] Accordingly, it will be appreciated that various types of imaging systems, including multiple imaging systems of the same or different type, may form a part of the navigation system without departing from the scope of the present disclosure. Those having ordinary skill in the art will appreciate that the navigation system and/or localizer 18 may have any other suitable components or structure not specifically recited herein. For example, the navigation system may utilize solely inertial tracking or any combination of tracking techniques. Furthermore, any of the techniques, methods, and/or components associated with the navigation system illustrated in FIG. 1 may be implemented in a number of different ways, and other configurations are contemplated by the present disclosure.

[0056] The localizer 18 may be mounted to an adjustable arm to selectively position the optical sensors 32 with a field of view of the surgical workspace and target volume that, ideally, is free from obstacles. The localizer 18 may be adjustable in at least one degree of freedom by rotating about a rotational joint and may be adjustable about two or more degrees of freedom.

[0057] As previously described, the localizer 18 may cooperate with a plurality of tracking devices, also referred to herein as trackers, to determine the position of objects within the surgical workspace to which the trackers are affixed. In general, the object to which each tracker is affixed may be rigid and inflexible so that movement of the object cannot or is unlikely to alter the positional relationship between the object and the tracker. In other words, the relationship between a tracker in the surgical workspace and an object to which the tracker is attached may remain fixed, notwithstanding changes in the position of the object within the surgical workspace. For instance, the trackers may be firmly affixed to patient bones and surgical instruments, such as retractors and the surgical instrument 16. In this way, responsive to determining a position of a tracker in the surgical workspace using the localizer 18, the navigation controller 22 may infer the position of the object to which the tracker is affixed based on the determined position of the tracker.

[0058] For example, when the target volume to be treated is located at a patient’s knee area, a tracker 34 may be firmly affixed to the femur F of the patient, a tracker 36 may be firmly affixed to the tibia T of the patient, and a tracker 38 may be firmly affixed to the surgical instrument 16. Trackers 34, 36 may be attached to the femur F and tibia T in the manner shown in U.S. Patent No. 7,725,162, hereby incorporated by reference. Trackers 34, 36 may also be mounted like those shown in U.S. Patent Application Publication No. 2014/0200621, filed on January 16, 2014, entitled "Navigation Systems and Methods for Indicating and Reducing Line-of-Sight Errors," hereby incorporated by reference. A tracker 38 may be integrated into the surgical instrument 16 during manufacture or may be separately mounted to the surgical instrument 16 in preparation for a surgical procedure.

[0059] Prior to the start of a surgical procedure using the surgical system 10, pre-operative images may be generated for anatomy of interest, such as anatomical structures defining and/or adjacent a target volume of patient tissue to be treated by the surgical instrument 16. For example, when the target volume of patient tissue to be treated is in a patient’s knee area, pre-operative images of the patient’s femur F and tibia T may be taken. These images may be based on MRI scans, radiological scans, or computed tomography (CT) scans of the patient’s anatomy, and may be used to develop virtual models of the anatomical structures. Each virtual model for an anatomical structure may include a three-dimensional model (e.g., point cloud, mesh, CAD) that includes data representing the entire or at least a portion of the anatomical structure, and/or data representing a target volume of the anatomical structure to be treated. These virtual models may be provided to and stored in the navigation controller 22 in advance of a surgical procedure.

[0060] In addition or alternatively to taking pre-operative images, plans for treatment can be developed in the operating room from kinematic studies, bone tracing, and other methods. These same methods could also be used to generate the virtual models described above.

[0061] In addition to virtual models corresponding to the patient’s anatomical structures of interest, prior to the surgical procedure, the navigation controller 22 may receive and store virtual models for other tracked objects of interest to the surgical procedure, such as surgical instruments and other objects potentially present in the surgical workspace (e.g., the surgeon’s hand and/or fingers). The navigation controller 22 may also receive and store surgical data particular to the surgical procedure, such as positional relationships between trackers and the objects fixed to the trackers, a positional relationship between the localizer 18 and the vision device, and a surgical plan. The surgical plan may identify the patient anatomical structures involved in the surgical procedure, may identify the instruments being used in the surgical procedure, and may define the planned trajectories of instruments and the planned movements of patient tissue during the surgical procedure.

[0062] During the surgical procedure, the optical sensors 32 of the localizer 18 may detect light signals, such as in a non-visible light band (e.g., infrared or ultraviolet), from the trackers 34, 36, 38, and may output optical-based signals to the navigation controller 22 indicating the position of the trackers 34, 36, 38 relative to the localizer 18 based on the detected light signals. The navigation controller 22 may then generate localizer data indicating the positions of the objects fixed to the trackers 34, 36, 38 relative to the localizer 18 based on the determined positions of the trackers 34, 36, 38 and the known positional relationships between the trackers 34, 36, 38 and the objects.

[0063] To supplement the tracker-based object tracking provided by the localizer 18, the surgical navigation system 12 may also include the vision device 40. The vision device 40 may be capable of generating three-dimensional images of the surgical workspace in real time. Unlike the localizer 18, which may be limited to detecting and pinpointing the position of non-visible light signals transmitted from the trackers 34, 36, 38, the vision device 40 may be configured to generate a three-dimensional image of the surfaces in and surrounding the target volume that are in the field of view of the vision device 40, such as in the form of a depth map. The vision device 40 may include one or more image sensors 42 and a light source 44. Each of the image sensors 42 may be a CMOS sensor.

[0064] For example, the vision device 40 may generate a depth map of the surgical workspace by illuminating exposed surfaces in the surgical workspace with non-visible light, such as infrared or ultraviolet light. The surfaces may then reflect back the non-visible light, which may be detected by the one or more image sensors 42 of the vision device 40. Based on a time of flight of the non-visible light from transmission to detection by the vision device 40, the vision device 40 may determine a distance between the vision device 40 and several points on the exposed surfaces of the surgical workspace. The vision device 40 may then generate a depth map indicating the distance and angle between the vision device 40 and each surface point. Alternatively, the vision device 40 may utilize other modalities to generate a depth map, such as and without limitation, structured light projections, laser range finding, or stereoscopy.
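
A minimal sketch of the time-of-flight relationship described above, assuming the emitted light travels at the speed of light in air and the measured interval is the round trip from emission to detection.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance to a surface point from the measured round-trip time of the emitted light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a 4 ns round trip corresponds to a surface roughly 0.6 m from the vision device
print(tof_distance(4e-9))   # ~0.5996 m
```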

[0065] Similar to the localizer 18, prior to a surgical procedure, the vision device 40 may be positioned with a field of view of the surgical workspace, preferably without obstacles. The vision device 40 may be integrated with the localizer 18, as illustrated in FIG. 1. Alternatively, the vision device 40 may be mounted to a separate adjustable arm to position the vision device 40 separately from the localizer 18. The vision device 40 can also be directly attached to the robotic manipulator 14, for example as described in United States Patent 10,531,926, entitled “Systems and Methods for Identifying and Tracking Physical Objects During a Robotic Surgical Procedure”, the contents of which are hereby incorporated by reference in its entirety. The vision device 40 may also be in operative communication with the navigation controller 22.

[0066] As described above, the navigation controller 22 may be configured to track objects and identify obstacles in the surgical workspace based on the tracker-based localization data generated using the localizer 18 and depth maps generated by the vision device 40. In particular, at the same time the vision device 40 generates a depth map of the surgical workspace, the localizer 18 may generate optical-based data used to generate the localizer data indicating the position of objects fixed to trackers in the surgical workspace relative to the localizer 18. The depth maps generated by the vision device 40 and the localizer data generated with the localizer 18 may thus be temporally interleaved. In other words, each instance of localizer data generated with the localizer 18 may be temporally associated with a different depth map generated by the vision device 40, such that the positions of objects indicated in the localizer data and the positions of those objects in the associated depth map correspond to a same moment in time during the surgical procedure.
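
By way of illustration, temporal association of localizer data with depth maps could be performed by pairing frames whose timestamps are closest, as in the sketch below; the tolerance and data layout are assumptions for the example.

```python
def pair_by_timestamp(localizer_frames, depth_frames, tolerance=0.005):
    """Associate each localizer data frame with the depth map closest in time.

    localizer_frames, depth_frames : lists of (timestamp_seconds, payload) tuples
    tolerance                      : maximum time offset (seconds) to accept a pairing
    """
    pairs = []
    for t_loc, loc in localizer_frames:
        t_depth, depth = min(depth_frames, key=lambda frame: abs(frame[0] - t_loc))
        if abs(t_depth - t_loc) <= tolerance:
            pairs.append((loc, depth))
    return pairs
```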

[0067] Responsive to determining the localizer data, the navigation controller 22 may be configured to generate an expected depth map to be captured by the vision device 40 and associated with the localization data. The expected depth map may be the depth map expected to be generated by the vision device 40 that is temporally associated with the localizer data, assuming only the objects fixed to the trackers are present in the surgical workspace. The navigation controller 22 may be configured to determine the expected depth map based on the detected positions of objects fixed to trackers in the surgical workspace as indicated in the localizer data, virtual models corresponding to the objects, and a positional relationship between the localizer 18 and the vision device 40.
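
As one possible sketch of generating an expected depth map, the example below projects points sampled from the virtual model, already posed in the vision-device coordinate system, through an assumed pinhole camera model and keeps the nearest depth per pixel; the intrinsics and point sampling are assumptions for the example, and a practical implementation would likely rasterize the model mesh instead.

```python
import numpy as np

def expected_depth_map(model_points, T_vision_model, fx, fy, cx, cy, height, width):
    """Render an expected depth map by projecting virtual-model points into the vision device.

    model_points   : (N, 3) points sampled from the virtual model, in model coordinates
    T_vision_model : 4x4 pose of the model in the vision-device coordinate system
    fx, fy, cx, cy : pinhole intrinsics assumed for the vision device
    """
    depth = np.full((height, width), np.inf)      # background left at infinity in this sketch
    # Transform model points into the vision-device coordinate system
    pts = (T_vision_model[:3, :3] @ model_points.T).T + T_vision_model[:3, 3]
    for x, y, z in pts:
        if z <= 0:
            continue                               # behind the camera
        u, v = int(round(fx * x / z + cx)), int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = min(depth[v, u], z)      # keep the nearest surface (z-buffer)
    return depth
```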

[0068] Thereafter, the navigation controller 22 may retrieve the actual depth map generated by the vision device 40 that is temporally associated with the localizer data, and may identify a portion of the actual depth map that fails to match the expected depth map. The navigation controller 22 may then identify objects in the surgical workspace, such as objects other than the objects fixed to trackers that are adjacent to a target volume of patient tissue to be treated, based on the identified portion, and may determine whether any such object poses an obstacle to a current surgical trajectory.

[0069] The surgical instrument 16 may form part of an end effector of the robotic manipulator 14. The robotic manipulator 14 may include a base 46, several links 48 extending from the base 46, and several active joints for moving the surgical instrument 16 with respect to the base 46. The links 48 may form a serial arm structure as shown in FIG. 1, a parallel arm structure (shown for example in FIG. 3), or other suitable structure. The robotic manipulator 14 may include an ability to operate in a manual mode in which a user grasps the end effector of the robotic manipulator 14 to cause movement of the surgical instrument 16 (e.g., directly, or through force/torque sensor measurements that cause active driving of the robotic manipulator 14). The robotic manipulator 14 may also include a semi-autonomous mode in which the surgical instrument 16 is moved by the robotic manipulator 14 along a predefined tool path (e.g., the active joints of the robotic manipulator 14 are operated to move the surgical instrument 16 without requiring force/torque on the end effector from the user). An example of operation in a semi-autonomous mode is described in U.S. Pat. No. 9,119,655 to Bowling, et al., hereby incorporated by reference. A separate tracker may be attached to the base 46 of the robotic manipulator 14 to track movement of the base 46 by the localizer 18.

[0070] Similar to the surgical navigation system 12, the robotic manipulator 14 may house a manipulator controller 50 including a processor 52 programmed to implement the processes of the robotic manipulator 14, or more particularly the manipulator controller 50, described herein. For example, the processor 52 may be programmed to control operation and movement of the surgical instrument 16 through movement of the links 48, such as at the direction of the surgical navigation system 12.

[0071] During a surgical procedure, the manipulator controller 50 may be configured to determine a desired location to which the surgical instrument 16 should be moved, such as based on navigation data received from the navigation controller 22. Based on this determination, and information relating to the current position of the surgical instrument 16, the manipulator controller 50 may be configured to determine an extent to which each of the links 48 needs to be moved to reposition the surgical instrument 16 from the current position to the desired position. Data indicating where the links 48 are to be repositioned may be forwarded to joint motor controllers (e.g., one for controlling each motor) that control the active joints of the robotic manipulator 14. Responsive to receiving such data, the joint motor controllers may be configured to move the links 48 in accordance with the data, and consequently move the surgical instrument 16 to the desired position.

[0072] Referring now to FIG. 2, the localizer 18 and vision device 40 may include a localizer controller 62 and vision controller 64 respectively. The localizer controller 62 may be communicatively coupled to the optical sensors 32 of the localizer 18 and to the navigation controller 22. During a surgical procedure, the localizer controller 62 may be configured to operate the optical sensors 32 to cause them to generate optical-based data indicative of light signals received from the trackers 34, 36, 38.

[0073] The trackers 34, 36, 38 may be active trackers each having at least three active markers for transmitting light signals to the optical sensors 32. The trackers 34, 36, 38 may be powered by an internal battery, or may have leads to receive power through the navigation controller 22. The active markers of each tracker 34, 36, 38 may be light emitting diodes (LEDs) 65 that transmit light, such as infrared or ultraviolet light. Each of the trackers 34, 36, 38 may also include a tracker controller 66 connected to the LEDs 65 of the tracker 34, 36, 38 and to the navigation controller 22. The tracker controller 66 may be configured to control the rate and order in which LEDs 65 of the trackers 34, 36, 38 fire, such as at the direction of the navigation controller 22. For example, the tracker controllers 66 of the trackers 34, 36, 38 may cause the LEDs 65 of each tracker 34, 36, 38 to fire at different rates and/or times to facilitate differentiation of the trackers 34, 36, 38 by the navigation controller 22.

[0074] The sampling rate of the optical sensors 32 is the rate at which the optical sensors 32 receive light signals from sequentially fired LEDs 65. The optical sensors 32 may have sampling rates of 100 Hz or more, or more preferably 300 Hz or more, or most preferably 500 Hz or more. For example, the optical sensors 32 may have sampling rates of 8000 Hz.

[0075] Rather than being active trackers, the trackers 34, 36, 38 may be passive trackers including passive markers (not shown), such as reflectors that reflect light emitted from the localizer 18 (e.g., light emitted from the light source 44 (FIG. 1)). The reflected light may then be received by the optical sensors 32.

[0076] Responsive to the optical sensors 32 receiving light signals from the trackers 34, 36, 38, the optical sensors 32 may output optical-based data to the localizer controller 62 indicating the position of the trackers 34, 36, 38 relative to the localizer 18, and correspondingly, indicating the position of the objects firmly affixed to the trackers 34, 36, 38 relative to the localizer 18. In particular, each optical sensor 32 may include a one- or two-dimensional sensor area that detects light signals from the trackers 34, 36, 38, and responsively indicates a position within the sensor area that each light signal is detected. The detection position of each light signal within a given sensor area may be based on the angle at which the light signal is received by the optical sensor 32 including the sensor area, and similarly may correspond to the position of the source of the light signal in the surgical workspace.

[0077] Thus, responsive to receiving light signals from the trackers 34, 36, 38, each optical sensor 32 may generate optical-based data indicating positions within the sensor area of the optical sensor 32 that the light signals were detected. The optical sensors 32 may communicate such optical-based data to the localizer controller 62, which may then communicate the optical-based data to the navigation controller 22. The navigation controller 22 may then generate tracker position data indicating the positions of the trackers 34, 36, 38 relative to the localizer 18 based on the optical-based data. For example, the navigation controller 22 may triangulate the positions of the LEDs 65 relative to the localizer 18 based on the optical-based data, and may apply stored positional relationships between the trackers 34, 36, 38 and the markers to the determined positions of the LEDs 65 relative to the localizer 18 to determine positions of the trackers 34, 36, 38 relative to the localizer 18.
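
By way of a non-limiting illustration only, the following Python sketch shows one way the triangulation described above could be carried out for a single LED observed by two optical sensors: each sensor observation defines a bearing ray, and the point closest to both rays is taken as the LED position. The sensor poses, ray directions, helper names, and the use of NumPy are assumptions introduced for this sketch and are not taken from the application.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest point to two bearing rays (origins p1, p2; unit directions d1, d2)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    c = d1 @ d2
    # Normal equations of min |(p1 + t1*d1) - (p2 + t2*d2)|^2 (rays must not be parallel).
    A = np.array([[1.0, -c], [c, -1.0]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Example: two sensors 0.5 m apart both observing an LED roughly 2 m away.
p_led = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([0.05, 0.0, 1.0]),
                             np.array([0.5, 0.0, 0.0]), np.array([-0.2, 0.0, 1.0]))
```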

[0078] Thereafter, the navigation controller 22 may generate the localizer data indicating the positions of the objects firmly affixed to the trackers 34, 36, 38 relative to the localizer 18 based on the tracker position data. Specifically, the navigation controller 22 may retrieve stored positional relationships between the trackers 34, 36, 38 and the objects to which the trackers 34, 36, 38 are affixed, and may apply these positional relationships to the tracker position data to determine the position of the objects fixed to the trackers 34, 36, 38 relative to the localizer 18. Alternatively, the localizer controller 62 may be configured to determine the tracker position data and/or localizer data based on the received optical-based data, and may transmit the tracker position data and/or localizer data to the navigation controller 22 for further processing.

[0079] The vision controller 64 may be communicatively coupled to the light source 44 and the one or more image sensors 42 of the vision device 40, and to the navigation controller 22. Contemporaneously with the localizer controller 62 causing the localizer 18 to generate optical-based data indicating the position of the trackers 34, 36, 38 in the surgical workspace, the vision controller 64 may cause the vision device 40 to generate a depth map of the exposed surfaces of the surgical workspace. Specifically, the vision controller 64 may cause the image sensors 42 to generate image data that forms the basis of the depth map, and may generate the depth map based on the image data. The vision controller 64 may then forward the depth map to the navigation controller 22 for further processing. Alternatively, the vision controller 64 may communicate the image data to the navigation controller 22, which may then generate the depth map based on the received image data.

[0080] In general, a depth map generated by the vision device 40 may indicate the distance between the vision device 40 and surfaces in the field of view of the vision device 40. In other words, the depth map may illustrate a topography of the surfaces in the surgical workspace from the viewpoint of the vision device 40. Each depth map generated by the vision device 40 may include a plurality of image components forming an image frame of the vision device 40. Each of the image components may be akin to a pixel of the depth map, and may define a vector from a center of the vision device 40 to a point on a surface in the field of view of the vision device 40. For instance, the location of the image component in the image frame of the vision device 40 may correspond to the horizontal and vertical components of the vector defined by the image component, and a color of the image component may correspond to the depth component of the vector defined by the image component. As an example, image components representing surface points in the surgical workspace closer to the vision device 40 may have a brighter color than those image components representing surface points farther from the vision device 40.
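
As a non-limiting sketch of the image-component-to-vector relationship described above, the following Python fragment converts a depth map into an array of three-dimensional surface points, assuming a pinhole-style projection with hypothetical intrinsic parameters fx, fy, cx, cy; the application does not specify a particular camera model.

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (meters) into an HxWx3 array of surface points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth   # horizontal component of each image-component vector
    y = (v - cy) / fy * depth   # vertical component of each image-component vector
    return np.dstack((x, y, depth))  # the depth value is the depth component
```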

[0081] The vision device 40 may be a depth camera including one or more depth sensors 68. The depth sensors 68 may be adapted to detect light, such as non-visible light, reflected off surfaces within the field of view of the depth sensors 68. During a surgical procedure, the vision controller 64 may cause the light source 44 to illuminate the target site with non-visible light, such as infrared or ultraviolet light. The depth sensors 68 may then detect reflections of the non-visible light off the surfaces of the target site, which may enable the vision controller 64 to generate the depth map.

[0082] For example, the vision controller 64 may generate a depth map based on a time for the light transmitted from the light source 44 to reflect off points on exposed surfaces in the target site (i.e., time of flight methodology), which may correspond to distances between the vision device 40 and the various points. The vision controller 64 may then utilize these determined distances to generate the depth map. As an alternative example, the light source 44 may project a known structured non-visible light pattern onto exposed surfaces in the surgical site. The depth sensors 68 may then detect a reflection of the known pattern, which may be distorted based on the topography of the surfaces in the target site. The vision controller 64 may thus be configured to generate the depth map of the target site based on a comparison between the known pattern and the distorted version of the pattern detected by the depth sensors 68.
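
As a simple, hedged sketch of the time-of-flight relationship mentioned above: the measured round-trip travel time of the emitted light maps directly to a per-point distance. The helper below is illustrative only and not the claimed implementation.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Distance to a surface point given the measured round-trip time of the light."""
    return C * round_trip_time_s / 2.0

# e.g., a 10 ns round trip corresponds to roughly 1.5 m.
d = tof_distance(10e-9)
```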

[0083] Alternatively, the vision device 40 may be an RGB camera including one or more RGB sensors 70. The RGB sensors 70 may be configured to generate color images of the exposed surfaces in the target site, and the vision controller 64 may be configured to generate the depth map based on the color images.

[0084] For instance, similar to the structured light methodology described above, the vision controller 64 may be configured to cause the light source 44 to project a known structured light pattern onto the target site, such as in a color that deviates from colors in the target site. The RGB sensors 70 may then generate an RGB image of the target site, which may depict a distorted version of the known structured light pattern based on the surface topography of the target site. The vision controller 64 may extract the distorted version of the known structured light pattern from the RGB image, such as using pattern recognition, edge detection, and color recognition, and may determine the depth map based on a comparison between the known structured light pattern and the extracted distorted version.

[0085] As a further alternative, the vision device 40 may be configured to generate a depth map of the target site using principles in stereoscopy. More particularly, multiple image sensors 42, such as multiple depth sensors 68 or RGB sensors 70, may be positioned to have fields of view of the target site from different angles. The vision controller 64 may be configured to cause each image sensor 42 to simultaneously generate an image of the target site from a different angle. For instance, when the image sensors 42 are depth sensors 68, the vision controller 64 may be configured to cause the light source 44 to illuminate the exposed surfaces of the target site with a pattern of non-visible light, and each of the depth sensors 68 may image the pattern of non-visible light reflected off the exposed surfaces from a different angle. The vision controller 64 may then determine a three-dimensional position of points on the surfaces in the target site relative to the vision device 40 based on the position of the surface points in each image and a known positional relationship between the image sensors 42. The vision controller 64 may thereafter generate the depth map based on the determined three-dimensional positions.
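
The stereoscopic relationship can be sketched, under the assumption of rectified pinhole images, as depth being proportional to the known baseline between the image sensors and inversely proportional to the observed disparity of a surface point; the focal length and baseline values below are hypothetical.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a surface point seen by both sensors, from its disparity in pixels."""
    return focal_px * baseline_m / disparity_px

# Example with assumed values: a 24-pixel disparity at 600 px focal length and
# a 0.12 m baseline corresponds to a point roughly 3 m from the vision device.
z = stereo_depth(disparity_px=24.0, focal_px=600.0, baseline_m=0.12)
```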

[0086] To reduce interference between the localizer 18 and vision device 40 during the surgical procedure, the localizer 18 and vision device 40 may be configured to operate in different spectral bands to detect the positions of objects in the target site. Additionally or alternatively, when the vision device 40 uses a light source 44 to illuminate the exposed surfaces in the target site, such as when the vision device 40 operates in a non-visible light band, the localizer 18 may be configured to operate with a temporal exposure rate sufficiently short that the light source 44 of the vision device 40 is not visible to the localizer 18.

[0087] As previously described, the navigation controller 22 may include a processor 23 programmed to perform the functions, features, and processes of the navigation controller 22 described herein, such as calculating an expected depth map based on localizer data generated using the localizer 18, and determining objects adjacent to a target volume of patient tissue to be treated in a surgical site by comparing the expected depth map to an actual depth map generated by the vision device 40. In addition to the processor 23, the navigation controller 22 may include memory 72 and non-volatile storage 74 each operatively coupled to the processor 23.

[0088] The processor 23 may include one or more devices selected from microprocessors, micro-controllers, digital signal processors, microcomputers, central processing units, field programmable gate arrays, programmable logic devices, state machines, logic circuits, analog circuits, digital circuits, or any other devices that manipulate signals (analog or digital) based on operational instructions stored in the memory 72. The memory 72 may include a single memory device or a plurality of memory devices including, but not limited to, read-only memory (ROM), random access memory (RAM), volatile memory, non-volatile memory, static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, cache memory, or any other device capable of storing information. The non-volatile storage 74 may include one or more persistent data storage devices such as a hard drive, optical drive, tape drive, non-volatile solid state device, or any other device capable of persistently storing information.

[0089] The non-volatile storage 74 may store software, such as a localization engine 76, a transformation engine 78, a vision engine 80, and a surgical navigator 81. The software may be embodied by computer-executable instructions compiled or interpreted from a variety of programming languages and/or technologies, including, without limitation, and either alone or in combination, Java, C, C++, C#, Objective C, Fortran, Pascal, JavaScript, Python, Perl, and PL/SQL.

[0090] The processor 23 may operate under control of the software stored in the non-volatile storage 74. In particular, the processor 23 may be configured to execute the software as active running processes by reading the software into the memory 72 and executing its computer-executable instructions. Upon execution by the processor 23, the computer-executable instructions may be configured to cause the processor 23 to implement the configured functions, features, and processes of the navigation controller 22 described herein. The software may thus be configured to cause the navigation controller 22 to implement the functions, features, and processes described herein by virtue of its computer-executable instructions being configured, upon execution by the processor 23, to cause the processor 23 to implement those processes.

[0091] The non-volatile storage 74 of the navigation controller 22 may also store data that facilitates operation of the navigation controller 22. Specifically, the software of the navigation controller 22 may be configured to access the data stored in the non-volatile storage 74, and to implement the functions, features, and processes of the navigation controller 22 described herein based on the data.

[0092] For example and without limitation, the data stored in the non-volatile storage 74 may include model data 82, transformation data 83, and a surgical plan 84. The model data 82 may include the virtual models of anatomical structures of interest to the surgical procedure, including the virtual models for potential obstacles such as a surgeon’s hand or fingers, and virtual models for the surgical instruments being used in a surgical procedure, as described above. The transformation data 83 may include the positional relationships described herein, which may enable transforming a position of an object in the surgical workspace relative to one device, such as a tracker 34, 36, 38, the localizer 18, or the vision device 40, to a position of the object relative to another device. For example, the transformation data 83 may set forth the fixed positional relationships between the trackers 34, 36, 38 and the objects firmly affixed to the trackers 34, 36, 38, and a positional relationship between the localizer 18 and the vision device 40. The surgical plan 84 may identify the patient anatomical structures and target volumes involved in the surgical procedure, may identify the instruments being used in the surgical procedure, and may define the planned trajectories of the instruments and the planned movements of patient tissue during the surgical procedure.

[0093] Referring again to the software running on the navigation controller 22, the localization engine 76 may be configured to generate the localization data indicative of the position of the objects firmly affixed to the trackers 34, 36, 38 relative to the localizer 18, such as based on optical-based data generated by the optical sensors 32 of the localizer 18. The transformation engine 78 may be configured to transform the position of an object relative to one device of the surgical system 10 to a position of the object relative to another device of the surgical system 10, such as based on the positional relationships represented by the transformation data 83. The vision engine 80 may be configured to generate an expected depth map based on localization data generated by the localization engine 76 and the transformation data 83, and to compare the expected depth map with an actual depth map generated by the vision device 40 to identify and track objects in the surgical workspace. The surgical navigator 81 may be configured to provide surgical guidance based on the identification and tracking determined by the vision engine 80. Further details of the functionality of these software components are discussed in more detail below.

[0094] Although not shown, each of the manipulator controller 50, the localizer controller 62, and the vision controller 64 may also include a processor, memory, and non-volatile storage including data and software configured, upon execution of its computer-executable instructions, to implement the functions, features, and processes of the controller described herein.

[0095] While an example surgical system 10 is shown in FIG. 1 and further detailed in FIG. 2, this example is not intended to be limiting. Indeed, the surgical system 10 may have more or fewer components, and alternative components and/or implementations may be used. For instance, all or a portion of the localization engine 76 may be implemented by the localizer controller 62. As an example, the localizer controller 62 may be configured to generate the localizer data indicating the positions of objects firmly affixed to the trackers 34, 36, 38 based on the model data 82.

[0096] FIG. 3 illustrates coordinate systems of the various objects and devices used with the surgical system 10. The navigation controller 22, such as via the transformation engine 78, may be configured to transform a position of an object in one coordinate system to a position of the object in another coordinate system, such as based on positional relationships defined in the transformation data 83 stored in the navigation controller 22. Such transformations may enable the navigation controller 22 to track objects in the surgical system 10 relative to a common coordinate system. Moreover, the transformations may enable the navigation controller 22, such as via the vision engine 80, to calculate an expected depth map to be generated by the vision device 40 based on localization data generated by the localization engine 76, and to identify and track objects based on a comparison between an actual depth map generated by the vision device 40 and the expected depth map. As one non-limiting example, each of the positional relationships defined by the transformation data 83 and enabling the transformations between coordinate systems may be represented by a transformation matrix defined by the transformation data 83.
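
As a non-limiting illustration of the transformation matrices mentioned above, the following Python sketch composes 4x4 homogeneous transforms to map a point from one coordinate system to another, in the manner later applied to the bone and instrument trackers; the numeric values and variable names are placeholders introduced here rather than values from the application.

```python
import numpy as np

def transform_point(T_ab, p_b):
    """Map a 3-D point expressed in frame B into frame A using the 4x4 transform T_ab."""
    return (T_ab @ np.append(p_b, 1.0))[:3]

# T_lclz_btrk1 (tracker pose measured by the localizer) composed with
# T_btrk1_fbone (stored registration) gives the femur pose in LCLZ.
T_lclz_btrk1 = np.eye(4); T_lclz_btrk1[:3, 3] = [0.10, 0.02, 1.50]   # assumed pose
T_btrk1_fbone = np.eye(4); T_btrk1_fbone[:3, 3] = [0.00, -0.05, 0.08]  # assumed offset
T_lclz_fbone = T_lclz_btrk1 @ T_btrk1_fbone
p_lclz = transform_point(T_lclz_fbone, np.array([0.0, 0.0, 0.0]))
```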

[0097] The navigation controller 22 may be configured to track objects in the target site, such as objects in the target site affixed to the trackers 34, 36, 38, with reference to a localizer coordinate system LCLZ. The localizer coordinate system LCLZ may include an origin and orientation, which may be defined by the position of the x, y, and z axes relative to the surgical workspace. The localizer coordinate system LCLZ may be fixed to and centered on the localizer 18. Specifically, a center point of the localizer 18 may define the origin of the localizer coordinate system LCLZ. The localizer data, which as described above may indicate the positions of objects relative to the localizer 18 determined using the localizer 18, may similarly indicate the positions of such objects in the localizer coordinate system LCLZ.

[0098] During the procedure, one goal is to keep the localizer coordinate system LCLZ in a known position. An accelerometer may be mounted to the localizer 18 to detect sudden or unexpected movements of the localizer coordinate system LCLZ, as may occur when the localizer 18 is inadvertently bumped by surgical personnel. Responsive to a detected movement of the localizer coordinate system LCLZ, the navigation controller 22 may be configured, such as via the surgical navigator 81, to present an alert to surgical personnel through the user interface 24, to halt surgical navigation, and/or to communicate a signal to the manipulator controller 50 that causes the manipulator controller 50 to halt movement of the surgical instrument 16 until the surgical system 10 is recalibrated.

[0099] Each object tracked by the surgical system 10 may also have its own coordinate system that is fixed to and centered on the object, and that is separate from the localizer coordinate system LCLZ. For instance, the trackers 34, 36, 38 may be fixed and centered within a bone tracker coordinate system BTRK1, bone tracker coordinate system BTRK2, and instrument tracker coordinate system TLTR respectively. The femur F of the patient may be fixed and centered within the femur coordinate system FBONE, and the tibia T of the patient may be fixed and centered within the tibia coordinate system TBONE. Prior to the surgical procedure, the pre-operative images and/or the virtual models for each tracked object, such as the femur F, tibia T, and surgical instrument 16, may be mapped to the object, such as by being mapped to and fixed within the coordinate system for the object in accordance with the fixed position of the object in the coordinate system.

[0100] During an initial phase of a surgical procedure, the trackers 34, 36 may be firmly affixed to the femur F and tibia T of the patient respectively. The position of coordinate systems FBONE and TBONE may then be mapped to the coordinate systems BTRK1 and BTRK2, respectively. For instance, a pointer instrument P (FIG. 1), such as disclosed in U.S. Patent No. 7,725,162 to Malackowski et al., hereby incorporated by reference, having its own tracker PT, may be used to register the femur coordinate system FBONE and tibia coordinate system TBONE to the bone tracker coordinate systems BTRK1 and BTRK2, respectively. The fixed positional relationship between the femur coordinate system FBONE and the bone tracker coordinate system BTRK1, and the fixed positional relationship between the tibia coordinate system TBONE and the bone tracker coordinate system BTRK2, may be stored on the navigation controller 22 as transformation data 83.

[0101] Given the fixed spatial relationships between the femur F and tibia T and their trackers 34, 36, the navigation controller 22, such as via the transformation engine 78, may transform the position of the femur F in the femur coordinate system FBONE to a position of the femur F in the bone tracker coordinate system BTRK1, and may transform the position of the tibia T in the tibia coordinate system TBONE to a position of the tibia T in the bone tracker coordinate system BTRK2. Thus, by determining the position of the trackers 34, 36 in the localization coordinate system LCLZ using the localizer 18, the navigation controller 22 may determine a position of the femur coordinate system FBONE and a position of the tibia coordinate system TBONE in the localization coordinate system LCLZ respectively, and may correspondingly determine a position of the femur F and tibia T in the localization coordinate system LCLZ respectively.

[0102] Similarly, the treatment end of the surgical instrument 16 may be fixed and centered within its own coordinate system EAPP. The origin of the coordinate system EAPP may be fixed to a centroid of a surgical cutting bur, for example. The position of coordinate system EAPP, and correspondingly of the treatment end of the surgical instrument 16, may be fixed within the instrument tracker coordinate system TLTR of the tracker 38 before the procedure begins. The fixed positional relationship between the coordinate system EAPP and the instrument tracker coordinate system TLTR may also be stored in the navigation controller 22 as transformation data 83. Thus, by determining the position of the instrument tracker coordinate system TLTR in the localization coordinate system LCLZ using the localizer 18, the navigation controller 22, such as via the transformation engine 78, may determine a position of the coordinate system EAPP in the localization coordinate system LCLZ based on the positional relationship between the instrument tracker coordinate system TLTR and the coordinate system EAPP, and may correspondingly determine a position of the treatment end of the surgical instrument 16 in the localization coordinate system LCLZ.

[0103] The vision device 40 may likewise be fixed and centered within vision coordinate system VIS. The origin of vision coordinate system VIS may represent a centroid of the vision device 40. Each actual depth map generated by the vision device 40, which as described above may indicate positions of exposed surfaces in the target site relative to the vision device 40, may similarly indicate the positions of the exposed surfaces in the coordinate system VIS.

[0104] When the vision device 40 is integrated with the localizer 18, such as illustrated in FIG. 1, the vision coordinate system VIS and localizer coordinate system LCLZ may be considered equivalent. In other words, a position of an object or coordinate system in the localizer coordinate system LCLZ may be so close or equal to the position of the object or coordinate system in the vision coordinate system VIS that no transformation is needed. Alternatively, because the vision coordinate system VIS may be fixed within the localizer coordinate system LCLZ when the vision device 40 is integrated with the localizer 18, and vice versa, the positional relationship between the vision coordinate system VIS and the localizer coordinate system LCLZ, and correspondingly between the vision device 40 and the localizer 18, may be determined during manufacture of the surgical system 10, and may be factory-stored in the navigation controller 22 as transformation data 83.

[0105] When the vision device 40 is separate from the localizer 18, the vision device 40 may include a tracker (not shown) rigidly mounted to the housing of the vision device 40 to establish the positional relationship between the vision coordinate system VIS and the localizer coordinate system LCLZ, and correspondingly between the vision device 40 and the localizer 18. The navigation controller 22 may be preloaded with the positional relationship between the tracker’s coordinate system and the vision coordinate system VIS as transformation data 83. Thus, by determining the position of the tracker’s coordinate system in the localization coordinate system LCLZ using the localizer 18, the navigation controller 22 may determine a position of the vision coordinate system VIS in the localization coordinate system LCLZ based on the stored positional relationship between the tracker’s coordinate system and the vision coordinate system VIS, and correspondingly, may determine the position of the vision device 40 in the localizer coordinate system LCLZ. Further correspondingly, the navigation controller 22 may determine the position of the vision device 40 relative to the localizer 18 in the localizer coordinate system LCLZ and the vision coordinate system VIS.

[0106] Alternatively, the navigation controller 22 may be configured to identify the positional relationship between the localizer coordinate system LCLZ and the vision coordinate system VIS based on a common light pattern inserted into the target site and detectable by both the localizer 18 and vision device 40. For example, after the localizer 18 and vision device 40 are positioned with a field of view of the target site, a pattern of light, such as non-visible light, may be projected onto the target site, which may reflect back the pattern of light to the localizer 18 and vision device 40. The navigation controller 22 may cause the light source 44 of the vision device 40 to project this light pattern into the target site, or a separate light projector (not shown) may be used to project the light pattern. As a further example, a tracker or other physical device, such as the pointer PT (FIG. 1), having markers configured to transmit a pattern of light detectable by the localizer 18 and the vision device 40 may be placed within the target site and in the fields of view of the localizer 18 and vision device 40.

[0107] The navigation controller 22, such as via the localization engine 76 and using the localizer 18, may be configured to generate localization data indicating the position of the light pattern in the localizer coordinate system LCLZ specific to the localizer 18. The navigation controller 22 may also receive a calibration depth map illustrating the light pattern from the vision device 40, and may be configured, such as via the transformation engine 78, to identify a position of the light pattern in the vision coordinate system VIS based on the calibration depth map. The navigation controller 22 may then be configured, such as via the transformation engine 78, to determine the positional relationship between the localization coordinate system LCLZ and the vision coordinate system VIS based on the determined position of the projected pattern in the localization coordinate system LCLZ and the vision coordinate system VIS, such as using a regression algorithm.
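
As one hedged example of how the regression mentioned above could be performed, a rigid transform between the two coordinate systems may be fit to the corresponding pattern points with a Kabsch/SVD solution. This is an illustrative choice of algorithm introduced here, not one specified by the application, and the function name and NumPy usage are assumptions.

```python
import numpy as np

def fit_rigid_transform(pts_vis, pts_lclz):
    """Return a 4x4 T such that T @ [p_vis, 1] approximates p_lclz for corresponding points."""
    ca, cb = pts_vis.mean(axis=0), pts_lclz.mean(axis=0)
    H = (pts_vis - ca).T @ (pts_lclz - cb)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cb - R @ ca
    return T
```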

[0108] FIG. 4 illustrates a method 100 for tracking objects in a surgical workspace and determining whether an object obstructs a surgical plan using tracker-based localization and machine vision. The method 100 may be performed by the surgical navigation system 12, or more particularly by the navigation controller 22.

[0109] In block 102, a positional relationship between the localizer 18 and the vision device 40 in a common coordinate system may be identified. Specifically, the navigation controller 22, such as via the transformation engine 78, may be configured to identify the positional relationship between the localizer 18 and the vision device 40, and correspondingly between the localizer coordinate system LCLZ and vision coordinate system VIS, using any of the methods described above. For example, a tracker may be fixed to the vision device 40, or a light pattern may be placed in the target site that is detectable by both the localizer 18 and the vision device 40. Alternatively, when the localizer 18 is integrated with or otherwise fixed relative to the vision device 40 during manufacture, the positional relationship may be determined and pre-stored as transformation data 83 in the navigation controller 22 during manufacture.

[0110] In block 104, a virtual model corresponding to one or more objects in the target site may be accessed, such as based on the transformation data 83. The transformation data 83 may indicate objects in the target site to which trackers, such as the trackers 34, 36, 38, are affixed. The navigation controller 22 may be configured to retrieve the virtual models for each of the objects affixed to a tracker. In some instances, one or more of these retrieved virtual models may also define a target volume to be treated during the surgical procedure.

[0111] FIG. 5 illustrates a target site 200 of a patient undergoing a knee replacement procedure. The target site 200 may include a portion of the patient’s femur F, which may contain a target volume 202 of bone tissue to be removed with a surgical instrument (e.g., the surgical instrument 16). The target site 200 may further include soft tissue adjacent the target volume 202, such as a ligament 204 and epidermal tissue 206. The target site 200 may also include surgical tools, such as retractors 208 positioned to retract the epidermal tissue 206 and provide access to the patient’s femur F. The target site 200 may additionally include a tracker 209 firmly affixed to the patient’s femur F. The navigation controller 22 may therefore retrieve a virtual model corresponding to the femur F from the model data 82, an example of which is illustrated in FIG. 6.

[0112] Referring again to FIG. 4, in block 106, the positions of objects affixed to trackers in the target site may be detected using the localizer 18. In particular, the localizer 18 may be configured to generate optical-based data indicating the position of each tracker coordinate system, and correspondingly the position of each tracker, in the localizer coordinate system LCLZ as described above. The navigation controller 22 may then be configured, such as via the localization engine 76, to identify positions of the coordinate systems of the objects affixed to the trackers, and correspondingly the positions of the objects, in the localizer coordinate system LCLZ, such as based on the detected positions of the trackers and their coordinate systems, and the positional relationships between the trackers and the objects indicated in the transformation data 83. Because the virtual model for each object affixed to a tracker may be mapped to the object’s coordinate system in the transformation data 83, the navigation controller 22 may similarly identify a position of the virtual model for each object affixed to a tracker in the localizer coordinate system LCLZ based on the detected position of the tracker to which the object is affixed and the positional relationship between the tracker and the object.

[0113] FIG. 6 illustrates a continuation of the target site 200 of FIG. 5 and shows a virtual model 210 that may correspond to the patient’s femur F, which may be affixed to the tracker 209 in the target site 200. The virtual model 210 may be positioned in the localizer coordinate system LCLZ according to the position of the tracker 209 and femur F in the localizer coordinate system LCLZ determined with the localizer 18. The virtual model 210 may define a virtual target volume 212 to be removed from the femur F during treatment, which may correspond to the target volume 202 shown in FIG. 5.

[0114] In block 108, an expected depth map may be generated, such as based on the accessed virtual models, the detected positions of the objects corresponding to the virtual models in the localizer coordinate system LCLZ, and the positional relationship between the localizer 18 and the vision device 40 in the common coordinate system. As previously described, the positions of the virtual models in the localizer coordinate system LCLZ may correspond to the positions of the objects in the localizer coordinate system LCLZ determined using the localizer 18. The navigation controller 22 may be configured, such as via the vision engine 80, to transform the positions of the virtual models in the localizer coordinate system LCLZ to positions of the virtual models in the vision coordinate system VIS based on the positional relationship between the localizer 18 and the vision device 40, and correspondingly between the localizer coordinate system LCLZ and the vision coordinate system VIS, in the common coordinate system.

[0115] Thereafter, the navigation controller 22 may generate an expected depth map based on the positions of the virtual models in the vision coordinate system VIS. As described above, a depth map generated by the vision device 40 may illustrate the position (e.g., depth and location) of exposed object surfaces in the target site relative to the vision device 40. The position of the virtual models in the vision coordinate system VIS may similarly indicate the position of object surfaces represented by the virtual models relative to the vision device 40, which may be fixed within the vision coordinate system VIS. Accordingly, the navigation controller 22 may be configured, such as via the vision engine 80, to simulate a depth map expected to be generated by the vision device 40 with a field of view of the target site based on the determined positions of the virtual models in the vision coordinate system VIS, assuming the target site is free of any other objects.
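
The simulation of the expected depth map can be sketched, in simplified form, as projecting sampled surface points of the virtual models (already expressed in the vision coordinate system VIS) into the image frame and keeping the nearest depth per image component; a complete implementation would rasterize the model meshes. The pinhole intrinsics and image dimensions below are assumptions introduced for illustration.

```python
import numpy as np

def expected_depth_map(points_vis, fx, fy, cx, cy, h, w):
    """Point-based z-buffer: points_vis is an Nx3 array of model surface points in VIS."""
    depth = np.full((h, w), np.inf)
    x, y, z = points_vis[:, 0], points_vis[:, 1], points_vis[:, 2]
    valid = z > 0                                    # only points in front of the device
    u = np.round(fx * x[valid] / z[valid] + cx).astype(int)
    v = np.round(fy * y[valid] / z[valid] + cy).astype(int)
    zv = z[valid]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], zv[inside]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi                       # keep the closest surface point
    return depth
```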

[0116] FIG. 7 illustrates an expected depth map that may be generated based on the virtual model 210 of FIG. 6. The expected depth map of FIG. 7 may simulate the depth map expected to be generated by the vision device 40 of patient’s femur F according to the transformed position of the virtual model 210 in the vision coordinate system VIS, assuming other objects, such as the ligament 204, epidermal tissue 206, retractors 208, and tracker 209 are absent from the target site. The expected depth map of FIG. 7 may also have been cropped to a region of interest, discussed in more detail below.

[0117] In block 110, an actual depth map captured by the vision device 40 may be received. In particular, contemporaneously with the localizer 18 generating localizer data indicating the positions of objects affixed to trackers in the target site in block 106, the vision device 40 may generate a depth map of the target site as described above. In this way, the depth map may be temporally interleaved with the localization data, and may also be temporally interleaved with the estimated depth map generated based on the localization data. In other words, the actual depth map and the expected depth map may both represent the target site at a substantially same moment in time during a surgical procedure.

[0118] FIG. 8 illustrates a depth map that may be generated by the vision device 40 with a field of view of the target site 200 depicted in FIG. 5, with the tracker 209 removed for simplicity. The depth map may include several image components, akin to pixels, arranged as a matrix and forming the image frame of the depth map. A box 214 has been artificially placed on the illustrated depth map to highlight an example one of the image components. The location of each image component in the depth map image frame may represent a horizontal and vertical distance from a center viewpoint of the vision device 40, and the brightness of each image component may correspond to the distance of the object surface point represented by the image component from the vision device 40. In the illustrated example, brighter image components represent surface points in the target site that are closer to the vision device 40, and darker image components represent surface points in the target site that are farther from the vision device 40.

[0119] In block 112, the actual depth map may be cropped to a region of interest (ROI) for the surgical procedure, such as based on the virtual models accessed in block 104, the detected positions of the objects corresponding to the virtual models in the localizer coordinate system LCLZ, and the positional relationship between the localizer 18 and vision device 40 in the common coordinate system. As explained in further detail below, the actual depth map may be compared with the expected depth map to identify objects in the target site and to determine whether any such objects may obstruct treatment of a target volume in the target site. The larger the dimensions of the actual depth map and the expected depth map being compared, the greater the amount of computation involved in the comparison. The navigation controller 22, such as via the vision engine 80, may thus be configured to crop the actual depth map to an ROI based on the positions of the virtual models in the vision coordinate system VIS to reduce the dimensions of the compared depth images. As described above, the position of the virtual models in the vision coordinate system VIS may be determined based on the determined position of the objects in the localizer coordinate system LCLZ and the positional relationship between the localizer 18 and the vision device 40 in the common coordinate system.

[0120] For example, the virtual models accessed in block 104 may define a target volume to be treated during the surgical procedure. The position of the virtual models in the vision coordinate system VIS may thus indicate the position of the target volume in the vision coordinate system VIS, and correspondingly, may indicate the position of the target volume in the actual depth map generated by the vision device 40. The navigation controller 22, such as via the vision engine 80, may be configured to crop the actual depth map to remove any areas greater than a threshold distance from the position of the target volume in the vision coordinate system VIS. Additionally or alternatively, the navigation controller 22, such as via the vision engine 80, may be configured to center a user-selected or procedure-specific shape on the position of the target volume in the actual depth map, and to remove any areas of the actual depth map outside of the shape. The navigation controller 22 may be configured to limit the dimensions and shape of the expected depth map to the dimensions and shape of the cropped actual depth map, such as during or after calculation of the expected depth map.
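
A minimal sketch of the cropping step, assuming the target volume's position has already been projected to pixel coordinates (u0, v0) in the actual depth map and that a square, user-selected window is used; the window size and helper name are hypothetical.

```python
def crop_to_roi(depth, u0, v0, half_width_px):
    """Return the square window of the depth map centered on the target volume."""
    h, w = depth.shape
    r0, r1 = max(v0 - half_width_px, 0), min(v0 + half_width_px, h)
    c0, c1 = max(u0 - half_width_px, 0), min(u0 + half_width_px, w)
    return depth[r0:r1, c0:c1]
```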

[0121] FIG. 9 illustrates the actual depth map of FIG. 8 cropped to an ROI based on the virtual model 210 corresponding to the femur F in FIG. 6. The virtual model 210 may define a virtual target volume 212 corresponding to the target volume 202 to be treated during the surgical procedure. The navigation controller 22 may crop the actual depth map of FIG. 8 based on the determined position of the target volume in the vision coordinate system VIS, which may indicate the position of the target volume in the actual depth map, such as by centering a selected shape on the target volume in the depth map and removing areas of the depth map outside of the shape. The expected depth map of FIG. 7 is similarly limited to the dimensions and shape of the cropped actual depth map.

[0122] In blocks 114 and 116, a portion of the actual depth map that fails to match the expected depth map may be identified. Specifically, in block 114, the actual depth map may be compared with the expected depth map, such as by computing a difference between the actual depth map and the expected depth map. The navigation controller 22 may be configured, such as via the vision engine 80, to compute the difference between the expected depth map and the actual depth map by computing a difference between the depths at each corresponding pair of image components in the expected depth map and the actual depth map. A corresponding pair of image components in the expected depth map and the actual depth map may include the image component of each depth map at a same horizontal and vertical location. Assuming the actual depth map has been cropped to an ROI in block 112, the depth maps compared in block 114 may be the cropped depth maps.
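
In sketch form, the comparison of block 114 reduces to an element-wise difference between corresponding image components of the two (cropped) depth maps; the helper below is illustrative only and assumes both maps share the same image frame.

```python
import numpy as np

def difference_depth_map(actual, expected):
    """Per-image-component depth difference; zero where the two maps agree.

    Image components where the expected map has no model surface (e.g., inf from a
    point-based render) naturally produce large differences and are flagged later.
    """
    return actual - expected
```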

[0123] The difference between the actual depth map and the estimated depth map may indicate objects in the target site that are not already identified and tracked, such as objects (e.g., soft tissue, a surgeon’s hand) that are not or cannot be adequately tracked using an affixed tracker. The difference may be represented by a difference depth map, with each image component of the difference depth map indicating the depth difference computed for corresponding image components located in the actual and expected depth maps at a same horizontal and vertical position as the image component in the difference depth map. FIG. 10 illustrates a difference depth map computed between the actual depth map of FIG. 9 and the expected depth map of FIG. 7.

[0124] Corresponding image components from the actual depth map and expected depth map that indicate a same depth will result in a zero depth difference, and may correspond to objects previously identified and tracked, such as using trackers and the localizer 18. A zero depth difference may be represented in a depth map for the difference by image components with a maximum brightness or a color and/or hue specific to zero depth. In the difference depth map of FIG. 10, the areas 216, 218, and 220 represent corresponding image components from the actual depth map and expected depth map with a zero depth difference.

[0125] Corresponding image components of the actual depth map and the expected depth map that do not indicate a same depth will result in a non-zero depth difference, and may correspond to objects not previously identified and tracked, such as using the trackers and localizer 18. A non-zero depth difference may be represented in a depth map for the difference with image components of a brightness that is less than maximum brightness, or a color that differs from the color and/or hue specific to zero depth. In the difference depth map of FIG. 10, the darker areas adjacent the areas 216, 218, and 220 represent corresponding image components of the actual depth map and expected depth map with a non-zero depth difference.

[0126] In block 116, the computed difference may be filtered based on one or more object thresholds. The object thresholds may be designed to differentiate between non-zero differences that are due to noise or inconsequential calibration inaccuracies and non-zero differences that are due to the presence of additional objects in the target site. The object thresholds may include without limitation a threshold depth and/or a minimum size threshold, each of which may be non-zero.

[0127] As an example, for each of one or more non-zero sections of the difference depth map, the navigation controller 22 may be configured to determine whether the non-zero section indicates an absolute depth greater than the depth threshold. Specifically, the difference depth map may include one or more non-zero sections, each of the non-zero sections including a set of contiguous image components that each indicate a non-zero depth difference. A non-zero section of the difference depth map may be considered to have an absolute depth greater than the depth threshold if the magnitude (without reference to sign) of the non-zero depth difference indicated by each image component in the non-zero section is greater than the depth threshold. Responsive to determining that a non-zero section of the difference indicates an absolute depth greater than the threshold depth, the navigation controller 22 may be configured to identify as a portion of the actual depth map that fails to match the estimated depth map the section of the actual depth map that corresponds to the non-zero section of the difference, such as by virtue of the section of the actual depth map being at a same horizontal and vertical position in the actual depth map as the non-zero section in the difference depth map.

[0128] As a further example, for each non-zero section of the difference, the navigation controller 22 may be configured to determine whether a size (e.g., area) of the non-zero section is greater than the minimum size threshold. Responsive to determining that the size of a non-zero section is greater than the minimum size threshold, the navigation controller 22 may be configured, such as via the vision engine 80, to identify as a portion of the actual depth map that fails to match the expected depth map the section of the actual depth map that corresponds to the non-zero section of the difference, such as by virtue of the section of the actual depth map being at a same horizontal and vertical position in the actual depth map as the non-zero section in the difference depth map.

[0129] In another example, the navigation controller 22 may be configured to identify as a portion of the actual depth map that fails to match the estimated depth map a section of the actual depth map that corresponds to a non-zero section of the difference responsive to determining that the size of the non-zero section is greater than the minimum size threshold and the non-zero section indicates an absolute depth greater than the threshold depth.
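
One illustrative way to implement the filtering of block 116 and the checks described in the preceding paragraphs is to threshold the difference depth map per image component, label the resulting non-zero sections as connected components, and keep only sections exceeding a minimum size. The use of scipy.ndimage and the specific threshold values are assumptions introduced here, not requirements of the application.

```python
import numpy as np
from scipy import ndimage

def unmatched_mask(diff, depth_threshold_m=0.005, min_size_px=50):
    """Boolean mask of actual-depth-map portions that fail to match the expected map."""
    nonzero = np.abs(diff) > depth_threshold_m        # apply the depth threshold per component
    labels, n = ndimage.label(nonzero)                # contiguous non-zero sections
    keep = np.zeros_like(nonzero)
    for i in range(1, n + 1):
        section = labels == i
        if section.sum() >= min_size_px:              # apply the minimum size threshold
            keep |= section
    return keep
```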

[0130] In block 118, a determination may be made of whether objects are present in the target site based on the filtered difference. Specifically, the navigation controller 22, such as via the vision engine 80, may be configured to determine whether any portions of the actual depth map that do not match the expected depth map were identified. If not (“No” branch of block 118), then the method 100 may return to block 106 to again detect the position of tracker-affixed objects using the localizer 18. If so (“Yes” branch of block 118), then the method 100 may proceed to block 120 to recognize objects in the target site by applying machine vision techniques to the portion of the actual depth map that fails to match the expected depth map.

[0131] In block 120, the navigation controller 22, such as via the vision engine 80, may be configured to apply machine vision techniques to the identified portion of the actual depth map that fails to match the expected depth map to recognize objects in the target site from the identified portion. For instance and without limitation, the navigation controller 22 may be configured to utilize pattern recognition, edge detection, color recognition, wavelength analysis, image component intensity analysis (e.g., pixel or voxel intensity analysis), depth analysis, and metrics generated through machine learning to segment between objects represented in the identified portion of the actual depth map. As some examples, areas of the identified portion separated by edges, having different regular patterns, having different color palettes, and/or indicating different depth ranges may correspond to different objects. As further examples, the surfaces of different objects (e.g., different tissues) in the target site may produce different wavelengths and/or different intensities in the signals reflected to and detected by the vision device 40. The vision device 40 may be configured to output such information for each image component of the actual depth map, and based on this information, the navigation controller 22 may be configured to segment different objects in the identified portion based on varying wavelengths and/or signal intensities occurring across the identified portion of the actual depth map. If the navigation controller 22 is unable to discover multiple objects in the identified portion using machine vision, then the navigation controller 22 may be configured to consider the entire identified portion as a single object in the target site.
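
As one non-limiting example of the depth analysis mentioned above, large depth discontinuities in the actual depth map can be treated as object edges, and the regions of the unmatched portion between those edges labeled as candidate objects. The gradient threshold, helper names, and use of scipy.ndimage are hypothetical choices for this sketch.

```python
import numpy as np
from scipy import ndimage

def segment_by_depth(actual_depth, unmatched, edge_threshold_m=0.01):
    """Label candidate objects inside the unmatched portion of the depth map."""
    gy, gx = np.gradient(actual_depth)               # depth change per image component
    edges = np.hypot(gx, gy) > edge_threshold_m      # large discontinuities act as edges
    labels, n_objects = ndimage.label(unmatched & ~edges)
    return labels, n_objects
```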

[0132] In addition or alternatively, the navigation controller 22 may be configured, such as via the vision engine 80, to identify objects in the identified portion of the actual depth map, such as based on the model data 82 stored in the navigation controller 22. Identification may differ from segmentation in that identification may identify a label for each object represented in the actual depth map describing the type of object, such as identifying the object as a ligament, retractor, epidermal tissue, and so on. Identification of each object in the target site may enable the navigation controller 22, such as via the vision engine 80, to model the entire object as opposed to just a surface of the object, and to better predict movement and other reactions of the object during the surgical procedure, which may enable the surgical navigator 81 of the navigation controller 22 to make increasingly informed navigation decisions.

[0133] As described above, the model data 82 stored in the navigation controller 22 may define three-dimensional models corresponding to objects potentially present in the target site. The model data 82 may also define predetermined profiles for various objects potentially present in the target site, each profile setting forth one or more features specific to the object that aid the navigation controller 22 in identifying the object from the actual depth map. For example, a profile for a given object may include, without limitation, one or more of a color palette, wavelength range, signal intensity range, distance or depth range, area, volume, shape, polarization, and deep metrics output from a learned or statistical model corresponding to the object. The profile for a given object may also include a three-dimensional model for the object, such as one generated from patient scans described above.

[0134] The navigation controller 22 may thus be configured to identify an object based on the identified portion of the actual depth map that fails to match the estimated depth map by matching at least part of the identified portion with one of the predefined profiles, namely, the predefined profile corresponding to the object. The navigation controller 22 may then be configured to label the at least part of the identified portion of the actual depth map as the specific object corresponding to the profile, which may then be considered adjacent to the localized objects.

[0135] In an alternative example, a user may interact with the user interface 24 to manually select an object of the identified portion segmented by the navigation controller 22, and/or to select a predefined profile for the selected object. A user may also interact with the user interface 24 to manually trace an object represented by the actual depth map, such as in the identified portion, and/or to select a predefined profile for the traced object. The navigation controller 22 may then be configured to label the selected segmented or traced object with the label corresponding to the selected predefined profile, and to track the selected or traced object accordingly.

[0136] In block 122, a position of each object recognized from the actual depth map may be determined in a common coordinate system with the localized objects, such as the vision coordinate system VIS or the localizer coordinate system LCLZ. For example, the navigation controller 22, such as via the vision engine 80, may be configured to determine the position of each object recognized from the depth map and of each localized object in the common coordinate system relative to a target volume, which may be defined by the localized objects, so that the navigation controller 22 may determine whether any of the recognized objects and/or localized objects pose an obstacle to treating the target volume.

[0137] The navigation controller 22 may be configured to determine the position of each recognized object relative to the localized objects based on the detected locations of the localized objects in the localizer coordinate system LCLZ using the localizer 18, a location of the recognized object in the actual depth map, and the positional relationship between the localizer 18 and the vision device 40 in the common coordinate system, which may be defined by the transformation data 83 stored in the navigation controller 22. As previously described, the position of the recognized object in the actual depth map may indicate the position of the recognized object in the vision coordinate system VIS. For instance, each image component of the actual depth map that forms the recognized object may represent a vector from a center viewpoint of the vision device 40 to a position in the vision coordinate system VIS. The position of each image component in the image frame of the actual depth map may indicate the horizontal and vertical components of the vector, and the depth indicated by each image component may represent the depth component of the vector.

[0138] The navigation controller 22 may thus be configured to determine the position of each recognized object in the vision coordinate system VIS based on the position of the object in the actual depth map, and may then be configured to determine the position of each recognized object relative to the localized objects in a common coordinate system using the positional relationship between the localizer 18 and the vision device 40, the position of each recognized object in the vision coordinate system VIS, and/or the position of each localized object in the localizer coordinate system LCLZ.
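For example, if the positional relationship defined by the transformation data 83 is stored as a rigid transform, mapping points from VIS into the common coordinate system might look like the following sketch; the 4x4 homogeneous matrix and the variable names are assumptions for illustration.

import numpy as np

def vis_to_common(T_common_from_vis, points_vis):
    # T_common_from_vis: 4x4 homogeneous rigid transform standing in for the positional
    # relationship between the localizer 18 and the vision device 40.
    # points_vis: (N, 3) points in the vision coordinate system VIS.
    homogeneous = np.hstack([points_vis, np.ones((points_vis.shape[0], 1))])
    return (T_common_from_vis @ homogeneous.T).T[:, :3]

# Usage sketch: express a recognized object relative to a localized object.
# object_common = vis_to_common(T_common_from_vis, object_points_vis)
# offset = object_common.mean(axis=0) - localized_object_position_common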

[0139] In block 124, for each tracked object, including objects recognized from the actual depth map and objects localized with the localizer 18, a virtual boundary corresponding to the object may be generated in the common coordinate system, such as based on the determined position of the object in the common coordinate system. In particular, the navigation controller 22, such as via the vision engine 80, may be configured to generate the virtual boundaries in the common coordinate system to provide a constraint on motion of a surgical tool, such as the surgical instrument 16. To this end, the navigation controller 22 may also be configured to track movement of the surgical instrument 16 in the common coordinate system, such as with the localizer 18. The virtual boundaries generated by the navigation controller 22 may define areas of the common coordinate system that the surgical instrument 16 should not travel into or near, as the space may be occupied by other objects including sensitive anatomical structures and other surgical tools.

[0140] For example, the navigation controller 22 may be configured to insert in the common coordinate system the three-dimensional virtual model stored for each localized object in accordance with the determined position of the localized object in the common coordinate system. When the model data 82 stored in the navigation controller 22 defines a three-dimensional virtual model for a given object recognized from the identified portion of the actual depth map, the navigation controller 22 may be configured to insert the three-dimensional virtual model into the common coordinate system in accordance with the determined position of the given recognized object in the common coordinate system. Additionally or alternatively, the model data 82 may indicate one or more primitive geometric shapes (e.g., spheres, cylinders, boxes) for a given object recognized from an identified portion of the actual depth map. In this case, the navigation controller 22 may be configured to size and/or arrange the indicated primitive geometric shapes based on the surface topography of the object indicated by the actual depth map, and to insert the sized and/or arranged primitive geometric shapes into the common coordinate system in accordance with the determined position of the given object in the common coordinate system. Additionally or alternatively, such as when no virtual model or primitive geometric shapes are indicated for a given object recognized from the identified portion of the actual depth map, the navigation controller 22 may be configured to construct a mesh boundary based on the surface topography of the object indicated in the actual depth map, and to insert the mesh boundary into the common coordinate system in accordance with the determined position of the given object in the common coordinate system.
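By way of a rough sketch, sizing a primitive boundary or constructing a fallback surface from the object's observed topography might look like the following; the safety margin, and the use of a convex hull as a stand-in for a full surface-reconstruction step, are assumptions.

import numpy as np
from scipy.spatial import ConvexHull

def bounding_sphere(points_common, margin_mm=5.0):
    # Size a primitive spherical boundary from an object's surface points (N, 3)
    # in the common coordinate system, padded by a safety margin.
    centre = points_common.mean(axis=0)
    radius = np.linalg.norm(points_common - centre, axis=1).max() + margin_mm
    return centre, radius

def mesh_boundary(points_common):
    # Fallback when no virtual model or primitive shape is indicated: build a convex
    # surface over the observed topography and return its triangles as an (F, 3, 3) array.
    hull = ConvexHull(points_common)
    return points_common[hull.simplices]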

[0141] As a further example, in addition or alternatively to one or more of the above techniques, the navigation controller 22 may be configured to approximate a boundary for a given object in the common coordinate system by inserting force particles in the common coordinate system in accordance with the determined position of the given object in the common coordinate system. Specifically, the navigation controller 22 may be configured to select various points on the surface of the recognized object, and to place force particles in the common coordinate system at the determined positions of the various points in the common coordinate system. Each of the force particles may be configured to repel other objects that move near the force particle in the common coordinate system, such as by coming within a predetermined distance. Thus, during tracked movement of the surgical instrument 16 in the common coordinate system, the force particles may repel the surgical instrument 16, thereby preventing the surgical instrument 16 from colliding with the object represented by the force particles. Inserting force particles into the common coordinate system that correspond to various points on a recognized object’s surface rather than a virtual boundary representing the recognized object’s entire surface may result in generation of a virtual boundary for the object using relatively reduced processing bandwidth and less data.
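A minimal sketch of the force-particle idea follows: particles are sampled from the recognized object's surface points, and any particle within a predetermined influence distance of the tracked tool contributes a repulsive term. The particle count, influence radius, and the inverse-distance force law are assumptions, not details from the disclosure.

import numpy as np

def place_force_particles(surface_points_common, n_particles=64):
    # Select a subset of points on the recognized object's surface to serve as force
    # particles, trading boundary fidelity for reduced data and processing.
    idx = np.linspace(0, len(surface_points_common) - 1, n_particles).astype(int)
    return surface_points_common[idx]

def repulsive_force(tool_position, particles, influence_mm=20.0, gain=1.0):
    # Sum a simple repulsion from every particle within the influence radius,
    # pushing the tracked tool away from the represented object.
    offsets = tool_position - particles                  # vectors from each particle to the tool
    dists = np.linalg.norm(offsets, axis=1)
    near = (dists < influence_mm) & (dists > 1e-6)
    force = np.zeros(3)
    for d, v in zip(dists[near], offsets[near]):
        force += gain * (influence_mm - d) / influence_mm * (v / d)
    return force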

[0142] As examples, FIGS. 11 to 13 illustrate virtual boundaries in the common coordinate system that correspond to objects in the target site 200 (FIG. 5) that may have been recognized from an identified portion of the actual depth map of FIG. 9 that fails to match the expected depth map of FIG. 7 based on the difference illustrated in FIG. 10. Specifically, FIG. 11 illustrates in the common coordinate system a retractor virtual model corresponding to the retractors 208 in the target site 200 and depicted in the actual depth map, FIG. 12 illustrates in the common coordinate system a ligament virtual model corresponding to the ligament 204 in the target site 200 and depicted in the actual depth map, and FIG. 13 illustrates in the common coordinate system an epidermal tissue virtual model corresponding to the epidermal tissue 206 in the target site 200 and depicted in the actual depth map. As alternative examples, a virtual boundary for the ligament 204 in the target site 200 may be in the form of a primitive geometric object, such as a cylinder, inserted in the common coordinate system in accordance with the determined position of the ligament 204 in the common coordinate system, and a virtual boundary for the epidermal tissue 206 in the target site 200 may be in the form of a mesh surface or force particles inserted in the common coordinate system at the determined position of the epidermal tissue 206 in the common coordinate system.

[0143] FIG. 14 illustrates the relative positions of the objects recognized from the actual depth map and the patient’s femur F as localized with the localizer 18 in the common coordinate system. In particular, the illustration includes the virtual models of FIGS. 11-13 and the virtual model 210 of the patient’s femur F illustrated in FIG. 6. During a surgical procedure, the navigation controller 22, such as via the surgical navigator 81, may be configured to display the illustration of FIG. 14 along with an image or virtual model for the surgical instrument 16 at the current position of the surgical instrument 16 in the common coordinate system, such as tracked with the localizer 18, to aid a surgeon in guiding the surgical instrument 16 to the target volume 202.

[0144] In block 126, a determination may be made of whether a potential obstacle is present in the target site based on the tracked objects and/or the surgical plan 84. Specifically, the navigation controller 22, such as via the surgical navigator 81, may be configured to determine whether one of the tracked objects, such as the objects recognized from the actual depth map, is an obstacle to the surgical plan 84 based on the position of the object relative to the target volume in the common coordinate system and the surgical plan 84. For instance, the surgical plan 84 may define a planned trajectory of the surgical instrument 16 through the common coordinate system to treat the target volume. If the planned trajectory causes a collision with one of the virtual boundaries for the tracked objects, the navigation controller 22 may be configured to determine that an obstacle exists.
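One simple way to sketch such a collision test is to sample the planned trajectory and measure each sample's distance to a tracked object's boundary points (for instance, its force particles or mesh vertices). The waypoint representation, clearance distance, and step size below are assumptions for illustration.

import numpy as np

def trajectory_collides(waypoints_common, boundary_points_common, clearance_mm=10.0, step_mm=1.0):
    # waypoints_common: ordered (K, 3) waypoints of the planned trajectory in the common
    # coordinate system; boundary_points_common: (N, 3) points on a tracked object's boundary.
    for a, b in zip(waypoints_common[:-1], waypoints_common[1:]):
        n_steps = max(2, int(np.linalg.norm(b - a) / step_mm))
        for t in np.linspace(0.0, 1.0, n_steps):
            sample = (1.0 - t) * a + t * b
            if np.min(np.linalg.norm(boundary_points_common - sample, axis=1)) < clearance_mm:
                return True   # the planned trajectory would breach this virtual boundary
    return False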

[0145] Responsive to determining that an obstacle is present (“Yes” branch of block 126), in block 128, a remedial action may be triggered. The navigation controller 22, such as via the surgical navigator 81, may be configured to trigger the remedial action by performing one or more of several available actions. As an example, responsive to determining that an object is an obstacle to the surgical plan 84, the navigation controller 22 may be configured to alter the surgical plan 84 to avoid the obstacle. For instance, the navigation controller 22 may be configured to alter the trajectory of the surgical instrument 16 to avoid the obstacle, and to transmit the altered surgical plan 84 to the manipulator controller 50 for implementation. As another example, the navigation controller 22 may be configured to halt surgical guidance provided by the surgical navigation system 12 and movement of the robotic manipulator 14 until the obstacle is cleared, as detected by the navigation controller 22. The navigation controller 22 may also be configured to trigger an alarm and/or notification of the obstacle via the user interface 24 of the surgical navigation system 12. As a further example, when the object causing the obstacle is identified as soft tissue, the navigation controller 22 may be configured to provide soft tissue guidance via the user interface 24. For instance, the navigation controller 22 may be configured to illustrate a position of the soft tissue object causing the obstacle relative to other objects in the target site, and to provide a suggestion for moving the soft tissue to clear the obstacle. The navigation controller 22 may be configured to continue monitoring the position of the soft tissue in the common coordinate system while providing the soft tissue guidance, and to provide a notification to the user when the obstacle threat is cleared.

[0146] Following the triggering of a remedial action and/or the clearing of the obstacle (block 128), or responsive to an obstacle not being identified (“No” branch of block 126), in block 130, movement of objects recognized from the actual depth map may be tracked using the vision device 40. Specifically, the navigation controller 22, such as via the vision engine 80, may be configured to track movement of each recognized object by being configured to monitor a state of the portion of the actual depth map corresponding to the recognized object in additional actual depth maps subsequently generated by the vision device 40. By focusing on changes to the portion of the actual depth map previously determined to correspond to the recognized object in subsequently generated depth maps, as opposed to generating an expected depth map for each subsequently generated actual depth map, computing a difference between the expected depth map and the subsequent actual depth map, and matching a stored profile to the difference, the navigation controller 22 may be able to monitor movement of the recognized object with increased speed going forward.

[0147] More particularly, each portion of the actual depth map corresponding to a recognized object may depict an arrangement of features specific to the object and located in a specific position of the actual depth map. For example and without limitation, the arrangement of features may be an arrangement of vertices having a geometric relationship specific to the object, an arrangement of edges or lines having a geometric relationship specific to the object, or an arrangement of depths having a relative and geometric relationship specific to the object. Furthermore, the spatial relationship between the arrangement of features of the object and the rest of the object may be fixed.

[0148] The navigation controller 22 may thus be configured to monitor for movement of an object recognized from the actual depth map by monitoring whether the arrangement of features specific to the object in the actual depth map moves to a position in the additional depth map that differs from the position of the arrangement in the actual depth map. If so, then the navigation controller 22 may be configured to determine a new position of the object in the common coordinate system based on the new position of the arrangement of features corresponding to the object in the additional depth map, and to update the virtual boundary associated with the object in the common coordinate system accordingly. The arrangement of features for monitoring movement of a given object may be indicated in the model data 82 for the object, or may be set manually by a user by selecting points in the portion of the actual depth map corresponding to the object using the user interface 24.
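A rough way to sketch this monitoring step is template matching: the depth patch containing the object's arrangement of features is searched for in the subsequently generated depth map, and a shift in its best-matching location indicates movement. The use of OpenCV's normalized cross-correlation, and the bounding-box representation of the feature arrangement, are assumptions for illustration.

import numpy as np
import cv2  # OpenCV, used here only for template matching

def track_feature_arrangement(prev_depth, next_depth, prev_bbox):
    # prev_bbox is (row, col, height, width) of the feature arrangement in the previous
    # actual depth map; returns the best-matching (row, col) in the additional depth map
    # together with a correlation score.
    r, c, h, w = prev_bbox
    template = prev_depth[r:r + h, c:c + w].astype(np.float32)
    result = cv2.matchTemplate(next_depth.astype(np.float32), template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (best_c, best_r) = cv2.minMaxLoc(result)
    return (best_r, best_c), score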

[0149] For example, FIG. 15 illustrates an additional actual depth map subsequently generated by the vision device 40 after the actual depth map illustrated in FIG. 9 is generated. The position of the portion of the additional depth map in FIG. 15 representing the retractors 208 (FIG. 5) differs from the position of this portion in the depth map of FIG. 9, indicating the retractors 208 have moved. The navigation controller 22 may be configured to track such movement of the retractors 208 by monitoring for a changed position in the additional depth map relative to the previous actual depth map of a specific arrangement of features that is in the portion representing the retractors 208 and is positionally fixed relative to the rest of the retractors 208. For instance, the navigation controller 22 may monitor the additional depth map for a change in position of the arrangement of vertices 222 between the heads and bodies of the retractors 208.

[0150] Responsive to determining a change in position of the arrangement of vertices 222, the navigation controller 22 may be configured to determine an updated position of the retractors 208 in the common coordinate system based on the updated position of the arrangement of vertices 222 in the additional depth map of FIG. 15 and the fixed positional relationship between the arrangement of vertices 222 and the rest of the retractors 208. The navigation controller 22 may then be configured to adjust the virtual boundary associated with the retractors 208 in the common coordinate system based on the updated position. FIG. 16 illustrates an updated position of the virtual boundary for the retractors 208, namely the virtual model corresponding to the retractors 208, in the common coordinate system according to the new position of the arrangement of vertices 222 depicted in the additional depth map of FIG. 15.

[0151] Disclosed herein are systems and methods for tracking objects in a surgical workspace using a combination of machine vision and tracker-based localization. Due to the flexible nature of soft tissues such as muscle, skin, and ligaments, tracker-based localization is usually not adequate for tracking soft tissues. Accordingly, in addition to detecting the position of rigid objects in a surgical workspace using tracker-based localization, a surgical navigation system may include a vision device configured to generate a depth map of exposed surfaces in the surgical workspace. The surgical navigation system may further be configured to generate an expected depth map of the vision device based on a detected position of an object in the target site using localization, a virtual model corresponding to the object, and a positional relationship between the localizer and the vision device in a common coordinate system. The surgical navigation system may then be configured to identify a portion of the actual depth map that fails to match the expected depth map, and to recognize objects, including soft tissues, in the target site based on the identified portion. The surgical navigation system may then be configured to determine whether the objects pose obstacles to a current surgical plan.

[0152] In general, the routines executed to implement the embodiments of the invention, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, may be referred to herein as "computer program code," or simply "program code." Program code typically comprises computer readable instructions that are resident at various times in various memory and storage devices in a computer and that, when read and executed by one or more processors in a computer, cause that computer to perform the operations necessary to execute operations and/or elements embodying the various aspects of the embodiments of the invention. Computer readable program instructions for carrying out operations of the embodiments of the invention may be, for example, assembly language or either source code or object code written in any combination of one or more programming languages.

[0153] Various program code described herein may be identified based upon the application within which it is implemented in specific embodiments of the invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Furthermore, given the generally endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, APIs, applications, applets, etc.), it should be appreciated that the embodiments of the invention are not limited to the specific organization and allocation of program functionality described herein.

[0154] The program code embodied in any of the applications/modules described herein is capable of being individually or collectively distributed as a program product in a variety of different forms. In particular, the program code may be distributed using a computer readable storage medium having computer readable program instructions thereon for causing a processor to carry out aspects of the embodiments of the invention.

[0155] Computer readable storage media, which is inherently non-transitory, may include volatile and non-volatile, and removable and non-removable tangible media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer readable storage media may further include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid state memory technology, portable compact disc read-only memory (CD-ROM), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and which can be read by a computer. A computer readable storage medium should not be construed as transitory signals per se (e.g., radio waves or other propagating electromagnetic waves, electromagnetic waves propagating through a transmission media such as a waveguide, or electrical signals transmitted through a wire). Computer readable program instructions may be downloaded to a computer, another type of programmable data processing apparatus, or another device from a computer readable storage medium or to an external computer or external storage device via a network.

[0156] Computer readable program instructions stored in a computer readable medium may be used to direct a computer, other types of programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions that implement the functions, acts, and/or operations specified in the flowcharts, sequence diagrams, and/or block diagrams. The computer program instructions may be provided to one or more processors of a general purpose computer, a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the one or more processors, cause a series of computations to be performed to implement the functions, acts, and/or operations specified in the flowcharts, sequence diagrams, and/or block diagrams.

[0157] In certain alternative embodiments, the functions, acts, and/or operations specified in the flowcharts, sequence diagrams, and/or block diagrams may be re-ordered, processed serially, and/or processed concurrently consistent with embodiments of the invention. Moreover, any of the flowcharts, sequence diagrams, and/or block diagrams may include more or fewer blocks than those illustrated consistent with embodiments of the invention.

[0158] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, to the extent that the terms "includes", "having", "has", "with", "comprised of", or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".

[0159] While the invention has been illustrated by a description of various embodiments and while these embodiments have been described in considerable detail, it is not the intention of the Applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departing from the spirit or scope of the Applicant's general inventive concept.