


Title:
COMPUTER AND COMPUTER-IMPLEMENTED METHOD FOR SUPPORTING LAPAROSCOPIC SURGERY
Document Type and Number:
WIPO Patent Application WO/2016/042297
Kind Code:
A1
Abstract:
A computer and computer-implemented method are provided for supporting laparoscopic surgery. The method comprises providing a 3-dimensional model of an anatomical structure of the subject of the laparoscopic surgery; obtaining from a stereo laparoscope corresponding stereo pairs of images of the anatomical structure of the subject; processing the corresponding stereo pairs of images of the anatomical structure to generate a topographical representation of the anatomical structure; determining a registration between the 3-dimensional model of the anatomical structure and the topographical representation of the anatomical structure; and using the registration to determine a position of the laparoscope with respect to the 3-dimensional model.

Inventors:
THOMPSON STEVE (GB)
HAWKES DAVID (GB)
CLARKSON MATT (GB)
TOTZ JOHANNES (GB)
Application Number:
PCT/GB2015/052631
Publication Date:
March 24, 2016
Filing Date:
September 11, 2015
Assignee:
UCL BUSINESS PLC (GB)
International Classes:
G06T7/00
Foreign References:
US20140241600A1 (2014-08-28)
Other References:
DAN WANG ET AL: "Real Time 3D Visualization of Intraoperative Organ Deformations Using Structured Dictionary", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 31, no. 4, 1 April 2012 (2012-04-01), pages 924 - 937, XP011491076, ISSN: 0278-0062, DOI: 10.1109/TMI.2011.2177470
THOMPSON STEPHEN ET AL: "Accuracy validation of an image guided laparoscopy system for liver resection", PROGRESS IN BIOMEDICAL OPTICS AND IMAGING, SPIE - INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, BELLINGHAM, WA, US, vol. 9415, 18 March 2015 (2015-03-18), pages 941509 - 941509, XP060051293, ISSN: 1605-7422, DOI: 10.1117/12.2080974
Attorney, Agent or Firm:
DAVIES, Simon (London EC1N 2DY, GB)
Claims:
Claims

1. A computer-implemented method for supporting laparoscopic surgery comprising:

a) providing a 3-dimensional model of an anatomical structure of the subject of the laparoscopic surgery;

b) obtaining from a stereo laparoscope corresponding stereo pairs of images of the anatomical structure of the subject;

c) processing the corresponding stereo pairs of images of the anatomical structure to generate a topographical representation of the anatomical structure;

d) determining a registration between the 3-dimensional model of the anatomical structure and the topographical representation of the anatomical structure; and

e) using the registration to determine a position of the laparoscope with respect to the 3-dimensional model.

2. The method of claim 1, wherein the 3-dimensional model of the anatomical structure is derived from a 3-dimensional image of the subject.

3. The method of claim 2, wherein the 3-dimensional image was acquired using magnetic resonance imaging (MRI) or X-ray computed tomography imaging (CTI).

4. The method of any preceding claim, further comprising tracking the position of the laparoscope.

5. The method of claim 4, wherein said tracking is performed using an optical tracking system with passive tracking markers placed on the proximal end of the laparoscope.

6. The method of claim 4 or 5, further comprising providing a synchronisation between the stereo images obtained by the laparoscope and the tracked position of the laparoscope, so that the position of the laparoscope is known at the acquisition of each stereo image pair.

7. The method of any preceding claim, wherein processing the corresponding stereo pairs of images comprises:

matching features between the left and right images; and

triangulating individual points using left and right images from the stereo pairs to determine a 3-D position relative to the stereo laparoscope.

8. The method of any preceding claim, wherein the topographical representation of the anatomical structure is based on multiple patches from different locations on the surface of the anatomical structure.

9. The method of any preceding claim, wherein the topographical representation of the anatomical structure comprises a point cloud.

10. The method of claim 9, further comprising filtering the point cloud by fitting the points in a patch to a local surface based on a maximum curvature function.

11. The method of any preceding claim, wherein said registration is determined using an iterative closest point (ICP) technique.

12. The method of any preceding claim, further comprising performing an initialisation for the registration.

13. The method of claim 12, wherein the initialisation is based at least in part on a standard clinical laparoscopic approach to said anatomical structure.

14. The method of claim 12 or 13, wherein said initialisation comprises:

providing on a display screen a virtual view of the anatomical structure derived from the model;

providing on the display screen a real view obtained from the stereo laparoscope; and

manually adjusting at least one of the displayed virtual view and the displayed real view to provide alignment between said two views, thereby determining said position of the laparoscope relative to the 3-D model.

15. The method of claim 14, wherein the virtual view is provided on the display screen as a 2-D overlay of the real view obtained from the stereo laparoscope.

16. The method of any preceding claim, wherein said registration is a rigid or locally rigid registration.

17. The method of any preceding claim, wherein the anatomical structure is the liver.

18. The method of any of claims 1 to 16, wherein the anatomical structure is the pancreas, kidney, or bladder.

19. A computer program comprising program instructions that when executed by a computer system cause the computer system to perform the method of any preceding claim.

20. A non-transitory computer readable medium comprising the computer program of claim 19 stored on a computer readable medium.

21. A computer system for supporting laparoscopic surgery, the system comprising:

a memory holding a 3-dimensional model of an anatomical structure of the subject of the laparoscopic surgery;

an interface to a stereo laparoscope for receiving corresponding stereo pairs of images of the anatomical structure of the subject;

and one or more processors for:

processing the corresponding stereo pairs of images of the anatomical structure to generate a topographical representation of the anatomical structure;

determining a registration between the 3-dimensional model of the anatomical structure and the topographical representation of the anatomical structure; and

using the registration to determine a position of the laparoscope with respect to the 3-dimensional model.

22. Apparatus comprising a computer system configured to implement the method of any of claims 1 to 18.

23. A method substantially as described herein with reference to the accompanying drawings.

24. Apparatus substantially as described herein with reference to the accompanying drawings.

Description:
COMPUTER AND COMPUTER-IMPLEMENTED METHOD FOR SUPPORTING LAPAROSCOPIC SURGERY

Field of the Invention

The present invention relates to laparoscopic surgery, and in particular to a computer and computer-implemented method that use stereoscopic images of an anatomical structure for supporting such surgery.

Background of the Invention

The successful implementation of an image guidance system for laparoscopic liver resection has the potential to improve the feasibility of laparoscopic resection for patients with tumours located in surgically challenging locations. Liver surgery in general is a good target for abdominal image guidance, with systems for image guidance in open surgery becoming commercially available [3, 5]. Systems for laparoscopic resection are also under development [4]. Such systems use a rigid or locally rigid registration, and do not adjust for intraoperative deformation of the liver. However, provided that the guidance system has visual cues to enable the surgeon to estimate the degree of error present, these systems can provide patient benefit in their current form, whilst research on systems using deformable modelling is ongoing.

Image guidance systems require a method to register the pre-operative data to the intraoperative scene. For orthopaedic and neuro-surgery, the use of bone-implanted fiducial markers is commonplace. However, fiducial markers are in general impractical for abdominal surgery, so existing systems typically use exposed surfaces or natural anatomical landmarks to register the preoperative data. Systems have been proposed using manually picked landmarks [3], structured light, laser depth scanners, and touching the surface with a tracked pointer [5]. However, it may be relatively difficult in practice to apply such approaches to laparoscopic surgery for widespread clinical adoption.

Summary

The invention is defined in the appended claims.

The approach described herein helps to enable image guidance for laparoscopic (keyhole) surgery. Such image guidance allows the surgeon to refer to preoperative images during surgery in an intuitive way - for example, by overlaying one or more preoperative images onto a laparoscopic video image (although other display options are possible). This overlay of the images depends on a registration of the pre-operative images to the intra-operative video images. Various methods have previously been proposed to achieve this registration; however, they have generally required the use of very specialised (and typically still prototype) hardware - e.g. structured light or laser range finders - or else depend upon explicit manual definition (and hence alignment) of landmark surface points by the surgeon. The approach described herein helps to support registration using only a readily available (and increasingly common) stereo laparoscope in combination with a commercially available tracking system for the laparoscope, which is also widely used in image guided procedures.

Also provided is a computer program comprising program instructions in machine-readable format that when executed by one or more processors in a computer system cause the computer system to implement any of the various methods as described above. These program instructions may be stored on a non-transitory computer readable storage medium, such as a hard disk drive, read only memory (ROM) such as flash memory, an optical storage disk, and so on. The program instructions may be loaded into random access memory (RAM) for execution by the one or more processors of a computer system from the computer readable storage medium. This loading may involve first downloading or transferring the program instructions over a computer network, such as a local area network (LAN) or the Internet.

Also provided herein is an apparatus comprising a computer system configured to implement the various methods as described above. The computer system may comprise one or more machines, which may be general purpose machines running program instructions configured to perform such methods. The general purpose machines may be supplemented with graphics processing units (GPUs) to provide additional processing capability. The computer system may also comprise at least some special purpose hardware for performing some or all of the processing described above, such as determining the visualisations. The computer system may be incorporated into apparatus specifically customised for performing computer-assisted (image-guided) surgery using a laparoscope. Such apparatus may be used to provide support during a surgical operation, such as by providing real-time visualisation of the position of the inserted laparoscope in combination (and registered) with one or more pre-operative images.

Brief Description of the Drawings

Various embodiments of the invention will now be described in detail by way of example only with reference to the following drawings:

Figure 1 is a graph showing how the triangulation error and patch area vary with distance from the laparoscopic lens;

Figure 2 is an image of multiple surface patches overlaid onto a liver phantom;

Figure 3 is an image of mounting prongs as used for subsurface landmarks for the liver phantom, as employed during an assessment of accuracy.

Figure 4 is an image showing estimated locations (circles) and corresponding true locations (crosses) for in-vivo data to illustrate the accuracy of alignment.

Figure 5 is a flowchart illustrating a method for supporting laparoscopic surgery in accordance with some embodiments of the invention.

Detailed Description

1. General

The approach described herein supports minimally invasive surgery (MIS) using a laparoscope. Published image guidance systems for laparoscopic surgery generally involve the surgeon performing a manual alignment between the visible anatomy, as obtained from the laparoscope, and any preoperative data, or rely on specialised hardware, such as a laparoscope with one or more laser range finders attached. The system described here removes the need for specialist equipment or manual alignment (point selection). Instead, a commercially available stereo laparoscope and tracking system are used to reconstruct and localise multiple surface patches. A carefully designed user interface is also provided to enable multiple surface patches, each of approximately 30 cm², to be captured, localised, and visualised within around 5 seconds per patch. Such visualisation is important as it allows the user to assess interactively the quality and spread of reconstructed patches. A good set of patches can be collected and checked in under 2 minutes. Registration between the pre-operative surface and this set of surface patches is achieved using an iterative closest point (ICP) approach in conjunction with a sterile, manual initialisation. The registration, including manual initialisation, can be achieved within 3 minutes. The entire procedure (surface reconstruction, manual initialisation and ICP) was reliably achieved in under 5 minutes. N.B. all timings presented herein are given by way of example only and are based on the hardware and software of the current implementation - it will be appreciated that the timing may vary for other implementations that use different hardware and/or software.

The system described herein has been validated using a silicone liver phantom and in-vivo porcine data. The results presented below are based on registrations performed "live", not subject to any post-operative adjustment. The porcine data uses computed tomography (CT) data from an insufflated patient. Use of insufflated CT reduces deformation due to the insufflation, which is the subject of ongoing research [2].

2. Registration Method

The registration process is performed in four steps: reconstruction and localisation of individual surface patches; filtering and compositing of the surface patches; manual initialisation; and ICP registration. Each step is described in detail below. The processing for the various algorithms and visualisation is implemented within NifTK from the UCL Centre for Medical Imaging (CMIC) (see www.niftk.org). Importantly, the system allows the reconstructed surface patches to be visualised overlaid on the intra-operative video as they are generated, enabling the user to monitor the location and quality of surface patches. The visualisation also allows a real-time, interactive registration process to be achieved.

2.1 Generation of Individual Point Clouds

Camera calibration parameters are determined prior to surgery using Zhang's method [7], using for example the implementation in OpenCV, an open source computer vision library (see www.opencv.org), with the "hand-eye" calibration performed as per Tsai [8]. For the current implementation, the focal length is not changed during surgery; however, other implementations may involve changing the focal length, and re-calibrating as appropriate. The median right to left lens transform is also determined via OpenCV.

During surgery, video images of the liver surface are captured with a Viking high-definition stereoscopic laparoscope, available from Viking Systems (see www.vikingsystems.com), connected to an NVidia SDI capture card, available from NVidia Corporation (see www.nvidia.com); stereo pairs are captured at a rate of 30 Hz. A pyramidal search is used to match features between the left and right images, after which individual points can be triangulated to their 3D position relative to the left lens. The reconstruction algorithm is as per Stoyanov et al. [6], see also Totz [9], and the implementation allows a dense patch of points (a point cloud) to be saved to memory in around 2 seconds.
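By way of illustration only, the triangulation step can be sketched as a standard linear (direct linear transform) triangulation. This is a simplified stand-in for the reconstruction of Stoyanov et al. [6], not the actual implementation; the projection matrices and matched pixel coordinates are assumed inputs:

```python
import numpy as np

def triangulate_point(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one matched feature.

    P_left, P_right -- 3x4 projection matrices for the two lenses
    x_left, x_right -- (u, v) pixel coordinates of the matched feature
    Returns the 3-D position in the coordinate system of the projection
    matrices (the left lens, if P_left = K[I|0]).
    """
    # Each pixel measurement contributes two linear constraints on X.
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With a narrow stereo baseline such as that of the laparoscope used here, the conditioning of this triangulation degrades with depth, consistent with the error growth shown in Figure 1.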

The accuracy of such a method depends on the image type and quality to enable feature matching, and on the camera baseline to enable triangulation. The camera baseline is the distance between the two lenses - in this case, the Viking scope has a baseline of 4.6 mm. The image quality is application-specific, hence the need to fine-tune the algorithm on realistic models and in-vivo animal and human data, as illustrated in Figures 1 and 2. In particular, Figure 1 shows the point triangulation error and the area of the reconstructed patches, which both increase with distance from the lens (in mm) - the upper curve of Figure 1 represents the point triangulation error (RMS, mm), while the lower curve of Figure 1 represents the area of the reconstructed patches (cm²). Figure 2 shows nine surface patches 315, each with an area of around 30 cm², shown overlaid on the liver phantom 310.

The scale of the visible features is also application-specific and helps to determine at what depth the feature matching works reliably. During phantom and porcine work, the approach described herein was found to give the best results when the liver surface was between approximately 50 and 80 mm from the lens, thereby giving surface patch areas of between approximately 14 and 36 cm².

The laparoscope is tracked using an optical tracking system, NDI Polaris Spectra from Northern Digital Inc (NDI) - see www.ndigital.com. Passive tracking markers were placed on the external end (590 mm from the lens) of the laparoscope. The estimated tracking transform, referred to herein as T_Camera2World, is used to transform each set of triangulated points to the world (tracker) coordinate system. Accurate synchronisation of the tracking and video signals is important. A time stamping and signalling protocol within NifTK, accurate to the millisecond, was implemented based on OpenIGTLink (the Open Network Interface for Image Guided Therapy - see www.openigtlink.org). By moving the laparoscope, it is straightforward to confirm that the surface patch has been placed in the correct location on the liver surface.
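As a minimal sketch (assuming the tracking transform is expressed as a 4x4 homogeneous matrix), transforming a triangulated patch into world coordinates is a single matrix application:

```python
import numpy as np

def transform_points(T_camera2world, points):
    """Apply a 4x4 homogeneous transform to an N x 3 array of points
    triangulated in the lens coordinate system, returning them in the
    world (tracker) coordinate system."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return (T_camera2world @ pts_h.T).T[:, :3]
```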

As the liver is a large organ of mainly smooth curvature, in general it is not possible to achieve registration of the pre-operative surface using a single patch [1, 4]. The system presented here therefore allows multiple patches to be captured and composited into a single large patch. A key enabler of this is that the patches can be visualised on the intraoperative video immediately, so that badly formed patches can be removed and the process repeated interactively.

2.2 Compositing and Filtering Point Clouds

After capturing about 6 to 10 point clouds (patches), which typically takes about 2 minutes, the resulting point clouds are filtered and composited to a single point cloud. Filtering reduces the number of points in each patch from hundreds of thousands to hundreds, and therefore helps to reduce subsequent processing time (although some implementations may dispense with such filtering). This reduction in the number of points is done using voxel re-sampling, implemented within the point cloud library (PCL) (see www.pointclouds.org). By uniformly re-sampling, any bias towards areas of high visual feature density can be removed. Secondly, the filtering process removes some of the triangulation noise by fitting the points in each patch to a local surface based on a maximum curvature function, also implemented within PCL. The individual point clouds are then composited and saved to memory as a single point cloud. This point cloud can be considered as representing the surface of the liver as viewed from the laparoscope at the image capture position for each respective patch (prior to the compositing).

2.3 Registration - ICP

The registration process estimates the transform from the model co-ordinate system to world coordinates, referred to herein as T_Mod2World. The transform T_Mod2World is determined so as to minimise the mean Euclidean distance between the filtered point cloud and the model liver surface. Currently this is done using the iterative closest point (ICP) implementation of the Visualization Toolkit (VTK), see www.vtk.org. VTK enables interpolation of a surface between its defined vertices, and has been found to work well and repeatably for the phantom, provided a suitable initialisation (within about 30 mm) is given. For the porcine data, the registration was somewhat less repeatable, and susceptible to small changes in initialisation. However, as the user is able to see the pre-operative liver surface overlaid on the laparoscopic video in real time, it is possible for the user to assess rapidly the quality of the registration. This therefore allows the registration to be re-initialised and repeated until a visually satisfactory registration is achieved, which can generally be accomplished within a small number of trials.
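The filtering and registration steps above can be illustrated with simplified stand-ins: a voxel re-sampling in the spirit of the PCL filter, and a basic point-to-point ICP (the actual system uses the VTK implementation, which also interpolates the model surface between its vertices). All names here are illustrative:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Uniform voxel re-sampling: all points falling in the same voxel are
    replaced by their centroid, removing bias towards feature-dense areas."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, model, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour correspondence with
    re-estimation of the rigid transform. As discussed in the text, a good
    initialisation is needed to avoid local minima."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbour in the model point set
        d = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
        R, t = best_rigid_transform(src, model[d.argmin(axis=1)])
    return R, t
```

In practice a spatial index (e.g. a k-d tree) replaces the brute-force search, and the minimisation is over the distance to the interpolated model surface rather than to its vertices.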

2.4 Registration - Manual Initialisation

For reliable operation, the ICP algorithm generally requires a good starting estimate of T_Mod2World. This estimate is determined manually using a two-stage process. The first stage relies on the fact that the laparoscopic approach to the liver is similar for all patients. Therefore T_Mod2World can be coarsely estimated from the position of the lens T_Camera2World and a preset offset transform T_Offset, as per Equation 1:

T_Mod2World = T_Camera2World · T_Offset (Equation 1)

The virtual anatomy (scene) remains static on the visible or real scene (derived from the laparoscope video) as the laparoscope is inserted through the trocar and positioned so that the visible scene matches, as closely as possible, the virtual scene. In practice, it is generally only necessary to get both the virtual and real livers visible. During the second part of the initialisation the user "picks up" the virtual liver using a second tracked object. This is configured so that the user can now move the virtual liver in 6 degrees of freedom in the coordinate system of the Image Guided Laparoscopy overlay screen.
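A sketch of this coarse estimate, assuming 4x4 homogeneous matrices and that the preset offset composes on the right of the tracked lens pose (the exact convention is implementation-specific):

```python
import numpy as np

def initial_mod2world(T_camera2world, T_offset):
    """Coarse starting estimate for ICP: the tracked lens pose composed
    with a preset offset representing the standard laparoscopic approach
    to the organ."""
    return T_camera2world @ T_offset
```

The estimate only needs to fall within the ICP convergence basin (around 30 mm for the phantom), after which the manual refinement takes over.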

Prior to use, the user defines two transformations:

- T_Model2Centre, which defines the transform from the origin of the pre-operative model to the desired centre of rotation of the model. The user may select any centre of rotation - typically it would be the centroid of the anatomy of interest, for example the left lobe of the liver. The application may include an intuitive interface for performing this operation.

- T_World2Screen, which defines the location of the centre of the user interface screen relative to the "world" or tracking system origin. This will depend on the geometry of the operating room. The transform may be set manually or the application may include a user interface for setting the transform automatically.

When the user initiates the second part of the initialisation, the incremental motion of a tracked handheld "reference" object relative to the user screen is applied to the centre of the model, relative to the laparoscope lens. The user may use any tracked object for this procedure. A physical representation of the patient's liver might be used to help make the process more intuitive.

The user positions the virtual liver over the real liver as closely as possible. To aid the process it is possible to "pause" the laparoscopic video and tracking streams if so desired. This system has been used on the phantom and in-vivo to provide successful initialisation.

3. Validation Method

3.1 Gold Standard Feature Positions

For the present work, the surgeon has chosen to show the registered model as a 2D overlay (rather than using the 3D visualisation capability of the laparoscope). In this context, the error of interest is the difference between the predicted position of a given feature on screen and the actual position of the feature on the screen. For each data set, a set of landmark features that can be unambiguously located in the video images is used as a "gold standard" against which the system errors can be measured.

The phantom is designed so that, after the silicone liver has been imaged, the flexible silicone liver can be removed to allow the rigid mounting pins or prongs 415 to be imaged with the laparoscope (see Figure 3). The mounting prongs are used as subsurface landmarks for the phantom data (at a clinically relevant depth).

For the in-vivo data, subsurface targets cannot be used, so the landmark points used are on the surface of the liver, e.g. surface ablation zones, and anatomical notches, as shown in the image of Figure 4. In particular, Figure 4 illustrates the surface ablation and anatomical notch that were used for error measurement in the in-vivo data. In Figure 4, the green circles show the model estimates of positions corresponding to the neighbouring gold standard (crosshair) locations. As the individual lobes of a porcine liver can move independently, validation was limited to the lobe upon which registration was performed, in this case the right lobe. For each video sequence, a sample set of frames (every 25th frame for each channel) was extracted (791 for the phantom, 1193 for the porcine data). Any pertinent landmarks in this set of frames were manually selected. The pixel location for each of the landmarks was stored along with the relevant frame number. Right and left channels were treated independently.

3.2 Model Feature Positions

The location of each of the landmark points was manually identified in the CT data and transformed to world coordinates using the model-to-world transform from the registration process. In addition to the proposed method, for each video data set the position in world coordinates of each of the validation landmarks was estimated by triangulation and averaging of the manually selected gold standard screen points. This enables an estimate of laparoscope tracking and calibration errors independent of other system errors. A further estimate of the model-to-world transform was achieved by performing a landmark point based registration between the triangulated landmark points and those identified in the CT. Doing this enables an estimate of the errors in manually locating landmark points and any deformation between the CT data and the video data. The three approaches to landmark localisation and the errors present in each method are shown in Table 1.

Table 1: The data types and methods used for validation, the system errors captured by each, and results. System errors common to all methods (laparoscope calibration and tracking, and point picking errors in the CT) are omitted from the table for clarity. Each data type and method captures a different subset of error sources, shown in the left-hand columns. The right-hand columns show RMS and maximum errors for each method. Phantom results are calculated from 791 samples of 9 subsurface landmarks. Three sets of results are shown for the porcine data, one for each animal. The number of samples for each porcine data set were 478 samples of 4 surface landmarks, 234 samples of 6 surface landmarks, and 483 samples of 6 surface landmarks, respectively.

3.3 Re-projection Error

For each of the frames where gold standard pixel coordinates were available, the error associated with "re-projection" was calculated as follows. The gold standard pixel coordinates are undistorted using a 4 parameter distortion model, then re-projected to normalised points (x_gs/z, y_gs/z, 1.0) in the lens's coordinate system using the camera's projection matrix. The model points are transformed into the lens's coordinate system using the tracking transform for the relevant frame. For each manually segmented gold standard point, the error is then defined by Equation 2.
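Equation 2 is not reproduced in full in this text; one plausible formulation of the projected error, measured by evaluating the gold standard viewing ray at the depth of the model point, is sketched below (the function and variable names are assumptions for illustration):

```python
import numpy as np

def reprojection_error_mm(gs_norm, x_model_lens):
    """Projected error for one landmark.

    gs_norm      -- undistorted gold standard point on the normalised
                    image plane, (x_gs/z, y_gs/z, 1.0)
    x_model_lens -- model point in the lens coordinate system
    The gold standard ray is evaluated at the depth of the model point,
    so the result is a distance in mm in the plane of that point.
    """
    ray_point = np.asarray(gs_norm) * x_model_lens[2]
    return float(np.linalg.norm(ray_point - x_model_lens))
```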

3.4 Results

Table 1 (see above) summarises the results for each validation experiment. For each experiment, the root mean square (RMS) and maximum error are presented. For both the phantom and porcine data, the results are taken from a single registration experiment. Repetition of the experiment for the phantom data was straightforward, with results similar to those in Table 1 achieved reliably in under 5 minutes. Repetition of the porcine experiment was more difficult: it took 4 attempts, each taking 3 minutes, to achieve a successful registration. However, as the registration is based on the liver surface, which is visible in the laparoscopic video, failed registrations can be identified in real-time, and the process repeated until a satisfactory result is achieved.

To quantify the repeatability of the registration, the ICP registration was repeated 100 times from starting estimates based on random perturbations of the landmark based registration. For the phantom, all 100 registrations resulted in RMS projection errors less than 4 mm. For the porcine data, only 21 of the 100 registrations gave RMS projection errors less than 10 mm.

4. Discussion

The results presented in Table 1 show that an accuracy of around 3 mm was achieved by the system on the phantom, which is clinically acceptable. The porcine data provides additional challenges, such as non-rigid deformation and breathing motion, and the image-guidance was therefore less accurate. The accuracy required for a useful image-guidance system has not yet been finalised. For example, Cash et al. [1] aimed for 10 mm, which is similar to the porcine results presented in Table 1; this is consistent with feedback from the surgeons who used the present system in-vivo and found the overlay useful. At a more stringent level, surgeons have suggested aiming for an accuracy of 3 mm, as this would be compatible with their surgical margin.

The above discussion has focussed on the projected errors, i.e. in the camera or image plane, which is appropriate for 2D projection. However, if 3D visualisation were to be adopted, the errors normal to the camera plane are also likely to be important, and it may be appropriate to measure the errors differently. Although it is straightforward to measure the distance error of triangulated points in 3D, the high triangulation errors, as illustrated in Figure 1, tend to swamp the other system errors. In addition, due to the relatively narrow baseline of the stereo laparoscope, it is likely that the error normal to the camera plane is difficult to perceive. It is recognised that the system described here, consistent with most existing systems, uses a rigid or locally rigid registration between the pre-operative images and the intra-operative images, and hence does not adjust for intraoperative deformation of the liver. However, one possibility is to allow some localised recovery of the dynamic organ deformation using a non-rigid, or locally rigid, variant of the ICP algorithm. In addition, modelling of the insufflation process should enable use of CT data from non-insufflated patients, as will be appropriate for human cases.

5. Flowchart

Figure 5 presents a flowchart of a computer-implemented method for supporting laparoscopic surgery in accordance with various embodiments of the invention. The method includes providing a 3-dimensional model of an anatomical structure of the subject of the laparoscopic surgery (operation 210). The anatomical structure may be (for example) an organ such as the liver or pancreas. In general, this 3-D model will be derived from one or more 3-D pre-operative images of the subject, such as by using magnetic resonance imaging (MRI) or X-ray computed tomography imaging (CTI). In general the 3-D image(s) will have been processed to extract the anatomical structure of interest (for example as a surface mesh or image), although it might also be feasible to use the 3-D preoperative image of the subject directly as the model. Note that this processing of the 3-D image(s) does not have to be performed in real-time, but rather can be completed in the interval between the preoperative imaging and the surgical procedure (which is typically hours or days).

The 3-D image(s) may be used for planning the operation, for example, for identifying a portion of an organ (the anatomical structure) to be removed, such as in a liver resection.

Accordingly, it is important during the intra-operative procedure to be able to determine the position of the laparoscope relative to the 3-D pre-operative image, as described herein.

In some cases the model may be designed to accommodate motion or deformation of the anatomical structure, which may be a relevant factor for some organs. For example, the model may incorporate information as to how the organ is likely to deform, based perhaps on biomechanical modelling and/or statistical data on the deformation of organs from a large number of images. This deformation information can then be used to assist in the registration of the 3-D model of the anatomical structure to the laparoscope imaging.

The method includes, at the intra-operative stage, obtaining (receiving or acquiring) from a stereo laparoscope corresponding stereo pairs of images of the anatomical structure (operation 220). Corresponding pairs of the stereo images are then processed to generate a topographical representation of the anatomical structure (operation 230). Note that this processing of the stereo images may be performed, at least in part, within the stereo laparoscope itself, or by some external processing unit or device. In addition, it is important to be able to perform the processing in real-time or near real-time, so that the results are quickly available to the surgeon who is performing the operative procedure.

The stereo laparoscope is generally provided with two lenses, referred to as left and right, which acquire corresponding pairs of images - i.e. for each image from the left lens there is a corresponding image from the right lens. In practice, the pairs of images may be acquired as a video stream from each lens (hence each individual image can be considered as a frame of the video), although in other embodiments, the stereo laparoscope may provide successive pairs of individual (still) images.

In some embodiments, processing the corresponding stereo pairs of images comprises matching features between the left and right images; and based on the matched features, triangulating individual points using left and right images from the stereo pairs to determine a 3-D position relative to a known position on the stereo laparoscope - for example relative to the left lens. Alternatively, other techniques might be used for processing the images, for example, based on some form of global correlation between the left and right images (i.e. without first matching features between the two images), and/or by incorporating information indicating how the images change with movement of the laparoscope (which gives a generally known change in viewing angle onto the surface).
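The triangulation of matched features described above can be illustrated with a minimal sketch. This is not part of the patent disclosure; it assumes rectified images (so matched features share the same image row), and the parameter names (focal length `f` in pixels, principal point `(cx, cy)`, stereo `baseline`) are hypothetical illustration values, not figures from the described system.

```python
import numpy as np

def triangulate_rectified(left_pts, right_pts, f, cx, cy, baseline):
    """Triangulate matched features from a rectified stereo pair.

    left_pts, right_pts: (N, 2) arrays of matched (u, v) pixel
    coordinates. Returns (N, 3) points in the left-camera frame,
    in the same units as the baseline.
    """
    left_pts = np.asarray(left_pts, dtype=float)
    right_pts = np.asarray(right_pts, dtype=float)
    disparity = left_pts[:, 0] - right_pts[:, 0]   # horizontal shift in pixels
    z = f * baseline / disparity                   # depth along the optical axis
    x = (left_pts[:, 0] - cx) * z / f
    y = (left_pts[:, 1] - cy) * z / f
    return np.column_stack([x, y, z])

# A feature 50 pixels of disparity away, with f = 1000 px and a 5 mm
# baseline, triangulates to a depth of 100 mm.
point = triangulate_rectified([[700, 400]], [[650, 400]],
                              f=1000, cx=640, cy=360, baseline=5)
```

Note the depth formula `z = f * baseline / disparity`: with the narrow baseline typical of a stereo laparoscope, small disparity errors translate into large depth errors, which is consistent with the triangulation-error behaviour discussed above.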

The topographical representation of the anatomical structure is based on one or more patches from different locations on the surface of the anatomical structure, where each patch corresponds to a given viewing area from the laparoscope - e.g. one field of view. The use of multiple patches, for example, between 6 and 12, is particularly helpful when the anatomical structure is relatively sparse in terms of distinct topography.

The topographical representation of the anatomical structure may comprise any suitable form, such as a point cloud, a surface mesh, etc. This representation may be filtered to reduce complexity (and noise), which can then help to reduce the subsequent computational burden of registering the topographical representation to the 3-D model. For example, such filtering may comprise filtering the point cloud by fitting the points in a patch to a local surface based on a maximum curvature function.
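As a concrete illustration of reducing the complexity of a point-cloud patch, the sketch below uses simple voxel-grid averaging rather than the curvature-based surface fitting mentioned above - a deliberately simpler stand-in technique, shown only to make the idea of filtering before registration concrete. The function name and voxel size are hypothetical.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by replacing the points in each voxel
    with their centroid, cutting both noise and point count before
    the registration step."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel; np.unique gives one row per
    # occupied voxel plus the group index of every input point.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # accumulate per-voxel sums
    return sums / counts[:, None]      # per-voxel centroids
```

In practice the voxel size trades accuracy against speed: coarser voxels give fewer points and a faster ICP, at the cost of surface detail.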

In some embodiments, the method further comprises tracking the position of the laparoscope (usually in combination with tracking the orientation of the laparoscope as well). For example, this tracking may be performed using an optical tracking system with passive tracking markers placed on the proximal end of the laparoscope. There are various other options available for such tracking, for example, attaching one or more ultrasound or microwave emitters to the laparoscope, or using a magnetic field sensor. A synchronisation is provided between the stereo images obtained by the laparoscope and the tracked position of the laparoscope. This then allows the position and orientation of the laparoscope to be determined at the acquisition of each stereo image pair, and hence allows a consistent reference frame to be utilised for combining laparoscopic images from different times (and hence probably different positions and orientations) into a single, coherent, topographical representation of the anatomical structure.
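The synchronisation between the video stream and the tracking stream can be sketched as a timestamp lookup: for each acquired stereo pair, select the tracked pose nearest in time. This is an illustrative simplification (a real system would also calibrate and correct any fixed latency between the two streams); the function name and data layout are hypothetical.

```python
import numpy as np

def nearest_pose(frame_time, track_times, track_poses):
    """Return the tracked laparoscope pose closest in time to a
    video frame.

    track_times: (M,) timestamps in seconds from the tracker.
    track_poses: (M, 4, 4) rigid transforms (tracker -> marker).
    """
    deltas = np.abs(np.asarray(track_times, dtype=float) - frame_time)
    return track_poses[int(np.argmin(deltas))]
```

With each stereo pair tagged with its pose in this way, patches reconstructed at different times (and hence from different laparoscope positions) can be mapped into the single tracker reference frame before being merged.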

Note that it may be feasible to perform such combination without additional tracking information, but rather based on the continuous imaging provided by the laparoscope - in effect, processing this continuous imaging to provide a (simultaneous) solution for both the surface topography and also for changes in the location and orientation of the laparoscope. However, this is computationally more difficult (and hence slower - which may be problematic in an intra-operative context), and also likely to be less accurate than obtaining external tracking information for the laparoscope.

A registration is now determined between the 3-D model of the anatomical structure and the topographical representation of the anatomical structure (operation 240). Again, this registration is performed as part of the intra-operative procedure, and hence should be completed quickly. In some embodiments, the registration is determined using an iterative closest point (ICP) technique, but any suitable algorithm or technique may be utilised.
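A minimal point-to-point ICP, of the kind mentioned above, can be sketched as follows. This is a generic textbook formulation (nearest-neighbour matching by brute force, rigid fit via the Kabsch/SVD method), not the specific implementation of the described system, which uses a locally rigid variant and k-d tree acceleration in practice.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst
    (Kabsch method via SVD of the cross-covariance)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(surface, model, iters=30):
    """Register 'surface' (reconstructed patches) to 'model'
    (points sampled from the pre-operative mesh).  Returns the
    accumulated rotation and translation."""
    src = np.asarray(surface, dtype=float).copy()
    model = np.asarray(model, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbours; fine for a sketch, a k-d
        # tree would be used for intra-operative point counts.
        d2 = ((src[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(axis=1)]
        R, t = fit_rigid(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

As with any ICP variant, convergence depends on starting close enough to the true alignment, which is exactly why the initialisation step described below matters.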

In many cases, the registration requires (or performs better and/or converges more quickly with) an initialisation that provides a very approximate (coarse) registration between the 3-D model of the anatomical structure and the topographical representation of the anatomical structure. One possibility is for this initialisation to be based at least in part on a standard clinical laparoscopic approach to the anatomical structure, because in this case the view from the laparoscope relative to the model is predictable (to a certain extent).

In some embodiments, a more accurate, manual, initialisation is performed by providing on a display screen: (i) a virtual view of the anatomical structure derived from the model; and (ii) a real view obtained from the stereo laparoscope. Note that the real view may be derived from the topographic representation derived above, or may alternatively comprise actual images obtained from the laparoscope (e.g. an image obtained from one lens, or a stereo or flattened composite obtained from both lenses). A clinician is then able to manually adjust at least one of the displayed virtual view and the displayed real view to provide alignment between the two views. For example, a user interface may allow one of the views to be scaled, translated and rotated in order to achieve at least approximate alignment with the other view.
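The rotate/translate adjustments the clinician applies on screen can be collected into a single 4x4 rigid transform to seed the registration. The sketch below is one possible encoding (Z-Y-X Euler angles), shown for illustration only; the function name and angle convention are assumptions, and scaling is omitted because the registration itself is rigid.

```python
import numpy as np

def initial_transform(rx, ry, rz, t):
    """Build a coarse initialisation as a 4x4 homogeneous transform
    from three rotations (radians, applied as Rz @ Ry @ Rx) and a
    translation vector t."""
    cxa, sxa = np.cos(rx), np.sin(rx)
    cya, sya = np.cos(ry), np.sin(ry)
    cza, sza = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cxa, -sxa], [0, sxa, cxa]])
    Ry = np.array([[cya, 0, sya], [0, 1, 0], [-sya, 0, cya]])
    Rz = np.array([[cza, -sza, 0], [sza, cza, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T
```

The resulting matrix is what an ICP-style registration would take as its starting estimate in place of an identity transform.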

Once this approximate alignment has been selected, an appropriate initialisation for the registration can be determined. This initialisation will be based on the mutual (relative) geometry (orientation, etc.) of the two views as aligned on the screen. In a situation where the real view is derived directly from the imaging of the laparoscope (rather than from the topographic representation), this geometry can also relate the displayed real view to the topographic representation of the surface, based on the tracked position of the laparoscope. The registration procedure can then commence, based on this approximate alignment, to determine the registration between the topographic representation and the 3-D model.

Once the registration has been completed, this then allows the position of the laparoscope (which is known relative to the topographic representation) to be determined relative to the 3-D model (operation 250). This information can then be used, for example, to display the 3-D model, including relevant information from the pre-operative imaging, in combination (and registration) with the view obtained from the stereo laparoscope, thereby supporting the image-guided surgical procedure.
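Determining the laparoscope position relative to the 3-D model amounts to composing rigid transforms along the chain from camera to model. The sketch below assumes three such transforms, including a marker-to-camera ("hand-eye") calibration; the frame names are illustrative, not terminology from the patent.

```python
import numpy as np

def camera_to_model(T_reg, T_tracker_marker, T_marker_camera):
    """Express the laparoscope camera pose in the pre-operative
    model's coordinate frame by chaining 4x4 rigid transforms:

    T_reg:            model   <- tracker  (surface registration)
    T_tracker_marker: tracker <- marker   (optical tracking)
    T_marker_camera:  marker  <- camera   (hand-eye calibration)
    """
    return T_reg @ T_tracker_marker @ T_marker_camera
```

Applying the composed matrix to a point in the camera frame yields its position in the model frame, which is what allows pre-operative detail to be overlaid in registration with the live laparoscopic view.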

It will be appreciated that the processing of operations 220, 230, and 240 to determine the registration can be performed as a preliminary portion of the operative procedure - e.g. by acquiring the image patches, determining the topographic representation, and registering to the 3-D model from the pre-operative imaging. As described herein, this procedure (and the associated processing) can be performed quickly, within a few minutes, which is feasible within a real-time, intra-operative context. The resulting registration can then be used to provide the image-guided support for the laparoscopic procedure by allowing the 3-D model and pre-operative imaging to be displayed to a clinician in conjunction with (and aligned to) the view currently obtained from the stereo laparoscope.

In conclusion, the approach described herein provides an image-guided laparoscopy system that may be used (for example) for abdominal surgery such as liver resection. A validation of this approach has been performed based on a realistic anatomy phantom and in-vivo porcine data.

Registration of pre-operative contrast enhanced CT data to intra-operative video has been achieved by combining stereoscopic surface reconstruction and optical tracking of the laparoscope. Multiple patches of visible surfaces may be reconstructed and combined accurately and quickly from stereo laparoscopy. Coupled with a locally rigid transformation model, registration has been achieved within 5 minutes. This has allowed laparoscopic surgical guidance to be obtained in a surgically realistic setting (this is believed to be for the first time). Testing of the system on a realistic liver phantom has shown that subsurface landmarks can be localised to an accuracy of 2.9 mm rms, while testing on porcine liver models has indicated an accuracy of 8.6 mm rms for anatomical landmarks.

Although the system and method of the present approach have been described primarily in the context of liver resections (of which there are about 1,800 per year in the UK), they may also be employed in other contexts - for example, for pancreatic resections (about 2,200 per year in the UK) or kidney operations for cancer (about 3,300 per year in the UK). They may also be utilised for gallbladder removal surgery (about 60,000 per year in the UK), where the potential ability to reduce bile duct injuries is important. Other contexts for the use of the method and system described herein will be apparent to the skilled person.

Various embodiments of the invention have been described above. The skilled person will appreciate that the features of these embodiments may be combined with one another as appropriate, or modified according to the particular circumstances of any given application. The scope of the invention is defined by the appended claims and their equivalents.
