Title:
VISUALISATION SYSTEM FOR NEEDLING
Document Type and Number:
WIPO Patent Application WO/2018/211235
Kind Code:
A1
Abstract:
A visualisation system for needling is disclosed. The visualisation system enables the user, who may be a surgeon, to identify the position of the needle and the position of organs within the volume upon which the procedure is being performed.

Inventors:
SLEEP NICHOLAS (GB)
MARGETTS STEPHEN (GB)
JENNER KATHRYN (GB)
COCHLIN DENNIS (GB)
Application Number:
PCT/GB2018/050731
Publication Date:
November 22, 2018
Filing Date:
March 21, 2018
Assignee:
MEDAPHOR LTD (GB)
International Classes:
A61B8/08; A61B8/00; A61B34/00; G06F3/01; G06T15/00
Domestic Patent References:
WO2016133847A12016-08-25
Foreign References:
US20140343404A12014-11-20
US20060176242A12006-08-10
US20030117393A12003-06-26
US20030055335A12003-03-20
US20160191887A12016-06-30
US20160249989A12016-09-01
CN106063726A2016-11-02
Attorney, Agent or Firm:
BRODERICK, Terence (GB)
Claims:
CLAIMS

1. Visualisation system configured to:

receive scan data from a scanning portion representative of an interior portion of a body;

receive a first set of positional and orientational data indicative of the positions of at least one user;

receive a second set of positional and orientational data indicative of the position of a scanning portion;

generate one or more rendered views of the interior portion using the scan data and the first and second sets of positional and orientational data;

receive input from the at least one user indicative of a change in their viewpoint or a desire to modify one or more features of the rendered views;

modify the rendered views responsive to the user input; and

combine the rendered views into a scene for display by the system.

2. System according to Claim 1, wherein the system further comprises a support portion arranged to support the scanning portion.

3. System according to Claim 2, wherein the support portion is arranged to move the scanning portion.

4. System according to Claim 3, wherein the support portion is arranged to receive positional data identifying a specified scan plane; and the support portion is arranged to move the scanning portion into the specified scan plane for the generation of scan data.

5. System according to Claim 4 wherein the support portion is arranged to periodically move the scanning portion to repeatedly scan over a volume of interest.

6. System according to any of Claims 2 to 5, wherein the support portion is coupled to a support arm arranged to hold the support portion in a desired position.

7. System according to any of Claims 2 to 6 wherein the support portion is arranged to generate the positional and orientational data indicative of the position and orientation of the scanning portion.

8. System according to any preceding claim, wherein the system further comprises a needle.

9. System according to any preceding claim, wherein the scan data is three-dimensional ultrasound data.

10. System according to any of Claims 1 to 8 wherein the scan data comprises a plurality of two-dimensional frames from a scanning portion and positional data indicating the position and orientation of said scanning portion.

11. System according to any preceding claim, wherein the input received from the user identifies at least one target location in the interior portion of the body and the system is further configured to generate and display indicia to identify the at least one target location as part of the generation of a rendered view.

12. System according to Claim 11 wherein the indicia comprises a text portion displaying information relating to the target location.

13. System according to Claim 11 wherein the indicia comprises a geometric shape.

14. System according to any preceding claim wherein the system is configured to display additional rendered views displaying data related to the body which are combined into the scene.

15. System according to any preceding claim wherein the system is configured to display rendered views comprising one or more statically located views of configurable size.

16. System according to any previous claim wherein the system is configured to display a rendered view, wherein the rendered view is generated based on the position and orientation of another user.

17. System according to any previous claim wherein the scene comprises a rendered view comprising a volumetric representation of the body.

18. System according to any preceding claim wherein the scene is displayed on at least one headset.

19. System according to Claim 18 wherein the at least one headset comprises a plurality of headsets.

20. System according to Claim 18 or Claim 19 wherein the system is further configured to: receive positional data from one or more headsets; generate a rendered view for each of the one or more headsets using the scan data and the positional data from the respective headset.

21. System according to Claim 20, wherein the system is further configured to:

receive input from one or more headsets, the user input indicating a modification to a rendered view; and

modify the rendered view for the respective headset responsive to the input from the respective headset.

22. System according to any of Claims 18 to 21 wherein the at least one headset is an augmented reality headset.

23. System according to any preceding claim wherein the input from the user identifies a target for insertion of a needle.

24. System according to Claim 23, wherein the system is configured to, responsive to receiving the input identifying a target for insertion of a needle, overlay a marker onto the target.

25. System according to Claim 24 wherein the system is further configured to generate a path identifier from the exterior of the body to the target for insertion of the needle.

26. System according to Claim 25 wherein the path identifier is generated to be at a configurable angle to a scan plane of a scanning portion.

27. System according to any preceding claim wherein the input from the at least one user comprises a request to track an item and the system is configured to analyse the scan data to extract the data indicative of the track of the item of interest.

28. System according to Claim 27, wherein the system is configured to: receive the request to track the item; retrieve a plurality of chronologically sequenced three-dimensional data blocks, each data block corresponding to an instance in time at which the three-dimensional data block was generated; extract a plurality of tracks corresponding to possible tracks of the item; and determine the correlation of the plurality of chronologically sequenced possible tracks to extract the most likely track of the item from the plurality of possible tracks.

29. System according to Claim 28 wherein the system is configured to highlight the position of the item in one or more rendered views.

30. System according to Claim 28, wherein the system is configured to highlight the most likely track of the item in one or more rendered views.

31. System according to any preceding Claim wherein the scene comprises a rendered view displaying a three-dimensional representation of the interior portion of the body.

32. System according to Claim 31 wherein the scene comprises a rendered view comprising a de-emphasised portion of at least part of the scan data to generate a cutaway view.

33. System according to Claim 32 wherein the scene comprises a rendered view comprising the emphasis of at least part of the scan data.

34. System according to any preceding claim wherein the scene comprises a rendered view comprising a plurality of planes.

35. System according to Claim 34 wherein the plurality of planes is selected by a user.

36. System according to Claim 34 wherein the plurality of planes comprises a plane coincident with an object of interest.

37. System according to Claim 34 wherein the plurality of planes comprises a plane at a configurable angle to a plane coincident with an object of interest.

Description:
VISUALISATION SYSTEM FOR NEEDLING

FIELD

The invention relates to a system and apparatus. Particularly, but not exclusively, the invention relates to a visualisation system and apparatus for needling.

BACKGROUND

When carrying out a surgical procedure, such as needling, a user of a needling apparatus typically needs to locate a target within the body, for example, a particular tissue structure, growth or bone. If the target is located within the body it is very often the case that the surgeon is unable to see the target, which makes the procedure inevitably more complicated. Current guidance is that "blind" procedures are unsafe and should not be undertaken.

One of the techniques that is used to provide real-time feedback during invasive surgical procedures is ultrasound. Ultrasound guided interventional procedures include biopsies, the placement of drains, aspirations, and peripheral nerve blocks.

Ultrasound guided needling is commonly performed either free-hand (using an unconstrained needle) or using a needle guide on the side of the ultrasound transducer. Needle guides do not aid the majority of procedures. Free-hand needling using ultrasound is difficult to master as it involves manually keeping the needle tip in the same narrow two-dimensional plane as the ultrasound beam whilst at the same time advancing the needle towards the target.

Augmented reality has previously been used in visualisation systems for use in ultrasound guided needling (www.ncbi.nlm.nih.gov/m/pubmed/15458132). Needle guidance has also been considered by Sheng Xu et al. in https://www.researchgate.net/publication/228446767_3D_ultrasound_guidance_system. Needle tracking is also considered in US20070073155A1.

Aspects and embodiments were conceived with the foregoing in mind.

SUMMARY

Systems in accordance with aspects may be used in needling procedures or other medical procedures.

Where positional or orientational data are described individually, it is intended to mean both positional and orientational data.

Viewed from a first aspect there is provided, a visualisation system configured to:

receive scan data from a scanning portion representative of an interior portion of a body; receive a first set of positional and orientational data indicative of the positions of at least one user; receive a second set of positional and orientational data indicative of the position of the scanning portion; generate one or more rendered views of the interior portion using the scan data and the first and second sets of positional and orientational data; receive input from the at least one user indicative of a change in their viewpoint or a desire to modify one or more features of the rendered views; modify the rendered views responsive to the user input; and combine the rendered views into a scene for display by the system.

The scan data may be generated by an ultrasound machine with an ultrasound transducer. The term body means a material object such as, for example, a limb of a patient, a body of a patient, an animal, a part of a manikin, a part of a phantom, a manikin, a phantom or a lump of meat or other material which may be used to simulate a scanning procedure. Input may be in the form of a passive user input such as, for example, the movement of a headset which the user is using to view the scene, or an active user input such as, for example, a voice command or an input detected through the motion sensing capability of a headset. User input may also be in the form of a button press on a keyboard or a gesture or touch-based input on a touch-screen.

The rendered views in the scene may be generated from the perspective of a user and overlaid into their real world view of the medical procedure. The rendered views may be modified in response to input comprising positional and orientational data representing movement of a user.

The system may be further configured to display a rendered view of the interior portion of the body. The generation of a rendered view may comprise generating a three-dimensional model representative of the interior of the body. The generation of a rendered view may further comprise applying image processing to the three-dimensional model to extract at least one view through the three-dimensional model. The generation of a rendered view may further comprise augmenting the three-dimensional model with image enhancement to enhance features of the three-dimensional model. The generation of a rendered view may further comprise augmenting the three-dimensional model with added indicia to indicate the location of features of the three-dimensional model.

The system may be further configured to receive positional data indicative of the position of a scanning probe configured to transmit the scan data to the system and generate the rendered view of the interior position using the scan data and the positional data.

Any of the positional data may be received contemporaneously with the scan data. The scan data may consist of a three-dimensional data set of ultrasound data which is changing in real-time (sometimes known as "4d ultrasound"). Alternatively, the scan data may comprise a plurality of two-dimensional frames from a scanning portion and positional and orientational data indicative of the position and orientation of the scanning portion. The system may be further configured to receive user input to modify any of the rendered views in the scene.
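By way of a non-limiting illustration, the following Python sketch shows one way the two forms of scan data described above could be represented in software; the class and field names (Pose, VolumeScan, FrameScan) are assumptions made for this example only and are not part of the disclosure.

```python
# Illustrative sketch only: one way to represent the two scan-data forms
# described above. All names are assumptions, not part of the disclosure.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray     # (3,) position of the scanning portion
    orientation: np.ndarray  # (3, 3) rotation matrix for its orientation

@dataclass
class VolumeScan:
    """'4D ultrasound': a 3D voxel block that is refreshed in real time."""
    voxels: np.ndarray       # (nx, ny, nz) intensity block
    timestamp: float

@dataclass
class FrameScan:
    """A single 2D frame plus the pose of the scanning portion that produced it."""
    pixels: np.ndarray       # (rows, cols) B-mode image
    pose: Pose
    timestamp: float
```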

A rendered view may be generated in real-time and co-aligned with the beam from the ultrasound transducer such that the structures seen in the view overlay the anatomy within the body which they image.

The system may further comprise a support portion arranged to support a scanning portion arranged to generate the scan data. The effect of this is that the user of the visualisation system does not need to hold a scanning portion. This leaves both hands free to carry out the procedure.

Furthermore, the system can be configured to automatically align the transducer with an object of interest without operator intervention.

The support portion may be arranged to move the scanning portion to generate the scan data.

The support portion may be arranged to generate positional data indicative of the position of the scanning portion.

The support portion may be arranged to receive positional data identifying a specified scan plane. The support portion may be arranged to move the scanning portion into the specified scan plane for the generation of scan data.

The support portion may be arranged to periodically move the scanning portion to repeatedly scan over a region of interest. The support portion may be arranged to be rested on a body such that it remains in a given location until removed by the operator; alternatively, it may be held in place by external means such as a support arm. The system may further comprise a needle to enable procedures such as needle biopsies, placement of drains, peripheral nerve blocks and needling procedures on internal organs such as the kidney and liver to be carried out.

The modification of a rendered view may be the addition of an image enhancement to enhance a feature in the interior of the body.

The modification of a rendered view may be the addition of indicia, for example, to identify a feature in the interior of the body. The indicia may comprise a text portion displaying information relating to a target location. The indicia may comprise a text portion displaying external data relating to the body. The indicia may comprise a geometric shape.

An alternative rendered view may contain a volumetric rendering of the three-dimensional model or provide a cut-away view of the three-dimensional model.

An alternative rendered view may comprise a plurality of rendered planes which may comprise a plane co-incident with the ultrasound beam, a plane coincident with an object of interest, such as, for example, a needle, and a plane which is at a configurable angle to the plane coincident with the object of interest.

Rendered views may be composited into a scene and displayed on one or more headsets, which may each be an augmented reality headset. Where augmented reality headsets are described, the term is intended to mean both augmented reality solutions where computer generated graphics are overlaid onto a user's field of view, and virtual/mixed reality solutions, where the view from a camera is composited with the computer generated graphics to generate a view that includes a computer generated version of what the user would ordinarily see.

The system may be further configured to receive positional data from any headset of the at least one headsets; and generate a view of the scene appropriate to the current location and orientation of each respective headset.
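As a hedged illustration of generating a view appropriate to a headset's tracked pose, the sketch below builds a simple view matrix from a headset position and rotation; the 4x4 column-vector convention is an assumption for this example only.

```python
# Sketch of generating a per-headset view: a view matrix built from the
# headset's tracked position and orientation (column-vector convention assumed).
import numpy as np

def view_matrix(headset_position, headset_rotation):
    """headset_rotation: 3x3 rotation from headset to world coordinates."""
    view = np.eye(4)
    view[:3, :3] = headset_rotation.T                       # world -> headset rotation
    view[:3, 3] = -headset_rotation.T @ headset_position    # bring world points into the headset frame
    return view
```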

The effect of this is that a plurality of users may each wear their own headset and the scene will be generated from the perspective of that user. Any indicia or image enhancement generated by one user may then form part of the view seen by each of the other users.

The system may be further configured to receive user input from any of the plurality of headsets, the user input indicating a modification to a rendered view; and modify the rendered view for the respective headset responsive to the user input from the respective headset.

The effect of this is that each user may make their own modifications to the scene. This means that a team may perform a procedure using the visualisation system and each member of the team may generate rendered views according to their own needs and preferences.

Each rendered view can be overlaid onto the real world in a configurable location in the augmented reality scene. A rendered view may be generated based on positional data for another user. This means that such a view is generated from the perspective of a different user and that view may be displayed for other users to see.

The system may be configured to display additional rendered views displaying data related to the body which are combined into the scene. The additional rendered views may be generated by the system without the use of positional and orientational data from either a user or a scanning portion. The system may be configured to display rendered views comprising one or more statically located views of configurable size. The system may be configured to display a rendered view which is generated using positional and orientational data indicative of the position and orientation of another user.

A rendered view may be a display of data relating to the body, which may have been previously generated before a procedure using the system is carried out.

A rendered view may be a dimensionally different view of another rendered view statically located in a different position in the scene (a "billboard" view).

This removes the necessity to study monitors to obtain the information about the interior portion of the body.

The modification of the scene may comprise the removal of a rendered view.

The user may see different rendered views dependent on where they look in the scene.

The input from the user may identify a target for insertion of a needle. The user input may be voice input, gaze input, keyboard input, or it may be a gesture which is detected by motion detection. The system may be configured to, responsive to receiving the input identifying a target for insertion of a needle, overlay a marker onto the target.

The effect of this is that the user may add markers to rendered views.

The system may be further configured to generate a path identifier indicating a path from the exterior of the body to the target for insertion of the needle.

The path identifier may be a line or other marking which follows the path from the exterior of the body to the target. The path identifier may be generated to be orthogonal or at a configurable angle to a scan plane of a scanning portion. Alternatively, the system may receive user input indicating a path the user wishes to take when inserting the needle.

The system may be further configured to track objects of interest, such as a needle, by using image processing techniques. These techniques may use data from multiple time points of the three-dimensional data set of voxels. This temporal comparison combined with spatial feature tracking may reduce the search space required to identify objects of interest and may also reduce positional error of the said tracked objects.

The system may be configured to: receive a request to track an item; retrieve a plurality of chronologically sequenced three-dimensional data blocks, each data block corresponding to an instance in time at which the three-dimensional data block is generated; determine the correlation of the plurality of chronologically sequenced three-dimensional data blocks to extract the most likely track of the item from the possible tracks.
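A minimal sketch of this temporal-correlation idea is given below, assuming candidate tracks have already been extracted from each time-stamped block as (point, direction) pairs; the scoring heuristic and its weights are illustrative assumptions rather than the method claimed.

```python
# Minimal sketch: prefer the candidate track that best agrees with candidates
# already accepted from earlier time-stamped blocks. Heuristic weights are
# assumptions for illustration only.
import numpy as np

def track_consistency(candidate, history, w_dir=1.0, w_pos=0.1):
    """Lower score means better agreement with previously accepted tracks.

    candidate and history items are (point, unit_direction) pairs of 3-vectors.
    """
    if not history:
        return 0.0
    score = 0.0
    for point, direction in history:
        score += w_dir * (1.0 - abs(np.dot(candidate[1], direction)))  # direction agreement
        score += w_pos * np.linalg.norm(candidate[0] - point)          # positional drift
    return score / len(history)

def most_likely_track(candidates_per_block):
    """candidates_per_block: list (oldest to newest) of candidate-track lists."""
    history = []
    best = None
    for candidates in candidates_per_block:
        if not candidates:
            continue
        best = min(candidates, key=lambda c: track_consistency(c, history))
        history.append(best)
    return best  # most likely track in the newest block
```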

The effect of this is that time-series based techniques can be used to reduce the search space needed to track the item of interest. This reduces the computational power required to locate the item of interest. It may also improve location accuracy. The system may be configured to use image enhancement to display the most likely location of the item in the rendered view.

The system may be configured to use image enhancement to display the most likely track of the item on the rendered view. The track may be emphasised using image enhancement to emphasise the track of the item.

The generation of a rendered view may comprise the de-emphasis of at least part of the three-dimensional model data block to generate a cut-away view. The effect of this is that data of interest will be emphasised and that data which is not of interest will be de-emphasised from the rendered view.

The scene may comprise a rendered view comprising the de-emphasised portion of at least part of the scan data to generate a cut-away view.

The scene may comprise a rendered view comprising the emphasis of at least part of the scan data. The scene may comprise a rendered view comprising a plurality of planes which may be selected by a user using user input.

The plurality of planes may also comprise a plane coincident with an object of interest. The plurality of planes may also comprise a plane at a configurable angle to a plane coincident with an object of interest.

DESCRIPTION

First and second embodiments in accordance with the first aspect will now be described, by way of example only, and with reference to the following drawings in which:

Figure 1a schematically illustrates a visualisation system in accordance with the embodiment;

Figure 1b schematically illustrates a control module 106 in accordance with the embodiment;

Figure 1c schematically illustrates a transducer gathering data from a volume;

Figure 2a is a flow diagram illustrating the steps undertaken by the visualisation system to generate a rendered view from two dimensional data;

Figure 2b illustrates a set of sub-processes that are performed as part of the generation of a rendered view;

Figure 3a schematically illustrates a scene overlaid into the field of view of a headset;

Figure 3b schematically illustrates a team of users each wearing a headset to communicate with the system in accordance with the embodiments;

Figure 4a is a flow diagram illustrating the steps undertaken by the visualisation system to generate a rendered view from three dimensional data;

Figure 4b illustrates a set of sub-processes that are performed as part of the generation of a rendered view;

Figure 5 is an illustration of a rendered view which identifies a track for the insertion of a needle;

Figure 6 is an illustration of a rendered view on which a needle is identified;

Figure 7 is an illustration of a rendered view in which a region of interest is identified;

Figure 8 is an illustration of the use of multiple scan planes in a rendered view; and

Figure 9 is an illustration of a scene containing two example rendered views that may be provided by the system in accordance with the embodiment;

Figure 10 is an illustration of the application of time-series to the tracking of an object; and

Figures 11a and 11b are an illustration of a probe cradle which may be used with a system in accordance with the embodiment.

We now illustrate, with reference to Figures 1a to 11b, a visualisation system 100 in accordance with the first and second embodiments.

The visualisation system 100 is described as part of a needling system 200 for reasons of illustration only. Needling system 200 comprises an ultrasound transducer 102 configured to generate scan data, an ultrasound machine 103 configured to receive the scan data from the ultrasound transducer 102 and to generate an output stream of ultrasound scan data, and a control module 106 which is configured to receive the output stream of ultrasound scan data from the ultrasound transducer via a control interface 108. The needling system 200 further comprises a needle 110 which may be used in a needling procedure.

The tracking module 107 is configured to receive location and orientation data from at least one of the ultrasound transducer 102, and the headset 104. The tracking module 107 uses a suitable tracking system 105, such as magnetic tracking or optical tracking, to process the location and orientation data. The tracking may be performed by a tracking device built into the device being tracked or by an external tracker such as an Ascension TrakSTAR.

The location and orientation data received from the ultrasound transducer 102 indicates the location and orientation of the ultrasound transducer 102. The location and orientation data received from the headset 104 indicates the location and orientation of the headset 104.

The tracking module 107 is configured to use the location and orientation data transmitted from the ultrasound transducer 102 and the headset 104 to determine the position and orientation of the ultrasound transducer 102 and the headset 104. The tracking module 107 is configured to transmit the determined position and orientation of the ultrasound transducer 102 and the headset 104 to the other sub-modules in the control module 106 as described below. The tracking module 107 is configured to process the location and orientation data in real-time, i.e. as it is received from the respective components, to generate position and orientation data.

The visualisation system 100 further comprises at least one augmented reality headset 104 which communicates with the visualisation system 100 using a headset interface 130. An example of a suitable headset 104 would be the Oculus Rift, the HTC Vive or the Microsoft HoloLens.

The use of an augmented reality headset 104 in this way means that the visualisation system 100 generates an augmented reality scene in the working environment which is used by users of the visualisation system 100. We will describe later in this description how the rich range of user interface features of an augmented reality headset 104 can enable each of the users of the visualisation system 200 to provide instructions to the visualisation system 100 to augment their reality as part of the use of the visualisation system 100 to perform a needling procedure using the needling system 200.

The visualisation system 100 may further comprise additional augmented reality headsets which are being used by a plurality of users. Each of the augmented reality headsets is arranged to provide individual input to the visualisation system in accordance with what is described below. The following description is applicable to interaction of any of the additional headsets being used with the visualisation system 100.

Indeed, a plurality of the augmented reality headsets 104 may interact with the visualisation system simultaneously. The tracking module 107 allows the scene displayed by each headset to be generated using the positional and orientational data for that user.

Control module 106 comprises a three-dimensional (3D) model constructor sub-module 106a, a reslicer sub-module 106b, a volume rendering sub-module 106c, an image segmentation module 106d, an image compositor module 106e and a tracking module 107. Each of these sub-modules and the tracking module 107 are configured to transmit data between one another using standard data interfacing techniques.

We will now describe how a needling system 200 in accordance with a first embodiment is used to generate a scene containing a rendered view of an interior portion of an arm 300 of a patient during a needling procedure conducted using a tracked ultrasound transducer 102. This is described with reference to Figure 2a. A user of the needling system 200 uses the transducer 102 in a step S200 to scan the arm 300 by moving the transducer 102 along the arm 300 to generate scan data. This is illustrated in Figure 1c where the scan plane of the transducer 102 is enumerated by the reference numeral 170. The ultrasound machine 103 interfaces with the transducer 102 to produce a series of two-dimensional frames of scan data in a step S202. At the same time, the frames of scan data are displayed on a monitor on the ultrasound machine 103.

The two-dimensional frames of scan data are also transmitted to an external monitor port on the ultrasound machine 103. Examples of suitable external monitor ports are high definition multimedia interface (HDMI), video graphics array (VGA) and digital video interface (DVI).

In step S204, the two-dimensional frames of scan data are transmitted to the control module 106 as a block of voxels via the ultrasound data interface 108, which associates a time-stamp with the frame as it is received. This time-stamped block of voxels is then transmitted by the control module 106 to the 3D model constructor sub-module 106a. The ultrasound data interface 108 may be a video capture card which provides an input data stream to the volume constructor sub-module 106a. Contemporaneously, the location and orientation data for the transducer 102 is received by the tracking system 105. The tracking module 107 uses the location and orientation data received by the tracking system 105 to generate position and orientation data for the transducer 102, also associating a time-stamp with the data as it is received. The position, orientation and time-stamp data is fed to the volume construction module 106a.

The volume constructor module 106a associates the two-dimensional frames from the ultrasound data interface 108 with their corresponding location and orientation data from the tracking module 107 using the time-stamps.
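The time-stamp association step might, purely as an illustration, be implemented as a nearest-timestamp lookup along the lines of the following sketch; it assumes the tracking stream is sampled densely enough for a nearest-timestamp match to be adequate.

```python
# Sketch of associating a frame with the pose whose timestamp is closest to it.
# Assumes pose_timestamps is a non-empty, ascending list and poses is parallel to it.
import bisect

def associate_frame_with_pose(frame_timestamp, pose_timestamps, poses):
    """Return the pose whose timestamp is closest to the frame's timestamp."""
    i = bisect.bisect_left(pose_timestamps, frame_timestamp)
    if i == 0:
        return poses[0]
    if i == len(pose_timestamps):
        return poses[-1]
    before, after = pose_timestamps[i - 1], pose_timestamps[i]
    if (after - frame_timestamp) < (frame_timestamp - before):
        return poses[i]
    return poses[i - 1]
```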

In a step S206 the reslicer module 106b creates a Cartesian representation of the body as a 3D model, by raster scanning the reslice plane along the z-axis, and storing the resultant reslice frames as a 3D array of voxels. Where multiple frames are within a configurable distance from a voxel, and could therefore be used to contribute to it, the most recent frame is used. The control module 106 is configured to delete frames that are older than a pre-configurable age. The 3D construction sub-module 106a is configured to store the generated 3D model data block in a format which is suitable for further processing by the reslicer sub-module 106b, the volume rendering module 106c, the image segmentation sub-module 106d and the indicia adder 106f in a step S208. The 3D model constructor sub-module 106a then transmits the 3D model data block in the stored format to each of the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f in a step S210 immediately after the 3D model data block has been stored, i.e. in real-time. As set out below, the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f may be used to modify rendered views based on input by the user.
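The "most recent frame wins" voxel fill described above could, for example, look like the following sketch. It assumes frame objects of the kind shown in the earlier FrameScan example (a pixel array, a pose with position and orientation, and a timestamp), unit pixel spacing, and a frame-plane normal taken from the third column of the orientation matrix; all of these are simplifying assumptions for illustration only.

```python
# Illustrative voxel-fill sketch: among the frames whose plane passes close to
# the voxel, the most recent frame contributes the intensity.
import numpy as np

def fill_voxel(voxel_centre, frames, max_distance):
    """frames: objects with .pixels (2D array), .pose (position, orientation) and .timestamp."""
    best = None
    for f in frames:
        normal = f.pose.orientation[:, 2]              # frame plane normal (assumption)
        offset = voxel_centre - f.pose.position
        if abs(np.dot(offset, normal)) > max_distance:
            continue                                   # frame plane too far from this voxel
        if best is None or f.timestamp > best.timestamp:
            best = f                                   # most recent frame wins
    if best is None:
        return None
    # Project the voxel centre into the frame's pixel grid (unit pixel spacing assumed).
    u = int(round(np.dot(voxel_centre - best.pose.position, best.pose.orientation[:, 0])))
    v = int(round(np.dot(voxel_centre - best.pose.position, best.pose.orientation[:, 1])))
    rows, cols = best.pixels.shape
    if 0 <= v < rows and 0 <= u < cols:
        return best.pixels[v, u]
    return None
```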

The reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f then call specific image processing routines to perform the operations on the 3D model data block that will now be described. The operations are performed independently by the respective sub-module on the 3D model data block in a step S212 as four independent sub-processes which we enumerate as S212A, S212B, S212C and S212D which are illustrated in Figure 2b.

In step S212A, the reslicer sub-module 106b calls an image reslicing routine such as nearest neighbour reslicing as described by https://camtools.cam.ac.uk/access/content/group/d4fe6800-4ce2-4bad-8041-95751Qe5aaed/Public/3G4/3G4_lab_handout_13.pdf to generate one or more two-dimensional slice planes from the 3D model data block. As is set out below, the one or more two-dimensional slice planes may be coincident with an object of interest or at a configurable angle to that plane. The one or more two-dimensional slice planes may also be requested for use by the volume constructor module 106a.

The reslicer sub-module 106b outputs the data corresponding to the generated two-dimensional slice planes from the 3D model data block. The data is transmitted to the image compositor sub-module 106e.
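A hedged sketch of nearest-neighbour reslicing is given below: it samples a two-dimensional plane, defined by an origin and two in-plane axes, out of a 3D voxel block. Unit voxel spacing and the axis conventions are assumptions made for brevity, not part of the disclosure.

```python
# Nearest-neighbour reslicing sketch: sample a 2D plane out of a 3D voxel block.
import numpy as np

def reslice_nearest(volume, origin, axis_u, axis_v, out_shape):
    """origin, axis_u, axis_v are 3-vectors in voxel coordinates; out_shape is (rows, cols)."""
    rows, cols = out_shape
    slice_img = np.zeros(out_shape, dtype=volume.dtype)
    for r in range(rows):
        for c in range(cols):
            p = origin + r * axis_v + c * axis_u      # point on the requested plane
            i, j, k = np.rint(p).astype(int)          # nearest-neighbour voxel lookup
            if (0 <= i < volume.shape[0] and
                    0 <= j < volume.shape[1] and
                    0 <= k < volume.shape[2]):
                slice_img[r, c] = volume[i, j, k]
    return slice_img
```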

In step S212B, the volume rendering sub-module 106c is configured to call an image rendering routine such as the routine described in http://www.h3dapi.org/modules/mediawiki/index.php/MedX3D. The data is then transmitted to the image compositor sub-module 106e.

In step S212C, the image segmentation sub-module 106d calls an image enhancement routine which receives the 3D model data block as an input. The image enhancement routine may apply thresholding, clustering or compression-based methods to the received volumetric data block to generate a segmented 3D model data block. The image enhancement routine may also apply image segmentation methods such as those described in https://www.ncbi.nlm.nih.gov/pubmed/11548934 to generate a segmented 3D model data block. The image segmentation sub-module 106d may also be configured to track objects of interest such as a needle and to pass the position and orientation of said objects to the reslicer module 106b to allow automatic update of rendered scan planes based on the position of objects within the body.
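As a minimal illustration of the thresholding-based segmentation mentioned above, the sketch below separates bright, strongly reflecting structures (such as a needle shaft) from the background with a simple intensity threshold; the fallback threshold of mean plus two standard deviations is an assumption, and a practical system would use the more sophisticated methods cited.

```python
# Minimal thresholding-based segmentation sketch.
import numpy as np

def segment_by_threshold(volume, threshold=None):
    """Return a boolean mask of voxels above the threshold.

    If no threshold is given, a crude data-driven value (mean + 2*std) is used,
    which is an assumption for illustration only.
    """
    if threshold is None:
        threshold = volume.mean() + 2.0 * volume.std()
    return volume > threshold
```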

The segmented 3D model data block generated by the image segmentation sub-module 106d is then transmitted to the image compositor sub-module 106e.

In step S212D, the indicia adder sub-module 106f is configured to generate and/or retrieve additional information which may be of use to the user. Such additional information may comprise other data from other data services and/or other visual information which may be of use to the user such as marker data which is then transmitted to the image compositor sub-module 106e.

The data which has been transmitted from the image reslicer sub-module 106b, the volume renderer sub-module 106c, the image segmenter sub-module 106d and the indicia adder sub-module 106f to the image compositor sub-module 106e is then combined in a step S214 to generate rendered views which are then further composited to form the scene.

The combination in step S214 generates the rendered views and the scene using well known compositing techniques such as depth buffering, alpha blending and techniques used in industry-standard 3d model-based game engines such as Unity 5.6 (https://unity3d.com/) and Unreal4 (https://www.unrealengine.com).
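The alpha blending referred to above can be illustrated with the standard "over" operator; the sketch below assumes straight (non-premultiplied) alpha and floating-point RGBA images in the range [0, 1], which are assumptions for this example only.

```python
# Sketch of the "over" alpha-blending step used when layering rendered views.
import numpy as np

def alpha_over(front, back):
    """Composite RGBA image `front` over RGBA image `back` (same shape, floats in [0, 1])."""
    fa = front[..., 3:4]                               # alpha of the front layer
    ba = back[..., 3:4]                                # alpha of the back layer
    out_a = fa + ba * (1.0 - fa)
    out_rgb = (front[..., :3] * fa + back[..., :3] * ba * (1.0 - fa)) / np.clip(out_a, 1e-6, None)
    return np.concatenate([out_rgb, out_a], axis=-1)
```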

The image compositor sub-module 106e transmits the scene data to the augmented reality headset 104 in a step S216. The scene is then displayed for the user of the needling system 200 in real-time by the headset 104.

The control module 106 then loops back to step S200 where the next collection of scan data is received from the ultrasound machine 103 so that the real-time generation of the scene through steps S200 to S216 can be maintained.

We will now describe how a needling system 200 in accordance with a second embodiment is used to generate a rendered view of an interior portion of an arm 300 of a patient during a needling procedure using an ultrasound transducer 102. This is described with reference to Figure 4a.

A user of the needling system 200 uses the transducer 102 in a step S400 to scan the arm 300 by placing the transducer 102 on the arm 300 and using the ultrasound machine 103 to generate a three-dimensional block of scan data. As in the previous embodiment, this is illustrated in Figure lc.

The ultrasound machine 103 interfaces with the transducer 102 to generate a 3D model data block representing the body in a step S402. The 3D model data block is then transmitted to the control module 106 via the ultrasound data interface 108 using a standard data bus. The 3D model data block is formed from a block of voxels.

Contemporaneously, the location and orientation data for the transducer 102 is received by the tracking system 105. The tracking module 107 uses the location and orientation data received by the tracking system 105 to generate position and orientation data for the transducer 102. The position and orientation data is fed to the image compositor sub-module 106e.

The position and orientation data is only required in this embodiment for the purposes of orienting the 3D model data block and not for the construction of the 3D model data block. The block of voxel data is then transmitted to the control module 106 via the control interface 108 in a step S404 where the block of voxels is transmitted to the 3D model constructor sub-module 106a via the ultrasound data interface 108 as the block of voxels is received by the control module 106, i.e. in real time. The control module 106 associates a timestamp with the block of voxel data as it is received and may discard data older than a pre-configurable age.

Responsive to receiving the 3D model data block through the ultrasound data interface 108, the volume construction module 106a is configured to store the 3D model data block in a format which is suitable for further processing by the reslicer module 106b, the volume rendering module 106c, the image segmentation module 106d and the indicia adder sub-module 106f in a step S408. The timestamp assigned to the block of voxel data is also stored. The 3D model constructor sub-module 106a then transmits (in parallel) the timestamped 3D model data block in the stored format to each of the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f in a step S410 immediately after it has been stored. As set out below, the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f may be used to modify any of the rendered views generated by the system 100 based on input from a user.

The reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f then call specific image processing routines to perform the operations on the 3D model data block that will now be described. The operations are performed independently by the respective sub-module on the 3D model data block in a step S412 as four independent sub-processes which we enumerate as S412A, S412B, S412C and S412D which are illustrated in Figure 4b.

In a step S412A, the reslicer sub-module 106b calls an image reslicing routine, such as the nearest neighbour reslicing routine referenced above in step S212A, to generate one or more two-dimensional slice planes from the 3D model data block. As in the previous embodiment, the one or more two-dimensional slice planes may be requested by each user and may be planes that are coincident with an object of interest or transverse to that plane. The two-dimensional slice planes may also be requested by the volume constructor module 106a.

The reslicer sub-module 106b outputs the data corresponding to the generated two-dimensional slice planes from the 3D model data block. The data is transmitted to the image compositor sub-module 106e.

In a step S412B, the volume rendering sub-module 106c is configured to call an image rendering routine such as the routine described in http://www.h3dapi.org/modules/mediawiki/index.php/MedX3D. The data is then transmitted to the image compositor sub-module 106e.

In a step S412C, the image segmentation sub-module 106d calls an image enhancement routine which receives the 3D model data block as an input. The image enhancement routine may apply thresholding, clustering or compression-based methods to the received volumetric data block to generate a segmented 3D volumetric data block. The image enhancement routine may also apply image segmentation methods such as those described in https://www.ncbi.nlm.nih.gov/pubmed/11548934 to generate a segmented 3D volumetric data block. The segmented 3D model data block generated by the image segmentation sub-module 106d is then transmitted to the image compositor sub-module 106e.

In a step S412D, the indicia adder sub-module 106f is configured to generate and retrieve additional information which may be of use to the user. Such additional information may comprise other data from other data services and/or other visual information which may be of use to the user such as marker data which is then transmitted to the image compositor sub-module 106e.

The data which has been transmitted from the image reslicer sub-module 106b, the volume renderer sub-module 106c, the image segmenter sub-module 106d and the indicia adder sub-module 106f to the image compositor sub-module 106e is then combined in a step S414 to generate rendered views which are then further composited to form the scene. The combination in step S414 generates the rendered views and the scene using well known compositing techniques such as depth buffering, alpha blending and techniques used in industry-standard 3d model-based game engines such as Unity 5.6 (https://unity3d.com/) and Unreal4.

The image compositor sub-module 106e transmits the scene data to the augmented reality headset 104 in a step S416. The scene is then displayed for the user of the needling system 200 in real-time by the headset 104.

The control module 106 then loops back to step S400 where the next collection of scan data is received from the ultrasound machine 103 so that the real-time generation of the scene through steps S400 to S416 can be maintained.

An example of a scene generated by either the first embodiment (Figure 2a) or the second embodiment (Figure 4a) is illustrated in Figure 3 as part of a display which is overlaid onto the field of view of the user of the headset 104 as part of the augmented reality environment which is generated by the visualisation system 100 using the headset 104. Figure 3 illustrates the rendered view from the perspective of the headset 104 looking onto an arm 300 of a patient whilst an ultrasound transducer 102 (with scan plane 320) is being used to generate scan data for the arm 300. The user is looking for vessel 306 using the ultrasound transducer 102 and the overlay of the rendered view into the field of view of the headset enables the vessel 306 to be visualised whilst the procedure is being carried out with the user looking at the arm 300.

That is to say, whilst the user is wearing the headset 104 and looking at the arm 300 to carry out the needling procedure using the needling system 200, a rendered view of the interior portion of the arm 300 is superimposed onto the user's view of the real-world surroundings whilst the user looks at the arm 300.

The rendered view is in 3D and the rendered view provides the user with a view of the internal organs of the arm 300 in 3D, optionally with certain structures highlighted by the Image Segmentation Module 106d. Each of the users is provided with their own rendered view of arm 300 using positional and orientational data obtained from each headset.

It may be that the user wants to only perform a procedure on vessel 306 and does not want to collide with vessels 310 and 312. The visualisation of vessels 310 and 312 assists the user in identifying vessels 310 and 312 which increases the chances that the user will avoid these vessels during the procedure.

The effect of displaying the rendered view on the headset 104 in real-time, i.e. as it is generated, is that the rendered view is representative of the arm 300 at that point in time and provides the user with a constantly updated image of the interior of the arm.

This means that the system 100 can provide an updated view of the interior of the arm shown by an ultrasound image, which would, for example, display the progress of a needle through the arm as it is inserted during a needling procedure.

In providing the rendered view in the same field of view as the procedure, the user does not have to maintain watch over a remotely situated monitor on the ultrasound machine 103. This has ergonomic benefits as the user does not have to keep turning their head to view the monitor on the ultrasound machine 103.

By co-locating a rendered view with the anatomical structures it is imaging, the operator gains further ergonomic benefits as it becomes more intuitive as to where to insert the needle to obtain the trajectory required. This may reduce the training need and improve patient safety.

As will be appreciated from Figure 3, the benefits of visualisation system 100 can be realised without the need to be using the system during a needling procedure. The first embodiment described with respect to Figure 2a and the second embodiment described with respect to Figure 4a may additionally comprise the following optional features. The embodiment described with respect to Figure 2a and the embodiment described with respect to Figure 4a may also be described in the same way with respect to any of a plurality of augmented reality headsets 104.

That is to say, a rendered view may be generated for any of the augmented reality headsets 104 using positional and orientation data transmitted from the respective headset to the tracking module 107. The effect of this is that if a team of users is using the visualisation system 100 to perform a needling procedure, the rendered view is generated by the visualisation system 100 and overlaid into the real-world view of the situation from the perspective of each user.

For example, if the arrangement in Figure 3b is considered, we have a patient lying on a bed with a team of users working around the patient 340.

The visualisation system 100 generates an augmented reality environment which overlays computer-generated imagery into the real-world environment surrounding the users and the patient. The effect of this is that multiple sets of imagery (enumerated as 346a, 346b, 346c, 346d and 346e) can be augmented into the real-world within the augmented reality environment generated by the visualisation system 100 and the headsets (enumerated as 104a, 104b, 104c and 104d) being worn by the users (342a, 342b, 342c, 342d). Naturally, each of the users will have a different perspective on the arm 300 of the patient.

The positional and orientational data transmitted by the respective headsets to the tracking module 107 means that the visualisation system 100 generates a rendered view for each user that is overlaid into that user's real-world view of the environment through the respective headset. That is to say, when the user is looking down at the patient during the procedure, the rendered view provided in steps S216 or S416 is overlaid into that user's real-world view to enable them to see a rendered view of the ultrasound imagery overlaid onto the arm 300 of the patient.

The image data to be composited into the scene by the Image Compositing module 106e is selected from the data from sub-modules 106b, c, d and f based on user input and configuration data. The data from a particular sub-module may be displayed differently in different rendered views. Where required, the sub-modules may process their input data more than once for a particular 3D data block with different configuration parameters.

Optionally, the visualisation system 100 may render more than one rendered view into a user's scene by performing steps S200-S214 or steps S400-S414 to produce each rendered view and then using the Image Compositing module 106e to further composite these rendered views in to the users' scene.

An example of this is creating a rendered view from the perspective of another user (a second user) of the system who is wearing another headset 104. The second user may be viewing the procedure at a different angle relative to the patient. The first user may want to see the rendered view that the second user is being provided with.

Other examples include rendered views of data relating to the body. More than one such rendered view may be generated, each containing different subsets of the data or data relating to different parts of the body. Examples of data from the body include CT, MRI, X-Ray, previous ultrasound data, data from the patient's notes, histology results, contemporaneous results from other patient monitoring systems.

As shown in Figure 8, a rendered view may be shown co-aligned with the structures that it images within the body.

Additionally, as shown in Figure 3, a rendered view may be shown so that it is co-aligned to the end of the ultrasound transducer 102, tracking this as it is moved.

Rendered views may also be displayed as statically located "billboard" displays placed within the augmented reality scene by the visualisation system 100. Billboard views may be of different sizes. Billboard views are illustrated by Figure 9, which shows an example of a billboard view 904 relative to a rendered view 902 which shows a collection of billboard views arranged in the augmented reality scene.

Optionally, the needling system 200 may further comprise a probe cradle 120 configured to hold the ultrasound transducer 102 at the appropriate position on the patient. This means that the user of the needling system 200 does not need to use one of their hands to hold the ultrasound transducer 102 during the needling process. This leaves both hands free to carry out the procedure. The probe cradle is illustrated with reference to Figures 11a and 11b.

As shown in Figure 11a, the probe cradle 120 may comprise a jointed arm 1102 coupled to the probe 102 by an attachment device comprising a collar 1108 around the probe 102. Optionally, the collar 1108 may be moved by motors 1105 between the individual joints of the jointed arm 1102 allowing the position and orientation of the probe 102 to be controlled in up to six degrees of freedom. Encoders may be attached between the individual joints of the arm 1102 and the motors 1105. Data from the encoders can be used to infer positional and orientational information of the probe 102 by reference to the position of the joints.

Alternatively, as shown in Figure 11b, the probe cradle 120 may be built into a cuff 1106. The probe cradle 120 may comprise arms 1110 coupled to an attachment device comprising a collar 1108 around the probe 102. Optionally, the collar 1108 may be moved by motors 1105 coupled to the arms 1110 to allow the orientation of the probe 102 to be controlled in up to three axes. Encoders may be attached between the arms 1110 and the motors 1105. Data from the encoders can be used to infer positional and orientational information of the probe 102 by reference to the position of the arms 1110.

Using encoders as part of a probe cradle 120 eliminates the need to affix a tracking device to the ultrasound transducer 102.
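By way of illustration, encoder readings on a jointed arm could be converted into a probe pose by forward kinematics along the lines of the sketch below; the joint parameterisation (a rotation about a fixed axis followed by a fixed link offset per joint) is an assumption, and a real cradle would use its own calibrated kinematic model.

```python
# Forward-kinematics sketch: accumulate each joint's rotation and link offset
# into a world-space probe pose from encoder-measured joint angles.
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula for a rotation of `angle` radians about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    k = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * k + (1 - np.cos(angle)) * (k @ k)

def probe_pose_from_encoders(joint_angles, joint_axes, link_offsets):
    """Return (position, rotation) of the probe for the given joint readings."""
    rotation = np.eye(3)
    position = np.zeros(3)
    for angle, axis, offset in zip(joint_angles, joint_axes, link_offsets):
        rotation = rotation @ rotation_about_axis(np.asarray(axis, float), angle)
        position = position + rotation @ np.asarray(offset, float)
    return position, rotation
```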

For example, the probe cradle 120 may be configured to receive the data relating to the plane containing the needle from the image reslicer sub-module 106b and use that data to move the ultrasound transducer 102 into an orientation in which a scan plane from the ultrasound transducer 102 coincides with the longitudinal plane of the needle 110. The ultrasound transducer 102 may then be held by the probe cradle 120 in that position.

Alternatively, the requested positional and orientational data may be time dependent, to cause the cradle to sweep the ultrasound transducer 102 through a volume of the body to allow this volume to be repeatedly scanned. In the first embodiment, such a sweep over the area of interest enables a plurality of two-dimensional planes to be obtained and hence for the system to create a 3D model data block of the volume of interest without the user having to manually manipulate the ultrasound transducer 102. By programming this sweep to be periodic, the volume of interest may be scanned repeatedly, to allow a standard 2d ultrasound transducer to be used to image a moving 3d volume of the body.
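A periodic sweep request of the kind described could, for instance, be generated as a time-dependent tilt angle; the triangular-wave profile in the sketch below is an illustrative assumption only.

```python
# Sketch of a time-dependent sweep request: the requested tilt oscillates
# between two angles so the volume of interest is re-scanned periodically.
def sweep_angle(t, angle_min, angle_max, period):
    """Tilt angle (radians) requested at time t for a back-and-forth sweep."""
    phase = (t % period) / period                           # 0..1 within one sweep cycle
    tri = 2 * phase if phase < 0.5 else 2 * (1 - phase)     # triangular wave 0..1..0
    return angle_min + tri * (angle_max - angle_min)
```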

In current needling systems the needle is inserted adjacent to the probe to allow it to be visualised easily. The needle is guided along or into the scan plane of the ultrasound transducer 102 by visually aligning its shaft with the ultrasound transducer 102 or by using a mechanical guide.

In accordance with either of the first or second embodiments, when using a rendered image containing either a plane not aligned with the ultrasound beam, or a volumetric rendering of the 3D data block, the needle 110 may be visualised even if it is inserted away from the transducer and its trajectory no longer needs to be aligned with the scan plane of the ultrasound transducer 102, since data out-of-plane of the scan plane is made visible. Therefore, the ultrasound transducer may be placed in a position to best optimise the information that can be inferred from the rendered image whilst the needle may be placed in a position to simplify its safe insertion by avoiding intermediate structures.

This enables certain procedures to be simplified.

One example of such a procedure is a needle biopsy of a kidney. A costal position provides a better view of the kidney than would be obtainable from the patient's back as the muscle density in a patient's back is too high to enable a good ultrasound view of the kidney to take place. It is, however, desirable to insert the needle into the kidney through the back as this significantly reduces the chance of internal bleeding.

Another example of such a procedure is a liver needling. A large portion of the liver typically lies under the ribs of a patient and it is not possible to obtain good ultrasound images through the ribcage. In taking the rendered view from the sub-costal position, a rendered view of the liver can be generated which can enable a good image of the liver to be obtained whilst inserting the needle intercostally.

Optionally, the user may provide an input to the needling system 200 from the headset 104 to indicate a needling target on the rendered view. This is illustrated in Figure 5 where the ultrasound transducer 102 is in position over a patient's arm and a nerve 500 which is to be the subject of a peripheral nerve block is part of the rendered image 502 generated in steps S214 or S414.

The input may be a voice command which can be detected by the headset 104 or a gesture input which can also be detected using the motion sensing capability or camera of the headset 104. As an example, using a Microsoft HoloLens, the gaze cursor may be used to provide the location of the object to be marked. Alternatively, voice or gesture input may be used. The resultant location information is fed into the control module 106 and subsequently to the indicia adder sub-module 106f.

Responsive to receiving the input, the headset 104 transmits a request to the control module 106 via the headset interface 130, the control module 106 is then configured to feed a request to the indicia adder sub-module 106f for a marker to be added to the rendered view.

The indicia adder sub-module 106f then adds the marker 504 to the 3D model data block at the indicated location in the 3D model data block as part of steps S212 or S412 and transmits the 3D model data block to the image compositor sub-module 106e in the step S214 or S414 where the rendered view is generated with the addition of the marker 504 at the indicated location. The control module 106 maintains the presence of the marker at a static position in world-space on the generated rendered view until further user input indicates that the marker should be removed.

The user may then issue further user input indicative of a desire to insert a line onto a rendered view to identify a suitable position and direction for a needle to be inserted. This input may use any of the input features provided by the headset 104, for example its gesture or voice recognition capabilities.

Responsive to receiving the input, the headset 104 transmits a request to the control module 106 via the headset interface 130. The control module 106 is then configured to feed a request to the indicia adder sub-module 106f for a line 506 to be added to the rendered view. The indicia adder sub-module 106f then calls a line addition routine which receives the data regarding the marker as an input.

In generating the line 506, the image segmentation sub-module 106d may issue a call to the tracking module 107 to obtain data indicative of the orientation of the ultrasound transducer 102. The image segmentation sub-module 106d may use this orientation information as an input to the routine which generates the line 506, so that the line is at a configurable angle to the scan plane of the ultrasound transducer 102 to maximise the ultrasound reflectance and hence enhance the visibility of the needle 110. The control module 106 maintains the presence of the marker 504 and the line 506 on the generated rendered view until further user input indicates that the marker 504 and the line 506 should be removed.
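
Purely as an illustration of the geometry involved, the following Python sketch constructs a guide line through the marker position whose direction makes a configurable angle with the transducer scan plane. The function name and arguments are assumptions for this sketch; the plane normal and an in-plane reference direction would be derived from the transducer orientation supplied by the tracking module 107.

```python
import numpy as np

def guide_line(marker, plane_normal, in_plane_dir, angle_deg, length=0.1):
    """Return two endpoints of a line through `marker` tilted out of the scan
    plane by `angle_deg` degrees (in_plane_dir must not be parallel to the normal)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = in_plane_dir - np.dot(in_plane_dir, n) * n   # project into the scan plane
    d = d / np.linalg.norm(d)
    theta = np.radians(angle_deg)
    direction = np.cos(theta) * d + np.sin(theta) * n  # configurable out-of-plane tilt
    start = np.asarray(marker, dtype=float)
    return start, start + length * direction
```

The two endpoints could then be passed to the indicia adder sub-module 106f so that the line 506 is drawn alongside the marker 504 on the rendered view.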

Optionally, the headset 104 may enable the user of the needling system 200 to interact with the system to modify the scene. Optionally, a user may wish to remove a rendered view from the scene. In order to do this, the user may issue a voice command "remove image" which is detected by the microphone of the headset 104 and, responsive to receiving this command, the headset 104 will switch off the display of the rendered view generated in step S212 or S412. Responsive to the voice command "reinstate image", which is detected by the microphone of the headset 104, the headset will reinstate the display of the rendered view which has been contemporaneously generated in steps S200 to S216 or steps S400 to S416.
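
As a minimal sketch of how such commands might be dispatched (the class and method names are invented for illustration and are not the headset's actual API), the recognised phrases can simply be mapped onto handlers that toggle the display state while rendering continues in the background:

```python
class SceneController:
    """Illustrative dispatcher for the remove/reinstate image commands."""

    def __init__(self):
        self.show_rendered_view = True

    def handle_voice_command(self, phrase: str):
        commands = {
            "remove image": self._hide_view,
            "reinstate image": self._show_view,
        }
        handler = commands.get(phrase.strip().lower())
        if handler:
            handler()

    def _hide_view(self):
        self.show_rendered_view = False  # stop displaying the rendered view

    def _show_view(self):
        # Rendering has continued contemporaneously, so the latest view is shown at once.
        self.show_rendered_view = True
```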

Alternatively or additionally, a user may provide input to the headset 104 to indicate that some of the data should be de-emphasised from a rendered view generated in steps S214 and S414 to generate a cut-away view.

Optionally, a user may provide input to the visualisation system 100 to indicate that they would like the position of an object, such as the needle, to be highlighted. We discuss this optional feature in the context of tracking a needle but this is only intended to be illustrative.

The tracking of a feature is described with reference to Figure 10.

Responsive to receiving the input, the control module 106 issues a request to the image segmentation sub-module 106c for the position of the needle to be tracked in a step S1000.

In a step S1002, the image segmentation sub-module 106c retrieves previously time-stamped 3D model data blocks. In a step S1004, the image segmentation sub-module 106c calls a local thresholding routine which is applied to each of the retrieved 3D model data blocks. The local thresholding routine segments each of the 3D model data blocks using a local thresholding algorithm such as the one described in Bernsen, J.: 'Dynamic thresholding of gray-level images', Proc. 8th Int. Conf. on Pattern Recognition, Paris, 1986, pp. 1251-1255, or J. Sauvola and M. Pietikainen, 'Adaptive document image binarization', Pattern Recognition 33(2), pp. 225-236, 2000. In a step S1006, the image segmentation sub-module 106c calls a line extraction algorithm, such as a random sample consensus (RANSAC) algorithm, which outputs data for each of the 3D model data blocks. The data output for each of the 3D model data blocks indicates the possible lines that exist in the block, which would include the lines representing the track of a needle inside the body that was used to generate the 3D model data blocks. Another method which may be used for step S1008 is disclosed in http://www.irisa.fr/lagadic/pdf/2013_icra_chatelain.pdf or https://hal.archives-ouvertes.fr/hal-00810785/document. The data output from the line extraction algorithm for each of the 3D model data blocks can then be treated as a time-series of timestamped line data which can then be analysed using time-series based techniques.
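
The following Python sketch illustrates the shape of this pipeline: a Sauvola-style local threshold is applied to each timestamped 3D model data block and a basic two-point RANSAC fit then extracts a candidate line from the bright voxels. The window size, sensitivity constant and distance tolerance are illustrative assumptions, not values taken from the cited papers, and a full implementation would typically extract several candidate lines per block.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold(volume, window=15, k=0.2, r=128.0):
    """Sauvola-style local threshold applied to a 3D data block."""
    v = volume.astype(float)
    mean = uniform_filter(v, size=window)
    std = np.sqrt(np.maximum(uniform_filter(v ** 2, size=window) - mean ** 2, 0.0))
    return volume > mean * (1.0 + k * (std / r - 1.0))

def ransac_line(points, iterations=500, tol=2.0, rng=None):
    """Fit one 3D line (point, unit direction) to a point cloud with RANSAC."""
    rng = rng or np.random.default_rng()
    best_count, best_model = 0, None
    for _ in range(iterations):
        a, b = points[rng.choice(len(points), size=2, replace=False)]
        direction = b - a
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        direction = direction / norm
        diffs = points - a
        # Perpendicular distance of every point to the candidate line.
        dists = np.linalg.norm(diffs - np.outer(diffs @ direction, direction), axis=1)
        count = int(np.sum(dists < tol))
        if count > best_count:
            best_count, best_model = count, (a, direction)
    return best_model

def extract_lines(timestamped_blocks):
    """Produce a time-series of (timestamp, line) candidates, one per 3D block."""
    series = []
    for timestamp, block in timestamped_blocks:
        points = np.argwhere(local_threshold(block)).astype(float)
        if len(points) >= 2:
            series.append((timestamp, ransac_line(points)))
    return series
```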

In step S1008, the image segmentation sub-module 106c calls an auto-correlation routine which determines the auto-correlation of the timestamped line data. The auto-correlation of the timestamped line data provides a measure of the similarity between the lines identified in successively timestamped 3D model data blocks.

In a step S1010, this data can be used by the image segmentation sub-module 106c to identify the position of the needle as it is statistically unlikely that the needle will move laterally inside the body. That is to say, line data which is indicative of lateral movement can be rejected, which will leave only line data indicative of longitudinal movement.
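
Continuing the sketch above (same assumed data layout of (timestamp, (point, direction)) entries, with an illustrative tolerance), the rejection of laterally moving candidates can be expressed as keeping only those lines whose frame-to-frame displacement lies along the line itself:

```python
import numpy as np

def filter_lateral_motion(line_series, max_lateral=3.0, min_alignment=0.95):
    """Keep line candidates whose motion between successive frames is mainly
    longitudinal; sideways shifts of the line are treated as spurious."""
    kept = []
    for (t0, (p0, d0)), (t1, (p1, d1)) in zip(line_series, line_series[1:]):
        displacement = p1 - p0
        lateral = displacement - np.dot(displacement, d0) * d0  # component off the line
        aligned = abs(np.dot(d0, d1)) >= min_alignment           # directions agree
        if aligned and np.linalg.norm(lateral) <= max_lateral:
            kept.append((t1, (p1, d1)))
    return kept
```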

The line data corresponding to the likely position of the needle is then output by the image segmentation sub-module 106c in a step S1012. The line data corresponding to the likely position of the needle can then be used, with reference to the respective 3D model data block, to determine the position of the track, the orientation of the track and the length of the track of the needle, as well as the needle's current position and orientation. The data relating to the track of the needle can then be used to predict the track the needle will likely take. The image segmentation sub-module 106c may also determine the velocity of the needle by applying a routine to determine the rate of change of the position of the track.
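
A minimal sketch of how the track and velocity might be derived from the retained line data is shown below; it assumes, purely for illustration, that the needle tip is the furthest segmented voxel along the insertion direction, and that positions are expressed in consistent world units.

```python
import numpy as np

def track_properties(inlier_points, line_point, direction):
    """Entry point, tip, orientation and length of the needle track in one block."""
    projections = (inlier_points - line_point) @ direction
    entry = line_point + projections.min() * direction
    tip = line_point + projections.max() * direction
    return entry, tip, direction, projections.max() - projections.min()

def tip_velocity(tip_t0, tip_t1, t0, t1):
    """Rate of change of the tip position between two timestamped blocks."""
    return (tip_t1 - tip_t0) / (t1 - t0)
```

The predicted track is then simply an extrapolation from the current tip along the estimated direction.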

The effect of using the image segmentation sub-module 106c to determine the position, orientation and length of the track is that external tracking of the object of interest, e.g. by using a magnetic or optical tracking system, may not be necessary, as the execution of steps S1000 to S1012 provides an output which is indicative of the position of the object of interest. The data indicating some or all of the position, velocity, orientation and length of the track is then passed to the image compositor sub-module 106e in steps S214 and S414, where the data is combined with the data from the respective sub-modules to generate the rendered view. The steps S1000 to S1012 are then iterated for every 3D model data block received by the image segmentation sub-module 106c until the user indicates through user input that they no longer wish for the tracking to take place.

The rendered view generated by the execution of steps S214 and S414 can then be generated to contain an image enhancement to identify the needle. Further image enhancement can also be used to display the track of the needle.

Steps S1006 to S1010 can be combined using the ROI-based RANSAC and Kalman method disclosed in 'Automatic needle detection and tracking in 3D ultrasound using an ROI-based RANSAC and Kalman method' (Ultrason Imaging. 2013 Oct;35(4):283-306. doi: 10.1177/0161734613502004).
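
The filtering half of such a combined approach can be pictured with a generic constant-velocity Kalman filter over the needle tip position, as in the sketch below. This is not the cited method itself: the state model, noise levels and the use of the prediction as the centre of the next region of interest are assumptions made for illustration.

```python
import numpy as np

class TipKalman:
    """Constant-velocity Kalman filter over a 3D tip position (sketch only)."""

    def __init__(self, dt, process_var=1e-2, meas_var=1.0):
        self.x = np.zeros(6)                        # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6) * 10.0                   # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt             # constant-velocity transition
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position-only measurement
        self.Q = np.eye(6) * process_var
        self.R = np.eye(3) * meas_var

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                           # predicted tip: centre of next ROI

    def update(self, measured_tip):
        y = measured_tip - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```

Restricting the RANSAC search of each new 3D model data block to a region of interest around the predicted tip can keep the line extraction fast and robust to other bright linear structures.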

Optionally, the control module 106 and the headset 104 may be configured to enable features of the scan data to be extracted and emphasised on the rendered view which is displayed in steps S216 and S416. A user wearing a headset 104 may indicate, using a voice command or a gesture, that a region or feature of the generated rendered view is of particular interest. The example discussed here is a needle, but this can be applied to any one or more features of interest within an ultrasound scan.

If a user wanted to identify a needle on the generated rendered view, they may input a voice command "identify needle" during a needling procedure being carried out using the needling system 200. This voice command may be detected by the headset 104. Alternatively the user may gesture in a specified manner which is detected by the motion sensing capability of the headset 104.

Responsive to receiving such a command, the headset 104 issues a request to the control module 106 via the headset interface 130 for the identification of the needle on relevant rendered views. The control module 106 feeds a request for the identification of the needle to the image reslicer sub-module 106d.

As part of the step S212 (or step S412), the image reslicer sub-module 106d calls a needle identification routine. The needle identification routine applies an image reslicing routine, for example a nearest neighbour reslicing routine, to extract the plane of the 3D model data block which is aligned with the longitudinal axis, i.e. the length, of the needle. The image reslicer sub-module 106d then outputs the data in step S212 (or step S412), including the plane of the 3D model data block containing the longitudinal axis of the needle 110.
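
A minimal sketch of a nearest neighbour reslice, under the assumption that the plane is specified by a point on the needle axis and two perpendicular unit directions (one of them the needle axis itself), might look as follows; the sampling grid size and spacing are illustrative.

```python
import numpy as np

def reslice_plane(volume, origin, axis_u, axis_v, size=(256, 256), spacing=1.0):
    """Sample the plane spanned by axis_u and axis_v through `origin`, using
    nearest-neighbour lookup into the 3D model data block."""
    axis_u = axis_u / np.linalg.norm(axis_u)
    axis_v = axis_v / np.linalg.norm(axis_v)
    rows, cols = size
    image = np.zeros(size, dtype=volume.dtype)
    shape = np.array(volume.shape)
    for i in range(rows):
        for j in range(cols):
            p = origin + (i - rows // 2) * spacing * axis_v \
                       + (j - cols // 2) * spacing * axis_u
            idx = np.round(p).astype(int)            # nearest-neighbour sample
            if np.all(idx >= 0) and np.all(idx < shape):
                image[i, j] = volume[tuple(idx)]
    return image
```

A plane at any configurable angle to the needle plane, including the orthogonal plane discussed below, can be obtained by rotating axis_v about the needle axis before sampling.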

The rendered view formed by the image compositor sub-module 106e in step S214 (or step S414) is then generated with the plane of the 3D model data block containing the needle visible on the rendered view. The information relating to the plane containing the needle may be fed to the probe cradle 120 and used by the motor to align the ultrasound transducer 102 into the corresponding orientation. Alternatively, or in addition to the plane containing the needle, a user may provide input indicating that they would like to see one or more planes that are at configurable angles to the plane coincident with the longitudinal axis of the needle. These planes may include a plane which is orthogonal to the plane containing the longitudinal axis of the needle. Input may be provided to the image reslicing routine to generate the indicated other planes.

A user may decide they would like to enhance the rendered view using false colour or another type of image enhancement. They may indicate this to the headset 104 through voice input or other command. The headset 104 then transmits a request to the control module 106 via the headset interface 130.

The control module 106 then issues a request to the image segmentation sub-module 106d, which calls an image enhancement routine configured to apply false colour or another form of image enhancement to a portion of the 3D model data block in the step S212 (or step S412).

The image enhancement routine may apply thresholding, alpha-blending, clustering or compression-based methods to the received volumetric data block to generate a segmented volumetric data block which is then transmitted to the volume rendering sub-module 106c. The image enhancement routine may also apply the image segmentation method described above.
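
As a simple illustration of the false-colour case (the threshold mask, tint and alpha value are arbitrary choices for this sketch), voxels selected by the segmentation can be tinted and alpha-blended over the greyscale data so that the enhanced structure stands out on the rendered view:

```python
import numpy as np

def false_colour_overlay(slice_grey, mask, tint=(1.0, 0.3, 0.0), alpha=0.5):
    """Blend a colour tint over the masked pixels of a greyscale slice."""
    grey = slice_grey.astype(float) / max(float(slice_grey.max()), 1.0)
    rgb = np.stack([grey, grey, grey], axis=-1)      # greyscale to RGB
    rgb[mask] = (1.0 - alpha) * rgb[mask] + alpha * np.array(tint, dtype=float)
    return rgb
```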

The rendered view can then be generated as in steps S212 or S412 with the added image enhancement. An example of a rendered view which includes the identification and enhancement of a needle is illustrated in Figure 6. In this example, the user is using the visualisation system 100 to target vessel 606 and can see vessel 606 through the field of view of the headset 104, as the vessel appears on the rendered view displayed in steps S216 and S416. The user has used the ultrasound transducer 102 (with scan plane 620) with the visualisation system 100 to identify the plane containing the needle 110, which is highlighted with an outline generated by the image segmentation sub-module 106d in step S212 (or S412). That is to say, the shaft of the needle 650 is highlighted in outline so that it is easier to see as it is being used as part of a needling procedure on vessel 606.

Another example of a rendered view is illustrated in Figure 7. The plane containing the needle is enumerated by 702. The plane which is transverse to the plane containing the needle is enumerated by 704. The region of interest is identified by 706.

Another example of such a rendered view is provided in Figure 8, where the emphasised region of interest is enumerated as 802 in the rendered view 800.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be capable of designing many alternative embodiments without departing from the scope of the invention as defined by the appended claims. In the claims, any reference signs placed in parentheses shall not be construed as limiting the claims. The words "comprising" and "comprises", and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. In the present specification, "comprises" means "includes or consists of" and "comprising" means "including or consisting of". The singular reference of an element does not exclude the plural reference of such elements and vice-versa. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.