

Title:
ENDOSCOPE SIMULATOR
Document Type and Number:
WIPO Patent Application WO/2009/008750
Kind Code:
A1
Abstract:
A system is provided for simulating the path of an endoscope in a human or animal passageway or cavity. The system includes a memory communicatively coupled to a processor, the memory having stored therein data representative of the passageway or cavity, a tube, and sensing means communicatively coupled to the processor for sensing movement of the tube. The processor is configured to translate movement of the tube detected by the sensing means into movement inside the passageway or cavity by way of a physics engine coupled to or incorporated in the processor. An apparatus and method for endoscopy simulation are also described.

Inventors:
BAKER PAUL (NZ)
Application Number:
PCT/NZ2008/000172
Publication Date:
January 15, 2009
Filing Date:
July 11, 2008
Assignee:
AIRWAY LTD (NZ)
BAKER PAUL (NZ)
International Classes:
G09B23/28
Domestic Patent References:
WO2006037946A12006-04-13
Foreign References:
US20040045561A12004-03-11
US6074213A2000-06-13
US20050196740A12005-09-08
Attorney, Agent or Firm:
BALDWINS (Wellesley Street, Auckland 1141, NZ)
Claims:
Claims

1. A system for simulating the path of an endoscope in a human or animal passageway/cavity, the system including: a memory communicatively coupled to a processor, the memory having stored therein data representative of the passageway/cavity; a tube; and sensing means communicatively coupled to the processor for sensing movement of the tube, wherein: the processor is configured to translate movement of the tube detected by the sensing means into movement in the passageway/cavity by way of a physics engine coupled to or incorporated in the processor and configured to aid in said translation of movement of the tube into movement in the simulated passageway/cavity.

2. The system of claim 1 wherein the physics engine is configured to determine the angle of movement before and/or after collisions and/or the force of any impact of the simulated endoscope against the walls of the simulated passageway or cavity, said collisions being caused by movement of the tube.

3. The system of claim 1 or claim 2 wherein the data representative of the passageway or cavity is generated using three-dimensional modelling software and at least defines the inner walls thereof.

4. The system of claim 3 wherein the data representative of the passageway or cavity defines the hardness and/or roughness of the walls.

5. The system of any one of the preceding claims including a display for displaying images of the passageway or cavity.

6. The system of claim 5 wherein the processor is configured to generate the images based on a combination of the data representative of the passageway or cavity and the movement detected by the sensing means such that the images change at least partly dependent on movement of the tube.

7. The system of claim 5 or claim 6 wherein the display is configured to display the images based on movement of the tube substantially in real-time.

8. The system of any one of claims 5 to 7 wherein the display is configured to display the images in a stream.

9. The system of any one of the preceding claims wherein the system includes an aperture into which the tube is fed.

10. The system of claim 9 wherein the aperture is provided in the wall of a housing configured to receive the tube.

11. The system of claim 10 wherein the sensing means is provided proximate the aperture.

12. The system of any one of the preceding claims wherein the sensing means is configured to detect and measure the extent of rotation of the tube.

13. The system of any one of the preceding claims wherein the sensing means is configured to detect and measure the extent of translational movement of the tube through the aperture.

14. The system of claim 12 or claim 13 wherein the sensing means includes an optical movement detector.

15. The system of claim 14 wherein the optical movement detector includes a light emitting element and an optical sensor coupled with a digital signal processor to detect and measure the extent of rotation and/or translational movement of the tube.

16. The system of any one of the preceding claims wherein the system includes a handset coupled to the tube.

17. The system of claim 16 wherein the handset includes an actuator provided to control simulated features of an endoscope, whereby actuation of the actuator causes control signals to be sent to the processor.

18. The system of claim 17 wherein the control signals cause the viewing direction for the image to change, thereby simulating selective bending of the tip of an endoscope.

19. The system of claim 17 or claim 18 wherein additional actuators are provided to control other simulated features of an endoscope.

20. The system of any one of claims 17 to 19 wherein the physics engine is further configured to determine the angle of movement before and/or after collisions and/or the force of any impact of said simulated features of an endoscope against the walls of the simulated passageway or cavity, said collisions being caused by movement of the tube and/or actuation of said simulated features.

21. The system of any one of the preceding claims wherein the memory is configured to store the images as they are created to enable them to be viewed again.

22. The system of any one of the preceding claims wherein the system includes means for determining parameters associated with movement of the endoscope in the simulated passageway or cavity.

23. The system of claim 22 wherein the memory is configured to store any such additional parameters and/or the display is configured to display them.

24. The system of any one of claims 16 to 20 wherein the handset includes force feedback means communicatively coupled to the physics engine and configured to generate movement or restriction thereto at the handset based on forces determined by the physics engine.

25. The system of claim 24 wherein the force feedback means includes motor means and/or braking means for limiting and/or reversing movement of the actuator and/or the tube, such as in the event of a collision.

26. The system of claim 10 or claim 11 or any one of claims 12 to 25 when dependent on claim 10 including force feedback means provided proximate the aperture and/or inside the housing.

27. The system of any one of the preceding claims including means for transmitting information generated by the system.

28. An apparatus for use with or incorporation in a system of any one of the preceding claims, the apparatus including: a handset; the tube coupleable or slideably engageable to the handset; the sensing means; and means for storing and/or transmitting data relating to the detected movement.

29. The apparatus of claim 28, including an aperture for receiving the tube.

30. The apparatus of claim 28 or claim 29 wherein the sensing means is provided proximate the aperture.

31. The apparatus of any one of claims 28 to 30 wherein the sensing means is configured to detect translational movement of the tube through the aperture and/or rotational movement of the tube inside the aperture.

32. The apparatus of any one of claims 28 to 31 wherein the handset includes an actuator, whereby actuation of the actuator causes control signals to be generated.

33. An apparatus for determining a path of an object in a virtual environment, the apparatus including: a processor; a hardware or software encoded physics engine; a memory coupled to the processor and containing parameters relating to the virtual environment; and means for receiving signals from the apparatus of the second aspect of the invention, said signals including the control signals from the actuator and/or signals from the sensing means, wherein the processor is configured to determine the path of the object in the virtual environment based on the received signals and the parameters relating to the environment and the physics engine is configured to determine the angles of movement before and/or after a collision with a wall in the virtual environment and/or the force of impact associated therewith, said collision being caused by movement of the tube of the apparatus of the second aspect.

34. The apparatus of claim 33 wherein the parameters define walls in the virtual environment.

35. The apparatus of claim 34 wherein the parameters include information on the hardness or roughness of the walls.

36. A method of simulating the path of an endoscope in a human or animal passageway or cavity, the method including: generating a virtual environment of the inside of a human or animal passageway or cavity, said virtual environment including data defining at least the position of the inner walls of the passageway or cavity; inserting a tube in an aperture; detecting movement of the tube in the aperture; translating said movement to a path inside the passageway or cavity; and determining the angles of movement before and/or after a collision with an inner wall in the virtual environment and/or the force of impact associated therewith.

Description:

ENDOSCOPE SIMULATOR

Field of the Invention

The present invention relates to an endoscope simulator. More particularly, the invention relates to a training aid which translates movement of a tube into movement of a virtual endoscope in a virtual human or animal body passageway or cavity, and which preferably provides for improved realism and/or accuracy in the simulated path in the virtual space.

Background

Endoscopy is a minimally invasive diagnostic medical procedure used to view interior parts of the body, such as the interior or exterior surfaces of organs, joints or cavities. It enables physicians to peer through the body's passageways. An endoscope typically uses two fibre optic lines. The first, a "light fibre" carries light to the region of the body to be viewed. The second, an "image fibre" carries the image of the region back to the physician's viewing lens or, where desired, to a camera so that the image may be displayed on a screen. The portion of the endoscope inserted into the body may be sheathed in a rigid or flexible tube, depending upon the medical procedure. One or more lenses may be provided at the end of the endoscope to enhance image capture and/or illumination of the body region. Ports may be provided to allow for administration of drugs, suction, and irrigation, as well as for the introduction of small instruments.

For applications such as bronchoscopy, the tube must be sufficiently flexible to allow it to be accommodated in body passageways without undue discomfort or injury to patients under examination, but must be rigid enough to cause it to move through the passageways without bunching up. Physicians operate an endoscope by controlling how far the tube is inserted and by rotation of the tube. The tips of endoscopes may be selectively bendable in at least one direction so that the tip may be pointed in a desired direction. Through control of the bend of the tip and rotation of the endoscope tube, the tip of the endoscope may pass through bends in the interior passageways without the tip directly impinging on the walls thereof. This also facilitates the desired path to be selected at a junction, such as that where the trachea meets the left and right bronchi.

A physician training in the field may practice procedures on a patient. However, this is not desired, at least during early stages of training, because inexperienced operators may cause injury to a patient, including puncturing organs which then require surgery. It can also lead to damaging of the equipment since the tip of an endoscope is quite fragile. The tips of endoscopes can be complex and are expensive to replace.

Physical models of passageways or "airway mannequins" may be used in place of patients but these suffer from difficulty in accurately mimicking the contour and surface characteristics of the passageways. Also, it is generally necessary to use genuine endoscopes with mannequins and so such simulators do nothing to prevent the tips of endoscopes being damaged and the associated cost being incurred. Furthermore, they remove endoscopes from clinical use and raise sterility concerns. The mannequins themselves are expensive and limited in that each mannequin is modelled on a particular type of patient (e.g. paediatric versus adult). Thus, it is necessary to obtain a variety of mannequins or for physicians to practice in an environment which differs from that of a patient to be operated on.

To overcome these problems, simulators have been created which avoid the use of an actual endoscope. GB-A-2,252,656, for example, discloses a dummy endoscope including an insertion tube which is received within a duct in a fixture having mechanical sensing means for detecting longitudinal and rotational movement of the tube relative to the fixture. A simulated image of an endoscopic procedure, responsive to outputs of the sensing means and actuation of the endoscope controls, is displayed on a monitor. The fixture is provided with tactile means which provide variable tactile feedback to the user of the endoscope in accordance with the outputs of a mathematical model of the endoscope and an organ.

Simulators such as that disclosed in GB-A-2,252,656 rely on creating a computer model of the relevant internal environment and modelling the motion of the endoscope therethrough using path-seeking algorithms. Path-seeking algorithms attempt to mathematically project a path forward through the simulated environment by breaking it down into a predetermined resolution, with grains either including a portion of the wall of the passageway or not. Movement from one grain to the next is limited by only allowing movement into an adjacent grain which does not include a portion of the wall of the passageway. This provides a poor model of the interaction between the simulated tip of an endoscope and the walls of the passageway.
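The grain-based movement check described above can be sketched as follows. This is an illustrative reconstruction, not code from any cited document; the grid layout and function name are invented for the example.

```python
# Hypothetical sketch of grain-based path-seeking: the passageway is
# discretised into a grid, and the simulated tip may only step into an
# adjacent cell (grain) that contains no wall material.

def can_move(grid, frm, to):
    """Return True if `to` is adjacent to `frm` and free of wall material."""
    (r0, c0), (r1, c1) = frm, to
    if max(abs(r1 - r0), abs(c1 - c0)) != 1:
        return False          # only adjacent grains are reachable
    return grid[r1][c1] == 0  # 0 = open lumen, 1 = wall

# A toy one-cell-wide passage: the tip can advance along the lumen, but the
# model says nothing about glancing contact, deflection or impact force.
grid = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
```

The binary allowed/blocked answer is precisely the limitation the passage identifies: there is no notion of collision angle or force.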

As a further example, WO-A-96/30885 discloses a surgical procedure simulator using a "physical constraining model". The constraints of the physical constraining model are "constructed to be approximately the same size as the virtual computer model corresponding to image data stored in the memory of the computer." The physical constraining model is described as an inexpensive way of providing tactile feedback without acquiring edge detection or collision detection software programs to determine when the mouse device meets or collides with an edge/wall in the image data.

As an alternative to the physical constraining model, the above document also discloses the use of "virtual models" implementing known edge collision and detection software such as High Techsplantations' Telios.

Summary of the Invention

It is an object of the invention to provide a system which at least mitigates one or more problems associated with prior art endoscope simulators.

Alternatively, it is an object of the invention to provide at least a useful choice.

According to a first aspect of the invention, there is provided a system for simulating the path of an endoscope in a human or animal passageway or cavity, the system including: a memory communicatively coupled to a processor, the memory having stored therein data representative of the passageway or cavity; a tube; and sensing means communicatively coupled to the processor for sensing movement of the tube, wherein: the processor is configured to translate movement of the tube detected by the sensing means into movement inside the passageway or cavity by way of a physics engine coupled to or incorporated in the processor and configured to aid in said translation of movement of the tube into movement inside the simulated passageway or cavity.

Preferably, the physics engine is configured to determine the angle of movement before and/or after a collision and/or the force of any impact of the simulated endoscope against the walls of the simulated passageway or cavity, said collisions being caused by movement of the tube.

Preferably, the data representative of the passageway or cavity is generated using three-dimensional modelling software and at least defines the inner walls thereof. The data may also define other structural properties of the walls such as the hardness and/or elasticity and/or roughness thereof.
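The wall data described above might be organised as follows. This is a hypothetical sketch only; the class and field names (`WallPatch`, `hardness`, `elasticity`, `roughness`) are illustrative and not drawn from the specification.

```python
from dataclasses import dataclass

# Hypothetical per-patch wall record: geometry of a piece of the inner wall
# plus the optional material properties the physics engine could consult.

@dataclass
class WallPatch:
    vertices: tuple          # three (x, y, z) points defining the surface patch
    hardness: float = 1.0    # 0 = fully compliant tissue, 1 = rigid
    elasticity: float = 0.0  # restitution applied to rebounding segments
    roughness: float = 0.1   # friction coefficient for sliding contact

# Example: a patch of tracheal wall, softer than the default.
trachea_patch = WallPatch(((0, 0, 0), (1, 0, 0), (0, 1, 0)), hardness=0.6)
```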

Preferably, the system further includes a display for displaying images of the passageway or cavity.

Preferably, the processor is configured to generate the images based on a combination of the data representative of the passageway or cavity and the movement detected by the sensing means such that the images change at least partly dependent on movement of the tube.

Preferably, the display is configured to display the images based on movement of the tube substantially in real-time.

Preferably, the display is configured to display the images in a stream.

Preferably, the system includes an aperture into which the tube is fed.

According to one embodiment, the aperture is provided in the wall of a housing configured to receive the tube.

Preferably, the sensing means is provided proximate the aperture.

Preferably, the sensing means is configured to detect and measure the extent of rotation of the tube.

Preferably, the sensing means is configured to detect and measure the extent of translational movement of the tube through the aperture.

Preferably, the sensing means includes an optical movement detector.

Preferably, the optical movement detector includes a light emitting element and an optical sensor coupled with a digital signal processor to detect and measure the extent of rotation and/or translational movement of the tube.

Preferably, the apparatus includes a handset coupled to the tube.

Preferably, the handset includes an actuator, whereby actuation of the actuator causes control signals to be sent to the processor.

Preferably, the control signals cause the viewing direction for the image to change, thereby simulating selective bending of the tip of an endoscope.

According to one embodiment, additional actuators are provided to control other simulated features of an endoscope, such as a switch for actuating a simulated suction tube to remove undesired fluids. Other actuators may be provided for simulating the supply of oxygen or drugs, the insertion of fine instruments, the control of focus and activation of a camera, all in a similar fashion to those in real life.

Preferably, the physics engine is further configured to determine the angle of movement before and/or after collisions and/or the force of any impact of said simulated features of an endoscope against the walls of the simulated passageway or cavity, said collisions being caused by movement of the tube and/or actuation of said simulated features.

Preferably, the memory is configured to store the images as they are created to enable them to be viewed again.

Preferably, the system includes means for determining parameters associated with movement of the endoscope in the simulated passageway or cavity. These may include a speed of movement, a speed immediately prior to a collision, a length of time to insert and/or remove the tube, information generated by the physics engine regarding collisions, etc. These parameters may be used to measure the competency of operation by a user. Other parameters for measuring competency of use of the simulated endoscope within the simulated or virtual environment will be apparent to one of skill in the art and it is intended that all such alternatives be included within the scope of the invention.

Preferably, the memory is configured to store any such additional parameters and the display may be configured to display them.

While not provided in preferred embodiments of the invention, so as to keep the training aid relatively cheap and simple, the handset may include force feedback means communicatively coupled to the physics engine and configured to generate movement, or restriction thereto, at the handset based on forces determined by the physics engine, thereby providing a more realistic feel for the handset.

Preferably, the force feedback means includes motor means and/or braking means for limiting and/or reversing movement of the actuator and/or the tube, such as in the event of a collision.

According to one embodiment, the force feedback means are provided at the handset. According to another embodiment, the force feedback means is alternatively or additionally provided proximate the aperture and/or inside the housing so as to more closely mimic the feedback which would be created when performing the procedure on a real patient.

As would be apparent to one of skill in the art, the processor, memory and display may be components of a conventional computer. As would also be apparent, information generated by embodiments of the invention may be communicated to other computing devices as desired to enable a user's performance results and/or stream of images to be viewed by others. The internet provides one such gateway for communication of data but any known communication means is included within the scope of the invention.

According to a second aspect of the invention, there is provided an apparatus for use with or incorporation in the system of the first aspect, the apparatus including: a handset; a tube coupleable or slideably engageable to the handset; sensing means for detecting movement of the tube; and means for storing and/or transmitting data relating to the detected movement.

Preferably, the apparatus includes an aperture for receiving the tube.

Preferably, the sensing means is provided proximate to the aperture.

Preferably, the sensing means is configured to detect translational movement of the tube through the aperture and/or rotational movement of the tube inside the aperture.

Preferably, the handset includes an actuator, whereby actuation of the actuator causes control signals to be generated.

According to a third aspect, there is provided an apparatus for determining a path of an object in a virtual environment, the apparatus including: a processor; a hardware or software encoded physics engine; a memory coupled to the processor and containing parameters relating to the virtual environment; and means for receiving signals from the apparatus of the second aspect of the invention, said signals including the control signals from the actuator and/or signals from the sensing means, wherein the processor is configured to determine the path of the object in the virtual environment based on the received signals and the parameters relating to the environment and the physics engine is configured to determine the angles of movement before and/or after a collision with a wall in the virtual environment and/or the force of impact associated therewith, said collision being caused by movement of the tube of the second aspect.

Preferably, the parameters define walls in the virtual environment and may further include information on the physical properties of the walls, such as the hardness or roughness thereof.

According to a fourth aspect, there is provided a method of simulating the path of an endoscope in a human or animal passageway or cavity, including the steps of: generating a virtual environment of the inside of a human or animal passageway or cavity, said virtual environment including data defining at least the position of the inner walls of the passageway or cavity; inserting a tube in an aperture; detecting movement of the tube in the aperture; translating said movement to a path inside the passageway or cavity; and determining the angles of movement before and/or after a collision with an inner wall in the virtual environment and/or the force of impact associated therewith.

Further aspects of the invention, which should be considered in all its novel aspects, will become apparent to those skilled in the art upon reading the following description which provides at least one example of a practical application of the invention.

Brief Description of the Drawings

One or more embodiments of the invention will be described below by way of example only and without intending to be limiting with reference to the following drawings, in which:

Figure 1 is a schematic diagram of an embodiment of the invention;

Figure 2 is a schematic representation of a display layout according to an embodiment of the invention;

Figure 3 is an example endoscopic image generated and displayed according to an embodiment of the invention;

Figure 4 is an example cross-sectional image generated and displayed according to an embodiment of the invention;

Figure 5 is a flow diagram illustrating the steps of an embodiment of a method according to the invention;

Figure 6 is a schematic diagram of a sensor arrangement according to an embodiment of the invention;

Figure 7 is a schematic diagram of a housing according to an embodiment of the invention; and

Figure 8 is an alternative embodiment of a housing according to the invention.

Detailed Description of Preferred Embodiments

Embodiments of the invention provide a computer-based and operated simulator which creates a realistic environment for instruction and training in the use of an endoscope. The fully interactive environment generated by the invention simulates the real world behaviour of endoscope insertion and the visual feedback obtained therefrom. It may be used by those lacking experience of endoscopic procedures and also by skilled physicians wishing to re-familiarise themselves with a procedure, particularly less frequently performed or more risky procedures. Preferred embodiments also enable the performance of users to be tracked and assessed.

Existing endoscopy simulators, such as those disclosed in GB-A-2,252,656 and WO-A-96/30885, use a simple mathematical model of the displayed passageway or cavity and a path-seeking or edge detection algorithm to simply identify collisions between the virtual endoscope tube and passageway. These methods do not attempt to analyse or model the effects of the collision. The present invention includes a physics engine to analyse the basic Newtonian physics of the collision and simulate the interaction and its effect upon the endoscope tube, which is reflected in the images displayed.
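By way of illustration, the following sketch shows the kind of result a Newtonian physics engine yields beyond simple edge detection: given the tip's velocity and the wall normal at the contact point, it reports a post-collision velocity (from which the angles before and after impact follow) and an impact force estimate. This is not the patent's actual implementation; the function name, restitution value and contact time are assumptions.

```python
# Hypothetical collision response: reflect the velocity about the wall
# normal with some restitution, and estimate force as impulse / contact time.

def collide(velocity, normal, mass=0.01, restitution=0.3, dt=0.01):
    """Reflect 2D `velocity` off a unit `normal`; return the post-collision
    velocity and an impulse-based force magnitude."""
    vn = velocity[0] * normal[0] + velocity[1] * normal[1]  # normal component
    # Post-collision velocity: remove the inbound normal component and add
    # back a rebound scaled by the restitution coefficient.
    out = (velocity[0] - (1 + restitution) * vn * normal[0],
           velocity[1] - (1 + restitution) * vn * normal[1])
    force = mass * abs((1 + restitution) * vn) / dt  # impulse over contact time
    return out, force

# Head-on impact against a wall whose normal points back along -x:
v_out, f = collide((1.0, 0.0), (-1.0, 0.0))
```

Edge detection alone would only report "collision: yes"; here the rebound direction and force are available to drive the displayed images.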

Furthermore, preferred embodiments of the invention are relatively cheap, simple and readily portable since bulky, heavy equipment is not required and the invention is able to operate with conventional computing equipment.

Figure 1 is a schematic diagram of a system, generally marked 10, according to one embodiment. System 10 includes display 11, processor 12, memory 13, bus 14, handset 15, tube 16, housing 17, sensor 18, control 19, wire 20 and circuit 21.

Display 11, processor 12, memory 13 and bus 14 are preferably embodied by a conventional personal computer. However, purpose-built devices with more specific functionality are also within the scope of the invention. Any suitable display may be used, including monitors, projectors and viewing lenses adapted to provide images therethrough and mimic those used for real endoscopic procedures. While a single display is shown, it will be readily apparent that any number of displays may be used so as to enable others to view the user's operation. The displayed images are preferably created by processor 12 using information stored in memory 13. Due to the virtual nature of the environment, it will be readily apparent that parameters for additional/alternative environments may be obtained or generated as required, such as via the internet or any portable computer readable memory. Processor 12 preferably includes or is communicatively coupled to a 3D graphics accelerator card to assist in the display of the images. Bus 14 facilitates the transfer of data between display 11, processor 12 and memory 13.

Handset 15 is preferably configured to feel and operate in a similar manner to a genuine handset for an endoscope. Similarly, tube 16 is preferably selected to have structural properties (e.g., flexibility/rigidity, thickness, etc) similar to that of a genuine tube for an endoscope. According to one embodiment, tube 16 is selectively couplable or engageable to handset 15 to enable different tubes to be used for different procedures so as to better mimic the actual equipment used for a particular procedure.

In operation, a user inserts the tip of tube 16 into an opening in housing 17. The opening may be provided with a funnel to guide insertion of the tube. Alternatively, the wall of the opening may be configured so as to imitate an opening into which an endoscope may be inserted (e.g. a nasal or oral conduit). One or more sensors 18 are provided at the opening to monitor movement of tube 16. Preferably, sensor(s) 18 monitors both rotational and translational movement of tube 16 as it passes through the opening.

Movement sensors have been widely developed and while it is preferred that sensor(s) 18 is a laser-based sensor, the invention is not limited thereto. Where a laser-based sensor is used, it is preferably capable of tracking the tube rotation and displacement at a resolution of 2000 dpi.
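As a worked example of the 2000 dpi figure above: at 2000 counts per inch, one count corresponds to 25.4/2000 = 0.0127 mm of surface travel past the sensor. Translation follows directly; rotation can be inferred from the arc length travelled around the tube's circumference. The 5 mm tube diameter used below is an assumption for illustration only.

```python
import math

# Converting raw counts from a 2000 dpi optical sensor into tube movement.
COUNTS_PER_INCH = 2000
MM_PER_COUNT = 25.4 / COUNTS_PER_INCH  # 0.0127 mm of surface travel per count

def translation_mm(counts):
    """Axial movement of the tube through the aperture."""
    return counts * MM_PER_COUNT

def rotation_degrees(counts, tube_diameter_mm=5.0):
    """Rotation inferred from arc length travelled past the sensor."""
    arc_mm = counts * MM_PER_COUNT
    circumference = math.pi * tube_diameter_mm
    return 360.0 * arc_mm / circumference
```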

Sensor(s) 18 is coupled to circuit 21 which relays information to processor 12 via bus 14. According to a preferred embodiment, the connection between circuit 21 and processor 12 may be effected using conventional USB connectors, preferably making use of the Microsoft HID device interface standard so as to avoid the need for specialised drivers. Use of a USB connector avoids the need to provide circuit 21 with a dedicated power supply since power may be fed via a USB cable. According to an alternative embodiment, circuit 21 may be coupled to a transmitter for wirelessly communicating data to a receiver coupled to processor 12. Bluetooth or other wireless protocols may be used. Various other communication means will be apparent to one of skill in the art and these are included within the scope of the invention.
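On the host side, data from a HID-style device arrives as fixed-format reports. The sketch below decodes one such report; the layout (two signed 16-bit little-endian counters, axial counts then rotational counts) is purely an assumption for illustration, as a real circuit 21 would define its own HID report descriptor.

```python
import struct

# Hypothetical decoding of a movement report from circuit 21.
def parse_report(report: bytes):
    """Unpack (translation_counts, rotation_counts) from a 4-byte report
    of two little-endian signed 16-bit integers."""
    return struct.unpack("<hh", report[:4])

# 16 counts of forward feed, 10 counts of counter-rotation:
dx, dr = parse_report(b"\x10\x00\xf6\xff")
```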

As tube 16 continues to pass through the opening, it is collected in housing 17. Control 19 is used to control the simulated action of the bending of the tip of an endoscope.

Preferably, control 19 includes a spring-loaded rotatable lever which is operable in a similar fashion to the lever found on conventional endoscopes. Note that operation of control 19 does not result in the bending of an actual endoscope tip; it merely causes the appropriate images to be displayed on display 11 by adjusting the viewing angle on the monitor and moving the tip in the virtual space. Control signals from handset 15 are relayed via wire 20 to circuit 21, and then to processor 12, again in a similar fashion to conventional endoscopes. Again, a wireless connection may alternatively be used. As would be apparent to one of skill in the art, signals from handset 15 and/or sensor(s) 18 may bypass circuit 21, in which case circuit 21 may be modified or removed.
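One plausible mapping from these inputs to the virtual viewing angle is sketched below: the lever sets a bend angle within the tip's bending plane, and rotation of the tube rolls that plane about the insertion axis (taken here as +z). The maximum bend and the mapping itself are assumptions for illustration, not the patent's code.

```python
import math

MAX_BEND_DEG = 180.0  # assumed limit; bronchoscope tips commonly bend this far

def view_direction(lever, roll_deg):
    """Map lever position in [-1, 1] and tube roll (degrees) to a unit
    view vector (x, y, z), starting from looking straight along +z."""
    bend = math.radians(lever * MAX_BEND_DEG)
    roll = math.radians(roll_deg)
    # Bend tilts the view away from +z; roll spins the bending plane.
    return (math.sin(bend) * math.cos(roll),
            math.sin(bend) * math.sin(roll),
            math.cos(bend))

straight = view_direction(0.0, 0.0)  # no lever input: looking along the axis
bent = view_direction(0.5, 0.0)      # half deflection: looking sideways
```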

Software stored in memory 13 and executed by processor 12 translates the sensed tube 16 movement into an interactive three dimensional view of the particular simulated passageway or cavity under examination. The images displayed take account of the restricted environment being viewed using a Newtonian physics engine which simulates interaction, including any collisions, between an endoscope tube and the walls of a passageway or cavity. Thus, embodiments of the invention provide for a more realistic path of the simulated endoscope in the virtual space, providing greater accuracy in the images displayed.

More specifically, the software includes a first module which models the structure of the passageway (such as the oral/nasal passageway, larynx, trachea, etc) in three dimensions, and a second module which models movement of the simulated endoscope, taking into account the restrictions or effects on movement caused by the constraints of the walls of the virtual space. The first module may be based on any one of a number of widely available programs, such as Autodesk 3DStudio MAX, Autodesk Maya, or the open source Blender (www.blender.org) or anim8or (www.anim8tor.com) software. While particular models vary, they generally rely on breaking down and approximating the environment to be modelled using manageable planar surfaces. According to preferred embodiments, the three dimensional visual system or graphics is preferably based on industry standard OPENGL, which expresses all geometry using small triangles with specially designed texture graphics applied thereto to achieve the desired visual effect. As one alternative to OPENGL, Direct3D may be used. Both alternatives are supported by the "Ogre" open source 3D graphics engine.

The second module is also a three dimensional model but of the simulated tube or endoscope and its interaction with the passageway. A physics engine is used to simulate the interaction between the simulated endoscope and passageway by applying Newtonian laws to the objects in the simulated environment, ensuring that the translational, rotational and bending motions of the simulated endoscope remain within the constraints of the three dimensional model.

More specifically, in the preferred embodiment of the invention the tube is represented in the second module by a tube divided into many rigid capsule-shaped segments that are attached to each other like the shackles of a chain. The position of the surfaces of each individual segment is tracked in real time. Where it is determined that a surface of any capsule intersects with a surface of the first module (modelling the structure of the passageway), the physics engine calculates both the opposing force of the passageway upon the tube segment, and the friction force caused by any translational movement of the tube while the segment is deemed to be in contact with the passageway. These forces are relayed to the other tube segments through a mechanism of simulated joints between adjacent segments.

It is possible for the tube to have many simultaneous collision points at any one time. The resulting forces are calculated many times per second, resulting in acceleration and deceleration of the tube segments which determine the position and path of the tube. This type of physics is sometimes referred to as "rope physics" due to the simulation of the behaviour of a flexible object. Thus, any collision of the simulated endoscope tube with the model environment will affect its routing and bending in a similar manner to that in the real world.
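The segmented "rope physics" described above can be illustrated with a simplified sketch. The following Python fragment (Python is used here purely for illustration; the patent names Delphi and C++ implementations) models the tube as a chain of point-mass segments joined by stiff spring joints. When a segment penetrates a wall (here, a flat floor at y = 0), an opposing normal force and a velocity-proportional friction force are applied, and the joint springs relay those forces along the chain. All names and constants are illustrative assumptions, not taken from the patent or any named physics engine.

```python
# Simplified 2D "rope physics" sketch: a chain of tube segments joined by
# spring joints, colliding with a wall at y = 0. Constants are illustrative.
N = 20            # number of tube segments
REST = 1.0        # rest length of each joint (segment spacing)
K_JOINT = 400.0   # joint (spring) stiffness
K_WALL = 800.0    # wall penalty stiffness (opposing force)
MU = 0.5          # friction coefficient against the wall
DT = 0.005        # integration time step
GRAVITY = -9.8

# start the chain horizontal, slightly above the wall
pos = [[i * REST, 0.5] for i in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]

def step():
    forces = [[0.0, GRAVITY] for _ in range(N)]
    # joint springs between adjacent segments relay forces along the tube
    for i in range(N - 1):
        dx = pos[i + 1][0] - pos[i][0]
        dy = pos[i + 1][1] - pos[i][1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = K_JOINT * (dist - REST)            # Hooke's law on the joint
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx; forces[i][1] += fy
        forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
    # wall contact: opposing (normal) force plus sliding friction
    for i in range(N):
        if pos[i][1] < 0.0:
            normal = -K_WALL * pos[i][1]        # pushes segment out of wall
            forces[i][1] += normal
            sign = 1 if vel[i][0] > 0 else -1 if vel[i][0] < 0 else 0
            forces[i][0] -= MU * normal * sign  # friction opposes sliding
            vel[i][1] *= 0.9                    # crude contact damping
    # semi-implicit Euler integration of all segments
    for i in range(N):
        vel[i][0] += forces[i][0] * DT
        vel[i][1] += forces[i][1] * DT
        pos[i][0] += vel[i][0] * DT
        pos[i][1] += vel[i][1] * DT

for _ in range(4000):
    step()
```

After the simulated chain settles, the segments rest against the wall without penetrating it significantly, while the joint constraints keep the segment spacing essentially constant, mirroring the constrained behaviour described above.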

Thus, embodiments of the invention provide a more realistic simulation of movement of an endoscope through a passageway than more conventional arrangements which rely on path-seeking algorithms.

According to one embodiment, the simulation software is written in Delphi and makes use of the open source GLScene OPENGL library. Additionally, the endoscope simulation processes may be based on a modified version of the freely available NewtonDynamics physics engine (see www.newtondynamics.com) and a wrapper library called oxNewton that makes the physics library accessible from Delphi. Alternatively, any other physics engine, such as the Open Dynamics Engine (see www.ode.org), Havok (www.havok.com) or PhysX (www.ageia.com), may be adapted and used for the same purpose. According to a presently preferred embodiment, the software is written in C++ using the "Ogre" 3D graphics engine, the NewtonDynamics physics engine and OgreNewt for interfacing between the two.

The physics engine may simply ensure that the simulated endoscope follows the correct path in the displayed passageway. However, parameters generated by the physics engine may also be used to control force feedback to the operator during use of handset 15. For example, friction forces which would be generated during operation of a conventional endoscope may be modelled during translation and rotation of tube 16 to provide a more realistic feel. Feedback may similarly be provided in the event of any collisions. Any of a variety of widely available haptic devices may be modified to provide the desired force feedback, such as those which have been developed by the computer gaming industry. Thus, motors, braking means, etc may be used to resist or impede movement of tube 16. Means may be provided at the aperture or proximate thereto, including inside the housing. These means inhibit, restrict or reverse rotational and/or translational movement of the tube. By effecting force feedback proximate to the aperture, the resulting feedback is more realistic, as it is then generated as though it were originating from inside a patient.

The invention preferably provides alternative perspectives for viewing the simulated procedures. A first view imitates that which would be seen through a conventional endoscope carrying out the procedure. Thus, it preferably includes a round cut-out view with a marker identifying the top of the tube. A second view is a cross-sectional longitudinal view along the length of the relevant passageway which is useful for instructional and/or monitoring purposes as it provides a good visual representation of the path of the simulated endoscope therethrough. Embodiments of the invention preferably provide for manipulation of the images, such as through rotation of the perspective or viewing angle and/or zoom.

The software according to the present invention is preferably configured to be executable on conventional computers having sufficient processing power to determine the path of the endoscope and display images to the user, preferably in or approaching real-time. According to a presently preferred embodiment, the software is operated as a Windows-based application, but the invention is not limited thereto. Those skilled in the art will appreciate that other operating systems may be used and that implementations using such systems are within the scope of the invention. Use of conventional computers and systems enables deployment of embodiments of the invention in a wide variety of environments without the need to transport heavy or bulky equipment. Since the environment and operating/processing software may be loaded into a computer such as via the internet or any computer readable disk, the only additional equipment required is the handset 15, tube 16, housing 17 and any required connectors. Furthermore, while inclusion of housing 17 is preferred as it provides a means for holding tube 16 (and possibly other elements when not in use), it is not essential. For example, a ring or other support could be secured to an edge of a desk and used to define the aperture through which tube 16 is fed, the ring being provided with the sensors and able to transfer information to a computing device.

A schematic diagram of a preferred screen layout, generally marked 22, is provided in Figure 2. Region 23 is the main viewing area which displays the simulated images generated by the invention. Depending on the state of operation, region 23 may display the first or second view described hereinbefore, or some other view. As would be apparent, the first and second views may also be selectively displayed side by side or each view may in turn be maximised to cover the full screen. Example images for the first and second views are provided in Figures 3 and 4. While the layout shown in Figure 2 is preferred, the invention is not limited thereto.

Layout 22 also includes pull down menu 24, status area 25 and buttons and user feedback area 26. Further details of layout 22 will be described with reference to Figure 5, which is a flow diagram illustrating the steps of a preferred embodiment of the method of the invention.

To initiate the system, a user may open a file using pull down menu 24, which enables a particular simulated environment to be selected at step 30. Settings for parameters of the simulated endoscope may be provided, such as tube diameter, tube rigidity, tip bend limits, camera field of view limits and tip light intensity. Default settings may be set for each environment, or a single set of default settings may be set for all environments. Also, on start up, a default environment (such as that for a bronchoscopy) may be loaded into the system. There is no limit to the number of environments which may be selected from. Thus, users may potentially perform any procedure they wish on any type of human or animal patient. For example, procedures may be practised on patients of various ages, states of health, etc.

Opening of a new environment may automatically clear any user performance data stored in the system from previous simulations. Alternatively, the data may be stored for later retrieval. At step 31, after the selected environment has been loaded into the system (an indicator may be provided to the user in status area 25 to show that this has been completed), a user begins the session by inserting tube 16 into housing 17. Preferably, a sensor is provided at the opening to housing 17 to detect the presence of tube 16. This may be a dedicated sensor or make use of sensor 18. When the sensor detects the presence of tube 16, display of images in region 23 begins at step 32. As the user operates handset 15 and inserts tube 16 into housing 17, the moving images are displayed to the user as though the task was being performed by a conventional endoscope. The images, the path of the simulated endoscope and other parameters (described below) are stored during operation so as to enable the path to be subsequently viewed and/or assessed. After performing the desired tasks in the simulated environment, the user withdraws tube 16. The display continues until a sensor at the opening of housing 17 detects that tube 16 is no longer present in the opening, at which point the session ends (step 33).

On ending the session, a performance summary may be displayed to the user at step 34. A reset button in area 26 may be used to reinitialise the system to step 30 or 31, as desired, so that the simulated procedure may be repeated or a different procedure performed. Commands in pull down menu 24 may additionally or alternatively be used.

At step 35, a user may select to view stored data from one or more previously performed procedures, again using buttons or icons in area 26 and/or commands in pull down menu 24. As desired, performance parameters and/or visual images (including the first and/or second views) may be displayed. Where the second, cross-sectional view is displayed, the paths of multiple attempts at the procedure may be simultaneously displayed so as to enable comparisons to be performed. According to one embodiment, an ideal path may be plotted on the image to enable presentation and measurement of the deviation therefrom.
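One plausible way to measure deviation from such an ideal path is to compare each recorded tube position against the nearest point on the ideal path, treated as a polyline, and summarise the result as mean and maximum distances. The following Python sketch is purely illustrative; the function names and sample coordinates are assumptions, not taken from the patent.

```python
# Illustrative sketch: deviation of a recorded attempt from an "ideal path",
# measured as the distance from each recorded point to the ideal polyline.

def point_to_segment(p, a, b):
    """Distance from 2D point p to line segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        t = 0.0
    else:
        # project p onto the segment, clamped to its endpoints
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def path_deviation(recorded, ideal):
    """Mean and maximum distance from each recorded point to the ideal polyline."""
    dists = [min(point_to_segment(p, ideal[i], ideal[i + 1])
                 for i in range(len(ideal) - 1))
             for p in recorded]
    return sum(dists) / len(dists), max(dists)

ideal = [(0, 0), (10, 0), (10, 10)]              # illustrative ideal path
attempt = [(0, 1), (5, 0.5), (10, 2), (9, 10)]   # an illustrative recorded attempt
mean_dev, max_dev = path_deviation(attempt, ideal)
```

Such per-point distances could then be plotted alongside the cross-sectional view, or reduced to summary figures for the performance report described below.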

Different users or types of users may have different access levels. For example, students or trainee practitioners may only be able to perform steps 31 to 34 while an observer may only be able to perform steps 30, 34 and 35 so that they are able to select the particular environment and/or procedure to be performed and then view the results. Determination of a user's access level may be made by requiring users to sign in with a login and/or password.

Preferably, prior to step 30 or 31, the step of a user providing information regarding their identity is performed to enable data to be associated with a particular user.

In addition to recording the images displayed during operation of tube 16, other parameters which may be recorded include tip collisions, the length of time to fully insert tube 16, the length of time tube 16 is inside housing 17, the total tube distance traversed, the top speed of the tube, the number of times tube 16 is partially withdrawn and then reinserted (this may be measured by determining the total movement of the tip of tube 16 or may be a separate measure), the total tube angle traversal and the total tube direction changes. As well as counting collisions, a measure of the force of impact and the angle of incidence may also be recorded so as to identify occasions where there may have been a danger to a patient in a real-life scenario. According to one embodiment, the model of the environment may include details of the thickness and strength of the passageway/organ walls, etc, so that injuries to the patient may be simulated on the display.
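Several of the parameters listed above can be derived from a stream of per-sample tip displacements. The Python sketch below shows one possible derivation of total distance traversed, top speed and direction changes; the function name, sample values and sampling interval are illustrative assumptions only.

```python
# Illustrative sketch: deriving performance parameters from a stream of
# per-sample tip displacements (signed: positive = insertion, negative =
# withdrawal). All names and values here are assumed for illustration.

def summarise(displacements, dt):
    """Total distance, top speed and direction changes from displacement samples."""
    total_distance = sum(abs(d) for d in displacements)
    top_speed = max(abs(d) for d in displacements) / dt
    # a partial withdrawal/reinsertion is counted each time the sign of
    # motion flips between consecutive samples
    direction_changes = sum(1 for a, b in zip(displacements, displacements[1:])
                            if a * b < 0)
    return total_distance, top_speed, direction_changes

samples = [0.2, 0.3, 0.1, -0.2, -0.1, 0.4]   # mm per sample, illustrative
dist, speed, changes = summarise(samples, dt=1 / 120)
```

In this illustrative stream the tube moves forward, is partially withdrawn, and is then reinserted, giving two direction changes; real sessions would of course involve far longer streams at the sensor's sampling rate.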

Figure 6 shows a preferred, novel arrangement for sensor 18, including guides 61 and detector 62. Guides 61 hold tube 16 a fixed distance from detector 62 so that detector 62 is able to detect movement of tube 16 and the extent of movement thereof. Guides 61 may be in the form of rollers, in which case force feedback may be effected by restricting rotation of the rollers. Force feedback may be additionally or alternatively effected by urging one or more of guides 61 against the surface of tube 16.

Detector 62 is preferably an optical movement detector, although the invention is not limited thereto. Light from an emitting element, such as a light emitting diode or, more preferably, a fine laser, is projected onto the physical tube surface, upon which an optical sensor is focused. Consecutive images from the optical sensor, preferably captured at a rate of 7000 pictures per second, are analysed to detect changes corresponding with movement of the tube. Displacement values corresponding with the translational and rotational movement of the tube are calculated in real time by a digital signal processor (DSP), preferably at a rate of 120 times per second, and transmitted to the processor 12 via USB or any other suitable interface so that the movement may be reflected in the simulated display. The displacement resolution that can be sensed is preferably at least 2000 dpi, or 0.0127 mm.

Because the preferred detector 62 described above detects relative movement rather than the absolute position of the tube, errors may accumulate in use. It has been found that the error in the sensor may vary depending on the direction of movement and it may therefore be necessary to compensate the displacement using four separate calibration parameters, namely:

Total linear displacement = (FDP × FCF) - (BDP × BCF), and
Total rotational displacement = (LDP × CCF) - (RDP × ACCF),

where FDP is the sum of forward displacement measurements, FCF is a forward calibration factor, BDP is the sum of backward displacement measurements, BCF is a backward calibration factor, LDP is the sum of clockwise displacement measurements, CCF is a clockwise calibration factor, RDP is the sum of anti-clockwise displacement measurements and ACCF is an anti-clockwise calibration factor.
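The four-factor compensation above can be sketched as follows. In this Python fragment the calibration factor values are illustrative assumptions (real values would be measured for each device), as is the splitting of a signed sample stream into directional sums; the dots-to-millimetres conversion uses the 2000 dpi resolution stated above.

```python
# Hedged sketch of the four-factor displacement calibration described above.
# The factor values are illustrative; real values would be measured per device.
FCF, BCF = 1.02, 0.98     # forward / backward calibration factors (assumed)
CCF, ACCF = 1.01, 0.99    # clockwise / anti-clockwise factors (assumed)

def calibrated_totals(linear_samples, rotational_samples):
    """Apply direction-dependent calibration to signed raw displacement samples."""
    FDP = sum(d for d in linear_samples if d > 0)       # forward sum
    BDP = -sum(d for d in linear_samples if d < 0)      # backward sum (magnitude)
    LDP = sum(d for d in rotational_samples if d > 0)   # clockwise sum
    RDP = -sum(d for d in rotational_samples if d < 0)  # anti-clockwise sum
    total_linear = FDP * FCF - BDP * BCF
    total_rotational = LDP * CCF - RDP * ACCF
    return total_linear, total_rotational

# sensor counts arrive in dots at 2000 dpi; convert to millimetres
MM_PER_DOT = 25.4 / 2000          # = 0.0127 mm per dot, as stated above

lin, rot = calibrated_totals([10, 5, -4], [8, -3])
```

Because each direction of movement accumulates its own error, keeping the four sums separate and scaling each by its own factor allows the systematic bias of the relative-movement detector to be cancelled before the totals are combined.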

Figure 7 shows selected features of a preferred arrangement of housing 17. In the arrangement of Figure 7, funnel 71 is provided to aid insertion of tube 16 into housing 17 and through sensor 18. The walls of funnel 71 may be configured to imitate the walls of the cavity or passage into which tube 16 is inserted.

Figure 8 shows an alternative embodiment of the invention. Again, only features particular to this embodiment have been shown. It will be appreciated that other elements (such as circuit 21) have been omitted from Figure 8 for purposes of clarity.

The embodiment of Figure 8 provides for training in the insertion of an endotracheal tube into the virtual airway. An endotracheal tube may be inserted into an airway to ensure that it remains open, particularly when a patient is under anaesthetic. Tube 81 is used as an endotracheal tube. Mounting 82 holds sensor 18 (not shown) proximate to region 83 so as to detect movement of tube 16. Mounting 82 may be coupled to the base of housing 17 or to the side walls of housing 17 (not shown) using one or more struts. Where the mounting is coupled to the base, a rigid tube may be used with an opening provided to enable the end of tube 16 to pass therethrough.

Sensor 18, or a separate, additional sensor, may be configured to monitor movement of endotracheal tube 81. Using simulated images obtained from tube 16, endotracheal tube 81 may be guided appropriately within the virtual environment during its insertion. While not shown in Figure 8, housing 17 preferably includes a funnel similar to that of Figure 7. The funnel may be formed from a flexible material configured to simulate the feel of the passage into which endotracheal tube 81 is inserted.

Embodiments of the invention provide a preferably portable training device that can be set up wherever it is required, including in hospitals and medical schools, and that provides up-to-date, extensive and low cost training in the use of an endoscope. Thus, it is possible for those in the field to practise insertion of an endoscope into a patient during which simulated visuals are displayed on a monitor or other display. The software of the invention may be used during the procedure to track a number of parameters, rate the user's performance and generate a performance report.

The invention may be applied to a wide variety of endoscopic procedures such as, but not limited to, bronchoscopy, gastroscopy, colonoscopy, nasopharyngoscopy, arthroscopy, laparoscopy, thoracoscopy and endoscopic sinus surgery. It may be applied to both rigid and flexible endoscopic procedures. Furthermore, the invention may be adapted to simulate the environment inside of animals, as well as humans.

While embodiments of the invention have been described as including handset 15, tube 16 and housing 17, the invention is not limited thereto. These features provide for improved realism for a user but may be omitted in favour of more conventional input devices for computers (e.g. a keyboard, a mouse etc). While such embodiments are not preferred, they can still be helpful, particularly in early stages of training, in providing a user with a better feel for movement of an endoscope and in familiarising a user with internal environments, better enabling a user to navigate/identify anatomy.

Furthermore, while aspects of the invention have been described with reference to simulation of the use of an endoscope, it will be apparent that embodiments of the invention may be applied to other types of simulated environments, including non-anatomical environments, and it is intended that such alternatives be included within the scope of the invention. In particular, the use of a physics engine with a model of an environment and appropriate input devices may enable the invention to be applied more widely, not simply for viewing inside cavities. Example applications may include navigation through virtual caves, cities and roadways, flight paths and jungles. The environments may be modelled on real life environments but the invention is not limited thereto. Depending on the chosen environment, the input device(s) may be selected so as to imitate devices typically used by people operating in the environment. However, more conventional computer input devices may again be used so as to avoid the need for purpose-built equipment. As will be apparent, the invention may also be applied within the computer gaming industry.

Also, while housing 17 may be of a simple, box-like configuration, the invention is not limited thereto. At least the opening of housing 17 may be configured to imitate the relevant portion of a patient and the invention does not preclude the use of mannequins which further imitate the internal environment of a patient.

Any discussion of prior art in the specification is not to be considered as an admission that such prior art is widely known or forms part of the common general knowledge.

Various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications may be made without departing from the spirit and scope of the present invention and without diminishing its attendant advantages. It is therefore intended that such changes and modifications be included within the present invention.