

Title:
TRAINING METHOD FOR TRAINING A ROBOT TO PERFORM AN ULTRASOUND EXAMINATION
Document Type and Number:
WIPO Patent Application WO/2022/231453
Kind Code:
A1
Abstract:
A training method for training a robot to perform an ultrasound examination is provided. The training method comprises: (i) providing a patient-specific anatomy 3D model; (ii) creating, by a 3D scanner, a 3D model of a patient body surface; (iii) manually moving a robot arm from a starting position to at least one predetermined training position on the patient body, the robot arm holding an ultrasound-imaging probe and being provided with a robot arm position tracker and at least one force sensor; (iv) manually actuating the ultrasound-imaging probe to produce at least one ultrasound image when the robot arm is manually moved to each of the training positions; (v) sensing, by the force sensors, a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is positioned at least in the starting position and each of the training positions; (vi) computing a movement trajectory of the robot arm based on a plurality of robot arm locations on the created body surface 3D model, the locations corresponding to the robot arm positions tracked by the robot arm position tracker during the manual movement of the robot arm and associated with the created body surface 3D model and the provided anatomy 3D model; (vii) creating a robot-teaching model by associating the computed movement trajectory with the forces sensed in the robot arm positions and with the produced ultrasound images.

Inventors:
SLUTSKIY ILYA LEONIDOVICH (RU)
Application Number:
PCT/RU2021/000176
Publication Date:
November 03, 2022
Filing Date:
April 27, 2021
Assignee:
SLUTSKIY ILYA LEONIDOVICH (RU)
International Classes:
A61B8/00; B25J13/00; G09B9/00; G09B23/30
Foreign References:
US20150297177A12015-10-22
US20210059772A12021-03-04
Other References:
FARSONI SAVERIO; ASTOLFI LUCA; BONFE MARCELLO; SPADARO SAVINO; VOLTA CARLO ALBERTO: "A Versatile Ultrasound Simulation System for Education and Training in High-Fidelity Emergency Scenarios", IEEE Journal of Translational Engineering in Health and Medicine, IEEE, USA, vol. 5, pages 1 - 9, XP011640079, DOI: 10.1109/JTEHM.2016.2635635
Attorney, Agent or Firm:
NILOVA, Maria Innokentievna (RU)
Claims:
CLAIMS

1. A training method for training a robot to perform an ultrasound examination, the method comprising: providing a patient-specific anatomy 3D model; creating, by a 3D scanner, a 3D model of a patient body surface; manually moving a robot arm from a starting position to at least one predetermined training position on the patient body, the robot arm holding an ultrasound-imaging probe and being provided with a robot arm position tracker and at least one force sensor; manually actuating the ultrasound-imaging probe to produce at least one ultrasound image when the robot arm is manually moved to each of the training positions; sensing, by the force sensors, a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is positioned at least in the starting position and each of the training positions; computing a movement trajectory of the robot arm based on a plurality of robot arm locations on the created body surface 3D model, the locations corresponding to the robot arm positions tracked by the robot arm position tracker during the manual movement of the robot arm and associated with the created body surface 3D model and the provided anatomy 3D model; and creating a robot-teaching model by associating the computed movement trajectory with the forces sensed in the robot arm positions and with the produced ultrasound images.

2. The training method of claim 1, further comprising sensing a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is moved between the starting position and the training positions.

3. The training method of any of claims 1-2, further comprising updating, by the 3D scanner, the created body surface 3D model during the manual movement of the robot arm and correcting the robot arm locations according to the updated body surface 3D model.

4. The training method of any of claims 1-3, further comprising saving the created robot-teaching model in a robot data storage as robot control instructions.

5. The training method of any of claims 1-4, further comprising displaying, by a display, the produced ultrasound images to a user and accepting, by the user, at least one particular ultrasound image among the displayed ultrasound images.

6. The training method of any of claims 1-5, further comprising manually moving the robot arm from an initial position to the starting position; and computing a positioning trajectory of the robot arm based on a plurality of robot arm spatial locations in relation to at least one reference point on the created body surface 3D model, the spatial locations corresponding to the robot arm positions tracked by the robot arm position tracker during the initial movement of the robot arm and associated with the created body surface 3D model.

7. The training method of claim 6, wherein the robot arm spatial locations corresponding to the patient body are further associated with the provided anatomy 3D model.

8. The training method of any of claims 6-7, further comprising sensing, by the force sensors, a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof during the initial movement of the robot arm; and creating a robot-positioning model by associating the computed positioning trajectory with the forces sensed during the initial movement of the robot arm.

Description:
TRAINING METHOD FOR TRAINING A ROBOT TO PERFORM AN ULTRASOUND EXAMINATION

FIELD OF THE INVENTION

The present invention generally relates to the healthcare and medical industry, in particular to the use of robots for examining or treating patients, and more particularly to the use of robots for performing ultrasound diagnostics of a patient. More specifically, the present invention relates to a training method for training robots to perform ultrasound examinations.

BACKGROUND OF THE INVENTION

Ultrasound imaging (also referred to in the art as diagnostic sonography or ultrasonography) is a non-invasive diagnostic imaging technique using high-frequency sound waves to image the inside of a patient body. In particular, ultrasound imaging is used for examining many internal organs of the patient body, including but not limited to the following: heart, liver, gallbladder, spleen, pancreas, kidneys, urinary bladder, uterus, ovaries, eyes, thyroid, parathyroid glands, scrotum (testicles), brain in infants, hips in infants, spine in infants, etc. Ultrasound imaging is also used for examining some other structures of the patient body, including but not limited to the following: tendons, muscles, nerves, ligaments, joints, blood vessels, soft tissue masses, bone surfaces, etc.

In medicine, ultrasound imaging is frequently used to: (i) diagnose a variety of conditions and assess organ damage following illness; (ii) examine an unborn child (fetus) in pregnant patients; (iii) guide procedures such as needle biopsies, in which needles are used to sample cells from an abnormal area for laboratory testing; (iv) image the breasts and guide biopsy of breast cancer; (v) diagnose a variety of heart conditions, including valve problems and congestive heart failure, and to assess damage after a heart attack; (vi) evaluate symptoms such as pain, swelling or infection; (vii) evaluate blockages to blood flow (such as clots), narrowing of vessels, tumors and congenital vascular malformations, reduced or absent blood flow to various organs (such as the testes or ovary), increased blood flow (which may be a sign of an infection), etc. Ultrasound images, also known in the art as ultrasonic images or sonograms, are made by sending ultrasound pulses into tissues using an ultrasound probe. The ultrasound pulses echo off tissues with different reflection properties and are recorded and displayed as an ultrasound image.

Ultrasound images are generally captured in real time, so that they can also show movement of internal body organs as well as blood flowing through blood vessels. Unlike X-ray imaging, no ionizing radiation exposure is associated with ultrasound imaging.

In ultrasound imaging, the ultrasound probe is generally placed directly on the patient body skin. To optimize image quality, the ultrasound probe may be placed inside the patient body, in particular via the gastrointestinal tract, vagina or blood vessels. A thin layer of a water-based gel is applied to the patient body skin to be examined; ultrasound waves are transmitted from the ultrasound probe through the applied gel into the patient body. The applied gel allows the ultrasound probe to make secure contact with the examined skin and eliminates air pockets between the ultrasound probe and the examined skin, which would otherwise block sound waves from passing into the patient body.

Many different types of ultrasound images can be formed. The most common ultrasound image is a B-mode image (brightness) displaying an acoustic impedance of a two-dimensional cross-section of a tissue. Other types of ultrasound images may show a blood flow, motion of a tissue over time, the location of blood, the presence of specific molecules, the stiffness of a tissue, or the anatomy of a three-dimensional region.

Sonographers are medical professionals performing scans, wherein the scans are then interpreted by radiologists and further used by clinicians (i.e. physicians and other healthcare professionals who provide direct patient care) for diagnostic or treatment purposes.

Nowadays, with ever-improving ultrasound technologies, ultrasound imaging is increasingly used in medical diagnostics and interventions. However, one of the disadvantages of ultrasound imaging is the high inter-observer variability when acquiring ultrasound images, which calls for trained sonographers to guarantee clinically relevant images. In other words, receiving a reliable diagnosis generally depends on the availability of a specially trained technician or a qualified medical doctor. The lack of such specially trained technicians and doctors, as well as the cost of using radiologists to produce ultrasound images, creates the need for robotic ultrasound-imaging techniques.

Nowadays, robots that combine ultrasound imaging technology with a computer-based robotic system, or robots controlled by a computer-based control system, are highly integrated into the medical workspace, thereby allowing clinicians to treat individual patients in a more efficient, safer and less morbid way.

With their potential for high precision, dexterity, and repeatability, robots are often uniquely suited for ultrasound examinations.

It is to be noted that such robots may be pre-trained by expert sonographers to perform the best ultrasound examinations, thereby allowing every person (especially in remote regions) to gain access to the expertise of the expert sonographers.

Although the field is relatively young, various robots have been developed for use in dozens of medical procedures, along with various training methods for training such robots to perform ultrasound examinations.

In particular, US 2021015453 (published on 21 February 2021) discloses a training method for training a robot to perform an ultrasound examination, the training method including obtaining a motion control configuration for manually repositioning a robot arm provided with an ultrasound-imaging probe from a first imaging position to a second imaging position with respect to a patient body, wherein the motion control configuration is based on a prediction-convolutional neural network. In the training method of US 2021015453, the prediction-convolutional neural network is trained by performing the following operations: (i) providing a plurality of images obtained by the ultrasound-imaging probe from at least two imaging positions to obtain a target image view; (ii) obtaining a plurality of motion control configurations based on an ultrasound-imaging probe orientation or movement associated with the at least two imaging positions; and (iii) assigning a score to a relationship between the plurality of motion control configurations and the plurality of images with respect to the target image view. However, a main disadvantage of the training method of US 2021015453 and other similar training methods known in the art is that it does not actually allow performing an ultrasound examination in a safe, precise and repeatable manner, since the known training method at least does not take into account that a patient may reposition the patient’s body during the ultrasound examination, and/or may have personal or patient-specific anatomical features (for example, organ sizes or locations inside the patient body) or body structure features, and/or may be harmed by the ultrasound-imaging probe excessively pressing on the patient body.

Therefore, developing an improved training method for training any mechanically relevant robot to perform an ultrasound examination is an important concern in the art. In particular, the improved training method to be developed in the art has to allow the use of the trained robot for performing a safe, precise and repeatable ultrasound examination.

Consequently, a technical problem to be solved by the present invention is to develop a training method for training a robot to perform an ultrasound examination that would allow the above disadvantage of the prior art training method to be at least partly eliminated.

SUMMARY OF THE INVENTION

It is an objective of the present invention to develop an improved training method for training a robot to perform an ultrasound examination, the improved training method solving at least the above technical problem.

To achieve the objective of the present invention, as embodied and broadly described herein, there is provided a training method for training a robot to perform an ultrasound examination. The training method comprises: providing a patient-specific anatomy 3D model; creating, by a 3D scanner, a 3D model of a patient body surface; manually moving a robot arm from a starting position to at least one predetermined training position on the patient body, the robot arm holding an ultrasound-imaging probe and being provided with a robot arm position tracker and at least one force sensor; manually actuating the ultrasound-imaging probe to produce at least one ultrasound image when the robot arm is manually moved to each of the training positions; sensing, by the force sensors, a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is positioned at least in the starting position and each of the training positions; computing a movement trajectory of the robot arm based on a plurality of robot arm locations on the created body surface 3D model, the locations corresponding to the robot arm positions tracked by the robot arm position tracker during the manual movement of the robot arm and associated with the created body surface 3D model and the provided anatomy 3D model; creating a robot-training model by associating the computed movement trajectory with the forces sensed in the robot arm positions and with the produced ultrasound images.

In an embodiment of the present invention, the provided training method further comprises sensing a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is moved between the starting position and the training positions.

In one embodiment of the present invention, the provided training method further comprises updating, by the 3D scanner, the created body surface 3D model during the manual movement of the robot arm and correcting the robot arm locations according to the updated body surface 3D model.

In another embodiment of the present invention, the provided training method further comprises saving the created robot-training model in a robot data storage as robot control instructions.

In various embodiments of the present invention, the provided training method further comprises displaying, by a display, the produced ultrasound images to a user and accepting, by the user, at least one particular ultrasound image among the displayed ultrasound images.

In other embodiments of the present invention, the provided training method further comprises manually moving the robot arm from an initial position to the starting position; and computing a positioning trajectory of the robot arm based on a plurality of robot arm spatial locations in relation to at least one reference point on the created body surface 3D model, the spatial locations corresponding to the robot arm positions tracked by the robot arm position tracker during the initial movement of the robot arm and associated with the created body surface 3D model. In some embodiments of the present invention, the robot arm spatial locations corresponding to the patient body in the provided training method may be further associated with the provided anatomy 3D model.

In one embodiment of the present invention, the provided training method further comprises sensing, by the force sensors, a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof during the initial movement of the robot arm; and creating a robot-positioning model by associating the computed positioning trajectory with the forces sensed during the initial movement of the robot arm.

The training method according to any of the above-disclosed aspects of the present invention allows the trained robot to effectively perform an ultrasound examination, in particular because the training method provides an improved robot-training model associating the patient-specific anatomy 3D model, the body surface 3D model of the patient body, the tracked robot arm positions, the forces applied by the robot arm to the ultrasound-imaging probe, and the produced ultrasound images.

BRIEF DESCRIPTION OF THE DRAWINGS

While the specification concludes with claims particularly pointing out and distinctly claiming the present invention, it is believed the same will be better understood from the following description taken in conjunction with the accompanying drawings, which illustrate, in a non-limiting fashion, the best mode presently contemplated for carrying out the present invention, and in which like reference numerals designate like parts throughout the drawings, wherein:

FIG. 1 is a flow diagram of a training method for training a robot to perform an ultrasound examination according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention will now be described more fully with reference to the accompanying drawings, in which example embodiments of the present invention are illustrated. The subject matter of this disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein.

In the context of this document, unless explicitly stated otherwise, the term "patient" means first of all a potentially sick person (a member of the mammalian class) seeking medical advice or remaining under medical observation to have a disease diagnosed and/or treated, wherein the term "patient" also means potentially sick mammalian animals remaining under medical observation to diagnose and/or treat their disease.

Furthermore, in the context of this document, unless expressly stated otherwise, the term "mammal" means a human or an animal, in particular anthropoid and non-human primates, dogs, cats, horses, camels, donkeys, cows, sheep, pigs, and other well-known mammals.

Furthermore, in the context of this document, unless expressly stated otherwise, the term "user" means a sonographer or any suitably skilled health care professional authorized to place an ultrasound probe on a patient body skin or inside a patient body (in particular, via a gastrointestinal tract, vagina or blood vessels) and/or manipulate the ultrasound probe placed on patient body skin or inserted inside the patient body, and/or remove the ultrasound probe from the patient body skin or the inner space of the patient body, wherein the healthcare professional may be, for example, surgeon, oncologist, endoscopist, thoracic surgeon, angiosurgeon, urologist, veterinarian, etc.

In the context of this document, unless explicitly stated otherwise, the term "patient-specific anatomy 3D model" means an anatomy 3D model corresponding to a particular patient type, wherein the patient type may be defined by a patient age, a patient gender, a mammal type and/or other similar patient features. Therefore, the anatomy 3D model is designated in the present document as patient-specific since it may correspond to a particular patient who could be a human, animal or another mammal relating to a particular age group and/or having certain body dimensions or body features, in particular to a male or female human relating to an infant, teenager, full-aged person, adult, middle-age person, old person, etc.).Fig. 1 illustrates flow diagram of a training method 10 for teaching or training a robot to perform an ultrasound examination according to the present invention. The robot to be trained by the training method 10 of fig. 1 may be implemented as any mechanically appropriate robotic system or robot known in the art, the robot comprising or being provided with the following: (a) a driven robot manipulator or robot arm which comprises at least six joints representing the number of degrees of freedom of the robot arm (i.e. the robot arm has six or more degrees of freedom); (b) a driving unit or module for driving the robot arm; (c) a control unit or module for controlling the operation of the robot, including the operation of the robot arm, processing data used or collected during the operation of the robot and controlling a data recording procedure, the recorded data being collected during the operation of the robot; and (d) a long-term memory or a local data storage for storing executable program instructions or commands controlling the operation of robot (in particular, the operation of functional modules integrated into the robot and mentioned in the present document and, if required, the operation of external devices communicatively connected to the robot and mentioned in the present document) and allowing the functional modules to implement their functionalities. Meanwhile, the local data storage of the robot further stores different additional or supplemental data used by the functional modules to provide their outputs.

The robot arm is used in the robot for holding an ultrasound-imaging probe which performs ultrasound scans or ultrasound examinations. The robot arm is provided with a robot arm position tracker for tracking positions of the robot arm during movement thereof, including angles between the ultrasound-imaging probe and the body surface to be examined. The robot arm is also provided with at least one force sensor for sensing a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the ultrasound-imaging probe is manipulated on the patient body skin or inside the patient body.

The force sensors are used in combination to detect a plurality of forces applied by the ultrasound-imaging probe to the patient body when a user manually manipulates the ultrasound-imaging probe in a teaching or training mode. Meanwhile, the force sensors may each be implemented as a strain gage sensor.

The position tracker is used for tracking a robot arm position in a three-dimensional (3D) system. Meanwhile, the position tracker may be implemented as any appropriate position tracking system known in the art, including a position tracking system built into the robot arm. The force sensors may be incorporated into the robot arm or may be provided in an effector used for adapting the ultrasound-imaging probe to the robot arm (for example, the force sensors may be incorporated between an inner housing and an outer housing of the effector) and allowing the robot arm in an operating mode to imitate a natural human hand grasp used by the user to manipulate the ultrasound-imaging probe. In other words, the effector allows the robot arm in the training mode to be trained to simulate natural movements of the user.

The ultrasound-imaging probe (also interchangeably referred to in the art as an ultrasound transducer or an ultrasound scanner) held by the robot arm may be implemented as any appropriate ultrasound-imaging probe known in the art.

The ultrasound-imaging probe can both generate or emit ultrasound waves and detect ultrasound echoes reflected back thereto (i.e. returned signals). Generally, active elements in the ultrasound-imaging probe are made of special ceramic crystal materials called piezoelectrics. The piezoelectrics are able to produce sound waves when an electric field is applied to them, and they can work in reverse, producing an electric field when a sound wave hits the piezoelectrics.

When used in the robot, the ultrasound-imaging probe sends out a beam of ultrasound waves into a patient body; ultrasound waves are reflected back to the ultrasound-imaging probe by boundaries between body tissues in the path of the beam (e.g., a boundary between a fluid and a soft tissue or between a tissue and a bone). When these ultrasound echoes hit the ultrasound-imaging probe, they generate electrical signals, wherein the ultrasound-imaging probe computes or calculates the distance from the ultrasound-imaging probe to the tissue boundary based on the speed of the detected ultrasound echoes and the time of each echo’s return. These distances are then used to generate two-dimensional (2D) images or three-dimensional (3D) images of tissues and organs of the patient body.
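By way of a minimal, hypothetical illustration (not part of the patent text), the depth of a reflecting tissue boundary follows from the round-trip echo time as d = c * t / 2, assuming a nominal average speed of sound in soft tissue of roughly 1540 m/s:

# Illustrative sketch only; the speed value and function name are assumptions.
SPEED_OF_SOUND_TISSUE_M_S = 1540.0  # conventional average for soft tissue

def echo_depth_m(round_trip_time_s, speed_m_s=SPEED_OF_SOUND_TISSUE_M_S):
    # Divide by two because the pulse travels to the boundary and back.
    return speed_m_s * round_trip_time_s / 2.0

print(round(echo_depth_m(65e-6) * 100, 2), "cm")  # an echo after 65 us -> about 5 cm deep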

The training method 10 of fig. 1 comprises at least the following main actions or operations (also interchangeably referred to in the art as stages or steps):

(1) providing a patient-specific anatomy 3D model;

(2) creating, by a 3D scanner, a 3D model of the patient body surface;

(3) manually moving the robot arm from a starting position to at least one predetermined training position on the patient body, the robot arm holding the ultrasound-imaging probe and being provided with the robot arm position tracker and at least one force sensor;

(4) manually actuating the ultrasound-imaging probe to produce at least one ultrasound image when the robot arm is manually moved to each of the training positions;

(5) sensing, by the force sensors, a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is positioned at least in the starting position and each of the training positions;

(6) computing a movement trajectory of the robot arm based on a plurality of robot arm locations on the created body surface 3D model, the locations corresponding to the robot arm positions tracked by the robot arm position tracker during the manual movement of the robot arm and associated with the created body surface 3D model and the provided anatomy 3D model; and

(7) creating a robot-training model by associating the computed movement trajectory with the forces sensed in the robot arm positions and with the produced ultrasound images.
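The following Python sketch is a hypothetical illustration of how the data produced by operations (1)-(7) might be organized in memory; all class and field names are assumptions introduced here for clarity and are not taken from the patent:

from dataclasses import dataclass, field
from typing import List, Tuple

Pose = Tuple[float, float, float]  # x, y, z coordinates of the robot arm in the 3D system

@dataclass
class TrainingSample:
    pose: Pose                      # a tracked robot arm position (operation (3))
    forces_n: List[float]           # forces sensed at that position (operation (5))
    image_refs: List[str] = field(default_factory=list)  # ultrasound images (operation (4))

@dataclass
class RobotTeachingModel:           # the result of operation (7)
    anatomy_model_id: str           # the provided anatomy 3D model (operation (1))
    body_surface_scan_id: str       # the created body surface 3D model (operation (2))
    trajectory: List[Pose]          # the computed movement trajectory (operation (6))
    samples: List[TrainingSample]

def build_teaching_model(anatomy_id: str, scan_id: str,
                         samples: List[TrainingSample]) -> RobotTeachingModel:
    # Simplified stand-in for operation (6): the trajectory is the ordered poses.
    trajectory = [s.pose for s in samples]
    return RobotTeachingModel(anatomy_id, scan_id, trajectory, samples)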

The above operation (1) includes the following sub-operations: (i) communicating data on a patient type used for training the robot to the control module of the robot, wherein the patient type may be defined by a patient age, a patient gender, a mammal type and/or other similar patient features; and (ii) automatically extracting, by the control module of the robot, a patient-specific anatomy 3D model from the data storage of the robot, the extracted anatomy 3D model depending on the patient type communicated to the control module. Therefore, the anatomy 3D model provided in operation (1) may correspond to a human, animal or another mammal relating to a particular age group and/or having certain body dimensions or body features (in particular, a male or female human who is an infant, teenager, full-aged person, adult, middle-aged person, old person, etc.). It is to be noted that the patient type may be provided by the user as a text or voice input communicated to the control module of the robot, wherein the user may use standard I/O means (e.g. a keyboard, a mouse-pointing device, a touch-screen display, a microphone, etc.) connected to the robot. Furthermore, the patient-specific anatomy 3D model corresponding to the current patient type used for training the robot is stored for the current training session in the data storage of the robot, wherein the stored patient-specific anatomy 3D model may be communicated to the control module of the robot to provide the operation or the training of the robot. In one embodiment of the present invention, the patient type used for training the robot may be preliminarily communicated to the robot to be trained; in particular, the patient type may be originally stored in the local data storage of the robot and automatically extracted therefrom by the control module of the robot when the patient type is required to be used to provide the operation of the robot.
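A hypothetical sketch of sub-operations (i) and (ii) of operation (1), in which the communicated patient type is used as a key to extract a stored anatomy 3D model; the storage layout and names below are assumptions:

from typing import Dict, NamedTuple

class PatientType(NamedTuple):
    mammal: str       # e.g. "human"
    gender: str       # e.g. "female"
    age_group: str    # e.g. "adult"

# Placeholder for the robot data storage: patient type -> stored anatomy 3D model reference.
ANATOMY_MODEL_STORE: Dict[PatientType, str] = {
    PatientType("human", "female", "adult"): "anatomy/human_female_adult.obj",
    PatientType("human", "male", "infant"): "anatomy/human_male_infant.obj",
}

def extract_anatomy_model(patient_type: PatientType) -> str:
    # Sub-operation (ii): the control module looks up the model matching the patient type.
    if patient_type not in ANATOMY_MODEL_STORE:
        raise LookupError(f"no anatomy 3D model stored for {patient_type}")
    return ANATOMY_MODEL_STORE[patient_type]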

In another embodiment of the present invention, the patient type may be received, by the control module of the robot, from a data server, a cloud storage, an external storage or a similar external storage device used for storing data on the patient type to be used for training the robot. According to this embodiment, the robot is further provided with a communication module communicatively connected (e.g. in a wireless manner via a communication network or in a wired manner via a physical cable) to the data server, cloud storage, external storage or similar storage device to receive data on the patient type therefrom, wherein the robot is also provided with a communication bus communicatively coupled to both the communication module and the control module.

In other embodiments of the present invention, the patient-specific anatomy 3D model to be used for training the robot may be received, by the control module of the robot, from a data server, a cloud storage, an external storage or a similar external storage device used for storing anatomy 3D models to be used for training the robot. According to these embodiments, the robot is further provided with a communication module communicatively connected (e.g. in a wireless manner via a communication network or in a wired manner via a physical cable) to the data server, cloud storage, external storage or similar storage device to receive the anatomy 3D model therefrom, wherein the robot is also provided with a communication bus communicatively coupled to both the communication module and the control module.

In the above embodiments where the communication module needs to be used in the robot to perform the training method 10 of fig. 1, the communication module may be implemented as a network adapter provided with slots appropriate for connecting physical cables of desired types thereto if wired connections are provided between the robot and any external devices mentioned in the present document, or as a network adapter in the form of a WiFi adapter, 3G/4G/5G adapter, LTE adapter, or any other appropriate adapter supporting any known wireless communication technology if wireless connections are provided between the robot and any external storage devices mentioned in the present document. Moreover, such a communication module may be implemented as a network adapter supporting a combination of the above-mentioned wired or wireless communication technologies, depending on the types of connections provided between the robot and any external storage devices mentioned in the present document.

The 3D scanner used for performing the above operation (2) may be implemented as any appropriate stationary, handheld or mobile 3D scanner or 3D scanning device used in healthcare applications for producing body-surface 3D scans.

The body-surface 3D scans produced by the 3D scanner allow the control module to create a 3D whole-body model (also referred to in the art as a 3D avatar). In other words, the 3D scanner allows capturing in 3D the full patient body or particular parts of the patient body, wherein the created 3D body model represents accurate contours of the patient body taking into account body sizes, body shapes, body features being specific or individual for a particular patient, body textures, and a patient posture.

The created 3D model of the patient body is communicated by the 3D scanner to the robot, wherein the 3D scanner is communicatively connected to the robot, and the communication module of the robot is configured to provide data transfer between the robot and the 3D scanner. Meanwhile, the 3D body model is stored in the local data storage of the robot, the 3D body model being associated with the examined patient used for training the robot and with the current training session, wherein the stored 3D body model may be used by the control module of the robot to provide the operation or the training of the robot.

In one embodiment of the present invention, the 3D scanner used for performing the above operation (2) may be integrated with or mounted on the robot arm, thereby being another functional module of the robot. In this embodiment, the 3D scanner may be a scanning module of the robot, the scanning module being controlled by the control module of the robot, wherein the created body surface 3D model may be stored in the local data storage of the robot.

Then, to perform the above operation (3) the user personally helps the robot arm to hold the ultrasound-imaging probe in an appropriate manner (i.e. manually providing an appropriate orientation of the ultrasound-imaging probe in relation to the patient body and manually controlling forces applied by the robot arm to the ultrasound-imaging probe to hold thereof) and manually moves the hand-guided robot arm from the starting position to at least one predetermined training position on the patient body, wherein the patient used for training the robot takes a particular position relating to a particular ultrasound examination to be subsequently performed with the trained robot.

When the ultrasound-imaging probe is positioned directly in each of the training positions, the ultrasound-imaging probe at least takes a required position in relation to a particular examined area or portion on the patient body, has a required orientation in relation to the particular examined body area or portion and is subjected to required forces applied by the user thereto, thereby allowing the ultrasound-imaging probe to capture the most representative ultrasound images corresponding to a particular ultrasound examination.

In case there are two or more training positions to be taken by the robot arm in relation to the patient body in order to perform a particular ultrasound examination, the robot arm is manually moved by the user in the above-described manner from the starting position to a first training position, then manually moved from the first training position to a second training position, and so on. In other words, in this case the robot arm needs to be successively manually moved by the user from the starting position to the training positions.

Furthermore, when the robot arm is manually moved, the robot arm position tracker detects or tracks robot arm positions in a three-dimensional system, so that each of the robot arm positions taken by the robot arm during the manual movement thereof (i.e. the starting position and each of the training positions) corresponds to particular coordinates in the three-dimensional system. Meanwhile, the tracked robot arm positions are stored in the local data storage of the robot, wherein the stored robot arm positions each corresponding to particular coordinates in the three-dimensional system may be used by the control module of the robot to provide the operation or the training of the robot.
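The sketch below illustrates, under assumed names and units, how the positions tracked during the manual movement could be logged together with a label distinguishing the starting position, the training positions and intermediate positions:

import time
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedPosition:
    timestamp_s: float
    xyz: Tuple[float, float, float]   # coordinates in the three-dimensional system
    label: str                        # "start", "training_1", "intermediate", ...

position_log: List[TrackedPosition] = []

def record_position(xyz: Tuple[float, float, float], label: str = "intermediate") -> None:
    position_log.append(TrackedPosition(time.time(), xyz, label))

record_position((0.32, -0.10, 0.55), label="start")
record_position((0.35, -0.08, 0.52))                 # intermediate sample during movement
record_position((0.41, -0.05, 0.48), label="training_1")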

In another embodiment of the present invention, the 3D scanner may produce additional or supplemental body-surface 3D scans during the manual movement of the robot arm and update the created whole-body surface 3D model based on the supplemental body-surface 3D scans, wherein the control module of the robot may correct the robot arm locations according to the updated body surface 3D model.

To perform the above operation (4) the user manually actuates the ultrasound-imaging probe, and the actuated ultrasound-imaging probe automatically generates or produces at least one ultrasound image in each of the training positions. The ultrasound images produced by the ultrasound-imaging probe are stored in the data storage of the robot, wherein each ultrasound image is associated with a corresponding one of the training positions. Meanwhile, the stored ultrasound images may be used by the control module of the robot to provide the operation or the training of the robot.

In one embodiment of the present invention, the ultrasound-imaging probe may be further actuated by the user to produce supplemental or additional ultrasound images when the robot arm is moved between the starting position and the training positions, and/or moved between the training positions, and/or directly located in the starting position.

Then, to perform the above operation (5) the user manually actuates the force sensors when the robot arm is positioned at least in the starting position and each of the training positions. Therefore, the force sensors sense a plurality of forces or a set of forces (in particular, the sensed forces may be pressing forces) applied by the robot arm to the ultrasound-imaging probe to hold the ultrasound-imaging probe when the robot arm is directly located in each of the above-stated robot arm positions. Sets of forces, each associated with a corresponding one of the above-stated robot arm positions, are stored in the local data storage of the robot. Meanwhile, the stored sets of forces may then be used by the control module of the robot to provide the operation or the training of the robot.
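As a hypothetical companion to the position log sketched above, the sets of forces sensed in operation (5) could be keyed by the corresponding robot arm position label; the sensor count and units are assumptions:

from typing import Dict, List

# position label -> readings (in newtons) from each force sensor at that position
forces_by_position: Dict[str, List[float]] = {}

def store_forces(position_label: str, sensor_readings_n: List[float]) -> None:
    forces_by_position[position_label] = list(sensor_readings_n)

store_forces("start", [1.8, 2.1, 1.9])
store_forces("training_1", [4.6, 5.0, 4.8])  # firmer contact while imaging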

In one embodiment of the present invention, the force sensors may be automatically activated when actuating the robot and may perform force measurements from the robot actuation moment to the moment when the robot is deactivated, so that the above-mentioned forces applied by the robot arm to the ultrasound-imaging probe may be sensed by the force sensors for each position taken by the robot arm during the manual movement thereof in relation to the patient body, including the starting position and each of the training positions. In another embodiment of the present invention, the force sensors may be manually activated by the user when the robot arm is located in the starting position and may then operate, performing force measurements, until the robot is deactivated.

In other embodiments of the present invention, the force sensors may further sense forces applied by the robot arm to the ultrasound-imaging probe to hold thereof when the robot arm is moved between the starting position and the training positions, i.e. in the process of movement between the starting position and the first training position, then in the process of movement between the first training position and the next, second training position, etc. Therefore, in these embodiments, each set of forces sensed by the force sensors during the movement of the robot arm between the starting position and one of the training positions corresponds to a particular robot arm intermediate position between the starting position and said training position.

Then, to perform the above operation (6) the control module of the robot performs at least the following sub-operations: (i) extracting, from the data storage of the robot, data on robot arm positions tracked by the robot arm position tracker during the manual movement of the robot arm; (ii) extracting, from the data storage of the robot, data on the body surface 3D model created by the 3D scanner and data on the anatomy 3D model; (iii) associating the extracted robot arm positions with the extracted body surface 3D model in order to have precise robot arm locations on the created body surface 3D model, wherein the robot arm locations are each associated with the body surface 3D model; (iv) associating the robot arm locations with the anatomy 3D model in order to provide the correlation between the robot arm locations and the anatomy 3D model (e.g. correlation between a particular robot arm location and a particular patient organ or any other body structure to be examined), wherein the robot arm locations are each further associated with the anatomy 3D model; and (v) computing a movement trajectory of the robot arm based on the robot arm locations associated with both the anatomy 3D model and the body surface 3D model. Meanwhile, the computed movement trajectory of the robot arm is stored in the data storage of the robot, wherein the stored movement trajectory may be used by the control module to provide the operation or the training of the robot.
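The sub-operations of operation (6) could be sketched as follows, assuming for illustration that the body surface 3D model is available as a point cloud and that the anatomy 3D model provides a label per surface point; these representations are assumptions, not the patent's own data formats:

import math
from typing import Dict, List, Sequence, Tuple

Point = Tuple[float, float, float]

def nearest_surface_point(point: Point, surface_cloud: Sequence[Point]) -> Point:
    # Sub-operation (iii): associate a tracked position with the body surface 3D model.
    return min(surface_cloud, key=lambda p: math.dist(p, point))

def compute_movement_trajectory(tracked_positions: List[Point],
                                surface_cloud: Sequence[Point],
                                anatomy_labels: Dict[Point, str]) -> List[Tuple[Point, str]]:
    trajectory = []
    for pos in tracked_positions:
        surface_pt = nearest_surface_point(pos, surface_cloud)
        # Sub-operation (iv): associate the location with the anatomy 3D model.
        organ = anatomy_labels.get(surface_pt, "unknown")
        trajectory.append((surface_pt, organ))
    # Sub-operation (v): the ordered, labelled surface locations form the trajectory.
    return trajectory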

Then, to perform the above operation (7) the control module of the robot performs at least the following sub-operations: (i) extracting, from the data storage of the robot, data on the robot arm movement trajectory computed by the control module; (ii) extracting, from the data storage of the robot, data on the forces sensed by the force sensors for the robot arm training positions; (iii) extracting, from the data storage of the robot, data on the ultrasound images produced by the ultrasound-imaging probe for the robot arm training positions; (iv) associating the extracted robot arm movement trajectory with both the extracted forces and the extracted ultrasound images to form or create the robot-training model.

The created robot-training model is stored in the data storage of the robot as robot control instructions, wherein the stored robot-training model may then be used by the control module of the robot to provide the most precise and effective control of the robot when it is used for performing the same or a similar ultrasound examination for a patient having the same or similar body features.
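A hypothetical way to persist the created robot-training model as robot control instructions is sketched below; the JSON layout and file name are assumptions made for illustration only:

import json
from pathlib import Path

def save_robot_training_model(model: dict, path: str = "robot_training_model.json") -> None:
    # Write the merged trajectory/forces/images record to the robot data storage.
    Path(path).write_text(json.dumps(model, indent=2))

example_model = {
    "anatomy_model_id": "human_female_adult",
    "body_surface_scan_id": "scan_0001",
    "trajectory": [
        {"pose": [0.32, -0.10, 0.55], "forces_n": [1.8, 2.1, 1.9], "image_ref": None},
        {"pose": [0.41, -0.05, 0.48], "forces_n": [4.6, 5.0, 4.8], "image_ref": "img_0001.png"},
    ],
}
save_robot_training_model(example_model)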

In one embodiment of the present invention, before moving the robot arm from the starting position to a first training position, the user needs to manually move the robot arm with the ultrasound-imaging probe held by the user in the above-described manner from an initial position to the starting position, wherein the robot arm in the initial position may be spaced from the patient body, in particular from the starting position corresponding to a particular place or point on the patient body. In this embodiment, the control module of the robot may compute a positioning trajectory of the robot arm based on a plurality of robot arm spatial locations in relation to at least one reference point on the body surface 3D model created by the 3D scanner (such reference points may be preliminarily set by the user and communicated to the control module of the robot), the spatial locations corresponding to the robot arm positions tracked by the robot arm position tracker during the initial movement of the robot arm (i.e. while the robot arm is manually moved by the user from the initial position to the starting position) and associated with the created body surface 3D model. Furthermore, in this embodiment, the robot arm spatial locations corresponding to the patient body may be further associated with the anatomy 3D model. Moreover, in this embodiment, the force sensors may further sense a plurality of forces applied by the robot arm to the ultrasound-imaging probe to hold thereof during the initial movement of the robot arm, i.e. during the movement of the robot arm from the initial position to the starting position, and the control module of the robot may form or create a robot-positioning model by associating the computed positioning trajectory with the forces sensed during the initial movement of the robot arm.

In another embodiment of the present invention, the robot may be further provided with a display or communicatively connected to a display, wherein the display may be configured to display to the user the ultrasound images produced by the ultrasound-imaging probe when performing operation (4). The user may accept at least one particular ultrasound image among the displayed ultrasound images, wherein such accepted ultrasound images may correspond to ultrasound images being the most representative for a particular ultrasound examination and/or a particular patient. It is to be noted that the most appropriate ultrasound images may be accepted by inputting a user text or voice command and communicating such command to the control module of the robot, wherein the user may use standard I/O means (e.g. a keyboard, a mouse-pointing device, a touch-screen display, a microphone, etc.) connected to the robot.

Finally, it is to be noted that robot-training models stored in the data storage of the robot as robot control instructions to be communicated to the control module of the robot may then be used by the robot for performing an ultrasound examination for a new patient in a safe, precise and effective way. In particular, to perform an ultrasound examination for a new patient, the control module of the robot may use the most suitable robot-training model chosen by the control module based on initial input data relating to the patient to be examined (for example, a patient age, a patient gender, a mammal type and/or other appropriate patient parameters defining patient body features) and on the 3D body model initially produced by the 3D scanner for the new patient, thereby allowing the ultrasound examination to be performed for the new patient such that it simulates the best user practice which is the most suitable for the new patient, with due consideration of the individual body features of the patient. In some cases, to perform an ultrasound examination for a new patient, the control module may use two or more robot-training models in the above-mentioned way, wherein each robot-training model chosen by the control module of the robot relates to the most suitable real user practice for a particular part of the ultrasound examination.
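Choosing the most suitable stored robot-training model for a new patient could, for instance, be sketched as a simple scoring over patient parameters and a coarse body measure taken from the initial 3D scan; the scoring rule below is purely an illustrative assumption, not the patent's selection logic:

from typing import Dict, List

def model_match_score(model_meta: Dict, patient: Dict) -> float:
    # Higher score = closer match between the stored model and the new patient.
    score = 0.0
    for key in ("mammal", "gender", "age_group"):
        if model_meta.get(key) == patient.get(key):
            score += 1.0
    # Penalise differences in a coarse body dimension from the initial 3D body model.
    score -= abs(model_meta.get("torso_length_m", 0.0) - patient.get("torso_length_m", 0.0))
    return score

def choose_training_model(models: List[Dict], patient: Dict) -> Dict:
    return max(models, key=lambda m: model_match_score(m, patient))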

It is to be further noted that the created robot-training model, representing the collected or accumulated data merged together (i.e. representing the computed movement trajectory associated with the forces sensed in the robot arm positions and with the produced ultrasound images), may also be used for creating special training algorithms based thereon, including but not limited to neural networks having different known topologies, thereby allowing a robot using such training algorithms, when used in practice, to generate an individual movement trajectory for the robot arm and individual force vectors for each individual patient, taking into account the individual patient anatomy and body features.

While the invention has been described with reference to specific preferred embodiments, it is not limited to these embodiments. The invention may be modified or varied in many ways and such modifications and variations, as would be obvious to one of skill in the art, are within the scope and spirit of the invention and are included within the scope of the following claims.