

Title:
SYSTEM AND COMPUTER IMPLEMENTED METHOD FOR 3D PROCESSING OF A TOMOGRAPHY TEST
Document Type and Number:
WIPO Patent Application WO/2020/188428
Kind Code:
A1
Abstract:
The invention describes a computer-implemented method for 3D processing of a tomography test (E_TAC) comprising the steps of performing a tomography test (E_TAC) on a patient (P) and generating a corresponding file of output data (TAC_out); performing software processing of said file of output data (TAC_out) to generate a 3D file (TAC_3D), wherein said 3D file (TAC_3D) comprises a plurality of 3D cells (3D_cell) that determine a 3D mesh-like structure (3D_mesh) of said patient image data (Data_Imm); determining a mesh reduction factor (R_mesh) for said 3D file (TAC_3D), thus determining a reduced 3D file (R_TAC_3D); transmitting said reduced 3D file (R_TAC_3D) conditionally in a local network (LAN); providing, in said local network (LAN), an integrated viewing station (30i) configured to perform the steps of: receiving said reduced 3D file (R_TAC_3D); determining a 3D hologram graphic model (OLO_3D) as a function of said reduced 3D file (R_TAC_3D); providing a viewer (303) for interacting with a user (U) and for representing said 3D hologram (OLO_3D) in a real viewing space (R_space); calculating said mesh reduction factor (R_mesh) of said reduced 3D file (R_TAC_3D), thereby guaranteeing for said viewer (303) complete compatibility with said 3D hologram graphic model (OLO_3D) generated and allowing the user (U) to interact with said 3D hologram graphic model (OLO_3D) generated. The invention further describes a system for 3D processing of a tomography test comprising the devices/apparatus/control units provided for implementing the method.

Inventors:
CHECCACCI IACOPO (IT)
LAPENTA GIUSEPPE (IT)
Application Number:
PCT/IB2020/052272
Publication Date:
September 24, 2020
Filing Date:
March 13, 2020
Assignee:
WITAPP S R L (IT)
International Classes:
G16H30/20; G16H30/40; G16H40/63
Foreign References:
US20130044856A12013-02-21
US20180165867A12018-06-14
Other References:
MORALES MOJICA CRISTINA MARIE ET AL: "Holographic Interface for three-dimensional Visualization of MRI on HoloLens: A Prototype Platform for MRI Guided Neurosurgeries", 2017 IEEE 17TH INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOENGINEERING (BIBE), IEEE, 23 October 2017 (2017-10-23), pages 21 - 27, XP033293739, DOI: 10.1109/BIBE.2017.00-84
Attorney, Agent or Firm:
BELLASIO, Marco et al. (IT)
Claims:
CLAIMS

1. Computer-implemented method for 3D processing of a tomography test (E_TAC) comprising the steps of:

performing a tomography test (E_TAC) on a patient (P) and generating a corresponding file of output data (TAC_out) in a first predefined format for tomography tests (F_TAC), wherein said output file (TAC_out) comprises one or more from among, at least:

- patient data (Data_P);

- patient image data (Data_Imm) comprising a sequence of two-dimensional slices (seq_2D) representative of an internal section of the patient (P);

performing a software processing of said data (Data_P, Data_Imm) to generate a 3D file (TAC_3D) in a second predefined format for tomography tests (F2_TAC) as a function of said output file (TAC_out) generated, wherein said 3D file (TAC_3D) comprises a plurality of 3D cells (3D_cell) that determine a 3D mesh-like structure (3D_mesh) of said image data of the patient (Data_Imm);

- determining a mesh reduction factor (R_mesh) for said 3D file (TAC_3D), thus determining a reduced 3D file (R_TAC_3D);

- transmitting said reduced 3D file (R_TAC_3D) conditionally in a local network (LAN);

wherein the method further comprises the step of:

providing, in said local network (LAN), an integrated viewing station (30i) comprising identification, activation and configuration information (info_drive) characteristic of said integrated viewing station (30i), and provided to perform the steps of:

- receiving said reduced 3D file (R_TAC_3D) transmitted by said second processing station (20);

- determining a 3D hologram graphic model (OLO_3D) of said patient image data (Data_Imm) as a function of said reduced 3D file (R_TAC_3D);

- providing a viewer (303) for interacting with a user (U) and for representing said 3D hologram (OLO_3D) in a real viewing space (R_space),

- calculating said mesh reduction factor (R_mesh) of said reduced 3D file (R_TAC_3D) as a function of said activation and configuration information (info_drive), thereby guaranteeing for said viewer (303) complete compatibility with said 3D hologram graphic model (OLO_3D) generated and allowing the user (U) to interact interactively with said 3D hologram graphic model (OLO_3D) generated.

2. The method according to claim 1, comprising - prior to the step of conditionally transmitting said reduced 3D file (R_TAC_3D) - the step of:

- identifying in the local network (LAN) an integrated viewing station (30i) previously configured to receive said 3D file (TAC_3D);

- detecting said identification, activation and configuration information (info_drive_i) comprised in said storage module (301) of said integrated viewing station (30i);

- sending said identification, activation and configuration information (info_drive_i) to determine a mesh reduction factor (R_mesh).

3. The method according to claim 2, further comprising the step of:

- determining said mesh reduction factor (R_mesh) for said identified integrated viewing station (30i) as a function of:

said identification, activation and configuration information (info_drive_i) received and/or

said viewer (303) provided.

4. The method according to claim 3, further comprising the step of:

- transmitting in said local network (LAN) said 3D file (TAC_3D) to said identified integrated viewing station (30i) through a data pack comprising:

- an initialisation field (C_I);

- a message field (C_II), in particular comprising messages for and from the viewer to handle the sending of data;

- a size field (C_III) comprising the size of the data segment to be sent;

- a data field (C_IV) comprising the data to be sent, of the size indicated in the size field (C_III);

wherein said size field (C_III) is defined as a function of said identification, activation and configuration information (info_drive) received.

5. The method according to any one of the preceding claims, wherein said software processing of said data (Data_P, Data_Imm) takes place through the sub-steps of:

- receiving said output file (TAC_out);

- graphically processing said sequence of two-dimensional slices (seq_2D);

- generating said 3D file (TAC_3D) in said second predefined format for tomography tests (F2_TAC) as a function of said output file (TAC_out) received and of said processing of two-dimensional slices (seq_2D).

6. The method according to any one of the preceding claims, wherein said step of performing software processing comprises processing distinct 3D models provided in a combined view in said 3D file (TAC_3D), so that said overall 3D file (TAC_3D) represents various tissues of the patient (P) highlighted in different ways, and

the method comprises, after the step of providing said viewer (303), a step of selecting said one or more tissues of the patient (P) from the reduced 3D file (R_TAC_3D).

7. The method according to any one of claims 1 to 6, wherein said software processing of said data (Data_P, Data_Imm) and/or the one or more tissues of the patient highlighted in different ways takes place through an application of a neural network system (Al) starting from said output file (TAC_out) generated by said first processing station (10).

8. The method according to claim 7, wherein said neural network system (Al) comprises a convolutional neural network trained to recognise, based on said output file (TAC_out), which regions correspond to different tissues and/or organs of interest.

9. The method according to claim 8, wherein said convolutional neural network produces a semantic segmentation, based on said output file (TAC_out), which assigns a specific class, such as bone, vessel, and other classes of tissues and/or organs, to every pixel identified.

10. The method according to claim 9, wherein said convolutional neural network performs:

- a training step, in which every input tomography test comprised in said output file (TAC_out) is accompanied by a corresponding reference segmentation; and

- a calibration step for the network parameters guided by stochastic gradient descent in order to minimise a measure of error, at the level of individual pixels, between the output of the network and said reference segmentation.

11. The method according to any one of the preceding claims, comprising the step performed in said integrated viewing station (30i) of:

- performing said interaction between the user (U) and the 3D hologram graphic model (OLO_3D) by implementing one or more functions from among movement, rotation and enlargement of said 3D hologram graphic model (OLO_3D) preferably selectable from a menu (100) of said integrated viewing station (30i).

12. The method according to claim 11, wherein said movement, rotation and enlargement functions comprise the actions of:

- detection of a hand (M) of said user (U);

- selection gestures with said hand (M) of said user (U);

- detection of a movement of said hand (M) in said selection status;

- detection of a movement of two hands (M) in said selection status.

13. A 3D processing system of a tomography test comprising:

a first processing station (10) provided to perform a tomography test (E_TAC) on a patient (P) and to generate a corresponding output data file (TAC_out) in a first predefined format for tomography tests (F_TAC), wherein said output file (TAC_out) comprises one or more from among, at least:

- patient identification data (Data_P);

- patient image data (Data_Imm) comprising a sequence of two- dimensional slices (seq_2D) representative of an internal section of the patient (P);

a second processing station (20) comprising:

- a first receiving module (200) configured to receive said output file (TAC_out);

- a 3D calculating module (202) configured to generate a 3D file (TAC_3D) in a second predefined format for tomography tests (F2_TAC) as a function of said output file (TAC_out) received, wherein said 3D file (TAC_3D) comprises a plurality of 3D cells (3D_cell) that determine a 3D mesh-like structure (3D_mesh) of said image data of the patient (Data_Imm);

- a reduction module (203) configured to determine a mesh reduction factor (R_mesh) for said 3D file (TAC_3D), thus determining a reduced 3D file (R_TAC_3D),

- a conditioned transmission module (204) configured to conditionally transmit in a local network (LAN) said reduced 3D file (R_TAC_3D);

an integrated viewing station (30i; i=1..n) in said local network (LAN), configured to receive a reduced 3D file (R_TAC_3D) transmitted by said second processing station (20), and comprising:

- a storage module (301) comprising identification, activation and configuration information (info_drive_i) characteristic of said integrated viewing station (30i);

- a processing unit (302) configured to determine a 3D hologram graphic model (OLO_3D) of said patient image data (Data_Imm) as a function of said reduced 3D file (R_TAC_3D);

- a viewer (303) predisposed for interacting with a user (U) and configured to represent said 3D hologram (OLO_3D) in a real viewing space (R_space),

wherein said reduced 3D file (R_TAC_3D) has a mesh reduction factor (R_mesh) calculated as a function of said activation and configuration information (info_drive_i), thereby guaranteeing for said viewer (303) complete compatibility with said 3D hologram graphic model (OLO_3D) generated and allowing the user (U) to interact with said 3D hologram graphic model (OLO_3D) generated.

14. The system according to claim 13, wherein said conditioned transmission module (204) comprises an identification module (204A) configured to:

- send call signals (Call_i) in the local network (LAN) for identifying an integrated viewing station (30i) previously predisposed to receive said 3D file (TAC_3D);

- detect said identification, activation and configuration information (info_drive_i) comprised in said storage module (301) of said identified integrated viewing station (30i);

- send said identification, activation and configuration information (info_drive_i) detected to said reduction module (203).

15. The system according to claim 14, wherein said reduction module (203) is configured to determine said mesh reduction factor (R_mesh) for said identified integrated viewing station (30i), as a function of:

said identification, activation and configuration information (info_drive_i) received and/or

said viewer (303) provided.

16. The system according to any one of claims 13 to 15, wherein said second processing station (20) is configured to:

- receive said output file (TAC_out);

- graphically process said sequence of two-dimensional slices (seq_2D);

- generate said 3D file (TAC_3D) in said second predefined format for tomography tests (F2_TAC) as a function of said output file (TAC_out) received and of said processing of two-dimensional slices (seq_2D).

17. The system according to any one of claims 13 to 15, wherein said second processing station (20) comprises an application of a neural network system (Al) starting from said output file (TAC_out) generated by said first processing station (10), predisposed to generate said 3D file (TAC_3D) in said second predefined format for tomography tests (F2_TAC).

18. The 3D processing system for processing a tomography test according to any one of claims 13 to 17, wherein said conditioned transmission module (204) is configured to transmit in said local network (LAN) said reduced 3D file (R_TAC_3D) to said identified integrated viewing station (30i) through a data pack (data_pack_i) comprising:

- an initialisation field (C_I);

- a message field (C_II), in particular comprising messages for and from the viewer to handle the sending of data;

- a size field (C_III) comprising the size of the data segment to be sent;

- a data field (C_IV) comprising the data to be sent, of the size indicated in the size field (C_III);

wherein said size field (C_III) is defined as a function of said identification, activation and configuration information (info_drive) received.

19. The system for 3D processing of a tomography test according to any one of claims 13 to 18, wherein said second processing station (20) comprises a processing module (201) configured to process distinct 3D models provided in a combined view in said 3D file (TAC_3D), so that said overall 3D file (TAC_3D) represents various tissues of said patient (P) highlighted in different ways and

said integrated viewing station (30i) comprises a selection interface (U303) configured to enable the selection of one or more of said tissues of the patient (P) from said reduced 3D file (R_TAC_3D) generated.

20. The 3D processing system according to any one of claims 13 to 19, wherein said viewer (303) is configured to perform said interaction between the user (U) and the 3D hologram graphic model (OLO_3D) by implementing one or more functions from among movement, rotation and enlargement of said 3D hologram graphic model (OLO_3D), preferably selectable from a menu (100) of said integrated viewing station (30i).

21. The 3D processing system according to the preceding claim, wherein said movement, rotation and enlargement functions comprise the actions of:

- detection of a hand (M) of said user (U);

- selection gestures with said hand (M) of said user (U);

- detection of a movement of said hand (M) in said selection status;

- detection of a movement of two hands (M) in said selection status.

Description:
SYSTEM AND COMPUTER IMPLEMENTED METHOD FOR 3D PROCESSING

OF A TOMOGRAPHY TEST

FIELD OF APPLICATION

The present invention relates to a system for 3D processing of a tomography test.

The present invention further relates to a computer-implemented method for 3D processing of a tomography test.

In particular, the invention relates to a system/method for 3D processing of a tomography test wherein the tomography test comprises generating a file of output data comprising patient data and image data, and the following description makes reference to this field of application in order to simplify the disclosure.

PRIOR ART

Within hospitals, and in particular when a patient is admitted in emergency conditions, the information from a tomography test performed on the patient is currently transferred to the doctor, in particular the surgeon who requests the test under conditions of urgency, by recording the test images on a compact disc; this information is subsequently viewed by the requesting doctor by means of suitable viewing software.

The need that is most greatly felt by a surgeon while planning an operation, i.e. in the pre-operative phase, is to have a view of the patient’s tissues or organs that the operation must be performed on which is as detailed and complete as possible; a three-dimensional representation makes it possible to accurately reproduce the state of the area in which it will be necessary to operate and thus to plan the operation in optimal fashion.

Unfortunately, the software presently available to hospitals for this purpose enables the consultation of static, non-interactive information/images that oblige the surgeon who is planning the operation to ask the tomography technician, in particular a radiologist, for more precise details on the effects/movements of the parts involved; such details are usually provided by means of a video prepared ad hoc by the tomography technician.

A critical issue in this procedure is represented by the time it takes for the creation of a 3D tomography model and rendering of the video, which occupy the radiologist’s activity for an average time of 25 minutes, excluding further requests for modifications/additions made by the surgeon after viewing the model or due to errors arising from incomprehension between the parties.

A second critical issue is represented by the fact that the video created with a three-dimensional model is viewed by means of two-dimensional media such as a computer monitor, which does not give a perception of space and does not maintain the proportion of the organ to be operated on.

The procedure described thus poses three main problems:

• Static processing of the 3D view

• Lack of perception of the real dimensions, in particular the depth of the element being viewed

• Lengthy information transfer times

Added to these problems there is the critical issue of passing tomography data/information to the members of a medical staff so that each one may have an optimal view of the part to which the greatest attention must be paid during the operation.

The general object of the present invention is to overcome the drawbacks of the prior art described.

A specific object of the invention is to provide a system for 3D processing of a tomography test that is optimised in terms of efficiency of execution.

Another object of the invention is to provide a system for 3D processing of a tomography test that enables the surgeon to customise the 3D view directly and in real time.

A further object of the invention is to provide a system for 3D processing of a tomography test which enables tomographic images to be directed, in a targeted manner, to different members of a surgical staff prior to an operation.

SUMMARY OF THE INVENTION

In a first aspect of the invention, these and other objects are achieved by a computer-implemented method for 3D processing of a tomography test, according to what is claimed in claim 1.

In a second aspect of the invention, these and other objects are achieved by a system for 3D processing of a tomography test according to what is claimed in claim 13.

Further advantages are described in the dependent claims.

The invention achieves the following technical effects:

- efficiency of execution with consequent reduced times of analysis by surgeons, especially under emergency presurgical conditions;

- in particular, direct, real-time interaction is achieved between surgeons and the 3D view based on the specific planning needs of an urgent operation or needs arising spontaneously in the presurgical phase, with the advantage of not requiring subsequent video processing by the tomography operator;

- customisation of the 3D view;

- optimisation of the distribution of images to the specific doctors who perform the operation.

The aforementioned technical effects/advantages and other technical effects/advantages of the invention will emerge in greater detail from the following description of an example embodiment, given by way of illustration and not by way of limitation with reference to the appended drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of the system/method according to the invention, in an overall view.

Figure 2 is a detail of the data transmission in the system/method of figure 1.

Figure 3 is a detail of the transmission protocol in the system/method of figure 1.

Figure 4 is a schematic representation of an integrated viewing station comprised in the system according to the invention.

DETAILED DESCRIPTION

The design presented aims to improve diagnostic times for operators in the health sector by using digital diagnostic imaging solutions based on edge computing technologies.

The solution is capable of offering greater efficiency within hospitals and private clinics thanks to an improvement in performance, with a faster processing of images and more rapid use of the same, which may be used in planning an operation, i.e. in the presurgical phase.

The solution is mainly composed of a system of HW/SW platforms comprising software for generating computed tomography images, software for processing the images generated, software for adapting the images to enable an optimal view aimed at a specific user, and a mixed reality viewer for viewing the images adapted for the specific user.

The invention comprises performing a tomography test E_TAC on a patient P and generating a corresponding file of output data TAC_out in a first predefined format for tomography tests F_TAC.

With particular reference to figure 1 , a system for 3D processing of a tomography test E_TAC comprises a first processing station 10 configured to perform a tomography test E_TAC on a patient P.

The first processing station 10 preferably comprises a computed tomography machine.

The first predefined format for tomography tests F_TAC is preferably the “DICOM” format.

In addition to the actual image, the DICOM file also includes a header. Various information is contained in the DICOM header, for example: patient's name and surname, type of scan, image position and size, etc. The size of the header will obviously vary according to the amount of information stored. All of the information stored in the header is catalogued into groups of elements, also known as "DICOM tags".
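The tag-based layout of a DICOM header can be illustrated with a minimal sketch. The parser below handles only explicit-VR little-endian elements with short value lengths and operates on hand-crafted bytes; it is a simplified illustration of the encoding, not a substitute for a real DICOM reader.

```python
import struct

# Minimal parser for explicit-VR little-endian DICOM data elements
# (short-form VRs only; a real DICOM reader handles many more cases --
# this is an illustrative sketch, not a full parser).
def parse_elements(buf: bytes):
    tags = {}
    i = 0
    while i < len(buf):
        group, elem = struct.unpack_from("<HH", buf, i)   # tag: (group, element)
        vr = buf[i + 4:i + 6].decode("ascii")             # value representation
        (length,) = struct.unpack_from("<H", buf, i + 6)  # value length
        value = buf[i + 8:i + 8 + length].decode("ascii").rstrip()
        tags[(group, elem)] = (vr, value)
        i += 8 + length
    return tags

# Two hand-crafted elements: PatientName (0010,0010) and Modality (0008,0060).
header = (
    b"\x10\x00\x10\x00PN\x08\x00DOE^JOHN"
    b"\x08\x00\x60\x00CS\x02\x00CT"
)
tags = parse_elements(header)
print(tags[(0x0010, 0x0010)])   # ('PN', 'DOE^JOHN')
print(tags[(0x0008, 0x0060)])   # ('CS', 'CT')
```

The tag numbers (0010,0010) and (0008,0060) are standard DICOM identifiers for patient name and scan modality; everything else in the example is synthetic.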

The output file TAC_out comprises one or more from among, at least, patient data Data_P and patient image data Data_Imm comprising a sequence of two-dimensional slices seq_2D representative of an internal section of the patient P.

The invention comprises performing software processing of the patient data Data_P and the patient image data Data_Imm.

In a preferred embodiment of the invention, with reference to figure 1 , the system according to the invention comprises a second processing station 20, in particular an electronic digital processor of medical images, preferably CAT images, configured for that purpose.

In the course of the present description and in the subsequent claims, the second processing station 20 will be described as logically divided into distinct functional modules - storage modules or operating modules - which perform specific functions.

The second processing station 20 can consist of a single electronic device, appropriately programmed to perform the functions described, and the different modules can correspond to hardware entities and/or routine software belonging to the programmed device.

Alternatively, or in addition, such functions can be performed by a plurality of electronic devices over which the aforesaid functional modules can be distributed.

The second processing station 20, moreover, can avail itself of one or more processors for executing the instructions contained in the storage modules.

Moreover, the aforesaid functional modules can be distributed over different local or remote computers based on the architecture of the network they reside in.

With reference to figure 1 , the second processing station 20 is configured to receive the output file TAC_out.

The second processing station 20 comprises a first receiving module 200 configured for this function.

The second processing station 20 is further configured to graphically process the sequence of two-dimensional slices seq_2D.

The second processing station 20 comprises a processing module 201 configured for this function.

The second processing station is configured to generate a 3D file TAC_3D in a second predefined format for tomography tests F2_TAC as a function of the result of the processing.

The second predefined format for tomography tests F2_TAC is preferably an STL file format.

In particular, after the loading of the output file TAC_out, in particular in DICOM format, it is envisaged that one or more tissues of the patient may be selected so as to obtain an overall 3D representation depicting different tissues highlighted in different ways, in particular distinguished by different colours.

For this purpose, distinct 3D models are provided in a combined view, i.e. overlaid; the 3D file TAC_3D comprises the models treated like a single overall 3D model.

In other words, the generated 3D file TAC_3D comprises the representation of the one or more tissues of the patient highlighted in different ways, in particular distinguished by different colours, in a combined view.

Therefore, the processing step comprises processing distinct 3D models provided in a combined view in the 3D file TAC_3D, so that the overall 3D file TAC_3D represents various tissues of the patient highlighted in different ways, in particular distinguished by different colours.

The processing module 201 is configured for this purpose.

The technical effect is to render the tissues clearly distinguishable from one another. The 3D model comprising the 3D representations of tissues rendered in different ways can be sent as a single object to the application at the integrated viewing station 30i, in particular to the viewer 303. Via the integrated viewing station 30i, it is possible, by means of the viewer 303, to consult the 3D hologram comprising the tissues in the established colours. The viewer 303 is further configured to enable or disable the viewing of every single level of tissue.

The technical effect achieved is the viewing, by the user, of the hologram of all tissues simultaneously or only the selected ones.
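The per-tissue layering and toggling described above can be sketched as a simple data structure: one layer per tissue, each with its own colour and an independent visibility flag. The class and attribute names below are assumptions chosen for illustration, not the patent's actual software interface.

```python
from dataclasses import dataclass, field

# Illustrative sketch: an overall 3D model holding one layer per tissue,
# each of which can be shown or hidden independently, as the viewer (303)
# is described as allowing. Names are assumptions, not the patent's API.
@dataclass
class TissueLayer:
    name: str
    colour: str        # distinct colour per tissue, as in the description
    visible: bool = True

@dataclass
class Overall3DModel:
    layers: list = field(default_factory=list)

    def toggle(self, name: str, visible: bool) -> None:
        for layer in self.layers:
            if layer.name == name:
                layer.visible = visible

    def visible_layers(self):
        return [layer.name for layer in self.layers if layer.visible]

model = Overall3DModel([TissueLayer("bone", "white"),
                        TissueLayer("vessel", "red")])
model.toggle("vessel", False)       # hide one tissue level
print(model.visible_layers())       # ['bone']
```

Sending the model "as a single object" then amounts to serialising the whole layer list, with the viewer consulting each layer's flag at render time.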

The generated 3D file TAC_3D comprises a plurality of 3D cells 3D_cell that determine a 3D mesh-like structure 3D_mesh of the patient image data Data_Imm.

With reference to figure 1 , the second processing station 20 comprises a 3D calculating module 202 configured to produce the 3D file TAC_3D as previously described.

In a second embodiment of the invention, the software processing of the patient data Data_P and/or the patient image data Data_Imm and/or the one or more tissues of the patient highlighted in different ways is performed by applying a neural network system Al configured to generate the 3D file TAC_3D from the output file TAC_out generated by the first processing station 10.

In particular, the neural network is a convolutional neural network.

The network is trained to recognise, in an input tomography test (in particular in the DICOM format, 512x512 pixels), i.e. based on the output file TAC_out generated by the first processing station 10, which areas correspond to different tissues or organs of interest.

The computational model thus produces a semantic segmentation of the input tomography test which assigns a specific class, such as bone, vessel, and other classes of tissues and/or organs, to every pixel identified.
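The final per-pixel class assignment can be illustrated without any real CNN: given per-class scores for each pixel (as a trained network would output), semantic segmentation labels each pixel with the highest-scoring class. The class list and score values below are invented for illustration.

```python
# Illustrative sketch of the last step of semantic segmentation: assign
# each pixel the class with the highest score. The classes and the score
# values are invented for illustration; a real CNN produces the scores.
CLASSES = ["background", "bone", "vessel"]

def segment(scores):
    """scores[y][x] is a list of per-class scores for that pixel."""
    return [[max(range(len(CLASSES)), key=lambda c: px[c]) for px in row]
            for row in scores]

scores = [
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
    [[0.2, 0.1, 0.7],   [0.6, 0.3, 0.1]],
]
labels = segment(scores)
print(labels)                   # [[0, 1], [2, 0]]
print(CLASSES[labels[0][1]])    # bone
```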

During training, every input tomography test is accompanied by a corresponding reference segmentation, in particular prepared by an expert.

The dataset thus constructed is increased during training through ad hoc rotations and deformations.

The calibration of the network parameters is guided by stochastic gradient descent and has the objective of minimising a measure of error, at the level of individual pixels, between the output of the network and the desired reference segmentation.

A certain number of tomographic tests, not observed by the network during training, is used as the validation set to interrupt the gradient descent and the consequent updating of the network parameters.
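The calibration procedure described above — gradient descent on an error measure, interrupted when a held-out validation set stops improving — can be sketched on a toy scalar problem. The one-parameter model, the data, and the patience value below are all stand-ins for the real CNN and its reference segmentations.

```python
# Toy sketch of the calibration step: gradient descent on a single
# parameter w, minimising squared error against reference targets one
# sample at a time, and interrupted when the error on a held-out
# validation set stops improving (early stopping). Everything here is a
# stand-in for the real network and reference segmentations.
train = [(x, 2.0 * x) for x in range(1, 6)]           # target: w == 2
val = [(x + 0.5, 2.0 * (x + 0.5)) for x in range(1, 4)]

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.01
best_val, patience = float("inf"), 3
for epoch in range(100):
    for x, y in train:                  # one sample at a time (SGD-style)
        w -= lr * 2 * (w * x - y) * x   # per-sample gradient step
    v = mse(w, val)
    if v < best_val - 1e-9:
        best_val, patience = v, 3       # validation improved: keep going
    else:
        patience -= 1                   # validation no longer improving
        if patience == 0:
            break                       # interrupt the descent
print(round(w, 2))                      # 2.0
```

Real SGD would also shuffle samples each epoch; the deterministic order here just keeps the toy reproducible.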

The second processing station 20 is further configured to determine a mesh reduction factor R _mesh for the 3D file TAC_3D, thus determining a reduced 3D file R_TAC_3D.

With reference to figure 1 , the second processing station 20 comprises a reduction module 203 configured to determine the mesh reduction factor R_mesh.

The second processing station 20 further envisages transmitting the reduced 3D file R_TAC_3D conditionally in a local network LAN.

With reference to figure 1 , the second processing station 20 comprises a conditioned transmission module 204 configured for this function.

The invention further comprises providing at least one integrated viewing station 30i, i=1 ..n in the local network LAN.

A plurality of integrated viewing stations 30i can be provided, each with specific viewing features that can be customised according to the part of the patient to be shown during an operation.

Every integrated viewing station 30i comprises identification, activation and configuration information info_drive characteristic of the integrated viewing station 30i, which enable an optimisation of the work of the surgeon using the specific station.

With reference to figure 1 , the information in the integrated viewing station 30i is comprised in a storage module 301.

Consequently, the local network LAN according to the invention envisages the presence of a plurality of integrated viewing stations 30i configured to cooperate, but with different predefined functions.

At least one integrated viewing station 30i preferably comprises a viewer 303, in particular a mixed reality viewer.

One example of a standard integrated viewing station, i.e. without any customisation or possibility of selecting the input images, is Microsoft HoloLens.

According to the invention, there is provided a plurality of mixed reality viewers 303 associated with corresponding integrated viewing stations 30i. The provision of specific viewers 303 enables optimised viewing of various anatomical features that can also be analysed during a presurgical phase. For example, for every region of the patient’s body having a more complex anatomical structure, a viewer with extremely high resolution might be necessary, whereas in the case of a less complex region, a viewer with lower resolution but a greater computing power might be sufficient.

Balancing the features of the viewers enables the technical effect of resource processing optimisation.

Consequently, as will be better detailed further below in the description, the mesh reduction factor will be optimised also as a function of the features of the viewer 303 used, in particular a mixed reality viewer.

Preferably, a selection interface 205 is provided which is configured to select an integrated viewing station 30i as a function of the features of the viewer 303.

According to the invention, the conditioned transmission over the local network LAN is carried out by the conditioned transmission module 204, which comprises an identification module 204A configured to carry out the described identification functions.

The conditioned transmission comprises sending call signals Call_I in the local network LAN in order to: identify an integrated viewing station 30i previously configured to receive the 3D file TAC_3D; detect the identification, activation and configuration information info_drive_i comprised in the storage module 301 of the integrated viewing station 30i identified; and send the identification, activation and configuration information info_drive_i detected to the reduction module 203.

The integrated viewing station 30i has preferably been previously configured by means of the selection interface 205.

The technical effect achieved is a viewing of the 3D image information on the viewer which is as reliable as possible, given the complete compatibility between the 3D data generated and the technical graphic processing features of the viewer 303.

It is fundamental, in fact, in the presurgical phase and particularly under conditions of urgency, that the images do not show artifacts or distortions caused by an incomplete conformity of the input data with the data processable by the viewer 303.

According to the invention, the mesh reduction factor R_mesh for the integrated viewing station 30i identified is calculated, in particular by the reduction module 203, based on the identification, activation and configuration information info_drive_i received from the conditioned transmission module 204.

Specifically, the algorithm for calculating the mesh reduction ALG_R_mesh comprises calculating the mesh reduction factor R_mesh starting from an estimated size ES of the output 3D file TAC_3D and the maximum size that the file can reach in order to be sustainable for the process of importing the mesh to the integrated viewing station 30i, in particular a HoloLens proprietary app.

The latter information is contained in the information info_drive of every integrated viewing station 30i.

According to the invention, the mesh reduction factor R_mesh is the percentage of reduction to be applied to the mesh in order for it to be imported by the integrated viewing station 30i, in particular by the HoloLens proprietary app.

According to the invention, the following relation applies:

R_mesh = 1 - (MFS / ES)

with

ES = k * CN / 1000000

k = 180

wherein:

k: constant;

CN (Cell Number): the number of cells of the 3D model obtained;

ES (Estimated Size): the estimated size of the STL file;

MFS (Max File Size): the maximum size that the file can reach in order to be sustainable for the process of importing the mesh to the integrated viewing station 30i, in particular the HoloLens proprietary app.
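The relation above can be sketched in Python; the clamp to zero when the estimated size already fits within the maximum file size is an assumption, since the description only defines R_mesh as a reduction percentage.

```python
K = 180  # constant k from the description

def estimated_size(cell_number: int) -> float:
    """ES = k * CN / 1000000: estimated size of the STL file."""
    return K * cell_number / 1_000_000

def mesh_reduction_factor(cell_number: int, max_file_size: float) -> float:
    """R_mesh = 1 - (MFS / ES), the percentage of mesh reduction to apply;
    clamped to 0 when the file already fits (an assumption)."""
    es = estimated_size(cell_number)
    return max(0.0, 1.0 - max_file_size / es)
```

For instance, a model of 2,000,000 cells gives ES = 360, so with MFS = 90 the mesh must be reduced by 75%.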

Furthermore, according to the invention, the mesh reduction factor R_mesh is further defined as a function of the viewer 303 identified.

The process of creating the 3D model is thus capable of adapting the result to the input data, i.e. the estimated size ES of the output 3D file TAC_3D, and the capabilities of the device in which the output will be viewed, i.e. the maximum size that the file can reach in order to be sustainable for the process of importing the mesh to the integrated viewing station 30i. This process makes it possible always to have a useful and usable result, in the integrated viewing station 30i, of the holographic representation of the input CT test, i.e. of the output 3D file TAC_3D.

The transmission of the three-dimensional model to the integrated viewing station 30i is carried out with a specific method.

With particular reference to figure 2, the conditioned transmission module 204 is configured to transmit, in the local network LAN, the reduced 3D file R_TAC_3D to the integrated viewing station 30i identified through a data pack data_pack_I comprising:

- an initialisation field C_I;

- a message field C_II, in particular comprising messages for and from the viewer to handle the sending of data;

- a size field C_III comprising the size of the data segment to be sent;

- a data field C_IV comprising the data to be sent - the 3D images - of the size indicated in the size field C_III.

According to the invention, the size field C_III is defined as a function of the identification, activation and configuration information info_drive received.

In other words, the data pack data_pack_I varies dynamically as a function of the features of the integrated viewing station 30i.

In other words, a dedicated transmission protocol has been implemented whereby it is possible to send, via the network, the 3D model, i.e. the 3D file R_TAC_3D, including information regarding the structure and composition thereof info_drive, to the integrated viewing station 30i, i.e. to the application for the mixed reality viewer 303, which receives said model and information in order to be able to load it and view it dynamically, so that it need not be preloaded into the integrated viewing station 30i itself.

It is possible, therefore, to dynamically identify the mixed reality viewers 303 over the same network in which the integrated viewing station 30i for the mixed reality viewer 303 is active. This makes it possible not to have to pre-configure the viewers 303 and to be able to use more than one at any time.

The technical effect achieved is an optimal correspondence between the data sent and the user prepared to receive it; in the presurgical phase, in particular under conditions of urgency, the distribution of precise information to the various surgeons prepared to receive and view different information for the same operation can be crucial for the success of the operation.

With reference to figure 2, the data structure for handling the sending of the three-dimensional model from the second processing station 20 to the integrated viewing station 30i, in an example embodiment, can be described as follows:

C_I = initialisation byte: it is used to check that the pack actually belongs to the app;

C_II = message segment: UTF-8 encoding of a message with a maximum of 32 characters, which will then be interpreted by the client or the server;

C_III = data length segment: UTF-8 encoding of the size of the data segment; the number can have a maximum of 32 digits;

C_IV = data segment: it contains the data that are about to be sent and can have a maximum length of 65536 bytes. The variable length of this segment makes it possible to send small or large-sized files:

● if the file is smaller in size than the maximum length, the data segment will be smaller;

● if the file is larger in size, it will be divided into several packs and the data segment will be of the maximum size.
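The pack layout and the chunking rule can be sketched as follows; the initialisation byte value and the fixed-width space padding of the message and length segments are assumptions, since the description only specifies the fields' maximum sizes.

```python
INIT_BYTE = 0x7E   # value assumed; only the presence of C_I is specified
MAX_DATA = 65536   # maximum length of the data segment C_IV

def build_packs(message: str, payload: bytes) -> list:
    """Split a file into packs carrying the four fields C_I..C_IV; files
    larger than MAX_DATA bytes are divided into several packs."""
    offsets = range(0, len(payload), MAX_DATA) if payload else [0]
    packs = []
    for off in offsets:
        chunk = payload[off:off + MAX_DATA]
        packs.append(
            bytes([INIT_BYTE])                           # C_I: initialisation byte
            + message.encode("utf-8")[:32].ljust(32)     # C_II: message, max 32 chars (padding assumed)
            + str(len(chunk)).encode("utf-8").ljust(32)  # C_III: data length, max 32 digits (padding assumed)
            + chunk                                      # C_IV: data segment
        )
    return packs
```

A 70,000-byte file, for example, is split into one full pack of 65,536 data bytes and a second pack carrying the remaining 4,464 bytes.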

Based on the dimensions of the 3D data file R_TAC_3D, the transmission takes place, for example, as shown in figure 3, wherein:

- "prepare:obj" is a server request to the client to prepare to receive the file;

- "request:obj" is a client response to the server to say that it is ready to receive more data;

- "sendPacket:obj" is a server message to the client which indicates that the current pack contains data of the file;

- "end:obj" is a server message to the client which notifies the client that all the data of the file have been sent.
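The exchange of figure 3 can be sketched as a message trace; the assumption that the client answers "request:obj" before every data pack, rather than only once, is illustrative.

```python
def transfer_dialogue(num_packs: int) -> list:
    """(sender, message) trace of one file transfer following figure 3."""
    trace = [("server", "prepare:obj")]             # server asks client to prepare
    for _ in range(num_packs):
        trace.append(("client", "request:obj"))     # client is ready for more data
        trace.append(("server", "sendPacket:obj"))  # current pack carries file data
    trace.append(("server", "end:obj"))             # all data have been sent
    return trace
```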

With particular reference to figure 1 , the integrated viewing station 30i in the local network LAN is configured to receive the reduced 3D file R_TAC_3D transmitted by the second processing station 20.

The invention envisages that the integrated viewing station 30i receives the reduced 3D file R_TAC_3D transmitted by the second processing station 20.

The integrated viewing station 30i comprises a second receiving module 300 configured for that function. The invention further comprises determining a 3D hologram graphic model OLO_3D of the patient image data Data_Imm as a function of the reduced 3D file R_TAC_3D received.

The integrated viewing station 30i comprises a processing unit 302 configured to determine the 3D hologram graphic model OLO_3D of the patient image data Data_Imm as a function of the reduced 3D file R_TAC_3D.

In particular, a step of selecting one or more tissues of the patient from the reduced 3D file R_TAC_3D is envisaged.

The integrated viewing station 30i comprises a selection interface U303 configured to enable the selection of one or more tissues of the patient from the reduced 3D file R_TAC_3D generated.

Through the integrated viewing station 30i, it is thus possible, via the selection interface U303 of the viewer 303, to consult the 3D hologram comprising the tissues of the patient in the established colours.

The viewer 303 is further configured to enable or disable viewing of every single level of tissue of the patient, by acting through the selection interface U303.

The technical effect achieved is the viewing, by the user, of the hologram of all tissues simultaneously or only the selected ones.
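The per-tissue enabling and disabling described above can be sketched with a simple set of visible tissue levels; the set-based model of the selection state is an assumption for illustration.

```python
def set_tissue_visibility(visible: set, tissue: str, enabled: bool) -> set:
    """Enable or disable viewing of a single tissue level of the hologram,
    as selected through an interface such as U303."""
    if enabled:
        visible.add(tissue)       # show this tissue level
    else:
        visible.discard(tissue)   # hide this tissue level
    return visible
```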

As already mentioned, the invention comprises configuring a viewer 303 to interact with a user U and to represent the 3D hologram OLO_3D in a real viewing space R_space.

As said previously, the configuration of specific viewers 303 enables an optimised view of different anatomical features, which may be analysed in the presurgical phase.

The viewer 303 is configured to perform the interaction between the user U and the 3D hologram graphic model OLO_3D by implementing one or more functions from among movement, rotation and enlargement of the 3D hologram graphic model OLO_3D.

The interactions with the 3D model from the mixed reality viewer 303 are implemented by exploiting the gestures of the viewer itself, but with dedicated functions: the native interface of the viewer provides the basic software components but does not implement these functions, unlike what is described in the present invention. Therefore, although the interface of the mixed reality viewer 303 is exploited in order to capture the input, i.e. the 3D hologram OLO_3D, the interactions with the 3D model are the result of the implementation residing on the integrated viewing station 30i, i.e. on the application for the viewer 303, according to the invention.

According to the invention, the movement, rotation and enlargement functions comprise the actions of:

- detection of a hand M of the user U;

- selection gestures with the hand M of the user U;

- detection of a movement of the hand M in the selection status;

- detection of a movement of two hands M in the selection status.

In other words, with particular reference to figure 4, the integrated viewing station 30i, in particular the HoloLens proprietary application, implements the movement, rotation and enlargement operations (menu 100) as follows:

Movement: after having selected the movement action from the specific menu 100, the user U focuses on the hologram OLO_3D using the system gaze function, puts his/her hand M in front of him/herself using the system’s click gesture, and moves his/her hand M. The hologram OLO_3D moves according to the movement of the hand M of the user, based on the position of the hand M in the space R_space, and applying its movement to the 3D model.

Rotation: after having selected the rotation action from the specific menu, the user focuses on the hologram OLO_3D using the system gaze function, puts his/her hand M in front of him/herself using the system’s click gesture, and moves his/her hand M. The hologram OLO_3D rotates in the direction in which the hand M of the user has moved, at a speed that is directly proportional to the speed of the hand M.

The rotation applied to the hologram OLO_3D is calculated by measuring the distance between the user U - and thus the integrated viewing station 30i - and the hand M in every frame, so as to be able to apply a rotation to the hologram OLO_3D that is consistent with the direction and speed of the gesture, irrespective of the position of the user U relative to the hologram OLO_3D itself.
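The per-frame rotation calculation can be sketched in 2D (top view); projecting the hand displacement onto an axis perpendicular to the user-to-hand direction, and the gain constant, are assumptions about how a rotation "consistent with the direction and speed of the gesture" is obtained.

```python
import math

def rotation_step(hand_prev, hand_now, user, gain=90.0):
    """Yaw rotation (degrees) to apply to the hologram this frame; points are
    (x, y) top-view coordinates, gain is degrees per metre of lateral hand
    movement (an assumed constant)."""
    ux, uy = hand_prev[0] - user[0], hand_prev[1] - user[1]
    norm = math.hypot(ux, uy) or 1.0
    right = (uy / norm, -ux / norm)          # user's lateral axis, measured each frame
    dx, dy = hand_now[0] - hand_prev[0], hand_now[1] - hand_prev[1]
    lateral = dx * right[0] + dy * right[1]  # signed lateral hand displacement
    return gain * lateral                    # faster gestures rotate more
```

Because the lateral axis is recomputed from the user-to-hand direction in every frame, the same gesture produces a rotation consistent with its direction wherever the user stands relative to the hologram.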

Enlargement: after having selected the enlargement action from the specific menu 100, the user focuses on the hologram OLO_3D using the system gaze function, puts two hands M in front of him/herself using the system’s click gesture, and moves them closer to or farther from each other. The hologram is reduced in size when the hands are moved nearer, and increased in size when they are moved farther.

The hologram size modification factor is calculated by adding to its current size the difference between the distance between the hands at the beginning of the operation and that at the subsequent frame, so that the speed in the variation of the size of the hologram varies in a manner that is directly proportional to the speed of movement of the hands.
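The size modification rule described above can be sketched as follows; the sign convention (current-frame distance minus initial distance) follows the stated behaviour that moving the hands apart enlarges the hologram, and clamping the size at zero is an assumption.

```python
import math

def resize_step(size, a_start, b_start, a_now, b_now):
    """New hologram size after one frame of the two-hand gesture: the current
    size grows by the change in the distance between the two hands M."""
    d_start = math.dist(a_start, b_start)      # hand distance at gesture start
    d_now = math.dist(a_now, b_now)            # hand distance at this frame
    return max(size + (d_now - d_start), 0.0)  # clamp at zero (assumed)
```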

The invention thus comprises calculating the mesh reduction factor R_mesh of the reduced 3D file R_TAC_3D as a function of the activation and configuration information info_drive, thus ensuring complete compatibility of the viewer 303 with the 3D hologram graphic model OLO_3D generated and allowing the user U to interact with the 3D hologram graphic model OLO_3D generated.

In other words, the reduced 3D file R_TAC_3D has a mesh reduction factor R_mesh calculated as a function of said activation and configuration information info_drive_i, thus ensuring complete compatibility of the viewer 303 with the 3D hologram graphic model OLO_3D generated and allowing the user U to interact with the 3D hologram graphic model OLO_3D generated.

The invention as described is implemented in an aspect thereof as a computer-implemented method for 3D processing of a tomography test E_TAC comprising a plurality of operating steps whereby the functions of the components described are performed.

A method and a system for 3D processing of a tomography test have been presented which achieve the following technical effects:

- efficiency of execution, with a consequent reduction in the times of analysis by surgeons, especially under emergency conditions;

- in particular, direct, real-time interaction is achieved between surgeons and the 3D view based on the specific needs for planning an urgent operation, i.e. in the presurgical phase, with the advantage of not requiring subsequent video processing on the part of the tomography technician;

- customisation of the 3D view;

- optimisation of the distribution of the images to specific doctors in the presurgical phase.