


Title:
METHOD AND SYSTEM FOR ELECTRICAL IMPEDANCE TOMOGRAPHY
Document Type and Number:
WIPO Patent Application WO/2021/223038
Kind Code:
A1
Abstract:
A system for electrical impedance tomography (EIT) includes an EIT backend. The EIT backend includes a processor configured to execute instructions for obtaining a frame of EIT impedance measurements, and reconstructing an EIT image from the frame of EIT impedance measurements.

Inventors:
TOMA JONATHAN EMANUEL (CA)
PALLOPSON CHAD MICHAEL JOHN (CA)
SAWYER AUSTIN (CA)
PELLETIER STEPHANIE ALISHA (CA)
DOURTHE BENJAMIN LOUIS (CA)
SUNG ARTHUR WAI (US)
ARDESHIRI RAMTIN (CA)
ADLER ANDREW (CA)
Application Number:
PCT/CA2021/050649
Publication Date:
November 11, 2021
Filing Date:
May 10, 2021
Assignee:
TOMA JONATHAN EMANUEL (CA)
PALLOPSON CHAD MICHAEL JOHN (CA)
SAWYER AUSTIN (CA)
PELLETIER STEPHANIE ALISHA (CA)
DOURTHE BENJAMIN LOUIS (CA)
SUNG ARTHUR WAI (US)
ARDESHIRI RAMTIN (CA)
ADLER ANDREW (CA)
International Classes:
A61B5/0536; G06N3/02; G06N3/08; G16H30/40
Foreign References:
US20160242673A12016-08-25
US7162296B22007-01-09
US20200138335A12020-05-07
US10092212B22018-10-09
Other References:
ZACH EATON-ROSEN, FELIX BRAGMAN, SEBASTIEN OURSELIN, M. JORGE CARDOSO: "Improving Data Augmentation for Medical Image Segmentation", 6 July 2018 (2018-07-06), XP055709256, Retrieved from the Internet [retrieved on 2020-06-26]
ALQAHTANI HAMED; KAVAKLI-THORNE MANOLYA; KUMAR GULSHAN: "Applications of Generative Adversarial Networks (GANs): An Updated Review", Archives of Computational Methods in Engineering, Springer Netherlands, Dordrecht, vol. 28, no. 2, pages 525-552, XP037360202, ISSN: 1134-3060, DOI: 10.1007/s11831-019-09388-y
Attorney, Agent or Firm:
OSLER, HOSKIN & HARCOURT LLP et al. (CA)
Claims:
CLAIMS

What is claimed is:

1. A system for electrical impedance tomography (EIT), the system comprising: an EIT backend comprising: a processor configured to execute instructions for: obtaining a frame of EIT impedance measurements; and reconstructing an EIT image from the frame of EIT impedance measurements.

2. The system of claim 1, further comprising: a wearable EIT signal acquisition unit worn by a subject, the wearable EIT signal acquisition unit configured to: obtain the frame of EIT impedance measurements from the subject; and transmit the frame of EIT impedance measurements to the EIT processing backend.

3. The system of claim 2, wherein the wearable EIT signal acquisition unit is a belt.

4. The system of claim 2, wherein the wearable EIT signal acquisition unit comprises: an EIT signal acquisition circuit configured to: obtain voltage measurements from a plurality of electrodes disposed on the skin surface of the subject, and generate the frame of EIT impedance measurements from the voltage measurements; a wireless communication interface configured to communicatively interface the wearable EIT signal acquisition unit with the EIT backend; and a battery powering the EIT signal acquisition circuit and the communication interface.

5. The system of claim 4, wherein the wireless communication interface supports at least one selected from the group consisting of Bluetooth, Wi-Fi, and 5G.

6. The system of claim 1, wherein the EIT backend is cloud-hosted.

7. A method for electrical impedance tomography (EIT) image processing, the method comprising: receiving a frame of EIT impedance measurements from a wearable EIT signal acquisition unit worn by a subject; and reconstructing an EIT image from the EIT impedance measurements.

8. The method of claim 7, further comprising: augmenting a resolution of the EIT image and transferring a style of the EIT image.

9. The method of claim 8, wherein the augmenting the resolution and the transferring the style generates a computed tomography (CT)-style image from the EIT image.

10. The method of claim 8, wherein the augmenting the resolution and the transferring the style are performed by a single neural network.

11. The method of claim 10, wherein the neural network is a generative adversarial neural network (GAN).

12. The method of claim 11, further comprising: training the GAN using a set of training images, wherein the training is guided by a perceptual loss.

13. The method of claim 11, further comprising: augmenting the set of training images by performing at least one selected from the group consisting of: a rotation of at least one image in the set of training images, a warping of at least one image in the set of training images, a cropping of at least one image in the set of training images.

14. The method of claim 7, further comprising: a segmentation of the EIT image to identify an organ of interest.

15. The method of claim 14, wherein the segmentation is an unsupervised segmentation using a K-means clustering.

16. The method of claim 14, wherein the segmentation is a supervised segmentation using a deep convolutional neural network (DCNN).

17. The method of claim 16, wherein the segmentation comprises a first stage operating on an entirety of the EIT image and a second stage operating on a segment of the EIT image.

18. The method of claim 17, wherein the segment of the EIT image is a rectangular region of random size at a random location in the EIT image.

19. A non-transitory computer readable medium (CRM) storing instructions for electrical impedance tomography (EIT), the instructions comprising functionality for: obtaining voltage measurements from a plurality of electrodes disposed on the skin surface of a subject; generating a frame of EIT impedance measurements from the voltage measurements; and transmitting the frame of EIT impedance measurements to an EIT processing backend.

20. The non-transitory computer readable medium of claim 19, wherein the instructions further comprise functionality for: receiving the frame of EIT impedance measurements; and reconstructing an EIT image from the frame of EIT impedance measurements.

Description:
METHOD AND SYSTEM FOR ELECTRICAL IMPEDANCE TOMOGRAPHY

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims benefit under 35 U.S.C. § 119(e) to United States Provisional Patent Application Serial Number 63/022,196, filed on May 8, 2020. United States Provisional Patent Application Serial Number 63/022,196 is incorporated herein by reference in its entirety. This application also claims benefit under 35 U.S.C. § 119(e) to United States Provisional Patent Application Serial Number 63/092,177, filed on October 15, 2020. United States Provisional Patent Application Serial Number 63/092,177 is incorporated herein by reference in its entirety.

BACKGROUND

[0002] Electrical impedance tomography (EIT) is a noninvasive type of medical imaging in which the electrical impedance of a part of the body is determined from surface electrode measurements. The obtained impedance measurements may be used to generate a tomographic image. Electrical conductivity varies among various biological tissues, and other substances such as fluids and gases, thus enabling the visualization of these components in the tomographic image. Various systems for use in clinical and/or research environments exist.

SUMMARY

[0003] In general, in one aspect, one or more embodiments relate to a system for electrical impedance tomography (EIT), the system comprising: an EIT backend comprising: a processor configured to execute instructions for: obtaining a frame of EIT impedance measurements; and reconstructing an EIT image from the frame of EIT impedance measurements.

[0004] In general, in one aspect, one or more embodiments relate to a method for electrical impedance tomography (EIT) image processing, the method comprising: receiving a frame of EIT impedance measurements from a wearable EIT signal acquisition unit worn by a subject; and reconstructing an EIT image from the EIT impedance measurements.

[0005] In general, in one aspect, one or more embodiments relate to a non-transitory computer readable medium (CRM) storing instructions for electrical impedance tomography (EIT), the instructions comprising functionality for: obtaining voltage measurements from a plurality of electrodes disposed on the skin surface of a subject; generating a frame of EIT impedance measurements from the voltage measurements; and transmitting the frame of EIT impedance measurements to an EIT processing backend.

BRIEF DESCRIPTION OF DRAWINGS

[0006] FIG. 1 schematically shows EIT imaging operations, in accordance with one or more embodiments.

[0007] FIG. 2 schematically shows an EIT signal acquisition, in accordance with one or more embodiments.

[0008] FIG. 3 schematically shows an EIT signal acquisition circuit, in accordance with one or more embodiments.

[0009] FIG. 4 schematically shows an EIT image reconstruction, in accordance with one or more embodiments.

[0010] FIG. 5 schematically shows an EIT system, in accordance with one or more embodiments.

[0011] FIG. 6 schematically shows a wearable EIT acquisition unit, in accordance with one or more embodiments.

[0012] FIG. 7 schematically shows an EIT processing backend, in accordance with one or more embodiments.

[0013] FIG. 8A shows an example of a generator architecture of a Generative Adversarial Network (GAN), in accordance with one or more embodiments.

[0014] FIG. 8B shows examples of sub-blocks of a generator architecture of a GAN, in accordance with one or more embodiments.

[0015] FIG. 8C shows an example of a discriminator architecture of a GAN, in accordance with one or more embodiments.

[0016] FIG. 8D shows examples of resolution enhancement results, in accordance with one or more embodiments.

[0017] FIG. 9A shows an example illustrating an unsupervised segmentation, in accordance with one or more embodiments.

[0018] FIG. 9B shows an example illustrating a supervised segmentation, in accordance with one or more embodiments.

[0019] FIG. 10 shows a flowchart describing a method for EIT signal acquisition, in accordance with one or more embodiments.

[0020] FIG. 11A shows a flowchart describing a method for EIT image processing, in accordance with one or more embodiments.

[0021] FIG. 11B shows a flowchart describing a method for EIT image enhancement, in accordance with one or more embodiments.

[0022] FIG. 11C shows a flowchart describing a method for EIT image enhancement, in accordance with one or more embodiments.

[0023] FIGs. 12A and 12B schematically show computing systems, in accordance with one or more embodiments.

DETAILED DESCRIPTION

[0024] The following detailed description is merely exemplary in nature, and is not intended to limit the disclosed technology or the application and uses of the disclosed technology. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, or the following detailed description.

[0025] In the following detailed description of embodiments, numerous specific details are set forth in order to provide a more thorough understanding of the disclosed technology. However, it will be apparent to one of ordinary skill in the art that the disclosed technology may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

[0026] Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms "before", "after", "single", and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.

[0027] Various embodiments of the present disclosure provide methods and systems for Electrical Impedance Tomography (EIT). EIT is a technique that may be used to create tomographic images of the electrical properties of tissue within a body based on electrical impedance measurements using body-surface electrodes. Because the electrical conductivity varies among various biological tissues, and other substances such as fluids and gases, a physiological or anatomical feature of interest may be evaluated using EIT through observations of changes in tissue impedance. EIT has various clinical and scientific applications, including but not limited to imaging of lung ventilation, cardiac function, gastric emptying, brain function and pathology, screening for breast cancer, etc.

[0028] The operations for obtaining an EIT image and system configurations for obtaining an EIT image, in accordance with one or more embodiments of the disclosure, are subsequently described with reference to the figures.

[0029] Turning to FIG. 1, EIT imaging operations, in accordance with one or more embodiments, are schematically shown.

[0030] Tissue electrical properties (110) of biological tissue may be measured by applying and measuring voltages and currents. EIT may apply small (e.g., imperceptible) currents selected to meet safety standards for medical devices. In order to improve electrical safety and to remain outside the range of the body's own electrical signals, EIT may apply small sinusoidal currents at frequencies above 10 kHz. In biological tissue at low frequencies, tissue membranes may primarily act as capacitors which store electrical energy. As the frequency increases, a pathway through the membrane has a higher admittance, and current may flow through the membranes. The bulk properties of tissue may be characterized by a complex conductivity (or admittivity). The conductivity may be used to characterize the tissue. EIT systems may measure both the amplitude and phase of the voltage at the electrodes. The tissue electrical properties (110) may be for an EIT image of a patient's or subject's lungs. Depending on the volume of air in the lungs and the distribution of air-tissue interfaces, the electrical properties may change. Specifically, in the example, a conductivity, σ, may change depending on the volume of air in the lungs. Accordingly, various respiratory parameters may be obtained using EIT.

[0031] In the example of an EIT configuration (120), a set of electrodes is placed to perform an EIT sensing (130). The electrodes interface with EIT system hardware configured to apply electrical currents to the electrodes, and to measure voltages on the electrodes, as further described below. To perform the EIT sensing (130), a current may be applied to the body part of interest across a pair of electrodes. The EIT sensing (130) may result in a unique pattern of currents passing through the region being imaged. When the electrodes are placed in a ring-like arrangement, e.g., as illustrated for the EIT sensing (130), the region being imaged may be a slice circumscribed by the electrodes. The pattern of currents may be based on the body geometry (e.g., locations of tissue, fluids, gases, etc.) within the region being imaged, and electrical properties (of, e.g., tissue, fluids, gases, etc.) within the region being imaged, as a result of the current between the current injection and receiving electrodes spreading out to find the easiest (lowest impedance) paths.

[0032] Accordingly, changes in the spatial distribution of the conductivity of tissue may change the way current flows, which may be detectable in voltage measurements obtained from the remaining (i.e., non-current-injecting) electrodes. EIT is sensitive to contrasts in tissue admittivity. For example, because blood is more conductive than most other tissue, increases of blood in the heart during diastole, or pooling of blood due to hemorrhage, create changes that may be measured by EIT. The most significant conductivity contrast in the body may be caused by air. A large volume of non-conductive air moves with each breath and, as a result, EIT imaging of breathing may produce strong changes in the voltage measurements. Several other tissues may also give useful contrasts. For example, neural tissue may change conductivity when in an active state, cancerous tissue and the new growth around tumors may contrast with benign tissue, etc. Additionally, conductivity contrasts may be induced using hypertonic saline injections, ingesting salty meals, or through thermal contrasts.

[0033] EIT systems may apply currents across pairs of electrodes and measure the voltages across the remaining electrodes. EIT systems may apply current across a pair of electrodes while obtaining voltage measurements from other electrode pairs, before repeating this operation by applying the current across another pair of electrodes. The repetitions may continue until all available pairs of electrodes have served as current injection sites. A complete set of voltage measurements may form a sensing frame. Given N electrodes, a possible N × (N − 3)/2 independent voltage measurements may be performed. EIT systems with 8, 16, or 32 electrodes may, thus, have 20, 104, or 464 voltage measurements per sensing frame. EIT systems may measure 50 or more sensing frames per second. The EIT system may calculate the amplitude and phase of these voltage measurements.
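As a quick arithmetic check of the per-frame measurement counts quoted above (the function name below is ours, purely illustrative):

```python
def independent_measurements(n_electrodes: int) -> int:
    """Independent four-electrode voltage measurements per sensing frame
    for an N-electrode EIT system: N * (N - 3) / 2."""
    return n_electrodes * (n_electrodes - 3) // 2

# Reproduces the counts quoted in the text for 8-, 16-, and 32-electrode systems
counts = {n: independent_measurements(n) for n in (8, 16, 32)}
print(counts)  # {8: 20, 16: 104, 32: 464}
```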

[0034] Based on the voltage measurements, images of the conductivity (absolute EIT) or the change in conductivity (difference EIT) may be obtained by performing an EIT image reconstruction (140) as discussed below in reference to FIG. 4. One EIT image may be obtained for a sensing frame. A sequence of EIT images may be generated over time, as illustrated by the EIT image reconstruction (140). In the EIT images, regions of different conductivity, based on the voltage measurements, are color-coded differently, thereby enabling visualization of anatomical or physiological features based on their differing conductivity. Subsequently, physiologically relevant measures may be calculated based on the EIT images, as shown in the EIT image interpretation (150). For example, regional lung airflow changes (e.g., regional tidal volumes or regional flow-volume loops) may be calculated, may be tracked over time, etc. A more detailed description of the operations (110, 120, 130, 140, 150) is subsequently provided.

[0035] Turning to FIG. 2, an EIT signal acquisition (200), in accordance with one or more embodiments, is schematically shown. In the configuration as shown, sixteen electrodes (202) are placed on a circular body (232). Assume, for example, that the circular body schematically represents the cross section of a human torso. In the example, a four-electrode EIT method is shown, in which a current is applied to two electrodes by the current source (204), and a voltage measurement (206) is obtained at two other electrodes. While not shown, alternatively, a voltage may be applied by a voltage source, whereas a current measurement may be obtained. In the example configuration shown in FIG. 2, the area "S" may be the most sensitive to changes in conductivity, based on the current flow resulting from the selection of the electrodes. The change "C" in the circular body (232) may thus be particularly detectable based on a change in the voltage measurement (206), whereas a similar change outside the area "S" may be less detectable. The current that is applied may be small, e.g., less than 5 mA. The frequency may be above 50 kHz, e.g., in a range between 100-200 kHz, or more generally, the frequency may be anywhere in a range between 100 Hz and 1 MHz. A single frequency, or multiple frequencies, may be used. At, for example, 50 kHz, the properties of tissue may be similar to those at DC, in that the great majority of current travels in the extracellular space, whereas electrode impedance is much lower than at DC, thereby reducing instrumentation errors. At 50 kHz, a single measurement may be taken within a few hundred µs.

[0036] Turning to FIG. 3, an EIT signal acquisition circuit (300), in accordance with one or more embodiments, is shown. The EIT signal acquisition circuit is configured to drive the electrodes (398) with a current and to obtain voltage measurements from the electrodes (398). EIT systems may employ any number of electrodes (e.g., typically ranging from 8 to 128). EIT systems may use electrodes applied in a ring, but some systems may alternatively use several rings on the thorax, or electrodes distributed evenly, for example, over the head.

[0037] The electrodes (398) may be of any type, e.g., ECG type adhesive electrodes or other electrodes that are suitable for skin surface contact. The electrodes (398) may be individually placed, or alternatively the electrodes may be integrated in a belt, harness, or other type of array to facilitate placement, e.g., around the chest or abdomen. The skin impedance at the electrode interface may be reduced by applying a contact gel or by abrasion. The electrodes may also be placed within a conductive medium (such as a hardened conductive gel strip), with the conductive medium directly opposed to the subject’s skin to obtain impedance measurements.

[0038] In one or more embodiments, the EIT signal acquisition circuit (300) includes an electrode interface circuit (310) to interface with the electrodes (398), and a demodulator circuit (320). These components are subsequently described.

[0039] In one or more embodiments, the electrode interface circuit (310) includes an electrode driving circuit (312), an electrode sensing circuit (314), and an electrode switching circuit (316).

[0040] In one or more embodiments, the electrode driving circuit (312) includes a current source for driving the electrodes (398) with a current. A sinusoidal waveform may be used. The sinusoidal waveform may be generated in any way, e.g., using direct digital synthesis (DDS), followed by a digital-to-analog conversion. Analog methods for generating the sinusoidal waveform (e.g., the use of a Wien bridge oscillator) may also be used. The waveform may be used to drive the current source. The current source may, thus, operate as a voltage-to-current converter, producing a current, with the waveform being the input voltage to the voltage-to-current converter. The current source may be floating or single-ended. Any other type of current source may be used, without departing from the disclosure. Alternatively, a voltage source may be used. The current may be provided to the electrodes (398) through the electrode switching circuit (316), and the driving of the electrodes (398) with the current may follow a particular pattern, coordinated by the electrode switching circuit (316), as discussed below.
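As a rough illustration of the DDS approach mentioned above, the following sketch steps a phase accumulator through a sine lookup table. All names, the table size, and the example frequencies are our own choices; in hardware, each table value would be sent to a DAC to produce the analog drive waveform.

```python
import math

def dds_samples(freq_hz, sample_rate_hz, n_samples, table_size=256):
    """Direct digital synthesis sketch: a phase accumulator steps through
    a precomputed sine lookup table at a rate set by the desired output
    frequency relative to the sample rate."""
    table = [math.sin(2 * math.pi * i / table_size) for i in range(table_size)]
    phase_step = freq_hz / sample_rate_hz * table_size  # table entries per sample
    phase = 0.0
    out = []
    for _ in range(n_samples):
        out.append(table[int(phase) % table_size])
        phase += phase_step
    return out

# e.g., a 100 kHz drive waveform at a 10 MHz update rate (100 samples per period)
wave = dds_samples(100e3, 10e6, 200)
```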

[0041] In one or more embodiments, the electrode sensing circuit (314) includes a voltage sensing circuit. The electrode sensing circuit (314) may operate in coordination with the electrode driving circuit (312) to obtain voltage measurements while a current is applied. Alternatively, a current measurement may be obtained while a voltage is applied. The electrode sensing circuit (314) may use an instrumentation amplifier or any type of amplifier with a high input impedance, e.g., an operational amplifier, to capture the voltages at the electrodes (398). The instrumentation amplifier may operate in a differential configuration, measuring voltages between pairs of electrodes, or in a single-ended configuration, measuring voltages of single electrodes with respect to ground. The electrode sensing circuit (314) may include multiple voltage sensing circuits to simultaneously capture voltages from multiple pairs of electrodes or multiple single electrodes. The output of the electrode sensing circuit (314) may be an AC voltage, provided to the demodulator circuit (320) for further processing.

[0042] In one or more embodiments, the electrode interface circuit (310) includes an electrode switching circuit (316). The electrode switching circuit (316) may enable a selective connection of the electrodes (398) to the electrode driving circuit (312) and/or the electrode sensing circuit (314).

[0043] The electrode switching circuit (316) may enable the EIT system to step through a pattern of sensing operations in order to obtain a complete sensing frame. First, the EIT system may apply a current to one pair of electrodes while performing a voltage sensing on all other electrodes. Next, the EIT system may apply a current to another pair while performing a voltage sensing on all other electrodes. A sensing frame may include multiple additional such operations.

[0044] Consider, for example, the use of an "adjacent pattern", in which the driving and sensing is performed on directly adjacent electrodes, and further assume that sixteen electrodes are arranged in a circular pattern, as shown in FIG. 2. Assume that the electrodes are numbered 1-16, in a clockwise direction. When using the adjacent pattern, first, a current may be injected between electrodes 1 and 2 (1, 2), while voltages may be measured on all other pairs (3, 4), (4, 5), ..., (15, 16). The voltages may be measured all at once (in parallel), or sequentially. Next, the current may be injected between electrodes 2 and 3 (2, 3), while voltages may be measured on all other pairs (4, 5), (5, 6), ..., (15, 16), (16, 1). The pattern may be continued for all available electrodes, to complete a sensing frame.

[0045] Non-adjacent patterns may be used without departing from the disclosure. For example, in a “skip 2” pattern, the driving and sensing may be performed for pairs of electrodes that are separated by two electrodes. Further, while differential measurements between two electrodes are discussed, alternatively, single-ended measurements from one electrode to ground may be performed, without departing from the disclosure.
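The stepping through drive and measurement pairs can be sketched as follows. This simplified enumeration (all names are ours) excludes measurement pairs that share an electrode with the driven pair; a `skip` argument of 0 gives the adjacent pattern, and 2 gives the "skip 2" pattern.

```python
def sensing_frame_pairs(n_electrodes, skip=0):
    """Enumerate (drive_pair, measurement_pairs) for one sensing frame.
    Electrode pairs are separated by `skip` intermediate electrodes;
    measurement pairs touching the current-carrying electrodes are omitted."""
    frame = []
    for a in range(n_electrodes):
        b = (a + 1 + skip) % n_electrodes
        meas = []
        for m in range(n_electrodes):
            n = (m + 1 + skip) % n_electrodes
            if {m, n} & {a, b}:
                continue  # pair shares an electrode with the drive pair
            meas.append((m + 1, n + 1))  # 1-based numbering, as in the text
        frame.append(((a + 1, b + 1), meas))
    return frame

frame = sensing_frame_pairs(16)  # adjacent pattern, 16 electrodes
```

For 16 electrodes in the adjacent pattern, this yields 16 drive pairs with 13 measurement pairs each, consistent with the 104 independent measurements noted earlier (each differential measurement appears twice across the frame).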

[0046] Any type of driving and sensing pattern may be used, without departing from the disclosure. Multiple frames may be recorded during a time interval. For example, 50-100 frames may be recorded per second.

[0047] The electrode switching circuit (316) may perform the necessary switching to enable execution of the described patterns, by selectively connecting the electrodes (398) to the electrode driving circuit (312) and to the electrode sensing circuit (314). The electrode switching circuit (316) may be any type of multiplexer capable of handling the forwarding of the current from the electrode driving circuit (312) to the electrodes (398), and the forwarding of voltages received from the electrodes (398) to the electrode sensing circuit (314), for the number of electrodes of the EIT system.

[0048] In one or more embodiments, the demodulator circuit (320) generates impedance measurements from the AC voltages received from the electrode sensing circuit (314). An impedance measurement may include a value for resistance and a value for reactance. Each of the resistance and the reactance may be represented by a DC voltage, at the output of the demodulator circuit (320). An impedance measurement may be generated for each of the measurements in a frame.

[0049] An impedance measurement may be generated by performing a quadrature (I/Q) demodulation. Any type of demodulator circuit may be used. The I-component of the quadrature demodulation is in-phase with the current that is applied by the electrode driving circuit (312), thus representing the resistance. The Q-component of the quadrature demodulation is 90° out of phase with the current applied by the electrode driving circuit, thus providing the reactance.
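The quadrature demodulation described above can be illustrated with a minimal software sketch. In practice this may be done in analog hardware or on digitized samples; the function and signal names here are ours.

```python
import math

def iq_demodulate(samples, sample_rate_hz, f_hz):
    """Quadrature (I/Q) demodulation sketch: correlate the measured AC
    voltage with in-phase (cos) and quadrature (sin) references at the
    drive frequency. I tracks the resistance, Q the reactance."""
    i_acc = q_acc = 0.0
    for k, v in enumerate(samples):
        phase = 2 * math.pi * f_hz * k / sample_rate_hz
        i_acc += v * math.cos(phase)
        q_acc += v * math.sin(phase)
    n = len(samples)
    # Scaled so a unit-amplitude, in-phase input yields I = 1
    return 2 * i_acc / n, 2 * q_acc / n

# Test signal: unit amplitude, lagging the drive reference by 30 degrees
fs, f = 1e6, 50e3  # a whole number of cycles fits in 1000 samples
sig = [math.cos(2 * math.pi * f * k / fs - math.pi / 6) for k in range(1000)]
i, q = iq_demodulate(sig, fs, f)  # i ≈ cos(30°) ≈ 0.866, q ≈ sin(30°) = 0.5
```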

[0050] Turning to FIG. 4, an EIT image reconstruction (400), in accordance with one or more embodiments, is schematically shown. Based on the obtained impedance measurements (410), EIT images may be generated as subsequently described. For the following discussion of the EIT image generation, assume that impedance measurements, y, are obtained as previously described, or using any other EIT method.

[0051] The impedance measurements, y, may include impedance measurements of at least some electrodes, e.g., a full frame. The impedance measurements may include resistance values, and may further include reactance values.

[0052] In one or more embodiments, computing an EIT image (440) involves an image reconstruction (the "inverse"), which is a process that begins with the effects (impedance measurements, y) to calculate the causes (the conductivity distribution across tissue, represented by parameters, x, to be calculated). The computing of the EIT image (440), thus, involves solving a forward problem (420), describing how the causes (parameters) lead to effects (impedance measurements).

[0053] The forward problem (420) relies on a model that establishes an assumed (i.e., modeled) relationship between the conductance of a volume of tissue under consideration, and impedance measurements obtained when performing EIT measurements on the volume of tissue under consideration. The volume of tissue has an internal conductivity distribution described by a vector, σ. Assume, for example, that the model represents conductivity across tissue, including a slice of tissue that is being imaged using the electrode arrangement previously shown in FIG. 2. The model may further represent other electrical characteristics, such as the characteristics of the interface of the electrodes with the tissue. The parameters, x, are typically at a lower spatial resolution than σ, especially in regions far from the electrodes. Accordingly, the model with the parameters, x, may provide an approximation of the internal conductivity distribution, σ. Depending on the choice of x, the model may more or less accurately reflect the actual conductivity distribution, σ.

[0054] When executing the forward problem (420), a forward solution F(σ) and a sensitivity J(σ) may be obtained. The parameters, x, for the forward solution F(σ) may be used for the EIT image (440). Further, the sensitivity J(σ) may be a matrix or table which represents the impedance (e.g., resistivity) for each of the voxels (basically, representing σ through the model) of the EIT image (440) for the subject or patient, based on the impedance measurements, y.

[0055] In one or more embodiments, to solve the inverse problem (430) of determining model parameters, x, one may solve the forward problem (420) for an assumed parameter set so that the predicted impedance measurements can be compared with the actual impedance measurements. More specifically, through iterative execution of the forward problem (420) in combination with the inverse problem (430), parameters, x, may be selected to provide a sufficiently accurate representation of the actual conductivity distribution, σ, by the model when parameterized using x. With each execution, the model output, F(σ), for the execution with the parameters, x, may be compared to the impedance measurements, y, to obtain a model error. Based on the model error, the parameters, x, may be updated by applying a Δx, and the resulting updated x is mapped to σ, allowing the next execution of the model, which may result in a smaller model error, if Δx was correctly selected. In contrast, the initially used x (initial guesses or random values may be used) is unlikely to result in an accurate model. A set number of iterations may be performed, or the iterative execution may continue until the model error drops below an error threshold. The parameters, x, at that point, may be used for the EIT image (440).
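The iterative loop just described can be illustrated with a deliberately tiny one-parameter toy. The forward model below is a stand-in, not an EIT FEM solve, and all names and constants are ours.

```python
def fit_parameters(forward, y_measured, x0, rate=0.1, tol=1e-6, max_iter=500):
    """Iteratively adjust a single parameter x so the forward model output
    F(x) matches the measurement y: compute the model error, estimate the
    sensitivity (a 1-D stand-in for the Jacobian J), and apply an update
    Δx until the error drops below the threshold."""
    x = x0
    for _ in range(max_iter):
        residual = forward(x) - y_measured           # model error
        if abs(residual) < tol:
            break
        h = 1e-6                                     # finite-difference step
        sensitivity = (forward(x + h) - forward(x)) / h
        x -= rate * residual / sensitivity           # the update Δx
    return x

# Stand-in forward model (not an EIT solve): measurement falls as x rises
forward = lambda x: 1.0 / x
x_est = fit_parameters(forward, y_measured=0.5, x0=1.0)  # true x is 2.0
```

The damped update (`rate` below 1) trades speed for stability, mirroring the text's point that a poorly chosen Δx may fail to shrink the model error.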

[0056] Different models may be used for the described approach. Analytical models may be used for regular shapes such as cylinders, and to help understand theoretical limits. For irregular geometries, such as for a patient's body, Finite Element Method (FEM) models may be used. An FEM model may be obtained by segmenting magnetic resonance or X-ray CT images and using these to develop an FE mesh specific to the individual subject. Alternatively, a general anatomical mesh may be used. The general anatomical mesh may be fitted to the external shape of the subject, measured, for example, by some simpler optical or mechanical device.

[0057] While accurate FEM models may be computationally expensive, it has been shown that difference EIT methods provide reliable EIT images even for low-accuracy FEM models. In contrast, absolute EIT methods may require higher-accuracy FEM models.

[0058] Further, the computational cost may also be affected by the number of parameters, x. Assuming, for example, that a parameter is used for each FEM element of the model, the number of parameters may be high. However, it may be possible to avoid a fine discretization of the FEM near the electrodes. Instead, a parameterization of the image with fewer degrees of freedom may be made and mapped to the FEM.

[0059] While the above paragraphs describe an iterative method using a forward problem and an inverse problem, a direct method may be used without departing from the disclosure. The direct method may directly (without iterations) obtain a reconstruction without the use of a forward model.
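A common direct (non-iterative) approach in difference EIT is to precompute a single linear reconstruction matrix and apply it to the change in measurements. The Tikhonov-style matrix below is one illustrative choice of prior; the toy sensitivity matrix and variable names are assumptions for the sketch.

```python
import numpy as np

def reconstruction_matrix(J, lam=1e-2):
    """Precompute a linear reconstruction matrix B from the sensitivity J.

    Tikhonov-regularized pseudo-inverse (an assumed, illustrative prior):
    B = (J^T J + lam*I)^(-1) J^T, so that dx = B @ (y - y_ref).
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)

# Toy sensitivity matrix: 4 measurements, 3 image parameters.
J = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0],
              [0.5, 0.5, 0.5]])
B = reconstruction_matrix(J)

y_ref = J @ np.zeros(3)              # baseline (reference) measurements
y = J @ np.array([0.0, 1.0, 0.0])    # measurements after a conductivity change
dx = B @ (y - y_ref)                 # a single matrix multiply: no iterations
```

Because B depends only on the model, it can be computed once offline, which is why direct methods suit real-time difference imaging.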

[0060] Turning to FIG. 5, an EIT system, in accordance with one or more embodiments, is schematically shown. A subject (598), e.g., a patient, wears a wearable EIT signal acquisition unit (510), which contains the electrode interface circuit (310), including, in its entirety or in part, an electrode driving circuit (312), an electrode sensing circuit (314), and an electrode switching circuit (316).

[0061] The wearable EIT signal acquisition unit (510) may perform various operations as described in reference to FIGs. 2, 3, and 4, and may be used to gather impedance measurements from the subject (598). The impedance measurements may be transmitted, via a network (580), to an EIT processing backend (520) for further processing. The wearable EIT signal acquisition unit (510) may be designed to collect impedance measurements over any time interval, e.g., over minutes, hours, or days. The wearable EIT signal acquisition unit (510) may have any wearable format. For example, the wearable EIT signal acquisition unit (510) may be in the form of a belt or a harness. In one embodiment, the wearable EIT signal acquisition unit (510) is a thin portable unit, similar to a belt, worn around the chest, configured to perform lung imaging. In one or more embodiments, the wearable EIT signal acquisition unit includes all components to perform the impedance measurements of an EIT imaging operation. Specifically, as further discussed below in reference to FIG. 6, the wearable EIT signal acquisition unit (510) includes the circuitry for interfacing with the EIT electrodes, the circuitry to generate impedance measurements, a power source, etc. No external unit, such as a non-wearable, e.g., stationary, processing component is needed for generating the impedance measurements. Accordingly, the wearable EIT signal acquisition unit (510) enables the subject (598) to move freely without being tethered to a non-wearable component.

[0062] The EIT processing backend (520) may perform various operations, including the operations described in reference to FIG. 4, and other operations, described below. An image reconstruction may be performed as impedance measurements are received from the wearable EIT signal acquisition unit (510). The image reconstruction may be performed in real-time, or in near real-time. Alternatively, the image reconstruction may be performed in batches. A more detailed description of the operations performed by the EIT processing backend (520) is provided below in reference to FIG. 7. The EIT processing backend (520) may be cloud-hosted. Alternatively, e.g., in a clinical environment, the EIT processing backend (520) may be hosted on a server, e.g., a server under the administration of a healthcare provider. Although the EIT processing backend (520) is shown as interfacing with a single wearable EIT signal acquisition unit (510), the EIT processing backend (520) may receive data from any number of wearable EIT signal acquisition units (510).

[0063] The EIT user interface (530) may enable a user to view and interact with the output of the EIT processing backend (520). Any type of output produced by the EIT processing backend (520) may be viewed, including, for example, EIT images, EIT images that have been enhanced by further processing, EIT image interpretations, etc. The EIT user interface (530) may be a graphical user interface that is accessible by healthcare providers and/or other users authorized to access the data associated with the subject (598).

[0064] The components of the EIT system (500), e.g., the wearable EIT signal acquisition unit (510), the EIT processing backend (520), and the EIT user interface (530), may communicate using the network (580), which may include any combination of wired and/or wireless segments, local area networks and/or wide area networks. The communication between the components of the EIT system (500) may include any combination of secured (e.g., encrypted) and non-secured (e.g., unencrypted) communication.

[0065] Turning to FIG. 6, a wearable EIT acquisition unit, in accordance with one or more embodiments, is schematically shown. In one or more embodiments, the wearable EIT acquisition unit (600) includes all components required to obtain impedance measurements of a subject via the electrodes (698), and to transmit the obtained impedance measurements to an EIT processing backend. No external processing components are needed for the processing performed to obtain the impedance measurements. The wearable EIT acquisition unit (600) may be in the form of a belt, a harness, or any other wearable configuration. In one or more embodiments, the wearable EIT signal acquisition unit (600) includes an EIT signal acquisition circuit (610), a communication interface (620), and a power source (630). Other components may be included without departing from the disclosure. For example, an accelerometer or other sensors may be included to confirm the proper orientation of the wearable EIT acquisition unit, when worn by a subject.

[0066] The EIT signal acquisition circuit (610) may include an electrode interface (612) and a demodulator circuit (614). The electrode interface (612) may be similar to the electrode interface circuit (310), described in reference to FIG. 3. The demodulator circuit (614) may be similar to the demodulator circuit (320), described in reference to FIG. 3. The output of the demodulator circuit may be the impedance measurements.

[0067] The communication interface (620) may be any type of communication interface, e.g., a Wi-Fi, Bluetooth, or cellular (e.g., 5G) communication interface. The communication interface may transmit the impedance measurements provided by the EIT signal acquisition circuit (610) to an EIT processing backend. The communication interface (620) may further transmit status information and/or may receive configuration information.

[0068] In one or more embodiments, the wearable EIT signal acquisition unit (600) includes a computing system (not shown). The computing system may include at least some of the components of the computing system of FIGs. 12A and 12B. The computing system may further include additional components such as analog-to-digital and/or digital-to-analog converters, etc. In one embodiment, the computing system is based on a microcontroller, e.g., an FPGA microcontroller. The computing system may perform various operations, such as the coordination between the EIT signal acquisition circuit (610) and the communication interface (620). The computing system may perform some of the operations of the EIT signal acquisition circuit (610). For example, the demodulator circuit (614) may be a digital demodulator implemented on the computing system. The computing system may also parameterize the EIT signal acquisition circuit (610). For example, the computing system may be used to set the current applied to the electrodes and the frequency of the current, the sensing pattern, the timing, etc.
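The parameterization described above can be pictured as a small configuration structure that the computing system pushes to the acquisition circuit. All field names, units, and default values below are hypothetical illustrations; the disclosure does not specify a configuration schema.

```python
from dataclasses import dataclass

@dataclass
class AcquisitionConfig:
    """Hypothetical settings the computing system could push to the
    EIT signal acquisition circuit (names and values are illustrative)."""
    drive_current_ma: float = 1.0      # current applied to the electrodes
    frequency_khz: float = 50.0        # frequency of the applied current
    sensing_pattern: str = "adjacent"  # electrode stimulation/sensing pattern
    frame_rate_hz: float = 20.0        # acquisition timing

# Override only the parameter being changed, e.g., the drive frequency:
cfg = AcquisitionConfig(frequency_khz=100.0)
```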

[0069] The power source (630) may be any type of power source for powering the components of the wearable EIT signal acquisition unit (600). In one embodiment, the power source (630) is a battery. The use of the battery may isolate the subject from potentially hazardous line voltages, particularly while exposing the subject to currents during the acquisition of the EIT impedance data. The battery may be rechargeable.

[0070] The electrodes (698) may be any type of skin surface electrodes, as previously described. Any number of electrodes may be used. The electrodes (698) may be physically integrated with the wearable EIT signal acquisition unit (600), or alternatively may be physically separate, but electrically connected to the wearable EIT signal acquisition unit (600). The electrodes may be used in direct contact with the subject, or suspended in a conductive medium which is in contact with the subject.

[0071] Turning to FIG. 7, an EIT processing backend, in accordance with one or more embodiments, is schematically shown. The EIT processing backend (700) may include an EIT image reconstruction module (710), an EIT image interpretation module (720), and/or an EIT image enhancement module (750). Each of these components is subsequently described.

[0072] In one or more embodiments, the EIT image reconstruction module (710) performs an EIT image reconstruction based on the impedance measurements received from the wearable EIT signal acquisition unit. The EIT image reconstruction module (710) may perform operations as described in reference to FIG. 4.

[0073] In one or more embodiments, the geometry of the chest is estimated by hardware component(s) to assist in the image reconstruction process. In one or more embodiments, a single or set of strain gauge(s), inertial measurement unit(s), and/or other spatial sensors are used to estimate the boundary shape and area contained within the EIT belt. Such measures are transmitted to the EIT image reconstruction module to inform specifications of the model used for EIT image generation.

[0074] In one or more embodiments, the EIT image interpretation module (720) performs operations on one or more of the EIT images obtained from the EIT image reconstruction module (710). Alternatively, the EIT image interpretation module (720) may operate on enhanced images produced by the image enhancement module (750), as discussed below.

[0075] The EIT image interpretation module (720) may compute various parameters, based on an EIT image. The computed parameters may be of clinical interest, e.g., to better understand the origins of certain critical conditions, to better prevent and predict their occurrence, and/or to have protocols in place to reduce their potential impact. Similarly, the computed parameters may help monitor the long-term impact of past or current conditions (e.g., Chronic Obstructive Pulmonary Disease (COPD), COVID-19, etc.).

[0076] In one or more embodiments, a calibration unit (e.g., a battery-powered wireless portable spirometer) may be used to provide baseline, or calibration, measurements for the clinical variables outlined below. Dynamic parameters obtained by an EIT system using difference methods are often unitless. Therefore, an initial calibration of measurements may be performed. Once one or more measurements (e.g., minimum and maximum values) are known in reference to the calibration unit, the EIT system may perform the remaining measurements without assistance from the calibration unit. In one or more embodiments, several calibration measurements are performed prior to the EIT system performing its measurements independently. In one or more embodiments, the calibration unit may communicate with the EIT processing backend (700) directly, or through a third-party data transmitter, via wired or wireless electronic communication protocols (e.g., Wi-Fi, Bluetooth, 5G, etc.).
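Because difference-EIT parameters are unitless, the calibration described above can be reduced to fitting a mapping from EIT values onto the calibration unit's readings. The linear scale-and-offset model and all numbers below are illustrative assumptions for the sketch.

```python
import numpy as np

def fit_calibration(eit_values, spirometer_values):
    """Fit scale/offset so unitless EIT values map to spirometer units (L).

    Assumes a linear relationship, estimated by least squares from a few
    paired calibration measurements (e.g., minimum and maximum values).
    """
    A = np.column_stack([eit_values, np.ones(len(eit_values))])
    (scale, offset), *_ = np.linalg.lstsq(A, spirometer_values, rcond=None)
    return scale, offset

# Paired calibration measurements (hypothetical numbers):
eit = np.array([0.10, 0.55, 1.00])     # unitless EIT amplitudes
spiro = np.array([0.5, 2.75, 5.0])     # spirometer volumes in liters
scale, offset = fit_calibration(eit, spiro)

# Afterwards the EIT system measures independently of the calibration unit:
volume = scale * 0.7 + offset          # 0.7 is a new unitless EIT reading
```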

[0077] Examples of clinical variables that may be computed by the EIT image interpretation module (720) are subsequently provided:

1) EIT to Estimate Global Spirometry Values
a. FEV-1 (Forced Expiratory Volume in One Second)
b. FVC (Forced Vital Capacity)
c. FEV-1/FVC (ratio of above values)
d. PEF (Peak Expiratory Flow)
e. FEF (Forced expiratory flow rate at 75%, 50%, 25% exhalation volume)
f. FIF (Forced inspiratory flow rate at 75%, 50%, 25% inhalation volume)

2) EIT to Estimate Cross-Sectional Spirometry Values (i.e., Regional Spirometry)
a. Regional FEV-1
b. Regional FVC
c. Regional FEV-1/FVC

3) EIT to Estimate Lung Compliance (i.e., Airway or Lung ‘Stiffness’)

4) EIT to Estimate Lung Volumes

Examples of lung volume measurements relevant in COPD:
i. Residual Volume (air left in lung after maximum exhalation)
ii. End-expiratory lung volume (air left in lung after typical exhalation)

5) EIT to Estimate Work-of-Breathing

Ventilatory Work: Amount of work performed to move air into the lungs.
Neuroventilatory Efficiency: The force of contraction of the diaphragm necessary to move a given volume of air into the lungs.

[0078] Global spirometry may be measured using EIT in a free-breathing subject using the calibration unit described in a previous section for initial calibration. Those skilled in the art will appreciate that global spirometry values (e.g., FEV1, FVC, FEV1/FVC, PEF, FIF, FEF) may be estimated using EIT, for example, as described in “Ngo C, Dippel F, Tenbrock K, Leonhardt S, Lehmann S. Flow-volume loops measured with electrical impedance tomography in pediatric patients with asthma. Pediatr Pulmonol. 2018 May;53(5):636-644. doi: 10.1002/ppul.23962. Epub 2018 Feb 6. PMID: 29405616.”

[0079] Regional spirometry may be measured using EIT in a free-breathing subject using the calibration unit described in a previous section for initial calibration. Those skilled in the art will appreciate that regional spirometry values (e.g., regional FEV1, FVC, FEV1/FVC) may be estimated using EIT, for example, as described in “Krueger-Ziolek, Sabine, Schullcke, et al. "Determination of regional lung function in cystic fibrosis using electrical impedance tomography" Current Directions in Biomedical Engineering, vol. 2, no. 1, 2016, pp. 633-636. https://doi.org/10.1515/cdbme-2016-0139”.

[0080] Lung volumes and lung compliance may be estimated based on difference imaging methods, as subsequently described. Method 1 in the following section describes methods to estimate lung compliance and furthermore, the use of lung compliance to estimate lung volumes. Method 2 in the next section describes the use of pulmonary artery pressure to estimate lung volumes and lung compliance.

[0081] Method 1: Lung Compliance and Associated Lung Volume Measurements: In usual lung physiology, a patient breathes along a standard pressure-volume curve. The residual volume (RV) represents the amount of air contained in the lungs at the end of maximum exhalation. The end-expiratory lung volume (EELV) represents the amount of air contained in the lungs at the end of a normal breath. Inspiratory capacity (IC) is the amount of air that can be inhaled to reach total lung capacity (TLC) at the end of a usual breath. The inspiratory reserve volume (IRV) represents the amount of air that could be inhaled to reach TLC at the end of a typical inhalation. VT represents tidal volume, or the amount of air moved in and out of the lungs in any given breath. In COPD disease physiology, when patients are compensating for reduced flow rates and carbon dioxide retention, an early physiologic compensation mechanism is to increase residual volume (RV) and end-expiratory lung volume (EELV). A patient is able to increase lung volumes dynamically by performing a compensation maneuver known as dynamic hyperinflation. During dynamic hyperinflation, a series of lung inspirations occur prior to complete exhalation (known as breath-stacking), thus resulting in higher retained lung volume. At these higher starting lung volumes, the slope of the pressure-volume curve is generally shallower, representing a less compliant system. The intrinsic pressure required to maintain this system at this increased volume is known as intrinsic positive end-expiratory pressure (PEEPi, otherwise known as auto-PEEP). This compensation mechanism makes use of elastic recoil of the chest wall to preserve expiratory flow rates. At higher lung volumes, a greater amount of recruitment of lung segments (e.g., especially lung segments closer to the boundaries of the chest) occurs due to intrinsic PEEP. It is thus reasonable to estimate lung volumes based on how much lung recruitment is required to achieve given airflow rates (e.g., net change in lung recruitment / net lung airflow rate). Using EIT images, recruitment may be estimated by measuring a Homogeneity Index (an index of how similar flow rates are in different lung segments, which in turn indicates how much lung recruitment has occurred). These metrics may be used to estimate an individual patient’s pressure-volume curve. Residual volume (RV) and end-expiratory lung volume (EELV) are estimated based on the slope of the pressure-volume curve (e.g., a shallower slope on the pressure-volume curve will be observed as lung volumes such as RV and EELV rise).
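The text does not define the Homogeneity Index precisely. One plausible formulation, shown purely as an illustration, scores how similar regional flow rates are via their coefficient of variation, so that 1 indicates perfectly homogeneous flow across segments.

```python
import numpy as np

def homogeneity_index(regional_flows):
    """Illustrative homogeneity index: 1 - coefficient of variation.

    regional_flows: per-segment airflow rates derived from an EIT image.
    Values near 1 indicate similar flows across lung segments (high
    recruitment); lower values indicate heterogeneous flow.
    This formula is an assumption; the disclosure does not specify one.
    """
    flows = np.asarray(regional_flows, dtype=float)
    return 1.0 - flows.std() / flows.mean()

uniform = homogeneity_index([1.0, 1.0, 1.0, 1.0])   # fully homogeneous
uneven = homogeneity_index([0.2, 1.8, 0.2, 1.8])    # heterogeneous flows
```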

[0082] Method 2: Pulmonary Artery Pressure Measurement: Pulmonary artery (PA) pressure is a measure of blood pressure in the pulmonary arterial circulation, or circulation of blood to lung tissue originating from the right side of the heart. This measurement is often used as a surrogate for advancing heart or lung dysfunction since it is highly correlated with changes in pressures from both of these systems. In the case of left-sided heart disease, PA pressure increases as a result of increasing pressure in the left atrium, given these circuits are connected in series. In the case of advancing lung disease, PA pressures increase linearly with rising PEEPi, given the positive pressures required to maintain increased lung volumes must be added to the pressure generated by the right side of the heart to preserve cardiac output. In effect, P_new = P_old + φ·PEEPi, where φ represents a correction factor. Therefore, measured differences in PA pressure over time in the lung disease population predominantly represent changes in intrinsic PEEP. PEEPi increases are in many circumstances linearly correlated with increases in EELV, which itself is highly correlated with an increase in RV. Thus, any changes in pulmonary artery pressure may be used to estimate changes in lung volumes (RV and EELV). Lung compliance may be estimated using the known RV and EELV measurements derived from Method 2 as the initial conditions for the methods described in Method 1. Those skilled in the art will appreciate that PA pressures may be estimated using EIT, for example, as described in "Proença M, Braun F, Lemay M, et al. Non-invasive pulmonary artery pressure estimation by electrical impedance tomography in a controlled hypoxemia study in healthy subjects. Sci Rep. 2020;10(1):21462”, which is hereby incorporated herein by reference in its entirety, to the extent possible.
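Under the linear relationships described above (P_new = P_old + φ·PEEPi, with PEEPi changes in turn linearly related to EELV changes), the chain of estimates can be sketched in a few lines. The coefficient values are hypothetical placeholders, as the disclosure gives no numbers.

```python
def delta_eelv_from_pa(delta_pa, phi=1.0, k_eelv=0.05):
    """Estimate the change in EELV (L) from a change in PA pressure.

    delta_pa : measured change in pulmonary artery pressure over time
    phi      : correction factor in P_new = P_old + phi * PEEPi (hypothetical)
    k_eelv   : liters of EELV per unit of intrinsic PEEP (hypothetical)
    """
    delta_peepi = delta_pa / phi      # invert the pressure relationship
    return k_eelv * delta_peepi       # linear PEEPi -> EELV relationship

# A hypothetical rise of 4 units in PA pressure over time:
delta_eelv = delta_eelv_from_pa(4.0)
```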
[0083] Work-of-breathing (WOB) may be divided into two distinct concepts: 1) Ventilatory Work (i.e., the amount of work performed to move air into the lungs), and 2) Neuroventilatory Efficiency (i.e., the force of contraction of the diaphragm necessary to move a given volume of air into the lungs). The methods to estimate each of these metrics using EIT are outlined in the subsequent sections.

[0084] Ventilatory Work (i.e., the amount of work performed to move air into the lungs) may be estimated mathematically as the area under the pressure-volume curve of the lungs. The methods to estimate Tidal Volume (VT) and lung compliance (i.e., the pressure-volume curve) are described in previous sections. Given the initial conditions and slope of the pressure-volume curve are known based on the methods described in previous sections, the tidal volume (VT) multiplied by the change in airway pressure results in an estimate of the ventilatory work performed.

[0085] Neuroventilatory Efficiency (i.e., the force of contraction of the diaphragm necessary to move a given volume of air into the lungs) may be estimated with the use of EIT and a separate method of assessing force exerted by the diaphragm. In one or more embodiments, EIT may be used to estimate tidal volume (VT). In one or more embodiments, force exerted by the diaphragm is estimated using surface electromyography (EMG) electrodes or, alternatively, through the use of an invasive diaphragm pressure sensor (e.g., an intra-esophageal diaphragm EMG sensor). Neuroventilatory efficiency is estimated by dividing the tidal volume by the force exerted by the diaphragm (e.g., the electrical diaphragm-activity ratio). Those skilled in the art will appreciate that the electrical diaphragm-activity ratio may be estimated using EMG, for example, as described in “Ferreira, J.C., Diniz-Silva, F., Moriya, H.T. et al. Neurally Adjusted Ventilatory Assist (NAVA) or Pressure Support Ventilation (PSV) during spontaneous breathing trials in critically ill patients: a crossover trial. BMC Pulm Med 17, 139 (2017). https://doi.org/10.1186/s12890-017-0484-5”.

[0086] In one or more embodiments, global airflow metrics, regional airflow metrics, lung compliance, lung volumes, or work-of-breathing are either measured or estimated in a free-breathing subject (i.e., not connected to a ventilator) using EIT methods as described.
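The two work-of-breathing metrics defined above reduce to simple arithmetic on quantities the preceding sections estimate from EIT. The input values below are hypothetical; the units are illustrative assumptions.

```python
def ventilatory_work(tidal_volume, delta_pressure):
    """Ventilatory work ~= tidal volume x change in airway pressure,
    approximating the area under the pressure-volume curve."""
    return tidal_volume * delta_pressure

def neuroventilatory_efficiency(tidal_volume, diaphragm_activity):
    """Tidal volume divided by diaphragm force (e.g., surface-EMG
    amplitude): the electrical diaphragm-activity ratio."""
    return tidal_volume / diaphragm_activity

work = ventilatory_work(0.5, 8.0)             # 0.5 L, 8 cmH2O (hypothetical)
nve = neuroventilatory_efficiency(0.5, 10.0)  # 10 uV EMG amplitude (hypothetical)
```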

[0087] The EIT image interpretation module may further perform historical trend analyses over time, e.g., over a series of EIT images. Historical trend analysis may be performed for any parameters and may be used to predict occurrences of undesirable developments over time, such as a Chronic Obstructive Pulmonary Disease (COPD) exacerbation, heart failure, etc.

[0088] The EIT image enhancement module (750) may include a resolution enhancement and style transfer module (760) and/or a segmentation module (770). The resolution enhancement and style transfer module (760) and the segmentation module (770) are described below.

[0089] The resolution enhancement and style transfer module (760), in one or more embodiments, performs operations on one or more of the EIT images obtained from the EIT image reconstruction module. More specifically, the resolution enhancement and style transfer module (760) may perform at least one of a resolution enhancement and a style transfer on the EIT image(s).

[0090] The resolution of an EIT image may be increased as subsequently described. Assume, for example, that an EIT image based on impedance measurements provided by a wearable EIT signal acquisition unit or any other EIT source includes 64 x 64 pixels. The resolution may be a result of the use of 64 electrodes. The resolution may be enhanced to 512 x 512 pixels, a common resolution of a computed tomography (CT) image. Using the machine learning methods described, the EIT image may further be used to estimate the appearance of a patient’s CT image at a similar anatomic cross-section.

[0091] The resolution enhancement and style transfer may benefit clinicians who are trained to read and interpret CT images, which may not always be true for EIT images. The purpose of the resolution enhancement and style transfer is to generate static or dynamic CT-style images and subsequently overlay the dynamic EIT images to provide anatomic context to the interpreting healthcare professional.

[0092] The resolution enhancement may be performed using up-sampling methods, as described below. In one or more embodiments, AI-based super resolution methods based on deep convolutional neural networks (DCNN) are used. Other methods may be used without departing from the disclosure. Using super-resolution neural networks (SRNN) as further discussed below, a single model may be trained to “learn” image-specific features by analyzing the individual relationship between each of the low resolution input images and the corresponding high resolution target images. As a result of using the SRNN methods, the usefulness and capabilities of lower-resolution medical imaging systems may increase.
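For contrast with the learned SRNN approach above, the basic up-sampling step (e.g., 64 x 64 to 512 x 512) can be done with a trivial interpolation baseline. The nearest-neighbor sketch below is only an illustrative baseline, not the DCNN method described in the text.

```python
import numpy as np

def upsample_nearest(image, factor):
    """Nearest-neighbor up-sampling: repeat each pixel factor x factor times.

    A naive baseline for comparison; the DCNN/SRNN methods described in
    the text learn a far richer low-to-high-resolution mapping than this.
    """
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

eit_image = np.random.rand(64, 64)            # stand-in low-resolution EIT image
enhanced = upsample_nearest(eit_image, 8)     # 64 x 64 -> 512 x 512
```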

[0093] The style transfer may be used to manipulate the appearance of an EIT image such that the original content of the EIT image is presented in a different imaging modality, e.g., similar to a CT image. As a result, the color coding indicative of impedance values may change to a grayscale representation indicative of anatomical features. In one or more embodiments, a single neural network architecture is used for the resolution enhancement and the style transfer from a lower-resolution EIT image towards a higher-resolution CT-style image.

[0094] In one or more embodiments, the resolution enhancement and the style transfer are performed by a generative adversarial network (GAN). GANs are composed of two different networks: the Generator and the Discriminator. The purpose of the Generator is to generate data (i.e., enhanced images from low-resolution input) that is then presented to the Discriminator. The purpose of the Discriminator is to classify the images generated by the Generator as real or fake data. In other words, during training, the images created by the Generator will constantly be tested by the Discriminator, which indicates whether it is able to detect a difference between the Generator’s output (super-resolution/styled image) and the target (actual high-resolution CT image). The better the performance of the Generator, the less detectable the difference should be between the Generator’s output and the target. A more detailed description of GANs may be found in “I. J. Goodfellow et al., “Generative Adversarial Networks,” Jun. 2014, [Online] Available: http://arxiv.org/abs/1406.2661”, which is hereby incorporated herein by reference in its entirety, to the extent possible.

[0095] The architecture of the Generator network may be based on residual neural networks (ResNet) and skip connections, which have been shown to improve the performance and outcome of deep neural networks by preventing the vanishing gradient problem. A more detailed description of residual neural networks may be found in “K. He et al., “Deep Residual Learning for Image Recognition,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2016-December, pp. 770-778, 2016”, which is hereby incorporated herein by reference in its entirety, to the extent possible. Referring now to FIGs. 8A and 8B, an example implementation of a Generator network architecture of a GAN (800) is shown. The Generator network (800), once trained, operates on the low-resolution EIT image to generate a super-resolution image. Examples of sub-blocks of the Generator network architecture (820) include a residual block of the Generator network (800) and a sub-pixel block of the Generator network (800). While FIGs. 8A and 8B show a particular configuration of a Generator network, other configurations may be used without departing from the disclosure.

[0096] The architecture of the Discriminator network may be based on a series of convolutional blocks combined with batch normalization and activation layers, ending with a dense layer to perform binary classification (i.e., to decide whether the super-resolution image generated by the Generator is classified as high resolution). Referring now to FIG. 8C, an example implementation of a Discriminator network architecture of a GAN (840) is shown. While FIG. 8C shows a particular configuration of a Discriminator network, other configurations may be used without departing from the disclosure.

[0097] A more detailed description of GANs used for resolution enhancement, including Generator and Discriminator networks, content loss and adversarial loss, may be found in “C. Ledig et al., “Photo-realistic single image super-resolution using a generative adversarial network,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 105-114, 2017, doi: 10.1109/CVPR.2017.19”, which is hereby incorporated herein by reference in its entirety, to the extent possible. Other network architectures may be used without departing from the disclosure.

[0098] In one or more embodiments, prior to using the GAN for resolution enhancement and for style transfer, the GAN is trained. The training may be based on a dataset containing matching EIT and CT images of the same group of patients. The GAN may be trained to output resolution enhanced and style transferred images that match the CT images, based on the EIT images provided as input.

[0099] A loss function is used as a metric guiding the learning of the GAN. In one or more embodiments, a perceptual loss is used. Unlike a pixel-wise mean squared error (MSE) loss (i.e., the difference between the pixel values of the target and the network’s output), the perceptual loss does not struggle with high-frequency details, and thus does not smooth out important image features by causing the model to learn a pixel-wise average solution.

[00100] The perceptual loss relies on learning higher-level image features (perceptual features) instead of focusing on pixel-wise differences. Unlike the pixel-wise MSE loss, the perceptual loss is suitable for correctly evaluating the higher-frequency features in the output of the resolution enhancement and style transfer performed by the GAN. A detailed description of perceptual loss may be found in “J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 9906 LNCS, pp. 694-711, 2016, doi: 10.1007/978-3-319-46475-6_43” and in “J. Bruna, P. Sprechmann, and Y. LeCun, “Super-resolution with deep convolutional sufficient statistics,” 4th Int. Conf. Learn. Represent. ICLR 2016 - Conf. Track Proc., no. 2009, pp. 1-17, 2016”, which are hereby incorporated herein by reference in their entirety.

[00101] The perceptual loss may be divided into content loss and adversarial loss. Content loss may be designed to assist the network in learning how to generate an image that is similar to the target. Adversarial loss may be designed to assist the network in learning higher level image features that will help fool the Discriminator.

[00102] Content loss may be defined as a function that attempts to minimize the Euclidean distance between the feature maps of the generated super-resolution image and the high-resolution training image. The feature maps used for content loss may be extracted from a cropped pre-trained VGG network (by the Visual Geometry Group at Oxford University). The details of the VGG network may be found in “K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1-14, 2015”, which is hereby incorporated herein by reference in its entirety, to the extent possible. Adversarial loss may be defined using binary cross-entropy.
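Under the definitions above, the two components of the perceptual loss can be sketched as follows. The feature maps would come from the cropped pre-trained VGG network; here they are simply passed in as arrays, and the weighting between the two terms is an assumption not stated in the text.

```python
import numpy as np

def content_loss(sr_features, hr_features):
    """Euclidean (squared) distance between the feature maps of the
    generated super-resolution image and the high-resolution target."""
    return np.mean((sr_features - hr_features) ** 2)

def adversarial_loss(disc_output_on_sr):
    """Binary cross-entropy, with the Generator trying to make the
    Discriminator output 1 ('real') for generated images."""
    eps = 1e-12                                  # numerical safety margin
    return -np.mean(np.log(disc_output_on_sr + eps))

def perceptual_loss(sr_feats, hr_feats, disc_out, adv_weight=1e-3):
    # adv_weight balances the two terms (value assumed, not from the text)
    return content_loss(sr_feats, hr_feats) + adv_weight * adversarial_loss(disc_out)
```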

[00103] Data augmentation may be used when generating the training dataset for training the GAN. The data augmentation may expand the diversity of the training dataset by randomly transforming each image prior to each training iteration. The data augmentation may result in an improved ability of the GAN to generalize to new, unseen images. In general image processing, data augmentation strategies may include rotation, cropping, flipping, resizing, etc. of the images. However, in medical imaging, augmentation transformations may be limited to certain operations, as most medical images are collected using a standardized protocol and therefore are not expected to differ significantly between samples. Augmentation strategies may include, for example, a random rotation of each image by an angle within a defined range, a random warping of each image along the x- and y-axes (i.e., resulting in a slightly elongated or compressed image along the horizontal or vertical direction), and/or a random cropping of each image to allow the network to learn on different scales.
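Two of the augmentation strategies listed above can be sketched with plain array operations (rotation by an arbitrary angle would normally use an image library, so it is omitted here). The crop size and warp range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(image, crop_size):
    """Randomly crop a square patch so the network learns on different scales."""
    h, w = image.shape
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size]

def random_warp(image, max_scale=0.1):
    """Slightly elongate/compress along x and y via nearest-neighbor resampling."""
    h, w = image.shape
    sy = 1.0 + rng.uniform(-max_scale, max_scale)   # vertical stretch factor
    sx = 1.0 + rng.uniform(-max_scale, max_scale)   # horizontal stretch factor
    rows = np.clip((np.arange(int(h * sy)) / sy).astype(int), 0, h - 1)
    cols = np.clip((np.arange(int(w * sx)) / sx).astype(int), 0, w - 1)
    return image[np.ix_(rows, cols)]

# Apply a fresh random transform before each training iteration:
image = np.random.rand(64, 64)
augmented = random_warp(random_crop(image, 48))
```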

[00104] Now referring to FIG. 8D, examples of resolution enhancement results (800), in accordance with one or more embodiments, are shown. The results (800) were obtained after training the GAN for 2,000 iterations using 80% of the available data. The testing was performed on the remaining 20%. Two samples are shown, with the input to the GAN in the left column, the output of the GAN in the center column and, for comparison, the high resolution CT image in the right column.

[00105] A training data set consisting of CT and EIT data sets from roughly 100-200 patients appears to be sufficient. The quality of the output (center column) suggests that EIT methods may be used to estimate baseline anatomy, and any anatomic changes due to disease progression, from the EIT images, without requiring other images, such as CT.

[00106] The use of artificial intelligence methods to generate anatomic representations of a patient's lung anatomy based on low-resolution EIT imaging, in accordance with one or more embodiments, represents a significant advancement in the field of remote lung monitoring. The addition of realistic representations provides much-needed anatomic context regarding the regional variations involved in COPD disease progression. Embodiments of the invention thus enable EIT lung imaging capable of demonstrating global and regional airflow patterns within the anatomy, without requiring other high-resolution anatomical imaging.

[00107] The segmentation module (770), in one or more embodiments, performs operations on one or more of the EIT images obtained from the EIT image reconstruction module or on any other EIT image, such as an EIT image that has been enhanced as previously described. In one embodiment, the segmentation module (770), when operating on an image, identifies what portions of the image may be labeled as lung tissue (i.e., what pixels should be classified as ‘lung tissue’ versus ‘background and other organs’). A similar labeling may be performed for any other organs of interest.

[00108] The segmentation performed by the segmentation module (770) may benefit clinicians because image segmentation can be a tedious and time-consuming process that requires the intervention of highly trained professionals. Besides reducing manual processing time, an AI-based auto segmentation by the segmentation module (770) may generate more reliable and accurate lung volume assessments compared to traditional generic image processing algorithms. This may also help with the tuning of future image classification algorithms by removing potential noise and biases from other, less relevant portions of the image.

[00109] Further, when monitoring for lung volume changes and/or extracting a series of relevant pulmonary metrics, a method to isolate lung volume in the collected images may be beneficial or necessary.

[00110] An unsupervised segmentation method (not requiring labeled data during training) and a supervised segmentation method are subsequently described.

[00111] Unsupervised segmentation: In one or more embodiments, a K-means clustering in combination with additional image processing techniques is used to perform the segmentation.

[00112] The K-means clustering may iteratively group data points into a defined number of K clusters. This unsupervised clustering method may work by (1) randomly assigning a K-number of centroids within the data, (2) calculating the sum of squared distances between all data points and each centroid, (3) assigning data points to the closest centroid, (4) calculating the new centroids of each defined cluster, and (5) re-iterating through all previous steps until clear clusters have been defined. To perform the segmentation, K may be fixed at a value of 2 to allow the clustering between lung tissues (cluster #1) and background and other organs (cluster #2).
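
Steps (1) through (5) above can be sketched as a minimal K-means over pixel intensities; the intensity values in the example call are illustrative only:

```python
import numpy as np

def kmeans(data, k=2, iters=50, seed=0):
    """Minimal K-means following steps (1)-(5): pick centroids, compute
    squared distances, assign, update centroids, and repeat until stable."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float).reshape(-1, 1)
    # (1) randomly pick K data points as the initial centroids
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # (2) squared distance between every data point and each centroid
        d2 = (data - centroids.T) ** 2
        # (3) assign each data point to its closest centroid
        labels = np.argmin(d2, axis=1)
        # (4) recompute each cluster's centroid (keep the old one if empty)
        new = np.array([data[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        # (5) re-iterate until the centroids no longer move
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids.ravel()

# K fixed at 2: cluster #1 (lung tissue) vs. cluster #2 (background/other)
labels, centers = kmeans([0.05, 0.1, 0.15, 0.8, 0.85, 0.9], k=2)
```

For image segmentation, the flattened pixel intensities would be passed in place of the toy intensity list.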

[00113] To improve the outcome of the K-Means clustering method, several thresholds and post-processing techniques may be used, including, for example, a pixel intensity thresholding, morphological operations, and/or a pixel count thresholding, as subsequently described in reference to FIG. 9A.

[00114] In the example of FIG. 9A, illustrating an unsupervised segmentation, an original image is first normalized. The normalization may include, for example, a brightness normalization resulting in enhanced contrast and enhanced lung tissue visualization.

[00115] Next, a mean pixel intensity value between the two clusters may be calculated and defined as a pixel intensity threshold. Every pixel with an intensity below this threshold may be set to a new intensity of 1, while all other pixels may be set to an intensity of 0. The pixel intensity thresholding may result in a binary (e.g., black & white) image.

[00116] Subsequently, morphological operations such as erosion (removes pixels from the boundary of an image) and/or dilation (adds new pixels to the boundary of an image based on the value of the pixels present near the corresponding boundary) may be performed. The morphological operations may help reduce noise from small blood vessels contained within the lung volume.
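
The erosion/dilation step may be sketched with `scipy.ndimage`, one possible implementation; a single iteration of each operation is an illustrative choice:

```python
import numpy as np
from scipy import ndimage

def denoise_mask(binary_mask, erode_iters=1, dilate_iters=1):
    """Erosion removes pixels from the boundary (suppressing small
    vessel-like noise); dilation then adds pixels back to the boundary
    of the remaining structures."""
    out = ndimage.binary_erosion(binary_mask, iterations=erode_iters)
    out = ndimage.binary_dilation(out, iterations=dilate_iters)
    return out
```

An isolated pixel (e.g., a small vessel) is removed by the erosion and never restored, while a larger connected region survives the round trip.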

[00117] Further, a pixel count thresholding may be used to categorize a group of data points as lung volume only if the data points satisfy several pixel count criteria. For example, the total number of pixels may be required to be between 10,000 and 170,000, the minimum number of pixels in a row should be at least ~10% of the image width, the maximum number of pixels in a row should be no more than ~95% of the image width, the minimum number of pixels in a column should be at least ~2% of the image height, and/or the maximum number of pixels in a column should be no more than ~98% of the image height. Other thresholds may be used without departing from the disclosure.
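
The pixel count criteria above may be checked as follows; the threshold values mirror the examples in the text and, as the text notes, other thresholds may be used:

```python
import numpy as np

def passes_pixel_count(mask, total_range=(10_000, 170_000),
                       row_frac=(0.10, 0.95), col_frac=(0.02, 0.98)):
    """Return True if a candidate lung mask satisfies the pixel count
    criteria (total count, per-row count vs. image width, per-column
    count vs. image height)."""
    h, w = mask.shape
    total = int(mask.sum())
    if not (total_range[0] <= total <= total_range[1]):
        return False
    row_counts = mask.sum(axis=1)   # pixels per row (bounded by width)
    col_counts = mask.sum(axis=0)   # pixels per column (bounded by height)
    rows = row_counts[row_counts > 0]
    cols = col_counts[col_counts > 0]
    if rows.size == 0 or cols.size == 0:
        return False
    if rows.min() < row_frac[0] * w or rows.max() > row_frac[1] * w:
        return False
    if cols.min() < col_frac[0] * h or cols.max() > col_frac[1] * h:
        return False
    return True
```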

[00118] In a final step, the resulting mask may be applied to the original image, thereby identifying the lungs in the original image.

[00119] In a performance evaluation, the unsupervised segmentation, in accordance with one or more embodiments, was able to correctly segment about 70% of the data. The remaining 30% resulted in partial or incorrect segmentations. Further analysis showed that the partial or incorrect segmentations were mostly due to the diseased aspect of these images, where the morphology and/or integrity of lung tissues may significantly differ from healthy samples, making it challenging for the algorithm to correctly distinguish lung space from background.

[00120] Supervised segmentation: In one or more embodiments, a neural network is used for the segmentation. The network may be able to generalize to different morphologies, image quality and clinical conditions, thereby producing a high quality segmentation.

[00121] In one or more embodiments, a 2D adaptation of a multi-scale pyramid 3D deep convolutional neural network (DCNN) is used for the supervised segmentation. A description of the original implementation may be found in "H. R. Roth et al., "A Multi-scale Pyramid of 3D Fully Convolutional Networks for Abdominal Multi-organ Segmentation," Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 11073 LNCS, pp. 417-425, 2018, doi: 10.1007/978-3-030-00937-3_48", which is hereby incorporated herein by reference in its entirety, to the extent possible.

[00122] Now referring to FIG. 9B, showing an example of a supervised segmentation (950), in the architecture used for the supervised segmentation, the samples are first passed through the DCNN (stage 1). The predictions by stage 1 may be up-sampled, cropped, and concatenated with a zoomed-in version of the samples before being passed through the DCNN a second time (stage 2) to generate the final output. The use of two stages (stage 1: full image, and stage 2: zoomed-in version of a segment of the full image), in accordance with one or more embodiments, is performed to produce a superior segmentation at different scales, in comparison to a single-stage segmentation. The DCNN used for the segmentation may be a 2D adaptation of a V-Net. A description of the original V-Net implementation, including the Dice loss used for training, may be found in "F. Milletari, N. Navab, and S. A. Ahmadi, "V-Net: Fully convolutional neural networks for volumetric medical image segmentation," Proc. 2016 4th Int. Conf. 3D Vision (3DV 2016), pp. 565-571, 2016, doi: 10.1109/3DV.2016.79", which is hereby incorporated herein by reference in its entirety, to the extent possible.

[00123] To train the DCNN-based supervised segmentation algorithm, a multiclass adaptation of the average Dice loss is used. A Dice loss is based on the Dice coefficient, a common metric in the field of computer vision that quantifies the similarity between two images, where a value of 0 indicates that the two images have 0% similarity, while a value of 1 indicates that the two images are identical. Two Dice losses were calculated (one per stage) and added to generate the multi-scale Dice coefficient, as illustrated in FIG. 9B.
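
The Dice coefficient and the two-stage (multi-scale) Dice loss described above may be sketched as follows; the small smoothing constant is an assumed implementation detail to avoid division by zero:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice coefficient: 0 means no overlap, 1 means identical masks."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def multiscale_dice_loss(pred1, target1, pred2, target2):
    """Sum of the per-stage Dice losses (loss = 1 - Dice), one per stage."""
    return (1.0 - dice_coefficient(pred1, target1)) + \
           (1.0 - dice_coefficient(pred2, target2))
```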

[00124] The dataset used for the training may include pairs of images, each pair including an original image and the original image with a mask indicating the segmentation applied to the original image. The dataset may serve as ground-truth data for the training. The masks may have been applied to the original images by an expert user to ensure accuracy of the segmentation. During the training, the previously described data augmentation strategies including random rotation, random warping, and/or random cropping may be used to generate a richer training dataset. Further, to generate the stage 2 input, the zoomed-in region may be randomly selected. For example, a rectangle of random size may be placed at a random location of the image to obtain the zoomed-in region, as shown in FIG. 9B.

[00125] To evaluate the supervised segmentation, in accordance with one or more embodiments, the described network was trained for 500 iterations using 80% of the available data and tested on the remaining 20%. The resulting segmentation performance was evaluated based on the Dice score. An average Dice score of 0.96, indicating a 96% similarity between network output and original target was found, thus suggesting a highly accurate supervised segmentation.

[00126] Now returning to FIG. 7, the EIT processing backend (700) may include other components that are not shown. For example, the EIT processing backend may include one or more databases. The database(s) may archive any data, including the impedance measurements received from the wearable EIT signal acquisition unit, images generated by the EIT reconstruction module (710), resolution-enhanced and/or style-transferred images generated by the resolution enhancement and style transfer module (760), segmented images generated by the segmentation module (770), interpretations generated by the EIT image interpretation module (720), etc. The EIT processing backend may further include an import/export interface, e.g., to import images from other sources and/or to share images with other systems.

[00127] FIGs. 10, 11A, 11B, and 11C show flowcharts in accordance with one or more embodiments. One or more of the steps in FIGs. 10, 11A, 11B, and 11C may be performed by the components discussed above in reference to FIGs. 5, 6, and 7. While the various steps in these flowcharts are presented and described sequentially, one of ordinary skill will appreciate that at least some of the blocks may be executed in different orders, may be combined or omitted, and at least some of the blocks may be executed in parallel. Additional steps may further be performed. Accordingly, the scope of the disclosure should not be considered limited to the specific arrangement of steps shown in FIGs. 10, 11A, 11B, and 11C.

[00128] Turning to FIG. 10, a flowchart describing a method for EIT signal acquisition (1000), in accordance with one or more embodiments, is shown. The operations described below may be performed by a wearable EIT signal acquisition unit, as previously described.

[00129] In Step 1002, a frame of EIT impedance measurements is obtained. The frame of EIT impedance measurements may be obtained as described in reference to FIGs. 2, 3, and 4.

[00130] In Step 1004, the frame of EIT impedance measurements is transmitted to an EIT processing backend.

[00131] Steps 1002 and 1004 may be repeated, e.g., at a fixed frame rate.

[00132] Turning to FIG. 11A, a flowchart describing a method for EIT image processing (1100), in accordance with one or more embodiments, is shown.

[00133] In Step 1102, the frame of EIT impedance measurements is received from the wearable EIT signal acquisition unit.

[00134] In Step 1104, an EIT image is reconstructed from the EIT impedance measurements. The reconstruction may be performed as described in reference to FIG. 5. Step 1104 may be performed as frames of EIT impedance measurements are received from the wearable EIT signal acquisition unit. Alternatively, Step 1104 may be executed to process frames of EIT impedance measurements in batches.

[00135] In Step 1106, the EIT image may be interpreted, e.g., by performing one or more of the operations of the EIT image interpretation module (720) of FIG. 7. Execution of Step 1106 is optional.

[00136] Other operations may further be performed. For example, the EIT impedance measurements, the reconstructed EIT image(s), and/or interpretations of the EIT image(s) may be archived.

[00137] Turning to FIG. 11B, a flowchart describing a method for EIT image enhancement (1150), in accordance with one or more embodiments, is shown.

[00138] In Step 1152, a resolution enhancement and/or style transfer is performed on an EIT image. The resolution enhancement and/or style transfer may be performed as described in reference to FIGs. 7, 8A, 8B, 8C, and 8D.

[00139] Turning to FIG. 11C, a flowchart describing a method for EIT image segmentation (1160), in accordance with one or more embodiments, is shown.

[00140] In Step 1162, a segmentation is performed on an EIT image. The segmentation may be performed as described in reference to FIGs. 7, 9A, and 9B.

[00141] The use case scenarios described below are intended to provide examples of the application of the systems and methods in accordance with one or more embodiments. The systems and methods as described are not limited to the following use cases.

[00142] Embodiments of the disclosure may be used by patients in home settings or clinical settings.

[00143] In home settings, embodiments of the disclosure may provide detailed, disease-specific health monitoring for patients with chronic respiratory conditions, including chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), cystic fibrosis (CF), and bronchiectasis. Embodiments of the disclosure may reduce unnecessary hospitalizations, improve health outcomes, and prevent high-risk healthcare encounters. A wearable EIT signal acquisition unit may be distributed to the patient by their physician, clinic, or health authority for their at-home use. The data from the wearable EIT signal acquisition unit would be sent wirelessly to the patient's healthcare provider regardless of location, and also to a cloud-based computing center for disease trend analysis. Furthermore, during any present or future outbreaks of communicable respiratory infections, the wearable EIT signal acquisition unit may be distributed to patients recovering from their condition to monitor their progress without the need for high-risk healthcare contact. The outpatient version of the wearable EIT signal acquisition unit may be intended to be worn approximately 5-10 minutes per day to track respiratory disease progression. The patient may initially perform several deep breaths through a mouthpiece for calibration while the wearable EIT signal acquisition unit is worn. Afterwards, all measurements would be performed by the wearable EIT signal acquisition unit.

[00144] A hospital-grade version of the wearable EIT signal acquisition unit may be available for outpatient clinics and inpatient hospital settings to provide detailed physiologic assessment of the lungs in real-time while minimizing unnecessary person-to-person contact. In the clinic setting, the wearable EIT signal acquisition unit would be used as a point-of-care diagnostic tool for monitoring disease progression in patients suffering from underlying lung conditions, as well as a tool for assessment of patients recovering from acute respiratory conditions, such as COVID-19 infection. In remote healthcare settings, such as rural geographies, the wearable EIT signal acquisition unit could greatly improve access to specialist care by providing high-quality information to a specialist physician in a remote setting through existing telehealth networks in local clinics. In the inpatient hospital setting, the wearable EIT signal acquisition unit could be worn continuously by a patient in order to inform healthcare providers of their respiratory status in real-time. Parameters such as regional ventilation changes, which typically require dedicated equipment to measure, would be available to healthcare providers instantly, similar to how vital signs are currently monitored.

[00145] Physicians may prescribe an action plan based on information from a single encounter, or may prescribe action plans based on crossing pre-defined thresholds. For example, a COPD patient recovering from COVID-19 may increase their inhaler regimen if their flow rates based on a maximal respiratory effort (FEV1, FVC) drop below a certain level. The system may also inform healthcare practitioners, for patients with pre-existing conditions, of how much their current symptoms are due to their new respiratory infection versus their chronic underlying lung disease.

[00146] Furthermore, an AI-driven trend analysis may inform clinical decision making for patients with known patterns prior to exacerbations. For example, if a known COPD patient with particular restrictive airflow in the right lower lobe experiences a 50% increase in resistance to that particular segment, and in the past, worsening of this segment has predicted an impending COPD exacerbation, a healthcare practitioner may opt to treat the patient earlier with oral steroids and antibiotics as an outpatient in order to prevent an acute hospitalization. Such information may be overlooked with conventional outpatient spirometry systems, even when performed under ideal circumstances, given there may be a negligible change in overall airflow and resistance due to over-compensation from other regions of the lung.

[00147] Reports may be automatically generated and sent to a clinician and/or to the patient. The clinician may receive reconstructed and/or enhanced images, as well as pulmonary/cardiac metrics, followed by predictive diagnoses to indicate potential risk of chronic obstructive pulmonary disease (COPD) exacerbation, pulmonary hypertension, or congestive heart failure. The patient may receive a summarized, less technical report.

[00148] By pairing the EIT imaging system with a portable spirometer, the system may be capable of mapping regional changes in airflow, including regional changes in forced expiratory volume in the first second (FEV1), forced vital capacity (FVC), and FEV1/FVC (FIG. 1D). Currently, no clinical system in the outpatient or at-home setting measures these parameters in a regional manner. Global measures such as end-expiratory lung volume (EELV) and tidal volume (VT) may also be estimated by the system by using lung cross-sectional areas to estimate volumes. The system further overcomes the challenge of distinguishing poor inspiratory effort from severe lung obstruction by measuring airflow parameters while simultaneously visualizing lung expansion (EIT imaging).

[00149] In addition to patient monitoring, embodiments of the disclosure may also be used for patient rehabilitation, e.g., for patients recovering from lung disease exacerbations or rehabilitation for patients with chronic lung disease. In such a scenario, embodiments of the disclosure guide the patient's pulmonary rehabilitation towards improved breathing adaptations (allowing patients to better control their breathing patterns) or strengthening muscles of breathing (including the diaphragm or accessory muscles of breathing). In this scenario, the device may be used during physical exercise, instructed breathing maneuvers, or other components of usual pulmonary rehabilitation. The patient may be provided visual feedback from the display apparatus to improve their understanding of their lung physiology and their response to activity and breathing patterns.

[00150] In addition to patient monitoring, embodiments of the disclosure may also be used for monitoring patients undergoing assisted mechanical ventilation (such as invasive mechanical ventilation or bidirectional positive airway pressure (BiPAP) ventilation), including adjustment of ventilator settings and assistance in weaning a patient off ventilator assistance.

[00151] In addition to patient monitoring, embodiments of the disclosure may also be used for monitoring patients' responses to mechanical airway support (e.g., continuous positive airway pressure, or CPAP), including adjustment of settings and assessing response to therapy for patients with sleep-breathing disorders such as obstructive sleep apnea (OSA).

[00152] In addition to patient monitoring, embodiments of the disclosure may also be used for monitoring clinical response to invasive or minimally invasive lung procedures, such as lung volume reduction surgery or minimally-invasive endobronchial valve insertion.

[00153] Embodiments of the disclosure may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in FIG. 12A, the computing system (1200) may include one or more computer processors (1202), non-persistent storage (1204) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage (1206) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface (1212) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), and numerous other elements and functionalities.

[00154] The computer processor(s) (1202) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system (1200) may also include one or more input devices (1210), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.

[00155] The communication interface (1212) may include an integrated circuit for connecting the computing system (1200) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.

[00156] Further, the computing system (1200) may include one or more output devices (1208), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (1202), non-persistent storage (1204), and persistent storage (1206). Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms.

[00157] Software instructions in the form of computer readable program code to perform embodiments of the disclosure may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the disclosure.

[00158] The computing system (1200) in FIG. 12A may be connected to or be a part of a network. For example, as shown in FIG. 12B, the network (1220) may include multiple nodes (e.g., node X (1222), node Y (1224)). Each node may correspond to a computing system, such as the computing system shown in FIG. 12A, or a group of nodes combined may correspond to the computing system shown in FIG. 12A. By way of an example, embodiments of the disclosure may be implemented on a node of a distributed system that is connected to other nodes. By way of another example, embodiments of the disclosure may be implemented on a distributed computing system having multiple nodes, where each portion of the disclosure may be located on a different node within the distributed computing system. Further, one or more elements of the aforementioned computing system (1200) may be located at a remote location and connected to the other elements over a network.

[00159] Although not shown in FIG. 12B, the node may correspond to a blade in a server chassis that is connected to other nodes via a backplane. By way of another example, the node may correspond to a server in a data center. By way of another example, the node may correspond to a computer processor or micro core of a computer processor with shared memory and/or resources.

[00160] The nodes (e.g., node X (1222), node Y (1224)) in the network (1220) may be configured to provide services for a client device (1226). For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device (1226) and transmit responses to the client device (1226). The client device (1226) may be a computing system, such as the computing system shown in FIG. 12A. Further, the client device (1226) may include and/or perform all or a portion of one or more embodiments of the disclosure.

[00161] The computing system or group of computing systems described in FIG. 12A and 12B may include functionality to perform a variety of operations disclosed herein. For example, the computing system(s) may perform communication between processes on the same or different system. A variety of mechanisms, employing some form of active or passive communication, may facilitate the exchange of data between processes on the same device. Examples representative of these inter-process communications include, but are not limited to, the implementation of a file, a signal, a socket, a message queue, a pipeline, a semaphore, shared memory, message passing, and a memory-mapped file. Further details pertaining to a couple of these non limiting examples are provided below.

[00162] Based on the client-server networking model, sockets may serve as interfaces or communication channel endpoints enabling bidirectional data transfer between processes on the same device. Foremost, following the client-server networking model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from a server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy in handling other operations, may queue the connection request in a buffer until the server process is ready. An established connection informs the client process that communications may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is subsequently transmitted to the server process.

Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process then generates a reply including at least the requested data and transmits the reply to the client process. The data may be transferred, more commonly, as datagrams or a stream of characters (e.g., bytes).
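
The create/bind/listen/connect/request/reply sequence above can be sketched over the loopback interface; the `"GET frame"` request string and `"frame-data"` reply are hypothetical placeholders, and the server runs in a thread only so the exchange fits in one script:

```python
import socket
import threading

def serve_once(server_sock):
    """Accept a single connection, read a data request, send a reply."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)          # the client's data request
        if request == b"GET frame":        # hypothetical request format
            conn.sendall(b"frame-data")    # reply with the requested data

# Server: create the first socket object, bind it to an address, listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))              # port 0: OS picks a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client: create the second socket object and connect to the server.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"GET frame")               # transmit the data request
reply = client.recv(1024)                  # receive the reply
client.close()
server.close()
print(reply)  # b'frame-data'
```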

[00163] Shared memory refers to the allocation of virtual memory space in order to substantiate a mechanism for which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, only one authorized process may mount the shareable segment, other than the initializing process, at any given time.
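
The create/mount/attach pattern above can be sketched with Python's `multiprocessing.shared_memory`; for brevity both the initializing and the "authorized" role run in the same process here, whereas in practice the second attachment would happen in a different process using the segment's name:

```python
from multiprocessing import shared_memory

# Initializing process: create a named shareable segment and write to it.
segment = shared_memory.SharedMemory(create=True, size=16)
segment.buf[:5] = b"hello"

# An authorized process attaches to the same segment by name and reads
# the data written by the initializing process.
attached = shared_memory.SharedMemory(name=segment.name)
data = bytes(attached.buf[:5])

attached.close()
segment.close()
segment.unlink()  # free the segment once all processes are done with it
print(data)  # b'hello'
```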

[00164] Other techniques may be used to share data, such as the various data described in the present application, between processes without departing from the scope of the disclosure. The processes may be part of the same or different application and may execute on the same or different computing system.

[00165] Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the disclosure may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection.

[00166] By way of another example, a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network. For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL. In response to the request, the server may extract the data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data regarding the particular item, the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection. Further to the above example, the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device.

[00167] Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the disclosure, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system in FIG. 12A. First, the organizing pattern (e.g., grammar, schema, layout) of the data is determined, which may be based on one or more of the following: position (e.g., bit or column position, Nth token in a data stream, etc.), attribute (where the attribute is associated with one or more values), or a hierarchical/tree structure (consisting of layers of nodes at different levels of detail, such as in nested packet headers or nested document sections). Then, the raw, unprocessed stream of data symbols is parsed, in the context of the organizing pattern, into a stream (or layered structure) of tokens (where each token may have an associated token "type").

[00168] Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query provided to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
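The tokenization and extraction steps of paragraphs [00167] and [00168] may be sketched as follows, assuming a simple comma-delimited organizing pattern. All function and field names here are illustrative, not part of the claimed system.

```python
# Sketch of paragraphs [00167]-[00168]: parse a raw symbol stream into
# tokens under an assumed organizing pattern, then apply position-based
# and attribute/value-based extraction criteria. Names are hypothetical.

def tokenize(raw, delimiter=","):
    """Parse the raw stream into tokens per the organizing pattern."""
    return raw.split(delimiter)

def extract_by_position(tokens, positions):
    """Position-based criteria: keep tokens at the identified indices."""
    return [tokens[i] for i in positions]

def extract_by_attribute(records, attribute, value):
    """Attribute/value-based criteria: keep records whose attribute matches."""
    return [r for r in records if r.get(attribute) == value]

tokens = tokenize("id,42,celsius,21.5")
print(extract_by_position(tokens, [1, 3]))               # ['42', '21.5']
records = [{"unit": "celsius", "v": 21.5}, {"unit": "kelvin", "v": 294.6}]
print(extract_by_attribute(records, "unit", "celsius"))
```

A hierarchical/layered structure would be handled analogously, by walking nodes of the tree and keeping those matching the criteria.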

[00169] The extracted data may be used for further processing by the computing system. For example, the computing system of FIG. 12A, while performing one or more embodiments of the disclosure, may perform data comparison. Data comparison may be used to compare two or more data values (e.g., A, B). For example, one or more embodiments may determine whether A > B, A = B, A != B, A < B, etc. The comparison may be performed by submitting A, B, and an opcode specifying an operation related to the comparison into an arithmetic logic unit (ALU) (i.e., circuitry that performs arithmetic and/or bitwise logical operations on the two data values). The ALU outputs the numerical result of the operation and/or one or more status flags related to the numerical result. For example, the status flags may indicate whether the numerical result is a positive number, a negative number, zero, etc. By selecting the proper opcode and then reading the numerical results and/or status flags, the comparison may be executed. For example, in order to determine if A > B, B may be subtracted from A (i.e., A - B), and the status flags may be read to determine if the result is positive (i.e., if A > B, then A - B > 0). In one or more embodiments, B may be considered a threshold, and A is deemed to satisfy the threshold if A = B or if A > B, as determined using the ALU. In one or more embodiments of the disclosure, A and B may be vectors, and comparing A with B requires comparing the first element of vector A with the first element of vector B, the second element of vector A with the second element of vector B, etc. In one or more embodiments, if A and B are strings, the binary values of the strings may be compared.
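The subtract-and-inspect comparison of paragraph [00169] may be sketched as follows. The functions below are an illustrative software analogue of the ALU behavior described; they are not the claimed circuitry.

```python
# Sketch of paragraph [00169]: compute A - B and read sign "status
# flags" rather than comparing directly, plus the threshold test and
# the element-wise vector comparison. Function names are hypothetical.

def compare(a, b):
    """Return status flags for the result of A - B, as an ALU would."""
    diff = a - b
    return {"zero": diff == 0, "negative": diff < 0, "positive": diff > 0}

def satisfies_threshold(a, b):
    """A satisfies threshold B when A = B or A > B."""
    flags = compare(a, b)
    return flags["zero"] or flags["positive"]

def vectors_equal(va, vb):
    """Compare vectors element by element, as described above."""
    return len(va) == len(vb) and all(compare(x, y)["zero"] for x, y in zip(va, vb))

print(compare(7, 5))                  # positive flag set, so A > B
print(satisfies_threshold(5, 5))      # True: A = B meets the threshold
print(vectors_equal([1, 2], [1, 2]))  # True
```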

[00170] The computing system in FIG. 12A may implement and/or be connected to a data repository. For example, one type of data repository is a database. A database is a collection of information configured for ease of data retrieval, modification, re-organization, and deletion. A Database Management System (DBMS) is a software application that provides an interface for users to define, create, query, update, or administer databases.

[00171] The user, or software application, may submit a statement or query into the DBMS. Then the DBMS interprets the statement. The statement may be a select statement to request information, an update statement, a create statement, a delete statement, etc. Moreover, the statement may include parameters that specify data, or data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g., join, full join, count, average, etc.), sort (e.g., ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, or reference or index a file, for reading, writing, deletion, or any combination thereof, in responding to the statement. The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application.
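The statement flow of paragraphs [00170] and [00171] may be illustrated with Python's built-in sqlite3 module as a stand-in DBMS. The table and column names below are hypothetical and are not part of the claimed system.

```python
# Sketch of paragraphs [00170]-[00171]: create, insert, and select
# statements submitted to a DBMS, with a condition, a function (AVG),
# and a sort. Table/column names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")  # DBMS backed by non-persistent storage
conn.execute("CREATE TABLE frames (id INTEGER, impedance REAL)")  # create statement
conn.executemany("INSERT INTO frames VALUES (?, ?)",
                 [(1, 0.42), (2, 0.57), (3, 0.39)])
# Select statement with a condition (comparison operator) and a sort.
rows = conn.execute(
    "SELECT id, impedance FROM frames WHERE impedance > ? ORDER BY impedance DESC",
    (0.40,),
).fetchall()
print(rows)  # the DBMS returns the result(s) to the application
avg = conn.execute("SELECT AVG(impedance) FROM frames").fetchone()[0]
conn.close()
```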

[00172] The computing system of FIG. 12A may include functionality to provide raw and/or processed data, such as results of comparisons and other processing. For example, providing data may be accomplished through various presenting methods. Specifically, data may be provided through a user interface provided by a computing device. The user interface may include a GUI that displays information on a display device, such as a computer monitor or a touchscreen on a handheld computer device. The GUI may include various GUI widgets that organize what data is shown as well as how data is provided to a user. Furthermore, the GUI may provide data directly to the user, e.g., data provided as actual data values through text, or rendered by the computing device into a visual representation of the data, such as through visualizing a data model.

[00173] For example, a GUI may first obtain a notification from a software application requesting that a particular data object be provided within the GUI. Next, the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type. Then, the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type. Finally, the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type.
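The display flow of paragraph [00173] may be sketched as a lookup of rendering rules by data object type. The object types, rule table, and format strings below are hypothetical and purely illustrative.

```python
# Sketch of paragraph [00173]: determine a data object's type, look up
# the rules designated for displaying that type, and render the object's
# values under those rules. Types and rules here are hypothetical.

DISPLAY_RULES = {
    "percentage": lambda v: f"{v:.1f} %",
    "currency":   lambda v: f"${v:,.2f}",
}

def render(data_object):
    """Find the rule for the object's type and render its value."""
    rule = DISPLAY_RULES[data_object["type"]]  # rules designated for the type
    return rule(data_object["value"])          # visual representation

print(render({"type": "percentage", "value": 98.0}))
print(render({"type": "currency", "value": 1234.5}))
```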

[00174] Data may also be provided through various audio methods. In particular, data may be rendered into an audio format and provided as sound through one or more speakers operably connected to a computing device.

[00175] Data may also be provided to a user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be provided to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data.

[00176] The above description of functions presents only a few examples of functions performed by the computing system of FIG. 12A and the nodes and/or client devices in FIG. 12B. Other functions may be performed using one or more embodiments of the disclosure.

[00177] While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein.