


Title:
INTRACARDIAC ECG NOISE DETECTION AND REDUCTION
Document Type and Number:
WIPO Patent Application WO/2022/079621
Kind Code:
A1
Abstract:
A system and method for detecting and reducing noise in an ECG environment is disclosed. The system and method include inputting data regarding the ECG and ECG noise into a database, the database including data on other ECG patients and their respective signals, modeling the noise of the ECG in a quiet environment to provide samples to train a model to identify noise in an ECG system, and identifying the noise signals within the ECG data and removing the noise from the signals. The noise may include per site noise signals, additive noises, contact noise and deflection noise. The quiet environment may include an aquarium.

Inventors:
AMIT MATITYAHU (IL)
GOLDBERG STANISLAV (IL)
AMOS YARIV AVRAHAM (IL)
BOTZER LIOR (IL)
Application Number:
PCT/IB2021/059386
Publication Date:
April 21, 2022
Filing Date:
October 13, 2021
Assignee:
BIOSENSE WEBSTER ISRAEL LTD (IL)
International Classes:
A61B5/33; A61B5/00; A61B5/346; A61B5/367
Foreign References:
US20150150512A12015-06-04
US20140278171A12014-09-18
Other References:
ANONYMOUS: "CARTONET", 9 June 2020 (2020-06-09), pages 1 - 2, XP055878792, Retrieved from the Internet [retrieved on 20220113]
Attorney, Agent or Firm:
KLIGLER & ASSOCIATES PATENT ATTORNEYS LTD. (IL)
Claims:

CLAIMS

What is claimed is:

1. A method for detecting and reducing noise in an ECG laboratory environment, the method comprising: measuring ECG signals and ECG noise into a database, the database including data including ECG signals and ECG noise on other ECG patients measured in a laboratory environment; modeling the noise of the ECG in the laboratory environment to provide samples to identify the ECG noise in the laboratory environment; applying a noise filter for the laboratory environment based on the modeled noise in the lab to remove the noise from the signals; and providing a clean version of the ECG signals.

2. The method of claim 1 wherein the ECG noise includes per site noise signals.

3. The method of claim 1 wherein the ECG noise includes additive noises.

4. The method of claim 1 wherein the ECG noise includes contact noise.

5. The method of claim 1 wherein the ECG noise includes deflection noise.

6. The method of claim 1 further comprising building a filter based on the noise in the ECG environment.

7. The method of claim 6 wherein the filter is applied to remove the noise in the ECG environment.

8. The method of claim 6 wherein the filter is applied to cancel the noise in the ECG environment.

9. The method of claim 1 further comprising exporting the ECG signal once the noise is removed.

10. A system for detecting and reducing noise in an ECG environment, the system comprising: a plurality of mapping catheters capable of measuring signals in an ECG operating in the ECG environment; a plurality of penta-ray catheters capable of measuring signals in the ECG operating in the ECG environment; and a signal processor and database cooperatively operating to process and record signals measured on at least a portion of the plurality of mapping catheters and plurality of penta-ray catheters, at least a portion of the plurality of mapping catheters and plurality of penta-ray catheters inputting data regarding the ECG and ECG noise into the database, the database including data on other ECG patients and their respective signals operating in the ECG environment; the processor modeling the noise of the ECG in a quiet environment to provide samples to identify noise in an ECG system; and the processor identifying the noise signals within the ECG data and removing the noise from the signals.

11. The system of claim 10 wherein the noise includes additive noises.

12. The system of claim 10 wherein the noise includes contact noise.

13. The system of claim 10 wherein the noise includes deflection noise.

14. The system of claim 10 wherein the quiet environment includes an aquarium.

15. The system of claim 10 wherein the identifying includes per site noise.

16. The system of claim 10 further comprising building a filter based on the noise in the ECG environment.

17. The system of claim 16 wherein the filter is applied to remove the noise in the ECG environment.

18. The system of claim 16 wherein the filter is applied to cancel the noise in the ECG environment.

19. The system of claim 10 further comprising exporting the ECG signals once the noise is removed.

Description:
INTRACARDIAC ECG NOISE DETECTION AND REDUCTION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 63/091,186, filed October 13, 2020, the contents of which are incorporated herein by reference.

FIELD OF INVENTION

[0002] The present invention is related to artificial intelligence and machine learning associated with intracardiac electrocardiogram (ECG) noise detection and reduction.

BACKGROUND

[0003] Electrical signals such as electrocardiogram (ECG) and intracardiac ECG signals are often detected prior to and/or during a cardiac procedure. For example, ECG signals and intracardiac ECG signals can be used to identify potential locations of a heart where arrhythmia-causing signals originate. Generally, an ECG or intracardiac ECG is a signal that describes the electrical activity of the heart. ECG signals and intracardiac ECG signals may also be used to map portions of a heart. When physicians use an ECG or intracardiac ECG to study heart activity, the interference must be accounted for in order to isolate the electrical signals from the heart. Such interference may also result from the processing of areas of the signal with sharp changes, peaks, and/or pacing signals including areas of high frequency and harmonics. Interference obscures the accuracy of the ECG and intracardiac ECG readings. Therefore, a need exists for improved methods of identifying such features so that their effects may be removed from an electrical signal study, thereby allowing the electrical signals of the heart to be viewed.
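Although the disclosure does not specify an implementation, one common form of the interference described above, slow baseline wander, can be sketched in a few lines. The function names and window size below are illustrative assumptions only, not part of the invention.

```python
# Illustrative sketch (not the patented method): removing slow baseline
# wander from a sampled ECG trace by subtracting a moving average.
# The window size is a hypothetical choice for illustration.

def moving_average(signal, window):
    """Centered moving average; edges use only the available neighbors."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def remove_baseline_wander(signal, window=51):
    """Subtract the slow-moving trend, keeping faster cardiac content."""
    baseline = moving_average(signal, window)
    return [s - b for s, b in zip(signal, baseline)]
```

On a slowly drifting trace, the subtraction flattens the drift while short, sharp deflections (such as a QRS complex) pass through largely unchanged.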

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings, wherein like reference numerals in the figures indicate like elements, and wherein:

[0005] FIG. 1 is a block diagram of an example system for remotely monitoring and communicating patient biometrics;

[0006] FIG. 2 is a system diagram of an example of a computing environment in communication with network 120;

[0007] FIG. 3 is a block diagram of an example device in which one or more features of the disclosure can be implemented;

[0008] FIG. 4 illustrates a graphical depiction of an artificial intelligence system incorporating the example device of FIG. 3;

[0009] FIG. 5 illustrates a method performed in the artificial intelligence system of FIG. 4;

[0010] FIG. 6 illustrates an example of the probabilities of a naive Bayes calculation;

[0011] FIG. 7 illustrates an exemplary decision tree;

[0012] FIG. 8 illustrates an exemplary random forest classifier;

[0013] FIG. 9 illustrates an exemplary logistic regression;

[0014] FIG. 10 illustrates an exemplary support vector machine;

[0015] FIG. 11 illustrates an exemplary linear regression model;

[0016] FIG. 12 illustrates an exemplary K-means clustering;

[0017] FIG. 13 illustrates an exemplary ensemble learning algorithm;

[0018] FIG. 14 illustrates an exemplary neural network;

[0019] FIG. 15 illustrates a hardware-based neural network;

[0020] FIG. 16A illustrates an ECG signal that contains a P wave (due to atrial depolarization), a QRS complex (due to atrial repolarization and ventricular depolarization) and a T wave (due to ventricular repolarization);

[0021] FIG. 16B illustrates a frequency content of the baseline wander;

[0022] FIG. 16C shows an ECG signal interfered by an EMG noise;

[0023] FIG. 16D illustrates examples of power line noise;

[0024] FIG. 16E illustrates a signal during ventricle activity including baseline wander;

[0025] FIG. 16F illustrates the signal of FIG. 16E after baseline wander removal;

[0026] FIG. 16G illustrates an example of high frequency noise and baseline wander for bipolar measurements;

[0027] FIG. 17A is a diagram of an exemplary system in which one or more features of the disclosure subject matter can be implemented;

[0028] FIG. 17B illustrates an exemplary catheter placed in the right atria with bipolar intracardiac ECG signals;

[0029] FIG. 18 is a depiction of an illustration of a lab;

[0030] FIG. 19 illustrates signals and their respective frequencies that may be found within a specific lab;

[0031] FIG. 20 illustrates a method for dealing with the described noise signals;

[0032] FIG. 21 illustrates a method performed to denoise signals for a lab (A and B);

[0033] FIG. 22 illustrates contact noise examples recorded in a controlled aquarium environment;

[0034] FIG. 23A illustrates deflection noise examples recorded in a controlled aquarium environment;

[0035] FIG. 23B illustrates deflection noise examples of FIG. 23A with an increased x-axis to zoom in on features;

[0036] FIG. 24 illustrates additional deflection noise examples;

[0037] FIG. 25 illustrates a contact and deflection noise model;

[0038] FIG. 26 illustrates a CNN inception model; and

[0039] FIG. 27 illustrates a second learning phase that may be implemented to capture the methods described herein.

DETAILED DESCRIPTION

[0040] Described herein are systems and methods for identifying interference features so that the effects of such features may be removed from an electrical signal study, thereby allowing the electrical signals of the heart to be viewed.

[0041] FIG. 1 is a block diagram of an example system 100 for remotely monitoring and communicating patient biometrics (i.e., patient data). In the example illustrated in FIG. 1, the system 100 includes a patient biometric monitoring and processing apparatus 102 associated with a patient 104, a local computing device 106, a remote computing system 108, a first network 110 and a second network 120.

[0042] According to an embodiment, a monitoring and processing apparatus 102 may be an apparatus that is internal to the patient’s body (e.g., subcutaneously implantable). The monitoring and processing apparatus 102 may be inserted into a patient via any applicable manner including oral ingestion, injection, surgical insertion via a vein or artery, an endoscopic procedure, or a laparoscopic procedure.

[0043] According to an embodiment, a monitoring and processing apparatus 102 may be an apparatus that is external to the patient. For example, as described in more detail below, the monitoring and processing apparatus 102 may include an attachable patch (e.g., that attaches to a patient’s skin). The monitoring and processing apparatus 102 may also include a catheter with one or more electrodes, a probe, a blood pressure cuff, a weight scale, a bracelet or smart watch biometric tracker, a glucose monitor, a continuous positive airway pressure (CPAP) machine or virtually any device which may provide an input concerning the health or biometrics of the patient.

[0044] According to an embodiment, a monitoring and processing apparatus 102 may include both components that are internal to the patient and components that are external to the patient.

[0045] A single monitoring and processing apparatus 102 is shown in FIG. 1. Example systems may, however, include a plurality of patient biometric monitoring and processing apparatuses. A patient biometric monitoring and processing apparatus may be in communication with one or more other patient biometric monitoring and processing apparatuses. Additionally, or alternatively, a patient biometric monitoring and processing apparatus may be in communication with the network 110.

[0046] One or more monitoring and processing apparatuses 102 may acquire patient biometric data (e.g., electrical signals, blood pressure, temperature, blood glucose level or other biometric data) and receive at least a portion of the patient biometric data representing the acquired patient biometrics and additional information associated with acquired patient biometrics from one or more other monitoring and processing apparatuses 102. The additional information may be, for example, diagnosis information and/or additional information obtained from an additional device such as a wearable device. Each monitoring and processing apparatus 102 may process data, including its own acquired patient biometrics as well as data received from one or more other monitoring and processing apparatuses 102.

[0047] In FIG. 1, network 110 is an example of a short-range network (e.g., local area network (LAN), or personal area network (PAN)). Information may be sent, via short-range network 110, between a monitoring and processing apparatus 102 and local computing device 106 using any one of various short-range wireless communication protocols, such as Bluetooth, WiFi, Zigbee, Z-Wave, near field communications (NFC), ultra-wideband, or infrared (IR).

[0048] Network 120 may be a wired network, a wireless network or include one or more wired and wireless networks. For example, a network 120 may be a long-range network (e.g., wide area network (WAN), the internet, or a cellular network). Information may be sent, via network 120, using any one of various long-range wireless communication protocols (e.g., TCP/IP, HTTP, 3G, 4G/LTE, or 5G/New Radio).

[0049] The patient monitoring and processing apparatus 102 may include a patient biometric sensor 112, a processor 114, a user input (UI) sensor 116, a memory 118, and a transmitter-receiver (i.e., transceiver) 122. The patient monitoring and processing apparatus 102 may continually or periodically monitor, store, process and communicate, via network 110, any number of various patient biometrics. Examples of patient biometrics include electrical signals (e.g., ECG signals and brain biometrics), blood pressure data, blood glucose data and temperature data. The patient biometrics may be monitored and communicated for treatment across any number of various diseases, such as cardiovascular diseases (e.g., arrhythmias, cardiomyopathy, and coronary artery disease) and autoimmune diseases (e.g., type I and type II diabetes).

[0050] Patient biometric sensor 112 may include, for example, one or more sensors configured to sense a type of patient biometric. For example, patient biometric sensor 112 may include an electrode configured to acquire electrical signals (e.g., heart signals, brain signals or other bioelectrical signals), a temperature sensor, a blood pressure sensor, a blood glucose sensor, a blood oxygen sensor, a pH sensor, an accelerometer and a microphone.

[0051] As described in more detail below, patient biometric monitoring and processing apparatus 102 may be an ECG monitor for monitoring ECG signals of a heart. The patient biometric sensor 112 of the ECG monitor may include one or more electrodes for acquiring ECG signals. The ECG signals may be used for treatment of various cardiovascular diseases.

[0052] In another example, the patient biometric monitoring and processing apparatus 102 may be a continuous glucose monitor (CGM) for monitoring blood glucose levels of a patient on a continual basis for treatment of various diseases, such as type I and type II diabetes. The CGM may include a subcutaneously disposed electrode, which may monitor blood glucose levels from interstitial fluid of the patient. The CGM may be, for example, a component of a closed-loop system in which the blood glucose data is sent to an insulin pump for calculated delivery of insulin without user intervention.

[0053] Transceiver 122 may include a separate transmitter and receiver. Alternatively, transceiver 122 may include a transmitter and receiver integrated into a single device.

[0054] Processor 114 may be configured to store patient data, such as patient biometric data acquired by patient biometric sensor 112, in memory 118, and communicate the patient data across network 110 via a transmitter of transceiver 122. Data from one or more other monitoring and processing apparatus 102 may also be received by a receiver of transceiver 122, as described in more detail below.

[0055] According to an embodiment, the monitoring and processing apparatus 102 includes UI sensor 116 which may be, for example, a piezoelectric sensor or a capacitive sensor configured to receive a user input, such as a tapping or touching. For example, UI sensor 116 may be controlled to implement a capacitive coupling, in response to tapping or touching a surface of the monitoring and processing apparatus 102 by the patient 104. Gesture recognition may be implemented via any one of various capacitive types, such as resistive capacitive, surface capacitive, projected capacitive, surface acoustic wave, piezoelectric and infra-red touching. Capacitive sensors may be disposed at a small area or over a length of the surface such that the tapping or touching of the surface activates the monitoring device.

[0056] As described in more detail below, the processor 114 may be configured to respond selectively to different tapping patterns of the capacitive sensor (e.g., a single tap or a double tap), which may be the UI sensor 116, such that different tasks of the patch (e.g., acquisition, storing, or transmission of data) may be activated based on the detected pattern. In some embodiments, audible feedback may be given to the user from processing apparatus 102 when a gesture is detected.
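The tap-pattern dispatch described above can be sketched as a small lookup table; the task names and tap counts below are hypothetical illustrations, not drawn from the disclosure.

```python
# Hypothetical sketch of dispatching device tasks from detected tap
# patterns; the mapping is an invented example for illustration.

TAP_ACTIONS = {
    1: "acquire",   # single tap: start data acquisition
    2: "store",     # double tap: store buffered data
    3: "transmit",  # triple tap: transmit data to the gateway
}

def handle_taps(tap_count):
    """Return the task activated by a detected tap pattern, if any."""
    return TAP_ACTIONS.get(tap_count, "ignore")
```

An unrecognized pattern falls through to "ignore", matching the idea that only configured tap patterns activate tasks.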

[0057] The local computing device 106 of system 100 is in communication with the patient biometric monitoring and processing apparatus 102 and may be configured to act as a gateway to the remote computing system 108 through the second network 120. The local computing device 106 may be, for example, a smartphone, smartwatch, tablet or other portable smart device configured to communicate with other devices via network 120. Alternatively, the local computing device 106 may be a stationary or standalone device, such as a stationary base station including, for example, modem and/or router capability, a desktop or laptop computer using an executable program to communicate information between the processing apparatus 102 and the remote computing system 108 via the PC's radio module, or a USB dongle. Patient biometrics may be communicated between the local computing device 106 and the patient biometric monitoring and processing apparatus 102 using a short-range wireless technology standard (e.g., Bluetooth, WiFi, ZigBee, Z-wave and other short-range wireless standards) via the short-range wireless network 110, such as a local area network (LAN) (e.g., a personal area network (PAN)). In some embodiments, the local computing device 106 may also be configured to display the acquired patient electrical signals and information associated with the acquired patient electrical signals, as described in more detail below.

[0058] In some embodiments, remote computing system 108 may be configured to receive at least one of the monitored patient biometrics and information associated with the monitored patient via network 120, which is a long-range network. For example, if the local computing device 106 is a mobile phone, network 120 may be a wireless cellular network, and information may be communicated between the local computing device 106 and the remote computing system 108 via a wireless technology standard, such as any of the wireless technologies mentioned above. As described in more detail below, the remote computing system 108 may be configured to provide (e.g., visually display and/or aurally provide) the at least one of the patient biometrics and the associated information to a healthcare professional (e.g., a physician).

[0059] FIG. 2 is a system diagram of an example of a computing environment 200 in communication with network 120. In some instances, the computing environment 200 is incorporated in a public cloud computing platform (such as Amazon Web Services or Microsoft Azure), a hybrid cloud computing platform (such as HP Enterprise OneSphere) or a private cloud computing platform.

[0060] As shown in FIG. 2, computing environment 200 includes remote computing system 108 (hereinafter computer system), which is one example of a computing system upon which embodiments described herein may be implemented.

[0061] The remote computing system 108 may, via processors 220, which may include one or more processors, perform various functions. The functions may include analyzing monitored patient biometrics and the associated information and, according to physician-determined or algorithm driven thresholds and parameters, providing (e.g., via display 266) alerts, additional information, or instructions. As described in more detail below, the remote computing system 108 may be used to provide (e.g., via display 266) healthcare personnel (e.g., a physician) with a dashboard of patient information, such that such information may enable healthcare personnel to identify and prioritize patients having more critical needs than others.

[0062] As shown in FIG. 2, the computer system 210 may include a communication mechanism such as a bus 221 or other communication mechanism for communicating information within the computer system 210. The computer system 210 further includes one or more processors 220 coupled with the bus 221 for processing the information. The processors 220 may include one or more CPUs, GPUs, or any other processor known in the art.

[0063] The computer system 210 also includes a system memory 230 coupled to the bus 221 for storing information and instructions to be executed by processors 220. The system memory 230 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only system memory (ROM) 231 and/or random-access memory (RAM) 232. The system memory RAM 232 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The system memory ROM 231 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 230 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 220. A basic input/output system 233 (BIOS) may contain routines to transfer information between elements within computer system 210, such as during start-up, that may be stored in system memory ROM 231. RAM 232 may comprise data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 220. System memory 230 may additionally include, for example, operating system 234, application programs 235, other program modules 236 and program data 237.

[0064] The illustrated computer system 210 also includes a disk controller 240 coupled to the bus 221 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 241 and a removable media drive 242 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid-state drive). The storage devices may be added to the computer system 210 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).

[0065] The computer system 210 may also include a display controller 265 coupled to the bus 221 to control a monitor or display 266, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The illustrated computer system 210 includes a user input interface 260 and one or more input devices, such as a keyboard 262 and a pointing device 261, for interacting with a computer user and providing information to the processor 220. The pointing device 261, for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor 220 and for controlling cursor movement on the display 266. The display 266 may provide a touch screen interface that may allow input to supplement or replace the communication of direction information and command selections by the pointing device 261 and/or keyboard 262.

[0066] The computer system 210 may perform a portion or each of the functions and methods described herein in response to the processors 220 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 230. Such instructions may be read into the system memory 230 from another computer readable medium, such as a hard disk 241 or a removable media drive 242. The hard disk 241 may contain one or more data stores and data files used by embodiments described herein. Data store contents and data files may be encrypted to improve security. The processors 220 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 230. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.

[0067] As stated above, the computer system 210 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments described herein and for containing data structures, tables, records, or other data described herein. The term computer readable medium as used herein refers to any non-transitory, tangible medium that participates in providing instructions to the processor 220 for execution. A computer readable medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 241 or removable media drive 242. Non-limiting examples of volatile media include dynamic memory, such as system memory 230. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 221. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.

[0068] The computing environment 200 may further include the computer system 210 operating in a networked environment using logical connections to local computing device 106 and one or more other devices, such as a personal computer (laptop or desktop), mobile devices (e.g., patient mobile devices), a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 210. When used in a networking environment, computer system 210 may include modem 272 for establishing communications over a network 120, such as the Internet. Modem 272 may be connected to system bus 221 via network interface 270, or via another appropriate mechanism.

[0069] Network 120, as shown in FIGs. 1 and 2, may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 210 and other computers (e.g., local computing device 106).

[0070] FIG. 3 is a block diagram of an example device 300 in which one or more features of the disclosure can be implemented. The device 300 may be local computing device 106, for example. The device 300 can include, for example, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer. The device 300 includes a processor 302, a memory 304, a storage device 306, one or more input devices 308, and one or more output devices 310. The device 300 can also optionally include an input driver 312 and an output driver 314. It is understood that the device 300 can include additional components not shown in FIG. 3 including an artificial intelligence accelerator.

[0071] In various alternatives, the processor 302 includes a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core can be a CPU or a GPU. In various alternatives, the memory 304 is located on the same die as the processor 302, or is located separately from the processor 302. The memory 304 includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.

[0072] The storage device 306 includes a fixed or removable storage means, for example, a hard disk drive, a solid-state drive, an optical disk, or a flash drive. The input devices 308 include, without limitation, a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices 310 include, without limitation, a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).

[0073] The input driver 312 communicates with the processor 302 and the input devices 308, and permits the processor 302 to receive input from the input devices 308. The output driver 314 communicates with the processor 302 and the output devices 310, and permits the processor 302 to send output to the output devices 310. It is noted that the input driver 312 and the output driver 314 are optional components, and that the device 300 will operate in the same manner if the input driver 312 and the output driver 314 are not present. The output driver 314 includes an accelerated processing device (“APD”) 316 which is coupled to a display device 318. The APD accepts compute commands and graphics rendering commands from processor 302, processes those compute and graphics rendering commands, and provides pixel output to display device 318 for display. As described in further detail below, the APD 316 includes one or more parallel processing units to perform computations in accordance with a single-instruction-multiple-data (“SIMD”) paradigm. Thus, although various functionality is described herein as being performed by or in conjunction with the APD 316, in various alternatives, the functionality described as being performed by the APD 316 is additionally or alternatively performed by other computing devices having similar capabilities that are not driven by a host processor (e.g., processor 302) and provide graphical output to a display device 318. For example, it is contemplated that any processing system that performs processing tasks in accordance with a SIMD paradigm may perform the functionality described herein. Alternatively, it is contemplated that computing systems that do not perform processing tasks in accordance with a SIMD paradigm perform the functionality described herein.

[0074] FIG. 4 illustrates a graphical depiction of an artificial intelligence system 400 incorporating the example device of FIG. 3.
System 400 includes data 410, a machine 420, a model 430, a plurality of outcomes 440 and underlying hardware 450. System 400 operates by using the data 410 to train the machine 420 while building a model 430 to enable a plurality of outcomes 440 to be predicted. The system 400 may operate with respect to hardware 450. In such a configuration, the data 410 may be related to hardware 450 and may originate with apparatus 102, for example. For example, the data 410 may be on-going data, or output data associated with hardware 450. The machine 420 may operate as the controller or data collection associated with the hardware 450, or be associated therewith. The model 430 may be configured to model the operation of hardware 450 and model the data 410 collected from hardware 450 in order to predict the outcome achieved by hardware 450. Using the outcome 440 that is predicted, hardware 450 may be configured to provide a certain desired outcome 440 from hardware 450.
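The data-to-machine-to-model-to-outcome flow of system 400 can be sketched as a toy pipeline. The midpoint-threshold "model" and the invented sample data below are illustrative assumptions only, not the disclosed implementation.

```python
# Minimal sketch of the data -> machine -> model -> outcome flow of
# system 400, using a toy threshold "model" learned from labeled pairs.
# All values and names are invented for illustration.

def train(data):
    """'Machine' step: learn a decision threshold from (value, label) pairs."""
    pos = [x for x, label in data if label]
    neg = [x for x, label in data if not label]
    # Model: midpoint between the class means.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(model, x):
    """Outcome step: label new data; here the positive class has the lower mean."""
    return x < model

# Invented hardware measurements (e.g., temperatures) with outcomes.
data = [(98.0, True), (98.4, True), (103.0, False), (102.5, False)]
model = train(data)
```

With the invented data, `train` places the threshold midway between the two class means, and `predict` then maps a new measurement to a predicted outcome.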

[0075] FIG. 5 illustrates a method 500 performed in the artificial intelligence system of FIG. 4. Method 500 includes collecting data from the hardware at step 510. This data may include currently collected, historical or other data from the hardware. For example, this data may include measurements during a surgical procedure and may be associated with the outcome of the procedure. For example, the temperature of a heart may be collected and correlated with the outcome of a heart procedure.

[0076] At step 520, method 500 includes training a machine on the hardware. The training may include an analysis and correlation of the data collected in step 510. For example, in the case of the heart, the machine may be trained on the temperature and outcome data to determine whether a correlation or link exists between the temperature of the heart during the procedure and the outcome.

[0077] At step 530, method 500 includes building a model on the data associated with the hardware. Building a model may include physical hardware or software modeling, algorithmic modeling, and the like, as will be described below. This modeling may seek to represent the data that has been collected and trained.

[0078] At step 540, method 500 includes predicting the outcomes of the model associated with the hardware. This prediction of the outcome may be based on the trained model. For example, in the case of the heart, if a heart temperature between 97.7 and 100.2 during the procedure produces a positive result from the procedure, the outcome can be predicted in a given procedure based on the temperature of the heart during the procedure. While this model is rudimentary, it is provided for exemplary purposes and to increase understanding of the present invention.

[0079] The present system and method operate to train the machine, build the model, and predict outcomes using algorithms. These algorithms may be used to solve the trained model and predict outcomes associated with the hardware. These algorithms may be divided generally into classification, regression, and clustering algorithms.

[0080] For example, a classification algorithm is used in the situation where the dependent variable, which is the variable being predicted, is divided into classes, and the task is predicting a class, the dependent variable, for a given input. Thus, a classification algorithm is used to predict an outcome from a set number of fixed, predefined outcomes. Classification algorithms may include naive Bayes algorithms, decision trees, random forest classifiers, logistic regressions, support vector machines and k nearest neighbors.

[0081] Generally, a naive Bayes algorithm follows the Bayes theorem and follows a probabilistic approach. As would be understood, other probability-based algorithms may also be used, and generally operate using similar probabilistic principles to those described below for the exemplary naive Bayes algorithm.

[0082] FIG. 6 illustrates an example of the probabilities of a naive Bayes calculation. The probability approach of Bayes theorem essentially means that, instead of jumping straight into the data, the algorithm has a set of prior probabilities for each of the classes for the target. After the data is entered, the naive Bayes algorithm may update the prior probabilities to form a posterior probability. This is given by the formula:

posterior = (prior × likelihood) / evidence

[0083] This naive Bayes algorithm, and Bayes algorithms generally, may be useful when needing to predict whether an input belongs to a given list of n classes or not. The probabilistic approach may be used because the probabilities for all of the n classes will be quite low.

[0084] For example, as illustrated in FIG. 6, whether a person plays golf depends on factors including the weather outside, shown in a first data set 610. The first data set 610 illustrates the weather in a first column and an outcome of playing associated with that weather in a second column. In the frequency table 620, the frequencies with which certain events occur are generated. In frequency table 620, the frequency of a person playing or not playing golf in each of the weather conditions is determined. From there, a likelihood table is compiled to generate initial probabilities. For example, the probability of the weather being overcast is 0.29 while the general probability of playing is 0.64.

[0085] The posterior probabilities may be generated from the likelihood table 630. These posterior probabilities may be configured to answer questions about weather conditions and whether golf is played in those weather conditions. For example, the probability of it being sunny outside and golf being played may be set forth by the Bayesian formula:

P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)

According to likelihood table 630:

P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P(Yes) = 9/14 = 0.64.

Therefore, P(Yes | Sunny) = 0.33 * 0.64 / 0.36, or approximately 0.60 (60%).
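The calculation above can be sketched in a few lines of Python. The counts are those recited from the tables of FIG. 6; the helper name bayes_posterior is illustrative only and not part of the disclosed system.

```python
# Posterior P(Yes | Sunny) from the golf frequency table of FIG. 6:
# 9 "Yes" days out of 14, of which 3 were sunny, and 5 sunny days in total.

def bayes_posterior(likelihood, prior, evidence):
    """posterior = (likelihood * prior) / evidence"""
    return likelihood * prior / evidence

p_sunny_given_yes = 3 / 9    # P(Sunny | Yes)
p_yes = 9 / 14               # P(Yes)
p_sunny = 5 / 14             # P(Sunny)

p_yes_given_sunny = bayes_posterior(p_sunny_given_yes, p_yes, p_sunny)
print(round(p_yes_given_sunny, 2))  # 0.6
```

Note that the exact value is 3/5; the 0.60 figure above comes from carrying the rounded intermediate probabilities.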

[0086] Generally, a decision tree is a flowchart-like tree structure where each external node denotes a test on an attribute and each branch represents the outcome of that test. The leaf nodes contain the actual predicted labels. The decision tree begins from the root of the tree, with attribute values being compared until a leaf node is reached. A decision tree can be used as a classifier when handling high dimensional data and when little time has been spent on data preparation. Decision trees may take the form of a simple decision tree, a linear decision tree, an algebraic decision tree, a deterministic decision tree, a randomized decision tree, a nondeterministic decision tree, and a quantum decision tree. An exemplary decision tree is provided below in FIG. 7.

[0087] FIG. 7 illustrates a decision tree, along the same structure as the Bayes example above, in deciding whether to play golf. In the decision tree, the first node 710 examines the weather providing sunny 712, overcast 714, and rain 716 as the choices to progress down the decision tree. If the weather is sunny, the leg of the tree is followed to a second node 720 examining the temperature. The temperature at node 720 may be high 722 or normal 724, in this example. If the temperature at node 720 is high 722, then the predicted outcome of “No” 723 golf occurs. If the temperature at node 720 is normal 724, then the predicted outcome of “Yes” 725 golf occurs.

[0088] Further, from the first node 710, an outcome of overcast 714 results in “Yes” 715 golf.

[0089] From the first node weather 710, an outcome of rain 716 results in the third node 730 (again) examining temperature. If the temperature at third node 730 is normal 732, then “Yes” 733 golf is played. If the temperature at third node 730 is low 734, then “No” 735 golf is played.

[0090] From this decision tree, a golfer plays golf if the weather is overcast 715, in normal-temperature sunny weather 725, and in normal-temperature rainy weather 733, while the golfer does not play in high-temperature sunny weather 723 or low-temperature rainy weather 735.
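The tree of FIG. 7 can be expressed as a handful of nested conditionals. This is a minimal sketch of the figure's decision logic only, not the claimed implementation; the function name play_golf is hypothetical.

```python
# The golf decision tree of FIG. 7 written as nested conditionals.
def play_golf(weather, temperature):
    if weather == "overcast":                # node 710 -> outcome "Yes" 715
        return "Yes"
    if weather == "sunny":                   # node 720 examines temperature
        return "No" if temperature == "high" else "Yes"
    if weather == "rain":                    # node 730 examines temperature
        return "Yes" if temperature == "normal" else "No"
    raise ValueError("unknown weather")

print(play_golf("sunny", "normal"))  # Yes
print(play_golf("rain", "low"))      # No
```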

[0091] A random forest classifier is a committee of decision trees, where each decision tree has been fed a subset of the attributes of data and predicts on the basis of that subset. The mode of the actual predicted values of the decision trees is considered to provide the ultimate random forest answer. The random forest classifier, generally, alleviates overfitting, which is present in a standalone decision tree, leading to a much more robust and accurate classifier.

[0092] FIG. 8 illustrates an exemplary random forest classifier for classifying the color of a garment. As illustrated in FIG. 8, the random forest classifier includes five decision trees 8101, 8102, 8103, 8104, and 8105 (collectively or generally referred to as decision trees 810). Each of the trees is designed to classify the color of the garment. A discussion of each of the trees and decisions made is not provided, as each individual tree generally operates as the decision tree of FIG. 7. In the illustration, three (8101, 8102, 8104) of the five trees determine that the garment is blue, while one determines the garment is green (8103) and the remaining tree determines the garment is red (8105). The random forest takes these actual predicted values of the five trees and calculates the mode of the actual predicted values to provide the random forest answer that the garment is blue.

[0093] Logistic Regression is another algorithm for binary classification tasks. Logistic regression is based on the logistic function, also called the sigmoid function. This S-shaped curve can take any real-valued number and map it between 0 and 1 asymptotically approaching those limits. The logistic model may be used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead or healthy/sick. This can be extended to model several classes of events such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1 with the sum of the probabilities adding to one.
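The logistic (sigmoid) function described above has a direct one-line implementation. This is a sketch of the mathematical function only, not of any particular classifier.

```python
import math

def sigmoid(x):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# The curve passes through 0.5 at x = 0 and asymptotically approaches 0 and 1.
print(sigmoid(0))            # 0.5
print(round(sigmoid(6), 3))  # 0.998
```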

[0094] In the logistic model, the log-odds (the logarithm of the odds) for the value labeled "1" is a linear combination of one or more independent variables ("predictors"); the independent variables can each be a binary variable (two classes, coded by an indicator variable) or a continuous variable (any real value). The corresponding probability of the value labeled "1" can vary between 0 (certainly the value "0") and 1 (certainly the value " 1"), hence the labeling; the function that converts log-odds to probability is the logistic function, hence the name. The unit of measurement for the log-odds scale is called a logit, from logistic unit, hence the alternative names. Analogous models with a different sigmoid function instead of the logistic function can also be used, such as the probit model; the defining characteristic of the logistic model is that increasing one of the independent variables multiplicatively scales the odds of the given outcome at a constant rate, with each independent variable having its own parameter; for a binary dependent variable this generalizes the odds ratio.

[0095] In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier.

[0096] FIG. 9 illustrates an exemplary logistic regression. This exemplary logistic regression enables the prediction of an outcome based on a set of variables. For example, based on a person’s grade point average, an outcome of being accepted to a school may be predicted. The past history of grade point averages and the relationship with acceptance enables the prediction to occur. The logistic regression of FIG. 9 enables the analysis of the grade point average variable 920 to predict the outcome 910 defined by 0 to 1. At the low end 930 of the S-shaped curve, the grade point average 920 predicts an outcome 910 of not being accepted, while at the high end 940 of the S-shaped curve, the grade point average 920 predicts an outcome 910 of being accepted. Logistic regression may be used to predict house values, customer lifetime value in the insurance sector, etc.

[0097] A support vector machine (SVM) may be used to sort the data with the margins between two classes as far apart as possible. This is called maximum margin separation. The SVM may account for the support vectors while plotting the hyperplane, unlike linear regression which uses the entire dataset for that purpose.

[0098] FIG. 10 illustrates an exemplary support vector machine. In the exemplary SVM 1000, data may be classified into two different classes represented as squares 1010 and triangles 1020. SVM 1000 operates by drawing a random hyperplane 1030. This hyperplane 1030 is monitored by comparing the distance (illustrated with lines 1040) between the hyperplane 1030 and the closest data points 1050 from each class. The closest data points 1050 to the hyperplane 1030 are known as support vectors. The hyperplane 1030 is drawn based on these support vectors 1050 and an optimum hyperplane has a maximum distance from each of the support vectors 1050. The distance between the hyperplane 1030 and the support vectors 1050 is known as the margin.

[0099] SVM 1000 may be used to classify data by using a hyperplane 1030, such that the distance between the hyperplane 1030 and the support vectors 1050 is maximum. Such an SVM 1000 may be used to predict heart disease, for example.

[0100] K Nearest Neighbors (KNN) refers to a set of algorithms that generally do not make assumptions on the underlying data distribution, and perform a reasonably short training phase. Generally, KNN uses many data points separated into several classes to predict the classification of a new sample point. Operationally, KNN specifies an integer N with a new sample. The N entries in the model of the system closest to the new sample are selected. The most common classification of these entries is determined, and that classification is assigned to the new sample. KNN generally requires the storage space to increase as the training set increases. This also means that the estimation time increases in proportion to the number of training points.
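The KNN procedure described above (select the N closest stored samples, take their most common label) can be sketched as follows. The 2-D points, labels, and the helper name knn_classify are hypothetical illustrations, not data from the disclosure.

```python
def knn_classify(samples, labels, query, n):
    """Classify `query` by the most common label among its n nearest samples."""
    # Sort sample indices by squared Euclidean distance to the query.
    order = sorted(range(len(samples)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(samples[i], query)))
    nearest = [labels[i] for i in order[:n]]
    # Return the most common classification among the n nearest entries.
    return max(set(nearest), key=nearest.count)

# Hypothetical 2-D points in two classes.
pts  = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labs = ["A", "A", "A", "B", "B", "B"]
print(knn_classify(pts, labs, (2, 2), 3))  # A
print(knn_classify(pts, labs, (7, 8), 3))  # B
```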

[0101] In regression algorithms, the output is a continuous quantity, so regression algorithms may be used in cases where the target variable is a continuous variable. Linear regression is a general example of regression algorithms. Linear regression may be used to estimate real values (cost of houses, number of calls, total sales and so forth) based on the continuous variable(s). A relationship between the variables and the outcome is created by fitting the best line (hence linear regression). This best fit line is known as the regression line and is represented by a linear equation Y = a*X + b. Linear regression is best used in approaches involving a low number of dimensions.

[0102] FIG. 11 illustrates an exemplary linear regression model. In this model, a predicted variable 1110 is modeled against a measured variable 1120. A cluster of instances of the predicted variable 1110 and measured variable 1120 are plotted as data points 1130. Data points 1130 are then fit with the best fit line 1140. The best fit line 1140 is then used in subsequent predictions: given a measured variable 1120, the line 1140 is used to predict the predicted variable 1110 for that instance. Linear regression may be used to model and predict in a financial portfolio, salary forecasting, real estate and in traffic, such as in arriving at an estimated time of arrival.
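Fitting the best line Y = a*X + b can be sketched with ordinary least squares. The sample points lying exactly on y = 2x + 1 are hypothetical, chosen so the recovered coefficients are easy to check; the helper name fit_line is illustrative only.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the fitted line passes through the mean point.
    b = mean_y - a * mean_x
    return a, b

# Hypothetical noiseless points lying on y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```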

[0103] Clustering algorithms may also be used to model and train on a data set. In clustering, the input is assigned into two or more clusters based on feature similarity. Clustering algorithms generally learn the patterns and useful insights from data without any guidance. For example, clustering viewers into similar groups based on their interests, age, geography, etc. may be performed using unsupervised learning algorithms like K-means clustering.

[0104] K-means clustering generally is regarded as a simple unsupervised learning approach. In K-means clustering, similar data points may be gathered together and bound in the form of a cluster. One method for binding the data points together is by calculating the centroid of the group of data points. In determining effective clusters, K-means clustering evaluates the distance between each point and the centroid of the cluster. Depending on the distance between the data point and the centroid, the data is assigned to the closest cluster. The goal of clustering is to determine the intrinsic grouping in a set of unlabeled data. The ‘K’ in K-means stands for the number of clusters formed. The number of clusters (basically the number of classes in which new instances of data may be classified) may be determined by the user. This determination may be performed using feedback and viewing the size of the clusters during training, for example.

[0105] K-means is used mainly in cases where the data set has points which are distinct and well separated; otherwise, if the clusters are not separated, the modeling may render the clusters inaccurate. Also, K-means may be avoided in cases where the data set contains a high number of outliers or the data set is non-linear.

[0106] FIG. 12 illustrates a K-means clustering. In K-means clustering, the data points are plotted, and the K value is assigned. For example, for K=2 in FIG. 12, the data points are plotted as shown in depiction 1210. The points are then assigned to similar centers at step 1220. The cluster centroids are identified as shown in 1230. Once centroids are identified, the points are reassigned to the cluster that provides the minimum distance between the data point and the respective cluster centroid, as illustrated in 1240. Then a new centroid of the cluster may be determined, as illustrated in depiction 1250. As the data points are reassigned to a cluster and new cluster centroids are formed, an iteration, or series of iterations, may occur to enable the clusters to be minimized in size and the optimal centroid determined. Then, as new data points are measured, the new data points may be compared with the centroids to identify the cluster to which each belongs.
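The assign-then-recompute loop described above can be sketched as follows. The two well-separated point groups and the initial centroids are hypothetical, and the helper name kmeans is illustrative only; a fixed iteration count stands in for a convergence test.

```python
def kmeans(points, centroids, iters=10):
    """Assign points to the nearest centroid, then recompute each centroid."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assign each point to the cluster with the nearest centroid.
            i = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [tuple(sum(dim) / len(dim) for dim in zip(*cl)) if cl else cen
                     for cl, cen in zip(clusters, centroids)]
    return centroids

# Two well-separated hypothetical groups, K = 2.
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
print(kmeans(pts, [(0, 0), (10, 10)]))
```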

[0107] Ensemble learning algorithms may be used. These algorithms use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Ensemble learning algorithms perform the task of searching through a hypothesis space to find a suitable hypothesis that will make good predictions with a particular problem. Even if the hypothesis space contains hypotheses that are very well-suited for a particular problem, it may be very difficult to find a good hypothesis. Ensemble algorithms combine multiple hypotheses to form a better hypothesis. The term ensemble is usually reserved for methods that generate multiple hypotheses using the same base learner. The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner.

[0108] Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model, so ensembles may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. Fast algorithms such as decision trees are commonly used in ensemble methods, for example, random forests, although slower algorithms can benefit from ensemble techniques as well.

[0109] An ensemble is itself a supervised learning algorithm because it can be trained and then used to make predictions. The trained ensemble, therefore, represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built. Thus, ensembles can be shown to have more flexibility in the functions they can represent. This flexibility can, in theory, enable them to over-fit the training data more than a single model would, but in practice, some ensemble techniques (especially bagging) tend to reduce problems related to over-fitting of the training data.

[0110] Empirically, ensemble algorithms tend to yield better results when there is a significant diversity among the models. Many ensemble methods, therefore, seek to promote diversity among the models they combine. Although non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees). Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt to dumb down the models in order to promote diversity.

[0111] The number of component classifiers of an ensemble has a great impact on the accuracy of prediction. A priori determination of ensemble size, together with the volume and velocity of big data streams, makes this even more crucial for online ensemble classifiers. A theoretical framework suggests that there is an ideal number of component classifiers for an ensemble such that having more or fewer than this number of classifiers would deteriorate the accuracy. The theoretical framework shows that using the same number of independent component classifiers as class labels gives the highest accuracy.

[0112] Some common types of ensembles include Bayes optimal classifier, bootstrap aggregating (bagging), boosting, Bayesian model averaging, Bayesian model combination, bucket of models and stacking. FIG. 13 illustrates an exemplary ensemble learning algorithm where bagging is being performed in parallel 1310 and boosting is being performed sequentially 1320.

[0113] A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. Inputs are modified by a weight and summed using a linear combination. An activation function may control the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be -1 and 1.

[0114] These artificial networks may be used for predictive modeling, adaptive control and applications and can be trained via a dataset. Self-learning resulting from experience can occur within networks, which can derive conclusions from a complex and seemingly unrelated set of information.

[0115] For completeness, a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion.

[0116] Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.

[0117] A neural network (NN), in the case of artificial neurons called an artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network. In more practical terms, neural networks are non-linear statistical data modeling or decision-making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

[0118] An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.

[0119] One classical type of artificial neural network is the recurrent Hopfield network. The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. Unsupervised neural networks can also be used to learn representations of the input that capture the salient characteristics of the input distribution, and more recently, deep learning algorithms, which can implicitly learn the distribution function of the observed data. Learning in neural networks is particularly useful in applications where the complexity of the data or task makes the design of such functions by hand impractical.

[0120] Neural networks can be used in different fields. The tasks to which artificial neural networks are applied tend to fall within the following broad categories: function approximation, or regression analysis, including time series prediction and modeling; classification, including pattern and sequence recognition, novelty detection and sequential decision making; and data processing, including filtering, clustering, blind signal separation and compression.

[0121] Application areas of ANNs include nonlinear system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering. For example, it is possible to create a semantic profile of a user's interests emerging from pictures trained for object recognition.

[0122] FIG. 14 illustrates an exemplary neural network. In the neural network there is an input layer represented by a plurality of inputs, such as 14101 and 14102. The inputs 14101, 14102 are provided to a hidden layer depicted as including nodes 14201, 14202, 14203, 14204. These nodes 14201, 14202, 14203, 14204 are combined to produce an output 1430 in an output layer. The neural network performs simple processing via the hidden layer of simple processing elements, nodes 14201, 14202, 14203, 14204, which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.
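The two-input, four-hidden-node, one-output topology of FIG. 14 can be sketched as a forward pass: each node computes a weighted sum of its inputs and applies an activation function. The weight values and the function name forward are hypothetical, and the sigmoid activation is one conventional choice rather than anything mandated by the figure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One hidden layer: each node sums weighted inputs, then applies sigmoid."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, four hidden nodes, one output -- hypothetical weights.
hw = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]]
ow = [0.6, -0.1, 0.2, 0.5]
y = forward([1.0, 0.5], hw, ow)
print(0.0 < y < 1.0)  # True
```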

[0123] The neural network of FIG. 14 may be implemented in hardware, as depicted in FIG. 15.

[0124] Electrical signals such as electrocardiogram (ECG) signals are often detected prior to and/or during a cardiac procedure. For example, ECG signals can be used to identify potential locations of a heart where arrhythmia-causing signals originate. Generally, an ECG is a signal that describes the electrical activity of the heart. ECG signals may also be used to map portions of a heart. When physicians use an ECG to study heart activity, the interference must be accounted for in order to isolate the electrical signals from the heart. Such interference may also result from the processing of areas of the signal with sharp changes, peaks, and/or pacing signals including areas of high frequency and harmonics. Interference obscures the ECG readings and reduces their accuracy. Therefore, a need exists to provide improved methods of identifying features so that the effects of such features may be removed from an electrical signal study, thereby allowing the electrical signals of the heart to be viewed.

[0125] An ECG signal is generated by contraction (depolarization) and relaxation (repolarization) of atrial and ventricular muscles of the heart. As shown by signal 1602 in FIG. 16A, an ECG signal contains a P wave (due to atrial depolarization), a QRS complex (due to atrial repolarization and ventricular depolarization) and a T wave (due to ventricular repolarization). In order to record an ECG signal, electrodes can be placed at specific positions on the human body or can be positioned within a human body via a catheter. Artifacts (e.g., noise) are unwanted signals that are merged with electronic signals, such as ECG signals, and sometimes create obstacles for the diagnosis and/or treatment of a cardiac condition. Artifacts in electrical signals can include baseline wander, powerline interference, electromyogram (EMG) noise, etc. These noise signals may include site base noise and other additive noise.

[0126] Baseline wander or baseline drift occurs where the base axis (x-axis) of a signal appears to ‘wander’ or move up and down rather than be straight. This may cause the entire signal to shift from its normal base. In ECG signals, baseline wander is caused by improper electrode contact (e.g., electrode-skin impedance), patient movement, and cyclical movement (e.g., respiration). FIG. 16B shows a typical ECG signal 1612 affected by baseline wander. As shown in the example of FIG. 16B, the frequency content of the baseline wander is in the range of 0.5 Hz. However, increased movement of the body during exercise or a stress test increases the frequency content of baseline wander. According to implementations, given that the baseline signal is a low-frequency signal, Finite Impulse Response (FIR) high-pass zero-phase forward-backward filtering with a cut-off frequency of 0.5 Hz can be used to estimate and remove the baseline in the ECG signal.
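The FIR filtering described above would normally be built with a signal-processing library. As a dependency-free sketch of the same estimate-and-subtract idea, the following approximates the slow baseline with a centered moving average (centering keeps the smoother zero-phase, loosely mirroring forward-backward filtering) and subtracts it. The window length and the helper name remove_baseline are hypothetical, not the claimed filter design.

```python
def remove_baseline(signal, window):
    """Estimate the baseline as a centered moving average, then subtract it.

    The centered window makes the smoother zero-phase; edge samples use a
    truncated window. This is a rough stand-in for the FIR high-pass
    forward-backward filtering described in the text.
    """
    half = window // 2
    baseline = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        baseline.append(sum(signal[lo:hi]) / (hi - lo))
    return [s - b for s, b in zip(signal, baseline)]

# A trace riding on a constant offset: the offset is estimated and removed.
drifting = [5.0] * 8
print(remove_baseline(drifting, 4))  # all values become 0.0
```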

[0127] Electromagnetic fields caused by a powerline represent a common noise source in electronic signals such as ECGs, as well as in any other bioelectrical signal recorded from a patient’s body. Such noise is characterized by, for example, 50 or 60 Hz sinusoidal interference, possibly accompanied by a number of harmonics. Such narrowband noise renders the analysis and interpretation of the ECG more difficult, since the delineation of low-amplitude waveforms becomes unreliable and spurious waveforms may be introduced. It may be necessary to remove powerline interference from ECG signals, as it superimposes on low-frequency ECG waves such as the P wave and T wave.

[0128] The presence of muscle noise can interfere with many electrical signal applications such as ECG applications, as low amplitude waveforms can become obscured. Muscle noise is, in contrast to baseline wander and 50/60 Hz interference, not removed by narrowband filtering, but presents a different filtering problem as the spectral content of muscle activity considerably overlaps that of the PQRST complex. As an ECG signal is a repetitive signal, techniques can be used to reduce muscle noise in a manner similar to the processing of evoked potentials. FIG. 16C shows an ECG signal 1630 interfered with by an EMG noise 1632.

[0129] Instruments for measuring electrical signals such as ECG signals often detect electrical interference corresponding to a line, or mains, frequency. Line frequencies in most countries, though nominally set at 50 Hz or 60 Hz, may vary by several percent from these nominal values.

[0130] Various techniques for removing electrical interference from electrical signals can be implemented. Several of these techniques use one or more low-pass or notch filters. For example, a system for variable filtering of noise in ECG signals may be implemented. The system may have a plurality of low-pass filters including, for example, one filter with a 3 dB point at approximately 50 Hz and a second low-pass filter with a 3 dB point at approximately 5 Hz.

[0131] According to another example, a system for rejecting a line frequency component of an electronic signal may be implemented by passing the signal through two serially linked notch filters. A system with a notch filter that may have either or both low-pass and high-pass coefficients for removing line frequency components from an ECG signal may be implemented. The system may also support removal of burst noise and calculate a heart rate from the notch filter output.

[0132] According to another example, a system with several units for removing interference may be implemented. The units may include a mean value unit to generate an average signal over several cardiac cycles, a subtracting unit to subtract the average signal from the input signal to generate a residual signal, a filter unit to provide a filtered signal from the residual signal, and/or an addition unit to add the filtered signal to the average signal.
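The averaging scheme described in the paragraph above can be sketched as follows; the moving-average "filter unit" and the array shapes are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def reduce_noise_by_averaging(beats, kernel=5):
    """beats: 2D array with one time-aligned cardiac cycle per row."""
    avg = beats.mean(axis=0)                 # mean value unit: average over cycles
    residual = beats - avg                   # subtracting unit: per-beat residual
    k = np.ones(kernel) / kernel             # filter unit: simple moving average
    filtered = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, residual)
    return filtered + avg                    # addition unit: recombine with the average

template = np.sin(np.linspace(0, np.pi, 200))
rng = np.random.default_rng(0)
beats = template + 0.2 * rng.standard_normal((30, 200))
out = reduce_noise_by_averaging(beats)
```

The averaged signal suppresses uncorrelated noise, and filtering only the residual avoids smearing the repetitive waveform itself.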

[0133] According to another example, an analog-to-digital (A/D) converter may provide noise rejection by synchronizing a clock of the converter with a phase locked loop set to the line frequency.

[0134] Additionally, biometric (e.g., biopotential) patient monitors may use surface electrodes to make measurements of bioelectric potentials such as ECG or electroencephalogram (EEG). The fidelity of these measurements is limited by the effectiveness of the connection of the electrode to the patient. The resistance of the electrode system to the flow of electric currents, known as the electric impedance, characterizes the effectiveness of the connection. Typically, the higher the impedance, the lower the fidelity of the measurement. Several mechanisms may contribute to lower fidelity.

[0135] Signals from electrodes with high impedances are subject to thermal noise (so-called Johnson noise), with noise voltages that increase with the square root of the impedance value. In addition, biopotential electrodes tend to have voltage noise in excess of that predicted by the Johnson formula. Also, amplifier systems making measurements from biopotential electrodes can have degraded performance at higher electrode impedances. The impairments are characterized by poor common-mode rejection, which tends to increase the contamination of the bioelectric signal by noise sources such as patient motion and electronic equipment that may be in use on or around the patient. These noise sources are particularly prevalent in the operating theatre and may include equipment such as electrosurgical units (ESU), cardiopulmonary bypass pumps (CPB), electric motor-driven surgical saws, lasers, and other sources.
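The square-root relationship between impedance and thermal noise voltage can be made concrete with the standard Johnson noise formula, v = sqrt(4·k·T·R·B); the resistance, bandwidth, and temperature values below are illustrative:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def johnson_noise_rms(resistance_ohm, bandwidth_hz, temperature_k=310.0):
    """RMS thermal noise voltage of a resistance: v = sqrt(4 k T R B)."""
    return math.sqrt(4 * K_BOLTZMANN * temperature_k * resistance_ohm * bandwidth_hz)

# A 10 kOhm electrode over a 100 Hz bandwidth near body temperature (~310 K)
v_10k = johnson_noise_rms(10e3, 100.0)   # roughly 0.13 microvolts RMS
# Quadrupling the impedance doubles the noise voltage (square-root scaling)
v_40k = johnson_noise_rms(40e3, 100.0)
```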

[0136] During a cardiac procedure, it is often desirable to measure electrode impedances continuously in real time while a patient is being monitored. To do this, a very small electric current is typically injected through the electrodes and the resulting voltage measured, thereby establishing the impedance using Ohm's law. This current may be injected using DC or AC sources. It is often not possible to separate voltage due to the electrode impedance from voltage artifacts arising from interference. Interference tends to increase the measured voltage and thus the apparent measured impedance, causing the biopotential measurement system to falsely detect higher impedances than are actually present. Often such monitoring systems have maximum impedance threshold limits that may be programmed to prevent their operation when they detect impedances in excess of these limits. This is particularly true of systems that make measurements of very small voltages, such as the EEG. Such systems require very low electrode impedances.
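The Ohm's-law impedance check described above can be sketched as follows; the injected current, measured voltages, and impedance limit are illustrative assumptions:

```python
def electrode_impedance_ok(measured_voltage_v, injected_current_a, max_impedance_ohm):
    """Derive electrode impedance by Ohm's law and compare against a programmed limit."""
    impedance = measured_voltage_v / injected_current_a  # Z = V / I
    return impedance, impedance <= max_impedance_ohm

# 10 uA injected, 50 mV measured -> 5 kOhm, below an assumed 10 kOhm limit
z_good, ok_good = electrode_impedance_ok(0.05, 10e-6, 10e3)
# 200 mV measured (e.g., inflated by interference) -> 20 kOhm, over the limit
z_bad, ok_bad = electrode_impedance_ok(0.2, 10e-6, 10e3)
```

As the paragraph notes, interference adds to the measured voltage, so the second case may be flagged even when the true contact impedance is acceptable.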

[0137] FIG. 16D illustrates examples of power line noise. FIG. 16D illustrates a body surface lead and signals from the mapping catheter, including bipolar, unipolar distal, and unipolar proximal signals. The gray area around the signals marks the signals of interest illustrating the power line noise; in particular, the dots on the unipolar distal and unipolar proximal signals indicate further areas of noise.

[0138] FIG. 16E illustrates a signal during ventricle activity, including baseline wander. FIG. 16E includes the MAP 1-2 signal at the top, followed by the MAP 1 and MAP 2 signals.

[0139] FIG. 16F illustrates the signal of FIG. 16E after baseline wander removal. FIG. 16F again includes the MAP 1-2 signal at the top, followed by the MAP 1 and MAP 2 signals after baseline wander removal.

[0140] FIG. 16G illustrates an example of high frequency noise and baseline wander for bipolar measurements.

[0141] FIG. 17A is a diagram of an exemplary system 1720 in which one or more features of the disclosed subject matter can be implemented. All or parts of system 1720 may be used to collect information for a training dataset, and/or all or parts of system 1720 may be used to implement a trained model. System 1720 may include components, such as a catheter 1740, that are configured to damage tissue areas of an intra-body organ. The catheter 1740 may also be further configured to obtain biometric data including electronic signals. Although catheter 1740 is shown as a point catheter, it will be understood that a catheter of any shape that includes one or more elements (e.g., electrodes) may be used to implement the embodiments disclosed herein. System 1720 includes a probe 1721, having a shaft that may be navigated by a physician 1730 into a body part, such as heart 1726, of a patient 1728 lying on a table 1729. According to embodiments, multiple probes may be provided; however, for purposes of conciseness, a single probe 1721 is described herein, with the understanding that probe 1721 may represent multiple probes. As shown in FIG. 17A, physician 1730 may insert shaft 1722 through a sheath 1723, while manipulating the distal end of the shaft 1722 using a manipulator 1732 near the proximal end of the catheter 1740 and/or deflection from the sheath 1723. As shown in an inset 1725, catheter 1740 may be fitted at the distal end of shaft 1722. Catheter 1740 may be inserted through sheath 1723 in a collapsed state and may then be expanded within heart 1726. Catheter 1740 may include at least one ablation electrode 1747 and a catheter needle 1748, as further disclosed herein.

[0142] According to embodiments, catheter 1740 may be configured to ablate tissue areas of a cardiac chamber of heart 1726. Inset 1745 shows catheter 1740 in an enlarged view, inside a cardiac chamber of heart 1726. As shown, catheter 1740 may include at least one ablation electrode 1747 coupled onto the body of the catheter. According to other embodiments, multiple elements may be connected via splines that form the shape of the catheter 1740. One or more other elements (not shown) may be provided and may be any elements configured to ablate or to obtain biometric data and may be electrodes, transducers, or one or more other elements.

[0143] According to embodiments disclosed herein, the ablation electrodes, such as electrode 1747, may be configured to provide energy to tissue areas of an intra-body organ such as heart 1726. The energy may be thermal energy and may cause damage to the tissue area starting from the surface of the tissue area and extending into the thickness of the tissue area.

[0144] According to embodiments disclosed herein, biometric data may include one or more of LATs, electrical activity, topology, bipolar mapping, dominant frequency, impedance, or the like. The local activation time (LAT) may be a point in time of a threshold activity corresponding to a local activation, calculated based on a normalized initial starting point. Electrical activity may be any applicable electrical signals that may be measured based on one or more thresholds and may be sensed and/or augmented based on signal-to-noise ratios and/or other filters. A topology may correspond to the physical structure of a body part or a portion of a body part and may correspond to changes in the physical structure relative to different parts of the body part or relative to different body parts. A dominant frequency may be a frequency or a range of frequencies that is prevalent at a portion of a body part and may be different in different portions of the same body part. For example, the dominant frequency of a pulmonary vein of a heart may be different than the dominant frequency of the right atrium of the same heart. Impedance may be the resistance measurement at a given area of a body part.

[0145] As shown in FIG. 17A, the probe 1721 and catheter 1740 may be connected to a console 1724. Console 1724 may include a processor 1741, such as a general-purpose computer, with suitable front end and interface circuits 1738 for transmitting and receiving signals to and from the catheter, as well as for controlling the other components of system 1720. In some embodiments, processor 1741 may be further configured to receive biometric data, such as electrical activity, and determine if a given tissue area conducts electricity. According to an embodiment, the processor may be external to the console 1724 and may be located, for example, in the catheter, in an external device, in a mobile device, in a cloud-based device, or may be a standalone processor.

[0146] As noted above, processor 1741 may include a general-purpose computer, which may be programmed in software to carry out the functions described herein. The software may be downloaded to the general-purpose computer in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory. The example configuration shown in FIG. 17A may be modified to implement the embodiments disclosed herein. The disclosed embodiments may similarly be applied using other system components and settings. Additionally, system 1720 may include additional components, such as elements for sensing electrical activity, wired or wireless connectors, processing and display devices, or the like.

[0147] According to an embodiment, a display connected to a processor (e.g., processor 1741) may be located at a remote location such as a separate hospital or in a separate healthcare provider network. Additionally, the system 1720 may be part of a surgical system that is configured to obtain anatomical and electrical measurements of a patient’s organ, such as a heart, and to perform a cardiac ablation procedure. An example of such a surgical system is the CARTO® system sold by Biosense Webster.

[0148] The system 1720 may also, and optionally, obtain biometric data such as anatomical measurements of the patient’s heart using ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) or other medical imaging techniques known in the art. The system 1720 may obtain electrical measurements using catheters, electrocardiograms (EKGs) or other sensors that measure electrical properties of the heart. The biometric data including anatomical and electrical measurements may then be stored in a memory 1742 of the mapping system 1720, as shown in FIG. 17A. The biometric data may be transmitted to the processor 1741 from the memory 1742. Alternatively, or in addition, the biometric data may be transmitted to a server 1760, which may be local or remote, using a network 1762.

[0149] Network 1762 may be any network or system generally known in the art, such as an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between the mapping system 1720 and the server 1760. The network 1762 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11 or any other wired connection generally known in the art. Wireless connections may be implemented using WiFi, WiMAX, Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 1762.

[0150] In some instances, the server 1760 may be implemented as a physical server. In other instances, server 1760 may be implemented as a virtual server by a public cloud computing provider (e.g., Amazon Web Services (AWS) ®).

[0151] Control console 1724 may be connected, by a cable 1739, to body surface electrodes 1743, which may include adhesive skin patches that are affixed to the patient 1728. The processor, in conjunction with a current tracking module, may determine position coordinates of the catheter 1740 inside the body part (e.g., heart 1726) of a patient. The position coordinates may be based on impedances or electromagnetic fields measured between the body surface electrodes 1743 and the electrode 1748 or other electromagnetic components of the catheter 1740. Additionally, or alternatively, location pads may be located on the surface of table 1729 or may be separate from the table 1729.

[0152] Processor 1741 may include real-time noise reduction circuitry typically configured as a field programmable gate array (FPGA), followed by an analog-to-digital (A/D) ECG (electrocardiograph) or EMG (electromyogram) signal conversion integrated circuit. The processor 1741 may pass the signal from an A/D ECG or EMG circuit to another processor and/or can be programmed to perform one or more functions disclosed herein.

[0153] Control console 1724 may also include an input/output (I/O) communications interface that enables the control console to transfer signals from, and/or transfer signals to electrode 1747.

[0154] During a procedure, processor 1741 may facilitate the presentation of a body part rendering 1735 to physician 1730 on a display 1727, and store data representing the body part rendering 1735 in a memory 1742. Memory 1742 may comprise any suitable volatile and/or non- volatile memory, such as random-access memory or a hard disk drive. In some embodiments, medical professional 1730 may be able to manipulate a body part rendering 1735 using one or more input devices such as a touch pad, a mouse, a keyboard, a gesture recognition apparatus, or the like. For example, an input device may be used to change the position of catheter 1740 such that rendering 1735 is updated. In alternative embodiments, display 1727 may include a touchscreen that can be configured to accept inputs from medical professional 1730, in addition to presenting a body part rendering 1735.

[0155] FIG. 17B illustrates an exemplary catheter 1750 placed in the right atrium, with bipolar intracardiac ECG signals 1780 obtained via the intracardiac ECG leads 1770.

[0156] As set forth above, noise is a constant concern in ECG measurements. Addressing this noise requires denoising plus removal of additive noises, including contact and deflection noise. For example, FIG. 18 depicts an illustration 1800 of a lab. Illustration 1800 includes many specific electronic devices that contribute noise to the measurements. There are many monitors, machines, light sources, and other devices commonly found within the internal lab environment. In addition, there may be sources external to the lab that influence electronic signals within the lab. For example, power banks for the hospital (if the lab is located within a hospital, for example) may be located below the floor of the lab. Air conditioning units may be located above the lab. Other devices producing or increasing noise may be located in adjacent rooms. All of these internal and external machines may affect the environment of the ECG signal within the lab. Further, each lab is unique with respect to internal and external noise signals. That is, one lab may include air conditioning units, another power banks, and, in fact, each lab may have a particular configuration of monitors and other internal devices. Specifically, every lab has its own specific set of environmental noises due to the apparatus setup, including electrical power lines, frequency converters, X-ray machines, CARTO ACL, and the relationship and proximity to an electrical room with transformers.

[0157] FIG. 19 illustrates signals and their respective frequencies that may be found within a specific lab, for example, fluorescent noise around 200 Hz, power noise at 50/60 Hz, and the respective harmonics of these signals. As illustrated in FIG. 19, and for the present discussion, each lab has a different typical spectrum of noise, and by characterizing the typical noise signals on a per-laboratory basis, the present system and method may design a filter set specific to each particular lab. Reviewing the ECG and the ICEG has shown that each lab has its own unique noise pattern. Therefore, there is a need for a method to generate a lab-specific noise algorithm.

[0158] FIG. 19 illustrates an FFT of the power spectrum of the signals, with each signal plotted as a horizontal line entry. There are four regions 1910, 1920, 1930, 1940 that illustrate noise signals found within the signals displayed for this lab. The signals 1910, 1920, 1930, 1940 represent harmonics of the interference found in the lab. This noise information constitutes the noise fingerprint of the lab.
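The spectral "fingerprint" idea can be sketched as follows; the sampling rate, synthetic noise mixture, and simple peak-picking rule are illustrative assumptions, not the patent's algorithm:

```python
import numpy as np

def noise_fingerprint(signal, fs, top_k=4):
    """Return the top_k strongest frequencies in the power spectrum of a recording."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    peaks = freqs[np.argsort(spectrum)[::-1][:top_k]]  # strongest bins first
    return sorted(peaks)

fs = 2000.0
t = np.arange(0, 4.0, 1 / fs)
# Synthetic lab noise: mains at 50 Hz, a harmonic at 100 Hz, fluorescent-like 200 Hz
lab_noise = (np.sin(2 * np.pi * 50 * t) + 0.6 * np.sin(2 * np.pi * 100 * t)
             + 0.4 * np.sin(2 * np.pi * 200 * t))
fingerprint = noise_fingerprint(lab_noise, fs, top_k=3)
```

The returned frequency list serves as the lab's fingerprint, against which a filter set could then be designed.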

[0159] FIG. 20 illustrates a method 2000 for dealing with the described noise signals. Current concepts for dealing with these types of noise are based on generating "one method fits all" ECG/ICEG denoising algorithms for all ECG processing systems currently deployed in the field. Method 2000 includes collecting ECG plus noise signals at step 2010. At step 2020, method 2000 applies a set of filters (the same filters for all labs). At step 2030, method 2000 provides the cleaned ECG signals.

[0160] The method described herein is based on generating a specific denoising algorithm for each specific lab, consequently reducing the impact of denoising on ECG/ICEG signal morphology. In order to provide unique noise algorithms for each lab, data is collected on the types of noise in each lab, from both internal and external sources. These include power lines, converters, X-ray machines, and even the CARTO ACL, for example. Noise from transformers located below the floor in some labs, as well as noise from other external sources, is also collected.

[0161] For example, in FIG. 21, a method 2100 may be performed to denoise signals for a lab. Method 2100A may be employed for a first lab. Method 2100A includes collecting ECG plus noise signals at step 2110A. At step 2120A, method 2100A ensures that enough data is collected to build a lab noise profile for lab A. At step 2130A, method 2100A applies a set of filters (a specific filter set designed for lab A). Such a filter may be designed to remove known noise signals in the lab. Similarly, the filter may be designed to cancel the unwanted signals found in the lab. At step 2140A, method 2100A provides the cleaned ECG signals.

[0162] Similarly, as shown in FIG. 21, method 2100B may be employed for a second lab. Method 2100B includes collecting ECG plus noise signals at step 2110B. At step 2120B, method 2100B ensures that enough data is collected to build a lab noise profile for lab B. At step 2130B, method 2100B applies a set of filters (a specific filter set designed for lab B). The sets of filters applied in step 2130A and step 2130B may be different, as each filter set depends upon the noise found within the respective lab. Such a filter may be designed to remove known noise signals in the lab. Similarly, the filter may be designed to cancel the unwanted signals found in the lab. At step 2140B, method 2100B provides the cleaned ECG signals.

[0163] In order to provide the system of FIG. 4 with data from each lab to address site-based noise, the data from each specific lab is collected at one of steps 2120, depending on which lab is being tested. ECG/ICEG collection is performed by aggregating each lab's recordings into a database. By backing up CARTO® data to CARTONET, each case that is uploaded to the database includes the identification of the institute and the specific lab. The data may be recorded with filtering. The raw data may also be collected, i.e., ECG/ICEG signals without filtering, and the WCT recording (channel 21), which includes all the lab base noise, may also be collected. Alternatively, the data on each specific WS may be collected. For example, there may be specific disk space for the collection. As discussed with respect to FIG. 4, after collection a local algorithm may run within the disk space. The ECG/ICEG denoising algorithm may train on the data. A specific algorithm may be designed and generated to filter the lab noise at step 2130. In order to do so, an FFT may be performed on all the ECG/ICEG signals to determine the typical lab noise. As biologic noise differs between patients while the lab noise is consistent within a specific lab, correlations on the data (e.g., above 30) provide the ability to distinguish between biologic noise and lab noise. As described with respect to FIG. 4, machine learning may be performed to enable the machine to learn the coherent noise. An algorithm may be generated in the cloud or in a specific application on the workstation. The algorithm may be based on an autoencoder, for example. This may include LSTM and/or CNN architectures, as described hereinabove. Once generated, the trained model may be deployed for the specific lab to provide clean ECG from that lab at step 2140.
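One way to realize the distinction drawn above, that lab noise recurs across recordings while biologic noise varies per patient, is to keep only spectral peaks that appear in most recordings; the recurrence threshold, power quantile, and synthetic data below are illustrative assumptions, not the patent's algorithm:

```python
import numpy as np

def lab_noise_bins(recordings, fs, recurrence=0.8, power_quantile=0.99):
    """Frequencies whose power is in the top quantile in at least `recurrence` of recordings."""
    n = len(recordings[0])
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    hits = np.zeros(len(freqs))
    for rec in recordings:
        p = np.abs(np.fft.rfft(rec)) ** 2
        hits += p >= np.quantile(p, power_quantile)  # flag this recording's strongest bins
    return freqs[hits / len(recordings) >= recurrence]

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Every recording shares 50 Hz lab noise; the "biologic" component varies per recording
recordings = [np.sin(2 * np.pi * 50 * t)
              + 0.8 * np.sin(2 * np.pi * rng.uniform(2, 20) * t)
              for _ in range(20)]
lab = lab_noise_bins(recordings, fs)
```

The 50 Hz bin is flagged in every recording and survives, while the patient-varying components fall below the recurrence threshold.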

[0164] The resulting filter for each lab may be presented to the user (physician) to approve the algorithm results based on clinical data. Such presentation may include the raw data, previous filtering algorithm, and other filtering algorithms to enable ease of approval. Upon approval, the trained algorithm (or the frequencies of the specific lab noise) may be loaded into the CARTO® system (i.e., to the ECG/ICEG presenting and storing system).

[0165] In addition to lab-specific noise, other additive noises may also be found within ECG signals. The present description provides an automatic noise sample generation technique for additive noises. In solving denoising issues, including solving such issues using AI, it is often important to harvest enough samples to provide good coverage of possible examples of a specific noise.

[0166] Environment-related additive noise in recorded signals may include, for example, power noise, contact noise, and deflection noise. Currently, some additive noise signals are hard to detect and can be cumbersome to annotate, with reviewers needing to view hours of ECG recordings to identify a few noise events. Today the task of finding noise-affected points is rather cumbersome and is performed manually, if at all. As a result, CARTO's detections of physiological features (e.g., mapping annotations within contact-affected points' ECG) will likely produce artifacts and thus affect clinical understanding of CARTO® maps. This method allows the user to filter out contact noise affected CARTO® points automatically, to produce CARTO® maps free of artifacts.

[0167] The present method allows generation of a set of noise samples in a "quiet lab" condition that may be added to real-life signals, allowing the generation of a practically unlimited number of real-life noise samples.

[0168] Referring again to FIG. 4, data may be collected by configuring the system in a "quiet lab" with a minimal number of possible noises, or ideally free of any noise. A set of signals may be generated and recorded with a specific noise of interest, assuming that the noise of interest is additive and has no dependence on signals or other noises in the system. The needed, required, or desired number of noise samples may be provided by embedding the collected noise samples in real-life systems' signal recordings, i.e., by adding the recorded noise samples to real-life systems' signal recordings. The signals that may be provided as additive noise include, but are not limited to, power noise, contact noise, and deflection noise.
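The embedding of recorded "quiet lab" noise into real-life recordings can be sketched as follows; the shapes, scales, and label mask are illustrative assumptions:

```python
import numpy as np

def embed_noise(signal, noise_snippet, rng):
    """Add a recorded noise snippet at a random position; return noisy copy and label mask."""
    out = signal.copy()
    mask = np.zeros(len(signal), dtype=bool)
    start = rng.integers(0, len(signal) - len(noise_snippet) + 1)
    out[start:start + len(noise_snippet)] += noise_snippet  # additive-noise assumption
    mask[start:start + len(noise_snippet)] = True           # ground-truth label for training
    return out, mask

rng = np.random.default_rng(7)
clean = np.sin(np.linspace(0, 20 * np.pi, 5000))  # stand-in for a clean real-life ICEG
snippet = 0.5 * rng.standard_normal(300)          # stand-in for a recorded noise sample
noisy, label = embed_noise(clean, snippet, rng)
```

Repeating this with random offsets and snippets yields a practically unlimited supply of labeled noisy samples, as the paragraph describes.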

[0169] For contact noise, for example, noise detection may occur using a deep autoencoder with a fully connected (dense) layer. Contact noise is a distinctive non-clinical artifact caused when catheter electrodes are in contact with each other. This contact may occur when two or more electrodes of the same catheter or of different catheters intersect, i.e., touch each other. The present description presents a method to detect such noise in CARTO® points' ECG and to filter out those points from the CARTO® map.

[0170] FIG. 22 illustrates contact noise examples recorded in a controlled aquarium environment. MAP 1-4 may be provided in this example as the mapping catheter electrodes. Electrodes P1-P20 are provided as the Penta-Ray catheter electrodes. In this example, Map 1 (mapping catheter electrode 1) is touching Penta-Ray electrodes 5 and 6 (P5, P6), and Map 2 touches P8, illustrating the contact noise (cyan).

[0171] Data is collected via ICEG and ECG data collection by aggregating lab clinical recordings into a database. This may be a backup of the CARTO® data to CARTONET, for example. A designated GUI allows manual marking of contact start and end times per electrode of each CARTO® point, consequently dividing the data into two classes. This may operate as described above with respect to binary classification. One of the classes is points with contact noise present (CN). The other class includes ECG points free of contact noise (FR). From the database, as described above with respect to FIG. 4, training and evaluation may occur on the model. The deep learning model architecture is based on a deep autoencoder network, for example, three layers of 2D convolutional neural network as an encoder and a similar three layers of 2D convolutional network as a decoder. The output of the encoder is connected to a dense layer in order to classify contact noise per channel. In such a configuration, the model may be trained on 5 GB of IC ECG data.

[0172] The model allows the classification of each CARTO® point into the (CN, FR) classes during the clinical procedure. This enables the user to filter out CN-classified CARTO® points and to display CARTO® maps free of contact noise induced artifacts.

[0173] For deflection noise detection, an LSTM deep network is described. As set forth above, the task of finding noise-affected points is rather cumbersome and is performed manually, if at all. As a result, CARTO's detections of physiological features (e.g., mapping annotations within deflection-affected points' ECG) will likely produce artifacts and thus affect clinical understanding of CARTO® maps. This method allows the user to filter out deflection noise affected CARTO® points automatically, to produce CARTO® maps free of artifacts. Deflection noise appears as chaotic peaks when the catheter is deflected by the clinical specialist. This disclosure presents a method to detect such noise in CARTO® points' ECG and to filter out those points from the CARTO® map.

[0174] FIG. 23A illustrates deflection noise examples recorded in a controlled aquarium environment. These data samples may be provided with random start times and random durations. FIG. 23B illustrates the deflection noise examples of FIG. 23A with an expanded x-axis to zoom in on features. In particular, the bottom plot (orange) represents deflection noise recorded in an aquarium. The bottom plot illustrates three high frequency bursts that indicate the three times the catheter was deflected. The upper plot (green) represents a signal free from deflection or contact noise. The middle plot (blue) illustrates a signal that is the sum of deflection and contact noise.

[0175] Data is collected via ICEG (deflection noise manifests only in the ICEG) and ECG data collection by aggregating lab clinical recordings into a database. This may be a backup of the CARTO® data to CARTONET, for example. A designated GUI allows manual marking of deflection start and end times per electrode of each CARTO® point, consequently dividing the data into two classes. This may operate as described above with respect to binary classification. One of the classes is deflection noise present in a point's ECG (DN). The other class includes ECG points free of deflection noise (FR). From the database, as described above with respect to FIG. 4, training and evaluation may occur on the model. The deep learning model architecture is based on an LSTM deep network, for example, a three-layer LSTM network to capture a feature representation of deflection noise. The last layer's output is connected to a dense fully connected layer in order to predict the presence of deflection noise. The model may be trained on 10 GB of BS ECG and IC ECG data.

[0176] The model allows the classification of each CARTO® point into the (DN, FR) classes during the clinical procedure. This enables the user to filter out DN-classified CARTO® points and to display CARTO® maps free of deflection noise induced artifacts.

[0177] FIG. 24 illustrates additional examples of deflection noise 2450. The deflection noise 2450 appears in a beat where, in prior beats, the deflection noise does not exist or is reduced.

[0178] A contact and deflection noise model 2500 is provided in FIG. 25. The input data may include unipolar intracardiac ECG, including Penta, Lasso and the like, and a mapping catheter with 2-4 unipolar catheters. Model 2500 includes the LSTM1 input as an input layer 2510. LSTM1 performs LSTM 2520. Then dropout1 occurs at step 2530. LSTM2 is performed at step 2540. Dropout2 occurs at step 2550. LSTM3 is performed at step 2560. Dropout3 occurs at step 2570. A dense1 connected layer occurs at step 2580. Dropout4 occurs at step 2590. A dense2 connected layer occurs at step 2595. The output includes a per-sample classification of 0, 1, 2, where 0 is a normal signal, 1 is contact noise and 2 is deflection noise.
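The layer sequence of model 2500 can be sketched in Keras as follows, assuming TensorFlow is available; the unit counts, dropout rates, and input shape are illustrative assumptions, as the text specifies only the layer order and the three-class output:

```python
import numpy as np
from tensorflow.keras import layers, models

def build_model_2500(timesteps=None, channels=4):
    """LSTM1 -> dropout1 -> LSTM2 -> dropout2 -> LSTM3 -> dropout3 -> dense1 -> dropout4 -> dense2."""
    return models.Sequential([
        layers.Input(shape=(timesteps, channels)),   # LSTM1 input layer (2510)
        layers.LSTM(64, return_sequences=True),      # LSTM1 (2520)
        layers.Dropout(0.2),                         # dropout1 (2530)
        layers.LSTM(64, return_sequences=True),      # LSTM2 (2540)
        layers.Dropout(0.2),                         # dropout2 (2550)
        layers.LSTM(64, return_sequences=True),      # LSTM3 (2560)
        layers.Dropout(0.2),                         # dropout3 (2570)
        layers.Dense(32, activation="relu"),         # dense1 (2580), applied per timestep
        layers.Dropout(0.2),                         # dropout4 (2590)
        layers.Dense(3, activation="softmax"),       # dense2 (2595): 0 normal, 1 contact, 2 deflection
    ])

model = build_model_2500()
probs = model.predict(np.zeros((1, 100, 4)), verbose=0)  # per-sample class probabilities
```

Applying the dense layers on top of `return_sequences=True` outputs yields one class prediction per sample, matching the per-sample 0/1/2 output described.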

[0179] A CNN inception model 2600 is provided in FIG. 26. The input data may include unipolar intracardiac ECG, including Penta, Lasso and the like, and a mapping catheter with 2-4 unipolar catheters. The input data may also include position information (x, y, z), angular movement, and movement or displacement. Model 2600 includes an input1 in input layer 2605. A 2D convolution is performed at step 2610. This may be performed three-fold in 2610a, 2610b, 2610c. Each respective conv2d (2610a, 2610b, 2610c) is then normalized in a batch normalization at step 2615. These batch normalizations are concatenated in a first concatenation at step 2620. A second set of 2D convolutions is performed at step 2625. This may be performed three-fold in 2625a, 2625b, 2625c. Each respective conv2d (2625a, 2625b, 2625c) is then normalized in a batch normalization at step 2630. These batch normalizations are concatenated in a second concatenation at step 2635. A third set of 2D convolutions is performed at step 2640. This may be performed three-fold in 2640a, 2640b, 2640c. Each respective conv2d (2640a, 2640b, 2640c) is then normalized in a batch normalization at step 2645. These batch normalizations are concatenated in a third concatenation at step 2650. A fourth set of 2D convolutions is performed at step 2655. This may be performed three-fold in 2655a, 2655b, 2655c. Each respective conv2d (2655a, 2655b, 2655c) is then normalized in a batch normalization at step 2660. These batch normalizations are concatenated in a fourth concatenation at step 2665. The data is then flattened in a first flattening at step 2670 and output to a first dense filtering at step 2675. Meanwhile, a second input layer is provided at step 2680, which feeds a second dense filter at step 2685. The outputs of the first dense filter at step 2675 and the second dense filter at step 2685 are concatenated at step 2690, and the result is then provided to a third dense filter at step 2695.
The output includes a per-sample classification of 0, 1, 2, where 0 is a normal signal, 1 is contact noise and 2 is deflection noise.
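The structure of model 2600 can be sketched with the Keras functional API as follows, assuming TensorFlow is available; the filter counts, kernel sizes, and both input shapes are illustrative assumptions, as the text specifies only the arrangement of convolution, batch normalization, concatenation, and dense stages:

```python
import numpy as np
from tensorflow.keras import layers, models

def inception_stage(x, filters=8):
    """Three parallel conv2d branches, each batch-normalized, then concatenated."""
    branches = []
    for k in (1, 3, 5):  # three-fold conv2d (e.g., 2610a/b/c)
        b = layers.Conv2D(filters, (k, k), padding="same")(x)
        branches.append(layers.BatchNormalization()(b))  # batch normalization (e.g., 2615)
    return layers.Concatenate()(branches)                # concatenation (e.g., 2620)

signal_in = layers.Input(shape=(64, 4, 1))  # input1 (2605): assumed ICEG window
x = signal_in
for _ in range(4):                          # the four conv/normalize/concatenate stages
    x = inception_stage(x)
x = layers.Flatten()(x)                     # first flattening (2670)
x = layers.Dense(32, activation="relu")(x)  # first dense (2675)

pos_in = layers.Input(shape=(6,))                 # second input (2680): position/movement
p = layers.Dense(16, activation="relu")(pos_in)   # second dense (2685)

merged = layers.Concatenate()([x, p])             # concatenation of dense outputs (2690)
out = layers.Dense(3, activation="softmax")(merged)  # third dense (2695): 3-class output

model_2600 = models.Model([signal_in, pos_in], out)
probs = model_2600.predict([np.zeros((1, 64, 4, 1)), np.zeros((1, 6))], verbose=0)
```

The two-input design lets catheter position and movement features be merged with the learned spectral-spatial features before the final classification.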

[0180] FIG. 27 illustrates a second learning phase 2700 that may be implemented to capture the methods described herein. Method 2700 includes recording the noise in the laboratory in step 2710. In step 2720, noise samples are added to clean intracardiac signals. In step 2730, a model is built. This model may include a neural network, for example. Steps 2710, 2720 and 2730 have been described herein above for various additive signals. At step 2740, the model may be evaluated on general data sets and may be compared to manual annotations. This retraining of the model provides a second learning phase.

[0181] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.