868 results for ECG classification
Abstract:
In this paper, an attempt has been made to accurately determine the number of Premature Ventricular Contraction (PVC) cycles in a given Electrocardiogram (ECG) using a wavelet constructed from multiple Gaussian functions. It is difficult to assess the ECGs of patients who are continuously monitored over a long period of time; hence the proposed classification method will help doctors determine the severity of PVC in a patient. Principal Component Analysis (PCA) and a simple classifier have been used in addition to the specially developed wavelet transform. The proposed wavelet has been designed from multiple Gaussian functions which, when summed, resemble a normal ECG waveform; the number of Gaussians used depends on the number of peaks present in a normal ECG. The developed wavelet satisfies all the properties of a traditional continuous wavelet and was optimized using a genetic algorithm (GA). ECG records from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database have been used for validation. On the 8694 ECG cycles used for evaluation, the classification algorithm achieved an accuracy of 97.77%. To compare the performance of the new wavelet, classification was also performed using standard wavelets such as Morlet, Meyer, bior3.9, db5, db3, sym3 and Haar. The new wavelet outperforms the rest.
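The core idea, a candidate mother wavelet built as a sum of Gaussians placed at the P, Q, R, S and T peak positions, can be sketched as below. All peak parameters here are illustrative placeholders, not the GA-optimised values from the paper:

```python
import numpy as np

def gaussian(t, mu, sigma, amp):
    """A single Gaussian component of the candidate mother wavelet."""
    return amp * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def ecg_like_wavelet(t):
    """Sum of five Gaussians mimicking the P, Q, R, S and T peaks.

    The (position, width, amplitude) triples are hypothetical; in the
    paper they would be tuned by a genetic algorithm.
    """
    peaks = [(-0.40, 0.08,  0.15),   # P wave
             (-0.12, 0.03, -0.10),   # Q
             ( 0.00, 0.05,  1.00),   # R
             ( 0.12, 0.03, -0.20),   # S
             ( 0.45, 0.10,  0.30)]   # T wave
    w = sum(gaussian(t, mu, s, a) for mu, s, a in peaks)
    # A valid wavelet must have zero mean (admissibility condition),
    # so remove the DC component before use.
    return w - w.mean()

t = np.linspace(-1, 1, 1001)
psi = ecg_like_wavelet(t)
```

Subtracting the mean enforces the zero-mean admissibility property the abstract mentions; the remaining continuous-wavelet properties (e.g. finite energy) follow from the Gaussian components decaying rapidly.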
Abstract:
An important tool for heart disease diagnosis is the analysis of electrocardiogram (ECG) signals, given the non-invasive nature and simplicity of the ECG exam. Depending on the application, ECG data analysis consists of steps such as preprocessing, segmentation, feature extraction and classification, aiming to detect cardiac arrhythmias (i.e., cardiac rhythm abnormalities). To make the cardiac arrhythmia classification process fast and accurate, we apply and analyze a recent and robust supervised graph-based pattern recognition technique, the optimum-path forest (OPF) classifier. To the best of our knowledge, this is the first time the OPF classifier has been applied to the ECG heartbeat classification task. We then compare the performance (in terms of training and testing time, accuracy, specificity, and sensitivity) of the OPF classifier with that of three other well-known expert system classifiers, i.e., the support vector machine (SVM), a Bayesian classifier and a multilayer artificial neural network (MLP), using features extracted by six main approaches considered in the literature for ECG arrhythmia analysis. In our experiments, we use the MIT-BIH Arrhythmia Database and the evaluation protocol recommended by the Association for the Advancement of Medical Instrumentation. A discussion of the obtained results shows that the OPF classifier offers robust performance, with no need for parameter setup, as well as high accuracy at an extremely low computational cost. Moreover, on average, the OPF classifier outperformed the MLP and SVM classifiers in terms of classification time and accuracy, and performed quite similarly to the Bayesian classifier, showing itself to be a promising technique for ECG signal analysis. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected on a mobile platform such as a smartphone. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks, and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is relevant to mobile applications, as well as an improvement in the neural network's generalization ability due to the more parsimonious representation of the signal at its inputs.
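As an illustration of the kind of decomposition involved, the sketch below computes a multi-level Haar DWT in plain NumPy. The paper's actual filter banks and coefficient filtering are not specified here, so Haar is used purely as a stand-in:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the orthonormal Haar DWT: approximation + detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def dwt_features(x, levels=3):
    """Concatenate the detail coefficients of each level plus the final
    approximation, yielding a feature vector for a classifier."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        coeffs.append(d)
    coeffs.append(a)
    return np.concatenate(coeffs)

# A sine period stands in for one segmented heartbeat (hypothetical input)
beat = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
features = dwt_features(beat)
```

Because the transform is orthonormal, signal energy is preserved across the coefficients; the memory savings in a mobile setting would come from discarding small coefficients, a standard practice not detailed in the abstract.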
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease in which the heart muscle is partially thickened and blood flow is, potentially fatally, obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests, given the cost and time involved in having an expert cardiologist interpret the results of these tests. Initially, we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past to classify individual heartbeats into different types of arrhythmia, as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients to two groups: HCM vs. non-HCM. The challenges we address include missing ECG waves in one or more leads and the dimensionality of a large feature set, for which we propose imputation and feature-selection methods. We develop heartbeat classifiers employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM.
The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
Abstract:
Long-term electrocardiography (ECG) featuring adequate atrial and ventricular signal quality is highly desirable. Routinely used surface leads are limited in atrial signal sensitivity and recording capability, impeding complete ECG delineation, e.g. in the presence of supraventricular arrhythmias. Long-term esophageal ECG might overcome these limitations but requires a dedicated lead system and recorder design. To this end, we analysed multiple-lead esophageal ECGs with respect to signal quality, describing the ECG waves as a function of insertion level, interelectrode distance, electrode shape and amplifier input range. The results derived from clinical data show that two bipolar esophageal leads, an atrial lead with a short (15 mm) interelectrode distance and a ventricular lead with a long (80 mm) interelectrode distance, provide non-inferior ventricular signal strength and superior atrial signal strength compared to standard surface lead II. A high atrial signal slope in particular is observed with the atrial esophageal lead. The proposed esophageal lead system, in combination with an increased recorder input range of ±20 mV, minimizes signal loss due to the excessive electrode motion typically observed in esophageal ECGs. The design proposal might help to standardize long-term esophageal ECG registrations and facilitate novel ECG classification systems based on the independent detection of ventricular and atrial electrical activity.
Abstract:
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect-reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as training inputs to a neural network classifier.
Abstract:
For clinical use, it is important in electrocardiogram (ECG) signal analysis to detect not only the centre of the P wave, the QRS complex and the T wave, but also time intervals such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, or the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov-chain Monte Carlo statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study investigating the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal, in which details such as wave morphology and noise levels are variable.
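A minimal illustration of the idea is to fit the parameters of a single Gaussian "wave" to a noisy synthetic signal with a random-walk Metropolis sampler. The toy signal, starting point and proposal widths below are all assumptions for the sketch, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "T wave": a Gaussian bump plus noise (illustrative stand-in)
t = np.linspace(0.0, 1.0, 200)
true_mu, true_sigma, true_amp = 0.6, 0.05, 0.5
clean = true_amp * np.exp(-((t - true_mu) ** 2) / (2 * true_sigma ** 2))
obs = clean + 0.02 * rng.standard_normal(t.size)

def log_likelihood(params):
    """Gaussian-noise log-likelihood of the wave parameters (mu, sigma, amp)."""
    mu, sigma, amp = params
    if sigma <= 0:
        return -np.inf
    model = amp * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
    return -np.sum((obs - model) ** 2) / (2 * 0.02 ** 2)

# Random-walk Metropolis: propose a small perturbation, accept with
# probability min(1, likelihood ratio)
params = np.array([0.5, 0.1, 0.3])        # deliberately wrong starting guess
ll = log_likelihood(params)
samples = []
for _ in range(5000):
    proposal = params + rng.normal(0.0, [0.01, 0.005, 0.01])
    ll_prop = log_likelihood(proposal)
    if np.log(rng.random()) < ll_prop - ll:
        params, ll = proposal, ll_prop
    samples.append(params)

# Posterior mean over the second half of the chain (first half = burn-in)
est = np.mean(samples[2500:], axis=0)
```

The posterior mean of the wave centre, width and amplitude should approach the true values (0.6, 0.05, 0.5); per-beat feature extraction would repeat this fit for each wave of each beat.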
Abstract:
This project covers the design and evaluation of two methods for suppressing, in the electrocardiogram (ECG), the interference generated by the chest compressions delivered by the LUCAS mechanical device during cardiopulmonary resuscitation. The objective is to find a method that effectively removes the artefact generated in the ECG and allows a reliable diagnosis of the cardiac rhythm. An effective method would make it unnecessary to interrupt the resuscitation massage for correct rhythm analysis, which would increase the probability of successful resuscitation. To carry out the project, a dedicated database was built from recordings of out-of-hospital cardiac arrests. This new database contains 410 segments from 86 patients; every episode is 30 seconds long, during which the patient receives chest compressions. In addition, a graphical interface was developed to characterize the artefact-suppression methods. It displays the ECG, the thoracic impedance, and the ECG after artefact removal over time. Using this tool, the recordings were processed with an adaptive filter and a fixed-coefficient filter. The methods were evaluated in terms of the sensitivity and specificity of the rhythm classification algorithm applied to the filtered ECG signals. The main contribution of the project is therefore the development of a powerful and effective tool for evaluating methods that suppress the artefact caused in the ECG by chest compressions during cardiopulmonary resuscitation, and for the subsequent diagnosis. The tool can be used to analyse resuscitation episodes from any source and can integrate new artefact-suppression methods.
Abstract:
Final project submitted for the degree of Master in Electronics and Telecommunications Engineering.
Abstract:
CONTEXT: In populations of older adults, prediction of coronary heart disease (CHD) events through traditional risk factors is less accurate than in middle-aged adults. Electrocardiographic (ECG) abnormalities are common in older adults and might be of value for CHD prediction. OBJECTIVE: To determine whether baseline ECG abnormalities or development of new and persistent ECG abnormalities are associated with increased CHD events. DESIGN, SETTING, AND PARTICIPANTS: A population-based study of 2192 white and black older adults aged 70 to 79 years from the Health, Aging, and Body Composition Study (Health ABC Study) without known cardiovascular disease. Adjudicated CHD events were collected over 8 years between 1997-1998 and 2006-2007. Baseline and 4-year ECG abnormalities were classified according to the Minnesota Code as major and minor. Using Cox proportional hazards regression models, the addition of ECG abnormalities to traditional risk factors was examined for predicting CHD events. MAIN OUTCOME MEASURE: Adjudicated CHD events (acute myocardial infarction [MI], CHD death, and hospitalization for angina or coronary revascularization). RESULTS: At baseline, 276 participants (13%) had minor and 506 (23%) had major ECG abnormalities. During follow-up, 351 participants had CHD events (96 CHD deaths, 101 acute MIs, and 154 hospitalizations for angina or coronary revascularizations). Both baseline minor and major ECG abnormalities were associated with an increased risk of CHD after adjustment for traditional risk factors (17.2 per 1000 person-years among those with no abnormalities; 29.3 per 1000 person-years and hazard ratio [HR] 1.35 [95% CI, 1.02-1.81] for minor abnormalities; and 31.6 per 1000 person-years and HR 1.51 [95% CI, 1.20-1.90] for major abnormalities).
When ECG abnormalities were added to a model containing traditional risk factors alone, 13.6% of intermediate-risk participants with both major and minor ECG abnormalities were correctly reclassified (overall net reclassification improvement [NRI], 7.4%; 95% CI, 3.1%-19.0%; integrated discrimination improvement, 0.99%; 95% CI, 0.32%-2.15%). After 4 years, 208 participants had new and 416 had persistent abnormalities. Both new and persistent ECG abnormalities were associated with an increased risk of subsequent CHD events (HR, 2.01; 95% CI, 1.33-3.02; and HR, 1.66; 95% CI, 1.18-2.34; respectively). When added to the Framingham Risk Score, the NRI was not significant (5.7%; 95% CI, -0.4% to 11.8%). CONCLUSIONS: Major and minor ECG abnormalities among older adults were associated with an increased risk of CHD events. Depending on the model, adding ECG abnormalities was associated with improved risk prediction beyond traditional risk factors.
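For reference, the categorical net reclassification improvement reported above is computed from how events and non-events move between risk categories under the two models. A compact sketch of that standard definition, with made-up toy data:

```python
import numpy as np

def nri(old_cat, new_cat, event):
    """Categorical net reclassification improvement.

    Events moving to a higher risk category and non-events moving to a
    lower one count as improvements; opposite moves count against.
    """
    old_cat = np.asarray(old_cat)
    new_cat = np.asarray(new_cat)
    event = np.asarray(event)
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, ne = event == 1, event == 0
    return ((up[ev].mean() - down[ev].mean())
            + (down[ne].mean() - up[ne].mean()))

# Toy risk-category assignments (0 = low, 1 = intermediate, 2 = high)
old = [0, 0, 1, 1, 2, 2]
new = [1, 0, 2, 0, 2, 1]
ev  = [1, 0, 1, 0, 1, 0]
result = nri(old, new, ev)
```

In the toy data, two of three events move up and two of three non-events move down, so the NRI is 2/3 + 2/3 = 4/3; real studies report it as a percentage with confidence intervals, as in the abstract.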
Abstract:
This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input and 95.9% accuracy when the input signals were decomposed using db6 wavelets. The linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised wavelet coefficients and 95.5% with db6 wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients, obtained through an optimised discrete wavelet transform, considerably facilitates the task of classifying non-stationary time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network. (c) 2005 Elsevier B.V. All rights reserved.
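The "neural network with evolved weights" idea can be sketched on a toy two-class problem: a genetic algorithm searches the weight space of a single-layer perceptron directly, using classification accuracy as the fitness. The data, population size and mutation scale below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data standing in for wavelet-coefficient feature vectors
X = np.vstack([rng.normal(-1.0, 0.5, (50, 4)),
               rng.normal( 1.0, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

def accuracy(w):
    """Fitness: accuracy of a perceptron with weights w[:4] and bias w[4]."""
    pred = (X @ w[:4] + w[4] > 0).astype(int)
    return np.mean(pred == y)

# Minimal genetic algorithm over the 5-dimensional weight vector
pop = rng.normal(0.0, 1.0, (30, 5))
for _ in range(40):
    fitness = np.array([accuracy(w) for w in pop])
    order = np.argsort(fitness)[::-1]
    parents = pop[order[:10]]                        # keep the fittest
    children = parents[rng.integers(0, 10, 20)] \
        + rng.normal(0.0, 0.1, (20, 5))              # mutated offspring
    pop = np.vstack([parents, children])             # elitism + children

best = pop[np.argmax([accuracy(w) for w in pop])]
```

Evolving weights avoids gradient computation entirely, which is one reason the GA-trained network in the paper could be compared directly against back-propagation.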
Abstract:
Detection of arrhythmic atrial beats in surface ECGs can be challenging when they are masked by the R or T wave, or do not affect the RR interval. Here, we present a solution using a high-resolution esophageal long-term ECG that offers a detailed view of the atrial electrical activity. The recorded ECG shows atrial ectopic beats with long coupling intervals, which can only be successfully classified using additional morphology criteria. Esophageal high-resolution ECGs provide this information, whereas surface long-term ECGs show poor atrial signal quality. This new method is a promising tool for long-term rhythm monitoring with software-based automatic classification of atrial beats.
Abstract:
Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to Heart Rate Variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work analyses the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we also consider the influence of training-set size, classifier, classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects within the PhysioNet open-access database. Public-domain software was used for fiducial point detection. The results suggest that QT correction is indeed required to improve performance; however, there is no clear winner among the seven explored QT correction approaches (identification rate between 0.97 and 0.99). The MultiLayer Perceptron and Support Vector Machine appeared to have better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No strong influence of training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
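Two of the best-known population-model corrections (Bazett and Fridericia) illustrate what such QT correction looks like; whether these two are among the seven methods evaluated is not stated in the abstract:

```python
import numpy as np

def qt_corrected(qt_ms, rr_s, method="bazett"):
    """Heart-rate correction of the QT interval using population models.

    Bazett:     QTc = QT / sqrt(RR)
    Fridericia: QTc = QT / cbrt(RR)
    with QT in milliseconds and the RR interval in seconds.
    """
    rr = np.asarray(rr_s, dtype=float)
    if method == "bazett":
        return qt_ms / np.sqrt(rr)
    if method == "fridericia":
        return qt_ms / np.cbrt(rr)
    raise ValueError(f"unknown correction method: {method}")

# At 60 bpm (RR = 1 s) both corrections leave the QT interval unchanged;
# at faster rates (RR < 1 s) the corrected QT is longer than the raw QT.
qtc_rest = qt_corrected(400.0, 1.0)
qtc_fast = qt_corrected(400.0, 0.8, "fridericia")
```

Normalising QT against heart rate in this way removes much of the HRV-driven intra-individual variation that the abstract identifies as harmful to identification performance.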