971 results for chest compression rate
Abstract:
Quality of cardiopulmonary resuscitation (CPR) improves with the use of CPR feedback devices. Most feedback devices integrate the acceleration twice to estimate compression depth, but they require additional sensors or processing techniques to compensate for the large displacement drifts caused by integration. This study introduces an accelerometer-based method that avoids integration by applying spectral techniques to short-duration acceleration intervals. We used a manikin placed on a hard surface, a sternal triaxial accelerometer, and a photoelectric distance sensor (gold standard). Twenty volunteers provided 60 s of continuous compressions to test various rates (80-140 min(-1)), depths (3-5 cm), and accelerometer misalignment conditions. A total of 320 records with 35312 compressions were analysed. The global root-mean-square errors in rate and depth were below 1.5 min(-1) and 2 mm for analysis intervals between 2 and 5 s. For 3 s analysis intervals the 95% levels of agreement between the method and the gold standard were within -1.64-1.67 min(-1) and -1.69-1.72 mm, respectively. Accurate feedback on chest compression rate and depth is feasible by applying spectral techniques to the acceleration. The method avoids additional techniques to compensate for the integration displacement drift, improving accuracy and simplifying current accelerometer-based devices.
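The spectral estimation described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a pure sinusoidal compression model d(t) = D·sin(2πft), for which a(t) = -D(2πf)²·sin(2πft), so the rate follows from the dominant spectral peak and the depth from its amplitude divided by (2πf)²; the window length, sampling rate, and search band below are arbitrary choices.

```python
import numpy as np

def rate_and_depth_from_acceleration(acc, fs):
    """Estimate compression rate (min^-1) and peak-to-peak depth (mm)
    from a short acceleration window, assuming a sinusoidal model."""
    n = len(acc)
    win = np.hanning(n)
    spectrum = np.fft.rfft((acc - acc.mean()) * win)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 3.0)          # 60-180 min^-1 search band
    k = np.argmax(np.abs(spectrum) * band)          # dominant in-band peak
    f0 = freqs[k]
    # Single-sided amplitude, corrected for the Hann window's coherent gain
    amp_acc = 2.0 * np.abs(spectrum[k]) / win.sum()
    depth_amp = amp_acc / (2.0 * np.pi * f0) ** 2   # metres, sinusoid model
    return 60.0 * f0, 2000.0 * depth_amp            # rate (min^-1), depth (mm)

# Synthetic check: 50 mm peak-to-peak compressions at 120 min^-1 for 3 s
fs, f0, D = 250.0, 2.0, 0.025
t = np.arange(0, 3.0, 1.0 / fs)
acc = -D * (2 * np.pi * f0) ** 2 * np.sin(2 * np.pi * f0 * t)
rate, depth = rate_and_depth_from_acceleration(acc, fs)
```

On this synthetic signal the estimate recovers the rate and depth without ever integrating the acceleration, which is the point of the method described above.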
Abstract:
This study shows that 10-11 year-olds are capable of effective CPR after a single 2-hour training session using the ABC for Life programme. However, they perform more effective CPR when using a chest compression to ventilation ratio of 15:2 rather than 30:2.
Abstract:
Interruptions in cardiopulmonary resuscitation (CPR) compromise defibrillation success. However, CPR must be interrupted to analyze the rhythm because although current methods for rhythm analysis during CPR have high sensitivity for shockable rhythms, the specificity for nonshockable rhythms is still too low. This paper introduces a new approach to rhythm analysis during CPR that combines two strategies: a state-of-the-art CPR artifact suppression filter and a shock advice algorithm (SAA) designed to optimally classify the filtered signal. Emphasis is on designing an algorithm with high specificity. The SAA includes a detector for low electrical activity rhythms to increase the specificity, and a shock/no-shock decision algorithm based on a support vector machine classifier using slope and frequency features. For this study, 1185 shockable and 6482 nonshockable 9-s segments corrupted by CPR artifacts were obtained from 247 patients suffering out-of-hospital cardiac arrest. The segments were split into a training and a test set. For the test set, the sensitivity and specificity for rhythm analysis during CPR were 91.0% and 96.6%, respectively. This new approach shows an important increase in specificity without compromising the sensitivity when compared to previous studies.
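The shock/no-shock stage can be illustrated with a toy support vector machine trained on slope and frequency features. Everything here is an assumption for illustration: the synthetic "VF-like" and "organized" surrogates, the two features, and the from-scratch subgradient SVM stand in for the paper's filtered clinical ECG, feature set, and classifier.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, dur = 250.0, 9.0                       # 9-s segments, as in the study
t = np.arange(0, dur, 1.0 / fs)

def slope_freq_features(x):
    """Two illustrative features: dominant frequency and mean absolute slope."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    return [freqs[np.argmax(spec)], np.mean(np.abs(np.diff(x))) * fs]

def vf_like():                             # fast disorganized surrogate (shock)
    return np.sin(2 * np.pi * rng.uniform(4, 7) * t) + 0.3 * rng.standard_normal(t.size)

def organized():                           # slow pulse-train surrogate (no shock)
    x = 0.05 * rng.standard_normal(t.size)
    x[:: int(fs / rng.uniform(1.0, 1.5))] += 3.0
    return x

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Batch subgradient descent on the soft-margin hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        m = y * (X @ w + b) < 1            # margin-violating samples
        w -= lr * (lam * w - (y[m, None] * X[m]).sum(0) / len(y))
        b += lr * y[m].sum() / len(y)
    return w, b

def make_set(n):
    X = np.array([slope_freq_features(vf_like()) for _ in range(n)]
                 + [slope_freq_features(organized()) for _ in range(n)])
    return X, np.array([1.0] * n + [-1.0] * n)

Xtr, ytr = make_set(30)
mu, sd = Xtr.mean(0), Xtr.std(0)           # standardize features
w, b = train_linear_svm((Xtr - mu) / sd, ytr)
Xte, yte = make_set(10)
acc = np.mean(np.sign((Xte - mu) / sd @ w + b) == yte)
```

On these easily separable surrogates the toy classifier reaches near-perfect accuracy; the 91.0%/96.6% figures above reflect the much harder problem of classifying real, filter-residual-corrupted ECG.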
Abstract:
Background Quality of cardiopulmonary resuscitation (CPR) is key to increasing survival from cardiac arrest. Providing chest compressions with adequate rate and depth is difficult even for well-trained rescuers. Real-time feedback devices are intended to help enhance chest compression quality. These devices are typically based on the double integration of the acceleration to obtain the chest displacement during compressions. The integration process is inherently unstable and leads to important errors unless boundary conditions are applied for each compression cycle. Commercial solutions use additional reference signals to establish these conditions, requiring additional sensors. Our aim was to study the accuracy of three methods based solely on the acceleration signal to provide feedback on the compression rate and depth. Materials and Methods We simulated a CPR scenario with several volunteers grouped in pairs providing chest compressions on a resuscitation manikin. Different target rates (80, 100, 120, and 140 compressions per minute) and a target depth of at least 50 mm were indicated. The manikin was equipped with a displacement sensor. The accelerometer was placed between the rescuer's hands and the manikin's chest. We designed three alternatives to direct integration based on different principles (linear filtering, analysis of velocity, and spectral analysis of acceleration). We evaluated their accuracy by comparing the estimated depth and rate with the values obtained from the reference displacement sensor. Results The median (IQR) percent error was 5.9% (2.8-10.3), 6.3% (2.9-11.3), and 2.5% (1.2-4.4) for depth and 1.7% (0.0-2.3), 0.0% (0.0-2.0), and 0.9% (0.4-1.6) for rate, respectively. Depth accuracy depended on the target rate (p < 0.001) and on the rescuer pair (p < 0.001) within each method. Conclusions Accurate feedback on chest compression depth and rate during CPR is possible using exclusively the chest acceleration signal.
The algorithm based on spectral analysis showed the best performance. Despite these encouraging results, further research should be conducted to assess the performance of these algorithms with clinical data.
Abstract:
This study aims to determine whether the British Heart Foundation (BHF) PocketCPR application can improve the depth and rate of chest compressions, and can therefore be confidently recommended for bystander use. 118 candidates were recruited into a randomised crossover manikin trial. Each candidate performed CPR for two minutes without instruction, or performed chest compressions using the PocketCPR application. Candidates then performed a further two minutes of CPR in the opposite arm of the trial. The number of chest compressions performed improved when PocketCPR was used compared to when it was not (44.28% vs 40.57%, P < 0.001). The number of chest compressions performed to the required depth was also higher in the PocketCPR group (90.86 vs 66.26). The BHF PocketCPR application improved the percentage of chest compressions that were performed to the required depth. Despite this, more work is required to develop a feedback device that can improve bystander CPR without creating delay.
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
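The vector-quantization stage discussed in this thesis can be sketched with a toy LBG/k-means codebook. This is a simplification (full-search, single codebook, synthetic 2-D data, evenly spaced initialisation), not the product-code structure or the training algorithms the thesis actually evaluates.

```python
import numpy as np

def train_codebook(vectors, k, iters=20):
    """Toy LBG/k-means codebook training: alternate nearest-codeword
    assignment and centroid update."""
    cb = vectors[:: max(1, len(vectors) // k)][:k].astype(float)
    for _ in range(iters):
        idx = np.argmin(((vectors[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                cb[j] = vectors[idx == j].mean(axis=0)
    return cb

def quantise(vectors, cb):
    """Nearest-codeword search; the returned indices are what a lossless
    entropy coder (the thesis' second stage) would then compress further."""
    idx = np.argmin(((vectors[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
    return idx, cb[idx]

# Synthetic 2-D training set: four tight clusters, 50 vectors each
rng = np.random.default_rng(0)
centres = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0], [5.0, 0.0]])
data = np.concatenate([c + 0.1 * rng.standard_normal((50, 2)) for c in centres])
cb = train_codebook(data, k=4)
idx, approx = quantise(data, cb)
mse = float(np.mean(((data - approx) ** 2).sum(-1)))
```

Each vector is replaced by a 2-bit index with small distortion; modelling the statistics of those indices for lossless coding is what yields the further compression gain described above.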
Abstract:
INTRODUCTION: During mechanical ventilation (MV), the airways may accumulate secretions. Patients in MV are submitted to Respiratory Therapy (RT) and tracheal aspiration, alone or in combination, to eliminate these secretions. OBJECTIVE: The objective was to compare the effects of different bronchial hygiene protocols on blood pressure, heart rate, oxygen saturation and respiratory rate of patients undergoing MV. MATERIALS AND METHODS: We conducted a prospective, randomized, controlled crossover study, with an intentional non-probabilistic sample in the Medical School Hospital of Marília. We included patients in invasive MV who were submitted to three different bronchial hygiene protocols: PP - physiotherapy protocol (manual chest compression and manual hyperinflation); AP - aspiration protocol; and PP + AP. Respiratory rate, systolic blood pressure (SBP), diastolic blood pressure (DBP), oxygen saturation and heart rate were evaluated at three moments: before (M1), immediately after (M2) and 30 minutes after (M3) each protocol. The differences among protocols and times were assessed using ANOVA and the post hoc Student-Newman-Keuls test (p < 0.05). RESULTS: We studied eighteen patients (71.2 ± 13.9 years old) with 15.1 ± 17.7 days of MV. There were no differences among protocols. There was a significant decrease in SBP (p = 0.0261) and DBP (p = 0.0119) from M2 to M3 in the aspiration protocol. CONCLUSION: There was a decrease in blood pressure in MV patients 30 minutes after aspiration and no change in the other variables, and there was no difference among protocols.
Abstract:
PURPOSE: Computer-based feedback systems for assessing the quality of cardiopulmonary resuscitation (CPR) are widely used these days. Recordings usually involve compression and ventilation dependent variables. Thorax compression depth, sufficient decompression and correct hand position are displayed but interpreted independently of one another. We aimed to generate a parameter which represents all the combined relevant parameters of compression to provide a rapid assessment of the quality of chest compression: the effective compression ratio (ECR). METHODS: The following parameters were used to determine the ECR: compression depth, correct hand position, correct decompression and the proportion of time used for chest compressions compared to the total time spent on CPR. Based on the ERC guidelines, we calculated that guideline compliant CPR (30:2) has a minimum ECR of 0.79. To calculate the ECR, we expanded the previously described software solution. In order to demonstrate the usefulness of the new ECR parameter, we first performed a PubMed search for studies that included correct compression and no-flow time, after which we calculated the new parameter, the ECR. RESULTS: The PubMed search revealed 9 trials. Calculated ECR values ranged between 0.03 (for a basic life support [BLS] study, two helpers, no feedback) and 0.67 (BLS with feedback from the 6th minute). CONCLUSION: ECR enables rapid, meaningful assessment of CPR and simplifies the comparability of studies as well as the individual performance of trainees. The structure of the software solution allows it to be easily adapted to any manikin, CPR feedback devices and different resuscitation guidelines (e.g. ILCOR, ERC).
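The abstract lists the ingredients of the ECR but not its exact formula. One plausible reading, shown purely as a hypothetical sketch, multiplies the fraction of fully correct compressions by the hands-on time fraction; the function name and the boolean quality flags are invented for illustration.

```python
def effective_compression_ratio(compressions, hands_on_time, total_time):
    """Hypothetical ECR sketch: fraction of compressions satisfying every
    quality flag (depth, hand position, complete release), scaled by the
    proportion of CPR time actually spent compressing."""
    if not compressions or total_time <= 0:
        return 0.0
    correct = sum(c["depth_ok"] and c["hand_position_ok"] and c["release_ok"]
                  for c in compressions)
    return (correct / len(compressions)) * (hands_on_time / total_time)

# Guideline-style 30:2 cycle: 30 perfect compressions at 100 min^-1 take 18 s;
# assuming ~5 s for the two ventilations gives a 23 s cycle.
perfect = [{"depth_ok": True, "hand_position_ok": True, "release_ok": True}] * 30
ecr = effective_compression_ratio(perfect, hands_on_time=18.0, total_time=23.0)
```

Under these assumed timings the sketch yields 18/23 ≈ 0.78, in the vicinity of the 0.79 minimum for guideline-compliant 30:2 CPR cited above.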
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual value of such components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels and then the error between the predictions and the actual values are logarithmically quantised. The main advantage of LHE is that although it is capable of achieving a low-bit rate encoding with high quality results in terms of peak signal-to-noise ratio (PSNR) and image quality metrics with full-reference (FSIM) and non-reference (blind/referenceless image spatial quality evaluator), its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, where the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit per pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000 but being more computationally efficient.
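The core idea of logarithmically quantising prediction errors can be shown in one dimension. The hop magnitudes and the left-neighbour predictor below are illustrative stand-ins for LHE's actual hop tables and its luminance/chrominance prediction from surrounding pixels.

```python
import numpy as np

HOPS = (0, 4, 12, 28, 60)   # illustrative logarithmically spaced hop magnitudes,
                            # not the actual LHE quantiser tables

def log_quantise(err):
    """Snap a prediction error to the nearest signed logarithmic hop."""
    k = int(np.argmin([abs(h - abs(err)) for h in HOPS]))
    return int(np.sign(err)) * HOPS[k]

def encode_row(row):
    """1-D LHE-style sketch: predict each pixel from its *reconstructed*
    left neighbour and keep only the quantised prediction error."""
    recon, codes = [int(row[0])], []
    for px in row[1:]:
        q = log_quantise(int(px) - recon[-1])
        codes.append(q)
        recon.append(int(np.clip(recon[-1] + q, 0, 255)))
    return codes, recon

codes, recon = encode_row([100, 102, 110, 140, 135])
```

Small errors on smooth regions map to small hops (cheap to entropy-code), while large edges are still tracked by the big hops, mirroring the Weber–Fechner rationale described above.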
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those which could be represented by the binary operation in the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source if one employs a homomorphic encoder. Finally, we present certain non-homomorphic encoders, which also are suitable in the context of function computation over non-Abelian group sources and provide rate regions achieved by these encoders.
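The function-computation motivation can be made concrete with the smallest possible homomorphic encoder: reduction mod 2 maps Z_8 onto Z_2 and commutes with the group operation, so a decoder that only needs the parity of the group sum can work from 1-bit encodings instead of the full 3-bit symbols. This toy case is my own illustration, not an example from the paper.

```python
def phi(x):
    """Homomorphic encoder Z_8 -> Z_2 (reduction mod 2)."""
    return x % 2

# phi commutes with the group operation: phi(x + y) = phi(x) + phi(y),
# so the parity of the sum is computable from the compressed symbols alone.
ok = all(phi((x + y) % 8) == (phi(x) + phi(y)) % 2
         for x in range(8) for y in range(8))
```

Because the encoder is a homomorphism, each terminal can compress independently and the decoder still evaluates the group operation exactly on the compressed alphabet.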
Abstract:
We consider the problem of compression of a non-Abelian source. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can often improve upon the compression rate required by the Slepian-Wolf bound. Let G be a non-Abelian group having center Z(G). We show here that it is impossible to compress a source with symbols drawn from G when Z(G) is trivial if one employs a homomorphic encoder and a typical-set decoder. We provide achievable upper bounds on the minimum rate required to compress a non-Abelian group with non-trivial center. Also, in a two-source setting, we provide achievable upper bounds for compression of any non-Abelian group, using a non-homomorphic encoder.
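The trivial-centre condition in this impossibility result can be checked computationally for the smallest non-Abelian group. The sketch below (my own illustration) enumerates S_3 and verifies that only the identity commutes with every element.

```python
from itertools import permutations

def compose(p, q):
    """Compose permutations: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def center(group):
    """Elements that commute with every element of the group."""
    return [g for g in group if all(compose(g, h) == compose(h, g) for h in group)]

s3 = list(permutations(range(3)))    # S_3, the smallest non-Abelian group
z_s3 = center(s3)                    # only the identity remains
```

Since Z(S_3) is trivial, S_3 is exactly the kind of group alphabet for which, by the result above, a full-support source admits no homomorphic compression.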
Abstract:
Survival from out-of-hospital cardiac arrest depends largely on two factors: early cardiopulmonary resuscitation (CPR) and early defibrillation. CPR must be interrupted for a reliable automated rhythm analysis because chest compressions induce artifacts in the ECG. Unfortunately, interrupting CPR adversely affects survival. In the last twenty years, research has focused on designing methods for analysis of the ECG during chest compressions. Most approaches are based either on adaptive filters to remove the CPR artifact or on robust algorithms which directly diagnose the corrupted ECG. In general, all the methods report low specificity values when tested on short ECG segments, but how to evaluate the real impact of continuous rhythm analysis during CPR on CPR delivery is still unknown. Recently, researchers have proposed a new methodology to measure this impact. Moreover, new strategies for fast rhythm analysis during ventilation pauses or high-specificity algorithms have been reported. Our objective is to present a thorough review of the field as the starting point for these latest developments and to underline the open questions and future lines of research to be explored in the coming years.
Abstract:
[EU]This document contains the report and annexes of the Bachelor's thesis entitled "Analysis of Chest Recoil During Cardiopulmonary Resuscitation Manoeuvres Performed on a Manikin". It presents the objectives achieved through that work, how they were achieved, and the steps required to carry them out. Feedback devices for cardiopulmonary resuscitation provide real-time information on the quality of the resuscitation manoeuvres. The project is aimed at research into improving the quality of the parameters of these feedback devices, focusing in particular on methods for determining whether the chest returns fully to its resting position. On the one hand, a study of the accurate calculation of the true compression depth is carried out, examining the different options and analysing the different methods. On the other hand, research is conducted on a parameter to warn the rescuer during feedback (for example, by means of an alarm system) whether the chest is recoiling completely, using an additional force sensor for this purpose.
Abstract:
[EU]This project analyses the interference induced by the chest compressions of cardiopulmonary resuscitation in the electrocardiogram and the thoracic impedance signals. The main objective is to study the relationship between these two interferences, developing a tool for that purpose. Defining this relationship would help find a way to reduce the effect of the interference, which in turn would increase the chances of resuscitation. To carry out the project, a custom database was built from a set of out-of-hospital cardiac arrest records following established criteria. This new database comprises 237 excerpts from 37 patients, each at least 10 s long, in which patients in asystole receive external chest compressions. In addition, a graphical interface was developed to characterise the interference; it displays the electrocardiogram and thoracic impedance signals in the time and frequency domains and allows their significant parameters to be extracted automatically or manually. These parameters are the per-compression maxima and minima of the signals, their positions, and the fundamental frequency, its harmonics and their amplitudes. Using this tool, the episodes of the aforementioned database were processed. Finally, a second graphical interface was developed to treat the results obtained, analysing their statistical distribution and the linear relationship between them. The main contribution of the project is therefore the development of a powerful tool for analysing the interference caused by cardiopulmonary resuscitation compressions, which can also be used to analyse other resuscitation episodes of different origin.
Abstract:
[EN]CPR feedback devices are used to improve the quality of chest compressions during resuscitation, as they provide real-time information to guide the rescuer during resuscitation attempts. Most feedback systems on the market are based on accelerometers plus additional sensors or reference signals, used for calculating the displacement of the chest from the acceleration signal. This makes them expensive and complex devices. With the aim of optimizing these feedback systems and overcoming their limitations, in this document we propose three alternative methods for calculating the depth of chest compressions. These methods differ from the existing ones in that they use exclusively the chest acceleration signal to compute the displacement. With their implementation, it would be possible to develop systems that provide accurate feedback more easily and economically. In this context, this document details the design and implementation of the three methods and the development of a software environment to analyse the accuracy of each of them and compare the results by means of a detailed calculation of errors. Furthermore, in order to evaluate the methods a database is required, which can be compiled using a sensorized manikin to record the acceleration signal and the gold-standard chest compression depth. The database generated will also be used for other studies related to the estimation of compression depth, because the signals obtained on the manikin platform are very similar to those recorded during a real resuscitation episode.