936 results for signals analysis
Abstract:
Respiration is a complex activity. If the relationships between all neurological and skeletomuscular interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed, and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, providing appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information on respiratory performance can, however, be gained from non-invasive monitoring of chest and abdomen motion. Currently available devices are not well suited to spirometric measurement in ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper-body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometry-based surface plethysmography. In the first instance, Chapter 2 deals with an array of tactile sensors used as a progression of existing, previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate on the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's performance on group estimation compares favourably with existing systems on volumetric performance. In addition, it provides continuous transient measurement of respiratory motion within an acceptable accuracy using approximately 20 sensing elements. Because of the potential size and complexity of the system, it is possible to deploy it as a fully mobile ambulatory monitoring device that may be used outside the laboratory. It provides a means by which to isolate coupled physiological functions and thus allows individual contributions to be analysed separately, facilitating greater understanding of respiratory physiology and improved diagnostic capabilities. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and has the prospect, with future development, of being incorporated into a garment-based clinical tool.
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action, directly measuring neuronal activity via the resulting magnetic field fluctuations, MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands, the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract cognitive brain state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely small, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity that might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
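To make the dynamical-systems viewpoint concrete, the sketch below (illustrative only; it is not code from the thesis, and the parameter values are assumptions) reconstructs a state space from a single-channel recording via time-delay embedding, the standard first step before applying nonlinear measures:

```python
import numpy as np

def delay_embed(x, dim=5, tau=10):
    """Time-delay embedding of a 1-D series x.

    Returns an array of shape (N - (dim - 1) * tau, dim) whose rows are
    delay vectors [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)].
    """
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example on a surrogate single-channel recording (not real MEG data)
rng = np.random.default_rng(0)
x = np.sin(0.05 * np.arange(5000)) + 0.1 * rng.standard_normal(5000)
states = delay_embed(x, dim=5, tau=10)
print(states.shape)  # (4960, 5)
```

In practice the embedding dimension and delay would be selected with standard tools such as false nearest neighbours and the first minimum of the mutual information, rather than fixed in advance.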
Abstract:
This paper explores a new method of analysing fatigue within the muscles predominantly used during microsurgery. Electromyographic (EMG) data captured from these muscles are analysed for any defining patterns relating to muscle fatigue. The analysis consists of dynamically embedding the EMG signal from a single muscle channel into an embedded matrix. Muscle fatigue is then quantified by an entropy measure defined on the singular values of the dynamically embedded (DE) matrix. The paper compares this new method with the traditional method of using mean frequency shifts in the EMG signal's power spectral density. Linear regressions are fitted to the results from both methods, and the coefficients of variation of both their slope and point of intercept are determined. It is shown that the complexity method is slightly more robust, in that the coefficients of variation for the DE method are lower than those of the conventional mean frequency analysis.
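A minimal sketch of one plausible reading of this method follows (window length, embedding dimension and the exact entropy normalisation are assumptions, not the paper's published settings): each EMG window is embedded into a delay matrix, and the Shannon entropy of its normalised singular-value spectrum is tracked over time.

```python
import numpy as np

def de_entropy(x, dim=20, tau=1):
    """Entropy of the singular-value spectrum of a dynamically
    embedded (delay) matrix built from a 1-D EMG window."""
    n = len(x) - (dim - 1) * tau
    de = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    s = np.linalg.svd(de, compute_uv=False)
    p = s / s.sum()                    # normalise singular values
    p = p[p > 0]
    return -np.sum(p * np.log(p))      # Shannon entropy of the spectrum

# Track fatigue as entropy over consecutive 1-s windows (fs samples each)
fs = 1000
emg = np.random.default_rng(1).standard_normal(10 * fs)  # placeholder EMG
ent = [de_entropy(emg[k * fs:(k + 1) * fs]) for k in range(10)]
```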
Abstract:
In this study, a new entropy measure known as kernel entropy (KerEnt), which quantifies the irregularity of a series, was applied to nocturnal oxygen saturation (SaO2) recordings. A total of 96 subjects suspected of suffering from sleep apnea-hypopnea syndrome (SAHS) took part in the study: 32 SAHS-negative and 64 SAHS-positive subjects. Their SaO2 signals were separately processed by means of KerEnt. Our results show that a higher degree of irregularity is associated with SAHS-positive subjects. Statistical analysis revealed significant differences between the KerEnt values of the SAHS-negative and SAHS-positive groups. The diagnostic utility of this parameter was studied by means of receiver operating characteristic (ROC) analysis. A classification accuracy of 81.25% (81.25% sensitivity and 81.25% specificity) was achieved. Repeated apneas during sleep increase irregularity in SaO2 data. This effect can be measured by KerEnt in order to detect SAHS. This non-linear measure can provide useful information for the development of alternative diagnostic techniques aimed at reducing the demand for conventional polysomnography (PSG). © 2011 IEEE.
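The paper's exact KerEnt formulation is not reproduced here; purely as an illustration of a kernel-based irregularity measure, the sketch below replaces the hard tolerance threshold of approximate entropy with a Gaussian kernel (the values of m, r and the normalisation are assumptions):

```python
import numpy as np

def kernel_apen(x, m=1, r=0.25):
    """Approximate-entropy-style irregularity with a Gaussian kernel
    instead of a hard tolerance threshold (illustrative only; not
    necessarily the paper's exact KerEnt definition)."""
    x = (x - np.mean(x)) / np.std(x)

    def phi(dim):
        n = len(x) - dim + 1
        emb = np.column_stack([x[i:i + n] for i in range(dim)])
        # pairwise Chebyshev distances between embedded vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        c = np.exp(-(d ** 2) / (2 * r ** 2)).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)   # higher value = more irregular series

sao2 = np.random.default_rng(2).normal(95, 2, 600)  # placeholder SaO2
print(kernel_apen(sao2, m=1, r=0.25))
```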
Abstract:
Dyslexia is one of the most common childhood disorders, with a prevalence of around 5-10% in school-age children. Although an important genetic component is known to have a role in the aetiology of dyslexia, we are far from understanding the molecular mechanisms leading to the disorder. Several candidate genes have been implicated in dyslexia, including DYX1C1, DCDC2, KIAA0319, and the MRPL19/C2ORF3 locus, each with reports of both positive and no replications. We generated a European cross-linguistic sample of school-age children, the NeuroDys cohort, that includes more than 900 individuals with dyslexia, sampled with homogeneous inclusion criteria across eight European countries, and a comparable number of controls. Here, we describe association analysis of the dyslexia candidate genes/locus in the NeuroDys cohort. We performed both case-control and quantitative association analyses of single markers and haplotypes previously reported to be dyslexia-associated. Although we observed association signals in samples from single countries, we did not find any marker or haplotype that was significantly associated with either case-control status or quantitative measurements of word-reading or spelling in the meta-analysis of all eight countries combined. As in other neurocognitive disorders, our findings underline the need for larger sample sizes to validate possibly weak genetic effects. © 2014 Macmillan Publishers Limited. All rights reserved.
Abstract:
Many studies have investigated whole-body vibration effects in the fields of exercise physiology, sport and rehabilitation medicine. Generally, surface EMG is used to assess muscular activity during the treatment; however, large motion artifacts appear superimposed on the raw signal, making sEMG recordings unsuitable for use before artifact filtering. Sharp notch filters, centred at the vibration frequency and at its higher harmonics, have been used in previous studies to remove the artifacts [6, 10]. However, removing those artifacts also discards some of the true EMG signal. The purpose of this study was to reproduce the effect of motor-unit synchronization on a simulated surface EMG during vibratory stimulation. In addition, the authors evaluate the percentage of EMG power in those bands in which motion artifact components are also typically located. Model characteristics were defined to take into account two main aspects: the discharge behaviour of the muscle's motor units (MUs) and the triggering effects that appear during local vibratory stimulation [7]. The inter-pulse interval (IPI) was characterized by a polymodal distribution related to the MU discharge frequency (IPI 55-80 ms, σ = 12 ms) and to the correlation with the vibration period within ±2 ms due to the vibration stimulus [1, 7]. The signals were simulated using different stimulation frequencies from 30 to 70 Hz. The percentage of the total simulated EMG power within narrow bands centred at the stimulation frequency and its higher harmonics (±1 Hz) was on average about 8% (±2.85%) of the total EMG power. However, the artifact in those bands may contain more than 40% of the total signal power [6]. Our preliminary results suggest that analysis of muscular activity based on raw sEMG recordings and RMS evaluation during vibratory stimulation, if the artifacts are not removed, may lead to a serious overestimation of the muscular response.
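The band-power bookkeeping described above can be sketched as follows (function name, window length and parameters are illustrative, not from the paper): the fraction of total power inside ±1 Hz bands centred at the stimulation frequency and its harmonics is computed from a Welch periodogram.

```python
import numpy as np
from scipy.signal import welch

def band_power_fraction(emg, fs, f_stim, n_harmonics=5, half_width=1.0):
    """Fraction of total signal power inside the narrow bands
    (k*f_stim - half_width, k*f_stim + half_width), k = 1..n_harmonics."""
    f, pxx = welch(emg, fs=fs, nperseg=4 * fs)  # 0.25 Hz resolution
    in_bands = 0.0
    for k in range(1, n_harmonics + 1):
        mask = np.abs(f - k * f_stim) <= half_width
        in_bands += pxx[mask].sum()
    return in_bands / pxx.sum()   # uniform bins, so sums suffice

fs = 2048
emg = np.random.default_rng(3).standard_normal(10 * fs)  # placeholder sEMG
print(band_power_fraction(emg, fs, f_stim=40))
```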
Abstract:
* This study was supported in part by the Natural Sciences and Engineering Research Council of Canada, and by the Gastrointestinal Motility Laboratory (University of Alberta Hospitals) in Edmonton, Alberta, Canada.
Abstract:
This chapter provides the theoretical foundation and background of the Data Envelopment Analysis (DEA) method, along with some variants of the basic DEA models and applications to various sectors. Some illustrative examples and helpful resources on DEA, including DEA software packages, are also presented in this chapter. DEA is useful for measuring relative efficiency for a variety of institutions and has its own merits and limitations. This chapter concludes that DEA results should be interpreted with much caution to avoid giving wrong signals and providing inappropriate recommendations.
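As an illustration of a basic DEA model of the kind the chapter covers, the following sketch (toy data; the function name is ours) solves the input-oriented CCR envelopment problem for one decision-making unit (DMU) as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.

    X: (m inputs, n DMUs), Y: (s outputs, n DMUs).
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0, lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)               # variables: [theta, lam_1..lam_n]
    c[0] = 1.0
    A_in = np.hstack([-X[:, [j0]], X])          # X @ lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                    # efficiency score in (0, 1]

# 5 DMUs, 2 inputs, 1 output (toy data)
X = np.array([[2.0, 3, 6, 9, 5], [3.0, 2, 6, 4, 7]])
Y = np.array([[1.0, 1, 2, 2, 1]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(5)])
```

A score of 1 marks a DMU on the efficient frontier; a score below 1 gives the proportional input reduction needed to reach it.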
Abstract:
Transmembrane proteins play crucial roles in many important physiological processes. The intracellular domain of membrane proteins is key to their function, interacting with a wide variety of cytosolic proteins; it is therefore important to examine this interaction. A recently developed method to study these interactions, based on the use of liposomes as a model membrane, involves the covalent coupling of the cytoplasmic domains of membrane proteins to the liposome membrane. This allows for the analysis of interaction partners requiring both protein and membrane lipid binding. This thesis further establishes the liposome recruitment system and utilises it to examine the intracellular interactome of the amyloid precursor protein (APP), best known for its proteolytic cleavage, which results in the production and accumulation of amyloid beta fragments, the main constituent of amyloid plaques in Alzheimer’s disease pathology. Despite this, the physiological function of APP remains largely unclear. Through the use of the proteo-liposome recruitment system, two novel interactions of APP’s intracellular domain (AICD) are examined with a view to gaining greater insight into APP’s physiological function. The first novel interaction is between AICD and the mTOR complex, a serine/threonine protein kinase that integrates signals from nutrients and growth factors. The kinase domain of mTOR binds directly to AICD, and the N-terminal amino acids of AICD are crucial for this interaction. The second novel interaction is between AICD and the endosomal PIKfyve complex, a lipid kinase involved in the production of phosphatidylinositol-3,5-bisphosphate (PI(3,5)P2) from phosphatidylinositol-3-phosphate, which has a role in controlling endosome dynamics. The scaffold protein Vac14 of the PIKfyve complex binds directly to AICD, and the C-terminus of AICD is important for its interaction with the PIKfyve complex. Using a recently developed intracellular PI(3,5)P2 probe, it is shown that APP controls the formation of PI(3,5)P2-positive vesicular structures and that the PIKfyve complex is involved in the trafficking and degradation of APP. Both of these novel APP interactors have important implications for both APP function and Alzheimer’s disease. The proteo-liposome recruitment method is further validated through its use to examine the recruitment and assembly of the AP-2/clathrin coat from purified components onto two membrane proteins containing different sorting motifs. Taken together, this thesis highlights the proteo-liposome recruitment system as a valuable tool for the study of the intracellular interactome of membrane proteins. It allows the protein to be mimicked in its native configuration, thereby identifying weaker interactions that are not detected by more conventional methods, as well as interactions that are mediated by membrane phospholipids.
Abstract:
The aim of this study is to evaluate the application of ensemble averaging to the analysis of electromyography recordings under whole-body vibratory stimulation. Recordings from the Rectus Femoris, collected during vibratory stimulation at different frequencies, are used. Each signal is subdivided into intervals whose duration is related to the vibration frequency, and the segmented intervals are then averaged. Using this method, periodic components emerge in the majority of the recordings. The autocorrelation of a few seconds of signal confirms the presence of pseudo-sinusoidal components closely related to the soft-tissue oscillations caused by the mechanical waves. © 2014 IEEE.
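A minimal sketch of the segmentation-and-averaging step (parameter values are illustrative; the paper's exact interval definition may differ):

```python
import numpy as np

def ensemble_average(emg, fs, f_vib):
    """Segment an EMG recording into windows of one vibration period
    and average them, so vibration-locked components emerge."""
    period = int(round(fs / f_vib))      # samples per vibration cycle
    n = len(emg) // period               # number of whole cycles
    segments = emg[:n * period].reshape(n, period)
    return segments.mean(axis=0)

fs, f_vib = 2048, 40
emg = np.random.default_rng(4).standard_normal(10 * fs)  # placeholder EMG
avg_cycle = ensemble_average(emg, fs, f_vib)
print(avg_cycle.shape)                   # one averaged vibration period
```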
Abstract:
Cardiotocography provides significant information on foetal oxygenation linked to characteristics of foetal heart rate signals. Among the most important of these is foetal heart rate variability, whose spectral analysis is recognised as useful in improving the diagnosis of pathological conditions. Despite its importance, however, a standardised definition and estimation of foetal heart rate variability is still lacking. Some guidelines state that variability refers to fluctuations in the baseline free from accelerations and decelerations. This is an important limitation in clinical routine, since variability during these FHR alterations has always been regarded as particularly significant in terms of prognostic value. In this work we compute foetal heart rate variability as the difference between the foetal heart rate and a floatingline, and we propose a method for extraction of the floatingline which takes accelerations and decelerations into account. © 2011 Springer-Verlag Berlin Heidelberg.
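A simplified illustration of the idea (the paper's floatingline extraction is more elaborate; the moving-median baseline, window length and sampling rate below are assumptions):

```python
import numpy as np
from scipy.signal import medfilt

def fhr_variability(fhr, fs=4.0, win_s=20.0):
    """Illustrative floatingline extraction: a long moving median tracks
    the slowly varying baseline through accelerations/decelerations;
    variability is the residual FHR - floatingline."""
    win = int(win_s * fs)
    if win % 2 == 0:
        win += 1                         # medfilt needs an odd kernel
    floatingline = medfilt(fhr, kernel_size=win)
    return fhr - floatingline, floatingline

fs = 4.0                                 # assumed CTG sampling rate
t = np.arange(0, 600, 1 / fs)
fhr = (140 + 10 * np.exp(-((t - 300) / 30) ** 2)   # synthetic acceleration
       + np.random.default_rng(5).normal(0, 2, t.size))
fhrv, fl = fhr_variability(fhr, fs)
```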
Abstract:
Long-term foetal surveillance is often recommended. Fully non-invasive acoustic recording through the maternal abdomen therefore represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded heart sound signal is heavily loaded with noise, so the determination of the foetal heart rate raises serious signal processing issues. In this paper, we present a new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings. Filtering is employed as the first step of the algorithm to reduce the background noise. A block for enhancing first heart sounds is then used to further attenuate other components of the foetal heart sound signal. A logic block, guided by a number of rules concerning foetal heart beat regularity, is then applied to detect the most probable first heart sounds among several candidates. A final block is used for exact first heart sound timing and, in turn, foetal heart rate estimation. The filtering and enhancing blocks are implemented by means of different techniques, so that different processing paths are proposed. Furthermore, a reliability index is introduced to quantify the consistency of the estimated foetal heart rate and, based on statistical parameters, a software quality index is designed to indicate the most reliable analysis procedure (that is, the combination of processing path and first heart sound time mark that provides the lowest estimation errors). The algorithm's performance has been tested on phonocardiographic signals recorded in a local private gynaecology practice from a sample group of about 50 pregnant women. The phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by our algorithm and the one provided by the cardiotocographic device). Our results show that the proposed algorithm, in particular some of the analysis procedures, provides reliable foetal heart rate signals, very close to the reference cardiotocographic recordings. © 2010 Elsevier Ltd. All rights reserved.
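A simplified sketch of such a pipeline is given below (band edges, thresholds and the regularity rule are assumptions; the paper's logic block is considerably richer):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_fhr(pcg, fs):
    """Band-pass filtering, envelope extraction to enhance first heart
    sounds (S1), peak picking with a spacing constraint, and FHR from
    the resulting S1 intervals."""
    # 1) band-pass around typical foetal S1 energy (assumed 20-60 Hz)
    b, a = butter(4, [20 / (fs / 2), 60 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    # 2) amplitude envelope via the Hilbert transform
    env = np.abs(hilbert(filtered))
    # 3) candidate S1 peaks; foetal HR ~110-160 bpm -> S1 >= ~0.35 s apart
    peaks, _ = find_peaks(env, distance=int(0.35 * fs),
                          height=np.percentile(env, 75))
    # 4) FHR series (bpm) from inter-beat intervals
    ibi = np.diff(peaks) / fs
    return 60.0 / ibi

fs = 1000
pcg = np.random.default_rng(6).standard_normal(30 * fs)  # placeholder PCG
fhr = estimate_fhr(pcg, fs)
```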
Abstract:
Data fluctuation across multiple measurements in Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information on spectral data within the normal distribution is retained in the regression model, while information on outliers is restrained or removed. Copper concentration analysis experiments were carried out on 16 certified standard brass samples. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance in model robustness and convergence speed than the four known weighting functions.
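For illustration, the sketch below implements a segmented weighting function in the spirit of the original WLS-SVM scheme (the paper's improved function and its parameters differ in detail):

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Segmented weighting of regression residuals: points near the
    centre of the distribution keep full weight, moderate outliers are
    down-weighted linearly, gross outliers are nearly removed."""
    # robust scale estimate (1.483 * MAD ~ std for normal residuals)
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))
    z = np.abs(residuals) / s
    w = np.ones_like(z)
    mid = (z > c1) & (z <= c2)
    w[mid] = (c2 - z[mid]) / (c2 - c1)
    w[z > c2] = 1e-4
    return w

res = np.array([0.1, -0.2, 0.05, 2.0, -0.15, 0.3])  # toy residuals
print(segmented_weights(res))   # the gross outlier 2.0 is suppressed
```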
Abstract:
Background: During the last decade, the use of ECG recordings in biometric recognition studies has increased. The ECG's characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed to provide a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed here, providing a unifying framework with which to appreciate previous studies and, hopefully, guide future research. Methods: We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. In our selection we included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English-language documents. Results: 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, ages range from 16 to 86, and both male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does verification rate. Many studies refer to publicly available databases (the Physionet ECG database repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and of the equal error rate in authentication scenarios. The weighted identification rate was 94.95 %, while the weighted equal error rate was 0.92 %. Conclusions: Biometric recognition is a mature field of research. Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling, as well as in wearable electronics applications. However, some barriers still limit its growth. Further analysis should address the use of single-lead recordings and the study of features that do not depend on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed that use both fiducial and non-fiducial features in order to catch the best of both approaches. ECG recognition in pathological subjects is also worthy of additional investigation.
Abstract:
It is well established that accent recognition can be as accurate as 95% when the signals are noise-free, using feature extraction techniques such as mel-frequency cepstral coefficients and binary classifiers such as discriminant analysis, support vector machines and k-nearest neighbours. In this paper, we demonstrate that predictive performance can be reduced by as much as 15% when the signals are noisy. Specifically, we perturb the signals with different levels of white noise and, as the noise becomes stronger, the out-of-sample predictive performance deteriorates from 95% to 80%, although the in-sample prediction gives overly optimistic results. ACM Computing Classification System (1998): C.3, C.5.1, H.1.2, H.2.4, G.3.
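A sketch of the perturbation step (the SNR levels, sample data and librosa-based MFCC extraction are illustrative choices, not the paper's exact setup):

```python
import numpy as np
import librosa  # assumed available for MFCC extraction

def add_white_noise(signal, snr_db):
    """Perturb a signal with white Gaussian noise at a target SNR (dB)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    noise = np.random.default_rng(7).normal(0, np.sqrt(p_noise), signal.size)
    return signal + noise

sr = 16000
speech = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # placeholder audio
for snr in (30, 20, 10, 5):            # progressively stronger noise
    noisy = add_white_noise(speech, snr)
    mfcc = librosa.feature.mfcc(y=noisy.astype(np.float32), sr=sr, n_mfcc=13)
    # mfcc (13 x frames) would feed the classifiers compared in the paper
    print(snr, mfcc.shape)
```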