34 results for rail wheel flat, vibration monitoring, wavelet approaches, Daubechies wavelets, signal processing
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, as well as the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta and other bands commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
Abstract:
Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory building on results from the image processing community to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional, that, when minimized, solves the PWC denoising problem. We also showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
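Total variation diffusion, one of the ingredients combined above, can be sketched as gradient descent on a smoothed TV functional. This is a minimal illustration, not the paper's solver; the parameters `lam`, `eps`, `step` and `iters` are illustrative choices:

```python
import numpy as np

def tv_denoise(y, lam=0.5, eps=1e-2, step=0.05, iters=3000):
    """Denoise a piecewise constant signal by gradient descent on the
    smoothed total variation functional
        F(x) = 0.5 * ||x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps).
    """
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d ** 2 + eps)   # derivative of the smoothed |d|
        grad = x - y
        grad[:-1] -= lam * g            # each difference pulls its left sample...
        grad[1:] += lam * g             # ...and pushes its right sample
        x -= step * grad
    return x
```

On a noisy step signal this reduces the mean squared error relative to the raw input while largely preserving the jumps; the mean-shift component of the paper's hybrid methods would replace the quadratic data term.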
Abstract:
The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.
Abstract:
Over the last twenty years, we have continuously seen R&D efforts and activities in developing optical fibre grating devices and technologies and exploring their applications for telecommunications, optical signal processing and smart sensing, and recently for medical care and biophotonics. In addition, we have also witnessed the successful commercialisation of this R&D, especially in the area of fibre Bragg grating (FBG) based distributed sensor network systems and technologies for engineering structure monitoring in industrial sectors such as oil, energy and civil engineering. Despite countless published reports and papers and commercial realisation, we are still seeing significant and novel research activities in this area. This invited paper gives an overview of recent advances in fibre grating devices and their sensing applications, with a focus on novel fibre gratings and their functions and on grating structures in speciality fibres. The most recent developments in (i) femtosecond inscription for microfluidic/grating devices, (ii) tilted grating based novel polarisation devices and (iii) dual-peak long-period grating based DNA hybridisation sensors will be discussed.
Abstract:
Optical differentiators constitute a basic device for analog all-optical signal processing [1]. Fiber grating approaches, both fiber Bragg grating (FBG) and long period grating (LPG), constitute an attractive solution because of their low cost, low insertion losses, and full compatibility with fiber optic systems. A first-order differentiator LPG approach was proposed and demonstrated in [2], but FBGs may be preferred in applications with bandwidths of up to a few nm because of the extreme sensitivity of LPGs to environmental fluctuations [3]. Several FBG approaches have been proposed in [3-6], requiring one or more additional optical elements to create a first-order differentiator. A very simple, single optical element FBG approach was proposed in [7] for first-order differentiation, applying the well-known logarithmic Hilbert transform relation between the amplitude and phase of an FBG in transmission [8]. Using this relationship in the design process, it was theoretically and numerically demonstrated that a single FBG in transmission can be designed to simultaneously approximate the amplitude and phase of a first-order differentiator spectral response, without the need for any additional elements. © 2013 IEEE.
Abstract:
Congenital nystagmus is an ocular-motor disorder that develops in the first few months of life; its pathogenesis is still unknown. Patients affected by congenital nystagmus show continuous, involuntary, rhythmical oscillations of the eyes. By monitoring eye movements, the main features of nystagmus, such as shape, amplitude and frequency, can be extracted and analysed. Previous studies highlighted, in some cases, a much slower and smaller oscillation, which appears added to the ordinary nystagmus waveform. This sort of baseline oscillation, or slow nystagmus, hinders precise cycle-to-cycle image placement onto the fovea. Such positional variability may reduce patients' visual acuity. This study aims to analyse eye movement recordings more extensively, including the baseline oscillation, and to investigate possible relationships between these slow oscillations and nystagmus. Almost 100 eye movement recordings (either infrared-oculographic or electrooculographic), taken at different gaze positions and belonging to 32 congenital nystagmus patients, were analysed. The baseline oscillation was assumed sinusoidal; its amplitude and frequency were computed and compared with those of the nystagmus by means of a linear regression analysis. The results showed that baseline oscillations were characterised by an average frequency of 0.36 Hz (SD 0.11 Hz) and an average amplitude of 2.1° (SD 1.6°). A considerable correlation (R² = 0.78) was also found between nystagmus amplitude and baseline oscillation amplitude; on average, the latter was about one-half of the corresponding nystagmus amplitude. © 2009 Elsevier Ltd. All rights reserved.
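The sinusoidal baseline fit described above can be illustrated with a grid-searched linear least-squares fit. This is a generic sketch rather than the authors' pipeline; the sampling duration, frequency grid and signal values in the usage below are hypothetical:

```python
import numpy as np

def fit_sinusoid(t, y, freqs):
    """For each candidate frequency, fit a*sin + b*cos + c by linear least
    squares; return the (frequency, amplitude) pair with smallest residual."""
    best = None
    for f in freqs:
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.sum((y - A @ coef) ** 2)
        if best is None or err < best[0]:
            best = (err, f, np.hypot(coef[0], coef[1]))
    return best[1], best[2]
```

Applied to a clean synthetic 0.36 Hz, 2.1° sinusoid over 30 s, the fit recovers both parameters; the fitted amplitudes and frequencies could then be compared across recordings by linear regression, as in the study.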
Abstract:
This paper considers the problem of low-dimensional visualisation of very high dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision support aids to reduce information overload (and specifically, data amenable to inter-point relative similarity measures) appropriate to the below-water maritime domain, we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is on the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms building on the approach of [9]. Concepts are illustrated using a real world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
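One standard way to turn inter-point dissimilarities into a low-dimensional map, in the spirit of the topographic visualisation described (though not the paper's specific model), is classical MDS (Torgerson scaling):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points from a pairwise dissimilarity matrix D into `dim`
    dimensions via double-centring and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred squared dissimilarities
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]   # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

For exactly Euclidean dissimilarities this reproduces the original pairwise distances; for noisy sonar-derived dissimilarities of the kind the paper considers, non-linear and noise-aware variants are needed, which is precisely the paper's point.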
Abstract:
The goal of this paper is to model normal airframe conditions for helicopters in order to detect changes. This is done by inferring the flying state using a selection of sensors and frequency bands that are best for discriminating between different states. We used non-linear state-space models (NLSSM) for modelling flight conditions based on short-time frequency analysis of the vibration data and embedded the models in a switching framework to detect transitions between states. We then created a density model (using a Gaussian mixture model) for the NLSSM innovations: this provides a model for normal operation. To validate our approach, we used data with added synthetic abnormalities, which were detected as low-probability periods. The model of normality gave good indications of faults during the flight, in the form of low probabilities under the model, with high accuracy (>92%). © 2013 IEEE.
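The density-modelling step (a Gaussian mixture over the innovations, with faults flagged as low-probability periods) can be caricatured in one dimension with a small EM fit. This is a toy stand-in, not the paper's implementation; the percentile threshold `q` is an illustrative choice:

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=100, seed=0):
    """Fit a k-component 1-D Gaussian mixture by EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0) + 1e-12
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return w, mu, var

def log_density(x, w, mu, var):
    p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(p.sum(axis=1) + 1e-300)

def flag_abnormal(x_test, x_train, w, mu, var, q=1.0):
    """Flag samples whose log-density falls below the q-th percentile
    of the training (normal-operation) log-densities."""
    thr = np.percentile(log_density(x_train, w, mu, var), q)
    return log_density(x_test, w, mu, var) < thr
```

Training on innovations from normal operation and flagging low-density test samples mirrors the "low-probability periods" criterion in the abstract.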
Abstract:
For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, “wearable,” sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that “learn” from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.
Abstract:
Signal processing techniques for mitigating intra-channel and inter-channel fiber nonlinearities are reviewed. More detailed descriptions of three specific examples highlight the diversity of the electronic and optical approaches that have been investigated.
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so that computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering, solves the MAP problem as well as Gibbs sampling, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective as well as in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model whose performance contrasts favorably with a recently proposed hybrid SVA approach.
Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
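For orientation, the DP-means algorithm to which the abstract compares MAP-DP can be sketched in a few lines. Note this is the degenerate small-variance relative, not MAP-DP itself; the threshold `lam` plays the role of the new-cluster penalty:

```python
import numpy as np

def dp_means(X, lam, iters=50):
    """DP-means: k-means-like updates, opening a new cluster whenever a
    point lies farther (squared distance) than lam from every mean."""
    mu = [X.mean(axis=0)]
    z = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i, x in enumerate(X):
            d = [np.sum((x - m) ** 2) for m in mu]
            j = int(np.argmin(d))
            if d[j] > lam:                 # too far from all means: new cluster
                mu.append(x.copy())
                z[i] = len(mu) - 1
            else:
                z[i] = j
        # M-step: recompute means (keep the stale mean if a cluster emptied)
        mu = [X[z == k].mean(axis=0) if np.any(z == k) else mu[k]
              for k in range(len(mu))]
    return z, mu
```

MAP-DP replaces the hard geometric threshold with proper (non-degenerate) cluster likelihoods and Dirichlet-process prior counts, which is what preserves the "rich get richer" behaviour the abstract highlights.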
Abstract:
The aim of this work was to investigate the feasibility of detecting and locating damage in large frame structures where visual inspection would be difficult or impossible. The method is based on a vibration technique for non-destructively assessing the integrity of structures by using measurements of changes in the natural frequencies. Such measurements can be made at a single point in the structure. The method requires that initially a comprehensive theoretical vibration analysis of the structure is undertaken, and from it predictions are made of the changes in dynamic characteristics that will occur if each member of the structure is damaged in turn. The natural frequencies of the undamaged structure are measured, and then routinely remeasured at intervals. If a change in the natural frequencies is detected, a statistical method is used to make the best match between the measured changes in frequency and the family of theoretical predictions. This predicts the most likely damage site. The theoretical analysis was based on the finite element method. Many structures were extensively studied and a computer model was used to simulate the effect of the extent and location of the damage on the natural frequencies. Only one such analysis is required for each structure to be investigated. The experimental study was conducted on small structures in the laboratory. Frequency changes were found from inertance measurements on various plane and space frames. The computational requirements of the location analysis are small and a desktop microcomputer was used. The results of this work showed that the method was successful in detecting and locating damage in the test structures.
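The matching step (choosing the damage site whose predicted frequency-change pattern best explains the measured changes) can be sketched as a scaled least-squares comparison. The free scale factor absorbs the unknown damage severity; this is a generic stand-in for the thesis's statistical matching method:

```python
import numpy as np

def locate_damage(measured_shift, predicted_shifts):
    """Return the index of the candidate damage site whose predicted
    frequency-change pattern, optimally scaled, best fits the measurements."""
    errs = []
    for p in predicted_shifts:
        a = np.dot(p, measured_shift) / np.dot(p, p)  # best-fit severity scale
        errs.append(np.sum((measured_shift - a * p) ** 2))
    return int(np.argmin(errs))
```

Here `predicted_shifts` would hold one row per structural member, computed once from the finite element model, so the on-line location analysis stays cheap, consistent with the microcomputer implementation described.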
Abstract:
This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
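The "multicomponent forecast" idea above (decompose with the WT, predict each component, recombine) can be sketched with a single-level Haar transform and a simple AR(1) model per component. Both are illustrative stand-ins for the paper's wavelet choice and its MLP/GARCH component models:

```python
import numpy as np

def haar_split(x):
    """Single-level Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ar1_forecast(c):
    """Fit c[t] = phi * c[t-1] by least squares; predict one step ahead."""
    phi = np.dot(c[:-1], c[1:]) / (np.dot(c[:-1], c[:-1]) + 1e-12)
    return phi * c[-1]

def multicomponent_forecast(x):
    """Forecast the next two samples by predicting each wavelet component
    separately and inverting the Haar transform on the predicted pair."""
    a, d = haar_split(x)
    a_next, d_next = ar1_forecast(a), ar1_forecast(d)
    return (a_next + d_next) / np.sqrt(2), (a_next - d_next) / np.sqrt(2)
```

The direct-forecast alternative compared in the paper would instead feed the raw series (or all components jointly) into a single model; the adaptive variants update the component models' parameters on-line with a Kalman or particle filter.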
Abstract:
In inflammatory diseases, release of oxidants leads to oxidative damage to proteins. The precise nature of oxidative damage to individual proteins depends on the oxidant involved. Chlorination and nitration are markers of modification by the myeloperoxidase-H2O2-Cl- system and nitric oxide-derived oxidants, respectively. Although these modifications can be detected by western blotting, currently no reliable method exists to identify the specific sites of damage to individual proteins in complex mixtures such as clinical samples. We are developing novel LC-MS2 and precursor ion scanning methods to address this. LC-MS2 allows separation of peptides and detection of mass changes in oxidized residues on fragmentation of the peptides. We have identified indicative fragment ions for chlorotyrosine, nitrotyrosine, hydroxytyrosine and hydroxytryptophan. A nano-LC/MS3 method involving the dissociation of immonium ions to give specific fragments for the oxidized residues has been developed to overcome the problem of false positives from ions isobaric to these immonium ions that exist in unmodified peptides. The approach has proved able to identify precise protein modifications in individual proteins and mixtures of proteins. An alternative methodology involves multiple reaction monitoring of precursor and fragment ions that are specific to oxidized and chlorinated proteins, and this has been tested with human serum albumin. Our ultimate aim is to apply this methodology to the detection of oxidative post-translational modifications in clinical samples for disease diagnosis, monitoring the outcomes of therapy, and improved understanding of disease biochemistry.
Abstract:
Research in the present thesis is focused on the norms, strategies, and approaches which translators employ when translating humour in Children's Literature from English into Greek. It is based on process-oriented descriptive translation studies, since the focus is on investigating the process of translation. Viewing translation as a cognitive process and a problem-solving activity, this thesis employs Think-aloud protocols (TAPs) in order to investigate translators' minds. As it is not possible to directly observe the human mind at work, an attempt is made to ask the translators themselves to reveal their mental processes in real time by verbalising their thoughts while carrying out a translation task involving humour. In this study, thirty participants at three different levels of expertise in translation competence, i.e. ten beginner, ten competent, and ten expert translators, were requested to translate two humorous extracts from the fictional diary novel The Secret Diary of Adrian Mole, Aged 13 ¾ by Sue Townsend (1982) from English into Greek. As they translated, they were asked to verbalise their thoughts and explain their reasoning, whenever possible, so that their strategies and approaches could be detected and, subsequently, the norms that govern these strategies and approaches could be revealed. The thesis consists of four parts: the introduction, the literature review, the study, and the conclusion, and is developed in eleven chapters. The introduction contextualises the study within translation studies (TS) and presents its rationale, research questions, aims, and significance. Chapters 1 to 7 present an extensive and inclusive literature review identifying the principles and axioms that guide and inform the study.
In these seven chapters the following areas are critically introduced: Children's Literature (Chapter 1), Children's Literature Translation (Chapter 2), Norms in Children's Literature (Chapter 3), Strategies in Children's Literature (Chapter 4), Humour in Children's Literature Translation (Chapter 5), Development of Translation Competence (Chapter 6), and Translation Process Research (Chapter 7). In Chapters 8-11 the fieldwork is described in detail. The pilot and the main study are described with reference to the environments and settings, the participants, the researcher-observer, the data and its analysis, and the limitations of the study. The findings of the study are presented and analysed in Chapter 9. Three models are then suggested for systematising translators' norms, strategies, and approaches, thus filling the existing gap in the field. Pedagogical norms (e.g. appropriateness/correctness, familiarity, simplicity, comprehensibility, and toning down), literary norms (e.g. sound of language and fluency), and source-text norms (e.g. equivalence) were revealed to be the most prominent general and specific norms governing the translators' strategies and approaches in the process of translating humour in ChL. The data also revealed that monitoring and communication strategies (e.g. additions, omissions, and exoticism) were the prevalent strategies employed by translators. In Chapter 10 the main findings and outcomes, together with potential secondary (beneficial) outcomes, are discussed on the basis of the research questions and aims of the study, and the implications of the study are tackled in Chapter 11. In the conclusion, suggestions for future directions are given and final remarks noted.