958 results for rail wheel flat, vibration monitoring, wavelet approaches, Daubechies wavelets, signal processing


Relevance: 100.00%

Abstract:

The development of sensing devices is one of the instrumentation fields that has grown rapidly in the last decade. Alongside the swift advance of microelectronic sensors, optical fibre sensors are widely investigated because of their advantages over electronic sensors, such as wavelength-multiplexing capability and high sensitivity to temperature, pressure, strain, vibration and acoustic emission. Moreover, optical fibre sensors are more attractive than electronic sensors because they can perform distributed sensing, covering a reasonably large area with a single piece of fibre. Apart from being a responsive element in the sensing field, optical fibre has strong assets for generating, distributing, processing and transmitting signals in the future broadband information network. These assets include wide bandwidth, high capacity and low loss, which grant mobility and flexibility to wireless access systems. Among these core technologies, fibre optic signal processing and the transmission of optical and radio frequency signals are the subjects of study in this thesis. Based on the intrinsic properties of single-mode optical fibre, this thesis aims to exploit fibre characteristics such as thermal sensitivity, birefringence, dispersion and nonlinearity in applications of temperature sensing and radio-over-fibre systems.

By exploiting the fibre's thermal sensitivity, a fully distributed temperature sensing system consisting of an apodised chirped fibre Bragg grating has been implemented. The proposed system has proven efficient in characterising the grating and providing information on the temperature variation, location and width of the heat source applied to the area under test. To exploit the fibre's birefringence, a fibre delay line filter using a single high-birefringence optical fibre structure has been presented. The proposed filter can be reconfigured and programmed, by adjusting the input azimuth of the launched light as well as the strength and direction of the applied coupling, to meet the signal processing requirements of different microwave photonic and optical filtering applications.

To exploit the fibre's dispersion and nonlinearity, experimental investigations have been carried out to study their joint effect in high-power double-sideband and single-sideband modulated links in the presence of fibre loss. The experimental results have been theoretically verified using an in-house implementation of the split-step Fourier method applied to the generalised nonlinear Schrödinger equation. A further simulation study of the inter-modulation distortion in two-tone signal transmission has also been presented to show the effect of one channel's nonlinearity on the other. In addition to the experimental work, numerical simulations have been carried out for all the proposed systems to ensure that all aspects of concern are comprehensively investigated.
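The split-step Fourier method mentioned in this abstract can be sketched numerically. The code below is an illustrative, minimal symmetric split-step propagator for a simplified scalar NLSE with group-velocity dispersion `beta2`, Kerr coefficient `gamma` and loss `alpha`; it is not the in-house implementation used in the thesis, and all parameter names are assumptions.

```python
import numpy as np

def split_step_nlse(a0, dt, dz, n_steps, beta2, gamma, alpha=0.0):
    """Propagate a complex field envelope a0 along a fibre with the
    symmetric split-step Fourier method, for the illustrative NLSE
        dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A - (alpha/2)*A.
    """
    n = a0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)            # angular frequencies
    # linear operator (dispersion + loss) applied for half a step
    half_lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz / 2)
    a = a0.astype(complex)
    for _ in range(n_steps):
        a = np.fft.ifft(half_lin * np.fft.fft(a))      # half linear step
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz) # full nonlinear step
        a = np.fft.ifft(half_lin * np.fft.fft(a))      # half linear step
    return a
```

With `alpha=0`, both the linear and nonlinear sub-steps are pure phase rotations, so the pulse energy is conserved; this is a useful sanity check on any split-step code.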

Relevance: 100.00%

Abstract:

Over recent years much has been learned about the way in which depth cues are combined (e.g. Landy et al., 1995). The majority of this work has used subjective measures, a rating scale or a point of subjective equality, to deduce the relative contributions of different cues to perception. We have adopted a very different approach by using two interval forced-choice (2IFC) performance measures and a signal processing framework. We performed summation experiments for depth cue increment thresholds between pairs of pictorial depth cues in displays depicting slanted planar surfaces made from arrays of circular 'contrast' elements. Summation was found to be ideal when size-gradient was paired with contrast-gradient for a wide range of depth-gradient magnitudes in the null stimulus. For a pairing of size-gradient and linear perspective, substantial summation (> 1.5 dB) was found only when the null stimulus had intermediate depth gradients; when flat or steeply inclined surfaces were depicted, summation was diminished or abolished. Summation was also abolished when one of the target cues was (i) not a depth cue, or (ii) added in conflict. We conclude that vision has a depth mechanism for the constructive combination of pictorial depth cues and suggest two generic models of summation to describe the results. Using similar psychophysical methods, Bradshaw and Rogers (1996) revealed a mechanism for the depth cues of motion parallax and binocular disparity. Whether this is the same or a different mechanism from the one reported here awaits elaboration.
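The summation figures quoted above (e.g. "> 1.5 dB") can be made concrete. The sketch below is illustrative only, not the authors' model: it computes summation in dB from single-cue and paired-cue thresholds, and the paired threshold predicted by a hypothetical Minkowski-norm combination rule, where `m = 1` corresponds to linear summation of two equally effective cues and `m = 2` to quadratic (energy) summation.

```python
import numpy as np

def summation_db(thresh_single, thresh_pair):
    """Threshold improvement (summation) in dB when two cues are combined:
    20*log10(single-cue threshold / paired-cue threshold)."""
    return 20 * np.log10(thresh_single / thresh_pair)

def predicted_pair_threshold(t1, t2, m):
    """Paired threshold under an m-norm combination rule: sensitivities
    (reciprocal thresholds) add under the m-norm."""
    return (t1 ** -m + t2 ** -m) ** (-1 / m)
```

For two equal cues, this rule predicts about 6 dB of summation for `m = 1` and about 3 dB for `m = 2`, which gives a sense of scale for the effects reported in the abstract.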

Relevance: 100.00%

Abstract:

We have recently proposed the framework of independent blind source separation as an advantageous approach to steganography. Amongst the several characteristics noted was a sensitivity of message reconstruction to small perturbations in the sources. This characteristic is not common in most other approaches to steganography. In this paper we discuss how this sensitivity relates to the joint diagonalisation inside the independent component approach and to the reliance on exact knowledge of the secret information, and how it can be used as an additional, inherent security mechanism against malicious attempts to discover the hidden messages. The paper therefore provides an enhanced mechanism that can be used for e-document forensic analysis and can be applied to digital data media of different dimensionality. Here we use a low-dimensional example of biomedical time series, as might occur in the electronic patient health record, where protection of private patient information is paramount.
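The sensitivity that underpins this security mechanism can be illustrated numerically. The toy example below (a hypothetical 2x2 mixing matrix, not the paper's joint-diagonalisation algorithm) shows that the exact secret mixing matrix recovers the hidden message essentially perfectly, while a key perturbed by only 0.05 per entry leaves a clearly measurable reconstruction error.

```python
import numpy as np

# Two "sources": a hidden message and a cover signal (both hypothetical).
t = np.linspace(0, 1, 500)
s = np.vstack([np.sign(np.sin(2 * np.pi * 13 * t)),   # hidden bit-like message
               np.sin(2 * np.pi * 3 * t)])            # cover waveform
A = np.array([[0.9, 0.4], [0.3, 1.1]])                # secret mixing matrix
x = A @ s                                             # observed stego mixtures

recovered = np.linalg.inv(A) @ x                      # demix with the exact key
A_wrong = A + 0.05                                    # slightly perturbed key
leaked = np.linalg.inv(A_wrong) @ x                   # demix with the wrong key

err_exact = np.mean((recovered[0] - s[0]) ** 2)       # ~ machine precision
err_wrong = np.mean((leaked[0] - s[0]) ** 2)          # orders of magnitude larger
```

The gap between `err_exact` and `err_wrong` is the point: without exact knowledge of the secret information, reconstruction degrades sharply, which is precisely what makes the sensitivity useful against malicious attack.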

Relevance: 100.00%

Abstract:

Over the last twenty years, we have continuously seen R&D efforts in developing optical fibre grating devices and technologies and in exploring their applications in telecommunications, optical signal processing and smart sensing, and more recently in medical care and biophotonics. In addition, we have witnessed the successful commercialisation of this R&D, especially in the area of fibre Bragg grating (FBG) based distributed sensor network systems and technologies for engineering structure monitoring in industrial sectors such as oil, energy and civil engineering. Despite countless published reports and papers and commercial realisation, significant and novel research activity continues in this area. This invited paper gives an overview of recent advances in fibre grating devices and their sensing applications, with a focus on novel fibre gratings, their functions, and grating structures in speciality fibres. The most recent developments in (i) femtosecond inscription of microfluidic/grating devices, (ii) tilted-grating-based novel polarisation devices and (iii) dual-peak long-period-grating-based DNA hybridisation sensors are discussed.

Relevance: 100.00%

Abstract:

This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action, directly measuring neuronal activity via the resulting magnetic field fluctuations, MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands, the so-called alpha, delta, beta and other bands commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that a much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio that is obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks: in particular, it is difficult to synchronise high-frequency activity which might be of interest, and these signals are often cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. As a result, the use of MEG has hitherto been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
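A basic dynamical-systems tool for single-channel analysis of the kind advocated here is time-delay embedding, which reconstructs a multidimensional state trajectory from one scalar time series (Takens' theorem). The sketch below is a minimal generic implementation, not code from the thesis; `dim` and `tau` are the embedding dimension and lag.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series x into dim-dimensional
    delay vectors [x[j], x[j+tau], ..., x[j+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau          # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

Each row of the result is one reconstructed state vector; nonlinear measures (correlation dimension, Lyapunov exponents, nonlinear prediction error) are then computed on this trajectory rather than on the raw series.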

Relevance: 100.00%

Abstract:

Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory, building on results from the image processing community, to show that the majority of these algorithms, and others proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem; we also showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and we show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others, such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering, are touched on.
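To give a flavour of the mean-shift ingredient mentioned above, the sketch below clusters the sample values of a noisy piecewise-constant signal so that each sample converges to the mean of its level. This is a minimal hard-kernel, non-blurring mean shift for illustration only, not one of the paper's combined methods; `bandwidth` is an assumed kernel width.

```python
import numpy as np

def mean_shift_levels(x, bandwidth, n_iter=50):
    """Mean shift on sample *values*: each sample repeatedly moves to the
    average of the original samples whose current values lie within
    `bandwidth` of its own, so values cluster at the PWC levels."""
    y = x.astype(float).copy()
    for _ in range(n_iter):
        # hard-kernel membership between current values (n x n boolean)
        w = np.abs(y[:, None] - y[None, :]) <= bandwidth
        # move each value to the mean of the original samples in its window
        y = (w * x).sum(axis=1) / w.sum(axis=1)
    return y
```

On a signal with two well-separated levels, the output collapses each noisy plateau onto a single constant value, which is the PWC structure the denoising problem is after.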

Relevance: 100.00%

Abstract:

The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.

Relevance: 100.00%

Abstract:

Optical differentiators constitute a basic device for analog all-optical signal processing [1]. Fiber grating approaches, both fiber Bragg gratings (FBGs) and long period gratings (LPGs), constitute an attractive solution because of their low cost, low insertion losses, and full compatibility with fiber optic systems. A first-order differentiator LPG approach was proposed and demonstrated in [2], but FBGs may be preferred in applications with a bandwidth of up to a few nm because of the extreme sensitivity of LPGs to environmental fluctuations [3]. Several FBG approaches have been proposed in [3-6], requiring one or more additional optical elements to create a first-order differentiator. A very simple, single-optical-element FBG approach was proposed in [7] for first-order differentiation, applying the well-known logarithmic Hilbert transform relation between the amplitude and phase of an FBG in transmission [8]. Using this relationship in the design process, it was theoretically and numerically demonstrated that a single FBG in transmission can be designed to simultaneously approximate the amplitude and phase of a first-order differentiator spectral response, without the need for any additional elements. © 2013 IEEE.
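The target spectral response of a first-order differentiator is H(ω) = iω. The sketch below applies this ideal response in the discrete frequency domain; it is an illustrative numerical analogue of what the FBG approximates optically, not an FBG design.

```python
import numpy as np

def spectral_derivative(x, dt):
    """Differentiate a (periodic) real signal by applying the ideal
    first-order differentiator response H(w) = i*w in the Fourier domain."""
    w = 2 * np.pi * np.fft.fftfreq(x.size, d=dt)   # angular frequency grid
    return np.fft.ifft(1j * w * np.fft.fft(x)).real
```

For a band-limited periodic input such as a single sinusoid, this reproduces the analytic derivative to machine precision, which is the behaviour the grating's amplitude and phase response are designed to approach over its working bandwidth.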

Relevance: 100.00%

Abstract:

In this report we summarize the state of the art of speech emotion recognition from the signal processing point of view. On the basis of multi-corpus experiments with machine-learning classifiers, we observe that existing approaches to supervised machine learning lead to database-dependent classifiers which cannot be applied to multi-language speech emotion recognition without additional training, because they discriminate between emotion classes according to the training language used. As experimental results show that humans can perform language-independent categorisation, we draw a parallel between machine recognition and the cognitive process and try to discover the sources of these divergent results. The analysis suggests that the main difference is that speech perception allows the extraction of language-independent features, even though language-dependent features are incorporated at all levels of the speech signal and play a strong discriminative role in human perception. Based on several results in related domains, we further suggest that the cognitive process of emotion recognition is based on categorisation, assisted by a hierarchical structure of emotional categories present in the cognitive space of all humans. We propose a strategy for developing language-independent machine emotion recognition, based on the identification of language-independent speech features and the use of additional information from visual (expression) features.

Relevance: 100.00%

Abstract:

Gastroesophageal reflux disease (GERD) is a common cause of chronic cough. For the diagnosis and treatment of GERD, it is desirable to quantify the temporal correlation between cough and reflux events. Cough episodes can be identified on esophageal manometric recordings as short-duration, rapid pressure rises. The present study aims at facilitating the detection of coughs by proposing an algorithm for the classification of cough events using manometric recordings. The algorithm detects cough episodes based on digital filtering, slope and amplitude analysis, and duration of the event. The algorithm has been tested on in vivo data acquired using a single-channel intra-esophageal manometric probe that comprises a miniature white-light interferometric fiber optic pressure sensor. Experimental results demonstrate the feasibility of using the proposed algorithm for identifying cough episodes based on real-time recordings using a single channel pressure catheter. The presented work can be integrated with commercial reflux pH/impedance probes to facilitate simultaneous 24-hour ambulatory monitoring of cough and reflux events, with the ultimate goal of quantifying the temporal correlation between the two types of events.
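The classification logic described, digital filtering/slope analysis, an amplitude criterion and a duration check, can be sketched as below. This is an illustrative reimplementation of the general idea, not the paper's algorithm; the thresholds and the synthetic pressure trace are hypothetical, and coughs are modelled simply as short, rapid pressure rises.

```python
import numpy as np

def detect_coughs(p, fs, slope_thresh, amp_thresh, max_dur):
    """Flag candidate cough episodes in an oesophageal pressure trace p
    (sampled at fs Hz): group samples whose slope exceeds slope_thresh
    into events, keeping an event only if its pressure rise exceeds
    amp_thresh and its duration stays below max_dur seconds."""
    slope = np.gradient(p) * fs              # pressure rate of change (units/s)
    active = slope > slope_thresh            # samples with a rapid rise
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                        # event opens
        elif not a and start is not None:    # event closes: apply criteria
            dur = (i - start) / fs
            rise = p[start:i + 1].max() - p[start]
            if rise >= amp_thresh and dur <= max_dur:
                events.append((start, i))
            start = None
    return events
```

In the paper's setting this kind of detector would run on the manometric recording from the single-channel fibre optic pressure catheter, with thresholds tuned to the in vivo data.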

Relevance: 100.00%

Abstract:

Congenital nystagmus is an ocular-motor disorder that develops in the first few months of life; its pathogenesis is still unknown. Patients affected by congenital nystagmus show continuous, involuntary, rhythmical oscillations of the eyes. By monitoring eye movements, the main features of the nystagmus, such as shape, amplitude and frequency, can be extracted and analysed. Previous studies highlighted, in some cases, a much slower and smaller oscillation superimposed on the ordinary nystagmus waveform. This sort of baseline oscillation, or slow nystagmus, hinders precise cycle-to-cycle image placement on the fovea, and the resulting positional variability may reduce the patient's visual acuity. This study aims to analyse eye movement recordings more extensively, including the baseline oscillation, and to investigate possible relationships between these slow oscillations and the nystagmus. Almost 100 eye movement recordings (either infrared-oculographic or electrooculographic), taken at different gaze positions from 32 congenital nystagmus patients, were analysed. The baseline oscillation was assumed to be sinusoidal; its amplitude and frequency were computed and compared with those of the nystagmus by means of a linear regression analysis. The results showed that baseline oscillations were characterised by an average frequency of 0.36 Hz (SD 0.11 Hz) and an average amplitude of 2.1° (SD 1.6°). A considerable correlation (R² = 0.78) was also found between nystagmus amplitude and baseline oscillation amplitude; the latter was, on average, about one-half of the corresponding nystagmus amplitude. © 2009 Elsevier Ltd. All rights reserved.
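The two analysis steps described, a fixed-frequency sinusoidal fit of the baseline oscillation and a linear-regression comparison of amplitudes, can be sketched as follows. This is an illustrative least-squares version, not the study's code; the frequencies and signals used are assumptions.

```python
import numpy as np

def fit_sinusoid(t, y, freq):
    """Least-squares fit of y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c at a
    fixed frequency; returns the amplitude sqrt(a^2 + b^2) of the fitted
    baseline oscillation."""
    X = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(coef[0], coef[1])

def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x,
    as used to relate baseline-oscillation and nystagmus amplitudes."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()
```

Applied per recording, `fit_sinusoid` would give the baseline amplitude at the estimated slow frequency (around 0.36 Hz here), and `r_squared` the strength of the amplitude-amplitude relationship (0.78 in the study).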

Relevance: 100.00%

Abstract:

This paper considers the problem of low-dimensional visualisation of very high-dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision-support aids that reduce information overload in the below-water maritime domain (specifically, for data amenable to inter-point relative similarity measures), we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms, building on the approach of [9]. Concepts are illustrated using a real-world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
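One standard baseline for turning a relative-dissimilarity representation into a low-dimensional visualisation is classical multidimensional scaling; the topographic methods referenced in the paper build on related ideas. The sketch below is that generic baseline only, not the paper's model (which additionally incorporates a noise model).

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed n items in `dim` dimensions
    so that Euclidean distances approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]     # keep the largest eigenvalues
    # coordinates: eigenvectors scaled by sqrt of (non-negative) eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

For a sonar application, `D` would hold pairwise dissimilarities between hydrophone signals (or signal features), and each embedded point would be one signal in the 2-D situation-awareness display.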

Relevance: 100.00%

Abstract:

The goal of this paper is to model normal airframe conditions for helicopters in order to detect changes. This is done by inferring the flying state using a selection of sensors and frequency bands that are best for discriminating between different states. We used non-linear state-space models (NLSSMs) for modelling flight conditions based on short-time frequency analysis of the vibration data, and embedded the models in a switching framework to detect transitions between states. We then created a density model (using a Gaussian mixture model) for the NLSSM innovations: this provides a model of normal operation. To validate our approach, we used data with added synthetic abnormalities, which were detected as low-probability periods. The model of normality gave good indications of faults during flight, in the form of low probabilities under the model, with high accuracy (>92%). © 2013 IEEE.
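The normality-model-and-flagging step can be sketched as below, with a single Gaussian standing in for the paper's Gaussian mixture model; the innovation sequence and log-likelihood threshold are hypothetical.

```python
import numpy as np

def fit_normal_model(innov):
    """Fit a density model of normal-operation innovations.
    (Single Gaussian here; the paper uses a Gaussian mixture.)"""
    return innov.mean(), innov.std()

def flag_abnormal(innov, mu, sigma, thresh_ll):
    """Flag samples whose log-likelihood under the normality model falls
    below thresh_ll: low-probability periods indicate possible faults."""
    ll = -0.5 * ((innov - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return ll < thresh_ll
```

The mixture version replaces the single log-density with a log-sum of weighted component densities, but the detection logic, thresholding the probability of each innovation under the model of normality, is the same.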

Relevance: 100.00%

Abstract:

For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, “wearable,” sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that “learn” from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses implications of this technology and a practical road map for realizing the full potential of this technology in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.