876 results for Signal filtering and prediction
Abstract:
During the past 15 years, a number of initiatives have been undertaken at national level to develop ocean forecasting systems operating at regional and/or global scales. The co-ordination between these efforts has been organized internationally through the Global Ocean Data Assimilation Experiment (GODAE). The French MERCATOR project is one of the leading participants in GODAE. The MERCATOR systems routinely assimilate a variety of observations such as multi-satellite altimeter data, sea-surface temperature and in situ temperature and salinity profiles, focusing on high-resolution scales of the ocean dynamics. The assimilation strategy in MERCATOR is based on a hierarchy of methods of increasing sophistication including optimal interpolation, Kalman filtering and variational methods, which are progressively deployed through the Système d'Assimilation MERCATOR (SAM) series. SAM-1 is based on a reduced-order optimal interpolation which can be operated using 'altimetry-only' or 'multi-data' set-ups; it relies on the concept of separability, assuming that the correlations can be separated into a product of horizontal and vertical contributions. The second release, SAM-2, is being developed to include new features from the singular evolutive extended Kalman (SEEK) filter, such as three-dimensional, multivariate error modes and adaptivity schemes. The third one, SAM-3, considers variational methods such as the incremental four-dimensional variational algorithm. Most operational forecasting systems evaluated during GODAE are based on least-squares statistical estimation assuming Gaussian errors. In the framework of the EU MERSEA (Marine EnviRonment and Security for the European Area) project, research is being conducted to prepare the next-generation operational ocean monitoring and forecasting systems. The research effort will explore nonlinear assimilation formulations to overcome limitations of the current systems. This paper provides an overview of the developments conducted in MERSEA with the SEEK filter, the Ensemble Kalman filter and the sequential importance re-sampling filter.
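To make the sequential estimation machinery concrete, here is a minimal sketch of one linear Kalman filter forecast/analysis cycle in Python. It is a textbook illustration only: the matrices, dimensions and noise levels are arbitrary stand-ins, not the configuration of the SAM systems, whose SEEK and ensemble variants use reduced-order error covariances rather than the full matrices shown here.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One forecast/analysis cycle of a linear Kalman filter.

    x, P : prior state estimate and its error covariance
    z    : observation vector
    F, H : model dynamics and observation operators
    Q, R : model-error and observation-error covariances
    """
    # Forecast step: propagate state and covariance with the model.
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # Analysis step: the Kalman gain weights the innovation (z - H x_f).
    S = H @ P_f @ H.T + R                  # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# One-dimensional toy usage with identity dynamics and unit noise.
x, P = np.array([0.0]), np.eye(1)
F = H = Q = R = np.eye(1)
x, P = kalman_step(x, P, np.array([1.2]), F, H, Q, R)
```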
Abstract:
This paper introduces a procedure for filtering electromyographic (EMG) signals. Its key element is the Empirical Mode Decomposition, a novel digital signal processing technique that can decompose any time-series into a set of functions designated as intrinsic mode functions. The procedure for EMG signal filtering is compared to a related approach based on the wavelet transform. Results obtained from the analysis of synthetic and experimental EMG signals show that our method can be successfully and easily applied in practice to attenuate background activity in EMG signals.
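As a rough illustration of EMD-based denoising in general (not the authors' exact procedure), the sketch below decomposes a signal into intrinsic mode functions and reconstructs it without selected modes, assuming the third-party PyEMD package (pip package EMD-signal). Which IMFs carry the background activity is signal-dependent, so discarding IMF 0 here is purely illustrative.

```python
import numpy as np
from PyEMD import EMD  # third-party package: pip install EMD-signal

def emd_denoise(signal, drop=(0,)):
    """Decompose a 1-D signal into IMFs and rebuild it without some of them.

    drop : indices of IMFs to discard (illustrative choice: IMF 0 is the
           highest-frequency component, often dominated by noise).
    """
    imfs = EMD().emd(signal)               # intrinsic mode functions
    keep = [i for i in range(len(imfs)) if i not in drop]
    return np.sum(imfs[keep], axis=0)

# Example: a burst-like signal buried in additive noise.
t = np.linspace(0, 1, 2000)
raw = np.sin(2 * np.pi * 40 * t) * (t > 0.4) + 0.3 * np.random.randn(t.size)
clean = emd_denoise(raw)
```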
Abstract:
To investigate the nature of plasticity in the adult visual system, perceptual learning was measured in a peripheral orientation discrimination task with systematically varying amounts of external (environmental) noise. The signal contrasts required to achieve threshold were reduced by a factor of two or more after training at all levels of external noise. The strong quantitative regularities revealed by this novel paradigm ruled out changes in multiplicative internal noise, changes in transducer nonlinearities, and simple attentional trade-offs. Instead, the regularities specify the mechanisms of perceptual learning at the behavioral level as a combination of external noise exclusion and stimulus enhancement via additive internal noise reduction. The findings also constrain the neural architecture of perceptual learning. Plasticity in the weights between basic visual channels and the decision unit is sufficient to account for perceptual learning without requiring the retuning of visual mechanisms.
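The mechanism argument turns on how contrast threshold varies with external noise. A much-reduced equivalent-input-noise model (a simplification of the perceptual template model; all parameter values below are illustrative, not fitted to the study's data) shows how reducing additive internal noise and excluding external noise together produce a roughly uniform threshold improvement across noise levels:

```python
import numpy as np

def threshold(n_ext, n_eq=0.1, gain=1.0, exclusion=1.0, d_prime=1.5):
    """Contrast threshold under a simplified equivalent-input-noise model.

    exclusion < 1 models external noise exclusion (filters out N_ext);
    a smaller n_eq models stimulus enhancement (less additive noise).
    """
    return (d_prime / gain) * np.sqrt((exclusion * n_ext) ** 2 + n_eq ** 2)

n_ext = np.logspace(-2, 0, 7)
pre  = threshold(n_ext)                               # before training
post = threshold(n_ext, n_eq=0.05, exclusion=0.5)     # both mechanisms
print(np.round(pre / post, 2))  # ~2x improvement at every noise level
```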
Abstract:
The large number of protein kinases makes it impractical to determine their specificities and substrates experimentally. Using the available crystal structures, molecular modeling, and sequence analyses of kinases and substrates, we developed a set of rules governing the binding of a heptapeptide substrate motif (surrounding the phosphorylation site) to the kinase and implemented these rules in a web-interfaced program for automated prediction of optimal substrate peptides, taking only the amino acid sequence of a protein kinase as input. We show the utility of the method by analyzing yeast cell cycle control and DNA damage checkpoint pathways. Our method is the only available predictive method generally applicable for identifying possible substrate proteins for protein serine/threonine kinases and helps in silico construction of signaling pathways. The accuracy of prediction is comparable to the accuracy of data from systematic large-scale experimental approaches.
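The paper's rules are derived from kinase structure; the sketch below only illustrates the downstream scanning step, scoring heptapeptides (positions -3 to +3 around each candidate Ser/Thr) against a position-specific preference table. The weights and the example sequence are hypothetical, for illustration only.

```python
# Illustrative scan for heptapeptide substrate motifs around candidate
# Ser/Thr phosphosites. The scoring weights below are hypothetical; the
# actual method derives its rules from kinase structure and sequence.
PSSM = {  # position relative to the phosphosite -> {residue: score}
    -3: {"R": 2.0, "K": 1.5},
    -2: {"R": 1.0},
    +1: {"P": 1.8},          # e.g. a proline-directed preference
    +3: {"L": 0.8, "I": 0.8},
}

def score_site(seq, i):
    """Score the heptapeptide centred on position i (a Ser or Thr)."""
    total = 0.0
    for offset, prefs in PSSM.items():
        j = i + offset
        if 0 <= j < len(seq):
            total += prefs.get(seq[j], 0.0)
    return total

protein = "MKRRASLPEQNTLSPIKRTTT"   # hypothetical sequence
sites = [(i, score_site(protein, i))
         for i, aa in enumerate(protein) if aa in "ST"]
for i, s in sorted(sites, key=lambda x: -x[1])[:3]:
    print(i, protein[max(0, i - 3):i + 4], round(s, 1))
```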
Abstract:
How does the brain combine spatio-temporal signals from the two eyes? We quantified binocular summation as the improvement in 2AFC contrast sensitivity for flickering gratings seen by two eyes compared with one. Binocular gratings in-phase showed sensitivity up to 1.8 times higher, suggesting nearly linear summation of contrasts. The binocular advantage decreased to 1.4 at lower spatial and higher temporal frequencies (0.25 cycles deg⁻¹, 30 Hz). Dichoptic, antiphase gratings showed only a small binocular advantage, by a factor of 1.1 to 1.2, but no evidence of cancellation. We present a signal-processing model to account for the contrast-sensitivity functions and the pattern of binocular summation. It has linear sustained and transient temporal filters, nonlinear transduction, and half-wave rectification that creates ON and OFF channels. Binocular summation occurs separately within ON and OFF channels, thus explaining the phase-specific binocular advantage. The model also accounts for earlier findings on detection of brief antiphase flashes and the surprising finding that dichoptic antiphase flicker is seen as frequency-doubled (Cavonius et al, 1992 Ophthalmic and Physiological Optics 12 153-156). [Supported by EPSRC project GR/S74515/01].
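A minimal sketch of the rectification-and-summation stage of such a model (omitting the temporal filters and nonlinear transduction described above) shows why in-phase signals nearly double the response while antiphase signals neither sum nor cancel:

```python
import numpy as np

def on_off(x):
    """Half-wave rectification into ON (positive) and OFF (negative) channels."""
    return np.maximum(x, 0.0), np.maximum(-x, 0.0)

def binocular_response(left, right):
    """Sum left- and right-eye signals within ON and OFF channels."""
    on_l, off_l = on_off(left)
    on_r, off_r = on_off(right)
    on, off = on_l + on_r, off_l + off_r   # summation within each channel
    return max(on.max(), off.max())        # peak channel response

t = np.linspace(0, 1, 1000)
g = np.sin(2 * np.pi * 4 * t)              # 4 Hz flicker
print(binocular_response(g, g))            # in-phase: ~2x the monocular peak
print(binocular_response(g, -g))           # antiphase: ~1x, no cancellation
```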
Abstract:
With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted and incomplete data streams gathered wirelessly under dynamically varying conditions. Yet, existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To reduce the complexity of the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system, showing that segregating the data adaptation and prediction processes augments data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-converging adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation and homeland security.
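As a toy illustration of prediction-driven data reduction (a simplified dead-band stand-in for the dissertation's adaptation and prediction processes), the sketch below transmits a reading only when it deviates from the value the sink would otherwise assume, so sender and sink stay synchronized on the transmitted values:

```python
def reduce_stream(samples, tol=0.5):
    """Dead-band data reduction: transmit a reading only when it deviates
    from the last transmitted value by more than `tol`; the sink holds
    the last value in between. Returns the transmitted (index, value) pairs.
    """
    sent, last = [], None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > tol:
            sent.append((i, x))
            last = x
    return sent

readings = [20.0, 20.1, 20.2, 20.1, 23.5, 23.6, 23.4, 23.5]
print(reduce_stream(readings))   # only the two level shifts are transmitted
```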
Abstract:
A recommender system is a type of intelligent system that exploits historical user ratings on items and/or auxiliary information to make recommendations on items to users. It plays a critical role in a wide range of online shopping, e-commerce services and social networking applications. Collaborative filtering (CF) is the most popular approach used in recommender systems, but it suffers from the complete cold start (CCS) problem, where no rating records are available, and the incomplete cold start (ICS) problem, where only a small number of rating records are available for some new items or users in the system. In this paper, we propose two recommendation models to solve the CCS and ICS problems for new items, based on a framework that tightly couples a CF approach with a deep learning neural network. A specific deep neural network, SADE, is used to extract the content features of the items. The state-of-the-art CF model timeSVD++, which models and utilizes temporal dynamics of user preferences and item features, is modified to take the content features into the prediction of ratings for cold start items. Extensive experiments on a large Netflix rating dataset of movies show that our proposed recommendation models largely outperform the baseline models for rating prediction of cold start items. The two proposed recommendation models are also evaluated and compared on ICS items, and a flexible scheme of model retraining and switching is proposed to handle the transition of items from cold start to non-cold start status. The experimental results on Netflix movie recommendation show that the tight coupling of a CF approach and a deep learning neural network is feasible and very effective for cold start item recommendation. The design is general and can be applied to many other recommender systems for online shopping and social networking applications. Solving the cold start item problem can largely improve user experience and trust in recommender systems, and effectively promote cold start items.
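A minimal sketch of the coupling idea, not the paper's SADE/timeSVD++ pipeline: for a cold start item with no rating history, a latent factor vector is inferred from its content features through a learned mapping and then used in the usual bias-plus-dot-product prediction. All quantities below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 8, 32                    # latent dimension, content-feature dimension

# Stand-ins for trained quantities (CF factors and a learned
# content-to-factor mapping); random here purely for illustration.
mu = 3.5                        # global mean rating
user_factors = {"u1": rng.normal(0, 0.1, k)}
user_bias = {"u1": 0.2}
W = rng.normal(0, 0.1, (k, d))  # maps content features to item factors

def predict_cold(user, content_features):
    """Predict a rating for an item with no rating history: its latent
    vector is inferred from content features instead of learned from data."""
    q = W @ content_features            # surrogate item factor vector
    return mu + user_bias[user] + user_factors[user] @ q

features = rng.normal(0, 1, d)          # e.g. extracted from item text/metadata
print(round(predict_cold("u1", features), 2))
```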
Abstract:
Recommender systems (RS) are used by many social networking applications and online e-commerce services. Collaborative filtering (CF) is one of the most popular approaches used in RS. However, the traditional CF approach suffers from sparsity and cold start problems. In this paper, we propose a hybrid recommendation model to address the cold start problem, which exploits the item content features learned by a deep learning neural network and applies them to the timeSVD++ CF model. Extensive experiments are run on a large Netflix rating dataset for movies. The experimental results show that the proposed hybrid recommendation model provides good predictions for cold start items, and performs better than four existing recommendation models on rating prediction for non-cold start items.
Abstract:
This paper describes a number of techniques for GNSS navigation message authentication. A detailed analysis of the security facilitated by navigation message authentication is given. The analysis takes into consideration the risks to critical applications that rely on GPS, including transportation, finance and telecommunication networks. We propose a number of cryptographic schemes for navigation data authentication, which provide authenticity and integrity of the navigation data to the receiver. Through software simulation, the performance of the schemes is quantified. The use of software simulation enables the collection of authentication performance data for different data channels, and an assessment of the impact of the various schemes on the infrastructure and receiver. Navigation message authentication schemes have been simulated at the proposed data rates of the Galileo and GPS services, and the resulting performance data are presented. The paper concludes by making recommendations for an optimal implementation of navigation message authentication for Galileo and next-generation GPS systems.
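One widely discussed family of navigation message authentication schemes is TESLA-style delayed key disclosure. The toy sketch below (standard library only, not one of the paper's simulated schemes) derives MAC keys from a one-way hash chain, so a receiver can verify a later-disclosed key against a trusted anchor before checking the message MAC:

```python
import hashlib, hmac

def key_chain(seed: bytes, n: int):
    """One-way key chain: each key hashes to its predecessor, so any
    disclosed key can be checked against the public anchor keys[0]."""
    keys = [seed]
    for _ in range(n):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys[::-1]   # keys[0] = anchor; later keys are disclosed later

keys = key_chain(b"broadcaster secret", n=4)
anchor = keys[0]                         # distributed authentically in advance

# Epoch 2: broadcast (message, MAC under keys[2]); disclose keys[2] later.
msg = b"nav message: ephemeris + clock, epoch 2"
tag = hmac.new(keys[2], msg, hashlib.sha256).digest()

# Receiver, once keys[2] is disclosed: hashing it twice must reach the
# anchor (proving it lies on the chain), then the MAC is verified.
disclosed = keys[2]
k = disclosed
for _ in range(2):
    k = hashlib.sha256(k).digest()
assert k == anchor, "disclosed key not on the chain"
assert hmac.compare_digest(tag, hmac.new(disclosed, msg, hashlib.sha256).digest())
print("navigation message authenticated")
```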