832 results for Recursive Filtering
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed to produce error covariance estimates, as the error is propagated in time by the non-linear model. Variational methods, on the other hand, draw on concepts from control theory, whereby the state estimate is optimized using both the background and the measurements. Numerical optimization schemes are applied, which avoid the memory storage and huge matrix inversions required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements become available. One advantage of VEnKF over VKF is that it requires neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The assimilation results were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure through files.
The advantage of this method is that the required changes to the model code are minimal: only a few lines that handle input and output. Apart from being simple to implement, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach also accommodates parallel computing: the control program is simply told to wait until all processes have finished before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009; the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The VEnKF results were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, because the TSM data are sparse in both time and space, the match was poor. The use of multiple automatic stations with real-time data is important to alleviate the time-sparsity problem; combined with DA, this would help, for instance, in better understanding environmental hazard variables. We also found that using a very large ensemble does not necessarily improve the results: beyond a certain ensemble size, additional members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the performance limit on ensemble size lead to an emerging area, Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. Applying ROM with the non-intrusive DA approach might yield a cheaper algorithm that eases the computational challenges in the field of modelling and DA.
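The file-based, non-intrusive coupling described above can be sketched as a minimal control loop. Everything here is a toy stand-in: the JSON files, the `run_model`/`run_da` functions and their update rules are hypothetical placeholders for the external model executable and the VEnKF procedure.

```python
import json
import os
import tempfile

def run_model(state_file, output_file):
    # Stand-in for launching the forecast model as an external process;
    # in practice this would be a subprocess call to the model executable.
    with open(state_file) as f:
        state = json.load(f)
    state = [x * 1.05 for x in state]  # toy "forecast" step
    with open(output_file, "w") as f:
        json.dump(state, f)

def run_da(forecast_file, obs, analysis_file):
    # Stand-in for the DA procedure (e.g. VEnKF); here a trivial nudge
    # of the forecast toward the observations.
    with open(forecast_file) as f:
        forecast = json.load(f)
    analysis = [0.5 * (x + y) for x, y in zip(forecast, obs)]
    with open(analysis_file, "w") as f:
        json.dump(analysis, f)

def assimilation_cycle(initial_state, observations, workdir):
    # Control program: alternate model and DA, communicating only via files,
    # so neither code needs to know about the other's internals.
    state = initial_state
    state_file = os.path.join(workdir, "state.json")
    forecast_file = os.path.join(workdir, "forecast.json")
    analysis_file = os.path.join(workdir, "analysis.json")
    for obs in observations:
        with open(state_file, "w") as f:
            json.dump(state, f)
        run_model(state_file, forecast_file)
        run_da(forecast_file, obs, analysis_file)
        with open(analysis_file) as f:
            state = json.load(f)
    return state
```

Because the exchange goes through files, the model and the DA code could be written in different languages, and a parallel model run only needs the control loop to wait for all processes before invoking `run_da`.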
Abstract:
Recent research on affective processing has suggested that the low spatial frequency information of fearful faces provides rapid emotional cues to the amygdala, whereas high spatial frequencies convey fine-grained information to the fusiform gyrus, regardless of emotional expression. In the present experiment, we examined the effects of low (LSF, <15 cycles/image width) and high spatial frequency filtering (HSF, >25 cycles/image width) on brain processing of complex pictures depicting pleasant, unpleasant, and neutral scenes. Event-related potentials (ERP), percentage of recognized stimuli and response times were recorded in 19 healthy volunteers. Behavioral results indicated faster reaction times in response to unpleasant LSF than to unpleasant HSF pictures. Unpleasant LSF pictures and pleasant unfiltered pictures also elicited significant enhancements of P1 amplitudes at occipital electrodes as compared to neutral LSF and unfiltered pictures, respectively, whereas no significant effects of affective modulation were found for HSF pictures. Moreover, mean ERP amplitudes between 200 and 500 ms post-stimulus were significantly greater for affective (pleasant and unpleasant) than for neutral unfiltered pictures, whereas no significant affective modulation was found for HSF or LSF pictures at those latencies. The fact that affective LSF pictures elicited an enhancement of brain responses at early, but not at later, latencies suggests the existence of a rapid and preattentive neural mechanism for the processing of motivationally relevant stimuli, which could be driven by LSF cues. Our findings thus confirm previous results showing differences in the brain processing of affective LSF and HSF faces, and extend these results to more complex and social affective pictures.
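The low/high spatial frequency split used in such experiments can be illustrated with a hard FFT mask, with the cutoff expressed in cycles per image dimension. This is a generic sketch, not the exact filter used in the study (whose filter shape and implementation are not specified here).

```python
import numpy as np

def spatial_frequency_filter(image, cutoff, mode="low"):
    """Keep spatial frequencies at or below `cutoff` (mode="low") or
    strictly above it (mode="high"), measured in cycles per image
    dimension, via a hard mask in the 2-D Fourier domain."""
    h, w = image.shape
    F = np.fft.fft2(image)
    ky = np.fft.fftfreq(h) * h          # cycles per image height
    kx = np.fft.fftfreq(w) * w          # cycles per image width
    radius = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    mask = radius <= cutoff if mode == "low" else radius > cutoff
    # Zero the masked-out coefficients and transform back.
    return np.real(np.fft.ifft2(F * mask))
```

By construction the two masks partition the spectrum, so the LSF and HSF versions of a picture sum back to the original.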
Abstract:
Nearest neighbour collaborative filtering (NNCF) algorithms are commonly used in multimedia recommender systems to suggest media items based on the ratings of users with similar preferences. However, the prediction accuracy of NNCF algorithms is affected by the reduced number of items - the subset of items co-rated by both users - typically used to determine the similarity between pairs of users. In this paper, we propose a different approach, which substantially enhances the accuracy of the neighbour selection process: a user-based CF (UbCF) with semantic neighbour discovery (SND). Our neighbour discovery methodology assesses pairs of users by taking into account all the items rated by at least one of the users instead of just the set of co-rated items, semantically enriches this enlarged set of items using linked data and, finally, applies the Collinearity and Proximity Similarity metric (CPS), which combines cosine similarity with a Chebyshev-distance dissimilarity metric. We tested the proposed SND against the Pearson Correlation neighbour discovery algorithm off-line, using the HetRec data set, and the results show a clear improvement in terms of accuracy and execution time for the predicted recommendations.
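The CPS metric is described as combining cosine similarity (collinearity) with a Chebyshev-distance dissimilarity (proximity). A minimal sketch of one such combination is below; the blend weight `alpha` and the 1/(1+d) mapping from distance to proximity are assumptions for illustration, not the paper's exact formula.

```python
import math

def cps_similarity(u, v, alpha=0.5):
    """Blend of cosine similarity (collinearity) and a proximity term
    derived from the Chebyshev distance between two rating vectors.
    The combination rule here is an assumed example."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cosine = dot / (nu * nv) if nu and nv else 0.0
    # Chebyshev distance: largest per-item rating disagreement.
    chebyshev = max(abs(a - b) for a, b in zip(u, v))
    proximity = 1.0 / (1.0 + chebyshev)
    return alpha * cosine + (1 - alpha) * proximity
```

Identical rating vectors score 1.0, while vectors that point in different directions or disagree strongly on some item score lower on the respective term.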
Abstract:
The Exhibitium Project, awarded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base for a platform for the recording and exploitation of data about art exhibitions available on the Internet. Our proposal therefore aims to expose the methods, procedures and decision-making processes that have governed the technological implementation of this prototype, especially with regard to the reuse of WordPress (WP) as a development framework.
Abstract:
A recommender system is a specific type of intelligent system that exploits historical user ratings on items and/or auxiliary information to make item recommendations to users. It plays a critical role in a wide range of online shopping, e-commerce and social networking applications. Collaborative filtering (CF) is the most popular approach used in recommender systems, but it suffers from the complete cold start (CCS) problem, where no rating records are available, and the incomplete cold start (ICS) problem, where only a small number of rating records are available for some new items or users in the system. In this paper, we propose two recommendation models to solve the CCS and ICS problems for new items, based on a framework that tightly couples a CF approach with a deep learning neural network. A specific deep neural network, SADE, is used to extract the content features of the items. The state-of-the-art CF model timeSVD++, which models and utilizes the temporal dynamics of user preferences and item features, is modified to take the content features into account when predicting ratings for cold start items. Extensive experiments on a large Netflix movie rating dataset show that our proposed recommendation models largely outperform the baseline models for rating prediction of cold start items. The two proposed recommendation models are also evaluated and compared on ICS items, and a flexible scheme of model retraining and switching is proposed to handle the transition of items from cold start to non-cold start status. The experimental results on Netflix movie recommendation show that the tight coupling of a CF approach and a deep learning neural network is feasible and very effective for cold start item recommendation. The design is general and can be applied to many other recommender systems for online shopping and social networking applications.
Solving the cold start item problem can largely improve user experience and trust in recommender systems, and effectively promote cold start items.
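The retraining-and-switching idea for the transition from cold start to non-cold start status can be sketched as a simple dispatch on the number of ratings an item has accumulated. The threshold value and the model interfaces below are illustrative assumptions, not the paper's actual scheme.

```python
def predict_rating(user, item, n_ratings, content_model, cf_model, threshold=10):
    """Route a prediction to the content-based model while the item is
    still cold (few ratings), and to the CF model once enough ratings
    have accumulated. `threshold` is an assumed illustrative value."""
    if n_ratings < threshold:
        return content_model(user, item)   # cold start: use content features
    return cf_model(user, item)            # warm item: use collaborative model
```

In a real system the switch would be accompanied by periodically retraining the CF model so that newly warmed items are covered by it.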
Abstract:
Recommender systems (RS) are used by many social networking applications and online e-commerce services. Collaborative filtering (CF) is one of the most popular approaches used for RS. However, the traditional CF approach suffers from sparsity and cold start problems. In this paper, we propose a hybrid recommendation model to address the cold start problem, which explores the item content features learned by a deep learning neural network and applies them to the timeSVD++ CF model. Extensive experiments are run on a large Netflix movie rating dataset. The results show that the proposed hybrid recommendation model provides good predictions for cold start items, and performs better than four existing recommendation models for rating prediction of non-cold start items.
Abstract:
OBJECTIVE: To identify whether the use of a notch filter significantly affects the morphology or characteristics of the newborn auditory brainstem response (ABR) waveform, and so inform future guidance for clinical practice. DESIGN: Waveforms with and without the application of a notch filter were recorded from babies undergoing routine ABR tests at 4000, 1000 and 500 Hz. Any change in response morphology was judged subjectively. Response latency, amplitude, and measurements of response quality and residual noise were noted. An ABR simulator was also used to assess the effect of notch filtering in conditions of low and high mains interference. RESULTS: The use of a notch filter changed waveform morphology for 500 Hz stimuli only, in 15% of tests in newborns. Residual noise was lower when 4000 Hz stimuli were used. Response latency, amplitude, and quality were unaffected regardless of stimulus frequency. Tests with the ABR simulator suggest that these findings can be extended to conditions of high-level mains interference. CONCLUSIONS: A notch filter should be avoided when testing at 500 Hz, but at higher frequencies it appears to carry no penalty.
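As background, the kind of mains notch filter under discussion can be sketched with a standard biquad design (the widely used audio EQ cookbook form). The 50 Hz notch frequency, 1 kHz sampling rate and Q factor used below are illustrative assumptions, not the study's recording settings.

```python
import math

def notch_coefficients(f0, fs, q=30.0):
    """Biquad notch design (audio EQ cookbook form).
    f0: notch frequency (Hz), fs: sampling rate (Hz), q: quality factor."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    a0 = a[0]
    # Normalize so the leading denominator coefficient is 1.
    return [x / a0 for x in b], [x / a0 for x in a]

def biquad_filter(x, b, a):
    """Direct-form I filtering of the sequence x."""
    y = []
    for n, xn in enumerate(x):
        x1 = x[n - 1] if n >= 1 else 0.0
        x2 = x[n - 2] if n >= 2 else 0.0
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2)
    return y
```

A sinusoid at the notch frequency is driven to (near) zero after the transient, while the DC gain of this design is exactly one, so slow components pass unchanged.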
Abstract:
The conjugate gradient method is one of the most popular optimization methods for solving large systems of linear equations. In a system identification problem, for example, where a very large impulse response is involved, a particular strategy is needed that reduces the delay while improving the convergence time. In this paper we propose a new scheme that combines frequency-domain adaptive filtering with a conjugate gradient technique in order to solve a high-order multichannel adaptive filter, while being delayless and guaranteeing a very short convergence time.
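As background, the plain conjugate gradient iteration for a symmetric positive-definite system can be sketched as follows. This is the textbook method only, not the proposed delayless frequency-domain scheme; the matrix is supplied through a matrix-vector product, which is what makes fast (e.g. FFT-based) implementations possible for large filters.

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only the
    matrix-vector product `matvec`. Starts from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                 # residual b - A x (x = 0 initially)
    p = list(r)                 # first search direction
    rs_old = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        # New direction is conjugate to the previous ones.
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, which is why it is attractive for the large normal-equation systems arising in adaptive filtering.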
Abstract:
Nowadays, leukodepletion is one of the most important processes performed on blood to reduce the risk of transfusion diseases. It can be carried out with different techniques, but the most popular is filtration, owing to its simplicity and efficiency. This work aims at improving a current commercial product by developing a new filter based on a Fenton-type reaction to cross-link a hydrogel onto the base material. Filters for leukodepletion are preferably made with the melt flow technique, resulting in a non-woven tissue; the functionalization should increase the stability of the filter, restricting the extraction of substances to a minimum amount when in contact with blood. Through this modification the filters can acquire new properties, including wettability, surface charge and good resistance to extraction. The most important property for leukodepletion is the surface charge, owing to the nature of the filtration process. All the modified samples were compared with the commercial product. Three different polymers (A, B and C) were studied for the filter modifications, and every modified filter was tested to determine its properties.
Abstract:
Seasonally dry tropical plant formations (SDTF) are likely to exhibit phylogenetic clustering owing to niche conservatism driven by a strong environmental filter (water stress), but heterogeneous edaphic environments and life histories may result in heterogeneity in the degree of phylogenetic clustering. We investigated phylogenetic patterns across ecological gradients related to water availability (edaphic environment and climate) in the Caatinga, a SDTF in Brazil. The Caatinga is characterized by a semiarid climate and three distinct edaphic environments - sedimentary, crystalline, and inselberg - representing a decreasing gradient in soil water availability. We used two measures of phylogenetic diversity: the Net Relatedness Index, based on the entire phylogeny among the species present in a site and reflecting long-term diversification; and the Nearest Taxon Index, based on the tips of the phylogeny and reflecting more recent diversification. We also evaluated woody species in contrast to herbaceous species. The main climatic variable influencing phylogenetic pattern was precipitation in the driest quarter, particularly for herbaceous species, suggesting that environmental filtering related to minimal periods of precipitation is an important driver of Caatinga biodiversity, as one might expect for a SDTF. Woody species tended to show phylogenetic clustering, whereas herbaceous species tended towards phylogenetic overdispersion. We also found phylogenetic clustering in two edaphic environments (sedimentary and crystalline), in contrast to phylogenetic overdispersion in the third (inselberg). We conclude that while niche conservatism is evident in the phylogenetic clustering in the Caatinga, this is not a universal pattern, likely owing to heterogeneity in the degree of realized environmental filtering across edaphic environments. Thus SDTF, in spite of a strong shared environmental filter, are potentially heterogeneous in phylogenetic structuring.
Our results support the need for scientifically informed conservation strategies in the Caatinga and other SDTF regions, which have not previously been prioritized for conservation, in order to take this heterogeneity into account.
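The two indices rest on mean pairwise distance (MPD, underlying the Net Relatedness Index) and mean nearest-taxon distance (MNTD, underlying the Nearest Taxon Index). Below is a minimal sketch of the raw metrics only, without the null-model standardization that turns them into NRI/NTI; the nested-dict distance format is an assumption for illustration.

```python
def mpd(community, dist):
    """Mean pairwise phylogenetic distance among species in a community.
    NRI is the standardized, sign-reversed effect size of this quantity
    against a null model (not computed here)."""
    pairs = [(a, b) for i, a in enumerate(community) for b in community[i + 1:]]
    return sum(dist[a][b] for a, b in pairs) / len(pairs)

def mntd(community, dist):
    """Mean distance from each species to its nearest relative in the
    community; the basis of the Nearest Taxon Index."""
    total = 0.0
    for a in community:
        total += min(dist[a][b] for b in community if b != a)
    return total / len(community)
```

Clustering corresponds to observed MPD/MNTD being smaller than expected under the null model (co-occurring species more related than chance), overdispersion to larger than expected.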
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance for FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested in a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, human contrast sensitivity was higher for radially than for angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model showed similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
Abstract:
Estuarine hydrodynamics is a key factor in the definition of the filtering capacity of an estuary and results from the interaction of the processes that control the inlet morphodynamics and those acting in the mixing of the water in the estuary. The hydrodynamics and suspended sediment transport in the Camboriú estuary were assessed in two field campaigns conducted in 1998 that covered both neap and spring tide conditions. The measured period represents the estuarine hydrodynamics and sediment transport prior to the construction of the jetty in 2003 and provides important background information for the Camboriú estuary. Each field campaign covered two complete tidal cycles with hourly measurements of currents, salinity, suspended sediment concentration and water level. The results show that the Camboriú estuary is partially mixed, with the vertical structure varying as a function of tidal range and tidal phase. The dynamic estuarine structure is a balance between the stabilizing effects generated by the vertical density gradient, which produces buoyancy and stratification flows, and the turbulent effects generated by the vertical velocity gradient, which generates vertical mixing. The main sediment source for the water column is the bottom sediments, periodically resuspended by the tidal currents. The advective salt and suspended sediment transport differed between neap and spring tides, being more complex at spring tide. The river discharge term was important under both tidal conditions. The tidal correlation term was also important, being dominant in the suspended sediment transport during the spring tide. Gravitational circulation and Stokes drift played a secondary role in the estuarine transport processes.
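The transport terms mentioned above come from decomposing the tidally averaged flux into a mean (river discharge) contribution and fluctuation contributions. A simplified two-term sketch (omitting gravitational circulation, Stokes drift and the other terms of the full estuarine decomposition) looks like this:

```python
def advective_flux_decomposition(u, c):
    """Split the time-averaged advective flux <u*c> of a scalar c carried
    by velocity u into the mean-flow term <u><c> (river discharge) and
    the tidal correlation term <u'c'> = <u*c> - <u><c>.
    u, c: equally spaced time series over an integer number of tidal cycles."""
    n = len(u)
    u_mean = sum(u) / n
    c_mean = sum(c) / n
    total = sum(ui * ci for ui, ci in zip(u, c)) / n
    mean_term = u_mean * c_mean
    tidal_correlation = total - mean_term
    return total, mean_term, tidal_correlation
```

By construction the two terms sum exactly to the total flux; a large tidal correlation term indicates that velocity and concentration fluctuations are in phase over the tidal cycle, as reported here for suspended sediment at spring tide.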
Abstract:
Background: High-throughput SNP genotyping has become an essential requirement for molecular breeding and population genomics studies in plant species. Large-scale SNP developments have been reported for several mainstream crops. A growing interest now exists in expanding the speed and resolution of genetic analysis to outbred species with highly heterozygous genomes. When nucleotide diversity is high, a refined diagnosis of the target SNP sequence context is needed to convert queried SNPs into high-quality genotypes using the GoldenGate Genotyping Technology (GGGT). This issue becomes exacerbated when attempting to transfer SNPs across species, a scarcely explored topic in plants, and one likely to become significant for population genomics and interspecific breeding applications in less domesticated and less funded plant genera. Results: We have successfully developed the first set of 768 SNPs assayed by the GGGT for the highly heterozygous genome of Eucalyptus, from a mixed Sanger/454 database with 1,164,695 ESTs and the preliminary 4.5X draft genome sequence for E. grandis. A systematic assessment of in silico SNP filtering requirements showed that stringent constraints on the SNP-surrounding sequences have a significant impact on SNP genotyping performance and polymorphism. SNP assay success was high for the 288 SNPs selected with more rigorous in silico constraints; 93% of them provided high-quality genotype calls and 71% of them were polymorphic in a diverse panel of 96 individuals of five different species. SNP reliability was high across nine Eucalyptus species belonging to three sections within subgenus Symphyomyrtus and still satisfactory across species of two additional subgenera, although polymorphism declined as phylogenetic distance increased. Conclusions: This study indicates that the GGGT performs well both within and across species of Eucalyptus, notwithstanding its nucleotide diversity >= 2%.
The development of a much larger array of informative SNPs across multiple Eucalyptus species is feasible, although strongly dependent on having a representative and sufficiently deep collection of sequences from many individuals of each target species. A higher density SNP platform will be instrumental to undertake genome-wide phylogenetic and population genomics studies and to implement molecular breeding by Genomic Selection in Eucalyptus.
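The idea of stringent in-silico constraints on the SNP-surrounding sequence can be sketched as a simple flank filter. The flank length and ambiguity thresholds below are illustrative assumptions, not the study's actual selection criteria.

```python
def passes_flank_filter(flank_left, flank_right, min_flank=60, max_other_snps=0):
    """Require clean flanking sequence on both sides of a candidate SNP:
    at least `min_flank` bases, with at most `max_other_snps` non-ACGT
    characters (IUPAC ambiguity codes marking other polymorphisms).
    Thresholds are assumed values for illustration."""
    def clean(seq):
        other = sum(1 for ch in seq.upper() if ch not in "ACGT")
        return len(seq) >= min_flank and other <= max_other_snps
    return clean(flank_left) and clean(flank_right)
```

In a highly heterozygous genome, nearby secondary polymorphisms in the probe-binding flanks are a common cause of assay failure, which is why such constraints matter for genotyping performance.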
Abstract:
Background: High-throughput molecular approaches for gene expression profiling, such as Serial Analysis of Gene Expression (SAGE), Massively Parallel Signature Sequencing (MPSS) or Sequencing-by-Synthesis (SBS), represent powerful techniques that provide global transcription profiles of different cell types through the sequencing of short fragments of transcripts, denominated sequence tags. These techniques have improved our understanding of the relationships between these expression profiles and cellular phenotypes. Despite this, more reliable datasets are still necessary. In this work, we present a web-based tool named S3T: Score System for Sequence Tags, to index sequenced tags in accordance with their reliability. This is done through a series of evaluations based on a defined rule set. S3T allows the identification/selection of tags considered more reliable for further gene expression analysis. Results: This methodology was applied to a public SAGE dataset. In order to compare the data before and after filtering, a hierarchical clustering analysis was performed on samples from the same type of tissue, in distinct biological conditions, using these two datasets. Our results provide evidence suggesting that it is possible to find more congruous clusters after using the S3T scoring system. Conclusion: These results substantiate the proposed application to generate more reliable data. This is a significant contribution to the determination of global gene expression profiles. The library analysis with S3T is freely available at http://gdm.fmrp.usp.br/s3t/. The S3T source code and datasets can also be downloaded from the same website.
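The rule-based scoring idea behind S3T can be sketched as a small function that awards points for each reliability rule a tag passes. The rules and weights below are illustrative assumptions; the actual S3T rule set is not reproduced here.

```python
def score_tag(tag, occurrences, min_count=2):
    """Toy reliability score for a sequence tag: one point if the tag
    contains only unambiguous bases, one point if it was observed at
    least `min_count` times in the library. Rules and weights are
    assumed examples, not the S3T specification."""
    score = 0
    if all(ch in "ACGT" for ch in tag.upper()):
        score += 1          # no ambiguous base calls in the tag
    if occurrences >= min_count:
        score += 1          # recurrent tags are less likely to be errors
    return score
```

Tags could then be ranked or thresholded on this score before expression analysis, which is the filtering step whose effect the clustering comparison above evaluates.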