828 results for GRASP filtering
Abstract:
We have designed this flowchart to help you choose the web filtering option that best suits your needs from three options: our free standard web filtering service, enhanced user-based filtering, or a solution from our framework agreement.
Abstract:
The spike-diffuse-spike (SDS) model describes a passive dendritic tree with active dendritic spines. Spine-head dynamics is modeled with a simple integrate-and-fire process, whilst communication between spines is mediated by the cable equation. In this paper we develop a computational framework that allows the study of multiple spiking events in a network of such spines embedded on a simple one-dimensional cable. In the first instance this system is shown to support saltatory waves with the same qualitative features as those observed in a model with Hodgkin-Huxley kinetics in the spine-head. Moreover, there is excellent agreement with the analytically calculated speed for a solitary saltatory pulse. Upon driving the system with time-varying external input we find that the distribution of spines can play a crucial role in determining spatio-temporal filtering properties. In particular, the SDS model in response to a periodic pulse train shows a positive correlation between spine density and low-pass temporal filtering that is consistent with the experimental results of Rose and Fortune [1999, Mechanisms for generating temporal filters in the electrosensory system. The Journal of Experimental Biology 202, 1281-1289]. Further, we demonstrate the robustness of the observed wave properties to natural sources of noise that arise both in the cable and the spine-head, and highlight the possibility of purely noise-induced waves and coherent oscillations.
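As a rough illustration of the mechanism described above, the sketch below couples a passive cable (forward-Euler finite differences) to integrate-and-fire spine heads that inject a rectangular current pulse back into the cable when they fire. All parameter values, the uniform spine spacing and the reset/refractory rules are illustrative assumptions, not the paper's numerical framework.

```python
# Minimal spike-diffuse-spike style sketch (assumed parameters, not the paper's code).
import numpy as np

def simulate_sds(L=10.0, nx=100, T=60.0, dt=0.01,
                 D=0.1, tau=1.0,            # cable diffusion coefficient and decay time
                 spine_spacing=5,           # a spine every 5 compartments
                 v_th=0.05, tau_s=0.5,      # spine-head threshold and time constant
                 i_pulse=1.0, t_pulse=2.0,  # spike current amplitude and duration
                 t_ref=5.0):                # refractory period
    dx = L / nx
    V = np.zeros(nx)                        # cable membrane potential
    spine_idx = np.arange(spine_spacing, nx, spine_spacing)
    U = np.zeros(len(spine_idx))            # spine-head potentials
    pulse = np.zeros(len(spine_idx))        # remaining spike-current duration
    refr = np.zeros(len(spine_idx))         # remaining refractory time
    V[:5] = 1.0                             # initial stimulus at the left end
    spike_times = []

    for step in range(int(T / dt)):
        # passive cable: dV/dt = D * d^2V/dx^2 - V/tau, sealed (no-flux) ends
        lap = np.zeros(nx)
        lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
        V += dt * (D * lap - V / tau)
        # current injected by spines that are currently "spiking"
        V[spine_idx] += dt * i_pulse * (pulse > 0) / dx

        # integrate-and-fire spine heads driven by the local cable potential
        U += dt * (-U / tau_s + V[spine_idx])
        fired = (U > v_th) & (refr <= 0)
        U[fired] = 0.0
        pulse[fired] = t_pulse
        refr[fired] = t_ref
        pulse -= dt
        refr -= dt
        for i in np.flatnonzero(fired):
            spike_times.append((step * dt, spine_idx[i] * dx))
    return spike_times

# Successive spine firings at increasing positions indicate a saltatory wave.
for t, x in simulate_sds()[:10]:
    print(f"t = {t:5.2f}  spine at x = {x:4.1f}")
```

Varying `spine_spacing` in such a toy model is the crudest analogue of the spine-density manipulation discussed in the abstract.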
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and the huge matrix inversions required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code.

In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation.

Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach for coupling the model and a DA scheme: an external program is used to pass information between the model and the DA procedure through files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines that handle input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing simply by having the control program wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods.

The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to TSM data sparsity in both time and space, the match was poor.

The use of multiple automatic stations with real-time data is important to avoid this time-sparsity problem; combined with DA, this would, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and this ensemble-size limit point towards the emerging area of Reduced Order Modelling (ROM), in which running a full-blown model is avoided to save computational resources. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
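To make the non-intrusive, file-based coupling idea concrete, the sketch below shows one way a control script can drive an unmodified model executable through files and invoke an analysis step between cycles. The executable name, file layout and the trivial analysis routine are placeholders for illustration; they are not the actual VEnKF or COHERENS interfaces.

```python
# Hedged sketch of file-based model/DA coupling; "./model_step" and the
# observation files are assumed to exist, and analysis_step() is a placeholder.
import subprocess
import numpy as np

def analysis_step(ensemble, observations):
    """Placeholder analysis: a trivial nudge toward the observation mean."""
    return ensemble + 0.1 * (observations.mean() - ensemble.mean(axis=1, keepdims=True))

n_members, n_state = 20, 100
ensemble = np.random.randn(n_members, n_state)

for cycle in range(5):
    # 1. write each member's state to a file the model reads
    for m in range(n_members):
        np.savetxt(f"state_{m}.txt", ensemble[m])

    # 2. launch the (unmodified) model once per member; only a few I/O lines
    #    inside the model are assumed to read/write these files
    procs = [subprocess.Popen(["./model_step", f"state_{m}.txt", f"forecast_{m}.txt"])
             for m in range(n_members)]
    for p in procs:          # wait until all parallel runs have ended
        p.wait()

    # 3. read the forecasts back and assimilate when measurements are available
    ensemble = np.array([np.loadtxt(f"forecast_{m}.txt") for m in range(n_members)])
    observations = np.loadtxt(f"obs_cycle_{cycle}.txt")
    ensemble = analysis_step(ensemble, observations)
```

The overhead mentioned above is visible here: every cycle pays the cost of restarting the model processes and of file I/O, in exchange for leaving the model code essentially untouched.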
Abstract:
Recent research on affective processing has suggested that the low spatial frequency content of fearful faces provides rapid emotional cues to the amygdala, whereas high spatial frequencies convey fine-grained information to the fusiform gyrus, regardless of emotional expression. In the present experiment, we examined the effects of low (LSF, <15 cycles/image width) and high spatial frequency filtering (HSF, >25 cycles/image width) on brain processing of complex pictures depicting pleasant, unpleasant, and neutral scenes. Event-related potentials (ERP), the percentage of recognized stimuli and response times were recorded in 19 healthy volunteers. Behavioral results indicated faster reaction times in response to unpleasant LSF than to unpleasant HSF pictures. Unpleasant LSF pictures and pleasant unfiltered pictures also elicited significant enhancements of P1 amplitudes at occipital electrodes as compared to neutral LSF and unfiltered pictures, respectively, whereas no significant effects of affective modulation were found for HSF pictures. Moreover, mean ERP amplitudes between 200 and 500 ms post-stimulus were significantly greater for affective (pleasant and unpleasant) than for neutral unfiltered pictures, whereas no significant affective modulation was found for HSF or LSF pictures at those latencies. The fact that affective LSF pictures elicited an enhancement of brain responses at early, but not at later, latencies suggests the existence of a rapid and preattentive neural mechanism for the processing of motivationally relevant stimuli, which could be driven by LSF cues. Our findings thus confirm previous results showing differences in the brain processing of affective LSF and HSF faces, and extend these results to more complex, social affective pictures.
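For readers who want to reproduce this kind of spatial-frequency manipulation, the sketch below applies an ideal low-pass or high-pass mask in the 2-D Fourier domain, with cutoffs expressed in cycles per image; it is an illustration only, not the stimulus-preparation code used in the study.

```python
# Ideal spatial-frequency filtering of an image, cutoffs in cycles per image.
import numpy as np

def spatial_frequency_filter(img, cutoff, mode="low"):
    """Keep frequencies below (mode='low') or above (mode='high') `cutoff`."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h            # cycles per image height
    fx = np.fft.fftfreq(w) * w            # cycles per image width
    radius = np.sqrt(fy[:, None]**2 + fx[None, :]**2)
    mask = radius <= cutoff if mode == "low" else radius >= cutoff
    spectrum = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spectrum * mask))

img = np.random.rand(256, 256)                          # stand-in for a complex scene
lsf = spatial_frequency_filter(img, 15, mode="low")     # LSF: <15 cycles/image
hsf = spatial_frequency_filter(img, 25, mode="high")    # HSF: >25 cycles/image
```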
Abstract:
Nearest neighbour collaborative filtering (NNCF) algorithms are commonly used in multimedia recommender systems to suggest media items based on the ratings of users with similar preferences. However, the prediction accuracy of NNCF algorithms is affected by the reduced number of items – the subset of items co-rated by both users – typically used to determine the similarity between pairs of users. In this paper, we propose a different approach, which substantially enhances the accuracy of the neighbour selection process: a user-based CF (UbCF) with semantic neighbour discovery (SND). Our neighbour discovery methodology assesses pairs of users by taking into account all the items rated by at least one of the users, instead of just the set of co-rated items; it semantically enriches this enlarged set of items using linked data and, finally, applies the Collinearity and Proximity Similarity (CPS) metric, which combines cosine similarity with the Chebyshev distance dissimilarity metric. We tested the proposed SND off-line against the Pearson correlation neighbour discovery algorithm, using the HetRec data set, and the results show a clear improvement in terms of accuracy and execution time for the predicted recommendations.
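The abstract names the CPS metric as a combination of cosine similarity and a Chebyshev-distance dissimilarity, without giving the exact formula. The sketch below therefore uses an assumed weighted blend purely to illustrate how the two ingredients can be merged.

```python
# Illustrative CPS-style similarity; the alpha blend and the range normalization
# are assumptions, not the metric as defined in the paper.
import numpy as np

def cps_similarity(u, v, alpha=0.5, rating_range=4.0):
    """u, v: rating vectors over the union of items rated by either user
    (unrated entries filled with 0 or a semantic estimate); ratings on a 1-5 scale."""
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    chebyshev = np.max(np.abs(u - v))                 # worst-case disagreement
    proximity = 1.0 - chebyshev / rating_range        # 1 = identical, 0 = maximally apart
    return alpha * cosine + (1 - alpha) * proximity

u = np.array([5.0, 3.0, 0.0, 4.0])
v = np.array([4.0, 0.0, 2.0, 5.0])
print(cps_similarity(u, v))
```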
Abstract:
The Exhibitium Project, funded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base for a platform for recording and exploiting data about art exhibitions available on the Internet. Accordingly, our proposal aims to expose the methods, procedures and decision-making processes that have governed the technological implementation of this prototype, especially with regard to the reuse of WordPress (WP) as a development framework.
Abstract:
A recommender system is a specific type of intelligent system that exploits historical user ratings on items and/or auxiliary information to make recommendations on items to users. It plays a critical role in a wide range of online shopping, e-commerce and social networking applications. Collaborative filtering (CF) is the most popular approach used for recommender systems, but it suffers from the complete cold start (CCS) problem, where no rating records are available, and the incomplete cold start (ICS) problem, where only a small number of rating records are available, for some new items or users in the system. In this paper, we propose two recommendation models to solve the CCS and ICS problems for new items, based on a framework that tightly couples a CF approach with a deep learning neural network. A specific deep neural network, SADE, is used to extract the content features of the items. The state-of-the-art CF model timeSVD++, which models and utilizes the temporal dynamics of user preferences and item features, is modified to take the content features into account when predicting ratings for cold start items. Extensive experiments on a large Netflix movie rating dataset show that our proposed recommendation models largely outperform the baseline models for rating prediction of cold start items. The two proposed recommendation models are also evaluated and compared on ICS items, and a flexible scheme of model retraining and switching is proposed to deal with the transition of items from cold start to non-cold start status. The experimental results on Netflix movie recommendation show that the tight coupling of a CF approach and a deep learning neural network is feasible and very effective for cold start item recommendation. The design is general and can be applied to many other recommender systems for online shopping and social networking applications. Solving the cold start item problem can largely improve user experience and trust in recommender systems, and effectively promote cold start items.
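The central move in this abstract is to let learned content features stand in for missing rating history. The toy sketch below shows that idea in its simplest form: a cold-start item's rating is predicted by weighting a user's existing ratings with content-feature similarity. It is an intuition aid only; it is not SADE, timeSVD++, or the coupling proposed in the paper.

```python
# Toy content-based prediction for a cold-start item (assumed scheme, not the paper's model).
import numpy as np

def predict_cold_start(user_ratings, rated_item_features, new_item_features):
    """user_ratings: (n,) ratings the user gave; rated_item_features: (n, d) content
    features of those items; new_item_features: (d,) features of the cold-start item,
    e.g. extracted by a deep network."""
    sims = rated_item_features @ new_item_features
    sims /= (np.linalg.norm(rated_item_features, axis=1)
             * np.linalg.norm(new_item_features) + 1e-12)
    weights = np.maximum(sims, 0)                    # ignore dissimilar items
    if weights.sum() == 0:
        return float(user_ratings.mean())            # fall back to the user's mean rating
    return float(np.dot(weights, user_ratings) / weights.sum())

features = np.random.rand(5, 16)          # toy content features for 5 rated movies
ratings = np.array([4.0, 5.0, 2.0, 3.0, 4.0])
new_item = np.random.rand(16)             # a cold-start movie
print(predict_cold_start(ratings, features, new_item))
```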
Abstract:
Recommender systems (RS) are used by many social networking applications and online e-commerce services. Collaborative filtering (CF) is one of the most popular approaches used for RS. However, the traditional CF approach suffers from sparsity and cold start problems. In this paper, we propose a hybrid recommendation model to address the cold start problem, which explores the item content features learned from a deep learning neural network and applies them to the timeSVD++ CF model. Extensive experiments are run on a large Netflix movie rating dataset. The experimental results show that the proposed hybrid recommendation model provides good predictions for cold start items, and performs better than four existing recommendation models for rating non-cold start items.
Abstract:
OBJECTIVE: To identify whether the use of a notch filter significantly affects the morphology or characteristics of the newborn auditory brainstem response (ABR) waveform, and so inform future guidance for clinical practice. DESIGN: Waveforms with and without the application of a notch filter were recorded from babies undergoing routine ABR tests at 4000, 1000 and 500 Hz. Any change in response morphology was judged subjectively. Response latency, amplitude, and measurements of response quality and residual noise were noted. An ABR simulator was also used to assess the effect of notch filtering under conditions of low and high mains interference. RESULTS: The use of a notch filter changed waveform morphology for 500 Hz stimuli only, in 15% of tests in newborns. Residual noise was lower when 4000 Hz stimuli were used. Response latency, amplitude, and quality were unaffected regardless of stimulus frequency. Tests with the ABR simulator suggest that these findings can be extended to conditions of high-level mains interference. CONCLUSIONS: A notch filter should be avoided when testing at 500 Hz, but at higher frequencies it appears to carry no penalty.
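The sketch below shows the kind of mains notch filter at issue, applied to a toy signal. The 50 Hz mains frequency, the sampling rate and the filter Q are assumptions for illustration; this is not the clinical recording software used in the study.

```python
# Mains notch filtering of a toy evoked-response recording (assumed parameters).
import numpy as np
from scipy import signal

fs = 5000.0                                        # assumed sampling rate, Hz
t = np.arange(0, 0.025, 1 / fs)                    # a 25 ms epoch
response = 0.3e-6 * np.sin(2 * np.pi * 90 * t)     # toy slow response component
response += 0.2e-6 * np.sin(2 * np.pi * 900 * t)   # plus a faster component
mains = 2e-6 * np.sin(2 * np.pi * 50 * t)          # 50 Hz mains interference
recording = response + mains

b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)     # narrow notch centred on 50 Hz
filtered = signal.filtfilt(b, a, recording)        # zero-phase filtering

# The mains line is strongly attenuated while components away from 50 Hz pass
# almost untouched; responses to low-frequency (500 Hz) stimuli carry relatively
# more energy near the notch, which is one plausible reason for the morphology
# changes reported above for 500 Hz stimuli.
```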
Abstract:
The conjugate gradient method is one of the most popular optimization methods for solving large systems of linear equations. In a system identification problem, for example, where a very long impulse response is involved, it is necessary to apply a particular strategy which reduces the delay while improving the convergence time. In this paper we propose a new scheme which combines frequency-domain adaptive filtering with a conjugate gradient technique in order to realize a high-order multichannel adaptive filter that is delayless and guarantees a very short convergence time.
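As background for the conjugate gradient ingredient, a time-domain toy version is sketched below: a long FIR impulse response is identified by solving the normal equations with CG. The delayless frequency-domain multichannel scheme proposed in the paper is not reproduced; this only illustrates the underlying solver.

```python
# Conjugate gradient identification of a length-64 FIR impulse response (toy example).
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# toy single-channel system identification
rng = np.random.default_rng(0)
N, Lh = 4000, 64
h_true = rng.standard_normal(Lh) * np.exp(-np.arange(Lh) / 10.0)
x_in = rng.standard_normal(N)
X = np.zeros((N, Lh))                  # convolution (data) matrix
for k in range(Lh):
    X[k:, k] = x_in[:N - k]
d = X @ h_true                         # desired (echo) signal
h_est = conjugate_gradient(X.T @ X, X.T @ d)
print("max coefficient error:", np.max(np.abs(h_est - h_true)))
```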
Abstract:
Nowadays, leukodepletion is one of the most important processes performed on blood in order to reduce the risk of transfusion-related diseases. It can be carried out with different techniques, but the most popular is filtration, owing to its simplicity and efficiency. This work aims at improving a current commercial product by developing a new filter based on a Fenton-type reaction to cross-link a hydrogel onto the base material. Filters for leukodepletion are preferably made with the melt-flow technique, resulting in a non-woven tissue; the functionalization should increase the stability of the filter, restricting the extraction of substances to a minimum when in contact with blood. Through this modification the filters can acquire new properties, including wettability, surface charge and good resistance to extraction. The most important for leukodepletion is the surface charge, owing to the nature of the filtration process. All the results for the modified samples have been compared with the commercial product. Three different polymers (A, B and C) have been studied for the filter modifications, and every modified filter has been tested in order to determine its properties.