994 results for pre-filtering
Abstract:
Noise is one of the main factors degrading the quality of original multichannel remote sensing data, and its presence reduces the efficiency of classification, object detection, and other final tasks. Thus, pre-filtering is often used to remove noise and improve the solution of the final tasks of multichannel remote sensing. Recent studies indicate that the classical additive noise model is not adequate for images formed by modern multichannel sensors operating in the visible and infrared bands. However, this fact is often ignored by researchers designing noise removal methods and algorithms. Because of this, we focus on the classification of multichannel remote sensing images when signal-dependent noise is present in the component images. Three approaches to filtering multichannel images under the considered noise model are analysed, all based on the discrete cosine transform (DCT) applied in blocks. The study is carried out not only in terms of conventional filtering efficiency metrics (MSE) but also in terms of multichannel data classification accuracy (probability of correct classification, confusion matrix). The proposed classification system combines a pre-processing stage, in which a DCT-based filter processes the blocks of the multichannel remote sensing image, with the classification stage. Two modern classifiers are employed: a radial basis function neural network and support vector machines. Simulations are carried out for a three-channel image from the Landsat TM sensor. Different training cases are considered: using noise-free samples of the test multichannel image, the noisy multichannel image, and the pre-filtered one. It is shown that training on the pre-filtered image produces better classification than training on the noisy image. It is demonstrated that the best results for both groups of quantitative criteria are obtained when the proposed 3D discrete cosine transform filter equipped with a variance stabilizing transform is applied.
The classification results obtained for data pre-filtered in different ways are in agreement for both classifiers considered. A comparison of classifier performance is carried out as well. The radial basis function neural network classifier is less sensitive to noise in the original images, but after pre-filtering the performance of both classifiers is approximately the same.
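The block-DCT-plus-VST pipeline described above can be sketched as follows. This is a minimal one-dimensional, single-block illustration, not the authors' actual 3D filter: the Anscombe-style transform and the hard-threshold value are assumptions chosen for clarity.

```python
import math

def dct(x):
    """Unnormalized DCT-II of a block."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse of the unnormalized DCT-II above (a scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                       for k in range(1, N))
            for n in range(N)]

def vst(x):
    """Anscombe-style variance stabilizing transform for signal-dependent
    (Poisson-like) noise; makes the noise variance approximately constant."""
    return [2.0 * math.sqrt(v + 3.0 / 8.0) for v in x]

def vst_inverse(y):
    return [(v / 2.0) ** 2 - 3.0 / 8.0 for v in y]

def denoise_block(block, threshold=1.0):
    """VST -> block DCT -> hard-threshold the AC coefficients -> invert."""
    X = dct(vst(block))
    X = [X[0]] + [c if abs(c) >= threshold else 0.0 for c in X[1:]]
    return vst_inverse(idct(X))
```

With a large threshold every AC coefficient is suppressed and a noisy, roughly constant block is flattened toward its mean; in the 3D variant the same thresholding would be applied jointly across the spectral channels.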
Abstract:
Recommender systems are one type of solution to the information overload problem suffered by users of websites on which they can rate certain items. The collaborative filtering recommender system is considered the most successful approach, as it makes its recommendations based on the votes of users similar to an active user. Nevertheless, the traditional collaborative filtering method selects insufficiently representative users as neighbours of each active user. This means that the recommendations made a posteriori are not precise enough. The method proposed in this thesis performs a pre-filtering process, using Pareto dominance, which eliminates the less representative users from the k-neighbour selection process and keeps the most promising ones. The results of experiments performed on MovieLens and Netflix show a significant improvement in all the quality measures studied when the proposed method is applied.
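The Pareto-dominance pre-filtering step can be sketched as below. Here each candidate neighbour is summarized by a vector of per-item rating distances to the active user (lower is better); this representation and the toy data are illustrative assumptions, not the thesis's exact formulation.

```python
def dominates(a, b):
    """True if distance vector a is no worse than b on every item and
    strictly better on at least one (lower distance = more similar)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_prefilter(candidates):
    """Keep only non-dominated candidate neighbours; dominated users are
    removed before the usual k-nearest-neighbour selection."""
    return {u: d for u, d in candidates.items()
            if not any(dominates(d2, d) for v, d2 in candidates.items() if v != u)}
```

For example, a candidate whose distances to the active user are worse on every shared item than some other candidate's is discarded before k-neighbour selection.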
Abstract:
The variance shadow map (VSM) algorithm uses a probabilistic approach to compute an upper bound on the probability that a pixel is occluded; by filtering the depth map it effectively reduces the aliasing problems of shadow mapping. In scenes with complex depth, however, VSM suffers from light bleeding, i.e. regions that should be in shadow appear bright. This paper uses a min-max shadow map to help eliminate the light bleeding of VSM: a min-max shadow map is generated while the depth texture is being filtered, and during real-time rendering it is used to help decide whether the current fragment lies entirely inside a shadowed region, thereby producing more realistic and accurate shadows. The algorithm can easily be added to the fragment-processing stage of an existing VSM implementation, and it affects neither the soft shadow boundaries nor the rendering frame rate.
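In code, the idea combines the standard VSM Chebyshev bound with a min/max depth pair stored for the same filter footprint. A minimal CPU-side Python sketch rather than a fragment shader; the function and variable names are assumptions:

```python
def chebyshev_visibility(mean, second_moment, d, min_variance=1e-6):
    """Standard VSM estimate: Chebyshev upper bound on the fraction of
    filtered occluder depths lying at or beyond fragment depth d."""
    if d <= mean:
        return 1.0
    var = max(second_moment - mean * mean, min_variance)
    return var / (var + (d - mean) ** 2)

def visibility_with_minmax(mean, second_moment, zmin, zmax, d):
    """VSM lookup assisted by a min-max shadow map: fragments deeper than
    every occluder sample are forced into full shadow (removing light
    bleeding); fragments closer than every sample are fully lit."""
    if d >= zmax:
        return 0.0            # entirely inside the shadowed region
    if d <= zmin:
        return 1.0            # in front of every occluder: fully lit
    return chebyshev_visibility(mean, second_moment, d)
```

Only the ambiguous middle case falls back to the probabilistic bound, which is where VSM's soft edges come from; the two early-outs are what suppress the bright artifacts.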
Abstract:
Postgraduate programme in Electrical Engineering - FEIS
Abstract:
In recent decades, a striking amount of hydrographic data covering most of the Mediterranean basin has been generated by the efforts made to characterize the oceanography and ecology of the basin. At the same time, improvements in technology, and the consequent refinement of sampling and analytical techniques, have provided data that are more reliable than in the past. Nutrient data fit fully into this context, but suffer from having been produced by a large number of uncoordinated research programmes and from often being deficient in quality control, with databases lacking intercalibration. In this study we present a computational procedure, based on robust statistical parameters and on the physical dynamic properties and morphological characteristics of the Mediterranean Sea, to partially overcome the above limits in the existing data sets. Through a data pre-filtering step based on outlier analysis, and through a subsequent shape analysis, the procedure identifies inconsistent data and, for each basin area, a characteristic set of shapes (vertical profiles). By rejecting all profiles that do not match any of the identified shapes, the procedure retains only the reliable profiles and yields a data set that can be considered more internally consistent than the existing ones.
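The outlier-based pre-filtering step can be illustrated with a robust median/MAD rejection rule; the threshold and scaling constant below are conventional choices for such screens, not necessarily those used in the study.

```python
import statistics

def mad_prefilter(values, k=3.0):
    """Reject values more than k robust standard deviations from the
    median, using the median absolute deviation (MAD) as the robust
    scale estimate (1.4826 * MAD approximates sigma for Gaussian data)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad if mad > 0 else 1e-12
    return [v for v in values if abs(v - med) / scale <= k]
```

Unlike a mean/standard-deviation screen, the median and MAD are themselves insensitive to the outliers being hunted, so a single wild nutrient value cannot mask itself.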
Abstract:
We report an empirical analysis of long-range dependence in the returns of eight stock market indices, using Rescaled Range Analysis (RRA) to estimate the Hurst exponent. Monte Carlo and bootstrap simulations are used to construct critical values for the null hypothesis of no long-range dependence. The issue of disentangling short-range and long-range dependence is examined. Pre-filtering by fitting a (short-range) autoregressive model eliminates part of the long-range dependence when the latter is present, while failure to pre-filter leaves open the possibility of conflating short-range and long-range dependence. There is strong evidence of long-range dependence for the small central European Czech stock market index PX-glob, and weaker evidence for two smaller western European stock market indices, MSE (Spain) and SWX (Switzerland). There is little or no evidence of long-range dependence for the other five indices, including those with the largest capitalizations among those considered, DJIA (US) and FTSE350 (UK). These results are generally consistent with prior expectations concerning the relative efficiency of the stock markets examined. © 2011 Elsevier Inc.
Abstract:
The thesis proposes a middleware solution for scenarios in which sensors produce a large volume of data that must be managed and processed through preprocessing, filtering and buffering operations, in order to improve communication efficiency and bandwidth usage while respecting energy and computational constraints. These components can be optimized through remote tuning operations.
Abstract:
Constructing biodiversity richness maps from Environmental Niche Models (ENMs) of thousands of species is time-consuming. A separate species-occurrence data pre-processing phase enables the experimenter to control the variance of the test AUC score due to species dataset size. Besides removing duplicate occurrences and points with missing environmental data, we discuss the need for coordinate precision, wide dispersion, temporal and synonymity filters. After species data filtering, the final task of a pre-processing phase should be the automatic generation of species occurrence datasets that can then be directly 'plugged in' to the ENM. A software application capable of carrying out all these tasks would be a valuable time-saver, particularly for large-scale biodiversity studies.
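Three of the occurrence-record filters named above (duplicates, missing environmental data, coordinate precision) can be sketched as a single pre-processing pass. The record layout and the two-decimal precision requirement are assumptions for illustration:

```python
def coordinate_decimals(coord):
    """Number of decimal places in a coordinate given as a string."""
    return len(coord.split(".")[1]) if "." in coord else 0

def prefilter_occurrences(records, min_decimals=2):
    """Drop duplicate localities, records lacking environmental data,
    and records whose coordinates are too imprecise."""
    seen = set()
    kept = []
    for rec in records:
        key = (rec["lat"], rec["lon"])
        if key in seen:
            continue                      # duplicate occurrence
        if rec.get("env") is None:
            continue                      # missing environmental data
        if (coordinate_decimals(rec["lat"]) < min_decimals or
                coordinate_decimals(rec["lon"]) < min_decimals):
            continue                      # coordinate precision filter
        seen.add(key)
        kept.append(rec)
    return kept
```

The dispersion, temporal and synonymity filters mentioned in the abstract would slot in as further `continue` clauses in the same loop.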
Abstract:
Estuarine hydrodynamics is a key factor in the definition of the filtering capacity of an estuary and results from the interaction between the processes that control the inlet morphodynamics and those acting in the mixing of the water within the estuary. The hydrodynamics and suspended sediment transport in the Camboriú estuary were assessed in two field campaigns conducted in 1998 that covered both neap and spring tide conditions. The measured period represents the estuarine hydrodynamics and sediment transport prior to the construction of the jetty in 2003 and provides important background information for the Camboriú estuary. Each field campaign covered two complete tidal cycles with hourly measurements of currents, salinity, suspended sediment concentration and water level. Results show that the Camboriú estuary is partially mixed, with the vertical structure varying as a function of the tidal range and tidal phase. The dynamic estuarine structure can be understood as a balance between the stabilizing effects generated by the vertical density gradient, which produces buoyancy and stratification, and the turbulent effects generated by the vertical velocity gradient, which produces vertical mixing. The main sediment source for the water column is the bottom sediment, periodically resuspended by the tidal currents. The advective salt and suspended sediment transport differed between neap and spring tides, being more complex at spring tide. The river discharge term was important under both tidal conditions. The tidal correlation term was also important, being dominant in the suspended sediment transport during the spring tide. The gravitational circulation and the Stokes drift played a secondary role in the estuarine transport processes.
Abstract:
In autumn 2012, the new release 05 (RL05) of the monthly geopotential spherical harmonic Stokes coefficients (SC) from the GRACE (Gravity Recovery and Climate Experiment) mission was published. This release reduces the noise in the high degree and order SC, but the coefficients still need to be filtered. One of the most common filtering procedures is the combination of a decorrelation filter and a Gaussian filter. Both are parameter-dependent and must be tuned by the user. Previous studies have analysed the parameter choice for RL05 GRACE data for oceanic applications, and for RL04 data for global applications. This study updates the latter for RL05 data, extending the statistical analysis. The choice of the parameters of the decorrelation filter has been optimized to: (1) balance the noise reduction and the geophysical signal attenuation produced by the filtering process; (2) minimize the differences between GRACE and model-based data; (3) maximize the ratio of variability between continents and oceans. The Gaussian filter has been optimized following the last criterion. In addition, an anisotropic filter, the fan filter, has been analysed as an alternative to the Gaussian filter, producing better statistics.
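Gaussian smoothing of Stokes coefficients is usually implemented with degree-dependent weights computed by the recursion commonly attributed to Jekeli (1981). A sketch for low degrees follows; note the recursion becomes numerically unstable at high degree, which this toy version ignores, and it is offered as an assumption about the standard technique rather than this study's exact code.

```python
import math

def gaussian_weights(radius_km, nmax, earth_radius_km=6371.0):
    """Degree-dependent weights W_n for Gaussian smoothing of spherical
    harmonic coefficients, for an averaging radius given in km.
    W_0 = 1; higher degrees are damped progressively."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / earth_radius_km))
    w = [1.0,
         (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b]
    for n in range(1, nmax):
        w.append(w[n - 1] - (2 * n + 1) / b * w[n])
    return w[:nmax + 1]
```

Each coefficient of degree n is multiplied by W_n; a larger averaging radius damps the high degrees, where RL05 noise concentrates, more strongly.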
Abstract:
A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, with a particular focus on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time-period examination of the time series in different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modelling the analysed time series. Together, these methods provide a user-friendly and fast program for environmental signal analysis, with useful, practical and understandable results.
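The discrete wavelet transform at the core of such a toolbox can be illustrated with a plain Haar DWT. This is a self-contained Python sketch of the general technique, not the MATLAB implementation described above:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_step(x):
    """One analysis level: pairwise averages (approximation) and
    differences (detail), orthonormally scaled."""
    a = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return a, d

def haar_dwt(x, levels):
    """Multi-level Haar DWT: returns the final approximation plus the
    detail coefficients of each level (finest first)."""
    details = []
    a = list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    return a, details

def haar_idwt(a, details):
    """Exact inverse of haar_dwt."""
    for d in reversed(details):
        out = []
        for ai, di in zip(a, d):
            out.extend([(ai + di) / SQRT2, (ai - di) / SQRT2])
        a = out
    return a
```

Filtering a time series in one period band then amounts to zeroing the detail coefficients of the corresponding level before applying the inverse transform.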
Abstract:
Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweet data. A popular procedure for reducing the noise of textual data is to remove stopwords, either by using pre-compiled stopword lists or by more sophisticated methods of dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in recent years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations in the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method for maintaining high classification performance while reducing data sparsity and substantially shrinking the feature space.
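The best-performing variant in the study, dynamically treating corpus hapaxes (terms occurring exactly once) as stopwords, reduces to a few lines; the tokenized-tweet representation below is an assumption for illustration:

```python
from collections import Counter

def hapax_stopwords(tokenized_docs):
    """Dynamic stopword list: every term occurring exactly once in the
    whole corpus."""
    counts = Counter(t for doc in tokenized_docs for t in doc)
    return {t for t, c in counts.items() if c == 1}

def remove_stopwords(doc, stopwords):
    """Strip the dynamically identified stopwords from one tokenized tweet."""
    return [t for t in doc if t not in stopwords]
```

Because hapaxes contribute one-off features that a supervised classifier cannot generalize from, dropping them shrinks the feature space and reduces sparsity without discarding discriminative terms.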