24 results for Search-based algorithms
in CentAUR: Central Archive University of Reading - UK
Abstract:
This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. Building on previous algorithms, in particular the leading-edge hexagonal search (HEXBS) motion estimation algorithm, we propose the LHMEA and the Two-Pass Algorithm (TPA), introducing the hashtable into video compression. LHMEA performs the first-pass search over all macroblocks (MBs) in the picture. Motion vectors (MVs) generated in the first pass are then used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of MBs. The evaluation considers three important metrics: time, compression rate, and PSNR. The performance of the algorithm is evaluated on standard video sequences and compared to current algorithms. Experimental results show that the proposed algorithm offers the same compression rate as the Full Search. LHMEA with the TPA improves significantly on HEXBS and points to a way of improving other fast motion estimation algorithms, such as Diamond Search.
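As a rough illustration of the second-pass search, the sketch below implements a hexagonal-pattern block matcher seeded with a predictor motion vector, in the spirit of the TPA. The block size, SAD cost, and search patterns are generic assumptions, not the authors' exact implementation.

```python
import numpy as np

# Hexagonal search patterns: a large hexagon for coarse moves and a
# small cross for final refinement.
LARGE_HEX = [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
SMALL_HEX = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(ref, cur, x, y, bx, by, n=16):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (x, y)."""
    h, w = ref.shape
    if not (0 <= by + y and by + y + n <= h and 0 <= bx + x and bx + x + n <= w):
        return np.inf  # candidate block falls outside the frame
    return np.abs(cur[by:by+n, bx:bx+n].astype(int)
                  - ref[by+y:by+y+n, bx+x:bx+x+n].astype(int)).sum()

def hexbs(ref, cur, bx, by, pred=(0, 0), n=16):
    """HEXBS-style search starting from a predicted MV (first-pass output)."""
    cx, cy = pred
    while True:  # large-hexagon stage: hop toward the cheapest neighbour
        cost, (nx, ny) = min(
            (sad(ref, cur, cx + dx, cy + dy, bx, by, n), (cx + dx, cy + dy))
            for dx, dy in LARGE_HEX)
        if (nx, ny) == (cx, cy):
            break
        cx, cy = nx, ny
    # small-pattern refinement around the final centre
    return min((sad(ref, cur, cx + dx, cy + dy, bx, by, n), (cx + dx, cy + dy))
               for dx, dy in SMALL_HEX)[1]
```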
Abstract:
We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that in cases of practical interest our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
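For concreteness, here is a minimal brute-force sketch of a shift-tolerant profile distance, where each reading may be matched against readings up to `k` time steps away in the other profile. This naive version is only a plausible stand-in for such a measure; the paper's contribution is a much faster graph-based algorithm, which is not reproduced here.

```python
import numpy as np

def shift_tolerant_distance(a, b, k=2):
    """For each time step in profile `a`, take the smallest absolute
    difference against readings of profile `b` within a +/- k window,
    then sum these minimal mismatches."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    total = 0.0
    for t in range(n):
        lo, hi = max(0, t - k), min(n, t + k + 1)  # clamp window to bounds
        total += np.abs(a[t] - b[lo:hi]).min()
    return total
```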
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness of fit for the contour and thus falls into the first class. Its point of departure from previous models lies in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from the 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived; one of these is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
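The following toy example illustrates the key distinction drawn above: along a search line normal to the contour, a feature-based method snaps to the strongest feature, whereas an EM-style method uses a Bayes least-squares estimate, i.e. a likelihood-weighted mean of candidate locations. The Gaussian prior and weighting scheme here are illustrative assumptions, not the MLR model itself.

```python
import numpy as np

def strongest_feature(positions, strengths):
    """Feature-based rule: jump to the strongest feature on the line."""
    return positions[np.argmax(strengths)]

def bayes_ls_estimate(positions, strengths, prior_mean=0.0, prior_sd=5.0):
    """Marginalized rule: weight each candidate by its strength times a
    Gaussian prior on displacement, then take the weighted mean."""
    w = strengths * np.exp(-0.5 * ((positions - prior_mean) / prior_sd) ** 2)
    return (w * positions).sum() / w.sum()

pos = np.array([-6.0, -1.0, 0.5, 7.0])   # candidate feature offsets
amp = np.array([0.4, 0.9, 1.0, 1.1])     # edge strengths along the normal
print(strongest_feature(pos, amp))       # snaps to the outlier at 7.0
print(bayes_ls_estimate(pos, amp))       # stays near the prior: more robust
```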
Abstract:
The dependence of much of Africa on rain-fed agriculture leads to high vulnerability to fluctuations in rainfall amount. Accurate monitoring of near-real-time rainfall is therefore particularly useful, for example in forewarning of possible crop shortfalls in drought-prone areas. Unfortunately, ground-based observations are often inadequate. Rainfall estimates from satellite-based algorithms and numerical model outputs can fill this data gap; however, rigorous assessment of such estimates is required. Here, three satellite-based products (NOAA-RFE 2.0, GPCP-1DD and TAMSAT) and two numerical model outputs (ERA-40 and ERA-Interim) have been evaluated for Uganda in East Africa using a network of 27 rain gauges. The study focuses on the years 2001 to 2005 and considers the main rainy season (February to June). All data sets were converted to the same temporal and spatial scales, and kriging was used for the spatial interpolation of the gauge data. All three satellite products showed similar characteristics and a high level of skill that exceeded both model outputs. ERA-Interim tended to overestimate, whilst ERA-40 consistently underestimated, Ugandan rainfall.
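As a sketch of the interpolation step, a minimal ordinary kriging routine is shown below. The exponential variogram model and its parameters are assumptions for illustration; the study does not specify them here.

```python
import numpy as np

def ordinary_kriging(xy, z, grid_xy, sill=1.0, rng=1.0, nugget=0.0):
    """xy: (n, 2) gauge coordinates; z: (n,) gauge values;
    grid_xy: (m, 2) prediction locations. Returns (m,) predictions."""
    def gamma(h):
        # exponential semivariogram (an assumed model)
        return nugget + sill * (1.0 - np.exp(-h / rng))
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # ordinary kriging system, bordered with the Lagrange-multiplier
    # row/column that enforces weights summing to one
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    preds = []
    for g in grid_xy:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - g, axis=1))
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ z)   # weighted combination of gauge values
    return np.array(preds)
```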
Abstract:
Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
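As a minimal sketch of the generalised kernel idea, each regressor below carries its own diagonal covariance, tuned by a crude random search that maximises correlation with the targets. This stands in for the guided boosting-based search; the orthogonal forward regression with LOO statistics and local regularization is not reproduced.

```python
import numpy as np

def kernel_column(X, centre, diag_cov):
    """Gaussian regressor with an individually tuned diagonal covariance,
    rather than the single shared variance of a standard kernel model."""
    d2 = ((X - centre) ** 2 / diag_cov).sum(axis=1)
    return np.exp(-0.5 * d2)

def tune_diag_cov(X, y, centre, n_trials=200, seed=0):
    """Pick the diagonal covariance whose kernel column correlates best
    with the targets (a crude stand-in for the guided random search)."""
    rng = np.random.default_rng(seed)
    best, best_corr = None, -np.inf
    for _ in range(n_trials):
        diag_cov = 10 ** rng.uniform(-2, 2, size=X.shape[1])  # log-uniform draw
        phi = kernel_column(X, centre, diag_cov)
        c = abs(np.corrcoef(phi, y)[0, 1])
        if c > best_corr:
            best, best_corr = diag_cov, c
    return best
```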
Abstract:
A new approach is presented to identify the number of incoming signals in antenna array processing. The method exploits properties inherent in the noise eigenvalues of the covariance matrix of the array output. A single threshold is established that incorporates information about signal and noise strength, data length, and array size. When subspace-based algorithms are adopted, the computational cost of the signal-number detector is almost negligible. The performance of the threshold is robust against low SNR and short data lengths.
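A toy sketch of the idea: noise eigenvalues of the sample covariance cluster around the noise power, so counting eigenvalues above a threshold estimates the number of signals. The simple threshold rule below is an illustrative assumption; the paper derives a single threshold that also accounts for data length and array size.

```python
import numpy as np

def estimate_num_signals(snapshots, factor=2.0):
    """snapshots: (num_snapshots, num_sensors) complex array output.
    Returns the estimated number of incoming signals."""
    R = snapshots.conj().T @ snapshots / len(snapshots)  # sample covariance
    eigs = np.sort(np.linalg.eigvalsh(R))[::-1]          # descending order
    noise_floor = eigs[-1]                               # smallest eigenvalue
    # signal eigenvalues stand well above the noise floor (assumed rule)
    return int((eigs > factor * noise_floor).sum())
```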
Abstract:
The tap-length, or number of taps, is an important structural parameter of the linear MMSE adaptive filter. Although the optimum tap-length that balances performance and complexity varies with the scenario, most current adaptive filters fix the tap-length at some compromise value, making them inefficient to implement, especially in time-varying scenarios. A novel gradient-search-based variable tap-length algorithm is proposed, using the concept of the pseudo-fractional tap-length, and it is shown that the new algorithm converges to the optimum tap-length in the mean. Results of computer simulations are provided to verify the analysis.
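A simplified sketch of a variable tap-length LMS filter built around a pseudo-fractional tap-length is given below. The fractional length shrinks slowly via a leakage term and grows while the last few taps still reduce the squared error; the constants and the exact update rule are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def vtl_lms(x, d, n0=8, delta=2, mu=0.01, alpha=0.01, gamma=4.0, n_max=64):
    """LMS with a pseudo-fractional tap-length lf; x is the input signal,
    d the desired signal (1-D arrays of equal length)."""
    n = n0               # integer tap-length in use
    lf = float(n)        # pseudo-fractional tap-length
    w = np.zeros(n_max)  # oversized weight vector; only w[:n] is active
    for i in range(n_max - 1, len(x)):
        u = x[i - n_max + 1:i + 1][::-1]   # newest sample first
        e_full = d[i] - w[:n] @ u[:n]      # error using all n taps
        e_short = d[i] - w[:n - delta] @ u[:n - delta]  # first n-delta taps
        w[:n] += mu * e_full * u[:n]       # standard LMS weight update
        # Fractional update: shrink slowly (alpha); grow while the last
        # delta taps still reduce the squared error (gamma term).
        lf = (lf - alpha) - gamma * (e_full ** 2 - e_short ** 2)
        lf = min(max(lf, delta + 1.0), float(n_max))
        if abs(lf - n) >= 1:               # adapt the integer tap-length
            n = int(round(lf))
    return w[:n], n
```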
Abstract:
Tropical Applications of Meteorology Using Satellite and Ground-Based Observations (TAMSAT) rainfall estimates are used extensively across Africa for operational rainfall monitoring and food security applications; thus, regional evaluations of TAMSAT are essential to ensure its reliability. This study assesses the performance of TAMSAT rainfall estimates, along with the African Rainfall Climatology (ARC), version 2; the Tropical Rainfall Measuring Mission (TRMM) 3B42 product; and the Climate Prediction Center morphing technique (CMORPH), against a dense rain gauge network over a mountainous region of Ethiopia. Overall, TAMSAT exhibits good skill in detecting rainy events but underestimates rainfall amount, while ARC underestimates both rainfall amount and rainy event frequency. Meanwhile, TRMM consistently performs best in detecting rainy events and capturing the mean rainfall and seasonal variability, while CMORPH tends to overdetect rainy events. Moreover, the mean difference in daily rainfall between the products and rain gauges shows increasing underestimation with increasing elevation. However, the distribution in satellite–gauge differences demonstrates that although 75% of retrievals underestimate rainfall, up to 25% overestimate rainfall over all elevations. Case studies using high-resolution simulations suggest underestimation in the satellite algorithms is likely due to shallow convection with warm cloud-top temperatures in addition to beam-filling effects in microwave-based retrievals from localized convective cells. The overestimation by IR-based algorithms is attributed to nonraining cirrus with cold cloud-top temperatures. These results stress the importance of understanding regional precipitation systems causing uncertainties in satellite rainfall estimates with a view toward using this knowledge to improve rainfall algorithms.
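For reference, the sketch below computes the standard categorical scores typically used for this kind of rainy-event evaluation (probability of detection, false alarm ratio, frequency bias). The 1 mm/day event threshold is an assumption, not necessarily the study's definition.

```python
import numpy as np

def detection_scores(sat, gauge, threshold=1.0):
    """Compare daily satellite estimates against gauge observations,
    treating days at or above `threshold` (mm/day) as rainy events."""
    sat_rain = np.asarray(sat) >= threshold
    gau_rain = np.asarray(gauge) >= threshold
    hits = (sat_rain & gau_rain).sum()
    misses = (~sat_rain & gau_rain).sum()
    false_alarms = (sat_rain & ~gau_rain).sum()
    pod = hits / (hits + misses)                     # probability of detection
    far = false_alarms / (hits + false_alarms)       # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    return pod, far, bias
```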
Abstract:
Frequency recognition is an important task in many engineering fields, such as audio signal processing and telecommunications engineering, for example in applications like Dual-Tone Multi-Frequency (DTMF) detection or recognition of the carrier frequency of a Global Positioning System (GPS) signal. This paper presents results of investigations of several common Fourier-transform-based frequency recognition algorithms implemented in real time on a Texas Instruments (TI) TMS320C6713 Digital Signal Processor (DSP) core. In addition, suitable metrics are evaluated in order to ascertain which of the selected algorithms is appropriate for audio signal processing.
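One classic example of such a Fourier-based method is the Goertzel algorithm, which evaluates the DFT power at a single target frequency and is the textbook choice for DTMF detection; whether it is among the algorithms benchmarked in the paper is not stated in the abstract. A minimal sketch:

```python
import math

def goertzel_power(samples, target_hz, sample_rate):
    """Squared DFT magnitude at target_hz via the Goertzel recursion."""
    k = round(len(samples) * target_hz / sample_rate)  # nearest DFT bin
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2   # two-term recursion per sample
        s_prev2, s_prev = s_prev, s
    # power at the bin, from the final two recursion states
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# DTMF example: score the eight keypad frequencies and keep the loudest
# row/column pair.
DTMF_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]
```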
Abstract:
In this paper, a fuzzy Markov random field (FMRF) model is used to segment land objects into tree, grass, building, and road regions by fusing remotely sensed LIDAR data and co-registered color bands, i.e. a scanned aerial color (RGB) photo and a near-infrared (NIR) photo. An FMRF model is defined as a Markov random field (MRF) model in a fuzzy domain. Three optimization algorithms for the FMRF model, i.e. Lagrange multiplier (LM), iterated conditional modes (ICM), and simulated annealing (SA), are compared with respect to computational cost and segmentation accuracy. The results show that the FMRF model-based ICM algorithm best balances computational cost and segmentation accuracy in land-cover segmentation from LIDAR data and co-registered bands.
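Below is a minimal sketch of iterated conditional modes (ICM), the optimizer reported above as the best cost/accuracy compromise, applied to generic MRF segmentation with per-pixel label costs and a Potts smoothness prior. The energy terms are generic assumptions; the paper's fuzzy-domain formulation is richer.

```python
import numpy as np

def icm(unary, beta=1.0, n_iters=5):
    """unary: (H, W, L) per-pixel label costs; returns an (H, W) label map."""
    labels = unary.argmin(axis=2)          # start from the unary minimum
    H, W, L = unary.shape
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].astype(float)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts prior: penalise disagreeing with a neighbour
                        costs += beta * (np.arange(L) != labels[ni, nj])
                labels[i, j] = costs.argmin()  # greedy conditional update
    return labels
```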
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners that operate on image-intensity discriminative features defined on small patches and fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. The method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, requiring only a ribbon-like search region, and can handle complex texture structures without requiring a large number of observations. We demonstrate results in the contexts of both interactive 2D delineation and fast 3D tracking, and compare the method's performance with other existing methods for line-search boundary detection.
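The line-search step described above can be sketched as follows: sample a classifier score along the normal to an initial boundary estimate and move the boundary point to the best-scoring offset. The classifier is left as a stub standing in for the trained boosted weak learners.

```python
import numpy as np

def refine_boundary_point(score_fn, point, normal, search_radius=10):
    """score_fn(x, y) -> probability of a texture transition at (x, y);
    `point` is the initial boundary estimate, `normal` its normal direction.
    Returns the best-scoring point along the normal."""
    point = np.asarray(point, float)
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)                    # unit normal
    offsets = np.arange(-search_radius, search_radius + 1)
    candidates = point + offsets[:, None] * normal      # ribbon of samples
    scores = [score_fn(x, y) for x, y in candidates]
    return candidates[int(np.argmax(scores))]
```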
Abstract:
A novel Linear Hashtable Method Predicted Hexagonal Search (LHMPHS) method for block-based motion compensation is proposed. Fast block matching algorithms (BMAs) use the origin as the initial search center, which often does not track motion well. To improve the accuracy of fast BMAs, we employ a predicted starting search point that reflects the motion trend of the current block. The predicted search center lies closer to the global minimum, so center-biased BMAs can find the motion vector more efficiently. The performance of the algorithm is evaluated on standard video sequences, considering three important metrics: time, compression rate, and PSNR. The results show that the proposed algorithm enhances the accuracy of current hexagonal algorithms and outperforms Full Search, Logarithmic Search, etc.
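A small sketch of one common way to obtain a predicted search center from the motion vectors of already-coded neighboring blocks (a median predictor). The choice of neighbors and predictor is an illustrative assumption; the abstract does not specify how LHMPHS forms its prediction.

```python
import numpy as np

def predicted_center(mv_field, bi, bj):
    """mv_field: dict mapping (block_row, block_col) -> (mvx, mvy).
    Returns a predicted search center for block (bi, bj)."""
    # left, top, and top-right neighbors, as in common MV predictors
    neighbours = [(bi, bj - 1), (bi - 1, bj), (bi - 1, bj + 1)]
    mvs = [mv_field[n] for n in neighbours if n in mv_field]
    if not mvs:
        return (0, 0)  # fall back to the origin for the first blocks
    mvs = np.array(mvs)
    return (int(np.median(mvs[:, 0])), int(np.median(mvs[:, 1])))
```

The predicted center can then seed a center-biased search such as the `hexbs` sketch above via its `pred` argument.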