932 results for CONVEX COMBINATION


Relevance:

100.00%

Publisher:

Abstract:

As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
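The general mechanism behind such combinations can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: an LMS (gradient-based) and an RLS (Hessian-based) filter run in parallel, and a mixing parameter λ = σ(a) is adapted by a stochastic-gradient step on the combined error. The system-identification setup, step sizes, forgetting factor, and the [-4, 4] clipping interval are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system-identification setup (sizes and constants are illustrative).
N, M = 2000, 4
w_true = rng.standard_normal(M)
x = rng.standard_normal(N + M)
X = np.array([x[n:n + M][::-1] for n in range(N)])     # regressor rows
d = X @ w_true + 0.01 * rng.standard_normal(N)         # desired signal

w1 = np.zeros(M)                   # gradient-based component (LMS)
w2 = np.zeros(M)                   # Hessian-based component (RLS)
P = 100.0 * np.eye(M)              # RLS inverse-correlation estimate
mu, lam_rls, mu_a, a = 0.01, 0.999, 10.0, 0.0

for n in range(N):
    u = X[n]
    y1, y2 = w1 @ u, w2 @ u
    w1 += mu * (d[n] - y1) * u                         # LMS update
    k = P @ u / (lam_rls + u @ P @ u)                  # RLS gain
    w2 += k * (d[n] - y2)
    P = (P - np.outer(k, u @ P)) / lam_rls
    lam = 1.0 / (1.0 + np.exp(-a))                     # mixing parameter in (0, 1)
    e = d[n] - (lam * y1 + (1.0 - lam) * y2)           # combined error
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)      # gradient step on the mixer
    a = float(np.clip(a, -4.0, 4.0))                   # keep the sigmoid away from 0/1

w = lam * w1 + (1.0 - lam) * w2                        # equivalent combined filter
print(np.linalg.norm(w - w_true))
```

Because λ stays strictly inside (0, 1), the combination can lean toward whichever component is tracking better at each instant, which is the behavior the steady-state EMSE model above characterizes.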

Relevance:

100.00%

Publisher:

Abstract:

Convex combination is a mathematical approach that retains the advantages of its component algorithms for better performance. In this paper, we employ a convex combination to improve blind equalization. By combining the blind constant modulus algorithm (CMA) and the decision-directed algorithm, the combinative blind equalization (CBE) algorithm retains the advantages of both. Furthermore, the CBE algorithm converges faster than either of its component equalizers. Simulation results are given to verify the proposed algorithm.

Relevance:

70.00%

Publisher:

Abstract:

In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filter. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allows them to be applied to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and the constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is always observed.

Relevance:

70.00%

Publisher:

Abstract:

Convex combinations of long-memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing standard methods with their combined versions, the latter achieve a lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
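The variance-reduction argument can be checked numerically. In this sketch the true memory parameter, the two estimator standard deviations, and the independence of the (bias-corrected) estimates are all hypothetical assumptions; under them, the variance-minimizing convex weight is w* = s2²/(s1² + s2²).

```python
import numpy as np

# Illustrative Monte Carlo check with hypothetical estimator noise levels.
rng = np.random.default_rng(1)
d_true, s1, s2, n = 0.4, 0.10, 0.15, 100_000

# Bias-corrected, unbiased estimates at two sampling rates (simulated).
est1 = d_true + s1 * rng.standard_normal(n)
est2 = d_true + s2 * rng.standard_normal(n)

# Variance-minimizing convex weight for independent estimates.
w = s2**2 / (s1**2 + s2**2)
comb = w * est1 + (1.0 - w) * est2

print(est1.std(), est2.std(), comb.std())
```

The combined standard deviation, s1·s2/√(s1² + s2²), is below that of either component; with correlated estimates the optimal weight also involves the covariance, and a bias at one rate would trade off against this gain, as the RMSE comparison in the abstract describes.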

Relevance:

60.00%

Publisher:

Abstract:

We propose a robust, low-complexity scheme to estimate and track carrier frequency from signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieving the frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods yield a robust and competitive frequency-tracking technique.
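The FFT-search idea can be illustrated in isolation: locate the frequency of a noisy tone as the peak of a zero-padded FFT rather than from a single predictor root. The sampling rate, SNR, and padding length below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy FFT-search frequency estimate for a noisy tone.
rng = np.random.default_rng(2)
fs, n, f_true = 1000.0, 4096, 123.4
t = np.arange(n) / fs
x = np.cos(2.0 * np.pi * f_true * t) + 0.5 * rng.standard_normal(n)

nfft = 1 << 16                                   # zero-padding refines the frequency grid
spec = np.abs(np.fft.rfft(x, nfft))
f_hat = np.argmax(spec) * fs / nfft              # peak location in Hz
print(f_hat)
```

The FFT's coherent processing gain over the full block is what makes this search robust at low SNR, where the angle of a single root of the prediction-error filter is easily perturbed by noise.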

Relevance:

60.00%

Publisher:

Abstract:

Knowing exactly where a mobile entity is and monitoring its trajectory in real time has recently attracted a lot of interest from both academia and industry, due to the large number of applications it enables; nevertheless, it remains one of the most challenging problems from scientific and technological standpoints. In this work we propose a tracking system based on the fusion of position estimates provided by different sources, which are combined to obtain a final estimate with improved accuracy over those generated by each system individually. In particular, exploiting the availability of a Wireless Sensor Network as an infrastructure, a mobile entity equipped with an inertial system first obtains position estimates using both a Kalman Filter and a fully distributed positioning algorithm (the Enhanced Steepest Descent, which we recently proposed), and then combines the results using the Simple Convex Combination algorithm. Simulation results clearly show good performance in terms of the final accuracy achieved. Finally, the proposed technique is validated against real data taken from an inertial sensor provided by THALES ITALIA.

Relevance:

60.00%

Publisher:

Abstract:

Conventional thermoelectric power plants convert only part of the fuel consumed into electricity, with the remainder lost as heat. Cogeneration, or Combined Heat and Power (CHP), units were introduced to recover the energy dissipated as heat and make it available, together with the electricity generated, for domestic or industrial consumption, making them more efficient than conventional units. The production costs of electricity and heat in CHP units are represented by a non-linear function, and their feasible operating region may be convex or non-convex, depending on the characteristics of each unit. For these reasons, modelling CHP units within the Unit Commitment Problem (UCP) is especially relevant for companies that also own this type of unit. These companies aim to decide, among the CHP units and the units that generate only electricity or only heat, which should be committed and at what production levels, so as to meet the demand for electricity and heat at minimum cost. This document proposes two mixed-integer programming models for the UCP with cogeneration units: a non-linear model that includes the actual production-cost function of the CHP units, and a model that linearizes that function based on the convex combination of a pre-defined number of extreme points. In both models, the non-convex feasible operating region is modelled by splitting it into two distinct convex regions. Computational tests performed with both models on several instances confirmed the efficiency of the proposed linear model, which obtained the optimal solutions of the non-linear model with significantly lower computation times.
In addition, both models were tested with and without load pickup and load drop constraints, leading to the conclusion that this type of constraint increases the complexity of the problem, with the computation time required to solve it growing significantly.
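The extreme-point linearization can be sketched outside a solver: an operating point is written as a convex combination of pre-defined breakpoints, and its cost as the same combination of the breakpoint costs. The quadratic cost function and the breakpoint grid below are hypothetical stand-ins for a CHP cost curve.

```python
import numpy as np

# Hypothetical nonlinear production cost.
def cost(p):
    return 0.02 * p**2 + 5.0 * p + 100.0

breakpoints = np.linspace(0.0, 200.0, 9)     # pre-defined extreme points
bp_costs = cost(breakpoints)

def linearized_cost(p):
    # Locate the segment containing p; lam and (1 - lam) are the two
    # nonzero convex-combination weights on its endpoints.
    i = int(np.clip(np.searchsorted(breakpoints, p) - 1, 0, len(breakpoints) - 2))
    lam = (breakpoints[i + 1] - p) / (breakpoints[i + 1] - breakpoints[i])
    return lam * bp_costs[i] + (1.0 - lam) * bp_costs[i + 1]

p = 87.0
print(cost(p), linearized_cost(p))
```

In the actual mixed-integer model the convex-combination weights become decision variables, with binary variables (or an SOS2 condition) forcing at most two adjacent weights to be nonzero; since the cost here is convex, the piecewise-linear approximation overestimates it, with the error shrinking as breakpoints are added.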

Relevance:

60.00%

Publisher:

Abstract:

We report results from an experiment that explores the empirical validity of correlated equilibrium, an important generalization of the Nash equilibrium concept. Specifically, we seek to understand the conditions under which subjects playing the game of Chicken will condition their behavior on private, third-party recommendations drawn from known distributions. In a "good-recommendations" treatment, the distribution we use is a correlated equilibrium with payoffs better than any symmetric payoff in the convex hull of Nash equilibrium payoff vectors. In a "bad-recommendations" treatment, the distribution is a correlated equilibrium with payoffs worse than any Nash equilibrium payoff vector. In a "Nash-recommendations" treatment, the distribution is a convex combination of Nash equilibrium outcomes (which is also a correlated equilibrium), and in a fourth "very-good-recommendations" treatment, the distribution yields high payoffs but is not a correlated equilibrium. We compare behavior in all of these treatments to the case where subjects do not receive recommendations. We find that when recommendations are not given to subjects, behavior is very close to mixed-strategy Nash equilibrium play. When recommendations are given, behavior does differ from mixed-strategy Nash equilibrium, with the nature of the differences varying according to the treatment. Our main finding is that subjects will follow third-party recommendations only if those recommendations derive from a correlated equilibrium, and further, if that correlated equilibrium is payoff-enhancing relative to the available Nash equilibria.

Relevance:

60.00%

Publisher:

Abstract:

In a distribution problem, and specifically in bankruptcy issues, the Proportional (P) and the Egalitarian (EA) divisions are two of the most popular ways to resolve the conflict. The Constrained Equal Awards rule (CEA) is introduced in the bankruptcy literature to ensure that no agent receives more than her claim, a problem that can arise when using the egalitarian division. We propose an alternative modification, using a convex combination of P and EA. The recursive application of this new rule finishes at the CEA rule. Our solution concept ensures a minimum amount to each agent, and distributes the remaining estate in a proportional way. Keywords: Bankruptcy problems, Proportional rule, Equal Awards, Convex combination of rules, Lorenz dominance. JEL classification: C71, D63, D71.
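The three rules involved can be sketched on a small hypothetical problem (claims, estate, and mixing weight theta are illustrative choices, not the paper's):

```python
import numpy as np

def proportional(claims, estate):
    claims = np.asarray(claims, dtype=float)
    return estate * claims / claims.sum()

def equal_awards(claims, estate):
    return np.full(len(claims), estate / len(claims))

def cea(claims, estate):
    # Constrained Equal Awards: each agent gets min(c_i, lam), with lam
    # found here by bisection so the awards exhaust the estate.
    claims = np.asarray(claims, dtype=float)
    lo, hi = 0.0, claims.max()
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.minimum(claims, lam).sum() < estate:
            lo = lam
        else:
            hi = lam
    return np.minimum(claims, lam)

claims, estate, theta = [10.0, 30.0, 60.0], 60.0, 0.5
combo = theta * proportional(claims, estate) + (1.0 - theta) * equal_awards(claims, estate)
awards = cea(claims, estate)
print(combo, awards)
```

Note that the one-shot combination here awards the first agent more than her claim of 10, the very problem the abstract mentions for the egalitarian division; the paper's recursive application of the combined rule resolves this, terminating at the CEA division, which for this example is [10, 25, 25].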

Relevance:

60.00%

Publisher:

Abstract:

This paper constructs a unit root test based on partially adaptive estimation, which is shown to be robust against non-Gaussian innovations. We show that the limiting distribution of the t-statistic is a convex combination of the standard normal and the DF distribution. Convergence to the DF distribution is obtained when the innovations are Gaussian, implying that the traditional ADF test is a special case of the proposed test. Monte Carlo experiments indicate that, if the innovations have a heavy-tailed distribution or are contaminated by outliers, the proposed test is more powerful than the traditional ADF test. Nominal interest rates (at different maturities) are shown to be stationary according to the robust test but not according to the nonrobust ADF test. This result seems to suggest that the failure to reject the null of a unit root in nominal interest rates may be due to the use of estimation and hypothesis-testing procedures that do not account for the absence of Gaussianity in the data. Our results validate practical restrictions on the behavior of the nominal interest rate imposed by CCAPM, optimal monetary policy, and option pricing models.

Relevance:

60.00%

Publisher:

Abstract:

In the context of climate change over South America (SA), it has been observed that combinations of high temperatures with either more or less rainfall cause different impacts, such as extreme precipitation events, conditions favorable to fires, and droughts. As a result, these regions face a growing threat of local or generalized water shortage. Thus, water availability in Brazil depends largely on the weather and its variations on different time scales. In this sense, the main objective of this research is to study the moisture budget using regional climate models (RCM) from the project Regional Climate Change Assessments for La Plata Basin (CLARIS-LPB), and to combine these RCM through two statistical techniques in an attempt to improve prediction over three areas of SA: the Amazon (AMZ), Northeast Brazil (NEB), and the La Plata Basin (LPB), in past (1961-1990) and future (2071-2100) climates. The moisture transport over SA was investigated through vertically integrated moisture fluxes. The main results showed that the mean water-vapor fluxes in the tropics (AMZ and NEB) are larger across the eastern and northern edges, indicating that the contributions of the North and South Atlantic trade winds are equally important for moisture inflow during the months of JJA and DJF. This configuration was observed in all models and climates. Comparing climates, it was found that the convergence of the moisture flux in the past climate was smaller than in the future climate in various regions and seasons. Similarly, most of the models simulate, in the future climate, reduced precipitation in the tropical regions (AMZ and NEB) and an increase in the LPB region. The second phase of this research was to combine the RCM to predict precipitation more accurately, through multiple regression on principal components (C.RPC) and a convex combination (C.EQM), and then to analyze and compare the resulting ensembles of RCM.
The results indicated that the C.RPC combination better represented the observed precipitation in both climates. Besides yielding values close to those observed, the technique obtained correlation coefficients of moderate to strong magnitude in almost every month across the different climates and regions, as well as lower data dispersion (RMSE). A significant advantage of the combination methods was their ability to capture extreme events (outliers) in the study regions. In general, it was observed that C.EQM captures more wet extremes, while C.RPC captures more dry extremes, in both climates and in the three regions studied.

Relevance:

60.00%

Publisher:

Abstract:

A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented, to be used in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech under non-ideal conditions, such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial noise and natural noises to the audio signals, and several algorithms are compared. The results will be extrapolated to the fields of adaptive filtering of monophonic signals and the analysis of speech pathologies in future work.
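The building blocks can be sketched as follows. This is a simplified stand-in, not the paper's detector: the Hilbert envelope is computed with an FFT-based analytic signal, a tone plus noise stands in for speech, and the dynamic threshold is a convex combination of the envelope extremes with a hypothetical weight alpha.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based discrete Hilbert transform (even-length input assumed).
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(X * h)

# Synthetic test signal: a 200 Hz tone standing in for voiced speech,
# embedded in background noise. All parameters are illustrative.
fs = 8000
rng = np.random.default_rng(3)
t = np.arange(2 * fs) / fs
x = 0.05 * rng.standard_normal(len(t))
x[4000:12000] += np.sin(2.0 * np.pi * 200.0 * t[4000:12000])

env = np.abs(analytic_signal(x))                 # Hilbert envelope
alpha = 0.3                                      # hypothetical mixing weight
thr = alpha * env.max() + (1.0 - alpha) * env.min()
active = env > thr                               # voice-activity decision
print(active.mean())
```

A practical detector would track the extremes (or noise statistics) over a sliding window so the threshold adapts to nonstationary noise, which is where the dynamic aspect of the paper's convex-combination rule comes in.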

Relevance:

60.00%

Publisher:

Abstract:

Dissertation (Master's)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Estatística, 2015.

Relevance:

40.00%

Publisher:

Abstract:

Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow a logistic Normal distribution. In this paper, I investigate the extension to situations where the mixing composition varies with a number of dimensions. Examples would be where the mixing proportions vary with time or distance, or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, and possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation in the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and relative distance from the source origins.