865 results for kernel estimator


Relevance: 10.00%

Publisher:

Abstract:

In this paper, we present a polynomial-based noise variance estimator for multiple-input multiple-output single-carrier block transmission (MIMO-SCBT) systems. It is shown that the optimal pilots for noise variance estimation satisfy the same condition as that for channel estimation. Theoretical analysis indicates that the proposed estimator is statistically more efficient than the conventional sum of squared residuals (SSR) based estimator. Furthermore, we obtain an efficient implementation of the estimator by exploiting its special structure. Numerical results confirm our theoretical analysis.
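For context, the conventional SSR-based baseline against which the proposed estimator is compared can be sketched as follows (a minimal illustration assuming a least-squares channel estimate from known pilots; array shapes and names are illustrative, not taken from the paper):

    import numpy as np

    def ssr_noise_variance(Y, P):
        # Y: received pilot observations (n_rx x n_pilot), complex
        # P: known transmitted pilot matrix (n_tx x n_pilot)
        H_ls = Y @ np.linalg.pinv(P)         # least-squares channel estimate
        R = Y - H_ls @ P                     # residuals after pilot cancellation
        n_rx, n_pilot = Y.shape
        n_tx = P.shape[0]
        dof = n_rx * (n_pilot - n_tx)        # residual degrees of freedom
        return np.sum(np.abs(R) ** 2) / dof  # sum of squared residuals / dof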

Relevance: 10.00%

Publisher:

Abstract:

In financial research, the sign of a trade (the identity of the trade aggressor) is not always available in the transaction dataset, but it can be estimated using a simple set of rules called the tick test. In this paper we investigate the accuracy of the tick test from an analytical perspective by providing a closed formula for the performance of the prediction algorithm. By analyzing the derived equation, we provide formal arguments for the use of the tick test, proving that it is guaranteed to perform better than chance (50/50) and that its set of rules provides an unbiased estimator of the trade signs. On the empirical side, we compare the values from the analytical formula against the empirical performance of the tick test for fifteen heavily traded stocks in the Brazilian equity market. The results show that the formula is quite realistic in assessing the accuracy of the prediction algorithm in a real-data setting.
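The rule set referred to is short enough to state directly. A minimal sketch of the standard tick test (an uptick is classified as buyer-initiated, a downtick as seller-initiated, and a zero tick inherits the previous classification; names are illustrative):

    def tick_test(prices):
        # Classify each trade (from the second one onward) as +1 (buy) or
        # -1 (sell) using only the sequence of transaction prices.
        signs, last = [], 0
        for prev, curr in zip(prices[:-1], prices[1:]):
            if curr > prev:
                last = 1      # uptick: buyer-initiated
            elif curr < prev:
                last = -1     # downtick: seller-initiated
            # zero tick: keep the sign of the last classified trade
            signs.append(last)
        return signs

    print(tick_test([10.00, 10.01, 10.01, 10.00]))   # -> [1, 1, -1]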

Relevance: 10.00%

Publisher:

Abstract:

We derive energy-norm a posteriori error bounds, using gradient recovery (ZZ) estimators to control the spatial error, for fully discrete schemes for the linear heat equation. This appears to be the first completely rigorous derivation of ZZ estimators for fully discrete schemes for evolution problems, without any restrictive assumption on the timestep size. An essential tool for the analysis is the elliptic reconstruction technique. Our theoretical results are backed with extensive numerical experimentation aimed at (a) testing the practical sharpness and asymptotic behaviour of the error estimator against the error, and (b) deriving an adaptive method based on our estimators. An extra novelty provided is an implementation of a coarsening error "preindicator", with a complete implementation guide in ALBERTA in the appendix.
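For reference, the spatial indicator underlying gradient-recovery (ZZ) estimation has the standard elementwise form below (a generic sketch; the fully discrete estimator derived in the paper combines such terms with time-discretization and elliptic-reconstruction contributions):

\[
\eta_K^2 = \big\| G_h(u_h) - \nabla u_h \big\|_{L^2(K)}^2, \qquad
\eta^2 = \sum_{K \in \mathcal{T}_h} \eta_K^2,
\]

where u_h is the finite element solution, G_h(u_h) is the recovered (e.g. nodal-averaged) gradient, and the sum runs over the elements K of the mesh T_h.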

Relevance: 10.00%

Publisher:

Abstract:

Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
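A schematic of the kind of model described is given below (a sketch only, following Geweke's treatment of heteroscedastic errors; the exact parameterisation and prior settings in the paper may differ):

\[
y_i = x_i \beta + \varepsilon_i, \qquad
x_i = z_i' \gamma + \eta_i, \qquad
\varepsilon_i \sim N(0, \sigma_\varepsilon^2 \lambda_i), \quad
\eta_i \sim N(0, \sigma_\eta^2 \omega_i),
\]

with the structural and instrument errors allowed to be correlated, and hierarchical priors on the observation-specific scale factors, e.g. \nu/\lambda_i \sim \chi^2_\nu (equivalently, a Gamma prior on \lambda_i^{-1}), and similarly for \omega_i. The MCMC scheme then augments the sampler of Rossi et al. (2005) with a draw of the \lambda_i and \omega_i.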

Relevance: 10.00%

Publisher:

Abstract:

Methods of improving the coverage of Box–Jenkins prediction intervals for linear autoregressive models are explored. These methods use bootstrap techniques to allow for parameter estimation uncertainty and to reduce the small-sample bias in the estimator of the models’ parameters. In addition, we also consider a method of bias-correcting the non-linear functions of the parameter estimates that are used to generate conditional multi-step predictions.
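The bias-correction step can be illustrated for an AR(1) model (a minimal sketch; the paper treats general linear autoregressive models and also bias-corrects the multi-step prediction functions):

    import numpy as np

    def ols_ar1(y):
        # OLS estimate of phi in y_t = phi * y_{t-1} + e_t (zero-mean series)
        return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

    def bias_corrected_ar1(y, n_boot=999, seed=None):
        rng = np.random.default_rng(seed)
        phi_hat = ols_ar1(y)
        resid = y[1:] - phi_hat * y[:-1]
        boot = np.empty(n_boot)
        for b in range(n_boot):
            e = rng.choice(resid, size=len(y), replace=True)
            ys = np.empty(len(y))
            ys[0] = y[0]
            for t in range(1, len(y)):          # rebuild a bootstrap series
                ys[t] = phi_hat * ys[t - 1] + e[t]
            boot[b] = ols_ar1(ys)
        bias = boot.mean() - phi_hat            # estimated small-sample bias
        return phi_hat - bias                   # bias-corrected estimate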

Relevance: 10.00%

Publisher:

Abstract:

In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss when they make large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, while use of the Gaussian-based test implies that certain forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.

Relevance: 10.00%

Publisher:

Abstract:

We study the spectrum of a one-dimensional Dirac operator pencil, with a coupling constant in front of the potential considered as the spectral parameter. Motivated by recent investigations of graphene waveguides, we focus on the values of the coupling constant for which the kernel of the Dirac operator contains a square integrable function. In physics literature such a function is called a confined zero mode. Several results on the asymptotic distribution of coupling constants giving rise to zero modes are obtained. In particular, we show that this distribution depends in a subtle way on the sign variation and the presence of gaps in the potential. Surprisingly, it also depends on the arithmetic properties of certain quantities determined by the potential. We further observe that variable sign potentials may produce complex eigenvalues of the operator pencil. Some examples and numerical calculations illustrating these phenomena are presented.

Relevance: 10.00%

Publisher:

Abstract:

The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are considered for countering the small-sample bias of least-squares estimation for processes which have roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy–Fuller estimator in place of OLS; and the use of the Andrews–Chen estimator in place of OLS. All three methods of bias correction yield superior results to the bootstrap in the absence of bias correction. Of the three correction methods, the bootstrap prediction intervals based on the Roy–Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy–Fuller estimator is investigated when the order of the AR model is unknown and has to be determined using an information criterion.
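For concreteness, the generic percentile construction of a bootstrap prediction interval for an AR(1) fit is sketched below (illustrative only; in the paper the OLS coefficient is replaced by its bias-corrected, Roy–Fuller or Andrews–Chen counterpart):

    import numpy as np

    def bootstrap_ar1_interval(y, phi, h=1, n_boot=999, level=0.95, seed=None):
        # Percentile interval for y_{T+h} obtained by resampling residuals and
        # simulating h steps ahead from the end of the observed series.
        rng = np.random.default_rng(seed)
        resid = y[1:] - phi * y[:-1]
        paths = np.empty(n_boot)
        for b in range(n_boot):
            x = y[-1]
            for _ in range(h):
                x = phi * x + rng.choice(resid)
            paths[b] = x
        lo, hi = np.quantile(paths, [(1 - level) / 2, (1 + level) / 2])
        return lo, hi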

Relevance: 10.00%

Publisher:

Abstract:

A class of identification algorithms is introduced for Gaussian process (GP) models. The fundamental approach is to propose a new kernel function which leads to a covariance matrix with low rank, a property that is consequently exploited for computational efficiency in both model parameter estimation and model prediction. The objective of either maximizing the marginal likelihood or the Kullback–Leibler (K–L) divergence between the estimated output probability density function (pdf) and the true pdf is used as the respective cost function. For each cost function, an efficient coordinate descent algorithm is proposed to estimate the kernel parameters using a one-dimensional derivative-free search, and the noise variance using a fast gradient descent algorithm. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
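The computational benefit of a low-rank covariance can be sketched via the matrix inversion lemma (an illustrative sketch assuming the kernel factors as K = Phi Phi^T with rank r much smaller than n; names are not from the paper):

    import numpy as np

    def gp_posterior_mean(Phi, y, Phi_star, noise_var):
        # Phi: n x r feature matrix with K = Phi @ Phi.T (low rank r << n)
        # Woodbury: (K + s2*I)^-1 = (I - Phi @ A^-1 @ Phi.T / s2) / s2,
        # where A = s2*I_r + Phi.T @ Phi, so only an r x r system is solved.
        n, r = Phi.shape
        A = noise_var * np.eye(r) + Phi.T @ Phi
        alpha = (y - Phi @ np.linalg.solve(A, Phi.T @ y)) / noise_var
        return Phi_star @ (Phi.T @ alpha)       # posterior mean at test inputs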

Relevance: 10.00%

Publisher:

Abstract:

This contribution proposes a novel probability density function (PDF) estimation based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. Synthetic instances are then generated according to the estimated PDF and used as additional training data. The essential idea is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic samples share the statistical properties of the original positive class. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier's structure and the parameters of the RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
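The over-sampling step can be sketched with a generic Gaussian kernel density estimate of the positive class (a minimal illustration standing in for the specific Parzen-window estimator and kernel-width selection used in the paper):

    import numpy as np
    from scipy.stats import gaussian_kde

    def oversample_minority(X_min, n_new):
        # Fit a kernel density estimate to the minority (positive) class and
        # draw synthetic instances from it, so that the synthetic samples
        # follow the estimated class-conditional PDF.
        kde = gaussian_kde(X_min.T)         # gaussian_kde expects shape (d, n)
        synthetic = kde.resample(n_new).T   # draws from the estimated density
        return np.vstack([X_min, synthetic])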

Relevance: 10.00%

Publisher:

Abstract:

We consider a three-dimensional system consisting of a large number of small spherical particles, distributed in a range of sizes and heights (with uniform distribution in the horizontal direction). Particles move vertically at a size-dependent terminal velocity. They either merge whenever they cross, or a size-ratio criterion is enforced to account for collision efficiency. Such a system may be described, in mean-field approximation, by the Smoluchowski kinetic equation with a differential sedimentation kernel. We obtain self-similar steady-state and time-dependent solutions to the kinetic equation, using methods borrowed from weak turbulence theory. Analytical results are compared with direct numerical simulations (DNS) of moving and merging particles, and good agreement is found.
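For reference, the mean-field description referred to is the Smoluchowski equation with a collision kernel set by differential sedimentation (a standard form; the collision-efficiency variant in the paper additionally restricts which size pairs may merge):

\[
\frac{\partial n(m,t)}{\partial t}
= \frac{1}{2}\int_0^{m} K(m', m - m')\, n(m')\, n(m - m')\, dm'
  - n(m) \int_0^{\infty} K(m, m')\, n(m')\, dm',
\]

with a differential-sedimentation kernel of the form K(m_1, m_2) \propto (r_1 + r_2)^2 |v(r_1) - v(r_2)|, where r_i are the particle radii and v(r_i) their size-dependent terminal velocities.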

Relevance: 10.00%

Publisher:

Abstract:

We study the solutions of the Smoluchowski coagulation equation with a regularization term which removes clusters from the system when their mass exceeds a specified cutoff size, M. We focus primarily on collision kernels which would exhibit an instantaneous gelation transition in the absence of any regularization. Numerical simulations demonstrate that for such kernels with monodisperse initial data, the regularized gelation time decreases as M increases, consistent with the expectation that the gelation time is zero in the unregularized system. This decrease appears to be a logarithmically slow function of M, indicating that instantaneously gelling kernels may still be justifiable as physical models despite the fact that they are highly singular in the absence of a cutoff. We also study the case when a source of monomers is introduced in the regularized system. In this case a stationary state is reached. We present a complete analytic description of this regularized stationary state for the model kernel K(m1, m2) = max{m1, m2}^ν, which gels instantaneously when M → ∞ if ν > 1. The stationary cluster size distribution decays as a stretched exponential for small cluster sizes and crosses over to a power-law decay with exponent ν for large cluster sizes. The total particle density in the stationary state slowly vanishes as [(ν − 1) log M]^(−1/2) when M → ∞. The approach to the stationary state is nontrivial: oscillations about the stationary state emerge from the interplay between the monomer injection and the cutoff M, and decay very slowly when M is large. A quantitative analysis of these oscillations is provided for the addition model, which describes the situation in which clusters can only grow by absorbing monomers.
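A minimal numerical sketch of the regularized system with a monomer source is given below (discrete masses, the max kernel from the abstract, removal of clusters heavier than the cutoff M; the time step and parameter values are illustrative only):

    import numpy as np

    def regularized_smoluchowski(M=50, nu=1.5, J=1.0, dt=1e-4, steps=20000):
        # c[k-1] is the number density of clusters of mass k; only masses
        # 1..M are tracked, so any merger with total mass > M is removed.
        c = np.zeros(M)
        masses = np.arange(1, M + 1)
        K = np.maximum.outer(masses, masses).astype(float) ** nu
        for _ in range(steps):
            flux = K * np.outer(c, c)           # pairwise collision rates
            gain = np.zeros(M)
            for i in range(1, M + 1):
                for j in range(1, M + 1 - i):   # only products with mass <= M
                    gain[i + j - 1] += 0.5 * flux[i - 1, j - 1]
            loss = c * (K @ c)
            c += dt * (gain - loss)
            c[0] += dt * J                      # monomer injection
        return c    # relaxes towards the stationary state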

Relevance: 10.00%

Publisher:

Abstract:

Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
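One common instance of such an approximate kernel is a Metropolis–Hastings chain whose acceptance ratio is estimated from a data subsample. A minimal sketch for the mean of a Gaussian model (purely illustrative of replacing P by an approximation; the paper's analysis applies to general perturbed kernels):

    import numpy as np

    def subsampled_metropolis(data, n_iter=5000, m=1000, step=0.1, seed=None):
        # Random-walk Metropolis for theta in a N(theta, 1) model with a flat
        # prior. The log-likelihood ratio is estimated on a random subsample
        # of size m and rescaled, giving an approximate transition kernel.
        rng = np.random.default_rng(seed)
        n = len(data)
        theta = float(np.mean(data[: min(100, n)]))
        chain = np.empty(n_iter)
        for t in range(n_iter):
            prop = theta + step * rng.standard_normal()
            x = data[rng.choice(n, size=min(m, n), replace=False)]
            # scaled subsample estimate of the full-data log-likelihood ratio
            llr = (n / len(x)) * 0.5 * np.sum((x - theta) ** 2 - (x - prop) ** 2)
            if np.log(rng.uniform()) < llr:
                theta = prop
            chain[t] = theta
        return chain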

Relevance: 10.00%

Publisher:

Abstract:

The techno-economic performance of a small wind turbine is very sensitive to the available wind resource. However, due to financial and practical constraints, installers rely on low-resolution wind speed databases to assess a potential site. This study investigates whether the two site assessment tools currently used in the UK, NOABL and the Energy Saving Trust wind speed estimator, are accurate enough to estimate the techno-economic performance of a small wind turbine. Both tools tend to overestimate the wind speed, with mean errors of 23% and 18% for the NOABL and Energy Saving Trust tools respectively. A techno-economic assessment of 33 small wind turbines at each site has shown that these errors can have a significant impact on the estimated load factor of an installation. Consequently, site/turbine combinations which are not economically viable can be predicted to be viable. Furthermore, both models tend to underestimate the wind resource at relatively high wind speed sites, which can lead to missed opportunities, as economically viable turbine/site combinations are predicted to be non-viable. These results show that a better understanding of the local wind resource is required to make small wind turbines a viable technology in the UK.
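The sensitivity of the estimated load factor to a wind speed error can be illustrated with a simple calculation (assuming a Rayleigh wind speed distribution and a generic power curve; the cut-in, rated and cut-out speeds below are illustrative and not taken from the study):

    import numpy as np

    def load_factor(mean_speed, cut_in=3.0, rated=11.0, cut_out=25.0):
        # Expected power output as a fraction of rated power, assuming a
        # Rayleigh wind speed distribution and a cubic ramp up to rated speed.
        v = np.linspace(0.0, 30.0, 3001)
        sigma = mean_speed * np.sqrt(2.0 / np.pi)       # Rayleigh scale
        pdf = (v / sigma**2) * np.exp(-(v**2) / (2.0 * sigma**2))
        p = np.zeros_like(v)
        ramp = (v >= cut_in) & (v < rated)
        p[ramp] = (v[ramp]**3 - cut_in**3) / (rated**3 - cut_in**3)
        p[(v >= rated) & (v <= cut_out)] = 1.0
        return np.sum(p * pdf) * (v[1] - v[0])          # numerical integration

    # A ~20% overestimate of the mean wind speed inflates the load factor markedly
    print(load_factor(5.0), load_factor(5.0 * 1.2))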

Relevance: 10.00%

Publisher:

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance; finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
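The way such a benchmark-driven model combines its two components can be sketched as follows (per-cell compute cost and per-cell halo-exchange cost interpolated from benchmark tables and summed; the decomposition logic and table values below are placeholders, not measured data):

    import numpy as np

    def predict_step_time(nx, ny, px, py, compute_bench, halo_bench):
        # compute_bench / halo_bench: sorted (problem size, seconds per element)
        # pairs measured under the deployment scenario of interest.
        local_cells = (nx // px) * (ny // py)        # cells owned by one task
        halo_cells = 2 * (nx // px + ny // py)       # cells exchanged per step
        sizes, per_cell = zip(*compute_bench)
        t_compute = np.interp(local_cells, sizes, per_cell) * local_cells
        sizes, per_cell = zip(*halo_bench)
        t_halo = np.interp(halo_cells, sizes, per_cell) * halo_cells
        return t_compute + t_halo                    # per-timestep estimate

    # Placeholder benchmark tables: (elements, seconds per element)
    compute_bench = [(1e4, 2.0e-8), (1e5, 2.2e-8), (1e6, 2.5e-8)]
    halo_bench = [(1e2, 5.0e-7), (1e3, 3.0e-7), (1e4, 2.0e-7)]
    print(predict_step_time(2048, 2048, 8, 8, compute_bench, halo_bench))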