139 results for kernel estimator
Abstract:
We propose a criterion for the validity of semiclassical gravity (SCG) which is based on the stability of the solutions of SCG with respect to quantum metric fluctuations. We pay special attention to the two-point quantum correlation functions for the metric perturbations, which contain both intrinsic and induced fluctuations. These fluctuations can be described by the Einstein-Langevin equation obtained in the framework of stochastic gravity. Specifically, the Einstein-Langevin equation yields stochastic correlation functions for the metric perturbations which agree, to leading order in the large N limit, with the quantum correlation functions of the theory of gravity interacting with N matter fields. The homogeneous solutions of the Einstein-Langevin equation are equivalent to the solutions of the perturbed semiclassical equation, which describe the evolution of the expectation value of the quantum metric perturbations. The information on the intrinsic fluctuations, which are connected to the initial fluctuations of the metric perturbations, can also be retrieved entirely from the homogeneous solutions. However, the induced metric fluctuations proportional to the noise kernel can only be obtained from the Einstein-Langevin equation (the inhomogeneous term). These equations exhibit runaway solutions with exponential instabilities. A detailed discussion about different methods to deal with these instabilities is given. We illustrate our criterion by showing explicitly that flat space is stable and a description based on SCG is a valid approximation in that case.
Abstract:
Uniform-price assignment games are introduced as those assignment markets whose core reduces to a segment. In these games, competitive prices are uniform for all active agents, although products may be non-homogeneous. A characterization in terms of the assignment matrix is given. The only assignment markets in which all submarkets are uniform are the Böhm-Bawerk horse markets. We prove that for uniform-price assignment games the kernel, or set of symmetrically pairwise-bargained allocations, either coincides with the core or reduces to the nucleolus.
Abstract:
In the domain of bilateral assignment games, an axiomatization of the nucleolus is presented as the unique solution satisfying consistency with respect to the derived game defined by Owen (1992) and monotonicity of the sectors' complaints with respect to their cardinality. As a consequence, we obtain a geometric characterization of the nucleolus by means of a bisection property stronger than the one satisfied by the points of the kernel (Maschler et al., 1979).
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinearity inherent in estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
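The abstract does not spell out the closed form, but the general idea of a linear (regression-based) estimator for the MA(1) parameter can be sketched with a classical Durbin-style two-step procedure; this is an illustration of the linear least squares approach, not necessarily the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an invertible MA(1) process: x_t = e_t + theta * e_{t-1}.
theta_true = 0.5
n = 5000
e = rng.standard_normal(n + 1)
x = e[1:] + theta_true * e[:-1]

# Step 1: fit a long autoregression by ordinary least squares; its
# residuals approximate the unobserved innovations e_t.
p = 20
X = np.column_stack([x[p - j - 1:n - j - 1] for j in range(p)])
y = x[p:]
phi = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ phi  # proxy for e_t

# Step 2: a single OLS regression of x_t on the lagged residual gives a
# closed-form (linear) estimate of theta -- no nonlinear search needed.
z = resid[:-1]
theta_hat = float(z @ y[1:] / (z @ z))
```

The avoidance of any iterative nonlinear optimization is the point: both steps are plain least squares projections.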
Abstract:
This paper extends the notion of symmetrically pairwise-bargained equilibrium, introduced by Rochford (1983) for bilateral assignment games, to multilateral assignment games. A symmetric multilateral bargaining (SMB) payoff is a core imputation that guarantees that every agent is in equilibrium with respect to a bargaining process among all agents, based on what each of them could receive, and use as a threat, in an optimal matching different from the one actually formed. We prove that, for multilateral assignment games, the set of SMB is always nonempty and that, unlike in the bilateral case, it does not always coincide with the kernel (Davis and Maschler, 1965). Finally, we answer a question left open by Rochford (1982) by introducing a kernel-based set that, together with the core, allows us to characterize the set of SMB.
Abstract:
In a recent paper, Komaki studied the second-order asymptotic properties of predictive distributions, using the Kullback-Leibler divergence as a loss function. He showed that estimative distributions with asymptotically efficient estimators can be improved by predictive distributions that do not belong to the model. The model is assumed to be a multidimensional curved exponential family. In this paper we generalize the result, taking as a loss function any f-divergence. A relationship arises between alpha-connections and optimal predictive distributions. In particular, when an alpha-divergence is used to measure the goodness of a predictive distribution, the optimal shift of the estimative distribution is related to alpha-covariant derivatives. The expression we obtain for the asymptotic risk is also useful for studying the higher-order asymptotic properties of an estimator in this class of loss functions.
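For reference, the divergence families involved can be written explicitly. The following display uses the standard definitions (Amari's convention for the alpha-divergence) and is added only for illustration:

```latex
% f-divergence between densities p and q, for convex f with f(1) = 0:
D_f(p \,\|\, q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx .
% Alpha-divergence (Amari's convention), a one-parameter sub-family:
D_\alpha(p \,\|\, q) = \frac{4}{1-\alpha^2}
  \left(1 - \int p(x)^{\frac{1+\alpha}{2}}\, q(x)^{\frac{1-\alpha}{2}}\, dx\right),
  \qquad \alpha \neq \pm 1,
% with the Kullback--Leibler divergences recovered in the limits
% \alpha \to \pm 1.
```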
Abstract:
In this work, the calcium-induced aggregation of phosphatidylserine liposomes is probed by analyzing both the kinetics of the process and the aggregate morphology. This novel characterization of liposome aggregation involves the use of static and dynamic light-scattering techniques to obtain kinetic exponents and fractal dimensions. For salt concentrations larger than 5 mM, a diffusion-limited aggregation regime is observed and the Brownian kernel properly describes the time evolution of the diffusion coefficient. For slow kinetics, a slightly modified multiple-contact kernel is required. In either case, a time-evolution model based on the numerical resolution of Smoluchowski's equation is proposed in order to establish a theoretical description of the aggregating system. Such a model provides an alternative procedure for determining the dimerization constant, which may supply valuable information about the interaction mechanisms between phospholipid vesicles.
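The numerical resolution of Smoluchowski's coagulation equation mentioned above can be sketched as a truncated system of ODEs. The sketch below uses a constant (size-independent) kernel as a crude stand-in for the Brownian kernel, which is nearly constant for similar-sized particles; the kernel value, truncation size, and time step are illustrative assumptions, not values from the paper.

```python
import numpy as np

K = 1.0      # constant aggregation kernel (stand-in for the Brownian kernel)
kmax = 50    # largest aggregate size retained in the truncation
n = np.zeros(kmax + 1)
n[1] = 1.0   # monodisperse initial condition: only monomers

dt, steps = 1e-3, 2000  # forward-Euler integration up to t = 2
for _ in range(steps):
    dn = np.zeros_like(n)
    for k in range(1, kmax + 1):
        # Gain: coagulation i + j -> k (factor 1/2 avoids double counting).
        gain = 0.5 * K * sum(n[i] * n[k - i] for i in range(1, k))
        # Loss: aggregate of size k merging with any other aggregate.
        loss = K * n[k] * n[1:].sum()
        dn[k] = gain - loss
    n += dt * dn

total_number = float(n[1:].sum())                       # decays as aggregation proceeds
total_mass = float(sum(k * n[k] for k in range(1, kmax + 1)))  # conserved up to truncation
```

For the constant kernel, the total aggregate number has the analytic form N(t) = N0 / (1 + K N0 t / 2), which gives a direct check on the integration; fitting the early-time decay of N(t) is what yields the dimerization constant in such models.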
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
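The conventional lag-one autocorrelation estimator, one of the family of estimators compared in studies like this (the abstract does not list its ten variants), can be written in a few lines:

```python
import numpy as np

def r1_conventional(x):
    """Conventional lag-one autocorrelation estimator: the sum of
    lag-1 cross-products of deviations from the mean, divided by
    the sum of squared deviations."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float((d[:-1] * d[1:]).sum() / (d * d).sum())

# A short alternating series exhibits strong negative serial dependence.
r1 = r1_conventional([1, -1, 1, -1, 1, -1, 1, -1])
print(r1)  # -0.875
```

The variants reviewed in such comparisons typically differ in the divisor, in mean-centering conventions, or in small-sample bias corrections, which is why their Mean Square Error rankings depend on sample size and on the sign of the true autocorrelation.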
Abstract:
The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series prior to assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and illustrated. A simulation study is carried out to explore the bias and precision of the estimators and compare them to an analytical procedure matching the data simulation model. The experimental conditions include two data generation models, several degrees of serial dependence, trend, level and/or slope change. The results suggest that the level and slope change estimates provided by the procedure are unbiased for all levels of serial dependence tested and trend is effectively controlled for. The efficiency of the slope change estimator is acceptable, whereas the variance of the level change estimator may be problematic for highly negatively autocorrelated data series.
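The general idea, removing the baseline trend from the whole series before estimating level and slope change, can be sketched as follows. This is a hypothetical minimal version for a two-phase (A-B) design; the function name and the specific choices (OLS trend fit, mean difference for level change) are illustrative assumptions, not the paper's exact steps.

```python
import numpy as np

def trend_corrected_changes(phase_a, phase_b):
    """Detrend the whole series using the baseline (phase A) trend,
    then estimate level change and slope change between phases.
    A simplified sketch of the general approach, not the exact procedure."""
    a = np.asarray(phase_a, dtype=float)
    b = np.asarray(phase_b, dtype=float)
    # Fit a straight line to the baseline phase only.
    slope_a, intercept_a = np.polyfit(np.arange(len(a)), a, 1)
    # Project that baseline trend over both phases and subtract it.
    t_all = np.arange(len(a) + len(b))
    detrended = np.concatenate([a, b]) - (intercept_a + slope_a * t_all)
    da, db = detrended[:len(a)], detrended[len(a):]
    # After detrending, phase A has (approximately) zero level and slope,
    # so the remaining structure in phase B reflects the treatment effect.
    slope_change = float(np.polyfit(np.arange(len(b)), db, 1)[0])
    level_change = float(db.mean() - da.mean())
    return level_change, slope_change
```

For example, a baseline with slope 0.5 followed by a phase that continues that trend plus a level jump of 3 yields an estimated level change of about 3 and a slope change of about 0.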
Abstract:
This master's thesis is framed within the work to be carried out by the IRATI project. IRATI is an STReP (Specific Targeted Research Project) funded by the European Union under FP7 (the Seventh Framework Programme for Research and Technological Development). The general objective of IRATI is to achieve a deeper understanding and exploration of RINA. The work reported in this thesis is (approximately) the first phase of the design and development of a prototype of RINA over Ethernet inside the Linux kernel, building on and producing free software.
Abstract:
The restricted maximum likelihood is preferred by many to the full maximum likelihood for estimation with variance component and other random coefficient models, because the variance estimator is unbiased. It is shown that this unbiasedness is accompanied in some balanced designs by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood is derived. Estimators of the variance ratio are also studied.
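The trade-off described above, unbiasedness bought at the price of a larger mean squared error, already appears in the simplest variance estimation problem. The toy simulation below (not the paper's cluster-level setting) contrasts the ML variance estimator (divisor n) with the unbiased, REML-type estimator (divisor n - 1) for normal data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma2 = 10, 200_000, 1.0

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2_ml = x.var(axis=1, ddof=0)    # ML: divisor n, biased downward
s2_reml = x.var(axis=1, ddof=1)  # REML-type: divisor n - 1, unbiased

# Monte Carlo mean squared errors; the exact values for normal data are
# (2n - 1) / n^2 = 0.19 for ML and 2 / (n - 1) ~= 0.222 for the unbiased one.
mse_ml = float(np.mean((s2_ml - sigma2) ** 2))
mse_reml = float(np.mean((s2_reml - sigma2) ** 2))
```

Despite its bias, the ML estimator has the smaller MSE here, which is the phenomenon the paper quantifies (and improves upon) for balanced random coefficient designs.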
Abstract:
On the domain of general assignment games (with possible reservation prices) the core is axiomatized as the unique solution satisfying two consistency principles: projection consistency and derived consistency. Also, an axiomatic characterization of the nucleolus is given as the unique solution that satisfies derived consistency and equal maximum complaint between groups. As a consequence, we obtain a geometric characterization of the nucleolus. Maschler et al. (1979) provide a geometrical characterization for the intersection of the kernel and the core of a coalitional game, showing that the allocations that lie in both sets are always the midpoint of a certain bargaining range between each pair of players. In the case of the assignment game, this means that the kernel can be determined as those core allocations where the maximum amount that can be transferred, without leaving the core, from one agent to his or her optimally matched partner equals the maximum amount that he or she can receive from this partner while also remaining inside the core. We now prove that the nucleolus of the assignment game can be characterized by requiring this bisection property to be satisfied not only for optimally matched pairs but also for optimally matched coalitions. Key words: cooperative games, assignment game, core, nucleolus
Abstract:
This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise ratio (SNR) asymptotic performance of the maximum likelihood estimator, derived assuming Gaussian transmitted symbols, is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature amplitude modulation (QAM) or amplitude phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant modulus constellation such as phase-shift keying (PSK) or continuous-phase modulation (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally modulated signals.
Abstract:
This paper analyzes the asymptotic performance of maximum likelihood (ML) channel estimation algorithms in wideband code division multiple access (WCDMA) scenarios. We concentrate on systems with periodic spreading sequences (period larger than or equal to the symbol span) where the transmitted signal contains a code division multiplexed pilot for channel estimation purposes. First, the asymptotic covariances of the training-only, semi-blind conditional maximum likelihood (CML), and semi-blind Gaussian maximum likelihood (GML) channel estimators are derived. Then, these formulas are further simplified assuming randomized spreading and training sequences under the approximation of high spreading factors and a high number of codes. The results provide a useful tool to describe the performance of the channel estimators as a function of basic system parameters such as number of codes, spreading factors, or traffic-to-training power ratio.
Abstract:
The well-known structure of an array combiner along with a maximum likelihood sequence estimator (MLSE) receiver is the basis for the derivation of a space-time processor presenting good properties in terms of co-channel and intersymbol interference rejection. The use of spatial diversity at the receiver front-end together with a scalar MLSE implies a joint design of the spatial combiner and the impulse response for the sequence detector. This is faced using the MMSE criterion under the constraint that the desired user signal power is not cancelled, yielding an impulse response for the sequence detector that is matched to the channel and combiner response. The procedure maximizes the signal-to-noise ratio at the input of the detector and exhibits excellent performance in realistic multipath channels.