865 results for kernel estimator
Abstract:
Interest in applying long-memory models, and ARFIMA models in particular, to economic variables has grown considerably in recent years. The method most widely used to estimate these models in economic analysis is undoubtedly the one proposed by Geweke and Porter-Hudak (GPH), even though recent work has shown that in certain cases this estimator carries a substantial bias. We therefore propose an extension of this estimator, based on the exponential model of Bloomfield, that corrects this bias. We then analyze and compare the behavior of both estimators in moderately sized samples and show that the proposed estimator attains a smaller mean squared error than the GPH estimator.
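As a point of reference, the GPH estimator regresses the log-periodogram on log(4 sin²(λ/2)) at the first m Fourier frequencies; below is a minimal sketch, where the bandwidth choice m = √T is illustrative rather than taken from the paper.

```python
# Minimal sketch of the GPH log-periodogram estimator of the memory
# parameter d; the bandwidth m = sqrt(T) is an illustrative choice.
import numpy as np

def gph_estimate(x, m=None):
    T = len(x)
    if m is None:
        m = int(np.sqrt(T))  # illustrative bandwidth, not the paper's
    # Periodogram at the first m Fourier frequencies lambda_j = 2*pi*j/T
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    I = (np.abs(dft) ** 2) / (2 * np.pi * T)
    # Regress log I(lambda_j) on log(4 sin^2(lambda_j/2)); d = -slope
    X = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(X, np.log(I), 1)[0]
    return -slope
```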
Abstract:
This paper extends the notion of symmetric bilaterally bargained equilibrium introduced by Rochford (1983) to multilateral assignment games. A payoff corresponding to a symmetric multilateral bargaining equilibrium (SMB) is a core imputation that guarantees that every agent is in equilibrium with respect to a bargaining process among all agents, based on what each of them could receive, and use as a threat, in an optimal matching other than the one that has formed. We prove that, for multilateral assignment games, the set of SMB is always nonempty and that, unlike in the bilateral case, it does not always coincide with the kernel (Davis and Maschler, 1965). Finally, we answer a question left open by Rochford (1982) by introducing a kernel-based set that, together with the core, allows us to characterize the set of SMB.
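For readers outside cooperative game theory, the kernel referred to here is the Davis-Maschler kernel; a standard formulation (our notation, not the paper's) is, for a TU game (N, v) and an imputation x,

```latex
s_{ij}(x) = \max_{\substack{S \subseteq N \\ i \in S,\; j \notin S}} \bigl( v(S) - x(S) \bigr),
\qquad
\mathcal{K} = \bigl\{\, x : s_{ij}(x) > s_{ji}(x) \implies x_j = v(\{j\}) \ \ \forall\, i \neq j \,\bigr\},
```

where s_{ij}(x) is the maximum surplus of i against j; for core imputations (the relevant case here) the condition typically reduces to equal maximum surpluses, s_{ij}(x) = s_{ji}(x).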
Abstract:
Productivity losses in flooded rice in the State of Rio Grande do Sul, Brazil, may occur in the Coastal Plains and in the Southern region due to the use of saline water from coastal rivers, ponds and the Laguna dos Patos lagoon, and the sensitivity of the plants varies with their stage of development. The purpose of this research was to evaluate the production of rice grains and its components, spikelet sterility and the phenological development of rice at different levels of salinity applied in different periods of the crop cycle. The experiment was conducted in a greenhouse, in pots filled with 11 dm³ of an Albaqualf. Salinity levels of 0.3 (control), 0.75, 1.5, 3.0 and 4.5 dS m⁻¹ were maintained in the water layer by adding a sodium chloride solution, except for the control, during different periods of rice development: tillering initiation to panicle initiation; tillering initiation to full flowering; tillering initiation to physiological maturity; panicle initiation to full flowering; panicle initiation to physiological maturity; and full flowering to physiological maturity. The number of panicles per pot, the number of spikelets per panicle, the 1,000-kernel weight, the spikelet sterility, the grain yield and the phenology were evaluated. All characteristics were negatively affected, in a quadratic manner, by increased salinity in all periods of rice development. Among the yield components evaluated, the one most closely related to rice grain yield was spikelet sterility.
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function that avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form of the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
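The paper's closed-form recursion is its own contribution and is not reproduced here; as a hedged point of comparison, a common way to linearize MA(1) estimation (in the spirit of Hannan and Rissanen) first fits a long autoregression to proxy the unobserved innovations and then applies ordinary least squares:

```python
# Hedged sketch of a two-step linear estimator for theta in the MA(1)
# model x_t = e_t + theta * e_{t-1} (Hannan-Rissanen style; a reference
# point, not the closed-form procedure proposed in the paper).
import numpy as np

def ma1_linear_estimate(x, p=None):
    x = np.asarray(x, dtype=float) - np.mean(x)
    T = len(x)
    if p is None:
        p = int(np.floor(np.log(T) ** 2))  # illustrative long-AR order
    # Step 1: long AR(p) by least squares to approximate the innovations
    Y = x[p:]
    Z = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    phi = np.linalg.lstsq(Z, Y, rcond=None)[0]
    e = Y - Z @ phi                      # residual proxy for e_t
    # Step 2: regress x_t on the lagged residual; the slope estimates theta
    theta = np.linalg.lstsq(e[:-1, None], Y[1:], rcond=None)[0][0]
    return theta
```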
Abstract:
One of the most important statistical tools for monitoring and analyzing the short-term evolution of economic activity is the availability of estimates of the quarterly evolution of the components of GDP, on both the supply and the demand side. The need for this information with a short publication lag makes it essential to use temporal disaggregation methods that break annual information down to quarterly frequency. The most widely applied method, since it solves this problem very elegantly within a statistical optimal-estimator framework, is the Chow-Lin method. However, this method does not guarantee that the quarterly GDP estimates obtained from the supply side and from the demand side coincide, making it necessary to apply some reconciliation method afterwards. In this paper we develop a multivariate extension of the Chow-Lin method that solves the problem of estimating the quarterly values optimally, subject to a set of restrictions. One of the potential applications of this method, which we have called the restricted Chow-Lin method, is precisely the joint estimation of quarterly values for each of the components of GDP, on both the demand and the supply side, conditional on the two quarterly GDP estimates being equal, thus avoiding the need to apply reconciliation methods afterwards.
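For context, the univariate Chow-Lin estimator that the paper extends can be stated as follows: with y the unobserved quarterly series, y_a = Cy its annual aggregates (C the temporal aggregation matrix), X the quarterly indicators and Cov(u) = V in the model y = Xβ + u, the best linear unbiased disaggregation is

```latex
\hat{\beta} = \bigl( X' C' (C V C')^{-1} C X \bigr)^{-1} X' C' (C V C')^{-1} y_a ,
\qquad
\hat{y} = X \hat{\beta} + V C' (C V C')^{-1} \bigl( y_a - C X \hat{\beta} \bigr).
```

The paper's restricted, multivariate version adds constraints (equality of the supply-side and demand-side quarterly GDP estimates) to this optimal-estimation problem.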
Abstract:
In this paper we study, taking the economic model of crime (Becker, 1968; Ehrlich, 1973) as theoretical reference, the socioeconomic and demographic determinants of crime in Spain, paying attention to the role of provincial peculiarities. We estimate a crime equation using a panel dataset of Spanish provinces (NUTS3) for the period 1993 to 1999, employing the GMM-system estimator. Empirical results suggest that the lagged crime rate and the clear-up rate are correlated with all the typologies of crime considered. Property crimes are better explained by socioeconomic variables (GDP per capita, GDP growth rate and percentage of the population with a high school or university degree), while demographic factors reveal important and significant influences, in particular for crimes against the person. These results are obtained using an instrumental variable approach that takes advantage of the dynamic properties of our dataset to control both for measurement errors in crime data and for joint endogeneity of the explanatory variables.
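For reference, a generic dynamic panel specification of the kind estimated by system GMM (notation ours, not the authors') is

```latex
c_{it} = \alpha\, c_{i,t-1} + \beta' x_{it} + \eta_i + \varepsilon_{it},
```

where the GMM-system estimator (Arellano and Bover, 1995; Blundell and Bond, 1998) stacks first-differenced equations instrumented by lagged levels with level equations instrumented by lagged differences, exploiting moment conditions such as

```latex
\mathbb{E}\bigl[ c_{i,t-s}\, \Delta\varepsilon_{it} \bigr] = 0 \ \ (s \ge 2),
\qquad
\mathbb{E}\bigl[ \Delta c_{i,t-1}\, (\eta_i + \varepsilon_{it}) \bigr] = 0 .
```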
Abstract:
Uniform-price assignment games are introduced as those assignment markets with the core reduced to a segment. In these games, for all active agents, competitive prices are uniform although products may be non-homogeneous. A characterization in terms of the assignment matrix is given. The only assignment markets where all submarkets are uniform are the Böhm-Bawerk horse markets. We prove that for uniform-price assignment games the kernel, or set of symmetrically pairwise bargained allocations, either coincides with the core or reduces to the nucleolus.
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered in estimating the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between approximate and exact medoid solutions, but they differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which the single realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
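A schematic of the DKM workflow with the Local Error Model, under simplifying assumptions: plain KMeans on the approximate responses stands in for the distance-based multidimensional scaling and clustering step, and exact_solver(i) is a hypothetical callable returning the exact (expensive) response of realization i.

```python
# Sketch of DKM + Local Error Model: cluster realizations by their
# approximate responses, run the exact model only on medoids, and shift
# each cluster member by its medoid's error. Names are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def dkm_local_error(approx_resp, exact_solver, n_clusters=10, seed=0):
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(approx_resp)
    corrected = np.array(approx_resp, dtype=float)
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        # Medoid: the member closest to the cluster centroid
        d = np.linalg.norm(approx_resp[members] - km.cluster_centers_[k], axis=1)
        medoid = members[np.argmin(d)]
        # Exact (expensive) solution only for the medoid
        err = exact_solver(medoid) - approx_resp[medoid]
        # Local Error Model: correct every member by the medoid error
        corrected[members] += err
    return corrected
```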
Abstract:
Alzheimer's disease (AD) disrupts functional connectivity in distributed cortical networks. We analyzed changes in the S-estimator, a measure of multivariate intraregional synchronization, in electroencephalogram (EEG) source space in 15 mild AD patients versus 15 age-matched controls to evaluate its potential as a marker of AD progression. All participants underwent two clinical evaluations and two EEG recording sessions, at diagnosis and after one year. The main effect of AD was hyposynchronization in the medial temporal and frontal regions and relative hypersynchronization in the posterior cingulate, precuneus, cuneus, and parietotemporal cortices. However, the S-estimator did not change over time in either group. This result motivated an analysis of rapidly progressing versus slowly progressing AD patients. Rapidly progressing AD patients showed a significant reduction in synchronization over time, manifest in the left frontotemporal cortex. Thus, the evolution of source EEG synchronization over time is correlated with the rate of disease progression and should be considered as a cost-effective AD biomarker.
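For reference, a standard formulation of the S-estimator computes one minus the normalized entropy of the eigenvalue spectrum of the channel correlation matrix; below is a minimal sketch (our implementation of that formula, not the authors' source-space pipeline).

```python
# Minimal sketch of the S-estimator: 1 minus the normalized entropy of
# the eigenvalue spectrum of the correlation matrix of the channels.
# x is an (n_channels x n_samples) array; S near 1 means full
# synchronization, S near 0 means none.
import numpy as np

def s_estimator(x):
    C = np.corrcoef(x)                  # n_channels x n_channels
    lam = np.linalg.eigvalsh(C)
    lam = np.clip(lam, 1e-12, None)     # guard against log(0)
    lam = lam / lam.sum()               # normalized spectrum
    n = C.shape[0]
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(n)
```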
Abstract:
We investigate the depinning transition occurring in dislocation assemblies. In particular, we consider the cases of regularly spaced pileups and low-angle grain boundaries interacting with a disordered stress landscape provided by solute atoms, or by other immobile dislocations present in nonactive slip systems. Using linear elasticity, we compute the stress originated by small deformations of these assemblies and the corresponding energy cost in two and three dimensions. Contrary to the case of isolated dislocation lines, which are usually approximated as elastic strings with an effective line tension, the deformations of a dislocation assembly cannot be described by local elastic interactions with a constant tension or stiffness. A nonlocal elastic kernel results as a consequence of long-range interactions between dislocations. In light of this result, we revise statistical depinning theories of dislocation assemblies and compare the theoretical results with numerical simulations and experimental data.
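Schematically, and in our notation rather than the paper's: for a small deformation h(x) of the assembly, the elastic energy cost takes the Fourier-space form

```latex
E[h] \simeq \frac{1}{2} \int \frac{dq}{2\pi}\, K(q)\, |h(q)|^2 ,
\qquad
K(q) \propto |q| \ \text{(assembly, nonlocal)}
\quad \text{vs.} \quad
K(q) = \Gamma q^2 \ \text{(single line with tension } \Gamma\text{)},
```

so at long wavelengths the restoring force is stiffer than a local line-tension approximation would predict, which is what changes the depinning behavior relative to isolated lines.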
Abstract:
We point out that using the heat kernel on a cone to compute the first quantum correction to the entropy of Rindler space does not yield the correct temperature dependence. In order to obtain the physics at arbitrary temperature one must compute the heat kernel in a geometry with different topology (without a conical singularity). This is done in two ways, which are shown to agree with computations performed by other methods. Also, we discuss the ambiguities in the regularization procedure and their physical consequences.
Abstract:
We propose a criterion for the validity of semiclassical gravity (SCG) which is based on the stability of the solutions of SCG with respect to quantum metric fluctuations. We pay special attention to the two-point quantum correlation functions for the metric perturbations, which contain both intrinsic and induced fluctuations. These fluctuations can be described by the Einstein-Langevin equation obtained in the framework of stochastic gravity. Specifically, the Einstein-Langevin equation yields stochastic correlation functions for the metric perturbations which agree, to leading order in the large N limit, with the quantum correlation functions of the theory of gravity interacting with N matter fields. The homogeneous solutions of the Einstein-Langevin equation are equivalent to the solutions of the perturbed semiclassical equation, which describe the evolution of the expectation value of the quantum metric perturbations. The information on the intrinsic fluctuations, which are connected to the initial fluctuations of the metric perturbations, can also be retrieved entirely from the homogeneous solutions. However, the induced metric fluctuations proportional to the noise kernel can only be obtained from the Einstein-Langevin equation (the inhomogeneous term). These equations exhibit runaway solutions with exponential instabilities. A detailed discussion about different methods to deal with these instabilities is given. We illustrate our criterion by showing explicitly that flat space is stable and a description based on SCG is a valid approximation in that case.
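In the stochastic gravity framework referred to here, the Einstein-Langevin equation has the schematic form (standard notation from the literature, not reproduced from the paper):

```latex
G_{ab}[g+h] = 8\pi G \bigl( \langle \hat{T}_{ab}[g+h] \rangle + \xi_{ab} \bigr),
\qquad
\langle \xi_{ab}(x)\, \xi_{cd}(y) \rangle = N_{abcd}(x,y)
= \tfrac{1}{2} \bigl\langle \{ \hat{t}_{ab}(x),\, \hat{t}_{cd}(y) \} \bigr\rangle ,
```

with \( \hat{t}_{ab} = \hat{T}_{ab} - \langle \hat{T}_{ab} \rangle \): the Gaussian source \( \xi_{ab} \) has zero mean and correlations given by the noise kernel mentioned in the abstract, which is what sources the induced metric fluctuations.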
Abstract:
Preface

The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research.

This thesis consists of three parts. Each is written as an independent, self-contained article. At the same time, the questions answered by the second and third parts arise naturally from the issues investigated and the results obtained in the first.

The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance process. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, chosen as a general representative of the stock asset class.

Hence, the next question is what jump process to use to model returns of the S&P500. Within the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently in use for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, so either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data.

The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any other ground for comparison, there is no reason to be sure that our parameter estimates coincide with the true parameters of the models. The conclusion of the second chapter provides one more reason to perform such a test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability.

Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises naturally: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional one. It turns out that the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
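Schematically, the continuous ECF principle underlying the thesis minimizes a weighted integrated squared distance between the empirical characteristic function of the data and the model characteristic function. The sketch below is a univariate illustration with hypothetical names (model_cf, the Gaussian weight, the quadrature grid are our choices); the thesis itself works with the joint, bi-dimensional unconditional characteristic function.

```python
# Univariate sketch of a continuous ECF estimator: minimize the weighted
# integrated squared distance between the empirical CF and a model CF
# phi(u; theta). model_cf, the weight exp(-u^2) and the grid are
# illustrative assumptions, not the thesis's exact choices.
import numpy as np
from scipy.optimize import minimize

def ecf_objective(theta, data, model_cf, u_grid):
    # Empirical characteristic function at each point of the grid
    emp = np.mean(np.exp(1j * np.outer(u_grid, data)), axis=1)
    diff = emp - model_cf(u_grid, theta)
    w = np.exp(-u_grid ** 2)            # integrable weight function
    return np.trapz(np.abs(diff) ** 2 * w, u_grid)

def ecf_estimate(data, model_cf, theta0):
    u = np.linspace(-10, 10, 401)       # quadrature grid for the integral
    res = minimize(ecf_objective, theta0, args=(data, model_cf, u),
                   method="Nelder-Mead")
    return res.x
```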
Abstract:
In the domain of bilateral assignment games, we present an axiomatization of the nucleolus as the unique solution satisfying the properties of consistency with respect to the derived game defined by Owen (1992) and monotonicity of the sectors' complaints with respect to their cardinality. As a consequence, we obtain a geometric characterization of the nucleolus by means of a bisection property stronger than the one satisfied by the points of the kernel (Maschler et al., 1979).