994 results for parametric function


Relevance:

70.00%

Publisher:

Abstract:

Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the serial-correlation "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns, does not depend on any parametric function representing preferences, is suitable for testing different preference specifications or investigating intertemporal substitution puzzles, and can serve as a basis for constructing an estimator of the risk-free rate. For post-war data, our estimator is close to unity most of the time, yielding an average annual real discount rate of 2.46%. In formal testing, we cannot reject standard preference specifications used in the literature, and estimates of the relative risk-aversion coefficient are between 1 and 2 and statistically equal to unity. Using our SDF estimator, we find little sign of the equity-premium puzzle for the U.S.
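
For orientation, the Pricing Equation referenced in this and the related abstracts below is the standard asset-pricing identity (a textbook form, not a restatement of the paper's derivation): the SDF M_{t+1} prices every gross return R_{i,t+1},

\[
  E_t\!\left[ M_{t+1}\, R_{i,t+1} \right] = 1, \qquad i = 1, \dots, N .
\]

Taking logs, ln M_{t+1} enters every asset's log return, which is the sense in which it is a serial-correlation "common feature" across assets.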

Relevance:

60.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

60.00%

Publisher:

Abstract:

The determination of the intersection curve between Bézier surfaces may be seen as the composition of two separate problems: determining initial points and tracing the intersection curve from these points. A Bézier surface is represented by a parametric function (a polynomial in two variables) that maps points from the two-dimensional parametric space into three-dimensional space. In this article, an algorithm is proposed to determine the initial points of the intersection curve of Bézier surfaces, based on the solution of polynomial systems with the Projected Polyhedral Method, followed by a method for tracing the intersection curves (a Marching Method with differential equations). In order to allow the use of the Projected Polyhedral Method, the equations of the system must be represented in terms of the Bernstein basis; towards this goal, a robust and reliable algorithm is proposed to exactly transform a multivariable polynomial written in the power basis into one written in the Bernstein basis.
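
As a minimal illustration of the power-to-Bernstein conversion mentioned above (a univariate sketch only, based on the standard identity t^k = sum_{i>=k} [C(i,k)/C(n,k)] B_{i,n}(t); the article's algorithm handles multivariable polynomials):

```python
from math import comb

def power_to_bernstein(a):
    """Convert coefficients a[k] of p(t) = sum_k a[k] * t**k (power basis)
    into coefficients b[i] of p(t) = sum_i b[i] * B_{i,n}(t) (Bernstein basis),
    where n = len(a) - 1."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

def bernstein_eval(b, t):
    """Evaluate sum_i b[i] * C(n,i) * t**i * (1-t)**(n-i)."""
    n = len(b) - 1
    return sum(b[i] * comb(n, i) * t**i * (1 - t)**(n - i)
               for i in range(n + 1))

# Sanity check: p(t) = 1 - 2t + 3t^2 evaluated in both bases at t = 0.4
a = [1.0, -2.0, 3.0]
b = power_to_bernstein(a)
assert abs(bernstein_eval(b, 0.4) - (1 - 2 * 0.4 + 3 * 0.4 ** 2)) < 1e-12
```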

Relevance:

60.00%

Publisher:

Abstract:

Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) mimicking portfolio which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences, making it suitable for testing different preference specifications or investigating intertemporal substitution puzzles.

Relevance:

60.00%

Publisher:

Abstract:

Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences. The techniques discussed in this paper were applied to two relevant issues in macroeconomics and finance: the first asks what type of parametric preference representation could be validated by asset-return data, and the second asks whether or not our SDF estimator can price returns in an out-of-sample forecasting exercise. In formal testing, we cannot reject standard preference specifications used in the macro/finance literature. Estimates of the relative risk-aversion coefficient are between 1 and 2, and statistically equal to unity. We also show that our SDF proxy prices the returns of stocks with a higher capitalization level reasonably well, whereas it has some difficulty pricing stocks with a lower level of capitalization.

Relevance:

60.00%

Publisher:

Abstract:

To estimate genetic parameters for test-day milk yield (TDMY), the first 2,440 lactations of dairy Gir cows with calvings recorded between 1990 and 2005 were used. Test-day milk yields were grouped into ten monthly classes and analyzed with a random regression model (RRM) that included additive genetic, permanent environmental and residual random effects and, as fixed effects, the contemporary group (CG), the covariate age of cow at calving (linear and quadratic effects) and the average lactation curve of the population. The additive genetic and permanent environmental effects were modeled with the Wilmink (WIL) and Ali and Schaeffer (AS) functions. Residual variances were modeled with 1, 4, 6 or 10 classes. Contemporary groups were defined as herd-year-season of the test day, with at least three animals each. The tests indicated that the model with four residual variance classes using the AS parametric function provided the best fit to the data. Heritability estimates ranged from 0.21 to 0.33 for the AS function and from 0.17 to 0.30 for WIL, and were higher in the first half of lactation. Genetic correlations between test-day milk yields were positive and high for adjacent test days and decreased as the distance between test days increased. For the best model, breeding values were estimated for milk yield accumulated up to 305 days and, for partial periods of the lactation, were obtained as the average of the breeding values predicted within each period. These breeding values were compared, by rank correlation, with the breeding value predicted for 305-day cumulative yield by the traditional method. The correlations between breeding values indicated that the ranking of animals may differ among the criteria studied.
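
For reference, the two parametric lactation functions named above are commonly written as follows (a sketch using their usual textbook forms and hypothetical parameter values; the exact parameterization adopted in the study may differ):

```python
import numpy as np

def wilmink(t, a, b, c, k=0.05):
    """Wilmink lactation function, commonly written as
    y(t) = a + b*exp(-k*t) + c*t, with k often fixed (e.g. 0.05)."""
    t = np.asarray(t, dtype=float)
    return a + b * np.exp(-k * t) + c * t

def ali_schaeffer(t, a, b, c, d, e):
    """Ali and Schaeffer lactation function, commonly written with
    gamma = t/305: y(t) = a + b*gamma + c*gamma**2
                        + d*ln(305/t) + e*ln(305/t)**2."""
    t = np.asarray(t, dtype=float)
    gamma = t / 305.0
    log_term = np.log(305.0 / t)
    return a + b * gamma + c * gamma**2 + d * log_term + e * log_term**2

# Example: evaluate both curves at ten hypothetical monthly test days
days = np.arange(15, 305, 30)
print(wilmink(days, a=20.0, b=-8.0, c=-0.03))
print(ali_schaeffer(days, 15.0, 10.0, -12.0, 2.0, -0.5))
```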

Relevance:

60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

60.00%

Publisher:

Abstract:

Given the importance of Guzera breeding programs for milk production in the tropics, the objective of this study was to compare alternative random regression models for the estimation of genetic parameters and prediction of breeding values. Test-day milk yield records (TDR) were collected monthly, with a maximum of 10 measurements. The database included 20,524 first-lactation records from 2,816 Guzera cows. TDR data were analyzed by random regression models (RRM) considering additive genetic, permanent environmental and residual effects as random, and the effects of contemporary group (CG), calving age as a covariate (linear and quadratic effects) and the mean lactation curve as fixed. The additive genetic and permanent environmental effects were modeled by RRM using the Wilmink, Ali and Schaeffer and cubic B-spline functions, as well as Legendre polynomials. Residual variances were considered as heterogeneous classes, grouped differently according to the model used. Multi-trait analyses using finite-dimensional models (FDM) for test-day milk records and a single-trait model for 305-day milk yield (P305) using the restricted maximum likelihood method were also carried out as further comparisons. According to the statistical criteria adopted, the best RRM was the one that used the cubic B-spline function with five random regression coefficients for the additive genetic and permanent environmental effects. However, the models using the Ali and Schaeffer function or Legendre polynomials of second and fifth order for the additive genetic and permanent environmental effects, respectively, can be adopted, as little variation was observed in the genetic parameter estimates compared to those estimated by the models using the B-spline function. Therefore, due to the lower complexity of the (co)variance estimation, the model using Legendre polynomials represented the best option for the genetic evaluation of the Guzera lactation records. An increase of 3.6% in the accuracy of the estimated breeding values was observed when using RRM. Animal rankings were very similar regardless of the RRM used to predict breeding values. Considering P305, the results indicated only small to medium differences in the animals' ranking based on breeding values predicted by the conventional model or by RRM. Therefore, the sum of all RRM-predicted breeding values along the lactation period (RRM305) can be used as a selection criterion for 305-day milk production. (c) 2014 Elsevier B.V. All rights reserved.
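
As a brief sketch of how Legendre-polynomial covariates for such random regression test-day models are typically built (illustrative only; the scaling, normalization and the way "order" is counted may differ from the study):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, order, t_min=5, t_max=305):
    """Map days in milk (dim) to [-1, 1] and evaluate the Legendre
    polynomials P_0, ..., P_order there, returning one column per polynomial.
    Illustrative sketch of the usual construction in test-day models."""
    dim = np.asarray(dim, dtype=float)
    x = -1.0 + 2.0 * (dim - t_min) / (t_max - t_min)   # map to [-1, 1]
    # Column j holds P_j(x); the identity rows select each P_j in legval.
    return np.column_stack([
        legendre.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)
    ])

# Example: covariates of degree 2 (additive genetic) and 5 (permanent environment)
test_days = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285])
Z_add = legendre_covariates(test_days, order=2)
Z_pe  = legendre_covariates(test_days, order=5)
print(Z_add.shape, Z_pe.shape)   # (10, 3) (10, 6)
```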

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we focus on a model for two types of tumors. Tumor development can be described by four types of death rates and four tumor transition rates. We present a general semi-parametric model to estimate the tumor transition rates based on data from survival/sacrifice experiments. In the model, we assume that the tumor transition rates are proportional to a common parametric function, but make no assumption about the death rates from any state. We derive the likelihood function of the data observed in such an experiment and an EM algorithm that simplifies the estimation procedure. This article extends work on semi-parametric models for one type of tumor (see Portier and Dinse, and Dinse) to two types of tumors.
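
A schematic way to write the proportionality assumption described above (notation chosen here for illustration, not taken from the paper) is

\[
  \lambda_j(t) = \theta_j \, g(t; \beta), \qquad j = 1, \dots, 4,
\]

where \lambda_j is the j-th tumor transition rate, g(\cdot;\beta) is the common parametric function shared by all transitions, and \theta_j are proportionality constants; the death rates from each state are left completely unspecified.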

Relevance:

60.00%

Publisher:

Abstract:

In applied work, economists often seek to relate a given response variable y to some causal parameter mu* associated with it. This parameter usually represents a summarization, based on some explanatory variables, of the distribution of y, such as a regression function, and treating it as a conditional expectation is central to its identification and estimation. However, the interpretation of mu* as a conditional expectation breaks down if some or all of the explanatory variables are endogenous. This is not a problem when mu* is modelled as a parametric function of explanatory variables, because it is well known how instrumental variables techniques can be used to identify and estimate mu*. In contrast, handling endogenous regressors in nonparametric models, where mu* is regarded as fully unknown, presents difficult theoretical and practical challenges. In this paper we consider an endogenous nonparametric model based on a conditional moment restriction. We investigate identification-related properties of this model when the unknown function mu* belongs to a linear space. We also investigate underidentification of mu* along with the identification of its linear functionals. Several examples are provided in order to develop intuition about identification and estimation for endogenous nonparametric regression and related models.
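
In the abstract's notation, a conditional moment restriction of the kind referred to here is typically written as (a standard schematic form; the paper's exact setup may differ)

\[
  E\!\left[\, y - \mu^*(x) \mid z \,\right] = 0,
\]

where x collects the (possibly endogenous) explanatory variables and z the instruments; when mu* is parametric this reduces to the familiar instrumental variables moment conditions.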

Relevance:

60.00%

Publisher:

Abstract:

Researchers have long recognized that the non-random sorting of individuals into groups generates correlation between individual and group attributes that is likely to bias naive estimates of both individual and group effects. This paper proposes a non-parametric strategy for identifying these effects in a model that allows for both individual and group unobservables, applying this strategy to the estimation of neighborhood effects on labor market outcomes. The first part of this strategy is guided by a robust feature of the equilibrium in the canonical vertical sorting model of Epple and Platt (1998): that there is a monotonic relationship between neighborhood housing prices and neighborhood quality. This implies that, under certain conditions, a non-parametric function of neighborhood housing prices serves as a suitable control function for the neighborhood unobservable in the labor market outcome regression. The second part of the proposed strategy uses aggregation to develop suitable instruments for both exogenous and endogenous group attributes. Instrumenting for each individual's observed neighborhood attributes with the average neighborhood attributes of a set of observationally identical individuals eliminates the portion of the variation in neighborhood attributes due to sorting on unobserved individual attributes. The neighborhood effects application is based on confidential microdata from the 1990 Decennial Census for the Boston MSA. The results imply that the direct effects of geographic proximity to jobs, neighborhood poverty rates, and average neighborhood education are substantially larger than the conditional correlations identified using OLS, although the net effect of neighborhood quality on labor market outcomes remains small. These findings are robust across a wide variety of specifications and robustness checks.
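
Schematically, the control-function idea described above can be written as (notation chosen here for illustration, not taken from the paper)

\[
  y_{ij} = x_i'\beta + z_j'\gamma + f(p_j) + \varepsilon_{ij},
\]

where y_{ij} is the labor market outcome of individual i in neighborhood j, x_i are individual attributes, z_j are neighborhood attributes, p_j is the neighborhood housing price, and the non-parametric function f(·) absorbs the neighborhood unobservable by virtue of the monotonic price-quality relationship.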

Relevance:

60.00%

Publisher:

Abstract:

The objective of this paper is to estimate technical efficiency in retailing and the influence of inventory investment, wage levels, and firm age on this efficiency. As output we use the supermarket chains' sales volume, calculated by isolating the retailer price effect from sales revenue. This output allows us to estimate a strictly technical concept of efficiency. The methodology is based on the estimation of a stochastic parametric function. The empirical analysis, applied to panel data on a sample of 42 supermarket chains between 2000 and 2002, shows that inventory investment and wage level have an impact on technical efficiency. In comparison, the effect of these factors on efficiency calculated from a monetary output (sales revenue) shows some differences, which could be due to aspects related to product prices.
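
For context, technical efficiency based on a stochastic parametric (frontier) function is commonly estimated in a form like the following (a textbook formulation, not necessarily the paper's exact specification)

\[
  \ln y_{it} = f(x_{it}; \beta) + v_{it} - u_{it}, \qquad u_{it} \ge 0,
\]

where v_{it} is symmetric noise, u_{it} captures technical inefficiency, and technical efficiency is recovered as TE_{it} = exp(-u_{it}).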

Relevance:

60.00%

Publisher:

Abstract:

This thesis addresses Batch Reinforcement Learning methods in Robotics. This sub-class of Reinforcement Learning has shown promising results and has been the focus of recent research. Three contributions are proposed that aim to extend state-of-the-art methods, allowing for a faster and more stable learning process, as required for learning in Robotics. The Q-learning update rule is widely applied, since it allows learning without a model of the environment. However, this update rule is transition-based and does not take advantage of the underlying episodic structure of the collected batch of interactions. The Q-Batch update rule is proposed in this thesis to process experiences along the trajectories collected in the interaction phase. This allows a faster propagation of obtained rewards and penalties, resulting in faster and more robust learning. Non-parametric function approximations, such as Gaussian Processes, are explored. This type of approximator allows encoding prior knowledge about the latent function in the form of kernels, providing a higher level of flexibility and accuracy. The application of Gaussian Processes in Batch Reinforcement Learning showed higher performance in learning tasks than other function approximations used in the literature. Lastly, in order to extract more information from the experiences collected by the agent, model-learning techniques are incorporated to learn the system dynamics. In this way, it is possible to augment the set of collected experiences with experiences generated through planning with the learned models. Experiments were carried out mainly in simulation, with some tests carried out on a physical robotic platform. The obtained results show that the proposed approaches are able to outperform classical Fitted Q Iteration.
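
For reference, the standard transition-based Q-learning update mentioned above, together with a purely illustrative trajectory-based sweep (a sketch of the general idea only, not the thesis's exact Q-Batch rule):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Standard transition-based Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def backward_episode_update(Q, episode, alpha=0.1, gamma=0.99):
    """Illustrative trajectory-based variant: sweep one collected episode
    backwards so that late rewards propagate in a single pass over the batch."""
    for (s, a, r, s_next) in reversed(episode):
        q_learning_update(Q, s, a, r, s_next, alpha, gamma)

# Toy usage: 5 states, 2 actions, one collected episode of (s, a, r, s') tuples
Q = np.zeros((5, 2))
episode = [(0, 1, 0.0, 1), (1, 0, 0.0, 2), (2, 1, 1.0, 3)]
backward_episode_update(Q, episode)
```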

Relevance:

60.00%

Publisher:

Abstract:

Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge about the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme that uses a deep-learning-based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
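
For orientation, the variational reconstruction map described above typically solves a problem of the form (a standard formulation, included only to fix ideas)

\[
  \hat{x} \in \arg\min_{x} \; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 + \lambda\, R(x),
\]

where A models the acquisition process, y is the measured data, R encodes prior knowledge about the solution, and \lambda > 0 sets the regularization strength; in the learning approach the reconstruction map x = f_\theta(y) is instead a parametric function whose parameters \theta are fit to a large dataset.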