888 results for Reproducing kernel


Relevance:

10.00%

Publisher:

Abstract:

We build a stochastic discount factor (SDF) using information on US domestic financial data only, and provide evidence that it accounts for foreign-market stylized facts that escape SDFs generated by consumption-based models. By interpreting our SDF as the projection of the pricing kernel from a fully specified model onto the space of returns, our results indicate that a model that accounts for the behavior of domestic assets goes a long way toward accounting for the behavior of foreign asset prices. In our tests, we address predictability, a defining feature of the Forward Premium Puzzle (FPP), by using instruments that are known to forecast excess returns in the moment restrictions associated with Euler equations in both the equity and the foreign markets.
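
For reference, the moment restrictions mentioned above take the standard conditional Euler-equation form; a generic sketch (the notation below is illustrative, not the paper's):

    E_t[ m_{t+1} R_{t+1} ] = 1,

where m_{t+1} is the SDF and R_{t+1} a gross return. Interacting the pricing errors with instruments z_t known to forecast excess returns (for example, the forward premium) yields unconditional restrictions that can be estimated and tested by GMM:

    E[ ( m_{t+1} R_{t+1} - 1 ) \otimes z_t ] = 0.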

Relevance:

10.00%

Publisher:

Abstract:

We study the asset pricing implications of an endowment economy when agents can default on contracts that would leave them otherwise worse off. We specialize and extend the environment studied by Kocherlakota (1995) and Kehoe and Levine (1993) to make it comparable to standard studies of asset pricing. We completely characterize efficient allocations for several special cases. We introduce a competitive equilibrium with complete markets and with endogenous solvency constraints. These solvency constraints are such as to prevent default, at the cost of reduced risk sharing. We show a version of the classical welfare theorems for this equilibrium definition. We characterize the pricing kernel and compare it with the one for economies without participation constraints: interest rates are lower and risk premia can be bigger depending on the covariance of the idiosyncratic and aggregate shocks. Quantitative examples show that for reasonable parameter values the relevant marginal rates of substitution fall within the Hansen-Jagannathan bounds.
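
For context, the Hansen-Jagannathan bounds cited at the end are the standard volatility bounds on admissible pricing kernels; in generic notation (illustrative, not the paper's),

    \sigma(m) / E[m]  >=  | E[R^e] | / \sigma(R^e),

that is, the ratio of the standard deviation of the pricing kernel m to its mean must be at least as large as the Sharpe ratio of any excess return R^e it prices.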

Relevance:

10.00%

Publisher:

Abstract:

The Forward Premium Puzzle (FPP) is the name given to the empirical observation of a negative relation between future changes in spot rates and the forward premium. Modeling this forward bias as a risk premium and under weak assumptions on the behavior of the pricing kernel, we characterize the potential bias that is present in the regressions where the FPP is observed, and we identify the necessary and sufficient conditions that the pricing kernel has to satisfy to account for the predictability of exchange rate movements. Next, we estimate the pricing kernel applying two methods: (i) one, due to Araújo et al. (2005), that exploits the fact that the pricing kernel is a serial-correlation common feature of asset prices, and (ii) a traditional principal component analysis used as a procedure to generate a statistical factor model. Then, using in-sample and out-of-sample exercises, we are able to show that the same kernel that explains the Equity Premium Puzzle (EPP) accounts for the FPP in all our data sets. This suggests that the quest for an economic model that generates a pricing kernel which solves the EPP may double its prize by simultaneously accounting for the FPP.
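
The regression in which the FPP appears is the classic forward-premium regression of the change in the log spot rate on the forward premium; a sketch in textbook notation (not the paper's):

    s_{t+1} - s_t = \alpha + \beta ( f_t - s_t ) + \epsilon_{t+1}.

Uncovered interest parity under risk neutrality would imply \beta = 1, whereas estimated slopes are typically negative; modeling the bias as a risk premium amounts to characterizing what the covariance between the pricing kernel and exchange-rate changes must be for \beta < 1 (and indeed \beta < 0) to arise.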

Relevance:

10.00%

Publisher:

Abstract:

This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite-sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling price durations of stocks traded on the New York Stock Exchange.
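
As an illustration of the kind of statistic such a test can be based on (a sketch in generic notation, not necessarily the paper's exact construction): with \hat f_h the kernel estimate of the duration density (using a fixed or gamma kernel) and f(.; \hat\theta) the density implied by the fitted conditional duration model, one may consider an integrated squared distance

    T_n = \int ( \hat f_h(x) - f(x; \hat\theta) )^2 dx,

whose limiting behaviour can be obtained via the functional delta method; the same idea applies with baseline hazard rates in place of densities.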

Relevance:

10.00%

Publisher:

Abstract:

This paper provides a systematic and unified treatment of the developments in the area of kernel estimation in econometrics and statistics. Both estimation and hypothesis testing issues are discussed for nonparametric and semiparametric regression models. A discussion of the choice of window width is also presented.
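
For concreteness, the canonical object in this literature is the Nadaraya-Watson kernel regression estimator; in standard notation,

    \hat m(x) = \sum_{i=1}^n K( (x - X_i)/h ) Y_i  /  \sum_{i=1}^n K( (x - X_i)/h ),

where K is a kernel function and h is the window width (bandwidth) whose choice the survey discusses.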

Relevance:

10.00%

Publisher:

Abstract:

We study an intertemporal asset pricing model in which a representative consumer maximizes expected utility derived from both the ratio of his consumption to some reference level and this level itself. If the reference consumption level is assumed to be determined by past consumption levels, the model generalizes the usual habit formation specifications. When the reference level growth rate is made dependent on the market portfolio return and on past consumption growth, the model mixes a consumption CAPM with habit formation together with the CAPM. It therefore provides, in an expected utility framework, a generalization of the non-expected recursive utility model of Epstein and Zin (1989). When we estimate this specification with aggregate per capita consumption, we obtain economically plausible values of the preference parameters, in contrast with the habit formation or the Epstein-Zin cases taken separately. All tests performed with various preference specifications confirm that the reference level enters significantly in the pricing kernel.
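
As an illustration of the kind of preference specification described (a sketch only; the functional forms below are assumptions, not the paper's exact specification), the representative consumer maximizes

    E_0 \sum_t \beta^t u( C_t / X_t , X_t ),

where X_t is the reference consumption level. Letting X_t depend only on past consumption (for instance X_t = C_{t-1}^{\phi} X_{t-1}^{1-\phi}) recovers a habit-formation specification, while letting its growth rate also depend on the market portfolio return mixes the consumption CAPM with the CAPM, as described above.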

Relevance:

10.00%

Publisher:

Abstract:

This paper derives the spectral density function of aggregated long memory processes in light of the aliasing effect. The results differ from previous analyses in the literature, and a small simulation exercise provides evidence in our favour. The main result points to flow aggregates from long memory processes being less biased than stock aggregates, although both retain the degree of long memory. This result is illustrated with the daily US Dollar/French Franc exchange rate series.
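
For context, a long memory process with memory parameter d has a spectral density that diverges at the origin,

    f(\lambda) ~ c \lambda^{-2d}  as  \lambda -> 0+,

and temporal aggregation folds higher frequencies back onto lower ones (aliasing), so the spectral density of the aggregated series is a sum of f evaluated at the folded frequencies. This sketch uses generic notation; the paper's point is that the folding affects flow and stock aggregates differently while both retain the memory parameter d.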

Relevance:

10.00%

Publisher:

Abstract:

In da Costa et al. (2006) we have shown how the same pricing kernel can account for the excess returns of the S&P 500 over the US short-term bond and of the uncovered over the covered trading of foreign government bonds. In this paper we estimate and test the overidentifying restrictions of Euler equations associated with six different versions of the Consumption Capital Asset Pricing Model. Our main finding is that the same (however often unreasonable) values for the parameters are estimated for all models in both markets. In most cases, the rejection or otherwise of the overidentifying restrictions occurs for both markets, suggesting that success and failure stories for the equity premium repeat themselves in foreign exchange markets. Our results corroborate the findings in da Costa et al. (2006) that indicate a strong similarity between the behavior of excess returns in the two markets when modeled as risk premiums, providing empirical grounds to believe that the proposed preference-based solutions to puzzles in domestic financial markets can certainly shed light on the Forward Premium Puzzle.
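
For reference, the Euler equations whose overidentifying restrictions are tested are of the consumption-CAPM form; in the canonical CRRA case (a sketch of one of the versions considered),

    E_t[ \beta ( C_{t+1} / C_t )^{-\gamma} R_{t+1} ] = 1,

so the pricing kernel is m_{t+1} = \beta ( C_{t+1} / C_t )^{-\gamma}. Interacting the pricing errors with instruments produces more moment conditions than parameters (\beta, \gamma), and it is these overidentifying restrictions that are tested for equity and foreign-exchange returns.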

Relevance:

10.00%

Publisher:

Abstract:

Life cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption made in this literature is that all young adults enter the economy with no initial assets. In this article, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution of assets, which is estimated using a non-parametric method applied to data from the Survey of Consumer Finances. We found that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets.
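
The abstract does not name the non-parametric method used; a kernel density estimator is one standard choice and illustrates the idea (a sketch, not necessarily the authors' estimator):

    \hat f(a) = (1 / n h) \sum_{i=1}^n K( (a - a_i) / h ),

where the a_i are the initial asset holdings of young households in the Survey of Consumer Finances, K is a kernel, and h a bandwidth; newborn agents in the model then draw their initial assets from \hat f.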

Relevance:

10.00%

Publisher:

Abstract:

Fundamentally, the article analyzes Senate Bill (Projeto de Lei do Senado) No. 6004 of 2013 in light of the concept of the "concurseira" ideology surrounding Brazilian civil-service examinations (concursos públicos). The core of the work lies in understanding the difference between the ideological systems found in academia and in grasping the republican premises that guided the reasoning of the Public Administration when determining a model for selecting public employees. The bill in question arises in this scenario with the aim of filling a legislative gap in the specific regulation of federal civil-service examinations, while nevertheless reproducing the "concurseira" ideology. The debate is informed by the research report "Processos Seletivos para a Contratação de Servidores Públicos: Brasil, o País dos Concursos?", carried out by FGV Direito Rio in partnership with Universidade Federal Fluminense as part of the "Pensando o Direito" initiative of the Secretariat of Legislative Affairs of the Ministry of Justice.

Relevance:

10.00%

Publisher:

Abstract:

This thesis contains three chapters. The first chapter uses a general equilibrium framework to simulate and compare the long run effects of the Patient Protection and Affordable Care Act (PPACA) and of health care cost reduction policies on macroeconomic variables, the government budget, and the welfare of individuals. We found that all policies were able to reduce the uninsured population, with the PPACA being more effective than cost reductions. The PPACA increased the public deficit mainly due to the Medicaid expansion, forcing tax hikes. On the other hand, cost reductions alleviated the fiscal burden of public insurance, reducing the public deficit and taxes. Regarding welfare effects, the PPACA as a whole and cost reductions are welfare improving. High welfare gains would be achieved if U.S. medical costs followed the same trend as in OECD countries. Moreover, feasible cost reductions are more welfare improving than most of the PPACA components, proving to be a good alternative. The second chapter documents that life cycle general equilibrium models with heterogeneous agents have a very hard time reproducing the American wealth distribution. A common assumption made in this literature is that all young adults enter the economy with no initial assets. In this chapter, we relax this assumption, which is not supported by the data, and evaluate the ability of an otherwise standard life cycle model to account for U.S. wealth inequality. The new feature of the model is that agents enter the economy with assets drawn from an initial distribution of assets. We found that heterogeneity with respect to initial wealth is key for this class of models to replicate the data. According to our results, American inequality can be explained almost entirely by the fact that some individuals are lucky enough to be born into wealth, while others are born with few or no assets. The third chapter notes that a common assumption adopted in life cycle general equilibrium models is that the population is stable at the steady state, that is, its relative age distribution becomes constant over time. An open question is whether the demographic assumptions commonly adopted in these models in fact imply that the population becomes stable. In this chapter we prove the existence of a stable population in a demographic environment where both the age-specific mortality rates and the population growth rate are constant over time, the setup commonly adopted in life cycle general equilibrium models. Hence, the stability of the population does not need to be taken as an assumption in these models.
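
For context, the stability result in the third chapter concerns the classical notion of a stable population: with a constant population growth rate and constant age-specific mortality, the relative age distribution converges to a time-invariant profile. In a discrete-time sketch (generic notation, not the thesis' exact statement), if S_a is the probability of surviving to age a and n the population growth rate, the stable share of age-a individuals is

    c_a = S_a (1+n)^{-a} / \sum_j S_j (1+n)^{-j}.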

Relevance:

10.00%

Publisher:

Abstract:

The material presents thread synchronization on Windows and the advantages of using design patterns in threaded projects, covering topics such as: problems with threads, race conditions and deadlock, synchronization methods (WaitForSingleObject and WaitForMultipleObjects), critical sections, and kernel objects. The material also highlights volatile storage; Mutex and CreateMutex; and the Semaphore object, CreateSemaphore, and ReleaseSemaphore.
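
As a minimal sketch of the mutex-based synchronization the material covers, assuming the standard Win32 API (the worker function and shared counter below are illustrative, not taken from the material):

    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_mutex;        /* kernel object protecting the shared counter */
    static long   g_counter = 0;  /* shared state that would otherwise race      */

    static DWORD WINAPI worker(LPVOID arg)
    {
        int i;
        (void)arg;
        for (i = 0; i < 100000; ++i) {
            WaitForSingleObject(g_mutex, INFINITE);  /* enter critical section */
            ++g_counter;
            ReleaseMutex(g_mutex);                   /* leave critical section */
        }
        return 0;
    }

    int main(void)
    {
        HANDLE threads[2];
        int i;

        g_mutex = CreateMutex(NULL, FALSE, NULL);    /* unnamed, not initially owned */
        for (i = 0; i < 2; ++i)
            threads[i] = CreateThread(NULL, 0, worker, NULL, 0, NULL);

        WaitForMultipleObjects(2, threads, TRUE, INFINITE);  /* wait for both workers */
        printf("counter = %ld\n", g_counter);

        for (i = 0; i < 2; ++i)
            CloseHandle(threads[i]);
        CloseHandle(g_mutex);
        return 0;
    }

A semaphore created with CreateSemaphore and released with ReleaseSemaphore follows the same wait/release pattern when up to N threads may hold the resource at once.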

Relevance:

10.00%

Publisher:

Abstract:

Multidrug resistance is a major problem in anticancer therapy, and P-glycoprotein (P-gp) is one of the proteins responsible for this resistance. This work focused mainly on the development of mathematical/statistical and "chemical" models. For the mathematical/statistical models we used machine learning methods, namely Support Vector Machines (SVM) and Random Forests (RF); for the chemical models we used pharmacophores. These methods were applied to several proteins (P-gp, p53, and the p53-MDM2 complex), using two compound families, pifithrins for p53 and flavonoids for P-gp, and, to a lesser extent, a diverse group of molecules from several chemical families. For the SVM models applied to P-gp and the flavonoid family, we obtained good results with the Radial Basis Function (RBF) kernel: 94% precision and 96% specificity on the training set, and 70% prediction and 67% specificity on the test set, with the lowest number of false negatives among all kernels tried. Applying RF to the flavonoid family, the training set showed 86% precision and 90% specificity, and the test set 70% prediction and 60% specificity, again with the lowest number of false negatives. Repeating the RF procedure with a total of 63 descriptors gave lower values: 79% precision and 82% specificity on the training set, and 70% prediction and 60% specificity on the test set. Comparing the two methods, we chose SVM with the RBF kernel as the model giving the best classification results. We then applied SVM to P-gp and a set of non-flavonoid molecules that are transported by P-gp, again obtaining good values with the RBF kernel: 95% precision and 93% specificity on the training set, and 70% prediction and 69% specificity on the test set, with the lowest number of false negatives. The pharmacophore method was applied to three targets: a set of flavonoid inhibitors and non-flavonoid substrates of P-gp, a group of pifithrins for p53, and a diverse set of structures for p53-MDM2 binding. In each of the four pharmacophore models obtained, three features were identified, with the aromatic-ring and hydrogen-bond-donor features present in all models. Screening several databases with these models yielded hits with great structural diversity.
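
For reference, the RBF kernel that gave the best SVM results above is the Gaussian radial basis function; in textbook notation (not the thesis' own symbols),

    K(x_i, x_j) = exp( -\gamma || x_i - x_j ||^2 ),

and the trained SVM classifies a new molecule with descriptor vector x by the sign of \sum_i \alpha_i y_i K(x_i, x) + b, where the sum runs over the support vectors.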

Relevance:

10.00%

Publisher:

Abstract:

What is considered the standard model of theory change was presented in [AGM85] and is known as the AGM model. In particular, that paper introduced the class of partial meet contractions. In subsequent works several alternative constructive models for that same class of functions were presented, e.g. safe/kernel contractions ([AM85, Han94]), system-of-spheres-based contractions ([Gro88]) and epistemic-entrenchment-based contractions ([Gär88, GM88]). Besides, several generalizations of that model were investigated. In that regard we emphasise models which account for contractions by sets of sentences rather than only by a single sentence, i.e. multiple contractions. However, until now, only two of the above-mentioned models have been generalized in the sense of addressing the case of contractions by sets of sentences: the partial meet multiple contractions were presented in [Han89, FH94], while the kernel multiple contractions were introduced in [FSS03]. In this thesis we propose two new constructive models of multiple contraction functions, namely the system-of-spheres-based and the epistemic-entrenchment-based multiple contractions, which generalize the models of system-of-spheres-based and of epistemic-entrenchment-based contractions, respectively, to the case of contractions (of theories) by sets of sentences. Furthermore, analogously to what holds for the corresponding classes of contraction functions by one single sentence, those two classes are identical and constitute a subclass of the class of partial meet multiple contractions. Additionally, as the first step of the procedure followed here to obtain an adequate definition of the system-of-spheres-based multiple contractions, we present a possible worlds semantics for the partial meet multiple contractions analogous to the one proposed in [Gro88] for the partial meet contractions (by one single sentence). Finally, we present an axiomatic characterization for the new class(es) of multiple contraction functions introduced here.
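
For reference, the partial meet construction that the new models are shown to fall under is defined, in standard AGM notation, by

    K \div \alpha = \bigcap \gamma( K \perp \alpha ),

where K \perp \alpha is the set of maximal subsets of the theory K that do not entail \alpha and \gamma is a selection function choosing some of them; kernel contractions instead use an incision function to remove at least one element from every minimal \alpha-entailing subset (kernel) of K. The multiple contractions discussed above replace the single sentence \alpha by a set of sentences.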