904 results for Panel model estimation
Abstract:
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
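To illustrate the kind of test this abstract describes, the sketch below runs a simple forecast-encompassing regression for probability forecasts under the quadratic (Brier) scoring rule: the error of the benchmark forecast is regressed on the difference between the competing and benchmark forecasts, and the null of encompassing is that the slope is zero. The data-generating process, sample size, and the plain (non-HAC) standard error are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def encompassing_test(y, p1, p2):
    """Test whether forecast p1 encompasses p2 under the quadratic
    (Brier) scoring rule: regress the error of p1 on (p2 - p1) and
    return the slope and its t-statistic (simple, non-robust s.e.)."""
    e = y - p1                                # benchmark forecast error
    x = p2 - p1                               # direction of the rival forecast
    lam = np.dot(x, e) / np.dot(x, x)         # OLS slope, no intercept
    resid = e - lam * x
    n = len(y)
    se = np.sqrt(np.sum(resid**2) / (n - 1) / np.dot(x, x))
    return lam, lam / se

# toy example: outcomes are generated from p2, so p1 should NOT encompass p2
rng = np.random.default_rng(0)
p1 = rng.uniform(0.2, 0.8, 500)
p2 = np.clip(p1 + rng.normal(0, 0.15, 500), 0.01, 0.99)
y = (rng.uniform(size=500) < p2).astype(float)
lam_hat, t_stat = encompassing_test(y, p1, p2)
```

Since the outcomes are drawn from p2, the slope estimate should be near 1 and the t-statistic large, so the null of encompassing is rejected.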
Abstract:
Decision strategies in multi-attribute choice experiments are investigated using eye-tracking. The visual attention towards, and attendance of, attributes is examined. Stated attendance is found to diverge substantively from visual attendance of attributes. However, stated and visual attendance are shown to be informative, non-overlapping sources of information about respondent utility functions when incorporated into model estimation. Eye-tracking also reveals systematic non-attendance of attributes by only a minority of respondents: most respondents visually attend most attributes most of the time. We find no compelling evidence that the level of attention is related to respondent certainty, or that higher- or lower-value attributes receive more or less attention.
Abstract:
Understanding the performance of banks is of the utmost importance given the impact the sector may have on economic growth and financial stability. Residential mortgage loans constitute a large proportion of many banks' portfolios and are one of the key assets in the determination of their performance. Using a dynamic panel model, we analyse the impact of residential mortgage loans on bank profitability and risk, based on a sample of 555 banks in the European Union (EU-15) over the period from 1995 to 2008. We find that an increase in residential mortgage loans seems to improve banks' performance in terms of both profitability and credit risk under good, pre-crisis market conditions. These findings may help explain why banks rush to lend on property during booms: doing so improves performance. The results also show that credit risk and profitability are lower during the upturn in the residential property cycle.
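The dynamic panel setting in this abstract can be sketched as follows: profitability depends on its own lag, a regressor (here, the mortgage share), and a bank fixed effect. This is a minimal within (fixed-effects) estimator on simulated data with illustrative dimensions and parameters; it is not the paper's estimator (dynamic panels are usually estimated by GMM, since the within estimator carries Nickell bias when T is small).

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 12                        # banks, years (illustrative)
alpha = rng.normal(0, 0.5, N)         # bank fixed effects
rho, beta = 0.5, 0.3                  # true persistence and mortgage effect
mort = rng.uniform(0, 1, (N, T))      # residential-mortgage share (toy regressor)
prof = np.zeros((N, T))
for t in range(1, T):
    prof[:, t] = rho * prof[:, t-1] + beta * mort[:, t] + alpha \
                 + rng.normal(0, 0.1, N)

# within (fixed-effects) estimator: demean each bank's series, then pool
y, ylag, x = prof[:, 1:], prof[:, :-1], mort[:, 1:]
def demean(a):
    return a - a.mean(axis=1, keepdims=True)
Y = demean(y).ravel()
Z = np.column_stack([demean(ylag).ravel(), demean(x).ravel()])
rho_hat, beta_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
```

With T = 12 the lag coefficient is biased somewhat downward (Nickell bias), while the mortgage coefficient is recovered close to its true value.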
Abstract:
This paper investigates whether bank integration, measured by cross-border bank flows, can capture the co-movements across housing markets in developed countries, using a spatial dynamic panel model. The transmission can occur through a global banking channel in which global banks intermediate wholesale funding to local banks. Changes in financial conditions are passed across borders through the banks' balance-sheet exposure to credit, currency, maturity, and funding risks, resulting in house price spillovers. Controlling for country-level and global factors, we find significant co-movement across the housing markets of countries with proportionally high bank integration. Bank integration captures house price co-movements better than other measures of economic integration. Once we account for bank exposure, other spatial linkages traditionally used to account for return co-movements across regions (such as trade, foreign direct investment, portfolio investment, and geographic proximity) become insignificant. Moreover, we find that the co-movement across housing markets decreases for countries with less developed mortgage markets, characterized by fixed-rate mortgage contracts, low loan-to-value limits, and no mortgage equity withdrawal.
Abstract:
This dissertation presents estimates of the price elasticity of demand for steel in Brazil, based on aggregated and disaggregated data from the steel industry. The estimates from the panel of disaggregated data suggest that earlier estimates based on aggregated data suffer from an aggregation bias, one that would lead the price elasticity of the steel sector to be underestimated. To compare short-run and long-run price elasticities, heterogeneous dynamic panels were estimated with the Mean Group (MG) and Pooled Mean Group (PMG) estimators. To the best of the author's knowledge, this is the first study to use panel estimation for the price elasticity of demand for steel products in Brazil, thereby controlling the estimates for heterogeneity across steel types.
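The Mean Group (MG) estimator mentioned above has a simple core idea: estimate a separate regression for each panel unit, then average the unit-specific slopes, so that heterogeneous elasticities are not forced to a common value. The sketch below illustrates this on simulated data with hypothetical dimensions and parameters; the PMG estimator, which pools the long-run coefficients, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 30, 40                           # product panels, periods (illustrative)
betas = rng.normal(-0.8, 0.2, N)        # heterogeneous true price elasticities
price = rng.normal(0, 1, (N, T))        # (log) price deviations
demand = betas[:, None] * price + rng.normal(0, 0.5, (N, T))

# Mean Group estimator: unit-by-unit OLS slope, then the cross-unit average
unit_slopes = np.array([
    np.dot(price[i], demand[i]) / np.dot(price[i], price[i])
    for i in range(N)
])
beta_mg = unit_slopes.mean()
```

The averaged slope recovers the mean elasticity of roughly -0.8 even though each unit's elasticity differs.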
Abstract:
Multi-factor models are a useful tool for explaining cross-sectional covariance in equity returns. In this paper we propose the use of irregularly spaced returns in multi-factor model estimation and provide an empirical example with the 389 most liquid equities in the Brazilian market. The market index proves significant in explaining equity returns, while the US$/Brazilian Real exchange rate and the Brazilian standard interest rate do not. The example shows the usefulness of the estimation method for subsequently using the model to fill in missing values and to provide interval forecasts.
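A minimal version of the idea, estimating factor loadings for one equity using only the dates on which its return is observed, can be sketched as follows. The factors, sample length, and fraction of missing observations are illustrative assumptions, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 250
market = rng.normal(0, 0.01, T)             # market-index factor returns
fx     = rng.normal(0, 0.01, T)             # exchange-rate factor returns
beta_true = np.array([1.2, 0.0])            # FX loading is zero, as in the paper
r = beta_true[0] * market + beta_true[1] * fx + rng.normal(0, 0.005, T)

# irregular spacing: about 30% of the days are unobserved for this equity,
# so the loadings are estimated from the observed dates only
obs = rng.uniform(size=T) > 0.3
design = np.column_stack([np.ones(obs.sum()), market[obs], fx[obs]])
b = np.linalg.lstsq(design, r[obs], rcond=None)[0]
beta_mkt, beta_fx = b[1], b[2]
```

The market loading is recovered near 1.2 and the FX loading near zero; with the fitted loadings, the model can then impute returns on the missing dates.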
Abstract:
In the context of economic development, the main objective of this study is to explore the relationship between international trade and productivity. After a broad discussion of the theories of international trade and of economic development, it seeks to establish the causal relationship between these two variables. The question that follows concerns the direction of causality: does productivity generate trade, or does trade generate productivity? This study suggests that both directions are possible, and that the difference lies precisely in which component of productivity is being analysed. Productivity at the product (intra-sectoral) level generates trade, as argued by Smith (1776) and Ricardo (1817), while trade generates (inter-sectoral) productivity, as argued by Hausmann, Hwang and Rodrik (2007) and McMillan and Rodrik (2011). The study then undertakes a broad analysis of productivity decomposition methods, concluding that the decomposition can be carried out in more than one way and that the interpretation of each approach differs, which can bias the conclusions. Additionally, productivity is decomposed into its components using two distinct databases, the GGDC 10-Sector Database and the IBGE national accounts, showing that the results can vary significantly depending on the database used. Likewise, within a structuralist approach, sectors with greater growth potential are distinguished from traditional sectors with lower growth potential. Defining export complexity according to the concept developed by Hausmann et al. (2014), a dynamic panel model is estimated for the recent Brazilian case to explain variation in the static inter-sectoral component of productivity.
The estimated model suggests that export complexity has a significant and positive impact on the structural component of productivity. An export basket containing more sophisticated products therefore favours productivity growth through its inter-sectoral component.
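The decomposition of aggregate productivity into an intra-sectoral (within) component and inter-sectoral (structural) components is commonly done with a shift-share identity, which can be sketched as follows. The sector values are hypothetical; the identity itself, within + static structural + interaction = total change, holds exactly.

```python
import numpy as np

# sector labour productivity (output per worker) and employment shares
# at an initial and a final period; three sectors, illustrative values
p0 = np.array([10.0, 25.0, 40.0])     # e.g. agriculture, industry, services
p1 = np.array([12.0, 27.0, 41.0])
s0 = np.array([0.40, 0.25, 0.35])
s1 = np.array([0.30, 0.28, 0.42])

within  = np.sum(s0 * (p1 - p0))           # intra-sectoral productivity gains
static  = np.sum(p0 * (s1 - s0))           # static structural (inter-sectoral)
dynamic = np.sum((s1 - s0) * (p1 - p0))    # interaction (dynamic) term
total   = np.sum(s1 * p1) - np.sum(s0 * p0)
```

In this toy example labour moves out of the low-productivity sector, so the static structural term is positive, consistent with the structuralist argument in the abstract.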
Abstract:
This paper presents a region-based methodology for segmenting Digital Elevation Models obtained from laser scanning data. The methodology combines two sequential techniques: a recursive splitting technique using the quadtree structure, followed by a region-merging technique using the Markov Random Field model. The recursive splitting technique starts by dividing the Digital Elevation Model into homogeneous regions. However, due to slight height differences in the Digital Elevation Model, region fragmentation can be relatively high. To minimize the fragmentation, a region-merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using the so-called Region Adjacency Graph, in which each node represents a region of the segmented Digital Elevation Model and two nodes are connected if the corresponding regions share a common boundary. It is then assumed that the random variable associated with each node follows the Markov Random Field model. This hypothesis allows the derivation of the posterior probability distribution, whose solution is obtained by Maximum a Posteriori estimation. Regions with a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the Digital Elevation Model with little fragmentation.
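The first stage, recursive quadtree splitting, can be sketched as below: a square block of the elevation grid is kept as a leaf if its height range is within a homogeneity tolerance, and otherwise split into four quadrants. The toy DEM and tolerance are illustrative; the MRF-based merging stage is not reproduced here.

```python
import numpy as np

def quadtree_split(dem, r0, c0, size, tol, leaves):
    """Recursively split a square DEM block until the height range
    within the block falls below `tol` (homogeneity criterion)."""
    block = dem[r0:r0 + size, c0:c0 + size]
    if size == 1 or block.max() - block.min() <= tol:
        leaves.append((r0, c0, size))         # homogeneous leaf region
        return
    h = size // 2
    for dr, dc in [(0, 0), (0, h), (h, 0), (h, h)]:
        quadtree_split(dem, r0 + dr, c0 + dc, h, tol, leaves)

# toy DEM: flat terrain with one raised 2x2 "building" block
dem = np.zeros((8, 8))
dem[2:4, 2:4] = 5.0
leaves = []
quadtree_split(dem, 0, 0, 8, tol=0.5, leaves=leaves)
```

Only the quadrant containing the raised block is subdivided further, which illustrates both the splitting and the fragmentation the merging step is meant to reduce.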
Abstract:
Graduate Program in Economics - FCLAR
Abstract:
This paper examines how the influence of a secondary variable in collocated cokriging depends on its correlation with the primary variable. For this study, five exhaustive data sets were generated by computer, from which samples with 60 and 104 data points were drawn using stratified random sampling. The exhaustive data sets were generated starting from a pair of primary and secondary variables showing good correlation; successive sets were then obtained by adding increasing amounts of white noise, so that the correlation weakens. Using these samples, it was possible to determine how primary and secondary information is used to estimate an unsampled location at each correlation level.
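The construction of the successively noisier secondary variables can be sketched as follows: starting from a well-correlated pair, white noise of growing variance is added and the resulting correlation with the primary variable weakens. The distributions and noise levels are illustrative assumptions; the cokriging step itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
primary = rng.normal(0, 1, n)
secondary = primary + rng.normal(0, 0.3, n)   # initially well-correlated pair

# degrade the secondary variable with growing amounts of white noise
corrs = []
for noise_sd in [0.0, 0.5, 1.0, 2.0]:
    noisy = secondary + rng.normal(0, noise_sd, n)
    corrs.append(np.corrcoef(primary, noisy)[0, 1])
```

Each added noise level lowers the primary-secondary correlation, reproducing the gradient of correlation levels the study uses to compare how the secondary information is weighted.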
Abstract:
This dataset contains pCO2 values, with estimation errors, reconstructed from tree-ring cellulose δ13C data for 10 sites (locations given below) using the geochemical model described in the publication by Trina Bose, Supriyo Chakraborty, Hemant Borgaonkar, and Saikat Sengupta. The data were generated at the Stable Isotope Laboratory, Indian Institute of Tropical Meteorology, Pune - 411008, India.
Abstract:
This talk illustrates how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and then tabulated in the second step. Such an approach disentangles the tasks of computing results (which may take a long time) and preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples using results from model estimation commands and various other Stata commands such as tabulate, summarize, or correlate are presented. Users will also be shown how to dynamically link results into word processors or into LaTeX documents.
Abstract:
This tutorial will show how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and then tabulated in the second step. Such an approach disentangles the tasks of computing results (which may take a long time) and preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples using results from model estimation commands and also various other Stata commands such as tabulate, summarize, or correlate are presented. Furthermore, this tutorial shows how to dynamically link results into word processors or into LaTeX documents.
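The two abstracts above describe a Stata workflow; the design itself is language-agnostic, and a minimal Python analogue of the gather-then-tabulate idea can be sketched as follows. The file name, result values, and table layout are all hypothetical.

```python
import json
import os
import tempfile

# step 1: compute (possibly slow) estimation results and archive them
results = {"model_A": {"beta": 0.31, "se": 0.05},
           "model_B": {"beta": 0.27, "se": 0.04}}
path = os.path.join(tempfile.mkdtemp(), "estimates.json")
with open(path, "w") as f:
    json.dump(results, f)

# step 2 (possibly much later, and repeatable): load the archive
# and tabulate, without re-running the estimation
with open(path) as f:
    stored = json.load(f)
lines = [f"{name:8s} {est['beta']:.2f} ({est['se']:.2f})"
         for name, est in sorted(stored.items())]
table = "\n".join(lines)
```

Because the archive persists on disk, the tabulation step can be re-run and restyled repeatedly without repeating the expensive computation, which is exactly the separation the talk advocates.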
Abstract:
This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under the ROC curve, sensitivity, specificity, precision, F1 measure, and Brier score, for evaluating feature subsets and as the objectives of the problem. One characteristic of these objective functions is the presence of noise in their values, which must be handled appropriately during optimization. Our proposed algorithm consists of two major techniques specially designed for the feature subset selection problem. The first is a solution ranking method based on interval values to handle the noise in the objectives of this problem. The second is a model estimation method for learning a joint probabilistic model of objectives and variables, which is used to generate new solutions and advance through the search space. To simplify model estimation, l1-regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and a standard multiobjective genetic algorithm. In particular, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm obtains comparable or better performance on the tested datasets.
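The core of interval-based ranking can be illustrated with a deliberately simplified dominance rule (not the paper's actual method): each noisy objective value is summarized as an interval, and one solution dominates another only when even its worst value beats the rival's best, making the ranking robust to noise. The subset names and interval values below are hypothetical.

```python
def interval_dominates(a, b):
    """a = (lo, hi) dominates b = (lo, hi) under maximisation when
    even a's worst value beats b's best value."""
    return a[0] > b[1]

# noisy AUC estimates for three feature subsets, as (lower, upper) intervals
subsets = {"S1": (0.80, 0.85), "S2": (0.70, 0.82), "S3": (0.60, 0.68)}

def rank(intervals):
    # score each subset by how many rivals it dominates; sort descending
    names = list(intervals)
    score = {n: sum(interval_dominates(intervals[n], intervals[m])
                    for m in names if m != n)
             for n in names}
    return sorted(names, key=lambda n: -score[n])

order = rank(subsets)
```

Here S1 and S2 each dominate only S3, because their own intervals overlap; the overlap is treated as "too noisy to separate", which is the behaviour an interval-based ranking is designed to produce.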