931 results for Spatial conditional autoregressive model
Abstract:
Quantifying mass and energy exchanges within tropical forests is essential for understanding their role in the global carbon budget and how they will respond to perturbations in climate. This study reviews ecosystem process models designed to predict the growth and productivity of temperate and tropical forest ecosystems. Temperate forest models were included because of the limited number of tropical forest models. The review provides a multiscale assessment enabling potential users to select a model suited to the scale and type of information they require in tropical forests. Process models are reviewed in relation to their input and output parameters, minimum spatial and temporal units of operation, and maximum spatial extent and time period of application for each organizational level of modelling. Organizational levels included leaf-tree, plot-stand, regional and ecosystem levels, with model complexity decreasing as the time-step and spatial extent of model operation increase. All ecosystem models are simplified versions of reality and are typically aspatial. Remotely sensed data sets and derived products may be used to initialize, drive and validate ecosystem process models. At the simplest level, remotely sensed data are used to delimit location, extent and changes over time of vegetation communities. At a more advanced level, remotely sensed data products have been used to estimate key structural and biophysical properties associated with ecosystem processes in tropical and temperate forests. Combining ecological models and image data enables the development of carbon accounting systems that will contribute to understanding greenhouse gas budgets at biome and global scales.
Abstract:
In the conduct of monetary policy, the reaction functions estimated in empirical studies, for the Brazilian economy as well as for other economies, have shown a good fit to the data. However, studies show that the explanatory power of the estimates increases considerably when an interest-rate smoothing component, represented by the lagged interest rate, is included. According to Clarida et al. (1998), the coefficient on the lagged interest rate (lying between 0.0 and 1.0) represents the degree of inertia of monetary policy: the larger this coefficient, the smaller and slower the response of the interest rate to the relevant information set. Moreover, the international empirical literature shows that this component carries considerable weight in reaction functions, revealing that central banks adjust their instrument slowly and parsimoniously. The Brazilian case is of particular interest because recent studies have documented a rise in the inertial component, suggesting that the Central Bank of Brazil (BCB) has been increasing the degree of interest-rate smoothing in recent years. In this context, beyond estimating a forward-looking reaction function to capture the average overall behavior of the Central Bank of Brazil from January 2005 to May 2013, this study investigates a possible dynamic causal relationship between the path of the inertia coefficient and the relevant macroeconomic variables, applying the Kalman filter to extract the path of the inertia coefficient and estimating a vector autoregressive (VAR) model that includes that path together with the relevant macroeconomic variables. Overall, both the regressions and the Kalman filter produced an extremely high inertia coefficient throughout the period analyzed, and very small overall response coefficients, inconsistent with what theory predicts.
From the VAR, the most interesting result was that positive shocks to the inertia variable produced persistent deviations in the output gap and, consequently, in the deviations of inflation and inflation expectations from the central target.
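As an illustration of the filtering step described above, the sketch below runs a scalar Kalman filter to extract a time-varying smoothing coefficient rho_t in i_t = rho_t·i_{t-1} + eps_t, with rho_t following a random walk. The variance settings, starting values and simulated series are assumptions for the example, not the study's specification.

```python
import numpy as np

def kalman_tvp(i, q=1e-5, r=1e-2, rho0=0.8, p0=1.0):
    """Filtered path of the time-varying coefficient rho_t (assumed model)."""
    rho, p = rho0, p0
    path = []
    for t in range(1, len(i)):
        p = p + q                          # predict: rho follows a random walk
        h = i[t - 1]                       # observation loading = lagged rate
        s = h * p * h + r                  # innovation variance
        k = p * h / s                      # Kalman gain
        rho = rho + k * (i[t] - h * rho)   # measurement update
        p = (1.0 - k * h) * p
        path.append(rho)
    return np.array(path)

# Simulate a rate series with constant true inertia 0.9 and recover it.
rng = np.random.default_rng(0)
i = np.empty(500)
i[0] = 10.0
for t in range(1, 500):
    i[t] = 0.9 * i[t - 1] + rng.normal(0.0, 0.1)

est = kalman_tvp(i)
```

The filtered path settles near the true coefficient; in the study, the extracted path then enters the VAR as an endogenous variable.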
Abstract:
This paper studies the evolution of the default risk premia for European firms during the years surrounding the recent credit crisis. We employ the information embedded in Credit Default Swaps (CDS) and Moody's KMV EDF default probabilities to analyze the common factors driving these risk premia. The risk premium is characterized in several directions: firstly, we perform a panel data analysis to capture the relationship between CDS spreads and actual default probabilities. Secondly, we employ the intensity framework of Jarrow et al. (2005) to measure the theoretical effect of the risk premium on expected bond returns. Thirdly, we carry out a dynamic panel data analysis to identify the macroeconomic sources of the risk premium. Finally, a vector autoregressive model analyzes what proportion of the co-movement is attributable to financial or macro variables. Our estimations report coefficients for the risk premium substantially higher than those previously reported for US firms, as well as time-varying behavior. A dominant factor explains around 60% of the common movements in risk premia. Additionally, empirical evidence suggests a public-to-private risk transfer between sovereign CDS spreads and corporate risk premia.
Abstract:
OBJECTIVE: To analyze whether previously identified risk factors for sudden death syndrome have a significant impact in a developing country. METHODS: Retrospective longitudinal case-control study carried out in Porto Alegre, Southern Brazil. Cases (N=39) were infants born between 1996 and 2000 who died suddenly and unexpectedly at home during sleep and were diagnosed with sudden death syndrome. Controls (N=117) were infants matched by age and sex who died in hospitals due to other conditions. Data were collected from postmortem examination records and questionnaire answers. A conditional logistic model was used to identify factors associated with the outcome. RESULTS: Mean age at death of cases was 3.2 months. The frequencies of infants regarding gestational age, breastfeeding and regular medical visits were similar in both groups. The sleeping position for most cases and controls was lateral; the supine sleeping position was found for few infants in both groups. Maternal age below 20 years (OR=2, 95% CI: 1.1; 5.1) and smoking more than 10 cigarettes per day during pregnancy (OR=3, 95% CI: 1.3; 6.4) significantly increased the risk for the syndrome. Socioeconomic characteristics were similar in both groups and did not affect risk. CONCLUSIONS: Infant-maternal and socioeconomic profiles of cases in a developing country closely resembled the profile described in the literature, and risk factors were similar as well. However, individual characteristics such as smoking during pregnancy and maternal age below 20 years were identified as risks in the population studied.
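A matched case-control design like the one above is typically fit by maximizing a conditional likelihood within each matched set. The sketch below evaluates that likelihood for simulated 1:3 matched sets; the data and coefficient values are illustrative, not the study's records.

```python
import numpy as np

def cond_loglik(beta, matched_sets):
    """Conditional log-likelihood over matched case-control sets."""
    ll = 0.0
    for x_case, X_ctrl in matched_sets:
        scores = np.concatenate(([x_case @ beta], X_ctrl @ beta))
        ll += scores[0] - np.log(np.exp(scores).sum())
    return ll

# Simulate 200 matched sets of 1 case + 3 controls with two covariates
# (illustrative effect sizes, standing in for e.g. maternal age / smoking).
rng = np.random.default_rng(1)
true_beta = np.array([1.0, -0.5])
matched_sets = []
for _ in range(200):
    X = rng.normal(size=(4, 2))
    p = np.exp(X @ true_beta)
    p /= p.sum()
    case = rng.choice(4, p=p)
    X_ctrl = np.delete(X, case, axis=0)
    matched_sets.append((X[case], X_ctrl))
```

Because the matching strata drop out of this likelihood, confounders used for matching (here, age and sex) need not be modeled explicitly.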
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
This paper studies whether financial liberalization tends to increase the likelihood of systemic banking crises. I used a sample of 79 countries with data spanning 1973 to 2005 to run a panel probit model. I found that, if anything, financial liberalization as measured across seven different dimensions tends to decrease the probability of occurrence of a systemic banking crisis. I then ran several robustness tests: using a conditional probit model, testing for different durations of the liberalization impact, and reducing the sample to only the first crisis event for each country. The main results still held, confirming their robustness.
Evaluating the performance of bond investment funds: evidence for the Brazilian market
Abstract:
Master's dissertation in Finance
Abstract:
We provide a comparative analysis of how short-run variations in carbon and energy prices relate to each other in the emerging greenhouse gas market in California (the Western Climate Initiative [WCI]) and in the European Union Emission Trading Scheme (EU ETS). We characterize the relationship between carbon, gas, coal, electricity and gasoline prices and an indicator for economic activity, and present a first analysis of carbon prices in the WCI. We also provide a comparative analysis of the structures of the two markets. We estimate a vector autoregressive model and the impulse-response functions. Our main findings show a positive impact of a carbon shock on electricity prices in both markets, but a larger one on the WCI electricity price, indicating greater efficiency. We propose that the wider sectoral coverage of the carbon market, namely transport fuels and electricity imports, may contribute to this result. To conclude, the research shows significant and coherent relations between variables in the WCI, which demonstrate some degree of success for a first year in operation. Conversely, the EU ETS should complete its intended market reform to allow the carbon price more impact. Finally, in both markets, there is no evidence of carbon pricing dampening economic activity.
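As a minimal sketch of the estimation strategy described above (assuming a two-variable system with illustrative parameters, not the study's data), a VAR(1) can be fit by OLS and impulse responses traced by powering the estimated transition matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.6, 0.0],
                   [0.3, 0.5]])   # shocks to variable 0 feed into variable 1
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0.0, 1.0, 2)

# OLS: regress y_t on y_{t-1}; transposing gives the transition matrix
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Impulse responses to a unit shock in variable 0 over 10 horizons
shock = np.array([1.0, 0.0])
irf = [np.linalg.matrix_power(A_hat, h) @ shock for h in range(10)]
```

With variable 0 read as a carbon price and variable 1 as an electricity price, the positive one-step response mirrors the kind of carbon-to-electricity pass-through the study reports; identification of structural shocks (e.g. a Cholesky ordering) is omitted here for brevity.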
Abstract:
Macroeconomic activity has become less volatile over the past three decades in most G7 economies. The current literature focuses on characterizing the volatility reduction and on explanations for this so-called "moderation" in each G7 economy separately. In contrast to individual-country and individual-variable analyses, this paper focuses on common characteristics of the reduction and common explanations for the moderation across the G7 countries. In particular, we study three explanations: structural changes in the economy, changes in common international shocks and changes in domestic shocks. We study these explanations in a unified model structure. To this end, we propose a Bayesian factor structural vector autoregressive model. Using the proposed model, we investigate whether we can find common explanations for all G7 economies when information is pooled from multiple domestic and international sources. Our empirical analysis suggests that volatility reductions can largely be attributed to the decline in the magnitudes of the shocks in most G7 countries, while only for the U.K., the U.S. and Italy can they partially be attributed to structural changes in the economy. Analyzing the components of the volatility, we also find that domestic shocks, rather than common international shocks, account for a large part of the volatility reduction in most of the G7 countries. Finally, we find that after the mid-1980s the structure of the economy changed substantially in five of the G7 countries: Germany, Italy, Japan, the U.K. and the U.S.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
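The gradual-deformation proposal mentioned above can be sketched as a rotation between the current model realization and an independent draw from a zero-mean Gaussian prior: the combination leaves the prior invariant, while the angle theta tunes the perturbation strength. The covariance model below is an illustrative AR(1)-type correlation, not the thesis's geostatistical model.

```python
import numpy as np

def gradual_deformation(m_current, m_proposal, theta):
    """Rotate between two prior realizations; preserves a zero-mean Gaussian prior."""
    return np.cos(theta) * m_current + np.sin(theta) * m_proposal

rng = np.random.default_rng(3)
n = 4
idx = np.arange(n)
Sigma = 0.6 ** np.abs(np.subtract.outer(idx, idx))   # illustrative correlation
L = np.linalg.cholesky(Sigma)

def draw():
    return L @ rng.normal(size=n)   # one realization from the Gaussian prior

theta = 0.3   # small theta -> gentle perturbation, higher MCMC acceptance
proposals = np.array([gradual_deformation(draw(), draw(), theta)
                      for _ in range(20000)])
cov_hat = np.cov(proposals.T)     # should match the prior covariance
```

Because cos²(theta) + sin²(theta) = 1, the proposal's covariance equals the prior covariance for any theta, which is what allows the perturbation strength to be tuned freely.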
Abstract:
OBJECTIVE To assess Spanish and Portuguese patients' and physicians' preferences regarding type 2 diabetes mellitus (T2DM) treatments and the monthly willingness to pay (WTP) to gain benefits or avoid side effects. METHODS An observational, multicenter, exploratory study focused on routine clinical practice in Spain and Portugal. Physicians were recruited from multiple hospitals and outpatient clinics, while patients were recruited from eleven centers operating in the public health care system in different autonomous communities in Spain and Portugal. Preferences were measured via a discrete choice experiment by rating multiple T2DM medication attributes. Data were analyzed using the conditional logit model. RESULTS Three hundred and thirty (n=330) patients (49.7% female; mean age 62.4 [SD: 10.3] years, mean T2DM duration 13.9 [8.2] years, mean body mass index 32.5 [6.8] kg/m², 41.8% received oral + injected medication, 40.3% received oral, and 17.6% injected treatments) and 221 physicians from Spain and Portugal (62% female; mean age 41.9 [SD: 10.5] years, 33.5% endocrinologists, 66.5% primary-care doctors) participated. Patients valued avoiding a gain in bodyweight of 3 kg/6 months (WTP: €68.14 [95% confidence interval: 54.55-85.08]) the most, followed by avoiding one hypoglycemic event/month (WTP: €54.80 [23.29-82.26]). Physicians valued avoiding one hypoglycemia/week (WTP: €287.18 [95% confidence interval: 160.31-1,387.21]) the most, followed by avoiding a 3 kg/6 months gain in bodyweight and decreasing cardiovascular risk (WTP: €166.87 [88.63-843.09] and €154.30 [98.13-434.19], respectively). Physicians and patients were willing to pay €125.92 (73.30-622.75) and €24.28 (18.41-30.31), respectively, to avoid a 1% increase in glycated hemoglobin, and €143.30 (73.39-543.62) and €42.74 (23.89-61.77) to avoid nausea. 
CONCLUSION Both patients and physicians in Spain and Portugal are willing to pay for the health benefits associated with improved diabetes treatment, the most important being avoiding hypoglycemia and weight gain. Decreased cardiovascular risk and weight reduction were the third most valued attributes for physicians and patients, respectively.
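For readers unfamiliar with the mechanics, WTP figures like those above are conventionally derived from discrete-choice (conditional logit) coefficients as the ratio of an attribute's coefficient to the cost coefficient. The coefficient values below are made up for illustration, not the study's estimates.

```python
def wtp_to_avoid(beta_attribute, beta_cost):
    """EUR willingness to pay to avoid one unit of a disutility attribute.

    With utility U = ... + beta_attribute * attribute + beta_cost * cost,
    both coefficients negative for a 'bad' attribute and for cost, the
    ratio gives the money value of removing one unit of the attribute.
    """
    return beta_attribute / beta_cost

# Hypothetical coefficients: disutility of one hypoglycemic event per month
# and marginal disutility of 1 EUR of monthly cost.
beta_hypo = -0.055
beta_cost = -0.001
monthly_wtp = wtp_to_avoid(beta_hypo, beta_cost)   # about 55 EUR/month
```

Confidence intervals for such ratios are usually obtained by the delta method or by bootstrapping, which is why the intervals reported above are so asymmetric.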
Abstract:
Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) in the timing of dispersal events - which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) in the lower frequency of terminal dispersal events inferred by DEC. 
Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that are outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
Abstract:
One of the principal issues facing biomedical research is to elucidate developmental pathways and to establish the fate of stem and progenitor cells in vivo. Hematopoiesis, the process of blood cell formation, provides a powerful experimental system for investigating this process. Here, we employ transcriptional regulatory elements from the stem cell leukemia (SCL) gene to selectively label primitive and definitive hematopoiesis. We report that SCL-labelled cells arising in the mid to late streak embryo give rise to primitive red blood cells but fail to contribute to the vascular system of the developing embryo. Restricting SCL-marking to different stages of foetal development, we identify a second population of multilineage progenitors, proficient in contributing to adult erythroid, myeloid and lymphoid cells. The distinct lineage-restricted potential of SCL-labelled early progenitors demonstrates that primitive erythroid cell fate specification is initiated during mid gastrulation. Our data also suggest that the transition from hemangioblastic precursors with endothelial and blood forming potential to a committed hematopoietic progenitor must have occurred prior to SCL-marking of definitive multilineage blood precursors.
Abstract:
Sickness absence (SA) is an important social, economic and public health issue. Identifying and understanding the determinants, whether biological, regulatory or health services-related, of variability in SA duration is essential for better management of SA. The conditional frailty model (CFM) is useful when repeated SA events occur within the same individual, as it allows simultaneous analysis of event dependence and heterogeneity due to unknown, unmeasured, or unmeasurable factors. However, its use may encounter computational limitations when applied to very large data sets, as may frequently occur in the analysis of SA duration. To overcome the computational issue, we propose a Poisson-based conditional frailty model (CFPM) for repeated SA events that accounts for both event dependence and heterogeneity. To demonstrate the usefulness of the proposed model in the SA duration context, we used data from all non-work-related SA episodes that occurred in Catalonia (Spain) in 2007, initiated by either a diagnosis of neoplasm or mental and behavioral disorders. As expected, the CFPM results were very similar to those of the CFM for both diagnosis groups. The CPU time for the CFPM was substantially shorter than for the CFM. The CFPM is a suitable alternative to the CFM in survival analysis with recurrent events, especially with large databases.
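A hedged sketch of the Poisson-GLM core that such a model rests on: a constant-hazard duration model with censoring can be fit as a Poisson regression with the event indicator as outcome and log time-at-risk as offset. The simulated episodes, covariate and Newton-Raphson fit below are illustrative only; the full CFPM additionally handles event dependence (e.g. via event-number strata) and frailty.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.integers(0, 2, n).astype(float)    # e.g. diagnosis-group indicator
hazard = np.exp(-1.0 + 0.7 * x)            # true constant hazard per episode
t = rng.exponential(1.0 / hazard)          # latent SA durations
time_at_risk = np.minimum(t, 2.0)          # administrative censoring at 2
event = (t <= 2.0).astype(float)           # 1 = episode ended, 0 = censored

# Poisson GLM with offset log(time_at_risk), fit by Newton-Raphson
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta + np.log(time_at_risk))   # expected event count
    grad = X.T @ (event - mu)                      # score vector
    hess = X.T @ (X * mu[:, None])                 # Fisher information
    beta += np.linalg.solve(hess, grad)
```

Recasting the survival likelihood as a Poisson GLM is what makes the computation scale to very large episode databases, since standard sparse GLM machinery can then be used.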
Abstract:
We estimate the response of stock prices to exogenous monetary policy shocks using a vector-autoregressive model with time-varying parameters. Our evidence points to protracted episodes in which, after a short-run decline, stock prices increase persistently in response to an exogenous tightening of monetary policy. That response is clearly at odds with the "conventional" view on the effects of monetary policy on bubbles, as well as with the predictions of bubbleless models. We also argue that it is unlikely that such evidence be accounted for by an endogenous response of the equity premium to the monetary policy shocks.