917 results for ARCH and GARCH Models
Abstract:
In this study, discrete time one-factor models of the term structure of interest rates and their application to the pricing of interest rate contingent claims are examined theoretically and empirically. The first chapter discusses the issues involved in pricing interest rate contingent claims and describes the Ho and Lee (1986), Maloney and Byrne (1989), and Black, Derman, and Toy (1990) discrete time models. The second chapter presents a general discrete time model of the term structure from which the Ho and Lee, Maloney and Byrne, and Black, Derman, and Toy models can all be obtained; the general model also allows the specification of an additional model, the ExtendedMB model. The third chapter illustrates the application of the discrete time models to the pricing of a variety of interest rate contingent claims. The final chapter empirically investigates the performance of the Ho and Lee, Black, Derman, and Toy, and ExtendedMB models in pricing Eurodollar futures options. The results indicate that the Black, Derman, and Toy and ExtendedMB models outperform the Ho and Lee model, while little difference in performance is detected between the Black, Derman, and Toy and ExtendedMB models.
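As a rough illustration of the kind of machinery these discrete time models involve (not the dissertation's calibrated lattices), the following Python sketch builds a Ho and Lee style recombining binomial short-rate lattice and prices a European call on a zero-coupon bond by backward induction under risk-neutral probabilities of one half; all parameter values are hypothetical.

    import numpy as np

    r0, drift, sigma, dt = 0.05, 0.0, 0.01, 0.5   # hypothetical lattice parameters
    n_steps, bond_steps = 4, 8                    # option expires at step 4, bond matures at step 8
    strike = 0.80

    def short_rate(i, j):
        # Short rate at time step i in state j (j up-moves out of i): level + drift + additive shocks
        return r0 + drift * i * dt + sigma * np.sqrt(dt) * (2 * j - i)

    def bond_values(start_step, maturity_step):
        # Zero-coupon bond prices at every node of start_step, by backward induction from maturity
        values = np.ones(maturity_step + 1)
        for i in range(maturity_step - 1, start_step - 1, -1):
            rates = np.array([short_rate(i, j) for j in range(i + 1)])
            values = np.exp(-rates * dt) * 0.5 * (values[1:i + 2] + values[:i + 1])
        return values

    # European call on the zero-coupon bond, exercised at the option expiry
    option = np.maximum(bond_values(n_steps, bond_steps) - strike, 0.0)
    for i in range(n_steps - 1, -1, -1):
        rates = np.array([short_rate(i, j) for j in range(i + 1)])
        option = np.exp(-rates * dt) * 0.5 * (option[1:i + 2] + option[:i + 1])

    print("Call on zero-coupon bond:", option[0])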
Abstract:
Drawing on Post-Keynesian theory and its concept of a monetary economy of production, this dissertation investigates the behavior of the Brazilian real's exchange rate in the presence of the Brazilian Central Bank's interventions through so-called swap transactions over 2002-2015. The work first analyzes the essential properties of an open monetary economy of production and then presents the basic propositions of the Post-Keynesian view of exchange rate determination, highlighting the properties of foreign exchange markets, the peculiarities of Brazil's position in the international monetary and financial system, and the various segments of the Brazilian foreign exchange market. After a review of the Post-Keynesian literature on the topic, we examine exchange rate determination empirically using two statistical approaches. To measure exchange rate volatility, we estimate Autoregressive Conditional Heteroskedasticity (ARCH) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models; to measure how the exchange rate varies in relation to real and financial variables and to the swaps, we estimate a Vector Autoregression (VAR) model. Both exercises are performed for the nominal and the real effective exchange rates. The results show that the swaps respond to exchange rate movements, attempting to offset their volatility, which indicates that the exchange rate is, at least to some extent, sensitive to the swap transactions conducted by the Central Bank. A further empirical finding is that the real effective exchange rate responds more strongly to the swap auctions than the nominal rate.
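As a minimal sketch of the two empirical exercises described above, assuming the Python arch and statsmodels packages and using simulated data in place of the actual BRL series and swap stock, one might fit a GARCH(1,1) to exchange-rate returns and a small VAR linking returns to changes in the swap position:

    import numpy as np
    import pandas as pd
    from arch import arch_model
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    # Hypothetical stand-ins for daily BRL/USD log returns (in %) and the Central Bank's swap stock
    returns = pd.Series(rng.standard_t(df=5, size=1500) * 0.8, name="fx_return")
    swaps = pd.Series(rng.normal(size=1500), name="swap_stock").cumsum()

    # GARCH(1,1) for the conditional variance (volatility clustering) of exchange-rate returns
    garch_res = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant").fit(disp="off")
    print(garch_res.params)                                # mu, omega, alpha[1], beta[1]
    var_forecast = garch_res.forecast(horizon=5).variance  # 5-day-ahead variance path

    # VAR for the joint dynamics of exchange-rate returns and changes in the swap position
    var_data = pd.concat([returns, swaps.diff()], axis=1).dropna()
    var_res = VAR(var_data).fit(2)   # lag order fixed at 2 for illustration
    irf = var_res.irf(10)            # impulse responses, e.g. a swap shock's effect on fx_return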
Abstract:
The problem of social diffusion has animated sociological thinking on topics ranging from the spread of an idea, an innovation, or a disease to the foundations of collective behavior and political polarization. While network diffusion has been a productive metaphor, the reality of diffusion processes is often muddier. Ideas and innovations diffuse differently from diseases, yet, with a few exceptions, the diffusion of ideas and innovations has been modeled under the same assumptions as the diffusion of disease. In this dissertation, I develop two new diffusion models for "socially meaningful" contagions that address two of the most significant problems with current diffusion models: (1) that contagions can only spread along observed ties, and (2) that contagions do not change as they spread between people. I augment insights from these statistical and simulation models with an analysis of an empirical case of diffusion: the use of enterprise collaboration software in a large technology company. The empirical study focuses on when people abandon innovations, a crucial and understudied aspect of the diffusion of innovations. Using timestamped posts, I analyze in fine detail when people abandon the software.
To address the first problem, I suggest a latent space diffusion model. Rather than treating ties as stable conduits for information, the latent space diffusion model treats ties as random draws from an underlying social space and simulates diffusion over that space. To address the second problem, I suggest a diffusion model with schemas. Rather than treating information as though it spreads unchanged, the schema diffusion model allows people to modify the information they receive to fit an underlying mental model before passing it on to others. Theoretically, the latent space model integrates actor ties and attributes simultaneously in a single social plane, while incorporating schemas into diffusion processes gives explicit form to the reciprocal influences that cognition and the social environment have on each other. Practically, the latent space diffusion model produces statistically consistent diffusion estimates where using the network alone does not, and the schema diffusion model shows that introducing cognitive processing into diffusion changes the rate and ultimate distribution of the spreading information. Combining the latent space model with a schema notion for actors thus improves our models of social diffusion both theoretically and practically.
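A toy sketch of the first idea (not the dissertation's estimator): actors are placed in a latent social space, observed ties are random draws whose probability decays with latent distance, and the contagion spreads as a function of latent distance rather than only along the observed ties. All functional forms and parameters below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    positions = rng.normal(size=(n, 2))   # latent social space (2 dimensions)

    # Observed network: one random draw from the latent space; tie probability decays with distance
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    adj = (rng.random((n, n)) < 0.2 * np.exp(-dist)) & ~np.eye(n, dtype=bool)
    adj = adj | adj.T   # symmetrize

    # Diffusion over the latent space: adoption probability depends on latent distance to
    # current adopters, not on whether a particular tie happened to be observed
    adopted = np.zeros(n, dtype=bool)
    adopted[rng.choice(n, size=3, replace=False)] = True
    for _ in range(15):
        d_to_adopters = dist[:, adopted]                         # distances to current adopters
        p_adopt = 1 - np.prod(1 - 0.3 * np.exp(-d_to_adopters), axis=1)
        adopted |= rng.random(n) < p_adopt

    print("observed ties:", adj.sum() // 2, "| final adopters:", int(adopted.sum()))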
The empirical case study focuses on how the changing value of an innovation, introduced by the innovation's network externalities, influences when people abandon it. I find that people are least likely to abandon an innovation when other people in their neighborhood currently use the software as well. The effect is particularly pronounced for supervisors' current use and for the number of supervisory team members who currently use the software. This case study not only points to an important process in the diffusion of innovation, but also suggests a new approach to collecting and analyzing data on organizational processes: computerized collaboration systems.
Abstract:
Left ventricular diastolic dysfunction leads to heart failure with preserved ejection fraction, an increasingly prevalent condition largely driven by modern-day lifestyle risk factors. As heart failure with preserved ejection fraction accounts for almost one-half of all patients with heart failure, appropriate nonhuman animal models are required to improve our understanding of the pathophysiology of this syndrome and to provide a platform for preclinical investigation of potential therapies. Hypertension, obesity, and diabetes are major risk factors for diastolic dysfunction and heart failure with preserved ejection fraction. This review focuses on murine models reflecting the disease continuum driven by these common risk factors. We describe various models of diastolic dysfunction and highlight models of heart failure with preserved ejection fraction reported in the literature. Strengths and weaknesses of the different models are discussed to aid translational scientists in selecting an appropriate model. We also draw attention to the fact that heart failure with preserved ejection fraction is difficult to diagnose in animal models and that there is therefore a paucity of well-described animal models of this increasingly important condition.
Abstract:
Calculations of synthetic spectropolarimetry are one means to test multidimensional explosion models for Type Ia supernovae. In a recent paper, we demonstrated that the violent merger of a 1.1 and 0.9 M⊙ white dwarf binary system is too asymmetric to explain the low polarization levels commonly observed in normal Type Ia supernovae. Here, we present polarization simulations for two alternative scenarios: the sub-Chandrasekhar mass double-detonation and the Chandrasekhar mass delayed-detonation model. Specifically, we study a 2D double-detonation model and a 3D delayed-detonation model, and calculate polarization spectra for multiple observer orientations in both cases. We find modest polarization levels (<1 per cent) for both explosion models. Polarization in the continuum peaks at ∼0.1–0.3 per cent and decreases after maximum light, in excellent agreement with spectropolarimetric data of normal Type Ia supernovae. Higher degrees of polarization are found across individual spectral lines. In particular, the synthetic Si II λ6355 profiles are polarized at levels that match remarkably well the values observed in normal Type Ia supernovae, while the low degrees of polarization predicted across the O I λ7774 region are consistent with the non-detection of this feature in current data. We conclude that our models can reproduce many of the characteristics of both flux and polarization spectra for well-studied Type Ia supernovae, such as SN 2001el and SN 2012fr. However, the two models considered here cannot account for the unusually high level of polarization observed in extreme cases such as SN 2004dt.
Abstract:
This Licentiate Thesis presents and discusses new contributions in applied mathematics directed towards scientific computing in sports engineering. It considers inverse problems of biomechanical simulation with rigid-body musculoskeletal systems, especially in cross-country skiing, in contrast to the main body of research on cross-country skiing biomechanics, which is based largely on experimental testing alone. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and places them in a more general framework. Papers D and E study real questions in cross-country skiing, which are modelled and simulated; the results give some interesting indications concerning these challenging questions, which can serve as a basis for further research, although the measurements are not accurate enough to give final answers. Paper C is a simulation study, more extensive than papers D and E, that is compared to electromyography measurements from the literature. Validation in biomechanical simulation is difficult, and reducing mathematical errors is one way of getting closer to realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics, and paper B is a technical report that describes the problem formulation, mathematical models, and simulations of paper A in more detail. The new modelling, together with the simulations, opens up new possibilities; as with simulations in other engineering fields, these must be handled with care in order to achieve reliable results. The results of this thesis indicate that mathematical modelling and numerical simulation can be very useful for describing cross-country skiing biomechanics, and the thesis thereby contributes to the possibility of developing and using such modelling and simulation techniques in this context.
Abstract:
This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling, and forecasting of financial asset volatility and correlations. The first two chapters provide tools for univariate applications, while the last two chapters develop multivariate methodologies. In Chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit, and forecasting accuracy compared with classical and Realized GARCH models. In Chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution, and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model that allows for a time-varying intercept and is implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator across numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde, and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
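For readers unfamiliar with realized measures, the sketch below (simulated prices, Python with pandas; not the estimators developed in the chapters) shows the basic building block these models take as input: realized variance from squared 5-minute log returns and a realized correlation from synchronized returns on two assets.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    n = int(6.5 * 3600)   # one trading day of 1-second observations
    idx = pd.date_range("2024-01-02 09:30", periods=n, freq="s")

    # Hypothetical correlated 1-second log prices for two assets
    shocks = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n) * 2e-4
    prices = pd.DataFrame(100 * np.exp(np.cumsum(shocks, axis=0)), index=idx, columns=["A", "B"])

    # 5-minute sampling mitigates microstructure noise at the cost of discarding observations
    ret5 = np.log(prices).resample("5min").last().diff().dropna()

    realized_var = (ret5 ** 2).sum()                      # realized variance for each asset
    realized_cov = (ret5["A"] * ret5["B"]).sum()          # realized covariance
    realized_corr = realized_cov / np.sqrt(realized_var["A"] * realized_var["B"])
    print(realized_var.round(6).to_dict(), round(realized_corr, 3))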
Abstract:
BACKGROUND: Risk assessment is fundamental in the management of acute coronary syndromes (ACS), enabling estimation of prognosis. AIMS: To evaluate whether the combined use of the GRACE and CRUSADE risk stratification schemes in patients with myocardial infarction outperforms each score individually in predicting mortality and haemorrhagic risk. METHODS: Observational retrospective single-centre cohort study including 566 consecutive patients admitted for non-ST-segment elevation myocardial infarction. The CRUSADE model increased the discriminatory performance of GRACE in predicting all-cause mortality, ascertained by Cox regression, demonstrating CRUSADE's independent and additive predictive value, which was sustained throughout follow-up. The cohort was divided into four subgroups: G1 (GRACE<141; CRUSADE<41); G2 (GRACE<141; CRUSADE≥41); G3 (GRACE≥141; CRUSADE<41); G4 (GRACE≥141; CRUSADE≥41). RESULTS: Outcomes and variables estimating clinical severity, such as admission Killip-Kimball class and left ventricular systolic dysfunction, deteriorated progressively across the subgroups (G1 to G4). Survival analysis differentiated three risk strata (G1, lowest risk; G2 and G3, intermediate risk; G4, highest risk). The GRACE+CRUSADE model showed higher prognostic performance (area under the curve [AUC] 0.76) than GRACE alone (AUC 0.70) for mortality prediction, further confirmed by the integrated discrimination improvement index. Moreover, combined GRACE+CRUSADE risk assessment appeared valuable in delineating bleeding risk in this setting, identifying G4 as a very high-risk subgroup (hazard ratio 3.5; P<0.001). CONCLUSIONS: Combined risk stratification with the GRACE and CRUSADE scores can improve the individual discriminatory power of the GRACE and CRUSADE models in predicting all-cause mortality and bleeding. This combined assessment is a practical approach that is potentially advantageous in treatment decision-making.
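Purely as an illustration of the kind of comparison reported (AUC of GRACE alone versus a combination of GRACE and CRUSADE), with simulated scores and outcomes rather than the study's data or its Cox/IDI analysis, one could compute:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 566                                   # cohort size from the abstract
    grace = rng.normal(140, 30, n)            # hypothetical GRACE scores
    crusade = rng.normal(40, 15, n)           # hypothetical CRUSADE scores
    logit = -10 + 0.05 * grace + 0.04 * crusade
    died = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated mortality outcome

    # Discrimination of GRACE alone vs. GRACE combined with CRUSADE
    X = np.column_stack([grace, crusade])
    auc_grace = roc_auc_score(died, grace)
    combo = LogisticRegression().fit(X, died)
    auc_combined = roc_auc_score(died, combo.predict_proba(X)[:, 1])
    print(f"AUC GRACE alone: {auc_grace:.2f}  AUC GRACE+CRUSADE: {auc_combined:.2f}")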
Abstract:
Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. in the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in recent years. The overall goal of this dissertation was to use current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at the pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, together with a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens extends shelf-life and mitigates bacterial growth, but storage at lower temperatures increases storage costs. Nonlinear programming was therefore used to optimize the storage temperature of leafy greens in the supply chain, minimizing storage cost while maintaining the desired levels of sensory quality and microbial safety. Most outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 in the U.S. have occurred during July-November. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall), simulating a farm in a major leafy-greens-producing area of California, was developed. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model agree with the observed seasonality of outbreaks. Overall, this dissertation developed and applied growth, survival, and death models for enteric pathogens in leafy greens across production and the supply chain.
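As a simplified illustration of the growth side of such models (not the dissertation's fitted equations), the sketch below uses a Ratkowsky square-root secondary model for the maximum specific growth rate as a function of temperature and integrates it over a hypothetical time-temperature profile in which refrigeration is lost during transport; all parameter values are assumptions for illustration.

    import numpy as np

    # Square-root (Ratkowsky) secondary model: sqrt(mu_max) = b * (T - T_min) for T > T_min
    b, T_min = 0.023, 2.5          # illustrative parameters (1/sqrt(h) per degC, degC)

    def mu_max(temp_c):
        # Maximum specific growth rate (per hour) at a given temperature
        return np.where(temp_c > T_min, (b * (temp_c - T_min)) ** 2, 0.0)

    # Hypothetical time-temperature profile: hourly readings over 24 h of transport,
    # with refrigeration (5 degC) lost after 8 h (rising to 15 degC)
    hours = np.arange(24)
    temps = np.where(hours < 8, 5.0, 15.0)

    # Exponential-phase approximation (no lag, no death): N(t) = N0 * exp(integral of mu dt)
    growth_integral = np.sum(mu_max(temps) * 1.0)          # 1-hour steps
    initial_load = 2.0                                     # log10 CFU/g
    final_load = initial_load + growth_integral / np.log(10)
    print(f"Predicted load after 24 h: {final_load:.2f} log10 CFU/g")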
Abstract:
Master's degree in Accounting and Management of Financial Institutions (Mestrado em Contabilidade e Gestão das Instituições Financeiras)
Abstract:
This dissertation proposes statistical methods to formulate, estimate, and apply complex transportation models. Two main problems are analyzed. The first is an econometric problem concerning the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated against unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with estimating unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented to solve the multidimensional integral associated with the unordered modeling structure. The problems arising from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments were conducted on both simulated and real data; in particular, the discrete-continuous models are estimated and applied to vehicle ownership and use on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution, applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles among its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Ultimately, this work contributes to the transportation modeling literature by proposing econometric model formulations for discrete-continuous variables, more efficient methods for calculating multivariate normal probabilities, and random-effects models for free-flow speed estimation that take the survey design into account. All methods are rigorously validated on both real and simulated data.
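The multidimensional integral referred to above is a rectangle probability of a multivariate normal distribution. As a small sketch of the kind of evaluation involved, assuming SciPy (whose multivariate normal CDF is computed numerically with a Genz-type algorithm) and a hypothetical four-alternative error structure:

    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical 4-dimensional error correlation structure for an unordered (probit-type) model
    corr = np.array([
        [1.0, 0.4, 0.3, 0.2],
        [0.4, 1.0, 0.5, 0.1],
        [0.3, 0.5, 1.0, 0.3],
        [0.2, 0.1, 0.3, 1.0],
    ])
    upper = np.array([0.5, 1.0, -0.2, 0.8])   # illustrative utility-difference thresholds

    # P(X1 <= 0.5, X2 <= 1.0, X3 <= -0.2, X4 <= 0.8): the multidimensional integral that
    # appears in unordered-probit likelihoods, evaluated numerically by SciPy
    prob = multivariate_normal(mean=np.zeros(4), cov=corr).cdf(upper)
    print("Choice probability:", prob)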
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Agronomia e Medicina Veterinária, Programa de Pós-Graduação em Agronegócios, 2016.
Abstract:
This dissertation applies time series methods to the modelling of the FTSE100 financial index. Based on the return series, stationarity was examined with the Phillips-Perron test, normality with the Jarque-Bera test, and independence with the autocorrelation function and the Ljung-Box test; GARCH models were then used to model and forecast the conditional variance (volatility) of the financial series under study. Financial time series exhibit peculiar characteristics, with some periods more volatile than others. These periods are distributed in clusters, suggesting a degree of dependence over time. Given the presence of such volatility clusters (non-linearity), conditional heteroskedastic models are required, that is, models in which the conditional variance of a time series is not constant and depends on time. In view of the large variability of financial time series over time, the ARCH models (Engle, 1982) and their GARCH generalization (Bollerslev, 1986) prove the most suitable for the study of volatility. In particular, these non-linear models have a random conditional variance, and through them it is possible to estimate and forecast the future volatility of the series. Finally, an empirical study is presented, based on a proposed modelling and forecasting exercise on a set of real data from the FTSE100 financial index.
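For reference, the conditional-variance recursions of the models cited above (Engle, 1982; Bollerslev, 1986), in standard notation with return innovations ε_t = σ_t z_t, where z_t is i.i.d. with zero mean and unit variance:

    \mathrm{ARCH}(q):\quad \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2,
    \qquad
    \mathrm{GARCH}(p,q):\quad \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2,

with ω > 0 and α_i, β_j ≥ 0; in the GARCH(1,1) case, α1 + β1 < 1 ensures covariance stationarity, and values of α1 + β1 close to one correspond to the persistent volatility clusters described above.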
Abstract:
This paper applies two measures to assess spillovers across markets: the Diebold and Yilmaz (2012) spillover index and the Hafner and Herwartz (2006) volatility impulse response analysis of multivariate GARCH models. We use two data sets: daily realized volatility estimates taken from the Oxford-Man realized volatility (RV) library, running from the beginning of 2000 to October 2016, for the S&P500 and the FTSE, plus ten years of daily returns for the New York Stock Exchange Index and the FTSE 100 index, from 3 January 2005 to 31 January 2015. Both data sets span the Global Financial Crisis (GFC) and the subsequent European Sovereign Debt Crisis (ESDC). The spillover index captures the transmission of volatility to and from markets, plus net spillovers. The key difference between the measures is that the spillover index captures an average of spillovers over a period, whereas volatility impulse responses (VIRF) have to be calibrated to conditional volatility estimated at a particular point in time; the VIRF provide information about the impact of independent shocks on volatility. In the latter analysis, we explore the impact of three different shocks: the onset of the GFC, which we date as 9 August 2007 (GFC1); the point at which the financial crisis came to a head a year later, on 15 September 2008 (GFC2); and 9 May 2010. Our modelling includes leverage and asymmetric effects in the context of a multivariate GARCH framework, analysed using both BEKK and diagonal BEKK (DBEKK) models. A key result is that the impact of negative shocks is larger, in terms of the effects on variances and covariances, but shorter in duration, in this case a difference between three and six months.
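A simplified sketch of the spillover-index construction (Python with statsmodels): the index is built from a forecast-error variance decomposition of a VAR fitted to volatility series. For brevity, the sketch uses simulated persistent series in place of the Oxford-Man data and a Cholesky-based decomposition rather than the generalized FEVD of Diebold and Yilmaz (2012), so it is only an approximation of the published measure.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(4)
    n, k = 2000, 2
    # Simulated persistent, cross-correlated series standing in for daily realized volatilities
    innov = rng.normal(size=(n, k)) @ np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]])).T
    series = np.zeros((n, k))
    for t in range(1, n):
        series[t] = 0.7 * series[t - 1] + innov[t]
    data = pd.DataFrame(series, columns=["SP500_RV", "FTSE_RV"])

    res = VAR(data).fit(maxlags=5, ic="aic")
    shares = res.fevd(10).decomp[:, -1, :]      # variance shares at the 10-step horizon (k x k)

    # Total spillover index: average share of forecast-error variance due to other markets
    total_spillover = 100 * (shares.sum() - np.trace(shares)) / k
    # Directional spillovers received by / transmitted from each market
    received = 100 * (shares.sum(axis=1) - np.diag(shares))
    transmitted = 100 * (shares.sum(axis=0) - np.diag(shares))

    print(f"Total spillover index: {total_spillover:.1f}%")
    print(pd.DataFrame({"from others": received, "to others": transmitted}, index=data.columns))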
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state, and local agencies must determine how to implement projects at lowest cost, yet the most common methods of implementation planning are suboptimal with respect to cost. Federal, state, and local agencies can obtain greater returns on their energy conservation investment than under traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private building owners are under mandates to conserve energy; for example, Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection when leveraging energy services companies or utilities; the two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). A further model leverages a single congressional appropriation to implement ECM projects, with returns from implemented projects used to fund additional ones (Chapter 3). In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence ECM project selection and the size of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage, a comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses McCormick (1976) inequalities to re-express constraints involving products of binary variables as an exact linearization (related to the convex hull of those constraints). This model also shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
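The exact linearization mentioned above is the standard McCormick construction for a product of binary variables: if x, y ∈ {0,1} and z is meant to equal the interaction term xy (for example, a sub- or superadditive savings adjustment that is active only when both ECM projects are selected), the bilinear constraint can be replaced by the linear inequalities

    z \le x, \qquad z \le y, \qquad z \ge x + y - 1, \qquad z \ge 0,

which describe the convex hull of {(x, y, xy) : x, y ∈ {0,1}} and therefore force z = xy at every binary solution, allowing the interaction effects to enter a mixed-integer program without any nonlinear terms.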