882 results for VaR estimation methods, statistical methods, risk management, investments


Relevância:

100.00%

Publicador:

Resumo:

This thesis examines the dynamics of firm-level financing and investment decisions in six Southeast Asian countries. The study provides empirical evidence on the impact of changes in firm-level financing decisions during the period of financial liberalization by considering the debt and equity financing decisions of a set of non-financial firms. The empirical results show that firms in Indonesia, Pakistan, and South Korea adjust toward optimal debt and equity ratios relatively faster than firms in the other countries in response to banking sector and stock market liberalization. In addition, contrary to the widely held belief that firms adjust their financial ratios to industry levels, the results indicate that industry factors do not significantly affect the speed of capital structure adjustment. This study also shows that non-linear estimation methods are more appropriate than linear estimation methods for capturing changes in capital structure. The empirical results also show that international stock market integration of these countries has significantly reduced the equity risk premium as well as the firm-level cost of equity capital. Thus, stock market liberalization is associated with a decrease in firms' cost of equity capital. Developments in securities market infrastructure have also reduced the cost of equity capital. However, with increased integration there is the possibility of capital outflows from the emerging markets, which might reverse the pattern of decreasing cost of capital in these markets.
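The speed-of-adjustment idea in this abstract can be illustrated with a simple partial adjustment recursion (a generic sketch, not the thesis's estimator; the adjustment speed and target ratio below are hypothetical):

```python
def adjust_ratio(current, target, speed):
    """One period of partial adjustment: close a fraction `speed`
    of the remaining gap between the current and target ratio."""
    return current + speed * (target - current)

# Hypothetical firm starting at a 0.60 debt ratio with a 0.40 target.
ratio = 0.60
path = []
for _ in range(5):
    ratio = adjust_ratio(ratio, target=0.40, speed=0.5)
    path.append(round(ratio, 4))

print(path)  # a larger `speed` closes the gap toward the target in fewer periods
```

A faster-adjusting firm (larger `speed`) reaches its optimal ratio sooner, which is what the cross-country comparison in the abstract measures.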

Relevância:

100.00%

Publicador:

Resumo:

This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of the cumulative abnormal returns (CARs). While the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To minimise this estimation-efficiency problem we re-estimated the ARs using the GJR-GARCH estimation method. We find that the results vary with both the choice of model and the estimation method. Beyond these variations, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement CARs for target firm shareholders under both the OLS and GJR-GARCH methods across all three model specifications. However, post-event CARs for acquiring firm shareholders were insignificant under both estimation methods across the three specifications. The GJR-GARCH method seems to generate larger CARs than the OLS method.
Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium-sized firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms. This implies that small and large target firms react to new information in a way that results in a more efficient market. For acquirer firms, our measure of liquidity captures strong return continuations for small firms under the OLS estimates for both the CAPM and Fama-French three-factor models, and under the GJR-GARCH estimates only for the Carhart model. Post-announcement bootstrapped simulated CARs confirmed our earlier results.
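The event-study logic behind abnormal returns can be sketched with a single-factor market model estimated by OLS (a generic illustration, not the study's own code; the data, window lengths, and parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data: an estimation window followed by an event window.
n_est, n_evt = 200, 10
mkt = rng.normal(0.0005, 0.01, n_est + n_evt)  # market excess returns
true_alpha, true_beta = 0.0002, 1.2
stock = true_alpha + true_beta * mkt + rng.normal(0, 0.005, n_est + n_evt)

# Fit the market model on the estimation window only.
X = np.column_stack([np.ones(n_est), mkt[:n_est]])
alpha, beta = np.linalg.lstsq(X, stock[:n_est], rcond=None)[0]

# Abnormal returns in the event window: actual minus model-predicted returns.
ar = stock[n_est:] - (alpha + beta * mkt[n_est:])
car = ar.sum()  # cumulative abnormal return over the event window
print(round(beta, 3), round(car, 4))
```

The multi-factor versions in the study add the Fama-French and momentum factors as extra regressors, and the GJR-GARCH version replaces the homoskedastic OLS error with an asymmetric conditional-variance specification.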

Relevância:

100.00%

Publicador:

Resumo:

2000 Mathematics Subject Classification: 62P10, 62H30

Relevância:

100.00%

Publicador:

Resumo:

This dissertation addresses three issues in the political economy of growth literature. The first study empirically tests the hypothesis that income inequality influences the size of a country's sovereign debt for a sample of developing countries over the period 1970–1990. The argument examined is that governments tend to yield to popular pressure to engage in redistributive policies, partially financed by foreign borrowing. Facing increased risk of default, international creditors limit the credit they extend, with the result that borrowing countries invest less and grow at a slower pace. The findings do not seem to support a negative relationship between inequality and sovereign debt, as there is evidence of increases in multilateral, countercyclical flows until the mid-1980s in Latin America. The hypothesis would hold for the period 1983–1990. Debt flows and levels seem to be positively correlated with growth, as expected.

The second study empirically investigates the hypothesis that pronounced levels of inequality lead to unconsolidated democracies. We test for a nonmonotonic relationship between inequality and democracy in a sample of Latin American countries over the period 1970–2000, where democracy appears to consolidate at some intermediate level of inequality. We find that the nonmonotonic relationship holds using instrumental variables methods. Bolivia appears to be a case of unconsolidated democracy. The positive relationship between per capita income and democracy disappears once fixed effects are introduced.

The third study explores the nonlinear relationship between per capita income and private saving levels in Latin America. Several estimation methods are presented; however, only the estimation of a dynamic specification with a generalized method of moments (GMM) estimator yields consistent estimates with increased efficiency. Results support the hypothesis that income positively affects private saving, while system GMM reveals nonlinear effects at income levels exceeding those included in this sample for the period 1960–1994. We also find that growth, government dissaving, and tightening of credit constraints have highly significant, positive effects on private saving.

Relevância:

100.00%

Publicador:

Resumo:

Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions.

This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on data collected by radar traffic detectors installed along a freeway corridor. DNNs comprise a class of neural networks that are particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods of data imputation to account for the missing data usually encountered when collecting data with traffic detectors. It was also necessary to identify a method to estimate travel time on the freeway corridor from point-detector data. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) works as well as the PCAB method, and both outperform the other methods. This study also compared the travel time prediction performance of three DNN topologies with different memory setups. The results show that one DNN topology (the time-delay neural network) outperforms the other two for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) topology that has been used in a number of previous studies for travel time prediction.
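The average speed method mentioned above can be sketched as follows (a generic illustration with hypothetical detector spacings and spot speeds, not the dissertation's implementation):

```python
def travel_time_avg_speed(segment_lengths_km, spot_speeds_kmh):
    """Estimate corridor travel time by assuming each segment is
    traversed at the constant speed measured by its point detector."""
    hours = sum(length / speed
                for length, speed in zip(segment_lengths_km, spot_speeds_kmh))
    return hours * 3600.0  # convert hours to seconds

# Hypothetical 3-segment corridor between radar detectors.
lengths = [0.8, 1.2, 1.0]    # km between detectors
speeds = [95.0, 60.0, 80.0]  # km/h measured at each detector
print(round(travel_time_avg_speed(lengths, speeds)))  # → 147 (seconds)
```

The PCAB method refines this by interpolating speeds between detectors under a piecewise constant acceleration assumption rather than holding each spot speed constant over its segment.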

Relevância:

100.00%

Publicador:

Resumo:

The need for continuously recording rain gauges makes it difficult to determine the rainfall erosivity factor (R-factor) of the (R)USLE model in areas without good temporal data coverage. In mainland Spain, the Nature Conservation Institute (ICONA) determined the R-factor at only a few selected pluviograph stations, so simple estimates of the R-factor are of great interest. The objectives of this study were: (1) to identify a readily available estimate of the R-factor for mainland Spain; (2) to discuss the applicability of a single (global) estimate based on analysis of regional results; (3) to evaluate the effect of record length on estimate precision and accuracy; and (4) to validate an available regression model developed by ICONA. Four estimators based on monthly precipitation were computed at 74 rainfall stations throughout mainland Spain. The regression analysis conducted at the global level clearly showed that the modified Fournier index (MFI) ranked first among all assessed indices. The applicability of this preliminary global model across mainland Spain was evaluated by analyzing regression results obtained at the regional level. It was found that three contiguous regions of eastern Spain (Catalonia, the Valencian Community and Murcia) could have a different rainfall erosivity pattern, so a new regression analysis was conducted by dividing mainland Spain into two areas: eastern Spain and the plateau-lowland area. A comparative analysis concluded that the bi-areal regression model based on MFI for a 10-year record length provided a simple, precise and accurate estimate of the R-factor in mainland Spain. Finally, validation of the regression model proposed by ICONA showed that the R-ICONA index overpredicted the R-factor by approximately 19%.
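The modified Fournier index used in the study is computed from monthly precipitation totals as MFI = Σ pᵢ² / P, where pᵢ is the precipitation in month i and P is the annual total (a standard formulation; the rainfall values below are hypothetical):

```python
def modified_fournier_index(monthly_precip_mm):
    """MFI: sum of squared monthly precipitation over annual precipitation."""
    annual = sum(monthly_precip_mm)
    return sum(p * p for p in monthly_precip_mm) / annual

# Hypothetical monthly totals (mm) for one station-year.
monthly = [30, 25, 40, 45, 50, 20, 10, 15, 60, 80, 70, 35]
print(round(modified_fournier_index(monthly), 1))  # → 51.0
```

Because MFI needs only monthly totals, it can be computed from ordinary (non-recording) rain gauge records, which is exactly why it is attractive as a proxy for the R-factor.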

Relevância:

100.00%

Publicador:

Resumo:

In the study of time series, the usual stochastic processes assume that the marginal distributions are continuous and are, in general, not suitable for modelling count series, since their non-linear characteristics pose some statistical problems, mainly in parameter estimation. We therefore investigated appropriate methodologies for the analysis and modelling of series with discrete marginal distributions. In this context, Al-Osh and Alzaid (1987) and McKenzie (1988) introduced into the literature the class of non-negative integer-valued autoregressive models, the INAR processes. These models have been treated frequently in scientific articles over the last decades, as their importance in applications across several fields of knowledge has attracted great interest. In this work, after a brief review of time series and the classical methods for their analysis, we present the first-order non-negative integer-valued autoregressive model, INAR(1), and its extension to order p, their properties, and some parameter estimation methods, namely the Yule-Walker method, the Conditional Least Squares (CLS) method, the Conditional Maximum Likelihood (CML) method and the Quasi-Maximum Likelihood (QML) method. We also present an automatic order selection criterion for INAR models, based on the corrected Akaike Information Criterion, AICc, one of the criteria used to determine the order of autoregressive (AR) models. Finally, we present an application of the INAR methodology to real count data from the maritime transport and insurance sectors of Cape Verde.
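An INAR(1) process, X_t = α∘X_{t−1} + ε_t with binomial thinning ∘, can be simulated and its thinning parameter estimated by the Yule-Walker method, which here reduces to the lag-1 sample autocorrelation. This is a generic sketch with hypothetical parameter values, not the thesis's code:

```python
import math
import random

random.seed(42)

def simulate_inar1(n, alpha, lam):
    """Simulate INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where ∘ is
    binomial thinning and eps_t ~ Poisson(lam)."""
    x = [5]  # arbitrary starting count
    for _ in range(n - 1):
        # Binomial thinning: each unit of X_{t-1} survives with prob alpha.
        survivors = sum(1 for _ in range(x[-1]) if random.random() < alpha)
        # Poisson innovation via Knuth's multiplication method (fine for small lam).
        k, p, target = 0, 1.0, math.exp(-lam)
        while True:
            p *= random.random()
            if p <= target:
                break
            k += 1
        x.append(survivors + k)
    return x

def yule_walker_alpha(x):
    """Yule-Walker estimate of alpha: the lag-1 sample autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

series = simulate_inar1(5000, alpha=0.5, lam=2.0)
alpha_hat = yule_walker_alpha(series)
print(round(alpha_hat, 2))  # should be close to the true alpha of 0.5
```

The CLS, CML and QML estimators discussed in the work exploit the conditional distribution of X_t given X_{t−1} instead of only its autocorrelation structure.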

Relevância:

100.00%

Publicador:

Resumo:

Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. In Chapter 2 we first consider a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of ten, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability.
Chapter 4 focuses on the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for the statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the one-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive high posterior support. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool for estimating MIDAS regressions.
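MIDAS regressions aggregate a high-frequency regressor into a low-frequency one through a parsimonious weight function; a common choice is the exponential Almon polynomial. The sketch below is generic (the θ values and lag length are hypothetical, not the thesis's estimates):

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights, normalized to sum to one."""
    raw = [math.exp(theta1 * k + theta2 * k ** 2) for k in range(1, n_lags + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# Aggregate roughly one month (~22 trading days) of daily commodity
# price changes into a single monthly regressor.
weights = exp_almon_weights(22, theta1=0.1, theta2=-0.05)
daily_changes = [0.2] * 22  # hypothetical flat daily price changes
monthly_signal = sum(w * x for w, x in zip(weights, daily_changes))
print(round(sum(weights), 6), round(monthly_signal, 6))
```

Because the whole lag profile is governed by two parameters, the weights can put most mass on the most recent days, which is how MIDAS captures the short-lived daily effects the chapter exploits.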

Relevância:

100.00%

Publicador:

Resumo:

Background: Repeated self-harm represents the single strongest risk factor for suicide. To date no study with full national coverage has examined the pattern of repeated hospital presentations due to self-harm among young people. Methods: Data on consecutive self-harm presentations were obtained from the National Self-Harm Registry Ireland. Socio-demographic and behavioural characteristics of individuals aged 10–29 years who presented with self-harm to emergency departments in Ireland (2007–2014) were analysed. Risk of long-term repetition was assessed using survival analysis, and time differences between the order of presentations using generalised estimating equation analysis. Results: The total sample comprised 28,700 individuals and 42,642 presentations. Intentional drug overdose was the most prevalent method (57.9%). Repetition of self-harm occurred in 19.2% of individuals during the first year following a first presentation, of whom the majority (62.7%) engaged in only one repeat act. Overall, the risk of repeated self-harm was similar between males and females. However, in the 20–24-year-old age group males were at higher risk than females. Those who used self-cutting were at higher risk of repetition than those who used intentional drug overdose, particularly among females. Age was associated with repetition only among females; in particular, adolescents (15–19 years old) were at higher risk than young emerging adults (20–24 years old). Repeated self-harm risk increased significantly with the number of previous self-harm episodes. Time differences between first self-harm presentations were detected: the time between the second and third presentation increased relative to the time between the first and second presentation among low-frequency repeaters (patients with only 3 presentations within 1 year of a first presentation), whereas the same interval decreased among high-frequency repeaters (patients with 4 to more than 30 presentations).
Conclusion: Young people with the highest risk of repeated self-harm were 15–19-year-old females and 20–24-year-old males. Self-cutting was the method associated with the highest risk of self-harm repetition. Time between first self-harm presentations represents an indicator of subsequent repetition. To prevent repeated self-harm in young people, all individuals presenting at emergency departments due to self-harm should be provided with a risk assessment covering psychosocial characteristics, history of self-harm and time between first presentations.

Relevância:

100.00%

Publicador:

Resumo:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation toward high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or the more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases.
We derive sharp conditions on the »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
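The classical change-in-the-mean CUSUM statistic compares partial sums against their expected share of the total sum. A minimal univariate sketch (the high-dimensional, dependence-adjusted versions developed in the thesis are substantially more involved):

```python
def cusum_changepoint(x):
    """Return (max CUSUM statistic, argmax index) for a change in mean:
    T_n(k) = |S_k - (k/n) * S_n| / (sigma_hat * sqrt(n))."""
    n = len(x)
    mean = sum(x) / n
    sigma = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    s, sn = 0.0, sum(x)
    best, best_k = 0.0, 0
    for k in range(1, n):
        s += x[k - 1]
        stat = abs(s - k * sn / n) / (sigma * n ** 0.5)
        if stat > best:
            best, best_k = stat, k
    return best, best_k

# Deterministic toy series with a mean shift at index 50.
x = [0.0] * 50 + [2.0] * 50
stat, k_hat = cusum_changepoint(x)
print(k_hat)  # → 50, the estimated change location
```

The maximizing index serves as the change point estimate; in the panel setting of the thesis, weighted versions of such statistics are aggregated across panels.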

Relevância:

100.00%

Publicador:

Resumo:

Structured abstract. Purpose: To deepen, in the grocery retail context, the roles of consumer perceived value and consumer satisfaction as antecedent dimensions of customer loyalty intentions. Design/methodology/approach: Employing a short version (12 items) of the original 19-item PERVAL scale of Sweeney & Soutar (2001), a structural equation modeling approach was applied to investigate the statistical properties of the indirect influence on loyalty of a reflective second-order customer perceived value model. The performance of three alternative estimation methods was compared through bootstrapping techniques. Findings: Results provided i) support for the use of the short form of the PERVAL scale in measuring consumer perceived value; ii) evidence that the influence of the four highly correlated independent latent predictors on satisfaction was well summarized by a higher-order reflective specification of consumer perceived value; iii) evidence that the emotional and functional dimensions were determinants of the relationship with the retailer; iv) evidence that parameter bias under the three estimation methods was significant only for small bootstrap sample sizes. Research limitations/implications: Future research is needed to explore the use of the short form of the PERVAL scale in more homogeneous groups of consumers. Originality/value: First, to indirectly explain customer loyalty as mediated by customer satisfaction, a recent short form of the PERVAL scale and a second-order reflective conceptualization of value were adopted. Second, three alternative estimation methods were used and compared through bootstrapping and simulation procedures.
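The bootstrap comparison described above rests on resampling the data and re-estimating a parameter many times; a minimal generic sketch (the statistic, sample, and replication count here are hypothetical, not the paper's SEM setup):

```python
import random

random.seed(1)

def bootstrap_bias_se(sample, statistic, n_boot=1000):
    """Estimate the bias and standard error of `statistic` by
    resampling `sample` with replacement n_boot times."""
    theta = statistic(sample)
    reps = []
    for _ in range(n_boot):
        resample = [random.choice(sample) for _ in sample]
        reps.append(statistic(resample))
    mean_rep = sum(reps) / n_boot
    bias = mean_rep - theta                       # bootstrap bias estimate
    se = (sum((r - mean_rep) ** 2 for r in reps)  # bootstrap standard error
          / (n_boot - 1)) ** 0.5
    return bias, se

data = [random.gauss(5.0, 2.0) for _ in range(100)]
bias, se = bootstrap_bias_se(data, statistic=lambda s: sum(s) / len(s))
print(round(bias, 3), round(se, 3))  # bias near 0; se near 2/sqrt(100) = 0.2
```

Repeating this for each estimation method and for progressively smaller resample sizes is one way to surface the small-sample bias the findings refer to.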

Relevância:

100.00%

Publicador:

Resumo:

A growing literature seeks to explain differences in individuals' self-reported satisfaction with their jobs. The evidence so far has mainly been based on cross-sectional data, and when panel data have been used, individual unobserved heterogeneity has been modelled with an ordered probit model with random effects. This article makes use of longitudinal data for Denmark, taken from the 1995–1999 waves of the European Community Household Panel, and estimates fixed effects ordered logit models using the estimation methods proposed by Ferrer-i-Carbonell and Frijters (2004) and Das and van Soest (1999). For comparison and testing purposes a random effects ordered probit is also estimated. Estimations are carried out separately on the samples of men and women for individuals' overall satisfaction with the jobs they hold. We find that using the fixed effects approach (which clearly rejects the random effects specification) considerably reduces the number of key explanatory variables, though the impact of central economic factors is the same as in previous studies. Moreover, the determinants of job satisfaction differ considerably between the genders, in particular once individual fixed effects are allowed for.

Relevância:

100.00%

Publicador:

Resumo:

Sustainability has been increasingly recognised as an integral part of highway infrastructure development. In practice, however, because financial return is still a project's top priority for many, environmental aspects tend to be overlooked or treated as a burden, since they add to project costs. Sustainability and its implications have a far-reaching effect on each project over time. Therefore, given highway infrastructure's long life span and huge capital demand, the consideration of environmental cost/benefit issues is all the more crucial in life-cycle cost analysis (LCCA). To date, there is little in the existing literature on viable estimation methods for environmental costs. This situation presents the potential for focused studies on environmental costs and issues in the context of life-cycle cost analysis. This paper discusses a research project which aims to integrate environmental cost elements and issues into a conceptual framework for life-cycle cost analysis of highway projects. Cost elements and issues concerning the environment were first identified through the literature. Through questionnaires, these environmental cost elements will be validated by practitioners before being consolidated into an extension of existing and worked LCCA models. A holistic decision support framework is being developed to assist highway infrastructure stakeholders in evaluating their investment decisions. This will generate financial returns while maximising environmental benefits and sustainability outcomes.
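Life-cycle cost analysis discounts all cost elements over the asset's life; a minimal sketch showing how a monetized environmental cost stream could enter alongside construction and maintenance costs (all figures and the discount rate are hypothetical):

```python
def life_cycle_cost(initial_cost, annual_costs, discount_rate):
    """Net present value of an initial cost plus a stream of annual costs.
    annual_costs[t] is the total cost incurred at the end of year t+1."""
    npv = initial_cost
    for t, cost in enumerate(annual_costs, start=1):
        npv += cost / (1 + discount_rate) ** t
    return npv

years = 30
maintenance = [120_000] * years    # hypothetical annual maintenance cost
environmental = [40_000] * years   # hypothetical monetized environmental cost
combined = [m + e for m, e in zip(maintenance, environmental)]

lcc = life_cycle_cost(10_000_000, combined, discount_rate=0.04)
print(round(lcc))  # total discounted life-cycle cost
```

Treating environmental costs as just another discounted stream is the simplest possible integration; the framework discussed in the paper is concerned with identifying and validating which such cost elements to include.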