32 results for "Standardised returns"
Abstract:
This paper investigates whether the non-normality typically observed in daily stock-market returns could arise from the joint existence of breaks and GARCH effects. It proposes a data-driven procedure to credibly identify the number and timing of breaks and applies it to the benchmark stock-market indices of 27 OECD countries. The findings suggest that a substantial element of the observed deviations from normality might indeed be due to the co-existence of breaks and GARCH effects. However, the presence of structural changes, rather than the GARCH effects, is found to be the primary reason for the non-normality. Also, some excess kurtosis remains that is unlikely to be linked to the specification of the conditional volatility or the presence of breaks. Finally, an interesting side result suggests that GARCH models have limited capacity to forecast stock-market volatility.
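The paper's central mechanism, that structural breaks alone can mimic the fat tails usually attributed to GARCH, can be illustrated with a minimal simulation. This is a hypothetical sketch, not the paper's identification procedure: two exactly Gaussian regimes separated by a single variance break already produce pooled kurtosis well above the normal value of 3.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two Gaussian regimes separated by one variance break:
# sigma = 1 before the break, sigma = 2 after it.
returns = np.concatenate([rng.normal(0.0, 1.0, n),
                          rng.normal(0.0, 2.0, n)])

def kurtosis(x):
    """Raw (non-excess) kurtosis: E[(x - mean)^4] / Var(x)^2."""
    d = x - x.mean()
    return (d**4).mean() / ((d**2).mean() ** 2)

# Each regime is exactly normal (kurtosis 3), but the pooled series
# has kurtosis near 4.1 (the equal-weight variance-mixture value):
# spurious "fat tails" created purely by the break.
print(round(kurtosis(returns), 2))
```

The same logic motivates the paper's question: how much of the observed excess kurtosis survives once credible break dates are removed from the sample.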
Abstract:
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment in the multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed relatively efficient DMUs in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs, which are in line with the general VRS technology. The suggested procedure is illustrated using real data. © 2011 Elsevier B.V. All rights reserved.
Abstract:
This article examines whether UK portfolio returns are time varying, so that expected returns follow an AR(1) process as proposed by Conrad and Kaul for the USA. It explores this hypothesis for four portfolios formed on the basis of market capitalization. The portfolio returns are modelled using a Kalman filter signal-extraction model in which the unobservable expected return is the state variable, allowed to evolve as a stationary first-order autoregressive process. It finds that this model is a good representation of returns and can account for most of the autocorrelation present in observed portfolio returns. This study concludes that UK portfolio returns are time varying and that the nature of the time variation appears to introduce a substantial amount of autocorrelation into portfolio returns. Like Conrad and Kaul, it finds a link between the extent to which portfolio returns are time varying and the size of firms within a portfolio, but not the monotonic one found for the USA. © 2004 Taylor and Francis Ltd.
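The state-space form described here (observed return = unobserved AR(1) expected return + noise) admits a simple scalar Kalman filter. The sketch below uses illustrative parameter values, not the paper's estimates, and simulated rather than UK data:

```python
import numpy as np

rng = np.random.default_rng(42)
T, phi, q, h = 20_000, 0.9, 0.05, 1.0   # illustrative parameters

# Simulate: state mu_t = phi * mu_{t-1} + w_t, observation r_t = mu_t + v_t
mu = np.zeros(T)
for t in range(1, T):
    mu[t] = phi * mu[t - 1] + rng.normal(0.0, np.sqrt(q))
r = mu + rng.normal(0.0, np.sqrt(h), T)

# Kalman filter: track the posterior mean m and variance p of mu_t
m, p = 0.0, q / (1 - phi**2)            # start from the stationary prior
filtered = np.empty(T)
for t in range(T):
    m_pred, p_pred = phi * m, phi**2 * p + q   # predict
    k = p_pred / (p_pred + h)                  # Kalman gain
    m = m_pred + k * (r[t] - m_pred)           # update mean
    p = (1 - k) * p_pred                       # update variance
    filtered[t] = m

# The filtered series tracks the unobservable expected return
corr = np.corrcoef(filtered, mu)[0, 1]
```

In the article's application, the persistence of the filtered state is what generates the autocorrelation observed in raw portfolio returns.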
Abstract:
Models for the conditional joint distribution of the U.S. Dollar/Japanese Yen and Euro/Japanese Yen exchange rates, from November 2001 until June 2007, are evaluated and compared. The conditional dependency is allowed to vary across time, as a function of either historical returns or a combination of past return data and option-implied dependence estimates. Using prices of currency options that are available in the public domain, risk-neutral dependency expectations are extracted through a copula representation of the bivariate risk-neutral density. For this purpose, we employ either the one-parameter "Normal" or a two-parameter "Gumbel Mixture" specification. The latter provides forward-looking information regarding the overall degree of covariation, as well as the level and direction of asymmetric dependence. Specifications that include option-based measures in their information set are found to outperform, in-sample and out-of-sample, models that rely solely on historical returns.
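For the one-parameter Normal copula, the overall degree of dependence maps to rank correlation in closed form: Kendall's tau = (2/π) arcsin(ρ). A small simulation (illustrative only, not the paper's risk-neutral density extraction) verifies the link:

```python
import numpy as np

rng = np.random.default_rng(7)
rho, n = 0.6, 200_000

# Draw from a bivariate Normal copula. Kendall's tau is invariant to the
# monotone probability-integral transform of the margins, so it can be
# estimated directly on the correlated normals.
cov = [[1.0, rho], [rho, 1.0]]
u = rng.multivariate_normal([0.0, 0.0], cov, size=n)
v = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# E[sign((X1 - X2)(Y1 - Y2))] over independent pairs equals Kendall's tau
tau_hat = np.mean(np.sign((u[:, 0] - v[:, 0]) * (u[:, 1] - v[:, 1])))
tau_theory = 2.0 / np.pi * np.arcsin(rho)   # ~0.41 for rho = 0.6
```

The Normal copula is symmetric, which is why the abstract's two-parameter Gumbel Mixture is needed to capture the level and direction of asymmetric dependence.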
Abstract:
The properties of an iterative procedure for the estimation of the parameters of an ARFIMA process are investigated in a Monte Carlo study. The estimation procedure is applied to stock returns data for 15 countries. © 2012.
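The defining ingredient of an ARFIMA(p, d, q) process is the fractional difference operator (1 − L)^d, whose binomial-expansion weights follow the recursion π₀ = 1, πₖ = πₖ₋₁ (k − 1 − d)/k. A short sketch, with an illustrative d unrelated to the paper's estimates:

```python
def frac_diff_weights(d, n_weights):
    """Binomial-expansion weights of the operator (1 - L)^d."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# For d = 0.4 the first weights are 1.0, -0.4, -0.12, -0.064, -0.0416.
# Their slow hyperbolic decay is what gives the process long memory.
w = frac_diff_weights(0.4, 5)
```

Iterative ARFIMA estimators of the kind studied in the paper typically alternate between estimating d (e.g. from these weights applied to the data) and fitting the short-memory ARMA part to the fractionally differenced series.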
Abstract:
We test for departures from normal, independent and identically distributed (NIID) log returns against alternative hypotheses under which log returns are self-affine and either long-range dependent or drawn randomly from an L-stable distribution with infinite higher-order moments. The finite-sample performance of estimators of the two forms of self-affinity is explored in a simulation study. In contrast to rescaled range analysis and other conventional estimation methods, the variant of fluctuation analysis that considers finite sample moments only is able to identify both forms of self-affinity. When log returns are self-affine and long-range dependent under the alternative hypothesis, however, rescaled range analysis has higher power than fluctuation analysis. The techniques are illustrated by means of an analysis of the daily log returns for the indices of 11 stock markets of developed countries. Several of the smaller stock markets by capitalization exhibit evidence of long-range dependence in log returns. © 2012 Elsevier Inc. All rights reserved.
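Classical rescaled-range (R/S) analysis estimates the self-affinity (Hurst) exponent as the log-log slope of the mean rescaled range against window size. A minimal version is sketched below on simulated white noise, where the true exponent is 0.5 (subject to the well-known small-sample upward bias of R/S); this is an illustration, not the paper's estimators:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic: range of the demeaned cumulative sum over the std."""
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

def hurst_rs(x, window_sizes):
    """Hurst exponent: slope of log mean(R/S) against log window size."""
    log_n, log_rs = [], []
    for n in window_sizes:
        blocks = [x[i * n:(i + 1) * n] for i in range(len(x) // n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(b) for b in blocks])))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(3)
# White noise: estimate near 0.5; persistent long-range dependence
# of the kind reported for smaller markets would push it above 0.5.
h = hurst_rs(rng.normal(size=2**14), [16, 32, 64, 128, 256, 512])
```

The abstract's point about power is that under heavy-tailed (L-stable) alternatives this R/S slope conflates the two forms of self-affinity, which moment-restricted fluctuation analysis can separate.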
Abstract:
We examine the effects of foreign exchange (FX) and interest rate changes on the excess returns of U.S. stocks, for short horizons of 1-40 days. Our new evidence shows a tendency for the volatility of both excess returns and FX rate changes to be negatively related to FX rate and interest rate effects. Both the number of firms with significant FX rate and interest rate effects and the magnitude of their exposures increase with the length of the return horizon. Our finding seems inconsistent with the view that firms hedge effectively at short return horizons.
Abstract:
BACKGROUND: Standardised packaging (SP) of tobacco products is an innovative tobacco control measure opposed by transnational tobacco companies (TTCs) whose responses to the UK government's public consultation on SP argued that evidence was inadequate to support implementing the measure. The government's initial decision, announced 11 months after the consultation closed, was to wait for 'more evidence', but four months later a second 'independent review' was launched. In view of the centrality of evidence to debates over SP and TTCs' history of denying harms and manufacturing uncertainty about scientific evidence, we analysed their submissions to examine how they used evidence to oppose SP. METHODS AND FINDINGS: We purposively selected and analysed two TTC submissions using a verification-oriented cross-documentary method to ascertain how published studies were used and interpretive analysis with a constructivist grounded theory approach to examine the conceptual significance of TTC critiques. The companies' overall argument was that the SP evidence base was seriously flawed and did not warrant the introduction of SP. However, this argument was underpinned by three complementary techniques that misrepresented the evidence base. First, published studies were repeatedly misquoted, distorting the main messages. Second, 'mimicked scientific critique' was used to undermine evidence; this form of critique insisted on methodological perfection, rejected methodological pluralism, adopted a litigation (not scientific) model, and was not rigorous. Third, TTCs engaged in 'evidential landscaping', promoting a parallel evidence base to deflect attention from SP and excluding company-held evidence relevant to SP. The study's sample was limited to sub-sections of two out of four submissions, but leaked industry documents suggest at least one other company used a similar approach. 
CONCLUSIONS: The TTCs' claim that SP will not lead to public health benefits is largely without foundation. The tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for highly resourced corporations to slow, weaken, or prevent public health policies.
Abstract:
OBJECTIVES: To examine the volume, relevance and quality of transnational tobacco corporations' (TTCs) evidence that standardised packaging of tobacco products 'won't work', following the UK government's decision to 'wait and see' until further evidence is available. DESIGN: Content analysis. SETTING: We analysed the evidence cited in submissions by the UK's four largest TTCs to the UK Department of Health consultation on standardised packaging in 2012. OUTCOME MEASURES: The volume, relevance (subject matter) and quality (as measured by independence from industry and peer review) of evidence cited by TTCs was compared with evidence from a systematic review of standardised packaging. Fisher's exact test was used to assess differences in the quality of TTC and systematic review evidence. 100% of the data were second-coded to validate the findings: 94.7% intercoder reliability; all differences were resolved. RESULTS: 77/143 pieces of TTC-cited evidence were used to promote their claim that standardised packaging 'won't work'. Of these, just 17/77 addressed standardised packaging: 14 were industry connected and none were published in peer-reviewed journals. Comparison of TTC and systematic review evidence on standardised packaging showed that the industry evidence was of significantly lower quality in terms of tobacco industry connections and peer review (p<0.0001). The most relevant TTC evidence (on standardised packaging or packaging generally, n=26) was of significantly lower quality (p<0.0001) than the least relevant (on other topics, n=51). Across the dataset, TTC-connected evidence was significantly less likely to be published in a peer-reviewed journal (p=0.0045). CONCLUSIONS: With few exceptions, evidence cited by TTCs to promote their claim that standardised packaging 'won't work' lacks either policy relevance or key indicators of quality.
Policymakers could use these three criteria (subject matter, independence and peer-review status) to critically assess evidence submitted to them by corporate interests via Better Regulation processes.
Abstract:
This study extends the Grullon, Michaely, and Swaminathan (2002) analysis by incorporating default risk. Using data for firms that either increased or initiated cash dividend payments during the 23-year period 1986-2008, we find a reduction in default risk. This reduction is shown to be a priced risk factor beyond the Fama and French (1993) risk measures, and it explains the dividend payment decision and the positive market reaction around dividend increases and initiations. Further analysis reveals that the reduction in default risk is a significant factor in explaining the 3-year excess returns following dividend increases and initiations. © Copyright Michael G. Foster School of Business, University of Washington 2011.
Abstract:
Purpose – The use of key accounts has become a mature trend and most industrial firms use this concept in some form. Selling firms establish key account teams to attend to important customers and consolidate their selling activities. Yet, despite such increased efforts on behalf of key accounts, research has not sufficiently quantified the returns on key account strategy, nor has it firmly established performance differences between key and non-key accounts within a firm. In response to this shortcoming, this study aims to examine returns on key accounts. Design/methodology/approach – Data were collected from a global consulting firm. The data collection started two years after the implementation of the key account program. Data were collected on recently acquired customers (within the previous year) at two time periods: year 1 and year 3 (based on company access to data). Findings – Initially, key accounts perform as well as or better than other types of accounts. However, in the long term, key accounts are less satisfied, less profitable and less beneficial for a firm’s growth than other types of accounts. Because the returns to key account expenditures thus appear mixed, firms should be cautious in expanding their key account strategies. Research limitations/implications – The study contributes to research in three areas. First, most research on the effectiveness of key accounts refers to the between-firm level, whereas this study examines the effect within a single firm. Second, this study examines the temporal aspects of key accounts, namely, what happens to key accounts over time, in comparison with other accounts in a fairly large sample. Third, it considers the survival rates of key accounts versus other types of accounts.
Practical implications – The authors suggest that firms also need to track their key accounts better because the results show that key accounts are less satisfied, less profitable and less beneficial for a firm’s growth than other types of accounts. Originality/value – Extant research has not examined these issues.
Abstract:
This paper summarizes the literature on hedge funds (HFs) developed over the last two decades, particularly that which relates to risk management characteristics (a companion piece investigates the managerial characteristics of HFs). It discusses the successes and the shortfalls to date in developing more sophisticated risk management frameworks and tools to measure and monitor HF risks, and the empirical evidence on the role of HFs, their investment behaviour and their risk management practices in the stability of the financial system. It also classifies the HF literature in light of the most recent contributions and, in particular, the regulatory developments after the 2007 financial crisis.