917 results for expected returns
Abstract:
This study focuses on: (i) the responsiveness of the U.S. financial sector stock indices to foreign exchange (FX) and interest rate changes; and (ii) the extent to which good model specification can enhance the forecasts from the associated models. Three models are considered. Only the error-correction model (ECM) generated efficient and consistent coefficient estimates. Furthermore, a simple zero-lag model in differences, which is clearly mis-specified, generated forecasts that are better than those of the ECM, even though the ECM depicts relationships that are more consistent with economic theory. In brief, FX and interest rate changes do not affect the return-generating process of the stock indices in any substantial way. Most of the variation in the sector stock indices is associated with past variation in the indices themselves and variation in the market-wide stock index. These results have important implications for financial and economic policies.
Abstract:
Purpose – The purpose of this paper is to investigate the impact of foreign exchange (FX) and interest rate changes on US banks’ stock returns. Design/methodology/approach – The approach employs an EGARCH model to account for the ARCH effects in daily returns. Most prior studies have used standard OLS estimation methods, with the result that the presence of ARCH effects would have affected estimation efficiency. For comparative purposes, the standard OLS estimation method is also used to measure sensitivity. Findings – The findings are as follows: under the conditional t-distributional assumption, the EGARCH model generated a much better fit to the data, although the goodness-of-fit of the model is not entirely satisfactory; the market index return accounts for most of the variation in stock returns at both the individual bank and portfolio levels; and the degree of sensitivity of the stock returns to interest rate and FX rate changes is not very pronounced despite the use of high frequency data. Earlier results had indicated that daily data provided greater evidence of exposure sensitivity. Practical implications – Assuming that banks do not hedge perfectly, these findings have important financial implications as they suggest that the hedging policies of the banks are not reflected in their stock prices. Alternatively, it is possible that different GARCH-type models might be more appropriate when modelling high frequency returns. Originality/value – The paper contributes to existing knowledge in the area by showing that ARCH effects do affect measures of sensitivity.
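As a rough illustration of the OLS baseline that the paper compares the EGARCH specification against, the sketch below fits a three-factor sensitivity regression (market return, interest rate change, FX change) on synthetic daily data. All series, loadings and names here are invented for illustration; a real application would use actual bank return data and the EGARCH-t model described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # synthetic daily observations

# Hypothetical factor series: market index return, interest rate change, FX change
market = rng.normal(0, 1.0, n)
d_rate = rng.normal(0, 0.1, n)
d_fx = rng.normal(0, 0.5, n)

# Simulated bank stock return: strong market loading, weak rate/FX loadings,
# mirroring the paper's finding that the market index dominates
bank = 0.9 * market + 0.05 * d_rate - 0.1 * d_fx + rng.normal(0, 0.8, n)

# OLS estimation of the sensitivity model:
#   R_bank = a + b_m * R_market + b_r * d_rate + b_f * d_fx + e
X = np.column_stack([np.ones(n), market, d_rate, d_fx])
coef, *_ = np.linalg.lstsq(X, bank, rcond=None)
a, b_m, b_r, b_f = coef
```

With ARCH effects present in real daily returns, these OLS estimates remain consistent but are inefficient, which is the motivation for the EGARCH approach.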
Abstract:
Neuronal operations associated with the top-down control process of shifting attention from one locus to another involve a network of cortical regions, and their influence is deemed fundamental to visual perception. However, the extent and nature of these operations within primary visual areas are unknown. In this paper, we used magnetoencephalography (MEG) in combination with magnetic resonance imaging (MRI) to determine whether, prior to the onset of a visual stimulus, neuronal activity within early visual cortex is affected by covert attentional shifts. Time/frequency analyses were used to identify the nature of this activity. Our results show that shifting attention towards an expected visual target results in a late-onset (600 ms postcue onset) depression of alpha activity which persists until the appearance of the target. Independent component analysis (ICA) and dipolar source modeling confirmed that the neuronal changes we observed originated from within the calcarine cortex. Our results further show that the amplitude changes in alpha activity were induced, not evoked (i.e., not phase-locked to the cued attentional task). We argue that the decrease in alpha prior to the onset of the target may serve to prime the early visual cortex for incoming sensory information. We conclude that attentional shifts affect activity within the human calcarine cortex by altering the amplitude of spontaneous alpha rhythms and that subsequent modulation of visual input with attentional engagement follows as a consequence of these localized changes in oscillatory activity. © 2005 Elsevier B.V. All rights reserved.
Abstract:
This paper will show that short horizon stock returns for UK portfolios are more predictable than suggested by sample autocorrelation coefficients. Four capitalisation-based portfolios are constructed for the period 1976–1991. It is shown that the first-order autocorrelation coefficient of monthly returns can explain no more than 10% of the variation in monthly portfolio returns. Monthly autocorrelation coefficients assume that each weekly return of the previous month contains the same amount of information. However, this will not be the case if short horizon returns contain predictable components which dissipate rapidly. In this case, the return of the most recent week would say much more about the future monthly portfolio return than other weeks. This suggests that when predicting future monthly portfolio returns, more weight should be given to the most recent weeks of the previous month, because the most recent weekly returns provide the most information about the subsequent month's performance. We construct a model which exploits the mean-reverting characteristics of monthly portfolio returns. Using this model we forecast future monthly portfolio returns. When compared to forecasts that utilise the autocorrelation statistic, the model which exploits the mean-reverting characteristics of monthly portfolio returns forecasts future returns better, both in and out of sample.
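The intuition above — that a rapidly dissipating predictable component makes the most recent week far more informative than earlier weeks — can be sketched on simulated data. Everything below is synthetic (an AR(1) weekly return with hypothetical persistence 0.5, not the paper's actual model): regressing next month's return on the four weekly returns of the previous month, the slope on the most recent week should dominate.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 0.5                 # fast mean reversion: predictability dissipates quickly
n_months = 2000
n_weeks = 4 * (n_months + 1)

# Weekly returns with a rapidly decaying predictable component (AR(1))
r = np.zeros(n_weeks)
for t in range(1, n_weeks):
    r[t] = phi * r[t - 1] + rng.normal()

weeks = r.reshape(-1, 4)      # rows = months, columns = weeks 1..4 of that month
X = weeks[:-1]                # the four weekly returns of month m
y = weeks[1:].sum(axis=1)     # total return of month m + 1

# Regress next month's return on the previous month's four weekly returns
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
b_weeks = coef[1:]            # one slope per week; b_weeks[-1] is the latest week
```

Under this AR(1) data-generating process the most recent week carries essentially all the predictive information, so `b_weeks[-1]` is large while the slopes on older weeks are near zero — an equal-weight monthly autocorrelation statistic averages this information away.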
Abstract:
For some time there has been a puzzle surrounding the seasonal behaviour of stock returns. This paper demonstrates that there is an asymmetric relationship between systematic risk and return across the different months of the year for both large and small firms. In the case of both large and small firms systematic risk appears to be priced in only two months of the year, January and April. During the other months no persistent relationship between systematic risk and return appears to exist. The paper also shows that when systematic risk is priced, the size of the systematic risk premium is higher for large firms than for small firms and varies significantly across the months of the year.
Abstract:
This thesis examines the effect of rights issue announcements on the stock prices of companies listed on the Kuala Lumpur Stock Exchange (KLSE) between 1987 and 1996. The emphasis is to report whether the KLSE is semi-strongly efficient with respect to the announcement of rights issues and to check whether the implications of corporate finance theories on the effect of an event can be supported in the context of an emerging market. Once the effect is established, potential determinants of abnormal returns identified by previous empirical work and corporate finance theory are analysed. By examining 70 companies making clean rights issue announcements, this thesis will hopefully shed light on some important issues in long-term corporate financing. Event study analysis is used to check the efficiency of the Malaysian stock market, while cross-sectional regression analysis is executed to identify possible explanatory variables for the effect of the rights issue announcements. To ensure the results presented are not contaminated, econometric and statistical issues raised in both analyses have been taken into account. Given the small amount of empirical research conducted in this part of the world, the results of this study will hopefully be of use to investors, security analysts, corporate financial managers, regulators and policy makers, as well as those who are interested in capital market based research of an emerging market. It is found that the Malaysian stock market is not semi-strongly efficient, since there exists a persistent non-zero abnormal return. This finding is not consistent with the hypothesis that security returns adjust rapidly to reflect new information. It may be possible that the result is influenced by the sample, consisting mainly of below-average-size companies which tend to be thinly traded. Nevertheless, these issues have been addressed.
Another important issue which has emerged from the study is that there is some evidence to suggest that insider trading activity existed in this market. In addition to these findings, when the rights issue announcements' effect is compared to the implications of corporate finance theories in predicting the sign of abnormal returns, the signalling model, asymmetric information model, perfect substitution hypothesis and Scholes' information hypothesis cannot be supported.
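The event study machinery referred to above follows a standard recipe: estimate a market model in a pre-event window, then cumulate abnormal returns around the announcement. The sketch below is a minimal, generic version on simulated data (all numbers, including the injected announcement-day effect, are hypothetical and not taken from the thesis).

```python
import numpy as np

rng = np.random.default_rng(2)
est_n, evt_n = 200, 11     # estimation window and event window lengths (days)

# Simulated market and stock returns; the stock follows a market model
rm = rng.normal(0.0005, 0.01, est_n + evt_n)
r = 0.0002 + 1.1 * rm + rng.normal(0, 0.012, est_n + evt_n)
r[est_n + 5] += 0.06       # hypothetical announcement effect mid event window

# Market-model parameters (alpha, beta) from the estimation window only
X = np.column_stack([np.ones(est_n), rm[:est_n]])
(a, b), *_ = np.linalg.lstsq(X, r[:est_n], rcond=None)

# Abnormal returns and cumulative abnormal return (CAR) over the event window
ar = r[est_n:] - (a + b * rm[est_n:])
car = ar.cumsum()
```

A persistent non-zero CAR after the announcement day, as reported for the KLSE sample, is what contradicts semi-strong efficiency; cross-sectional regressions then relate each firm's CAR to candidate explanatory variables.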
Abstract:
This paper investigates whether the non-normality typically observed in daily stock-market returns could arise because of the joint existence of breaks and GARCH effects. It proposes a data-driven procedure to credibly identify the number and timing of breaks and applies it on the benchmark stock-market indices of 27 OECD countries. The findings suggest that a substantial element of the observed deviations from normality might indeed be due to the co-existence of breaks and GARCH effects. However, the presence of structural changes is found to be the primary reason for the non-normality and not the GARCH effects. Also, there is still some remaining excess kurtosis that is unlikely to be linked to the specification of the conditional volatility or the presence of breaks. Finally, an interesting sideline result implies that GARCH models have limited capacity in forecasting stock-market volatility.
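The core mechanism — that structural breaks in an otherwise Gaussian series can masquerade as non-normality — is easy to demonstrate. The sketch below (purely illustrative, not the paper's data-driven break-detection procedure) pools two Gaussian regimes with different variances and shows that the pooled series exhibits excess kurtosis even though each regime is exactly normal.

```python
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    """Sample excess kurtosis (0 for a normal distribution)."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

# A homogeneous Gaussian series vs. one with a single variance break
homogeneous = rng.normal(0, 1.0, 4000)
with_break = np.concatenate([rng.normal(0, 1.0, 2000),
                             rng.normal(0, 2.5, 2000)])

# The series with a break looks fat-tailed despite each regime being Gaussian
k_plain = excess_kurtosis(homogeneous)
k_break = excess_kurtosis(with_break)
```

This is why, once breaks are identified and accounted for, much of the apparent deviation from normality in the OECD index returns disappears.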
Abstract:
Agent-based technology is playing an increasingly important role in today’s economy. Usually a multi-agent system is needed to model an economic system such as a market system, in which heterogeneous trading agents interact with each other autonomously. Two questions often need to be answered regarding such systems: 1) How to design an interaction mechanism that facilitates efficient resource allocation among usually self-interested trading agents? 2) How to design an effective strategy in some specific market mechanism for an agent to maximise its economic returns? For automated market systems, auctions are the most popular mechanism for solving resource allocation problems among their participants. However, auctions come in hundreds of different formats, some of which are better than others in terms of not only allocative efficiency but also other properties, e.g. whether they generate high revenue for the auctioneer, or whether they induce stable behaviour among the bidders. In addition, different strategies result in very different performance under the same auction rules. With this background, we are naturally led to investigate auction mechanism and strategy designs for agent-based economics. The international Trading Agent Competition (TAC) Ad Auction (AA) competition provides a very useful platform to develop and test agent strategies in the Generalised Second Price auction (GSP). AstonTAC, the runner-up of TAC AA 2009, is a successful advertiser agent designed for GSP-based keyword auctions. In particular, AstonTAC generates adaptive bid prices according to the Market-based Value Per Click and selects the set of keyword queries with the highest expected profit to bid on, so as to maximise its expected profit under the limit of conversion capacity. Through evaluation experiments, we show that AstonTAC performs well and stably not only in the competition but also across a broad range of environments.
The TAC CAT tournament provides an environment for investigating the optimal design of mechanisms for double auction markets. AstonCAT-Plus is the post-tournament version of the specialist developed for CAT 2010. In our experiments, AstonCAT-Plus not only outperforms most specialist agents designed by other institutions but also achieves high allocative efficiencies, transaction success rates and average trader profits. Moreover, we reveal some insights into the CAT game: 1) successful markets should maintain a stable and high market share of intra-marginal traders; 2) a specialist’s performance is dependent on the distribution of trading strategies. However, typical double auction models assume trading agents have a fixed trading direction of either buy or sell. With this limitation they cannot directly reflect the fact that traders in financial markets (the most popular application of the double auction) decide their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Experiments are conducted under both dynamic and static settings of the continuous BDA market. We find that the allocative efficiency of a continuous BDA market mainly comes from rational selection of trading directions. Furthermore, we introduce a high-performance Kernel trading strategy in the BDA market, which uses a kernel probability density estimator built on historical transaction data to decide optimal order prices. The Kernel trading strategy outperforms some popular intelligent double auction trading strategies, including ZIP, GD and RE, in the continuous BDA market by making the highest profit in static games and accumulating the most wealth in dynamic games.
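The kernel idea behind the strategy can be sketched compactly: build a Gaussian kernel density estimate over historical transaction prices and quote where transactions have historically been densest. The code below is a schematic of that single ingredient only, on invented data; the actual Kernel strategy in the thesis involves further machinery (trading direction selection, profit targets, etc.).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical historical transaction prices clustered around 100
history = rng.normal(100.0, 2.0, 500)

def kernel_density(x, data, bandwidth=1.0):
    """Gaussian kernel probability density estimate at points x."""
    u = (x[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# Quote at the mode of the estimated transaction-price density
grid = np.linspace(history.min(), history.max(), 400)
density = kernel_density(grid, history)
order_price = grid[np.argmax(density)]
```

Quoting near the density mode concentrates orders where counterparties have historically been found, which is one plausible reason such a strategy transacts reliably against ZIP-, GD- and RE-style opponents.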
Abstract:
In a Data Envelopment Analysis (DEA) model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple-input, multiple-output variable returns to scale (VRS) environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output constant returns to scale (CRS) cases. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed relatively efficient DMUs in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs, which are in line with the general VRS technology. The suggested procedure is illustrated using real data. © 2011 Elsevier B.V. All rights reserved.
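The construction step for an unobserved DMU can be illustrated schematically: take an observed efficient DMU and shift its input/output levels along a decision-maker-supplied trade-off vector. The DMU, trade-off values and scale below are all invented for illustration, and this shows only the data-augmentation step; assessing efficiency against the augmented data set still requires solving the usual VRS DEA linear programmes.

```python
import numpy as np

# Hypothetical observed efficient DMU: 2 inputs, 2 outputs
efficient_dmu = {
    "inputs": np.array([10.0, 5.0]),    # e.g. staff, budget
    "outputs": np.array([80.0, 40.0]),  # e.g. service volumes
}

# Illustrative decision-maker trade-off: one extra unit of input 2
# should support at least 4 extra units of output 2
trade_off_input = np.array([0.0, 1.0])
trade_off_output = np.array([0.0, 4.0])

def make_unobserved_dmu(dmu, t_in, t_out, scale=1.0):
    """Create an unobserved DMU by adjusting an observed efficient DMU's
    levels along a trade-off, so the trade-off is enforced locally."""
    return {
        "inputs": dmu["inputs"] + scale * t_in,
        "outputs": dmu["outputs"] + scale * t_out,
    }

unobserved = make_unobserved_dmu(efficient_dmu, trade_off_input, trade_off_output)
```

Adding such unobserved DMUs to the reference set reshapes the efficient frontier locally, which is what prevents the assessment from assigning a zero weight to the inputs and outputs involved in the trade-off.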
Abstract:
Models for the conditional joint distribution of the U.S. Dollar/Japanese Yen and Euro/Japanese Yen exchange rates, from November 2001 until June 2007, are evaluated and compared. The conditional dependency is allowed to vary across time, as a function of either historical returns or a combination of past return data and option-implied dependence estimates. Using prices of currency options that are available in the public domain, risk-neutral dependency expectations are extracted through a copula representation of the bivariate risk-neutral density. For this purpose, we employ either the one-parameter “Normal” or a two-parameter “Gumbel Mixture” specification. The latter provides forward-looking information regarding the overall degree of covariation, as well as the level and direction of asymmetric dependence. Specifications that include option-based measures in their information set are found to outperform, in-sample and out-of-sample, models that rely solely on historical returns.
Abstract:
This paper analyses the relationship between innovation – proxied by Research and Development (R&D), patent and trade mark activity – and profitability in a panel of Australian firms (1995 to 1998). Special attention is given to assessing the nature of the competitive conditions faced by different firms, as the nature of competition is likely to affect the returns to innovation. The hypothesis is that lower levels of competition will imply higher returns to innovation. To allow for a time lag before any return to innovation, the market value of the firms is used as a proxy for expected future profits. The results give some support for the main hypothesis: the market’s valuation of R&D activity is higher in industries where competition is lower. However, the paper highlights the difficulty of assessing competitive conditions and finds a number of results that challenge the simple hypothesis.
Abstract:
∗This research, which was funded by a grant from the Natural Sciences and Engineering Research Council of Canada, formed part of G.A.’s Ph.D. thesis [1].
Abstract:
The properties of an iterative procedure for the estimation of the parameters of an ARFIMA process are investigated in a Monte Carlo study. The estimation procedure is applied to stock returns data for 15 countries. © 2012.
Abstract:
We test for departures from normally, independently and identically distributed (NIID) log returns against alternative hypotheses under which log returns are self-affine and either long-range dependent or drawn randomly from an L-stable distribution with infinite higher-order moments. The finite sample performance of estimators of the two forms of self-affinity is explored in a simulation study. In contrast to rescaled range analysis and other conventional estimation methods, the variant of fluctuation analysis that considers finite sample moments only is able to identify both forms of self-affinity. When log returns are self-affine and long-range dependent under the alternative hypothesis, however, rescaled range analysis has higher power than fluctuation analysis. The techniques are illustrated by means of an analysis of the daily log returns for the indices of 11 stock markets of developed countries. Several of the smaller stock markets by capitalization exhibit evidence of long-range dependence in log returns. © 2012 Elsevier Inc. All rights reserved.
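Rescaled range (R/S) analysis, the benchmark method discussed above, can be sketched in a few lines: compute the range of the cumulative demeaned series divided by its standard deviation over windows of increasing size, and read the self-affinity (Hurst) exponent off the log-log slope. The sketch below uses simulated NIID returns (window sizes and sample length are arbitrary choices, not those of the paper), for which the estimate should sit near 0.5, the no-long-range-dependence benchmark.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 4096)  # simulated NIID "log returns"

def rescaled_range(series):
    """Classic R/S statistic for one window of a series."""
    z = series - series.mean()
    cum = z.cumsum()
    return (cum.max() - cum.min()) / series.std()

# Average R/S over non-overlapping windows of each size
sizes = [64, 128, 256, 512, 1024]
rs = [np.mean([rescaled_range(w) for w in x.reshape(-1, n)]) for n in sizes]

# Hurst exponent estimate: slope of log(R/S) against log(window size)
hurst, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
```

Estimates materially above 0.5 on real index returns, as found for several of the smaller markets, are the signature of long-range dependence (noting that R/S carries a known finite-sample upward bias, which is part of the paper's motivation for comparing it with fluctuation analysis).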