380 results for Dividend Imputation
Abstract:
A dividend imputation tax system provides shareholders with a credit (for corporate tax paid) that can be used to offset personal tax on dividend income. This paper shows how to infer the value of imputation tax credits from the prices of derivative securities that are unique to Australian retail markets. We also test whether a tax law amendment that was designed to prevent the trading of imputation credits affected their economic value. Before the amendment, tax credits were worth up to 50% of face value in large, high-yielding companies, but subsequently it is difficult to detect any value at all.
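For concreteness, the credit mechanics work roughly as follows; this is a stylized illustration with assumed tax rates, not figures from the paper.

```latex
% A fully franked dividend D paid from profits taxed at the corporate rate T_c
% carries a credit of D * T_c / (1 - T_c); the shareholder is assessed on the
% grossed-up amount at the personal rate T_p, with the credit as an offset:
\text{tax payable}
  = \frac{D}{1 - T_c}\,T_p - D\,\frac{T_c}{1 - T_c}
  = \frac{D\,(T_p - T_c)}{1 - T_c}
% Assumed example: D = $70, T_c = 30%, T_p = 45%
% => credit = $30, grossed-up income = $100, tax payable = $15.
```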
Abstract:
In a dividend imputation tax system, equity investors have three potential sources of return: dividends, capital gains and franking (tax) credits. However, the standard procedures for estimating the market risk premium (MRP) for use in the capital asset pricing model ignore the value of franking credits. Officer (1994) notes that if franking credits do affect the corporate cost of capital, their value must be added to the standard estimates of the MRP. In the present paper, we explicitly derive the relationship between the value of franking credits (gamma) and the MRP. We show that the standard parameter estimates that have been adopted in practice (especially by Australian regulators) violate this deterministic mathematical relationship. We also show how information on dividend yields and effective tax rates bounds the values that can reasonably be used for gamma and the MRP. We make recommendations for how estimates of the MRP should be adjusted to reflect the value of franking credits in an internally consistent manner.
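A stylized one-line version of the relationship, assuming a fully franked market dividend yield delta and statutory corporate rate T_c (this simplified form is our reading, not the paper's exact derivation):

```latex
% Grossing up the market return by the value of distributed franking
% credits links the with-credit and ex-credit risk premia:
MRP_\gamma = MRP_0 + \gamma\,\delta\,\frac{T_c}{1 - T_c}
% Any assumed gamma therefore pins down the wedge between the two premia,
% and observed yields and tax rates bound the internally consistent pairs.
```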
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice for identifying genetic determinants of complex phenotypes and common diseases. The enormous amount of generated data and the use of distinct genotyping platforms with variable genomic coverage remain analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained from empirical allele frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (48%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific minor allele frequency (MAF) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false-positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
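The marker-filtering heuristics suggested in the conclusions can be sketched as a simple post-processing step. The column names, thresholds, and the flanking-marker comparison below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
import pandas as pd

def flag_suspect_markers(df, info_min=0.8, maf_min=0.05, deviation=2.0):
    """Flag imputed markers whose association signal is likely inflated.

    df is assumed to hold one row per marker with (hypothetical) columns:
      info    - imputation quality score in [0, 1]
      maf     - minor allele frequency
      p       - association p-value from the imputed genotypes
      p_flank - smallest p-value among directly genotyped flanking markers
    """
    logp = -np.log10(df["p"])
    logp_flank = -np.log10(df["p_flank"])
    suspect = (
        (df["info"] < info_min)             # poorly imputed marker
        | (df["maf"] < maf_min)             # low MAF, harder to impute
        | (logp - logp_flank > deviation)   # signal unsupported by flanking markers
    )
    return df.assign(suspect=suspect)
```

Markers not flagged by such a filter would be the natural candidates for follow-up genotyping.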
Abstract:
Master's in Accounting and Financial Analysis
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's degree in Finance from the NOVA School of Business and Economics
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's double degree in Finance from the NOVA School of Business and Economics / Master's degree in Economics from Insper
Abstract:
This project characterizes the accuracy of the escrowed dividend model in valuing European options on a stock paying a discrete dividend. A description of the escrowed dividend model is provided, and a comparison between this model and the benchmark model is carried out. It is concluded that options on stocks with low volatility, a low dividend yield, or a low ex-dividend-date-to-maturity ratio, or that are deep in or out of the money, are reasonably priced by the escrowed dividend model.
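For reference, a minimal sketch of the escrowed dividend model for a European call: the present value of the discrete dividend is escrowed out of the spot price and Black-Scholes is applied to the adjusted spot. The parameter values are illustrative assumptions, not inputs from the project:

```python
import numpy as np
from scipy.stats import norm

def escrowed_dividend_call(S, K, r, sigma, T, dividend, t_div):
    """European call under the escrowed dividend model: subtract the
    present value of the dividend paid at t_div from the spot, then
    apply Black-Scholes to the adjusted spot (the volatility input is
    left unadjusted, which is the source of the model's pricing bias)."""
    S_adj = S - dividend * np.exp(-r * t_div)
    d1 = (np.log(S_adj / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S_adj * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Illustrative call: at the money, one year to expiry, dividend in 6 months.
price = escrowed_dividend_call(S=100, K=100, r=0.03, sigma=0.20,
                               T=1.0, dividend=2.0, t_div=0.5)
```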
Abstract:
In this paper we follow the tradition of applied general equilibrium modelling of the Walrasian static variety to study the empirical viability of a double dividend (green, welfare, and employment) in the Spanish economy. We consider a counterfactual scenario in which an ecotax is levied on the intermediate and final use of energy goods. Under a revenue-neutrality assumption, we evaluate the real income and employment impact of lowering payroll taxes. To assess the extent to which the model structure and behavioural assumptions may influence the results, we perform simulations under a range of alternative model and policy scenarios. We conclude that a double dividend (better environmental quality, as measured by reduced CO2 emissions, together with improved levels of employment) may be an achievable goal of economic policy.
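The revenue-neutrality constraint that drives the experiment can be written schematically as follows; the notation is our stylization, not the model's actual equation system:

```latex
% Ecotax receipts on intermediate and final energy use offset the
% forgone payroll tax revenue from cutting the rate by Delta tau_w:
t_e \sum_i p_i E_i = \Delta\tau_w\, W
% E_i: energy use by agent i, p_i: its price, W: the taxed wage bill.
```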
Abstract:
We re-examine the dynamics of returns and dividend growth within the present-value framework of stock prices. We find that the finite-sample order of integration of returns is approximately equal to the order of integration of the first-differenced price-dividend ratio. As such, the traditional return-forecasting regressions based on the price-dividend ratio are invalid. Moreover, the nonstationary long-memory behaviour of the price-dividend ratio induces antipersistence in returns. This suggests that expected returns should be modelled as an ARFIMA process, and we show that this improves the forecasting ability of the present-value model both in-sample and out-of-sample.
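A common two-step way to work with an ARFIMA specification (a sketch of the generic technique, not the paper's estimation procedure) is to fractionally difference the series at an assumed memory parameter d and fit a standard ARMA model to the result:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def frac_diff(x, d, n_weights=100):
    """Apply the fractional difference operator (1 - L)^d to a series,
    truncating the binomial expansion after n_weights terms."""
    w = np.ones(n_weights)
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k      # recursive binomial weights
    out = np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(len(x))])
    return out[n_weights:]                      # drop the truncation burn-in

# Hypothetical usage: a negative d captures antipersistent returns.
# y = frac_diff(np.asarray(returns), d=-0.2)
# arma = ARIMA(y, order=(1, 0, 1)).fit()       # ARMA on the differenced series
```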
Abstract:
Attrition in longitudinal studies can lead to biased results. The study is motivated by the unexpected observation that alcohol consumption decreased despite increased availability, which may be due to sample attrition of heavy drinkers. Several imputation methods have been proposed, but they have rarely been compared in longitudinal studies of alcohol consumption. Imputing consumption-level measurements is computationally challenging because alcohol consumption is a semi-continuous variable (a dichotomous drinking status and a continuous volume among drinkers) and the data in the continuous part are non-normal. Data come from a longitudinal study in Denmark with four waves (2003-2006) and 1771 individuals at baseline. Five techniques for missing data are compared: last value carried forward (LVCF) as a single imputation method, and hot-deck, Heckman modelling, multivariate imputation by chained equations (MICE), and a Bayesian approach as multiple imputation methods. Predictive mean matching was used to account for non-normality: instead of imputing regression estimates, "real" observed values from similar cases are imputed. The methods were also compared on a simulated dataset. The simulation showed that the Bayesian approach yielded the least biased imputation estimates. The finding of no increase in consumption levels despite higher availability remained unaltered.
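The predictive mean matching step can be sketched generically as follows; the fitted coefficients coef are assumed to come from an imputation model estimated elsewhere, and nothing here is the study's own implementation:

```python
import numpy as np

def pmm_impute(y_obs, X_obs, X_mis, coef, k=5, seed=None):
    """Predictive mean matching: each missing case receives the observed
    value of one of the k donors whose predicted means lie closest to the
    missing case's predicted mean, preserving the real (non-normal)
    distribution of the observed data."""
    rng = np.random.default_rng(seed)
    pred_obs = X_obs @ coef                  # fitted means, observed cases
    pred_mis = X_mis @ coef                  # fitted means, missing cases
    imputed = np.empty(len(pred_mis))
    for i, p in enumerate(pred_mis):
        donors = np.argsort(np.abs(pred_obs - p))[:k]   # k nearest donors
        imputed[i] = y_obs[rng.choice(donors)]          # draw a real value
    return imputed
```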
Abstract:
Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, the relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining logistic regression with an Expectation-Maximization algorithm. Results show that, despite the scarcity of data, this procedure can perform better than standard matching procedures.
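One generic way to combine logistic regression with EM for this kind of fusion problem is sketched below: a binary variable observed only in the small file is soft-imputed in the large file (E-step) and the model is refit on the weighted pooled data (M-step). The setup is an illustrative assumption, not the authors' procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def em_fuse(X_small, y_small, X_large, n_iter=20):
    """Fuse a binary variable y, observed only in the small file, into a
    large file that shares the covariates X."""
    model = LogisticRegression().fit(X_small, y_small)  # init on small file
    for _ in range(n_iter):
        p = model.predict_proba(X_large)[:, 1]          # E-step: P(y=1 | x)
        # M-step: each large-file row enters twice, once per possible
        # label, weighted by its posterior probability.
        X = np.vstack([X_small, X_large, X_large])
        y = np.concatenate([y_small, np.ones(len(p)), np.zeros(len(p))])
        w = np.concatenate([np.ones(len(y_small)), p, 1.0 - p])
        model = LogisticRegression().fit(X, y, sample_weight=w)
    return model.predict_proba(X_large)[:, 1]           # fused estimates
```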