380 results for Dividend Imputation
Abstract:
Purpose – The purpose of this study is to examine dividend policies in an emerging capital market, in a country undergoing a transitional period. Design/methodology/approach – Using pooled cross-sectional observations from the top 50 listed Egyptian firms between 2003 and 2005, this study examines the effect of board of directors' composition and ownership structure on dividend policies in Egypt. Findings – Both institutional ownership and firm performance are found to have a significant positive association with the dividend decision and the payout ratio. The results confirm that firms with a higher return on equity and higher institutional ownership distribute higher levels of dividends. No significant association was found between board composition and dividend decisions or ratios. Originality/value – This study provides additional evidence of the applicability of the signalling model in the emerging market of Egypt. Despite the high institutional ownership and the closely held nature of the firms, which imply lower agency costs, the payment of higher dividends was considered necessary to attract capital during this transitional period.
Abstract:
This paper examines investors' reactions to dividend reductions or omissions conditional on past earnings and dividend patterns for a sample of eighty-two U.S. firms that incurred an annual loss. We document that the market reaction for firms with long patterns of past earnings and dividend payouts is significantly more negative than for firms with less-established past earnings and dividend records. Our results can be explained by the following line of reasoning. First, consistent with DeAngelo, DeAngelo, and Skinner (1992), a loss following a long stream of earnings and dividend payments represents an unreliable indicator of future earnings. Thus, established firms have lower loss reliability than less-established firms. Second, because current earnings and dividend policy are substitute means of forecasting future earnings, lower loss reliability increases the information content of dividend reductions. Therefore, given the presence of a loss, the longer the stream of prior earnings and dividend payments, (1) the lower the loss reliability and (2) the more reliably dividend cuts are perceived as an indication that earnings difficulties will persist in the future.
Abstract:
This study extends the Grullon, Michaely, and Swaminathan (2002) analysis by incorporating default risk. Using data for firms that either increased or initiated cash dividend payments during the 23-year period 1986-2008, we find a reduction in default risk. This reduction is shown to be a priced risk factor beyond the Fama and French (1993) risk measures, and it explains the dividend payment decision and the positive market reaction around dividend increases and initiations. Further analysis reveals that the reduction in default risk is a significant factor in explaining the 3-year excess returns following dividend increases and initiations.
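To make the pricing claim concrete: testing whether the default-risk reduction is priced "beyond the Fama and French (1993) risk measures" amounts to augmenting the three-factor model with a default-risk factor. The regression below is a sketch of that structure rather than the paper's exact specification; the factor symbol ΔDR and loading d_i are illustrative.

```latex
r_{i,t} - r_{f,t} = \alpha_i + \beta_i\,\mathit{MKT}_t + s_i\,\mathit{SMB}_t + h_i\,\mathit{HML}_t + d_i\,\Delta\mathit{DR}_t + \varepsilon_{i,t}
```

A reliably nonzero premium on the ΔDR factor, after controlling for MKT, SMB, and HML, is what "priced risk factor" means in this context.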
Abstract:
This study pursues two objectives: first, to provide evidence on the information content of dividend policy, conditional on past earnings and dividend patterns prior to an annual earnings decline; second, to examine the effect of the magnitude of low earnings realizations on dividend policy when firms have more-or-less established dividend payouts. The information content of dividend policy for firms that incur earnings reductions following long patterns of positive earnings and dividends has been examined (DeAngelo et al., 1992, 1996; Charitou, 2000). No research has examined the association between the informativeness of dividend policy changes in the event of an earnings drop and varying patterns of past earnings and dividends. Our dataset consists of 4,873 U.S. firm-year observations over the period 1986-2005. Our evidence supports the hypotheses that, among earnings-reducing or loss firms, longer patterns of past earnings and dividends: (a) strengthen the information conveyed by dividends regarding future earnings, and (b) enhance the role of the magnitude of low earnings realizations in explaining dividend policy decisions, in that earnings carry more information content explaining the likelihood of dividend cuts the longer the past earnings and dividend patterns. Both results stem from the stylized facts that managers aim to maintain consistency with historic payout policy, being reluctant to proceed with dividend reductions, and that this reluctance is greater the more established the historic payout policy.
Abstract:
I examine the predictability of dividend cuts based on the time interval between dividend announcement dates using a large dataset of US firms from 1971 to 2014. The longer the time interval between dividend announcements, the larger the probability of a cut in the dividend per share, consistent with the view that firms delay the release of bad news.
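The relationship described above is straightforward to operationalize. The sketch below is a hypothetical illustration assuming a panel of announcements with firm identifiers, announcement dates, and dividends per share; the file and column names are invented, and the paper's actual specification (controls, estimator, sample filters) may differ.

```python
# Hypothetical sketch: probability of a dividend-per-share cut as a function of
# the gap (in days) between a firm's consecutive dividend announcements.
# The file and the columns "permno", "announce_date", "dps" are illustrative.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dividend_announcements.csv", parse_dates=["announce_date"])
df = df.sort_values(["permno", "announce_date"])

# Days since the firm's previous announcement, and an indicator for a DPS cut.
df["gap_days"] = df.groupby("permno")["announce_date"].diff().dt.days
df["cut"] = (df.groupby("permno")["dps"].diff() < 0).astype(int)
df = df.dropna(subset=["gap_days"])

# Logit of the cut indicator on the announcement gap; under the paper's finding,
# the coefficient on gap_days comes out positive (delay signals bad news).
model = sm.Logit(df["cut"], sm.add_constant(df[["gap_days"]])).fit()
print(model.summary())
```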
Abstract:
This study examines the tax-arbitrage possibilities on the Budapest Stock Exchange between 1995 and 2007. The theoretical basis for the arbitrage is the different taxation of different stockholders, private investors versus institutions: institutions faced higher taxation on capital gains, while private persons enjoyed tax benefits on capital gains throughout the period. The dynamic clientele model shows that there is a range of price drops after dividend payouts that guarantees a risk-free profit for both parties. The research is based on turnover data from 97 companies listed on the Budapest Stock Exchange. We tested for significant turnover around the dividend dates. The study presents clear evidence that investors consistently took advantage of the differential taxation.
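The "range of price drops" logic can be made concrete with standard clientele algebra in the spirit of Elton and Gruber; the study's exact model may differ, and the tax symbols below are illustrative. An investor with dividend tax rate t_d and capital-gains tax rate t_g is indifferent to trading around the ex-day when the price drop ΔP on dividend D equals D(1 - t_d)/(1 - t_g). With a lower capital-gains rate for private persons than for institutions, both parties can lock in a gain whenever the realized drop falls strictly between the two indifference levels:

```latex
\frac{1 - t_d}{1 - t_g^{\mathrm{priv}}}\, D \;<\; \Delta P \;<\; \frac{1 - t_d}{1 - t_g^{\mathrm{inst}}}\, D
```

In that band, private investors profit by selling cum-dividend and repurchasing ex-dividend (converting the dividend into lightly taxed capital gains), while institutions profit by taking the opposite side and capturing the dividend.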
Abstract:
Researchers frequently have to analyze scales in which some participants have failed to respond to some items. In this paper we focus on the exploratory factor analysis of multidimensional scales (i.e., scales that consist of a number of subscales), where each subscale is made up of a number of Likert-type items and the aim of the analysis is to estimate participants' scores on the corresponding latent traits. We propose a new approach to deal with missing responses in such a situation, based on (1) multiple imputation of non-responses and (2) simultaneous rotation of the imputed datasets. We applied the approach to a real dataset where missing responses were artificially introduced following a real pattern of non-responses, and to a simulation study based on artificial datasets. The results show that our approach (specifically, Hot-Deck multiple imputation followed by Consensus Promin rotation) was able to successfully compute factor score estimates even for participants with missing data.
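A minimal sketch of the two-step idea follows, with stand-ins where the paper's specific tools have no standard library implementation: random-donor hot-deck imputation approximates the authors' Hot-Deck variant, and scikit-learn's per-dataset varimax rotation replaces Consensus Promin, which rotates all imputed solutions simultaneously.

```python
# Sketch: multiple imputation of item non-responses followed by factor analysis
# of each completed dataset. Random-donor hot-deck and varimax rotation are
# stand-ins for the paper's Hot-Deck variant and Consensus Promin rotation.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def hot_deck_impute(X, rng):
    """Fill each missing entry with a value drawn from an observed respondent (donor) on the same item."""
    X = X.copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        missing = np.isnan(col)
        col[missing] = rng.choice(col[~missing], size=missing.sum())
    return X

def mi_factor_scores(X, n_factors, n_imputations=20):
    """Average factor scores over the imputed datasets (one rotation per dataset)."""
    scores = []
    for _ in range(n_imputations):
        fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
        scores.append(fa.fit_transform(hot_deck_impute(X, rng)))
    return np.mean(scores, axis=0)
```

Averaging independently rotated solutions is vulnerable to rotational indeterminacy (factor order and sign can flip between imputations), which is precisely the problem a simultaneous consensus rotation is designed to avoid.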
Abstract:
When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can have various origins and therefore exhibits different patterns. In the literature, it is known as the Missing Data problem. It can be handled in various ways: discarding incomplete observations, estimating what the missing values originally were, or simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between missing data, imputation methods, and supervised classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, in the sense that no relation is assumed between observations. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing-data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the setting of the previous problem to a special kind of dataset, the multivariate time series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem contain their own genuinely missing values, which provides a real-world benchmark for the algorithms developed in this thesis.
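The thesis's own techniques are not reproduced here, but the evaluation loop it describes (inject missing values, impute, score against the ground truth) can be sketched with simple baselines. The two-variable, water-quality-flavoured series below is synthetic, and the 15% missing-completely-at-random pattern is only one of the patterns such a study would consider.

```python
# Benchmark sketch: mask known values in a synthetic multivariate time series,
# impute with several baselines, and compare reconstruction error.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t = np.arange(500)
df_true = pd.DataFrame({
    "temp": np.sin(t / 20) + rng.normal(0, 0.1, t.size),
    "ph": 7 + 0.3 * np.cos(t / 35) + rng.normal(0, 0.05, t.size),
})

# Knock out 15% of entries completely at random.
mask = rng.random(df_true.shape) < 0.15
df_missing = df_true.mask(mask)

imputations = {
    "column mean": df_missing.fillna(df_missing.mean()),
    "forward fill": df_missing.ffill().bfill(),
    "interpolation": df_missing.interpolate().bfill(),
}
for name, df_imp in imputations.items():
    rmse = np.sqrt(((df_imp - df_true)[mask] ** 2).mean().mean())
    print(f"{name:>13}: RMSE = {rmse:.4f}")
```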
Abstract:
Spatio-temporal modelling is an area of increasing importance in which models and methods have often been developed to deal with specific applications. In this study, a spatio-temporal model was used to estimate daily rainfall data. Rainfall records from several weather stations, obtained from the Agritempo system for two climatically homogeneous zones, were used. Rainfall values obtained for two fixed dates (January 1 and May 1, 2012) using the spatio-temporal model were compared with the geostatistical techniques of ordinary kriging and ordinary cokriging with altitude as an auxiliary variable. The spatio-temporal model produced estimates of daily precipitation more than 17% better than kriging and cokriging in the first zone and more than 18% better in the second zone. The spatio-temporal model proved to be a versatile technique, adapting to different seasons and dates.
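As a point of reference, the ordinary-kriging baseline against which the spatio-temporal model was compared might look as follows with the pykrige package; the station coordinates and rainfall values are invented, and the spatio-temporal model itself is not reproduced here.

```python
# Ordinary kriging of daily rainfall from scattered weather stations
# (the baseline technique; the values below are made up for illustration).
import numpy as np
from pykrige.ok import OrdinaryKriging

# Longitude, latitude, and daily rainfall (mm) at five stations.
lon = np.array([-47.1, -46.8, -47.5, -46.5, -47.0])
lat = np.array([-22.9, -22.5, -22.7, -23.1, -22.3])
rain = np.array([12.0, 0.0, 8.5, 20.1, 3.2])

ok = OrdinaryKriging(lon, lat, rain, variogram_model="spherical")

# Predict on a regular grid; z is the kriged rainfall surface, ss the kriging variance.
grid_lon = np.linspace(lon.min(), lon.max(), 50)
grid_lat = np.linspace(lat.min(), lat.max(), 50)
z, ss = ok.execute("grid", grid_lon, grid_lat)
```

Ordinary cokriging with altitude as an auxiliary variable follows the same pattern but additionally requires an elevation field and a cross-variogram.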
Abstract:
Credible spatial information characterizing the structure and site quality of forests is critical to sustainable forest management and planning, especially given the increasing demands on, and threats to, forest products and services. Forest managers and planners are required to evaluate forest conditions over a broad range of scales, contingent on operational or reporting requirements. Traditionally, forest inventory estimates are generated via a design-based approach that involves generalizing sample plot measurements to characterize an unknown population across a larger area of interest. However, field plot measurements are costly, and as a consequence spatial coverage is limited. Remote sensing technologies have shown remarkable success in augmenting limited sample plot data to generate stand- and landscape-level spatial predictions of forest inventory attributes. Further enhancement of forest inventory approaches that couple field measurements with cutting-edge remotely sensed and geospatial datasets is essential to sustainable forest management. We evaluated a novel Random Forest based k Nearest Neighbors (RF-kNN) imputation approach to couple remote sensing and geospatial data with field inventory collected by different sampling methods, in order to generate forest inventory information across large spatial extents. The forest inventory data collected by the FIA program of the US Forest Service were integrated with optical remote sensing and other geospatial datasets to produce biomass distribution maps for part of the Lake States and species-specific site index maps for the entire Lake States region. Targeting small-area applications of state-of-the-art remote sensing, LiDAR (light detection and ranging) data were integrated with field data collected by an inexpensive method, called variable plot sampling, in the Ford Forest of Michigan Tech to derive a standing volume map in a cost-effective way. The outputs of the RF-kNN imputation were compared with independent validation datasets and extant map products based on different sampling and modeling strategies. The RF-kNN modeling approach was found to be very effective, especially for large-area estimation, and produced results statistically equivalent to the field observations or to estimates derived from secondary data sources. The models are useful to resource managers for operational and strategic purposes.
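A minimal sketch of the RF-kNN idea, assuming tabular predictors for reference field plots and target pixels: a random forest is grown on the reference plots, proximity between a target and each reference is the share of trees in which the two land in the same leaf, and the attribute of the k most proximal plots is averaged. The forest settings, k, and the unweighted average are illustrative choices; the study's implementation may differ.

```python
# RF-kNN imputation sketch: random-forest proximities pick the k most similar
# reference field plots for each target pixel, whose observed attribute
# (e.g., biomass) is then averaged.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_knn_impute(X_ref, y_ref, X_target, k=5, n_trees=500, seed=0):
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    rf.fit(X_ref, y_ref)

    # Leaf index of every sample in every tree: shape (n_samples, n_trees).
    leaves_ref = rf.apply(X_ref)
    leaves_tgt = rf.apply(X_target)

    imputed = np.empty(len(X_target))
    for i, row in enumerate(leaves_tgt):
        # Proximity to each reference plot = share of trees sharing a leaf.
        prox = (leaves_ref == row).mean(axis=1)
        nearest = np.argsort(prox)[-k:]    # k most similar field plots
        imputed[i] = np.asarray(y_ref)[nearest].mean()
    return imputed
```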