971 results for Earnings per share.
Abstract:
This study compared pregnancy rates (PRs) and costs per calf born after fixed-time artificial insemination (FTAI) or AI after estrus detection (EDAI), before and after a single PGF2α treatment, in Bos indicus (Brahman-cross) heifers. On Day 0, body weight, body condition score, and the presence of a CL (46% of heifers) were determined. The heifers were then alternately allocated to one of two FTAI groups (FTAI-1, n = 139; FTAI-2, n = 141) or an EDAI group (n = 273). Heifers in the FTAI groups received an intravaginal progesterone-releasing device (IPRD; 0.78 g of progesterone) and 1 mg of estradiol benzoate intramuscularly (im) on Day 0. Eight days later, the IPRD was removed and heifers received 500 μg of PGF2α and 300 IU of eCG im; 24 hours later, they received 1 mg of estradiol benzoate im and were submitted to FTAI 30 to 34 hours after that (54 to 58 hours after IPRD removal). Heifers in the FTAI-2 group started treatment 8 days after those in the FTAI-1 group. Heifers in the EDAI group were inseminated approximately 12 hours after the detection of estrus between Days 4 and 9; at that point, heifers that had not been detected in estrus received 500 μg of PGF2α im, and EDAI continued until Day 13. Heifers in the FTAI groups had a higher overall PR (proportion pregnant of the entire group) than the EDAI group (34.6% vs. 23.2%; P = 0.003); however, the conception rate (PR of heifers submitted for AI) tended to favor the EDAI group (34.6% vs. 44.1%; P = 0.059). The cost per AI calf born was estimated at $267.67 for the FTAI groups and $291.37 for the EDAI group. It was concluded that, in Brahman heifers typical of those annually mated in northern Australia, FTAI increases the number of heifers pregnant and reduces the cost per calf born compared with EDAI.
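As a small illustration of the cost metric the comparison turns on, here is a sketch of the cost-per-calf arithmetic: total program cost divided by the number of AI calves born. The group sizes, pregnancy rates, and final cost-per-calf figures come from the abstract; the per-heifer program costs are back-solved placeholders, not reported values.

```python
# Cost per AI calf born = total program cost / number of AI calves born.
# Group sizes and pregnancy rates are from the abstract; the per-heifer
# costs are placeholders (assumptions) chosen to land near the reported values.
def cost_per_calf(n_heifers, pregnancy_rate, cost_per_heifer):
    calves_born = n_heifers * pregnancy_rate   # assumes one calf per pregnancy
    return n_heifers * cost_per_heifer / calves_born

print(f"FTAI: ${cost_per_calf(280, 0.346, 92.6):.2f} per calf")   # ~ $267.67 reported
print(f"EDAI: ${cost_per_calf(273, 0.232, 67.6):.2f} per calf")   # ~ $291.37 reported
```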
Abstract:
Head motion (HM) is a well-known confound in analyses of functional MRI (fMRI) data, and neuroimaging researchers therefore typically treat HM as a nuisance covariate in their analyses. Even so, it is possible that HM shares a common genetic influence with the trait of interest. Here we investigate the extent to which this relationship is due to shared genetic factors, using HM extracted from resting-state fMRI (RS-fMRI) and maternal- and self-report measures of Inattention and Hyperactivity-Impulsivity from the Strengths and Weaknesses of ADHD Symptoms and Normal Behaviour (SWAN) scales. Our sample consisted of healthy young adult twins (N = 627, 63% females, including 95 MZ and 144 DZ twin pairs, mean age 22, with mother-reported SWAN; N = 725, 58% females, including 101 MZ and 156 DZ pairs, mean age 25, with self-reported SWAN). This design enabled us to distinguish genetic from environmental factors in the association between head movement and the ADHD scales. HM was moderately correlated with maternal reports of Inattention (r = 0.17, p-value = 7.4E-5) and Hyperactivity-Impulsivity (r = 0.16, p-value = 2.9E-4), and these associations were mainly due to pleiotropic genetic factors, with genetic correlations [95% CIs] of rg = 0.24 [0.02, 0.43] and rg = 0.23 [0.07, 0.39], respectively. Correlations between self-reports and HM were not significant, due largely to increased measurement error. These results indicate that treating HM as a nuisance covariate in neuroimaging studies of ADHD will likely reduce power to detect between-group effects, as the implicit assumption of independence between HM and Inattention or Hyperactivity-Impulsivity is not warranted. The implications of this finding are problematic for fMRI studies of ADHD, as failing to apply HM correction is known to increase the likelihood of false positives. We discuss two ways to circumvent this problem: censoring the motion-contaminated frames of the RS-fMRI scan, or explicitly modeling the relationship between HM and Inattention or Hyperactivity-Impulsivity.
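As an illustration of the first option, a minimal sketch of frame censoring ("scrubbing") based on framewise displacement. The 0.5 mm threshold and the 50 mm head radius used to convert rotations into millimetres are common conventions in the motion-correction literature, not values taken from this study.

```python
# Drop RS-fMRI frames whose framewise displacement (FD) exceeds a threshold.
# FD = sum of absolute frame-to-frame changes in the six realignment
# parameters, with rotations converted to mm on an assumed 50 mm head radius.
import numpy as np

def framewise_displacement(motion, head_radius=50.0):
    """motion: (n_frames, 6) realignment parameters
    [trans_x, trans_y, trans_z in mm, rot_x, rot_y, rot_z in radians]."""
    deltas = np.abs(np.diff(motion, axis=0))
    deltas[:, 3:] *= head_radius               # radians -> mm of arc
    return np.concatenate([[0.0], deltas.sum(axis=1)])

def censor_frames(bold, motion, threshold=0.5):
    """bold: (n_frames, n_voxels) time series; keep only low-motion frames."""
    keep = framewise_displacement(motion) < threshold
    return bold[keep], keep

# Example with synthetic data
rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 1000))                 # toy RS-fMRI data
step_sd = [0.05, 0.05, 0.05, 0.001, 0.001, 0.001]       # mm and radians
motion = np.cumsum(rng.normal(0.0, step_sd, (200, 6)), axis=0)
clean, mask = censor_frames(bold, motion)
print(f"kept {mask.sum()} of {mask.size} frames")
```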
Abstract:
This study explores the decline of terrorism by conducting source-based case studies on two left-wing terrorist campaigns of the 1970s: those of the Rode Jeugd in the Netherlands and the Symbionese Liberation Army in the United States. The purpose of the case studies is to shed more light on the interplay of external and internal factors in the development of terrorist campaigns. This is done by presenting the history of the two chosen campaigns as narratives from the participants' points of view, based on interviews with participants and extensive archival material. Organizational resources and dynamics clearly influenced the course of the two campaigns, but in different ways. This divergence derives at least partly from dissimilarities in organizational design and incentive structure. Comparison of even these two cases shows that organizations using terrorism as a strategy can differ significantly, even when they share an ideological orientation, are of the same size, and operate in the same time period. Theories on the dynamics of terrorist campaigns would benefit from being more sensitive to this. The study also highlights that the demise of a terrorist organization does not necessarily lead to the decline of the terrorist campaign; research should therefore look at the development of terrorist activity beyond the lifespan of a single organization. The participants' collective ideological beliefs and goals functioned primarily as a sustaining force, a lens through which they interpreted all developments. On the other hand, the role of ideology should not be overstated: not all participants in the campaigns under study fully internalized the radical ideology. Rather, their participation was mainly based on their friendship with other participants. Instead of ideology per se, it is more instructive to look at how those involved described their organization, themselves, and their role in the revolutionary struggle. In both cases under study, the choice of the terrorist strategy was not merely the result of a cost-benefit calculation, but an important part of the participants' self-image. Indeed, the way the groups portrayed themselves corresponded closely with the forms of action they became involved in. Countermeasures and lack of support were major reasons for the decline of the campaigns. It is noteworthy, however, that the countermeasures would not have had the same impact had it not been for certain weaknesses of the groups themselves. Moreover, besides their direct impact on the campaigns, equally important was how the countermeasures affected the attitudes of the larger left-wing community and the public in general. In this context, attitudes towards both the terrorist campaign and the authorities were relevant to the outcome of the campaigns.
Abstract:
Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy: it aims at closing the loop of materials and substances while reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect how emissions change. Population and affluence (GDP/capita) often act as upward drivers of emissions. Technology, as emissions per service used, and consumption, as economic intensity of use, may act as drivers that reduce emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (an IPAT identity) was applied as the method in papers I-III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions was analysed in the Finnish energy sector over a long time period, 1950-2003 (paper I). Finnish emissions of NOx began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of growth in affluence and population. Carbon dioxide (CO2) emissions related to energy use during 1993-2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialization and decarbonisation did occur, but not sufficiently to offset population growth and rapidly increasing affluence during 1993-2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980-2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients were used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of growing population size and affluence should be compensated by improvements in technology (emissions per service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production, as well as technological innovations, will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential.
In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, limits set by the biological and physical properties of cultured fish, among others, will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts. Besides improving technology and dematerialization, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems, and eventually contributing to NOx emissions, needs to be emphasized. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among other options. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions for climate change mitigation are now called for. Environmental policies in general must cover all the regions related to production and impacts in order to avoid outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently fairly readily available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth consideration in setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included. To examine consumption, and also international trade, in more detail, imports were included in paper III. This example demonstrates well how outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses; nevertheless, trade-related issues must not be ignored.
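A minimal sketch of the ImPACT identity described above, with emissions decomposed as Population x Affluence (GDP per capita) x Consumption (service use per unit GDP) x Technology (emissions per unit of service). The two toy "years" are invented for illustration and are not figures from the papers.

```python
# ImPACT identity: emissions = P x (GDP/P) x (S/GDP) x (Em/S).
# Comparing two years driver by driver shows which factors pushed
# emissions up and which pulled them down.
def drivers(population, gdp, service_use, emissions):
    """Return the four ImPACT drivers; their product equals emissions."""
    return (population,
            gdp / population,           # affluence
            service_use / gdp,          # economic intensity of consumption
            emissions / service_use)    # emission intensity of technology

year_a = drivers(population=5.0e6, gdp=1.0e11, service_use=3.0e17, emissions=5.5e10)
year_b = drivers(population=5.2e6, gdp=1.3e11, service_use=3.2e17, emissions=5.0e10)

for name, a, b in zip(("population", "affluence", "consumption", "technology"),
                      year_a, year_b):
    print(f"{name:>11}: x{b / a:.3f}")
# The product of the four ratios equals the overall emission ratio.
```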
Abstract:
The article considers the implications of the decision of the High Court in Spotless Services Pty Ltd (1996) 141 ALR 92; 34 ATR 183. It argues in particular that the decision was made per incuriam.
Abstract:
Microvolunteering is bite-size volunteering with no commitment to repeat and minimal formality, involving short and specific actions. Online microvolunteering occurs through an internet-connected device. University students' online microvolunteering decisions were investigated using an extended theory of planned behavior (TPB) comprising attitudes and normative and control perceptions, with the additional variables of moral norm and group norm. Participants (N = 303) completed the main TPB questionnaire and a 1-month follow-up survey (N = 171) assessing engagement in online microvolunteering. The results generally supported the standard and additional TPB constructs as predictors of intention, and intention in turn predicted behavior. The findings suggest an important role for attitudes and moral considerations in understanding what influences this increasingly popular form of online activity.
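A sketch of how the extended-TPB test could be run, under assumed variable names: intention regressed on attitude, subjective norm, perceived behavioural control, moral norm and group norm, then follow-up behaviour on intention. The toy data stand in for the survey responses; the study's actual instrument and scoring may differ.

```python
# Extended TPB: intention ~ attitude + norms + control; behaviour ~ intention.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({   # toy stand-in for the questionnaire data (1-7 scales)
    "attitude":   [5.2, 3.1, 4.4, 2.8, 6.0, 3.9, 4.9, 2.5],
    "subj_norm":  [4.1, 2.9, 3.8, 3.0, 5.5, 3.2, 4.6, 2.7],
    "pbc":        [5.0, 3.5, 4.0, 2.5, 6.1, 3.0, 4.8, 2.9],
    "moral_norm": [4.8, 2.2, 3.9, 2.0, 5.9, 3.1, 4.4, 2.3],
    "group_norm": [4.5, 2.6, 3.5, 2.4, 5.7, 2.9, 4.2, 2.6],
    "intention":  [5.1, 2.8, 4.2, 2.3, 6.2, 3.3, 4.7, 2.4],
    "behaviour":  [1, 0, 1, 1, 1, 0, 1, 0],   # engagement at 1-month follow-up
})

intention_model = smf.ols(
    "intention ~ attitude + subj_norm + pbc + moral_norm + group_norm",
    data=df).fit()
behaviour_model = smf.logit("behaviour ~ intention", data=df).fit(disp=False)
print(intention_model.params)
print(behaviour_model.params)
```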
Abstract:
One of the objectives of general-purpose financial reporting is to provide information about the financial position, financial performance and cash flows of an entity that is useful to a wide range of users in making economic decisions. The current focus on potentially increased relevance of fair value accounting weighed against issues of reliability has failed to consider the potential impact on the predictive ability of accounting. Based on a sample of international (non-U.S.) banks from 24 countries during 2009-2012, we test the usefulness of fair values in improving the predictive ability of earnings. First, we find that the increasing use of fair values on balance-sheet financial instruments enhances the ability of current earnings to predict future earnings and cash flows. Second, we provide evidence that the fair value hierarchy classification choices affect the ability of earnings to predict future cash flows and future earnings. More precisely, we find that the non-discretionary fair value component (Level 1 assets) improves the predictability of current earnings whereas the discretionary fair value components (Level 2 and Level 3 assets) weaken the predictive power of earnings. Third, we find a consistent and strong association between factors reflecting country-wide institutional structures and predictive power of fair values based on discretionary measurement inputs (Level 2 and Level 3 assets and liabilities). Our study is timely and relevant. The findings have important implications for standard setters and contribute to the debate on the use of fair value accounting.
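A sketch of the predictive-ability test under assumed variable names: next-year earnings regressed on current earnings interacted with the shares of assets reported at fair-value Levels 1-3. The simulated data are illustrative only; the paper's actual specification and controls may differ.

```python
# Predictive-ability regression: earn_next ~ earn x (Level 1, 2, 3 shares).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "earn": rng.normal(0.010, 0.005, n),   # current earnings, scaled by assets
    "lvl1": rng.uniform(0.0, 0.3, n),      # Level 1 assets / total assets
    "lvl2": rng.uniform(0.0, 0.3, n),      # Level 2 assets / total assets
    "lvl3": rng.uniform(0.0, 0.1, n),      # Level 3 assets / total assets
})
# Simulate persistence that strengthens with Level 1 and weakens with Level 3
df["earn_next"] = (0.6 * df["earn"]
                   + 0.8 * df["lvl1"] * df["earn"]
                   - 0.8 * df["lvl3"] * df["earn"]
                   + rng.normal(0, 0.003, n))

fit = smf.ols("earn_next ~ earn * (lvl1 + lvl2 + lvl3)", data=df).fit()
print(fit.params)
# In the paper's terms: a positive earn:lvl1 coefficient says Level 1 holdings
# make current earnings more informative about future earnings, while negative
# earn:lvl2 / earn:lvl3 coefficients indicate weaker predictive power.
```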
Abstract:
A passive wavelength/time fiber-optic code-division multiple-access (W/T FO-CDMA) network is a viable option for high-speed access networks. Constructions of 2-D codes suitable for incoherent W/T FO-CDMA have been proposed to reduce the time spread of 1-D sequences. The 2-D constructions can be broadly classified as 1) hybrid codes and 2) matrix codes. In our earlier work [14], we proposed a new family of wavelength/time multiple-pulses-per-row (W/T MPR) matrix codes which have good cardinality and spectral efficiency and, at the same time, have the lowest off-peak autocorrelation and cross-correlation values, equal to unity. In this paper we propose an architecture for a W/T MPR FO-CDMA network designed using presently available devices and technology. A complete FO-CDMA network of ten users is simulated for various numbers of simultaneous users, and it is shown that 0 → 1 errors can occur only when the number of interfering users is at least equal to the threshold value.
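A sketch of the correlation property such wavelength/time matrix codes must satisfy: with rows as wavelengths and columns as time chips, every off-peak autocorrelation and every cross-correlation over cyclic time shifts is at most 1. The two toy codewords below were constructed by hand to exhibit the property; they are not members of the W/T MPR family from [14].

```python
# Check off-peak auto- and cross-correlation of 0/1 wavelength/time
# code matrices over all cyclic time (column) shifts.
import numpy as np

def wt_correlation(a, b):
    """Periodic correlation of two code matrices over cyclic time shifts."""
    return np.array([np.sum(a * np.roll(b, s, axis=1))
                     for s in range(a.shape[1])])

A = np.array([[1, 0, 0, 1, 0],   # 3 wavelengths x 5 chips, code weight 4
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1]])
B = np.array([[1, 0, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])

auto = wt_correlation(A, A)
cross = wt_correlation(A, B)
print("autocorrelation:", auto, "-> max off-peak:", auto[1:].max())  # peak 4, off-peak <= 1
print("cross-correlation:", cross, "-> max:", cross.max())           # <= 1 at every shift
```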
Abstract:
Introduction. We estimate the total yearly volume of peer-reviewed scientific journal articles published world-wide, as well as the share of these articles available openly on the Web, either directly or as copies in e-print repositories. Method. We rely on data from two commercial databases (ISI and Ulrich's Periodicals Directory), supplemented by sampling and Google searches. Analysis. A central issue is the finding that ISI-indexed journals publish far more articles per year (111) than non-ISI-indexed journals (26), which means that the total figure we obtain is much lower than many earlier estimates. Our method of analysing the number of repository copies (green open access) differs from several earlier studies, which counted the copies in identified repositories: we start from a random sample of articles and then test whether copies can be found by a Web search engine. Results. We estimate that in 2006 the total number of articles published was approximately 1,350,000. Of this number, 4.6% became immediately openly available and an additional 3.5% after an embargo period of, typically, one year. Furthermore, usable copies of 11.3% could be found in subject-specific or institutional repositories or on the home pages of the authors. Conclusions. We believe our results are the most reliable so far published and, therefore, should be useful in the ongoing debate about Open Access among both academics and science policy makers. The method is replicable and also lends itself to longitudinal studies in the future.
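The headline arithmetic of the Results section, applied to the estimated 2006 article volume. The percentages and the total are taken from the abstract; summing the three categories assumes they do not overlap.

```python
# Apply the reported open-access shares to the estimated article volume.
total_articles = 1_350_000   # estimated world output, 2006

shares = {
    "gold OA, immediately available":             0.046,
    "gold OA, after embargo (~1 year)":           0.035,
    "green OA copies (repositories, home pages)": 0.113,
}

for label, share in shares.items():
    print(f"{label}: {share:.1%} ~ {round(total_articles * share):,} articles")

open_total = sum(shares.values())   # assumes the categories are disjoint
print(f"openly available in some form: {open_total:.1%} "
      f"~ {round(total_articles * open_total):,} articles")
```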
Abstract:
Detecting Earnings Management Using Neural Networks. In trying to balance relevance against reliability of accounting data, generally accepted accounting principles (GAAP) allow company management, to some extent, to use judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management, the majority of them based on linear regression. The problem with linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed; however, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression which can handle non-linear relationships is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used, with both expense and revenue manipulation ranging between -5% and 5% of lagged total assets. Furthermore, two neural network-based models and two linear regression-based models are applied to a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
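A minimal sketch of the comparison the thesis runs: estimate "normal" accruals with the linear Jones (1991) regression and with a small feed-forward neural network, then take the residual as discretionary accruals. The simulated data, the injected non-linearity, and the network size are assumptions for illustration, not the study's data or architecture.

```python
# Jones (1991): TA/A_{t-1} = a(1/A_{t-1}) + b(dREV/A_{t-1}) + c(PPE/A_{t-1}) + e.
# Discretionary accruals = residual from the fitted "normal" accrual model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 500
lag_assets = rng.uniform(1e2, 1e4, n)
X = np.column_stack([
    1.0 / lag_assets,              # 1 / A_{t-1}
    rng.normal(0.05, 0.10, n),     # change in revenues / A_{t-1}
    rng.uniform(0.20, 0.80, n),    # gross PPE / A_{t-1}
])
# Toy accrual process with a non-linear performance effect that a
# linear model cannot capture
y = (0.10 * X[:, 1] - 0.05 * X[:, 2]
     + 0.30 * np.maximum(X[:, 1], 0.0) ** 2
     + rng.normal(0, 0.02, n))     # total accruals / A_{t-1}

jones = LinearRegression().fit(X, y)
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                random_state=0)).fit(X, y)

da_linear = y - jones.predict(X)   # discretionary accruals, Jones model
da_nn = y - nn.predict(X)          # discretionary accruals, neural network
print(f"residual std, Jones: {da_linear.std():.4f}  NN: {da_nn.std():.4f}")
```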
Abstract:
There is a large literature developing theories of when and where earnings management occurs. Among the several possible motives driving earnings management behaviour in firms, this thesis focuses on motives that aim to influence the valuation of the firm. Earnings management that makes a firm look better than it really is may result in disappointment for the individual investor, and potentially leads to a welfare loss for society when resource allocation is distorted. More specific knowledge of the occurrence of earnings management should increase investor awareness and thus lead to better investments and increased welfare. This thesis contributes to the literature by increasing the knowledge of where and when earnings management is likely to occur. More specifically, essay 1 adds to existing research connecting earnings management to IPOs, arguing that the tendency to manage earnings differs between IPOs: evidence is found that entrepreneur-owned IPOs are more likely to be earnings managers than institutionally owned ones. Essay 2 considers the reliability of the quarterly earnings reports that precede insider selling binges, and suggests that earnings management is likely to occur before heavy insider selling. Essay 3 examines the widely studied phenomenon of income smoothing and investigates whether it can be explained with proxies for information asymmetry, arguing that smoothing is more pervasive in private and smaller firms.
Abstract:
A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary powers to extract private benefits by expropriating shareholder value. One possible avenue of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. According to the value-relevance theorem, this lower quality of earnings should then be reflected in the firm's stock price. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of lower-quality accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management (measured as the extent of discretionary accruals in total disclosed earnings) and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, the first two of which utilize data on Russian listed firms, whereas the third essay uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. The essay provides empirical evidence that the desired impact of the reforms is not fully realized in Russia without proper enforcement; instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of earnings. The results presented in the essay support the notion, proposed by Leuz et al. (2003), that reforms aimed at bringing transparency do not produce the desired results in economies where investor protection is low and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their power to manipulate reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than firms with other controllers, such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings across three groups of countries in Europe. The evidence suggests that ownership structure is the more important determinant in developed and transparent countries, while economic factors dominate in developing and transitional countries.
Abstract:
A better understanding of stock price changes is important in guiding many economic activities. Since prices often do not change without good reason, searching for related explanatory variables has attracted many enthusiasts. This book seeks answers from prices per se by relating price changes to their conditional moments. This is based on the belief that prices are the product of a complex psychological and economic process, and that their conditional moments derive ultimately from these psychological and economic shocks. Utilizing information about conditional moments hence makes this an attractive alternative to using other selective financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distributions; it finds that the significance of the estimated mean-variance ratio can be affected by the assumed distributions and by time variation in skewness. The second paper decomposes conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility (6 to 9 percent in the UK and 2 to 3 percent in Germany). The third paper looks at the heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that the heteroskedasticity in stock returns takes several forms, including deterministic changes in variance due to seasonal factors, random adjustments in variance due to market and macro factors, and ARCH processes driven by past information. The fourth paper examines the role of higher moments, especially skewness and kurtosis, in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures, and that they are costly to diversify, due either to the possible elimination of their desirable parts or to the unsustainability of diversification strategies based on them.
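A sketch of the kind of model the third paper works with: an ARCH process fitted to (here, simulated) returns using Kevin Sheppard's `arch` package. The parameter values and the lag order are illustrative assumptions, not the book's estimates.

```python
# Simulate an ARCH(1) return series and recover its parameters by MLE.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
n = 1000
eps = np.zeros(n)
for t in range(1, n):                          # simulate an ARCH(1) series
    sigma2 = 0.5 + 0.6 * eps[t - 1] ** 2       # conditional variance
    eps[t] = np.sqrt(sigma2) * rng.standard_normal()

res = arch_model(eps, mean="Constant", vol="ARCH", p=1).fit(disp="off")
print(res.params)   # omega and alpha[1] should recover roughly 0.5 and 0.6
# Conditioning variables such as seasonal dummies or market/macro factors
# would enter a fuller specification, e.g. via an exogenous-regressor mean
# model in the same package.
```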
Abstract:
The study contributes to our understanding of the forces that drive the stock market by investigating how different types of investors react to new financial statement information. Using the extremely comprehensive official register of shareholdings in Finland, we find that the majority of investors are more likely to sell (buy) stocks in a company after a positive (negative) earnings surprise, and show a bias towards buying after the disclosure of new financial statement information. Large investors, on the other hand, show behavior opposite to that of the majority of investors in the market. Further, foreign investors show behavior similar to that of domestic investors. We suggest investor overconfidence and asymmetric information as possible explanations for the documented behavior.