30 results for Information content

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the impact of timeliness and credit ratings on the information content of the earnings announcements of Greek listed firms from 2001 to 2008. Using the classical event study methodology and regression analysis, we find that firms tend to release good news on time and are inclined to delay the release of bad news. We also provide evidence that the level of corporate risk differentiates the information content of earnings according to the credit rating category. Specifically, firms displaying high creditworthiness enjoy positive excess returns on earnings announcement dates. In contrast, firms with low creditworthiness undergo significant share price erosions on earnings announcement days. We also observe a substitution effect between timeliness and credit ratings in relation to the information content of earnings announcements. Specifically, we find that as the credit category of earnings-announcing firms improves, the informational role of timeliness is mitigated.
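
As a rough illustration of the event-study step described above, the sketch below estimates market-model abnormal returns around an announcement date and cumulates them over the event window. The column names, window lengths and the market-model specification are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
import pandas as pd

def abnormal_returns(stock_ret: pd.Series, market_ret: pd.Series,
                     event_idx: int, est_win: int = 120, ev_win: int = 5) -> pd.Series:
    """Market-model abnormal returns around an announcement day.

    stock_ret / market_ret: daily returns aligned on the same index;
    event_idx: positional index of the announcement day.
    """
    est = slice(event_idx - est_win - ev_win, event_idx - ev_win)
    beta, alpha = np.polyfit(market_ret.iloc[est].to_numpy(),
                             stock_ret.iloc[est].to_numpy(), 1)
    ev = slice(event_idx - ev_win, event_idx + ev_win + 1)
    expected = alpha + beta * market_ret.iloc[ev]
    return stock_ret.iloc[ev] - expected

# Cumulative abnormal return over the (-5, +5) event window:
# car = abnormal_returns(firm_returns, index_returns, t_announce).sum()
```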

Relevance:

100.00%

Publisher:

Abstract:

This study pursues two objectives: first, to provide evidence on the information content of dividend policy, conditional on past earnings and dividend patterns prior to an annual earnings decline; second, to examine the effect of the magnitude of low earnings realizations on dividend policy when firms have more-or-less established dividend payouts. The information content of dividend policy for firms that incur earnings reductions following long patterns of positive earnings and dividends has been examined (DeAngelo et al., 1992, 1996; Charitou, 2000). No research, however, has examined how the informativeness of dividend policy changes in the event of an earnings drop varies with the pattern of past earnings and dividends. Our dataset consists of 4,873 U.S. firm-year observations over the period 1986-2005. Our evidence supports the hypotheses that, among earnings-reducing or loss firms, longer patterns of past earnings and dividends: (a) strengthen the information conveyed by dividends regarding future earnings, and (b) enhance the role of the magnitude of low earnings realizations in explaining dividend policy decisions, in that earnings hold more information content for explaining the likelihood of dividend cuts the longer the past earnings and dividend patterns are. Both results stem from the stylized facts that managers aim to maintain consistency with respect to historic payout policy, being reluctant to proceed with dividend reductions, and that this reluctance is greater the more established the historic payout policy is. © 2010 The Authors. Journal compilation © 2010 Accounting Foundation, The University of Sydney.
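
A minimal sketch of the kind of test described above: a logit of the likelihood of a dividend cut on the magnitude of the low earnings realization, interacted with the length of the prior earnings-and-dividend pattern. The variable names (`earnings_level`, `pattern_length`, `dividend_cut`) are hypothetical placeholders, not the authors' specification.

```python
import pandas as pd
import statsmodels.api as sm

def dividend_cut_logit(df: pd.DataFrame):
    """Likelihood of a dividend cut explained by the low-earnings magnitude
    and its interaction with the length of the past payout pattern."""
    df = df.assign(interaction=df["earnings_level"] * df["pattern_length"])
    X = sm.add_constant(df[["earnings_level", "pattern_length", "interaction"]])
    return sm.Logit(df["dividend_cut"], X).fit()

# The interaction term captures whether low earnings explain dividend cuts
# more strongly for longer-established payout patterns.
```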

Relevance:

60.00%

Publisher:

Abstract:

Digital watermarking aims at embedding information in digital data. The watermark is usually required to be imperceptible, unremovable and to have a high information content. Unfortunately, these three requirements conflict. For example, making the watermark more robust makes it more perceptible and/or less informative. This paper investigates the relationship between the watermark information content and the distortion induced by quantisation, such as lossy compression.
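
To make the trade-off concrete, here is a small sketch that embeds bits by additive spread spectrum, simulates lossy compression with a uniform quantiser, and reports the induced distortion together with the decoding bit-error rate. It only illustrates the kind of relationship the paper studies; the embedding scheme, distortion measure and parameters are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_and_quantise(host: np.ndarray, bits: np.ndarray,
                       strength: float, step: float):
    """Additive spread-spectrum embedding followed by uniform quantisation
    (a crude stand-in for lossy compression). Returns (MSE distortion, BER)."""
    carriers = rng.standard_normal((bits.size, host.size))
    marked = host + strength * (2 * bits - 1) @ carriers / np.sqrt(host.size)
    quantised = step * np.round(marked / step)
    distortion = np.mean((quantised - host) ** 2)
    decoded = (carriers @ quantised > 0).astype(int)   # correlation decoding
    return distortion, float(np.mean(decoded != bits))

# Sweeping `step` shows how coarser quantisation raises distortion and
# erodes the recoverable watermark information.
# host = rng.standard_normal(4096); bits = rng.integers(0, 2, 32)
# print(embed_and_quantise(host, bits, strength=1.0, step=0.5))
```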

Relevance:

60.00%

Publisher:

Abstract:

Digital watermarking aims at embedding information in digital data. The watermark is usually required to be imperceptible, unremovable and to have a high information content. Unfortunately, these three requirements are contradicting. For example, having a more robust watermark makes it either more perceptible or/and less informative. For Gaussian data and additive white Gaussian noise, an optimal but also impractical scheme has already be devised. Since then, many practical schemes have tried to approach the theoretical limits. This paper investigate improvements to current state-of-the-art embedding schemes.
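
One well-known family of practical schemes that approaches the Costa limit is quantisation index modulation (dither modulation), sketched below. It is shown as background for the kind of embedding being improved, not as the paper's proposed scheme; `delta` is an illustrative quantisation step.

```python
import numpy as np

def qim_embed(x: np.ndarray, bits: np.ndarray, delta: float) -> np.ndarray:
    """Binary dither modulation: each sample is quantised onto the lattice
    selected by its bit (offset 0 for bit 0, delta/2 for bit 1)."""
    dither = bits * (delta / 2.0)
    return delta * np.round((x - dither) / delta) + dither

def qim_decode(y: np.ndarray, delta: float) -> np.ndarray:
    """Minimum-distance decoding over the two dithered lattices."""
    d0 = np.abs(y - delta * np.round(y / delta))
    shifted = y - delta / 2.0
    d1 = np.abs(shifted - delta * np.round(shifted / delta))
    return (d1 < d0).astype(int)

# Bits survive as long as the attack noise stays below roughly delta/4 per sample.
# x = np.random.randn(8); b = np.array([0, 1, 0, 1, 1, 0, 0, 1])
# assert (qim_decode(qim_embed(x, b, 1.0), 1.0) == b).all()
```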

Relevance:

60.00%

Publisher:

Abstract:

This thesis initially presents an 'assay' of the literature pertaining to individual differences in human-computer interaction. A series of experiments is then reported, designed to investigate the association between a variety of individual characteristics and various computer task and interface factors. Predictor variables included age, computer expertise, and psychometric tests of spatial visualisation, spatial memory, logical reasoning, associative memory, and verbal ability. These were studied in relation to a variety of computer-based tasks, including: (i) word processing and its component elements; (ii) the location of target words within passages of text; (iii) the navigation of networks and menus; (iv) command generation using menus and command line interfaces; (v) the search and selection of icons and text labels; (vi) information retrieval. A measure of self-report workload was also included in several of these experiments. The main experimental findings included: (i) an interaction between spatial ability and the manipulation of semantic but not spatial interface content; (ii) verbal ability being predictive only of certain task components of word processing; (iii) age differences in word processing and information retrieval speed but not accuracy; (iv) evidence of compensatory strategies being employed by older subjects; (v) evidence of performance strategy differences which disadvantaged high spatial subjects in conditions of low spatial information content; (vi) interactive effects of associative memory, expertise and command strategy; (vii) an association between logical reasoning and word processing but not information retrieval; (viii) an interaction between expertise and cognitive demand; and (ix) a stronger association between cognitive ability and novice performance than expert performance.

Relevance:

60.00%

Publisher:

Abstract:

SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. This data was qualitatively and quantitatively evaluated for its potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to the combination of image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types is investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
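
For illustration, the sketch below computes a per-pixel local-variance texture layer of the sort that can be stacked with the spectral bands (and grid-encoded elevation and slope) before per-pixel classification. The window size and the variance feature are assumptions made here for the sketch, not the sequential or parallel segmentation methods evaluated in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(band: np.ndarray, size: int = 5) -> np.ndarray:
    """Per-pixel texture measure: variance within a size x size moving window."""
    band = band.astype(float)
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band ** 2, size)
    return mean_sq - mean ** 2

# Feature image for a per-pixel classifier: spectral bands plus texture and,
# optionally, grid-encoded topographic layers.
# features = np.dstack([green, red, nir, local_variance(nir), elevation, slope])
```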

Relevance:

60.00%

Publisher:

Abstract:

We address the question of how to communicate among distributed processes values such as real numbers, continuous functions and geometrical solids with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with certain accuracy and focusing on certain locality as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
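
A toy rendering of the query-answer idea, with queries simplified to rational accuracy bounds and answers to interval approximants. The thesis models both as elements of continuous lattices, so this is only a sketch of the interaction pattern, not its semantic model.

```python
from fractions import Fraction
from typing import Iterator

Query = Fraction                      # requested accuracy bound
Answer = tuple[Fraction, Fraction]    # interval approximant [lo, hi]

def sqrt2_process(queries: Iterator[Query]) -> Iterator[Answer]:
    """For each incoming query (an accuracy bound) emit an interval containing
    sqrt(2) that is at least that tight, refining lazily between queries."""
    lo, hi = Fraction(1), Fraction(2)
    for eps in queries:
        while hi - lo > eps:              # bisection supplies just enough precision
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        yield (lo, hi)

# Ask for successively tighter answers:
# for answer in sqrt2_process(iter([Fraction(1, 10), Fraction(1, 1000)])):
#     print(answer)
```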

Relevance:

60.00%

Publisher:

Abstract:

Divisia money is a monetary aggregate that gives each component asset an assigned weight. We use an evolutionary neural network to calculate new Divisia weights for each component utilising the Bank of England monetary data for the U.K. We propose a new monetary aggregate using our newly derived weights to carry out quantitative inflation prediction. The results show that this new monetary aggregate has better inflation forecasting performance than the traditionally constructed Bank of England Divisia money. This result is important for monetary policymakers, as improved construction of monetary aggregates will yield tighter relationships between key macroeconomic variables and, ultimately, greater macroeconomic control. Research is ongoing to establish the extent of the increased information content and parameter stability of this new monetary aggregate.
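
For reference, a conventional Törnqvist-Theil Divisia growth rate can be computed as below, with expenditure-share weights averaged over adjacent periods; these shares are the weights that the paper's evolutionary neural network replaces with newly derived ones. The array shapes and the user-cost inputs are assumptions for the sketch.

```python
import numpy as np

def divisia_growth(quantities: np.ndarray, user_costs: np.ndarray) -> np.ndarray:
    """Törnqvist-Theil Divisia growth rates for an aggregate of n component assets.

    quantities, user_costs: arrays of shape (T, n); returns T-1 growth rates."""
    expenditure = quantities * user_costs
    shares = expenditure / expenditure.sum(axis=1, keepdims=True)
    avg_shares = 0.5 * (shares[1:] + shares[:-1])    # two-period average weights
    dlog_q = np.diff(np.log(quantities), axis=0)     # component growth rates
    return (avg_shares * dlog_q).sum(axis=1)
```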

Relevance:

60.00%

Publisher:

Abstract:

The expansion of the Internet has made the task of searching a crucial one. Internet users, however, must often make a great effort to formulate a search query that returns the required results. Many methods have been devised to assist in this task by helping users modify their queries to give better results. In this paper we propose an interactive method for query expansion. It is based on the observation that documents are often found to contain terms with high information content, which can summarise their subject matter. We present experimental results which demonstrate that our approach significantly shortens the time required to accomplish a given task through web searches.
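
A minimal sketch of the underlying idea: rank terms from the returned documents by a simple tf-idf information score and offer the top ones as interactive expansion candidates. The scoring and tokenisation here are placeholders, not the paper's method.

```python
import math
from collections import Counter

def expansion_terms(results: list[str], collection: list[str], k: int = 5) -> list[str]:
    """Rank words from the returned documents by a tf-idf style information
    score and return the top k as candidate expansion terms."""
    n = len(collection)
    df = Counter(w for doc in collection for w in set(doc.lower().split()))
    tf = Counter(w for doc in results for w in doc.lower().split())
    score = {w: tf[w] * math.log(n / (1 + df[w])) for w in tf}
    return sorted(score, key=score.get, reverse=True)[:k]

# suggested = expansion_terms(top_documents, all_documents)
# The user then picks which suggested terms to add to the query.
```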

Relevance:

60.00%

Publisher:

Abstract:

This study examines the information content of alternative implied volatility measures for the 30 components of the Dow Jones Industrial Average Index from 1996 until 2007. Along with the popular Black-Scholes and "model-free" implied volatility expectations, the recently proposed corridor implied volatility (CIV) measures are explored. For all pair-wise comparisons, it is found that a CIV measure closely related to the model-free implied volatility nearly always delivers the most accurate forecasts for the majority of the firms. This finding remains consistent across different forecast horizons, volatility definitions, loss functions and forecast evaluation settings.
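
As background for how such measures are built, the sketch below discretises the model-free implied-variance integral of out-of-the-money option prices over strikes and truncates it to a corridor. The estimator details (strike grid, corridor barriers, discounting) are simplified assumptions rather than the paper's exact construction.

```python
import numpy as np

def corridor_implied_variance(strikes, calls, puts, forward, r, tau, lo, hi):
    """Discretised corridor implied variance: the model-free integral of
    out-of-the-money option prices over strikes, truncated to [lo, hi]."""
    strikes = np.asarray(strikes, dtype=float)
    otm = np.where(strikes < forward, np.asarray(puts, float), np.asarray(calls, float))
    mask = (strikes >= lo) & (strikes <= hi)
    integrand = otm[mask] / strikes[mask] ** 2
    return (2.0 * np.exp(r * tau) / tau) * np.trapz(integrand, strikes[mask])

# Widening the corridor to cover all traded strikes recovers a discretised
# model-free implied variance; its square root is the volatility forecast.
```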

Relevance:

60.00%

Publisher:

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, generally modelled as Poisson-distributed, due to the reduced X-ray dose involved in image formation. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) was presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to support the denoising algorithms in performing more effectively. © 2012 Elsevier Ltd. All rights reserved.
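
A small evaluation harness of the kind implied above: degrade a reference image with Poisson quantum noise, apply a candidate filter, and score the restoration with PSNR and SSIM. The compared algorithms themselves (AAS, BM3Dc, HHM, TLS, AV, BM3D, K-SVD) are not reimplemented here; the photon count and the moving-average baseline are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

def evaluate_filter(clean: np.ndarray, denoise, photons: float = 50.0) -> dict:
    """Degrade a [0, 1] image with Poisson quantum noise, apply `denoise`,
    and score the restoration with PSNR and SSIM."""
    noisy = rng.poisson(clean * photons) / photons
    restored = np.clip(denoise(noisy), 0.0, 1.0)
    return {"psnr": peak_signal_noise_ratio(clean, restored, data_range=1.0),
            "ssim": structural_similarity(clean, restored, data_range=1.0)}

# Baseline: the simple moving-average filter used in real-time processing.
# print(evaluate_filter(clean_img, lambda x: uniform_filter(x, size=3)))
```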

Relevance:

60.00%

Publisher:

Abstract:

This paper examines investors' reactions to dividend reductions or omissions conditional on past earnings and dividend patterns for a sample of eighty-two U.S. firms that incurred an annual loss. We document that the market reaction for firms with long patterns of past earnings and dividend payouts is significantly more negative than for firms with less-established past earnings and dividend records. Our results can be explained by the following line of reasoning. First, consistent with DeAngelo, DeAngelo, and Skinner (1992), a loss following a long stream of earnings and dividend payments represents an unreliable indicator of future earnings. Thus, established firms have lower loss reliability than less-established firms. Second, because dividend policy is a substitute for current earnings as a means of forecasting future earnings, lower loss reliability increases the information content of dividend reductions. Therefore, given the presence of a loss, the longer the stream of prior earnings and dividend payments, (1) the lower the loss reliability and (2) the more reliably dividend cuts are perceived as an indication that earnings difficulties will persist in the future.

Relevance:

40.00%

Publisher:

Abstract:

Ensuring the security of corporate information, which is increasingly stored, processed and disseminated using information and communications technologies [ICTs], has become an extremely complex and challenging activity. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of security breaches, and in so doing, protecting corporate information, is the formulation and application of a formal information security policy (InSPy). Whilst a great deal has now been written about the importance and role of the information security policy, and approaches to its formulation and dissemination, there is relatively little empirical material that explicitly addresses the structure or content of security policies. The broad aim of the study reported in this paper is to fill this gap in the literature by critically examining the structure and content of authentic information security policies, rather than simply making general prescriptions about what they ought to contain. Having established the structure and key features of the reviewed policies, the paper critically explores the underlying conceptualisation of information security embedded in the policies. There are two important conclusions to be drawn from this study: (1) the wide diversity of disparate policies and standards in use is unlikely to foster a coherent approach to security management; and (2) the range of specific issues explicitly covered in university policies is surprisingly narrow, and reflects a highly techno-centric view of information security management.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents the design and results of a task-based user study, based on Information Foraging Theory, on a novel user interaction framework - uInteract - for content-based image retrieval (CBIR). The framework includes a four-factor user interaction model and an interactive interface. The user study involves three focused evaluations, 12 simulated real-life search tasks with different complexity levels, 12 comparative systems and 50 subjects. Information Foraging Theory is applied to the user study design and the quantitative data analysis. The systematic findings have not only shown how effective and easy to use the uInteract framework is, but also illustrated the value of Information Foraging Theory for interpreting user interaction with CBIR. © 2011 Springer-Verlag Berlin Heidelberg.

Relevance:

40.00%

Publisher:

Abstract:

The paper proposes an ISE (Information goal, Search strategy, Evaluation threshold) user classification model based on Information Foraging Theory for understanding user interaction with content-based image retrieval (CBIR). The proposed model is verified by a multiple linear regression analysis based on 50 users' interaction features collected from a task-based user study of interactive CBIR systems. To the best of our knowledge, this is the first principled user classification model in CBIR verified by a formal and systematic qualitative analysis of extensive user interaction data. Copyright 2010 ACM.
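
A sketch of the kind of verification analysis described above: a multiple linear regression of one ISE dimension on logged interaction features. The feature and target names are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm

def fit_ise_regression(df: pd.DataFrame, target: str = "evaluation_threshold"):
    """Multiple linear regression of one ISE dimension on logged interaction features."""
    predictors = ["queries_issued", "images_viewed", "time_per_iteration"]
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[target], X).fit()

# print(fit_ise_regression(interaction_log).summary())
```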