38 results for distributions to shareholders
in Aston University Research Archive
Abstract:
This chapter takes a social theory of practice approach to examining institutional work; that is, how institutions are created, maintained, and disrupted through the actions, interactions, and negotiations of multiple actors. We examine alternative approaches that organizations use to deal with institutional pluralism based on a longitudinal real-time case study of a utility company grappling with opposing market and regulatory logics over time. These two logics required the firm to both mitigate its significant market power and also maintain its commercially competitive focus and responsiveness to shareholders. Institutional theorists have long acknowledged that institutions have a central logic (Friedland & Alford, 1991) or rationality (DiMaggio & Powell, 1983; Scott, 1995/2001; Townley, 2002), comprising a set of material and symbolic practices and organizing principles that provide logics of action for organizations and individuals, who then reproduce the institutions through their actions (Glynn & Lounsbury, 2005; Suddaby & Greenwood, 2005). Despite a monolithic feel to much institutional theory, in which a dominant institutional logic appears to prevail, institutional theorists also acknowledge the plurality of institutions (e.g. Friedland & Alford, 1991; Kraatz & Block, 2008; Lounsbury, 2007; Meyer & Rowan, 1977; Whittington, 1992). While these pluralistic institutions may be interdependent, they are not considered to coexist in harmony; “There is no question but that many competing and inconsistent logics exist in modern society” (Scott, 1995: 130).
Abstract:
We are the first to examine the market reaction to 13 announcement dates related to IFRS 9 for over 5,400 European listed firms. We find an overall positive reaction to the introduction of IFRS 9. The regulation is particularly beneficial to shareholders of firms in countries with weaker rule of law and a smaller divergence between local GAAP and IAS 39. Bootstrap simulations rule out the possibility that sampling error or data mining is driving our findings. Our main findings are also robust to confounding events and the extent of media coverage of each event. These results suggest that investors perceive the new regulation as shareholder-wealth enhancing and support the view that stronger comparability across accounting standards of European firms is beneficial to international investors and outweighs the costs of poorer firm-specific information.
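To illustrate the bootstrap logic described above, here is a minimal sketch in Python; the return panel, event window, and abnormal-return proxy are invented placeholders rather than the paper's actual data or event-study design.

```python
# Illustrative sketch: bootstrap test for the mean abnormal return around an
# event date. Firm returns and the event window are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

n_firms, n_days = 500, 250                       # hypothetical panel of daily returns
returns = rng.normal(0.0, 0.01, size=(n_firms, n_days))
event_day = 200
window = slice(event_day - 1, event_day + 2)     # 3-day event window

# Abnormal return proxy: event-window return minus each firm's own mean return.
car = returns[:, window].sum(axis=1) - 3 * returns.mean(axis=1)
observed = car.mean()

# Bootstrap null: resample placebo event days and recompute the mean CAR.
boot = []
for _ in range(2000):
    day = rng.integers(10, n_days - 10)
    w = slice(day - 1, day + 2)
    fake_car = returns[:, w].sum(axis=1) - 3 * returns.mean(axis=1)
    boot.append(fake_car.mean())
boot = np.array(boot)

p_value = (np.abs(boot) >= abs(observed)).mean()
print(f"mean CAR = {observed:.5f}, bootstrap p-value = {p_value:.3f}")
```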
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
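A minimal sketch of the naive mean field idea mentioned above, for an Ising-type model; the couplings and fields are random placeholders, and the TAP refinement is noted only in a comment.

```python
# Naive mean field approximation for an Ising-type model with couplings J and
# fields h: iterate the self-consistency equations m_i = tanh(h_i + sum_j J_ij m_j).
# The TAP approach would add the Onsager reaction term:
#   m_i = tanh(h_i + sum_j J_ij m_j - m_i * sum_j J_ij**2 * (1 - m_j**2)).
import numpy as np

rng = np.random.default_rng(1)
n = 20
J = rng.normal(0, 0.1, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(0, 0.5, size=n)

m = np.zeros(n)                          # mean field magnetisations
for _ in range(200):
    m_new = np.tanh(h + J @ m)
    if np.max(np.abs(m_new - m)) < 1e-8:
        m = m_new
        break
    m = 0.5 * m + 0.5 * m_new            # damping to aid convergence

print("mean field magnetisations:", np.round(m, 3))
```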
Abstract:
There has been a revival of interest in economic techniques for measuring the value of a firm to its shareholders, most notably economic value added (EVA). This technique, based upon the concept of economic value equating to total value, is founded upon the assumptions of classical liberal economic theory. Such techniques have been criticised both for the extent of adjustment to published accounts needed to make the technique work and for their validity in actually measuring value in a meaningful context. This paper critiques economic value added techniques as a means of calculating changes in shareholder value, contrasting such techniques with more traditional techniques of measuring value added. It uses the company Severn Trent plc as a case example in order to evaluate and contrast the techniques in action. The paper demonstrates discrepancies between the calculated results from using economic value added analysis and those reported using conventional accounting measures. It considers the merits of the respective techniques in explaining shareholder and managerial behaviour and the problems with using such techniques in considering the wider stakeholder concept of value. It concludes that this economic value added technique has merits when compared with traditional accounting measures of performance but that it does not provide the panacea claimed by its proponents.
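For readers unfamiliar with the technique, a minimal sketch of the core EVA calculation follows; the figures are invented and are not drawn from the Severn Trent accounts analysed in the paper.

```python
# Basic economic value added (EVA) calculation: residual income after charging
# for the cost of all capital employed. All input figures below are illustrative.
def economic_value_added(nopat: float, wacc: float, invested_capital: float) -> float:
    """EVA = net operating profit after tax - WACC * invested capital."""
    return nopat - wacc * invested_capital

if __name__ == "__main__":
    eva = economic_value_added(nopat=150.0, wacc=0.08, invested_capital=1600.0)
    print(f"EVA = {eva:.1f} (in the same currency units as the inputs)")
```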
Abstract:
Distributed source analyses of half-field pattern onset visual evoked magnetic responses (VEMR) were carried out with a view to locating the source of the largest component, the CIIm. The analyses were performed using a series of realistic source spaces that took account of the anatomy of the visual cortex. Accuracy was enhanced by constraining the source distributions to lie within the visual cortex only. Further constraints on the source space yielded reliable, but possibly less meaningful, solutions.
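The specific source-analysis algorithm is not described in the abstract; the sketch below shows one generic way such a constraint can be imposed, using an L2 minimum-norm estimate whose lead-field columns outside an assumed cortical mask are excluded. The lead field, measurements, and mask are random placeholders.

```python
# Generic sketch (not the authors' exact method): an L2 minimum-norm source
# estimate in which candidate sources are restricted to a "visual cortex" mask
# by discarding lead-field columns outside the allowed source space.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 64, 500
L = rng.normal(size=(n_sensors, n_sources))        # placeholder lead field
mask = np.zeros(n_sources, dtype=bool)
mask[100:180] = True                                # hypothetical cortical patch

b = rng.normal(size=n_sensors)                      # placeholder measured field

L_c = L[:, mask]                                    # constrained lead field
lam = 1e-2                                          # Tikhonov regularisation
q_c = L_c.T @ np.linalg.solve(L_c @ L_c.T + lam * np.eye(n_sensors), b)

q = np.zeros(n_sources)
q[mask] = q_c                                       # sources outside the mask stay zero
print("constrained solution norm:", np.linalg.norm(q))
```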
Abstract:
This work presents pressure distributions and fluid flow patterns on the shellside of a cylindrical shell-and-tube heat exchanger. The apparatus used was constructed from glass, enabling direct observation of the flow using a dye release technique, and had ten traversable pressure-instrumented tubes permitting detailed pressure distributions to be obtained. The 'exchanger' had a large tube bundle (278 tubes) and main flow areas typical of practical designs. Six geometries were studied: three baffle spacings, each with and without baffle leakage. Results are also presented of three-dimensional modelling of shellside flows using the Harwell Laboratory's FLOW3D code. Flow visualisation provided flow patterns in the central plane of the bundle and adjacent to the shell wall. Comparison of these highlighted significant radial flow variations. In particular, separated regions, originating from the baffle tips, were observed. The size of these regions was small in the bundle central plane but large adjacent to the shell wall and extended into the bypass lane. This appeared to reduce the bypass flow area and hence the bypass flow fraction. The three-dimensional flow modelling results were presented as velocity vector and isobar maps. The vector maps illustrated regions of high and low velocity which could be prone to tube vibration and fouling. Separated regions were also in evidence. A non-uniform crossflow was discovered with, in general, higher velocities in the central plane of the bundle than near the shell wall. The form of the isobar maps calculated by FLOW3D was in good agreement with experimental results. In particular, larger pressure drops occurred across the inlet than the outlet of a crossflow region and were higher near the upstream than the downstream baffle face. The effect of baffle spacing and baffle leakage on crossflow and window pressure drop measurements was identified. Agreement between the current measurements, previously obtained data and commonly used design correlations/models was, in general, poor. This was explained in terms of the increased understanding of shellside flow. The bulk of previous data, which derives from small-scale rigs with few tubes, has been shown to be unrepresentative of typical commercial units. The Heat Transfer and Fluid Flow Service design program TASC provided the best predictions of the current pressure drop results. However, a number of simple one-dimensional models in TASC are, individually, questionable. Some revised models have been proposed.
Abstract:
Predicting future need for water resources has traditionally been, at best, a crude mixture of art and science. This has prevented the evaluation of water need from being carried out in either a consistent or comprehensive manner. This inconsistent and somewhat arbitrary approach to water resources planning led to well-publicised premature developments in the 1970s and 1980s, but privatisation of the Water Industry, including creation of the Office of Water Services and the National Rivers Authority in 1989, turned the tide of resource planning to the point where funding of schemes and their justification by the Regulators could no longer be assumed. Furthermore, considerable areas of uncertainty were beginning to enter the debate and complicate the assessment. It was also no longer appropriate to consider that contingencies would continue to lie solely on the demand side of the equation. An inability to calculate the balance between supply and demand may mean an inability to meet standards of service or, arguably worse, an excessive provision of water resources and excessive costs to customers. The United Kingdom Water Industry Research Limited (UKWIR) Headroom project in 1998 provided a simple methodology for the calculation of planning margins. This methodology, although well received, was not, however, accepted by the Regulators as a tool sufficient to promote resource development. This thesis begins by considering the history of water resource planning in the UK, moving on to discuss events following privatisation of the Water Industry post-1985. The mid section of the research forms the bulk of original work and provides a scoping exercise which reveals a catalogue of uncertainties prevalent within the supply-demand balance. Each of these uncertainties is considered in terms of materiality, scope, and whether it can be quantified within a risk analysis package. Many of the areas of uncertainty identified would merit further research. A workable, yet robust, methodology for evaluating the balance between water resources and water demands by using a spreadsheet-based risk analysis package is presented. The technique involves statistical sampling and simulation such that samples are taken from input distributions on both the supply and demand side of the equation and the imbalance between supply and demand is calculated in the form of an output distribution. The percentiles of the output distribution represent different standards of service to the customer. The model allows dependencies between distributions to be considered, improved uncertainty estimates to be assessed, and the impact of uncertain solutions to any imbalance to be calculated directly. The method is considered a significant leap forward in the field of water resource planning.
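A minimal sketch of the kind of spreadsheet-style risk analysis described, assuming invented supply and demand distributions; the percentiles of the simulated headroom distribution play the role of standards of service.

```python
# Monte Carlo supply-demand balance: sample each side from assumed input
# distributions and report percentiles of the resulting surplus (headroom).
# All component distributions and units below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000

# Hypothetical components (megalitres/day): deployable output, outage, demand.
deployable_output = rng.normal(520.0, 15.0, n_sim)
outage            = rng.lognormal(mean=2.0, sigma=0.4, size=n_sim)
demand            = rng.normal(480.0, 20.0, n_sim)

headroom = deployable_output - outage - demand      # surplus (+) or deficit (-)

for pct in (5, 25, 50, 75, 95):
    print(f"{pct:2d}th percentile headroom: {np.percentile(headroom, pct):7.1f} Ml/d")
print("probability of deficit:", (headroom < 0).mean())
```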
Abstract:
Sentiment analysis has long focused on binary classification of text as either positive or negative. There has been little work on mapping sentiments or emotions into multiple dimensions. This paper studies a Bayesian modeling approach to multi-class sentiment classification and the prediction of multidimensional sentiment distributions. It proposes effective mechanisms to incorporate supervised information, such as labeled feature constraints and document-level sentiment distributions derived from the training data, into model learning. We have evaluated our approach on datasets collected from the confession section of the Experience Project website, where people share their life experiences and personal stories. Our results show that using the latent representation of the training documents derived from our approach as features to build a maximum entropy classifier outperforms other approaches on multi-class sentiment classification. In the more difficult task of predicting multi-dimensional sentiment distributions, our approach gives superior performance compared to a few competitive baselines. © 2012 ACM.
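As a rough illustration of the final classification step, the sketch below trains a maximum entropy (multinomial logistic regression) classifier on stand-in features; the random "latent representations" and labels are placeholders, not the paper's model output or the Experience Project data.

```python
# Maximum entropy classification on document-level feature vectors.
# Multinomial logistic regression is the standard maxent formulation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_docs, n_topics, n_classes = 600, 30, 5

X = rng.dirichlet(np.ones(n_topics), size=n_docs)   # stand-in latent features
y = rng.integers(0, n_classes, size=n_docs)         # stand-in sentiment labels

clf = LogisticRegression(max_iter=1000)             # multinomial logistic = maxent
clf.fit(X[:500], y[:500])
print("held-out accuracy:", accuracy_score(y[500:], clf.predict(X[500:])))
```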
Abstract:
Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce two novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
Abstract:
Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we apply two novel techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
Abstract:
Most conventional techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three related techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
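None of these abstracts specifies the techniques in detail; as one hedged illustration of handling a periodic target variable, the sketch below evaluates a conditional density built from a mixture of von Mises distributions, which remains normalised on the circle where a conditional Gaussian would not. The dependence of the mixing weights on the input x is a made-up placeholder.

```python
# Conditional density over a periodic variable theta as a von Mises mixture
# whose mixing weights depend on an input x (illustrative parameterisation only).
import numpy as np
from scipy.stats import vonmises

def conditional_density(theta, x, centres, kappa=4.0):
    """p(theta | x): mixture of von Mises components with x-dependent weights."""
    scores = -0.5 * (x - np.arange(len(centres))) ** 2   # made-up dependence on x
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    comps = np.array([vonmises.pdf(theta, kappa, loc=c) for c in centres])
    return float(weights @ comps)

centres = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # component centres (rad)
for t in np.linspace(-np.pi, np.pi, 7):
    print(f"theta={t:+.2f}  p(theta | x=1) = {conditional_density(t, 1.0, centres):.3f}")
```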
Abstract:
Neural network learning rules can be viewed as statistical estimators. They should be studied in a Bayesian framework even if they are not Bayesian estimators. Generalisation should be measured by the divergence between the true distribution and the estimated distribution. Information divergences are invariant measurements of the divergence between two distributions. The posterior average information divergence is used to measure the generalisation ability of a network. The optimal estimators for multinomial distributions with Dirichlet priors are studied in detail. This confirms that the definition is compatible with intuition. The results also show that many commonly used methods can be put under this unified framework by assuming special priors and special divergences.
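A small sketch of the discrete case discussed above: the posterior-mean estimator for a multinomial under a Dirichlet prior, with KL divergence as the (assumed) information divergence; the prior strength alpha is a free choice, not a value from the report.

```python
# Dirichlet-multinomial posterior-mean estimator and KL divergence as a
# generalisation measure. alpha = 1 reproduces Laplace (add-one) smoothing.
import numpy as np

def posterior_mean(counts, alpha=1.0):
    """Dirichlet(alpha) prior + multinomial counts -> posterior mean probabilities."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, assuming q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

true_p = np.array([0.5, 0.3, 0.2])
counts = np.array([6, 3, 1])                 # a small observed sample
est = posterior_mean(counts, alpha=1.0)
print("estimate:", np.round(est, 3), " KL(true || estimate):", round(kl_divergence(true_p, est), 4))
```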
Abstract:
A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as the loss function. The Bayesian framework guarantees internal coherence of such measurements, while the information geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of [Zhu95:prob.discrete] to continuous distributions and serves as a concrete example of a larger picture [Zhu95:generalisation].
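For the continuous case, a companion sketch of the closed-form KL divergence between two univariate Gaussians, the kind of information divergence that can serve as the loss for the Gaussian family; the formula is standard and not specific to this report.

```python
# Closed-form KL divergence between univariate Gaussians:
# KL(N(mu1, s1^2) || N(mu2, s2^2)) = ln(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2.
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) )."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

print(kl_gaussian(0.0, 1.0, 0.0, 1.0))   # identical distributions -> 0.0
print(kl_gaussian(0.0, 1.0, 1.0, 2.0))   # divergence grows with the mismatch
```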
Abstract:
We introduce a novel inversion-based neuro-controller for solving control problems involving uncertain nonlinear systems that could also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty for the outputs of neural networks can be obtained using the statistical properties of networks. More generally, multicomponent distributions can be modelled by the mixture density network. In this work a novel robust inverse control approach is obtained based on importance sampling from these distributions. This importance sampling provides a structured and principled approach to constrain the complexity of the search space for the ideal control law. The performance of the new algorithm is illustrated through simulations with example systems.
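The abstract does not give the sampling procedure in detail; the sketch below shows the general flavour under stated assumptions: candidate controls are drawn from a hand-specified Gaussian mixture standing in for a mixture density network's output, then weighted by how closely a placeholder forward model maps them to the target.

```python
# Self-normalised importance sampling over candidate control actions drawn from
# a Gaussian mixture (a stand-in for an MDN's predictive distribution).
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the inverse-model output p(u | target): a 2-component mixture.
weights = np.array([0.6, 0.4])
means   = np.array([0.8, -0.5])
stds    = np.array([0.2, 0.3])

def forward_model(u):
    """Placeholder plant model mapping control u to output y."""
    return np.tanh(2.0 * u)

target = 0.7
comp = rng.choice(2, size=2000, p=weights)
u_samples = rng.normal(means[comp], stds[comp])

# Importance weights: favour controls whose predicted output is close to the target.
iw = np.exp(-0.5 * ((forward_model(u_samples) - target) / 0.05) ** 2)
iw /= iw.sum()

u_star = float(iw @ u_samples)
print(f"selected control: {u_star:.3f}, predicted output: {forward_model(u_star):.3f}")
```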