923 results for Real Options Theory


Relevance:

30.00%

Publisher:

Abstract:

The problem of decaying states and resonances is examined within the framework of scattering theory in a rigged Hilbert space formalism. The stationary free, 'in', and 'out' eigenvectors of formal scattering theory, which have a rigorous setting in rigged Hilbert space, are considered to be analytic functions of the energy eigenvalue. The value of these analytic functions at any point of regularity, real or complex, is an eigenvector with eigenvalue equal to the position of the point. The poles of the eigenvector families give rise to other eigenvectors of the Hamiltonian: the singularities of the 'out' eigenvector family are the same as those of the continued S matrix, so that resonances are seen as eigenvectors of the Hamiltonian with eigenvalue equal to their location in the complex energy plane. Cauchy's theorem then provides expansions in terms of 'complete' sets of eigenvectors with complex eigenvalues of the Hamiltonian. Applying such expansions to the survival amplitude of a decaying state, one finds that resonances give discrete contributions with purely exponential time behavior; the background is of course present, but explicitly separated. The resolvent of the Hamiltonian, restricted to the nuclear space appearing in the rigged Hilbert space, can be continued across the absolutely continuous spectrum; the singularities of the continuation are the same as those of the 'out' eigenvectors. The free, 'in', and 'out' eigenvectors with complex eigenvalues and those corresponding to resonances can be approximated by physical vectors in the Hilbert space, as plane waves can. The need for some further physical information in addition to the specification of the total Hamiltonian is apparent in the proposed framework. The formalism is applied to the Lee–Friedrichs model and to the scattering of a spinless particle by a local central potential. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
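As a schematic illustration of the exponential resonance contributions described above (a standard pole-expansion sketch rather than the paper's exact expansion; the coefficients c_n and the background term are left unspecified), the survival amplitude of a prepared state psi decomposes as

```latex
A(t) \;=\; \langle \psi \,|\, e^{-iHt} \,|\, \psi \rangle
\;\simeq\; \sum_n c_n\, e^{-iE_n t}\, e^{-\Gamma_n t/2} \;+\; A_{\mathrm{bg}}(t),
\qquad z_n = E_n - \tfrac{i}{2}\Gamma_n ,
```

so that each resonance pole z_n alone contributes a purely exponential term proportional to e^{-\Gamma_n t} to the survival probability, with any non-exponential behaviour confined to the background term.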

Relevance:

30.00%

Publisher:

Abstract:

Background: Excessive speed is a primary contributing factor to young novice road trauma, including intentional and unintentional speeds above posted limits or too fast for conditions. The objective of this research was to conduct a systematic review of recent investigations into novice drivers' speed selection, with particular attention to applications and limitations of theory and methodology.
Method: Systematic searches of peer-reviewed and grey literature were conducted during September 2014. Abstract reviews identified 71 references potentially meeting the selection criteria: investigations since the year 2000 into factors that influence (directly or indirectly) the actual speed (i.e., behaviour or performance) of young (age <25 years) and/or novice (recently licensed) drivers.
Results: Full-paper reviews resulted in 30 final references: 15 focused on intentional speeding and 15 on broader speed-selection investigations. Both sets identified a range of individual (e.g., beliefs, personality) and social (e.g., peer, adult) influences, and both were predominantly theory-driven and applied cross-sectional designs. Intentional-speeding investigations largely utilised self-reports, while the other investigations more often included actual driving (simulated or 'real world'). The latter also identified cognitive workload and external-environment influences, as well as targeted interventions.
Discussion and implications: Applications of theory have shifted the novice speed-related literature beyond a simplistic focus on intentional speeding as human error. The potential emerged to develop a 'grand theory' of intentional speeding and to fill gaps in understanding broader speed-selection influences. This includes the need for future investigations of vehicle-related and physical-environment influences, and for methodologies that move beyond cross-sectional designs and rely less on self-reports.

Relevance:

30.00%

Publisher:

Abstract:

The financial health of beef cattle enterprises in northern Australia has declined markedly over the last decade due to an escalation in production and marketing costs and a real decline in beef prices. Historically, gains in animal productivity have offset the effect of declining terms of trade on farm incomes. This raises the question of whether future productivity improvements can remain a key path for lifting enterprise profitability sufficiently to ensure that the industry remains economically viable over the longer term. The key objective of this study was to assess the production and financial implications for north Australian beef enterprises of a range of technology interventions (development scenarios), including genetic gain in cattle, nutrient supplementation, and alteration of the feed base through introduced pastures and forage crops, across a variety of natural environments. To achieve this objective, a beef systems model was developed that is capable of simulating livestock production at the enterprise level, including reproduction, growth and mortality, based on energy and protein supply from natural C4 pastures that are subject to high inter-annual climate variability. Comparisons between simulation outputs and enterprise performance data in three case-study regions suggested that the simulation model (the Northern Australia Beef Systems Analyser) can adequately represent the performance of beef cattle enterprises in northern Australia. Testing of a range of development scenarios suggested that the application of individual technologies can substantially lift productivity and profitability, especially where the entire feed base was altered through legume augmentation. The simultaneous implementation of multiple technologies that benefit different aspects of animal productivity resulted in the greatest increases in cattle productivity and enterprise profitability, with projected weaning rates increasing by 25%, liveweight gain by 40% and net profit by 150% above current baseline levels, although gains of this magnitude might not necessarily be realised in practice. While there were slight increases in total methane output from these development scenarios, methane emissions per kg of beef produced were reduced by 20% in scenarios with higher productivity gain. Combinations of technologies or innovative practices applied in a systematic and integrated fashion thus offer scope for providing the productivity and profitability gains necessary to maintain viable beef enterprises in northern Australia into the future.
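As a purely illustrative check of the emissions-intensity arithmetic (the baseline values and the assumed rise in total methane below are placeholders, not figures from the study), a roughly 40% gain in beef output with only a slight rise in total methane yields an intensity reduction of about 20%:

```python
# Illustrative arithmetic only: baseline values and the assumed rise in total
# methane are placeholders, not figures reported by the study.
baseline_beef_kg = 100_000.0     # annual liveweight sold (assumed)
baseline_methane_t = 900.0       # annual enterprise methane output (assumed)

beef_gain = 1.40                 # ~40% more liveweight gain (from the abstract)
methane_gain = 1.10              # a 'slight' rise in total methane (assumed)

base_intensity = baseline_methane_t / baseline_beef_kg
new_intensity = (baseline_methane_t * methane_gain) / (baseline_beef_kg * beef_gain)
print(f"intensity change: {100 * (new_intensity / base_intensity - 1):.0f}%")  # about -21%
```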

Relevance:

30.00%

Publisher:

Abstract:

A central tenet in the theory of reliability modelling is the quantification of the probability of asset failure. In general, reliability depends on asset age and the maintenance policy applied. Usually, failure and maintenance times are the primary inputs to reliability models. However, for many organisations, different aspects of these data are often recorded in different databases (e.g. work order notifications, event logs, condition monitoring data, and process control data). These recorded data cannot be interpreted individually, since they typically do not have all the information necessary to ascertain failure and preventive maintenance times. This paper presents a methodology for the extraction of failure and preventive maintenance times using commonly-available, real-world data sources. A text-mining approach is employed to extract keywords indicative of the source of the maintenance event. Using these keywords, a Naïve Bayes classifier is then applied to attribute each machine stoppage to one of two classes: failure or preventive. The accuracy of the algorithm is assessed and the classified failure time data are then presented. The applicability of the methodology is demonstrated on a maintenance data set from an Australian electricity company.
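A minimal sketch of the keyword-plus-Naïve-Bayes step described above, assuming scikit-learn is available; the work-order texts, keywords and labels are invented for illustration and are not taken from the paper's data set:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical work-order descriptions labelled with the two classes used in
# the paper: 'failure' (corrective) versus 'preventive' maintenance events.
texts = [
    "pump tripped on high vibration, unplanned outage",
    "bearing seized, emergency shutdown and replacement",
    "scheduled lubrication and filter change",
    "routine inspection during planned shutdown",
    "motor burnt out, breakdown repair required",
    "annual overhaul as per maintenance plan",
]
labels = ["failure", "failure", "preventive", "preventive", "failure", "preventive"]

# Bag-of-words keyword extraction feeding a Naive Bayes classifier, which then
# attributes each machine stoppage to one of the two classes.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["unexpected trip on high temperature"]))  # expected: 'failure'
```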

Relevance:

30.00%

Publisher:

Abstract:

An analytic treatment of localization in a weakly disordered system is presented for the case where the real lattice is approximated by a Cayley tree. Contrary to a recent assertion, we find that the mobility edge moves inwards into the band as disorder increases from zero.

Relevance:

30.00%

Publisher:

Abstract:

Flexible objects such as a rope or a snake move in a way such that their axial length remains almost constant. To simulate the motion of such an object, one strategy is to discretise the object into a large number of small rigid links connected by joints. However, the resulting discretised system is highly redundant and the joint rotations for a desired Cartesian motion of any point on the object cannot be solved uniquely. In this paper, we revisit an algorithm, based on the classical tractrix curve, to resolve the redundancy in such hyper-redundant systems. For a desired motion of the 'head' of a link, the 'tail' is moved along a tractrix, and recursively all links of the discretised object are moved along different tractrix curves. The algorithm is illustrated by simulations of a moving snake, tying of knots with a rope, and a solution of the inverse kinematics of a planar hyper-redundant manipulator. The simulations show that the tractrix-based algorithm leads to a more 'natural' motion, since the motion is distributed uniformly along the entire object with the displacements diminishing from the 'head' to the 'tail'.
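A minimal numerical sketch of the tractrix resolution described above. The paper's closed-form tractrix solution is replaced here by a small-step approximation (each link's tail is moved along the line towards its newly displaced head so that the link length is preserved, which converges to a tractrix for small steps); the chain geometry and step sizes are illustrative assumptions:

```python
import numpy as np

def tractrix_step(tail, new_head, length):
    """Small-step tractrix update: move the tail along the (tail -> new head)
    direction so that the rigid link keeps its length."""
    d = new_head - tail
    return new_head - length * d / np.linalg.norm(d)

# A 'rope' discretised into 10 unit-length links (11 joints along the x-axis).
joints = [np.array([float(i), 0.0]) for i in range(11)]
link_lengths = [1.0] * 10

# Drag joint 0 (the 'head') diagonally in small increments; every downstream
# joint recursively follows its predecessor along its own tractrix.
for _ in range(300):
    joints[0] = joints[0] + np.array([0.02, 0.01])
    for i in range(1, len(joints)):
        joints[i] = tractrix_step(joints[i], joints[i - 1], link_lengths[i - 1])

print(joints[0], joints[-1])  # the head moves far more than the tail
```

In this sketch the displacements diminish monotonically from the dragged 'head' towards the free 'tail', which is the 'natural' motion the simulations in the paper illustrate.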

Relevance:

30.00%

Publisher:

Abstract:

"A practical guide for educators and managers involved in supervising field education. Drawing on the experience of academics, clinicians and educators from Australia, New Zealand, Canada and the UK, the collection explores how to make the most of fieldwork experience."--Libraries Australia

Relevance:

30.00%

Publisher:

Abstract:

Executive compensation and managerial behavior have received an increasing amount of attention in the financial economics literature since the mid-1970s. The purpose of this thesis is to extend our understanding of managerial compensation, especially how stock option compensation is linked to the actions undertaken by management. Furthermore, managerial compensation is continuously and heatedly debated in the media, and an emerging consensus from this discussion seems to be that there still exist gaps in our knowledge of optimal contracting. In Finland, the first executive stock options were introduced in the 1980s, and throughout the last 15 years it has become increasingly popular for Finnish listed firms to use this type of managerial compensation. The empirical work in the thesis is conducted using data from Finland, in contrast to most previous studies that predominantly use U.S. data. Using Finnish data provides insight into how market conditions affect compensation and managerial action, and provides an opportunity to explore which parts of the U.S. evidence can be generalized to other markets. The thesis consists of four essays. The first essay investigates the exercise policy of executive stock option holders in Finland. In summary, Essay 1 contributes to our understanding of exercise policies by examining both the determinants of the exercise decision and the market's reaction to the actual exercises. The second essay analyzes the factors driving stock option grants using data for Finnish publicly listed firms. Several agency-theory-based variables are found to have explanatory power on the likelihood of a stock option grant. Essay 2 also contributes to our understanding of behavioral factors, such as prior stock return, as determinants of stock option compensation. The third essay investigates the tax and stock option motives for share repurchases and dividend distributions. We document strong support for the tax motive for share repurchases. Furthermore, we also analyze the dividend distribution decision in companies with stock options and find a significant difference between companies with and without dividend-protected options. We thus document that the cutting of dividends found in previous U.S. studies can be avoided by dividend protection. In the fourth essay we approach the puzzle of negative skewness in stock returns from an altogether different angle than in previous studies. We suggest that negative skewness in stock returns is generated by management disclosure practices and find evidence for this. More specifically, we find that negative skewness in daily returns is induced by returns on days when non-scheduled firm-specific news is disclosed.

Relevance:

30.00%

Publisher:

Abstract:

Pragmatism has sometimes been taken as a catchphrase for epistemological stances in which anything goes. However, other authors argue that the real novelty and contribution of this tradition has to do with its view of action as the context in which all things human take place. Thus, it is action rather than, for example, discourses that should be our starting point in social theory. The introductory section of the book situates pragmatism (especially the ideas of G. H. Mead and John Dewey) within the field and tradition of social theory. This introduction also contextualizes the main core of the book, which consists of four chapters. Two of these chapters have been published as articles in scientific journals and one in an edited book. All of them discuss the core problem of social theory: how is action related to social structures (and vice versa)? The argument is that habitual action explains the emergence of social structures from our action. Action produces structures, and social reproduction takes place when action is habitualized; that is, when we develop social dispositions to act in a certain manner in familiar environments. This also means that even though the physical environment is the same for all of us, our habits structure it into different kinds of action possibilities. Each chapter highlights these general insights from different angles. Practice theory has gained momentum in recent years, and it has many commonalities with pragmatism because both highlight the situated and corporeal character of human activity. One famous proponent of practice theory is Margaret Archer, who has argued that the pragmatism of G. H. Mead leads to an oversocialized conception of selfhood. Mead does indeed present a socialized view of selfhood, but this is a meta-sociological argument rather than a substantial sociological claim. Accordingly, one can argue that in this general sense intersubjectivity precedes subjectivity and not the other way around. Such a view does not imply that our social relations would necessarily 'colonize' individual action, because there is a place for internal conversations (in Archer's terminology), especially in those phases of action where it meets obstacles due to changes in the environment. The second issue discussed starts from the background assumption that social structures can fruitfully be conceptualized as institutions. A general classification of different institution theories is presented, and it is argued that there is a need for a habitual theory of institutions due to the problems associated with the other theories. So-called habitual institutionalism accounts for institutions in terms of established and prevalent social dispositions that structure our social interactions. The germs of this institution theory can be found in the work of Thorstein Veblen. Since Veblen's time, these ideas have been discussed, for example, by the economist Geoffrey M. Hodgson. His ideas on the evolution of institutions are presented, but a critical stance is taken towards his tendency to define institutions with the help of rules, because rules are not always present in institutions. Accordingly, habitual action is the most basic, but by no means the only, aspect of institutional reproduction. The third chapter deals with the theme of action and structures in the context of Pierre Bourdieu's thought. Bourdieu's term habitus refers to a system of dispositions which structure social fields. It is argued that habits come close to the concept of habitus in the sense that the latter consists of particular kinds of habits: those that are related to the reproduction of socioeconomic positions. Habits are thus constituents of a general theory of societal reproduction, whereas habitus is a systematic combination of socioeconomic habits. The fourth theme relates to issues of social change and development. The capabilities approach has been associated with the name of Amartya Sen, for example, and it underscores problems inherent in economistic ways of evaluating social development. However, Sen's argument has some theoretical problems. For example, his theory cannot adequately confront the problem of relativism. In addition, Sen's discussion also lacks a theory of the role of the public. With the help of arguments derived from pragmatism, one gets an action-based, socially constituted view of freedom in which the role of the public is essential. In general, it is argued that a socially constituted view of agency does not necessarily lead to pessimistic conclusions about the freedom of action.

Relevance:

30.00%

Publisher:

Abstract:

A recently developed microscopic theory of solvation dynamics in real dipolar liquids is used to calculate, for the first time, the solvation time correlation function in liquid acetonitrile, water and methanol. The calculated results are in excellent agreement with known experimental and computer simulation studies.

Relevance:

30.00%

Publisher:

Abstract:

A scheme to apply the rate-1 real orthogonal designs (RODs) in relay networks with single real-symbol decodability of the symbols at the destination for any arbitrary number of relays is proposed. In the case where the relays do not have any information about the channel gains from the source to themselves, the best known distributed space time block codes (DSTBCs) for k relays with single real-symbol decodability offer an overall rate of complex symbols per channel use. The scheme proposed in this paper offers an overall rate of 2/2+k complex symbol per channel use, which is independent of the number of relays. Furthermore, in the scenario where the relays have partial channel information in the form of channel phase knowledge, the best known DSTBCs with single real-symbol decodability offer an overall rate of 1/3 complex symbols per channel use. In this paper, making use of RODs, a scheme which achieves the same overall rate of 1/3 complex symbols per channel use but with a decoding delay that is 50 percent of that of the best known DSTBCs, is presented. Simulation results of the symbol error rate performance for 10 relays, which show the superiority of the proposed scheme over the best known DSTBC for 10 relays with single real-symbol decodability, are provided.
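For reference, rate-1 RODs exist for any number of real symbols; a standard example for four real symbols x_1, ..., x_4 (shown only to illustrate what a rate-1 ROD is, not the paper's specific relay construction) is

```latex
G(x_1,x_2,x_3,x_4) \;=\;
\begin{pmatrix}
 x_1 &  x_2 &  x_3 &  x_4\\
-x_2 &  x_1 & -x_4 &  x_3\\
-x_3 &  x_4 &  x_1 & -x_2\\
-x_4 & -x_3 &  x_2 &  x_1
\end{pmatrix},
\qquad G^{\mathsf T}G = \Bigl(\sum_{i=1}^{4} x_i^{2}\Bigr) I_4 ,
```

and it is this column orthogonality that allows each real symbol to be decoded independently in the point-to-point setting.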

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel algebraic formulation of the central problem of screw theory, namely the determination of the principal screws of a given system. Using the algebra of dual numbers, it shows that the principal screws can be determined via the solution of a generalised eigenproblem of two real, symmetric matrices. This approach allows the study of the principal screws of the general screw systems associated with a manipulator of arbitrary geometry in terms of closed-form expressions of its architecture and configuration parameters. The formulation is illustrated with examples of practical manipulators.
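A brief sketch of the computational core, assuming SciPy: the principal screws are obtained by solving the generalised eigenproblem A v = λ B v for a pair of real, symmetric matrices. The 3x3 matrices below are arbitrary placeholders (with B taken positive definite so that scipy.linalg.eigh applies); they do not come from any particular manipulator:

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder real, symmetric matrices standing in for the pair produced by
# the dual-number formulation of a screw system (values are illustrative).
A = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])
B = np.array([[1.0, 0.1, 0.0],
              [0.1, 1.2, 0.1],
              [0.0, 0.1, 0.9]])  # assumed symmetric positive definite

# Generalised eigenproblem A v = lambda B v; per the paper, its solutions
# characterise the principal screws of the system.
eigvals, eigvecs = eigh(A, B)
print(eigvals)
print(eigvecs)
```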

Relevance:

30.00%

Publisher:

Abstract:

It has been shown recently that the maximum rate of 2-real-symbol (single-complex-symbol) maximum likelihood (ML) decodable, square space-time block codes (STBCs) with unitary weight matrices is 2a/2^a complex symbols per channel use (cspcu) for 2^a transmit antennas [1]. These STBCs are obtained from Unitary Weight Designs (UWDs). In this paper, we show that the maximum rates for 3- and 4-real-symbol (2-complex-symbol) ML decodable square STBCs from UWDs, for 2^a transmit antennas, are 3(a-1)/2^a and 4(a-1)/2^a cspcu, respectively. STBCs achieving these maximum rates are constructed. A set of sufficient conditions on the signal set, required for these codes to achieve full diversity, are derived along with expressions for their coding gain.

Relevance:

30.00%

Publisher:

Abstract:

Laminar separation bubbles are thought to be highly non-parallel, and hence global stability studies start from this premise. However, experimentalists have always realized that, for pressure-gradient-induced bubbles, the flow is more parallel than is commonly believed, which is why linear parallel stability theory has been successful in describing their early stages of transition. The present experimental/numerical study re-examines this important issue and finds that the base flow in such a separation bubble becomes nearly parallel due to a strong-interaction process between the separated boundary layer and the outer potential flow. The so-called dead-air region, or region of constant pressure, is a simple consequence of this strong interaction. We use triple-deck theory to qualitatively explain these features. Next, the implications of global analysis for the linear stability of separation bubbles are considered. In particular, we show that in the initial portion of the bubble, where the flow is nearly parallel, local stability analysis is sufficient to capture the essential physics. It appears that the real utility of the global analysis is perhaps in the rear portion of the bubble, where the flow is highly non-parallel, and where the secondary/nonlinear instability stages are likely to dominate the dynamics.

Relevance:

30.00%

Publisher:

Abstract:

We analytically study the role played by the network topology in sustaining cooperation in a society of myopic agents in an evolutionary setting. In our model, each agent plays the Prisoner's Dilemma (PD) game with its neighbors, as specified by a network. Cooperation is the incumbent strategy, whereas defectors are the mutants. Starting with a population of cooperators, some agents are switched to defection. The agents then play the PD game with their neighbors and compute their fitness. After this, an evolutionary rule, or imitation dynamic, is used to update the agent strategy. A defector switches back to cooperation if it has a cooperator neighbor with higher fitness. The network is said to sustain cooperation if almost all defectors switch to cooperation. Earlier work on the sustenance of cooperation has largely consisted of simulation studies, and we seek to complement this body of work by providing analytical insight into the same question. We find that in order to sustain cooperation, a network should satisfy some properties such as small average diameter, densification, and irregularity. Real-world networks have been empirically shown to exhibit these properties, and are thus candidates for the sustenance of cooperation. We also analyze some specific graphs to determine whether or not they sustain cooperation. In particular, we find that scale-free graphs belonging to a certain family sustain cooperation, whereas Erdos-Renyi random graphs do not. To the best of our knowledge, ours is the first analytical attempt to determine which networks sustain cooperation in a population of myopic agents in an evolutionary setting.
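A small simulation sketch of the dynamic described above, complementary to the paper's analytical treatment; the PD payoffs, network size and mutant count are illustrative assumptions, and networkx is used to generate a scale-free graph:

```python
import random
import networkx as nx

# Illustrative Prisoner's Dilemma payoffs with T > R > P > S (assumed values).
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def payoff(mine, other):
    if mine == 'C':
        return R if other == 'C' else S
    return T if other == 'C' else P

def fitness(g, strat, node):
    # Total payoff from playing the PD game with every neighbour.
    return sum(payoff(strat[node], strat[nbr]) for nbr in g[node])

# Scale-free network; the paper finds a certain family of scale-free graphs
# sustains cooperation while Erdos-Renyi graphs do not.
g = nx.barabasi_albert_graph(n=500, m=3, seed=1)
strat = {v: 'C' for v in g}                    # cooperation is the incumbent
for v in random.sample(list(g), 25):           # switch some agents to defection
    strat[v] = 'D'

# Imitation dynamic from the abstract: a defector reverts to cooperation if it
# has at least one cooperator neighbour with strictly higher fitness.
for _ in range(50):
    fit = {v: fitness(g, strat, v) for v in g}
    updated = dict(strat)
    for v in g:
        if strat[v] == 'D' and any(strat[u] == 'C' and fit[u] > fit[v] for u in g[v]):
            updated[v] = 'C'
    strat = updated

print(sum(1 for v in g if strat[v] == 'D'), "defectors remain")
```

In this sketch the network would be said to sustain cooperation when (almost) no defectors remain once the dynamic settles.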