922 results for Empirical Flow Models
Abstract:
Willingness-to-pay models have shown the theoretical relationships between the contingent valuation, cost of illness and avertive behaviour approaches. In this paper, field survey data are used to compare the relationships between these three approaches and to demonstrate that contingent valuation bids exceed the sum of the cost of illness and avertive behaviour estimates. The estimates provide a validity check for CV bids and further support the claim that contingent valuation studies are theoretically consistent.
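The consistency check the abstract describes is, at bottom, a per-respondent inequality: each CV bid should weakly exceed the sum of the cost-of-illness and averting-expenditure estimates. A minimal sketch of that comparison, with entirely hypothetical survey data, might look like this:

```python
# Hypothetical sketch of the validity check: CV bids should weakly exceed
# COI + averting expenditure for each respondent. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
coi = rng.gamma(2.0, 15.0, n)                # illustrative cost-of-illness estimates
ab = rng.gamma(1.5, 10.0, n)                 # illustrative averting expenditures
cv_bid = coi + ab + rng.gamma(1.2, 8.0, n)   # bids with a positive surplus term

lower_bound = coi + ab
share_consistent = np.mean(cv_bid >= lower_bound)
t_stat, p_val = stats.ttest_rel(cv_bid, lower_bound, alternative="greater")

print(f"share of bids >= COI + AB: {share_consistent:.2%}")
print(f"one-sided paired t-test of CV > COI + AB: t={t_stat:.2f}, p={p_val:.4f}")
```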
Abstract:
This article examines the market valuation of announcements of new capital expenditure. Prior research suggests that the firm's growth opportunities and cash flow position condition the market response. This study jointly examines the role of growth and cash flow, and the interaction between them. Using a new data set of Australian firms that avoids problems associated with expectations models, the results are remarkably strong and support a positive association between growth opportunities and the market valuation, in addition to supporting the role of free cash flow. The findings have implications for the relationship between general investment information and stock prices.
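The joint role of growth and cash flow described here is naturally captured by a cross-sectional regression with an interaction term. The sketch below uses invented variable names and synthetic data, not the authors' Australian sample, to show the form such a model takes:

```python
# Illustrative sketch: announcement-period abnormal returns regressed on a
# growth proxy, a free-cash-flow proxy, and their interaction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 150
growth = rng.normal(1.0, 0.3, n)        # e.g. market-to-book as a growth proxy
fcf = rng.normal(0.05, 0.02, n)         # free cash flow scaled by assets
car = 0.01 * growth + 0.1 * fcf + 0.05 * growth * fcf + rng.normal(0, 0.01, n)

X = sm.add_constant(np.column_stack([growth, fcf, growth * fcf]))
res = sm.OLS(car, X).fit()
print(res.summary(xname=["const", "growth", "fcf", "growth_x_fcf"]))
```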
Abstract:
The power required to operate large gyratory mills often exceeds 10 MW. Hence, optimisation of the power consumption will have a significant impact on the overall economic performance and environmental impact of the mineral processing plant. Most published models of tumbling mills (e.g. [Morrell, S., 1996. Power draw of wet tumbling mills and its relationship to charge dynamics, Part 2: An empirical approach to modelling of mill power draw. Trans. Inst. Mining Metall. (Section C: Mineral Processing Ext. Metall.) 105, C54-C62; Austin, L.G., 1990. A mill power equation for SAG mills. Miner. Metall. Process. 57-62]) do not incorporate the effect of lifter design and its interaction with mill speed and filling. Recent experience suggests that there is an opportunity to improve grinding efficiency by choosing the appropriate combination of these variables; however, it is difficult to determine their interactions experimentally in a full-scale mill. Although some work using DEM simulations has recently been published, it was basically limited to 2D. In this work, the discrete element code Particle Flow Code 3D (PFC3D) has been used to model the effects of lifter height (5-25 cm) and mill speed (50-90% of critical) on the power draw and on the frequency distribution of the specific energy (J/kg) of normal impacts in a 5 m diameter autogenous (AG) mill. It was found that the distribution of impact energy is affected by the number of lifters, lifter height, mill speed and mill filling. Interactions of lifter design, mill speed and mill filling are demonstrated through three-dimensional discrete element method (3D DEM) modelling. The intensity of the induced stresses (shear and normal) on lifters, and hence the lifter wear, is also simulated.
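The mill speeds quoted above are fractions of the critical speed, the rotational speed at which the charge begins to centrifuge. A standard approximation (neglecting media size) is N_c = 42.3 / sqrt(D) rpm for a mill of diameter D metres; this short sketch tabulates the 50-90% range for the 5 m AG mill discussed in the abstract:

```python
# Critical-speed arithmetic for the 5 m AG mill, using the standard
# approximation N_c = 42.3 / sqrt(D) rpm (media diameter neglected).
import math

D = 5.0                                  # mill diameter, m
n_critical = 42.3 / math.sqrt(D)         # critical speed, rpm
for fraction in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"{fraction:.0%} of critical: {fraction * n_critical:.1f} rpm")
```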
Abstract:
The influence of three-dimensional effects on isochromatic birefringence is evaluated for planar flows by means of numerical simulation. Two fluid models are investigated in channel and abrupt-contraction geometries. In practice, the flows are confined by viewing windows, which alter the stresses along the optical path; the observed optical properties therefore differ from their counterparts in an ideal two-dimensional flow. To investigate the influence of these effects, the stress-optical rule and the differential propagation Mueller matrix are used. The material parameters are selected so that a retardation of multiple orders is achieved, as is typical for highly birefringent melts. Errors due to three-dimensional effects are found mainly on the symmetry plane and increase significantly with the flow rate. Increasing the geometric aspect ratio improves the accuracy, provided that the error on the retardation is less than one order.
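For orientation, the stress-optical rule in its simplest two-dimensional form states that the flow-induced birefringence is proportional to the principal stress difference, and the retardation in fringe orders over a path of depth d is d * dn / lambda. The sketch below uses an assumed, illustrative stress-optical coefficient to show how multi-order retardations arise:

```python
# Minimal sketch of the 2D stress-optical rule: dn = C * (s1 - s2), with
# retardation in fringe orders = depth * dn / wavelength. The coefficient
# C below is an assumed, illustrative value, not from the paper.
C = 4.0e-9            # stress-optical coefficient, 1/Pa (illustrative)
depth = 0.01          # optical path length through the channel, m
wavelength = 633e-9   # He-Ne laser wavelength, m

for stress_diff in (1e4, 5e4, 1e5):        # principal stress difference, Pa
    dn = C * stress_diff                   # induced birefringence
    orders = depth * dn / wavelength       # retardation in fringe orders
    print(f"stress diff {stress_diff:.0e} Pa -> {orders:.1f} orders")
```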
Abstract:
Cognitive modelling of phenomena in clinical practice allows the operationalisation of otherwise diffuse descriptive terms such as craving or flashbacks. This supports the empirical investigation of the clinical phenomena and the development of targeted treatment interventions. This paper focuses on the cognitive processes underpinning craving, which is recognised as a motivating experience in substance dependence. We use a high-level cognitive architecture, Interacting Cognitive Subsystems (ICS), to compare two theories of craving: Tiffany's theory, centred on the control of automated action schemata, and our own Elaborated Intrusion theory of craving. Data from a questionnaire study of the subjective aspects of everyday desires experienced by a large non-clinical population are presented. Both the data and the high-level modelling support the central claim of the Elaborated Intrusion theory that imagery is a key element of craving, providing the subjective experience and mediating much of the associated disruption of concurrent cognition.
Abstract:
Four mine waste beach longitudinal profile equations are compared theoretically and in statistical analyses of profile data from 64 field and laboratory beaches formed by mine tailings, co-disposed coal mine wastes, and sand. All four equations fit the profile data well. The best-performing equation both accounts for particle sorting and satisfies hydraulic constraints, and the combination of assumptions underlying it is considered to best represent the processes occurring on mine waste beaches. Combining these assumptions with the Lacey normal equation leads to a variant of the Manning resistance equation. Features that are desirable to incorporate in theoretical and numerical models of mine waste beaches are listed.
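The statistical comparison described here amounts to fitting candidate profile equations to surveyed elevation data and scoring the fits. A hedged sketch of that workflow, using an exponential profile form as one common candidate and synthetic stand-in data:

```python
# Fitting one candidate beach-profile equation (an exponential form, used
# here purely as an illustration) to synthetic elevation data.
import numpy as np
from scipy.optimize import curve_fit

def exp_profile(x, h0, k):
    """Elevation drop along the beach: h0 * (1 - exp(-k * x))."""
    return h0 * (1.0 - np.exp(-k * x))

rng = np.random.default_rng(2)
x = np.linspace(0.0, 200.0, 50)                      # distance from discharge, m
y = exp_profile(x, 4.0, 0.02) + rng.normal(0, 0.05, x.size)

params, _ = curve_fit(exp_profile, x, y, p0=(1.0, 0.01))
residuals = y - exp_profile(x, *params)
r_squared = 1.0 - residuals.var() / y.var()
print(f"fitted h0={params[0]:.2f} m, k={params[1]:.4f} 1/m, R^2={r_squared:.3f}")
```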
Abstract:
Inactivity is associated with endothelial dysfunction and the development of cardiovascular disease. Exercise training has a favourable effect in the management of hypertension, heart failure and ischaemic heart disease. These beneficial effects are probably mediated through improvements of vascular function and, in this issue of Clinical Science, Hagg and co-authors propose a coronary artery effect. The use of a Doppler technique for non-invasive assessment of coronary flow reserve in a small animal model is an exciting aspect of this study. If feasible in the hands of other investigators, the availability of sequential coronary flow measurements in animal models may help improve our understanding of the mechanisms of disorders of the coronary circulation.
Abstract:
An existing capillarity correction for free surface groundwater flow as modelled by the Boussinesq equation is re-investigated. Existing solutions, based on the shallow flow expansion, have considered only the zeroth-order approximation. Here, a second-order capillarity correction to tide-induced watertable fluctuations in a coastal aquifer adjacent to a sloping beach is derived. A new definition of the capillarity correction is proposed for small capillary fringes, and a simplified solution is derived. Comparisons of the two models show that the simplified model can be used in most cases. The significant effects of higher-order capillarity corrections on tidal fluctuations in a sloping beach are also demonstrated.
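For context, the classical zeroth-order solution that such corrections build on (no capillarity, vertical beach face) has tide-induced fluctuations decaying exponentially inland with wavenumber k = sqrt(n_e * omega / (2 * K * h0)). A sketch with illustrative aquifer parameters:

```python
# Classical zeroth-order tidal water-table solution (no capillarity):
# h(x, t) = h0 + A * exp(-k x) * cos(omega t - k x). Parameters illustrative.
import numpy as np

K = 1e-4        # hydraulic conductivity, m/s
n_e = 0.3       # effective porosity
h0 = 5.0        # mean aquifer thickness, m
A = 0.5         # tidal amplitude, m
omega = 2.0 * np.pi / 43200.0        # semi-diurnal tidal frequency, rad/s

k = np.sqrt(n_e * omega / (2.0 * K * h0))   # decay/wave number, 1/m
x = np.array([0.0, 10.0, 25.0, 50.0])       # distance inland, m
t = 0.0
h = h0 + A * np.exp(-k * x) * np.cos(omega * t - k * x)
for xi, hi in zip(x, h):
    print(f"x = {xi:5.1f} m: water table = {hi:.3f} m")
```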
Abstract:
Functional-structural plant models that include detailed mechanistic representation of underlying physiological processes can be expensive to construct and the resulting models can also be extremely complicated. On the other hand, purely empirical models are not able to simulate plant adaptability and response to different conditions. In this paper, we present an intermediate approach to modelling plant function that can simulate plant response without requiring detailed knowledge of underlying physiology. Plant function is modelled using a 'canonical' modelling approach, which uses compartment models with flux functions of a standard mathematical form, while plant structure is modelled using L-systems. Two modelling examples are used to demonstrate that canonical modelling can be used in conjunction with L-systems to create functional-structural plant models where function is represented either in an accurate and descriptive way, or in a more mechanistic and explanatory way. We conclude that canonical modelling provides a useful, flexible and relatively simple approach to modelling plant function at an intermediate level of abstraction.
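The 'canonical' idea is that fluxes between compartments take a standard mathematical form (power-law fluxes in the S-system tradition), so plant function can be tuned by fitting exponents and rate constants rather than by specifying detailed physiology. A toy two-pool sketch under that assumption, with invented parameters:

```python
# Toy canonical compartment model: two pools (carbon, nitrogen) exchanged
# by power-law fluxes, advanced with forward Euler. Parameters invented.
def step(carbon, nitrogen, dt=0.1):
    """Advance the two-pool model one step; all fluxes are power laws."""
    photosynthesis = 1.0 * carbon ** 0.5             # supply flux into C pool
    growth = 0.4 * carbon ** 0.8 * nitrogen ** 0.3   # demand flux using both pools
    uptake = 0.2 * nitrogen ** 0.5                   # N acquisition flux
    carbon += dt * (photosynthesis - growth)
    nitrogen += dt * (uptake - 0.1 * growth)
    return carbon, nitrogen

c, n = 1.0, 1.0
for _ in range(50):
    c, n = step(c, n)
print(f"carbon pool: {c:.2f}, nitrogen pool: {n:.2f}")
```

In a functional-structural model these flux functions would be attached to L-system modules, so the same canonical form serves both the descriptive and the more mechanistic variants the authors describe.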
Abstract:
The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences in the sensitivity of measures to detect the genetic health of degraded populations were obvious. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation that can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate the loss of genetic diversity and assist long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Future suggested research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and most importantly, combining neutral marker, pollination dynamics, ecological consequences, and progeny fitness assessment within single studies.
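The claim that inbreeding appears immediately while diversity erodes slowly can be illustrated with the standard drift expectation H_t = H_0 * (1 - 1/(2N))^t for expected heterozygosity after t generations at effective population size N (values below are illustrative):

```python
# Expected heterozygosity decay under drift: H_t = H0 * (1 - 1/(2N))**t.
H0 = 0.8   # pre-fragmentation expected heterozygosity (illustrative)

for N in (10, 50, 250):
    for t in (1, 10, 50):
        Ht = H0 * (1.0 - 1.0 / (2.0 * N)) ** t
        loss = 100.0 * (1.0 - Ht / H0)
        print(f"N={N:4d}, t={t:3d} generations: {loss:5.1f}% of diversity lost")
```

Even at N = 10, a single generation loses only 5% of diversity, consistent with the review's finding that diversity measures are insensitive to recent impact in long-lived trees.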
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
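A minimal sketch of the two-component mixture idea: test statistics z_i come from pi0 * f0 + (1 - pi0) * f1, and the local false discovery rate for gene i is fdr(z_i) = pi0 * f0(z_i) / f(z_i). In the toy version below, pi0 and the alternative component are known by construction; in practice both are estimated from the data:

```python
# Two-component normal mixture and the implied local false discovery rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pi0, n = 0.9, 10_000
null = rng.standard_normal(int(pi0 * n))         # non-differential genes
alt = rng.normal(2.5, 1.0, n - int(pi0 * n))     # differential genes
z = np.concatenate([null, alt])

f0 = stats.norm.pdf(z)                                    # null density
f = pi0 * f0 + (1 - pi0) * stats.norm.pdf(z, 2.5, 1.0)    # mixture density
local_fdr = pi0 * f0 / f

selected = local_fdr < 0.2
print(f"genes called differential: {selected.sum()}")
print(f"average local fdr among calls: {local_fdr[selected].mean():.3f}")
```

Averaging the local fdr over the selected genes gives the kind of bounded global false discovery rate the abstract refers to.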
Abstract:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias.
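The mean-reversion bias is easy to reproduce in a stripped-down Monte Carlo. The sketch below simulates a Vasicek-type rate (the gamma = 0 special case of the CKLS family) and estimates the speed of mean reversion from the Euler discretization; OLS on the discretized model is used as a simple stand-in for the paper's GMM estimator, and sample length and parameters are illustrative:

```python
# Monte Carlo illustration of upward bias in estimated mean reversion.
import numpy as np

rng = np.random.default_rng(4)
kappa, theta, sigma, dt = 0.2, 0.06, 0.02, 1.0 / 12.0  # monthly data
n_obs, n_reps = 300, 2000                               # 25 years, 2000 replications

estimates = []
for _ in range(n_reps):
    r = np.empty(n_obs)
    r[0] = theta
    for t in range(1, n_obs):
        r[t] = r[t-1] + kappa * (theta - r[t-1]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    dr = np.diff(r)
    # Regress dr_t on a constant and r_{t-1}; implied kappa = -slope / dt.
    X = np.column_stack([np.ones(n_obs - 1), r[:-1]])
    beta = np.linalg.lstsq(X, dr, rcond=None)[0]
    estimates.append(-beta[1] / dt)

print(f"true kappa: {kappa}, mean estimate: {np.mean(estimates):.3f}")
```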
Abstract:
We develop foreign bank technical, cost and profit efficiency models for particular application with data envelopment analysis (DEA). Key motivations for the paper are (a) the often-observed practice of choosing inputs and outputs where the selection process is poorly explained and linkages to theory are unclear, and (b) foreign bank productivity analysis, which has been neglected in DEA banking literature. The main aim is to demonstrate a process grounded in finance and banking theories for developing bank efficiency models, which can bring comparability and direction to empirical productivity studies. We expect this paper to foster empirical bank productivity studies.
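For readers unfamiliar with the mechanics, the technical-efficiency building block such models share is a linear program solved once per bank. A self-contained sketch of the input-oriented, constant-returns formulation, with made-up two-input, one-output data:

```python
# Input-oriented CCR DEA: for each bank, minimise theta such that a
# nonnegative combination of peers uses at most theta * its inputs while
# producing at least its outputs. Data below are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 5.0], [4.0, 8.0]])  # inputs, banks x m
Y = np.array([[1.0], [1.0], [2.0], [1.5]])                      # outputs, banks x s
n, m = X.shape
s = Y.shape[1]

for bank in range(n):
    c = np.r_[1.0, np.zeros(n)]                    # decision vars: [theta, lambdas]
    # Inputs:  X^T lambda - theta * x0 <= 0 ; outputs: -Y^T lambda <= -y0.
    A_ub = np.vstack([np.c_[-X[bank], X.T],
                      np.c_[np.zeros(s), -Y.T]])
    b_ub = np.r_[np.zeros(m), -Y[bank]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"bank {bank}: technical efficiency = {res.x[0]:.3f}")
```

The modelling process the paper advocates then amounts to letting banking theory dictate what goes into X and Y (e.g. deposits as inputs or outputs) rather than choosing them ad hoc.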
Abstract:
Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
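A small sketch of the matrix view described above: if each link is a probabilistic switch, one step of information flow is a transmission matrix and multistep flow is a matrix product. The three-node network and probabilities below are invented for illustration:

```python
# Multistep information flow as repeated application of a transmission
# matrix; row sums below 1 encode distortion/blocking on each link.
import numpy as np

# T[i, j] = probability that a message at person i reaches person j
# intact in one step.
T = np.array([[0.0, 0.8, 0.1],
              [0.3, 0.0, 0.6],
              [0.1, 0.5, 0.0]])

state = np.array([1.0, 0.0, 0.0])     # message originates at node 0
for step in range(1, 4):
    state = state @ T
    print(f"after step {step}: intact-arrival weights = {np.round(state, 3)}")

# Nodes with high column sums receive the most intact information and are
# candidate 'information-rich' opinion-leader positions.
print("column sums (inflow richness):", T.sum(axis=0))
```

Note that the column-sum measure need not coincide with classical centrality, which is the point the abstract makes about information-rich nodes.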
Abstract:
In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs, non-static output quantities, non-static input prices and non-static costs of adjustment, as well as technological change, quasi-fixed inputs and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
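A toy version of the trade-off these models formalise: choose a path of input quantities from the current level towards the statically optimal level, balancing input expenditure against quadratic costs of adjustment. All parameters below are illustrative, not from the Chilean store data:

```python
# Optimal adjustment path under quadratic adjustment costs, contrasted
# with the cost a static model would assume is immediately attainable.
import numpy as np
from scipy.optimize import minimize

x0, x_opt = 10.0, 6.0        # current input level and static optimum
w, a, periods = 2.0, 1.5, 8  # input price, adjustment-cost weight, horizon

def total_cost(path):
    levels = np.r_[x0, path]                        # prepend the starting level
    input_cost = w * path.sum()                     # expenditure on inputs
    adjustment = a * (np.diff(levels) ** 2).sum()   # cost of changing inputs
    return input_cost + adjustment

res = minimize(total_cost, np.full(periods, x0),
               bounds=[(x_opt, None)] * periods)    # inputs cannot fall below x_opt
static_cost = w * x_opt * periods                   # what a static model assumes
print("optimal adjustment path:", np.round(res.x, 2))
print(f"achievable cost: {res.fun:.2f} vs static benchmark: {static_cost:.2f}")
```

The gap between the achievable cost and the static benchmark is exactly the overstatement of potential cost savings that the abstract's empirical results document.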