185 results for model efficiency
in University of Queensland eSpace - Australia
Abstract:
A reversible linear master equation model is presented for pressure- and temperature-dependent bimolecular reactions proceeding via multiple long-lived intermediates. This kinetic treatment, which applies when the reactions are measured under pseudo-first-order conditions, facilitates accurate and efficient simulation of the time dependence of the populations of reactants, intermediate species and products. Detailed exploratory calculations have been carried out to demonstrate the capabilities of the approach, with applications to the bimolecular association reaction C3H6 + H ⇌ C3H7 and the bimolecular chemical activation reaction C2H2 + ¹CH2 → C3H3 + H. The efficiency of the method can be dramatically enhanced through use of a diffusion approximation to the master equation, and a methodology for exploiting the sparse structure of the resulting rate matrix is established.
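As a rough illustration of the computational pattern this abstract describes (a generic sketch, not the authors' implementation; the rate matrix below is invented), a pseudo-first-order linear master equation dp/dt = Mp over grained states can be propagated with a sparse matrix exponential:

```python
# Minimal sketch: propagate dp/dt = M p for a banded (near-tridiagonal)
# rate matrix of the kind a diffusion approximation to the master
# equation produces. All rates here are random placeholders.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

n = 200                                    # illustrative number of grained states
rng = np.random.default_rng(0)
off = rng.random(n - 1)                    # nearest-neighbour transition rates

M = sp.diags([off, off], offsets=[-1, 1], format="lil")
col_sums = np.asarray(M.sum(axis=0)).ravel()
M.setdiag(-col_sums)                       # columns sum to zero: population conserved
M = M.tocsc()

p0 = np.zeros(n)
p0[0] = 1.0                                # all population starts as reactants

for t in (0.5, 1.0, 2.0, 5.0):
    p = expm_multiply(M * t, p0)           # p(t) = exp(M t) p0, without forming exp(M t)
    print(f"t = {t}: total population = {p.sum():.6f}")
```

Exploiting the banded sparsity, as the abstract notes, is what keeps the propagation cheap as the number of grained states grows.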
Abstract:
Previous work has identified several shortcomings in the ability of four spring wheat models and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models, where the final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase the predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance; these are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index, whereby nitrogen or water limitations can lead to early termination of grain filling and hence cessation of the harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and radiation conversion efficiency or from the amount of water transpired and transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. demand for water) and can make nitrogen available for translocation to other organs. Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects act directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When model output was compared with models tested previously, using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated in simulating leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. (C) 1998 Elsevier Science B.V.
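The resource-limited growth and harvest-index logic sketched in this abstract can be illustrated generically as follows (all parameter values are hypothetical placeholders, not I_WHEAT coefficients):

```python
# Hedged sketch of the growth logic described above: daily dry matter is the
# more limiting of a radiation-driven and a transpiration-driven estimate,
# and yield accumulates through a linearly increasing harvest index.
RUE = 1.2        # radiation conversion efficiency, g dry matter per MJ (hypothetical)
TE = 5.0         # transpiration efficiency, g dry matter per kg water (hypothetical)
HI_RATE = 0.015  # daily linear increase in harvest index during grain filling
HI_MAX = 0.50    # ceiling on harvest index

def daily_growth(intercepted_mj: float, transpired_kg: float) -> float:
    """Dry matter increase set by the most limiting resource (light vs water)."""
    return min(RUE * intercepted_mj, TE * transpired_kg)

biomass, hi = 500.0, 0.0                   # g/m2 at the start of grain filling
for day in range(30):
    biomass += daily_growth(intercepted_mj=8.0, transpired_kg=2.5)
    hi = min(hi + HI_RATE, HI_MAX)         # water/N stress would stop this early
grain_yield = hi * biomass
print(f"grain yield ~ {grain_yield:.0f} g/m2 after 30 days")
```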
Abstract:
A mixture model for long-term survivors has been adopted in various fields such as biostatistics and criminology, where some individuals may never experience the type of failure under study. It is directly applicable in situations where the only information available from follow-up on individuals who will never experience this type of failure is in the form of censored observations. In this paper, we consider a modification to the model so that it still applies in the case where, during the follow-up period, it becomes known that an individual will never experience failure from the cause of interest. Unless a model allows for this additional information, a consistent survival analysis will not be obtained. A partial maximum likelihood (ML) approach is proposed that preserves the simplicity of the long-term survival mixture model and provides consistent estimators of the quantities of interest. Some simulation experiments are performed to assess the efficiency of the partial ML approach relative to the full ML approach for survival in the presence of competing risks.
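For reference, the standard long-term survivor mixture model takes the following form (generic notation, not necessarily this paper's):

```latex
% A proportion \pi of individuals never fail; the remainder follow a
% proper survival function S_u(t):
\[
  S(t) \;=\; \pi + (1 - \pi)\, S_u(t), \qquad 0 \le \pi \le 1 .
\]
% Likelihood contributions: an observed failure at t contributes
% (1 - \pi) f_u(t); an ordinary censored observation contributes S(t);
% and, under the modification discussed above, an individual known during
% follow-up to be immune contributes \pi directly.
```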
Abstract:
Most soils contain preferential flow paths that can impact on solute mobility. Solutes can move rapidly down the preferential flow paths with high pore-water velocities, but can be held in the less permeable region of the soil matrix with low pore-water velocities, thereby reducing the efficiency of leaching. In this study, we conducted leaching experiments with interruption of the flow and drainage of the main flow paths to assess the efficiency of this type of leaching. We compared our experimental results to a simple analytical model, which predicts how variations in concentration gradients within a single spherical aggregate (SSA) surrounded by preferential flow paths influence leaching. We used large (length: 300 mm, diameter: 216 mm) undisturbed field soil cores from two contrasting soil types. To carry out intermittent leaching experiments, the field soil cores were first saturated with tracer solution (CaBr2), and background solution (CaCl2) was applied to mimic a leaching event. The cores were then drained at 25- to 30-cm suction to empty the main flow paths, mimicking a dry period during which solutes could redistribute within the undrained region. We also conducted continuous leaching experiments to assess the impact of the dry periods on the efficiency of leaching. The flow interruptions with drainage enhanced leaching by 10-20% for our soils, which was consistent with the model's prediction, given an optimised equivalent aggregate radius for each soil. This parameter quantifies the time scales that characterise diffusion within the undrained region of the soil, and allows us to calculate the duration of the leaching events and interruption periods that would lead to more efficient leaching. Application of these methodologies will aid the development of strategies for improving the management of chemicals in soils, as needed for managing salts, improving fertiliser efficiency, and reclaiming contaminated soils. (C) 2000 Elsevier Science B.V. All rights reserved.
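The single-spherical-aggregate picture can be illustrated with the classical series solution for diffusion out of a sphere (a textbook stand-in, not the paper's model; the diffusivity and radius below are invented):

```python
# Fraction of solute remaining in a sphere of radius a whose surface is
# held at zero concentration (Crank's series solution), as a stand-in for
# solute redistribution in the undrained region during a flow interruption.
import math

def fraction_remaining(D: float, a: float, t: float, terms: int = 200) -> float:
    """M(t)/M0 for diffusion out of a sphere with zero surface concentration."""
    return (6.0 / math.pi**2) * sum(
        math.exp(-n**2 * math.pi**2 * D * t / a**2) / n**2
        for n in range(1, terms + 1)
    )

D = 1e-9   # m^2/s, a typical aqueous diffusivity (illustrative)
a = 0.01   # m, a hypothetical equivalent aggregate radius
for hours in (1, 6, 24):
    frac = fraction_remaining(D, a, hours * 3600)
    print(f"{hours:>2} h interruption: {frac:.2f} of the solute still in the aggregate")
```

The equivalent radius plays the same role as the optimised parameter in the abstract: it sets the diffusion time scale and hence how long an interruption must last to be worthwhile.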
Abstract:
A method involving bubbling of air through a fibrous filter immersed in water has recently been investigated (Agranovski et al. [1]). Experimental results showed that the removal efficiency for ultra-fine aerosols by such filters was greatly increased compared to dry filters. Nuclear Magnetic Resonance (NMR) imaging was used to examine the wet filter and to determine the nature of the gas flow inside the filter (Agranovski et al. [2]). It was found that tortuous preferential pathways (or flow tubes) develop within the filter through which the air flows, and the distribution of air and water inside the porous medium has been investigated. The aim of this paper is to investigate the geometry of the pathways and to make estimates of the flow velocities and particle removal efficiency in such pathways. A mathematical model of the flow of air along the preferred pathways has been developed and verified experimentally. Even for the highest realistic gas velocity the flow field was essentially laminar (Re ≈ 250). We solved Laplace's equation for the stream function to map the trajectories of particles and gas molecules and so investigate the possibility of their removal from the carrier.
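A minimal sketch of the named approach, solving Laplace's equation for the stream function on a rectangular grid (the geometry and boundary values are invented for illustration; the authors' domain is not reproduced):

```python
# Jacobi iteration for nabla^2 psi = 0 on a rectangle standing in for a
# cross-section of a flow tube; contours of psi are the streamlines along
# which gas (and small particles) travel in the laminar regime (Re ~ 250).
import numpy as np

nx, ny = 60, 40
psi = np.zeros((ny, nx))
psi[0, :] = 0.0                         # one wall of the pathway
psi[-1, :] = 1.0                        # opposite wall: unit flux between walls
psi[:, 0] = np.linspace(0.0, 1.0, ny)   # inlet: uniform approach flow
psi[:, -1] = np.linspace(0.0, 1.0, ny)  # outlet

for _ in range(5000):                   # relax interior points toward the solution
    psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                              + psi[1:-1, 2:] + psi[1:-1, :-2])

dpsi_dy, dpsi_dx = np.gradient(psi)     # stream function: u = dpsi/dy, v = -dpsi/dx
u, v = dpsi_dy, -dpsi_dx
print(f"peak |u| on the grid: {np.abs(u).max():.3f}")
```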
Abstract:
We model a buyer who wishes to combine objects owned by two separate sellers in order to realize higher value. Sellers are able to avoid entering into negotiations with the buyer, so that the order in which they negotiate is endogenous. Holdout occurs if at least one of the sellers is not present in the first round of negotiations. We demonstrate that complementarity of the buyer's technology is a necessary condition for equilibrium holdout. Moreover, a rise in complementarity leads to an increased likelihood of holdout and an increased efficiency loss. Applications include patents, the land assembly problem, and mergers.
Abstract:
Conditions which influence the viability, integrity, and extraction efficiency of the isolated perfused rat liver were examined to establish optimal conditions for subsequent work in reperfusion injury studies, including the choice of buffer, use of oncotic agents, hematocrit, perfusion flow rate, and pressure. Rat livers were perfused with MOPS-buffered Ringer solution with or without erythrocytes. Perfusates were collected and analyzed for blood gases, electrolytes, enzymes, radioactivity in MID studies, and lignocaine in extraction studies. Liver tissue was sampled for histological examinations, and the wet:dry weight ratio of the liver was also determined. MOPS-buffered Ringer solution was found to be superior to Krebs bicarbonate buffer in terms of pH control and buffering capacity, especially during any prolonged period of liver perfusion. A pH of 7.2 was chosen for perfusion since this is the physiological pH of the portal blood. The presence of albumin was important as an oncotic agent, particularly when erythrocytes were used in the perfusate. Perfusion pressure, resistance, and vascular volume are flow-dependent, and the inclusion of erythrocytes in the perfusate substantially altered the flow characteristics for perfusion pressure and resistance but not vascular volume. Lignocaine extraction was relatively flow-independent. Perfusion injury, as defined by enzyme release and tissue fine structure, was closely related to the supply of O2. The optimal conditions for liver perfusion depend upon an adequate supply of oxygen. This can be achieved by using either erythrocyte-free perfusate at a flow rate greater than 6 ml/min/g liver or a 20% erythrocyte-containing perfusate at 2 ml/min/g. (C) 1996 Academic Press, Inc.
Abstract:
Experimental data for E. coli debris size reduction during high-pressure homogenisation at 55 MPa are presented. A mathematical model based on grinding theory is developed to describe the data. The model is based on first-order breakage and compensation conditions. It does not require any assumption of a specified distribution for debris size and can be used given information on the initial size distribution of whole cells and the disruption efficiency during homogenisation. The number of homogeniser passes is incorporated into the model and used to describe the size reduction of non-induced stationary and induced E. coli cells during homogenisation. Regressing the results to the model equations gave an excellent fit to experimental data (> 98.7% of variance explained for both fermentations), confirming the model's potential for predicting size reduction during high-pressure homogenisation. This study provides a means to optimise both homogenisation and disc-stack centrifugation conditions for recombinant product recovery. (C) 1997 Elsevier Science Ltd.
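The first-order breakage idea can be sketched as follows (size classes and per-pass rate constants are hypothetical, not the regressed values):

```python
# With each homogeniser pass, a fixed fraction of the material in each size
# class breaks and moves to the next class down (first-order breakage), so
# mean debris size falls with pass number.
import numpy as np

sizes = np.array([2.0, 1.0, 0.5, 0.25])  # um, illustrative debris size classes
w = np.array([1.0, 0.0, 0.0, 0.0])       # mass fractions: all whole cells at N = 0
k = np.array([0.6, 0.4, 0.2, 0.0])       # per-pass breakage rate of each class

for n in range(1, 6):
    broken = k * w                        # first-order: breakage proportional to amount present
    w = w - broken
    w[1:] += broken[:-1]                  # broken mass enters the next class down
    print(f"pass {n}: mean debris size = {np.average(sizes, weights=w):.2f} um")
```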
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
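A schematic of the probit-based strategy (synthetic data and an invented threshold; the authors' covariates and betting rules are not reproduced):

```python
# Fit a probit model for the probability that a bet succeeds, then place
# bets only when the predicted probability clears a threshold.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
spread = rng.normal(0.0, 6.0, n)                 # illustrative point spreads
covered = (rng.normal(0.0, 1.0, n) + 0.03 * spread > 0).astype(int)

X = sm.add_constant(spread)
probit = sm.Probit(covered, X).fit(disp=0)
p_hat = probit.predict(X)                        # in-sample success probabilities

THRESHOLD = 0.60                                 # bet only on high-confidence games
bets = p_hat > THRESHOLD
if bets.any():
    print(f"{bets.sum()} bets placed; hit rate {covered[bets].mean():.2f}")
else:
    print("no games clear the threshold")
```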
Model-based procedure for scale-up of wet, overflow ball mills - Part III: Validation and discussion
Abstract:
A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction relating to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This contrasts with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The Agricultural Production Systems sIMulator, APSIM, is a cropping system modelling environment that simulates the dynamics of soil-plant-management interactions within a single crop or a cropping system. Adaptation of previously developed crop models has resulted in multiple crop modules in APSIM, which have low scientific transparency and code efficiency. A generic crop model template (GCROP) has been developed to capture unifying physiological principles across crops (plant types) and to provide modular and efficient code for crop modelling. It comprises a standard crop interface to the APSIM engine, a generic crop model structure, a crop process library, and well-structured crop parameter files. The process library contains the major science underpinning the crop models and incorporates generic routines, based on physiological principles, for growth and development processes that are common across crops. It allows APSIM to simulate different crops using the same set of computer code. The generic model structure and parameter files provide an easy way to test, modify, exchange and compare modelling approaches at the process level without necessitating changes in the code. The standard interface generalises the model inputs and outputs, and utilises a standard protocol to communicate with other APSIM modules through the APSIM engine. The crop template serves as a convenient means to test new insights and compare approaches to component modelling, while maintaining a focus on predictive capability. This paper describes and discusses the scientific basis, the design, implementation and future development of the crop template in APSIM. On this basis, we argue that the combination of good software engineering with sound crop science can enhance the rate of advance in crop modelling. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
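The template's design can be sketched in miniature (all class and routine names below are invented for illustration, not GCROP identifiers): per-crop parameter files drive shared process-library code through one generic interface.

```python
# One generic crop class + a shared process library: different crops are
# different data (parameter sets), not different code.
from dataclasses import dataclass

def thermal_time(t_mean: float, t_base: float) -> float:
    """Process-library routine: degree-days above a base temperature."""
    return max(t_mean - t_base, 0.0)

def radiation_growth(intercepted_mj: float, rue: float) -> float:
    """Process-library routine: dry matter from intercepted radiation."""
    return rue * intercepted_mj

@dataclass
class CropParameters:                 # stands in for a crop parameter file
    name: str
    t_base: float
    rue: float

class GenericCrop:                    # stands in for the generic model structure
    def __init__(self, params: CropParameters):
        self.params, self.tt, self.biomass = params, 0.0, 0.0

    def step(self, t_mean: float, intercepted_mj: float) -> None:
        self.tt += thermal_time(t_mean, self.params.t_base)
        self.biomass += radiation_growth(intercepted_mj, self.params.rue)

for crop in (GenericCrop(CropParameters("wheat", t_base=0.0, rue=1.2)),
             GenericCrop(CropParameters("sorghum", t_base=11.0, rue=1.25))):
    crop.step(t_mean=22.0, intercepted_mj=8.0)
    print(crop.params.name, round(crop.tt, 1), round(crop.biomass, 2))
```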
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
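The two-component specification described here has the following generic shape (illustrative notation, not necessarily the paper's):

```latex
% Failure types g = 1, 2 with covariates x: mixing probabilities follow a
% logistic model, conditional hazards a proportional hazards model.
\[
  \pi_1(x) \;=\; \frac{\exp(\alpha_0 + \alpha^{\top} x)}
                      {1 + \exp(\alpha_0 + \alpha^{\top} x)},
  \qquad \pi_2(x) \;=\; 1 - \pi_1(x),
\]
\[
  \lambda_g(t \mid x) \;=\; \lambda_{0g}(t)\, \exp\!\big(\beta_g^{\top} x\big),
  \qquad g = 1, 2,
\]
% where the component-baseline hazards \lambda_{0g}(t) are left completely
% unspecified and are estimated nonparametrically within the ECM iterations.
```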
Abstract:
In this paper we investigate the trade-off faced by regulators who must set a price for an intermediate good somewhere between the marginal cost and the monopoly price. We utilize a growth model with monopolistic suppliers of intermediate goods. Investment in innovation is required to produce a new intermediate good. Marginal cost pricing deters innovation, while monopoly pricing maximizes innovation and economic growth at the cost of some static inefficiency. We demonstrate the existence of a second-best price above the marginal cost but below the monopoly price, which maximizes consumer welfare. Simulation results suggest that substantial reductions in consumption, production, growth, and welfare occur where regulators focus on static efficiency issues by setting prices at or near marginal cost.
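The trade-off can be illustrated with a toy welfare calculation (the functional forms and weights below are made up; only the qualitative feature, an interior second-best price strictly between marginal cost and the monopoly price, mirrors the abstract):

```python
# Scan prices between marginal cost and the monopoly price for the welfare
# maximum: consumer surplus falls as the price rises, while a concave
# innovation benefit rises with the markup.
import numpy as np

MC, A = 1.0, 5.0                   # marginal cost; linear demand q = A - p
P_MONOPOLY = (A + MC) / 2.0        # = 3.0, maximises (p - MC) * (A - p)
GAMMA = 3.0                        # hypothetical weight on innovation benefits

prices = np.linspace(MC, P_MONOPOLY, 401)
consumer_surplus = 0.5 * (A - prices) ** 2       # static-efficiency side
innovation = GAMMA * np.sqrt(prices - MC)        # dynamic (growth) side
welfare = consumer_surplus + innovation

p_star = prices[np.argmax(welfare)]
print(f"second-best price ~ {p_star:.2f}, between MC = {MC} and monopoly = {P_MONOPOLY}")
```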