47 results for Process models
Abstract:
Recent advances in computer technology have made it possible to create virtual plants by simulating the details of structural development of individual plants. Software has been developed that processes plant models expressed in a special purpose mini-language based on the Lindenmayer system formalism. These models can be extended from their architectural basis to capture plant physiology by integrating them with crop models, which estimate biomass production as a consequence of environmental inputs. Through this process, virtual plants will gain the ability to react to broad environmental conditions, while crop models will gain a visualisation component. This integration requires the resolution of the fundamentally different time scales underlying the approaches. Architectural models are usually based on physiological time; each time step encompasses the same amount of development in the plant, without regard to the passage of real time. In contrast, physiological models are based in real time; the amount of development in a time step is dependent on environmental conditions during the period. This paper provides a background on the plant modelling language, then describes how widely-used concepts of thermal time can be implemented to resolve these time scale differences. The process is illustrated using a case study. (C) 1997 Elsevier Science Ltd.
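As an aside, the thermal-time mechanism described above can be sketched in a few lines of code. A minimal sketch follows; the base and optimum temperatures, the degree-days per derivation step, and all names are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: converting real (calendar) time to physiological time via
# thermal time (growing degree-days). Thresholds are illustrative assumptions.

def daily_thermal_time(t_mean, t_base=8.0, t_opt=32.0):
    """Degree-days accumulated in one day, linear between base and optimum."""
    return max(0.0, min(t_mean, t_opt) - t_base)

def developmental_steps(daily_means, dd_per_step=20.0):
    """Map a series of daily mean temperatures onto architectural model steps.

    Each derivation step of the architectural (L-system) model is taken to
    represent a fixed amount of thermal time (dd_per_step); warm days
    therefore advance development faster than cool days.
    """
    accumulated, steps = 0.0, 0
    for t_mean in daily_means:
        accumulated += daily_thermal_time(t_mean)
        while accumulated >= dd_per_step:
            accumulated -= dd_per_step
            steps += 1  # trigger one derivation step of the plant model
    return steps

# Example: a fortnight of daily mean temperatures (degrees C)
temps = [12, 14, 18, 21, 25, 27, 30, 29, 26, 22, 19, 16, 13, 11]
print(developmental_steps(temps))
```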
Abstract:
This work studied the structure-hepatic disposition relationships for cationic drugs of varying lipophilicity using a single-pass, in situ rat liver preparation. The lipophilicity among the cationic drugs studied in this work is in the following order: diltiazem > propranolol > labetalol > prazosin > antipyrine > atenolol. Parameters characterizing the hepatic distribution and elimination kinetics of the drugs were estimated using the multiple indicator dilution method. The kinetic model used to describe drug transport (the two-phase stochastic model) integrated cytoplasmic binding kinetics and belongs to the class of barrier-limited and space-distributed liver models. Hepatic extraction ratio (E) (0.30-0.92) increased with lipophilicity. The intracellular binding rate constant (k(on)) and the equilibrium amount ratios characterizing the slowly and rapidly equilibrating binding sites (K-S and K-R) increase with the lipophilicity of the drug (k(on): 0.05-0.35 s(-1); K-S: 0.61-16.67; K-R: 0.36-0.95), whereas the intracellular unbinding rate constant (k(off)) decreases with the lipophilicity of the drug (0.081-0.021 s(-1)). The partition ratio of the influx (k(in)) and efflux (k(out)) rate constants, k(in)/k(out), increases with increasing pK(a) value of the drug [from 1.72 for antipyrine (pK(a) = 1.45) to 9.76 for propranolol (pK(a) = 9.45)], the differences in k(in)/k(out) for the different drugs arising mainly from ion trapping in the mitochondria and lysosomes. The values of intrinsic elimination clearance (CLint), permeation clearance (CLpT), and permeability-surface area product (PS) all increase with the lipophilicity of the drug [CLint (ml min(-1) g(-1) of liver): 10.08-67.41; CLpT (ml min(-1) g(-1) of liver): 10.80-5.35; PS (ml min(-1) g(-1) of liver): 14.59-90.54]. It is concluded that cationic drug kinetics in the liver can be modeled using models that integrate the presence of cytoplasmic binding, a hepatocyte barrier, and a vascular transit density function.
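The ion-trapping effect invoked above follows from the Henderson-Hasselbalch relation. A minimal sketch illustrating the direction of the pK(a) trend (the compartment pH values are typical textbook figures and the equilibrium accumulation ratio is a simplified quantity, not the k(in)/k(out) estimated in the study):

```python
import math

def fraction_ionized(pka, ph):
    """Fraction of a monobasic cationic drug in the ionized (protonated) form."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

def trapping_ratio(pka, ph_plasma=7.4, ph_organelle=5.0):
    """Equilibrium accumulation ratio if only the un-ionized form permeates.

    Total drug accumulates where the ionized fraction is higher
    (e.g. acidic lysosomes), and the effect grows with pKa.
    """
    f_un_plasma = 1.0 - fraction_ionized(pka, ph_plasma)
    f_un_organelle = 1.0 - fraction_ionized(pka, ph_organelle)
    return f_un_plasma / f_un_organelle

for name, pka in [("antipyrine", 1.45), ("propranolol", 9.45)]:
    print(name, round(trapping_ratio(pka), 2))
```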
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
Abstract:
In this paper, we consider testing for additivity in a class of nonparametric stochastic regression models. Two test statistics are constructed and their asymptotic distributions are established. We also conduct a small sample study for one of the test statistics through a simulated example. (C) 2002 Elsevier Science (USA).
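A minimal sketch of the general idea of an additivity test, comparing an additive backfit with a full bivariate kernel fit; this is an illustration only, not the test statistics constructed in the paper:

```python
# Illustration of additivity testing by model comparison: an additive fit
# m1(x1) + m2(x2) is compared with a full bivariate kernel fit; a large gap
# in residual sums of squares suggests non-additivity.
import numpy as np

def nw_smooth(x, y, grid, h):
    """One-dimensional Nadaraya-Watson estimator with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 400
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
# The interaction term 0.5*x1*x2 breaks additivity on purpose.
y = np.sin(np.pi * x1) + x2 ** 2 + 0.5 * x1 * x2 + rng.normal(0, 0.2, n)

# Full bivariate kernel fit at the observation points
w = np.exp(-0.5 * ((x1[:, None] - x1[None, :]) ** 2 +
                   (x2[:, None] - x2[None, :]) ** 2) / 0.15 ** 2)
full_fit = (w @ y) / w.sum(axis=1)

# Additive fit by one crude round of backfitting
m1 = nw_smooth(x1, y - y.mean(), x1, 0.15)
m2 = nw_smooth(x2, y - y.mean() - m1, x2, 0.15)
additive_fit = y.mean() + m1 + m2

stat = np.sum((y - additive_fit) ** 2) - np.sum((y - full_fit) ** 2)
print("RSS gap (large values indicate non-additivity):", round(stat, 3))
```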
Abstract:
Accurate habitat mapping is critical to landscape ecological studies such as those required for developing and testing Montreal Process indicator 1.1e, fragmentation of forest types. This task poses a major challenge to remote sensing, especially in mixed-species, variable-age forests such as the dry eucalypt forests of subtropical eastern Australia. In this paper, we apply an innovative approach that uses a small section of one-metre resolution airborne data to calibrate a moderate spatial resolution model (30 m resolution; scale 1:50 000) based on Landsat Thematic Mapper data to estimate canopy structural properties in St Marys State Forest, near Maryborough, south-eastern Queensland. The approach applies an image-processing model that assumes each image pixel is significantly larger than individual tree crowns and gaps to estimate crown-cover percentage, stem density and mean crown diameter. These parameters were classified into three discrete habitat classes to match the ecology of four exudivorous arboreal species (yellow-bellied glider Petaurus australis, sugar glider P. breviceps, squirrel glider P. norfolcensis, and feathertail glider Acrobates pygmaeus), and one folivorous arboreal marsupial, the greater glider Petauroides volans. These species were targeted due to their known ecological preference for old trees with hollows, and differences in their home range requirements. The overall mapping accuracy, visually assessed against transects (n = 93) interpreted from a digital orthophoto and validated in the field, was 79% (KHAT statistic = 0.72). The KHAT statistic serves as an indicator of the extent to which the percentage correct values of the error matrix are due to ‘true’ agreement versus ‘chance’ agreement. This means that we are able to reliably report on the effect of habitat loss on target species, especially those with a large home range size (e.g. yellow-bellied glider). However, the classified habitat map failed to accurately capture the spatial patterning (e.g. patch size and shape) of stands with a trace or sub-dominance of senescent trees. This outcome makes the reporting of the effects of habitat fragmentation more problematic, especially for species with a small home range size (e.g. feathertail glider). With further model refinement and validation, however, this moderate-resolution approach offers an important, cost-effective advancement in mapping the age of dry eucalypt forests in the region.
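The KHAT statistic referred to above is Cohen's kappa computed from the classification error matrix. A minimal sketch (the 3 x 3 matrix below is fabricated for illustration and does not reproduce the paper's figures):

```python
# KHAT (Cohen's kappa): chance-corrected agreement from an error matrix.
# Rows are mapped habitat classes, columns are reference (transect) classes.
import numpy as np

def khat(error_matrix):
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    p_observed = np.trace(m) / n                       # percentage correct
    p_chance = (m.sum(axis=0) @ m.sum(axis=1)) / n**2  # agreement expected by chance
    return (p_observed - p_chance) / (1.0 - p_chance)

matrix = [[28, 4, 3],
          [5, 24, 3],
          [2, 3, 21]]
print(round(khat(matrix), 2))
```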
Abstract:
The thin-layer drying behaviour of bananas in a heat pump dehumidifier dryer was examined. Four pre-treatments (blanching, chilling, freezing and combined blanching and freezing) were applied to the bananas, which were dried at 50 degrees C with an air velocity of 3.1 m s(-1) and with the relative humidity of the inlet air of 10-35%. Three drying models, the simple model, the two-term exponential model and the Page model, were examined. All models were evaluated using three statistical measures: correlation coefficient, root mean square error, and mean absolute percent error. Moisture diffusivity was calculated based on the diffusion equation for an infinite cylindrical shape using the slope method. The rate of drying was higher for the pre-treatments involving freezing. The sample which was blanched only did not show any improvement in drying rate; in fact, a longer drying time resulted due to water absorption during blanching. There was no change in the rate for the chilled sample compared with the control. While all models closely fitted the drying data, the simple model showed the greatest deviation from the experimental results. The two-term exponential model was found to be the best model for describing the drying curves of bananas because its parameters better represent the physical characteristics of the drying process. Moisture diffusivities of bananas were in the range 4.3-13.2 x 10(-10) m(2) s(-1). (C) 2002 Published by Elsevier Science Ltd.
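A minimal sketch of fitting one of the three thin-layer models compared above, the Page model MR(t) = exp(-k t^n); the moisture-ratio data points are fabricated for illustration:

```python
# Fit the Page thin-layer drying model to moisture-ratio data and report
# one of the statistical measures used above (root mean square error).
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    return np.exp(-k * t ** n)

t = np.array([0.5, 1, 2, 3, 4, 6, 8, 10])               # drying time, h
mr = np.array([0.82, 0.68, 0.47, 0.33, 0.24, 0.12, 0.06, 0.03])  # moisture ratio

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.1, 1.0))
residuals = mr - page_model(t, k, n)
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"k = {k:.3f}, n = {n:.3f}, RMSE = {rmse:.4f}")
```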
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
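A minimal sketch of the back-to-back testing idea, differencing two implementations driven by identical inputs; the toy model and the injected bug are illustrative, and the paper's observer-based isolation step is not reproduced:

```python
# Back-to-back testing: two independent implementations of the same model
# are driven with identical inputs and their outputs are differenced to
# form residuals. Nonzero residuals flag a coding error in one of them.
import numpy as np

def model_impl_a(x, dt=0.01, steps=1000):
    """Reference implementation: explicit Euler for dx/dt = -2x + 1."""
    out = []
    for _ in range(steps):
        x = x + dt * (-2.0 * x + 1.0)
        out.append(x)
    return np.array(out)

def model_impl_b(x, dt=0.01, steps=1000):
    """Second implementation with a deliberately injected sign error."""
    out = []
    for _ in range(steps):
        x = x + dt * (-2.0 * x - 1.0)   # bug: should be +1.0
        out.append(x)
    return np.array(out)

residuals = model_impl_a(0.0) - model_impl_b(0.0)
print("max |residual|:", np.abs(residuals).max())  # nonzero: implementations disagree
```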
Abstract:
Remotely sensed data have been used extensively for environmental monitoring and modeling at a number of spatial scales; however, a limited range of satellite imaging systems often constrained the scales of these analyses. A wider variety of data sets is now available, allowing image data to be selected to match the scale of environmental structure(s) or process(es) being examined. A framework is presented for use by environmental scientists and managers, enabling their spatial data collection needs to be linked to a suitable form of remotely sensed data. A six-step approach is used, combining image spatial analysis and scaling tools, within the context of hierarchy theory. The main steps involved are: (1) identification of information requirements for the monitoring or management problem; (2) development of ideal image dimensions (scene model); (3) exploratory analysis of existing remotely sensed data using scaling techniques; (4) selection and evaluation of suitable remotely sensed data based on the scene model; (5) selection of suitable spatial analytic techniques to meet information requirements; and (6) cost-benefit analysis. Results from a case study show that the framework provided an objective mechanism to identify relevant aspects of the monitoring problem and environmental characteristics for selecting remotely sensed data and analysis techniques.
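Step (3) can draw on standard image-scaling diagnostics. A minimal sketch of one such tool, average local variance versus pixel aggregation level; the synthetic scene and all parameters are assumptions for illustration, not code from the paper:

```python
# Average local variance as a function of pixel aggregation level, a common
# diagnostic for relating scene structure to a suitable pixel size.
import numpy as np

def average_local_variance(img, window=3):
    """Mean variance over all window x window neighbourhoods."""
    h, w = img.shape
    vals = [img[i:i + window, j:j + window].var()
            for i in range(h - window + 1)
            for j in range(w - window + 1)]
    return float(np.mean(vals))

def aggregate(img, factor):
    """Coarsen the image by block-averaging (simulates larger pixels)."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
# Spatially correlated synthetic field standing in for a real image band
scene = rng.normal(size=(120, 120)).cumsum(axis=0).cumsum(axis=1)

for factor in (1, 2, 4, 8):
    print(factor, round(average_local_variance(aggregate(scene, factor)), 2))
```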
Abstract:
The birth, death and catastrophe process is an extension of the birth-death process that incorporates the possibility of reductions in population of arbitrary size. We will consider a general form of this model in which the transition rates are allowed to depend on the current population size in an arbitrary manner. The linear case, where the transition rates are proportional to current population size, has been studied extensively. In particular, extinction probabilities, the expected time to extinction, and the distribution of the population size conditional on nonextinction (the quasi-stationary distribution) have all been evaluated explicitly. However, whilst these characteristics are of interest in the modelling and management of populations, processes with linear rate coefficients represent only a very limited class of models. We address this limitation by allowing for a wider range of catastrophic events. Despite this generalisation, explicit expressions can still be found for the expected extinction times.
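A minimal sketch of the linear-rate case, estimating the expected time to extinction by simulation; the rates and the catastrophe-size distribution are illustrative choices, not the general model analysed in the paper:

```python
# Monte Carlo estimate of the expected extinction time for a birth, death
# and catastrophe process with rates proportional to population size.
import random

def extinction_time(n0, birth=0.6, death=0.5, catastrophe=0.05, t_max=1e4):
    """Simulate one path; a catastrophe removes between 1 and n individuals."""
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        total_rate = (birth + death + catastrophe) * n  # linear rate coefficients
        t += random.expovariate(total_rate)
        u = random.uniform(0.0, birth + death + catastrophe)
        if u < birth:
            n += 1
        elif u < birth + death:
            n -= 1
        else:
            n -= random.randint(1, n)                   # catastrophe of arbitrary size
    return t

random.seed(1)
times = [extinction_time(20) for _ in range(500)]
print("estimated mean time to extinction:", round(sum(times) / len(times), 2))
```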
Abstract:
NPT and NVT Monte Carlo simulations are applied to models for methane and water to predict the PVT behaviour of these fluids over a wide range of temperatures and pressures. The potential models examined in this paper have previously been presented in the literature, with their specific parameters optimised to fit phase coexistence data. The exponential-6 potential for methane gives generally good prediction of PVT behaviour over the full range of temperatures and pressures studied, with the only significant deviation from experimental data seen at high temperatures and pressures. The NSPCE water model shows very poor prediction of PVT behaviour, particularly at dense conditions. To improve this, the charge separation in the NSPCE model is varied with density. Improvements for vapour and liquid phase PVT predictions are achieved with this variation. No improvement was found in the prediction of the oxygen-oxygen radial distribution by varying charge separation under dense phase conditions. (C) 2004 Elsevier B.V. All rights reserved.
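For reference, the exponential-6 pair potential mentioned above has the standard Buckingham form. A minimal sketch; the parameter values are placeholders, not necessarily the optimised literature set the paper evaluates:

```python
# Exponential-6 (Buckingham) pair potential:
# U(r) = eps/(1 - 6/alpha) * [ (6/alpha) * exp(alpha*(1 - r/r_min)) - (r_min/r)**6 ]
import math

def exp6(r, epsilon, r_min, alpha):
    """Exponential-6 potential; valid for r above the inner cutoff, where
    the potential is repulsive (it diverges unphysically as r -> 0)."""
    pref = epsilon / (1.0 - 6.0 / alpha)
    return pref * ((6.0 / alpha) * math.exp(alpha * (1.0 - r / r_min))
                   - (r_min / r) ** 6)

# Illustrative methane-like parameters (epsilon/k in K, r_min in Angstrom)
for r in (3.5, 4.0, 4.5, 5.0):
    print(r, round(exp6(r, epsilon=160.3, r_min=4.18, alpha=15.0), 2))
```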
Abstract:
Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
A systematic goal-driven top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated on some simple case studies including a heat exchanger network and a more complex wet granulation process.
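A minimal sketch of the basic place-transition firing rule that coloured Petri nets build on; the two-input net is illustrative, and the paper's hierarchical nets and substitution place-transition pairs are not reproduced:

```python
# Basic Petri net semantics: a transition is enabled when its input places
# hold enough tokens; firing consumes input tokens and produces output tokens.

def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places, produce tokens in output places."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy example loosely echoing the heat exchanger case study: two feed
# streams enable one exchange step.
marking = {"hot_in": 1, "cold_in": 1, "out": 0}
pre, post = {"hot_in": 1, "cold_in": 1}, {"out": 1}
print(fire(marking, pre, post))   # {'hot_in': 0, 'cold_in': 0, 'out': 1}
```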
Abstract:
In this paper, we investigate the effects of various potential models on the description of vapor–liquid equilibria (VLE) and adsorption of simple gases on highly graphitized thermal carbon black. It is found that some potential models proposed in the literature are not suitable for the description of VLE (saturated gas and liquid densities and the variation of vapor pressure with temperature). Simple gases, such as neon, argon, krypton, xenon, nitrogen, and methane are studied in this paper. To describe the isotherms on graphitized thermal carbon black correctly, the surface mediation damping factor introduced in our recent publication should be used to correctly calculate the fluid–fluid interaction energy between particles close to the surface. It is found that the damping constant for the noble gas family is linearly dependent on the polarizability, suggesting that the electric field of the graphite surface has a direct induction effect on the induced dipole of these molecules. As a result of this polarization by the graphite surface, the fluid–fluid interaction energy is reduced whenever two particles are near the surface. In the case of methane, we found that the damping constant is less than that of a noble gas having a similar polarizability, while in the case of nitrogen the damping factor is much greater, which is most likely due to the quadrupolar nature of nitrogen.
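A minimal sketch of the idea of surface-mediated damping of the fluid-fluid interaction. The functional form below (a constant reduction applied when both particles lie within a cutoff height of the surface) is an assumption for illustration only, not the damping function introduced in the paper:

```python
# Illustrative damping of a Lennard-Jones fluid-fluid pair energy when both
# particles sit close to the adsorbing surface.

def lj(r, epsilon, sigma):
    """Lennard-Jones 12-6 pair potential."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def damped_pair_energy(r, z1, z2, epsilon, sigma, damping=0.1, z_cut=None):
    """Reduce the pair energy when both particles are near the surface
    (z is the height above the graphite plane)."""
    z_cut = sigma if z_cut is None else z_cut
    scale = 1.0 - damping if (z1 < z_cut and z2 < z_cut) else 1.0
    return scale * lj(r, epsilon, sigma)

# Argon-like parameters (epsilon/k in K, sigma in Angstrom), illustrative
print(damped_pair_energy(3.8, z1=3.0, z2=3.2, epsilon=119.8, sigma=3.405))  # damped
print(damped_pair_energy(3.8, z1=8.0, z2=3.2, epsilon=119.8, sigma=3.405))  # undamped
```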
Abstract:
The process of adsorption of two dissociating and two non-dissociating aromatic compounds from dilute aqueous solutions on an untreated, commercially available activated carbon (B.D.H.) was investigated systematically. All adsorption experiments were carried out in pH-controlled aqueous solutions. The experimental isotherms were fitted to four different models (Langmuir homogeneous model, Langmuir binary model, Langmuir-Freundlich single model and Langmuir-Freundlich double model). Variation of the model parameters with the solution pH was studied and used to gain further insight into the adsorption process. The relationship between the model parameters and the solution pH and pK(a) was used to predict the adsorption capacity for the molecular and ionic forms of the solutes in other solutions. A relationship was sought to predict the effect of pH on the adsorption systems and to estimate the maximum adsorption capacity of the carbon at any pH where the solute is reasonably well ionized. N-2 and CO2 adsorption were used to characterize the carbon. X-ray Photoelectron Spectroscopy (XPS) measurements were used for surface elemental analysis of the activated carbon.
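A minimal sketch of fitting the single-site Langmuir and Langmuir-Freundlich forms at one fixed pH; the concentration-loading data are fabricated, and only the model forms are standard:

```python
# Fit Langmuir and Langmuir-Freundlich isotherms to dilute-solution
# adsorption data at a fixed pH.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, b):
    return q_max * b * c / (1.0 + b * c)

def langmuir_freundlich(c, q_max, b, n):
    return q_max * (b * c) ** n / (1.0 + (b * c) ** n)

c = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])       # mmol/L, fixed pH
q = np.array([0.21, 0.35, 0.52, 0.78, 0.95, 1.08, 1.19])  # mmol/g adsorbed

p_l, _ = curve_fit(langmuir, c, q, p0=(1.2, 2.0))
p_lf, _ = curve_fit(langmuir_freundlich, c, q, p0=(1.2, 2.0, 0.8))
print("Langmuir q_max, b:", np.round(p_l, 3))
print("Langmuir-Freundlich q_max, b, n:", np.round(p_lf, 3))
```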
Abstract:
We develop foreign bank technical, cost and profit efficiency models for particular application with data envelopment analysis (DEA). Key motivations for the paper are (a) the often-observed practice of choosing inputs and outputs where the selection process is poorly explained and linkages to theory are unclear, and (b) foreign bank productivity analysis, which has been neglected in DEA banking literature. The main aim is to demonstrate a process grounded in finance and banking theories for developing bank efficiency models, which can bring comparability and direction to empirical productivity studies. We expect this paper to foster empirical bank productivity studies.
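A minimal sketch of the standard input-oriented CCR efficiency score that DEA studies compute, posed as a linear program; the three-bank data set is fabricated, and the paper's contribution is the theory-grounded choice of inputs and outputs, not this LP:

```python
# DEA efficiency (input-oriented CCR envelopment form):
# min theta  s.t.  X.T @ lam <= theta * x_k,  Y.T @ lam >= y_k,  lam >= 0
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 30.0], [40.0, 10.0], [40.0, 40.0]])  # inputs per bank
Y = np.array([[10.0], [10.0], [12.0]])                    # outputs per bank

def ccr_efficiency(k):
    n, m = X.shape            # n banks, m inputs
    s = Y.shape[1]            # s outputs
    c = np.r_[1.0, np.zeros(n)]               # variables: [theta, lam_1..lam_n]
    A1 = np.c_[-X[k], X.T]                    # X.T @ lam - theta * x_k <= 0
    A2 = np.c_[np.zeros(s), -Y.T]             # -Y.T @ lam <= -y_k
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for k in range(3):
    print(f"bank {k}: efficiency = {ccr_efficiency(k):.3f}")
```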