62 results for process model collection
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
Abstract:
A model has been developed which enables the viscosities of coal ash slags to be predicted as a function of composition and temperature under reducing conditions. The model describes both completely liquid and heterogeneous, i.e. partly crystallised, slags in the Al2O3-CaO-'FeO'-SiO2 system in equilibrium with metallic iron. The Urbain formalism has been modified to describe the viscosities of the liquid slag phase over the complete range of compositions and a wide range of temperatures. The computer package F*A*C*T was used to predict the proportions of solids and the compositions of the remaining liquid phases. The Roscoe equation has been used to describe the effect of the presence of suspended solids (slurry effect) on the viscosity of partly crystallised slag systems. The model provides a good description of the experimental data for fully liquid slags and for liquid + solids mixtures over the complete range of compositions and a wide range of temperatures. This model can now be used for viscosity predictions in industrial slag systems. Examples of the application of the new model to coal ash fluxing and blending are given in the paper. (C) 2001 Elsevier Science Ltd. All rights reserved.
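As a rough sketch of how the two published relations named above combine, the snippet below evaluates the Urbain form for the liquid phase and applies the Roscoe correction for suspended solids. The parameters A and B, the Roscoe constant of 1.35 and all numerical values are illustrative assumptions; the paper's modified Urbain formalism and the F*A*C*T phase-equilibrium step are not reproduced.

```python
import math

def urbain_viscosity(temp_k, a_param, b_param):
    """Liquid-slag viscosity from the Urbain form eta = A * T * exp(1000 * B / T).
    A and B are composition-dependent parameters; deriving them from slag
    chemistry (the modified formalism of the paper) is not reproduced here."""
    return a_param * temp_k * math.exp(1000.0 * b_param / temp_k)

def roscoe_slurry_viscosity(eta_liquid, solid_fraction, k=1.35):
    """Roscoe correction for a partly crystallised slag:
    eta_eff = eta_liquid * (1 - k * phi) ** -2.5, phi = solids volume fraction."""
    if k * solid_fraction >= 1.0:
        raise ValueError("solid fraction too high for the Roscoe expression")
    return eta_liquid * (1.0 - k * solid_fraction) ** -2.5

# Illustrative values only; A, B and the temperature are not from the paper.
eta_liq = urbain_viscosity(temp_k=1773.0, a_param=3.0e-9, b_param=28.0)
print(roscoe_slurry_viscosity(eta_liq, solid_fraction=0.15))
```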
Abstract:
At the core of the analysis task in the development process is information systems requirements modelling. Modelling of requirements has been occurring for many years and the techniques used have progressed from flowcharting through data flow diagrams and entity-relationship diagrams to object-oriented schemas today. Unfortunately, researchers have been able to give practitioners only limited theoretical guidance on which techniques to use and when. In an attempt to address this situation, Wand and Weber have developed a series of models based on the ontological theory of Mario Bunge, the Bunge-Wand-Weber (BWW) models. Two particular criticisms of the models have persisted, however: the understandability of the constructs in the BWW models and the difficulty of applying the models to a modelling technique. This paper addresses these issues by presenting a meta model of the BWW constructs using a meta language that is familiar to many IS professionals, more specific than plain English text, but easier to understand than the set-theoretic language of the original BWW models. Such a meta model also facilitates the application of the BWW theory to other modelling techniques that have similar meta models defined. Moreover, this approach supports the identification of patterns of constructs that might be common across meta models for modelling techniques. Such findings are useful in extending and refining the BWW theory. (C) 2002 Elsevier Science Ltd. All rights reserved.
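Purely as an illustration of what a machine-readable meta model of ontological constructs can look like, the toy fragment below encodes a few core BWW constructs (thing, property, state, transformation) as Python classes. The structure and relationships are assumptions for illustration, not the meta model or meta language presented in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Property:
    name: str

@dataclass
class Thing:
    # In the BWW ontology a thing possesses properties; a state is the set of
    # values those properties take at a point in time.
    name: str
    properties: List[Property] = field(default_factory=list)

@dataclass
class State:
    thing: Thing
    values: Dict[str, str]

@dataclass
class Transformation:
    # A transformation maps one state of a thing to another.
    before: State
    after: State

order = Thing("Order", [Property("status")])
shipping = Transformation(State(order, {"status": "open"}),
                          State(order, {"status": "shipped"}))
```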
Abstract:
The biological reactions during the settling and decant periods of Sequencing Batch Reactors (SBRs) are generally ignored as they are not easily measured or described by modelling approaches. However, important processes are taking place, and in particular when the influent is fed into the bottom of the reactor at the same time (one of the main features of the UniFed process), the inclusion of these stages is crucial for accurate process predictions. Due to the vertical stratification of both liquid and solid components, a one-dimensional hydraulic model is combined with a modified ASM2d biological model to allow the prediction of settling velocity, sludge concentration, soluble components and biological processes during the non-mixed periods of the SBR. The model is calibrated on a full-scale UniFed SBR system with tracer breakthrough tests, depth profiles of particulate and soluble compounds and measurements of the key components during the mixed aerobic period. This model is then validated against results from an independent experimental period with considerably different operating parameters. In both cases, the model is able to accurately predict the stratification and most of the biological reactions occurring in the sludge blanket and the supernatant during the non-mixed periods. Together with a correct description of the mixed aerobic period, a good prediction of the overall SBR performance can be achieved.
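One widely used ingredient of one-dimensional settler models is the Takács double-exponential settling velocity function; the sketch below shows that function as an illustrative assumption of how settling velocity can depend on solids concentration, with typical literature parameter values rather than those calibrated on the UniFed SBR in the paper.

```python
import math

def takacs_settling_velocity(x_tss, v0=474.0, v0_max=250.0,
                             rh=5.76e-4, rp=2.86e-3, x_min=12.0):
    """Double-exponential (Takacs) settling velocity in m/d for a solids
    concentration x_tss in g/m3. Parameter values are common literature
    defaults, not those calibrated in the paper."""
    x_star = max(x_tss - x_min, 0.0)
    v = v0 * (math.exp(-rh * x_star) - math.exp(-rp * x_star))
    return max(0.0, min(v, v0_max))

def gravity_flux(x_tss):
    """Gravity solids flux (g/m2/d) for one layer of a 1-D settler model."""
    return takacs_settling_velocity(x_tss) * x_tss

print(gravity_flux(3500.0))
```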
Abstract:
Accurate habitat mapping is critical to landscape ecological studies such as those required for developing and testing Montreal Process indicator 1.1e, fragmentation of forest types. This task poses a major challenge to remote sensing, especially in mixed-species, variable-age forests such as the dry eucalypt forests of subtropical eastern Australia. In this paper, we apply an innovative approach that uses a small section of one-metre resolution airborne data to calibrate a moderate spatial resolution model (30 m resolution; scale 1:50 000) based on Landsat Thematic Mapper data to estimate canopy structural properties in St Marys State Forest, near Maryborough, south-eastern Queensland. The approach applies an image-processing model that assumes each image pixel is significantly larger than individual tree crowns and gaps to estimate crown-cover percentage, stem density and mean crown diameter. These parameters were classified into three discrete habitat classes to match the ecology of four exudivorous arboreal species (yellow-bellied glider Petaurus australis, sugar glider P. breviceps, squirrel glider P. norfolcensis, and feathertail glider Acrobates pygmaeus), and one folivorous arboreal marsupial, the greater glider Petauroides volans. These species were targeted due to their known ecological preference for old trees with hollows, and differences in their home range requirements. The overall mapping accuracy, visually assessed against transects (n = 93) interpreted from a digital orthophoto and validated in the field, was 79% (KHAT statistic = 0.72). The KHAT statistic serves as an indicator of the extent to which the percentage correct values of the error matrix are due to 'true' agreement versus 'chance' agreement. This means that we are able to reliably report on the effect of habitat loss on target species, especially those with a large home range size (e.g. yellow-bellied glider). However, the classified habitat map failed to accurately capture the spatial patterning (e.g. patch size and shape) of stands with a trace or sub-dominance of senescent trees. This outcome makes the reporting of the effects of habitat fragmentation more problematic, especially for species with a small home range size (e.g. feathertail glider). With further model refinement and validation, however, this moderate-resolution approach offers an important, cost-effective advancement in mapping the age of dry eucalypt forests in the region.
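The KHAT (Cohen's kappa) statistic quoted above can be computed directly from the error matrix; a minimal version is sketched below with a made-up 3-class matrix, not the paper's data.

```python
import numpy as np

def khat(error_matrix):
    """Cohen's kappa (the KHAT statistic) from a square error/confusion matrix
    whose rows are mapped classes and columns are reference classes."""
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    p_observed = np.trace(m) / n                               # overall accuracy
    p_chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (p_observed - p_chance) / (1.0 - p_chance)

# Illustrative 3-class matrix (made-up counts).
print(khat([[30, 3, 2],
            [4, 25, 3],
            [1, 2, 23]]))
```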
Abstract:
This paper addresses robust model-order reduction of a high dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Starting from a nonlinear, distributed parameter model of the same process, which was validated against experimental data from an existing pilot-scale BNR activated sludge plant, we developed a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented and, based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to a model with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
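For reference, a minimal square-root balanced truncation of a stable linear state-space model looks like the sketch below. The paper's model is nonlinear, and its preferred variant, singular perturbation approximation, retains the steady-state contribution of the discarded states rather than simply dropping them, so this is only the linear truncation skeleton under the stated assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable, minimal LTI model (A, B, C)
    to r states. Requires A Hurwitz so the Gramians exist and are positive
    definite; returns the reduced matrices and the Hankel singular values."""
    # Controllability and observability Gramians:
    #   A P + P A' + B B' = 0   and   A' Q + Q A + C' C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Zc = cholesky(P, lower=True)
    Zo = cholesky(Q, lower=True)
    U, s, Vt = svd(Zo.T @ Zc)                  # s holds the Hankel singular values
    S_inv_sqrt = np.diag(s[:r] ** -0.5)
    T = Zc @ Vt[:r].T @ S_inv_sqrt             # right projection
    W = Zo @ U[:, :r] @ S_inv_sqrt             # left projection, W.T @ T = I_r
    return W.T @ A @ T, W.T @ B, C @ T, s
```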
Abstract:
The Agricultural Production Systems sIMulator, APSIM, is a cropping system modelling environment that simulates the dynamics of soil-plant-management interactions within a single crop or a cropping system. Adaptation of previously developed crop models has resulted in multiple crop modules in APSIM, which have low scientific transparency and code efficiency. A generic crop model template (GCROP) has been developed to capture unifying physiological principles across crops (plant types) and to provide modular and efficient code for crop modelling. It comprises a standard crop interface to the APSIM engine, a generic crop model structure, a crop process library, and well-structured crop parameter files. The process library contains the major science underpinning the crop models and incorporates generic routines based on physiological principles for growth and development processes that are common across crops. It allows APSIM to simulate different crops using the same set of computer code. The generic model structure and parameter files provide an easy way to test, modify, exchange and compare modelling approaches at process level without necessitating changes in the code. The standard interface generalises the model inputs and outputs, and utilises a standard protocol to communicate with other APSIM modules through the APSIM engine. The crop template serves as a convenient means to test new insights and compare approaches to component modelling, while maintaining a focus on predictive capability. This paper describes and discusses the scientific basis, the design, implementation and future development of the crop template in APSIM. On this basis, we argue that the combination of good software engineering with sound crop science can enhance the rate of advance in crop modelling. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
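A toy sketch of the "generic structure plus crop-specific parameters" idea follows, using thermal-time accumulation as an example of a routine shared across crops. The class, method and parameter names are hypothetical illustrations, not APSIM's actual crop interface.

```python
# Toy illustration of a generic crop structure driven by crop-specific
# parameters; names are hypothetical, not APSIM's actual crop interface.

def thermal_time(t_max, t_min, t_base):
    """Daily thermal time (degree-days), a routine shared across crops."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

class GenericCrop:
    def __init__(self, params):
        self.params = params            # would come from a crop parameter file
        self.tt_accumulated = 0.0
        self.stage = "sowing"

    def step(self, t_max, t_min):
        """Advance one day; phenology is driven by accumulated thermal time."""
        self.tt_accumulated += thermal_time(t_max, t_min, self.params["t_base"])
        if self.tt_accumulated >= self.params["tt_to_flowering"]:
            self.stage = "flowering"

# The same code simulates different crops via different parameter sets.
wheat = GenericCrop({"t_base": 0.0, "tt_to_flowering": 900.0})
sorghum = GenericCrop({"t_base": 11.0, "tt_to_flowering": 700.0})
```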
Abstract:
A technique based on laser light diffraction is shown to be successful in collecting on-line experimental data. Time series of floc size distributions (FSD) under different shear rates (G) and calcium additions were collected. The steady state mass mean diameter decreased with increasing shear rate G and increased when calcium additions exceeded 8 mg/l. A so-called population balance model (PBM) was used to describe the experimental data. This kind of model describes both aggregation and breakage through birth and death terms. A discretised PBM was used since analytical solutions of the integro-partial differential equations do not exist. Despite the complexity of the model, only 2 parameters need to be estimated: the aggregation rate and the breakage rate. The model seems, however, to lack flexibility. Also, the description of the floc size distribution (FSD) in time is not accurate.
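A minimal discrete aggregation-breakage population balance with exactly two rate parameters can be written as below; the constant aggregation kernel, the size-proportional breakage rate and the uniform binary fragment distribution are simplifying assumptions, not the discretisation used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pbm_rhs(t, N, beta, S):
    """Discrete population balance: class k holds flocs of k+1 primary units.
    beta = aggregation rate (constant kernel), S = breakage rate per unit size,
    binary breakage with a uniform fragment distribution (mass-conserving)."""
    K = len(N)
    dN = np.zeros(K)
    for k in range(K):
        size = k + 1
        # Aggregation: birth from pairs whose sizes sum to 'size', death by sticking.
        birth_agg = 0.5 * sum(beta * N[i] * N[size - 2 - i] for i in range(size - 1))
        death_agg = beta * N[k] * N.sum()
        # Breakage: any larger class m breaks into two uniformly sized fragments.
        birth_brk = sum(S * (m + 1) * N[m] * 2.0 / m for m in range(k + 1, K))
        death_brk = S * size * N[k] if size > 1 else 0.0
        dN[k] = birth_agg - death_agg + birth_brk - death_brk
    return dN

N0 = np.zeros(50)
N0[0] = 1.0                                        # start from primary particles
sol = solve_ivp(pbm_rhs, (0.0, 10.0), N0, args=(0.05, 0.01), method="LSODA")
print((np.arange(1, 51) * sol.y[:, -1]).sum())     # total mass stays ~1
```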
Abstract:
In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault on the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a possibility to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales which are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994) in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted against holding time. Simulations with a flat fault without fault gouge have been performed to verify the implementation. These have shown close agreement with comparable laboratory experiments. The simulations performed with a fault containing fault gouge have demonstrated a strong dependence of the critical slip distance D_c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
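For context, the Dieterich (aging) form of rate- and state-dependent friction predicts the logarithmic healing seen in slide-hold-slide tests; the sketch below evaluates that trend with illustrative laboratory-scale constants. Evaluating friction at the reference velocity with the hold-grown state is only a simple proxy for the re-slide peak, and the particle-based gouge simulations of the paper are not reproduced.

```python
import math

def rsf_friction(v, theta, mu0=0.6, a=0.010, b=0.015, v_ref=1e-6, d_c=1e-5):
    """Rate- and state-dependent friction coefficient (Dieterich form):
    mu = mu0 + a*ln(v/v_ref) + b*ln(v_ref*theta/d_c)."""
    return mu0 + a * math.log(v / v_ref) + b * math.log(v_ref * theta / d_c)

def theta_after_hold(theta0, t_hold):
    """Aging law d(theta)/dt = 1 - v*theta/d_c with v ~ 0 during the hold."""
    return theta0 + t_hold

d_c, v_ref = 1e-5, 1e-6
theta_ss = d_c / v_ref                   # steady-state theta at the reference velocity
for t_hold in (1.0, 10.0, 100.0, 1000.0):
    # Peak friction on re-slide grows roughly with log(hold time), as in the
    # Dieterich and Kilgore slide-hold-slide experiments.
    mu_peak = rsf_friction(v_ref, theta_after_hold(theta_ss, t_hold))
    print(t_hold, round(mu_peak, 4))
```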
Abstract:
The Agricultural Production Systems Simulator (APSIM) is a modular modelling framework that has been developed by the Agricultural Production Systems Research Unit in Australia. APSIM was developed to simulate biophysical processes in farming systems, in particular where there is interest in the economic and ecological outcomes of management practice in the face of climatic risk. The paper outlines APSIM's structure and provides details of the concepts behind the different plant, soil and management modules. These modules include a diverse range of crops, pastures and trees, soil processes including water balance, N and P transformations, soil pH, erosion and a full range of management controls. Reports of APSIM testing in a diverse range of systems and environments are summarised. An example of model performance in a long-term cropping systems trial is provided. APSIM has been used in a broad range of applications, including support for on-farm decision making, farming systems design for production or resource management objectives, assessment of the value of seasonal climate forecasting, analysis of supply chain issues in agribusiness activities, development of waste management guidelines, risk assessment for government policy making and as a guide to research and education activity. An extensive citation list for these model testing and application studies is provided. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
Abstract:
Low concentrate density from wet drum magnetic separators in dense medium circuits can cause operating difficulties due to an inability to obtain the required circulating medium density and, indirectly, high medium solids losses. The literature is almost silent on the processes controlling concentrate density. However, the common name for the region through which concentrate is discharged, the squeeze pan gap, implies that some extrusion process is thought to be at work. There is no model of magnetics recovery in a wet drum magnetic separator which includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and in single variable experiments, was done using a purpose built rig which featured a small industrial scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in this work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. It is proposed, based both on the experimental observations of the present work and on observations reported in the literature, that the process controlling magnetic separator concentrate density is one of drainage. Such a process should be able to be defined by an initial moisture, a drainage rate and a drainage time, the latter being defined by the volumetric flowrate and the volume within the drainage zone. The magnetics can be characterised by an experimentally derived ultimate drainage moisture. A model based on these concepts and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to be a good fit to data over concentrate solids content values from 40% solids to 80% solids and for both magnetite and ferrosilicon feeds. (C) 2003 Elsevier Science B.V. All rights reserved.
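One simple way to combine the quantities listed above (initial moisture, drainage rate, drainage time from zone volume and flowrate, ultimate drainage moisture) is a first-order relaxation toward the ultimate moisture, sketched below. The exponential form and all numerical values are assumptions for illustration, not the paper's fitted model.

```python
import math

def concentrate_moisture(m_initial, m_ultimate, k_drain, zone_volume, flowrate):
    """Concentrate moisture after draining in the squeeze pan gap region.
    First-order relaxation toward the ultimate drainage moisture is an assumed
    functional form; zone_volume / flowrate gives the available drainage time."""
    t_drain = zone_volume / flowrate
    return m_ultimate + (m_initial - m_ultimate) * math.exp(-k_drain * t_drain)

# Illustrative numbers only (fractional moisture, m3, m3/s).
print(concentrate_moisture(m_initial=0.60, m_ultimate=0.20,
                           k_drain=0.8, zone_volume=0.02, flowrate=0.01))
```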
Abstract:
Loss of magnetic medium solids from dense medium circuits is a substantial contributor to operating cost. Much of this loss is by way of wet drum magnetic separator effluent. A model of the separator would be useful for process design, optimisation and control. A review of the literature established that although various rules of thumb exist, largely based on empirical or anecdotal evidence, there is no model of magnetics recovery in a wet drum magnetic separator which includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and in single variable experiments, was therefore carried out using a purpose built rig which featured a small industrial scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in the work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. Observations carried out as an adjunct to this work, as well as magnetic theory, suggest that the capture of magnetic particles in the wet drum magnetic separator occurs by a flocculation process. Such a process should be defined by a flocculation rate and a flocculation time, the latter being defined by the volumetric flowrate and the volume within the separation zone. A model based on this concept and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to provide a satisfactory fit to the data over three orders of magnitude of magnetics loss. (C) 2003 Elsevier Science B.V. All rights reserved.
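Analogously, a first-order flocculation model for magnetics capture over the residence time in the separation zone might look like the sketch below; the exponential form, rate constant and flow values are illustrative assumptions rather than the paper's fitted model.

```python
import math

def magnetics_recovery(k_floc, zone_volume, flowrate):
    """Fraction of magnetics captured, assuming first-order flocculation over
    the flocculation time (separation-zone volume / volumetric flowrate).
    The exponential form and rate constant are assumptions for illustration."""
    t_floc = zone_volume / flowrate
    return 1.0 - math.exp(-k_floc * t_floc)

# Losses fall steeply as flowrate falls (longer flocculation time); values illustrative.
for q in (0.005, 0.01, 0.02):                      # m3/s
    r = magnetics_recovery(k_floc=2.0, zone_volume=0.01, flowrate=q)
    print(q, f"loss fraction = {1.0 - r:.4f}")
```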
Abstract:
A model of iron carbonate (FeCO3) film growth is proposed, which is an extension of the recent mechanistic model of carbon dioxide (CO2) corrosion by Nesic et al. In the present model, film growth occurs by precipitation of iron carbonate once saturation is exceeded. The kinetics of precipitation depends on temperature and on local species concentrations, which are calculated by solving the coupled species transport equations. Precipitation tends to build up a layer of FeCO3 on the surface of the steel and reduce the corrosion rate. On the other hand, the corrosion process induces voids under the precipitated film, thus increasing the porosity and leading to a higher corrosion rate. Depending on the environmental parameters, such as temperature, pH, CO2 partial pressure and velocity, the balance of the two processes can lead to a variety of outcomes. Very protective films and low corrosion rates are predicted, as expected, at high pH, temperature, CO2 partial pressure and Fe2+ ion concentration, due to the formation of dense protective films. The model has been successfully calibrated against limited experimental data. Parametric testing of the model has been done to gain insight into the effect of various environmental parameters on iron carbonate film formation. The trends shown in the predictions agreed well with the general understanding of the CO2 corrosion process in the presence of iron carbonate films. The present model confirms that the concept of scaling tendency is a good tool for predicting the likelihood of protective iron carbonate film formation.
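The supersaturation and scaling-tendency bookkeeping referred to above can be summarised as in the sketch below; the functions only encode the definitions (supersaturation relative to the FeCO3 solubility product, and scaling tendency as the ratio of precipitation rate to corrosion rate in consistent units), with the kinetic correlations of the model left out.

```python
def supersaturation(c_fe2, c_co3, k_sp):
    """Supersaturation S of FeCO3: S = [Fe2+][CO3 2-] / Ksp, with concentrations
    and Ksp in consistent units. Precipitation is possible only for S > 1."""
    return c_fe2 * c_co3 / k_sp

def scaling_tendency(precipitation_rate, corrosion_rate):
    """Scaling tendency = precipitation rate / corrosion rate, both expressed in
    the same units. Values well above 1 favour dense, protective films; values
    below 1 suggest porous, non-protective films."""
    return precipitation_rate / corrosion_rate
```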
Abstract:
Glycogen-accumulating organisms (GAO) have the potential to directly compete with polyphosphate-accumulating organisms (PAO) in EBPR systems, as both are able to take up VFA anaerobically and grow on the intracellular storage products aerobically. Under anaerobic conditions GAO hydrolyse glycogen to gain energy and reducing equivalents to take up VFA and to synthesise polyhydroxyalkanoate (PHA). In the subsequent aerobic stage, PHA is oxidised to gain energy for glycogen replenishment (from PHA) and for cell growth. This article describes a complete anaerobic and aerobic model for GAO based on the understanding of their metabolic pathways. The anaerobic model has been developed and reported previously, while the aerobic metabolic model was developed in this study. It is based on the assumption that acetyl-CoA and propionyl-CoA go through the catabolic and anabolic processes independently. Experimental validation shows that the integrated model can predict the anaerobic and aerobic results very well. It was found in this study that at pH 7 the maximum acetate uptake rate of GAO was slower than that reported for PAO in the anaerobic stage. On the other hand, the net biomass production per C-mol acetate added is about 9% higher for GAO than for PAO. This would indicate that PAO and GAO each have certain competitive advantages during different parts of the anaerobic/aerobic process cycle. (C) 2002 Wiley Periodicals, Inc.