276 results for Process Modelling
Abstract:
A one-dimensional computational model of pilling of a fibre assembly has been created. The model follows a set of individual fibres, as free ends and loops appear as fuzz and are progressively withdrawn from the body of the assembly, and entangle to form pills, which eventually break off or are pulled out. The time dependence of the computation is given by ticks, which correspond to cycles of a wear and laundering process. The movement of the fibres is treated as a reptation process. A set of standard values is used as inputs to the computation. Predictions are given of the change with number of cycles of the mass of fuzz, mass of pills, and mass removed from the assembly. Changes in the standard values allow sensitivity studies to be carried out.
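For readers who want to experiment with the tick-based bookkeeping this abstract describes, here is a toy compartment-level sketch in Python. It tracks only aggregate fuzz, pill and shed masses with assumed per-tick rates; it is not the authors' fibre-level reptation model.

```python
# A toy, compartment-level sketch of tick-based pilling bookkeeping.
# The per-tick rates below are illustrative assumptions, not the
# paper's "standard values".

FUZZ_RATE = 0.02      # fraction of body mass emerging as fuzz per tick
ENTANGLE_RATE = 0.10  # fraction of fuzz entangling into pills per tick
SHED_RATE = 0.05      # fraction of pill mass breaking off / pulled out per tick

def simulate(body=100.0, ticks=50):
    """Track fuzz, pill and shed mass over wear/laundering cycles (ticks)."""
    fuzz = pills = shed = 0.0
    history = []
    for _ in range(ticks):
        new_fuzz = FUZZ_RATE * body      # fuzz emerges from the assembly body
        body -= new_fuzz
        fuzz += new_fuzz
        entangled = ENTANGLE_RATE * fuzz # fuzz entangles into pills
        fuzz -= entangled
        pills += entangled
        lost = SHED_RATE * pills         # pills break off or are pulled out
        pills -= lost
        shed += lost
        history.append((fuzz, pills, shed))
    return history

for t, (f, p, s) in enumerate(simulate(), 1):
    if t % 10 == 0:
        print(f"tick {t}: fuzz={f:.2f}  pills={p:.2f}  removed={s:.2f}")
```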
Abstract:
The patterns of rock comminution within tumbling mills, as well as the nature of the forces involved, are of significant practical importance. Discrete element modelling (DEM) has been used to analyse the pattern of specific energy applied to rock, in terms of spatial distribution within a pilot AG/SAG mill. We also analysed in some detail the nature of the forces that may result in rock comminution. In order to examine the distribution of energy applied within the mill, the DEM models were compared with measured particle mass losses in small-scale AG and SAG mill experiments. The intensity of contact stresses was estimated using the Hertz theory of elastic contacts. The results indicate that in the case of the AG mill, the highest-intensity stresses and strains are likely to occur deep within the charge, and close to the base. This effect is probably more pronounced for large AG mills. In the SAG mill case, the impacts of the steel balls on the surface of the charge are likely to be the most potent. In both cases, the spatial pattern of medium-to-high energy collisions is affected by the rotational speed of the mill. Based on an assumed damage threshold for rock, in terms of specific energy introduced per single collision, the spatial pattern of productive collisions within each charge was estimated and compared with rates of mass loss. We also investigated the nature of the comminution process within the AG vs. the SAG mill, in order to explain the observed differences in energy utilisation efficiency between the two types of milling. All experiments were performed using a laboratory-scale mill of 1.19 m diameter and 0.31 m length, equipped with 14 square-section lifters of height 40 mm.
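The Hertzian contact-stress estimate mentioned above can be reproduced from textbook formulas. A minimal sketch follows, with purely illustrative load and material values (the paper's actual parameters are not given in the abstract):

```python
import math

def hertz_max_pressure(F, R1, R2, E1, E2, nu1, nu2):
    """Peak contact pressure (Pa) for two elastic spheres under normal
    load F (N), from classical Hertz theory."""
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                    # effective radius (m)
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus (Pa)
    a = (3.0 * F * R_eff / (4.0 * E_eff)) ** (1.0 / 3.0)   # contact radius (m)
    return 3.0 * F / (2.0 * math.pi * a**2)                # p_max = 3F / (2*pi*a^2)

# Illustrative values only: a 50 mm rock particle against a 125 mm steel ball.
p = hertz_max_pressure(F=500.0, R1=0.025, R2=0.0625,
                       E1=50e9, E2=200e9, nu1=0.25, nu2=0.3)
print(f"peak contact pressure ~ {p/1e6:.0f} MPa")
```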
Abstract:
Entrainment in flotation can be considered as a two-step process: the transfer of suspended solids from the top of the pulp region, just below the pulp-froth interface, to the froth phase, and the transfer of the entrained particles in the froth phase to the concentrate. Both steps have a strong classification characteristic. The degree of entrainment describes the classification effect of the drainage process in the froth phase. This paper briefly reviews two existing models of the degree of entrainment. Experimental data were collected from an Outokumpu 3 m³ tank cell in the Xstrata Mt. Isa Mines copper concentrator. The data are fitted to the models and the effect of cell operating conditions, including air rate and froth height, on the degree of entrainment is examined on a size-by-size basis. It is found that there is a strong correlation between entrainment and water recovery, which is close to linear for the fines. The degree of entrainment decreases with increasing particle size. Within the normal range of cell operating conditions, few particles coarser than 50 µm are recovered by entrainment. In general, the degree of entrainment increases with increasing air rate and decreases with increasing froth height. Air rate and froth height strongly interact with each other and affect the entrainment process mainly via changes in the froth retention time, the froth structure and froth properties. As a result, other mechanisms such as entrapment may become important in recovering the coarse entrained particles.
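As a rough illustration of the degree-of-entrainment concept, the sketch below computes the usual classification ratio (entrained solids per unit water in the concentrate, relative to suspended solids per unit water just below the pulp-froth interface) for a few size classes. All numbers are invented; the functional definition follows common flotation practice rather than either specific model reviewed in the paper.

```python
# Size-by-size degree of entrainment as a classification ratio.
# A value of 1 means no classification in the froth; 0 means full rejection.

def degree_of_entrainment(conc_solids, conc_water, pulp_solids, pulp_water):
    """ENT_i = (solids/water in concentrate) / (solids/water below the froth)."""
    return (conc_solids / conc_water) / (pulp_solids / pulp_water)

sizes_um = [10, 20, 38, 53, 75]
# (solids, water) pairs per size class -- invented for illustration
conc = [(4.0, 100.0), (3.0, 100.0), (1.5, 100.0), (0.5, 100.0), (0.1, 100.0)]
pulp = [(5.0, 100.0), (5.0, 100.0), (5.0, 100.0), (5.0, 100.0), (5.0, 100.0)]

for d, (cs, cw), (ps, pw) in zip(sizes_um, conc, pulp):
    print(f"{d:>3} um: ENT = {degree_of_entrainment(cs, cw, ps, pw):.2f}")
```

Run as written, ENT falls from 0.80 at 10 µm to 0.02 at 75 µm, mirroring the size classification described in the abstract.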
Abstract:
While it is well known that reading is highly heritable, less is understood about the bases of these genetic influences. In this paper, we review the research that we have been conducting in recent years to examine genetic and environmental influences on the particular reading processes specified in the dual-route cognitive model of reading. We argue that a detailed understanding of the role of genetic factors in reading acquisition requires the delineation and measurement of precise phenotypes, derived from well-articulated models of the reading process. We report evidence for independent genetic influences on the lexical and nonlexical reading processes represented in the dual-route model, based on studies of children with particular subtypes of dyslexia, and on univariate and multivariate genetic modelling of reading performance in the normally reading population.
Abstract:
Background: This study extended that of Kwon and Oei [Kwon, S.M., Oei, T.P.S., 2003. Cognitive change processes in a group cognitive behavior therapy of depression. J. Behav. Ther. Exp. Psychiatry, 34, 73-85], which outlined a number of testable models based on Beck's cognitive theory of depression. Specifically, the current study tested the following four competing models in patients with major depressive disorder: the causal, consequential, fully interactive and partially interactive cognitive models. Methods: A total of 168 clinically depressed outpatients were recruited into a 12-week group cognitive behaviour therapy program. Data were collected at three time points (baseline, mid-treatment and termination of therapy) using the ATQ, DAS and BDI. The data were analysed with Amos 4.01 (Arbuckle, J.L., 1999. Amos 4.1. Smallwaters, Chicago) structural equation modelling. Results: Dysfunctional attitudes, negative automatic thoughts and symptoms of depression reduced significantly during treatment. The causal and consequential models both provided an adequate fit to the data, while the fully interactive model provided the best fit. However, after removing non-significant pathways, it was found that reduced depressive symptoms contributed to reduced depressogenic automatic thoughts and dysfunctional attitudes, not the reverse. Conclusion: These findings did not fully support Beck's cognitive theory of depression, which holds that cognitions are primary in the reduction of depressed mood.
Abstract:
Many populations have a negative impact on their habitat or upon other species in the environment if their numbers become too large. For this reason they are often subjected to some form of control. One common control regime is the reduction regime: when the population reaches a certain threshold it is controlled (for example, culled) until it falls below a lower predefined level. The natural model for such a controlled population is a birth-death process with two phases, the phase determining which of two distinct sets of birth and death rates governs the process. We present formulae for the probability of extinction and the expected time to extinction, and discuss several applications.
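The abstract does not reproduce the formulae, but the two-phase process itself is easy to simulate. Below is a hedged Monte Carlo sketch of the reduction regime with invented rates and thresholds, estimating the expected time to extinction by simulation rather than by the paper's closed-form results.

```python
import random

# Gillespie simulation of a two-phase birth-death process under a
# "reduction regime": control switches on at the upper threshold and
# off below the lower level. All rates/thresholds are illustrative.

def time_to_extinction(n0=15, upper=30, lower=10,
                       normal=(1.1, 1.0), control=(0.1, 2.0),
                       rng=random):
    """Per-capita (birth, death) rates per phase; returns extinction time."""
    n, t, controlled = n0, 0.0, False
    while n > 0:
        if not controlled and n >= upper:
            controlled = True          # threshold reached: start culling
        elif controlled and n < lower:
            controlled = False         # fallen below lower level: stop
        b, d = control if controlled else normal
        total = (b + d) * n
        t += rng.expovariate(total)    # exponential waiting time
        n += 1 if rng.random() < b * n / total else -1
    return t

times = [time_to_extinction(rng=random.Random(s)) for s in range(100)]
print(f"mean time to extinction ~ {sum(times) / len(times):.1f}")
```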
Abstract:
Anaerobic digestion is a multistep process, mediated by a functionally and phylogenetically diverse microbial population. One of the crucial steps is the oxidation of organic acids, with electron transfer via hydrogen or formate from acetogenic bacteria to methanogens. This syntrophic microbiological process is strongly restricted by a thermodynamic limitation on the allowable hydrogen or formate concentration. In order to study this process in more detail, we developed an individual-based biofilm model which makes it possible to describe the processes at a microbial resolution. The biochemical model is the ADM1, implemented in a multidimensional domain. With this model, we evaluated three important issues for the syntrophic relationship: (i) is there a fundamental difference in using hydrogen or formate as electron carrier? (ii) does a thermodynamics-based inhibition function produce substantially different results from an empirical function? and (iii) does the physical co-location of acetogens and methanogens follow directly from a general model? Hydrogen or formate as electron carrier had no substantial impact on model results. Standard inhibition functions and the thermodynamic inhibition function gave similar results at larger substrate field grid sizes (> 10 µm), but at smaller grid sizes, the thermodynamics-based function reduced the number of cells with long interspecies distances (> 2.5 µm). Therefore, a very fine grid resolution is needed to reflect differences between the thermodynamic function and a more generic inhibition form. The co-location of syntrophic bacteria was well predicted without a need to assume a microbiologically based mechanism (e.g., through chemotaxis) of biofilm formation.
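To make the second question concrete, here is a minimal sketch contrasting an ADM1-style empirical (non-competitive) hydrogen inhibition term with a thermodynamics-based factor of the Jin-Bethke form, using propionate oxidation as the example reaction. The standard Gibbs energy, concentrations and K_I below are illustrative assumptions, not the paper's calibrated values.

```python
import math

R, T = 8.314, 298.15          # J/(mol K), K

def empirical(S_h2, K_I=1e-4):
    """ADM1-style non-competitive inhibition, I = 1 / (1 + S/K_I).
    K_I here is an illustrative value in the same (atm) units as S_h2."""
    return 1.0 / (1.0 + S_h2 / K_I)

def thermodynamic(S_h2, dG0=76.1e3, ac=1e-3, hco3=0.05, prop=1e-3):
    """Jin/Bethke-style factor F = max(0, 1 - exp(dG/RT)) for propionate
    oxidation (propionate -> acetate + HCO3- + 3 H2), H2 as partial
    pressure in atm; concentrations are assumed, pH 7 standard state."""
    dG = dG0 + R * T * math.log(ac * hco3 * S_h2**3 / prop)
    return max(0.0, 1.0 - math.exp(dG / (R * T)))

for p_h2 in (1e-6, 1e-5, 3e-5, 1e-4, 3e-4):
    print(f"pH2={p_h2:.0e} atm: empirical={empirical(p_h2):.2f} "
          f"thermodynamic={thermodynamic(p_h2):.2f}")
```

The output shows the qualitative difference at stake: the empirical term decays gradually, whereas the thermodynamic factor stays near 1 and then cuts off sharply once hydrogen exceeds roughly 1e-4 atm, where the reaction becomes endergonic.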
Abstract:
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking the delays associated with transcription and translation. We then show that this process may explain the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein more faithfully than continuous deterministic models.
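Below is a minimal sketch of a delayed stochastic simulation algorithm of the kind described, applied to a hes1-like negative-feedback loop in which finished mRNA appears only after a fixed transcriptional delay. Parameter values are illustrative, not the fitted hes1 rates.

```python
import random, heapq

# Delayed Gillespie (SSA) sketch: protein represses transcription, and
# each initiated transcript completes only after a delay tau.

rng = random.Random(42)
A_M, A_P = 1.0, 2.0        # max transcription rate; translation rate per mRNA
MU_M, MU_P = 0.03, 0.03    # degradation rates
P0, H = 100.0, 5           # repression threshold and Hill coefficient
TAU = 20.0                 # transcriptional delay

def propensities(M, P):
    return [A_M / (1.0 + (P / P0) ** H),  # 0: start transcription (delayed)
            MU_M * M,                     # 1: mRNA decay
            A_P * M,                      # 2: translation
            MU_P * P]                     # 3: protein decay

t, M, P = 0.0, 0, 0
pending = []                              # heap of completion times for delayed mRNAs
while t < 500.0:
    a = propensities(M, P)
    a0 = sum(a)
    dt = rng.expovariate(a0)
    if pending and pending[0] <= t + dt:
        t = heapq.heappop(pending)        # a delayed transcript finishes first;
        M += 1                            # memorylessness lets us redraw dt
        continue
    t += dt
    r, acc = rng.random() * a0, 0.0
    for i, ai in enumerate(a):
        acc += ai
        if r < acc:
            break
    if i == 0:
        heapq.heappush(pending, t + TAU)  # transcript completes at t + tau
    elif i == 1:
        M -= 1
    elif i == 2:
        P += 1
    else:
        P -= 1
print(f"final state at t={t:.0f}: mRNA={M}, protein={P}")
```

The key design point is the event queue: when a queued completion precedes the next tentative reaction time, the simulation jumps to it and redraws, which is valid because exponential waiting times are memoryless.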
Abstract:
In biologically mega-diverse countries that are undergoing rapid human landscape transformation, it is important to understand and model the patterns of land cover change. This problem is particularly acute in Colombia, where lowland forests are being rapidly cleared for cropping and ranching. We apply a conceptual model with a nested set of a priori predictions to analyse the spatial and temporal patterns of land cover change for six 50-100 km² case study areas in lowland ecosystems of Colombia. Our analysis included soil fertility, a cost-distance function, and neighbourhood of forest and secondary vegetation cover as independent variables. Deforestation and forest regrowth are tested using logistic regression analysis and an information criterion approach to rank the models and predictor variables. The results show that: (a) overall, the process of deforestation is better predicted by the full model containing all variables, while for regrowth the model containing only the auto-correlated neighbourhood terms is a better predictor; (b) overall, consistent patterns emerge, although there are variations across regions and time; and (c) during the transformation process, both the order of importance and the significance of the drivers change. Forest cover follows a consistent logistic decline pattern across regions, with introduced pastures being the major replacement land cover type. Forest stabilizes at 2-10% of the original cover, with an average patch size of 15.4 (± 9.2) ha. We discuss the implications of the observed patterns and rates of land cover change for conservation planning in countries with high rates of deforestation.
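The model-ranking step can be illustrated with a small sketch: fit logistic regressions with nested predictor sets and compare them by AIC. The data below are synthetic stand-ins for the study's soil fertility, cost-distance and neighbourhood variables.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic deforestation data: 1 = cell cleared, 0 = still forest.
rng = np.random.default_rng(0)
n = 1000
soil = rng.normal(size=n)          # soil fertility (standardised)
cost = rng.normal(size=n)          # cost-distance to markets/roads
neigh = rng.normal(size=n)         # neighbourhood forest cover
logit_p = -0.5 + 0.4 * soil - 0.8 * cost + 1.2 * neigh
cleared = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

# Nested candidate models, ranked by AIC (lower is better).
models = {
    "neighbourhood only":             np.column_stack([neigh]),
    "accessibility + neighbourhood":  np.column_stack([cost, neigh]),
    "full model":                     np.column_stack([soil, cost, neigh]),
}
for name, X in models.items():
    fit = sm.Logit(cleared, sm.add_constant(X)).fit(disp=0)
    print(f"{name:32s} AIC = {fit.aic:.1f}")
```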
Abstract:
This paper outlines the methodology of blast fragmentation modeling undertaken for a greenfield feasibility study at the Riska gold deposit in Indonesia. The favoured milling process for the feasibility study was dump leaching, with no crushing of the ore material extracted from the pit. For this reason, blast fragmentation was a critical issue to be addressed by the study. A range of blast designs was considered, with bench heights ranging from 4 m to 7 m and blasthole diameters from 76 mm to 102 mm. Rock mass data were obtained from 19 diamond drill cores across the deposit (total drill length approximately 2200 m). Intact rock strength was estimated from qualitative strength descriptors, while the in situ block size distribution of the rock mass was estimated from the Rock Quality Designation (RQD) of the core.
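One common way to turn core RQD into an in situ block size estimate is to invert the Priest and Hudson (1976) relation between RQD and discontinuity frequency; the sketch below does this numerically. Whether the study used this particular relation is not stated in the abstract.

```python
import math

# Priest & Hudson (1976): RQD = 100 * exp(-0.1*lam) * (0.1*lam + 1),
# where lam is the mean discontinuity frequency per metre (0.1 m threshold).
# RQD is monotonically decreasing in lam, so bisection suffices.

def frequency_from_rqd(rqd, lo=1e-3, hi=100.0, tol=1e-6):
    """Solve RQD(lam) = rqd for lam (fractures per metre)."""
    f = lambda lam: 100.0 * math.exp(-0.1 * lam) * (0.1 * lam + 1.0) - rqd
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # predicted RQD still above target: more fractures needed
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for rqd in (90, 75, 50, 25):
    lam = frequency_from_rqd(rqd)
    print(f"RQD {rqd:3d}% -> ~{lam:.1f} fractures/m, mean spacing ~{1/lam:.2f} m")
```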
Abstract:
A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work-list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, together with its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, focussing mainly on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements of the workflow data model needed to support data validation is also given.
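One data flow check of the kind discussed can be sketched in a few lines: given activities in execution order, each with read and write sets, flag reads with no earlier writer (missing data) and writes that are never read downstream (potentially redundant data). The workflow and variable names below are invented for illustration.

```python
# Static data-flow validation of a sequential workflow specification.
workflow = [                                      # (activity, reads, writes)
    ("receive_order",  set(),                     {"order"}),
    ("check_credit",   {"order", "customer"},     {"approved"}),
    ("ship",           {"order", "approved"},     {"tracking_no"}),
    ("log_audit",      {"order", "tracking_no"},  {"audit_entry"}),
]

available, read_anywhere = set(), set()
for name, reads, writes in workflow:
    for var in reads - available:                 # read before any write
        print(f"missing data: '{name}' reads '{var}' before any activity writes it")
    available |= writes
    read_anywhere |= reads
for name, _, writes in workflow:
    for var in writes - read_anywhere:            # written but never consumed
        print(f"redundant data: '{var}' written by '{name}' is never read")
```

Running this flags 'customer' as missing data and 'audit_entry' as redundant, the two anomaly classes that would surface before deployment rather than at execution time.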
Abstract:
A plethora of process modeling techniques has been proposed over the years. One way of evaluating and comparing the scope and completeness of techniques is by way of representational analysis. The purpose of this paper is to examine how process modeling techniques have developed over the last four decades. The basis of the comparison is the Bunge-Wand-Weber representation model, a benchmark used for the analysis of grammars that purport to model the real world and the interactions within it. This paper presents a comparison of representational analyses of several popular process modeling techniques and has two main outcomes. First, it provides insights, within the boundaries of a representational analysis, into the extent to which process modeling techniques have developed over time. Second, the findings also indicate areas in which the underlying theory seems to be over-engineered or lacking in specialization.
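The scoring behind a representational analysis can be sketched compactly: map each grammar construct to zero or more BWW constructs and derive Wand and Weber's four deficiencies (construct deficit, redundancy, excess and overload). The toy mapping below is invented and uses only a fragment of the BWW model.

```python
# A toy representational analysis against a fragment of the BWW model.
BWW = {"thing", "property", "state", "event", "transformation", "system"}
mapping = {                        # grammar construct -> BWW constructs
    "task":       {"transformation"},
    "event":      {"event"},
    "gateway":    {"event", "state"},  # one construct, two BWW meanings
    "annotation": set(),               # no real-world meaning
}

covered = set().union(*mapping.values())
deficit   = BWW - covered                                   # BWW construct with no grammar construct
redundant = {b for b in covered
             if sum(b in v for v in mapping.values()) > 1}  # several grammar constructs map to it
excess    = {g for g, v in mapping.items() if not v}        # grammar construct with no BWW mapping
overload  = {g for g, v in mapping.items() if len(v) > 1}   # one grammar construct, many BWW constructs

print("construct deficit:   ", sorted(deficit))
print("construct redundancy:", sorted(redundant))
print("construct excess:    ", sorted(excess))
print("construct overload:  ", sorted(overload))
```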