883 results for Simulation and modelling
Abstract:
Process optimisation and optimal control of batch and continuous drum granulation processes are studied in this paper. The main focus of the research has been: (i) construction of optimisation- and control-relevant population balance models through the incorporation of moisture content, drum rotation rate and bed depth into the coalescence kernels; (ii) investigation of optimal operational conditions using constrained optimisation techniques; (iii) development of optimal control algorithms based on discretised population balance equations; and (iv) comprehensive simulation studies on optimal control of both batch and continuous granulation processes. The objective of steady-state optimisation for continuous processes is to minimise the recycle rate at minimum cost. The drum rotation rate, bed depth (material charge) and moisture content of the solids have been identified as practical decision (design) parameters for system optimisation. The objective for the optimal control of batch granulation processes is to maximise the mass of product-sized particles with minimum time and binder consumption. The objective for the optimal control of the continuous process is to drive the process from one steady state to another in minimum time with minimum binder consumption, also known as the state-driving problem. It has long been known that the binder spray rate is the most effective control (manipulated) variable. Although other possible manipulated variables, such as feed flow rate and additional powder flow rate, have been investigated in the complete research project, only the single-input problem with the binder spray rate as the manipulated variable is addressed in this paper, to demonstrate the methodology. Simulation results show that the proposed models are suitable for control and optimisation studies, and that the optimisation algorithms coupled with either steady-state or dynamic models successfully determine optimal operational conditions and dynamic trajectories with good convergence properties.
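For orientation, the coalescence-driven population balance underlying models of this type can be written in the following generic textbook form (not necessarily the authors' exact formulation), where n(v,t) is the number density of granules of volume v and the coalescence kernel carries the dependence on moisture content w, drum rotation rate ω and bed depth h described above:

```latex
\frac{\partial n(v,t)}{\partial t}
  = \frac{1}{2}\int_{0}^{v}\beta(v-u,\,u;\,w,\omega,h)\,n(v-u,t)\,n(u,t)\,\mathrm{d}u
  \;-\; n(v,t)\int_{0}^{\infty}\beta(v,\,u;\,w,\omega,h)\,n(u,t)\,\mathrm{d}u
```

Discretising v into size classes converts this integro-differential equation into the set of ordinary differential equations on which the optimal control algorithms operate.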
Abstract:
We model nongraphitized carbon black surfaces and investigate the adsorption of argon on these surfaces using grand canonical Monte Carlo simulation. In this model, the nongraphitized surface is represented as a stack of graphene layers with some carbon atoms of the top layer removed at random. The percentage of surface carbon atoms removed and the effective size of the defects created by the removal are the key parameters characterising the nongraphitized surface. The patterns of the adsorption isotherm and isosteric heat are studied as functions of these surface parameters as well as of pressure and temperature. The adsorption isotherm exhibits steplike behaviour on a perfect graphite surface and becomes smoother on nongraphitized surfaces. For the isosteric heat versus loading, we observe for graphitized thermal carbon black an increase in heat at submonolayer coverage, then a sharp decline as the second layer begins to form, beyond which the heat increases slightly. By contrast, the isosteric heat versus loading for a highly nongraphitized surface shows a general decline with loading, owing to the energetic heterogeneity of the surface. Only when the fluid-fluid interaction is stronger than the surface energetic factor do we see a minimum and maximum in the isosteric heat versus loading. These simulated isosteric heats agree well with the experimental results on the graphitization of Spheron 6 (Polley, M. H.; Schaeffer, W. D.; Smith, W. R. J. Phys. Chem. 1953, 57, 469; Beebe, R. A.; Young, D. M. J. Phys. Chem. 1954, 58, 93). Adsorption isotherms and isosteric heats in pores whose walls have defects are also studied by simulation, and the patterns of the isotherm and isosteric heat could be used as a fingerprint to identify the surface.
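For readers unfamiliar with the method, the insertion/deletion acceptance rules that drive a grand canonical Monte Carlo simulation of this kind are summarised in the minimal sketch below (bulk Lennard-Jones fluid in reduced units; the surface/defect model is omitted, all parameter values are illustrative, and this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduced units: k_B = 1, LJ epsilon = sigma = 1, thermal wavelength = 1.
L, T, mu = 8.0, 0.9, -3.0              # box length, temperature, chemical potential
beta = 1.0 / T
pos = rng.uniform(0, L, size=(20, 3))  # initial configuration

def energy_with(others, p):
    """LJ interaction energy of particle p with all given particles (minimum image)."""
    if len(others) == 0:
        return 0.0
    d = others - p
    d -= L * np.round(d / L)           # periodic boundary conditions
    r2 = (d * d).sum(axis=1)
    inv6 = 1.0 / r2 ** 3
    return float(np.sum(4.0 * (inv6 * inv6 - inv6)))

for step in range(100_000):
    N, V = len(pos), L ** 3
    if rng.random() < 0.5:                     # attempt an insertion
        p = rng.uniform(0, L, size=3)
        dU = energy_with(pos, p)               # dU = U(N+1) - U(N)
        if rng.random() < min(1.0, V / (N + 1) * np.exp(beta * (mu - dU))):
            pos = np.vstack([pos, p])
    elif N > 0:                                # attempt a deletion
        i = rng.integers(N)
        rest = np.delete(pos, i, axis=0)
        dU = -energy_with(rest, pos[i])        # dU = U(N-1) - U(N)
        if rng.random() < min(1.0, N / V * np.exp(-beta * (mu + dU))):
            pos = rest

print("mean density ~", len(pos) / L ** 3)
```

In a study like the one described, the same acceptance rules would be applied with the solid-fluid potential of the defective graphene stack added to `energy_with`, and the isosteric heat extracted from fluctuation averages of energy and particle number.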
Abstract:
Many populations have a negative impact on their habitat, or upon other species in the environment, if their numbers become too large. For this reason they are often managed using some form of control. The objective is to keep numbers at a sustainable level while ensuring survival of the population. Here we present models that allow population management programs to be assessed. Two common control regimes are considered: reduction and suppression. Under the suppression regime the population is maintained close to a particular threshold through near-continuous control, while under the reduction regime control begins once the population reaches a certain threshold and continues until it falls below a lower pre-defined level. We discuss how best to choose the control parameters, and we provide tools that allow population managers to select reduction levels and control rates. Additional tools are provided to assess the effect of different control regimes in terms of population persistence and cost. In particular, we consider the effects of each regime on the probability of extinction and the expected time to extinction, and compare the control methods in terms of the expected total cost of each regime over the life of the population. The usefulness of our results is illustrated with reference to the control of a koala population inhabiting Kangaroo Island, Australia.
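The two regimes can be made concrete with a small stochastic simulation. The sketch below uses hypothetical demographic, threshold and cost parameters (it is not the authors' model) to estimate extinction probability and expected control cost under each regime:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(regime, upper=400, lower=250, years=200,
             birth=0.5, death=0.4, n0=300, cull_frac=0.3, cull_cost=1.0):
    """Return (went_extinct, total_cost) for one stochastic trajectory."""
    n, cost, controlling = n0, 0.0, False
    for _ in range(years):
        b = birth * rng.lognormal(0.0, 0.4)      # environmental variation in births
        n += rng.poisson(b * n) - rng.binomial(n, death)
        if regime == "suppression":
            controlling = n > upper              # near-continuous control at threshold
        else:                                    # "reduction": hysteresis between bounds
            if n > upper:
                controlling = True
            elif n < lower:
                controlling = False
        if controlling and n > 0:
            cull = rng.binomial(n, cull_frac)    # remove ~30% of animals this year
            n -= cull
            cost += cull * cull_cost
        if n <= 0:
            return True, cost
    return False, cost

for regime in ("suppression", "reduction"):
    runs = [simulate(regime) for _ in range(2000)]
    p_ext = np.mean([r[0] for r in runs])
    mean_cost = np.mean([r[1] for r in runs])
    print(f"{regime:11s}  P(extinction)={p_ext:.3f}  mean cost={mean_cost:.0f}")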
Abstract:
Methods of analysing and optimising flotation circuits have improved significantly over the last 15 years. Mineral flotation is now generally better understood through major advances in measuring and modelling the sub-processes within the flotation system. JKSimFloat V6 is a user-friendly Windows-based software package incorporating simulation, mass balancing and, currently under development, liberation data viewing and model fitting. This paper presents an overview of the development of the program up to its current status, and the plans established for the future. The application of the simulator at various operations is also discussed, with emphasis on the use of the program in improving flotation circuit performance.
Abstract:
The identification of disease clusters in space or space-time is of vital importance for public health policy and action. In the case of methicillin-resistant Staphylococcus aureus (MRSA), it is particularly important to distinguish between community-associated and healthcare-associated infections, and to identify reservoirs of infection. 832 cases of MRSA in the West Midlands (UK) were tested for clustering and evidence of community transmission after being geo-located to the centroids of UK unit postcodes (postal areas roughly equivalent to Zip+4 zip code areas). An age-stratified analysis was also carried out at the coarser spatial resolution of UK Census Output Areas. Stochastic simulation and kernel density estimation were combined to identify significant local clusters of MRSA (p < 0.025), which were supported by SaTScan spatial and spatio-temporal scan statistics. To investigate local sampling effort, a spatial 'random labelling' approach was used, with MRSA as cases and MSSA (methicillin-sensitive S. aureus) as controls. Heavy sampling was generally a response to MRSA outbreaks, which in turn appeared to be associated with medical care environments. The significance of clusters identified by kernel estimation was independently supported by information on the locations and client groups of nursing homes, and by preliminary molecular typing of isolates. In the absence of occupational/lifestyle data on patients, the assumption was made that an individual's location, and consequent risk, is adequately represented by their residential postcode. The problems of this assumption are discussed, with recommendations for future data collection.
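The 'random labelling' test described here has a simple Monte Carlo structure: case (MRSA) and control (MSSA) locations are pooled, case labels are repeatedly reassigned at random, and the observed case kernel density is compared pointwise against the permutation distribution. A minimal sketch, with synthetic coordinates and scipy's gaussian_kde standing in for the study's kernel estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic stand-ins for geo-located cases (MRSA) and controls (MSSA).
cases = rng.normal([2, 2], 0.5, size=(80, 2))
controls = rng.uniform(0, 5, size=(300, 2))
pooled = np.vstack([cases, controls])
n_cases = len(cases)

# Evaluation grid over the study region.
gx, gy = np.mgrid[0:5:50j, 0:5:50j]
grid = np.vstack([gx.ravel(), gy.ravel()])

observed = gaussian_kde(cases.T)(grid)

# Random labelling: reassign the 'case' labels among all isolates.
n_sims = 999
exceed = np.zeros(grid.shape[1])
for _ in range(n_sims):
    idx = rng.choice(len(pooled), n_cases, replace=False)
    exceed += gaussian_kde(pooled[idx].T)(grid) >= observed

p_value = (exceed + 1) / (n_sims + 1)          # pointwise Monte Carlo p-values
print("grid cells with p < 0.025:", int((p_value < 0.025).sum()))
```

Cells where the observed case density exceeds nearly all relabelled densities flag clusters of MRSA beyond what local sampling effort alone would explain.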
Abstract:
This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment to an accurate representation of the dynamic characteristics of the environmental stimuli on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation, and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) the models can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in models are inadequate and unrepresentative; and (e) models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages. It is hypothesised that if models are to improve environmental design, it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design. Field studies are presented to describe a method by which typologies can be analysed, and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings for environmental design.
Abstract:
Atomistic molecular dynamics provides powerful and flexible tools for the prediction and analysis of molecular and macromolecular systems. Specifically, it provides a means by which we can measure theoretically what cannot be measured experimentally: the dynamic time-evolution of complex systems comprising atoms and molecules. It is particularly suitable for the simulation and analysis of the otherwise inaccessible details of MHC-peptide interaction and, on a larger scale, the simulation of the immune synapse. Progress has been relatively tentative, yet the emergence of truly high-performance computing and the development of coarse-grained simulation now offer the hope of accurately predicting thermodynamic parameters and of running not merely simulations of a handful of proteins but larger, longer simulations comprising thousands of protein molecules and the cellular-scale structures they form. We exemplify this within the context of immunoinformatics.
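The "dynamic time-evolution" referred to here is, at its core, numerical integration of Newton's equations of motion. A toy sketch of the velocity Verlet scheme used by essentially all MD codes, for two Lennard-Jones particles in reduced units (illustrative only, not any production MD engine):

```python
import numpy as np

# Velocity Verlet integration -- the core time-stepping loop of atomistic MD.
def lj_force(r_vec):
    """Force on particle 1 due to particle 0, r_vec = x1 - x0 (reduced LJ units)."""
    r2 = float(r_vec @ r_vec)
    inv6 = 1.0 / r2 ** 3
    return 24.0 * (2.0 * inv6 * inv6 - inv6) / r2 * r_vec

dt = 0.002                                        # reduced time units
x = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])  # start beyond the potential minimum
v = np.zeros((2, 3))
fpair = lj_force(x[1] - x[0])

for step in range(5001):
    v += 0.5 * dt * np.array([-fpair, fpair])     # half kick (unit masses)
    x += dt * v                                   # drift
    fpair = lj_force(x[1] - x[0])                 # recompute forces
    v += 0.5 * dt * np.array([-fpair, fpair])     # second half kick
    if step % 1000 == 0:
        print(f"t = {step * dt:5.2f}  separation = {np.linalg.norm(x[1] - x[0]):.4f}")
```

An MHC-peptide simulation applies exactly this loop, with a full biomolecular force field, to tens or hundreds of thousands of atoms.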
Abstract:
T cell epitopes lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are presented to T cell receptors by major histocompatibility complex (MHC) molecules. Measuring the affinity between a given MHC molecule and an antigenic peptide experimentally is both difficult and time-consuming, so various computational methods have been developed for this purpose. A server has been developed that allows a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files for running molecular modelling simulations on them. The system automates the construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD, and has been made available through a web-based front end and as stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
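The kind of configuration file the server generates can be illustrated with a short script that writes a minimal NAMD input for a hypothetical MHC:peptide complex. The keywords below are standard NAMD options, but the file names, parameter set and values are placeholders chosen for illustration, not MHCsim's actual output:

```python
# Write a minimal NAMD configuration for a (hypothetical) MHC:peptide complex.
NAMD_CONF = """\
structure        mhc_peptide.psf
coordinates      mhc_peptide.pdb
paraTypeCharmm   on
parameters       par_all27_prot.prm

temperature      310            ;# physiological temperature (K)
exclude          scaled1-4
1-4scaling       1.0
cutoff           12.0
switching        on
switchdist       10.0
pairlistdist     13.5

timestep         2.0            ;# fs, with bonds to hydrogen held rigid
rigidBonds       all

langevin         on             ;# Langevin thermostat
langevinTemp     310
langevinDamping  1

outputName       mhc_peptide_md
dcdfreq          1000

minimize         1000           ;# relax the complex, then run dynamics
run              500000         ;# 1 ns at 2 fs/step
"""

with open("mhc_peptide.conf", "w") as fh:
    fh.write(NAMD_CONF)
```

Automating the pairing of such a configuration with a generated structure file is what removes the main practical barrier to running these simulations at scale.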
Abstract:
Computer-based simulation is frequently used to evaluate the capabilities of proposed manufacturing system designs. Unfortunately, the real systems are often found to perform quite differently from simulation predictions, and one possible reason is an over-simplistic representation of workers' behaviour within current simulation techniques. The accuracy of design predictions could be improved through a modelling tool that integrates with computer-based simulation and incorporates the factors and relationships that determine workers' performance. This paper explores the viability of developing such a tool based on our previously published theoretical modelling framework. It focuses on evolving this purely theoretical framework into a practical modelling tool that can actually be used to expand the capabilities of current simulation techniques. Based on an industrial study, the paper investigates how the theoretical framework works in practice, analyses strengths and weaknesses in its formulation, and proposes developments that can contribute towards enabling human performance modelling in a practical way.
Abstract:
The calcitonin gene-related peptide (CGRP) receptor is a complex of the calcitonin receptor-like receptor (CLR), a family B G-protein-coupled receptor (GPCR), and receptor activity-modifying protein 1. The role of the second extracellular loop (ECL2) of CLR in binding CGRP and coupling to Gs was investigated using a combination of mutagenesis and modelling. An alanine scan of residues 271-294 of CLR showed that the ability of CGRP to produce cAMP was impaired by point mutations at 13 residues; most of these also impaired the response to adrenomedullin (AM). These data were used to select the probable modelled ECL2 conformations involved in agonist binding, allowing identification of the likely contacts between the peptide and the receptor. The implications of the most likely structures for receptor activation are discussed.
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem, and much of the advice that does exist relies on custom and practice rather than on rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation in use were identified: System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research examined these approaches in two stages. First, a first-principles analysis was carried out to challenge the received wisdom about their strengths and weaknesses, and a series of propositions was developed from this initial analysis. Second, the case study approach was used to test these propositions and to provide further empirical evidence for their comparison. The contributions of this research are in terms of both knowledge and practice. In terms of knowledge, this research is the first holistic cross-paradigm comparison of the three main approaches in the supply chain domain. Case studies involved building 'back-to-back' models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This led to contributions concerning the limitations of applying SD to operational problem types; SD was also found to carry risks when applied to strategic and policy problems. Discrete methods were found to have potential for exploring strategic problem types, and it was found that discrete simulation methods can model material and information feedback successfully. Further insights were gained into the relationship between modelling purpose and modelling approach. In terms of practice, the findings have been summarised in a framework linking modelling purpose, problem characteristics and simulation approach.
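The paradigm difference at the heart of this comparison can be made concrete with a toy example: the SD view of a single-echelon supply chain is a continuous stock-flow model with ordering feedback, integrated numerically, whereas DES or ABM would schedule individual order and delivery events. A minimal sketch of the SD side, with hypothetical parameters and simple Euler integration (not one of the thesis case-study models):

```python
# System Dynamics view of one inventory echelon: continuous stocks and flows
# with an ordering feedback loop (hypothetical parameters, Euler integration).
dt, horizon = 0.25, 60.0                        # days
target_inv, adjust_time, lead_time = 100.0, 4.0, 5.0

inventory, supply_line = 100.0, 50.0            # stocks, started in equilibrium
demand = lambda t: 10.0 if t < 20 else 14.0     # demand steps up at day 20

for step in range(int(horizon / dt)):
    t = step * dt
    arrivals = supply_line / lead_time          # outflow of the pipeline stock
    # Feedback: order to meet demand, close the inventory gap,
    # and keep the supply line consistent with the delivery delay.
    orders = max(0.0, demand(t)
                 + (target_inv - inventory) / adjust_time
                 + (lead_time * demand(t) - supply_line) / adjust_time)
    inventory += dt * (arrivals - demand(t))
    supply_line += dt * (orders - arrivals)
    if step % 40 == 0:
        print(f"day {t:4.0f}: inventory {inventory:6.1f}, on order {supply_line:6.1f}")
```

A discrete treatment of the same echelon would instead track each order as an entity on an event calendar, which is precisely the granularity difference the propositions examine.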
Abstract:
Our aim was to approach an important and readily investigable phenomenon, connected to a relatively simple but real field situation, in such a way that the results of field observations could be directly compared with the predictions of a simulation model-system using a simple mathematical apparatus, and simultaneously to obtain a hypothesis-system creating the theoretical opportunity for a later series of experimental studies. As the phenomenon of study, we chose the seasonal coenological changes of an aquatic and semiaquatic Heteroptera community. Based on the observed data, we developed an ecological model-system that is suitable for generating realistic patterns closely resembling the observed temporal patterns, with the help of which predictions can be made for alternative climatic situations not experienced before (e.g. climate change), and which can also simulate experimental circumstances. The stable coenological state-plane, constructed on the principle of indirect ordination, is suitable for the unified handling of monitoring and simulation data series, and also for their comparison. On the state-plane, deviations between empirical and model-generated data that could otherwise remain hidden can be observed and analysed.