956 results for Computer models
Abstract:
The procedure for successful scale-up of batchwise emulsion polymerisation has been studied. The relevant literature on liquid-liquid dispersion, on scale-up and on emulsion polymerisation has been critically reviewed. Batchwise emulsion polymerisation of styrene in a specially built 3 litre, unbaffled reactor confirmed that impeller speed had a direct effect on the latex particle size and on the reaction rate. This was noted to be more significant at low soap concentrations, and the phenomenon was related to the depletion of micelle-forming soap by soap adsorption onto the monomer emulsion surface. The scale-up procedure necessary to maintain constant monomer emulsion surface area in an unbaffled batch reactor was therefore investigated. Three geometrically similar vessels of 152, 229 and 305 mm internal diameter, and a range of impeller speeds (190 to 960 r.p.m.), were employed. The droplet sizes were measured either through photomicroscopy or via a Coulter Counter. The power input to the impeller was also measured. A scale-up procedure was proposed based on the governing relationship between droplet diameter, impeller speed and impeller diameter. The relationships between impeller speed, soap concentration, latex particle size and reaction rate were investigated in a series of polymerisations employing an amended commercial recipe for polystyrene. The particle size was determined via a light transmission technique. Two computer models, based on the Smith and Ewart approach but taking into account the adsorption/desorption of soap at the monomer surface, were successful in predicting the particle size and the progress of the reaction up to the end of Stage II, i.e. to the end of the period of constant reaction rate.
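As an illustration of the kind of scale-up rule described above, the sketch below estimates the impeller speed needed on a larger vessel to keep the mean droplet diameter (and hence the emulsion surface area) constant. It assumes the common Weber-number correlation d32/D ∝ We^-0.6 rather than the thesis's fitted relationship; the numerical values are taken from the ranges quoted in the abstract.

```python
# Minimal sketch (not the thesis's fitted correlation): estimating the impeller
# speed needed on a larger, geometrically similar vessel so that the Sauter mean
# droplet diameter stays constant, assuming d32/D ~ C * We**-0.6 with
# We = rho * N**2 * D**3 / sigma. Constant d32 then implies
# N2 = N1 * (D1 / D2)**(2/3).

def scaled_impeller_speed(n1_rps, d1_m, d2_m):
    """Impeller speed on scale 2 that keeps d32 constant under the assumed correlation."""
    return n1_rps * (d1_m / d2_m) ** (2.0 / 3.0)

if __name__ == "__main__":
    n_small = 960 / 60.0               # 960 r.p.m. (upper end of the quoted range), in rev/s
    d_small, d_large = 0.152, 0.305    # vessel diameters quoted in the abstract (m)
    n_large = scaled_impeller_speed(n_small, d_small, d_large)
    print(f"Required speed on the 305 mm vessel: {n_large * 60:.0f} r.p.m.")
```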
Abstract:
Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies the subsequent emulation analysis. For random-output, or stochastic, simulators, the output dispersion, and thus the variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations, and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling, replicate-free designs in terms of both model parameter uncertainty and predictive variance.
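To make the heteroscedastic-emulation idea concrete, the sketch below shows a plain Gaussian process whose noise variance is an input-dependent function entering only the diagonal of the covariance. It is a minimal numpy illustration under assumed kernel and noise functions, not the two representations or the Fisher-information design developed in the thesis.

```python
# Minimal numpy sketch of a heteroscedastic GP emulator: an RBF-kernel GP whose
# noise variance is a fixed, assumed function of the input. Illustration only.

import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.3, signal_var=1.0):
    d = x1[:, None] - x2[None, :]
    return signal_var * np.exp(-0.5 * (d / lengthscale) ** 2)

def noise_var(x):
    # Assumed input-dependent (heteroscedastic) noise: larger near x = 1.
    return 0.01 + 0.2 * x ** 2

def gp_predict(x_train, y_train, x_test):
    K = rbf_kernel(x_train, x_train) + np.diag(noise_var(x_train))
    K_s = rbf_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    v = np.linalg.solve(K, K_s.T)
    var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(K_s * v.T, axis=1) + noise_var(x_test)
    return mean, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 40)                                   # simulator inputs
    y = np.sin(4 * x) + rng.normal(0, np.sqrt(noise_var(x)))    # noisy "simulator" output
    xs = np.linspace(0, 1, 5)
    mu, var = gp_predict(x, y, xs)
    print(np.round(mu, 3), np.round(var, 3))
```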
Abstract:
During the 24-hour period following inoculation, aggregation of spores and sporelings can have an important effect on the subsequent growth of filamentous fungi in submerged culture. This early phase of growth does not appear to have received much attention, and it was for this reason that the author's research was started. The aggregation, germination and early growth of the filamentous fungus Aspergillus niger have been followed in aerated tower fermenters by microscopic examination. By studying many individual sporelings it has been possible to estimate the specific growth rate and germination times, and then to assess the branching characteristics of the fungus over a period of 1 to 10 hours after germination. The results have been incorporated into computer models to simulate the development of the physical structure of individual and aggregated sporelings. Following germination, and an initial rapid growth phase, fungi were found to grow exponentially: in the case of A. niger the mean germination time was about 5 hours and the doubling time was as short as 1.5 hours. Branching also followed an exponential pattern and appeared to be related to hyphal length. Using a simple hypothesis for growth along with empirical parameters, typical fungal structures were generated using the computer models; these compared well with actual sporelings observed under the microscope. Preliminary work suggested that the techniques used in this research could be successfully applied to a range of filamentous fungi.
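The exponential-growth picture described above can be sketched directly from the quoted figures. In the following illustration the germination time and doubling time come from the abstract, while the initial hyphal length and branches-per-length ratio are illustrative assumptions, not measured values.

```python
# Minimal sketch of exponential sporeling growth after a germination lag:
# total hyphal length doubles every ~1.5 h and branch number is taken as
# roughly proportional to hyphal length.

import math

T_GERM_H = 5.0         # mean germination time quoted for A. niger (h)
DOUBLING_TIME_H = 1.5  # doubling time quoted in the abstract (h)
L0_UM = 10.0           # assumed hyphal length at germination (micrometres)
BRANCH_PER_UM = 0.01   # assumed branches per micrometre of hypha

def hyphal_length(t_h):
    if t_h <= T_GERM_H:
        return 0.0
    mu = math.log(2) / DOUBLING_TIME_H   # specific growth rate (1/h)
    return L0_UM * math.exp(mu * (t_h - T_GERM_H))

for t in range(5, 16):
    L = hyphal_length(t)
    print(f"t = {t:2d} h  length = {L:8.1f} um  branches ~ {L * BRANCH_PER_UM:5.1f}")
```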
Abstract:
The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of the present work has been to improve and extend an existing model. The new model is less site-specific, thus allowing for the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated by the inclusion of new algorithms, including macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in the equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared to that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
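The rainfall disaggregation step mentioned above is the kind of operation that is easy to illustrate. The sketch below splits a daily total into hourly amounts using a normalised within-storm profile; the triangular profile and storm timing are illustrative assumptions, not the intensity/duration curves used in the thesis.

```python
# Minimal sketch of disaggregating a daily rainfall total into hourly amounts
# according to a normalised intensity profile (assumed triangular storm).

import numpy as np

def disaggregate_daily_rainfall(daily_mm, storm_start_h=14, storm_length_h=6):
    hours = np.zeros(24)
    profile = np.bartlett(storm_length_h + 2)[1:-1]   # triangular within-storm shape
    profile /= profile.sum()                          # fractions summing to 1
    hours[storm_start_h:storm_start_h + storm_length_h] = daily_mm * profile
    return hours

if __name__ == "__main__":
    hourly = disaggregate_daily_rainfall(24.0)
    print(np.round(hourly, 2), "total:", hourly.sum())
```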
Abstract:
The role of computer modeling has grown recently to become an inseparable complement to experimental studies in the optimization of automotive engines and the development of future fuels. Traditionally, computer models rely on simplified global reaction steps to simulate combustion and pollutant formation inside the internal combustion engine. With the current interest in advanced combustion modes and injection strategies, this approach depends on arbitrary adjustment of model parameters that can reduce the credibility of the predictions. The purpose of this study is to enhance the combustion model of KIVA, a computational fluid dynamics code, by coupling its fluid mechanics solution with detailed kinetic reactions solved by the chemistry solver CHEMKIN. To this end, an engine-friendly reaction mechanism for n-heptane was selected to simulate diesel oxidation. Each cell in the computational domain is considered as a perfectly-stirred reactor which undergoes adiabatic constant-volume combustion. The model was applied to ideally prepared homogeneous-charge compression-ignition (HCCI) combustion and to direct-injection (DI) diesel combustion. Ignition and combustion results show that the code successfully simulates the premixed HCCI scenario when compared to traditional combustion models. Direct-injection cases, on the other hand, do not offer a reliable prediction, mainly due to the lack of a turbulent-mixing model, a limitation inherent in the perfectly-stirred reactor formulation. In addition, the model is sensitive to intake conditions and experimental uncertainties, which requires the implementation of enhanced predictive tools. It is recommended that future improvements consider turbulent-mixing effects as well as optimization techniques to accurately simulate the actual in-cylinder process with reduced computational cost. Furthermore, the model requires the extension of existing fuel oxidation mechanisms to include pollutant formation kinetics for emission control studies.
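The per-cell treatment described above, each CFD cell advanced as an adiabatic, constant-volume, perfectly-stirred reactor over one fluid time step, can be sketched with Cantera. GRI-Mech 3.0 (a methane mechanism bundled with Cantera) is used below as a stand-in, since the n-heptane mechanism used in the study is not assumed to be available; this is not the KIVA-CHEMKIN coupling itself.

```python
# Minimal Cantera sketch: advance one cell's thermochemical state as an
# adiabatic, constant-volume reactor for one time step dt.

import cantera as ct

def advance_cell(T, P, composition, dt, mech="gri30.yaml"):
    """Advance one cell's state by dt seconds; returns (T, P, mole fractions)."""
    gas = ct.Solution(mech)
    gas.TPX = T, P, composition
    reactor = ct.IdealGasReactor(gas)      # constant volume, adiabatic (no walls)
    net = ct.ReactorNet([reactor])
    net.advance(dt)
    return reactor.T, reactor.thermo.P, reactor.thermo.X

if __name__ == "__main__":
    T1, P1, X1 = advance_cell(1200.0, 40 * ct.one_atm, "CH4:1, O2:2, N2:7.52", 1e-3)
    print(f"after 1 ms: T = {T1:.0f} K, P = {P1 / ct.one_atm:.1f} atm")
```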
Abstract:
Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes. Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Subpolar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects has motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees present from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array.
Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the needed resolution to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models. With an endurance of four years for a Biogeochemical-Argo float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000-float array. The lifetime cost for a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, with the EU contributing ~$6,250,000 and Austral/Asia together with Canada contributing a further ~$6,250,000. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and starting point for such a system.
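The cost arithmetic quoted above can be checked in a few lines, using only the figures given in the text; how the non-US half is split among partners is interpreted as two roughly equal shares, as in the wording above.

```python
# Quick check of the Biogeochemical-Argo cost arithmetic quoted in the text.

FLOAT_LIFETIME_YEARS = 4
ARRAY_SIZE = 1000
COST_PER_FLOAT_USD = 100_000               # lifetime cost incl. capital, calibration, data

floats_per_year = ARRAY_SIZE / FLOAT_LIFETIME_YEARS    # 250 replacements per year
annual_cost = floats_per_year * COST_PER_FLOAT_USD     # ~$25,000,000
us_share = annual_cost / 2                             # ~$12,500,000
partner_share_each = (annual_cost - us_share) / 2      # ~$6,250,000 per partner group

print(floats_per_year, annual_cost, us_share, partner_share_each)
```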
Abstract:
Phyllotaxis patterns in plants, or the arrangement of leaves and flowers radially around the shoot, have fascinated both biologists and mathematicians for centuries. The current model of this process involves the lateral transport of the hormone auxin through the first layer of cells in the shoot apical meristem via the auxin efflux carrier protein PIN1. Locations around the meristem with high auxin concentration are sites of organ formation and differentiation. Many of the molecular players in this process are well known and characterized. Computer models composed of all these components are able to produce many of the observed phyllotaxis patterns. To understand which parts of this model have a large effect on the phenotype, I automated parameter testing and tried many different parameter combinations. The results showed that cell size and meristem size should have the largest effect on phyllotaxis. This led to three questions: (1) How is cell geometry regulated? (2) Does cell size affect auxin distribution? (3) Does meristem size affect phyllotaxis? To answer the first question, I tracked cell divisions in live meristems and quantified the geometry of the cells and the division planes using advanced image processing techniques. The results show that cell shape is maintained by minimizing the length of the new wall and by minimizing the difference in area of the daughter cells. To answer the second question, I observed auxin patterning in the meristem, shoot, leaves, and roots of Arabidopsis mutants with larger and smaller cell sizes. In the meristem and shoot, cell size plays an important role in determining the distribution of auxin. Observations of auxin in the root and leaves are less definitive. To answer the third question, I measured meristem sizes and phyllotaxis patterns in mutants with altered meristem sizes. These results show that there is no correlation between meristem size and average divergence angle. But in an extreme case, making the meristem very small does lead to a switch in the observed phyllotaxis, in accordance with the model.
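A heavily simplified, one-dimensional sketch of the "up-the-gradient" auxin/PIN1 idea referred to above is given below: cells on a ring produce and degrade auxin and pump it preferentially toward the neighbour that already has more, which lets spaced maxima (candidate organ sites) emerge. Every rate constant is an illustrative assumption; this is not the published two-dimensional meristem model.

```python
# 1D ring of cells with up-the-gradient PIN-style auxin transport (illustrative).

import numpy as np

def simulate(n_cells=30, steps=4000, dt=0.01, prod=1.0, decay=1.0, transport=8.0, seed=0):
    rng = np.random.default_rng(seed)
    a = 1.0 + 0.01 * rng.standard_normal(n_cells)   # auxin per cell, small noise
    for _ in range(steps):
        left, right = np.roll(a, 1), np.roll(a, -1)
        denom = left + right + 1e-9
        to_right = a * transport * right / denom     # PIN fraction ~ neighbour's auxin
        to_left = a * transport * left / denom
        flux_in = np.roll(to_right, 1) + np.roll(to_left, -1)
        a = a + dt * (prod - decay * a - to_right - to_left + flux_in)
        a = np.maximum(a, 0.0)
    return a

if __name__ == "__main__":
    a = simulate()
    peaks = [i for i in range(len(a)) if a[i] > a[i - 1] and a[i] > a[(i + 1) % len(a)]]
    print("auxin maxima at cells:", peaks)
```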
Abstract:
Oil production and exploration techniques have evolved in recent decades in order to increase fluid flows and optimize how the required equipment is used. The basic principle of the Electric Submersible Pumping (ESP) lift method is the use of an electric downhole motor to drive a centrifugal pump and transport the fluids to the surface. Electric Submersible Pumping is an option that has been gaining ground among the methods of Artificial Lift due to its ability to handle a large flow of liquid in onshore and offshore environments. The performance of a well equipped with an ESP system is intrinsically related to the centrifugal pump operation, since it is the pump that converts the motor power into head. In the present work, a computer model to analyze the three-dimensional flow in a centrifugal pump used in Electric Submersible Pumping has been developed. Using the commercial program ANSYS® CFX®, and initially using water as the working fluid, the geometry and simulation parameters were defined in order to obtain an approximation of the flow inside the channels of the pump impeller and diffuser. Three different geometry conditions were initially tested to determine which is most suitable for solving the problem. After choosing the most appropriate geometry, three mesh conditions were analyzed and the obtained values were compared to the experimental head curve provided by the manufacturer. The results approached the experimental curve, and the simulation time and model convergence were satisfactory considering that the studied problem involves numerical analysis. After the tests with water, oil was used in the simulations and the results were compared to a methodology used in the petroleum industry to correct for viscosity. In general, for the models with water and with oil, the results with single-phase fluids were consistent with the experimental curves, and the three-dimensional computer models serve as a preliminary evaluation for the analysis of two-phase flow inside the channels of centrifugal pumps used in ESP systems.
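Two post-processing steps implied above are easy to illustrate: converting a simulated stage pressure rise into head, and shifting a water duty point to viscous-fluid conditions. The correction factors and pressure rise below are placeholders, not the industry correlation or CFD results used in the study.

```python
# Minimal sketch: head from pressure rise, and an assumed viscosity correction.

G = 9.81  # m/s^2

def head_from_pressure_rise(delta_p_pa, density_kg_m3):
    """Head developed by the stage, H = dp / (rho * g), in metres."""
    return delta_p_pa / (density_kg_m3 * G)

def correct_for_viscosity(q_water_m3h, h_water_m, c_q=0.9, c_h=0.85):
    """Shift a water duty point to viscous-fluid conditions (assumed factors)."""
    return q_water_m3h * c_q, h_water_m * c_h

if __name__ == "__main__":
    h_water = head_from_pressure_rise(delta_p_pa=65_000, density_kg_m3=998.0)  # assumed dp
    q_oil, h_oil = correct_for_viscosity(150.0, h_water)
    print(f"water head ~ {h_water:.1f} m; corrected oil point ~ {q_oil:.0f} m3/h, {h_oil:.1f} m")
```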
Abstract:
Nowadays, digital computer systems and networks are the main engineering tools, being used in the planning, design, operation, and control of buildings, transportation, machinery, businesses, and life-maintaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, contributing to a decrease in the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, an attempt to model the propagation dynamics of computer viruses relates them to other notable events occurring in the network, permitting preventive policies to be established in network management. Data from three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using the data collected from other viruses that formerly infected the network. Copyright (c) 2008 J. R. C. Piqueira and F. B. Cesar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
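The autoregressive identification step described above amounts to fitting AR coefficients to one infection time series and using them to forecast another. The sketch below does this by least squares on synthetic stand-in data; it is not the authors' analysis of the real Internet data.

```python
# Minimal sketch of AR identification and forecasting for virus prevalence series.

import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of AR(order) coefficients to a 1D series."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(history, coeffs, steps):
    """Roll the fitted AR model forward from the given history."""
    h = list(history)
    for _ in range(steps):
        h.append(float(np.dot(coeffs, h[-len(coeffs):])))
    return h[-steps:]

if __name__ == "__main__":
    t = np.arange(120)
    old_virus = 100 * np.exp(-0.03 * t) * (1 + 0.2 * np.sin(0.4 * t))  # stand-in data
    coeffs = fit_ar(old_virus)
    new_virus_start = 80 * np.exp(-0.03 * np.arange(10))
    print(np.round(forecast(new_virus_start, coeffs, 5), 1))
```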
Abstract:
Computer viruses are an important risk to computational systems, endangering both corporations of all sizes and personal computers used for domestic applications. Here, classical epidemiological models for disease propagation are adapted to computer networks and, by using simple system identification techniques, a model called SAIC (Susceptible, Antidotal, Infectious, Contaminated) is developed. Real data about computer viruses are used to validate the model. (c) 2008 Elsevier Ltd. All rights reserved.
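A generic four-compartment sketch in the spirit of the SAIC idea is shown below. The transition structure and every rate constant are illustrative assumptions for demonstration only; they are not the equations or parameter values published for the SAIC model.

```python
# Generic compartmental ODE sketch: Susceptible, Antidotal, Infectious, Contaminated.

import numpy as np
from scipy.integrate import solve_ivp

def saic_like(t, y, beta=0.5, alpha_sa=0.1, alpha_ia=0.2, delta=0.05, sigma=0.02):
    s, a, i, c = y
    n = s + a + i + c
    infect = beta * s * i / n          # S machines infected by contact with I
    protect = alpha_sa * s * a / n     # S machines receiving antivirus from A
    repair = alpha_ia * i              # I machines cleaned and protected
    degrade = delta * i                # I machines becoming non-operational C
    restore = sigma * c                # C machines reinstalled as S
    return [-infect - protect + restore,
            protect + repair,
            infect - repair - degrade,
            degrade - restore]

if __name__ == "__main__":
    sol = solve_ivp(saic_like, (0, 200), [990, 5, 5, 0], t_eval=np.linspace(0, 200, 5))
    print(np.round(sol.y, 1))
```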
Abstract:
This thesis deals with the use of simulation as a problem-solving tool for a number of logistic system related problems, more specifically studies on transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One of the problems related to the use of simulation is the multiplicity of models needed to study different problems; there is a need to develop conceptual modelling methodologies which help reduce the number of models needed. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port, and an airport terminal, were selected as cases for this study. The standard methodology for simulation development, consisting of system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation and analysis of results, and reporting of findings, was carried out. We found that the models could be classified into tightly pre-scheduled, moderately pre-scheduled and unscheduled systems. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of the various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to the three logistic terminals, as set out in our objectives. From the point of view of the contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic classes and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems is a workable approach which reduces the number of models needed to study different terminal-related problems.
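The flavour of the discrete-event terminal models described above can be conveyed by a minimal single-server queue: vehicles arrive at one service point (for example, a gate or berth) and are served first-come first-served. The arrival and service distributions are illustrative assumptions; the thesis models themselves were built in the Extend package.

```python
# Minimal discrete-event style sketch: single-server FIFO queue at a terminal.

import random

def simulate_terminal(n_arrivals=1000, mean_interarrival=5.0, mean_service=4.0, seed=1):
    random.seed(seed)
    t, arrivals = 0.0, []
    for _ in range(n_arrivals):
        t += random.expovariate(1.0 / mean_interarrival)   # exponential inter-arrivals
        arrivals.append(t)
    server_free_at, total_wait = 0.0, 0.0
    for arrive in arrivals:                                 # process in arrival order
        start = max(arrive, server_free_at)
        total_wait += start - arrive
        server_free_at = start + random.expovariate(1.0 / mean_service)
    return total_wait / n_arrivals

if __name__ == "__main__":
    print(f"mean wait ~ {simulate_terminal():.1f} time units")
```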
Abstract:
Test-based forms of assessment cannot identify many of students' conceptual errors. The aim of this research is to provide a new procedure capable of automatically generating students' conceptual models from free-text answers. The work is organised in three parts. First, the state of the art is reviewed. Next, the procedure for automatically generating students' conceptual models, and the systems that implement this procedure, are described. Finally, the experiments carried out and their results, the conclusions drawn, and the lines of future work are presented, together with information for applying the procedure in another language and/or knowledge area. A procedure is proposed for automatically generating conceptual models of each student, and of a class, from the answers supplied to an automatic, adaptive assessment system. Such systems are the evolution of current free-text assessment systems, which evaluate free-text answers automatically and in a manner adapted to each student's model.