943 results for Discrete Element Modelling
Abstract:
The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfill their safety function. The phenomena caused by such a high direct contact condensation (DCC) rate are very challenging to analyse, either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled by using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was simulated first. The POOLEX experiment STB-31 was operated near the boundary between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase-change solution to settle near the measured values, but a very fine calculation grid was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data on bubble sizes and the temporal data on bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations.
In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a high enough water level inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which triggered self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. The comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
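The comparative FFT analysis mentioned above can be illustrated with a minimal sketch: extracting the dominant frequency from a sampled signal such as a bubble-size time series or a pool bottom pressure trace. The code below is a generic Python illustration (a naive DFT, not the thesis software), with a synthetic signal standing in for the measurement data.

```python
import math

def dominant_frequency(samples, sample_rate_hz):
    """Return the dominant (non-DC) frequency of a real signal via a naive DFT.

    An O(N^2) DFT is used so the sketch needs no external libraries;
    a real analysis would use an FFT routine instead.
    """
    n = len(samples)
    mean = sum(samples) / n
    centred = [s - mean for s in samples]  # remove the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):         # positive frequency bins only
        re = sum(c * math.cos(-2 * math.pi * k * i / n) for i, c in enumerate(centred))
        im = sum(c * math.sin(-2 * math.pi * k * i / n) for i, c in enumerate(centred))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate_hz / n

# Synthetic 2 Hz "chugging" signal sampled at 64 Hz for 2 s:
rate = 64
signal = [math.sin(2 * math.pi * 2.0 * t / rate) for t in range(2 * rate)]
print(dominant_frequency(signal, rate))  # 2.0
```

Comparing the peaks found this way in the bubble-size spectrum and the pressure spectrum is what allows the different eigenmodes to be told apart.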
Abstract:
The objective of this work was to develop and validate a mathematical model to estimate the duration of the cotton (Gossypium hirsutum L. r. latifolium Hutch.) cycle in the State of Goiás, Brazil, by applying the method of growing degree-days (GD) while considering, simultaneously, its variation in time and space. The model was developed as a linear combination of elevation, latitude, longitude, and a Fourier series describing the time variation. The model parameters were fitted by multiple linear regression to the observed GD accumulated for air temperatures in the range of 15°C to 40°C. The minimum and maximum temperature records used to calculate the GD were obtained from 21 meteorological stations, with data spanning from 8 to 20 years of observation. The coefficient of determination, resulting from the comparison between the estimated and calculated GD along the year, was 0.84. Model validation was done by comparing estimated and measured crop cycles in the period from cotton germination to the stage when 90% of bolls were opened in commercial crop fields. Comparative results showed that the model performed very well, as indicated by a Pearson correlation coefficient of 0.90 and a Willmott agreement index of 0.94, resulting in a performance index of 0.85.
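The growing degree-day accumulation and the Willmott agreement index used for validation can be sketched as follows. The degree-day formula shown is one common formulation using the 15°C base and 40°C upper limit from the abstract; the exact formulation of the paper may differ.

```python
def daily_degree_days(t_min, t_max, t_base=15.0, t_upper=40.0):
    """One common growing degree-day formulation (assumed; the paper may differ):
    mean of the min/max temperatures clamped to [t_base, t_upper], minus t_base."""
    lo = min(max(t_min, t_base), t_upper)
    hi = min(max(t_max, t_base), t_upper)
    return (lo + hi) / 2.0 - t_base

def willmott_index(observed, estimated):
    """Willmott's index of agreement d = 1 - SSE / potential-error term."""
    o_mean = sum(observed) / len(observed)
    sse = sum((e - o) ** 2 for o, e in zip(observed, estimated))
    pot = sum((abs(e - o_mean) + abs(o - o_mean)) ** 2
              for o, e in zip(observed, estimated))
    return 1.0 - sse / pot

print(daily_degree_days(18.0, 30.0))  # 9.0
print(daily_degree_days(10.0, 45.0))  # 12.5 (both extremes clamped)
```

Summing `daily_degree_days` over the season gives the accumulated GD to which the regression model is fitted; `willmott_index` returns 1.0 for perfect agreement.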
Abstract:
Scarcity of long-term series of sediment-related variables has led watershed managers to apply mathematical models to simulate sediment fluxes. Due to the high effort required for the installation and maintenance of sedimentological gauges, tracers have been pointed out as an alternative for validating soil redistribution modelling. In this study, the 137Cs technique was used to assess the WASA-SED model performance at the Benguê watershed (933 km²) in the Brazilian semiarid region. Qualitatively, good agreement was found between the 137Cs technique and the WASA-SED model results. Nonetheless, quantitatively, large differences of up to two orders of magnitude were found between the two methods. Among the uncertainties inherent to the 137Cs technique, the definition of the reference inventory seems to be a major source of imprecision. In addition, estimates of water and sediment fluxes with mathematical models usually also carry high uncertainty, contributing to the quantitative differences between the soil redistribution estimates of the two methods.
Abstract:
Local head losses must be considered to properly estimate the maximum length of drip irrigation laterals. The aim of this work was to develop a model, based on dimensional analysis, for calculating head loss along laterals while accounting for in-line drippers. Several measurements were performed with 12 models of emitters to obtain the experimental data required for developing and assessing the model. Based on the Camargo & Sentelhas coefficient, the model presented excellent precision and accuracy in estimating head loss. The deviation between estimated and observed values of head loss increased with the head loss itself, and the maximum deviation reached 0.17 m. The maximum relative error was 33.75%, and only 15% of the data set presented relative errors higher than 20%. Neglecting local head losses overestimated the maximum lateral length by 19.48% for pressure-compensating drippers and by 16.48% for non-pressure-compensating drippers.
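For illustration, a generic head-loss calculation for a lateral can be sketched as below. This uses the Darcy-Weisbach equation with the Blasius friction factor plus a kinetic-energy local loss term per emitter; it is not the dimensional-analysis model of the paper, and the `local_k` loss coefficient is hypothetical.

```python
import math

def lateral_head_loss(flow_lps, diameter_m, length_m, n_emitters, local_k=0.1):
    """Generic sketch: Darcy-Weisbach friction loss (Blasius friction factor,
    smooth pipe) plus a kinetic-energy local loss per in-line emitter.
    `local_k` is an illustrative emitter loss coefficient, not a fitted value."""
    area = math.pi * diameter_m ** 2 / 4.0
    v = (flow_lps / 1000.0) / area                  # mean velocity, m/s
    re = v * diameter_m / 1.004e-6                  # Reynolds number, water at 20 C
    f = 0.316 * re ** -0.25                         # Blasius, valid 3e3 < Re < 1e5
    h_friction = f * (length_m / diameter_m) * v ** 2 / (2.0 * 9.81)
    h_local = n_emitters * local_k * v ** 2 / (2.0 * 9.81)
    return h_friction, h_local

h_f, h_l = lateral_head_loss(flow_lps=0.5, diameter_m=0.016,
                             length_m=50.0, n_emitters=60)
print(h_f, h_l)
```

The point of the abstract is that omitting `h_local` leads to laterals designed longer than they should be, by up to roughly 16-19%.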
Abstract:
Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain the critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is nearly impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with a graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter. It is also possible to create experimental designs for cases where the variables are totally user-defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications is the main part of this thesis. This approach to the study of cake filtration processes has been shown to have good practical value when making filtration tests.
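The kind of experimental design and linear modelling that the LTDoE/LTRead modules automate can be sketched in a few lines. The example below (not the thesis software) generates a two-level full factorial design and estimates main effects; the factor names and responses are hypothetical.

```python
from itertools import product

def full_factorial(factors):
    """Generate a 2-level full factorial design in coded levels -1/+1.
    `factors` is a list of factor names; returns one dict per run."""
    return [dict(zip(factors, levels))
            for levels in product((-1, 1), repeat=len(factors))]

def main_effects(design, responses):
    """Estimate each main effect as the difference between the mean
    response at the +1 level and at the -1 level of that factor."""
    effects = {}
    for name in design[0]:
        hi = [y for run, y in zip(design, responses) if run[name] == 1]
        lo = [y for run, y in zip(design, responses) if run[name] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

design = full_factorial(["pressure", "slurry_concentration"])
# Hypothetical cake-moisture responses for the four runs:
responses = [20.0, 16.0, 22.0, 18.0]
print(main_effects(design, responses))
```

With these four runs, raising pressure increases the response by 2.0 on average while raising slurry concentration decreases it by 4.0; the same effect estimates become the coefficients of a first-order regression model.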
Abstract:
The objective of this master's thesis is to investigate the loss behavior of a three-level ANPC inverter and compare it with a conventional NPC inverter. Both inverters are controlled with a mature space vector modulation (SVM) strategy. In order to make the comparison both accurate and sufficiently detailed, adequate NPC and ANPC simulation models had to be obtained. The same SVM control model is utilized for both the NPC and ANPC inverter models. The principles of the control algorithms and the structure and description of the models are clarified. The power loss calculation model is based on practical calculation approaches with certain assumptions. The comparison between the NPC and ANPC topologies is presented based on the results obtained for each semiconductor device, their switching and conduction losses, and the efficiency of the inverters. The alternative switching states of the ANPC topology allow losses to be distributed among the switches more evenly than in the NPC inverter. The losses of a switching device depend on its position in the topology. The loss distribution among the components in the ANPC topology reduces the stress on certain switches, since losses are shared more equally among the semiconductors; the overall efficiency of the two inverters, however, is the same. As a new contribution to earlier studies, models of the SVM control and of the NPC and ANPC inverters have been built. Thus, this thesis can be used in further, more complicated modelling of full-power converters for modern multi-megawatt wind energy conversion systems.
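The practical loss-calculation approach mentioned above typically combines a linearised conduction-loss model with switching energies scaled from datasheet reference conditions. The sketch below shows these common first-order formulas with illustrative device parameters, not values from the thesis.

```python
def conduction_loss(i_avg, i_rms, v_on0=1.0, r_on=0.01):
    """Average conduction loss of one switch from a linearised on-state
    characteristic v_on(i) = v_on0 + r_on * i. The device parameters
    are illustrative, not taken from any datasheet."""
    return v_on0 * i_avg + r_on * i_rms ** 2

def switching_loss(f_sw, e_on, e_off, i, v, i_ref, v_ref):
    """Switching loss scaled linearly in current and voltage from the
    datasheet reference conditions, a common first-order approximation."""
    return f_sw * (e_on + e_off) * (i / i_ref) * (v / v_ref)

p_cond = conduction_loss(i_avg=50.0, i_rms=70.0)
p_sw = switching_loss(f_sw=2000.0, e_on=5e-3, e_off=7e-3,
                      i=50.0, v=600.0, i_ref=100.0, v_ref=1200.0)
print(p_cond, p_sw)  # 99.0 6.0
```

Summing such per-device terms over every switch and diode, for each position in the NPC or ANPC leg, gives the per-component loss distribution and the inverter efficiency compared in the thesis.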
Abstract:
Climate change has given an impetus to research and develop new technologies to significantly reduce carbon dioxide emissions from energy production in the developed countries. The major pollution source, fossil fuels, will be used as an energy source for many decades, which creates demand for carbon capture and storage technologies. Over recent years many new technologies have been developed, and one of the most promising is calcium looping, a post-combustion carbon capture process that uses a carbonation-calcination cycle to capture carbon dioxide from the flue gas of a combustion process. The first pilot plant for the calcium-looping process has been built in Oviedo, Spain. In this study, a three-dimensional model was created for the calciner, one of the two fluidized bed reactors needed for the process. The calciner is a regenerator where the captured carbon dioxide is removed from the calcium material and then collected after the reactor. The thesis concentrates on creating the calciner 3D model frame with the CFB3D program and testing the model with two example cases. The input parameters and calciner geometry used are the Oviedo pilot plant design parameters. The calculation results give information about the process and show that the pilot plant calciner should perform as planned. This Master's Thesis was done as part of the EU FP7 project CaOling.
Abstract:
The atomic structure of ZrO2 and B2O3 was investigated in this work. New data under extreme conditions (T = 3100 K) were obtained for the liquid ZrO2 structure. The fractional coordination number of boron was investigated for the glassy structure of B2O3. It was shown that agreement on this fractional number between NMR and DFT techniques can be obtained by using a suitable initial configuration.
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research and development and in routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods focused on studying the behaviour and reliability of bioaffinity assays and the possibilities offered by modelling binding reaction kinetics, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay can be very significant; for example, even the smallest elevation of a heart muscle marker reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data utilizing only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
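The mechanistic assay model described above rests on chemical reaction kinetics of the form d[AB]/dt = k_on[A][B] - k_off[AB] for a 1:1 binding A + B ⇌ AB. A minimal sketch of such a simulation, with illustrative rate constants and a simple forward-Euler integrator rather than the methods of the thesis, is:

```python
def simulate_binding(a0, b0, k_on, k_off, t_end, dt=1e-3):
    """Forward-Euler integration of 1:1 binding kinetics
    d[AB]/dt = k_on*[A][B] - k_off*[AB], starting from [AB] = 0.
    Rate constants and concentrations are illustrative (arbitrary units)."""
    ab = 0.0
    for _ in range(round(t_end / dt)):
        rate = k_on * (a0 - ab) * (b0 - ab) - k_off * ab
        ab += rate * dt
    return ab

# With no dissociation and an excess of binder, the bound concentration
# rises monotonically toward the initial analyte level:
ab_early = simulate_binding(a0=1.0, b0=10.0, k_on=0.5, k_off=0.0, t_end=0.2)
ab_late = simulate_binding(a0=1.0, b0=10.0, k_on=0.5, k_off=0.0, t_end=5.0)
print(ab_early < ab_late <= 1.0)  # True
```

Fitting this curve to the first seconds of measured kinetic data is what makes it possible, in principle, to predict the equilibrium result before the reaction has reached it.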
Abstract:
The modern society is getting increasingly dependent on software applications. These run on processors, use memory and account for controlling functionalities that are often taken for granted. Typically, applications adjust the functionality in response to a certain context that is provided or derived from the informal environment with various qualities. To rigorously model the dependence of an application on a context, the details of the context are abstracted and the environment is assumed stable and fixed. However, in a context-aware ubiquitous computing environment populated by autonomous agents, a context and its quality parameters may change at any time. This raises the need to derive the current context and its qualities at runtime. It also implies that a context is never certain and may be subjective, issues captured by the context’s quality parameter of experience-based trustworthiness. Given this, the research question of this thesis is: In what logical topology and by what means may context provided by autonomous agents be derived and formally modelled to serve the context-awareness requirements of an application? This research question also stipulates that the context derivation needs to incorporate the quality of the context. In this thesis, we focus on the quality of context parameter of trustworthiness based on experiences having a level of certainty and referral experiences, thus making trustworthiness reputation based. Hence, in this thesis we seek a basis on which to reason and analyse the inherently inaccurate context derived by autonomous agents populating a ubiquitous computing environment in order to formally model context-awareness. 
More specifically, the contribution of this thesis is threefold: (i) we propose a logical topology of context derivation and a method of calculating its trustworthiness, (ii) we provide a general model for storing experiences and (iii) we formalise the dependence between the logical topology of context derivation and its experience-based trustworthiness. These contributions enable the abstraction of a context and its quality parameters to a Boolean decision at runtime that may be formally reasoned with. We employ the Action Systems framework for modelling this. The thesis is a compendium of the author's scientific papers, which are republished in Part II. Part I introduces the field of research, providing the connecting elements that make the thesis a coherent introduction addressing the research question. In Part I we also review a significant body of related literature in order to better illustrate our contributions to the research field.
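Experience-based, reputation-backed trustworthiness of the kind discussed above is often computed with a beta-reputation scheme: direct positive and negative experiences define a Beta distribution, and referral experiences are discounted by the referrer's own trustworthiness. The sketch below shows that standard approach purely as an illustration; it is not necessarily the model of the thesis.

```python
def beta_reputation(positive, negative, referrals=()):
    """Beta-reputation estimate of trustworthiness (a standard scheme,
    shown for illustration only). Direct experiences count fully;
    referral experiences are discounted by the referrer's trust."""
    p, n = float(positive), float(negative)
    for ref_pos, ref_neg, referrer_trust in referrals:
        p += referrer_trust * ref_pos   # discounted referred positives
        n += referrer_trust * ref_neg   # discounted referred negatives
    expectation = (p + 1.0) / (p + n + 2.0)   # mean of Beta(p+1, n+1)
    certainty = (p + n) / (p + n + 2.0)       # grows with evidence count
    return expectation, certainty

# 8 good and 2 bad direct experiences, plus 4 good referred experiences
# from a referrer trusted at 0.5:
trust, certainty = beta_reputation(8, 2, referrals=[(4, 0, 0.5)])
print(round(trust, 3), round(certainty, 3))
```

The pair (expectation, certainty) is exactly the kind of quality-annotated value that can then be thresholded into the runtime Boolean decision mentioned above.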
Abstract:
This thesis presents an experimental and numerical study, based on the discrete element method (DEM), of bell-less charging in the blast furnace. The numerical models are based on the microscopic interactions between the particles in the blast furnace charging process. The emphasis is put on model validation, on investigating several phenomena in the charging process, and on finding factors that influence the results. The study considers and simulates size segregation in the hopper discharging process and particle flow and behavior on the chute, which is the key equipment in the charging system, using mono-sized spherical particles, multi-sized spheres and non-spherical particles. The behavior of the particles at the burden surface and pellet percolation into a coke layer are also studied. Small-scale experiments are used to validate the DEM models.
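The microscopic particle interactions underlying DEM are typically modelled with a spring-dashpot contact law. The sketch below shows a linear normal-contact force in 2D with illustrative stiffness and damping values; the models in the thesis are more elaborate.

```python
import math

def normal_contact_force(pos_i, pos_j, r_i, r_j, vel_i, vel_j,
                         k_n=1e4, c_n=5.0):
    """Linear spring-dashpot normal contact force on particle i (a standard
    DEM contact model; stiffness and damping values are illustrative).
    Returns the 2D force vector on i, or (0, 0) if the spheres don't touch."""
    dx = pos_i[0] - pos_j[0]
    dy = pos_i[1] - pos_j[1]
    dist = math.hypot(dx, dy)
    overlap = r_i + r_j - dist
    if overlap <= 0.0 or dist == 0.0:
        return (0.0, 0.0)
    nx, ny = dx / dist, dy / dist          # unit normal pointing j -> i
    # relative normal velocity (positive when the particles approach)
    v_rel_n = (vel_i[0] - vel_j[0]) * nx + (vel_i[1] - vel_j[1]) * ny
    f_n = k_n * overlap - c_n * v_rel_n    # spring repulsion + viscous damping
    return (f_n * nx, f_n * ny)

# Two overlapping, momentarily static particles repel along the x axis:
f = normal_contact_force((0.0, 0.0), (0.018, 0.0), 0.01, 0.01,
                         (0.0, 0.0), (0.0, 0.0))
print(f)  # approximately (-20.0, 0.0)
```

Summing such pairwise forces and integrating Newton's equations of motion for every particle is, in essence, what the charging simulations of the thesis do at much larger scale.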
Abstract:
The thesis concentrates on two central concepts: the technology park and the resource-based view of the organization. Particular attention is devoted to the competencies and capabilities of organizations that operate in a foreign environment. It is difficult to go abroad without any experience and without support from the local government, especially for a small or medium-sized company. Technology and science parks are a main source of competitive advantage for such organizations. They provide a wide range of services, as well as business consultations and financial support, at different stages of a company's development. The thesis was made with the assistance of Technopolis Oy in Lappeenranta. During the research, companies in Finland and Russia were interviewed. Based on the empirical findings, important capabilities for entering a foreign market were identified and some recommendations for the technology park were given.