Abstract:
In this thesis, a general approach is devised for modeling electrolyte sorption from aqueous solutions onto solid materials. Electrolyte sorption is often regarded as an unwanted phenomenon in ion exchange, and its potential as an independent separation method has not been fully explored. The solid sorbents studied here are porous and non-porous organic or inorganic materials, with or without specific functional groups attached to the solid matrix. Accordingly, the sorption mechanisms include physical adsorption, chemisorption on the functional groups, and partition restricted by electrostatic or steric factors. The model is tested in four Case Studies dealing with chelating adsorption of transition metal mixtures, physical adsorption of metal and metalloid complexes from chloride solutions, size exclusion of electrolytes in nano-porous materials, and electrolyte exclusion of electrolyte/non-electrolyte mixtures. The model parameters are estimated using experimental data from equilibrium and batch kinetic measurements, and they are used to simulate actual single-column fixed-bed separations. Phase equilibrium between the solution and solid phases is described using the thermodynamic Gibbs-Donnan model and various adsorption models, depending on the properties of the sorbent. The three-dimensional thermodynamic approach is used for volume sorption in gel-type ion exchangers and in nano-porous adsorbents, and satisfactory correlation is obtained provided that both mixing and exclusion effects are adequately taken into account. Two-dimensional surface adsorption models are successfully applied to physical adsorption of complex species and to chelating adsorption of transition metal salts; in the latter case, a comparison is also made with complex formation models. Results of the mass transport studies show that uptake rates, even in a competitive high-affinity system, can be described by constant diffusion coefficients when the adsorbent structure and the phase equilibrium conditions are adequately included in the model. Furthermore, a simplified solution based on the linear driving force approximation and the shrinking-core model is developed for highly non-linear adsorption systems. In each Case Study, the actual separation is carried out batch-wise in fixed beds, and the experimental data are simulated/correlated using the parameters derived from equilibrium and kinetic data. Good agreement between the calculated and experimental breakthrough curves is usually obtained, indicating that the proposed approach is useful in systems that at first sight appear very different. For example, the marked improvement in copper separation from concentrated zinc sulfate solution at elevated temperatures is correctly predicted by the model. In some cases, however, re-adjustment of model parameters is needed due to, for example, high solution viscosity.
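The linear driving force (LDF) approximation mentioned above is a standard simplification of sorption kinetics. As a minimal sketch (not the thesis's actual implementation, and with all parameter values invented for illustration), the code below integrates LDF uptake toward a Langmuir equilibrium loading:

```python
import numpy as np

# Minimal LDF sketch: dq/dt = k_ldf * (q_eq(c) - q), with a Langmuir
# isotherm supplying the equilibrium loading q_eq. All parameter values
# are illustrative placeholders, not fitted thesis parameters.
k_ldf = 0.05           # lumped mass-transfer coefficient, 1/s (assumed)
q_max, b = 2.0, 1.5    # Langmuir capacity (mol/kg) and affinity (L/mol)

def q_eq(c):
    """Langmuir equilibrium loading for liquid-phase concentration c."""
    return q_max * b * c / (1.0 + b * c)

def ldf_uptake(c, t_end=600.0, dt=1.0):
    """Explicit-Euler integration of the LDF rate equation from q = 0."""
    q = 0.0
    for _ in np.arange(0.0, t_end, dt):
        q += dt * k_ldf * (q_eq(c) - q)
    return q

print(ldf_uptake(0.5))  # loading after 10 min at c = 0.5 mol/L
```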
Abstract:
The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow was numerically simulated in three idealized and two realistic thoracic aorta models. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by processing Computed Tomography (CT) images made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering; additional design operations were performed to make the aorta models compatible with the numerical computer codes. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity profiles were deployed as inlet boundary conditions, and the blood was assumed to be a homogeneous, incompressible Newtonian fluid. The simulations with the idealized models were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were run for four cardiac cycles, and the distributions of flow, pressure, and Wall Shear Stress (WSS) during the fourth cycle were extensively analyzed. The aim of the simulations with the idealized models was to obtain an estimate of the flow dynamics expected in a realistic aorta model, and the choice of three aorta models with distinct features was motivated by the need to understand how the flow dynamics depend on aortic anatomy. A highly disturbed and non-uniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. The WSS profiles at the roots of the branches showed significant differences with variation in the geometry of the aorta and its branches. The comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under hypertension and hypotension: one of the idealized aorta models was modified, along with the boundary conditions, to mimic these states. The simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the arch. Unlike in the idealized models, the distribution of flow was non-planar and heavily guided by the arterial anatomy. Flow cavitation was observed in the aorta model whose imaging captured longer branches; it could not be properly observed in the model whose imaging contained a shorter length of the aortic branches.
Flow circulation was also observed at the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet, and the profiles were weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS: high WSS occurred at the junctions of the branches and the aortic arch, low WSS at the proximal part of each junction, and intermediate WSS at the distal part. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry regions and the inner curvature of the aortic arch. Plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, common carotid artery, left subclavian artery, and aortic arch. The aims of this part of the study were to examine the effect of stenosis on the flow and WSS distributions, to understand the effect of the shape of the atherosclerotic plaque on these distributions, and to investigate the effect of the severity of lumen blockage. The results revealed that the distribution of WSS is significantly affected by plaque with a mere 50% stenosis, and that an asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque. The flow dynamics within the thoracic aorta models have thus been extensively studied and reported here, the effects of pressure and arterial anatomy on the flow dynamics have been investigated, and the distribution of complex flow and WSS has been correlated with the localization of atherosclerosis. From the available results we can conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis, and that flow dynamics and arterial anatomy play a role in this localization. Patient-specific, image-based models can be used to diagnose the locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
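For a Newtonian fluid, as assumed above, the wall shear stress is the product of the dynamic viscosity and the wall-normal velocity gradient; a standard definition (stated here for orientation, not quoted from the thesis) is

$$\tau_w = \mu \left.\frac{\partial u_t}{\partial n}\right|_{\text{wall}},$$

where $\mu$ is the dynamic viscosity of blood (a value of about 3.5 mPa s is commonly assumed in such simulations, though the thesis's value is not given in the abstract) and $\partial u_t/\partial n$ is the gradient of the tangential velocity along the wall normal.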
Abstract:
This study presents a contribution to the modeling of a computer application employing a serviceability performance method for unpaved roads, aiming at the management of maintenance/restoration activities of the primary surface layer. The proposed methodology consisted of field inspections during the dry (April to September) and rainy (October to March) periods, during which objective evaluations were performed to survey defects, their densities, and their degrees of severity. To aid the functional classification of the analyzed road sections and the determination of the defect with the greatest influence on the serviceability of these roads, the serviceability performance method proposed by Silva (2009) was implemented in the Visual Basic for Applications (VBA) language in Microsoft Excel. With the proposed computer application it was possible to identify, through the serviceability index of the sampling unit per defect type (ISUdef), which of the defects analyzed in the field had the greatest influence on the relative serviceability index per road section (IST). The results allow us to conclude that the computer application Road achieved satisfactory results, since the objective evaluation criteria applied to the road sections denote consistency regarding their serviceability.
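The abstract does not reproduce the weighting scheme of Silva (2009), so the following is a loudly hypothetical sketch of how a density- and severity-weighted per-defect index (ISUdef) might be aggregated into a per-section index (IST); the functions, weights, and scale are invented for illustration only:

```python
# Hypothetical sketch of a density/severity-weighted serviceability index
# per defect type (ISUdef) aggregated into a section index (IST). The
# actual formulas of Silva (2009) are not given in the abstract; this is
# an illustrative assumption, not the published method.
def isu_def(density, severity, weight):
    """Per-defect index: higher density and severity lower serviceability."""
    return max(0.0, 100.0 - weight * density * severity)

def ist(defects):
    """Aggregate section index as the mean of the per-defect indices."""
    scores = [isu_def(d["density"], d["severity"], d["weight"]) for d in defects]
    return sum(scores) / len(scores)

section = [
    {"density": 0.3, "severity": 2, "weight": 40.0},  # e.g. rutting
    {"density": 0.1, "severity": 3, "weight": 60.0},  # e.g. erosion
]
print(ist(section))
```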
Abstract:
This study aimed to verify the differences in radiation intensity as a function of distinct relief exposure surfaces and to quantify these effects on the leaf area index (LAI) and other variables expressing eucalyptus forest productivity, for simulations in a process-based growth model. The study was carried out at two edaphoclimatically contrasting locations in the Rio Doce basin in Minas Gerais, Brazil. Two stands with 32-year-old plantations were used, allocating fixed plots in locations with north- and south-facing exposure surfaces. Meteorological data were obtained from two automated weather stations located near the study sites. Solar radiation was corrected for terrain inclination and exposure surface, since it is measured on a horizontal plane, perpendicular to the local vertical. The LAI values collected in the field were used. For the comparative simulations of productivity variation, the mechanistic 3PG model was used, taking the relief exposure surfaces into account. It was verified that during most of the year the south-facing surfaces showed lower availability of incident solar radiation, with losses of up to 66% compared to the same surface treated as flat, probably related to their geographical location and steeper slope. Higher values of LAI, volume, and mean annual wood increment were obtained for the plantations located on the north-facing surface, and this tendency was repeated in the 3PG model simulations.
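The slope/aspect correction of measured radiation follows standard solar geometry; as a sketch of the textbook cos-incidence formulation (assumed here, since the paper's exact procedure is not reproduced in the abstract):

```python
import numpy as np

# Standard geometric correction of direct-beam radiation measured on a
# horizontal plane to an inclined surface defined by slope and aspect.
# This is the generic textbook relation, not the paper's own code.
def incidence_factor(zenith, sun_azimuth, slope, aspect):
    """Ratio cos(theta_inclined)/cos(zenith) for direct-beam radiation.
    All angles in radians; azimuth/aspect measured clockwise from north."""
    cos_theta = (np.cos(slope) * np.cos(zenith)
                 + np.sin(slope) * np.sin(zenith) * np.cos(sun_azimuth - aspect))
    return max(0.0, cos_theta) / max(np.cos(zenith), 1e-6)

# Example: with the sun at 40 deg zenith due north (as at midday in the
# southern hemisphere), a 20 deg north-facing slope receives more beam
# radiation than flat ground, consistent with the tendency reported above.
f = incidence_factor(np.radians(40), np.radians(0), np.radians(20), np.radians(0))
print(f)  # > 1 for a sun-facing slope
```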
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have examined the use of limestone in applications that enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes, interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model that describes a single limestone particle in the process as a function of time, the progress of the reactions, and the mass and energy transfer within the particle. The model-based results were compared with experimental laboratory-scale bubbling fluidized bed (BFB) results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination but no longer improved the sulfate conversion. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2, while the reaction order of O2 appears to approach zero at high oxygen concentrations.
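A Langmuir-type rate law is one functional form consistent with the reported orders, first order in SO2 with the O2 order tending to zero at high oxygen concentration; the sketch below is an assumed illustration, not the rate expression fitted in the work:

```python
# Illustrative sulfation rate law consistent with the reported kinetics:
# first order in SO2, and order in O2 tending to zero at high O2 via
# Langmuir-type saturation. Parameters are placeholders, not fitted values.
def sulfation_rate(c_so2, c_o2, k=1.0e-3, K_o2=0.5):
    """Rate in mol/(m^3 s); concentrations in mol/m^3 (assumed units)."""
    return k * c_so2 * c_o2 / (K_o2 + c_o2)

# At c_o2 >> K_o2 the O2 term saturates and the rate is ~ k * c_so2:
print(sulfation_rate(0.02, 5.0), sulfation_rate(0.02, 50.0))
```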
Abstract:
Linear programming models are effective tools to support initial or periodic planning of agricultural enterprises; they require, however, technical coefficients that can be determined using computer simulation models. This paper, presented in two parts, deals with the development, application, and testing of a methodology and a computational modeling tool to support the planning of irrigated agriculture. Part I covers the development and application, including sensitivity analysis, of a multiyear linear programming model to optimize financial return and water use at the farm level for the Jaíba irrigation scheme, Minas Gerais State, Brazil, using data on crop irrigation requirements and yields obtained from previous simulations with the MCID model. The linear programming model produced a cropping pattern for which a maximum total net present value of R$ 372,723.00 was obtained for the four-year period. Constraints on monthly water availability, labor, land, and production were critical in the optimal solution. Regarding water use optimization, it was verified that expressive reductions in irrigation requirements can be achieved with only small reductions in the maximum total net present value.
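As a miniature of the kind of farm-level LP described above (choose crop areas to maximize net present value subject to land and monthly water limits), the sketch below uses scipy's linprog; every coefficient is invented for illustration and is not from the Jaíba study:

```python
from scipy.optimize import linprog

# Miniature farm-planning LP: choose crop areas (ha) to maximize NPV
# subject to a land limit and monthly water limits. All coefficients
# are illustrative placeholders.
npv_per_ha = [1200.0, 900.0]          # R$/ha for two hypothetical crops
water_per_ha = [[3000.0, 1500.0],     # m^3/ha in a peak month
                [1000.0, 2500.0]]     # m^3/ha in another month
water_limit = [40000.0, 35000.0]      # m^3 available per month
land_limit = 20.0                     # ha

res = linprog(
    c=[-v for v in npv_per_ha],       # linprog minimizes, so negate NPV
    A_ub=water_per_ha + [[1.0, 1.0]], # water rows plus total-land row
    b_ub=water_limit + [land_limit],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)                # optimal areas and maximum NPV
```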
Abstract:
Techniques for evaluating the risks arising from the uncertainties inherent in agricultural activity should accompany planning studies, and the risk analysis should be carried out by simulation using techniques such as the Monte Carlo method. This study was carried out to develop a computer program, called P-RISCO, for applying risk simulations to linear programming models, to apply it to a case study, and to test the results against the @RISK program. In the risk analysis, it was observed that the mean of the output variable total net present value, U, was considerably lower than the maximum U value obtained from the linear programming model. It was also verified that the enterprise will face an expressive risk of water shortage in April, which does not happen for the cropping pattern obtained by minimizing the irrigation requirement in the month of April over the four years. The scenario analysis indicated that the sale price of the passion fruit crop exerts an expressive influence on the financial performance of the enterprise. The comparative analysis verified the equivalence of the P-RISCO and @RISK programs in executing the risk simulation for the considered scenario.
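The Monte Carlo step amounts to propagating uncertain inputs (here, the influential passion fruit sale price) through a fixed cropping plan to a distribution of the total net present value U. A hedged sketch, with the distribution choice and all numbers being illustrative assumptions rather than P-RISCO's actual inputs:

```python
import numpy as np

# Monte Carlo sketch: sample an uncertain sale price, propagate it
# through a simplified revenue/cost relation, and summarize the
# resulting distribution of total net present value U.
rng = np.random.default_rng(42)
n = 10_000
price = rng.triangular(0.8, 1.2, 1.8, size=n)  # R$/kg: min, mode, max
yield_kg, area_ha, cost = 25_000.0, 10.0, 180_000.0

U = price * yield_kg * area_ha - cost          # simplified one-period NPV
print(U.mean(), np.percentile(U, 5))           # mean and 5% quantile (risk)
```

This reproduces the qualitative finding reported above: the mean of U under uncertainty sits below the deterministic optimum of the LP model.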
Abstract:
In the forced-air cooling of fruits, mass transfer by evaporation occurs in addition to convective heat transfer, and the energy needed for evaporation is taken from the fruit, lowering its temperature. This study proposes the use of empirical correlations for calculating the convective heat transfer coefficient as a function of the surface temperature of the strawberry during the cooling process. The aim of this variation of the convective coefficient is to compensate for the effect of evaporation on the heat transfer process. Linear and exponential correlations are tested, both with two adjustable parameters. The simulations are performed using experimental conditions reported in the literature for the cooling of strawberries. The results confirm the suitability of the proposed methodology.
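In the spirit of the correlations described above (h linear or exponential in surface temperature, each with two adjustable parameters a and b), the sketch below runs a lumped-capacitance cooling simulation; all property values and parameters are illustrative assumptions, not the fitted values of the study:

```python
import numpy as np

# Lumped-capacitance cooling with a temperature-dependent convective
# coefficient: m*cp*dT/dt = h(T)*A*(T_air - T). Values are placeholders.
m, cp, A = 0.015, 3900.0, 0.003   # strawberry mass (kg), J/(kg K), area (m^2)
T_air = 1.0                       # cooling air temperature (deg C)

def h_linear(T, a=15.0, b=0.8):       # W/(m^2 K), assumed parameters
    return a + b * T

def h_exponential(T, a=14.0, b=0.04): # W/(m^2 K), assumed parameters
    return a * np.exp(b * T)

def cool(h_fun, T0=20.0, t_end=3600.0, dt=1.0):
    """Explicit-Euler integration of the fruit temperature."""
    T = T0
    for _ in np.arange(0.0, t_end, dt):
        T += dt * h_fun(T) * A * (T_air - T) / (m * cp)
    return T

print(cool(h_linear), cool(h_exponential))  # temperature after 1 h
```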
Abstract:
The interaction between the soil and a tillage tool can be examined using different parameters for the soil and the tool. Among the soil parameters are the shear stress, cohesion, internal friction angle, and pre-compression stress; the tool parameters are mainly the tool geometry and the depth of operation. For the soils of Rio Grande do Sul there are hardly any studies or evaluations of the parameters that matter for the use of mathematical models to predict tractive loads. The objective was to obtain parameters related to the soils of Rio Grande do Sul that are used in soil-tool analysis, more specifically in mathematical models that allow the calculation of tractive effort for narrow, symmetric tools. Two of the main soils of Rio Grande do Sul, an Albaqualf and a Paleudult, were studied. Equations relating cohesion, internal friction angle, adhesion, soil-tool friction angle, and pre-compression stress to soil water content were obtained, providing important information for the use of mathematical models in tractive effort calculation.
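Draft-force models for narrow tools of the kind referred to above are often of the Reece (universal earthmoving equation) form, combining the measured soil parameters with geometry-dependent N-factors. The sketch below assumes that generic form; the N-factors and all inputs are placeholders, not values from this study:

```python
# Sketch of a Reece-type draft-force equation for a narrow tillage tool:
# F = (gamma*d^2*N_gamma + c*d*N_c + ca*d*N_a + q*d*N_q) * w.
# The N-factors depend on tool geometry and soil friction angle; here
# they are placeholder constants for illustration.
def draft_force(gamma, c, ca, q, depth, width,
                N_gamma=2.5, N_c=4.0, N_a=1.5, N_q=3.0):
    """Horizontal draft (N): gamma soil unit weight (N/m^3), c cohesion (Pa),
    ca soil-metal adhesion (Pa), q surcharge (Pa), depth/width in m."""
    return (gamma * depth**2 * N_gamma
            + c * depth * N_c
            + ca * depth * N_a
            + q * depth * N_q) * width

print(draft_force(gamma=13_000.0, c=20_000.0, ca=5_000.0, q=0.0,
                  depth=0.25, width=0.05))
```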
Abstract:
This study aimed to apply mathematical models to the growth of Nile tilapia (Oreochromis niloticus) reared in net cages in the lower São Francisco basin and to choose the model(s) that best represent the rearing conditions of the region. The nonlinear models of Brody, Bertalanffy, Logistic, Gompertz, and Richards were tested. The models were fitted to the weight-for-age series using the Gauss, Newton, Gradient, and Marquardt methods, and the NLIN procedure of the SAS® system (2003) was used to obtain parameter estimates from the available data. The best fits were obtained with the Bertalanffy, Gompertz, and Logistic models, which are equivalent in explaining the growth of the animals up to 270 days of rearing. From a commercial point of view, marketing tilapia of at least 600 g is recommended, a weight estimated by the Bertalanffy, Gompertz, and Logistic models to be reached after 183, 181, and 184 days of rearing, respectively; for a mass of up to 1 kg, ending the rearing at 244, 244, and 243 days, respectively, is suggested.
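The study fitted these curves with SAS PROC NLIN; as a stand-in sketch of the same idea, the code below fits two of the named models to invented weight-age points with scipy (the data, starting values, and library are assumptions, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit Gompertz and Logistic growth curves to illustrative weight-age
# data. The study used SAS PROC NLIN; scipy's curve_fit is a stand-in.
def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

t = np.array([30, 60, 90, 120, 150, 180, 210, 240, 270], float)   # days
w = np.array([25, 80, 180, 320, 470, 620, 760, 870, 950], float)  # g (invented)

p0 = {"gompertz": [1100.0, 3.8, 0.012], "logistic": [1100.0, 40.0, 0.02]}
for model in (gompertz, logistic):
    p, _ = curve_fit(model, t, w, p0=p0[model.__name__], maxfev=10_000)
    print(model.__name__, p)  # asymptotic weight A, shape b, rate k
```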
Abstract:
Given the need for systems to better control the broiler production environment, an experiment was performed with broilers from 1 to 21 days of age, which were subjected to different intensities and durations of air temperature in climate-controlled wind tunnels, and the results were used to validate a fuzzy model. The model was developed using duration of heat stress (days) and dry-bulb air temperature (°C) as input variables, and feed intake (g), weight gain (g), and feed conversion (g g-1) as output variables. The Mamdani inference method was used, 20 rules were prepared, and defuzzification was performed with the center of gravity technique. The results of the model simulation, compared with the experimental data, evidence satisfactory efficiency in determining productive responses: the R² values calculated for feed intake, weight gain, and feed conversion were 0.998, 0.981, and 0.980, respectively.
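A minimal Mamdani-style sketch of the pipeline described above (triangular membership functions, min-max rule aggregation, center-of-gravity defuzzification); the membership limits and rule outputs are invented placeholders, not the 20-rule base of the study:

```python
import numpy as np

# Two-rule Mamdani sketch: fuzzify air temperature, clip each rule's
# output set at its activation level, aggregate with max, and defuzzify
# by center of gravity. All shapes and limits are illustrative.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

x_out = np.linspace(0.0, 1000.0, 1001)   # candidate feed intake values (g)

def infer(temperature):
    w_comfort = tri(temperature, 18.0, 24.0, 30.0)   # rule 1 activation
    w_hot = tri(temperature, 26.0, 34.0, 42.0)       # rule 2 activation
    out = np.maximum(np.minimum(w_comfort, tri(x_out, 600.0, 800.0, 1000.0)),
                     np.minimum(w_hot, tri(x_out, 200.0, 400.0, 600.0)))
    return np.sum(x_out * out) / np.sum(out)         # center of gravity

print(infer(28.0))  # estimated feed intake (g) at 28 deg C
```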
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is not yet clear which applications would benefit most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to let memristors perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications; in particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intra-chip communication can be naturally implemented by a memristive crossbar structure.
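A common starting point for the device-level modeling mentioned above is the HP-style linear ion drift model, assumed here as a generic illustration rather than the thesis's specific device model; the state w is the doped-region width and the memristance interpolates between R_on and R_off:

```python
import numpy as np

# Linear ion drift memristor sketch: R(w) = R_on*(w/D) + R_off*(1 - w/D),
# dw/dt = mu * (R_on/D) * i, with the state hard-bounded to [0, D].
# Parameter values are illustrative, not fitted device parameters.
R_on, R_off, D, mu = 100.0, 16_000.0, 10e-9, 1e-14  # ohm, ohm, m, m^2/(V s)

def simulate(v_of_t, dt=1e-6, w0=0.5 * D):
    """Drive the device with a voltage waveform; return final memristance."""
    w = w0
    for v in v_of_t:
        R = R_on * (w / D) + R_off * (1.0 - w / D)  # instantaneous memristance
        i = v / R
        w += dt * mu * (R_on / D) * i               # linear ion drift
        w = min(max(w, 0.0), D)                     # state bounds
    return R

t = np.arange(0.0, 0.02, 1e-6)
print(simulate(1.5 * np.sin(2 * np.pi * 50 * t)))   # R after one 50 Hz cycle
```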
Abstract:
The use of batteries as energy storage is emerging in automotive and mobile working machine applications. As battery systems become larger, battery management becomes an essential part of the application with regard to battery fault situations and user safety. A properly designed battery management system extends both a single charge cycle and the whole lifetime of the battery pack. In this thesis, the main objectives and principles of a battery management system (BMS) are studied, and a first-order Thevenin model of a lithium-titanate battery cell is built based on laboratory measurements. The cell model is then verified by comparison against the actual battery cell, and its suitability for use in a BMS is studied.
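The first-order Thevenin model named above is the textbook equivalent circuit: terminal voltage equals the open-circuit voltage minus the ohmic drop and one RC polarization branch. A minimal sketch, with an assumed OCV curve and placeholder R/C values rather than the measured lithium-titanate parameters:

```python
import numpy as np

# First-order Thevenin cell model: v = OCV(soc) - i*R0 - v1, where the
# polarization voltage obeys dv1/dt = i/C1 - v1/(R1*C1). Parameter
# values are illustrative placeholders, not laboratory-fitted values.
R0, R1, C1 = 0.002, 0.004, 3000.0   # ohm, ohm, F (assumed)

def ocv(soc):
    """Placeholder open-circuit voltage vs. state of charge."""
    return 2.0 + 0.7 * soc          # V, roughly LTO-like range (assumed)

def terminal_voltage(i_load, soc, dt=1.0):
    """Discrete simulation; i_load holds discharge current (A) per step."""
    v1, out = 0.0, []
    for i in i_load:
        v1 += dt * (i / C1 - v1 / (R1 * C1))  # RC branch dynamics
        out.append(ocv(soc) - i * R0 - v1)    # terminal voltage
    return np.array(out)

print(terminal_voltage(np.full(60, 50.0), soc=0.8)[-1])  # after 1 min at 50 A
```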
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definite way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need for managing this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created a profession of software process engineers. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way that allows easy management of processes, for example process dissemination, process tailoring, and process enactment. The process modeling languages are usually used as tools for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done with process modeling during an approximately five-year period. The research follows the classical engineering research discipline, where the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed. The research applies a variety of research techniques, ranging from literature surveys to qualitative studies done among software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, such as lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to more easily compare different software development situations and to analyze their differences in a systematic way; models also help to share this knowledge with others. A qualitative study done among Finnish software practitioners verifies the conclusions of the other studies in the dissertation: although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work.
However, the potential of these techniques intrigues practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies in which the modeling techniques are used, for example, to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
The purpose of this Master's thesis is to optimize the calculation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies are obliged to calculate customers' electricity bills based on hourly metering data. The growing amount of data also increases the number of required computation tasks. The thesis evaluates alternatives for implementing distributed computing and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to evaluate the differences between parallel and sequential computation. To support the correct calculation of electricity bills, a measurement-tree algorithm was developed.
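The parallel-versus-sequential comparison boils down to computing, per customer, the sum of hourly consumption times the hourly tariff. The sketch below contrasts a sequential baseline with a process-pool variant; tariffs and consumption data are randomly generated placeholders, and the thesis's measurement-tree algorithm is not reproduced here:

```python
from concurrent.futures import ProcessPoolExecutor
import random

# Hourly billing sketch: each customer's bill is sum(consumption * price)
# over the billing period. PRICES models a simple day/night tariff.
HOURS = 24 * 30                                               # one month
PRICES = [0.05 + 0.03 * (h % 24 >= 7) for h in range(HOURS)]  # EUR/kWh

def bill(consumption):
    """One customer's bill from hourly consumption readings (kWh)."""
    return sum(c * p for c, p in zip(consumption, PRICES))

def main():
    rng = random.Random(1)
    customers = [[rng.uniform(0.1, 2.0) for _ in range(HOURS)]
                 for _ in range(1000)]
    sequential = [bill(c) for c in customers]      # sequential baseline
    with ProcessPoolExecutor() as pool:            # distributed variant
        parallel = list(pool.map(bill, customers, chunksize=50))
    print(sum(sequential), sum(parallel))          # identical totals

if __name__ == "__main__":
    main()
```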