27 results for computational analysis
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Among the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players, where the initial coefficients (costs) of the linear payoff functions are subject to perturbations. Investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. In order to illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis provide a sound and relevant basis for developing more sophisticated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
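The parameterized achievement scalarizing function driving such an interactive procedure can be illustrated with a short sketch. The augmented Chebyshev form below is a common formulation from the interactive multiobjective optimization literature and is only assumed here; the thesis may use a different variant, and the names z_ref, w and rho are illustrative.

```python
# A minimal sketch of an augmented achievement scalarizing function, assuming
# the common Chebyshev-type form; the exact variant and parameterization used
# in the thesis may differ.
def achievement_scalarizing(f, z_ref, w, rho=1e-6):
    """f: objective values of a candidate solution, z_ref: reference
    (aspiration) point given by the decision maker, w: positive weighting
    coefficients that steer the interactive search, rho: small augmentation
    term that promotes Pareto optimality of the minimizer."""
    terms = [wi * (fi - zi) for fi, zi, wi in zip(f, z_ref, w)]
    return max(terms) + rho * sum(terms)

# The interactive procedure would repeatedly minimize this function over the
# feasible combinatorial set (e.g. candidate median locations) while the
# decision maker adjusts the weighting coefficients between iterations.
```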
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with an Eulerian description of the phases. Such a description requires the use of fine meshes and small time steps for a proper prediction of the hydrodynamics. This constraint on mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large-scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results: the coarse mesh simulation fails to resolve the mesoscale structures and produces uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations which can accurately predict the hydrodynamics on coarse meshes. This thesis uses the space averaging modeling approach to formulate closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In the formulation of the closure correlation for the space-averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lowered values of solids mass flux. Such an approach is a promising tool in the formulation of appropriate closure models for coarse mesh simulations of large-scale fluidized beds.
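In a filtered two-fluid setting, such a space-averaged closure is typically applied as a multiplicative correction to a microscopic drag law. The sketch below uses the Wen-Yu drag form and a purely hypothetical correction function h(); the thesis derives its own correlation in terms of averaging size, solid volume fraction and wall distance, which is not reproduced here.

```python
import math

def wen_yu_drag(eps_s, rho_g, mu_g, d_p, u_slip):
    """Microscopic Wen-Yu gas-solid drag coefficient beta [kg/(m^3 s)]."""
    eps_g = 1.0 - eps_s
    re = max(eps_g * rho_g * abs(u_slip) * d_p / mu_g, 1e-12)
    cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44
    return 0.75 * cd * eps_s * eps_g * rho_g * abs(u_slip) / d_p * eps_g**-2.65

def corrected_drag(beta_micro, filter_size, eps_s, wall_distance):
    # Placeholder correction h in (0, 1]: the actual correlation must be fitted
    # from fine-grid simulation data, as done in the thesis, as a function of
    # the averaging size, solid volume fraction and distance from the wall.
    h = 1.0 / (1.0 + filter_size * eps_s * (1.0 + 1.0 / (1.0 + wall_distance)))
    return h * beta_micro
```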
Abstract:
Energy efficiency is one of the major objectives that should be achieved in order to use the limited energy resources of the world in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models can improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, and better models for calculating them are needed in the modeling of large-scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be made more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, have been studied in this research. Based on this detailed analysis of the different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O–CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O–CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions. The developed approaches are more accurate than the compared methods; moreover, they provide a complete explanation and detailed analysis of radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, and they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
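For orientation, the sketch below shows how a weighted-sum-of-gray-gases (WSGG) total emissivity of this kind is typically evaluated from fitted correlations. The gray-gas absorption coefficients and weight polynomials are placeholders, not the coefficients derived in the thesis; real coefficient sets are fitted to H2O–CO2 mixture data for the relevant temperatures and compositions.

```python
import math

KAPPA = [0.0, 0.4, 7.0, 80.0]            # gray-gas coefficients [1/(m*atm)], placeholders
B = [[0.40, 1.0e-4], [0.25, 5.0e-5],     # placeholder weight polynomials a_i(T) = b0 + b1*T
     [0.20, -2.0e-5], [0.15, -3.0e-5]]

def wsgg_emissivity(T, p_wc, L):
    """Total emissivity for temperature T [K], combined H2O+CO2 partial
    pressure p_wc [atm] and optical path length L [m]. The first gray gas
    (kappa = 0) represents the transparent spectral windows."""
    eps = 0.0
    for kappa_i, (b0, b1) in zip(KAPPA, B):
        a_i = b0 + b1 * T                        # temperature-dependent weight
        eps += a_i * (1.0 - math.exp(-kappa_i * p_wc * L))
    return eps
```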
Abstract:
This thesis is based on computational chemistry studies on lignans, focusing on the naturally occurring lignan hydroxymatairesinol (HMR) (Papers I–II) and on TADDOL-like conidendrin-based chiral 1,4-diol ligands (LIGNOLs) (Papers III–V). A complete quantum chemical conformational analysis of HMR was previously conducted by Dr. Antti Taskinen. In the works reported in this thesis, HMR was further studied by classical molecular dynamics (MD) simulations in aqueous solution, including torsional angle analysis, a quantum chemical solvation effect study with the COnductor-like Screening MOdel (COSMO), and hydrogen bond analysis (Paper I), as well as from a catalytic point of view, including protonation and deprotonation studies at different levels of theory (Paper II). The computational LIGNOL studies in this thesis comprise a multi-level deterministic structural optimization of the following molecules: 1,1-diphenyl (2Ph), two diastereomers of 1,1,4-triphenyl (3PhR, 3PhS), 1,1,4,4-tetraphenyl (4Ph) and 1,1,4,4-tetramethyl (4Met) 1,4-diol (Paper IV), and a conformational solvation study applying MD and COSMO (Paper V). Furthermore, a computational study on hemiketals, connected with problems encountered in the experimental work of Docent Patrik Eklund's group synthesizing the LIGNOLs from natural products starting from HMR, is briefly described (Paper III).
Abstract:
This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid actions, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools: techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory-scale unit and a 1.7 MWth pilot-scale unit, and it was used to design a conceptual 250 MWth industrial-scale unit. Valuable information was gathered on the behaviour of the small-scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial-sized unit, the selection of particle size, and operability in different load scenarios.
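The overall carbonator balance underlying such a calcium loop (CaO + CO2 ⇌ CaCO3) can be sketched in lumped form as below. This is only an illustrative balance, not the thesis's 1D dynamic model, which additionally resolves hydrodynamics, reaction kinetics and heat transfer along the reactor height; the symbol names are illustrative.

```python
def carbonator_capture_efficiency(F_Ca, X_carb, X_calc, F_CO2):
    """F_Ca: molar flow of sorbent circulating between the reactors [mol/s],
    X_carb / X_calc: CaCO3 conversion of the sorbent leaving the carbonator /
    calciner [-], F_CO2: molar flow of CO2 entering the carbonator [mol/s]."""
    co2_captured = F_Ca * (X_carb - X_calc)   # CO2 bound to the circulating sorbent
    return min(co2_captured / F_CO2, 1.0)     # fraction of incoming CO2 captured
```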
Abstract:
In this doctoral dissertation, low-voltage direct current (LVDC) distribution system stability, supply security and power quality are evaluated by computational modelling and by measurements on an LVDC research platform. Computational models for LVDC network analysis are developed. Time-domain simulation models are implemented in the PSCAD/EMTDC simulation environment and applied to transient behaviour and power quality studies. The LVDC network power loss model is developed in a MATLAB environment and is capable of fast estimation of network and component power losses; it integrates analytical equations that describe the power loss mechanisms of the network components with power flow calculations. For the LVDC network research platform, a monitoring and control software solution is developed and used to deliver measurement data for verification of the developed models and for analysis of the modelling results. In the work, the power loss mechanisms of the LVDC network components and their main dependencies are described, and the energy loss distribution of the LVDC network components is presented. Power quality measurements and current spectra are provided, and harmonic pollution in the DC network is analysed. The transient behaviour of the network is verified through time-domain simulations. Guidelines for DC capacitors in an LVDC power distribution network are introduced. The power loss analysis shows that one of the main optimisation targets for an LVDC power distribution network should be the reduction of no-load losses and the improvement of converter efficiency at partial loads. Low-frequency spectra of the network voltages and currents are shown, and harmonic propagation is analysed. Power quality at the point of common coupling (PCC) of the LVDC network is discussed, and the power quality standard requirements are shown to be met by the LVDC network. The network behaviour during transients is analysed by time-domain simulations, and the network is shown to remain stable during large-scale disturbances. Measurement results from the LVDC research platform confirming this are presented in the work.
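The kind of analytical loss decomposition that such a power loss model combines with power flow results can be sketched as follows. The split into a no-load term, a load-proportional term and a quadratic resistive term is a common approximation for converter losses; the coefficients below are hypothetical, not values from the dissertation.

```python
def converter_losses(P_out, P0=50.0, k1=0.01, k2=2e-5):
    """P_out: converter output power [W]; returns total loss [W].
    P0: no-load loss, k1: load-proportional (switching/conduction) loss
    coefficient, k2: I^2R-type loss coefficient (all hypothetical values)."""
    return P0 + k1 * P_out + k2 * P_out**2

def efficiency(P_out):
    loss = converter_losses(P_out)
    return P_out / (P_out + loss) if P_out > 0 else 0.0

# At partial loads the constant no-load term dominates the loss, which is why
# the analysis identifies no-load loss reduction and partial-load efficiency
# as key optimisation targets.
```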
Abstract:
The application of computational fluid dynamics (CFD) and finite element analysis (FEA) has been growing rapidly in various fields of science and technology. One of the areas of interest is biomedical engineering. Altered hemodynamics inside blood vessels plays a key role in the development of the arterial disease called atherosclerosis, which is a major cause of death worldwide. Atherosclerosis is often treated with a stenting procedure to restore normal blood flow. A stent is a tubular, flexible structure, usually made of metal, which is delivered to and expanded in the blocked artery. Despite the success rate of the stenting procedure, it is often associated with restenosis (re-narrowing of the artery): the presence of a non-biological device in the artery causes inflammation or re-growth of atherosclerotic lesions in the treated vessel. Several factors, including stent design, type of stent expansion, expansion pressure, and the morphology and composition of the vessel wall, influence the restenosis process. Therefore, computational studies are crucial in investigating and optimizing the factors that influence post-stenting complications. This thesis focuses on the stent-vessel wall interactions and the subsequent blood flow in the post-stenting stage of a stenosed human coronary artery. Hemodynamic and mechanical stresses were analysed in three separate stent-plaque-artery models. The plaque was modeled as a multi-layer domain (fibrous cap (FC), necrotic core (NC), and fibrosis (F)) and the arterial wall as a single-layer domain. CFD/FEA simulations were performed using commercial software packages in several models mimicking various stages and morphologies of atherosclerosis. The tissue prolapse (TP) of the stented vessel wall, the distribution of von Mises stress (VMS) inside the various layers of the vessel wall, and the wall shear stress (WSS) along the luminal surface of the deformed vessel wall were evaluated. The results revealed the influence of stenosis size, the thickness of each layer of the atherosclerotic wall, stent strut thickness, the pressure applied for stenosis expansion, and the flow condition on the distribution of stresses. The thicknesses of the FC and NC and the total thickness of the plaque are critical in controlling the stresses inside the tissue, and a small change in the morphology of the artery wall can significantly affect the stress distribution. In particular, the FC is the layer most sensitive to TP and stresses, which could determine the plaque's vulnerability to rupture. The WSS is highly influenced by the deflection of the artery, which in turn depends on the structural composition of the arterial wall layers; together with the stenosis size, these factors can play a decisive role in producing the low WSS values (<0.5 Pa) prone to restenosis. Moreover, the time-dependent flow altered the percentage of the luminal area with WSS values below 0.5 Pa at different time instants, and the non-Newtonian viscosity model of the blood significantly affects the predicted WSS magnitude. The outcomes of this investigation will help to better understand the roles of the individual layers of atherosclerotic vessels and their risk of provoking restenosis at the post-stenting stage. As a consequence, the implementation of such an approach to assess the post-stenting stresses will assist engineers and clinicians in optimizing stenting techniques to minimize the occurrence of restenosis.
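A shear-thinning blood viscosity law of the kind referred to above can be sketched as below. The Carreau model is a commonly used choice for blood rheology, but the thesis does not necessarily use this exact model, and the literature parameter values shown are only indicative.

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Apparent blood viscosity [Pa*s] as a function of shear rate [1/s],
    using the Carreau model with typical literature parameters (assumed here,
    not taken from the thesis)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# Wall shear stress on the lumen then follows as WSS = mu(gamma_w) * gamma_w at
# the wall; regions with WSS below roughly 0.5 Pa are flagged as prone to restenosis.
```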
Abstract:
The main objective of this research is to estimate and characterize heterogeneous mass transfer coefficients in bench- and pilot-scale fluidized bed processes by means of computational fluid dynamics (CFD). A further objective is to benchmark the heterogeneous mass transfer coefficients predicted by fine-grid Eulerian CFD simulations against empirical data presented in the scientific literature. First, a fine-grid two-dimensional Eulerian CFD model with a solid and a gas phase was designed. The model is applied to transient two-dimensional simulations of char combustion in small-scale bubbling and turbulent fluidized beds. The same approach is used to simulate a novel fluidized bed energy conversion process developed for carbon capture, chemical looping combustion operated with a gaseous fuel. In order to analyze the results of the CFD simulations, two one-dimensional fluidized bed models were formulated: the single-phase and bubble-emulsion models were applied to derive the average gas-bed and interphase mass transfer coefficients, respectively. In the analysis, the effects of various fluidized bed operating parameters, such as fluidization velocity, particle and bubble diameter, reactor size, and chemical kinetics, on the heterogeneous mass transfer coefficients in the lower fluidized bed are evaluated extensively. The analysis shows that the fine-grid Eulerian CFD model can predict the heterogeneous mass transfer coefficients quantitatively with acceptable accuracy. Qualitatively, the CFD-based research of the fluidized bed process revealed several new scientific results, such as parametric relationships. The huge variance of seven orders of magnitude in the bed Sherwood numbers reported in the literature could be explained by the change of controlling mechanisms in the overall heterogeneous mass transfer process as the process conditions vary. The research opens new process-specific insights into reactive fluidized bed processes, such as a strong mass transfer control over the heterogeneous reaction rate, a dominance of interphase mass transfer in fine-particle fluidized beds, and a strong chemical kinetics dependence of the average gas-bed mass transfer. The obtained mass transfer coefficients can be applied in fluidized bed models used for various engineering design, reactor scale-up and process research tasks, and they consequently provide enhanced prediction accuracy of the performance of fluidized bed processes.
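As a definition-level illustration, the sketch below backs out an average gas-bed mass transfer coefficient from reactant conversion with a simple single-phase plug-flow view of the bed and expresses it as a bed Sherwood number. It assumes a fully mass-transfer-controlled first-order process and is not the thesis's full single-phase or bubble-emulsion analysis.

```python
import math

def gas_bed_mass_transfer(U, H, eps_s, d_p, C_in, C_out):
    """U: superficial gas velocity [m/s], H: bed height [m], eps_s: solids
    volume fraction [-], d_p: particle diameter [m], C_in / C_out: reactant
    concentrations at the bed inlet / outlet [mol/m^3]. Assumes a fully
    mass-transfer-controlled first-order process in plug flow."""
    a = 6.0 * eps_s / d_p                          # particle surface area per bed volume [1/m]
    return -U * math.log(C_out / C_in) / (a * H)   # average gas-bed coefficient [m/s]

def bed_sherwood(k_bed, d_p, D_g):
    """Bed Sherwood number Sh = k_bed * d_p / D_g, with gas diffusivity D_g [m^2/s]."""
    return k_bed * d_p / D_g
```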
Abstract:
Effective control and limitation of carbon dioxide (CO₂) emissions in energy production are major challenges of science today. Current research activities include the development of new low-cost carbon capture technologies, and among the proposed concepts, chemical looping combustion (CLC) and chemical looping with oxygen uncoupling (CLOU) have attracted significant attention, as they allow intrinsic separation of pure CO₂ from a hydrocarbon fuel combustion process with a comparatively small energy penalty. Both CLC and CLOU utilize the well-established fluidized bed technology, but several technical challenges need to be overcome in order to commercialize the processes. Therefore, the development of proper modelling and simulation tools is essential for the design, optimization, and scale-up of chemical looping-based combustion systems. The main objective of this work was to analyze the technological feasibility of CLC and CLOU processes at different scales using a computational modelling approach. A one-dimensional fluidized bed model frame was constructed and applied to simulations of CLC and CLOU systems consisting of interconnected fluidized bed reactors. The model is based on the conservation of mass and energy, and semi-empirical correlations are used to describe the hydrodynamics, chemical reactions, and heat transfer in the reactors. Another objective was to evaluate the viability of chemical looping-based energy production, and a flow sheet model representing a CLC-integrated steam power plant was developed. The 1D model frame was successfully validated against the operation of a 150 kWth laboratory-sized CLC unit fed by methane. By following certain scale-up criteria, a conceptual design for a CLC reactor system at a pre-commercial scale of 100 MWth was created, after which the validated model was used to predict the performance of the system. As a result, further understanding of the parameters affecting the operation of a large-scale CLC process was acquired, which will be useful for practical design work in the future. The integration of the reactor system and the steam turbine cycle for power production was studied, resulting in a suggested plant layout comprising a CLC boiler system, a simple heat recovery setup, and an integrated steam cycle with a three-pressure-level steam turbine. Possible operational regions of a CLOU reactor system fed by bituminous coal were determined via mass, energy, and exergy balance analysis. Finally, the 1D fluidized bed model was adapted for CLOU, and the performance of a hypothetical 500 MWth CLOU fuel reactor was evaluated by extensive case simulations.
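A basic sizing relation for such chemical looping systems, the oxygen carrier circulation rate needed to deliver the oxygen demand of the fuel, is sketched below. This lumped estimate is only illustrative; the thesis resolves the reactors with a 1D fluidized bed model and full mass and energy balances.

```python
def oxygen_carrier_circulation(m_dot_O2, R_O, delta_X):
    """m_dot_O2: oxygen demand of the fuel [kg/s], R_O: oxygen transport
    capacity of the carrier, (m_ox - m_red) / m_ox [-], delta_X: change in the
    carrier's degree of oxidation between the air and fuel reactors [-]."""
    return m_dot_O2 / (R_O * delta_X)   # required solids circulation rate [kg/s]
```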
Abstract:
Traditional econometric approaches to modeling the dynamics of equity and commodity markets have made great progress in the past decades. However, they assume rationality among the economic agents and do not capture the dynamics that produce extreme events (black swans) arising from deviations from the rationality assumption. The purpose of this study is to simulate the dynamics of silver markets using the novel computational market dynamics approach. To this end, daily closing prices of spot silver from 1 March 2000 to 1 March 2013 have been simulated with the Jabłonska-Capasso-Morale (JCM) model. The maximum likelihood approach has been employed to calibrate the JCM model to the acquired data. Statistical analysis of the simulated series with respect to the actual one has been conducted to evaluate model performance. The model captures well the animal-spirits dynamics present in the data under evaluation.
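The maximum likelihood calibration step can be sketched as below: the model parameters are found by minimizing the negative log-likelihood of the observed price series. The simple Gaussian AR(1) likelihood used here is only a stand-in; the actual JCM model has its own dynamics and likelihood, which are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x):
    """Negative log-likelihood of a Gaussian AR(1) stand-in model x_t = a + b*x_{t-1} + e_t."""
    a, b, sigma = params
    resid = x[1:] - (a + b * x[:-1])           # one-step-ahead prediction errors
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + resid**2 / sigma**2)

def calibrate(prices):
    """Fit the stand-in model to log prices by maximum likelihood."""
    x = np.log(np.asarray(prices, dtype=float))
    res = minimize(neg_log_likelihood, x0=[0.0, 0.9, 0.1], args=(x,),
                   bounds=[(None, None), (None, None), (1e-6, None)])
    return res.x                                # estimated (a, b, sigma)
```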
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole genome level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics and engineering to study and process biological data. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background, which requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second one examines cell lineage specification in mouse embryonic stem cells.
Abstract:
Steam turbines play a significant role in global power generation. Research on low-pressure (LP) steam turbine stages is of special importance to steam turbine manufacturers, vendors, power plant owners and the scientific community, because their efficiency is lower than that of the high-pressure steam turbine stages. Because of condensation, the last stages of the LP turbine experience irreversible thermodynamic losses, aerodynamic losses and erosion of the turbine blades. Additionally, an LP steam turbine requires maintenance due to moisture generation, which also affects turbine reliability. The design of energy-efficient LP steam turbines therefore requires a comprehensive analysis of the condensation phenomena and the corresponding losses occurring in the steam turbine, either by experiments or by numerical simulations. The aim of the present work is to apply computational fluid dynamics (CFD) to enhance the existing knowledge and understanding of condensing steam flows and of the loss mechanisms that arise from the irreversible heat and mass transfer during the condensation process in an LP steam turbine. Throughout this work, two commercial CFD codes were used to model non-equilibrium condensing steam flows. The Eulerian-Eulerian approach was utilised, in which the mixture of vapour and liquid phases was solved by the Reynolds-averaged Navier-Stokes equations. The nucleation process was modelled with the classical nucleation theory, and two different droplet growth models were used to predict the droplet growth rate. The flow turbulence was solved by employing the standard k-ε and the shear stress transport k-ω turbulence models; both models were modified and implemented in the CFD codes. The thermodynamic properties of the vapour and liquid phases were evaluated with real gas models. In this thesis, various topics, namely the influence of real gas properties, turbulence modelling, unsteadiness and the blade trailing edge shape on wet-steam flows, are studied with different convergent-divergent nozzles, a turbine stator cascade and a 3D turbine stator-rotor stage. The simulated results were evaluated and discussed together with the available experimental data in the literature. The grid independence study revealed that an adequate grid size is required to capture the correct trends of the condensation phenomena in LP turbine flows. The study shows that accurate real gas properties are important for the precise modelling of non-equilibrium condensing steam flows. The turbulence modelling study revealed that the flow expansion, and consequently the rate of formation of liquid droplet nuclei and their growth, were affected by the choice of turbulence model, and the losses were rather sensitive to the turbulence modelling as well. Based on the presented results, it can be concluded that a correct computational prediction of wet-steam flows in the LP turbine requires the turbulence to be modelled accurately. The trailing edge shape of the LP turbine blades influenced the liquid droplet formation, distribution and sizes, as well as the loss generation: the semicircular trailing edge shape predicted the smallest droplet sizes, while the square trailing edge shape produced greater losses. The analysis of steady and unsteady calculations of wet-steam flow showed that in unsteady simulations the interaction of wakes in the rotor blade row affected the flow field.
The flow unsteadiness influenced the nucleation and droplet growth processes due to the fluctuation in the Wilson point.
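For reference, the classical nucleation theory rate used to model homogeneous droplet formation in condensing steam can be sketched as below. This is the textbook form without the non-isothermal correction that wet-steam CFD models often add, so it is illustrative rather than the exact formulation of the thesis.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]

def cnt_nucleation_rate(T, S, sigma, rho_v, rho_l, m, q_c=1.0):
    """T: vapour temperature [K], S: supersaturation ratio (> 1), sigma: liquid
    surface tension [N/m], rho_v / rho_l: vapour / liquid density [kg/m^3],
    m: mass of one water molecule [kg], q_c: condensation coefficient."""
    R_v = K_B / m                                              # specific gas constant [J/(kg K)]
    r_crit = 2.0 * sigma / (rho_l * R_v * T * math.log(S))     # critical droplet radius [m]
    dG_crit = 4.0 / 3.0 * math.pi * sigma * r_crit**2          # energy barrier for nucleation [J]
    prefactor = q_c * (rho_v**2 / rho_l) * math.sqrt(2.0 * sigma / (math.pi * m**3))
    return prefactor * math.exp(-dG_crit / (K_B * T))          # nuclei per m^3 per second
```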