33 results for Numerical Computation
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within the bone tissue play a major role in bone (re)modeling. Previous studies have shown that dynamic loading seems to be more important for bone (re)modeling than static loading. The finite element method has been used previously to assess bone strains. However, the finite element method may be limited to static analysis of bone strains due to the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. Further, in vivo implementation of strain gauges on the surfaces of bone has been used previously in order to quantify the mechanical loading environment of the skeleton. However, in vivo strain measurement requires invasive methodology, which is challenging and limited to certain regions of superficial bones only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analyzing in vivo strains, based on the flexible multibody simulation approach, is proposed. In order to investigate the reliability of the proposed approach, three three-dimensional musculoskeletal models in which the right tibia is assumed to be flexible are used as demonstration examples. The models are employed in a forward dynamics simulation in order to predict the tibial strains during a level-walking exercise. The flexible tibial model is developed using the actual geometry of the subject's tibia, obtained from three-dimensional reconstruction of magnetic resonance images. Inverse dynamics simulation based on motion capture data obtained from walking at a constant velocity is used to calculate the desired contraction trajectory for each muscle.
In the forward dynamics simulation, a proportional-derivative servo controller is used to calculate the force each muscle requires to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strains predicted by the models are consistent with literature-based in vivo strain measurements. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and thus be of use in detailed strain estimation of bones in different applications. Consequently, the information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing and reduce osteoporotic bone loss.
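The proportional-derivative control step described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the gains and the contraction values are made up for the example.

```python
# Illustrative PD servo controller driving a muscle toward a desired
# contraction trajectory. Gains (kp, kd) and trajectory values are
# hypothetical, chosen only to show the control law.

def pd_muscle_force(l_des, l_act, v_des, v_act, kp=2000.0, kd=50.0):
    """Muscle force from PD feedback on contraction length and velocity."""
    return kp * (l_des - l_act) + kd * (v_des - v_act)

# One simulation step: the muscle is longer and extending relative to the
# desired trajectory, so the controller commands a positive contractile force.
force = pd_muscle_force(l_des=0.30, l_act=0.29, v_des=0.0, v_act=-0.01)
# force is about 20.5 (force units depend on the chosen gain units)
```

In a full simulation this force would be fed back into the multibody equations of motion at every time step.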
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
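As a rough illustration of the regularized least-squares family these methods build on, the sketch below fits a plain kernel RLS scorer to utility targets and ranks items by the fitted scores. The toy data, the linear kernel and the regularization parameter are invented for the example; the thesis's actual ranking algorithms are more elaborate.

```python
import numpy as np

# Minimal kernel regularized least-squares (RLS) scoring sketch:
# solve (K + lam*I) a = y in the dual, then rank items by K @ a.

def rls_fit(K, y, lam=1.0):
    """Dual RLS solution: (K + lam*I) a = y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def linear_kernel(X, Z):
    return X @ Z.T

# Toy data: utility targets say item 0 is preferred over 1, and 1 over 2.
X = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
y = np.array([2.0, 1.0, 0.0])   # preference-derived utility targets

K = linear_kernel(X, X)
a = rls_fit(K, y, lam=0.1)
scores = K @ a                   # fitted utilities; ordering matches y here
```

The linear solve is the expensive part, which is why the thesis's large-scale variants replace it with linearly scaling and sparse approximations.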
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetic data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied. The accuracy used did not seem to have a notable effect on the identifiability of the parameters. The reuse of the posterior distribution of the parameters across different heat exchanger geometries was studied. It would be computationally most efficient to use the same posterior distribution for different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same across geometries. In the other cases the same posterior distribution can also be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers, the numerical stability of the simulation model was studied and, as a result, a stable algorithm was developed.
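The sampling machinery underlying such a study can be illustrated with a generic random-walk Metropolis sampler. This sketch does not reproduce the thesis's two-step speed-up; the Gaussian target, step size and sample count are invented for the example.

```python
import random, math

# Generic random-walk Metropolis sampler for a 1-D log-posterior.
# In a parameter-identifiability study, the spread of the resulting chain
# indicates how well the data constrain the parameter.

def metropolis(log_post, theta0, n_samples=5000, step=0.5, seed=1):
    rng = random.Random(seed)
    theta, chain = theta0, []
    lp = log_post(theta)
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)      # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy posterior: Gaussian centred at 2.0 with unit variance.
chain = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, theta0=0.0)
mean = sum(chain) / len(chain)   # should be near 2.0
```

A two-step scheme of the kind tested in the thesis would wrap such a sampler around a cheap approximate model first and only evaluate the expensive model for promising proposals.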
Abstract:
Supersonic axial turbine stages typically exhibit lower efficiencies than subsonic axial turbine stages. One reason for the lower efficiency is the occurrence of shock waves. At higher pressure ratios the flow inside the turbine quite easily becomes supersonic if there is only one turbine stage. Supersonic axial turbines can be designed in a smaller physical size than subsonic axial turbines of the same power. This makes them good candidates for turbochargers in large diesel engines, where space can be a limiting factor. The production costs are also lower for one supersonic axial turbine stage than for two subsonic stages. Since supersonic axial turbines are typically low-reaction turbines, they also create lower axial forces to be compensated with bearings than high-reaction turbines do. The effect of changing the stator-rotor axial gap in a small, high-rotational-speed supersonic axial flow turbine is studied at design and off-design conditions, as is the effect of using a pulsatile mass flow at the supersonic stator inlet. Five axial gaps (the axial space between stator and rotor) are modeled using three-dimensional computational fluid dynamics at the design conditions, and three axial gaps at the off-design conditions. Numerical reliability is studied in three independent studies. An additional measurement is made with the design turbine geometry at intermediate off-design conditions and is used to increase the reliability of the modelling. All numerical modelling is made with the Navier-Stokes solver Finflo employing Chien's k-ε turbulence model. The modelling of the turbine shows that the total-to-static efficiency of the turbine decreases when the axial gap is increased at both design and off-design conditions. The efficiency drops almost linearly at the off-design conditions, whereas the efficiency drop accelerates with increasing axial gap at the design conditions.
The modelling of the turbine stator with pulsatile inlet flow reveals that the mass flow pulsation amplitude is decreased at the stator throat. The stator efficiency and pressure ratio have sinusoidal shapes as a function of time. A hysteresis-like behaviour is detected for stator efficiency and pressure ratio as a function of inlet mass flow, over one pulse period. This behaviour arises from the pulsatile inlet flow. It is important to have the smallest possible axial gap in the studied turbine type in order to maximize the efficiency. The results for the whole turbine can also be applied to some extent in similar turbines operating for example in space rocket engines. The use of a supersonic stator in a pulsatile inlet flow is shown to be possible.
Abstract:
Transitional flow past a three-dimensional circular cylinder is a widely studied phenomenon, since this problem is of interest with respect to many technical applications. In the present work, the numerical simulation of flow past a circular cylinder is performed using a commercial CFD code (ANSYS Fluent 12.1) with large eddy simulation (LES) and RANS (k-ε and Shear-Stress Transport (SST) k-ω model) approaches. The turbulent flow for Re_D = 1000 and 3900 is simulated to investigate the force coefficients, Strouhal number, flow separation angle, pressure distribution on the cylinder and the complex three-dimensional vortex shedding in the cylinder wake region. The numerical results extracted from these simulations are in good agreement with the experimental data (Zdravkovich, 1997). Moreover, grid refinement and time-step influence have been examined. Numerical calculation of turbulent cross-flow in a staggered tube bundle continues to attract interest due to its importance in engineering applications as well as the fact that this complex flow represents a challenging problem for CFD. In the present work, time-dependent simulations using the k-ε, k-ω and SST models are performed in two dimensions for a subcritical flow through a staggered tube bundle. The predicted turbulence statistics (mean and r.m.s. velocities) are in good agreement with the experimental data (Balabani, 1996). Turbulent quantities such as the turbulent kinetic energy and its dissipation rate are predicted using the RANS models and compared with each other. The sensitivity to grid and time-step size has been analyzed. A sensitivity study of the model constants has been carried out with the k-ε model, and it has been observed that the predicted turbulence statistics and turbulent quantities are very sensitive to the model constants.
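The vortex-shedding comparison above rests on the Strouhal number, St = f D / U, the dimensionless shedding frequency. The helper below illustrates the definition; the numbers are illustrative, not the simulation data (for a circular cylinder around Re_D = 1000, St is known to be roughly 0.21).

```python
# Strouhal number: shedding frequency f (Hz), cylinder diameter D (m),
# free-stream velocity U (m/s). Values below are illustrative only.

def strouhal(f_shed, diameter, velocity):
    """Dimensionless vortex-shedding frequency St = f*D/U."""
    return f_shed * diameter / velocity

St = strouhal(f_shed=21.0, diameter=0.01, velocity=1.0)  # -> 0.21
```

In practice f would be extracted from the simulated lift-coefficient time series, for example via its dominant FFT peak.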
Abstract:
Some of the most interesting phenomena in present-day materials physics arise from an intricate interplay between myriads of electrons. High-temperature superconductors are the most famous example. Neither classical theories nor models in which the electrons are independent of one another can explain the astonishing effects in strongly correlated electron systems. In certain copper oxides, for example La2CuO4, it is known that the valence electrons, as a consequence of a strong mutual interaction, become localized one by one at the copper atoms in the compound's CuO2 planes. The intrinsic magnetic moment of the charges, the spin, then acquires a decisive role in the electrical and magnetic properties of the material, which in this example can be described by the Heisenberg model, the fundamental theoretical model of microscopic magnetism. But exactly why these compounds become superconducting when doped with excess charges is still an unanswered question. My thesis investigates the influence of impurities on the magnetic properties of the Heisenberg model, a problem of both experimental and theoretical relevance. An established numerical method, a quantum mechanical Monte Carlo technique, has been used to carry out extensive computer simulations of the mathematical model on two dedicated Linux clusters. The work belongs to the field of computational physics. The theoretical models of strongly correlated electron systems, among them the Heisenberg model, are mathematically extremely involved and cannot be solved exactly. Analytical treatments mostly rely on assumptions and simplifications whose effects on the final result are often unclear. In that respect numerical studies can be exact, that is, they can treat the models as they are. Usually both approaches are needed. The common thread of this work has been to numerically test certain highly topical analytical predictions concerning the effects of impurities in the Heisenberg model. Some of them we have been able to confirm on the basis of very accurate data.
But our results have also revealed errors in the analytical predictions, which have since been partly revised. Some of the numerical findings of the thesis have in turn stimulated entirely new theoretical studies.
Abstract:
My thesis deals with how disordered materials conduct electric current. The materials studied include conducting polymers, that is, plastics that conduct current, and more generally organic semiconductors. Electronic components have been built from these materials, and there is hope of printing entire circuits from organic materials. For these applications it is important to understand how the materials themselves conduct electric current. The term disordered materials refers to materials that lack crystal structure. The disorder causes the electron states to become spatially localized, so that an electron in a given state is confined, for example, to one molecule or one segment of a polymer. This can be contrasted with crystalline materials, where an electron state is spread out over the whole crystal (but instead has a well-defined momentum). The electrons (or holes) in a disordered material can move by tunneling between the localized states. Starting from the properties of this tunneling process, one can determine the transport properties of the material as a whole. This is the starting point of the so-called hopping transport model, which I have used. The hopping transport model contains several drastic simplifications. For example, the electron states are treated as point-like, so that the tunneling probability between two states depends only on the distance between them and not on their relative orientation. Another simplification is to treat the quantum mechanical tunneling problem as a classical process, a random walk. Despite these crude approximations, the hopping transport model still exhibits many of the phenomena that appear in the real materials one wants to model. One might say that the hopping transport model is the simplest model of disordered materials that is still interesting to study.
No exact analytical solutions to the hopping transport model have been found, so approximations and numerical methods, often in the form of computer calculations, are used instead. We have used both analytical methods and numerical calculations to study different aspects of the hopping transport model. An important part of the articles on which my thesis is based is the comparison of analytical and numerical results. My share of the work has mainly been to develop the numerical methods and apply them to the hopping transport model, so I focus on this part of the work in the introductory part of the thesis. One way to study the hopping transport model numerically is to directly carry out a random walk process with a computer program. By collecting statistics on the random walk, one can compute various transport properties of the model. This is a so-called Monte Carlo method, since the calculation itself is a random process. Instead of following the trajectories of individual electrons, one can compute the equilibrium probability of finding an electron in each state. One sets up a system of equations relating the probabilities of finding the electron in the different states of the system to the flow, the current, between the states. By solving this system of equations, the probability distribution of the electrons is obtained, from which the current and the transport properties of the material can then be computed. One aspect of the hopping transport model we have studied is the diffusion of the electrons, that is, their random motion. If one considers a collection of electrons, it spreads out over a larger region with time. It is known that the diffusion rate depends on the electric field, so that the electrons spread faster when subjected to an electric field. We have investigated this process and shown that the behaviour is very different in one-dimensional systems compared with two- and three-dimensional ones.
In two and three dimensions the diffusion coefficient depends quadratically on the electric field, whereas the dependence in one dimension is linear. Another aspect we have studied is negative differential conductivity, that is, the current in a material decreasing as the voltage across it is increased. Since this phenomenon has been measured in organic memory cells, we wanted to investigate whether it can also arise in the hopping transport model. It turned out that there are two different mechanisms in the model that can give rise to negative differential conductivity. On the one hand, the electrons can get stuck in traps, dead ends in the system, of a kind that is harder to escape when the electric field is strong. The mean velocity of the electrons, and hence the current in the material, can then decrease with increasing electric field. On the other hand, electrical interaction between the electrons can lead to the same behaviour through a so-called Coulomb blockade. A Coulomb blockade can arise if the number of conduction electrons in the material increases with increasing voltage. The electrons repel each other, and a larger number of electrons can make transport slower, that is, decrease the current.
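The Monte Carlo side of such a study can be sketched with a minimal field-biased random walk between localized sites. This one-dimensional sketch is only an illustration of the simulation idea; the bias parameter and step counts are invented, and real hopping simulations use distance- and energy-dependent rates.

```python
import random

# Minimal 1-D hopping sketch: a carrier hops between nearest-neighbour
# localized sites, with an applied field tilting the hop probabilities.
# Statistics over the walk give the drift velocity (and, similarly,
# the diffusion coefficient from the spread of the displacement).

def random_walk_drift(bias=0.2, n_steps=20000, seed=7):
    """Mean displacement per step of a field-biased nearest-neighbour walk."""
    rng = random.Random(seed)
    p_right = 0.5 + bias / 2.0   # field favours hops along the field
    x = 0
    for _ in range(n_steps):
        x += 1 if rng.random() < p_right else -1
    return x / n_steps

drift = random_walk_drift()      # should be close to the bias, 0.2
```

The alternative route described above, solving the balance (master) equations for the occupation probabilities, gives the same transport quantities without sampling noise.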
Abstract:
The purpose of this Master's thesis is to optimize the computation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies are obliged to calculate customers' electricity bills on the basis of hourly metering data. The growing amount of data also increases the number of computational tasks required. The thesis evaluates alternatives for implementing distributed computing and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to assess the differences between parallel and sequential computation. A measurement-tree algorithm was developed to support the correct calculation of electricity bills.
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, the structural details, and especially the welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This in turn requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
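The core idea of modal superposition for stress recovery can be shown in a few lines: the stress at a detail is a linear combination of precomputed modal stress fields weighted by the time-varying modal coordinates. The numbers below are invented for illustration and are not from the thesis.

```python
import numpy as np

# Modal-superposition stress recovery sketch: per-mode stress fields at
# two hypothetical hot spots (precomputed offline, e.g. by FEA), combined
# with modal coordinates q(t) from the multibody simulation.

modal_stress = np.array([[120.0, -30.0],   # mode 1 stress at hot spots (MPa)
                         [ 15.0,  80.0]])  # mode 2 stress at hot spots (MPa)
q = np.array([0.5, 0.1])                   # modal coordinates at one time step

stress = q @ modal_stress                  # stress sample at the two details
# stress[0] = 0.5*120 + 0.1*15 = 61.5 MPa; stress[1] = -15 + 8 = -7.0 MPa
```

Because the expensive finite element work is done once per mode, evaluating this combination at every simulation time step is cheap, which is what makes stress histories available during, not only after, the dynamic simulation.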
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with an Eulerian description of the phases. Such a description requires fine meshes and small time steps for a proper prediction of the hydrodynamics. This constraint on the mesh and time-step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results. A coarse mesh simulation fails to resolve the mesoscale structures and produces uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations which can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space-averaging modeling approach in the formulation of closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In the analysis of formulating the closure correlation for the space-averaged drag model, the main parameters for the modeling were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lowered values of the solids mass flux.
Such an approach is a promising tool in the formulation of appropriate closure models which can be used in coarse mesh simulations of large scale fluidized beds.
Abstract:
Energy efficiency is one of the major objectives that should be achieved in order to use the world's limited energy resources in a sustainable way. Since radiative heat transfer is the dominant heat transfer mechanism in most fossil fuel combustion systems, more accurate insight and models may improve the energy efficiency of newly designed combustion systems. The radiative properties of combustion gases are highly wavelength dependent, and better models for calculating them are needed in the modeling of large scale industrial combustion systems. With detailed knowledge of the spectral radiative properties of gases, the modeling of combustion processes in different applications can be more accurate. In order to propose a new method for effective non-gray modeling of radiative heat transfer in combustion systems, different models for the spectral properties of gases, including the SNBM, EWBM, and WSGGM, have been studied in this research. Using this detailed analysis of different approaches, the thesis presents new methods for gray and non-gray radiative heat transfer modeling in homogeneous and inhomogeneous H2O-CO2 mixtures at atmospheric pressure. The proposed method is able to support the modeling of a wide range of combustion systems, including the oxy-fired combustion scenario. The new methods are based on implementing pre-obtained correlations for the total emissivity and band absorption coefficient of H2O-CO2 mixtures at different temperatures, gas compositions, and optical path lengths. They can easily be used within any commercial CFD software for radiative heat transfer modeling, resulting in more accurate, simple, and fast calculations. The new methods were successfully used in CFD modeling by applying them to an industrial-scale backpass channel under oxy-fired conditions.
The developed approaches are more accurate than other methods; moreover, they can provide a complete explanation and detailed analysis of the radiative heat transfer in different systems under different combustion conditions. The methods were verified by applying them to several benchmarks, and they showed a good level of accuracy and computational speed compared to other methods. Furthermore, the implementation of the suggested banded approach in CFD software is straightforward.
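The WSGGM mentioned above expresses the total emissivity in the classic weighted-sum-of-gray-gases form, eps = sum_i a_i (1 - exp(-k_i p L)). The sketch below shows only this functional form; the weights and absorption coefficients are invented, whereas real WSGG coefficients depend on temperature and gas composition, as the correlations in the thesis do.

```python
import math

# Weighted-sum-of-gray-gases total emissivity. The weights a_i and
# pressure-absorption coefficients k_i below are placeholders; pL is the
# pressure path length (partial pressure times path length).

def wsgg_emissivity(weights, kappas, pL):
    """eps = sum_i a_i * (1 - exp(-k_i * pL)), one term per gray gas."""
    return sum(a * (1.0 - math.exp(-k * pL)) for a, k in zip(weights, kappas))

eps = wsgg_emissivity(weights=[0.4, 0.3, 0.2], kappas=[0.5, 5.0, 50.0], pL=1.0)
```

Each gray gas saturates at its weight a_i for large pL, so the total emissivity grows monotonically with path length toward the sum of the weights (the remainder being the transparent "clear gas" window).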
Abstract:
Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been increasing financial pressure to reduce structural weight. Furthermore, advances in material technology coupled with continuing advances in design tools and techniques have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances and thus in poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach, finite element analysis and experimental modal analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, with a view to better understanding the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how a binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure, to be used as the stator structure of an outer-rotor direct-drive permanent magnet synchronous generator designed for high-power wind turbines, is studied.
Abstract:
In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse, because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, the reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using "double"-oxide transistors, intended to provide devices that operate at higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis.
To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN in which the adaptive properties of the FGT are taken advantage of. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
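The STDP rule the FGT synapses realize can be illustrated with the standard pair-based textbook form: a pre-before-post spike pair potentiates the synapse and a post-before-pre pair depresses it, with exponentially decaying magnitude. The amplitudes and time constant below are generic placeholder values, not the chip's.

```python
import math

# Pair-based STDP sketch: weight change as a function of the
# post-minus-pre spike time difference dt (in ms).

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight update for one spike pair with timing difference dt."""
    if dt > 0:   # pre fired before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post fired before pre: depression
        return -a_minus * math.exp(dt / tau)

dw_pot = stdp_dw(10.0)    # positive: causal pairing strengthens the synapse
dw_dep = stdp_dw(-10.0)   # negative: anti-causal pairing weakens it
```

In the FGT realization, the exponential decay and the persistent weight both come from the device physics of the floating-gate charge rather than from explicit arithmetic as here.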
Abstract:
Innovative gas-cooled reactors, such as the pebble bed reactor (PBR) and the gas-cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for a physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas-cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air-cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. The calculations provided insight into the adequacy of various turbulence models and into the modelling requirements and issues specific to this application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat under-predicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely under-predicted by the realisable k–ε turbulence model used. An additional calculation with a v²–f turbulence model showed significant improvement in the heat transfer results, most likely due to the better performance of that model in separated flow problems. Further investigations are suggested before CFD is used to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries.
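The cell-porosity computation described above can be approximated, for illustration, by Monte Carlo sampling of a mesh cell against the sphere packing. The actual MATLAB algorithm of the thesis is not reproduced here, so the sampling approach and all names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cell_porosity(cell_min, cell_max, centers, radius, n_samples=200_000):
    """Monte Carlo estimate of the fraction of a box-shaped cell not occupied by spheres."""
    cell_min = np.asarray(cell_min, dtype=float)
    cell_max = np.asarray(cell_max, dtype=float)
    pts = rng.uniform(cell_min, cell_max, size=(n_samples, 3))
    # squared distance from every sample point to every sphere centre
    d2 = ((pts[:, None, :] - np.asarray(centers, dtype=float)[None, :, :]) ** 2).sum(axis=-1)
    inside_solid = (d2 <= radius ** 2).any(axis=1)
    return 1.0 - inside_solid.mean()

# Unit cell with one pebble of radius 0.5 at its centre; the analytical
# porosity is 1 - (4/3) * pi * 0.5**3, roughly 0.476.
phi = cell_porosity([0, 0, 0], [1, 1, 1], centers=[[0.5, 0.5, 0.5]], radius=0.5)
```

A per-cell porosity field of this kind is what allows power and temperature data to be mapped between the discrete pebble representation and a continuum thermal-hydraulics mesh.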
It is suggested that numerical-modelling viewpoints be included in the planning of experiments, both to ease the challenging model construction and simulation work and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, the multi-physical aspects of experiments should also be considered and documented in reasonable detail.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form, and hence approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these, particle filters are numerical methods that approximate the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo sampling. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can even cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be carried out by Markov chain Monte Carlo (MCMC) methods, which require the unnormalized posterior distribution of the parameters and a proposal distribution.
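The simplest particle filter, the bootstrap filter, uses the dynamic model itself as the importance distribution. It can be sketched as follows for a toy one-dimensional non-linear model; the model and all parameters are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(ys, n=1000, q=0.1, r=0.5):
    """Bootstrap PF for x_k = 0.5 x_{k-1} + sin(x_{k-1}) + q w_k, y_k = x_k + r v_k."""
    x = rng.normal(0.0, 1.0, n)                 # initial particle cloud
    means = []
    for y in ys:
        # propagate through the dynamic model (importance distribution = prior)
        x = 0.5 * x + np.sin(x) + q * rng.normal(size=n)
        # weight particles by the Gaussian measurement likelihood
        logw = -0.5 * ((y - x) / r) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))
        # multinomial resampling to counteract weight degeneracy
        x = x[rng.choice(n, size=n, p=w)]
    return np.array(means)

# simulate a short trajectory from the same model and filter it
T, q, r = 50, 0.1, 0.5
x_true, xs, ys = 0.5, [], []
for _ in range(T):
    x_true = 0.5 * x_true + np.sin(x_true) + q * rng.normal()
    xs.append(x_true)
    ys.append(x_true + r * rng.normal())
est = bootstrap_pf(np.array(ys))
```

The prior is only one possible importance distribution; the choice matters because poorly placed proposals leave most particles with negligible weight, which is exactly the degeneracy issue behind the Lᵖ convergence analysis mentioned above.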
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal is the Gaussian, whose covariance matrix must be well tuned; adaptive MCMC methods can be used for this purpose. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
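For context, the classic adaptive Metropolis scheme of Haario et al. re-estimates the Gaussian proposal covariance from the chain history; the thesis proposes replacing this history-based update with a variational Bayesian adaptive Kalman filter. The sketch below shows only the classic scheme, on an illustrative two-dimensional Gaussian target (not a model from the thesis).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 2-D Gaussian target with correlated components.
TARGET_COV = np.array([[2.0, 0.8], [0.8, 1.0]])
TARGET_PREC = np.linalg.inv(TARGET_COV)

def log_target(theta):
    return -0.5 * theta @ TARGET_PREC @ theta

def adaptive_metropolis(n_iter=20_000, d=2, eps=1e-6):
    sd = 2.4 ** 2 / d                     # Haario's dimension-dependent scaling
    theta, lp = np.zeros(d), 0.0
    chain = np.empty((n_iter, d))
    cov = 0.1 * np.eye(d)                 # initial proposal covariance
    for i in range(n_iter):
        if i > 1_000 and i % 100 == 0:    # adapt periodically after a warm-up
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```

The history-based covariance estimate converges slowly and reacts sluggishly to local structure, which motivates replacing it with a recursive filtering update of the kind proposed in the thesis.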