16 results for Nuclear engineering inverse problems

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 100.00%

Abstract:

Statistical analyses of measurements that can be described by statistical models are of the essence in astronomy and in scientific inquiry in general. The sensitivity of such analyses, of the modelling approaches, and of the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential in estimating the properties of the population of such planets, and in the race to detect Earth analogues, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We also discuss these techniques and demonstrate their usefulness by using various examples and detailed descriptions of the respective mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to obtain as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
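The Bayesian hypothesis testing mentioned in the abstract can be illustrated with a toy model comparison on synthetic Doppler data. This is a minimal sketch, not the thesis's method: the period and phase of a hypothetical planet's signal are assumed known, the noise is Gaussian, and only the velocity amplitude is marginalised, by simple quadrature over a uniform prior.

```python
import math
import random

random.seed(0)

# Synthetic radial-velocity data: a weak sinusoidal signal of a hypothetical
# planet buried in Gaussian noise.  All numbers are illustrative.
P, K_true, sigma = 10.0, 3.0, 2.0          # period [d], amplitude [m/s], noise [m/s]
t = [0.7 * i for i in range(60)]
v = [K_true * math.sin(2 * math.pi * ti / P) + random.gauss(0, sigma) for ti in t]

def log_like(K):
    """Gaussian log-likelihood for a sinusoid of amplitude K
    (period and phase assumed known, for simplicity)."""
    r2 = sum((vi - K * math.sin(2 * math.pi * ti / P)) ** 2 for ti, vi in zip(t, v))
    n = len(v)
    return -0.5 * r2 / sigma ** 2 - n * math.log(sigma * math.sqrt(2 * math.pi))

# Marginal likelihoods: M0 = "no planet" (K = 0), M1 = sinusoid with a
# uniform prior K ~ U(0, 10), integrated by midpoint quadrature.
logZ0 = log_like(0.0)
Ks = [10.0 * (j + 0.5) / 200 for j in range(200)]
m = max(log_like(K) for K in Ks)
logZ1 = m + math.log(sum(math.exp(log_like(K) - m) for K in Ks) / len(Ks))

log_bayes_factor = logZ1 - logZ0
print("log Bayes factor (planet vs no planet):", round(log_bayes_factor, 1))
```

A large positive log Bayes factor favours the planet model; the marginalisation over the amplitude automatically penalises the extra model complexity.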

Relevance: 100.00%

Abstract:

Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems. An appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and a computer program that reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and in the steel structures of flat and ridge roofs. This thesis demonstrates that the time-consuming modelling phase is significantly shortened, modelling errors are reduced, and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, was tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings and penalty factors for the constraints.
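A selection rule of this kind can be sketched in a few lines. The comparison below is Deb's well-known feasibility rule, used here as a stand-in for the thesis's rule (which the abstract does not specify): it requires no penalty weights, and the small constrained problem is purely illustrative.

```python
import random

random.seed(2)

def f(x):
    """Objective to be minimised (illustrative)."""
    return x[0] ** 2 + x[1] ** 2

def violation(x):
    """Total constraint violation; 0 means feasible (constraint: x0 + x1 >= 1)."""
    return max(0.0, 1.0 - (x[0] + x[1]))

def better(a, b):
    """Penalty-free selection: a feasible point beats an infeasible one; among
    infeasible points the smaller violation wins; among feasible points the
    smaller objective wins.  No constraint weight factors are needed."""
    va, vb = violation(a), violation(b)
    if va == 0.0 and vb == 0.0:
        return f(a) < f(b)
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va < vb

# a simple (1+1) evolution strategy driven only by the selection rule
x = [random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)]
for _ in range(3000):
    child = [g + random.gauss(0.0, 0.1) for g in x]
    if better(child, x):
        x = child

print("best point:", [round(g, 2) for g in x], " f =", round(f(x), 3))
```

The rule first drives the search into the feasible region and then minimises the objective there, which is why no penalty-factor tuning is involved.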

Relevance: 100.00%

Abstract:

In this thesis X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are assumed to be random variables and, in contrast to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform in the plane is presented, and the widely used filtered backprojection algorithm is derived. The traditional regularisation methods are presented in sufficient detail to ground the Bayesian approach. The measurements are photon counts at the detector pixels, so the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed Gaussian, although the electronic noise caused by the measurement device can change the error structure; the assumption of a Gaussian measurement error is discussed. The thesis discusses the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects produced by different prior distributions are shown. The use of a prior is shown to be obligatory in the case of a severely ill-posed problem.
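The Poisson count model and the role of a prior can be illustrated for a single detector pixel. This is a minimal sketch under assumed numbers (source intensity N0, path length L, and a hypothetical exponential prior enforcing positivity), not the thesis's reconstruction method:

```python
import math
import random

random.seed(3)

# Transmission measurement model: photon counts at a detector pixel follow
# N ~ Poisson(N0 * exp(-mu * L)).  The values below are illustrative.
N0, L, mu_true = 500.0, 1.0, 1.2
lam = N0 * math.exp(-mu_true * L)

# draw one Poisson count by inversion (the stdlib has no Poisson sampler)
u, k, p = random.random(), 0, math.exp(-lam)
c = p
while u > c:
    k += 1
    p *= lam / k
    c += p
N = k

def log_posterior(mu, prior_rate=0.1):
    """Poisson log-likelihood plus an exponential prior enforcing mu >= 0
    (a simple stand-in for the priors discussed in the thesis)."""
    if mu < 0:
        return -math.inf
    lam = N0 * math.exp(-mu * L)
    return N * math.log(lam) - lam - prior_rate * mu   # constants dropped

# MAP estimate by a dense grid search over mu
grid = [3.0 * i / 3000 for i in range(3001)]
mu_map = max(grid, key=log_posterior)
print("true mu:", mu_true, " MAP estimate:", round(mu_map, 2))
```

With many pixels and a severely ill-posed geometry, the likelihood alone no longer pins the solution down, which is where the choice of prior becomes the decisive part of the Bayesian formulation.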

Relevance: 100.00%

Abstract:

Finland's industry has traditionally been highly energy-intensive. The wood-processing industry, which got its real start in the mid-1800s, used large amounts of energy, as did the metal-refining companies at a later stage. The war years, with their energy rationing, demonstrated concretely to the general public as well as to specialists that a sufficient supply of energy is a vital condition for our industry and thus for our country. The post-war period was characterised by an ever faster expansion of electric power capacity based on hydro and steam power, an expansion in which the domestic engineering industry participated to a large extent. But already in the 1950s hydro power was largely built out, so the interest of the private as well as the state sector turned increasingly to atomic energy, favoured especially in the USA. After advanced studies in nuclear physics and nuclear engineering at the International School of Nuclear Science and Engineering in the USA, the author of these lines participated intensively (first as an Ahlström employee and later as managing director of Finnatom) in the development work in the nuclear field carried out not only by the electricity producers but also by the engineering industry in our country. It was therefore natural for me to choose as the subject of my doctoral thesis the introduction of nuclear power in Finland, with a special focus on the role of the domestic engineering industry. I posed the following research questions: a. When and how did the introduction of nuclear power in Finland take place? b. What were the reasons for and the results of this introduction? c. What was the role of the domestic engineering industry? A thorough study of the literature, including meeting minutes and newspaper reports, together with personally conducted interviews with some thirty of the real actors in the long and complicated introduction process, led to a theory whose correctness I consider myself to have been able to prove. The role of the domestic engineering industry was highly central. Its representatives succeeded, among other things by referring to the experience gained from the expansion of hydro and steam power and from the construction of the subcritical pile YXP and the research reactor TRIGA, in convincing the decision-makers that the industry possessed the competence needed to compensate for the lack of competence that could be observed in certain areas of the Soviet nuclear power plant supplier. The domestic deliveries also had a positive effect on the operating results, especially in the case of Loviisa. The introduction process, which spanned the period from the late 1950s to the early 1980s, is described in detail in the thesis, following among others J. W. Creswell's guidelines. The introduction resulted in competitive electric power, an impulse for the founding of new companies, for example Nokia Elektronik, and a clear raising of the technical level of our industry, including large-scale nuclear manufacturing. The Chernobyl disaster at the end of April 1986, however, meant that the development paused for a couple of decades. The experience from the introduction phase can hopefully be fully utilised now that the expansion of nuclear power has been resumed in our country.

Relevance: 100.00%

Abstract:

This master's thesis examines the uncertainties of the Level 2 probabilistic risk analysis of the Loviisa nuclear power plant. Level 2 risk studies investigate nuclear power plant accidents in which part of the reactor's radioactive material is released into the environment. The main result of these studies is the annual frequency of a large release, which is essentially a statistical expected value based on actual plant history. The credibility of this expected value can be improved by taking into account the most significant uncertainties involved in the calculation. Uncertainties in the calculation arise, among other things, from severe reactor accident phenomena, safety system components, human actions, and undefined parts of the reliability model. The thesis describes how uncertainty analyses are integrated into the probabilistic risk analyses of the Loviisa nuclear power plant. This is accomplished with the auxiliary programs PRALA and PRATU developed in the thesis, which make it possible to add uncertainty parameters derived from plant history to the reliability data of the risk analyses. In addition, as a calculation example, the thesis computes a confidence interval describing the variation of the annual large-release frequency of the Loviisa plant. This example is based mainly on conservative uncertainty estimates, not on actual statistical uncertainties. Based on the results of the example, the large-release frequency of Loviisa has a wide range of variation; an error factor of 8.4 was obtained with the current uncertainty parameters. The confidence interval of the large-release frequency can, however, be narrowed in the future by utilising uncertainty parameters based on actual plant history.
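An error factor of the kind quoted above can be reproduced in spirit with a small Monte Carlo propagation. This is an illustrative sketch only: the point estimate, the lognormal spread of each uncertainty contributor, and the definition EF = sqrt(95th/5th percentile) (the usual convention for approximately lognormal quantities in PRA) are all assumptions, not values from the thesis.

```python
import math
import random

random.seed(1)

def error_factor(samples):
    """Error factor as commonly defined in PRA for (approximately)
    lognormal quantities: EF = sqrt(95th percentile / 5th percentile)."""
    s = sorted(samples)
    p05 = s[int(0.05 * len(s))]
    p95 = s[int(0.95 * len(s))]
    return math.sqrt(p95 / p05)

# Hypothetical large-release frequency: the product of a point estimate and
# lognormal uncertainty multipliers from several contributors (phenomena,
# component reliability, human actions).  Values are illustrative only.
point_estimate = 1e-6          # per year (assumed)
sigmas = [0.5, 0.8, 0.6]       # log-space std devs of the contributors (assumed)

samples = []
for _ in range(20000):
    freq = point_estimate
    for s in sigmas:
        freq *= random.lognormvariate(0.0, s)
    samples.append(freq)

ef = error_factor(samples)
print("Monte Carlo error factor:", round(ef, 1))
```

Narrowing the spread of any contributor (e.g. by replacing a conservative estimate with plant-history data) directly shrinks the resulting error factor, which is the mechanism the abstract's last sentence refers to.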

Relevance: 100.00%

Abstract:

Recently, Small Modular Reactors (SMRs) have attracted increased public discussion. While new build projects for large nuclear power plants are facing challenges, attention is turning to small modular reactors. One particular project challenge arises in the area of nuclear licensing, which plays a significant role in new build projects, affecting their quality as well as their costs and schedules. This dissertation - positioned in the field of nuclear engineering but also with a significant section in the field of systems engineering - examines nuclear licensing processes and their suitability for the characteristics of SMRs. The study investigates the licensing processes in selected countries, as well as in other safety-critical industry fields. Viewing the licensing processes and their separate licensing steps in terms of SMRs, the study adopts two different analysis theories for review and comparison. The primary data consist of a literature review, semi-structured interviews, and questionnaire responses concerning licensing processes and practices. The result of the study is a recommendation for a new, optimised licensing process for SMRs. The most important SMR-specific feature, in terms of licensing, is the modularity of the design. Here, modularity refers to multi-module SMR designs, which create new challenges in the licensing process. As this study focuses on Finland, the main features of the new licensing process are adapted to the current Finnish licensing process, aiming to achieve the main benefits with minimal modifications to the current process. The application of the new licensing process is developed using Systems Engineering, Requirements Management, and Project Management practices and tools. Nuclear licensing involves a large amount of data and documentation, which needs to be managed in a suitable manner throughout the new build project and then during the whole life cycle of the nuclear power plant.
To enable a smooth licensing process, and thus ensure the success of a new build nuclear power plant project, management processes and practices play a significant role. This study contributes to the theoretical understanding of how licensing processes are structured and how they are put into action in practice. The findings clarify the suitability of different licensing processes and their selected licensing steps for SMR licensing. The results combine the most suitable licensing steps into a new licensing process for SMRs, and are also extended to the concept of licensing management practices and tools.

Relevance: 30.00%

Abstract:

Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth, or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where the generation of spurious oscillations or smearing must be precluded. This work is devoted to the development of an efficient numerical technique for pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts in the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems, the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts in the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
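The core idea, particles carrying the solution exactly along characteristics and a monotone projection back onto a fixed Eulerian grid, can be sketched for 1D linear advection. The nearest-particle projection below is a crude stand-in for the thesis's fast monotone projection, and all numbers are illustrative:

```python
# 1D linear advection u_t + a*u_x = 0 on [0,1) with periodic boundaries.
# Particles carry the solution exactly along characteristics x(t) = x0 + a*t;
# a nearest-particle projection (monotone: it creates no new extrema)
# transfers the solution back onto a fixed Eulerian grid.
a, T, n = 0.3, 1.0, 100

def u0(x):
    """Initial condition: a square pulse, i.e. a steep front."""
    return 1.0 if 0.1 <= x <= 0.3 else 0.0

# particle positions and carried values at t = 0
px = [(i + 0.5) / n for i in range(n)]
pv = [u0(x) for x in px]

# exact characteristic transport: positions move, carried values are unchanged
px = [(x + a * T) % 1.0 for x in px]

# monotone projection: each grid cell takes the value of the nearest particle
grid = [0.0] * n
for j in range(n):
    xc = (j + 0.5) / n
    def dist(x, xc=xc):
        return min(abs(x - xc), 1.0 - abs(x - xc))   # periodic distance
    grid[j] = pv[min(range(n), key=lambda i: dist(px[i]))]

exact = [u0(((j + 0.5) / n - a * T) % 1.0) for j in range(n)]
err = sum(abs(g - e) for g, e in zip(grid, exact)) / n
print("L1 projection error:", err)
```

Because the particles move without any grid-based differencing, the front is neither smeared nor made oscillatory; only the projection step limits the accuracy.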

Relevance: 30.00%

Abstract:

This master’s thesis studies and presents, based on the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, the individuals being candidate solutions. The next population of candidate solutions is formed from the good properties of the current population by applying different mutation and crossover operations. Evolutionary algorithm applications related to software engineering were searched for in the literature, then classified and presented; the necessary basics of evolutionary algorithms are also covered. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling related to parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.

Relevance: 30.00%

Abstract:

This thesis studies the problems a software architect faces in his work, and their causes. The purpose of the study is to identify potential factors causing problems in system integration and software engineering. Of special interest are the non-technical factors behind different kinds of problems. The study was carried out by interviewing professionals who took part in an e-commerce project at a corporation. The interviewed professionals consisted of architects from the technical implementation projects, the corporation's architect team leader, different kinds of project managers, and a CRM manager. A specific list of themes was used to guide the interviews. The recorded interviews were transcribed and then classified using the ATLAS.ti software. The basics of e-commerce, software engineering, and system integration are also described. The differences between e-commerce, e-business, and traditional business are presented, as are the basic types of e-commerce. Concerning software engineering, the software life span and the general problems of software engineering and software design are covered. In addition, the general problems of system integration and the special requirements set by e-commerce are described. The thesis ends with a description of the problems found in the study, and of areas of software engineering that could be developed so that similar problems could be avoided in the future.

Relevance: 30.00%

Abstract:

The purpose of gamma spectrometry and of gamma and X-ray tomography of nuclear fuel is to determine radionuclide concentrations as well as the integrity and deformation of the fuel. The aims of this thesis have been to establish the basics of gamma spectrometry and tomography of nuclear fuel, to determine the operational mechanisms of the corresponding measurement equipment, and to identify problems related to these measurement techniques. In gamma spectrometry of nuclear fuel, the gamma-ray flux emitted by unstable isotopes is measured using high-resolution gamma-ray spectroscopy; the production of the unstable isotopes correlates with various physical fuel parameters. In gamma emission tomography, the gamma-ray spectrum of irradiated nuclear fuel is recorded for several projections. In X-ray transmission tomography of nuclear fuel, a radiation source emits a beam, and the intensity, attenuated by the nuclear fuel, is registered by detectors placed opposite the source. When gamma emission or X-ray transmission measurements are combined with tomographic image reconstruction methods, it is possible to create sectional images of the interior of the nuclear fuel. MODHERATO is a computer code that simulates the operation of radioscopic and tomographic devices; it is used to predict and optimise the performance of imaging systems. In connection with X-ray tomography, the author has performed MODHERATO simulations. Gamma spectrometry and gamma and X-ray tomography are promising non-destructive examination methods for understanding fuel behaviour under normal, transient, and accident conditions.
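The transmission measurement described above follows the Beer-Lambert law: each detector registers I = I0 · exp(-∫ μ dl) along its beam. A minimal forward-model sketch with an entirely hypothetical attenuation map (a high-attenuation disc standing in for a fuel cross-section); this is not the MODHERATO code:

```python
import math

# Transmission tomography forward model (Beer-Lambert law): the intensity
# registered by a detector is I = I0 * exp(-line integral of mu).
# Minimal sketch: a 2D attenuation map sampled on a grid, horizontal rays.
n = 32
I0 = 1e5                        # source intensity (illustrative)
pix = 1.0 / n                   # pixel size

def mu(i, j):
    """Hypothetical attenuation map: a dense disc inside a weak background."""
    x, y = (i + 0.5) / n - 0.5, (j + 0.5) / n - 0.5
    return 5.0 if x * x + y * y < 0.1 else 0.1

# one parallel projection: ray j passes horizontally through row j
sinogram_row = []
for j in range(n):
    line_integral = sum(mu(i, j) for i in range(n)) * pix
    sinogram_row.append(I0 * math.exp(-line_integral))

# rays through the disc are attenuated far more than those missing it
print("edge ray:", round(sinogram_row[0]), " centre ray:", round(sinogram_row[n // 2]))
```

Recording such projections from many angles yields the sinogram from which reconstruction methods such as filtered backprojection recover the sectional image.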

Relevance: 30.00%

Abstract:

TRIZ is a well-known tool for creative problem solving based on analytical methods. This thesis suggests an adapted version of the contradiction matrix, a powerful TRIZ tool, together with a few principles based on the concepts of original TRIZ. The proposed version is believed to aid in problem solving, especially for problems encountered in chemical process industries with unit operations. In addition, this thesis should help new process engineers to recognise the importance of the various available methods for creative problem solving and to learn the TRIZ method. The thesis mainly provides an idea of how to modify a TRIZ-based method according to one's requirements, so that it fits a particular niche area and solves problems efficiently in a creative way. In this case, the contradiction matrix developed is based on a review of common problems encountered in the chemical process industry, particularly in unit operations, and the resolutions are based on approaches used in the past to handle those issues.
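The data structure behind such an adapted matrix is simple: rows and columns are engineering parameters, and each cell lists inventive principles to try. The sketch below is purely illustrative; the parameter names and matrix entries are hypothetical, not the thesis's actual adapted matrix.

```python
# A toy "contradiction matrix" in the adapted spirit described above:
# keys are (improving parameter, worsening parameter) pairs relevant to unit
# operations, values are lists of inventive principle numbers to try.
# All entries are illustrative placeholders.
PRINCIPLES = {
    1: "Segmentation",
    10: "Preliminary action",
    28: "Mechanics substitution",
    35: "Parameter changes",
}

MATRIX = {
    ("separation efficiency", "energy consumption"): [35, 10],
    ("throughput", "pressure drop"): [1, 28],
}

def suggest(improving, worsening):
    """Look up the inventive principles suggested for a contradiction between
    an improving parameter and a worsening one."""
    ids = MATRIX.get((improving, worsening), [])
    return [PRINCIPLES[i] for i in ids]

print(suggest("throughput", "pressure drop"))
```

Adapting the method to a niche area then amounts to choosing domain-relevant parameters and filling the cells from past resolutions, exactly the review work the abstract describes.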

Relevance: 30.00%

Abstract:

This doctoral thesis describes the development work performed on the leach and purification sections of the electrolytic zinc plant in Kokkola to increase the efficiency of these two stages, and thus the competitiveness of the plant. Since metallic zinc is a typical bulk product, improving the competitiveness of a plant is mostly a matter of decreasing unit costs. The problems in the leaching were the low recovery of valuable metals from raw materials, and the fact that the available technology offered only complicated and expensive processes to overcome this problem. In the purification, the main problem was the consumption of zinc powder - up to four to six times the stoichiometric demand. This reduced the capacity of the plant, as this zinc is re-circulated through the electrolysis, which is the absolute bottleneck of a zinc plant. Low selectivity gave low-grade and low-value precipitates for further processing into metallic copper, cadmium, cobalt, and nickel. Knowledge of the underlying chemistry was poor, and process interruptions causing losses of zinc production were frequent. The studies on leaching comprised the kinetics of ferrite leaching and jarosite precipitation, as well as the stability of jarosite in acidic plant solutions. A breakthrough came with the finding that jarosite could precipitate under conditions where ferrite would leach satisfactorily. Based on this discovery, a one-step process for the treatment of ferrite was developed. In the plant, the new process almost doubled the recovery of zinc from ferrite in the same equipment in which the two-step jarosite process was operated at that time. In a later expansion of the plant, the investment savings were substantial compared with the other technologies available. In the solution purification, the key finding was that Co, Ni, and Cu formed specific arsenides in the “hot arsenic zinc dust” step. This was utilized in the development of a three-step purification stage - removal of Cu, Co, and Cd - based on fluidized bed technology in all three steps. Both precipitation rates and selectivity increased, which strongly decreased the zinc powder consumption through a substantially suppressed evolution of hydrogen gas. Better selectivity improved the value of the precipitates: cadmium, which had caused environmental problems at the copper smelter, was reduced from the normally reported 1-3% down to 0.05%, and a cobalt cake with 15% Co was easily produced in laboratory experiments in the cobalt removal. The zinc powder consumption in the plant for a solution containing Cu, Co, Ni, and Cd (1000, 25, 30 and 350 mg/l, respectively) was around 1.8 g/l, i.e. only 1.4 times the stoichiometric demand - about a 60% saving in powder consumption. Two processes for the direct leaching of the concentrate under atmospheric conditions were developed, one of which was implemented in the Kokkola zinc plant. Compared with the existing pressure leach technology, savings were obtained mostly in investment. The scientific basis for the most important processes and process improvements is given in the thesis, including mathematical modeling and thermodynamic evaluation of the experimental results and of the hypotheses developed. Five of the processes developed in this research and development program were implemented in the plant and are still in operation. Even though these processes were developed with the focus on the plant in Kokkola, they can also be implemented at low cost in most zinc plants globally, and thus have great significance for the development of the electrolytic zinc process in general.
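The quoted powder consumption can be checked against the stoichiometric demand. A short worked example, assuming simple 1:1 cementation (Zn + Me²⁺ → Zn²⁺ + Me) for each divalent impurity and standard molar masses:

```python
# Stoichiometric zinc demand for cementation: each divalent impurity ion is
# displaced by one zinc atom (Zn + Me2+ -> Zn2+ + Me).  The concentrations
# are those quoted for the purification example; molar masses are in g/mol.
M = {"Zn": 65.38, "Cu": 63.55, "Co": 58.93, "Ni": 58.69, "Cd": 112.41}
conc_mg_per_l = {"Cu": 1000, "Co": 25, "Ni": 30, "Cd": 350}

# moles of impurities per litre -> grams of zinc per litre (1:1 molar ratio)
stoich_zn = sum(c / 1000 / M[m] for m, c in conc_mg_per_l.items()) * M["Zn"]
actual = 1.8                                  # g/l consumed in the plant
print("stoichiometric Zn demand: %.2f g/l" % stoich_zn)
print("actual / stoichiometric: %.1f" % (actual / stoich_zn))
```

The stoichiometric demand works out to about 1.29 g/l, so 1.8 g/l is indeed roughly 1.4 times the stoichiometric amount, consistent with the figure quoted above.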

Relevance: 30.00%

Abstract:

In today’s world, because of rapid advancement in technology and business, requirements are often not clear and change continuously during the development process. These changing requirements make software development very difficult. Using traditional software development methods such as the waterfall method is not a good option, as traditional methods are not flexible with respect to requirements, and the software can end up late and over budget. For developing high-quality software that satisfies the customer, organisations can use software development methods such as agile methods, which are flexible to changing requirements at any stage of the development process. Agile methods are iterative and incremental methods that can accelerate the delivery of initial business value through continuous planning and feedback, with close communication between the customer and the developers. The main purpose of this thesis is to find out the problems in traditional software development and to show how agile methods reduce those problems. The study also examines the different success factors of agile methods, the success rate of agile projects, and a comparison between traditional and agile software development.