55 results for Monte Carlo.
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight, so that all of the various parameters match the “design” or “nominal” values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors so that the error model matches the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e., the differential evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model.
A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate the real experimental validations. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
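The MCMC-based identification step described above can be sketched in miniature. In the following toy example, an invented planar two-link arm stands in for the 10-DOF hybrid robot, and link-length errors stand in for the full DH/POE parameter set (these simplifications are assumptions for illustration, not the thesis's model); a random-walk Metropolis sampler recovers the parameter errors from simulated pose measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "real" robot: nominal link lengths plus unknown manufacturing errors
nominal = np.array([1.00, 1.00])
true_err = np.array([0.02, -0.03])

Q = rng.uniform(-np.pi, np.pi, size=(30, 2))          # measured joint configurations

def fk(lengths, Q):
    """End-effector (x, y) of a planar 2R arm for each joint configuration."""
    l1, l2 = lengths
    return np.stack([l1*np.cos(Q[:, 0]) + l2*np.cos(Q[:, 0] + Q[:, 1]),
                     l1*np.sin(Q[:, 0]) + l2*np.sin(Q[:, 0] + Q[:, 1])], axis=1)

# noisy end-effector measurements of the "real" robot
meas = fk(nominal + true_err, Q) + rng.normal(0, 1e-3, size=(30, 2))

def log_post(delta, sigma=1e-3):
    r = fk(nominal + delta, Q) - meas                 # pose residuals of the error model
    return -0.5*np.sum(r**2)/sigma**2                 # Gaussian likelihood, flat prior

delta, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for _ in range(20000):
    prop = delta + rng.normal(0, 5e-4, 2)             # random-walk proposal
    lp_p = log_post(prop)
    if np.log(rng.random()) < lp_p - lp:              # Metropolis acceptance
        delta, lp = prop, lp_p
    samples.append(delta)

est_err = np.mean(samples[10000:], axis=0)            # posterior mean, burn-in discarded
print(est_err)   # should recover roughly [0.02, -0.03]
```

The same structure carries over to the real calibration problem: only `fk` (the kinematic model) and the parameter vector change.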
Abstract:
This Master's thesis presents the steps required to set up the Serpent-ARES calculation chain implemented as part of this work. Generating the homogenized group constant libraries required by the ARES reactor core simulator with Serpent makes the calculation chain independent of the possible error sources of the other reactor core calculation chains in use. By using a reactor physics code based on the Monte Carlo method, the group constant libraries are generated with a new approach, giving the regulator a calculation chain for computing reactor safety margins that is independent of the methods used by the power companies. The functioning of the calculation chain, the cross-section library generation routines and the parameter fits produced in this work has been verified by calculating control rod worths and shutdown margins for the initial core of the Olkiluoto 3 reactor under different conditions. The method has been found to work within the validity range of the parameters, and the calculated results are of the correct order of magnitude. The accuracy and validity range of the parameter model should still be improved before the calculation chain can be used to verify the correctness of results calculated with other methods.
Abstract:
There is a worldwide need for safer and more economical nuclear reactors. Fourth-generation reactor concepts are safer and more reliable than earlier ones, use fuel resources more efficiently and produce less nuclear waste. In addition, they are economically more competitive and have excellent proliferation resistance. The pebble bed reactor concept is one of the two main types of high-temperature gas-cooled reactors (HTGR, High Temperature Gas-cooled Reactor), and when the coolant temperature in the reactor rises high enough, it can also be regarded as a very high temperature reactor (VHTR, Very High Temperature Reactor), which is a fourth-generation reactor concept. This Bachelor's thesis examines HTR-PROTEUS (or LEU-HTR PROTEUS), a pebble bed type experimental reactor operated in Switzerland in the 1990s, which was used above all to study the use of low-enriched uranium (LEU, Low Enriched Uranium) fuel in a pebble bed reactor. A particular point of interest was water ingress into the reactor in an accident situation. The aim of the work is to model the reactor system and calculate the multiplication factors for five different reactor configurations. The reactor modelling and calculations are carried out with the Serpent code, which uses the Monte Carlo method. The results obtained are compared with results presented in other sources, computed with different codes.
Abstract:
As manufacturing technologies develop, ever more transistors can be fitted on IC chips. More complex circuits make it possible to perform more computations per unit of time. As the activity of the circuits increases, so does their energy consumption, which in turn increases the heat generated by the chip. Excessive heat limits the operation of the circuits. Techniques are therefore needed to reduce circuit energy consumption. A new research topic is small devices that monitor, for example, the human body, buildings or bridges. Such devices must have low energy consumption so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate the circuits at a lower supply voltage than the one for which the manufacturer originally designed them. This slows down and impairs circuit operation. However, if lower computing performance and reduced reliability can be accepted in the operation of the device, savings in energy consumption can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of earlier studies found in the literature, and then by investigating the application of Near-Threshold Computing in two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a comprehensive picture of a large share of the area and energy consumption of a typical IC chip. The case studies use a 130 nm technology, and real products of the circuit manufacturing process are modelled by running numerous Monte Carlo simulations.
This inexpensive manufacturing technology combined with Near-Threshold Computing makes it possible to produce low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces circuit energy consumption significantly. On the other hand, circuit speed decreases, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and larger transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. In low-energy IC design, the results provide grounds for deciding whether to use the normal supply voltage or to lower it, in which case the slower and less reliable behaviour of the circuit must be taken into account.
Abstract:
When effective weapons impact at long ranges is sought, heavy rocket launcher systems emerge as one serious option. Their cargo ammunition selection also includes versatile submunitions, suitable against personnel, armoured vehicles and fortifications alike. Heavy rocket launchers could also replace the use of anti-personnel mines in the future. The study presents the ammunition of the most common heavy rocket launchers in use, together with its employment and fire effectiveness. The research problem is set as follows: What is the fire effectiveness of heavy rocket launchers against the target models of this study? The study also considers how fire effectiveness can be improved through various solutions. Public literature and internet sources have been used as research material. The research methods are a literature review and Monte Carlo simulation. The simulation is implemented in a spreadsheet application. The results obtained are also compared with the calculated results of earlier studies. The study shows that heavy rocket launchers are a very effective weapon system, at least against the target models set in the study. With the rocket launchers and cargo munitions examined, some of the targets can be suppressed with a half salvo of a single launcher, i.e. six rockets. To increase fire effectiveness, the study considers technical and gunnery aspects as well as fire control and target acquisition. An effective munition alone is not enough if it cannot be delivered close enough to the target. Development areas for heavy rocket launchers are their dispersion, which is larger than that of tube artillery, the reliability of the submunitions, and weapon effectiveness, to be improved by adding intelligence and guidance to the munitions. As for the simulation, the study presents a simple, self-made spreadsheet simulation.
The researcher has sought to take its inaccuracies and shortcomings into account in the calculations and when analysing the results. All in all, even such a simple simulation can be said to give results of the right order of magnitude.
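The kind of spreadsheet Monte Carlo described above can be sketched as follows. All figures (dispersions, submunition count, lethal radius) are invented for illustration and are not the study's values; each trial draws an aim-point error per rocket and a scatter per submunition, then checks whether any submunition lands within lethal radius of a point target:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_hit(n_rockets=6, subs_per_rocket=50, aim_sd=60.0,
          sub_sd=40.0, lethal_r=5.0, trials=5000):
    """Probability that a point target at the origin is covered at least once."""
    hits = 0
    for _ in range(trials):
        # each rocket's aim-point error (m), then each submunition's scatter (m)
        aims = rng.normal(0.0, aim_sd, size=(n_rockets, 2))
        subs = aims[:, None, :] + rng.normal(0.0, sub_sd,
                                             size=(n_rockets, subs_per_rocket, 2))
        d = np.linalg.norm(subs.reshape(-1, 2), axis=1)  # distance to target
        if np.any(d < lethal_r):
            hits += 1
    return hits / trials

print(f"P(target covered) with a half salvo of 6 rockets: {p_hit():.2f}")
```

The same logic, one row per trial, is what a spreadsheet implementation computes; increasing `trials` reduces the statistical noise of the estimate at the usual 1/sqrt(N) rate.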
Abstract:
The purpose of this Bachelor's thesis is to study the effect of coolant removal on the multiplication factor of an RBMK test reactor and, in particular, how well the Serpent code, which uses the Monte Carlo method, can model the effect of the coolant removal. First, the report used as background material on the criticality runs and earlier simulations of the test reactor is reviewed, together with the characteristics of the RBMK reactor and the theory of Monte Carlo simulation. Next, the model created of the test reactor is presented, the simplifications made in the modelling are explained, and the initial state of the simulation is described. Finally, the simulation results are discussed and the suitability of the Serpent model is assessed in comparison with previously performed simulations.
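The Monte Carlo criticality principle underlying a code such as Serpent can be reduced to a toy example: in an infinite homogeneous one-group medium, follow neutron histories from collision to collision and score the new fission neutrons produced per source neutron. The cross-sections below are invented for illustration; analytically, k_inf = nu * sig_f / (sig_f + sig_c) for this model.

```python
import numpy as np

rng = np.random.default_rng(7)

# one-group macroscopic cross-sections (invented): fission, capture, scattering
sig_f, sig_c, sig_s, nu = 0.06, 0.04, 0.30, 2.4
sig_t = sig_f + sig_c + sig_s

def k_inf_mc(n_neutrons=50000):
    """Estimate k_inf as fission neutrons banked per source neutron."""
    banked = 0.0
    for _ in range(n_neutrons):
        while True:                       # follow one neutron history
            xi = rng.random() * sig_t     # sample the reaction type
            if xi < sig_f:                # fission: bank nu neutrons, history ends
                banked += nu
                break
            elif xi < sig_f + sig_c:      # capture: history ends
                break
            # otherwise scattering: the history continues

    return banked / n_neutrons

k = k_inf_mc()
print(k)   # near the analytic value 2.4 * 0.06 / 0.10 = 1.44
```

A production code adds geometry tracking, continuous-energy physics and generation-to-generation power iteration, but the statistical core, sampling reaction chains and averaging a ratio estimator, is the same.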
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study to retrieve stratospheric temperature profiles. The main result of this dissertation is the computation of likelihoods via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to the model error covariance matrix.
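The central idea, computing likelihoods from Kalman filter outputs, can be sketched in a linear-Gaussian toy setting (the scalar AR(1) state-space model and all numbers here are illustrative assumptions, not the NWP setting): the filter's innovations yield the marginal likelihood via the prediction error decomposition, and that likelihood then drives an MCMC chain over a model parameter such as the model error variance Q.

```python
import numpy as np

rng = np.random.default_rng(2)
a, true_q, r = 0.9, 0.5, 0.2              # AR(1) coefficient, model/obs error variances
T = 300
x = np.zeros(T)
for t in range(1, T):
    x[t] = a*x[t-1] + rng.normal(0, np.sqrt(true_q))
y = x + rng.normal(0, np.sqrt(r), T)      # noisy observations

def kf_loglik(q):
    """Log-likelihood of y via Kalman filter innovations (prediction errors)."""
    m, p, ll = 0.0, 1.0, 0.0
    for t in range(T):
        mp, pp = a*m, a*a*p + q           # predict
        s = pp + r                        # innovation variance
        v = y[t] - mp                     # innovation
        ll += -0.5*(np.log(2*np.pi*s) + v*v/s)
        k = pp/s                          # Kalman gain and update
        m, p = mp + k*v, (1 - k)*pp
    return ll

# random-walk Metropolis over log Q, flat prior
lq, ll = np.log(0.1), kf_loglik(0.1)
chain = []
for _ in range(2000):
    lq_p = lq + rng.normal(0, 0.15)
    ll_p = kf_loglik(np.exp(lq_p))
    if np.log(rng.random()) < ll_p - ll:
        lq, ll = lq_p, ll_p
    chain.append(np.exp(lq))

qm = float(np.mean(chain[1000:]))         # posterior mean of Q, burn-in discarded
print(qm)
```

In the large-scale setting the exact filter is replaced by an ensemble or variational approximation, but the role of the innovation-based likelihood in the MCMC acceptance step is unchanged.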
Abstract:
This work studies the exposure to neutron radiation caused by the transfer of spent fuel at the OL1/OL2 nuclear power plant units. The spent fuel is transferred in a water-filled spent fuel transfer cask, Castor TVO, from the OL1/OL2 units to the spent fuel storage. During the transfer work, personnel from several occupational groups work in the immediate vicinity of the transfer cask and are exposed to the photon and neutron radiation emitted by the spent fuel. Earlier measurements of neutron doses have shown that continuous monitoring of the exposure has not been necessary. The purpose of this work is to determine, through theoretical calculations, whether a person participating in the transfer work could receive a neutron dose exceeding the recording threshold. The neutron dose rates in the space surrounding the transfer cask were calculated with the US MCNP code, which is based on the Monte Carlo method. The transfer cask, the fuel inside it and the surrounding space were modelled with MCNP for three cooling times and three average maximum discharge burnups. The isotope concentrations of the fuel assemblies and the source strengths were calculated with the Studsvik SNF code. Based on the simulations, continuous monitoring of neutron doses during spent fuel transfer is not necessary. Although neutron dose rates near the transfer cask can become relatively high, the work performed close to the cask is of such short duration that exceeding the recording threshold can be considered very unlikely. The conclusions are confirmed with a measurement arrangement designed in this work.
Abstract:
Innovative gas cooled reactors, such as the pebble bed reactor (PBR) and the gas cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for a physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in the reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air cooled smooth and rib roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to the specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used. An additional calculation with a v2-f turbulence model showed significant improvement in the heat transfer results, which is most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib roughened GFR fuel rod geometries.
It is suggested that the viewpoints of numerical modelling be included in the planning of experiments to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of experiments should also be considered and documented in reasonable detail.
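The porosity-calculation idea described in this abstract can be illustrated with a small sketch: for each cell of a grid laid over a sphere packing, estimate the void fraction by Monte Carlo point sampling. The thesis's code is in MATLAB; this Python re-creation, the sampling approach, and the single-sphere test case are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)

def cell_porosity(centers, radius, cell_lo, cell_hi, n_pts=20000):
    """Porosity (void fraction) of one axis-aligned cell containing spheres."""
    pts = rng.uniform(cell_lo, cell_hi, size=(n_pts, 3))   # random points in the cell
    d2 = ((pts[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    inside = (d2 < radius**2).any(axis=1)                  # point inside any sphere?
    return 1.0 - inside.mean()

# check: a single sphere of radius 0.5 centred in a unit cell has
# solid fraction pi/6, so porosity should be close to 1 - pi/6 = 0.476
centers = np.array([[0.5, 0.5, 0.5]])
eps = cell_porosity(centers, 0.5, np.zeros(3), np.ones(3))
print(eps)
```

For a full pebble bed, the sphere centres come from the DEM output and the loop runs over all mesh cells; exact geometric sphere-box intersection can replace the sampling where higher accuracy is needed.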
Abstract:
The aim of this work is to apply approximate Bayesian computation in combination with Markov chain Monte Carlo methods in order to estimate the parameters of tuberculosis transmission. The methods are applied to San Francisco data, and the results are compared with the outcomes of previous works. Moreover, a methodological idea aimed at reducing computational time is also described. Although this approach is shown to work appropriately, further analysis is needed to understand and test its behaviour in different cases. Some suggestions for its further enhancement are described in the corresponding chapter.
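A generic ABC-MCMC scheme of the kind combined here can be sketched as follows: at each step, propose a new parameter, simulate a dataset from the model, and accept only if a summary statistic of the simulation lies within a tolerance of the observed one. The toy model below (Poisson counts, initialised at the observed summary to shorten burn-in) stands in for the tuberculosis transmission model and is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.poisson(4.0, size=200)            # "observed" data, true rate 4.0
s_obs = obs.mean()                          # summary statistic

def abc_mcmc(n_iter=6000, eps=0.1, step=0.3):
    theta, chain = s_obs, []                # start at the observed summary
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)  # symmetric random-walk proposal
        if prop > 0:                        # flat prior on (0, inf)
            s_sim = rng.poisson(prop, size=200).mean()   # simulate from the model
            if abs(s_sim - s_obs) < eps:    # ABC acceptance: summaries close enough
                theta = prop
        chain.append(theta)
    return np.array(chain)

chain = abc_mcmc()
post_mean = chain[2000:].mean()             # approximate posterior mean of the rate
print(post_mean)
```

Shrinking `eps` tightens the approximation to the true posterior at the cost of lower acceptance, which is exactly the computational-time trade-off that motivates speed-up ideas for ABC.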
Abstract:
Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, the presence of individuals with subpatent malaria infections, which are normally asymptomatic, and behavioral plasticity in the mosquito hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control that address the role of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission are developed. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increase in the risk of superinfection due to iron supplementation and fortification in malaria endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria endemic regions. Second, a deterministic model of transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results of the model emphasize the importance of using a combination of all four control tools for effective malaria intervention.
Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. In developing this model, four possible control measures are analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall for most epidemiological models is the absence of real data; model-based conclusions have to be drawn based on uncertain parameter values. In this thesis, an approach to study the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty of model parameters. Finally, a separate work modeling the within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells. The model hypothesizes that in severe malaria, due to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. Analysis of the model and parameter identifiability using Markov chain Monte Carlo methods is presented.
Abstract:
Digital business ecosystems (DBE) are becoming an increasingly popular concept for modelling and building distributed systems in heterogeneous, decentralized and open environments. Information and communication technology (ICT) enabled business solutions have created an opportunity for automated business relations and transactions. The deployment of ICT in business-to-business (B2B) integration seeks to improve competitiveness by establishing real-time information and offering better information visibility to business ecosystem actors. The product, component and raw material flows in supply chains are traditionally studied in logistics research. In this study, we expand the research to cover the processes parallel to the service and information flows as information logistics integration. In this thesis, we show how better integration and automation of information flows enhance the speed of processes and, thus, provide cost savings and other benefits for organizations. Investments in DBE are intended to add value through business automation and are key decisions in building up information logistics integration. Business solutions that build on automation are important sources of value in networks that promote and support business relations and transactions. Value is created through improved productivity and effectiveness when new, more efficient collaboration methods are discovered and integrated into DBE. Organizations, business networks and collaborations, even with competitors, form DBE in which information logistics integration has a significant role as a value driver.
However, traditional economic and computing theories do not focus on digital business ecosystems as a separate form of organization, and they do not provide conceptual frameworks that can be used to explore digital business ecosystems as value drivers; combined internal management and external coordination mechanisms for information logistics integration are not current practice in a company's strategic process. In this thesis, we have developed and tested a framework for exploring digital business ecosystems and a coordination model for digital business ecosystem integration; moreover, we have analysed the value of information logistics integration. The research is based on a case study and on mixed methods, in which we use the Delphi method and Internet-based tools for idea generation and development. We conducted many interviews with key experts, which we recorded, transcribed and coded to find success factors. Quantitative analyses were based on a Monte Carlo simulation, which estimated cost savings, and Real Option Valuation, which sought an optimal investment program at the ecosystem level. This study provides valuable knowledge regarding information logistics integration by utilizing a suitable business process information model for collaboration. An information model is based on the business process scenarios and on detailed transactions for the mapping and automation of product, service and information flows. The research results illustrate the current gap in understanding information logistics integration in a digital business ecosystem. Based on success factors, we were able to illustrate how specific coordination mechanisms related to network management and orchestration could be designed. We also pointed out the potential of information logistics integration in value creation. With the help of global standardization experts, we utilized the design of the core information model for B2B integration.
We built this quantitative analysis by using the Monte Carlo-based simulation model and the Real Option Value model. This research covers relevant new research disciplines, such as information logistics integration and digital business ecosystems, in which the current literature needs to be improved. This research was carried out with high-level experts and managers responsible for global business network B2B integration. However, the research was dominated by one industry domain, and therefore a more comprehensive exploration should be undertaken to cover a larger population of business sectors. Based on this research, a new quantitative survey could provide new possibilities to examine information logistics integration in digital business ecosystems. The value activities indicate that further studies should continue, especially with regard to the collaboration issues of integration, focusing on a user-centric approach. We should better understand how real-time information supports customer value creation by embedding the information into the lifetime value of products and services. The aim of this research was to build competitive advantage through B2B integration to support a real-time economy. For practitioners, this research created several tools and concepts to improve value activities, information logistics integration design, and management and orchestration models. Based on the results, the companies were able to better understand the formulation of the digital business ecosystem and the importance of joint efforts in collaboration. However, the challenge of incorporating this new knowledge into strategic processes in a multi-stakeholder environment remains. This challenge has been noted, and new projects have been established in pursuit of a real-time economy.
Abstract:
The possibility of using the exact coordinates of pebbles and fuel particles in Monte Carlo reactor physics calculations is an important development step for pebble bed reactor modelling. It allows pebble bed reactors to be modelled exactly, with realistic pebble beds, without placing the pebbles in regular lattices. In this study, the multiplication factor of the HTR-10 pebble bed reactor is calculated with the Serpent reactor physics code and, using this multiplication factor, the number of pebbles required for the critical loading of the reactor is determined. The multiplication factor is calculated using pebble beds produced with the discrete element method and three different material libraries in order to compare the results. The results obtained are lower than those measured at the experimental reactor and somewhat lower than those obtained with other codes in earlier studies.
Abstract:
This thesis presents an analysis of recently enacted Russian renewable energy policy based on a capacity mechanism. Considering its novelty and poor coverage in the academic literature, the aim of the thesis is to analyze the capacity mechanism's influence on investors' decision-making process. The research introduces a number of approaches to investment analysis. Firstly, a classical financial model was built with Microsoft Excel® and crisp efficiency indicators, such as net present value, were determined. Secondly, sensitivity analysis was performed to understand the influence of different factors on project profitability. Thirdly, the Datar-Mathews method was applied: by means of Monte Carlo simulation implemented with Matlab Simulink®, it disclosed all possible outcomes of the investment project and enabled real option thinking. Fourthly, the previous analysis was replicated with the fuzzy pay-off method in Microsoft Excel®. Finally, the decision-making process under the capacity mechanism was illustrated with a decision tree. The capacity remuneration, paid over 15 years, is calculated individually for each RE project as a variable annuity that guarantees a particular return on investment, adjusted for changes in national interest rates. The analysis results indicate that the capacity mechanism creates a real option to invest in a renewable energy project by ensuring project profitability regardless of market conditions, provided that project-internal factors are managed properly. The latter includes keeping capital expenditures within set limits, keeping production performance above 75% of target indicators, and fulfilling the localization requirement, i.e. producing equipment and services within the country. The presence of the real option shapes the decision-making process in the following way. Initially, the investor should identify an appropriate location for the planned power plant where high production performance can be achieved, and lock in this location in case of competition.
Then the investor should wait until the capital cost limit and the localization requirement can be met, after which the decision to invest can be made without risk to project profitability. With respect to technology type, investment in a solar PV power plant is more attractive than in wind or small hydro power, since it has a higher weighted net present value and a lower standard deviation. However, this does not change the decision-making strategy, which remains the same for each technology type. The fuzzy pay-off method proved able to disclose the same patterns of information as the Monte Carlo simulation. Being effective in investment analysis under uncertainty and easy to use, it can be recommended as a sufficient analytical tool for investors and researchers. Apart from the described results, this thesis contributes to the academic literature with a detailed description of the capacity price calculation for renewable energy, which was not previously available in English. In terms of methodological novelty, advanced approaches such as the Datar-Mathews method and the fuzzy pay-off method are applied on top of an investment profitability model that also incorporates the capacity remuneration calculation. A comparison of the effects of two different RE support schemes, namely the Russian capacity mechanism and a feed-in premium, contributes to comparative policy studies and yields useful inferences for researchers and policymakers. The limitations of this research are the simplification of assumptions to the country-average level, which restricts our ability to analyze renewable energy investment region by region, and the restriction of the studied policy to the wholesale power market, which leaves retail markets and remote areas, and thus medium and small renewable energy investments, outside the research focus. Removing these limitations would allow a full picture of the Russian renewable energy investment profile to be created.
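The Datar-Mathews valuation used above can be sketched compactly: simulate the project's NPV distribution by Monte Carlo, truncate losses at zero (the option not to invest), and average. The cash-flow model and all figures below are invented placeholders, not the thesis's capacity-mechanism model:

```python
import numpy as np

rng = np.random.default_rng(4)

def datar_mathews(n=20000, capex=100.0, years=15, rate=0.10):
    """Mean NPV and Datar-Mathews real option value of a toy project."""
    # uncertain annual cash flow: normally distributed around 14 with sd 5
    cf = rng.normal(14.0, 5.0, size=(n, years))
    disc = (1 + rate) ** -np.arange(1, years + 1)    # discount factors
    npv = cf @ disc - capex                          # simulated NPV distribution
    rov = np.mean(np.maximum(npv, 0.0))              # losses truncated at zero
    return npv.mean(), rov

mean_npv, rov = datar_mathews()
print(f"mean NPV = {mean_npv:.1f}, real option value = {rov:.1f}")
```

Because only the favourable tail of the distribution contributes, the real option value always equals or exceeds the plain expected NPV; the fuzzy pay-off method reaches a comparable figure from a triangular pay-off distribution instead of simulated scenarios.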
Abstract:
Time series analysis can be categorized into three approaches: classical, Box-Jenkins, and state space. The classical approach provides a foundation for the analysis; the Box-Jenkins approach improves on the classical approach and deals with stationary time series. The state space approach allows time-varying factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. The Kalman filter method and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out by repeated estimation combined with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is examined in both a probabilistic and a general sense, and the results are compared in order to study and represent identifiability in a more informative way.
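One of the estimation methods discussed, Yule-Walker estimation, can be sketched in a few lines: the AR coefficients are obtained by solving the Yule-Walker equations built from sample autocovariances. The AR(2) model and its coefficients below are illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

# simulate an AR(2) process with known coefficients
phi = np.array([0.6, -0.3])
T = 5000
x = np.zeros(T)
for t in range(2, T):
    x[t] = phi[0]*x[t-1] + phi[1]*x[t-2] + rng.normal()

def yule_walker(x, p):
    """Solve the Yule-Walker equations R @ phi = r for an AR(p) model."""
    x = x - x.mean()
    # sample autocovariances at lags 0..p
    acov = np.array([np.dot(x[:len(x)-k], x[k:]) / len(x) for k in range(p + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    r = acov[1:p + 1]
    return np.linalg.solve(R, r)

est = yule_walker(x, 2)
print(est)   # estimates close to [0.6, -0.3]
```

Identifiability questions of the kind studied here arise when different coefficient vectors produce nearly identical autocovariances, so that the solved estimates become unstable; repeating such an estimation inside an MCMC loop makes that uncertainty visible as a posterior distribution.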