14 results for SHELL-MODEL CALCULATIONS
at Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Environmentally harmful consequences of fossil fuel utilisation and the landfilling of wastes have increased the interest among energy producers in alternative fuels such as wood fuels and refuse-derived fuels (RDFs). Fluidised bed technology, which allows the flexible use of a variety of different fuels, is commonly used at small- and medium-sized power plants of municipalities and industry in Finland. Since there is only one mass-burn plant currently in operation in the country and no intention to build new ones, the co-firing of pre-processed wastes in fluidised bed boilers has become the most widely applied waste-to-energy concept in Finland. The recently adopted EU Directive on the Incineration of Waste aims to mitigate the environmentally harmful pollutants of waste incineration and of the co-incineration of wastes with conventional fuels. Apart from gaseous flue gas pollutants and dust, the emissions of toxic trace metals are limited. The implementation of the Directive's restrictions in Finnish legislation is expected to limit the co-firing of waste fuels, owing to the insufficient reduction of the regulated air pollutants in existing flue gas cleaning devices. Trace metal emission formation and reduction in the ESP, the condensing wet scrubber, the fabric filter, and the humidification reactor were studied experimentally in full- and pilot-scale combustors utilising bubbling fluidised bed technology, and theoretically by means of reactor model calculations. The core of the model is a thermodynamic equilibrium analysis. The experiments were carried out with wood chips, sawdust, and peat, and with their refuse-derived fuel (RDF) blends. In all, ten different fuels or fuel blends were tested.
The relatively high concentrations of trace metals in RDFs compared to those in wood fuels increased the trace metal concentrations in the flue gas after the boiler ten- to hundred-fold when RDF was co-fired with sawdust in a full-scale BFB boiler. In the case of peat, a smaller increase in trace metal concentrations was observed, owing to the higher initial trace metal concentrations of peat compared to sawdust. Despite the high removal rate of most of the trace metals in the ESP, the Directive emission limits for trace metals were exceeded in each of the RDF co-firing tests. The dominant trace metals in the flue gas after the ESP were Cu, Pb and Mn. In the condensing wet scrubber, the flue gas trace metal emissions were reduced below the Directive emission limits when RDF pellet was used as a co-firing fuel together with sawdust and peat. The high chlorine content of the RDFs enhanced mercuric chloride formation and hence mercury removal in the ESP and scrubber. Mercury emissions were lower than the Directive emission limit for total Hg, 0.05 mg/Nm3, in all full-scale co-firing tests already in the flue gas after the ESP. The pilot-scale experiments with a BFB combustor equipped with a fabric filter revealed that the fabric filter alone is able to reduce the trace metal concentrations, including mercury, in the flue gas during RDF co-firing to approximately the same level as during wood chip firing. Trace metal emissions lower than the Directive limits were easily reached even with a 40% thermal share of RDF co-fired with sawdust. Enrichment of trace metals in the submicron fly ash particle fraction as a result of RDF co-firing was not observed in the test runs where sawdust was used as the main fuel. The combustion of RDF pellets with peat caused an enrichment of As, Cd, Co, Pb, Sb, and V in the submicron particle mode.
The accumulation and release of trace metals in the bed material were examined by means of a bed material analysis, mass balance calculations and a reactor model. Lead, zinc and copper were found to have a tendency to accumulate in the bed material, but also to be released from the bed material into the combustion gases if the combustion conditions were changed. The concentration of a trace metal in the combustion gases of the bubbling fluidised bed boiler was found to be the sum of trace metal fluxes from three main sources: (1) the trace metal flux from the burning fuel particle, (2) the trace metal flux from the ash in the bed, and (3) the trace metal flux from the active alkali metal layer on the sand (and ash) particles in the bed. The amount of chlorine in the system, the combustion temperature, the fuel ash composition and the saturation state of the bed material with regard to trace metals were found to be the key factors affecting the release process. During the co-firing of waste fuels with variable amounts of e.g. ash and chlorine, it is extremely important to consider the possible ongoing accumulation and/or release of the trace metals in the bed when determining the flue gas trace metal emissions. If the state of the combustion process with regard to trace metal accumulation and/or release in the bed material is not known, it may happen that emissions from the bed material, rather than from the combustion of the fuel in question, are measured and reported.
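The steady-state mass balance idea behind this analysis can be illustrated with a toy calculation; the numbers and the lumped `release_fraction` parameter (which folds the three source fluxes into one) are hypothetical, not values from the study:

```python
def bed_metal_balance(c_fuel_mg_per_kg, fuel_feed_kg_s, release_fraction, flue_gas_nm3_s):
    """Toy steady-state trace metal balance over the bed (hypothetical values).

    Metal entering with the fuel either accumulates in the bed material
    or is released to the combustion gases; `release_fraction` lumps the
    three source fluxes described above into a single number.
    """
    inflow_mg_s = c_fuel_mg_per_kg * fuel_feed_kg_s  # metal in with the fuel
    to_gas_mg_s = release_fraction * inflow_mg_s     # released to flue gas
    to_bed_mg_s = inflow_mg_s - to_gas_mg_s          # accumulating in the bed
    c_flue_mg_nm3 = to_gas_mg_s / flue_gas_nm3_s     # flue gas concentration
    return to_bed_mg_s, c_flue_mg_nm3

# Example: 100 mg/kg Pb in the fuel, 2 kg/s feed, 60% released, 20 Nm3/s flue gas.
to_bed, c_flue = bed_metal_balance(100.0, 2.0, 0.6, 20.0)  # 80 mg/s to bed, 6 mg/Nm3
```

When `release_fraction` drifts over time, as the abstract warns, the measured flue gas concentration reflects the history of the bed rather than the fuel being burned.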
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny in the thesis is the intermittency in turbulence models. Wavelets are used as a mathematical tool to study the intermittent activities that turbulence models produce. The first section introduces wavelets and wavelet transforms in general as a mathematical tool. Moreover, the basic properties of turbulence are discussed and classical methods for modeling turbulent flows are explained. Wavelets are implemented both to model turbulence and to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency obtained in a shell model. Wavelets are localized in both space (time) and scale; therefore, they are suitable candidates for the study of the singular bursts that interrupt the calm periods of an energy flow through various scales. The study concerns two questions, namely the frequency of occurrence and the intensity of the singular bursts at various Reynolds numbers. The results suggest that singularities become more local as the Reynolds number increases. The singularities also become more local when the shell number is increased at a given Reynolds number. The study revealed that the singular bursts are more frequent at Re ~ 10^7 than in the other cases with lower Re. The intermittency of bursts for the cases with Re ~ 10^6 and Re ~ 10^5 was similar, but for the case with Re ~ 10^4 bursts occurred after long waiting times in a different fashion, so that it could not be scaled with the higher-Re cases.
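As a sketch of the kind of analysis tool involved (not the thesis code), a continuous wavelet transform with a Morlet mother wavelet can be written in plain NumPy; the wavelet and its centre frequency `w0` are standard, but the implementation details here are illustrative:

```python
import numpy as np

def morlet_cwt(signal, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform magnitudes with a Morlet mother wavelet.

    Returns |W(scale, t)|: large coefficients at small scales flag the
    short, intense bursts that interrupt the calm periods of the cascade.
    """
    n = len(signal)
    out = np.empty((len(scales), n))
    t = (np.arange(n) - n // 2) * dt
    for i, s in enumerate(scales):
        # Morlet wavelet rescaled to scale s (1/sqrt(s) normalisation)
        psi = np.exp(1j * w0 * t / s) * np.exp(-((t / s) ** 2) / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(psi[::-1]), mode="same")) * dt
    return out

# A single burst in a flat signal lights up across scales at its location.
x = np.zeros(256)
x[128] = 1.0
coeffs = morlet_cwt(x, scales=[2.0, 8.0, 32.0])
```

Because the wavelet is localised in both time and scale, the position and intensity of each burst can be read off directly from the coefficient map, which is the quantification the thesis is after.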
Abstract:
Heating costs are a significant part of the housing costs of a detached house. Choosing the right heating solution can yield significant savings. A heating system is a long-term investment, and replacing one system with another is often expensive. A mistake made in the choice of system is therefore difficult to correct afterwards. In order to choose the heating solution best suited to each situation, one must know what cost items the different alternatives involve and how they affect the total costs. The choice of heating system must take into account the differing energy demands of dwellings, the technical solutions, and the preferences of the users. For these reasons, heating systems cannot be ranked in a universally valid order of economy. This thesis examines, by means of example calculations, some heating system solutions and their costs. Although the work does not cover all the alternatives on the market, the calculation methods presented can also be applied to other heating systems.
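The comparison of cost items described above can be sketched as a present-value calculation over the system lifetime; the two example systems and all figures below are invented for illustration, not taken from the thesis:

```python
def life_cycle_cost(investment, annual_energy_cost, annual_maintenance, years, rate):
    """Present value of owning a heating system over `years` (hypothetical inputs).

    Discounts each year's running cost back to today, so alternatives with
    different investment/running-cost balances can be compared fairly.
    """
    pv_running = sum(
        (annual_energy_cost + annual_maintenance) / (1.0 + rate) ** y
        for y in range(1, years + 1)
    )
    return investment + pv_running

# Cheap to buy vs cheap to run, 20-year horizon, 3% discount rate (all EUR, invented).
direct_electric = life_cycle_cost(4000.0, 2200.0, 50.0, 20, 0.03)
ground_source = life_cycle_cost(18000.0, 700.0, 150.0, 20, 0.03)
```

With these invented numbers the high-investment, low-running-cost option wins over 20 years, which illustrates why no single ranking of systems holds for every house and usage profile.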
Abstract:
Boiling two-phase flow and the equations governing the motion of fluid in two-phase flows are discussed in this thesis. The treatment of the governing equations in three-dimensional complex geometries is considered from the perspective of the porous medium concept. The equations governing motion in two-phase flows were formulated, discretized and implemented in a subroutine for pressure-velocity solution utilizing the SIMPLE algorithm modified for two-phase flow. The subroutine was included in PORFLO, a three-dimensional 5-equation porous media model developed at VTT by Jaakko Miettinen. The development of two-phase flow and the resulting void fraction distribution were predicted in a geometry resembling a section of a BWR fuel bundle in a couple of test cases using PORFLO.
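As a much simpler illustration of the void fraction concept (not the 5-equation PORFLO model), the homogeneous no-slip closure gives the void fraction directly from the flow quality; the BWR-like property values are rough illustrative figures:

```python
def homogeneous_void_fraction(quality, rho_gas, rho_liquid):
    """Void fraction from flow quality with the homogeneous (no-slip) model.

    Both phases are assumed to travel at the same velocity, so the void
    fraction follows from the volumetric shares of each phase alone.
    This is far cruder than the 5-equation model used in the thesis.
    """
    x = quality
    return (x / rho_gas) / (x / rho_gas + (1.0 - x) / rho_liquid)

# Roughly BWR-like conditions (~70 bar): steam ~36 kg/m3, water ~740 kg/m3.
alpha = homogeneous_void_fraction(0.10, 36.0, 740.0)
```

Even at 10% quality by mass, the large density ratio pushes the void fraction toward 70%, which is why resolving the void distribution along a fuel bundle matters.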
Abstract:
The aim of this Master's thesis was to determine how the shell-and-tube heat exchangers, currently produced in the target company largely as customised products, could be standardised so that resources could be allocated correctly and the company's financial performance thereby improved. As the study progressed, the economic degree of standardisation emerged as the decisive factor for the results; as its basis for the company's shell-and-tube heat exchangers, a standardised set of design guidelines was chosen. In addition to the design guidelines, the product standard was complemented with a proposal for automating working practices by means of a parametric selection table and an electronic manual. In this work, product standardisation is treated as a rationalisation investment, and its profitability is assessed with investment calculations and a sensitivity analysis. The initial values of the investment calculations are based on the product standardisation approach implemented in this work for one shell-and-tube heat exchanger, assuming that this approach would in future be applied to all shell-and-tube heat exchangers of the unit in question.
Abstract:
The aim of this thesis was to develop a model which can predict the heat transfer, heat release distribution and vertical gas-phase temperature profile in the furnace of a bubbling fluidized bed (BFB) boiler. The model is based on three separate components that handle heat transfer, heat release distribution, and mass and energy balance calculations, taking into account the boiler design and operating conditions. The model was successfully validated by solving the model parameters on the basis of test run information from a commercial-size BFB boiler and by performing parametric studies with the model. Implementation of the developed model in the Foster Wheeler BFB design procedures will require model validation against the existing BFB database and possibly more detailed measurements at commercial-size BFB boilers.
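The balance component can be sketched as a zone-wise energy balance that yields a vertical temperature profile; the zone split, heat release shares and fluid properties below are invented for illustration and are not the validated model:

```python
def furnace_temperature_profile(t_in, heat_release_mw, release_shares,
                                wall_loss_mw_per_zone, gas_flow_kg_s, cp_kj_kg_k):
    """Vertical gas temperature profile from a toy zone-wise energy balance.

    The furnace is split into vertical zones; each zone receives a share
    of the total heat release and loses a fixed amount to the walls.
    dT in each zone follows from Q = m * cp * dT.
    """
    temps, t = [], t_in
    for share in release_shares:
        q_net_kw = share * heat_release_mw * 1000.0 - wall_loss_mw_per_zone * 1000.0
        t += q_net_kw / (gas_flow_kg_s * cp_kj_kg_k)  # dT = Q / (m * cp)
        temps.append(t)
    return temps

# 60% of the heat released in the bed zone, the rest in two freeboard zones.
profile = furnace_temperature_profile(400.0, 50.0, [0.6, 0.3, 0.1], 2.0, 30.0, 1.2)
```

The profile rises steeply over the bed zone and flattens higher in the freeboard, the qualitative shape such a model is meant to reproduce against test run data.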
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce process quality. Vibrations may be harmful even when amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, a part of the machine's rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. Superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system with a multibody simulation approach. This research focuses on the modelling of hydrodynamic journal bearings and of rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved in order to take the waviness of the shaft journal into account. The improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system consisting of a flexible tube roll, two journal bearings and a supporting structure is analysed employing the multibody simulation technique. The modelled non-idealities are the shell thickness variation of the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system.
In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not itself excite vibrations but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, which is the non-rotating part of the bearing system. The modelled system is verified with measurements performed on a test rig. In the measurements, the waviness of the bearing bushing was not measured, and therefore its effect on the response was not verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of the rotor-bearing system. When the simulated results are compared with the measured ones, the overall agreement between the results is concluded to be good.
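The kinematics behind this excitation can be sketched with a few lines: a waviness component of order k excites the system at k times the rotation frequency, so the natural frequency is hit when the speed is 1/k of it, below the first critical speed. The numbers are purely illustrative, not from the modelled tube roll:

```python
def resonant_speeds(natural_freq_hz, waviness_orders, max_speed_hz):
    """Rotation speeds at which a waviness component excites resonance.

    A waviness of order k on the shaft journal produces excitation at
    k times the rotation frequency, so resonance occurs at speeds of
    natural_freq / k. This is pure kinematic reasoning, not the full
    multibody model described above.
    """
    speeds = {}
    for k in waviness_orders:
        speed = natural_freq_hz / k
        if speed <= max_speed_hz:  # keep only speeds in the operating range
            speeds[k] = speed
    return speeds

# Hypothetical roll with a 24 Hz first natural frequency, waviness orders 2..4.
critical = resonant_speeds(24.0, [2, 3, 4], max_speed_hz=20.0)  # {2: 12.0, 3: 8.0, 4: 6.0}
```

The multibody model is needed to predict the amplitude at each of these speeds and the coupled effect of several non-idealities; the kinematics only says where to look.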
Abstract:
A multibody simulation model of a roller test rig is presented in this work. The roller test rig consists of a paper machine's tube roll supported by a hard-bearing type balancing machine. The simulation model includes non-idealities that were measured from the physical structure: the shell thickness variation of the roll and the roundness errors of the shafts of the roll. Such non-idealities are harmful, since they can cause subharmonic resonances of the rotor system. In this case, a natural vibration mode of the rotor is excited when the rotation speed is a fraction of the natural frequency of the system. With the simulation model, the half-critical resonance is studied in detail and a sensitivity analysis is performed by running several analyses with slightly different input parameters. The model is verified by comparing the simulation results with those obtained by measuring the real structure. The comparison shows that good accuracy is achieved, since equivalent responses are obtained within the error limits of the input parameters.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
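The execution model described above, nodes communicating only through FIFO queues and fired by a scheduler, can be sketched in a few lines; this is a naive fully dynamic scheduler for illustration, not RVC-CAL or the thesis toolchain:

```python
from collections import deque

class Node:
    """A dataflow node: it may fire only when every input queue has a token."""
    def __init__(self, func, inputs, output):
        self.func, self.inputs, self.output = func, inputs, output

    def can_fire(self):
        return all(q for q in self.inputs)  # non-empty deques are truthy

    def fire(self):
        args = [q.popleft() for q in self.inputs]  # consume one token per input
        self.output.append(self.func(*args))       # produce one output token

# Two source queues feed an adder whose output feeds a doubler.
src_a, src_b = deque([1, 2, 3]), deque([10, 20, 30])
mid, out = deque(), deque()
net = [Node(lambda a, b: a + b, [src_a, src_b], mid),
       Node(lambda x: 2 * x, [mid], out)]

# Naive dynamic scheduler: keep firing any fireable node until none is left.
while any(n.can_fire() for n in net):
    for n in net:
        if n.can_fire():
            n.fire()

# out now holds [22, 44, 66]
```

Every `can_fire` check here is a run-time firing-rule evaluation; quasi-static scheduling replaces most of these checks with pre-computed static firing sequences, leaving only the genuinely data-dependent decisions to run time.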
Abstract:
The ability to use the exact coordinates of the pebbles and fuel particles of a pebble bed reactor in Monte Carlo reactor physics calculations is an important development step. It allows the exact modelling of pebble bed reactors with realistic pebble beds, without placing the pebbles in regular lattices. In this study, the multiplication coefficient of the HTR-10 pebble bed reactor is calculated with the Serpent reactor physics code and, using this multiplication coefficient, the number of pebbles required for the critical load of the reactor. The multiplication coefficient is calculated using pebble beds produced with the discrete element method and three different material libraries in order to compare the results. The results obtained are lower than those measured at the experimental reactor, and somewhat lower than those obtained with other codes in earlier studies.
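The step from computed multiplication coefficients to a critical pebble count can be sketched as a simple interpolation between runs; the counts and k_eff values below are invented, not the HTR-10 results:

```python
def critical_pebble_count(counts, k_effs):
    """Linear interpolation of the pebble count giving k_eff = 1.

    `counts` and `k_effs` are matched pairs from repeated criticality
    runs with increasing pebble loads (e.g. from a Monte Carlo code);
    criticality is found where k_eff crosses unity.
    """
    pairs = list(zip(counts, k_effs))
    for (n0, k0), (n1, k1) in zip(pairs, pairs[1:]):
        if k0 <= 1.0 <= k1:
            return n0 + (1.0 - k0) * (n1 - n0) / (k1 - k0)
    raise ValueError("criticality not bracketed by the given runs")

# k_eff grows with load; criticality falls between 16000 and 17000 pebbles here.
n_crit = critical_pebble_count([15000, 16000, 17000], [0.980, 0.995, 1.010])
```

Since the study's k_eff values came out lower than the measurements, the same interpolation would predict a larger critical load than observed at the experimental reactor.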
Abstract:
By increasing the efficiency of a wind turbine gearbox, more power can be transferred from the rotor blades to the generator and less power is lost to wear and heating in the gearbox. By using a simulation model, the behavior of the gearbox can be studied before building expensive prototypes. The objective of this thesis is to model a wind turbine gearbox and its lubrication system in order to study power losses and heat transfer inside the gearbox, and to study the simulation methods of the software used. The software used to create the simulation model is Siemens LMS Imagine.Lab AMESim, which can be used to create one-dimensional mechatronic system simulation models from different fields of engineering. By combining components from different libraries, it is possible to create a simulation model that includes the mechanical, thermal and hydraulic models of the gearbox. Results for the mechanical, thermal, and hydraulic simulations are presented in the thesis. Due to the large scale of the wind turbine gearbox and the amount of power transmitted, the power loss calculations from the AMESim software are inaccurate, and power losses are instead modelled as a constant efficiency for each gear mesh. The starting values for the thermal and hydraulic simulations were chosen from test measurements and from an empirical study, as the compact and complex design of the gearbox prevents accurate test measurements. In further studies, to increase the accuracy of the simulation model, the components used for the power loss calculations need to be modified and the values of unknown variables need to be determined through accurate test measurements.
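The constant-efficiency treatment of gear mesh losses mentioned above can be sketched as follows; the stage efficiencies and the power level are hypothetical, not the modelled gearbox's values:

```python
def gearbox_power_flow(input_power_kw, mesh_efficiencies):
    """Power through a gear train with a constant efficiency per mesh.

    Instead of detailed loss models, each gear mesh passes on a fixed
    fraction of its input power; the remainder becomes heat, which is
    what the thermal model then has to carry away via the lubrication.
    """
    power, losses = input_power_kw, []
    for eta in mesh_efficiencies:
        losses.append(power * (1.0 - eta))  # heat generated in this mesh
        power *= eta                        # power passed to the next stage
    return power, losses

# Hypothetical planetary stage plus two parallel stages, 2 MW at the input.
out_kw, loss_kw = gearbox_power_flow(2000.0, [0.99, 0.985, 0.985])
```

Even with per-mesh efficiencies near 99%, roughly 4% of 2 MW ends up as heat, which is why the loss figures feed directly into the thermal and lubrication simulations.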
Abstract:
The target of this study was to develop a total cost calculation model to compare all costs of manufacturing and logistics, from the case company's own factories or from partner factories to its global distribution centers. In particular, the total cost calculation model was needed to simulate the effect of own-factory utilization in the total cost calculation context. The study consists of a theoretical literature review and an empirical case study, and was completed using the constructive research approach. The result of this study is a new total cost calculation model that includes not only all the costs caused by manufacturing and logistics, but also the relevant capital costs. Using the new model, the case company is able to complete total cost calculations taking into account the own-factory utilization effect in different volume situations and for different volume shares between an own factory and a partner factory.
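The utilisation effect can be illustrated with a toy per-unit cost function; the cost components and volumes below are invented, not the case company's figures:

```python
def unit_total_cost(volume, capacity, fixed_cost, variable_cost, logistics_cost,
                    capital_cost):
    """Total cost per unit including the factory utilisation effect.

    Fixed and capital costs are spread over the produced volume, so a
    poorly utilised own factory looks expensive per unit even when its
    variable cost is low. All cost components are hypothetical.
    """
    utilization = volume / capacity                     # share of capacity in use
    per_unit_fixed = (fixed_cost + capital_cost) / volume
    return per_unit_fixed + variable_cost + logistics_cost, utilization

# The same own factory at 20% vs 80% utilisation (invented EUR figures).
cost_low, u_low = unit_total_cost(20_000, 100_000, 4_000_000, 40.0, 5.0, 1_000_000)
cost_high, u_high = unit_total_cost(80_000, 100_000, 4_000_000, 40.0, 5.0, 1_000_000)
```

At low utilisation the fixed-cost share dominates and a partner factory with higher variable cost may still win, which is the trade-off the model lets the company simulate across volume shares.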
Abstract:
The main aim of this research was to develop a cost of poor quality calculation model that better reflects the business impact of lost productivity caused by IT incidents for the case company. This objective was pursued by reviewing the literature and conducting a study in a Finnish multinational manufacturing company. A broad analysis of the scientific literature made it possible to identify the main theories and models of Cost of Poor Quality and provided a better basis for developing measurements of the business impact of lost productivity. Empirical data was gathered with semi-structured interviews and an internet-based survey: in total, twelve interviews with experts and 39 survey responses from business stakeholders. The main results of the empirical study helped to develop the measurement model of cost of poor quality, which was tied to the incident priority matrix. Nevertheless, the model was created on the basis of the available data, and the main conclusion of the thesis is that the cost of poor quality measurements could be improved further if additional data points were available. The new model takes different cost regions into consideration and builds on this notion.
Abstract:
The share of variable renewable energy in electricity generation has grown exponentially during recent decades, and owing to the heightened pursuit of environmental targets, the trend is set to continue at an increased pace. The two most important resources, wind and insolation, both bear the burden of intermittency, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found insufficient for studying transient situations because of their simplifications, and despite its importance, the investigation of part-load operation has not yet been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with the literature and was validated through analytical calculations. The performance at part load was validated against models shown in the literature, showing good correlation. By introducing wind resource and electricity demand data to the model, the grid operation of CAES was studied. In order to enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment, as far as is known for the first time, and a user component for the compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load, and in addition the implementation of heat losses in the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is one of the important targets of development if the system operation is to be optimised in the future.
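The reported cycle efficiency is a round-trip ratio of electricity recovered during discharge to electricity consumed during charging, which can be sketched as follows; the powers and durations are illustrative, chosen only so the ratio lands near the thesis figure, and are not the Apros model results:

```python
def round_trip_efficiency(charge_power_mw, charge_hours,
                          discharge_power_mw, discharge_hours):
    """Round-trip (cycle) efficiency of an electricity storage.

    Simply the ratio of electrical energy produced by the turbines
    during discharge to the energy consumed by the compressors during
    charging, evaluated over one full storage cycle.
    """
    e_in = charge_power_mw * charge_hours          # MWh consumed while charging
    e_out = discharge_power_mw * discharge_hours   # MWh produced while discharging
    return e_out / e_in

# Illustrative cycle: 60 MW for 8 h in, 47 MW for 6 h out -> ~0.587.
eta = round_trip_efficiency(60.0, 8.0, 47.0, 6.0)
```

At part load both the compressor and turbine efficiencies drop, so the round-trip figure falls below the design-point value, which is exactly the regime the dynamic model was built to capture.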