Abstract:
In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant, and it will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means the transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The chosen solution method was to find, using the finite difference method, a pressure of the process fluid at which the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for the fast calculation of the thermal properties was used, because most of the calculation time is spent on calculating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that allows a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes and of the calculation of these nodal values in a dynamic state. The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not occur in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the results was that there was no possibility to compare them with measurements; the only way was therefore to determine whether the obtained results were intuitively reasonable and whether they changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in the input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat-source-side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected. The tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
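As a rough illustration of the Ledinegg check described above, the sketch below fits the idea of a cubic internal characteristic and tests the sign of its slope over the operating range. The coefficients, units, and the flat external characteristic are placeholders, not values from the thesis.

```python
import numpy as np

# Internal characteristic of the boiler: pressure drop as a third-degree
# polynomial of the mass flow rate, dp(m) = a3*m**3 + a2*m**2 + a1*m + a0.
# The coefficients below are illustrative placeholders, not the thesis values.
a3, a2, a1, a0 = 2.0, -9.0, 12.0, 0.5

def dp_internal(m):
    """Pressure drop (arbitrary units) over the boiler at mass flow rate m."""
    return a3 * m**3 + a2 * m**2 + a1 * m + a0

def is_ledinegg_stable(m, dp_external_slope=0.0):
    """Ledinegg criterion: an operating point is statically stable when the
    slope of the internal characteristic exceeds the slope of the external
    (pump/system) characteristic at that point."""
    ddp_internal = 3 * a3 * m**2 + 2 * a2 * m + a1
    return ddp_internal > dp_external_slope

# Scan the operating range for negative-slope regions, where flow
# excursions (Ledinegg oscillations) would be possible.
flows = np.linspace(0.1, 3.0, 50)
unstable = [m for m in flows if not is_ledinegg_stable(m)]
print("unstable flow range:", (min(unstable), max(unstable)) if unstable else "none")
```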
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have examined the use of limestone in applications that enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes, interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time, the progress of the reactions, and the mass and energy transfer in the particle. The model-based results were compared with experimental laboratory-scale bubbling fluidized bed (BFB) results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination but did not further improve the sulfate conversion. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2. The reaction order of O2 appears to approach zero at high oxygen concentrations.
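The reported reaction orders can be expressed as a simple rate law. The sketch below is one possible form, assuming an Arrhenius rate constant and a saturation term that lets the O2 order fall towards zero at high concentrations; k0, Ea, and c_o2_sat are illustrative placeholders, not fitted values from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def sulfation_rate(c_so2, c_o2, T, k0=1.0e4, Ea=1.2e5, c_o2_sat=0.05):
    """Sulfation rate consistent with the observed orders: first order in
    SO2, and effectively zero order in O2 once its concentration is high.
    k0, Ea and c_o2_sat are illustrative placeholders."""
    k = k0 * np.exp(-Ea / (R * T))  # Arrhenius rate constant (assumed form)
    # Saturation form: ~first order in O2 when dilute, ~zero order when c_o2 >> c_o2_sat.
    o2_term = c_o2 / (c_o2_sat + c_o2)
    return k * c_so2 * o2_term

# Doubling the SO2 concentration doubles the rate (first order):
T = 850.0 + 273.15
print(sulfation_rate(2e-4, 0.2, T) / sulfation_rate(1e-4, 0.2, T))  # ~2.0
```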
Abstract:
The overall goal of the study was to describe nurses' acceptance of an Internet-based support system in the care of adolescents with depression. The data were collected in four phases during the period 2006–2010 from nurses working in adolescent psychiatric outpatient clinics and from professionals working with adolescents in basic public services. In the first phase, the nurses' anticipated perceptions of the usefulness of the Internet-based support system were explored before its implementation. In the second phase, the nurses' perceived ease of computer and Internet use and their attitudes toward it were explored. In the third phase, the features of the support system and its implementation process were described. In the fourth phase, the nurses' behavioural intention and actual use of the Internet-based support system in psychiatric outpatient care were described after one year of use. The Technology Acceptance Model (TAM) was used to structure the various research phases. Several benefits of using the Internet-based support system in the care of adolescents with depression were identified from the nurses' perspective. The nurses' technology skills were good and their attitudes towards computer use were positive. The support system was developed in various phases to meet the adolescents' needs. Before the implementation of an information technology (IT)-based support system, it is important to pay attention to the nurses' IT training, technology support, resources, and safety, as well as to ethical issues related to the system. After one year of using the system, the nurses perceived the Internet-based support system to be useful in the care of adolescents with depression. The adolescents' independent work with the support system at home and the program's systematic character were experienced as beneficial for the treatment. However, the Internet-based support system was integrated only partly into the nurse-adolescent interaction, even though the nurses' perceptions of it were positive. The use of the IT-based system as part of the adolescents' depression care was viewed positively and its benefits were recognized. This serves as a good basis for future IT-based techniques. Successful implementation of IT-based support systems requires a systematic implementation plan and commitment on the part of the organization and its managers. Supporting and evaluating the implementation of an IT-based system should pay attention to changing the nurses' work styles. In the future, health care organizations should be offered more flexible opportunities to utilize IT-based systems in direct patient care.
Abstract:
The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item, but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has changed from competition between companies to competition between networks. Companies have focused on their core functions and outsourced support services such as maintenance, above all to decrease costs. This phenomenon has led to the increasing formation of business networks and, as a result, to a growing need for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle period of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common term for a part, a component, a piece of equipment, etc. The constructed LCM is a working tool for a maintenance network consisting of customer companies that buy maintenance services and various supplier companies. Each network member can input its own cost and profit data related to the maintenance services of one item; the model then calculates the net present values of the maintenance costs and profits and presents them from the points of view of all the network members. The thesis indicates that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and constructed for the needs of a single company without the network perspective. The developed LCM is a proper tool for decision making on maintenance services in a network environment; it enables analysing the past and making scenarios for the future, and it offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step process for designing a life-cycle costing model in a network environment; this process defines the model components and structure through iteration and the exploitation of user feedback, and the same method can be followed to develop other models. The thesis contributes to the literature on the value and value elements of maintenance services. It examines the value of maintenance services from the perspectives of different maintenance network members and presents established value element lists for the customer and the service provider. These lists make value visible in the maintenance operations of a networked business. The LCM, combined with value thinking, promotes the notion of maintenance moving from a "cost maker" towards a "value creator".
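The core of such an LCM is a net present value calculation per network member. The minimal sketch below assumes illustrative yearly cash flows and a discount rate; the actual cost and profit categories of the thesis model are not reproduced here.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 = first element)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Yearly maintenance costs (-) and profits (+) for one item, per network member.
# The figures are invented for illustration.
members = {
    "customer":         [-120.0, -80.0, -85.0, -90.0, -95.0],
    "service_provider": [  60.0,  55.0,  58.0,  61.0,  64.0],
}

rate = 0.08  # assumed discount rate for the strategic planning period
for member, flows in members.items():
    print(f"{member}: NPV = {npv(flows, rate):.1f}")
```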
Abstract:
The objective of this Master's thesis was divided into two parts. The first was to construct a calculation model for inbound logistics. The second was to propose an optimal solution for one product family of the commissioning company using the constructed model. The theoretical part of the thesis presents the key factors affecting supply chain and inbound logistics management. In addition, the supply chain inventory calculations central to the constructed model are presented with the help of theory and example calculations. As results, the constructed calculation model and the key results of the inbound logistics calculations made with it for the company's two manufacturing sites are presented. The detailed results of the calculations are not included in the thesis. The optimal solution proposals for the product family at each manufacturing site were formed on the basis of an analysis of the calculations.
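The abstract does not specify which inventory formulas the model uses; as a hedged illustration only, the sketch below shows two standard inbound-logistics inventory calculations (economic order quantity and safety stock) of the kind such a model typically contains. All figures are invented.

```python
from math import sqrt

def eoq(demand_per_year, order_cost, holding_cost):
    """Classic economic order quantity, a standard building block in
    inbound-logistics inventory calculations."""
    return sqrt(2.0 * demand_per_year * order_cost / holding_cost)

def safety_stock(z, sigma_demand, lead_time):
    """Safety stock for a service-level factor z, demand standard deviation
    per period sigma_demand, and lead time in periods."""
    return z * sigma_demand * sqrt(lead_time)

# Illustrative figures for one product family at one manufacturing site.
print(eoq(12000, 150.0, 2.5))        # order quantity in units
print(safety_stock(1.65, 40.0, 2))   # ~95% service level, 2-period lead time
```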
Abstract:
The aim of this thesis was to identify problem areas in an existing process description and in process-oriented working, and at the same time to propose a new model and improvements. With the new model, the project phase of developer contracting can be perceived more clearly, and personal task descriptions become more precise. Another important aim was to give process-oriented working a more prominent role in the company. The theoretical part presents viewpoints that affect the efficient operation of a company: quality thinking, quality management, processes and their management, the related standard, and the alternative scopes of process renewal give a good picture of everything that process-oriented operation affects and of the factors behind efficient operation. The empirical part applies the techniques presented in the theoretical part to the creation of the new model. Many problems were observed in identifying process-oriented operation, along with vagueness in the task descriptions. As a result, a proposal for further development work was made, and various viewpoints and indicators for measuring process efficiency were presented.
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny is the intermittency that turbulence models produce; wavelets are used as a mathematical tool to study these intermittent activities. The first section introduces wavelets and wavelet transforms as a mathematical tool; the basic properties of turbulence are also discussed, and classical methods for modeling turbulent flows are explained. Wavelets are used both to model turbulence and to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency obtained in a shell model. Wavelets are localized in both space (time) and scale, and are therefore suitable candidates for studying the singular bursts that interrupt the calm periods of an energy flow through various scales. The study concerns two questions: the frequency of occurrence and the intensity of the singular bursts at various Reynolds numbers. The results indicated that the singularities become more local as the Reynolds number increases, and also when the shell number is increased at a given Reynolds number. The study revealed that the singular bursts are more frequent at Re ~ 10^7 than in the other cases with lower Re. The intermittency of the bursts was similar for the cases with Re ~ 10^6 and Re ~ 10^5, but in the case with Re ~ 10^4 the bursts occurred after a long waiting time in a different fashion, so that the behaviour could not be scaled with the higher-Re cases.
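As an illustration of the analysis method, not of the GOY model itself, the sketch below computes a continuous wavelet transform with a Ricker (Mexican hat) wavelet and flags time instants where large coefficients persist across scales, i.e., candidate singular bursts. The signal and threshold are synthetic placeholders.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet of width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1.0 - x**2) * np.exp(-x**2 / 2.0) * (2.0 / (np.sqrt(3.0 * a) * np.pi**0.25))

def cwt(signal, scales):
    """Continuous wavelet transform by direct convolution; rows = scales."""
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Synthetic "energy flux" signal: calm background with two singular bursts.
rng = np.random.default_rng(0)
sig = 0.1 * rng.standard_normal(1024)
sig[300] += 5.0
sig[700:703] += 4.0

coeffs = cwt(sig, scales=np.arange(1, 32))
# Bursts show up as columns with large coefficients across many scales;
# their count gives the frequency, their magnitude the intensity.
burst_idx = np.nonzero(np.abs(coeffs[:8]).max(axis=0) > 1.0)[0]
print(burst_idx)
```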
Abstract:
The aim of this thesis was to design and implement production optimization for a combined heat and power (CHP) plant, with production profitability as the optimization criterion. The goal was an optimization model that takes into account, in particular, changes in the district heat demand forecast and fluctuations in the electricity spot price. The most essential production criterion is meeting the district heat load, estimated from the demand forecast, as efficiently and economically as possible. The most significant criteria for electricity production turned out to be its predictability and its maximization within the limits set by the electricity spot price. The optimization program is not intended to be connected directly to the power plant's control system; it is meant to be a separate tool for the production planner. Production planning itself is often driven by more varied criteria than production profitability alone; the program does not weight these different criteria, which are left to the planner's judgment. The result was an optimization program that calculates the total revenues of selected production alternatives on the basis of different district heat demand forecasts and electricity spot price forecasts.
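A minimal sketch of the kind of total-revenue comparison the program performs, assuming hourly district heat demand and spot price forecasts and a set of production alternatives described by a power-to-heat ratio and a total efficiency; all names and figures are illustrative, not the plant's data.

```python
# Hourly forecasts for one planning period (illustrative figures).
heat_demand = [55.0, 50.0, 48.0, 52.0, 60.0, 70.0]   # MW district heat
spot_price  = [30.0, 28.0, 25.0, 35.0, 45.0, 50.0]   # EUR/MWh electricity
heat_price  = 20.0                                    # EUR/MWh heat
fuel_cost   = 15.0                                    # EUR/MWh fuel

# Production alternatives: (name, power-to-heat ratio, total efficiency).
alternatives = [("back-pressure", 0.45, 0.88), ("reduced-load", 0.40, 0.85)]

for name, alpha, eta in alternatives:
    revenue = 0.0
    for q_heat, p_el in zip(heat_demand, spot_price):
        p_power = alpha * q_heat                 # MW electricity from the heat load
        fuel = (q_heat + p_power) / eta          # MW fuel input needed
        revenue += q_heat * heat_price + p_power * p_el - fuel * fuel_cost
    print(f"{name}: total margin {revenue:.0f} EUR for the period")
```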
Abstract:
Particulate nanostructures are increasingly used for analytical purposes. Such particles are often generated by chemical synthesis from non-renewable raw materials. The generation of uniform nanoscale particles is challenging, and the particle surfaces must be modified to make the particles biocompatible and water-soluble. Usually, nanoparticles are functionalized with binding molecules (e.g., antibodies or their fragments) and, if needed, a label substance. Overall, producing nanoparticles for use in bioaffinity assays is a multistep process requiring several manufacturing and purification steps. This study describes a biological method of generating functionalized protein-based nanoparticles with specific binding activity on the particle surface and label activity inside the particles. Traditional chemical bioconjugation of the particle and the specific binding molecules is replaced with genetic fusion of the binding molecule gene and the particle backbone gene. The entity of the particle shell and the binding moieties is synthesized from generic raw materials by bacteria, and the fermentation is combined with a simple purification method based on inclusion bodies; the label activity is introduced during the purification. The process results in particles that are ready to use as reagents in bioaffinity assays. Apoferritin was used as the particle body, and the system was demonstrated using three different binding moieties: a small protein, a peptide, and a single-chain Fv antibody fragment, which represents a complex protein including a disulfide bridge. When needed, Eu3+ was used as the label substance. The results showed that the production system yielded pure protein preparations and that the particles were of homogeneous size when visualized with transmission electron microscopy. The passively introduced label was stably associated with the particles, and the binding molecules genetically fused to the particle specifically bound their target molecules. The functionality of the particles in bioaffinity assays was successfully demonstrated with two types of assays: as labels and in a particle-enhanced agglutination assay. This biological production procedure has many advantages that make the process especially suited for applications with frequent and recurring requirements for homogeneous functional particles. The production process of ready, functional, and water-soluble particles follows the principles of "green chemistry" and is upscalable, fast, and cost-effective.
Abstract:
This thesis examined the current state of product phase-out activities and the use of a product life-cycle management process model in a large telecom operator. The aim was to study which factors slow down phase-outs and how this slowness affects the costs of ramp-downs, and also to find out how the product life-cycle process model is used in the case company, especially for product phase-outs, and how its use can be beneficial. To answer these questions, ten product managers involved in product phase-outs were interviewed in the business customer and production units, product information was gathered from several of the company's experts, and theoretical background on reference models and product phase-out strategies, among other topics, was drawn from the literature. The reasons for slow product phase-outs were listed. It was found that many of the problems could be solved by more effective product and customer data management, identification of products to be discontinued, closer monitoring of mature products, and additional training for the sales force. For some products, a longer phase-out can increase the related costs, but this is not the case for all products. It was also found that some old products cannot be fully replaced by new ones because of the risks associated with the new products. The results further showed that the process model is not used in all product phase-out cases; the reasons for this must be clarified in further studies before the model can be improved.
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines the manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance in experiments with colour space selection, illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database comprising true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis, and it follows medical decision-making practice, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made public on the web to set baseline results for the automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented, since normal eye fundus structures are fundamental in the characterisation of DR.
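A minimal sketch of the one-class idea described above, using scikit-learn's GaussianMixture: the density of the lesion class alone is estimated, and new pixels are classified by thresholding the log-density. The colour features, component count, and threshold rule are placeholders; in the thesis the operating point would come from ROC analysis on a benchmarking database.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Colour features (e.g., RGB or a corrected colour space) of pixels that
# experts annotated as the lesion type of interest -- synthetic stand-ins here.
lesion_pixels = rng.normal(loc=[0.6, 0.2, 0.1], scale=0.05, size=(500, 3))

# Estimate the probability density of the lesion class only; no model of
# "everything else" is needed, which is the one-class idea in the abstract.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(lesion_pixels)

# Classify new pixels by thresholding the log-density; the quantile rule
# below is an arbitrary placeholder for an ROC-derived operating point.
test_pixels = rng.uniform(0.0, 1.0, size=(10, 3))
scores = gmm.score_samples(test_pixels)
threshold = np.quantile(gmm.score_samples(lesion_pixels), 0.05)
print(scores > threshold)  # True = pixel consistent with the lesion class
```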
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
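A minimal random-walk Metropolis sketch of the kind of MCMC sampling described above, applied to an illustrative exponential-decay model with Gaussian noise; the target model, priors, and step size are placeholders, not those of the thesis applications.

```python
import numpy as np

def log_posterior(theta, x, y, sigma=0.5):
    """Unnormalized log-posterior for an illustrative model
    y = theta0 * exp(-theta1 * x) with Gaussian noise and flat priors on theta > 0."""
    if np.any(theta <= 0):
        return -np.inf
    resid = y - theta[0] * np.exp(-theta[1] * x)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(log_post, theta0, n_samples, step, rng):
    """Random-walk Metropolis: the basic MCMC sampler for exploring the
    whole parameter distribution rather than a point estimate."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.7 * x) + 0.1 * rng.standard_normal(x.size)
chain = metropolis(lambda t: log_posterior(t, x, y), [1.0, 1.0], 5000, 0.05, rng)
print(chain[1000:].mean(axis=0))  # posterior means after burn-in
```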
Abstract:
Traditional econometric approaches to modeling the dynamics of equity and commodity markets have made great progress in the past decades. However, they assume rationality among the economic agents and do not capture the dynamics that produce extreme events (black swans) arising from deviations from the rationality assumption. The purpose of this study is to simulate the dynamics of silver markets using the novel computational market dynamics approach. To this end, daily closing prices of spot silver from 1 March 2000 to 1 March 2013 were simulated with the Jabłonska-Capasso-Morale (JCM) model. The maximum likelihood approach was employed to calibrate the model to the acquired data. Statistical analysis of the simulated series with respect to the actual one was conducted to evaluate model performance. The model captures well the animal-spirits dynamics present in the data under evaluation.
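The JCM model equations are not reproduced in the abstract, so the sketch below shows only a generic maximum-likelihood calibration loop of the kind described, with a placeholder mean-reverting log-price model and synthetic data standing in for the silver series.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, prices):
    """Illustrative one-step-ahead Gaussian likelihood for a mean-reverting
    log-price model; a stand-in for the JCM model, whose equations are not
    reproduced here."""
    kappa, mu, sigma = params
    if sigma <= 0:
        return np.inf
    x = np.log(prices)
    pred = x[:-1] + kappa * (mu - x[:-1])       # model's one-step prediction
    resid = x[1:] - pred
    return 0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

# Synthetic daily closing prices in place of the 2000-2013 silver series.
rng = np.random.default_rng(3)
prices = np.exp(np.cumsum(0.01 * rng.standard_normal(1000)) + 3.0)

res = minimize(neg_log_likelihood, x0=[0.1, 3.0, 0.02], args=(prices,),
               method="Nelder-Mead")
print(res.x)  # calibrated (kappa, mu, sigma)
```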
Abstract:
Concentrated-acid-catalysed hydrolysis makes it possible to produce valuable sugars from lignocellulose. The acid acting as catalyst can be reused in the hydrolysis if it can be separated from the sugars without neutralization. The aim of this Bachelor's thesis was to determine whether the acid retardation technique is suitable for fractionating concentrated-acid hydrolysates; the technique was compared with the electrolyte exclusion technique. The literature part covers the theory of acid retardation and electrolyte exclusion and reviews studies related to both techniques. In the experimental part, batch chromatography experiments were carried out using a synthetic feed solution containing sulphuric acid, acetic acid, glucose, and xylose. Four different anion exchange resins and one cation exchange resin were used as separation materials. Based on the experiments, the effects of the anion exchange resin type and the column loading on the separation achieved with acid retardation were studied, and electrolyte exclusion was compared with acid retardation. According to the results, the sulphuric acid was diluted in acid retardation by a factor of up to 20 relative to the solution fed into the chromatography column, regardless of the column loading and the anion exchange resin. Because of this dilution, acid retardation was not suitable for fractionating lignocellulose-based concentrated-acid hydrolysates. With electrolyte exclusion, the dilution of the sulphuric acid was significantly smaller, and electrolyte exclusion was therefore found to be better suited than acid retardation for fractionating lignocellulose-based concentrated-acid hydrolysates.
Abstract:
The development of carbon capture and storage (CCS) has raised interest in novel fluidised bed (FB) energy applications in which limestone can be utilized for SO2 and/or CO2 capture. The conditions in the new applications differ from the traditional atmospheric and pressurised circulating fluidised bed (CFB) combustion conditions in which limestone is successfully used for SO2 capture. In this work, a detailed physical single-particle model for limestone, with a description of the mass and energy transfer inside the particle, was developed. The novelty of this model is that it takes into account simultaneous reactions, changing conditions, and the effect of advection. In particular, the capability to study the cyclic behaviour of limestone on both sides of the calcination-carbonation equilibrium curve is important in the novel conditions. The significance of including advection or assuming diffusion control was studied for calcination; especially, the effect of advection on the calcination reaction in the novel combustion atmosphere was shown. The model was tested against experimental data: sulphur capture was studied in a laboratory reactor under different fluidised bed conditions, and different conversion levels and sulphation patterns were examined in different atmospheres for one limestone type. The conversion curves were well predicted with the model, and the mechanisms leading to the conversion patterns were explained with the model simulations. It was also evaluated whether the transient environment affects the limestone behaviour compared with averaged conditions, and in which conditions the effect is largest. The difference between the averaged and transient conditions was notable only in conditions close to the calcination-carbonation equilibrium curve. The results of this study suggest that the development of a simplified particle model requires a proper understanding of the physical and chemical processes taking place in the particle during the reactions. The results will be needed when analysing complex limestone reaction phenomena or when developing the description of limestone behaviour in comprehensive 3D process models. In order to transfer the experimental observations to furnace conditions, the relevant mechanisms need to be understood before the important ones can be selected for a 3D process model. This study revealed the sulphur capture behaviour under transient oxy-fuel conditions, which is important when the oxy-fuel CFB process and process model are developed.
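A highly simplified sketch of one building block of such a single-particle model: an explicit time step of spherical diffusion of SO2 into the particle with a first-order sulphation sink. The full model described above also covers energy transfer, advection, and simultaneous reactions; all parameter values here are placeholders.

```python
import numpy as np

# Radial grid for a spherical limestone particle (metres).
N, R = 50, 0.5e-3
r = np.linspace(0, R, N)
dr = r[1] - r[0]

D_eff = 1.0e-6     # effective SO2 diffusivity in the porous particle, m^2/s (assumed)
k_sulf = 5.0       # first-order sulphation rate constant, 1/s (assumed)
c_bulk = 1.0e-3    # SO2 concentration at the particle surface, mol/m^3 (assumed)

c = np.zeros(N)             # SO2 concentration profile, initially zero inside
dt = 0.2 * dr**2 / D_eff    # stable explicit time step

def step(c):
    """One explicit time step of spherical diffusion with a reaction sink:
    dc/dt = D_eff * (1/r^2) d/dr (r^2 dc/dr) - k_sulf * c"""
    cn = c.copy()
    for i in range(1, N - 1):
        lap = ((c[i+1] - 2*c[i] + c[i-1]) / dr**2
               + (2.0 / r[i]) * (c[i+1] - c[i-1]) / (2*dr))
        cn[i] = c[i] + dt * (D_eff * lap - k_sulf * c[i])
    cn[0] = cn[1]        # symmetry condition at the centre
    cn[-1] = c_bulk      # fixed surface concentration
    return cn

for _ in range(2000):
    c = step(c)
print(c[::10])  # SO2 penetrates only partly: sulphation concentrates near the surface
```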