12 results for Single particle state
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The development of carbon capture and storage (CCS) has raised interest in novel fluidised bed (FB) energy applications. In these applications, limestone can be utilized for SO2 and/or CO2 capture. The conditions in the new applications differ from the traditional atmospheric and pressurised circulating fluidised bed (CFB) combustion conditions in which limestone is successfully used for SO2 capture. In this work, a detailed physical single particle model for limestone was developed, with a description of the mass and energy transfer inside the particle. The novelty of this model was that it takes into account simultaneous reactions, changing conditions, and the effect of advection. In particular, the capability to study the cyclic behaviour of limestone on both sides of the calcination-carbonation equilibrium curve is important in the novel conditions. The significance of including advection, versus assuming diffusion control, was studied in calcination; in particular, the effect of advection on the calcination reaction in the novel combustion atmosphere was shown. The model was tested against experimental data: sulphur capture was studied in a laboratory reactor in different fluidised bed conditions. Different conversion levels and sulphation patterns were examined in different atmospheres for one limestone type. The conversion curves were well predicted with the model, and the mechanisms leading to the conversion patterns were explained with the model simulations. In this work, it was also evaluated whether the transient environment has an effect on the limestone behaviour compared to averaged conditions, and in which conditions the effect is largest. The difference between the averaged and transient conditions was notable only in conditions close to the calcination-carbonation equilibrium curve.
The results of this study suggest that the development of a simplified particle model requires a proper understanding of the physical and chemical processes taking place in the particle during the reactions. The results of the study will be needed when analysing complex limestone reaction phenomena or when developing the description of limestone behaviour in comprehensive 3D process models. In order to transfer the experimental observations to furnace conditions, the relevant mechanisms need to be understood before the important ones can be selected for a 3D process model. This study revealed the sulphur capture behaviour under transient oxy-fuel conditions, which is important when the oxy-fuel CFB process and process model are developed.
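The calcination-carbonation equilibrium curve referred to above can be sketched numerically. The correlation below is a widely quoted empirical form for the equilibrium CO2 partial pressure over CaO/CaCO3; its coefficients are illustrative assumptions, not values taken from the thesis.

```python
import math

def co2_equilibrium_pressure(T_kelvin: float) -> float:
    """Equilibrium CO2 partial pressure (atm) over CaO/CaCO3.

    Uses a widely quoted empirical correlation; the coefficients
    here are illustrative, not those used in the thesis.
    """
    return 4.137e7 * math.exp(-20474.0 / T_kelvin)

def calcines(T_kelvin: float, p_co2_atm: float) -> bool:
    """CaCO3 -> CaO + CO2 proceeds when the local CO2 partial
    pressure lies below the equilibrium value at this temperature;
    above it, the reverse (carbonation) side of the curve applies."""
    return p_co2_atm < co2_equilibrium_pressure(T_kelvin)
```

With these coefficients the equilibrium pressure reaches about 1 atm near 1170 K, which is consistent with the commonly quoted calcination temperature of roughly 900 °C at atmospheric CO2 pressure.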
Abstract:
In this thesis, three experiments with atomic hydrogen (H) at low temperatures (T < 1 K) are presented. Experiments were carried out with two- (2D) and three-dimensional (3D) H gas, and with H atoms trapped in a solid H2 matrix. The main focus of this work is on interatomic interactions, which have certain specific features in the three systems considered. A common feature is the very high density of atomic hydrogen; the systems are close to quantum degeneracy. Short-range interactions in collisions between atoms are important in gaseous H. The system of H in H2 differs dramatically because the atoms remain fixed in the H2 lattice, and its properties are governed by long-range interactions with the solid matrix and with other H atoms. The main tools in our studies were the methods of magnetic resonance, with electron spin resonance (ESR) at 128 GHz being used as the principal detection method. For the first time in experiments with H in high magnetic fields and at low temperatures, we combined ESR and NMR to perform electron-nuclear double resonance (ENDOR) as well as coherent two-photon spectroscopy. This allowed us to distinguish between different types of interactions in the magnetic resonance spectra. Experiments with 2D H gas utilized the thermal compression method in a homogeneous magnetic field, developed in our laboratory. In this work, methods were developed for direct studies of 3D H at high density, and for creating high-density samples of H in H2. We measured magnetic resonance line shifts due to collisions in the 2D and 3D H gases. First, we observed that the cold collision shift in a 2D H gas composed of atoms in a single hyperfine state is much smaller than predicted by mean-field theory. This motivated us to carry out similar experiments with 3D H. In 3D H the cold collision shift was found to be an order of magnitude smaller for atoms in a single hyperfine state than for a mixture of atoms in two different hyperfine states.
The collisional shifts were found to be in fair agreement with theory, which takes into account the symmetrization of the wave functions of the colliding atoms. The origin of the small shift in 2D H composed of single-hyperfine-state atoms is not yet understood. The measurement of the shift in 3D H provides an experimental determination of the difference between the scattering lengths of ground-state atoms. The experiment with H atoms captured in an H2 matrix at temperatures below 1 K originated from our work with H gas. We found that samples of H in H2 were formed during recombination of gas-phase H, enabling sample preparation at temperatures below 0.5 K. Alternatively, we created the samples by electron-impact dissociation of H2 molecules in situ in the solid. By the latter method we reached the highest density of H atoms reported so far, 3.5(5)×10¹⁹ cm⁻³. The H atoms were found to be stable for weeks at temperatures below 0.5 K. The observation of dipolar interaction effects provides a verification of the density measurement. Our results point to two different sites for H atoms in the H2 lattice. The steady-state nuclear polarizations of the atoms were found to be non-thermal. The possibility of further increasing the impurity H density is considered. At higher densities and lower temperatures it might be possible to observe phenomena related to quantum degeneracy in the solid.
Abstract:
Computational fluid dynamics (CFD) modeling is an important tool in designing new combustion systems. By using CFD modeling, entire combustion systems can be modeled and the emissions and the performance can be predicted. CFD modeling can also be used to develop new and better combustion systems from an economic and environmental point of view. In CFD modeling of solid fuel combustion, the combustible fuel is generally treated as single fuel particles. One of the limitations of CFD modeling concerns the sub-models describing the combustion of single fuel particles. Available models in the scientific literature are in many cases not suitable as sub-models for CFD modeling, since they depend on a large number of input parameters and are computationally heavy. In this thesis, CFD-applicable models are developed for the combustion of single fuel particles. The single particle models can be used to improve the combustion performance in various combustion devices or to develop completely new technologies. The investigated fields are the oxidation of carbon (C) and nitrogen (N) in char residues from solid fuels. Modeled char-C oxidation rates are compared to experimental oxidation rates for a large number of pulverized solid fuel chars under relevant combustion conditions. The experiments were performed in an isothermal plug flow reactor operating at 1123-1673 K and 3-15 vol.% O2. In the single particle model, the char oxidation is based on apparent kinetics and depends on three fuel-specific parameters: the apparent pre-exponential factor, the apparent activation energy, and the apparent reaction order. The single particle model can be incorporated as a sub-model into a CFD code. The results show that the modeled char oxidation rates are in good agreement with experimental char oxidation rates up to around 70% of burnout.
Moreover, the results show that the activation energy and the reaction order can be assumed to be constant for a large number of bituminous coal chars under conditions limited by the combined effects of chemical kinetics and pore diffusion. Building on this, a new model with only one fuel-specific parameter is developed (Paper III). The results also show that the reaction orders of bituminous coal chars and anthracite chars differ under similar conditions (Paper I and Paper II): reaction orders of bituminous coal chars were found to be one, while reaction orders of anthracite chars were determined to be zero. This difference in reaction orders has not previously been observed in the literature and should be considered in future char oxidation models. One of the most frequently used comprehensive char oxidation models could not explain the difference in the reaction orders. In the thesis (Paper II), a modification to the model is suggested in order to explain the difference in reaction orders between anthracite chars and bituminous coal chars. Two single particle models are also developed for the NO formation and reduction during the oxidation of single biomass char particles. In the models, the char-N is assumed to be oxidized to NO, and the NO is partly reduced inside the particle. The first model (Paper IV) is based on the concentration gradients of NO inside and outside the particle, and the second model is simplified to such an extent that it is based on apparent kinetics and can be incorporated as a sub-model into a CFD code (Paper V). Modeled NO release rates from both models were in good agreement with experimental measurements from a single particle reactor of quartz glass operating at 1173-1323 K and 3-19 vol.% O2. In the future, the models can be used to reduce NO emissions in new combustion systems.
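The apparent-kinetics rate law described above, with its three fuel-specific parameters, can be sketched as follows. The parameter values are hypothetical placeholders, not fitted values from the papers; the sketch only illustrates how the reaction order separates bituminous coal chars (order one) from anthracite chars (order zero).

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def char_oxidation_rate(T, p_o2, A, E_a, n):
    """Apparent-kinetics char oxidation rate,
    r = A * exp(-E_a / (R T)) * p_O2**n,
    with the three fuel-specific parameters: apparent
    pre-exponential factor A, apparent activation energy E_a,
    and apparent reaction order n. Values used below are
    illustrative assumptions only."""
    return A * math.exp(-E_a / (R * T)) * p_o2 ** n

# Illustrative comparison at 1273 K and two O2 partial pressures:
# n = 1 (bituminous-type): rate scales linearly with O2 pressure
# n = 0 (anthracite-type): rate is independent of O2 pressure
r_bit_low  = char_oxidation_rate(1273, 0.05, A=1e5, E_a=130e3, n=1)
r_bit_high = char_oxidation_rate(1273, 0.15, A=1e5, E_a=130e3, n=1)
r_ant_low  = char_oxidation_rate(1273, 0.05, A=1e5, E_a=130e3, n=0)
r_ant_high = char_oxidation_rate(1273, 0.15, A=1e5, E_a=130e3, n=0)
```

Tripling the O2 partial pressure triples the order-one rate while leaving the order-zero rate unchanged, which is the qualitative difference the thesis reports between the two char types.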
Abstract:
In this study, equations for the calculation of erosion wear caused by ash particles on convective heat exchanger tubes of steam boilers are presented. A new, three-dimensional test arrangement was used in testing the erosion wear of convective heat exchanger tubes of steam boilers. Using the sleeve method, three different tube materials and three tube constructions could be tested. New results were obtained from the analyses. The main mechanisms of erosion wear phenomena, and erosion wear as a function of collision conditions and material properties, have been studied. Properties of fossil fuels have also been presented. When burning solid fuels, such as pulverized coal and peat, in steam boilers, most of the ash is entrained by the flue gas in the furnace. In bubbling and circulating fluidized bed boilers, the particle concentration in the flue gas is high because of bed material entrained in the flue gas. Hard particles, such as sharp-edged quartz crystals, cause erosion wear when colliding with convective heat exchanger tubes and the rear wall of the steam boiler. The most important ways to reduce erosion wear in steam boilers are to keep the velocity of the flue gas moderate and to prevent channelling of the ash flow into a certain part of the cross-section of the flue gas channel, especially near the back wall. This can be done by constructing the boiler with the following components. Screen plates can be used to make the velocity and ash flow distributions more even over the cross-section of the channel. Shield plates and plate-type constructions in superheaters can also be used. Erosion testing was conducted with three types of tube constructions: a single tube row, an in-line tube bank with six tube rows, and a staggered tube bank with six tube rows. Three flow velocities and two particle concentrations were used in the tests, which were carried out at room temperature. Three particle materials were used: quartz, coal ash and peat ash particles.
Mass loss, diameter loss and wall thickness loss measurements of the test sleeves were taken. Erosion wear as a function of flow conditions, tube material and tube construction was analyzed by single-variable linear regression analysis. In developing the erosion wear calculation equations, multi-variable linear regression analysis was used. In the staggered tube bank, erosion wear had its maximum value in tube row 2 and a local maximum in row 5; in rows 3, 4 and 6, the erosion rate was low. In the in-line tube bank, on the other hand, the minimum erosion rate occurred in tube row 2 and the erosion rate increased in subsequent rows, so that in a six-row tube bank the maximum value occurred in row 6.
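The multi-variable linear regression used above to develop the erosion wear equations can be sketched as follows. The power-law model form is a common choice in the erosion literature, and both it and the data below are hypothetical illustrations, not measurements or equations from the study.

```python
import numpy as np

# Hypothetical measurements: flue-gas velocity (m/s), particle
# concentration (g/m^3), and measured erosion rate (arbitrary units).
velocity = np.array([10.0, 10.0, 15.0, 15.0, 20.0, 20.0])
conc     = np.array([ 1.0,  2.0,  1.0,  2.0,  1.0,  2.0])
erosion  = np.array([ 2.1,  4.0,  6.8, 13.5, 16.2, 31.9])

# A power-law model E = k * v**a * c**b becomes linear in log space:
#   log E = log k + a * log v + b * log c
# so the exponents can be found by multi-variable linear regression.
X = np.column_stack([np.ones_like(velocity),
                     np.log(velocity),
                     np.log(conc)])
coef, *_ = np.linalg.lstsq(X, np.log(erosion), rcond=None)
log_k, a, b = coef
```

For the synthetic data above the fit recovers a velocity exponent near 3 and a concentration exponent near 1, the kind of fuel- and geometry-specific exponents such an analysis would deliver.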
Abstract:
The Large Hadron Collider, constructed at the European Organization for Nuclear Research, CERN, is the largest single measuring instrument ever built and currently the world's most powerful particle accelerator. The Large Hadron Collider includes six different experiment stations, one of which is called the Compact Muon Solenoid, or CMS. The main purpose of the CMS is to track and study the particles produced in proton-proton collisions. The primary detectors utilized in the CMS are resistive plate chambers (RPCs). To obtain data from these detectors, a link system has been designed. The main idea of the link system is to receive data from the detector front-end electronics in parallel form and to transmit it onwards in serial form via an optical fiber. The system is mostly ready and in place. However, a problem has occurred with the innermost RPC detectors, located in the sector labeled RE1/1: the transmission lines for parallel data suffer from signal integrity issues over long distances. As a solution, a new version of the link system has been devised, one that fits into a smaller space and can be located within the CMS, closer to the detectors. So far, this RE1/1 link system has been completed only partially, with just the mechanical design and casing done. In this thesis, the link system electronics for the RE1/1 sector have been designed by modifying the existing link system concept to better meet the requirements of the RE1/1 sector. In addition to completing the prototype of the RE1/1 link system electronics, some testing of the system has been done to ensure the functionality of the design.
Abstract:
In the last two decades of studying the Solar Energetic Particle (SEP) phenomenon, intensive emphasis has been put on how, when, and where these SEPs are injected into interplanetary space. It is well known that SEPs are related to solar flares and CMEs. However, the role of each in the acceleration of SEPs has been under debate ever since the major role was reattributed, step by step, from flares to CMEs after the Skylab mission, which started the era of spaceborne CME observations. Since then, the shock wave generated by powerful CMEs at between 2 and 5 solar radii has been considered the major accelerator. The current paradigm interprets the prolonged proton intensity-time profile in gradual SEP events as a direct effect of SEPs accelerated by the shock wave propagating in the interplanetary medium. Thus the powerful CME is thought of as a starter for the acceleration, and its shock wave as a continuing accelerator, resulting in such an intensity-time profile. It is generally believed that a single powerful CME, which might or might not be associated with a flare, is always the reason behind such gradual events.
In this work we use the Energetic and Relativistic Nuclei and Electron (ERNE) instrument on board the Solar and Heliospheric Observatory (SOHO) to present an empirical study showing the possibility of multiple accelerations in SEP events. We first found 18 double-peaked SEP events by examining 88 SEP events. The peaks in the intensity-time profile were separated by 3-24 hours. We divided the SEP events into four groups according to possible multiple acceleration, and in one of these groups we find evidence for multiple acceleration in the velocity dispersion and in a change in the abundance ratio associated with the transition to the second peak. We then explored the intensity-time profiles of all SEP events during solar cycle 23 and found that most SEP events are associated with multiple eruptions at the Sun; we call these Multi-Eruption Solar Energetic Particle (MESEP) events. We use data from the Large Angle and Spectrometric Coronagraph (LASCO) on board SOHO to determine the CMEs associated with such events, and data from the YOHKOH and GOES satellites to determine the associated flares. We found four types of MESEP according to the appearance of the peaks in the intensity-time profile over a wide range of energy levels. We found that it is not possible to determine whether the peaks are related to an eruption at the Sun by examining only the anisotropy flux, the He/p ratio and the velocity dispersion. We then chose a rare event in which there is evidence of SEP acceleration from behind a previous CME. This work resulted in a conclusion that is inconsistent with the current SEP paradigm. By examining another MESEP event, we then discovered that energetic particles accelerated by a second CME can penetrate a previous CME-driven decelerating shock. Finally, we report the previous two MESEP events together with two new events and find a common basis for SEPs from a second CME penetrating previous decelerating shocks.
This phenomenon is reported for the first time and is expected to have a significant impact on the modification of the current paradigm of solar energetic particle events.
Abstract:
In the theory part, membrane emulsification was studied. Emulsions are used in many industrial areas. Traditionally, emulsions are prepared using high shear in rotor-stator systems or in high-pressure homogenizer systems. In membrane emulsification, two immiscible liquids are mixed by pressing one liquid through the membrane into the other liquid. With this technique, energy could be saved, more homogeneous droplets could be formed, and the amount of surfactant could be decreased. Ziegler-Natta and single-site catalysts are used in olefin polymerization processes. Nowadays, these catalysts are prepared by traditional mixing emulsification. More homogeneous catalyst particles with a narrower particle size distribution might be prepared with membrane emulsification. The aim of the experimental part was to examine the possibility of preparing a single-site polypropylene catalyst using the membrane emulsification technique. Different membrane materials and solidification techniques for the emulsion were examined. The toluene-PFC phase diagram was also successfully measured during this thesis work and was used for process optimization. The polytetrafluoroethylene (PTFE) membranes had the largest contact angles with toluene and also the biggest difference between the contact angles measured with PFC and with toluene. Despite the contact angle measurement results, no significant difference was noticed between particles prepared using a PTFE membrane and those prepared using a metal sinter. The particle size distributions of the catalysts prepared in these tests were quite wide. This could probably be remedied by using a membrane with a more homogeneous pore size distribution. It is also possible that the solidification rate has an effect on the particle sizes and particle morphology. Of the polymeric membranes compared, PTFE is probably still the best material for the process, as it had the best chemical durability.
Abstract:
Magnetic nanoparticles are very important in modern industry, and these particles are used in many different spheres of life. Nanoparticles have unusual physical and chemical properties connected both with quantum size effects and with the increased role of the surface atoms. The difference between the properties of bulk materials and nanoparticles is seen most clearly in their magnetic properties. The most typical magnetic properties of nanomaterials are superparamagnetism, for cluster sizes from 1 to 10 nm; the single-domain magnetic state of nanoclusters and nanostructures up to 20 nm; magnetization processes connected with magnetic cluster ordering and with cluster forms and sizes; quantum magnetic tunneling effects, in which the magnetization changes in jumps; and giant magnetoresistance effects. Mössbauer spectroscopy is a convenient tool for studying the magnetic properties of iron-containing nanostructures. In this work, a number of nano-sized samples of iron oxides were examined by Mössbauer spectroscopy. The Mössbauer spectra of nanoparticles with various sizes were obtained, and the spectra of iron oxide nanoparticles were compared with the spectra of bulk samples. It was shown how the spectra of iron oxide nanoparticles change depending on the particle size.
Abstract:
New luminometric particle-based methods were developed to quantify protein and to count cells. The developed methods rely on the interaction of the sample with nano- or microparticles and on different principles of detection. In the fluorescence quenching, time-resolved luminescence resonance energy transfer (TR-LRET), and two-photon excitation fluorescence (TPX) methods, the sample prevents the adsorption of labeled protein to the particles. Depending on the system, the addition of the analyte increases or decreases the luminescence. In the dissociation method, the adsorbed protein protects the Eu(III) chelate on the surface of the particles from dissociation at low pH. The experimental setups are user-friendly and rapid and require neither hazardous test compounds nor elevated temperatures. The sensitivity of protein quantification (from 40 to 500 pg of bovine serum albumin in a sample) was 20-500-fold better than that of the most sensitive commercial methods. The quenching method exhibited low protein-to-protein variability, and the dissociation method was insensitive to the assay contaminants commonly found in biological samples. Fewer than ten eukaryotic cells were detected and quantified with all the developed methods under optimized assay conditions. Furthermore, two applications, a method for detecting protein aggregation and a cell viability test, were developed utilizing the TR-LRET method. Protein aggregation could be detected at a concentration (30 μg/L) more than 10,000 times lower than with the known methods of UV240 absorbance and dynamic light scattering. The TR-LRET method was combined with a nucleic acid assay with a cell-impermeable dye to measure the percentage of dead cells in a single-tube test with cell counts below 1000 cells/tube.
Abstract:
This research focused on the operation of a manpower pool within a service business unit in Company X and aimed to identify how the operation should be improved in order to get the most out of it with respect to the future prospects of the service business unit. This was done by analyzing the current state of the manpower pool related operations in terms of project business, project management and business models. The objective was to deepen understanding and to highlight possible areas of improvement. The research was conducted as a qualitative single-case study, also utilizing an action research method; the research approach was a combination of conceptual, action-oriented and constructive approaches. The primary data was collected through a comprehensive literature review and semi-structured theme interviews. The main results described how the manpower pool operates as part of the service business unit in project business by participating in different types of delivery projects; process flows for the project types were mapped. Project management was analyzed especially from the resource management point of view, and an Excel-based skills analysis model was constructed for this purpose. The utilization of operational business models was also studied to define a strategic direction for development activities. The results were benchmarked against two competitors in order to specify lessons to be learnt from their use of operational business models.
Abstract:
Particle Image Velocimetry (PIV) is an optical measuring technique for obtaining velocity information about a flow of interest. With PIV it is possible to obtain two- or three-dimensional velocity vector fields from a measurement area instead of a single point in a flow. The measured flow can be either liquid or gas. PIV is nowadays widely applied to flow field studies. Here, PIV is needed to obtain validation data for the Computational Fluid Dynamics calculation programs that have been used to model blowdown experiments in the PPOOLEX test facility at Lappeenranta University of Technology. In this thesis, PIV and its theoretical background are presented. All the subsystems that can be considered part of a PIV system are also presented in detail. Emphasis is also put on the mathematics behind the image evaluation. The work also included the selection and successful testing of a PIV system, as well as the planning of its installation in the PPOOLEX facility. Already in the preliminary testing, PIV was found to be a good addition to the measuring equipment of the Nuclear Safety Research Unit of LUT. The installation in the PPOOLEX facility was successful even though it involved many restrictions. All parts of the PIV system worked and were found to be appropriate for the planned use. The results and observations presented in this thesis are a good background for further PIV use.
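The image evaluation mathematics emphasized above centres on cross-correlating small interrogation windows between two exposures: the location of the correlation peak gives the local particle displacement. A minimal sketch of FFT-based, integer-pixel displacement estimation (without the sub-pixel peak fitting used in practice) might look like this:

```python
import numpy as np

def piv_displacement(win_a: np.ndarray, win_b: np.ndarray):
    """Estimate the integer-pixel displacement between two
    interrogation windows via FFT-based circular cross-correlation,
    the core step of PIV image evaluation (no sub-pixel fit)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Cross-correlation theorem: corr = IFFT(conj(FFT(a)) * FFT(b))
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices into the displacement range [-N/2, N/2).
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic check: a random particle pattern shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))
```

Repeating this window-by-window over the image pair yields the full velocity vector field once the pixel displacements are scaled by the magnification and the laser pulse separation.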
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem involves inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, and in this kind of proposal the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
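The particle filtering discussed above can be illustrated with a minimal bootstrap particle filter, in which the transition density serves as the importance distribution (the simplest choice). The one-dimensional random-walk model and all parameter values below are assumptions for illustration, not models treated in the thesis.

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, q=0.1, r=0.5, rng=None):
    """Bootstrap particle filter for the toy state space model
        x_t = x_{t-1} + N(0, q^2)   (state transition)
        y_t = x_t     + N(0, r^2)   (measurement)
    Returns the filtered posterior mean of x_t at each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)  # prior draw
    means = []
    for y in ys:
        # Propagate particles through the transition density,
        # which doubles as the importance distribution here.
        particles = particles + rng.normal(0.0, q, n_particles)
        # Weight each particle by the measurement likelihood.
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Noisy observations of a roughly constant state near 2.0:
ys = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.05, 1.95]
means = bootstrap_particle_filter(ys)
```

Because the importance distribution here ignores the current measurement, weights can degenerate quickly in less forgiving models, which is exactly why the choice of importance distribution matters for the convergence results the thesis analyzes.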