Abstract:
Nucleation is the first step in the formation of a new phase inside a mother phase. Two main forms of nucleation can be distinguished. In homogeneous nucleation, the new phase forms within a uniform substance; in heterogeneous nucleation, it emerges on a pre-existing surface (a nucleation site). Nucleation is the source of about 30% of all atmospheric aerosol, which in turn has noticeable health effects and a significant impact on climate. Nucleation can be observed in the atmosphere, studied experimentally in the laboratory, and is the subject of ongoing theoretical research. This thesis attempts to be a link between experiment and theory. By comparing simulation results to experimental data, the aim is to (i) better understand the experiments and (ii) determine where the theory needs improvement. Computational fluid dynamics (CFD) tools were used to simulate homogeneous one-component nucleation of n-alcohols in argon and helium as carrier gases, homogeneous nucleation in the water-sulfuric acid system, and heterogeneous nucleation of water vapor on silver particles. In the nucleation of n-alcohols, vapor depletion, the carrier gas effect, and the carrier gas pressure effect were evaluated, with a special focus on the pressure effect, whose dependence on vapor and carrier gas properties could be specified. The investigation of nucleation in the water-sulfuric acid system included a thorough analysis of the experimental setup, determining the flow conditions, vapor losses, and the nucleation zone. Experimental nucleation rates were compared to various theoretical approaches. We found that none of the considered theoretical descriptions of nucleation captured the role of water in the process at all relative humidities. Heterogeneous nucleation was studied via the activation of silver particles in a TSI 3785 particle counter, which uses water as its working fluid. The role of the contact angle was investigated, and the influence of incoming particle concentrations and of homogeneous nucleation on the counting efficiency was determined.
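For orientation, theoretical descriptions of the kind compared here typically build on classical nucleation theory, which writes the homogeneous nucleation rate as a barrier-crossing law. The formula below is the standard textbook form in generic notation, not necessarily the exact variant used in the thesis:

\[ J = K \exp\!\left(-\frac{\Delta G^*}{k_B T}\right), \qquad \Delta G^* = \frac{16\pi \sigma^3 v^2}{3\,(k_B T \ln S)^2}, \]

where \(K\) is a kinetic prefactor, \(\sigma\) the surface tension, \(v\) the molecular volume in the liquid, and \(S\) the saturation ratio of the vapor. The steep dependence of \(J\) on \(S\) and on temperature is what makes the carrier gas and pressure effects discussed above experimentally delicate.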
Abstract:
Time-dependent backgrounds in string theory provide a natural testing ground for dynamical phenomena that cannot be reliably addressed in usual quantum field theories and cosmology. A good, tractable example to study is the rolling tachyon background, which describes the decay of an unstable brane in bosonic and supersymmetric Type II string theories. In this thesis I use boundary conformal field theory, together with random matrix theory and Coulomb gas thermodynamics techniques, to study open and closed string scattering amplitudes off the decaying brane. The simplest example, the tree-level amplitude of n open strings, would give the emission rate of the open strings; however, even this result has remained unknown. I organize the open string scattering computations in a more coherent manner and argue how further progress can be made.
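For reference, in the boundary CFT description the rolling tachyon is realized as the standard exactly marginal boundary deformation of the open string worldsheet action (written here in the common bosonic-string convention with \(\alpha' = 1\); the thesis's own conventions may differ):

\[ \delta S_{\text{bdry}} = \lambda \int d\tau \, \cosh X^0(\tau) \quad \text{(full S-brane)}, \qquad \delta S_{\text{bdry}} = \lambda \int d\tau \, e^{X^0(\tau)} \quad \text{(half S-brane)}, \]

where \(\tau\) parametrizes the worldsheet boundary and \(\lambda\) sets the initial tachyon displacement. Scattering amplitudes off the decaying brane are then correlators computed in this deformed boundary theory, which is where the random matrix and Coulomb gas techniques enter.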
Abstract:
An extensive electricity transmission network facilitates electricity trading between Finland, Sweden, Norway and Denmark. Currently most of the area's power generation is traded at NordPool, where trading volumes have steadily increased since the early 1990s, when the exchange was founded. The Nordic electricity market is expected to follow the current trend and integrate further with the other European electricity markets. Hydro power accounts for roughly half of the supply in the Nordic electricity market, and most of it is generated in Norway. This dominant role of hydro power distinguishes the Nordic electricity market from most other marketplaces. Hydro power production varies mainly with hydro reservoir levels and the demand for electricity. The reservoirs are in turn affected by water inflows that differ from year to year, and reservoir levels explain a remarkable share of the behaviour of the Nordic electricity market. Kauppi and Liski (2008), among others, have therefore developed a model that analyzes the behaviour of the market using hydro reservoirs as explanatory factors. Their model produces, for example, the welfare loss due to socially suboptimal hydro reservoir usage, the socially optimal electricity price, hydro reservoir storage, and thermal reservoir storage; these are referred to as outcomes. The model does not describe the real market condition, however, but rather an ideal situation: the market is controlled by one agent, i.e. one agent controls all the power generation reserves, which is referred to as the socially optimal strategy. The article by Kauppi and Liski (2008) also includes an assumption where an individual agent has a certain fraction of market power, e.g. 20% or 30%; in order to maintain the focus of this thesis, that part of their paper is omitted. The goal of this thesis is twofold. First, we extend the results of the socially optimal strategy to the years 2006-08, as the earlier study ends in 2005. The second objective is to improve on the methods of the previous study. This thesis yields several outcomes (SPOT price, welfare loss, etc.) under socially optimal actions. The welfare loss is interesting because it describes the inefficiency of the market; the SPOT price is an important output for market participants, as it often affects end users' electricity bills. The other task is to modify and try to improve the model by using more accurate input data, e.g. by considering the effect of pollution trading rights on the input data. After the modifications, new welfare losses are calculated and compared with the corresponding results before the modifications. The hydro reservoir has the highest explanatory significance in the model, followed by thermal power. In the Nordic markets, thermal power reserves consist mostly of nuclear power and other thermal sources (coal, natural gas, oil, peat). It can be argued that hydro and thermal reservoirs determine the electricity supply. Roughly speaking, the model takes into account electricity demand and supply, and several parameters related to them (water inflow, oil price, etc.), finally yielding the socially optimal outcomes; a minimal illustration of such a planner problem is sketched after this abstract. The author of this thesis is not aware of any similar model having been tested before. There have been some other studies close to the Kauppi and Liski (2008) model, but those have a somewhat different focus. For example, a specific feature of the model is its focus on long-run capacity usage, which differs from previous studies on short-run market power.
The closest study to the model concerns California's wholesale electricity markets but uses a different methodology. The work is structured as follows.
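A minimal sketch of the social-planner idea, under stated assumptions: a single agent schedules hydro release over a few periods to minimise a convex thermal generation cost, which is equivalent to maximising welfare when per-period demand is fixed. This is an illustration only, not the Kauppi and Liski (2008) model; all numbers, the quadratic cost, and the no-spill reservoir constraint are assumptions.

import numpy as np
from scipy.optimize import minimize

demand = np.array([70.0, 95.0, 80.0, 60.0])   # electricity demand per period (assumed)
inflow = np.array([30.0, 10.0, 20.0, 40.0])   # reservoir water inflow per period (assumed)
r0 = 20.0                                     # initial hydro reservoir level (assumed)

def thermal_cost(h):
    # Thermal generation fills whatever demand hydro release h does not cover;
    # a convex (quadratic) cost makes smoothing thermal output optimal.
    q = np.maximum(demand - h, 0.0)
    return np.sum(0.5 * q ** 2)

def reservoir_levels(h):
    # Reservoir level after each period; the inequality constraint below
    # requires every level to stay non-negative (no spill is modelled).
    return r0 + np.cumsum(inflow - h)

res = minimize(thermal_cost, x0=inflow.copy(), method="SLSQP",
               bounds=[(0.0, None)] * len(demand),
               constraints=[{"type": "ineq", "fun": reservoir_levels}])

print("optimal hydro release per period:", res.x.round(1))
# With cost 0.5*q^2 the marginal thermal cost equals q, so the residual
# thermal output acts as the planner's (shadow) SPOT price.
print("implied price path:", (demand - res.x).round(1))

When the reservoir constraint binds, the planner cannot smooth the price perfectly; the welfare loss in the actual market can then be read as the gap between realised outcomes and such a planner benchmark.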
Abstract:
The eddy covariance (EC) flux measurement technique is based on measuring the turbulent motions of air with accurate and fast instruments. For instance, measuring the methane flux requires a fast methane gas analyser that samples the methane concentration at least ten times per second, together with a sonic anemometer that measures the three wind components at the same sampling rate. Previously, measuring the methane flux with the EC technique was almost impossible owing to the lack of sufficiently fast gas analysers; however, new instruments developed during the last decade have made methane EC flux measurements more common. The performance of four methane gas analysers suitable for eddy covariance measurements is assessed in this thesis. The assessment and comparison were performed by analysing EC data obtained during summer 2010 (1 April to 26 October) at the Siikaneva fen. The four participating methane gas analysers are the TGA-100A (Campbell Scientific Inc., USA), RMT-200 (Los Gatos Research, USA), G1301-f (Picarro Inc., USA) and Prototype-7700 (LI-COR Biosciences, USA). The RMT-200 functioned most reliably throughout the measurement campaign, and the corresponding methane flux data had the smallest random error. In addition, the methane fluxes calculated from the G1301-f and RMT-200 data agree remarkably well throughout the measurement campaign. The calculated cospectra and power spectra agree well with the corresponding temperature spectra. The Prototype-7700 functioned for only slightly over one month at the beginning of the measurement campaign, and thus its accuracy and long-term performance are difficult to assess.
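To make the measurement principle concrete: the flux is the time-averaged covariance of the fluctuations of vertical wind speed and gas concentration. The sketch below illustrates this with synthetic 10 Hz data and a 30-minute averaging period; the signal model and all numbers are assumptions, and real EC processing adds corrections (despiking, coordinate rotation, lag and spectral corrections) omitted here.

import numpy as np

fs = 10                      # sampling frequency in Hz (at least 10, as in the text)
n = fs * 60 * 30             # samples in a 30-minute averaging period
rng = np.random.default_rng(0)

# Synthetic series: vertical wind w (m/s) and a methane concentration c
# weakly correlated with w, standing in for a real analyser signal.
w = rng.normal(0.0, 0.3, n)
c = 1.9 + 0.05 * w + rng.normal(0.0, 0.02, n)

# Reynolds decomposition x = mean(x) + x'; the flux is mean(w' * c').
w_fluct = w - w.mean()
c_fluct = c - c.mean()
flux = np.mean(w_fluct * c_fluct)
print(f"CH4 flux estimate: {flux:.4f} (concentration units x m/s)")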