425 results for "Print finishing processes"


Relevância:

20.00%

Publicador:

Resumo:

The dissertation deals with remote narrowband measurements of the electromagnetic radiation emitted by lightning flashes. A lightning flash consists of a number of sub-processes. The return stroke, which transfers electrical charge from the thundercloud to the ground, is electromagnetically an impulsive wideband process; that is, it emits radiation at most frequencies in the electromagnetic spectrum, but its duration is only some tens of microseconds. Before and after the return stroke, multiple sub-processes redistribute electrical charges within the thundercloud. These sub-processes can last for tens to hundreds of milliseconds, many orders of magnitude longer than the return stroke. Each sub-process causes radiation with specific time-domain characteristics, having maxima at different frequencies. Thus, if the radiation is measured in a single narrow frequency band, it is difficult to identify the sub-processes, and some sub-processes can be missed altogether. However, narrowband detectors are simple to design and miniaturize. In particular, near the High Frequency (HF) band (3 MHz to 30 MHz), ordinary shortwave radios can, in principle, be used as detectors. This dissertation utilizes a prototype detector which is essentially a handheld AM radio receiver. Measurements were made in Scandinavia, and several independent data sources were used to identify lightning sub-processes, as well as the distance to each individual flash. It is shown that multiple sub-processes radiate strongly near the HF band. The return stroke usually radiates intensely, but it cannot be reliably identified from the time-domain signal alone. This means that a narrowband measurement is best used to characterize the energy of the radiation integrated over the whole flash, without attempting to identify individual processes. The dissertation analyzes the conditions under which this integrated energy can be used to estimate the distance to the flash.
It is shown that flash-by-flash variations are large, but the integrated energy is very sensitive to changes in the distance, dropping as approximately the inverse cube root of the distance. Flashes can, in principle, be detected at distances of more than 100 km, but since the ground conductivity can vary, ranging accuracy drops dramatically at distances larger than 20 km. These limitations mean that individual flashes cannot be ranged accurately using a single narrowband detector, and the useful range is limited to at most 30 kilometers. Nevertheless, simple statistical corrections are developed which enable an accurate estimate of the distance to the closest edge of an active storm cell, as well as of its approach speed. The results of the dissertation could therefore have practical applications in real-time short-range lightning detection and warning systems.
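Taking the power-law scaling quoted above at face value, single-detector ranging amounts to inverting a one-parameter attenuation model calibrated against a flash at a known distance. The sketch below is illustrative only: the function name, calibration values and the exponent parameter are assumptions, not the dissertation's actual estimator.

```python
def estimate_distance(energy, e_ref, d_ref, p=1.0 / 3.0):
    """Invert the power law E(d) = e_ref * (d_ref / d)**p for d.

    energy       -- integrated narrowband energy of the measured flash
    e_ref, d_ref -- calibration energy observed at a known distance
    p            -- attenuation exponent (1/3 per the scaling quoted above)
    """
    return d_ref * (e_ref / energy) ** (1.0 / p)

# Under this model a flash returning half the calibration energy is
# inferred to be 2**3 = 8 times farther away, which illustrates why
# flash-by-flash energy scatter translates into large ranging errors.
```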

Relevância:

20.00%

Publicador:

Resumo:

The geomagnetic field is one of the most fundamental geophysical properties of the Earth and has significantly contributed to our understanding of the Earth's internal structure and evolution. Paleomagnetic and paleointensity data have been crucial in shaping concepts such as continental drift and magnetic reversals, as well as in estimating when the Earth's core and the associated geodynamo processes began. The work of this dissertation is based on reliable Proterozoic and Holocene geomagnetic field intensity data obtained from rocks and archeological artifacts. New archeomagnetic field intensity results are presented for Finland, Estonia, Bulgaria, Italy and Switzerland. The data were obtained using sophisticated laboratory setups as well as various reliability checks and corrections. Inter-laboratory comparisons between three laboratories (Helsinki, Sofia and Liverpool) were performed in order to check the reliability of different paleointensity methods. The new intensity results fill considerable gaps in the master curves for each region investigated. In order to interpret the paleointensity data of the Holocene period, a novel and user-friendly database (GEOMAGIA50) was constructed. This provided a new tool to independently test the reliability of the various techniques and materials used in paleointensity determinations. The results show that archeological artifacts, if well fired, are the most suitable materials. Lavas also yield reliable paleointensity results, although these appear more scattered. This study also shows that reliable estimates are obtained using the Thellier methodology (and its modifications) with reliability checks. Global paleointensity curves for the Paleozoic and Proterozoic have several time gaps with few or no intensity data. To define the global intensity behavior of the Earth's magnetic field during these times, new rock types (meteorite impact rocks) were investigated. Two case histories are presented.
The Ilyinets (Ukraine) impact melt rocks yielded a reliable paleointensity value at 440 Ma (Silurian), whereas the results from the Jänisjärvi impact melts (Russian Karelia, ca. 700 Ma) might be biased towards high intensity values because of non-ideal magnetic mineralogy. The features of the geomagnetic field at 1.1 Ga are not well defined due to problems related to the reversal asymmetries observed in Keweenawan data of the Lake Superior region. In this work, new paleomagnetic, paleosecular variation and paleointensity results are reported from coeval diabases of Central Arizona, which help in understanding the asymmetry. The results confirm earlier preliminary observations that the asymmetry is larger in Arizona than in the Lake Superior area. Two of the mechanisms proposed to explain the asymmetry remain plausible: plate motion and a non-dipole field influence.
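The Thellier-type determinations discussed above rest on a simple proportionality: the ancient field intensity equals the magnitude of the slope of NRM lost versus laboratory TRM gained, multiplied by the known laboratory field. The sketch below is a minimal least-squares version with illustrative data, omitting the pTRM and other reliability checks that the actual methodology requires.

```python
def thellier_paleointensity(nrm_lost, trm_gained, b_lab):
    """B_ancient = |slope of NRM-lost vs TRM-gained| * B_lab (laboratory field)."""
    n = len(nrm_lost)
    mx = sum(trm_gained) / n
    my = sum(nrm_lost) / n
    # Ordinary least-squares slope of the Arai-type plot
    sxy = sum((x - mx) * (y - my) for x, y in zip(trm_gained, nrm_lost))
    sxx = sum((x - mx) ** 2 for x in trm_gained)
    return abs(sxy / sxx) * b_lab

# Example: an ideal specimen losing NRM one-for-one against TRM gained
# in a 50 microtesla laboratory field implies a 50 microtesla ancient field.
```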

Relevância:

20.00%

Publicador:

Resumo:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that largely depend on the intended use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, a more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are competitive and produce fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from south-easterly or westerly directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
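The "simple empirical schemes" for clear-sky shortwave radiation discussed above typically reduce to a top-of-atmosphere irradiance times a bulk atmospheric transmissivity times the cosine of the solar zenith angle. The sketch below illustrates that class of scheme; the transmissivity value is a generic illustrative number, not a coefficient from any scheme evaluated in the thesis.

```python
import math

def clearsky_shortwave(s0, zenith_deg, tau=0.75):
    """Generic empirical clear-sky surface shortwave flux (W m-2).

    S = S0 * tau * cos(Z); clipped to zero when the sun is below the horizon.
    s0  -- top-of-atmosphere irradiance, e.g. ~1361 W m-2
    tau -- bulk clear-sky transmissivity (illustrative value)
    """
    mu = math.cos(math.radians(zenith_deg))
    return max(0.0, s0 * tau * mu)

# Overhead sun: full transmitted flux; sun below the horizon: zero flux.
```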

Relevância:

20.00%

Publicador:

Resumo:

There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. Thus, the analyzed dataset is by far the most comprehensive set of long-term turbulent flux measurements reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas with high vegetation cover.
Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10^9 m^-2 s^-1 and 20 µmol m^-2 s^-1, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 in the vegetation sector exceeded the anthropogenic sources, and we observed a downward median flux of 8 µmol m^-2 s^-1. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces, accompanied by numerical modelling, are required.
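The eddy covariance technique used above reduces, at its core, to the covariance between vertical wind speed and the scalar of interest over an averaging block: F = mean(w'c'). A minimal sketch with synthetic series (real processing adds coordinate rotation, detrending, and spectral corrections, which are omitted here):

```python
def eddy_flux(w, c):
    """Turbulent flux F = mean(w'c'), primes = deviations from block means.

    w -- vertical wind speed samples (m s-1)
    c -- simultaneously sampled scalar (e.g. CO2 concentration)
    """
    n = len(w)
    wm = sum(w) / n
    cm = sum(c) / n
    return sum((wi - wm) * (ci - cm) for wi, ci in zip(w, c)) / n

# Updrafts carrying high concentrations and downdrafts carrying low
# concentrations yield a positive (upward) flux.
```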

Relevância:

20.00%

Publicador:

Resumo:

It is widely accepted that the global climate is heating up due to human activities, such as the burning of fossil fuels. Therefore, we find ourselves forced to make decisions on what measures, if any, need to be taken to decrease our warming effect on the planet before any irrevocable damage occurs. Research is being conducted in a variety of fields to better understand all relevant processes governing Earth's climate, and to assess the relative roles of anthropogenic and biogenic emissions into the atmosphere. One of the least well quantified problems is the impact of small aerosol particles (of both anthropogenic and biogenic origin) on climate, through reflecting solar radiation and through their ability to act as condensation nuclei for cloud droplets. In this thesis, the compounds driving the biogenic formation of new particles in the atmosphere have been examined through detailed measurements. As directly measuring the composition of these newly formed particles is extremely difficult, the approach was to indirectly study their characteristics by measuring the hygroscopicity (water uptake) and volatility (evaporation) of particles between 10 and 50 nm. To study the first steps of the formation process in the sub-3 nm range, i.e. the nucleation of gaseous precursors into small clusters, the chemical composition of ambient naturally charged ions was measured. The ion measurements were performed with a newly developed mass spectrometer, which was first characterized in the laboratory before being deployed at a boreal forest measurement site. It also compared successfully with similar, lower-resolution instruments. The ambient measurements showed that sulfuric acid clusters dominate the negative ion spectrum during new particle formation events. Sulfuric acid/ammonia clusters were detected in ambient air for the first time in this work.
Even though sulfuric acid is believed to be the most important gas-phase precursor driving the initial cluster formation, measurements of the hygroscopicity and volatility of growing 10-50 nm particles in Hyytiälä showed an increasing role of organic vapors with a variety of oxidation levels. This work has provided additional insights into the compounds participating both in the initial formation and in the subsequent growth of new atmospheric aerosol particles. It will hopefully prove an important step in understanding atmospheric gas-to-particle conversion, which, by influencing cloud properties, can have important climate impacts. All available knowledge needs to be constantly updated, summarized, and brought to the attention of our decision-makers. Only by increasing our understanding of all the relevant processes can we build reliable models to predict the long-term effects of decisions made today.
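Hygroscopicity measurements of the kind described above are commonly summarized by a single hygroscopicity parameter κ: neglecting the Kelvin effect, the diameter growth factor at water activity a_w follows GF = (1 + κ·a_w/(1−a_w))^(1/3). The sketch below shows that standard relation as an illustration; it is not the thesis's own analysis, and the κ value used in the example is arbitrary.

```python
def growth_factor(kappa, rh_percent):
    """Kappa-Koehler diameter growth factor, Kelvin effect neglected.

    GF = (1 + kappa * aw / (1 - aw)) ** (1/3), with aw = RH / 100.
    kappa = 0 recovers a completely non-hygroscopic particle (GF = 1).
    """
    aw = rh_percent / 100.0
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# More-oxidized organics tend to have higher kappa and hence larger
# growth factors at the same relative humidity.
```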

Relevância:

20.00%

Publicador:

Resumo:

Among the most striking natural phenomena affecting ozone are solar proton events (SPE), during which high-energy protons precipitate into the middle atmosphere in the polar regions. Ionisation caused by the protons results in changes in the lower ionosphere, and in the production of neutral odd nitrogen and odd hydrogen species, which then destroy ozone in well-known catalytic chemical reaction chains. Large SPEs are able to decrease the ozone concentration of the upper stratosphere and mesosphere, but are not expected to significantly affect the ozone layer at 15-30 km altitude. In this work we have used the Sodankylä Ion and Neutral Chemistry Model (SIC) in studies of the short-term effects caused by SPEs. The model results were found to be in good agreement with ionospheric observations from incoherent scatter radars, riometers, and VLF radio receivers, as well as with measurements from the GOMOS/Envisat satellite instrument. For the first time, GOMOS was able to observe the SPE effects on odd nitrogen and ozone in the winter polar region. Ozone observations from GOMOS were validated against those from the MIPAS/Envisat instrument, and good agreement was found throughout the middle atmosphere. For the case of the SPE of October/November 2003, long-term ozone depletion was observed in the upper stratosphere. The depletion was further enhanced by the descent of odd nitrogen from the mesosphere inside the polar vortex, until recovery occurred in late December. During the event, substantial diurnal variation of the ozone depletion was seen in the mesosphere, caused mainly by the strong diurnal cycle of the odd hydrogen species. In the lower ionosphere, SPEs increase the electron density, which is very low in normal conditions. Therefore, SPEs make radar observations easier. In the case of the SPE of October 1989, we studied the sunset transition of negative charge from electrons to ions, a long-standing problem.
The observed phenomenon, which is controlled by the amount of solar radiation, was successfully explained by considering twilight changes both in the rate of photodetachment of negative ions and in the concentrations of minor neutral species. Changes in the magnetic field of the Earth control the extent of the SPE-affected area. For the SPE of November 2001, the results indicated that for low and middle levels of geomagnetic disturbance, the cosmic radio noise absorption levels estimated with a magnetic field model are in good agreement with ionospheric observations. For high levels of disturbance, the model overestimates the stretching of the geomagnetic field and the geographical extent of the SPE-affected area. This work shows the importance of ionosphere-atmosphere interaction for SPE studies. By using both ionospheric and atmospheric observations, we have been able to cover, for the most part, the whole chain of SPE-triggered processes, from proton-induced ionisation to the depletion of ozone.
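In lower-ionosphere work of the kind described above, proton-induced ionisation and electron loss are often related through an effective recombination coefficient: in steady state, the ion-pair production rate q balances α_eff·Ne², so the electron density follows directly from q. This is a textbook simplification for illustration only (the SIC model itself solves the full ion chemistry), and the numbers in the example are purely illustrative.

```python
import math

def steady_state_ne(q, alpha_eff):
    """Steady-state D-region electron density.

    Balance q = alpha_eff * Ne**2  =>  Ne = sqrt(q / alpha_eff).
    q         -- ion-pair production rate (m-3 s-1)
    alpha_eff -- effective recombination coefficient (m3 s-1)
    """
    return math.sqrt(q / alpha_eff)

# A strong SPE raises q by orders of magnitude, so Ne (and hence radar
# echo strength and radio-wave absorption) rises as the square root of q.
```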

Relevância:

20.00%

Publicador:

Resumo:

Controlled nuclear fusion is one of the most promising sources of energy for the future. Before this goal can be achieved, one must be able to control the enormous energy densities which are present in the core plasma in a fusion reactor. In order to be able to predict the evolution, and thereby the lifetime, of different plasma-facing materials under reactor-relevant conditions, the interaction of atoms and molecules with plasma-facing first wall surfaces has to be studied in detail. In this thesis, the fundamental sticking and erosion processes of carbon-based materials, the nature of hydrocarbon species released from plasma-facing surfaces, and the evolution of the components under cumulative bombardment by atoms and molecules have been investigated by means of molecular dynamics simulations using both analytic potentials and a semi-empirical tight-binding method. The sticking cross-section of CH3 radicals at unsaturated carbon sites on diamond (111) surfaces is observed to decrease with increasing angle of incidence, a dependence which can be described by a simple geometrical model. The simulations furthermore show the sticking cross-section of CH3 radicals to be strongly dependent on the local neighborhood of the unsaturated carbon site. The erosion of amorphous hydrogenated carbon surfaces by helium, neon, and argon ions in combination with hydrogen, at energies ranging from 2 to 10 eV, is studied using both non-cumulative and cumulative bombardment simulations. The results show no significant differences between sputtering yields obtained from bombardment simulations with different noble gas ions. The final simulation cells from the 5 and 10 eV ion bombardment simulations, however, show marked differences in surface morphology. In further simulations, the behavior of amorphous hydrogenated carbon surfaces under bombardment with D⁺, D₂⁺, and D₃⁺ ions in the energy range from 2 to 30 eV has been investigated.
The total chemical sputtering yields indicate that molecular projectiles lead to larger sputtering yields than atomic projectiles. Finally, the effect of hydrogen ion bombardment on both crystalline and amorphous tungsten carbide surfaces is studied. Prolonged bombardment is found to lead to the formation of an amorphous tungsten carbide layer, regardless of the initial structure of the sample. In agreement with experiment, preferential sputtering of carbon is observed in both the cumulative and non-cumulative simulations.
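In a bombardment campaign like the ones compared above, the sputtering yield is at bottom a counting statistic: atoms eroded per incident ion, with a Poisson estimate of the statistical uncertainty. A minimal sketch (the counting approach is generic, not a description of the thesis's actual analysis scripts):

```python
import math

def sputtering_yield(n_sputtered, n_ions):
    """Yield = eroded atoms per incident ion, with a Poisson error estimate.

    n_sputtered -- total atoms counted as leaving the surface
    n_ions      -- total incident ions in the simulation run
    """
    y = n_sputtered / n_ions
    err = math.sqrt(n_sputtered) / n_ions  # assumes Poisson-distributed counts
    return y, err

# Comparing yields from different projectiles is only meaningful when the
# difference exceeds these statistical error bars.
```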

Relevância:

20.00%

Publicador:

Resumo:

In recent years there has been growing interest in selecting suitable wood raw material to increase end-product quality and the efficiency of industrial processes. Genetic background and growing conditions are known to affect the properties of growing trees, but only a few parameters reflecting wood quality, such as volume and density, can be measured on an industrial scale. Therefore, research on the cellular-level structures of trees grown in different conditions is needed to increase understanding of the growth processes that lead to desired wood properties. In this work the cellular and cell wall structures of wood were studied. Parameters such as the mean microfibril angle (MFA), the spiral grain angle, the fibre length, the tracheid cell wall thickness and the cross-sectional shape of the tracheid were determined as a function of distance from the pith towards the bark, and the mutual dependencies of these parameters were discussed. Samples were measured from fast-grown trees belonging to the same clone, grown in fertile soil, as well as from fertilised trees. It was found that in fast-grown trees the mean MFA decreased more gradually from the pith to the bark than in reference stems. In fast-grown samples, cells were shorter and more thin-walled, and their cross-sections were rounder than in slower-grown reference trees. Increased growth rate was found to cause an increase in spiral grain variation both within and between annual rings. Furthermore, methods for determining the mean MFA using X-ray diffraction were evaluated. Several experimental arrangements, including synchrotron radiation based microdiffraction, were compared. For the evaluation of the data analysis procedures, a general form for the diffraction conditions was derived in terms of angles describing the fibre orientation and the shape of the cell. The effects of these parameters on the obtained microfibril angles were discussed.
The use of symmetrical transmission geometry and tangentially cut samples gave the most reliable MFA values.
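As a greatly simplified stand-in for the diffraction analysis evaluated above, a mean microfibril angle can be summarized as an intensity-weighted mean of an azimuthal diffraction profile. The sketch below shows only that crude weighting idea; the thesis's actual procedure uses the full diffraction-condition geometry it derives, and the profile values here are invented for illustration.

```python
def mean_mfa(angles_deg, intensities):
    """Intensity-weighted mean of an azimuthal intensity profile (degrees).

    A crude proxy for the mean microfibril angle (MFA): each azimuthal
    angle contributes in proportion to its diffracted intensity.
    """
    total = sum(intensities)
    return sum(a * i for a, i in zip(angles_deg, intensities)) / total

# A profile peaked near 10 degrees yields a mean MFA near 10 degrees.
```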

Relevância:

20.00%

Publicador:

Resumo:

This three-phase design research describes the modelling processes for DC-circuit phenomena. The first phase presents an analysis of the development of the historical models of the DC circuit in the context of the construction of Volta's pile at the turn of the 18th century. The second phase involves the design of a teaching experiment for comprehensive school third graders. Among other considerations, the design work utilises the results of the first phase and the research literature on pupils' mental models of DC-circuit phenomena. The third phase of the research was concerned with the realisation of the planned teaching experiment. The aim of this phase was to study the development of the external representations of DC-circuit phenomena in a small group of third graders. The aim of the study has been to search for new ways to guide pupils to learn DC-circuit phenomena while emphasising understanding at the qualitative level. Thus electricity, which has been perceived as a difficult and abstract subject, could be learnt more comprehensively. In particular, research on younger pupils' learning of electricity concepts has not attracted great interest at the international level, although DC-circuit phenomena are also taught in the lower classes of comprehensive schools. The results of this study are important, because there has tended to be more teaching of the natural sciences in the lower classes of comprehensive schools, and attempts are being made to develop this trend in Finland. In the theoretical part of the research, an Experimental-centred representation approach is created, which emphasises the role of experimentalism in the development of pupils' representations. According to this approach, learning at the qualitative level consists of empirical operations, like experimenting, observation, perception, and prequantification of natural phenomena, and modelling operations, like explaining and reasoning.
Besides planning teaching, the new approach can be used as an analysis tool for describing both historical modelling and the development of pupils' representations. In the first phase of the study, the research question was: How did the historical models of DC-circuit phenomena develop in Volta's time? The analysis uncovered three qualitative historical models associated with the historical concept formation process. The models include conceptions of the electric circuit as the scene of the DC-circuit phenomena, the comparative electric-current phenomenon as a cause of different observable effect phenomena, and the strength of the battery as a cause of the electric-current phenomenon. These models describe the concept formation process and its phases in Volta's time. The models are portrayed in the analysis using fragments of the models, where observation-based fragments and theoretical fragments are distinguished from each other. The results emphasise the significance of qualitative concept formation and the role of language in the historical modelling of DC-circuit phenomena. For this reason, these viewpoints are stressed in planning the teaching experiment in the second phase of the research. In addition, the design process utilised the experimentation behind the historical models of DC-circuit phenomena. In the third phase of the study, the research question is as follows: How do the small group's external representations of DC-circuit phenomena develop during the teaching experiment? The main question is divided into the following two sub-questions: What kind of talk exists in the small group's learning? What kinds of external representations of DC-circuit phenomena exist in the small group's discourse during the teaching experiment? The analysis revealed that the teaching experiment succeeded in its aim to activate talk in the small group. The designed connection cards proved especially successful in activating talk.
The connection cards are cards that represent the components of the electric circuit. In the teaching experiment the pupils constructed different connections with the connection cards and discussed what kinds of DC-circuit phenomena would take place in the corresponding real connections. The talk of the small group was analysed by comparing two situations: first, when the small group discussed connections made with the connection cards, and second, when they discussed the same connections built with real components. According to the results, the talk of the small group included more higher-order thinking when using the connection cards than with similar real components. In order to answer the second sub-question, concerning the small group's external representations that appeared in the talk during the teaching experiment, student talk was visualised with fragment maps incorporating the electric circuit, the electric current and the source voltage. The fragment maps represent the gradual development of the external representations of DC-circuit phenomena in the small group during the teaching experiment. The results of the study challenge the results of previous research on the abstractness and difficulty of electricity concepts. According to this research, the external representations of DC-circuit phenomena clearly developed in the small group of third graders. Furthermore, the fragment maps reveal that although the theoretical explanations of DC-circuit phenomena, of the kind obtained as results of typical mental-model studies, remain undeveloped, learning at the qualitative level of understanding does take place.

Relevância:

20.00%

Publicador:

Resumo:

The ever-increasing demand for faster computers in various areas, ranging from entertainment electronics to computational science, is pushing the semiconductor industry towards its limits in decreasing the sizes of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at sizes in the nanometer scale, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group in the periodic table. Carbon is an especially interesting material for nanometer-sized devices because it naturally forms different nanostructures. Furthermore, some of these structures have unique properties. The most widely suggested allotrope of carbon to be used for electronics is a tubular molecule with an atomic structure resembling that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a long list of exciting properties. For example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, due to some remaining difficulties regarding large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations at different levels of theory.
Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation process of nanotubes, introducing different impurity atoms into these structures in order to gain control over their electronic character. Ion irradiation is shown to be a very efficient method to replace carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve contacts between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects on carbon nanotubes on macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory-based methods indicate that it is energetically favorable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible. The kinetic Monte Carlo studies reveal that elevated temperatures during processing significantly enhance the self-healing of nanotubes, ensuring low defect concentrations after the treatment with energetic ions. Thereby, nanotubes can retain their desired properties also after the irradiation.
Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
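The kinetic Monte Carlo approach used for the long-time-scale defect simulations can be illustrated with a minimal sketch: an event is chosen with probability proportional to its rate, and the clock advances by an exponentially distributed increment. The rates below are purely hypothetical placeholders; the actual simulation code developed in the thesis is far more elaborate.

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One kinetic Monte Carlo step: pick an event with probability
    proportional to its rate; return (event_index, time_increment)."""
    total = sum(rates)
    # Select the event where the cumulative rate first exceeds r1.
    r1 = rng() * total
    acc = 0.0
    event = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if acc >= r1:
            event = i
            break
    # Advance time by an exponentially distributed waiting time
    # with mean 1/total -- this is what lets KMC reach macroscopic
    # time scales inaccessible to molecular dynamics.
    dt = -math.log(rng()) / total
    return event, dt

# Hypothetical rates (1/s) for two defect processes, e.g. vacancy
# migration and adatom recombination (illustrative numbers only).
random.seed(0)
event, dt = kmc_step([1.0e3, 2.5e2])
```

Because the waiting time scales with the inverse of the total rate, the simulated time automatically stretches when all remaining processes are slow, such as at low defect concentrations after self-healing.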

Resumo:

Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for the constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since small amounts of helium are the only by-product when the hydrogen isotopes deuterium and tritium are used as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to use fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful approaches, although these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is confined with the help of these magnets, since the required temperatures (over 100 million degrees Celsius) separate the electrons from the nuclei, forming a plasma. When fusion reactions occur, the excess binding energy is released as energetic neutrons, which are absorbed in water to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing a high number of reactions, is one challenge. Another is related to the reactor materials: since the confinement of the plasma particles is not perfect, the reactor walls and structures are bombarded by particles, and material erosion and activation as well as plasma contamination are expected. Adding to this, the high-energy neutrons cause radiation damage in the materials, leading, for instance, to swelling and embrittlement. 
In this thesis, the behaviour of materials situated in a fusion reactor was studied using molecular dynamics simulations. Simulating processes in the next-generation fusion reactor ITER involves the reactor materials beryllium, carbon and tungsten as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium under deuterium plasma exposure. In experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified swift chemical sputtering, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades formed during particle irradiation, and the potential features affecting the resulting primary damage were identified. Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, with the major part of the damage consisting of carbon defects. 
On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using only experimental techniques, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
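At their core, the molecular dynamics simulations referred to throughout rest on integrating Newton's equations of motion for all atoms. A minimal sketch of such an integrator, using the velocity Verlet scheme with a simple Lennard-Jones pair potential in reduced units, stands in here for the far more elaborate many-body potentials developed in the thesis:

```python
import numpy as np

def lj_force(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for N particles (reduced units)."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r2 = np.dot(d, d)
            sr6 = (sigma ** 2 / r2) ** 3
            # F = -dU/dr along d, with U = 4*eps*(sr12 - sr6)
            fij = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2 * d
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, dt, mass=1.0, steps=100):
    """Integrate Newton's equations with the velocity Verlet scheme:
    half-kick, drift, recompute forces, half-kick."""
    f = lj_force(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_force(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel
```

Production codes replace the O(N^2) force loop with neighbour lists and the pair potential with bond-order or embedded-atom models fitted to the materials in question; the integration scheme itself, however, is essentially this.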

Resumo:

Atmospheric aerosol particles have a significant impact on air quality, human health and global climate. The climatic effects of secondary aerosol are currently among the largest uncertainties limiting the scientific understanding of future and past climate changes. To better estimate the climatic importance of secondary aerosol particles, detailed information on atmospheric particle formation mechanisms and on the vapours forming the aerosol is required. In this thesis we studied these issues by applying novel instrumentation in a boreal forest to obtain direct information on the very first steps of atmospheric nucleation and particle growth. Additionally, we used detailed laboratory experiments and process modelling to determine condensational growth properties, such as saturation vapour pressures, of dicarboxylic acids, organic acids often found in atmospheric samples. Based on our studies, we came to four main conclusions: 1) In the boreal forest region, both sulphurous compounds and organics are needed for secondary particle formation, the former contributing mainly to particle formation and the latter to growth; 2) A persistent pool of molecular clusters, both neutral and charged, is present and participates in atmospheric nucleation processes in boreal forests; 3) Neutral particle formation seems to dominate over ion-mediated mechanisms, at least in the boreal forest boundary layer; 4) The subcooled liquid phase saturation vapour pressures of C3-C9 dicarboxylic acids are of the order of 10^-5 to 10^-3 Pa at atmospheric temperatures, indicating that a mixed pre-existing particulate phase is required for their condensation under atmospheric conditions. The work presented in this thesis gives tools to better quantify the aerosol source provided by secondary aerosol formation. 
The results are particularly useful when estimating, for instance, anthropogenic versus biogenic influences and the fractions of secondary aerosol formation explained by neutral or ion-mediated nucleation mechanisms, at least in environments where the average particle formation rates are of the order of some tens of particles per cubic centimetre or lower. However, as the factors driving secondary particle formation are likely to vary with the environment, measurements of atmospheric nucleation and particle growth are needed from around the world to better describe secondary particle formation and to assess its climatic effects on a global scale.
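Saturation vapour pressures such as those in conclusion 4 are typically reported at a reference temperature and extrapolated to others. A minimal sketch of such an extrapolation via the Clausius-Clapeyron relation, assuming a temperature-independent vaporisation enthalpy; the numbers below are illustrative placeholders, not the measured values of the thesis:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def extrapolate_psat(p_ref, t_ref, t, dh_vap):
    """Clausius-Clapeyron extrapolation of the saturation vapour
    pressure from (p_ref, t_ref) to temperature t, assuming a
    constant enthalpy of vaporisation dh_vap (J/mol)."""
    return p_ref * math.exp(-dh_vap / R * (1.0 / t - 1.0 / t_ref))

# Hypothetical acid: p_sat = 1e-4 Pa at 298.15 K, dH_vap = 100 kJ/mol.
p_273 = extrapolate_psat(1e-4, 298.15, 273.15, 100e3)
```

The strong exponential temperature dependence is the reason subcooled liquid values, rather than solid-state ones, are the relevant quantity when judging whether a vapour can condense onto a mixed particulate phase.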

Resumo:

Numerical models, used for atmospheric research, weather prediction and climate simulation, describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model's resolution, the more directly the details of orography influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few gridlengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. Within the subgrid scale, different physical processes contribute at different scales, and the parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling the effects of mesoscale (MSO) and small-scale (SSO) orography on the simulated flow and a scheme for the effects of orography on the surface-level radiation fluxes. The representation of orography, the scale dependencies of the simulated processes and the interactions between parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both the momentum and the radiation flux parametrizations. Tools for diagnostics and validation are developed and presented. 
The parametrization schemes applied, developed and validated in this study, are currently being implemented into the reference version of HIRLAM.
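The derivation of orographic parameters from elevation data can be illustrated with a minimal sketch: block-averaging a high-resolution elevation field to the model grid yields both the mean (resolved) orography and the subgrid standard deviation, one of the basic inputs of SSO-type parametrizations. The array layout is an assumption for illustration; actual NWP preprocessing handles projections, coastlines and spectral filtering as well.

```python
import numpy as np

def subgrid_orography(elev, block):
    """Block-average a high-resolution elevation field (2-D array,
    metres) to a coarser model grid with `block` fine cells per
    model gridlength. Returns (mean orography, subgrid std)."""
    ny, nx = elev.shape
    ny_c, nx_c = ny // block, nx // block
    # Reshape so each model grid box becomes a (block, block) tile.
    tiles = elev[:ny_c * block, :nx_c * block].reshape(
        ny_c, block, nx_c, block)
    mean = tiles.mean(axis=(1, 3))   # resolved orography
    std = tiles.std(axis=(1, 3))     # subgrid-scale variability
    return mean, std
```

The subgrid standard deviation feeds directly into drag parametrizations, while the same tiling can be extended to slopes and aspect angles needed by the radiation scheme.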

Resumo:

Atmospheric aerosol particle formation events can be a significant source of tropospheric aerosols and thus influence the radiative properties and cloud cover of the atmosphere. This thesis investigates the analysis of aerosol size distribution data containing particle formation events, describes the methodology of the analysis and presents time series data measured inside the boreal forest. The thesis presents a methodology to identify regional-scale particle formation and to derive its basic characteristics, such as growth and formation rates; the methodology can also be used to estimate the concentration and source rate of the vapour causing particle growth. Particle formation was found to occur frequently in the boreal forest area, over regions extending up to hundreds of kilometres. Particle formation rates of boreal events were found to be of the order of 0.01-5 cm^-3 s^-1, while the nucleation rates of 1 nm particles can be a few orders of magnitude higher. The growth rates of particles larger than 3 nm were of the order of a few nanometres per hour. The vapour concentration needed to sustain such growth is of the order of 10^7-10^8 cm^-3, approximately one order of magnitude higher than sulphuric acid concentrations found in the atmosphere. Therefore, one has to assume that other vapours, such as organics, play a key role in growing newborn particles to sizes where they can become climatically active. Formation event occurrence shows a clear annual variation with peaks in summer and autumn, a variation similar to that exhibited by the obtained particle formation rates. The growth rate, on the other hand, reaches its highest values during summer. This difference in annual behaviour, together with the fact that no coupling between the growth and formation processes could be identified, suggests that these are distinct processes and that both are needed for a particle formation burst to be observed.
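The basic event characteristics described above can be estimated from size distribution time series with very simple fits. A minimal sketch, deliberately neglecting the coagulation and loss corrections that the full methodology includes:

```python
import numpy as np

def growth_rate(times_h, diameters_nm):
    """Growth rate (nm/h) as the slope of a straight-line fit to the
    growing mode diameter versus time."""
    slope, _intercept = np.polyfit(times_h, diameters_nm, 1)
    return slope

def formation_rate(times_s, n_cm3):
    """Apparent formation rate J (cm^-3 s^-1) as the mean rate of
    change of the nucleation-mode number concentration during the
    burst (no correction for coagulation or deposition losses)."""
    return (n_cm3[-1] - n_cm3[0]) / (times_s[-1] - times_s[0])
```

For example, a mode diameter growing from 3 nm to 12 nm in three hours gives a growth rate of 3 nm/h, at the upper end of the few-nanometres-per-hour range reported above; scaling such rates by a kinetic condensation flux is what yields the quoted 10^7-10^8 cm^-3 vapour concentrations.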

Resumo:

Man-induced climate change has raised the need to predict the future climate and its feedback on vegetation. These are studied with global climate models, and to ensure the reliability of their predictions it is important to have a biosphere description based on the latest scientific knowledge. This work concentrates on modelling the CO2 exchange of the boreal coniferous forest, also studying the factors controlling its growing season and how these can be used in modelling. In addition, the modelling of CO2 gas exchange at several scales was studied. A canopy-level CO2 gas exchange model was developed based on the biochemical photosynthesis model. This model was first parameterized using CO2 exchange data obtained by eddy covariance (EC) measurements from a Scots pine forest at Sodankylä. The results were compared with a semi-empirical model that was also parameterized using EC measurements; both models gave satisfactory results. The biochemical canopy-level model was further parameterized at three other coniferous forest sites in Finland and Sweden. At all the sites, the two most important biochemical model parameters showed seasonal behaviour, i.e., their temperature responses changed according to the season. Modelling results improved when these changeover dates were related to temperature indices. During summer, the values of the biochemical model parameters were similar at all four sites. Different control factors for CO2 gas exchange were studied at the four coniferous forests, including how well these factors can be used to predict the initiation and cessation of CO2 uptake. Temperature indices, atmospheric CO2 concentration, surface albedo and chlorophyll fluorescence (CF) were all found to be useful and to have predictive power. In addition, a detailed simulation study of leaf stomata was performed in order to separate physical and biochemical processes. 
The simulation study brought to light the relative contribution and importance of the physical transport processes. The results of this work can be used in improving CO2 gas exchange models in boreal coniferous forests. The meteorological and biological variables that represent the seasonal cycle were studied, and a method for incorporating this cycle into a biochemical canopy-level model was introduced.
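Biochemical photosynthesis models of the kind used here are typically of the Farquhar type, in which the Rubisco-limited assimilation rate follows Michaelis-Menten kinetics in CO2 with competitive inhibition by O2. A minimal sketch of that one limitation, using commonly cited 25 °C kinetic constants rather than the site-specific parameter values fitted in the thesis:

```python
def rubisco_limited_assimilation(ci, vcmax, gamma_star=42.75,
                                 kc=404.9, ko=278.4, o=210.0, rd=1.0):
    """Rubisco-limited net assimilation rate (umol m^-2 s^-1) of a
    Farquhar-type model: A = Vcmax*(Ci - G*)/(Ci + Kc*(1 + O/Ko)) - Rd.
    ci, gamma_star, kc in umol/mol; ko and o in mmol/mol. Default
    constants are widely used 25 C literature values, given here
    only for illustration."""
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd
```

In a full canopy-level model this rate is taken together with the light-limited (electron transport) rate, and the seasonally varying temperature responses of Vcmax and the electron transport capacity are exactly the two parameters whose changeover dates the thesis ties to temperature indices.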