897 results for 029900 OTHER PHYSICAL SCIENCES


Relevance: 100.00%

Publisher:

Abstract:

The aim of this study is to investigate the composition of the crust in Finland using seismic wide-angle velocity models and laboratory measurements of P- and S-wave velocities of different rock types. The velocities adopted from wide-angle velocity models were compared with laboratory velocities of different rock types corrected for the crustal PT conditions in the study area. The wide-angle velocity models indicate that the P-wave velocity not only increases step-wise at the boundaries of major crustal layers but also increases gradually within the layers. On the other hand, the laboratory measurements indicate that no single rock type can produce these gradual downward-increasing trends. Thus, there must be gradual vertical changes in rock composition. The downward increase of velocities indicates that the composition of the crust becomes gradually more mafic with increasing depth. Even though single rock types cannot reproduce the wide-angle model velocities, a mixture of rock types can. A large number of rock type mixtures give the correct P-wave velocities; therefore, the inversion of rock types and their proportions from velocities is a non-unique problem if only P-wave velocities are available. The number of possible rock type mixtures can be limited using S-wave velocities, reflection seismic results and other geological and geophysical results from the study area. The crustal model FINMIX-2 presented in this study suggests that the crustal velocity profiles can be simulated with rock type mixtures in which the upper crust consists of felsic gneisses and granitic-granodioritic rocks with a minor contribution of quartzite, amphibolite and diabase. In the middle crust the amphibolite proportion increases. The lower crust consists of tonalitic gneiss, mafic garnet granulite, hornblendite, pyroxenite and minor mafic eclogite.
This composition model agrees with deep crustal kimberlite-hosted xenolith data from eastern Finland and with the reflectivity of FIRE (the Finnish Reflection Experiment). According to the FINMIX-2 model, the Moho is deeper and the crustal composition more mafic than an average global continental model would suggest. Composition models of southern Finland are quite similar to the FINMIX-2 model; however, there are minor differences between the models, which indicate regional differences in composition. Models of northern Finland show that the crustal thickness is smaller than in southern Finland and that the composition of the upper crust is different. Density profiles calculated from the lithological models suggest that there is practically no density contrast at the Moho in areas of high-velocity lower crust. This implies that crustal thickness in the central Fennoscandian Shield may have been controlled by the densities of the lower crustal and upper mantle rocks.
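
The forward step of the mixture approach described above, predicting a bulk P-wave velocity from an assumed mixture of rock types, can be sketched with a simple time-average (Wyllie-type) mixing rule. The rock types, volume fractions and velocities below are illustrative assumptions, not values from the thesis:

```python
# Sketch: forward-model a bulk P-wave velocity from a rock-type mixture.
# Velocities (km/s) are illustrative values, not data from the thesis.
def mixture_velocity(fractions, velocities):
    """Time-average (harmonic) mixing of P-wave velocities.

    fractions  -- volume fractions of each rock type (must sum to 1)
    velocities -- P-wave velocity of each rock type, km/s
    """
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    # Total travel time per unit path length is the weighted sum of slownesses.
    slowness = sum(f / v for f, v in zip(fractions, velocities))
    return 1.0 / slowness

# Hypothetical upper-crustal mixture: felsic gneiss, granite, amphibolite.
vp = mixture_velocity([0.5, 0.4, 0.1], [6.1, 6.0, 6.9])
```

The non-uniqueness noted above shows up immediately: many different fraction/velocity combinations return the same bulk velocity, which is why the S-wave and reflection constraints are needed.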

Relevance: 100.00%

Publisher:

Abstract:

Postglacial climate changes and vegetation responses were studied using a combination of biological and physical indicators preserved in lake sediments. Low-frequency trends, high-frequency events and rapid shifts in temperature and moisture balance were probed using pollen-based quantitative temperature reconstructions and oxygen-isotopes from authigenic carbonate and aquatic cellulose, respectively. Pollen and plant macrofossils were employed to shed light on the presence and response rates of plant populations in response to climate changes, particularly focusing on common boreal and temperate tree species. Additional geochemical and isotopic tracers facilitated the interpretation of pollen- and oxygen-isotope data. The results show that the common boreal trees were present in the Baltic region (~55°N) during the Lateglacial, which contrasts with the traditional view of species refuge locations in the south-European peninsulas during the glacial/interglacial cycles. The findings of this work are in agreement with recent paleoecological and genetic evidence suggesting that scattered populations of tree species persisted at higher latitudes, and that these taxa were likely limited to boreal trees. Moreover, the results demonstrate that stepwise changes in plant communities took place in concert with major climate fluctuations of the glacial/interglacial transition. Postglacial climate trends in northern Europe were characterized by rise, maxima and fall in temperatures and related changes in moisture balance. Following the deglaciation of the Northern Hemisphere and the early Holocene reorganization of the ice-ocean-atmosphere system, the long-term temperature trends followed gradually decreasing summer insolation. The early Holocene (~11,700-8000 cal yr BP) was overall cool, moist and oceanic, although the earliest Holocene effective humidity may have been low particularly in the eastern part of northern Europe. 
The gradual warming trend was interrupted by a cold event ~8200 cal yr BP. The maximum temperatures, ~1.5-3.0°C above modern values, were attained ~8000-4000 cal yr BP. This mid-Holocene peak warmth was coupled with low lake levels, low effective humidity and summertime drought. The late Holocene (~4000 cal yr BP-present) was characterized by gradually decreasing temperatures, higher lake levels and higher effective humidity. Moreover, the gradual trends of the late Holocene were probably overlain by higher-frequency variability. The spatial variability of the Holocene temperature and moisture balance patterns was tentatively attributed to the differing heat capacities of continents and oceans, changes in atmospheric circulation modes, and the position of sites and subregions with respect to large water bodies and topographic barriers. The combination of physical and biological proxy archives is a pivotal aspect of this work, because non-climatic factors, such as postglacial migration, disturbances and competitive interactions, can influence the reshuffling of vegetation and hence pollen-based climate reconstructions. The oxygen-isotope records and other physical proxies presented in this work show that postglacial climate changes were the main driver of the establishment and expansion of temperate and boreal tree populations, and hence large-scale and long-term vegetation patterns were in dynamic equilibrium with climate. A notable exception to this pattern may be the postglacial invasion of Norway spruce and the related suppression of mid-Holocene temperate forest. This salient step in north-European vegetation history, the development of the modern boreal ecosystem, cannot be unambiguously explained by current evidence of postglacial climate changes. The results of this work highlight that plant populations, including long-lived trees, may be able to respond strikingly rapidly to changes in climate.
Moreover, interannual and seasonal variation and extreme events can exert an important influence on vegetation reshuffling. Importantly, the studies imply that the presence of diffuse refuge populations or local stands among the prevailing vegetation may have provided the means for extraordinarily rapid vegetation responses. Hence, if scattered populations are not present and tree populations must migrate long distances, their capacity to keep up with predicted rates of future climate change may be lower than previously thought.

Relevance: 100.00%

Publisher:

Abstract:

This paper reports and discusses findings from a recent study which explored the science enrolment decisions of high-achieving, or ‘science proficient’, secondary level students in Australia (Lyons 2003). The research was prompted by the increasing reluctance of such students to enrol in post-compulsory science courses, particularly in physics and chemistry. The study investigated the influences on students’ deliberations about taking a range of science courses. However, this report confines itself to decisions about enrolling in the physical sciences. The paper summarises the students’ experiences and conceptions of school science, as well as the characteristics of their ‘family worlds’ found to be influential in their decisions. The paper discusses the important roles of cultural and social capital in these decisions, and concludes that enrolment in physical science courses was associated with congruence between the students’ conceptions of school science and characteristics of their family backgrounds.

Relevance: 100.00%

Publisher:

Abstract:

The structure and operation of CdTe, CdZnTe and Si pixel detectors for X-ray and gamma-ray imaging, based on crystalline semiconductors, bump bonding and CMOS technology and developed mainly at Oy Simage Ltd. and Oy Ajat Ltd., Finland, are presented. This detector technology evolved from the development of Si strip detectors at the Finnish Research Institute for High Energy Physics (SEFT), which later merged with other physics research units to form the Helsinki Institute of Physics (HIP). General issues of X-ray imaging are discussed, such as the benefits of direct conversion of X-rays to signal charge compared with the indirect method, and the pros and cons of photon counting versus charge integration. A novel design of Si and CdTe pixel detectors and the analysis of their imaging performance in terms of SNR, MTF, DQE and dynamic range are presented in detail. The analysis shows that directly converting crystalline semiconductor pixel detectors operated in the charge integration mode can be used in X-ray imaging very close to the theoretical performance limits in terms of efficiency and resolution. Examples of the application of the developed imaging technology to dental intraoral, panoramic and real-time X-ray imaging are given. A CdTe photon-counting gamma imager is introduced. A physical model for calculating the photopeak efficiency of photon-counting CdTe pixel detectors is developed and described in detail. Simulation results indicate that the charge sharing phenomenon, due to diffusion of signal charge carriers, limits the pixel size of photon-counting detectors to about 250 μm. Radiation hardness issues related to gamma- and X-ray imaging detectors are discussed.
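
Of the image-quality figures listed above, the DQE has a particularly compact definition: it is the fraction of the squared input signal-to-noise ratio that survives detection. A minimal sketch, with an illustrative absorption efficiency rather than a measured one:

```python
import math

def dqe(snr_in, snr_out):
    """Detective quantum efficiency: how much of the input SNR^2 survives."""
    return (snr_out / snr_in) ** 2

# For a Poisson-limited beam of N photons, SNR_in = sqrt(N).  A detector
# absorbing a fraction eta of them (and adding no other noise) yields
# SNR_out = sqrt(eta * N), so DQE = eta.  eta below is illustrative.
N = 10_000
eta = 0.85
snr_in = math.sqrt(N)
snr_out = math.sqrt(eta * N)
```

Real detectors add electronic noise and blur, so the measured DQE falls below this quantum-efficiency ceiling, which is why the thesis analyses it together with SNR and MTF.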

Relevance: 100.00%

Publisher:

Abstract:

This thesis is a study of the x-ray scattering properties of tissues and tumours of the breast. Clinical radiography is based on the absorption of x-rays passing through the human body and gives information about the densities of the tissues. Besides being absorbed, x-rays may change their direction within the tissues due to elastic scattering or even to refraction. The phenomenon of scattering is a nuisance to radiography in general, and to mammography in particular, because it reduces the quality of the images. However, scattered x-rays bear very useful information about the structure of the tissues at the supra-molecular level. Some pathologies, like breast cancer, produce alterations to the structures of the tissues that are especially evident in collagen-rich tissues. On the other hand, the change of direction due to refraction of the x-rays at tissue boundaries can be mapped. The diffraction enhanced imaging (DEI) technique uses a perfect crystal to convert the angular deviations of the x-rays into intensity variations, which can be recorded as images. This technique is of special interest in cases where the densities of the tissues are very similar (as in mammography) and the absorption images do not offer enough contrast. This thesis explores the structural differences between healthy and pathological collagen in breast tissue samples by the small-angle x-ray scattering (SAXS) technique and compares these differences with the morphological information found in the DEI images and the histopathology of the same samples. Several breast tissue samples were studied by the SAXS technique at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. Scattering patterns of the different tissues of the breast were acquired and compared with the histology of the samples. The scattering signals from adipose tissue (fat), connective tissue (collagen) and necrotic tissue were identified.
Moreover, a clear distinction could be made between the scattering signals from healthy collagen and from collagen from an invasive tumour. Scattering from collagen is very characteristic: it includes several scattering peaks and features that carry information about the size and the spacing of the collagen fibrils in the tissues. It was found that the collagen fibrils in invasive tumours were thinner and had a d-spacing 0.7% longer than that of fibrils from healthy tissue. The scattering signals from the breast tissues were compared with the histology by building colour-coded maps across the samples. The samples were also imaged with the DEI technique. There was total agreement between the scattering maps, the morphological features seen in the images and the information from the histopathological examination. The thesis demonstrates that the x-ray scattering signal can be used to characterize tissues and that it carries important information about the pathological state of breast tissues, thus showing the potential of the SAXS technique as a possible diagnostic tool for breast cancer.
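
The d-spacing information mentioned above comes directly from the positions of the collagen scattering peaks: a peak at momentum transfer q corresponds to a real-space period d = 2π/q. A small sketch with illustrative numbers (collagen's axial period is of order 65 nm; the 0.7% figure is the order of change reported above, not a recomputation of the thesis data):

```python
import math

def d_spacing(q_peak_inv_nm):
    """Real-space periodicity from a SAXS peak position q (1/nm): d = 2*pi/q."""
    return 2.0 * math.pi / q_peak_inv_nm

def relative_change(d_sample, d_reference):
    """Percent change in d-spacing, as used to compare tumour vs healthy collagen."""
    return 100.0 * (d_sample - d_reference) / d_reference

# Illustrative: a ~65 nm axial period elongated by 0.7 %.
d_healthy = 65.0
d_tumour = d_healthy * 1.007
```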

Relevance: 100.00%

Publisher:

Abstract:

By detecting leading protons produced in the Central Exclusive Diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC), and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes possible precision studies of gluons at the LHC and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This is achieved by using a 3D silicon detector design which, in addition to radiation hardness up to 5×10^15 neutrons/cm^2, offers properties such as a high signal-to-noise ratio, fast signal response to radiation and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but conserves the necessary properties of the 3D detector design required at the LHC and in other imaging applications.
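
The missing-mass measurement described above rests on a simple kinematic relation: if the two leading protons lose fractions ξ1 and ξ2 of their longitudinal momentum, the central system has mass M_X ≈ sqrt(ξ1 ξ2 s). A hedged sketch (the standard approximation for central exclusive production; the numbers are illustrative):

```python
import math

def missing_mass(sqrt_s, xi1, xi2):
    """Missing mass in p+p -> p+X+p from the fractional momentum losses
    xi_i of the two leading protons: M_X ~ sqrt(xi1 * xi2 * s)."""
    return math.sqrt(xi1 * xi2) * sqrt_s

# Illustrative: at sqrt(s) = 14 TeV, two protons each losing ~1% of their
# momentum correspond to a central system of ~140 GeV.
m_x = missing_mass(14000.0, 0.01, 0.01)  # GeV
```

This is why the precision of the leading-proton momentum measurement, studied in the beam-transport analysis above, translates directly into the missing-mass resolution.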

Relevance: 100.00%

Publisher:

Abstract:

Data assimilation provides an initial atmospheric state, called the analysis, for Numerical Weather Prediction (NWP). This analysis consists of pressure, temperature, wind and humidity on a three-dimensional NWP model grid. Data assimilation blends meteorological observations with the NWP model in a statistically optimal way. The objective of this thesis is to describe the methodological development carried out in order to allow data assimilation of ground-based measurements of the Global Positioning System (GPS) into the High Resolution Limited Area Model (HIRLAM) NWP system. Geodetic processing produces observations of tropospheric delay. These observations can be processed either for vertical columns at each GPS receiver station, or for the individual propagation paths of the microwave signals. These alternative processing methods result in Zenith Total Delay (ZTD) and Slant Delay (SD) observations, respectively. ZTD and SD observations are of use in the analysis of atmospheric humidity. A method is introduced for estimating the horizontal error covariance of ZTD observations. The method makes use of observation minus model background (OmB) sequences of ZTD and conventional observations. It is demonstrated that the ZTD observation error covariance is relatively large at station separations shorter than 200 km, but non-zero covariances also appear at considerably larger station separations. The relatively low density of radiosonde observing stations limits the ability of the proposed estimation method to resolve the shortest length scales of the error covariance. SD observations are shown to contain a statistically significant signal on the asymmetry of the atmospheric humidity field. However, the asymmetric component of SD is found to be nearly always smaller than the standard deviation of the SD observation error. SD observation modelling is described in detail, and other issues relating to SD data assimilation are also discussed.
These include the determination of error statistics, the tuning of observation quality control, and allowing local observation error correlations to be taken into account. The experiments show that the data assimilation system is able to retrieve the asymmetric information content of hypothetical SD observations at a single receiver station. Moreover, the impact of real SD observations on the humidity analysis is comparable to that of other observing systems.
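
The ZTD observations discussed above are commonly split into a hydrostatic and a wet part, and the hydrostatic part can be modelled from surface pressure. The widely used Saastamoinen formula is shown here as an illustration of that split, not necessarily the exact formulation used in the thesis:

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay (metres) from surface pressure.

    The wet delay, the quantity of interest for humidity analysis, then
    follows from an observed total delay as ZWD = ZTD - ZHD.
    """
    phi = math.radians(lat_deg)
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * phi) - 0.28e-6 * height_m
    )

# Illustrative mid-latitude station: ZHD is ~2.3 m of the ~2.4 m total.
zhd = zenith_hydrostatic_delay(1013.25, 60.0, 50.0)
zwd = 2.40 - zhd   # hypothetical observed ZTD of 2.40 m
```

The smallness of ZWD relative to ZTD is one reason the error statistics discussed above matter so much for humidity analysis.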

Relevance: 100.00%

Publisher:

Abstract:

Emissions of coal combustion fly ash through real-scale electrostatic precipitators (ESP) were studied under different coal combustion and operating conditions. Sub-micron fly-ash aerosol emissions from a power plant boiler and the ESP were determined, and from these the aerosol penetration, based on electrical mobility measurements, thereby giving an estimate of the size and the maximum extent to which small particles can escape. The experiments indicate a maximum penetration of 4% to 20% of the small particles, counted on a number basis instead of the normally used mass basis, while the ESP simultaneously operates at nearly 100% collection efficiency on a mass basis. Although the size range as such appears independent of the coal, of the boiler, and even of the device used for emission control, the maximum penetration level on a number basis depends on the ESP operating parameters. The measured emissions were stable during stable boiler operation for a given fired coal, and the emissions of different coals each seemed to differ, indicating that the sub-micron size distribution of the fly ash could be used as a specific characteristic for recognition, for instance for authenticity, given an indication of known stable operation. Consequently, the results on the emissions suggest an optimum particle size range for environmental monitoring with respect to the probability of finding traces in the samples. The current work also embodies an authentication system for aerosol samples for post-inspection from any macroscopic sample piece. The system can comprise newly introduced devices, for mutually independent use or for use in combination with each other, arranged so as to extend the sampling operation length and/or the tag selection diversity. The tag for the samples can be based on naturally occurring measures and/or added measures of authenticity in a suitable combination.
The method has not only military-related applications but also applications in civil industries. As an alternative to aerosol samples, the system can be applied to ink for printing banknotes or other papers of monetary value, and also in filter manufacturing for marking fibrous filters.
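
The contrast drawn above between number-based penetration and mass-based collection efficiency can be illustrated with a toy size distribution. All numbers below are invented for illustration; only the qualitative pattern (sub-micron bins leak by number while mass is dominated by large, well-collected particles) reflects the text:

```python
def penetration_number_and_mass(diameters_um, n_in, n_out):
    """Number-based and mass-based penetration through a precipitator.

    diameters_um -- midpoints of the size bins (micrometres)
    n_in, n_out  -- particle number concentrations per bin, upstream/downstream
    Mass per particle scales as d**3 (constant density assumed).
    """
    num_pen = sum(n_out) / sum(n_in)
    mass_in = sum(n * d**3 for n, d in zip(n_in, diameters_um))
    mass_out = sum(n * d**3 for n, d in zip(n_out, diameters_um))
    return num_pen, mass_out / mass_in

# Illustrative bins: the sub-micron bins leak ~10 % by number, while the
# heavy bins are collected almost completely.
d = [0.05, 0.1, 0.5, 2.0, 10.0]
n_in = [1e6, 5e5, 1e4, 1e3, 1e2]
n_out = [1e5, 5e4, 1e2, 1.0, 0.01]
p_num, p_mass = penetration_number_and_mass(d, n_in, n_out)
```

Here the number penetration is on the order of ten percent while the mass penetration stays well under one percent, the same qualitative gap the measurements above report.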

Relevance: 100.00%

Publisher:

Abstract:

Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are the air and water drag forces, sea surface tilt, the Coriolis force and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on three basic material properties: compressive strength, yield curve and flow rule. A high-resolution three-category sea ice model is applied to investigate sea ice dynamics in two small basins, the whole Gulf of Riga and the inner Pärnu Bay, focusing on the calibration of the compressive strength for thin ice. These two basins are on scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to about 30 kPa, consistent with the values from most large-scale sea ice dynamics studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they can be divided into three basic groups: intersecting leads, uniaxial opening leads and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparisons of this yield curve with those from other methods show that it possesses almost all the advantages identified by the other methods. A new constitutive law is proposed, in which the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules.
The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of LKF formation but also has the advantage of avoiding overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule of pack ice during deformation.
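
As an illustration of the angle-to-slope link exploited by the inversion method above, classical Mohr-Coulomb theory gives a closed-form intersection angle for conjugate fracture lines. The thesis's method applies to a general yield curve, so this is only the simplest special case:

```python
def lkf_intersection_angle(friction_angle_deg):
    """Acute intersection angle of conjugate Coulomb fracture lines.

    Classical Mohr-Coulomb theory: each fracture forms at 45 - phi/2 degrees
    to the maximum compressive stress, so the conjugate pair intersects at
    90 - phi.  (Only an illustration of the angle/slope link; the thesis's
    inversion uses the slope of a general yield curve, not just this case.)
    """
    return 90.0 - friction_angle_deg

# Illustrative: a friction angle of 30 degrees gives a 60-degree intersection,
# so observed LKF angles constrain the local slope of the yield curve.
angle = lkf_intersection_angle(30.0)
```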

Relevance: 100.00%

Publisher:

Abstract:

Solar ultraviolet (UV) radiation has a broad range of effects on life on Earth. Soon after the mid-1980s, it was recognized that the stratospheric ozone content was declining over large areas of the globe. Because the stratospheric ozone layer protects life on Earth from harmful UV radiation, this led to concern about possible changes in UV radiation due to anthropogenic activity. Initiated by this concern, many stations for monitoring surface UV radiation were founded in the late 1980s and early 1990s. As a consequence, there is an apparent lack of information on UV radiation further in the past: measurements cannot tell us how UV radiation levels have changed on time scales of, for instance, several decades. The aim of this thesis was to improve our understanding of past variations in surface UV radiation by developing techniques for UV reconstruction. Such techniques utilize commonly available meteorological data together with measurements of the total ozone column to reconstruct, or estimate, the amount of UV radiation reaching the Earth's surface in the past. Two different techniques for UV reconstruction were developed. Both are based on first calculating the clear-sky UV radiation using a radiative transfer model. The clear-sky value is then corrected for the effect of clouds based on either (i) sunshine duration or (ii) pyranometer measurements. Both techniques also account for the variations in surface albedo caused by snow, whereas aerosols are included as a typical climatological aerosol load. Using these methods, long time series of reconstructed UV radiation were produced for five European locations, namely Sodankylä and Jokioinen in Finland, Bergen in Norway, Norrköping in Sweden, and Davos in Switzerland. Both UV reconstruction techniques developed in this thesis account for the greater part of the factors affecting the amount of UV radiation reaching the Earth's surface.
Thus, they are considered reliable and trustworthy, as suggested also by the good performance of the methods. The pyranometer-based method performs better than the sunshine-based method, especially for daily values. For monthly values, the difference between the performances of the methods is smaller, indicating that the sunshine-based method is roughly as good as the pyranometer-based method for assessing long-term changes in surface UV radiation. The time series of reconstructed UV radiation produced in this thesis provide new insight into the past UV radiation climate and how the UV radiation has varied over the years. The sunshine-based UV time series in particular, extending back to 1926 at Davos and to 1950 at Sodankylä, also put into perspective the recent changes driven by the ozone decline observed over the last few decades. At Davos, the reconstructed UV over the period 1926-2003 shows considerable variation throughout the entire period, with high values in the mid-1940s, early 1960s, and in the 1990s. Moreover, the variations prior to 1980 were found to be caused primarily by variations in cloudiness, while the increase of 4.5%/decade over the period 1979-1999 was supported both by the decline in the total ozone column and by changes in cloudiness. Of the other stations included in this work, both Sodankylä and Norrköping show a clear increase in UV radiation since the early 1980s (3-4%/decade), driven primarily by changes in cloudiness, and to a lesser extent by the diminution of total ozone. At Jokioinen, a weak increase was found, while at Bergen there was no considerable overall change in the UV radiation level.
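
The two-step reconstruction described above, a modelled clear-sky value scaled by an empirical cloud correction, can be sketched as follows. The functional forms and coefficients below are placeholders for illustration, not the fitted relations of the thesis:

```python
def reconstructed_uv(uv_clear_sky, global_obs=None, global_clear=None,
                     sunshine_fraction=None):
    """Scale modelled clear-sky UV by an empirical cloud modification factor.

    Two hedged sketches of the approach (coefficients are placeholders):
      - pyranometer-based: CMF from the ratio of observed to clear-sky
        global radiation,
      - sunshine-based: CMF from the relative sunshine duration.
    """
    if global_obs is not None and global_clear is not None:
        cmf = global_obs / global_clear            # placeholder relation
    elif sunshine_fraction is not None:
        cmf = 0.3 + 0.7 * sunshine_fraction        # placeholder coefficients
    else:
        raise ValueError("need pyranometer or sunshine input")
    return uv_clear_sky * cmf

# A half-sunny day scales a clear-sky UV of 3.0 (arbitrary units) down:
uv = reconstructed_uv(3.0, sunshine_fraction=0.5)
```

In practice the fitted cloud relations, plus the snow-albedo and climatological aerosol terms mentioned above, replace these placeholders.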

Relevance: 100.00%

Publisher:

Abstract:

The superconducting (or cryogenic) gravimeter (SG) is based on the levitation of a superconducting sphere in a stable magnetic field created by currents in superconducting coils. Depending on frequency, it is capable of detecting gravity variations as small as 10^-11 m s^-2. For a single event, the detection threshold is higher, conservatively about 10^-9 m s^-2. Due to its high sensitivity and low drift rate, the SG is eminently suitable for the study of geodynamical phenomena through their gravity signatures. I present investigations of Earth dynamics with the superconducting gravimeter GWR T020 at Metsähovi from 1994 to 2005. The history and key technical details of the installation are given, and the data processing methods and the development of the local tidal model at Metsähovi are presented. T020 is part of the worldwide GGP (Global Geodynamics Project) network, which consists of 20 working stations. The data of T020 and of the other participating SGs are available to the scientific community. The SG T020 has been used as a long-period seismometer to study microseismicity and the Earth's free oscillations. The annual variation, spectral distribution, amplitude and sources of microseisms at Metsähovi are presented. Free oscillations excited by three large earthquakes were analyzed: the spectra, attenuation and rotational splitting of the modes. The lowest modes of all the different oscillation types are studied, i.e. the radial mode 0S0, the "football mode" 0S2, and the toroidal mode 0T2. The very low level (0.01 nm s^-1) incessant excitation of the Earth's free oscillations was detected with T020. The recovery of global and regional variations in gravity with the SG requires the modelling of local gravity effects, the most important of which is hydrology. The variation in the groundwater level at Metsähovi, as measured in a borehole in the fractured bedrock, correlates significantly (0.79) with gravity.
The influence of local precipitation, soil moisture and snow cover is detectable in the gravity record. The gravity effect of the variation in atmospheric mass and that of the non-tidal loading by the Baltic Sea were investigated together, as sea level and air pressure are correlated. Using Green's functions, it was calculated that a 1 metre uniform layer of water in the Baltic Sea increases gravity at Metsähovi by 31 nm s^-2 and that the vertical deformation is -11 mm. The regression coefficient for sea level is 27 nm s^-2 m^-1, which is 87% of the uniform model. These studies are associated with temporal height variations using GPS data from the Metsähovi permanent station. Results from the long time series at Metsähovi demonstrate the high quality of the data and of the offset and drift corrections. The superconducting gravimeter T020 has proved to be an eminent and versatile tool in studies of Earth dynamics.
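
The sea-level regression coefficient quoted above can be illustrated with an ordinary least-squares fit of gravity against sea level. The data below are synthetic, constructed around the reported 27 nm s^-2 m^-1 admittance, purely to show the computation:

```python
def admittance(sea_level_m, gravity_nms2):
    """Least-squares regression coefficient of gravity on sea level
    (nm s^-2 per metre of Baltic Sea level), as compared above with the
    31 nm s^-2 uniform-layer prediction.  Synthetic data for illustration.
    """
    n = len(sea_level_m)
    mx = sum(sea_level_m) / n
    my = sum(gravity_nms2) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(sea_level_m, gravity_nms2))
    sxx = sum((x - mx) ** 2 for x in sea_level_m)
    return sxy / sxx

# Synthetic series built with a 27 nm s^-2 / m admittance, i.e. 87 % of
# the uniform-layer model's 31 nm s^-2 / m:
levels = [-0.4, -0.1, 0.0, 0.2, 0.5]
gravity = [27.0 * h for h in levels]
coef = admittance(levels, gravity)
```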

Relevance: 100.00%

Publisher:

Abstract:

A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapor is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapor densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction, and they also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapor density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations, and show that the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
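
The Classical Nucleation Theory barrier that the simulations above are compared against follows directly from the liquid drop model: ΔG* = 16π σ³ v² / (3 (kT ln S)²). A sketch with water-like illustrative parameters (not the thesis's simulation inputs):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_barrier(surface_tension, molecular_volume, temperature, saturation_ratio):
    """Classical Nucleation Theory barrier height (J) from the liquid
    drop model: dG* = 16*pi*sigma^3*v^2 / (3*(kT*ln S)^2)."""
    lnS = math.log(saturation_ratio)
    return (16.0 * math.pi * surface_tension**3 * molecular_volume**2
            / (3.0 * (K_B * temperature * lnS) ** 2))

# Water-like illustrative numbers: sigma ~ 0.072 N/m, molecular volume
# ~3e-29 m^3, T = 280 K, saturation ratio S = 5.
dG = cnt_barrier(0.072, 3.0e-29, 280.0, 5.0)
barrier_in_kT = dG / (K_B * 280.0)   # a few tens of kT, i.e. rare crossings
```

Because the nucleation rate depends exponentially on this barrier, the modest corrections to ΔG* discussed above translate into large changes in predicted rates.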

Relevance: 100.00%

Publisher:

Abstract:

To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profiles during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. In autumn the water cools, overturn occurs, and an ice cover forms. Typically, the water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used to distinguish between lakes in different areas, as a basis for observation systems, and even as climate indicators. Numerical models can be used to calculate the temperature in a lake on the basis of the meteorological input at the surface. The simplest form is to solve for the surface temperature only. The depth of the lake affects heat transfer, together with other morphological features such as the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and thus the energy input. For small lakes, shading by the shores has an effect both over the lake and inside the water body, which limits the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake, while a turbulence model can simulate the vertical temperature profile in more detail. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, which has many important consequences for lake biology. The one-dimensional modelling approach was studied by comparing a one-layer model, a two-layer model and a turbulence model. The turbulence model was applied to lakes of different sizes, shapes and locations. Lake models need data from the lakes for model adjustment.
The use of meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to synoptic data at three-hour intervals. Data from about the past 100 years were used on the mesoscale, over a range of about 100 km, and climate change scenarios were used for future projections. Increasing air temperature typically increases the water temperature in the epilimnion and decreases the ice cover. Lake ice data were used for modelling different kinds of lakes and were also analysed statistically in a global context. The results were also compared with the results of a hydrological watershed model and with data on the seasonal development of very small lakes.
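The two-layer idea discussed above can be sketched with a minimal heat budget: the epilimnion is heated at the surface and exchanges heat with the hypolimnion through a bulk exchange coefficient. The layer thicknesses, exchange velocity and surface forcing below are hypothetical illustration values, not parameters of the study.

```python
def two_layer_step(T_epi, T_hypo, Q_surf, dt, h_epi=5.0, h_hypo=15.0, k_ex=1e-6):
    """One explicit Euler step of a minimal two-layer lake heat budget.

    T_epi, T_hypo : layer temperatures (degC)
    Q_surf        : net surface heat flux into the lake (W m^-2)
    dt            : time step (s)
    h_epi, h_hypo : layer thicknesses (m), assumed values
    k_ex          : bulk exchange velocity between the layers (m s^-1), assumed

    Returns the updated (T_epi, T_hypo).
    """
    RHO_CP = 1000.0 * 4186.0                        # volumetric heat capacity of water, J m^-3 K^-1
    flux_ex = k_ex * RHO_CP * (T_epi - T_hypo)      # interlayer heat flux, W m^-2
    dT_epi = (Q_surf - flux_ex) * dt / (RHO_CP * h_epi)
    dT_hypo = flux_ex * dt / (RHO_CP * h_hypo)
    return T_epi + dT_epi, T_hypo + dT_hypo

# Warm a stratified lake for 10 days with a constant 150 W m^-2 net heating:
T_e, T_h = 15.0, 8.0
for _ in range(10 * 24):
    T_e, T_h = two_layer_step(T_e, T_h, Q_surf=150.0, dt=3600.0)
```

The epilimnion warms quickly while the hypolimnion lags behind, reproducing the basic summer stratification that the full turbulence model resolves in vertical detail.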


The ever-increasing demand for faster computers in areas ranging from entertainment electronics to computational science is pushing the semiconductor industry towards its limits in decreasing the sizes of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at nanometer scales, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group in the periodic table. Carbon is an especially interesting material for nanometer-sized devices because it naturally forms different nanostructures, some of which have unique properties. The most widely suggested allotrope of carbon for electronics is a tubular molecule with an atomic structure resembling that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a long list of exciting properties. For example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, due to remaining difficulties in the large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations at different levels of theory.
Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation process of nanotubes in order to introduce different impurity atoms into these structures and thereby gain control over their electronic character. Ion irradiation is shown to be a very efficient method for replacing carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve contacts between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects in carbon nanotubes on macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory-based methods indicate that it is energetically favourable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible. The kinetic Monte Carlo studies reveal that elevated temperatures during processing enhance the self-healing of nanotubes significantly, ensuring low defect concentrations after the treatment with energetic ions. Thereby, nanotubes can retain their desired properties even after irradiation.
Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
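The kinetic Monte Carlo approach used for defect evolution can be sketched with the standard residence-time algorithm: each remaining defect anneals with a thermally activated Arrhenius rate, so elevated temperature shortens the simulated healing time by orders of magnitude. The migration barrier and attempt frequency below are generic textbook-style values, not those of the thesis code.

```python
import math
import random

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def kmc_anneal_time(n_defects, temperature, e_mig=1.0, nu0=1e13, seed=0):
    """Residence-time kinetic Monte Carlo for independent defect annealing.

    Each defect anneals with an Arrhenius rate r = nu0 * exp(-E_mig / (k_B T)).
    E_mig (eV) and nu0 (1/s) are assumed, generic values.

    Returns the simulated time (s) until all defects have annealed.
    """
    rng = random.Random(seed)
    rate = nu0 * math.exp(-e_mig / (K_B_EV * temperature))
    t = 0.0
    remaining = n_defects
    while remaining > 0:
        total_rate = remaining * rate                 # sum of all event rates
        t += -math.log(1.0 - rng.random()) / total_rate  # advance the clock
        remaining -= 1                                # execute one annealing event
    return t

t_cold = kmc_anneal_time(100, temperature=300.0)
t_hot = kmc_anneal_time(100, temperature=900.0)
```

Because every event advances the clock by a rate-weighted stochastic increment rather than a fixed time step, this scheme reaches the macroscopic time scales that molecular dynamics cannot, which is precisely why the thesis combines the two levels of theory.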