Abstract:
This work examined the modelling of snow water equivalent and of the surface albedo of snow-covered areas in the atmospheric general circulation model ECHAM5 and in ERA-40, the reanalysis system of the European Centre for Medium-Range Weather Forecasts. The aim was to determine the differences between the two and how well ECHAM5 describes the snow conditions of the present climate. As an illustrative example, the snow modelling approach of the Rossby Centre regional climate model, RCA3, was also examined. The forcing used in the ECHAM5 simulations was the observed distribution of sea surface temperature and sea ice. The ECHAM5 and ERA-40 datasets were compared over northern Eurasia for the period 1986-1990. The ERA-40 snow water equivalents were additionally compared with the observational dataset of the INTAS-SCCONE project. According to the results, the ECHAM5 snow water equivalent was smaller than that of ERA-40 in many regions. The differences were largest in the regions of maximum snow water equivalent in central Eurasia. Interannual variability was also smaller in ECHAM5 than in ERA-40. Especially in the last years of the study period, 1989 and 1990, the snow water equivalent over northern Europe reached very low values in ERA-40, which are explained by the high values of the NAO index; the strength of the NAO at the end of the 1980s, however, is not visible in the ECHAM5 snow water equivalent. The ERA-40 snow analysis includes snow depth observations, which is the largest factor behind the differences in the results. It may also be that the forcing used in the ECHAM5 simulations is not strong enough to produce an entirely realistic snow water equivalent distribution. The differences between ERA-40 and the INTAS-SCCONE dataset were not very large. The snow albedo used in computing the surface albedo of snow-covered areas is a prognostic variable in ERA-40, whereas in ECHAM5 it is parameterized. According to the results, surface albedo values in ECHAM5 are widely higher than in ERA-40. The differences arise from the different ways the albedos are calculated and from the different vegetation distributions of the models. ECHAM5 underestimates the albedo-reducing effect of vegetation, especially in the boreal coniferous forest zone. The ERA-40 surface albedo is therefore likely more realistic than that of ECHAM5.
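The comparison described above amounts to per-grid-cell statistics between co-located snow water equivalent (SWE) fields. A minimal sketch of that kind of comparison (mean bias and interannual variability), using synthetic placeholder arrays in place of the actual ECHAM5 and ERA-40 data:

```python
import numpy as np

# Synthetic stand-ins for co-located gridded SWE fields (mm), shaped
# (year, lat, lon) for a 1986-1990 style comparison; real data would be
# regridded ECHAM5 output and ERA-40 analyses.
rng = np.random.default_rng(42)
swe_echam5 = rng.gamma(2.0, 30.0, size=(5, 40, 120))
swe_era40 = rng.gamma(2.0, 35.0, size=(5, 40, 120))

# Mean bias (ECHAM5 minus ERA-40) over the period, per grid cell.
bias = (swe_echam5 - swe_era40).mean(axis=0)

# Interannual variability: standard deviation across years, per grid cell.
iav_echam5 = swe_echam5.std(axis=0)
iav_era40 = swe_era40.std(axis=0)

print(f"area-mean bias: {bias.mean():+.1f} mm")
print(f"area-mean interannual std: ECHAM5 {iav_echam5.mean():.1f} mm, "
      f"ERA-40 {iav_era40.mean():.1f} mm")
```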
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research), whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004 - 2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to spot possible problems. The components were accepted or rejected according to limits approved by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. Once the problems of the project's learning curve had been solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems typically showed up in the tests as too many individual channel failures. Bonding failures, by contrast, rarely caused the rejection of any component. One sensor type among the three sensor manufacturers proved to be of lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
Abstract:
The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense study in the recent past. Both experimental and computational methods have been used in these studies. One method for studying the intra- and intermolecular bonding in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of a photon scattering with a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions needed to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this with ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics as input for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
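The abstract does not spell out the numerical scheme; in the commonly used impulse approximation (an assumption here, not a statement of the thesis's method), an isotropic Compton profile reduces to a one-dimensional integral over the radial electron momentum density, J(q) = 2π ∫_{|q|}^∞ p ρ(p) dp. A minimal sketch with a hydrogen 1s sanity check:

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids depending on a specific NumPy version)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def compton_profile(qs, p, rho):
    """Isotropic impulse-approximation Compton profile:
    J(q) = 2*pi * integral_{|q|}^{inf} p * rho(p) dp."""
    out = []
    for q in qs:
        m = p >= abs(q)                     # integrate the tail p >= |q|
        out.append(2.0 * np.pi * trapezoid(p[m] * rho[m], p[m]))
    return np.array(out)

# Sanity check: hydrogen 1s momentum density in atomic units,
# rho(p) = 8 / (pi^2 (1 + p^2)^4), for which exactly J(0) = 8/(3*pi) ~ 0.8488.
p = np.linspace(0.0, 50.0, 200_001)
rho = 8.0 / (np.pi**2 * (1.0 + p**2) ** 4)
qs = np.linspace(0.0, 3.0, 7)
print(compton_profile(qs, p, rho))          # first value ~ 0.8488
```

A difference Compton profile would then be the pointwise difference of two such J(q) curves, e.g. mixture minus pure components.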
Abstract:
One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce such an asymmetry? One theory to explain this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the standard model. This addition introduces explicit lepton number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If these decays are CP-violating, they produce lepton number, which is then partially converted to baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce the Sakharov conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillations and explain why they require the existence of neutrino mass. We introduce the different kinds of mass terms that can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed mass scales for neutrinos, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations to them. Finally, we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions involving both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant for the theory.
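The abstract does not quote the equations themselves; in the standard leptogenesis literature they are commonly written in the following form (a hedged reconstruction using conventional notation, which may differ from the thesis's own: z = M_1/T, N_X comoving number densities, D and S the decay and scattering rates, W the washout rate, ε_1 the CP asymmetry of N_1 decays):

```latex
% z = M_1 / T; N_X are comoving number densities; D, S, W are the decay,
% scattering and washout rates; epsilon_1 is the CP asymmetry of N_1 decays.
\frac{dN_{N_1}}{dz} = -(D + S)\left(N_{N_1} - N_{N_1}^{\mathrm{eq}}\right),
\qquad
\frac{dN_{B-L}}{dz} = -\varepsilon_1\, D\left(N_{N_1} - N_{N_1}^{\mathrm{eq}}\right)
                      - W\, N_{B-L}.
```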
Abstract:
The aim of this study is to investigate the composition of the crust in Finland using seismic wide-angle velocity models and laboratory measurements of the P- and S-wave velocities of different rock types. The velocities adopted from the wide-angle velocity models were compared with laboratory velocities of different rock types corrected to the crustal PT conditions of the study area. The wide-angle velocity models indicate that the P-wave velocity does not only increase step-wise at the boundaries of the major crustal layers; there is also a gradual increase of velocity within the layers. On the other hand, the laboratory measurements indicate that no single rock type is able to produce these gradual downward-increasing trends. Thus, there must be gradual vertical changes in rock composition. The downward increase of velocities indicates that the composition of the crust becomes gradually more mafic with increasing depth. Even though single rock types cannot reproduce the wide-angle model velocities, a mixture of rock types can. A large number of rock type mixtures give the correct P-wave velocities; therefore, inverting rock types and their proportions from velocities is a non-unique problem if only P-wave velocities are available. The number of possible rock type mixtures can be limited using S-wave velocities, reflection seismic results and other geological and geophysical results from the study area. The crustal model FINMIX-2 presented in this study suggests that the crustal velocity profiles can be simulated with rock type mixtures in which the upper crust consists of felsic gneisses and granitic-granodioritic rocks with a minor contribution of quartzite, amphibolite and diabase. In the middle crust the amphibolite proportion increases. The lower crust consists of tonalitic gneiss, mafic garnet granulite, hornblendite, pyroxenite and minor mafic eclogite. This composition model is in agreement with deep-crustal kimberlite-hosted xenolith data from eastern Finland and with the reflectivity of FIRE (the Finnish Reflection Experiment). According to the FINMIX-2 model, the Moho is deeper and the crustal composition more mafic than an average global continental model would suggest. Composition models of southern Finland are quite similar to the FINMIX-2 model; the minor differences between the models indicate regional differences in composition. Models of northern Finland show that the crustal thickness is smaller than in southern Finland and that the composition of the upper crust is different. Density profiles calculated from the lithological models suggest that there is practically no density contrast at the Moho in areas of high-velocity lower crust. This implies that the crustal thickness in the central Fennoscandian Shield may have been controlled by the densities of the lower crustal and upper mantle rocks.
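The mixture matching described above can be illustrated with a simple forward calculation: given laboratory velocities for candidate rock types and their volume fractions, the mixture velocity follows from an averaging rule. A minimal sketch under assumed, illustrative values (the thesis's actual averaging scheme and data are not given here; arithmetic and harmonic averages are shown as two common choices):

```python
import numpy as np

# Illustrative lab P-wave velocities (km/s) at mid-crustal PT conditions;
# rock types and values are assumptions, not data from the thesis.
vp = np.array([6.0, 6.3, 6.9])   # felsic gneiss, granodiorite, amphibolite

def mix_vp(frac):
    """Arithmetic (Voigt-like) and harmonic (time-average) mixture velocities
    for given volume fractions of the three rock types."""
    frac = np.asarray(frac, dtype=float)
    assert abs(frac.sum() - 1.0) < 1e-9, "volume fractions must sum to 1"
    return float(np.sum(frac * vp)), float(1.0 / np.sum(frac / vp))

# Two different mixtures reproduce the same arithmetic Vp of 6.27 km/s,
# illustrating why inversion from P-wave velocity alone is non-unique.
print(mix_vp([0.50, 0.30, 0.20]))
print(mix_vp([0.30, 0.60, 0.10]))
```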
Abstract:
Microbial activity in soils is the main source of nitrous oxide (N2O) to the atmosphere. Nitrous oxide is a strong greenhouse gas in the troposphere and participates in ozone-destroying reactions in the stratosphere. The constant increase in its atmospheric concentration, as well as uncertainties in the known sources and sinks of N2O, underline the need to better understand the processes and pathways of N2O in terrestrial ecosystems. This study aimed at quantifying N2O emissions from soils in northern Europe and at investigating the processes and pathways of N2O in agricultural and forest ecosystems. Emissions were measured in forest ecosystems, agricultural soils and a landfill, using the soil gradient, chamber and eddy covariance methods. The processes responsible for N2O production, and the pathways of N2O from the soil to the atmosphere, were studied in the laboratory and in the field. These ecosystems were chosen for their potential importance to the national and global budgets of N2O. Laboratory experiments with boreal agricultural soils revealed that N2O production increases drastically with soil moisture content, and that the contributions of nitrification and denitrification to N2O emissions depend on soil type. A laboratory study with beech (Fagus sylvatica) seedlings demonstrated that trees can serve as conduits for N2O from the soil to the atmosphere. If this mechanism is important in forest ecosystems, current emission estimates from forest soils may underestimate the total N2O emissions from forest ecosystems; further field and laboratory studies are needed to evaluate its importance. The emissions of N2O from northern forest ecosystems and a municipal landfill were highly variable in time and space. The emissions from boreal upland forest soil were among the smallest reported in the world. Despite the low emission rates, the soil gradient method revealed a clear seasonal variation in N2O production. The organic topsoil was responsible for most of the N2O production and consumption in this forest soil. Emissions from the municipal landfill were one to two orders of magnitude higher than those from agricultural soils, which are the most important source of N2O to the atmosphere. Due to their small areal coverage, landfills contribute only minimally to national N2O emissions in Finland. The eddy covariance technique was demonstrated to be useful for measuring ecosystem-scale emissions of N2O in forest and landfill ecosystems. Overall, more measurements and integration between different measurement techniques are needed to capture the large variability of N2O emissions from natural and managed northern ecosystems.
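Of the three measurement methods, the soil gradient method is the most direct to sketch: the flux follows from Fick's first law applied to concentrations measured at two depths. A minimal illustration with assumed placeholder values (the diffusivity and concentrations are not measurements from this study):

```python
# Minimal sketch of the soil-gradient flux estimate (Fick's first law):
# F = D_s * dC/dz with z positive downward, so a concentration increasing
# with depth gives an upward (emission) flux. All values are illustrative
# placeholders, not measurements from this study.
D_s = 2.0e-6          # effective N2O diffusivity in soil, m^2/s (assumed)
c_surface = 0.33e-3   # N2O concentration near the surface, g/m^3 (assumed)
c_depth = 0.45e-3     # N2O concentration at 0.10 m depth, g/m^3 (assumed)
dz = 0.10             # vertical separation of the two samplers, m

flux = D_s * (c_depth - c_surface) / dz   # g N2O m^-2 s^-1, positive = emission
print(f"N2O flux ~ {flux:.2e} g m^-2 s^-1")
```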
Abstract:
The main obstacle for the application of high-quality diamond-like carbon (DLC) coatings has been the lack of adhesion to the substrate as the coating thickness is increased. The aim of this study was to improve the filtered pulsed arc discharge (FPAD) method, with which it is possible to achieve the high DLC coating thicknesses necessary for practical applications. The energy of the carbon ions was measured with an optoelectronic time-of-flight method. An in situ cathode polishing system, used for stabilizing the process yield and the carbon ion energies, is presented; at the same time, the quality of the coatings can be controlled. To optimise the quality of the deposition process, a simple, fast and inexpensive method using silicon wafers as test substrates was developed. This method was used for evaluating the suitability of a simplified arc-discharge set-up for the deposition of the adhesion layer of DLC coatings. A whole new group of materials discovered by our research group, the diamond-like carbon polymer hybrid (DLC-p-h) coatings, is also presented. The parent polymers used in these novel coatings were polydimethylsiloxane (PDMS) and polytetrafluoroethylene (PTFE). The energy of the plasma ions was found to increase when the anode-cathode distance and the arc voltage were increased. A constant deposition rate for continuous coating runs was obtained with the in situ cathode polishing system. The novel DLC-p-h coatings were found to be water- and oil-repellent and harder than any polymer. The lowest sliding angle ever measured on a solid surface, 0.15 ± 0.03°, was measured on a DLC-PDMS-h coating. In the FPAD system, carbon ions can be accelerated to the high energies (≈ 1 keV) necessary for optimal adhesion (the substrate itself breaks in the adhesion and quality test) of ultra-thick (up to 200 µm) DLC coatings by increasing the anode-cathode distance and using high voltages (up to 4 kV). Excellent adhesion can also be obtained with the simplified arc-discharge device. To maintain a high process yield (5 µm/h over a surface area of 150 cm^2) and to stabilize the carbon ion energies and the high quality (sp3 fraction up to 85%) of the resulting coating, an in situ cathode polishing system must be used. The DLC-PDMS-h coating is the superior candidate coating material for anti-soiling applications where hardness is also required.
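The time-of-flight energy measurement reduces to simple kinematics: the ion energy follows from its mass and the measured flight time over a known distance. A minimal sketch with assumed numbers (the flight distance and time are illustrative, chosen to land near the ≈ 1 keV scale quoted above):

```python
# Time-of-flight kinematics for a carbon ion: E = (1/2) * m * (L / t)^2.
# Flight distance and time are illustrative assumptions, not thesis values.
M_C = 12.0 * 1.66054e-27   # carbon-12 ion mass, kg
EV = 1.602e-19             # joules per electronvolt

L = 0.5                    # flight distance, m (assumed)
t = 3.9e-6                 # measured flight time, s (assumed)

v = L / t                              # ion speed, m/s
E_eV = 0.5 * M_C * v**2 / EV           # kinetic energy, eV
print(f"carbon ion energy ~ {E_eV:.0f} eV")   # ~ 1 keV
```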
Abstract:
The structure and operation of CdTe, CdZnTe and Si pixel detectors, based on crystalline semiconductors, bump bonding and CMOS technology and developed mainly at Oy Simage Ltd. and Oy Ajat Ltd., Finland, for X-ray and gamma-ray imaging, are presented. This detector technology evolved from the development of Si strip detectors at the Finnish Research Institute for High Energy Physics (SEFT), which later merged with other physics research units to form the Helsinki Institute of Physics (HIP). General issues of X-ray imaging are discussed, such as the benefits of direct conversion of X-rays to signal charge in comparison with the indirect method, and the pros and cons of photon counting vs. charge integration. A novel design of Si and CdTe pixel detectors and the analysis of their imaging performance in terms of SNR, MTF, DQE and dynamic range are presented in detail. The analysis shows that directly converting crystalline semiconductor pixel detectors operated in the charge integration mode can be used in X-ray imaging very close to the theoretical performance limits in terms of efficiency and resolution. Examples of the application of the developed imaging technology to dental intraoral, panoramic and real-time X-ray imaging are given. A CdTe photon counting gamma imager is introduced. A physical model to calculate the photopeak efficiency of photon counting CdTe pixel detectors is developed and described in detail. Simulation results indicate that the charge sharing phenomenon, due to diffusion of the signal charge carriers, limits the pixel size of photon counting detectors to about 250 μm. Radiation hardness issues related to gamma- and X-ray imaging detectors are discussed.
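The diffusion mechanism behind charge sharing can be estimated with textbook drift-diffusion relations; a few sigma of lateral spread on each side of the interaction point, together with the finite interaction depth, pushes the practical pixel limit well above sigma itself. A minimal sketch with assumed, illustrative numbers (not the thesis's model or parameters):

```python
import math

# Diffusion-driven charge sharing in a planar CdTe pixel detector: a charge
# cloud drifting the full thickness spreads laterally by
# sigma = sqrt(2 * D * t_drift), with D from the Einstein relation D = mu*kT/q.
# All numbers are illustrative assumptions, not values from the thesis.
mu_e = 1100.0   # electron mobility in CdTe, cm^2/(V s) (assumed)
kT_q = 0.0259   # thermal voltage at room temperature, V
d = 0.1         # detector thickness, cm (1 mm, assumed)
V = 500.0       # bias voltage, V (assumed)

t_drift = d**2 / (mu_e * V)            # drift time across the thickness, s
D = mu_e * kT_q                        # diffusion coefficient, cm^2/s
sigma = math.sqrt(2.0 * D * t_drift)   # lateral spread, cm
print(f"drift ~ {t_drift * 1e9:.0f} ns, lateral sigma ~ {sigma * 1e4:.0f} um")
```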
Abstract:
The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer -t = p^2 theta^2 up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires the simultaneous measurement of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (<20 um), fast response (O(10 ns)) to particles and radiation hardness up to 10^14 "n"/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 um, compatible with the requirements of the experiment.
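Via the optical theorem, the luminosity-independent method combines the measured rates as sigma_tot = 16*pi*(hbar*c)^2 / (1 + rho^2) * (dN_el/dt at t = 0) / (N_el + N_inel), so the luminosity cancels between numerator and denominator. A minimal numerical sketch with assumed, illustrative rates (not TOTEM data):

```python
import math

# Luminosity-independent total cross-section (via the optical theorem):
#   sigma_tot = 16*pi*(hbar*c)^2 / (1 + rho^2) * (dN_el/dt|_{t=0}) / (N_el + N_inel)
# The rates and rho below are illustrative assumptions, not TOTEM measurements.
HBARC2 = 0.3894   # (hbar*c)^2 in GeV^2 * mb

rho = 0.14        # ratio of real to imaginary forward elastic amplitude (assumed)
dNel_dt0 = 1.0e7  # elastic rate extrapolated to t = 0, events / GeV^2 (assumed)
N_el = 5.0e5      # integrated elastic rate, events (assumed)
N_inel = 1.5e6    # integrated inelastic rate, events (assumed)

sigma_tot = 16.0 * math.pi * HBARC2 / (1.0 + rho**2) * dNel_dt0 / (N_el + N_inel)
print(f"sigma_tot ~ {sigma_tot:.0f} mb")
```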