998 results for PHYSICAL SCIENCES


Relevance:

60.00%

Publisher:

Abstract:

This master's thesis explores some of the most recent developments in noncommutative quantum field theory. This old theme, first suggested by Heisenberg in the late 1940s, has had a renaissance during the last decade, due both to the firmly held belief that space-time becomes noncommutative at small distances and to the discovery that string theory in a background field gives rise to noncommutative field theory as an effective low-energy limit. This has led to interesting attempts to create a noncommutative standard model, a noncommutative minimal supersymmetric standard model, noncommutative gravity theories, etc. This thesis reviews themes and problems such as UV/IR mixing, charge quantization, how to deal with noncommutative symmetries, how to solve the Seiberg-Witten map, its connection to fluid mechanics, and the problem of constructing general coordinate transformations to obtain a theory of noncommutative gravity. Emphasis has been put on presenting both the group-theoretical results and the string-theoretical ones, so that the two can be compared.
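For orientation, the deformation underlying most of these constructions can be written (in standard notation, not quoted from the thesis) as a constant coordinate commutator realized on ordinary functions by the Moyal star product:

```latex
% Constant noncommutativity of the coordinates:
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu},
\qquad \theta^{\mu\nu} = -\theta^{\nu\mu} \ \text{constant},

% realized on ordinary functions by the Moyal star product:
(f \star g)(x) = f(x)\,
  \exp\!\left(\tfrac{i}{2}\,\theta^{\mu\nu}\,
  \overleftarrow{\partial}_{\mu}\,\overrightarrow{\partial}_{\nu}\right) g(x).
```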

Relevance:

60.00%

Publisher:

Abstract:

The number of computed tomography (CT) examinations is growing, and they account for a significant share of the collective dose to the population from X-ray examinations. To determine the patient's radiation dose accurately, reliably and comparably, the measuring instruments must be calibrated in a manner traceable to the international measurement system, using agreed standard radiation qualities. Dose measurements on CT scanners use special long cylindrical ionisation chambers (CT chambers, also called DLP chambers), which measure the air kerma-length product. There has been no established procedure for calibrating CT chambers at the Radiation and Nuclear Safety Authority (STUK), nor any generally accepted international guideline. STUK is participating in the trial use of a draft guideline (2005) of the International Atomic Energy Agency (IAEA). The purpose of this work was to test the CT-chamber calibration method presented in that guideline as well as previously published methods, to develop STUK's own calibration practice on their basis, and to test how it performs. Different calibration methods and the operation of the CT chamber were examined. Based on the measurements, a procedure was adopted in which the calibration is performed by measuring the response of the CT chamber with three beam-limiting apertures of different widths. The uniformity of the chamber response can additionally be studied with a 1 cm wide aperture. The calibration coefficient is obtained by comparing the results measured with the CT chamber under calibration to those obtained with a reference instrument (measurement standard). In CT-chamber calibration the reference instrument is a cylindrical ionisation chamber. If the effective length of the CT chamber under calibration is to be estimated, its calibration coefficient must also be computed from measurements made with the CT chamber in an open field. The overall uncertainty of the calibration coefficient obtained with the method presented in this work is 2.4%. In a dose-measurement situation on a CT scanner, the simplest approach is to estimate a suitable calibration coefficient from the tube voltage alone; this raises the overall uncertainty contributed by the calibration coefficient to 4.7%, because the calibration coefficient depends on both the X-ray tube voltage and the beam filtration.
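As an illustration of the arithmetic involved (a minimal sketch with hypothetical numbers, not STUK's actual calibration procedure or data):

```python
# Illustrative sketch only: a calibration coefficient as the ratio of the
# reference (standard) air kerma-length product to the mean reading of the
# chamber under calibration, and a quadrature combination of independent
# relative uncertainty components. All numbers below are hypothetical.
import math

def calibration_coefficient(reference_value, chamber_readings):
    """N = reference value / mean reading of the chamber under calibration."""
    mean_reading = sum(chamber_readings) / len(chamber_readings)
    return reference_value / mean_reading

def combined_relative_uncertainty(components_percent):
    """Combine independent relative uncertainties (%) in quadrature."""
    return math.sqrt(sum(u ** 2 for u in components_percent))

# Hypothetical reference air kerma-length product of 120.0 mGy*cm and
# three repeated chamber readings:
n_cal = calibration_coefficient(120.0, [100.0, 101.0, 99.0])

# Hypothetical components: base calibration uncertainty plus the extra
# spread from choosing the coefficient by tube voltage alone.
u_total = combined_relative_uncertainty([2.4, 4.0])
```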

Relevance:

60.00%

Publisher:

Abstract:

The aim of this work was to assess the structure and use of the conceptual model of occlusion in operational weather forecasting. First, a survey was made of the conceptual model of occlusion as introduced to operational forecasters at the Finnish Meteorological Institute (FMI). In the same context, an overview was made of the use of the conceptual model in modern operational weather forecasting, especially in connection with the widespread use of numerical forecasts. In order to evaluate the features of occlusions in operational weather forecasting, all the occlusion processes occurring during the year 2003 over Europe and the northern Atlantic were investigated using the conceptual model of occlusion and the methods suggested at the FMI. The investigation yielded a classification of the occluded cyclones on the basis of how well the conceptual model fitted the description of the observed thermal structure. The seasonal and geographical distributions of the classes were inspected. Some relevant cases belonging to different classes were collected and analyzed in detail; in this deeper investigation, tools and techniques not routinely used in operational weather forecasting were adopted. Both the statistical investigation of the occluded cyclones during 2003 and the case studies revealed that the traditional classification of occlusion types on the basis of thermal structure does not take into account the greater variety of occlusion structures that can be observed. Moreover, the conceptual model of occlusion often turned out to be inadequate in describing well-developed cyclones. A deep and constructive revision of the conceptual model of occlusion is therefore suggested in light of the results obtained in this work. The revision should take into account both the progress being made in building a theoretical footing for the occlusion process and the tools and meteorological quantities that are nowadays available.

Relevance:

60.00%

Publisher:

Abstract:

The efforts to combine quantum theory with general relativity have been great and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative, theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality: it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of the noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next, the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing, for the first time, that the light-wedge microcausality condition is compatible with twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, so dramatic for the whole structure of space-time.
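In standard conventions (an assumption about notation, not quoted from the thesis), with noncommutativity confined to one spatial plane, the light-wedge condition referred to above takes the form:

```latex
% Noncommutativity confined to the (x^2, x^3) plane:
[\hat{x}^{2}, \hat{x}^{3}] = i\theta, \qquad
[\hat{x}^{0}, \hat{x}^{\mu}] = [\hat{x}^{1}, \hat{x}^{\mu}] = 0.

% Light-wedge microcausality: commutators of observables vanish outside
% the wedge defined by the commuting coordinates (x^0, x^1),
[\mathcal{O}(x), \mathcal{O}(y)] = 0
\quad \text{whenever} \quad (x^{0}-y^{0})^{2} - (x^{1}-y^{1})^{2} < 0,
% rather than outside the full light cone of the commutative theory.
```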

Relevance:

60.00%

Publisher:

Abstract:

This work examined the modelling of snow water equivalent and of the surface albedo of snow-covered areas in ECHAM5, an atmospheric general circulation model, and in ERA-40, the reanalysis system of the European Centre for Medium-Range Weather Forecasts. The purpose was to determine the differences between the two and how well ECHAM5 describes the snow conditions of the present climate. As an example, the snow modelling approach of RCA3, the Rossby Centre regional climate model, was also examined. The forcing used in the ECHAM5 simulations was the observed distribution of sea surface temperature and sea ice. The ECHAM5 and ERA-40 data sets were compared for the period 1986-1990 over northern Eurasia. The ERA-40 snow water equivalent was additionally compared with the observational data set of the INTAS-SCCONE project. According to the results, the ECHAM5 snow water equivalent was in many areas smaller than that of ERA-40. The differences were largest in the regions of maximum snow water equivalent in central Eurasia. Interannual variability was also smaller in ECHAM5 than in ERA-40. Especially in the last years of the study period, 1989 and 1990, the snow water equivalent over northern Europe reached very low values according to ERA-40, which is explained by high values of the NAO index. The strength of the NAO at the end of the 1980s is, however, not visible in the ECHAM5 snow water equivalent. The ERA-40 snow analysis includes snow-depth observations, which is the largest single factor behind the differences. It also seems possible that the forcing used in the ECHAM5 simulations is not strong enough to produce a fully realistic snow water equivalent distribution. The differences between ERA-40 and the INTAS-SCCONE data were not very large. The snow albedo used as part of the surface albedo of snow-covered areas is a prognostic variable in ERA-40, whereas in ECHAM5 it is parameterized. According to the results, surface albedo values in ECHAM5 are widely higher than in ERA-40. The differences arise from the different ways the albedos are computed and from the different vegetation distributions of the models. ECHAM5 underestimates the albedo-lowering effect of vegetation, especially in the boreal coniferous forest zone. The ERA-40 surface albedo is thus probably the more realistic of the two.

Relevance:

60.00%

Publisher:

Abstract:

ALICE (A Large Ion Collider Experiment) is an experiment at CERN (the European Organization for Nuclear Research) in which a dedicated heavy-ion detector exploits the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As a part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle-detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems appearing on the learning curve of the project were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. The problems were typically seen in the tests as too many individual channel failures. Bonding failures, by contrast, rarely caused the rejection of any component. One sensor type among the three sensor manufacturers has proven to be of lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
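The yield figures above are simple acceptance ratios; a minimal sketch of the bookkeeping (with hypothetical counts, not the project's actual test records) could look like:

```python
# Illustrative yield bookkeeping. The counts below are hypothetical and
# chosen only to reproduce the percentages quoted in the abstract.
def acceptance_yield(accepted, tested):
    """Yield in percent: accepted components / tested components."""
    return 100.0 * accepted / tested

# e.g. 862 modules accepted out of 1000 tested gives the 86.2% module yield.
module_yield = acceptance_yield(862, 1000)

def combined_yield(stage_yields_percent):
    """Per-stage yields combine multiplicatively if failures are
    independent (an assumption, not a claim about this project)."""
    product = 1.0
    for y in stage_yields_percent:
        product *= y / 100.0
    return 100.0 * product
```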

Relevance:

60.00%

Publisher:

Abstract:

The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in these studies. One method for studying the intra- and intermolecular bonding in the mixtures is the use of so-called difference Compton profiles, which offer a way to obtain information about changes in the electron wave functions. In the process of Compton scattering, a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of photon scattering at a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions needed to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics as input to the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
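For reference, the standard definitions involved (textbook notation within the impulse approximation, not quoted from the thesis) are:

```latex
% The Compton profile is the electron momentum density n(\mathbf{p})
% integrated over the plane perpendicular to the scattering vector
% (taken along p_z):
J(p_z) = \iint n(\mathbf{p})\, dp_x\, dp_y .

% A difference profile compares the mixture with the concentration-weighted
% sum of its pure components, isolating the effect of mixing on the
% electron wave functions:
\Delta J(p_z) = J_{\text{mixture}}(p_z) - \sum_i c_i\, J_i(p_z).
```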

Relevance:

60.00%

Publisher:

Abstract:

One of the unanswered questions of modern cosmology is the issue of baryogenesis: why does the universe contain a huge amount of baryons but no antibaryons, and what kind of mechanism can produce such an asymmetry? One theory proposed to explain this problem is leptogenesis, in which right-handed neutrinos with heavy Majorana masses are added to the standard model. This addition introduces explicit lepton-number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If the decays are CP-violating, they produce a net lepton number, which is then partially converted to baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce Sakharov's conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillations and explain why they require the existence of neutrino mass. We introduce the different kinds of mass terms that can be added for neutrinos and explain how the see-saw mechanism naturally explains the observed neutrino mass scales, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations for them. Finally, we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions between both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant for the theory.
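The see-saw mechanism mentioned above can be summarized in its simplest, one-generation form (standard textbook notation, not quoted from the thesis):

```latex
% Type-I see-saw: with a Dirac mass m_D and a heavy Majorana mass
% M \gg m_D, diagonalizing the mass matrix leaves one heavy state of
% mass \approx M and one light state of mass
m_\nu \simeq \frac{m_D^2}{M}.
% For example, m_D \sim 100~\text{GeV} and M \sim 10^{14}~\text{GeV}
% give m_\nu \sim 0.1~\text{eV}, the observed neutrino mass scale.
```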

Relevance:

60.00%

Publisher:

Abstract:

The aim of this study is to investigate the composition of the crust in Finland using seismic wide-angle velocity models and laboratory measurements of the P- and S-wave velocities of different rock types. The velocities adopted from the wide-angle velocity models were compared with laboratory velocities of different rock types corrected for the crustal pressure-temperature (PT) conditions in the study area. The wide-angle velocity models indicate that the P-wave velocity does not only increase step-wise at the boundaries of the major crustal layers; there is also a gradual increase of velocity within the layers. On the other hand, the laboratory measurements indicate that no single rock type can produce these gradual downward-increasing trends. Thus, there must be gradual vertical changes in rock composition, and the downward increase of velocities indicates that the composition of the crust becomes gradually more mafic with increasing depth. Even though single rock types cannot reproduce the wide-angle model velocities, a mixture of rock types can. A large number of rock-type mixtures give the correct P-wave velocities; therefore, inverting velocities for rock types and their proportions is a non-unique problem if only P-wave velocities are available. The number of possible rock-type mixtures can be limited using S-wave velocities, reflection seismic results, and other geological and geophysical results from the study area. The crustal model FINMIX-2, presented in this study, suggests that the crustal velocity profiles can be simulated with rock-type mixtures in which the upper crust consists of felsic gneisses and granitic-granodioritic rocks with a minor contribution of quartzite, amphibolite and diabase. In the middle crust the amphibolite proportion increases, while the lower crust consists of tonalitic gneiss, mafic garnet granulite, hornblendite, pyroxenite and minor mafic eclogite. This composition model is in agreement with deep-crustal kimberlite-hosted xenolith data from eastern Finland and with the reflectivity of FIRE (the Finnish Reflection Experiment). According to the FINMIX-2 model, the Moho is deeper and the crustal composition more mafic than an average global continental model would suggest. Composition models of southern Finland are quite similar to the FINMIX-2 model, although there are minor differences between the models, indicating regional differences in composition. Models of northern Finland show that the crust is thinner there than in southern Finland and that the composition of the upper crust differs. Density profiles calculated from the lithological models suggest that there is practically no density contrast at the Moho in areas of high-velocity lower crust. This implies that the crustal thickness in the central Fennoscandian Shield may have been controlled by the densities of the lower-crustal and upper-mantle rocks.
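The simplest forward model for such a mixture is a volume-weighted average of the laboratory velocities of the constituent rock types; a minimal sketch (hypothetical fractions and velocities, not the FINMIX-2 procedure itself, which would also use S-wave velocities and more elaborate mixing laws):

```python
# Hedged sketch: volume-fraction-weighted average velocity of a rock
# mixture. Laboratory velocities are assumed to be already corrected to
# in-situ PT conditions. All numbers below are hypothetical.
def mixture_velocity(fractions, velocities):
    """Volume-fraction-weighted average velocity (km/s)."""
    if abs(sum(fractions) - 1.0) > 1e-6:
        raise ValueError("volume fractions must sum to 1")
    return sum(f * v for f, v in zip(fractions, velocities))

# Hypothetical upper-crust mix: 60% felsic gneiss (6.0 km/s),
# 30% granite-granodiorite (6.1 km/s), 10% amphibolite (6.8 km/s).
vp = mixture_velocity([0.6, 0.3, 0.1], [6.0, 6.1, 6.8])
```

The non-uniqueness described in the abstract is visible here: many different fraction vectors yield the same weighted average, which is why the extra constraints (S-wave velocities, reflectivity, xenoliths) are needed.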

Relevance:

60.00%

Publisher:

Abstract:

Microbial activity in soils is the main source of nitrous oxide (N2O) to the atmosphere. Nitrous oxide is a strong greenhouse gas in the troposphere and participates in ozone-destructive reactions in the stratosphere. The constant increase in its atmospheric concentration, as well as uncertainties in the known sources and sinks of N2O, underline the need to better understand the processes and pathways of N2O in terrestrial ecosystems. This study aimed at quantifying N2O emissions from soils in northern Europe and at investigating the processes and pathways of N2O in agricultural and forest ecosystems. Emissions were measured in forest ecosystems, agricultural soils and a landfill, using the soil gradient, chamber and eddy covariance methods. The processes responsible for N2O production, and the pathways of N2O from the soil to the atmosphere, were studied in the laboratory and in the field. These ecosystems were chosen for their potential importance to the national and global budget of N2O. Laboratory experiments with boreal agricultural soils revealed that N2O production increases drastically with soil moisture content, and that the contributions of nitrification and denitrification to N2O emissions depend on soil type. A laboratory study with beech (Fagus sylvatica) seedlings demonstrated that trees can serve as conduits for N2O from the soil to the atmosphere. If this mechanism is important in forest ecosystems, current emission estimates from forest soils may underestimate the total N2O emissions from forest ecosystems; further field and laboratory studies are needed to evaluate its importance. The emissions of N2O from northern forest ecosystems and a municipal landfill were highly variable in time and space. The emissions of N2O from boreal upland forest soil were among the smallest reported in the world. Despite the low emission rates, the soil gradient method revealed a clear seasonal variation in N2O production, and the organic topsoil was responsible for most of the N2O production and consumption in this forest soil. Emissions from the municipal landfill were one to two orders of magnitude higher than those from agricultural soils, which are the most important source of N2O to the atmosphere. Due to their small areal coverage, however, landfills contribute only minimally to national N2O emissions in Finland. The eddy covariance technique was demonstrated to be useful for measuring ecosystem-scale emissions of N2O in forest and landfill ecosystems. Overall, more measurements and integration between different measurement techniques are needed to capture the large variability in N2O emissions from natural and managed northern ecosystems.

Relevance:

60.00%

Publisher:

Abstract:

The main obstacle to the application of high-quality diamond-like carbon (DLC) coatings has been the lack of adhesion to the substrate as the coating thickness is increased. The aim of this study was to improve the filtered pulsed arc discharge (FPAD) method, with which it is possible to achieve the high DLC coating thicknesses necessary for practical applications. The energy of the carbon ions was measured with an optoelectronic time-of-flight method. An in situ cathode polishing system, used for stabilizing the process yield and the carbon ion energies, is presented; at the same time, the quality of the coatings can be controlled. To optimise the quality of the deposition process, a simple, fast and inexpensive method using silicon wafers as test substrates was developed. This method was used for evaluating the suitability of a simplified arc-discharge set-up for depositing the adhesion layer of DLC coatings. A whole new group of materials discovered by our research group, the diamond-like carbon polymer hybrid (DLC-p-h) coatings, is also presented. The parent polymers used in these novel coatings were polydimethylsiloxane (PDMS) and polytetrafluoroethylene (PTFE). The energy of the plasma ions was found to increase when the anode-cathode distance and the arc voltage were increased. A constant deposition rate for continuous coating runs was obtained with the in situ cathode polishing system. The novel DLC-p-h coatings were found to be water- and oil-repellent and harder than any polymer. The lowest sliding angle ever measured on a solid surface, 0.15 ± 0.03°, was measured on a DLC-PDMS-h coating. In the FPAD system, carbon ions can be accelerated to the high energies (≈ 1 keV) necessary for optimal adhesion (the substrate breaks in the adhesion and quality test) of ultra-thick (up to 200 µm) DLC coatings by increasing the anode-cathode distance and using high voltages (up to 4 kV). Excellent adhesion can also be obtained with the simplified arc-discharge device. To maintain a high process yield (5 µm/h over a surface area of 150 cm²) and to stabilize the carbon ion energies and the high quality (sp³ fraction up to 85%) of the resulting coating, the in situ cathode polishing system must be used. The DLC-PDMS-h coating is the superior candidate for anti-soiling applications where hardness is also required.