Abstract:
This thesis examined the modelling of snow water equivalent and of the surface albedo of snow-covered areas in ECHAM5, an atmospheric general circulation model, and in ERA-40, the reanalysis system of the European Centre for Medium-Range Weather Forecasts. The aim was to determine the differences between the two and how well ECHAM5 describes the snow conditions of the present climate. As an example, the snow modelling approach of RCA3, the regional climate model of the Rossby Centre, was also examined. The forcing used in the ECHAM5 simulations was the observed distribution of sea surface temperature and sea ice. The ECHAM5 and ERA-40 datasets were compared over Northern Eurasia for the period 1986-1990. The ERA-40 snow water equivalents were additionally compared with the observational dataset of the INTAS-SCCONE project. According to the results, the snow water equivalent of ECHAM5 was smaller than that of ERA-40 in many areas. The differences were largest in the regions of maximum snow water equivalent in central Eurasia. Interannual variability was also smaller in ECHAM5 than in ERA-40. Especially during the last years of the study period, 1989 and 1990, the snow water equivalent over Northern Europe reached very low values according to ERA-40, which are explained by high values of the NAO index. The strength of the NAO phenomenon at the end of the 1980s is not, however, visible in the ECHAM5 snow water equivalent. The ERA-40 snow analysis incorporates snow depth observations, which is the largest factor causing differences between the results. It is also possible that the forcing used in the ECHAM5 simulations is not strong enough to produce a fully realistic snow water equivalent distribution. There were no major differences between ERA-40 and the INTAS-SCCONE data. The snow albedo used as part of the surface albedo of snow-covered areas is a prognostic variable in ERA-40, whereas in ECHAM5 it is parameterized. According to the results, surface albedo values in ECHAM5 are widely higher than in ERA-40. The differences arise from the different methods of calculating the albedos and from the different vegetation distributions of the models. ECHAM5 underestimates the albedo-reducing effect of vegetation, especially in the boreal forest zone. The ERA-40 surface albedo is therefore likely more realistic than that of ECHAM5.
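In essence, the comparison reduces to differencing two gridded snow water equivalent (SWE) fields and their interannual statistics. A minimal sketch of such a comparison is given below; the random fields stand in for the actual ECHAM5 and ERA-40 data, and all names and grid dimensions are hypothetical.

import numpy as np

# Hypothetical gridded SWE fields (mm) with shape (years, lat, lon):
# 5 years (1986-1990) over a coarse Northern Eurasian grid.
echam5_swe = np.random.default_rng(0).gamma(2.0, 30.0, size=(5, 60, 120))
era40_swe  = np.random.default_rng(1).gamma(2.0, 35.0, size=(5, 60, 120))

# Mean bias of ECHAM5 relative to ERA-40 at each grid point.
bias = echam5_swe.mean(axis=0) - era40_swe.mean(axis=0)

# Interannual variability: standard deviation across the years.
echam5_iav = echam5_swe.std(axis=0)
era40_iav  = era40_swe.std(axis=0)

print("Domain-mean SWE bias (ECHAM5 - ERA-40): %.1f mm" % bias.mean())
print("Domain-mean interannual std: ECHAM5 %.1f mm, ERA-40 %.1f mm"
      % (echam5_iav.mean(), era40_iav.mean()))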
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (the European Organization for Nuclear Research) whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to map possible problems. Components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results are examined in the light of the known error sources that appeared during the project. Once the problems arising during the learning curve of the project had been solved, material problems, such as defective chip cables and sensors, caused most of the assembly rejections. These problems typically showed up in the tests as too many individual channel failures. Bonding failures, by contrast, rarely caused the rejection of a component. One sensor type among the three sensor manufacturers proved to be of lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages usually lie outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
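As a rough back-of-the-envelope illustration only (not a figure quoted in the thesis): if these stage yields compounded independently, the cumulative yield from framed chip through hybrid to finished module would be

$$0.908 \times 0.961 \times 0.862 \approx 0.75,$$

i.e. about three quarters of the starting material surviving all stages. In practice rejected components are screened out between stages, so the per-stage figures are the more meaningful quantities.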
Abstract:
The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in these studies. One method for studying the intra- and intermolecular bonding in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In Compton scattering a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of a photon scattering with a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. To obtain the electronic wave functions needed to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this with ab initio molecular dynamics is beyond our computational capabilities, and we therefore use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics as input for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen between the Compton profiles of different mixtures. This prediction needs to be tested in experiments to find out whether the approximations made are valid.
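For orientation, within the standard impulse approximation the Compton profile is the electron momentum density $\rho(\mathbf{p})$ integrated over the momentum components perpendicular to the scattering vector,

$$J(p_z) = \iint \rho(\mathbf{p})\,\mathrm{d}p_x\,\mathrm{d}p_y,$$

and a difference Compton profile of a mixture is formed by subtracting the proportion-weighted profiles of the pure components, $\Delta J(p_z) = J_{\mathrm{mix}}(p_z) - \sum_i x_i J_i(p_z)$. This is the textbook definition, stated here for orientation; the thesis may use different normalization conventions.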
Abstract:
One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce such an asymmetry? One theory proposed to explain this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the Standard Model. This addition introduces explicit lepton number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If the decays are CP-violating, they produce a net lepton number, which is then partially converted into baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce the Sakharov conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillations and explain why they require the existence of neutrino mass. We introduce the different kinds of mass terms that can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed neutrino mass scales, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations to them. Finally, we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions involving both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant to the theory.
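A commonly used form of these Boltzmann equations (quoted here in standard notation for orientation; conventions vary) evolves the abundance $N_{N_1}$ of the lightest heavy neutrino and the $B-L$ asymmetry $N_{B-L}$ in the variable $z = M_1/T$:

$$\frac{\mathrm{d}N_{N_1}}{\mathrm{d}z} = -(D + S)\,\bigl(N_{N_1} - N_{N_1}^{\mathrm{eq}}\bigr), \qquad \frac{\mathrm{d}N_{B-L}}{\mathrm{d}z} = -\varepsilon_1 D\,\bigl(N_{N_1} - N_{N_1}^{\mathrm{eq}}\bigr) - W\,N_{B-L},$$

where $D$, $S$ and $W$ are decay, scattering and washout rates and $\varepsilon_1$ is the CP asymmetry in $N_1$ decays; sphaleron processes finally convert a fraction of $N_{B-L}$ into the baryon asymmetry.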
Abstract:
The aim of this study is to investigate the composition of the crust in Finland using seismic wide-angle velocity models and laboratory measurements of the P- and S-wave velocities of different rock types. The velocities adopted from the wide-angle velocity models were compared with laboratory velocities of different rock types corrected for the crustal PT conditions of the study area. The wide-angle velocity models indicate that the P-wave velocity does not only increase step-wise at the boundaries of the major crustal layers; there is also a gradual increase of velocity within the layers. On the other hand, the laboratory measurements indicate that no single rock type can produce these gradual downward-increasing trends. Thus, there must be gradual vertical changes in rock composition. The downward increase of velocities indicates that the composition of the crust becomes gradually more mafic with increasing depth. Even though single rock types cannot reproduce the wide-angle model velocities, a mixture of rock types can. A large number of rock-type mixtures give the correct P-wave velocities; the inversion of velocities for rock types and their proportions is therefore a non-unique problem if only P-wave velocities are available. The number of possible rock-type mixtures can be limited using S-wave velocities, reflection seismic results and other geological and geophysical results from the study area. The crustal model FINMIX-2 presented in this study suggests that the crustal velocity profiles can be simulated with rock-type mixtures in which the upper crust consists of felsic gneisses and granitic-granodioritic rocks with a minor contribution of quartzite, amphibolite and diabase. In the middle crust the amphibolite proportion increases. The lower crust consists of tonalitic gneiss, mafic garnet granulite, hornblendite, pyroxenite and minor mafic eclogite. This composition model is in agreement with deep-crustal kimberlite-hosted xenolith data from eastern Finland and with the reflectivity of FIRE (the Finnish Reflection Experiment). According to the FINMIX-2 model, the Moho is deeper and the crustal composition more mafic than an average global continental model would suggest. Composition models of southern Finland are quite similar to the FINMIX-2 model, although there are minor differences between the models, indicating regional differences in composition. Models of northern Finland show that the crustal thickness is smaller than in southern Finland and that the composition of the upper crust is different. Density profiles calculated from the lithological models suggest that there is practically no density contrast at the Moho in areas of high-velocity lower crust. This implies that the crustal thickness in the central Fennoscandian Shield may have been controlled by the densities of the lower crustal and upper mantle rocks.
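The forward step of this kind of modelling, computing the aggregate P-wave velocity of a rock-type mixture from laboratory component velocities, can be sketched as follows. The rock types, velocities and proportions below are illustrative placeholders rather than FINMIX-2 values, and the time-average (harmonic) mixing rule used here is just one common choice.

# Sketch: P-wave velocity of a rock mixture from component velocities.
# All values are illustrative placeholders, not FINMIX-2 parameters.

rock_vp = {                      # laboratory P-wave velocities, km/s (assumed)
    "felsic gneiss": 6.0,
    "granite-granodiorite": 6.1,
    "amphibolite": 6.7,
    "mafic garnet granulite": 7.2,
}

def mixture_vp(proportions, velocities=rock_vp):
    """Time-average (harmonic) mean: total travel time through the mixture
    is the proportion-weighted sum of the component slownesses."""
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    slowness = sum(frac / velocities[rock] for rock, frac in proportions.items())
    return 1.0 / slowness

# Example: a hypothetical middle-crust mixture.
mix = {"felsic gneiss": 0.4, "granite-granodiorite": 0.3, "amphibolite": 0.3}
print("Mixture Vp: %.2f km/s" % mixture_vp(mix))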
Abstract:
In the present work the methods of relativistic quantum chemistry have been applied to a number of small systems containing heavy elements, for which relativistic effects are important. First, a thorough introduction to the methods used is given. This includes some of the general methods of computational chemistry and a special section on how to include the effects of relativity in quantum chemical calculations. The results obtained are then presented. Investigations on high-valent mercury compounds are presented, and new ways to synthesise such compounds are proposed. The methods described were applied to certain systems containing short Pt-Tl contacts, and it was possible to explain the interesting bonding situation in these compounds. One of the most common actinide compounds, uranium hexafluoride, was investigated and a new picture of its bonding was presented; furthermore, the rarity of uranium-cyanide compounds was discussed. In a foray into the chemistry of gold, well known for its strong relativistic effects, investigations on different gold systems were performed. Analogies between Au$^+$ and platinum on the one hand, and oxygen on the other, were found. New systems with multiple bonds to gold were proposed to experimentalists; one of the proposed systems was spectroscopically observed shortly afterwards. A very interesting molecule, theoretically predicted a few years ago, is WAu$_{12}$; some of its properties were calculated and its bonding situation discussed. In a further study on gold compounds it was possible to explain the substitution pattern in bis[phosphane-gold(I)] thiocyanate complexes. This is of some help to experimentalists, as the systems could not be crystallised and their structure was therefore unknown. Finally, computations on one of the heaviest elements in the periodic table were performed: calculations on compounds containing element 110, darmstadtium, showed that it behaves similarly to its lighter homologue platinum. The extreme importance of relativistic effects for these systems was also demonstrated.
Abstract:
Individual movement is ubiquitous and highly varied in ecology. In this thesis, I investigate two kinds of movement, body-condition-dependent dispersal and small-range foraging movements resulting in quasi-local competition, and their causes and consequences on the individual, population and metapopulation levels. Body-condition-dependent dispersal is a widely documented but poorly understood phenomenon, and diverse relationships between body condition and dispersal are observed in nature. I develop the first models that study the evolution of dispersal strategies depending on individual body condition. In a patchy environment where patches differ in environmental conditions, individuals born in rich (e.g. nutritious) patches are on average stronger than conspecifics born in poorer patches. Body condition (strength) determines competitive ability, such that stronger individuals win competition with higher probability than weak individuals. Individuals compete for patches, so that kin competition selects for dispersal. I determine the evolutionarily stable strategy (ESS) for different ecological scenarios; a toy simulation of this setup is sketched below. My models offer explanations for both the dispersal of strong individuals and the dispersal of weak individuals. Moreover, I find that within-family dispersal behaviour is not always reflected on the population level. This is consistent with the absence of any consistent pattern in data on body-condition-dependent dispersal, and it encourages the refining of empirical investigations. Quasi-local competition refers to interactions between adjacent populations in which one population negatively affects the growth of the other. I model a metapopulation in a homogeneous environment where adults of different subpopulations compete for resources by spending part of their foraging time in neighbouring patches, while their juveniles feed only on the resource in their natal patch. I show that spatial patterns (different population densities in the patches) are stable only if one age class strongly depletes the resource while mainly the other age class depends on it.
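The toy individual-based sketch referred to above follows. It is not the analytical ESS model of the thesis: the patch numbers, fecundity, disperser survival and competition probabilities are all invented for illustration. It only shows how condition-dependent dispersal can be iterated forward: patch quality sets offspring condition, condition sets the dispersal probability, and strength biases patch competition.

import random

N_PATCHES = 200
RICH_FRACTION = 0.5        # fraction of rich patches (assumed)
DISPERSAL_SURVIVAL = 0.7   # survival probability of dispersers (assumed)
FECUNDITY = 4              # offspring per parent (assumed)
STRONG_WIN_PROB = 0.8      # chance a strong competitor beats the field (assumed)

def run_generation(occupants, strategy):
    """occupants[i] is 'strong', 'weak' or None for patch i; strategy maps
    an offspring's condition to its dispersal probability."""
    competitors = [[] for _ in range(N_PATCHES)]
    for i, parent in enumerate(occupants):
        if parent is None:
            continue
        # Offspring condition is set by natal patch quality.
        cond = "strong" if i < int(RICH_FRACTION * N_PATCHES) else "weak"
        for _ in range(FECUNDITY):
            if random.random() < strategy[cond]:
                if random.random() < DISPERSAL_SURVIVAL:
                    competitors[random.randrange(N_PATCHES)].append(cond)
            else:
                competitors[i].append(cond)
    # Strength biases competition for each patch.
    new_occupants = []
    for group in competitors:
        if not group:
            new_occupants.append(None)
        elif "strong" in group and random.random() < STRONG_WIN_PROB:
            new_occupants.append("strong")
        else:
            new_occupants.append(random.choice(group))
    return new_occupants

random.seed(1)
population = ["strong" if i < 100 else "weak" for i in range(N_PATCHES)]
strategy = {"strong": 0.2, "weak": 0.6}  # hypothetical dispersal strategy
for _ in range(50):
    population = run_generation(population, strategy)
print("occupied patches:", sum(p is not None for p in population))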
Abstract:
Aim: To characterize the inhibition of platelet function by paracetamol in vivo and in vitro, and to evaluate the possible interaction of paracetamol with diclofenac or valdecoxib in vivo. To assess the analgesic effect of the drugs in an experimental pain model. Methods: Healthy volunteers received increasing doses of intravenous paracetamol (15, 22.5 and 30 mg/kg), or the combination of paracetamol 1 g with diclofenac 1.1 mg/kg or valdecoxib 40 mg (as the pro-drug parecoxib). Inhibition of platelet function was assessed with photometric aggregometry, the platelet function analyzer (PFA-100) and the release of thromboxane B2 (TxB2). Analgesia was assessed with the cold pressor test. The inhibition coefficient of platelet aggregation by paracetamol was determined, as was the nature of the interaction between paracetamol and diclofenac, by an isobolographic analysis in vitro. Results: Paracetamol inhibited platelet aggregation and TxB2 release dose-dependently in volunteers and concentration-dependently in vitro. The inhibition coefficient was 15.2 mg/L (95% CI 11.8 - 18.6). Paracetamol augmented the platelet inhibition by diclofenac in vivo, and the isobole showed that this interaction is synergistic. Paracetamol showed no interaction with valdecoxib. The PFA-100 appeared insensitive in detecting platelet dysfunction caused by paracetamol, and the cold pressor test showed no analgesia. Conclusions: Paracetamol inhibits platelet function in vivo and acts synergistically when combined with diclofenac. This effect may increase the risk of bleeding in surgical patients with an impaired haemostatic system. The combination of paracetamol and valdecoxib may be useful in patients at low risk of thromboembolism. The PFA-100 seems unsuitable for detecting platelet dysfunction, and the cold pressor test unsuitable for detecting analgesia, caused by paracetamol.
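The isobolographic criterion behind this classification can be stated compactly (the standard Loewe additivity condition, given here for orientation): if doses $d_1$ and $d_2$ in combination produce the same effect as dose $D_1$ of drug 1 alone or dose $D_2$ of drug 2 alone, additivity corresponds to

$$\frac{d_1}{D_1} + \frac{d_2}{D_2} = 1,$$

and combinations reaching the effect with $d_1/D_1 + d_2/D_2 < 1$ fall below the line of additivity, indicating synergy, as found here for paracetamol with diclofenac.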
Abstract:
The continuous production of blood cells, a process termed hematopoiesis, is sustained throughout the lifetime of an individual by a relatively small population of cells known as hematopoietic stem cells (HSCs). HSCs are unique cells characterized by their ability to self-renew and give rise to all types of mature blood cells. Given their high proliferative potential, HSCs need to be tightly regulated on the cellular and molecular levels or could otherwise turn malignant. On the other hand, the tight regulatory control of HSC function also translates into difficulties in culturing and expanding HSCs in vitro. In fact, it is currently not possible to maintain or expand HSCs ex vivo without rapid loss of self-renewal. Increased knowledge of the unique features of important HSC niches and of the key transcriptional regulatory programs that govern HSC behavior is thus needed. Additional insight into the mechanisms of stem cell formation could enable us to recapitulate the processes of HSC formation and self-renewal/expansion ex vivo, with the ultimate goal of creating an unlimited supply of HSCs from e.g. human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPS) to be used in therapy. We thus asked: How are hematopoietic stem cells formed and in what cellular niches does this happen (Papers I, II)? What are the molecular mechanisms that govern hematopoietic stem cell development and differentiation (Papers III, IV)? Importantly, we could show that the placenta is a major fetal hematopoietic niche that harbors a large number of HSCs during midgestation (Paper I) (Gekas et al., 2005). In order to address whether the HSCs found in the placenta were formed there, we utilized the Runx1-LacZ knock-in and Ncx1 knockout mouse models (Paper II). Importantly, we could show that HSCs emerge de novo in the placental vasculature in the absence of circulation (Rhodes et al., 2008). Furthermore, we could identify defined microenvironmental niches within the placenta with distinct roles in hematopoiesis: the large vessels of the chorioallantoic mesenchyme serve as sites of HSC generation, whereas the placental labyrinth is a niche supporting HSC expansion (Rhodes et al., 2008). Overall, these studies illustrate the importance of distinct milieus in the emergence and subsequent maturation of HSCs. To ensure proper function of HSCs, several regulatory mechanisms are in place. The microenvironment in which HSCs reside provides soluble factors and cell-cell interactions. In the cell nucleus, these cell-extrinsic cues are interpreted in the context of cell-intrinsic developmental programs, which are governed by transcription factors. An essential transcription factor for the initiation of hematopoiesis is Scl/Tal1 (stem cell leukemia gene/T-cell acute leukemia gene 1). Loss of Scl results in early embryonic death and a total lack of all blood cells, yet deactivation of Scl in the adult does not affect HSC function (Mikkola et al., 2003b). In order to define the temporal window of Scl requirement during fetal hematopoietic development, we deactivated Scl in all hematopoietic lineages shortly after hematopoietic specification in the embryo. Interestingly, the maturation, expansion and function of fetal HSCs were unaffected, and, as in the adult, red blood cell and platelet differentiation was impaired (Paper III) (Schlaeger et al., 2005). These findings highlight that, once specified, the hematopoietic fate is stable even in the absence of Scl and is maintained through mechanisms distinct from those required for the initial fate choice. As the critical downstream targets of Scl remain unknown, we sought to identify and characterize target genes of Scl (Paper IV). We identified the transcription factor Mef2C (myocyte enhancer factor 2C) as a novel direct target gene of Scl specifically in the megakaryocyte lineage, which largely explains the megakaryocyte defect observed in Scl-deficient mice. In addition, we observed an Scl-independent requirement for Mef2C in the B-cell compartment, as loss of Mef2C leads to accelerated B-cell aging (Gekas et al., submitted). Taken together, these studies identify key extracellular microenvironments and intracellular transcriptional regulators that dictate different stages of HSC development, from emergence to lineage choice to aging.
Abstract:
The structure and operation of CdTe, CdZnTe and Si pixel detectors based on crystalline semiconductors, bump bonding and CMOS technology, developed mainly at Oy Simage Ltd and Oy Ajat Ltd, Finland, for X-ray and gamma-ray imaging are presented. This detector technology evolved from the development of Si strip detectors at the Finnish Research Institute for High Energy Physics (SEFT), which later merged with other physics research units to form the Helsinki Institute of Physics (HIP). General issues of X-ray imaging are discussed, such as the benefits of the direct conversion of X-rays to signal charge in comparison to the indirect method, and the pros and cons of photon counting versus charge integration. A novel design of Si and CdTe pixel detectors and the analysis of their imaging performance in terms of SNR, MTF, DQE and dynamic range are presented in detail. The analysis shows that directly converting crystalline semiconductor pixel detectors operated in the charge integration mode can be used in X-ray imaging very close to the theoretical performance limits in terms of efficiency and resolution. Examples of the application of the developed imaging technology to dental intraoral, panoramic and real-time X-ray imaging are given. A CdTe photon-counting gamma imager is introduced. A physical model for calculating the photopeak efficiency of photon-counting CdTe pixel detectors is developed and described in detail. Simulation results indicate that the charge-sharing phenomenon, due to diffusion of the signal charge carriers, limits the pixel size of photon-counting detectors to about 250 μm (a rough estimate of the underlying diffusion spread is sketched below). Radiation hardness issues related to gamma-ray and X-ray imaging detectors are discussed.
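The scale of this charge-sharing effect can be estimated from carrier diffusion during drift. With the Einstein relation $D = \mu kT/e$ and a drift time $t \approx L^2/(\mu V)$ through a planar detector of thickness $L$ at bias $V$, the lateral spread is $\sigma = \sqrt{2Dt} \approx L\sqrt{2kT/(eV)}$, independent of carrier mobility. A minimal numerical sketch follows; the thickness and bias values are assumptions for illustration, not parameters taken from the thesis.

import math

KT_OVER_E = 0.0259  # thermal voltage at room temperature, volts

def lateral_sigma_um(thickness_um, bias_v):
    """Lateral diffusion spread of drifting charge in a planar detector:
    sigma = L * sqrt(2 kT / (e V)); the carrier mobility cancels out."""
    return thickness_um * math.sqrt(2.0 * KT_OVER_E / bias_v)

# Assumed example: a 750 um thick CdTe sensor at several bias voltages.
for bias in (50.0, 100.0, 400.0):
    print("bias %4.0f V -> sigma %.1f um" % (bias, lateral_sigma_um(750.0, bias)))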