Abstract:
Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in establishing a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information on the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
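For orientation, a commonly quoted benchmark (a standard result for a purely quadratic curvaton potential, not a finding specific to this thesis): if the curvaton carries the fraction $r \approx 3\rho_\sigma / (3\rho_\sigma + 4\rho_\gamma)$ of the total energy density at its decay, the non-linearity parameter is approximately $f_{NL} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6}$, so a subdominant, late-decaying curvaton (small $r$) yields large non-Gaussianity; self-interaction terms in the curvaton potential modify this estimate, which is precisely the sensitivity to the form of the potential noted above.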
Abstract:
For efficient fusion energy production, the plasma-facing wall materials of a fusion reactor must allow long-term operation. In the next-step fusion device, ITER, the first wall region facing the highest heat and particle load, i.e. the divertor area, will mainly consist of tiles based on tungsten. During reactor operation, the tungsten material is slowly but inevitably saturated with tritium, the relatively short-lived hydrogen isotope used in the fusion reaction. The amount of tritium retained in the wall materials should be minimized and its recycling back to the plasma must be unrestrained; otherwise it cannot be used for fuelling the plasma. A very expensive, and thus economically unviable, solution is to replace the first wall frequently. A better solution is to heat the walls to temperatures where tritium is released. Unfortunately, the exact mechanisms of hydrogen release in tungsten are not known. In this thesis both experimental and computational methods have been used to study the release and retention of hydrogen in tungsten. The experimental work consists of hydrogen implantations into pure polycrystalline tungsten, the determination of the hydrogen concentrations using ion beam analysis (IBA), and monitoring of the out-diffused hydrogen gas with thermodesorption spectrometry (TDS) as the tungsten samples are heated to elevated temperatures. By combining IBA methods with TDS, both the retained amount of hydrogen and the temperatures needed for hydrogen release are obtained. With computational methods the hydrogen-defect interactions and implantation-induced irradiation damage can be examined at the atomic level. Multiscale modelling combines the results obtained from computational methodologies applicable at different length and time scales. Density functional theory calculations were used to determine the energetics of the elementary processes of hydrogen in tungsten, such as diffusion and trapping at vacancies and surfaces. Results on the energetics of pure tungsten defects were used in the development of a classical bond-order potential describing tungsten defects for use in molecular dynamics simulations. The developed potential was employed to determine defect clustering and annihilation properties. These results were further used in binary collision and rate theory calculations to determine the evolution of large defect clusters that trap hydrogen in the course of implantation. The computational results for the defect and trapped hydrogen concentrations were successfully compared with the experimental results. With this multiscale analysis, the experimental results obtained in this thesis and those found in the literature were explained both quantitatively and qualitatively.
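To illustrate the desorption stage probed by TDS, the minimal sketch below integrates first-order Polanyi-Wigner kinetics for a single trap type during a linear temperature ramp; the attempt frequency and detrapping energy are hypothetical placeholder values rather than parameters determined in the thesis, and the real analysis involves coupled diffusion and several trap types.

import numpy as np

K_B = 8.617e-5          # Boltzmann constant in eV/K
NU = 1e13               # attempt frequency in 1/s (assumed typical value)
E_DETRAP = 1.4          # detrapping energy in eV (hypothetical)
RAMP = 0.5              # heating rate in K/s
T0, T1, DT = 300.0, 1300.0, 0.01   # temperature range and step in K

def tds_spectrum(n0=1.0):
    """Return (temperatures, desorption rates) for a linear ramp, first-order kinetics."""
    temps = np.arange(T0, T1, DT)
    n = n0                       # trapped hydrogen inventory (arbitrary units)
    rates = []
    for T in temps:
        rate = NU * n * np.exp(-E_DETRAP / (K_B * T))   # Polanyi-Wigner, order 1
        n = max(n - rate * (DT / RAMP), 0.0)            # time step = dT / (dT/dt)
        rates.append(rate)
    return temps, np.array(rates)

if __name__ == "__main__":
    T, r = tds_spectrum()
    print("peak desorption temperature ~ %.0f K" % T[np.argmax(r)])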
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances to the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to percent-level precision. This progress has made it possible to build and test models of the Universe that differ in how the cosmos evolved during a fraction of the first second after the Big Bang. -- This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. The currently available data are shown to indicate that future experiments are needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
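As background for the map-making discussion (a standard result rather than a finding of the thesis): for the data model $d = Pm + n$, where $d$ is the time-ordered data, $P$ the pointing matrix, $m$ the sky map and $n$ noise with covariance $N$, the maximum-likelihood map is $\hat{m} = (P^T N^{-1} P)^{-1} P^T N^{-1} d$. Destriping approximates the correlated part of the noise by a set of baseline offsets $Fa$, estimates and subtracts them, and then bins the cleaned data into a map, which is what makes the approach tractable for the large datasets mentioned above.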
Abstract:
Time-dependent backgrounds in string theory provide a natural testing ground for physics concerning dynamical phenomena which cannot be reliably addressed in conventional quantum field theory and cosmology. A good, tractable example is the rolling tachyon background, which describes the decay of an unstable brane in bosonic and supersymmetric Type II string theories. In this thesis I use boundary conformal field theory together with random matrix theory and Coulomb gas thermodynamics techniques to study open and closed string scattering amplitudes off the decaying brane. Even the simplest example, the tree-level amplitude of n open strings, which would give the emission rate of open strings, has remained unknown. I organize the open string scattering computations in a more coherent manner and argue how further progress can be made.
Abstract:
This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of hadronic tau lepton decays, i.e. tau jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, tau-jet identification is the single most important criterion for separating the tiny Higgs boson signal from a large number of background events. Tau-jet identification is studied with methods based on a signature of low charged track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. Tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons. The Higgs boson(s), whose existence has not yet been experimentally verified, are part of the standard model and its most popular extensions. They are a manifestation of the mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe of the microcosm of particles and their interactions at energy scales beyond the standard model of particle physics.
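The following sketch illustrates the cut-based logic behind the tau-jet identification criteria listed above; the cone sizes, thresholds and simplified jet record are hypothetical placeholders for illustration only, not the CMS reconstruction code or the working points used in the thesis.

from dataclasses import dataclass
from typing import List

@dataclass
class Track:
    pt: float        # transverse momentum in GeV
    dr: float        # angular distance to the jet axis

@dataclass
class Jet:
    tracks: List[Track]
    ecal_isolation: float   # electromagnetic energy in the isolation annulus (GeV)
    mass: float             # reconstructed visible mass in GeV

def is_tau_jet(jet: Jet) -> bool:
    """Apply simplified tau-jet criteria: track multiplicity, isolation, mass."""
    signal = [t for t in jet.tracks if t.dr < 0.07 and t.pt > 1.0]
    isolation = [t for t in jet.tracks if 0.07 <= t.dr < 0.4 and t.pt > 1.0]
    return (
        len(signal) in (1, 3)            # low charged-track multiplicity (1 or 3 prongs)
        and not isolation                # no additional tracks in the isolation annulus
        and jet.ecal_isolation < 1.5     # isolated electromagnetic energy deposition
        and jet.mass < 1.8               # visible mass below roughly the tau mass
    )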
Abstract:
The electroweak theory is the part of the standard model of particle physics that describes the weak and electromagnetic interactions between elementary particles. Since its formulation almost 40 years ago, it has been experimentally verified to high accuracy, and today it stands as one of the cornerstones of particle physics. The thermodynamics of electroweak physics has been studied ever since the theory was written down, and the features the theory exhibits under extreme conditions remain an interesting research topic even today. In this thesis, we consider some aspects of electroweak thermodynamics. Specifically, we compute the pressure of the standard model to high precision and study the structure of the electroweak phase diagram when finite chemical potentials for all the conserved particle numbers in the theory are introduced. In the first part of the thesis, the theory, methods and essential results of the computations are introduced. The original research publications are reprinted at the end.
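For orientation (a textbook definition rather than a result of the thesis): the quantity computed here is the grand canonical pressure, $p(T,\{\mu_i\}) = \lim_{V \to \infty} \frac{T}{V} \ln \mathcal{Z}(T,V,\{\mu_i\})$, with one chemical potential $\mu_i$ assigned to each conserved charge of the theory; setting all $\mu_i = 0$ recovers the pressure at vanishing net particle numbers.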
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interactions between quarks and gluons. At low temperatures, quarks are confined, forming hadrons such as protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of about 2 T_c.
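For reference, the standard definition (not specific to this thesis): the quark number susceptibilities are second derivatives of the pressure with respect to the quark chemical potentials, $\chi_{ij}(T) = \frac{\partial^2 p}{\partial \mu_i \partial \mu_j}$, typically evaluated at $\mu = 0$; the diagonal components have $i = j$ and the off-diagonal ones $i \neq j$, which the abstract above refers to as the singlet and non-singlet susceptibilities.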
Abstract:
As an extension to an activity introducing Year 5 students to the practice of statistics, the software TinkerPlots made it possible to collect repeated random samples from a finite population to informally explore students' capacity to begin reasoning with a distribution of sample statistics. This article provides background for the sampling process and reports on the success of students in making predictions for the population from the collection of simulated samples and in explaining their strategies. The activity provided an application of the numeracy skill of using percentages as the numerical summary of the data, rather than graphing the data, in the analysis of samples to make decisions on a statistical question. About 70% of students made what were considered at least moderately good predictions of the population percentages for five yes-no questions, and the correlation between predictions and explanations was 0.78.
Abstract:
This paper presents an algorithm for solid model reconstruction from 2D sectional views based on a volume-based approach. None of the existing work in automatic reconstruction from 2D orthographic views has addressed sectional views in detail. It is believed that the volume-based approach is better suited to handle different types of sectional views. The volume-based approach constructs the 3D solid by a boolean combination of elementary solids, which are formed by sweep operations on loops identified in the input views. The only adjustment to be made for the presence of sectional views is in the identification of loops that would form the elementary solids. In the algorithm, the conventions of engineering drawing for sectional views are used to identify the loops correctly. The algorithm is simple and intuitive in nature. Results have been obtained for full sections, offset sections and half sections. Future work will address other types of sectional views such as removed, revolved and broken-out sections.
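As a minimal, self-contained illustration of the sweep step only (the loop identification and boolean stages of the actual algorithm are not reproduced here, and the function below is a hypothetical helper, not code from the paper):

from typing import List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

def sweep_loop(loop: List[Point2D], depth: float) -> List[Point3D]:
    """Extrude a planar loop along the view normal (taken here as the z axis)
    to obtain the vertex set of the corresponding elementary prism."""
    bottom = [(x, y, 0.0) for x, y in loop]
    top = [(x, y, depth) for x, y in loop]
    return bottom + top

# Example: a rectangular loop from one view swept to a depth of 3 units.
rectangle = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
print(sweep_loop(rectangle, depth=3.0))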
Abstract:
Evaluation practices have pervaded Finnish society and the welfare state. At the same time the term effectiveness has become a powerful organising concept in welfare state activities. The aim of the study is to analyse how the outcome-oriented society came into being through historical processes, to answer the question of how social policy and welfare state practices were brought under the governance of the concept of effectiveness. Discussions about social imagination, Michel Foucault's conceptions of the history of the present and of governmentality, genealogy and archaeology, along with Ian Hacking's notions of dynamic nominalism and styles of reasoning, are used as the conceptual and methodological starting points for the study. In addition, Luc Boltanski's and Laurent Thévenot's ideas of orders of worth, regimes of evaluation in everyday life, are employed. Usually, evaluation is conceptualised as an autonomous epistemic culture and practice (evaluation as epistemic practice), but evaluation is here understood as knowledge-creation processes elementary to different epistemic practices (evaluation in epistemic practices). The emergence of epistemic cultures and styles of reasoning about the effectiveness or impacts of welfare state activities is analysed through Finnish social policy and social work research. The study uses case studies which represent debates and empirical research dealing with the effectiveness and quality of social services and social work. While uncertainty and doubts over the effects and consequences of welfare policies have always been present in discourses about social policy, the theme has not been acknowledged much in social policy research. To resolve these uncertainties, eight styles of reasoning about such effects have emerged over time: the statistical, goal-based, needs-based, experimental, interaction-based, performance measurement, auditing and evidence-based styles of reasoning. Social policy research has contributed in various ways to the creation of these epistemic practices. The transformation of the welfare state, starting at the end of the 1980s, increased market-orientation, trimmed public welfare responsibilities and led to the adoption of the New Public Management (NPM) style of leadership. Due to these developments the concept of effectiveness made a breakthrough, and new accountabilities, with their knowledge tools for performance measurement, auditing and evidence-based styles of reasoning, became more dominant in the governance of the welfare state. The social sciences and evaluation have developed a heteronomous relation with each other, although divergent tendencies still remain between them. Key words: evaluation, effectiveness, social policy, welfare state, public services, sociology of knowledge
Abstract:
Models of Maximal Flavor Violation (MxFV) in elementary particle physics may contain at least one new scalar SU(2) doublet field $\Phi_{FV} = (\eta^0,\eta^+)$ that couples the first and third generation quarks ($q_1,q_3$) via a Lagrangian term $\mathcal{L}_{FV} = \xi_{13} \Phi_{FV} q_1 q_3$. These models have a distinctive signature of same-charge top-quark pairs and evade flavor-changing limits from meson mixing measurements. Data corresponding to 2 fb$^{-1}$ collected by the CDF II detector in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV are analyzed for evidence of the MxFV signature. For a neutral scalar $\eta^0$ with $m_{\eta^0} = 200$ GeV/$c^2$ and coupling $\xi_{13}=1$, $\sim$ 11 signal events are expected over a background of $2.1 \pm 1.8$ events. Three events are observed in the data, consistent with background expectations, and limits are set on the coupling $\xi_{13}$ for $m_{\eta^0} = 180-300$ GeV/$c^2$.
Abstract:
In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb$^{-1}$ of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of $31.0^{+4.1}_{-3.5}$ events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of $11.2^{+2.3}_{-2.1}$. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top quark production, we estimate the ttbar+photon cross section to be $0.15 \pm 0.08$ pb.
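For context, the quoted cross section follows from the generic counting-experiment relation (the collaboration's detailed procedure, acceptance and uncertainty treatment are not reproduced here): $\sigma = (N_{\mathrm{obs}} - N_{\mathrm{bkg}}) / (\epsilon \int L\, dt)$, where $N_{\mathrm{obs}}$ and $N_{\mathrm{bkg}}$ are the observed and predicted non-top event counts, $\epsilon$ is the signal acceptance times efficiency (not quoted above), and $\int L\, dt$ is the integrated luminosity (1.9 fb$^{-1}$ here).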
Abstract:
Chlamydia pneumoniae can cause acute respiratory infections including pneumonia. Repeated and persistent Chlamydia infections occur, and persistent C. pneumoniae infection may have a role in the pathogenesis of atherosclerosis and coronary heart disease and may also contribute to the development of chronic inflammatory lung diseases such as chronic obstructive pulmonary disease (COPD) and asthma. In this thesis, in vitro models for persistent C. pneumoniae infection were established in epithelial and monocyte/macrophage cell lines. Expression of host cell genes in the persistent C. pneumoniae infection model of epithelial cells was studied by microarray and RT-PCR. In the monocyte/macrophage infection model the expression of selected C. pneumoniae genes was studied by RT-PCR and immunofluorescence microscopy. Chlamydia is able to modulate host cell gene expression and apoptosis of host cells, which may help Chlamydia evade the host cells' immune responses. This, in turn, may lead to extended survival of the organism inside epithelial cells and promote the development of persistent infection. To simulate persistent C. pneumoniae infection in vivo, we set up a persistent infection model by exposing HL cell cultures to IFN-gamma. When HL cell cultures were treated with a moderate concentration of IFN-gamma, the replication of C. pneumoniae DNA was unaffected while differentiation into infectious elementary bodies (EB) was strongly inhibited. By transmission electron microscopy, small atypical inclusions were identified in IFN-gamma treated cultures. No second cycle of infection was observed in cells exposed to IFN-gamma, whereas C. pneumoniae was able to undergo a second cycle of infection in unexposed HL cells. Although monocytic cells can naturally restrict chlamydial growth, IFN-gamma further reduced the production of infectious C. pneumoniae in Mono Mac 6 cells. Under both studied conditions no second cycle of infection could be detected in the monocytic cell line, suggesting persistent infection in these cells. As a step toward understanding the role of host genes in the development and pathogenesis of persistent C. pneumoniae infection, the modulation of host cell gene expression during IFN-gamma induced persistent infection was examined and compared to that seen during active C. pneumoniae infection or IFN-gamma treatment. Total RNA was collected at 6 to 150 h after infection of an epithelial cell line (HL) and analyzed by a cDNA array (available at that time) representing approximately 4000 human transcripts. In the initial analysis, 250 of the 4000 genes were identified as differentially expressed upon active and persistent chlamydial infection and IFN-gamma treatment. More potent up-regulation of many genes was observed in IFN-gamma induced persistent infection than in active infection or in IFN-gamma treated cell cultures, and sustained up-regulation was observed for some genes. In addition, we could identify nine host cell genes whose transcription was specifically altered during the IFN-gamma induced persistent C. pneumoniae infection. The strongest up-regulation in persistent infection relative to controls was identified for insulin-like growth factor binding protein 6, interferon-stimulated protein 15 kDa, cyclin D1 and the interleukin 7 receptor. These results suggest that during persistent infection, C. pneumoniae reprograms the host transcriptional machinery regulating a variety of cellular processes including adhesion, cell cycle regulation, growth and inflammatory response, all of which may play important roles in the pathogenesis of persistent C. pneumoniae infection. C. pneumoniae DNA can be detected in peripheral blood mononuclear cells, indicating that the bacterium can also infect monocytic cells in vivo and that monocytes can thereby assist the spread of infection from the lungs to other anatomical sites. Persistent infection established at these sites could promote inflammation and enhance pathology. Thus, the mononuclear cells are in a strategic position in the development of persistent infection. To investigate the intracellular replication and fate of C. pneumoniae in mononuclear cells, we analyzed the transcription of 11 C. pneumoniae genes in Mono Mac 6 cells during infection by real-time RT-PCR. Our results suggest that the transcriptional profile of the studied genes in monocytes is different from that seen in epithelial cells and that IFN-gamma has a less significant effect on C. pneumoniae transcription in monocytes. Furthermore, our study shows that type III secretion system (T3SS) related genes are transcribed and that Chlamydia possesses a functional T3SS during infection in monocytes. Since C. pneumoniae infection in monocytes has been associated with reduced antibiotic susceptibility, this creates opportunities for novel therapeutics targeting the T3SS in the management of chlamydial infection in monocytes.
Abstract:
According to Wen's theory, a universal behavior of the fractional quantum Hall edge is expected at sufficiently low energies, where the dispersion of the elementary edge excitation is linear. A microscopic calculation shows that the actual dispersion is indeed linear at low energies, but deviates from linearity beyond a certain energy and also exhibits an "edge roton minimum." We determine the edge exponent from a microscopic approach, and find that the nonlinearity of the dispersion makes a surprisingly small correction to the edge exponent even at energies higher than the roton energy. We explain this insensitivity as arising from the fact that the energy at maximum spectral weight continues to show an almost linear behavior up to fairly high energies. We also study, in an effective-field theory, how interactions modify the exponent for a reconstructed edge with multiple edge modes. Relevance to experiment is discussed.
Abstract:
The purpose of this Master's thesis is, on the one hand, to find out how CLIL (Content and Language Integrated Learning) teachers and English teachers perceive English and its use in teaching, and on the other hand, what they consider important in the subject teacher education in English that is being planned and piloted in the STEP Project at the University of Helsinki Department of Teacher Education. A further research question is what kind of language requirements the teachers think CLIL teachers should meet. The research results are viewed in light of previous research and literature on CLIL education. Six teachers participate in this study. Two of them are English teachers in the comprehensive school, two are class teachers in bilingual elementary education, and two are subject teachers in bilingual education, one of whom teaches in a lower secondary school and the other in an upper secondary school. One English teacher and one bilingual class teacher have graduated from a pilot class teacher program in English that started at the University of Helsinki in the mid-1990s. The bilingual subject teachers are not trained in English but have learned English elsewhere, which is a particular focus of interest in this study because it is expected that a great number of CLIL teachers in Finland do not have actual studies in English philology. The research method is the interview, and this is a qualitative case study. The interviews were recorded and transcribed for ease of analysis. The English teachers do not always use English in their lessons and would not feel confident teaching another subject completely in English. All of the CLIL teachers trust their English skills in teaching, but the bilingual class teachers also use Finnish during lessons, either because some teaching material is in Finnish, or because they feel that rules and instructions are understood better in the mother tongue, or because the students' English skills are not strong enough. One of the bilingual subject teachers is the only one who consciously uses only English in teaching and in discussions with students. Although teachers' good English skills are generally considered important, only the teachers who have graduated from the class teacher education in English consider it important that CLIL teachers have studies in English philology. Regarding the subject teacher education program in English, the respondents hope that its teachers will have strong enough English skills and that it will deliver what it promises. Having student teachers of different subjects study together is considered beneficial. The results of the study show that acquiring teaching material in English continues to be the teachers' own responsibility and a huge burden for them, and there has, in fact, not been much progress in the matter since the beginning of CLIL education. The bilingual subject teachers think, however, that using one's own material can give new inspiration to teaching and enable the use of various pedagogical methods. Although it is questionable whether the language competence requirements set for CLIL teachers by the Finnish Ministry of Education are adhered to, it becomes apparent in the study that studies in English philology do not necessarily guarantee strong enough language skills for CLIL teaching; rather, the teachers' own personality and self-confidence also have significance. Keywords: CLIL, bilingual education, English, subject teacher training, subject teacher education in English, STEP