42 results for 240302 Nuclear and Particle Physics
Abstract:
This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of the hadronic tau lepton decays, i.e. tau-jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, the tau-jet identification is the single most important selection criterion for separating the tiny Higgs boson signal from the large number of background events. The tau-jet identification is studied with methods based on a signature of low charged-track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons, and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from W->taunu decays. Since many of these identification methods rely on the reconstruction of charged particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons. The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions. They are a manifestation of a mechanism which breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe of the properties of the microcosm of particles and their interactions at energy scales beyond the standard model of particle physics.
Abstract:
Acceleration of the universe has been established but not explained. During the past few years, precise cosmological experiments have confirmed the standard big bang scenario of a flat universe undergoing an inflationary expansion in its earliest stages, during which the perturbations are generated that eventually grow into galaxies and other structure in matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered another period of acceleration. Such a result is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large-scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. The vacuum energy density that the constant describes should then be about a dozen times the present energy density in visible matter, but particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration, generically called dark energy. If some symmetry turns off the vacuum energy, its value is not a problem, but one then needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled with just dark matter and baryons. Such possibilities are investigated here. The main goal is to uncover the observational consequences of different models of dark energy, the emphasis being on their implications for the formation of the large-scale structure of the universe. Possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is given to modifications of gravity resulting in second-order field equations. It is shown that in a general class of such models the viable ones effectively represent the cosmological constant, while from another class one might find interesting modifications of the standard cosmological scenario that are still allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
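For concreteness, a widely used phenomenological parameterization of the dark energy equation of state (quoted here only as an illustration; the particular forms analyzed in the thesis may differ) is

w(a) = w_0 + w_a (1 - a),

where a is the scale factor normalized to unity today; w_0 = -1 and w_a = 0 correspond to a pure cosmological constant, while deviations from these values describe dynamical dark energy.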
Abstract:
The electroweak theory is the part of the standard model of particle physics that describes the weak and electromagnetic interactions between elementary particles. Since its formulation almost 40 years ago, it has been experimentally verified to high accuracy, and today it stands as one of the cornerstones of particle physics. The thermodynamics of electroweak physics has been studied ever since the theory was written down, and the features the theory exhibits under extreme conditions remain an interesting research topic even today. In this thesis, we consider some aspects of electroweak thermodynamics. Specifically, we compute the pressure of the standard model to high precision and study the structure of the electroweak phase diagram when finite chemical potentials for all the conserved particle numbers in the theory are introduced. In the first part of the thesis, the theory, methods, and essential results of the computations are introduced. The original research publications are reprinted at the end.
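For orientation, the leading-order (ideal-gas) result for the pressure of the standard model plasma far above the electroweak scale is the textbook expression

p(T) \simeq (\pi^2/90) g_* T^4, with g_* = 106.75,

where g_* counts the effective relativistic degrees of freedom of the standard model (bosons with weight 1, fermions with weight 7/8); the precision computation described above determines the corrections to this result in powers of the gauge and Yukawa couplings.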
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interactions between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both the flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of about 2 T_c.
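For reference, the quark number susceptibilities mentioned above are the second derivatives of the pressure with respect to the quark chemical potentials, evaluated at vanishing chemical potential,

\chi_{ij}(T) = \partial^2 p(T,\mu) / (\partial\mu_i \partial\mu_j) |_{\mu=0},

with the diagonal (i = j) and off-diagonal (i != j) components corresponding to the singlet and non-singlet quantities referred to above; analytic continuation here typically means simulating at imaginary chemical potential, where the effective action remains real, and continuing the results to real \mu.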
Abstract:
Inflation is a period of accelerated expansion in the very early universe, which has the appealing aspect that it can create primordial perturbations via quantum fluctuations. These primordial perturbations have been observed in the cosmic microwave background, and they also function as the seeds of all large-scale structure in the universe. Curvaton models are simple modifications of the standard inflationary paradigm, where inflation is driven by the energy density of the inflaton, but another field, the curvaton, is responsible for producing the primordial perturbations. The curvaton decays after inflation has ended, whereby the isocurvature perturbations of the curvaton are converted into adiabatic perturbations. Since the curvaton must decay, it must have some interactions. Additionally, realistic curvaton models typically have some self-interactions. In this work we consider self-interacting curvaton models, where the self-interaction is a monomial in the potential, suppressed by the Planck scale, so that the self-interaction is very weak. Nevertheless, since the self-interaction makes the equations of motion non-linear, it can modify the behaviour of the model very drastically. The most intriguing aspect of this behaviour is that the final properties of the perturbations become highly dependent on the initial values. Departures from a Gaussian distribution are important observables of the primordial perturbations. Due to the non-linearity of the self-interacting curvaton model and its sensitivity to initial conditions, it can produce significant non-Gaussianity of the primordial perturbations. In this work we investigate the non-Gaussianity produced by the self-interacting curvaton, and demonstrate that the non-Gaussianity parameters do not obey the analytically derived approximate relations often cited in the literature. Furthermore, we also consider a self-interacting curvaton with a mass at the TeV scale. Motivated by realistic particle physics models such as the Minimal Supersymmetric Standard Model, we demonstrate that a curvaton model within this mass range can be responsible for the observed perturbations if it decays late enough.
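As a sketch of the class of potentials described above (the exact form and exponent are illustrative assumptions, not taken from the thesis), a self-interacting curvaton potential can be written as

V(\sigma) = (1/2) m^2 \sigma^2 + \lambda \sigma^n / M_{Pl}^{n-4}, with n = 6, 8, ...,

so the self-interaction is Planck-suppressed and very weak, yet it makes the equation of motion for \sigma non-linear; this non-linearity is the origin of the strong sensitivity of the final perturbations to the initial field value.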
Abstract:
Models of Maximal Flavor Violation (MxFV) in elementary particle physics may contain at least one new scalar $SU(2)$ doublet field $\Phi_{FV} = (\eta^0,\eta^+)$ that couples the first and third generation quarks ($q_1,q_3$) via a Lagrangian term $\mathcal{L}_{FV} = \xi_{13} \Phi_{FV} q_1 q_3$. These models have a distinctive signature of same-charge top-quark pairs and evade flavor-changing limits from meson mixing measurements. Data corresponding to 2 fb$^{-1}$ collected by the CDF II detector in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV are analyzed for evidence of the MxFV signature. For a neutral scalar $\eta^0$ with $m_{\eta^0} = 200$ GeV/$c^2$ and coupling $\xi_{13}=1$, $\sim$ 11 signal events are expected over a background of $2.1 \pm 1.8$ events. Three events are observed in the data, consistent with background expectations, and limits are set on the coupling $\xi_{13}$ for $m_{\eta^0} = 180-300$ GeV/$c^2$.
Abstract:
Defects in mitochondrial DNA (mtDNA) maintenance cause a range of human diseases, including autosomal dominant progressive external ophthalmoplegia (adPEO). This study aimed to clarify the molecular background of adPEO. We discovered that deoxynucleoside triphosphate (dNTP) metabolism plays a crucial role in mtDNA maintenance and were thus prompted to search for therapeutic strategies based on the modulation of cellular dNTP pools or mtDNA copy number. Human mtDNA is a 16.6 kb circular molecule present in hundreds to thousands of copies per cell. mtDNA is compacted into nucleoprotein clusters called nucleoids. mtDNA maintenance diseases result from defects in nuclear-encoded proteins that maintain the mtDNA. These syndromes typically afflict highly differentiated, post-mitotic tissues such as muscle and nerve, but virtually any organ can be affected. adPEO is a disease in which mtDNA molecules with large-scale deletions accumulate in patients' tissues, particularly in skeletal muscle. Mutations in five nuclear genes, encoding the proteins ANT1, Twinkle, POLG, POLG2 and OPA1, have previously been shown to cause adPEO. Here, we studied a large North American pedigree with adPEO, and identified a novel heterozygous mutation in the gene RRM2B, which encodes the p53R2 subunit of the enzyme ribonucleotide reductase (RNR). RNR is the rate-limiting enzyme in dNTP biosynthesis, and is required for both nuclear and mitochondrial DNA replication. The mutation results in the expression of a truncated form of p53R2, which is likely to compete with the wild-type allele. A change in enzyme function leads to defective mtDNA replication due to altered dNTP pools. Therefore, RRM2B is a novel adPEO disease gene. The importance of adequate dNTP pools and RNR function for mtDNA maintenance has been established in many organisms. In yeast, induction of RNR has previously been shown to increase mtDNA copy number, and to rescue the phenotype caused by mutations in the yeast mtDNA polymerase. To further study the role of RNR in mammalian mtDNA maintenance, we used mice that broadly overexpress the RNR subunits Rrm1, Rrm2 or p53R2. Active RNR is a heterotetramer consisting of two large subunits (Rrm1) and two small subunits (either Rrm2 or p53R2). We also created bitransgenic mice that overexpress Rrm1 together with either Rrm2 or p53R2. In contrast to the previous findings in yeast, bitransgenic RNR overexpression led to mtDNA depletion in mouse skeletal muscle, without mtDNA deletions or point mutations. The mtDNA depletion was associated with imbalanced dNTP pools. Furthermore, the mRNA expression levels of Rrm1 and p53R2 were found to correlate with mtDNA copy number in two independent mouse models, suggesting nuclear-mitochondrial crosstalk with regard to mtDNA copy number. We conclude that tight regulation of RNR is needed to prevent harmful alterations in the dNTP pool balance, which can lead to disordered mtDNA maintenance. Increasing the copy number of wild-type mtDNA has been suggested as a strategy for treating PEO and other mitochondrial diseases. Only two proteins are known to cause a robust increase in mtDNA copy number when overexpressed in mice: the mitochondrial transcription factor A (TFAM) and the mitochondrial replicative helicase Twinkle. We studied the mechanisms by which Twinkle and TFAM elevate mtDNA levels, and showed that Twinkle specifically drives mtDNA synthesis. Furthermore, both Twinkle and TFAM were found to increase mtDNA content per nucleoid.
Increased mtDNA content in mouse tissues correlated with an age-related accumulation of mtDNA deletions, depletion of mitochondrial transcripts, and progressive respiratory dysfunction. Simultaneous overexpression of Twinkle and TFAM led to a further increase in the mtDNA content of nucleoids, and aggravated the respiratory deficiency. These results suggested that high mtDNA levels have detrimental long-term effects in mice. These data have to be considered when developing and evaluating treatment strategies for elevating mtDNA copy number.
Abstract:
Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education program. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first one, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on and assessing their knowledge and experience of physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics, and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the formation of knowledge about physical concepts and laws in a concise and simplified form while still retaining the authenticity of the processes by which the concepts have been formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, the attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and that their knowledge became more organized and coherent. It is shown that the flow charts and the didactical reconstructions behind them played an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.
Abstract:
Modern elementary particle physics is based on quantum field theories. Our current understanding is that both the smallest structures of matter and the composition of the universe are described by quantum field theories, which account for the observable phenomena by describing particles as vibrations of the fields. The Standard Model of particle physics is a gauge quantum field theory describing the electromagnetic, weak, and strong interactions. However, it is believed that the Standard Model describes physics properly only up to a certain energy scale. This scale cannot be much larger than the so-called electroweak scale, i.e., the masses of the gauge bosons W^± and Z^0. Beyond this scale, the Standard Model has to be modified. In this dissertation, supersymmetric theories are used to tackle the problems of the Standard Model. For example, the quadratic divergences, which plague the Higgs boson mass in the Standard Model, cancel in supersymmetric theories. Experimental facts concerning the neutrino sector indicate that lepton number is violated in Nature. On the other hand, lepton-number-violating Majorana neutrino masses can induce sneutrino-antisneutrino oscillations in any supersymmetric model. In this dissertation, I present some viable signals for detecting sneutrino-antisneutrino oscillations at colliders. At the e-gamma collider (at the International Linear Collider), the number of electron-sneutrino-antisneutrino oscillation signal events is quite high, and the backgrounds are quite small. A similar study for the LHC shows that, even though there are several backgrounds, the sneutrino-antisneutrino oscillations can be detected. A useful asymmetry observable is introduced and studied. Usually, the oscillation probability formula for sneutrinos produced at rest is used; here, however, we study a general oscillation probability. The Lorentz factor and the distance at which the measurement is made inside the detector can have effects, especially when the sneutrino decay width is very small. These effects are demonstrated for a certain scenario at the LHC.
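A minimal sketch of the oscillation probability behind this discussion, assuming standard two-state particle-antiparticle mixing with mass splitting \Delta m and a common decay width \Gamma (given only as an illustration): a state produced as a sneutrino is observed as an antisneutrino at proper time \tau with probability

P(\tau) = (1/2) e^{-\Gamma \tau} [1 - \cos(\Delta m \tau)].

For a sneutrino observed at a distance L inside the detector, the relevant proper time is \tau = L/(\beta\gamma) in natural units, which is how the Lorentz factor and the measurement distance enter; when \Gamma is very small the sneutrino can travel a macroscopic distance before decaying, so the boost and the decay position can noticeably affect the observed oscillation signal.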
Abstract:
Hollow atoms, in which the K shell is empty while the outer shells are populated, allow the study of a variety of important and unusual properties of atoms. The diagram x-ray emission lines of such atoms, the K^h alpha(1,2) hypersatellites (HSs), were measured for the 3d transition metals, Z=23-30, with high energy resolution using photoexcitation by monochromatized synchrotron radiation. Good agreement with ab initio relativistic multiconfigurational Dirac-Fock calculations was found. The measured variation of the HS intensity with the excitation energy yields accurate values for the excitation thresholds, excludes contributions from shake-up processes, and indicates that a nonshake process dominates near threshold. The Z variation of the HS shifts from the diagram line K alpha(1,2), of the K^h alpha(1)-K^h alpha(2) splitting, and of the K^h alpha(1)/K^h alpha(2) intensity ratio, derived from the measurements, is also discussed, with particular emphasis on the QED corrections and the Breit interaction.
Abstract:
The K-shell diagram (K alpha(1,2) and K beta(1,3)) and hypersatellite (HS) (K^h alpha(1,2)) spectra of Y, Zr, Mo, and Pd have been measured with high energy resolution using photoexcitation by 90 keV synchrotron radiation. Comparison of the measured and ab initio calculated HS spectra demonstrates the importance of quantum electrodynamical (QED) effects for the HS spectra. Phenomenological fits of the measured spectra by Voigt functions yield accurate values for the shift of the HS from the diagram lines, the splitting of the HS lines, and their intensity ratio. Good agreement with theory was found for all quantities except the intensity ratio, which is dominated by the intermediate character of the coupling of the angular momenta. The observed deviations imply that our current understanding of the variation of the coupling scheme from LS to jj across the periodic table may require some revision.
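For completeness, the Voigt functions used in the phenomenological fits are convolutions of a Lorentzian (natural linewidth) with a Gaussian (instrumental broadening),

V(x; \sigma, \gamma) = \int G(x'; \sigma) L(x - x'; \gamma) dx',

where G is a normalized Gaussian of standard deviation \sigma and L a normalized Lorentzian of half-width \gamma; the fitted line positions and intensities then yield the hypersatellite shifts, splittings, and intensity ratios quoted above.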
Abstract:
QCD factorization in the Bjorken limit makes it possible to separate the long-distance physics from the hard subprocess. At leading twist, only one parton in each hadron is coherent with the hard subprocess. Higher-twist effects increase as one of the active partons carries most of the longitudinal momentum of the hadron, x -> 1. In the Drell-Yan process \pi N -> \mu^- \mu^+ + X, the polarization of the virtual photon is observed to change to longitudinal when the photon carries a fraction x_F > 0.6 of the pion momentum. I define and study the Berger-Brodsky limit of Q^2 -> \infty with Q^2(1-x) fixed. A new kind of factorization holds in the Drell-Yan process in this limit, in which both pion valence quarks are coherent with the hard subprocess, the virtual photon is longitudinal rather than transverse, and the cross section is proportional to a multiparton distribution. Generalized parton distributions contain information on the longitudinal momentum and transverse position densities of partons in a hadron. Transverse charge densities are Fourier transforms of the electromagnetic form factors. I discuss the application of these methods to the QED electron, studying the form factors, charge densities and spin distributions of the leading-order |e\gamma> Fock state in impact parameter and longitudinal momentum space. I show how the transverse shape of any virtual-photon-induced process, \gamma^*(q)+i -> f, may be measured. Qualitative arguments concerning the size of such transitions have been made previously in the literature, but without a precise analysis. Properly defined, the amplitudes and the cross section in impact parameter space provide information on the transverse shape of the transition process.
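The transverse charge densities referred to above follow from the standard two-dimensional Fourier transform of the electromagnetic form factor (quoted here as a reminder, in a frame where the momentum transfer is purely transverse),

\rho(b) = \int d^2 q_\perp / (2\pi)^2 \, e^{-i q_\perp \cdot b} \, F(Q^2 = q_\perp^2),

where b is the impact parameter measured from the transverse center of momentum and F is the relevant form factor (the Dirac form factor F_1 for the charge density of a spin-1/2 state such as the electron).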