201 results for Didactics of Physics
Abstract:
We study effective models of chiral fields and the Polyakov loop that are expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. The chiral sector is described using either the linear sigma model or the Nambu-Jona-Lasinio model; we study the phase diagram and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e. the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
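As a rough, hedged orientation to the class of models referred to here (Polyakov-loop-extended chiral models), the mean-field grand potential typically takes the schematic form below; the specific potentials and the quark term depend on whether the linear sigma model or the NJL model is chosen, and the explicit-breaking field $h$ is proportional to the bare quark mass $m_q$. This is a generic schematic, not a formula quoted from the paper:
\[
\Omega(\sigma,\Phi,\bar\Phi;T,\mu) \;=\; U_{\chi}(\sigma)\;-\;h\,\sigma\;+\;\mathcal{U}(\Phi,\bar\Phi;T)\;+\;\Omega_{q\bar q}(\sigma,\Phi,\bar\Phi;T,\mu),
\]
with the phase structure obtained by minimizing $\Omega$ with respect to $\sigma$, $\Phi$, and $\bar\Phi$ at each $(T,\mu)$.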
Abstract:
The cross section for photon production in association with at least one jet containing a $b$-quark hadron has been measured in proton-antiproton collisions at $\sqrt{s}=1.96$ TeV. The analysis uses a data sample corresponding to an integrated luminosity of 340 pb$^{-1}$ collected with the CDF II detector. Both the differential cross section as a function of photon transverse energy $E_T^{\gamma}$, $d\sigma(p\bar{p} \to \gamma + \geq 1\,b\text{-jet})/dE_T^{\gamma}$, and the total cross section $\sigma(p\bar{p} \to \gamma + \geq 1\,b\text{-jet};\ E_T^{\gamma}> 20$ GeV$)$ are measured. Comparisons to a next-to-leading-order prediction of the process are presented.
Abstract:
We report a measurement of the production cross section for b hadrons in p-pbar collisions at sqrt{s}=1.96 TeV. Using a data sample derived from an integrated luminosity of 83 pb^-1 collected with the upgraded Collider Detector (CDF II) at the Fermilab Tevatron, we analyze b hadrons, H_b, partially reconstructed in the semileptonic decay mode H_b -> mu^- D^0 X. Our measurement of the inclusive production cross section for b hadrons with transverse momentum p_T > 9 GeV/c and rapidity |y|
Abstract:
We report the first observation of single top quark production using 3.2 fb^-1 of pbar p collision data at sqrt{s}=1.96 TeV collected by the Collider Detector at Fermilab. The significance of the observed data is 5.0 standard deviations, and the expected sensitivity for standard model production and decay is in excess of 5.9 standard deviations. Assuming m_t=175 GeV/c^2, we measure a cross section of 2.3 +0.6 -0.5 (stat+syst) pb, extract the CKM matrix element value |V_{tb}| = 0.91 +- 0.11 (stat+syst) +- 0.07 (theory), and set the limit |V_{tb}| > 0.71 at the 95% C.L.
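For context on how the quoted $|V_{tb}|$ value follows from the cross-section measurement (this is the standard relation, not a detail taken from the abstract itself): within the standard model the single-top production rate scales as $|V_{tb}|^2$, so, assuming $|V_{td}|,|V_{ts}| \ll |V_{tb}|$,
\[
|V_{tb}|^{2} \;\simeq\; \frac{\sigma_{\mathrm{meas}}}{\sigma_{\mathrm{SM}}\big(|V_{tb}|=1\big)},
\]
with the theoretical uncertainty on $\sigma_{\mathrm{SM}}$ entering as the quoted "(theory)" error.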
Abstract:
We present a measurement of the $t\bar{t}$ production cross section in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV using events containing a high transverse momentum electron or muon, three or more jets, and missing transverse energy. Events consistent with $t\bar{t}$ decay are found by identifying jets containing candidate heavy-flavor semileptonic decays to muons. The measurement uses a CDF Run II data sample corresponding to $2\,\mathrm{fb^{-1}}$ of integrated luminosity. Based on 248 candidate events with three or more jets and an expected background of $79.5\pm5.3$ events, we measure a production cross section of $9.1\pm 1.6\,\mathrm{pb}$.
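As a back-of-the-envelope rearrangement of the numbers quoted above (the effective acceptance times efficiency $\epsilon$ below is inferred, not stated in the abstract), the counting-experiment relation reads
\[
\sigma_{t\bar t} \;=\; \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\epsilon \int\!\mathcal{L}\,dt}
\;=\; \frac{248 - 79.5}{\epsilon \times 2000\ \mathrm{pb}^{-1}} \;=\; 9.1\ \mathrm{pb}
\quad\Rightarrow\quad \epsilon \approx 0.9\%,
\]
where $\epsilon$ lumps together geometric acceptance, branching fractions, and tagging efficiency.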
Abstract:
We search for new charmless decays of neutral b-hadrons to pairs of charged hadrons with the upgraded Collider Detector at the Fermilab Tevatron. Using a data sample corresponding to 1 fb-1 of integrated luminosity, we report the first observation of the B0s -> K- pi+ decay, with a significance of 8.2 sigma, and measure BR(B0s -> K- pi+) = (5.0 +- 0.7 (stat) +- 0.8 (syst)) x 10^{-6}. We also report the first observation of charmless b-baryon decays in the channels Lambda_b -> p pi- and Lambda_b -> p K-, with significances of 6.0 sigma and 11.5 sigma respectively, and we measure BR(Lambda_b -> p pi-) = (3.5 +- 0.6 (stat) +- 0.9 (syst)) x 10^{-6} and BR(Lambda_b -> p K-) = (5.6 +- 0.8 (stat) +- 1.5 (syst)) x 10^{-6}. No evidence is found for the decays B0 -> K+ K- and B0s -> pi+ pi-, and we set an improved upper limit on BR(B0s -> pi+ pi-). All branching fractions are measured using BR(B0 -> K+ pi-) as a reference.
Abstract:
We report a measurement of the top quark mass $M_t$ in the dilepton decay channel $t\bar{t}\to b\ell'^{+}\nu_{\ell'}\bar{b}\ell^{-}\bar{\nu}_{\ell}$. Events are selected with a neural network which has been directly optimized for statistical precision in top quark mass using neuroevolution, a technique modeled on biological evolution. The top quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb$^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t= 171.2\pm 2.7(\textrm{stat.})\pm 2.9(\textrm{syst.})\ \mathrm{GeV}/c^2$.
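Schematically (this is the generic matrix-element-method construction, written here for orientation rather than quoted from the paper), the per-event densities and the joint likelihood have the form
\[
P_i(x_i \mid M_t) \;\propto\; \int |\mathcal{M}_{\mathrm{LO}}(y; M_t)|^{2}\, W(x_i \mid y)\, f(q_1)\, f(q_2)\; dy\, dq_1\, dq_2,
\qquad
\mathcal{L}(M_t) \;=\; \prod_{i=1}^{344} P_i(x_i \mid M_t),
\]
where $x_i$ are the measured quantities, $y$ the parton-level configuration, $W$ the detector resolution (transfer) functions, and $f$ the parton distribution functions; $M_t$ is estimated from the maximum of $\mathcal{L}$.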
Abstract:
Magnetic susceptibility measurements were performed on freshly fallen Almahata Sitta meteorites. Most recovered samples are polymict ureilites. Those found in the first four months after the impact, before the meteorites were exposed to rain, have a magnetic susceptibility in the narrow range of 4.92 ± 0.08 log 10^-9 Am^2/kg, close to the range of other ureilite falls, 4.95 ± 0.14 log 10^-9 Am^2/kg, reported by Rochette et al. (2009). The Almahata Sitta samples collected one year after the fall have similar values (4.90 ± 0.06 log 10^-9 Am^2/kg), revealing that the effect of one year of terrestrial weathering was not yet severe. However, our reported values are higher than those derived from polymict (brecciated) ureilites, 4.38 ± 0.47 log 10^-9 Am^2/kg (Rochette et al. 2009), comprising both falls and finds, confirming that the latter are significantly weathered. Additionally, other fresh-looking meteorites of non-ureilitic compositions were collected in the Almahata Sitta strewn field. Magnetic susceptibility measurements proved to be a convenient non-destructive method for identifying non-ureilitic meteorites among those collected in the Almahata Sitta strewn field, even among fully crusted samples. Three such meteorites, nos. 16, 25, and 41, were analyzed and their compositions determined as EH6, H5, and EL6, respectively (Zolensky et al., 2010). A high scatter of magnetic susceptibility values among small (< 5 g) samples revealed high inhomogeneity within the 2008 TC3 material at scales below 1-2 cm.
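For readers unfamiliar with the logarithmic convention used above (the unit string is reproduced as quoted), a value of $\log\chi = 4.92$ corresponds to
\[
\chi \;\approx\; 10^{4.92}\times 10^{-9} \;\approx\; 8.3\times 10^{-5}
\]
in the stated units, so the quoted spreads are multiplicative: ±0.08 in the log is roughly a ±20% spread in $\chi$.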
Abstract:
Measurements of inclusive charged-hadron transverse-momentum and pseudorapidity distributions are presented for proton-proton collisions at sqrt(s) = 0.9 and 2.36 TeV. The data were collected with the CMS detector during the LHC commissioning in December 2009. For non-single-diffractive interactions, the average charged-hadron transverse momentum is measured to be 0.46 +/- 0.01 (stat.) +/- 0.01 (syst.) GeV/c at 0.9 TeV and 0.50 +/- 0.01 (stat.) +/- 0.01 (syst.) GeV/c at 2.36 TeV, for pseudorapidities between -2.4 and +2.4. At these energies, the measured pseudorapidity densities in the central region, dN(charged)/d(eta) for |eta|
Abstract:
Volatile organic compounds (VOCs) are emitted into the atmosphere from natural and anthropogenic sources, vegetation being the dominant source on a global scale. Some of these reactive compounds are deemed major contributors or inhibitors to aerosol particle formation and growth, thus making VOC measurements essential for current climate change research. This thesis discusses ecosystem scale VOC fluxes measured above a boreal Scots pine dominated forest in southern Finland. The flux measurements were performed using the micrometeorological disjunct eddy covariance (DEC) method combined with proton transfer reaction mass spectrometry (PTR-MS), an online technique for measuring VOC concentrations. The measurement, calibration, and calculation procedures developed in this work proved to be well suited to long-term VOC concentration and flux measurements with PTR-MS. A new averaging approach based on running averaged covariance functions improved the determination of the lag time between wind and concentration measurements, which is a common challenge in DEC when measuring fluxes near the detection limit. The ecosystem scale emissions of methanol, acetaldehyde, and acetone were substantial. These three oxygenated VOCs made up about half of the total emissions, with the rest consisting of monoterpenes. Contrary to the traditional assumption that monoterpene emissions from Scots pine originate mainly as evaporation from specialized storage pools, the DEC measurements indicated a significant contribution from de novo biosynthesis to the ecosystem scale monoterpene emissions. This thesis offers practical guidelines for long-term DEC measurements with PTR-MS. In particular, the new averaging approach to lag time determination seems useful in the automation of DEC flux calculations. Seasonal variation in the monoterpene biosynthesis and the detailed structure of a revised hybrid algorithm, describing both de novo and pool emissions, should be determined in further studies to improve biological realism in the modelling of monoterpene emissions from Scots pine forests. The increasing number of DEC measurements of oxygenated VOCs will probably enable better estimates of the role of these compounds in plant physiology and tropospheric chemistry.
Keywords: disjunct eddy covariance, lag time determination, long-term flux measurements, proton transfer reaction mass spectrometry, Scots pine forests, volatile organic compounds
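A minimal sketch of the kind of lag-time determination described above, assuming the "running averaged covariance functions" amount to averaging the wind-concentration cross-covariance over consecutive averaging periods before locating its extremum; the function and variable names are illustrative, not taken from the thesis:

import numpy as np

def cov_function(w, c, max_lag):
    """Cross-covariance of detrended vertical wind speed w and concentration c
    for lags from -max_lag to +max_lag samples."""
    w = np.asarray(w, dtype=float) - np.mean(w)
    c = np.asarray(c, dtype=float) - np.mean(c)
    cov = []
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = w[:len(w) - k], c[k:]
        else:
            a, b = w[-k:], c[:k]
        cov.append(np.mean(a * b))
    return np.array(cov)

def lag_from_running_average(periods, max_lag, dt):
    """Average the covariance functions of consecutive (w, c) periods and return
    the lag time (seconds) at the extremum of the averaged function."""
    mean_cov = np.mean([cov_function(w, c, max_lag) for w, c in periods], axis=0)
    lags = np.arange(-max_lag, max_lag + 1)
    return dt * lags[np.argmax(np.abs(mean_cov))]

# Synthetic check: the concentration lags the wind signal by 5 samples (0.5 s at 10 Hz).
rng = np.random.default_rng(0)
w = rng.normal(size=2000)
c = np.roll(w, 5) + 0.5 * rng.normal(size=2000)
print(lag_from_running_average([(w, c)], max_lag=20, dt=0.1))  # ~0.5

Averaging the covariance functions before picking the extremum suppresses the spurious maxima that appear when a single period's flux is close to the detection limit, which is the motivation given in the abstract.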
Abstract:
We report the results of a study of multi-muon events produced at the Fermilab Tevatron collider and acquired with the CDF II detector using a dedicated dimuon trigger. The production cross section and kinematics of events in which both muon candidates are produced inside the beam pipe of radius 1.5 cm are successfully modeled by known processes which include heavy flavor production. In contrast, we are presently unable to fully account for the number and properties of the remaining events, in which at least one muon candidate is produced outside of the beam pipe, in terms of the same understanding of the CDF II detector, trigger, and event reconstruction.
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, and this barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band, and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
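To make the scaling-law statement above concrete (written from the standard formulation of the McGraw-Laaksonen result, not quoted from the thesis): if the liquid-drop (CNT) work of forming an $n$-molecule cluster at supersaturation $S$ is
\[
\Delta W_{\mathrm{CNT}}(n) \;=\; -\,n\,kT\ln S \;+\; \sigma A(n),
\]
then the scaling law states that the true work of formation is displaced by a size-independent, temperature-dependent offset, $\Delta W(n) \simeq \Delta W_{\mathrm{CNT}}(n) - D(T)$, so a corrected nucleation rate differs from the CNT prediction by a factor of roughly $\exp\!\big(D(T)/kT\big)$.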
Abstract:
Nucleation is the first step in a phase transition, where small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Different simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such simulation methods include, among others, molecular dynamics and Monte Carlo simulations. In this work molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed. Homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic sizes and to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations. The results agreed with density functional theory but were higher than values from Monte Carlo simulations. The formation energies were also used to calculate the surface tension of the clusters. The sizes of the clusters in the direct and indirect simulations were compared, showing that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting simulated nucleation rates was investigated, and the results, among other things, once again highlighted the inadequacy of the classical nucleation theory that is commonly employed in nucleation studies.
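A minimal sketch of one common way to extract a nucleation rate from direct simulations of the kind described above (a simple threshold estimator; the thesis compares several methods, and the function name, threshold convention, and numbers below are illustrative, not the thesis's actual implementation):

import numpy as np

def nucleation_rate_from_crossings(first_crossing_times_ns, box_volume_nm3):
    """Estimate the nucleation rate J (events per m^3 per s) as the inverse of the
    mean first time at which the largest cluster exceeds a chosen threshold size,
    divided by the simulation volume."""
    t_mean = np.mean(first_crossing_times_ns) * 1e-9   # ns -> s
    volume = box_volume_nm3 * 1e-27                    # nm^3 -> m^3
    return 1.0 / (t_mean * volume)

# Example: three independent runs in a (10 nm)^3 box crossing the threshold at
# 8.1, 12.5, and 20.3 ns give J ~ 7e31 m^-3 s^-1 (illustrative numbers only).
print(nucleation_rate_from_crossings([8.1, 12.5, 20.3], 1000.0))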
Abstract:
Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education program. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first one, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on or assessing their knowledge and experience of physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics, and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the knowledge formation of physical concepts and laws in a concise and simplified form while still retaining the authenticity of the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities, and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that students' understanding of physics knowledge production improved and that their knowledge became more organized and coherent. It is shown that the flow charts and the didactical reconstructions behind them played an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.
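Since the DRoP and DRoS charts are described as flow charts of nodes and directed links, here is a minimal sketch of how such a chart could be represented as a data structure; the node categories and the electromagnetic-induction fragment are illustrative inventions, not the dissertation's actual charts:

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str   # e.g. an experiment, a model, a quantity, or a law
    kind: str    # illustrative categories: "experiment", "model", "quantity", "law"

@dataclass
class FlowChart:
    nodes: dict[str, Node] = field(default_factory=dict)
    links: list[tuple[str, str, str]] = field(default_factory=list)  # (source, target, justification)

    def add_node(self, label: str, kind: str) -> None:
        self.nodes[label] = Node(label, kind)

    def add_link(self, source: str, target: str, justification: str = "") -> None:
        # A directed link records how one piece of knowledge justifies the next.
        self.links.append((source, target, justification))

# Illustrative DRoP-style fragment for the law of electromagnetic induction.
chart = FlowChart()
chart.add_node("Moving a magnet near a coil", "experiment")
chart.add_node("Induced emf", "quantity")
chart.add_node("Faraday's law", "law")
chart.add_link("Moving a magnet near a coil", "Induced emf", "qualitative observation")
chart.add_link("Induced emf", "Faraday's law", "quantitative generalization")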