943 results for Statistical physics
Abstract:
We present a measurement of the top quark mass and of the top-antitop pair production cross section using p-pbar data collected with the CDF II detector at the Tevatron Collider at the Fermi National Accelerator Laboratory, corresponding to an integrated luminosity of 2.9 fb^-1. We select events with six or more jets satisfying a number of kinematic requirements imposed by means of a neural network algorithm. At least one of these jets must originate from a b quark, as identified by the reconstruction of a secondary vertex inside the jet. The mass measurement is based on a likelihood fit incorporating reconstructed mass distributions representative of signal and background, where the absolute jet energy scale (JES) is measured simultaneously with the top quark mass. The measurement yields a value of 174.8 +- 2.4(stat+JES)^{+1.2}_{-1.0}(syst) GeV/c^2, where the uncertainty from the absolute jet energy scale is evaluated together with the statistical uncertainty. The procedure also measures the amount of signal, from which we derive a cross section, sigma_{ttbar} = 7.2 +- 0.5(stat) +- 1.0(syst) +- 0.4(lum) pb, for the measured values of the top quark mass and JES.
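As an illustration of the kind of simultaneous fit described above (not the CDF procedure itself), the following minimal sketch floats both a mass-like location parameter and a global jet-energy-scale factor in a toy binned-free likelihood; the Gaussian "template", the numbers, and the absence of a background component are all invented for the example.

```python
# Toy sketch of a simultaneous (mass, JES) template likelihood fit.
# All shapes and numbers are invented; this is not the CDF procedure.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Pseudo-data: reconstructed masses smeared around a "true" top mass and
# scaled by a global jet-energy-scale factor.
true_mass, true_jes, resolution = 175.0, 1.02, 12.0
data = true_jes * rng.normal(true_mass, resolution, size=400)

def nll(params):
    """Negative log-likelihood of a Gaussian signal template whose
    location and width both scale with the JES factor."""
    m_top, jes = params
    pdf = norm.pdf(data, loc=jes * m_top, scale=jes * resolution)
    return -np.sum(np.log(pdf + 1e-300))

# In a real analysis a background template and an in-situ JES constraint
# would be included; here both parameters simply float.
fit = minimize(nll, x0=[170.0, 1.00], method="Nelder-Mead")
m_fit, jes_fit = fit.x
print(f"fitted mass = {m_fit:.1f} GeV/c^2, fitted JES = {jes_fit:.3f}")
```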
Abstract:
We investigate the effects of new physics scenarios containing a high-mass vector resonance on top pair production at the LHC, using the polarization of the produced top quark. In particular, we use kinematic distributions of the secondary lepton coming from the top decay, which depend on the top polarization; it has been shown that the angular distribution of the decay lepton is insensitive to an anomalous tbW vertex and is hence a pure probe of new physics in top quark production. Spin-sensitive variables involving the decay lepton are used to probe the top polarization. Some sensitivity to the new couplings of the top quark is found.
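For context, the standard polarization-sensitive lepton angular distribution referred to here is the textbook result below, written with generic symbols rather than the authors' notation.

```latex
% Decay-lepton angular distribution in the top-quark rest frame:
% P_t is the top polarization along the chosen axis and kappa_ell
% (= 1 at leading order in the SM) is the lepton spin-analysing power.
\frac{1}{\Gamma}\,\frac{d\Gamma}{d\cos\theta_\ell}
  = \frac{1}{2}\left(1 + P_t\,\kappa_\ell\,\cos\theta_\ell\right)
```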
Abstract:
A combined mass and particle identification fit is used to make the first observation of the decay B̄_s^0 → D_s^±K^∓ and measure the branching fraction of B̄_s^0 → D_s^±K^∓ relative to B̄_s^0 → D_s^+π^-. This analysis uses 1.2 fb^-1 of integrated luminosity of pp̄ collisions at √s = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron collider. We observe a B̄_s^0 → D_s^±K^∓ signal with a statistical significance of 8.1σ and measure B(B̄_s^0 → D_s^±K^∓)/B(B̄_s^0 → D_s^+π^-) = 0.097 ± 0.018(stat) ± 0.009(syst).
Abstract:
We search for b → s μ^+μ^- transitions in B meson (B^+, B^0, or B_s^0) decays with 924 pb^-1 of pp̄ collisions at √s = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. We find excesses with significances of 4.5, 2.9, and 2.4 standard deviations in the B^+ → μ^+μ^-K^+, B^0 → μ^+μ^-K*(892)^0, and B_s^0 → μ^+μ^-ϕ decay modes, respectively. Using B → J/ψh (h = K^+, K*(892)^0, ϕ) decays as normalization channels, we report branching fractions for the previously observed B^+ and B^0 decays, B(B^+ → μ^+μ^-K^+) = (0.59 ± 0.15 ± 0.04) × 10^-6 and B(B^0 → μ^+μ^-K*(892)^0) = (0.81 ± 0.30 ± 0.10) × 10^-6, where the first uncertainty is statistical and the second is systematic. We set an upper limit on the relative branching fraction B(B_s^0 → μ^+μ^-ϕ)/B(B_s^0 → J/ψϕ) < 2.6 (2.3) × 10^-3 at the 95% (90%) confidence level, which is the most stringent to date.
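A minimal sketch of how a relative branching fraction of the kind quoted above is formed from a signal yield, a normalization-channel yield and an efficiency ratio, with simple Gaussian error propagation; the numbers below are invented placeholders, not the CDF inputs.

```python
# Toy relative branching fraction B(signal)/B(normalization) from yields.
# Yields and efficiencies below are invented placeholders.
import math

n_sig, dn_sig = 120.0, 27.0          # signal yield and its uncertainty
n_norm, dn_norm = 2400.0, 60.0       # normalization yield and uncertainty
eff_ratio, deff_ratio = 0.85, 0.04   # efficiency ratio eps_norm / eps_sig

ratio = (n_sig / n_norm) * eff_ratio
rel_err = math.sqrt((dn_sig / n_sig) ** 2 +
                    (dn_norm / n_norm) ** 2 +
                    (deff_ratio / eff_ratio) ** 2)
print(f"B_sig/B_norm = {ratio:.4f} +- {ratio * rel_err:.4f}")
```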
Abstract:
This research is connected with an education development project for the four-year officer education program at the National Defence University. In this curriculum, physics was studied in two alternative course plans, namely scientific and general. Observations connected to the latter, e.g. student feedback and learning outcomes, gave indications that action was needed to support the course. The reform work focused on the production of aligned, course-related instructional material. The learning material project produced a customized textbook set for the students of the general basic physics course. The research adapts phases that are typical of Design-Based Research (DBR). The research analyses the feature requirements for a physics textbook aimed at a specific sector and the frames supporting instructional material development, and summarizes the experiences gained in the learning material project when the selected frames were applied. The quality of instructional material is an essential part of qualified teaching. The goal of instructional material customization is to increase the product's customer-centric nature and to enhance its function as a support medium for the learning process. Textbooks are still one of the core elements in physics teaching. The idea of a textbook will remain, but the form and appearance may change according to the prevailing technology. The work deals with substance-connected frames (demands on a physics textbook according to the PER viewpoint, quality thinking in educational material development), frames of university pedagogy, and instructional material production processes. A wide knowledge and understanding of different frames are useful in development work, if they are utilized to aid inspiration without limiting new reasoning and new kinds of models. Applying customization even in the frame utilization supports creative and situation-aware design and diminishes the gap between theory and practice. Generally, physics teachers produce their own supplementary instructional material. Even though customization thinking is not unknown, the threshold to produce an entire textbook might be high. Even though the observations here are from the general physics course at the NDU, the research also gives tools for development in other discipline-related educational contexts. This research is an example of instructional material development work, together with the questions it uncovers, and presents thoughts on when textbook customization is rewarding. At the same time, the research aims to further creative customization thinking in instruction and development. Key words: Physics textbook, PER (Physics Education Research), Instructional quality, Customization, Creativity
Abstract:
Evidence is reported for a narrow structure near the $J/\psi\phi$ threshold in exclusive $B^+\to J/\psi\phi K^+$ decays produced in $\bar{p}p$ collisions at $\sqrt{s}=1.96$ TeV. A signal of $14\pm5$ events, with statistical significance in excess of 3.8 standard deviations, is observed in a data sample corresponding to an integrated luminosity of 2.7 fb$^{-1}$, collected by the CDF II detector. The mass and natural width of the structure are measured to be $4143.0\pm2.9(\mathrm{stat})\pm1.2(\mathrm{syst})$ MeV/$c^2$ and $11.7^{+8.3}_{-5.0}(\mathrm{stat})\pm3.7(\mathrm{syst})$ MeV/$c^2$.
Abstract:
We present results of a signature-based search for new physics using a dijet plus missing transverse energy data sample collected in 2 fb^-1 of p-pbar collisions at sqrt(s) = 1.96 TeV with the CDF II detector at the Fermilab Tevatron. We observe no significant event excess with respect to the standard model prediction and extract a 95% C.L. upper limit on the cross section times acceptance for a potential contribution from a non-standard model process. Based on this limit the mass of a first or second generation scalar leptoquark is constrained to be above 187 GeV/c^2.
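The upper limit quoted above comes from the dedicated CDF limit-setting procedure; the sketch below only illustrates the simplest counting-experiment version of such a limit, using a flat prior, no systematic uncertainties, and invented numbers.

```python
# Toy Bayesian 95% C.L. upper limit on sigma x acceptance from a counting
# experiment with a flat prior and no systematic uncertainties.
# The observed count, background and luminosity are invented placeholders.
import numpy as np
from scipy.stats import poisson

n_obs, b_exp = 18, 15.0        # observed events, expected background
luminosity_pb = 2000.0         # integrated luminosity in pb^-1

s_grid = np.linspace(0.0, 60.0, 6001)        # signal-yield hypotheses
like = poisson.pmf(n_obs, b_exp + s_grid)    # Poisson likelihood
posterior = like / like.sum()                # flat prior, grid-normalized
s_up = s_grid[np.searchsorted(np.cumsum(posterior), 0.95)]

print(f"N_sig < {s_up:.1f} at 95% C.L. "
      f"-> sigma x A < {s_up / luminosity_pb * 1e3:.1f} fb")
```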
Abstract:
We report a measurement of the production cross section for b hadrons in pp̄ collisions at √s = 1.96 TeV. Using a data sample derived from an integrated luminosity of 83 pb^-1 collected with the upgraded Collider Detector (CDF II) at the Fermilab Tevatron, we analyze b hadrons, H_b, partially reconstructed in the semileptonic decay mode H_b → μ^-D^0X. Our measurement of the inclusive production cross section for b hadrons with transverse momentum p_T > 9 GeV/c and rapidity |y| < 0.6 is σ = 1.30 μb ± 0.05 μb(stat) ± 0.14 μb(syst) ± 0.07 μb(B), where the uncertainties are statistical, systematic, and from branching fractions, respectively. The differential cross sections dσ/dp_T are found to be in good agreement with recent measurements of the H_b cross section and well described by fixed-order next-to-leading logarithm predictions.
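Schematically, a cross section of this kind is extracted from the signal yield via the usual relation below, written with generic symbols rather than the analysis-specific acceptance breakdown.

```latex
% Schematic cross-section extraction for H_b -> mu^- D^0 X:
% N_sig = signal yield, epsilon = acceptance times efficiency,
% L = integrated luminosity, B = relevant branching fractions.
\sigma\left(H_b,\ p_T > 9~\mathrm{GeV}/c,\ |y| < 0.6\right)
  = \frac{N_{\mathrm{sig}}}{\epsilon\,\mathcal{L}\,\mathcal{B}}
```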
Abstract:
We report a measurement of the top quark mass $M_t$ in the dilepton decay channel $t\bar{t}\to b\ell^{+}\nu_{\ell}\bar{b}\ell'^{-}\bar{\nu}_{\ell'}$. Events are selected with a neural network which has been directly optimized for statistical precision in top quark mass using neuroevolution, a technique modeled on biological evolution. The top quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb$^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t = 171.2\pm 2.7(\textrm{stat.})\pm 2.9(\textrm{syst.})~\mathrm{GeV}/c^2$.
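A minimal sketch of the general idea of combining per-event probability densities into a joint likelihood and reading off the parameter at its maximum: here a simple Gaussian density stands in for the convolution of matrix elements with resolution functions, and all numbers are invented for illustration.

```python
# Toy per-event likelihood product for a mass parameter.
# A Gaussian density stands in for the matrix-element-times-resolution
# density of the real analysis; numbers are invented.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
events = rng.normal(171.0, 15.0, size=344)   # pseudo reconstructed masses

def neg_log_joint(m_top, width=15.0):
    """-log of the product of per-event densities p(x_i | m_top)."""
    return -np.sum(norm.logpdf(events, loc=m_top, scale=width))

scan = np.linspace(160.0, 185.0, 501)
nll = np.array([neg_log_joint(m) for m in scan])
best = scan[np.argmin(nll)]
# Approximate 1-sigma interval from the Delta(-lnL) = 0.5 crossing.
onesigma = scan[nll <= nll.min() + 0.5]
print(f"M_t = {best:.1f} "
      f"+{onesigma.max() - best:.1f} -{best - onesigma.min():.1f} GeV/c^2")
```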
Abstract:
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as the matching of metabolites between humans and mice and the matching of sentences between documents in two languages.
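The abstract does not spell out the algorithm, but the one-to-one matching task it describes can be illustrated as a linear assignment problem: score every pair of samples across the two views and choose the permutation maximizing the total score. The sketch below uses a plain correlation score on synthetic data and scipy's Hungarian-algorithm solver; it is an invented illustration, not the method of the thesis.

```python
# Toy one-to-one matching of samples between two "views" of the same
# objects, posed as a linear assignment problem. Data are synthetic.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_samples, n_features = 30, 8
latent = rng.normal(size=(n_samples, n_features))     # shared structure

view_a = latent + 0.1 * rng.normal(size=latent.shape)
view_b = latent + 0.1 * rng.normal(size=latent.shape)
view_b = view_b[rng.permutation(n_samples)]           # unknown sample order

# Cost = negative correlation between row i of view A and row j of view B.
za = (view_a - view_a.mean(1, keepdims=True)) / view_a.std(1, keepdims=True)
zb = (view_b - view_b.mean(1, keepdims=True)) / view_b.std(1, keepdims=True)
cost = -(za @ zb.T) / n_features

row, col = linear_sum_assignment(cost)   # optimal one-to-one matching
print("matched pairs (view A index -> view B index):")
print(list(zip(row.tolist(), col.tolist()))[:5])
```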
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show that the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
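For reference, the classical (liquid-drop) free energy of forming an n-molecule cluster from a supersaturated vapour, which underlies the comparisons made in the thesis, can be written and evaluated as in the sketch below; the argon-like parameter values are rough illustrative numbers, not those used in the work.

```python
# Classical Nucleation Theory (liquid-drop) formation free energy:
#   DeltaG(n) = -n * kB * T * ln(S) + sigma * A1 * n**(2/3)
# where S is the supersaturation, sigma the planar surface tension and
# A1 the surface area of a one-molecule droplet.
# Parameter values are rough, argon-like illustrative numbers only.
import numpy as np

kB = 1.380649e-23   # J/K
T = 80.0            # K
S = 5.0             # supersaturation ratio p / p_eq
sigma = 0.012       # N/m, planar surface tension (illustrative)
v1 = 4.6e-29        # m^3, molecular volume in the liquid (illustrative)

A1 = (36.0 * np.pi) ** (1.0 / 3.0) * v1 ** (2.0 / 3.0)

n = np.arange(1, 400)
deltaG = -n * kB * T * np.log(S) + sigma * A1 * n ** (2.0 / 3.0)

n_star = n[np.argmax(deltaG)]        # critical cluster size
barrier = deltaG.max() / (kB * T)    # barrier height in units of kB*T
print(f"critical size n* ~ {n_star} molecules, barrier ~ {barrier:.1f} kB*T")
```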
Abstract:
Physics teachers are in a key position to form the attitudes and conceptions of future generations toward science and technology, as well as to educate future generations of scientists. Therefore, good teacher education is one of the key areas of a physics department's education program. This dissertation is a contribution to the research-based development of high-quality physics teacher education, designed to meet three central challenges of good teaching. The first challenge relates to the organization of physics content knowledge. The second challenge, connected to the first one, is to understand the role of experiments and models in (re)constructing the content knowledge of physics for purposes of teaching. The third challenge is to provide pre-service physics teachers with opportunities and resources for reflecting on or assessing their knowledge and experience about physics and physics education. This dissertation demonstrates how these challenges can be met when the content knowledge of physics, the relevant epistemological aspects of physics and the pedagogical knowledge of teaching and learning physics are combined. The theoretical part of this dissertation is concerned with designing two didactical reconstructions for purposes of physics teacher education: the didactical reconstruction of processes (DRoP) and the didactical reconstruction of structures (DRoS). This part starts by taking into account the required professional competencies of physics teachers, the pedagogical aspects of teaching and learning, and the benefits of graphical ways of representing knowledge. It then continues with the conceptual and philosophical analysis of physics, especially with the analysis of the role of experiments and models in constructing knowledge. This analysis is condensed in the form of the epistemological reconstruction of knowledge justification. Finally, these two parts are combined in the design and production of the DRoP and DRoS. The DRoP captures the knowledge formation of physical concepts and laws in a concise and simplified form while still retaining authenticity with respect to the processes by which the concepts were formed. The DRoS is used for representing the structural knowledge of physics, the connections between physical concepts, quantities and laws, to varying extents. Both DRoP and DRoS are represented in graphical form by means of flow charts consisting of nodes and directed links connecting the nodes. The empirical part discusses two case studies that show how the three challenges are met through the use of DRoP and DRoS and how the outcomes of teaching solutions based on them are evaluated. The research approach is qualitative; it aims at an in-depth evaluation and understanding of the usefulness of the didactical reconstructions. The data, which were collected from the advanced course for prospective physics teachers during 2001-2006, consisted of DRoP and DRoS flow charts made by the students and of student interviews. The first case study discusses how student teachers used DRoP flow charts to understand the process of forming knowledge about the law of electromagnetic induction. The second case study discusses how student teachers learned to understand the development of physical quantities related to the temperature concept by using DRoS flow charts. In both studies, the attention is focused on the use of DRoP and DRoS to organize knowledge and on the role of experiments and models in this organization process.
The results show that the students' understanding of physics knowledge production improved and their knowledge became more organized and coherent. It is shown that the flow charts and the didactical reconstructions behind them had an important role in achieving these positive learning results. On the basis of the results reported here, the designed learning tools have been adopted as a standard part of the teaching solutions used in the physics teacher education courses in the Department of Physics, University of Helsinki.