970 results for Top Quark Monte Carlo All-Hadronic Decay Mass Fit Cambridge-Aachen CMS LHC CERN
Abstract:
Background: Plotless density estimators use distance measures, rather than counts per unit area (quadrats or plots), to estimate the density of some usually stationary event, e.g. burrow openings or damage to plant stems. These estimators typically use distances between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows, and the distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used by the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes smaller than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
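To make the evaluation protocol concrete, here is a minimal sketch (with a synthetic random pattern rather than the study's mapped field sites) of one basic distance estimator, Pollard's ordered-distance form lambda_hat = (n - 1) / (pi * sum R_i^2), sampled by Monte Carlo as described above and scored by relative root mean square error:

```python
import numpy as np

rng = np.random.default_rng(0)

def ordered_distance_estimate(events, sample_points):
    """Pollard's estimator from point-to-nearest-event distances."""
    d = np.linalg.norm(events[None, :, :] - sample_points[:, None, :], axis=2)
    r = d.min(axis=1)                       # nearest-event distance per point
    n = len(sample_points)
    return (n - 1) / (np.pi * np.sum(r ** 2))

# Synthetic "fully mapped" site: a random (Poisson) pattern of known density.
true_density = 200.0                        # events per unit area
events = rng.random((rng.poisson(true_density), 2))   # unit-square site

# Monte Carlo sampling of the site: repeat the field protocol many times
# and measure the relative root mean square error against the truth.
n_points, n_trials = 25, 1000
estimates = np.array([
    ordered_distance_estimate(events, rng.random((n_points, 2)))
    for _ in range(n_trials)
])
rrmse = np.sqrt(np.mean((estimates - true_density) ** 2)) / true_density
print(f"mean estimate {estimates.mean():.1f}, RRMSE {rrmse:.3f}")
```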
Abstract:
The Metropolis algorithm has been generalized to allow the shape and size of the MC cell to vary. A calculation using different potentials illustrates how the generalized method can be used to study crystal structure transformations. A restricted MC integration in the nine-dimensional space of the cell components also leads to the stable structure for the Lennard-Jones potential.
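As an illustration of the idea, the sketch below mixes ordinary particle moves in fractional coordinates with trial changes to the nine components of the cell matrix h, accepted at a fixed external pressure P, for a small Lennard-Jones system. All parameters are illustrative, the minimum-image convention is only adequate for mildly skewed cells, and this is a sketch of the generalized scheme, not the original implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, P = 32, 1.0, 1.0                  # reduced LJ units (assumed)
beta = 1.0 / T
h = 3.5 * np.eye(3)                     # initial cubic cell matrix
s = rng.random((N, 3))                  # fractional particle coordinates

def energy(s, h):
    """Lennard-Jones energy with minimum image in fractional space."""
    ds = s[:, None, :] - s[None, :, :]
    ds -= np.round(ds)
    r2 = np.sum((ds @ h.T) ** 2, axis=2)
    iu = np.triu_indices(N, k=1)
    inv6 = 1.0 / r2[iu] ** 3
    return np.sum(4.0 * (inv6 ** 2 - inv6))

U = energy(s, h)
for step in range(2000):
    if rng.random() < 0.9:              # ordinary particle move
        i = rng.integers(N)
        old = s[i].copy()
        s[i] = (s[i] + 0.05 * (rng.random(3) - 0.5)) % 1.0
        Unew = energy(s, h)
        if Unew <= U or rng.random() < np.exp(-beta * (Unew - U)):
            U = Unew
        else:
            s[i] = old
    else:                               # trial change of all nine cell components
        hnew = h + 0.02 * (rng.random((3, 3)) - 0.5)
        V, Vnew = np.linalg.det(h), np.linalg.det(hnew)
        if Vnew > 0:
            Unew = energy(s, hnew)
            # NPT-style acceptance: exp(-beta*(dU + P*dV) + N*ln(V'/V))
            arg = -beta * (Unew - U + P * (Vnew - V)) + N * np.log(Vnew / V)
            if arg >= 0 or rng.random() < np.exp(arg):
                h, U = hnew, Unew

print("final volume", np.linalg.det(h), "energy per particle", U / N)
```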
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. A significant fraction of all atmospheric particles is now considered to be produced by vapor-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapor is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapor densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band, and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction, and they indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapor density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations, and we show that the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
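For reference, the liquid drop free energy that the cluster simulations are tested against has the classical form W(n) = -n*kT*ln S + sigma*A(n), with A(n) = (36*pi)^(1/3) * v_l^(2/3) * n^(2/3). The sketch below evaluates this CNT barrier and critical cluster size for water-like parameters at 300 K; the numbers are textbook-style assumptions, not values from the thesis:

```python
import numpy as np

kB = 1.380649e-23          # J/K
T = 300.0                  # K
S = 3.0                    # supersaturation (assumed)
sigma = 0.0717             # J/m^2, planar surface tension of water
v_l = 2.99e-29             # m^3, liquid-phase volume per molecule

theta = (36.0 * np.pi) ** (1.0 / 3.0) * v_l ** (2.0 / 3.0) * sigma

def work_of_formation(n):
    """CNT work of forming an n-molecule liquid drop from the vapor."""
    return -n * kB * T * np.log(S) + theta * n ** (2.0 / 3.0)

# Critical cluster: dW/dn = 0  =>  n* = (2*theta / (3*kT*ln S))^3
n_star = (2.0 * theta / (3.0 * kB * T * np.log(S))) ** 3
barrier = work_of_formation(n_star)
print(f"critical size n* = {n_star:.0f} molecules")
print(f"barrier W* = {barrier / (kB * T):.1f} kT")
```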
Abstract:
"We report on a search for the standard-model Higgs boson in pp collisions at s=1.96 TeV using an integrated luminosity of 2.0 fb(-1). We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(-> qq/qq('))H(-> bb) decay for Higgs boson masses of 100-150 GeV/c(2) using data from run II at the Fermilab Tevatron. For m(H)=120 GeV/c(2), we exclude cross sections larger than 38 times the standard-model prediction."
Abstract:
We present a measurement of the electric charge of the top quark using $p\bar{p}$ collisions corresponding to an integrated luminosity of 2.7~fb$^{-1}$ at the CDF II detector. We reconstruct $t\bar{t}$ events in the lepton+jets final state and use kinematic information to determine which $b$-jet is associated with the leptonically or hadronically decaying $t$-quark. Soft lepton taggers are used to determine the $b$-jet flavor. Along with the charge of the $W$ boson decay lepton, this information permits the reconstruction of the top quark's electric charge. Out of 45 reconstructed events with $2.4\pm0.8$ expected background events, 29 are reconstructed as $t\bar{t}$ with the standard model $+2/3$ charge, whereas 16 are reconstructed as $t\bar{t}$ with an exotic $-4/3$ charge. This result is consistent with the standard model and excludes the exotic scenario at 95\% confidence level. This is the strongest exclusion of the exotic charge scenario and the first to use soft leptons for this purpose.
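The published exclusion uses dedicated statistical machinery; the sketch below only illustrates the counting logic with a one-sided binomial test: given 45 reconstructed pairs of which 29 are SM-like, how improbable would that be under an exotic-charge hypothesis with an assumed (purely hypothetical) reconstruction purity:

```python
from math import comb

n, k_sm = 45, 29          # reconstructed pairs, SM-like (+2/3) outcomes
p_sm_if_exotic = 0.4      # hypothetical prob. of an SM-like outcome if exotic

# One-sided binomial p-value: P(K >= 29) under the exotic hypothesis
p_value = sum(
    comb(n, k) * p_sm_if_exotic ** k * (1 - p_sm_if_exotic) ** (n - k)
    for k in range(k_sm, n + 1)
)
print(f"p-value under exotic hypothesis: {p_value:.4f}")
print("excluded at 95% CL" if p_value < 0.05 else "not excluded")
```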
Abstract:
We present a measurement of the top-quark width using $t\bar{t}$ events produced in $p\bar{p}$ collisions at Fermilab's Tevatron collider and collected by the CDF II detector. In the mode where the top quark decays to a $W$ boson and a bottom quark, we select events in which one $W$ decays leptonically and the other hadronically (lepton+jets channel). From a data sample corresponding to 4.3~fb$^{-1}$ of integrated luminosity, we identify 756 candidate events. The top-quark mass and the mass of the hadronically decaying $W$ boson are reconstructed for each event and compared with templates of different top-quark widths ($\Gamma_t$) and deviations from the nominal jet energy scale ($\Delta_{JES}$) to perform a simultaneous fit for both parameters, where $\Delta_{JES}$ is used for the {\it in situ} calibration of the jet energy scale. By applying a Feldman-Cousins approach, we establish an upper limit at 95\% confidence level (CL) of $\Gamma_t$
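A schematic of the template method (not the CDF implementation) is sketched below: build reconstructed-mass templates for a range of assumed widths, draw 756 pseudo-data events, and scan a binned Poisson likelihood. The Breit-Wigner-plus-Gaussian mass model, the 15 GeV resolution, and the omission of the simultaneous $\Delta_{JES}$ fit are all simplifying assumptions; with so few events and this much smearing the width is only weakly constrained, consistent with the abstract quoting an upper limit rather than a central value:

```python
import numpy as np

rng = np.random.default_rng(2)
mt, smear = 172.5, 15.0                 # GeV: pole mass, resolution (assumed)
bins = np.linspace(100.0, 250.0, 31)

def sample_masses(width, n):
    """Breit-Wigner line shape with Gaussian detector smearing."""
    m = mt + 0.5 * width * np.tan(np.pi * (rng.random(n) - 0.5))
    return m + rng.normal(0.0, smear, n)

def template(width, n=200_000):
    h, _ = np.histogram(sample_masses(width, n), bins=bins)
    return h / h.sum()

widths = np.arange(1.0, 15.1, 0.5)
templates = {w: template(w) for w in widths}

# Pseudo-data: 756 events, as in the abstract, generated at Gamma_t = 5 GeV.
data, _ = np.histogram(sample_masses(5.0, 756), bins=bins)

def nll(pred, obs):
    """Binned Poisson negative log-likelihood (constants dropped)."""
    mu = obs.sum() * np.clip(pred, 1e-12, None)
    return np.sum(mu - obs * np.log(mu))

scan = np.array([nll(templates[w], data) for w in widths])
print(f"best-fit width ~ {widths[scan.argmin()]:.1f} GeV")
```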
Abstract:
We present a search for the lightest supersymmetric partner of the top quark in proton-antiproton collisions at a center-of-mass energy √s = 1.96 TeV. This search was conducted within the framework of the R-parity-conserving minimal supersymmetric extension of the standard model, assuming the stop decays dominantly to a lepton, a sneutrino, and a bottom quark. We searched for events with two oppositely charged leptons, at least one jet, and missing transverse energy in a data sample corresponding to an integrated luminosity of 1 fb⁻¹ collected by the Collider Detector at Fermilab experiment. No significant evidence of a stop quark signal was found. Exclusion limits at 95% confidence level in the stop quark versus sneutrino mass plane are set. Stop quark masses up to 180 GeV/c² are excluded for sneutrino masses around 45 GeV/c², and sneutrino masses up to 116 GeV/c² are excluded for stop quark masses around 150 GeV/c².
Abstract:
We report the first observation of single top quark production using 3.2 fb⁻¹ of p̄p collision data at √s = 1.96 TeV collected by the Collider Detector at Fermilab. The significance of the observed data is 5.0 standard deviations, and the expected sensitivity for standard model production and decay is in excess of 5.9 standard deviations. Assuming m_t = 175 GeV/c², we measure a cross section of 2.3 +0.6/-0.5 (stat+syst) pb, extract the CKM matrix element value |V_tb| = 0.91 ± 0.11 (stat+syst) ± 0.07 (theory), and set the limit |V_tb| > 0.71 at the 95% C.L.
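A back-of-envelope check of the |V_tb| extraction: single top production scales as |V_tb|², so |V_tb| ≈ sqrt(σ_measured / σ_SM), where σ_SM is the prediction for |V_tb| = 1. The SM cross section below is an illustrative assumption, not the value used in the paper:

```python
import math

sigma_meas = 2.3        # pb, measured cross section (this abstract)
sigma_sm = 2.9          # pb, assumed SM prediction at m_t = 175 GeV/c^2

v_tb = math.sqrt(sigma_meas / sigma_sm)
print(f"|V_tb| ~ {v_tb:.2f}")   # ~0.9, consistent with the quoted 0.91
```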
Abstract:
We report on a search for the production of the Higgs boson decaying to two bottom quarks accompanied by two additional quarks. The data sample used corresponds to an integrated luminosity of approximately 4 fb⁻¹ of pp̅ collisions at √s = 1.96 TeV recorded by the CDF II experiment. This search includes twice the integrated luminosity of the previously published result, uses analysis techniques to distinguish jets originating from light-flavor quarks from those arising from gluon radiation, and adds sensitivity to a Higgs boson produced by vector-boson fusion. We find no evidence of the Higgs boson and place limits on the Higgs boson production cross section for Higgs boson masses between 100 GeV/c² and 150 GeV/c² at the 95% confidence level. For a Higgs boson mass of 120 GeV/c², the observed (expected) limit is 10.5 (20.0) times the predicted standard-model cross section.
Abstract:
The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, this is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results.
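As a concrete instance, the sketch below estimates one such time-dependent quantity, the mean of the process at a fixed time t, with a Robbins-Monro-type recursion driven only by sampled system runs; with step size a_k = 1/k it reduces to the sample-average estimator mentioned above as a special case. The example system (exponential population decay with a random rate) and the parameter distribution are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
t, x0 = 1.0, 1.0

def sample_trajectory_at(t):
    """One run of dx/dt = -a*x; the estimator never sees a's distribution."""
    a = rng.uniform(0.5, 1.5)            # hidden parameter model
    return x0 * np.exp(-a * t)

theta = 0.0
for k in range(1, 100_001):
    theta += (sample_trajectory_at(t) - theta) / k   # a_k = 1/k step

# Exact mean for a ~ U(0.5, 1.5): E[exp(-a*t)] = (e^{-0.5t} - e^{-1.5t}) / t
exact = (np.exp(-0.5 * t) - np.exp(-1.5 * t)) / t
print(f"SA estimate {theta:.4f}, exact {exact:.4f}")
```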
Monte Carlo simulation of network formation based on structural fragments in epoxy-anhydride systems
Abstract:
A method combining the Monte Carlo technique and the simple fragment approach has been developed for simulating network formation in amine-catalysed epoxy-anhydride systems. The method affords a detailed insight into the nature and composition of the network, showing the distribution of various fragments. It has been used to characterize the network formation in the reaction of the diglycidyl ester of isophthalic acid with hexahydrophthalic anhydride, catalysed by benzyldimethylamine. Pre-gel properties such as number and weight distributions and average molecular weights have been calculated as a function of epoxy conversion, leading to a prediction of the gel-point conversion. Analysis of the simulated network further yields other characteristic properties such as concentration of crosslink points, distribution and concentration of elastically active chains, average molecular weight between crosslinks, sol content and mass fraction of pendent chains. A comparison has been made of the properties obtained through simulation with those predicted by the fragment approach alone, which, however, gives only average properties. The Monte Carlo simulation results clearly show that loops and other cyclic structures occur in the gel. This may account for the differences observed between the results of the simulation and the fragment model in the post-gel phase.
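The paper's fragment-based scheme is specific to the epoxy-anhydride chemistry; the sketch below only illustrates the generic Monte Carlo network build-up it relies on: monomers carrying f reactive groups are randomly paired, clusters are tracked with a union-find structure, and the growth of the largest cluster exposes the gel point (at conversion p = 1/(f-1) = 0.5 for f = 3 in the classical Flory picture). Intramolecular pairings are allowed, so loops of the kind noted above arise naturally:

```python
import numpy as np

rng = np.random.default_rng(4)
n_monomers, f = 20_000, 3
groups = np.repeat(np.arange(n_monomers), f)   # owner monomer of each group
rng.shuffle(groups)                            # uniform random pairing order

parent = np.arange(n_monomers)                 # union-find forest
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]          # path halving
        i = parent[i]
    return i

n_pairs = len(groups) // 2
checkpoints = {int(c * n_pairs) for c in (0.4, 0.5, 0.6)}
for k in range(n_pairs):
    a, b = find(groups[2 * k]), find(groups[2 * k + 1])
    if a != b:                                 # a == b would close a loop
        parent[a] = b
    if (k + 1) in checkpoints:
        sizes = np.bincount([find(i) for i in range(n_monomers)])
        print(f"conversion p = {(k + 1) / n_pairs:.1f}: "
              f"largest cluster fraction {sizes.max() / n_monomers:.3f}")
```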
Abstract:
A Monte Carlo filter, based on the idea of averaging over characteristics and fashioned after a particle-based time-discretized approximation to the Kushner-Stratonovich (KS) nonlinear filtering equation, is proposed. A key aspect of the new filter is the gain-like additive update, designed to approximate the innovation integral in the KS equation and implemented through an annealing-type iterative procedure, which is aimed at rendering the innovation (observation-prediction mismatch) for a given time-step to a zero-mean Brownian increment corresponding to the measurement noise. This may be contrasted with the weight-based multiplicative updates in most particle filters that are known to precipitate the numerical problem of weight collapse within a finite-ensemble setting. A study to estimate the a priori error bounds in the proposed scheme is undertaken. The numerical evidence, presently gathered from the assessed performance of the proposed and a few other competing filters on a class of nonlinear dynamic system identification and target tracking problems, is suggestive of the remarkably improved convergence and accuracy of the new filter.
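The full filter uses an annealed iterative update; the sketch below shows only the underlying contrast it exploits: each ensemble member is corrected additively by a gain times the innovation (in the style of an ensemble Kalman update with perturbed observations) rather than by multiplicative weights. The scalar nonlinear system and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_ens = 50, 500
q, r = 0.1, 0.5                         # process / measurement noise std

def f(x):                               # nonlinear state transition
    return 0.5 * x + 2.0 * np.sin(x)

x_true, y = np.zeros(T), np.zeros(T)    # simulate truth and measurements
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + q * rng.normal()
    y[t] = x_true[t] + r * rng.normal()

ens = rng.normal(0.0, 1.0, n_ens)       # initial ensemble
est = np.zeros(T)
for t in range(1, T):
    ens = f(ens) + q * rng.normal(size=n_ens)        # prediction step
    K = ens.var() / (ens.var() + r ** 2)             # gain for h(x) = x
    innov = y[t] + r * rng.normal(size=n_ens) - ens  # perturbed-obs innovation
    ens = ens + K * innov                            # additive, weight-free update
    est[t] = ens.mean()

rmse = np.sqrt(np.mean((est[1:] - x_true[1:]) ** 2))
print(f"filter RMSE {rmse:.3f} vs measurement noise {r}")
```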
Abstract:
Methane and ethane are the simplest hydrocarbon molecules that can form clathrate hydrates. Previous studies have reported methods for calculating the three-phase equilibrium using Monte Carlo simulation methods in systems with a single component in the gas phase. Here we extend those methods to a binary gas mixture of methane and ethane. The methane-ethane system is an interesting one in that the pure components each form the sI clathrate hydrate, whereas a binary mixture of the two can form the sII clathrate. The phase equilibria computed from Monte Carlo simulations show good agreement with experimental data and are also able to predict the sI-sII structural transition in the clathrate hydrate. This is attributed to the quality of the TIP4P/Ice and TraPPE models used in the simulations.
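The chemical potentials themselves come from the Monte Carlo simulations; the sketch below illustrates only the final step of such a calculation: at fixed pressure, locate the three-phase temperature as the root of the chemical potential difference between the hydrate and the coexisting fluid phases. The Δμ(T) values are stand-in numbers, not simulation output:

```python
import numpy as np

# Assumed simulation output: dmu(T) = mu_hydrate - mu_fluid, in kJ/mol
T_grid = np.array([270.0, 275.0, 280.0, 285.0, 290.0])
dmu = np.array([-0.80, -0.42, -0.05, 0.35, 0.78])

def dmu_interp(T):
    """Linear interpolation of the tabulated chemical potential difference."""
    return np.interp(T, T_grid, dmu)

lo, hi = T_grid[0], T_grid[-1]          # bracket: dmu changes sign inside
for _ in range(60):                     # bisection for dmu(T*) = 0
    mid = 0.5 * (lo + hi)
    if dmu_interp(lo) * dmu_interp(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(f"estimated three-phase coexistence temperature: {0.5 * (lo + hi):.2f} K")
```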