980 results for "template overlap method top ATLAS"
Abstract:
An updated search is performed for gluino, top squark, or bottom squark R-hadrons that have come to rest within the ATLAS calorimeter and decay at some later time to hadronic jets and a neutralino, using 5.0 and 22.9 fb⁻¹ of pp collisions at 7 and 8 TeV, respectively. Candidate decay events are triggered in selected empty bunch crossings of the LHC in order to remove pp collision backgrounds. Selections based on jet shape and muon system activity are applied to discriminate signal events from cosmic ray and beam-halo muon backgrounds. In the absence of an excess of events, improved limits are set on gluino, stop, and sbottom masses for different decays, lifetimes, and neutralino masses. With a neutralino of mass 100 GeV, the analysis excludes gluinos with mass below 832 GeV (with an expected lower limit of 731 GeV), for a gluino lifetime between 10 μs and 1000 s in the generic R-hadron model with equal branching ratios for decays to qq̄χ̃⁰ and gχ̃⁰. Under the same assumptions for the neutralino mass and squark lifetime, top squarks and bottom squarks in the Regge R-hadron model are excluded with masses below 379 and 344 GeV, respectively.
Abstract:
A measurement of the production processes of the recently discovered Higgs boson is performed in the two-photon final state using 4.5 fb⁻¹ of proton-proton collision data at √s = 7 TeV and 20.3 fb⁻¹ at √s = 8 TeV collected by the ATLAS detector at the Large Hadron Collider. The number of observed Higgs boson decays to diphotons divided by the corresponding Standard Model prediction, called the signal strength, is found to be μ = 1.17 ± 0.27 at the value of the Higgs boson mass measured by ATLAS, m_H = 125.4 GeV. The analysis is optimized to measure the signal strengths for individual Higgs boson production processes at this value of m_H. They are found to be μ_ggF = 1.32 ± 0.38, μ_VBF = 0.8 ± 0.7, μ_WH = 1.0 ± 1.6, μ_ZH = 0.1 +3.7 −0.1, and μ_ttH = 1.6 +2.7 −1.8, for Higgs boson production through gluon fusion, vector-boson fusion, and in association with a W or Z boson or a top-quark pair, respectively. Compared with the previously published ATLAS analysis, the results reported here also benefit from a new energy calibration procedure for photons and the subsequent reduction of the systematic uncertainty on the diphoton mass resolution. No significant deviations from the predictions of the Standard Model are found.
Abstract:
ATLAS measurements of the azimuthal anisotropy in lead–lead collisions at √s_NN = 2.76 TeV are shown using a dataset of approximately 7 μb⁻¹ collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < pT < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, vn, of the charged-particle azimuthal angle distribution for n = 2–4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity and centrality dependence of the vn coefficients are presented. The elliptic flow, v2, is obtained from the two-, four-, six- and eight-particle cumulants, while the higher-order coefficients, v3 and v4, are determined with two- and four-particle cumulants. Flow harmonics vn measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to vn measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and the collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the flow fluctuation measurements.
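The two-particle cumulant mentioned above has a compact Q-vector form: v₂{2}² = ⟨cos 2(φᵢ − φⱼ)⟩ over all distinct pairs, which equals (|Q₂|² − M)/(M(M − 1)) with Q₂ = Σᵢ exp(2iφᵢ). Below is a minimal Python sketch, not the ATLAS generating-function implementation; the toy event generator and the injected v2 = 0.1 are invented for illustration.

```python
import numpy as np

def vn_two_particle(phis, n=2):
    """Flow harmonic v_n from the two-particle cumulant:
    v_n{2}^2 = <cos n(phi_i - phi_j)> over all distinct pairs,
    computed via the Q-vector identity |Q_n|^2 = M + sum_{i!=j} e^{in*dphi}."""
    M = len(phis)
    qn = np.sum(np.exp(1j * n * phis))
    c2 = (np.abs(qn) ** 2 - M) / (M * (M - 1))
    return np.sqrt(max(c2, 0.0))  # clip: finite-M noise can push c2 below zero

# toy "event": angles drawn from dN/dphi ∝ 1 + 2 v2 cos(2 phi) by rejection sampling
rng = np.random.default_rng(0)
v2_true = 0.1
phis = []
while len(phis) < 20000:
    phi = rng.uniform(0.0, 2.0 * np.pi)
    if rng.uniform() < (1.0 + 2.0 * v2_true * np.cos(2.0 * phi)) / (1.0 + 2.0 * v2_true):
        phis.append(phi)
phis = np.array(phis)
print(vn_two_particle(phis))  # ≈ 0.1, the injected v2, within statistical error
```

The four-, six- and eight-particle cumulants used in the measurement follow the same Q-vector logic with higher-order moments, which suppresses the non-flow correlations that bias the two-particle estimate.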
Abstract:
This paper presents the performance of the ATLAS muon reconstruction during the LHC run with pp collisions at √s = 7–8 TeV in 2011–2012, focusing mainly on data collected in 2012. Measurements of the reconstruction efficiency and of the momentum scale and resolution, based on large reference samples of J/ψ → μμ, Z → μμ and ϒ → μμ decays, are presented and compared to Monte Carlo simulations. Corrections to the simulation, to be used in physics analysis, are provided. Over most of the covered phase space (muon |η| < 2.7 and 5 ≲ pT ≲ 100 GeV) the efficiency is above 99% and is measured with per-mille precision. The momentum resolution ranges from 1.7% at central rapidity and for transverse momentum pT ≅ 10 GeV, to 4% at large rapidity and pT ≅ 100 GeV. The momentum scale is known with an uncertainty of 0.05% to 0.2% depending on rapidity. A method for the recovery of final state radiation from the muons is also presented.
Abstract:
This paper presents a measurement of the cross-section for high transverse momentum W and Z bosons produced in pp collisions and decaying to all-hadronic final states. The data used in the analysis were recorded by the ATLAS detector at the CERN Large Hadron Collider at a centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 4.6 fb⁻¹. The measurement is performed by reconstructing the boosted W or Z bosons in single jets. The reconstructed jet mass is used to identify the W and Z bosons, and a jet substructure method based on energy cluster information in the jet centre-of-mass frame is used to suppress the large multi-jet background. The cross-section for events with a hadronically decaying W or Z boson, with transverse momentum pT > 320 GeV and pseudorapidity |η| < 1.9, is measured to be σ_{W+Z} = 8.5 ± 1.7 pb and is compared to next-to-leading-order calculations. The selected events are further used to study jet grooming techniques.
Abstract:
Results of a search for supersymmetry via direct production of third-generation squarks are reported, using 20.3 fb⁻¹ of proton-proton collision data at √s = 8 TeV recorded by the ATLAS experiment at the LHC in 2012. Two different analysis strategies based on monojet-like and c-tagged event selections are carried out to optimize the sensitivity for direct top squark-pair production in the decay channel to a charm quark and the lightest neutralino (t̃₁ → c + χ̃₁⁰) across the top squark–neutralino mass parameter space. No excess above the Standard Model background expectation is observed. The results are interpreted in the context of direct pair production of top squarks and presented in terms of exclusion limits in the (m_t̃₁, m_χ̃₁⁰) parameter space. A top squark of mass up to about 240 GeV is excluded at 95% confidence level for arbitrary neutralino masses, within the kinematic boundaries. Top squark masses up to 270 GeV are excluded for a neutralino mass of 200 GeV. In a scenario where the top squark and the lightest neutralino are nearly degenerate in mass, top squark masses up to 260 GeV are excluded. The results from the monojet-like analysis are also interpreted in terms of compressed scenarios for top squark-pair production in the decay channel t̃₁ → b + ff′ + χ̃₁⁰ and sbottom pair production with b̃₁ → b + χ̃₁⁰, leading to a similar exclusion for nearly mass-degenerate third-generation squarks and the lightest neutralino. The results in this paper significantly extend previous results at colliders.
Abstract:
This paper reports the results of a search for strong production of supersymmetric particles in 20.1 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of 8 TeV using the ATLAS detector at the LHC. The search is performed separately in events with either zero or at least one high-pT lepton (electron or muon), large missing transverse momentum, high jet multiplicity and at least three jets identified as originating from the fragmentation of a b-quark. No excess is observed with respect to the Standard Model predictions. The results are interpreted in the context of several supersymmetric models involving gluinos and scalar top and bottom quarks, as well as a mSUGRA/CMSSM model. Gluino masses up to 1340 GeV are excluded, depending on the model, significantly extending the previous ATLAS limits.
Abstract:
A search for squarks and gluinos in final states containing high-pT jets, missing transverse momentum and no electrons or muons is presented. The data were recorded in 2012 by the ATLAS experiment in √s = 8 TeV proton-proton collisions at the Large Hadron Collider, with a total integrated luminosity of 20.3 fb⁻¹. Results are interpreted in a variety of simplified and specific supersymmetry-breaking models assuming that R-parity is conserved and that the lightest neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 1330 GeV for a simplified model incorporating only a gluino and the lightest neutralino. For a simplified model involving the strong production of first- and second-generation squarks, squark masses below 850 GeV (440 GeV) are excluded for a massless lightest neutralino, assuming mass degenerate (single light-flavour) squarks. In mSUGRA/CMSSM models with tan β = 30, A0 = −2m0 and μ > 0, squarks and gluinos of equal mass are excluded for masses below 1700 GeV. Additional limits are set for non-universal Higgs mass models with gaugino mediation and for simplified models involving the pair production of gluinos, each decaying to a top squark and a top quark, with the top squark decaying to a charm quark and a neutralino. These limits extend the region of supersymmetric parameter space excluded by previous searches with the ATLAS detector.
Abstract:
The integrated elliptic flow of charged particles produced in Pb+Pb collisions at √s_NN = 2.76 TeV has been measured with the ATLAS detector using data collected at the Large Hadron Collider. The anisotropy parameter, v2, was measured in the pseudorapidity range |η| ≤ 2.5 with the event-plane method. In order to include tracks with very low transverse momentum pT, thus reducing the uncertainty in v2 integrated over pT, a 1 μb⁻¹ data sample recorded without a magnetic field in the tracking detectors is used. The centrality dependence of the integrated v2 is compared to other measurements obtained with higher pT thresholds. The integrated elliptic flow decreases weakly with |η|. The integrated v2 transformed to the rest frame of one of the colliding nuclei is compared to the lower-energy RHIC data.
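For contrast with the cumulant-based measurements above, the event-plane method estimates the event plane angle Ψ₂ from the event's Q-vector and then averages cos 2(φ − Ψ₂). The sketch below is a bare illustration only: a real analysis estimates Ψ₂ from subevents to avoid self-correlation and divides by an event-plane resolution factor, both of which are omitted here.

```python
import numpy as np

def event_plane_v2(phis, n=2):
    """Observed v_n via the event-plane method:
    Psi_n = (1/n) atan2(<sin n*phi>, <cos n*phi>), v_n_obs = <cos n(phi - Psi_n)>.
    Omitted (and required in a real analysis): subevent splitting to remove
    self-correlations, and division by the event-plane resolution factor."""
    psi = np.arctan2(np.mean(np.sin(n * phis)), np.mean(np.cos(n * phis))) / n
    return float(np.mean(np.cos(n * (phis - psi))))

# limiting cases: an isotropic event has no v2; a fully aligned event has v2 = 1
iso = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
print(abs(event_plane_v2(iso)) < 1e-9)   # True
print(event_plane_v2(np.full(50, 1.0)))  # ≈ 1.0
```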
Abstract:
A measurement of event-plane correlations involving two or three event planes of different order is presented as a function of centrality for 7 μb⁻¹ of Pb+Pb collision data at √s_NN = 2.76 TeV, recorded by the ATLAS experiment at the Large Hadron Collider. Fourteen correlators are measured using a standard event-plane method and a scalar-product method, and the latter method is found to give a systematically larger correlation signal. Several different trends in the centrality dependence of these correlators are observed. These trends are not reproduced by predictions based on the Glauber model, which includes only the correlations from the collision geometry in the initial state. Calculations that include the final-state collective dynamics are able to describe qualitatively, and in some cases also quantitatively, the centrality dependence of the measured correlators. These observations suggest that both the fluctuations in the initial geometry and the nonlinear mixing between different harmonics in the final state are important for creating these correlations in momentum space.
Abstract:
Double-differential dijet cross-sections measured in pp collisions at the LHC with a 7 TeV centre-of-mass energy are presented as functions of dijet mass and half the rapidity separation of the two highest-pT jets. These measurements are obtained using data corresponding to an integrated luminosity of 4.5 fb⁻¹, recorded by the ATLAS detector in 2011. The data are corrected for detector effects so that cross-sections are presented at the particle level. Cross-sections are measured up to 5 TeV dijet mass using jets reconstructed with the anti-kt algorithm for values of the jet radius parameter of 0.4 and 0.6. The cross-sections are compared with next-to-leading-order perturbative QCD calculations by NLOJet++ corrected to account for non-perturbative effects. Comparisons with POWHEG predictions, using a next-to-leading-order matrix element calculation interfaced to a parton-shower Monte Carlo simulation, are also shown. Electroweak effects are accounted for in both cases. The quantitative comparison of data and theoretical predictions obtained using various parameterizations of the parton distribution functions is performed using a frequentist method. In general, good agreement with data is observed for the NLOJet++ theoretical predictions when using the CT10, NNPDF2.1 and MSTW 2008 PDF sets. Disagreement is observed when using the ABM11 and HERAPDF1.5 PDF sets for some ranges of dijet mass and half the rapidity separation. An example setting a lower limit on the compositeness scale for a model of contact interactions is presented, showing that the unfolded results can be used to constrain contributions to dijet production beyond that predicted by the Standard Model.
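The two axes of the double-differential measurement, the dijet invariant mass and half the rapidity separation y* = |y₁ − y₂|/2, are simple functions of the two leading jets' four-momenta. A minimal sketch follows; the (E, px, py, pz) example values are invented.

```python
import math

def rapidity(E, pz):
    """Rapidity y = (1/2) ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def dijet_observables(jet1, jet2):
    """Dijet invariant mass m_jj and half the rapidity separation
    y* = |y1 - y2| / 2, from jets given as (E, px, py, pz) in GeV."""
    E, px, py, pz = (a + b for a, b in zip(jet1, jet2))
    mjj = math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))
    ystar = 0.5 * abs(rapidity(jet1[0], jet1[3]) - rapidity(jet2[0], jet2[3]))
    return mjj, ystar

# two massless example jets with opposite longitudinal momenta
mjj, ystar = dijet_observables((5.0, 3.0, 0.0, 4.0), (5.0, 3.0, 0.0, -4.0))
print(mjj, ystar)  # 8.0, ln(3) ≈ 1.0986
```

Binning in y* rather than the individual jet rapidities is conventional because, for massless 2 → 2 scattering, y* is directly related to the partonic scattering angle (cos θ* = tanh y*).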
Abstract:
Standard stereotaxic reference systems play a key role in human brain studies. Stereotaxic coordinate systems have also been developed for experimental animals including non-human primates, dogs, and rodents. However, they are lacking for other species relevant in experimental neuroscience, including sheep. Here, we present a spatial, unbiased ovine brain template with tissue probability maps (TPM) that offer a detailed stereotaxic reference frame for anatomical features and localization of brain areas, thereby enabling inter-individual and cross-study comparability. Three-dimensional data sets from healthy adult Merino sheep (Ovis orientalis aries, 12 ewes and 26 neutered rams) were acquired on a 1.5 T Philips MRI scanner using a T1w sequence. Data were averaged by linear and non-linear registration algorithms. Moreover, animals were subjected to detailed brain volume analysis, including examinations with respect to body weight (BW), age, and sex. The created T1w brain template provides an appropriate population-averaged ovine brain anatomy in a spatial standard coordinate system. Additionally, TPM for gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) classification enabled automatic prior-based tissue segmentation using statistical parametric mapping (SPM). Overall, a positive correlation between GM volume and BW explained about 15% of the GM variance, while WM volume correlated positively with age. No absolute tissue volume differences were detected; however, ewes showed significantly more GM per body weight than neutered rams. The created framework, including the spatial brain template and TPM, represents a useful tool for unbiased automatic image preprocessing and morphological characterization in sheep. The reported results may therefore serve as a starting point for further experimental and/or translational research aiming at in vivo analysis in this species.
Abstract:
Image-based modeling is a popular approach to perform patient-specific biomechanical simulations. Accurate modeling is critical for orthopedic applications such as evaluating implant designs and surgical planning. It has been shown that bone strength can be estimated from the bone mineral density (BMD) and trabecular bone architecture. However, these findings cannot be directly and fully transferred to patient-specific modeling, since only BMD can be derived from clinical CT. Therefore, the objective of this study was to propose a method to predict the trabecular bone structure using a µCT atlas and an image registration technique. The approach was evaluated on femurs and patellae under physiological loading. The displacement and ultimate force for femurs loaded in stance position were predicted with errors of 2.5% and 3.7%, respectively, while predictions obtained with an isotropic material resulted in errors of 7.3% and 6.9%. Similar results were obtained for the patella, where the strain predicted using the registration approach resulted in an improved mean squared error compared to the isotropic model. We conclude that the registration of anisotropic information from a single template bone enables more accurate patient-specific simulations from clinical image datasets than an isotropic model.
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences of genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and yields false-positive results. Although this problem can be dealt with effectively through several approaches, such as Bonferroni correction, permutation testing, and false discovery rates, patterns of joint effects from several genes, each with a weak effect, may not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method, and then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis (LDA) in terms of classification performance. Finally, we performed a chi-square test to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of small subsets, with one SNP, two SNPs, or three-SNP subsets based on the best 100 composite 2-SNPs, can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, due to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

From our results, the best test probability-HMSS for predicting CVD, stroke, CAD, and psoriasis through sIB is 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918, and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701, and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through the chi-square test shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. The WTCCC study results detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07. Although our classification methods can achieve high accuracy in this study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which is currently available for our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
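The HMSS criterion referred to above is simply the harmonic mean of the two per-class rates. A minimal sketch follows; the confusion-matrix counts are invented example values.

```python
def hmss(tp, fn, tn, fp):
    """Harmonic mean of sensitivity and specificity (HMSS):
    2 * sens * spec / (sens + spec). Unlike raw accuracy, it cannot be
    inflated by always predicting the majority class on imbalanced data."""
    sens = tp / (tp + fn)  # true-positive rate among cases
    spec = tn / (tn + fp)  # true-negative rate among controls
    return 0.0 if sens + spec == 0 else 2 * sens * spec / (sens + spec)

# a degenerate classifier that labels everything "control" on a 1:9 dataset:
# accuracy = 0.9 looks good, but HMSS = 0 exposes it
print(hmss(tp=0, fn=10, tn=90, fp=0))          # 0.0
print(round(hmss(tp=8, fn=2, tn=81, fp=9), 3)) # 0.847
```

This is why HMSS can serve as a selection criterion on imbalanced case-control data without resampling or otherwise modifying the original dataset.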
Abstract:
Development of homology modeling methods will remain an area of active research. These methods aim to build increasingly accurate three-dimensional structures of as-yet uncrystallized, therapeutically relevant proteins, e.g., Class A G-protein-coupled receptors (GPCRs). Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of ligand-steered modeling, originally developed by Dr. Claudio Cavasotto, via cross-modeling of the newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize relevant protein binding sites by incorporating protein flexibility. The ligand-steered models reasonably reproduced the binding sites and the co-crystallized native ligand poses of the β2-adrenergic and adenosine A2A receptors using a single template structure. They also performed better than the template structure and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging non-psychotic pain management target, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, inverse agonists and agonists. The method was also applied to improve the virtual screening performance of the β2-adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites and the method's applicability to structure-based drug design projects.