861 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics


Relevance:

100.00%

Publisher:

Abstract:

A measurement of spin correlation in tt̄ production is presented using data collected with the ATLAS detector at the Large Hadron Collider in proton-proton collisions at a center-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 20.3 fb⁻¹. The correlation between the top and antitop quark spins is extracted from dilepton tt̄ events by using the difference in azimuthal angle between the two charged leptons in the laboratory frame. In the helicity basis the measured degree of correlation corresponds to A_helicity = 0.38 ± 0.04, in agreement with the Standard Model prediction. A search is performed for pair production of top squarks with masses close to the top quark mass decaying to predominantly right-handed top quarks and a light neutralino, the lightest supersymmetric particle. Top squarks with masses between the top quark mass and 191 GeV are excluded at the 95% confidence level.
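As an illustration of the observable used in this measurement, the short sketch below computes the azimuthal-angle difference |Δφ| between the two charged leptons and builds its normalised distribution. The lepton angles are randomly generated placeholders, and the actual spin-correlation extraction (a template fit of correlated and uncorrelated predictions to this distribution) is not reproduced here.

```python
import numpy as np

# Placeholder lepton azimuthal angles (radians) standing in for reconstructed dilepton ttbar events.
rng = np.random.default_rng(0)
phi_lep_pos = rng.uniform(-np.pi, np.pi, 10_000)
phi_lep_neg = rng.uniform(-np.pi, np.pi, 10_000)

# |Delta phi| between the two charged leptons in the laboratory frame, wrapped into [0, pi].
dphi = np.abs(phi_lep_pos - phi_lep_neg)
dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)

# The analysis compares the normalised |Delta phi| distribution in data with correlated and
# uncorrelated ttbar templates; here we only build the normalised histogram of the observable.
counts, edges = np.histogram(dphi, bins=20, range=(0.0, np.pi), density=True)
print(counts)
```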

Relevance:

100.00%

Publisher:

Abstract:

A search is performed for top-quark pairs (tt̄) produced together with a photon (γ) with transverse momentum > 20 GeV, using a sample of tt̄ candidate events in final states with jets, missing transverse momentum, and one isolated electron or muon. The dataset used corresponds to an integrated luminosity of 4.59 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. In total, 140 and 222 tt̄γ candidate events are observed in the electron and muon channels, to be compared to the expectation of 79 ± 26 and 120 ± 39 non-tt̄γ background events, respectively. The production of tt̄γ events is observed with a significance of 5.3 standard deviations away from the null hypothesis. The tt̄γ production cross section times the branching ratio (BR) of the single-lepton decay channel is measured in a fiducial kinematic region within the ATLAS acceptance. The measured value is σ_fid(tt̄γ) = 63 ± 8 (stat.) +17/−13 (syst.) ± 1 (lumi.) fb per lepton flavor, in good agreement with the leading-order theoretical calculation normalized to the next-to-leading-order theoretical prediction of 48 ± 10 fb.
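For a rough sense of the quoted yields, the sketch below applies the naive counting approximation Z ≈ (n − b)/√(b + σ_b²) to the numbers in the abstract. This ignores the shape information used in the paper's full statistical treatment and therefore understates the reported 5.3σ significance; it is shown only to make the arithmetic behind a "significance" concrete.

```python
import math

# Event yields quoted in the abstract: (observed, expected background, background uncertainty).
channels = {"electron": (140, 79.0, 26.0), "muon": (222, 120.0, 39.0)}

z_squared_sum = 0.0
for name, (n_obs, b, sigma_b) in channels.items():
    # Naive counting significance with the background uncertainty added in quadrature.
    z = (n_obs - b) / math.sqrt(b + sigma_b ** 2)
    z_squared_sum += z ** 2
    print(f"{name}: Z ~ {z:.1f}")

# Crude quadrature combination of the two channels (much weaker than the paper's template fit).
print(f"combined: Z ~ {math.sqrt(z_squared_sum):.1f}")
```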

Relevance:

100.00%

Publisher:

Abstract:

A search is performed for Higgs bosons produced in association with top quarks using the diphoton decay mode of the Higgs boson. Selection requirements are optimized separately for leptonic and fully hadronic final states from the top quark decays. The dataset used corresponds to an integrated luminosity of 4.5 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 7 TeV and 20.3 fb⁻¹ at 8 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess over the background prediction is observed, and upper limits are set on the tt̄H production cross section. The observed exclusion upper limit at 95% confidence level is 6.7 times the predicted Standard Model cross section value. In addition, limits are set on the strength of the Yukawa coupling between the top quark and the Higgs boson, taking into account the dependence of the tt̄H and tH cross sections as well as the H → γγ branching fraction on the Yukawa coupling. Lower and upper limits at 95% confidence level are set at −1.3 and +8.0 times the Yukawa coupling strength in the Standard Model.

Relevance:

100.00%

Publisher:

Abstract:

A search for new charged massive gauge bosons, called W′, is performed with the ATLAS detector at the LHC, in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV, using a dataset corresponding to an integrated luminosity of 20.3 fb⁻¹. This analysis searches for W′ bosons in the W′ → tb̄ decay channel in final states with electrons or muons, using a multivariate method based on boosted decision trees. The search covers masses between 0.5 and 3.0 TeV, for right-handed or left-handed W′ bosons. No significant deviation from the Standard Model expectation is observed, and limits are set on the W′ → tb̄ cross-section times branching ratio and on the W′-boson effective couplings as a function of the W′-boson mass using the CLs procedure. For a left-handed (right-handed) W′ boson, masses below 1.70 (1.92) TeV are excluded at 95% confidence level.
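Since the limits are set with the CLs procedure, a minimal single-bin counting illustration of CLs may be helpful. The event counts below are invented for the example and have nothing to do with the actual W′ analysis, which uses the full boosted-decision-tree output distribution and systematic uncertainties.

```python
import numpy as np
from scipy.stats import poisson

def cls_counting(n_obs: int, b: float, s: float) -> float:
    """CLs for a one-bin counting experiment without systematics:
    CL_{s+b} = P(n <= n_obs | s+b), CL_b = P(n <= n_obs | b), CLs = CL_{s+b} / CL_b."""
    return poisson.cdf(n_obs, s + b) / poisson.cdf(n_obs, b)

# Invented example: 3 events observed with 2.5 expected background.
n_obs, background = 3, 2.5
for signal in np.linspace(0.5, 15.0, 30):
    if cls_counting(n_obs, background, signal) < 0.05:
        print(f"signal yields above ~{signal:.1f} events are excluded at 95% CL")
        break
```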

Relevance:

100.00%

Publisher:

Abstract:

A search for the Standard Model Higgs boson produced in association with a pair of top quarks, tt̄H, is presented. The analysis uses 20.3 fb⁻¹ of pp collision data at √s = 8 TeV, collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the H → bb̄ decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by tt̄+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve discrimination of the irreducible tt̄+bb̄ background. No significant excess of events above the background expectation is found, and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured tt̄H signal cross section to the Standard Model expectation is found to be μ = 1.5 ± 1.1, assuming a Higgs boson mass of 125 GeV.
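The sketch below mimics the analysis structure at toy level: events are grouped into jet/b-tag multiplicity categories and a small neural-network discriminant is trained in each. All inputs and labels are randomly generated stand-ins; the real analysis trains on simulated tt̄H and tt̄+jets samples and, in the single-lepton channel, adds matrix-element-method variables to the inputs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_events = 5000

# Toy event variables: jet and b-tag multiplicities plus one continuous discriminating
# variable standing in for a matrix-element-method likelihood ratio.
n_jets = rng.integers(4, 8, n_events)
n_btags = rng.integers(2, 5, n_events)
is_signal = rng.integers(0, 2, n_events)          # toy labels (real analysis: simulated samples)
mem_var = rng.normal(0.0, 1.0, n_events) + 0.8 * is_signal

X = np.column_stack([n_jets, n_btags, mem_var]).astype(float)

# Train one discriminant per (jet multiplicity, b-tag multiplicity) category.
for nj, nb in [(5, 3), (6, 3), (6, 4)]:           # illustrative signal-rich categories
    mask = (n_jets == nj) & (n_btags == nb)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(X[mask], is_signal[mask])
    print(f"({nj} jets, {nb} b-tags): training accuracy {clf.score(X[mask], is_signal[mask]):.2f}")
```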

Relevance:

100.00%

Publisher:

Abstract:

Several methods have been proposed to dynamically provide execution environments for scientific applications that hide the complexity of the underlying distributed and heterogeneous infrastructures. Recently, virtualization has emerged as a promising technology for providing such environments. Virtualization abstracts away the details of the physical hardware and provides virtualized resources to high-level scientific applications, offering a cost-effective and flexible way to use and manage computing resources. Such an abstraction is appealing in Grid computing and Cloud computing for better matching jobs (applications) to computational resources. This work applies the virtualization concept to the Condor dynamic resource management system, using the Condor Virtual (VM) universe to harvest existing virtual computing resources to their maximum utility. It allows computing resources to be provisioned dynamically at run time by users, based on application requirements, instead of statically at design time, thereby laying the basis for efficient use of the available resources.
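As a concrete illustration, a minimal sketch of how a job might be submitted to HTCondor's VM (Virtual) universe is shown below, written in Python for consistency with the other examples. The submit-description values (image name, memory, the HasVM requirement) and the disk-specification format are assumptions chosen for illustration and should be checked against the local HTCondor documentation and pool configuration.

```python
import subprocess
from pathlib import Path

# Hypothetical VM-universe submit description.  Command names such as vm_type, vm_memory and
# vm_disk follow HTCondor's VM universe, but the values and the disk-specification format
# are illustrative assumptions, not a tested site configuration.
submit_description = """\
universe       = vm
vm_type        = kvm
vm_memory      = 2048
vm_networking  = true
vm_disk        = worker-image.qcow2:vda:w
requirements   = (HasVM =?= True)
log            = vm_job.log
queue 1
"""

Path("vm_job.sub").write_text(submit_description)

# Requires a working HTCondor pool with the VM universe enabled on the execute nodes.
subprocess.run(["condor_submit", "vm_job.sub"], check=True)
```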

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is a practically relevant paradigm in computing today. Testing is one of the distinct areas where cloud computing can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts, focusing on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods. The data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable to testing and application development, as well as to other areas such as game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting cloud-based testing. The study also explored quality issues in cloud application development. As a special case, the research included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work and assessing the appropriateness of cloud-based testing for specific testing needs.

Relevance:

100.00%

Publisher:

Abstract:

Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, as multiple parties may be involved in the development process. The Grid provides a platform for coordinated resource sharing and for application development and execution. In this paper, we survey existing technologies in modeling and simulation, focusing on the interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid, and discuss the issues involved in achieving composability.

Relevance:

100.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine; this is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
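To make the REST-style interaction concrete, here is a minimal sketch of a client that starts a remote run and drains its output while the run is in progress. The service URL, resource paths and JSON field names are hypothetical placeholders, not the actual G-Rex API, which is not described in detail in the abstract.

```python
import time
import requests

BASE = "http://cluster.example.org/grex/services/nemo"    # hypothetical service URL

# 1. Create a run instance and upload the prepared input file(s).
with open("namelist", "rb") as f:
    run = requests.post(f"{BASE}/instances", files={"namelist": f}).json()
run_url = run["url"]                                      # hypothetical response field

# 2. While the run is in progress, periodically download (and thereby drain) new output
#    files so they do not accumulate on the remote system, as G-Rex does for the user.
while True:
    status = requests.get(run_url).json()
    for name in status.get("newOutputFiles", []):         # hypothetical field name
        with open(name, "wb") as out:
            out.write(requests.get(f"{run_url}/outputs/{name}").content)
    if status.get("state") == "FINISHED":                 # hypothetical state value
        break
    time.sleep(30)
```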

Relevance:

100.00%

Publisher:

Abstract:

The paper presents how workflow-oriented, single-user Grid portals could be extended to meet the requirements of users with collaborative needs. Through collaborative Grid portals, different research and engineering teams would be able to share knowledge and resources. At the same time, the workflow concept ensures that the shared knowledge and computational capacity are aggregated to achieve the high-level goals of the group. The paper discusses the different issues collaborative support raises for Grid portal environments during the different phases of workflow-oriented development work. While in the design period the most important task of the portal is to provide consistent and fault-tolerant data management, during workflow execution it must operate within the security framework its back-end Grids are built on.

Relevance:

100.00%

Publisher:

Abstract:

Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct methods can take a very long time, since their cost grows with the size of the matrix. The computational complexity of stochastic Monte Carlo methods, by contrast, depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be satisfied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors.
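As an illustration of the underlying idea (not the paper's load-balanced Grid implementation), the sketch below estimates A⁻¹ through the Neumann series A⁻¹ = Σₖ (I − A)ᵏ, sampling the series with independent random walks. Because each row of the inverse is estimated from its own set of chains, the work splits naturally into independent tasks suitable for Grid workers; convergence requires the spectral radius of I − A to be below 1.

```python
import numpy as np

def mc_inverse(A, n_chains=5000, chain_len=20, rng=None):
    """Monte Carlo estimate of A^-1 via the Neumann series sum_k C^k with C = I - A.

    Each row is estimated from `n_chains` independent random walks of length `chain_len`,
    so rows (or blocks of chains) can be distributed across Grid worker nodes.  The cost
    depends on the number and length of the chains rather than on a direct factorisation.
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    C = np.eye(n) - A
    inv = np.zeros((n, n))
    for i in range(n):                        # independent estimate of row i
        acc = np.zeros(n)
        for _ in range(n_chains):
            state, weight = i, 1.0
            acc[state] += weight              # k = 0 term of the series
            for _ in range(chain_len):
                nxt = rng.integers(n)         # uniform transition probability 1/n
                weight *= C[state, nxt] * n   # importance weight keeps the estimate unbiased
                state = nxt
                acc[state] += weight          # contributes to the C^k term
        inv[i] = acc / n_chains
    return inv

# Small test matrix with ||I - A|| well below 1; accuracy improves as 1/sqrt(n_chains).
rng = np.random.default_rng(2)
A = np.eye(5) - 0.1 * rng.random((5, 5))
print(np.max(np.abs(mc_inverse(A, rng=rng) - np.linalg.inv(A))))
```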

Relevance:

100.00%

Publisher:

Abstract:

A quantitative assessment of Cloudsat reflectivities and basic ice cloud properties (cloud base, top, and thickness) is conducted in the present study from both airborne and ground-based observations. Airborne observations allow direct comparisons on a limited number of ocean backscatter and cloud samples, whereas the ground-based observations allow statistical comparisons on much longer time series but with some additional assumptions. Direct comparisons of the ocean backscatter and ice cloud reflectivities measured by an airborne cloud radar and Cloudsat during two field experiments indicate that, on average, Cloudsat measures ocean backscatter 0.4 dB higher and ice cloud reflectivities 1 dB higher than the airborne cloud radar. Five ground-based sites have also been used for a statistical evaluation of the Cloudsat reflectivities and basic cloud properties. From these comparisons, it is found that the weighted-mean difference Z_Cloudsat − Z_Ground ranges from −0.4 to +0.3 dB when a ±1-h time lag around the Cloudsat overpass is considered. Given that the airborne and ground-based radar calibration accuracy is about 1 dB, it is concluded that the reflectivities of the spaceborne, airborne, and ground-based radars agree within the expected calibration uncertainties of the airborne and ground-based radars. This result shows that the Cloudsat radar does achieve the claimed sensitivity of around −29 dBZ. Finally, an evaluation of the tropical “convective ice” profiles measured by Cloudsat has been carried out over the tropical site in Darwin, Australia. It is shown that these profiles can be used statistically down to approximately 9-km height (or 4 km above the melting layer) without attenuation and multiple scattering corrections over Darwin. It is difficult to assess whether this result is applicable to all types of deep convective storms in the tropics. However, this first study suggests that the Cloudsat profiles in convective ice need to be corrected for attenuation by supercooled liquid water and ice aggregates/graupel particles and for multiple scattering prior to their quantitative use.

Relevance:

100.00%

Publisher:

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts are interested in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
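A minimal sketch of the partition/mine/merge pattern discussed in the chapter is shown below, using local processes as stand-ins for Grid or Cloud workers. The example mines item frequencies (a building block of association-rule mining) from a toy transaction dataset.

```python
from collections import Counter
from multiprocessing import Pool

def mine_partition(transactions):
    """Map step: count item occurrences within one data partition."""
    counts = Counter()
    for items in transactions:
        counts.update(set(items))
    return counts

def merge(partial_counts):
    """Reduce step: merge the per-partition counts into a global result."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total

if __name__ == "__main__":
    # Toy transaction data split into partitions; in a Grid or Cloud setting each partition
    # would be mined on a different worker node rather than in a local process.
    partitions = [
        [["milk", "bread"], ["bread", "butter"]],
        [["milk", "butter"], ["milk", "bread", "butter"]],
    ]
    with Pool(processes=2) as pool:
        print(merge(pool.map(mine_partition, partitions)))
```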

Relevance:

100.00%

Publisher:

Abstract:

This article focuses on the characteristics of persistent thin single-layer mixed-phase clouds. We seek to answer two important questions: (i) how does ice continually nucleate and precipitate from these clouds, without the available ice nuclei becoming depleted? (ii) how do the supercooled liquid droplets persist in spite of the net flux of water vapour to the growing ice crystals? These questions are answered quantitatively using in situ and radar observations of a long-lived mixed-phase cloud layer over the Chilbolton Observatory. Doppler radar measurements show that the top 500 m of cloud (the top 250 m of which is mixed-phase, with ice virga beneath) is turbulent and well-mixed, and the liquid water content is adiabatic. This well-mixed layer is bounded above and below by stable layers. This inhibits entrainment of fresh ice nuclei into the cloud layer, yet our in situ and radar observations show that a steady flux of ≈100 m⁻² s⁻¹ ice crystals fell from the cloud over the course of ∼1 day. Comparing this flux to the concentration of conventional ice nuclei expected to be present within the well-mixed layer, we find that these nuclei would be depleted within less than 1 h. We therefore argue that nucleation in these persistent supercooled clouds is strongly time-dependent in nature, with droplets freezing slowly over many hours, significantly longer than the few seconds of residence time in an ice nucleus counter. Once nucleated, the ice crystals are observed to grow primarily by vapour deposition, because of the low liquid water path (21 g m⁻²) yet vapour-rich environment. Evidence for this comes from high differential reflectivity in the radar observations, and in situ imaging of the crystals. The flux of vapour from liquid to ice is quantified from in situ measurements, and we show that this modest flux (3.3 g m⁻² h⁻¹) can be readily offset by slow radiative cooling of the layer to space.
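The ice-nucleus depletion argument can be checked with back-of-the-envelope arithmetic. In the sketch below the crystal flux and layer depth come from the abstract, while the assumed "conventional" ice-nucleus concentration of 0.5 per litre is an illustrative value, not a number from the paper.

```python
# Observed quantities from the abstract.
crystal_flux = 100.0        # m^-2 s^-1, flux of ice crystals falling from the layer
mixed_layer_depth = 500.0   # m, depth of the turbulent well-mixed layer

# Assumed ice-nucleus concentration (illustrative value of 0.5 per litre = 500 m^-3).
in_concentration = 500.0    # m^-3

column_in = in_concentration * mixed_layer_depth       # available nuclei per m^2 of cloud
depletion_time_h = column_in / crystal_flux / 3600.0
print(f"conventional ice nuclei exhausted after ~{depletion_time_h:.1f} h")  # ~0.7 h, i.e. < 1 h
```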