35 results for LHC, CMS, Grid computing, distributed analysis, top physics, Higgs physics
Abstract:
A measurement of the mass difference between top and anti-top quarks is presented. In a 4.7 fb⁻¹ data sample of proton–proton collisions at √s = 7 TeV recorded with the ATLAS detector at the LHC, events consistent with tt̄ production and decay into a single charged-lepton final state are reconstructed. For each event, the mass difference between the top and anti-top quark candidates is calculated. A two-b-tag requirement is used to reduce the background contribution. A maximum likelihood fit to these per-event mass differences yields Δm ≡ m_t − m_t̄ = 0.67 ± 0.61 (stat) ± 0.41 (syst) GeV, consistent with CPT invariance.
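As a rough illustration of the fit strategy described in this abstract, the toy below performs an unbinned maximum-likelihood fit of a single Gaussian to simulated per-event mass differences. The Gaussian resolution model, the generated width, and the use of scipy are assumptions for this sketch only; they do not reproduce the templates or detector model of the ATLAS analysis.

```python
# Toy unbinned maximum-likelihood fit of a per-event mass difference.
# The Gaussian model and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
# Simulated "per-event" mass differences (GeV): true Delta m = 0.7, resolution 25 GeV.
delta_m_events = rng.normal(loc=0.7, scale=25.0, size=10_000)

def negative_log_likelihood(params, data):
    """-log L for a Gaussian with mean Delta m and width sigma."""
    delta_m, sigma = params
    return -np.sum(norm.logpdf(data, loc=delta_m, scale=sigma))

result = minimize(
    negative_log_likelihood,
    x0=[0.0, 20.0],                       # starting values for (Delta m, sigma)
    args=(delta_m_events,),
    bounds=[(-10.0, 10.0), (1.0, 100.0)],
)
fitted_delta_m, fitted_sigma = result.x
print(f"fitted Delta m = {fitted_delta_m:.2f} GeV, resolution = {fitted_sigma:.1f} GeV")
```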
Abstract:
In this paper we present BitWorker, a platform for community distributed computing based on BitTorrent. Any splittable task can be specified by a user in a meta-information task file, so that it can be downloaded and performed by other volunteers. Peers find each other using Distributed Hash Tables, download existing results, and compute the missing ones. Unlike existing distributed computing schemes that rely on centralized coordination points, our scheme is fully distributed and therefore highly robust. We evaluate the performance of BitWorker using mathematical models and real tests, showing processing and robustness gains. BitWorker is available for download and use by the community.
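The split/claim/merge idea behind such a task can be sketched as follows. The JSON descriptor, its field names, and the work function are hypothetical illustrations; the actual BitWorker meta-information format, BitTorrent transport, and DHT lookups are not shown here.

```python
# Minimal sketch of a splittable task for a BitTorrent-style distributed
# computation. Field names and the work unit are hypothetical.
import hashlib
import json

task_descriptor = {
    "task_id": "sha256-of-task-spec",   # hypothetical identifier
    "command": "hash_search",            # what each volunteer should run
    "chunks": [{"id": i, "range": [i * 10_000, (i + 1) * 10_000]} for i in range(4)],
}

def compute_chunk(chunk):
    """Stand-in for the real work unit: hash every integer in the chunk range."""
    lo, hi = chunk["range"]
    return hashlib.sha256(b"".join(str(n).encode() for n in range(lo, hi))).hexdigest()

# A volunteer peer would fetch already-computed results from other peers via
# the DHT and only compute what is still missing; here we compute everything.
results = {chunk["id"]: compute_chunk(chunk) for chunk in task_descriptor["chunks"]}
print(json.dumps(results, indent=2))
```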
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources for ensuring that the performance requirements of all their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of application services bound to virtual machines (VMs). We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which allocates virtual resources while optimizing multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
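A minimal sketch of the kind of SLA-driven horizontal-scaling rule this abstract refers to is given below. The threshold values, metric names, and single-step scaling policy are assumptions for illustration only; they are not the thesis's algorithms, which also cover rule composition and genetic-algorithm-based placement.

```python
# Toy SLA-driven scaling decision for one service tier.
# Thresholds and the +/- 1 VM step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SlaTarget:
    max_response_time_ms: float   # performance guarantee from the SLA
    min_vms: int
    max_vms: int

def scale_decision(current_vms: int, observed_response_ms: float, sla: SlaTarget) -> int:
    """Return the new VM count for one service tier."""
    if observed_response_ms > sla.max_response_time_ms and current_vms < sla.max_vms:
        return current_vms + 1    # scale out: the guarantee is being violated
    if observed_response_ms < 0.5 * sla.max_response_time_ms and current_vms > sla.min_vms:
        return current_vms - 1    # scale in: plenty of headroom
    return current_vms            # keep the current allocation

sla = SlaTarget(max_response_time_ms=200.0, min_vms=2, max_vms=10)
print(scale_decision(current_vms=3, observed_response_ms=260.0, sla=sla))  # -> 4
```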
Abstract:
Very recently, the ATLAS and CMS Collaborations reported diboson and dijet excesses above standard model expectations in the invariant mass region of 1.8–2.0 TeV. Interpreting the diboson excess of events in a model-independent fashion suggests that the vector boson pair production searches are best described by WZ or ZZ topologies, because states decaying into W+W− pairs are strongly constrained by semileptonic searches. Under the assumption of a low string scale, we show that both the diboson and dijet excesses can be steered by an anomalous U(1) field with very small coupling to leptons. The Drell–Yan bounds are then readily avoided because of the leptophobic nature of the massive Z′ gauge boson. The non-negligible decay into ZZ required to accommodate the data is a characteristic footprint of intersecting D-brane models, wherein the Landau–Yang theorem can be evaded by anomaly-induced operators involving a longitudinal Z. The model presented herein can be viewed purely field-theoretically, although it is particularly well motivated from string theory. Should the excesses become statistically significant at the LHC13, the associated Zγ topology would become a signature consistent only with a stringy origin.
Abstract:
The paper revives a theoretical definition of party coherence as composed of two basic elements, cohesion and factionalism, to propose and apply a novel empirical measure based on spin physics. The simultaneous analysis of both components using a single measurement concept is applied to data representing the political beliefs of candidates in the Swiss general elections of 2003 and 2007, and we propose a connection between the coherence of the beliefs party members hold and the assessment of whether a party is at risk of splitting. We also compare our measure with established polarization measures and demonstrate its advantage with respect to multi-dimensional data that lack clear structure. Furthermore, we outline how our analysis supports the distinction between bottom-up and top-down mechanisms of party splitting. In this way, we are able to turn the intuition of coherence into a well-defined quantitative concept that, additionally, offers a methodological basis for comparative research on party coherence. Our work serves as an example of how a complex systems approach offers a new perspective on a long-standing issue in political science.
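To make the spin-physics intuition concrete, the sketch below computes a simple coherence score by treating each candidate's stance on each issue as a spin of ±1 and averaging the pairwise overlaps. This toy score is an assumption for illustration; it is not the measure actually defined in the paper, which also separates cohesion from factionalism.

```python
# Toy spin-style coherence score: average pairwise overlap of issue stances.
# The data and the score definition are illustrative assumptions.
import numpy as np

# Rows: candidates of one party; columns: issues; +1 = agree, -1 = disagree.
stances = np.array([
    [+1, +1, -1, +1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
])

def pairwise_coherence(s: np.ndarray) -> float:
    """Mean normalized overlap over all distinct candidate pairs (1 = identical views)."""
    n, m = s.shape
    overlaps = [s[i] @ s[j] / m for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(overlaps))

print(f"party coherence: {pairwise_coherence(stances):+.2f}")
```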