24 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
How do developers and designers of a new technology make sense of intended users? The critical groundwork for user-centred technology development begins not with actual users' exposure to the technological artefact but much earlier, with designers' and developers' vision of future users. Thus, anticipating intended users is critical to technology uptake. We conceptualise the anticipation of intended users as a form of prospective sensemaking in technology development. Employing a narrative analytical approach and drawing on four key communities in the development of Grid computing, we reconstruct how each community anticipated the intended Grid user. Based on our findings, we conceptualise user anticipation in terms of two key dimensions, namely the intended possibility to inscribe user needs into the technological artefact as well as the intended scope of the application domain. In turn, these dimensions allow us to develop an initial typology of intended user concepts that might provide a key building block towards a generic typology of intended users.
Abstract:
In addition to multi-national Grid infrastructures, several countries operate their own national Grid infrastructures to support science and industry within national borders. These infrastructures have the benefit of better satisfying the needs of local, regional and national user communities. Although Switzerland has strong research groups in several fields of distributed computing, only recently was a national Grid effort kick-started to integrate a truly heterogeneous set of resource providers, middleware pools, and users. In the following article we discuss our efforts to start Grid activities at a national scale to combine several scientific communities and geographical domains. We make a strong case for the need of standards that have to be built on top of existing software systems in order to provide support for a heterogeneous Grid infrastructure.
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges for maintaining optimal resource control. Furthermore, conflicting objectives in management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focusses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of allocating resources, as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications, and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications and then using these relations for building scaling rules that a cloud management system (CMS) can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs, based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
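A multi-objective VM allocation of the kind this abstract describes can be sketched as follows. This is a minimal illustration, not the dissertation's actual algorithm: the host names, the two objectives (projected utilisation and an SLA-violation risk estimate), and the weights are all assumptions made for the example.

```python
# Hypothetical sketch of multi-objective VM placement: each candidate host is
# scored on projected CPU utilisation and an assumed SLA-violation risk, and
# the VM is placed on the host with the lowest weighted score.

def place_vm(hosts, vm_cpu, w_load=0.7, w_risk=0.3):
    """hosts: dict name -> (cpu_used, cpu_capacity, sla_risk in [0, 1])."""
    best, best_score = None, float("inf")
    for name, (used, cap, risk) in hosts.items():
        if used + vm_cpu > cap:           # skip hosts without free capacity
            continue
        load = (used + vm_cpu) / cap      # utilisation after placement
        score = w_load * load + w_risk * risk
        if score < best_score:
            best, best_score = name, score
    return best

hosts = {"h1": (6, 8, 0.1), "h2": (2, 8, 0.4), "h3": (7, 8, 0.0)}
print(place_vm(hosts, vm_cpu=2))   # "h2": lowest combined load/risk score
```

Real infrastructure clouds would add further objectives (memory, network locality, energy), but the weighted-score structure stays the same.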
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users aim at having services accessible anytime and anywhere. Service availability is also related to the end-user device, where one of the major constraints is battery lifetime. Therefore, it is necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. By employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results obtained show the impact of accurate network interface state management and application-level network design on energy consumption. Additionally, the achieved outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
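The core of an empirical energy-measurement methodology like the one this abstract proposes is integrating instantaneous power over the sampling window. A minimal sketch, with hypothetical sample values (the paper's actual instrumentation and scenarios are not reproduced here):

```python
# Illustrative energy estimate from periodic voltage/current samples of a
# network interface: E = sum(V * I * dt) over the measurement window.

def energy_joules(samples, dt):
    """samples: list of (volts, amps) pairs taken every dt seconds."""
    return sum(v * i * dt for v, i in samples)

# Hypothetical trace: 5 V at 0.2 A (1 W) for ten 1-second samples -> 10 J.
print(energy_joules([(5.0, 0.2)] * 10, 1.0))
```

Comparing such totals across access technologies (e.g. WiMAX vs. Wi-Fi) and across interface power states is what makes the state-management impact visible.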
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators for optimizing the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can be used as a basis for creating refined service level objectives, which can then be used for scaling the application and improving the application's overall performance under similar conditions.
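The step from "discovered correlations" to "refined service level objectives" can be illustrated in a few lines. This is a hedged sketch under assumed data: the indicator names, the linear model, and the correlation threshold are all illustrative, not the paper's actual method.

```python
# Sketch: if two performance indicators are strongly correlated, fit a linear
# relation downstream ~= a * upstream + b and back out the upstream threshold
# at which the downstream indicator would breach its SLO.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def refined_slo(upstream, downstream, slo_limit, min_corr=0.9):
    """Return an upstream threshold keeping downstream within slo_limit,
    or None if the indicators are not strongly correlated."""
    if pearson(upstream, downstream) < min_corr:
        return None
    n = len(upstream)
    mx, my = sum(upstream) / n, sum(downstream) / n
    a = sum((x - mx) * (y - my) for x, y in zip(upstream, downstream)) / \
        sum((x - mx) ** 2 for x in upstream)
    b = my - a * mx
    return (slo_limit - b) / a

db_latency = [10, 20, 30, 40]    # ms, upstream service (hypothetical trace)
app_latency = [25, 45, 65, 85]   # ms, downstream service (= 2x + 5 here)
print(refined_slo(db_latency, app_latency, slo_limit=100))  # 47.5 ms
```

The refined SLO (here: keep database latency under 47.5 ms) can then drive scaling of the upstream service before the end-to-end SLA is violated.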
Abstract:
A measurement of jet shapes in top-quark pair events using 1.8 fb⁻¹ of √s = 7 TeV pp collision data recorded by the ATLAS detector at the LHC is presented. Samples of top-quark pair events are selected in both the single-lepton and dilepton final states. The differential and integrated shapes of the jets initiated by bottom quarks from the top-quark decays are compared with those of the jets originated by light quarks from the hadronic W-boson decays W → qq̄′ in the single-lepton channel. The light-quark jets are found to have a narrower distribution of the momentum flow inside the jet area than b-quark jets.
Abstract:
A search is presented for production of a heavy up-type quark (t′) together with its antiparticle, assuming a significant branching ratio for subsequent decay into a W boson and a b quark. The search is based on 4.7 fb⁻¹ of pp collisions at √s = 7 TeV recorded in 2011 with the ATLAS detector at the CERN Large Hadron Collider. Data are analyzed in the lepton + jets final state, characterized by a high-transverse-momentum isolated electron or muon, large missing transverse momentum and at least three jets. The analysis strategy relies on the substantial boost of the W bosons in the t′t̄′ signal when m(t′) ≳ 400 GeV. No significant excess of events above the Standard Model expectation is observed and the result of the search is interpreted in the context of fourth-generation and vector-like quark models. Under the assumption of a branching ratio BR(t′ → Wb) = 1, a fourth-generation t′ quark with mass lower than 656 GeV is excluded at 95% confidence level. In addition, in light of the recent discovery of a new boson of mass around 126 GeV at the LHC, upper limits are derived in the two-dimensional plane of BR(t′ → Wb) versus BR(t′ → Ht), where H is the Standard Model Higgs boson, for vector-like quarks of various masses.
Abstract:
This paper presents a measurement of the top quark pair (tt̄) production charge asymmetry A_C using 4.7 fb⁻¹ of proton-proton collisions at a centre-of-mass energy √s = 7 TeV collected by the ATLAS detector at the LHC. A tt̄-enriched sample of events with a single lepton (electron or muon), missing transverse momentum and at least four high transverse momentum jets, of which at least one is tagged as coming from a b-quark, is selected. A likelihood fit is used to reconstruct the event kinematics. A Bayesian unfolding procedure is employed to estimate A_C at the parton level. The measured value of the tt̄ production charge asymmetry is A_C = 0.006 ± 0.010, where the uncertainty includes both the statistical and the systematic components. Differential A_C measurements as a function of the invariant mass, the rapidity and the transverse momentum of the tt̄ system are also presented. In addition, A_C is measured for a subset of events with large tt̄ velocity, where physics beyond the Standard Model could contribute. All measurements are consistent with the Standard Model predictions.
Abstract:
Cloud Computing has evolved to become an enabler for delivering access to large scale distributed applications running on managed network-connected computing systems. This makes possible hosting Distributed Enterprise Information Systems (dEISs) in cloud environments, while enforcing strict performance and quality of service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications, and are enforced by a cloud management system (CMS) dynamically allocating the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
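The reactive-versus-predictive contrast at the heart of this abstract can be sketched in a few lines. All parameters here are assumptions for illustration (the SLA bound, the autoregressive coefficient, and the utilisation trace are invented, and a real CMS would fit the forecast model from benchmark data as the paper describes):

```python
# Sketch: a reactive policy scales when current utilisation already breaches
# the SLA bound; a simple autoregressive-style policy scales when the
# *forecast* next-step utilisation would breach it, acting one step earlier.

def reactive_scale(util, sla_bound=0.8):
    return util[-1] > sla_bound

def predictive_scale(util, sla_bound=0.8, phi=1.1):
    # AR(1)-style one-step forecast: next ~= phi * current (phi fitted offline)
    return phi * util[-1] > sla_bound

trace = [0.60, 0.68, 0.75]      # rising utilisation, hypothetical trace
print(reactive_scale(trace))     # False: bound not yet breached
print(predictive_scale(trace))   # True: forecast ~0.825 exceeds 0.8
```

Acting on the forecast gives new VMs time to boot before the SLA is actually violated, which is why the predictive variant can guarantee performance invariants that a purely reactive policy misses.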