861 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics
Abstract:
The extreme sensitivity of the mass of the Higgs boson to quantum corrections from high-mass states makes it 'unnaturally' light in the standard model. This 'hierarchy problem' can be solved by symmetries, which predict new particles related, by the symmetry, to standard model fields. The Large Hadron Collider (LHC) can potentially discover these new particles, thereby finding the solution to the hierarchy problem. However, the dynamics of the Higgs boson is also sensitive to this new physics. We show that in many scenarios the Higgs is a complementary and powerful probe of the hierarchy problem at the LHC and future colliders. If the top quark partners carry the color charge of the strong nuclear force, the production of Higgs pairs is affected. This effect is tightly correlated with single Higgs production, implying that only modest enhancements in di-Higgs production occur when the top partners are heavy. However, if the top partners are light, we show that di-Higgs production is a useful complementary probe to single Higgs production. We verify this result in the context of a simplified supersymmetric model. If the top partners do not carry color charge, their direct production is greatly reduced. Nevertheless, we show that such scenarios can be revealed through Higgs dynamics. We find that many color-neutral frameworks leave observable traces in Higgs couplings, which, in some cases, may be the only way to probe these theories at the LHC. Some realizations of the color-neutral framework also lead to exotic decays of the Higgs with displaced vertices. We show that these decays are so striking that the projected sensitivity of these searches at hadron colliders is comparable to that of searches for colored top partners. Taken together, these three case studies show the efficacy of the Higgs as a probe of naturalness.
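To make the sensitivity referred to above concrete, the standard schematic form of the dominant one-loop top-quark correction to the Higgs mass parameter is sketched below (a textbook result quoted only to exhibit the quadratic cutoff dependence; sign and coefficient conventions vary):

```latex
% Dominant one-loop top contribution, cut off at a scale \Lambda:
\delta m_h^2 \;\simeq\; -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2 .
% A symmetry-related top partner of mass m_T cancels the \Lambda^2 term,
% leaving a milder, logarithmic residual sensitivity:
\delta m_h^2 \;\sim\; -\frac{3 y_t^2}{8\pi^2}\, m_T^2 \ln\!\frac{\Lambda^2}{m_T^2} .
```

Light top partners therefore keep the correction small, which is why their mass is the natural target of the probes discussed in the abstract.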
Abstract:
Part 18: Optimization in Collaborative Networks
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main parts. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
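As a rough illustration of the kind of job-to-resource allocation policy described above, here is a minimal sketch (not the thesis implementation; the Site/Job fields and the weighting are assumptions) of a cost/performance-aware site-selection heuristic:

```python
# Minimal sketch of a cost/performance-aware site-selection policy for
# dispatching a parallel job across partner sites (illustrative only).
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_slots: int           # execution slots currently available
    cost_per_cpu_hour: float  # provider price for one slot-hour
    mean_queue_wait_s: float  # observed queueing delay at this site

@dataclass
class Job:
    job_id: str
    slots_needed: int
    est_runtime_s: float

def select_site(job, sites, alpha=0.5):
    """Rank feasible sites by a blend of estimated cost and slowdown;
    alpha trades cost (alpha=0) against performance (alpha=1)."""
    feasible = [s for s in sites if s.free_slots >= job.slots_needed]
    if not feasible:
        return None  # would trigger on-demand provisioning of virtual resources
    def score(s):
        cost = s.cost_per_cpu_hour * job.slots_needed * job.est_runtime_s / 3600
        slowdown = (s.mean_queue_wait_s + job.est_runtime_s) / job.est_runtime_s
        return (1 - alpha) * cost + alpha * slowdown
    return min(feasible, key=score)

sites = [Site("siteA", 64, 0.10, 30.0), Site("siteB", 16, 0.02, 600.0)]
print(select_site(Job("j1", 32, 7200.0), sites, alpha=0.7).name)  # -> siteA
```

Varying alpha reproduces the cost/performance trade-off the thesis studies: cost-biased settings prefer the cheaper partner, performance-biased settings the less congested one.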
Abstract:
The ATLAS experiment, like the other experiments operating at the Large Hadron Collider, produces petabytes of data every year, which must then be archived and processed. The experiments have also set out to make these data accessible worldwide. The Worldwide LHC Computing Grid (WLCG) was designed to meet these needs, combining the computing power and storage capacity of more than 170 sites spread across the world. At most WLCG sites, storage-management technologies have been developed that also handle user requests and data transfers. These systems record their activity in log files, rich in information that helps operators pinpoint a problem when the system malfunctions. In view of the larger data flow expected in the coming years, work is under way to make these sites even more reliable, and one possible way to do so is to develop a system able to analyse the log files autonomously and detect the anomalies that foreshadow a malfunction. To build such a system, the most suitable method for analysing the log files must first be identified. This thesis studies an approach to the problem that uses artificial intelligence to analyse the log files, focusing in particular on the K-means clustering algorithm.
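A minimal sketch of the kind of pipeline studied (assuming scikit-learn; the feature choice, cluster count and anomaly cut-off below are illustrative assumptions, not the thesis code):

```python
# Vectorise storage-system log lines, learn K-means clusters of "normal"
# behaviour, then flag new lines that sit far from every learned centroid.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

normal_lines = [
    "transfer completed src=/data/f1 dst=/pool/f1",
    "transfer completed src=/data/f2 dst=/pool/f2",
    "transfer completed src=/data/f3 dst=/pool/f3",
    "request served user=alice file=/pool/f1",
    "request served user=bob file=/pool/f2",
    "request served user=carol file=/pool/f3",
]
new_lines = [
    "transfer completed src=/data/f7 dst=/pool/f7",             # looks normal
    "error connection timeout while contacting gridftp door",   # anomalous
]

vec = TfidfVectorizer(token_pattern=r"[A-Za-z]+")  # words only, drop ids/paths
X = vec.fit_transform(normal_lines)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

d_train = km.transform(X).min(axis=1)           # distance to nearest centroid
threshold = d_train.mean() + 2 * d_train.std()  # simple anomaly cut-off

for line, d in zip(new_lines, km.transform(vec.transform(new_lines)).min(axis=1)):
    print("ANOMALY" if d > threshold else "ok     ", f"{d:.2f}", line)
```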
Abstract:
We revisit the mechanism for violating the weak cosmic-censorship conjecture (WCCC) by overspinning a nearly-extreme charged black hole. The mechanism consists of an incoming massless neutral scalar particle, with low energy and large angular momentum, tunneling into the hole. We investigate the effect of the large angular momentum of the incoming particle on the background geometry and address recent claims that such a backreaction would invalidate the mechanism. We show that the large angular momentum of the incident particle does not constitute an obvious impediment to the success of the overspinning quantum mechanism, although the induced backreaction turns out to be essential to restoring the validity of the WCCC in the classical regime. These results seem to endorse the view that the "cosmic censor" may be oblivious to processes involving quantum effects.
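The kinematics behind the overspinning mechanism can be summarised by the Kerr-Newman horizon condition (a schematic reminder in geometrized units, not a result specific to this paper):

```latex
% A Kerr-Newman hole of mass M, charge Q, angular momentum J has a horizon iff
M^2 \;\ge\; Q^2 + \frac{J^2}{M^2} .
% Absorbing a neutral particle of energy E and angular momentum L therefore
% destroys the horizon (overspins the hole) whenever
(M+E)^2 \;<\; Q^2 + \frac{(J+L)^2}{(M+E)^2} ,
% which a nearly-extreme charged hole can satisfy for small E and large L.
```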
Abstract:
This work presents a method for predicting resource availability in opportunistic grids by means of use pattern analysis (UPA), a technique based on non-supervised learning methods. The prediction method rests on the assumption that there exist several classes of computational resource use patterns, which can be used to predict resource availability. Trace-driven simulations validate this basic assumption and also provide the parameter settings for accurate learning of resource use patterns. Experiments made with an implementation of the UPA method show the feasibility of its use in the scheduling of grid tasks with very little overhead. The experiments also demonstrate the method's superiority over other predictive and non-predictive methods. An adaptive prediction method is suggested to deal with the lack of training data at initialization. Further adaptive behaviour is motivated by experiments which show that, in some special environments, reliable resource use patterns may not always be detected. Copyright (C) 2009 John Wiley & Sons, Ltd.
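A conceptual sketch of the UPA idea (not the paper's implementation; the synthetic traces, 24-hour discretisation and cluster count are assumptions): learn classes of daily machine-use patterns with K-means, then predict the rest of the current day from the class closest to the hours observed so far.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic traces, one row per machine-day; 1 = machine idle/available.
office = (rng.random((50, 24)) < 0.2).astype(float)
office[:, :8] = 1.0; office[:, 18:] = 1.0        # free outside office hours
lab = (rng.random((50, 24)) < 0.9).astype(float)  # almost always free
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.vstack([office, lab]))

def predict_availability(partial, km):
    """partial: availability observed over the first len(partial) hours."""
    h = len(partial)
    # pick the learned use-pattern class closest to the observation so far
    k = np.argmin(((km.cluster_centers_[:, :h] - partial) ** 2).sum(axis=1))
    return km.cluster_centers_[k, h:]  # expected availability, remaining hours

print(predict_availability(np.ones(6), km).round(2))
```

A scheduler can then place grid tasks on the machines whose predicted availability covers the task's expected runtime, which is how the paper obtains its low scheduling overhead.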
Abstract:
The phase estimation algorithm is so named because it allows an estimation of the eigenvalues associated with an operator. However, it has been proposed that the algorithm can also be used to generate eigenstates. Here we extend this proposal to small quantum systems, identifying the conditions under which the phase-estimation algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion-trap quantum computer. This scheme allows us to illustrate two simple examples, one in which the algorithm effectively generates eigenstates and one in which it does not.
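An idealised numerical sketch of the eigenstate-generation idea (plain numpy, not the paper's ion-trap scheme; the example unitary and its eigenphases are arbitrary assumptions):

```python
# An exact phase measurement on a unitary U returns eigenphase theta_k with
# probability |<e_k|psi>|^2 and leaves the register in eigenstate |e_k> --
# the sense in which phase estimation can *generate* eigenstates.
import numpy as np

rng = np.random.default_rng(1)

# Build a 2x2 unitary with known (assumed, arbitrary) eigenphases.
theta = np.array([0.25, 0.8]) * 2 * np.pi
V, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
U = V @ np.diag(np.exp(1j * theta)) @ V.conj().T

psi = np.array([1.0, 0.0], dtype=complex)  # input state
c = V.conj().T @ psi                       # amplitudes in the eigenbasis

k = rng.choice(2, p=np.abs(c) ** 2)        # one run of an ideal measurement
print("phase:", theta[k] / (2 * np.pi), "-> collapsed onto eigenvector", k)
# With a finite t-qubit phase register the projection is only approximate
# unless theta/(2*pi) is an exact t-bit fraction -- the success/failure
# regimes the abstract's two examples illustrate.
```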
Abstract:
In this paper, we propose a new technique that can identify transaction-local memory (i.e. captured memory) in managed environments, while having a low runtime overhead. We implemented our proposal in a well-known STM framework (Deuce) and tested it in STMBench7 with two different STMs, TL2 and LSA. In both STMs the performance improved significantly (4 times and 2.6 times, respectively). Moreover, running the STAMP benchmarks with our approach shows improvements of 7 times in the best case, for the Vacation application.
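A language-neutral sketch of the captured-memory idea (the actual technique is a Java implementation inside Deuce; this Python stand-in, with invented names, only illustrates why captured objects can skip the STM barriers):

```python
# Objects allocated inside the running transaction cannot be seen by other
# threads until it commits, so accesses to them need no STM bookkeeping.
class Transaction:
    def __init__(self):
        self.captured = set()  # ids of objects allocated in this transaction
        self.undo_log = {}     # (obj id, field) -> (obj, field, old value)

    def allocate(self, obj):
        self.captured.add(id(obj))      # transaction-local: mark as captured
        return obj

    def write(self, obj, field, value):
        if id(obj) in self.captured:
            setattr(obj, field, value)  # captured: in-place write, no logging
            return
        key = (id(obj), field)
        if key not in self.undo_log:    # full write barrier for shared objects
            self.undo_log[key] = (obj, field, getattr(obj, field))
        setattr(obj, field, value)

    def abort(self):
        for obj, field, old in self.undo_log.values():
            setattr(obj, field, old)    # captured objects need no rollback
```

The reported speed-ups come from exactly this filtering: benchmarks such as Vacation allocate many short-lived objects inside transactions, and every skipped barrier saves logging and validation work.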
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives that have been developing scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely: decoupling the enactment engine from the specification of workflow tasks, decentralising the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iteration, whereas some applications execute activities over millions of iterations during long periods of time and require dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, in which the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space, which also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we report experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
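A minimal sketch of data-driven coordination through a shared space, in the spirit of AWARD's AWAs (AWARD tasks are Java classes implementing a generic interface; this Python stand-in uses a plain queue in place of a real tuple space with pattern matching):

```python
import queue
import threading

space = queue.Queue()  # stand-in for the shared tuple space

def activity_a():      # an autonomous activity publishing result tuples
    for i in range(3):
        space.put(("result", "activityA", i))

def activity_b():      # a downstream activity driven purely by arriving data
    for _ in range(3):
        tag, src, value = space.get()  # blocks until a tuple is available
        print(f"activityB consumed {tag}={value} from {src}")

b = threading.Thread(target=activity_b); b.start()
a = threading.Thread(target=activity_a); a.start()
a.join(); b.join()
```

Because neither activity references the other directly, either one can be replaced or reconfigured between iterations without stopping the workflow, which is the property the AWARD model exploits.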
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering
Abstract:
The tt̄ production cross-section dependence on jet multiplicity and jet transverse momentum is reported for proton-proton collisions at a centre-of-mass energy of 7 TeV in the single-lepton channel. The data were collected with the ATLAS detector at the CERN Large Hadron Collider and comprise the full 2011 data sample, corresponding to an integrated luminosity of 4.6 fb⁻¹. Differential cross-sections are presented as a function of jet multiplicity for up to eight jets, using jet transverse momentum thresholds of 25, 40, 60, and 80 GeV, and as a function of jet transverse momentum up to the fifth jet. The results are shown after background subtraction and corrections for all detector effects, within a kinematic range closely matched to the experimental acceptance. Several QCD-based Monte Carlo models are compared with the results. Sensitivity to the parton-shower modelling is found at the higher jet multiplicities, at high transverse momentum of the leading jet, and in the transverse momentum spectrum of the fifth leading jet. The MC@NLO+HERWIG MC is found to predict too few events at higher jet multiplicities.
Abstract:
A project to adapt the GNU Chess program to the 'Condor' grid-computing system. Building on this, a study of search algorithms and their application in distributed environments is carried out. A series of tests on samples from a chess game against GNU Chess itself helps highlight the advantages and drawbacks of each of the proposed algorithms.
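One natural way to distribute a game-tree search over a batch pool such as Condor is root splitting, sketched below on a toy game (illustrative only; the project adapts GNU Chess itself, and a local process pool stands in here for Condor jobs):

```python
# Each root move is scored by an independent worker -- the natural unit of
# distribution when farming a game-tree search out to a batch system.
from multiprocessing import Pool

# Toy game so the sketch runs: a state is a number, moves add 1..3,
# and the side to move prefers larger numbers.
def moves(s):    return [1, 2, 3] if s < 10 else []
def play(s, m):  return s + m
def evaluate(s): return s

def negamax(state, depth):
    if depth == 0 or not moves(state):
        return evaluate(state)
    return max(-negamax(play(state, m), depth - 1) for m in moves(state))

def score_root_move(args):
    state, move, depth = args
    return move, -negamax(play(state, move), depth - 1)

if __name__ == "__main__":
    state, depth = 0, 4
    with Pool(3) as pool:  # one worker per root move
        scores = pool.map(score_root_move,
                          [(state, m, depth) for m in moves(state)])
    print("best root move:", max(scores, key=lambda t: t[1]))
```

Root splitting parallelises cleanly but forgoes alpha-beta cut-offs shared between root branches, which is precisely the kind of trade-off such a study has to weigh.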
Abstract:
The aim of this work is to apply virtualization technology to an everyday scenario for an individual computer user. Specifically, as a working hypothesis, we consider the case of a user who uses a personal desktop PC as a tool for his work. This user, who also works in the IT field, will use that desktop PC to carry out development work, write documentation, etc.
Abstract:
We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To prevent overly short time steps, caused by the small grid spacing at the top and bottom of the model that the Chebyshev operator entails, the mesh is stretched in the vertical direction. A major benefit of the Chebyshev operator is that it allows for an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach, in which the characteristic variables are evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh, and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment with sealed-pore boundary conditions, where we first compare the numerical solution to an analytical solution. We then show the influence of heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and of surface waves in general.
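A compact sketch of the two spectral derivative operators named above (standard constructions, not the authors' code; grid sizes are arbitrary): an FFT-based derivative for the periodic horizontal direction, and a Chebyshev differentiation matrix whose non-uniform Gauss-Lobatto grid clusters points near the boundaries, which is what motivates the vertical mesh stretching.

```python
import numpy as np

def fourier_derivative(f, L):
    """Spectral derivative of a periodic sample f on a domain of length L."""
    n = len(f)
    ik = 2j * np.pi * np.fft.fftfreq(n, d=L / n)  # i*k for each Fourier mode
    return np.fft.ifft(ik * np.fft.fft(f)).real

def cheb(n):
    """Chebyshev differentiation matrix on the n+1 Gauss-Lobatto points."""
    x = np.cos(np.pi * np.arange(n + 1) / n)      # points cluster at x = +-1
    c = np.ones(n + 1); c[0] = c[-1] = 2
    c *= (-1.0) ** np.arange(n + 1)
    D = np.outer(c, 1 / c) / (np.subtract.outer(x, x) + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x          # negative-sum trick on diag

# Sanity check: differentiate sin(x) on both grids.
xf = np.linspace(0, 2 * np.pi, 64, endpoint=False)
print(np.max(np.abs(fourier_derivative(np.sin(xf), 2 * np.pi) - np.cos(xf))))
D, xc = cheb(16)
print(np.max(np.abs(D @ np.sin(xc) - np.cos(xc))))
```

Both errors are at machine-precision level, reflecting the spectral accuracy that makes these operators attractive for wave modeling; the dense Chebyshev matrix also makes boundary rows explicit, which is what enables the characteristics-based treatment of interfaces.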