27 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 100.00%

Abstract:

Scheduling parallel and distributed applications efficiently onto grid environments is a difficult task, and a great variety of scheduling heuristics has been developed to address this issue. A successful grid resource allocation depends, among other things, on the quality of the available information about software artifacts and grid resources. In this article, we propose a semantic approach that integrates the selection of equivalent resources and the selection of equivalent software artifacts to improve the scheduling of resources suitable for a given set of application execution requirements. We also describe a prototype implementation of our approach based on the InteGrade grid middleware, along with experimental results that illustrate its benefits. Copyright (C) 2009 John Wiley & Sons, Ltd.
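
The abstract leaves the matching mechanism abstract; as a loose illustration, the sketch below (all names and data are hypothetical, not the InteGrade API) shows how treating requirement values as equivalence classes widens the set of candidate resources compared to an exact-match lookup:

```python
# Minimal sketch of semantics-aware matching: each requirement may be
# satisfied by any member of its equivalence class, so the scheduler
# considers more candidate resources than an exact match would.
# All names here are illustrative, not the InteGrade API.

EQUIVALENT = {
    "x86_64": {"x86_64", "amd64"},            # equivalent platform labels
    "blas":   {"blas", "openblas", "atlas"},  # interchangeable artifacts
}

def candidates(requirements, resources):
    """Return resources whose attributes satisfy every requirement,
    where a requirement is met by any semantically equivalent value."""
    selected = []
    for res in resources:
        ok = all(
            res["attrs"].get(key) in EQUIVALENT.get(want, {want})
            for key, want in requirements.items()
        )
        if ok:
            selected.append(res)
    return selected

resources = [
    {"name": "node-a", "attrs": {"arch": "amd64", "lib": "openblas"}},
    {"name": "node-b", "attrs": {"arch": "sparc", "lib": "blas"}},
]
print(candidates({"arch": "x86_64", "lib": "blas"}, resources))
# -> node-a matches, because amd64 ~ x86_64 and openblas ~ blas
```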

Relevance: 100.00%

Abstract:

In 2006 the Route load-balancing algorithm was proposed and compared to other techniques for optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where distance is defined by network latency). Route presents good results for large environments, although there are cases where the neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces overall performance. To shorten this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior, as well as computer capacities and loads, to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the resource occupation of tasks. This information is then used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
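
The paper does not spell out the GA encoding here; the following toy sketch (task costs, machine capacities, and genetic operators are all illustrative assumptions) shows the general shape of a genetic algorithm that searches task-to-machine allocations for a low makespan:

```python
import random

# Toy GA in the spirit of RouteGA: a chromosome assigns each task to a
# machine; fitness is the estimated makespan computed from historical
# task costs and machine capacities. The real RouteGA encoding and
# operators are not specified in the abstract; this is only a sketch.

COSTS = [4.0, 2.0, 7.0, 1.0, 3.0]   # historical cost per task (assumed)
CAPACITY = [1.0, 2.0]               # relative speed per machine (assumed)

def makespan(assign):
    load = [0.0] * len(CAPACITY)
    for task, machine in enumerate(assign):
        load[machine] += COSTS[task] / CAPACITY[machine]
    return max(load)

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(len(CAPACITY)) for _ in COSTS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                 # lower makespan is fitter
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(COSTS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation: reassign a task
                k = random.randrange(len(COSTS))
                child[k] = random.randrange(len(CAPACITY))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```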

Relevance: 100.00%

Abstract:

In the present work, we investigate the effects of spatial constraints on the efficiency of task execution in systems underlain by geographical complex networks, where the probability of connection decreases with the distance between nodes. The investigation considers several configurations of the parameters defining network connectivity, and the Barabasi-Albert network model is also considered for comparison. The results show that the effect of connectivity is significant only for shorter tasks, that the locality of connections implied by the spatial constraints reduces efficiency, and that adding edges can improve the efficiency of the execution, although the improvement shrinks as the locality of the connections increases.
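
As a rough illustration of the network model described, the sketch below builds a geographical network in which the connection probability decays exponentially with Euclidean distance; the decay form and the parameters are assumptions for illustration, not the paper's exact construction:

```python
import math
import random

# Nodes are placed in the unit square; each pair is connected with a
# probability that decays with Euclidean distance. The decay rate
# `alpha` controls the locality of the connections.

def geographical_network(n=200, alpha=5.0, p0=0.5, seed=1):
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if rng.random() < p0 * math.exp(-alpha * d):
                edges.append((i, j))
    return pos, edges

for alpha in (1.0, 5.0, 20.0):
    _, edges = geographical_network(alpha=alpha)
    print(f"alpha={alpha:5.1f}  mean degree={2 * len(edges) / 200:.2f}")
# Larger alpha -> stronger locality -> fewer long-range edges.
```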

Relevance: 100.00%

Abstract:

The aim of task scheduling is to minimize the makespan of applications while making the best possible use of shared resources. Applications have requirements that call for customized execution environments; one way to provide such environments is virtualization on demand. This paper presents two schedulers, based on integer linear programming, that schedule virtual machines (VMs) on grid resources and tasks on these VMs. The schedulers differ from previous work by jointly scheduling tasks and VMs and by considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
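
A minimal flavor of such a joint formulation, sketched with the open-source PuLP modelling library (our tooling choice, not necessarily the paper's); the variables, data, and constraints are illustrative, and the paper's bandwidth terms are omitted:

```python
# Joint scheduling sketch: binary variables place each VM on a grid
# resource and each task on a VM, and the makespan T is minimized.
# Requires the PuLP modelling library (pip install pulp).
import pulp

tasks = {"t1": 4, "t2": 2, "t3": 3}   # estimated task durations (assumed)
vms = ["vm1", "vm2"]
hosts = {"h1": 1, "h2": 1}            # VM slots per resource (assumed)

prob = pulp.LpProblem("joint_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("task_on_vm", (tasks, vms), cat="Binary")
y = pulp.LpVariable.dicts("vm_on_host", (vms, hosts), cat="Binary")
T = pulp.LpVariable("makespan", lowBound=0)

prob += T                                              # objective: makespan
for t in tasks:                                        # each task on one VM
    prob += pulp.lpSum(x[t][v] for v in vms) == 1
for v in vms:                                          # each VM on one host
    prob += pulp.lpSum(y[v][h] for h in hosts) == 1
for h, slots in hosts.items():                         # host capacity
    prob += pulp.lpSum(y[v][h] for v in vms) <= slots
for v in vms:                                          # VM load bounds T
    prob += pulp.lpSum(tasks[t] * x[t][v] for t in tasks) <= T

prob.solve()
print("makespan:", pulp.value(T))
for t in tasks:
    for v in vms:
        if pulp.value(x[t][v]) == 1:
            print(t, "->", v)
```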

Relevance: 100.00%

Abstract:

The InteGrade project is a multi-university effort to build a novel grid computing middleware based on the opportunistic use of resources belonging to user workstations. The InteGrade middleware currently enables the execution of sequential, bag-of-tasks, and parallel applications that follow the BSP or MPI programming models. This article presents the lessons learned over the last five years of InteGrade development and describes the solutions achieved concerning support for robust application execution. The contributions cover the related fields of application scheduling, execution management, and fault tolerance. We present our solutions, describing their implementation principles and their evaluation through the analysis of several experimental results. (C) 2010 Elsevier Inc. All rights reserved.

Relevance: 100.00%

Abstract:

We revisit the mechanism for violating the weak cosmic-censorship conjecture (WCCC) by overspinning a nearly-extreme charged black hole. The mechanism consists of an incoming massless neutral scalar particle, with low energy and large angular momentum, tunneling into the hole. We investigate the effect of the large angular momentum of the incoming particle on the background geometry and address recent claims that such a backreaction would invalidate the mechanism. We show that the large angular momentum of the incident particle does not constitute an obvious impediment to the success of the overspinning quantum mechanism, although the induced backreaction turns out to be essential to restoring the validity of the WCCC in the classical regime. These results seem to endorse the view that the "cosmic censor" may be oblivious to processes involving quantum effects.
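
For reference, the standard setup behind such overspinning arguments (a textbook relation, not a result specific to this paper), in geometrized units G = c = 1:

```latex
% A Kerr-Newman black hole (mass M, charge Q, angular momentum J) has a
% horizon only while the extremality bound holds; absorbing a neutral
% particle of energy E and angular momentum L shifts M -> M + E and
% J -> J + L, and an overspinning candidate arises when the final
% parameters violate the bound:
\[
  M^2 \geq Q^2 + \frac{J^2}{M^2}
  \quad\longrightarrow\quad
  (M+E)^2 < Q^2 + \frac{(J+L)^2}{(M+E)^2}
\]
```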

Relevance: 100.00%

Abstract:

This work presents a method for predicting resource availability in opportunistic grids by means of use pattern analysis (UPA), a technique based on unsupervised learning methods. The prediction method rests on the assumption that there are several classes of computational resource use patterns, which can be used to predict resource availability. Trace-driven simulations validate this basic assumption and also provide the parameter settings for accurate learning of resource use patterns. Experiments with an implementation of the UPA method show the feasibility of its use in the scheduling of grid tasks with very little overhead. The experiments also demonstrate the method's superiority over other predictive and non-predictive methods. An adaptive prediction method is suggested to deal with the lack of training data at initialization. Further adaptive behaviour is motivated by experiments which show that, in some special environments, reliable resource use patterns may not always be detected. Copyright (C) 2009 John Wiley & Sons, Ltd.
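
The paper's exact features and learning algorithm are not reproduced here; as a sketch of the UPA idea, plain k-means can group usage traces into pattern classes whose centroids act as availability forecasts (the trace format below is our assumption):

```python
import numpy as np

# Unsupervised grouping of resource-usage traces: the centroid of the
# cluster a machine currently resembles serves as its availability
# forecast. Illustrative only; not the paper's exact method.

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            X[labels == j].mean(0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return centroids, labels

# Each row: fraction of each 6-hour slot a workstation was idle.
traces = np.array([
    [0.9, 0.8, 0.1, 0.2],   # idle at night, busy in the daytime
    [0.8, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.9, 0.8],   # the opposite pattern
    [0.2, 0.1, 0.8, 0.9],
])
centroids, labels = kmeans(traces, k=2)
print(labels)      # two use-pattern classes
print(centroids)   # per-class availability forecast
```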

Relevance: 100.00%

Abstract:

The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication costs, heterogeneous environments, and distributed-computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential for effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
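
The proposed model itself is not detailed in the abstract; as a classical baseline in the same spirit, the "method of analogues" for chaotic series predicts the next value from the successor of the nearest delay-embedded neighbor (the trace below is synthetic):

```python
import numpy as np

# Method-of-analogues baseline for chaotic process behavior: delay-embed
# the observed series and reuse the successor of the most similar past
# window as the forecast. A baseline illustration, not the paper's model.

def predict_next(series, dim=3):
    """Predict the value following `series` from the successor of its
    nearest delay-embedded neighbor."""
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    current = np.asarray(series[-dim:])
    nearest = int(np.argmin(np.linalg.norm(X - current, axis=1)))
    return series[nearest + dim]   # successor of the analogue

# Example: a noisy periodic CPU-usage trace; hold out the last point.
t = np.arange(200)
series = (0.5 + 0.4 * np.sin(t / 6.0)
          + 0.02 * np.random.default_rng(0).standard_normal(200))
history, truth = list(series[:-1]), series[-1]
print("predicted:", predict_next(history))
print("actual:   ", truth)
```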

Relevance: 100.00%

Abstract:

The InteGrade middleware aims to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic, or wavefront, paradigm to design parallel algorithms in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 knapsack problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising, and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable in view of the benefits obtained in facilitating the use of grid computing by the user. These benefits include job submission, checkpointing, security, and job migration. Copyright (C) 2009 John Wiley & Sons, Ltd.
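
The key property the wavefront paradigm exploits is that dynamic-programming cells on one anti-diagonal are mutually independent, so each anti-diagonal can be computed in parallel with only neighbor-to-neighbor communication. A sequential sketch for local sequence alignment (Smith-Waterman with toy scores; the paper's MPI decomposition over InteGrade is not reproduced):

```python
# Smith-Waterman swept by anti-diagonals d = i + j: every cell on one
# anti-diagonal depends only on cells of the two previous anti-diagonals,
# never on its own, which is what enables systolic parallel execution.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for d in range(2, n + m + 1):                  # anti-diagonal index
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,     # from anti-diagonal d-2
                          H[i - 1][j] + gap,       # from anti-diagonal d-1
                          H[i][j - 1] + gap)       # from anti-diagonal d-1
            best = max(best, H[i][j])
    return best

print(smith_waterman("GGTTGACTA", "TGTTACGG"))     # local alignment score
```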

Relevance: 40.00%

Abstract:

Cloud-aerosol interaction is a key issue in the climate system, affecting the water cycle, the weather, and the total energy balance, including the spatial and temporal distribution of latent heat release. Information on the vertical distribution of cloud droplet microphysics and thermodynamic phase as a function of temperature or height can be correlated with details of the aerosol field to provide insight into how these particles affect cloud properties and their consequences for cloud lifetime, precipitation, the water cycle, and the general energy balance. Unfortunately, today's experimental methods still lack the observational tools that can characterize the true evolution of the cloud microphysical, spatial, and temporal structure at the cloud droplet scale and link these characteristics to environmental factors and properties of the cloud condensation nuclei. Here we propose and demonstrate a new experimental approach (the cloud scanner instrument) that provides the microphysical information missing from current experiments and remote sensing options. Cloud scanner measurements can be performed from aircraft, ground, or satellite by scanning the side of the clouds from base to top, providing the unique opportunity of obtaining snapshots of the cloud droplet microphysical and thermodynamic states as a function of height and brightness temperature in clouds at several development stages. The brightness temperature profile of the cloud side can be directly associated with the thermodynamic phase of the droplets to provide information on the glaciation temperature as a function of ambient conditions, aerosol concentration, and type. An aircraft prototype of the cloud scanner was built and flown in a field campaign in Brazil. The CLAIM-3D (3-Dimensional Cloud Aerosol Interaction Mission) satellite concept proposed here combines several techniques to simultaneously measure the vertical profile of cloud microphysics, thermodynamic phase, brightness temperature, and aerosol amount and type in the neighborhood of the clouds. The wide wavelength range and the multi-angle polarization measurements proposed for this mission allow us to estimate the availability and characteristics of aerosol particles acting as cloud condensation nuclei, and their effects on the cloud microphysical structure. These results can provide unprecedented details on the response of cloud droplet microphysics to natural and anthropogenic aerosols at the size scale where the interaction really happens.

Relevance: 40.00%

Abstract:

In-situ measurements in convective clouds (up to the freezing level) over the Amazon basin show that smoke from deforestation fires prevents clouds from precipitating until they acquire a vertical development of at least 4 km, compared to only 1-2 km in clean clouds. The average cloud depth required for the onset of warm rain increased by ~350 m for each additional 100 cloud condensation nuclei per cm³ at a supersaturation of 0.5% (CCN_0.5%). In polluted clouds, the diameter of modal liquid water content grows much more slowly with cloud depth (by at least a factor of ~2), due to the large number of droplets that compete for the available water and to the suppressed coalescence processes. Contrary to what other studies have suggested, we did not observe this effect to saturate at 3000 or more accumulation-mode particles per cm³. The CCN_0.5% concentration was found to be a very good predictor of the cloud depth required for the onset of warm precipitation and of other microphysical factors, leaving only a secondary role for the updraft velocities in determining the cloud drop size distributions. The effective radius of the cloud droplets (r_e) was found to be a quite robust parameter for a given environment and cloud depth, showing only a small effect of partial droplet evaporation from the cloud's mixing with its drier environment. This supports one of the basic assumptions of satellite analysis of cloud microphysical processes: the ability to look at different cloud top heights in the same region and regard their r_e as if they had been measured inside one well-developed cloud. The dependence of r_e on the adiabatic fraction decreased higher in the clouds, especially in cleaner conditions, and disappeared at r_e ≥ ~10 µm. We propose that droplet coalescence, which is at its peak when warm rain forms in the cloud at r_e ≈ 10 µm, continues to be significant during the cloud's mixing with the entrained air, cancelling out the decrease in r_e due to evaporation.
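
Reading the reported trend as linear is our simplifying assumption (as is taking ~1.5 km, the midpoint of the quoted 1-2 km range, as the clean-air baseline); with that, the abstract's numbers can be turned into a back-of-the-envelope estimate:

```python
# Warm-rain onset depth vs. CCN load, from the reported trend:
# ~350 m of extra depth per additional 100 CCN_0.5% per cm^3, starting
# from ~1.5 km in clean air. Linearity and the zero-CCN baseline are
# simplifications of ours, not claims from the paper.

def onset_depth_km(ccn_per_cm3, clean_depth_km=1.5):
    return clean_depth_km + 0.35 * (ccn_per_cm3 / 100.0)

for ccn in (50, 300, 800):
    print(f"{ccn:4d} CCN/cm^3 -> onset at ~{onset_depth_km(ccn):.1f} km")
# Smoky air (~800 CCN/cm^3) pushes the onset past ~4 km, consistent with
# the clean-vs-polluted contrast stated in the abstract.
```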

Relevance: 40.00%

Abstract:

We examine the possibility that a new strong interaction is accessible at the Tevatron and the LHC. In an effective theory approach, we consider a scenario with a new color-octet interaction with strong couplings to the top quark, as well as the presence of a strongly coupled fourth generation which could be responsible for electroweak symmetry breaking. We apply several constraints, including the ones from flavor physics. We study the phenomenology of the resulting parameter space at the Tevatron, focusing on the forward-backward asymmetry in top pair production as well as on the production of the fourth-generation quarks. We show that if the excess in the top production asymmetry is indeed the result of this new interaction, the Tevatron could see the first hints of the strongly coupled fourth-generation quarks. Finally, we show that the LHC with √s = 7 TeV and 1 fb⁻¹ of integrated luminosity should observe the production of fourth-generation quarks at a level at least one order of magnitude above the QCD prediction for the production of these states.
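
The forward-backward asymmetry referred to here is the standard Tevatron observable built from the top-antitop rapidity difference:

```latex
% Standard definition of the top-pair forward-backward asymmetry, with
% \Delta y = y_t - y_{\bar{t}} the rapidity difference in t\bar{t} events:
\[
  A_{FB} = \frac{N(\Delta y > 0) - N(\Delta y < 0)}
                {N(\Delta y > 0) + N(\Delta y < 0)}
\]
```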

Relevance: 40.00%

Abstract:

Models of warped extra dimensions with custodial symmetry usually predict the existence of a light Kaluza-Klein fermion arising as a partner of the right-handed top quark, sometimes called a light custodian, which we will denote b̃_R. The production of these particles at the LHC can give rise to multi-W events which could be observed in same-sign dilepton channels, but their mass reconstruction is challenging. In this paper we study the possibility of finding a signal for the pair production of this new particle at the LHC, focusing on a rarer but cleaner decay mode of a light custodian into a Z boson and a b-quark. In this mode it would be possible to reconstruct the light custodian mass. In addition to the dominant Standard Model QCD production processes, we include the contribution of the first Kaluza-Klein gluon mode. We find that b̃_R stands out from the background as a peak in the bZ invariant mass. However, when taking into account only the electronic and muonic decay modes of the Z boson and b-tagging efficiencies, the LHC will have access only to the very light range of masses, m(b̃_R) = O(500) GeV.
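
The peak search rests on ordinary invariant-mass reconstruction of the bZ system; a sketch with purely illustrative four-momenta:

```python
import math

# Invariant mass of the bZ system: the custodian would appear as a peak
# in m(bZ) built from the b-jet and the leptonically decaying Z.
# Four-momenta are (E, px, py, pz) in GeV; the values are illustrative.

def invariant_mass(*p4s):
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(0.0, E**2 - px**2 - py**2 - pz**2))

b_jet = (210.0, 140.0, -90.0, 120.0)
z_boson = (320.0, -120.0, 160.0, 230.0)
print(f"m(bZ) = {invariant_mass(b_jet, z_boson):.0f} GeV")
```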

Relevance: 40.00%

Abstract:

In this Letter we study the process of gluon fusion into a pair of Higgs bosons in a model with one universal extra dimension. We find that the contributions from the extra top quark Kaluza-Klein excitations lead to a Higgs pair production cross section at the LHC that can be significantly altered compared to the Standard Model value for small values of the compactification scale. (C) 2007 Elsevier B.V. All rights reserved.
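
For context, the tree-level Kaluza-Klein mass spectrum in one universal extra dimension (a standard UED relation, not a result of this Letter):

```latex
% Tree-level mass of the n-th Kaluza-Klein excitation of the top quark
% for one universal extra dimension of size R:
\[
  m_{t^{(n)}} = \sqrt{m_t^2 + \frac{n^2}{R^2}}, \qquad n = 1, 2, \ldots
\]
% A small compactification scale 1/R means light KK tops running in the
% gluon-fusion loop, which is what shifts the Higgs-pair rate away from
% the Standard Model value.
```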

Relevance: 30.00%

Abstract:

A compact frequency standard based on an expanding cold ¹³³Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain; in this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor Ramsey fringe contrast. (C) 2008 Optical Society of America.
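
For context, the idealized Ramsey signal for a two-level atom; the link drawn below between cloud expansion and fringe contrast is a plausible reading of the observation, not a claim taken from the paper:

```latex
% Idealized Ramsey interrogation: two resonant pi/2 pulses separated by
% a free-evolution time T give a transition probability, versus detuning
% \delta and neglecting the finite pulse duration, of
\[
  P(\delta) \approx \tfrac{1}{2}\,\bigl[1 + \cos(\delta T)\bigr]
\]
% In an expanding cloud the atoms sample a spread of effective T and of
% microwave amplitude; averaging over that spread damps the cosine term,
% which would lower the observed fringe contrast.
```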