861 results for LHC, CMS, Grid Computing, Cloud Computing, Top Physics
Abstract:
In this work we introduce a mapping between the so-called deformed hyperbolic potentials, which have attracted continued interest in recent years, and the corresponding nondeformed ones. As a consequence, we conclude that these deformed potentials do not constitute a new class of exactly solvable potentials, but belong to the same class as the corresponding nondeformed ones. Nevertheless, we can reinterpret this type of deformation as a kind of symmetry of the nondeformed potentials. © 2005 Elsevier B.V. All rights reserved.
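As a concrete illustration of the kind of mapping referred to above (a sketch assuming Arai's standard definition of the q-deformed hyperbolic functions with q > 0; the specific potentials treated in the paper are not reproduced here), the deformation amounts to a rescaling plus a coordinate translation:

```latex
% q-deformed hyperbolic functions (Arai's definition, q > 0)
\sinh_q(x) = \frac{e^{x} - q\,e^{-x}}{2}, \qquad
\cosh_q(x) = \frac{e^{x} + q\,e^{-x}}{2}
% which reduce to rescaled, translated nondeformed functions:
\sinh_q(x) = \sqrt{q}\,\sinh\!\left(x - \tfrac{1}{2}\ln q\right), \qquad
\cosh_q(x) = \sqrt{q}\,\cosh\!\left(x - \tfrac{1}{2}\ln q\right)
```

Under this identity, a potential built from deformed hyperbolic functions can be rewritten, up to a rescaling of parameters, as the corresponding nondeformed potential evaluated at a shifted coordinate, which is why no new class of exactly solvable potentials arises.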
Abstract:
Computational grids allow users to share the resources of distributed machines, even when those machines belong to different organizations. Application scheduling must be performed with performance goals in mind, focusing on choosing which processes may access which specific resources. In this article we discuss aspects of application scheduling in grid computing environments. We also present a tool for scheduling simulation, along with test scenarios and results.
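To make the scheduling problem concrete, the following is a minimal, hypothetical sketch (not the tool described in the article) of a greedy scheduler that assigns processes to grid resources by estimated completion time; all names and numbers are illustrative.

```python
# Hypothetical greedy scheduler: assign each task to the resource that
# would finish it earliest, given the load already assigned to that resource.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    speed: float               # abstract "operations per second"
    assigned: list = field(default_factory=list)
    busy_until: float = 0.0    # time at which the resource becomes free

def schedule(tasks, resources):
    """tasks: list of (task_name, workload); returns {task_name: resource_name}."""
    placement = {}
    for name, workload in sorted(tasks, key=lambda t: -t[1]):  # largest first
        best = min(resources, key=lambda r: r.busy_until + workload / r.speed)
        best.busy_until += workload / best.speed
        best.assigned.append(name)
        placement[name] = best.name
    return placement

if __name__ == "__main__":
    grid = [Resource("site-A", speed=4.0), Resource("site-B", speed=1.0)]
    jobs = [("render", 8.0), ("analysis", 4.0), ("backup", 2.0)]
    print(schedule(jobs, grid))
```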
Abstract:
Graduate program in Computer Science - IBILCE
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
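As an illustration of the differential-encoding idea mentioned above, the sketch below is an assumption-laden toy example (not any of the surveyed systems): sender and receiver share a reference message, so only the differences against it need to be transmitted.

```python
# Toy differential encoding of similar SOAP/XML messages: sender and receiver
# share a reference message; only a character-level delta is transmitted.
import difflib

reference = ("<soap:Envelope><soap:Body>"
             "<getQuote><symbol>ACME</symbol></getQuote>"
             "</soap:Body></soap:Envelope>")
new_message = ("<soap:Envelope><soap:Body>"
               "<getQuote><symbol>FOO</symbol></getQuote>"
               "</soap:Body></soap:Envelope>")

# Sender: encode the new message as a delta against the shared reference.
ops = difflib.SequenceMatcher(a=reference, b=new_message).get_opcodes()
delta = [(tag, i1, i2, new_message[j1:j2] if tag in ("replace", "insert") else "")
         for tag, i1, i2, j1, j2 in ops]

# Receiver: rebuild the message using only the shared reference and the delta.
restored = "".join(text if tag in ("replace", "insert") else reference[i1:i2]
                   for tag, i1, i2, text in delta if tag != "delete")
assert restored == new_message

payload = sum(len(text) for tag, _, _, text in delta)
print(f"delta payload: {payload} chars vs full message: {len(new_message)} chars")
```

Real SOAP differencing systems work on the XML structure and at the byte level; the line above merely shows why highly similar messages compress so well under differential encoding.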
Abstract:
Two-particle azimuthal ($\Delta\phi$) and pseudorapidity ($\Delta\eta$) correlations using a trigger particle with large transverse momentum ($p_T$) in d+Au, Cu+Cu, and Au+Au collisions at $\sqrt{s_{NN}} = 62.4$ GeV and 200 GeV from the STAR experiment at the Relativistic Heavy Ion Collider are presented. The near-side correlation is separated into a jet-like component, narrow in both $\Delta\phi$ and $\Delta\eta$, and the ridge, narrow in $\Delta\phi$ but broad in $\Delta\eta$. Both components are studied as a function of collision centrality, and the jet-like correlation is studied as a function of the trigger and associated $p_T$. The behavior of the jet-like component is remarkably consistent for different collision systems, suggesting it is produced by fragmentation. The width of the jet-like correlation is found to increase with the system size. The ridge, previously observed in Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV, is also found in Cu+Cu collisions and in collisions at $\sqrt{s_{NN}} = 62.4$ GeV, but is found to be substantially smaller at $\sqrt{s_{NN}} = 62.4$ GeV than at $\sqrt{s_{NN}} = 200$ GeV for the same average number of participants ($\langle N_{\mathrm{part}} \rangle$). Measurements of the ridge are compared to models.
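Schematically (a generic form of the near-side decomposition described above, not the exact fit function used by STAR), the per-trigger associated yield is separated as

```latex
\frac{1}{N_{\mathrm{trig}}}\,\frac{\mathrm{d}^2 N}{\mathrm{d}\Delta\phi\,\mathrm{d}\Delta\eta}
\;\approx\; J(\Delta\phi,\Delta\eta) \;+\; R(\Delta\phi) \;+\; B(\Delta\phi,\Delta\eta),
```

where $J$ is the jet-like peak (narrow in both $\Delta\phi$ and $\Delta\eta$), $R$ is the ridge (narrow in $\Delta\phi$, broad in $\Delta\eta$), and $B$ is the combinatorial background.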
Abstract:
We report on the mid-rapidity mass spectrum of di-electrons and cross sections of pseudoscalar and vector mesons via $e^+e^-$ decays, from $\sqrt{s} = 200$ GeV p+p collisions, measured by the large-acceptance experiment STAR at the Relativistic Heavy Ion Collider. The ratio of the di-electron continuum to the combinatorial background is larger than 10% over the entire mass range. Simulations of di-electrons from light-meson decays and heavy-flavor decays (charmonium and open charm correlation) are found to describe the data. The extracted $\omega \to e^+e^-$ invariant yields are consistent with previous measurements. The mid-rapidity yields ($dN/dy$) of $\phi$ and $J/\psi$ are extracted through their di-electron decay channels and are consistent with the previous measurements of $\phi \to K^+K^-$ and $J/\psi \to e^+e^-$. Our results suggest a new upper limit on the branching ratio of $\eta \to e^+e^-$ of $1.7 \times 10^{-5}$ at the 90% confidence level.
Abstract:
This thesis aims to provide an up-to-date analysis of the recent evolution of Cloud Computing and of the new architectural models supporting the continuously growing demand for computing, storage and network resources inside data centers, and then moves on to an experimental phase of single and concurrent live migrations of virtual machines, studying their performance in terms of application and network resources within the open-source virtualization platform QEMU-KVM, which today underlies cloud-based systems such as Openstack. The first chapter surveys the state of the art of Cloud Computing, its current limits, and the prospects offered by a Cloud Federation model in the immediate future. The second chapter discusses in detail the live migration techniques currently of reference for the international scientific community and the possible optimizations in inter- and intra-data-center scenarios, with the aim of laying the theoretical basis for the in-depth study of the actual implementation of the migration process on the QEMU-KVM platform, which is addressed in the third chapter. In particular, that chapter describes the architectural and operating principles of the hypervisor and defines the design model and the algorithm underlying the migration process. Finally, the fourth chapter presents the work carried out, the configuration and design choices made to build a testbed suitable for studying concurrent live-migration sessions, and discusses the results of the performance measurements and of the observed system behavior.
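For orientation, a live migration of a single KVM guest can be triggered programmatically; the following is a minimal sketch using the libvirt Python bindings, in which the hostnames, the guest name, and the chosen flags are illustrative assumptions (concurrent migrations as studied in the thesis would simply issue several such calls in parallel).

```python
# Minimal live-migration sketch using the libvirt Python bindings (QEMU/KVM).
# Hostnames, the guest name, and the flag set are illustrative assumptions.
import libvirt

SRC_URI = "qemu+ssh://source-host/system"   # hypothetical source hypervisor
DST_URI = "qemu+ssh://dest-host/system"     # hypothetical destination hypervisor
GUEST = "test-vm"                           # hypothetical guest name

def live_migrate(guest_name: str) -> None:
    src = libvirt.open(SRC_URI)
    dst = libvirt.open(DST_URI)
    try:
        dom = src.lookupByName(guest_name)
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE
        # Pre-copy live migration: memory pages are transferred while the
        # guest keeps running, then it is briefly paused for the final switch.
        dom.migrate(dst, flags, None, None, 0)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    live_migrate(GUEST)
```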
Abstract:
How do the developers and designers of a new technology make sense of its intended users? The critical groundwork for user-centred technology development begins not with actual users' exposure to the technological artefact but much earlier, with designers' and developers' vision of future users. Thus, anticipating intended users is critical to technology uptake. We conceptualise the anticipation of intended users as a form of prospective sensemaking in technology development. Employing a narrative analytical approach and drawing on four key communities in the development of Grid computing, we reconstruct how each community anticipated the intended Grid user. Based on our findings, we conceptualise user anticipation in terms of two key dimensions, namely the intended possibility to inscribe user needs into the technological artefact and the intended scope of the application domain. In turn, these dimensions allow us to develop an initial typology of intended user concepts that might provide a key building block towards a generic typology of intended users.
Abstract:
Cloud computing is one of the most relevant computing paradigms available today. Its adoption has increased in recent years thanks to large investment and research from business enterprises and academic institutions. Among all the services cloud providers usually offer, Infrastructure as a Service (IaaS) has gained momentum for solving HPC problems in a more dynamic way, without the need for expensive investments. Integrating a large number of providers is a major goal, as it makes it possible to improve the quality of the selected resources in terms of pricing, speed, redundancy, etc. In this paper, we propose a system architecture, based on semantic solutions, to build an interoperable scheduler for federated clouds that works with several IaaS providers in a uniform way. Based on this architecture we implement a proof-of-concept prototype and test it with two different cloud solutions to provide experimental results on the viability of our approach.
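To illustrate what scheduling "in a uniform way" across IaaS providers can look like in practice, here is a deliberately simplified sketch with hypothetical provider data and scoring; it does not reproduce the semantic architecture or prototype described in the paper.

```python
# Toy federated-cloud scheduler: rank IaaS providers against a resource request
# using a uniform description of their offers. All data below is hypothetical.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    vcpus: int
    ram_gb: int
    price_per_hour: float   # illustrative price, e.g. USD/hour
    region: str

OFFERS = [
    Offer("provider-A", vcpus=8, ram_gb=32, price_per_hour=0.40, region="eu"),
    Offer("provider-B", vcpus=8, ram_gb=16, price_per_hour=0.25, region="eu"),
    Offer("provider-C", vcpus=16, ram_gb=64, price_per_hour=0.90, region="us"),
]

def select_offer(min_vcpus, min_ram_gb, preferred_region):
    """Pick the cheapest feasible offer, preferring the requested region."""
    feasible = [o for o in OFFERS if o.vcpus >= min_vcpus and o.ram_gb >= min_ram_gb]
    if not feasible:
        raise RuntimeError("no provider satisfies the request")
    return min(feasible, key=lambda o: (o.region != preferred_region, o.price_per_hour))

if __name__ == "__main__":
    print(select_offer(min_vcpus=8, min_ram_gb=32, preferred_region="eu"))
```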
Abstract:
The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on, since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves on these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
Abstract:
Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, covering several models and metamodels. We consider a large spectrum of models and metamodels for describing service quality, ranging from ontological approaches that define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches, to reveal which are consolidated and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected through a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.
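For readers unfamiliar with the terminology, the following is a small, hypothetical sketch of what a machine-readable SLA with QoS terms might contain; the field names and values are illustrative only and do not follow any particular metamodel surveyed in the article.

```python
# Hypothetical machine-readable SLA with QoS terms; names and values are
# illustrative and do not correspond to any specific metamodel.
from dataclasses import dataclass

@dataclass
class QoSTerm:
    metric: str        # e.g. "availability", "response_time"
    operator: str      # e.g. ">=", "<="
    threshold: float
    unit: str

@dataclass
class ServiceLevelAgreement:
    provider: str
    consumer: str
    service: str
    terms: list
    penalty_per_violation: float  # illustrative monetary penalty

sla = ServiceLevelAgreement(
    provider="ExampleProvider",
    consumer="ExampleConsumer",
    service="payment-api",
    terms=[QoSTerm("availability", ">=", 99.9, "percent"),
           QoSTerm("response_time", "<=", 200.0, "ms")],
    penalty_per_violation=50.0,
)
print(sla)
```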
Abstract:
High-Performance Computing, Cloud Computing, and next-generation applications such as e-Health or Smart Cities have dramatically increased the computational demand of Data Centers. The huge energy consumption, increasing levels of CO2, and the economic costs of these facilities represent a challenge for industry and researchers alike. Recent research trends propose the use of holistic optimization techniques to jointly minimize Data Center computational and cooling costs from a multilevel perspective. This paper presents an analysis of the parameters needed to integrate the Data Center in a holistic optimization framework and leverages Cyber-Physical Systems to gather workload, server, and environmental data via software techniques and by deploying a non-intrusive Wireless Sensor Network (WSN). This solution tackles data sampling, retrieval, and storage from a reconfigurable perspective, reducing the amount of data generated for optimization by 68% without information loss, doubling the lifetime of the WSN nodes, and enabling runtime energy minimization techniques in a real scenario.
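One common way to reduce sensor traffic of this kind is threshold-based ("send-on-delta") reporting, in which a node transmits a reading only when it deviates sufficiently from the last transmitted value. The sketch below is a generic illustration of that idea, not the paper's actual reconfigurable policy, and the 68% figure reported above is specific to that work.

```python
# Generic send-on-delta sampling: a node transmits a reading only when it
# deviates from the last transmitted value by more than a threshold.
# Illustrative scheme, not the reconfigurable policy from the paper.

def send_on_delta(samples, threshold):
    """Return the subset of (time, value) samples that would be transmitted."""
    transmitted = []
    last_sent = None
    for t, value in samples:
        if last_sent is None or abs(value - last_sent) > threshold:
            transmitted.append((t, value))
            last_sent = value
    return transmitted

if __name__ == "__main__":
    # Simulated inlet-temperature readings (time in s, temperature in C).
    readings = [(t, 24.0 + 0.02 * t) for t in range(0, 300, 5)]
    sent = send_on_delta(readings, threshold=0.5)
    print(f"transmitted {len(sent)} of {len(readings)} samples "
          f"({100 * (1 - len(sent) / len(readings)):.0f}% reduction)")
```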
Abstract:
Preface. The evolution of cognitive neuroscience has been spurred by the development of increasingly sophisticated investigative techniques to study human cognition. In Methods in Mind, experts examine the wide variety of tools available to cognitive neuroscientists, paying particular attention to the ways in which different methods can be integrated to strengthen empirical findings and how innovative uses for established techniques can be developed. The book will be a uniquely valuable resource for the researcher seeking to expand his or her repertoire of investigative techniques. Each chapter explores a different approach. These include transcranial magnetic stimulation, cognitive neuropsychiatry, lesion studies in nonhuman primates, computational modeling, psychophysiology, single neurons and primate behavior, grid computing, eye movements, fMRI, electroencephalography, imaging genetics, magnetoencephalography, neuropharmacology, and neuroendocrinology. As mandated, authors focus on convergence and innovation in their fields; chapters highlight such cross-method innovations as the use of the fMRI signal to constrain magnetoencephalography, the use of electroencephalography (EEG) to guide rapid transcranial magnetic stimulation at a specific frequency, and the successful integration of neuroimaging and genetic analysis. Computational approaches depend on increased computing power, and one chapter describes the use of distributed or grid computing to analyze massive datasets in cyberspace. Each chapter author is a leading authority in the technique discussed.