12 results for Distributed replication system
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The goal of this thesis is to study the feasibility of an analysis of the associated production ttH of the Higgs boson with two top quarks in the CMS experiment, and to evaluate the functionality and features of the next generation of distributed analysis toolkits at CMS (CRAB version 3) for performing such an analysis. In the top-quark physics sector, ttH production is particularly interesting, above all because it represents the only opportunity to directly probe the t-H vertex without making assumptions about possible contributions from physics beyond the Standard Model. Preparing for this analysis is crucial at this time, before the start of LHC Run 2 in 2015. To be ready for such a study, the technical implications of performing a complete analysis in a distributed computing environment such as the Grid should not be underestimated. For this reason, an analysis of the CRAB3 tool itself (currently available as a pre-production release) and a direct performance comparison with CRAB2 are presented and discussed. Suggestions and advice for an analysis team that may eventually be involved in this study are also collected and documented. Chapter 1 introduces high-energy physics at the LHC and the CMS experiment. Chapter 2 discusses the CMS computing model and the Grid distributed analysis system. Chapter 3 briefly presents the physics of the top quark and of the Higgs boson. Chapter 4 is devoted to the preparation of the analysis from the point of view of the Grid tools (CRAB3 vs CRAB2). Chapter 5 presents and discusses a feasibility study for an analysis of the ttH channel in terms of selection efficiency.
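As a rough illustration of the kind of Grid submission workflow that CRAB3 is meant to simplify, a minimal CRAB3-style job configuration might look like the sketch below; the request name, CMSSW parameter-set file, input dataset, splitting parameters and storage site are placeholders, not those used in the thesis.

```python
# A minimal CRAB3-style job configuration (sketch only). All values below are
# placeholders: the dataset, request name, CMSSW config file and storage site
# do not correspond to the ones actually used in the thesis.
from CRABClient.UserUtilities import config

config = config()
config.General.requestName = 'ttH_feasibility_test'              # placeholder
config.JobType.pluginName  = 'Analysis'
config.JobType.psetName    = 'ttH_analysis_cfg.py'               # placeholder CMSSW config
config.Data.inputDataset   = '/TTbarH_Example/Dataset-v1/AODSIM' # placeholder dataset
config.Data.splitting      = 'FileBased'
config.Data.unitsPerJob    = 10
config.Site.storageSite    = 'T2_IT_Legnaro'                     # placeholder Tier-2 site
```

Such a configuration would then typically be submitted from the command line (e.g. with `crab submit`), with CRAB taking care of job splitting, submission to the WLCG sites and output handling.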
Abstract:
Communication and coordination are two key aspects of open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, such as TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios we configure relate to the distributed nature of multi-agent systems, where an agent may be located and run directly on a mobile device. We address the new frontiers of mobile technology represented by smartphones running Google's Android operating system. The analysis and deployment of a distributed agent-based system of this kind must first confront qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN within the reduced memory and computing capability of a smartphone, without losing functionality, efficiency, or integrity of the infrastructure. The thesis work proceeds on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad-hoc client-side release.
Abstract:
This work is dedicated to the design and implementation of a "smart" distributed system for access control. The project is set in the context of "SPOT Software", which needs to improve its internal access-control and attendance-management process in order to increase its usability and efficiency. The topics of the Internet of Things, Smart Buildings, Smart Cities, and embedded systems are addressed in general, with a closer look at the role of the NFC and BLE communication technologies at the core of this work. The design of each of the three nodes of the system is then discussed, motivating the technological and design choices: the Web application, the Smart device, and the Smartphone app.
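As a purely illustrative sketch of the BLE side of such a system, the smart-device node could detect a nearby enrolled smartphone by scanning BLE advertisements; the example below uses the Python bleak library, and the list of authorized addresses is a hypothetical placeholder that does not reflect the actual SPOT Software design.

```python
# Illustrative sketch: a smart-device node scanning for BLE advertisements and
# granting access when an enrolled smartphone is in range. The authorized
# address list is a hypothetical placeholder.
import asyncio
from bleak import BleakScanner

AUTHORIZED = {"AA:BB:CC:DD:EE:FF"}   # placeholder list of enrolled smartphones

async def scan_for_badge() -> bool:
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        if d.address.upper() in AUTHORIZED:
            print(f"access granted to {d.address} ({d.name})")
            return True
    print("no authorized device in range")
    return False

asyncio.run(scan_for_badge())
```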
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" applied to computations is more commonly known as "reproducible research". In this context, the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of letting the scientist submit, together with the article, the computational materials (software, data, etc.) that were used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen OS, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: the raw data sets and the software are difficult to exploit without the logic that guided their use or their production. This led us to conclude that, in addition to the data sets and the software, an additional element must be provided: the workflow that ties all of them together.
Abstract:
Nowadays, words like Smart City, Internet of Things, and Environmental Awareness surround us, reflecting the growing interest of the Computer Science and Engineering communities. Services supporting these paradigms are necessarily based on large amounts of sensed data, which, once obtained and gathered, need to be analyzed in order to build maps, infer patterns, and extract useful information. Everything is done in order to achieve a better quality of life. Traditional sensing techniques, like Wired or Wireless Sensor Networks, need an intensive deployment of distributed sensors to acquire real-world conditions. We propose SenSquare, a crowdsensing approach based on smartphones and a central coordination server for data collection that is homogeneous in time and space. SenSquare relies on technologies such as the lightweight CoAP protocol, geofencing, and the Military Grid Reference System.
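As a purely illustrative sketch of the kind of lightweight exchange CoAP enables, a smartphone client could push a geo-tagged reading to a coordination server as shown below; the server address, resource path and payload format are hypothetical and do not reflect SenSquare's actual API (the example uses the Python aiocoap library).

```python
# Illustrative only: pushing one geo-tagged sensor reading to a coordination
# server over CoAP. The endpoint, resource path and JSON payload layout are
# hypothetical placeholders, not SenSquare's real interface.
import asyncio
import json
from aiocoap import Context, Message, POST

async def push_reading():
    ctx = await Context.create_client_context()
    payload = json.dumps({
        "mgrs": "32TQQ4874011518",   # Military Grid Reference System cell (example value)
        "metric": "temperature",
        "value": 21.4,
    }).encode()
    request = Message(code=POST, payload=payload,
                      uri="coap://sensquare.example.org/readings")  # hypothetical endpoint
    response = await ctx.request(request).response
    print("server replied:", response.code)

asyncio.run(push_reading())
```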
Abstract:
This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. The study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up a machinery able to eventually predict future data access patterns - i.e. the so-called "popularity" of the CMS datasets on the Grid - with focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or "jobs") access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how these data are accessed, in terms of data types, hosting Tiers, and different time periods, makes it possible to gain precious insight into storage occupancy over time and into the different access patterns, and ultimately to extract suggested actions based on this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive potential for future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, gives an introduction to their application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
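As a minimal, hedged sketch of the kind of supervised classification such a machinery relies on, the example below trains a random-forest classifier on synthetic access records to label datasets as popular or not; the feature set and the labeling rule are hypothetical and only illustrate the approach, not the actual CMS popularity pipeline.

```python
# Sketch of a supervised "popularity" classifier trained on synthetic access
# records. Features (recent accesses, distinct users, dataset age) and the
# labeling rule are hypothetical, chosen only to illustrate the technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.poisson(20, n),      # accesses in the previous week
    rng.poisson(5, n),       # distinct users in the previous week
    rng.uniform(0, 24, n),   # dataset age in months
])
# Toy label: a dataset is "popular" if it was recently and widely accessed.
y = ((X[:, 0] > 20) & (X[:, 1] > 4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```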
Abstract:
Airborne Particulate Matter (PM) can be removed from the atmosphere through wet and dry mechanisms, and can physically and chemically interact with materials and induce premature decay. The effect of dry deposition is a complex issue, especially for outdoor materials, because of the difficulty of collecting atmospheric deposits that are repeatable in terms of mass and homogeneously distributed over the entire investigated substrate. In this work, to overcome these problems by eliminating the variability induced by outdoor removal mechanisms (e.g. winds and rainfalls), a new sampling system called "Deposition Box" was used for PM sampling. Four surrogate materials (Cellulose Acetate, Regenerated Cellulose, Cellulose Nitrate and Aluminum) with different surface features were exposed in the urban-marine site of Rimini (Italy), in vertical and horizontal orientations. Homogeneous and reproducible PM deposits were obtained, and different analytical techniques (IC, AAS, TOC, VP-SEM-EDX, Vis-Spectrophotometry) were employed to characterize their mass, dimensions and composition. The results made it possible to discriminate the mechanisms responsible for the dry deposition of atmospheric particles on surfaces of different nature and orientation, and to determine which chemical species, and in what amounts, tend to preferentially deposit on them. This work demonstrated that the "Deposition Box" can be a viable tool for studying dry deposition fluxes on materials, and the results obtained will be fundamental for extending this kind of exposure to actual building and heritage materials, in order to investigate the contribution of PM to their decay.
Abstract:
With the increase of distributed generation, DC microgrids have become more and more common in the electrical network. To connect devices in a microgrid, converters are necessary, but they are also a source of disturbances due to their switching operation. In this thesis, measurements and simulations of the conducted emissions, within the frequency range 2-150 kHz, of a DC/DC buck converter are studied.
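As a rough numerical illustration of where such emissions come from, the sketch below computes the low-frequency spectrum of the input current of an idealized buck converter, modelled as a rectangular pulse train, and lists the spectral lines falling in the 2-150 kHz band; the switching frequency, duty cycle and current level are placeholder values, not those of the converter studied in the thesis.

```python
# Illustrative only: spectrum of the input current of an ideal buck converter,
# modelled as a rectangular pulse train. All parameters are placeholders.
import numpy as np

f_sw = 50e3          # switching frequency [Hz] (placeholder)
duty = 0.5           # duty cycle (placeholder)
i_load = 2.0         # load current [A] (placeholder)
fs = 10e6            # sampling rate [Hz]
t = np.arange(0, 20e-3, 1 / fs)              # 20 ms observation window

# Input current of an ideal buck: i_load during the on-time, 0 otherwise.
i_in = i_load * ((t * f_sw) % 1.0 < duty)

# Amplitude spectrum, restricted to the 2-150 kHz conducted-emission band.
spectrum = np.abs(np.fft.rfft(i_in)) / len(i_in)
freqs = np.fft.rfftfreq(len(i_in), 1 / fs)
band = (freqs >= 2e3) & (freqs <= 150e3)
for f, a in zip(freqs[band], spectrum[band]):
    if a > 1e-3:                              # print only the dominant lines
        print(f"{f / 1e3:8.1f} kHz  {a:.3f} A")
```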
Abstract:
Nowadays reinforcement learning has proven to be very effective in machine learning across a variety of fields, such as games, speech recognition, and many others. We therefore decided to apply reinforcement learning to allocation problems, both because this is a research area not yet studied with this technique and because these problems encompass, in their formulation, a vast set of sub-problems with similar characteristics, so that a solution for one of them extends to each of these sub-problems. In this project we built an application called Service Broker which, through reinforcement learning, learns how to distribute the execution of tasks over asynchronous, distributed workers. The analogy is that of a cloud data center, which owns internal resources - possibly distributed across the server farm -, receives tasks from its clients, and executes them on these resources. The goal of the application, and hence of the data center, is to allocate these tasks so as to minimize the execution cost. Moreover, in order to test the reinforcement learning agents that were developed, an environment - a simulator - was created, allowing us to focus on developing the components the agents need, rather than also having to deal with the implementation aspects required in a real data center, such as the communication with the various nodes and its latency. The results obtained confirmed the theory studied, achieving better performance than some of the classical methods for task allocation.
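The abstract does not detail the Service Broker's internals; as a generic, hedged illustration of the underlying idea, the sketch below applies tabular Q-learning to a toy single-state allocation problem in which an agent learns to route tasks to the cheapest of a few workers. Workers, costs and hyperparameters are illustrative placeholders.

```python
# Generic tabular Q-learning sketch for a toy task-allocation problem: the agent
# assigns each task to one of N workers and receives the negative execution cost
# as reward. All values are placeholders, not the Service Broker's actual design.
import random

N_WORKERS = 3
COSTS = [1.0, 2.0, 4.0]                  # hidden per-worker execution cost
EPSILON, ALPHA, GAMMA = 0.1, 0.1, 0.0    # gamma=0: each task is an independent decision

q = [0.0] * N_WORKERS                    # single-state Q-table: one value per worker

def choose_worker():
    """Epsilon-greedy selection over the Q-values."""
    if random.random() < EPSILON:
        return random.randrange(N_WORKERS)
    return max(range(N_WORKERS), key=lambda w: q[w])

for task in range(5000):
    w = choose_worker()
    reward = -COSTS[w] * random.uniform(0.8, 1.2)      # noisy execution cost
    q[w] += ALPHA * (reward + GAMMA * max(q) - q[w])   # Q-learning update

print("learned Q-values:", [round(v, 2) for v in q])
print("preferred worker:", max(range(N_WORKERS), key=lambda w: q[w]))
```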
Abstract:
In this thesis, a tube-based Distributed Economic Predictive Control (DEPC) scheme is presented for a group of dynamically coupled linear subsystems. These subsystems are components of a large-scale system, and their control inputs are computed by optimizing a local economic objective. Each subsystem interacts with its neighbors by sending them its future reference trajectory at each sampling time, and solves a local optimization problem in parallel, based on the future reference trajectories received from the other subsystems. To ensure recursive feasibility and a performance bound, each subsystem is constrained not to deviate too much from its communicated reference trajectory. The difference between the planned trajectory and the communicated one is interpreted as a disturbance at the local level. Then, to ensure the satisfaction of both state and input constraints, these are tightened by explicitly considering the effect of the local disturbances. The proposed approach averages over all possible disturbances and handles the tightened state and input constraints, while satisfying the compatibility constraints that guarantee that the actual trajectory lies within a certain bound in the neighborhood of the reference one. Each subsystem optimizes a local, arbitrary economic objective function in parallel while considering a local terminal constraint to guarantee recursive feasibility. In this framework, economic performance guarantees for a tube-based distributed predictive control (DPC) scheme are developed rigorously. It is shown that the closed-loop nominal subsystem has, locally, a robust average performance bound which is no worse than that of a local robust steady state. Since a robust algorithm is applied to the states of the real (disturbed) subsystems, this bound can be interpreted as an average performance result for the real closed-loop system. Finally, we present our results on local and global performance, illustrated by a numerical example.
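In generic notation (not the thesis's exact formulation), the local problem solved by each subsystem i at every sampling time can be sketched as follows, with tightened constraints and a compatibility constraint that keeps the plan close to the communicated reference:

```latex
% Generic form of a local tube-based economic MPC problem for subsystem i;
% notation is illustrative, not the thesis's exact formulation.
\begin{align*}
\min_{\bar{x}_i,\,\bar{u}_i} \quad & \sum_{k=0}^{N-1} \ell_i\bigl(\bar{x}_i(k),\bar{u}_i(k)\bigr) \\
\text{s.t.} \quad & \bar{x}_i(k+1) = A_{ii}\bar{x}_i(k) + B_i\bar{u}_i(k)
  + \textstyle\sum_{j \neq i} A_{ij}\hat{x}_j(k), \\
& \bar{x}_i(k) \in \mathcal{X}_i \ominus \mathcal{E}_i, \qquad
  \bar{u}_i(k) \in \mathcal{U}_i \ominus K_i\mathcal{E}_i, \\
& \bar{x}_i(k) - \hat{x}_i(k) \in \mathcal{C}_i
  \quad \text{(compatibility constraint)}, \\
& \bar{x}_i(N) \in \mathcal{X}_i^{f},
\end{align*}
```

where the hatted trajectories are the references communicated by the neighbors, the sets subtracted via the Pontryagin difference account for the local (tube) disturbance, and the stage cost is the local economic objective.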
Abstract:
The Internet of Things (IoT) is a critical pillar of the digital transformation because it enables interaction with the physical world through remote sensing and actuation. Owing to the advancements in wireless technology, we now have the opportunity to use its features to the best of our abilities and improve on the current situation. Indeed, the Internet of Things market is expanding at an exponential rate, with devices such as alarms and detectors, smart meters, trackers, and wearables being used on a global scale for automotive and agriculture applications, environment monitoring, infrastructure surveillance and management, healthcare, energy and utilities, logistics, goods tracking, and so on. The Third Generation Partnership Project (3GPP) acknowledged the importance of IoT by introducing new features to support it; in particular, in Rel. 13, the 3GPP introduced dedicated IoT features to support Low Power Wide Area Networks (LPWAN). As these devices will be distributed in areas where terrestrial networks are not feasible or commercially viable, satellite networks will play a complementary role thanks to their ability to provide global connectivity via their large footprint size and short service deployment time. In this context, the goal of this thesis is to investigate the viability of integrating IoT technology with satellite communication (SatCom) systems, with a focus on the Random Access (RA) procedure. Indeed, the RA is the most critical procedure because it allows the UE to achieve uplink synchronisation, obtain a permanent ID, and obtain uplink transmission resources. In particular, this thesis evaluates preamble detection in the SatCom environment.
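As a hedged illustration of the core operation behind preamble detection, the sketch below correlates a noisy, delayed Zadoff-Chu sequence against the known root sequence and thresholds the correlation peak; the sequence length, root index, delay, noise level and threshold are placeholder values, not the configuration evaluated in the thesis.

```python
# Illustrative sketch of correlation-based preamble detection: the receiver
# correlates the received samples against the known Zadoff-Chu root sequence
# and looks for a peak above a threshold. All parameters are placeholders.
import numpy as np

N_ZC, u = 139, 1                                     # preamble length and root index
n = np.arange(N_ZC)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N_ZC)    # Zadoff-Chu root sequence

rng = np.random.default_rng(0)
delay = 17                                           # unknown round-trip delay (samples)
rx = np.roll(zc, delay) + 0.5 * (rng.standard_normal(N_ZC)
                                 + 1j * rng.standard_normal(N_ZC))

# Circular cross-correlation via FFT; a ZC sequence has ideal autocorrelation,
# so a transmitted preamble shows up as a single sharp peak at its delay.
corr = np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(zc))))
peak = int(corr.argmax())
threshold = 4.0 * corr.mean()

print(f"peak at lag {peak} (true delay {delay}), detected: {corr[peak] > threshold}")
```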
Abstract:
The idea of Grid Computing originated in the nineties and found concrete application in contexts like the SETI@home project, where a large number of computers (offered by volunteers) cooperated inside the Grid environment, performing distributed computations to analyze radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers but, with the emergence of the first mobile devices such as Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources available on mobile devices, such as smartphones and tablets, untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that uses both desktop and mobile devices, exploiting resources from day-to-day mobile users that would otherwise end up unused. The work starts with an introduction to what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate in the Grid. Then the tone becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model, which constitutes the solution offered by this study, is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.