447 results for Globus Toolkit


Relevance:

10.00%

Publisher:

Abstract:

[ES] This project covers the analysis and development process carried out to build a functional prototype of a virtual simulator for single-channel rigid endoscopy, oriented toward hysteroscopy. The prototype is built on ESQUI, an open-source virtual medical simulation environment. This environment provides a library, itself based on the well-known VTK (Visualization Toolkit) graphics library, whose purpose is to make available to the programmer all the algorithms needed to build a virtual medical simulation. In this project, that library was debugged and extended to improve support for the rigid endoscopy techniques to be simulated. In addition, the Simball 4D, a human-interface device from G-coder Systems, is used to capture user interaction, emulating the morphology and dynamics of a rigid endoscope. All these elements are connected through a simple, intuitive and practical graphical interface built on wxWidgets, with Python as the scripting language. Finally, the resulting prototype is analysed and a series of future lines of work is proposed for its didactic application, regarding both the conceptual objectives of the prototype and the specific aspects of the ESQUI environment.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and shielding power of LVD, the idea being to evaluate the feasibility of hosting a dark matter detector in its innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that if LVD is used as an active muon veto, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for Monte Carlo simulations of neutron production in heavy materials, which are often used as shielding in low-background experiments.
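The yield quoted above is, in essence, a detected neutron count normalized to the muon exposure and to the column density of target material traversed. A minimal sketch of that normalization, with invented illustration values (not LVD data) and a hypothetical efficiency correction:

```python
# Hypothetical example: normalizing a neutron count to a muon-induced
# neutron yield. All inputs below are made-up illustration values,
# not LVD measurements.

def neutron_yield(n_detected, efficiency, n_muons, density_g_cm3, path_cm):
    """Yield = efficiency-corrected neutron count / (muons * column density)."""
    n_true = n_detected / efficiency          # correct for detection efficiency
    column_density = density_g_cm3 * path_cm  # g/cm^2 traversed per muon
    return n_true / (n_muons * column_density)

# Illustrative numbers only:
y = neutron_yield(n_detected=640, efficiency=0.25,
                  n_muons=1.0e5, density_g_cm3=0.8, path_cm=100.0)
```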

Relevance:

10.00%

Publisher:

Abstract:

To assist rational compound design of organic semiconductors, two problems need to be addressed. First, the material morphology has to be known at an atomistic level. Second, with the morphology at hand, an appropriate charge transport model needs to be developed in order to link charge carrier mobility to structure.

The former can be addressed by generating atomistic morphologies using molecular dynamics simulations. However, the accessible range of time- and length-scales is limited. To overcome these limitations, systematic coarse-graining methods can be used. In the first part of the thesis, the Versatile Object-oriented Toolkit for Coarse-graining Applications is introduced, which provides a platform for the implementation of coarse-graining methods. Tools to perform Boltzmann inversion, iterative Boltzmann inversion, inverse Monte Carlo, and force-matching are available and have been tested on a set of model systems (water, methanol, propane and a single hexane chain). Advantages and problems of each specific method are discussed.

In partially disordered systems, the second issue is closely connected to constructing appropriate diabatic states between which charge transfer occurs. In the second part of the thesis, the description initially used for small conjugated molecules is extended to conjugated polymers. Here, charge transport is modeled by introducing conjugated segments on which charge carriers are localized. Inter-chain transport is treated within high-temperature non-adiabatic Marcus theory, while an adiabatic rate expression is used for intra-chain transport. The charge dynamics is simulated using the kinetic Monte Carlo method.

The entire framework is finally employed to establish a relation between the morphology and the charge mobility of the neutral and doped states of polypyrrole, a conjugated polymer. It is shown that for short oligomers, charge carrier mobility is insensitive to the orientational molecular ordering and is determined by the threshold transfer integral which connects percolating clusters of molecules into interconnected networks. The value of this transfer integral can be related to the radial distribution function. Hence, charge mobility is mainly determined by the local molecular packing and is independent of the global morphology, at least in the non-crystalline state of the polymer.
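The high-temperature non-adiabatic Marcus rate used above for inter-chain hops has a standard closed form. A small sketch, with generic physical constants in eV units and illustrative parameter values that are not fitted to polypyrrole:

```python
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV * s
KB = 8.617333262e-5      # Boltzmann constant, eV / K

def marcus_rate(J, dG, lam, T):
    """High-temperature non-adiabatic Marcus charge-transfer rate (1/s).

    J   : electronic transfer integral (eV)
    dG  : free-energy difference between initial and final site (eV)
    lam : reorganization energy (eV)
    T   : temperature (K)
    """
    kt = KB * T
    prefactor = 2.0 * math.pi / HBAR * J * J
    thermal = 1.0 / math.sqrt(4.0 * math.pi * lam * kt)
    return prefactor * thermal * math.exp(-(dG + lam) ** 2 / (4.0 * lam * kt))

# Illustrative values only:
k = marcus_rate(J=0.01, dG=0.0, lam=0.2, T=300.0)
```

The rate peaks when the driving force cancels the reorganization energy (dG = -lam) and falls off on either side, which is the qualitative behaviour the kinetic Monte Carlo simulation samples.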

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, various water models are investigated in so-called multiscale computer simulations with two resolutions: an atomistic resolution and a coarser resolution referred to as "coarse-grained". In the atomistic resolution, a water molecule is described, in accordance with its chemical structure, by three atoms; in the coarse-grained resolution, by contrast, a molecule is represented by a single bead.

The coarse-grained models presented in this work are developed with different coarse-graining methods, chiefly the "iterative Boltzmann inversion" and the "iterative Monte Carlo inversion". Both are structure-based approaches that aim to reproduce selected structural properties, such as the pair distribution functions, of the underlying atomistic system. The software package "Versatile Object-oriented Toolkit for Coarse-Graining Applications" (VOTCA) was developed to apply these methods in an automated fashion.

It is investigated to what extent coarse-grained models can simultaneously reproduce several properties of the underlying atomistic model, e.g. thermodynamic properties such as pressure and compressibility, or structural properties that were not used in building the model, e.g. the tetrahedral packing behaviour that is responsible for many of water's special properties.

Using the "Adaptive Resolution Scheme", both resolutions are combined in a single simulation. This profits from the advantages of both models: the detailed representation of a spatially small region at atomistic resolution, and the computational efficiency of the coarse-grained model, which extends the range of accessible time and length scales.

In these simulations, the influence of the hydrogen-bond network on the hydration of fullerenes can be investigated. It turns out that the structure of the water molecules at the surface is dominated mainly by the type of interaction between the fullerene and water, and less by the hydrogen-bond network.
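The iterative Boltzmann inversion referred to above refines a tabulated pair potential by the logarithmic mismatch between the simulated and target radial distribution functions. A minimal sketch of one update step, on toy g(r) values rather than an actual VOTCA run:

```python
import math

KB_T = 1.0  # work in units of k_B T

def ibi_update(u, g_current, g_target):
    """One iterative Boltzmann inversion step on a tabulated potential:
    U_{n+1}(r) = U_n(r) + kT * ln(g_n(r) / g_target(r))."""
    return [ui + KB_T * math.log(gc / gt)
            for ui, gc, gt in zip(u, g_current, g_target)]

# Toy example: initial guess from direct Boltzmann inversion of the target
g_target = [0.2, 1.5, 1.1, 1.0]
u0 = [-KB_T * math.log(g) for g in g_target]
# Suppose a simulation with u0 produced a slightly too-structured g(r):
g_sim = [0.25, 1.6, 1.05, 1.0]
u1 = ibi_update(u0, g_sim, g_target)
```

Where the simulated g(r) overshoots the target, the potential becomes more repulsive; where it undershoots, more attractive; where they agree, it is left unchanged. Iterating this converges the simulated structure toward the target.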

Relevance:

10.00%

Publisher:

Abstract:

Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach its goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two, entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, this PhD thesis proves that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
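The Maximum Gap method cited above derives a limit from the largest amount of expected signal lying between two adjacent observed events. A sketch of just that test statistic, on an invented uniform toy spectrum; the probability that converts the gap into a confidence limit is omitted:

```python
# Sketch of the maximum-gap test statistic: given the observed event
# positions and the cumulative expected signal over the search range,
# find the largest "gap" (expected signal counts between adjacent
# events). The cumulative expectation below is a made-up toy model.

def maximum_gap(event_positions, cumulative, lo, hi):
    """Largest expected signal between adjacent events (gap statistic).

    cumulative(x): expected number of signal events below x.
    """
    edges = [lo] + sorted(event_positions) + [hi]
    return max(cumulative(b) - cumulative(a)
               for a, b in zip(edges, edges[1:]))

# Toy model: uniform expected signal, 10 events expected over [0, 1]
cum = lambda x: 10.0 * x
gap = maximum_gap([0.1, 0.2, 0.7], cum, 0.0, 1.0)
```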

Relevance:

10.00%

Publisher:

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of different data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations.

The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine is avoided. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
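At its core, the segmentation step described above groups connected grid cells exceeding a threshold into labelled features. A toy 2D stand-in for the thesis' 3D algorithm (the real tool adds tracking of genesis, lysis, merging and splitting, plus safeguards against under- and over-segmentation):

```python
from collections import deque

def segment(field, threshold):
    """Label connected regions where field >= threshold (4-connectivity).

    field: 2D list of floats. Returns a 2D list of integer labels
    (0 = background). A toy 2D stand-in for a 3D feature detector.
    """
    rows, cols = len(field), len(field[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if field[r][c] >= threshold and labels[r][c] == 0:
                current += 1                      # start a new feature
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and field[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

field = [[0.0, 2.0, 0.0],
         [0.0, 2.5, 0.0],
         [3.0, 0.0, 1.8]]
labels = segment(field, threshold=1.5)
```

Tracking then amounts to matching labelled features between consecutive time steps; events such as merging appear when several features at one step overlap a single feature at the next.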

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis is to study the feasibility of an analysis of ttH associated production of the Higgs boson with a top-quark pair in the CMS experiment, and to evaluate the functionality and characteristics of the next generation of toolkits for distributed analysis at CMS (CRAB version 3) for performing such an analysis. In the field of top-quark physics, ttH production is particularly interesting, above all because it represents the only opportunity to probe the t-H vertex directly without making assumptions about possible contributions from physics beyond the Standard Model. Preparation for this analysis is crucial at this time, before the start of LHC Run 2 in 2015. To be ready for such a study, the technical implications of performing a complete analysis in a distributed computing environment such as the Grid should not be underestimated. For this reason, an analysis of the CRAB3 tool itself (now available in a pre-production version) and a direct performance comparison with CRAB2 are presented and discussed. Suggestions and advice for an analysis team that may eventually be involved in this study are also collected and documented. Chapter 1 introduces high-energy physics at the LHC and the CMS experiment. Chapter 2 discusses the CMS computing model and the Grid distributed-analysis system. Chapter 3 briefly presents the physics of the top quark and of the Higgs boson. Chapter 4 is devoted to the preparation of the analysis from the point of view of the Grid tools (CRAB3 vs CRAB2). Chapter 5 presents and discusses a feasibility study for an analysis of the ttH channel in terms of selection efficiency.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation was conducted within the Language Toolkit project, which aims to bring together the worlds of work and university. In particular, it consists of the translation into English of documents commissioned by the Italian company TR Turoni, and its primary purpose is to demonstrate that, in the field of translation for companies, existing translation support tools and software can optimise and facilitate the translation process. The work consists of five chapters. The first introduces the Language Toolkit project, the TR Turoni company and its relationship with the CERMAC export consortium. After outlining the current state of company internationalisation, the importance of professional translators in enhancing the competitiveness of companies entering new international markets is highlighted. Chapter two provides an overview of the texts to be translated, focusing on their textual function and typology and on their addressees. After that, manual translation and the main software developed specifically for translators are described, with a focus on computer-assisted translation (CAT) and machine translation (MT). The third chapter presents the source texts and the corresponding translations. Chapter four is dedicated to the analysis of the translation process. The first two texts were translated manually, with the support of a purpose-built specialized corpus. The following two documents were translated with the software SDL Trados Studio 2011 and its applications. The last texts were submitted to the Google Translate service and to a process of pre- and post-editing. Finally, in chapter five conclusions are drawn about the main limitations and potential of the different translation techniques. In addition, the importance of an integrated use of all available instruments is underlined.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is part of the Language Toolkit project, a collaboration between the School of Foreign Languages and Literature, Interpreting and Translation of the University of Bologna, Forlì campus, and the Chamber of Commerce of Forlì-Cesena. The project aims to create an exchange between translation students and companies that want to pursue a process of internationalization. The purpose of this dissertation is to demonstrate the benefits that translation systems can bring to businesses. In particular, it consists of the translation into English of documents supplied by the Italian company Technologica S.r.l. and the creation of linguistic resources that can be integrated into computer-assisted translation (CAT) software, in order to optimize the translation process. The latter is claimed to be a priority with respect to the actual translation products (the target texts), since the analysis conducted on the source texts highlighted that the company could streamline and optimize its English-language communication through the use of open-source CAT tools such as OmegaT. The work consists of five chapters. The first introduces the Language Toolkit project, the company (Technologica S.r.l.) and its products. The second chapter provides some considerations about technical translation, its features and some misconceptions about it. The difference between technical translation and scientific translation is then clarified, and an overview is offered of translation aids such as those used for computer-assisted translation, machine translation, termbases and translation memories. The third chapter contains the analysis of the texts commissioned by Technologica S.r.l. and their categorization. The fourth chapter describes the translation process, with particular attention to terminology extraction and the creation of a bilingual glossary based on a specialized corpus. The glossary was integrated into the OmegaT software in order to facilitate the translation process, both for the present task and for future applications. The memory deriving from the translation represents a sort of hybrid resource between a translation memory and a glossary. This was found to be the most appropriate format, given the specific nature of the texts to be translated. Finally, in chapter five conclusions are offered about the importance of language training within a company environment, the potential of translation aids and the benefits they would bring to a company wishing to internationalize.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is part of the Language Toolkit project, born from the collaboration between the Chamber of Commerce of Forlì-Cesena and the School of Languages and Literature, Translation and Interpreting of Forlì, with the aim of bringing the university world closer to the world of work. In particular, the thesis is the result of a collaboration between the candidate and Graziani Packaging, a leading Italian company in the fruit-and-vegetable and industrial packaging sector, and consists of the revision of a brochure, the localization of the website www.graziani.com and the translation of a PowerPoint presentation, all from Italian into German. The work is organized in three chapters. The first, a theoretical study of localization, analyses the acronym GILT in detail, briefly outlines the main stages in the birth and development of the localization industry, and examines the linguistic, cultural and technical characteristics of website localization. The second chapter is devoted to the concept of quality in translation and to the topic of revision. In particular, the first part analyses professional quality standards, the quality criteria proposed by Scarpa (2008) and the classification of translation errors developed by Mossop (2001), while the last part deals with the revision process in practice. Finally, the third chapter analyses the three source texts supplied by the company (brochure, website and PowerPoint presentation), and presents and comments on the revision, localization and translation work, with particular emphasis on the most interesting linguistic, cultural and technical aspects that characterized it. The thesis closes with a terminological glossary containing the key terms, in Italian and German, of the fruit-and-vegetable and industrial packaging domain, identified in the course of the revision, localization and translation work.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this dissertation is to provide a translation into English of the Notes to the Consolidated Financial Statements of MNLG S.r.l., holding company of the Italian Sorma Group. This translation work is one example of the technical material produced in accordance with the Language Toolkit project, set up by the Chamber of Commerce of Forlì-Cesena to support the internationalization of companies established in the territory. This initiative represented a unique opportunity for me to put into practice the knowledge and abilities acquired in the translation field during my years at university. It also allowed me to give a concrete purpose to my dissertation, namely to provide a technical document translated into a foreign language. By making its Consolidated Financial Statements readily available in English, MNLG S.r.l. can in fact increase the number of potential investors and guarantee more transparent financial information to its shareholders. This translation work is divided into six chapters: the first describes the project, its main objectives and the ways in which it was developed. The second chapter deals with the notion of Consolidated Financial Statements and presents the accounting documents of which they are made up, as well as the norms according to which they are prepared. The third chapter focuses on the translation procedure applied, and especially on the documentation process, analysing the differences between the International Accounting Standards and the accounting standards used in Italy. The fourth chapter describes the translation resources built for the translation of this specific document. The fifth chapter includes the English version of the Notes to the Consolidated Financial Statements and, to conclude, the sixth chapter analyses the difficulties encountered in translating and the strategies adopted to overcome them.

Relevance:

10.00%

Publisher:

Abstract:

This Master's thesis was written within the framework of the Language Toolkit project, launched by the Department of Translation and Interpreting of the University of Bologna in collaboration with the Chamber of Commerce of Forlì, which among other things involves carrying out localization work for companies of the province of Forlì-Cesena. In my case, the company was Remedia, which commissioned me to localize parts of its website (www.remediaerbe.it) into German. Specifically, this concerned the descriptions of the main products of the herbal remedies section, where the advertising function and the comprehensibility of the web content played a special role. At the company's express request, however, the emotional and evocative language of the source texts was to be retained in the translation. This led to a stylistically faithful translation that was not effective enough as an advertising text, so it had to be improved above all in terms of structure and concision. The first chapter of this thesis introduces the company Remedia, its range of natural products and the localization project. The second chapter provides an introduction to localization and discusses the role of the translator in this industry. It also addresses web usability in connection with the arrangement of content on a website, and describes the herbal remedies section of the Remedia website. The third chapter deals with the characteristics of advertising texts in business-to-customer communication, with particular attention to the factors affecting the comprehensibility of web texts, such as layout, text design and concision. The fourth chapter contains the complete translation corresponding to the localized website. Finally, the fifth chapter presents the phases of the localization process and discusses the most important improvements made in revising the stylistically faithful translation.

Relevance:

10.00%

Publisher:

Abstract:

This work is part of the Language Toolkit project, carried out by the Chamber of Commerce of Forlì-Cesena and the School of Languages, Literature, Translation and Interpreting of Forlì. The project involves the School's students and the small and medium-sized enterprises of the province, and has two main objectives: creating opportunities for contact between the academic and professional worlds, and fostering the internationalization of local companies. The partner company was Pieri s.r.l. of Pievesestina di Cesena, active in the packaging and handling of palletized loads. The main task was to systematize the company's terminology from a trilingual (Italian - English - French) perspective, starting from an analysis of the company documentation. Two groups of end users were identified: the company's employees and the external translators it occasionally employs. Given the very different needs of the two groups, two termbases with distinct characteristics were built, each designed to be maximally functional for its purpose. After a brief outline of the starting situation, focused on the project, the company and the working materials (chapter 1), the theoretical foundations are provided (chapter 2) that guided the practical work of documentation and terminology extraction (chapter 3). The two terminology databases are then presented, with the rationale for differentiating the term records according to the target users (chapter 4). Finally, an attempt is made to evaluate the resources built by applying them in practice in two tasks: the analysis of the French version of the instruction manual of one of the Pieri machines, and the translation into French of some commercial offers, carried out using the computer-assisted translation program SDL Trados Studio (chapter 5).

Relevance:

10.00%

Publisher:

Abstract:

The CMS experiment at the LHC collected huge amounts of data during Run 1, and is exploiting the long shutdown (LS1) to evolve its computing system. Among the possible improvements, there are wide margins for optimization in the use of storage at the Tier-2 computing centres, which in the Worldwide LHC Computing Grid (WLCG) constitute the core of the resources devoted to distributed analysis on the Grid. This thesis presents a study of the popularity of CMS data in distributed Grid analysis at the Tier-2s. The goal of the work is to equip the CMS computing system with a tool for systematically evaluating the amount of disk space that is written but never accessed at the Tier-2 centres, contributing to the construction of an advanced dynamic data-management system able to adapt elastically to varying operating conditions - removing unnecessary data replicas or adding replicas of the most "popular" data - and thus, ultimately, to increase the overall analysis throughput. Chapter 1 provides an overview of the CMS experiment at the LHC. Chapter 2 describes the CMS Computing Model in general terms, focusing mainly on data management and the related infrastructure. Chapter 3 describes the CMS Popularity Service, giving an overview of the data-popularity services already present in CMS before the start of this work. Chapter 4 describes the architecture of the toolkit developed for this thesis, laying the groundwork for the following chapter. Chapter 5 presents and discusses the data-popularity studies conducted on the data collected through the infrastructure developed earlier. Appendix A collects two examples of the code created to manage the toolkit, through which the data are collected and processed.
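The bookkeeping described above, flagging disk space that is written but never accessed, can be sketched as a simple aggregation over an access log. Dataset names and sizes below are invented for illustration and do not correspond to real CMS data:

```python
# Hypothetical sketch: given the datasets resident at a Tier-2 and a log
# of dataset accesses, report how much disk space is occupied by replicas
# that were never accessed. All names and sizes are invented.

def unaccessed_space(resident_sizes, access_log):
    """resident_sizes: {dataset_name: size_in_TB};
    access_log: iterable of accessed dataset names.
    Returns (total_unaccessed_TB, sorted list of unaccessed datasets)."""
    accessed = set(access_log)
    cold = {ds: size for ds, size in resident_sizes.items()
            if ds not in accessed}
    return sum(cold.values()), sorted(cold)

# Illustrative inventory and log:
sizes = {"/Dataset/A": 12.0, "/Dataset/B": 3.5, "/Dataset/C": 7.0}
log = ["/Dataset/A", "/Dataset/A"]
total, cold = unaccessed_space(sizes, log)
```

A dynamic data-management system of the kind the thesis envisages would use such a report to schedule replica deletions, freeing space for additional replicas of popular data.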