955 results for metadati, CMS, Drupal
Abstract:
At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events with Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has addressed the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of the current computing capacity will soon be reached. One of the keys to overcoming these challenges in the next decade, also in light of the budget constraints of the various national funding agencies, is to use the available computing resources as efficiently as possible. This work aims to develop and evaluate tools that improve the understanding of how both production and analysis data are monitored in CMS. For this reason the work consists of two parts. The first, concerning distributed analysis, is the development of a tool that quickly analyses the log files produced by completed job submissions, so that at the next submission the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A noteworthy aspect of these improvements is the possibility of avoiding a high level of data aggregation at an early stage, collecting monitoring data at a fine granularity that still allows later reprocessing and on-demand aggregation.
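As an illustration of the first part, the minimal sketch below parses hypothetical per-job log records from completed submissions and produces a per-user resource summary that could guide the next submission. The record format, the field names, and the summarize_job_logs helper are assumptions made for illustration, not the tool developed in the thesis.

```python
import csv
from collections import defaultdict

def summarize_job_logs(path):
    """Aggregate completed-job records into per-user resource summaries.

    Assumes (hypothetically) a CSV log with columns:
    user, job_id, wall_time_s, cpu_time_s, max_rss_mb.
    """
    totals = defaultdict(lambda: {"jobs": 0, "wall": 0.0, "cpu": 0.0, "rss": 0.0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["user"]]
            t["jobs"] += 1
            t["wall"] += float(row["wall_time_s"])
            t["cpu"] += float(row["cpu_time_s"])
            t["rss"] = max(t["rss"], float(row["max_rss_mb"]))
    # Derive simple indicators a user could act on at the next submission.
    return {
        user: {
            "jobs": t["jobs"],
            "cpu_efficiency": t["cpu"] / t["wall"] if t["wall"] else 0.0,
            "peak_memory_mb": t["rss"],
        }
        for user, t in totals.items()
    }

if __name__ == "__main__":
    for user, stats in summarize_job_logs("finished_jobs.csv").items():
        print(user, stats)
```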
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of the common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with the traditional numerical solvers typically used in fire field modelling applications. In an example discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, achieved through the use of artificial intelligence techniques. It is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability, since acceptable convergence is obtained within each time step, unlike in some of the comparison simulations.
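A minimal sketch of the kind of rule-based relaxation control described above, assuming a simplified solver interface. The SolverState fields, the rule thresholds and the adjustment factors are illustrative assumptions, not the SMARTFIRE implementation.

```python
from dataclasses import dataclass

@dataclass
class SolverState:
    """Blackboard-style snapshot of the solution process (illustrative)."""
    residual: float            # current residual of a solved variable
    previous_residual: float   # residual from the previous sweep
    relaxation: float          # current under-relaxation factor

def apply_rules(state: SolverState) -> SolverState:
    """Production-rule style control of under-relaxation (illustrative rules)."""
    if state.residual > state.previous_residual * 1.2:
        # Residuals growing: tighten under-relaxation to stabilise the solution.
        state.relaxation = max(0.1, state.relaxation * 0.8)
    elif state.residual < state.previous_residual * 0.5:
        # Converging quickly: loosen under-relaxation to reduce run-time.
        state.relaxation = min(1.0, state.relaxation * 1.1)
    return state

# Example: one control step taken between solver sweeps.
state = SolverState(residual=3.2e-3, previous_residual=2.1e-3, relaxation=0.7)
state = apply_rules(state)
print(f"new relaxation factor: {state.relaxation:.2f}")
```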
Abstract:
Presented is the measurement of a rare Standard Model process, pp → W±γγ, in the leptonic decay channels of the W±. The measurement is made with 19.4 fb⁻¹ of 8 TeV data collected in 2012 by the CMS experiment. The measured cross section is consistent with the Standard Model prediction and has a significance of 2.9σ. Limits are placed on dimension-8 Effective Field Theory operators describing anomalous Quartic Gauge Couplings. The analysis is particularly sensitive to the fT,0 coupling, and a 95% confidence limit is placed at −35.9 < fT,0/Λ⁴ < 36.7 TeV⁻⁴. Studies of the pp → Zγγ process are also presented. The Zγγ signal is in good agreement with the Standard Model and has a significance of 5.9σ.
Abstract:
By adopting Bills 33 and 34, in 2006 and 2009 respectively, the Québec government created new private organisations delivering specialised care, namely the specialised medical centres. In doing so, it regulated their practice, in particular with the aim of ensuring a satisfactory level of quality and safety of the care delivered there. The author analyses the various existing mechanisms for ensuring the quality and safety of the care offered in specialised medical centres, in order to determine whether the legislator's objective is met. She sets out the specific mechanisms provided in the Loi sur les services de santé et services sociaux that apply to specialised medical centres and play a role in maintaining the quality and safety of services, as well as indirect mechanisms that have a bearing on this, such as economic incentives and liability claims. She then turns to the processes arising from professional regulation. She concludes that two mechanisms are missing if the legislator's objective is to be met and, on that basis, proposes possible solutions.
Abstract:
The software development process defines a sequence of activities applied in the creation of a software product or application. Among the activities carried out within the software development life cycle are requirements capture, analysis, design, implementation, testing, documentation, deployment and maintenance. This final degree project proposes developing a web application from its earliest stages to its last ones, showing how each stage applies to the example of a real project. In our case, the software development consists of creating a web application for a podiatry clinic, in which we have a client, in this case the owner of the clinic, who requests functionality and needs the software to meet his needs. The website will serve to promote the clinic's services and will include a photo gallery, a contact form, a large number of information pages, a geolocation map showing the clinic's location, navigation menus, a site map, a search function and other features typical of any website. In addition, the application must meet certain usability requirements and be navigable on mobile devices, i.e. responsive. The application will be built with the Drupal content management system, a tool widely used today for creating and managing web applications, with which all the functionality requested by our client can be implemented.
Abstract:
The differential cross section for tt̄ pair production is measured using data collected in 2012 by the CMS experiment in proton-proton collisions at a centre-of-mass energy of 8 TeV. The measurement is performed on events passing a set of selections applied to improve the signal-to-background ratio. In particular, in the all-hadronic channel, at least six jets are required in the final state of the tt̄ decay, at least two of which originate from b quarks. Once a sufficiently pure event sample is obtained, a kinematic fit is performed, which consists in minimising a chi-square function in which the invariant mass associated with the top quarks is among the free parameters. Requiring chi-square < 10, the corresponding distributions are reconstructed for the candidate events, for the signal, obtained from simulated events, and for the background, modelled by vetoing b-tagged jets in the final state of the tt̄ decay. From these distributions, a likelihood fit yields the signal and background fractions present in the data. It is then possible to build a comparison histogram of the candidate events against the sum of signal and background for the invariant mass associated with the top quarks. Considering the range of values in which the signal-to-background ratio is best, similar comparison histograms can be obtained for the transverse momentum of the top quark and for the invariant mass and rapidity of the tt̄ system. Finally, the differential cross section is measured from the distributions of these variables after subtracting the background from the data.
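The extraction of the signal and background fractions described above can be illustrated with a minimal binned maximum-likelihood template fit. The histogram contents, the binning and the use of scipy.optimize are assumptions made for illustration, not the actual analysis code.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative templates for the top-quark invariant-mass distribution
# (arbitrary bin contents, normalised to unit area below).
signal_template = np.array([5., 20., 60., 90., 60., 25., 10.])
background_template = np.array([40., 35., 30., 28., 25., 22., 20.])
data = np.array([30., 40., 70., 95., 70., 35., 25.])

sig_pdf = signal_template / signal_template.sum()
bkg_pdf = background_template / background_template.sum()
n_total = data.sum()

def neg_log_likelihood(params):
    """Poisson negative log-likelihood for a two-template fit."""
    f_sig = params[0]  # signal fraction, the only free parameter here
    expected = n_total * (f_sig * sig_pdf + (1.0 - f_sig) * bkg_pdf)
    return np.sum(expected - data * np.log(expected))

result = minimize(neg_log_likelihood, x0=[0.5], bounds=[(0.0, 1.0)])
print(f"fitted signal fraction: {result.x[0]:.3f}")
```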
Abstract:
This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to predict future data access patterns - i.e. the so-called dataset "popularity" of the CMS datasets on the Grid - with focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and the applications they submit every day. These applications (or "jobs") access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how these data are accessed, in terms of data types, hosting Tiers, and different time periods, gives precious insight into storage occupancy over time and into the different access patterns, and ultimately allows suggested actions to be extracted from this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power for future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
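A minimal sketch of the kind of supervised classification described above, assuming a tabular set of per-dataset features (e.g. recent accesses, distinct users, dataset size) and a binary "popular in the following week" label. The feature names, the toy numbers and the choice of scikit-learn's RandomForestClassifier are illustrative assumptions, not the pipeline used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Illustrative per-dataset features: [accesses last week, distinct users, size in TB]
X = np.array([
    [120, 15, 2.0],
    [3, 1, 0.5],
    [450, 60, 5.0],
    [0, 0, 1.2],
    [80, 10, 3.1],
    [7, 2, 0.8],
    [300, 40, 4.5],
    [1, 1, 2.2],
])
# Label: 1 if the dataset was heavily accessed ("popular") in the following week.
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Evaluate the popularity prediction on the held-out datasets.
print(classification_report(y_test, clf.predict(X_test)))
```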
Abstract:
The tt̄ production cross section is measured with the CMS detector in the all-jets channel in pp collisions at a centre-of-mass energy of 13 TeV. The analysis is based on the study of tt̄ events in the boosted topology, namely events in which the decay products of the top quark have a large Lorentz boost and are thus reconstructed in the detector as a single, wide jet. The data sample used in this analysis corresponds to an integrated luminosity of 2.53 fb⁻¹. The inclusive cross section is found to be σ(tt̄) = 727 ± 46 (stat.) +115/−112 (syst.) ± 20 (lumi.) pb, a value consistent with the theoretical predictions. The differential, detector-level cross section is measured as a function of the transverse momentum of the leading jet and compared to the QCD theoretical predictions. Finally, the differential, parton-level cross section, measured as a function of the transverse momentum of the leading parton and extrapolated to the full phase space, is reported and compared to the QCD predictions.
Abstract:
Registration fees for this workshop are being met by ARCS. There is no cost to attend; however, space is limited.

The Australian Research Collaboration Service (ARCS) has been supporting a wide range of collaboration services and tools which allow researchers, groups and research communities to share ideas and collaborate across organisational boundaries.

This workshop will give an introduction to a number of web-based and real-time collaboration tools and services which researchers may find useful for day-to-day collaboration with members of a research team located within an institution or across institutions. Attendees will be shown how a number of these tools work, with strong emphasis placed on how they can help facilitate communication and collaboration. Attendees will have the opportunity to try out a number of examples themselves, and to discuss with the workshop staff how their own use cases could benefit from the tools and services that can be provided.

Outline: A hands-on introduction will be given to a number of services which ARCS is now operating and/or supporting, such as:
* EVO – a video conferencing environment, particularly suited to desktop or low-bandwidth applications.
* AccessGrid – an open-source video conferencing and collaboration toolkit, well suited to room-to-room meetings.
* Sakai – an online collaboration and learning environment that supports teaching and learning, ad hoc group collaboration, portfolios and research collaboration.
* Plone and Drupal – ready-to-run content management systems for managing web content, ideal for project groups, communities, web sites, extranets and intranets.
* Wikis – a way to easily create, edit and link pages together to build collaborative websites.
Abstract:
This paper presents a brief analysis of Seoul trans-youth's search for identity through urban social networking, arguing that technological, socio-cultural and environmental (urban) contexts frame how mobility and ubiquity are (re)created in Seoul. The paper is empirically based on fieldwork conducted in Seoul, South Korea, from 2007 to 2008 as part of a research project on the mobile play culture of Seoul trans-youth (a term that will be explained in detail in the following section). Shared Visual Ethnography (SVE) was used as the research method, which involved the sharing of visual ethnographic data created by the participants. More specifically, the participants were asked to take photos, which were then shared and discussed with other participants and the researcher on the photo-sharing service Flickr. The research also involved a questionnaire and daily activity diaries, as well as interviews. A total of 44 Korean trans-youths – 23 females and 21 males – participated in interviews and photo-sharing. The paper draws specifically on the qualitative data from individual and/or group interviews, the total duration of which was 2–2.5 hours for each participant.