972 results for CMS, DT, HL-LHC, Phase-2
Abstract:
Background and aims: Advances in modern medicine have led to improved outcomes after stroke, yet an increased treatment burden has been placed on patients. Treatment burden is the workload of health care for people with chronic illness and the impact that this has on functioning and well-being. Those with comorbidities are likely to be particularly burdened. Excessive treatment burden can negatively affect outcomes. Individuals are likely to differ in their ability to manage health problems and follow treatments, defined as patient capacity. The aim of this thesis was to explore the experience of treatment burden for people who have had a stroke and the factors that influence patient capacity.

Methods: There were four phases of research. 1) A systematic review of the qualitative literature that explored the experience of treatment burden for those with stroke. Data were analysed using framework synthesis, underpinned by Normalisation Process Theory (NPT). 2) A cross-sectional study of 1,424,378 participants aged >18 years, demographically representative of the Scottish population. Binary logistic regression was used to analyse the relationship between stroke and the presence of comorbidities and prescribed medications. 3) Interviews with twenty-nine individuals with stroke, fifteen analysed by framework analysis underpinned by NPT and fourteen by thematic analysis. The experience of treatment burden was explored in depth, along with factors that influence patient capacity. 4) Integration of findings in order to create a conceptual model of treatment burden and patient capacity in stroke.

Results: Phase 1) A taxonomy of treatment burden in stroke was created. The following broad areas of treatment burden were identified: making sense of stroke management and planning care; interacting with others, including health professionals, family and other stroke patients; enacting management strategies; and reflecting on management. Phase 2) 35,690 people (2.5%) had a diagnosis of stroke and, of the 39 comorbidities examined, 35 were significantly more common in those with stroke. The proportion of those with stroke who had >1 additional morbidity present (94.2%) was almost twice that of controls (48%) (odds ratio (OR) adjusted for age, gender and socioeconomic deprivation 5.18; 95% confidence interval (CI) 4.95-5.43), and 34.5% had 4-6 comorbidities compared with 7.2% of controls (OR 8.59; 95% CI 8.17-9.04). In the stroke group, 12.6% of people had a record of >11 repeat prescriptions compared with only 1.5% of the control group (OR adjusted for age, gender, deprivation and morbidity count 15.84; 95% CI 14.86-16.88). Phase 3) The taxonomy of treatment burden from Phase 1 was verified and expanded. Additionally, treatment burdens were identified as arising from either the workload of healthcare or the endurance of care deficiencies. A taxonomy of patient capacity was created. Six factors were identified that influence patient capacity: personal attributes and skills; physical and cognitive abilities; support network; financial status; life workload; and environment. A conceptual model of treatment burden was created. Healthcare workload and the presence of care deficiencies can influence and be influenced by patient capacity. The quality and configuration of health and social care services influence healthcare workload, care deficiencies and patient capacity.
Conclusions: This thesis provides important insights into the considerable treatment burden experienced by people who have had a stroke and the factors that affect their capacity to manage their health. Multimorbidity and polypharmacy are common in those with stroke and are present at high levels. The findings have important implications for the design of clinical guidelines and healthcare delivery: for example, co-ordination of care should be improved, shared decision-making enhanced, and patients better supported following discharge from hospital.
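As a rough illustration of the adjusted odds ratios reported above, the sketch below shows how such an estimate and its 95% confidence interval could be obtained with binary logistic regression in Python. This is not the thesis code: the file and column names (cohort.csv, has_comorbidity, stroke, age, sex, simd_quintile) are hypothetical.

```python
# Minimal sketch of an adjusted odds ratio from binary logistic regression.
# File and column names are hypothetical, not taken from the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # one row per participant (hypothetical extract)

# Outcome: presence of additional morbidity; exposure: stroke diagnosis,
# adjusted for age, sex and socioeconomic deprivation (SIMD quintile).
result = smf.logit(
    "has_comorbidity ~ stroke + age + C(sex) + C(simd_quintile)", data=df
).fit()

odds_ratio = np.exp(result.params["stroke"])          # adjusted OR for stroke
ci_low, ci_high = np.exp(result.conf_int().loc["stroke"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```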
Abstract:
In high-energy hadron collisions, the production at parton level of heavy-flavour quarks (charm and bottom) is described by perturbative Quantum Chromodynamics (pQCD) calculations, given the hard scale set by the quark masses. However, in hadron-hadron collisions, predicting the heavy-flavour hadrons eventually produced requires knowledge of the parton distribution functions as well as an accurate description of the hadronisation process. The latter is taken into account via the fragmentation functions measured at e$^+$e$^-$ colliders or in ep collisions, but several observations in LHC Run 1 and Run 2 data challenged this picture. In this dissertation, I studied charm hadronisation in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ALICE experiment at the LHC, making use of the large data sample collected during LHC Run 2. Heavy-flavour production in this collision system is discussed, together with the various hadronisation models implemented in commonly used event generators, which attempt to reproduce the experimental data, including the unexpected enhancement of charm-baryon production observed at the LHC. The role of multiple parton interactions (MPI), and how they affect the total charm production as a function of multiplicity, is also presented. The ALICE apparatus is described before moving to the experimental results, which concern the measurement of the relative production rates of the charm hadrons $\Sigma_c^{0,++}$ and $\Lambda_c^+$; these allow the hadronisation mechanisms of charm quarks to be studied and provide constraints on the different hadronisation models. Furthermore, the analysis of D mesons ($D^{0}$, $D^{+}$ and $D^{*+}$) as a function of charged-particle multiplicity and spherocity is shown, investigating the role of MPI. This research is relevant both in its own right and for the mission of the ALICE experiment at the LHC, which is devoted to the study of the Quark-Gluon Plasma.
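For reference, the (unweighted) transverse spherocity commonly used in such multiplicity-differential analyses is defined as

$$ S_O = \frac{\pi^2}{4}\,\min_{\hat{n}}\left(\frac{\sum_i |\vec{p}_{\mathrm{T},i} \times \hat{n}|}{\sum_i p_{\mathrm{T},i}}\right)^{2}, $$

where the minimisation runs over unit vectors $\hat{n}$ in the transverse plane: $S_O \to 0$ selects jet-like ("pencil-like") events, while $S_O \to 1$ corresponds to isotropic events. Whether the weighted or unweighted variant is adopted in the analysis is not specified in this abstract.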
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The RPC Detector Control System (RCS) is the main subject of this PhD work. The project, involving the Lappeenranta University of Technology, Warsaw University and INFN Naples, aims to integrate the different subsystems for the RPC detector and its trigger chain in order to develop a common framework to control and monitor the different parts. Over the last three years I have been strongly involved in the hardware and software development, construction and commissioning of this project, as main responsible and coordinator. The CMS Resistive Plate Chamber (RPC) system consisted of 912 double-gap chambers at its start-up in mid-2008. Continuous control and monitoring of the detector, the trigger and all the ancillary sub-systems (high voltage, low voltage, environment, gas, and cooling) are required to achieve the operational stability and reliability of such a large and complex detector and trigger system. The role of the RPC Detector Control System is to monitor the detector conditions and performance, to control and monitor all subsystems related to the RPC and their electronics, and to store all the information in a dedicated database, called the Condition DB. The RPC DCS therefore has to ensure the safe and correct operation of the sub-detectors during the entire CMS lifetime (more than 10 years), detect abnormal and harmful situations, and take protective and automatic actions to minimise consequential damage. The analysis of the requirements and project challenges, the architecture design and its development, as well as the calibration and commissioning phases, represent the main tasks of the work developed for this PhD thesis. Different technologies, middleware and solutions have been studied and adopted in the design and development of the different components, and a major challenge was the integration of these different parts with each other and into the general CMS control system and data acquisition framework. Finally, the RCS installation and commissioning phase, as well as its performance and the first results obtained during the last three years of CMS cosmic runs, are presented.
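The monitor-archive-protect pattern described above can be summarised with the illustrative sketch below. It is not the actual RCS implementation (which is built on an industrial SCADA framework); the channel names, limits and helper functions are hypothetical placeholders.

```python
# Illustrative monitoring loop: read monitored values, archive them to a
# conditions store, and take a protective action on abnormal readings.
# All names and limits below are hypothetical.
import time

LIMITS = {"min_vmon": 8500.0,   # volts: below this the gap is off the working point
          "max_imon": 20.0}     # microamps: above this the current is abnormal

def read_channel(channel):
    """Placeholder for the hardware read-out of (voltage, current)."""
    raise NotImplementedError

def archive(channel, values):
    """Placeholder for writing monitored values to the conditions database."""
    pass

def switch_off(channel):
    """Placeholder for the protective action (ramp the channel down)."""
    print(f"protective action: switching off {channel}")

def monitor(channels, period_s=10.0):
    while True:
        for channel in channels:
            vmon, imon = read_channel(channel)
            archive(channel, {"vmon": vmon, "imon": imon})
            # Abnormal condition: over-current or voltage drop below the working point.
            if imon > LIMITS["max_imon"] or vmon < LIMITS["min_vmon"]:
                switch_off(channel)   # minimise consequential damage
        time.sleep(period_s)
```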
Abstract:
The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to $10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ ($10^{27}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystals electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the $4\pi$ solid angle. Forward sampling calorimeters extend the pseudo-rapidity coverage to high values ($|\eta| \le 5$), assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m and a total weight of 12500 t.
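For reference, the pseudorapidity quoted in the coverage statement above is defined from the polar angle $\theta$ measured with respect to the beam axis as

$$ \eta = -\ln\!\left[\tan\left(\frac{\theta}{2}\right)\right], $$

so that $|\eta| \le 5$ corresponds to polar angles down to about $0.77^{\circ}$ from the beam line.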
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The first LHC pp collisions at centre-of-mass energies of 0.9 and 2.36 TeV were recorded by the CMS detector in December 2009. The trajectories of charged particles produced in the collisions were reconstructed using the all-silicon Tracker and their momenta were measured in the 3.8 T axial magnetic field. Results from the Tracker commissioning are presented, including studies of timing, efficiency, signal-to-noise, resolution, and ionization energy loss. Reconstructed tracks are used to benchmark the performance in terms of track and vertex resolutions, reconstruction of decays, estimation of ionization energy loss, as well as identification of photon conversions, nuclear interactions, and heavy-flavour decays.
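As an illustration of how the ionization energy loss can be estimated from the reconstructed tracks, one widely used family of estimators combines the per-hit charge measurements $c_i$ (normalised to path length) through a generalised power mean; the harmonic-squared variant reads

$$ I_h = \left(\frac{1}{N}\sum_{i=1}^{N} c_i^{-2}\right)^{-1/2}, $$

which suppresses the high-charge Landau tail. The specific estimator used for the measurements summarised above is not detailed in this abstract.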
Abstract:
The CMS Level-1 trigger was used to select cosmic ray muons and LHC beam events during data-taking runs in 2008, and to estimate the level of detector noise. This paper describes the trigger components used, the algorithms that were executed, and the trigger synchronisation. Using data from extended cosmic ray runs, the muon, electron/photon, and jet triggers have been validated, and their performance evaluated. Efficiencies were found to be high, resolutions were found to be good, and rates were as expected.
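For context, the trigger efficiencies quoted above are conventionally defined, for a given reference selection, as

$$ \varepsilon = \frac{N_{\text{triggered}\,\wedge\,\text{selected}}}{N_{\text{selected}}}, \qquad \sigma_{\varepsilon} \approx \sqrt{\frac{\varepsilon\,(1-\varepsilon)}{N_{\text{selected}}}}, $$

where the binomial uncertainty is shown only as a simple approximation; the exact reference samples and interval construction used in the paper are not reproduced here.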
Abstract:
The CMS Hadron Calorimeter in the barrel, endcap and forward regions is fully commissioned. Cosmic ray data were taken with and without magnetic field at the surface hall and, after installation in the experimental hall, a hundred meters underground. Various measurements were also performed during the few days of beam in the LHC in September 2008. Calibration parameters were extracted, and the energy response of the HCAL determined from test beam data was checked.
Abstract:
This paper discusses the design and performance of the time measurement technique and of the synchronization systems of the CMS hadron calorimeter. Time measurement performance results are presented from test beam data taken in the years 2004 and 2006. For hadronic showers of energy greater than 100 GeV, the timing resolution is measured to be about 1.2 ns. Time synchronization and out-of-time background rejection results are presented from the Cosmic Run At Four Tesla and LHC beam runs taken in the Autumn of 2008. The inter-channel synchronization is measured to be within 2 ns.
Abstract:
The results of searches for new resonances decaying to a pair of massive vector bosons (WW, WZ, ZZ) are presented. All searches are performed using 5.0 fb$^{-1}$ of proton-proton collisions, at a center-of-mass energy of 7 TeV, collected by the Compact Muon Solenoid detector at the Large Hadron Collider. No significant excess compared to the standard model background expectation is observed, and upper limits at 95% confidence level are set on the production cross section times the branching fraction of hypothetical particles decaying to a pair of vector bosons. The results are interpreted in the context of several benchmark models, such as Randall-Sundrum gravitons, the Sequential Standard Model W′, and Technicolor. Graviton resonances in the Randall-Sundrum model with masses smaller than 940 GeV/$c^2$ are excluded for coupling parameter $k/\overline{M}_{\mathrm{Pl}}$ = 0.05. Bulk (ADPS) Randall-Sundrum gravitons with masses smaller than 610 GeV/$c^2$ are excluded for $k/\overline{M}_{\mathrm{Pl}}$ = 0.05. Sequential Standard Model W′ bosons with masses smaller than 1143 GeV/$c^2$ are excluded, as are $\rho_{\mathrm{TC}}$ resonances in the 167-687 GeV/$c^2$ mass range, in Low Scale Technicolor models with $M(\pi_{\mathrm{TC}}) = \tfrac{3}{4} M(\rho_{\mathrm{TC}}) - 25$ GeV/$c^2$.
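As a point of reference for how such limits constrain a model, an observed 95% CL upper limit $N_{95}$ on the number of signal events in a given channel corresponds, in a simple counting approximation, to

$$ \sigma_{95} \times \mathcal{B}(X \to VV) = \frac{N_{95}}{\varepsilon\,\mathcal{L}_{\mathrm{int}}}, \qquad \mathcal{L}_{\mathrm{int}} = 5.0\ \mathrm{fb}^{-1}, $$

where $\varepsilon$ is the total signal efficiency; the published limits use the full statistical treatment of the analysis rather than this schematic form. For the Low Scale Technicolor interpretation, the quoted relation implies, for example, $M(\pi_{\mathrm{TC}}) = \tfrac{3}{4}\times 400 - 25 = 275$ GeV/$c^2$ for $M(\rho_{\mathrm{TC}}) = 400$ GeV/$c^2$.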
Abstract:
In particle physics, performing data analysis requires a large amount of computing and storage capacity. The LHC Computing Grid is a computing infrastructure on a global scale and, at the same time, a set of services developed by a large community of physicists and computer scientists, distributed across computing centres all over the world. This infrastructure has proven its value in the analysis of the data collected during LHC Run 1, playing a fundamental role in the discovery of the Higgs boson. Today, Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by many scientific communities. Given the technical requirements of LHC Run 2 (and beyond), the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The aim of this thesis is to test a Cloud infrastructure and compare its performance with the LHC Computing Grid. Chapter 1 gives a general account of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 covers computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my comparative analysis of Grid and Cloud performance.
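As an illustration of the kind of comparison carried out in Chapter 4, the sketch below times a CPU-bound kernel on a worker node; the same probe can be run on Grid and Cloud resources and the results compared as events processed per second. It is a generic benchmark written for this note, not the toolkit used in the thesis, and the workload is a placeholder.

```python
# Generic CPU throughput probe: run the same kernel on a Grid worker node and
# on a Cloud virtual machine, then compare events processed per second.
import math
import time

def fake_event(seed: int) -> float:
    """Placeholder per-event workload (stands in for real reconstruction code)."""
    x = float(seed % 1000) + 1.0
    for _ in range(2000):
        x = math.sqrt(x * x + 1.0)   # arbitrary floating-point work
    return x

def throughput(n_events: int = 20000) -> float:
    start = time.perf_counter()
    for i in range(n_events):
        fake_event(i)
    elapsed = time.perf_counter() - start
    return n_events / elapsed        # events per second

if __name__ == "__main__":
    print(f"throughput: {throughput():.0f} events/s")
```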
Abstract:
The performance of the H → ZZ* → 4l analysis is studied in the context of the High Luminosity upgrade of the LHC collider, with the CMS detector. The high luminosity (up to $5 \times 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$) of the accelerator poses very challenging experimental conditions. In particular, the number of overlapping events per bunch crossing will increase to 140. To cope with this difficult environment, the CMS detector will be upgraded in two stages: Phase-I and Phase-II. The tools used in the analysis are the CMS Full Simulation and the fast parametrized Delphes simulation. A validation of Delphes with respect to the Full Simulation is performed, using reference Phase-I detector samples. Delphes is then used to simulate the Phase-II detector response. The Phase-II configuration is compared with the Phase-I detector and with the same Phase-I detector affected by aging processes, both modeled with the Full Simulation framework. Conclusions on these three scenarios are drawn: the degradation in performance observed with the "aged" scenario shows that a major upgrade of the detector is mandatory. The specific upgrade configuration studied makes it possible to maintain the same performance as in Phase-I and, in the case of the four-muon channel, even to exceed it.
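The 140 overlapping events per bunch crossing quoted above follow directly from the instantaneous luminosity. Assuming, for illustration, an inelastic pp cross section of about 85 mb and roughly 2800 colliding bunch pairs at a revolution frequency of 11.245 kHz,

$$ \langle PU \rangle \simeq \frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{n_b\, f_{\mathrm{rev}}} = \frac{5\times10^{34}\,\mathrm{cm^{-2}s^{-1}} \times 85\,\mathrm{mb}}{2800 \times 11.245\,\mathrm{kHz}} \approx 135, $$

of the same order as the quoted 140; the exact figure depends on the assumed cross section and filling scheme.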
Abstract:
The CMS experiment at the LHC collected huge amounts of data during Run 1 and is using the shutdown period (LS1) to evolve its computing system. Among the possible improvements to the system, there is ample room for optimisation in the use of storage at the Tier-2 computing centres, which, within the Worldwide LHC Computing Grid (WLCG), represent the core of the resources dedicated to distributed analysis on the Grid. This thesis presents a study of the popularity of CMS data in distributed Grid analysis at the Tier-2s. The goal of the work is to equip the CMS computing system with a tool to systematically evaluate the amount of disk space written but not accessed at the Tier-2 centres, contributing to the construction of an advanced dynamic data-management system able to adapt elastically to different operating conditions (removing unnecessary data replicas or adding replicas of the most "popular" data) and thus, ultimately, to increase the overall analysis throughput. Chapter 1 provides an overview of the CMS experiment at the LHC. Chapter 2 describes the CMS Computing Model in general terms, focusing mainly on data management and the related infrastructures. Chapter 3 describes the CMS Popularity Service, giving an overview of the data-popularity services already present in CMS before the start of this work. Chapter 4 describes the architecture of the toolkit developed for this thesis, laying the groundwork for the following chapter. Chapter 5 presents and discusses the data-popularity studies performed on the data collected through the infrastructure developed earlier. Appendix A collects two examples of the code written to manage the toolkit through which the data are collected and processed.
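A minimal sketch of the kind of popularity metric discussed in Chapters 4-5 is shown below: it aggregates dataset access records and flags replicas that have not been accessed within a given window. The log format, field names and threshold are hypothetical and are not taken from the toolkit described in the thesis.

```python
# Aggregate dataset accesses from a simple CSV log and flag "cold" replicas,
# i.e. datasets resident on disk with no access in the last N days at a site.
# Hypothetical log format: dataset,site,timestamp (ISO 8601), one row per access.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

def cold_replicas(log_path: str, resident: set, days: int = 90):
    last_access = defaultdict(lambda: datetime.min)  # (dataset, site) -> last access
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["dataset"], row["site"])
            ts = datetime.fromisoformat(row["timestamp"])
            if ts > last_access[key]:
                last_access[key] = ts
    cutoff = datetime.now() - timedelta(days=days)
    # A replica is "cold" if it is on disk but has no access record after the cutoff.
    return [key for key in resident if last_access[key] < cutoff]

# Example (hypothetical dataset and site names):
# cold = cold_replicas("accesses.csv", {("/Dataset/A", "T2_IT_Legnaro")})
```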
Abstract:
Solid-phase microextraction, using on-line bis(trimethylsilyl)trifluoroacetamide derivatisation, gas chromatography, and mass spectrometry, was evaluated for the quantification of 3-chloro-4-(dichloromethyl)-5-hydroxy-2(5H)-furanone (MX) in water samples. Fibres encompassing a wide range of polarities were used with headspace and direct-immersion sampling. For the immersion procedure, various parameters affecting MX extraction, including pH, salinity, temperature, and extraction time, were evaluated. The optimised method (polyacrylate fibre; 20% Na₂SO₄; pH 2.0; 60 min; 20 °C) was applied to chlorinated reservoir water samples, either natural or spiked with MX (50 ng L⁻¹ and 100 ng L⁻¹). The recovery of MX ranged from 44 to 72%. Quantification of MX in water samples was performed using an external standard and the selected ion monitoring mode. A correlation coefficient of 0.98, a relative standard deviation of 5%, a limit of detection of 30 ng L⁻¹ and a limit of quantification of 50 ng L⁻¹ were obtained from the calibration curve.
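As an illustration of how such figures of merit can be derived from a calibration curve, the sketch below fits a linear external-standard calibration and applies the common 3.3·s/slope and 10·s/slope criteria for LOD and LOQ. The concentrations and responses are invented, and the criterion actually used in the study (which reports LOD 30 ng L⁻¹ and LOQ 50 ng L⁻¹) may differ.

```python
# Linear external-standard calibration with LOD/LOQ from the residual standard
# deviation of the fit. All numbers below are invented for illustration.
import numpy as np

conc = np.array([25, 50, 100, 200, 400], dtype=float)      # ng/L, spiked standards
area = np.array([110, 230, 470, 930, 1850], dtype=float)   # peak area (SIM mode)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s_res = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))    # residual std deviation

r = np.corrcoef(conc, area)[0, 1]                          # correlation coefficient
lod = 3.3 * s_res / slope                                  # ng/L
loq = 10.0 * s_res / slope                                 # ng/L
print(f"r = {r:.3f}, LOD = {lod:.0f} ng/L, LOQ = {loq:.0f} ng/L")
```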