967 results for CMS
Abstract:
Cas ligand with multiple Src homology (SH) 3 domains (CMS) is a ubiquitously expressed signal transduction molecule that interacts with the focal adhesion protein p130Cas. CMS contains three SH3 domains in its NH2 terminus and proline-rich sequences in its central region. The latter sequences mediate binding to the SH3 domains of p130Cas, Src-family kinases, the p85 subunit of phosphatidylinositol 3-kinase, and Grb2. The COOH-terminal region contains putative actin binding sites and a coiled-coil domain that mediates homodimerization of CMS. CMS is a cytoplasmic protein that colocalizes with F-actin and p130Cas to membrane ruffles and leading edges of cells. Ectopic expression of CMS in COS-7 cells resulted in alterations in the arrangement of the actin cytoskeleton: we observed a diffuse distribution of actin in small dots and reduced actin fiber formation. Altogether, these features suggest that CMS functions as a scaffolding molecule with a specialized role in the regulation of the actin cytoskeleton.
Abstract:
Description based on: Dec. 1982; title from cover.
Abstract:
The proliferation of course management systems (CMS) in the last decade has stimulated educators to establish novel active e-learning practices. Only a few of these practices, however, have been systematically described and published as pedagogic patterns. The lack of formal patterns is an obstacle to the systematic reuse of beneficial active e-learning experiences. This paper aims to partially fill the void by offering a collection of active e-learning patterns derived from our continuous course design experience in standard CMS environments, such as Moodle and Blackboard. Our technical focus is on active e-learning patterns that can boost student interest in computing-related fields and increase student enrolment in computing-related courses. Members of the international e-learning community can benefit from these patterns by applying them in the design of new CMS-based courses – in computing and other technical fields.
Abstract:
At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events through Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of current computing capacity will soon be reached. One of the keys to overcoming these challenges in the next decade, also in light of the financial constraints of the various national funding agencies, is to make efficient use of the available computing resources. This work aims to develop and evaluate tools that improve the understanding of how both production and analysis data are monitored in CMS. The work therefore comprises two parts. The first, concerning distributed analysis, is the development of a tool that quickly analyses the log files of completed job submissions so that, at the next submission, the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A noteworthy aspect of these improvements is the possibility of avoiding a high level of data aggregation already at an early stage, and of collecting monitoring data at a high granularity that still allows later reprocessing and on-demand aggregation.
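The first tool described above can be sketched as a small log-file summariser. This is a purely illustrative sketch: the log format, field names, and reported metrics are invented for the example, not taken from the actual CMS tooling.

```python
import re

# Hypothetical log line format: "ExitCode=<code> PeakRSS=<MB>MB"
LINE = re.compile(r"ExitCode=(\d+)\s+PeakRSS=(\d+)MB")

def summarise(log_lines):
    """Scan finished-job logs and report failure rate and peak memory,
    so the next submission can request resources more accurately."""
    codes, peaks = [], []
    for line in log_lines:
        m = LINE.search(line)
        if m:
            codes.append(int(m.group(1)))
            peaks.append(int(m.group(2)))
    if not codes:
        return None
    failed = sum(1 for c in codes if c != 0) / len(codes)
    return {"jobs": len(codes), "failure_rate": failed, "peak_rss_mb": max(peaks)}
```

In practice such a summary would feed back into the resource requests (memory, expected runtime) of the following submission.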
Abstract:
This paper describes two new techniques designed to enhance the performance of fire field modelling software: "group solvers" and automated dynamic control of the solution process, both currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivative of the common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling; its purpose is to reduce the computational overheads associated with the traditional numerical solvers typically used in fire field modelling applications. In an example discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique, automated dynamic control of the solution process, is achieved through the use of artificial intelligence techniques and is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
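The rule-based relaxation control described above can be illustrated with a toy iterative solver. This is a minimal sketch under stated assumptions: a plain Jacobi iteration and an invented two-rule policy (speed up when the residual falls, back off when it rises) stand in for the actual SMARTFIRE solver and production rule engine.

```python
def solve(A, b, relax=0.5, tol=1e-8, max_iter=500):
    """Jacobi iteration with rule-based adaptation of the relaxation factor."""
    n = len(b)
    x = [0.0] * n
    prev_res = float("inf")
    res = prev_res
    for _ in range(max_iter):
        # one unrelaxed Jacobi sweep
        x_jac = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # under-relaxed update of the solution
        x = [x[i] + relax * (x_jac[i] - x[i]) for i in range(n)]
        res = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(n)))
                  for i in range(n))
        if res < tol:
            break
        # illustrative "production rules" monitoring the residual trend
        if res < prev_res:
            relax = min(1.0, relax * 1.05)   # converging: relax toward 1
        else:
            relax = max(0.1, relax * 0.5)    # diverging: back off
        prev_res = res
    return x, res
```

The same monitor-and-adjust loop generalises to any iterative solver where an over-aggressive relaxation factor can stall or destabilise convergence.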
Abstract:
We present a measurement of a rare Standard Model process, pp → W±γγ, in the leptonic decays of the W±. The measurement is made with 19.4 fb−1 of 8 TeV data collected in 2012 by the CMS experiment. The measured cross section is consistent with the Standard Model prediction and has a significance of 2.9σ. Limits are placed on dimension-8 effective field theory operators of anomalous quartic gauge couplings. The analysis is particularly sensitive to the fT,0 coupling, and a 95% confidence limit is placed at −35.9 < fT,0/Λ4 < 36.7 TeV−4. Studies of the pp → Zγγ process are also presented. The Zγγ signal is in good agreement with the Standard Model and has a significance of 5.9σ.
Abstract:
By adopting Bills 33 and 34, in 2006 and 2009 respectively, the Government of Québec created new private organisations delivering specialised care, namely the specialised medical centres. In doing so, it regulated their practice, notably with the aim of ensuring a satisfactory level of quality and safety in the care they provide. The author analyses the various existing mechanisms for ensuring the quality and safety of care offered in specialised medical centres, in order to determine whether the legislator's objective is met. She thus sets out the specific mechanisms provided for in the Loi sur les services de santé et services sociaux that apply to specialised medical centres and play a role in maintaining the quality and safety of services, as well as indirect mechanisms that have a bearing on this, such as economic incentives and liability claims. She then turns to the processes arising from professional regulation. She concludes that two mechanisms are missing for the legislator's objective to be met and proposes possible solutions on that basis.
Abstract:
The differential production cross section of t/t pairs is measured using data collected in 2012 by the CMS experiment in proton-proton collisions at a centre-of-mass energy of 8 TeV. The measurement is performed on events passing a series of selections applied to improve the signal-to-background ratio. In particular, in the all-hadronic channel, at least six jets are required in the final state of the t/t decay, of which at least two must originate from b quarks. Once a sufficiently pure event sample is obtained, a kinematic fit is performed, which consists of minimising a chi-square function in which the invariant mass associated with the top quarks is among the free parameters; the corresponding distributions, requiring chi-square < 10, are reconstructed for the candidate events, for the signal, obtained from simulated events, and for the background, modelled by vetoing b-tagged jets in the final state of the t/t decay. From these distributions, a likelihood fit is used to deduce the signal and background fractions in the events. It is then possible to build a comparison histogram between the candidate events and the signal-plus-background sum for the invariant mass associated with the top quarks. Considering the range of values in which the signal-to-background ratio is best, similar comparison histograms can also be obtained for the transverse momentum of the top quark and for the invariant mass and rapidity of the t/t system. Finally, the differential cross section is measured from the distributions of these variables after subtracting the background from the events.
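The kinematic-fit step above can be sketched as a one-parameter chi-square minimisation: the fitted top mass is varied and the event is kept only if the minimum chi-square is below 10. The scan granularity, mass window, and the 15 GeV resolution are hypothetical illustrations, not the analysis' actual parameterisation.

```python
def chi2(m_top, m_cand1, m_cand2, sigma=15.0):
    """Chi-square comparing both reconstructed top-candidate masses (GeV)
    to a common fitted top mass, with an assumed resolution sigma."""
    return ((m_cand1 - m_top) / sigma) ** 2 + ((m_cand2 - m_top) / sigma) ** 2

def fit_event(m_cand1, m_cand2, lo=100.0, hi=250.0, step=0.1):
    """Brute-force scan over the free top-mass parameter."""
    best_m, best_c = None, float("inf")
    m = lo
    while m <= hi:
        c = chi2(m, m_cand1, m_cand2)
        if c < best_c:
            best_m, best_c = m, c
        m += step
    passed = best_c < 10.0   # selection used to build the mass distributions
    return best_m, best_c, passed
```

A real kinematic fit would constrain many more quantities (jet energies, W masses) with a proper minimiser, but the selection logic is the same: events with a poor best-fit chi-square are rejected.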
Abstract:
This thesis presents a study of Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. The study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS to the exploitation of a supervised Machine Learning classification system to set up machinery able to predict future data access patterns - i.e. the so-called "popularity" of CMS datasets on the Grid - with a focus on specific data types. All CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centres (Tiers), and the distributed analysis system in particular sustains hundreds of users and applications submitted every day. These applications (or "jobs") access different data types hosted on disk storage systems at a large set of WLCG Tiers. A detailed study of how these data are accessed, in terms of data types, hosting Tiers, and different time periods, provides valuable insight into storage occupancy over time and into the different access patterns, and ultimately allows suggested actions to be extracted from this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power over future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
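The supervised-classification idea above can be reduced to a minimal, self-contained sketch: predict whether a dataset will be "popular" (accessed again) from simple features of its past accesses. The features and the plain hand-rolled logistic-regression learner are illustrative assumptions, not the thesis' actual pipeline.

```python
import math

def train(samples, labels, lr=0.1, epochs=2000):
    """Logistic regression via stochastic gradient descent.
    samples: list of feature lists; labels: 0/1 popularity flags."""
    w = [0.0] * (len(samples[0]) + 1)          # last entry is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                          # gradient of the log-loss
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w

def predict(w, x):
    """True if the dataset is classified as likely to be accessed again."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Toy, invented features: [accesses last week, weeks since creation], normalised.
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = [1, 1, 0, 0]
```

Predictions from such a classifier would then drive the suggested actions mentioned above, e.g. flagging never-to-be-reaccessed replicas for clean-up.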
Abstract:
The t/t production cross section is measured with the CMS detector in the all-jets channel in $pp$ collisions at a centre-of-mass energy of 13 TeV. The analysis is based on the study of t/t events in the boosted topology, namely events in which the decay products of the top quark have a high Lorentz boost and are thus reconstructed in the detector as a single, wide jet. The data sample used in this analysis corresponds to an integrated luminosity of 2.53 fb−1. The inclusive cross section is found to be σ(t/t) = 727 ± 46 (stat.) +115/−112 (syst.) ± 20 (lumi.) pb, a value consistent with the theoretical predictions. The differential, detector-level cross section is measured as a function of the transverse momentum of the leading jet and compared to the QCD theoretical predictions. Finally, the differential, parton-level cross section is reported, measured as a function of the transverse momentum of the leading parton, extrapolated to the full phase space and compared to the QCD predictions.
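As a worked check of the quoted uncertainties, the three components can be combined in quadrature. This assumes the sources are uncorrelated and symmetrises the asymmetric systematic, a common convention rather than the analysis' own prescription.

```python
import math

# uncertainties on the inclusive cross section, in pb
stat, syst_up, syst_down, lumi = 46.0, 115.0, 112.0, 20.0

syst = 0.5 * (syst_up + syst_down)          # symmetrise +115/-112
total = math.sqrt(stat**2 + syst**2 + lumi**2)
# the combination is dominated by the systematic component
```

The combined uncertainty comes out to roughly 124 pb, i.e. about 17% of the 727 pb central value, showing that the measurement is systematics-limited.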
Abstract:
The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particle masses. However, some aspects of this theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with the production of new particles at an energy scale higher than the current experimental limits. The search for additional Higgs bosons is, in fact, motivated by theoretical extensions of the SM including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. This search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of Machine and Deep Learning techniques is a fundamental element in the discrimination between signal and background simulated events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate over the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV range.
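The parameterised-NN idea above can be sketched in a few lines: instead of one network per mass hypothesis, a single network receives the hypothesised mass as an extra input feature, so one model covers the whole 130-1000 GeV range. The tiny one-hidden-layer architecture, the scaling, and the placeholder weights are invented for illustration; they are not the trained pNN.

```python
import math

def forward(weights, event_features, mass_hypothesis):
    """Forward pass of a one-hidden-layer network; the mass hypothesis
    is appended to the event features as an extra input."""
    x = list(event_features) + [mass_hypothesis / 1000.0]  # crude scaling
    W1, b1, W2, b2 = weights
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    z = sum(w * h for w, h in zip(W2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))   # signal probability

# placeholder weights for a 2-feature event (plus the mass input)
W1 = [[0.5, -0.2, 0.3], [0.1, 0.4, -0.5]]
b1 = [0.0, 0.1]
W2 = [1.0, -1.0]
b2 = 0.0
weights = (W1, b1, W2, b2)
```

The practical gain is that a single set of weights is trained once on events labelled with their generated mass, and at evaluation time the same network is simply queried at each mass hypothesis being tested.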